Identifying failure in a tree network of a parallel computer
Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.
2010-08-24
Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and testing individually potential problem nodes and links to potential problem nodes.
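The select-measure-compare loop in this abstract can be sketched as follows. The test-value formula, the function names, and the subset representation are illustrative assumptions, not the patent's actual implementation.

```python
def current_test_value(io_node_perf, test_node_perfs, predetermined_io_perf):
    # Assumed formula: aggregate test-node performance scaled by the ratio
    # of measured to predetermined I/O-node performance.
    return sum(test_node_perfs) * (io_node_perf / predetermined_io_perf)

def identify_problem_candidates(candidate_subsets, io_node_perf,
                                predetermined_io_perf, threshold):
    """Try subsets of compute nodes in turn. Per the abstract: if the test
    value is below the threshold, select another subset; if not, the nodes
    of that subset (and their links) become individual test candidates."""
    for subset in candidate_subsets:          # each is {node_name: measured_perf}
        value = current_test_value(io_node_perf, list(subset.values()),
                                   predetermined_io_perf)
        if value >= threshold:                # "not below" the threshold
            return list(subset)               # potential problem nodes
    return []                                 # no subset crossed the threshold
```

In use, `candidate_subsets` would be produced by the processing set's node-selection logic; here it is simply a list of measurement dictionaries.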
Locating hardware faults in a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-04-13
Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
NASA Technical Reports Server (NTRS)
Generazio, Edward R. (Inventor)
2012-01-01
A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
The Expeditionary Test Set - A fresh approach to automatic testing
NASA Astrophysics Data System (ADS)
Williams, D. L.; Austin, W. J.
This paper discusses the key design decisions and tradeoffs leading from the conceptual stage to the production version of the Expeditionary Test Set (ETS) for the USMC. This included a ten-month feasibility study program funded by the Naval Air Systems Command which culminated in the successful demonstration of a working tester model. The demonstration of the test set was preceded by a substantial re-thinking of conventional ATE test methods. Considerable discussion is devoted to the impact of test philosophy, both on the test set design and the overall effectiveness of avionic testing. Major architectural features of the test set are presented in some detail, and the many areas which break from traditional ATE design are emphasized.
NASA Technical Reports Server (NTRS)
Hilado, C. J.; Casey, C. J.; Kourtides, D. A.; Parker, J. A.
1977-01-01
Approximately 300 materials were evaluated using a specific set of test conditions. Materials tested included wood, fibers, fabrics and synthetic polymers. Data obtained using 10 different sets of test conditions are presented.
NASA Technical Reports Server (NTRS)
Waggoner, J. T.; Phinney, D. E. (Principal Investigator)
1981-01-01
Foreign Commodity Production Forecasting testing activities through June 1981 are documented. A log of test reports is presented. Standard documentation sets are included for each test. The documentation elements presented in each set are summarized.
Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
2000-01-01
Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.
Seal material development test program
NASA Technical Reports Server (NTRS)
1971-01-01
A program designed to characterize an experimental fluoroelastomer material designated AF-E-124D is examined. Tests conducted include liquid nitrogen load compression tests, flexure tests and valve seal tests, ambient and elevated temperature compression set tests, and cleaning and flushing fluid exposure tests. The results of these tests indicate that AF-E-124D is a good choice for a cryogenic seal, since it exhibits good low temperature sealing characteristics and resistance to permanent set. The material remains an experimental fluoroelastomer; activity includes definition and control of critical processing to ensure consistent material properties. Design, fabrication, and test of this and other materials in valve and static seal applications are recommended.
Setting Standards for Minimum Competency Tests.
ERIC Educational Resources Information Center
Mehrens, William A.
Some general questions about minimum competency tests are discussed, and various methods of setting standards are reviewed with major attention devoted to those methods used for dichotomizing a continuum. Methods reviewed under the heading of Absolute Judgments of Test Content include Nedelsky's, Angoff's, Ebel's, and Jaeger's. These methods are…
Laboratory Performance Evaluation Report of SEL 421 Phasor Measurement Unit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu; Faris, Anthony J.; Martin, Kenneth E.
2007-12-01
PNNL and BPA have been in close collaboration on laboratory performance evaluation of phasor measurement units for over ten years. A series of evaluation tests are designed to confirm accuracy and determine measurement performance under a variety of conditions that may be encountered in actual use. Ultimately the testing conducted should provide parameters that can be used to adjust all measurements to a standardized basis. These tests are performed with a standard relay test set using recorded files of precisely generated test signals. The test set provides test signals at a level and in a format suitable for input to a PMU that accurately reproduces the signals in both signal amplitude and timing. Test set outputs are checked to confirm the accuracy of the output signal. The recorded signals include both current and voltage waveforms and a digital timing track used to relate the PMU measured value with the test signal. Test signals include steady-state waveforms to test amplitude, phase, and frequency accuracy, modulated signals to determine measurement and rejection bands, and step tests to determine timing and response accuracy. Additional tests are included as necessary to fully describe the PMU operation. Testing is done with a BPA phasor data concentrator (PDC) which provides communication support and monitors data input for dropouts and data errors.
Remote temperature-set-point controller
Burke, W.F.; Winiecki, A.L.
1984-10-17
An instrument is described for carrying out mechanical strain tests on metallic samples with the addition of means for varying the temperature with strain. The instrument includes opposing arms and associated equipment for holding a sample and varying the mechanical strain on the sample through a plurality of cycles of increasing and decreasing strain within predetermined limits, circuitry for producing an output signal representative of the strain during the tests, apparatus including a set point and a coil about the sample for providing a controlled temperature in the sample, and circuitry interconnected between the strain output signal and set point for varying the temperature of the sample linearly with strain during the tests.
Remote temperature-set-point controller
Burke, William F.; Winiecki, Alan L.
1986-01-01
An instrument for carrying out mechanical strain tests on metallic samples with the addition of an electrical system for varying the temperature with strain, the instrument including opposing arms and associated equipment for holding a sample and varying the mechanical strain on the sample through a plurality of cycles of increasing and decreasing strain within predetermined limits, circuitry for producing an output signal representative of the strain during the tests, apparatus including a set point and a coil about the sample for providing a controlled temperature in the sample, and circuitry interconnected between the strain output signal and set point for varying the temperature of the sample linearly with strain during the tests.
Gonzalo-Skok, Oliver; Tous-Fajardo, Julio; Valero-Campo, Carlos; Berzosa, César; Bataller, Ana Vanessa; Arjol-Serrano, José Luis; Moras, Gerard; Mendez-Villanueva, Alberto
2017-08-01
To analyze the effects of 2 different eccentric-overload training (EOT) programs, using a rotational conical pulley, on functional performance in team-sport players. A traditional movement paradigm (ie, squat) including several sets of 1 bilateral and vertical movement was compared with a novel paradigm including a different exercise in each set of unilateral and multidirectional movements. Forty-eight amateur or semiprofessional team-sport players were randomly assigned to an EOT program including either the same bilateral vertical (CBV, n = 24) movement (squat) or different unilateral multidirectional (VUMD, n = 24) movements. Training programs consisted of 6 sets of 1 exercise (CBV) or 1 set of 6 exercises (VUMD) × 6-10 repetitions with 3 min of passive recovery between sets and exercises, biweekly for 8 wk. Functional-performance assessment included several change-of-direction (COD) tests, a 25-m linear-sprint test, unilateral multidirectional jumping tests (ie, lateral, horizontal, and vertical), and a bilateral vertical-jump test. Within-group analysis showed substantial improvements in all tests in both groups, with VUMD showing more robust adaptations in pooled COD tests and lateral/horizontal jumping, whereas the opposite occurred in CBV with respect to linear sprinting and vertical jumping. Between-groups analyses showed substantially better results in lateral jumps (ES = 0.21), left-leg horizontal jump (ES = 0.35), and 10-m COD with right leg (ES = 0.42) in VUMD than in CBV. In contrast, left-leg countermovement jump (ES = 0.26) was possibly better in CBV than in VUMD. Eight weeks of EOT induced substantial improvements in functional-performance tests, although the force-vector application may play a key role to develop different and specific functional adaptations.
DOT National Transportation Integrated Search
2016-08-01
The primary objectives of this research include: performing static and dynamic load tests on newly instrumented test piles to better understand the set-up mechanism for individual soil layers, verifying or recalibrating previously developed empir...
NASA Technical Reports Server (NTRS)
Crutcher, H. L.; Falls, L. W.
1976-01-01
Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
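The univariate version of the chi-square normality test described above can be sketched in a few lines. The equal-width binning and the bin count are illustrative choices; Pearson's statistic is then compared against a chi-square critical value with (bins - 3) degrees of freedom, since two parameters are estimated from the data.

```python
import math

def normal_cdf(x, mu, sigma):
    # Normal cumulative distribution function via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def chi_square_normality_stat(data, n_bins=4):
    """Pearson chi-square statistic comparing observed bin counts with the
    counts predicted by a normal distribution fitted to the data. Larger
    values indicate a worse fit to the normal model."""
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / (n - 1))
    lo, hi = min(data), max(data)
    edges = [lo + (hi - lo) * i / n_bins for i in range(1, n_bins)]
    observed = [0] * n_bins
    for x in data:
        observed[sum(1 for e in edges if x > e)] += 1
    bounds = [-math.inf] + edges + [math.inf]
    stat = 0.0
    for k in range(n_bins):
        p = normal_cdf(bounds[k + 1], mu, sigma) - normal_cdf(bounds[k], mu, sigma)
        stat += (observed[k] - n * p) ** 2 / (n * p)
    return stat
```

The multivariate extension in the report replaces the scalar bins with regions of the multivariate normal density; this sketch covers only the univariate case.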
Application for managing model-based material properties for simulation-based engineering
Hoffman, Edward L. [Alameda, CA]
2009-03-03
An application for generating a property set associated with a constitutive model of a material includes a first program module adapted to receive test data associated with the material and to extract loading conditions from the test data. A material model driver is adapted to receive the loading conditions and a property set and operable in response to the loading conditions and the property set to generate a model response for the material. A numerical optimization module is adapted to receive the test data and the model response and operable in response to the test data and the model response to generate the property set.
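The three modules in this abstract (loading-condition extraction, material model driver, numerical optimizer) can be sketched as below. The one-parameter linear-elastic model and the grid search are placeholders, not the application's actual modules.

```python
def extract_loading_conditions(test_data):
    # First module: pull the loading conditions (strains) out of the test data.
    return [strain for strain, _stress in test_data]

def material_model_driver(loading, properties):
    # Material model driver: assumed one-parameter model, stress = E * strain.
    return [properties["E"] * strain for strain in loading]

def optimize_property_set(test_data, candidate_E_values):
    """Numerical optimization module: choose the property set whose model
    response best matches the test data (least squared error)."""
    loading = extract_loading_conditions(test_data)
    measured = [stress for _strain, stress in test_data]

    def error(E):
        response = material_model_driver(loading, {"E": E})
        return sum((r - m) ** 2 for r, m in zip(response, measured))

    return {"E": min(candidate_E_values, key=error)}
```

A real implementation would use a gradient-based or derivative-free optimizer over many constitutive parameters; the grid search keeps the module boundaries visible.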
Expanding the test set: Chemicals with potential to disrupt mammalian brain development
High-throughput test methods including molecular, cellular, and alternative species-based assays that examine critical events of normal brain development are being developed for detection of developmental neurotoxicants. As new assays are developed, a "training set" of chemicals i...
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2012 CFR
2012-07-01
... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2014 CFR
2014-07-01
... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2011 CFR
2011-07-01
... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2013 CFR
2013-07-01
... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...
Ab Initio Density Fitting: Accuracy Assessment of Auxiliary Basis Sets from Cholesky Decompositions.
Boström, Jonas; Aquilante, Francesco; Pedersen, Thomas Bondo; Lindh, Roland
2009-06-09
The accuracy of auxiliary basis sets derived by Cholesky decompositions of the electron repulsion integrals is assessed in a series of benchmarks on total ground state energies and dipole moments of a large test set of molecules. The test set includes molecules composed of atoms from the first three rows of the periodic table as well as transition metals. The accuracy of the auxiliary basis sets is tested for the 6-31G**, correlation consistent, and atomic natural orbital basis sets at the Hartree-Fock, density functional theory, and second-order Møller-Plesset levels of theory. By decreasing the decomposition threshold, a hierarchy of auxiliary basis sets is obtained with accuracies ranging from that of standard auxiliary basis sets to that of conventional integral treatments.
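The threshold-controlled decomposition underlying this hierarchy can be sketched as a pivoted Cholesky factorization. The small list-of-lists matrix is a stand-in for the real electron repulsion integral matrix; lowering the threshold yields more Cholesky vectors and hence a more accurate expansion.

```python
import math

def pivoted_cholesky(M, threshold):
    """Pivoted (incomplete) Cholesky decomposition of a symmetric positive
    semidefinite matrix M, stopping when the largest residual diagonal
    element falls below the threshold. Returns vectors v such that
    M[i][j] is approximated by sum over v of v[i] * v[j]."""
    n = len(M)
    diag = [M[i][i] for i in range(n)]       # residual diagonal
    vectors = []
    remaining = list(range(n))
    while remaining:
        p = max(remaining, key=lambda i: diag[i])   # pivot on largest residual
        if diag[p] <= threshold:
            break                            # remaining error below threshold
        d = math.sqrt(diag[p])
        col = [(M[p][i] - sum(v[p] * v[i] for v in vectors)) / d
               for i in range(n)]
        vectors.append(col)
        for i in range(n):
            diag[i] -= col[i] ** 2
        remaining.remove(p)
    return vectors
```

With threshold 0 the full Cholesky factor is recovered; loose thresholds give the compact auxiliary expansions the abstract benchmarks.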
Pharmacogenomics in diverse practice settings: implementation beyond major metropolitan areas
Dorfman, Elizabeth H; Trinidad, Susan Brown; Morales, Chelsea T; Howlett, Kevin; Burke, Wylie; Woodahl, Erica L
2015-01-01
Aim: The limited formal study of the clinical feasibility of implementing pharmacogenomic tests has thus far focused on providers at large medical centers in urban areas. Our research focuses on small metropolitan, rural and tribal practice settings. Materials & methods: We interviewed 17 healthcare providers in western Montana regarding pharmacogenomic testing. Results: Participants were optimistic about the potential of pharmacogenomic tests, but noted unique barriers in small and rural settings including cost, adherence, patient acceptability and testing timeframe. Participants in tribal settings identified heightened sensitivity to genetics and need for community leadership approval as additional considerations. Conclusion: Implementation differences in small metropolitan, rural and tribal communities may affect pharmacogenomic test adoption and utilization, potentially impacting many patients. PMID:25712186
16 CFR § 1633.2 - Definitions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... confirmation test on the mattress set it manufactures. (r) Confirmation test means a pre-market test conducted... included; examples are convertible sofa bed mattresses, corner group mattresses, day bed mattresses, roll...) This term includes any one, or any combination of the following: replacing the ticking or batting...
16 CFR § 1633.2 - Definitions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... confirmation test on the mattress set it manufactures. (r) Confirmation test means a pre-market test conducted... included; examples are convertible sofa bed mattresses, corner group mattresses, day bed mattresses, roll...) This term includes any one, or any combination of the following: replacing the ticking or batting...
16 CFR § 1633.2 - Definitions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... confirmation test on the mattress set it manufactures. (r) Confirmation test means a pre-market test conducted... included; examples are convertible sofa bed mattresses, corner group mattresses, day bed mattresses, roll...) This term includes any one, or any combination of the following: replacing the ticking or batting...
16 CFR § 1633.2 - Definitions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... confirmation test on the mattress set it manufactures. (r) Confirmation test means a pre-market test conducted... included; examples are convertible sofa bed mattresses, corner group mattresses, day bed mattresses, roll...) This term includes any one, or any combination of the following: replacing the ticking or batting...
16 CFR § 1633.2 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... confirmation test on the mattress set it manufactures. (r) Confirmation test means a pre-market test conducted... included; examples are convertible sofa bed mattresses, corner group mattresses, day bed mattresses, roll...) This term includes any one, or any combination of the following: replacing the ticking or batting...
Lauer, Michael S; Pothier, Claire E; Magid, David J; Smith, S Scott; Kattan, Michael W
2007-12-18
The exercise treadmill test is recommended for risk stratification among patients with intermediate to high pretest probability of coronary artery disease. Posttest risk stratification is based on the Duke treadmill score, which includes only functional capacity and measures of ischemia. To develop and externally validate a post-treadmill test, multivariable mortality prediction rule for adults with suspected coronary artery disease and normal electrocardiograms. Prospective cohort study conducted from September 1990 to May 2004. Exercise treadmill laboratories in a major medical center (derivation set) and a separate HMO (validation set). 33,268 patients in the derivation set and 5821 in the validation set. All patients had normal electrocardiograms and were referred for evaluation of suspected coronary artery disease. The derivation set patients were followed for a median of 6.2 years. A nomogram-illustrated model was derived on the basis of variables easily obtained in the stress laboratory, including age; sex; history of smoking, hypertension, diabetes, or typical angina; and exercise findings of functional capacity, ST-segment changes, symptoms, heart rate recovery, and frequent ventricular ectopy in recovery. The derivation data set included 1619 deaths. Although both the Duke treadmill score and our nomogram-illustrated model were significantly associated with death (P < 0.001), the nomogram was better at discrimination (concordance index for right-censored data, 0.83 vs. 0.73) and calibration. We reclassified many patients with intermediate- to high-risk Duke treadmill scores as low risk on the basis of the nomogram. The model also predicted 3-year mortality rates well in the validation set: Based on an optimal cut-point for a negative predictive value of 0.97, derivation and validation rates were, respectively, 1.7% and 2.5% below the cut-point and 25% and 29% above the cut-point. 
Blood test-based measures and left ventricular ejection fraction were not included. The nomogram can be applied only to patients with a normal electrocardiogram. Clinical utility remains to be tested. A simple nomogram based on easily obtained pretest and exercise test variables predicted all-cause mortality in adults with suspected coronary artery disease and normal electrocardiograms.
Evolution of solid rocket booster component testing
NASA Technical Reports Server (NTRS)
Lessey, Joseph A.
1989-01-01
The evolution of one of the new generation of test sets developed for the Solid Rocket Booster of the U.S. Space Transportation System is described. Requirements leading to factory checkout of the test set are explained, including the evolution from manual to semiautomated toward fully automated status. Individual improvements in the built-in test equipment, self-calibration, and software flexibility are addressed, and the insertion of fault detection to improve reliability is discussed.
ERIC Educational Resources Information Center
Hambleton, Ronald K., Ed.; Zaal, Jac N., Ed.
The 14 chapters of this book focus on the technical advances, advances in applied settings, and emerging topics in the testing field. Part 1 discusses methodological advances, Part 2 considers developments in applied settings, and Part 3 reviews emerging topics in the field of testing. Part 1 papers include: (1) "Advances in…
A Comparison of Fuzzy Models in Similarity Assessment of Misregistered Area Class Maps
NASA Astrophysics Data System (ADS)
Brown, Scott
Spatial uncertainty refers to unknown error and vagueness in geographic data. It is relevant to land change and urban growth modelers, soil and biome scientists, geological surveyors and others, who must assess thematic maps for similarity, or categorical agreement. In this paper I build upon prior map comparison research, testing the effectiveness of similarity measures on misregistered data. Though several methods compare uncertain thematic maps, few methods have been tested on misregistration. My objective is to test five map comparison methods for sensitivity to misregistration, including sub-pixel errors in both position and rotation. Methods included four fuzzy categorical models: the fuzzy kappa model, fuzzy inference, cell aggregation, and the epsilon band. The fifth method used conventional crisp classification. I applied these methods to a case study map and simulated data in two sets: a test set with misregistration error, and a control set with equivalent uniform random error. For all five methods, I used raw accuracy or the kappa statistic to measure similarity. Rough-set epsilon bands report the most similarity increase in test maps relative to control data. Conversely, the fuzzy inference model reports a decrease in test map similarity.
Routine development of objectively derived search strategies.
Hausner, Elke; Waffenschmidt, Siw; Kaiser, Thomas; Simon, Michael
2012-02-29
Over the past few years, information retrieval has become more and more professionalized, and information specialists are considered full members of a research team conducting systematic reviews. Research groups preparing systematic reviews and clinical practice guidelines have been the driving force in the development of search strategies, but open questions remain regarding the transparency of the development process and the available resources. An empirically guided approach to the development of a search strategy provides a way to increase transparency and efficiency. Our aim in this paper is to describe the empirically guided development process for search strategies as applied by the German Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, or "IQWiG"). This strategy consists of the following steps: generation of a test set, as well as the development, validation and standardized documentation of the search strategy. We illustrate our approach by means of an example, that is, a search for literature on brachytherapy in patients with prostate cancer. For this purpose, a test set was generated, including a total of 38 references from 3 systematic reviews. The development set for the generation of the strategy included 25 references. After application of textual analytic procedures, a strategy was developed that included all references in the development set. To test the search strategy on an independent set of references, the remaining 13 references in the test set (the validation set) were used. The validation set was also completely identified. Our conclusion is that an objectively derived approach similar to that used in search filter development is a feasible way to develop and validate reliable search strategies. 
Besides creating high-quality strategies, the widespread application of this approach will result in a substantial increase in the transparency of the development process of search strategies.
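The validation step described above (a strategy must retrieve every reference in a held-out validation set) can be sketched as a recall computation. The keyword-matching "strategy" here is a stand-in for a real bibliographic database query.

```python
def strategy_matches(strategy_terms, reference_text):
    # A reference is retrieved if any strategy term appears in its text.
    text = reference_text.lower()
    return any(term.lower() in text for term in strategy_terms)

def recall_on_set(strategy_terms, references):
    """Fraction of the reference set the strategy retrieves; the approach
    above aims for recall 1.0 on both development and validation sets."""
    hits = sum(1 for r in references if strategy_matches(strategy_terms, r))
    return hits / len(references)
```

In the paper's example, the 38-reference test set would be split into a development set used to build the strategy and a validation set used only for this final check.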
Beliefs about Cancer and Diet among Those Considering Genetic Testing for Colon Cancer
ERIC Educational Resources Information Center
Palmquist, Aunchalee E. L.; Upton, Rachel; Lee, Seungjin; Panter, Abby T.; Hadley, Don W.; Koehly, Laura M.
2011-01-01
Objective: To assess beliefs about the role of diet in cancer prevention among individuals considering genetic testing for Lynch Syndrome. Design: Family-centered, cascade recruitment; baseline assessment of a longitudinal study. Setting: Clinical research setting. Participants: Participants were 390 persons, ages 18 and older, including persons…
A Compendium of Recent Optocoupler Radiation Test Data
NASA Technical Reports Server (NTRS)
Label, K. A.; Kniffin, S. D.; Reed, R. A.; Kim, H. S.; Wert, J. L.; Oberg, D. L.; Normand, E.; Johnston, A. H.; Lum, G. K.; Koga, R.;
2000-01-01
We present a compendium of optocoupler radiation test data including neutron, proton and heavy ion Displacement Damage (DD), Single Event Transients (SET) and Total Ionizing Dose (TID). Proton data includes ionizing and non-ionizing damage mechanisms.
Software for Automated Testing of Mission-Control Displays
NASA Technical Reports Server (NTRS)
O'Hagan, Brian
2004-01-01
MCC Display Cert Tool is a set of software tools for automated testing of computer-terminal displays in spacecraft mission-control centers, including those of the space shuttle and the International Space Station. This software makes it possible to perform tests that are more thorough, take less time, and are less likely to lead to erroneous results, relative to tests performed manually. This software enables comparison of two sets of displays to report command and telemetry differences, generates test scripts for verifying telemetry and commands, and generates a documentary record containing display information, including version and corrective-maintenance data. At the time of reporting the information for this article, work was continuing to add a capability for validation of display parameters against a reconfiguration file.
Williamson, Joyce E.; Jarrell, Gregory J.; Clawges, Rick M.; Galloway, Joel M.; Carter, Janet M.
2000-01-01
This compact disk contains digital data produced as part of the 1:100,000-scale map products for the Black Hills Hydrology Study conducted in western South Dakota. The digital data include 28 individual Geographic Information System (GIS) data sets: data sets for the hydrogeologic unit map including all mapped hydrogeologic units within the study area (1 data set) and major geologic structure including anticlines and synclines (1 data set); data sets for potentiometric maps including the potentiometric contours for the Inyan Kara, Minnekahta, Minnelusa, Madison, and Deadwood aquifers (5 data sets), wells used as control points for each aquifer (5 data sets), and springs used as control points for the potentiometric contours (1 data set); and data sets for the structure-contour maps including the structure contours for the top of each formation that contains major aquifers (5 data sets), wells and tests holes used as control points for each formation (5 data sets), and surficial deposits (alluvium and terrace deposits) that directly overlie each of the major aquifer outcrops (5 data sets). These data sets were used to produce the maps published by the U.S. Geological Survey.
Methodology for extracting local constants from petroleum cracking flows
Chang, Shen-Lin; Lottes, Steven A.; Zhou, Chenn Q.
2000-01-01
A methodology provides for the extraction of local chemical kinetic model constants for use in a reacting flow computational fluid dynamics (CFD) computer code with chemical kinetic computations to optimize the operating conditions or design of the system, including retrofit design improvements to existing systems. The coupled CFD and kinetic computer code is used in combination with data obtained from a matrix of experimental tests to extract the kinetic constants. Local fluid dynamic effects are implicitly included in the extracted local kinetic constants for each particular application system to which the methodology is applied. The extracted local kinetic model constants work well over a fairly broad range of operating conditions for specific and complex reaction sets in specific and complex reactor systems. While disclosed in terms of use in a Fluid Catalytic Cracking (FCC) riser, the inventive methodology has application in virtually any reaction set to extract constants for any particular application and reaction set formulation. The methodology includes the steps of: (1) selecting the test data sets for various conditions; (2) establishing the general trend of the parametric effect on the measured product yields; (3) calculating product yields for the selected test conditions using coupled computational fluid dynamics and chemical kinetics; (4) adjusting the local kinetic constants to match calculated product yields with experimental data; and (5) validating the determined set of local kinetic constants by comparing the calculated results with experimental data from additional test runs at different operating conditions.
Pilot study of pharmacist-assisted delivery of pharmacogenetic testing in a primary care setting.
Haga, Susanne B; LaPointe, Nancy M Allen; Cho, Alex; Reed, Shelby D; Mills, Rachel; Moaddeb, Jivan; Ginsburg, Geoffrey S
2014-09-01
To describe the rationale and design of a pilot program to implement and evaluate pharmacogenetic (PGx) testing in a primary care setting. Several factors have impeded the uptake of PGx testing, including lack of provider knowledge and challenges with operationalizing PGx testing in a clinical practice setting. We plan to compare two strategies for the implementation of PGx testing: a pharmacist-initiated testing arm compared with a physician-initiated PGx testing arm. Providers in both groups will be required to attend an introduction to PGx seminar. Anticipated results: We anticipate that providers in the pharmacist-initiated group will be more likely to order PGx testing than providers in the physician-initiated group. Overall, we aim to generate data that will inform an effective delivery model for PGx testing and to facilitate a seamless integration of PGx testing in primary care practices.
Maternal Plasma DNA and RNA Sequencing for Prenatal Testing.
Tamminga, Saskia; van Maarle, Merel; Henneman, Lidewij; Oudejans, Cees B M; Cornel, Martina C; Sistermans, Erik A
2016-01-01
Cell-free DNA (cfDNA) testing has recently become indispensable in diagnostic testing and screening. In the prenatal setting, this type of testing is often called noninvasive prenatal testing (NIPT). With a number of techniques, using either next-generation sequencing or single nucleotide polymorphism-based approaches, fetal cfDNA in maternal plasma can be analyzed to screen for rhesus D genotype, common chromosomal aneuploidies, and increasingly for testing other conditions, including monogenic disorders. With regard to screening for common aneuploidies, challenges arise when implementing NIPT in current prenatal settings. Depending on the method used (targeted or nontargeted), chromosomal anomalies other than trisomy 21, 18, or 13 can be detected, either of fetal or maternal origin, also referred to as unsolicited or incidental findings. For various biological reasons, there is a small chance of having either a false-positive or false-negative NIPT result, or no result, also referred to as a "no-call." Both pre- and posttest counseling for NIPT should include discussing potential discrepancies. Since NIPT remains a screening test, a positive NIPT result should be confirmed by invasive diagnostic testing (either by chorionic villus biopsy or by amniocentesis). As the scope of NIPT is widening, professional guidelines need to discuss the ethics of what to offer and how to offer. In this review, we discuss the current biochemical, clinical, and ethical challenges of cfDNA testing in the prenatal setting and its future perspectives including novel applications that target RNA instead of DNA. © 2016 Elsevier Inc. All rights reserved.
Continued monitoring of instrumented pavement in Ohio
DOT National Transportation Integrated Search
2003-12-01
Performance and environmental data continued to be monitored throughout this study on the Ohio SHRP Test Road. : Response testing included three new series of controlled vehicle tests and two sets of nondestructive tests. Cracking in two : SPS-2 sect...
Continued monitoring of instrumented pavement in Ohio.
DOT National Transportation Integrated Search
2002-12-01
Performance and environmental data continued to be monitored throughout this study on the Ohio SHRP Test Road. Response testing included three new series of controlled vehicle tests and two sets of nondestructive tests. Cracking in two SPS-2 sections...
16 CFR 1500.42 - Test for eye irritants.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., including testing that does not require animals, are presented in the CPSC's animal testing policy set forth... conducted, a sequential testing strategy is recommended to reduce the number of test animals. Additionally... eye irritation. Both eyes of each animal in the test group shall be examined before testing, and only...
Utah State Office of Education Fingertip Facts, 2013-14
ERIC Educational Resources Information Center
Utah State Office of Education, 2014
2014-01-01
Fingertip Facts is a compendium of some of the most frequently requested data sets from the Utah State Office of Education. Data sets in this year's Fingertip Facts include: Core CRT Language Arts Testing, 2013; Core CRT Mathematics Testing, 2013; 2013 Public Education General Fund; 2012-13 Enrollment Demographics; Public Schools by Grade Level,…
Foreign Language Analysis and Recognition (FLARe)
2016-10-08
...Character Error Rates (CERs) were obtained with each feature set: (1) 19.2%, (2) 17.3%, and (3) 15.3%. Based on these results, a GMM-HMM speech recognition system... These systems were evaluated on the HUB4 and HKUST test partitions. Table 7 shows the CER obtained on each test set. Whereas including the HKUST data...
Ceasar, Rachel; Chang, Jamie; Zamora, Kara; Hurstak, Emily; Kushel, Margot; Miaskowski, Christine; Knight, Kelly
2016-01-01
Background Guideline recommendations to reduce prescription opioid misuse among patients with chronic non-cancer pain include the routine use of urine toxicology tests for high-risk patients. Yet little is known about how the implementation of urine toxicology tests among patients with co-occurring chronic non-cancer pain and substance use impacts primary care providers’ management of misuse. In this paper, we present clinicians’ perspectives on the benefits and challenges of implementing urine toxicology tests in the monitoring of opioid misuse and substance use in safety net healthcare settings. Methods We interviewed 23 primary care providers from six safety net healthcare settings whose patients had a diagnosis of co-occurring chronic non-cancer pain and substance use. We transcribed, coded, and analyzed interviews using grounded theory methodology. Results The benefits of implementing urine toxicology tests for primary care providers included less reliance on intuition to assess for misuse and the ability to identify unknown opioid misuse and/or substance use. The challenges of implementing urine toxicology tests included insufficient education and training about how to interpret and implement tests, and a lack of clarity on how and when to act on tests that indicated misuse and/or substance use. Conclusions These data suggest that primary care clinicians’ lack of education and training to interpret and implement urine toxicology tests may impact their management of patient opioid misuse and/or substance use. Clinicians may benefit from additional education and training about the clinical implementation and use of urine toxicology tests. Additional research is needed on how primary care providers’ implementation and use of urine toxicology tests impact chronic non-cancer pain management in primary care and safety net healthcare settings among patients with co-occurring chronic non-cancer pain and substance use. PMID:26682471
Camus, Melinda S; Flatland, Bente; Freeman, Kathleen P; Cruz Cardona, Janice A
2015-12-01
The purpose of this document is to educate providers of veterinary laboratory diagnostic testing in any setting about comparative testing. These guidelines will define, explain, and illustrate the importance of a multi-faceted laboratory quality management program which includes comparative testing. The guidelines will provide suggestions for implementation of such testing, including which samples should be tested, frequency of testing, and recommendations for result interpretation. Examples and a list of vendors and manufacturers supplying control materials and services to veterinary laboratories are also included. © 2015 American Society for Veterinary Clinical Pathology.
Measurements by a Vector Network Analyzer at 325 to 508 GHz
NASA Technical Reports Server (NTRS)
Fung, King Man; Samoska, Lorene; Chattopadhyay, Goutam; Gaier, Todd; Kangaslahti, Pekka; Pukala, David; Lau, Yuenie; Oleson, Charles; Denning, Anthony
2008-01-01
Recent experiments were performed in which return loss and insertion loss of waveguide test assemblies in the frequency range from 325 to 508 GHz were measured by use of a swept-frequency two-port vector network analyzer (VNA) test set. The experiments were part of a continuing effort to develop means of characterizing passive and active electronic components and systems operating at ever increasing frequencies. The waveguide test assemblies comprised WR-2.2 end sections collinear with WR-3.3 middle sections. The test set, assembled from commercially available components, included a 50-GHz VNA scattering-parameter test set and external signal synthesizers, augmented with recently developed frequency extenders, and further augmented with attenuators and amplifiers as needed to adjust radiofrequency and intermediate-frequency power levels between the aforementioned components. The tests included line-reflect-line calibration procedures, using WR-2.2 waveguide shims as the "line" standards and waveguide flange short circuits as the "reflect" standards. Calibrated dynamic ranges somewhat greater than 20 dB for return loss and 35 dB for insertion loss were achieved. The measurement data of the test assemblies were found to substantially agree with results of computational simulations.
Shrestha, Ram K; Clark, Hollie A; Sansom, Stephanie L; Song, Binwei; Buckendahl, Holly; Calhoun, Cindy B; Hutchinson, Angela B; Heffelfinger, James D
2008-01-01
We assessed the cost-effectiveness of determining new human immunodeficiency virus (HIV) diagnoses using rapid HIV testing performed by community-based organizations (CBOs) in Kansas City, Missouri, and Detroit, Michigan. The CBOs performed rapid HIV testing during April 2004 through March 2006. In Kansas City, testing was performed in a clinic and in outreach settings. In Detroit, testing was performed in outreach settings only. Both CBOs used mobile testing vans. Measures of effectiveness were the number of HIV tests performed and the number of people notified of new HIV diagnoses, based on rapid tests. We retrospectively collected program costs, including those for personnel, test kits, mobile vans, and facility space. The CBO in Kansas City tested a mean of 855 people a year in its clinic and 703 people a year in outreach settings. The number of people notified of new HIV diagnoses was 19 (2.2%) in the clinic and five (0.7%) in outreach settings. The CBO in Detroit tested 976 people a year in outreach settings, and the number notified of new HIV diagnoses was 15 (1.5%). In Kansas City, the cost per person notified of a new HIV diagnosis was $3,637 in the clinic and $16,985 in outreach settings. In the Detroit outreach settings, the cost per notification was $13,448. The cost of providing a new HIV diagnosis was considerably higher in the outreach settings than in the clinic. The variation can be largely explained by differences in the number of undiagnosed infections among the people tested and by the costs of purchasing and operating a mobile van.
Halpern, Yoni; Jernite, Yacine; Shapiro, Nathan I.; Nathanson, Larry A.
2017-01-01
Objective To demonstrate the incremental benefit of using free text data in addition to vital sign and demographic data to identify patients with suspected infection in the emergency department. Methods This was a retrospective, observational cohort study performed at a tertiary academic teaching hospital. All consecutive ED patient visits between 12/17/08 and 2/17/13 were included. No patients were excluded. The primary outcome measure was infection diagnosed in the emergency department defined as a patient having an infection related ED ICD-9-CM discharge diagnosis. Patients were randomly allocated to train (64%), validate (20%), and test (16%) data sets. After preprocessing the free text using bigram and negation detection, we built four models to predict infection, incrementally adding vital signs, chief complaint, and free text nursing assessment. We used two different methods to represent free text: a bag of words model and a topic model. We then used a support vector machine to build the prediction model. We calculated the area under the receiver operating characteristic curve to compare the discriminatory power of each model. Results A total of 230,936 patient visits were included in the study. Approximately 14% of patients had the primary outcome of diagnosed infection. The area under the ROC curve (AUC) for the vitals model, which used only vital signs and demographic data, was 0.67 for the training data set, 0.67 for the validation data set, and 0.67 (95% CI 0.65–0.69) for the test data set. The AUC for the chief complaint model which also included demographic and vital sign data was 0.84 for the training data set, 0.83 for the validation data set, and 0.83 (95% CI 0.81–0.84) for the test data set. The best performing methods made use of all of the free text. In particular, the AUC for the bag-of-words model was 0.89 for training data set, 0.86 for the validation data set, and 0.86 (95% CI 0.85–0.87) for the test data set. 
The AUC for the topic model was 0.86 for the training data set, 0.86 for the validation data set, and 0.85 (95% CI 0.84–0.86) for the test data set. Conclusion Compared to previous work that only used structured data such as vital signs and demographic information, utilizing free text drastically improves the discriminatory ability (increase in AUC from 0.67 to 0.86) of identifying infection. PMID:28384212
Horng, Steven; Sontag, David A; Halpern, Yoni; Jernite, Yacine; Shapiro, Nathan I; Nathanson, Larry A
2017-01-01
To demonstrate the incremental benefit of using free text data in addition to vital sign and demographic data to identify patients with suspected infection in the emergency department. This was a retrospective, observational cohort study performed at a tertiary academic teaching hospital. All consecutive ED patient visits between 12/17/08 and 2/17/13 were included. No patients were excluded. The primary outcome measure was infection diagnosed in the emergency department defined as a patient having an infection related ED ICD-9-CM discharge diagnosis. Patients were randomly allocated to train (64%), validate (20%), and test (16%) data sets. After preprocessing the free text using bigram and negation detection, we built four models to predict infection, incrementally adding vital signs, chief complaint, and free text nursing assessment. We used two different methods to represent free text: a bag of words model and a topic model. We then used a support vector machine to build the prediction model. We calculated the area under the receiver operating characteristic curve to compare the discriminatory power of each model. A total of 230,936 patient visits were included in the study. Approximately 14% of patients had the primary outcome of diagnosed infection. The area under the ROC curve (AUC) for the vitals model, which used only vital signs and demographic data, was 0.67 for the training data set, 0.67 for the validation data set, and 0.67 (95% CI 0.65-0.69) for the test data set. The AUC for the chief complaint model which also included demographic and vital sign data was 0.84 for the training data set, 0.83 for the validation data set, and 0.83 (95% CI 0.81-0.84) for the test data set. The best performing methods made use of all of the free text. In particular, the AUC for the bag-of-words model was 0.89 for training data set, 0.86 for the validation data set, and 0.86 (95% CI 0.85-0.87) for the test data set. 
The AUC for the topic model was 0.86 for the training data set, 0.86 for the validation data set, and 0.85 (95% CI 0.84-0.86) for the test data set. Compared to previous work that only used structured data such as vital signs and demographic information, utilizing free text drastically improves the discriminatory ability (increase in AUC from 0.67 to 0.86) of identifying infection.
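The free-text preprocessing step described in the record above (bigram extraction plus negation detection over nursing assessments) can be sketched in a few lines. This is a minimal illustration, not the study's actual implementation: the cue list, the fixed three-token negation window, and the `neg_` tagging scheme are all assumptions.

```python
import re

# Hypothetical negation cues and scope window -- illustrative only,
# not the cue list used in the study.
NEGATION_CUES = {"no", "denies", "without", "negative"}
WINDOW = 3  # number of tokens a cue negates

def features(text):
    """Unigram + bigram features with simple NegEx-style negation tagging:
    tokens within WINDOW positions after a cue get a 'neg_' prefix."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    tagged, scope = [], 0
    for tok in tokens:
        if tok in NEGATION_CUES:
            tagged.append(tok)
            scope = WINDOW
        elif scope > 0:
            tagged.append("neg_" + tok)
            scope -= 1
        else:
            tagged.append(tok)
    bigrams = [a + "_" + b for a, b in zip(tagged, tagged[1:])]
    return tagged + bigrams

feats = features("Denies fever, reports productive cough")
```

Feature lists built this way can be fed to any linear classifier; tagging negated mentions separately keeps phrases like "denies fever" from counting as evidence of fever.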
Lim, Cherry; Wannapinij, Prapass; White, Lisa; Day, Nicholas P J; Cooper, Ben S; Peacock, Sharon J; Limmathurotsakul, Direk
2013-01-01
Estimates of the sensitivity and specificity for new diagnostic tests based on evaluation against a known gold standard are imprecise when the accuracy of the gold standard is imperfect. Bayesian latent class models (LCMs) can be helpful under these circumstances, but the necessary analysis requires expertise in computational programming. Here, we describe open-access web-based applications that allow non-experts to apply Bayesian LCMs to their own data sets via a user-friendly interface. Applications for Bayesian LCMs were constructed on a web server using R and WinBUGS programs. The models provided (http://mice.tropmedres.ac) include two Bayesian LCMs: the two-tests in two-population model (Hui and Walter model) and the three-tests in one-population model (Walter and Irwig model). Both models are available with simplified and advanced interfaces. In the former, all settings for Bayesian statistics are fixed as defaults. Users input their data set into a table provided on the webpage. Disease prevalence and accuracy of diagnostic tests are then estimated using the Bayesian LCM, and provided on the web page within a few minutes. With the advanced interfaces, experienced researchers can modify all settings in the models as needed. These settings include correlation among diagnostic test results and prior distributions for all unknown parameters. The web pages provide worked examples with both models using the original data sets presented by Hui and Walter in 1980, and by Walter and Irwig in 1988. We also illustrate the utility of the advanced interface using the Walter and Irwig model on a data set from a recent melioidosis study. The results obtained from the web-based applications were comparable to those published previously. The newly developed web-based applications are open-access and provide an important new resource for researchers worldwide to evaluate new diagnostic tests.
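The structure of the model behind these web applications can be illustrated directly. Under the conditional-independence assumption of the basic Hui and Walter model, the expected probability of each joint outcome of two tests in one population is a prevalence-weighted mixture; this sketch (with hypothetical parameter values) shows the forward calculation that the Bayesian machinery inverts when estimating sensitivity, specificity, and prevalence without a gold standard.

```python
def cell_probs(prev, se1, sp1, se2, sp2):
    """Expected joint-outcome probabilities for two diagnostic tests in
    one population, assuming the tests are conditionally independent
    given true disease status (the basic Hui-Walter assumption)."""
    p = {}
    for t1 in (1, 0):
        for t2 in (1, 0):
            # P(outcome | diseased): Se for a positive, 1 - Se for a negative
            p_dis = (se1 if t1 else 1 - se1) * (se2 if t2 else 1 - se2)
            # P(outcome | disease-free): 1 - Sp for a positive, Sp for a negative
            p_free = ((1 - sp1) if t1 else sp1) * ((1 - sp2) if t2 else sp2)
            p[(t1, t2)] = prev * p_dis + (1 - prev) * p_free
    return p

# Hypothetical parameters: 30% prevalence, test 1 Se/Sp = 0.90/0.95,
# test 2 Se/Sp = 0.80/0.90.
probs = cell_probs(0.3, 0.9, 0.95, 0.8, 0.9)
```

With data from two populations of differing prevalence, the six unknowns (two prevalences plus Se and Sp for each test) become just identifiable from the observed cell counts, which is the key insight of the two-tests-in-two-populations design.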
A framework for the design and development of physical employment tests and standards.
Payne, W; Harvey, J
2010-07-01
Because operational tasks in the uniformed services (military, police, fire and emergency services) are physically demanding and incur the risk of injury, employment policy in these services is usually competency based and predicated on objective physical employment standards (PESs) based on physical employment tests (PETs). In this paper, a comprehensive framework for the design of PETs and PESs is presented. Three broad approaches to physical employment testing are described and compared: generic predictive testing; task-related predictive testing; task simulation testing. Techniques for the selection of a set of tests with good coverage of job requirements, including job task analysis, physical demands analysis and correlation analysis, are discussed. Regarding individual PETs, theoretical considerations including measurability, discriminating power, reliability and validity, and practical considerations, including development of protocols, resource requirements, administrative issues and safety, are considered. With regard to the setting of PESs, criterion referencing and norm referencing are discussed. STATEMENT OF RELEVANCE: This paper presents an integrated and coherent framework for the development of PESs and hence provides a much needed theoretically based but practically oriented guide for organisations seeking to establish valid and defensible PESs.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-18
... Log Sets b. Vented Hearth Products C. National Energy Savings D. Other Comments 1. Test Procedures 2... address vented gas log sets. DOE clarified its position on vented gas log sets in a document published on... vented gas log sets are included in the definition of ``vented hearth heater''; DOE has reached this...
The Language Testing Cycle: From Inception to Washback. Series S, Number 13.
ERIC Educational Resources Information Center
Wigglesworth, Gillian, Ed.; Elder, Catherine, Ed.
A selection of essays on language testing includes: "Perspectives on the Testing Cycle: Setting the Scene" (Catherine Elder, Gillian Wigglesworth); "The Politicisation of English: The Case of the STEP Test and the Chinese Students" (Lesleyanne Hawthorne); "Developing Language Tests for Specific Populations" (Rosemary…
Code of Federal Regulations, 2010 CFR
2010-01-01
... conditioning area before starting test, prototype or production identification number, and test data including.... For confirmation tests, the identification number must be that of the prototype tested. (2) Video and... prototype identification number or production lot identification number of the mattress set, date and time...
The Psychology Experiment Building Language (PEBL) and PEBL Test Battery.
Mueller, Shane T; Piper, Brian J
2014-01-30
We briefly describe the Psychology Experiment Building Language (PEBL), an open source software system for designing and running psychological experiments. We describe the PEBL Test Battery, a set of approximately 70 behavioral tests which can be freely used, shared, and modified. Included is a comprehensive set of past research upon which tests in the battery are based. We report the results of benchmark tests that establish the timing precision of PEBL. We consider alternatives to the PEBL system and battery tests. We conclude with a discussion of the ethical factors involved in the open source testing movement. Copyright © 2013 Elsevier B.V. All rights reserved.
The Psychology Experiment Building Language (PEBL) and PEBL Test Battery
Mueller, Shane T.; Piper, Brian J.
2014-01-01
Background We briefly describe the Psychology Experiment Building Language (PEBL), an open source software system for designing and running psychological experiments. New Method We describe the PEBL test battery, a set of approximately 70 behavioral tests which can be freely used, shared, and modified. Included is a comprehensive set of past research upon which tests in the battery are based. Results We report the results of benchmark tests that establish the timing precision of PEBL. Comparison with Existing Method We consider alternatives to the PEBL system and battery tests. Conclusions We conclude with a discussion of the ethical factors involved in the open source testing movement. PMID:24269254
The robustness of the horizontal gaze nystagmus test
DOT National Transportation Integrated Search
2007-09-01
Police officers follow procedures set forth in the NHTSA/IACP curriculum when they administer the Standardized Field Sobriety Tests (SFSTs) to suspected alcohol-impaired drivers. The SFSTs include Horizontal Gaze Nystagmus (HGN) test, Walk-and-Turn (...
Tillmar, Andreas O; Phillips, Chris
2017-01-01
Advances in massively parallel sequencing technology have enabled the combination of a much-expanded number of DNA markers (notably STRs and SNPs in one or combined multiplexes), with the aim of increasing the weight of evidence in forensic casework. However, when data from multiple loci on the same chromosome are used, genetic linkage can affect the final likelihood calculation. In order to study the effect of linkage for different sets of markers we developed the biostatistical tool ILIR (Impact of Linkage on forensic markers for Identity and Relationship tests). The ILIR tool can be used to study the overall impact of genetic linkage for an arbitrary set of markers used in forensic testing. Application of ILIR can be useful during marker selection and design of new marker panels, as well as being highly relevant for existing marker sets as a way to properly evaluate the effects of linkage on a case-by-case basis. ILIR, implemented via the open source platform R, includes variation and genomic position reference data for over 40 STRs and 140 SNPs, combined with the ability to include additional forensic markers of interest. The use of the software is demonstrated with examples from several different established marker sets (such as the expanded CODIS core loci) including a review of the interpretation of linked genetic data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Viscoelastic testing for hepatic surgery: a systematic review with meta-analysis-a protocol.
McCrossin, Kate Elizabeth; Bramley, David Edmund Piers; Hessian, Elizabeth; Hutcheon, Evelyn; Imberger, Georgina
2016-09-06
Viscoelastic tests, including thromboelastography (TEG) and rotational thromboelastometry (ROTEM), provide a global assessment of haemostatic function at the point of care. The use of a TEG or ROTEM system to guide blood product administration has been shown in some surgical settings to reduce transfusion requirements. The aim of this review is to evaluate all published evidence regarding viscoelastic testing in the setting of hepatic surgery. We will search MEDLINE, EMBASE and the Cochrane Central Register of Controlled Trials databases to identify randomised controlled trials examining the use of viscoelastic testing for hepatic surgery. Two reviewers will independently screen titles and abstracts of studies identified and will independently extract data. Any disagreements will be resolved by discussion with a third reviewer. A meta-analysis will be conducted if feasible. Viscoelastic devices such as TEG and ROTEM are increasingly available to clinicians as a bedside test. Patients undergoing hepatic surgery have a significant risk of blood loss and coagulopathy requiring transfusion. Theoretical benefits of use of a TEG or ROTEM system in the hepatic surgical setting include a rationalisation of blood products, a reduction in transfusion-related side effects, an improvement in patient outcomes including mortality, and a reduction in cost. This systematic review will summarise the current evidence regarding the use of viscoelastic testing for hepatic surgery. PROSPERO CRD42016036732.
Shinkins, Bethany; Yang, Yaling; Abel, Lucy; Fanshawe, Thomas R
2017-04-14
Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health-economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised, 2) which methods were used to synthesise test accuracy evidence and how did the results inform the economic model, 3) how/whether threshold effects were explored, 4) how the potential dependency between multiple tests in a pathway was accounted for, and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling was obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters but most of those that used multiple tests did not allow for dependence between test results. 7/22 tests were potentially suitable for primary care but the majority found limited evidence on test accuracy in primary care settings. 
The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests and the impact of multiple diagnostic tests.
A developmental model of recreation choice behavior
Daniel R. Williams
1985-01-01
Recreation choices are viewed as including, at least implicitly, a selection of an activity, a setting, and a set of companions. With development these three elements become increasingly differentiated from one another. The model is tested by examining the perceived similarities among a set of 15 recreation choices depicted in color slides.
A test for patterns of modularity in sequences of developmental events.
Poe, Steven
2004-08-01
This study presents a statistical test for modularity in the context of relative timing of developmental events. The test assesses whether sets of developmental events show special phylogenetic conservation of rank order. The test statistic is the correlation coefficient of developmental ranks of the N events of the hypothesized module across taxa. The null distribution is obtained by taking correlation coefficients for randomly sampled sets of N events. This test was applied to two datasets, including one where phylogenetic information was taken into account. The events of limb development in two frog species were found to behave as a module.
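The test just described is straightforward to implement. In this sketch (pure Python, with made-up rank data and names), the statistic is the mean pairwise correlation, across taxa, of the module events' developmental ranks, and the null distribution is built by sampling random event sets of the same size, as in the abstract.

```python
import random
from itertools import combinations
from statistics import mean

def pearson(x, y):
    """Pearson correlation of two equal-length rank vectors."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return cov / var

def module_stat(ranks, events):
    """Mean pairwise correlation, across taxa, of the developmental
    ranks of the chosen events (the hypothesized module)."""
    return mean(
        pearson([ranks[a][e] for e in events], [ranks[b][e] for e in events])
        for a, b in combinations(sorted(ranks), 2)
    )

def modularity_test(ranks, module, n_perm=999, seed=1):
    """One-sided permutation p-value: does the module conserve rank
    order more strongly than random same-sized sets of events?"""
    rng = random.Random(seed)
    events = sorted(next(iter(ranks.values())))
    obs = module_stat(ranks, module)
    null = [module_stat(ranks, rng.sample(events, len(module)))
            for _ in range(n_perm)]
    return obs, (1 + sum(s >= obs for s in null)) / (n_perm + 1)

# Hypothetical data: event ranks for three taxa; e1-e3 keep the same
# relative order in every taxon, so they should score as a module.
ranks = {
    "taxonA": {"e1": 1, "e2": 2, "e3": 3, "e4": 4, "e5": 5, "e6": 6},
    "taxonB": {"e1": 2, "e2": 3, "e3": 4, "e4": 1, "e5": 6, "e6": 5},
    "taxonC": {"e1": 1, "e2": 3, "e3": 5, "e4": 6, "e5": 2, "e6": 4},
}
obs, p = modularity_test(ranks, ["e1", "e2", "e3"], n_perm=199)
```

Note that this sketch ignores phylogeny; the study's second application corrects for phylogenetic non-independence, which would require weighting or transforming the taxon pairs accordingly.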
A minimal standardization setting for language mapping tests: an Italian example.
Rofes, Adrià; de Aguiar, Vânia; Miceli, Gabriele
2015-07-01
During awake surgery, picture-naming tests are administered to identify brain structures related to language function (language mapping), and to avoid iatrogenic damage. Before and after surgery, naming tests and other neuropsychological procedures aim at charting naming abilities, and at detecting which items the subject can respond to correctly. To achieve this goal, sufficiently large samples of normed and standardized stimuli must be available for preoperative and postoperative testing, and to prepare intraoperative tasks, the latter only including items named flawlessly preoperatively. To discuss design, norming and presentation of stimuli, and to describe the minimal standardization setting used to develop two sets of Italian stimuli, one for object naming and one for verb naming, respectively. The setting includes a naming study (to obtain picture-name agreement ratings), two on-line questionnaires (to acquire age-of-acquisition and imageability ratings for all test items), and the norming of other relevant language variables. The two sets of stimuli have >80 % picture-name agreement, high levels of internal consistency and reliability for imageability and age of acquisition ratings. They are normed for psycholinguistic variables known to affect lexical access and retrieval, and are validated in a clinical population. This framework can be used to increase the probability of reliably detecting language impairments before and after surgery, to prepare intraoperative tests based on sufficient knowledge of pre-surgical language abilities in each patient, and to decrease the probability of false positives during surgery. Examples of data usage are provided. Normative data can be found in the supplementary materials.
A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers
Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund; ...
2018-03-28
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1–PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data was collected by use of a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved.
Fuzzy Set Methods for Object Recognition in Space Applications
NASA Technical Reports Server (NTRS)
Keller, James M. (Editor)
1992-01-01
Progress on the following four tasks is described: (1) fuzzy set based decision methodologies; (2) membership calculation; (3) clustering methods (including derivation of pose estimation parameters), and (4) acquisition of images and testing of algorithms.
Configuration and Sizing of a Test Fixture for Panels Under Combined Loads
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.
2006-01-01
Future air and space structures are expected to utilize composite panels that are subjected to combined mechanical loads, such as bi-axial compression/tension, shear and pressure. Therefore, the ability to accurately predict the buckling and strength failures of such panels is important. While computational analysis can provide tremendous insight into panel response, experimental results are necessary to verify predicted performances of these panels to judge the accuracy of computational methods. However, application of combined loads is an extremely difficult task due to the complex test fixtures and set-up required. Presented herein is a comparison of several test set-ups capable of testing panels under combined loads. Configurations compared include a D-box, a segmented cylinder and a single panel set-up. The study primarily focuses on the preliminary sizing of a single panel test configuration capable of testing flat panels under combined in-plane mechanical loads. This single panel set-up appears to be best suited to the testing of both strength critical and buckling critical panels. Required actuator loads and strokes are provided for various square, flat panels.
Appropriate Use of Drug Testing in Clinical Addiction Medicine.
Jarvis, Margaret; Williams, Jessica; Hurford, Matthew; Lindsay, Dawn; Lincoln, Piper; Giles, Leila; Luongo, Peter; Safarian, Taleen
: Biological drug testing is a tool that provides information about an individual's recent substance use. Like any tool, its value depends on using it correctly; that is, on selecting the right test for the right person at the right time. This document is intended to clarify appropriate clinical use of drug testing in addiction medicine and aid providers in their decisions about drug testing for the identification, diagnosis, treatment, and recovery of patients with, or at risk for, addiction. The RAND Corporation (RAND)/University of California, Los Angeles (UCLA) Appropriateness Method (RAM) process for combining scientific evidence with the collective judgment of experts was used to identify appropriate clinical practices and highlight areas where research is needed. Although consensus panels and expert groups have offered guidance on the use of drug testing for patients with addiction, very few addressed considerations for patients across settings and in different levels of care. This document will focus primarily on patients in addiction treatment and recovery, where drug testing is used to assess patients for a substance use disorder, monitor the effectiveness of a treatment plan, and support recovery. Inasmuch as the scope includes the recognition of addiction, which often occurs in general healthcare settings, selected special populations at risk for addiction visiting these settings are briefly included.
16 CFR 1500.41 - Method of testing primary irritant substances.
Code of Federal Regulations, 2014 CFR
2014-01-01
... corrosivity properties of substances, including testing that does not require animals, are presented in the CPSC's animal testing policy set forth in 16 CFR 1500.232. A weight-of-evidence analysis or a validated... conducted, a sequential testing strategy is recommended to reduce the number of test animals. The method of...
A broad survey of recombination in animal mitochondria.
Piganeau, Gwenaël; Gardner, Michael; Eyre-Walker, Adam
2004-12-01
Recombination in mitochondrial DNA (mtDNA) remains a controversial topic. Here we present a survey of 279 animal mtDNA data sets, of which 12 were from asexual species. Using four separate tests, we show that there is widespread evidence of recombination; for one test as many as 14.2% of the data sets reject a model of clonal inheritance and in several data sets, including primates, the recombinants can be identified visually. We show that none of the tests give significant results for obligate clonal species (apomictic pathogens) and that the sexual species show significantly greater evidence of recombination than asexual species. For some data sets, such as Macaca nemestrina, additional data sets suggest that the recombinants are not artifacts. For others, it cannot be determined whether the recombinants are real or produced by laboratory error. Either way, the results have important implications for how mtDNA is sequenced and used.
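The abstract does not name its four tests, but one classic screen for recombination in this setting is the four-gamete test: under strictly clonal inheritance and infinite-sites mutation, no pair of biallelic sites can exhibit all four haplotype combinations. A minimal sketch (the function name and 0/1 site coding are my own):

```python
from itertools import combinations

def four_gamete_violations(seqs):
    # seqs: aligned haplotypes with each polymorphic site coded 0/1.
    # Under clonal inheritance and infinite-sites mutation, two sites can
    # show at most three of the four gametes 00, 01, 10, 11; observing all
    # four implies recombination (or recurrent mutation).
    n_sites = len(seqs[0])
    return [(i, j)
            for i, j in combinations(range(n_sites), 2)
            if len({(s[i], s[j]) for s in seqs}) == 4]
```

A pair of sites flagged by this check is a candidate recombination signal, which is one way recombinants can also "be identified visually" in an alignment.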
Development of a grinding-specific performance test set-up.
Olesen, C G; Larsen, B H; Andresen, E L; de Zee, M
2015-01-01
The aim of this study was to develop a performance test set-up for America's Cup grinders. The test set-up had to mimic the on-boat grinding activity and be capable of collecting data for analysis and evaluation of grinding performance. This study included a literature-based analysis of grinding demands and a test protocol developed to accommodate the necessary physiological loads. This study resulted in a test protocol consisting of 10 intervals of 20 revolutions each interspersed with active resting periods of 50 s. The 20 revolutions are a combination of both forward and backward grinding and an exponentially rising resistance. A custom-made grinding ergometer was developed with computer-controlled resistance and capable of collecting data during the test. The data collected can be used to find measures of grinding performance such as peak power, time to complete and the decline in repeated grinding performance.
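As a rough illustration of the protocol's load structure (10 intervals of 20 revolutions, an exponentially rising resistance, and 50 s active rests), here is a hypothetical schedule generator; `base_load` and `growth` are invented placeholder values, not figures from the study.

```python
import math

def resistance_schedule(n_revs=20, base_load=30.0, growth=0.08):
    # hypothetical exponential ramp: load at revolution r = base * e^(growth*r)
    # (base_load and growth are illustrative values only)
    return [base_load * math.exp(growth * r) for r in range(n_revs)]

def protocol(n_intervals=10, rest_s=50):
    # 10 work intervals of 20 revolutions, each followed by 50 s active rest
    return [("work", resistance_schedule()), ("rest", rest_s)] * n_intervals
```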
Spielberg, Freya; Kurth, Ann E; Severynen, Anneleen; Hsieh, Yu-Hsiang; Moring-Parris, Daniel; Mackenzie, Sara; Rothman, Richard
2011-06-01
Providers in emergency care settings (ECSs) often face barriers to expanded HIV testing. We undertook formative research to understand the potential utility of a computer tool, "CARE," to facilitate rapid HIV testing in ECSs. Computer tool usability and acceptability were assessed among 35 adult patients, and provider focus groups were held, in two ECSs in Washington State and Maryland. The computer tool was usable by patients of varying computer literacy. Patients appreciated the tool's privacy and lack of judgment and their ability to reflect on HIV risks and create risk reduction plans. Staff voiced concerns regarding ECS-based HIV testing generally, including resources for follow-up of newly diagnosed people. Computer-delivered HIV testing support was acceptable and usable among low-literacy populations in two ECSs. Such tools may help circumvent some practical barriers associated with routine HIV testing in busy settings though linkages to care will still be needed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ringgenberg, P.D.; Burris, W.J.
1988-06-28
A method is described of flow testing a formation in a wellbore, comprising: providing a testing string including at least one annulus pressure responsive tool bore closure valve; providing a packer and setting the packer in the wellbore to seal thereacross; running the testing string into the wellbore with the tool bore closure valve in an open position; stinging into the set packer with the bottom of the testing string; increasing pressure a first time in the wellbore annulus around the testing string and above the set packer without cycling the tool bore closure valve; reducing pressure in the wellbore annulus; closing the tool bore closure valve responsive to the pressure reduction; increasing pressure a second time in the wellbore annulus; reopening the tool bore closure valve responsive to the second increase; and flowing fluids from the formation through the reopened tool bore closure valve.
Handbook for Driving Knowledge Testing.
ERIC Educational Resources Information Center
Pollock, William T.; McDole, Thomas L.
Materials intended for driving knowledge test development for use by operational licensing and education agencies are presented. A pool of 1,313 multiple choice test items is included, consisting of sets of specially developed and tested items covering principles of safe driving, legal regulations, and traffic control device knowledge pertinent to…
Assessment of Accelerated Tests Compared to Beachfront Test and Proposed Evaluation Method
2009-09-03
Environmental Security Technology Certification Program (ESTCP) funded project entitled "Non-Chromate Aluminum Pretreatments" (NCAP); funding began in 2000 and ended in 2004 for Phase I. The assessment compares accelerated corrosion tests to the beachfront test using NCAP data. The data set includes 4 aluminum alloys (2024, 7075, 2219, 5083) and 9 conversion coatings.
Summary of: radiation protection in dental X-ray surgeries--still rooms for improvement.
Walker, Anne
2013-03-01
To illustrate the authors' experience in the provision of radiation protection adviser (RPA)/medical physics expert (MPE) services and critical examination/radiation quality assurance (QA) testing, to demonstrate any continuing variability of the compliance of X-ray sets with existing guidance and of compliance of dental practices with existing legislation. Data was collected from a series of critical examination and routine three-yearly radiation QA tests on 915 intra-oral X-ray sets and 124 panoramic sets. Data are the result of direct measurements on the sets, made using a traceably calibrated Unfors Xi meter. The testing covered the measurement of peak kilovoltage (kVp); filtration; timer accuracy and consistency; X-ray beam size; and radiation output, measured as the entrance surface dose in milligray (mGy) for intra-oral sets and dose-area product (DAP), measured in mGy·cm² for panoramic sets. Physical checks, including mechanical stability, were also included as part of the testing process. The Health and Safety Executive has expressed concern about the poor standards of compliance with the regulations during inspections at dental practices. Thirty-five percent of intra-oral sets exceeded the UK adult diagnostic reference level on at least one setting, as did 61% of those with child dose settings. There is a clear advantage of digital radiography and rectangular collimation in dose terms, with the mean dose from digital sets 59% that of film-based sets and a rectangular collimator 76% that of circular collimators. The data shows the unrealised potential for dose saving in many digital sets and also marked differences in dose between sets. Provision of radiation protection advice to over 150 general dental practitioners raised a number of issues on the design of surgeries with X-ray equipment and critical examination testing. There is also considerable variation in advice given on the need (or lack of need) for room shielding.
Where no radiation protection adviser (RPA) or medical physics expert (MPE) appointment has been made, there is often a very low level of compliance with legislative requirements. The active involvement of an RPA/MPE and continuing education on radiation protection issues has the potential to reduce radiation doses significantly further in many dental practices.
Radiation protection in dental X-ray surgeries--still rooms for improvement.
Hart, G; Dugdale, M
2013-03-01
To illustrate the authors' experience in the provision of radiation protection adviser (RPA)/medical physics expert (MPE) services and critical examination/radiation quality assurance (QA) testing, to demonstrate any continuing variability of the compliance of X-ray sets with existing guidance and of compliance of dental practices with existing legislation. Data was collected from a series of critical examination and routine three-yearly radiation QA tests on 915 intra-oral X-ray sets and 124 panoramic sets. Data are the result of direct measurements on the sets, made using a traceably calibrated Unfors Xi meter. The testing covered the measurement of peak kilovoltage (kVp); filtration; timer accuracy and consistency; X-ray beam size; and radiation output, measured as the entrance surface dose in milligray (mGy) for intra-oral sets and dose-area product (DAP), measured in mGy·cm² for panoramic sets. Physical checks, including mechanical stability, were also included as part of the testing process. The Health and Safety Executive has expressed concern about the poor standards of compliance with the regulations during inspections at dental practices. Thirty-five percent of intra-oral sets exceeded the UK adult diagnostic reference level on at least one setting, as did 61% of those with child dose settings. There is a clear advantage of digital radiography and rectangular collimation in dose terms, with the mean dose from digital sets 59% that of film-based sets and a rectangular collimator 76% that of circular collimators. The data shows the unrealised potential for dose saving in many digital sets and also marked differences in dose between sets. Provision of radiation protection advice to over 150 general dental practitioners raised a number of issues on the design of surgeries with X-ray equipment and critical examination testing. There is also considerable variation in advice given on the need (or lack of need) for room shielding.
Where no radiation protection adviser (RPA) or medical physics expert (MPE) appointment has been made, there is often a very low level of compliance with legislative requirements. The active involvement of an RPA/MPE and continuing education on radiation protection issues has the potential to reduce radiation doses significantly further in many dental practices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zili; Nordhaus, William
2009-03-19
Over the duration of this project, we completed the main tasks set out in the initial proposal. These tasks include: setting up the basic platform in the GAMS language for the new RICE 2007 model; testing various model structures for RICE 2007; incorporating the PPP data set in the new RICE model; and developing a gridded data set for IA modeling.
Experimental Data from the Benchmark SuperCritical Wing Wind Tunnel Test on an Oscillating Turntable
NASA Technical Reports Server (NTRS)
Heeg, Jennifer; Piatak, David J.
2013-01-01
The Benchmark SuperCritical Wing (BSCW) wind tunnel model served as a semi-blind testcase for the 2012 AIAA Aeroelastic Prediction Workshop (AePW). The BSCW was chosen as a testcase due to its geometric simplicity and flow physics complexity. The data sets examined include unforced system information and forced pitching oscillations. The aerodynamic challenges presented by this AePW testcase include a strong shock that was observed to be unsteady for even the unforced system cases, shock-induced separation and trailing edge separation. The current paper quantifies these characteristics at the AePW test condition and at a suggested benchmarking test condition. General characteristics of the model's behavior are examined for the entire available data set.
Shuttle roll-out set for 17 September 1976
NASA Technical Reports Server (NTRS)
1976-01-01
The unveiling of the first reusable space shuttle vehicle by the National Aeronautics and Space Administration is discussed. The role of orbiter 101 as a test vehicle is stressed. Topics covered include the approach and landing tests, ground vibration tests, and the crew.
Test Generators: Teacher's Tool or Teacher's Headache?
ERIC Educational Resources Information Center
Eiser, Leslie
1988-01-01
Discusses the advantages and disadvantages of test generation programs. Includes setting up, printing exams and "bells and whistles." Reviews eight computer packages for Apple and IBM personal computers. Compares features, costs, and usage. (CW)
Slavov, Svetoslav H; Stoyanova-Slavova, Iva; Mattes, William; Beger, Richard D; Brüschweiler, Beat J
2018-07-01
A grid-based, alignment-independent 3D-SDAR (three-dimensional spectral data-activity relationship) approach based on simulated 13C and 15N NMR chemical shifts augmented with through-space interatomic distances was used to model the mutagenicity of 554 primary and 419 secondary aromatic amines. A robust modeling strategy supported by extensive validation including randomized training/hold-out test set pairs, validation sets, "blind" external test sets as well as experimental validation was applied to avoid over-parameterization and build Organization for Economic Cooperation and Development (OECD 2004) compliant models. Based on an experimental validation set of 23 chemicals tested in a two-strain Salmonella typhimurium Ames assay, 3D-SDAR was able to achieve performance comparable to 5-strain (Ames) predictions by Lhasa Limited's Derek and Sarah Nexus for the same set. Furthermore, mapping of the most frequently occurring bins on the primary and secondary aromatic amine structures allowed the identification of molecular features that were associated either positively or negatively with mutagenicity. Prominent structural features found to enhance the mutagenic potential included: nitrobenzene moieties, conjugated π-systems, nitrothiophene groups, and aromatic hydroxylamine moieties. 3D-SDAR was also able to capture "true" negative contributions that are particularly difficult to detect through alternative methods. These include sulphonamide, acetamide, and other functional groups, which not only lack contributions to the overall mutagenic potential, but are known to actively lower it, if present in the chemical structures of what otherwise would be potential mutagens.
NASA Astrophysics Data System (ADS)
Hildebrandt, Mario; Kiltz, Stefan; Krapyvskyy, Dmytro; Dittmann, Jana; Vielhauer, Claus; Leich, Marcus
2011-11-01
A machine-assisted analysis of traces from crime scenes might be possible with the advent of new high-resolution non-destructive contact-less acquisition techniques for latent fingerprints. This requires reliable techniques for the automatic extraction of fingerprint features from latent and exemplar fingerprints for matching purposes using pattern recognition approaches. Therefore, we evaluate the NIST Biometric Image Software for the feature extraction and verification of contact-lessly acquired latent fingerprints to determine potential error rates. Our exemplary test setup includes 30 latent fingerprints from 5 people in two test sets that are acquired from different surfaces using a chromatic white light sensor. The first test set includes 20 fingerprints on two different surfaces. It is used to determine the feature extraction performance. The second test set includes one latent fingerprint on 10 different surfaces and an exemplar fingerprint to determine the verification performance. This utilized sensing technique does not require a physical or chemical visibility enhancement of the fingerprint residue, thus the original trace remains unaltered for further investigations. No particular feature extraction and verification techniques have been applied to such data, yet. Hence, we see the need for appropriate algorithms that are suitable to support forensic investigations.
Analysis of Multilayered Printed Circuit Boards using Computed Tomography
2014-05-01
complex PCBs that present a challenge for any testing or fault analysis. Set-to-work testing and fault analysis of any electronic circuit require...Electronic Warfare and Radar Division in December 2010. He is currently in the Electro-Optic Countermeasures Group. Samuel works on embedded system design...and software optimisation of complex electro-optical systems, including the set-to-work and characterisation of these systems. He has a Bachelor of
NASA Technical Reports Server (NTRS)
Sketoe, J. G.; Clark, Anthony
2000-01-01
This paper presents a DOD E3 program overview on integrated circuit immunity. The topics include: 1) EMI Immunity Testing; 2) Threshold Definition; 3) Bias Tee Function; 4) Bias Tee Calibration Set-Up; 5) EDM Test Figure; 6) EMI Immunity Levels; 7) NAND vs. AND Gate Immunity; 8) TTL vs. LS Immunity Levels; 9) TP vs. OC Immunity Levels; 10) 7805 Volt Reg Immunity; and 11) Seventies Chip Set. This paper is presented in viewgraph form.
Pretest and refinement of items for alcohol highway safety surveys
DOT National Transportation Integrated Search
1984-05-30
This study summarizes the procedures employed in pre-testing a set of alcohol-highway safety questionnaire items. The procedures included conducting a set of focus groups and a series of telephone interviews on several forms of the questionnaires. Th...
Robust tracking control of a magnetically suspended rigid body
NASA Technical Reports Server (NTRS)
Lim, Kyong B.; Cox, David E.
1994-01-01
This study is an application of H-infinity and micro-synthesis for designing robust tracking controllers for the Large Angle Magnetic Suspension Test Facility. The modeling, design, analysis, simulation, and testing of a control law that guarantees tracking performance under external disturbances and model uncertainties is investigated. The type of uncertainties considered and the tracking performance metric used is discussed. This study demonstrates the tradeoff between tracking performance at low frequencies and robustness at high frequencies. Two sets of controllers were designed and tested. The first set emphasized performance over robustness, while the second set traded off performance for robustness. Comparisons of simulation and test results are also included. Current simulation and experimental results indicate that reasonably good robust tracking performance can be attained for this system using multivariable robust control approach.
Cai, Bin; Dolly, Steven; Kamal, Gregory; Yaddanapudi, Sridhar; Sun, Baozhou; Goddu, S Murty; Mutic, Sasa; Li, Hua
2018-04-28
To investigate the feasibility of using the kV flat panel detector on a linac for consistency evaluations of kV X-ray generator performance. An in-house designed aluminum (Al) array phantom with six 9×9 cm² square regions of various thicknesses was proposed and used in this study. Through XML script-driven image acquisition, kV images with various acquisition settings were obtained using the kV flat panel detector. Utilizing pre-established baseline curves, the consistency of X-ray tube output characteristics including tube voltage accuracy, exposure accuracy and exposure linearity was assessed through image quality assessment metrics including ROI mean intensity, ROI standard deviation (SD) and noise power spectra (NPS). The robustness of this method was tested on two linacs over a three-month period. With the proposed method, tube voltage accuracy can be verified through a consistency check with a 2% tolerance and 2 kVp intervals for forty different kVp settings. The exposure accuracy can be tested with a 4% consistency tolerance for three mAs settings over forty kVp settings. The exposure linearity tested with three mAs settings achieved a coefficient of variation (CV) of 0.1. We proposed a novel approach that uses the kV flat panel detector available on the linac for X-ray generator testing. This approach eliminates the inefficiencies and variability associated with using third-party QA detectors while enabling an automated process. This article is protected by copyright. All rights reserved.
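The two pass/fail criteria reported here (a percent-consistency tolerance against a pre-established baseline, and a coefficient-of-variation limit for exposure linearity) reduce to a few lines. The function names and the exact comparison rule below are assumptions on my part, not the authors' code.

```python
import statistics

def within_consistency(measured, baseline, tol):
    # compare an image-quality metric (e.g. ROI mean intensity) against a
    # pre-established baseline; e.g. tol = 0.02 for kVp, 0.04 for exposure
    return abs(measured - baseline) / baseline <= tol

def exposure_linearity_cv(outputs_per_mas):
    # exposure linearity: detector signal normalized by mAs should be
    # constant; spread is summarized as CV = population stdev / mean
    mean = statistics.fmean(outputs_per_mas)
    return statistics.pstdev(outputs_per_mas) / mean
```

With these helpers, a reading of 102 against a baseline of 100 passes a 4% exposure-consistency check, while normalized outputs of 9, 10 and 11 give a CV of about 0.08, under the 0.1 tolerance quoted in the abstract.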
A Critical Analysis of the Body of Work Method for Setting Cut-Scores
ERIC Educational Resources Information Center
Radwan, Nizam; Rogers, W. Todd
2006-01-01
The recent increase in the use of constructed-response items in educational assessment and the dissatisfaction with the nature of the decision that the judges must make using traditional standard-setting methods created a need to develop new and effective standard-setting procedures for tests that include both multiple-choice and…
Genetic Testing in Clinical Settings.
Franceschini, Nora; Frick, Amber; Kopp, Jeffrey B
2018-04-11
Genetic testing is used for screening, diagnosis, and prognosis of diseases consistent with a genetic cause and to guide drug therapy to improve drug efficacy and avoid adverse effects (pharmacogenomics). This In Practice review aims to inform about DNA-related genetic test availability, interpretation, and recommended clinical actions based on results using evidence from clinical guidelines, when available. We discuss challenges that limit the widespread use of genetic information in the clinical care setting, including a small number of actionable genetic variants with strong evidence of clinical validity and utility, and the need for improving the health literacy of health care providers and the public, including for direct-to-consumer tests. Ethical, legal, and social issues and incidental findings also need to be addressed. Because our understanding of genetic factors associated with disease and drug response is rapidly increasing and new genetic tests are being developed that could be adopted by clinicians in the short term, we also provide extensive resources for information and education on genetic testing. Copyright © 2018 National Kidney Foundation, Inc. All rights reserved.
Test Information. Using the Essay as an Assessment Technique. Set 77. Number One. Item 13.
ERIC Educational Resources Information Center
Cowie, Colin
Certain testing procedures will overcome some of the problems associated with the use of essay tests. Essay tests may not validly indicate achievement because the questions included in the test may not fairly represent instructional content. Reliability may be a problem because of variations in examinee response in different situations, in test…
40 CFR 53.21 - Test conditions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Test conditions. 53.21 Section 53.21... Methods SO2, CO, O3, and NO2 § 53.21 Test conditions. (a) Set-up and start-up of the test analyzer shall... before beginning the tests. If the candidate method does not include an integral strip chart recorder...
NASA Astrophysics Data System (ADS)
Ishak-Boushaki, Mustapha B.
2018-06-01
Testing general relativity at cosmological scales and probing the cause of cosmic acceleration are among important objectives targeted by incoming and future astronomical surveys and experiments. I present our recent results on (in)consistency tests that can provide insights about the underlying gravity theory and cosmic acceleration using cosmological data sets. We use new statistical measures that can detect discordances between data sets when present. We use an algorithmic procedure based on these new measures that is able to identify in some cases whether an inconsistency is due to problems related to systematic effects in the data or to the underlying model. Some recent published tensions between data sets are also examined using our formalism, including the Hubble constant measurements, Planck and Large-Scale-Structure. (Work supported in part by NSF under Grant No. AST-1517768).
Speededness and Adaptive Testing
ERIC Educational Resources Information Center
van der Linden, Wim J.; Xiong, Xinhui
2013-01-01
Two simple constraints on the item parameters in a response--time model are proposed to control the speededness of an adaptive test. As the constraints are additive, they can easily be included in the constraint set for a shadow-test approach (STA) to adaptive testing. Alternatively, a simple heuristic is presented to control speededness in plain…
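Because the proposed speededness constraints are additive over items, they slot into item selection the same way a test-length constraint does. The real shadow-test approach solves an integer program at each step; the toy greedy selector below only illustrates how an additive time-intensity budget restricts the feasible item set (all names and values are invented).

```python
def select_items(info, time_intensity, n_select, time_budget):
    # greedy sketch: take the most informative items whose cumulative
    # time intensity (e.g. lognormal response-time parameters) stays
    # within the additive test-level budget
    order = sorted(range(len(info)), key=lambda i: info[i], reverse=True)
    chosen, used = [], 0.0
    for i in order:
        if len(chosen) == n_select:
            break
        if used + time_intensity[i] <= time_budget:
            chosen.append(i)
            used += time_intensity[i]
    return chosen
```

For example, with item informations [5, 4, 3, 2, 1], time intensities [10, 1, 1, 1, 1], two items to pick, and a budget of 3, the most informative item is skipped as too time-intensive and items 1 and 2 are selected instead.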
2014-01-01
Background Patient-reported outcome validation needs to achieve validity and reliability standards. Among reliability analysis parameters, test-retest reliability is an important psychometric property. Retested patients must be in a clinically stable condition. This is particularly problematic in palliative care (PC) settings because advanced cancer patients are prone to a faster rate of clinical deterioration. The aim of this study was to evaluate the methods by which multi-symptom and health-related quality of life (HRQoL) patient-reported outcomes (PROs) have been validated in oncological PC settings with regards to test-retest reliability. Methods A systematic search of PubMed (1966 to June 2013), EMBASE (1980 to June 2013), PsychInfo (1806 to June 2013), CINAHL (1980 to June 2013), and SCIELO (1998 to June 2013), and specific PRO databases was performed. Studies were included if they described a set of validation studies for an instrument developed to measure multi-symptom or multidimensional HRQoL in advanced cancer patients under PC. The COSMIN checklist was used to rate the methodological quality of the study designs. Results We identified 89 validation studies from 746 potentially relevant articles. From those 89 articles, 31 measured test-retest reliability and were included in this review. Upon critical analysis of the overall quality of the criteria used to determine the test-retest reliability, 6 (19.4%), 17 (54.8%), and 8 (25.8%) of these articles were rated as good, fair, or poor, respectively, and no article was classified as excellent. Multi-symptom instruments were retested over a shorter interval than the HRQoL instruments (median values 24 hours and 168 hours, respectively; p = 0.001).
Validation studies that included objective confirmation of clinical stability in their design yielded better results for the test-retest analysis with regard to both pain and global HRQoL scores (p < 0.05). The quality of the statistical analysis and its description were of great concern. Conclusion Test-retest reliability has been infrequently and poorly evaluated. The confirmation of clinical stability was an important factor in our analysis, and we suggest that special attention be focused on clinical stability when designing a PRO validation study that includes advanced cancer patients under PC. PMID:24447633
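Test-retest reliability of the kind this review rates is typically summarized with an intraclass correlation coefficient. As a generic sketch (not the statistic used by any particular study in the review), ICC(2,1) can be computed directly from the two-way ANOVA mean squares of a subjects-by-sessions score table:

```python
def icc_2_1(ratings):
    """Two-way random, absolute-agreement, single-measures ICC(2,1).
    `ratings` is a list of per-subject score lists, one entry per session."""
    n = len(ratings)          # subjects
    k = len(ratings[0])       # sessions (test, retest, ...)
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_err = ss_total - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)                 # between-subjects mean square
    ms_c = ss_cols / (k - 1)                 # between-sessions mean square
    ms_e = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

A perfectly stable instrument reproduces each score and yields an ICC of 1; clinical instability between sessions inflates the residual mean square and drives the ICC down, which is why the review stresses objective confirmation of clinical stability.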
40 CFR 1066.410 - Dynamometer test procedure.
Code of Federal Regulations, 2014 CFR
2014-07-01
... drive mode. (For purposes of this paragraph (g), the term four-wheel drive includes other multiple drive... Dynamometer test procedure. (a) Dynamometer testing may consist of multiple drive cycles with both cold-start...-setting part identifies the driving schedules and the associated sample intervals, soak periods, engine...
NASA Technical Reports Server (NTRS)
Slater, Richard
1996-01-01
A joint U.S./Russian film test was conducted during MIR Mission 18 to evaluate the effects of radiation on photographic film during long-duration space flights. Two duplicate sets of film were flown on this MIR mission: one set was processed and evaluated by the NASA/JSC Photographic Laboratory, and the other by the RKK Energia's Photographic Laboratory in Moscow. This preliminary report includes only the results of the JSC evaluation (excluding the SN-10 film, which was not available for evaluation at the time this report was written). The final report will include an evaluation by JSC of the SN-10 film and an evaluation of the test data by the RKK Energia. JSC's evaluation of the test data showed the positive film flown was damaged very little when exposed to approximately 8 rads of radiation. Two of the three negative films were significantly damaged and the third film was damaged only moderately.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Lucia, Frank C. Jr.; Gottfried, Jennifer L.; Munson, Chase A.
2008-11-01
A technique being evaluated for standoff explosives detection is laser-induced breakdown spectroscopy (LIBS). LIBS is a real-time sensor technology that uses components that can be configured into a ruggedized standoff instrument. The U.S. Army Research Laboratory has been coupling standoff LIBS spectra with chemometrics for several years in order to discriminate between explosives and nonexplosives. We have investigated the use of partial least squares discriminant analysis (PLS-DA) for explosives detection. We have extended our study of PLS-DA to more complex sample types, including binary mixtures, different types of explosives, and samples not included in the model. We demonstrate the importance of building the PLS-DA model by iteratively testing it against sample test sets. Independent test sets are used to test the robustness of the final model.
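The core of PLS-DA is a projection onto latent variables that correlate with class membership. A minimal single-component PLS1 classifier (a deliberately stripped-down sketch; the actual models use multiple latent variables and cross-validated component selection, and all names here are illustrative) looks like:

```python
def pls1_train(X, y):
    """One-component PLS1 for binary classes coded y in {0, 1}.
    Returns (weight vector, regression coefficient, feature means, y mean)."""
    n, p = len(X), len(X[0])
    x_mean = [sum(row[j] for row in X) / n for j in range(p)]
    y_mean = sum(y) / n
    Xc = [[row[j] - x_mean[j] for j in range(p)] for row in X]
    yc = [v - y_mean for v in y]
    # Weight vector: the covariance direction between spectra and class labels.
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]  # latent scores
    q = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)
    return w, q, x_mean, y_mean

def pls1_predict(x, model):
    """Project a new spectrum onto the latent variable and threshold at 0.5."""
    w, q, x_mean, y_mean = model
    score = sum((xv - m) * wv for xv, m, wv in zip(x, x_mean, w)) * q + y_mean
    return 1 if score >= 0.5 else 0
```

Iterating this train/test loop against held-out sample sets, as the abstract describes, guards against a model that merely memorizes the calibration spectra.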
A ricin forensic profiling approach based on a complex set of biomarkers.
Fredriksson, Sten-Åke; Wunschel, David S; Lindström, Susanne Wiklund; Nilsson, Calle; Wahl, Karen; Åstot, Crister
2018-08-15
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected using a range of analytical methods, and robust orthogonal partial least squares discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved. Copyright © 2018 Elsevier B.V. All rights reserved.
Barbee, Lindley A; Tat, Susana; Dhanireddy, Shireesha; Marrazzo, Jeanne M
2016-06-01
Rates of screening for bacterial sexually transmitted infections (STI) among men who have sex with men in HIV care settings remain low despite high prevalence of these infections. STI self-testing may help increase screening rates in clinical settings. We implemented an STI self-testing program at a large, urban HIV care clinic and evaluated its effectiveness and acceptability. We compared measures obtained during the first year of the STI self-testing program (Intervention Year, April 1, 2013-March 31, 2014) to Baseline Year (January 1, 2012-December 31, 2012) to determine: (1) overall clinic change in STI testing coverage and diagnostic yield and; (2) program-specific outcomes including appropriate anatomic site screening and patient-reported acceptability. Overall, testing for gonorrhea and chlamydia increased significantly between Baseline and Intervention Year, and 50% more gonococcal and 47% more chlamydial infections were detected. Syphilis testing coverage remained unchanged. Nearly 95% of 350 men who participated in the STI self-testing program completed site-specific testing appropriately based on self-reported exposures, and 92% rated their self-testing experience as "good" or "very good." STI self-testing in HIV care settings significantly increases testing coverage and detection of gonorrhea and chlamydia, and the program is acceptable to patients. Additional interventions to increase syphilis screening rates are needed.
Test Cases for the Benchmark Active Controls: Spoiler and Control Surface Oscillations and Flutter
NASA Technical Reports Server (NTRS)
Bennett, Robert M.; Scott, Robert C.; Wieseman, Carol D.
2000-01-01
As a portion of the Benchmark Models Program at NASA Langley, a simple generic model was developed for active controls research and was called BACT, for Benchmark Active Controls Technology model. This model was based on the previously tested Benchmark Models rectangular wing with the NACA 0012 airfoil section that was mounted on the Pitch and Plunge Apparatus (PAPA) for flutter testing. The BACT model had an upper surface spoiler, a lower surface spoiler, and a trailing edge control surface for use in flutter suppression and dynamic response excitation. Previous experience with flutter suppression indicated a need for measured control surface aerodynamics for accurate control law design. Three different types of flutter instability boundaries had also been determined for the NACA 0012/PAPA model: a classical flutter boundary, a transonic stall flutter boundary at angle of attack, and a plunge instability near M = 0.9. Therefore, an extensive set of steady and control surface oscillation data was generated spanning the range of the three types of instabilities. This information was subsequently used to design control laws to suppress each flutter instability. There have been three tests of the BACT model. The objective of the first test, TDT Test 485, was to generate a data set of steady and unsteady control surface effectiveness data, and to determine the open loop dynamic characteristics of the control systems including the actuators. Unsteady pressures, loads, and transfer functions were measured. The other two tests, TDT Test 502 and TDT Test 518, were primarily oriented towards active controls research, but some data supplementary to the first test were obtained. Dynamic response of the flexible system to control surface excitation and open loop flutter characteristics were determined during Test 502. Loads were not measured during the last two tests. During these tests, a database of over 3000 data sets was obtained.
A reasonably extensive subset of the data sets from the first two tests has been chosen for Test Cases for computational comparisons, concentrating on static conditions and cases with harmonically oscillating control surfaces. Several flutter Test Cases from both tests have also been included. Some aerodynamic comparisons with the BACT data have been made using computational fluid dynamics codes at the Navier-Stokes level (and in the accompanying chapter SC). Some mechanical and active control studies have been presented. In this report several Test Cases are selected to illustrate trends for a variety of different conditions, with emphasis on transonic flow effects. Cases for static angles of attack and static trailing-edge and upper-surface spoiler deflections are included for a range of conditions near those for the oscillation cases. Cases for trailing-edge control and upper-surface spoiler oscillations for a range of Mach numbers, angles of attack, and static control deflections are included. Cases for all three types of flutter instability are selected. In addition, some cases are included for dynamic response measurements during forced oscillations of the controls on the flexible mount. An overview of the model and tests is given, and the standard formulary for these data is listed. Some sample data and sample results of calculations are presented. Only the static pressures and the first harmonic real and imaginary parts of the pressures are included in the data for the Test Cases, but digitized time histories have been archived. The data for the Test Cases are also available as separate electronic files.
NASA Technical Reports Server (NTRS)
Caplin, R. S.; Royer, E. R.
1978-01-01
Attempts are made to provide a total design of a Microbial Load Monitor (MLM) system flight engineering model. Activities include assembly and testing of Sample Receiving and Card Loading Devices (SRCLDs), operator-related software, and testing of biological samples in the MLM. Progress was made in assembling SRCLDs that have minimal leaks and operate reliably in the Sample Loading System. Seven operator commands are used to control various aspects of the MLM, such as calibrating and reading the incubating reading head, setting the clock and reading the time, and reading the status of the Card. Testing of the instrument, both in hardware and biologically, was performed. Hardware testing concentrated on SRCLDs. Biological testing covered 66 clinical and seeded samples. Tentative thresholds were set and media performance listed.
Scale out databases for CERN use cases
NASA Astrophysics Data System (ADS)
Baranowski, Zbigniew; Grzybek, Maciej; Canali, Luca; Lanza Garcia, Daniel; Surdy, Kacper
2015-12-01
Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log database.
Exploring pharmacy and home-based sexually transmissible infection testing
Habel, Melissa A.; Scheinmann, Roberta; Verdesoto, Elizabeth; Gaydos, Charlotte; Bertisch, Maggie; Chiasson, Mary Ann
2015-01-01
Background This study assessed the feasibility and acceptability of pharmacy and home-based sexually transmissible infection (STI) screening as alternate testing venues among emergency contraception (EC) users. Methods The study included two phases in February 2011–July 2012. In Phase I, customers purchasing EC from eight pharmacies in Manhattan received vouchers for free STI testing at onsite medical clinics. In Phase II, three Facebook ads targeted EC users to connect them with free home-based STI test kits ordered online. Participants completed a self-administered survey. Results Only 38 participants enrolled in Phase I: 90% female, ≤29 years (74%), 45% White non-Hispanic and 75% college graduates; 71% were not tested for STIs in the past year and 68% reported a new partner in the past 3 months. None tested positive for STIs. In Phase II, ads led to >45 000 click-throughs, 382 completed the survey and 290 requested kits; 28% were returned. Phase II participants were younger and less educated than Phase I participants; six tested positive for STIs. Challenges included recruitment, pharmacy staff participation, advertising with discretion and cost. Conclusions This study found low uptake of pharmacy and home-based testing among EC users; however, STI testing in these settings is feasible and the acceptability findings indicate an appeal among younger women for testing in non-traditional settings. Collaborating with and training pharmacy and medical staff are key elements of service provision. Future research should explore how different permutations of expanding screening in non-traditional settings could improve testing uptake and detect additional STI cases. PMID:26409484
40 CFR 1033.515 - Discrete-mode steady-state emission tests of locomotives and locomotive engines.
Code of Federal Regulations, 2011 CFR
2011-07-01
... the provisions of 40 CFR part 1065, subpart F for general pre-test procedures (including engine and... 1065. (b) Begin the test by operating the locomotive over the pre-test portion of the cycle specified... Sample averaging period for emissions 1 Pre-test idle Lowest idle setting 10 to 15 3 Not applicable A Low...
40 CFR 1033.515 - Discrete-mode steady-state emission tests of locomotives and locomotive engines.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the provisions of 40 CFR part 1065, subpart F for general pre-test procedures (including engine and... 1065. (b) Begin the test by operating the locomotive over the pre-test portion of the cycle specified... Sample averaging period for emissions 1 Pre-test idle Lowest idle setting 10 to 15 3 Not applicable A Low...
40 CFR 1033.515 - Discrete-mode steady-state emission tests of locomotives and locomotive engines.
Code of Federal Regulations, 2013 CFR
2013-07-01
... the provisions of 40 CFR part 1065, subpart F for general pre-test procedures (including engine and... 1065. (b) Begin the test by operating the locomotive over the pre-test portion of the cycle specified... Sample averaging period for emissions 1 Pre-test idle Lowest idle setting 10 to 15 3 Not applicable A Low...
40 CFR 1033.515 - Discrete-mode steady-state emission tests of locomotives and locomotive engines.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the provisions of 40 CFR part 1065, subpart F for general pre-test procedures (including engine and... 1065. (b) Begin the test by operating the locomotive over the pre-test portion of the cycle specified... Sample averaging period for emissions 1 Pre-test idle Lowest idle setting 10 to 15 3 Not applicable A Low...
40 CFR 1033.515 - Discrete-mode steady-state emission tests of locomotives and locomotive engines.
Code of Federal Regulations, 2012 CFR
2012-07-01
... the provisions of 40 CFR part 1065, subpart F for general pre-test procedures (including engine and... 1065. (b) Begin the test by operating the locomotive over the pre-test portion of the cycle specified... Sample averaging period for emissions 1 Pre-test idle Lowest idle setting 10 to 15 3 Not applicable A Low...
EMC system test performance on Spacelab
NASA Astrophysics Data System (ADS)
Schwan, F.
1982-07-01
Electromagnetic compatibility testing of the Spacelab engineering model is discussed. Documentation, test procedures (including data monitoring and test configuration setup), and the performance assessment approach are described. Equipment was assembled into selected representative flight configurations. The physical and functional interfaces between the subsystems were demonstrated within the integration and test sequence, which culminated in the flyable configuration of the Long Module plus one Pallet.
Test method research on weakening interface strength of steel-concrete under cyclic loading
NASA Astrophysics Data System (ADS)
Liu, Ming-wei; Zhang, Fang-hua; Su, Guang-quan
2018-02-01
The mechanical properties of the steel-concrete interface under cyclic loading are the key factors affecting the rule of horizontal load transfer, the calculation of bearing capacity, and cumulative horizontal deformation. The cyclic shear test is an effective method for studying the strength reduction of the steel-concrete interface. A test system composed of a large repeated direct shear apparatus, a hydraulic servo system, a data acquisition system, and test control software was independently designed, and a set of test methods, covering specimen preparation, instrument preparation, and the loading procedure, is put forward. The validity of the test method is verified with a representative set of test results. The test system, and the test method based on it, provide a reference for experimental studies on the mechanical properties of the steel-concrete interface.
Oscillating-flow regenerator test rig
NASA Technical Reports Server (NTRS)
Wood, J. G.; Gedeon, D. R.
1994-01-01
This report summarizes work performed in setting up and performing tests on a regenerator test rig. An earlier status report presented test results, together with heat transfer correlations, for four regenerator samples (two woven screen samples and two felt metal samples). Lessons learned from this testing led to improvements to the experimental setup (mainly instrumentation) as well as to the test procedure. Given funding and time constraints for this project, it was decided to complete as much testing as possible while the rig was set up and operational, and to forgo final data reduction and analysis until later. Additional testing was performed on several of the previously tested samples as well as on five newly fabricated samples. The following report is a summary of the work performed at OU, with many of the final test results included in raw-data form.
NASA Technical Reports Server (NTRS)
Milam, Laura J.
1990-01-01
The Cosmic Background Explorer Observatory (COBE) underwent a thermal vacuum thermal balance test in the Space Environment Simulator (SES). This was the largest and most complex test ever conducted at this facility. The 4 x 4 m (13 x 13 ft) spacecraft weighed approx. 2223 kg (4900 lbs) for the test. The test setup included simulator panels for the inboard solar array panels, simulator panels for the flight cowlings, Sun and Earth Sensor stimuli, Thermal Radio Frequency Shield heater stimuli, and a cryopanel for thermal control in the Attitude Control System Shunt Dissipator area. The fixturing also included a unique 4.3 m (14 ft) diameter Gaseous Helium Cryopanel which provided a 20 K environment for the calibration of one of the spacecraft's instruments, the Differential Microwave Radiometer. This cryogenic panel caused extra contamination concerns, and a special method was developed and written into the test procedure to prevent the high buildup of condensibles on the panel, which could have led to backstreaming of the thermal vacuum chamber. The test was completed with a high quality simulated space environment provided to the spacecraft. The test requirements, test setup, and special fixturing are described.
Wireless Coexistence and EMC of Bluetooth and 802.11b Devices in Controlled Laboratory Settings
Seidman, Seth; Kainz, Wolfgang; Ruggera, Paul; Mendoza, Gonzalo
2011-01-01
This paper presents experimental testing that has been performed on wireless communication devices as victims of electromagnetic interference (EMI). Wireless victims included universal serial bus (USB) network adapters and personal digital assistants (PDAs) equipped with IEEE 802.11b and Bluetooth technologies. The experimental data in this paper was gathered in an anechoic chamber and a gigahertz transverse electromagnetic (GTEM) cell to ensure reliable and repeatable results. This testing includes: Electromagnetic compatibility (EMC) testing performed in accordance with IEC 60601-1-2, an in-band sweep of EMC testing, and coexistence testing. The tests in this study show that a Bluetooth communication was able to coexist with other Bluetooth devices with no decrease in throughput and no communication breakdowns. However, testing revealed a significant decrease in throughput and increase in communication breakdowns when an 802.11b source is near an 802.11b victim. In a hospital setting decreased throughput and communication breakdowns can cause wireless medical devices to fail. It is therefore vital to have an understanding of the effect EMI can have on wireless communication devices. PMID:22043254
System for testing properties of a network
Rawle, Michael; Bartholomew, David B.; Soares, Marshall A.
2009-06-16
A method for identifying properties of a downhole electromagnetic network in a downhole tool string, including the step of providing an electromagnetic path intermediate a first location and a second location on the electromagnetic network. The method further includes the step of providing a receiver at the second location. The receiver includes a known reference. The analog signal includes a set amplitude, a set range of frequencies, and a set rate of change between the frequencies. The method further includes the steps of sending the analog signal and passively modifying the signal. The analog signal is sent from the first location through the electromagnetic path, and the signal is modified by the properties of the electromagnetic path. The method further includes the step of receiving the modified signal at the second location and comparing the known reference to the modified signal.
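The comparison step above, a known reference against the passively modified signal, amounts to estimating per-frequency attenuation along the path. A toy sketch (the frequencies, amplitudes, and fault threshold are hypothetical, not taken from the patent):

```python
import math

def attenuation_db(reference, received):
    """Per-frequency attenuation in dB between a known reference sweep
    and the modified signal measured at the receiver.
    Both arguments map frequency (Hz) -> amplitude."""
    return {f: 20.0 * math.log10(received[f] / reference[f]) for f in reference}

def flag_faults(reference, received, threshold_db=-6.0):
    """Frequencies where the path attenuates more than the threshold,
    suggesting a degraded link in the electromagnetic network."""
    att = attenuation_db(reference, received)
    return sorted(f for f, a in att.items() if a < threshold_db)
```

Because the modification is passive, a stable attenuation profile characterizes the healthy path; deviations from it localize faults by frequency band.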
The user's guide describes the methods used by TEST to predict toxicity and physical properties (including the new mode of action based method used to predict acute aquatic toxicity). It describes all of the experimental data sets included in the tool. It gives the prediction res...
NASA Technical Reports Server (NTRS)
Smith, Gerald A.
1999-01-01
Included in Appendix I to this report is a complete set of design and assembly schematics for the high vacuum inner trap assembly, cryostat interfaces and electronic components for the MSFC HI-PAT. Also included in the final report are summaries of vacuum tests, and electronic tests performed upon completion of the assembly.
A discontinuous Galerkin conservative level set scheme for interface capturing in multiphase flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owkes, Mark, E-mail: mfc86@cornell.edu; Desjardins, Olivier
2013-09-15
The accurate conservative level set (ACLS) method of Desjardins et al. [O. Desjardins, V. Moureau, H. Pitsch, An accurate conservative level set/ghost fluid method for simulating turbulent atomization, J. Comput. Phys. 227 (18) (2008) 8395–8416] is extended by using a discontinuous Galerkin (DG) discretization. DG allows for the scheme to have an arbitrarily high order of accuracy with the smallest possible computational stencil, resulting in an accurate method with good parallel scaling. This work includes a DG implementation of the level set transport equation, which moves the level set with the flow field velocity, and a DG implementation of the reinitialization equation, which is used to maintain the shape of the level set profile to promote good mass conservation. A near second order converging interface curvature is obtained by following a height function methodology (common amongst volume of fluid schemes) in the context of the conservative level set. Various numerical experiments are conducted to test the properties of the method and show excellent results, even on coarse meshes. The tests include Zalesak's disk, two-dimensional deformation of a circle, time evolution of a standing wave, and a study of the Kelvin–Helmholtz instability. Finally, this novel methodology is employed to simulate the break-up of a turbulent liquid jet.
Application of the Trend Filtering Algorithm for Photometric Time Series Data
NASA Astrophysics Data System (ADS)
Gopalan, Giri; Plavchan, Peter; van Eyken, Julian; Ciardi, David; von Braun, Kaspar; Kane, Stephen R.
2016-08-01
Detecting transient light curves (e.g., transiting planets) requires high-precision data, and thus it is important to effectively filter systematic trends affecting ground-based wide-field surveys. We apply an implementation of the Trend Filtering Algorithm (TFA) to the 2MASS calibration catalog and selected Palomar Transient Factory (PTF) photometric time series data. TFA is successful at reducing the overall dispersion of light curves; however, it may over-filter intrinsic variables and increase "instantaneous" dispersion when a template set is not judiciously chosen. In an attempt to rectify these issues we modify the original TFA from the literature by including measurement uncertainties in its computation, including ancillary data correlated with noise, and algorithmically selecting a template set using clustering algorithms as suggested by various authors. This approach may be particularly useful for appropriately accounting for surveys with variable photometric precision and/or combined data sets. In summary, our contributions are to provide a MATLAB software implementation of TFA and a number of modifications tested on synthetics and real data, to summarize the performance of TFA and various modifications on real ground-based data sets (2MASS and PTF), and to assess the efficacy of TFA and modifications using synthetic light curve tests consisting of transiting and sinusoidal variables. While the transiting variables test indicates that these modifications confer no advantage to transit detection, the sinusoidal variables test indicates potential improvements in detection accuracy.
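At its core, TFA removes from each target light curve its least-squares projection onto a set of template light curves drawn from other stars, so that trends shared across the field cancel. A compact sketch (the paper's implementation is in MATLAB; this omits the uncertainty weighting and clustering-based template selection the authors add):

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def tfa_filter(target, templates):
    """Subtract the least-squares fit of the template light curves
    from the target light curve (normal equations A c = b)."""
    k = len(templates)
    a = [[sum(ti * tj for ti, tj in zip(templates[i], templates[j]))
          for j in range(k)] for i in range(k)]
    b = [sum(ti * y for ti, y in zip(templates[i], target)) for i in range(k)]
    coeffs = solve(a, b)
    fit = [sum(c * tpl[n] for c, tpl in zip(coeffs, templates))
           for n in range(len(target))]
    return [y - f for y, f in zip(target, fit)]
```

The over-filtering risk the abstract describes is visible here: an intrinsic variable whose shape happens to correlate with a template is partially subtracted along with the systematics.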
Integrative set enrichment testing for multiple omics platforms
2011-01-01
Background Enrichment testing assesses the overall evidence of differential expression behavior of the elements within a defined set. When we have measured many molecular aspects, e.g. gene expression, metabolites, proteins, it is desirable to assess their differential tendencies jointly across platforms using an integrated set enrichment test. In this work we explore the properties of several methods for performing a combined enrichment test using gene expression and metabolomics as the motivating platforms. Results Using two simulation models we explored the properties of several enrichment methods including two novel methods: the logistic regression 2-degree of freedom Wald test and the 2-dimensional permutation p-value for the sum-of-squared statistics test. In relation to their univariate counterparts we find that the joint tests can improve our ability to detect results that are marginal univariately. We also find that joint tests improve the ranking of associated pathways compared to their univariate counterparts. However, there is a risk of Type I error inflation with some methods and self-contained methods lose specificity when the sets are not representative of underlying association. Conclusions In this work we show that consideration of data from multiple platforms, in conjunction with summarization via a priori pathway information, leads to increased power in detection of genomic associations with phenotypes. PMID:22118224
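A simplified analogue (not the paper's exact Wald or permutation tests) shows why a joint 2-degree-of-freedom test can recover results that are only marginal univariately: independent per-platform z-statistics combine into a chi-square statistic, and for 2 degrees of freedom the survival function is exactly exp(-x/2).

```python
import math

def joint_chi2_p(z_expr, z_metab):
    """Combine independent per-platform z-statistics into one
    2-degree-of-freedom chi-square p-value (a simplified stand-in for
    the joint tests discussed; assumes independence across platforms)."""
    chi2 = z_expr**2 + z_metab**2
    return math.exp(-chi2 / 2.0)  # exact chi-square survival function, df=2

# two marginal results (z = 1.8, individually non-significant two-sided)
# become jointly significant at the 0.05 level
p_joint = joint_chi2_p(1.8, 1.8)
print(round(p_joint, 4))  # 0.0392
```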
Hubben, Gijs; Bootsma, Martin; Luteijn, Michiel; Glynn, Diarmuid; Bishai, David
2011-01-01
Background Screening at hospital admission for carriage of methicillin-resistant Staphylococcus aureus (MRSA) has been proposed as a strategy to reduce nosocomial infections. The objective of this study was to determine the long-term costs and health benefits of selective and universal screening for MRSA at hospital admission, using both PCR-based and chromogenic media-based tests in various settings. Methodology/Principal Findings A simulation model of MRSA transmission was used to determine costs and effects over 15 years from a US healthcare perspective. We compared admission screening together with isolation of identified carriers against a baseline policy without screening or isolation. Strategies included selective screening of high risk patients or universal admission screening, with PCR-based or chromogenic media-based tests, in medium (5%) or high nosocomial prevalence (15%) settings. The costs of screening and isolation per averted MRSA infection were lowest using selective chromogenic-based screening in high and medium prevalence settings, at $4,100 and $10,300, respectively. Replacing the chromogenic-based test with a PCR-based test costs $13,000 and $36,200 per additional infection averted, and subsequent extension to universal screening with PCR would cost $131,000 and $232,700 per additional infection averted, in high and medium prevalence settings respectively. Assuming $17,645 benefit per infection averted, the most cost-saving strategies in high and medium prevalence settings were selective screening with PCR and selective screening with chromogenic, respectively. Conclusions/Significance Admission screening costs $4,100–$21,200 per infection averted, depending on strategy and setting. Including financial benefits from averted infections, screening could well be cost saving. PMID:21483492
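The incremental figures quoted above follow standard cost-effectiveness arithmetic: divide the extra cost of the more effective strategy by the additional infections it averts. A minimal sketch with hypothetical totals (the study's actual costs and case counts come from its transmission model):

```python
def incremental_cost_per_averted(cost_a, averted_a, cost_b, averted_b):
    """Incremental cost per additional infection averted when moving
    from strategy A to the more effective strategy B."""
    return (cost_b - cost_a) / (averted_b - averted_a)

# hypothetical totals over the model horizon (illustrative, not the paper's data)
chromogenic = {"cost": 410_000, "averted": 100}
pcr         = {"cost": 930_000, "averted": 140}

print(incremental_cost_per_averted(
    chromogenic["cost"], chromogenic["averted"],
    pcr["cost"], pcr["averted"]))  # 13000.0
```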
ERIC Educational Resources Information Center
Cossairt, Travis J.; Grubbs, W. Tandy
2011-01-01
An open-access, Web-based mnemonic game is described whereby introductory chemistry knowledge is tested using mahjong solitaire game play. Several tile sets and board layouts are included that are themed upon different chemical topics. Introductory tile sets can be selected that prompt the player to match element names to symbols and metric…
Access and Quality of HIV-Related Point-of-Care Diagnostic Testing in Global Health Programs.
Fonjungo, Peter N; Boeras, Debrah I; Zeh, Clement; Alexander, Heather; Parekh, Bharat S; Nkengasong, John N
2016-02-01
Access to point-of-care testing (POCT) improves patient care, especially in resource-limited settings where laboratory infrastructure is poor and the bulk of the population lives in rural settings. However, because of challenges in rolling out the technology and weak quality assurance measures, the promise of human immunodeficiency virus (HIV)-related POCT in resource-limited settings has not been fully exploited to improve patient care and impact public health. Because of these challenges, the Joint United Nations Programme on HIV/AIDS (UNAIDS), in partnership with other organizations, recently launched the Diagnostics Access Initiative. Expanding HIV programs, including the "test and treat" strategies and the newly established UNAIDS 90-90-90 targets, will require increased access to reliable and accurate POCT results. In this review, we examine various components that could improve access and uptake of quality-assured POC tests to ensure coverage and public health impact. These components include evaluation, policy, regulation, and innovative approaches to strengthen the quality of POCT. © The Author 2015. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
Aerodynamic heating to representative SRB and ET protuberances
NASA Technical Reports Server (NTRS)
Engel, C. D.; Lapointe, J. K.
1979-01-01
Heating data and data scaling methods which can be used on representative solid rocket booster and external tank (ET) protuberances are described. Topics covered include (1) ET geometry and heating points; (2) interference heating test data (51A); (3) heat transfer data from tests FH-15 and FH-16; (4) individual protuberance data; and (5) interference heating of paint data from test IH-42. A set of drawings of the ET moldline and protuberances is included.
Genetic susceptibility testing for neurodegenerative diseases: Ethical and practice issues
Roberts, J. Scott; Uhlmann, Wendy R.
2013-01-01
As the genetics of neurodegenerative disease become better understood, opportunities for genetic susceptibility testing for at-risk individuals will increase. Such testing raises important ethical and practice issues related to test access, informed consent, risk estimation and communication, return of results, and policies to prevent genetic discrimination. The advent of direct-to-consumer genetic susceptibility testing for various neurodegenerative disorders (including Alzheimer’s disease, Parkinson’s disease, and certain prion diseases) means that ethical and practical challenges must be faced not only in traditional research and clinical settings, but also in broader society. This review addresses several topics relevant to the development and implementation of genetic susceptibility tests across research, clinical, and consumer settings; these include appropriate indications for testing, the implications of different methods for disclosing test results, clinical versus personal utility of risk information, psychological and behavioral responses to test results, testing of minors, genetic discrimination, and ethical dilemmas posed by whole-genome sequencing. We also identify future areas of likely growth in the field, including pharmacogenomics and genetic screening for individuals considering or engaged in activities that pose elevated risk of brain injury (e.g., football players, military personnel). APOE gene testing for risk of Alzheimer’s disease is used throughout as an instructive case example, drawing upon the authors’ experience as investigators in a series of multisite randomized clinical trials that have examined the impact of disclosing APOE genotype status to interested individuals (e.g., first-degree relatives, persons with mild cognitive impairment). PMID:23583530
Implementation of transverse variable asphalt rate seal coat practices in Texas.
DOT National Transportation Integrated Search
2011-01-01
An implementation project was performed to expand use of transversely varied asphalt rate (TVAR) seal : coat practices in all districts. The project included nine regional workshops, continued field texture testing of : test sites, provided one set o...
A semiconductor bridge ignited hot gas piston ejector
NASA Technical Reports Server (NTRS)
Grubelich, M. C.; Bickes, Robert W., Jr.
1993-01-01
The topics are presented in viewgraph form and include the following: semiconductor bridge technology (SCB); SCB philosophy; technology transfer; simplified sketch of SCB; SCB processing; SCB design; SCB test assembly; 5 mJ SCB burst based on a polaroid photograph; micro-convective heat transfer hypothesis; SCB fire set; comparison of SCB and hot-wire actuators; satellite firing sets; logic fire set; SCB smart component; SCB smart firing set; semiconductor design considerations; and the adjustable actuator system.
van der Togt, Remko; Bakker, Piet J M; Jaspers, Monique W M
2011-04-01
RFID offers great opportunities to health care. Nevertheless, prior experiences also show that RFID systems have not been designed and tested in response to the particular needs of health care settings and might introduce new risks. The aim of this study is to present a framework that can be used to assess the performance of RFID systems particularly in health care settings. We developed a framework describing a systematic approach that can be used for assessing the feasibility of using an RFID technology in a particular healthcare setting; more specifically, for testing the impact of environmental factors on the quality of RFID generated data and vice versa. This framework is based on our own experiences with an RFID pilot implementation in an academic hospital in The Netherlands and a literature review concerning RFID test methods and current insights of RFID implementations in healthcare. The implementation of an RFID system within the blood transfusion chain inside a hospital setting was used as a show case to explain the different phases of the framework. The framework consists of nine phases, including an implementation development plan, RFID and medical equipment interference tests, and data accuracy and data completeness tests to be run in laboratory, simulated field, and real field settings. The potential risks that RFID technologies may bring to the healthcare setting should be thoroughly evaluated before they are introduced into a vital environment. The RFID performance assessment framework that we present can act as a reference model to start an RFID development, engineering, implementation and testing plan and, more specifically, to assess the potential risks of interference and to test the quality of the RFID generated data potentially influenced by physical objects in specific health care environments. Copyright © 2010 Elsevier Inc. All rights reserved.
A support vector machine based test for incongruence between sets of trees in tree space
2012-01-01
Background The increased use of multi-locus data sets for phylogenetic reconstruction has increased the need to determine whether a set of gene trees significantly deviate from the phylogenetic patterns of other genes. Such unusual gene trees may have been influenced by other evolutionary processes such as selection, gene duplication, or horizontal gene transfer. Results Motivated by this problem we propose a nonparametric goodness-of-fit test for two empirical distributions of gene trees, and we developed the software GeneOut to estimate a p-value for the test. Our approach maps trees into a multi-dimensional vector space and then applies support vector machines (SVMs) to measure the separation between two sets of pre-defined trees. We use a permutation test to assess the significance of the SVM separation. To demonstrate the performance of GeneOut, we applied it to the comparison of gene trees simulated within different species trees across a range of species tree depths. Applied directly to sets of simulated gene trees with large sample sizes, GeneOut was able to detect very small differences between two sets of gene trees generated under different species trees. Our statistical test can also include tree reconstruction into its test framework through a variety of phylogenetic optimality criteria. When applied to DNA sequence data simulated from different sets of gene trees, results in the form of receiver operating characteristic (ROC) curves indicated that GeneOut performed well in the detection of differences between sets of trees with different distributions in a multi-dimensional space. Furthermore, it controlled false positive and false negative rates very well, indicating a high degree of accuracy. Conclusions The non-parametric nature of our statistical test provides fast and efficient analyses, and makes it an applicable test for any scenario where evolutionary or other factors can lead to trees with different multi-dimensional distributions.
The software GeneOut is freely available under the GNU public license. PMID:22909268
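The map-then-permute procedure above can be sketched in a few lines. As a dependency-free stand-in for the SVM margin, the separation statistic below is the Euclidean distance between group means; the permutation logic is the same as GeneOut's.

```python
import numpy as np

def permutation_separation_test(set_a, set_b, n_perm=999, seed=0):
    """Permutation test for whether two sets of vectorized trees differ.

    set_a, set_b : (n_a, d) and (n_b, d) arrays of trees mapped into a
    d-dimensional vector space. The statistic (distance between group
    means) stands in for the SVM-based separation used by GeneOut.
    """
    rng = np.random.default_rng(seed)
    pooled = np.vstack([set_a, set_b])
    n_a = len(set_a)

    def stat(x):
        return np.linalg.norm(x[:n_a].mean(axis=0) - x[n_a:].mean(axis=0))

    observed = stat(pooled)
    count = sum(
        stat(pooled[rng.permutation(len(pooled))]) >= observed
        for _ in range(n_perm)
    )
    return (count + 1) / (n_perm + 1)  # permutation p-value

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, (30, 5))
b = rng.normal(1.5, 1.0, (30, 5))   # clearly shifted distribution
print(permutation_separation_test(a, b) < 0.05)  # True
```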
40 CFR 51.363 - Quality assurance.
Code of Federal Regulations, 2013 CFR
2013-07-01
... per year per number of inspectors using covert vehicles set to fail (this requirement sets a minimum... stations that conduct both testing and repairs, at least one covert vehicle visit per station per year including the purchase of repairs and subsequent retesting if the vehicle is initially failed for tailpipe...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-23
... Board (CARB) its request for a waiver of preemption for emission standards and related test procedures... standards and test procedures for heavy-duty urban bus engines and vehicles. The 2000 rulemaking included... to emission standards and test procedures resulting from these five sets of amendments were codified...
Unit: Sticking Together, First Trial Materials, Inspection Set.
ERIC Educational Resources Information Center
Australian Science Education Project, Toorak, Victoria.
These materials, including teacher's guide, student test booklet and laboratory guide, student workbook, test booklet, and a booklet explaining the answers to the questions in the test booklet, are first trial versions of a unit that will form part of the Australian Science Education Project instructional materials for grades seven through ten.…
NASA Technical Reports Server (NTRS)
Allen, Jerry M.
2005-01-01
An experimental study has been performed to develop a large force and moment aerodynamic data set on a slender axisymmetric missile configuration having cruciform strakes and in-line control tail fins. The data include six-component balance measurements of the configuration aerodynamics and three-component measurements on all four tail fins. The test variables include angle of attack, roll angle, Mach number, model buildup, strake length, nose size, and tail fin deflection angles to provide pitch, yaw, and roll control. Test Mach numbers ranged from 0.60 to 4.63. The entire data set is presented on a CD-ROM that is attached to this paper. The CD-ROM also includes extensive plots of both the six-component configuration data and the three-component tail fin data. Selected samples of these plots are presented in this paper to illustrate the features of the data and to investigate the effects of the test variables.
How well does multiple OCR error correction generalize?
NASA Astrophysics Data System (ADS)
Lund, William B.; Ringger, Eric K.; Walker, Daniel D.
2013-12-01
As the digitization of historical documents, such as newspapers, becomes more common, archive patrons' need for accurate digital text from those documents increases. Building on our earlier work, the contributions of this paper are: 1. demonstrating the applicability of novel methods for correcting optical character recognition (OCR) on disparate data sets, including a new synthetic training set, 2. enhancing the correction algorithm with novel features, and 3. assessing the data requirements of the correction learning method. First, we correct errors using conditional random fields (CRF) trained on synthetic training data sets in order to demonstrate the applicability of the methodology to unrelated test sets. Second, we show the strength of lexical features from the training sets on two unrelated test sets, yielding a relative reduction in word error rate (WER) on the test sets of 6.52%. New features capture the recurrence of hypothesis tokens and yield an additional relative reduction in WER of 2.30%. Further, we show that only 2.0% of the full training corpus of over 500,000 feature cases is needed to achieve correction results comparable to those using the entire training corpus, effectively reducing both the complexity of the training process and the learned correction model.
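Relative word-error-rate reductions compose multiplicatively rather than additively, so the two quoted figures can be chained; the arithmetic is checked below with an illustrative baseline WER of 20% (the paper reports only relative numbers).

```python
def relative_reduction(baseline, improved):
    """Relative reduction of an error rate, as a fraction of the baseline."""
    return (baseline - improved) / baseline

# illustrative baseline WER; not a figure from the paper
baseline_wer = 0.20
after_lexical = baseline_wer * (1 - 0.0652)      # 6.52% relative reduction
after_recurrence = after_lexical * (1 - 0.0230)  # further 2.30% relative reduction

# combined relative reduction versus the baseline
print(round(relative_reduction(baseline_wer, after_recurrence), 4))  # 0.0867
```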
Point-Process Models of Social Network Interactions: Parameter Estimation and Missing Data Recovery
2014-08-01
treating them as zero will have a de minimis impact on the results, but avoiding computing them (and computing with them) saves tremendous time. Set a... test the methods on simulated time series on artificial social networks, including some toy networks and some meant to resemble IkeNet. We conclude...the section by discussing the results in detail. In each of our tests we begin with a complete data set, whether it is real (IkeNet) or simulated. Then
Case-based statistical learning applied to SPECT image classification
NASA Astrophysics Data System (ADS)
Górriz, Juan M.; Ramírez, Javier; Illán, I. A.; Martínez-Murcia, Francisco J.; Segovia, Fermín.; Salas-Gonzalez, Diego; Ortiz, A.
2017-03-01
Statistical learning and decision theory play a key role in many areas of science and engineering. Some examples include time series regression and prediction, optical character recognition, signal detection in communications or biomedical applications for diagnosis and prognosis. This paper deals with the topic of learning from biomedical image data in the classification problem. In a typical scenario we have a training set that is employed to fit a prediction model or learner and a testing set to which the learner is applied in order to predict the outcome for new unseen patterns. Both processes are usually completely separated to avoid over-fitting and due to the fact that, in practice, the unseen new objects (testing set) have unknown outcomes. However, the outcome yields one of a discrete set of values, i.e. the binary diagnosis problem. Thus, assumptions on these outcome values could be established to obtain the most likely prediction model at the training stage, that could improve the overall classification accuracy on the testing set, or keep its performance at least at the level of the selected statistical classifier. In this sense, a novel case-based learning (c-learning) procedure is proposed which combines hypothesis testing from a discrete set of expected outcomes and a cross-validated classification stage.
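The train/test separation that the paper takes as its starting point can be illustrated with any simple classifier; the nearest-centroid sketch below is a generic illustration, not the paper's c-learning procedure.

```python
import numpy as np

class NearestCentroid:
    """Minimal classifier illustrating the standard train/test split:
    fit on the training set only, then predict outcomes for the unseen
    testing set (a generic stand-in, not the paper's c-learning)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # distance of each sample to each class centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
y_train = np.array([0] * 50 + [1] * 50)
X_test = np.vstack([rng.normal(0, 1, (10, 3)), rng.normal(3, 1, (10, 3))])
y_test = np.array([0] * 10 + [1] * 10)

clf = NearestCentroid().fit(X_train, y_train)   # training stage only
accuracy = (clf.predict(X_test) == y_test).mean()  # evaluation on unseen data
print(accuracy >= 0.9)  # True (classes are well separated)
```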
Express Testing Makes for More Effective Vet Visit
NASA Technical Reports Server (NTRS)
2003-01-01
This paper presents a discussion on Vetscan, a system designed to provide veterinarians with instant diagnostic information needed for rapid treatment decisions. VetScan is designed for point-of-care testing in any treatment setting, including mobile environments, where veterinarians can operate the analyzer from a car-lighter adapter. A full range of tests is available for almost every species normally treated by veterinarians, including cats, dogs, birds, reptiles, and large animals, such as those in the equine and bovine families.
A quality management systems approach for CD4 testing in resource-poor settings.
Westerman, Larry E; Kohatsu, Luciana; Ortiz, Astrid; McClain, Bernice; Kaplan, Jonathan; Spira, Thomas; Marston, Barbara; Jani, Ilesh V; Nkengasong, John; Parsons, Linda M
2010-10-01
Quality assurance (QA) is a systematic process to monitor and improve clinical laboratory practices. The fundamental components of a laboratory QA program include providing a functional and safe laboratory environment, trained and competent personnel, maintained equipment, adequate supplies and reagents, testing of appropriate specimens, internal monitoring of quality, accurate reporting, and external quality assessments. These components are necessary to provide accurate and precise CD4 T-cell counts, an essential test to evaluate start of and monitor effectiveness of antiretroviral therapy for HIV-infected patients. In recent years, CD4 testing has expanded dramatically in resource-limited settings. Information on a CD4 QA program as described in this article will provide guidelines not only for clinical laboratory staff but also for managers of programs responsible for supporting CD4 testing. All agencies involved in implementing CD4 testing must understand the needs of the laboratory and provide advocacy, guidance, and financial support to established CD4 testing sites and programs. This article describes and explains the procedures that must be put in place to provide reliable CD4 determinations in a variety of settings.
Harrison, Jennifer K; Fearon, Patricia; Noel-Storr, Anna H; McShane, Rupert; Stott, David J; Quinn, Terry J
2015-03-10
The diagnosis of dementia relies on the presence of new-onset cognitive impairment affecting an individual's functioning and activities of daily living. The Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) is a questionnaire instrument, completed by a suitable 'informant' who knows the patient well, designed to assess change in functional performance secondary to cognitive change; it is used as a tool to identify those who may have dementia. In secondary care there are two specific instances where patients may be assessed for the presence of dementia. These are in the general acute hospital setting, where opportunistic screening may be undertaken, or in specialist memory services where individuals have been referred due to perceived cognitive problems. To ensure an instrument is suitable for diagnostic use in these settings, its test accuracy must be established. To determine the diagnostic accuracy of the informant-based questionnaire IQCODE for detection of all-cause (undifferentiated) dementia in adults presenting to secondary-care services. We searched the following sources on the 28th of January 2013: ALOIS (Cochrane Dementia and Cognitive Improvement Group), MEDLINE (Ovid SP), EMBASE (Ovid SP), PsycINFO (Ovid SP), BIOSIS Previews (Thomson Reuters Web of Science), Web of Science Core Collection (includes Conference Proceedings Citation Index) (Thomson Reuters Web of Science), CINAHL (EBSCOhost) and LILACS (BIREME). We also searched sources specific to diagnostic test accuracy: MEDION (Universities of Maastricht and Leuven); DARE (Database of Abstracts of Reviews of Effects - via the Cochrane Library); HTA Database (Health Technology Assessment Database via the Cochrane Library) and ARIF (Birmingham University).
We also checked reference lists of relevant studies and reviews, used searches of known relevant studies in PubMed to track related articles, and contacted research groups conducting work on IQCODE for dementia diagnosis to try to find additional studies. We developed a sensitive search strategy; search terms were designed to cover key concepts using several different approaches run in parallel and included terms relating to cognitive tests, cognitive screening and dementia. We used standardised database subject headings such as MeSH terms (in MEDLINE) and other standardised headings (controlled vocabulary) in other databases, as appropriate. We selected those studies performed in secondary-care settings, which included (not necessarily exclusively) IQCODE to assess for the presence of dementia and where dementia diagnosis was confirmed with clinical assessment. For the 'secondary care' setting we included all studies which assessed patients in hospital (e.g. acute unscheduled admissions, referrals to specialist geriatric assessment services etc.) and those referred for specialist 'memory' assessment, typically in psychogeriatric services. We screened all titles generated by electronic database searches, and reviewed abstracts of all potentially relevant studies. Two independent assessors checked full papers for eligibility and extracted data. We determined quality assessment (risk of bias and applicability) using the QUADAS-2 tool, and reporting quality using the STARD tool. From 72 papers describing IQCODE test accuracy, we included 13 papers, representing data from 2745 individuals (n = 1413 (51%) with dementia). 
Pooled analysis of all studies using data presented closest to a cut-off of 3.3 indicated that sensitivity was 0.91 (95% CI 0.86 to 0.94); specificity 0.66 (95% CI 0.56 to 0.75); the positive likelihood ratio was 2.7 (95% CI 2.0 to 3.6) and the negative likelihood ratio was 0.14 (95% CI 0.09 to 0.22). There was a statistically significant difference in test accuracy between the general hospital setting and the specialist memory setting (P = 0.019), suggesting that IQCODE performs better in a 'general' setting. We found no significant differences in the test accuracy of the short (16-item) versus the 26-item IQCODE, or in the language of administration. There was significant heterogeneity in the included studies, including a highly varied prevalence of dementia (10.5% to 87.4%). Across the included papers there was substantial potential for bias, particularly around sampling of included participants and selection criteria, which may limit generalisability. There was also evidence of suboptimal reporting, particularly around disease severity and handling indeterminate results, which are important if considering use in clinical practice. The IQCODE can be used to identify older adults in the general hospital setting who are at risk of dementia and require specialist assessment; it is useful specifically for ruling out those without evidence of cognitive decline. The language of administration did not affect test accuracy, which supports the cross-cultural use of the tool. These findings are qualified by the significant heterogeneity, the potential for bias and suboptimal reporting found in the included studies.
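The pooled likelihood ratios follow directly from the pooled sensitivity and specificity, and the arithmetic can be checked:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from test accuracy.

    LR+ = sensitivity / (1 - specificity)
    LR- = (1 - sensitivity) / specificity
    """
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# pooled IQCODE estimates at a cut-off of 3.3, as reported in the review
lr_pos, lr_neg = likelihood_ratios(0.91, 0.66)
print(round(lr_pos, 1), round(lr_neg, 2))  # 2.7 0.14
```

The small LR- is what underlies the conclusion that the IQCODE is most useful for ruling out those without evidence of cognitive decline.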
Bertrais, Sandrine; Boursier, Jérôme; Ducancelle, Alexandra; Oberti, Frédéric; Fouchard-Hubert, Isabelle; Moal, Valérie; Calès, Paul
2017-06-01
There is currently no recommended time interval between noninvasive fibrosis measurements for monitoring chronic liver diseases. We determined how long a single liver fibrosis evaluation may accurately predict mortality, and assessed whether combining tests improves prognostic performance. We included 1559 patients with chronic liver disease and available baseline liver stiffness measurement (LSM) by Fibroscan, aspartate aminotransferase to platelet ratio index (APRI), FIB-4, Hepascore, and FibroMeter V2G . Median follow-up was 2.8 years during which 262 (16.8%) patients died, with 115 liver-related deaths. All fibrosis tests were able to predict mortality, although APRI (and FIB-4 for liver-related mortality) showed lower overall discriminative ability than the other tests (differences in Harrell's C-index: P < 0.050). According to time-dependent AUROCs, the time period with optimal predictive performance was 2-3 years in patients with no/mild fibrosis, 1 year in patients with significant fibrosis, and <6 months in cirrhotic patients even in those with a model of end-stage liver disease (MELD) score <15. Patients were then randomly split in training/testing sets. In the training set, blood tests and LSM were independent predictors of all-cause mortality. The best-fit multivariate model included age, sex, LSM, and FibroMeter V2G with C-index = 0.834 (95% confidence interval, 0.803-0.862). The prognostic model for liver-related mortality included the same covariates with C-index = 0.868 (0.831-0.902). In the testing set, the multivariate models had higher prognostic accuracy than FibroMeter V2G or LSM alone for all-cause mortality and FibroMeter V2G alone for liver-related mortality. The prognostic durability of a single baseline fibrosis evaluation depends on the liver fibrosis level. Combining LSM with a blood fibrosis test improves mortality risk assessment. © 2016 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
Laser data transfer flight experiment definition
NASA Technical Reports Server (NTRS)
Merritt, J. R.
1975-01-01
A set of laser communication flight experiments to be performed between a relay satellite, ground terminals, and space shuttles were synthesized and evaluated. Results include a definition of the space terminals, NASA ground terminals, test methods, and test schedules required to perform the experiments.
Duong, Veasna; Tarantola, Arnaud; Ong, Sivuth; Mey, Channa; Choeung, Rithy; Ly, Sowath; Bourhy, Hervé; Dussart, Philippe; Buchy, Philippe
2016-05-01
The diagnosis of dog-mediated rabies in humans and animals has greatly benefited from technical advances in the laboratory setting. Approaches to diagnosis now include the detection of rabies virus (RABV), RABV RNA, or RABV antigens. These assays are important tools in the current efforts aimed at the global elimination of dog-mediated rabies. The assays available for use in laboratories are reviewed herein, as well as their strengths and weaknesses, which vary with the types of sample analyzed. Depending on the setting, however, the public health objectives and use of RABV diagnosis in the field will also vary. In non-endemic settings, the detection of all introduced or emergent animal or human cases justifies exhaustive testing. In dog RABV-endemic settings, such as rural areas of developing countries where most cases occur, the availability of or access to testing may be severely constrained. Thus, these issues are also discussed along with a proposed strategy to prioritize testing while access to rabies testing in the resource-poor, highly endemic setting is improved. As the epidemiological situation of rabies in a country evolves, the strategy should shift from that of an endemic setting to one more suitable for a decreased rabies incidence following the implementation of efficient control measures and when nearing the target of dog-mediated rabies elimination. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Bell, Stephen; Casabona, Jordi; Tsereteli, Nino; Raben, Dorthe; de Wit, John
2017-05-01
The objective of this study was to gather health professionals' perceptions about gaining informed consent and delivering HIV pre-test information. An online self-report survey was completed by 338 respondents involved in HIV testing in 55 countries in the WHO European Region. Nearly two thirds (61.5%) of respondents thought that HIV testing guidelines used in their country of work included recommendations about pre-test information; 83% thought they included recommendations regarding obtaining informed consent. One third (34%) of respondents thought that written informed consent was required; respondents from Eastern Europe and Central Asia were more likely to perceive this as required. Respondents from Western Europe thought pre-test information about the following aspects was significantly less likely to be addressed than respondents in other regions: the right to decline a test; services available after a positive test; laws/regulations impacting someone being tested and receiving a positive test result; potential risks for a client taking an HIV test; the possible need for partner notification after a positive test result. Results offer insight into perceived HIV pre-test practices in all but two national settings across the WHO European Region, and can be used in the development and evaluation of future HIV testing guidelines in the WHO European Region. Findings highlight that practices of obtaining written informed consent depart from current guidelines in some HIV testing settings. Furthermore, findings underscore that it is uncommon for pre-test information to address legal and social risks and harms that people testing HIV-positive may incur. This differs from the most recent global WHO guidelines emphasising the importance of such information, and raises important questions regarding the implications and appropriateness of the currently dominant focus of recommendations on streamlining the HIV testing process.
Chen, Hongda; Werner, Simone; Butt, Julia; Zörnig, Inka; Knebel, Phillip; Michel, Angelika; Eichmüller, Stefan B; Jäger, Dirk; Waterboer, Tim; Pawlita, Michael; Brenner, Hermann
2016-03-29
Novel blood-based screening tests are strongly desirable for early detection of colorectal cancer (CRC). We aimed to identify and evaluate autoantibodies against tumor-associated antigens as biomarkers for early detection of CRC. 380 clinically identified CRC patients and samples of participants with selected findings from a cohort of screening colonoscopy participants in 2005-2013 (N=6826) were included in this analysis. Sixty-four serum autoantibody markers were measured by multiplex bead-based serological assays. A two-step approach with selection of biomarkers in a training set, and validation of findings in a validation set, the latter exclusively including participants from the screening setting, was applied. Anti-MAGEA4 exhibited the highest sensitivity for detecting early stage CRC and advanced adenoma. Multi-marker combinations substantially increased sensitivity at the price of a moderate loss of specificity. Anti-TP53, anti-IMPDH2, anti-MDM2 and anti-MAGEA4 were consistently included in the best-performing 4-, 5-, and 6-marker combinations. This four-marker panel yielded a sensitivity of 26% (95% CI, 13-45%) for early stage CRC at a specificity of 90% (95% CI, 83-94%) in the validation set. Notably, it also detected 20% (95% CI, 13-29%) of advanced adenomas. Taken together, the identified biomarkers could contribute to the development of a useful multi-marker blood-based test for CRC early detection.
Guo, Suqin; He, Lishan; Tisch, Daniel J; Kazura, James; Mharakurwa, Sungano; Mahanta, Jagadish; Herrera, Sócrates; Wang, Baomin; Cui, Liwang
2016-01-01
Good-quality artemisinin drugs are essential for malaria treatment, but increasing prevalence of poor-quality artemisinin drugs in many endemic countries hinders effective management of malaria cases. To develop a point-of-care assay for rapid identification of counterfeit and substandard artemisinin drugs for resource-limited areas, we used specific monoclonal antibodies against artesunate and artemether, and developed prototypes of lateral flow dipstick assays. In this pilot test, we evaluated the feasibility of these dipsticks under different endemic settings and their performance in the hands of untrained personnel. The results showed that the dipstick tests can be successfully performed by different investigators with the included instruction sheet. None of the artemether and artesunate drugs collected from public pharmacies in different endemic countries failed the test. It is possible that the simple dipstick assays, with future optimization of test conditions and sensitivity, can be used as a qualitative and semi-quantitative assay for rapid screening of counterfeit artemisinin drugs in endemic settings.
Neural networks for computer-aided diagnosis: detection of lung nodules in chest radiograms.
Coppini, Giuseppe; Diciotti, Stefano; Falchini, Massimo; Villari, Natale; Valli, Guido
2003-12-01
The paper describes a neural-network-based system for the computer aided detection of lung nodules in chest radiograms. Our approach is based on multiscale processing and artificial neural networks (ANNs). The problem of nodule detection is faced by using a two-stage architecture including: 1) an attention focusing subsystem that processes whole radiographs to locate possible nodular regions ensuring high sensitivity; 2) a validation subsystem that processes regions of interest to evaluate the likelihood of the presence of a nodule, so as to reduce false alarms and increase detection specificity. Biologically inspired filters (both LoG and Gabor kernels) are used to enhance salient image features. ANNs of the feedforward type are employed, which allow an efficient use of a priori knowledge about the shape of nodules, and the background structure. The images from the public JSRT database, including 247 radiograms, were used to build and test the system. We performed a further test by using a second private database with 65 radiograms collected and annotated at the Radiology Department of the University of Florence. Both data sets include nodule and nonnodule radiographs. The use of a public data set along with independent testing with a different image set makes the comparison with other systems easier and allows a deeper understanding of system behavior. Experimental results are described by ROC/FROC analysis. For the JSRT database, we observed that by varying sensitivity from 60 to 75% the number of false alarms per image lies in the range 4-10, while accuracy is in the range 95.7-98.0%. When the second data set was used comparable results were obtained. The observed system performances support the undertaking of system validation in clinical settings.
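The abstract names Laplacian-of-Gaussian (LoG) kernels among the biologically inspired filters used to enhance nodular (blob-like) image features, but gives no kernel parameters. As a minimal illustrative sketch (the function name, kernel size, and sigma below are hypothetical choices, not taken from the paper):

```python
import math

def log_kernel(size, sigma):
    """Discrete Laplacian-of-Gaussian (LoG) kernel of odd side length `size`.

    LoG(x, y) = -1/(pi*sigma^4) * (1 - r2/(2*sigma^2)) * exp(-r2/(2*sigma^2)),
    where r2 = x^2 + y^2 is the squared distance from the kernel centre.
    With this sign convention the central lobe is negative, so convolution
    responds strongly to bright blob-like structures such as nodules.
    """
    c = size // 2
    kernel = []
    for y in range(size):
        row = []
        for x in range(size):
            r2 = (x - c) ** 2 + (y - c) ** 2
            g = math.exp(-r2 / (2 * sigma ** 2))
            row.append(-(1 - r2 / (2 * sigma ** 2)) * g / (math.pi * sigma ** 4))
        kernel.append(row)
    return kernel
```

Sweeping `sigma` over a range of values yields the multiscale filter bank that the attention-focusing stage described above would convolve with the radiograph.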
NASA Astrophysics Data System (ADS)
Lee, Stephen R.; Kardos, Keith W.; Yearwood, Graham D.; Guillon, Geraldine B.; Kurtz, Lisa A.; Mokkapati, Vijaya K.
2008-04-01
Rapid, point-of-care (POC) testing has been increasingly deployed as an aid in the diagnosis of infectious disease, due to its ability to deliver rapid, actionable results. In the case of HIV, a number of rapid test devices have been FDA-approved and CLIA-waived in order to enable diagnosis of HIV infection outside of traditional laboratory settings. These settings include STD clinics, community outreach centers and mobile testing units, as well as identifying HIV infection among pregnant women and managing occupational exposure to infection. The OraQuick® rapid test platform has been widely used to identify HIV in POC settings, due to its simplicity, ease of use and the ability to utilize oral fluid as an alternative specimen to blood. More recently, a rapid test for antibodies to hepatitis C virus (HCV) has been developed on the same test platform which uses serum, plasma, finger-stick blood, venous blood and oral fluid. Clinical testing using this POC test device has shown that performance is equivalent to state-of-the-art, laboratory-based tests. These devices may be suitable for rapid field testing of blood and other body fluids for the presence of infectious agents.
Sadowski, Brett W; Lane, Alison B; Wood, Shannon M; Robinson, Sara L; Kim, Chin Hee
2017-09-01
Inappropriate testing contributes to soaring healthcare costs within the United States, and teaching hospitals are vulnerable to providing care largely for academic development. Via its "Choosing Wisely" campaign, the American Board of Internal Medicine recommends avoiding repetitive testing for stable inpatients. We designed systems-based interventions to reduce laboratory orders for patients admitted to the wards at an academic facility. We identified the computer-based order entry system as an appropriate target for sustainable intervention. The admission order set had allowed multiple routine tests to be ordered repetitively each day. Our iterative study included interventions on the automated order set and cost displays at order entry. The primary outcome was number of routine tests controlled for inpatient days compared with the preceding year. Secondary outcomes included cost savings, delays in care, and adverse events. Data were collected over a 2-month period following interventions in sequential years and compared with the year prior. The first intervention led to 0.97 fewer laboratory tests per inpatient day (19.4%). The second intervention led to sustained reduction, although by less of a margin than order set modifications alone (15.3%). When extrapolating the results utilizing fees from the Centers for Medicare and Medicaid Services, there was a cost savings of $290,000 over 2 years. Qualitative survey data did not suggest an increase in care delays or near-miss events. This series of interventions targeting unnecessary testing demonstrated a sustained reduction in the number of routine tests ordered, without adverse effects on clinical care. Published by Elsevier Inc.
Ritchie, Andrew M; Lo, Nathan; Ho, Simon Y W
2017-05-01
In Bayesian phylogenetic analyses of genetic data, prior probability distributions need to be specified for the model parameters, including the tree. When Bayesian methods are used for molecular dating, available tree priors include those designed for species-level data, such as the pure-birth and birth-death priors, and coalescent-based priors designed for population-level data. However, molecular dating methods are frequently applied to data sets that include multiple individuals across multiple species. Such data sets violate the assumptions of both the speciation and coalescent-based tree priors, making it unclear which should be chosen and whether this choice can affect the estimation of node times. To investigate this problem, we used a simulation approach to produce data sets with different proportions of within- and between-species sampling under the multispecies coalescent model. These data sets were then analyzed under pure-birth, birth-death, constant-size coalescent, and skyline coalescent tree priors. We also explored the ability of Bayesian model testing to select the best-performing priors. We confirmed the applicability of our results to empirical data sets from cetaceans, phocids, and coregonid whitefish. Estimates of node times were generally robust to the choice of tree prior, but some combinations of tree priors and sampling schemes led to large differences in the age estimates. In particular, the pure-birth tree prior frequently led to inaccurate estimates for data sets containing a mixture of inter- and intraspecific sampling, whereas the birth-death and skyline coalescent priors produced stable results across all scenarios. Model testing provided an adequate means of rejecting inappropriate tree priors. Our results suggest that tree priors do not strongly affect Bayesian molecular dating results in most cases, even when severely misspecified. 
However, the choice of tree prior can be significant for the accuracy of dating results in the case of data sets with mixed inter- and intraspecies sampling. [Bayesian phylogenetic methods; model testing; molecular dating; node time; tree prior.]. © The authors 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For permissions, please e-mail: journals.permission@oup.com.
NASA Astrophysics Data System (ADS)
Borecki, M.; Prus, P.; Korwin-Pawlowski, M. L.; Rychlik, A.; Kozubel, W.
2017-08-01
Modern rims and wheels are tested at the design and production stages. Tests can be performed in laboratory conditions and on the road. In the laboratory, complex and costly equipment is used, for example wheel balancers and impact testers. Modern wheel balancers are equipped with electronic and electro-mechanical units that enable touch-less measurement of dimensions, including precision measurement of radial and lateral wheel run-out, automatic positioning and application of the counterweights, and vehicle wheel set monitoring - tread wear, drift angles and run-out unbalance. Those tests are performed by on-wheel axis measurements with laser distance meters. The impact tester enables dropping of weights from a defined height onto a wheel. Test criteria are the loss of pressure of the tire and generation of cracks in the wheel without direct impact of the falling weights. In the present paper, a set-up composed of three accelerometers, a temperature sensor and a pressure sensor is examined as the basis of a wheel tester. The sensor set-up configuration, on-line diagnostics and signal transmission are discussed.
Will the "Real" Proficiency Standard Please Stand Up?
ERIC Educational Resources Information Center
Baron, Joan Boykoff; And Others
Connecticut's experience with four different standard-setting methods regarding multiple choice proficiency tests is described. The methods include Angoff, Nedelsky, Borderline Group, and Contrasting Groups Methods. All Connecticut ninth graders were administered proficiency tests in reading, language arts, and mathematics. As soon as final test…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mardirossian, Narbe; Head-Gordon, Martin
2016-06-07
A combinatorially optimized, range-separated hybrid, meta-GGA density functional with VV10 nonlocal correlation is presented in this paper. The final 12-parameter functional form is selected from approximately 10 × 10^9 candidate fits that are trained on a training set of 870 data points and tested on a primary test set of 2964 data points. The resulting density functional, ωB97M-V, is further tested for transferability on a secondary test set of 1152 data points. For comparison, ωB97M-V is benchmarked against 11 leading density functionals including M06-2X, ωB97X-D, M08-HX, M11, ωM05-D, ωB97X-V, and MN15. Encouragingly, the overall performance of ωB97M-V on nearly 5000 data points clearly surpasses that of all of the tested density functionals. Finally, in order to facilitate the use of ωB97M-V, its basis set dependence and integration grid sensitivity are thoroughly assessed, and recommendations that take into account both efficiency and accuracy are provided.
Investigation of Flash Fill® as a thermal backfill material
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayers, P.H.; Charlton, C.B.; Frishette, C.W.
1995-09-01
Flash Fill® was created as a fast-setting, flowable backfill material made entirely from coal combustion by-products and water. Its quick-setting, self-leveling, self-compacting characteristics make trench road repairs faster, easier, and more economical. Other uses include building foundations, fill around pipes, gas lines, and manholes, and replacement of weak subgrade beneath rooters. Flash Fill can be hand-excavated without the use of power-assisted tools or machinery. To enhance thermal resistivity, the original Flash Fill mix was modified to include concrete sand. This resulted in a new Flash Fill, designated FSAND, with all of the aforementioned desirable characteristics of Flash Fill and a thermal resistivity of approximately 50 °C-cm/watt. Thermal resistivity tests using conventional laboratory thermal probes, high-current thermal tests, and moisture migration tests have been performed to determine the properties of FSAND. As a result of these tests, FSAND has been approved for use as power cable thermal backfill on all AEP System distribution projects.
Staged-Fault Testing of Distance Protection Relay Settings
NASA Astrophysics Data System (ADS)
Havelka, J.; Malarić, R.; Frlan, K.
2012-01-01
In order to analyze the operation of the protection system during induced fault testing in the Croatian power system, a simulation using the CAPE software has been performed. The CAPE software (Computer-Aided Protection Engineering) is expert software intended primarily for relay protection engineers, which calculates current and voltage values during faults in the power system, so that relay protection devices can be properly set up. Once the accuracy of the simulation model had been confirmed, a series of simulations were performed in order to obtain the optimal fault location to test the protection system. The simulation results were used to specify the test sequence definitions for the end-to-end relay testing using advanced testing equipment with GPS synchronization for secondary injection in protection schemes based on communication. The objective of the end-to-end testing was to perform field validation of the protection settings, including verification of the circuit breaker operation, telecommunication channel time and the effectiveness of the relay algorithms. Once the end-to-end secondary injection testing had been completed, the induced fault testing was performed with three-end lines loaded and in service. This paper describes and analyses the test procedure, consisting of CAPE simulations, end-to-end test with advanced secondary equipment and staged-fault test of a three-end power line in the Croatian transmission system.
Utilization of Ancillary Studies in the Cytologic Diagnosis of Respiratory Lesions
Layfield, Lester J.; Roy-Chowdhuri, Sinchita; Baloch, Zubair; Ehya, Hormoz; Geisinger, Kim; Hsiao, Susan J.; Lin, Oscar; Lindeman, Neal I.; Roh, Michael; Schmitt, Fernando; Sidiropoulos, Nikoletta; VanderLaan, Paul A.
2017-01-01
The Papanicolaou Society of Cytopathology has developed a set of guidelines for respiratory cytology including indications for sputum examination, bronchial washings and brushings, CT-guided FNA and endobronchial ultrasound guided fine needle aspiration (EBUS-FNA), as well as recommendations for classification and criteria, ancillary testing and post-cytologic diagnosis management and follow-up. All recommendation documents are based on the expertise of committee members, an extensive literature review, and feedback from presentations at national and international conferences. The guideline documents selectively present the results of these discussions. The present document summarizes recommendations for ancillary testing of cytologic samples. Ancillary testing including microbiologic, immunocytochemical, flow cytometric, and molecular testing, including next-generation sequencing are discussed. PMID:27561242
Publication Bias in Meta-Analyses of the Efficacy of Psychotherapeutic Interventions for Depression
ERIC Educational Resources Information Center
Niemeyer, Helen; Musch, Jochen; Pietrowsky, Reinhard
2013-01-01
Objective: The aim of this study was to assess whether systematic reviews investigating psychotherapeutic interventions for depression are affected by publication bias. Only homogeneous data sets were included, as heterogeneous data sets can distort statistical tests of publication bias. Method: We applied Begg and Mazumdar's adjusted rank…
Utah State Office of Education Fingertip Facts, 2015-16
ERIC Educational Resources Information Center
Utah State Office of Education, 2016
2016-01-01
Fingertip Facts is a compendium of some of the most frequently requested data sets from the Utah State Office of Education. This year's Fingertip Facts includes the following data sets: SAGE Testing, 2014-15; 2015 Public Education General Fund; 2014-15 Public School Enrollment Demographics; Public Schools by Grade Level, 2014-15; Number of…
R. Sam Williams
2009-01-01
A brief history of paint research at the Forest Products Laboratory (FPL) in Madison, Wisconsin, sets the stage for a discussion of testing paint on wood and wood products. Tests include laboratory and outdoor tests, and I discuss them in terms of several degradation mechanisms (loss of gloss and fading, mildew growth, extractives bleed, and cracking, flaking, and...
Characterizing noise in the global nuclear weapon monitoring system
NASA Astrophysics Data System (ADS)
Schultz, Colin
2013-03-01
Under the auspices of the Comprehensive Nuclear-Test-Ban Treaty Organization, a worldwide monitoring system designed to detect the illegal testing of nuclear weaponry has been under construction since 1999. The International Monitoring System is composed of a range of sensors, including detectors for hydroacoustic and seismic signals, and when completed, will include 60 infrasound measurement arrays set to detect low-frequency sound waves produced by an atmospheric nuclear detonation.
Evidence-based point-of-care tests and device designs for disaster preparedness.
Brock, T Keith; Mecozzi, Daniel M; Sumner, Stephanie; Kost, Gerald J
2010-01-01
To define pathogen tests and device specifications needed for emerging point-of-care (POC) technologies used in disasters. Surveys included multiple-choice and ranking questions. Multiple-choice questions were analyzed with the chi-squared test for goodness of fit and the binomial distribution test. Rankings were scored and compared using analysis of variance and Tukey's multiple comparison test. Disaster care experts on the editorial boards of the American Journal of Disaster Medicine and Disaster Medicine and Public Health Preparedness, and the readers of the POC Journal. Vibrio cholerae and Staphylococcus aureus were top-ranked pathogens for testing in disaster settings. Respondents felt that disaster response teams should be equipped with pandemic infectious disease tests for novel 2009 H1N1 and avian H5N1 influenza (disaster care, p < 0.05; POC, p < 0.01). In disaster settings, respondents preferred self-contained test cassettes (disaster care, p < 0.05; POC, p < 0.001) for direct blood sampling (POC, p < 0.01) and disposal of biological waste (disaster care, p < 0.05; POC, p < 0.001). Multiplex testing performed at the POC was preferred in urgent care and emergency room settings. Evidence-based needs assessment identifies pathogen detection priorities in disaster care scenarios, in which Vibrio cholerae, methicillin-sensitive and methicillin-resistant Staphylococcus aureus, and Escherichia coli ranked the highest. POC testing should incorporate setting-specific design criteria such as safe disposable cassettes and direct blood sampling at the site of care.
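The survey analysis above relies on the chi-squared goodness-of-fit test for the multiple-choice questions. As a minimal sketch of the statistic itself (the abstract does not report its observed counts; the numbers in the usage note are invented for illustration):

```python
def chi_square_gof(observed, expected):
    """Pearson's chi-squared goodness-of-fit statistic:
    the sum over answer categories of (O_i - E_i)^2 / E_i.
    Larger values indicate greater departure from the expected counts."""
    if len(observed) != len(expected):
        raise ValueError("observed and expected counts must align")
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

For example, 60 vs. 40 responses against a uniform expectation of 50/50 gives a statistic of 4.0, which would then be compared with the chi-squared critical value at the chosen significance level and degrees of freedom.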
Genetic susceptibility testing for neurodegenerative diseases: ethical and practice issues.
Roberts, J Scott; Uhlmann, Wendy R
2013-11-01
As the genetics of neurodegenerative disease become better understood, opportunities for genetic susceptibility testing for at-risk individuals will increase. Such testing raises important ethical and practice issues related to test access, informed consent, risk estimation and communication, return of results, and policies to prevent genetic discrimination. The advent of direct-to-consumer genetic susceptibility testing for various neurodegenerative disorders (including Alzheimer's disease (AD), Parkinson's disease, and certain prion diseases) means that ethical and practical challenges must be faced not only in traditional research and clinical settings, but also in broader society. This review addresses several topics relevant to the development and implementation of genetic susceptibility tests across research, clinical, and consumer settings; these include appropriate indications for testing, the implications of different methods for disclosing test results, clinical versus personal utility of risk information, psychological and behavioral responses to test results, testing of minors, genetic discrimination, and ethical dilemmas posed by whole-genome sequencing. We also identify future areas of likely growth in the field, including pharmacogenomics and genetic screening for individuals considering or engaged in activities that pose elevated risk of brain injury (e.g., football players, military personnel). APOE gene testing for risk of Alzheimer's disease is used throughout as an instructive case example, drawing upon the authors' experience as investigators in a series of multisite randomized clinical trials that have examined the impact of disclosing APOE genotype status to interested individuals (e.g., first-degree relatives of AD patients, persons with mild cognitive impairment). Copyright © 2013 Elsevier Ltd. All rights reserved.
Ranking metrics in gene set enrichment analysis: do they matter?
Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna
2017-05-12
There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which could affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using k-means clustering algorithm a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established i.e. absolute value of Moderated Welch Test statistic, Minimum Significant Difference, absolute value of Signal-To-Noise ratio and Baumgartner-Weiss-Schindler test statistic. In case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In case of sensitivity, the absolute value of Moderated Welch Test statistic and absolute value of Signal-To-Noise ratio gave stable results, while Baumgartner-Weiss-Schindler and Minimum Significant Difference showed better results for larger sample size. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has critical impact on results of pathway enrichment analysis. The absolute value of Moderated Welch Test has the best overall sensitivity and Minimum Significant Difference has the best overall specificity of gene set analysis. 
When the number of non-normally distributed genes is high, using Baumgartner-Weiss-Schindler test statistic gives better outcomes. Also, it finds more enriched pathways than other tested metrics, which may induce new biological discoveries.
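One of the four top-performing ranking metrics identified above is the absolute value of the signal-to-noise ratio. A minimal sketch of ranking genes by that metric follows; note that GSEA's production implementation additionally floors the standard deviations relative to the means, which this simplified form omits, and the function names are illustrative:

```python
from statistics import mean, stdev

def abs_signal_to_noise(group_a, group_b):
    """Absolute signal-to-noise ratio for one gene:
    |mean_a - mean_b| / (sd_a + sd_b), computed across samples."""
    return abs(mean(group_a) - mean(group_b)) / (stdev(group_a) + stdev(group_b))

def rank_genes(expr_a, expr_b):
    """Return gene names sorted by decreasing |S2N|.
    expr_a / expr_b map gene name -> list of per-sample expression values
    for the two experimental conditions."""
    scores = {g: abs_signal_to_noise(expr_a[g], expr_b[g]) for g in expr_a}
    return sorted(scores, key=scores.get, reverse=True)
```

The resulting ordered gene list is the input to the enrichment-score walk in Gene Set Enrichment Analysis, which is why the choice of metric at this step propagates to the final pathway results.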
Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean
2017-12-04
Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint-type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many pharmaceutical applications, from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole cell screens, individual proteins, physicochemical properties as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient and others. Based on ranked normalized scores for the metrics or data sets, Deep Neural Networks (DNN) ranked higher than SVM, which in turn was ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar-type plots indicates when models are inferior or perhaps overtrained.
These results also suggest the need to assess deep learning further using multiple metrics, much larger-scale comparisons, prospective testing, and assessment of different fingerprints and DNN architectures beyond those used here.
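Two of the classifier metrics named in this comparison, Matthews correlation coefficient and Cohen's kappa, can be computed directly from binary confusion counts. A self-contained sketch (helper names are ours, not from the study):

```python
import math

def _counts(y_true, y_pred):
    """Binary confusion counts (tp, tn, fp, fn) for labels in {0, 1}."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def mcc(y_true, y_pred):
    """Matthews correlation coefficient in [-1, 1];
    defined here as 0.0 when any confusion-matrix margin is empty."""
    tp, tn, fp, fn = _counts(y_true, y_pred)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    tp, tn, fp, fn = _counts(y_true, y_pred)
    n = tp + tn + fp + fn
    po = (tp + tn) / n
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    return (po - pe) / (1 - pe) if pe != 1 else 1.0
```

Both metrics use all four confusion counts, which is why studies like this one prefer them over raw accuracy on the imbalanced class distributions typical of screening data.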
Common IED exploitation target set ontology
NASA Astrophysics Data System (ADS)
Russomanno, David J.; Qualls, Joseph; Wowczuk, Zenovy; Franken, Paul; Robinson, William
2010-04-01
The Common IED Exploitation Target Set (CIEDETS) ontology provides a comprehensive semantic data model for capturing knowledge about sensors, platforms, missions, environments, and other aspects of systems under test. The ontology also includes representative IEDs; modeled as explosives, camouflage, concealment objects, and other background objects, which comprise an overall threat scene. The ontology is represented using the Web Ontology Language and the SPARQL Protocol and RDF Query Language, which ensures portability of the acquired knowledge base across applications. The resulting knowledge base is a component of the CIEDETS application, which is intended to support the end user sensor test and evaluation community. CIEDETS associates a system under test to a subset of cataloged threats based on the probability that the system will detect the threat. The associations between systems under test, threats, and the detection probabilities are established based on a hybrid reasoning strategy, which applies a combination of heuristics and simplified modeling techniques. Besides supporting the CIEDETS application, which is focused on efficient and consistent system testing, the ontology can be leveraged in a myriad of other applications, including serving as a knowledge source for mission planning tools.
Initial Investigation into the Psychoacoustic Properties of Small Unmanned Aerial System Noise
NASA Technical Reports Server (NTRS)
Christian, Andrew; Cabell, Randolph
2017-01-01
For the past several years, researchers at NASA Langley have been engaged in a series of projects to study the degree to which existing facilities and capabilities, originally created for work on full-scale aircraft, are extensible to smaller scales: those of the small unmanned aerial systems (sUAS, also UAVs and, colloquially, 'drones') that have been showing up in the nation's airspace of late. This paper follows an effort that has led to an initial human-subject psychoacoustic test regarding the annoyance generated by sUAS noise. This effort spans three phases: 1. The collection of the sounds through field recordings. 2. The formulation and execution of a psychoacoustic test using those recordings. 3. The initial analysis of the data from that test. The data suggest a lack of parity between the noise of the recorded sUAS and that of a set of road vehicles that were also recorded and included in the test, as measured by a set of contemporary noise metrics. Future work, including the possibility of further human-subject testing, is discussed in light of this suggestion.
Bench test evaluation of adaptive servoventilation devices for sleep apnea treatment.
Zhu, Kaixian; Kharboutly, Haissam; Ma, Jianting; Bouzit, Mourad; Escourrou, Pierre
2013-09-15
Adaptive servoventilation devices are marketed to overcome sleep disordered breathing with apneas and hypopneas of both central and obstructive mechanisms often experienced by patients with chronic heart failure. The clinical efficacy of these devices is still questioned. This study challenged the detection and treatment capabilities of the three commercially available adaptive servoventilation devices in response to sleep disordered breathing events reproduced on an innovative bench test. The bench test consisted of a computer-controlled piston and a Starling resistor. The three devices were subjected to a flow sequence composed of central and obstructive apneas and hypopneas including Cheyne-Stokes respiration derived from a patient. The responses of the devices were separately evaluated with the maximum and the clinical settings (titrated expiratory positive airway pressure), and the detected events were compared to the bench-scored values. The three devices responded similarly to central events, by increasing pressure support to raise airflow. All central apneas were eliminated, whereas hypopneas remained. The three devices responded differently to the obstructive events with the maximum settings. These obstructive events could be normalized with clinical settings. The residual events of all the devices were scored lower than bench test values with the maximum settings, but were in agreement with the clinical settings. However, their mechanisms were misclassified. The tested devices reacted as expected to the disordered breathing events, but not sufficiently to normalize the breathing flow. The device-scored results should be used with caution to judge efficacy, as their validity depends upon the initial settings.
NASA Astrophysics Data System (ADS)
Mardirossian, Narbe; Head-Gordon, Martin
2015-02-01
A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.
Evaluation of a Serum Lung Cancer Biomarker Panel.
Mazzone, Peter J; Wang, Xiao-Feng; Han, Xiaozhen; Choi, Humberto; Seeley, Meredith; Scherer, Richard; Doseeva, Victoria
2018-01-01
A panel of 3 serum proteins and 1 autoantibody has been developed to assist with the detection of lung cancer. We aimed to validate the accuracy of the biomarker panel in an independent test set and explore the impact of adding a fourth serum protein to the panel, as well as the impact of combining molecular and clinical variables. The training set of serum samples was purchased from commercially available biorepositories. The testing set was from a biorepository at the Cleveland Clinic. All lung cancer and control subjects were >50 years old and had smoked a minimum of 20 pack-years. A panel of biomarkers including CEA (carcinoembryonic antigen), CYFRA21-1 (cytokeratin-19 fragment 21-1), CA125 (carbohydrate antigen 125), HGF (hepatocyte growth factor), and NY-ESO-1 (New York esophageal cancer-1 antibody) was measured using immunoassay techniques. The multiple of the median method, multivariate logistic regression, and random forest modeling was used to analyze the results. The training set consisted of 604 patient samples (268 with lung cancer and 336 controls) and the testing set of 400 patient samples (155 with lung cancer and 245 controls). With a threshold established from the training set, the sensitivity and specificity of both the 4- and 5-biomarker panels on the testing set was 49% and 96%, respectively. Models built on the testing set using only clinical variables had an area under the receiver operating characteristic curve of 0.68, using the biomarker panel 0.81 and by combining clinical and biomarker variables 0.86. This study validates the accuracy of a panel of proteins and an autoantibody in a population relevant to lung cancer detection and suggests a benefit to combining clinical features with the biomarker results.
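The validation procedure this record describes (fix a decision threshold on the training set, then report sensitivity, specificity, and area under the ROC curve on the independent testing set) can be sketched as follows. The data below are synthetic stand-ins generated with scikit-learn, not the study's biomarker measurements, and the 96%-specificity target is borrowed from the reported results purely for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a multi-marker panel: 5 "biomarkers", binary outcome.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0,
                                                    stratify=y)

model = LogisticRegression().fit(X_train, y_train)

# Fix the decision threshold on the TRAINING set so that ~96% of training
# controls fall below it, then evaluate on the held-out testing set.
train_scores = model.predict_proba(X_train)[:, 1]
threshold = np.percentile(train_scores[y_train == 0], 96)

test_scores = model.predict_proba(X_test)[:, 1]
pred = test_scores >= threshold
sensitivity = pred[y_test == 1].mean()
specificity = (~pred)[y_test == 0].mean()
auc = roc_auc_score(y_test, test_scores)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} auc={auc:.2f}")
```

Choosing the threshold only on training data, as here, is what keeps the testing-set sensitivity/specificity estimates unbiased.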
NASA Technical Reports Server (NTRS)
Hughes, Mark S.; Hebert, Phillip W.; Davis, Dawn M.; Jensen, Scott L.; Abell, Frederick K., Jr.
2004-01-01
The John C. Stennis Space Center (SSC) provides test operations services to a variety of customers, including NASA, DoD, and commercial enterprises for the development of current and next-generation rocket propulsion systems. Many of these testing services are provided in the E-Complex test facilities composed of three active test stands (E1, E2, & E3) and 7 total test positions. Each test position is outfitted with unique sets of data acquisition and controls hardware and software that record both facility and test article data and enable safe operation of the test facility. This paper addresses each system in more detail including efforts to upgrade hardware and software.
Feasibility of an appliance energy testing and labeling program for Sri Lanka
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biermayer, Peter; Busch, John; Hakim, Sajid
2000-04-01
A feasibility study evaluated the costs and benefits of establishing a program for testing, labeling and setting minimum efficiency standards for appliances and lighting in Sri Lanka. The feasibility study included: refrigerators, air-conditioners, fluorescent lighting (ballasts & CFLs), ceiling fans, motors, and televisions.
Cold Regions Test of Tracked and Wheeled Vehicles
2015-12-11
with CTIS in the Highway setting and the Mud, Sand and Snow setting. (7) Conduct the trials a minimum of three times at each speed as stated in ... lock brake system. Record the stopping distance data and record any slew from the centerline. Document if the vehicle experiences engine stall ... while operating in snow. The TOP includes guidance for snow as well as mud, sand, swamps, and wet clay. Most conventional wheeled vehicles cannot
Urine specimen validity test for drug abuse testing in workplace and court settings.
Lin, Shin-Yu; Lee, Hei-Hwa; Lee, Jong-Feng; Chen, Bai-Hsiun
2018-01-01
In recent decades, urine drug testing in the workplace has become common in many countries in the world. There have been several studies concerning the use of the urine specimen validity test (SVT) for drug abuse testing administered in the workplace. However, very little data exists concerning the urine SVT on drug abuse tests from court specimens, including dilute, substituted, adulterated, and invalid tests. We investigated 21,696 submitted urine drug test samples for SVT from workplace and court settings in southern Taiwan over 5 years. All immunoassay screen-positive urine specimen drug tests were confirmed by gas chromatography/mass spectrometry. We found that the mean 5-year prevalence of tampering (dilute, substituted, or invalid tests) in urine specimens from the workplace and court settings were 1.09% and 3.81%, respectively. The mean 5-year percentage of dilute, substituted, and invalid urine specimens from the workplace were 89.2%, 6.8%, and 4.1%, respectively. The mean 5-year percentage of dilute, substituted, and invalid urine specimens from the court were 94.8%, 1.4%, and 3.8%, respectively. No adulterated cases were found among the workplace or court samples. The most common drug identified from the workplace specimens was amphetamine, followed by opiates. The most common drug identified from the court specimens was ketamine, followed by amphetamine. We suggest that all urine specimens taken for drug testing from both the workplace and court settings need to be tested for validity. Copyright © 2017. Published by Elsevier B.V.
Giel, Katrin E; Wittorf, Andreas; Wolkenstein, Larissa; Klingberg, Stefan; Drimmer, Eyal; Schönenberg, Michael; Rapp, Alexander M; Fallgatter, Andreas J; Hautzinger, Martin; Zipfel, Stephan
2012-12-30
Impaired set-shifting has been reported in patients with anorexia nervosa (AN) and in patients with affective disorders, including major depression. Due to the prevalent comorbidity of major depression in AN, this study aimed to examine the role of depression in set-shifting ability. Fifteen patients with AN without a current comorbid depression, 20 patients with unipolar depression (UD) and 35 healthy control participants were assessed using the Trail Making Test (TMT), the Wisconsin Card Sorting Test (WCST) and a Parametric Go/No-Go Test (PGNG). Set-shifting ability was intact in patients with AN without a comorbid depression. However, patients with UD performed significantly poorer in all three tasks compared to AN patients and in the TMT compared to healthy control participants. In both patient groups, set-shifting ability was moderately negatively correlated with severity of depressive symptoms, but was unrelated to BMI and severity of eating disorder symptoms in AN patients. Our results suggest a pivotal role of comorbidity for neuropsychological functioning in AN. Impairments of set-shifting ability in AN patients may have been overrated and may partly be due to comorbid depressive disorders in investigated patients. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
GOES Type III Loop Heat Pipe Life Test Results
NASA Technical Reports Server (NTRS)
Ottenstein, Laura
2011-01-01
The GOES Type III Loop Heat Pipe (LHP) was built as a life test unit for the loop heat pipes on the GOES N-Q series satellites. This propylene LHP was built by Dynatherm Corporation in 2000 and tested continuously for approximately 14 months. It was then put into storage for 3 years. Following the storage period, the LHP was tested at Swales Aerospace to verify that the loop performance hadn't changed. Most test results were consistent with earlier results. At the conclusion of testing at Swales, the LHP was transferred to NASA/GSFC for continued periodic testing. The LHP has been set up for testing in the Thermal Lab at GSFC since 2006. A group of tests consisting of start-ups, power cycles, and a heat transport limit test have been performed every six to nine months since March 2006. Test results have shown no change in the loop performance over the five years of testing. This presentation will discuss the test hardware, test set-up, and tests performed. Test results to be presented include sample plots from individual tests, along with conductance measurements for all tests performed.
In pursuit of change: youth response to intensive goal setting embedded in a serious video game.
Thompson, Debbe; Baranowski, Tom; Buday, Richard; Baranowski, Janice; Juliano, Melissa; Frazior, McKee; Wilsdon, Jon; Jago, Russell
2007-11-01
Type 2 diabetes has increased in prevalence among youth, paralleling the increase in pediatric obesity. Helping youth achieve energy balance by changing diet and physical activity behaviors should decrease the risk for type 2 diabetes and obesity. Goal setting and goal review are critical components of behavior change. Theory-informed video games that emphasize development and refinement of goal setting and goal review skills provide a method for achieving energy balance in an informative, entertaining format. This article reports alpha-testing results for the goal setting and review components of early versions of two theory-informed diabetes and obesity prevention video games for preadolescents. Two episodes each of two video games were alpha tested with 9- to 11-year-old youth from multiple ethnic groups. Alpha testing included observed game play followed by a scripted interview. The staff was trained in observation and interview techniques prior to data collection. Although some difficulties were encountered, alpha testers generally understood goal setting and review components and comprehended they were setting personal goals. Although goal setting and review involved multiple steps, youth were generally able to complete them quickly, with minimal difficulty. Few technical issues arose; however, several usability and comprehension problems were identified. Theory-informed video games may be an effective medium for promoting youth diabetes and obesity prevention. Alpha testing helps identify problems likely to have a negative effect on functionality, usability, and comprehension during development, thereby providing an opportunity to correct these issues prior to final production.
ERIC Educational Resources Information Center
Mowsesian, Richard; Hays, William L.
The Graduate Record Examination (GRE) Aptitude Test has been in use since 1938. In 1975 the GRE Aptitude Test was broadened to include an experimental set of items designed to tap a respondent's recognition of logical relationships and consistency of interrelated statements, and to make inferences from abstract relationships. To test the…
NASA safety standard for lifting devices and equipment
NASA Astrophysics Data System (ADS)
1990-09-01
NASA's minimum safety requirements are established for the design, testing, inspection, maintenance, certification, and use of overhead and gantry cranes (including top running monorail, underhung, and jib cranes), mobile cranes, derrick hoists, and special hoist supported personnel lifting devices (these do not include elevators, ground supported personnel lifts, or powered platforms). Minimum requirements are also addressed for the testing, inspection, and use of Hydra-sets, hooks, and slings. Safety standards are thoroughly detailed.
Student Performance Evaluation. Physical Educators for Equity. Module 7.
ERIC Educational Resources Information Center
Uhlir, Ann
Guidelines are presented to aid secondary school physical education teachers in evaluating student performance in a way that avoids sex-role stereotyping and sex discrimination. Suggestions made for conducting testing in a bias-free setting include: (1) avoid sex-differentiated role tasks; (2) organize motor-performance testing procedures so that…
9 CFR 130.50 - Payment of user fees.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., such as tests on samples submitted to NVSL or FADDL, diagnostic reagents, slide sets, tissue sets, and..., MD 20738-1231. (2) All types of checks, including traveler's checks, drawn on a U.S. bank in U.S.... bank in U.S. dollars and made payable to the U.S. Department of Agriculture or USDA; or (4) Credit...
9 CFR 130.50 - Payment of user fees.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., such as tests on samples submitted to NVSL or FADDL, diagnostic reagents, slide sets, tissue sets, and..., MD 20738-1231. (2) All types of checks, including traveler's checks, drawn on a U.S. bank in U.S.... bank in U.S. dollars and made payable to the U.S. Department of Agriculture or USDA; or (4) Credit...
9 CFR 130.50 - Payment of user fees.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., such as tests on samples submitted to NVSL or FADDL, diagnostic reagents, slide sets, tissue sets, and..., MD 20738-1231. (2) All types of checks, including traveler's checks, drawn on a U.S. bank in U.S.... bank in U.S. dollars and made payable to the U.S. Department of Agriculture or USDA; or (4) Credit...
9 CFR 130.50 - Payment of user fees.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., such as tests on samples submitted to NVSL or FADDL, diagnostic reagents, slide sets, tissue sets, and..., MD 20738-1231. (2) All types of checks, including traveler's checks, drawn on a U.S. bank in U.S.... bank in U.S. dollars and made payable to the U.S. Department of Agriculture or USDA; or (4) Credit...
9 CFR 130.50 - Payment of user fees.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., such as tests on samples submitted to NVSL or FADDL, diagnostic reagents, slide sets, tissue sets, and..., MD 20738-1231. (2) All types of checks, including traveler's checks, drawn on a U.S. bank in U.S.... bank in U.S. dollars and made payable to the U.S. Department of Agriculture or USDA; or (4) Credit...
ERIC Educational Resources Information Center
Sawchuk, Craig N.; Russo, Joan E.; Charles, Steve; Goldberg, Jack; Forquera, Ralph; Roy-Byrne, Peter; Buchwald, Dedra
2011-01-01
We examined if step-count goal setting resulted in increases in physical activity and walking compared to only monitoring step counts with pedometers among American Indian/Alaska Native elders. Outcomes included step counts, self-reported physical activity and well-being, and performance on the 6-minute walk test. Although no significant…
Utah State Office of Education Fingertip Facts, 2014-15
ERIC Educational Resources Information Center
Utah State Office of Education, 2015
2015-01-01
Fingertip Facts is a compendium of some of the most frequently requested data sets from the Utah State Office of Education. Data sets in this year's Fingertip Facts include: SAGE Testing, 2014; 2013 Public Education General Fund; 2014-15 Public School Enrollment Demographics; Public Schools by Grade Level, 2013-14; Number of Licensed Educators;…
A Mold by Any Other Name: One Librarian's Battle Against a Mold Bloom.
ERIC Educational Resources Information Center
Smith, Laura Katz
1997-01-01
Describes how library staff at Virginia Polytechnic Institute and State University cleaned up materials after a mold bloom in the rare book room. Includes advice for controlling mold: set up a hygrothermograph, clean dust from books, set up fans, do a "skin" test at regular intervals, keep windows closed, have dehumidifiers available.…
Bhagyashree, Sheshadri Iyengar Raghavan; Nagaraj, Kiran; Prince, Martin; Fall, Caroline H D; Krishna, Murali
2018-01-01
There are limited data on the use of artificial intelligence methods for the diagnosis of dementia in epidemiological studies in low- and middle-income country (LMIC) settings. A culture and education fair battery of cognitive tests was developed and validated for population based studies in low- and middle-income countries including India by the 10/66 Dementia Research Group. We explored the machine learning methods based on the 10/66 battery of cognitive tests for the diagnosis of dementia based in a birth cohort study in South India. The data sets for 466 men and women for this study were obtained from the on-going Mysore Studies of Natal effect of Health and Ageing (MYNAH), in south India. The data sets included: demographics, performance on the 10/66 cognitive function tests, the 10/66 diagnosis of mental disorders and population based normative data for the 10/66 battery of cognitive function tests. Diagnosis of dementia from the rule based approach was compared against the 10/66 diagnosis of dementia. We have applied machine learning techniques to identify minimal number of the 10/66 cognitive function tests required for diagnosing dementia and derived an algorithm to improve the accuracy of dementia diagnosis. Of 466 subjects, 27 had 10/66 diagnosis of dementia, 19 of whom were correctly identified as having dementia by Jrip classification with 100% accuracy. This pilot exploratory study indicates that machine learning methods can help identify community dwelling older adults with 10/66 criterion diagnosis of dementia with good accuracy in a LMIC setting such as India. This should reduce the duration of the diagnostic assessment and make the process easier and quicker for clinicians, patients and will be useful for 'case' ascertainment in population based epidemiological studies.
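The classification step described above can be illustrated in outline. JRip is Weka's implementation of the RIPPER rule learner; as a rough Python stand-in, the sketch below trains a shallow decision tree (which likewise learns threshold rules) on entirely synthetic "cognitive score" data labeled by a hypothetical two-threshold rule, and cross-validates it as the study does. No element of the 10/66 battery or the MYNAH data is reproduced here:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 466  # matched to the cohort size for flavor only

# Entirely synthetic stand-ins for four cognitive subtest scores.
scores = rng.normal(50.0, 10.0, size=(n, 4))

# Hypothetical labeling rule: low scores on two key subtests -> "dementia".
labels = ((scores[:, 0] < 38.0) & (scores[:, 1] < 42.0)).astype(int)

# A shallow decision tree stands in for the JRip rule learner.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
acc = cross_val_score(clf, scores, labels, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

Because the label here is a simple conjunction of thresholds, a depth-limited tree recovers it almost exactly; real cognitive data would of course be far noisier.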
NIKE: a new clinical tool for establishing levels of indications for cataract surgery.
Lundström, Mats; Albrecht, Susanne; Håkansson, Ingemar; Lorefors, Ragnhild; Ohlsson, Sven; Polland, Werner; Schmid, Andrea; Svensson, Göran; Wendel, Eva
2006-08-01
The purpose of this study was to construct a new clinical tool for establishing levels of indications for cataract surgery, and to validate this tool. Teams from nine eye clinics reached an agreement about the need to develop a clinical tool for setting levels of indications for cataract surgery and about the items that should be included in the tool. The tool was to be called 'NIKE' (Nationell Indikationsmodell för Kataraktextraktion). The Canadian Cataract Priority Criteria Tool served as a model for the NIKE tool, which was modified for Swedish conditions. Items included in the tool were visual acuity of both eyes, patients' perceived difficulties in day-to-day life, cataract symptoms, the ability to live independently, and medical/ophthalmic reasons for surgery. The tool was validated and tested in 343 cataract surgery patients. Validity, stability and reliability were tested and the outcome of surgery was studied in relation to the indication setting. Four indication groups (IGs) were suggested. The group with the greatest indications for surgery was named group 1 and that with the lowest, group 4. Validity was proved to be good. Surgery had the greatest impact on the group with the highest indications for surgery. Test-retest reliability test and interexaminer tests of indication settings showed statistically significant intraclass correlations (intraclass correlation coefficients [ICCs] 0.526 and 0.923, respectively). A new clinical tool for indication setting in cataract surgery is presented. This tool, the NIKE, takes into account both visual acuity and the patient's perceived problems in day-to-day life because of cataract. The tool seems to be stable and reliable and neutral towards different examiners.
Comas, Carmina; Echevarria, Mónica; Rodríguez, M Angeles; Prats, Pilar; Rodríguez, Ignacio; Serra, Bernat
2015-07-01
To evaluate non-invasive prenatal testing (NIPT) of cell-free DNA (cfDNA) as a screening method for major chromosomal anomalies (CA) in a clinical setting. From January to December 2013, Panorama™ test or Harmony™ prenatal test were offered as advanced NIPT, in addition to first-trimester combined screening in singleton pregnancies. The cohort included 333 pregnant women with a mean maternal age (MA) of 37 years who underwent testing at a mean gestational age of 14.6 weeks. Eighty-four percent were low-risk pregnancies. Results were provided in 97.3% of patients at a mean reporting time of 12.9 calendar days. Repeat sampling was performed in six cases and results were obtained in five of them. No results were provided in four cases. Four cases of Down syndrome were detected and there was one discordant result of Turner syndrome. We found no statistical differences between commercial tests except in reporting time, fetal fraction and MA. The cfDNA fraction was statistically associated with test type, maternal weight, BMI and log βhCG levels. NIPT has the potential to be a highly effective screening method for major CA in a clinical setting.
Chen, Hongda; Werner, Simone; Butt, Julia; Zörnig, Inka; Knebel, Phillip; Michel, Angelika; Eichmüller, Stefan B.; Jäger, Dirk; Waterboer, Tim; Pawlita, Michael; Brenner, Hermann
2016-01-01
Novel blood-based screening tests are strongly desirable for early detection of colorectal cancer (CRC). We aimed to identify and evaluate autoantibodies against tumor-associated antigens as biomarkers for early detection of CRC. 380 clinically identified CRC patients and samples of participants with selected findings from a cohort of screening colonoscopy participants in 2005–2013 (N=6826) were included in this analysis. Sixty-four serum autoantibody markers were measured by multiplex bead-based serological assays. A two-step approach with selection of biomarkers in a training set, and validation of findings in a validation set, the latter exclusively including participants from the screening setting, was applied. Anti-MAGEA4 exhibited the highest sensitivity for detecting early stage CRC and advanced adenoma. Multi-marker combinations substantially increased sensitivity at the price of a moderate loss of specificity. Anti-TP53, anti-IMPDH2, anti-MDM2 and anti-MAGEA4 were consistently included in the best-performing 4-, 5-, and 6-marker combinations. This four-marker panel yielded a sensitivity of 26% (95% CI, 13–45%) for early stage CRC at a specificity of 90% (95% CI, 83–94%) in the validation set. Notably, it also detected 20% (95% CI, 13–29%) of advanced adenomas. Taken together, the identified biomarkers could contribute to the development of a useful multi-marker blood-based test for CRC early detection. PMID:26909861
Biostatistics Series Module 3: Comparing Groups: Numerical Variables.
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Numerical data that are normally distributed can be analyzed with parametric tests, that is, tests which are based on the parameters that define a normal distribution curve. If the distribution is uncertain, the data can be plotted as a normal probability plot and visually inspected, or tested for normality using one of a number of goodness of fit tests, such as the Kolmogorov-Smirnov test. The widely used Student's t-test has three variants. The one-sample t-test is used to assess if a sample mean (as an estimate of the population mean) differs significantly from a given population mean. The means of two independent samples may be compared for a statistically significant difference by the unpaired or independent samples t-test. If the data sets are related in some way, their means may be compared by the paired or dependent samples t-test. The t-test should not be used to compare the means of more than two groups. Although it is possible to compare groups in pairs, when there are more than two groups, this will increase the probability of a Type I error. The one-way analysis of variance (ANOVA) is employed to compare the means of three or more independent data sets that are normally distributed. Multiple measurements from the same set of subjects cannot be treated as separate, unrelated data sets. Comparison of means in such a situation requires repeated measures ANOVA. It is to be noted that while a multiple group comparison test such as ANOVA can point to a significant difference, it does not identify exactly between which two groups the difference lies. To do this, multiple group comparison needs to be followed up by an appropriate post hoc test. An example is the Tukey's honestly significant difference test following ANOVA. If the assumptions for parametric tests are not met, there are nonparametric alternatives for comparing data sets. These include Mann-Whitney U-test as the nonparametric counterpart of the unpaired Student's t-test, Wilcoxon signed-rank test as the counterpart of the paired Student's t-test, Kruskal-Wallis test as the nonparametric equivalent of ANOVA and the Friedman's test as the counterpart of repeated measures ANOVA.
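The tests this module surveys map directly onto SciPy functions. A minimal sketch, on simulated normal data with three groups (the group means and sizes are arbitrary choices for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(10.0, 2.0, 30)  # group A
b = rng.normal(11.5, 2.0, 30)  # group B
c = rng.normal(13.0, 2.0, 30)  # group C

# Normality check (Kolmogorov-Smirnov against a standard normal)
_, p_norm = stats.kstest((a - a.mean()) / a.std(ddof=1), "norm")

# Unpaired (independent samples) t-test: exactly two groups
_, p_t = stats.ttest_ind(a, b)

# One-way ANOVA: three or more independent groups
_, p_anova = stats.f_oneway(a, b, c)

# Nonparametric counterparts
_, p_u = stats.mannwhitneyu(a, b)   # counterpart of the unpaired t-test
_, p_kw = stats.kruskal(a, b, c)    # counterpart of one-way ANOVA

print(f"t-test p={p_t:.4f}, ANOVA p={p_anova:.4g}, "
      f"Mann-Whitney p={p_u:.4f}, Kruskal-Wallis p={p_kw:.4g}")
```

Note that, as the text warns, comparing the three groups pairwise with repeated t-tests would inflate the Type I error rate; the single ANOVA call (followed by a post hoc test such as Tukey's HSD) is the appropriate route.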
A standard bacterial isolate set for research on contemporary dairy spoilage.
Trmčić, A; Martin, N H; Boor, K J; Wiedmann, M
2015-08-01
Food spoilage is an ongoing issue that could be dealt with more efficiently if some standardization and unification was introduced in this field of research. For example, research and development efforts to understand and reduce food spoilage can greatly be enhanced through availability and use of standardized isolate sets. To address this critical issue, we have assembled a standard isolate set of dairy spoilers and other selected nonpathogenic organisms frequently associated with dairy products. This publicly available bacterial set consists of (1) 35 gram-positive isolates including 9 Bacillus and 15 Paenibacillus isolates and (2) 16 gram-negative isolates including 4 Pseudomonas and 8 coliform isolates. The set includes isolates obtained from samples of pasteurized milk (n=43), pasteurized chocolate milk (n=1), raw milk (n=1), cheese (n=2), as well as isolates obtained from samples obtained from dairy-powder production (n=4). Analysis of growth characteristics in skim milk broth identified 16 gram-positive and 13 gram-negative isolates as psychrotolerant. Additional phenotypic characterization of isolates included testing for activity of β-galactosidase and lipolytic and proteolytic enzymes. All groups of isolates included in the isolate set exhibited diversity in growth and enzyme activity. Source data for all isolates in this isolate set are publicly available in the FoodMicrobeTracker database (http://www.foodmicrobetracker.com), which allows for continuous updating of information and advancement of knowledge on dairy-spoilage representatives included in this isolate set. This isolate set along with publicly available isolate data provide a unique resource that will help advance knowledge of dairy-spoilage organisms as well as aid industry in development and validation of new control strategies. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Prediction of biodegradability from chemical structure: Modeling or ready biodegradation test data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loonen, H.; Lindgren, F.; Hansen, B.
1999-08-01
Biodegradation data were collected and evaluated for 894 substances with widely varying chemical structures. All data were determined according to the Japanese Ministry of International Trade and Industry (MITI) I test protocol. The MITI I test is a screening test for ready biodegradability and has been described by Organization for Economic Cooperation and Development (OECD) test guideline 301 C and European Union (EU) test guideline C4F. The chemicals were characterized by a set of 127 predefined structural fragments. This data set was used to develop a model for the prediction of the biodegradability of chemicals under standardized OECD and EU ready biodegradation test conditions. Partial least squares (PLS) discriminant analysis was used for the model development. The model was evaluated by means of internal cross-validation and repeated external validation. The importance of various structural fragments and fragment interactions was investigated. The most important fragments include the presence of a long alkyl chain; hydroxy, ester, and acid groups (enhancing biodegradation); and the presence of one or more aromatic rings and halogen substituents (retarding biodegradation). More than 85% of the model predictions were correct when using the complete data set. The not readily biodegradable predictions were slightly better than the readily biodegradable predictions (86 vs 84%). The average percentage of correct predictions from four external validation studies was 83%. Model optimization by including fragment interactions improved the model's predictive capability to 89%. It can be concluded that the PLS model provides predictions of high reliability for a diverse range of chemical structures. The predictions conform to the concept of readily biodegradable (or not readily biodegradable) as defined by OECD and EU test guidelines.
Kaewsuksai, Peeranan; Jitsurong, Siroj
2017-11-01
To evaluate the feasibility and effectiveness of a quadruple test for Down syndrome in the second trimester of pregnancy in clinical settings in Thailand. From October 2015 to September 2016, a prospective study was undertaken in 19 hospitals in Songkhla province, Thailand. Women with a singleton pregnancy of 14-18 weeks were enrolled and underwent the quadruple test. The risk cutoff value was set at 1:250. All women with a positive test (risk ≥1:250) were offered amniocentesis. Women were followed up until delivery. Among 2375 women, 206 (8.7%) had a positive quadruple test; 98 (47.6%) of these women voluntarily underwent amniocentesis. Overall, seven pregnancies were complicated with chromosomal abnormalities (2.9 cases in 1000), including four cases of Down syndrome (1.7 in 1000) and three of other abnormalities. The detection, false-positive, and accuracy rates of the quadruple test for Down syndrome were 75.0%, 8.6%, and 91.4%, respectively. The quadruple test was found to be a feasible and efficient method for screening for Down syndrome in the second trimester of pregnancy in a Thai clinical setting. The test should be performed for pregnant women before an invasive test for Down syndrome. © 2017 International Federation of Gynecology and Obstetrics.
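The detection, false-positive, and accuracy rates quoted in the abstract above follow from a standard 2x2 screening table; a minimal sketch, with counts back-calculated from the reported rates (and therefore approximate, not taken from the paper):

```python
# Illustrative sketch: screening-test metrics from a 2x2 table.
# Counts below are back-calculated from the reported rates (4 Down cases,
# 3 detected, 206 screen positives, 2375 total) and are not the study's table.

def screening_metrics(tp, fp, fn, tn):
    """Return detection rate (sensitivity), false-positive rate, and accuracy."""
    detection = tp / (tp + fn)          # affected pregnancies flagged positive
    fpr = fp / (fp + tn)                # unaffected pregnancies flagged positive
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return detection, fpr, accuracy

det, fpr, acc = screening_metrics(tp=3, fp=203, fn=1, tn=2168)
print(round(det, 2), round(fpr, 3), round(acc, 3))
```

With these counts the sketch reproduces the reported 75.0% detection, 8.6% false-positive, and 91.4% accuracy figures.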
Sub-Scale Testing and Development of the J-2X Fuel Turbopump Inducer
NASA Technical Reports Server (NTRS)
Sargent, Scott R.; Becht, David G.
2011-01-01
In the early stages of the J-2X upper stage engine program, various inducer configurations proposed for use in the fuel turbopump (FTP) were tested in water. The primary objectives of this test effort were twofold: first, to obtain a more comprehensive data set than that which existed in the Pratt & Whitney Rocketdyne (PWR) historical archives from the original J-2S program, and second, to supplement that data set with information regarding the cavitation-induced vibrations for both the historical J-2S configuration and those tested for the J-2X program. The J-2X FTP inducer, which actually consists of an inducer stage mechanically attached to a kicker stage, underwent 4 primary iterations utilizing sub-scale test articles manufactured and tested in PWR's Engineering Development Laboratory (EDL). The kicker remained unchanged throughout the test series. The four inducer configurations tested retained many of the basic design features of the J-2S inducer, but also included variations on leading edge blade thickness and blade angle distribution, primarily aimed at improving suction performance at higher flow coefficients. From these data sets, the effects of the tested design variables on hydrodynamic performance and cavitation instabilities were discerned. A limited comparison of the impact on inducer efficiency was determined as well.
Theoretical Tools and Software for Modeling, Simulation and Control Design of Rocket Test Facilities
NASA Technical Reports Server (NTRS)
Richter, Hanz
2004-01-01
A rocket test stand and associated subsystems are complex devices whose operation requires that certain preparatory calculations be carried out before a test. In addition, real-time control calculations must be performed during the test, and further calculations are carried out after a test is completed. The latter may be required in order to evaluate if a particular test conformed to specifications. These calculations are used to set valve positions, pressure setpoints, control gains and other operating parameters so that a desired system behavior is obtained and the test can be successfully carried out. Currently, calculations are made in an ad-hoc fashion and involve trial-and-error procedures that may involve activating the system with the sole purpose of finding the correct parameter settings. The goals of this project are to develop mathematical models, control methodologies and associated simulation environments to provide a systematic and comprehensive prediction and real-time control capability. The models and controller designs are expected to be useful in two respects: 1) As a design tool, a model is the only way to determine the effects of design choices without building a prototype, which is, in the context of rocket test stands, impracticable; 2) As a prediction and tuning tool, a good model allows system parameters to be set off-line, so that the expected system response conforms to specifications. This includes the setting of physical parameters, such as valve positions, and the configuration and tuning of any feedback controllers in the loop.
Rank estimation and the multivariate analysis of in vivo fast-scan cyclic voltammetric data
Keithley, Richard B.; Carelli, Regina M.; Wightman, R. Mark
2010-01-01
Principal component regression has been used in the past to separate current contributions from different neuromodulators measured with in vivo fast-scan cyclic voltammetry. Traditionally, a percent cumulative variance approach has been used to determine the rank of the training set voltammetric matrix during model development; however, this approach suffers from several disadvantages, including the use of arbitrary percentages and the requirement of extreme precision of training sets. Here we propose that Malinowski's F-test, a method based on a statistical analysis of the variance contained within the training set, can be used to improve factor selection for the analysis of in vivo fast-scan cyclic voltammetric data. These two methods of rank estimation were compared at all steps in the calibration protocol, including the number of principal components retained, overall noise levels, model validation as determined using a residual analysis procedure, and predicted concentration information. By analyzing 119 training sets from two different laboratories amassed over several years, we were able to gain insight into the heterogeneity of in vivo fast-scan cyclic voltammetric data and study how differences in factor selection propagate throughout the entire principal component regression analysis procedure. Visualizing cyclic voltammetric representations of the data contained in the retained and discarded principal components showed that using Malinowski's F-test for rank estimation of in vivo training sets allowed for noise to be more accurately removed. Malinowski's F-test also improved the robustness of our criterion for judging multivariate model validity, even though signal-to-noise ratios of the data varied. In addition, pH change was the majority noise carrier of in vivo training sets, while dopamine prediction was more sensitive to noise. PMID:20527815
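Malinowski's F-test, as used in the abstract above, ranks factors by comparing each reduced eigenvalue against the pool of smaller ones. Below is a minimal numpy sketch on synthetic rank-2 data, assuming the standard reduced-eigenvalue form of the test; it is illustrative only, not the authors' implementation.

```python
import numpy as np

# Sketch of Malinowski F-test-style rank estimation (assumed form:
# reduced eigenvalues REV_j = lambda_j / ((r-j+1)(c-j+1)) for an r x c
# matrix, with F_j comparing factor j's REV to the mean of the error pool).

def reduced_eigenvalues(X):
    r, c = X.shape
    s = min(r, c)
    lam = np.linalg.svd(X, compute_uv=False) ** 2   # eigenvalues of X^T X
    j = np.arange(1, s + 1)
    return lam / ((r - j + 1) * (c - j + 1))

def f_ratios(X):
    rev = reduced_eigenvalues(X)
    s = len(rev)
    # F_j is large while factor j carries signal; it drops to O(1) once
    # factor j belongs to the noise pool.
    return np.array([rev[j] / rev[j + 1:].mean() for j in range(s - 1)])

rng = np.random.default_rng(0)
scores = rng.normal(size=(50, 2))
loads = rng.normal(size=(2, 20))
X = scores @ loads + 1e-3 * rng.normal(size=(50, 20))  # rank ~2 plus noise
F = f_ratios(X)
```

For this synthetic matrix the F ratio collapses by orders of magnitude after the second factor, matching the known rank of 2.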
Mehta, Urvakhsh M; Thirthalli, Jagadisha; Naveen Kumar, C; Mahadevaiah, Mahesh; Rao, Kiran; Subbakrishna, Doddaballapura K; Gangadhar, Bangalore N; Keshavan, Matcheri S
2011-09-01
Social cognition is a cognitive domain that is under substantial cultural influence. There are no culturally appropriate standardized tools in India to comprehensively test social cognition. This study describes validation of tools for three social cognition constructs: theory of mind, social perception and attributional bias. Theory of mind tests included adaptations of, (a) two first order tasks [Sally-Anne and Smarties task], (b) two second order tasks [Ice cream van and Missing cookies story], (c) two metaphor-irony tasks and (d) the faux pas recognition test. Internal, Personal, and Situational Attributions Questionnaire (IPSAQ) and Social Cue Recognition Test were adapted to assess attributional bias and social perception, respectively. These tests were first modified to suit the Indian cultural context without changing the constructs to be tested. A panel of experts then rated the tests on Likert scales as to (1) whether the modified tasks tested the same construct as in the original and (2) whether they were culturally appropriate. The modified tests were then administered to groups of actively symptomatic and remitted schizophrenia patients as well as healthy comparison subjects. All tests of the Social Cognition Rating Tools in Indian Setting had good content validity and known-groups validity. In addition, the Social Cue Recognition Test in the Indian setting had good internal consistency and concurrent validity. Copyright © 2011 Elsevier B.V. All rights reserved.
Easterbrook, Philippa J.; Roberts, Teri; Sands, Anita; Peeling, Rosanna
2017-01-01
Purpose of review: Chronic hepatitis B virus (HBV) and hepatitis C virus (HCV) infections and HIV–HBV and HCV coinfection are major causes of chronic liver disease worldwide. Testing and diagnosis is the gateway for access to both treatment and prevention services, but there remains a large burden of undiagnosed infection globally. We review the global epidemiology, key challenges in the current hepatitis testing response, new tools to support the hepatitis global response (2016–2020 Global Hepatitis Health Sector strategy and 2017 WHO guidelines on hepatitis testing) and future directions and innovations in hepatitis diagnostics. Recent findings: Key challenges in the current hepatitis testing response include lack of quality-assured serological and low-cost virological in-vitro diagnostics, limited facilities for testing, inadequate data to guide country-specific hepatitis testing approaches, stigmatization of those with or at risk of viral hepatitis and lack of guidelines on hepatitis testing for resource-limited settings. The new Global Hepatitis Health Sector strategy sets out goals for elimination of viral hepatitis as a public health threat by 2030 and gives outcome targets for reductions in new infections and mortality, as well as service delivery targets that include testing, diagnosis and treatment. The 2017 WHO hepatitis testing guidelines for adults, adolescents and children in low-income and middle-income countries outline the public health approach to strengthen and expand current testing practices for viral hepatitis and address who to test (testing approaches), which serological and virological assays to use (testing strategies) as well as interventions to promote linkage to prevention and care.
Summary: Future directions and innovations in hepatitis testing include strategies to improve access, such as use of existing facility- and community-based testing opportunities for hepatitis testing, near-patient or point-of-care assays for virological markers (nucleic acid testing and HCV core antigen), dried blood spot specimens used with different serological and nucleic acid test assays, multiplex and multi-disease platforms to enable testing for multiple analytes/pathogens and potential self-testing for viral hepatitis. PMID:28306597
Moore, G.K.; Baten, L.G.; Allord, G.J.; Robinove, C.J.
1983-01-01
The Fox-Wolf River basin in east-central Wisconsin was selected to test concepts for a water-resources information system using digital mapping technology. This basin of 16,800 sq km is typical of many areas in the country. Fifty digital data sets were included in the Fox-Wolf information system. Many data sets were digitized from 1:500,000 scale maps and overlays. Some thematic data were acquired from WATSTORE and other digital data files. All data were geometrically transformed into a Lambert Conformal Conic map projection and converted to a raster format with a 1-km resolution. The result of this preliminary processing was a group of spatially registered, digital data sets in map form. Parameter evaluation, areal stratification, data merging, and data integration were used to achieve the processing objectives and to obtain analysis results for the Fox-Wolf basin. Parameter evaluation includes the visual interpretation of single data sets and digital processing to obtain new derived data sets. In the areal stratification stage, masks were used to extract from one data set all features that are within a selected area on another data set. Most processing results were obtained by data merging. Merging is the combination of two or more data sets into a composite product, in which the contribution of each original data set is apparent and can be extracted from the composite. One processing result was also obtained by data integration. Integration is the combination of two or more data sets into a single new product, from which the original data cannot be separated or calculated. (USGS)
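The processing stages described in the abstract above (areal stratification with masks, merging, and integration) can be sketched on toy spatially registered rasters; the arrays, class values, and variable names below are hypothetical:

```python
import numpy as np

# Toy sketch of areal stratification, merging, and integration on two
# spatially registered 1-km rasters (values are invented for illustration).
land_cover = np.array([[1, 1, 2],
                       [2, 3, 3],
                       [1, 2, 3]])          # thematic classes
rainfall   = np.array([[10., 12., 11.],
                       [ 9., 14., 13.],
                       [10., 11., 15.]])    # mm, same grid

# Areal stratification: use one data set as a mask to extract features
# from another within a selected area.
mask = land_cover == 3                       # select one thematic class
print(rainfall[mask].mean())                 # mean rainfall over class 3

# Merging keeps each original contribution recoverable (a layered composite)...
composite = np.stack([land_cover, rainfall])
# ...while integration collapses the inputs into a single product from which
# the originals cannot be separated.
integrated = rainfall * (land_cover == 3)
```

The composite/stack versus elementwise-product distinction mirrors the abstract's definitions: merged layers can be pulled back apart, an integrated product cannot.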
NASA Technical Reports Server (NTRS)
Barrett, Charles A.
2003-01-01
The cyclic oxidation test results for some 1000 high temperature commercial and experimental alloys have been collected in an EXCEL database. This database represents over thirty years of research at NASA Glenn Research Center in Cleveland, Ohio. The data is in the form of a series of runs of specific weight change versus time values for a set of samples tested at a given temperature, cycle time, and exposure time. Included on each run is a set of embedded plots of the critical data. The nature of the data is discussed along with analysis of the cyclic oxidation process. In addition, examples are given as to how a set of results can be analyzed. The data is assembled on a read-only compact disk which is available on request from the Materials Durability Branch, NASA Glenn Research Center, Cleveland, Ohio.
Measuring and testing for gender discrimination in physician pay: English family doctors.
Gravelle, Hugh; Hole, Arne Risa; Santos, Rita
2011-07-01
In 2008 the income of female GPs was 70%, and their wages (income per hour) were 89%, of those of male GPs. We estimate Oaxaca decompositions using OLS models of wages and 2SLS models of income and propose a set of new direct tests for within workplace gender discrimination. The direct tests are based on a comparison of the differences in income of female and male GPs in practices with varying proportions of female GPs and with female or male senior partners. These tests provide only weak evidence for discrimination. We also propose a set of indirect tests for discrimination, including a comparison of a GP's actual income with the income they report as an acceptable reward for their job. The indirect tests provide no evidence for gender discrimination within practices. Copyright © 2011 Elsevier B.V. All rights reserved.
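The Oaxaca decomposition used in the abstract above splits a mean wage gap into an "explained" part (different endowments, e.g. hours worked) and an "unexplained" part (different coefficients). A minimal two-fold sketch on synthetic data follows; the group labels, sample sizes, and numbers are illustrative, not the study's:

```python
import numpy as np

# Minimal two-fold Oaxaca decomposition sketch with OLS wage models per
# group; all data are synthetic, for illustration only.

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(1)
n = 500
# Hypothetical group A (e.g. male GPs): more hours, higher intercept.
Xa = np.column_stack([np.ones(n), rng.normal(40, 5, n)])
ya = Xa @ np.array([10.0, 0.5]) + rng.normal(0, 1, n)
# Hypothetical group B (e.g. female GPs): fewer hours, lower intercept.
Xb = np.column_stack([np.ones(n), rng.normal(30, 5, n)])
yb = Xb @ np.array([8.0, 0.5]) + rng.normal(0, 1, n)

ba, bb = ols(Xa, ya), ols(Xb, yb)
gap = ya.mean() - yb.mean()
explained = (Xa.mean(0) - Xb.mean(0)) @ bb       # endowment differences
unexplained = Xa.mean(0) @ (ba - bb)             # coefficient differences
```

Because each OLS fit includes an intercept, the two parts sum exactly to the raw gap; the "unexplained" share is the component conventionally scrutinized for discrimination.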
Gargis, Amy S; Kalman, Lisa; Lubin, Ira M
2016-12-01
Clinical microbiology and public health laboratories are beginning to utilize next-generation sequencing (NGS) for a range of applications. This technology has the potential to transform the field by providing approaches that will complement, or even replace, many conventional laboratory tests. While the benefits of NGS are significant, the complexities of these assays require an evolving set of standards to ensure testing quality. Regulatory and accreditation requirements, professional guidelines, and best practices that help ensure the quality of NGS-based tests are emerging. This review highlights currently available standards and guidelines for the implementation of NGS in the clinical and public health laboratory setting, and it includes considerations for NGS test validation, quality control procedures, proficiency testing, and reference materials. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
Bench Test Evaluation of Adaptive Servoventilation Devices for Sleep Apnea Treatment
Zhu, Kaixian; Kharboutly, Haissam; Ma, Jianting; Bouzit, Mourad; Escourrou, Pierre
2013-01-01
Rationale: Adaptive servoventilation devices are marketed to overcome sleep disordered breathing with apneas and hypopneas of both central and obstructive mechanisms often experienced by patients with chronic heart failure. The clinical efficacy of these devices is still questioned. Study Objectives: This study challenged the detection and treatment capabilities of the three commercially available adaptive servoventilation devices in response to sleep disordered breathing events reproduced on an innovative bench test. Methods: The bench test consisted of a computer-controlled piston and a Starling resistor. The three devices were subjected to a flow sequence composed of central and obstructive apneas and hypopneas including Cheyne-Stokes respiration derived from a patient. The responses of the devices were separately evaluated with the maximum and the clinical settings (titrated expiratory positive airway pressure), and the detected events were compared to the bench-scored values. Results: The three devices responded similarly to central events, by increasing pressure support to raise airflow. All central apneas were eliminated, whereas hypopneas remained. The three devices responded differently to the obstructive events with the maximum settings. These obstructive events could be normalized with clinical settings. The residual events of all the devices were scored lower than bench test values with the maximum settings, but were in agreement with the clinical settings. However, their mechanisms were misclassified. Conclusion: The tested devices reacted as expected to the disordered breathing events, but not sufficiently to normalize the breathing flow. The device-scored results should be used with caution to judge efficacy, as their validity depends upon the initial settings. Citation: Zhu K; Kharboutly H; Ma J; Bouzit M; Escourrou P. Bench test evaluation of adaptive servoventilation devices for sleep apnea treatment. J Clin Sleep Med 2013;9(9):861-871. 
PMID:23997698
Optimal quantum networks and one-shot entropies
NASA Astrophysics Data System (ADS)
Chiribella, Giulio; Ebler, Daniel
2016-09-01
We develop a semidefinite programming method for the optimization of quantum networks, including both causal networks and networks with indefinite causal structure. Our method applies to a broad class of performance measures, defined operationally in terms of interactive tests set up by a verifier. We show that the optimal performance is equal to a max relative entropy, which quantifies the informativeness of the test. Building on this result, we extend the notion of conditional min-entropy from quantum states to quantum causal networks. The optimization method is illustrated in a number of applications, including the inversion, charge conjugation, and controlization of an unknown unitary dynamics. In the non-causal setting, we show a proof-of-principle application to the maximization of the winning probability in a non-causal quantum game.
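For full-rank states, the max relative entropy that the abstract above connects to optimal test performance reduces to an eigenvalue computation, D_max(ρ‖σ) = log₂ λ_max(σ^(-1/2) ρ σ^(-1/2)). A small numerical sketch with illustrative states (not drawn from the paper):

```python
import numpy as np

# Sketch: max relative entropy D_max(rho || sigma) for full-rank sigma,
# computed as log2 of the largest eigenvalue of sigma^(-1/2) rho sigma^(-1/2).
# The states below are illustrative 1-qubit density matrices.

def dmax(rho, sigma):
    w, v = np.linalg.eigh(sigma)
    s_inv_half = v @ np.diag(w ** -0.5) @ v.conj().T
    m = s_inv_half @ rho @ s_inv_half
    return np.log2(np.linalg.eigvalsh(m).max())

rho = np.diag([0.75, 0.25])        # biased qubit state
sigma = np.eye(2) / 2              # maximally mixed state
print(dmax(rho, sigma))            # log2(1.5), about 0.585
```

D_max is the smallest log₂ λ with ρ ≤ λσ; it vanishes exactly when ρ = σ, consistent with its role as a measure of how informative a test is.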
NASA Technical Reports Server (NTRS)
Marti, Willy
1937-01-01
Test equipment is described that includes a system of three quartz indicators whereby three different pressures could be synchronized and simultaneously recorded on a single oscillogram. This equipment was used to test the reflection of waves at the ends of the valve spring, the dynamic stress of the valve spring for a single lift of the valve, and to measure the profile of the cam tested. Other tests included simultaneous recording of the stress at both ends of the spring, spring oscillation during a single lift as a function of speed, computation of amplitude of oscillation for a single lift by harmonic analysis, effect of cam profile, the setting up of resonance, and forced spring oscillation with damping.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mardirossian, Narbe; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720
2015-02-21
A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.
Newman, Thomas B; Bernzweig, Jane A; Takayama, John I; Finch, Stacia A; Wasserman, Richard C; Pantell, Robert H
2002-01-01
To determine the predictors and results of urine testing of young febrile infants seen in office settings. Prospective cohort study. Offices of 573 pediatric practitioners from 219 practices in the American Academy of Pediatrics Pediatric Research in Office Settings' research network. A total of 3066 infants 3 months or younger with temperatures of 38 degrees C or higher were evaluated and treated according to the judgment of their practitioners. Urine testing results, early and late urinary tract infections (UTIs), and UTIs with bacteremia. Fifty-four percent of the infants initially had urine tested, of whom 10% had a UTI. The height of the fever was associated with urine testing and a UTI among those tested (adjusted odds ratio per degree Celsius, 2.2 for both). Younger age, ill appearance, and lack of a fever source were associated with urine testing but not with a UTI, whereas lack of circumcision (adjusted odds ratio, 11.6), female sex (adjusted odds ratio, 5.4), and longer duration of fever (adjusted odds ratio, 1.8 for fever lasting > or = 24 hours) were not associated with urine testing but were associated with a UTI. Bacteremia accompanied the UTI in 10% of the patients, including 17% of those younger than 1 month. Among 807 infants not initially tested or treated with antibiotics, only 2 had a subsequent documented UTI; both did well. Practitioners order urine tests selectively, focusing on younger and more ill-appearing infants and on those without an apparent fever source. Such selective urine testing, with close follow-up, was associated with few late UTIs in this large study. Urine testing should focus particularly on uncircumcised boys, girls, the youngest and sickest infants, and those with persistent fever.
Challenging local realism with human choices.
2018-05-01
A Bell test is a randomized trial that compares experimental observations against the philosophical worldview of local realism [1], in which the properties of the physical world are independent of our observation of them and no signal travels faster than light. A Bell test requires spatially distributed entanglement, fast and high-efficiency detection and unpredictable measurement settings [2,3]. Although technology can satisfy the first two of these requirements [4-7], the use of physical devices to choose settings in a Bell test involves making assumptions about the physics that one aims to test. Bell himself noted this weakness in using physical setting choices and argued that human 'free will' could be used rigorously to ensure unpredictability in Bell tests [8]. Here we report a set of local-realism tests using human choices, which avoids assumptions about predictability in physics. We recruited about 100,000 human participants to play an online video game that incentivizes fast, sustained input of unpredictable selections and illustrates Bell-test methodology [9]. The participants generated 97,347,490 binary choices, which were directed via a scalable web platform to 12 laboratories on five continents, where 13 experiments tested local realism using photons [5,6], single atoms [7], atomic ensembles [10] and superconducting devices [11]. Over a 12-hour period on 30 November 2016, participants worldwide provided a sustained data flow of over 1,000 bits per second to the experiments, which used different human-generated data to choose each measurement setting. The observed correlations strongly contradict local realism and other realistic positions in bipartite and tripartite [12] scenarios.
Project outcomes include closing the 'freedom-of-choice loophole' (the possibility that the setting choices are influenced by 'hidden variables' to correlate with the particle properties [13]), the utilization of video-game methods [14] for rapid collection of human-generated randomness, and the use of networking techniques for global participation in experimental science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teke, T
Purpose: To present and validate a set of quality control tests for trajectory treatment delivery using synchronized dynamic couch (translation and rotation), MLC and collimator motion. Methods: The quality control tests are based on the Picket Fence test, which consists of five narrow bands of 2 mm width spaced at 2.5 cm intervals, and add progressively synchronized dynamic motions. The tests were exposed on GafChromic EBT3 films. The first test is a regular Picket Fence test (no motion, MLC static while the beam is on) used as a baseline. The second test includes simultaneous collimator and couch rotation, each stripe corresponding to a different rotation speed. Errors (0.5 degree and 1 degree in rotation synchronization) were introduced in this test to assess its error sensitivity. The third test is similar to the regular Picket Fence but includes dynamic MLC motion and couch translation (including acceleration during delivery) while the beam is on. Finally, the fourth test, a combination of the second and third tests, delivers the Picket Fence pattern using synchronized collimator and couch rotation together with synchronized dynamic MLC and couch translation including acceleration. Films were analyzed with FilmQA Pro. Results: The distances between the peaks in the dose profile were measured (18.5 cm away from the isocentre in the in-plane direction, where non-synchronized rotation would have the largest effect) and compared to the regular Picket Fence tests. For well-synchronized motions, distances between peaks were between 24.9–25.4 mm, identical to the regular Picket Fence test. This range increased to 24.4–26.4 mm and 23.4–26.4 mm for 0.5 degree and 1 degree errors, respectively. The amplitude also decreased by up to 15% when errors were introduced. Conclusion: We demonstrated that the Roucoulette tests can be used as quality control tests for trajectory treatment delivery using synchronized dynamic motion.
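The peak-spacing measurement described in the Results above can be sketched as simple peak picking on a 1-D dose profile. The synthetic profile, Gaussian stripe shape, and 0.5 mm scan resolution below are assumptions for illustration, not FilmQA Pro's method:

```python
import numpy as np

# Toy Picket Fence analysis: locate the five stripe peaks in a synthetic
# 1-D dose profile and check peak-to-peak spacing against the nominal 25 mm.
res_mm = 0.5                                   # assumed scan resolution
x = np.arange(0, 140, res_mm)                  # position along profile, mm
profile = np.zeros_like(x)
for centre in 20 + 25 * np.arange(5):          # five stripes every 25 mm
    profile += np.exp(-0.5 * ((x - centre) / 1.0) ** 2)

# Simple local-maximum peak picking above a dose threshold.
is_peak = (profile[1:-1] > profile[:-2]) & (profile[1:-1] > profile[2:]) \
          & (profile[1:-1] > 0.5)
peaks_mm = x[1:-1][is_peak]
spacings = np.diff(peaks_mm)
print(peaks_mm, spacings)
```

On real film scans, spacings drifting outside the nominal band (here, beyond roughly 24.9–25.4 mm) would flag a synchronization error, mirroring the sensitivity analysis in the abstract.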
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-13
...). The new version of this IEC standard includes a number of methodological changes designed to increase... codified) sets forth a variety of provisions designed to improve energy efficiency and established the... prescribed or amended under this section shall be reasonably designed to produce test results which measure...
ERIC Educational Resources Information Center
Carlson, Janet F.; Benson, Nicholas; Oakland, Thomas
2010-01-01
Implications of the International Classification of Functioning, Disability and Health (ICF) on the development and use of tests in school settings are enumerated. We predict increased demand for behavioural assessments that consider a person's activities, participation and person-environment interactions, including measures that: (a) address…
Predicting Transition to Postsecondary Programs of GED® Earners in a College Setting
ERIC Educational Resources Information Center
Medina, Isabel
2014-01-01
This applied dissertation was designed to identify the characteristics of students enrolled in a GED® preparation program who transitioned to postsecondary programs at the same institution after passing the GED® test. The characteristics studied included age; gender; ethnicity; prematriculation scores in reading, language, and math in the Test of…
A Preliminary Assessment of Phase Separator Ground-Based and Reduced-Gravity Testing for ALS Systems
NASA Technical Reports Server (NTRS)
Hall, Nancy Rabel
2006-01-01
A viewgraph presentation of phase separator ground-based and reduced-gravity testing for Advanced Life Support (ALS) systems is shown. The topics include: 1) Multiphase Flow Technology Program; 2) Types of Separators; 3) MOBI Phase Separators; 4) Experiment set-up; and 5) Preliminary comparison/results.
DOT National Transportation Integrated Search
2017-10-25
The Task 8 D2X Hub Proof-of-Concept Test Evaluation Report provides results of the experimental data analysis performed in accordance with the experimental plan for the proof-of-concept version of the prototype system. The data set analyzed includes ...
Bowles, Kristina E; Clark, Hollie A; Tai, Eric; Sullivan, Patrick S; Song, Binwei; Tsang, Jenny; Dietz, Craig A; Mir, Julita; Mares-DelGrasso, Azul; Calhoun, Cindy; Aguirre, Daisy; Emerson, Cicily; Heffelfinger, James D
2008-01-01
The goals of this project were to assess the feasibility of conducting rapid human immunodeficiency virus (HIV) testing in outreach and community settings to increase knowledge of HIV serostatus among groups disproportionately affected by HIV and to identify effective nonclinical venues for recruiting people in the targeted populations. Community-based organizations (CBOs) in seven U.S. cities conducted rapid HIV testing in outreach and community settings, including public parks, homeless shelters, and bars. People with reactive preliminary positive test results received confirmatory testing, and people confirmed to be HIV-positive were referred to health-care and prevention services. A total of 23,900 people received rapid HIV testing. Of the 267 people (1.1%) with newly diagnosed HIV infection, 75% received their confirmatory test results and 64% were referred to care. Seventy-six percent were from racial/ethnic minority groups, and 58% identified themselves as men who have sex with men, 72% of whom reported having multiple sex partners in the past year. Venues with the highest proportion of new HIV diagnoses were bathhouses, social service organizations, and needle-exchange programs. The acceptance rate for testing was 60% among sites collecting this information. Findings from this demonstration project indicate that offering rapid HIV testing in outreach and community settings is a feasible approach for reaching members of minority groups and people at high risk for HIV infection. The project identified venues that would be important to target and offered lessons that could be used by other CBOs to design and implement similar programs in the future.
Studies of dished accelerator grids for 30-cm ion thrusters
NASA Technical Reports Server (NTRS)
Rawlin, V. K.
1973-01-01
Eighteen geometrically different sets of dished accelerator grids were tested on five 30-cm thrusters. The geometric variation of the grids included the grid-to-grid spacing, the screen and accelerator hole diameters and thicknesses, the screen and accelerator open area fractions, ratio of dish depth to dish diameter, compensation, and aperture shape. In general, the data taken over a range of beam currents for each grid set included the minimum total accelerating voltage required to extract a given beam current and the minimum accelerator grid voltage required to prevent electron backstreaming.
Online medical symbol recognition using a Tablet PC
NASA Astrophysics Data System (ADS)
Kundu, Amlan; Hu, Qian; Boykin, Stanley; Clark, Cheryl; Fish, Randy; Jones, Stephen; Moore, Stephen
2011-01-01
In this paper we describe a scheme to enhance the usability of a Tablet PC's handwriting recognition system by including medical symbols that are not a part of the Tablet PC's symbol library. The goal of this work is to make handwriting recognition more useful for medical professionals accustomed to using medical symbols in medical records. To demonstrate that this new symbol recognition module is robust and expandable, we report results on both a medical symbol set and an expanded symbol test set which includes selected mathematical symbols.
Microarray Genomic Systems Development
2008-06-01
11 species), Escherichia coli TOP10 (7 strains), and Geobacillus stearothermophilus. Using standard molecular biology methods, we isolated genomic ... comparisons. Results: Different species of bacteria, including Escherichia coli, Bacillus bacteria, and Geobacillus stearothermophilus, produce qualitatively ... oligonucleotides to labelled genomic DNA from a set of test samples, including eleven Bacillus species, Geobacillus stearothermophilus, and seven Escherichia
Setting up chronic disease programs: perspectives from Aboriginal Australia.
Hoy, Wendy E; Kondalsamy-Chennakesavan, S; Smith, J; Sharma, S; Davey, R; Gokel, G
2006-01-01
To share some perspectives on setting up programs to improve management of hypertension, renal disease, and diabetes in high-risk populations, derived from experience in remote Australian Aboriginal settings. Regular integrated checks for chronic diseases and their risk factors, and appropriate treatment, are essential elements of regular adult health care. Programs should be run by local health workers, following algorithms for testing and treatment, with backup from nurses. Constant evaluation is essential. COMPONENTS: These include testing, treatment, education for individuals and communities, skills and career development for staff, ongoing evaluation, program modification, and advocacy. Target groups, elements, and frequency of testing, as well as the reagents and treatment modalities, must be designed for local circumstances, which include disease burden and impact, competing priorities, and available resources. Pilot surveys or record reviews can define target groups and conditions. Opportunistic testing will suffice if people are seen with some regularity for other conditions; otherwise, systematic screening is needed, preferably embedded in primary care streams. The chief goal of treatment is to lower blood pressure and, if the patient is diabetic, to control hyperglycemia. Many people will need multiple drugs for many years. Challenges include lack of resources, competing demands of acute care, the burden of treatment when disease rates are high, problems with information systems, and, in our setting, health worker absenteeism. Businesses, altruistic organizations, and pharmaceutical and biotechnology companies might fund feasibility studies. Where governments or insurance companies already support health services, they must ultimately commit to chronic disease services over the long term. Effective advocacy requires the presentation of an integrated view of chronic disease and a single cross-disciplinary program for its containment.
Arguments based on preserving the economic base of societies by preventing or delaying premature death will carry most weight, as will the costs of dialysis avoided in countries that already support open-access programs.
Hancock, Laura; Correia, Stephen; Ahern, David; Barredo, Jennifer; Resnik, Linda
2017-07-01
Purpose The objectives were to 1) identify major cognitive domains involved in learning to use the DEKA Arm; 2) specify cognitive domain-specific skills associated with basic versus advanced users; and 3) examine whether baseline memory and executive function predicted learning. Method Sample included 35 persons with upper limb amputation. Subjects were administered a brief neuropsychological test battery prior to start of DEKA Arm training, as well as physical performance measures at the onset of, and following, training. Multiple regression models controlling for age and including neuropsychological tests were developed to predict physical performance scores. Prosthetic performance scores were divided into quartiles and independent samples t-tests compared neuropsychological test scores of advanced scorers and basic scorers. Baseline neuropsychological test scores were used to predict change in scores on physical performance measures across time. Results Cognitive domains of attention and processing speed were statistically significantly related to proficiency of DEKA Arm use and predicted level of proficiency. Conclusions Results support use of neuropsychological tests to predict learning and use of a multifunctional prosthesis. Assessment of cognitive status at the outset of training may help set expectations for the duration and outcomes of treatment. Implications for Rehabilitation Cognitive domains of attention and processing speed were significantly related to level of proficiency of an advanced multifunctional prosthesis (the DEKA Arm) after training. Results provide initial support for the use of neuropsychological tests to predict advanced learning and use of a multifunctional prosthesis in upper-limb amputees.
Results suggest that assessment of patients' cognitive status at the outset of upper limb prosthetic training may, in the future, help patients, their families and therapists set expectations for the duration and intensity of training and may help set reasonable proficiency goals.
Cragun, Deborah; Besharat, Andrea Doty; Lewis, Courtney; Vadaparampil, Susan T; Pal, Tuya
2013-12-01
With the expansion of genetic testing options due to tremendous advances in sequencing technologies, testing will increasingly be offered by a variety of healthcare providers in diverse settings, as has been observed with BRCA1 and BRCA2 (BRCA) gene testing over the last decade. In an effort to assess the educational needs and preferences of healthcare providers primarily in a community-based setting, we mailed a survey to healthcare providers across Florida who order BRCA testing. Within the packet, a supplemental card was included to give participants the opportunity to request free clinical educational resources from the investigative team. Of 81 eligible providers who completed the survey, most were physicians or nurse practitioners; and over 90 % worked in a community or private practice setting. Respondents provided BRCA testing services for a median of 5 years, but the majority (56 %) reported no formal training in clinical cancer genetics. Most respondents (95 %) expressed interest in formal training opportunities, with 3-day in-person weekend training representing the most highly preferred format. The most widely selected facilitators to participation were minimal requirement to take time off work and continuing education credits. Overall, 64 % of respondents requested free clinical educational resources. Preferences for informal education included written materials and in-person presentations; whereas accessing a DVD or website were less popular. Findings from our study highlight both the need for and interest in ongoing educational opportunities and resources among community providers who order BRCA testing. These results can be used to enhance participation of community-based providers in educational training programs by targeting educational resources to the most preferred format.
Investigation of exomic variants associated with overall survival in ovarian cancer
Ann Chen, Yian; Larson, Melissa C; Fogarty, Zachary C; Earp, Madalene A; Anton-Culver, Hoda; Bandera, Elisa V; Cramer, Daniel; Doherty, Jennifer A; Goodman, Marc T; Gronwald, Jacek; Karlan, Beth Y; Kjaer, Susanne K; Levine, Douglas A; Menon, Usha; Ness, Roberta B; Pearce, Celeste L; Pejovic, Tanja; Rossing, Mary Anne; Wentzensen, Nicolas; Bean, Yukie T; Bisogna, Maria; Brinton, Louise A; Carney, Michael E; Cunningham, Julie M; Cybulski, Cezary; deFazio, Anna; Dicks, Ed M; Edwards, Robert P; Gayther, Simon A; Gentry-Maharaj, Aleksandra; Gore, Martin; Iversen, Edwin S; Jensen, Allan; Johnatty, Sharon E; Lester, Jenny; Lin, Hui-Yi; Lissowska, Jolanta; Lubinski, Jan; Menkiszak, Janusz; Modugno, Francesmary; Moysich, Kirsten B; Orlow, Irene; Pike, Malcolm C; Ramus, Susan J; Song, Honglin; Terry, Kathryn L; Thompson, Pamela J; Tyrer, Jonathan P; van den Berg, David J; Vierkant, Robert A; Vitonis, Allison F; Walsh, Christine; Wilkens, Lynne R; Wu, Anna H; Yang, Hannah; Ziogas, Argyrios; Berchuck, Andrew; Chenevix-Trench, Georgia; Schildkraut, Joellen M; Permuth-Wey, Jennifer; Phelan, Catherine M; Pharoah, Paul D P; Fridley, Brooke L
2016-01-01
Background While numerous susceptibility loci for epithelial ovarian cancer (EOC) have been identified, few associations have been reported with overall survival. In the absence of common prognostic genetic markers, we hypothesize that rare coding variants may be associated with overall EOC survival and assessed their contribution in two exome-based genotyping projects of the Ovarian Cancer Association Consortium (OCAC). Methods The primary patient set (Set 1) included 14 independent EOC studies (4293 patients) and 227,892 variants, and a secondary patient set (Set 2) included six additional EOC studies (1744 patients) and 114,620 variants. Because power to detect rare variants individually is reduced, gene-level tests were conducted. Sets were analyzed separately at individual variants and by gene, and then combined with meta-analyses (73,203 variants and 13,163 genes overlapped). Results No individual variant reached genome-wide statistical significance. A SNP previously implicated to be associated with EOC risk and, to a lesser extent, survival, rs8170, showed the strongest evidence of association with survival and similar effect size estimates across sets (Pmeta=1.1E-6, HRSet1=1.17, HRSet2=1.14). Rare variants in ATG2B, an autophagy gene important for apoptosis, were significantly associated with survival after multiple testing correction (Pmeta=1.1E-6; Pcorrected=0.01). Conclusions Common variant rs8170 and rare variants in ATG2B may be associated with EOC overall survival, although further study is needed. Impact This study represents the first exome-wide association study of EOC survival to include rare variant analyses, and suggests that complementary single variant and gene-level analyses in large studies are needed to identify rare variants that warrant follow-up study. PMID:26747452
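The two patient sets were combined by meta-analysis of per-set effect estimates. A minimal sketch of a fixed-effect, inverse-variance meta-analysis on the log hazard-ratio scale; the abstract does not state the exact model used, and the standard errors below are hypothetical, not values from the study:

```python
import math

def fixed_effect_meta(hazard_ratios, std_errors):
    """Inverse-variance fixed-effect pooling of hazard ratios on the
    log scale; returns the pooled HR and a two-sided normal-theory p."""
    logs = [math.log(hr) for hr in hazard_ratios]
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled_log = sum(w * b for w, b in zip(weights, logs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    z = pooled_log / pooled_se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return math.exp(pooled_log), p

# HR estimates for rs8170 from the two sets (1.17 and 1.14, per the
# abstract); the standard errors are illustrative assumptions.
pooled_hr, p_value = fixed_effect_meta([1.17, 1.14], [0.03, 0.05])
```

The pooled estimate necessarily lands between the two per-set hazard ratios, weighted toward the more precise set.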
Deep Borehole Field Test Requirements and Controlled Assumptions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardin, Ernest
2015-07-01
This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion. This set of requirements and assumptions has benefited greatly from reviews by Gordon Appel, Geoff Freeze, Kris Kuhlman, Bob MacKinnon, Steve Pye, David Sassani, Dave Sevougian, and Jiann Su.
Blum, David; Rosa, Daniel; deWolf-Linder, Susanne; Hayoz, Stefanie; Ribi, Karin; Koeberle, Dieter; Strasser, Florian
2014-12-01
Oncologists perform a range of pharmacological and nonpharmacological interventions to manage the symptoms of outpatients with advanced cancer. The aim of this study was to develop and test a symptom management performance checklist (SyMPeC) to review medical charts. First, the content of the checklist was determined by consensus of an interprofessional team. The SyMPeC was tested using the data set of the SAKK 96/06 E-MOSAIC (Electronical Monitoring of Symptoms and Syndromes Associated with Cancer) trial, which included six consecutive visits from 247 patients. In a test data set (half of the data) of medical charts, two people extracted and quantified the definitions of the parameters (content validity). To assess the inter-rater reliability, three independent researchers used the SyMPeC on a random sample (10% of the test data set), and Fleiss's kappa was calculated. To test external validity, the interventions retrieved by the SyMPeC chart review were compared with nurse-led assessment of patient-perceived oncologists' palliative interventions. Five categories of symptoms were included: pain, fatigue, anorexia/nausea, dyspnea, and depression/anxiety. Interventions were categorized as symptom specific or symptom unspecific. In the test data set of 123 patients, 402 unspecific and 299 symptom-specific pharmacological interventions were detected. Nonpharmacological interventions (n = 242) were mostly symptom unspecific. Fleiss's kappa for symptom and intervention detections was K = 0.7 and K = 0.86, respectively. In 1003 of 1167 visits (86%), there was a match between SyMPeC and nurse-led assessment. Seventy-nine percent (195 of 247) of patients had no or one mismatch. Chart review with the SyMPeC appears reliable for detecting symptom management interventions by oncologists in outpatient clinics. Nonpharmacological interventions were less symptom specific. A template for documentation is needed for standardization.
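Agreement among the three independent raters was quantified with Fleiss's kappa, which compares observed per-subject agreement with the agreement expected by chance. A self-contained sketch of the standard statistic; the ratings tables below are invented for illustration, not data from the study:

```python
def fleiss_kappa(ratings):
    """Fleiss's kappa for a table where ratings[i][j] counts the raters
    who assigned subject i to category j; every row sums to the same
    number of raters n."""
    N = len(ratings)                 # number of subjects
    n = sum(ratings[0])              # raters per subject
    k = len(ratings[0])              # number of categories
    # overall proportion of assignments falling in each category
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # observed agreement per subject, then averaged over subjects
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N
    P_e = sum(p * p for p in p_j)    # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Three raters, two categories; the counts are hypothetical.
kappa_perfect = fleiss_kappa([[3, 0], [0, 3], [3, 0]])  # full agreement
kappa_mixed = fleiss_kappa([[2, 1], [3, 0], [3, 0]])    # one disagreement
```

Perfect agreement yields kappa = 1; values near 0 indicate agreement no better than chance.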
In Pursuit of Change: Youth Response to Intensive Goal Setting Embedded in a Serious Video Game
Thompson, Debbe; Baranowski, Tom; Buday, Richard; Baranowski, Janice; Juliano, Melissa; Frazior, McKee; Wilsdon, Jon; Jago, Russell
2007-01-01
Background Type 2 diabetes has increased in prevalence among youth, paralleling the increase in pediatric obesity. Helping youth achieve energy balance by changing diet and physical activity behaviors should decrease the risk for type 2 diabetes and obesity. Goal setting and goal review are critical components of behavior change. Theory-informed video games that emphasize development and refinement of goal setting and goal review skills provide a method for achieving energy balance in an informative, entertaining format. This article reports alpha-testing results of early versions of the theory-informed goal setting and review components of two diabetes and obesity prevention video games for preadolescents. Method Two episodes each of two video games were alpha tested with 9- to 11-year-old youth from multiple ethnic groups. Alpha testing included observed game play followed by a scripted interview. The staff was trained in observation and interview techniques prior to data collection. Results Although some difficulties were encountered, alpha testers generally understood goal setting and review components and comprehended they were setting personal goals. Although goal setting and review involved multiple steps, youth were generally able to complete them quickly, with minimal difficulty. Few technical issues arose; however, several usability and comprehension problems were identified. Conclusions Theory-informed video games may be an effective medium for promoting youth diabetes and obesity prevention. Alpha testing helps identify problems likely to have a negative effect on functionality, usability, and comprehension during development, thereby providing an opportunity to correct these issues prior to final production. PMID:19885165
The Alzheimer’s Disease Centers’ Uniform Data Set (UDS): The Neuropsychological Test Battery
Weintraub, Sandra; Salmon, David; Mercaldo, Nathaniel; Ferris, Steven; Graff-Radford, Neill R.; Chui, Helena; Cummings, Jeffrey; DeCarli, Charles; Foster, Norman L.; Galasko, Douglas; Peskind, Elaine; Dietrich, Woodrow; Beekly, Duane L.; Kukull, Walter A.; Morris, John C.
2009-01-01
The neuropsychological test battery from the Uniform Data Set (UDS) of the Alzheimer’s Disease Centers (ADC) program of the National Institute on Aging (NIA) consists of brief measures of attention, processing speed, executive function, episodic memory and language. This paper describes development of the battery and preliminary data from the initial UDS evaluation of 3,268 clinically cognitively normal men and women collected over the first 24 months of utilization. The subjects represent a sample of community-dwelling individuals who volunteer for studies of cognitive aging. Subjects were considered “clinically cognitively normal” based on clinical assessment, including the Clinical Dementia Rating scale and the Functional Assessment Questionnaire. The results demonstrate performance on tests sensitive to cognitive aging and to the early stages of Alzheimer disease (AD) in a relatively well-educated sample. Regression models investigating the impact of age, education, and gender on test scores indicate that these variables will need to be incorporated in subsequent normative studies. Future plans include: 1) determining the psychometric properties of the battery; 2) establishing normative data, including norms for different ethnic minority groups; and 3) conducting longitudinal studies on cognitively normal subjects, individuals with mild cognitive impairment, and individuals with AD and other forms of dementia. PMID:19474567
Atomic and vibrational origins of mechanical toughness in bioactive cement during setting
Tian, Kun V.; Yang, Bin; Yue, Yuanzheng; Bowron, Daniel T.; Mayers, Jerry; Donnan, Robert S.; Dobó-Nagy, Csaba; Nicholson, John W.; Fang, De-Cai; Greer, A. Lindsay; Chass, Gregory A.; Greaves, G. Neville
2015-01-01
Bioactive glass ionomer cements (GICs) have been in widespread use for ∼40 years in dentistry and medicine. However, these composites fall short of the toughness needed for permanent implants. A significant impediment to improvement has been the requisite use of conventional destructive mechanical testing, which is necessarily retrospective. Here we show quantitatively, through the novel use of calorimetry, terahertz (THz) spectroscopy and neutron scattering, how GIC's developing fracture toughness during setting is related to interfacial THz dynamics, changing atomic cohesion and fluctuating interfacial configurations. Contrary to convention, we find setting is non-monotonic, characterized by abrupt features not previously detected, including a glass–polymer coupling point, an early setting point, where decreasing toughness unexpectedly recovers, followed by stress-induced weakening of interfaces. Subsequently, toughness declines asymptotically to long-term fracture test values. We expect the insight afforded by these in situ non-destructive techniques will assist in raising understanding of the setting mechanisms and associated dynamics of cementitious materials. PMID:26548704
Solar electric propulsion thrust subsystem development
NASA Technical Reports Server (NTRS)
Masek, T. D.
1973-01-01
The Solar Electric Propulsion System developed under this program was designed to demonstrate all the thrust subsystem functions needed on an unmanned planetary vehicle. The demonstration included operation of the basic elements, power matching input and output voltage regulation, three-axis thrust vector control, subsystem automatic control including failure detection and correction capability (using a PDP-11 computer), operation of critical elements in thermal-vacuum, zero-gravity-type propellant storage, and data outputs from all subsystem elements. The subsystem elements, functions, unique features, and test setup are described. General features and capabilities of the test-support data system are also presented. The test program culminated in a 1500-h computer-controlled, system-functional demonstration. This included simultaneous operation of two thruster/power conditioner sets. The results of this testing phase satisfied all the program goals.
Versatility of the mouse reversal/set-shifting test: effects of topiramate and sex
Bissonette, Gregory B.; Lande, Michelle D.; Martins, Gabriela J.; Powell, Elizabeth M.
2012-01-01
The ability to learn a rule to guide behavior is crucial for cognition and executive function. However, in a constantly changing environment, flexibility in terms of learning and changing rules is paramount. Research suggests there may be common underlying causes for the similar rule learning impairments observed in many psychiatric disorders. One of these common anatomical manifestations involves deficits to the GABAergic system, particularly in the frontal cerebral cortical regions. Many common anti-epileptic drugs and mood stabilizers activate the GABA system with the reported adverse side effects of cognitive dysfunction. The mouse reversal/set-shifting test was used to evaluate effects in mice given topiramate, which is reported to impair attention in humans. Here we report that in mice topiramate prevents formation of the attentional set, but does not alter reversal learning. Differences in the GABA system are also found in many neuropsychiatric disorders that are more common in males, including schizophrenia and autism. Initial findings with the reversal/set-shifting task excluded female subjects. In this study, female mice tested on the standard reversal/set-shifting task showed similar reversal learning, but were not able to form the attentional set. The behavioral paradigm was modified and when presented with sufficient discrimination tasks, female mice performed the same as male mice, requiring the same number of trials to reach criterion and form the attentional set. The notable difference was that female mice had an extended latency to complete the trials for all discriminations. In summary, the reversal/set-shifting test can be used to screen for cognitive effects of potential therapeutic compounds in both male and female mice. PMID:22677721
Scheirer, Walter J; de Rezende Rocha, Anderson; Sapkota, Archana; Boult, Terrance E
2013-07-01
To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of "closed set" recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is "open set" recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel "1-vs-set machine," which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks.
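The core idea of the 1-vs-set machine is to bound the region labeled as the known class from both sides, rather than accepting everything beyond a single SVM hyperplane. A toy sketch of that decision rule; the weights and thresholds below are made up for illustration, whereas the actual method optimizes the two planes from the marginal distances of a trained SVM:

```python
def linear_score(w, b, x):
    """Decision value of a linear classifier: w . x + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def one_vs_set_predict(w, b, near, far, x):
    """Label x 'known' only when its score falls inside the slab bounded
    by two parallel planes. A plain binary SVM accepts any score above a
    single threshold, which over-generalizes when unknown classes appear
    at test time."""
    s = linear_score(w, b, x)
    return "known" if near <= s <= far else "unknown"

# Hypothetical 1-D example: a sample far beyond the training data is
# rejected even though its score is large and positive.
inside = one_vs_set_predict([1.0], 0.0, 0.5, 2.0, [1.0])
outside = one_vs_set_predict([1.0], 0.0, 0.5, 2.0, [5.0])
```

The second plane is what distinguishes open-set behavior: unbounded positive half-spaces are exactly where unknown classes get misclassified as known.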
Sun, Rongrong; Wang, Yuanyuan
2008-11-01
Predicting the spontaneous termination of atrial fibrillation (AF) leads not only to a better understanding of the mechanisms of the arrhythmia but also to improved treatment of sustained AF. A novel method is proposed to characterize the AF based on the structure and quantification of the recurrence plot (RP) to predict the termination of the AF. The RP of the electrocardiogram (ECG) signal is first obtained and eleven features are extracted to characterize its three basic patterns. Then the sequential forward search (SFS) algorithm and the Davies-Bouldin criterion are utilized to select the feature subset which can predict the AF termination effectively. Finally, the multilayer perceptron (MLP) neural network is applied to predict the AF termination. An AF database which includes one training set and two testing sets (A and B) of Holter ECG recordings is studied. Experiment results show that 97% of testing set A and 95% of testing set B are correctly classified. This demonstrates that the algorithm has the ability to predict the spontaneous termination of the AF effectively.
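The sequential forward search step greedily grows the feature subset, at each iteration adding the feature that most improves a separability criterion. A minimal sketch with a crude between-centroid distance standing in for the Davies-Bouldin criterion; the data and score function are invented for illustration:

```python
def separation(X, y):
    """Crude two-class separability score: squared distance between the
    class centroids (a simplified stand-in for criteria such as the
    Davies-Bouldin index)."""
    classes = sorted(set(y))
    centroids = []
    for c in classes:
        rows = [x for x, yi in zip(X, y) if yi == c]
        centroids.append([sum(col) / len(rows) for col in zip(*rows)])
    return sum((a - b) ** 2 for a, b in zip(*centroids))

def sfs(X, y, score_fn, n_select):
    """Sequential forward search: starting from the empty set, greedily
    add the not-yet-selected feature whose inclusion gives the highest
    score, until n_select features are chosen."""
    selected = []
    while len(selected) < n_select:
        best_j, best_s = None, float("-inf")
        for j in range(len(X[0])):
            if j in selected:
                continue
            cols = selected + [j]
            s = score_fn([[row[c] for c in cols] for row in X], y)
            if s > best_s:
                best_j, best_s = j, s
        selected.append(best_j)
    return selected

# Feature 0 separates the two classes; feature 1 is pure noise.
X = [[0.0, 5.0], [0.1, 1.0], [1.0, 5.0], [1.1, 1.0]]
y = [0, 0, 1, 1]
chosen = sfs(X, y, separation, 1)
```

Because the search is greedy, it evaluates only O(d * n_select) candidate subsets instead of all 2^d, at the cost of possibly missing feature combinations that only help jointly.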
Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program
NASA Technical Reports Server (NTRS)
Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.
2010-01-01
The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency, and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions.
Researchers within the Aeroelasticity Branch will examine other experimental efforts within the Subsonic Fixed Wing (SFW) program (such as testing of the NASA Common Research Model (CRM)) and other NASA programs and assess aeroelasticity issues and research topics.
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.
2015-01-01
This is the final of three reports published on the results of this project. In the first report, results were presented on nineteen tests performed in the NASA Glenn Spiral Bevel Gear Fatigue Test Rig on spiral bevel gear sets designed to simulate helicopter fielded failures. In the second report, fielded helicopter HUMS data from forty helicopters were processed with the same techniques that were applied to spiral bevel rig test data. Twenty of the forty helicopters experienced damage to the spiral bevel gears, while the other twenty helicopters had no known anomalies within the time frame of the datasets. In this report, results from the rig and helicopter data analysis will be compared for differences and similarities in condition indicator (CI) response. Observations and findings using sub-scale rig failure progression tests to validate helicopter gear condition indicators will be presented. In the helicopter, gear health monitoring data was measured when damage occurred and after the gear sets were replaced, at two helicopter regimes. For the helicopters, or tails, data was taken in the flat pitch ground 101 rotor speed (FPG101) regime. For nine tails, data was also taken in the 120 knots true airspeed (120KTA) regime. In the test rig, gear sets were tested until damage initiated and progressed, while gear health monitoring data and operational parameters were measured and tooth damage progression was documented. For the rig tests, the gear speed was maintained at 3500 RPM; a one-hour run-in was performed at 4000 in-lb gear torque, then the torque was increased to 8000 in-lb. The HUMS gear condition indicator data evaluated included Figure of Merit 4 (FM4), Root Mean Square (RMS) or Diagnostic Algorithm 1 (DA1), +3 Sideband Index (SI3) and +1 Sideband Index (SI1). These were selected based on their sensitivity in detecting contact fatigue damage modes from analytical, experimental and historical helicopter data.
For this report, the helicopter dataset was reduced to fourteen tails and the test rig data set was reduced to eight tested gear sets. The damage modes compared were separated into three cases. For case one, both the gear and pinion showed signs of contact fatigue or scuffing damage. For case two, only the pinion showed signs of contact fatigue damage or scuffing. Case three was limited to the gear tests when scuffing occurred immediately after the gear run-in. Results of this investigation highlighted the importance of understanding the complete monitored systems, for both the helicopter and test rig, before interpreting health monitoring data. Further work is required to better define these two systems that include better state awareness of the fielded systems, new sensing technologies, new experimental methods or models that quantify the effect of system design on CI response and new methods for setting thresholds that take into consideration the variance of each system.
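Two of the condition indicators named above have simple closed forms in the published gear-diagnostics literature: RMS is the root-mean-square level of the vibration signal, and FM4 is the normalized fourth moment (kurtosis) of the "difference signal", the signal after the regular gear-mesh components are removed. A sketch under those standard definitions; the difference-signal construction itself is omitted, and the sinusoidal input is synthetic test data, not rig data:

```python
import math

def rms(signal):
    """Root mean square of a sampled signal."""
    return math.sqrt(sum(v * v for v in signal) / len(signal))

def fm4(diff_signal):
    """FM4 condition indicator: fourth central moment of the difference
    signal normalized by the squared variance. A Gaussian (healthy)
    signal gives a value near 3; localized tooth damage produces
    isolated peaks that drive FM4 upward."""
    n = len(diff_signal)
    mean = sum(diff_signal) / n
    m2 = sum((v - mean) ** 2 for v in diff_signal) / n
    m4 = sum((v - mean) ** 4 for v in diff_signal) / n
    return m4 / (m2 * m2)

# One full period of a pure sinusoid: RMS = sqrt(1/2), FM4 = 1.5.
sig = [math.sin(2 * math.pi * k / 64) for k in range(64)]
```

Because FM4 is normalized by the variance, it responds to the shape of the amplitude distribution rather than the overall vibration level, which is what RMS tracks.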
GARNET--gene set analysis with exploration of annotation relations.
Rho, Kyoohyoung; Kim, Bumjin; Jang, Youngjun; Lee, Sanghyun; Bae, Taejeong; Seo, Jihae; Seo, Chaehwa; Lee, Jihyun; Kang, Hyunjung; Yu, Ungsik; Kim, Sunghoon; Lee, Sanghyuk; Kim, Wan Kyu
2011-02-15
Gene set analysis is a powerful method of deducing biological meaning for an a priori defined set of genes. Numerous tools have been developed to test statistical enrichment or depletion in specific pathways or gene ontology (GO) terms. Major difficulties in biological interpretation are integrating diverse types of annotation categories and exploring the relationships between annotation terms that carry similar information. GARNET (Gene Annotation Relationship NEtwork Tools) is an integrative platform for gene set analysis with many novel features. It includes tools for retrieving genes from the annotation database, statistical analysis & visualization of annotation relationships, and managing gene sets. In an effort to allow access to a full spectrum of amassed biological knowledge, we have integrated a variety of annotation data that include GO, domain, disease, drug, chromosomal location, and custom-defined annotations. Diverse types of molecular networks (pathways, transcription and microRNA regulations, protein-protein interaction) are also included. The pair-wise relationship between annotation gene sets was calculated using kappa statistics. GARNET consists of three modules--gene set manager, gene set analysis, and gene set retrieval--which are tightly integrated to provide virtually automatic analysis for gene sets. A dedicated viewer for the annotation network has been developed to facilitate exploration of the related annotations. GARNET is an integrative platform for diverse types of gene set analysis, where complex relationships among gene annotations can be easily explored with an intuitive network visualization tool (http://garnet.isysbio.org/ or http://ercsb.ewha.ac.kr/garnet/).
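The pair-wise kappa between two annotation gene sets can be computed by treating each set as a binary (member / non-member) labeling of a shared gene universe and applying Cohen's kappa to the two labelings. This is a plausible reading of GARNET's pairwise kappa, sketched here under that assumption; the paper's exact formulation may differ, and the gene identifiers are placeholders:

```python
def annotation_kappa(set_a, set_b, universe):
    """Cohen's kappa between two gene sets viewed as binary labelings
    over the same gene universe: observed agreement corrected for the
    agreement expected from the two marginal membership rates."""
    n = len(universe)
    a = [g in set_a for g in universe]
    b = [g in set_b for g in universe]
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    pa, pb = sum(a) / n, sum(b) / n
    # chance agreement: both-in plus both-out under independence
    p_exp = pa * pb + (1 - pa) * (1 - pb)
    return (p_obs - p_exp) / (1 - p_exp)

# Placeholder gene universe and annotation sets.
universe = [f"gene{i}" for i in range(10)]
same = annotation_kappa({"gene0", "gene1", "gene2"},
                        {"gene0", "gene1", "gene2"}, universe)
disjoint = annotation_kappa({"gene0", "gene1", "gene2"},
                            {"gene3", "gene4", "gene5"}, universe)
```

Identical sets give kappa = 1, disjoint sets of the same size give a negative kappa, and unrelated sets hover near 0, which is what makes the statistic useful for building a network of related annotation terms.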
Outbreaks in Health Care Settings.
Sood, Geeta; Perl, Trish M
2016-09-01
Outbreaks and pseudo-outbreaks in health care settings can be complex and should be evaluated systematically using epidemiologic tools. Laboratory testing is an important part of an outbreak evaluation. Health care personnel, equipment, supplies, water, ventilation systems, and the hospital environment have been associated with health care outbreaks. Settings including the neonatal intensive care unit, endoscopy, oncology, and transplant units are areas that have specific issues which impact the approach to outbreak investigation and control. Certain organisms have a predilection for health care settings because of the illnesses of patients, the procedures performed, and the care provided. Copyright © 2016 Elsevier Inc. All rights reserved.
Rispin, Karen; Wee, Joy
2015-07-01
This study was conducted to compare the performance of three types of wheelchairs in a low-resource setting. The larger goal was to provide information which will enable more effective use of limited funds by wheelchair manufacturers and suppliers in low-resource settings. The Motivation Rough Terrain and Whirlwind Rough Rider were compared in six skills tests which participants completed in one wheelchair type and then a day later in the other. A hospital-style folding transport wheelchair was also included in one test. For all skills, participants rated the ease or difficulty on a visual analogue scale. For all tracks, distance traveled and the physiological cost index were recorded. Data were analyzed using repeated measures analysis of variance. The Motivation wheelchair outperformed the Whirlwind wheelchair on rough and smooth tracks, and in some metrics on the tight spaces track. Motivation and Whirlwind wheelchairs significantly outperformed the hospital transport wheelchair in all metrics on the rough track skills test. This comparative study provides data that are valuable for manufacturers and for those who provide wheelchairs to users. The comparison with the hospital-style transport chair confirms the cost to users of inappropriate wheelchair provision. Implications for Rehabilitation For those with compromised lower limb function, wheelchairs are essential to enable full participation and improved quality of life. Therefore, provision of wheelchairs which effectively enable mobility in the cultures and environments in which people with disabilities live is crucial. This includes low-resource settings where the need for appropriate seating is especially urgent. A repeated measures study to measure wheelchair performance in everyday skills in the setting where wheelchairs are used gives information on the quality of mobility provided by those wheelchairs.
This study highlights differences in the performance of three types of wheelchairs often distributed in low-resource settings. This information can improve mobility for wheelchair users in those settings by enabling wheelchair manufacturers to optimize wheelchair design and providers to optimize the use of limited funds.
Glennie, R Andrew; Batke, Juliet; Fallah, Nader; Cheng, Christiana L; Rivers, Carly S; Noonan, Vanessa K; Dvorak, Marcel F; Fisher, Charles G; Kwon, Brian K; Street, John T
2017-10-15
There is worldwide geographic variation in the epidemiology of traumatic spinal cord injury (tSCI). The aim of this study was to determine whether environmental barriers, health status, and quality-of-life outcomes differ between patients with tSCI living in rural or urban settings, and whether patients move from rural to urban settings after tSCI. A cohort review of the Rick Hansen SCI Registry (RHSCIR) was undertaken from 2004 to 2012 for one province in Canada. Rural/urban setting was determined using postal codes. Outcomes data at 1 year in the community included the Short Form-36 Version 2 (SF36v2™), Life Satisfaction Questionnaire, Craig Hospital Inventory of Environmental Factors-Short Form (CHIEF-SF), Functional Independence Measure® Instrument, and SCI Health Questionnaire. Statistical methodologies used were the t test, Mann-Whitney U test, and Fisher's exact or χ2 test. In the analysis, 338 RHSCIR participants were included; 65 lived in a rural setting and 273 in an urban setting. Of the original patients residing in a rural area at discharge, 10 moved to an urban area by 1 year. Those who moved from a rural to urban area reported a lower SF-36v2™ Mental Component Score (MCS; p = 0.04) and a higher incidence of depression at 1 year (p = 0.04). Urban patients also reported a higher incidence of depression (p = 0.02) and a lower CHIEF-SF total score (p = 0.01), indicating fewer environmental barriers. No significant differences were found in other outcomes. Results suggest that although the patient outcomes are similar, some patients move from rural to urban settings after tSCI. Future efforts should target screening for mental health problems early, especially in urban settings.
Processing data base information having nonwhite noise
Gross, Kenneth C.; Morreale, Patricia
1995-01-01
A method and system for processing a set of data from an industrial process and/or a sensor. The method and system can include processing data from either real or calculated data related to an industrial process variable. One of the data sets can be an artificial signal data set generated by an autoregressive moving average technique. After obtaining two data sets associated with one physical variable, a difference function data set is obtained by determining the arithmetic difference between the two pairs of data sets over time. A frequency domain transformation is made of the difference function data set to obtain Fourier modes describing a composite function data set. A residual function data set is obtained by subtracting the composite function data set from the difference function data set and the residual function data set (free of nonwhite noise) is analyzed by a statistical probability ratio test to provide a validated data base.
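The pipeline in the abstract above (difference two data sets, keep the dominant Fourier modes as a composite, subtract to leave a near-white residual for a sequential probability ratio test) can be sketched with NumPy. This is an illustration under invented signals and an arbitrary mode count, not the patented implementation.

```python
import numpy as np

# Sketch of the described pipeline: a sensor signal and an ARMA-style
# reference estimate are differenced; the strongest Fourier modes of the
# difference form a "composite" (nonwhite) component; the residual left after
# subtracting the composite is approximately white and would be monitored by
# a sequential probability ratio test (SPRT). Signal shapes, the mode count
# (8), and all parameters are illustrative assumptions.

rng = np.random.default_rng(0)
t = np.arange(1024)
sensor = np.sin(2 * np.pi * t / 64) + 0.1 * rng.standard_normal(t.size)
reference = 0.95 * np.sin(2 * np.pi * t / 64)  # imperfect ARMA-style estimate

# Step 1: difference function data set.
difference = sensor - reference

# Step 2: frequency-domain transform; keep the dominant modes as the
# composite function (the serially correlated, nonwhite content).
spectrum = np.fft.rfft(difference)
keep = np.argsort(np.abs(spectrum))[-8:]
composite_spec = np.zeros_like(spectrum)
composite_spec[keep] = spectrum[keep]
composite = np.fft.irfft(composite_spec, n=difference.size)

# Step 3: residual function data set, now approximately white noise.
residual = difference - composite

print(residual.std() < difference.std())  # nonwhite content was removed
```

In the patent's terms, the SPRT then runs on `residual`, which is free of the serially correlated structure that would otherwise inflate its false-alarm rate.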
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purdy, R.
A hierarchical model consisting of quantitative structure-activity relationships based mainly on chemical reactivity was developed to predict the carcinogenicity of organic chemicals to rodents. The model comprises quantitative structure-activity relationships (QSARs) based on hypothesized mechanisms of action, metabolism, and partitioning. Predictors included octanol/water partition coefficient, molecular size, atomic partial charge, bond angle strain, atomic acceptor delocalizability, atomic radical superdelocalizability, the lowest unoccupied molecular orbital (LUMO) energy of the hypothesized intermediate nitrenium ion of primary aromatic amines, difference in charge of ionized and unionized carbon-chlorine bonds, substituent size and pattern on polynuclear aromatic hydrocarbons, the distance between lone electron pairs over a rigid structure, and the presence of functionalities such as nitroso and hydrazine. The model correctly classified 96% of the carcinogens in the training set of 306 chemicals and 90% of the carcinogens in the test set of 301 chemicals. The test set by chance contained 84% of the positive thio-containing chemicals. A QSAR for these chemicals was developed. This post-test-set modified model correctly predicted 94% of the carcinogens in the test set. The model was used to predict the carcinogenicity of the 25 organic chemicals the U.S. National Toxicology Program was testing at the writing of this article. 12 refs., 3 tabs.
Damarell, Raechel A; Tieman, Jennifer J
2016-03-01
Health professionals must be able to search competently for evidence to support practice. We sought to understand how palliative care clinicians construct searches for palliative care literature in PubMed, to quantify search efficacy in retrieving a set of relevant articles and to compare performance against a Palliative CareSearch Filter (PCSF). Included studies from palliative care systematic reviews formed a test set. Palliative care clinicians (n = 37) completed a search task using PubMed. Individual clinician searches were reconstructed in PubMed and combined with the test set to calculate retrieval sensitivity. PCSF performance in the test set was also determined. Many clinicians struggled to create useful searches. Twelve used a single search term, 17 narrowed the search inappropriately and 8 confused Boolean operators. The mean number of test set citations (n = 663) retrieved was 166 (SD = 188), or 25% although 76% of clinicians believed they would find more than 50% of the articles. Only 8 participants (22%) achieved this. Correlations between retrieval and PubMed confidence (r = 0.13) or frequency of use (r = -0.18) were weak. Many palliative care clinicians search PubMed ineffectively. Targeted skills training and PCSF promotion may improve evidence retrieval. © 2015 Health Libraries Group.
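The retrieval-sensitivity figure reported above (a mean of 166 of 663 test-set citations, or 25%) is a recall calculation. A minimal sketch, with hypothetical article identifiers standing in for PMIDs:

```python
# Hypothetical illustration of the retrieval-sensitivity (recall) calculation:
# the fraction of a gold-standard test set that a reconstructed search returns.

def sensitivity(retrieved_ids, test_set_ids):
    """Share of the relevant test set captured by a search."""
    relevant_found = len(set(retrieved_ids) & set(test_set_ids))
    return relevant_found / len(test_set_ids)

test_set = set(range(663))                  # stand-ins for the 663 gold PMIDs
search_hits = set(range(150)) | {900, 901}  # a hypothetical clinician search
print(f"{sensitivity(search_hits, test_set):.0%}")  # → 23%
```

Irrelevant hits (here 900 and 901) do not change sensitivity; measuring the cost of retrieving them would require a separate precision calculation.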
NASA Technical Reports Server (NTRS)
Purves, L.; Strang, R. F.; Dube, M. P.; Alea, P.; Ferragut, N.; Hershfeld, D.
1983-01-01
The software and procedures of a system of programs used to generate a report of the statistical correlation between NASTRAN modal analysis results and physical test results from modal surveys are described. Topics discussed include: a mathematical description of statistical correlation, a user's guide for generating a statistical correlation report, a programmer's guide describing the organization and functions of the individual programs leading to a statistical correlation report, and a set of examples including complete listings of programs and input and output data.
High-level neutron coincidence counter maintenance manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swansen, J.; Collinsworth, P.
1983-05-01
High-level neutron coincidence counter operational (field) calibration and usage is well known. This manual makes explicit basic (shop) check-out, calibration, and testing of new units and is a guide for repair of failed in-service units. Operational criteria for the major electronic functions are detailed, as are adjustments and calibration procedures, and recurrent mechanical/electromechanical problems are addressed. Some system tests are included for quality assurance. Data on nonstandard large-scale integrated (circuit) components and a schematic set are also included.
Optical fiber dispersion characterization study
NASA Technical Reports Server (NTRS)
Geeslin, A.; Arriad, A.; Riad, S. M.; Padgett, M. E.
1979-01-01
The theory, design, and results of optical fiber pulse dispersion measurements are considered. Both the hardware and software required to perform this type of measurement are described. Hardware includes a thermoelectrically cooled injection laser diode source, an 800-GHz gain-bandwidth-product avalanche photodiode, and an input mode scrambler. Software for an HP 9825 computer includes fast Fourier transform, inverse Fourier transform, and optimal compensation deconvolution. Test set construction details are also included. Test results include data collected on a 1-km fiber, a 4-km fiber, a fused splice, eight 600-meter lengths of fiber concatenated to form 4.8 km, and up to nine optical connectors.
Experimental Characterization of Gas Turbine Emissions at Simulated Flight Altitude Conditions
NASA Technical Reports Server (NTRS)
Howard, R. P.; Wormhoudt, J. C.; Whitefield, P. D.
1996-01-01
NASA's Atmospheric Effects of Aviation Project (AEAP) is developing a scientific basis for assessment of the atmospheric impact of subsonic and supersonic aviation. A primary goal is to assist assessments by United Nations scientific organizations and, hence, consideration of emissions standards by the International Civil Aviation Organization (ICAO). Engine tests have been conducted at AEDC to fulfill the needs of the AEAP. The purpose of these tests is to obtain a comprehensive database for supplying critical information to the atmospheric research community. It includes: (1) simulated sea-level-static test data as well as simulated altitude data; and (2) intrusive (extractive probe) data as well as non-intrusive (optical techniques) data. A commercial-type bypass engine burning aviation fuel was used in this test series. The test matrix was set by parametrically selecting the temperature, pressure, and flow rate at sea-level-static conditions and at different altitudes to obtain a parametric set of data.
Physics of Colloids in Space--Plus (PCS+) Experiment Completed Flight Acceptance Testing
NASA Technical Reports Server (NTRS)
Doherty, Michael P.
2004-01-01
The Physics of Colloids in Space--Plus (PCS+) experiment successfully completed system-level flight acceptance testing in the fall of 2003. This testing included electromagnetic interference (EMI) testing, vibration testing, and thermal testing. PCS+, an Expedite the Process of Experiments to Space Station (EXPRESS) Rack payload will deploy a second set of colloid samples within the PCS flight hardware system that flew on the International Space Station (ISS) from April 2001 to June 2002. PCS+ is slated to return to the ISS in late 2004 or early 2005.
Chandrasekar, Edwin; Kaur, Ravneet; Song, Sharon; Kim, Karen E
2015-01-01
Hepatitis B (HBV) is an urgent, unmet public health issue that affects Asian Americans disproportionately. Of the estimated 1.2 million living with chronic hepatitis B in the USA, more than 50% are of Asian ethnicity, despite the fact that Asian Americans constitute less than 6% of the total US population. The Centers for Disease Control and Prevention recommends HBV screening of persons who are at high risk for the disease. Yet, large numbers of Asian Americans have not been diagnosed or tested, in large part because of perceived cultural and linguistic barriers. Primary care physicians are at the front line of the US health care system, and are in a position to identify individuals and families at risk. Clinical settings integrated into Asian American communities, where physicians are on staff and wellness care is emphasized, can provide testing for HBV. In this study, the Asian Health Coalition and its community partners conducted HBV screenings and follow-up linkage to care in both clinical and nonclinical settings. The nonclinical settings included health fair events organized by churches and social services agencies, and were able to reach large numbers of individuals. Twice as many Asian Americans were screened in nonclinical settings than in health clinics. Chi-square and independent-samples t-tests showed that participants from the two settings did not differ in test positivity, sex, insurance status, years of residence in the USA, or education. Additionally, the same proportion of individuals found to be infected in the two groups underwent successful linkage to care. Nonclinical settings were as effective as clinical settings in screening for HBV, as well as in making treatment options available to those who tested positive; demographic factors did not confound the similarities. Further research is needed to evaluate if linkage to care can be accomplished equally efficiently on a larger scale.
Automated spot defect characterization in a field portable night vision goggle test set
NASA Astrophysics Data System (ADS)
Scopatz, Stephen; Ozten, Metehan; Aubry, Gilles; Arquetoux, Guillaume
2018-05-01
This paper discusses a new capability developed for, and results from, a field portable test set for Gen 2 and Gen 3 Image Intensifier (I2) tube-based Night Vision Goggles (NVG). A previous paper described the test set and the automated and semi-automated tests supported for NVGs, including a Knife Edge MTF test to replace the operator's interpretation of the USAF 1951 resolution chart. The major improvement and innovation detailed in this paper is the use of image analysis algorithms to automate the characterization of spot defects of I² tubes with the same test set hardware previously presented. The original and still common Spot Defect Test requires the operator to look through the NVGs at a target of concentric rings, compare the size of the defects to a chart, and manually enter the results into a table based on the size and location of each defect; this is tedious and subjective. The prior semi-automated improvement captures and displays an image of the defects and the rings, allowing the operator to determine the defects with less eyestrain while electronically storing the image and the resulting table. The advanced Automated Spot Defect Test utilizes machine vision algorithms to determine the size and location of the defects, generates the result table automatically, and then records the image and the results in a computer-generated report easily usable for verification. This is inherently a more repeatable process that ensures consistent spot detection independent of the operator. Results from several NVGs are presented.
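The automated spot-defect characterization described above amounts to thresholding an intensity image and labeling connected dark regions. The following is a minimal self-contained sketch of that general machine-vision step (the toy image, threshold, and 4-connectivity are assumptions, not the test set's actual algorithm).

```python
# Minimal sketch of automated spot-defect characterization: threshold an
# intensity image, then label connected dark spots and report each spot's
# pixel count and centroid. The image and parameters are invented.

def label_spots(image, threshold):
    """Return (size, centroid) for each 4-connected region below threshold."""
    rows, cols = len(image), len(image[0])
    seen, spots = set(), []
    for r0 in range(rows):
        for c0 in range(cols):
            if image[r0][c0] < threshold and (r0, c0) not in seen:
                stack, pixels = [(r0, c0)], []
                seen.add((r0, c0))
                while stack:                       # flood fill one region
                    r, c = stack.pop()
                    pixels.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and image[nr][nc] < threshold
                                and (nr, nc) not in seen):
                            seen.add((nr, nc))
                            stack.append((nr, nc))
                size = len(pixels)
                cy = sum(p[0] for p in pixels) / size
                cx = sum(p[1] for p in pixels) / size
                spots.append((size, (cy, cx)))
    return spots

# Toy 6x6 intensity image with two dark defects (low values = dark spots).
img = [[9, 9, 9, 9, 9, 9],
       [9, 1, 1, 9, 9, 9],
       [9, 1, 1, 9, 9, 9],
       [9, 9, 9, 9, 2, 9],
       [9, 9, 9, 9, 9, 9],
       [9, 9, 9, 9, 9, 9]]
print(label_spots(img, threshold=5))  # → [(4, (1.5, 1.5)), (1, (3.0, 4.0))]
```

A production system would map each centroid to the concentric-ring zone it falls in and bin sizes against the defect chart to fill the result table automatically.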
Application of two tests of multivariate discordancy to fisheries data sets
Stapanian, M.A.; Kocovsky, P.M.; Garner, F.C.
2008-01-01
The generalized (Mahalanobis) distance and multivariate kurtosis are two powerful tests of multivariate discordancies (outliers). Unlike the generalized distance test, the multivariate kurtosis test has not been applied as a test of discordancy to fisheries data heretofore. We applied both tests, along with published algorithms for identifying suspected causal variable(s) of discordant observations, to two fisheries data sets from Lake Erie: total length, mass, and age from 1,234 burbot, Lota lota; and 22 combinations of unique subsets of 10 morphometrics taken from 119 yellow perch, Perca flavescens. For the burbot data set, the generalized distance test identified six discordant observations and the multivariate kurtosis test identified 24 discordant observations. In contrast with the multivariate tests, the univariate generalized distance test identified no discordancies when applied separately to each variable. Removing discordancies had a substantial effect on length-versus-mass regression equations. For 500-mm burbot, the percent difference in estimated mass after removing discordancies in our study was greater than the percent difference in masses estimated for burbot of the same length in lakes that differed substantially in productivity. The number of discordant yellow perch detected ranged from 0 to 2 with the multivariate generalized distance test and from 6 to 11 with the multivariate kurtosis test. With the kurtosis test, 108 yellow perch (90.7%) were identified as discordant in zero to two combinations, and five (4.2%) were identified as discordant in either all or 21 of the 22 combinations. The relationship among the variables included in each combination determined which variables were identified as causal. The generalized distance test identified between zero and six discordancies when applied separately to each variable. 
Removing the discordancies found in at least one-half of the combinations (k=5) had a marked effect on a principal components analysis. In particular, the percentages of the total variation explained by the second and third principal components, which explain shape, increased by 52% and 44%, respectively, when the discordancies were removed. Multivariate applications of the tests have numerous ecological advantages over univariate applications, including improved management of fish stocks and interpretation of multivariate morphometric data. © 2007 Springer Science+Business Media B.V.
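The generalized (Mahalanobis) distance screen used in the study above can be sketched in a few lines of NumPy. The fish measurements below are invented, and flagging the largest distance is only an illustration; the paper's actual test compares distances against tabulated critical values.

```python
import numpy as np

# Sketch of the generalized (Mahalanobis) distance screen for multivariate
# discordancies, applied to invented length/mass/age-style data with one
# planted outlier. Data and the "largest distance" readout are illustrative.

def mahalanobis_distances(data):
    """Squared generalized distance of each row from the sample mean."""
    centered = data - data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    return np.einsum("ij,jk,ik->i", centered, cov_inv, centered)

rng = np.random.default_rng(1)
fish = rng.normal([400.0, 500.0, 5.0], [50.0, 60.0, 1.0], size=(100, 3))
fish[0] = [400.0, 2000.0, 5.0]  # planted discordant mass for a normal length

d2 = mahalanobis_distances(fish)
print(int(np.argmax(d2)))  # index of the most discordant observation
```

Note that the planted fish is unremarkable in every single variable's marginal range except mass; it is the joint length-mass relationship that makes it discordant, which is why the univariate test in the study missed outliers the multivariate tests caught.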
Dual-axis resonance testing of wind turbine blades
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, Scott; Musial, Walter; White, Darris
An apparatus (100) for fatigue testing test articles (104) including wind turbine blades. The apparatus (100) includes a test stand (110) that rigidly supports an end (106) of the test article (104). An actuator assembly (120) is attached to the test article (104) and is adapted for substantially concurrently imparting first and second forcing functions in first and second directions on the test article (104), with the first and second directions being perpendicular to a longitudinal axis. A controller (130) transmits first and second sets of displacement signals (160, 164) to the actuator assembly (120) at two resonant frequencies of the test system (104). The displacement signals (160, 164) initiate the actuator assembly (120) to impart the forcing loads to concurrently oscillate the test article (104) in the first and second directions. With turbine blades, the blades (104) are resonance-tested concurrently for fatigue in the flapwise and edgewise directions.
Polar source analysis : technical memorandum
DOT National Transportation Integrated Search
2017-09-29
The following technical memorandum describes the development, testing, and analysis of various polar source data sets. The memorandum also includes recommendations for potential inclusion in future releases of AEDT. This memorandum is the final deliver...
Noise data from tests of a 1.83-meter (6-ft) diameter variable-pitch 1.2-pressure-ratio fan (QF-9)
NASA Technical Reports Server (NTRS)
Glaser, F. W.; Wazyniak, J. A.; Friedman, R.
1975-01-01
Acoustic and aerodynamic data for a 1.83-meter (6-ft.) diameter fan suitable for a quiet engine for short-takeoff-and-landing (STOL) aircraft are documented. The QF-9 rotor blades had an adjustable pitch feature which provided a means for testing at several rotor blade setting angles, including one for reverse thrust. The fan stage incorporated features for low noise. Far-field noise around the fan was measured without acoustic suppression over a range of operating conditions for six different rotor blade setting angles in the forward thrust configuration, and for one in the reverse configuration. Complete results of one-third-octave band analysis of the data are presented in tabular form. Also included are power spectra, data referred to the source, and sideline perceived noise levels.
Product assurance technology for procuring reliable, radiation-hard, custom LSI/VLSI electronics
NASA Technical Reports Server (NTRS)
Buehler, M. G.; Allen, R. A.; Blaes, B. R.; Hicks, K. A.; Jennings, G. A.; Lin, Y.-S.; Pina, C. A.; Sayah, H. R.; Zamani, N.
1989-01-01
Advanced measurement methods using microelectronic test chips are described. These chips are intended to be used in acquiring the data needed to qualify Application Specific Integrated Circuits (ASIC's) for space use. Efforts were focused on developing the technology for obtaining custom IC's from CMOS/bulk silicon foundries. A series of test chips were developed: a parametric test strip, a fault chip, a set of reliability chips, and the CRRES (Combined Release and Radiation Effects Satellite) chip, a test circuit for monitoring space radiation effects. The technical accomplishments of the effort include: (1) development of a fault chip that contains a set of test structures used to evaluate the density of various process-induced defects; (2) development of new test structures and testing techniques for measuring gate-oxide capacitance, gate-overlap capacitance, and propagation delay; (3) development of a set of reliability chips that are used to evaluate failure mechanisms in CMOS/bulk: interconnect and contact electromigration and time-dependent dielectric breakdown; (4) development of MOSFET parameter extraction procedures for evaluating subthreshold characteristics; (5) evaluation of test chips and test strips on the second CRRES wafer run; (6) two dedicated fabrication runs for the CRRES chip flight parts; and (7) publication of two papers: one on the split-cross bridge resistor and another on asymmetrical SRAM (static random access memory) cells for single-event upset analysis.
Timmings, Caitlyn; Khan, Sobia; Moore, Julia E; Marquez, Christine; Pyka, Kasha; Straus, Sharon E
2016-02-24
To address challenges related to selecting a valid, reliable, and appropriate readiness assessment measure in practice, we developed an online decision support tool to aid frontline implementers in healthcare settings in this process. The focus of this paper is to describe a multi-step, end-user driven approach to developing this tool for use during the planning stages of implementation. A multi-phase, end-user driven approach was used to develop and test the usability of a readiness decision support tool. First, readiness assessment measures that are valid, reliable, and appropriate for healthcare settings were identified from a systematic review. Second, a mapping exercise was performed to categorize individual items of included measures according to key readiness constructs from an existing framework. Third, a modified Delphi process was used to collect stakeholder ratings of the included measures on domains of feasibility, relevance, and likelihood to recommend. Fourth, two versions of a decision support tool prototype were developed and evaluated for usability. Nine valid and reliable readiness assessment measures were included in the decision support tool. The mapping exercise revealed that of the nine measures, most measures (78 %) focused on assessing readiness for change at the organizational versus the individual level, and that four measures (44 %) represented all constructs of organizational readiness. During the modified Delphi process, stakeholders rated most measures as feasible and relevant for use in practice, and reported that they would be likely to recommend use of most measures. Using data from the mapping exercise and stakeholder panel, an algorithm was developed to link users to a measure based on characteristics of their organizational setting and their readiness for change assessment priorities. Usability testing yielded recommendations that were used to refine the Ready, Set, Change! decision support tool. Ready, Set, Change!
decision support tool is an implementation support that is designed to facilitate the routine incorporation of a readiness assessment as an early step in implementation. Use of this tool in practice may offer time and resource-saving implications for implementation.
Nonlinear seismic analysis of a reactor structure impact between core components
NASA Technical Reports Server (NTRS)
Hill, R. G.
1975-01-01
The seismic analysis of the FFTF-PIOTA (Fast Flux Test Facility-Postirradiation Open Test Assembly), subjected to a horizontal DBE (Design Base Earthquake), is presented. The PIOTA is the first in a set of open test assemblies to be designed for the FFTF. Employing the direct method of transient analysis, the governing differential equations describing the motion of the system are set up directly and are implicitly integrated numerically in time. A simple lumped-mass beam model of the FFTF, which includes small clearances between core components, is used as a "driver" for a fine-mesh model of the PIOTA. The nonlinear forces due to the impact of the core components and their effect on the PIOTA are computed.
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, A.B.; Sisolak, J.K.
1993-01-01
Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set.
As expected, predictive accuracy of all MAPs for the verification data set decreased as the calibration data-set size decreased, but predictive accuracy was not as sensitive for the MAPs as it was for the local regression models.
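The simplest of the procedures described above, single-factor regression against the regional model prediction, can be sketched directly: regress locally observed storm loads on the regional model's predictions, then use the fitted line to adjust predictions at unmonitored sites. All numbers below are invented calibration data, not values from the study.

```python
import numpy as np

# Sketch of a single-factor model-adjustment procedure (in the spirit of
# MAP-1F-P): fit local_obs = b0 + b1 * P by ordinary least squares, where P is
# the regional regression model's prediction. Data are invented.

rng = np.random.default_rng(2)
regional_pred = rng.uniform(1.0, 10.0, size=30)       # P from regional model
local_obs = 0.8 * regional_pred + 1.5 + rng.normal(0, 0.3, size=30)

# Design matrix with an intercept column; solve for (b0, b1).
X = np.column_stack([np.ones_like(regional_pred), regional_pred])
b0, b1 = np.linalg.lstsq(X, local_obs, rcond=None)[0]

# Adjusted prediction at an unmonitored site where the regional model says 6.0.
adjusted = b0 + b1 * 6.0
print(b0, b1, adjusted)
```

The other procedures extend this same calibration step, either by adding local explanatory variables to `X` (MAP-R-P+nV) or by weighting the regional and local predictions (MAP-W).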
Imaging indicator for ESD safety testing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whinnery, LeRoy L.,; Nissen, April; Keifer, Patrick N.
2013-05-01
This report describes the development of a new detection method for electrostatic discharge (ESD) testing of explosives, using a single-lens reflex (SLR) digital camera and a 200-mm macro lens. This method has demonstrated several distinct advantages over other current ESD detection methods, including the creation of a permanent record, an enlarged image for real-time viewing as well as extended periods of review, and the ability to combine with most other Go/No-Go sensors. This report includes details of the method, including camera settings and position, and results with well-characterized explosives PETN and RDX, and two ESD-sensitive aluminum powders.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellors, R J
The Comprehensive Nuclear Test Ban Treaty (CTBT) includes provisions for an on-site inspection (OSI), which allows the use of specific techniques to detect underground anomalies including cavities and rubble zones. One permitted technique is active seismic surveys such as seismic refraction or reflection. The purpose of this report is to conduct some simple modeling to evaluate the potential use of seismic reflection in detecting cavities and to test the use of open-source software in modeling possible scenarios. It should be noted that OSI inspections are conducted under specific constraints regarding duration and logistics. These constraints are likely to significantly impact active seismic surveying, as a seismic survey typically requires considerable equipment, effort, and expertise. For the purposes of this study, which is a first-order feasibility study, these issues will not be considered. This report provides a brief description of the seismic reflection method along with some commonly used software packages. This is followed by an outline of a simple processing stream based on a synthetic model, along with results from a set of models representing underground cavities. A set of scripts used to generate the models is presented in an appendix. We do not consider detection of underground facilities in this work, and the geologic setting used in these tests is an extremely simple one.
Ahmed, Kauser; Marchand, Erica; Williams, Victoria; Coscarelli, Anne; Ganz, Patricia A
2016-03-01
To describe the development, pilot testing, and dissemination of a psychosocial intervention addressing concerns of young breast cancer survivors (YBCS). Intervention development included needs assessment with community organizations and interviews with YBCS. Based on evidence-based models of treatment, the intervention included tools for managing anxiety and fear of recurrence, tools for decision-making, and tools for coping with sexuality/relationship issues. After pilot testing in a university setting, the program was disseminated to two community clinical settings. The program has two distinct modules (anxiety management and relationships/sexuality) that were delivered in two sessions; however, due to attrition, an all-day workshop evolved. An author-constructed questionnaire was used for pre- and post-intervention evaluation. Post-treatment scores showed an average increase of 2.7 points on a 10-point scale for the first module, and a 2.3-point increase for the second module. Qualitative feedback surveys were also collected. The two community sites demonstrated similar gains among their participants. The intervention satisfies an unmet need for YBCS and is a possible model of integrating psychosocial intervention with oncology care. This program developed standardized materials which can be disseminated to other organizations and potentially online for implementation within community settings. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Organizational adoption of preemployment drug testing.
Spell, C S; Blum, T C
2001-04-01
This study explored the adoption of preemployment drug testing by 360 organizations. Survival models were developed that included internal organizational and labor market factors hypothesized to affect the likelihood of adoption of drug testing. Also considered was another set of variables that included social and political variables based on institutional theory. An event history analysis using Cox regressions indicated that both internal organizational and environmental variables predicted adoption of drug testing. Results indicate that the higher the proportion of drug testers in the worksite's industry, the more likely it would be to adopt drug testing. Also, the extent to which an organization uses an internal labor market, voluntary turnover rate, and the extent to which management perceives drugs to be a problem were related to likelihood of adoption of drug testing.
Code of Federal Regulations, 2012 CFR
2012-04-01
... shall also include the manufacturer's name, plant location, and shelf life. (c) Periodic tests and quality assurance. Under the procedures set forth in § 200.935(d)(8) concerning periodic tests and quality... administrator. (2) The administrator shall also review the quality assurance procedures twice a year to assure...
Code of Federal Regulations, 2011 CFR
2011-04-01
... shall also include the manufacturer's name, plant location, and shelf life. (c) Periodic tests and quality assurance. Under the procedures set forth in § 200.935(d)(8) concerning periodic tests and quality... administrator. (2) The administrator shall also review the quality assurance procedures twice a year to assure...
Carlson, L K
2001-01-01
Set against the palm trees, riotous flowers, and golf courses of La Quinta Resort and Spa in California is something new -- a destination health center. Minutes from seaweed wraps and hot stone massage is the option of comprehensive health evaluations (including 150 diagnostic tests and genetic testing), longevity medicine, sports medicine, health coaches, and an array of personal wellness and corporate health programs.
ERIC Educational Resources Information Center
Cirignano, Sherri M.; Hughes, Luanne J.; Wu-Jung, Corey J.; Morgan, Kathleen; Grenci, Alexandra; Savoca, LeeAnne
2013-01-01
The Healthy, Hunger-Free Kids Act (HHFKA) of 2010 sets new nutrition standards for schools, requiring them to serve a greater variety and quantity of fruits and vegetables. Extension educators in New Jersey partnered with school nutrition professionals to implement a school wellness initiative that included taste-testing activities to support…
The Golden Rule Agreement is Psychometrically Defensible.
ERIC Educational Resources Information Center
Gonzalez-Tamayo, Eulogio
The agreement between the Educational Testing Service (ETS) and the Golden Rule Insurance Company of Illinois is interpreted as setting the general principles on which items must be selected to be included in a licensure test. These principles put a limit to the difficulty level of any item, and they also limit the size of the difference in…
On the performance of the HAL/S-FC compiler. [for space shuttles
NASA Technical Reports Server (NTRS)
Martin, F. H.
1975-01-01
The HAL/S compilers which will be used in the space shuttles are described. Acceptance test objectives and procedures are described, the raw results are presented and analyzed, and conclusions and observations are drawn. An appendix is included containing an illustrative set of compiler listings and results for one of the test cases.
The path for incorporating new approach methods and technologies into quantitative chemical risk assessment poses a diverse set of scientific challenges. These challenges include sufficient coverage of toxicological mechanisms to meaningfully interpret negative test results, dev...
Effects of hearing aid settings for electric-acoustic stimulation.
Dillon, Margaret T; Buss, Emily; Pillsbury, Harold C; Adunka, Oliver F; Buchman, Craig A; Adunka, Marcia C
2014-02-01
Cochlear implant (CI) recipients with postoperative hearing preservation may utilize an ipsilateral bimodal listening condition known as electric-acoustic stimulation (EAS). Studies on EAS have reported significant improvements in speech perception abilities over CI-alone listening conditions. Adjustments to the hearing aid (HA) settings to match prescription targets routinely used in the programming of conventional amplification may provide additional gains in speech perception abilities. Investigate the difference in users' speech perception scores when listening with the recommended HA settings for EAS patients versus HA settings adjusted to match National Acoustic Laboratories' nonlinear fitting procedure version 1 (NAL-NL1) targets. Prospective analysis of the influence of HA settings. Nine EAS recipients with greater than 12 mo of listening experience with the DUET speech processor. Subjects were tested in the EAS listening condition with two different HA setting configurations. Speech perception materials included consonant-nucleus-consonant (CNC) words in quiet, AzBio sentences in 10-talker speech babble at a signal-to-noise ratio (SNR) of +10, and the Bamford-Kowal-Bench sentences in noise (BKB-SIN) test. The speech perception performance on each test measure was compared between the two HA configurations. Subjects experienced a significant improvement in speech perception abilities with the HA settings adjusted to match NAL-NL1 targets over the recommended HA settings. EAS subjects have been shown to experience improvements in speech perception abilities when listening to ipsilateral combined stimulation. This population's abilities may be underestimated with current HA settings. Tailoring the HA output to the patient's individual hearing loss offers improved outcomes on speech perception measures. American Academy of Audiology.
SU-E-T-468: Implementation of the TG-142 QA Process for Seven Linacs with Enhanced Beam Conformance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woollard, J; Ayan, A; DiCostanzo, D
2015-06-15
Purpose: To develop a TG-142 compliant QA process for 7 Varian TrueBeam linear accelerators (linacs) with enhanced beam conformance and dosimetrically matched beam models. To ensure consistent performance of all 7 linacs, the QA process should include a common set of baseline values for use in routine QA on all linacs. Methods: The TG-142 report provides recommended tests, tolerances and frequencies for quality assurance of medical accelerators. Based on the guidance provided in the report, measurement tests were developed to evaluate each of the applicable parameters listed for daily, monthly and annual QA. These tests were then performed on each of our 7 new linacs as they came on line at our institution. Results: The tolerance values specified in TG-142 for each QA test are either absolute tolerances (e.g., ±2 mm) or require a comparison to a baseline value. The results of our QA tests were first used to ensure that all 7 linacs were operating within the suggested tolerance values provided in TG-142 for those tests with absolute tolerances and that the performance of the linacs was adequately matched. The QA test results were then used to develop a set of common baseline values for those QA tests that require comparison to a baseline value at routine monthly and annual QA. The procedures and baseline values were incorporated into spreadsheets for use in monthly and annual QA. Conclusion: We have developed a set of procedures for daily, monthly and annual QA of our linacs that are consistent with the TG-142 report. A common set of baseline values was developed for routine QA tests. The use of this common set of baseline values for comparison at monthly and annual QA will ensure consistent performance of all 7 linacs.
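The two kinds of tolerance check described in the Results, an absolute tolerance versus a comparison against a stored baseline, can be sketched as below. The tolerance values and function names are illustrative assumptions, not text from the TG-142 report:

```python
# Illustrative QA pass/fail checks: a measurement passes either against
# an absolute tolerance (e.g. +/- 2 mm) or against a stored baseline
# value with a relative tolerance. Numbers are invented for illustration.

def passes_absolute(measured, nominal, tol):
    """True if |measured - nominal| is within the absolute tolerance."""
    return abs(measured - nominal) <= tol

def passes_baseline(measured, baseline, rel_tol):
    """True if measured is within rel_tol (fraction) of the baseline."""
    return abs(measured - baseline) <= rel_tol * abs(baseline)

print(passes_absolute(101.5, 100.0, 2.0))   # -> True  (1.5 mm off, 2 mm allowed)
print(passes_baseline(1.06, 1.0, 0.03))     # -> False (6% off, 3% allowed)
```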
Liver Rapid Reference Set Application: Hemken - Abbott (2015) — EDRN Public Portal
The aim for this testing is to find a small panel of biomarkers (n=2-5) that can be tested on the Abbott ARCHITECT automated immunoassay platform for the early detection of hepatocellular carcinoma (HCC). This panel of biomarkers should perform significantly better than alpha-fetoprotein (AFP) alone based on multivariate statistical analysis. This testing of the EDRN reference set will help expedite the selection of a small panel of ARCHITECT biomarkers for the early detection of HCC. The panel of ARCHITECT biomarkers Abbott plans to test includes: AFP, protein induced by vitamin K absence or antagonist-II (PIVKA-II), Golgi protein 73 (GP73), hepatocyte growth factor (HGF), dipeptidyl peptidase 4 (DPP4) and DPP4/seprase (surface expressed protease) heterodimer hybrid. PIVKA-II is abnormal des-carboxylated prothrombin (DCP) present in vitamin K deficiency.
Pareto fronts for multiobjective optimization design on materials data
NASA Astrophysics Data System (ADS)
Gopakumar, Abhijith; Balachandran, Prasanna; Gubernatis, James E.; Lookman, Turab
Optimizing multiple properties simultaneously is vital in materials design. Here we apply information-driven, statistical optimization strategies blended with machine learning methods to address multi-objective optimization tasks on materials data. These strategies aim to find the Pareto front consisting of non-dominated data points from a set of candidate compounds with known characteristics. The objective is to find the Pareto front in as few additional measurements or calculations as possible. We show how exploration of the data space to find the front is achieved by using uncertainties in predictions from regression models. We test our proposed design strategies on multiple, independent data sets including those from computations as well as experiments. These include data sets for MAX phases, piezoelectrics and multicomponent alloys.
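The notion of a Pareto front of non-dominated candidates can be illustrated with a direct brute-force sketch (the candidate points are invented, and this is not the authors' information-driven strategy, which avoids evaluating every candidate):

```python
# Extract the Pareto front (non-dominated points) from candidates with
# two properties, both to be maximized. Brute force, for illustration.

def dominates(a, b):
    """a dominates b if a >= b in every objective and > in at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only points not dominated by any other candidate."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

candidates = [(1, 5), (2, 4), (3, 3), (2, 2), (4, 1), (0, 0)]
print(sorted(pareto_front(candidates)))
# -> [(1, 5), (2, 4), (3, 3), (4, 1)]
```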
Bonnevie, Tristan; Gravier, Francis-Edouard; Leboullenger, Marie; Médrinal, Clément; Viacroze, Catherine; Cuvelier, Antoine; Muir, Jean-François; Tardif, Catherine; Debeaumont, David
2017-06-01
Pulmonary rehabilitation (PR) improves outcomes in patients with chronic obstructive pulmonary disease (COPD). Optimal assessment includes cardiopulmonary exercise testing (CPET), but consultations are limited. Field tests could be used to individualize PR instead of CPET. The six-minute stepper test (6MST) is easy to set up and its sensitivity and reproducibility have previously been reported in patients with COPD. The aim of this study was to develop a prediction equation to set intensity in patients attending PR, based on the 6MST. The following relationships were analyzed: mean heart rate (HR) during the first (HR1-3) and last (HR4-6) 3 minutes of the 6MST and HR at the ventilatory threshold (HRvt) from CPET; step count at the end of the 6MST and workload at the ventilatory threshold (VT) (Wvt); and forced expiratory volume in 1 second and step count during the 6MST. This retrospective study included patients with COPD referred for PR who underwent CPET, pulmonary function evaluations and the 6MST. Twenty-four patients were included. Prediction equations were HRvt = 0.7887 × HR1-3 + 20.83 and HRvt = 0.6180 × HR4-6 + 30.77. There was a strong correlation between HR1-3 and HR4-6 and HRvt (r = 0.69, p < 0.001 and r = 0.57, p < 0.01, respectively). A significant correlation was also found between step count and LogWvt (r = 0.63, p < 0.01). The prediction equation was LogWvt = 0.001722 × step count + 1.248. The 6MST could be used to individualize aerobic training in patients with COPD. Further prospective studies are needed to confirm these results.
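The reported prediction equations can be coded directly. The coefficients below come from the abstract itself; the base-10 logarithm used to invert LogWvt is an assumption that would need to be checked against the full paper:

```python
# Prediction equations from the abstract, coded as plain functions.
# Coefficients are as reported; log base 10 for Wvt is an assumption.

def hr_vt_from_first3(hr_1_3):
    """HR at ventilatory threshold from mean HR of minutes 1-3 of the 6MST."""
    return 0.7887 * hr_1_3 + 20.83

def hr_vt_from_last3(hr_4_6):
    """HR at ventilatory threshold from mean HR of minutes 4-6 of the 6MST."""
    return 0.6180 * hr_4_6 + 30.77

def workload_vt(step_count):
    """Workload at ventilatory threshold (W) from 6MST step count."""
    log_wvt = 0.001722 * step_count + 1.248
    return 10 ** log_wvt            # assumes LogWvt is a base-10 logarithm

print(round(hr_vt_from_first3(100), 2))  # -> 99.7
```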
ASVCP guidelines: quality assurance for point-of-care testing in veterinary medicine.
Flatland, Bente; Freeman, Kathleen P; Vap, Linda M; Harr, Kendal E
2013-12-01
Point-of-care testing (POCT) refers to any laboratory testing performed outside the conventional reference laboratory and implies close proximity to patients. Instrumental POCT systems consist of small, handheld or benchtop analyzers. These have potential utility in many veterinary settings, including private clinics, academic veterinary medical centers, the community (eg, remote area veterinary medical teams), and for research applications in academia, government, and industry. Concern about the quality of veterinary in-clinic testing has been expressed in published veterinary literature; however, little guidance focusing on POCT is available. Recognizing this void, the ASVCP formed a subcommittee in 2009 charged with developing quality assurance (QA) guidelines for veterinary POCT. Guidelines were developed through literature review and a consensus process. Major recommendations include (1) taking a formalized approach to POCT within the facility, (2) use of written policies, standard operating procedures, forms, and logs, (3) operator training, including periodic assessment of skills, (4) assessment of instrument analytical performance and use of both statistical quality control and external quality assessment programs, (5) use of properly established or validated reference intervals, (6) and ensuring accurate patient results reporting. Where possible, given instrument analytical performance, use of a validated 13s control rule for interpretation of control data is recommended. These guidelines are aimed at veterinarians and veterinary technicians seeking to improve management of POCT in their clinical or research setting, and address QA of small chemistry and hematology instruments. These guidelines are not intended to be all-inclusive; rather, they provide a minimum standard for maintenance of POCT instruments in the veterinary setting. © 2013 American Society for Veterinary Clinical Pathology and European Society for Veterinary Clinical Pathology.
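The 13s control rule recommended in the guidelines can be sketched as a single-observation check against mean ± 3 SD control limits. The numeric values below are invented for illustration:

```python
# Westgard 1-3s control rule sketch: a QC run is rejected when a single
# control observation falls outside the established mean +/- 3 SD limits.
# The mean and SD would come from the instrument's validated QC history.

def rule_1_3s(result, mean, sd):
    """Return True if the run should be rejected under the 1-3s rule."""
    return abs(result - mean) > 3 * sd

print(rule_1_3s(10.7, 10.0, 0.2))  # -> True  (3.5 SD away)
print(rule_1_3s(10.5, 10.0, 0.2))  # -> False (2.5 SD away)
```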
Paul, Topon Kumar; Iba, Hitoshi
2009-01-01
In order to get a better understanding of different types of cancers and to find the possible biomarkers for diseases, recently, many researchers are analyzing the gene expression data using various machine learning techniques. However, due to a very small number of training samples compared to the huge number of genes and class imbalance, most of these methods suffer from overfitting. In this paper, we present a majority voting genetic programming classifier (MVGPC) for the classification of microarray data. Instead of a single rule or a single set of rules, we evolve multiple rules with genetic programming (GP) and then apply those rules to test samples to determine their labels with majority voting technique. By performing experiments on four different public cancer data sets, including multiclass data sets, we have found that the test accuracies of MVGPC are better than those of other methods, including AdaBoost with GP. Moreover, some of the more frequently occurring genes in the classification rules are known to be associated with the types of cancers being studied in this paper.
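The majority-voting step of an ensemble classifier like MVGPC can be sketched as follows. The stand-in "rules" here are trivial threshold functions for illustration, not evolved genetic-programming rules:

```python
# Majority voting over multiple classification rules: each rule votes a
# class label for a test sample; the most frequent label wins.
from collections import Counter

def majority_vote(rules, sample):
    """Apply every rule to the sample and return the most common label."""
    votes = [rule(sample) for rule in rules]
    return Counter(votes).most_common(1)[0][0]

# Toy stand-ins for evolved rules over gene-expression values.
rules = [
    lambda s: "tumor" if s["gene_a"] > 1.0 else "normal",
    lambda s: "tumor" if s["gene_b"] > 0.5 else "normal",
    lambda s: "normal",                      # a rule that always says normal
]
print(majority_vote(rules, {"gene_a": 2.0, "gene_b": 0.9}))  # -> tumor
```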
Pereyra, Margaret; Parish, Carrigan L.; Abel, Stephen; Messinger, Shari; Singer, Richard; Kunzel, Carol; Greenberg, Barbara; Gerbert, Barbara; Glick, Michael; Metsch, Lisa R.
2014-01-01
Objectives. Using a nationally representative survey, we determined dentists’ willingness to provide oral rapid HIV screening in the oral health care setting. Methods. From November 2010 through November 2011, a nationally representative survey of general dentists (sampling frame obtained from American Dental Association Survey Center) examined barriers and facilitators to offering oral HIV rapid testing (n = 1802; 70.7% response). Multiple logistic regression analysis examined dentists’ willingness to conduct this screening and perceived compatibility with their professional role. Results. Agreement with the importance of annual testing for high-risk persons and familiarity with the Centers for Disease Control and Prevention’s recommendations regarding routine HIV testing were positively associated with willingness to conduct such screening. Respondents’ agreement with patients’ acceptance of HIV testing and colleagues’ improved perception of them were also positively associated with willingness. Conclusions. Oral HIV rapid testing is potentially well suited to the dental setting. Although our analysis identified many predictors of dentists’ willingness to offer screening, there are many barriers, including dentists’ perceptions of patients’ acceptance, that must be addressed before such screening is likely to be widely implemented. PMID:24625163
Werner, Simone; Krause, Friedemann; Rolny, Vinzent; Strobl, Matthias; Morgenstern, David; Datz, Christian; Chen, Hongda; Brenner, Hermann
2016-04-01
In initial studies that included colorectal cancer patients undergoing diagnostic colonoscopy, we had identified a serum marker combination able to detect colorectal cancer with similar diagnostic performance as fecal immunochemical test (FIT). In this study, we aimed to validate the results in participants of a large colorectal cancer screening study conducted in the average-risk, asymptomatic screening population. We tested serum samples from 1,200 controls, 420 advanced adenoma patients, 4 carcinoma in situ patients, and 36 colorectal cancer patients with a 5-marker blood test [carcinoembryonic antigen (CEA)+anti-p53+osteopontin+seprase+ferritin]. The diagnostic performance of individual markers and marker combinations was assessed and compared with stool test results. AUCs for the detection of colorectal cancer and advanced adenomas with the 5-marker blood test were 0.78 [95% confidence interval (CI), 0.68-0.87] and 0.56 (95% CI, 0.53-0.59), respectively, which now is comparable with guaiac-based fecal occult blood test (gFOBT) but inferior to FIT. With cutoffs yielding specificities of 80%, 90%, and 95%, the sensitivities for the detection of colorectal cancer were 64%, 50%, and 42%, and early-stage cancers were detected as well as late-stage cancers. For osteopontin, seprase, and ferritin, the diagnostic performance in the screening setting was reduced compared with previous studies in diagnostic settings while CEA and anti-p53 showed similar diagnostic performance in both settings. Performance of the 5-marker blood test under screening conditions is inferior to FIT even though it is still comparable with the performance of gFOBT. CEA and anti-p53 could contribute to the development of a multiple marker blood-based test for early detection of colorectal cancer. ©2015 American Association for Cancer Research.
Transport Test Problems for Hybrid Methods Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.
2011-12-28
This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.
May, Larissa; Mullins, Peter; Pines, Jesse
2013-01-01
Objectives Many factors may influence choice of care setting for treatment of acute infections. The authors evaluated a national sample of U.S. outpatient clinic and emergency department (ED) visits for three common infections (urinary tract infection [UTI], skin and soft tissue infection [SSTI], and upper respiratory infection [URI]), comparing setting, demographics, and care. Methods This was a retrospective analysis of 2006–2010 data from the National Hospital Ambulatory Care Survey (NHAMCS) and National Ambulatory Care Survey (NAMCS). Patients age ≥ 18 years with primary diagnoses of UTI, URI, and SSTI were the visits of interest. Demographics, tests, and prescriptions were compared, divided by ED versus outpatient setting using bivariate statistics. Results Between 2006 and 2010, there were an estimated 40.9 million ambulatory visits for UTI, 168.3 million visits for URI, and 34.8 million visits for SSTI; 24% of UTI, 11% of URI, and 33% of SSTI visits were seen in EDs. Across all groups, ED patients were more commonly younger and black and had Medicaid or no insurance. ED patients had more blood tests (54% vs. 22% for UTI, 21% vs. 14% for URI, and 25% vs. 20% for SSTI) and imaging studies (31% vs. 9% for UTI, 27% vs. 8% for URI, and 16% vs. 5% for SSTI). Pain medications were more frequently used in the ED; over one-fifth of UTI and SSTI visits included narcotics. In both settings, greater than 50% of URI visits received antibiotics; more than 40% of UTI ED visits included broad-spectrum fluoroquinolones. Conclusions Emergency departments treated a considerable proportion of U.S. ambulatory infections from 2006 to 2010. Patient factors, including the presence of acute pain and access to care, appear to influence choice of care setting. Observed antibiotic use in both settings suggests a need for optimizing antibiotic use. PMID:24552520
Markman, John D; Barbosa, William A; Gewandter, Jennifer S; Frazer, Maria; Rast, Shirley; Dugan, Michelle; Nandigam, Kiran; Villareal, Armando; Kwong, Tai C
2015-06-01
To determine whether the prevailing liquid chromatography-tandem mass spectrometry (LC-MS/MS) assay designed to monitor buprenorphine compliance of the sublingual formulation used in the substance abuse treatment setting can be extrapolated to the transdermal formulation used in the chronic pain treatment setting, which is 1000-fold less concentrated. Retrospective chart review. Self-reported compliant patients using the transdermal or sublingual formulations of buprenorphine. Transdermal patch application was also visually confirmed during clinic visits. Urine drug test results from a LC-MS/MS were compared between samples from transdermal and sublingual patients. While all sublingual patients tested positive for at least one metabolite of buprenorphine, only 69% of the transdermal patients did so. In addition, the most abundant metabolite in the transdermal patients was buprenorphine-glucuronide, as compared with norbuprenorphine-glucuronide in sublingual patients. These data suggest that currently available urine drug tests for buprenorphine, including the more expensive LC-MS/MS based assays, may not be sufficiently sensitive to detect the metabolites from transdermal buprenorphine patients. This study highlights the need to evaluate the value and sensitivity of urine drug tests given the wide range of buprenorphine dosing in clinical practice. These results underscore the need for additional cost-benefit analyses comparing different confirmatory drug testing techniques, including many commercially available drug testing options. © 2014 Wiley Periodicals, Inc.
Results of the long range position-determining system tests. [Field Army system
NASA Technical Reports Server (NTRS)
Rhode, F. W.
1973-01-01
The long range position-determining system (LRPDS) has been developed by the Corps of Engineers to provide the Field Army with a rapid and accurate positioning capability. The LRPDS consists of an airborne reference position set (RPS), up to 30 ground based positioning sets (PS), and a position computing central (PCC). The PCC calculates the position of each PS based on the range change information provided by each set. The positions can be relayed back to the PS again via the RPS. Each PS unit contains a double oven precise crystal oscillator. The RPS contains a Hewlett-Packard cesium beam standard. Frequency drifts and offsets of the crystal oscillators are taken into account in the data reduction process. A field test program was initiated in November 1972. A total of 54 flights were made, which included six flights for equipment testing and 48 flights utilizing the field test data reduction program. The four general types of PS layouts used were: short range; medium range; long range; tactical configuration. The overall RMS radial error of the unknown positions varied from about 2.3 meters for the short range to about 15 meters for the long range. The corresponding elevation RMS errors vary from about 12 meters to 37 meters.
Solid state Impatt Amplifiers performance data
DOT National Transportation Integrated Search
1973-12-01
Evaluation data on an 8-watt and a 16-watt Impatt Amplifier are presented to concisely describe the performance of these amplifiers. The data include component specifications and photographs, TSC test set-up configuration, amplitude and phase character...
DISCOVER-AQ Acoustics : Measurement and Data Report.
DOT National Transportation Integrated Search
2015-09-01
The following report documents the acoustic measurements that supplemented the September 2013 NASA DISCOVER-AQ flight tests in Houston, Texas and the corresponding data set developed from those measurements. These data include aircraft performance an...
ERIC Educational Resources Information Center
Fischer, Richard B.
1986-01-01
Defines key terms and discusses things to consider when setting fees for a continuing education program. These include (1) the organization's philosophy and mission, (2) certain key variables, (3) pricing strategy options, and (4) the test of reasonableness. (CH)
Algorithms for Learning Preferences for Sets of Objects
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; desJardins, Marie; Eaton, Eric
2010-01-01
A method is being developed that provides for an artificial-intelligence system to learn a user's preferences for sets of objects and to thereafter automatically select subsets of objects according to those preferences. The method was originally intended to enable automated selection, from among large sets of images acquired by instruments aboard spacecraft, of image subsets considered to be scientifically valuable enough to justify use of limited communication resources for transmission to Earth. The method is also applicable to other sets of objects: examples of sets of objects considered in the development of the method include food menus, radio-station music playlists, and assortments of colored blocks for creating mosaics. The method does not require the user to perform the often-difficult task of quantitatively specifying preferences; instead, the user provides examples of preferred sets of objects. This method goes beyond related prior artificial-intelligence methods for learning which individual items are preferred by the user: this method supports a concept of set-based preferences, which include not only preferences for individual items but also preferences regarding types and degrees of diversity of items in a set. Consideration of diversity in this method involves recognition that members of a set may interact with each other in the sense that when considered together, they may be regarded as being complementary, redundant, or incompatible to various degrees. The effects of such interactions are loosely summarized in the term portfolio effect. The learning method relies on a preference representation language, denoted DD-PREF, to express set-based preferences. In DD-PREF, a preference is represented by a tuple that includes quality (depth) functions to estimate how desired a specific value is, weights for each feature preference, the desired diversity of feature values, and the relative importance of diversity versus depth.
The system applies statistical concepts to estimate quantitative measures of the user's preferences from training examples (preferred subsets) specified by the user. Once preferences have been learned, the system uses those preferences to select preferred subsets from new sets. The method was found to be viable when tested in computational experiments on menus, music playlists, and rover images. Contemplated future development efforts include further tests on more diverse sets and development of a sub-method for (a) estimating the parameter that represents the relative importance of diversity versus depth, and (b) incorporating background knowledge about the nature of quality functions, which are special functions that specify depth preferences for features.
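The depth-versus-diversity trade-off described for DD-PREF can be caricatured with a toy scoring function. The weighting scheme and the crude diversity measure below are illustrative assumptions, not the actual DD-PREF semantics:

```python
# Toy depth-vs-diversity score for a candidate subset: a weighted sum of
# average item quality ("depth") and spread of item values ("diversity").
# alpha is the relative importance of diversity versus depth.

def subset_score(items, quality, alpha):
    """Score a subset; quality maps an item to [0, 1]."""
    depth = sum(quality(x) for x in items) / len(items)
    diversity = len(set(items)) / len(items)   # crude: fraction distinct
    return alpha * diversity + (1 - alpha) * depth

def quality(x):
    return x / 10.0                 # toy depth function: prefer larger values

print(round(subset_score([8, 9, 9], quality, 0.5), 4))  # -> 0.7667
```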
NASA Astrophysics Data System (ADS)
Richards, Joseph W.; Starr, Dan L.; Brink, Henrik; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; James, J. Berian; Long, James P.; Rice, John
2012-01-01
Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.
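The core active-learning query step, selecting the unlabeled object the current model is least confident about for manual follow-up, can be sketched as below. The classifier here is a toy stand-in, not the variable-star model from the study:

```python
# Active-learning query sketch: among unlabeled objects, pick the one
# the current classifier is least confident about, so a human can label
# it and the model can be retrained on the enlarged training set.

def query_most_uncertain(classifier, unlabeled):
    """Return the unlabeled object with the lowest model confidence."""
    return min(unlabeled, key=lambda x: classifier(x)[1])

def classifier(x):
    """Toy model: x plays the role of P(class = variable)."""
    label = "variable" if x > 0.5 else "constant"
    confidence = abs(x - 0.5)       # confident far from the 0.5 boundary
    return label, confidence

pool = [0.9, 0.52, 0.1, 0.45]
print(query_most_uncertain(classifier, pool))  # -> 0.52
```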
Dimech, Wayne; Karakaltsas, Marina; Vincini, Giuseppe A
2018-05-25
A general trend towards conducting infectious disease serology testing in centralized laboratories means that quality control (QC) principles used for clinical chemistry testing are applied to infectious disease testing. However, no systematic assessment of methods used to establish QC limits has been applied to infectious disease serology testing. A total of 103 QC data sets, obtained from six different infectious disease serology analytes, were parsed through standard methods for establishing statistical control limits, including guidelines from Public Health England, USA Clinical and Laboratory Standards Institute (CLSI), German Richtlinien der Bundesärztekammer (RiliBÄK) and Australian QConnect. The percentage of QC results failing each method was compared. The percentage of data sets having more than 20% of QC results failing Westgard rules when the first 20 results were used to calculate the mean±2 standard deviation (SD) ranged from 3 (2.9%) for R4S to 66 (64.1%) for 10X rule, whereas the percentage ranged from 0 (0%) for R4S to 32 (40.5%) for 10X when the first 100 results were used to calculate the mean±2 SD. By contrast, the percentage of data sets with >20% failing the RiliBÄK control limits was 25 (24.3%). Only two data sets (1.9%) had more than 20% of results outside the QConnect Limits. The rate of failure of QCs using QConnect Limits was more applicable for monitoring infectious disease serology testing compared with UK Public Health, CLSI and RiliBÄK, as the alternatives to QConnect Limits reported an unacceptably high percentage of failures across the 103 data sets.
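The baseline procedure shared by these guidelines, deriving mean ± 2 SD control limits from an initial run of QC results and flagging later results that fall outside them, can be sketched as follows. This is an illustrative 1-2s check only; the Westgard multi-rules (R4S, 10X) and the RiliBÄK and QConnect limit methods are not implemented, and the data are made up:

```python
# Establish mean +/- 2 SD control limits from an initial baseline of QC
# results, then compute the fraction of monitored results outside the limits.
import statistics

def control_limits(baseline, n_sd=2.0):
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)  # sample SD of the baseline run
    return mean - n_sd * sd, mean + n_sd * sd

def failure_rate(results, limits):
    lo, hi = limits
    fails = [r for r in results if r < lo or r > hi]
    return len(fails) / len(results)

# Illustrative baseline of 20 QC results (e.g. signal-to-cutoff ratios).
baseline = [1.0, 1.08, 0.92, 1.05, 0.95, 1.02, 0.98, 1.03, 0.97, 1.04,
            0.96, 1.01, 0.99, 1.06, 0.94, 1.0, 1.02, 0.98, 1.01, 0.99]
limits = control_limits(baseline)
monitored = baseline + [1.5, 0.5]  # two obvious out-of-control results
rate = failure_rate(monitored, limits)
```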
Akbar, Umer; Raike, Robert S.; Hack, Nawaz; Hess, Christopher W.; Skinner, Jared; Martinez‐Ramirez, Daniel; DeJesus, Sol
2016-01-01
Objectives Evidence suggests that nonconventional programming may improve deep brain stimulation (DBS) therapy for movement disorders. The primary objective was to assess feasibility of testing the tolerability of several nonconventional settings in Parkinson's disease (PD) and essential tremor (ET) subjects in a single office visit. Secondary objectives were to explore for potential efficacy signals and to assess the energy demand on the implantable pulse‐generators (IPGs). Materials and Methods A custom firmware (FW) application was developed and acutely uploaded to the IPGs of eight PD and three ET subjects, allowing delivery of several nonconventional DBS settings, including narrow pulse widths, square biphasic pulses, and irregular pulse patterns. Standard clinical rating scales and several objective measures were used to compare motor outcomes with sham, clinically‐optimal and nonconventional settings. Blinded and randomized testing was conducted in a traditional office setting. Results Overall, the nonconventional settings were well tolerated. Under these conditions it was also possible to detect clinically‐relevant differences in DBS responses using clinical rating scales but not objective measures. Compared to the clinically‐optimal settings, some nonconventional settings appeared to offer similar benefit (e.g., narrow pulse widths) and others lesser benefit. Moreover, the results suggest that square biphasic pulses may deliver greater benefit. No unexpected IPG efficiency disadvantages were associated with delivering nonconventional settings. Conclusions It is feasible to acutely screen nonconventional DBS settings using controlled study designs in traditional office settings. Simple IPG FW upgrades may provide more DBS programming options for optimizing therapy. Potential advantages of narrow and biphasic pulses deserve follow up. PMID:27000764
Akbar, Umer; Raike, Robert S; Hack, Nawaz; Hess, Christopher W; Skinner, Jared; Martinez-Ramirez, Daniel; DeJesus, Sol; Okun, Michael S
2016-06-01
Evidence suggests that nonconventional programming may improve deep brain stimulation (DBS) therapy for movement disorders. The primary objective was to assess feasibility of testing the tolerability of several nonconventional settings in Parkinson's disease (PD) and essential tremor (ET) subjects in a single office visit. Secondary objectives were to explore for potential efficacy signals and to assess the energy demand on the implantable pulse-generators (IPGs). A custom firmware (FW) application was developed and acutely uploaded to the IPGs of eight PD and three ET subjects, allowing delivery of several nonconventional DBS settings, including narrow pulse widths, square biphasic pulses, and irregular pulse patterns. Standard clinical rating scales and several objective measures were used to compare motor outcomes with sham, clinically-optimal and nonconventional settings. Blinded and randomized testing was conducted in a traditional office setting. Overall, the nonconventional settings were well tolerated. Under these conditions it was also possible to detect clinically-relevant differences in DBS responses using clinical rating scales but not objective measures. Compared to the clinically-optimal settings, some nonconventional settings appeared to offer similar benefit (e.g., narrow pulse widths) and others lesser benefit. Moreover, the results suggest that square biphasic pulses may deliver greater benefit. No unexpected IPG efficiency disadvantages were associated with delivering nonconventional settings. It is feasible to acutely screen nonconventional DBS settings using controlled study designs in traditional office settings. Simple IPG FW upgrades may provide more DBS programming options for optimizing therapy. Potential advantages of narrow and biphasic pulses deserve follow up. © 2016 The Authors. Neuromodulation: Technology at the Neural Interface published by Wiley Periodicals, Inc. on behalf of International Neuromodulation Society.
Goodfellow, Alfred; Keeling, Douglas N; Hayes, Robert C; Webster, Duncan
2009-01-01
With increasing use of immunosuppressive therapy, including tumor necrosis factor alpha inhibitors, there is concern about infectious complications, including reactivation of latent Mycobacterium tuberculosis infection. Routine testing prior to administration of systemic immunosuppression includes the tuberculin skin test, which lacks sensitivity and specificity and may be difficult to interpret in the presence of extensive cutaneous disease. Treatment of individuals with latent tuberculosis infection is recommended when immunosuppressive medications are to be employed. We report a case in which a diagnosis of latent tuberculosis infection in a patient with extensive bullous pemphigoid was clarified by the use of an interferon-gamma release assay after equivocal tuberculin skin test results. Interferon-gamma release assays are useful adjuncts to the tuberculin skin test in the diagnosis of latent tuberculosis infection in the setting of extensive cutaneous disease.
Pettit performs the EPIC Card Testing and X2R10 Software Transition
2011-12-28
ISS030-E-022574 (28 Dec. 2011) -- NASA astronaut Don Pettit (foreground), Expedition 30 flight engineer, performs the Enhanced Processor and Integrated Communications (EPIC) card testing and X2R10 software transition. The software transition work will include EPIC card testing and card installations, and monitoring of the upgraded Multiplexer/Demultiplexer (MDM) computers. Dan Burbank, Expedition 30 commander, is setting up a camcorder in the background.
Pettit performs the EPIC Card Testing and X2R10 Software Transition
2011-12-28
ISS030-E-022575 (28 Dec. 2011) -- NASA astronaut Don Pettit (foreground), Expedition 30 flight engineer, performs the Enhanced Processor and Integrated Communications (EPIC) card testing and X2R10 software transition. The software transition work will include EPIC card testing and card installations, and monitoring of the upgraded Multiplexer/Demultiplexer (MDM) computers. Dan Burbank, Expedition 30 commander, is setting up a camcorder in the background.
ERIC Educational Resources Information Center
Kim, Sooyeon; Walker, Michael E.
2011-01-01
This study examines the use of subpopulation invariance indices to evaluate the appropriateness of using a multiple-choice (MC) item anchor in mixed-format tests, which include both MC and constructed-response (CR) items. Linking functions were derived in the nonequivalent groups with anchor test (NEAT) design using an MC-only anchor set for 4…
Ada Programming Support Environment (APSE) Evaluation and Validation (E&V) Team
1991-12-31
standards. The purpose of the team was to assist the project in several ways. Raymond Szymanski of Wright Research and Development Center (WRDC, now...debuggers, program library systems, and compiler diagnostics. The test suite does not include explicit tests for the existence of language features. The...support software is a set of tools and procedures which assist in preparing and executing the test suite, in extracting data from the results of
Inference of Evolutionary Forces Acting on Human Biological Pathways
Daub, Josephine T.; Dupanloup, Isabelle; Robinson-Rechavi, Marc; Excoffier, Laurent
2015-01-01
Because natural selection is likely to act on multiple genes underlying a given phenotypic trait, we study here the potential effect of ongoing and past selection on the genetic diversity of human biological pathways. We first show that genes included in gene sets are generally under stronger selective constraints than other genes and that their evolutionary response is correlated. We then introduce a new procedure to detect selection at the pathway level based on a decomposition of the classical McDonald–Kreitman test extended to multiple genes. This new test, called 2DNS, detects outlier gene sets and takes into account past demographic effects and evolutionary constraints specific to gene sets. Selective forces acting on gene sets can be easily identified by a mere visual inspection of the position of the gene sets relative to their two-dimensional null distribution. We thus find several outlier gene sets that show signals of positive, balancing, or purifying selection but also others showing an ancient relaxation of selective constraints. The principle of the 2DNS test can also be applied to other genomic contrasts. For instance, the comparison of patterns of polymorphisms private to African and non-African populations reveals that most pathways show a higher proportion of nonsynonymous mutations in non-Africans than in Africans, potentially due to different demographic histories and selective pressures. PMID:25971280
Clifton, Soazig; Mercer, Catherine H; Woodhall, Sarah C; Sonnenberg, Pam; Field, Nigel; Lu, Le; Johnson, Anne M; Cassell, Jackie A
2017-06-01
Following widespread rollout of chlamydia testing to non-specialist and community settings in the UK, many individuals receive a chlamydia test without being offered comprehensive STI and HIV testing. We assess sexual behaviour among testers in different settings with a view to understanding their need for other STI diagnostic services. A probability sample survey of the British population undertaken 2010-2012 (the third National Survey of Sexual Attitudes and Lifestyles). We analysed weighted data on chlamydia testing (past year), including location of most recent test, and diagnoses (past 5 years) from individuals aged 16-44 years reporting at least one sexual partner in the past year (4992 women, 3406 men). Of the 26.8% (95% CI 25.4% to 28.2%) of women and 16.7% (15.5% to 18.1%) of men reporting a chlamydia test in the past year, 28.4% of women and 41.2% of men had tested in genitourinary medicine (GUM), 41.1% and 20.7% of women and men respectively tested in general practice (GP) and the remainder tested in other non-GUM settings. Women tested outside GUM were more likely to be older, in a relationship and to live in rural areas. Individuals tested outside GUM reported fewer risk behaviours; nevertheless, 11.0% (8.6% to 14.1%) of women and 6.8% (3.9% to 11.6%) of men tested in GP and 13.2% (10.2% to 16.8%) and 9.6% (6.5% to 13.8%) of women and men tested in other non-GUM settings reported 'unsafe sex', defined as two or more partners and no condom use with any partner in the past year. Individuals treated for chlamydia outside GUM in the past 5 years were less likely to report an HIV test in that time frame (women: 54.5% (42.7% to 65.7%) vs 74.1% (65.9% to 80.9%) in GUM; men: 23.9% (12.7% to 40.5%) vs 65.8% (56.2% to 74.3%)). Most chlamydia testing occurred in non-GUM settings, among populations reporting fewer risk behaviours. However, there is a need to provide pathways to comprehensive STI care to the sizeable minority at higher risk. 
Patient Safety Culture Survey in Pediatric Complex Care Settings: A Factor Analysis.
Hessels, Amanda J; Murray, Meghan; Cohen, Bevin; Larson, Elaine L
2017-04-19
Children with complex medical needs are increasing in number and demanding the services of pediatric long-term care facilities (pLTC), which require a focus on patient safety culture (PSC). However, no tool to measure PSC has been tested in this unique hybrid acute care-residential setting. The objective of this study was to evaluate the psychometric properties of the Nursing Home Survey on Patient Safety Culture tool slightly modified for use in the pLTC setting. Factor analyses were performed on data collected from 239 staff at 3 pLTC in 2012. Items were screened by principal axis factoring, and the original structure was tested using confirmatory factor analysis. Exploratory factor analysis was conducted to identify the best model fit for the pLTC data, and factor reliability was assessed by Cronbach alpha. The extracted, rotated factor solution suggested items in 4 (staffing, nonpunitive response to mistakes, communication openness, and organizational learning) of the original 12 dimensions may not be a good fit for this population. Nevertheless, in the pLTC setting, both the original and the modified factor solutions demonstrated similar reliabilities to the published consistencies of the survey when tested in adult nursing homes and the items factored nearly identically as theorized. This study demonstrates that the Nursing Home Survey on Patient Safety Culture with minimal modification may be an appropriate instrument to measure PSC in pLTC settings. Additional psychometric testing is recommended to further validate the use of this instrument in this setting, including examining the relationship to safety outcomes. Increased use will yield data for benchmarking purposes across these specialized settings to inform frontline workers and organizational leaders of areas of strength and opportunity for improvement.
HIV testing among MSM in Bogotá, Colombia: The role of structural and individual characteristics
Reisen, Carol A.; Zea, Maria Cecilia; Bianchi, Fernanda T.; Poppen, Paul J.; del Río González, Ana Maria; Romero, Rodrigo A. Aguayo; Pérez, Carolin
2014-01-01
This study used mixed methods to examine characteristics related to HIV testing among men who have sex with men (MSM) in Bogotá, Colombia. A sample of 890 MSM responded to a computerized quantitative survey. Follow-up qualitative data included 20 in-depth interviews with MSM and 12 key informant interviews. Hierarchical logistic set regression indicated that sequential sets of variables reflecting demographic characteristics, insurance coverage, risk appraisal, and social context each added to the explanation of HIV testing. Follow-up logistic regression showed that individuals who were older, had higher income, paid for their own insurance, had had a sexually transmitted infection, knew more people living with HIV, and had greater social support were more likely to have been tested for HIV at least once. Qualitative findings provided details of personal and structural barriers to testing, as well as interrelationships among these factors. Recommendations to increase HIV testing among Colombian MSM are offered. PMID:25068180
RAPIDR: an analysis package for non-invasive prenatal testing of aneuploidy
Lo, Kitty K.; Boustred, Christopher; Chitty, Lyn S.; Plagnol, Vincent
2014-01-01
Non-invasive prenatal testing (NIPT) of fetal aneuploidy using cell-free fetal DNA is becoming part of routine clinical practice. RAPIDR (Reliable Accurate Prenatal non-Invasive Diagnosis R package) is an easy-to-use open-source R package that implements several published NIPT analysis methods. The input to RAPIDR is a set of sequence alignment files in the BAM format, and the outputs are calls for aneuploidy, including trisomies 13, 18, 21 and monosomy X as well as fetal sex. RAPIDR has been extensively tested with a large sample set as part of the RAPID project in the UK. The package contains quality control steps to make it robust for use in the clinical setting. Availability and implementation: RAPIDR is implemented in R and can be freely downloaded via CRAN from here: http://cran.r-project.org/web/packages/RAPIDR/index.html. Contact: kitty.lo@ucl.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24990604
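RAPIDR itself is an R package; as a language-neutral illustration of the z-score approach underlying several published NIPT methods (comparing a sample's chromosome 21 read fraction against a euploid reference set), the following hypothetical Python sketch uses made-up fractions and an illustrative cutoff:

```python
# Illustrative z-score aneuploidy call: a sample's chr21 read fraction is
# compared against the mean and SD of a euploid reference set.
import statistics

def zscore_call(sample_frac, reference_fracs, cutoff=3.0):
    mu = statistics.mean(reference_fracs)
    sd = statistics.stdev(reference_fracs)
    z = (sample_frac - mu) / sd
    return z, ("aneuploidy suspected" if z > cutoff else "no call")

# Made-up euploid reference chr21 fractions (roughly 1.3% of mapped reads).
ref = [0.01300, 0.01305, 0.01298, 0.01302, 0.01299,
       0.01301, 0.01303, 0.01297, 0.01304, 0.01300]
z_normal, call_normal = zscore_call(0.01301, ref)
z_tri21, call_tri21 = zscore_call(0.01360, ref)  # elevated fraction
```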
Greening, S E; Grohs, D H; Guidos, B J
1997-01-01
Providing effective training, retraining and evaluation programs, including proficiency testing programs, for cytoprofessionals is a challenge shared by many academic and clinical educators internationally. In cytopathology the quality of training has immediately transferable and critically important impacts on satisfactory performance in the clinical setting. Well-designed interactive computer-assisted instruction and testing programs have been shown to enhance initial learning and to reinforce factual and conceptual knowledge. Computer systems designed not only to promote diagnostic accuracy but to integrate and streamline work flow in clinical service settings are candidates for educational adaptation. The AcCell 2000 system, designed as a diagnostic screening support system, offers technology that is adaptable to educational needs during basic and in-service training as well as testing of screening proficiency in both locator and identification skills. We describe the considerations, approaches and applications of the AcCell 2000 system in education programs for both training and evaluation of gynecologic diagnostic screening proficiency.
Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code
NASA Astrophysics Data System (ADS)
Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.
2015-12-01
WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at the Oregon State University's Directional Wave Basin at Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy.
Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time-series, state space radiation, and WEC-Sim compatibility with BEMIO (an open source AQWA/WAMIT/NEMOH coefficient parser).
Teaching science through literature
NASA Astrophysics Data System (ADS)
Barth, Daniel
2007-12-01
The hypothesis of this study was that a multidisciplinary, activity-rich science curriculum based around science fiction literature, rather than a conventional textbook, would increase student engagement with the curriculum and improve student performance on standards-based test instruments. Science fiction literature was chosen upon the basis of previous educational research which indicated that science fiction literature was able to stimulate and maintain interest in science. The study was conducted on a middle school campus during the regular summer school session. Students were self-selected from the school's 6th, 7th, and 8th grade populations. The students used the science fiction novel Maurice on the Moon as their only text. Lessons and activities closely followed the adventures of the characters in the book. The students' initial level of knowledge in Earth and space science was assessed by a pre-test. After the four-week program was concluded, the students took a post-test made up of an identical set of questions. The test included 40 standards-based questions that were based upon concepts covered in the text of the novel and in the classroom lessons and activities. The test also included 10 general knowledge questions that were based upon Earth and space science standards that were not covered in the novel or the classroom lessons or activities. Student performance on the standards-based question set increased an average of 35% for all students in the study group. Every subgroup disaggregated by gender and ethnicity improved by 28-47%. There was no statistically significant change in the performance on the general knowledge question set for any subgroup. Student engagement with the material was assessed by three independent methods, including student self-reports, percentage of classroom work completed, and academic evaluation of student work by the instructor.
These assessments of student engagement were correlated with changes in student performance on the standards-based assessment tests. A moderate correlation was found to exist between the level of student engagement with the material and improvement in performance from pre to post test.
Portable detection system of vegetable oils based on laser induced fluorescence
NASA Astrophysics Data System (ADS)
Zhu, Li; Zhang, Yinchao; Chen, Siying; Chen, He; Guo, Pan; Mu, Taotao
2015-11-01
Food safety, especially that of edible oils, has attracted increasing attention recently. Many methods and instruments have emerged for analyzing edible oils, covering both oil classification and adulteration detection; adulteration detection, in turn, builds on classification. In this paper, a portable detection system based on laser-induced fluorescence is proposed and designed to classify various edible oils, including olive, rapeseed, walnut, peanut, linseed, sunflower, and corn oils. A 532 nm laser module is used in the equipment, and all components are assembled into a module (100 × 100 × 25 mm). A total of 700 sets of fluorescence data (100 sets per oil type) were collected. To classify the different edible oils, principal component analysis and a support vector machine were employed in the data analysis. The training set consisted of 560 sets of data (80 sets per oil) and the test set of 140 sets of data (20 sets per oil). The recognition rate reaches 99%, which demonstrates the reliability of this portable system. Being nonintrusive and requiring no sample preparation, the portable system can be effectively applied to food inspection.
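The PCA-plus-SVM pipeline and the 560/140 train/test split described above can be sketched with scikit-learn. Synthetic spectra stand in for the real fluorescence data, so every name and number below is illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class, n_channels = 100, 64

# Three synthetic "oil types", each a Gaussian spectral peak plus noise.
X_parts, y_parts = [], []
for label, peak in enumerate([10, 30, 50]):
    base = np.exp(-0.5 * ((np.arange(n_channels) - peak) / 4.0) ** 2)
    X_parts.append(base + 0.05 * rng.standard_normal((n_per_class, n_channels)))
    y_parts.append(np.full(n_per_class, label))
X, y = np.vstack(X_parts), np.concatenate(y_parts)

# 80/20 split per class, mirroring the paper's train/test partition.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC())
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

PCA compresses the spectra to a few components before the SVM sees them, which is the usual reason for pairing the two on high-dimensional fluorescence data.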
NASA Technical Reports Server (NTRS)
1999-01-01
Field Integrated Design and Operations (FIDO) rover is a prototype of the Mars Sample Return rovers that will carry the integrated Athena Science Payload to Mars in 2003 and 2005. The purpose of FIDO is to simulate, using Mars analog settings, the complex surface operations that will be necessary to find, characterize, obtain, cache, and return samples to the ascent vehicles on the landers. This videotape shows tests of the FIDO in the Mojave Desert. These tests include drilling through rock and movement of the rover. Also included in this tape are interviews with Dr Raymond Arvidson, the test director for FIDO, and Dr. Eric Baumgartner, Robotics Engineer at the Jet Propulsion Laboratory.
Development of a cross-platform biomarker signature to detect renal transplant tolerance in humans
Sagoo, Pervinder; Perucha, Esperanza; Sawitzki, Birgit; Tomiuk, Stefan; Stephens, David A.; Miqueu, Patrick; Chapman, Stephanie; Craciun, Ligia; Sergeant, Ruhena; Brouard, Sophie; Rovis, Flavia; Jimenez, Elvira; Ballow, Amany; Giral, Magali; Rebollo-Mesa, Irene; Le Moine, Alain; Braudeau, Cecile; Hilton, Rachel; Gerstmayer, Bernhard; Bourcier, Katarzyna; Sharif, Adnan; Krajewska, Magdalena; Lord, Graham M.; Roberts, Ian; Goldman, Michel; Wood, Kathryn J.; Newell, Kenneth; Seyfert-Margolis, Vicki; Warrens, Anthony N.; Janssen, Uwe; Volk, Hans-Dieter; Soulillou, Jean-Paul; Hernandez-Fuentes, Maria P.; Lechler, Robert I.
2010-01-01
Identifying transplant recipients in whom immunological tolerance is established or is developing would allow an individually tailored approach to their posttransplantation management. In this study, we aimed to develop reliable and reproducible in vitro assays capable of detecting tolerance in renal transplant recipients. Several biomarkers and bioassays were screened on a training set that included 11 operationally tolerant renal transplant recipients, recipient groups following different immunosuppressive regimes, recipients undergoing chronic rejection, and healthy controls. Highly predictive assays were repeated on an independent test set that included 24 tolerant renal transplant recipients. Tolerant patients displayed an expansion of peripheral blood B and NK lymphocytes, fewer activated CD4+ T cells, a lack of donor-specific antibodies, donor-specific hyporesponsiveness of CD4+ T cells, and a high ratio of forkhead box P3 to α-1,2-mannosidase gene expression. Microarray analysis further revealed in tolerant recipients a bias toward differential expression of B cell–related genes and their associated molecular pathways. By combining these indices of tolerance as a cross-platform biomarker signature, we were able to identify tolerant recipients in both the training set and the test set. This study provides an immunological profile of the tolerant state that, with further validation, should inform and shape drug-weaning protocols in renal transplant recipients. PMID:20501943
CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila
2015-03-10
We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
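A minimal PSO kernel of the kind the abstract describes, with particles tracking personal and global bests while exploring a parameter space, might look like the following sketch; the objective here is a toy quadratic rather than the SAG model likelihood:

```python
# Minimal particle swarm optimization: each particle is pulled toward its own
# best position (pbest) and the swarm's best position (gbest).
import random

def pso(objective, bounds, n_particles=20, n_iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    rnd = random.Random(seed)
    dim = len(bounds)
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest, objective(gbest)

# Toy "negative log-likelihood" with a known optimum at (1, -2).
f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best, best_val = pso(f, [(-10, 10), (-10, 10)])
```

Each iteration costs one objective evaluation per particle, which is where the claimed order-of-magnitude saving over MCMC sampling comes from when evaluations are expensive.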
Importance of geologic study and load test of log pod mangartom arch bridge
NASA Astrophysics Data System (ADS)
Kamnik, Rok; Meshcheryakova, Tatiana; Kovačič, Boštjan
2017-10-01
Structures, their mutual relationships, and their positions and shifts in space make up the structural set of an area, encompassing regional units and the smaller or larger portions of the Earth's crust known as plates and microplates. Most importantly, tectonic movements are always possible around the locations of the bridges under consideration, so their characteristics must be defined in detail because of the potential impacts on individual bridges. A recent structural set was compiled for the Log pod Mangartom area. To assess the bridge at the micro level, a load test of the bridge was performed.
Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies
2010-01-01
Background All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis would actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore using a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences. Results The average accuracy based on leave-one-out (LOO) cross validation of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set comprised of 103 AM and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented by the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%. Conclusions This exploratory study indicates that both classification methods may be promising in providing straightforward predictions on the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study are limited, and are consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and the expansion of the training set to include not only more derivatives, but more alignments, would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered. 
The development of this type of classifier has significant applications in evaluating engineered antibodies, and may be adapted for evaluating engineered proteins in general. PMID:20144194
Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies.
David, Maria Pamela C; Concepcion, Gisela P; Padlan, Eduardo A
2010-02-08
All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis would actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore using a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences. The average accuracy based on leave-one-out (LOO) cross validation of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set composed of 103 amyloidogenic (AM) and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented by the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%. This exploratory study indicates that both classification methods may be promising in providing straightforward predictions on the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study is limited, and consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and expanding the training set to include not only more derivatives but also more alignments would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered.
The development of this type of classifier has significant applications in evaluating engineered antibodies, and may be adapted for evaluating engineered proteins in general.
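As a sketch of the leave-one-out evaluation described in this abstract, the following implements a minimal Bernoulli naive Bayes classifier with Laplace smoothing; the binary features and toy sequences are hypothetical stand-ins, not the authors' actual immunoglobulin features.

```python
import math

def train_bernoulli_nb(X, y):
    """Laplace-smoothed class priors and per-feature Bernoulli likelihoods."""
    classes = sorted(set(y))
    model = {}
    for c in classes:
        rows = [x for x, label in zip(X, y) if label == c]
        log_prior = math.log((len(rows) + 1) / (len(y) + len(classes)))
        probs = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                 for j in range(len(X[0]))]
        model[c] = (log_prior, probs)
    return model

def predict(model, x):
    # Pick the class with the highest log-posterior
    def log_post(c):
        log_prior, probs = model[c]
        return log_prior + sum(math.log(p if xi else 1 - p)
                               for xi, p in zip(x, probs))
    return max(model, key=log_post)

def loo_accuracy(X, y):
    # Leave-one-out: train on all but one sequence, test on the held-out one
    hits = 0
    for i in range(len(X)):
        m = train_bernoulli_nb(X[:i] + X[i+1:], y[:i] + y[i+1:])
        hits += predict(m, X[i]) == y[i]
    return hits / len(X)

# Hypothetical binary features (e.g., presence of aggregation-prone motifs)
X = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 1], [0, 1, 1], [0, 0, 0]]
y = ["AM", "AM", "AM", "non", "non", "non"]
print(loo_accuracy(X, y))
```

The same loop scales directly to the 143-sequence training set the study describes; only the feature encoding changes.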
Task Identification and Evaluation System (TIES)
1991-08-01
[OCR-garbled task-list excerpt; recoverable items appear to include: calibrate AN/AVM-11A test sets; 127. calibrate AN/AWM-55 ASCU test sets; 128. calibrate tally punched tape readers; perform fault isolation of AN/AVM-11A and AN/AWM-55 ASCU test sets and tally punched tape readers; 137. perform self-tests of AN/AWM-55 ASCU test sets; G. maintaining and adjusting manual test sets; 138. adjust SM-661/AS-388.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Y; Yu, J; Yeung, V
Purpose: Artificial neural networks (ANN) can be used to discover complex relations within datasets to help with medical decision making. This study aimed to develop an ANN method to predict two-year overall survival of patients with peri-ampullary cancer (PAC) following resection. Methods: Data were collected from 334 patients with PAC following resection treated in our institutional pancreatic tumor registry between 2006 and 2012. The dataset contains 14 variables including age, gender, T-stage, tumor differentiation, positive-lymph-node ratio, positive resection margins, chemotherapy, radiation therapy, and tumor histology. After censoring for two-year survival analysis, 309 patients were left, of which 44 patients (~15%) were randomly selected to form the testing set. The remaining 265 cases were randomly divided into a training set (211 cases, ~80% of 265) and a validation set (54 cases, ~20% of 265) 20 times to build 20 ANN models. Each ANN has one hidden layer with 5 units. The 20 ANN models were ranked according to their concordance index (c-index) of prediction on the validation sets. To further improve prediction, the top 10% of ANN models were selected and their outputs averaged for prediction on the testing set. Results: By random division, the 44 cases in the testing set and the remaining 265 cases have approximately equal two-year survival rates, 36.4% and 35.5% respectively. The 20 ANN models, which were trained and validated on the 265 cases, yielded mean c-indices of 0.59 and 0.63 on the validation sets and the testing set, respectively. The c-index was 0.72 when the two best ANN models (top 10%) were used in prediction on the testing set. The c-index of Cox regression analysis was 0.63. Conclusion: ANN improved survival prediction for patients with PAC. More patient data and further analysis of additional factors may be needed for a more robust model, which will help guide physicians in providing optimal post-operative care.
This project was supported by a PA CURE Grant.
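The concordance index (c-index) used above to rank the ANN models can be sketched for fully observed event times; this toy version ignores censoring, which a real survival analysis must handle.

```python
from itertools import combinations

def c_index(times, scores):
    """Fraction of comparable pairs in which the higher risk score
    accompanies the shorter survival time (ties count half)."""
    concordant = ties = total = 0
    for (t1, s1), (t2, s2) in combinations(zip(times, scores), 2):
        if t1 == t2:
            continue  # tied event times are not comparable here
        total += 1
        hi_risk = s1 if t1 < t2 else s2   # score of the earlier event
        lo_risk = s2 if t1 < t2 else s1
        if hi_risk > lo_risk:
            concordant += 1
        elif hi_risk == lo_risk:
            ties += 1
    return (concordant + 0.5 * ties) / total

times = [5, 12, 24, 30]          # months to event (hypothetical)
scores = [0.9, 0.7, 0.3, 0.2]    # model-predicted risk
print(c_index(times, scores))    # perfectly concordant ranking -> 1.0
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why 0.72 on the testing set indicates a useful model.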
Taper and volume equations for selected Appalachian hardwood species
A. Jeff Martin
1981-01-01
Coefficients for five taper/volume models are developed for 18 Appalachian hardwood species. Each model can be used to estimate diameter at any point on the bole, height to any preselected diameter, and cubic-foot volume between any two points on the bole. The resulting equations were tested on six sets of independent data and an evaluation of these tests is included,...
USDA-ARS?s Scientific Manuscript database
Open-field host-specificity testing assesses the host-range of a biological control agent in a setting that permits the agent to use its full complement of host-seeking behaviors. This form of testing, particularly when it includes a no-choice phase in which the target weed is killed, may provide th...
Stan Lebow; Bessie Woodward; Steven Halverson; Michael West
2012-01-01
Ground-contact durability of stakes treated with acidic copper formulations was evaluated. All test formulations incorporated copper, dimethylcocoamine and propanoic acid; one set of formulations also included zinc. Sapwood stakes cut from the southern pine group were pressure-treated to a range of retentions with each formulation and placed into plots within Harrison...
2012-03-22
shapes tested, when the objective parameter set was confined to a dictionary's defined parameter space. These physical characteristics included... [table-of-contents residue: 2.3 Hypothesis Testing and Detection Theory; 2.4 3-D SAR Scattering Models] ...the basis pursuit de-noising (BPDN) algorithm is chosen to perform extraction due to its inherent efficiency and error tolerance. Multiple shape dictionaries
NASA Astrophysics Data System (ADS)
Arshad, Muhammad Azeem; Maaroufi, AbdelKrim
2018-07-01
This study makes a beginning toward accurate lifetime predictions for polymer solar cells. Certain reservations about the conventionally employed temperature-accelerated lifetime measurement test, and its unsuitability for predicting reliable lifetimes of polymer solar cells, are brought to light. Critical issues concerning accelerated lifetime testing include assuming the reaction mechanism instead of determining it, and relying solely on the temperature acceleration of a single material property. An advanced approach comprising a set of theoretical models to estimate accurate lifetimes of polymer solar cells is therefore suggested as an alternative to accelerated lifetime testing. This approach takes into account systematic kinetic modeling of the various possible polymer degradation mechanisms under natural weathering conditions. The proposed kinetic approach is substantiated by applying it to experimental aging data sets of polymer solar materials and solar cells, including P3HT polymer film, a bulk-heterojunction (MDMO-PPV:PCBM) cell, and dye-sensitized solar cells. Based on the suggested approach, an efficacious lifetime-determination formula for polymer solar cells is derived and tested on dye-sensitized solar cells. Some important merits of the proposed method are also pointed out and its prospective applications are discussed.
NASA Technical Reports Server (NTRS)
Bown, Rodney L. (Editor)
1986-01-01
Topics discussed include: test and verification; environment issues; distributed Ada issues; life cycle issues; Ada in Europe; management/training issues; common Ada interface set; and run time issues.
de Morton, Natalie A; Lane, Kylie
2010-11-01
To investigate the clinimetric properties of the de Morton Mobility Index (DEMMI) in a Geriatric Evaluation and Management (GEM) population. A longitudinal validation study (n = 100) and inter-rater reliability study (n = 29) in a GEM population. Consecutive patients admitted to a GEM rehabilitation ward were eligible for inclusion. At hospital admission and discharge, a physical therapist assessed patients with physical performance instruments that included the 6-metre walk test, step test, Clinical Test of Sensory Organization and Balance, Timed Up and Go test, 6-minute walk test and the DEMMI. Consecutively eligible patients were included in an inter-rater reliability study between physical therapists. DEMMI admission scores were normally distributed (mean 30.2, standard deviation 16.7) and other activity limitation instruments had either a floor or a ceiling effect. Evidence of convergent, discriminant and known groups validity for the DEMMI were obtained. The minimal detectable change with 90% confidence was 10.5 (95% confidence interval 6.1-17.9) points and the minimally clinically important difference was 8.4 points on the 100-point interval DEMMI scale. The DEMMI provides clinicians with an accurate and valid method of measuring mobility for geriatric patients in the subacute hospital setting.
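The minimal detectable change at 90% confidence reported above is conventionally derived from the standard error of measurement; the ICC value in the sketch below is an assumed illustration, not a figure taken from this study.

```python
import math

def mdc90(sd, icc):
    """Minimal detectable change at 90% confidence from the sample SD
    and test-retest reliability (ICC)."""
    sem = sd * math.sqrt(1 - icc)       # standard error of measurement
    return 1.645 * math.sqrt(2) * sem   # z = 1.645; difference of two measurements

# SD 16.7 is reported above; the ICC of 0.93 is an assumed illustration
print(round(mdc90(16.7, 0.93), 1))
```

A change smaller than this value cannot be distinguished from measurement noise with 90% confidence.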
Radiation budget measurement/model interface
NASA Technical Reports Server (NTRS)
Vonderhaar, T. H.; Ciesielski, P.; Randel, D.; Stevens, D.
1983-01-01
This final report includes research results from the period February, 1981 through November, 1982. Two new results combine to form the final portion of this work. They are the work by Hanna (1982) and Stevens to successfully test and demonstrate a low-order spectral climate model and the work by Ciesielski et al. (1983) to combine and test the new radiation budget results from NIMBUS-7 with earlier satellite measurements. Together, the two related activities set the stage for future research on radiation budget measurement/model interfacing. Such combination of results will lead to new applications of satellite data to climate problems. The objectives of this research under the present contract are therefore satisfied. Additional research reported herein includes the compilation and documentation of the radiation budget data set a Colorado State University and the definition of climate-related experiments suggested after lengthy analysis of the satellite radiation budget experiments.
Koch, Hèlen; van Bokhoven, Marloes A; ter Riet, Gerben; van Alphen-Jager, Jm Tineke; van der Weijden, Trudy; Dinant, Geert-Jan; Bindels, Patrick J E
2009-04-01
Unexplained fatigue is frequently encountered in general practice. Because of the low prior probability of underlying somatic pathology, the positive predictive value of abnormal (blood) test results is limited in such patients. The study objectives were to investigate the relationship between established diagnoses and the occurrence of abnormal blood test results among patients with unexplained fatigue; to survey the effects of the postponement of test ordering on this relationship; and to explore consultation-related determinants of abnormal test results. Cluster randomised trial. General practices of 91 GPs in the Netherlands. GPs were randomised to immediate or postponed blood-test ordering. Patients with new unexplained fatigue were included. Limited and expanded sets of blood tests were ordered either immediately or after 4 weeks. Diagnoses during the 1-year follow-up period were extracted from medical records. Two-by-two tables were generated. To establish independent determinants of abnormal test results, a multivariate logistic regression model was used. Data of 325 patients were analysed (71% women; mean age 41 years). Eight per cent of patients had a somatic illness that was detectable by blood-test ordering. The number of false-positive test results increased in particular in the expanded test set. Patients rarely re-consulted after 4 weeks. Test postponement did not affect the distribution of patients over the two-by-two tables. No independent consultation-related determinants of abnormal test results were found. Results support restricting the number of tests ordered because of the increased risk of false-positive test results from expanded test sets. Although the number of re-consulting patients was small, the data do not refute the advice to postpone blood-test ordering for medical reasons in patients with unexplained fatigue in general practice.
Morpheus Lander Testing Campaign
NASA Technical Reports Server (NTRS)
Hart, Jeremy J.; Mitchell, Jennifer D.
2011-01-01
NASA's Morpheus Project has developed and tested a prototype planetary lander capable of vertical takeoff and landing, designed to serve as a testbed for advanced spacecraft technologies. The Morpheus vehicle has successfully performed a set of integrated vehicle test flights including hot-fire and tether tests, ultimately culminating in an un-tethered "free flight." This development and testing campaign was conducted on-site at the Johnson Space Center (JSC), less than one year after project start. Designed, developed, manufactured and operated in-house by engineers at JSC, the Morpheus Project represents an unprecedented departure from recent NASA programs and projects that traditionally require longer development lifecycles and testing at remote, dedicated testing facilities. This paper documents the integrated testing campaign, including descriptions of test types (hot-fire, tether, and free-flight), test objectives, and the infrastructure of JSC testing facilities. A major focus of the paper is the fast pace of the project, rapid prototyping, frequent testing, and lessons learned from this departure from the traditional engineering development process at NASA's Johnson Space Center.
Medical devices and diagnostics for cardiovascular diseases in low-resource settings.
McGuire, Helen; Weigl, Bernhard H
2014-11-01
Noncommunicable diseases (NCDs), including cardiovascular diseases and diabetes, have emerged as an underappreciated health threat with enormous economic and public health implications for populations in low-resource settings. In order to address these diseases, devices that are to be used in low-resource settings have to conform to requirements that are generally more challenging than those developed for traditional markets. Characteristics and issues that must be considered when working in low- and middle-income countries (LMICs) include challenging environmental conditions, a complex supply chain, sometimes inadequate operator training, and cost. Somewhat counterintuitively, devices for low-resource setting (LRS) markets need to be of at least as high quality and reliability as those for developed countries to be setting-appropriate and achieve impact. Finally, the devices need to be designed and tested for the populations in which they are to be used in order to achieve the performance that is needed. In this review, we focus on technologies for primary and secondary health-care settings and group them according to the continuum of care from prevention to treatment.
The Space Station Photovoltaic Panels Plasma Interaction Test Program: Test plan and results
NASA Technical Reports Server (NTRS)
Nahra, Henry K.; Felder, Marian C.; Sater, Bernard L.; Staskus, John V.
1989-01-01
The Plasma Interaction Test performed on two space station solar array panels is addressed. This includes a discussion of the test requirements, test plan, experimental set-up, and test results. It was found that parasitic current collection was insignificant (0.3 percent of the solar array delivered power). The measured arcing threshold ranged from -210 to -457 V with respect to the plasma potential. Furthermore, the dynamic response of the panels showed the panel time constant to range between 1 and 5 microsec, and the panel capacitance to be between .01 and .02 microF.
The Space Station photovoltaic panels plasma interaction test program - Test plan and results
NASA Technical Reports Server (NTRS)
Nahra, Henry K.; Felder, Marian C.; Sater, Bernard L.; Staskus, John V.
1990-01-01
The Plasma Interaction Test performed on two space station solar array panels is addressed. This includes a discussion of the test requirements, test plan, experimental set-up, and test results. It was found that parasitic current collection was insignificant (0.3 percent of the solar array delivered power). The measured arcing threshold ranged from -210 to -457 V with respect to the plasma potential. Furthermore, the dynamic response of the panels showed the panel time constant to range between 1 and 5 microsec, and the panel capacitance to be between .01 and .02 microF.
Behavior driven testing in ALMA telescope calibration software
NASA Astrophysics Data System (ADS)
Gil, Juan P.; Garces, Mario; Broguiere, Dominique; Shen, Tzu-Chiang
2016-07-01
ALMA software development cycle includes well-defined testing stages that involve developers, testers and scientists. We adapted Behavior Driven Development (BDD) to testing activities applied to Telescope Calibration (TELCAL) software. BDD is an agile technique that encourages communication between roles by defining test cases in natural language to specify features and scenarios, which allows participants to share a common language and provides a high-level set of automated tests. This work describes how we implemented and maintain BDD testing for TELCAL, the infrastructure needed to support it, and proposals to expand this technique to other subsystems.
Waveform generation in the EETS
NASA Astrophysics Data System (ADS)
Wilshire, J. P.
1985-05-01
Design decisions and analysis for the waveform generation portion of an electrical equipment test set are discussed. This test set is unlike conventional ATE in that it is portable and designed to operate in forward area sites for the USMC. It is also unique in that it provides for functional testing of 32 electronic units from the AV-8B Harrier II aircraft. Specific requirements for the waveform generator are discussed, including a wide frequency range, high resolution and accuracy, and low total harmonic distortion. Several approaches to meet these requirements are considered and a specific concept is presented in detail, which consists of a digitally produced waveform that feeds a deglitched analog conversion circuit. Rigorous mathematical analysis is presented to prove that this concept meets the requirements. Finally, design alternatives and enhancements are considered.
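A digitally produced waveform of the kind described can be sketched as a direct digital synthesis (DDS) phase-accumulator loop; the table size, accumulator width, and sample rate below are illustrative assumptions, not EETS design values.

```python
import math

def make_sine_table(size=256, amplitude=127):
    # One cycle of a sine, quantized to signed 8-bit range
    return [round(amplitude * math.sin(2 * math.pi * i / size))
            for i in range(size)]

def dds_samples(freq_hz, sample_rate_hz, n_samples, table):
    """Phase-accumulator lookup: the tuning word sets the output
    frequency as freq = tuning_word * sample_rate / 2**32."""
    acc_bits = 32
    tuning_word = round(freq_hz * (1 << acc_bits) / sample_rate_hz)
    acc, out = 0, []
    for _ in range(n_samples):
        out.append(table[acc >> (acc_bits - 8)])  # top 8 bits index the table
        acc = (acc + tuning_word) & ((1 << acc_bits) - 1)
    return out

table = make_sine_table()
samples = dds_samples(1000, 48000, 48, table)  # one 1 kHz cycle at 48 kS/s
print(max(samples), min(samples))
```

In hardware, the table would feed the deglitched digital-to-analog conversion circuit the abstract mentions; frequency resolution comes from the accumulator width rather than the table size.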
ERIC Educational Resources Information Center
National Collegiate Software Clearinghouse, Durham, NC.
Over 250 microcomputer software packages, intended for use on MS-DOS machines by scholars and teachers in the humanities and social sciences, are included in this catalog. The clearinghouse's first Macintosh listing is included, with many more Macintosh programs and data sets being planned and tested for future inclusion. Most programs were…
Error Types and Their Significance in Children's Responses in Elicitation Settings.
ERIC Educational Resources Information Center
Dougherty, Janet W. D.
The distribution of errors in children's responses in four elicitation tests of their color-naming abilities is explored with a view to clarifying states of ignorance. Subjects include 47 Polynesian children ranging in age from 2 to 12 years. The four experiments include a naming task, two identification tasks and a mapping task. Children are…
MacDonald, Donald D.; Ingersoll, Christopher G.; Smorong, Dawn E.; Sinclair, Jesse A.; Lindskoog, Rebekka; Wang, Ning; Severn, Corrine; Gouguet, Ron; Meyer, John; Field, Jay
2011-01-01
Three sets of effects-based sediment-quality guidelines (SQGs) were evaluated to support the selection of sediment-quality benchmarks for assessing risks to benthic invertebrates in the Calcasieu Estuary, Louisiana. These SQGs included probable effect concentrations (PECs), effects range median values (ERMs), and logistic regression model (LRM)-based T50 values. The results of this investigation indicate that all three sets of SQGs tend to underestimate sediment toxicity in the Calcasieu Estuary (i.e., relative to the national data sets), as evaluated using the results of 10-day toxicity tests with the amphipods Hyalella azteca or Ampelisca abdita, and 28-day whole-sediment toxicity tests with H. azteca. These results emphasize the importance of deriving site-specific toxicity thresholds for assessing risks to benthic invertebrates.
Towards a rational antimicrobial testing policy in the laboratory.
Banaji, N; Oommen, S
2011-01-01
Antimicrobial policy for prophylactic and therapeutic use of antimicrobials in a tertiary care setting has gained importance. A hospital's antimicrobial policy as laid down by its hospital infection control team needs to include inputs from the microbiology laboratory, besides the pharmacy and therapeutic committee. Therefore, it is of utmost importance that clinical microbiologists across India follow international guidelines and also take into account local settings, especially detection and presence of resistance enzymes. This article draws a framework for rational antimicrobial testing in our laboratories in tertiary care centers, from the Clinical and Laboratory Standards Institute guidelines. It does not address testing methodologies but suggests ways and means by which antimicrobial susceptibility reporting can be rendered meaningful not only to the treating physician but also to the resistance monitoring epidemiologist. It hopes to initiate some standardization in rational choice of antimicrobial testing in laboratories in the country pertaining to nonfastidious bacteria.
Fabrication and evaluation of cold-formed/weldbrazed beta-titanium skin-stiffened compression panels
NASA Technical Reports Server (NTRS)
Royster, D. M.; Bales, T. T.; Davis, R. C.; Wiant, H. R.
1983-01-01
The room temperature and elevated temperature buckling behavior of cold-formed beta-titanium hat-shaped stiffeners joined by weld brazing to alpha-beta titanium skins was determined. A preliminary set of single-stiffener compression panels was used to develop a data base for material and panel properties. These panels were tested at room temperature and 316 C (600 F). A final set of multistiffener compression panels was fabricated for room temperature tests by the process developed in making the single-stiffener panels. The overall geometrical dimensions for the multistiffener panels were determined by the structural sizing computer code PASCO. The data presented from the panel tests include load shortening curves, local buckling strengths, and failure loads. Experimental buckling loads are compared with the buckling loads predicted by the PASCO code. Material property data obtained from tests of ASTM standard dogbone specimens are also presented.
Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.
NASA Technical Reports Server (NTRS)
Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.
2015-01-01
The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent growth, nitrogen in crop and soil, crop and soil water and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.
NASA Astrophysics Data System (ADS)
Dillon, Chris
Built upon remote sensing and GIS littoral zone characterization methodologies of the past decade, a series of loosely coupled models aimed to test, compare and synthesize multi-beam SONAR (MBES), Airborne LiDAR Bathymetry (ALB), and satellite based optical data sets in the Gulf of St. Lawrence, Canada, eco-region. Bathymetry and relative intensity metrics for the MBES and ALB data sets were run through a quantitative and qualitative comparison, which included outputs from the Benthic Terrain Modeller (BTM) tool. Substrate classification based on relative intensities of respective data sets and textural indices generated using grey level co-occurrence matrices (GLCM) were investigated. A spatial modelling framework built in ArcGIS(TM) for the derivation of bathymetric data sets from optical satellite imagery was also tested for proof of concept and validation. Where possible, efficiencies and semi-automation for repeatable testing were achieved using ArcGIS(TM) ModelBuilder. The findings from this study could assist future decision makers in the field of coastal management and hydrographic studies. Keywords: Seafloor terrain characterization, Benthic Terrain Modeller (BTM), Multi-beam SONAR, Airborne LiDAR Bathymetry, Satellite Derived Bathymetry, ArcGIS(TM) ModelBuilder, Textural analysis, Substrate classification.
Nicodemus, Kristin K; Hargreaves, April; Morris, Derek; Anney, Richard; Gill, Michael; Corvin, Aiden; Donohoe, Gary
2014-07-01
We investigated the variation in neuropsychological function explained by risk alleles at the psychosis susceptibility gene ZNF804A and its interacting partners using single nucleotide polymorphisms (SNPs), polygenic scores, and epistatic analyses. Of particular importance was the relative contribution of the polygenic score vs epistasis in variation explained. To (1) assess the association between SNPs in ZNF804A and the ZNF804A polygenic score with measures of cognition in cases with psychosis and (2) assess whether epistasis within the ZNF804A pathway could explain additional variation above and beyond that explained by the polygenic score. Patients with psychosis (n = 424) were assessed in areas of cognitive ability impaired in schizophrenia including IQ, memory, attention, and social cognition. We used the Psychiatric GWAS Consortium 1 schizophrenia genome-wide association study to calculate a polygenic score based on identified risk variants within this genetic pathway. Cognitive measures significantly associated with the polygenic score were tested for an epistatic component using a training set (n = 170), which was used to develop linear regression models containing the polygenic score and 2-SNP interactions. The best-fitting models were tested for replication in 2 independent test sets of cases: (1) 170 individuals with schizophrenia or schizoaffective disorder and (2) 84 patients with broad psychosis (including bipolar disorder, major depressive disorder, and other psychosis). Participants completed a neuropsychological assessment battery designed to target the cognitive deficits of schizophrenia including general cognitive function, episodic memory, working memory, attentional control, and social cognition. Higher polygenic scores were associated with poorer performance among patients on IQ, memory, and social cognition, explaining 1% to 3% of variation on these scores (range, P = .01 to .03). 
Using a narrow psychosis training set and independent test sets of narrow phenotype psychosis (schizophrenia and schizoaffective disorder), broad psychosis, and control participants (n = 89), the addition of 2 interaction terms containing 2 SNPs each increased the R2 for spatial working memory strategy in the independent psychosis test sets from 1.2% using the polygenic score only to 4.8% (P = .11 and .001, respectively) but did not explain additional variation in control participants. These data support a role for the ZNF804A pathway in IQ, memory, and social cognition in cases. Furthermore, we showed that epistasis increases the variation explained above the contribution of the polygenic score.
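A polygenic score of the kind used above is, at its core, a weighted sum of risk-allele dosages; the SNP identifiers and log-odds weights below are hypothetical placeholders, not values from the Psychiatric GWAS Consortium study.

```python
def polygenic_score(genotypes, weights):
    """Weighted sum of risk-allele dosages (0/1/2) over the SNPs
    present in both the genotype record and the weight table."""
    return sum(weights[snp] * dose
               for snp, dose in genotypes.items() if snp in weights)

# Hypothetical per-SNP log-odds weights (not real GWAS estimates)
weights = {"rs1344706": 0.12, "rs7597593": 0.08, "rs12476147": -0.05}
person = {"rs1344706": 2, "rs7597593": 1, "rs12476147": 0}
print(round(polygenic_score(person, weights), 2))  # 2*0.12 + 1*0.08 = 0.32
```

The epistatic extension the study tests adds pairwise SNP-by-SNP interaction terms to the regression alongside this additive score.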
Valeiro, Beatriz; Hernández, Carme; Barberán-Garcia, Anael; Rodríguez, Diego A; Aibar, Jesús; Llop, Lourdes; Vilaró, Jordi
2016-05-01
The Glittre Activities of Daily Living Test (ADL-Test) is a reliable functional status measurement for stable chronic obstructive pulmonary disease (COPD) patients in a laboratory setting. We aimed to adapt the test to the home setting (mADL-Test) and to follow-up the functional status recovery of post-exacerbation COPD patients included in a home hospitalization (HH) program. We assessed 17 exacerbated moderate-to-very-severe COPD patients in 3 home visits: at discharge to HH (V0), 10days (V10post) and 1month after discharge (V30post). Patients completed the mADL-Test (laps, VO2 and VE), COPD assessment test (CAT), London Chest ADL Test (LCADL), modified Medical Research Council (mMRC) and upper limb strength (handgrip). The number of laps of the mADL-Test (4, 5 and 5, P<.05), CAT (19, 12 and 12, P<.01), mMRC (2, 1.5 and 1, P<.01) and the self-care domain of the LCADL (6, 5 and 5, P<.01) improved during follow-up (V0, V10post and V30post, respectively). No significant changes were evidenced in VO2, VE or handgrip. Our results suggest that the mADL-test can be performed in the home setting after a COPD exacerbation, and that functional status continues to improve 10days after discharge to HH. Copyright © 2015 SEPAR. Published by Elsevier Espana. All rights reserved.
A Unified Mixed-Effects Model for Rare-Variant Association in Sequencing Studies
Sun, Jianping; Zheng, Yingye; Hsu, Li
2013-01-01
For rare-variant association analysis, due to the extremely low frequencies of these variants, it is necessary to aggregate them by prior sets (e.g., genes and pathways) in order to achieve adequate power. In this paper, we consider hierarchical models to relate a set of rare variants to phenotype by modeling the effects of variants as a function of variant characteristics while allowing for variant-specific effect (heterogeneity). We derive a set of two score statistics, testing the group effect by variant characteristics and the heterogeneity effect. We make a novel modification to these score statistics so that they are independent under the null hypothesis and their asymptotic distributions can be derived. As a result, the computational burden is greatly reduced compared with permutation-based tests. Our approach provides a general testing framework for rare-variant association, which includes many commonly used tests, such as the burden test [Li and Leal, 2008] and the sequence kernel association test [Wu et al., 2011], as special cases. Furthermore, in contrast to these tests, our proposed test has an added capacity to identify which components of variant characteristics and heterogeneity contribute to the association. Simulations under a wide range of scenarios show that the proposed test is valid, robust and powerful. An application to the Dallas Heart Study illustrates that apart from identifying genes with significant associations, the new method also provides additional information regarding the source of the association. Such information may be useful for generating hypotheses in future studies. PMID:23483651
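The burden test cited above as a special case of this framework can be sketched as collapsing rare variants into a single per-subject allele count; the genotype matrix and allele frequencies below are invented for illustration.

```python
def burden_scores(genotypes, mafs, maf_threshold=0.05):
    """Collapse rare variants (MAF below the threshold) into a single
    per-subject allele count, as in a simple burden test."""
    rare = [j for j, m in enumerate(mafs) if m < maf_threshold]
    return [sum(row[j] for j in rare) for row in genotypes]

# Hypothetical data: rows = subjects, columns = variant allele counts (0/1/2)
G = [[0, 1, 0, 2],
     [1, 0, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 0, 0]]
mafs = [0.004, 0.30, 0.01, 0.002]  # only variants 0, 2, 3 count as rare
print(burden_scores(G, mafs))      # [2, 1, 2, 0]
```

The collapsed count is then tested against the phenotype with a standard regression; the paper's score statistics generalize this by weighting variants by their characteristics and allowing variant-specific effects.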
User's guide to the NOZL3D and NOZLIC computer programs
NASA Technical Reports Server (NTRS)
Thomas, P. D.
1980-01-01
Complete FORTRAN listings and running instructions are given for a set of computer programs that perform an implicit numerical solution of the unsteady Navier-Stokes equations to predict the flow characteristics and performance of nonaxisymmetric nozzles. The set includes the NOZL3D program, which performs the flow computations; the NOZLIC program, which sets up the flow-field initial conditions for general nozzle configurations and also generates the computational grid for simple two-dimensional and axisymmetric configurations; and the RGRIDD program, which generates the computational grid for complicated three-dimensional configurations. The programs are designed specifically for the NASA-Langley CYBER 175 computer and employ auxiliary disk files for primary data storage. Input instructions and computed results are given for four test cases that include two-dimensional, three-dimensional, and axisymmetric configurations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohammed, Irshad; Gnedin, Nickolay Y.
Baryonic effects are amongst the most severe systematics in the tomographic analysis of weak lensing data, which is the principal probe in many future cosmological surveys such as LSST and Euclid. Modeling or parameterizing these effects is essential in order to extract valuable constraints on cosmological parameters. In a recent paper, Eifler et al. (2015) suggested a reduction technique for baryonic effects by conducting a principal component analysis (PCA) and removing the largest baryonic eigenmodes from the data. In this article, we took the investigation further and addressed two critical aspects. First, we performed the analysis by separating the simulations into training and test sets, computing a minimal set of principal components from the training set and examining the fits on the test set. We found that using only four parameters, corresponding to the four largest eigenmodes of the training set, the test sets can be fitted accurately with an RMS of ~0.0011. Second, we explored the significance of outliers, the most exotic/extreme baryonic scenarios, in this method. We found that excluding the outliers from the training set results in a relatively bad fit and degrades the RMS by nearly a factor of 3. Therefore, for a direct application of this method to the tomographic analysis of weak lensing data, the principal components should be derived from a training set that comprises adequately exotic but reasonable models, such that reality lies inside the parameter domain sampled by the training set. The baryonic effects can then be parameterized as the coefficients of these principal components and marginalized over the cosmological parameter space.
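The train/test PCA procedure described above can be sketched in a few lines: fit principal components on a training set of scenarios, then fit each test scenario using only the leading eigenmode coefficients and measure the residual RMS. The toy `scenario` function below is an invented stand-in for the hydrodynamic-simulation power-spectrum ratios, used only to show the mechanics:

```python
import numpy as np

# Toy stand-in for power-spectrum ratios: each row is one baryonic scenario,
# each column one k-bin (the functional form is invented for illustration).
k = np.linspace(0.1, 10.0, 50)
def scenario(a, b):
    return 1.0 + a * np.tanh(k / 3.0) - b * np.exp(-k)

train = np.array([scenario(a, b) for a in (0.02, 0.05, 0.10) for b in (0.01, 0.04)])
test = np.array([scenario(0.07, 0.02), scenario(0.03, 0.03)])

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
pcs = Vt[:4]                         # the four largest eigenmodes

coeff = (test - mean) @ pcs.T        # four coefficients per test scenario
recon = mean + coeff @ pcs           # reconstruction from those coefficients
rms = np.sqrt(np.mean((test - recon) ** 2))
```

The paper's point about outliers maps directly onto this sketch: if `test` contained a scenario outside the span of `train`, the projection onto the leading modes could not reproduce it and `rms` would degrade.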
NASA Technical Reports Server (NTRS)
New, S. R.
1981-01-01
The multiplexer-demultiplexer (MDM) project included the design, documentation, manufacture, and testing of three MDM Data Systems. The equipment is contained in 59 racks and includes more than 3,000 circuit boards and 600 microprocessors. Spares, circuit card testers, a master set of programmable integrated circuits, and a program development system were included as deliverables. All three MDM's were installed and operationally tested. The systems performed well with no major problems. The progress and problems analysis addresses schedule conformance, new technology, items awaiting government approval, and project conclusions. All contract modifications are described.
Data Summary Report for the Open Rotor Propulsion Rig Equipped With F31/A31 Rotor Blades
NASA Technical Reports Server (NTRS)
Stephens, David
2014-01-01
An extensive wind tunnel test campaign was undertaken to quantify the performance and acoustics of a counter-rotating open rotor system. The present document summarizes the portion of this test performed with the so-called "Historical Baseline" rotor blades, designated F31/A31. It includes performance and acoustic data acquired at Mach numbers from take-off to cruise. It also includes the effect of propulsor angle of attack as well as an upstream pylon. This report is accompanied by an electronic data set including relevant acoustic and performance measurements for all of the F31/A31 data.
Spacecraft Data Simulator for the test of level zero processing systems
NASA Technical Reports Server (NTRS)
Shi, Jeff; Gordon, Julie; Mirchandani, Chandru; Nguyen, Diem
1994-01-01
The Microelectronic Systems Branch (MSB) at Goddard Space Flight Center (GSFC) has developed a Spacecraft Data Simulator (SDS) to support the development, test, and verification of prototype and production Level Zero Processing (LZP) systems. Based on a disk array system, the SDS is capable of generating large test data sets up to 5 Gigabytes and outputting serial test data at rates up to 80 Mbps. The SDS supports data formats including NASA Communication (Nascom) blocks, Consultative Committee for Space Data System (CCSDS) Version 1 & 2 frames and packets, and all the Advanced Orbiting Systems (AOS) services. The capability to simulate both sequential and non-sequential time-ordered downlink data streams with errors and gaps is crucial to test LZP systems. This paper describes the system architecture, hardware and software designs, and test data designs. Examples of test data designs are included to illustrate the application of the SDS.
Update on parts SEE susceptibility from heavy ions. [Single Event Effects]
NASA Technical Reports Server (NTRS)
Nichols, D. K.; Smith, L. S.; Schwartz, H. R.; Soli, G.; Watson, K.; Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.; Lau, D. D.
1991-01-01
JPL and the Aerospace Corporation have collected a fourth set of heavy ion single event effects (SEE) test data. Trends in SEE susceptibility (including soft errors and latchup) for state-of-the-art parts are displayed. All data are conveniently divided into two tables: one for MOS devices, and one for a shorter list of recently tested bipolar devices. In addition, a new table of data for latchup tests only (invariably CMOS processes) is given.
Office-based treatment and outcomes for febrile infants with clinically diagnosed bronchiolitis.
Luginbuhl, Lynn M; Newman, Thomas B; Pantell, Robert H; Finch, Stacia A; Wasserman, Richard C
2008-11-01
The goals were to describe the (1) frequency of sepsis evaluation and empiric antibiotic treatment, (2) clinical predictors of management, and (3) serious bacterial illness frequency for febrile infants with clinically diagnosed bronchiolitis seen in office settings. The Pediatric Research in Office Settings network conducted a prospective cohort study of 3066 febrile infants (<3 months of age with temperatures ≥38°C) in 219 practices in 44 states. We compared the frequency of sepsis evaluation, parenteral antibiotic treatment, and serious bacterial illness in infants with and without clinically diagnosed bronchiolitis. We identified predictors of sepsis evaluation and parenteral antibiotic treatment in infants with bronchiolitis by using logistic regression models. Practitioners were less likely to perform a complete sepsis evaluation, urine testing, and cerebrospinal fluid culture and to administer parenteral antibiotic treatment for infants with bronchiolitis, compared with those without bronchiolitis. Significant predictors of sepsis evaluation in infants with bronchiolitis included younger age, higher maximal temperature, and respiratory syncytial virus testing. Predictors of parenteral antibiotic use included initial ill appearance, age of <30 days, higher maximal temperature, and general signs of infant distress. Among infants with bronchiolitis (N = 218), none had serious bacterial illness and those with respiratory distress signs were less likely to receive parenteral antibiotic treatment. Diagnoses among 2848 febrile infants without bronchiolitis included bacterial meningitis (n = 14), bacteremia (n = 49), and urinary tract infection (n = 167). In office settings, serious bacterial illness in young febrile infants with clinically diagnosed bronchiolitis is uncommon. Limited testing for bacterial infections seems to be an appropriate management strategy.
Design of an efficient music-speech discriminator.
Tardón, Lorenzo J; Sammartino, Simone; Barbancho, Isabel
2010-01-01
In this paper, the problem of the design of a simple and efficient music-speech discriminator for large audio data sets in which advanced music playing techniques are taught and voice and music are intrinsically interleaved is addressed. In the process, a number of features used in speech-music discrimination are defined and evaluated over the available data set. Specifically, the data set contains pieces of classical music played with different and unspecified instruments (or even lyrics) and the voice of a teacher (a top music performer) or even the overlapped voice of the translator and other persons. After an initial test of the performance of the features implemented, a selection process is started, which takes into account the type of classifier selected beforehand, to achieve good discrimination performance and computational efficiency, as shown in the experiments. The discrimination application has been defined and tested on a large data set supplied by Fundacion Albeniz, containing a large variety of classical music pieces played with different instruments, which include comments and speeches of famous performers.
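Two classic features used in speech/music discrimination of the kind evaluated here are the zero-crossing rate and the variance of short-time log-energy (speech alternates voiced bursts and pauses; sustained music does not). A minimal sketch on synthetic signals; the frame size and the test signals are illustrative, not the paper's feature set:

```python
import numpy as np

def frame_features(x, frame=1024):
    """Per-frame zero-crossing rate and log-energy (two illustrative features)."""
    n = len(x) // frame
    frames = x[: n * frame].reshape(n, frame)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    energy = np.log10(np.sum(frames ** 2, axis=1) + 1e-12)
    return zcr, energy

rng = np.random.default_rng(2)
sr = 16000
t = np.arange(sr) / sr
music = np.sin(2 * np.pi * 440 * t)                       # a sustained tone
speech = np.concatenate([0.5 * rng.normal(size=sr // 2),  # a noisy burst...
                         np.zeros(sr // 2)])              # ...then a pause

zcr_music, e_music = frame_features(music)
zcr_speech, e_speech = frame_features(speech)
# Speech-like signals show far higher energy variance across frames (pauses
# between words) and, for unvoiced segments, a higher zero-crossing rate.
```

A classifier would then be trained on per-frame feature vectors like these, which is where the paper's classifier-aware feature selection comes in.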
A rank test for bivariate time-to-event outcomes when one event is a surrogate
Shaw, Pamela A.; Fay, Michael P.
2016-01-01
In many clinical settings, improving patient survival is of interest but a practical surrogate, such as time to disease progression, is instead used as a clinical trial’s primary endpoint. A time-to-first endpoint (e.g. death or disease progression) is commonly analyzed but may not be adequate to summarize patient outcomes if a subsequent event contains important additional information. We consider a surrogate outcome very generally, as one correlated with the true endpoint of interest. Settings of interest include those where the surrogate indicates a beneficial outcome so that the usual time-to-first endpoint of death or surrogate event is nonsensical. We present a new two-sample test for bivariate, interval-censored time-to-event data, where one endpoint is a surrogate for the second, less frequently observed endpoint of true interest. This test examines whether patient groups have equal clinical severity. If the true endpoint rarely occurs, the proposed test acts like a weighted logrank test on the surrogate; if it occurs for most individuals, then our test acts like a weighted logrank test on the true endpoint. If the surrogate is a useful statistical surrogate, our test can have better power than tests based on the surrogate that naively handle the true endpoint. In settings where the surrogate is not valid (treatment affects the surrogate but not the true endpoint), our test incorporates the information regarding the lack of treatment effect from the observed true endpoints and hence is expected to have a dampened treatment effect compared to tests based on the surrogate alone. PMID:27059817
Role of the laboratory in the evaluation of suspected drug abuse.
Gold, M S; Dackis, C A
1986-01-01
Despite the high incidence of substance abuse, it remains a common cause of misdiagnosis. In patients who have abused or who are currently abusing drugs, symptoms of a psychiatric illness may be mimicked by either the drug's presence or absence. The laboratory can aid in making a differential diagnosis and eliminating drugs from active consideration as a cause of psychosis, depression, mania, and personality changes. Treatment planning and prevention of serious medical consequences often rest on the accuracy of the admission drug screen. Testing is widely used to assess improvement in substance abuse in both inpatient and outpatient settings. In occupational settings, testing has been used as an early indicator that a problem exists and as a successful prevention tool. The appropriate use of analytic technology in drug abuse testing requires an understanding of available test methodologies. These include drug screens by thin-layer chromatography, comprehensive testing using enzyme immunoassay, and computer-assisted gas chromatography-mass spectrometry (GC-MS). Testing for specific drugs considered likely causes or precipitants of "psychiatric" complaints is available with enzyme assays, radioimmunoassay, or definitive forensic-quality testing using GC-MS.
A goal attainment pain management program for older adults with arthritis.
Davis, Gail C; White, Terri L
2008-12-01
The purpose of this study was to test a pain management intervention that integrates goal setting with older adults (age ≥65) living independently in residential settings. This preliminary testing of the Goal Attainment Pain Management Program (GAPMAP) included a sample of 17 adults (mean age 79.29 years) with self-reported pain related to arthritis. Specific study aims were to: 1) explore the use of individual goal setting; 2) determine participants' levels of goal attainment; 3) determine whether changes occurred in the pain management methods used and found to be helpful by GAPMAP participants; and 4) determine whether changes occurred in selected pain-related variables (i.e., experience of living with persistent pain, the expected outcomes of pain management, pain management barriers, and global ratings of perceived pain intensity and success of pain management). Because of the small sample size, both parametric (t test) and nonparametric (Wilcoxon signed rank test) analyses were used to examine differences from pretest to posttest. Results showed that older individuals could successfully participate in setting and attaining individual goals. Thirteen of the 17 participants (76%) met their goals at the expected level or above. Two management methods (exercise and using a heated pool, tub, or shower) were used significantly more often after the intervention, and two methods (exercise and distraction) were identified as significantly more helpful. Two pain-related variables (experience of living with persistent pain and expected outcomes of pain management) revealed significant change, and all of those tested showed overall improvement.
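The paired pretest/posttest comparisons described above can be reproduced in outline with scipy, running a paired t test alongside the Wilcoxon signed-rank test as is common when a small sample makes the normality assumption doubtful. The ratings below are hypothetical, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post pain-intensity ratings (0-10 scale) for 17 participants,
# echoing the study's use of both parametric and nonparametric paired tests.
pre  = np.array([7, 6, 8, 5, 7, 6, 9, 7, 6, 8, 7, 5, 6, 8, 7, 6, 7])
post = np.array([5, 5, 6, 4, 6, 5, 7, 6, 5, 6, 6, 4, 5, 6, 6, 5, 6])

t_stat, t_p = stats.ttest_rel(pre, post)   # paired t test
w_stat, w_p = stats.wilcoxon(pre, post)    # Wilcoxon signed-rank test
```

Reporting both, as the study does, guards against the parametric result being an artifact of non-normal differences in a sample this small.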
An Independent Filter for Gene Set Testing Based on Spectral Enrichment.
Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H
2015-01-01
Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in common gene set collections, however, testing is often performed with nearly as many gene sets as underlying genomic variables. To address the challenge to statistical power posed by large gene set collections, we have developed spectral gene set filtering (SGSF), a novel technique for independent filtering of gene set collections prior to gene set testing. The SGSF method uses as a filter statistic the p-value measuring the statistical significance of the association between each gene set and the sample principal components (PCs), taking into account the significance of the associated eigenvalues. Because this filter statistic is independent of standard gene set test statistics under the null hypothesis but dependent under the alternative, the proportion of enriched gene sets is increased without impacting the type I error rate. As shown using simulated and real gene expression data, the SGSF algorithm accurately filters gene sets unrelated to the experimental outcome resulting in significantly increased gene set testing power.
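The idea of filtering gene sets by their association with sample principal components can be sketched as follows. This is a simplified stand-in for the SGSF filter statistic, not the published algorithm: it uses only PC1 and a plain correlation p-value, whereas SGSF weighs all PCs by the significance of their eigenvalues:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_samples, n_genes = 40, 200
X = rng.normal(size=(n_samples, n_genes))
X[:, :20] += rng.normal(size=(n_samples, 1))   # genes 0-19 share a latent factor

# Sample principal components via SVD of the centered expression matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = U[:, 0] * S[0]

def filter_pvalue(gene_idx):
    """p-value of association between a gene set's mean expression and PC1."""
    set_score = X[:, gene_idx].mean(axis=1)
    return stats.pearsonr(set_score, pc1)[1]

p_related = filter_pvalue(np.arange(0, 20))      # set driven by the latent factor
p_unrelated = filter_pvalue(np.arange(100, 120)) # a random, unrelated set
```

Sets with large filter p-values would be dropped before gene set testing, reducing the multiple-testing burden without biasing the type I error, since the filter statistic is independent of the test statistic under the null.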
International spinal cord injury pulmonary function basic data set.
Biering-Sørensen, F; Krassioukov, A; Alexander, M S; Donovan, W; Karlsson, A-K; Mueller, G; Perkash, I; Sheel, A William; Wecht, J; Schilero, G J
2012-06-01
To develop the International Spinal Cord Injury (SCI) Pulmonary Function Basic Data Set within the framework of the International SCI Data Sets in order to facilitate consistent collection and reporting of basic bronchopulmonary findings in the SCI population. International. The SCI Pulmonary Function Data Set was developed by an international working group. The initial data set document was revised on the basis of suggestions from members of the Executive Committee of the International SCI Standards and Data Sets, the International Spinal Cord Society (ISCoS) Executive and Scientific Committees, American Spinal Injury Association (ASIA) Board, other interested organizations and societies and individual reviewers. In addition, the data set was posted for 2 months on the ISCoS and ASIA websites for comments. The final International SCI Pulmonary Function Data Set contains questions on the pulmonary conditions diagnosed before the spinal cord lesion, if available, to be obtained only once; smoking history; and pulmonary complications and conditions after the spinal cord lesion, which may be collected at any time. These data include information on pneumonia, asthma, chronic obstructive pulmonary disease and sleep apnea. Current utilization of ventilator assistance, including mechanical ventilation, diaphragmatic pacing, phrenic nerve stimulation and bi-level positive airway pressure, can be reported, as well as results from pulmonary function testing, including forced vital capacity, forced expiratory volume in one second and peak expiratory flow. The complete instructions for data collection and the data sheet itself are freely available on the website of ISCoS (http://www.iscos.org.uk).
Tonkin, Emma; Brimblecombe, Julie; Wycherley, Thomas Philip
2017-03-01
Smartphone applications are increasingly being used to support nutrition improvement in community settings. However, there is a scarcity of practical literature to support researchers and practitioners in choosing or developing health applications. This work maps the features, key content, theoretical approaches, and methods of consumer testing of applications intended for nutrition improvement in community settings. A systematic, scoping review methodology was used to map published, peer-reviewed literature reporting on applications with a specific nutrition-improvement focus intended for use in the community setting. After screening, articles were grouped into 4 categories: dietary self-monitoring trials, nutrition improvement trials, application description articles, and qualitative application development studies. For mapping, studies were also grouped into categories based on the target population and aim of the application or program. Of the 4818 titles identified from the database search, 64 articles were included. The broad categories of features found to be included in applications generally corresponded to different behavior change support strategies common to many classic behavioral change models. Key content of applications generally focused on food composition, with tailored feedback most commonly used to deliver educational content. Consumer testing before application deployment was reported in just over half of the studies. Collaboration between practitioners and application developers promotes an appropriate balance of evidence-based content and functionality. This work provides a unique resource for program development teams and practitioners seeking to use an application for nutrition improvement in community settings. PMID:28298274
Multisource Feedback in the Ambulatory Setting
Warm, Eric J.; Schauer, Daniel; Revis, Brian; Boex, James R.
2010-01-01
Background The Accreditation Council for Graduate Medical Education has mandated multisource feedback (MSF) in the ambulatory setting for internal medicine residents. Few published reports demonstrate actual MSF results for a residency class, and fewer still include clinical quality measures and knowledge-based testing performance in the data set. Methods Residents participating in a year-long group practice experience called the "long-block" received MSF that included self, peer, staff, attending physician, and patient evaluations, as well as concomitant clinical quality data and knowledge-based testing scores. Residents were given a rank for each data point compared with peers in the class, and these data were reviewed with the chief resident and program director over the course of the long-block. Results Multisource feedback identified residents who performed well on most measures compared with their peers (10%), residents who performed poorly on most measures compared with their peers (10%), and residents who performed well on some measures and poorly on others (80%). Each high-, intermediate-, and low-performing resident had at least one aspect of the MSF that was significantly lower than the others, and this served as the basis of formative feedback during the long-block. Conclusion Use of multisource feedback in the ambulatory setting can identify high-, intermediate-, and low-performing residents and suggest specific formative feedback for each. More research needs to be done on the effect of such feedback, as well as the relationships between each of the components in the MSF data set. PMID:21975632
Quality Assurance of RNA Expression Profiling in Clinical Laboratories
Tang, Weihua; Hu, Zhiyuan; Muallem, Hind; Gulley, Margaret L.
2012-01-01
RNA expression profiles are increasingly used to diagnose and classify disease, based on expression patterns of as many as several thousand RNAs. To ensure quality of expression profiling services in clinical settings, a standard operating procedure incorporates multiple quality indicators and controls, beginning with preanalytic specimen preparation and proceeding through analysis, interpretation, and reporting. Before testing, histopathological examination of each cellular specimen, along with optional cell enrichment procedures, ensures adequacy of the input tissue. Other tactics include endogenous controls to evaluate adequacy of RNA and exogenous or spiked controls to evaluate run- and patient-specific performance of the test system, respectively. Unique aspects of quality assurance for array-based tests include controls for the pertinent outcome signatures that often supersede controls for each individual analyte, built-in redundancy for critical analytes or biochemical pathways, and software-supported scrutiny of abundant data by a laboratory physician who interprets the findings in a manner facilitating appropriate medical intervention. Access to high-quality reagents, instruments, and software from commercial sources promotes standardization and adoption in clinical settings, once an assay is vetted in validation studies as being analytically sound and clinically useful. Careful attention to the well-honed principles of laboratory medicine, along with guidance from government and professional groups on strategies to preserve RNA and manage large data sets, promotes clinical-grade assay performance. PMID:22020152
Analysis of pumping tests: Significance of well diameter, partial penetration, and noise
Heidari, M.; Ghiassi, K.; Mehnert, E.
1999-01-01
The nonlinear least squares (NLS) method was applied to pumping and recovery aquifer test data in confined and unconfined aquifers with finite-diameter and partially penetrating pumping wells, and with partially penetrating piezometers or observation wells. It was demonstrated that noiseless and moderately noisy drawdown data from observation points located less than two saturated thicknesses of the aquifer from the pumping well produced an exact or acceptable set of parameters when the diameter of the pumping well was included in the analysis. The accuracy of the estimated parameters, particularly that of specific storage, decreased with increases in the noise level in the observed drawdown data. With consideration of the well radii, the noiseless drawdown data from the pumping well in an unconfined aquifer produced good estimates of horizontal and vertical hydraulic conductivities and specific yield, but the estimated specific storage was unacceptable. When noisy data from the pumping well were used, an acceptable set of parameters was not obtained. Further experiments with noisy drawdown data in an unconfined aquifer revealed that when the well diameter was included in the analysis, hydraulic conductivity, specific yield and vertical hydraulic conductivity may be estimated rather effectively from piezometers located over a range of distances from the pumping well. Estimation of specific storage became less reliable for piezometers located at distances greater than the initial saturated thickness of the aquifer. Application of the NLS method to field pumping and recovery data from a confined aquifer showed that the estimated parameters from the two tests were in good agreement only when the well diameter was included in the analysis. Without consideration of well radii, the estimated values of hydraulic conductivity from the pumping and recovery tests were off by a factor of four.
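A minimal sketch of NLS fitting of pumping-test drawdown, using the idealized Theis solution for a fully penetrating line-source well (i.e., ignoring the well-diameter and partial-penetration effects that the paper shows can matter); all parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import exp1

Q, r = 0.01, 30.0          # pumping rate (m^3/s) and observation distance (m)

def theis(t, logT, logS):
    """Theis drawdown; log10 parameters keep T and S positive during fitting."""
    T, S = 10.0 ** logT, 10.0 ** logS
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)   # exp1 is the Theis well function W(u)

t = np.logspace(2, 5, 40)                    # observation times (s)
true_T, true_S = 5e-3, 2e-4                  # transmissivity, storativity
rng = np.random.default_rng(4)
s_obs = theis(t, np.log10(true_T), np.log10(true_S)) + rng.normal(0, 1e-3, t.size)

popt, _ = curve_fit(theis, t, s_obs, p0=(-2.5, -3.5))
T_hat, S_hat = 10.0 ** popt[0], 10.0 ** popt[1]
```

The paper's finding can be read against this sketch: when the data come from a finite-diameter, partially penetrating well, fitting a model that omits those effects is what biases the storage estimate and degrades with noise.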
Data for Room Fire Model Comparisons
Peacock, Richard D.; Davis, Sanford; Babrauskas, Vytenis
1991-01-01
With the development of models to predict fire growth and spread in buildings, there has been a concomitant evolution in the measurement and analysis of experimental data in real-scale fires. This report presents the types of analyses that can be used to examine large-scale room fire test data to prepare the data for comparison with zone-based fire models. Five sets of experimental data which can be used to test the limits of a typical two-zone fire model are detailed. A standard set of nomenclature describing the geometry of the building and the quantities measured in each experiment is presented. Availability of ancillary data (such as smaller-scale test results) is included. These descriptions, along with the data (available in computer-readable form), should allow comparisons between the experiment and model predictions. The base of experimental data ranges in complexity from one-room tests with individual furniture items to a series of tests conducted in a multiple-story hotel equipped with a zoned smoke control system. PMID:28184121
NASA Technical Reports Server (NTRS)
Ott, Melanie N.; Macmurphy, Shawn; Friedberg, Patricia; Day, John H. (Technical Monitor)
2002-01-01
Presented here is the second set of testing conducted by the Technology Validation Laboratory for Photonics at NASA Goddard Space Flight Center on the 12-fiber optical ribbon cable with MTP array connector for space flight environments. In the first set of testing, the commercial 62.5/125 cable assembly was characterized using space flight parameters. The testing showed that the cable assembly would survive a typical space flight mission with the exception of a vacuum environment. Two enhancements were made to the existing technology to better suit the vacuum environment and the existing optoelectronics, and to increase the reliability of the assembly during vibration. The MTP assembly characterized here has a 100/140 commercial optical fiber and non-outgassing connector and cable components. The characterization of this enhanced fiber optic cable assembly involved vibration, thermal and radiation testing. The data and results of this characterization study are presented, including optical in-situ testing.
Dietz, Vance; Rota, Jennifer; Izurieta, Héctor; Carrasco, Peter; Bellini, William
2004-01-01
The Americas have set a goal of interrupting indigenous transmission of measles using a strategy developed by the Pan American Health Organization (PAHO). This strategy includes recommendations for vaccination activities to achieve and sustain high immunity in the population and is complemented by sensitive epidemiological surveillance systems developed to monitor illnesses characterized by febrile rash, and to provide effective virological and serological surveillance. A key component in ensuring the success of the programme has been a laboratory network comprising 22 national laboratories including reference centres. Commercially available indirect enzyme immunoassay kits (EIA) for immunoglobulin M (IgM)-class antibodies are currently being used throughout the region. However, because there are few or no true measles cases in the region, the positive predictive value of these diagnostic tests has decreased. False-positive results of IgM tests can also occur as a result of testing suspected measles cases with exanthemata caused by Parvovirus B19, rubella and Human herpesvirus 6, among others. In addition, as countries maintain high levels of vaccination activity and increased surveillance of rash and fever, the notification of febrile rash illness in recently vaccinated people can be anticipated. Thus, managers in the measles elimination programme must be prepared to address the interpretation of a positive result of a laboratory test for measles IgM when clinical and epidemiological data may indicate that the case is not measles. The interpretation of an IgM-positive test under different circumstances and the definition of a vaccine-related rash illness in a setting of greatly reduced, or absent, transmission of measles is discussed. PMID:15640921
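The decline in positive predictive value described above follows directly from Bayes' rule: as true measles becomes rare among tested rash cases, even a good IgM assay yields mostly false positives. A minimal sketch (the sensitivity and specificity values are illustrative assumptions, not figures from the PAHO network):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value of a diagnostic test via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# As true measles prevalence among tested rash cases falls,
# an unchanged IgM assay returns mostly false positives.
for prev in (0.10, 0.01, 0.001):
    print(f"prevalence {prev:>6}: PPV = {ppv(0.95, 0.98, prev):.3f}")
```

This is why a positive IgM result must be weighed against clinical and epidemiological evidence in a near-elimination setting.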
DOE Office of Scientific and Technical Information (OSTI.GOV)
KRUGER AA; MATLACK KS; GONG W
2011-12-29
This report documents melter and off-gas performance results obtained on the DM1200 HLW Pilot Melter during processing of AZ-101 HLW simulants. The tests reported herein are a subset of six tests from a larger series of tests described in the Test Plan for the work; results from the other tests have been reported separately. The solids contents of the melter feeds were based on the WTP baseline value for the solids content of the feeds from pretreatment, which changed during these tests from 20% to 15% undissolved solids, resulting in tests conducted at two feed solids contents. Based on the results of earlier tests with single-outlet 'J' bubblers, initial tests were performed with a total bubbling rate of 65 lpm. The first set of tests (Tests 1A-1E) addressed the effects of skewing this total air flow rate back and forth between the two installed bubblers in comparison to a fixed equal division of flow between them. The second set of tests (2A-2D) addressed the effects of bubbler depth. Subsequently, as the location, type and number of bubbling outlets were varied, the optimum bubbling rate for each was determined. A third (3A-3C) and fourth (8A-8C) set of tests evaluated the effects of alternative bubbler designs with two gas outlets per bubbler instead of one by placing four bubblers in positions simulating multiple-outlet bubblers. Data from the simulated multiple-outlet bubblers were used to design bubblers with two outlets for an additional set of tests (9A-9C). Test 9 was also used to determine the effect of small sugar additions to the feed on ruthenium volatility. Another set of tests (10A-10D) evaluated the effects on production rate of spiking the feed with chloride and sulfate. Variables held constant to the extent possible included melt temperature, plenum temperature, cold cap coverage, the waste simulant composition, and the target glass composition.
The feed rate was increased to the point that a constant, essentially complete, cold cap was achieved, which was used as an indicator of a maximized feed rate for each test. The first day of each test was used to build the cold cap and decrease the plenum temperature. The remainder of each test was split into two- to six-day segments, each with a different bubbling rate, bubbler orientation, or feed concentration of chloride and sulfur.
Oomen, Agnes G.; Bos, Peter M. J.; Fernandes, Teresa F.; Hund-Rinke, Kerstin; Boraschi, Diana; Byrne, Hugh J.; Aschberger, Karin; Gottardo, Stefania; von der Kammer, Frank; Kühnel, Dana; Hristozov, Danail; Marcomini, Antonio; Migliore, Lucia; Scott-Fordsmand, Janeck; Wick, Peter
2014-01-01
Bringing together topic-related European Union (EU)-funded projects, the so-called “NanoSafety Cluster” aims at identifying key areas for further research on risk assessment procedures for nanomaterials (NM). The outcome of NanoSafety Cluster Working Group 10, this commentary presents a vision for concern-driven integrated approaches for the (eco-)toxicological testing and assessment (IATA) of NM. Such approaches should start out by determining concerns, i.e., specific information needs for a given NM based on realistic exposure scenarios. Recognised concerns can be addressed in a set of tiers using standardised protocols for NM preparation and testing. Tier 1 includes determining physico-chemical properties, non-testing (e.g., structure–activity relationships) and evaluating existing data. In tier 2, a limited set of in vitro and in vivo tests is performed that can either indicate that the risk of the specific concern is sufficiently known or indicate the need for further testing, including details for such testing. Ecotoxicological testing begins with representative test organisms followed by complex test systems. After each tier, it is evaluated whether the information gained permits assessing the safety of the NM so that further testing can be waived. By effectively exploiting all available information, IATA allow accelerating the risk assessment process and reducing testing costs and animal use (in line with the 3Rs principle implemented in EU Directive 2010/63/EU). Combining material properties, exposure, biokinetics and hazard data, information gained with IATA can be used to recognise groups of NM based upon similar modes of action. Grouping of substances should in turn form an integral part of the IATA themselves. PMID:23641967
Combining Gene Signatures Improves Prediction of Breast Cancer Survival
Zhao, Xi; Naume, Bjørn; Langerød, Anita; Frigessi, Arnoldo; Kristensen, Vessela N.; Børresen-Dale, Anne-Lise; Lingjærde, Ole Christian
2011-01-01
Background Several gene sets for prediction of breast cancer survival have been derived from whole-genome mRNA expression profiles. Here, we develop a statistical framework to explore whether combination of the information from such sets may improve prediction of recurrence and breast cancer specific death in early-stage breast cancers. Microarray data from two clinically similar cohorts of breast cancer patients are used as training (n = 123) and test set (n = 81), respectively. Gene sets from eleven previously published gene signatures are included in the study. Principal Findings To investigate the relationship between breast cancer survival and gene expression on a particular gene set, a Cox proportional hazards model is applied using partial likelihood regression with an L2 penalty to avoid overfitting and using cross-validation to determine the penalty weight. The fitted models are applied to an independent test set to obtain a predicted risk for each individual and each gene set. Hierarchical clustering of the test individuals on the basis of the vector of predicted risks results in two clusters with distinct clinical characteristics in terms of the distribution of molecular subtypes, ER, PR status, TP53 mutation status and histological grade category, and associated with significantly different survival probabilities (recurrence: p = 0.005; breast cancer death: p = 0.014). Finally, principal components analysis of the gene signatures is used to derive combined predictors used to fit a new Cox model. This model classifies test individuals into two risk groups with distinct survival characteristics (recurrence: p = 0.003; breast cancer death: p = 0.001). The latter classifier outperforms all the individual gene signatures, as well as Cox models based on traditional clinical parameters and the Adjuvant! Online for survival prediction. Conclusion Combining the predictive strength of multiple gene signatures improves prediction of breast cancer survival. 
The presented methodology is broadly applicable to breast cancer risk assessment using any newly identified gene set. PMID:21423775
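The clustering step of this framework, hierarchical clustering of test patients on their vector of per-signature predicted risks, can be sketched with SciPy. The risk values below are synthetic stand-ins, not the study's fitted Cox model outputs:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
# Synthetic predicted-risk matrix: rows = test patients, columns = the
# eleven gene signatures; two groups with clearly different risk levels.
low_risk = rng.normal(0.0, 0.1, size=(40, 11))
high_risk = rng.normal(1.0, 0.1, size=(41, 11))
risks = np.vstack([low_risk, high_risk])

# Ward linkage on the per-patient risk vectors, cut into two clusters,
# analogous to the two risk groups compared by survival analysis.
labels = fcluster(linkage(risks, method="ward"), t=2, criterion="maxclust")
```

In the study itself the two resulting clusters would then be compared on recurrence and breast-cancer-specific death, e.g. with a log-rank test.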
Health maintenance facility: Dental equipment requirements
NASA Technical Reports Server (NTRS)
Young, John; Gosbee, John; Billica, Roger
1991-01-01
The objectives were to test the effectiveness of the Health Maintenance Facility (HMF) dental suction/particle containment system, which controls fluids and debris generated during simulated dental treatment, in microgravity; to test the effectiveness of fiber optic intraoral lighting systems in microgravity, while simulating dental treatment; and to evaluate the operation and function of off-the-shelf dental handheld instruments, namely a portable dental hand drill and temporary filling material, in microgravity. A description of test procedures, including test set-up, flight equipment, and the data acquisition system, is given.
Bidirectional power converter control electronics
NASA Technical Reports Server (NTRS)
Mildice, J. W.
1987-01-01
The object of this program was to design, build, test, and deliver a set of control electronics suitable for control of bidirectional resonant power processing equipment of the direct output type. The program is described, including the technical background, and results discussed. Even though the initial program tested only the logic outputs, the hardware was subsequently tested with high-power breadboard equipment, and in the testbed of NASA contract NAS3-24399. The completed equipment is now operating as part of the Space Station Power System Test Facility at NASA Lewis Research Center.
1989-06-01
Measurable goals and milestones are supported by action plans which include underlying assumptions, allocation of responsibility, and resources. [The remainder of this abstract is garbled by interleaved-column extraction; legible fragments concern environmental laws, the relationship between hardware and the environment, supporting integrated testing, managing test resources, evaluating systems, and understanding the concerns of the military services.]
Model verification of large structural systems
NASA Technical Reports Server (NTRS)
Lee, L. T.; Hasselman, T. K.
1977-01-01
A methodology was formulated, and a general computer code implemented for processing sinusoidal vibration test data to simultaneously make adjustments to a prior mathematical model of a large structural system, and resolve measured response data to obtain a set of orthogonal modes representative of the test model. The derivation of estimator equations is shown along with example problems. A method for improving the prior analytic model is included.
Deep Sequencing of 71 Candidate Genes to Characterize Variation Associated with Alcohol Dependence.
Clark, Shaunna L; McClay, Joseph L; Adkins, Daniel E; Kumar, Gaurav; Aberg, Karolina A; Nerella, Srilaxmi; Xie, Linying; Collins, Ann L; Crowley, James J; Quackenbush, Corey R; Hilliard, Christopher E; Shabalin, Andrey A; Vrieze, Scott I; Peterson, Roseann E; Copeland, William E; Silberg, Judy L; McGue, Matt; Maes, Hermine; Iacono, William G; Sullivan, Patrick F; Costello, Elizabeth J; van den Oord, Edwin J
2017-04-01
Previous genomewide association studies (GWASs) have identified a number of putative risk loci for alcohol dependence (AD). However, only a few loci have replicated, and these replicated variants explain only a small proportion of AD risk. Using an innovative approach, the goal of this study was to generate hypotheses about potentially causal variants for AD that can be explored further through functional studies. We employed targeted capture of 71 candidate loci and flanking regions followed by next-generation deep sequencing (mean coverage 78X) in 806 European Americans. Regions included in our targeted capture library were genes identified through published GWAS of alcohol, all human alcohol and aldehyde dehydrogenases, reward system genes including dopaminergic and opioid receptors, prioritized candidate genes based on previous associations, and genes involved in the absorption, distribution, metabolism, and excretion of drugs. We performed single-locus tests to determine if any single variant was associated with AD symptom count. Sets of variants that overlapped with biologically meaningful annotations were tested for association in aggregate. No single, common variant was significantly associated with AD in our study. We did, however, find evidence for association with several variant sets. Two variant sets were significant at the q-value < 0.10 level: a genic enhancer for ADHFE1 (p = 1.47 × 10⁻⁵; q = 0.019), an alcohol dehydrogenase, and ADORA1 (p = 5.29 × 10⁻⁵; q = 0.035), an adenosine receptor that belongs to a G-protein-coupled receptor gene family. To our knowledge, this is the first sequencing study of AD to examine variants in entire genes, including flanking and regulatory regions. We found that in addition to protein coding variant sets, regulatory variant sets may play a role in AD. From these findings, we have generated initial functional hypotheses about how these sets may influence AD. Copyright © 2017 by the Research Society on Alcoholism.
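The q-values reported for the variant-set tests are false-discovery-rate analogues of p-values. A minimal Benjamini–Hochberg step-up sketch (the study may have used a different FDR estimator; this illustrates the standard construction):

```python
def bh_qvalues(pvals):
    """Benjamini-Hochberg q-values: for each p-value, the smallest FDR
    threshold at which that test would be declared significant."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    qvals = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):          # step up from the largest p-value
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        qvals[i] = running_min
    return qvals

print(bh_qvalues([0.01, 0.04, 0.03, 0.5]))
```

A set is then called significant when its q-value falls below the chosen level, here q < 0.10.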
Park, S B; Kim, H; Yao, M; Ellis, R; Machtay, M; Sohn, J W
2012-06-01
To quantify the systematic error of a Deformable Image Registration (DIR) system and establish a Quality Assurance (QA) procedure. To address the shortfall of the landmark approach, which is only available at significant visible feature points, we adopted a Deformation Vector Map (DVM) comparison approach. We used two CT image sets (R and T image sets) taken for the same patient at different times and generated a DVM, which includes the DIR systematic error. The DVM was calculated using fine-tuned B-Spline DIR and an L-BFGS optimizer. By utilizing this DVM we generated an R' image set to eliminate the systematic error in the DVM. Thus, we have a truth data set, the R' and T image sets, and the truth DVM. To test a DIR system, we apply the R' and T image sets to the DIR system and compare the test DVM to the truth DVM. If there is no systematic error, they should be identical. We built a Deformation Error Histogram (DEH) for quantitative analysis. The test registration was performed with an in-house B-Spline DIR system using a stochastic gradient descent optimizer. Our example data set was generated with a head and neck patient case. We also tested CT to CBCT deformable registration. We found that skin regions which interface with the air have relatively larger errors, as do mobile joints such as the shoulders. Average errors for ROIs were as follows: CTV: 0.4 mm, brain stem: 1.4 mm, shoulders: 1.6 mm, and normal tissues: 0.7 mm. We succeeded in building the DEH approach to quantify the DVM uncertainty. Our data sets are available for testing other systems on our web page. Utilizing DEH, users can decide how much systematic error they would accept. DEH and our data can be a tool for an AAPM task group to compose a DIR system QA guideline. This project is partially supported by the Agency for Healthcare Research and Quality (AHRQ) grant 1R18HS017424-01A2. © 2012 American Association of Physicists in Medicine.
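A deformation error histogram of the kind described can be sketched with NumPy: the per-voxel error is the magnitude of the vector difference between the test and truth DVMs, binned into a histogram. Array shapes and bin edges below are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def deformation_error_histogram(dvm_test, dvm_truth, bin_width_mm=0.5, max_mm=5.0):
    """Bin the per-voxel magnitude of (test - truth) deformation vectors.

    dvm_test, dvm_truth: arrays of shape (nx, ny, nz, 3), components in mm.
    Returns (counts, bin_edges) in the style of numpy.histogram.
    """
    error_mm = np.linalg.norm(dvm_test - dvm_truth, axis=-1)
    edges = np.arange(0.0, max_mm + bin_width_mm, bin_width_mm)
    counts, edges = np.histogram(error_mm, bins=edges)
    return counts, edges

# A perfect registration puts every voxel in the first (zero-error) bin.
truth = np.zeros((4, 4, 4, 3))
counts, edges = deformation_error_histogram(truth, truth)
```

Summary statistics per ROI (mean error within a contour mask) follow by restricting `error_mm` to the ROI's voxels before averaging.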
Grützmacher, G; Bartel, H; Althoff, H W; Clemen, S
2007-03-01
A set-up for experiments in the flow-through mode was constructed in order to test the efficacy of substances used for disinfecting water during drinking water treatment. A flow-through mode - in contrast to experiments under stationary conditions (so-called batch experiments) - was chosen, because this experimental design allows experiments to be carried out under constant conditions for an extended time (up to one week) and because efficacy testing is possible repeatedly, simultaneously and under exactly the same conditions for short (about 0.5 min) and also longer (about 47 min) contact times. With this experimental design the effect of biofilms along the inner pipe surfaces can be included in the observations. The construction of the experimental set-up is based on experience with laboratory flow-through systems that were installed by the UBA's drinking water department (formerly the Water, Soil and Air Hygiene Institute, WaBoLu) for testing disinfection with chlorine. In the first step, a test pipe for the simulation of a water works situation was installed. Water of different qualities can be mixed in large volumes beforehand so that the experimental procedure can be run with constant water quality for a minimum of one week. The kinetics of the disinfection reaction can be observed by extracting samples from eight sampling ports situated along the test pipe. In order to assign exact residence times to each of the sampling ports, tracer experiments were performed prior to testing disinfectant efficacy. This paper gives the technical details of the experimental set-up and presents the results of the tracer experiments to provide an introduction with respect to its potential.
Effect of wheelchair design on wheeled mobility and propulsion efficiency in less-resourced settings
2017-01-01
Background Wheelchair research includes both qualitative and quantitative approaches, primarily focuses on functionality and skill performance and is often limited to short testing periods. This is the first study to use the combination of a performance test (i.e. wheelchair propulsion test) and a multiple-day mobility assessment to evaluate wheelchair designs in rural areas of a developing country. Objectives Test the feasibility of using wheel-mounted accelerometers to document bouts of wheeled mobility data in rural settings and use these data to compare how patients respond to different wheelchair designs. Methods A quasi-experimental, pre- and post-test design was used to test the differences between locally manufactured wheelchairs (push rim and tricycle) and an imported intervention product (dual-lever propulsion wheelchair). A one-way repeated measures analysis of variance was used to interpret propulsion and wheeled mobility data. Results There were no statistical differences in bouts of mobility between the locally manufactured and intervention product, which was explained by high amounts of variability within the data. With regard to the propulsion test, push rim users were significantly more efficient when using the intervention product compared with tricycle users. Conclusion Use of wheel-mounted accelerometers as a means to test user mobility proved to be a feasible methodology in rural settings. Variability in wheeled mobility data could be decreased with longer acclimatisation periods. The data suggest that push rim users experience an easier transition to a dual-lever propulsion system. PMID:28936416
Stanfill, Christopher J; Jensen, Jody L
2017-01-01
Wheelchair research includes both qualitative and quantitative approaches, primarily focuses on functionality and skill performance and is often limited to short testing periods. This is the first study to use the combination of a performance test (i.e. wheelchair propulsion test) and a multiple-day mobility assessment to evaluate wheelchair designs in rural areas of a developing country. Test the feasibility of using wheel-mounted accelerometers to document bouts of wheeled mobility data in rural settings and use these data to compare how patients respond to different wheelchair designs. A quasi-experimental, pre- and post-test design was used to test the differences between locally manufactured wheelchairs (push rim and tricycle) and an imported intervention product (dual-lever propulsion wheelchair). A one-way repeated measures analysis of variance was used to interpret propulsion and wheeled mobility data. There were no statistical differences in bouts of mobility between the locally manufactured and intervention product, which was explained by high amounts of variability within the data. With regard to the propulsion test, push rim users were significantly more efficient when using the intervention product compared with tricycle users. Use of wheel-mounted accelerometers as a means to test user mobility proved to be a feasible methodology in rural settings. Variability in wheeled mobility data could be decreased with longer acclimatisation periods. The data suggest that push rim users experience an easier transition to a dual-lever propulsion system.
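The analysis named above, a one-way repeated-measures ANOVA, can be sketched from its sums of squares, with subjects as blocks measured under each wheelchair condition. The data below are synthetic, not the study's measurements:

```python
import numpy as np
from scipy.stats import f as f_dist

def rm_anova_oneway(Y):
    """One-way repeated-measures ANOVA.

    Y: array of shape (subjects, conditions); each subject is measured
    once under every condition (e.g. each wheelchair design).
    Returns (F, p) for the condition effect, with subjects as blocks.
    """
    n, k = Y.shape
    grand = Y.mean()
    ss_cond = n * ((Y.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((Y.mean(axis=1) - grand) ** 2).sum()
    ss_err = ((Y - grand) ** 2).sum() - ss_cond - ss_subj
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df_cond) / (ss_err / df_err)
    return F, f_dist.sf(F, df_cond, df_err)

rng = np.random.default_rng(1)
baseline = rng.normal(10.0, 2.0, size=(12, 1))   # between-subject differences
effect = np.array([0.0, 1.5])                    # condition (design) shift
Y = baseline + effect + rng.normal(0.0, 0.3, size=(12, 2))
F, p = rm_anova_oneway(Y)
```

Blocking on subjects removes the large between-subject variance, which is why this design suits small samples with high individual variability such as the mobility data described here.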
SU-E-T-446: Group-Sparsity Based Angle Generation Method for Beam Angle Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, H
2015-06-15
Purpose: This work is to develop an effective algorithm for beam angle optimization (BAO), with the emphasis on enabling further improvement from existing treatment-dependent templates based on clinical knowledge and experience. Methods: The proposed BAO algorithm utilizes a priori beam angle templates as the initial guess and iteratively generates angular updates for this initial set, namely the angle generation method, with improved dose conformality as quantitatively measured by the objective function. During each iteration, we select “the test angle” in the initial set, and use group-sparsity based fluence map optimization to identify “the candidate angle” for updating “the test angle”: all the angles in the initial set except “the test angle”, namely “the fixed set”, are set free, i.e., with no group-sparsity penalty, and the rest of the angles, including “the test angle”, are in “the working set”. “The candidate angle” is then selected as the angle in “the working set” with locally maximal group sparsity that gives the smallest objective function value, and it replaces “the test angle” if “the fixed set” with “the candidate angle” yields a smaller objective function value when solving the standard fluence map optimization (with no group-sparsity regularization). Similarly, the other angles in the initial set are in turn selected as “the test angle” for angular updates, and this chain of updates is iterated until no further new angular update is identified for a full loop. Results: Tests using the MGH public prostate dataset demonstrated the effectiveness of the proposed BAO algorithm. For example, the optimized angular set from the proposed BAO algorithm was better than the MGH template. Conclusion: A new BAO algorithm is proposed based on the angle generation method via group sparsity, with improved dose conformality relative to the given template.
Hao Gao was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
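The group-sparsity penalty underlying the angle generation step treats each candidate beam angle's fluence map as one group; its proximal operator is block soft-thresholding, which drives unhelpful angles' entire fluence blocks exactly to zero. A generic sketch, not the paper's solver:

```python
import numpy as np

def prox_group_l2(fluence, groups, tau):
    """Proximal operator of tau * sum_g ||x_g||_2 (block soft-thresholding).

    fluence: flat vector of beamlet intensities; groups: index arrays,
    one per candidate beam angle. A group whose whole fluence block has
    norm <= tau is zeroed, effectively deselecting that beam angle.
    """
    out = fluence.copy()
    for g in groups:
        norm = np.linalg.norm(fluence[g])
        out[g] = 0.0 if norm <= tau else (1.0 - tau / norm) * fluence[g]
    return out

x = np.array([3.0, 4.0, 0.1, 0.1])            # angle A strong, angle B weak
groups = [np.array([0, 1]), np.array([2, 3])]  # beamlet indices per angle
shrunk = prox_group_l2(x, groups, tau=1.0)
```

Within a proximal-gradient fluence map optimization, the surviving nonzero groups are the angles with "locally maximal group sparsity" from which a candidate angle can be drawn.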
Computational Test Cases for a Rectangular Supercritical Wing Undergoing Pitching Oscillations
NASA Technical Reports Server (NTRS)
Bennett, Robert M.; Walker, Charlotte E.
1999-01-01
Proposed computational test cases have been selected from the data set for a rectangular wing of panel aspect ratio two with a twelve-percent-thick supercritical airfoil section that was tested in the NASA Langley Transonic Dynamics Tunnel. The test cases include parametric variation of static angle of attack, pitching oscillation frequency, and Mach numbers from subsonic to transonic with strong shocks. Tables and plots of the measured pressures are presented for each case. This report provides an early release of test cases that have been proposed for a document that supplements the cases presented in AGARD Report 702.
Tetherless ergonomics workstation to assess nurses' physical workload in a clinical setting.
Smith, Warren D; Nave, Michael E; Hreljac, Alan P
2011-01-01
Nurses are at risk of physical injury when moving immobile patients. This paper describes the development and testing of a tetherless ergonomics workstation that is suitable for studying nurses' physical workload in a clinical setting. The workstation uses wearable sensors to record multiple channels of body orientation and muscle activity and wirelessly transmits them to a base station laptop computer for display, storage, and analysis. In preparation for use in a clinical setting, the workstation was tested in a laboratory equipped for multi-camera video motion analysis. The testing included a pilot study of the effect of bed height on student nurses' physical workload while they repositioned a volunteer posing as a bedridden patient toward the head of the bed. Each nurse subject chose a preferred bed height, and data were recorded, in randomized order, with the bed at this height, at 0.1 m below this height, and at 0.1 m above this height. The testing showed that the body orientation recordings made by the wearable sensors agreed closely with those obtained from the video motion analysis system. The pilot study showed the following trends: As the bed height was raised, the nurses' trunk flexion at both thoracic and lumbar sites and lumbar muscle effort decreased, whereas trapezius and deltoid muscle effort increased. These trends will be evaluated by further studies of practicing nurses in the clinical setting.
Saturno, P J; Martinez-Nicolas, I; Robles-Garcia, I S; López-Soriano, F; Angel-García, D
2015-01-01
Pain is among the most important symptoms in terms of prevalence and cause of distress for cancer patients and their families. However, there is a lack of clearly defined measures of quality pain management to identify problems and monitor changes in improvement initiatives. We built a comprehensive set of evidence-based indicators following a four-step model: (1) review and systematization of existing guidelines to list evidence-based recommendations; (2) review and systematization of existing indicators matching the recommendations; (3) development of new indicators to complete a set of measures for the identified recommendations; and (4) pilot test (in hospital and primary care settings) for feasibility, reliability (kappa), and usefulness for the identification of quality problems using the lot quality acceptance sampling (LQAS) method and estimates of compliance. Twenty-two indicators were eventually pilot tested. Seventeen were feasible in hospitals and 12 in all settings. Feasibility barriers included difficulties in identifying target patients, deficient clinical records and low prevalence of cases for some indicators. Reliability was mostly very good or excellent (k > 0.8). Four indicators, all of them related to medication and prevention of side effects, had acceptable compliance at 75%/40% LQAS level. Other important medication-related indicators (i.e., adjustment to pain intensity, prescription for breakthrough pain) and indicators concerning patient-centred care (i.e., attention to psychological distress and educational needs) had very low compliance, highlighting specific quality gaps. A set of good practice indicators has been built and pilot tested as a feasible, reliable and useful quality monitoring tool, and underscoring particular and important areas for improvement. © 2014 European Pain Federation - EFIC®
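The 75%/40% LQAS judgment above compares each indicator against an upper (acceptable, 75%) and lower (unacceptable, 40%) compliance standard using a small binomial sample. A sketch of the operating characteristics, where the sample size and decision value are illustrative assumptions rather than the study's plan:

```python
from scipy.stats import binom

def lqas_pass_prob(n, d, p):
    """Probability a lot passes: at least d of n sampled records are
    compliant when true compliance is p."""
    return binom.sf(d - 1, n, p)

# Illustrative plan: sample n = 13 records, pass if >= 8 are compliant.
n, d = 13, 8
alpha = 1.0 - lqas_pass_prob(n, d, 0.75)   # risk of failing a good (75%) provider
beta = lqas_pass_prob(n, d, 0.40)          # risk of passing a poor (40%) provider
```

Choosing (n, d) so that both error risks are tolerably small is what makes LQAS usable as a quick screening tool for quality problems at the provider level.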
Day 1 for the Integrated Multi-Satellite Retrievals for GPM (IMERG) Data Sets
NASA Astrophysics Data System (ADS)
Huffman, G. J.; Bolvin, D. T.; Braithwaite, D.; Hsu, K. L.; Joyce, R.; Kidd, C.; Sorooshian, S.; Xie, P.
2014-12-01
The Integrated Multi-satellitE Retrievals for GPM (IMERG) is designed to compute the best time series of (nearly) global precipitation from "all" precipitation-relevant satellites and global surface precipitation gauge analyses. IMERG was developed to use GPM Core Observatory data as a reference for the international constellation of satellites of opportunity that constitute the GPM virtual constellation. Computationally, IMERG is a unified U.S. algorithm drawing on strengths in the three contributing groups, whose previous work includes: 1) the TRMM Multi-satellite Precipitation Analysis (TMPA); 2) the CPC Morphing algorithm with Kalman Filtering (K-CMORPH); and 3) the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks using a Cloud Classification System (PERSIANN-CCS). We review the IMERG design, development, testing, and current status. IMERG provides 0.1° × 0.1° half-hourly data, and will be run at multiple latencies, providing successively more accurate estimates 4 hours, 8 hours, and 2 months after observation time. In Day 1 the spatial extent is 60°N-S, for the period March 2014 to the present. In subsequent reprocessing the data will extend to fully global, covering the period 1998 to the present. Both the set of input retrievals and the IMERG system are substantially different from those used in previous U.S. products. The input passive microwave data are all being produced with GPROF2014, which is substantially upgraded compared to previous versions. For the first time, this includes microwave sounders. Accordingly, there is a strong need to carefully check the initial test data sets for performance. IMERG output will be illustrated using pre-operational test data, including the variety of supporting fields, such as the merged-microwave and infrared estimates, and the precipitation type. Finally, we will summarize the expected release of various output products, and the subsequent reprocessing sequence.
Finding consensus on frailty assessment in acute care through Delphi method
2016-01-01
Objective We seek to address gaps in knowledge and agreement around optimal frailty assessment in the acute medical care setting. Frailty is a common term describing older persons who are at increased risk of developing multimorbidity, disability, institutionalisation and death. Consensus has not been reached on the practical implementation of this concept to assess clinically and manage older persons in the acute care setting. Design Modified Delphi, via electronic questionnaire. Questions included ranking items that best recognise frailty, optimal timing, location and contextual elements of a successful tool. Intraclass correlation coefficients for overall levels of agreement, with consensus and stability tested by 2-way ANOVA with absolute agreement and Fisher's exact test. Participants A panel of national experts (academics, front-line clinicians and specialist charities) were invited to electronic correspondence. Results Variables reflecting accumulated deficit and high resource usage were perceived by participants as the most useful indicators of frailty in the acute care setting. The Acute Medical Unit and Care of the older Persons Ward were perceived as optimum settings for frailty assessment. ‘Clinically meaningful and relevant’, ‘simple (easy to use)’ and ‘accessible by multidisciplinary team’ were perceived as characteristics of a successful frailty assessment tool in the acute care setting. No agreement was reached on optimal timing, number of variables and organisational structures. Conclusions This study is a first step in developing consensus for a clinically relevant frailty assessment model for the acute care setting, providing content validation and illuminating contextual requirements. Testing on clinical data sets is a research priority. PMID:27742633
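Stability between Delphi rounds, tested here with Fisher's exact test, can be sketched as a 2×2 comparison of panellists endorsing an item in consecutive rounds. The counts below are invented for illustration, not the study's responses:

```python
from scipy.stats import fisher_exact

# Rows: round 2 vs round 3; columns: endorse vs not endorse.
# Similar endorsement across rounds -> large p -> responses are stable.
stable = [[14, 6], [13, 7]]
_, p_stable = fisher_exact(stable)

# A marked shift between rounds -> small p -> consensus not yet stable.
shifted = [[18, 2], [6, 14]]
_, p_shifted = fisher_exact(shifted)
```

In a modified Delphi, items showing both adequate agreement (e.g. by intraclass correlation) and round-to-round stability are retained; unstable items go back to the panel.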
Liquid hydrogen and liquid oxygen feedline passive recirculation analysis
NASA Astrophysics Data System (ADS)
Holt, Kimberly Ann; Cleary, Nicole L.; Nichols, Andrew J.; Perry, Gretchen L. E.
The primary goal of the National Launch System (NLS) program was to design an operationally efficient, highly reliable vehicle with minimal recurring launch costs. To achieve this goal, trade studies of key main propulsion subsystems were performed to specify vehicle design requirements. These requirements include the use of passive recirculation to thermally condition the liquid hydrogen (LH2) and liquid oxygen (LO2) propellant feed systems and Space Transportation Main Engine (STME) fuel pumps. Rockwell International (RI) proposed a joint independent research and development (JIRAD) program with Marshall Space Flight Center (MSFC) to study the LH2 feed system passive recirculation concept. The testing was started in July 1992 and completed in November 1992. Vertical and sloped feedline designs were used. An engine simulator was attached at the bottom of the feedline. This simulator had strip heaters that were set to equal the corresponding heat input from different engines. A computer program is currently being used to analyze the passive recirculation concept in the LH2 vertical feedline tests. Four tests, where the heater setting is the independent variable, were chosen. While the JIRAD with RI was underway, General Dynamics Space Systems (GDSS) proposed a JIRAD with MSFC to explore passive recirculation in the LO2 feed system. Liquid nitrogen (LN2) is being used instead of LO2 for safety and economic concerns. To date, three sets of calibration tests have been completed on the sloped LN2 test article. The environmental heat was calculated from the calibration tests in which the strip heaters were turned off. During the LH2 testing, the environmental heat was assumed to be constant. Therefore, the total heat was equal to the environmental heat flux plus the heater input. However, the first two sets of LN2 calibration tests have shown that the environmental heat flux varies with heater input. 
A Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA/FLUINT) model is currently being built to determine if this variation in environmental heat is due to a change in the wall temperature.
NASA Technical Reports Server (NTRS)
Skavdahl, H.; Patterson, D. H.
1972-01-01
The initial flight test phase of the modified C-8A airplane was conducted. The primary objective of the testing was to establish the basic airworthiness of the research vehicle. This included verification of the structural design and evaluation of the aircraft's systems. Only a minimum amount of performance testing was scheduled; this has been used to provide a preliminary indication of the airplane's performance and flight characteristics for future flight planning. The testing included flutter and loads investigations up to the maximum design speed. The operational characteristics of all systems were assessed including hydraulics, environmental control system, air ducts, the vectoring conical nozzles, and the stability augmentation system (SAS). Approaches to stall were made at three primary flap settings: up, 30 deg and 65 deg, but full stalls were not scheduled. Minimum control speeds and maneuver margins were checked. All takeoffs and landings were conventional, and STOL performance was not scheduled during this phase of the evaluation.
NASA Technical Reports Server (NTRS)
Houser, J.; Johnson, L. J.; Oiye, M.; Runciman, W.
1972-01-01
Experimental aerodynamic investigations were made in a transonic wind tunnel on a 1/150-scale model of the Boeing H-32 space shuttle booster configuration. The purpose of the test was: (1) to verify the transonic reentry corridor at high angles of attack; (2) to determine the transonic aerodynamic characteristics; and (3) to determine the subsonic aerodynamic characteristics at low angles of attack. Test variables included configuration buildup, horizontal stabilizer settings of 0 and -20 deg, elevator deflections of 0 and -30 deg, and wing spoiler settings of 60 deg.
Inverted drop testing and neck injury potential.
Forrest, Stephen; Herbst, Brian; Meyer, Steve; Sances, Anthony; Kumaresan, Srirangam
2003-01-01
Inverted drop testing of vehicles is a methodology that has long been used by the automotive industry and researchers to test roof integrity and is currently being considered by the National Highway Traffic Safety Administration as a roof strength test. In 1990 a study was reported which involved 8 dolly rollover tests and 5 inverted drop tests. These studies were conducted with restrained Hybrid III instrumented Anthropometric Test Devices (ATDs) in production and rollcaged vehicles to investigate the relationship between roof strength and occupant injury potential. The 5 inverted drop tests included in the study provided a methodology producing "repeatable roof impacts," exposing the ATDs to an impact environment similar to that seen in the dolly rollover tests. The authors conducted two inverted drop test sets as part of an investigation of two real-world rollover accidents. Hybrid III ATDs with instrumented heads and necks were used in each test. Both test sets confirm that reduction of roof intrusion and increased headroom can significantly enhance occupant protection. In both test pairs, the neck force of the dummy in the vehicle with less crush and more survival space was significantly lower. Reduced roof crush and dynamic preservation of the occupant survival space resulted in only minor occupant contact and minimal occupant loading, establishing a clear causal relationship between roof crush and neck injuries.
Direct-to-consumer genetic testing: an assessment of genetic counselors' knowledge and beliefs
Hock, Kathryn T.; Christensen, Kurt D.; Yashar, Beverly M.; Roberts, J. Scott; Gollust, Sarah E.; Uhlmann, Wendy R.
2013-01-01
Purpose Direct-to-consumer genetic testing is a new means of obtaining genetic testing outside of a traditional clinical setting. This study assesses genetic counselors’ experience, knowledge, and beliefs regarding direct-to-consumer genetic testing for tests that would currently be offered in genetics clinics. Methods Members of the National Society of Genetic Counselors completed a web-administered survey in February 2008. Results The response rate was 36%; the final data analysis included 312 respondents. Eighty-three percent of respondents had received two or fewer inquiries about direct-to-consumer genetic testing, and 14% had received requests for test interpretation or discussion. Respondents believed that genetic counselors have a professional obligation to be knowledgeable about direct-to-consumer genetic testing (55%) and to interpret results (48%). Fifty-one percent of respondents thought genetic testing should be limited to a clinical setting; 56% agreed direct-to-consumer genetic testing is acceptable if genetic counseling is provided. More than 70% of respondents would definitely or possibly consider direct-to-consumer testing for patients who (1) have concerns about genetic discrimination, (2) want anonymous testing, or (3) have geographic constraints. Conclusions The results indicate that genetic counselors have limited patient experience with direct-to-consumer genetic testing and are cautiously considering whether and under what circumstances this approach should be used. PMID:21233722
Assessing quality of care for migraineurs: a model health plan measurement set.
Leas, Brian F; Gagne, Joshua J; Goldfarb, Neil I; Rupnow, Marcia F T; Silberstein, Stephen
2008-08-01
Quality of care measures are increasingly important to health plans, purchasers, physicians, and patients. Appropriate measures can be used to assess quality and evaluate improvement and are necessary components of pay-for-performance programs. Despite the broad scope of activity in the development of quality measures, migraine headache has received little attention. Given the enormous costs associated with migraine, especially in terms of lost productivity and preventable health care utilization, health plans could gain from a structured approach to measuring the quality of migraine care their beneficiaries receive. A potential migraine quality measurement set was developed through a review of migraine care literature and guidelines, interviews with leaders in migraine care, health care purchasing, and managed care, and the assembly of an advisory board. The board discussed candidate measures and established consensus on a testable measurement set. Twenty measures were developed, focused primarily on diagnosis and utilization. Areas of utilization include physician visits, emergency department visits, hospitalizations, and imaging. Use of both acute and preventive medications is included. More complex aspects of migraine care are also addressed, including triptan overuse, the relationship between acute and preventive medications, and follow-up after emergency department visits. The measures are currently being tested in health plans to assess their feasibility and value. A compelling case can be made for the development of migraine-specific quality measures for health plans. This effort to develop and test a starter set of measures should lead to new and innovative efforts to assess and improve quality of care for migraineurs.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-14
... Street SW., Room 8230, Washington, DC 20410. FOR FURTHER INFORMATION CONTACT: Elizabeth Rudd, Ph.D... included a $10 million set-aside for a demonstration program ``to test the effectiveness of strategies to...
28 CFR 549.10 - Purpose and scope.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Judicial Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INSTITUTIONAL MANAGEMENT MEDICAL SERVICES Infectious Disease Management § 549.10 Purpose and scope. The Bureau will manage infectious diseases in the confined environment of a correctional setting through a comprehensive approach which includes testing...
DOT National Transportation Integrated Search
1996-04-01
This report also describes the procedures for direct estimation of intersection capacity with simulation, including a set of rigorous statistical tests for simulation parameter calibration from field data.
Syndromic Surveillance: Adapting Innovations to Developing Settings
2008-03-01
outbreak investigation was initiated, including rectal swab sampling of patients with watery diarrhea. Culture tests identified Vibrio cholerae in 44...
A global database of nitrogen and phosphorus excretion rates of aquatic animals
Vanni, Michael J.; McIntyre, Peter B.; Allen, Dennis; ...
2017-03-06
Though their importance varies greatly among species and ecosystems, animals can be important in modulating ecosystem-level nutrient cycling. Nutrient cycling rates of individual animals represent valuable data for testing the predictions of important frameworks such as the Metabolic Theory of Ecology (MTE) and ecological stoichiometry (ES). They also represent an important set of functional traits that may reflect both environmental and phylogenetic influences. Over the past two decades, studies of animal-mediated nutrient cycling have increased dramatically, especially in aquatic ecosystems. Here we present a global compilation of aquatic animal nutrient excretion rates. The dataset includes 10,534 observations from freshwater and marine animals of N and/or P excretion rates. Furthermore, these observations represent 491 species, including most aquatic phyla. Coverage varies greatly among phyla and other taxonomic levels. The dataset includes information on animal body size, ambient temperature, taxonomic affiliations, and animal body N:P. We used this data set to test predictions of MTE and ES, as described in Vanni and McIntyre (2016; Ecology DOI: 10.1002/ecy.1582).
Determination of HART I Blade Structural Properties by Laboratory Testing
NASA Technical Reports Server (NTRS)
Jung, Sung N.; Lau, Benton H.
2012-01-01
The structural properties of Higher harmonic Aeroacoustic Rotor Test (HART I) blades were measured using the original set of blades tested in the German-Dutch Wind Tunnel (DNW) in 1994. The measurements include bending and torsion stiffness, geometric offsets, and mass and inertia properties of the blade. The measured properties were compared to the estimated values obtained initially from the blade manufacturer. The previously estimated blade properties showed consistently higher stiffness, up to 30 percent for the flap bending in the blade inboard root section.
Effect of sodium fluorosilicate on the properties of Portland cement.
Appelbaum, Keith S; Stewart, Jeffrey T; Hartwell, Gary R
2012-07-01
Mineral trioxide aggregate (MTA) satisfies most of the ideal properties of a surgical root-end filling and perforation repair material. It has been found to be nontoxic, noncarcinogenic, nongenotoxic, biocompatible, insoluble in tissue fluids, and dimensionally stable, and it promotes cementogenesis. Its major disadvantages are a long setting time and difficult handling characteristics during placement when performing endodontic procedures. MTA is similar to Portland cement (PC) in both composition and properties. The cement industry has used many additives to decrease the setting time of PC; proprietary formulas of PC additives include fluorosilicates, which decrease setting time. The purpose of this pilot study was to determine whether sodium fluorosilicate (SF) could be used to decrease the setting time of PC without adversely affecting its compressive strength. To determine the most appropriate amount of SF to add, 1%, 2%, 3%, 4%, 5%, 10%, and 15% SF by weight were added to PC and compared with PC without SF. Setting times were measured using a Gilmore needle, and compressive strengths were determined using a materials testing system at 24 hours and 21 days. Statistical analysis was performed using one-way analysis of variance with the post hoc Games-Howell test. None of the percentages of SF changed the setting time of PC (P > .05), and the SF additives decreased the compressive strength of PC (P < .001). Under the conditions of this study, SF neither decreases the setting time nor preserves the compressive strength of PC, and as such does not warrant further testing with MTA.
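The group comparison described above (one-way analysis of variance across SF percentages) can be sketched as follows; the setting-time measurements below are hypothetical placeholders, not the study's data, and the Games-Howell post hoc step is omitted.

```python
# Illustrative only: hypothetical setting times (minutes) for Portland cement
# with 0%, 5%, and 10% sodium fluorosilicate by weight.
from scipy.stats import f_oneway

pc_control = [170, 165, 172, 168]
pc_sf5 = [169, 171, 166, 173]
pc_sf10 = [167, 170, 174, 169]

# One-way ANOVA tests whether any group mean differs from the others.
f_stat, p_value = f_oneway(pc_control, pc_sf5, pc_sf10)

# p > .05 here would mirror the study's finding that SF did not change setting time.
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```

A significant ANOVA result would normally be followed by a post hoc pairwise test (Games-Howell in the study) to identify which percentages differ.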
Lv, Yufeng; Wei, Wenhao; Huang, Zhong; Chen, Zhichao; Fang, Yuan; Pan, Lili; Han, Xueqiong; Xu, Zihai
2018-06-20
The aim of this study was to develop a novel long non-coding RNA (lncRNA) expression signature to accurately predict early recurrence in patients with hepatocellular carcinoma (HCC) after curative resection. Using expression profiles downloaded from The Cancer Genome Atlas database, we identified multiple lncRNAs differentially expressed between the early recurrence (ER) and non-early recurrence (non-ER) groups of HCC. Least absolute shrinkage and selection operator (LASSO) logistic regression models were used to develop a lncRNA-based classifier for predicting ER in the training set. An independent test set was used to validate the predictive value of this classifier. Furthermore, a co-expression network based on these lncRNAs and their highly related genes was constructed, and Gene Ontology and Kyoto Encyclopedia of Genes and Genomes pathway enrichment analyses of genes in the network were performed. We identified 10 differentially expressed lncRNAs, including 3 that were upregulated and 7 that were downregulated in the ER group. The lncRNA-based classifier was constructed from 7 lncRNAs (AL035661.1, PART1, AC011632.1, AC109588.1, AL365361.1, LINC00861 and LINC02084); its accuracy was 0.83 in the training set, 0.87 in the test set and 0.84 in the total set. ROC curve analysis showed an AUROC of 0.741 in the training set, 0.824 in the test set and 0.765 in the total set. A functional enrichment analysis suggested that genes highly related to 4 of the lncRNAs were involved in the immune system. This 7-lncRNA expression profile can effectively predict early recurrence after surgical resection for HCC.
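The modeling approach described, an L1-penalized (LASSO) logistic regression evaluated by accuracy and AUROC on a held-out test set, can be sketched as below. Everything here (feature count, simulated expression values, parameter choices) is a hypothetical stand-in, not the study's actual TCGA pipeline.

```python
# Minimal sketch of a LASSO logistic-regression classifier with train/test
# evaluation; data are random placeholders for lncRNA expression profiles.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # 10 candidate lncRNA expression levels
# Synthetic recurrence label driven by two informative features plus noise.
y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# The L1 penalty drives uninformative coefficients to exactly zero,
# performing feature selection and classification in one model.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X_train, y_train)

acc = accuracy_score(y_test, clf.predict(X_test))
auroc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
n_selected = int((clf.coef_ != 0).sum())
print(f"accuracy={acc:.2f}, AUROC={auroc:.2f}, features retained={n_selected}")
```

In practice the penalty strength `C` would be chosen by cross-validation on the training set, as LASSO-based signature studies typically do.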
WND-CHARM: Multi-purpose image classification using compound image transforms
Orlov, Nikita; Shamir, Lior; Macura, Tomasz; Johnston, Josiah; Eckley, D. Mark; Goldberg, Ilya G.
2008-01-01
We describe a multi-purpose image classifier that can be applied to a wide variety of image classification tasks without modifications or fine-tuning, and yet provide classification accuracy comparable to state-of-the-art task-specific image classifiers. The proposed image classifier first extracts a large set of 1025 image features including polynomial decompositions, high contrast features, pixel statistics, and textures. These features are computed on the raw image, transforms of the image, and transforms of transforms of the image. The feature values are then used to classify test images into a set of pre-defined image classes. This classifier was tested on several different problems including biological image classification and face recognition. Although we cannot make a claim of universality, our experimental results show that this classifier performs as well or better than classifiers developed specifically for these image classification tasks. Our classifier’s high performance on a variety of classification problems is attributed to (i) a large set of features extracted from images; and (ii) an effective feature selection and weighting algorithm sensitive to specific image classification problems. The algorithms are available for free download from openmicroscopy.org. PMID:18958301
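The two key ideas above, scoring each feature by how well it discriminates the classes and then classifying by a feature-weighted nearest-neighbor distance, can be illustrated with a minimal sketch. This is not the released WND-CHARM code; the Fisher-score weighting and synthetic data below are simplifying assumptions.

```python
# Sketch of feature weighting plus weighted nearest-neighbor classification.
import numpy as np

def fisher_scores(X, y):
    """Per-feature ratio of between-class to within-class variance."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = sum((X[y == c].mean(axis=0) - overall) ** 2 for c in classes)
    within = sum(X[y == c].var(axis=0) for c in classes) + 1e-12
    return between / within

def classify(X_train, y_train, x, weights):
    """Label of the training sample nearest in feature-weighted distance."""
    d = np.sqrt(((X_train - x) ** 2 * weights).sum(axis=1))
    return y_train[np.argmin(d)]

rng = np.random.default_rng(1)
# Feature 0 separates the classes; features 1-2 are pure noise.
X_train = rng.normal(size=(40, 3))
y_train = (X_train[:, 0] > 0).astype(int)

w = fisher_scores(X_train, y_train)  # noise features get near-zero weight
x_new = np.array([1.5, 0.0, 0.0])
pred = classify(X_train, y_train, x_new, w)
print(pred)
```

Because the noise features receive near-zero weight, the decision is dominated by the discriminative feature, which is the point of weighting in the first place.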
Machine learning-based coreference resolution of concepts in clinical documents
Ware, Henry; Mullett, Charles J; El-Rawas, Oussama
2012-01-01
Objective Coreference resolution of concepts, although a very active area in the natural language processing community, has not yet been widely applied to clinical documents. Accordingly, the 2011 i2b2 competition focusing on this area is a timely and useful challenge. The objective of this research was to collate coreferent chains of concepts from a corpus of clinical documents. These concepts are in the categories of person, problems, treatments, and tests. Design A machine learning approach based on graphical models was employed to cluster coreferent concepts. Features selected were divided into domain independent and domain specific sets. Training was done with the i2b2 provided training set of 489 documents with 6949 chains. Testing was done on 322 documents. Results The learning engine, using the un-weighted average of three different measurement schemes, resulted in an F measure of 0.8423 where no domain specific features were included and 0.8483 where the feature set included both domain independent and domain specific features. Conclusion Our machine learning approach is a promising solution for recognizing coreferent concepts, which in turn is useful for practical applications such as the assembly of problem and medication lists from clinical documents. PMID:22582205
Underhill, Kristen; Morrow, Kathleen M; Colleran, Christopher M; Holcomb, Richard; Operario, Don; Calabrese, Sarah K; Galárraga, Omar; Mayer, Kenneth H
2014-01-01
Pre-exposure prophylaxis (PrEP) is a promising strategy for HIV prevention among men who have sex with men (MSM) and men who engage in sex work. But access will require routine HIV testing and contacts with healthcare providers. This study investigated men's healthcare and HIV testing experiences to inform PrEP implementation. We conducted 8 focus groups (n = 38) in 2012 and 56 in-depth qualitative interviews in 2013-14 with male sex workers (MSWs) (n = 31) and other MSM (n = 25) in Providence, RI. MSWs primarily met clients in street-based sex work venues. Facilitators asked participants about access to healthcare and HIV/STI testing, healthcare needs, and preferred PrEP providers. MSWs primarily accessed care in emergency rooms (ERs), substance use clinics, correctional institutions, and walk-in clinics. Rates of HIV testing were high, but MSWs reported low access to other STI testing, low insurance coverage, and unmet healthcare needs including primary care, substance use treatment, and mental health services. MSM not engaging in sex work were more likely to report access to primary and specialist care. Rates of HIV testing among these MSM were slightly lower, but they reported more STI testing, more insurance coverage, and fewer unmet needs. Preferred PrEP providers for both groups included primary care physicians, infectious disease specialists, and psychiatrists. MSWs were also willing to access PrEP in substance use treatment and ER settings. PrEP outreach efforts for MSWs and other MSM should engage diverse providers in many settings, including mental health and substance use treatment, ERs, needle exchanges, correctional institutions, and HIV testing centers. Access to PrEP will require financial assistance, but can build on existing healthcare contacts for both populations.
Underhill, Kristen; Morrow, Kathleen M.; Colleran, Christopher M.; Holcomb, Richard; Operario, Don; Calabrese, Sarah K.; Galárraga, Omar; Mayer, Kenneth H.
2014-01-01
Background Pre-exposure prophylaxis (PrEP) is a promising strategy for HIV prevention among men who have sex with men (MSM) and men who engage in sex work. But access will require routine HIV testing and contacts with healthcare providers. This study investigated men’s healthcare and HIV testing experiences to inform PrEP implementation. Methods We conducted 8 focus groups (n = 38) in 2012 and 56 in-depth qualitative interviews in 2013–14 with male sex workers (MSWs) (n = 31) and other MSM (n = 25) in Providence, RI. MSWs primarily met clients in street-based sex work venues. Facilitators asked participants about access to healthcare and HIV/STI testing, healthcare needs, and preferred PrEP providers. Results MSWs primarily accessed care in emergency rooms (ERs), substance use clinics, correctional institutions, and walk-in clinics. Rates of HIV testing were high, but MSWs reported low access to other STI testing, low insurance coverage, and unmet healthcare needs including primary care, substance use treatment, and mental health services. MSM not engaging in sex work were more likely to report access to primary and specialist care. Rates of HIV testing among these MSM were slightly lower, but they reported more STI testing, more insurance coverage, and fewer unmet needs. Preferred PrEP providers for both groups included primary care physicians, infectious disease specialists, and psychiatrists. MSWs were also willing to access PrEP in substance use treatment and ER settings. Conclusions PrEP outreach efforts for MSWs and other MSM should engage diverse providers in many settings, including mental health and substance use treatment, ERs, needle exchanges, correctional institutions, and HIV testing centers. Access to PrEP will require financial assistance, but can build on existing healthcare contacts for both populations. PMID:25386746
Combined Space Environmental Exposure Tests of Multi-Junction GaAs/Ge Solar Array Coupons
NASA Technical Reports Server (NTRS)
Hoang, Bao; Wong, Frankie; Corey, Ron; Gardiner, George; Funderburk, Victor V.; Gahart, Richard; Wright, Kenneth H.; Schneider, Todd; Vaughn, Jason
2010-01-01
A set of multi-junction GaAs/Ge solar array test coupons was subjected to a sequence of 5-year increments of combined environmental exposure tests. The purpose of this test program is to understand the changes and degradation of the solar array panel components, including their ESD mitigation design features in integrated form, after multiple years (up to 15) of simulated geosynchronous space environment. These tests consist of: UV radiation, electrostatic discharge (ESD), electron/proton particle radiation, thermal cycling, and ion thruster plume exposures. The solar radiation was produced using a mercury-xenon lamp with wavelengths in the UV spectrum ranging from 230 to 400 nm. The ESD test was performed in the inverted-gradient mode using a low-energy electron (2.6 - 6 keV) beam exposure. The ESD test also included a simulated panel coverglass flashover for the primary arc event. The electron/proton radiation exposure included both 1.0 MeV and 100 keV electron beams simultaneous with a 40 keV proton beam. The thermal cycling included simulated transient earth eclipses for satellites in geosynchronous orbit. With the increasing use of ion thruster engines on many satellites, the combined environmental test also included ion thruster plume exposure to determine whether solar array surface erosion had any impact on performance. Before and after each increment of environmental exposure, the coupons underwent visual inspection under high-power magnification and electrical tests that included characterization by LAPSS, dark I-V, and electroluminescence. This paper discusses the test objectives, test methodologies, and preliminary results after 5 years of simulated exposure.
Li, Huixia; Luo, Miyang; Zheng, Jianfei; Luo, Jiayou; Zeng, Rong; Feng, Na; Du, Qiyun; Fang, Junqun
2017-02-01
An artificial neural network (ANN) model was developed to predict the risk of congenital heart disease (CHD) in pregnant women. This hospital-based case-control study involved 119 CHD cases and 239 controls, all recruited from birth defect surveillance hospitals in Hunan Province between July 2013 and June 2014. All subjects were interviewed face-to-face to complete a questionnaire that covered 36 CHD-related variables. The 358 subjects were randomly divided into a training set and a testing set in an 85:15 ratio. The training set was used to identify the significant predictors of CHD by univariate logistic regression analyses and to develop a standard feed-forward back-propagation neural network (BPNN) model for the prediction of CHD. The testing set was used to test and evaluate the performance of the ANN model. Univariate logistic regression analyses were performed in SPSS 18.0, and the ANN models were developed in Matlab 7.1. The univariate logistic regression identified 15 predictors significantly associated with CHD: education level (odds ratio = 0.55), gravidity (1.95), parity (2.01), history of abnormal reproduction (2.49), family history of CHD (5.23), maternal chronic disease (4.19), maternal upper respiratory tract infection (2.08), environmental pollution around the maternal dwelling place (3.63), maternal exposure to occupational hazards (3.53), maternal mental stress (2.48), paternal chronic disease (4.87), paternal exposure to occupational hazards (2.51), intake of vegetables/fruit (0.45), intake of fish/shrimp/meat/eggs (0.59), and intake of milk/soymilk (0.55). After many trials, we selected a 3-layer BPNN model with 15, 12, and 1 neurons in the input, hidden, and output layers, respectively, as the best prediction model. The prediction model has accuracies of 0.91 and 0.86 on the training and testing sets, respectively.
The sensitivity, specificity, and Youden index on the testing set (training set) are 0.78 (0.83), 0.90 (0.95), and 0.68 (0.78), respectively. The areas under the receiver operating characteristic curve on the testing and training sets are 0.87 and 0.97, respectively. This study suggests that the BPNN model could be used to predict the risk of CHD in individuals. The model should be further improved with larger-sample-size research.
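The Youden index reported above is simply sensitivity + specificity - 1. The small check below uses hypothetical confusion-matrix counts chosen to reproduce the abstract's testing-set rates (sensitivity 0.78, specificity 0.90), which indeed give the reported 0.68.

```python
# Youden index J = sensitivity + specificity - 1, from confusion-matrix counts.
def youden_index(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return sensitivity + specificity - 1.0

# Hypothetical counts chosen so sensitivity = 39/50 = 0.78, specificity = 45/50 = 0.90.
j = youden_index(tp=39, fn=11, tn=45, fp=5)
print(round(j, 2))  # 0.68
```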
Perspectives of HER2-targeting in gastric and esophageal cancer.
Gerson, James N; Skariah, Sam; Denlinger, Crystal S; Astsaturov, Igor
2017-05-01
The blockade of HER2 signaling has significantly improved the outlook for esophagogastric cancer patients. However, targeting HER2 still remains challenging due to complex biology of this receptor in gastric and esophageal cancers. Areas covered: Here, we review complex HER2 biology, current methods of HER2 testing and tumor heterogeneity of gastroesophageal cancer. Ongoing and completed clinical research data are discussed. Expert opinion: HER2 overexpression is a validated target in gastroesophageal cancer, with therapeutic implications resulting in prolonged survival when inhibited in the front-line setting. With standardized HER2 testing in gastro-esophageal cancer, the ongoing trials are testing newer agents and combinations including combination of anti-HER2 antibodies with immunotherapy. Clonal heterogeneity and emergence of resistance will challenge our approach to treating these patients beyond the frontline settings.
A novel method to estimate the affinity of HLA-A∗0201 restricted CTL epitope
NASA Astrophysics Data System (ADS)
Xu, Yun-sheng; Lin, Yong; Zhu, Bo; Lin, Zhi-hua
2009-02-01
A set of 70 peptides with affinity for the class I MHC HLA-A∗0201 molecule was subjected to quantitative structure-affinity relationship (QSAR) studies based on the SCORE function, with good results (r2 = 0.6982, RMS = 0.280). The 'leave-one-out' cross-validation (LOO-CV) and an outer test set of 18 samples were then used to validate the QSAR model. The results of the LOO-CV were q2 = 0.6188, RMS = 0.315, and the results on the outer test set were r2 = 0.5633, RMS = 0.2292. All of these show that the QSAR model has good predictability. Statistical analysis showed that hydrophobic and hydrogen-bond interactions play a significant role in peptide-MHC binding. The study also provides useful information for structural modification of CTL epitopes and lays a theoretical basis for the molecular design of therapeutic vaccines.
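The leave-one-out validation scheme used to compute q2 and RMS can be sketched as follows. The descriptors and affinities below are synthetic stand-ins, not the SCORE-function features from the study, and a plain linear model replaces the paper's fitted QSAR model.

```python
# Sketch of LOO-CV for a linear affinity model, reporting q2 and RMS.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(2)
X = rng.normal(size=(70, 4))  # 4 hypothetical peptide descriptors
y = X @ np.array([0.8, -0.5, 0.3, 0.1]) + rng.normal(scale=0.3, size=70)

# Each sample is predicted by a model trained on the other 69.
preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    preds[test_idx] = model.predict(X[test_idx])

ss_res = ((y - preds) ** 2).sum()
ss_tot = ((y - y.mean()) ** 2).sum()
q2 = 1.0 - ss_res / ss_tot          # cross-validated analogue of r2
rms = np.sqrt(ss_res / len(y))      # root-mean-square prediction error
print(f"q2={q2:.3f}, RMS={rms:.3f}")
```

Because each prediction comes from a model that never saw that sample, q2 is an honest (and usually slightly pessimistic) estimate of predictive ability compared with the fitted r2.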
Towards optimal experimental tests on the reality of the quantum state
NASA Astrophysics Data System (ADS)
Knee, George C.
2017-02-01
The Barrett-Cavalcanti-Lal-Maroney (BCLM) argument stands as the most effective means of demonstrating the reality of the quantum state. Its advantages include being derived from very few assumptions, and a robustness to experimental error. Finding the best way to implement the argument experimentally is an open problem, however, and involves cleverly choosing sets of states and measurements. I show that techniques from convex optimisation theory can be leveraged to numerically search for these sets, which then form a recipe for experiments that allow for the strongest statements about the ontology of the wavefunction to be made. The optimisation approach presented is versatile, efficient and can take account of the finite errors present in any real experiment. I find significantly improved low-cardinality sets which are guaranteed partially optimal for a BCLM test in low Hilbert space dimension. I further show that mixed states can be more optimal than pure states.
Updating the immunology curriculum in clinical laboratory science.
Stevens, C D
2000-01-01
To determine essential content areas of immunology/serology courses at the clinical laboratory technician (CLT) and clinical laboratory scientist (CLS) levels. A questionnaire was designed which listed all major topics in immunology and serology. Participants were asked to place a check beside each topic covered. For an additional list of serological and immunological laboratory testing, participants were asked to indicate if each test was performed in either the didactic or clinical setting, or not performed at all. A national survey of 593 NAACLS approved CLT and CLS programs was conducted by mail under the auspices of ASCLS. Responses were obtained from 158 programs. Respondents from all across the United States included 60 CLT programs, 48 hospital-based CLS programs, 45 university-based CLS programs, and 5 university-based combined CLT and CLS programs. The survey was designed to enumerate major topics included in immunology and serology courses by a majority of participants at two distinct educational levels, CLT and CLS. Laboratory testing routinely performed in student laboratories as well as in the clinical setting was also determined for these two levels of practitioners. Certain key topics were common to most immunology and serology courses. There were some notable differences in the depth of courses at the CLT and CLS levels. Laboratory testing associated with these courses also differed at the two levels. Testing requiring more detailed interpretation, such as antinuclear antibody patterns (ANAs), was mainly performed by CLS students only. There are certain key topics as well as specific laboratory tests that should be included in immunology/serology courses at each of the two different educational levels to best prepare students for the workplace. Educators can use this information as a guide to plan a curriculum for such courses.
Overview of software development at the parabolic dish test site
NASA Technical Reports Server (NTRS)
Miyazono, C. K.
1985-01-01
The development history of the data acquisition and data analysis software is discussed. The software development occurred between 1978 and 1984 in support of solar energy module testing at the Jet Propulsion Laboratory's Parabolic Dish Test Site, located within Edwards Test Station. The development went through incremental stages, starting with a simple single-user BASIC set of programs and progressing to the relatively complex multi-user FORTRAN system that was used until the termination of the project. Additional software in support of testing is discussed, including software for a meteorological subsystem and for the Test Bed Concentrator Control Console interface. Conclusions and recommendations for further development are presented.
Early Examples from the Integrated Multi-Satellite Retrievals for GPM (IMERG)
NASA Astrophysics Data System (ADS)
Huffman, George; Bolvin, David; Braithwaite, Daniel; Hsu, Kuolin; Joyce, Robert; Kidd, Christopher; Sorooshian, Soroosh; Xie, Pingping
2014-05-01
The U.S. GPM Science Team's Day-1 algorithm for computing combined precipitation estimates as part of GPM is the Integrated Multi-satellitE Retrievals for GPM (IMERG). The goal is to compute the best time series of (nearly) global precipitation from "all" precipitation-relevant satellites and global surface precipitation gauge analyses. IMERG is being developed as a unified U.S. algorithm drawing on strengths in the three contributing groups, whose previous work includes: 1) the TRMM Multi-satellite Precipitation Analysis (TMPA); 2) the CPC Morphing algorithm with Kalman Filtering (K-CMORPH); and 3) the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks using a Cloud Classification System (PERSIANN-CCS). We review the IMERG design and development, plans for testing, and current status. Some of the lessons learned in running and reprocessing the previous data sets include the importance of quality-controlling input data sets, strategies for coping with transitions in the various input data sets, and practical approaches to retrospective analysis of multiple output products (namely the real- and post-real-time data streams). IMERG output will be illustrated using early test data, including the variety of supporting fields, such as the merged-microwave and infrared estimates, and the precipitation type. We end by considering recent changes in input data specifications, the transition from TRMM-based calibration to GPM-based, and further "Day 2" development.
Evaluation of the infrared test method for the olympus thermal balance tests
NASA Technical Reports Server (NTRS)
Donato, M.; Stpierre, D.; Green, J.; Reeves, M.
1986-01-01
The performance of the infrared (IR) rig used for the thermal balance testing of the Olympus S/C thermal model is discussed. Included in this evaluation are the rig effects themselves, the IRFLUX computer code used to predict the radiation inputs, the Monitored Background Radiometers (MBRs) developed to measure the absorbed radiation flux intensity, the Uniform Temperature Reference (UTR) based temperature measurement system, and the data acquisition system. A preliminary set of verification tests was performed on a 1 m x 1 m zone to assess the performance of the IR lamps, calrods, MBRs and aluminized baffles. The results were used, in part, to obtain some empirical data required for the IRFLUX code. These data included lamp and calrod characteristics, the absorptance function for various surface types, and the baffle reflectivities.
Larson, Bruce; Schnippel, Kathryn; Ndibongo, Buyiswa; Long, Lawrence; Fox, Matthew P; Rosen, Sydney
2012-01-01
Integrating POC CD4 testing technologies into HIV counseling and testing (HCT) programs may improve post-HIV testing linkage to care and treatment. As evaluations of these technologies in program settings continue, estimates of the costs of POC CD4 tests to the service provider will be needed and estimates have begun to be reported. Without a consistent and transparent methodology, estimates of the cost per CD4 test using POC technologies are likely to be difficult to compare and may lead to erroneous conclusions about costs and cost-effectiveness. This paper provides a step-by-step approach for estimating the cost per CD4 test from a provider's perspective. As an example, the approach is applied to one specific POC technology, the Pima Analyzer. The costing approach is illustrated with data from a mobile HCT program in Gauteng Province of South Africa. For this program, the cost per test in 2010 was estimated at $23.76 (material costs = $8.70; labor cost per test = $7.33; and equipment, insurance, and daily quality control = $7.72). Labor and equipment costs can vary widely depending on how the program operates and the number of CD4 tests completed over time. Additional costs not included in the above analysis, for on-going training, supervision, and quality control, are likely to increase further the cost per test. The main contribution of this paper is to outline a methodology for estimating the costs of incorporating POC CD4 testing technologies into an HCT program. The details of the program setting matter significantly for the cost estimate, so that such details should be clearly documented to improve the consistency, transparency, and comparability of cost estimates.
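The costing arithmetic described above can be sketched as a simple provider-perspective sum. The component figures are the ones reported for the Gauteng mobile HCT program; the function names, and the equipment-amortization example values, are illustrative, not from the paper.

```python
def cost_per_test(material, labor, equipment_qc):
    """Provider-perspective cost per POC CD4 test: the sum of the
    per-test material, labor, and equipment/insurance/QC components."""
    return material + labor + equipment_qc

def equipment_cost_per_test(annual_equipment_cost, tests_per_year):
    """Equipment-type costs per test fall as test volume rises, which is
    why the paper stresses that program details drive the estimate."""
    return annual_equipment_cost / tests_per_year

# Components reported for the Gauteng mobile HCT program (2010 USD).
total = cost_per_test(material=8.70, labor=7.33, equipment_qc=7.72)
# The independently rounded components sum to 23.75; the paper reports
# $23.76, presumably from unrounded intermediate values.
```

The second function illustrates why labor and equipment costs "can vary widely depending on how the program operates and the number of CD4 tests completed over time": the same machine spread over twice the volume halves its per-test cost.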
What motivates health professionals? Opportunities to gain greater insight from theory.
Buetow, Stephen
2007-07-01
Health care policy-makers and researchers need to pay more attention to understanding the influence of motivation on professional behaviour. Goal setting theory, including two hypotheses - the business case and the pride case - dominates current attempts to motivate professionals. However, the predominance of goal setting theory stifles other approaches to conceptualizing professional motivation. These approaches include other cognitive theories of motivation, such as self-determination theory (concerned with how to use extrinsic rewards that enhance intrinsic motivation), as well as content, psychoanalytic and environmental theories. A valuable opportunity exists to develop and test such theories in addition to possible hybrids, for example, by elaborating goal setting theory in health care. The results can be expected to inform health policy and motivate individual professionals, groups, organizations and workforces to improve and deliver high quality care.
Materials Compatibility Testing in Concentrated Hydrogen Peroxide
NASA Technical Reports Server (NTRS)
Boxwell, R.; Bromley, G.; Mason, D.; Crockett, D.; Martinez, L.; McNeal, C.; Lyles, G. (Technical Monitor)
2000-01-01
Materials test methods from the 1960s have been used as a starting point in evaluating materials for today's space launch vehicles. These established test methods have been modified to incorporate today's analytical laboratory equipment. The Orbital test objective was to evaluate a wide range of materials, incorporating the revolution in polymer and composite materials that has occurred since the 1960s. Testing is accomplished in three stages, from rough screening to detailed analytical tests. Several interesting test observations made during this testing are included in the paper. A summary of the set-up, testing, and evaluation of long-term storage sub-scale tanks is also included. This sub-scale tank test ran for seven months before being stopped due to a polar boss material breakdown. Chemical evaluations of the hydrogen peroxide and of the residue left on the polar boss surface identify the material breakdown quite clearly. The paper concludes with recommendations for future testing and describes a specific effort underway within the industry to standardize the test methods used in evaluating materials.
Physiological Factors Contributing to Postflight Changes in Functional Performance
NASA Technical Reports Server (NTRS)
Bloomberg, J. J.; Feedback, D. L.; Feiverson, A. H.; Lee, S. M. C.; Mulavara, A. P.; Peters, B. T.; Platts, S. H.; Reschke, M. F.; Ryder, J.; Spiering, B. A.;
2009-01-01
Astronauts experience alterations in multiple physiological systems due to exposure to the microgravity conditions of space flight. These physiological changes include sensorimotor disturbances, cardiovascular deconditioning, and loss of muscle mass and strength. These changes might affect the ability of crewmembers to perform critical mission tasks immediately after landing on lunar and Martian surfaces. To date, changes in functional performance have not been systematically studied or correlated with physiological changes. To understand how changes in physiological function impact functional performance, an interdisciplinary pre/postflight testing regimen, the Functional Task Test (FTT), has been developed that systematically evaluates both astronaut postflight functional performance and related physiological changes. The overall objectives of the FTT are to: develop a set of functional tasks that represent critical mission tasks for Constellation; determine the ability to perform these tasks after flight; identify the key physiological factors that contribute to functional decrements; and use this information to develop targeted countermeasures. The functional test battery was designed to address high-priority tasks identified by the Constellation program as critical for mission success. The functional tests making up the FTT include: 1) the Seat Egress and Walk Test, 2) the Ladder Climb Test, 3) the Recovery from Fall/Stand Test, 4) the Rock Translation Test, 5) the Jump Down Test, 6) the Torque Generation Test, and 7) the Construction Activity Board Test. Corresponding physiological measures include assessments of postural and gait control, dynamic visual acuity, fine motor control, plasma volume, orthostatic intolerance, and upper- and lower-body muscle strength, power, fatigue, control, and neuromuscular drive. Crewmembers will perform both functional and physiological tests before and after short-duration (Shuttle) and long-duration (ISS) space flights.
Data will be collected on R+0 (Shuttle only), R+1, R+6, and R+30. Using a multivariate regression model, we will identify which physiological systems contribute the most to impaired performance on each functional test, and thus which play the largest role in the decrement in functional performance. Using this information, we can then design and implement countermeasures that specifically target the physiological systems most responsible for the altered functional performance associated with space flight.
Calès, P; Boursier, J; Lebigot, J; de Ledinghen, V; Aubé, C; Hubert, I; Oberti, F
2017-04-01
In chronic hepatitis C, the European Association for the Study of the Liver and the Asociacion Latinoamericana para el Estudio del Higado recommend performing transient elastography plus a blood test to diagnose significant fibrosis; test concordance confirms the diagnosis. To validate this rule and improve it by combining a blood test, FibroMeter (virus second generation, Echosens, Paris, France) and transient elastography (constitutive tests) into a single combined test, as suggested by the American Association for the Study of Liver Diseases and the Infectious Diseases Society of America. A total of 1199 patients were included in an exploratory set (HCV, n = 679) or in two validation sets (HCV ± HIV, HBV, n = 520). Accuracy was mainly evaluated by correct diagnosis rate for severe fibrosis (pathological Metavir F ≥ 3, primary outcome) by classical test scores or a fibrosis classification, reflecting Metavir staging, as a function of test concordance. Score accuracy: there were no significant differences between the blood test (75.7%), elastography (79.1%) and the combined test (79.4%) (P = 0.066); the score accuracy of each test was significantly (P < 0.001) decreased in discordant vs. concordant tests. Classification accuracy: combined test accuracy (91.7%) was significantly (P < 0.001) increased vs. the blood test (84.1%) and elastography (88.2%); accuracy of each constitutive test was significantly (P < 0.001) decreased in discordant vs. concordant tests but not with combined test: 89.0 vs. 92.7% (P = 0.118). Multivariate analysis for accuracy showed an interaction between concordance and fibrosis level: in the 1% of patients with full classification discordance and severe fibrosis, non-invasive tests were unreliable. The advantage of combined test classification was confirmed in the validation sets. The concordance recommendation is validated. 
A combined test, expressed in classification instead of score, improves this rule and validates the recommendation of a combined test, avoiding 99% of biopsies, and offering precise staging. © 2017 John Wiley & Sons Ltd.
Ebrahimi-Najafabadi, Heshmatollah; Leardi, Riccardo; Oliveri, Paolo; Casolino, Maria Chiara; Jalali-Heravi, Mehdi; Lanteri, Silvia
2012-09-15
The current study presents an application of near infrared spectroscopy for identification and quantification of the fraudulent addition of barley in roasted and ground coffee samples. Nine different types of coffee including pure Arabica, Robusta and mixtures of them at different roasting degrees were blended with four types of barley. The blending degrees were between 2 and 20 wt% of barley. D-optimal design was applied to select 100 and 30 experiments to be used as calibration and test set, respectively. Partial least squares regression (PLS) was employed to build the models aimed at predicting the amounts of barley in coffee samples. In order to obtain simplified models, taking into account only informative regions of the spectral profiles, a genetic algorithm (GA) was applied. A completely independent external set was also used to test the model performances. The models showed excellent predictive ability with root mean square errors (RMSE) for the test and external set equal to 1.4% w/w and 0.8% w/w, respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
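The core modelling step above, PLS regression of adulterant level on spectral channels, can be sketched with a minimal NIPALS PLS1 fit on synthetic data. This is a generic illustration of the technique, not the authors' calibration: the data are simulated, and the genetic-algorithm wavelength selection and D-optimal design steps are omitted.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal NIPALS PLS1: returns regression coefficients (on centred
    data) plus the training means needed for prediction."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)            # weight vector
        t = Xk @ w                        # score vector
        tt = t @ t
        p = Xk.T @ t / tt                 # X loading
        qk = (yk @ t) / tt                # y loading
        Xk = Xk - np.outer(t, p)          # deflate X
        yk = yk - qk * t                  # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)   # collapse to one coefficient vector
    return B, x_mean, y_mean

def pls1_predict(X, B, x_mean, y_mean):
    return (X - x_mean) @ B + y_mean

# Synthetic "spectra": 100 samples x 20 channels, response driven by a
# few channels, mimicking a calibration/test split.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
beta = np.zeros(20); beta[[3, 9, 14]] = [1.0, -0.5, 0.8]
y = X @ beta + rng.normal(scale=0.01, size=100)
B, xm, ym = pls1_fit(X[:70], y[:70], n_comp=8)
rmse = np.sqrt(np.mean((pls1_predict(X[70:], B, xm, ym) - y[70:]) ** 2))
```

On this synthetic problem the held-out RMSE lands well below the spread of the response, the same figure of merit (RMSE on a test set) the study reports.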
Association of blood lipids with Alzheimer's disease: A comprehensive lipidomics analysis.
Proitsi, Petroula; Kim, Min; Whiley, Luke; Simmons, Andrew; Sattlecker, Martina; Velayudhan, Latha; Lupton, Michelle K; Soininen, Hillka; Kloszewska, Iwona; Mecocci, Patrizia; Tsolaki, Magda; Vellas, Bruno; Lovestone, Simon; Powell, John F; Dobson, Richard J B; Legido-Quigley, Cristina
2017-02-01
The aim of this study was to (1) replicate previous associations between six blood lipids and Alzheimer's disease (AD) (Proitsi et al. 2015) and (2) identify novel associations between lipids, clinical AD diagnosis, disease progression and brain atrophy (left/right hippocampus/entorhinal cortex). We performed untargeted lipidomic analysis on 148 AD and 152 elderly control plasma samples and used univariate and multivariate analysis methods. We replicated our previous lipid associations and reported novel associations between lipid molecules and all phenotypes. A combination of 24 molecules classified AD patients with >70% accuracy in a test and a validation data set, and we identified lipid signatures that predicted disease progression (R² = 0.10, test data set) and brain atrophy (R² ≥ 0.14, all test data sets except left entorhinal cortex). We putatively identified a number of metabolic features including cholesteryl esters/triglycerides and phosphatidylcholines. Blood lipids are promising AD biomarkers that may lead to new treatment strategies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Impact of probability estimation on frequency of urine culture requests in ambulatory settings.
Gul, Naheed; Quadri, Mujtaba
2012-07-01
To determine the perceptions of the medical community about urine culture in diagnosing urinary tract infections. The cross-sectional survey, based on consecutive sampling, was conducted at Shifa International Hospital, Islamabad, on 200 doctors, including medical students of the Shifa College of Medicine, from April to October 2010. A questionnaire with three common clinical scenarios of low, intermediate and high pre-test probability for urinary tract infection was used to assess how respondents decided whether to order a urine culture. The differences between the reference estimates and the respondents' estimates of pre- and post-test probability were assessed. The association of estimated probabilities with the number of tests ordered was also evaluated. The respondents were also asked about the cost-effectiveness and safety of urine culture and sensitivity testing. Data were analysed using SPSS version 15. In low pre-test probability settings, the disease probability was over-estimated, suggesting the participants' inability to rule out the disease. The post-test probabilities were, however, under-estimated by the doctors as compared to the students. In intermediate and high pre-test probability settings, both over- and under-estimation of probabilities were noticed. Doctors were more likely to consider ordering the test as the disease probability increased. Most of the respondents were of the opinion that urine culture was a cost-effective test with no associated potential harm. The wide variation in the clinical use of urine culture necessitates the formulation of appropriate guidelines for its diagnostic use and the application of Bayesian probabilistic thinking to real clinical situations.
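The pre-/post-test probability reasoning probed by the questionnaire follows Bayes' theorem in odds form. A minimal sketch, with an illustrative likelihood ratio rather than a value from the study:

```python
def post_test_probability(pre_test_p, likelihood_ratio):
    """Bayes in odds form: post-test odds = pre-test odds x LR,
    then convert the odds back to a probability."""
    pre_odds = pre_test_p / (1.0 - pre_test_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Illustrative: a 20% pre-test probability plus a positive result from
# a test with LR+ = 9 yields roughly a 69% post-test probability.
p = post_test_probability(0.20, 9.0)
```

A negative result (LR < 1) lowers the probability by the same mechanism, which is exactly the updating the low pre-test-probability scenario was designed to test.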
Hyle, Emily P; Jani, Ilesh V; Lehe, Jonathan; Su, Amanda E; Wood, Robin; Quevedo, Jorge; Losina, Elena; Bassett, Ingrid V; Pei, Pamela P; Paltiel, A David; Resch, Stephen; Freedberg, Kenneth A; Peter, Trevor; Walensky, Rochelle P
2014-09-01
Point-of-care CD4 tests at HIV diagnosis could improve linkage to care in resource-limited settings. Our objective is to evaluate the clinical and economic impact of point-of-care CD4 tests compared to laboratory-based tests in Mozambique. We use a validated model of HIV testing, linkage, and treatment (CEPAC-International) to examine two strategies of immunological staging in Mozambique: (1) laboratory-based CD4 testing (LAB-CD4) and (2) point-of-care CD4 testing (POC-CD4). Model outcomes include 5-y survival, life expectancy, lifetime costs, and incremental cost-effectiveness ratios (ICERs). Input parameters include linkage to care (LAB-CD4, 34%; POC-CD4, 61%), probability of correctly detecting antiretroviral therapy (ART) eligibility (sensitivity: LAB-CD4, 100%; POC-CD4, 90%) or ART ineligibility (specificity: LAB-CD4, 100%; POC-CD4, 85%), and test cost (LAB-CD4, US$10; POC-CD4, US$24). In sensitivity analyses, we vary POC-CD4-specific parameters, as well as cohort and setting parameters to reflect a range of scenarios in sub-Saharan Africa. We consider ICERs less than three times the per capita gross domestic product in Mozambique (US$570) to be cost-effective, and ICERs less than one times the per capita gross domestic product in Mozambique to be very cost-effective. Projected 5-y survival in HIV-infected persons with LAB-CD4 is 60.9% (95% CI, 60.9%-61.0%), increasing to 65.0% (95% CI, 64.9%-65.1%) with POC-CD4. Discounted life expectancy and per person lifetime costs with LAB-CD4 are 9.6 y (95% CI, 9.6-9.6 y) and US$2,440 (95% CI, US$2,440-US$2,450) and increase with POC-CD4 to 10.3 y (95% CI, 10.3-10.3 y) and US$2,800 (95% CI, US$2,790-US$2,800); the ICER of POC-CD4 compared to LAB-CD4 is US$500/year of life saved (YLS) (95% CI, US$480-US$520/YLS). 
POC-CD4 improves clinical outcomes and remains near the very cost-effective threshold in sensitivity analyses, even if point-of-care CD4 tests have lower sensitivity/specificity and higher cost than published values. In other resource-limited settings with fewer opportunities to access care, POC-CD4 has a greater impact on clinical outcomes and remains cost-effective compared to LAB-CD4. Limitations of the analysis include the uncertainty around input parameters, which is examined in sensitivity analyses. The potential added benefits due to decreased transmission are excluded; their inclusion would likely further increase the value of POC-CD4 compared to LAB-CD4. POC-CD4 at the time of HIV diagnosis could improve survival and be cost-effective compared to LAB-CD4 in Mozambique, if it improves linkage to care. POC-CD4 could have the greatest impact on mortality in settings where resources for HIV testing and linkage are most limited. Please see later in the article for the Editors' Summary.
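The ICER reported above is simply the incremental lifetime cost divided by the incremental discounted life expectancy. A sketch using the rounded point estimates from the abstract (the published US$500/YLS reflects unrounded model outputs):

```python
def icer(cost_new, cost_comparator, effect_new, effect_comparator):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effect (here, US$ per year of life saved, YLS)."""
    return (cost_new - cost_comparator) / (effect_new - effect_comparator)

# Rounded point estimates from the abstract: POC-CD4 vs. LAB-CD4.
# (2800 - 2440) / (10.3 - 9.6) ~= US$514/YLS from rounded inputs;
# the paper reports ~US$500/YLS from unrounded model outputs.
ratio = icer(2800, 2440, 10.3, 9.6)
```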
Cyber-T web server: differential analysis of high-throughput data.
Kayala, Matthew A; Baldi, Pierre
2012-07-01
The Bayesian regularization method for high-throughput differential analysis, described in Baldi and Long (A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes. Bioinformatics 2001; 17: 509-519) and implemented in the Cyber-T web server, is one of the most widely validated. Cyber-T implements a t-test using a Bayesian framework to compute a regularized variance of the measurements associated with each probe under each condition. This regularized estimate is derived by flexibly combining the empirical measurements with a prior, or background, derived from pooling measurements associated with probes in the same neighborhood. This approach flexibly addresses problems associated with low replication levels and technology biases, not only for DNA microarrays, but also for other technologies, such as protein arrays, quantitative mass spectrometry and next-generation sequencing (RNA-seq). Here we present an update to the Cyber-T web server, incorporating several useful new additions and improvements. Several preprocessing data normalization options, including logarithmic and variance-stabilizing normalization (VSN) transforms, are included. To augment two-sample t-tests, a one-way analysis of variance is implemented. Several methods for multiple-test correction, including standard frequentist methods and a probabilistic mixture model treatment, are available. Diagnostic plots allow visual assessment of the results. The web server provides comprehensive documentation and example data sets. The Cyber-T web server, with R source code and data sets, is publicly available at http://cybert.ics.uci.edu/.
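The idea behind the regularized variance can be sketched as a shrinkage estimator that blends each probe's own sample variance with a background variance pooled from neighbouring probes. This is a simplified form in the spirit of Baldi and Long's method, not the exact Cyber-T formula; the prior weight v0 and example values are illustrative.

```python
import statistics

def regularized_variance(probe_values, background_var, v0):
    """Shrink a probe's sample variance toward a pooled background
    variance; v0 acts as the pseudo-count weight of the prior."""
    n = len(probe_values)
    s2 = statistics.variance(probe_values)   # probe's own sample variance
    return (v0 * background_var + (n - 1) * s2) / (v0 + n - 1)

# With few replicates the estimate leans on the background; with v0 = 0
# it reduces to the ordinary sample variance.
probe = [2.1, 2.4, 1.9]
reg = regularized_variance(probe, background_var=1.0, v0=10)
```

This is exactly the behaviour that lets the regularized t-test cope with "low replication levels": three replicates alone give a noisy variance estimate, and borrowing strength from the neighbourhood stabilizes it.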
ERIC Educational Resources Information Center
Shriver, Edgar L.; And Others
This document furnishes a complete copy of the Test Subject's Instructions and the Test Administrator's Handbook for a battery of criterion referenced Job Task Performance Tests (JTPT) for electronic maintenance. General information is provided on soldering, Radar Set AN/APN-147(v), Radar Set Special Equipment, Radar Set Bench Test Set-Up, and…
Development of knowledge tests for multi-disciplinary emergency training: a review and an example.
Sørensen, J L; Thellesen, L; Strandbygaard, J; Svendsen, K D; Christensen, K B; Johansen, M; Langhoff-Roos, P; Ekelund, K; Ottesen, B; Van Der Vleuten, C
2015-01-01
The literature is sparse on written test development in a post-graduate multi-disciplinary setting. Developing and evaluating knowledge tests for use in multi-disciplinary post-graduate training is challenging. The objective of this study was to describe the process of developing and evaluating a multiple-choice question (MCQ) test for use in a multi-disciplinary training program in obstetric-anesthesia emergencies. A multi-disciplinary working committee with 12 members representing six professional healthcare groups and another 28 participants were involved. Recurrent revisions of the MCQ items were undertaken, followed by a statistical analysis. The MCQ items were developed stepwise, including decisions on aims and content, followed by testing for face and content validity, construct validity, item-total correlation, and reliability. To obtain acceptable content validity, 40 out of the original 50 items were included in the final MCQ test. The MCQ test was able to distinguish between levels of competence, and good construct validity was indicated by a significant difference in the mean score between consultants and first-year trainees, as well as between first-year trainees and medical and midwifery students. Evaluation of the item-total correlation analysis in the 40-item set revealed that 11 items needed re-evaluation, four of which addressed content issues in local clinical guidelines. A Cronbach's alpha of 0.83 for reliability was found, which is acceptable. Content and construct validity and reliability were acceptable. The presented template for the development of this MCQ test could be useful to others when developing knowledge tests and may enhance the overall quality of test development. © 2014 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
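Cronbach's alpha, the reliability statistic reported above, can be computed directly from an examinee-by-item score matrix. A stdlib-only sketch with a toy matrix (not the study's data):

```python
from statistics import variance

def cronbach_alpha(scores):
    """scores: list of examinee rows, each a list of k item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(scores[0])
    items = list(zip(*scores))                        # columns = items
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Toy case: items that always agree are perfectly internally consistent.
consistent = [[1, 1, 1], [3, 3, 3], [2, 2, 2], [5, 5, 5]]
```

For the degenerate matrix above alpha is exactly 1; real item sets, like the 40-item test's 0.83, fall below that as items disagree.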
Yokoyama, Eiji; Hirai, Shinichiro; Ishige, Taichiro; Murakami, Satoshi
2018-01-02
Seventeen clusters of Shiga toxin-producing Escherichia coli O157:H7/- (O157) strains, determined by cluster analysis of pulsed-field gel electrophoresis patterns, were analyzed using whole genome sequence (WGS) data to investigate this pathogen's molecular epidemiology. The 17 clusters included 136 strains containing strains from nine outbreaks, with each outbreak caused by a single source contaminated with the organism, as shown by epidemiological contact surveys. WGS data of these strains were used to identify single nucleotide polymorphisms (SNPs) by two methods: short read data were directly mapped to a reference genome (mapping derived SNPs) and common SNPs between the mapping derived SNPs and SNPs in assembled data of short read data (common SNPs). Among both SNPs, those that were detected in genes with a gap were excluded to remove ambiguous SNPs from further analysis. The effectiveness of both SNPs was investigated among all the concatenated SNPs that were detected (whole SNP set); SNPs were divided into three categories based on the genes in which they were located (i.e., backbone SNP set, O-island SNP set, and mobile element SNP set); and SNPs in non-coding regions (intergenic region SNP set). When SNPs from strains isolated from the nine single source derived outbreaks were analyzed using an unweighted pair group method with arithmetic mean tree (UPGMA) and a minimum spanning tree (MST), the maximum pair-wise distances of the backbone SNP set of the mapping derived SNPs were significantly smaller than those of the whole and intergenic region SNP set on both UPGMAs and MSTs. This significant difference was also observed when the backbone SNP set of the common SNPs were examined (Steel-Dwass test, P≤0.01). When the maximum pair-wise distances were compared between the mapping derived and common SNPs, significant differences were observed in those of the whole, mobile element, and intergenic region SNP set (Wilcoxon signed rank test, P≤0.01). 
When all the strains included in one complex on an MST or one cluster on a UPGMA were designated as the same genotype, the values of the Hunter-Gaston Discriminatory Power Index for the backbone SNP set of the mapping derived and common SNPs were higher than those of the other SNP sets. In contrast, the mobile element SNP set could not robustly subdivide lineage I strains of the tested O157 strains using either the mapping derived or common SNPs. These results suggested that the backbone SNP set was the most effective for analysis of WGS data from O157, enabling characterization of its molecular epidemiology. Copyright © 2017 Elsevier B.V. All rights reserved.
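The Hunter-Gaston Discriminatory Power Index used above is one minus the probability that two strains drawn without replacement share a genotype. A stdlib sketch with toy type labels (not the study's genotypes):

```python
from collections import Counter

def hunter_gaston_di(type_labels):
    """D = 1 - sum(n_j * (n_j - 1)) / (N * (N - 1)), where n_j is the
    number of strains assigned to type j and N the total strain count."""
    n = len(type_labels)
    counts = Counter(type_labels).values()
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Toy example: 10 strains split evenly between two genotypes.
d = hunter_gaston_di(["A"] * 5 + ["B"] * 5)
```

D runs from 0 (every strain typed identically, no discrimination) to 1 (every strain its own type), which is why a higher value for the backbone SNP set indicates better subtyping power.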
DOT National Transportation Integrated Search
2000-10-01
The Phoenix, Arizona Metropolitan Model Deployment was one of four cities included in the Metropolitan Model Deployment Initiative (MMDI). The initiative was set forth in 1996 to serve as model deployments of ITS infrastructure and integration. One o...
The Standards Movement: A Child-Centered Response.
ERIC Educational Resources Information Center
Crain, William
2003-01-01
Discusses how child-centered educational philosophies, including Montessori, share positions differing radically from those of the educational standards movement. Focuses on adult-set goals and standards, social promotion, external motivators, demands for more challenging work, and standardized tests. Reports that children in child-centered…
Evaluation of Options for Interpreting Environmental ...
Secondary data from the BioResponse Operational Testing and Evaluation project were used to study six options for interpreting culture-based/microbial count data sets that include left-censored data, or measurements that are less than established quantification limits and/or detection limits.
Evaluation of Microelectrode Arrays for Neurotoxicity Testing Using a Chemical Training Set
Microelectrode array (MEA) approaches have been proposed as a tool for detecting functional changes in electrically active cells, including neurons, exposed to drugs, chemicals, or particles. However, conventional single well MEA systems lack the throughput necessary for screenin...
Oncology biomarkers: discovery, validation, and clinical use.
Heckman-Stoddard, Brandy M
2012-05-01
To discuss the discovery, validation, and clinical use of multiple types of biomarkers. Medical literature and published guidelines. Formal validation of biomarkers should include both retrospective analyses of well-characterized samples as well as a prospective clinical trial in which the biomarker is tested for its ability to predict the presence of disease or the efficacy of a cancer therapy. Biomarker development is complicated, with very few biomarker discoveries leading to clinically useful tests. Nurses should understand how a biomarker was developed, including the sensitivity and specificity before applying new biomarkers in the clinical setting. Copyright © 2012. Published by Elsevier Inc.
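The sensitivity and specificity the author urges clinicians to understand come straight from a 2x2 confusion table, and positive predictive value additionally depends on prevalence. A minimal sketch with illustrative counts (not figures from the article):

```python
def sensitivity(true_pos, false_neg):
    """Fraction of diseased cases the biomarker correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of disease-free cases the biomarker correctly clears."""
    return true_neg / (true_neg + false_pos)

def positive_predictive_value(sens, spec, prevalence):
    """PPV depends on prevalence, which is why a biomarker validated in
    a case-control series can disappoint in a screening setting."""
    tp = sens * prevalence
    fp = (1 - spec) * (1 - prevalence)
    return tp / (tp + fp)
```

For example, a test with 90% sensitivity and specificity applied at 1% prevalence yields a PPV near 8%, one concrete reason "very few biomarker discoveries lead to clinically useful tests".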
An assessment of unstructured grid technology for timely CFD analysis
NASA Technical Reports Server (NTRS)
Kinard, Tom A.; Schabowski, Deanne M.
1995-01-01
An assessment of two unstructured methods is presented in this paper. A tetrahedral unstructured method, USM3D, developed at NASA Langley Research Center, is compared to a Cartesian unstructured method, SPLITFLOW, developed at Lockheed Fort Worth Company. USM3D is an upwind finite volume solver that accepts grids generated primarily from the Vgrid grid generator. SPLITFLOW combines an unstructured grid generator with an implicit flow solver in one package. Both methods are exercised on three test cases: a wing, a wing body, and a fully expanded nozzle. The results for the first two runs are included here and compared to the structured grid method TEAM and to available test data. For each test case, the set-up procedure is described, including any difficulties that were encountered. Detailed descriptions of the solvers are not included in this paper.
Lun, Aaron T L; Chen, Yunshun; Smyth, Gordon K
2016-01-01
RNA sequencing (RNA-seq) is widely used to profile transcriptional activity in biological systems. Here we present an analysis pipeline for differential expression analysis of RNA-seq experiments using the Rsubread and edgeR software packages. The basic pipeline includes read alignment and counting, filtering and normalization, modelling of biological variability and hypothesis testing. For hypothesis testing, we describe particularly the quasi-likelihood features of edgeR. Some more advanced downstream analysis steps are also covered, including complex comparisons, gene ontology enrichment analyses and gene set testing. The code required to run each step is described, along with an outline of the underlying theory. The chapter includes a case study in which the pipeline is used to study the expression profiles of mammary gland cells in virgin, pregnant and lactating mice.
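The filtering-and-normalization step of the pipeline above can be sketched language-agnostically; the actual workflow uses the Rsubread and edgeR packages in R, so the Python below is a conceptual illustration of counts-per-million (CPM) scaling and a CPM-based filter rule, with illustrative thresholds.

```python
def cpm(counts, library_size):
    """Counts-per-million: scale raw counts by library size so that
    expression is comparable across libraries of different depth."""
    return [c * 1e6 / library_size for c in counts]

def keep_gene(gene_counts, library_sizes, min_cpm=1.0, min_samples=2):
    """Filter rule in the spirit of the pipeline: keep a gene only if
    it reaches min_cpm in at least min_samples libraries."""
    n_ok = sum(c * 1e6 / lib >= min_cpm
               for c, lib in zip(gene_counts, library_sizes))
    return n_ok >= min_samples

libs = [1_000_000, 2_000_000, 1_500_000]
# A gene seen across all three libraries passes the filter;
# a gene with a single stray read does not.
expressed = keep_gene([50, 120, 80], libs)
```

Filtering low-abundance genes before modelling biological variability reduces the multiple-testing burden and stabilizes dispersion estimates, which is why the pipeline places it before normalization and hypothesis testing.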
Sinharay, Sandip; Jensen, Jens Ledet
2018-06-27
In educational and psychological measurement, researchers and/or practitioners are often interested in examining whether the ability of an examinee is the same over two sets of items. Such problems can arise in measurement of change, detection of cheating on unproctored tests, erasure analysis, detection of item preknowledge, etc. Traditional frequentist approaches that are used in such problems include the Wald test, the likelihood ratio test, and the score test (e.g., Fischer, Appl Psychol Meas 27:3-26, 2003; Finkelman, Weiss, & Kim-Kang, Appl Psychol Meas 34:238-254, 2010; Glas & Dagohoy, Psychometrika 72:159-180, 2007; Guo & Drasgow, Int J Sel Assess 18:351-364, 2010; Klauer & Rettig, Br J Math Stat Psychol 43:193-206, 1990; Sinharay, J Educ Behav Stat 42:46-68, 2017). This paper shows that approaches based on higher-order asymptotics (e.g., Barndorff-Nielsen & Cox, Inference and asymptotics. Springer, London, 1994; Ghosh, Higher order asymptotics. Institute of Mathematical Statistics, Hayward, 1994) can also be used to test for the equality of the examinee ability over two sets of items. The modified signed likelihood ratio test (e.g., Barndorff-Nielsen, Biometrika 73:307-322, 1986) and the Lugannani-Rice approximation (Lugannani & Rice, Adv Appl Prob 12:475-490, 1980), both of which are based on higher-order asymptotics, are shown to provide some improvement over the traditional frequentist approaches in three simulations. Two real data examples are also provided.
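The Lugannani-Rice tail-area approximation mentioned above can be illustrated on a case with a closed-form answer rather than the paper's IRT setting: the mean of n i.i.d. Exp(1) variables, whose cumulant generating function is K(s) = -log(1 - s). The saddlepoint equation K'(s) = x gives s = 1 - 1/x, and the approximation is P(mean >= x) ~ 1 - Phi(r) + phi(r)(1/u - 1/r). This is a generic sketch of the approximation, not the paper's test statistic.

```python
import math

def lugannani_rice_exp_mean(n, x):
    """Lugannani-Rice approximation to P(mean of n iid Exp(1) >= x), x > 1."""
    s = 1.0 - 1.0 / x                        # saddlepoint: K'(s) = 1/(1-s) = x
    K = -math.log(1.0 - s)                   # CGF of Exp(1) at the saddlepoint
    Kpp = 1.0 / (1.0 - s) ** 2               # K''(s)
    r = math.copysign(math.sqrt(2 * n * (s * x - K)), s)
    u = s * math.sqrt(n * Kpp)
    phi = math.exp(-r * r / 2) / math.sqrt(2 * math.pi)   # normal density
    Phi = 0.5 * (1 + math.erf(r / math.sqrt(2)))          # normal CDF
    return 1 - Phi + phi * (1 / u - 1 / r)

def exact_exp_mean_tail(n, x):
    """Exact P(mean >= x): Gamma(n,1) upper tail, i.e. Poisson(nx) P(N <= n-1)."""
    t = n * x
    term, total = math.exp(-t), 0.0
    for k in range(n):
        total += term
        term *= t / (k + 1)
    return total
```

Even at a small sample size like n = 10 the approximation agrees with the exact tail to well under one percent, which is the kind of higher-order accuracy the paper exploits to improve on the Wald, likelihood ratio, and score tests.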
Point-of-Care Diagnostics in Low Resource Settings: Present Status and Future Role of Microfluidics
Sharma, Shikha; Zapatero-Rodríguez, Julia; Estrela, Pedro; O’Kennedy, Richard
2015-01-01
The inability to diagnose numerous diseases rapidly is a significant cause of the disparity in deaths from both communicable and non-communicable diseases in the developing world compared to the developed world. Existing diagnostic instrumentation usually requires sophisticated infrastructure, stable electrical power, expensive reagents, long assay times, and highly trained personnel, which are not often available in limited resource settings. This review will critically survey and analyse the current lateral flow-based point-of-care (POC) technologies, which have made a major impact on diagnostic testing in developing countries over the last 50 years. The future of POC technologies is discussed, including the applications of microfluidics, which allows miniaturisation and integration of complex functions that facilitate their usage in limited resource settings. The advantages offered by such systems, including low cost, ruggedness and the capacity to generate accurate and reliable results rapidly, are well suited to the clinical and social settings of the developing world. PMID:26287254
Image-guided intervention in the coagulopathic patient.
Kohli, Marc; Mayo-Smith, William; Zagoria, Ronald; Sandrasegaran, Kumar
2016-04-01
Determining practice parameters for interventional procedures is challenging due to many factors, including unreliable laboratory tests to measure bleeding risk, variable usage of standardized terminology for adverse events, poorly defined standards for administration of blood products, and the growing numbers of anticoagulant and antiplatelet medications. We aim to address these and other issues faced by radiologists performing invasive procedures through a review of available literature and experiential guidance from three academic medical centers. We discuss the significant limitations of using prothrombin time and international normalized ratio to measure bleeding risk, especially in patients with synthetic defects due to impaired liver function. Factors affecting platelet function, including the impact of uremia; recent advances in laboratory testing, including platelet function testing; and thromboelastography are also discussed. A review of the existing literature on fresh-frozen plasma replacement therapy is included. The literature regarding comorbidities affecting coagulation, including malignancy, liver failure, and uremia, is also reviewed. Finally, the authors present a set of recommendations for laboratory thresholds, corrective transfusions, and withholding and restarting medications.
Ivy, Reid A; Farber, Jeffrey M; Pagotto, Franco; Wiedmann, Martin
2013-01-01
Foodborne pathogen isolate collections are important for the development of detection methods, for validation of intervention strategies, and to develop an understanding of pathogenesis and virulence. We have assembled a publicly available Cronobacter (formerly Enterobacter sakazakii) isolate set that consists of (i) 25 Cronobacter sakazakii isolates, (ii) two Cronobacter malonaticus isolates, (iii) one Cronobacter muytjensii isolate, which displays some atypical phenotypic characteristics, biochemical profiles, and colony color on selected differential media, and (iv) two nonclinical Enterobacter asburiae isolates, which show some phenotypic characteristics similar to those of Cronobacter spp. The set consists of human (n = 10), food (n = 11), and environmental (n = 9) isolates. Analysis of partial 16S rDNA sequence and seven-gene multilocus sequence typing data allowed for reliable identification of these isolates to species and identification of 14 isolates as sequence type 4, which had previously been shown to be the most common C. sakazakii sequence type associated with neonatal meningitis. Phenotypic characterization was carried out with API 20E and API 32E test strips and streaking on two selective chromogenic agars; isolates were also assessed for sorbitol fermentation and growth at 45°C. Although these strategies typically produced the same classification as sequence-based strategies, based on a panel of four biochemical tests, one C. sakazakii isolate yielded inconclusive data and one was classified as C. malonaticus. EcoRI automated ribotyping and pulsed-field gel electrophoresis (PFGE) with XbaI separated the set into 23 unique ribotypes and 30 unique PFGE types, respectively, indicating subtype diversity within the set. Subtype and source data for the collection are publicly available in the PathogenTracker database (www.pathogentracker.net), which allows for continuous updating of information on the set, including links to publications that include information on isolates from this collection.
Experimental testing of prototype face gears for helicopter transmissions
NASA Technical Reports Server (NTRS)
Handschuh, R.; Lewicki, D.; Bossler, R.
1992-01-01
An experimental program to test the feasibility of using face gears in a high-speed and high-power environment was conducted. Four face gear sets were tested, two sets at a time, in a closed-loop test stand at pinion rotational speeds up to 19,100 rpm and power levels up to 271 kW. The test gear sets were one-half scale of the helicopter design gear set. Testing the gears at one-eighth power, the test gear set had slightly increased bending and compressive stresses when compared to the full-scale design. The tests were performed in the LeRC spiral bevel gear test facility. All four sets of gears successfully ran at 100 percent of design torque and speed for 30 million pinion cycles, and two sets successfully ran at 200 percent of torque for an additional 30 million pinion cycles. The results, although limited, demonstrated the feasibility of using face gears for high-speed, high-load applications.
Ensemble Eclipse: A Process for Prefab Development Environment for the Ensemble Project
NASA Technical Reports Server (NTRS)
Wallick, Michael N.; Mittman, David S.; Shams, Khawaja S.; Bachmann, Andrew G.; Ludowise, Melissa
2013-01-01
This software simplifies the process of having to set up an Eclipse IDE programming environment for the members of the cross-NASA center project, Ensemble. It achieves this by assembling all the necessary add-ons and custom tools/preferences. This software is unique in that it allows developers in the Ensemble Project (approximately 20 to 40 at any time) across multiple NASA centers to set up a development environment almost instantly and work on Ensemble software. The software automatically has the source code repositories and other vital information and settings included. The Eclipse IDE is an open-source development framework. The NASA (Ensemble-specific) version of the software includes Ensemble-specific plug-ins as well as settings for the Ensemble project. This software saves developers the time and hassle of setting up a programming environment, making sure that everything is set up in the correct manner for Ensemble development. Existing software (i.e., standard Eclipse) requires an intensive setup process that is both time-consuming and error prone. This software is built once by a single user and tested, allowing other developers to simply download and use the software.
Fang, Lingzhao; Sahana, Goutam; Ma, Peipei; Su, Guosheng; Yu, Ying; Zhang, Shengli; Lund, Mogens Sandø; Sørensen, Peter
2017-05-12
A better understanding of the genetic architecture of complex traits can contribute to improve genomic prediction. We hypothesized that genomic variants associated with mastitis and milk production traits in dairy cattle are enriched in hepatic transcriptomic regions that are responsive to intra-mammary infection (IMI). Genomic markers [e.g. single nucleotide polymorphisms (SNPs)] from those regions, if included, may improve the predictive ability of a genomic model. We applied a genomic feature best linear unbiased prediction model (GFBLUP) to implement the above strategy by considering the hepatic transcriptomic regions responsive to IMI as genomic features. GFBLUP, an extension of GBLUP, includes a separate genomic effect of SNPs within a genomic feature, and allows differential weighting of the individual marker relationships in the prediction equation. Since GFBLUP is computationally intensive, we investigated whether a SNP set test could be a computationally fast way to preselect predictive genomic features. The SNP set test assesses the association between a genomic feature and a trait based on single-SNP genome-wide association studies. We applied these two approaches to mastitis and milk production traits (milk, fat and protein yield) in Holstein (HOL, n = 5056) and Jersey (JER, n = 1231) cattle. We observed that a majority of genomic features were enriched in genomic variants that were associated with mastitis and milk production traits. Compared to GBLUP, the accuracy of genomic prediction with GFBLUP was marginally improved (3.2 to 3.9%) in within-breed prediction. The highest increase (164.4%) in prediction accuracy was observed in across-breed prediction. The significance of genomic features based on the SNP set test was correlated with changes in prediction accuracy of GFBLUP (P < 0.05).
GFBLUP provides a framework for integrating multiple layers of biological knowledge to provide novel insights into the biological basis of complex traits, and to improve the accuracy of genomic prediction. The SNP set test might be used as a first-step to improve GFBLUP models. Approaches like GFBLUP and SNP set test will become increasingly useful, as the functional annotations of genomes keep accumulating for a range of species and traits.
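A SNP set test of the kind described, where the association of a genomic feature is assessed from single-SNP statistics, can be sketched with a simple random-set null. Everything below (the statistics, the feature indices, the signal-inflation factor, the random-set null itself) is simulated for illustration and is not the authors' implementation:

```python
import random

random.seed(42)

# Hypothetical squared single-SNP association statistics for 200 SNPs;
# the first 10 SNPs form the "genomic feature" and are given inflated signal.
stats = [random.gauss(0, 1) ** 2 for _ in range(200)]
feature = list(range(10))
for i in feature:
    stats[i] *= 9.0

# Observed set statistic: sum of single-SNP statistics within the feature
observed = sum(stats[i] for i in feature)

# Empirical null: the same sum for random SNP sets of equal size
draws = 5000
exceed = 0
for _ in range(draws):
    idx = random.sample(range(200), len(feature))
    if sum(stats[i] for i in idx) >= observed:
        exceed += 1

p_value = exceed / draws  # small p -> feature enriched in associated SNPs
```

A small p-value flags the feature as enriched, making it a candidate for a separate genomic effect in a GFBLUP-style model.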
Interventions to Improve Sexually Transmitted Disease Screening in Clinic-Based Settings.
Taylor, Melanie M; Frasure-Williams, Jessica; Burnett, Phyllis; Park, Ina U
2016-02-01
The asymptomatic nature and suboptimal screening rates of sexually transmitted diseases (STD) call for implementation of successful interventions to improve screening in community-based clinic settings with attention to cost and resources. We used MEDLINE to systematically review comparative analyses of interventions to improve STD (chlamydia, gonorrhea, or syphilis) screening or rescreening in clinic-based settings that were published between January 2000 and January 2014. Absolute differences in the percent of the target population screened between comparison groups, or the relative percent increase in the number of tests or patients tested, were used to score the interventions as highly effective (>20% increase) or moderately effective (5%-19% increase) in improving screening. Published cost of the interventions was described where available and, when not available, was estimated. Of the 4566 citations reviewed, 38 articles describing 42 interventions met the inclusion criteria. Of the 42 interventions, 16 (38.1%) were categorized as highly effective and 14 (33.3%) as moderately effective. Effective low-cost interventions (<$1000) included the strategic placement of specimen collection materials or automatic collection of STD specimens as part of a routine visit (7 highly effective and 1 moderately effective) and the use of electronic health records (EHRs; 3 highly effective and 4 moderately effective). Patient reminders for screening or rescreening (via text, telephone, and postcards) were highly effective (3) or moderately effective (2) and low or moderate cost ($1001-$10,000). Interventions with dedicated clinic staff to improve STD screening were highly effective (2) or moderately effective (1) but high-cost ($10,001-$100,000). Successful interventions include changing clinic flow to routinely collect specimens for testing, using EHR screening reminders, and reminding patients to get screened or rescreened.
These strategies can be tailored to different clinic settings to improve screening at a low cost.
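The review's effectiveness thresholds can be written down directly. This helper is an illustrative sketch: the label for interventions below the moderate threshold and the treatment of the exact 20% boundary are our assumptions, since the review only defines the two effect categories:

```python
def score_intervention(percent_increase):
    """Score an intervention by the review's thresholds:
    >20% increase in screening = highly effective,
    5%-19% increase = moderately effective."""
    if percent_increase > 20:
        return "highly effective"
    if percent_increase >= 5:
        return "moderately effective"
    return "below the moderate-effect threshold"

print(score_intervention(25))  # highly effective
```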
Zhong, Fei; Tang, Weiming; Cheng, Weibin; Lin, Peng; Wu, Qiongmiao; Cai, Yanshan; Tang, Songyuan; Fan, Lirui; Zhao, Yuteng; Chen, Xi; Mao, Jessica; Meng, Gang; Tucker, Joseph D.; Xu, Huifang
2017-01-01
Background HIV self-testing (HIVST) offers an opportunity to increase HIV testing among people not reached by facility-based services. However, the promotion of HIVST is limited due to insufficient community engagement. We built a Social Entrepreneurship Model (SET) to promote HIVST linkage to care among Chinese MSM in Guangzhou. Method The SET model includes a few key steps: Each participant first completed an online survey, and paid a $23 USD (refundable) deposit to get a HIVST kit and a syphilis self-testing (SST) kit. After the testing, the results were sent to the platform by the participants and interpreted by CDC staff. Meanwhile, the deposit was returned to each participant. Finally, the CBO contacted the participants to provide counseling services, confirmation testing and linkage to care. Result During April–June of 2015, a total of 198 MSM completed a preliminary survey and purchased self-testing kits. Among them, the majority were aged under 34 (84.4%) and met partners online (93.1%). In addition, 68.9% of participants had ever tested for HIV, and 19.5% had ever performed HIVST. Overall, feedback was received from 192 (97.0%) participants. Among these, 14 people did not use the kits; the HIV and syphilis prevalence among the remaining users was 4.5% (8/178) and 3.7% (6/178), respectively. All of the screened HIV-positive cases sought further confirmation testing and were linked to care. Conclusion Using an online SET model to promote HIV and syphilis self-testing among Chinese MSM is acceptable and feasible, and this model adds a new testing platform to the current testing service system. PMID:27601301
Batey, D Scott; Whitfield, Samantha; Mulla, Mazheruddin; Stringer, Kristi L; Durojaiye, Modupeoluwa; McCormick, Lisa; Turan, Bulent; Nyblade, Laura; Kempf, Mirjam-Colette; Turan, Janet M
2016-11-01
HIV-related stigma has been shown to have profound effects on people living with HIV (PLWH). When stigma is experienced in a healthcare setting, negative health outcomes are exacerbated. We sought to assess the feasibility and acceptability of a healthcare setting stigma-reduction intervention, the Finding Respect and Ending Stigma around HIV (FRESH) Workshop, in the United States. This intervention, adapted from a similar strategy implemented in Africa, brought together healthcare workers (HW) and PLWH to address HIV-related stigma. Two pilot workshops were conducted in Alabama and included 17 HW and 19 PLWH. Participants completed questionnaire measures pre- and post-workshop, including open-ended feedback items. Analytical methods included assessment of measures reliability, pre-post-test comparisons using paired t-tests, and qualitative content analysis. Overall satisfaction with the workshop experience was high, with 87% PLWH and 89% HW rating the workshop "excellent" and the majority agreeing that others like themselves would be interested in participating. Content analysis of open-ended items revealed that participants considered the workshop informative, interactive, well-organized, understandable, fun, and inclusive, while addressing real and prevalent issues. Most pre- and post-test measures had good-excellent internal consistency reliability (Cronbach's alphas ranging from 0.70 to 0.96) and, although sample sizes were small, positive trends were observed, reaching statistical significance for increased awareness of stigma in the health facility among HW (p = 0.047) and decreased uncertainty about HIV treatment among PLWH (p = 0.017). The FRESH intervention appears to be feasible and highly acceptable to HW and PLWH participants and shows great promise as a healthcare setting stigma-reduction intervention for US contexts.
Hollen, Patricia J; Gralla, Richard J; Jones, Randy A; Thomas, Christopher Y; Brenin, David R; Weiss, Geoffrey R; Schroen, Anneke T; Petroni, Gina R
2013-03-01
Appropriate utilization of treatment is a goal for all patients undergoing cancer treatment. Proper treatment maximizes benefit and limits exposure to unnecessary measures. This report describes findings of the feasibility and acceptability of implementing a short, clinic-based decision aid and presents an in-depth clinical profile of the participants. This descriptive study used a prospective, quantitative approach to obtain the feasibility and acceptability of a decision aid (DecisionKEYS for Balancing Choices) for use in clinical settings. It combined results of trials of patients with three different common malignancies. All groups used the same decision aid series. Participants included 80 patients with solid tumors (22 with newly diagnosed breast cancer, 19 with advanced prostate cancer, and 39 with advanced lung cancer) and their 80 supporters as well as their physicians and nurses, for a total of 160 participants and 10 health professionals. The decision aid was highly acceptable to patient and supporter participants in all diagnostic groups. It was feasible for use in clinic settings; the overall value was rated highly. Of six physicians, all found the interactive format with the help of the nurse as feasible and acceptable. Nurses also rated the decision aid favorably. This intervention provides the opportunity to enhance decision making about cancer treatment and warrants further study including larger and more diverse groups. Strengths of the study included a theoretical grounding, feasibility testing of a practical clinic-based intervention, and summative evaluation of acceptability of the intervention by patient and supporter pairs. Further research also is needed to test the effectiveness of the decision aid in diverse clinical settings and to determine if this intervention can decrease overall costs.
Scanlon, Michael L; Vreeman, Rachel C
2013-01-01
The rollout of antiretroviral therapy (ART) significantly reduced human immunodeficiency virus (HIV)-related morbidity and mortality, but good clinical outcomes depend on access and adherence to treatment. In resource-limited settings, where over 90% of the world’s HIV-infected population resides, data on barriers to treatment are emerging that contribute to low rates of uptake in HIV testing, linkage to and retention in HIV care systems, and suboptimal adherence rates to therapy. A review of the literature reveals limited evidence to inform strategies to improve access and adherence with the majority of studies from sub-Saharan Africa. Data from observational studies and randomized controlled trials support home-based, mobile and antenatal care HIV testing, task-shifting from doctor-based to nurse-based and lower level provider care, and adherence support through education, counseling and mobile phone messaging services. Strategies with more limited evidence include targeted HIV testing for couples and family members of ART patients, decentralization of HIV care, including through home- and community-based ART programs, and adherence promotion through peer health workers, treatment supporters, and directly observed therapy. There is little evidence for improving access and adherence among vulnerable groups such as women, children and adolescents, and other high-risk populations and for addressing major barriers. Overall, studies are few in number and suffer from methodological issues. Recommendations for further research include health information technology, social-level factors like HIV stigma, and new research directions in cost-effectiveness, operations, and implementation. Findings from this review make a compelling case for more data to guide strategies to improve access and adherence to treatment in resource-limited settings. PMID:23326204
Zheng, Bin; Lu, Amy; Hardesty, Lara A; Sumkin, Jules H; Hakim, Christiane M; Ganott, Marie A; Gur, David
2006-01-01
The purpose of this study was to develop and test a method for selecting "visually similar" regions of interest depicting breast masses from a reference library to be used in an interactive computer-aided diagnosis (CAD) environment. A reference library including 1000 malignant mass regions and 2000 benign and CAD-generated false-positive regions was established. When a suspicious mass region is identified, the scheme segments the region and searches for similar regions from the reference library using a multifeature-based k-nearest neighbor (KNN) algorithm. To improve selection of reference images, we added an interactive step. All actual masses in the reference library were subjectively rated on a scale from 1 to 9 as to their "visual margin spiculation". When an observer identifies a suspected mass region during a case interpretation, he/she first rates the margins, and the computerized search is then limited only to regions rated as having similar levels of spiculation (within +/-1 scale difference). In an observer preference study including 85 test regions, two sets of the six "similar" reference regions selected by the KNN with and without the interactive step were displayed side by side with each test region. Four radiologists and five nonclinician observers selected the more appropriate ("similar") reference set in a two-alternative forced-choice preference experiment. All four radiologists and five nonclinician observers preferred the sets of regions selected by the interactive method, with average frequencies of 76.8% and 74.6%, respectively. The overall preference for the interactive method was highly significant (p < 0.001). The study demonstrated that a simple interactive approach that includes subjectively perceived ratings of one feature alone, namely a rating of margin "spiculation", could substantially improve the selection of "visually similar" reference images.
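The interactive restriction, limiting the KNN search to references within +/-1 of the observer's spiculation rating, can be sketched as follows. The feature vectors, the ratings, and the Euclidean distance metric are illustrative assumptions, not the study's actual feature set:

```python
import math

# Hypothetical reference library: (feature_vector, spiculation_rating 1-9)
library = [
    ((0.2, 0.8), 3),
    ((0.3, 0.7), 4),
    ((0.9, 0.1), 8),
    ((0.25, 0.75), 2),
    ((0.8, 0.2), 7),
]

def similar_regions(query_vec, query_rating, library, k=3):
    """KNN search restricted to references whose spiculation rating is
    within +/-1 of the observer's rating, as in the interactive step."""
    candidates = [(vec, r) for vec, r in library if abs(r - query_rating) <= 1]
    # Rank remaining candidates by Euclidean feature distance
    candidates.sort(key=lambda item: math.dist(query_vec, item[0]))
    return candidates[:k]

matches = similar_regions((0.22, 0.78), 3, library)
```

Here the two high-spiculation references (ratings 7 and 8) are excluded before the distance ranking, mimicking how the observer's rating narrows the search.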
Virulotyping of Shigella spp. isolated from pediatric patients in Tehran, Iran.
Ranjbar, Reza; Bolandian, Masomeh; Behzadi, Payam
2017-03-01
Shigellosis is a considerable infectious disease with high morbidity and mortality among children worldwide. In this survey the prevalence of four important virulence genes, including ial, ipaH, set1A, and set1B, was investigated among Shigella strains, and the related gene profiles were identified. In the present investigation, stool specimens were collected from children who were referred to two hospitals in Tehran, Iran. The samples were collected over 3 years (2008-2010) from children suspected of having shigellosis. Shigella spp. were identified through microbiological and serological tests and then subjected to PCR for virulotyping. Shigella sonnei ranked first (65.5%), followed by Shigella flexneri (25.9%), Shigella boydii (6.9%), and Shigella dysenteriae (1.7%). The ial gene was the most frequent virulence gene among the isolated bacterial strains, followed by ipaH, set1B, and set1A. S. flexneri possessed all of the studied virulence genes (ial 65.51%, ipaH 58.62%, set1A 12.07%, and set1B 22.41%). Moreover, the pattern of virulence gene profiles, including ial, ial-ipaH, ial-ipaH-set1B, and ial-ipaH-set1B-set1A, was identified for the isolated Shigella spp. strains. The pattern of virulence genes has changed in the strains of Shigella isolated in this study, with the ial gene ranking first and ipaH second.
Tracking techniques for space shuttle rendezvous
NASA Technical Reports Server (NTRS)
1975-01-01
The space shuttle rendezvous radar has a requirement to track cooperative and non-cooperative targets. For this reason the Lunar Module (LM) Rendezvous Radar was modified to incorporate the capability of tracking a non-cooperative target. The modifications are discussed. All modifications except those relating to frequency diversity were completed, and system tests were performed to confirm proper performance in the non-cooperative mode. Frequency diversity was added to the radar and to the special test equipment, and then system tests were performed. This last set of tests included re-running the tests of the non-cooperative mode without frequency diversity, followed by tests with frequency diversity and tests of operation in the original cooperative mode.
SeaWiFS technical report series. Volume 15: The simulated SeaWiFS data set, version 2
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Gregg, Watson W.; Patt, Frederick S.; Woodward, Robert H.
1994-01-01
This document describes the second version of the simulated SeaWiFS data set. A realistic simulated data set is essential for mission readiness preparations and can potentially assist in all phases of ground support for a future mission. The second version improves on the first version primarily through additional realism and complexity. This version incorporates a representation of virtually every aspect of the flight mission. Thus, it provides a high-fidelity data set for testing several aspects of the ground system, including data acquisition, data processing, data transfers, calibration and validation, quality control, and mission operations. The data set is constructed for a seven-day period, 25-31 March 1994. Specific features of the data set include Global Area Coverage (GAC), recorded Local Area Coverage (LAC), and realtime High Resolution Picture Transmission (HRPT) data for the seven-day period. A realistic orbit, which is propagated using a Brouwer-Lyddane model with drag, is used to simulate orbit positions. The simulated data corresponds to the command schedule based on the orbit for this seven-day period. It includes total (at-satellite) radiances not only for ocean, but for land, clouds, and ice. The simulation also utilizes a high-resolution land-sea mask. It includes the April 1993 SeaWiFS spectral responses and sensor saturation responses. The simulation is formatted according to July 1993 onboard data structures, which include corresponding telemetry (instrument and spacecraft) data. The methods are described and some examples of the output are given. The instrument response functions made available in April 1993 have been used to produce the Version 2 simulated data. These response functions will change as part of the sensor improvements initiated in July-August 1993.
Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang
2012-11-01
Outlier samples strongly influence the precision of the calibration model in soluble solids content measurement of melons using NIR spectra. According to the possible sources of outlier samples, three methods (predicted concentration residual test; Chauvenet test; leverage and studentized residual test) were used to discriminate these outliers. Nine suspicious outliers were detected from the calibration set, which included 85 fruit samples. Considering that the 9 suspicious outlier samples might contain some non-outlier samples, they were reclaimed into the model one by one to see whether they influenced the model and prediction precision. In this way, 5 samples that were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and a root mean square error of calibration (RMSEC) of 0.6010 degrees Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 degrees Brix. This model performed better than the model developed with no outliers eliminated from the calibration set (r = 0.797, RMSEC = 0.849 degrees Brix, RMSEP = 1.19 degrees Brix), and was more representative and stable than the model with all 9 samples eliminated from the calibration set (r = 0.892, RMSEC = 0.605 degrees Brix, RMSEP = 0.862 degrees Brix).
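One of the three outlier screens, the Chauvenet test, is straightforward to sketch: a sample is flagged when the expected number of observations as extreme as it, under a normal model, falls below 0.5. The implementation below is a generic illustration with made-up measurements, not the authors' code:

```python
import math

def chauvenet(values):
    """Flag outliers by Chauvenet's criterion: reject a point if the expected
    count of samples at least as extreme (n * two-sided tail prob.) is < 0.5."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    outliers = []
    for v in values:
        z = abs(v - mean) / sd
        p = math.erfc(z / math.sqrt(2))  # two-sided normal tail probability
        if n * p < 0.5:
            outliers.append(v)
    return outliers

# Five consistent readings and one implausible one (illustrative values)
print(chauvenet([10.1, 9.9, 10.0, 10.2, 9.8, 25.0]))  # [25.0]
```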
An algorithm for deriving core magnetic field models from the Swarm data set
NASA Astrophysics Data System (ADS)
Rother, Martin; Lesur, Vincent; Schachtschneider, Reyko
2013-11-01
In view of an optimal exploitation of the Swarm data set, we have prepared and tested software dedicated to the determination of accurate core magnetic field models and of the Euler angles between the magnetic sensors and the satellite reference frame. The dedicated core field model estimation is derived directly from the GFZ Reference Internal Magnetic Model (GRIMM) inversion and modeling family. The data selection techniques and the model parameterizations are similar to those used for the derivation of the second (Lesur et al., 2010) and third versions of GRIMM, although the usage of observatory data is not planned in the framework of the application to Swarm. The regularization technique applied during the inversion process smoothes the magnetic field model in time. The algorithm to estimate the Euler angles is also derived from the CHAMP studies. The inversion scheme includes Euler angle determination with a quaternion representation for describing the rotations. It has been built to handle possible weak time variations of these angles. The modeling approach and software were initially validated on a simple, noise-free, synthetic data set and on CHAMP vector magnetic field measurements. We present results of test runs applied to the synthetic Swarm test data set.
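The quaternion representation of the sensor-to-spacecraft rotation can be illustrated with the standard ZYX (yaw-pitch-roll) Euler-to-quaternion conversion. This is a generic sketch; the rotation convention actually used in the Swarm processing chain may differ:

```python
import math

def euler_to_quaternion(yaw, pitch, roll):
    """Unit quaternion (w, x, y, z) for a ZYX (yaw-pitch-roll) rotation
    sequence, angles in radians."""
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    w = cr * cp * cy + sr * sp * sy
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    return (w, x, y, z)

q = euler_to_quaternion(0.0, 0.0, 0.0)  # identity rotation -> (1, 0, 0, 0)
```

Unlike Euler angles, the quaternion parameterization has no gimbal-lock singularity, which is one practical reason inversion schemes prefer it when the angles are allowed to vary weakly in time.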
SELDI-TOF-MS proteomic profiling of serum, urine, and amniotic fluid in neural tube defects.
Liu, Zhenjiang; Yuan, Zhengwei; Zhao, Qun
2014-01-01
Neural tube defects (NTDs) are common birth defects for which specific biomarkers are needed. The purpose of this pilot study is to determine whether protein profiling in NTD-mothers differs from normal controls using SELDI-TOF-MS. The ProteinChip Biomarker System was used to evaluate 82 maternal serum samples, 78 urine samples and 76 amniotic fluid samples. The validity of the classification tree was then challenged with a blind test set including another 20 NTD-mothers and 18 controls in serum samples, another 19 NTD-mothers and 17 controls in urine samples, and another 20 NTD-mothers and 17 controls in amniotic fluid samples. Eight proteins detected in serum samples were up-regulated and four proteins were down-regulated in the NTD group. Four proteins detected in urine samples were up-regulated and one protein was down-regulated in the NTD group. Six proteins detected in amniotic fluid samples were up-regulated and one protein was down-regulated in the NTD group. The classification tree for serum samples separated NTDs from healthy individuals, achieving a sensitivity of 91% and a specificity of 97% in the training set, and a sensitivity of 90%, a specificity of 97% and a positive predictive value of 95% in the test set. The classification tree for urine samples separated NTDs from controls, achieving a sensitivity of 95% and a specificity of 94% in the training set, and a sensitivity of 89%, a specificity of 82% and a positive predictive value of 85% in the test set. The classification tree for amniotic fluid samples separated NTDs from controls, achieving a sensitivity of 93% and a specificity of 89% in the training set, and a sensitivity of 90%, a specificity of 88% and a positive predictive value of 90% in the test set. These results suggest that SELDI-TOF-MS provides an additional method for the detection of NTD pregnancies.
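The reported sensitivity, specificity, and positive predictive value all follow from a 2x2 confusion table. The sketch below uses hypothetical counts chosen to roughly match the serum test set (20 NTD-mothers, 18 controls); the actual per-cell counts are not given in the abstract:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and positive predictive value (PPV)
    from true/false positive and negative counts."""
    sensitivity = tp / (tp + fn)   # fraction of cases correctly flagged
    specificity = tn / (tn + fp)   # fraction of controls correctly cleared
    ppv = tp / (tp + fp)           # fraction of positive calls that are cases
    return sensitivity, specificity, ppv

# Hypothetical counts: 18 true positives, 2 false negatives,
# 17 true negatives, 1 false positive.
sens, spec, ppv = diagnostic_metrics(tp=18, fp=1, tn=17, fn=2)
```

With these counts, sensitivity is 90% and both specificity and PPV are close to 95%, comparable to the serum test-set figures reported above.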
Woicik, Patricia A.; Urban, Catherine; Alia-Klein, Nelly; Henry, Ashley; Maloney, Thomas; Telang, Frank; Wang, Gene-Jack; Volkow, Nora D.; Goldstein, Rita Z.
2011-01-01
The ability to adapt behavior in a changing environment is necessary for humans to achieve their goals and can be measured in the lab with tests of rule-based switching. Disease models, such as cocaine addiction, have revealed that alterations in dopamine interfere with adaptive set switching, culminating in perseveration. We explore perseverative behavior in individuals with cocaine use disorders (CUD) and healthy controls (CON) during performance of the Wisconsin Card Sorting Test (WCST) (N = 107 in each group). By examining perseverative errors within each of the 6 blocks of the WCST, we uniquely test two forms of set switching that are differentiated by either the presence (extradimensional set shifting (EDS) – first 3 blocks) or absence (task-set switching – last 3 blocks) of contingency learning. We also explore relationships between perseveration and select cognitive and drug use factors including verbal learning and memory, trait inhibitory control, motivational state, and urine status for cocaine (in CUD). Results indicate greater impairment for CUD than CON on the WCST, even in higher performing CUD who completed all 6 blocks of the WCST. Block by block analysis conducted on completers’ scores indicate a tendency for greater perseveration in CUD than CON but only during the first task-set switch; no such deficits were observed during EDS. This task-set switching impairment was modestly associated with two indices of immediate recall (r = −.32, −.29) and urine status for cocaine [t (134) = 2.3, p <.03]. By distinguishing these two forms of switching on the WCST, the current study reveals a neurocognitive context (i.e. initial stage of task-set switching) implicit in the WCST that possibly relies upon intact dopaminergic function, but that is impaired in CUD, as associated with worse recall and possibly withdrawal from cocaine. 
Future studies should investigate whether dopaminergically innervated pathways alone, or in combination with other monoamines, underlie this implicit neurocognitive process in the WCST. PMID:21392517
Bruxvoort, Katia J; Leurent, Baptiste; Chandler, Clare I R; Ansah, Evelyn K; Baiden, Frank; Björkman, Anders; Burchett, Helen E D; Clarke, Siân E; Cundill, Bonnie; DiLiberto, Debora D; Elfving, Kristina; Goodman, Catherine; Hansen, Kristian S; Kachur, S Patrick; Lal, Sham; Lalloo, David G; Leslie, Toby; Magnussen, Pascal; Mangham-Jefferies, Lindsay; Mårtensson, Andreas; Mayan, Ismail; Mbonye, Anthony K; Msellem, Mwinyi I; Onwujekwe, Obinna E; Owusu-Agyei, Seth; Rowland, Mark W; Shakely, Delér; Staedke, Sarah G; Vestergaard, Lasse S; Webster, Jayne; Whitty, Christopher J M; Wiseman, Virginia L; Yeung, Shunmay; Schellenberg, David; Hopkins, Heidi
2017-10-01
Since 2010, the World Health Organization has been recommending that all suspected cases of malaria be confirmed with parasite-based diagnosis before treatment. These guidelines represent a paradigm shift away from presumptive antimalarial treatment of fever. Malaria rapid diagnostic tests (mRDTs) are central to implementing this policy, intended to target artemisinin-based combination therapies (ACT) to patients with confirmed malaria and to improve management of patients with nonmalarial fevers. The ACT Consortium conducted ten linked studies, eight in sub-Saharan Africa and two in Afghanistan, to evaluate the impact of mRDT introduction on case management across settings that vary in malaria endemicity and healthcare provider type. This synthesis includes 562,368 outpatient encounters (study size range 2,400-432,513). mRDTs were associated with significantly lower ACT prescription (range 8-69% versus 20-100%). Prescribing did not always adhere to malaria test results; in several settings, ACTs were prescribed to more than 30% of test-negative patients or to fewer than 80% of test-positive patients. Either an antimalarial or an antibiotic was prescribed for more than 75% of patients across most settings; lower antimalarial prescription for malaria test-negative patients was partly offset by higher antibiotic prescription. Symptomatic management with antipyretics alone was prescribed for fewer than 25% of patients across all scenarios. In community health worker and private retailer settings, mRDTs increased referral of patients to other providers. This synthesis provides an overview of shifts in case management that may be expected with mRDT introduction and highlights areas of focus to improve design and implementation of future case management programs.
Inzaule, Seth C; Hamers, Ralph L; Paredes, Roger; Yang, Chunfu; Schuurman, Rob; Rinke de Wit, Tobias F
2017-01-01
Global scale-up of antiretroviral treatment has dramatically changed the prospects of HIV/AIDS disease, rendering life-long chronic care and treatment a reality for millions of HIV-infected patients. Affordable technologies to monitor antiretroviral treatment are needed to ensure the long-term durability of the limited available drug regimens. HIV drug resistance tests can complement existing strategies in optimizing clinical decision-making for patients with treatment failure, in addition to facilitating population-based surveillance of HIV drug resistance. This review assesses the current landscape of HIV drug resistance technologies and discusses the strengths and limitations of existing assays available for expanding testing in resource-limited settings. These include sequencing-based assays (Sanger sequencing assays and next-generation sequencing), point mutation assays, and genotype-free data-based prediction systems. Sanger assays are currently considered the gold-standard genotyping technology but are available at only a limited number of reference and regional laboratories in resource-limited settings, and high capital and test costs have limited their wider expansion. Point mutation assays present opportunities for simplified laboratory assays, but HIV genetic variability, extensive codon redundancy at or near the mutation target sites, and limited multiplexing capability have restricted their utility. Next-generation sequencing, despite high costs, may have the potential to reduce testing costs significantly through multiplexing in high-throughput facilities, although the level of bioinformatics expertise required for data analysis is still complex and expensive and lacks standardization. Web-based genotype-free prediction systems may provide enhanced antiretroviral treatment decision-making without the need for laboratory testing, but require further clinical field evaluation and implementation science research in resource-limited settings.
Pasotti, Lorenzo; Bellato, Massimo; Casanova, Michela; Zucca, Susanna; Cusella De Angelis, Maria Gabriella; Magni, Paolo
2017-01-01
The study of simplified, ad-hoc constructed model systems can help to elucidate whether quantitatively characterized biological parts can be effectively re-used in composite circuits to yield predictable functions. Synthetic systems designed from the bottom up can enable the building of complex interconnected devices via a rational approach, supported by mathematical modelling. However, such a process is affected by different, usually non-modelled, sources of unpredictability, like cell burden. Here, we analyzed a set of synthetic transcriptional cascades in Escherichia coli. We aimed to test the predictive power of a simple Hill function activation/repression model (no-burden model, NBM) and of a recently proposed model including Hill functions and the modulation of protein expression by cell load (burden model, BM). To test the bottom-up approach, the circuit collection was divided into training and test sets, used to learn individual component functions and to test the predicted output of interconnected circuits, respectively. Among the constructed configurations, two test set circuits showed unexpected logic behaviour. Both the NBM and BM were able to predict the quantitative output of interconnected devices with expected behaviour, but only the BM was also able to predict the output of one circuit with unexpected behaviour. Moreover, considering training and test set data together, the BM captures circuit output with higher accuracy than the NBM, which is unable to capture the experimental output exhibited by some of the circuits even qualitatively. Finally, resource usage parameters, estimated via the BM, guided the successful construction of new corrected variants of the two circuits showing unexpected behaviour. Superior descriptive and predictive capabilities were achieved by modelling resource limitation, but further efforts are needed to improve the accuracy of models for biological engineering.
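The no-burden model referenced above describes each cascade stage with steady-state Hill functions; a minimal sketch of such activation and repression terms (function names and parameter values are illustrative, not taken from the paper):

```python
def hill_activation(x, k, n, y_min, y_max):
    # Output rises from y_min toward y_max as activator concentration x grows;
    # k is the half-saturation constant, n the Hill coefficient.
    return y_min + (y_max - y_min) * x**n / (k**n + x**n)

def hill_repression(x, k, n, y_min, y_max):
    # Output falls from y_max toward y_min as repressor concentration x grows.
    return y_max - (y_max - y_min) * x**n / (k**n + x**n)

# At x = k, both curves sit exactly halfway between y_min and y_max.
midpoint = hill_activation(1.0, k=1.0, n=2.0, y_min=0.0, y_max=10.0)
```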
An implementation of differential evolution algorithm for inversion of geoelectrical data
NASA Astrophysics Data System (ADS)
Balkaya, Çağlayan
2013-11-01
Differential evolution (DE), a population-based evolutionary algorithm (EA), has been implemented to invert self-potential (SP) and vertical electrical sounding (VES) data sets. The algorithm uses three operators, mutation, crossover and selection, similar to the genetic algorithm (GA). Mutation is the most important operator for the success of DE. Three commonly used mutation strategies, DE/best/1 (strategy 1), DE/rand/1 (strategy 2) and DE/rand-to-best/1 (strategy 3), were applied together with a binomial-type crossover. The evolution cycle of DE was realized without boundary constraints. For the test studies performed with SP data, in addition to both noise-free and noisy synthetic data sets, two field data sets observed over the sulfide ore body in the Malachite mine (Colorado) and over the ore bodies in the Neem-Ka Thana copper belt (India) were considered. VES test studies were carried out using synthetically produced resistivity data representing a three-layered earth model and a field data set example from Gökçeada (Turkey), which displays a seawater infiltration problem. The mutation strategies mentioned above were also extensively tested on both the synthetic and field data sets under consideration. Of these, strategy 1 was found to be the most effective strategy for parameter estimation, providing lower computational cost together with good accuracy. The solutions obtained by DE for the synthetic SP cases were quite consistent with those of particle swarm optimization (PSO), which is a more widely used population-based optimization algorithm than DE in geophysics. Estimated parameters of the SP and VES data were also compared with those obtained from the Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing (SA) without cooling to clarify uncertainties in the solutions. Comparison to the M-H algorithm shows that DE performs fast approximate posterior sampling for low-dimensional inverse geophysical problems.
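A minimal sketch of one generation of the DE/best/1 strategy with binomial crossover described above (minimisation form; the values of F and CR are common defaults, not necessarily the study's settings):

```python
import random

def de_best_1_step(pop, fitness, F=0.8, CR=0.9):
    """One DE generation: DE/best/1 mutation, binomial crossover, greedy
    selection, with no boundary constraints (as in the abstract)."""
    n, dim = len(pop), len(pop[0])
    best = min(pop, key=fitness)
    new_pop = []
    for i, target in enumerate(pop):
        r1, r2 = random.sample([j for j in range(n) if j != i], 2)
        # Mutation: best vector plus a scaled difference of two random vectors.
        mutant = [best[d] + F * (pop[r1][d] - pop[r2][d]) for d in range(dim)]
        # Binomial crossover: take each gene from the mutant with probability CR;
        # index j_rand guarantees at least one mutant gene survives.
        j_rand = random.randrange(dim)
        trial = [mutant[d] if (d == j_rand or random.random() < CR) else target[d]
                 for d in range(dim)]
        # Selection: keep whichever of target/trial has the better (lower) misfit.
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop
```

Iterating this step drives the population toward a minimum of the misfit function; because selection is greedy, the best individual never worsens between generations.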
Willis, Brian H; Hyde, Christopher J
2014-05-01
To determine a plausible estimate for a test's performance in a specific setting using a new method for selecting studies. It is shown how routine data from practice may be used to define an "applicable region" for studies in receiver operating characteristic space. After qualitative appraisal, studies are selected based on the probability that their study accuracy estimates arose from parameters lying in this applicable region. Three methods for calculating these probabilities are developed and used to tailor the selection of studies for meta-analysis. The Pap test applied to the UK National Health Service (NHS) Cervical Screening Programme provides a case example. The meta-analysis for the Pap test included 68 studies, but at most 17 studies were considered applicable to the NHS. For conventional meta-analysis, the sensitivity and specificity (with 95% confidence intervals) were estimated to be 72.8% (65.8, 78.8) and 75.4% (68.1, 81.5) compared with 50.9% (35.8, 66.0) and 98.0% (95.4, 99.1) from tailored meta-analysis using a binomial method for selection. Thus, for a cervical intraepithelial neoplasia (CIN) 1 prevalence of 2.2%, the post-test probability for CIN 1 would increase from 6.2% to 36.6% between the two methods of meta-analysis. Tailored meta-analysis provides a method for augmenting study selection based on the study's applicability to a setting. As such, the summary estimate is more likely to be plausible for a setting and could improve diagnostic prediction in practice. Copyright © 2014 Elsevier Inc. All rights reserved.
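The post-test probabilities quoted above follow from Bayes' rule applied to prevalence, sensitivity, and specificity; a minimal sketch reproducing the abstract's figures (small discrepancies come from rounding of the published inputs):

```python
def post_test_probability(prevalence, sensitivity, specificity):
    # P(disease | positive test) = p*sens / (p*sens + (1-p)*(1-spec))
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# CIN 1 prevalence 2.2%; conventional vs tailored meta-analysis estimates
conventional = post_test_probability(0.022, 0.728, 0.754)  # ~6.2%
tailored = post_test_probability(0.022, 0.509, 0.980)      # ~36%
```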
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armentrout, J.M.; Smith-Rouch, L.S.; Bowman, S.A.
1996-08-01
Numeric simulations based on integrated data sets enhance our understanding of depositional geometry and facilitate quantification of depositional processes. Numeric values tested against well-constrained geologic data sets can then be used in iterations testing each variable, and in predicting lithofacies distributions under various depositional scenarios using the principles of sequence stratigraphic analysis. The stratigraphic modeling software provides a broad spectrum of techniques for modeling and testing elements of the petroleum system. Using well-constrained geologic examples, variations in depositional geometry and lithofacies distributions between different tectonic settings (passive vs. active margin) and climate regimes (hothouse vs. icehouse) can provide insight into potential source rock and reservoir rock distribution, maturation timing, migration pathways, and trap formation. Two data sets are used to illustrate such variations: both include a seismic reflection profile calibrated by multiple wells. The first is a Pennsylvanian mixed carbonate-siliciclastic system in the Paradox basin, and the second a Pliocene-Pleistocene siliciclastic system in the Gulf of Mexico. Numeric simulations result in geometry and facies distributions consistent with those interpreted using the integrated stratigraphic analysis of the calibrated seismic profiles. An exception occurs in the Gulf of Mexico study where the simulated sediment thickness from 3.8 to 1.6 Ma within an upper slope minibasin was less than that mapped using a regional seismic grid. Regional depositional patterns demonstrate that this extra thickness was probably sourced from out of the plane of the modeled transect, illustrating the necessity for three-dimensional constraints on two-dimensional modeling.
Rajgaria, R.; Wei, Y.; Floudas, C. A.
2010-01-01
An integer linear optimization model is presented to predict residue contacts in β, α + β, and α/β proteins. The total energy of a protein is expressed as sum of a Cα – Cα distance dependent contact energy contribution and a hydrophobic contribution. The model selects contacts that assign lowest energy to the protein structure while satisfying a set of constraints that are included to enforce certain physically observed topological information. A new method based on hydrophobicity is proposed to find the β-sheet alignments. These β-sheet alignments are used as constraints for contacts between residues of β-sheets. This model was tested on three independent protein test sets and CASP8 test proteins consisting of β, α + β, α/β proteins and was found to perform very well. The average accuracy of the predictions (separated by at least six residues) was approximately 61%. The average true positive and false positive distances were also calculated for each of the test sets and they are 7.58 Å and 15.88 Å, respectively. Residue contact prediction can be directly used to facilitate the protein tertiary structure prediction. This proposed residue contact prediction model is incorporated into the first principles protein tertiary structure prediction approach, ASTRO-FOLD. The effectiveness of the contact prediction model was further demonstrated by the improvement in the quality of the protein structure ensemble generated using the predicted residue contacts for a test set of 10 proteins. PMID:20225257
Kushniruk, A; Nohr, C; Jensen, S; Borycki, E M
2013-01-01
The objective of this paper is to explore human factors approaches to understanding the use of health information technology (HIT) by extending usability engineering approaches to include analysis of the impact of clinical context through use of clinical simulations. Methods discussed are considered on a continuum from traditional laboratory-based usability testing to clinical simulations. Clinical simulations can be conducted in a simulation laboratory and they can also be conducted in real-world settings. The clinical simulation approach attempts to bring the dimension of clinical context into stronger focus. This involves testing of systems with representative users doing representative tasks, in representative settings/environments. Application of methods where realistic clinical scenarios are used to drive the study of users interacting with systems under realistic conditions and settings can lead to identification of problems and issues with systems that may not be detected using traditional usability engineering methods. In conducting such studies, careful consideration is needed in creating ecologically valid test scenarios. The evidence obtained from such evaluation can be used to improve both the usability and safety of HIT. In addition, recent work has shown that clinical simulations, in particular those conducted in-situ, can lead to considerable benefits when compared to the costs of running such studies. In order to bring context of use into the testing of HIT, clinical simulation, involving observing representative users carrying out tasks in representative settings, holds considerable promise.
NASA Astrophysics Data System (ADS)
Datta, Nilanjana; Pautrat, Yan; Rouzé, Cambyse
2016-06-01
Quantum Stein's lemma is a cornerstone of quantum statistics and concerns the problem of correctly identifying a quantum state, given the knowledge that it is one of two specific states (ρ or σ). It was originally derived in the asymptotic i.i.d. setting, in which arbitrarily many (say, n) identical copies of the state (ρ⊗n or σ⊗n) are considered to be available. In this setting, the lemma states that, for any given upper bound on the probability αn of erroneously inferring the state to be σ, the probability βn of erroneously inferring the state to be ρ decays exponentially in n, with the rate of decay converging to the relative entropy of the two states. The second order asymptotics for quantum hypothesis testing, which establishes the speed of convergence of this rate of decay to its limiting value, was derived in the i.i.d. setting independently by Tomamichel and Hayashi, and Li. We extend this result to settings beyond i.i.d. Examples of these include Gibbs states of quantum spin systems (with finite-range, translation-invariant interactions) at high temperatures, and quasi-free states of fermionic lattice gases.
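In symbols, the i.i.d. statement and its second-order refinement (as derived by Tomamichel-Hayashi and Li) can be written as follows, where Φ⁻¹ is the inverse of the standard normal distribution function:

```latex
% Quantum Stein's lemma: for any fixed bound \alpha_n \le \epsilon on the
% type-I error, the optimal type-II error \beta_n^*(\epsilon) satisfies
\lim_{n\to\infty} -\tfrac{1}{n}\log \beta_n^*(\epsilon) = D(\rho\|\sigma),
\qquad D(\rho\|\sigma) = \mathrm{Tr}\,\rho\,(\log\rho - \log\sigma).

% Second-order asymptotics: the rate approaches the relative entropy at speed
% 1/\sqrt{n}, governed by the quantum information variance V(\rho\|\sigma)
-\log \beta_n^*(\epsilon) = n\,D(\rho\|\sigma)
  + \sqrt{n\,V(\rho\|\sigma)}\,\Phi^{-1}(\epsilon) + O(\log n),
\qquad V(\rho\|\sigma) = \mathrm{Tr}\,\rho\,(\log\rho - \log\sigma)^2 - D(\rho\|\sigma)^2.
```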
Consistency of response and image recognition, pulmonary nodules
Liu, M A Q; Galvan, E; Bassett, R; Murphy, W A; Matamoros, A; Marom, E M
2014-01-01
Objective: To investigate the effect of recognition of a previously encountered radiograph on consistency of response in localizing pulmonary nodules. Methods: 13 radiologists interpreted 40 radiographs each to locate pulmonary nodules. A few days later, they again interpreted 40 radiographs. Half of the images in the second set were new. We asked the radiologists whether each image had been in the first set. We used Fisher's exact test and the Kruskal–Wallis test to evaluate the correlation between recognition of an image and consistency in its interpretation. We evaluated the data using all possible recognition levels (definitely, probably or possibly included vs definitely, probably or possibly not included), by collapsing the recognition levels into two, and by eliminating the "possibly included" and "possibly not included" scores. Results: With all but one of six methods of looking at the data, there was no significant correlation between consistency in interpretation and recognition of the image. When the possibly included and possibly not included scores were eliminated, there was borderline statistical significance (p = 0.04), with slightly greater consistency in the interpretation of recognized than of non-recognized images. Conclusion: We found no convincing evidence that radiologists' recognition of images in an observer performance study affects their interpretation on a second encounter. Advances in knowledge: Conscious recognition of chest radiographs did not result in a greater degree of consistency in the tested interpretation than in the interpretation of images that were not recognized. PMID:24697724
A novel feature extraction approach for microarray data based on multi-algorithm fusion
Jiang, Zhu; Xu, Rong
2015-01-01
Feature extraction is one of the most important and effective methods of dimension reduction in data mining, given the emergence of high-dimensional data such as microarray gene expression data. Feature extraction for gene selection mainly serves two purposes. One is to identify certain disease-related genes. The other is to find a compact set of discriminative genes to build a pattern classifier with reduced complexity and improved generalization capabilities. Depending on the purpose of gene selection, two types of feature extraction algorithms, ranking-based feature extraction and set-based feature extraction, are employed in microarray gene expression data analysis. In ranking-based feature extraction, features are generally evaluated on an individual basis, without considering inter-relationships between features, while set-based feature extraction evaluates features based on their role in a feature set by taking into account dependency between features. Like learning methods, feature extraction has a problem with generalization ability, namely robustness. However, the issue of robustness is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select features from the sample set, the proposed approach is able to improve feature extraction performance. The new approach is tested against gene expression datasets including Colon cancer, CNS, DLBCL, and Leukemia data. The testing results show that the performance of this algorithm is better than existing solutions. PMID:25780277
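The abstract does not spell out the fusion scheme, so the following is only a hypothetical sketch of one simple way to fuse several ranking-based feature extractors, by averaging each feature's rank across algorithms:

```python
def fused_ranking(score_lists):
    """Combine several per-algorithm feature scores into one ranking by
    averaging each feature's rank (lower mean rank = stronger feature)."""
    n_features = len(score_lists[0])
    mean_rank = [0.0] * n_features
    for scores in score_lists:
        # Rank features by descending score within this algorithm.
        order = sorted(range(n_features), key=lambda i: -scores[i])
        for rank, feature in enumerate(order):
            mean_rank[feature] += rank / len(score_lists)
    # Final ranking: features sorted by their mean rank across all algorithms.
    return sorted(range(n_features), key=lambda i: mean_rank[i])
```

A feature that every algorithm ranks highly stays near the top, which is one way such fusion can make the selected gene set more robust than any single ranker.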
Maples, Jessica L; Carter, Nathan T; Few, Lauren R; Crego, Cristina; Gore, Whitney L; Samuel, Douglas B; Williamson, Rachel L; Lynam, Donald R; Widiger, Thomas A; Markon, Kristian E; Krueger, Robert F; Miller, Joshua D
2015-12-01
The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) includes an alternative model of personality disorders (PDs) in Section III, consisting in part of a pathological personality trait model. To date, the 220-item Personality Inventory for DSM-5 (PID-5; Krueger, Derringer, Markon, Watson, & Skodol, 2012) is the only extant self-report instrument explicitly developed to measure this pathological trait model. The present study used item response theory-based analyses in a large sample (n = 1,417) to investigate whether a reduced set of 100 items could be identified from the PID-5 that could measure the 25 traits and 5 domains. This reduced set of PID-5 items was then tested in a community sample of adults currently receiving psychological treatment (n = 109). Across a wide range of criterion variables including NEO PI-R domains and facets, DSM-5 Section II PD scores, and externalizing and internalizing outcomes, the correlational profiles of the original and reduced versions of the PID-5 were nearly identical (rICC = .995). These results provide strong support for the hypothesis that an abbreviated set of PID-5 items can be used to reliably, validly, and efficiently assess these personality disorder traits. The ability to assess the DSM-5 Section III traits using only 100 items has important implications in that it suggests these traits could still be measured in settings in which assessment-related resources (e.g., time, compensation) are limited. (c) 2015 APA, all rights reserved.
A computer program (MACPUMP) for interactive aquifer-test analysis
Day-Lewis, F. D.; Person, M.A.; Konikow, Leonard F.
1995-01-01
This report introduces MACPUMP (Version 1.0), an aquifer-test-analysis package for use with Macintosh computers. The report outlines the input-data format, describes the solutions encoded in the program, explains the menu items, and offers a tutorial illustrating the use of the program. The package reads list-directed aquifer-test data from a file, plots the data to the screen, generates and plots type curves for several different test conditions, and allows mouse-controlled curve matching. MACPUMP features pull-down menus, a simple text viewer for displaying data files, and optional on-line help windows. This version includes the analytical solutions for nonleaky and leaky confined aquifers, using both type-curve and straight-line methods, and for the analysis of single-well slug tests using type curves. An executable version of the code and sample input data sets are included on an accompanying floppy disk.
Kentala, E; Pyykkö, I; Auramo, Y; Juhola, M
1995-03-01
An interactive database has been developed to assist the diagnostic procedure for vertigo and to store the data. The database offers the possibility to split and reunite the collected information when needed. It contains detailed information about a patient's history, symptoms, and findings in otoneurologic, audiologic, and imaging tests. The symptoms are classified into sets of questions on vertigo (including postural instability), hearing loss and tinnitus, and provoking factors. Confounding disorders are screened. The otoneurologic tests involve saccades, smooth pursuit, posturography, and a caloric test. In addition, findings from specific antibody tests, clinical neurotologic tests, magnetic resonance imaging, brain stem audiometry, and electrocochleography are included. The input information can be applied to workups for vertigo in an expert system called ONE. The database assists its user in that the input of information is easy. It can be used not only for diagnostic purposes but is also beneficial for research, and in combination with the expert system it provides a tutorial guide for medical students.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-13
... audiences that FDA needs to design effective communication strategies, messages, and labels. These... group settings. Third, as evaluative research, it will allow FDA to ascertain the effectiveness of the... include contractor expenses for designing and conducting information collection activities, specifically...
The Economics of Information: A Classroom Experiment.
ERIC Educational Resources Information Center
Netusil, Noelwah R.; Haupert, Michael
1995-01-01
Describes an economics class experiment where students ranked the quality of baked pies according to limited information. The limited sets of information included brand name and packaging only, price only, advertising only, word-of-mouth, and taste test. Discusses signals of quality and consumer decisions. (MJP)
Writing and the Seven Intelligences.
ERIC Educational Resources Information Center
Grow, Gerald
In "Frames of Mind," Howard Gardner replaces the standard view of intelligence with the idea that human beings have several distinct intelligences. Using an elaborate set of criteria, including evidence from studies of brain damage, prodigies, developmental patterns, cross-cultural comparisons, and various kinds of tests, Gardner…
Posada, David
2006-01-01
ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
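The information-criterion arithmetic behind this kind of model selection is simple to sketch. The Python fragment below is an illustration only: the model names are real substitution models, but the likelihood scores and alignment length are invented, and ModelTest itself adds refinements (hierarchical likelihood ratio tests, AICc, model averaging) not shown here.

```python
import math

def aic(log_likelihood, k):
    # Akaike information criterion: AIC = 2k - 2 ln L
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    # Bayesian information criterion: BIC = k ln n - 2 ln L
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical candidate models: (name, ln L, number of free parameters)
candidates = [("JC69", -5230.1, 1), ("HKY85", -5101.7, 5), ("GTR+G", -5089.4, 9)]
n_sites = 1200  # alignment length, used as the sample size for BIC

best_aic = min(candidates, key=lambda m: aic(m[1], m[2]))
best_bic = min(candidates, key=lambda m: bic(m[1], m[2], n_sites))
print("AIC-best model:", best_aic[0])
print("BIC-best model:", best_bic[0])  # BIC penalizes parameters more heavily
```

With these invented scores the two criteria disagree, which is exactly the model-selection uncertainty the server's output statistics are meant to quantify.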
Schroeder, Lee F; Robilotti, Elizabeth; Peterson, Lance R; Banaei, Niaz; Dowdy, David W
2014-02-01
Clostridium difficile infection (CDI) is the most common cause of infectious diarrhea in health care settings, and for patients presumed to have CDI, their isolation while awaiting laboratory results is costly. Newer rapid tests for CDI may reduce this burden, but the economic consequences of different testing algorithms remain unexplored. We used decision analysis from the hospital perspective to compare multiple CDI testing algorithms for adult inpatients with suspected CDI, assuming patient management according to laboratory results. CDI testing strategies included combinations of on-demand PCR (odPCR), batch PCR, lateral-flow diagnostics, plate-reader enzyme immunoassay, and direct tissue culture cytotoxicity. In the reference scenario, algorithms incorporating rapid testing were cost-effective relative to nonrapid algorithms. For every 10,000 symptomatic adults, relative to a strategy of treating nobody, lateral-flow glutamate dehydrogenase (GDH)/odPCR generated 831 true-positive results and cost $1,600 per additional true-positive case treated. Stand-alone odPCR was more effective and more expensive, identifying 174 additional true-positive cases at $6,900 per additional case treated. All other testing strategies were dominated by (i.e., more costly and less effective than) stand-alone odPCR or odPCR preceded by lateral-flow screening. A cost-benefit analysis (including estimated costs of missed cases) favored stand-alone odPCR in most settings but favored odPCR preceded by lateral-flow testing if a missed CDI case resulted in less than $5,000 of extended hospital stay costs and <2 transmissions, if lateral-flow GDH diagnostic sensitivity was >93%, or if the symptomatic carrier proportion among the toxigenic culture-positive cases was >80%. These results can aid guideline developers and laboratory directors who are considering rapid testing algorithms for diagnosing CDI. PMID:24478478
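The headline numbers in this abstract follow standard incremental cost-effectiveness arithmetic. A minimal sketch, reusing the quoted per-case figures purely to illustrate the ratio; this is not the study's decision model, and the total-cost figures are back-calculated assumptions:

```python
def icer(cost_a, effect_a, cost_b, effect_b):
    # Incremental cost-effectiveness ratio: extra cost per extra unit of effect
    return (cost_b - cost_a) / (effect_b - effect_a)

# Strategy A: lateral-flow GDH screen followed by odPCR confirmation
# Strategy B: stand-alone odPCR
# True-positive counts per 10,000 symptomatic adults come from the abstract;
# the total costs are back-calculated here for illustration only.
tp_a = 831
cost_a = tp_a * 1600            # $1,600 per true-positive case treated
tp_b = tp_a + 174               # stand-alone odPCR finds 174 additional cases
cost_b = cost_a + 174 * 6900    # at $6,900 per additional case treated

print(f"ICER of B vs. A: ${icer(cost_a, tp_a, cost_b, tp_b):,.0f} per additional case treated")
```

A strategy is "dominated", as several are in the abstract, when another strategy has both lower cost and higher effect, so its ICER need never be computed.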
Kosack, Cara S.; Shanks, Leslie; Beelaert, Greet; Benson, Tumwesigye; Savane, Aboubacar; Ng'ang'a, Anne; Bita, André; Zahinda, Jean-Paul B. N.; Fransen, Katrien
2017-01-01
Our objective was to evaluate the performance of HIV testing algorithms based on WHO recommendations, using data from specimens collected at six HIV testing and counseling sites in sub-Saharan Africa (Conakry, Guinea; Kitgum and Arua, Uganda; Homa Bay, Kenya; Douala, Cameroon; Baraka, Democratic Republic of Congo). A total of 2,780 samples, including 1,306 HIV-positive samples, were included in the analysis. HIV testing algorithms were designed using Determine as a first test. Second and third rapid diagnostic tests (RDTs) were selected based on site-specific performance, adhering where possible to the WHO-recommended minimum requirements of ≥99% sensitivity and specificity. The threshold for specificity was reduced to 98% or 96% if necessary. We also simulated algorithms consisting of one RDT followed by a simple confirmatory assay. The positive predictive values (PPV) of the simulated algorithms ranged from 75.8% to 100% using strategies recommended for high-prevalence settings, 98.7% to 100% using strategies recommended for low-prevalence settings, and 98.1% to 100% using a rapid test followed by a simple confirmatory assay. Although we were able to design algorithms that met the recommended PPV of ≥99% in five of six sites using the applicable high-prevalence strategy, options were often very limited due to suboptimal performance of individual RDTs and to shared falsely reactive results. These results underscore the impact of the sequence of HIV tests and of shared false-reactivity data on algorithm performance. Where it is not possible to identify tests that meet WHO-recommended specifications, the low-prevalence strategy may be more suitable. PMID:28747371
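The dependence of algorithm PPV on prevalence and test accuracy can be sketched with Bayes' rule. The sketch below assumes conditionally independent tests, which is precisely the assumption the abstract shows can fail when RDTs share falsely reactive specimens; the accuracy figures are the WHO minimum requirements quoted above, not measured values for any real RDT.

```python
def ppv_serial(prevalence, tests):
    # PPV after every test in a serial algorithm is reactive, assuming the
    # tests are conditionally independent (shared false reactivity violates this).
    p_pos_dis, p_pos_no = 1.0, 1.0
    for sensitivity, specificity in tests:
        p_pos_dis *= sensitivity
        p_pos_no *= (1 - specificity)
    joint_tp = p_pos_dis * prevalence
    return joint_tp / (joint_tp + p_pos_no * (1 - prevalence))

rdt = (0.99, 0.99)  # WHO minimum sensitivity/specificity, not a measured RDT
print(round(ppv_serial(0.01, [rdt]), 3))       # one test at 1% prevalence
print(round(ppv_serial(0.01, [rdt, rdt]), 3))  # two-test serial algorithm
```

At 1% prevalence a single 99%/99% test yields a PPV of only 0.5, while a second independent reactive test raises it to 0.99, which is why serial algorithms are recommended and why correlated false reactivity is so damaging.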
Lorenc, Theo; Marrero-Guillamón, Isaac; Aggleton, Peter; Cooper, Chris; Llewellyn, Alexis; Lehmann, Angela; Lindsay, Catriona
2011-06-01
What interventions are effective and cost-effective in increasing the uptake of HIV testing among men who have sex with men (MSM)? A systematic review was conducted of the following databases: AEGIS, ASSIA, BL Direct, BNI, Centre for Reviews and Dissemination, Cochrane Database of Systematic Reviews, CINAHL, Current Contents Connect, EconLit, EMBASE, ERIC, HMIC, Medline, Medline In-Process, NRR, PsychINFO, Scopus, SIGLE, Social Policy and Practice, Web of Science, websites, journal hand-searching, citation chasing and expert recommendations. Prospective studies of the effectiveness or cost-effectiveness of interventions (randomised controlled trial (RCT), controlled trial, one-group or any economic analysis) were included if the intervention aimed to increase the uptake of HIV testing among MSM in a high-income (Organization for Economic Co-operation and Development) country. Quality was assessed and data were extracted using standardised tools. Results were synthesised narratively. Twelve effectiveness studies and one cost-effectiveness study were located, covering a range of intervention types. There is evidence that rapid testing and counselling in community settings (one RCT), and intensive peer counselling (one RCT), can increase the uptake of HIV testing among MSM. There are promising results regarding the introduction of opt-out testing in sexually transmitted infection clinics (two one-group studies). Findings regarding other interventions, including bundling HIV tests with other tests, peer outreach in community settings, and media campaigns, are inconclusive. Findings indicate several promising approaches to increasing HIV testing among MSM. However, there is limited evidence overall, and evidence for the effectiveness of key intervention types (particularly peer outreach and media campaigns) remains lacking.
Abuse behavior of high-power, lithium-ion cells
NASA Astrophysics Data System (ADS)
Spotnitz, R.; Franklin, J.
Published accounts of abuse testing of lithium-ion cells and components are summarized, including modeling work. From this summary, a set of exothermic reactions is selected with corresponding estimates of heats of reaction. Using this set of reactions, along with estimated kinetic parameters and designs for high-rate batteries, models for the abuse behavior (oven, short-circuit, overcharge, nail, crush) are developed. Finally, the models are used to determine that fluorinated binder plays a relatively unimportant role in thermal runaway.
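The modeling approach described, a set of exothermic reactions with estimated heats of reaction and kinetic parameters, typically reduces to integrating Arrhenius rate laws against a thermal balance. A deliberately minimal single-reaction sketch; every numeric value below is an illustrative placeholder, not a fitted cell parameter:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius(a_factor, e_act, temp_k):
    # Reaction rate constant k = A * exp(-Ea / (R*T))
    return a_factor * math.exp(-e_act / (R * temp_k))

# Toy oven-test self-heating: dc/dt = -k(T)*c, dT/dt = (dH/m*cp) * k(T)*c.
# All numbers are placeholders chosen only to make the loop well-behaved.
temp, conc = 400.0, 1.0             # K; normalized remaining reactant
a_factor, e_act = 1e10, 1.3e5       # 1/s; J/mol
heat_rxn, thermal_mass = 2e5, 1e3   # J per unit reactant; J/K
dt = 1.0                            # s
for _ in range(600):                # ten simulated minutes, explicit Euler
    rate = arrhenius(a_factor, e_act, temp) * conc
    conc -= rate * dt
    temp += rate * dt * heat_rxn / thermal_mass

print(f"after 10 min: T = {temp:.2f} K, reactant remaining = {conc:.4f}")
```

The published models couple several such reactions and add heat exchange with the surroundings; runaway occurs when the Arrhenius feedback (hotter cell, faster reactions, more heat) outpaces dissipation.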
No clear Y2K roadmap can be costly; may create serious liability.
1998-12-01
Hospitals without Year 2000 plans in place could be setting themselves up for costly equipment failure. Learn what the experts say about prioritizing to ensure patient care isn't disrupted, and find out about testing medical devices and equipment for Y2K compliance, and what to consider when setting up a Y2K analysis program in your hospital, including how to ensure vendors tell you the truth about whether their products are Y2K compliant.
2012-05-18
CAPE CANAVERAL, Fla. – A photographer sets up his remote camera at Space Launch Complex-40 on Cape Canaveral Air Force Station in Florida. In the background, final preparations are under way to launch the SpaceX Falcon 9 rocket. Liftoff with the Dragon capsule on top is set for 4:55 a.m. EDT on May 19. The launch will be the company's second demonstration test flight for NASA's Commercial Orbital Transportation Services Program, or COTS. During the flight, the capsule will conduct a series of check-out procedures to test and prove its systems, including rendezvous and berthing with the International Space Station. If the capsule performs as planned, the cargo and experiments it is carrying will be transferred to the station. The cargo includes food, water and provisions for the station’s Expedition crews, such as clothing, batteries and computer equipment. Under COTS, NASA has partnered with two aerospace companies to deliver cargo to the station. For more information, visit http://www.nasa.gov/spacex. Photo credit: NASA/Ken Thornsley
Towards non- and minimally instrumented, microfluidics-based diagnostic devices†
Weigl, Bernhard; Domingo, Gonzalo; LaBarre, Paul; Gerlach, Jay
2009-01-01
In many health care settings, it is uneconomical, impractical, or unaffordable to maintain and access a fully equipped diagnostics laboratory. Examples include home health care, developing-country health care, and emergency situations in which first responders are dealing with pandemics or biowarfare agent release. In those settings, fully disposable diagnostic devices that require no instrument support, reagent, or significant training are well suited. Although the only such technology to have found widespread adoption so far is the immunochromatographic rapid assay strip test, microfluidics holds promise to expand the range of assay technologies that can be performed in formats similar to that of a strip test. In this paper, we review progress toward development of disposable, low-cost, easy-to-use microfluidics-based diagnostics that require no instrument at all. We also present examples of microfluidic functional elements—including mixers, separators, and detectors—as well as complete microfluidic devices that function entirely without any moving parts and external power sources. PMID:19023463
Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation
NASA Technical Reports Server (NTRS)
Maine, R. E.
1981-01-01
MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and therefore applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines are described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.
System and Method for Modeling the Flow Performance Features of an Object
NASA Technical Reports Server (NTRS)
Jorgensen, Charles (Inventor); Ross, James (Inventor)
1997-01-01
The method and apparatus include a neural network for generating a model of an object in a wind tunnel from performance data on the object. The network is trained from test input signals (e.g., leading edge flap position, trailing edge flap position, angle of attack, and other geometric configurations, and power settings) and test output signals (e.g., lift, drag, pitching moment, or other performance features). In one embodiment, the neural network training method employs a modified Levenberg-Marquardt optimization technique. The model can be generated in real time as wind tunnel testing proceeds. Once trained, the model is used to estimate performance features associated with the aircraft given geometric configuration and/or power setting input. The invention can also be applied in other static flow-modeling applications in aerodynamics, hydrodynamics, fluid dynamics, and similar disciplines, for example the static testing of cars, sails, foils, propellers, keels, rudders, turbines, and fins in a wind tunnel, water trough, or other flowing medium.
Boumaza, S; Lebain, P; Brazo, P
2015-06-01
Tobacco smoking is the main cause of death among mentally ill persons. Since February 2007, smoking has been strictly forbidden in French covered and closed psychiatric wards. The fear of an increased violence risk induced by tobacco withdrawal is one of the most frequent arguments invoked against this tobacco ban. According to the literature, it seems that the implementation of this ban does not imply such a risk. All these studies compared inpatients' violence risk before and after the tobacco ban in the same psychiatric ward. We aimed to analyse the consequences of strict tobacco withdrawal on the violence risk in a retrospective study including patients hospitalised in a psychiatric intensive care unit of the university hospital of Caen during the same period. We compared clinical and demographic data and the violence risk between the smoker group (strict tobacco withdrawal with proposed tobacco substitution) and the non-smoker group (control group). In order to evaluate the violence risk, we used three indicators: a standardised scale (the Bröset Violence Checklist) and two assessments specific to the psychiatric intensive care setting ("the preventing risk protocol" and the "seclusion time"). The clinical and demographic data were compared using the chi-square test, the Fisher test, and the Mann-Whitney test, and the three violence risk indicators were compared with the Mann-Whitney test. Firstly, comparisons were conducted in the total population, and secondly (in order to eliminate a bias of tobacco substitution) in the subgroup directly hospitalised in the psychiatric intensive care setting. Finally, we analysed in the smoker group the statistical correlation between tobacco smoking intensity and violence risk intensity using a regression test. A population of 72 patients (50 male) was included; 45 were smokers (62.5%) and 27 were non-smokers.
No statistically significant differences were found in clinical and demographic data between smoker and non-smoker groups in the whole population, as well as in the subgroup directly hospitalised in the psychiatric intensive care setting. Whatever the violence risk indicators, no statistically significant difference was found between the smoker group and the non-smoker group in the total population, as well as in the subgroup directly hospitalised in the psychiatric intensive care setting. Moreover, no correlation was found between the tobacco smoking intensity and the violence risk intensity in the smoker group. Strict tobacco withdrawal does not appear to constitute a violence risk factor in psychiatric intensive care unit inpatients. However, further studies are needed to confirm these results. They should be prospective and they should take into account larger samples including patients hospitalised in non-intensive care psychiatric wards. Copyright © 2014 L’Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
Toussova, Olga V.; Verevochkin, Sergei V.; Barbour, Russell; Heimer, Robert; Kozlov, Andrei P.
2011-01-01
The purpose of this analysis was to estimate human immunodeficiency virus (HIV) prevalence and testing patterns among injection drug users (IDUs) in St. Petersburg, Russia. HIV prevalence among 387 IDUs in the sample was 50%. Correlates of HIV-positive serostatus included unemployment, recent unsafe injections, and history/current sexually transmitted infection. Seventy-six percent had been HIV tested, but only 22% of those who did not report HIV-positive serostatus had been tested in the past 12 months and received their test result. Correlates of this measure included recent doctor visit and having been in prison or jail among men. Among the 193 HIV-infected participants, 36% were aware of their HIV-positive serostatus. HIV prevalence is high and continuing to increase in this population. Adequate coverage of HIV testing has not been achieved, resulting in poor knowledge of positive serostatus. Efforts are needed to better understand motivating and deterring factors for HIV testing in this setting. PMID:18843531
NASA Technical Reports Server (NTRS)
Bennett, Robert M.; Walker, Charlotte E.
1999-01-01
Computational test cases have been selected from the data set for a clipped delta wing with a six-percent-thick circular-arc airfoil section that was tested in the NASA Langley Transonic Dynamics Tunnel. The test cases include parametric variation of static angle of attack, pitching oscillation frequency, trailing-edge control surface oscillation frequency, and Mach numbers from subsonic to low supersonic values. Tables and plots of the measured pressures are presented for each case. This report provides an early release of test cases that have been proposed for a document that supplements the cases presented in AGARD Report 702.
CALIBRATION OF SEISMIC ATTRIBUTES FOR RESERVOIR CHARACTERIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wayne D. Pennington; Horacio Acevedo; Aaron Green
2002-10-01
The project, ''Calibration of Seismic Attributes for Reservoir Characterization,'' is now complete. Our original proposed scope of work included detailed analysis of seismic and other data from two to three hydrocarbon fields; we have analyzed data from four fields at this level of detail, two additional fields with less detail, and one other 2D seismic line used for experimentation. We also included time-lapse seismic data with ocean-bottom cable recordings in addition to the originally proposed static field data. A large number of publications and presentations have resulted from this work, including several that are in final stages of preparation or printing; one of these is a chapter on ''Reservoir Geophysics'' for the new Petroleum Engineering Handbook from the Society of Petroleum Engineers. Major results from this project include a new approach to evaluating seismic attributes in time-lapse monitoring studies, evaluation of pitfalls in the use of point-based measurements and facies classifications, novel applications of inversion results, improved methods of tying seismic data to the wellbore, and a comparison of methods used to detect pressure compartments. Some of the data sets used are in the public domain, allowing other investigators to test our techniques or to improve upon them using the same data. From the public-domain Stratton data set we have demonstrated that apparent correlations between attributes derived along ''phantom'' horizons are artifacts of isopach changes; only if the interpreter understands that the interpretation is based on this correlation with bed thickening or thinning can reliable interpretations of channel horizons and facies be made.
From the public-domain Boonsville data set we developed techniques to use conventional seismic attributes, including seismic facies generated under various neural network procedures, to subdivide regional facies determined from logs into productive and non-productive subfacies, and we developed a method involving cross-correlation of seismic waveforms to provide a reliable map of the various facies present in the area. The Wamsutter data set led to the use of unconventional attributes including lateral incoherence and horizon-dependent impedance variations to indicate regions of former sand bars and current high pressure, respectively, and to evaluation of various upscaling routines. The Teal South data set has provided a surprising set of results, leading us to develop a pressure-dependent velocity relationship and to conclude that nearby reservoirs are undergoing a pressure drop in response to the production of the main reservoir, implying that oil is being lost through their spill points, never to be produced. Additional results were found using the public-domain Waha and Woresham-Bayer data set, and some tests of technologies were made using 2D seismic lines from Michigan and the western Pacific Ocean.
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1988-01-01
Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require difficult-to-obtain second-order information or do not return reliable estimates of the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
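The differencing idea is easy to illustrate. In the sketch below the optimizer is replaced by a closed-form stand-in (a toy problem invented here, not one of the report's test cases), and the sensitivity dx*/dp is estimated by a central difference over re-solves at perturbed parameter values:

```python
def solve(p):
    # Stand-in for a full optimizer run: argmin over x of (x - p**2)**2 + 0.1*x,
    # which has the closed-form solution x*(p) = p**2 - 0.05.
    return p ** 2 - 0.05

def sensitivity(solver, p, h=1e-4):
    # Central-difference estimate of dx*/dp, costing two solver runs per parameter
    return (solver(p + h) - solver(p - h)) / (2 * h)

p0 = 1.5
print(sensitivity(solve, p0))  # analytic sensitivity here is 2*p0 = 3.0
```

Each estimate costs two optimizer runs per parameter, which is why the report measures competitiveness in function evaluations; and if the active constraint set changes anywhere between p-h and p+h, the two solves lie on different solution branches and the difference quotient is meaningless, the situation the deflection algorithm is meant to handle.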
Boeing Smart Rotor Full-scale Wind Tunnel Test Data Report
NASA Technical Reports Server (NTRS)
Kottapalli, Sesi; Hagerty, Brandon; Salazar, Denise
2016-01-01
A full-scale helicopter smart material actuated rotor technology (SMART) rotor test was conducted in the USAF National Full-Scale Aerodynamics Complex 40- by 80-Foot Wind Tunnel at NASA Ames. The SMART rotor system is a five-bladed MD 902 bearingless rotor with active trailing-edge flaps. The flaps are actuated using piezoelectric actuators. Rotor performance, structural loads, and acoustic data were obtained over a wide range of rotor shaft angles of attack, thrust, and airspeeds. The primary test objective was to acquire unique validation data for the high-performance computing analyses developed under the Defense Advanced Research Project Agency (DARPA) Helicopter Quieting Program (HQP). Other research objectives included quantifying the ability of the on-blade flaps to achieve vibration reduction, rotor smoothing, and performance improvements. This data set of rotor performance and structural loads can be used for analytical and experimental comparison studies with other full-scale rotor systems and for analytical validation of computer simulation models. The purpose of this final data report is to document a comprehensive, high-quality data set that includes only data points where the flap was actively controlled and each of the five flaps behaved in a similar manner.
Goldstein, Elizabeth; Farquhar, Marybeth; Crofton, Christine; Darby, Charles; Garfinkel, Steven
2005-12-01
To describe the developmental process for the CAHPS Hospital Survey. A pilot was conducted in three states with 19,720 hospital discharges. A rigorous, multi-step process was used to develop the CAHPS Hospital Survey. It included a public call for measures, multiple Federal Register notices soliciting public input, a review of the relevant literature, meetings with hospitals, consumers, and survey vendors, cognitive interviews with consumers, a large-scale pilot test in three states, consumer testing, and numerous small-scale field tests. The current version of the CAHPS Hospital Survey has survey items in seven domains, two overall ratings of the hospital, and five items used for adjusting for the mix of patients across hospitals and for analytical purposes. The CAHPS Hospital Survey is a core set of questions that can be administered as a stand-alone questionnaire or combined with a broader set of hospital-specific items.
MUSCLE: multiple sequence alignment with high accuracy and high throughput.
Edgar, Robert C
2004-01-01
We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
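The fast distance-estimation step can be illustrated with a simplified k-mer measure. This is an analogue of the idea, not MUSCLE's exact formula, and the sequences are invented:

```python
from collections import Counter

def kmer_distance(seq_a, seq_b, k=3):
    # Alignment-free distance proxy from shared k-mer counts:
    # 0.0 for identical k-mer multisets, 1.0 when no k-mers are shared.
    def kmers(s):
        return Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ca, cb = kmers(seq_a), kmers(seq_b)
    shared = sum((ca & cb).values())  # multiset intersection
    return 1 - shared / min(sum(ca.values()), sum(cb.values()))

print(kmer_distance("ACGTACGTAC", "ACGTACGTAC"))  # identical -> 0.0
print(kmer_distance("ACGTACGTAC", "TTTTTTTTTT"))  # no shared 3-mers -> 1.0
```

Because no alignment is needed, such distances are cheap enough to build the initial guide tree over thousands of sequences, which is what makes the progressive stage fast.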
NASA Technical Reports Server (NTRS)
Hartman, Edwin P; Biermann, David
1938-01-01
Negative thrust and torque data for 2, 3, and 4-blade metal propellers having Clark Y and R.A.F. 6 airfoil sections were obtained from tests in the NACA 20-foot tunnel. The propellers were mounted in front of a radial engine nacelle and the blade-angle settings covered in the tests ranged from 15 degrees to 90 degrees. One propeller was also tested at blade-angle settings of 0 degrees, 5 degrees, and 10 degrees. A considerable portion of the report deals with the various applications of the negative thrust and torque to flight problems. A controllable propeller is shown to have a number of interesting, and perhaps valuable, uses within the negative thrust and torque range of operation. A small amount of engine-friction data is included to facilitate the application of the propeller data.
Aryeetey, Genevieve Cecilia; Jehu-Appiah, Caroline; Spaan, Ernst; Agyepong, Irene; Baltussen, Rob
2012-01-01
To analyse the costs and evaluate the equity, efficiency and feasibility of four strategies to identify poor households for premium exemptions in Ghana's National Health Insurance Scheme (NHIS): means testing (MT), proxy means testing (PMT), participatory wealth ranking (PWR) and geographic targeting (GT) in urban, rural and semi-urban settings in Ghana. We conducted the study in 145-147 households per setting with MT as our gold standard strategy. We estimated total costs that included costs of household surveys and cost of premiums paid to the poor, efficiency (cost per poor person identified), equity (number of true poor excluded) and the administrative feasibility of implementation. The cost of exempting one poor individual ranged from US$15.87 to US$95.44; exclusion of the poor ranged between 0% and 73%. MT was most efficient and equitable in rural and urban settings with low-poverty incidence; GT was efficient and equitable in the semi-urban setting with high-poverty incidence. PMT and PWR were less equitable and inefficient although feasible in some settings. We recommend MT as optimal strategy in low-poverty urban and rural settings and GT as optimal strategy in high-poverty semi-urban setting. The study is relevant to other social and developmental programmes that require identification and exemptions of the poor in low-income countries. © 2011 Blackwell Publishing Ltd.
X-Ray Phantom Development For Observer Performance Studies
NASA Astrophysics Data System (ADS)
Kelsey, C. A.; Moseley, R. D.; Mettler, F. A.; Parker, T. W.
1981-07-01
The requirements for radiographic imaging phantoms for observer performance testing include realistic tasks which mimic at least some portion of the diagnostic examination presented in a setting which approximates clinically derived images. This study describes efforts to simulate chest and vascular diseases for evaluation of conventional and digital radiographic systems. Images of lung nodules, pulmonary infiltrates, as well as hilar and mediastinal masses are generated with a conventional chest phantom to make up chest disease test series. Vascular images are simulated by hollow tubes embedded in tissue density plastic with widening and narrowing added to mimic aneurysms and stenoses. Both sets of phantoms produce images which allow simultaneous determination of true positive and false positive rates as well as complete ROC curves.
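The simultaneous true-positive/false-positive bookkeeping behind such observer studies reduces to sweeping a decision threshold over confidence ratings. A small sketch with invented ratings; a 5-point observer confidence scale is assumed:

```python
def roc_point(scores_pos, scores_neg, threshold):
    # TPR and FPR at one decision threshold (rating >= threshold -> "lesion present")
    tpr = sum(s >= threshold for s in scores_pos) / len(scores_pos)
    fpr = sum(s >= threshold for s in scores_neg) / len(scores_neg)
    return tpr, fpr

# Hypothetical observer confidence ratings (1-5) for phantom images
# with a lesion (pos) and without one (neg)
pos = [5, 4, 4, 3, 5, 2, 4, 3]
neg = [1, 2, 1, 3, 2, 1, 4, 2]
curve = [roc_point(pos, neg, t) for t in (5, 4, 3, 2, 1)]
print(curve)  # (TPR, FPR) pairs, from the strictest threshold to the laxest
```

Plotting TPR against FPR across thresholds traces the complete ROC curve the phantom images are designed to support; the area under it summarizes observer performance independently of any single operating point.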