Rasmussen, Kirsten; Rauscher, Hubert; Mech, Agnieszka; Riego Sintes, Juan; Gilliland, Douglas; González, Mar; Kearns, Peter; Moss, Kenneth; Visser, Maaike; Groenewold, Monique; Bleeker, Eric A J
2018-02-01
Identifying and characterising nanomaterials requires additional information on physico-chemical properties and test methods, compared to chemicals in general. Furthermore, regulatory decisions for chemicals are usually based upon certain toxicological properties, and these effects may not be equivalent to those for nanomaterials. However, regulatory agencies lack an authoritative decision framework for nanomaterials that links the relevance of certain physico-chemical endpoints to toxicological effects. This paper investigates various physico-chemical endpoints and available test methods that could be used to produce such a decision framework for nanomaterials. It presents an overview of regulatory relevance and methods used for testing fifteen proposed physico-chemical properties of eleven nanomaterials in the OECD Working Party on Manufactured Nanomaterials' Testing Programme, complemented with methods from the literature, and assesses the methods' adequacy and application limits. Most endpoints are of regulatory relevance, though the specific parameters depend on the nanomaterial and type of assessment. Size (distribution) is the common characteristic of all nanomaterials and is decisive information for classifying a material as a nanomaterial. Shape is an important particle descriptor. The octanol-water partitioning coefficient is undefined for particulate nanomaterials. Methods, including sample preparation, need to be further standardised, and some new methods are needed. The current work of OECD's Test Guidelines Programme regarding physico-chemical properties is highlighted. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Grey, Katherine R; Warshaw, Erin M
Allergic contact dermatitis is an important cause of periorbital dermatitis. Topical ophthalmic agents are relevant sensitizers. Contact dermatitis to ophthalmic medications can be challenging to diagnose and manage given the numerous possible offending agents, including both active and inactive ingredients. Furthermore, a substantial body of literature reports false-negative patch test results to ophthalmic agents. Subsequently, numerous alternative testing methods have been described. This review outlines the periorbital manifestations, causative agents, and alternative testing methods of allergic contact dermatitis to ophthalmic medications.
Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions
Wells, Emma; Wolfe, Marlene K.; Murray, Anna; Lantagne, Daniele
2016-01-01
To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (up to 12.4% error compared to the reference method), then DPD dilution methods (2.4–19% error), then test strips (5.2–48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training-burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values of 5–11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14–37 for test strips and $33–609 for titration. Given the ease-of-use and cost benefits of test strips, we recommend further development of test strips robust to pH variation and appropriate for Ebola-relevant chlorine solution concentrations. PMID:27243817
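The accuracy and precision comparisons above reduce to simple statistics over replicate measurements. A minimal sketch (not from the paper; the replicate values below are made-up placeholders) of how percent error against a reference method and coefficient of variation can be computed for quintuplicate readings:

    import statistics

    def accuracy_precision(readings, reference):
        """Return (% error of the mean vs. reference, coefficient of variation in %)."""
        mean = statistics.mean(readings)
        pct_error = abs(mean - reference) / reference * 100.0
        cv = statistics.stdev(readings) / mean * 100.0  # precision proxy
        return pct_error, cv

    # Hypothetical quintuplicate readings (% chlorine) of a nominal 0.05% solution.
    readings = [0.048, 0.051, 0.049, 0.052, 0.050]
    err, cv = accuracy_precision(readings, reference=0.050)
    print(f"accuracy: {err:.1f}% error; precision: {cv:.1f}% CV")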
Gourmelon, Anne; Delrue, Nathalie
Ten years have elapsed since the OECD published the Guidance document on the validation and international regulatory acceptance of test methods for hazard assessment. Much experience has been gained since then in validation centres, in countries, and at the OECD on a variety of test methods that were subjected to validation studies. This chapter reviews validation principles and highlights common features that appear to be important for further regulatory acceptance across studies. Existing OECD-agreed validation principles will most likely remain generally relevant and applicable to address challenges associated with the validation of future test methods. Some adaptations may be needed to take into account the level of technique introduced in test systems, but demonstration of relevance and reliability will continue to play a central role as a prerequisite for regulatory acceptance. Demonstration of relevance will become more challenging for test methods that form part of a set of predictive tools and methods and that do not stand alone. The OECD is keen on ensuring that, while these concepts evolve, countries can continue to rely on valid methods and harmonised approaches for efficient testing and assessment of chemicals.
ECVAM and new technologies for toxicity testing.
Bouvier d'Yvoire, Michel; Bremer, Susanne; Casati, Silvia; Ceridono, Mara; Coecke, Sandra; Corvi, Raffaella; Eskes, Chantra; Gribaldo, Laura; Griesinger, Claudius; Knaut, Holger; Linge, Jens P; Roi, Annett; Zuang, Valérie
2012-01-01
The development of alternative empirical (testing) and non-empirical (non-testing) methods to traditional toxicological tests for complex human health effects is a tremendous task. Toxicants may potentially interfere with a vast number of physiological mechanisms, thereby causing disturbances on various levels of complexity of human physiology. Only a limited number of mechanisms relevant for toxicity ('pathways' of toxicity) have been identified with certainty so far and, presumably, many more mechanisms by which toxicants cause adverse effects remain to be identified. Recapitulating in empirical model systems (i.e., in vitro test systems) all those relevant physiological mechanisms prone to be disturbed by toxicants and relevant for causing the toxicity effect in question poses an enormous challenge. First, the mechanism(s) of action of toxicants in relation to the most relevant adverse effects of a specific human health endpoint need to be identified. Subsequently, these mechanisms need to be modeled in reductionist test systems that allow assessing whether an unknown substance may operate via a specific (array of) mechanism(s). Ideally, such test systems should be relevant for the species of interest, i.e., based on human cells or modeling mechanisms present in humans. Since much of our understanding about toxicity mechanisms is based on studies using animal model systems (i.e., experimental animals or animal-derived cells), designing test systems that model mechanisms relevant for the human situation may be limited by the lack of relevant information from basic research. New technologies from molecular biology and cell biology, as well as progress in tissue engineering, imaging techniques and automated testing platforms, hold the promise to alleviate some of the traditional difficulties associated with improving toxicity testing for complex endpoints. Such new technologies are expected (1) to accelerate the identification of toxicity pathways with human relevance that need to be modeled in test methods for toxicity testing, (2) to enable the reconstruction of reductionist test systems modeling the target system/organ of interest at a reduced level of complexity (e.g., through tissue engineering, use of human-derived cell lines and stem cells, etc.), (3) to allow the measurement of specific mechanisms relevant for a given health endpoint in such test methods (e.g., through gene and protein expression, changes in metabolites, receptor activation, changes in neural activity, etc.), and (4) to allow toxicity mechanisms to be measured at higher throughput rates through the use of automated testing. In this chapter, we discuss the potential impact of new technologies on the development, optimization and use of empirical testing methods, grouped according to important toxicological endpoints. We highlight, from an ECVAM perspective, the areas of topical toxicity, skin absorption, reproductive and developmental toxicity, carcinogenicity/genotoxicity, sensitization, hematopoiesis and toxicokinetics, and discuss strategic developments including ECVAM's database service on alternative methods. Neither the areas of toxicity discussed nor the highlighted new technologies represent comprehensive listings, which would be an impossible endeavor in the context of a book chapter. However, we feel that these areas are of utmost importance and we predict that new technologies are likely to contribute significantly to test development in these fields.
We summarize which new technologies are expected to contribute to the development of new alternative testing methods over the next few years and point out current and planned ECVAM projects for each of these areas.
Boareto, Marcelo; Cesar, Jonatas; Leite, Vitor B P; Caticha, Nestor
2015-01-01
We introduce Supervised Variational Relevance Learning (Suvrel), a variational method to determine metric tensors to define distance-based similarity in pattern classification, inspired by relevance learning. The variational method is applied to a cost function that penalizes large intraclass distances and favors large interclass distances. We find analytically the metric tensor that minimizes the cost function. Preprocessing the patterns by applying linear transformations using the metric tensor yields a dataset which can be more efficiently classified. We test our methods on publicly available datasets, using several standard classifiers. Among these datasets, two were tested by the MAQC-II project and, even without the use of further preprocessing, our results improve on their performance.
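The paper derives the optimal metric tensor analytically; the sketch below is only a simplified, diagonal-metric illustration of the same idea, weighting each feature by how far its between-class scatter exceeds its within-class scatter (an assumption for illustration, not the published solution):

    import numpy as np

    def diagonal_relevance_weights(X, y):
        """Simplified diagonal 'metric': per-feature weights favoring features
        whose between-class scatter exceeds their within-class scatter."""
        total = X.var(axis=0)                      # total scatter per feature
        within = np.zeros(X.shape[1])
        for c in np.unique(y):
            Xc = X[y == c]
            within += (len(Xc) / len(X)) * Xc.var(axis=0)
        between = total - within                   # law of total variance
        w = np.clip(between - within, 0.0, None)
        return w / (w.sum() + 1e-12)

    # Preprocessing step: rescale features by sqrt(w) before a standard classifier.
    # X_transformed = X * np.sqrt(diagonal_relevance_weights(X, y))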
Rhebergen, Martijn D F; Visser, Maaike J; Verberk, Maarten M; Lenderink, Annet F; van Dijk, Frank J H; Kezic, Sanja; Hulshof, Carel T J
2012-10-01
We compared three common user involvement methods in revealing barriers and facilitators from intended users that might influence their use of a new genetic test. The study was part of the development of a new genetic test on the susceptibility to hand eczema for nurses. Eighty student nurses participated in five focus groups (n = 33), 15 interviews (n = 15) or questionnaires (n = 32). For each method, data were collected until saturation. We compared the mean number of items and relevant remarks that could influence the use of the genetic test obtained per method, divided by the number of participants in that method. Thematic content analysis was performed using MAXQDA software. The focus groups revealed 30 unique items compared to 29 in the interviews and 21 in the questionnaires. The interviews produced more items and relevant remarks per participant (1.9 and 8.4 pp) than focus groups (0.9 and 4.8 pp) or questionnaires (0.7 and 2.3 pp). All three involvement methods revealed relevant barriers and facilitators to use a new genetic test. Focus groups and interviews revealed substantially more items than questionnaires. Furthermore, this study suggests a preference for the use of interviews because the number of items per participant was higher than for focus groups and questionnaires. This conclusion may be valid for other genetic tests as well.
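The reported per-participant rates follow directly from the item counts and sample sizes given in the abstract; a quick check:

    # Unique items revealed per method, divided by the number of participants.
    methods = {"focus groups": (30, 33), "interviews": (29, 15), "questionnaires": (21, 32)}
    for name, (items, n) in methods.items():
        print(f"{name}: {items / n:.1f} unique items per participant")
    # -> focus groups: 0.9, interviews: 1.9, questionnaires: 0.7 (as reported)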
Zhang, Shujun
2018-01-01
Genome-wide association studies (GWASs) have identified many disease-associated loci, the majority of which have unknown biological functions. Understanding the mechanism underlying trait associations requires identifying trait-relevant tissues and investigating associations in a trait-specific fashion. Here, we extend the widely used linear mixed model to incorporate multiple SNP functional annotations from omics studies with GWAS summary statistics to facilitate the identification of trait-relevant tissues, with which to further construct powerful association tests. Specifically, we rely on a generalized estimating equation based algorithm for parameter inference, a mixture modeling framework for trait-tissue relevance classification, and a weighted sequence kernel association test constructed based on the identified trait-relevant tissues for powerful association analysis. We refer to our analytic procedure as the Scalable Multiple Annotation integration for trait-Relevant Tissue identification and usage (SMART). With extensive simulations, we show how our method can make use of multiple complementary annotations to improve the accuracy for identifying trait-relevant tissues. In addition, our procedure allows us to make use of the inferred trait-relevant tissues, for the first time, to construct more powerful SNP set tests. We apply our method for an in-depth analysis of 43 traits from 28 GWASs using tissue-specific annotations in 105 tissues derived from ENCODE and Roadmap. Our results reveal new trait-tissue relevance, pinpoint important annotations that are informative of trait-tissue relationship, and illustrate how we can use the inferred trait-relevant tissues to construct more powerful association tests in the Wellcome Trust Case Control Consortium study. PMID:29377896
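A heavily simplified sketch of the final step, a weighted sequence kernel association test statistic, where the SNP weights would come from the inferred trait-relevant tissue annotations (the weight derivation, null model, and significance computation via a mixture of chi-squares are omitted or simplified here):

    import numpy as np

    def weighted_skat_statistic(G, y, weights):
        """Q = r' K r with a weighted linear kernel K = G diag(w) G'.
        G: n x p genotype matrix; y: n phenotypes; weights: p SNP weights."""
        r = y - y.mean()                    # residuals from an intercept-only null model
        K = G @ np.diag(weights) @ G.T      # annotation-informed kernel
        return float(r @ K @ r)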
A Test of Genetic Algorithms in Relevance Feedback.
ERIC Educational Resources Information Center
Lopez-Pujalte, Cristina; Guerrero Bote, Vicente P.; Moya Anegon, Felix de
2002-01-01
Discussion of information retrieval, query optimization techniques, and relevance feedback focuses on genetic algorithms, which are derived from artificial intelligence techniques. Describes an evaluation of different genetic algorithms using a residual collection method and compares results with the Ide dec-hi method (Salton and Buckley, 1990…
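The Ide dec-hi baseline mentioned here has a simple closed form in the vector-space model: add all judged-relevant document vectors to the query and subtract only the highest-ranked non-relevant one. A minimal sketch:

    import numpy as np

    def ide_dec_hi(query, relevant_docs, top_nonrelevant_doc):
        """Ide dec-hi relevance feedback (Salton & Buckley, 1990)."""
        return query + np.sum(relevant_docs, axis=0) - top_nonrelevant_doc

A genetic algorithm would instead evolve a population of candidate query vectors, scored by a retrieval-effectiveness fitness function on the residual collection.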
Development of a test protocol for evaluating EVA glove performance
NASA Technical Reports Server (NTRS)
Hinman, Elaine M.
1992-01-01
Testing gloved hand performance involves work from several disciplines. Evaluations performed in the course of reenabling a disabled hand, designing a robotic end effector or master controller, or hard-suit design have all yielded relevant information and, in most cases, produced performance test methods. These test methods, however, have been oriented primarily toward their parent discipline. For space operations, a comparative test which would provide a way to quantify pressure glove and end effector performance would be useful in dividing tasks between humans and robots. Such a test would have to rely heavily on sensor-based measurement, as opposed to questionnaires, to produce relevant data. However, at some point human preference would have to be taken into account. This paper presents a methodology for evaluating gloved hand performance which attempts to respond to these issues. Glove testing of a prototype glove design using this method is described.
ERIC Educational Resources Information Center
Patalino, Marianne
Problems in current course evaluation methods are discussed and an alternative method is described for the construction, analysis, and interpretation of a test to evaluate instructional programs. The method presented represents a different approach to the traditional overreliance on standardized achievement tests and the total scores they provide.…
Assessment of Foundation Knowledge: Are Students Confident in Their Ability?
ERIC Educational Resources Information Center
Fenna, Doug S.
2004-01-01
Multiple-choice testing (MCT) has several advantages which are becoming more relevant in the current financial climate. In particular, such tests can be machine marked. As an objective testing method, MCT is particularly relevant to engineering and other factual courses, but MCTs are not widely used in engineering because students can benefit from…
Silva, Cristina; Fresco, Paula; Monteiro, Joaquim; Rama, Ana Cristina Ribeiro
2013-08-01
Evidence-Based Practice requires health care decisions to be based on the best available evidence. The model "Information Mastery" proposes that clinicians should use sources of information that have previously evaluated relevance and validity, provided at the point of care. Drug databases (DB) allow easy and fast access to information and have the benefit of more frequent content updates. Relevant information, in the context of drug therapy, is that which supports safe and effective use of medicines. Accordingly, the European Guideline on the Summary of Product Characteristics (EG-SmPC) was used as a standard to evaluate the inclusion of relevant information contents in DB. OBJECTIVE To develop and test a method to evaluate the relevancy of DB contents, by assessing the inclusion of information items deemed relevant for effective and safe drug use. METHODS Hierarchical organisation and selection of the principles defined in the EG-SmPC; definition of criteria to assess inclusion of selected information items; creation of a categorisation and quantification system that allows score calculation; calculation of relative differences (RD) of scores for comparison with an "ideal" database, defined as the one that achieves the best quantification possible for each of the information items; pilot test on a sample of 9 drug databases, using 10 drugs frequently associated in the literature with morbidity-mortality and also widely consumed in Portugal. MAIN OUTCOME MEASURE Individual and global scores for clinically relevant information items of drug monographs in databases, calculated using the categorisation and quantification system created. RESULTS A - Method development: selection of sections, subsections, relevant information items and corresponding requisites; system to categorise and quantify their inclusion; score and RD calculation procedure. B - Pilot test: scores were calculated for the 9 databases; globally, all databases differed significantly from the "ideal" database; some DB performed better, but performance was inconsistent at the subsection level within the same DB. CONCLUSION The method developed allows quantification of the inclusion of relevant information items in DB and comparison with an "ideal" database. It is necessary to consult diverse DB in order to find all the relevant information needed to support clinical drug use.
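A minimal sketch of the scoring arithmetic described above, with a hypothetical 0/1/2 categorisation (absent/partial/complete) standing in for the paper's quantification system:

    def database_score(item_scores):
        """Sum the per-item inclusion scores for one database."""
        return sum(item_scores.values())

    def relative_difference(score, ideal_score):
        """RD of a database's score vs. the 'ideal' database."""
        return (ideal_score - score) / ideal_score

    # Hypothetical information items and inclusion scores for one database.
    items = {"contraindications": 2, "interactions": 1, "posology": 2, "warnings": 0}
    ideal = 2 * len(items)                    # best possible quantification per item
    s = database_score(items)
    print(f"score = {s}/{ideal}, RD = {relative_difference(s, ideal):.2f}")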
Lung bioaccessibility of contaminants in particulate matter of geological origin.
Guney, Mert; Chapuis, Robert P; Zagury, Gerald J
2016-12-01
Human exposure to particulate matter (PM) has been associated with adverse health effects. While inhalation exposure to airborne PM is a prominent research subject, exposure to PM of geological origin (i.e., generated from soil/soil-like material) has received less attention. This review discusses the contaminants in PM of geological origin and their relevance for human exposure and then evaluates lung bioaccessibility assessment methods and their use. PM of geological origin can contain toxic elements as well as organic contaminants. Observed/predicted PM lung clearance times are long, which may lead to prolonged contact with lung environment. Thus, certain exposure scenarios warrant the use of in vitro bioaccessibility testing to predict lung bioavailability. Limited research is available on lung bioaccessibility test development and test application to PM of geological origin. For in vitro tests, test parameter variation between different studies and concerns about physiological relevance indicate a crucial need for test method standardization and comparison with relevant animal data. Research is recommended on (1) developing robust in vitro lung bioaccessibility methods, (2) assessing bioaccessibility of various contaminants (especially polycyclic aromatic hydrocarbons (PAHs)) in PM of diverse origin (surface soils, mine tailings, etc.), and (3) risk characterization to determine relative importance of exposure to PM of geological origin.
Self-adaptive relevance feedback based on multilevel image content analysis
NASA Astrophysics Data System (ADS)
Gao, Yongying; Zhang, Yujin; Fu, Yu
2001-01-01
In current content-based image retrieval systems, it is generally accepted that obtaining high-level image features is key to improving querying. Among the related techniques, relevance feedback has become a hot research topic because it uses information from the user to refine the querying results. In practice, many methods have been proposed to achieve the goal of relevance feedback. In this paper, a new scheme for relevance feedback is proposed. Unlike previous methods for relevance feedback, our scheme provides a self-adaptive operation. First, based on multilevel image content analysis, the relevant images from the user can be automatically analyzed at different levels and the querying modified in terms of the different analysis results. Second, to make it more convenient for the user, the procedure of relevance feedback can be conducted with or without memory. To test the performance of the proposed method, a practical semantic-based image retrieval system has been established, and the querying results obtained by our self-adaptive relevance feedback are given.
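The "with memory / without memory" distinction maps onto whether each feedback round starts from the previously refined query or from the original one. A generic Rocchio-style sketch of one feedback round (the paper's multilevel content analysis is not reproduced here):

    import numpy as np

    def rocchio_update(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
        """One relevance-feedback round. With memory: pass the returned query
        into the next round. Without memory: always pass the original query."""
        q = alpha * np.asarray(query, dtype=float)
        if len(relevant):
            q += beta * np.mean(relevant, axis=0)
        if len(nonrelevant):
            q -= gamma * np.mean(nonrelevant, axis=0)
        return q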
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-04
[Garbled table extract from a Federal Register list of recognized consensus standards; recoverable entries include: "Evaluation and testing within a risk management process"; ASTM E1372-95 (2003), a standard test method for agar diffusion cell culture screening; "Biological evaluation of medical devices - Part 3: Tests for genotoxicity"; and table columns for standard title, type of test method, and relevant office(s) and division(s).]
Huynh-Thu, Vân Anh; Saeys, Yvan; Wehenkel, Louis; Geurts, Pierre
2012-07-01
Univariate statistical tests are widely used for biomarker discovery in bioinformatics. These procedures are simple, fast and their output is easily interpretable by biologists, but they can only identify variables that provide a significant amount of information in isolation from the other variables. As biological processes are expected to involve complex interactions between variables, univariate methods thus potentially miss some informative biomarkers. Variable relevance scores provided by machine learning techniques, however, are potentially able to highlight multivariate interacting effects, but unlike the p-values returned by univariate tests, these relevance scores are usually not statistically interpretable. This lack of interpretability hampers the determination of a relevance threshold for extracting a feature subset from the rankings and also prevents the wide adoption of these methods by practitioners. We evaluated several existing and novel procedures that extract relevant features from rankings derived from machine learning approaches. These procedures replace the relevance scores with measures that can be interpreted in a statistical way, such as p-values, false discovery rates, or family-wise error rates, for which it is easier to determine a significance level. Experiments were performed on several artificial problems as well as on real microarray datasets. Although the methods differ in terms of computing times and the tradeoff they achieve between false positives and false negatives, some of them greatly help in the extraction of truly relevant biomarkers and should thus be of great practical interest for biologists and physicians. As a side conclusion, our experiments also clearly highlight that using model performance as a criterion for feature selection is often counter-productive. Python source codes of all tested methods, as well as the MATLAB scripts used for data simulation, can be found in the Supplementary Material.
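The common core of such procedures is an empirical null distribution for the relevance scores. A minimal permutation-based sketch (one generic instance of the class of methods evaluated, not a specific procedure from the paper):

    import numpy as np

    def relevance_pvalues(score_fn, X, y, n_perm=1000, seed=0):
        """Empirical per-feature p-values: how often a relevance score computed
        on label-permuted data reaches the observed score. score_fn(X, y) must
        return one relevance score per feature as a NumPy array."""
        rng = np.random.default_rng(seed)
        observed = score_fn(X, y)
        exceed = np.zeros_like(observed, dtype=float)
        for _ in range(n_perm):
            exceed += score_fn(X, rng.permutation(y)) >= observed
        return (exceed + 1.0) / (n_perm + 1.0)   # add-one correction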
Rodrigues, Dulcilea Ferraz; Goulart, Eugênio Marcos Andrade
2015-01-01
BACKGROUND Patch testing is an efficient method to identify the allergen responsible for allergic contact dermatitis. OBJECTIVE To evaluate the results of patch tests in children and adolescents comparing these two age groups' results. METHODS Cross-sectional study to assess patch test results of 125 children and adolescents aged 1-19 years, with suspected allergic contact dermatitis, in a dermatology clinic in Brazil. Two Brazilian standardized series were used. RESULTS Seventy four (59.2%) patients had "at least one positive reaction" to the patch test. Among these positive tests, 77.0% were deemed relevant. The most frequent allergens were nickel (36.8%), thimerosal (18.4%), tosylamide formaldehyde resin (6.8%), neomycin (6.4%), cobalt (4.0%) and fragrance mix I (4.0%). The most frequent positive tests came from adolescents (p=0.0014) and females (p=0.0002). There was no relevant statistical difference concerning contact sensitizations among patients with or without atopic history. However, there were significant differences regarding sensitization to nickel (p=0.029) and thimerosal (p=0.042) between the two age groups under study, while adolescents were the most affected. CONCLUSION Nickel and fragrances were the only positive (and relevant) allergens in children. Nickel and tosylamide formaldehyde resin were the most frequent and relevant allergens among adolescents. PMID:26560213
ToxCast and the Use of Human Relevant In Vitro Exposures ...
The path for incorporating new approach methods and technologies into quantitative chemical risk assessment poses a diverse set of scientific challenges. These challenges include sufficient coverage of toxicological mechanisms to meaningfully interpret negative test results, development of increasingly relevant test systems, computational modeling to integrate experimental data, putting results in a dose and exposure context, characterizing uncertainty, and efficient validation of the test systems and computational models. The presentation will cover progress at the U.S. EPA in systematically addressing each of these challenges and delivering more human-relevant risk-based assessments. This abstract does not necessarily reflect U.S. EPA policy. Presentation at the British Toxicological Society Annual Congress on ToxCast and the Use of Human Relevant In Vitro Exposures: Incorporating high-throughput exposure and toxicity testing data for 21st century risk assessments.
A Comprehensive Comparison of Relativistic Particle Integrators
NASA Astrophysics Data System (ADS)
Ripperda, B.; Bacchini, F.; Teunissen, J.; Xia, C.; Porth, O.; Sironi, L.; Lapenta, G.; Keppens, R.
2018-03-01
We compare relativistic particle integrators commonly used in plasma physics, showing several test cases relevant for astrophysics. Three explicit particle pushers are considered, namely, the Boris, Vay, and Higuera–Cary schemes. We also present a new relativistic fully implicit particle integrator that is energy conserving. Furthermore, a method based on the relativistic guiding center approximation is included. The algorithms are described such that they can be readily implemented in magnetohydrodynamics codes or Particle-in-Cell codes. Our comparison focuses on the strengths and key features of the particle integrators. We test the conservation of invariants of motion and the accuracy of particle drift dynamics in highly relativistic, mildly relativistic, and non-relativistic settings. The methods are compared in idealized test cases, i.e., without considering feedback onto the electrodynamic fields, collisions, pair creation, or radiation. The test cases include uniform electric and magnetic fields, E×B fields, force-free fields, and setups relevant for high-energy astrophysics, e.g., a magnetic mirror, a magnetic dipole, and a magnetic null. These tests have direct relevance for particle acceleration in shocks and in magnetic reconnection.
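Of the integrators compared, the Boris scheme is the most widely used and compact enough to sketch. A standard textbook form of one relativistic Boris step (normalization and units are illustrative):

    import numpy as np

    def boris_push(u, E, B, q, m, dt, c=1.0):
        """Advance u = gamma*v by one time step with the relativistic Boris scheme:
        half electric kick, magnetic rotation, half electric kick."""
        h = q * dt / (2.0 * m)
        u_minus = u + h * E                                   # first half kick
        gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / c**2)
        t = h * B / gamma                                     # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        u_prime = u_minus + np.cross(u_minus, t)
        u_plus = u_minus + np.cross(u_prime, s)               # full rotation
        return u_plus + h * E                                 # second half kick

The rotation step preserves the magnitude of u exactly, which is why Boris-type pushers conserve energy well in purely magnetic fields.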
Efforts to Produce Relevant Score Reports to School, District, and State Officials on National Tests
ERIC Educational Resources Information Center
Patelis, Thanos; Matos-Elefonte, Haifa
2009-01-01
Presented at the annual meeting of the National Council on Measurement in Education (NCME) in San Diego in April 2009. This presentation explores how the College Board strives to ensure the relevance and utility of score reporting practices and methods for the PSAT/NMSQT and SAT scores. The new reporting methods allow for greater interaction and intervention at…
The Role of Relevance in Future Teachers' Utility Value and Interest toward Technology
ERIC Educational Resources Information Center
Kale, Ugur; Akcaoglu, Mete
2018-01-01
Seeing the relevance of tasks for future use is important for developing value and interest in them. We employed a pre- and post-test quasi-experimental design using a mixed-methods approach to examine if reflecting on the relevance of technology to future teaching practices influences elementary and secondary preservice teachers' utility value…
ERIC Educational Resources Information Center
Dahabreh, Issa J.; Chung, Mei; Kitsios, Georgios D.; Terasawa, Teruhiko; Raman, Gowri; Tatsioni, Athina; Tobar, Annette; Lau, Joseph; Trikalinos, Thomas A.; Schmid, Christopher H.
2013-01-01
We performed a survey of meta-analyses of test performance to describe the evolution in their methods and reporting. Studies were identified through MEDLINE (1966-2009), reference lists, and relevant reviews. We extracted information on clinical topics, literature review methods, quality assessment, and statistical analyses. We reviewed 760…
NASA Astrophysics Data System (ADS)
Himschoot, Agnes Rose
The purpose of this mixed method case study was to examine the effects of methods of instruction on students' perception of relevance in higher education non-biology majors' courses. Nearly ninety percent of all students in a liberal arts college are required to take a general biology course. It is proposed that for many of those students, this is the last science course they will ever take. General biology courses are suspected of discouraging student interest in biology with large enrollment, didactic instruction, and coverage of a huge amount of content in one semester, and are charged with promoting student disengagement with biology by the end of the course. Previous research has been aimed at increasing student motivation and interest in biology as measured by surveys and test results. Various methods of instruction have been tested and show evidence of improved learning gains. This study focused on students' perception of the relevance of biology content to everyday life and the methods of instruction that increase it. A quantitative survey was administered to assess perception of relevance pre- and post-instruction across three topics typically taught in a general biology course. A second quantitative survey of student experiences during instruction was administered to identify methods of instruction used in the course lecture and lab. While perception of relevance dropped in the study, qualitative focus groups provided insight into the surprising results by identifying topics that are more relevant than the ones chosen for the study, conveying the effects of the instructor's personal and instructional skills on student engagement, explaining how active engagement during instruction promotes understanding of relevance, clarifying the role of the laboratory in promoting students' understanding of relevance, and identifying external factors that affect student engagement. The study also investigated the extent to which gender affected changes in students' perception of relevance. The results of this study will inform instructors' pedagogical and logistical choices in the design and implementation of higher education biology courses for non-biology majors. Recommendations for future research include refining the study to train instructors in methods of instruction that promote student engagement, as well as identifying biology topics that are more relevant to students enrolled in non-major biology courses.
In silico methods provide a rapid, inexpensive means of screening a wide array of environmentally relevant pollutants, pesticides, fungicides and consumer products for further toxicity testing. Physiologically based pharmacokinetic (PBPK) models bridge the gap between in vitro as...
Science and Television Commercials: Adding Relevance to the Research Methodology Course.
ERIC Educational Resources Information Center
Solomon, Paul R.
1979-01-01
Contends that research methodology courses can be relevant to issues outside of psychology and describes a method which relates the course to consumer problems. Students use experimental methodology to test claims made in television commercials advertising deodorant, bathroom tissues, and soft drinks. (KC)
Laboratory test methods for evaluating the fire response of aerospace materials
NASA Technical Reports Server (NTRS)
Hilado, C. J.
1979-01-01
The test methods which were developed or evaluated were intended to serve as means of comparing materials on the basis of specific responses under specific sets of test conditions, using apparatus, facilities, and personnel that would be within the capabilities of perhaps the majority of laboratories. Priority was given to test methods which showed promise of addressing the pre-ignition state of a potential fire. These test methods were intended to indicate which materials may present more hazard than others under specific test conditions. These test methods are discussed and arranged according to the stage of a fire to which they are most relevant. Some observations of material performance which resulted from this work are also discussed.
In Situ Test Method for the Electrostatic Characterization of Lunar Dust
NASA Technical Reports Server (NTRS)
Buhler, C. R.; Calle, Carlos I.; CLements, S. J.; Mantovani, J.; Ritz, M. I.
2007-01-01
This paper serves to illustrate the testing methods necessary to classify the electrostatic properties of lunar dust using in situ instrumentation and the required techniques therein. A review of electrostatic classification of lunar simulant materials is provided as is its relevance to the success of future human lunar missions.
Makris, Susan L.; Raffaele, Kathleen; Allen, Sandra; Bowers, Wayne J.; Hass, Ulla; Alleva, Enrico; Calamandrei, Gemma; Sheets, Larry; Amcoff, Patric; Delrue, Nathalie; Crofton, Kevin M.
2009-01-01
Objective We conducted a review of the history and performance of developmental neurotoxicity (DNT) testing in support of the finalization and implementation of Organisation for Economic Co-operation and Development (OECD) DNT test guideline 426 (TG 426). Information sources and analysis In this review we summarize extensive scientific efforts that form the foundation for this testing paradigm, including basic neurotoxicology research, interlaboratory collaborative studies, expert workshops, and validation studies, and we address the relevance, applicability, and use of the DNT study in risk assessment. Conclusions The OECD DNT guideline represents the best available science for assessing the potential for DNT in human health risk assessment, and data generated with this protocol are relevant and reliable for the assessment of these end points. The test methods used have been subjected to an extensive history of international validation, peer review, and evaluation, which is contained in the public record. The reproducibility, reliability, and sensitivity of these methods have been demonstrated, using a wide variety of test substances, in accordance with OECD guidance on the validation and international acceptance of new or updated test methods for hazard characterization. Multiple independent, expert scientific peer reviews affirm these conclusions. PMID:19165382
Web usability testing with a Hispanic medically underserved population.
Moore, Mary; Bias, Randolph G; Prentice, Katherine; Fletcher, Robin; Vaughn, Terry
2009-04-01
Skilled website developers value usability testing to assure user needs are met. When the target audience differs substantially from the developers, it becomes essential to tailor both design and evaluation methods. In this study, researchers carried out a multifaceted usability evaluation of a website (Healthy Texas) designed for Hispanic audiences with lower computer literacy and lower health literacy. METHODS INCLUDED: (1) heuristic evaluation by a usability engineer, (2) remote end-user testing using WebEx software; and (3) face-to-face testing in a community center where use of the website was likely. Researchers found standard usability testing methods needed to be modified to provide interpreters, increased flexibility for time on task, presence of a trusted intermediary such as a librarian, and accommodation for family members who accompanied participants. Participants offered recommendations for website redesign, including simplified language, engaging and relevant graphics, culturally relevant examples, and clear navigation. User-centered design is especially important when website developers are not representative of the target audience. Failure to conduct appropriate usability testing with a representative audience can substantially reduce use and value of the website. This thorough course of usability testing identified improvements that benefit all users but become crucial when trying to reach an underserved audience.
In Vitro Susceptibility Testing Methods for Caspofungin against Aspergillus and Fusarium Isolates
Arikan, Sevtap; Lozano-Chiu, Mario; Paetznick, Victor; Rex, John H.
2001-01-01
We investigated the relevance of prominent reduction in turbidity macroscopically (MIC) and formation of aberrant hyphal tips microscopically (minimum effective concentration; MEC) in measuring the in vitro activity of caspofungin against Aspergillus and Fusarium. Caspofungin generated low MICs and MECs against Aspergillus, but not against Fusarium. While MICs increased inconsistently when the incubation time was prolonged, the MEC appeared to be a stable and potentially relevant endpoint for testing caspofungin activity in vitro. PMID:11120990
Some Tests of Randomness with Applications
1981-02-01
…degrees of freedom. For further details, the reader is referred to Gnanadesikan (1977, p. 169), wherein other relevant tests are also given. Graphical tests are also discussed. [Garbled extract; recoverable reference fragments: a paper on testing a sample from a gamma distribution, J. Am. Statist. Assoc. 71, 480-7; Gnanadesikan, R. (1977), Methods for Statistical Data Analysis of Multivariate Observations.]
ERIC Educational Resources Information Center
Hilgenkamp, Thessa I. M.; van Wijck, Ruud; Evenhuis, Heleen M.
2012-01-01
Background: Physical fitness is relevant for wellbeing and health, but knowledge on the feasibility and reliability of instruments to measure physical fitness for older adults with intellectual disability is lacking. Methods: Feasibility and test-retest reliability of a physical fitness test battery (Box and Block Test, Response Time Test, walking…
Proof test methodology for composites
NASA Technical Reports Server (NTRS)
Wu, Edward M.; Bell, David K.
1992-01-01
The special requirements for proof test of composites are identified based on the underlying failure process of composites. Two proof test methods are developed to eliminate the inevitable weak fiber sites without also causing flaw clustering which weakens the post-proof-test composite. Significant reliability enhancement by these proof test methods has been experimentally demonstrated for composite strength and composite life in tension. This basic proof test methodology is relevant to the certification and acceptance of critical composite structures. It can also be applied to the manufacturing process development to achieve zero-reject for very large composite structures.
Spector, Paul E.
2016-01-01
Background Safety climate, violence prevention climate, and civility climate were independently developed and linked to domain-specific workplace hazards, although all three were designed to promote the physical and psychological safety of workers. Purpose To test domain specificity between conceptually related workplace climates and relevant workplace hazards. Methods Data were collected from 368 persons employed in various industries and descriptive statistics were calculated for all study variables. Correlational and relative weights analyses were used to test for domain specificity. Results The three climate domains were similarly predictive of most workplace hazards, regardless of domain specificity. Discussion This study suggests that the three climate domains share a common higher order construct that may predict relevant workplace hazards better than any of the scales alone. PMID:27110930
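Relative weights analysis partitions the variance explained among correlated predictors (here, the three climate scales). A simplified sketch following Johnson's orthogonal-approximation idea, reporting each predictor's share as a proportion:

    import numpy as np

    def relative_weights(X, y):
        """Approximate each predictor's share of explained variance via the
        closest orthogonal counterpart Z of the standardized predictors."""
        Xs = (X - X.mean(0)) / X.std(0)
        ys = (y - y.mean()) / y.std()
        P, d, Qt = np.linalg.svd(Xs, full_matrices=False)
        Z = P @ Qt                            # orthonormal columns closest to Xs
        Lam = Qt.T @ np.diag(d) @ Qt          # Xs = Z @ Lam
        beta, *_ = np.linalg.lstsq(Z, ys, rcond=None)
        eps = (Lam ** 2) @ (beta ** 2)        # raw relative weight per predictor
        return eps / eps.sum()                # report as proportions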
Validation of alternative methods for toxicity testing.
Bruner, L H; Carr, G J; Curren, R D; Chamberlain, M
1998-01-01
Before nonanimal toxicity tests may be officially accepted by regulatory agencies, it is generally agreed that the validity of the new methods must be demonstrated in an independent, scientifically sound validation program. Validation has been defined as the demonstration of the reliability and relevance of a test method for a particular purpose. This paper provides a brief review of the development of the theoretical aspects of the validation process and updates current thinking about objectively testing the performance of an alternative method in a validation study. Validation of alternative methods for eye irritation testing is a specific example illustrating important concepts. Although discussion focuses on the validation of alternative methods intended to replace current in vivo toxicity tests, the procedures can be used to assess the performance of alternative methods intended for other uses. PMID:9599695
Developing and Testing a Method to Measure Academic Societal Impact
ERIC Educational Resources Information Center
Phillips, Paul; Moutinho, Luiz; Godinho, Pedro
2018-01-01
This paper aims to extend understanding of the business and societal impact of academic research. From a business school perspective, it has taken stock of the role of academic research and relevance in business and society. The proposed conceptual framework highlights the forces influencing the pursuit of academic rigour and relevance in…
Hu, Xinyi; Chen, Yinghe; Tian, Baowei
2016-01-01
Past studies suggest that managers and educators often consider negative feedback as a motivator for individuals to think about their shortcomings and improve their work, but delivering negative feedback does not always achieve desired results. The present study, based on incremental theory, employed an intervention method to activate the belief that a particular ability could be improved after negative feedback. Three experiments tested the intervention effect on negative self-relevant emotion. Study 1 indicated conveying suggestions for improving ability reduced negative self-relevant emotion after negative feedback. Study 2 tested whether activating the sense of possible improvement in the ability could reduce negative self-relevant emotion. Results indicated activating the belief that ability could be improved reduced negative self-relevant emotion after failure, but delivering emotion management information alone did not yield the same effect. Study 3 extended the results by affirming the effort participants made in doing the test, and found the affirmation reduced negative self-relevant emotion. Collectively, the findings indicated focusing on the belief that the ability could be improved in the future can reduce negative self-relevant emotion after negative feedback.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Dryers Note: Effective February 10, 2014, manufacturers must make representations of energy efficiency...), disregarding the provisions regarding batteries and the determination, classification, and testing of relevant...
Passarge, Michelle; Fix, Michael K; Manser, Peter; Stampanoni, Marco F M; Siebers, Jeffrey V
2017-04-01
To develop a robust and efficient process that detects relevant dose errors (dose errors of ≥5%) in external beam radiation therapy and directly indicates the origin of the error. The process is illustrated in the context of electronic portal imaging device (EPID)-based angle-resolved volumetric-modulated arc therapy (VMAT) quality assurance (QA), particularly as would be implemented in a real-time monitoring program. A Swiss cheese error detection (SCED) method was created as a paradigm for a cine EPID-based during-treatment QA. For VMAT, the method compares a treatment plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The process utilizes a sequence of independent consecutively executed error detection tests: an aperture check that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment check to examine if rotation, scaling, and translation are within tolerances; pixel intensity check containing the standard gamma evaluation (3%, 3 mm) and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each check were determined. To test the SCED method, 12 different types of errors were selected to modify the original plan. A series of angle-resolved predicted EPID images were artificially generated for each test case, resulting in a sequence of precalculated frames for each modified treatment plan. The SCED method was applied multiple times for each test case to assess the ability to detect introduced plan variations. To compare the performance of the SCED process with that of a standard gamma analysis, both error detection methods were applied to the generated test cases with realistic noise variations. Averaged over ten test runs, 95.1% of all plan variations that resulted in relevant patient dose errors were detected within 2° and 100% within 14° (<4% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 89.1% were detected by the SCED method within 2°. Based on the type of check that detected the error, determination of error sources was achieved. With noise ranging from no random noise to four times the established noise value, the averaged relevant dose error detection rate of the SCED method was between 94.0% and 95.8% and that of gamma between 82.8% and 89.8%. An EPID-frame-based error detection process for VMAT deliveries was successfully designed and tested via simulations. The SCED method was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of relevant dose errors. Compared to a typical (3%, 3 mm) gamma analysis, the SCED method produced a higher detection rate for all introduced dose errors, identified errors in an earlier stage, displayed a higher robustness to noise variations, and indicated the error source. © 2017 American Association of Physicists in Medicine.
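The essence of the SCED method is a fixed sequence of independent checks in which the first failure localizes the error source. A toy sketch (the check functions and tolerances below are illustrative; the published method's gamma 3%/3 mm and alignment checks are not reproduced):

    import numpy as np

    def sced_frame_check(frame, reference, checks):
        """Run ordered, independent checks on one angle-resolved EPID frame;
        the first failing check names the likely error source."""
        for name, passes in checks:
            if not passes(frame, reference):
                return f"FAIL at '{name}'"
        return "PASS"

    checks = [
        ("aperture", lambda f, r: not np.any((f > 0.05 * r.max()) & (r == 0))),
        ("output normalization", lambda f, r: abs(f.sum() / r.sum() - 1.0) < 0.03),
        ("pixel intensity", lambda f, r: np.mean(np.abs(f - r)) / r.max() < 0.03),
    ]

    frame = np.random.rand(8, 8)
    print(sced_frame_check(frame, frame.copy(), checks))   # -> PASS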
Code of Federal Regulations, 2014 CFR
2014-01-01
.... 2.4 Energy factor for dehumidifiers means a measure of energy efficiency of a dehumidifier... batteries and the determination, classification, and testing of relevant modes. 3.2.2 Electrical energy...
Image Search Reranking With Hierarchical Topic Awareness.
Tian, Xinmei; Yang, Linjun; Lu, Yijuan; Tian, Qi; Tao, Dacheng
2015-10-01
With much attention from both academia and industrial communities, visual search reranking has recently been proposed to refine image search results obtained from text-based image search engines. Most traditional reranking methods cannot capture both the relevance and the diversity of the search results at the same time, or they ignore the hierarchical topic structure of the search results, treating each topic equally and independently. However, in real applications, images returned for certain queries are naturally in a hierarchical organization, rather than a simple parallel relation. In this paper, a new reranking method, "topic-aware reranking (TARerank)", is proposed. TARerank describes the hierarchical topic structure of search results in one model, and seamlessly captures both relevance and diversity of the image search results simultaneously. Through a structured learning framework, relevance and diversity are modeled in TARerank by a set of carefully designed features, and the model is then learned from human-labeled training samples. The learned model is expected to predict reranking results with high relevance and diversity for testing queries. To verify the effectiveness of the proposed method, we collect an image search dataset and conduct comparison experiments on it. The experimental results demonstrate that the proposed TARerank outperforms existing relevance-based and diversified reranking methods.
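For contrast with the learned TARerank model, the classic way to trade relevance against diversity is greedy maximal marginal relevance (MMR), which the hierarchical, structured-learning approach above is designed to improve on. A minimal MMR sketch:

    def mmr_rerank(relevance, similarity, k, lam=0.7):
        """Greedily pick k items maximizing lam*relevance - (1-lam)*redundancy,
        where redundancy is the max similarity to items already selected."""
        selected, candidates = [], list(range(len(relevance)))
        while candidates and len(selected) < k:
            def mmr_score(i):
                redundancy = max((similarity[i][j] for j in selected), default=0.0)
                return lam * relevance[i] - (1.0 - lam) * redundancy
            best = max(candidates, key=mmr_score)
            selected.append(best)
            candidates.remove(best)
        return selected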
Preparation of a Frozen Regolith Simulant Bed for ISRU Component Testing in a Vacuum Chamber
NASA Technical Reports Server (NTRS)
Kleinhenz, Julie; Linne, Diane
2013-01-01
In-Situ Resource Utilization (ISRU) systems and components have undergone extensive laboratory and field tests to expose hardware to relevant soil environments. The next step is to combine these soil environments with relevant pressure and temperature conditions. Previous testing has demonstrated how to incorporate large bins of unconsolidated lunar regolith into sufficiently sized vacuum chambers. In order to create the appropriate depth-dependent soil characteristics needed to test drilling operations for the lunar surface, the regolith simulant bed must be properly compacted and frozen. While small cryogenic simulant beds have been created for laboratory tests, this larger-scale effort will allow testing of a full 1 m drill which has been developed for a potential lunar prospector mission. Compacted bulk densities were measured at various moisture contents for GRC-3 and Chenobi regolith simulants. Vibrational compaction methods were compared with the previously used hammer compaction, or "Proctor", method. All testing was done per ASTM standard methods. A full 6.13 m3 simulant bed with 6 percent moisture by weight was prepared, compacted in layers, and frozen in a commercial freezer. Temperature and desiccation data were collected to determine logistics for preparation and transport of the simulant bed for thermal vacuum testing. Once in the vacuum facility, the simulant bed will be cryogenically frozen with liquid nitrogen. These cryogenic vacuum tests are underway, but results will not be included in this manuscript.
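When compaction results are reported at various moisture contents, the standard soil-mechanics conversion between bulk and dry density applies (assuming w is the gravimetric water content relative to dry solids, per ASTM convention; the numbers below are hypothetical):

    def dry_density(bulk_density, w):
        """rho_dry = rho_bulk / (1 + w), with w as a decimal fraction."""
        return bulk_density / (1.0 + w)

    # e.g., a hypothetical 1.90 g/cm^3 bulk density at 6% moisture:
    print(round(dry_density(1.90, 0.06), 2))   # -> 1.79 g/cm^3 dry density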
High-Throughput Toxicity Testing: New Strategies for Assessing Chemical Safety
In recent years, the food industry has made progress in improving safety testing methods focused on microbial contaminants in order to promote food safety. However, food industry toxicologists must also assess the safety of food-relevant chemicals including pesticides, direct add...
Code of Federal Regulations, 2013 CFR
2013-01-01
.... 2.4 Energy factor for dehumidifiers means a measure of energy efficiency of a dehumidifier calculated... batteries and the determination, classification, and testing of relevant modes. 3.2.2 Electrical energy...
A Method to Examine Content Domain Structures
ERIC Educational Resources Information Center
D'Agostino, Jerome; Karpinski, Aryn; Welsh, Megan
2011-01-01
After a test is developed, most content validation analyses shift from ascertaining domain definition to studying domain representation and relevance because the domain is assumed to be set once a test exists. We present an approach that allows for the examination of alternative domain structures based on extant test items. In our example based on…
Nesslany, Fabrice
2017-08-01
The standard regulatory core battery of genotoxicity tests generally includes 2 or 3 validated tests, with at least one in vitro test in bacteria and one in vitro test on cell cultures. However, limitations in in vitro genotoxicity testing may exist at many levels. Knowledge of the underlying mechanisms of genotoxicity is particularly useful to assess the level of relevance for the in vivo situation. In order to avoid wrong conclusions regarding the actual genotoxicity status of any test substance, it is very important to be aware of the various origins of bias leading to 'false positives and negatives' in in vitro methods. Among these, mention may be made of the metabolic activation system, experimental (extreme) conditions, specificities of the test systems implemented, the cell type used, etc. Knowledge of the actual 'limits' of the in vitro test systems used is clearly an advantage and may help avoid some pitfalls in order to better assess the level of relevance for the in vivo situation. Copyright © 2016. Published by Elsevier Ltd.
ERIC Educational Resources Information Center
Feldt, Leonard S.
2011-01-01
This article presents a simple, computer-assisted method of determining the extent to which increases in reliability increase the power of the "F" test of equality of means. The method uses a derived formula that relates the changes in the reliability coefficient to changes in the noncentrality of the relevant "F" distribution. A readily available…
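A sketch of the computation such a method entails, using the noncentral F distribution; the reliability-to-noncentrality scaling shown is an assumption for illustration (noncentrality taken proportional to reliability, as when measurement error inflates within-group variance), not Feldt's derived formula:

    from scipy.stats import f as f_dist, ncf

    def anova_power(lam, k, n_per_group, alpha=0.05):
        """Power of the one-way ANOVA F test at noncentrality lam."""
        dfn, dfd = k - 1, k * (n_per_group - 1)
        f_crit = f_dist.ppf(1.0 - alpha, dfn, dfd)
        return 1.0 - ncf.cdf(f_crit, dfn, dfd, lam)

    lam1, rho1 = 8.0, 0.70                 # hypothetical starting point
    for rho2 in (0.80, 0.90):
        lam2 = lam1 * rho2 / rho1          # assumed scaling with reliability
        print(rho2, round(anova_power(lam2, k=3, n_per_group=20), 3))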
A systematic review of stakeholder views of selection methods for medical schools admission.
Kelly, M E; Patterson, F; O'Flynn, S; Mulligan, J; Murphy, A W
2018-06-15
The purpose of this paper is to systematically review the literature with respect to stakeholder views of selection methods for medical school admissions. An electronic search of nine databases was conducted between January 2000 and July 2014. Two reviewers independently assessed all titles (n = 1017) and retained abstracts (n = 233) for relevance. Methodological quality of quantitative papers was assessed using the MERSQI instrument. The overall quality of evidence in this field was low. Evidence was synthesised in a narrative review. Applicants support interviews and multiple mini interviews (MMIs). There is emerging evidence that situational judgement tests (SJTs) and selection centres (SCs) are also well regarded, but aptitude tests less so. Selectors endorse the use of interviews in general and in particular MMIs, judging them to be fair, relevant and appropriate, with emerging evidence of similarly positive reactions to SCs. Aptitude tests and academic records were valued in decisions of whom to call to interview. Medical students prefer interview-based selection to cognitive aptitude tests. They are unconvinced about the transparency and veracity of written applications. Perceptions of organisational justice, which describe views of fairness in organisational processes, appear to be highly influential on stakeholders' views of the acceptability of selection methods. In particular, procedural justice (perceived fairness of selection tools in terms of job relevance and characteristics of the test) and distributive justice (perceived fairness of selection outcomes in terms of equal opportunity and equity) appear to be important considerations when deciding on the acceptability of selection methods. There were significant gaps with respect to both key stakeholder groups and the range of selection tools assessed. Notwithstanding the observed limitations in the quality of research in this field, there appears to be broad concordance of views on the various selection methods across the diverse stakeholder groups. This review highlights the need for better standards, more appropriate methodologies and for broadening the scope of stakeholder research.
Malone, Matthew; Goeres, Darla M; Gosbell, Iain; Vickery, Karen; Jensen, Slade; Stoodley, Paul
2017-02-01
The concept of biofilms in human health and disease is now widely accepted as a cause of chronic infection. Typically, biofilms show remarkable tolerance to many forms of treatment and to the host immune response. This has led to a vast increase in research to identify new (and sometimes old) anti-biofilm strategies that demonstrate effectiveness against these tolerant phenotypes. Areas covered: Unfortunately, a standardized methodological approach to biofilm models has not been adopted, leading to a large disparity between testing conditions. This has made it almost impossible to compare data across multiple laboratories, leaving large gaps in the evidence. Furthermore, many biofilm models testing anti-biofilm strategies aimed at the medical arena have not considered relevance to the intended application, which may explain why some in vitro models fail when applied in vivo at the clinical level. Expert commentary: This review explores the issues that need to be considered in developing performance standards for anti-biofilm therapeutics and provides a rationale for the need to standardize models and methods that are clinically relevant. We also provide some rationale as to why no standards currently exist.
García-Remesal, M; Maojo, V; Billhardt, H; Crespo, J
2010-01-01
Bringing together structured and text-based sources is an exciting challenge for biomedical informaticians, since most relevant biomedical sources belong to one of these categories. In this paper we evaluate the feasibility of integrating relational and text-based biomedical sources using: i) an original logical schema acquisition method for textual databases developed by the authors, and ii) OntoFusion, a system originally designed by the authors for the integration of relational sources. We conducted an integration experiment involving a test set of seven differently structured sources covering the domain of genetic diseases. We used our logical schema acquisition method to generate schemas for all textual sources. The sources were integrated using the methods and tools provided by OntoFusion. The integration was validated using a test set of 500 queries. A panel of experts answered a questionnaire to evaluate i) the quality of the extracted schemas, ii) the query processing performance of the integrated set of sources, and iii) the relevance of the retrieved results. The results of the survey show that our method extracts coherent and representative logical schemas. Experts' feedback on the performance of the integrated system and the relevance of the retrieved results was also positive. Regarding the validation of the integration, the system successfully provided correct results for all queries in the test set. The results of the experiment suggest that text-based sources including a logical schema can be regarded as equivalent to structured databases. Using our method, previous research and existing tools designed for the integration of structured databases can be reused - possibly subject to minor modifications - to integrate differently structured sources.
Thermal Measurements of Packed Copper Wire Enable Better Electric Motor
…transmittance characterization methods both parallel and perpendicular to the axis. Measurements of apparent thermal conductivity (k_app) from all three test methods indicated that the k_app of the packed copper wire was significantly higher… methods for examining the thermal impact of new materials for winding structures relevant to motors…
2012-01-01
Background Technological advances have enabled the widespread use of video cases via web-streaming and online download as an educational medium. The use of real subjects to demonstrate acute pathology should aid the education of health care professionals. However, the methodology by which this effect may be tested is not clear. Methods We undertook a literature review of major databases, identified articles relevant to the use of patient video cases as educational interventions, extracted the methodologies used, and assessed these methods for internal and construct validity. Results A review of 2532 abstracts revealed 23 studies meeting the inclusion criteria, of which 18 were retained for the final review. Medical students were the most commonly studied group (10 articles), with a spread of learner satisfaction, knowledge and behaviour tested. Only two of the studies fulfilled defined criteria for achieving internal and construct validity. The heterogeneity of the articles meant it was not possible to perform any meta-analysis. Conclusions Previous studies have not clearly classified which facet of training or educational outcome they aimed to explore, and have had poor internal and construct validity. Future research should aim to validate a particular outcome measure, preferably by reproducing previous work rather than adopting new methods. In particular, cognitive processing enhancement, demonstrated in a number of the medical student studies, should be tested at a postgraduate level. PMID:23256787
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.
1985-01-01
A component mode synthesis method for damped structures was developed and modal test methods were explored which could be employed to determine the relevant parameters required by the component mode synthesis method. Research was conducted on the following topics: (1) Development of a generalized time-domain component mode synthesis technique for damped systems; (2) Development of a frequency-domain component mode synthesis method for damped systems; and (3) Development of a system identification algorithm applicable to general damped systems. Abstracts are presented of the major publications which have been previously issued on these topics.
NASA Astrophysics Data System (ADS)
Bobrov, A. L.
2017-08-01
This paper addresses the identification of various AE sources in order to increase the information value of the acoustic emission (AE) method. This task is especially relevant for complex test objects, where factors affecting the acoustic path significantly influence the parameters of the signals recorded by the sensor. Correlation criteria sensitive to the type of AE source in metal objects are determined in the article.
ERIC Educational Resources Information Center
Grady, Kathleen E.
1981-01-01
Presents feminist criticisms of selected aspects of research methods in psychology. Reviews data relevant to sex bias in topic selection, subject selection and single-sex designs, operationalization of variables, testing for sex differences, and interpretation of results. Suggestions for achieving more "sex fair" research methods are discussed.…
Issues and Methods for Standard-Setting.
ERIC Educational Resources Information Center
Hambleton, Ronald K.; And Others
Issues involved in standard setting along with methods for standard setting are reviewed, with specific reference to their relevance for criterion referenced testing. Definitions are given of continuum and state models, and traditional and normative standard setting procedures. Since continuum models are considered more appropriate for criterion…
Promotion of research on in vitro immunotoxicology.
Balls, M; Sabbioni, E
2001-04-10
ECVAM was established to play a leading role at the European level in the independent evaluation of the reliability and relevance of test methods and testing strategies for specific purposes, through research on advanced methods and new test development and validation, so that chemicals and products of various kinds, including medicines, vaccines, medical devices, cosmetics, household products and agricultural products, can be manufactured, transported and used more economically and more safely, whilst the current reliance on animal test procedures is progressively reduced. Nowhere is this activity more necessary than in the field of immunotoxicology, where we know that chemicals and products of many kinds have the potential to stimulate, modulate or suppress the induction or expression of various types of immune responses. The problem is to effectively evaluate the potency of these effectors and, since the available information is currently based on rather qualitative animal tests, to evaluate the true relevance of this knowledge and apply it intelligently in risk assessment processes that will protect human beings without unnecessarily limiting the development and use of materials which otherwise have economic, health and social benefits. The way forward must depend on the following: (a) a better understanding of immunotoxicological processes, based on a sounder understanding of the immune system itself (and of its network of control systems and interrelationships with other body systems); (b) the use of in vitro (not in vivo) systems based on human (not animal) cells and tissues; (c) integrated and tiered testing strategies, incorporating QSAR as well as in vitro approaches; (d) taking advantage of the use of cells or factors from humans who have been exposed to potential immunotoxins, be this voluntarily, occupationally, environmentally or by accident; and (e) the recognition that virtually everything will affect one or more aspects of the immune system at some dose level, so that deciding when such effects are relevant is the key to immunotoxicity testing. Some current ECVAM-sponsored work and activities are described.
NASA Astrophysics Data System (ADS)
Hardikar, Kedar Y.; Liu, Bill J. J.; Bheemreddy, Venkata
2016-09-01
Gaining an understanding of degradation mechanisms and their characterization is critical in developing relevant accelerated tests to ensure PV module performance warranty over a typical lifetime of 25 years. As newer technologies are adapted for PV, including new PV cell technologies, new packaging materials, and newer product designs, the availability of field data over extended periods of time for product performance assessment cannot be expected within the typical timeframe for business decisions. In this work, to enable product design decisions and product performance assessment for PV modules utilizing newer technologies, the Simulation and Mechanism based Accelerated Reliability Testing (SMART) methodology and empirical approaches to predict field performance from accelerated test results are presented. The method is demonstrated for field life assessment of flexible PV modules based on degradation mechanisms observed in two accelerated tests, namely Damp Heat and Thermal Cycling. The method is based on the design of an accelerated testing scheme with the intent to develop relevant acceleration factor models. The acceleration factor model is validated by extensive reliability testing under different conditions, going beyond the established certification standards. Once the acceleration factor model is validated for the test matrix, a modeling scheme is developed to predict field performance from the results of accelerated testing for particular failure modes of interest. Further refinement of the model can continue as more field data become available. While the demonstration of the method in this work is for thin-film flexible PV modules, the framework and methodology can be adapted to other PV products.
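The abstract validates acceleration factor models without stating them; as a hedged illustration only, the sketch below uses two generic models common in reliability work, Arrhenius for a temperature-driven Damp Heat mechanism and Coffin-Manson for Thermal Cycling. The activation energy, exponent and temperatures are hypothetical placeholders, not values from the study.

    import math

    K_B = 8.617e-5  # Boltzmann constant, eV/K

    def arrhenius_af(ea_ev, t_use_c, t_test_c):
        # Acceleration factor between a use temperature and a test temperature
        t_use, t_test = t_use_c + 273.15, t_test_c + 273.15
        return math.exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_test))

    def coffin_manson_af(dt_test_c, dt_field_c, exponent):
        # Acceleration factor from the ratio of thermal-cycle temperature swings
        return (dt_test_c / dt_field_c) ** exponent

    # Damp heat at 85 C vs. a 35 C use climate, Ea = 0.7 eV (all assumed)
    print(arrhenius_af(0.7, 35, 85))
    # -40/+85 C chamber cycles vs. a 30 C daily field swing, exponent 2.5 (assumed)
    print(coffin_manson_af(125, 30, 2.5))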
Lam, Simon C
2014-05-01
To perform detailed psychometric testing of the Compliance with Standard Precautions Scale (CSPS) in measuring clinical nurses' compliance with standard precautions, and to conduct cross-cultural pilot testing and assess the relevance of the CSPS on an international platform. A cross-sectional and correlational design with repeated measures. Nursing students from a local registered nurse training university, nurses from different hospitals in Hong Kong, and experts at an international conference. The psychometric properties of the CSPS were evaluated via internal consistency, 2-week and 3-month test-retest reliability, concurrent validation, and construct validation. The cross-cultural pilot testing and relevance check were carried out by experts on infection control from various developed and developing regions. Among 453 participants, 193 were nursing students, 165 were enrolled nurses, and 95 were registered nurses. The results showed that the CSPS had satisfactory reliability (Cronbach α = 0.73; intraclass correlation coefficient, 0.79 for 2-week test-retest and 0.74 for 3-month test-retest) and validity (optimum correlation with criterion measure, r = 0.76, P < .001; satisfactory results on known-group method and hypothesis testing). A total of 19 experts from 16 countries confirmed that most of the CSPS findings were relevant and globally applicable. The CSPS demonstrated satisfactory results on the basis of standard international criteria on psychometric testing, which ascertains the reliability and validity of this instrument in measuring clinical nurses' compliance with standard precautions. The cross-cultural pilot testing further reinforced the instrument's relevance and applicability in most developed and developing regions.
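For readers unfamiliar with the internal-consistency statistic reported above, the sketch below computes Cronbach's alpha from an item-response matrix. The data are random placeholders (the CSPS raw data are not public), so the printed value only demonstrates the calculation.

    import numpy as np

    def cronbach_alpha(x):
        # x: (n_subjects, n_items) score matrix
        k = x.shape[1]
        item_var = x.var(axis=0, ddof=1).sum()   # sum of per-item variances
        total_var = x.sum(axis=1).var(ddof=1)    # variance of total scores
        return k / (k - 1) * (1 - item_var / total_var)

    rng = np.random.default_rng(0)
    scores = rng.integers(0, 5, size=(453, 20))  # hypothetical 20-item scale
    print(round(cronbach_alpha(scores), 3))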
Testing the Fracture Behaviour of Chocolate
ERIC Educational Resources Information Center
Parsons, L. B.; Goodall, R.
2011-01-01
In teaching the materials science aspects of physics, mechanical behaviour is important due to its relevance to many practical applications. This article presents a method for experimentally examining the toughness of chocolate, including a design for a simple test rig, and a number of experiments that can be performed in the classroom. Typical…
Community-Oriented Design and Evaluation Process for Sustainable Infrastructure
We met our first objective by completing the physical infrastructure of the La Fortuna-Tule water and sanitation project using the CODE-PSI method. This physical component of the project was important in providing a real, relevant, community-scale test case for the methods ...
Feature Vector Construction Method for IRIS Recognition
NASA Astrophysics Data System (ADS)
Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.
2017-05-01
One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure, which extracts the iris texture information relevant to subsequent comparison. Thorough investigation of feature vectors obtained from the iris showed that not all vector elements are equally relevant. Two characteristics determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without regard to the origin of that instability. This work separates the sources of instability into natural and encoding-induced, which allows each source to be investigated independently. Following this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separate, preliminarily optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all prior-art methods considered in recognition accuracy on both datasets.
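A hedged sketch of the two-step scheme described above: Gabor filtering followed by phase quantization with a fragility mask that drops low-magnitude (unstable) bits. The filter frequency and the quantile threshold are illustrative choices, not the authors' optimized parameters, and the input is a random stand-in for an unwrapped iris image.

    import numpy as np
    from skimage.filters import gabor

    def iris_code(norm_iris, frequency=0.15, fragility_q=0.25):
        real, imag = gabor(norm_iris, frequency=frequency)
        code = np.stack([real > 0, imag > 0]).astype(np.uint8)  # phase quantization
        mag = np.hypot(real, imag)
        stable = mag > np.quantile(mag, fragility_q)  # mask fragile (weak) responses
        return code, np.stack([stable, stable])

    rng = np.random.default_rng(1)
    texture = rng.random((64, 512))   # placeholder for a normalized iris texture
    code, mask = iris_code(texture)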
Gray Matter Correlates of Fluid, Crystallized, and Spatial Intelligence: Testing the P-FIT Model
ERIC Educational Resources Information Center
Colom, Roberto; Haier, Richard J.; Head, Kevin; Alvarez-Linera, Juan; Quiroga, Maria Angeles; Shih, Pei Chun; Jung, Rex E.
2009-01-01
The parieto-frontal integration theory (P-FIT) nominates several areas distributed throughout the brain as relevant for intelligence. This theory was derived from previously published studies using a variety of both imaging methods and tests of cognitive ability. Here we test this theory in a new sample of young healthy adults (N = 100) using a…
Ackerman, Janet M.; Dairkee, Shanaz H.; Fenton, Suzanne E.; Johnson, Dale; Navarro, Kathleen M.; Osborne, Gwendolyn; Rudel, Ruthann A.; Solomon, Gina M.; Zeise, Lauren; Janssen, Sarah
2015-01-01
Background Current approaches to chemical screening, prioritization, and assessment are being reenvisioned, driven by innovations in chemical safety testing, new chemical regulations, and demand for information on human and environmental impacts of chemicals. To conceptualize these changes through the lens of a prevalent disease, the Breast Cancer and Chemicals Policy project convened an interdisciplinary expert panel to investigate methods for identifying chemicals that may increase breast cancer risk. Methods Based on a review of current evidence, the panel identified key biological processes whose perturbation may alter breast cancer risk. We identified corresponding assays to develop the Hazard Identification Approach for Breast Carcinogens (HIA-BC), a method for detecting chemicals that may raise breast cancer risk. Finally, we conducted a literature-based pilot test of the HIA-BC. Results The HIA-BC identifies assays capable of detecting alterations to biological processes relevant to breast cancer, including cellular and molecular events, tissue changes, and factors that alter susceptibility. In the pilot test of the HIA-BC, chemicals associated with breast cancer all demonstrated genotoxic or endocrine activity, but not necessarily both. Significant data gaps persist. Conclusions This approach could inform the development of toxicity testing that targets mechanisms relevant to breast cancer, providing a basis for identifying safer chemicals. The study identified important end points not currently evaluated by federal testing programs, including altered mammary gland development, Her2 activation, progesterone receptor activity, prolactin effects, and aspects of estrogen receptor β activity. This approach could be extended to identify the biological processes and screening methods relevant for other common diseases. Citation Schwarzman MR, Ackerman JM, Dairkee SH, Fenton SE, Johnson D, Navarro KM, Osborne G, Rudel RA, Solomon GM, Zeise L, Janssen S. 2015. Screening for chemical contributions to breast cancer risk: a case study for chemical safety evaluation. Environ Health Perspect 123:1255–1264; http://dx.doi.org/10.1289/ehp.1408337 PMID:26032647
Relevance and reliability of experimental data in human health risk assessment of pesticides.
Kaltenhäuser, Johanna; Kneuer, Carsten; Marx-Stoelting, Philip; Niemann, Lars; Schubert, Jens; Stein, Bernd; Solecki, Roland
2017-08-01
Evaluation of data relevance, reliability and contribution to uncertainty is crucial in regulatory health risk assessment if robust conclusions are to be drawn. Whether a specific study is used as key study, as additional information or not accepted depends in part on the criteria according to which its relevance and reliability are judged. In addition to GLP-compliant regulatory studies following OECD Test Guidelines, data from peer-reviewed scientific literature have to be evaluated in regulatory risk assessment of pesticide active substances. Publications should be taken into account if they are of acceptable relevance and reliability. Their contribution to the overall weight of evidence is influenced by factors including test organism, study design and statistical methods, as well as test item identification, documentation and reporting of results. Various reports make recommendations for improving the quality of risk assessments and different criteria catalogues have been published to support evaluation of data relevance and reliability. Their intention was to guide transparent decision making on the integration of the respective information into the regulatory process. This article describes an approach to assess the relevance and reliability of experimental data from guideline-compliant studies as well as from non-guideline studies published in the scientific literature in the specific context of uncertainty and risk assessment of pesticides. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Griesinger, Claudius; Desprez, Bertrand; Coecke, Sandra; Casey, Warren; Zuang, Valérie
This chapter explores the concepts, processes, tools and challenges relating to the validation of alternative methods for toxicity and safety testing. In general terms, validation is the process of assessing the appropriateness and usefulness of a tool for its intended purpose. Validation is routinely used in various contexts in science, technology, and the manufacturing and services sectors. It serves to assess the fitness-for-purpose of devices, systems and software, up to entire methodologies. In the area of toxicity testing, validation plays an indispensable role: "alternative approaches" are increasingly replacing animal models as predictive tools, and it needs to be demonstrated that these novel methods are fit for purpose. Alternative approaches include in vitro test methods, non-testing approaches such as predictive computer models, and entire testing and assessment strategies composed of method suites, data sources and decision-aiding tools. Data generated with alternative approaches are ultimately used for decision-making on public health and the protection of the environment. It is therefore essential that the underlying methods and methodologies are thoroughly characterised, assessed and transparently documented through validation studies involving impartial actors. Importantly, validation serves as a filter to ensure that only test methods able to produce data that help to address legislative requirements (e.g. the EU's REACH legislation) are accepted as official testing tools and, owing to the globalisation of markets, recognised at the international level (e.g. through inclusion in OECD test guidelines). Since validation creates a credible and transparent evidence base on test methods, it provides a quality stamp, supporting companies developing and marketing alternative methods and creating considerable business opportunities. Validation of alternative methods is conducted through scientific studies assessing two key hypotheses: the reliability and the relevance of the test method for a given purpose. Relevance encapsulates the scientific basis of the test method, its capacity to predict adverse effects in the "target system" (i.e. human health or the environment), as well as its applicability for the intended purpose. In this chapter we focus on the validation of non-animal in vitro alternative testing methods and review the concepts, challenges, processes and tools fundamental to the validation of in vitro methods intended for hazard testing of chemicals. We explore major challenges and peculiarities of validation in this area. Based on the notion that validation per se is a scientific endeavour that needs to adhere to key scientific principles, namely objectivity and appropriate choice of methodology, we examine basic aspects of study design and management, and provide illustrations of statistical approaches to describe the predictive performance of validated test methods as well as their reliability.
ERIC Educational Resources Information Center
Ramaekers, Stephan; Kremer, Wim; Pilot, Albert; van Beukelen, Peter; van Keulen, Hanno
2010-01-01
Real-life, complex problems often require that decisions are made despite limited information or insufficient time to explore all relevant aspects. Incorporating authentic uncertainties into an assessment, however, poses problems in establishing results and analysing their methodological qualities. This study aims at developing a test on clinical…
Fixed-ratio ray designs have been used for detecting and characterizing interactions of large numbers of chemicals in combination. Single chemical dose-response data are used to predict an “additivity curve” along an environmentally relevant ray. A “mixture curve” is estimated fr...
Krause, Mark A
2015-07-01
Inquiry into evolutionary adaptations has flourished since the modern synthesis of evolutionary biology. Comparative methods, genetic techniques, and various experimental and modeling approaches are used to test adaptive hypotheses. In psychology, the concept of adaptation is broadly applied and is central to comparative psychology and cognition. The concept of an adaptive specialization of learning is a proposed account for exceptions to general learning processes, as seen in studies of Pavlovian conditioning of taste aversions, sexual responses, and fear. The evidence generally consists of selective associations forming between biologically relevant conditioned and unconditioned stimuli, with conditioned responses differing in magnitude, persistence, or other measures relative to non-biologically relevant stimuli. Selective associations for biologically relevant stimuli may suggest adaptive specializations of learning, but do not necessarily confirm adaptive hypotheses as conceived of in evolutionary biology. Exceptions to general learning processes do not necessarily default to an adaptive specialization explanation, even if experimental results "make biological sense". This paper examines the degree to which hypotheses of adaptive specializations of learning in sexual and fear response systems have been tested using methodologies developed in evolutionary biology (e.g., comparative methods, quantitative and molecular genetics, survival experiments). A broader aim is to offer perspectives from evolutionary biology for testing adaptive hypotheses in psychological science.
Research on environmental impact of water-based fire extinguishing agents
NASA Astrophysics Data System (ADS)
Wang, Shuai
2018-02-01
This paper reviews the current status of the application of water-based fire extinguishing agents and the environmental and research considerations that motivate the study of their toxicity. It also offers a systematic review of the currently available test methods for assessing the toxicity and environmental impact of water-based fire extinguishing agents, illustrates the main requirements and relevant test methods, and offers some research findings for future consideration. The paper also notes the limitations of the current study.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-11
... stock(s) for subsistence uses (where relevant). The authorization must set forth the permissible methods... shooting a test pattern over an ocean bottom instrument in shallow water. This method is neither practical nor valid in water depths as great as 3,000 m (9,842.5 ft). The alternative method of conducting site...
ERIC Educational Resources Information Center
ALTMANN, BERTHOLD
After a brief summary of the test program (described more fully in LI 000 318), the statistical results, tabulated as overall "ABC (Approach by Concept)-relevance ratios" and "ABC-recall figures," are presented and reviewed. An abstract model developed in accordance with Max Weber's "Idealtypus" ("Die Objektivitaet…
Graphical method for comparative statistical study of vaccine potency tests.
Pay, T W; Hingley, P J
1984-03-01
Producers and consumers are interested in certain intrinsic characteristics of vaccine potency assays when comparatively evaluating suitable experimental designs. A graphical method is developed that represents the precision of test results, the sensitivity of such results to changes in dosage, and the relevance of the results in the way they reflect the protection afforded in the host species. The graphs can be constructed from Producer's scores and Consumer's scores on each of the scales of test score, antigen dose and probability of protection against disease. A method for calculating these scores is suggested and illustrated for single and multiple component vaccines, for tests which do or do not employ a standard reference preparation, and for tests which employ quantitative or quantal systems of scoring.
Verstraelen, Sandra; Van Rompay, An R
2018-06-01
The main objective of the CON4EI (CONsortium for in vitro Eye Irritation testing strategy) project (2015-2016) was to develop tiered, non-animal testing strategies for serious eye damage and eye irritation assessment in relation to the most important drivers of classification. The serious eye damage and eye irritation potential of a set of 80 chemicals was identified based on existing in vivo Draize eye test data, and testing was conducted using the following eight alternative test methods: BCOP (Bovine Corneal Opacity and Permeability)+histopathology, BCOP-LLBO (BCOP Laser Light-Based Opacitometer), ICE (Isolated Chicken Eye)+histopathology, STE (Short Term Exposure), EpiOcular™ EIT (EpiOcular Eye Irritation Test), EpiOcular™ ET-50 (EpiOcular™ Time-to-toxicity), SkinEthic™ HCE EIT (SkinEthic™ Human Corneal Epithelial Eye Irritation Test), and SMI (Slug Mucosal Irritation). Project management decided not to include the ICE data in this project, since the execution showed relevant and unpredictable deviations from Organisation for Economic Co-operation and Development (OECD) Test Guideline (TG) 438 and Guidance Document 160. At this stage, the outcome of these deviations has not been fully assessed. In addition to these alternative test methods, the computational models Toxtree and Case Ultra were taken into account. This project assessed the relevance of these test methods, their applicability domains and limitations in terms of 'drivers of classification', and their strengths and weaknesses. In this way, methods were identified that fit into a tiered testing strategy for serious eye damage/eye irritation assessment to distinguish United Nations Globally Harmonized System of Classification and Labelling of Chemicals (UN GHS) Category 1 (Cat 1) chemicals from non-Cat 1 chemicals, and to address the remaining gap, namely distinguishing between Category 2 (Cat 2) and Cat 1 chemicals. Copyright © 2017 Elsevier Ltd. All rights reserved.
Yang, Mingxing; Li, Xiumin; Li, Zhibin; Ou, Zhimin; Liu, Ming; Liu, Suhuan; Li, Xuejun; Yang, Shuyu
2013-01-01
DNA microarray analysis is characterized by obtaining a large number of gene variables from a small number of observations. Cluster analysis is widely used to analyze DNA microarray data to make classification and diagnosis of disease. Because there are so many irrelevant and insignificant genes in a dataset, a feature selection approach must be employed in data analysis. The performance of cluster analysis of this high-throughput data depends on whether the feature selection approach chooses the most relevant genes associated with disease classes. Here we proposed a new method using multiple Orthogonal Partial Least Squares-Discriminant Analysis (mOPLS-DA) models and S-plots to select the most relevant genes to conduct three-class disease classification and prediction. We tested our method using Golub's leukemia microarray data. For three classes with subtypes, we proposed hierarchical orthogonal partial least squares-discriminant analysis (OPLS-DA) models and S-plots to select features for two main classes and their subtypes. For three classes in parallel, we employed three OPLS-DA models and S-plots to choose marker genes for each class. The power of feature selection to classify and predict three-class disease was evaluated using cluster analysis. Further, the general performance of our method was tested using four public datasets and compared with those of four other feature selection methods. The results revealed that our method effectively selected the most relevant features for disease classification and prediction, and its performance was better than that of the other methods.
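scikit-learn has no OPLS-DA implementation, so the sketch below uses ordinary PLS regression on one-hot class labels as a stand-in, ranking genes by loading magnitude as a rough proxy for picking the extremes of an S-plot. It illustrates the feature-selection idea under those substitutions, not the authors' method; the data are simulated.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(8)
    X = rng.normal(0, 1, (60, 2000))        # 60 samples x 2000 genes (simulated)
    y = np.repeat([0, 1, 2], 20)            # three classes
    X[y == 1, :25] += 1.0                   # class-specific marker genes
    X[y == 2, 25:50] += 1.0
    Y = np.eye(3)[y]                        # one-hot encode the classes

    pls = PLSRegression(n_components=2).fit(X, Y)
    relevance = np.abs(pls.x_loadings_).max(axis=1)  # per-gene relevance proxy
    top_genes = np.argsort(-relevance)[:50]          # candidate marker genes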
Wu, John Z; Cutlip, Robert G; Welcome, Daniel; Dong, Ren G
2006-01-01
Knowledge of the viscoelastic properties of soft tissues is essential for finite element modelling of the stress/strain distributions in the finger pad during vibratory loading, which is important in exploring the mechanism of hand-arm vibration syndrome. In conventional procedures, skin and subcutaneous tissue have to be separated for testing of their viscoelastic properties. In this study, a novel method is proposed to simultaneously determine the viscoelastic properties of skin and subcutaneous tissue in uniaxial stress relaxation tests. A mathematical approach is derived to obtain the creep and relaxation characteristics of skin and subcutaneous tissue from uniaxial stress relaxation data of skin/subcutaneous composite specimens. The micro-structures of the collagen fiber networks in the soft tissue, which underlie the tissue's mechanical characteristics, remain intact in the proposed method. Therefore, the viscoelastic properties of soft tissues obtained using the proposed method should be more physiologically relevant than those obtained using the conventional method. The proposed approach was used to measure the viscoelastic properties of pig soft tissues. The relaxation curves of pig skin and subcutaneous tissue obtained in the current study agree well with those in the literature. Using the proposed approach, reliable material properties of soft tissues can be obtained in a cost- and time-efficient manner, which simultaneously improves the physiological relevance.
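The paper's composite-specimen derivation is not reproduced in the abstract; as a generic stand-in, the sketch below fits a two-term Prony series, a standard description of stress relaxation, to a synthetic relaxation curve. All moduli and time constants are made up for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def prony(t, g_inf, g1, tau1, g2, tau2):
        # Normalized relaxation modulus G(t) as a two-term Prony series
        return g_inf + g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

    t = np.linspace(0, 100, 200)                       # time, s
    true = prony(t, 0.4, 0.35, 2.0, 0.25, 30.0)        # synthetic "measurement"
    noisy = true + np.random.default_rng(6).normal(0, 0.005, t.size)

    p0 = (0.5, 0.3, 1.0, 0.2, 20.0)                    # initial guess
    params, _ = curve_fit(prony, t, noisy, p0=p0, maxfev=10000)
    print(params)                                      # recovered moduli, time constants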
A Cost Model for Testing Unmanned and Autonomous Systems of Systems
2011-02-01
those risks. In addition, the fundamental methods presented by Aranha and Borba to include the complexity and sizing of tests for UASoS can be expanded… used as an input for test execution effort estimation models (Aranha & Borba, 2007). Such methodology is very relevant to this work because, as a UASoS… calculate the test effort based on the complexity of the SoS. However, Aranha and Borba define test size as the number of steps required to complete
Hearing Aid–Related Standards and Test Systems
Ravn, Gert; Preves, David
2015-01-01
Many documents describe standardized methods and standard equipment requirements in the field of audiology and hearing aids. These standards will ensure a uniform level and a high quality of both the methods and equipment used in audiological work. The standards create the basis for measuring performance in a reproducible manner and independent from how and when and by whom parameters have been measured. This article explains, and focuses on, relevant acoustic and electromagnetic compatibility parameters and describes several test systems available. PMID:27516709
The path for incorporating new approach methods and technologies into quantitative chemical risk assessment poses a diverse set of scientific challenges. These challenges include sufficient coverage of toxicological mechanisms to meaningfully interpret negative test results, dev...
Testing Scientific Software: A Systematic Literature Review
Kanewala, Upulee; Bieman, James M.
2014-01-01
Context Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. Results We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques. PMID:25125798
Casimiro, Ana C; Vinga, Susana; Freitas, Ana T; Oliveira, Arlindo L
2008-02-07
Motif finding algorithms have improved in their ability to use computationally efficient methods to detect patterns in biological sequences. However, the posterior classification of the output still suffers from some limitations, which makes it difficult to assess the biological significance of the motifs found. Previous work has highlighted the existence of positional bias of motifs in DNA sequences, which might indicate not only that the pattern is important, but also provide hints of the positions where these patterns occur preferentially. We propose to integrate position uniformity tests and over-representation tests to improve the accuracy of the classification of motifs. Using artificial data, we compared three different statistical tests (Chi-Square, Kolmogorov-Smirnov and a Chi-Square bootstrap) to assess whether a given motif occurs uniformly in the promoter region of a gene. Using the test that performed best on this dataset, we proceeded to study the positional distribution of several well-known cis-regulatory elements in the promoter sequences of different organisms (S. cerevisiae, H. sapiens, D. melanogaster, E. coli and several Dicotyledon plants). The results show that position conservation is relevant for the transcriptional machinery. We conclude that many biologically relevant motifs appear heterogeneously distributed in the promoter region of genes, and therefore that non-uniformity is a good indicator of biological relevance and can be used to complement the over-representation tests commonly used. In this article we present the results obtained for the S. cerevisiae data sets.
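The position-uniformity idea lends itself to a short sketch: test whether motif start positions are uniform over a promoter window, here with the Kolmogorov-Smirnov and Chi-square tests named above (the bootstrap variant is omitted). The positions are simulated with a bias toward one end; the window length and bin count are arbitrary.

    import numpy as np
    from scipy.stats import kstest, chisquare

    rng = np.random.default_rng(2)
    L = 1000                                  # promoter window length (arbitrary)
    positions = np.clip(rng.normal(800, 60, 200), 0, L)  # simulated, biased motif starts

    # Kolmogorov-Smirnov test against Uniform(0, L)
    print(kstest(positions, "uniform", args=(0, L)))

    # Chi-square test on 10 equal-width bins (uniform expected counts)
    observed, _ = np.histogram(positions, bins=10, range=(0, L))
    print(chisquare(observed))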
Kreps, Gary L.
2005-01-01
Objective: This paper examines the influence of the digital divide on disparities in health outcomes for vulnerable populations, identifying implications for medical and public libraries. Method: The paper describes the results of the Digital Divide Pilot Projects demonstration research programs funded by the National Cancer Institute to test new strategies for disseminating relevant health information to underserved and at-risk audiences. Results: The Digital Divide Pilot Projects field-tested innovative systemic strategies for helping underserved populations access and utilize relevant health information to make informed health-related decisions about seeking appropriate health care and support, resisting avoidable and significant health risks, and promoting their own health. Implications: The paper builds on the Digital Divide Pilot Projects by identifying implications for developing health communication strategies that libraries can adopt to provide digital health information to vulnerable populations. PMID:16239960
Wang, Xinglong; Rak, Rafal; Restificar, Angelo; Nobata, Chikashi; Rupp, C J; Batista-Navarro, Riza Theresa B; Nawaz, Raheel; Ananiadou, Sophia
2011-10-03
The selection of relevant articles for curation, and the linking of those articles to the experimental techniques confirming the findings, became one of the primary subjects of the recent BioCreative III contest. The contest's Protein-Protein Interaction (PPI) task consisted of two sub-tasks: the Article Classification Task (ACT) and the Interaction Method Task (IMT). ACT aimed to automatically select relevant documents for PPI curation, whereas the goal of IMT was to recognise the methods used in experiments for identifying the interactions in full-text articles. We proposed and compared several classification-based methods for both tasks, employing rich contextual features as well as features extracted from external knowledge sources. For IMT, a new method that classifies pair-wise relations between every text phrase and candidate interaction method obtained promising results, with an F1 score of 64.49% as tested on the task's development dataset. We also explored ways to combine this new approach with more conventional, multi-label document classification methods. For ACT, our classifiers exploited automatically detected named entities and other linguistic information. The evaluation results on the BioCreative III PPI test datasets showed that our systems were very competitive: one of our IMT methods yielded the best performance among all participants, as measured by F1 score, the Matthews correlation coefficient and AUC iP/R; for ACT, our best classifier was ranked second as measured by AUC iP/R, and was also competitive according to other metrics. Our novel approach, which converts the multi-class, multi-label classification problem to a binary classification problem, showed much promise in IMT. Nevertheless, on the test dataset the best performance was achieved by taking the union of the output of this method and that of a multi-class, multi-label document classifier, which indicates that the two types of systems complement each other in terms of recall. For ACT, our system exploited a rich set of features and also obtained encouraging results. We examined the features with respect to their contributions to the classification results, and concluded that contextual words surrounding named entities, as well as the MeSH headings associated with the documents, were among the main contributors to the performance.
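As a much-simplified illustration of the ACT-style setup, the sketch below trains a binary relevance classifier over article text with TF-IDF features and logistic regression; it is a generic stand-in, not the authors' feature-rich system, and the documents and labels are toy placeholders.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    docs = ["protein A binds protein B in a yeast two-hybrid assay",
            "weather patterns over the Atlantic ocean",
            "co-immunoprecipitation confirmed the interaction",
            "quarterly financial results were announced"]
    labels = [1, 0, 1, 0]   # 1 = relevant for PPI curation (toy labels)

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(docs, labels)
    print(clf.predict(["pull-down assays identified two binding partners"]))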
Kandasamy, Ram; Calsbeek, Jonas J.; Morgan, Michael M.
2016-01-01
Background The assessment of nociception in preclinical studies is undergoing a transformation from pain-evoked to pain-depressed tests to more closely mimic the effects of clinical pain. Many inflammatory pain-depressed behaviors (reward seeking, locomotion) have been examined, but these tests are limited because of confounds such as stress and difficulties in quantifying behavior. New Method The present study evaluates home cage wheel running as an objective method to assess the magnitude and duration of inflammatory pain in male and female rats. Results Injection of Complete Freund’s Adjuvant (CFA) into the right hindpaw to induce inflammatory pain almost completely inhibited wheel running for 2 days in males and females. Wheel running gradually returned to baseline levels within 12 days despite persistent mechanical hypersensitivity (von Frey test). Comparison with Existing Methods Continuously monitoring home cage wheel running improves on previous studies examining inflammatory pain-depressed wheel running because it is more sensitive to noxious stimuli, avoids the stress of removing the rat from its cage for testing, and provides a complete analysis of the time course for changes in nociception. Conclusions The present data indicate that home cage wheel running is a clinically relevant method to assess inflammatory pain in the rat. The decrease in activity caused by inflammatory pain and subsequent gradual recovery mimics the changes in activity caused by pain in humans. The tendency for pain-depressed wheel running to be greater in female than male rats is consistent with the tendency for women to be at greater risk of chronic pain than men. PMID:26891874
Grady, Haiyan; Elder, David; Webster, Gregory K; Mao, Yun; Lin, Yiqing; Flanagan, Talia; Mann, James; Blanchard, Andy; Cohen, Michael J; Lin, Judy; Kesisoglou, Filippos; Hermans, Andre; Abend, Andreas; Zhang, Limin; Curran, David
2018-01-01
This article intends to summarize the current views of the IQ Consortium Dissolution Working Group, which comprises various industry companies, on the roles of dissolution testing throughout pharmaceutical product development, registration, commercialization, and beyond. Over the past 3 decades, dissolution testing has evolved from a routine and straightforward test as a component of end-product release into a comprehensive set of tools that the developer can deploy at various stages of the product life cycle. The definitions of commonly used dissolution approaches, how they relate to one another and how they may be applied in modern drug development, and life cycle management is described in this article. Specifically, this article discusses the purpose, advantages, and limitations of quality control, biorelevant, and clinically relevant dissolution methods. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Tindal, Gerald; Lee, Daesik; Geller, Leanne Ketterlin
2008-01-01
In this paper we review different methods for teachers to recommend accommodations in large scale tests. Then we present data on the stability of their judgments on variables relevant to this decision-making process. The outcomes from the judgments support the need for a more explicit model. Four general categories are presented: student…
Code of Federal Regulations, 2014 CFR
2014-01-01
...), disregarding the provisions regarding batteries and the determination, classification, and testing of relevant... per degree Fahrenheit = 8.2, and e = nominal gas or oil water heater recovery efficiency = 0.75, 5.6.1... heater recovery efficiency = 0.75. 5.6.2 Dishwashers that operate with a nominal 120 °F inlet water...
Review of Test Theory and Methods.
1981-01-01
literature, although some books, technical reports, and unpublished literature have been included where relevant. The focus of the review is on practical... 1977) and Abu-Sayf (1977) developed new versions of formula scores, and Molenaar (1977) took a Bayesian approach to correcting for random guessing. The... Snow's (1977) book on aptitude and instructional methods is a landmark review of the research on the interaction between instructional methods and
Dredged Material Analysis Tools; Performance of Acute and Chronic Sediment Toxicity Methods
2008-04-01
Steevens, Jeffery; Kennedy, Alan; Farrar, Daniel; McNemar, Cory; Reiss, Mark R.; Kropp, Roy K.; Doi, Jon; Bridges, Todd (ERDC/EL TR-08-16) ...potential advantages and disadvantages of using chronic sediment toxicity tests with relevant benthic macroinvertebrates as part of dredged material
Usefulness of component resolved analysis of cat allergy in routine clinical practice.
Eder, Katharina; Becker, Sven; San Nicoló, Marion; Berghaus, Alexander; Gröger, Moritz
2016-01-01
Cat allergy is of great importance, and its prevalence is increasing worldwide. Cat allergens and house dust mite allergens represent the major indoor allergens; however, they are ubiquitous. Cat sensitization and allergy are known risk factors for rhinitis, bronchial hyperreactivity and asthma. Thus, the diagnosis of sensitization to cats is important for any allergist. Seventy patients with positive skin prick tests for cats were retrospectively compared with regard to their skin prick test results and their specific immunoglobulin E antibody profiles in response to the native cat extract, rFel d 1, nFel d 2 and rFel d 4. Thirty-five patients were allergic to cats, as determined by positive anamnesis and/or nasal provocation with cat allergens, and 35 patients exhibited clinically non-relevant sensitization, as indicated by negative anamnesis and/or a negative nasal allergen challenge. Native cat extract serology testing detected 100% of patients who were allergic to cats but missed eight patients who showed sensitization in the skin prick test and did not have allergic symptoms. The median values of the skin prick test, as well as those of the specific immunoglobulin E antibodies against the native cat extract, were significantly higher for allergic patients than for patients with clinically non-relevant sensitization. Component-based diagnostic testing with rFel d 1 was not as reliable. Sensitization to nFel d 2 and rFel d 4 was seen only in individual patients. Extract-based diagnostic methods for identifying cat allergy and sensitization, such as the skin prick test and native cat extract serology, remain crucial in routine clinical practice. In our study, component-based diagnostic testing could not replace these methods with regard to the detection of sensitization to cats and differentiation between allergy and sensitization without clinical relevance. However, component-resolved allergy diagnostic tools have individual implications, and future studies may facilitate a better understanding of their use and subsequently may improve the clinical management of allergic patients.
De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric
2010-01-11
Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from previously published benchmarks. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when only two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.
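Shrinkage-style t tests stabilize variance estimates when replicates are few; the sketch below shows one common variant (a SAM-style fudge constant added to the per-gene standard error) rather than the specific estimator benchmarked above. The data are simulated, with 100 truly changed genes.

    import numpy as np

    rng = np.random.default_rng(7)
    a = rng.normal(0, 1, (5000, 3))    # 5000 genes, 3 replicates, condition A
    b = rng.normal(0, 1, (5000, 3))    # condition B
    b[:100] += 1.5                     # 100 truly differentially expressed genes

    diff = a.mean(1) - b.mean(1)
    se = np.sqrt((a.var(1, ddof=1) + b.var(1, ddof=1)) / 3)  # two-sample SE, n=3
    s0 = np.median(se)                 # shrinkage constant (one common choice)
    t_shrink = diff / (se + s0)        # stabilized t-like statistic
    print(np.argsort(-np.abs(t_shrink))[:10])   # top-ranked genes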
The Genus Corynebacterium and Other Medically Relevant Coryneform-Like Bacteria
2012-01-01
Catalase-positive Gram-positive bacilli, commonly called "diphtheroids" or "coryneform" bacteria, were historically nearly always dismissed as contaminants when recovered from patients, but they have increasingly been implicated as the cause of significant infections. These taxa have been underreported and are taxonomically confusing. The mechanisms of pathogenesis, especially for newly described taxa, were rarely studied, and antibiotic susceptibility data were relatively scant. In this minireview, the clinical relevance, phenotypic and genetic identification methods, matrix-assisted laser desorption ionization–time of flight (MALDI-TOF) evaluations, and antimicrobial susceptibility testing of species in the genus Corynebacterium and other medically relevant Gram-positive rods, collectively called coryneforms, are described. PMID:22837327
Subtil, Fabien; Rabilloud, Muriel
2015-07-01
Receiver operating characteristic (ROC) curves are often used to compare continuous diagnostic tests or to determine the optimal threshold of a test; however, they do not consider the costs of misclassification or the disease prevalence. The ROC graph was extended to allow for these aspects. Two new lines are added to the ROC graph: a sensitivity line and a specificity line. Their slopes depend on the disease prevalence and on the ratio of the net benefit of treating a diseased subject to the net cost of treating a nondiseased one. First, these lines help researchers determine the range of specificities within which comparison of partial areas under the curves is clinically relevant. Second, the point of the ROC curve farthest from the specificity line is shown to be the optimal threshold in terms of expected utility. This method was applied: (1) to determine the optimal threshold of the ratio of specific immunoglobulin G (IgG) to total IgG for the diagnosis of congenital toxoplasmosis, and (2) to select, between two markers, the more accurate one for the diagnosis of left ventricular hypertrophy in hypertensive subjects. The two additional lines transform the statistically valid ROC graph into a clinically relevant tool for test selection and threshold determination. Copyright © 2015 Elsevier Inc. All rights reserved.
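The decision rule above has a compact computational form: with prevalence p and benefit-to-cost ratio r, the iso-utility slope is m = (1 - p) / (p * r), and the expected-utility-optimal cutpoint maximizes sens - m * (1 - spec), i.e., the ROC point farthest from a line of slope m. The sketch below applies this to simulated scores; p and r are assumed values, and the slope formula is the standard cost-sensitive ROC result, offered here as an interpretation of the abstract rather than the authors' exact notation.

    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(3)
    y = rng.integers(0, 2, 500)                   # disease status (simulated)
    score = y * 0.8 + rng.normal(0, 0.6, 500)     # toy diagnostic marker

    fpr, tpr, thr = roc_curve(y, score)
    p, r = 0.2, 2.0               # prevalence and net benefit/cost ratio (assumed)
    m = (1 - p) / (p * r)         # slope of the iso-utility line
    best = np.argmax(tpr - m * fpr)               # farthest point from the line
    print("optimal threshold:", thr[best])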
A Comparison of Fuzzy Models in Similarity Assessment of Misregistered Area Class Maps
NASA Astrophysics Data System (ADS)
Brown, Scott
Spatial uncertainty refers to unknown error and vagueness in geographic data. It is relevant to land change and urban growth modelers, soil and biome scientists, geological surveyors and others who must assess thematic maps for similarity, or categorical agreement. In this paper I build upon prior map comparison research, testing the effectiveness of similarity measures on misregistered data. Though several methods compare uncertain thematic maps, few have been tested on misregistration. My objective is to test five map comparison methods for sensitivity to misregistration, including sub-pixel errors in both position and rotation. The methods include four fuzzy categorical models: the fuzzy kappa model, fuzzy inference, cell aggregation, and the epsilon band. The fifth method uses conventional crisp classification. I applied these methods to a case study map and simulated data in two sets: a test set with misregistration error, and a control set with equivalent uniform random error. For all five methods, I used raw accuracy or the kappa statistic to measure similarity. Rough-set epsilon bands report the greatest similarity increase in test maps relative to control data. Conversely, the fuzzy inference model reports a decrease in test map similarity.
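As a hedged illustration of why misregistration matters for crisp comparison, the sketch below computes the kappa statistic between a synthetic autocorrelated category map and a copy shifted by one cell; none of the fuzzy models from the study are implemented here.

    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(4)
    coarse = rng.integers(0, 4, size=(10, 10))            # 4 land-cover classes
    land = np.kron(coarse, np.ones((10, 10), dtype=int))  # blocky, autocorrelated map
    shifted = np.roll(land, shift=1, axis=1)              # 1-pixel misregistration

    # High but imperfect agreement: misregistration alone depresses kappa
    print(cohen_kappa_score(land.ravel(), shifted.ravel()))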
Metabolomics Approach for Toxicity Screening of Volatile Substances
In 2007 the National Research Council envisioned the need for inexpensive, high throughput, cell based toxicity testing methods relevant to human health. High Throughput Screening (HTS) in vitro screening approaches have addressed these problems by using robotics. However, the ch...
Physiological Parameters for Oral Delivery and In vitro Testing
Mudie, Deanna M.; Amidon, Gordon L.; Amidon, Gregory E.
2010-01-01
Pharmaceutical solid oral dosage forms must undergo dissolution in the intestinal fluids of the gastrointestinal tract before they can be absorbed and reach the systemic circulation. Therefore, dissolution is a critical part of the drug-delivery process. The rate and extent of drug dissolution and absorption depend on the characteristics of the active ingredient as well as properties of the dosage form. Just as importantly, characteristics of the physiological environment such as buffer species, pH, bile salts, gastric emptying rate, intestinal motility, and hydrodynamics can significantly impact dissolution and absorption. While significant progress has been made since 1970 when the first compendial dissolution test was introduced (USP Apparatus 1), current dissolution testing does not take full advantage of the extensive physiologic information that is available. For quality control purposes, where the question is one of lot-to-lot consistency in performance, using nonphysiologic test conditions that match drug and dosage form properties with practical dissolution media and apparatus may be appropriate. However, where in vitro – in vivo correlations are desired, it is logical to consider and utilize knowledge of the in vivo condition. This publication critically reviews the literature that is relevant to oral human drug delivery. Physiologically relevant information must serve as a basis for the design of dissolution test methods and systems that are more representative of the human condition. As in vitro methods advance in their physiological relevance, better in vitro - in vivo correlations will be possible. This will, in turn, lead to in vitro systems that can be utilized to more effectively design dosage forms that have improved and more consistent oral bioperformance. PMID:20822152
A novel method for unsteady flow field segmentation based on stochastic similarity of direction
NASA Astrophysics Data System (ADS)
Omata, Noriyasu; Shirayama, Susumu
2018-04-01
Recent developments in fluid dynamics research have opened up the possibility for the detailed quantitative understanding of unsteady flow fields. However, the visualization techniques currently in use generally provide only qualitative insights. A method for dividing the flow field into physically relevant regions of interest can help researchers quantify unsteady fluid behaviors. Most methods at present compare the trajectories of virtual Lagrangian particles. The time-invariant features of an unsteady flow are also frequently of interest, but the Lagrangian specification only reveals time-variant features. To address these challenges, we propose a novel method for the time-invariant spatial segmentation of an unsteady flow field. This segmentation method does not require Lagrangian particle tracking but instead quantitatively compares the stochastic models of the direction of the flow at each observed point. The proposed method is validated with several clustering tests for 3D flows past a sphere. Results show that the proposed method reveals the time-invariant, physically relevant structures of an unsteady flow.
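A minimal sketch of the segmentation idea, under assumptions: the stochastic model of direction at each grid point is reduced to a histogram of flow angles over time, and points are grouped by k-means on those histograms. The velocity field is synthetic, and the clustering choice is illustrative rather than the authors' algorithm.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    T, H, W = 200, 32, 32
    u = rng.normal(1.0, 0.3, (T, H, W))   # synthetic u(t, y, x)
    v = rng.normal(0.0, 0.3, (T, H, W))
    v[:, :, W // 2:] += 1.0               # right half flows diagonally

    theta = np.arctan2(v, u)              # direction time series per point
    bins = np.linspace(-np.pi, np.pi, 17)
    hists = np.stack([
        np.histogram(theta[:, i, j], bins=bins, density=True)[0]
        for i in range(H) for j in range(W)
    ])
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(hists).reshape(H, W)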
Krystek, Petra; Bäuerlein, Patrick S; Kooij, Pascal J F
2015-03-15
For pharmaceutical applications, the use of inorganic engineered nanoparticles is of growing interest, with silver (Ag) and gold (Au) the most relevant elements. A few methods were developed recently, but their validation and application testing were quite limited. Therefore, a routinely applicable multi-element method for identifying nanoparticles of different sizes below 100 nm and their elemental composition, using asymmetric flow field-flow fractionation (AF4) coupled to inductively coupled plasma mass spectrometry (ICPMS), is developed. A complete validation model for the quantification of releasable, pharmaceutically relevant inorganic nanoparticles, based on Ag and Au, is presented for the most relevant aqueous matrices of tap water and domestic waste water. The samples originate from locations in the Netherlands, and it is of great interest to study the unwanted presence of Ag and Au as nanoparticle residues due to possible health and environmental risks. During method development, instability effects are observed for 60 nm and 70 nm Ag ENPs with different capping agents. These effects are studied more closely in relation to matrix effects. Besides the methodological aspects, the obtained analytical results and relevant performance characteristics (e.g. measuring range, limit of detection, repeatability, reproducibility, trueness, and expanded uncertainty of measurement) are determined and discussed. For the chosen aqueous matrices, the performance characteristics are significantly better for Au ENPs than for Ag ENPs; e.g. repeatability and reproducibility are below 10% for all Au ENPs, whereas repeatability reaches a maximum of 27% for the larger Ag ENPs. The method is a promising tool for the simultaneous determination of releasable, pharmaceutically relevant inorganic nanoparticles. Copyright © 2014 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Shakuto, Elena A.; Dorozhkin, Evgenij M.; Kozlova, Anastasia A.
2016-01-01
The relevance of the subject under analysis is determined by the lack of theoretical development of the problem of managing teachers' scientific-methodical work in vocational educational institutions based upon innovative approaches within the project paradigm. The purpose of the article is to develop and test a science-based…
Good cell culture practices &in vitro toxicology.
Eskes, Chantra; Boström, Ann-Charlotte; Bowe, Gerhard; Coecke, Sandra; Hartung, Thomas; Hendriks, Giel; Pamies, David; Piton, Alain; Rovida, Costanza
2017-12-01
Good Cell Culture Practices (GCCP) are of high relevance to in vitro toxicology. The European Society of Toxicology In Vitro (ESTIV), the Center for Alternatives for Animal Testing (CAAT) and the In Vitro Toxicology Industrial Platform (IVTIP) joined forces to address, by means of an ESTIV 2016 pre-congress session, the different aspects and applications of GCCP. The covered aspects comprised the current status of the OECD guidance document on Good In Vitro Method Practices, the importance of quality assurance for new technological advances in in vitro toxicology including stem cells, and the optimized implementation of Good Manufacturing Practices and Good Laboratory Practices for regulatory testing purposes. General discussions highlighted a duality: on the one hand, the difficulty of implementing GCCP in an innovative academic research framework; on the other, the need for GCCP principles to ensure the reproducibility and robustness of in vitro test methods for toxicity testing. Indeed, while good cell culture principles are critical for all uses of in vitro test methods in toxicity testing, the level at which such principles are applied may depend on the stage of development of the test method as well as on its application, i.e., academic innovative research vs. regulatory standardized test methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
The transfer of analytical procedures.
Ermer, J; Limberger, M; Lis, K; Wätzig, H
2013-11-01
Analytical method transfers are certainly among the most discussed topics in the GMP-regulated sector. However, they are surprisingly little regulated in detail. General information is provided by USP, WHO, and ISPE in particular. Most recently, the EU emphasized the importance of analytical transfer by including it in their draft of the revised GMP Guideline. In this article, an overview and comparison of these guidelines is provided. The key to success for method transfers is excellent communication between the sending and receiving units. In order to facilitate this communication, procedures, flow charts, and checklists for responsibilities, success factors, transfer categories, the transfer plan and report, strategies in case of failed transfers, and tables with acceptance limits are provided here, together with a comprehensive glossary. Potential pitfalls are described such that they can be avoided. In order to assure an efficient and sustainable transfer of analytical procedures, a practically relevant and scientifically sound evaluation with corresponding acceptance criteria is crucial. Various strategies and statistical tools such as significance tests, absolute acceptance criteria, and equivalence tests are thoroughly described and compared in detail, with examples. Significance tests should be avoided: the success criterion is not statistical significance, but rather analytical relevance. Depending on a risk assessment of the analytical procedure in question, statistical equivalence tests are recommended, because they include both a practically relevant acceptance limit and a direct control of the statistical risks. However, for lower-risk procedures, a simple comparison of the transfer performance parameters to absolute limits is also regarded as sufficient. Copyright © 2013 Elsevier B.V. All rights reserved.
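A minimal sketch of the equivalence testing recommended above, framed as a paired two-one-sided-tests (TOST) procedure. The data, the pairing, and the acceptance limit delta are illustrative assumptions, not values from the article.

```python
# Hedged sketch of a two-one-sided-tests (TOST) equivalence check for a
# method transfer: is the mean difference between sending and receiving
# units within a pre-set, practically relevant acceptance limit +/- delta?
import numpy as np
from scipy import stats

def tost(sending, receiving, delta):
    d = np.asarray(sending) - np.asarray(receiving)    # paired differences
    se = d.std(ddof=1) / np.sqrt(len(d))
    df = len(d) - 1
    t_low = (d.mean() + delta) / se                    # H0: mean <= -delta
    t_high = (d.mean() - delta) / se                   # H0: mean >= +delta
    p = max(1 - stats.t.cdf(t_low, df), stats.t.cdf(t_high, df))
    return p    # p < 0.05 -> equivalence within +/- delta

# Illustrative assay results (% label claim) from two sites, delta = 2%:
print(tost([99.8, 100.1, 99.9, 100.3, 100.0],
           [99.9, 100.0, 100.2, 100.1, 99.8], delta=2.0))
```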
Pathway-based predictive approaches for non-animal assessment of acute inhalation toxicity.
Clippinger, Amy J; Allen, David; Behrsing, Holger; BéruBé, Kelly A; Bolger, Michael B; Casey, Warren; DeLorme, Michael; Gaça, Marianna; Gehen, Sean C; Glover, Kyle; Hayden, Patrick; Hinderliter, Paul; Hotchkiss, Jon A; Iskandar, Anita; Keyser, Brian; Luettich, Karsta; Ma-Hock, Lan; Maione, Anna G; Makena, Patrudu; Melbourne, Jodie; Milchak, Lawrence; Ng, Sheung P; Paini, Alicia; Page, Kathryn; Patlewicz, Grace; Prieto, Pilar; Raabe, Hans; Reinke, Emily N; Roper, Clive; Rose, Jane; Sharma, Monita; Spoo, Wayne; Thorne, Peter S; Wilson, Daniel M; Jarabek, Annie M
2018-06-20
New approaches are needed to assess the effects of inhaled substances on human health. These approaches will be based on mechanisms of toxicity, an understanding of dosimetry, and the use of in silico modeling and in vitro test methods. In order to accelerate wider implementation of such approaches, development of adverse outcome pathways (AOPs) can help identify and address gaps in our understanding of relevant parameters for model input and mechanisms, and optimize non-animal approaches that can be used to investigate key events of toxicity. This paper describes the AOPs and the toolbox of in vitro and in silico models that can be used to assess the key events leading to toxicity following inhalation exposure. Because the optimal testing strategy will vary depending on the substance of interest, here we present a decision tree approach to identify an appropriate non-animal integrated testing strategy that incorporates consideration of a substance's physicochemical properties, relevant mechanisms of toxicity, and available in silico models and in vitro test methods. This decision tree can facilitate standardization of the testing approaches. Case study examples are presented to provide a basis for proof-of-concept testing to illustrate the utility of non-animal approaches to inform hazard identification and risk assessment of humans exposed to inhaled substances. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
Klämpfl, Tobias G; Isbary, Georg; Shimizu, Tetsuji; Li, Yang-Fang; Zimmermann, Julia L; Stolz, Wilhelm; Schlegel, Jürgen; Morfill, Gregor E; Schmidt, Hans-Ulrich
2012-08-01
Physical cold atmospheric surface microdischarge (SMD) plasma operating in ambient air has promising properties for the sterilization of sensitive medical devices where conventional methods are not applicable. Furthermore, SMD plasma could revolutionize the field of disinfection at health care facilities. The antimicrobial effects on Gram-negative and Gram-positive bacteria of clinical relevance, as well as the fungus Candida albicans, were tested. Thirty seconds of plasma treatment led to a 4 to 6 log10 CFU reduction on agar plates. C. albicans was the hardest to inactivate. The sterilizing effect on standard bioindicators (bacterial endospores) was evaluated on dry test specimens that were wrapped in Tyvek coupons. The experimental D23°C values for Bacillus subtilis, Bacillus pumilus, Bacillus atrophaeus, and Geobacillus stearothermophilus were determined as 0.3 min, 0.5 min, 0.6 min, and 0.9 min, respectively. These decimal reduction times (D values) are distinctly lower than D values obtained with other reference methods. Importantly, the high inactivation rate was independent of the material of the test specimen. Possible inactivation mechanisms for relevant microorganisms are briefly discussed, emphasizing the important role of neutral reactive plasma species and pointing to recent diagnostic methods that will contribute to a better understanding of the strong biocidal effect of SMD air plasma.
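For readers unfamiliar with decimal reduction times, a short worked example: D is the exposure time for a one-log10 drop in viable count, so D = t / (log10 N0 - log10 Nt). The counts below are illustrative, not data from the study.

```python
# Worked example: decimal reduction time (D value) from a survival count.
import math

def d_value(t_min, n0, nt):
    """Exposure time divided by the achieved log10 reduction."""
    return t_min / (math.log10(n0) - math.log10(nt))

# A 3-log10 reduction (1e6 -> 1e3 CFU) in 0.9 min gives D = 0.3 min,
# the order of magnitude reported above for B. subtilis.
print(d_value(0.9, 1e6, 1e3))
```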
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, W. Geoffrey; Gray, David Clinton
Purpose: To introduce the Joint Commission's requirements for annual diagnostic physics testing of all nuclear medicine equipment, effective 7/1/2014, and to highlight an acceptable methodology for testing low-contrast resolution of the nuclear medicine imaging system. Methods: The Joint Commission's required diagnostic physics evaluations are to be conducted for all of the image types produced clinically by each scanner. Other accrediting bodies, such as the ACR and the IAC, have similar imaging metrics, but do not emphasize testing low-contrast resolution as it relates clinically. The proposed method for testing low-contrast resolution introduces quantitative metrics that are clinically relevant. The acquisition protocol and calculation of contrast levels utilize a modified version of the protocol defined in AAPM Report #52. Results: Using the Rose criterion for lesion detection with SNR_pixel = 4.335 and CNR_lesion = 4, the minimum contrast levels for 25.4 mm and 31.8 mm cold spheres were calculated to be 0.317 and 0.283, respectively. These contrast levels are the minimum threshold that must be attained to guard against false-positive lesion detection. Conclusion: Low-contrast resolution, or detectability, can be properly tested in a manner that is clinically relevant by measuring the contrast level of cold spheres within a Jaszczak phantom using pixel values within ROIs placed in the background and cold-sphere regions. The measured contrast levels are then compared to a minimum threshold calculated using the Rose criterion and CNR_lesion = 4. The measured contrast levels must either meet or exceed this minimum threshold to prove acceptable lesion detectability. This research and development activity was performed by the authors while employed at West Physics Consulting, LLC. It is presented with the consent of West Physics, which has authorized the dissemination of the information and/or techniques described in the work.
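A hedged sketch of the contrast check described above: compute the cold-sphere contrast from ROI pixel means and compare it with the Rose-derived minimum threshold. The ROI statistics are simulated; only the 0.317 threshold for the 25.4 mm sphere is taken from the abstract.

```python
# Hedged sketch: cold-sphere contrast from ROI means vs a minimum threshold.
import numpy as np

def cold_sphere_contrast(bg_roi, sphere_roi):
    """Contrast of a cold sphere: (background - sphere) / background."""
    bg, sp = np.mean(bg_roi), np.mean(sphere_roi)
    return (bg - sp) / bg

def passes(contrast, threshold):
    return contrast >= threshold  # must meet or exceed the Rose-derived minimum

rng = np.random.default_rng(1)
bg = rng.normal(100, 5, 500)      # simulated background ROI pixel values
sp = rng.normal(65, 5, 120)       # simulated 25.4 mm cold-sphere ROI values
c = cold_sphere_contrast(bg, sp)
print(round(c, 3), passes(c, 0.317))  # compare to the 25.4 mm threshold above
```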
ToxCast, the United States Environmental Protection Agency’s chemical prioritization research program, is developing methods for utilizing computational chemistry and bioactivity profiling to predict potential for toxicity and prioritize limited testing resources (www.epa.gov/toc...
Evaluation of Low-Tech Indoor Remediation Methods ...
This study identified, collected, evaluated, and summarized available articles, reports, guidance documents, and other pertinent information related to common housekeeping activities within the United States. This resulted in a summary compendium of relevant information about multiple low-tech cleaning methods drawn from the literature search results. Through discussion and prioritization, an EPA project team, made up of several EPA scientists and emergency responders, narrowed the information down to a list of 14 housekeeping activities for decontamination evaluation testing. These types of activities are collectively referred to as "low-tech" remediation methods because of the comparatively simple tools, equipment, and operations involved. Similarly, eight common household surfaces were chosen and contaminated under three different contamination conditions. Thirty-three combinations of methods and surfaces were chosen for testing under the three contamination conditions, for a total of 99 tests.
A new IRT-based standard setting method: application to eCat-listening.
García, Pablo Eduardo; Abad, Francisco José; Olea, Julio; Aguado, David
2013-01-01
Criterion-referenced interpretations of tests are highly necessary, but they usually involve the difficult task of establishing cut scores. In contrast to other Item Response Theory (IRT)-based standard setting methods, a non-judgmental approach is proposed in this study, in which Item Characteristic Curve (ICC) transformations lead to the final cut scores. eCat-Listening, a computerized adaptive test for the evaluation of English Listening, was administered to 1,576 participants, and the proposed standard setting method was applied to classify them into the performance standards of the Common European Framework of Reference for Languages (CEFR). The results showed a classification closely related to relevant external measures of the English language domain, according to the CEFR. It is concluded that the proposed method is a practical and valid standard setting alternative for IRT-based test interpretation.
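As background on working with Item Characteristic Curves, here is a small sketch of the three-parameter logistic ICC and its inversion; the actual ICC transformation the authors use to derive cut scores may differ.

```python
# Hedged illustration: a 3PL Item Characteristic Curve and the ability level
# at which an item reaches a chosen mastery probability.
import math

def icc_3pl(theta, a, b, c):
    """P(correct | theta) under the three-parameter logistic model."""
    return c + (1 - c) / (1 + math.exp(-1.7 * a * (theta - b)))

def theta_at_probability(p, a, b, c):
    """Invert the 3PL ICC: ability at which P(correct) = p (requires p > c)."""
    return b - math.log((1 - c) / (p - c) - 1) / (1.7 * a)

print(icc_3pl(0.0, a=1.2, b=-0.5, c=0.2))            # P(correct) at theta = 0
print(theta_at_probability(0.7, a=1.2, b=-0.5, c=0.2))
```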
Comparison of Different EHG Feature Selection Methods for the Detection of Preterm Labor
Alamedine, D.; Khalil, M.; Marque, C.
2013-01-01
Numerous types of linear and nonlinear features have been extracted from the electrohysterogram (EHG) in order to classify labor and pregnancy contractions. As a result, the number of available features is now very large. The goal of this study is to reduce the number of features by selecting only the relevant ones which are useful for solving the classification problem. This paper presents three methods for feature subset selection that can be applied to choose the best subsets for classifying labor and pregnancy contractions: an algorithm using the Jeffrey divergence (JD) distance, a sequential forward selection (SFS) algorithm, and a binary particle swarm optimization (BPSO) algorithm. The last two methods are classifier-based and were tested with three types of classifiers. These methods have allowed us to identify common features which are relevant for contraction classification. PMID:24454536
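A minimal sketch of the first selection criterion, the Jeffrey divergence (the symmetrised Kullback-Leibler divergence) between the per-class distributions of a single feature; the binning scheme and toy data are assumptions.

```python
# Hedged sketch: Jeffrey divergence as a feature-relevance score.
import numpy as np

def jeffrey_divergence(x_labor, x_pregnancy, bins=20):
    lo = min(x_labor.min(), x_pregnancy.min())
    hi = max(x_labor.max(), x_pregnancy.max())
    p, _ = np.histogram(x_labor, bins=bins, range=(lo, hi))
    q, _ = np.histogram(x_pregnancy, bins=bins, range=(lo, hi))
    p = (p + 1e-12) / (p.sum() + bins * 1e-12)   # smooth and normalise
    q = (q + 1e-12) / (q.sum() + bins * 1e-12)
    return np.sum((p - q) * np.log(p / q))       # KL(p||q) + KL(q||p)

rng = np.random.default_rng(2)
# Toy feature values for the two contraction classes:
print(jeffrey_divergence(rng.normal(1.0, 1, 300), rng.normal(0.0, 1, 300)))
```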
Is a specific eyelid patch test series useful? Results of a French prospective study.
Assier, Haudrey; Tetart, Florence; Avenel-Audran, Martine; Barbaud, Annick; Ferrier-le Bouëdec, Marie-Christine; Giordano-Labadie, Françoise; Milpied, Brigitte; Amsler, Emmanuelle; Collet, Evelyne; Girardin, Pascal; Soria, Angèle; Waton, Julie; Truchetet, François; Bourrain, Jean-Luc; Gener, Gwendeline; Bernier, Claire; Raison-Peyron, Nadia
2018-06-08
Eyelids are frequent sites of contact dermatitis. No prospective study focused on eyelid allergic contact dermatitis (EACD) has yet been published, and this topic has never been studied in French patients. To prospectively evaluate the usefulness of an eyelid series in French patients patch tested because of EACD, and to describe these patients. We prospectively analysed standardized data for all patients referred to our departments between September 2014 and August 2016 for patch testing with suspected EACD as the main reason. All patients were patch tested with an eyelid series, the European baseline series (EBS), the French additional series, and their personal products. Patch testing with additional series and repeated open application tests (ROATs) or open tests were performed if necessary. A standardized assessment of relevance was used, and the analysis of the results focused on patients with positive test results of current, certain relevance. Two hundred and sixty-four patients (238 women and 26 men) were included. Three hundred and twenty-two tests gave positive results in 167 patients, 84 of whom had currently relevant reactions: 56 had currently relevant positive test reactions to the EBS, 16 to their personal products, 8 to the French additional series, and 4 to the eyelid series. Sixty-seven per cent of all relevant cases were related to cosmetic products. The most frequent allergens with current relevance were methylisothiazolinone (10.2%), fragrance mix I (3%), nickel (2.7%), hydroperoxides of linalool (2.7%) and of limonene (2.3%), and Myroxylon pereirae (2.3%). Current atopic dermatitis was found in 9.5% of patients. The duration of dermatitis was shorter (23.2 vs 34.2 months; P = .035) in patients with currently relevant test reactions. The percentage of currently relevant tests remained the same when atopic patients or dermatitis localized only to the eyelids were taken into account. In French patients, testing for EACD with the extended baseline series and personal products, also including ROATs and use tests, appears to be adequate, considering the currently relevant positive test reactions. The regular addition of an eyelid series does not seem to be necessary. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Frassinetti, L.; Olofsson, K. E. J.; Fridström, R.; Setiadi, A. C.; Brunsell, P. R.; Volpe, F. A.; Drake, J.
2013-08-01
A new method for estimating the wall diffusion time of non-axisymmetric fields is developed. The method, based on rotating external fields and on measurement of the wall frequency response, is developed and tested in EXTRAP T2R. It allows the experimental estimation of the wall diffusion time for each Fourier harmonic and of the toroidal asymmetries of the wall diffusion. The method intrinsically accounts for the effects of three-dimensional structures and of the shell gaps. Far from the gaps, experimental results are in good agreement with the diffusion time estimated with a simple cylindrical model that assumes a homogeneous wall. The method is also applied with non-standard configurations of the coil array, in order to mimic tokamak-relevant settings with partial wall coverage and active coils of large toroidal extent. The comparison with the full-coverage results shows good agreement if the effects of the relevant sidebands are considered.
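For scale, a back-of-the-envelope version of the simple cylindrical model mentioned above: one commonly quoted thin-shell estimate of the wall diffusion time is tau ~ mu0*sigma*d*b/2 for a homogeneous shell of radius b, thickness d, and conductivity sigma. The formula choice and the numbers are assumptions for illustration, not EXTRAP T2R parameters.

```python
# Hedged thin-shell wall-time estimate (homogeneous cylindrical shell).
from math import pi

MU0 = 4 * pi * 1e-7            # vacuum permeability (H/m)

def thin_shell_tau(sigma, d, b):
    """tau ~ mu0 * sigma * d * b / 2; sigma in S/m, d and b in metres."""
    return MU0 * sigma * d * b / 2

# Illustrative stainless-steel-like shell: sigma = 1.4e6 S/m, d = 0.5 mm, b = 0.2 m
print(thin_shell_tau(sigma=1.4e6, d=0.5e-3, b=0.2) * 1e3, "ms")
```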
Borba, Alexandre Meireles; José da Silva, Everton; Fernandes da Silva, André Luis; Han, Michael D; da Graça Naclério-Homem, Maria; Miloro, Michael
2018-01-12
To verify predicted versus obtained surgical movements in 2-dimensional (2D) and 3-dimensional (3D) measurements and compare the equivalence between these methods. A retrospective observational study of bimaxillary orthognathic surgeries was performed. Postoperative cone-beam computed tomographic (CBCT) scans were superimposed on preoperative scans and a lateral cephalometric radiograph was generated from each CBCT scan. After identification of the sella, nasion, and upper central incisor tip landmarks on 2D and 3D images, actual and planned movements were compared by cephalometric measurements. One-sample t test was used to statistically evaluate results, with expected mean discrepancy values ranging from 0 to 2 mm. Equivalence of 2D and 3D values was compared using paired t test. The final sample of 46 cases showed by 2D cephalometry that differences between actual and planned movements in the horizontal axis were statistically relevant for expected means of 0, 0.5, and 2 mm without relevance for expected means of 1 and 1.5 mm; vertical movements were statistically relevant for expected means of 0 and 0.5 mm without relevance for expected means of 1, 1.5, and 2 mm. For 3D cephalometry in the horizontal axis, there were statistically relevant differences for expected means of 0, 1.5, and 2 mm without relevance for expected means of 0.5 and 1 mm; vertical movements showed statistically relevant differences for expected means of 0, 0.5, 1.5 and 2 mm without relevance for the expected mean of 1 mm. Comparison of 2D and 3D values displayed statistical differences for the horizontal and vertical axes. Comparison of 2D and 3D surgical outcome assessments should be performed with caution because there seems to be a difference in acceptable levels of accuracy between these 2 methods of evaluation. Moreover, 3D accuracy studies should no longer rely on a 2-mm level of discrepancy but on a 1-mm level. Copyright © 2018 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
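A hedged sketch of the statistical comparisons described above: one-sample t-tests of the planned-versus-actual discrepancies against expected means of 0-2 mm, plus a paired t-test between the 2D and 3D values. The discrepancies are simulated, not study data.

```python
# Hedged sketch of the accuracy analysis: one-sample and paired t-tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
d2 = np.abs(rng.normal(1.1, 0.6, 46))   # simulated 2D discrepancies (mm)
d3 = np.abs(rng.normal(0.8, 0.5, 46))   # simulated 3D discrepancies (mm)

for mu in (0.0, 0.5, 1.0, 1.5, 2.0):
    t, p = stats.ttest_1samp(d2, mu)    # test against each expected mean
    print(f"2D vs expected mean {mu} mm: p = {p:.3f}")

t, p = stats.ttest_rel(d2, d3)          # are the 2D and 3D values equivalent?
print(f"2D vs 3D paired t-test: p = {p:.3f}")
```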
2009-11-01
relevance feedback algorithm. Four methods, εMap [1], MapA, P10A, and StatAP [2], were used in the track to measure the performance of Phase 2 runs. εMap and StatAP were applied to the runs using the testing set of only ClueWeb09 Category-B, whereas MapA and P10A were applied to those using the whole ClueWeb09 English set. Because our experiments were based on only ClueWeb09 Category-B, measuring our performance by MapA and P10A might not…
Günther, F; Scherrer, M; Kaiser, S J; DeRosa, A; Mutters, N T
2017-03-01
The aim of our study was to develop a new reproducible method for disinfectant efficacy testing on bacterial biofilms and to evaluate the efficacy of different disinfectants against biofilms. Clinical multidrug-resistant strains were chosen as test isolates to ensure practical relevance. We compared the standard qualitative suspension assay for disinfectant testing, which does not take biofilm formation into account, with the new biofilm viability assay, which uses kinetic analysis of metabolic activity in biofilms after disinfectant exposure to evaluate disinfectant efficacy. In addition, the efficacy of four standard disinfectants against clinical isolates was tested using both methods. All tested disinfectants were effective against test isolates in the planktonic state using the standard qualitative suspension assay, while the disinfectants were only weakly effective against bacteria in biofilms. Disinfectant efficacy testing on planktonic organisms ignores biofilms and overestimates the disinfectant susceptibility of bacteria. However, biofilm formation, e.g. on medical devices or hospital surfaces, is the natural state of bacterial life and needs to be considered in disinfectant testing. Although bacterial biofilms are the predominant manner of bacterial colonization, most standard procedures for antimicrobial susceptibility testing and efficacy testing of disinfectants are adapted for application to planktonic bacteria. To our knowledge, this is the first study to use a newly developed microplate-based biofilm test system that uses kinetic analysis of the metabolic activity in biofilms, after disinfectant exposure, to evaluate disinfectant efficacy. Our study shows that findings obtained from disinfectant efficacy testing on planktonic bacteria cannot be extrapolated to predict disinfectant efficacy on bacterial biofilms of clinically relevant multidrug-resistant organisms. © 2016 The Society for Applied Microbiology.
Chapter 3: choosing the important outcomes for a systematic review of a medical test.
Segal, Jodi B
2012-06-01
In this chapter of the Evidence-based Practice Centers Methods Guide for Medical Tests, we describe how the decision to use a medical test generates a broad range of outcomes and that each of these outcomes should be considered for inclusion in a systematic review. Awareness of these varied outcomes affects how a decision maker balances the benefits and risks of the test; therefore, a systematic review should present the evidence on these diverse outcomes. The key outcome categories include clinical management outcomes and direct health effects; emotional, social, cognitive, and behavioral responses to testing; legal and ethical outcomes, and costs. We describe the challenges of incorporating these outcomes in a systematic review, suggest a framework for generating potential outcomes for inclusion, and describe the role of stakeholders in choosing the outcomes for study. Finally, we give examples of systematic reviews that either included a range of outcomes or that might have done so. The following are the key messages in this chapter: Consider both the outcomes that are relevant to the process of testing and those that are relevant to the results of the test. Consider inclusion of outcomes in all five domains: clinical management effects, direct test effects; emotional, social, cognitive and behavioral effects; legal and ethical effects, and costs. Consider to which group the outcomes of testing are most relevant. Given resource limitations, prioritize which outcomes to include. This decision depends on the needs of the stakeholder(s), who should be assisted in prioritizing the outcomes for inclusion.
ERIC Educational Resources Information Center
Varlamova, Elena V.; Naciscione, Anita; Tulusina, Elena A.
2016-01-01
The relevance of the issue examined in this article stems from the lack of research devoted to methods of teaching English and German collocations. The aim of our work is to determine, through experimental testing, methods of teaching English and German collocations to Russian university students studying foreign languages.…
In Vitro Toxicity Screening Technique for Volatile Substances Using Flow-Through System
In 2007, the National Research Council envisioned the need for inexpensive, rapid, cell-based toxicity testing methods relevant to human health. in vitro screening approaches have largely addressed these problems by using robotics and automation. However, the challenge is that ma...
V-TECS Guide for Commercial Foods.
ERIC Educational Resources Information Center
Elliott, Ronald T.; Benson, Robert T.
This guide is designed to provide job-relevant tasks, performance objectives, performance guides, resources, teaching activities, evaluation standards, and achievement testing for commercial foods occupations. It can be used with any teaching method, and it addresses all three domains of learning: psychomotor, cognitive, and affective. The guide…
In Vitro Toxicity Screening Technique for Volatile Substances Using Flow-Through System
In 2007 the National Research Council envisioned the need for inexpensive, rapid, cell based toxicity testing methods relevant to human health. Recent advances in robotics, automation, and miniaturization have been used to address these problems. However, one challenge is that ma...
In Vitro Toxicity Screening Technique for Volatile Substances Using Flow-Through System
In 2007 the National Research Council envisioned the need for inexpensive, rapid, cell based toxicity testing methods relevant to human health. Recent advances in robotics, automation, and miniaturization have been used to address this challenge. However, one drawback to currentl...
In Vitro Toxicity Screening Technique for Volatile Substances Using Flow-Through System
In 2007 the National Research Council envisioned the need for inexpensive, high throughput, cell based toxicity testing methods relevant to human health. High Throughput Screening (HTS) in vitro screening approaches have addressed these problems by using robotics. However the cha...
Sahadevan, S; Hofmann-Apitius, M; Schellander, K; Tesfaye, D; Fluck, J; Friedrich, C M
2012-10-01
In biological research, establishing the prior art by searching and collecting information already present in the domain is as important as the experiments themselves. To obtain a complete overview of the relevant knowledge, researchers mainly rely on 2 major information sources: i) various biological databases and ii) scientific publications in the field. The major difference between the 2 information sources is that information from databases is readily available, typically well structured and condensed. The information content in scientific literature is vastly unstructured; that is, dispersed among the many different sections of scientific text. The traditional method of information extraction from scientific literature consists of generating a list of relevant publications in the field of interest and manually scanning these texts for relevant information, which is very time consuming. It is more than likely that with this "classical" approach the researcher misses some relevant information mentioned in the literature or has to go through biological databases to extract further information. Text mining and named entity recognition methods have already been used in human genomics and related fields as a solution to this problem. These methods can process and extract information from large volumes of scientific text. Text mining is defined as the automatic extraction of previously unknown and potentially useful information from text. Named entity recognition (NER) is defined as the method of identifying named entities (names of real-world objects; for example, gene/protein names, drugs, enzymes) in text. In animal sciences, text mining and related methods have been only briefly used, in murine genomics and associated fields, leaving behind other fields of animal sciences such as livestock genomics. The aim of this work was to develop an information retrieval platform in the livestock domain, focusing on livestock publications and the recognition of relevant data from cattle and pigs. For this purpose, the rather noncomprehensive resources of pig and cattle gene and protein terminologies were enriched with orthologue synonyms and integrated in the NER platform ProMiner, which is successfully used in the human genomics domain. Based on the performance tests done, the present system achieved a fair performance with precision of 0.64, recall of 0.74, and an F1 measure of 0.69 in a test scenario based on cattle literature.
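A quick arithmetic check on the reported performance: the F1 measure is the harmonic mean of precision and recall.

```python
# Worked check of the reported figures: F1 as the harmonic mean.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.64, 0.74), 2))   # -> 0.69, as reported above
```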
Development of a Pulsed Pressure-Based Technique for Cavitation Damage Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Fei; Wang, Jy-An John; Liu, Yun
2012-01-01
Cavitation occurs in many fluid systems and can lead to severe material damage. To assist the study of cavitation damage, a novel testing method utilizing pulsed pressure was developed. In this talk, the scientific background and technical approach of this development are presented, and preliminary testing results are discussed. It is expected that this technique can be used to evaluate cavitation damage under various testing conditions, including harsh environments such as those relevant to geothermal power generation.
Sturm, Gunter J.; Jin, Chunsheng; Kranzelbinder, Bettina; Hemmer, Wolfgang; Sturm, Eva M.; Griesbacher, Antonia; Heinemann, Akos; Vollmann, Jutta; Altmann, Friedrich; Crailsheim, Karl; Focke, Margarete; Aberer, Werner
2011-01-01
Background Double sensitization (DS) to bee and vespid venom is frequently observed in the diagnosis of hymenoptera venom allergy, but clinically relevant DS is rare. It is therefore difficult to choose the relevant venom for specific immunotherapy, and overtreatment with both venoms may occur. We aimed to compare currently available routine diagnostic tests as well as experimental tests to identify the most accurate diagnostic tool. Methods 117 patients with a history of a bee or vespid allergy were included in the study. Initially, IgE determination by the ImmunoCAP, by the Immulite, and by the ADVIA Centaur, as well as the intradermal test (IDT) and the basophil activation test (BAT), were performed. In 72 CAP double-positive patients, individual IgE patterns were determined by western blot inhibition and component-resolved diagnosis (CRD) with rApi m 1, nVes v 1, and nVes v 5. Results Among 117 patients, DS was observed in 63.7% by the Immulite, in 61.5% by the CAP, in 47.9% by the IDT, in 20.5% by the ADVIA, and in 17.1% by the BAT. In CAP double-positive patients, western blot inhibition revealed CCD-based DS in 50.8%, and the CRD showed 41.7% of patients with true DS. Generally, agreement between the tests was only fair, and inconsistent results were common. Conclusion BAT, CRD, and ADVIA showed a low rate of DS. However, the rate of DS is higher than expected from personal history, indicating that the matter of clinical relevance is still not solved even by novel tests. Furthermore, the lack of agreement between these tests makes it difficult to distinguish between bee and vespid venom allergy. At present, no routinely employed test can be regarded as a gold standard for finding the clinically relevant sensitization. PMID:21698247
Nettleton, D; Muñiz, J
2001-09-01
In this article, we revise and try to resolve some of the problems inherent in questionnaire screening of sleep apnea cases and apnea diagnosis based on attributes which are relevant and reliable. We present a way of learning information about the relevance of the data, comparing this with the definition of the information by the medical expert. We generate a predictive data model using a data aggregation operator which takes relevance and reliability information about the data into account to produce a diagnosis for each case. We also introduce a grade of membership for each question response which allows the patient to indicate a level of confidence or doubt in their own judgement. The method is tested with data collected from patients in a Sleep Clinic using questionnaires specially designed for the study. Other artificial intelligence predictive modeling algorithms are also tested on the same data and their predictive accuracy compared to that of the aggregation operator.
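A hedged sketch of a relevance- and reliability-aware aggregation in the spirit described above: a weighted mean in which each question response is down-weighted by the patient's stated doubt. The weighting scheme, values, and threshold are assumptions; the paper's operator may be more sophisticated.

```python
# Hedged sketch: aggregate questionnaire responses using relevance weights
# and the patient's own confidence grades.
import numpy as np

def aggregate(responses, relevance, confidence):
    """responses: symptom scores in [0, 1]; relevance: expert/learned weights;
    confidence: patient's certainty in each answer, in [0, 1]."""
    w = np.asarray(relevance) * np.asarray(confidence)
    return float(np.dot(w, responses) / w.sum())

score = aggregate(responses=[0.9, 0.4, 0.7],
                  relevance=[0.5, 0.2, 0.3],
                  confidence=[1.0, 0.6, 0.8])
print(score > 0.6)   # example screening threshold, also an assumption
```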
O'Gorman, Susan M; Torgerson, Rochelle R
2016-07-01
Recalcitrant non-actinic cheilitis may indicate contact allergy. This study aimed to determine the prevalence of allergic contact cheilitis (ACC) in patients with non-actinic cheilitis and to identify the most relevant allergens. We used an institutional database to identify patients with non-actinic cheilitis who underwent patch testing between January 1, 2001, and August 31, 2011, and conducted a retrospective review of patch test results in these patients. Additional data were obtained from institutional electronic medical records. Ninety-one patients (70 [77%] female; mean age: 51 years) were included in the study. Almost half (41 [45%]) had a final diagnosis of ACC. Patch testing was performed in line with universally accepted methods, with application on day 1, allergen removal and an initial reading on day 3, and the final reading on day 5. The allergens of most significance were fragrance mix, Myroxylon pereirae resin, dodecyl gallate, octyl gallate, and benzoic acid. Nickel was the most relevant metal allergen. Contact allergy is an important consideration in recalcitrant cheilitis. Fragrances, antioxidants, and preservatives dominated the list of relevant allergens in our patients. Nickel and gold were among the top 10 allergens. Almost half (45%) of these patients had a final diagnosis of ACC. Patch testing beyond the oral complete series should be undertaken in any investigation of non-actinic cheilitis. © 2015 The International Society of Dermatology.
Microbicide safety/efficacy studies in animals: macaques and small animal models.
Veazey, Ronald S
2008-09-01
A number of microbicide candidates have failed to prevent HIV transmission in human clinical trials, and there is uncertainty as to how many additional trials can be supported by the field. Regardless, there are far too many microbicide candidates in development, and a logical and consistent method for screening and selecting candidates for human clinical trials is desperately needed. The unique host and cell specificity of HIV, however, poses challenges for microbicide safety and efficacy screening that can only be addressed by rigorous testing in relevant laboratory animal models. A number of laboratory animal model systems, ranging from rodents to nonhuman primates, and single- versus multiple-dose challenges, have recently been developed to test microbicide candidates. These models have shed light on both the safety and efficacy of candidate microbicides as well as the early mechanisms involved in transmission. This article summarizes the major advantages and disadvantages of the relevant animal models for microbicide safety and efficacy testing. Currently, nonhuman primates are the only relevant and effective laboratory model for screening microbicide candidates. Given the consistent failures of prior strategies, it is now clear that rigorous safety and efficacy testing in nonhuman primates should be a prerequisite for advancing additional microbicide candidates to human clinical trials.
A 'difficult' insect allergy patient: reliable history of a sting, but all testing negative.
Tracy, James M; Olsen, Jonathan A; Carlson, John
2015-08-01
Few conditions are as treatable as allergy to stinging insects, with venom immunotherapy (VIT) providing up to 98% protection against subsequent stings. The challenge with VIT is not in the treatment, but in the diagnosis. To offer VIT, one must determine a history of a systemic reaction to a stinging insect in conjunction with the presence of venom-specific IgE. Current diagnostic methods, although sensitive and specific, are imperfect, and some newer testing options are not widely available. A conundrum occasionally faced is the patient with a reliable and compelling history of a systemic allergic reaction yet negative venom-specific testing. This diagnostic dilemma presents an opportunity to consider possible causes for this diagnostic challenge. Our evolving understanding of the role of occult mast cell disease may begin to help us understand this situation and develop appropriate management strategies. Venom-specific skin testing has long been the cornerstone of the evaluation of venom sensitivity and is often combined with in-vitro assays to add clarity, but even these occasionally may fall short. Exploring novel venom diagnostic testing methods may help to fill in some of the diagnostic gaps. Do currently available venom vaccines contain all the key venom species? Are there enough differences between insect species that we may simply be missing the relevant allergens? What is the significance of the antigenicity of carbohydrate moieties in venoms? What is the role of recombinant venom extracts? VIT is the definitive treatment for insect-allergic individuals. To utilize VIT, identification of the relevant Hymenoptera is necessary. Unfortunately, this cannot always be accomplished. This deficiency can have several causes: a potential comorbid condition such as occult mast cell disease, limitations of currently available diagnostic resources, or testing vaccines with insufficient coverage of relevant venom allergens. Exploring these potential causes may help to provide important insight into this important diagnostic conundrum. The use of a case report may help clarify this challenge.
Search by photo methodology for signature properties assessment by human observers
NASA Astrophysics Data System (ADS)
Selj, Gorm K.; Heinrich, Daniela H.
2015-05-01
Reliable, low-cost and simple methods for assessing signature properties for military purposes are very important. In this paper we present such an approach, which uses human observers in a search-by-photo assessment of the signature properties of generic test targets. The method was carried out by logging a large number of detection times for targets recorded against relevant terrain backgrounds. The detection times were harvested by having human observers search for targets in scene images shown on a high-definition PC screen. All targets were identically located in each "search image", allowing relative comparisons (and not just rank ordering) of targets. To avoid biased detections, each observer only searched for one target per scene. Statistical analyses were carried out on the detection-time data. Analysis of variance was chosen if the detection-time distributions associated with all targets satisfied normality, and non-parametric tests, such as the Wilcoxon rank test, otherwise. The new methodology allows assessment of signature properties in a reproducible, rapid and reliable setting. Such assessments are very complex, as they must sort out what is of relevance in a signature test without losing information of value. We believe that choosing detection time as the primary variable for a comparison of signature properties allows a careful and necessary inspection of observer data, as the variable is continuous rather than discrete. Our method thus stands in opposition to approaches based on detections by subsequent, stepwise reductions in distance to target, or based on probability of detection.
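A minimal sketch of the analysis path described above: check normality of the detection-time samples, then use ANOVA if it holds and a Wilcoxon rank-sum test otherwise. The detection times are simulated.

```python
# Hedged sketch: normality check, then parametric or rank-based comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
t_a = rng.lognormal(mean=2.0, sigma=0.5, size=40)  # target A detection times (s)
t_b = rng.lognormal(mean=2.4, sigma=0.5, size=40)  # target B detection times (s)

normal = all(stats.shapiro(t)[1] > 0.05 for t in (t_a, t_b))
if normal:
    stat, p = stats.f_oneway(t_a, t_b)    # ANOVA route
else:
    stat, p = stats.ranksums(t_a, t_b)    # Wilcoxon rank-sum route
print(normal, p)
```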
2011-01-01
Background Safety assessment of genetically modified organisms is currently often performed by comparative evaluation. However, natural variation of plant characteristics between commercial varieties is usually not considered explicitly in the statistical computations underlying the assessment. Results Statistical methods are described for the assessment of the difference between a genetically modified (GM) plant variety and a conventional non-GM counterpart, and for the assessment of the equivalence between the GM variety and a group of reference plant varieties which have a history of safe use. It is proposed to present the results of both difference and equivalence testing for all relevant plant characteristics simultaneously in one or a few graphs, as an aid for further interpretation in safety assessment. A procedure is suggested to derive equivalence limits from the observed results for the reference plant varieties using a specific implementation of the linear mixed model. Three different equivalence tests are defined to classify any result in one of four equivalence classes. The performance of the proposed methods is investigated by a simulation study, and the methods are illustrated on compositional data from a field study on maize grain. Conclusions A clear distinction of practical relevance is shown between difference and equivalence testing. The proposed tests are shown to have appropriate performance characteristics by simulation, and the proposed simultaneous graphical representation of results was found to be helpful for the interpretation of results from a practical field trial data set. PMID:21324199
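A much-simplified sketch of the difference-and-equivalence idea above: derive equivalence limits from the spread of reference varieties (here crudely as mean ± 2 SD rather than the paper's mixed-model procedure), then run a difference test against the conventional counterpart and an equivalence check against those limits. All values are simulated.

```python
# Hedged sketch: difference test plus reference-derived equivalence limits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
refs = rng.normal(10.0, 1.0, size=(8, 4)).mean(axis=1)  # 8 reference varieties
gm = rng.normal(10.4, 1.0, size=4)                      # GM variety replicates
conv = rng.normal(10.0, 1.0, size=4)                    # conventional counterpart

lo = refs.mean() - 2 * refs.std(ddof=1)                 # crude equivalence limits
hi = refs.mean() + 2 * refs.std(ddof=1)
t, p_diff = stats.ttest_ind(gm, conv)                   # difference test
equivalent = lo <= gm.mean() <= hi                      # crude equivalence class
print(f"difference p = {p_diff:.2f}; limits ({lo:.2f}, {hi:.2f}); "
      f"GM mean {gm.mean():.2f} inside: {equivalent}")
```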
Validation of Clinical Testing for Warfarin Sensitivity
Langley, Michael R.; Booker, Jessica K.; Evans, James P.; McLeod, Howard L.; Weck, Karen E.
2009-01-01
Responses to warfarin (Coumadin) anticoagulation therapy are affected by genetic variability in both the CYP2C9 and VKORC1 genes. Validation of pharmacogenetic testing for warfarin responses includes demonstration of analytical validity of testing platforms and of the clinical validity of testing. We compared four platforms for determining the relevant single nucleotide polymorphisms (SNPs) in both CYP2C9 and VKORC1 that are associated with warfarin sensitivity (Third Wave Invader Plus, ParagonDx/Cepheid Smart Cycler, Idaho Technology LightCycler, and AutoGenomics Infiniti). Each method was examined for accuracy, cost, and turnaround time. All genotyping methods demonstrated greater than 95% accuracy for identifying the relevant SNPs (CYP2C9 *2 and *3; VKORC1 −1639 or 1173). The ParagonDx and Idaho Technology assays had the shortest turnaround and hands-on times. The Third Wave assay was readily scalable to higher test volumes but had the longest hands-on time. The AutoGenomics assay interrogated the largest number of SNPs but had the longest turnaround time. Four published warfarin-dosing algorithms (Washington University, UCSF, Louisville, and Newcastle) were compared for accuracy for predicting warfarin dose in a retrospective analysis of a local patient population on long-term, stable warfarin therapy. The predicted doses from both the Washington University and UCSF algorithms demonstrated the best correlation with actual warfarin doses. PMID:19324988
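A hedged sketch of the retrospective comparison described above: correlate algorithm-predicted doses with actual stable doses and summarise agreement with Pearson r and mean absolute error. The doses are simulated, and the published dosing algorithms themselves are not reproduced here.

```python
# Hedged sketch: compare predicted vs actual warfarin doses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
actual = rng.gamma(shape=5, scale=1.0, size=50)   # stable dose proxy (mg/day)
pred_a = actual + rng.normal(0.0, 0.8, 50)        # e.g. dosing algorithm 1
pred_b = actual + rng.normal(0.5, 1.5, 50)        # e.g. dosing algorithm 2

for name, pred in (("algorithm 1", pred_a), ("algorithm 2", pred_b)):
    r, _ = stats.pearsonr(actual, pred)
    mae = np.mean(np.abs(actual - pred))
    print(f"{name}: r = {r:.2f}, MAE = {mae:.2f}")
```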
Processing urinary endoscopes in a low-temperature steam and formaldehyde autoclave.
Gibson, G L
1977-01-01
Methods of disinfection and sterilisation of urinary endoscopes are considered. A small mobile low-temperature steam and formaldehyde autoclave (Miniclave 80) is evaluated and shown to be satisfactory for this purpose, as judged by a variety of relevant microbiological test pieces. PMID:557503
SeqAPASS: Sequence alignment to predict across-species susceptibility
Efforts to shift the toxicity testing paradigm from whole organism studies to those focused on the initiation of toxicity and relevant pathways have led to increased utilization of in vitro and in silico methods. Hence the emergence of high-throughput screening (HTS) programs, s...
RELATIVE POTENCY OF ORAL ANTIGENS IN PROVOKING FOOD ALLERGY IN THE MOUSE
Rationale: An animal model for food allergy is needed to test novel proteins produced through biotechnology for potential allergenicity. While the oral route is the most relevant method of exposure, oral tolerance is an impediment. We demonstrate that mice can distinguish...
2011-01-01
Background The selection of relevant articles for curation, and linking those articles to experimental techniques confirming the findings, became one of the primary subjects of the recent BioCreative III contest. The contest's Protein-Protein Interaction (PPI) task consisted of two sub-tasks: the Article Classification Task (ACT) and the Interaction Method Task (IMT). ACT aimed to automatically select relevant documents for PPI curation, whereas the goal of IMT was to recognise the methods used in experiments for identifying the interactions in full-text articles. Results We proposed and compared several classification-based methods for both tasks, employing rich contextual features as well as features extracted from external knowledge sources. For IMT, a new method that classifies pair-wise relations between every text phrase and candidate interaction method obtained promising results with an F1 score of 64.49%, as tested on the task's development dataset. We also explored ways to combine this new approach and more conventional, multi-label document classification methods. For ACT, our classifiers exploited automatically detected named entities and other linguistic information. The evaluation results on the BioCreative III PPI test datasets showed that our systems were very competitive: one of our IMT methods yielded the best performance among all participants, as measured by F1 score, Matthews correlation coefficient and AUC iP/R; whereas for ACT, our best classifier was ranked second as measured by AUC iP/R, and was also competitive according to other metrics. Conclusions Our novel approach that converts the multi-class, multi-label classification problem to a binary classification problem showed much promise in IMT. Nevertheless, on the test dataset the best performance was achieved by taking the union of the output of this method and that of a multi-class, multi-label document classifier, which indicates that the two types of systems complement each other in terms of recall. For ACT, our system exploited a rich set of features and also obtained encouraging results. We examined the features with respect to their contributions to the classification results, and concluded that contextual words surrounding named entities, as well as the MeSH headings associated with the documents, were among the main contributors to the performance. PMID:22151769
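For reference, one of the evaluation metrics cited above, the Matthews correlation coefficient, computed from a binary confusion matrix; the counts are illustrative.

```python
# Worked definition: Matthews correlation coefficient from TP/FP/FN/TN.
import math

def mcc(tp, fp, fn, tn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(round(mcc(tp=80, fp=20, fn=15, tn=85), 3))   # illustrative counts
```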
Armijo-Olivo, Susan; Warren, Sharon; Fuentes, Jorge; Magee, David J
2011-12-01
Statistical significance has been used extensively to evaluate the results of research studies. Nevertheless, it offers only limited information to clinicians. The assessment of clinical relevance can facilitate the interpretation of research results into clinical practice. The objective of this study was to explore different methods of evaluating the clinical relevance of results, using as an example a cross-sectional study comparing different neck outcomes between subjects with temporomandibular disorders and healthy controls. Subjects were compared for head and cervical posture, maximal cervical muscle strength, endurance of the cervical flexor and extensor muscles, and electromyographic activity of the cervical flexor muscles during the CranioCervical Flexion Test (CCFT). The evaluation of the clinical relevance of the results was performed based on the effect size (ES), the minimal important difference (MID), and clinical judgement. The results of this study show that it is possible to have statistical significance without clinical relevance, to have both statistical significance and clinical relevance, to have clinical relevance without statistical significance, or to have neither. The evaluation of clinical relevance in clinical research is crucial to simplify the transfer of knowledge from research into practice. Clinical researchers should present the clinical relevance of their results. Copyright © 2011 Elsevier Ltd. All rights reserved.
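A minimal sketch of two of the clinical-relevance checks named above: Cohen's d as an effect size, and a comparison of the group difference against a minimal important difference (MID). Group values and the MID are illustrative assumptions.

```python
# Hedged sketch: effect size plus MID comparison for clinical relevance.
import numpy as np

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

rng = np.random.default_rng(7)
tmd = rng.normal(48, 6, 30)    # e.g. cervical flexor endurance, TMD group (s)
ctrl = rng.normal(55, 6, 30)   # healthy controls (s)

d = cohens_d(ctrl, tmd)
diff = np.mean(ctrl) - np.mean(tmd)
print(f"d = {d:.2f}; difference {diff:.1f} s vs an assumed MID of 5 s: "
      f"clinically relevant = {diff >= 5}")
```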
Keizer, D; van Wijhe, M; Post, W J; Uges, D R A; Wierda, J M K H
2007-08-01
Allodynia is a common and disabling symptom in many patients with neuropathic pain. Whereas quantification of pain mostly depends on subjective pain reports, allodynia can also be measured objectively with quantitative sensory testing. In this pilot study, we investigated the clinical relevance of quantitative sensory testing with Von Frey monofilaments in patients with allodynia as a consequence of a neuropathic pain syndrome, by correlating subjective pain scores with pain thresholds obtained by quantitative sensory testing. During a 4-week trial, we administered a cannabis extract to 17 patients with allodynia. We quantified the severity of the allodynia with Von Frey monofilaments before and during the trial and after the patients finished it. We also asked the patients to rate their pain on a numeric rating scale at these three moments. We found that most of the effect of the cannabis occurred in the last 2 weeks of the trial. In this phase, we observed that increases in the pain thresholds, as measured with Von Frey monofilaments, were correlated with decreases in the perceived pain intensity. These preliminary findings indicate the clinical relevance of quantitative sensory testing with Von Frey monofilaments for the quantification of allodynia in patients with neuropathic pain, although confirmation of our data is still required in further studies to position this method of quantitative sensory testing as a valuable tool, for example, in the evaluation of therapeutic interventions for neuropathic pain.
Steffeck, D.W.; Striegl, Robert G.
1989-01-01
Results of studies of the aquatic biology of the upper Illinois River basin provide a historical data source from which inferences can be made about changes in the quality of water in the main-stem river and its tributaries. The results of biological investigations conducted throughout the basin since 1900 are summarized, and their relevance to stream-water-quality assessment is described, particularly with respect to the upper Illinois River basin pilot project of the National Water Quality Assessment program. Four general categories of biological investigations were identified: populations and community structure, chemical concentrations in tissue, organism health, and toxicity measurements. Biological investigations were identified by their location in the basin and by their relevance to each general investigation category. The most abundant literature was in the populations and community structure category. Tissue data were limited to polychlorinated biphenyls, organochlorine pesticides, dioxin, and several metals. The most cited measure of organism health was a condition factor for fish that associates body length with weight or body depth. Toxicity measurements included bioassays and Ames tests. The bioassays included several testing methods and test organisms. (USGS)
Testing the cosmic anisotropy with supernovae data: Hemisphere comparison and dipole fitting
NASA Astrophysics Data System (ADS)
Deng, Hua-Kai; Wei, Hao
2018-06-01
The cosmological principle is one of the cornerstones of modern cosmology. It assumes that the universe is homogeneous and isotropic on cosmic scales. Both the homogeneity and the isotropy of the universe should be tested carefully. In the present work, we are interested in probing a possible preferred direction in the distribution of type Ia supernovae (SNIa). To the best of our knowledge, two main methods have been used in almost all of the relevant works in the literature, namely the hemisphere comparison (HC) method and the dipole fitting (DF) method. However, the results from these two methods are not always consistent with each other. In this work, we test the cosmic anisotropy by using these two methods with the joint light-curve analysis (JLA) and simulated SNIa data sets. In many cases, both methods work well and their results agree. However, in cases with two (or even more) preferred directions, the DF method fails while the HC method still works well. This might shed new light on our understanding of these two methods.
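To make the HC method concrete, here is a toy Python sketch on a fabricated supernova sample. It fits the deceleration parameter q0 in opposite hemispheres over random axes using a low-redshift expansion of the distance modulus; the real analysis uses the full JLA likelihood, light-curve nuisance parameters, and many more axes, so this is illustrative only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

C, H0 = 299792.458, 70.0  # km/s, km/s/Mpc (assumed fiducial values)

def mu_model(z, q0):
    """Low-redshift expansion of the distance modulus."""
    dl = (C * z / H0) * (1 + 0.5 * (1 - q0) * z)   # luminosity distance, Mpc
    return 5 * np.log10(dl) + 25

rng = np.random.default_rng(1)
n = 400
z = rng.uniform(0.01, 0.8, n)
ra = rng.uniform(0, 2 * np.pi, n)
dec = np.arcsin(rng.uniform(-1, 1, n))            # isotropic sky positions
mu_obs = mu_model(z, -0.55) + rng.normal(0, 0.15, n)
pos = np.c_[np.cos(dec) * np.cos(ra), np.cos(dec) * np.sin(ra), np.sin(dec)]

def fit_q0(sel):
    chi2 = lambda q0: np.sum((mu_obs[sel] - mu_model(z[sel], q0)) ** 2)
    return minimize_scalar(chi2, bounds=(-2.0, 1.0), method="bounded").x

max_al = 0.0
for _ in range(100):                              # random comparison axes
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    up = pos @ axis > 0                           # split sample into hemispheres
    qu, qd = fit_q0(up), fit_q0(~up)
    max_al = max(max_al, 2 * abs(qu - qd) / abs(qu + qd))
print(f"maximum anisotropy level: {max_al:.3f}")
```

The DF method would instead fit a dipole modulation of the distance modulus across the whole sample at once, which is why it can struggle when more than one preferred direction is present.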
Aerodynamic calculational methods for curved-blade Darrieus VAWT WECS
NASA Astrophysics Data System (ADS)
Templin, R. J.
1985-03-01
Calculation of aerodynamic performance and load distributions for curved-blade wind turbines is discussed. Double-multiple streamtube theory is presented, and the uncertainties that remain in developing adequate methods are considered. The lack of relevant airfoil data at high Reynolds numbers and high angles of attack, and doubts concerning the accuracy of dynamic stall models, are underlined. Wind tunnel tests of blade airbrake configurations are summarized.
ERIC Educational Resources Information Center
Carolan, Brian V.; Unger, Jennifer B.; Johnson, C. Anderson; Valente, Thomas W.
2007-01-01
Peer-led programs that employ classroom-based group exercises have been shown to be the most effective in preventing adolescent tobacco use. In addition, health promotion programs that include cultural referents have also been shown to be advantageous. The purpose of this study was to test the interaction between the method by which leaders and…
Dredged Material Analysis Tools; Performance of Acute and Chronic Sediment Toxicity Methods
2008-07-01
Steevens, Jeffery; Kennedy, Alan; Farrar, Daniel; McNemar, Cory; Reiss, Mark R.; Kropp, Roy K.; Doi, Jon; Bridges, Todd. Environmental Research Program, ERDC/EL TR-08-16, July 2008 (revised). …insight into the potential advantages and disadvantages of using chronic sediment toxicity tests with relevant benthic macroinvertebrates as part of…
NASA Astrophysics Data System (ADS)
Aviv, O.; Lipshtat, A.
2018-05-01
On-Site Inspection (OSI) activities under the Comprehensive Nuclear-Test-Ban Treaty (CTBT) allow limitations to be placed on measurement equipment. Certain detectors therefore require modifications to be operated in a restricted mode, and the accuracy and reliability of results obtained with a restricted device may be impaired. We present here a method for limiting data acquisition during OSI. Limitations are applied to a high-resolution high-purity germanium detector system, where the vast majority of the acquired data that is not relevant to the inspection is filtered out. The limited spectrum is displayed to the user and allows analysis using standard gamma spectrometry procedures. The proposed method can be incorporated into commercial gamma-ray spectrometers, including both stationary and mobile systems. By applying this procedure to more than 1000 spectra representing various scenarios, we show that partial data are sufficient for reaching reliable conclusions. A comprehensive survey of potential false-positive identifications of various radionuclides is presented as well. It is evident from the results that the analysis of a limited spectrum is practically identical to that of a standard spectrum in terms of detection and quantification of OSI-relevant radionuclides. A future limited system can be developed making use of the principles outlined by the suggested method.
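The filtering idea can be illustrated with a short sketch. The line list, window width, and calibration below are hypothetical placeholders, not the treaty-defined OSI-relevant set; the sketch simply zeroes every channel outside windows around the retained lines.

```python
import numpy as np

# Hypothetical sketch: keep only spectral windows around selected gamma lines
# (energies in keV; the list is illustrative, not the OSI-relevant set).
RELEVANT_LINES = [661.7, 1173.2, 1332.5]   # e.g. Cs-137, Co-60
WINDOW = 5.0                               # +/- keV retained around each line

def limit_spectrum(energies, counts):
    """Zero out all channels outside the allowed windows."""
    keep = np.zeros_like(energies, dtype=bool)
    for line in RELEVANT_LINES:
        keep |= np.abs(energies - line) <= WINDOW
    return np.where(keep, counts, 0)

# 8192-channel HPGe spectrum with a fabricated 0.25 keV/channel calibration
energies = np.arange(8192) * 0.25
counts = np.random.default_rng(2).poisson(5, size=8192)
limited = limit_spectrum(energies, counts)
print(limited.sum(), "counts retained of", counts.sum())
```

Standard peak-fitting and activity calculations can then be run on the limited spectrum exactly as on a full one, which is the property the paper tests at scale.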
2014-01-01
Background Risk adjustment is crucial for comparison of outcome in medical care. Knowledge of the external factors that impact measured outcome but that cannot be influenced by the physician is a prerequisite for this adjustment. To date, a universal and reproducible method for identification of the relevant external factors has not been published. The selection of external factors in current quality assurance programmes is mainly based on expert opinion. We propose and demonstrate a methodology for identification of external factors requiring risk adjustment of outcome indicators and we apply it to a cataract surgery register. Methods Defined test criteria to determine the relevance for risk adjustment are “clinical relevance” and “statistical significance”. Clinical relevance of the association is presumed when observed success rates of the indicator in the presence and absence of the external factor exceed a pre-specified range of 10%. Statistical significance of the association between the external factor and outcome indicators is assessed by univariate stratification and multivariate logistic regression adjustment. The cataract surgery register was set up as part of a German multi-centre register trial for out-patient cataract surgery in three high-volume surgical sites. A total of 14,924 patient follow-ups have been documented since 2005. Eight external factors potentially relevant for risk adjustment were related to the outcome indicators “refractive accuracy” and “visual rehabilitation” 2–5 weeks after surgery. Results The clinical relevance criterion confirmed 2 (“refractive accuracy”) and 5 (“visual rehabilitation”) external factors. The significance criterion was verified in two ways. Univariate and multivariate analyses revealed almost identical external factors: 4 were related to “refractive accuracy” and 7 (6) to “visual rehabilitation”. Two (“refractive accuracy”) and 5 (“visual rehabilitation”) factors conformed to both criteria and were therefore relevant for risk adjustment. Conclusion In a practical application, the proposed method to identify relevant external factors for risk adjustment for comparison of outcome in healthcare proved to be feasible and comprehensive. The method can also be adapted to other quality assurance programmes. However, the cut-off score for clinical relevance needs to be individually assessed when applying the proposed method to other indications or indicators. PMID:24965949
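The two test criteria can be sketched in a few lines of Python on fabricated register data: a 10-percentage-point threshold on the success-rate difference for clinical relevance, and a logistic regression p-value for statistical significance. The real analysis adjusts for all candidate factors simultaneously; this univariate version is only illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
factor = rng.integers(0, 2, n)                    # external factor present/absent
p_success = np.where(factor == 1, 0.70, 0.85)     # assumed true success rates
success = rng.binomial(1, p_success)              # e.g. "refractive accuracy" met

# Criterion 1: clinical relevance -- success-rate difference exceeds 10 points
rate_with = success[factor == 1].mean()
rate_without = success[factor == 0].mean()
clinically_relevant = abs(rate_with - rate_without) > 0.10

# Criterion 2: statistical significance -- logistic regression adjustment
X = sm.add_constant(factor.astype(float))
fit = sm.Logit(success, X).fit(disp=False)
significant = fit.pvalues[1] < 0.05

print(f"difference = {rate_with - rate_without:+.2f}, "
      f"relevant = {clinically_relevant}, significant = {significant}")
```

A factor would qualify for risk adjustment only when both flags are true, mirroring the paper's two-criterion rule.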
Determination of Absolute Configuration of Secondary Alcohols Using Thin-Layer Chromatography
Wagner, Alexander J.; Rychnovsky, Scott D.
2013-01-01
A new implementation of the Competing Enantioselective Conversion (CEC) method was developed to qualitatively determine the absolute configuration of enantioenriched secondary alcohols using thin-layer chromatography. The entire process for the method requires approximately 60 min and utilizes micromole quantities of the secondary alcohol being tested. A number of synthetically relevant secondary alcohols are presented. Additionally, 1H NMR spectroscopy was conducted on all samples to provide evidence of reaction conversion that supports the qualitative method presented herein. PMID:23593963
Dooley, Christopher J; Tenore, Francesco V; Gayzik, F Scott; Merkle, Andrew C
2018-04-27
Biological tissue testing is inherently subject to wide variability from specimen to specimen. A primary resource for encapsulating this range of variability is the biofidelity response corridor (BRC). In the field of injury biomechanics, BRCs are often used for the development and validation of both physical models, such as anthropomorphic test devices, and computational models. For the purpose of generating corridors, post-mortem human surrogates were tested across a range of loading conditions relevant to under-body blast events. To sufficiently cover the wide range of input conditions, a relatively small number of tests were performed across a large spread of conditions. The high volume of required testing called for leveraging the capabilities of multiple impact test facilities, all with slight variations in test devices. A method for assessing the similitude of responses between test devices was created as a metric for inclusion of a response in the resulting BRC. The goal of this method was to supply a statistically sound, objective way to assess the similitude of an individual response against a set of responses, to ensure that the BRC created from the set was affected primarily by biological variability, not by anomalies or differences stemming from test devices. Copyright © 2018 Elsevier Ltd. All rights reserved.
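As a rough illustration of corridor construction, the sketch below builds a mean ± 1 SD envelope from a set of time-aligned synthetic responses and scores each response's similitude against the rest with a simple normalized RMS deviation. The similitude metric here is a placeholder, not the statistical method developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 0.05, 200)                      # 50 ms loading event
responses = [np.sin(t / 0.05 * np.pi) * rng.normal(1.0, 0.1) +
             rng.normal(0, 0.02, t.size) for _ in range(8)]
R = np.vstack(responses)                           # 8 specimens x 200 samples

mean, sd = R.mean(axis=0), R.std(axis=0, ddof=1)
upper, lower = mean + sd, mean - sd                # the corridor bounds

def similitude(candidate, others):
    """RMS deviation from the mean of the other responses, in SD units."""
    m, s = others.mean(axis=0), others.std(axis=0, ddof=1) + 1e-9
    return np.sqrt(np.mean(((candidate - m) / s) ** 2))

scores = [similitude(R[i], np.delete(R, i, axis=0)) for i in range(len(R))]
print("mean corridor width:", round(float((upper - lower).mean()), 3))
print("leave-one-out similitude scores:", np.round(scores, 2))
```

A response with an outlying score would be excluded before the corridor is finalized, so the envelope reflects biological scatter rather than test-device differences.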
A multi-analyte serum test for the detection of non-small cell lung cancer
Farlow, E C; Vercillo, M S; Coon, J S; Basu, S; Kim, A W; Faber, L P; Warren, W H; Bonomi, P; Liptay, M J; Borgia, J A
2010-01-01
Background: In this study, we appraised a wide assortment of biomarkers previously shown to have diagnostic or prognostic value for non-small cell lung cancer (NSCLC) with the intent of establishing a multi-analyte serum test capable of identifying patients with lung cancer. Methods: Circulating levels of 47 biomarkers were evaluated against patient cohorts consisting of 90 NSCLC patients and 43 non-cancer controls using commercial immunoassays. Multivariate statistical methods were used on all biomarkers achieving statistical relevance to define an optimised panel of diagnostic biomarkers for NSCLC. The resulting biomarkers were fashioned into a classification algorithm and validated against serum from a second patient cohort. Results: A total of 14 analytes achieved statistical relevance upon evaluation. Multivariate statistical methods then identified a panel of six biomarkers (tumour necrosis factor-α, CYFRA 21-1, interleukin-1ra, matrix metalloproteinase-2, monocyte chemotactic protein-1 and sE-selectin) as being the most efficacious for diagnosing early stage NSCLC. When tested against a second patient cohort, the panel successfully classified 75 of 88 patients. Conclusions: Here, we report the development of a serum algorithm with high specificity for classifying patients with NSCLC against cohorts of various 'high-risk' individuals. A high rate of false positives was observed within the cohort in which patients had non-neoplastic lung nodules, possibly as a consequence of the inflammatory nature of these conditions. PMID:20859284
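The panel-to-classifier step might look like the following sketch. Marker values are fabricated and the classifier is ordinary logistic regression; the paper specifies only "multivariate statistical methods", so this is one plausible reading rather than the published algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

MARKERS = ["TNF-a", "CYFRA 21-1", "IL-1ra", "MMP-2", "MCP-1", "sE-selectin"]
rng = np.random.default_rng(5)

def cohort(n_cancer, n_control):
    """Fabricated marker levels: cancer cases shifted up on all six markers."""
    X = np.vstack([rng.normal(1.0, 0.5, (n_cancer, 6)),
                   rng.normal(0.0, 0.5, (n_control, 6))])
    y = np.r_[np.ones(n_cancer), np.zeros(n_control)]
    return X, y

X_train, y_train = cohort(90, 43)      # discovery cohort sizes from the study
X_test, y_test = cohort(50, 38)        # independent validation cohort (88 total)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print(f"validation accuracy: {model.score(X_test, y_test):.2f}")
```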
Alternative microbial methods: An overview and selection criteria.
Jasson, Vicky; Jacxsens, Liesbeth; Luning, Pieternel; Rajkovic, Andreja; Uyttendaele, Mieke
2010-09-01
This study provides an overview of and criteria for the selection of a method, other than the reference method, for microbial analysis of foods. In the first part, an overview of the general characteristics of available rapid methods, both for enumeration and detection, is given with reference to the relevant bibliography. Perspectives on future development and the potential of rapid methods for routine application in food diagnostics are discussed. As various alternative "rapid" methods in different formats are available on the market, it can be very difficult for a food business operator or a control authority to select the most appropriate method for its purpose. Validation of a method by a third party, according to an internationally accepted protocol based upon ISO 16140, may increase confidence in the performance of a method. A list of currently validated methods for the enumeration of both utility indicators (aerobic plate count) and hygiene indicators (Enterobacteriaceae, Escherichia coli, coagulase-positive Staphylococcus), as well as for the detection of the four major pathogens (Salmonella spp., Listeria monocytogenes, E. coli O157 and Campylobacter spp.), is included with reference to relevant websites to check for updates. In the second part of this study, selection criteria are introduced to underpin the choice of the appropriate method(s) for a defined application. The selection criteria link the definition of the context in which the user of the method functions - and thus the prospective use of the microbial test results - with the technical information on the method and its operational requirements and sustainability. The selection criteria can help the end user of the method to obtain a systematic insight into all relevant factors to be taken into account when selecting a method for microbial analysis. Copyright 2010 Elsevier Ltd. All rights reserved.
2016-01-01
Objectives Korea’s Act on the Registration and Evaluation of Chemicals (K-REACH) was enacted for the protection of human health and the environment in 2015. Considering that about 2000 new substances are introduced annually across the globe, the extent of required animal testing could be overwhelming unless regulators and companies work proactively to institute and enforce global best practices to replace, reduce or refine animal use. In this review, ways to reduce animal use under K-REACH are discussed. Methods The background and details of K-REACH enforcement were reviewed, along with papers and regulatory documents on the limitations of animal experiments and their alternatives, in order to discuss the regulatory adoption of alternative tests. Results Depending on the tonnage of the chemical used, the data required range from acute and other short-term studies for a single exposure route to testing via multiple exposure routes and costly, longer-term studies such as a full two-generation reproductive toxicity study. The European Registration, Evaluation, Authorization and Restriction of Chemicals regulation provides for mandatory sharing of vertebrate test data to avoid unnecessary duplication of animal use and test costs, and an obligation to revise data requirements and test guidelines “as soon as possible” after relevant, validated replacement, reduction or refinement (3R) methods become available. Furthermore, the Organization for Economic Cooperation and Development actively accepts alternatives to animal tests and 3R approaches for chemical toxicity testing. Conclusions Alternative tests, which are more ethical and efficient than animal experiments, should be widely used to assess the toxicity of chemicals for K-REACH registration. The relevant regulatory agencies will have to make efforts to actively adopt and take up new alternative tests and 3R approaches for K-REACH. PMID:28118702
ERIC Educational Resources Information Center
Meyer, Calvin F.; Benson, Robert T.
This guide provides job-relevant tasks, performance objectives, performance guides, resources, learning activities, evaluation standards, and achievement testing in the occupation of environmental control system installer/servicer (residential air conditioning mechanic). It is designed to be used with any chosen teaching method. The course…
Alternative toxicity assessment methods to characterize the hazards of chemical substances have been proposed to reduce animal testing and screen thousands of chemicals in an efficient manner. Resources to accomplish these goals include utilizing large in vitro chemical screening...
Economic Psychology: Its Connections with Research-Oriented Courses
ERIC Educational Resources Information Center
Christopher, Andrew N.; Marek, Pam; Benigno, Joann
2003-01-01
To enhance student interest in research methods, tests and measurement, and statistics classes, we describe how teachers may use resources from economic psychology to illustrate key concepts in these courses. Because of their applied nature and relevance to student experiences, topics covered by these resources may capture student attention and…
Patlewicz, Grace; Casati, Silvia; Basketter, David A; Asturiol, David; Roberts, David W; Lepoittevin, Jean-Pierre; Worth, Andrew P; Aschberger, Karin
2016-12-01
Predictive testing to characterize substances for their skin sensitization potential has historically been based on animal tests such as the Local Lymph Node Assay (LLNA). In recent years, regulations in the cosmetics and chemicals sectors have provided strong impetus to develop non-animal alternatives. Three test methods have undergone OECD validation: the direct peptide reactivity assay (DPRA), the KeratinoSens™ and the human Cell Line Activation Test (h-CLAT). Whilst these methods perform relatively well in predicting LLNA results, a concern raised is their ability to predict chemicals that need activation to be sensitizing (pre- or pro-haptens). The current study reviewed an EURL ECVAM dataset of 127 substances for which information was available in the LLNA and three non-animal test methods. Twenty-eight of the sensitizers needed to be activated, with the majority being pre-haptens. These were correctly identified by one or more of the test methods. Six substances were categorized exclusively as pro-haptens, but were correctly identified by at least one of the cell-based assays. The analysis here showed that skin metabolism was not likely to be a major consideration for assessing sensitization potential and that sensitizers requiring activation could be identified correctly using one or more of the current non-animal methods. Published by Elsevier Inc.
A comparison of the mechanical properties of fiberglass cast materials and their clinical relevance.
Berman, A T; Parks, B G
1990-01-01
The mechanical properties of five synthetic fiberglass casting materials were evaluated and compared with the properties of plaster of Paris. Two of the tests were designed to bear clinical relevance and the third to determine intrinsic material properties. The effect of water on strength degradation was also evaluated. It was found that the synthetics as a group are far superior to plaster of Paris in all methods of testing and that, among the synthetics, KCast Tack Free, Deltalite "S", and KCast Improved were the stronger materials. Clinically, the most important results are that the synthetics attain their relatively high strength in a much shorter time frame than does plaster of Paris, and retain 70-90% of their strength after being immersed in water and allowed to dry.
Linear modeling of human hand-arm dynamics relevant to right-angle torque tool interaction.
Ay, Haluk; Sommerich, Carolyn M; Luscher, Anthony F
2013-10-01
A new protocol was evaluated for identification of stiffness, mass, and damping parameters, employing a linear model for human hand-arm dynamics relevant to right-angle torque tool use. Powered torque tools are widely used to tighten fasteners in manufacturing industries. While these tools increase the accuracy and efficiency of tightening processes, operators are repetitively exposed to impulsive forces, posing a risk of upper extremity musculoskeletal injury. A novel testing apparatus was developed that closely mimics biomechanical exposure in torque tool operation. Forty experienced torque tool operators were tested with the apparatus to determine model parameters and validate the protocol for physical capacity assessment. A second-order hand-arm model with parameters extracted in the time domain met the model accuracy criterion of 5% for time-to-peak displacement error in 93% of trials (vs. 75% for the frequency domain). Average time-to-peak handle displacement and relative peak handle force errors were 0.69 ms and 0.21%, respectively. Model parameters were significantly affected by gender and working posture. The protocol and numerical calculation procedures provide an alternative method for assessing mechanical parameters relevant to right-angle torque tool use. The protocol more closely resembles tool use, and the calculation procedures demonstrate better performance of parameter extraction using time-domain system identification methods versus the frequency domain. Potential future applications include parameter identification for in situ torque tool operation and equipment development for simulating human hand-arm dynamics under impulsive forces, which could be used for assessing torque tools based on factors relevant to operator health (handle dynamics and hand-arm reaction force).
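Time-domain identification of a second-order model can be sketched as a linear least-squares problem: given sampled force and displacement, estimate (m, c, k) from m·x'' + c·x' + k·x = F(t). The signals below are simulated with assumed parameter values rather than measured with the authors' apparatus.

```python
import numpy as np

dt = 1e-4
t = np.arange(0, 0.2, dt)
m_true, c_true, k_true = 2.0, 40.0, 8000.0        # kg, N·s/m, N/m (assumed)
F = 200.0 * np.exp(-t / 0.02)                     # impulsive tool reaction force

# Simulate handle displacement with semi-implicit Euler integration
x = np.zeros_like(t)
v = np.zeros_like(t)
for i in range(len(t) - 1):
    a = (F[i] - c_true * v[i] - k_true * x[i]) / m_true
    v[i + 1] = v[i] + a * dt
    x[i + 1] = x[i] + v[i + 1] * dt

# Numerical derivatives, then least-squares estimate of (m, c, k)
xd = np.gradient(x, dt)
xdd = np.gradient(xd, dt)
A = np.c_[xdd, xd, x]
m_hat, c_hat, k_hat = np.linalg.lstsq(A, F, rcond=None)[0]
print(f"m = {m_hat:.2f} kg, c = {c_hat:.1f} N·s/m, k = {k_hat:.0f} N/m")
```

With measured (noisy) signals, the derivatives would normally be filtered first; the paper's 5% time-to-peak criterion is one way to check that the fitted model reproduces the observed response.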
Pazos, Patricia; Pellizzer, Cristian; Stummann, Tina C; Hareng, Lars; Bremer, Susanne
2010-08-01
The selection of reference compounds is crucial for successful in vitro test development in order to prove the relevance of the test system. This publication describes the criteria and the selection strategy leading to a list of more than 130 chemicals suitable for test development within the ReProTect project. The presented chemical inventory aimed to support the development and optimization of in vitro tests that seek to fulfil ECVAM's criteria for entering prevalidation. In order to select appropriate substances, a primary database was established compiling information from existing databases. In a second step, predefined selection criteria were applied to obtain a comprehensive list ready to undergo a peer review process by independent experts with industrial, academic and regulatory backgrounds. The final peer-reviewed chemical list contains 13 substances for challenging endocrine disrupter tests, an additional 50 substances serving as reference chemicals for various tests evaluating effects on male and female fertility, and 61 substances known to provoke effects on the early development of mammalian offspring. The final list aims to cover relevant and specific modes/sites of action as they are known to be relevant for various substance classes. However, the recommended list should not be interpreted as a list of reproductive toxicants, because such a description requires proven associations with adverse effects on mammalian reproduction, which are the subject of regulatory decisions made by the competent authorities involved. Copyright 2010 Elsevier Inc. All rights reserved.
Cotton, Robin W; Fisher, Matthew B
2015-09-01
Forensic DNA testing is grounded in molecular biology and population genetics. The technologies that were the basis of restriction fragment length polymorphism (RFLP) testing have given way to PCR-based technologies. While PCR has been the pillar of short tandem repeat (STR) methods and will continue to be used as DNA sequencing and analysis of single nucleotide polymorphisms (SNPs) are introduced into human identification, the molecular biology techniques in use today represent significant advances since the introduction of STR testing. Large forensic laboratories with dedicated research teams, and forensic laboratories that are part of academic institutions, have the resources to keep track of advances that can then be considered for further research or incorporated into current testing methods. However, many laboratories have limited ability to keep up with research advances outside the immediate area of forensic science and may not have access to a large university library system. This review focuses on filling this gap with respect to areas of research that intersect with selected methods used in forensic biology. The review summarizes information collected from several areas of the scientific literature where advances in molecular biology have produced information relevant to DNA analysis of sexual assault evidence and to methods used in presumptive and confirmatory identification of semen. Older information from the literature is also included where it may not be commonly known and is relevant to current methods. The topics selected highlight (1) information from applications of proteomics to sperm biology and human reproduction, (2) seminal fluid proteins and prostate cancer diagnostics, (3) developmental biology of sperm from the fertility literature and (4) areas where methods are common to forensic analysis and research in contraceptive use and monitoring. Information and progress made in these areas coincide with the research interests of forensic biology, and cross-talk between these disciplines may benefit both. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Learning feature representations with a cost-relevant sparse autoencoder.
Längkvist, Martin; Loutfi, Amy
2015-02-01
There is increasing interest in the machine learning community in automatically learning feature representations directly from the (unlabeled) data instead of using hand-designed features. The autoencoder is one method that can be used for this purpose. However, for data sets with a high degree of noise, a large amount of the representational capacity in the autoencoder is used to minimize the reconstruction error for these noisy inputs. This paper proposes a method that improves the feature learning process by focusing on the task-relevant information in the data. This selective attention is achieved by weighting the reconstruction error and reducing the influence of noisy inputs during the learning process. The proposed model is trained on a number of publicly available image data sets, and the test error rate is compared to that of a standard sparse autoencoder and other methods, such as the denoising autoencoder and the contractive autoencoder.
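A minimal PyTorch sketch of the idea follows: the per-dimension reconstruction error is scaled by a weight vector so that noisy inputs contribute less, alongside a standard KL sparsity penalty. The fixed weights here are an assumption for illustration; the paper's method derives the weighting from task relevance rather than from an oracle.

```python
import torch
import torch.nn as nn

d_in, d_hidden = 64, 16
enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.Sigmoid())
dec = nn.Linear(d_hidden, d_in)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

w = torch.ones(d_in)
w[32:] = 0.1                               # down-weight the (assumed) noisy half
sparsity_coef, rho = 1e-3, 0.05            # target mean hidden activation

x = torch.randn(256, d_in)                 # stand-in data batch
for step in range(200):
    h = enc(x)
    x_hat = dec(h)
    recon = (w * (x - x_hat) ** 2).mean()            # weighted reconstruction
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)    # mean hidden activation
    kl = (rho * torch.log(rho / rho_hat) +
          (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()
    loss = recon + sparsity_coef * kl
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final weighted reconstruction error: {recon.item():.4f}")
```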
Müsken, Mathias; Di Fiore, Stefano; Römling, Ute; Häussler, Susanne
2010-08-01
A major reason for bacterial persistence during chronic infections is the survival of bacteria within biofilm structures, which protect cells from environmental stresses, host immune responses and antimicrobial therapy. Thus, there is concern that laboratory methods developed to measure the antibiotic susceptibility of planktonic bacteria may not be relevant to chronic biofilm infections, and it has been suggested that alternative methods should test antibiotic susceptibility within a biofilm. In this paper, we describe a fast and reliable protocol for using 96-well microtiter plates for the formation of Pseudomonas aeruginosa biofilms; the method is easily adaptable for antimicrobial susceptibility testing. This method is based on bacterial viability staining in combination with automated confocal laser scanning microscopy. The procedure simplifies qualitative and quantitative evaluation of biofilms and has proven to be effective for standardized determination of antibiotic efficiency on P. aeruginosa biofilms. The protocol can be performed within approximately 60 h.
Pan, Yue; Liu, Hongmei; Metsch, Lisa R; Feaster, Daniel J
2017-02-01
HIV testing is the foundation for consolidated HIV treatment and prevention. In this study, we aim to discover the most relevant variables for predicting HIV testing uptake among substance users in substance use disorder treatment programs by applying random forest (RF), a robust multivariate statistical learning method. We also provide a descriptive introduction to this method for those who are unfamiliar with it. We used data from the National Institute on Drug Abuse Clinical Trials Network HIV testing and counseling study (CTN-0032). A total of 1281 HIV-negative or status unknown participants from 12 US community-based substance use disorder treatment programs were included and were randomized into three HIV testing and counseling treatment groups. The a priori primary outcome was self-reported receipt of HIV test results. Classification accuracy of RF was compared to logistic regression, a standard statistical approach for binary outcomes. Variable importance measures for the RF model were used to select the most relevant variables. RF based models produced much higher classification accuracy than those based on logistic regression. Treatment group is the most important predictor among all covariates, with a variable importance index of 12.9%. RF variable importance revealed that several types of condomless sex behaviors, condom use self-efficacy and attitudes towards condom use, and level of depression are the most important predictors of receipt of HIV testing results. There is a non-linear negative relationship between count of condomless sex acts and the receipt of HIV testing. In conclusion, RF seems promising in discovering important factors related to HIV testing uptake among large numbers of predictors and should be encouraged in future HIV prevention and treatment research and intervention program evaluations.
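The analysis pattern, comparing RF with logistic regression and ranking predictors by variable importance, is easy to reproduce with scikit-learn. The covariates below are fabricated stand-ins for the CTN-0032 variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n, p = 1281, 20                              # sample size from the study; p assumed
X = rng.normal(size=(n, p))
# Outcome depends non-linearly on a few covariates, the kind of structure RF captures
logit = 1.5 * X[:, 0] - 0.8 * np.abs(X[:, 1]) + 0.5 * X[:, 2] ** 2
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

rf = RandomForestClassifier(n_estimators=500, random_state=0)
lr = LogisticRegression(max_iter=1000)
print("RF accuracy:", cross_val_score(rf, X, y, cv=5).mean().round(3))
print("LR accuracy:", cross_val_score(lr, X, y, cv=5).mean().round(3))

rf.fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:5]
print("top predictors:", top, rf.feature_importances_[top].round(3))
```

The importance ranking is what the authors use to surface condom-related behaviours and depression as key predictors; the accuracy comparison mirrors their RF-versus-logistic-regression benchmark.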
Kirchner, Sebastian; Fothergill, Joanne L; Wright, Elli A; James, Chloe E; Mowat, Eilidh; Winstanley, Craig
2012-06-05
There is growing concern about the relevance of in vitro antimicrobial susceptibility tests when applied to isolates of P. aeruginosa from cystic fibrosis (CF) patients. Existing methods rely on single or a few isolates grown aerobically and planktonically. Predetermined cut-offs are used to define whether the bacteria are sensitive or resistant to any given antibiotic. However, during chronic lung infections in CF, P. aeruginosa populations exist in biofilms, and there is evidence that the environment is largely microaerophilic. The stark difference in conditions between bacteria in the lung and those during diagnostic testing has called into question the reliability and even the relevance of these tests. Artificial sputum medium (ASM) is a culture medium containing the components of CF patient sputum, including amino acids, mucin and free DNA. P. aeruginosa growth in ASM mimics growth during CF infections, with the formation of self-aggregating biofilm structures and population divergence. The aim of this study was to develop a microtitre-plate assay to study antimicrobial susceptibility of P. aeruginosa based on growth in ASM, applicable to both microaerophilic and aerobic conditions. An ASM assay was developed in a microtitre plate format. P. aeruginosa biofilms were allowed to develop for 3 days prior to incubation with antimicrobial agents at different concentrations for 24 hours. After biofilm disruption, cell viability was measured by staining with resazurin. This assay was used to ascertain the sessile cell minimum inhibitory concentration (SMIC) of tobramycin for 15 different P. aeruginosa isolates under aerobic and microaerophilic conditions, and SMIC values were compared to those obtained with standard broth growth. While there was some evidence for increased MIC values for isolates grown in ASM compared with their planktonic counterparts, the biggest differences were found with bacteria tested in microaerophilic conditions, which showed an up to >128-fold increase in resistance to tobramycin in the ASM system compared with assays carried out in aerobic conditions. The lack of association between current susceptibility testing methods and clinical outcome has called the validity of current methods into question. Several in vitro models have been used previously to study P. aeruginosa biofilms. However, these methods rely on surface-attached biofilms, whereas the ASM biofilms resemble those observed in the CF lung. In addition, the reduced oxygen concentration in the mucus has been shown to alter the behavior of P. aeruginosa and affect antibiotic susceptibility. Therefore, using ASM under microaerophilic conditions may provide a more realistic environment in which to study antimicrobial susceptibility.
Words, concepts, or both: optimal indexing units for automated information retrieval.
Hersh, W. R.; Hickam, D. H.; Leone, T. J.
1992-01-01
What is the best way to represent the content of documents in an information retrieval system? This study compares the retrieval effectiveness of five different methods for automated (machine-assigned) indexing using three test collections. The consistently best methods are those that use indexing based on the words that occur in the available text of each document. Methods used to map text into concepts from a controlled vocabulary showed no advantage over the word-based methods. This study also looked at an approach to relevance feedback which showed benefit for both word-based and concept-based methods. PMID:1482951
Pierce, Brandon L.; Carlson, Christopher S.; Kuszler, Patricia C.; Stanford, Janet L.; Austin, Melissa A.
2010-01-01
Purpose Fragmented ownership of diagnostic gene patents has the potential to create an ‘anticommons’ in the area of genomic diagnostics, making it difficult and expensive to assemble the patent rights necessary to develop a panel of genetic tests. The objectives of this study were to identify U.S. patents that protect existing panels of genetic tests, describe how (or if) test providers acquired rights to these patents, and determine if fragmented patent ownership has inhibited the commercialization of these panels. Methods As case studies, we selected four clinical applications of genetic testing (cystic fibrosis, maturity-onset diabetes of the young, long QT syndrome, and hereditary breast cancer) that utilize tests protected by ≥3 U.S. patents. We summarized publicly available information on relevant patents, test providers, licenses, and litigation. Results For each case study, all tests of major genes/mutations were patented, and at least one party held the collective rights to conduct all relevant tests, often as a result of licensing agreements. Conclusions We did not find evidence that fragmentation of patent rights has inhibited commercialization of genetic testing services. However, as knowledge of genetic susceptibility increases, it will be important to consider the potential consequences of fragmented ownership of diagnostic gene patents. PMID:19367193
Bascomb, Shoshana; Manafi, Mammad
1998-01-01
The contribution of enzyme tests to the accurate and rapid routine identification of gram-positive cocci is introduced. The current taxonomy of the genera of aerobic and facultatively anaerobic cocci based on genotypic and phenotypic characterization is reviewed. The clinical and economic importance of members of these taxa is briefly summarized. Tables summarizing test schemes and kits available for the identification of staphylococci, enterococci, and streptococci on the basis of general requirements, number of tests, number of taxa, test classes, and completion times are discussed. Enzyme tests included in each scheme are compared on the basis of their synthetic moiety. The current understanding of the activity of enzymes important for classification and identification of the major groups, methods of testing, and relevance to the ease and speed of identification are reviewed. Publications describing the use of different identification kits are listed, and overall identification successes and problems are discussed. The relationships between the results of conventional biochemical and rapid enzyme tests are described and considered. The use of synthetic substrates for the detection of glycosidases and peptidases is reviewed, and the advantages of fluorogenic synthetic moieties are discussed. The relevance of enzyme tests to accurate and meaningful rapid routine identification is discussed. PMID:9564566
NASA Astrophysics Data System (ADS)
Cho, Hyun-chong; Hadjiiski, Lubomir; Sahiner, Berkman; Chan, Heang-Ping; Paramagul, Chintana; Helvie, Mark; Nees, Alexis V.
2012-03-01
We designed a Content-Based Image Retrieval (CBIR) Computer-Aided Diagnosis (CADx) system to assist radiologists in characterizing masses on ultrasound images. The CADx system retrieves masses that are similar to a query mass from a reference library based on computer-extracted features that describe texture, width-to-height ratio, and posterior shadowing of a mass. Retrieval is performed with the k nearest neighbor (k-NN) method using the Euclidean distance similarity measure and the Rocchio relevance feedback algorithm (RRF). In this study, we evaluated the similarity between the query and the retrieved masses with relevance feedback using our interactive CBIR CADx system. The similarity assessment and feedback were provided by experienced radiologists' visual judgment. For training the RRF parameters, similarities of 1891 image pairs obtained from 62 masses were rated by 3 MQSA radiologists using a 9-point scale (9 = most similar). A leave-one-out method was used in training. For each query mass, the 5 most similar masses were retrieved from the reference library using radiologists' similarity ratings, which were then used by RRF to retrieve another 5 masses for the same query. The best RRF parameters were chosen based on three simulated observer experiments, each of which used one of the radiologists' ratings for retrieval and relevance feedback. For testing, 100 independent query masses on 100 images and 121 reference masses on 230 images were collected. Three radiologists rated the similarity between the query and the computer-retrieved masses. Average similarity ratings without and with RRF were 5.39 and 5.64 on the training set and 5.78 and 6.02 on the test set, respectively. The average Az values without and with RRF were 0.86 ± 0.03 and 0.87 ± 0.03 on the training set and 0.91 ± 0.03 and 0.90 ± 0.03 on the test set, respectively. This study demonstrated that RRF improved the similarity of the retrieved masses.
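The retrieval loop can be sketched directly: k-NN retrieval by Euclidean distance in the feature space, followed by a Rocchio update of the query vector from the masses judged relevant and non-relevant. The three features and the (alpha, beta, gamma) weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
library = rng.normal(size=(121, 3))        # texture, width-to-height, shadowing
query = rng.normal(size=3)

def knn(q, refs, k=5):
    """Indices of the k reference masses closest to the query."""
    d = np.linalg.norm(refs - q, axis=1)
    return np.argsort(d)[:k]

first = knn(query, library)
relevant = first[:3]                        # suppose the first 3 were rated similar
nonrelevant = first[3:]

alpha, beta, gamma = 1.0, 0.75, 0.25        # classical Rocchio weights (assumed)
query_new = (alpha * query +
             beta * library[relevant].mean(axis=0) -
             gamma * library[nonrelevant].mean(axis=0))
second = knn(query_new, library)
print("initial retrieval:", first)
print("after feedback:  ", second)
```

The feedback step pulls the query toward the relevant masses and away from the others, which is the mechanism behind the modest gain in average similarity ratings reported above.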
Utility of PCR, Culture, and Antigen Detection Methods for Diagnosis of Legionellosis.
Chen, Derrick J; Procop, Gary W; Vogel, Sherilynn; Yen-Lieberman, Belinda; Richter, Sandra S
2015-11-01
The goal of this retrospective study was to evaluate the performance of different diagnostic tests for Legionnaires' disease in a clinical setting where Legionella pneumophila PCR had been introduced. Electronic medical records at the Cleveland Clinic were searched for Legionella urinary antigen (UAG), culture, and PCR tests ordered from March 2010 through December 2013. For cases where two or more test methods were performed and at least one was positive, the medical record was reviewed for relevant clinical and epidemiologic factors. Excluding repeat testing on a given patient, 19,912 tests were ordered (12,569 UAG, 3,747 cultures, and 3,596 PCR) with 378 positive results. The positivity rate for each method was 0.4% for culture, 0.8% for PCR, and 2.7% for UAG. For 37 patients, at least two test methods were performed with at least one positive result: 10 (27%) cases were positive by all three methods, 16 (43%) were positive by two methods, and 11 (30%) were positive by one method only. For the 32 patients with medical records available, clinical presentation was consistent with proven or probable Legionella infection in 84% of the cases. For those cases, the sensitivities of culture, PCR, and UAG were 50%, 92%, and 96%, respectively. The specificities were 100% for culture and 99.9% for PCR and UAG. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
2011-01-01
Background The correspondence of satisfaction ratings between physicians and patients can be assessed on different dimensions. One may examine whether they differ between the two groups or focus on measures of association or agreement. The aim of our study was to evaluate methodological difficulties in calculating the correspondence between patient and physician satisfaction ratings and to show the relevance for shared decision making research. Methods We utilised a structured tool for cardiovascular prevention (arriba™) in a pragmatic cluster-randomised controlled trial. Correspondence between patient and physician satisfaction ratings after individual primary care consultations was assessed using the Patient Participation Scale (PPS). We used the Wilcoxon signed-rank test, the marginal homogeneity test, Kendall's tau-b, weighted kappa, percentage of agreement, and the Bland-Altman method to measure differences, associations, and agreement between physicians and patients. Results Statistical measures signal large differences between patient and physician satisfaction ratings, with more favourable ratings provided by patients and a low correspondence regardless of group allocation. Closer examination of the raw data revealed a high ceiling effect of satisfaction ratings and only slight disagreement regarding the distributions of differences between physicians' and patients' ratings. Conclusions Traditional statistical measures of association and agreement cannot capture a clinically meaningful appraisal of the physician-patient relationship by both parties when satisfaction ratings are skewed. Only the Bland-Altman method for assessing agreement, augmented by bar charts of differences, was able to indicate this. Trial registration ISRCTN: ISRCT71348772 PMID:21592337
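A Bland-Altman summary of paired ratings takes only a few lines; the sketch below fabricates ratings with the ceiling effect the authors describe and reports the bias and limits of agreement that the method adds over correlation-style measures.

```python
import numpy as np

rng = np.random.default_rng(8)
patient = np.clip(rng.normal(4.6, 0.5, 200), 1, 5)     # near the scale ceiling
physician = np.clip(patient - np.abs(rng.normal(0.3, 0.3, 200)), 1, 5)

diff = patient - physician
mean = (patient + physician) / 2
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                          # limits of agreement

print(f"bias = {bias:.2f}, limits of agreement = "
      f"[{bias - loa:.2f}, {bias + loa:.2f}]")
# Plotting `diff` against `mean` plus these three horizontal lines gives the
# usual Bland-Altman chart; here only the numeric summary is reported.
```

Because the bias and the spread of differences are reported on the scale's own units, the method remains interpretable even when kappa and correlation coefficients collapse under the ceiling effect.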
Mallik, Saurav; Bhadra, Tapas; Maulik, Ujjwal
2017-01-01
Epigenetic biomarker discovery is an important task in bioinformatics. In this article, we develop a new framework for identifying statistically significant epigenetic biomarkers using maximal-relevance, minimal-redundancy criterion based feature (gene) selection for multi-omics datasets. First, we determine the genes that have both expression and methylation values and follow a normal distribution. Similarly, we identify the genes which have both expression and methylation values but do not follow a normal distribution. For each case, we utilize a gene-selection method that provides maximally relevant but (variable-weighted) minimally redundant genes as the top-ranked genes. For statistical validation, we apply the t-test to both the expression and methylation data consisting of only the normally distributed top-ranked genes, to determine how many of them are both differentially expressed and methylated. Similarly, we utilize the Limma package to perform the non-parametric Empirical Bayes test on both expression and methylation data comprising only the non-normally distributed top-ranked genes, to identify how many of them are both differentially expressed and methylated. We finally report the top-ranking significant gene-markers with biological validation. Moreover, our framework improves the positive predictive rate and reduces the false positive rate in marker identification. In addition, we provide a comparative analysis of our gene-selection method as well as other methods based on the classification performances obtained using several well-known classifiers.
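A plain (unweighted) greedy maximal-relevance minimal-redundancy selection can be sketched as follows: relevance is the mutual information between each gene and the class label, redundancy the mean mutual information with already-selected genes. The variable weighting of redundancy described in the article is omitted here for brevity.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def discretize(X, bins=8):
    """Quantile-bin each column so pairwise MI can be estimated."""
    out = np.empty_like(X, dtype=int)
    for j in range(X.shape[1]):
        edges = np.quantile(X[:, j], np.linspace(0, 1, bins + 1)[1:-1])
        out[:, j] = np.digitize(X[:, j], edges)
    return out

def mrmr(X, y, n_select=5):
    relevance = mutual_info_classif(X, y, random_state=0)
    Xd = discretize(X)
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info_score(Xd[:, j], Xd[:, s])
                                  for s in selected])
            score = relevance[j] - redundancy   # relevance-minus-redundancy criterion
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(9)
X = rng.normal(size=(100, 50))                  # 100 samples x 50 genes (fabricated)
y = (X[:, 3] + X[:, 17] > 0).astype(int)        # class driven by genes 3 and 17
print("selected genes:", mrmr(X, y))
```

In the paper's framework, the same selection is run separately on the normally and non-normally distributed gene pools before the downstream t-test or Empirical Bayes validation.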
Autonomous Scanning Probe Microscopy in Situ Tip Conditioning through Machine Learning.
Rashidi, Mohammad; Wolkow, Robert A
2018-05-23
Atomic-scale characterization and manipulation with scanning probe microscopy rely upon the use of an atomically sharp probe. Here we present automated methods, based on machine learning, to detect the quality of the probe of a scanning tunneling microscope and to recondition it automatically. As a model system, we employ these techniques on the technologically relevant hydrogen-terminated silicon surface, training the network to recognize abnormalities in the appearance of surface dangling bonds. Of the machine learning methods tested, a convolutional neural network yielded the greatest accuracy, achieving positive identification of degraded tips in 97% of the test cases. By using multiple points of comparison and majority voting, the accuracy of the method is improved beyond 99%.
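The majority-voting step is independent of the network itself and can be sketched with a stub classifier standing in for the trained CNN: several dangling-bond patches from the same scan are scored, and the tip is flagged for reconditioning only if most patches agree.

```python
import numpy as np

rng = np.random.default_rng(10)

def classify_patch(patch):
    """Stub for CNN inference: returns P(tip degraded) for one image patch."""
    return float(np.clip(patch.mean() + rng.normal(0, 0.1), 0, 1))

# Nine 64x64 patches from one scan; values fabricated to look "degraded"
patches = [rng.uniform(0.5, 0.9, (64, 64)) for _ in range(9)]
votes = [classify_patch(p) > 0.5 for p in patches]
degraded = sum(votes) > len(votes) / 2
print(f"{sum(votes)}/{len(votes)} patches vote degraded -> recondition: {degraded}")
```

Averaging a decision over several independent views is what lifts the reported single-patch accuracy of 97% beyond 99%.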
Iridology: A systematic review.
Ernst, E
1999-02-01
Iridologists claim to be able to diagnose medical conditions through abnormalities of pigmentation in the iris. This technique is popular in many countries; it is therefore relevant to ask whether it is valid. The aim was to systematically review all interpretable tests of the validity of iridology as a diagnostic tool. Three independent literature searches were performed to identify all blinded tests, and data were extracted in a predefined, standardized fashion. Four case-control studies were found. The majority of these investigations suggests that iridology is not a valid diagnostic method. The validity of iridology as a diagnostic tool is not supported by scientific evaluations. Patients and therapists should be discouraged from using this method.
Intelligence's likelihood and evolutionary time frame
NASA Astrophysics Data System (ADS)
Bogonovich, Marc
2011-04-01
This paper outlines hypotheses relevant to the evolution of intelligent life and encephalization in the Phanerozoic. If general principles are inferable from patterns of Earth life, implications could be drawn for astrobiology. Many of the outlined hypotheses, relevant data, and associated evolutionary and ecological theory are not frequently cited in astrobiological journals. Thus opportunity exists to evaluate reviewed hypotheses with an astrobiological perspective. A quantitative method is presented for testing one of the reviewed hypotheses (hypothesis i; the diffusion hypothesis). Questions are presented throughout, which illustrate that the question of intelligent life's likelihood can be expressed as multiple, broadly ranging, more tractable questions.
Lux, H; Cavalcante, L Barreira; Baur, X
2018-06-01
Hilar and mediastinal lymphadenopathy may represent a diagnostic challenge in clinical practice. This article is intended to facilitate differential diagnosis through a systematic description of the relevant pathologies, notably those with an occupational etiology. Clinical findings of the relevant diseases, i.e. tuberculosis, chronic beryllium disease, sarcoidosis, lung cancer, malignant lymphoma, Epstein-Barr virus infection, and histoplasmosis, are compared. Case history, imaging and laboratory tests have important diagnostic impact, but invasive methods can also be necessary in order to prove or exclude malignancy, infection or autoimmune disease. © Georg Thieme Verlag KG Stuttgart · New York.
A drawback of current in vitro chemical testing is that many commonly used cell lines lack chemical metabolism. This hinders the use and relevance of cell culture in high throughput chemical toxicity screening. To address this challenge, we engineered HEK293T cells to overexpress...
Student Achievement in Large-Lecture Remedial Math Classes
ERIC Educational Resources Information Center
Monte, Brent M.
2011-01-01
Due to the increase in students seeking remedial math classes at the community college level, coupled with declining revenues to the community colleges and a lack of classroom availability, the need to consider increasing class size has become a relevant and timely issue. This study is a mixed-method, quasi-experimental study testing effects of…
2006-12-01
speed of search engines improves the efficiency of such methods, effectiveness is not improved. The objective of this thesis is to construct and test...interest, users are assisted in finding a relevant set of key terms that will aid the search engines in narrowing, widening, or refocusing a Web search
Place-Based Pedagogy in the Era of Accountability: An Action Research Study
ERIC Educational Resources Information Center
Saracino, Peter C.
2010-01-01
Today's most common method of teaching biology--driven by calls for standardization and high-stakes testing--relies on a standards-based, de-contextualized approach to education. This results in "one size fits all" curriculums that ignore local contexts relevant to students' lives, discourage student engagement and ultimately work against a deep…
A Cross-Linguistic Study of the Acquisition of Clitic and Pronoun Production
ERIC Educational Resources Information Center
Varlokosta, Spyridoula; Belletti, Adriana; Costa, João; Friedmann, Naama; Gavarró, Anna; Grohmann, Kleanthes K.; Guasti, Maria Teresa; Tuller, Laurice; Lobo, Maria; Andelkovic, Darinka; Argemí, Núria; Avram, Larisa; Berends, Sanne; Brunetto, Valentina; Delage, Hélène; Ezeizabarrena, María-José; Fattal, Iris; Haman, Ewa; van Hout, Angeliek; de López, Kristine Jensen; Katsos, Napoleon; Kologranic, Lana; Krstic, Nadezda; Kraljevic, Jelena Kuvac; Miekisz, Aneta; Nerantzini, Michaela; Queraltó, Clara; Radic, Zeljana; Ruiz, Sílvia; Sauerland, Uli; Sevcenco, Anca; Smoczynska, Magdalena; Theodorou, Eleni; van der Lely, Heather; Veenstra, Alma; Weston, John; Yachini, Maya; Yatsushiro, Kazuko
2016-01-01
This study develops a single elicitation method to test the acquisition of third-person pronominal objects in 5-year-olds for 16 languages. This methodology allows us to compare the acquisition of pronominals in languages that lack object clitics ("pronoun languages") with languages that employ clitics in the relevant context…
ERIC Educational Resources Information Center
Goodman, Kenneth; Grad, Roland; Pluye, Pierre; Nowacki, Amy; Hickner, John
2012-01-01
Introduction: Electronic knowledge resources have the potential to rapidly provide answers to clinicians' questions. We sought to determine clinicians' reasons for searching these resources, the rate of finding relevant information, and the perceived clinical impact of the information they retrieved. Methods: We asked general internists, family…
ERIC Educational Resources Information Center
Yerrick, Randy; Johnson, Joseph
2009-01-01
This mixed methods study examined the effects of inserting laptops and science technology tools in middle school environments. Working together with a local university, middle school science teaching faculty members wrote and aligned curricula, explored relevant science education literature, tested lessons with summer school students, and prepared…
The path for incorporating new alternative methods and technologies into quantitative chemical risk assessment poses a diverse set of scientific challenges. Some of these challenges include development of relevant and predictive test systems and computational models to integrate...
Value Added: Do New Teacher Evaluation Methods Make the Grade?
ERIC Educational Resources Information Center
Garrett, Kristi
2011-01-01
Measuring a teacher's effectiveness in quantifiable ways is a logical step in a society driven by the SMART goals (specific, measurable, attainable, relevant, and timely objectives) that pervade modern management. The idea of using student performance on standardized tests to judge a teacher's effectiveness picked up steam after the Obama…
2011-01-01
Background Although many biological databases are applying semantic web technologies, meaningful biological hypothesis testing cannot be easily achieved. Database-driven high-throughput genomic hypothesis testing requires both the capability to obtain semantically relevant experimental data and the capability to perform relevant statistical testing on the retrieved data. Tissue Microarray (TMA) data are semantically rich and contain many biologically important hypotheses waiting for high-throughput conclusions. Methods An application-specific ontology was developed for managing TMA and DNA microarray databases with semantic web technologies. Data were represented as Resource Description Framework (RDF) according to the framework of the ontology. Applications for hypothesis testing (Xperanto-RDF) for TMA data were designed and implemented by (1) formulating the syntactic and semantic structures of the hypotheses derived from TMA experiments, (2) formulating SPARQL queries to reflect the semantic structures of the hypotheses, and (3) performing statistical tests on the result sets returned by the SPARQL queries. Results When a user designs a hypothesis in Xperanto-RDF and submits it, the hypothesis can be tested against TMA experimental data stored in Xperanto-RDF. When we evaluated four previously validated hypotheses as an illustration, all the hypotheses were supported by Xperanto-RDF. Conclusions We demonstrated the utility of high-throughput biological hypothesis testing. We believe that preliminary investigation before performing highly controlled experiments can benefit from this approach. PMID:21342584
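The pattern of steps (2) and (3), querying RDF-encoded experimental data with SPARQL and then testing the result sets, can be sketched with rdflib and SciPy. The ontology terms below (ex:TMACore, ex:grade, ex:stainScore) are hypothetical placeholders, not the actual Xperanto ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF
from scipy import stats

EX = Namespace("http://example.org/tma#")
g = Graph()
# Store a handful of fabricated TMA cores as RDF triples
for i, (grade, score) in enumerate([("high", 2.8), ("high", 3.1), ("high", 2.5),
                                    ("low", 1.2), ("low", 1.6), ("low", 0.9)]):
    core = EX[f"core{i}"]
    g.add((core, RDF.type, EX.TMACore))
    g.add((core, EX.grade, Literal(grade)))
    g.add((core, EX.stainScore, Literal(score)))

query = """
PREFIX ex: <http://example.org/tma#>
SELECT ?grade ?score WHERE {
    ?core a ex:TMACore ;
          ex:grade ?grade ;
          ex:stainScore ?score .
}"""
rows = list(g.query(query))
high = [float(r.score) for r in rows if str(r.grade) == "high"]
low = [float(r.score) for r in rows if str(r.grade) == "low"]

t, p = stats.ttest_ind(high, low)   # statistical test on the retrieved groups
print(f"t = {t:.2f}, p = {p:.4f}")
```

The hypothesis itself ("stain score differs by grade") is encoded entirely in the query plus the choice of test, which is the separation Xperanto-RDF formalizes.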
Huang, Qing; Fu, Wei-Ling; You, Jian-Ping; Mao, Qing
2016-10-01
Ebola virus disease (EVD), caused by Ebola virus (EBOV), is a potent acute infectious disease with a high case-fatality rate. Etiological and serological EBOV detection methods, including techniques that involve the detection of the viral genome, virus-specific antigens and anti-virus antibodies, are standard laboratory diagnostic tests that facilitate confirmation or exclusion of EBOV infection. In addition, routine blood tests, liver and kidney function tests, electrolytes and coagulation tests and other diagnostic examinations are important for the clinical diagnosis and treatment of EVD. Because of the high viral load in body fluids and secretions from EVD patients, all body fluids are highly contagious. As a result, biosafety control measures during the collection, transport and testing of clinical specimens obtained from individuals scheduled to undergo EBOV infection testing (including suspected, probable and confirmed cases) are crucial. This report has been generated following extensive work experience in the China Ebola Treatment Center (ETC) in Liberia and incorporates important information pertaining to relevant diagnostic standards, clinical significance, operational procedures, safety controls and other issues related to laboratory testing of EVD. Relevant opinions and suggestions are presented in this report to provide contextual awareness associated with the development of standards and/or guidelines related to EVD laboratory testing.
[The role of biotechnology in pharmaceutical drug design].
Gaisser, Sibylle; Nusser, Michael
2010-01-01
Biotechnological methods have become an important tool in pharmaceutical drug research and development. Today, approximately 15% of drug revenues are derived from biopharmaceuticals. The most relevant indications are oncology, metabolic disorders and disorders of the musculoskeletal system. It can be expected that the relevance of biopharmaceuticals will increase further. Currently, more than 25% of all substances in preclinical testing rely on biotechnology. Products for the treatment of cancer, metabolic disorders and infectious diseases are the most important. New therapeutic approaches such as RNA interference play only a minor role in current commercial drug research and development, accounting for 1.5% of all biological preclinical substances. Investments in sustainable high technology such as biotechnology are of vital importance for a highly developed country like Germany because of its lack of raw materials. Biotechnology helps the pharmaceutical industry to develop new products, processes, methods and services and to improve existing ones. Thus, international competitiveness can be strengthened, new jobs created and existing jobs preserved.
Fast detection of air contaminants using immunobiological methods
NASA Astrophysics Data System (ADS)
Schmitt, Katrin; Bolwien, Carsten; Sulz, Gerd; Koch, Wolfgang; Dunkhorst, Wilhelm; Lödding, Hubert; Schwarz, Katharina; Holländer, Andreas; Klockenbring, Torsten; Barth, Stefan; Seidel, Björn; Hofbauer, Wolfgang; Rennebarth, Torsten; Renzl, Anna
2009-05-01
The fast and direct identification of possibly pathogenic microorganisms in air is gaining increasing interest due to their threat for public health, e.g. in clinical environments or in clean rooms of food or pharmaceutical industries. We present a new detection method allowing the direct recognition of relevant germs or bacteria via fluorescence-labeled antibodies within less than one hour. In detail, an air-sampling unit passes particles in the relevant size range to a substrate which contains antibodies with fluorescence labels for the detection of a specific microorganism. After the removal of the excess antibodies the optical detection unit comprising reflected-light and epifluorescence microscopy can identify the microorganisms by fast image processing on a single-particle level. First measurements with the system to identify various test particles as well as interfering influences have been performed, in particular with respect to autofluorescence of dust particles. Specific antibodies for the detection of Aspergillus fumigatus spores have been established. The biological test system consists of protein A-coated polymer particles which are detected by a fluorescence-labeled IgG. Furthermore the influence of interfering particles such as dust or debris is discussed.
Statistical inference for time course RNA-Seq data using a negative binomial mixed-effect model.
Sun, Xiaoxiao; Dalpiaz, David; Wu, Di; S Liu, Jun; Zhong, Wenxuan; Ma, Ping
2016-08-26
Accurate identification of differentially expressed (DE) genes in time course RNA-Seq data is crucial for understanding the dynamics of transcriptional regulatory networks. However, most available methods treat gene expression at different time points as replicates and test the significance of the mean expression difference between treatments or conditions irrespective of time. They thus fail to identify many DE genes whose profiles differ across time. In this article, we propose a negative binomial mixed-effect model (NBMM) to identify DE genes in time course RNA-Seq data. In the NBMM, mean gene expression is characterized by a fixed effect, and time dependency is described by random effects. The NBMM is very flexible and can be fitted to both unreplicated and replicated time course RNA-Seq data via a penalized likelihood method. By comparing gene expression profiles over time, we further classify the DE genes into two subtypes to enhance the understanding of expression dynamics. A significance test for detecting DE genes is derived using a Kullback-Leibler distance ratio. Additionally, a significance test for gene sets is developed using a gene set score. Simulation analysis shows that the NBMM outperforms currently available methods for detecting DE genes and gene sets. Moreover, our analysis of a fruit fly developmental time course RNA-Seq data set demonstrates that the NBMM identifies biologically relevant genes which are well justified by gene ontology analysis. The proposed method is powerful and efficient for detecting biologically relevant DE genes and gene sets in time course RNA-Seq data.
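The authors' NBMM is not reproduced here, but the shape of per-gene time-course testing can be sketched with a fixed-effects stand-in: a negative binomial GLM with a B-spline in time, compared against an intercept-only null via a likelihood ratio. The counts, dispersion value, and design below are illustrative assumptions, not the paper's model.

```python
# Simplified per-gene analogue of time-course DE screening (not the NBMM):
# negative binomial GLM with a spline in time vs. an intercept-only null.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(0)
time = np.tile(np.arange(8), 3)                # 8 time points, 3 replicates
counts = rng.poisson(50 + 30 * np.sin(time))   # toy counts for one gene
df = pd.DataFrame({"count": counts, "time": time})

nb = sm.families.NegativeBinomial(alpha=0.5)   # fixed dispersion for the sketch
full = smf.glm("count ~ bs(time, df=4)", data=df, family=nb).fit()
null = smf.glm("count ~ 1", data=df, family=nb).fit()

lr = 2 * (full.llf - null.llf)                 # likelihood-ratio statistic
pval = chi2.sf(lr, df=4)                       # 4 spline coefficients tested
print(f"LR = {lr:.1f}, p = {pval:.3g}")
```

In the real setting the dispersion would be estimated per gene and random effects would absorb replicate-level time dependency.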
Mining functionally relevant gene sets for analyzing physiologically novel clinical expression data.
Turcan, Sevin; Vetter, Douglas E; Maron, Jill L; Wei, Xintao; Slonim, Donna K
2011-01-01
Gene set analyses have become a standard approach for increasing the sensitivity of transcriptomic studies. However, analytical methods incorporating gene sets require the availability of pre-defined gene sets relevant to the underlying physiology being studied. For novel physiological problems, relevant gene sets may be unavailable or existing gene set databases may bias the results towards only the best-studied of the relevant biological processes. We describe a successful attempt to mine novel functional gene sets for translational projects where the underlying physiology is not necessarily well characterized in existing annotation databases. We choose targeted training data from public expression data repositories and define new criteria for selecting biclusters to serve as candidate gene sets. Many of the discovered gene sets show little or no enrichment for informative Gene Ontology terms or other functional annotation. However, we observe that such gene sets show coherent differential expression in new clinical test data sets, even if derived from different species, tissues, and disease states. We demonstrate the efficacy of this method on a human metabolic data set, where we discover novel, uncharacterized gene sets that are diagnostic of diabetes, and on additional data sets related to neuronal processes and human development. Our results suggest that our approach may be an efficient way to generate a collection of gene sets relevant to the analysis of data for novel clinical applications where existing functional annotation is relatively incomplete.
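One way to make "coherent differential expression in new test data" concrete is to score a candidate gene set by the average pairwise correlation of its members in an independent expression matrix. A minimal sketch with a toy matrix and an arbitrary gene set; the paper's actual bicluster selection criteria are richer than this.

```python
import numpy as np

def gene_set_coherence(expr, gene_idx):
    """Mean pairwise correlation of a gene set's rows in an expression
    matrix (genes x samples); higher values suggest the set still behaves
    coherently in the new data."""
    r = np.corrcoef(expr[gene_idx])        # within-set correlation matrix
    iu = np.triu_indices_from(r, k=1)      # off-diagonal upper triangle
    return float(r[iu].mean())

expr = np.random.default_rng(1).normal(size=(500, 40))  # toy genes x samples
print(gene_set_coherence(expr, [3, 17, 42, 99]))
```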
Testing Scientific Software: A Systematic Literature Review.
Kanewala, Upulee; Bieman, James M
2014-10-01
Scientific software plays an important role in critical decision making, for example in making weather predictions based on climate models and in computing evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software, such as oracle problems, and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community, such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally, we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of scientific software, make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software, such as oracle problems, when developing testing techniques.
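The oracle problem mentioned above (not knowing the exact expected output of a computation) is often mitigated by metamorphic testing, which checks relations between the outputs of related runs instead of absolute values. A generic illustration, not an example drawn from the review itself:

```python
import math
import random

def simulate(x):               # stand-in for a scientific routine whose
    return math.sin(x)         # exact output has no independent oracle

# Metamorphic relation: sin(pi - x) must equal sin(x), so two runs can
# check each other even though neither output is known in advance.
for _ in range(1000):
    x = random.uniform(-10, 10)
    assert math.isclose(simulate(x), simulate(math.pi - x), abs_tol=1e-12)
print("metamorphic relation held on 1000 random inputs")
```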
Combining Relevance Vector Machines and exponential regression for bearing residual life estimation
NASA Astrophysics Data System (ADS)
Di Maio, Francesco; Tsui, Kwok Leung; Zio, Enrico
2012-08-01
In this paper we present a new procedure for estimating the bearing Residual Useful Life (RUL) by combining data-driven and model-based techniques. Respectively, we resort to (i) Relevance Vector Machines (RVMs) for selecting a low number of significant basis functions, called Relevant Vectors (RVs), and (ii) exponential regression to compute and continuously update residual life estimates. The combination of these techniques is developed with reference to partially degraded thrust ball bearings and tested on real-world vibration-based degradation data. On the case study considered, the proposed procedure outperforms other model-based methods, with the added value of an adequate representation of the uncertainty associated with the estimates and a quantification of the credibility of the results by means of the Prognostic Horizon (PH) metric.
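The exponential-regression half of such a procedure is straightforward to sketch: fit an exponential degradation model to a vibration-based health indicator and extrapolate to a failure threshold. The data, model form, and threshold below are illustrative, and the RVM-based selection of relevant vectors is omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 50.0)                         # hours of operation (toy)
y = 0.2 * np.exp(0.05 * t) + np.random.default_rng(2).normal(0, 0.05, t.size)

def degradation(t, a, b):                      # exponential degradation model
    return a * np.exp(b * t)

(a, b), _ = curve_fit(degradation, t, y, p0=(0.1, 0.01))

threshold = 5.0                                # illustrative failure level
t_fail = np.log(threshold / a) / b             # invert a*exp(b*t) = threshold
print(f"estimated RUL: {t_fail - t[-1]:.1f} h beyond the last observation")
```

In a continuously updating scheme the fit would be repeated as each new vibration sample arrives, shrinking the RUL uncertainty over time.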
Nanthini, B. Suguna; Santhi, B.
2017-01-01
Background: Epilepsy is caused by repeated seizures occurring in the brain. The electroencephalogram (EEG) test provides valuable information about brain function and can be used to detect brain disorders, especially epilepsy. In this study, an automated seizure-detection model is introduced. Materials and Methods: The EEG signals are decomposed into sub-bands by discrete wavelet transform using the db2 (Daubechies) wavelet. Sixteen features, comprising eight statistical features, four gray-level co-occurrence matrix features, and Renyi entropy estimates with four different orders, are extracted from the raw EEG and its sub-bands. A genetic algorithm (GA) is used to select eight relevant features from the 16-dimensional feature set. The model has been trained and tested on EEG signals using a support vector machine (SVM) classifier. The performance of the SVM classifier is evaluated on two different databases. Results: The study was conducted through two different analyses and achieved satisfactory performance for automated seizure detection using the relevant features as input to the SVM classifier. Conclusion: Relevant features selected by the GA give better accuracy for seizure detection. PMID:28781480
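A minimal sketch of this pipeline's shape, using PyWavelets for the db2 decomposition and scikit-learn for the SVM. The per-band features below are a reduced stand-in for the paper's 16-feature set, the GA selection step is omitted, and the signals and labels are synthetic placeholders.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db2", level=4):
    """Mean, std, and energy per DWT sub-band (approximation + details)."""
    feats = []
    for c in pywt.wavedec(signal, wavelet, level=level):
        feats += [c.mean(), c.std(), np.sum(c ** 2)]
    return np.array(feats)

rng = np.random.default_rng(3)
X = np.array([dwt_features(rng.normal(size=1024)) for _ in range(40)])
y = rng.integers(0, 2, size=40)            # placeholder seizure/non-seizure labels

clf = SVC(kernel="rbf").fit(X[:30], y[:30])
print("held-out accuracy:", clf.score(X[30:], y[30:]))
```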
Advantages and limitations of common testing methods for antioxidants.
Amorati, R; Valgimigli, L
2015-05-01
Owing to the importance of antioxidants in the protection of both natural and man-made materials, a large variety of testing methods have been proposed and applied. These include methods based on inhibited autoxidation studies, which are better followed by monitoring the kinetics of oxygen consumption or of the formation of hydroperoxides, the primary oxidation products. Analytical determination of secondary oxidation products (e.g. carbonyl compounds) has also been used. The majority of testing methods, however, do not involve substrate autoxidation. They are based on the competitive bleaching of a probe (e.g. ORAC assay, β-carotene, crocin bleaching assays, and luminol assay), on reaction with a different probe (e.g. spin-trapping and TOSC assay), or they are indirect methods based on the reduction of persistent radicals (e.g. galvinoxyl, DPPH and TEAC assays), or of inorganic oxidizing species (e.g. FRAP, CUPRAC and Folin-Ciocalteu assays). Yet other methods are specific for preventive antioxidants. The relevance, advantages, and limitations of these methods are critically discussed, with respect to their chemistry and the mechanisms of antioxidant activity. A variety of cell-based assays have also been proposed, to investigate the biological activity of antioxidants. Their importance and critical aspects are discussed, along with arguments for the selection of the appropriate testing methods according to the different needs.
Nyberg, Richard Edward; Russell Smith, A
2013-01-01
Spinal motion palpation (SMP) is a standard component of a manual therapy examination despite questionable reliability. The present research is inconclusive as to the relevance of the findings from SMP, with respect to the patient’s pain complaints. Differences in the testing methods and interpretation of spinal mobility testing are problematic. If SMP is to be a meaningful component of a spinal examination, the methods for testing and interpretation must be carefully scrutinized. The intent of this narrative review is to facilitate a better understanding of how SMP should provide the examiner with relevant information for assessment and treatment of patients with spinal pain disorders. The concept of just noticeable difference is presented and applied to SMP as a suggestion for determining the neutral zone behavior of a spinal segment. In addition, the use of a lighter, or more passive receptive palpation technique, is considered as a means for increasing tactile discrimination of spinal movement behavior. Further understanding of the scientific basis of testing SMP may improve intra- and inter-examiner reliability. The significance of the findings from SMP should be considered in context of the patient’s functional problem. Methodological changes may be indicated for the performance of SMP techniques, such as central posterior-anterior (PA) pressure and passive intervertebral motion tests, in order to improve reliability. Instructors of manual therapy involved in teaching SMP should be knowledgeable of the neurophysiological processes of touch sensation so as to best advise students in the application of the various testing techniques. PMID:24421627
Dottori, Martin; Sedeño, Lucas; Martorell Caro, Miguel; Alifano, Florencia; Hesse, Eugenia; Mikulan, Ezequiel; García, Adolfo M; Ruiz-Tagle, Amparo; Lillo, Patricia; Slachevsky, Andrea; Serrano, Cecilia; Fraiman, Daniel; Ibanez, Agustin
2017-06-19
Developing effective and affordable biomarkers for dementias is critical given the difficulty to achieve early diagnosis. In this sense, electroencephalographic (EEG) methods offer promising alternatives due to their low cost, portability, and growing robustness. Here, we relied on EEG signals and a novel information-sharing method to study resting-state connectivity in patients with behavioral variant frontotemporal dementia (bvFTD) and controls. To evaluate the specificity of our results, we also tested Alzheimer's disease (AD) patients. The classification power of the ensuing connectivity patterns was evaluated through a supervised classification algorithm (support vector machine). In addition, we compared the classification power yielded by (i) functional connectivity, (ii) relevant neuropsychological tests, and (iii) a combination of both. BvFTD patients exhibited a specific pattern of hypoconnectivity in mid-range frontotemporal links, which showed no alterations in AD patients. These functional connectivity alterations in bvFTD were replicated with a low-density EEG setting (20 electrodes). Moreover, while neuropsychological tests yielded acceptable discrimination between bvFTD and controls, the addition of connectivity results improved classification power. Finally, classification between bvFTD and AD patients was better when based on connectivity than on neuropsychological measures. Taken together, such findings underscore the relevance of EEG measures as potential biomarker signatures for clinical settings.
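The reported comparison of classification power (connectivity features vs. neuropsychological scores vs. both) follows a standard cross-validation pattern that can be sketched with scikit-learn. The arrays are random placeholders; 190 connectivity features stand in for the upper triangle of a 20-electrode connectivity matrix.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n = 60                                      # patients + controls (toy cohort)
conn = rng.normal(size=(n, 190))            # pairwise connectivity features
neuro = rng.normal(size=(n, 8))             # neuropsychological test scores
y = rng.integers(0, 2, size=n)              # bvFTD vs. control labels

for name, X in [("connectivity", conn), ("neuropsych", neuro),
                ("combined", np.hstack([conn, neuro]))]:
    acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
    print(f"{name:12s} mean CV accuracy: {acc:.2f}")
```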
Methods for the design and analysis of power optimized finite-state machines using clock gating
NASA Astrophysics Data System (ADS)
Chodorowski, Piotr
2017-11-01
The paper discusses two methods for the design of power-optimized FSMs. Both methods use clock gating techniques. The main objective of the research was to write a program capable of automatically generating hardware descriptions of finite-state machines in VHDL, together with testbenches to support power analysis. The creation of the relevant output files is detailed step by step. The program was tested using the LGSynth91 FSM benchmark package. An analysis of the generated circuits shows that the second method presented in this paper leads to a significant reduction in power consumption.
NASA Astrophysics Data System (ADS)
Sugiyanto, Pribadi, Supriyanto, Bambang
2017-09-01
The purpose of this study was to investigate the effectiveness of the Creative & Productive instructional method compared with a conventional method. This research was a quasi-experimental study involving all Civil Engineering students at Universitas Negeri Malang who were taking a course on Steel Structures. The students were randomly assigned to two different treatment groups, 30 students in the experimental group and 37 students in the control group. It was assumed that these groups were equal in all relevant aspects; they differed only in the treatment administered. We used t-tests to test the hypotheses. The results of this research suggest that: (1) the use of the Creative & Productive instructional method can significantly improve students' learning achievement, (2) the use of the Creative & Productive instructional method can significantly improve students' retention, (3) students' motivation has a significant effect on their learning achievement, and (4) students' motivation has a significant effect on their retention.
Syal, Karan; Shen, Simon; Yang, Yunze; Wang, Shaopeng; Haydel, Shelley E; Tao, Nongjian
2017-08-25
To combat antibiotic resistance, a rapid antibiotic susceptibility testing (AST) technology that can identify resistant infections at disease onset is required. Current clinical AST technologies take 1-3 days, which is often too slow for accurate treatment. Here we demonstrate a rapid AST method that tracks sub-μm-scale bacterial motion with an optical imaging and tracking technique. We apply the method to clinically relevant bacterial pathogens, Escherichia coli O157:H7 and uropathogenic E. coli (UPEC), loosely tethered to a glass surface. By analyzing dose-dependent sub-μm motion changes in a population of bacterial cells, we obtain the minimum bactericidal concentration within 2 h using human urine samples spiked with UPEC. We validate the AST method against standard culture-based AST methods. In addition to population studies, the method allows single-cell analysis, which can identify subpopulations of resistant strains within a sample.
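The dose-dependent motion readout can be summarized, in caricature, as a mean-squared displacement per antibiotic concentration, with the minimum bactericidal concentration read off as the lowest dose suppressing motion to the level of dead cells. All tracks, doses, and the motion floor below are invented for illustration.

```python
import numpy as np

def mean_squared_displacement(tracks):
    """tracks: (n_cells, n_frames, 2) array of x, y positions (um)."""
    steps = np.diff(tracks, axis=1)          # frame-to-frame displacements
    return float(np.mean(np.sum(steps ** 2, axis=-1)))

rng = np.random.default_rng(5)
doses = [0, 1, 2, 4, 8, 16]                  # ug/mL, illustrative dose ladder
motion_floor = 0.05                          # assumed motion level of dead cells

msd = {}
for d in doses:
    # Synthetic tracks: random walks whose step size shrinks with dose.
    steps = rng.normal(scale=1.0 / (1.0 + d), size=(50, 100, 2))
    msd[d] = mean_squared_displacement(np.cumsum(steps, axis=1))

mbc = min(d for d in doses if msd[d] < motion_floor)
print({d: round(v, 3) for d, v in msd.items()}, "-> estimated MBC:", mbc, "ug/mL")
```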
van der Voet, Hilko; Goedhart, Paul W; Schmidt, Kerstin
2017-11-01
An equivalence testing method is described to assess the safety of regulated products using relevant data obtained in historical studies with assumedly safe reference products. The method is illustrated using data from a series of animal feeding studies with genetically modified and reference maize varieties. Several criteria for quantifying equivalence are discussed, and study-corrected distribution-wise equivalence is selected as being appropriate for the example case study. An equivalence test is proposed based on a high probability of declaring equivalence in a simplified situation, where there is no between-group variation, where the historical and current studies have the same residual variance, and where the current study is assumed to have a sample size as set by a regulator. The method makes use of generalized fiducial inference methods to integrate uncertainties from both the historical and the current data. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
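The paper's generalized fiducial machinery is not reproduced here; as a point of orientation, the basic logic of declaring equivalence within a margin can be conveyed by a conventional two one-sided tests (TOST) sketch, with all data and the margin invented.

```python
import numpy as np
from scipy import stats

def tost(x, y, margin):
    """Two one-sided t-tests for equivalence of means within +/- margin.
    A conventional stand-in, not the paper's fiducial method."""
    diff = x.mean() - y.mean()
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    dof = len(x) + len(y) - 2
    p_lower = stats.t.sf((diff + margin) / se, dof)   # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, dof)  # H0: diff >= +margin
    return max(p_lower, p_upper)                      # reject both to conclude

rng = np.random.default_rng(6)
gm = rng.normal(100, 10, 20)        # GM variety endpoint (toy values)
ref = rng.normal(101, 10, 20)       # reference varieties endpoint (toy values)
print("TOST p-value:", tost(gm, ref, margin=8.0))
```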
Developing a pressure ulcer risk assessment scale for patients in long-term care.
Lepisto, Mervi; Eriksson, Elina; Hietanen, Helvi; Lepisto, Jyri; Lauri, Sirkka
2006-02-01
Previous pressure ulcer risk assessment scales appear to have relied on opinions about risk factors and are based on care setting rather than research evidence. Utilizing 21 existing risk assessment scales and relevant risk-factor literature, an instrument was developed by Finnish researchers that takes into account individual patient risk factors, devices and methods applied in nursing care, and organizational characteristics. The instrument underwent two pilot tests to assess its relevance and clarity: the first involved 43 nurses and six patients; the second involved 50 nurses with expertise in wound care. Changes to questionnaire items deemed necessary as a result of descriptive analysis and agreement percentages were completed. After pilot testing, the final instrument addressed the following issues: 1) patient risks: activity, mobility in bed, mental status, nutrition, urinary incontinence, fecal incontinence, sensory perception, and skin condition; 2) devices and methods used in patient care: technical devices, bed type, mattress, overlay, seat cushions, and care methods; and 3) staff number and structure, maximum number of beds, and beds in use (the last group of questions was included to ensure participants understood the items; results were not analyzed). The phases of the study provided an expeditious means of data collection and a suitable opportunity to assess how the instrument would function in practice. Instrument reliability and validity were improved as a result of the pilot testing and can be enhanced further with continued use and assessment.
Edinger, Tracy; Cohen, Aaron M.; Bedrick, Steven; Ambert, Kyle; Hersh, William
2012-01-01
Objective: Secondary use of electronic health record (EHR) data relies on the ability to retrieve accurate and complete information about desired patient populations. The Text Retrieval Conference (TREC) 2011 Medical Records Track was a challenge evaluation allowing comparison of systems and algorithms to retrieve patients eligible for clinical studies from a corpus of de-identified medical records, grouped by patient visit. Participants retrieved cohorts of patients relevant to 35 different clinical topics, and visits were judged for relevance to each topic. This study identified the most common barriers to identifying specific clinical populations in the test collection. Methods: Using the runs from track participants and judged visits, we analyzed the five non-relevant visits most often retrieved and the five relevant visits most often overlooked. Categories were developed iteratively to group the reasons for incorrect retrieval for each of the 35 topics. Results: Reasons fell into nine categories for non-relevant visits and five categories for relevant visits. Non-relevant visits were most often retrieved because they contained a non-relevant reference to the topic terms. Relevant visits were most often missed because they used a synonym for a topic term. Conclusions: This failure analysis provides insight into areas for future improvement in EHR-based retrieval, such as more widespread and complete use of standardized terminology in retrieval and data-entry systems. PMID:23304287
Measurement techniques and instruments suitable for life-prediction testing of photovoltaic arrays
NASA Technical Reports Server (NTRS)
Noel, G. T.; Sliemers, F. A.; Deringer, G. C.; Wood, V. E.; Wilkes, K. E.; Gaines, G. B.; Carmichael, D. C.
1978-01-01
Array failure modes, relevant materials property changes, and primary degradation mechanisms are discussed as a prerequisite to identifying suitable measurement techniques and instruments. Candidate techniques and instruments are identified on the basis of extensive reviews of published and unpublished information. These methods are organized into six measurement categories: chemical, electrical, optical, thermal, mechanical, and other physical methods. Using specified evaluation criteria, the most promising techniques and instruments for use in life-prediction tests of arrays were selected.
Personalized genetic testing as a tool for integrating ethics instruction into biology courses.
Zhang, Tenny R; Anderson, Misti Ault
2014-12-01
Personalized genetic testing (PGT) has been used by some educational institutions as a pedagogical tool for teaching human genetics. While work has been done that examines the potential for PGT to improve students' interest and understanding of the science involved in genetic testing, there has been less dialogue about how this method might be useful for integrating ethical and societal issues surrounding genetic testing into classroom discussions. Citing the importance of integrating ethics into the biology classroom, we argue that PGT can be an effective educational tool for integrating ethics and science education, and discuss relevant ethical considerations for instructors using this approach.
NASA Technical Reports Server (NTRS)
Zhu, Dongming; Bhatt, Ramakrishna T.; Harder, Bryan
2016-01-01
This paper presents developments in thermo-mechanical testing approaches and the durability performance of environmental barrier coatings (EBCs) and EBC-coated SiC/SiC ceramic matrix composites (CMCs). Critical testing aspects of the CMCs are described, including state-of-the-art instrumentation such as temperature, thermal gradient, and full-field strain measurements; thermal conductivity evolution and thermal stress resistance of the materials; NDE methods; and damage accumulation associated with interacting thermo-mechanical stresses and environments. Examples are also given of testing ceramic matrix composite sub-elements and small airfoils to help better understand the critical and complex CMC and EBC properties in engine-relevant testing environments.
ERIC Educational Resources Information Center
Martin, Andrew J.; Lazendic, Goran
2018-01-01
The present study investigated the implications of computer-adaptive testing (operationalized by way of multistage adaptive testing; MAT) and "conventional" fixed order computer testing for various test-relevant outcomes in numeracy, including achievement, test-relevant motivation and engagement, and subjective test experience. It did so…
Experimental determination of the oral bioavailability and bioaccessibility of lead particles
2012-01-01
In vivo estimations of Pb particle bioavailability are costly and variable, because of the nature of animal assays. The most feasible alternative for increasing the number of investigations carried out on Pb particle bioavailability is in vitro testing. This testing method requires calibration using in vivo data on an adapted animal model, so that the results will be valid for childhood exposure assessment. Also, the test results must be reproducible within and between laboratories. The Relative Bioaccessibility Leaching Procedure, which is calibrated with in vivo data on soils, presents the highest degree of validation and simplicity. This method could be applied to Pb particles, including those in paint and dust, and those in drinking water systems, which although relevant, have been poorly investigated up to now for childhood exposure assessment. PMID:23173867
K(3)EDTA Vacuum Tubes Validation for Routine Hematological Testing.
Lima-Oliveira, Gabriel; Lippi, Giuseppe; Salvagno, Gian Luca; Montagnana, Martina; Poli, Giovanni; Solero, Giovanni Pietro; Picheth, Geraldo; Guidi, Gian Cesare
2012-01-01
Background and Objective. Some in vitro diagnostic devices (e.g., blood collection vacuum tubes and syringes for blood analyses) are not validated before quality laboratory managers decide to start using them or to change the brand. Frequently, laboratory or hospital managers select the vacuum tubes for blood collection based on cost considerations or on the relevance of a brand. The aim of this study was to validate two dry K(3)EDTA vacuum tubes of different brands for routine hematological testing. Methods. Blood specimens from 100 volunteers were collected in two different K(3)EDTA vacuum tubes by a single, expert phlebotomist. The routine hematological testing was done on the Advia 2120i hematology system. The significance of the differences between samples was assessed by paired Student's t-test after checking for normality. The level of statistical significance was set at P < 0.05. Results and Conclusions. The different brands' tubes evaluated can represent a clinically relevant source of variation only for mean platelet volume (MPV) and platelet distribution width (PDW). This validation will permit laboratory or hospital managers to select, on technical or economic grounds, a validated brand of vacuum tube for routine hematological tests.
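The statistical core of this validation, a paired comparison of each analyte between the two tube brands with a normality check first, can be sketched as follows; the measurements are synthetic placeholders.

```python
import numpy as np
from scipy.stats import shapiro, ttest_rel

rng = np.random.default_rng(9)
tube_a = rng.normal(8.5, 0.9, 100)             # e.g. MPV (fL) in brand A, toy data
tube_b = tube_a + rng.normal(0.15, 0.3, 100)   # same 100 donors, brand B

diff = tube_b - tube_a
_, p_norm = shapiro(diff)                 # normality check on paired differences
_, p_paired = ttest_rel(tube_b, tube_a)   # paired Student's t-test

print(f"normality p = {p_norm:.3f}, paired t-test p = {p_paired:.4f}")
# A significant difference (P < 0.05) would flag the analyte as sensitive
# to the tube brand, as reported here for MPV and PDW.
```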
Clinical-decision support based on medical literature: A complex network approach
NASA Astrophysics Data System (ADS)
Jiang, Jingchi; Zheng, Jichuan; Zhao, Chao; Su, Jia; Guan, Yi; Yu, Qiubin
2016-10-01
In making clinical decisions, clinicians often review medical literature to ensure the reliability of diagnosis, test, and treatment, because the medical literature can answer clinical questions and assist clinicians in making clinical decisions. Therefore, finding the appropriate literature is a critical problem for clinical-decision support (CDS). First, the present study employs search engines to retrieve relevant literature about patient records. However, the result of this traditional method is usually unsatisfactory. To improve the relevance of the retrieval result, a medical literature network (MLN) based on these retrieved papers is constructed. We show that this MLN has the small-world and scale-free properties of a complex network. According to the structural characteristics of the MLN, we adopt two methods to identify potentially relevant literature beyond the retrieved papers. By integrating these papers into the MLN, a more comprehensive MLN is built to answer the questions posed by actual patient records. Furthermore, we propose a re-ranking model to sort all papers by relevance. We experimentally find that the re-ranking model improves the normalized discounted cumulative gain of the results. As participants of the Text Retrieval Conference 2015, our clinical-decision method based on the MLN also yields higher scores than the medians in most topics and achieves the best scores for topics #11 and #12. These results indicate that our study can be used to effectively assist clinicians in making clinical decisions, and that the MLN can facilitate the investigation of CDS.
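The abstract does not specify the re-ranking model, but the general pattern of building a weighted literature network and using a centrality score as a re-ranking signal can be sketched with networkx; the edges and weights below are hypothetical.

```python
import networkx as nx

# Hypothetical similarity edges among retrieved papers (weight = strength
# of the link); the actual MLN construction and re-ranking model differ.
edges = [("p1", "p2", 0.9), ("p2", "p3", 0.7), ("p1", "p4", 0.4),
         ("p3", "p4", 0.8), ("p4", "p5", 0.6)]

G = nx.Graph()
G.add_weighted_edges_from(edges)

print("average clustering:", nx.average_clustering(G))  # small-world indicator
scores = nx.pagerank(G, weight="weight")   # centrality as a re-ranking signal
for paper, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(paper, round(s, 3))
```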
Linking Assessment and Instruction Using Ontologies. CSE Technical Report 693
ERIC Educational Resources Information Center
Chung, Gregory K. W. K.; Delacruz, Girlie C.; Dionne, Gary B.; Bewley, William L.
2006-01-01
In this study we report on a test of a method that uses ontologies to individualize instruction by directly linking assessment results to the delivery of relevant content. Our sample was 2nd Lieutenants undergoing entry-level training on rifle marksmanship. Ontologies are explicit expressions of the concepts in a domain, the links among the…
Pedagogy of Science Teaching Tests: Formative Assessments of Science Teaching Orientations
ERIC Educational Resources Information Center
Cobern, William W.; Schuster, David; Adams, Betty; Skjold, Brandy Ann; Mugaloglu, Ebru Zeynep; Bentz, Amy; Sparks, Kelly
2014-01-01
A critical aspect of teacher education is gaining pedagogical content knowledge of how to teach science for conceptual understanding. Given the time limitations of college methods courses, it is difficult to touch on more than a fraction of the science topics potentially taught across grades K-8, particularly in the context of relevant pedagogies.…
Retrieving relevant time-course experiments: a study on Arabidopsis microarrays.
Şener, Duygu Dede; Oğul, Hasan
2016-06-01
Understanding the time-course regulation of genes in response to a stimulus is a major concern in current systems biology. The problem is usually approached with computational methods that model gene behaviour, or a gene's networked interactions with others, through a set of latent parameters. The model parameters can be estimated through a meta-analysis of available data obtained from other relevant experiments. The key question here is how to find the relevant experiments which are potentially useful in analysing the current data. In this study, the authors address this problem in the context of time-course gene expression experiments from an information-retrieval perspective. To this end, they introduce a computational framework that takes a time-course experiment as a query and reports a list of relevant experiments retrieved from a given repository. These retrieved experiments can then be used to associate the environmental factors of the query experiment with previously reported findings. The model is tested using a set of time-course Arabidopsis microarrays. The experimental results show that relevant experiments can be successfully retrieved based on content similarity.
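Content similarity between time-course experiments can be sketched by interpolating each experiment's gene profiles onto a shared time grid and averaging per-gene correlations; the grid, data, and aggregation choice here are assumptions, not the authors' retrieval model.

```python
import numpy as np

def experiment_similarity(t_a, expr_a, t_b, expr_b,
                          grid=np.linspace(0, 1, 20)):
    """Mean per-gene correlation between two experiments (genes x timepoints)
    after interpolating both onto a common normalized time grid."""
    a = np.array([np.interp(grid, t_a, g) for g in expr_a])
    b = np.array([np.interp(grid, t_b, g) for g in expr_b])
    cors = [np.corrcoef(x, y)[0, 1] for x, y in zip(a, b)]
    return float(np.mean(cors))

rng = np.random.default_rng(7)
t_query, t_repo = np.linspace(0, 1, 6), np.linspace(0, 1, 9)
query = rng.normal(size=(100, 6))          # toy query experiment
repo_exp = rng.normal(size=(100, 9))       # toy repository experiment
print(experiment_similarity(t_query, query, t_repo, repo_exp))
```

Ranking every repository experiment by this score and returning the top hits gives the query-by-experiment behaviour the framework describes.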
Measures of fish behavior as indicators of sublethal toxicosis during standard toxicity tests
Little, E.E.; DeLonay, A.J.
1996-01-01
Behavioral functions essential for growth and survival can be dramatically altered by sublethal exposure to toxicants. Measures of these behavioral responses are effective in detecting adverse effects of sublethal contaminant exposure. Behavioral responses of fishes can be qualitatively and quantitatively evaluated during routine toxicity tests. At selected intervals of exposure, qualitative evaluations are accomplished through direct observations, whereas video recordings are used for quantitative evaluations. Standardized procedures for behavioral evaluation are readily applicable to different fish species and provide rapid, sensitive, and ecologically relevant assessments of sublethal exposure. The methods are readily applied to standardized test protocols.
Zug, Kathryn A; Pham, Anh Khoa; Belsito, Donald V; DeKoven, Joel G; DeLeo, Vincent A; Fowler, Joseph F; Fransway, Anthony F; Maibach, Howard I; Marks, James G; Mathias, C G Toby; Pratt, Melanie D; Sasseville, Denis; Storrs, Frances J; Taylor, James S; Warshaw, Erin M; Zirwas, Matthew J
2014-01-01
Allergic contact dermatitis is common in children. Epicutaneous patch testing is an important tool for identifying responsible allergens. The objective of this study was to provide the patch test results from children (aged ≤18 years) examined by the North American Contact Dermatitis Group from 2005 to 2012. This is a retrospective analysis of children patch-tested with the North American Contact Dermatitis Group 65- or 70-allergen series. Frequencies and counts were compared with previously published data (2001-2004) using χ² statistics. A total of 883 children were tested during the study period. Of these, 62.3% had ≥1 positive patch test and 56.7% had ≥1 relevant positive patch test. Frequencies of positive and relevant positive patch test reactions were highest with nickel sulfate (28.1/25.6), cobalt chloride (12.3/9.1), neomycin sulfate (7.1/6.6), balsam of Peru (5.7/5.5), and lanolin alcohol 50% in petrolatum vehicle (5.5/5.1). The rates of ≥1 positive patch test and ≥1 relevant positive patch test in children did not differ significantly from adults (≥19 years) or from previously tested children (2001-2004). The percentage of clinically relevant positive patch tests for 27 allergens differed significantly between children and adults. A total of 23.6% of children had a relevant positive reaction to at least 1 supplemental allergen. Differences in positive and relevant positive patch test frequencies between children and adults, as well as between test periods, confirm the importance of reporting periodic updates of patch testing in children to enhance clinicians' vigilance for clinically important allergens.
A novel tensile test method to assess texture and gaping in salmon fillets.
Ashton, Thomas J; Michie, Ian; Johnston, Ian A
2010-05-01
A new tensile strength method was developed to quantify the force required to tear a standardized block of Atlantic salmon muscle, with the aim of identifying samples prone to factory downgrading as a result of softness and fillet gaping. The new method effectively overcomes the problems of sample attachment encountered with previous tensile strength tests. The repeatability, sensitivity, and predictive value of the new technique were evaluated against other common instrumental texture measurement methods. The relationship between sensory assessments of firmness and parameters from the instrumental texture methods was also determined. Data from the new method showed the strongest correlations with gaping severity (r = -0.514, P < 0.001) and the highest repeatability when analyzing cold-smoked samples. The Warner-Bratzler shear method gave the most repeatable data for fresh samples and the highest correlations between fresh and smoked product from the same fish (r = 0.811, P < 0.001). A hierarchical cluster analysis placed the tensile test in the top cluster, alongside the Warner-Bratzler method, demonstrating that it also yields adequate data with respect to these tests. None of the tested sensory attributes showed significant relationships to the mechanical tests except fillet firmness, with correlations (r) of 0.42 for cylinder probe maximum force (P = 0.005) and 0.31 for tensile work (P = 0.04). It was concluded that the tensile test method provides an important addition to the available tools for mechanical analysis of salmon quality, particularly with respect to the prediction of gaping during factory processing, which is a serious commercial problem. This novel, reliable method of measuring flesh tensile strength in salmon provides data of direct relevance to gaping.
Shi, Yi; Gao, Ping; Gong, Yuchuan; Ping, Haili
2010-10-04
A biphasic in vitro test method was used to examine release profiles of a poorly soluble model drug, celecoxib (CEB), from its immediate release formulations. Three formulations of CEB were investigated in this study, including a commercial Celebrex capsule, a solution formulation (containing cosolvent and surfactant) and a supersaturatable self-emulsifying drug delivery system (S-SEDDS). The biphasic test system consisted of an aqueous buffer and a water-immiscible organic solvent (e.g., octanol) with the use of both USP II and IV apparatuses. The aqueous phase provided a nonsink dissolution medium for CEB, while the octanol phase acted as a sink for CEB partitioning. For comparison, CEB concentration-time profiles of these formulations in the aqueous medium under either a sink condition or a nonsink condition were also explored. CEB release profiles of these formulations observed in the aqueous medium from either the sink condition test, the nonsink condition test, or the biphasic test have little relevance to the pharmacokinetic observations (e.g., AUC, C(max)) in human subjects. In contrast, a rank order correlation among the three CEB formulations is obtained between the in vitro AUC values of CEB from the octanol phase up to t = 2 h and the in vivo mean AUC (or C(max)) values. As the biphasic test permits a rapid removal of drug from the aqueous phase by partitioning into the organic phase, the amount of drug in the organic phase represents the amount of drug accumulated in systemic circulation in vivo. This hypothesis provides the scientific rationale for the rank order relationship among these CEB formulations between their CEB concentrations in the organic phase and the relative AUC or C(max). In addition, the biphasic test method permits differentiation and discrimination of key attributes among the three different CEB formulations. This work demonstrates that the biphasic in vitro test method appears to be useful as a tool in evaluating performance of formulations of poorly water-soluble drugs and to provide potential for establishing an in vitro-in vivo relationship.
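The in vitro metric described, the octanol-phase AUC up to 2 h, is a trapezoidal integral that can be compared with in vivo AUC by rank order. A sketch with invented concentration-time values standing in for the three formulations:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import spearmanr

t = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0])   # h, sampling times
octanol = {                                     # octanol-phase conc. (toy values)
    "capsule":  [0, 1, 3, 6, 8, 9],
    "solution": [0, 2, 5, 9, 12, 14],
    "S-SEDDS":  [0, 4, 9, 15, 19, 22],
}
auc_vitro = {f: trapezoid(c, t) for f, c in octanol.items()}

auc_vivo = {"capsule": 10.1, "solution": 15.3, "S-SEDDS": 21.7}  # toy values
rho, _ = spearmanr([auc_vitro[f] for f in octanol],
                   [auc_vivo[f] for f in octanol])
print(auc_vitro, "| Spearman rank correlation:", rho)
```

A rank correlation of 1 across the formulations would mirror the qualitative in vitro-in vivo relationship the study reports.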
Functional assessment of the ex vivo vocal folds through biomechanical testing: A review
Dion, Gregory R.; Jeswani, Seema; Roof, Scott; Fritz, Mark; Coelho, Paulo; Sobieraj, Michael; Amin, Milan R.; Branski, Ryan C.
2016-01-01
The human vocal folds are complex structures made up of distinct layers that vary in cellular and extracellular composition. The mechanical properties of vocal fold tissue are fundamental to the study of both the acoustics and the biomechanics of voice production. To date, quantitative methods have been applied to characterize vocal fold tissue in both normal and pathologic conditions. This review describes, summarizes, and discusses the most commonly employed methods for vocal fold biomechanical testing. Force-elongation testing, torsional parallel-plate rheometry, simple-shear parallel-plate rheometry, linear skin rheometry, and indentation are the most frequently employed biomechanical tests for vocal fold tissues, and each provides material-property data that can be used to compare native tissue versus diseased or treated tissue. Force-elongation testing is clinically useful, as it allows functional unit testing, while rheometry provides physiologically relevant shear data, and nanoindentation permits micrometer-scale testing across different areas of the vocal fold as well as whole-organ testing. Thoughtful selection of the testing technique during experimental design to evaluate a hypothesis is important to optimizing biomechanical testing of vocal fold tissues. PMID:27127075
Harmonisation of animal testing alternatives in China.
Cheng, Shujun; Qu, Xiaoting; Qin, Yao
2017-12-01
More and more countries are lining up to follow the EU's approach and implement a full ban on the sale of cosmetics that have been tested on animals, which has been the case in the EU since 2013. Besides animal welfare considerations, the need for mutual acceptance of data (MAD) and harmonisation of the global market have made the move toward non-animal testing a desirable general trend for countries worldwide. Over the last 10 years, the concept of alternative methods has been gradually developing in China. This has seen the harmonisation of relevant legislation, the organisation of various theoretical and hands-on training sessions, the exploration of method validation, the adoption of internationally recognised methods, the propagation of alternative testing standards, and an in-depth investigation into the potential use of in vitro methods in the biosciences. There are barriers to this progress, including the demand for a completely new infrastructure, the need to build technology capability, the requirement for a national standardisation system formed through international co-operation, and the lack of technical assistance to facilitate self-innovation. China is now increasing speed in harmonising its approach to the use of non-animal alternatives, accelerating technological development and attempting to incorporate non-animal, in vitro, testing methods into the national regulatory system.
NASA Astrophysics Data System (ADS)
Adamkowski, A.; Krzemianowski, Z.
2012-11-01
The paper presents experiences gathered during many years of using the current-meter and pressure-time methods for flow rate measurement in many hydropower plants. The integration techniques used in both methods differ from the recommendations contained in the relevant international standards, mainly the graphical and arithmetical ones. The results of a comparative analysis of both methods, applied at the same time during hydraulic performance tests of two Kaplan turbines in a Polish hydropower plant, are presented in the final part of the paper. For the pressure-time method, the concrete penstocks of the tested turbines required installing special measuring instrumentation inside the penstock. The comparison has shown satisfactory agreement between the discharge measurements obtained with the two methods: maximum differences between the discharge values did not exceed 1.0 %, and the average differences were not greater than 0.5 %.
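For the pressure-time method, discharge is recovered by integrating the pressure difference measured during a load-rejection transient. A minimal numerical sketch under simplified assumptions (friction and leakage neglected, illustrative geometry and pressure trace):

```python
import numpy as np
from scipy.integrate import trapezoid

rho, L, A = 1000.0, 40.0, 3.0            # kg/m3, penstock length (m), area (m2)
t = np.linspace(0.0, 5.0, 500)           # s, duration of the closure transient
dp = 9000.0 * np.exp(-((t - 2.5) ** 2))  # Pa, synthetic pressure-rise trace

# Momentum balance for the decelerating water column (friction and leakage
# neglected): the pressure rise satisfies dp(t) = -(rho*L/A) * dQ/dt, so
# integrating dp over the whole transient recovers the initial discharge Q0.
Q0 = A / (rho * L) * trapezoid(dp, t)
print(f"estimated initial discharge: {Q0:.2f} m3/s")
```

In practice the friction and leakage terms the sketch omits are exactly where the integration technique (graphical, arithmetical, or numerical) matters.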
Mobile phone interference with medical equipment and its clinical relevance: a systematic review.
Lawrentschuk, Nathan; Bolton, Damien M
2004-08-02
To conduct a systematic review of studies on clinically relevant digital mobile phone electromagnetic interference with medical equipment. MEDLINE and SUMSEARCH were searched for the period 1966-2004. The Cochrane Library and Database of Abstracts of Reviews of Effects were also searched for systematic reviews. Studies were eligible if published in a peer-reviewed journal in English, and if they included testing of digital mobile phones for clinically relevant interference with medical equipment used to monitor or treat patients, but not implantable medical devices. As there was considerable heterogeneity in medical equipment studied and the conduct of testing, results were summarised rather than subjected to meta-analysis. Clinically relevant electromagnetic interference (EMI) secondary to mobile phones potentially endangering patients occurred in 45 of 479 devices tested at 900 MHz and 14 of 457 devices tested at 1800 MHz. However, in the largest studies, the prevalence of clinically relevant EMI was low. Most clinically relevant EMI occurred when mobile phones were used within 1 m of medical equipment. Although testing was not standardised between studies and equipment tested was not identical, it is of concern that at least 4% of devices tested in any study were susceptible to clinically relevant EMI. All studies recommend some type of restriction of mobile phone use in hospitals, with use greater than 1 m from equipment and restrictions in clinical areas being the most common.
Visual conspicuity: a new simple standard, its reliability, validity and applicability.
Wertheim, A H
2010-03-01
A general standard for quantifying conspicuity is described. It derives from a simple and easy method to quantitatively measure the visual conspicuity of an object. The method stems from the theoretical view that the conspicuity of an object is not a property of that object, but describes the degree to which the object is perceptually embedded in, i.e. laterally masked by, its visual environment. First, three variations of a simple method to measure the strength of such lateral masking are described, and empirical evidence for its reliability and validity is presented, as are several tests of predictions concerning the effects of viewing distance and ambient light. It is then shown how this method yields a conspicuity standard, expressed as a number, which can be made part of a rule of law and which can be used to test whether or not, and to what extent, the conspicuity of a particular object, e.g. a traffic sign, meets a predetermined criterion. An additional feature is that, when used under different ambient-light conditions, the method may also yield an index of the amount of visual clutter in the environment. Taken together, the evidence illustrates the method's applicability both in the laboratory and in real-life situations. STATEMENT OF RELEVANCE: This paper concerns a proposal for a new method to measure visual conspicuity, yielding a numerical index that can be used in a rule of law. It is of importance to ergonomists and human-factors specialists who are asked to measure the conspicuity of an object, such as a traffic or railroad sign, or any other object. The new method is simple and circumvents the need to perform elaborate (search) experiments, and thus has great relevance as a simple tool for applied research.
Pressure ulcer prevention algorithm content validation: a mixed-methods, quantitative study.
van Rijswijk, Lia; Beitz, Janice M
2015-04-01
Translating pressure ulcer prevention (PUP) evidence-based recommendations into practice remains challenging for a variety of reasons, including the perceived quality, validity, and usability of the research or the guideline itself. Following the development and face validation testing of an evidence-based PUP algorithm, additional stakeholder input and testing were needed. Using convenience sampling methods, wound care experts attending a national wound care conference and a regional wound ostomy continence nursing (WOCN) conference and/or graduates of a WOCN program were invited to participate in an Institutional Review Board-approved, mixed-methods quantitative survey with qualitative components to examine algorithm content validity. After participants provided written informed consent, demographic variables were collected and participants were asked to comment on and rate the relevance and appropriateness of each of the 26 algorithm decision points/steps using standard content validation study procedures. All responses were anonymous. Descriptive summary statistics, mean relevance/appropriateness scores, and the content validity index (CVI) were calculated. Qualitative comments were transcribed and thematically analyzed. Of the 553 wound care experts invited, 79 (average age 52.9 years, SD 10.1; range 23-73) consented to participate and completed the study (a response rate of 14%). Most (67, 85%) were female, registered (49, 62%) or advanced practice (12, 15%) nurses, and had > 10 years of health care experience (88, 92%). Other health disciplines included medical doctors, physical therapists, nurse practitioners, and certified nurse specialists. Almost all had received formal wound care education (75, 95%). On a Likert-type scale of 1 (not relevant/appropriate) to 4 (very relevant and appropriate), the average score for the entire algorithm/all decision points (N = 1,912) was 3.72, with an overall CVI of 0.94 (out of 1). The only decision point/step recommendation with a CVI of ≤ 0.70 was the recommendation to provide medical-grade sheepskin for patients at high risk for friction/shear. Many positive and substantive suggestions for minor modifications, including color, flow, and algorithm orientation, were received. The high overall and individual item rating scores and CVI further support the validity and appropriateness of the PUP algorithm with the addition of the minor modifications. The generic recommendations facilitate individualization, and future research should focus on construct validation testing.
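The item-level content validity index used in studies of this kind is simply the share of experts rating an item 3 or 4 on the 4-point relevance scale. A small sketch with invented ratings:

```python
import numpy as np

def item_cvi(ratings):
    """Item content validity index: proportion of experts rating the
    decision point 3 or 4 on the 1-4 relevance/appropriateness scale."""
    return float((np.asarray(ratings) >= 3).mean())

ratings = [4, 4, 3, 2, 4, 3, 4, 4]      # illustrative ratings for one item
print(f"I-CVI = {item_cvi(ratings):.2f}")  # 0.88, above the 0.70 cut-off used
```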
A 100-Year Review: Sensory analysis of milk.
Schiano, A N; Harwood, W S; Drake, M A
2017-12-01
Evaluation of the sensory characteristics of food products has been, and will continue to be, the ultimate method for evaluating product quality. Sensory quality is a parameter that can be evaluated only by humans and consists of a series of tests or tools that can be applied objectively or subjectively within the constructs of carefully selected testing procedures and parameters. Depending on the chosen test, evaluators are able to probe areas of interest that are intrinsic product attributes (e.g., flavor profiles and off-flavors) as well as extrinsic measures (e.g., market penetration and consumer perception). This review outlines the literature pertaining to relevant testing procedures and studies of the history of sensory analysis of fluid milk. In addition, evaluation methods outside of traditional sensory techniques and future outlooks on the subject of sensory analysis of fluid milk are explored and presented. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Thermal Protection Test Bed Pathfinder Development Project
NASA Technical Reports Server (NTRS)
Snapp, Cooper
2015-01-01
In order to increase thermal protection capabilities for future reentry vehicles, a method to obtain relevant test data is required. Although arc jet testing can be used to obtain some data on materials, the best way to obtain these data is to actually expose the materials to an atmospheric reentry. The overprediction of the Orion EFT-1 flight data is an example of how ground-test-to-flight traceability is not fully understood. The RED-Data small reentry capsule developed by Terminal Velocity Aerospace is critical to understanding this traceability. In order to begin to utilize this technology, ES3 needs to be ready to build and integrate heat shields onto the RED-Data vehicle. Using a heritage Shuttle tile material for the heat shield will both allow valuable insight into the environment that the RED-Data vehicle can provide and give ES3 the knowledge and capability to build and integrate future heat shields for this vehicle.
Spent Pot Lining Characterization Framework
NASA Astrophysics Data System (ADS)
Ospina, Gustavo; Hassan, Mohamed I.
2017-09-01
Spent pot lining (SPL) management represents a major concern for aluminum smelters. There are two key elements of spent pot lining management: recycling and safe storage. Spent pot lining waste can potentially have beneficial uses in co-firing in cement plants. Safe storage of SPL is also of utmost importance: gas generation from the reaction of SPL with water, as well as its ignition sensitivity, must be studied. However, determining the feasibility of SPL co-firing and developing the required procedures for safe storage rely on determining experimentally all the necessary SPL properties, along with the appropriate test methods recognized by emissions standards and fire safety design codes. The applicable regulations and relevant SPL properties for this purpose are presented, along with the corresponding test methods.
Schaarup, Clara; Hartvigsen, Gunnar; Larsen, Lars Bo; Tan, Zheng-Hua; Årsand, Eirik; Hejlesen, Ole Kristian
2015-01-01
The Online Diabetes Exercise System was developed to motivate people with Type 2 diabetes to do a 25-minute low-volume high-intensity interval training program. In a previous multi-method evaluation of the system, several usability issues were identified and corrected. Despite the thorough testing, it was unclear whether all usability problems had been identified using the multi-method evaluation. Our hypothesis was that adding eye-tracking triangulation to the multi-method evaluation would increase accuracy and completeness when testing the usability of the system. The study design was an eye-tracking triangulation: conventional eye-tracking with predefined tasks followed by the Post-Experience Eye-Tracked Protocol (PEEP). Six Areas of Interest were the basis for the PEEP session. The eye-tracking triangulation gave objective and subjective results, which are believed to be highly relevant for designing, implementing, evaluating and optimizing systems in the field of health informatics. Future work should include testing the method on a larger and more representative group of users and applying the method to different system types.
NASA Astrophysics Data System (ADS)
Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus
2017-05-01
For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electro-optical payloads provide high-definition video data which has to be exploited with respect to relevant ground targets in real time by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in Deep Convolutional Neural Networks (CNNs) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system, with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General Purpose Computation on a Graphics Processing Unit (GPGPU) and also for efficient training on large training samples. The selected CNN, which is pre-trained only once on domain-extrinsic data, yields a highly invariant feature extraction; this allows for significantly reduced adaptation and training of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained, and performance results are given using the established precision-recall diagrams, average precision, and runtime figures on representative test data. A comparison to legacy target recognition approaches shows the impressive performance increase achieved by the proposed CNN+SVM machine-learning approach and the capability for real-time high-definition video exploitation.
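The proprietary frequency-domain SVM is not public, but the overall pattern (a fixed, pre-trained CNN as an invariant feature extractor feeding a fast linear SVM) can be sketched with torchvision and scikit-learn. The inputs and labels are placeholders, and the snippet assumes torchvision ≥ 0.13 with downloadable ImageNet weights.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import LinearSVC

# Fixed, pre-trained CNN as feature extractor (trained once, domain-extrinsic).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()      # drop the classifier head
backbone.eval()

@torch.no_grad()
def features(batch):                   # batch: (N, 3, 224, 224) float tensor
    return backbone(batch).numpy()     # (N, 512) feature vectors

# Toy stand-in for labelled target chips extracted from ISR video frames.
X = features(torch.rand(32, 3, 224, 224))
y = np.random.default_rng(8).integers(0, 2, size=32)

svm = LinearSVC().fit(X, y)            # fast linear SVM on frozen CNN features
print("training accuracy:", svm.score(X, y))
```

Because only the lightweight SVM is retrained, adapting to a new target class needs far fewer labelled samples than retraining the CNN itself, which is the design point the paper emphasizes.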
Inter-departmental dosimetry audits – development of methods and lessons learned
Eaton, David J.; Bolton, Steve; Thomas, Russell A. S.; Clark, Catharine H.
2015-01-01
External dosimetry audits give confidence in the safe and accurate delivery of radiotherapy. In the United Kingdom, such audits have been performed for almost 30 years. From the start, they included clinically relevant conditions, as well as reference machine output. Recently, national audits have tested new or complex techniques, but these methods are then used in regional audits by a peer-to-peer approach. This local approach builds up the radiotherapy community, facilitates communication, and brings synergy to medical physics. PMID:26865753
Flexible Reporting of Clinical Data
Andrews, Robert D.
1987-01-01
Two prototype methods have been developed to aid in the presentation of relevant clinical data: 1) an integrated report that displays results from a patient's computer-stored data and also allows manual entry of data, and 2) a graph program that plots results of multiple kinds of tests. These reports provide a flexible means of displaying data to help evaluate patient treatment. The two methods also explore ways of integrating the display of data from multiple components of the Veterans Administration's (VA) Decentralized Hospital Computer Program (DHCP) database.
Diagnostic value of clinical tests for degenerative rotator cuff disease in medical practice.
Lasbleiz, S; Quintero, N; Ea, K; Petrover, D; Aout, M; Laredo, J D; Vicaut, E; Bardin, T; Orcel, P; Beaudreuil, J
2014-06-01
To assess the diagnostic value of clinical tests for degenerative rotator cuff disease (DRCD) in medical practice. Patients with DRCD were prospectively included. Eleven clinical tests of the rotator cuff were performed. One radiologist performed ultrasonography (US) of the shoulder. Results of US were expressed as normal tendon, tendinopathy or full-thickness tear (the reference). For each clinical test and each US criterion, sensitivity, specificity, negative predictive value, positive predictive value, accuracy, negative likelihood ratio (NLR) and positive likelihood ratio (PLR) were calculated. Clinical relevance was defined as PLR ≥2 and NLR ≤0.5. For 35 patients (39 shoulders), Jobe (PLR: 2.08, NLR: 0.31) and full-can (2, 0.5) test results were relevant for diagnosis of supraspinatus tears, and resisted lateral rotation (2.42, 0.5) for infraspinatus tears, with weakness as the response criterion. The lift-off test (8.50, 0.27) was relevant for subscapularis tears with lag sign as the response criterion. Yergason's test (3.7, 0.41) was relevant for tendinopathy of the long head of the biceps with pain as the response criterion. There was no relevant clinical test for diagnosis of tendinopathy of the supraspinatus, infraspinatus or subscapularis. Five of 11 clinical tests were relevant for degenerative rotator cuff disease. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Liu, Donglai; Zhou, Haiwei; Shi, Dawei; Shen, Shu; Tian, Yabin; Wang, Lin; Lou, Jiatao; Cong, Rong; Lu, Juan; Zhang, Henghui; Zhao, Meiru; Zhu, Shida; Cao, Zhisheng; Jin, Ruilin; Wang, Yin; Zhang, Xiaoni; Yang, Guohua; Wang, Youchun; Zhang, Chuntao
2018-01-01
Background: Widespread clinical implementation of next-generation sequencing (NGS)-based cancer in vitro diagnostic tests (IVDs) has highlighted the urgency of establishing reference materials that provide full control of the process from nucleic acid extraction to test report generation. Formalin-fixed, paraffin-embedded (FFPE) tissue and blood plasma containing circulating tumor DNA (ctDNA) are the sample types most commonly used for clinical detection of onco-relevant mutations. Methods: We developed multiplex FFPE and plasma reference materials covering three clinically onco-relevant mutations within the epidermal growth factor receptor (EGFR) gene at serial allelic frequencies. All reference materials were quantified and validated via droplet digital polymerase chain reaction (ddPCR), and were then distributed to eight domestic manufacturers for a collaborative evaluation of the performance of several domestic NGS-based cancer IVDs covering four major NGS platforms (NextSeq, HiSeq, Ion Proton and BGISEQ). Results: All expected mutations except one at extremely low allelic frequency were detected, despite some differences in the coefficient of variation (CV), which increased as allelic frequency decreased (CVs ranging from 18% to 106%). Notably, the CV value also appeared to correlate with the particular mutation: the repeatability of determination ranked L858R>T790M>19del. Conclusions: The results indicate that our reference materials will be pivotal for quality control of NGS-based cancer IVDs and will guide the further development of reference materials covering more onco-relevant mutations.
Ducrot, Virginie; Teixeira-Alves, Mickaël; Lopes, Christelle; Delignette-Muller, Marie-Laure; Charles, Sandrine; Lagadic, Laurent
2010-10-01
Long-term effects of endocrine disruptors (EDs) on aquatic invertebrates remain difficult to assess, mainly due to the lack of appropriate sensitive toxicity test methods and relevant data analysis procedures. This study aimed at identifying windows of sensitivity to EDs along the life-cycle of the freshwater snail Lymnaea stagnalis, a candidate species for the development of forthcoming test guidelines. Juveniles, sub-adults, young adults and adults were exposed for 21 days to the fungicide vinclozolin (VZ). Survival, growth, onset of reproduction, fertility and fecundity were monitored weekly. Data were analyzed using standard statistical analysis procedures and mixed-effect models. No deleterious effect on survival and growth occurred in snails exposed to VZ at environmentally relevant concentrations. A significant impairment of the male function occurred in young adults, leading to infertility at concentrations exceeding 0.025 μg/L. Furthermore, fecundity was impaired in adults exposed to concentrations exceeding 25 μg/L. Biological responses depended on VZ concentration, exposure duration and on their interaction, leading to complex response patterns. The use of a standard statistical approach to analyze those data led to underestimation of VZ effects on reproduction, whereas effects could reliably be analyzed by mixed-effect models. L. stagnalis may be among the most sensitive invertebrate species to VZ, a 21-day reproduction test allowing the detection of deleterious effects at environmentally relevant concentrations of the fungicide. These results thus reinforce the relevance of L. stagnalis as a good candidate species for the development of guidelines devoted to the risk assessment of EDs.
Clinical relevance is associated with allergen-specific wheal size in skin prick testing
Haahtela, T; Burbach, G J; Bachert, C; Bindslev-Jensen, C; Bonini, S; Bousquet, J; Bousquet-Rouanet, L; Bousquet, P J; Bresciani, M; Bruno, A; Canonica, G W; Darsow, U; Demoly, P; Durham, S R; Fokkens, W J; Giavi, S; Gjomarkaj, M; Gramiccioni, C; Kowalski, M L; Losonczy, G; Orosz, M; Papadopoulos, N G; Stingl, G; Todo-Bom, A; von Mutius, E; Köhli, A; Wöhrl, S; Järvenpää, S; Kautiainen, H; Petman, L; Selroos, O; Zuberbier, T; Heinzerling, L M
2014-01-01
Background Within a large prospective study, the Global Asthma and Allergy European Network (GA2LEN) has collected skin prick test (SPT) data throughout Europe to make recommendations for SPT in clinical settings. Objective To improve clinical interpretation of SPT results for inhalant allergens by providing quantitative decision points. Methods The GA2LEN SPT study with 3068 valid data sets was used to investigate the relationship between SPT results and patient-reported clinical relevance for each of the 18 inhalant allergens as well as SPT wheal size and physician-diagnosed allergy (rhinitis, asthma, atopic dermatitis, food allergy). The effects of age, gender, and geographical area on SPT results were assessed. For each allergen, the wheal size in mm with an 80% positive predictive value (PPV) for being clinically relevant was calculated. Results Depending on the allergen, from 40% (Blattella) to 87–89% (grass, mites) of the positive SPT reactions (wheal size ≥ 3 mm) were associated with patient-reported clinical symptoms when exposed to the respective allergen. The risk of allergic symptoms increased significantly with larger wheal sizes for 17 of the 18 allergens tested. Children with positive SPT reactions had a smaller risk of sensitizations being clinically relevant compared with adults. The 80% PPV varied from 3 to 10 mm depending on the allergen. Conclusion These 'reading keys' for 18 inhalant allergens can help interpret SPT results with respect to their clinical significance. An SPT form with the standard allergens, including mm decision points for each allergen, is offered for clinical use. PMID:24283409
Family Influences on Mania-Relevant Cognitions and Beliefs: A Cognitive Model of Mania and Reward
Chen, Stephen H.; Johnson, Sheri L.
2012-01-01
Objective The present study proposed and tested a cognitive model of mania and reward. Method Undergraduates (N = 284; 68.4% female; mean age = 20.99 years, standard deviation ± 3.37) completed measures of family goal setting and achievement values, personal reward-related beliefs, cognitive symptoms of mania, and risk for mania. Results Correlational analyses and structural equation modeling supported two distinct, but related facets of mania-relevant cognition: stably present reward-related beliefs and state-dependent cognitive symptoms in response to success and positive emotion. Results also indicated that family emphasis on achievement and highly ambitious extrinsic goals were associated with these mania-relevant cognitions. Finally, controlling for other factors, cognitive symptoms in response to success and positive emotion were uniquely associated with lifetime propensity towards mania symptoms. Conclusions Results support the merit of distinguishing between facets of mania-relevant cognition and the importance of the family in shaping both aspects of cognition. PMID:22623269
Levert, Marie-Josée; Lefebvre, Hélène; Gélinas, Isabelle; McKerall, Michelle; Roy, Odette; Proulx, Michelle
2016-06-01
This pilot project aimed to test the feasibility and relevance of the direct observation method for collecting data on the barriers and facilitators seniors with TBI encounter when attending public places. The study is based on the WHO Age-Friendly Cities (VADA) conceptual framework, which focuses on the development of built and technological environments that are friendly to seniors. Three elderly people participated in the study, recruited from an ongoing project, the Citizen Intervention in Community Living (APIC), in the presence of their personalized attendant. The study shows the feasibility of the method in terms of its acceptability and the resources mobilized. It shows the method's relevance for accessing additional data that would have been difficult to obtain using other methods (e.g., semi-structured interviews), such as the identification of the strategies used by the participants to address the obstacles encountered (avoidance, travel planning, use of the physical and preventive support of the personalized attendant).
Legaria, María C; Bianchini, Hebe M; Castello, Liliana; Carloni, Graciela; Di Martino, Ana; Fernández Canigia, Liliana; Litterio, Mirta; Rollet, Raquel; Rossetti, Adelaida; Predari, Silvia C
2011-01-01
Over time, anaerobic bacteria have shown good susceptibility to clinically useful antianaerobic agents. Nevertheless, the antimicrobial resistance profile of most of the anaerobic species related to severe infections in humans has changed in recent years, and different kinds of resistance to the most active agents have emerged, making their effectiveness less predictable. With the aim of addressing this problem and facilitating the detection of anaerobic antimicrobial resistance, the Anaerobic Subcommittee of the Asociación Argentina de Microbiología developed the First Argentine consensus guidelines for in vitro antimicrobial susceptibility testing of clinically relevant anaerobic bacteria in humans. This document resulted from harmonizing the Clinical and Laboratory Standards Institute recommendations, the international literature, and the work and experience of the Subcommittee. The consensus document provides a brief taxonomy review, explains why and when anaerobic antimicrobial susceptibility tests should be conducted, and indicates which antimicrobial agents can be used according to the species involved. Recommendations on how to perform, read and interpret in vitro anaerobic antimicrobial susceptibility tests with each method are presented. Finally, the antibiotic susceptibility profile, the classification of antibiotics according to their in vitro activities, the natural and acquired mechanisms of resistance, the emerging resistance and the regional antibiotic resistance profile of clinically relevant anaerobic species are shown.
Hofmann, Bjørn
2017-04-01
To develop a method for exposing and elucidating ethical issues with human cognitive enhancement (HCE). The intended use of the method is to support and facilitate open and transparent deliberation and decision making with respect to this emerging technology, which has potentially great formative implications for individuals and society. Literature search to identify relevant approaches. Conventional content analysis of the identified papers and methods in order to assess their suitability for assessing HCE according to four selection criteria. Method development. Amendment after pilot testing on smart-glasses. Based on three existing approaches in health technology assessment, a method for exposing and elucidating ethical issues in the assessment of HCE technologies was developed. Based on a pilot test on smart-glasses, the method was amended. The method consists of six steps and a guiding list of 43 questions. A method for exposing and elucidating ethical issues in the assessment of HCE was developed. The method provides the groundwork for context-specific ethical assessment and analysis. Widespread use, amendments, and further developments of the method are encouraged.
Natural rubber latex allergy and asthma.
Tarlo, S M
2001-01-01
Allergic responses to natural rubber latex (NRL) continue to be reported. In adults, the major exposure is in the occupational setting, especially in relation to NRL glove use by health care workers. Issues addressed over the past year include improving diagnostic methods for NRL allergy, characterization of NRL allergens relevant to various exposure groups, and evaluation of strategies for prevention and early detection of NRL allergy. Assessment of in vitro tests shows good intertest correlation but lower sensitivity compared with skin test responses. NRL allergens have been further characterized, as reported in the past year. Development of recombinant Hev b 3, a major NRL allergen relevant to children with spina bifida, enhances the likelihood of improved diagnostic reagents. Preliminary reports of primary preventive strategies suggest that avoidance of high-protein, powdered gloves in health care facilities can be cost-effective and is associated with a decline in sensitized workers.
The NEWMEDS rodent touchscreen test battery for cognition relevant to schizophrenia.
Hvoslef-Eide, M; Mar, A C; Nilsson, S R O; Alsiö, J; Heath, C J; Saksida, L M; Robbins, T W; Bussey, T J
2015-11-01
The NEWMEDS initiative (Novel Methods leading to New Medications in Depression and Schizophrenia, http://www.newmeds-europe.com ) is a large industrial-academic collaborative project aimed at developing new methods for drug discovery for schizophrenia. As part of this project, Work package 2 (WP02) has developed and validated a comprehensive battery of novel touchscreen tasks for rats and mice for assessing cognitive domains relevant to schizophrenia. This article provides a review of the touchscreen battery of tasks for rats and mice for assessing cognitive domains relevant to schizophrenia and highlights validation data presented in several primary articles in this issue and elsewhere. The battery consists of the five-choice serial reaction time task and a novel rodent continuous performance task for measuring attention, a three-stimulus visual reversal and the serial visual reversal task for measuring cognitive flexibility, novel non-matching to sample-based tasks for measuring spatial working memory and paired-associates learning for measuring long-term memory. The rodent (i.e. both rats and mice) touchscreen operant chamber and battery has high translational value across species due to its emphasis on construct as well as face validity. In addition, it offers cognitive profiling of models of diseases with cognitive symptoms (not limited to schizophrenia) through a battery approach, whereby multiple cognitive constructs can be measured using the same apparatus, enabling comparisons of performance across tasks. This battery of tests constitutes an extensive tool package for both model characterisation and pre-clinical drug discovery.
Schreiber, Benjamin; Fischer, Jonas; Schiwy, Sabrina; Hollert, Henner; Schulz, Ralf
2018-04-01
The effects of sediment contamination on fish are of high significance for the protection of ecosystems, human health and the economy. However, standardized sediment bioassays with benthic fish species, which mimic the bioavailability of potentially toxic compounds and comply with the requirements of alternative test methods, are still scarce. In order to address this issue, embryos of the benthic European weatherfish (Misgurnus fossilis) were exposed to freeze-dried sediment (via sediment contact assays (SCA)) and sediment extracts (via acute fish embryo toxicity tests) varying in contamination level. The extracts were gained by accelerated solvent extraction with (i) acetone and (ii) pressurized hot water (PHWE) and subsequently analyzed for polycyclic aromatic hydrocarbons, polychlorinated biphenyls and polychlorinated dibenzodioxins and dibenzofurans. Furthermore, embryos of the predominantly used zebrafish (Danio rerio) were exposed to extracts from the two most contaminated sediments. Results indicated sufficient robustness of weatherfish embryos towards varying test conditions and sensitivity towards relevant sediment-bound compounds. Furthermore, effect concentrations (96h-LC50) derived from weatherfish embryos exposed to sediment extracts agreed with both the measured gradient of sediment contamination and previously published results. In comparison to zebrafish, weatherfish embryos showed higher sensitivity to the bioavailability-mimicking extracts from PHWE but lower sensitivity to extracts gained with acetone. SCAs conducted with weatherfish embryos revealed practical difficulties that prevented an implementation with three of the four sediments tested. In summary, an application of weatherfish embryos, using bioassays with sediment extracts from PHWE, might increase the ecological relevance of sediment toxicity testing: it allows investigations using benthic and temperate fish species considering both bioavailable contaminants and animal welfare concerns. Copyright © 2017 Elsevier B.V. All rights reserved.
Histamine 50-Skin-Prick Test: A Tool to Diagnose Histamine Intolerance
Kofler, Lukas; Ulmer, Hanno; Kofler, Heinz
2011-01-01
Background. Histamine intolerance results from an imbalance between histamine intake and degradation. In healthy persons, dietary histamine can be sufficiently metabolized by amine oxidases, whereas persons with low amine oxidase activity are at risk of histamine toxicity. Diamine oxidase (DAO) is the key enzyme in degradation. Histamine elicits a wide range of effects. Histamine intolerance displays symptoms such as rhinitis, headache, gastrointestinal symptoms, palpitations, urticaria and pruritus. Objective. Diagnosis of histamine intolerance has until now been based on case history; neither a validated questionnaire nor a routine test is available. The aim of this trial was to evaluate the usefulness of a prick test for the diagnosis of histamine intolerance. Methods. Prick testing with 1% histamine solution and wheal-size measurement to assess the relation between the wheal in the prick test, read after 20 to 50 minutes, as a sign of slowed histamine degradation, and the history and symptoms of histamine intolerance. Results. Besides a pretest with 17 patients with histamine intolerance (HIT), we investigated 156 persons (81 with HIT, 75 controls): 64 of 81 persons with HIT, but only 14 of 75 persons from the control group, presented with a histamine wheal ≥3 mm after 50 minutes (P < .0001). Conclusion and Clinical Relevance. The histamine 50-skin-prick test offers a simple diagnostic tool with clinical relevance. PMID:23724226
Liu, Guo-Ping; Yan, Jian-Jun; Wang, Yi-Qin; Fu, Jing-Jing; Xu, Zhao-Xia; Guo, Rui; Qian, Peng
2012-01-01
Background. In Traditional Chinese Medicine (TCM), most algorithms for syndrome diagnosis focus on only one syndrome, that is, single-label learning. However, in clinical practice, patients may simultaneously have more than one syndrome, each with its own symptoms (signs). Methods. We employed a multilabel learning algorithm using the relevant feature for each label (REAL) to construct a syndrome diagnostic model for chronic gastritis (CG) in TCM. REAL combines feature selection methods to select the significant symptoms (signs) of CG. The method was tested on 919 patients using the standard scale. Results. The highest prediction accuracy was achieved when 20 features were selected. The features selected with information gain were more consistent with TCM theory. The lowest average accuracy was 54% using multilabel neural networks (BP-MLL), whereas the highest was 82% using REAL for constructing the diagnostic model. For coverage, hamming loss, and ranking loss, the values obtained using the REAL algorithm were the lowest at 0.160, 0.142, and 0.177, respectively. Conclusion. REAL extracts the relevant symptoms (signs) for each syndrome and improves its recognition accuracy. Moreover, the studies will provide a reference for constructing syndrome diagnostic models and guide clinical practice. PMID:22719781
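The multilabel metrics reported in this abstract (coverage, hamming loss, ranking loss) are standard and straightforward to reproduce; the sketch below computes them with scikit-learn. The k-nearest-neighbors classifier, the data shapes, and the four-label setup are illustrative assumptions, not the REAL algorithm or the study's patient data.

```python
# Minimal sketch: multilabel classification and the three metrics above.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import hamming_loss, coverage_error, label_ranking_loss

rng = np.random.default_rng(0)
X = rng.normal(size=(919, 20))          # 919 patients, 20 selected features
Y = rng.integers(0, 2, size=(919, 4))   # 4 hypothetical syndrome labels

X_train, X_test = X[:700], X[700:]
Y_train, Y_test = Y[:700], Y[700:]

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, Y_train)
Y_pred = clf.predict(X_test)
# predict_proba returns one array per label; stack positive-class columns
Y_score = np.column_stack([p[:, 1] for p in clf.predict_proba(X_test)])

print("hamming loss :", hamming_loss(Y_test, Y_pred))
print("coverage     :", coverage_error(Y_test, Y_score))
print("ranking loss :", label_ranking_loss(Y_test, Y_score))
```

Lower values are better for all three, which is why the abstract highlights REAL's 0.160/0.142/0.177 as the best of the compared methods.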
Catherine, Faget-Agius; Aurélie, Vincenti; Eric, Guedj; Pierre, Michel; Raphaëlle, Richieri; Marine, Alessandrini; Pascal, Auquier; Christophe, Lançon; Laurent, Boyer
2017-12-30
This study aims to define functioning levels of patients with schizophrenia by using a method of interpretable clustering based on a specific functioning scale, the Functional Remission Of General Schizophrenia (FROGS) scale, and to test their validity regarding clinical and neuroimaging characterization. In this observational study, patients with schizophrenia were classified using a hierarchical top-down method called clustering using unsupervised binary trees (CUBT). Socio-demographic, clinical, and neuroimaging SPECT perfusion data were compared between the different clusters to ensure their clinical relevance. A total of 242 patients were analyzed. A four-group functioning level structure was identified: 54 patients were classified as "minimal", 81 as "low", 64 as "moderate", and 43 as "high". The clustering shows satisfactory statistical properties, including reproducibility and discriminative ability. The four clusters consistently differentiate patients. "High" functioning level patients reported the lowest scores on the PANSS and the CDSS and the highest scores on the GAF, the MARS and the S-QoL 18. Functioning levels were significantly associated with cerebral perfusion of two relevant areas: the left inferior parietal cortex and the anterior cingulate. Our study provides relevant functioning levels in schizophrenia and may enhance the use of functioning scales. Copyright © 2017 Elsevier B.V. All rights reserved.
Prom-On, Santitham; Chanthaphan, Atthawut; Chan, Jonathan Hoyin; Meechai, Asawin
2011-02-01
Relationships among gene expression levels may be associated with the mechanisms of disease. While identifying a direct association, such as a difference in expression levels between case and control groups, links genes to disease mechanisms, uncovering an indirect association in the form of a network structure may help reveal the underlying functional module associated with the disease under scrutiny. This paper presents a method to improve the biological relevance of functional module identification from gene expression microarray data by enhancing the structure of a weighted gene co-expression network using a minimum spanning tree. The enhanced network, called a backbone network, contains only the essential structural information needed to represent the gene co-expression network. The entire backbone network is decoupled into a number of coherent sub-networks, and the functional modules are then reconstructed from these sub-networks to ensure minimum redundancy. The method was tested with a simulated gene expression dataset and case-control expression datasets from autism spectrum disorder and colorectal cancer studies. The results indicate that the proposed method can accurately identify clusters in the simulated dataset, and the functional modules of the backbone network are more biologically relevant than those obtained from the original approach.
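The backbone idea can be sketched in a few lines, under the assumption that the co-expression network is a dense correlation-based weight matrix: keeping a maximum-weight spanning tree is equivalent to taking the minimum spanning tree of 1 − |correlation|. This is an illustration of the general technique, not the authors' code or their decoupling step.

```python
# Hedged sketch: MST backbone of a weighted gene co-expression network.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(1)
expr = rng.normal(size=(50, 200))     # 50 genes x 200 samples (synthetic)

corr = np.corrcoef(expr)              # gene-gene co-expression weights
dist = 1.0 - np.abs(corr)             # strong co-expression -> short edge
np.fill_diagonal(dist, 0.0)

mst = minimum_spanning_tree(dist)     # sparse matrix with n-1 backbone edges
rows, cols = mst.nonzero()
print(f"backbone keeps {len(rows)} of {50 * 49 // 2} possible edges")
```

The drastic edge reduction (n − 1 edges out of n(n − 1)/2) is what makes the subsequent sub-network decomposition tractable while preserving the strongest co-expression relationships.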
Williams, Loretta A.; Agarwal, Sonika; Bodurka, Diane C.; Saleeba, Angele K.; Sun, Charlotte C.; Cleeland, Charles S.
2013-01-01
Context Experts in patient-reported outcome (PRO) measurement emphasize the importance of including patient input in the development of PRO measures. Although best methods for acquiring this input are not yet identified, patient input early in instrument development ensures that instrument content captures information most important and relevant to patients in understandable terms. Objectives The M. D. Anderson Symptom Inventory (MDASI) is a reliable, valid PRO instrument for assessing cancer symptom burden. We report a qualitative (open-ended, in-depth) interviewing method that can be used to incorporate patient input into PRO symptom measure development, with our experience in constructing a MDASI module for ovarian cancer (MDASI-OC) as a model. Methods Fourteen patients with ovarian cancer (OC) described symptoms experienced at the time of the study, at diagnosis, and during prior treatments. Researchers and clinicians used content analysis of interview transcripts to identify symptoms in patient language. Symptoms were ranked on the basis of the number of patients mentioning them and by clinician assessment of relevance. Results Forty-two symptoms were mentioned. Eight OC-specific items will be added to the 13 core symptom items and six interference items of the MDASI in a test version of the MDASI-OC based on the number of patients mentioning them and clinician assessment of importance. The test version is undergoing psychometric evaluation. Conclusion The qualitative interviewing process, used to develop the test MDASI-OC, systematically captures common symptoms important to patients with ovarian cancer. This methodology incorporates the patient experience recommended by experts in PRO instrument development. PMID:23615044
Wang, Juan; Smith, Christopher E.; Sankar, Jagannathan; Yun, Yeoheung; Huang, Nan
2015-01-01
Absorbable metals have been widely tested in various in vitro settings using cells to evaluate their possible suitability as implant materials. However, there exists a gap between in vivo and in vitro test results for absorbable materials. Many traditional in vitro assessments for permanent materials are no longer applicable to absorbable metallic implants. A key step is to identify and test the relevant microenvironment and parameters in test systems, which should be adapted according to the specific application. New test methods are necessary to reduce the difference between in vivo and in vitro test results and provide more accurate information to better understand absorbable metallic implants. In this investigative review, we strive to summarize the latest test methods for characterizing the bioabsorption/biodegradation behavior of absorbable magnesium-based stents in mimicked vascular environments. This article also comprehensively discusses the direction of test standardization for absorbable stents, to paint a more accurate picture of the in vivo condition around implants and to determine the most important parameters and their dynamic interactions. PMID:26816631
Larson, Nicholas B; McDonnell, Shannon; Cannon Albright, Lisa; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan E; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham G; MacInnis, Robert J; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catalona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J
2017-05-01
Next-generation sequencing technologies have afforded unprecedented characterization of low-frequency and rare genetic variation. Due to low power for single-variant testing, aggregative methods are commonly used to combine observed rare variation within a single gene. Causal variation may also aggregate across multiple genes within relevant biomolecular pathways. Kernel-machine regression and adaptive testing methods for aggregative rare-variant association testing have been demonstrated to be powerful approaches for pathway-level analysis, although these methods tend to be computationally intensive at high-variant dimensionality and require access to complete data. An additional analytical issue in scans of large pathway definition sets is multiple testing correction. Gene set definitions may exhibit substantial genic overlap, and the impact of the resultant correlation in test statistics on Type I error rate control for large agnostic gene set scans has not been fully explored. Herein, we first outline a statistical strategy for aggregative rare-variant analysis using component gene-level linear kernel score test summary statistics as well as derive simple estimators of the effective number of tests for family-wise error rate control. We then conduct extensive simulation studies to characterize the behavior of our approach relative to direct application of kernel and adaptive methods under a variety of conditions. We also apply our method to two case-control studies, respectively, evaluating rare variation in hereditary prostate cancer and schizophrenia. Finally, we provide open-source R code for public use to facilitate easy application of our methods to existing rare-variant analysis results. © 2017 WILEY PERIODICALS, INC.
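The paper derives its own estimators of the effective number of tests; as background only, the sketch below illustrates the general eigenvalue-based idea with one well-known published estimator (Nyholt, 2004) applied to a correlation matrix of test statistics. The simulated equicorrelated statistics are an illustrative assumption, not the study's data or its specific estimator.

```python
# Hedged sketch: eigenvalue-based effective number of tests (Nyholt 2004).
import numpy as np

def m_eff_nyholt(stat_corr: np.ndarray) -> float:
    """Meff = 1 + (M - 1) * (1 - Var(eigenvalues) / M)."""
    m = stat_corr.shape[0]
    eigvals = np.linalg.eigvalsh(stat_corr)
    return 1.0 + (m - 1.0) * (1.0 - np.var(eigvals) / m)

rng = np.random.default_rng(2)
cov = np.full((5, 5), 0.3) + 0.7 * np.eye(5)    # 5 correlated gene-set statistics
z = rng.multivariate_normal(np.zeros(5), cov, size=1000)

meff = m_eff_nyholt(np.corrcoef(z, rowvar=False))
print(f"Meff = {meff:.2f}, adjusted alpha = {0.05 / meff:.4f}")
```

When gene sets overlap heavily the statistics correlate, Meff falls well below the nominal number of tests, and a Bonferroni correction by Meff instead of M avoids being needlessly conservative, which is the practical point the abstract is making.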
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiefel, Denis, E-mail: Denis.Kiefel@airbus.com, E-mail: Rainer.Stoessel@airbus.com; Stoessel, Rainer, E-mail: Denis.Kiefel@airbus.com, E-mail: Rainer.Stoessel@airbus.com; Grosse, Christian, E-mail: Grosse@tum.de
2015-03-31
In recent years, an increasing number of safety-relevant structures have been designed and manufactured from carbon fiber reinforced polymers (CFRP) in order to reduce the weight of airplanes by taking advantage of their specific strength. Non-destructive testing (NDT) methods for quantitative defect analysis of damages are liquid- or air-coupled ultrasonic testing (UT), phased array ultrasonic techniques, and active thermography (IR). The advantage of these testing methods is their applicability to large areas. However, their quantitative information is often limited to impact localization and size. In addition to these techniques, Airbus Group Innovations operates a micro x-ray computed tomography (μ-XCT) system, which was developed for CFRP characterization. It is an open system which allows different kinds of acquisition, reconstruction, and data evaluation. One main advantage of this μ-XCT system is its high resolution together with 3-dimensional analysis and visualization opportunities, which makes it possible to gain important quantitative information for composite part design and stress analysis. Within this study, different NDT methods are compared on CFRP samples with specified artificial impact damages. The results can be used to select the most suitable NDT method for specific application cases. Furthermore, novel evaluation and visualization methods for impact analyses are developed and presented.
Geiser, Christian; Burns, G. Leonard; Servera, Mateu
2014-01-01
Models of confirmatory factor analysis (CFA) are frequently applied to examine the convergent validity of scores obtained from multiple raters or methods in so-called multitrait-multimethod (MTMM) investigations. We show that interesting incremental information about method effects can be gained from including mean structures and tests of measurement invariance (MI) across methods in MTMM models. We present a modeling framework for testing MI in the first step of a CFA-MTMM analysis. We also discuss the relevance of MI in the context of four more complex CFA-MTMM models with method factors. We focus on three recently developed multiple-indicator CFA-MTMM models for structurally different methods [the correlated traits-correlated (methods – 1), latent difference, and latent means models; Geiser et al., 2014a; Pohl and Steyer, 2010; Pohl et al., 2008] and one model for interchangeable methods (Eid et al., 2008). We demonstrate that some of these models require or imply MI by definition for a proper interpretation of trait or method factors, whereas others do not, and explain why MI may or may not be required in each model. We show that in the model for interchangeable methods, testing for MI is critical for determining whether methods can truly be seen as interchangeable. We illustrate the theoretical issues in an empirical application to an MTMM study of attention deficit and hyperactivity disorder (ADHD) with mother, father, and teacher ratings as methods. PMID:25400603
Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean
2017-12-04
Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint-type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many pharmaceutical applications, from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole cell screens, individual proteins, physicochemical properties, as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient and others. Based on ranked normalized scores for the metrics or data sets, Deep Neural Networks (DNN) ranked higher than SVM, which in turn ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar-type plots indicates when models are inferior or perhaps overtrained. These results also suggest the need to assess deep learning further using multiple metrics, with much larger scale comparisons and prospective testing, as well as assessment of different fingerprints and DNN architectures beyond those used here.
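The metric panel named in this abstract is easy to assemble with scikit-learn; the sketch below computes it for a single classifier. The SVC model and the synthetic data are illustrative assumptions standing in for the study's FCFP6-fingerprint data sets and its DNN/SVM comparison.

```python
# Hedged sketch: AUC, F1, Cohen's kappa, and MCC for one binary classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import (roc_auc_score, f1_score, cohen_kappa_score,
                             matthews_corrcoef)

X, y = make_classification(n_samples=2000, n_features=64, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)                 # hard labels for F1, kappa, MCC
score = model.predict_proba(X_te)[:, 1]    # probabilities for AUC

print("AUC  :", roc_auc_score(y_te, score))
print("F1   :", f1_score(y_te, pred))
print("kappa:", cohen_kappa_score(y_te, pred))
print("MCC  :", matthews_corrcoef(y_te, pred))
```

Reporting several metrics at once, as the study does, guards against any one metric (e.g., AUC on an imbalanced data set) flattering a model that a threshold-based metric such as MCC would penalize.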
Adaptive Stress Testing of Airborne Collision Avoidance Systems
NASA Technical Reports Server (NTRS)
Lee, Ritchie; Kochenderfer, Mykel J.; Mengshoel, Ole J.; Brat, Guillaume P.; Owen, Michael P.
2015-01-01
This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.
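A minimal sketch of the search loop under the stated black-box assumptions: a seedable simulator step that reports the log-probability of the sampled transition, an event check, and a reset control. Plain Monte Carlo search over random seeds stands in for MCTS here purely to keep the illustration short; all function names are hypothetical.

```python
# Hedged sketch of the stress-testing loop (Monte Carlo stand-in for MCTS).
import math
import random

def rollout(simulate_step, is_event, horizon, rng):
    """Drive the simulator via seeds; accumulate transition log-likelihood."""
    seeds, logprob = [], 0.0
    for _ in range(horizon):
        seed = rng.getrandbits(32)       # controls the stochastic disturbance
        seeds.append(seed)
        logprob += simulate_step(seed)   # returns log p of sampled transition
        if is_event():                   # e.g., near mid-air collision
            return seeds, logprob, True
    return seeds, logprob, False

def stress_test(simulate_step, reset, is_event, horizon=50, n_rollouts=1000):
    """Return the seed sequence of the most likely failure trajectory found."""
    best, best_score = None, -math.inf
    for i in range(n_rollouts):
        reset()
        seeds, logprob, found = rollout(simulate_step, is_event, horizon,
                                        random.Random(i))
        # Reward likely trajectories; large bonus when the event occurs.
        score = logprob + (1e6 if found else 0.0)
        if score > best_score:
            best, best_score = seeds, score
    return best, best_score
```

Because only seeds are stored, no access to hidden simulator state is needed, and replaying the returned seed sequence reproduces the failure trajectory deterministically, which matches the black-box requirements the abstract emphasizes.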
Seiler, CM; Fröhlich, BE; Veit, JA; Gazyakan, E; Wente, MN; Wollermann, C; Deckert, A; Witte, S; Victor, N; Buchler, MW; Knaebel, HP
2006-01-01
Background Annually, more than 90,000 surgical procedures of the thyroid gland are performed in Germany. Strategies aimed at reducing the duration of the surgical procedure are relevant to patients and the health care system, especially in the context of reducing costs. However, new techniques for quick and safe hemostasis have to be tested in clinically relevant randomized controlled trials before a general recommendation can be given. The current standard for occlusion of blood vessels in thyroid surgery is the ligature. Vascular clips may be a safe alternative but have not been investigated in a large RCT. Methods/design CLIVIT (Clips versus Ligatures in Thyroid Surgery) is an investigator-initiated, multicenter, patient-blinded, two-group parallel randomized controlled trial designed by the Study Center of the German Surgical Society. Patients scheduled for elective resection of at least two thirds of the gland for benign thyroid disease are eligible for participation. After surgical exploration, patients are randomized intraoperatively into either the conventional ligature group or the clip group. The primary objective is to test for a relevant reduction in operating time (at least 15 min) when using the clip technique. Since April 2004, 121 of the 420 required patients have been randomized in five centers. Discussion As in all trials, the different forms of bias have to be considered; in a surgical trial such as this one, surgical expertise plays a key role and will be documented and analyzed separately. This is the first randomized controlled multicenter trial to compare different vessel occlusion techniques in thyroid surgery with adequate power, and it provides detailed information about the design and framework. If significant, the results might be generalized and may change current surgical practice. PMID:16948853
Le Boedec, Kevin
2016-12-01
According to international guidelines, parametric methods must be chosen for reference interval (RI) construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample sizes. The purpose of the study was to evaluate the performance of normality tests in properly identifying samples extracted from a Gaussian population at small sample sizes, and to assess the consequences for RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. The Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample sizes, normality tests may lead to erroneous use of parametric methods to build RIs. Using nonparametric methods (or alternatively a Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
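The workflow discussed here is easy to illustrate: test normality with Shapiro-Wilk, then build a 95% reference interval parametrically or nonparametrically depending on the outcome. The 0.19 significance level for n = 30 follows the figure reported in the abstract; the lognormal sample and everything else in the sketch are generic assumptions, not the study's simulations.

```python
# Hedged sketch: normality-test-gated reference interval construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sample = rng.lognormal(mean=1.0, sigma=0.5, size=30)  # skewed parent population

w, p = stats.shapiro(sample)
alpha = 0.19 if len(sample) == 30 else 0.05           # size-adjusted threshold

if p >= alpha:
    # Parametric RI: mean +/- 1.96 SD under the Gaussian assumption
    lo, hi = np.mean(sample) + np.array([-1.96, 1.96]) * np.std(sample, ddof=1)
else:
    # Nonparametric RI: central 95% of the empirical distribution
    lo, hi = np.percentile(sample, [2.5, 97.5])

print(f"Shapiro-Wilk p = {p:.3f}; 95% RI = ({lo:.2f}, {hi:.2f})")
```

Running this with the conventional alpha = 0.05 instead shows the failure mode the study warns about: a skewed sample frequently passes the normality test at n = 30 and receives a parametric RI whose lower limit can even be negative for strictly positive analytes.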
The local lymph node assay and skin sensitization testing.
Kimber, Ian; Dearman, Rebecca J
2010-01-01
The mouse local lymph node assay (LLNA) is a method for the identification and characterization of skin sensitization hazards. In this context the method can be used both to identify contact allergens and to determine relative skin sensitizing potency as a basis for the derivation of effective risk assessments. The assay is based on measurement of proliferative responses by draining lymph node cells induced following topical exposure of mice to test chemicals. Such responses are known to be causally and quantitatively associated with the acquisition of skin sensitization and therefore provide a relevant marker for characterization of contact allergic potential. The LLNA has been the subject of exhaustive evaluation and validation exercises and has been assigned Organization for Economic Cooperation and Development (OECD) test guideline 429. Herein we describe the conduct and interpretation of the LLNA.
Categorizing document by fuzzy C-Means and K-nearest neighbors approach
NASA Astrophysics Data System (ADS)
Priandini, Novita; Zaman, Badrus; Purwanti, Endah
2017-08-01
The growing number of documents has made automatic categorization increasingly important. Categorizing documents is a classic Information Retrieval application, because it involves text mining. Categorization can be performed with both the Fuzzy C-Means (FCM) and the K-Nearest Neighbors (KNN) methods, and this experiment combines the two with the aim of improving categorization performance. First, FCM clusters the training documents; second, KNN categorizes each test document and the resulting category is output. In the experiment, 14 of 20 test documents were assigned to their relevant category, while 6 were not. System evaluation shows that both precision and recall are 0.7.
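The FCM-then-KNN pipeline can be sketched on toy vectors as below. The fuzzy C-means update rules are the standard ones; in the paper the inputs would be term vectors extracted from the training documents, and the two-cluster toy data here are an illustrative assumption.

```python
# Hedged sketch: cluster training documents with FCM, categorize with KNN.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Standard FCM: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)        # memberships, rows sum to 1
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))   # closer centroid -> larger membership
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

rng = np.random.default_rng(0)
train = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(4, 1, (30, 5))])
test = rng.normal(2, 2, (20, 5))

U, _ = fuzzy_c_means(train, c=2)
labels = U.argmax(axis=1)                    # harden memberships into categories
knn = KNeighborsClassifier(n_neighbors=5).fit(train, labels)
print(knn.predict(test))                     # category of each test document
```

Hardening the fuzzy memberships into crisp labels before the KNN step is one simple way to connect the two methods; keeping the soft memberships as weights would be a natural variation.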
Heart Rate Variability Dynamics for the Prognosis of Cardiovascular Risk
Ramirez-Villegas, Juan F.; Lam-Espinosa, Eric; Ramirez-Moreno, David F.; Calvo-Echeverry, Paulo C.; Agredo-Rodriguez, Wilfredo
2011-01-01
Statistical, spectral, multi-resolution and non-linear methods were applied to heart rate variability (HRV) series and linked with classification schemes for the prognosis of cardiovascular risk. A total of 90 HRV records were analyzed: 45 from healthy subjects and 45 from cardiovascular risk patients. A total of 52 features from all the analysis methods were evaluated using the standard two-sample Kolmogorov-Smirnov test (KS-test). The results of this statistical procedure provided input to multi-layer perceptron (MLP) neural networks, radial basis function (RBF) neural networks and support vector machines (SVM) for data classification. These schemes showed high performance with both training and test sets and many combinations of features (with a maximum accuracy of 96.67%). Additionally, breathing frequency emerged as a strongly relevant feature in the HRV analysis. PMID:21386966
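The screening step described here, ranking candidate features by a two-sample KS test between groups and feeding the best ones to a classifier, is sketched below. The synthetic feature values, the cutoff of 10 features, and the plain SVC are illustrative assumptions, not the study's HRV features or tuned classifiers.

```python
# Hedged sketch: KS-test feature screening followed by an SVM classifier.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.svm import SVC

rng = np.random.default_rng(4)
healthy = rng.normal(0.0, 1.0, (45, 52))   # 45 subjects x 52 HRV features
at_risk = rng.normal(0.5, 1.0, (45, 52))   # shifted synthetic risk group

pvals = np.array([ks_2samp(healthy[:, j], at_risk[:, j]).pvalue
                  for j in range(52)])
keep = np.argsort(pvals)[:10]              # 10 most discriminative features

X = np.vstack([healthy, at_risk])[:, keep]
y = np.array([0] * 45 + [1] * 45)
clf = SVC().fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The KS test is a reasonable screen here because it is distribution-free, which matters when mixing statistical, spectral, and non-linear HRV features whose distributions differ widely.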
A Critical View of Static Stretching and Its Relevance in Physical Education
ERIC Educational Resources Information Center
Parrott, James Allen; Zhu, Xihe
2013-01-01
Stretching before activity has been a customary part of most physical education classes (PE), with static stretching typically the preferred method due to its ease of implementation. Historical and implicit support for its continued use is due in part to the sit-and-reach test and flexibility as one of the components of health-related fitness.…
Beronius, Anna; Molander, Linda; Zilliacus, Johanna; Rudén, Christina; Hanberg, Annika
2018-05-28
The Science in Risk Assessment and Policy (SciRAP) web-based platform was developed to promote and facilitate structure and transparency in the evaluation of ecotoxicity and toxicity studies for hazard and risk assessment of chemicals. The platform includes sets of criteria and a colour-coding tool for evaluating the reliability and relevance of individual studies. The SciRAP method for evaluating in vivo toxicity studies was first published in 2014 and the aim of the work presented here was to evaluate and develop that method further. Toxicologists and risk assessors from different sectors and geographical areas were invited to test the SciRAP criteria and tool on a specific set of in vivo toxicity studies and to provide feedback concerning the scientific soundness and user-friendliness of the SciRAP approach. The results of this expert assessment were used to refine and improve both the evaluation criteria and the colour-coding tool. It is expected that the SciRAP web-based platform will continue to be developed and enhanced to keep up to date with the needs of end-users. Copyright © 2018 John Wiley & Sons, Ltd.
Berthels, Nele; Matthijs, Gert; Van Overwalle, Geertrui
2011-01-01
Recent reports in Europe and the United States raise concern about the potential negative impact of gene patents on the freedom to operate of diagnosticians and on the access of patients to genetic diagnostic services. Patents, historically seen as legal instruments to trigger innovation, could cause undesired side effects in the public health domain. Clear empirical evidence on the alleged hindering effect of gene patents is still scarce. We therefore developed a patent categorization method to determine which gene patents could indeed be problematic. The method is applied to patents relevant for genetic testing of spinocerebellar ataxia (SCA). The SCA test is probably the most widely used DNA test in (adult) neurology, as well as one of the most challenging due to the heterogeneity of the disease. Typically tested as a gene panel covering the five common SCA subtypes, we show that the patenting of SCA genes and testing methods and the associated licensing conditions could have far-reaching consequences for legitimate access to this gene panel. Moreover, with genetic testing becoming increasingly standardized, simply ignoring patents is unlikely to hold out indefinitely. This paper aims to differentiate among so-called 'gene patents' by singling out the truly problematic ones. In doing so, awareness is raised among all stakeholders in the genetic diagnostics field who are not necessarily familiar with the ins and outs of patenting and licensing. PMID:21811306
Solution of second order quasi-linear boundary value problems by a wavelet method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Lei; Zhou, Youhe; Wang, Jizeng, E-mail: jzwang@lzu.edu.cn
2015-03-10
A wavelet Galerkin method based on expansions in Coiflet-like scaling function bases is applied to solve second order quasi-linear boundary value problems, which represent a class of typical nonlinear differential equations. Two types of typical engineering problems are selected as test examples: one concerns nonlinear heat conduction and the other the bending of elastic beams. Numerical results are obtained by the proposed wavelet method. By comparing with relevant analytical solutions as well as solutions obtained by other methods, we find that the method shows better efficiency and accuracy than several others, and that the rate of convergence can even reach orders of 5.8.
Mingers, Daniel; Köhler, Denis; Huchzermeier, Christian; Hinrichs, Günter
2017-01-01
Does the Youth Psychopathic Traits Inventory identify one or more high-risk subgroups among young offenders? Which recommendations for possible courses of action can be derived for individual clinical or forensic cases? Method: Model-based cluster analysis (Raftery, 1995) was conducted on a sample of young offenders (N = 445, age 14–22 years, M = 18.5, SD = 1.65). The resulting model was then tested for differences between clusters on relevant context variables of psychopathy. The variables included measures of intelligence, social competence, drug use, and antisocial behavior. Results: Three clusters were found (Low Trait, Impulsive/Irresponsible, Psychopathy) that differ highly significantly in YPI scores and in the variables mentioned above. The YPI score differences Δ Low = 4.28 (Low Trait – Impulsive/Irresponsible) and Δ High = 6.86 (Impulsive/Irresponsible – Psychopathy) were determined to be thresholds between the clusters. Allocating a person to be assessed to one of the calculated clusters provides orientation for subsequent testing beyond the diagnosis of psychopathy. We conclude that the YPI is a valuable instrument for the assessment of young offenders, as it yields clinically and forensically relevant information concerning the cause and expected development of psychopathological behavior.
Identification of medically relevant Nocardia species with an abbreviated battery of tests.
Kiska, Deanna L; Hicks, Karen; Pettit, David J
2002-04-01
Identification of Nocardia to the species level is useful for predicting antimicrobial susceptibility patterns and defining the pathogenicity and geographic distribution of these organisms. We sought to develop an identification method which was accurate, timely, and employed tests which would be readily available in most clinical laboratories. We evaluated the API 20C AUX yeast identification system as well as several biochemical tests and Kirby-Bauer susceptibility patterns for the identification of 75 isolates encompassing the 8 medically relevant Nocardia species. There were few biochemical reactions that were sufficiently unique for species identification; of note, N. nova were positive for arylsulfatase, N. farcinica were positive for opacification of Middlebrook 7H11 agar, and N. brasiliensis and N. pseudobrasiliensis were the only species capable of liquefying gelatin. API 20C sugar assimilation patterns were unique for N. transvalensis, N. asteroides IV, and N. brevicatena. There was overlap among the assimilation patterns for the other species. Species-specific patterns of susceptibility to gentamicin, tobramycin, amikacin, and erythromycin were obtained for N. nova, N. farcinica, and N. brevicatena, while there was overlap among the susceptibility patterns for the other isolates. No single method could identify all Nocardia isolates to the species level; therefore, a combination of methods was necessary. An algorithm utilizing antibiotic susceptibility patterns, citrate utilization, acetamide utilization, and assimilation of inositol and adonitol accurately identified all isolates. The algorithm was expanded to include infrequent drug susceptibility patterns which have been reported in the literature but which were not seen in this study.
[Method for evaluating the competence of specialists--the validation of 360-degree-questionnaire].
Nørgaard, Kirsten; Pedersen, Juri; Ravn, Lisbeth; Albrecht-Beste, Elisabeth; Holck, Kim; Fredløv, Maj; Møller, Lars Krag
2010-04-19
Assessment of physicians' performance focuses on the quality of their work. The aim of this study was to develop a valid, usable and acceptable multisource feedback assessment tool (MFAT) for hospital consultants. Statements were produced on consultant competencies within non-medical areas such as collaboration, professionalism, communication, health promotion, academics and administration. The statements were validated by physicians and, after adjustments had been made, by non-physician professionals. In a pilot test, a group of consultants was assessed using the final collection of statements of the MFAT. They received a report with their personal results and subsequently evaluated the assessment method. In total, 66 statements were developed; after validation they were reduced and reformulated to 35. Mean scores for the relevance and comprehensibility of the statements were in the range between "very high degree" and "high degree". In the pilot test, 18 consultants were assessed by themselves, by 141 other physicians and by 125 other professionals in the hospital. About two thirds benefited greatly from the assessment report and half identified areas for personal development. About a third did not want the head of their department to know the assessment results directly; however, two thirds saw a potential value in discussing the results with the head. We developed an MFAT for consultants with relevant and understandable statements. A pilot test confirmed that most of the consultants gained from the assessment, but some did not like to share their results with their heads. For these specialists, other methods should be used.
NASA Astrophysics Data System (ADS)
Pfefer, Joshua; Agrawal, Anant
2012-03-01
In recent years there has been increasing interest in development of consensus, tissue-phantom-based approaches for assessment of biophotonic imaging systems, with the primary goal of facilitating clinical translation of novel optical technologies. Well-characterized test methods based on tissue phantoms can provide useful tools for performance assessment, thus enabling standardization and device inter-comparison during preclinical development as well as quality assurance and re-calibration in the clinical setting. In this review, we study the role of phantom-based test methods as described in consensus documents such as international standards for established imaging modalities including X-ray CT, MRI and ultrasound. Specifically, we focus on three image quality characteristics - spatial resolution, spatial measurement accuracy and image uniformity - and summarize the terminology, metrics, phantom design/construction approaches and measurement/analysis procedures used to assess these characteristics. Phantom approaches described are those in routine clinical use and tend to have simplified morphology and biologically-relevant physical parameters. Finally, we discuss the potential for applying knowledge gained from existing consensus documents in the development of standardized, phantom-based test methods for optical coherence tomography.
NASA Technical Reports Server (NTRS)
McCutcheon, David Matthew
2017-01-01
During the structural certification effort for the Space Launch System solid rocket booster nozzle, it was identified that no consistent method for addressing local negative margins of safety in non-metallic materials had been developed. Relevant areas included bond-line terminations and geometric features in the composite nozzle liners. In order to gain understanding, analog test specimens were designed that very closely mimic the conditions in the actual full scale hardware. Different locations in the nozzle were represented by different analog specimen designs. This paper describes those tests and corresponding results. Finite element analysis results for the tests are presented. Strain gage correlation of the analysis to the test results is addressed. Furthermore, finite fracture mechanics (a coupled stress and energy failure criterion) is utilized to predict the observed crack pop-in loads for the different configurations. The finite fracture mechanics predictions are found to be within a 10% error relative to the average measured pop-in load for each of four configurations. Initiation locations, arrest behaviors, and resistances to further post-arrest crack propagation are also discussed.
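As background on the coupled stress and energy criterion named above (a standard statement of finite fracture mechanics in the form due to Leguillon, not a formula taken from this paper), crack pop-in is predicted at the smallest load for which a finite crack increment Δa simultaneously satisfies a pointwise stress condition and an incremental energy condition:

```latex
\sigma(x) \ge \sigma_c \quad \text{for all } 0 \le x \le \Delta a,
\qquad
\bar{G}(\Delta a) \;=\; \frac{1}{\Delta a}\int_0^{\Delta a} G(a)\,\mathrm{d}a \;\ge\; G_c ,
```

where σ_c is the material strength and G_c the critical energy release rate; solving both conditions together yields the pop-in load and the finite increment length, which is why the method suits the bond-line terminations and geometric features where classical single-parameter criteria report local negative margins.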
NASA Technical Reports Server (NTRS)
Olson, Sandra
2011-01-01
To better evaluate the buoyant contributions to the convective cooling (or heating) inherent in normal-gravity material flammability test methods, we derive a convective heat transfer correlation that can be used to account for the forced convective stretch effects on the net radiant heat flux for both ignition delay time and burning rate. The Equivalent Low Stretch Apparatus (ELSA) uses an inverted cone heater to minimize buoyant effects while at the same time providing a forced stagnation flow on the sample, which ignites and burns as a ceiling fire. Ignition delay and burning rate data are correlated with incident heat flux and convective heat transfer and compared to results from other test methods and fuel geometries, using similarity to determine the equivalent stretch rates and thus convective cooling (or heating) rates for those geometries. With this correlation methodology, buoyant effects inherent in normal-gravity material flammability test methods can be estimated, to better apply the test results to the low stretch environments relevant to spacecraft material selection.
Velaga, Sitaram P; Djuris, Jelena; Cvijic, Sandra; Rozou, Stavroula; Russo, Paola; Colombo, Gaia; Rossi, Alessandra
2018-02-15
In vitro dissolution testing is routinely used in the development of pharmaceutical products. Whilst the dissolution testing methods are well established and standardized for oral dosage forms, i.e. tablets and capsules, there are no pharmacopoeia methods or regulatory requirements for testing the dissolution of orally inhaled powders. Despite this, a wide variety of dissolution testing methods for orally inhaled powders has been developed and their bio-relevance has been evaluated. This review provides an overview of the in vitro dissolution methodologies for dry inhalation products, with particular emphasis on dry powder inhalers, where the dissolution behavior of the respirable particles can have a role in the duration and absorption of the drug. Dissolution mechanisms of respirable particles as well as kinetic models are presented. More recent biorelevant dissolution set-ups and media for studying inhalation biopharmaceutics are also reviewed. In addition, factors affecting the interplay between dissolution and absorption of deposited particles are examined in the context of biopharmaceutical considerations for inhalation products. Copyright © 2017 Elsevier B.V. All rights reserved.
A new alternative method for testing skin irritation using a human skin model: a pilot study.
Miles, A; Berthet, A; Hopf, N B; Gilliet, M; Raffoul, W; Vernez, D; Spring, P
2014-03-01
Studies assessing skin irritation to chemicals have traditionally used laboratory animals; however, such methods are questionable regarding their relevance for humans. New in vitro methods have been validated, such as the reconstructed human epidermis (RHE) model (Episkin®, Epiderm®). Their accuracy compared with in vivo results such as the 4-h human patch test (HPT) is 76% at best (Epiderm®). There is a need to develop an in vitro method that better simulates the anatomo-pathological changes encountered in vivo. Our aim was to develop an in vitro method to determine skin irritation using viable human skin through histopathology, and to compare the results of 4 tested substances with the main in vitro methods and the in vivo animal method (Draize test). Human skin removed during surgery was dermatomed and mounted on an in vitro flow-through diffusion cell system. Ten chemicals with known non-irritant (heptyl butyrate, hexyl salicylate, butyl methacrylate, isoproturon, bentazon, DEHP and methylisothiazolinone (MI)) and irritant properties (folpet, 1-bromohexane and methylchloroisothiazolinone (MCI/MI)), a negative control (sodium chloride) and a positive control (sodium lauryl sulphate) were applied. The skin was exposed for at least 4 h. Histopathology was performed to investigate irritation signs (spongiosis, necrosis, vacuolization). We obtained 100% accuracy with the HPT model, 75% with the RHE models and 50% with the Draize test for the 4 tested substances. The coefficients of variation (CV) between our three test batches were <0.1, showing good reproducibility. Furthermore, we objectively reported histopathological irritation signs (irritation scale): strong (folpet), significant (1-bromohexane), slight (MCI/MI at 750/250 ppm) and none (isoproturon, bentazon, DEHP and MI). This new in vitro test method presented effective results for the tested chemicals. It should be further validated using a greater number of substances, and tested in different laboratories, in order to suitably evaluate reproducibility. Copyright © 2013 Elsevier Ltd. All rights reserved.
Implementation of the common phrase index method on the phrase query for information retrieval
NASA Astrophysics Data System (ADS)
Fatmawati, Triyah; Zaman, Badrus; Werdiningsih, Indah
2017-08-01
With the development of technology, finding information in news texts has become easy, because news is distributed not only in print media, such as newspapers, but also in electronic media that can be accessed using search engines. In the process of finding relevant documents with a search engine, a phrase is often used as a query. The number of words that make up a phrase query, and their positions, clearly affect the relevance of the documents produced, and consequently the accuracy of the information obtained. Based on this problem, the purpose of this research was to analyze the implementation of the common phrase index method in information retrieval. The research was conducted on English news texts and implemented in a prototype to determine the relevance level of the documents produced. The system is built with stages of pre-processing, indexing, term-weighting calculation, and cosine-similarity calculation. The system then displays the document search results in ranked order, based on cosine similarity. System testing was conducted using 100 documents and 20 queries, and the results were used for the evaluation stage. First, the relevant documents were determined using the kappa statistic; second, the system success rate was determined using precision, recall, and F-measure. In this research, the kappa statistic was 0.71, so the relevance judgements were eligible for use in the system evaluation. The calculations of precision, recall, and F-measure produced a precision of 0.37, a recall of 0.50, and an F-measure of 0.43. From these results, it can be said that the success rate of the system in producing relevant documents is low.
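As an aside for readers unfamiliar with the pipeline outlined above, the following minimal Python sketch shows the vector-space core of such a system: TF-IDF term weighting followed by cosine-similarity ranking. The documents, query, and scikit-learn pipeline are illustrative assumptions; the paper's actual system adds a common phrase index on top of this baseline.

```python
# Minimal vector-space retrieval sketch: TF-IDF weighting + cosine ranking.
# Illustrative only; the paper's system layers a common phrase index on top.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "stock markets fell on inflation news",
    "the football match ended in a draw",
    "central bank raises rates to fight inflation",
]
query = ["inflation news"]

vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(docs)   # index the collection
query_vec = vectorizer.transform(query)     # weight the query terms

scores = cosine_similarity(query_vec, doc_vecs).ravel()
for rank, i in enumerate(scores.argsort()[::-1], start=1):
    print(rank, round(float(scores[i]), 3), docs[i])
```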
Time-Frequency Learning Machines for Nonstationarity Detection Using Surrogates
NASA Astrophysics Data System (ADS)
Borgnat, Pierre; Flandrin, Patrick; Richard, Cédric; Ferrari, André; Amoud, Hassan; Honeine, Paul
2012-03-01
Time-frequency representations provide a powerful tool for nonstationary signal analysis and classification, supporting a wide range of applications [12]. As opposed to conventional Fourier analysis, these techniques reveal the evolution in time of the spectral content of signals. In Ref. [7,38], time-frequency analysis is used to test the stationarity of a signal. The proposed method consists of a comparison between global and local time-frequency features. The originality is to make use of a family of stationary surrogate signals for defining the null hypothesis of stationarity and, based upon this information, to derive statistical tests. An open question remains, however, about how to choose relevant time-frequency features. Over the last decade, a number of new pattern recognition methods based on reproducing kernels have been introduced. These learning machines have gained popularity due to their conceptual simplicity and their outstanding performance [30]. Initiated by Vapnik's support vector machines (SVM) [35], they now offer a wide class of supervised and unsupervised learning algorithms. In Ref. [17-19], the authors have shown how the most effective and innovative learning machines can be tuned to operate in the time-frequency domain. This chapter follows this line of research by taking advantage of learning machines to test and quantify stationarity. Based on one-class SVM, our approach uses the entire time-frequency representation and does not require arbitrary feature extraction. Applied to a set of surrogates, it provides the domain boundary that includes most of these stationarized signals. This allows us to test the stationarity of the signal under investigation. This chapter is organized as follows. In Section 22.2, we introduce the surrogate data method to generate stationarized signals, namely, the null hypothesis of stationarity. The concept of time-frequency learning machines is presented in Section 22.3, and applied to one-class SVM in order to derive a stationarity test in Section 22.4. The relevance of the latter is illustrated by simulation results in Section 22.5.
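As a rough illustration of the approach (not the chapter's implementation), the sketch below trains a one-class SVM on features of phase-randomized surrogates and checks whether the original signal falls outside their support. The crude spectrogram statistics stand in for the full time-frequency representations, and all signals and parameters are invented.

```python
# Surrogate-based stationarity test sketch: fit a one-class SVM on features
# of stationarized surrogates, then test whether the original signal lies
# outside their support (i.e. is flagged nonstationary).
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def surrogate(x, rng):
    """Phase-randomized surrogate: keeps the spectrum, destroys nonstationarity."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, X.size)
    phases[0] = 0.0
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

def features(x):
    """Crude time-frequency features: temporal and global spectrogram variability."""
    _, _, S = spectrogram(x, fs=1.0, nperseg=64)
    power = S.sum(axis=0)  # power per time slice
    return np.array([power.std() / power.mean(), S.std() / S.mean()])

# Test signal: amplitude-modulated noise, nonstationary by construction.
n = 2048
x = np.linspace(0.2, 1.0, n) * rng.standard_normal(n)

F = np.array([features(surrogate(x, rng)) for _ in range(200)])
clf = OneClassSVM(nu=0.05, gamma="scale").fit(F)

print("surrogate acceptance rate:", (clf.predict(F) == 1).mean())
print("original signal stationary?", clf.predict(features(x).reshape(1, -1))[0] == 1)
```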
Adaptive Set-Based Methods for Association Testing
Su, Yu-Chen; Gauderman, W. James; Berhane, Kiros; Lewinger, Juan Pablo
2017-01-01
With a typical sample size of a few thousand subjects, a single genomewide association study (GWAS) using traditional one-SNP-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. While self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly ‘adapt’ to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a LASSO based test. PMID:26707371
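The ARTP combining step can be sketched compactly. The example below is a simplified reading of the procedure, assuming per-SNP p-values are already available for the observed data and for B phenotype permutations; the truncation points and simulated data are illustrative, and the published algorithm includes refinements omitted here.

```python
# Simplified ARTP (adaptive rank truncated product) sketch.
import numpy as np

def rtp(p, k):
    """Rank truncated product statistic: sum of logs of the k smallest p-values."""
    return np.sum(np.log(np.sort(p)[:k]))

def artp_pvalue(p_obs, p_perm, ks=(1, 5, 10)):
    """p_obs: (m,) observed per-SNP p-values; p_perm: (B, m) from permutations."""
    B = p_perm.shape[0]
    stat_obs = np.array([rtp(p_obs, k) for k in ks])                       # (K,)
    stat_perm = np.array([[rtp(p_perm[b], k) for k in ks] for b in range(B)])  # (B, K)
    # Smaller product = stronger evidence; per-k p-values by permutation.
    p_k_obs = (stat_perm <= stat_obs).mean(axis=0)                         # (K,)
    p_k_perm = (stat_perm[:, None, :] <= stat_perm[None, :, :]).mean(axis=0)  # (B, K)
    # Adaptive step: best truncation point, recalibrated against permutations.
    return (p_k_perm.min(axis=1) <= p_k_obs.min()).mean()

rng = np.random.default_rng(1)
m, B = 50, 500
p_perm = rng.uniform(size=(B, m))   # per-SNP p-values under permuted phenotypes
p_obs = rng.uniform(size=m)
p_obs[:3] /= 200.0                  # a few truly associated SNPs in the set
print("ARTP p-value:", artp_pvalue(p_obs, p_perm))
```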
Assessment of Laminar, Convective Aeroheating Prediction Uncertainties for Mars Entry Vehicles
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Prabhu, Dinesh K.
2011-01-01
An assessment of computational uncertainties is presented for numerical methods used by NASA to predict laminar, convective aeroheating environments for Mars entry vehicles. A survey was conducted of existing experimental heat-transfer and shock-shape data for high enthalpy, reacting-gas CO2 flows and five relevant test series were selected for comparison to predictions. Solutions were generated at the experimental test conditions using NASA state-of-the-art computational tools and compared to these data. The comparisons were evaluated to establish predictive uncertainties as a function of total enthalpy and to provide guidance for future experimental testing requirements to help lower these uncertainties.
Assessment of Laminar, Convective Aeroheating Prediction Uncertainties for Mars-Entry Vehicles
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Prabhu, Dinesh K.
2013-01-01
An assessment of computational uncertainties is presented for numerical methods used by NASA to predict laminar, convective aeroheating environments for Mars-entry vehicles. A survey was conducted of existing experimental heat transfer and shock-shape data for high-enthalpy reacting-gas CO2 flows, and five relevant test series were selected for comparison with predictions. Solutions were generated at the experimental test conditions using NASA state-of-the-art computational tools and compared with these data. The comparisons were evaluated to establish predictive uncertainties as a function of total enthalpy and to provide guidance for future experimental testing requirements to help lower these uncertainties.
Novel fiber optic immunosensor instrument
NASA Astrophysics Data System (ADS)
Wang, Zhiyu; Huang, Wenling; Tang, Lei; Zhou, Bo; Li, Yugi; He, Jun
1996-09-01
A novel fiber-optic immunosensor instrument has been developed, with an operating wavelength of 400-760 nm and a repeatability CV of 0.27%. The instrument has many excellent features, such as simplified operation, faster testing time, higher sensitivity and economical cost. Its separable sensor structure completely eliminates the recovery period required by traditional immunosensors. It can be widely applied to testing for bacteria, viruses, hormones, parasites and cancer proteins in clinical examination. The instrument has been operated in the laboratory and in relevant medical units, successfully testing monoclonal rat-anti-human antibody in 413 clinical cases; compared with the existing ELISA method, the coincidence rate reached 94 to 100%.
Early Flight Fission Test Facilities (EFF-TF) To Support Near-Term Space Fission Systems
NASA Astrophysics Data System (ADS)
van Dyke, Melissa
2004-02-01
Through hardware-based design and testing, the EFF-TF investigates fission power and propulsion components, subsystems, and integrated system design and performance. Through demonstration of system concepts (designed by Sandia and Los Alamos National Laboratories) in relevant environments, previous non-nuclear tests in the EFF-TF have proven to be a highly effective method (from both cost and performance standpoints) to identify and resolve integration issues. Ongoing research at the EFF-TF is geared towards facilitating research, development, system integration, and system utilization via cooperative efforts with DOE labs, industry, universities, and other NASA centers. This paper describes the current efforts for 2003.
Stenner, Elisabetta; Barbati, Giulia; West, Nicole; Ben, Fabia Del; Martin, Francesca; Ruscio, Maurizio
2018-06-01
Our aim was to verify whether procalcitonin (PCT) measurements using the new point-of-care testing i-CHROMA™ are interchangeable with those of the Liaison XL. One hundred seventeen serum samples were processed sequentially on a Liaison XL and i-CHROMA™. Statistical analysis was done using Passing-Bablok regression, the Bland-Altman test, and Cohen's kappa statistic. Proportional and constant differences were observed between i-CHROMA™ and Liaison XL. The 95% CI of the mean bias% was very large, exceeding the maximum allowable TE% and the clinical reference change value. However, the concordance between methods at the clinically relevant cutoffs was strong, with the exception of the 0.25 ng/mL cutoff, where it was moderate. Our data suggest that i-CHROMA™ is not interchangeable with Liaison XL. Nevertheless, the strong concordance at the clinically relevant cutoffs allows us to consider i-CHROMA™ a suitable option to the Liaison XL for supporting clinicians' decision-making, although the moderate agreement at the 0.25 ng/mL cutoff recommends caution in interpreting data around this cutoff.
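For illustration, two of the agreement statistics named in this abstract can be computed as in the sketch below: Bland-Altman bias with limits of agreement, and Cohen's kappa at the 0.25 ng/mL cutoff. The PCT values are simulated and the Passing-Bablok regression step is omitted.

```python
# Method-comparison sketch with simulated PCT values (ng/mL).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
liaison = rng.lognormal(mean=-1.0, sigma=1.2, size=117)       # reference method
ichroma = liaison * rng.normal(1.10, 0.15, size=117) + 0.02   # proportional + constant bias

diff = ichroma - liaison
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)   # Bland-Altman limits of agreement
print(f"bias {bias:.3f}, limits of agreement [{bias - loa:.3f}, {bias + loa:.3f}]")

cutoff = 0.25   # clinical decision threshold, ng/mL
kappa = cohen_kappa_score(liaison >= cutoff, ichroma >= cutoff)
print(f"Cohen's kappa at {cutoff} ng/mL: {kappa:.2f}")
```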
Corazza, Monica; Virgili, Annarosa
2005-05-01
In patients suspected of allergic contact dermatitis caused by topical ophthalmic medicaments, patch tests performed with patients' own products are often negative. The irritant anionic surfactant sodium lauryl sulfate (SLS) may alter the stratum corneum and increase antigen penetration. Pre-treatment of the skin with SLS 0.5% for 24 h was performed at the sites of patch tests with patients' own products in 15 selected patients. In patients previously negative to their own products tested with conventional patch tests, SLS pre-treatment yielded 6 new relevant positive reactions and induced a stronger positive reaction in 1 patient. SLS pre-treatment could be proposed as a promising alternative method that may increase the sensitivity of patch tests with patients' own products.
Clinically Relevant Cognitive Impairment in Middle-Aged Adults With Childhood-Onset Type 1 Diabetes
Nunley, Karen A.; Ryan, Christopher M.; Jennings, J. Richard; Aizenstein, Howard J.; Zgibor, Janice C.; Costacou, Tina; Boudreau, Robert M.; Miller, Rachel; Orchard, Trevor J.; Saxton, Judith A.
2015-01-01
OBJECTIVE The aim of this study was to investigate the presence and correlates of clinically relevant cognitive impairment in middle-aged adults with childhood-onset type 1 diabetes (T1D). RESEARCH DESIGN AND METHODS During 2010–2013, 97 adults diagnosed with T1D when aged <18 years (age and duration 49 ± 7 and 41 ± 6 years, respectively; 51% female) and 138 similarly aged adults without T1D (age 49 ± 7 years; 55% female) completed extensive neuropsychological testing. Biomedical data on participants with T1D were collected periodically since 1986–1988. Cognitive impairment status was based on the number of test scores ≥1.5 SD worse than demographically appropriate published norms: none, mild (only one test), or clinically relevant (two or more tests). RESULTS The prevalence of clinically relevant cognitive impairment was five times higher among participants with than without T1D (28% vs. 5%; P < 0.0001), independent of education, age, or blood pressure. Effect sizes were large (Cohen d 0.6–0.9; P < 0.0001) for psychomotor speed and visuoconstruction tasks and were modest (d 0.3–0.6; P < 0.05) for measures of executive function. Among participants with T1D, prevalent cognitive impairment was related to 14-year average A1c >7.5% (58 mmol/mol) (odds ratio [OR] 3.0; P = 0.009), proliferative retinopathy (OR 2.8; P = 0.01), and distal symmetric polyneuropathy (OR 2.6; P = 0.03) measured 5 years earlier; higher BMI (OR 1.1; P = 0.03); and ankle-brachial index ≥1.3 (OR 4.2; P = 0.01) measured 20 years earlier, independent of education. CONCLUSIONS Clinically relevant cognitive impairment is highly prevalent among these middle-aged adults with childhood-onset T1D. In this aging cohort, chronic hyperglycemia and prevalent microvascular disease were associated with cognitive impairment, relationships shown previously in younger populations with T1D. Two additional potentially modifiable risk factors for T1D-related cognitive impairment, vascular health and BMI, deserve further study. PMID:26153270
Lemmer, K; Howaldt, S; Heinrich, R; Roder, A; Pauli, G; Dorner, B G; Pauly, D; Mielke, M; Schwebke, I; Grunow, R
2017-11-01
The work aimed at developing and evaluating practically relevant methods for testing of disinfectants on contaminated personal protective equipment (PPE). Carriers were prepared from PPE fabrics and contaminated with Bacillus subtilis spores. Peracetic acid (PAA) was applied as a suitable disinfectant. In method 1, the contaminated carrier was submerged in PAA solution; in method 2, the contaminated area was covered with PAA; and in method 3, PAA, preferentially combined with a surfactant, was dispersed as a thin layer. In each method, 0.5-1% PAA reduced the viability of spores by a factor of ≥6 log10 within 3 min. The technique of the most realistic method 3 proved to be effective at low temperatures and also with a high organic load. Vaccinia virus and Adenovirus were inactivated with 0.05-0.1% PAA by up to ≥6 log10 within 1 min. The cytotoxicity of ricin was considerably reduced by 2% PAA within 15 min of exposure. The PAA/detergent mixture made it possible to cover hydrophobic PPE surfaces with a thin and yet effective disinfectant layer. The test methods are objective tools for estimating the biocidal efficacy of disinfectants on hydrophobic flexible surfaces. © 2017 The Society for Applied Microbiology.
76 FR 4345 - A Method To Assess Climate-Relevant Decisions: Application in the Chesapeake Bay
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-25
... ENVIRONMENTAL PROTECTION AGENCY [FRL-9257-2] A Method To Assess Climate-Relevant Decisions... external peer review workshop to review the external review draft document titled, ``A Method to Assess.../peerreview/register-chesapeake.htm . The draft ``A Method to Assess Climate-Relevant Decisions: Application...
A biochemical protocol for the isolation and identification of current species of Vibrio in seafood.
Ottaviani, D; Masini, L; Bacchiocchi, S
2003-01-01
We report a biochemical method for the isolation and identification of the current species of vibrios using just one operative protocol. The method involves an enrichment phase with incubation at 30 °C for 8-24 h in alkaline peptone water and an isolation phase on thiosulphate-citrate-bile salt-sucrose agar plates incubated at 30 °C for 24 h. Four biochemical tests and Alsina's scheme were performed for genus and species identification, respectively. All biochemical tests were optimized as regards conditions of temperature, time of incubation and media composition. The whole standardized protocol was always able to give a correct identification when applied to 25 reference strains of Vibrio and 134 field isolates. The data demonstrated that the assay method allows an efficient recovery, isolation and identification of current species of Vibrio in seafood, obtaining results within 2-7 days. This method, based on biochemical tests, could be applicable even in basic microbiology laboratories, and can be used simultaneously to isolate and discriminate all clinically relevant species of Vibrio.
The Gaussian CLs method for searches of new physics
Qian, X.; Tan, A.; Ling, J. J.; ...
2016-04-23
Here we describe a method based on the CLs approach to present results in searches of new physics, under the condition that the relevant parameter space is continuous. Our method relies on a class of test statistics developed for non-nested hypotheses testing problems, denoted by ΔT, which has a Gaussian approximation to its parent distribution when the sample size is large. This leads to a simple procedure of forming exclusion sets for the parameters of interest, which we call the Gaussian CLs method. Our work provides a self-contained mathematical proof for the Gaussian CLs method that explicitly outlines the required conditions. These conditions are milder than those required by Wilks' theorem to set confidence intervals (CIs). We illustrate the Gaussian CLs method in an example of searching for a sterile neutrino, where the CLs approach was rarely used before. We also compare data analysis results produced by the Gaussian CLs method and various CI methods to showcase their differences.
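A minimal numerical sketch of the CLs construction follows, assuming (as the paper's proof requires) that ΔT is Gaussian under each hypothesis; the means, width, and sign convention below are illustrative, not taken from the paper.

```python
# Gaussian CLs sketch: exclude H1 at 95% CL if CLs < 0.05.
# Convention here: dT tends to be negative under H1, so a large observed
# dT disfavors H1. All numbers are illustrative.
from scipy.stats import norm

mu0, mu1, sigma = 0.0, -6.0, 2.0   # Gaussian approximation of dT under H0, H1
dT_obs = -1.0                      # observed value of the test statistic

p1 = norm.sf(dT_obs, loc=mu1, scale=sigma)   # P(dT >= obs | H1), signal-unlike tail
p0 = norm.sf(dT_obs, loc=mu0, scale=sigma)   # P(dT >= obs | H0)

cls = p1 / p0   # CLs ratio protects against excluding H1 when there is no sensitivity
print(f"CLs = {cls:.3f} -> H1 {'excluded' if cls < 0.05 else 'not excluded'} at 95% CL")
```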
[Multivariate Adaptive Regression Splines (MARS), an alternative for the analysis of time series].
Vanegas, Jairo; Vásquez, Fabián
Multivariate Adaptive Regression Splines (MARS) is a non-parametric modelling method that extends the linear model, incorporating nonlinearities and interactions between variables. It is a flexible tool that automates the construction of predictive models: selecting relevant variables, transforming the predictor variables, processing missing values and preventing overfitting using a self-test. It is also able to predict, taking into account structural factors that might influence the outcome variable, thereby generating hypothetical models. The end result can identify relevant cut-off points in data series. It is rarely used in health research, so it is proposed here as a tool for the evaluation of relevant public health indicators. For demonstrative purposes, data series regarding the mortality of children under 5 years of age in Costa Rica were used, comprising the period 1978-2008. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
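To make the method concrete, the sketch below illustrates the MARS building block: piecewise-linear hinge basis functions. It fits a toy series with a fixed knot grid and ordinary least squares, standing in for the real algorithm's adaptive forward/backward knot selection and self-test.

```python
# Illustration of the MARS basis: hinge functions max(0, x - t) and
# max(0, t - x). The real MARS algorithm selects knots t adaptively;
# here a fixed knot grid plus least squares stands in for that search.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 200)
# Toy "time series" with a structural break (slope change) at x = 4.
y = np.where(x < 4, 2.0 * x, 8.0 - 0.5 * (x - 4)) + rng.normal(0, 0.3, x.size)

knots = np.linspace(1, 9, 9)
basis = [np.ones_like(x)]
for t in knots:
    basis.append(np.maximum(0.0, x - t))   # right hinge
    basis.append(np.maximum(0.0, t - x))   # left hinge
B = np.column_stack(basis)

coef, *_ = np.linalg.lstsq(B, y, rcond=None)
resid = y - B @ coef
print("RMSE of hinge-basis fit:", float(np.sqrt(np.mean(resid**2))))
```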
Assessment of NDE reliability data
NASA Technical Reports Server (NTRS)
Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.
1975-01-01
Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.
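The binomial detection-reliability calculation described here can be sketched as follows; the Clopper-Pearson lower bound is one standard formulation, and the original program's exact formulas may differ.

```python
# Probability of detection (POD) from binomial inspection data: point
# estimate plus a one-sided lower confidence bound (Clopper-Pearson form).
from scipy.stats import beta

def pod_lower_bound(successes, trials, confidence=0.95):
    if successes == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, successes, trials - successes + 1)

n, s = 29, 29   # e.g. 29 of 29 cracks found by eddy current in bolt holes
print("point estimate:", s / n)
print("95% lower confidence bound on POD:", round(pod_lower_bound(s, n), 3))
# For s == n this reduces to (1 - confidence)**(1/n), the classic "29/29" rule.
```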
Proposed Flight Research of a Dual-Bell Rocket Nozzle Using the NASA F-15 Airplane
NASA Technical Reports Server (NTRS)
Jones, Daniel S.; Bui, Trong T.; Ruf, Joseph H.
2013-01-01
For more than a half-century, several types of altitude-compensating rocket nozzles have been proposed and analyzed, but very few have been adequately tested in a relevant flight environment. One type of altitude-compensating nozzle is the dual-bell rocket nozzle, which was first introduced into literature in 1949. Despite the performance advantages that have been predicted, both analytically and through static test data, the dual-bell nozzle has still not been adequately tested in a relevant flight environment. This paper proposes a method for conducting testing and research with a dual-bell rocket nozzle in a flight environment. We propose to leverage the existing NASA F-15 airplane and Propulsion Flight Test Fixture as the flight testbed, with the dual-bell nozzle operating during captive-carried flights, and with the nozzle subjected to a local flow field similar to that of a launch vehicle. The primary objective of this effort is not only to advance the technology readiness level of the dual-bell nozzle, but also to gain a greater understanding of the nozzle mode transitional sensitivity to local flow-field effects, and to quantify the performance benefits with this technology. The predicted performance benefits are significant, and may result in reducing the cost of delivering payloads to low-Earth orbit.
Gabriel, Adel; Violato, Claudio
2009-01-01
Background To develop and psychometrically assess a multiple choice question (MCQ) instrument to test knowledge of depression and its treatments in patients suffering from depression. Methods A total of 63 depressed patients and twelve psychiatric experts participated. Based on empirical evidence from an extensive review, theoretical knowledge and consultations with experts, a 27-item MCQ test of knowledge of depression and its treatment was constructed. Data collected from the psychiatry experts were used to assess evidence of content validity for the instrument. Results Cronbach's alpha of the instrument was 0.68, and there was an overall 87.8% agreement (items highly relevant) between experts about the relevance of the MCQs for testing patient knowledge of depression and its treatments. There was an overall satisfactory patient performance on the MCQs, with 78.7% correct answers. Results of an item analysis indicated that most items had adequate difficulty and discrimination. Conclusion There was adequate reliability and evidence for content and convergent validity for the instrument. Future research should employ a larger and more heterogeneous sample from both psychiatric and community populations than did the present study. Meanwhile, the present study has resulted in a psychometrically tested instrument for measuring depressed patients' knowledge of depression and its treatment. PMID:19754944
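For reference, the reported reliability and item statistics can be computed roughly as below; the 0/1 response matrix is simulated, and the item-analysis details (for example, corrected item-total correlations) are simplified.

```python
# Cronbach's alpha and basic item analysis for an MCQ instrument.
# Rows = patients, columns = items; entries are 1 (correct) or 0.
import numpy as np

rng = np.random.default_rng(4)
ability = rng.normal(size=(63, 1))
responses = (rng.normal(size=(63, 27)) + ability < 0.8).astype(float)

def cronbach_alpha(X):
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

difficulty = responses.mean(axis=0)          # proportion correct per item
total = responses.sum(axis=1)
discrimination = np.array([np.corrcoef(responses[:, j], total)[0, 1]
                           for j in range(responses.shape[1])])

print("alpha:", round(cronbach_alpha(responses), 2))
print("item difficulty range:", float(difficulty.min()), "-", float(difficulty.max()))
print("mean item-total correlation:", round(float(discrimination.mean()), 2))
```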
Proposed Flight Research of a Dual-Bell Rocket Nozzle Using the NASA F-15 Airplane
NASA Technical Reports Server (NTRS)
Jones, Daniel S.; Bui, Trong T.; Ruf, Joseph H.
2013-01-01
For more than a half-century, several types of altitude-compensating rocket nozzles have been proposed and analyzed, but very few have been adequately tested in a relevant flight environment. One type of altitude-compensating nozzle is the dual-bell rocket nozzle, which was first introduced into literature in 1949. Despite the performance advantages that have been predicted, both analytically and through static test data, the dual-bell nozzle has still not been adequately tested in a relevant flight environment. This presentation proposes a method for conducting testing and research with a dual-bell rocket nozzle in a flight environment. We propose to leverage the existing NASA F-15 airplane and Propulsion Flight Test Fixture as the flight testbed, with the dual-bell nozzle operating during captive-carried flights, and with the nozzle subjected to a local flow field similar to that of a launch vehicle. The primary objective of this effort is not only to advance the technology readiness level of the dual-bell nozzle, but also to gain a greater understanding of the nozzle mode transitional sensitivity to local flow-field effects, and to quantify the performance benefits with this technology. The predicted performance benefits are significant, and may result in reducing the cost of delivering payloads to low-Earth orbit.
Rennert, Hanna; Eng, Kenneth; Zhang, Tuo; Tan, Adrian; Xiang, Jenny; Romanel, Alessandro; Kim, Robert; Tam, Wayne; Liu, Yen-Chun; Bhinder, Bhavneet; Cyrta, Joanna; Beltran, Himisha; Robinson, Brian; Mosquera, Juan Miguel; Fernandes, Helen; Demichelis, Francesca; Sboner, Andrea; Kluk, Michael; Rubin, Mark A; Elemento, Olivier
2016-01-01
We describe Exome Cancer Test v1.0 (EXaCT-1), the first New York State-Department of Health-approved whole-exome sequencing (WES)-based test for precision cancer care. EXaCT-1 uses HaloPlex (Agilent) target enrichment followed by next-generation sequencing (Illumina) of tumour and matched constitutional control DNA. We present a detailed clinical development and validation pipeline suitable for simultaneous detection of somatic point/indel mutations and copy-number alterations (CNAs). A computational framework for data analysis, reporting and sign-out is also presented. For the validation, we tested EXaCT-1 on 57 tumours covering five distinct clinically relevant mutations. Results demonstrated elevated and uniform coverage compatible with clinical testing as well as complete concordance in variant quality metrics between formalin-fixed paraffin embedded and fresh-frozen tumours. Extensive sensitivity studies identified limits of detection threshold for point/indel mutations and CNAs. Prospective analysis of 337 cancer cases revealed mutations in clinically relevant genes in 82% of tumours, demonstrating that EXaCT-1 is an accurate and sensitive method for identifying actionable mutations, with reasonable costs and time, greatly expanding its utility for advanced cancer care. PMID:28781886
Simulation Testing for Selection of Critical Care Medicine Trainees. A Pilot Feasibility Study.
Cocciante, Adriano G; Nguyen, Martin N; Marane, Candida F; Panayiotou, Anita E; Karahalios, Amalia; Beer, Janet A; Johal, Navroop; Morris, John; Turner, Stacy; Hessian, Elizabeth C
2016-04-01
Selection of physicians into anesthesiology, intensive care, and emergency medicine training has traditionally relied on evaluation of curriculum vitae, letters of recommendation, and interviews, despite these methods being poor predictors of subsequent workplace performance. In this study, we evaluated the feasibility and face validity of incorporating assessment of nontechnical skills in simulation and personality traits into an existing junior doctor selection framework. Candidates short-listed for a critical care residency position were invited to participate in the study. On the interview day, consenting candidates participated in a simulation scenario and debriefing and completed a personality test (16 Personality Factor Questionnaire) and a survey. Timing of participants' progression through the stations and faculty staff numbers were evaluated. Nontechnical skills were evaluated and candidates ranked using the Ottawa Crisis Resource Management Global Rating Scale (Ottawa GRS). Nontechnical skills ranking and traditional selection method ranking were compared using the concordance correlation coefficient. Interrater reliability was assessed using the concordance correlation coefficient. Thirteen of 20 eligible participants consented to study inclusion. All participants completed the necessary stations without significant time delays. Eighteen staff members were required to conduct interviews, simulation, debriefing, and personality testing. Participants rated the simulation station to be acceptable, fair, and relevant and as providing an opportunity to demonstrate abilities. Personality testing was rated less fair, less relevant, and less acceptable, and as giving less opportunity to demonstrate abilities. Participants reported that simulation was equally as stressful as the interview, whereas personality testing was rated less stressful. Assessors rated both personality testing and simulation as acceptable and able to provide additional information about candidates. The Ottawa GRS showed moderate interrater concordance. There was moderate concordance between rankings based on traditional selection methods and Ottawa GRS rankings (ρ = 0.52; 95% confidence interval, -0.02 to 0.82; P = 0.06). A multistation selection process involving interviews, simulation, and personality testing is feasible and has face validity. A potential barrier to adoption is the high number of faculty required to conduct the process.
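As an illustration of the statistic used for both comparisons in this study, Lin's concordance correlation coefficient can be computed as in the sketch below; the two ranking vectors are invented.

```python
# Lin's concordance correlation coefficient (CCC) between two rankings.
import numpy as np

def lins_ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()   # population variances, per Lin's formulation
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

ottawa_grs_rank = [1, 2, 4, 3, 6, 5, 8, 7, 9, 11, 10, 13, 12]   # illustrative
traditional_rank = [2, 1, 3, 5, 4, 7, 6, 9, 10, 8, 12, 11, 13]  # illustrative
print("CCC:", round(lins_ccc(ottawa_grs_rank, traditional_rank), 2))
```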
de Alwis, Manudul Pahansen; Äng, Björn Olov; Garme, Karl
2017-01-01
Objective High-performance marine craft personnel (HPMCP) are regularly exposed to vibration and repeated shock (VRS) levels exceeding maximum limitations stated by international legislation. Whereas such exposure reportedly is detrimental to health and performance, the epidemiological data necessary to link these adverse effects causally to VRS are not available in the scientific literature, and no suitable tools for acquiring such data exist. This study therefore constructed a questionnaire for longitudinal investigations in HPMCP. Methods A consensus panel defined content domains, identified relevant items and outlined a questionnaire. The relevance and simplicity of the questionnaire’s content were then systematically assessed by expert raters in three consecutive stages, each followed by revisions. An item-level content validity index (I-CVI) was computed as the proportion of experts rating an item as relevant and simple, and a scale-level content validity index (S-CVI/Ave) as the average I-CVI across items. The thresholds for acceptable content validity were 0.78 and 0.90, respectively. Finally, a dynamic web version of the questionnaire was constructed and pilot tested over a 1-month period during a marine exercise in a study population sample of eight subjects, while accelerometers simultaneously quantified VRS exposure. Results Content domains were defined as work exposure, musculoskeletal pain and human performance, and items were selected to reflect these constructs. Ratings from nine experts yielded S-CVI/Ave of 0.97 and 1.00 for relevance and simplicity, respectively, and the pilot test suggested that responses were sensitive to change in acceleration and that the questionnaire, following some adjustments, was feasible for its intended purpose. Conclusions A dynamic web-based questionnaire for longitudinal survey of key variables in HPMCP was constructed. Expert ratings supported that the questionnaire content is relevant, simple and sufficiently comprehensive, and the pilot test suggested that the questionnaire is feasible for longitudinal measurements in the study population. PMID:28729320
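The content validity indices described here reduce to simple proportions, as the sketch below shows; the 9-expert rating matrix is invented.

```python
# Content validity indices: I-CVI is the proportion of experts rating an
# item relevant (or simple); S-CVI/Ave is the mean I-CVI across items.
# Rows = 9 experts, columns = 5 items; 1 = rated relevant, 0 = not.
import numpy as np

ratings = np.array([
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [1, 0, 1, 1, 1],
    [1, 1, 1, 1, 1],
])

i_cvi = ratings.mean(axis=0)   # one value per item
s_cvi_ave = i_cvi.mean()
print("I-CVI per item:", i_cvi.round(2), "(threshold 0.78)")
print("S-CVI/Ave:", round(float(s_cvi_ave), 2), "(threshold 0.90)")
```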
Shah, Darshil U; Reynolds, Thomas P S; Ramage, Michael H
2017-07-20
From the stems of agricultural crops to the structural trunks of trees, studying the mechanical behaviour of plant stems is critical for both commerce and science. Plant scientists are also increasingly relying on mechanical test data for plant phenotyping. Yet there are neither standardized methods nor systematic reviews of current methods for the testing of herbaceous stems. We discuss the architecture of plant stems and highlight important micro- and macrostructural parameters that need to be controlled and accounted for when designing test methodologies, or that need to be understood in order to explain observed mechanical behaviour. Then, we critically evaluate various methods to test structural properties of stems, including flexural bending (two-, three-, and four-point bending) and axial loading (tensile, compressive, and buckling) tests. Recommendations are made on best practices. This review is relevant to fundamental studies exploring plant biomechanics, mechanical phenotyping of plants, and the determinants of mechanical properties in cell walls, as well as to application-focused studies, such as in agro-breeding and forest management projects, aiming to understand deformation processes of stem structures. The methods explored here can also be extended to other elongated, rod-shaped organs (e.g. petioles, midribs, and even roots). © The Author 2017. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
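As one worked example of the structural tests reviewed, the flexural (Young's) modulus of a stem can be extracted from the initial slope of a three-point bending curve using Euler-Bernoulli beam theory; the numbers below are illustrative, not from the review.

```python
# Flexural modulus from three-point bending of a solid circular stem:
# E = (F/delta) * L^3 / (48 I), with I = pi d^4 / 64.
import math

def flexural_modulus_3pt(slope_N_per_m, span_m, diameter_m):
    I = math.pi * diameter_m**4 / 64.0   # second moment of area, circular section
    return slope_N_per_m * span_m**3 / (48.0 * I)

slope = 120.0      # N/m, slope of the linear region of the load-deflection curve
span = 0.20        # m, distance between supports
diameter = 0.008   # m, stem diameter at mid-span

E = flexural_modulus_3pt(slope, span, diameter)
print(f"flexural modulus ~ {E / 1e9:.2f} GPa")
```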
DOE Office of Scientific and Technical Information (OSTI.GOV)
Formanek, Martin; Vana, Martin; Houfek, Karel
2010-09-30
We compare the efficiency of two methods for the numerical solution of the time-dependent Schrödinger equation, namely the Chebyshev method and the recently introduced generalized Crank-Nicolson method. As a testing system, the free propagation of a particle in one dimension is used. The space discretization is based on high-order finite differences to approximate accurately the kinetic energy operator in the Hamiltonian. We show that the choice of the more effective method depends on how many wave functions must be calculated during the given time interval to obtain relevant and reasonably accurate information about the system, i.e. on the choice of the time step.
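A minimal sketch of a Crank-Nicolson step for the free-particle test system follows, with a second-order finite-difference stencil standing in for the paper's high-order one; grid and time-step values are illustrative.

```python
# Crank-Nicolson propagation of a free 1D wave packet (hbar = m = 1).
import numpy as np
from scipy.sparse import identity, diags
from scipy.sparse.linalg import splu

n, dx, dt = 1024, 0.1, 0.05
x = (np.arange(n) - n / 2) * dx

# Hamiltonian H = -(1/2) d^2/dx^2 via the [1, -2, 1] stencil.
lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
H = -0.5 * lap

# (I + i dt H / 2) psi_new = (I - i dt H / 2) psi_old  -- unitary, stable.
A = (identity(n) + 0.5j * dt * H).tocsc()
Bm = (identity(n) - 0.5j * dt * H).tocsr()
solve = splu(A).solve   # factor once, reuse every step

# Initial Gaussian wave packet with momentum k0.
k0, sigma0 = 2.0, 1.0
psi = np.exp(-x**2 / (2 * sigma0**2) + 1j * k0 * x).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

for _ in range(200):
    psi = solve(Bm @ psi)

print("norm after 200 steps:", float(np.sum(np.abs(psi)**2) * dx))  # stays ~1
```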
Sullivan, Ann K; Sperle, Ida; Raben, Dorthe; Amato-Gauci, Andrew J; Lundgren, Jens Dilling; Yazdanpanah, Yazdan; Jakobsen, Stine Finne; Tavoschi, Lara
2017-01-01
Background: An evaluation of the 2010 ECDC guidance on HIV testing, conducted in October 2015–January 2016, assessed its impact, added value, relevance and usability and the need for updated guidance. Methods: Data sources were two surveys: one for the primary target audience (health policymakers and decision makers, national programme managers and ECDC official contact points in the European Union/European Economic Area (EU/EEA) countries) and one for a broader target audience (clinicians, civil society organisations and international public health agencies); two moderated focus group discussions (17 participants each); webpage access data; a literature citation review; and an expert consultation (18 participants) to discuss the evaluation findings. Results: Twenty-three of 28 primary target audience and 31 of 51 broader target audience respondents indicated the guidance was the most relevant when compared with other international guidance. Primary target audience respondents in 11 of 23 countries reported that they had used the guidance in development, monitoring and/or evaluation of their national HIV testing policy, guidelines, programme and/or strategy, and 29 of 51 of the broader target audience respondents reported having used the guidance in their work. Both the primary and broader target audience considered it important or very important to have an EU/EEA-level HIV testing guidance (23/28 and 46/51, respectively). Conclusion: The guidance has been widely used to develop policies, guidelines, programmes and strategies in the EU/EEA and should be regularly updated due to continuous developments in the field in order to continue to serve as an important reference guidance in the region. PMID:29208158
Code of Federal Regulations, 2013 CFR
2013-10-01
... substances, preparations, and relevant chemicals. 162.060-32 Section 162.060-32 Shipping COAST GUARD... APPROVAL ENGINEERING EQUIPMENT Ballast Water Management Systems § 162.060-32 Testing and evaluation requirements for active substances, preparations, and relevant chemicals. (a) A ballast water management system...
Code of Federal Regulations, 2012 CFR
2012-10-01
... substances, preparations, and relevant chemicals. 162.060-32 Section 162.060-32 Shipping COAST GUARD... APPROVAL ENGINEERING EQUIPMENT Ballast Water Management Systems § 162.060-32 Testing and evaluation requirements for active substances, preparations, and relevant chemicals. (a) A ballast water management system...
Code of Federal Regulations, 2014 CFR
2014-10-01
... substances, preparations, and relevant chemicals. 162.060-32 Section 162.060-32 Shipping COAST GUARD... APPROVAL ENGINEERING EQUIPMENT Ballast Water Management Systems § 162.060-32 Testing and evaluation requirements for active substances, preparations, and relevant chemicals. (a) A ballast water management system...
Approval Motive and Academic Behaviors: The Self Reinforcement Hypothesis
ERIC Educational Resources Information Center
Matell, Michael S.; Smith, Ronald E.
1970-01-01
Testing of college students in differing conditions as to whether performance was relevant to academic achievement goals revealed that under high relevance conditions, scores on the Marlowe Crowne Social Desirability Scale were unrelated to test performance. Under low relevance conditions, the need for approval was highly related to performance in high…
Alépée, N; Hibatallah, J; Klaric, M; Mewes, K R; Pfannenbecker, U; McNamee, P
2016-06-01
Cosmetics Europe recently established HPLC/UPLC-spectrophotometry as a suitable alternative endpoint detection system for measurement of formazan in the MTT-reduction assay of reconstructed human tissue test methods, irrespective of the test system involved. This addressed a known limitation for such test methods that use optical density for measurement of formazan and may be incompatible with evaluation of strong MTT reducers and/or coloured chemicals. To build on the original project, Cosmetics Europe has undertaken a second study that focuses on evaluation of chemicals with functionalities relevant to cosmetic products. Such chemicals were primarily identified from the Scientific Committee on Consumer Safety (SCCS) 2010 memorandum (addendum) on the in vitro test EpiSkin™ for skin irritation testing. Fifty test items were evaluated, in which both standard photometry and HPLC/UPLC-spectrophotometry were used for endpoint detection. The results obtained in this study: 1) provide further support for the within-laboratory reproducibility of HPLC/UPLC-spectrophotometry for measurement of formazan; 2) demonstrate, through a case study with Basazol C Blue pr. 8056, that HPLC/UPLC-spectrophotometry enables determination of an in vitro classification even when this is not possible using standard photometry; and 3) address the question raised by the SCCS in their 2010 memorandum (addendum) to consider an endpoint detection system not involving optical density quantification in in vitro reconstructed human epidermis skin irritation test methods. Copyright © 2016 Elsevier Ltd. All rights reserved.
Koboldt, Daniel C.; Kanchi, Krishna L.; Gui, Bin; Larson, David E.; Fulton, Robert S.; Isaacs, William B.; Kraja, Aldi; Borecki, Ingrid B.; Jia, Li; Wilson, Richard K.; Mardis, Elaine R.; Kibel, Adam S.
2016-01-01
Background Common variants have been associated with prostate cancer risk. Unfortunately, few are reproducibly linked to aggressive disease, the phenotype of greatest clinical relevance. One possible explanation is that rare genetic variants underlie a significant proportion of the risk for aggressive disease. Method To identify such variants, we performed a two-stage approach using whole exome sequencing followed by targeted sequencing of 800 genes in 652 aggressive prostate cancer patients and 752 disease-free controls in both African and European Americans. In each population, we tested rare variants for association using two gene-based aggregation tests. We established a study-wide significance threshold of 3.125 × 10−5 to correct for multiple testing. Results TET2 in African-Americans was associated with aggressive disease, with 24.4% of cases harboring a rare deleterious variant compared to 9.6% of controls (FET p = 1.84×10−5, OR=3.0; SKAT-O p= 2.74×10−5). We report 8 additional genes with suggestive evidence of association, including the DNA repair genes PARP2 and MSH6. Finally, we observed an excess of rare truncation variants in 5 genes including the DNA repair genes MSH6, BRCA1 and BRCA2. This adds to the growing body of evidence that DNA repair pathway defects may influence susceptibility to aggressive prostate cancer. Conclusion Our findings suggest that rare variants influence risk of clinically relevant prostate cancer and, if validated, could serve to identify men for screening, prophylaxis and treatment. Impact This study provides evidence that rare variants in TET2 may help identify African-American men at increased risk for clinically relevant prostate cancer. PMID:27486019
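The simpler of the two aggregation tests can be illustrated directly from the reported TET2 carrier fractions; the reconstructed counts below are approximate, and the printed p-value need not match the published burden-test value, which reflects the study's exact counts and procedure.

```python
# Fisher's exact test on rare deleterious variant carrier counts,
# reconstructed approximately from the abstract (24.4% of 652 cases
# vs 9.6% of 752 controls).
from scipy.stats import fisher_exact

case_carriers = round(0.244 * 652)      # ~159
control_carriers = round(0.096 * 752)   # ~72
table = [[case_carriers, 652 - case_carriers],
         [control_carriers, 752 - control_carriers]]

odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.1f}, p = {p:.2e}")   # OR ~ 3.0, matching the abstract
```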
NASA Astrophysics Data System (ADS)
Budhwani, Karim Ismail
The tremendous quality-of-life impact notwithstanding, cardiovascular diseases and cancer add up to over US$700bn each year in financial costs alone. Aging and population growth are expected to further expand the problem space, while drug research and development remain expensive. However, preclinical costs can be substantially mitigated by substituting animal models with in vitro devices that accurately model human cardiovascular transport. Here we present a novel physiologically relevant lab-on-a-brane that simulates in vivo pressure, flow, strain, and shear waveforms associated with normal and pathological conditions in large and small blood vessels for studying molecular transport across the endothelial monolayer. The device builds upon the previously demonstrated integrated microfluidic loop design by: (a) introducing nanoscale pores in the substrate membrane to enable transmembrane molecular transport, (b) transforming the substrate membrane into a nanofibrous matrix for 3D smooth muscle cell (SMC) tissue culture, (c) integrating electrospinning fabrication methods, (d) engineering an invertible sandwich cell culture device architecture, and (e) devising a healthy co-culture mechanism for a human arterial endothelial cell (HAEC) monolayer and multiple layers of human smooth muscle cells (HSMC) to accurately mimic arterial anatomy. Structural and mechanical characterization was conducted using confocal microscopy, SEM, stress/strain analysis, and infrared spectroscopy. Transport was characterized using the FITC-Dextran hydraulic permeability protocol. Structure and transport characterization successfully demonstrate device viability as a physiologically relevant arterial mimic for testing transendothelial transport. Thus, our lab-on-a-brane provides a highly effective and efficient, yet considerably inexpensive, physiologically relevant alternative for pharmacokinetic evaluation; possibly reducing the number of animals used in preclinical testing, the cost of clinical-trial false starts, and time-to-market. Furthermore, this platform can be easily configured for testing targeted therapeutic delivery and in multiple simultaneous arrays for personalized and precision medicine applications.
ERIC Educational Resources Information Center
Eklof, Hanna; Nyroos, Mikaela
2013-01-01
Although large-scale national tests have been used for many years in Swedish compulsory schools, very little is known about how pupils actually react to these tests. The question is relevant, however, as pupil reactions in the test situation may affect test performance as well as future attitudes towards assessment. The question is relevant also…
75 FR 53298 - A Method to Assess Climate-Relevant Decisions: Application in the Chesapeake Bay
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-31
... ENVIRONMENTAL PROTECTION AGENCY [FRL-9195-4; Docket ID No. EPA-HQ-ORD-2010-0709] A Method to... comment period for the draft document titled, ``A Method to Assess Climate-Relevant Decision: Application... draft ``A Method To Assess Climate-Relevant Decisions: Application in the Chesapeake Bay'' is available...
Bridging naturalistic and laboratory assessment of memory: the Baycrest mask fit test.
Armson, Michael J; Abdi, Hervé; Levine, Brian
2017-09-01
Autobiographical memory tests provide a naturalistic counterpoint to the artificiality of laboratory research methods, yet autobiographical events are uncontrolled and, in most cases, unverifiable. In this study, we capitalised on a scripted, complex naturalistic event - the mask fit test (MFT), a standardised procedure required of hospital employees - to bridge the gap between naturalistic and laboratory memory assessment. We created a test of recognition memory for the MFT and administered it to 135 hospital employees who had undertaken the MFT at various points over the past five years. Multivariate analysis revealed two dimensions defined by accuracy and response bias. Accuracy scores showed the expected relationship to encoding-test delay, supporting the validity of this measure. Relative to younger adults, older adults' memory for this naturalistic event was better than would be predicted from the cognitive ageing literature, a result consistent with the notion that older adults' memory performance is enhanced when stimuli are naturalistic and personally relevant. These results demonstrate that testing recognition memory for a scripted event is a viable method of studying autobiographical memory.
Well-Tempered Metadynamics: A Smoothly Converging and Tunable Free-Energy Method
NASA Astrophysics Data System (ADS)
Barducci, Alessandro; Bussi, Giovanni; Parrinello, Michele
2008-01-01
We present a method for determining the free-energy dependence on a selected number of collective variables using an adaptive bias. The formalism provides a unified description which has metadynamics and canonical sampling as limiting cases. Convergence and errors can be rigorously and easily controlled. The parameters of the simulation can be tuned so as to focus the computational effort only on the physically relevant regions of the order parameter space. The algorithm is tested on the reconstruction of an alanine dipeptide free-energy landscape.
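A toy sketch of the well-tempered bias update on a one-dimensional collective variable follows; the overdamped Monte Carlo dynamics, double-well surface, and parameters are illustrative stand-ins for a real molecular simulation.

```python
# Well-tempered metadynamics sketch: Gaussian hills are deposited with
# height w0 * exp(-V(s) / dT), so deposition slows as the bias grows and
# the bias converges to -dT/(T+dT) * F(s) (kB = 1 units throughout).
import numpy as np

rng = np.random.default_rng(5)
kT, dT = 1.0, 9.0                   # temperature and bias "boost" parameter
w0, sigma = 0.1, 0.1                # initial hill height and width
F = lambda s: (s**2 - 1.0)**2       # toy double-well free-energy surface

grid = np.linspace(-2, 2, 401)
V = np.zeros_like(grid)             # accumulated bias on the grid
s = -1.0                            # collective variable of the walker
V_at = lambda q: np.interp(q, grid, V)

for step in range(20000):
    # Metropolis move on the biased surface F(s) + V(s).
    s_new = s + rng.normal(0, 0.05)
    dE = (F(s_new) + V_at(s_new)) - (F(s) + V_at(s))
    if dE <= 0 or rng.random() < np.exp(-dE / kT):
        s = s_new
    if step % 50 == 0:              # deposit a tempered Gaussian hill
        w = w0 * np.exp(-V_at(s) / dT)
        V += w * np.exp(-(grid - s)**2 / (2 * sigma**2))

F_est = -(kT + dT) / dT * V         # recover F(s) from the converged bias
F_est -= F_est.min()
print("estimated barrier at s=0 (exact: 1.0):", round(float(F_est[grid.size // 2]), 2))
```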
ERIC Educational Resources Information Center
Nicholson, James; Ridgway, Jim
2017-01-01
White and Gorard make important and relevant criticisms of some of the methods commonly used in social science research, but go further by criticising the logical basis for inferential statistical tests. This paper comments briefly on matters on which we broadly agree with them, and more fully on matters where we disagree. We agree that too little…
Invited Paper Thin Film Technology In Design And Production Of Optical Systems
NASA Astrophysics Data System (ADS)
Guenther, K. H.; Menningen, R.; Burke, C. A.
1983-10-01
Basic optical properties of dielectric thin films for interference applications and of metallic optical coatings are reviewed. Some design considerations of how to use thin films best in optical systems are given, and some aspects of thin film production technology relevant to the optical designer and the optician are addressed. The necessity of proper specifications, inclusive of test methods, is emphasized.
Yun, Jun-Won; Hailian, Quan; Na, Yirang; Kang, Byeong-Cheol; Yoon, Jung-Hee; Cho, Eun-Young; Lee, Miri; Kim, Da-Eun; Bae, SeungJin; Seok, Seung Hyeok; Lim, Kyung-Min
2016-12-01
In an effort to explore the use of alternative methods to animal testing for the evaluation of the ocular irritancy of medical devices, we evaluated representative contact lenses with the bovine corneal opacity and permeability test (BCOP) and an in vitro eye irritation test using the three-dimensionally-reconstructed human corneal epithelium (RhCE) models, EpiOcular™ and MCTT HCE™. In addition, we compared the obtained results with the ISO standard in vivo rabbit eye irritation test (ISO10993-10). Along with the positive controls (benzalkonium chloride, BAK, 0.02, 0.2, and 1%), the extracts of 4 representative contact lenses (soft, disposable, hard, and colored lenses) and 2 reference lenses (dye-eluting and BAK-coated lenses) were tested. All the lenses, except for the BAK-coated lens, were determined non-irritants in all test methods, while the positive controls yielded relevant results. More importantly, BCOP, EpiOcular™, and MCTT HCE™ yielded a consistent decision for all the tested samples, with the exception of 0.2% BAK in BCOP, for which no prediction could be made. Overall, all the in vitro tests correlated well with the in vivo rabbit eye irritation test, and furthermore, the combination of in vitro tests as a tiered testing strategy was able to produce results similar to those seen in vivo. These observations suggest that such methods can be used as alternative assays to replace the conventional in vivo test method in the evaluation of the ocular irritancy of ophthalmic medical devices, although further study is necessary. Copyright © 2016. Published by Elsevier Ltd.
Grafström, Roland C; Nymark, Penny; Hongisto, Vesa; Spjuth, Ola; Ceder, Rebecca; Willighagen, Egon; Hardy, Barry; Kaski, Samuel; Kohonen, Pekka
2015-11-01
This paper outlines the work for which Roland Grafström and Pekka Kohonen were awarded the 2014 Lush Science Prize. The research activities of the Grafström laboratory have, for many years, covered cancer biology studies, as well as the development and application of toxicity-predictive in vitro models to determine chemical safety. Through the integration of in silico analyses of diverse types of genomics data (transcriptomic and proteomic), their efforts have proved to fit well into the recently-developed Adverse Outcome Pathway paradigm. Genomics analysis within state-of-the-art cancer biology research and Toxicology in the 21st Century concepts share many technological tools. A key category within the Three Rs paradigm is the Replacement of animals in toxicity testing with alternative methods, such as bioinformatics-driven analyses of data obtained from human cell cultures exposed to diverse toxicants. This work was recently expanded within the pan-European SEURAT-1 project (Safety Evaluation Ultimately Replacing Animal Testing), to replace repeat-dose toxicity testing with data-rich analyses of sophisticated cell culture models. The aims and objectives of the SEURAT project have been to guide the application, analysis, interpretation and storage of 'omics' technology-derived data within the service-oriented sub-project, ToxBank. Particularly addressing the Lush Science Prize focus on the relevance of toxicity pathways, a 'data warehouse' that is under continuous expansion, coupled with the development of novel data storage and management methods for toxicology, serve to address data integration across multiple 'omics' technologies. The prize winners' guiding principles and concepts for modern knowledge management of toxicological data are summarised. The translation of basic discovery results ranged from chemical-testing and material-testing data, to information relevant to human health and environmental safety. 2015 FRAME.
Could situational judgement tests be used for selection into dental foundation training?
Patterson, F; Ashworth, V; Mehra, S; Falcon, H
2012-07-13
To pilot and evaluate a machine-markable situational judgement test (SJT) designed to select candidates into UK dental foundation training. Single centre pilot study. UK postgraduate deanery in 2010. Seventy-four candidates attending interview for dental foundation training in Oxford and Wessex Deaneries volunteered to complete the situational judgement test. The situational judgement test was developed to assess relevant professional attributes for dentistry (for example, empathy and integrity) in a machine-markable format. Test content was developed by subject matter experts working with experienced psychometricians. Evaluation of psychometric properties of the pilot situational judgement test (for example, reliability, validity and fairness). Scores in the dental foundation training selection process (short-listing and interviews) were used to examine criterion-related validity. Candidates completed an evaluation questionnaire to examine candidate reactions and face validity of the new test. Forty-six candidates were female and 28 male; mean age was 23.5-years-old (range 22-32). Situational judgement test scores were normally distributed and the test showed good internal reliability when corrected for test length (α = 0.74). Situational judgement test scores positively correlated with the management, leadership and professionalism interview (N = 50; r = 0.43, p <0.01) but not with the clinical skills interview, providing initial evidence of criterion-related validity as the situational judgement test is designed to test non-cognitive professional attributes beyond clinical knowledge. Most candidates perceived the situational judgement test as relevant to dentistry, appropriate for their training level, and fair. This initial pilot study suggests that a situational judgement test is an appropriate and innovative method to measure professional attributes (eg empathy and integrity) for selection into foundation training. Further research will explore the long-term predictive validity of the situational judgement test once candidates have entered training.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, Dave; Brunett, Acacia J.; Bucknor, Matthew
GE Hitachi Nuclear Energy (GEH) and Argonne National Laboratory are currently engaged in a joint effort to modernize and develop probabilistic risk assessment (PRA) techniques for advanced non-light water reactors. At a high level, the primary outcome of this project will be the development of next-generation PRA methodologies that will enable risk-informed prioritization of safety- and reliability-focused research and development, while also identifying gaps that may be resolved through additional research. A subset of this effort is the development of a reliability database (RDB) methodology to determine applicable reliability data for inclusion in the quantification of the PRA. The RDB method developed during this project seeks to satisfy the requirements of the Data Analysis element of the ASME/ANS Non-LWR PRA standard. The RDB methodology utilizes a relevancy test to examine reliability data and determine whether it is appropriate to include as part of the reliability database for the PRA. The relevancy test compares three component properties to establish the level of similarity to components examined as part of the PRA: the component function, the component failure modes, and the environment/boundary conditions of the component. The relevancy test is used to gauge the quality of data found in a variety of sources, such as advanced reactor-specific databases, non-advanced reactor nuclear databases, and non-nuclear databases. The RDB also establishes the integration of expert judgment or separate reliability analysis with past reliability data. This paper provides details on the RDB methodology and includes an example application for determining the reliability of the intermediate heat exchanger of a sodium fast reactor. The example explores a variety of reliability data sources and assesses their applicability for the PRA of interest through the use of the relevancy test.
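To make the three-property comparison concrete, the sketch below scores a candidate data source against a PRA component on function, failure modes, and environment/boundary conditions. The scoring weights, the set-overlap similarity, and the inclusion threshold are illustrative assumptions, not values from the GEH/Argonne methodology.

```python
from dataclasses import dataclass

# Hypothetical illustration of a three-property relevancy test; the names,
# weights, and threshold below are assumptions, not values from the actual RDB.

@dataclass
class Component:
    function: str            # e.g. "heat transfer, sodium-to-sodium"
    failure_modes: frozenset # e.g. {"tube rupture", "external leak"}
    environment: frozenset   # e.g. {"liquid sodium", "low pressure"}

def jaccard(a: frozenset, b: frozenset) -> float:
    """Set overlap used as a crude similarity measure."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def relevancy(pra_component: Component, data_source: Component) -> float:
    """Equal-weight score over the three properties named in the abstract:
    function, failure modes, and environment/boundary conditions."""
    return (float(pra_component.function == data_source.function)
            + jaccard(pra_component.failure_modes, data_source.failure_modes)
            + jaccard(pra_component.environment, data_source.environment)) / 3.0

ihx = Component("heat transfer, sodium-to-sodium",
                frozenset({"tube rupture", "external leak"}),
                frozenset({"liquid sodium", "low pressure"}))
candidate = Component("heat transfer, sodium-to-sodium",
                      frozenset({"tube rupture"}),
                      frozenset({"liquid sodium", "high pressure"}))

if relevancy(ihx, candidate) >= 0.6:  # inclusion threshold is illustrative
    print("include candidate data in the reliability database")
```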
Screening Electronic Health Record-Related Patient Safety Reports Using Machine Learning.
Marella, William M; Sparnon, Erin; Finley, Edward
2017-03-01
The objective of this study was to develop a semiautomated approach to screening cases that describe hazards associated with the electronic health record (EHR) from a mandatory, population-based patient safety reporting system. Potentially relevant cases were identified through a query of the Pennsylvania Patient Safety Reporting System. A random sample of cases was manually screened for relevance and divided into training, testing, and validation data sets to develop a machine learning model. This model was used to automate screening of remaining potentially relevant cases. Of the 4 algorithms tested, a naive Bayes kernel performed best, with an area under the receiver operating characteristic curve of 0.927 ± 0.023, accuracy of 0.855 ± 0.033, and F score of 0.877 ± 0.027. The machine learning model and text mining approach described here are useful tools for identifying and analyzing adverse event and near-miss reports. Although reporting systems are beginning to incorporate structured fields on health information technology and the EHR, these methods can identify related events that reporters classify in other ways. These methods can facilitate analysis of legacy safety reports by retrieving health information technology-related and EHR-related events from databases without fields and controlled values focused on this subject and distinguishing them from reports in which the EHR is mentioned only in passing. Machine learning and text mining are useful additions to the patient safety toolkit and can be used to semiautomate screening and analysis of unstructured text in safety reports from frontline staff.
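As a rough stand-in for this pipeline, the sketch below vectorizes report narratives with TF-IDF and trains a naive Bayes classifier with scikit-learn. The paper's best-performing model was a naive Bayes kernel (kernel-density) variant, which scikit-learn does not provide, so MultinomialNB is used here purely for illustration; the report texts and labels are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# Placeholder narratives; a real run would load the manually screened reports.
reports = [
    "order entered in ehr for wrong patient record",
    "medication list in electronic record not updated",
    "ehr downtime delayed lab result review",
    "copy and paste error in electronic note",
    "patient fell while walking to bathroom",
    "wrong site marked before surgery",
    "specimen label missing at collection",
    "delay transferring patient between units",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = EHR-related per manual screening

X_train, X_test, y_train, y_test = train_test_split(
    reports, labels, test_size=0.5, stratify=labels, random_state=0)

vec = TfidfVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(X_train), y_train)

probs = clf.predict_proba(vec.transform(X_test))[:, 1]
print("AUC:", roc_auc_score(y_test, probs))
print("F1 :", f1_score(y_test, probs >= 0.5))
```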
Ehlers, Shawn G; Field, William E; Ess, Daniel R
2017-01-26
Recent interest in rearward visibility for private, construction, and commercial vehicles and documentation of rearward runovers involving bystanders outside the field of vision of the vehicle operator led to an investigation into the need for enhanced methods of rearward visibility for large, off-highway, agricultural equipment. A review of the literature found limited relevant research and minimal data on incidents involving rearward runovers of bystanders and co-workers. This article reviews the findings regarding the methods identified and tested to collect and analyze rearward visibility data, from the operator's perspective, for large self-propelled agricultural equipment, including the four-wheel drive tractors, combines, agricultural sprayers, and skid-steer loaders that are increasingly found on agricultural production sites. The methods identified, largely drawn from research conducted on private and commercial vehicles, were tested to determine their application in identifying rearward blind spots. These methods are described, and the findings from field-testing of specific machines are provided. Recommendations include establishing an appropriate engineering standard regarding rearward visibility of agricultural equipment with limited rearward vision and the use of rearward alarm systems for warning bystanders of rearward movement. Copyright© by the American Society of Agricultural Engineers.
Moretti, Stefano; van Leeuwen, Danitsja; Gmuender, Hans; Bonassi, Stefano; van Delft, Joost; Kleinjans, Jos; Patrone, Fioravante; Merlo, Domenico Franco
2008-01-01
Background In gene expression analysis, statistical tests for differential gene expression provide lists of candidate genes having, individually, a sufficiently low p-value. However, the interpretation of each single p-value within complex systems involving several interacting genes is problematic. In parallel, over the last sixty years, game theory has been applied to political and social problems to assess the power of interacting agents in forcing a decision and, more recently, to represent the relevance of genes in response to certain conditions. Results In this paper we introduce a bootstrap procedure to test the null hypothesis that each gene has the same relevance between two conditions, where the relevance is represented by the Shapley value of a particular coalitional game defined on a microarray data-set. This method, called Comparative Analysis of Shapley value (CASh for short), is applied to data concerning gene expression in children differentially exposed to air pollution. The results provided by CASh are compared with the results from a parametric statistical test for differential gene expression. Both lists of genes provided by CASh and the t-test are informative enough to discriminate exposed subjects on the basis of their gene expression profiles. While many genes are selected in common by CASh and the parametric test, the biological interpretation of the differences between these two selections turns out to be more interesting, suggesting a different interpretation of the main biological pathways in gene expression regulation for exposed individuals. A simulation study suggests that CASh offers more power than the t-test for the detection of differential gene expression variability. Conclusion CASh is successfully applied to gene expression analysis of a data-set where the joint expression behavior of genes may be critical to characterize the expression response to air pollution. We demonstrate a synergistic effect between coalitional games and statistics that resulted in a selection of genes with a potential impact in the regulation of complex pathways. PMID:18764936
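The sketch below re-creates the flavour of CASh on toy data: Shapley values of a coalitional game built from binarized expression profiles are compared between two conditions, with a bootstrap over subjects giving per-gene confidence intervals. The characteristic function used here (the fraction of samples in which at least one gene of the coalition is over-expressed) is a deliberately simplified stand-in for the authors' microarray game.

```python
from itertools import permutations

import numpy as np

rng = np.random.default_rng(1)
genes = list(range(5))
orders = list(permutations(genes))  # 5! = 120 orders: exact Shapley values

def game(expr):
    """expr: boolean samples x genes matrix (True = over-expressed).
    v(S) = fraction of samples where at least one gene of S is over-expressed
    (a toy characteristic function, not the paper's microarray game)."""
    def value(cols):
        return float(expr[:, cols].any(axis=1).mean()) if cols else 0.0
    return value

def shapley(value):
    phi = np.zeros(len(genes))
    for order in orders:
        cols, v_prev = [], 0.0
        for g in order:
            cols.append(g)
            v = value(cols)
            phi[g] += v - v_prev  # marginal contribution of gene g
            v_prev = v
    return phi / len(orders)

exposed = rng.random((30, 5)) < np.array([0.7, 0.6, 0.2, 0.2, 0.2])
reference = rng.random((30, 5)) < 0.2
obs_diff = shapley(game(exposed)) - shapley(game(reference))

# Bootstrap the difference by resampling subjects within each condition.
boot = np.array([shapley(game(exposed[rng.integers(0, 30, 30)]))
                 - shapley(game(reference[rng.integers(0, 30, 30)]))
                 for _ in range(200)])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
for g in genes:
    flag = "differs" if lo[g] > 0 or hi[g] < 0 else "-"
    print(f"gene {g}: diff={obs_diff[g]:+.3f}  95% CI [{lo[g]:+.3f}, {hi[g]:+.3f}]  {flag}")
```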
Renner, Gerolf; Irblich, Dieter
2016-11-01
Test Reviews in Child Psychology: Test Users Wish to Obtain Practical Information Relevant to their Respective Field of Work. This study investigated to what extent diagnosticians use reviews of psychometric tests for children and adolescents, how they evaluate their quality, and what they expect concerning content. Test users (n = 323) from different areas of work (notably social pediatrics, early intervention, special education, speech and language therapy) rated test reviews as one of the most important sources of information. Readers of test reviews value practically oriented descriptions and evaluations of tests that are relevant to their respective field of work. They expect independent reviews that critically discuss opportunities and limits of the tests under scrutiny. The results show that authors of test reviews should not only have a background in test theory but should also be familiar with the practical application of tests in various settings.
Rasmussen, Kirsten; González, Mar; Kearns, Peter; Sintes, Juan Riego; Rossi, François; Sayre, Phil
2016-02-01
This paper charts the almost ten years of history of OECD's work on nanosafety, during which the programme of the OECD on the Testing and Assessment of Manufactured Nanomaterials covered the testing of eleven nanomaterials for about 59 end-points addressing physical-chemical properties, mammalian and environmental toxicity, environmental fate and material safety. An overview of the materials tested, the test methods applied and the discussions regarding the applicability of the OECD test guidelines, which are recognised methods for regulatory testing of chemicals, are given. The results indicate that many existing OECD test guidelines are suitable for nanomaterials and consequently, hazard data collected using such guidelines will fall under OECD's system of Mutual Acceptance of Data (MAD) which is a legally binding instrument to facilitate the international acceptance of information for the regulatory safety assessment of chemicals. At the same time, some OECD test guidelines and guidance documents need to be adapted to address nanomaterials while new test guidelines and guidance documents may be needed to address endpoints that are more relevant to nanomaterials. This paper presents examples of areas where test guidelines or guidance for nanomaterials are under development. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Extracellular matrix proteins as temporary coating for thin-film neural implants
NASA Astrophysics Data System (ADS)
Ceyssens, Frederik; Deprez, Marjolijn; Turner, Neill; Kil, Dries; van Kuyck, Kris; Welkenhuysen, Marleen; Nuttin, Bart; Badylak, Stephen; Puers, Robert
2017-02-01
Objective. This study investigates the suitability of a thin sheet of extracellular matrix (ECM) proteins as a resorbable coating for temporarily reinforcing fragile or ultra-low stiffness thin-film neural implants to be placed on the brain, i.e. microelectrocorticographic (µECOG) implants. Approach. Thin-film polyimide-based electrode arrays were fabricated using lithographic methods. ECM was harvested from porcine tissue by a decellularization method and coated around the arrays. Mechanical tests and an in vivo experiment on rats were conducted, followed by a histological tissue study combined with a statistical equivalence test (confidence interval approach, 0.05 significance level) to compare the test group with an uncoated control group. Main results. After 3 months, no significant damage was found based on GFAP and NeuN staining of the relevant brain areas. Significance. The study shows that ECM sheets are a suitable temporary coating for thin µECOG neural implants.
Bouffard, Jeffrey A
2007-08-01
Previous hypothetical scenario tests of rational choice theory have presented all participants with the same set of consequences, implicitly assuming that these consequences would be relevant for each individual. Recent research demonstrates that those researcher-presented consequences do not accurately reflect those considered by study participants and that there is individual variation in the relevance of various consequences. Despite this, and despite some theoretical propositions that such differences should exist, little empirical research has explored the possibility of predicting such variation. This study allows participants to develop their own set of relevant consequences for three hypothetical offenses and examines how several demographic and theoretical variables impact those consequences' relevance. Exploratory results suggest individual factors impact the perceived relevance of several cost and benefit types, even among a relatively homogenous sample of college students. Implications for future tests of rational choice theory, as well as policy implications, are discussed.
Relevance of Micro-leakage to Orthodontic Bonding - a Review
M, Karandish
2016-01-01
With the evolution from banding to bonding of orthodontic attachments, orthodontics has witnessed many developments, such as the application of new adhesives, optimized base designs, new bracket materials, curing methods and more efficient primers. Studies typically use morphological, micro-leakage and shear bond tests to evaluate bond efficacy. Among studies that endeavoured to improve the bond strength of brackets, some observed a reduction of micro-leakage at the bracket-adhesive and enamel-adhesive interfaces. Owing to the importance of micro-leakage in orthodontics, this study aimed at reviewing the micro-leakage values directly relevant to enamel decay and debonding of brackets. To reach the best bond strength, researchers have designed different studies to evaluate the effect of variables and prevent possible side effects in clinical situations. Most studies have mainly focused on adhesives, enamel preparation and methods of curing, which are discussed in this review. The literature was reviewed by searching databases using micro-leakage and orthodontic bonding as keywords; the relevant studies were then entered into the database. After reviewing numerous studies conducted in this field, the type of adhesive or curing method was not found to have a determinative role in the value of micro-leakage, although more standardized studies are needed. PMID:28959751
Kandasamy, Ram; Lee, Andrea T; Morgan, Michael M
2017-12-01
The development of new anti-migraine treatments is limited by the difficulty in assessing migraine pain in laboratory animals. Depression of activity is one of the few diagnostic criteria for migraine that can be mimicked in rats. The goal of the present study was to test the hypothesis that depression of home cage wheel running is a reliable and clinically relevant method to assess migraine pain in rats. Adult female rats were implanted with a cannula to inject allyl isothiocyanate (AITC) onto the dura to induce migraine pain, as has been shown before. Rats recovered from implantation surgery for 8 days in cages containing a running wheel. Home cage wheel running was recorded 23 h a day. AITC and the migraine medication sumatriptan were administered in the hour prior to onset of the dark phase. Administration of AITC caused a concentration-dependent decrease in wheel running that lasted 3 h. The duration and magnitude of AITC-induced depression of wheel running was consistent following three repeated injections spaced 48 h apart. Administration of sumatriptan attenuated AITC-induced depression of wheel running when a large dose (1 mg/kg) was administered immediately following AITC administration. Wheel running patterns did not change when sumatriptan was given to naïve rats. These data indicate that home cage wheel running is a sensitive, reliable, and clinically relevant method to assess migraine pain in the rat.
Clinical application of high throughput molecular screening techniques for pharmacogenomics
Wiita, Arun P; Schrijver, Iris
2011-01-01
Genetic analysis is one of the fastest-growing areas of clinical diagnostics. Fortunately, as our knowledge of clinically relevant genetic variants rapidly expands, so does our ability to detect these variants in patient samples. Increasing demand for genetic information may necessitate the use of high throughput diagnostic methods as part of clinically validated testing. Here we provide a general overview of our current and near-future abilities to perform large-scale genetic testing in the clinical laboratory. First we review in detail molecular methods used for high throughput mutation detection, including techniques able to monitor thousands of genetic variants for a single patient or to genotype a single genetic variant for thousands of patients simultaneously. These methods are analyzed in the context of pharmacogenomic testing in the clinical laboratories, with a focus on tests that are currently validated as well as those that hold strong promise for widespread clinical application in the near future. We further discuss the unique economic and clinical challenges posed by pharmacogenomic markers. Our ability to detect genetic variants frequently outstrips our ability to accurately interpret them in a clinical context, carrying implications both for test development and introduction into patient management algorithms. These complexities must be taken into account prior to the introduction of any pharmacogenomic biomarker into routine clinical testing. PMID:23226057
Preusser, Matthias; Berghoff, Anna S.; Manzl, Claudia; Filipits, Martin; Weinhäusel, Andreas; Pulverer, Walter; Dieckmann, Karin; Widhalm, Georg; Wöhrer, Adelheid; Knosp, Engelbert; Marosi, Christine; Hainfellner, Johannes A.
2014-01-01
Testing of the MGMT promoter methylation status in glioblastoma is relevant for clinical decision making and research applications. Two recent and independent phase III therapy trials confirmed a prognostic and predictive value of the MGMT promoter methylation status in elderly glioblastoma patients. Several methods for MGMT promoter methylation testing have been proposed, but seem to be of limited test reliability. Therefore, and also due to feasibility reasons, translation of MGMT methylation testing into routine use has been protracted so far. Pyrosequencing after prior DNA bisulfite modification has emerged as a reliable, accurate, fast and easy-to-use method for MGMT promoter methylation testing in tumor tissues (including formalin-fixed and paraffin-embedded samples). We performed an intra- and inter-laboratory ring trial which demonstrates a high analytical performance of this technique. Thus, pyrosequencing-based assessment of MGMT promoter methylation status in glioblastoma meets the criteria of high analytical test performance and can be recommended for clinical application, provided that strict quality control is performed. Our article summarizes clinical indications, practical instructions and open issues for MGMT promoter methylation testing in glioblastoma using pyrosequencing. PMID:24359605
A Standard Platform for Testing and Comparison of MDAO Architectures
NASA Technical Reports Server (NTRS)
Gray, Justin S.; Moore, Kenneth T.; Hearn, Tristan A.; Naylor, Bret A.
2012-01-01
The Multidisciplinary Design Analysis and Optimization (MDAO) community has developed a multitude of algorithms and techniques, called architectures, for performing optimizations on complex engineering systems which involve coupling between multiple discipline analyses. These architectures seek to efficiently handle optimizations with computationally expensive analyses including multiple disciplines. We propose a new testing procedure that can provide a quantitative and qualitative means of comparison among architectures. The proposed test procedure is implemented within the open source framework, OpenMDAO, and comparative results are presented for five well-known architectures: MDF, IDF, CO, BLISS, and BLISS-2000. We also demonstrate how using open source software development methods can allow the MDAO community to submit new problems and architectures to keep the test suite relevant.
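For readers unfamiliar with the framework, the fragment below shows the general shape of an OpenMDAO problem definition against which such architectures are driven. It uses the present-day OpenMDAO 3.x API, which postdates the paper, and a single-discipline paraboloid as a placeholder for a coupled multidisciplinary test problem such as Sellar.

```python
import openmdao.api as om

prob = om.Problem()
# ExecComp keeps the example to a few lines; a real architecture comparison
# would replace it with coupled discipline groups and swap formulations.
prob.model.add_subsystem(
    'parab',
    om.ExecComp('f = (x - 3.0)**2 + x*y + (y + 4.0)**2 - 3.0'),
    promotes=['*'])

prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'

prob.model.add_design_var('x', lower=-50.0, upper=50.0)
prob.model.add_design_var('y', lower=-50.0, upper=50.0)
prob.model.add_objective('f')

prob.setup()
prob.set_val('x', 3.0)
prob.set_val('y', -4.0)
prob.run_driver()
print(prob.get_val('x'), prob.get_val('y'), prob.get_val('f'))
```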
A Density Perturbation Method to Study the Eigenstructure of Two-Phase Flow Equation Systems
NASA Astrophysics Data System (ADS)
Cortes, J.; Debussche, A.; Toumi, I.
1998-12-01
Many interesting and challenging physical mechanisms are connected with the mathematical notion of eigenstructure. In two-fluid models, complex phasic interactions yield a complex eigenstructure which may raise numerous problems in numerical simulations. In this paper, we develop a perturbation method to examine the eigenvalues and eigenvectors of two-fluid models. This original method, based on the stiffness of the density ratio, provides a convenient tool to study the relevance of pressure-momentum interactions and allows us to obtain precise approximations of the whole flow eigendecomposition at little computational cost. The Roe scheme is successfully implemented and some numerical tests are presented.
NASA Astrophysics Data System (ADS)
Bandi, T.; Shea, H.; Neels, A.
2014-06-01
The performance and aging of MEMS often rely on the stability of the mechanical properties over time and under harsh conditions. An overview is given on methods to investigate small variations of the mechanical properties of structural MEMS materials by functional characterization, high-resolution x-ray diffraction methods (HR-XRD) and environmental testing. The measurement of the dynamical properties of micro-resonators is a powerful method for the investigation of elasticity variations in structures relevant to microtechnology. X-ray diffraction techniques are used to analyze residual strains and deformations with high accuracy and in a non-destructive manner at surfaces and in buried micro-structures. The influence of elevated temperatures and radiation damage on the performance of resonant microstructures with a focus on quartz and single crystal silicon is discussed and illustrated with examples including work done in our laboratories at CSEM and EPFL.
Gao, Yu-Fei; Li, Bi-Qing; Cai, Yu-Dong; Feng, Kai-Yan; Li, Zhan-Dong; Jiang, Yang
2013-01-27
Identification of catalytic residues plays a key role in understanding how enzymes work. Although numerous computational methods have been developed to predict catalytic residues and active sites, the prediction accuracy remains relatively low with high false positives. In this work, we developed a novel predictor based on the Random Forest algorithm (RF) aided by the maximum relevance minimum redundancy (mRMR) method and incremental feature selection (IFS). We incorporated features of physicochemical/biochemical properties, sequence conservation, residual disorder, secondary structure and solvent accessibility to predict active sites of enzymes and achieved an overall accuracy of 0.885687 and MCC of 0.689226 on an independent test dataset. Feature analysis showed that every category of the features except disorder contributed to the identification of active sites. It was also shown via the site-specific feature analysis that the features derived from the active site itself contributed most to the active site determination. Our prediction method may become a useful tool for identifying the active sites and the key features identified by the paper may provide valuable insights into the mechanism of catalysis.
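A schematic version of this feature-selection-plus-classifier loop is sketched below. scikit-learn has no built-in mRMR, so relevance is approximated by mutual information with the label and redundancy by the mean absolute correlation with already-selected features; the dataset, forest size, and cross-validation settings are illustrative, not the paper's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=30, n_informative=6,
                           random_state=0)

# mRMR-style ranking: maximize relevance minus redundancy.
relevance = mutual_info_classif(X, y, random_state=0)
selected, remaining = [], list(range(X.shape[1]))
while remaining:
    def score(j):
        if not selected:
            return relevance[j]
        red = np.mean([abs(np.corrcoef(X[:, j], X[:, k])[0, 1])
                       for k in selected])
        return relevance[j] - red
    best = max(remaining, key=score)
    selected.append(best)
    remaining.remove(best)

# Incremental feature selection: grow the ranked list, keep the size that
# cross-validates best with a Random Forest.
best_k, best_cv = 1, -np.inf
for k in range(1, len(selected) + 1):
    cv = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                         X[:, selected[:k]], y, cv=5).mean()
    if cv > best_cv:
        best_k, best_cv = k, cv
print(f"best subset size {best_k}, CV accuracy {best_cv:.3f}")
```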
Application of omics data in regulatory toxicology: report of an international BfR expert workshop.
Marx-Stoelting, P; Braeuning, A; Buhrke, T; Lampen, A; Niemann, L; Oelgeschlaeger, M; Rieke, S; Schmidt, F; Heise, T; Pfeil, R; Solecki, R
2015-11-01
Advances in omics techniques and molecular toxicology are necessary to provide new perspectives for regulatory toxicology. By the application of modern molecular techniques, more mechanistic information should be gained to support standard toxicity studies and to contribute to a reduction and refinement of animal experiments required for certain regulatory purposes. The relevance and applicability of data obtained by omics methods to regulatory purposes such as grouping of chemicals, mode of action analysis or classification and labelling needs further improvement, defined validation and cautious expert judgment. Based on the results of an international expert workshop organized 2014 by the Federal Institute for Risk Assessment in Berlin, this paper is aimed to provide a critical overview of the regulatory relevance and reliability of omics methods, basic requirements on data quality and validation, as well as regulatory criteria to decide which effects observed by omics methods should be considered adverse or non-adverse. As a way forward, it was concluded that the inclusion of omics data can facilitate a more flexible approach for regulatory risk assessment and may help to reduce or refine animal testing.
Unsupervised feature relevance analysis applied to improve ECG heartbeat clustering.
Rodríguez-Sotelo, J L; Peluffo-Ordoñez, D; Cuesta-Frau, D; Castellanos-Domínguez, G
2012-10-01
The computer-assisted analysis of biomedical records has become an essential tool in clinical settings. However, current devices provide a growing amount of data that often exceeds the processing capacity of normal computers. As this amount of information rises, new demands for more efficient data extracting methods appear. This paper addresses the task of data mining in physiological records using a feature selection scheme. An unsupervised method based on relevance analysis is described. This scheme uses a least-squares optimization of the input feature matrix in a single iteration. The output of the algorithm is a feature weighting vector. The performance of the method was assessed using a heartbeat clustering test on real ECG records. The quantitative cluster validity measures yielded a correctly classified heartbeat rate of 98.69% (specificity), 85.88% (sensitivity) and 95.04% (general clustering performance), which is even higher than the performance achieved by other similar ECG clustering studies. The number of features was reduced on average from 100 to 18, and the temporal cost was 43% lower than in previous ECG clustering schemes. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
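The sketch below illustrates the weight-then-cluster workflow on synthetic data: features are weighted by their loadings on the leading eigenvector of the feature covariance, the top-weighted subset is kept, and heartbeats are clustered on that subset. This eigenvector weighting is a stand-in chosen for brevity, not the authors' exact single-iteration least-squares formulation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))   # placeholder heartbeat feature matrix
X[:250, :5] += 3.0                # 5 informative features, two beat classes

Xs = StandardScaler().fit_transform(X)
cov = np.cov(Xs, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
w = np.abs(eigvecs[:, -1])        # relevance weights from leading eigenvector

keep = np.argsort(w)[::-1][:18]   # reduce 100 -> 18 features, as in the abstract
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xs[:, keep])
print("cluster sizes:", np.bincount(labels))
```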
Hirtz, Christophe; Vialaret, Jérôme; Gabelle, Audrey; Nowak, Nora; Dauvilliers, Yves; Lehmann, Sylvain
2016-01-01
I125 radioimmunoassay (RIA) is currently the standard technique for quantifying cerebrospinal fluid (CSF) orexin-A/hypocretin-1, a biomarker used to diagnose narcolepsy type 1. However, orexin-A RIA is liable to undergo cross-reactions with matrix constituents generating interference, high variability between batches, low precision and accuracy, and requires special radioactivity precautions. Here we developed the first quantitative mass spectrometry assay of orexin-A based on a multiple reaction monitoring (MRM) approach. This method was tested in keeping with the Clinical and Laboratory Standards Institute (CLSI) guidelines and its clinical relevance was confirmed by comparing patients with narcolepsy type 1 versus patients with other neurological conditions. The results obtained using MRM and RIA methods were highly correlated, and Bland–Altman analysis established their interchangeability. However, the MRM values had a wider distribution and were 2.5 time lower than the RIA findings. In conclusion, this method of assay provides a useful alternative to RIA to quantify orexin-A, and may well replace it not only in narcolepsy type 1, but also in the increasing number of pathologies in which the quantification of this analyte is relevant. PMID:27165941
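Bland–Altman analysis, used above to establish interchangeability of the two assays, is simple to reproduce: plot per-sample differences against per-sample means and report the bias with its 95% limits of agreement. The sketch below does this on simulated values (the 2.5-fold offset mimics the reported MRM/RIA ratio); none of the numbers are study data.

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
ria = rng.normal(250, 60, 40)           # placeholder orexin-A by RIA (pg/mL)
mrm = ria / 2.5 + rng.normal(0, 8, 40)  # MRM ~2.5x lower, per the abstract

mean = (ria + mrm) / 2
diff = ria - mrm
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)           # 95% limits of agreement half-width

plt.scatter(mean, diff, s=12)
for y in (bias, bias - loa, bias + loa):
    plt.axhline(y, linestyle="--")
plt.xlabel("mean of RIA and MRM (pg/mL)")
plt.ylabel("RIA - MRM (pg/mL)")
plt.title(f"bias = {bias:.1f}, 95% LoA = [{bias - loa:.1f}, {bias + loa:.1f}]")
plt.show()
```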
Probabilistic peak detection in CE-LIF for STR DNA typing.
Woldegebriel, Michael; van Asten, Arian; Kloosterman, Ate; Vivó-Truyols, Gabriel
2017-07-01
In this work, we present a novel probabilistic peak detection algorithm based on a Bayesian framework for forensic DNA analysis. The proposed method aims at an exhaustive use of raw electropherogram data from a laser-induced fluorescence multi-CE system. As the raw data are informative up to a single data point, the conventional threshold-based approaches discard relevant forensic information early in the data analysis pipeline. Our proposed method assigns a posterior probability reflecting the data point's relevance with respect to peak detection criteria. Peaks of low intensity generated from a truly existing allele can thus constitute evidential value instead of fully discarding them and contemplating a potential allele drop-out. This way of working utilizes the information available within each individual data point and thus avoids making early (binary) decisions on the data analysis that can lead to error propagation. The proposed method was tested and compared to the application of a set threshold as is current practice in forensic STR DNA profiling. The new method was found to yield a significant improvement in the number of alleles identified, regardless of the peak heights and deviation from Gaussian shape. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
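A toy version of the per-data-point idea is sketched below: each point's intensity is scored under a "baseline noise" model and a "peak present" model, and Bayes' rule turns the two likelihoods into a posterior probability of relevance. The Gaussian likelihoods and the 50/50 prior are illustrative assumptions, far simpler than the paper's full Bayesian model of electropherogram peak shape.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
t = np.arange(500)
signal = 40 * np.exp(-0.5 * ((t - 250) / 4.0) ** 2)  # one low Gaussian peak
y = signal + rng.normal(0, 10, t.size)               # noisy electropherogram

sigma_noise = 10.0                # assumed baseline noise level
mu_peak, sigma_peak = 30.0, 20.0  # assumed amplitude model for true peaks
prior_peak = 0.5                  # assumed prior; all three are illustrative

like_noise = norm.pdf(y, 0.0, sigma_noise)
like_peak = norm.pdf(y, mu_peak, sigma_peak)
posterior = (like_peak * prior_peak
             / (like_peak * prior_peak + like_noise * (1 - prior_peak)))

# Low-intensity alleles keep a graded probability instead of being thresholded away.
print("max posterior:", posterior.max(), "at t =", t[posterior.argmax()])
```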
Similarity Metrics for Closed Loop Dynamic Systems
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Yang, Lee C.; Bedrossian, Naz; Hall, Robert A.
2008-01-01
To what extent, and in what ways, can two closed-loop dynamic systems be said to be "similar"? This question arises in a wide range of dynamic systems modeling and control system design applications. For example, bounds on error models are fundamental to controller optimization with modern control design methods. Metrics such as the structured singular value are direct measures of the degree to which properties such as stability or performance are maintained in the presence of specified uncertainties or variations in the plant model. Similarly, controls-related areas such as system identification, model reduction, and experimental model validation employ measures of similarity between multiple realizations of a dynamic system. Each area has its tools and approaches, with each tool more or less suited to one application or the other. Similarity in the context of closed-loop model validation via flight test is subtly different from error measures in the typical controls-oriented application. Whereas similarity in a robust control context relates to plant variation and its attendant effect on stability and performance, in this context similarity metrics are sought that assess the relevance of a dynamic system test for the purpose of validating the stability and performance of a "similar" dynamic system. Similarity in the context of system identification is more relevant than robust control analogies, in that errors between one dynamic system (the test article) and another (the nominal "design" model) are sought for the purpose of bounding the validity of a model for control design and analysis. Yet system identification typically involves open-loop plant models that are independent of the control system (with the exception of limited developments in closed-loop system identification, which is nonetheless focused on obtaining open-loop plant models from closed-loop data). Moreover, the objectives of system identification are not the same as those of a flight test, and hence system identification error metrics are not directly relevant. In applications such as launch vehicles, where the open-loop plant is unstable, it is similarity of the closed-loop system dynamics of a flight test that is relevant.
Adaptive Set-Based Methods for Association Testing.
Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo
2016-02-01
With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test. © 2015 WILEY PERIODICALS, INC.
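A compact version of the ARTP machinery is sketched below: for several candidate truncation points, combine the k smallest per-SNP p-values via the log-product, take the strongest combination, and calibrate it against a permutation null. For brevity the per-truncation statistics are not individually standardized by their own permutation p-values, as the full ARTP prescribes, and the p-values are simulated rather than computed from genotypes.

```python
import numpy as np

rng = np.random.default_rng(0)

def artp_stat(pvals, ks=(1, 5, 10)):
    """Best (smallest) log-product of the k smallest p-values over the
    candidate truncation points. Simplified relative to the full ARTP, which
    standardizes each truncation-point statistic by its own permutation null."""
    p_sorted = np.sort(pvals)
    return min(np.log(p_sorted[:k]).sum() for k in ks)

n_snps = 50
obs_p = rng.uniform(size=n_snps)
obs_p[:3] = [1e-4, 5e-4, 2e-3]  # a few truly associated SNPs in the set

obs_stat = artp_stat(obs_p)
# Null: in a real GWAS, shuffle phenotypes and recompute per-SNP tests;
# here uniform p-values stand in for the permuted null.
null_stats = np.array([artp_stat(rng.uniform(size=n_snps)) for _ in range(5000)])
p_set = (1 + (null_stats <= obs_stat).sum()) / (1 + null_stats.size)
print(f"set-level ARTP p-value: {p_set:.4f}")
```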
76 FR 12356 - A Method To Assess Climate-Relevant Decisions: Application in the Chesapeake Bay
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-07
... ENVIRONMENTAL PROTECTION AGENCY [FRL-9276-3] A Method To Assess Climate-Relevant Decisions: Application in the Chesapeake Bay AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of..., ``A Method to Assess Climate-Relevant Decisions: Application in the Chesapeake Bay'' (EPA/600/R-10...
NASA Astrophysics Data System (ADS)
Sutrisno, Dardiri, Ahmad; Sugandi, R. Machmud
2017-09-01
This study aimed to address the procedure, effectiveness, and problems in the implementation of a learning model for the Building Engineering Apprenticeship Training Programme. This study was carried out through survey method and experiment. The data were collected using a questionnaire, a test, and assessment sheets. The collected data were examined through description, t-test, and covariance analysis. The results of the study showed that (1) the model's procedure covered a preparation course, readiness assessment, assignment distribution, handing over students to apprenticeship instructors, task completion, assisting, field assessment, report writing, and follow-up examination; (2) the Learning Community model significantly improved students' active learning, but not their hard skills and soft skills; and (3) the problems emerging in the implementation of the model were (a) students' difficulties in finding apprenticeship places and qualified instructors and in asking for relevant tasks, (b) teachers' difficulties in determining relevant tasks and monitoring students, and (c) apprenticeship instructors' difficulties in assigning, monitoring, and assessing students.
Akpheokhai, Leonard I; Oribhabor, Blessing J
2016-01-01
The interaction of man with the ecosystem is a major factor causing environmental pollution and its attendant consequences, such as climate change, in our world today. Patents relating to the relevance of nematodes in soil quality management and their significance as biomarkers in aquatic substrates were reviewed. Nematodes provide a rapid, easy and inexpensive method for testing the toxicity of substances, for example in aquatic substrates. This review paper examines and discusses the issue of soil pollution, the functions of nematodes in soil and aquatic substrates, and their role as bio-indicators in soil health management in terrestrial ecology. The information used was drawn from secondary sources from previous research. It is abundantly clear that the population dynamics of plant-parasitic or free-living nematodes have useful potential as biomonitors for soil health and other forms of environmental contamination through agricultural activities, industrial pollution and oil spillage, and that the analysis of nematode community structure could complement information obtained from conventional soil testing approaches.
Exploiting the systematic review protocol for classification of medical abstracts.
Frunza, Oana; Inkpen, Diana; Matwin, Stan; Klement, William; O'Blenis, Peter
2011-01-01
To determine whether the automatic classification of documents can be useful in systematic reviews on medical topics, and specifically if the performance of the automatic classification can be enhanced by using the particular protocol of questions employed by the human reviewers to create multiple classifiers. The test collection is the data used in large-scale systematic review on the topic of the dissemination strategy of health care services for elderly people. From a group of 47,274 abstracts marked by human reviewers to be included in or excluded from further screening, we randomly selected 20,000 as a training set, with the remaining 27,274 becoming a separate test set. As a machine learning algorithm we used complement naïve Bayes. We tested both a global classification method, where a single classifier is trained on instances of abstracts and their classification (i.e., included or excluded), and a novel per-question classification method that trains multiple classifiers for each abstract, exploiting the specific protocol (questions) of the systematic review. For the per-question method we tested four ways of combining the results of the classifiers trained for the individual questions. As evaluation measures, we calculated precision and recall for several settings of the two methods. It is most important not to exclude any relevant documents (i.e., to attain high recall for the class of interest) but also desirable to exclude most of the non-relevant documents (i.e., to attain high precision on the class of interest) in order to reduce human workload. For the global method, the highest recall was 67.8% and the highest precision was 37.9%. For the per-question method, the highest recall was 99.2%, and the highest precision was 63%. The human-machine workflow proposed in this paper achieved a recall value of 99.6%, and a precision value of 17.8%. The per-question method that combines classifiers following the specific protocol of the review leads to better results than the global method in terms of recall. Because neither method is efficient enough to classify abstracts reliably by itself, the technology should be applied in a semi-automatic way, with a human expert still involved. When the workflow includes one human expert and the trained automatic classifier, recall improves to an acceptable level, showing that automatic classification techniques can reduce the human workload in the process of building a systematic review. Copyright © 2010 Elsevier B.V. All rights reserved.
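The sketch below contrasts the two set-ups on placeholder data: a single global classifier trained on the overall include/exclude label, versus one complement naive Bayes classifier per protocol question whose votes are OR-combined so that an abstract is retained if any question flags it. The OR rule is one high-recall choice among the four combination schemes the paper tested; texts, labels, and question names are placeholders.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import ComplementNB

abstracts = [
    "home visit program for elderly patients reduced readmissions",
    "randomized trial of statin dosing in adults",
]                                          # placeholder abstract texts
include = np.array([1, 0])                 # overall include/exclude decision
question_labels = {                        # hypothetical per-question annotations
    "q1_population_elderly": np.array([1, 0]),
    "q2_dissemination_strategy": np.array([1, 0]),
}

vec = TfidfVectorizer()
X = vec.fit_transform(abstracts)

# Global method: one classifier on the final decision.
global_clf = ComplementNB().fit(X, include)

# Per-question method: one classifier per protocol question, OR-combined.
per_q = {q: ComplementNB().fit(X, y) for q, y in question_labels.items()}
votes = np.column_stack([clf.predict(X) for clf in per_q.values()])
per_question_decision = votes.any(axis=1).astype(int)

print("global:", global_clf.predict(X), "per-question:", per_question_decision)
```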
Health condition identification of multi-stage planetary gearboxes using a mRVM-based method
NASA Astrophysics Data System (ADS)
Lei, Yaguo; Liu, Zongyao; Wu, Xionghui; Li, Naipeng; Chen, Wu; Lin, Jing
2015-08-01
Multi-stage planetary gearboxes are widely applied in aerospace, automotive and heavy industries. Their key components, such as gears and bearings, can easily suffer damage due to tough working environments. Health condition identification of planetary gearboxes aims to prevent accidents and save costs. This paper proposes a method based on a multiclass relevance vector machine (mRVM) to identify the health condition of multi-stage planetary gearboxes. In this method, an mRVM algorithm is adopted as the classifier, and two features, i.e. accumulative amplitudes of carrier orders (AACO) and energy ratio based on difference spectra (ERDS), are used as the input of the classifier to classify different health conditions of multi-stage planetary gearboxes. To test the proposed method, seven health conditions of a two-stage planetary gearbox are considered and vibration data are acquired from the planetary gearbox under different motor speeds and loading conditions. The results of three tests based on different data show that the proposed method obtains improved identification performance and robustness compared with the existing method.
Microbiology Education in Nursing Practice.
Durrant, Robert J; Doig, Alexa K; Buxton, Rebecca L; Fenn, JoAnn P
2017-01-01
Nurses must have sufficient education and training in microbiology to perform many roles within clinical nursing practice (e.g., administering antibiotics, collecting specimens, preparing specimens for transport and delivery, educating patients and families, communicating results to the healthcare team, and developing care plans based on results of microbiology studies and patient immunological status). It is unclear whether the current microbiology courses required of nursing students in the United States focus on the topics that are most relevant to nursing practice. To gauge the relevance of current microbiology education to nursing practice, we created a confidential, web-based survey that asked nurses about their past microbiology education, the types of microbiology specimens they collect, their duties that require knowledge of microbiology, and how frequently they encounter infectious diseases in practice. We used the survey responses to develop data-driven recommendations for educators who teach microbiology to pre-nursing and nursing students. Two hundred ninety-six Registered Nurses (RNs) completed the survey. The topics they deemed most relevant to current practice were infection control, hospital-acquired infections, disease transmission, and collection and handling of patient specimens. Topics deemed least relevant were the Gram stain procedure and microscope use. In addition, RNs expressed little interest in molecular testing methods. This may reflect a gap in their understanding of the uses of these tests, which could be bridged in a microbiology course. We now have data in support of anecdotal evidence that nurses are most engaged when learning about microbiology topics that have the greatest impact on patient care. Information from this survey will be used to shift the focus of microbiology courses at our university to topics more relevant to nursing practice. Further, these findings may also support an effort to evolve national recommendations for microbiology education in pre-nursing and nursing curricula.
THE DYNAMIC LEAP AND BALANCE TEST (DLBT): A TEST-RETEST RELIABILITY STUDY
Newman, Thomas M.; Smith, Brent I.; John Miller, Sayers
2017-01-01
Background: There is a need for new clinical assessment tools to test dynamic balance during typical functional movements. Common methods for assessing dynamic balance, such as the Star Excursion Balance Test, which requires controlled movement of body segments over an unchanged base of support, may not adequately measure typical functional movements that involve controlled movement of body segments along with a change in base of support. Purpose/Hypothesis: The purpose of this study was to determine the reliability of the Dynamic Leap and Balance Test (DLBT) by assessing its test-retest reliability. It was hypothesized that there would be no statistically significant differences between testing days in the time taken to complete the test. Study Design: Reliability study. Methods: Thirty healthy college-aged individuals participated in this study. Participants performed a series of leaps in a prescribed sequence, unique to the DLBT. The time required by the participants to complete the 20-leap task was the dependent variable. Subjects leaped back and forth from peripheral to central targets, alternating weight bearing from one leg to the other. Participants landed on the central target with the tested limb and were required to stabilize for two seconds before leaping to the next target. Stability was based upon qualitative measures similar to the Balance Error Scoring System. Each assessment comprised three trials and was performed on two days separated by at least six days. Results: Two-way mixed ANOVA was used to analyze the differences in time to complete the sequence between the three-trial averages of the two testing sessions. The intraclass correlation coefficient (ICC(3,1)) was used to establish between-session test-retest reliability of the trial averages. Significance was set a priori at p ≤ 0.05. No significant differences (p > 0.05) were detected between the two testing sessions. The ICC was 0.93, with a 95% confidence interval from 0.84 to 0.96. Conclusion: This test is a cost-effective, easy to administer and clinically relevant novel measure for assessing dynamic balance that has excellent test-retest reliability. Clinical Relevance: As a new measure of dynamic balance, the DLBT has the potential to be a cost-effective, challenging and functional tool for clinicians. Level of Evidence: 2b. PMID:28900556
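The ICC(3,1) computation itself is a one-liner with the pingouin package, as sketched below on simulated day-1/day-2 trial averages; the subject-level numbers are invented, chosen only to land near the reported reliability.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n = 30
truth = rng.normal(45, 6, n)          # stable per-subject completion time (s)
day1 = truth + rng.normal(0, 1.5, n)  # three-trial average, session 1
day2 = truth + rng.normal(0, 1.5, n)  # three-trial average, session 2

df = pd.DataFrame({
    "subject": np.tile(np.arange(n), 2),
    "session": np.repeat(["day1", "day2"], n),
    "time_s": np.concatenate([day1, day2]),
})

icc = pg.intraclass_corr(data=df, targets="subject", raters="session",
                         ratings="time_s")
# ICC3 corresponds to the two-way mixed, single-measure ICC(3,1).
print(icc.set_index("Type").loc["ICC3", ["ICC", "CI95%"]])
```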
First experiences with an accelerated CMV antigenemia test: CMV Brite Turbo assay.
Visser, C E; van Zeijl, C J; de Klerk, E P; Schillizi, B M; Beersma, M F; Kroes, A C
2000-06-01
Cytomegalovirus disease is still a major problem in immunocompromised patients, such as bone marrow or kidney transplantation patients. The detection of viral antigen in leukocytes (antigenemia) has proven to be a clinically relevant marker of CMV activity and has found widespread application. Because most existing assays are rather time-consuming and laborious, an accelerated version (Brite Turbo) of an existing method (Brite) has been developed. The major modification is the direct lysis of erythrocytes instead of separation by sedimentation. In this study the Brite Turbo method was compared with the conventional Brite method to detect CMV antigen pp65 in peripheral blood leukocytes of 107 consecutive immunocompromised patients. Both tests produced similar results. Discrepancies were limited to the lowest positive range, and sensitivity and specificity were comparable for both tests. Two major advantages of the Brite Turbo method could be observed in comparison to the original method: assay time was reduced by more than 50% and only 2 ml of blood was required. An additional advantage was the higher number of positive nuclei in the Brite Turbo method, attributable to the increased number of granulocytes in the assay. Early detection of CMV infection or reactivation has become faster and easier with this modified assay.
[Determination of cost-effective strategies in colorectal cancer screening].
Dervaux, B; Eeckhoudt, L; Lebrun, T; Sailly, J C
1992-01-01
The objective of this article is to apply particular methodologies in order to determine which strategies are cost-effective in the mass screening of colorectal cancer after a positive Hemoccult test. The first approach consists in proposing a method that enables all admissible diagnostic strategies to be determined. The second enables a minimal cost function to be estimated using an adaptation of Data Envelopment Analysis. This method proves particularly successful in cost-efficiency analysis when the performance indicators are numerous and hard to aggregate. The results show that there are two cost-effective strategies after a positive Hemoccult test, colonoscopy and sigmoidoscopy; they call into question the relevance of the double-contrast barium enema in the diagnosis of colorectal lesions.
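Data Envelopment Analysis reduces, in its classic input-oriented CCR form, to one small linear program per strategy; the sketch below solves it with scipy for invented cost/effectiveness figures. It illustrates the generic DEA frontier calculation, not the authors' specific adaptation for estimating a minimal cost function.

```python
import numpy as np
from scipy.optimize import linprog

# Invented placeholder data: one input (cost per case) and one output
# (lesions detected) for four hypothetical diagnostic strategies.
X = np.array([[120.0], [150.0], [90.0], [200.0]])  # inputs, n strategies x m
Y = np.array([[10.0], [11.0], [8.0], [12.0]])      # outputs, n strategies x s
n = X.shape[0]

def ccr_efficiency(j0):
    """Input-oriented CCR envelopment LP: minimize theta subject to
    X' lambda <= theta * x_j0 and Y' lambda >= y_j0, lambda >= 0."""
    c = np.zeros(n + 1)
    c[0] = 1.0                                     # variables: [theta, lambdas]
    A_in = np.hstack([-X[j0].reshape(-1, 1), X.T])  # inputs scaled by theta
    b_in = np.zeros(X.shape[1])
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    b_out = -Y[j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

for j in range(n):
    print(f"strategy {j}: efficiency = {ccr_efficiency(j):.3f}")
```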
Kournetas, N; Spintzyk, S; Schweizer, E; Sawada, T; Said, F; Schmid, P; Geis-Gerstorfer, J; Eliades, G; Rupp, F
2017-08-01
Comparability of topographical data of implant surfaces in the literature is low and their clinical relevance often equivocal. The aim of this study was to investigate the ability of scanning electron microscopy and optical interferometry to assess statistically similar 3-dimensional roughness parameter results and to evaluate these data based on predefined criteria regarded as relevant for a favorable biological response. Four different commercial dental screw-type implants (NanoTite Certain Prevail, TiUnite Brånemark Mk III, XiVE S Plus and SLA Standard Plus) were analyzed by stereo scanning electron microscopy and white light interferometry. Surface height, spatial and hybrid roughness parameters (Sa, Sz, Ssk, Sku, Sal, Str, Sdr) were assessed from raw and filtered data (Gaussian 50μm and 5μm cut-off filters), respectively. Data were statistically compared by one-way ANOVA and Tukey-Kramer post-hoc test. For a clinically relevant interpretation, a categorizing evaluation approach was used based on predefined threshold criteria for each roughness parameter. The two methods exhibited predominantly statistical differences. Dependent on roughness parameters and filter settings, both methods showed variations in rankings of the implant surfaces and differed in their ability to discriminate the different topographies. Overall, the analyses revealed scale-dependent roughness data. Compared to the purely statistical approach, the categorizing evaluation resulted in many more similarities between the two methods. This study suggests reconsidering current approaches to the topographical evaluation of implant surfaces and seeking more appropriate experimental settings. Furthermore, the specific role of different roughness parameters for the bioresponse has to be studied in detail in order to better define clinically relevant, scale-dependent and parameter-specific thresholds and ranges. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
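For reference, the height parameters named above have compact ISO 25178-style definitions; the sketch below computes Sa, Sq, Ssk, Sku and Sz from a height map. The surface is random placeholder data, and the Gaussian cut-off filtering used in the study (50 μm and 5 μm) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(0.0, 0.5, (512, 512))  # heights in micrometres (placeholder map)
z -= z.mean()                          # reference-plane removal

sq = np.sqrt(np.mean(z**2))            # Sq: root-mean-square height
sa = np.mean(np.abs(z))                # Sa: arithmetic mean height
ssk = np.mean(z**3) / sq**3            # Ssk: skewness of the height distribution
sku = np.mean(z**4) / sq**4            # Sku: kurtosis of the height distribution
sz = z.max() - z.min()                 # Sz: maximum peak-to-pit height

print(f"Sa={sa:.3f}  Sq={sq:.3f}  Ssk={ssk:.3f}  Sku={sku:.3f}  Sz={sz:.3f} (um)")
```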
Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.
Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit
2017-06-01
We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images, and benign/malignant classification of clusters of microcalcifications (MCs) in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-ray, an area under the curve of 0.876 was obtained for enlarged mediastinum identification, compared to 0.855 using classical BoVW (p-value 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (p-value 0.03). For liver lesion classification, improvements of 6% in sensitivity and 2% in specificity were obtained (p-value 0.001). We demonstrated that classification based on an informative selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
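The dictionary-pruning step can be illustrated in a few lines: score each visual word's histogram counts by mutual information with the class label and keep the top-ranked words as the task-driven dictionary. The histograms below are simulated stand-ins for quantized image descriptors; the dictionary sizes are arbitrary.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n_images, n_words = 400, 1000
H = rng.poisson(2.0, (n_images, n_words)).astype(float)  # BoVW count histograms
y = rng.integers(0, 2, n_images)                         # e.g. pathology yes/no
# Make the first 20 words informative for the positive class.
H[y == 1, :20] += rng.poisson(3.0, (int((y == 1).sum()), 20))

mi = mutual_info_classif(H, y, discrete_features=True, random_state=0)
task_dictionary = np.argsort(mi)[::-1][:100]             # keep 100 relevant words
print("top words:", task_dictionary[:10])
```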
Anaerobic Biodegradation of Detergent Surfactants
Merrettig-Bruns, Ute; Jelen, Erich
2009-01-01
Detergent surfactants can be found in wastewater in relevant concentrations. Most of them are known as ready degradable under aerobic conditions, as required by European legislation. Far fewer surfactants have been tested so far for biodegradability under anaerobic conditions. The natural environment is predominantly aerobic, but there are some environmental compartments such as river sediments, sub-surface soil layer and anaerobic sludge digesters of wastewater treatment plants which have strictly anaerobic conditions. This review gives an overview on anaerobic biodegradation processes, the methods for testing anaerobic biodegradability, and the anaerobic biodegradability of different detergent surfactant types (anionic, nonionic, cationic, amphoteric surfactants).
Analysis of small crack behavior for airframe applications
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Chan, K. S.; Hudak, S. J., Jr.; Davidson, D. L.
1994-01-01
The small fatigue crack problem is critically reviewed from the perspective of airframe applications. Different types of small cracks (microstructural, mechanical, and chemical) are carefully defined and relevant mechanisms identified. Appropriate analysis techniques, including both rigorous scientific and practical engineering treatments, are briefly described. Important materials data issues are addressed, including increased scatter in small crack data and recommended small crack test methods. Key problems requiring further study are highlighted.
Research and Development of Energetic Ionic Liquids
2012-03-01
The USAF AF-M315E propellant uses ionic liquids to yield low vapor toxicity; Sweden/ECAPS LMP-103S uses an ADN-based formulation. The report covers hydrazine-replacement monopropellant objectives, relevant monopropellant properties, the AF-M1028A monopropellant composition and physical properties, thruster tests of AF-M1028A, ionic liquids as explosives, predictive toxicology, and the expected payoff of predictive methods. AFRL continues efforts in energetic ionic liquids.
Waste Characterization Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vigil-Holterman, Luciana R.; Naranjo, Felicia Danielle
2016-02-02
This report discusses ways to classify waste as outlined by LANL. Waste Generators must make a waste determination and characterize regulated waste by appropriate analytical testing or use of acceptable knowledge (AK). Use of AK for characterization requires several source documents. Waste characterization documentation must be accurate, sufficient, and current (i.e., updated); relevant and traceable to the waste stream’s generation, characterization, and management; and not merely a list of information sources.
Turan, Nefize; Miller, Brandon A; Heider, Robert A; Nadeem, Maheen; Sayeed, Iqbal; Stein, Donald G; Pradilla, Gustavo
2017-11-01
The most important aspect of a preclinical study seeking to develop a novel therapy for neurological diseases is whether the therapy produces any clinically relevant functional recovery. For this purpose, neurobehavioral tests are commonly used to evaluate the neuroprotective efficacy of treatments in a wide array of cerebrovascular diseases and neurotrauma. Their use, however, has been limited in experimental subarachnoid hemorrhage studies. After several randomized, double-blinded, controlled clinical trials repeatedly failed to produce a benefit in functional outcome despite some improvement in angiographic vasospasm, more rigorous methods of neurobehavioral testing became critical to provide a more comprehensive evaluation of the functional efficacy of proposed treatments. While several subarachnoid hemorrhage studies have incorporated an array of neurobehavioral assays, a standardized methodology has not been agreed upon. Here, we review neurobehavioral tests for rodents and their potential application to subarachnoid hemorrhage studies. Developing a standardized neurobehavioral testing regimen in rodent studies of subarachnoid hemorrhage would allow for better comparison of results between laboratories and a better prediction of what interventions would produce functional benefits in humans.
Final Report, January 1991 - July 1992
NASA Astrophysics Data System (ADS)
Ferrara, Jon
1992-07-01
This report covers final schedules, expenses and billings, monthly reports, testing, and deliveries for this contract. The goal of the detector development program for the Solar and Heliospheric Observatory (SOHO) EUV Imaging Telescope (EIT) is an extreme ultraviolet (EUV) charge-coupled device (CCD) camera. As part of the CCD screening effort, the quantum efficiency (QE) of a prototype CCD was measured in the NRL EUV laboratory over the wavelength range of 256 to 735 Angstroms. A simplified model has been applied to these QE measurements to illustrate the relevant physical processes that determine the performance of the detector. The charge transfer efficiency (CTE) characteristics of the Tektronix 1024 x 1024 CCD being developed for STIS/SOHO space imaging applications have been characterized at different signal levels, operating conditions, and temperatures using a variety of test methods. A number of CCDs have been manufactured using processing techniques developed to improve CTE, and test results on these devices will be used in determining the final chip design. In this paper, we discuss the CTE test methods used and present the results and conclusions of these tests.
Alépée, N; Grandidier, M H; Cotovio, J
2014-03-01
The EpiSkin™ skin corrosion test method was formally validated and adopted within the context of OECD TG 431 for identifying corrosive and non-corrosive chemicals. The EU Classification, Labelling and Packaging Regulation (EU CLP) system requires the sub-categorisation of corrosive chemicals into the three UN GHS optional subcategories 1A, 1B and 1C. The present study was undertaken to investigate the usefulness of the validated EpiSkin™ test method for identifying skin corrosive UN GHS Categories 1A, 1B and 1C using the original and validated prediction model and adapted controls for direct MTT reduction. In total, 85 chemicals selected by the OECD expert group on skin corrosion were tested in three independent runs. The results obtained were highly reproducible both within (>80%) and between (>78%) laboratories when compared with historical data. Moreover, the results showed that the EpiSkin™ test method is highly sensitive (99%) and specific (80%) in discriminating corrosive from non-corrosive chemicals, and allows reliable and relevant identification of the different skin corrosive UN GHS subcategories, with high accuracies obtained for both UN GHS Category 1A (83%) and Category 1B/1C (76%) chemicals. The overall accuracy of the test method in subcategorising corrosive chemicals into three or two UN GHS subcategories ranged from 75% to 79%. Considering these results, the revised OECD Test Guideline 431 permits the use of EpiSkin™ for subcategorising corrosive chemicals into at least two classes (Category 1A and Category 1B/1C). Copyright © 2013. Published by Elsevier Ltd.
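For reference, the performance figures quoted above follow the standard definitions (a reminder of the formulas, not data from the study):

\[
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{specificity} = \frac{TN}{TN + FP}, \qquad
\text{accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
\]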
Use of behavioral avoidance testing in natural resource damage assessment
Lipton, J.; Little, E.E.; Marr, J.C.A.; DeLonay, A.J.; Bengston, David A.; Henshel, Diane S.
1996-01-01
Natural Resource Damage Assessment (NRDA) provisions established under federal and state statutes enable natural resource trustees to recover compensation from responsible parties to restore injured natural resources. Behavioral avoidance testing with fish has been used in NRDAs to determine injuries to natural resources and to establish restoration thresholds. In this manuscript we evaluate the application of avoidance testing in NRDA. Specifically, we discuss potential “acceptance criteria” for evaluating the applicability and relevance of avoidance testing. These acceptance criteria include: (1) regulatory relevance, (2) reproducibility of testing, (3) ecological significance, (4) quality assurance/quality control, and (5) relevance to restoration. We discuss each of these criteria with respect to avoidance testing. Overall, we conclude that avoidance testing can be an appropriate, defensible, and desirable component of an NRDA.
Analytic processing of distance.
Dopkins, Stephen; Galyer, Darin
2018-01-01
How does a human observer extract from the distance between two frontal points the component corresponding to an axis of a rectangular reference frame? To find out, we had participants classify pairs of small circles, varying on the horizontal and vertical axes of a computer screen, in terms of the horizontal distance between them. A response signal controlled response time. The error rate depended on the irrelevant vertical as well as the relevant horizontal distance between the test circles, with the relevant distance effect being larger than the irrelevant distance effect. The results implied that the horizontal distance between the test circles was imperfectly extracted from the overall distance between them. The results supported an account, derived from the Exemplar-Based Random Walk model (Nosofsky & Palmeri, 1997), under which distance classification is based on the overall distance between the test circles; relevant distance is extracted from overall distance to the extent that the relevant and irrelevant axes are differentially weighted so as to reduce the contribution of irrelevant distance to overall distance. The results did not support an account, derived from General Recognition Theory (Ashby & Maddox, 1994), under which distance classification is based on the relevant distance between the test circles; under this account the irrelevant distance effect arises because a test circle's perceived location on the relevant axis depends on its location on the irrelevant axis, and relevant distance is extracted from overall distance to the extent that this dependency is absent. Copyright © 2017 Elsevier B.V. All rights reserved.
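To make the differential-weighting mechanism concrete, here is a minimal Python sketch; the weight values and function name are hypothetical illustrations of the idea, not fitted parameters from the study.

import math

def weighted_distance(dx, dy, w_relevant, w_irrelevant):
    # Overall distance with differential axis weighting: down-weighting
    # the irrelevant axis approximates extraction of the relevant component.
    return math.sqrt(w_relevant * dx**2 + w_irrelevant * dy**2)

# Equal weights: the irrelevant (vertical) separation inflates the distance.
print(weighted_distance(3.0, 4.0, 0.5, 0.5))  # ~3.54
# Strongly differential weights: result approaches the relevant |dx| = 3.
print(weighted_distance(3.0, 4.0, 0.9, 0.1))  # ~3.11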
Allergy Diagnosis in Children and Adults: Performance of a New Point-of-Care Device, ImmunoCAP Rapid
2009-01-01
Background Allergy is a serious problem affecting approximately 1 in 4 individuals. Symptoms with and without an allergic etiology are often difficult to distinguish without an IgE antibody test. The aim of this study was to investigate the performance of a new point-of-care (POC) test for IgE antibodies to relevant allergens in Europe. Methods IgE antibodies from children and adults with allergies, recruited from allergy clinics in Sweden and Spain, were analyzed for 10 allergens suitable for the age groups, using the new POC test and the ImmunoCAP laboratory test. The IgE antibody level best discriminating between positive and negative results (the cutoff point) was estimated for the different allergens of the POC test, and the efficacy of the POC and ImmunoCAP laboratory tests for diagnosing allergy was compared with clinical diagnosis. Results The estimated cutoffs for the different allergens in the POC test ranged from 0.70 to 2.56 kUA/L. Taking into account all positive allergen results in a given patient, the POC test identified 95% of the patients with allergies. It identified 78% of the allergen-specific physicians' diagnoses and 97% of the negative ones. Most allergens exhibited good performance, identifying about 80% of clinically relevant cases; however, dog, mugwort, and wall pellitory would benefit from improvement. Conclusions The POC test will be a valuable adjunct for identifying or excluding patients with allergies and their most likely offending allergens, in both specialist and general care settings. PMID:23283063
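The abstract does not state how the discriminating cutoff points were estimated; one common approach is to pick the threshold maximizing Youden's J along an ROC curve, sketched below in Python with invented IgE values (the variable names and data are hypothetical).

import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical IgE levels (kUA/L) and clinical diagnoses (1 = allergic).
ige = np.array([0.1, 0.4, 0.9, 1.2, 2.0, 3.5, 0.2, 5.0, 0.7, 4.1])
diagnosis = np.array([0, 0, 1, 1, 1, 1, 0, 1, 0, 1])

fpr, tpr, thresholds = roc_curve(diagnosis, ige)
best = np.argmax(tpr - fpr)  # Youden's J statistic
print(f"estimated cutoff: {thresholds[best]:.2f} kUA/L")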
Elsdon, Dale S; Spanswick, Selina; Zaslawski, Chris; Meier, Peter C
2017-01-01
A protocol for a prospective single-blind parallel four-arm randomized placebo-controlled trial with repeated measures was designed to test the effects of various acupuncture methods compared with sham. Eighty self-selected participants with myofascial pain in the upper trapezius muscle were randomized into four groups. Group 1 received acupuncture to a myofascial trigger point (MTrP) in the upper trapezius. Group 2 received acupuncture to the MTrP in addition to relevant distal points. Group 3 received acupuncture to the relevant distal points only. Group 4 received a sham treatment to both the MTrP and distal points using a deactivated acupuncture laser device. Treatment was applied four times within 2 weeks with outcomes measured throughout the trial and at 2 weeks and 4 weeks posttreatment. Outcome measurements were a 100-mm visual analog pain scale, SF-36, pressure pain threshold, Neck Disability Index, the Upper Extremity Functional Index, lateral flexion in the neck, McGill Pain Questionnaire, Massachusetts General Hospital Acupuncture Sensation Scale, Working Alliance Inventory (short form), and the Credibility Expectance Questionnaire. Two-way analysis of variance (ANOVA) with repeated measures was used to assess the differences between groups. Copyright © 2017 Medical Association of Pharmacopuncture Institute. Published by Elsevier B.V. All rights reserved.
van Halsema, Clare L.; Chihota, Violet N.; Gey van Pittius, Nicolaas C.; Fielding, Katherine L.; Lewis, James J.; van Helden, Paul D.; Churchyard, Gavin J.; Grant, Alison D.
2015-01-01
Background. The clinical relevance of nontuberculous mycobacteria (NTM), detected by liquid more than solid culture in sputum specimens from a South African mining workforce, is uncertain. We aimed to describe the current spectrum and relevance of NTM in this population. Methods. An observational study including individuals with sputum NTM isolates, recruited at workforce tuberculosis screening and routine clinics. Symptom questionnaires were administered at the time of sputum collection and clinical records and chest radiographs reviewed retrospectively. Results. Of 232 individuals included (228 (98%) male, median age 44 years), M. gordonae (60 individuals), M. kansasii (50), and M. avium complex (MAC: 38) were the commonest species. Of 38 MAC isolates, only 2 (5.3%) were from smear-positive sputum specimens and 30/38 grew in liquid but not solid culture. MAC was especially prevalent among symptomatic, HIV-positive individuals. HIV prevalence was high: 57/74 (77%) among those tested. No differences were found in probability of death or medical separation by NTM species. Conclusions. M. gordonae, M. kansasii, and MAC were the commonest NTM among miners with suspected tuberculosis, with most MAC from smear-negative specimens in liquid culture only. HIV testing and identification of key pathogenic NTM in this setting are essential to ensure optimal treatment. PMID:26180817
Assessment of protein set coherence using functional annotations
Chagoyen, Monica; Carazo, Jose M; Pascual-Montano, Alberto
2008-01-01
Background Analysis of large-scale experimental datasets frequently produces one or more sets of proteins that are subsequently mined for functional interpretation and validation. To this end, a number of computational methods have been devised that rely on the analysis of functional annotations. Although current methods provide valuable information (e.g. significantly enriched annotations, pairwise functional similarities), they do not specifically measure the degree of homogeneity of a protein set. Results In this work we present a method that scores the degree of functional homogeneity, or coherence, of a set of proteins on the basis of the global similarity of their functional annotations. The method uses statistical hypothesis testing to assess the significance of the set in the context of the functional space of a reference set. As such, it can be used as a first step in the validation of sets expected to be homogeneous prior to further functional interpretation. Conclusion We evaluate our method by analysing known biologically relevant sets as well as random ones. The known relevant sets comprise macromolecular complexes, cellular components and pathways described for Saccharomyces cerevisiae, which are mostly significantly coherent. Finally, we illustrate the usefulness of our approach for validating 'functional modules' obtained from computational analysis of protein-protein interaction networks. Matlab code and supplementary data are available at PMID:18937846
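A minimal sketch of the general idea in Python, assuming a precomputed pairwise functional-similarity matrix with unit self-similarity; the paper's actual coherence statistic and reference-set construction may differ.

import numpy as np

def coherence_score(sim, members):
    # Mean pairwise functional similarity within the set (diagonal excluded,
    # assuming self-similarity equals 1).
    sub = sim[np.ix_(members, members)]
    n = len(members)
    return (sub.sum() - n) / (n * (n - 1))

def coherence_p_value(sim, members, n_perm=10000, seed=0):
    # Permutation test: is the set more coherent than random sets of the
    # same size drawn from the reference proteome?
    rng = np.random.default_rng(seed)
    observed = coherence_score(sim, members)
    null = [coherence_score(sim, rng.choice(sim.shape[0], len(members), replace=False))
            for _ in range(n_perm)]
    return (1 + sum(s >= observed for s in null)) / (1 + n_perm)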
Dekant, Wolfgang; Melching-Kollmuss, Stephanie; Kalberlah, Fritz
2010-03-01
In Europe, limits for tolerable concentrations of "non-relevant metabolites" of active ingredients (AIs) of plant protection products in drinking water of between 0.1 and 10 µg/L are discussed, depending on the toxicological information available. "Non-relevant metabolites" are degradation products of AIs that do not, or only partially, retain the targeted toxicities of the AIs. For "non-relevant metabolites" without genotoxicity (to be confirmed by in vitro testing), applying the concept of "thresholds of toxicological concern" (TTC) results in a health-based drinking water limit of 4.5 µg/L even for Cramer class III compounds, using the TTC threshold of 90 µg/person/day (divided by 10 and 2). Taking into account the thresholds derived from two reproduction toxicity databases, a drinking water limit of 3.0 µg/L is proposed. Therefore, for "non-relevant metabolites" whose drinking water concentration is below 3.0 µg/L, no toxicity testing is necessary. This work develops a toxicity assessment strategy as a basis for delineating health-based limits for "non-relevant metabolites" in ground and drinking water. Toxicological testing is recommended to investigate whether the metabolites are relevant or not, based on the hazard properties of the parent AIs, as outlined in the SANCO guidance document. Genotoxicity testing of the water metabolites is also clearly recommended. Tiered testing strategies are proposed for "non-relevant metabolites" whose drinking water concentrations will exceed 3.0 µg/L. Conclusions based on structure-activity relationships and the detailed toxicity database on the parent AI should be included. When testing in animals is required for risk assessment, the key elements are studies along OECD testing guidelines with "enhanced" study designs addressing additional endpoints such as reproductive toxicity, plus a developmental screening test, to derive health-based tolerable drinking water limits with a limited number of animals. The testing strategies are similar to those used in the initial hazard assessment of high production volume (HPV) chemicals. For "non-relevant metabolites" that are also formed as products of the biotransformation of the parent AI in mammals, the proposed strategy uses the repeat-dose oral toxicity study combined with a reproductive/developmental screening, as outlined in OECD test guidelines 407 and 422, with integrated determination of hormonal activities. For "non-relevant metabolites" not formed during biotransformation of the AI in mammals, the strategy relies on an "enhanced" 90-day oral study covering additional endpoints regarding hormonal effects and male and female fertility, in combination with a prenatal developmental toxicity study (OECD test guideline 414). The integration of the results of these studies into the risk assessment process applies large minimal margins of exposure (MOEs) to compensate for the shorter duration of the studies. The results of the targeted toxicity testing will provide a scientific basis for setting tolerable drinking water limits for "non-relevant metabolites" based on their toxicology. Based on the recommendations given in the SANCO guidance document and the work described in this and the accompanying paper, a concise re-evaluation of the guidance document is proposed. (c) 2009 Elsevier Inc. All rights reserved.
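For the record, the arithmetic behind the 4.5 µg/L figure, on a plausible reading of "divided by 10 and 2" (an extra assessment factor of 10 and the conventional default consumption of 2 L of drinking water per day, the latter being an assumption here):

\[
\frac{90\ \mu\text{g/person/day}}{10 \times 2\ \text{L/day}} = 4.5\ \mu\text{g/L}
\]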
Schramm, Elisabeth; Kürten, Andreas; Hölzer, Jasper; Mitschke, Stefan; Mühlberger, Fabian; Sklorz, Martin; Wieser, Jochen; Ulrich, Andreas; Pütz, Michael; Schulte-Ladbeck, Rasmus; Schultze, Rainer; Curtius, Joachim; Borrmann, Stephan; Zimmermann, Ralf
2009-06-01
An in-house-built ion trap mass spectrometer combined with a soft ionization source has been set up and tested. As ionization source, an electron beam pumped vacuum UV (VUV) excimer lamp (EBEL) was used for single-photon ionization. It was shown that soft ionization allows the reduction of fragmentation of the target analytes and the suppression of most matrix components. Therefore, the combination of photon ionization with the tandem mass spectrometry (MS/MS) capability of an ion trap yields a powerful tool for molecular ion peak detection and identification of organic trace compounds in complex matrixes. This setup was successfully tested for two different applications. The first one is the detection of security-relevant substances like explosives, narcotics, and chemical warfare agents. One test substance from each of these groups was chosen and detected successfully with single photon ionization ion trap mass spectrometry (SPI-ITMS) MS/MS measurements. Additionally, first tests were performed, demonstrating that this method is not influenced by matrix compounds. The second field of application is the detection of process gases. Here, exhaust gas from coffee roasting was analyzed in real time, and some of its compounds were identified using MS/MS studies.
Sweetlove, Cyril; Chenèble, Jean-Charles; Barthel, Yves; Boualam, Marc; L'Haridon, Jacques; Thouand, Gérald
2016-09-01
Difficulties encountered in estimating the biodegradation of poorly water-soluble substances are often linked to their limited bioavailability to microorganisms. Many original bioavailability improvement methods (BIMs) have been described, but no global approach has been proposed for their standardized comparison. Such an approach would be a valuable tool as part of a wider strategy for evaluating poorly water-soluble substances. The purpose of this study was to define an evaluation strategy based on the assessment of different BIMs adapted to poorly water-soluble substances in ready biodegradability tests. The study was performed with two poorly water-soluble chemicals, the solid anthraquinone and the liquid isodecyl neopentanoate, and five BIMs were compared to the direct addition method (the reference method): (i) ultrasonic dispersion, (ii) adsorption onto silica gel, (iii) dispersion using an emulsifier, (iv) dispersion with silicone oil, and (v) dispersion with emulsifier and silicone oil. A two-phase evaluation strategy for solid and liquid chemicals was developed, involving the selection of the most relevant BIMs for enhancing the biodegradability of the tested substances. A BIM classification ratio (R_BIM) is described, which enables comparison of the different test chemical sample preparation methods used in the various tests. Using this comparison, the BIMs giving rise to the greatest biodegradability were ultrasonic dispersion and dispersion with silicone oil (with or without emulsifier) for the solid chemical, and adsorption onto silica gel and ultrasonic dispersion for the liquid one.
NASA Astrophysics Data System (ADS)
Baldwin, K. A.; Hauge, R.; Dechaine, J. M.; Varrella, G.; Egger, A. E.
2013-12-01
The development and adoption of the Next Generation Science Standards (NGSS) raises a challenge in teacher preparation: few current teacher preparation programs prepare students to teach science the way it is presented in the NGSS, which emphasize systems thinking, interdisciplinary science, and deep engagement in the scientific process. In addition, the NGSS include more geoscience concepts and methods than previous standards, yet this is a topic area in which most college students are traditionally underprepared. Although nationwide programmatic reform is needed, there are a few targets where relatively small, course-level changes can have a large effect. One of these targets is the 'science methods' course for pre-service elementary teachers, a requirement in virtually all teacher preparation programs. Since many elementary schools, both locally and across the country, have adopted a kit-based science curriculum, examining kits is often part of a science methods course. Unfortunately, relying solely on a kit-based curriculum may leave gaps in science content as one prepares teachers to meet the NGSS. Moreover, kits developed at the national level often fall short in connecting geoscientific content to the locally relevant societal issues that engage students. This highlights the need to train pre-service elementary teachers to supplement kit curricula with inquiry-based geoscience investigations that consider relevant societal issues, promote systems thinking, and incorporate connections between earth, life, and physical systems. We are developing a module that teaches geoscience concepts in the context of locally relevant societal issues while modeling effective pedagogy for pre-service elementary teachers. Specifically, we focus on soils, an interdisciplinary topic relevant to multiple geoscience-related societal grand challenges (e.g., water, food) yet difficult to engage students in. Module development is funded through InTeGrate, NSF's STEP Center in the geosciences. The module goals are: 1) pre-service teachers will apply classification methods, testing procedures, and interdisciplinary systems thinking to analyze and evaluate a relevant societal issue in the context of soils; and 2) pre-service teachers will design, develop, and facilitate a standards-based K-8 soils unit that incorporates a relevant broader societal issue, applies authentic geoscientific data, and incorporates geoscientific habits of mind. In addition, pre-service teachers will look toward the NGSS and align activities with content standards, systems thinking, and science and engineering practices. This poster will provide an overview of module development to date as well as a summary of pre-semester survey results indicating pre-service elementary teachers' ideas (beliefs, attitudes, preconceptions, and content knowledge) about teaching soils and making science relevant in a K-8 classroom.
Alvarez-Meza, Andres M.; Orozco-Gutierrez, Alvaro; Castellanos-Dominguez, German
2017-01-01
We introduce Enhanced Kernel-based Relevance Analysis (EKRA), which aims to support the automatic identification of brain activity patterns using electroencephalographic recordings. EKRA is a data-driven strategy that incorporates two kernel functions to take advantage of the available joint information associating neural responses with a given stimulus condition. To this end, a Centered Kernel Alignment functional is adjusted to learn the linear projection that best discriminates the input feature set, optimizing the required free parameters automatically. Our approach is carried out in two scenarios: (i) feature selection, by computing a relevance vector from extracted neural features to facilitate the physiological interpretation of a given brain activity task, and (ii) enhanced feature selection, which performs an additional transformation of relevant features to improve the overall identification accuracy. Accordingly, we provide an alternative feature relevance analysis strategy that improves system performance while favoring data interpretability. For validation purposes, EKRA is tested on two well-known brain activity tasks: motor imagery discrimination and epileptic seizure detection. The obtained results show that the EKRA approach estimates a relevant representation space from the provided supervised information, emphasizing the salient input features. As a result, our proposal outperforms state-of-the-art methods in brain activity discrimination accuracy, with the benefit of enhanced physiological interpretation of the task at hand. PMID:29056897
Weber, Benjamin; Lee, Sau L; Delvadia, Renishkumar; Lionberger, Robert; Li, Bing V; Tsong, Yi; Hochhaus, Guenther
2015-03-01
Equivalence testing of aerodynamic particle size distribution (APSD) through multi-stage cascade impactors (CIs) is important for establishing bioequivalence of orally inhaled drug products. Recent work demonstrated that the median of the modified chi-square ratio statistic (MmCSRS) is a promising metric for APSD equivalence testing of test (T) and reference (R) products, as it can be applied to a reduced number of CI sites that are more relevant for lung deposition. This metric is also less sensitive to the increased variability often observed at low-deposition sites. A method to establish critical values for the MmCSRS is described here. This method considers the variability of the R product by employing a reference variance scaling approach that allows critical values to be defined as a function of the observed variability of the R product. A stepwise CI equivalence test is proposed that integrates the MmCSRS as a method for comparing the relative shapes of CI profiles and incorporates statistical tests for assessing equivalence of single actuation content and impactor sized mass. This stepwise CI equivalence test was applied to 55 published CI profile scenarios, which were classified as equivalent or inequivalent by members of the Product Quality Research Institute working group (PQRI WG). The results of the stepwise CI equivalence test using a 25% difference in MmCSRS as the acceptance criterion matched the PQRI WG classifications best, with the decisions of the two methods agreeing in 75% of the 55 CI profile scenarios.
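As a rough illustration of the ratio construction only (the validated definition of the mCSRS and its critical values is given by the authors elsewhere), one can compute a chi-square-like distance for every test-reference profile pair, scale it by the within-reference distances, and take the median; the function names and distance form below are assumptions, not the published statistic.

import numpy as np
from itertools import combinations, product

def chi2_distance(p, q):
    # Chi-square-like distance between two deposition profiles.
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum((p - q) ** 2 / ((p + q) / 2))

def median_scaled_ratio(test_profiles, ref_profiles):
    # Median over all T-R pairs of the T-R distance scaled by the
    # median R-R distance (a sketch, not the validated MmCSRS).
    rr = np.median([chi2_distance(a, b) for a, b in combinations(ref_profiles, 2)])
    tr = [chi2_distance(t, r) / rr for t, r in product(test_profiles, ref_profiles)]
    return np.median(tr)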
Kuperman, Roman G; Siciliano, Steven D; Römbke, Jörg; Oorts, Koen
2014-01-01
Although it is widely recognized that microorganisms are essential for sustaining soil fertility, structure, nutrient cycling, groundwater purification, and other soil functions, soil microbial toxicity data were excluded from the derivation of Ecological Soil Screening Levels (Eco-SSL) in the United States. Among the reasons for such exclusion were claims that microbial toxicity tests were too difficult to interpret because of the high variability of microbial responses, uncertainty regarding the relevance of the various endpoints, and functional redundancy. Since the release of the first draft of the Eco-SSL Guidance document by the US Environmental Protection Agency in 2003, soil microbial toxicity testing and its use in ecological risk assessments have substantially improved. A wide range of standardized and nonstandardized methods became available for testing chemical toxicity to microbial functions in soil. Regulatory frameworks in the European Union and Australia have successfully incorporated microbial toxicity data into the derivation of soil threshold concentrations for ecological risk assessments. This article provides the 3-part rationale for including soil microbial processes in the development of soil clean-up values (SCVs): 1) presenting a brief overview of relevant test methods for assessing microbial functions in soil, 2) examining data sets for Cu, Ni, Zn, and Mo that incorporated soil microbial toxicity data into regulatory frameworks, and 3) offering recommendations on how to integrate the best available science into the method development for deriving site-specific SCVs that account for bioavailability of metals and metalloids in soil. Although the primary focus of this article is on the development of the approach for deriving SCVs for metals and metalloids in the United States, the recommendations provided in this article may also be applicable in other jurisdictions that aim at developing ecological soil threshold values for protection of microbial processes in contaminated soils. PMID:24376192
Pal, Jayanta Kumar; Ray, Shubhra Sankar; Pal, Sankar K
2017-10-01
MicroRNAs (miRNAs) are important regulators of cell division and are also implicated in cancer development. Not all of the discovered miRNAs are important for cancer detection. In this regard, a fuzzy mutual information (FMI)-based grouping and miRNA selection method (FMIGS) is developed to identify the miRNAs responsible for a particular cancer. First, the miRNAs are ranked and divided into several groups; then the most important group is selected from among the generated groups. Both steps, viz. ranking of the miRNAs and selection of the most relevant group, are performed using FMI. The number of groups is determined automatically by the grouping method. After the selection process, redundant miRNAs are removed from the selected set as per the user's requirements. As part of the investigation, we also propose an FMI-based particle swarm optimization (PSO) method for selecting relevant miRNAs, in which FMI serves as the fitness function for the particles. The effectiveness of FMIGS and the FMI-based PSO is tested on five data sets, and their efficiency in selecting relevant miRNAs is demonstrated. The superior performance of FMIGS over some existing methods is established, and the biological significance of the selected miRNAs is confirmed by the findings of the biological investigation and publicly available pathway analysis tools. The source code related to our investigation is available at http://www.jayanta.droppages.com/FMIGS.html. Copyright © 2017 Elsevier Ltd. All rights reserved.
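For illustration only, a crisp (non-fuzzy) mutual-information ranking step in Python; FMIGS uses fuzzy mutual information plus an automatic grouping stage that this stand-in omits, and the data here are synthetic.

import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))      # rows = samples, columns = miRNAs
y = rng.integers(0, 2, size=60)     # cancer vs. normal labels

mi = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(mi)[::-1]      # miRNAs ordered by relevance
print("top 10 candidate miRNAs:", ranking[:10])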
Vujaklija, Ivan; Roche, Aidan D; Hasenoehrl, Timothy; Sturma, Agnes; Amsuess, Sebastian; Farina, Dario; Aszmann, Oskar C
2017-01-01
Missing an upper limb dramatically impairs daily-life activities. Efforts to overcome the issues arising from this disability have been made in both academia and industry, although their clinical outcome is still limited. Translation of prosthetic research into clinics has been challenging because of the difficulties in meeting the necessary requirements of the market. In this perspective article, we suggest that one relevant factor behind the relatively small clinical impact of myocontrol algorithms for upper limb prostheses is the limitation of commonly used laboratory performance metrics. The laboratory conditions in which the majority of solutions are evaluated fail to sufficiently replicate real-life challenges. We qualitatively support this argument with representative data from seven transradial amputees. Their ability to control a myoelectric prosthesis was tested by measuring the accuracy of offline EMG signal classification, a typical laboratory performance metric, as well as by clinical scores when performing standard tests of daily living. Despite all subjects reaching relatively high classification accuracy offline, their clinical scores varied greatly and were not strongly predicted by classification accuracy. We therefore support the suggestion to use as the evaluation benchmark clinical tests on amputees who are fully fitted with sockets and prostheses highly resembling the systems they would use in daily living. Agreement on this level of testing for systems developed in research laboratories would facilitate clinically relevant progress in this field.
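As an illustration of the offline metric being critiqued, a generic cross-validated EMG classification sketch in Python; the features are synthetic, and LDA is a common classifier choice in this literature rather than necessarily the one the paper used.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 8))     # windowed EMG features, e.g. 8 channels
y = rng.integers(0, 5, size=400)  # 5 hand/wrist motion classes

# Offline classification accuracy: the laboratory metric that, per the
# paper, only weakly predicts clinical scores.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print(f"offline accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")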
Hira-Kazal, R; Shea-Simonds, P; Peacock, J L; Maher, J
2015-01-01
Anti-nuclear antibody (ANA) testing assists in the diagnosis of several immune-mediated disorders. The gold standard method for detecting these antibodies is indirect immunofluorescence testing on human epidermoid laryngeal carcinoma (HEp-2) cells. However, many laboratories test for these antibodies using solid-phase assays such as enzyme-linked immunosorbent assay (ELISA), which allows higher-throughput testing at reduced cost. In this study, we audited the performance of a previously established ELISA assay for ANA screening, making comparison with the gold standard HEp-2 immunofluorescence test. A prospective and unselected sample of 89 consecutive ANA test requests by consultant rheumatologists was evaluated in parallel over a period of 10 months using both tests. The ELISA and HEp-2 screening assays yielded 40 (45%) and 72 (81%) positive test results, respectively, demonstrating a lack of concordance between the test methods. Using standard and clinical samples, it was demonstrated that the ELISA method did not detect several ANA with nucleolar, homogeneous and speckled immunofluorescence patterns. None of these ELISA-negative, HEp-2-positive ANA were reactive with a panel of six extractable nuclear antigens or with double-stranded DNA. Nonetheless, 13 of these samples (15%) originated from patients with recognized ANA-associated disease (n = 7) or Raynaud's phenomenon (n = 6). We conclude that ELISA screening may fail to detect clinically relevant ANA that lack defined specificity for antigen. PMID:25412573
Validation of a new ELISA method for in vitro potency testing of hepatitis A vaccines.
Morgeaux, S; Variot, P; Daas, A; Costanzo, A
2013-01-01
The goal of the project was to standardise a new in vitro method to replace the existing standard method for determining hepatitis A virus antigen content in hepatitis A vaccines (HAV) marketed in Europe. This became necessary due to issues with the previously used method, which required commercial test kits. The selected candidate method, not based on commercial kits, had already been used for many years by an Official Medicines Control Laboratory (OMCL) for routine testing and batch release of HAV. After a pre-qualification phase (Phase 1) that showed the suitability of the commercially available critical ELISA reagents for determining the antigen content of HAV marketed on the European market, an international collaborative study (Phase 2) was carried out to fully validate the method. Eleven laboratories took part in the collaborative study. They performed assays with the candidate standard method and, in parallel, for comparison purposes, with their own in-house validated methods where these were available. The study demonstrated that the new assay provides a more reliable and reproducible method than the existing standard method. A good correlation of the candidate standard method with the in vivo immunogenicity assay in mice was shown previously for both potent and sub-potent (stressed) vaccines. Thus, the new standard method validated during the collaborative study may be implemented readily by manufacturers and OMCLs for routine batch release, but also for in-process control or consistency testing. The new method was approved in October 2012 by Group of Experts 15 of the European Pharmacopoeia (Ph. Eur.) as the standard method for in vitro potency testing of HAV. The relevant texts will be revised accordingly. Critical reagents such as the coating reagent and detection antibodies have been adopted by the Ph. Eur. Commission and are available from the EDQM as Ph. Eur. Biological Reference Reagents (BRRs).
A new method for measuring the psychoacoustical properties of tinnitus
2013-01-01
Background This study investigates the usefulness and effectiveness of a new way of screening for and diagnosing tinnitus. The authors believe that, in order to obtain relevant diagnostic information, select the tinnitus treatment and quantitatively substantiate its effects, measurement of the psychoacoustic parameters of tinnitus should be made an inherent part of tinnitus therapy. Methods For this purpose a multimedia-based sound synthesizer is proposed for tinnitus testing, and the results obtained this way are compared with the outcome of audiometer-based testing using the Wilcoxon test. The method was verified with 14 patients suffering from tinnitus. Results The experiments reveal the capabilities, limitations, advantages and disadvantages of both methods. The synthesizer enables the patient to estimate his or her tinnitus more than twice as fast as the audiometer and makes the information on the perceived character of the tinnitus more accurate. The analysis of the Wilcoxon test results shows statistically significant differences between the two tests. Conclusions Patients using the synthesizer operate the software application themselves and thus become more involved in the testing. Moreover, they do not have to describe their tinnitus verbally, which can be difficult for some of them. As a result, the test outcome is closer to the perceived tinnitus. However, the more complex the description of the perceived tinnitus, the harder it is to determine the sound parameters of the patient's perception, and the more time it takes regardless of the method. Virtual slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1954066324109436 PMID:24354736
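A sketch of the paired comparison in Python with invented pitch-match data; the study's actual measures and pairing scheme may differ.

from scipy import stats

# Hypothetical matched tinnitus pitch estimates (kHz) from the same
# patients under the two procedures.
synthesizer = [4.1, 6.0, 3.2, 8.1, 5.5, 7.2, 2.9, 4.8, 6.5, 3.7, 5.1, 7.8, 4.4, 6.9]
audiometer  = [4.0, 5.0, 3.0, 8.0, 6.0, 6.0, 3.0, 4.0, 6.0, 4.0, 5.0, 8.0, 4.0, 6.0]

stat, p = stats.wilcoxon(synthesizer, audiometer)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3f}")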
The Analysis of Risk Factors in No Thumb Test in Total Knee Arthroplasty
Kim, Jee Hyoung; Ko, Dong Oh; Yoo, Chang Wook; Chun, Tae Hwan; Lee, Jung Soo
2011-01-01
Background We analyzed the risk factors for the no thumb test, one of the knee alignment tests used during total knee arthroplasty. Methods One hundred fifty-six cases of total knee arthroplasty performed by a single operator from October 2009 to April 2010 were analyzed according to preoperative indicators, including body weight, height, degree of varus deformity, and patella subluxation, and surgical indicators, such as pre-osteotomy patella thickness, degree of patella degeneration, the no thumb test evaluated after the medial prepatellar incision and before bone resection (1st test), the no thumb test evaluated with corrective valgus stress (2nd test, J test), and the kind of prosthesis. We comparatively analyzed the indicators affecting the final no thumb test (3rd test). Results There was no relation between age, sex, or body weight and the no thumb test (3rd test). Patellar sulcus angle (p = 0.795), patellar congruence angle (p = 0.276) and preoperative mechanical axis showed no relationship. The 1st no thumb test (p = 0.007) and the 2nd test (p = 0.002) showed a significant relation with the 3rd no thumb test. Among surgical indicators, pre-osteotomy patella thickness (p = 0.275) and degeneration of the patella (p = 0.320) were not relevant, but post-osteotomy patellar thickness (p = 0.002) was relevant to the no thumb test (3rd test). Regarding prostheses, there was no significant association with Nexgen (p = 0.575); however, there were significant correlations between Scorpio (p = 0.011) and Vanguard (p = 0.049) and the no thumb test (3rd test). Notably, Scorpio tended to dislocate the patella, whereas Vanguard tended to stabilize it. Conclusions The no thumb test (3rd test) correlated positively with the 1st test, the 2nd test, and post-osteotomy patella thickness. Therefore, more patella osteotomy and a prosthesis with high affinity to patellofemoral alignment would be required for correct patella alignment. PMID:22162789
NASA Astrophysics Data System (ADS)
Bravo-Imaz, Inaki; Davari Ardakani, Hossein; Liu, Zongchang; García-Arribas, Alfredo; Arnaiz, Aitor; Lee, Jay
2017-09-01
This paper focuses on analyzing the motor current signature for fault diagnosis of gearboxes operating under transient speed regimes. Two different strategies for analyzing the motor current signature are evaluated, extensively tested and compared in order to implement a condition monitoring system for gearboxes in industrial machinery. A specially designed, thoroughly monitored test bench is used to fully characterize the experiments, in which gears in different health states are tested. The measured signals are analyzed using discrete wavelet decomposition at different decomposition levels with a range of mother wavelets. In addition, a dual-level time synchronous averaging analysis is performed on the same signals to compare the performance of the two methods. From both analyses, the relevant features of the signals are extracted and cataloged using a self-organizing map, which allows easy detection and classification of the diverse health states of the gears. The results demonstrate the effectiveness of both methods for diagnosing gearbox faults, with slightly better performance observed for the dual-level time synchronous averaging method. Based on the obtained results, the proposed methods can be used as effective and reliable condition monitoring procedures for gearboxes using only the motor current signature.
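A minimal Python sketch of the wavelet feature-extraction step, assuming the PyWavelets package; the mother wavelet, decomposition depth, and per-band energy features are illustrative choices (the study compared several), and the resulting vectors would then feed the self-organizing map.

import numpy as np
import pywt  # PyWavelets

def wavelet_features(current_signal, wavelet="db4", level=5):
    # Discrete wavelet decomposition of a motor current signal, reduced
    # to a simple per-band energy feature vector.
    coeffs = pywt.wavedec(current_signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Example: 1 s of a 50 Hz supply current with a weak synthetic sideband.
t = np.linspace(0, 1, 10000, endpoint=False)
signal = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 35 * t)
print(wavelet_features(signal))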
NASA Astrophysics Data System (ADS)
Straß, B.; Conrad, C.; Wolter, B.
2017-03-01
Composite materials and material compounds are of increasing importance because of the steadily rising relevance of resource-saving lightweight construction. Quality assurance with appropriate Nondestructive Testing (NDT) methods is a key aspect of reliable and efficient production. Quality changes have to be detected within the manufacturing flow in order to take adequate corrective actions. For materials and compounds, the classical NDT methods for defectoscopy, like X-ray and ultrasound (US), are still predominant. Meanwhile, however, fast contactless NDT methods, like air-borne ultrasound, dynamic thermography and special eddy-current techniques, are available to detect cracks, voids, pores and delaminations, and also to characterize fiber content, distribution and alignment. In metal-matrix composites, US back-scattering can be used for this purpose, and US run-time measurements allow the detection of thermal stresses at the metal-matrix interface. Another important area is the necessity for NDT in joining: to achieve optimum material utilization and product safety as well as the best possible production efficiency, NDT methods are needed for in-line inspection of the joint quality during joining or immediately afterwards. For this purpose, the EMAT (Electromagnetic Acoustic Transducer) technique or Acoustic Emission testing can be used.
Meta-Analysis of Inquiry-Based Instruction Research
NASA Astrophysics Data System (ADS)
Hasanah, N.; Prasetyo, A. P. B.; Rudyatmi, E.
2017-04-01
Inquiry-based instruction in biology has been the focus of educational research conducted by Unnes biology department students in collaboration with their university supervisors. This study aimed to critically describe the methodological aspects and inquiry teaching methods of four selected student research reports grounded in inquiry, drawn from the Unnes biology department database of 2014, and to analyse their results claims. Four of 16 experimental quantitative studies were selected as research objects by purposive sampling. Data collected through a documentation study were qualitatively analysed regarding the methods used, the quality of the inquiry syntax, and the claims made from the findings. Findings showed that the student research still lacked relevant aspects of research methodology, namely appropriate sampling procedures, validity tests of all research instruments, and support for the parametric statistic used (the t-test), which was not preceded by tests of data normality. Their consistent inquiry syntax supported the four mini-thesis claims that inquiry-based teaching significantly influenced the dependent variables. In other words, the findings indicated that the positive claims of the research results were not fully supported by sound research methods and well-defined implementation of inquiry procedures.
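The omitted step the review points to is straightforward to add in practice; a sketch with synthetic scores using SciPy (all numbers invented).

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(70, 8, size=30)  # hypothetical post-test scores
inquiry = rng.normal(76, 8, size=30)

# Check the normality assumption before the parametric test, the step
# the reviewed student reports omitted.
for name, grp in [("control", control), ("inquiry", inquiry)]:
    w, p = stats.shapiro(grp)
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")

t, p = stats.ttest_ind(inquiry, control)
print(f"t-test: t = {t:.2f}, p = {p:.4f}")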
MacTavish, Rachel M.; Cohen, Risa A.
2014-01-01
• Premise of the study: A microcosm unit with tidal simulation was developed to address the challenge of maintaining ecologically relevant tidal regimes while performing controlled greenhouse experiments on smooth cordgrass, Spartina alterniflora. • Methods and Results: We designed a simple, inexpensive, easily replicated microcosm unit with tidal simulation and tested whether S. alterniflora growth in microcosms with tidal simulation was similar to that of tidally influenced plants in the field on Sapelo Island, Georgia. After three months of exposure to either natural or simulated tidal treatment, plants in microcosms receiving tidal simulation had similar stem density, height, and above- and belowground biomass to plants in field plots. • Conclusions: The tidal simulator developed may provide an inexpensive, effective method for conducting studies on S. alterniflora and other tidally influenced plants in controlled settings to be used not only to complement field studies, but also in locations without coastal access. PMID:25383265
NASA Astrophysics Data System (ADS)
De Clercq, Eva M.; Vandemoortele, Femke; De Wulf, Robert R.
2006-06-01
When signing Agenda 21, several countries agreed to monitor the status of forests to ensure their sustainable use. For reporting on changes in spatial forest cover pattern at a regional scale, pattern metrics are widely used. These indices are, however, seldom thoroughly evaluated with respect to their sensitivity to remote sensing data characteristics. Hence, one would not know whether a change in metric values was due to actual landscape pattern changes or to characteristic variation in multitemporal remote sensing data. The objective of this study is to empirically test an array of pattern metrics for monitoring spatial forest cover. Different user requirements are used as the point of departure, which proved to be a straightforward method for selecting relevant pattern indices. We strongly encourage the systematic screening of these indices prior to use in order to gain a deeper understanding of the results they produce.
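For a flavor of what class-level pattern metrics compute, a small Python sketch of patch count and mean patch size on a binary forest mask; illustrative only, since the study screened a larger array of indices against user requirements.

import numpy as np
from scipy import ndimage

def basic_pattern_metrics(forest_mask):
    # Label 4-connected forest patches, then report patch count and
    # mean patch size in pixels.
    labeled, n_patches = ndimage.label(forest_mask)
    sizes = ndimage.sum(forest_mask, labeled, index=range(1, n_patches + 1))
    return n_patches, (float(np.mean(sizes)) if n_patches else 0.0)

forest = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 1],
                   [0, 0, 1, 1],
                   [0, 0, 1, 0]])
print(basic_pattern_metrics(forest))  # (2, 3.5)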
Pfuhler, Stefan; Kirst, Annette; Aardema, Marilyn; Banduhn, Norbert; Goebel, Carsten; Araki, Daisuke; Costabel-Farkas, Margit; Dufour, Eric; Fautz, Rolf; Harvey, James; Hewitt, Nicola J; Hibatallah, Jalila; Carmichael, Paul; Macfarlane, Martin; Reisinger, Kerstin; Rowland, Joanna; Schellauf, Florian; Schepky, Andreas; Scheel, Julia
2010-01-01
For the assessment of genotoxic effects of cosmetic ingredients, a number of well-established and regulatory accepted in vitro assays are in place. A caveat to the use of these assays is their relatively low specificity and high rate of false or misleading positive results. Due to the 7th amendment to the EU Cosmetics Directive ban on in vivo genotoxicity testing for cosmetics that was enacted March 2009, it is no longer possible to conduct follow-up in vivo genotoxicity tests for cosmetic ingredients positive in in vitro genotoxicity tests to further assess the relevance of the in vitro findings. COLIPA, the European Cosmetics Association, has initiated a research programme to improve existing and develop new in vitro methods. A COLIPA workshop was held in Brussels in April 2008 to analyse the best possible use of available methods and approaches to enable a sound assessment of the genotoxic hazard of cosmetic ingredients. Common approaches of cosmetic companies are described, with recommendations for evaluating in vitro genotoxins using non-animal approaches. A weight of evidence approach was employed to set up a decision-tree for the integration of alternative methods into tiered testing strategies. Copyright 2010 Elsevier Inc. All rights reserved.
Effectiveness-implementation Hybrid Designs
Curran, Geoffrey M.; Bauer, Mark; Mittman, Brian; Pyne, Jeffrey M.; Stetler, Cheryl
2013-01-01
Objectives This study proposes methods for blending design components of clinical effectiveness and implementation research. Such blending can provide benefits over pursuing these lines of research independently; for example, more rapid translational gains, more effective implementation strategies, and more useful information for decision makers. This study proposes a “hybrid effectiveness-implementation” typology, describes a rationale for its use, outlines the design decisions that must be faced, and provides several real-world examples. Results An effectiveness-implementation hybrid design is one that takes a dual focus a priori in assessing clinical effectiveness and implementation. We propose 3 hybrid types: (1) testing the effects of a clinical intervention on relevant outcomes while observing and gathering information on implementation; (2) dual testing of clinical and implementation interventions/strategies; and (3) testing of an implementation strategy while observing and gathering information on the clinical intervention’s impact on relevant outcomes. Conclusions The hybrid typology proposed herein must be considered a construct still in evolution. Although traditional clinical effectiveness and implementation trials are likely to remain the most common approach to moving a clinical intervention from efficacy research to public health impact, judicious use of the proposed hybrid designs could speed the translation of research findings into routine practice. PMID:22310560
Clifton, Abigail; Lee, Geraldine; Norman, Ian J; O'Callaghan, David; Tierney, Karen; Richards, Derek
2015-01-01
Background Poor self-management of symptoms and psychological distress leads to worse outcomes and excess health service use in cardiovascular disease (CVD). Online-delivered therapy is effective, but generic interventions lack relevance for people with specific long-term conditions, such as cardiovascular disease. Objective To develop a comprehensive online CVD-specific intervention to improve both self-management and well-being, and to test acceptability and feasibility. Methods Informed by the Medical Research Council (MRC) guidance for the development of complex interventions, we adapted an existing evidence-based generic intervention for depression and anxiety for people with CVD. Content was informed by a literature review of existing resources and trial evidence, and the findings of a focus group study. Think-aloud usability testing was conducted to identify improvements to design and content. Acceptability and feasibility were tested in a cross-sectional study. Results Focus group participants (n=10) agreed that no existing resource met all their needs. Improvements such as "collapse and expand" features were added based on findings that participants’ information needs varied, and specific information, such as detecting heart attacks and when to seek help, was added. Think-aloud testing (n=2) led to changes in font size and design changes around navigation. All participants of the cross-sectional study (10/10, 100%) were able to access and use the intervention. Reported satisfaction was good, although the intervention was perceived to lack relevance for people without comorbid psychological distress. Conclusions We have developed an evidence-based, theory-informed, user-led online intervention for improving self-management and well-being in CVD. The use of multiple evaluation tests informed improvements to content and usability. Preliminary acceptability and feasibility has been demonstrated. The Space from Heart Disease intervention is now ready to be tested for effectiveness. This work has also identified that people with CVD symptoms and comorbid distress would be the most appropriate sample for a future randomized controlled trial to evaluate its effectiveness. PMID:26133739
Student science achievement and the integration of Indigenous knowledge on standardized tests
NASA Astrophysics Data System (ADS)
Dupuis, Juliann; Abrams, Eleanor
2017-09-01
In this article, we examine how American Indian students in Montana performed on standardized state science assessments when a small number of test items based upon traditional science knowledge from a cultural curriculum, "Indian Education for All", were included. Montana is the first state in the US to mandate the use of a culturally relevant curriculum in all schools and to incorporate this curriculum into a portion of the standardized assessment items. This study compares White and American Indian student scores on these particular test items to determine how both groups perform on culturally relevant test items compared with traditional standard science test items. The connections between student achievement on adapted culturally relevant science test items versus traditional items bring valuable insights to the fields of science education, research on student assessment, and Indigenous studies.
Experimental analysis of the sheet metal forming behavior of newly developed press hardening steels
NASA Astrophysics Data System (ADS)
Meza-García, Enrique; Kräusel, Verena; Landgrebe, Dirk
2018-05-01
The aim of this work was to characterize the newly developed press hardening sheet alloys 1800 PHS and 2000 PHS, developed by SSAB, with regard to their hot forming behavior, on the basis of experimentally determined mechanical and technological properties. For this purpose, conventional and non-conventional sheet metal testing methods were used. To determine the friction coefficient, the strip drawing test was applied, while the deep drawing cup test was used to determine the maximum draw depth. Finally, a V-bending test was carried out to evaluate the springback behavior of the investigated alloys while varying the blank temperature and quenching media. This work provides a technological guideline for the production of press hardened sheet parts made of the investigated sheet metals.
A novel method of measuring the melting point of animal fats.
Lloyd, S S; Dawkins, S T; Dawkins, R L
2014-10-01
The melting point (TM) of fat is relevant to health, but available methods of determining TM are cumbersome. One of the standard methods of measuring TM for animal and vegetable fats is the slip point, also known as the open capillary method. This method is imprecise and not amenable to automation or mass testing. We have developed a technique for measuring TM of animal fat using the Rotor-Gene Q (Qiagen, Hilden, Germany). The assay has an intra-assay SD of 0.08°C. A single operator can extract and assay up to 250 samples of animal fat in 24 h, including the time to extract the fat from the adipose tissue. This technique will improve the quality of research into genetic and environmental contributions to fat composition of meat.
Akselband, Y; Cabral, C; Shapiro, D S; McGrath, P
2005-08-01
Control of multi-drug-resistant tuberculosis has been hampered by the lack of simple, rapid and sensitive methods for assessing bacterial growth and antimicrobial susceptibility. Due to the increasing incidence and high frequency of mutations, it is unlikely that culture methods will disappear in the foreseeable future. Therefore, the need to modernize methods for rapid detection of viable clinical isolates, at a minimum as a gold standard, will persist. Previously, we confirmed the feasibility of using the Gel Microdrop (GMD) Growth Assay for identifying sub-populations of resistant Mycobacteria by testing different laboratory strains. Briefly, this assay format relies on encapsulating single bacterium in agarose microspheres and identifying clonogenic growth using flow cytometry and fluorescent staining. In this study, we modified the GMD Growth Assay to make it suitable for clinical applications. We demonstrated the effectiveness and safety of this novel approach for detecting drug susceptibility in clinically relevant laboratory strains as well as clinical isolates of Mycobacterium tuberculosis. Correlation between results using the GMD Growth Assay format and results using two well characterized methods (Broth Microdilution MIC and BACTEC 460TB) was 87.5% and 90%, respectively. However, due to the inherent sensitivity of flow cytometry and the ability to detect small (<1%) sub-populations of resistant mycobacteria, the GMD Growth Assay identified more cases of drug resistance. Using 4 clinically relevant mycobacterial strains, we assessed susceptibility to primary anti-tuberculosis drugs using both the Broth Microdilution MIC method and the GMD Growth Assay. We performed 24 tests on isoniazid-resistant BCG, Mycobacterium tuberculosis H37Ra and Mycobacterium avium strains. The Broth Microdilution MIC method identified 7 cases (29.1%) of resistance to INH and EMB compared to the GMD Growth Assay which identified resistance in 10 cases (41.6%); in 3 cases (12.5%), resistance to INH and EMB was detected only with the GMD Growth Assay. In addition, using 20 Mycobacterium tuberculosis clinical isolates, we compared results using BACTEC 460TB method performed by collaborators and the GMD Growth Assay. Eight of 20 (40%) clinical isolates, which were not identified as drug-resistant using the conventional BACTEC 460TB method, were resistant to 1, 2, or 3 different concentrations of drugs using the GMD Growth Assay (13 cases of 140 experiments). In one case (isolate 1879), resistance to 10.0 microg/ml of STR detected using BACTEC 460TB method was not confirmed by the GMD Growth Assay. Thus, the overall agreement between these methods was 90% (14 discrepant results of 140 experiments). These data demonstrate that the GMD Growth Assay is an accurate and sensitive method for rapid susceptibility testing of Mycobacterium tuberculosis for use in clinical reference laboratory settings.
Aiyer, Amiethab; Russell, Nicholas A; Pelletier, Matthew H; Myerson, Mark; Walsh, William R
2016-06-01
Background The optimal fixation method for the first tarsometatarsal arthrodesis remains controversial. This study aimed to develop a reproducible first tarsometatarsal testing model to evaluate the biomechanical performance of different reconstruction techniques. Methods Crossed screws or a claw plate were compared with a single or double shape memory alloy staple configuration in 20 Sawbones models. Constructs were mechanically tested in 4-point bending to 1, 2, and 3 mm of plantar displacement. The joint contact force and area were measured at time zero, and following 1 and 2 mm of bending. Peak load, stiffness, and plantar gapping were determined. Results Both staple configurations induced a significantly greater contact force and area across the arthrodesis than the crossed screw and claw plate constructs at all measurements. The staple constructs completely recovered their plantar gapping following each test. The claw plate generated the least contact force and area at the joint interface and had significantly greater plantar gapping than all other constructs. The crossed screw constructs were significantly stiffer and had significantly less plantar gapping than the other constructs, but this gapping was not recoverable. Conclusions Crossed screw fixation provides a rigid arthrodesis with limited compression and contact footprint across the joint. Shape memory alloy staples afford dynamic fixation with sustained compression across the arthrodesis. A rigid polyurethane foam model provides an anatomically relevant comparison for evaluating the interface between different fixation techniques. Clinical Relevance The dynamic nature of shape memory alloy staples offers the potential to permit early weight bearing and could be a useful adjunctive device to impart compression across an arthrodesis of the first tarsometatarsal joint. Therapeutic, Level V: Bench testing. © 2015 The Author(s).
Ey, E; Yang, M; Katz, A M; Woldeyohannes, L; Silverman, J L; Leblond, C S; Faure, P; Torquet, N; Le Sourd, A-M; Bourgeron, T; Crawley, J N
2012-11-01
Mutations in NLGN4X have been identified in individuals with autism spectrum disorders and other neurodevelopmental disorders. A previous study reported that adult male mice lacking neuroligin4 (Nlgn4) displayed social approach deficits in the three-chambered test, altered aggressive behaviors and reduced ultrasonic vocalizations. To replicate and extend these findings, independent comprehensive analyses of autism-relevant behavioral phenotypes were conducted in later generations of the same line of Nlgn4 mutant mice at the National Institute of Mental Health in Bethesda, MD, USA and at the Institut Pasteur in Paris, France. Adult social approach was normal in all three genotypes of Nlgn4 mice tested at both sites. Reciprocal social interactions in juveniles were similarly normal across genotypes. No genotype differences were detected in ultrasonic vocalizations in pups separated from the nest or in adults during reciprocal social interactions. Anxiety-like behaviors, self-grooming, rotarod and open field exploration did not differ across genotypes, and measures of developmental milestones and general health were normal. Our findings indicate an absence of autism-relevant behavioral phenotypes in subsequent generations of Nlgn4 mice tested at two locations. Testing environment and methods differed from the original study in some aspects, although the presence of normal sociability was seen in all genotypes when methods taken from Jamain et al. (2008) were used. The divergent results obtained from this study indicate that phenotypes may not be replicable across breeding generations, and highlight the significant roles of environmental, generational and/or procedural factors on behavioral phenotypes. Published 2012. This article is a U.S. Government work and is in the public domain in the USA.
Image segmentation with a novel regularized composite shape prior based on surrogate study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu
Purpose: Incorporating training into image segmentation is a good approach to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement of image segmentation accuracy when compared to a multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves segmentation performance superior to typical benchmark schemes.
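To make the weighting scheme concrete, the sketch below solves for composite-shape weights under a relevance-based quadratic hyperprior. The shape descriptors, the form of the hyperprior, and the relevance scores are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def composite_shape_weights(target, shapes, relevance, lam=1.0):
    """Estimate linear weights w so that shapes @ w approximates the target
    shape, penalizing weights on training shapes with low surrogate relevance.

    target    : (d,) target shape descriptor (e.g., signed-distance samples)
    shapes    : (d, n) columns are training shape descriptors
    relevance : (n,) surrogate relevance in (0, 1]; higher = more relevant
    lam       : strength of the relevance-based hyperprior (assumed quadratic)
    """
    d, n = shapes.shape
    # Quadratic hyperprior: a 1/relevance penalty discourages irrelevant atlases.
    P = np.diag(lam / relevance)
    # Normal equations of the regularized least-squares problem.
    w = np.linalg.solve(shapes.T @ shapes + P, shapes.T @ target)
    return w / w.sum()  # normalize so the composite is an affine combination

rng = np.random.default_rng(0)
shapes = rng.normal(size=(100, 5))
target = shapes @ np.array([0.6, 0.3, 0.1, 0.0, 0.0])
relevance = np.array([0.9, 0.8, 0.5, 0.1, 0.1])
print(composite_shape_weights(target, shapes, relevance))
```

The recovered weights concentrate on the geometrically relevant training shapes, which is the qualitative behavior the regularization is designed to produce.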
Islam, M Mazharul
2013-05-01
The long tradition of high prevalence of consanguineous marriages in Omani society may have ramifications for reproductive behaviour and health of offspring. To examine the relevance of consanguinity to reproductive behaviour, adverse pregnancy outcome and offspring mortality in Oman. The data analysed came from the 2000 Oman National Health Survey. Selected indicators that are related to reproductive behaviour, adverse pregnancy outcome and offspring mortality were considered as explanatory variables. Various statistical methods and tests were used for data analysis. Consanguineous marriage was found to be associated with lower age at first birth, higher preference for larger family size, lower level of husband-wife communication about use of family planning methods and lower rate of contraceptive use. Although bivariate analysis showed elevated fertility and childhood mortality among the women with consanguineous marriage, after controlling for relevant socio-demographic factors in multivariate analysis, fertility, childhood mortality and foetal loss showed no significant association with consanguinity in Oman. Consanguinity plays an important role in determining some of the aspects of reproduction and health of newborns, but did not show any detrimental effects on fertility and offspring mortality. The high level of consanguinity and its relevance to reproduction in Oman need to be considered in its public health strategy in a culturally acceptable manner.
Extracting laboratory test information from biomedical text
Kang, Yanna Shen; Kayaalp, Mehmet
2013-01-01
Background: No previous study reported the efficacy of current natural language processing (NLP) methods for extracting laboratory test information from narrative documents. This study investigates the pathology informatics question of how accurately such information can be extracted from text with the current tools and techniques, especially machine learning and symbolic NLP methods. The study data came from a text corpus maintained by the U.S. Food and Drug Administration, containing a rich set of information on laboratory tests and test devices. Methods: The authors developed a symbolic information extraction (SIE) system to extract device and test specific information about four types of laboratory test entities: Specimens, analytes, units of measures and detection limits. They compared the performance of SIE and three prominent machine learning based NLP systems, LingPipe, GATE and BANNER, each implementing a distinct supervised machine learning method, hidden Markov models, support vector machines and conditional random fields, respectively. Results: Machine learning systems recognized laboratory test entities with moderately high recall, but low precision rates. Their recall rates were relatively higher when the number of distinct entity values (e.g., the spectrum of specimens) was very limited or when lexical morphology of the entity was distinctive (as in units of measures), yet SIE outperformed them with statistically significant margins on extracting specimen, analyte and detection limit information in both precision and F-measure. Its high recall performance was statistically significant on analyte information extraction. Conclusions: Despite its shortcomings against machine learning methods, a well-tailored symbolic system may better discern relevancy among a pile of information of the same type and may outperform a machine learning system by tapping into lexically non-local contextual information such as the document structure. PMID:24083058
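To make the symbolic route concrete, the toy sketch below pulls specimen and detection-limit/unit mentions out of free text with hand-written patterns. The patterns, entity lists, and example sentence are illustrative assumptions only; the SIE system described above is substantially richer and also exploits document structure.

```python
import re

# Illustrative patterns only; a production symbolic extractor would use far
# larger lexicons and contextual rules.
UNIT = r"(?:ng/mL|mg/dL|mmol/L|IU/mL|%)"
LIMIT = re.compile(rf"detection limit of\s+(\d+(?:\.\d+)?)\s*({UNIT})", re.I)
SPECIMEN = re.compile(r"\b(serum|plasma|urine|whole blood)\b", re.I)

text = ("The assay measures glucose in serum with a detection limit "
        "of 0.5 mmol/L.")
print(SPECIMEN.findall(text))   # ['serum']
print(LIMIT.findall(text))      # [('0.5', 'mmol/L')]
```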
Filter Media Tests Under Simulated Martian Atmospheric Conditions
NASA Technical Reports Server (NTRS)
Agui, Juan H.
2016-01-01
Human exploration of Mars will require the optimal utilization of planetary resources. One of its abundant resources is the Martian atmosphere that can be harvested through filtration and chemical processes that purify and separate it into its gaseous and elemental constituents. Effective filtration needs to be part of the suite of resource utilization technologies. A unique testing platform is being used which provides the relevant operational and instrumental capabilities to test articles under the proper simulated Martian conditions. A series of tests were conducted to assess the performance of filter media. Light sheet imaging of the particle flow provided a means of detecting and quantifying particle concentrations to determine capturing efficiencies. The media's efficiency was also evaluated by gravimetric means through a by-layer filter media configuration. These tests will help to establish techniques and methods for measuring capturing efficiency and arrestance of conventional fibrous filter media. This paper will describe initial test results on different filter media.
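For the gravimetric evaluation, capture efficiency follows from a simple mass balance across the media stack. The sketch below shows the arithmetic on hypothetical by-layer masses; these are not the actual test data.

```python
def capture_efficiency(challenge_mg, penetrated_mg):
    """Overall fractional capture efficiency from gravimetric masses."""
    return 1.0 - penetrated_mg / challenge_mg

# Hypothetical by-layer gravimetric results for a stacked media coupon (mg).
challenge = 16.0
per_layer = [12.0, 3.0, 0.8]                 # mass caught in each layer
penetrated = challenge - sum(per_layer)      # mass reaching the backup filter
print(f"overall efficiency: {capture_efficiency(challenge, penetrated):.1%}")
for i, m in enumerate(per_layer, 1):
    print(f"layer {i} caught {m/challenge:.1%} of the challenge dust")
```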
Niedermaier, Tobias; Weigl, Korbinian; Hoffmeister, Michael; Brenner, Hermann
2017-01-01
Background Colorectal cancer (CRC) is a common but largely preventable cancer. Although fecal immunochemical tests (FITs) detect the majority of CRCs, they miss some of the cancers and most advanced adenomas (AAs). The potential of blood tests in complementing FITs for the detection of CRC or AA has not yet been systematically investigated. Methods We conducted a systematic review of performance of FIT combined with an additional blood test for CRC and AA detection versus FIT alone. PubMed and Web of Science were searched until June 9, 2017. Results Some markers substantially increased sensitivity for CRC when combined with FIT, albeit typically at a major loss of specificity. For AA, no relevant increase in sensitivity could be achieved. Conclusion Combining FIT and blood tests might be a promising approach to enhance sensitivity of CRC screening, but comprehensive evaluation of promising marker combinations in screening populations is needed. PMID:29435309
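The sensitivity/specificity trade-off reported above can be made explicit with a simple parallel-combination calculation, under the (strong) assumption that the two tests err independently; the operating characteristics below are hypothetical.

```python
def combined_parallel(sens_fit, spec_fit, sens_blood, spec_blood):
    """Sensitivity/specificity of a parallel combination (positive if either
    test is positive), assuming conditional independence of the two tests."""
    sens = 1 - (1 - sens_fit) * (1 - sens_blood)
    spec = spec_fit * spec_blood
    return sens, spec

# Hypothetical values: FIT alone plus a blood marker.
print(combined_parallel(0.73, 0.95, 0.60, 0.85))  # -> (0.892, 0.8075)
```

The same arithmetic shows why adding a second marker raises sensitivity while specificity erodes multiplicatively.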
An analysis of a typology of family health nursing practice.
Macduff, Colin
2006-01-01
In this article, Colin Macduff analyses the construction and testing of a typology of family health nursing practice. Following a summary of relevant methods and findings from two linked empirical research studies, more detailed analysis of the conceptual foundations, nature and purpose of the typology is presented. This process serves to exemplify and address some of the issues highlighted in the associated article that reviews the use of typologies within nursing.
2013-02-01
of a bearing must be put into practice. There are many potential methods, the most traditional being the use of statistical time-domain features...accelerate degradation to test multiple bearings to gain statistical relevance and extrapolate results to scale for field conditions. Temperature...as time statistics, frequency estimation to improve fault frequency detection. For future investigations, one can further explore the
Lawson, Ben D; Britt, Thomas W; Kelley, Amanda M; Athy, Jeremy R; Legan, Shauna M
2017-08-01
The coordination of team effort on shared tasks is an active area of inquiry. A number of tests of team performance in challenging environments have been developed without comparison or standardization. This article provides a systematic review of the most accessible and usable low-to-medium fidelity computerized tests of team performance and determines which are most applicable to military- and aviation-relevant research, such as studies of group command, control, communication, and crew coordination. A search was conducted to identify computerized measures of team performance. In addition to extensive literature searches (DTIC, Psychinfo, PubMed), the authors reached out to team performance researchers at conferences and through electronic communication. Fifty-seven potential tests were identified and screened against six specific selection criteria (e.g., the requirement for automated collection of team performance and coordination processes, the use of military-relevant scenarios). The following seven tests (listed alphabetically) were considered most suitable for military needs: Agent Enabled Decision Group Environment (AEDGE), C3Conflict, the C3 (Command, Control, & Communications) Interactive Task for Identifying Emerging Situations (NeoCITIES), Distributed Dynamic Decision Making (DDD), Duo Wondrous Original Method Basic Awareness/Airmanship Test (DuoWOMBAT), the Leader Development Simulator (LDS), and the Planning Task for Teams (PLATT). Strengths and weaknesses of these tests are described and recommendations offered to help researchers identify the test most suitable for their particular needs. Adoption of a few standard computerized test batteries to study team performance would facilitate the evaluation of interventions intended to enhance group performance in multiple challenging military and aerospace operational environments. Lawson BD, Britt TW, Kelley AM, Athy JR, Legan SM. Computerized tests of team performance and crew coordination suitable for military/aviation settings. Aerosp Med Hum Perform. 2017; 88(8):722-729.
Measurement of Creep Properties of Ultra-High-Temperature Materials by a Novel Non-Contact Technique
NASA Technical Reports Server (NTRS)
Hyers, Robert W.; Lee, Jonghyun; Rogers, Jan R.; Liaw, Peter K.
2007-01-01
A non-contact technique for measuring the creep properties of materials has been developed and validated as part of a collaboration among the University of Massachusetts, NASA Marshall Space Flight Center Electrostatic Levitation Facility (ESL), and the University of Tennessee. This novel method has several advantages over conventional creep testing. The sample is deformed by the centripetal acceleration from the rapid rotation, and the deformed shapes are analyzed to determine the strain. Since there is no contact with grips, there is no theoretical maximum temperature and no concern about chemical compatibility. Materials may be tested at the service temperature even for extreme environments such as rocket nozzles, or above the service temperature for accelerated testing of materials for applications such as jet engines or turbopumps for liquid-fueled engines. The creep measurements have been demonstrated to 2400 °C with niobium, while the test facility, the NASA MSFC ESL, has processed materials up to 3400 °C. Furthermore, the ESL creep method employs a distribution of stress to determine the stress exponent from a single test, versus the many tests required by conventional methods. Determination of the stress exponent from the ESL creep tests requires very precise measurement of the surface shape of the deformed sample for comparison to deformations predicted by finite element models for different stress exponents. An error analysis shows that the stress exponent can be determined to about 1% accuracy with the current methods and apparatus. The creep properties of single-crystal niobium at 1985 °C showed excellent agreement with conventional tests performed according to ASTM Standard E-139. Tests on other metals, ceramics, and composites relevant to rocket propulsion and turbine engines are underway.
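Conventionally, the stress exponent n is obtained from the Norton power law, strain rate = A·σ^n, by fitting several tests at different stresses on log-log axes; the ESL method extracts n from a single test by matching deformed shapes to finite element predictions. The sketch below illustrates only the conventional log-log fit, on synthetic data with an assumed true exponent of 4.3.

```python
import numpy as np

# Norton power-law creep: strain_rate = A * sigma**n. On log axes this is
# linear, so n follows from a least-squares slope. All data are synthetic.
sigma = np.array([10.0, 20.0, 40.0, 80.0])       # MPa, assumed test stresses
noise = np.random.default_rng(1).normal(0.0, 0.05, sigma.size)
rate = 2e-9 * sigma**4.3 * np.exp(noise)         # simulated creep rates, 1/s
n, logA = np.polyfit(np.log(sigma), np.log(rate), 1)
print(f"stress exponent n = {n:.2f} (true value 4.3)")
```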
Oliveira, Maria Regina Fernandes; Leandro, Roseli; Decimoni, Tassia Cristina; Rozman, Luciana Martins; Novaes, Hillegonda Maria Dutilh; De Soárez, Patrícia Coelho
2017-08-01
The aim of this study is to identify and characterize the health economic evaluations (HEEs) of diagnostic tests conducted in Brazil, in terms of their adherence to international guidelines for reporting economic studies and specific questions in test accuracy reports. We systematically searched multiple databases, selecting partial and full HEEs of diagnostic tests, published between 1980 and 2013. Two independent reviewers screened articles for relevance and extracted the data. We performed a qualitative narrative synthesis. Forty-three articles were reviewed. The most frequently studied diagnostic tests were laboratory tests (37.2%) and imaging tests (32.6%). Most were non-invasive tests (51.2%) and were performed in the adult population (48.8%). The intended purposes of the technologies evaluated were mostly diagnostic (69.8%), but diagnosis and treatment and screening, diagnosis, and treatment accounted for 25.6% and 4.7%, respectively. Of the reviewed studies, 12.5% described the methods used to estimate the quantities of resources, 33.3% reported the discount rate applied, and 29.2% listed the type of sensitivity analysis performed. Among the 12 cost-effectiveness analyses, only two studies (17%) referred to the application of formal methods to check the quality of the accuracy studies that provided support for the economic model. The existing Brazilian literature on the HEEs of diagnostic tests exhibited reasonably good performance. However, the following points still require improvement: 1) the methods used to estimate resource quantities and unit costs, 2) the discount rate, 3) descriptions of sensitivity analysis methods, 4) reporting of conflicts of interest, 5) evaluations of the quality of the accuracy studies considered in the cost-effectiveness models, and 6) the incorporation of accuracy measures into sensitivity analyses.
Dumont, J.N.; Bantle, J.A.; Linder, G.
2003-01-01
The energy crisis of the 1970s and 1980s prompted the search for alternative sources of fuel. With development of alternate sources of energy, concerns for biological resources potentially adversely impacted by these alternative technologies also heightened. For example, few biological tests were available at the time to study toxic effects of effluents on surface waters likely to serve as receiving streams for energy-production facilities; hence, we began to use Xenopus laevis embryos as test organisms to examine potential toxic effects associated with these effluents upon entering aquatic systems. As studies focused on potential adverse effects on aquatic systems continued, a test procedure was developed that led to the initial standardization of FETAX. Other than a limited number of aquatic toxicity tests that used fathead minnows and cold-water fishes such as rainbow trout, X. laevis represented the only other aquatic vertebrate test system readily available to evaluate complex effluents. With numerous laboratories collaborating, the test with X. laevis was refined, improved, and developed as ASTM E-1439, Standard Guide for Conducting the Frog Embryo Teratogenesis Assay-Xenopus (FETAX). Collaborative work in the 1990s yielded procedural enhancements, for example, development of standard test solutions and exposure methods to handle volatile organics and hydrophobic compounds. As part of the ASTM process, a collaborative interlaboratory study was performed to determine the repeatability and reliability of FETAX. Parallel to these efforts, methods were also developed to test sediments and soils, and in situ test methods were developed to address "lab-to-field extrapolation errors" that could influence the method's use in ecological risk assessments. Additionally, a metabolic activation system composed of rat liver microsomes was developed, which made FETAX more relevant to mammalian studies.
Antal, Péter; Kiszel, Petra Sz.; Gézsi, András; Hadadi, Éva; Virág, Viktor; Hajós, Gergely; Millinghoffer, András; Nagy, Adrienne; Kiss, András; Semsei, Ágnes F.; Temesi, Gergely; Melegh, Béla; Kisfali, Péter; Széll, Márta; Bikov, András; Gálffy, Gabriella; Tamási, Lilla; Falus, András; Szalai, Csaba
2012-01-01
Genetic studies indicate a high number of potential factors related to asthma. Based on earlier linkage analyses, we selected the 11q13 and 14q22 asthma susceptibility regions, for which we designed a partial genome screening study using 145 SNPs in 1201 individuals (436 asthmatic children and 765 controls). The results were evaluated with traditional frequentist methods and we applied a new statistical method, called Bayesian network based Bayesian multilevel analysis of relevance (BN-BMLA). This method uses Bayesian network representation to provide detailed characterization of the relevance of factors, such as joint significance, the type of dependency, and multi-target aspects. We estimated posteriors for these relations within the Bayesian statistical framework, in order to assess whether a variable is directly relevant or its association is only mediated. With frequentist methods one SNP (rs3751464 in the FRMD6 gene) provided evidence for an association with asthma (OR = 1.43 (1.2–1.8); p = 3×10^−4). The possible role of the FRMD6 gene in asthma was also confirmed in an animal model and human asthmatics. In the BN-BMLA analysis altogether 5 SNPs in 4 genes were found relevant in connection with asthma phenotype: PRPF19 on chromosome 11, and FRMD6, PTGER2 and PTGDR on chromosome 14. In a subsequent step a partial dataset containing rhinitis and further clinical parameters was used, which allowed the analysis of relevance of SNPs for asthma and multiple targets. These analyses suggested that SNPs in the AHNAK and MS4A2 genes were indirectly associated with asthma. This paper indicates that BN-BMLA explores the relevant factors more comprehensively than traditional statistical methods and extends the scope of strong relevance based methods to include partial relevance, global characterization of relevance and multi-target relevance. PMID:22432035
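For the frequentist result quoted above, the basic building block is an odds ratio with a Woolf (logit) confidence interval computed from a 2×2 genotype-by-status table. The sketch below shows that calculation with hypothetical counts, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI (Woolf logit method) for a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical allele counts for one SNP; not the study's raw data.
print(odds_ratio_ci(200, 280, 236, 485))
```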
In vivo distribution of spinal intervertebral stiffness based on clinical flexibility tests.
Lafon, Yoann; Lafage, Virginie; Steib, Jean-Paul; Dubousset, Jean; Skalli, Wafa
2010-01-15
A numerical study was conducted to identify the intervertebral stiffness of scoliotic spines from spinal flexibility tests. To study the intervertebral 3-dimensional (3D) stiffness distribution along the scoliotic spine. Few methods have been reported in the literature to quantify the in vivo 3D intervertebral stiffness of the scoliotic spine. Based on the simulation of flexibility tests, these methods were operator-dependent and could yield clinically irrelevant stiffnesses. This study included 30 patients surgically treated for severe idiopathic scoliosis. A previously validated trunk model, with patient-specific geometry, was used to simulate bending tests according to the in vivo displacements of T1 and L5 measured from bending test radiographs. Differences between in vivo and virtual spinal behaviors during bending tests (left and right) were computed in terms of vertebral rotations and translation. An automated method, driven by a priori knowledge, identified intervertebral stiffnesses in order to reproduce the in vivo spinal behavior. Through the identification of intervertebral stiffnesses, differences between in vivo and virtual spinal displacements were drastically reduced (95% of the differences less than +/-3 mm for vertebral translation). The intervertebral stiffness distribution after identification was analyzed. In the convex-side bending test, the intervertebral stiffness of the compensatory curves increased in most cases, whereas the major curve became more flexible. Stiffness singularities were found in junctional zones: these specific levels were predominantly flexible, both in torsion and in lateral bending. The identification of in vivo intervertebral stiffness may improve our understanding of the scoliotic spine and the relevance of patient-specific methods for surgical planning.
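At its core, the identification step is an inverse problem: adjust the intervertebral stiffnesses until the simulated bending displacements reproduce the radiographic ones. The toy sketch below illustrates the idea on a chain of torsional springs; the load, model, and target values are assumptions, whereas the study drove a patient-specific finite element trunk model.

```python
import numpy as np
from scipy.optimize import least_squares

applied_moment = 5.0  # N*m, assumed bending load

def rotations(k):
    """Toy surrogate: each level of a spring chain rotates M/k radians."""
    return applied_moment / k

# Synthetic "in vivo" rotations generated from known stiffnesses (N*m/rad).
observed = rotations(np.array([80.0, 40.0, 120.0, 60.0]))

res = least_squares(lambda k: rotations(k) - observed,
                    x0=np.full(4, 70.0), bounds=(1.0, 1e3))
print(res.x)  # recovers [80, 40, 120, 60]
```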
Improving Retrieval Performance by Relevance Feedback.
ERIC Educational Resources Information Center
Salton, Gerard; Buckley, Chris
1990-01-01
Briefly describes the principal relevance feedback methods that have been introduced over the years and evaluates the effectiveness of the methods in producing improved query formulations. Prescriptions are given for conducting text retrieval operations iteratively using relevance feedback. (24 references) (Author/CLB)
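The classic example of such a method is Rocchio's vector-space reformulation, which moves the query toward the centroid of documents judged relevant and away from those judged non-relevant. A minimal sketch with commonly quoted default weightings follows; the vectors are illustrative.

```python
import numpy as np

def rocchio(q, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance-feedback reformulation of a query vector."""
    q_new = (alpha * q
             + beta * np.mean(relevant, axis=0)
             - gamma * np.mean(nonrelevant, axis=0))
    return np.clip(q_new, 0, None)  # negative term weights are usually dropped

q = np.array([1.0, 0.0, 0.5, 0.0])
rel = np.array([[0.9, 0.1, 0.8, 0.0], [1.0, 0.0, 0.6, 0.1]])
nonrel = np.array([[0.0, 1.0, 0.1, 0.9]])
print(rocchio(q, rel, nonrel))
```

Iterating this reformulation over successive rounds of user judgments is the iterative feedback loop the article prescribes.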
Brniak, Witold; Jachowicz, Renata; Pelka, Przemyslaw
2015-09-01
Even though orodispersible tablets (ODTs) have been used successfully in therapy for more than 20 years, there is still no compendial method for evaluating their disintegration time other than the pharmacopoeial disintegration test conducted in 800-900 mL of distilled water. Therefore, several alternative tests more relevant to in vivo conditions have been described by different researchers. The aim of this study was to compare these methods and correlate them with in vivo results. Six series of ODTs were prepared by direct compression. Their mechanical properties and disintegration times were measured with pharmacopoeial and alternative methods and compared with the in vivo results. The highest correlation with oral disintegration time was found for an apparatus of the authors' own construction with an additional weight, and for the method proposed by Narazaki et al. The correlation coefficients were 0.9994 (p < 0.001) and 0.9907 (p < 0.001), respectively. The pharmacopoeial method correlated much less well with the in vivo data (r = 0.8925, p < 0.05). These results show that the development of novel biorelevant methods for determining ODT disintegration time is warranted and scientifically justified.
Rhebergen, Koenraad S; van Esch, Thamar E M; Dreschler, Wouter A
2015-06-01
A temporal resolution test in addition to the pure-tone audiogram may be of great clinical interest because of its relevance to speech perception and its expected relevance to hearing aid fitting. Larsby and Arlinger developed an appropriate clinical test, but this test uses a Békésy-tracking procedure for estimating masked thresholds in stationary and interrupted noise to assess release of masking (RoM) for temporal resolution. Generally, the Hughson-Westlake up-down procedure is used in the clinic to measure pure-tone thresholds in quiet. A uniform approach would facilitate clinical application and might be appropriate for RoM measurements as well. Because there is no gold standard for measuring RoM in the clinic, we examine in the present study the Hughson-Westlake up-down procedure for measuring RoM and compare the results with the Békésy-tracking procedure. The purpose of the current study was to examine the differences between a Békésy-tracking procedure and the Hughson-Westlake up-down procedure for estimating masked thresholds in stationary and interrupted noise to assess RoM. RoM was assessed in eight normal-hearing (NH) and ten hearing-impaired (HI) listeners with both methods. Results from both methods were compared with each other and with predicted thresholds from a model. Statistical analyses comprised Wilcoxon signed-rank tests and paired t tests. Some differences between the two methods were found. We used a model to quantify the results of the two measurement procedures. The results of the Hughson-Westlake procedure were clearly in better agreement with the model than the results of the Békésy-tracking procedure. Furthermore, the Békésy-tracking procedure showed more spread in the results of the NH listeners than the Hughson-Westlake procedure. The Hughson-Westlake procedure seems to be an applicable alternative for measuring RoM for temporal resolution in clinical audiological practice. American Academy of Audiology.
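For readers unfamiliar with the clinical procedure, the sketch below simulates a simplified 10-dB-down/5-dB-up Hughson-Westlake staircase against a noisy listener model. The response model and the scoring rule are simplified assumptions, not the study's implementation.

```python
import random

def hughson_westlake(true_threshold_db, start_db=40.0, n_trials=30, seed=3):
    """Crude simulation of the clinical 10-dB-down / 5-dB-up procedure.
    The listener 'hears' a tone when its level exceeds a noisy threshold;
    the scoring rule is simplified relative to clinical practice."""
    rng = random.Random(seed)
    level, heard_levels = start_db, []
    for _ in range(n_trials):
        heard = level >= true_threshold_db + rng.gauss(0.0, 2.0)
        if heard:
            heard_levels.append(level)
            level -= 10.0                # response: descend 10 dB
        else:
            level += 5.0                 # no response: ascend 5 dB
    # Simplified threshold: lowest level heard on at least two presentations.
    repeated = [l for l in heard_levels if heard_levels.count(l) >= 2]
    return min(repeated) if repeated else min(heard_levels)

print(hughson_westlake(35.0), "dB HL estimated (true threshold: 35 dB)")
```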
Scholz, Stefan; Sela, Erika; Blaha, Ludek; Braunbeck, Thomas; Galay-Burgos, Malyka; García-Franco, Mauricio; Guinea, Joaquin; Klüver, Nils; Schirmer, Kristin; Tanneberger, Katrin; Tobor-Kapłon, Marysia; Witters, Hilda; Belanger, Scott; Benfenati, Emilio; Creton, Stuart; Cronin, Mark T D; Eggen, Rik I L; Embry, Michelle; Ekman, Drew; Gourmelon, Anne; Halder, Marlies; Hardy, Barry; Hartung, Thomas; Hubesch, Bruno; Jungmann, Dirk; Lampi, Mark A; Lee, Lucy; Léonard, Marc; Küster, Eberhard; Lillicrap, Adam; Luckenbach, Till; Murk, Albertinka J; Navas, José M; Peijnenburg, Willie; Repetto, Guillermo; Salinas, Edward; Schüürmann, Gerrit; Spielmann, Horst; Tollefsen, Knut Erik; Walter-Rohde, Susanne; Whale, Graham; Wheeler, James R; Winter, Matthew J
2013-12-01
Tests with vertebrates are an integral part of environmental hazard identification and risk assessment of chemicals, plant protection products, pharmaceuticals, biocides, feed additives and effluents. These tests raise ethical and economic concerns and are considered as inappropriate for assessing all of the substances and effluents that require regulatory testing. Hence, there is a strong demand for replacement, reduction and refinement strategies and methods. However, until now alternative approaches have only rarely been used in regulatory settings. This review provides an overview on current regulations of chemicals and the requirements for animal tests in environmental hazard and risk assessment. It aims to highlight the potential areas for alternative approaches in environmental hazard identification and risk assessment. Perspectives and limitations of alternative approaches to animal tests using vertebrates in environmental toxicology, i.e. mainly fish and amphibians, are discussed. Free access to existing (proprietary) animal test data, availability of validated alternative methods and a practical implementation of conceptual approaches such as the Adverse Outcome Pathways and Integrated Testing Strategies were identified as major requirements towards the successful development and implementation of alternative approaches. Although this article focusses on European regulations, its considerations and conclusions are of global relevance. Copyright © 2013 Elsevier Inc. All rights reserved.
Researching reducing health disparities: mixed-methods approaches.
Stewart, Miriam; Makwarimba, Edward; Barnfather, Alison; Letourneau, Nicole; Neufeld, Anne
2008-03-01
There is a pressing need for assessment and intervention research focused on reducing health disparities. In our research program, the use of mixed methods has enhanced assessment of the mediating impacts of social support on the health of vulnerable populations and enabled the design and testing of support interventions. This paper highlights the benefits and challenges of mixed methods for investigating inequities and illustrates the application of mixed methods in two exemplar studies focused on vulnerable populations in Canada. Qualitative methods fostered in-depth understanding of vulnerable populations' support needs, support resources, intervention preferences, and satisfaction with intervention strategies and impacts. Quantitative methods documented the effectiveness and outcomes of intervention strategies, and enhanced the reliability and validity of assessments and interventions. The researchers demonstrate that participatory strategies are needed to make studies more relevant to reducing health disparities, contextually appropriate, and empowering.
Lippa, Sara M
2018-04-01
Over the past two decades, there has been much research on measures of response bias and myriad measures have been validated in a variety of clinical and research samples. This critical review aims to guide clinicians through the use of performance validity tests (PVTs) from test selection and administration through test interpretation and feedback. Recommended cutoffs and relevant test operating characteristics are presented. Other important issues to consider during test selection, administration, interpretation, and feedback are discussed including order effects, coaching, impact on test data, and methods to combine measures and improve predictive power. When interpreting performance validity measures, neuropsychologists must use particular caution in cases of dementia, low intelligence, English as a second language/minority cultures, or low education. PVTs provide valuable information regarding response bias and, under the right circumstances, can provide excellent evidence of response bias. Only after consideration of the entire clinical picture, including validity test performance, can concrete determinations regarding the validity of test data be made.
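One way to combine several validity indicators is to chain their likelihood ratios onto a base rate, which presumes the measures are conditionally independent, an assumption that often fails in practice. The sketch below illustrates the arithmetic with hypothetical numbers, not values from the reviewed literature.

```python
def posttest_probability(pretest_p, likelihood_ratios):
    """Combine validity indicators by chaining likelihood ratios onto the
    pretest odds (assumes conditional independence of the measures)."""
    odds = pretest_p / (1 - pretest_p)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical: 30% base rate of invalid performance, two failed PVTs with
# positive likelihood ratios of 4 and 3 (illustrative values only).
print(round(posttest_probability(0.30, [4, 3]), 3))   # ~0.837
```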
Kelder, Johannes C; Rutten, Frans H; Hoes, Arno W
2009-02-01
With the emergence of novel diagnostic tests, e.g. point-of-care tests, clinically relevant empirical evidence is needed to assess whether such a test should be used in daily practice. With the example of the value of B-type natriuretic peptides (BNP) in the diagnostic assessment of suspected heart failure, we will discuss the major methodological issues crucial in diagnostic research; most notably the choice of the study population and the data analysis with a multivariable approach. BNP have been studied extensively in the emergency care setting, and several studies in primary care are also available. The usefulness of this test when applied in combination with other readily available tests is still not adequately addressed in the relevant patient domain, i.e. those who are clinically suspected of heart failure by their GP. Future diagnostic research in primary care should be targeted much more at answering the clinically relevant question 'Is it useful to add this (new) test to the other tests I usually perform, including history taking and physical examination, in patients I suspect of having a certain disease'.
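The 'added value' question translates naturally into comparing multivariable models with and without the new test, for example by discrimination (AUC). The sketch below does this on synthetic data; the variables and effect sizes are assumptions, not findings about BNP.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 500
history = rng.normal(size=n)              # stand-in for history taking
exam = rng.normal(size=n)                 # stand-in for physical examination
bnp = rng.normal(size=n)                  # stand-in for the new test
logit = 0.8 * history + 0.6 * exam + 1.0 * bnp   # assumed true model
y = rng.random(n) < 1 / (1 + np.exp(-logit))     # simulated heart failure

base = LogisticRegression().fit(np.c_[history, exam], y)
full = LogisticRegression().fit(np.c_[history, exam, bnp], y)
auc_base = roc_auc_score(y, base.predict_proba(np.c_[history, exam])[:, 1])
auc_full = roc_auc_score(y, full.predict_proba(np.c_[history, exam, bnp])[:, 1])
print(f"AUC without new test: {auc_base:.3f}, with new test: {auc_full:.3f}")
```

The increment in AUC (or a reclassification measure) quantifies what the new test adds beyond the tests a GP already performs.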
Treatment of atomic and molecular line blanketing by opacity sampling
NASA Technical Reports Server (NTRS)
Johnson, H. R.; Krupp, B. M.
1976-01-01
A sampling technique for treating the radiative opacity of large numbers of atomic and molecular lines in cool stellar atmospheres is subjected to several tests. In this opacity sampling (OS) technique, the global opacity is sampled at only a selected set of frequencies, and at each of these frequencies the total monochromatic opacity is obtained by summing the contribution of every relevant atomic and molecular line. In accord with previous results, we find that the structure of atmospheric models is accurately fixed by the use of 1000 frequency points, and 100 frequency points are adequate for many purposes. The effects of atomic and molecular lines are separately studied. A test model computed using the OS method agrees very well with a model having identical atmospheric parameters, but computed with the giant line (opacity distribution function) method.
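The essence of opacity sampling is that a modest random subset of frequency points can stand in for the full frequency integration over a line-riddled opacity spectrum. The toy sketch below illustrates how the sampled mean converges with sample count; the synthetic spectrum is far simpler than a real atomic and molecular line list.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy spectrum: smooth continuum plus many narrow Gaussian "lines".
nu = np.linspace(0.0, 1.0, 50_000)
kappa = np.ones_like(nu)
for center in rng.random(1_000):
    kappa += 40.0 * np.exp(-0.5 * ((nu - center) / 2e-4) ** 2)

exact = kappa.mean()                # stand-in for the full frequency integral
for n_pts in (100, 1_000, 10_000):
    sampled = rng.choice(kappa, size=n_pts, replace=False).mean()
    print(f"{n_pts:>6} sample points: {sampled:8.3f}  (exact {exact:.3f})")
```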
NASA Astrophysics Data System (ADS)
Brodic, D.
2011-01-01
Text line segmentation represents the key element in the optical character recognition process. Hence, testing of text line segmentation algorithms has substantial relevance. Previously proposed testing methods deal mainly with a text database as a template, which is used both for testing and for evaluating the text segmentation algorithm. In this manuscript, a methodology for evaluating text segmentation algorithms based on extended binary classification is proposed. It is established on various multiline text samples linked with text segmentation, whose results are distributed according to the binary classification. The final result is obtained by comparative analysis of the cross-linked data. Its suitability for different types of scripts represents the method's main advantage.
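In the spirit of the binary-classification evaluation described above, segmentation output can be scored by matching detected line positions to ground truth within a tolerance and counting hits and misses. The matcher below is a simplified assumption, not the manuscript's exact methodology.

```python
def evaluate_segmentation(detected, ground_truth, tol=3):
    """Score detected text-line positions against ground truth as a binary
    classification task: a detection within `tol` pixels of an unmatched true
    line is a true positive, otherwise a false positive; leftover truths are
    false negatives (missed lines)."""
    truths = list(ground_truth)
    tp = fp = 0
    for d in detected:
        match = next((t for t in truths if abs(d - t) <= tol), None)
        if match is not None:
            tp += 1
            truths.remove(match)
        else:
            fp += 1
    fn = len(truths)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

print(evaluate_segmentation([10, 55, 103, 160], [12, 54, 100]))  # (0.75, 1.0)
```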
Identification of DNA-Binding Proteins Using Mixed Feature Representation Methods.
Qu, Kaiyang; Han, Ke; Wu, Song; Wang, Guohua; Wei, Leyi
2017-09-22
DNA-binding proteins play vital roles in cellular processes, such as DNA packaging, replication, transcription, regulation, and other DNA-associated activities. The current main prediction approach is based on machine learning, and its accuracy mainly depends on the feature extraction method. Therefore, using an efficient feature representation method is important for enhancing classification accuracy. However, existing feature representation methods cannot efficiently distinguish DNA-binding proteins from non-DNA-binding proteins. In this paper, a multi-feature representation method, which combines three feature representation methods, namely K-Skip-N-Grams, information theory, and sequential and structural features (SSF), is used to represent the protein sequences and improve feature representation ability. A support vector machine is used as the classifier. The mixed-feature representation method is evaluated using 10-fold cross-validation and a test set. Feature vectors obtained from the combination of the three feature extractions show the best performance in 10-fold cross-validation, both without dimensionality reduction and with dimensionality reduction by max-relevance-max-distance. Moreover, the reduced mixed-feature method performs better than the non-reduced mixed-feature technique. The feature vectors combining SSF and K-Skip-N-Grams show the best performance on the test set. Among these methods, mixed features exhibit superiority over the single features.
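The evaluation pipeline (concatenate the descriptor families, train a support vector machine, score by 10-fold cross-validation) can be sketched as follows; the feature dimensions and data are random stand-ins for the paper's benchmark sets.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 200
y = rng.integers(0, 2, size=n)          # 1 = DNA-binding, 0 = non-binding
# Random stand-ins for the three descriptor families.
kskip = rng.normal(size=(n, 64))        # K-Skip-N-Grams
info = rng.normal(size=(n, 20))         # information-theoretic features
ssf = rng.normal(size=(n, 30))          # sequential/structural features
kskip[y == 1, :5] += 1.0                # inject a weak class signal

X = np.hstack([kskip, info, ssf])       # mixed-feature representation
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```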
Engineered nanomaterials (ENMs) are increasingly entering the environment with uncertain consequences including potential ecological effects. Various research communities view differently whether ecotoxicological testing of ENMs should be conducted using environmentally relevant ...
A framework for automatic information quality ranking of diabetes websites.
Belen Sağlam, Rahime; Taskaya Temizel, Tugba
2015-01-01
Objective: When searching for particular medical information on the internet, the challenge lies in distinguishing the websites that are relevant to the topic and contain accurate information. In this article, we propose a framework that automatically identifies and ranks diabetes websites according to their relevance and information quality based on the website content. Design: The proposed framework ranks diabetes websites according to their content quality, relevance, and evidence-based medicine. The framework combines information retrieval techniques with a lexical resource based on SentiWordNet, making it possible to work with biased and untrusted websites while, at the same time, ensuring content relevance. Measurement: The evaluation measures used were Pearson correlation, true positives, false positives, and accuracy. We tested the framework with a benchmark data set consisting of 55 websites with varying degrees of information quality problems. Results: The proposed framework gives good results that are comparable with the non-automated information quality measuring approaches in the literature. The correlation between the results of the proposed automated framework and the ground truth is 0.68 on average (p < 0.001), which is higher than that of other automated methods proposed in the literature (average r = 0.33).
Contribution to interplay between a delamination test and a sensory analysis of mid-range lipsticks.
Richard, C; Tillé-Salmon, B; Mofid, Y
2016-02-01
Lipstick is currently one of the best-selling products of the cosmetics industry, and the competition between the various manufacturers is significant. Customers mainly seek products with high spreadability, especially those that are long-lasting or long-wearing on the lips. Evaluation tests of cosmetics are usually performed by sensory analysis, which can represent a considerable cost. The objective of this study was to develop a fast and simple delamination test (an objective method with calibrated instruments) and to compare the obtained results with those of a discriminative sensory analysis (a subjective method) in order to show the relevance of the instrumental test. Three mid-range lipsticks were randomly chosen and tested. Their compositions were described according to the International Nomenclature of Cosmetic Ingredients (INCI). Instrumental characterization was performed by texture profile analysis and by a special delamination test. The sensory analysis was deliberately conducted with an untrained panel as a blind test to confirm or refute the possible interplay. The two approaches gave the same type of classification. The high-fat lipstick had the worst behaviour in the delamination test and the worst ratings of descriptor intensity in the sensory analysis. There is a high correlation between the sensory analysis and the instrumental measurements in this study. The delamination test should permit rapid determination of the lasting behaviour (as a screening test) and consequently help optimize the basic formula of lipsticks. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
McKinney, Brett A.; White, Bill C.; Grill, Diane E.; Li, Peter W.; Kennedy, Richard B.; Poland, Gregory A.; Oberg, Ann L.
2013-01-01
Relief-F is a nonparametric, nearest-neighbor machine learning method that has been successfully used to identify relevant variables that may interact in complex multivariate models to explain phenotypic variation. While several tools have been developed for assessing differential expression in sequence-based transcriptomics, the detection of statistical interactions between transcripts has received less attention in the area of RNA-seq analysis. We describe a new extension and assessment of Relief-F for feature selection in RNA-seq data. The ReliefSeq implementation adapts the number of nearest neighbors (k) for each gene to optimize the Relief-F test statistics (importance scores) for finding both main effects and interactions. We compare this gene-wise adaptive-k (gwak) Relief-F method with standard RNA-seq feature selection tools, such as DESeq and edgeR, and with the popular machine learning method Random Forests. We demonstrate performance on a panel of simulated data that have a range of distributional properties reflected in real mRNA-seq data including multiple transcripts with varying sizes of main effects and interaction effects. For simulated main effects, gwak-Relief-F feature selection performs comparably to standard tools DESeq and edgeR for ranking relevant transcripts. For gene-gene interactions, gwak-Relief-F outperforms all comparison methods at ranking relevant genes in all but the highest fold change/highest signal situations where it performs similarly. The gwak-Relief-F algorithm outperforms Random Forests for detecting relevant genes in all simulation experiments. In addition, Relief-F is comparable to the other methods based on computational time. We also apply ReliefSeq to an RNA-Seq study of smallpox vaccine to identify gene expression changes between vaccinia virus-stimulated and unstimulated samples. ReliefSeq is an attractive tool for inclusion in the suite of tools used for analysis of mRNA-Seq data; it has power to detect both main effects and interaction effects. Software Availability: http://insilico.utulsa.edu/ReliefSeq.php. PMID:24339943
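The core Relief-F update rewards features that differ across nearest neighbors of the opposite class ('misses') and penalizes features that differ across nearest same-class neighbors ('hits'). The bare-bones sketch below fixes k for brevity, whereas ReliefSeq adapts k per gene.

```python
import numpy as np

def relief_weights(X, y, k=5, n_iter=100, seed=0):
    """Bare-bones Relief-F importance scores. For each sampled instance,
    add the mean feature difference to its k nearest misses and subtract
    the mean difference to its k nearest hits."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)   # Manhattan distances
        dist[i] = np.inf                      # exclude the instance itself
        same, diff = y == y[i], y != y[i]
        hits = np.argsort(np.where(same, dist, np.inf))[:k]
        misses = np.argsort(np.where(diff, dist, np.inf))[:k]
        w += np.abs(X[i] - X[misses]).mean(axis=0)
        w -= np.abs(X[i] - X[hits]).mean(axis=0)
    return w / n_iter

X = np.vstack([np.random.default_rng(1).normal(0, 1, (50, 10)),
               np.random.default_rng(2).normal(0, 1, (50, 10))])
X[50:, 0] += 2.0                      # feature 0 carries the class signal
y = np.array([0] * 50 + [1] * 50)
print(relief_weights(X, y).round(2))  # weight of feature 0 stands out
```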
Shackelford, Rodney E.; Whitling, Nicholas A.; McNab, Patricia; Japa, Shanker
2012-01-01
Activating point mutations in codons 12, 13, and 61 of the KRAS proto-oncogene are common in colorectal, non–small cell lung, pancreatic, and thyroid cancers. Constitutively activated KRAS mutations are strongly associated with a resistance to anti–epidermal growth factor receptor (EGFR) therapies, such as panitumumab and cetuximab used for treating metastatic colorectal carcinoma and EGFR tyrosine inhibitors used for advanced non–small cell lung cancers. Since anti-EGFR therapies are costly and may exert deleterious effects on individuals without activating mutations, KRAS mutation testing is recommended prior to the initiation of anti-EGFR therapy for these malignancies. The goal of this review is to summarize the KRAS mutation testing methods. Testing is now routinely requested in the clinical practice to provide data to assign the most appropriate anticancer chemotherapy for each given patient. Review of the most relevant literature was performed. Several areas were considered: ordering of the test, selection of the sample to be tested, and review of the testing methodologies. We found that several different methods are used for clinical KRAS mutation testing. Each of the methodologies is described, and information is provided about their performance, cost, turnaround times, detection limits, sensitivities, and specificities. We also provided “tips” for the appropriate selection and preparation of the sample to be tested. This is an important aspect of KRAS testing for clinical use, as the results of the test will affect clinical decisions with consequences for the patient. PMID:23264846
NASA Astrophysics Data System (ADS)
Agrawal, Anant
Optical coherence tomography (OCT) is a powerful medical imaging modality that uniquely produces high-resolution cross-sectional images of tissue using low energy light. Its clinical applications and technological capabilities have grown substantially since its invention about twenty years ago, but comparatively little effort has been devoted to developing tools that assess the performance of OCT devices with respect to the quality and content of acquired images. Such tools are important to ensure that information derived from OCT signals and images is accurate and consistent, in order to support further technology development, promote standardization, and benefit public health. The research in this dissertation investigates new physical and computational models which can provide unique insights into specific performance characteristics of OCT devices. Physical models, known as phantoms, are fabricated and evaluated in the interest of establishing standardized test methods to measure several important quantities relevant to image quality. (1) Spatial resolution is measured with a nanoparticle-embedded phantom and model eye which together yield the point spread function under conditions where OCT is commonly used. (2) A multi-layered phantom is constructed to measure the contrast transfer function along the axis of light propagation, relevant for cross-sectional imaging capabilities. (3) Existing and new methods to determine device sensitivity are examined and compared, to better understand the detection limits of OCT. A novel computational model based on the finite-difference time-domain (FDTD) method, which simulates the physics of light behavior at the sub-microscopic level within complex, heterogeneous media, is developed to probe device and tissue characteristics influencing the information content of an OCT image. This model is first tested in simple geometric configurations to understand its accuracy and limitations; then a highly realistic representation of a biological cell, the retinal cone photoreceptor, is created and its resulting OCT signals studied. The phantoms and their associated test methods have successfully yielded novel types of data on the specific performance parameters of interest, which can feed standardization efforts within the OCT community. The level of signal detail provided by the computational model is unprecedented and gives significant insights into the effects of subcellular structures on OCT signals. Together, the outputs of this research effort serve as new tools in the toolkit to examine the intricate details of how, and how well, OCT devices produce information-rich images of biological tissue.
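Resolution measurements of the kind described usually reduce to estimating the full width at half maximum (FWHM) of a point-spread profile extracted from the phantom image. The sketch below shows the half-maximum interpolation on a synthetic Gaussian profile; all numbers are assumed.

```python
import numpy as np

def fwhm(z, intensity):
    """Full width at half maximum of a sampled point-spread profile via
    linear interpolation of the two half-maximum crossings."""
    half = intensity.max() / 2.0
    above = np.where(intensity >= half)[0]
    left, right = above[0], above[-1]
    # Interpolate each crossing for sub-sample precision.
    zl = np.interp(half, [intensity[left - 1], intensity[left]],
                   [z[left - 1], z[left]])
    zr = np.interp(half, [intensity[right + 1], intensity[right]],
                   [z[right + 1], z[right]])
    return zr - zl

z = np.linspace(-50, 50, 1001)            # depth axis in microns, assumed
psf = np.exp(-0.5 * (z / 8.0) ** 2)       # Gaussian PSF with sigma = 8 um
print(f"FWHM = {fwhm(z, psf):.1f} um (theory: {8.0 * 2.3548:.1f} um)")
```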
Diagnosis of dry eye disease and emerging technologies
Zeev, Maya Salomon-Ben; Miller, Darby Douglas; Latkany, Robert
2014-01-01
Dry eye is one of the most commonly encountered problems in ophthalmology. Signs can include punctate epithelial erosions, hyperemia, low tear lakes, rapid tear break-up time, and meibomian gland disease. Current methods of diagnosis include a slit-lamp examination with and without different stains, including fluorescein, rose bengal, and lissamine green. Other methods are the Schirmer test, tear function index, tear break-up time, and functional visual acuity. Emerging technologies include meniscometry, optical coherence tomography, tear film stability analysis, interferometry, tear osmolarity, the tear film normalization test, ocular surface thermography, and tear biomarkers. Patient-specific considerations involve relevant history of autoimmune disease, refractive surgery or use of oral medications, and allergies or rosacea. Other patient considerations include clinical examination for lid margin disease and presence of lagophthalmos or blink abnormalities. Given a complex presentation and a variety of signs and symptoms, it would be beneficial if there was an inexpensive, readily available, and reproducible diagnostic test for dry eye. PMID:24672224
Raman spectroscopic studies on bacteria
NASA Astrophysics Data System (ADS)
Maquelin, Kees; Choo-Smith, Lin-P'ing; Endtz, Hubert P.; Bruining, Hajo A.; Puppels, Gerwin J.
2000-11-01
Routine clinical microbiological identification of pathogenic micro-organisms is largely based on nutritional and biochemical tests. Laboratory results can be presented to a clinician after 2-3 days for most clinically relevant micro-organisms. Most of this time is required to obtain pure cultures and enough biomass for the tests to be performed. In the case of severely ill patients, this unavoidable time delay associated with such identification procedures can be fatal. A novel identification method based on confocal Raman microspectroscopy will be presented. With this method it is possible to obtain Raman spectra directly from microbial microcolonies on the solid culture medium, which have developed after only 6 hours of culturing for most commonly encountered organisms. Not only does this technique enable rapid (same day) identification, but it also preserves the sample, allowing it to be double-checked with traditional tests. This, combined with the speed and minimal sample handling, indicates that confocal Raman microspectroscopy has much potential as a powerful new tool in clinical diagnostic microbiology.
ERIC Educational Resources Information Center
Downing, Steven M.; Maatsch, Jack L.
To test the effect of clinically relevant multiple-choice item content on the validity of statistical discriminations of physicians' clinical competence, data were collected from a field test of the Emergency Medicine Examination, test items for the certification of specialists in emergency medicine. Two 91-item multiple-choice subscales were…
Measurement in Cross-Cultural Neuropsychology
Pedraza, Otto; Mungas, Dan
2010-01-01
The measurement of cognitive abilities across diverse cultural, racial, and ethnic groups has a contentious history, with broad political, legal, economic, and ethical repercussions. Advances in psychometric methods and converging scientific ideas about genetic variation afford new tools and theoretical contexts to move beyond the reflective analysis of between-group test score discrepancies. Neuropsychology is poised to benefit from these advances to cultivate a richer understanding of the factors that underlie cognitive test score disparities. To this end, the present article considers several topics relevant to the measurement of cognitive abilities across groups from diverse ancestral origins, including fairness and bias, equivalence, diagnostic validity, item response theory, and differential item functioning. PMID:18814034
Coupling Schemes for Multiphysics Reactor Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vijay Mahadeven; Jean Ragusa
2007-11-01
This report documents the progress of the student Vijay S. Mahadevan from the Nuclear Engineering Department of Texas A&M University over the summer of 2007 during his visit to the INL. The purpose of his visit was to investigate the physics-based preconditioned Jacobian-free Newton-Krylov method applied to physics relevant to nuclear reactor simulation. To this end he studied two test problems that represented reaction-diffusion and advection-reaction. These two test problems will provide the basis for future work in which neutron diffusion, nonlinear heat conduction, and a two-phase flow model will be tightly coupled to provide an accurate model of a BWR core.
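The defining trick of the Jacobian-free Newton-Krylov method is that the Krylov solver needs only Jacobian-vector products, which can be approximated by a directional finite difference of the nonlinear residual, so the Jacobian is never assembled. The sketch below applies this to a toy 1-D reaction-diffusion residual; the physics-based preconditioning that was the focus of the visit is omitted.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    """Toy 1-D reaction-diffusion residual: u'' - u**2 + 1 = 0, unit spacing."""
    r = np.empty_like(u)
    r[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:] - u[1:-1] ** 2 + 1.0
    r[0], r[-1] = u[0], u[-1]            # homogeneous Dirichlet boundaries
    return r

def jfnk(u, n_newton=30, fd_eps=1e-7):
    """Jacobian-free Newton-Krylov: J(u)v is approximated by a finite
    difference of the residual, so the Jacobian is never formed."""
    for _ in range(n_newton):
        r = residual(u)
        if np.linalg.norm(r) < 1e-10:
            break
        J = LinearOperator(
            (u.size, u.size),
            matvec=lambda v: (residual(u + fd_eps * v) - r) / fd_eps)
        du, info = gmres(J, -r)          # unpreconditioned Krylov solve
        u = u + du
    return u

u = jfnk(np.zeros(64))
print("final residual norm:", np.linalg.norm(residual(u)))
```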
AQUA-USERS: AQUAculture USEr Driven Operational Remote Sensing Information Services
NASA Astrophysics Data System (ADS)
Laanen, Marnix; Poser, Kathrin; Peters, Steef; de Reus, Nils; Ghebrehiwot, Semhar; Eleveld, Marieke; Miller, Peter; Groom, Steve; Clements, Oliver; Kurekin, Andrey; Martinez Vicente, Victor; Brotas, Vanda; Sa, Carolina; Couto, Andre; Brito, Ana; Amorim, Ana; Dale, Trine; Sorensen, Kai; Boye Hansen, Lars; Huber, Silvia; Kaas, Hanne; Andersson, Henrik; Icely, John; Fragoso, Bruno
2015-12-01
The FP7 project AQUA-USERS provides the aquaculture industry with user-relevant and timely information based on the most up-to-date satellite data and innovative optical in-situ measurements. Its key purpose is to develop an application that brings together satellite information on water quality and temperature with in-situ observations as well as relevant weather prediction and met-ocean data. The application and its underlying database are linked to a decision support system that includes a set of (user-determined) management options. Specific focus is on the development of indicators for aquaculture management including indicators for harmful algae bloom (HAB) events. The methods and services developed within AQUA-USERS are tested by the members of the user board, who represent different geographic areas and aquaculture production systems.
Joint Concept Correlation and Feature-Concept Relevance Learning for Multilabel Classification.
Zhao, Xiaowei; Ma, Zhigang; Li, Zhi; Li, Zhihui
2018-02-01
In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most multilabel classification methods focus only on the inherent correlations existing among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method that aims to capture the correlations among multiple concepts by leveraging a hypergraph, which has been proven beneficial for relational learning. Moreover, we consider mining feature-concept relevance, which is often overlooked by many multilabel learning algorithms. To better expose the feature-concept relevance, we impose a sparsity constraint on the proposed method. We compare the proposed method with several other multilabel classification methods and evaluate the classification performance by mean average precision on several data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.
Genotoxicity assessment of nanomaterials: recommendations on best practices, assays and methods.
Elespuru, Rosalie; Pfuhler, Stefan; Aardema, Marilyn; Chen, Tao; Doak, Shareen H; Doherty, Ann; Farabaugh, Christopher S; Kenny, Julia; Manjanatha, Mugimane; Mahadevan, Brinda; Moore, Martha M; Ouédraogo, Gladys; Stankowski, Leon F; Tanir, Jennifer Y
2018-04-26
Nanomaterials (NMs) present unique challenges in safety evaluation. An international working group, the Genetic Toxicology Technical Committee of the International Life Sciences Institute's Health and Environmental Sciences Institute, has addressed issues related to the genotoxicity assessment of NMs. A critical review of published data has been followed by recommendations on methods alterations and best practices for the standard genotoxicity assays: bacterial reverse mutation (Ames); in vitro mammalian assays for mutations, chromosomal aberrations, micronucleus induction, or DNA strand breaks (comet); and in vivo assays for genetic damage (micronucleus, comet and transgenic mutation assays). The analysis found a great diversity of tests and systems used for in vitro assays; many did not meet criteria for a valid test, and/or did not use validated cells and methods in the Organization for Economic Co-operation and Development Test Guidelines, and so these results could not be interpreted. In vivo assays were less common but better performed. It was not possible to develop conclusions on test system agreement, NM activity, or mechanism of action. However, the limited responses observed for most NMs were consistent with indirect genotoxic effects, rather than direct interaction of NMs with DNA. We propose a revised genotoxicity test battery for NMs that includes in vitro mammalian cell mutagenicity and clastogenicity assessments; in vivo assessments would be added only if warranted by information on specific organ exposure or sequestration of NMs. The bacterial assays are generally uninformative for NMs due to limited particle uptake and possible lack of mechanistic relevance, and are thus omitted in our recommended test battery for NM assessment. Recommendations include NM characterization in the test medium, verification of uptake into target cells, and limited assay-specific methods alterations to avoid interference with uptake or endpoint analysis. These recommendations are summarized in a Roadmap guideline for testing.
NASA Astrophysics Data System (ADS)
Lenferna, Georges Alexandre; Russotto, Rick D.; Tan, Amanda; Gardiner, Stephen M.; Ackerman, Thomas P.
2017-06-01
In this paper, we focus on stratospheric sulfate injection as a geoengineering scheme, and provide a combined scientific and ethical analysis of climate response tests, which are a subset of outdoor tests that would seek to impose detectable and attributable changes to climate variables on global or regional scales. We assess the current state of scientific understanding on the plausibility and scalability of climate response tests. Then, we delineate a minimal baseline against which to consider whether certain climate response tests would be relevant for a deployment scenario. Our analysis shows that some climate response tests, such as those attempting to detect changes in regional climate impacts, may not be deployable in time periods relevant to realistic geoengineering scenarios. This might pose significant challenges for justifying stratospheric sulfate aerosol injection deployment overall. We then survey some of the major ethical challenges that proposed climate response tests face. We consider what levels of confidence would be required to ethically justify approving a proposed test; whether the consequences of tests are subject to similar questions of justice, compensation, and informed consent as full-scale deployment; and whether questions of intent and hubris are morally relevant for climate response tests. We suggest further research into laboratory-based work and modeling may help to narrow the scientific uncertainties related to climate response tests, and help inform future ethical debate. However, even if such work is pursued, the ethical issues raised by proposed climate response tests are significant and manifold.
2012-01-01
Background Routine pre-operative tests for anesthesia management are often ordered by both anesthesiologists and surgeons for healthy patients undergoing low-risk surgery. The Theoretical Domains Framework (TDF) was developed to investigate determinants of behaviour and identify potential behaviour change interventions. In this study, the TDF is used to explore anesthesiologists' and surgeons' perceptions of ordering routine tests for healthy patients undergoing low-risk surgery. Methods Sixteen clinicians (eleven anesthesiologists and five surgeons) throughout Ontario were recruited. An interview guide based on the TDF was developed to identify beliefs about pre-operative testing practices. Content analysis of physicians' statements into the relevant theoretical domains was performed. Specific beliefs were identified by grouping similar utterances of the interview participants. Relevant domains were identified by noting the frequencies of the beliefs reported, the presence of conflicting beliefs, and the perceived influence on the performance of the behaviour under investigation. Results Seven of the twelve domains were identified as likely relevant to changing clinicians' behaviour regarding pre-operative test ordering for anesthesia management. Key beliefs were identified within these domains, including: conflicting comments about who was responsible for test-ordering (Social/professional role and identity); inability to cancel tests ordered by fellow physicians (Beliefs about capabilities and Social influences); and the problem of tests being completed before the anesthesiologists see the patient (Beliefs about capabilities and Environmental context and resources). Often, tests were ordered by an anesthesiologist based on who might be the attending anesthesiologist on the day of surgery, while surgeons ordered tests they thought anesthesiologists might need (Social influences). There were also conflicting comments about the potential consequences associated with reducing testing, from negative (delay or cancel patients' surgeries), to indifference (little or no change in patient outcomes), to positive (save money, avoid unnecessary investigations) (Beliefs about consequences). Further, while most agreed that they are motivated to reduce the ordering of unnecessary tests (Motivation and goals), a gap between motivation and practice was still reported (Behavioural regulation). Conclusion We identified key factors that anesthesiologists and surgeons believe influence whether they routinely order pre-operative tests for anesthesia management for healthy adults undergoing low-risk surgery. These beliefs identify potential individual, team, and organisation targets for behaviour change interventions to reduce unnecessary routine test ordering. PMID:22682612
NASA Astrophysics Data System (ADS)
Vogt, William C.; Jia, Congxian; Wear, Keith A.; Garra, Brian S.; Pfefer, T. Joshua
2017-03-01
As Photoacoustic Tomography (PAT) matures and undergoes clinical translation, objective performance test methods are needed to facilitate device development, regulatory clearance and clinical quality assurance. For mature medical imaging modalities such as CT, MRI, and ultrasound, tissue-mimicking phantoms are frequently incorporated into consensus standards for performance testing. A well-validated set of phantom-based test methods is needed for evaluating performance characteristics of PAT systems. To this end, we have constructed phantoms using a custom tissue-mimicking material based on PVC plastisol with tunable, biologically-relevant optical and acoustic properties. Each phantom is designed to enable quantitative assessment of one or more image quality characteristics including 3D spatial resolution, spatial measurement accuracy, ultrasound/PAT co-registration, uniformity, penetration depth, geometric distortion, sensitivity, and linearity. The phantoms contained targets such as high-intensity point sources and dye-filled tubes. This suite of phantoms was used to measure how the performance of a custom PAT system (equipped with four interchangeable linear array transducers of varying design) depends on design parameters (e.g., center frequency, bandwidth, element geometry). The phantoms also allowed comparison of image artifacts, including surface-generated clutter and bandlimited sensing artifacts. Results showed that transducer design parameters create strong variations in performance, including a trade-off between resolution and penetration depth, which could be quantified with our method. This study demonstrates the utility of phantom-based image quality testing in device performance assessment, which may guide development of consensus standards for PAT systems.
2013-01-01
Background Knowledge and understanding of basic biomedical sciences remain essential to medical practice, particularly when faced with the continual advancement of diagnostic and therapeutic modalities. Evidence suggests, however, that retention tends to atrophy across the span of an average medical course and into the early postgraduate years, as preoccupation with clinical medicine predominates. We postulated that perceived relevance demonstrated through applicability to clinical situations may assist in retention of basic science knowledge. Methods To test this hypothesis in our own medical student cohort, we administered a paper-based 50 MCQ assessment to a sample of students from Years 2 through 5. Covariates pertaining to demographics, prior educational experience, and the perceived clinical relevance of each question were also collected. Results A total of 232 students (Years 2–5, response rate 50%) undertook the assessment task. This sample had comparable demographic and performance characteristics to the whole medical school cohort. In general, discipline-specific and overall scores were better for students in the latter years of the course compared to those in Year 2; male students and domestic students tended to perform better than their respective counterparts in certain disciplines. In the clinical years, perceived clinical relevance was significantly and positively correlated with item performance. Conclusions This study suggests that perceived clinical relevance is a contributing factor to the retention of basic science knowledge and behoves curriculum planners to make clinical relevance a more explicit component of applied science teaching throughout the medical course. PMID:24099045
ERIC Educational Resources Information Center
van Well, Sonja; Kolk, Annemarie M.; Klugkist, Irene G.
2008-01-01
The authors tested the hypothesis that a match between the gender relevance of a stressor and one's sex or gender role identification would elicit higher cardiovascular responses. Healthy female and male undergraduates (n = 108) were exposed to two stressors: the Cold Pressor Test (CPT) and the n-back task. Stressor relevance was manipulated to be…
A Comparison of Two Methods for Boolean Query Relevancy Feedback.
ERIC Educational Resources Information Center
Salton, G.; And Others
1984-01-01
Evaluates and compares two recently proposed automatic methods for relevance feedback of Boolean queries (Dillon method, which uses probabilistic approach as basis, and disjunctive normal form method). Conclusions are drawn concerning the use of effective feedback methods in a Boolean query environment. Nineteen references are included. (EJS)
Effect of postmortem sampling technique on the clinical significance of autopsy blood cultures.
Hove, M; Pencil, S D
1998-02-01
Our objective was to investigate the value of postmortem autopsy blood cultures performed with an iodine-subclavian technique relative to the classical method of atrial heat searing and to antemortem blood cultures. The study consisted of a prospective autopsy series with each case serving as its own control relative to subsequent testing, and a retrospective survey of patients coming to autopsy who had both autopsy blood cultures and premortem blood cultures. A busy academic autopsy service (600 cases per year) at University of Texas Medical Branch Hospitals, Galveston, Texas, served as the setting for this work. The incidence of non-clinically relevant (false-positive) culture results was compared using different methods for collecting blood samples in a prospective series of 38 adult autopsy specimens. One hundred eleven adult autopsy specimens in which both postmortem and antemortem blood cultures were obtained were studied retrospectively. For both studies, positive culture results were scored as either clinically relevant or false positive based on analysis of the autopsy findings and the clinical summary. The rate of false-positive culture results obtained by the iodine-subclavian technique from blood drawn soon after death was statistically significantly lower (13%) than that obtained using the classical method of drawing blood through the atrium after heat searing at the time of the autopsy (34%) in the same set of autopsy subjects. When autopsy results were compared with subjects' antemortem blood culture results, there was no significant difference in the rate of non-clinically relevant culture results in a paired retrospective series of antemortem blood cultures and postmortem blood cultures using the iodine-subclavian postmortem method (11.7% v 13.5%). The results indicate that autopsy blood cultures obtained using the iodine-subclavian technique have reliability equivalent to that of antemortem blood cultures.
Fully automated motion correction in first-pass myocardial perfusion MR image sequences.
Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2008-11-01
This paper presents a novel method for registration of cardiac perfusion magnetic resonance imaging (MRI). The presented method is capable of automatically registering perfusion data, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of that ICA. This reference image is used in a two-pass registration framework. Qualitative and quantitative validation of the method is carried out using 46 clinical quality, short-axis, perfusion MR datasets comprising 100 images each. Despite varying image quality and motion patterns in the evaluation set, validation of the method showed a reduction of the average left ventricle (LV) motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration, with the average error between registered data and the manual gold standard reduced from 2.65+/-7.89% to 0.87+/-3.88%. Comparison of clinically relevant parameters computed using registered data and the manual gold standard shows good agreement. Additional tests with a simulated free-breathing protocol showed robustness against considerable deviations from a standard breathing protocol. We conclude that this fully automatic ICA-based method shows an accuracy, robustness and computation speed adequate for use in a clinical environment.
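As an illustration of the reference-image construction described above, an ICA of the frame-by-pixel data matrix yields component time courses and spatial maps from which a time-varying, motion-free reference series can be reconstructed. The sketch below is an assumed minimal version using scikit-learn, not the authors' implementation:

```python
# Build a time-varying reference series from an ICA of a perfusion
# sequence: frames is an (n_frames, height, width) NumPy array.
import numpy as np
from sklearn.decomposition import FastICA

def ica_reference(frames, n_components=3):
    n_frames, h, w = frames.shape
    X = frames.reshape(n_frames, h * w).astype(float)
    ica = FastICA(n_components=n_components, random_state=0)
    S = ica.fit_transform(X)      # (n_frames, k) component time courses
    A = ica.mixing_               # (h*w, k) spatial maps
    # Reconstruction from a few components mimics the contrast dynamics
    # while suppressing frame-to-frame motion and noise.
    X_ref = S @ A.T + ica.mean_
    return X_ref.reshape(n_frames, h, w)
```

Each reconstructed frame can then serve as the registration target for the corresponding acquired frame, so the moving image is always compared against a reference with matching contrast.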
Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L
2016-01-01
Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We also describe the use of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases relevant to enzymes. © 2016 Elsevier Inc. All rights reserved.
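As context for the nonequilibrium work method named above, the underlying work relation is commonly written in the Jarzynski form (a standard textbook statement, not the chapter's full QM/MM working equations):

```latex
\Delta F = -\frac{1}{\beta}\,\ln\!\left\langle e^{-\beta W}\right\rangle_{0},
\qquad \beta = \frac{1}{k_{\mathrm{B}}T},
```

where W is the work accumulated along a nonequilibrium switching trajectory and the average runs over trajectories launched from the initial equilibrium ensemble.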
Quality and Certification of Electronic Health Records
Hoerbst, A.; Ammenwerth, E.
2010-01-01
Background Numerous projects, initiatives, and programs are dedicated to the development of Electronic Health Records (EHR) worldwide. Increasingly, these plans are being brought from a scientific environment to real-life applications. In this context, quality is a crucial factor with regard to the acceptance and utility of Electronic Health Records. However, the dissemination of the existing quality approaches is often rather limited. Objectives The present paper aims at the description and comparison of the current major quality certification approaches to EHRs. Methods A literature analysis was carried out in order to identify the relevant publications with regard to EHR quality certification. PubMed, ACM Digital Library, IEEExplore, CiteSeer, and Google (Scholar) were used to collect relevant sources. The documents obtained were analyzed using techniques of qualitative content analysis. Results The analysis discusses and compares the quality approaches of CCHIT, EuroRec, IHE, openEHR, and EN13606. These approaches differ with regard to their focus, support of service-oriented EHRs, process of (re-)certification and testing, number of systems certified and tested, supporting organizations, and regional relevance. Discussion The analyzed approaches show differences with regard to their structure and processes. System vendors can exploit these approaches in order to improve and certify their information systems. Health care organizations can use these approaches to support selection processes or to assess the quality of their own information systems. PMID:23616834
Decoding 2D-PAGE complex maps: relevance to proteomics.
Pietrogrande, Maria Chiara; Marchetti, Nicola; Dondi, Francesco; Righetti, Pier Giorgio
2006-03-20
This review describes two mathematical approaches useful for decoding the complex signal of 2D-PAGE maps of protein mixtures. These methods are helpful for interpreting the large amount of data in each 2D-PAGE map by extracting the analytical information hidden therein by spot overlapping. Here the basic theory and application to 2D-PAGE maps are reviewed: the means for extracting information from the experimental data and their relevance to proteomics are discussed. One method is based on the quantitative theory of the statistical model of peak overlapping (SMO), using the spot experimental data (intensity and spatial coordinates). The second method is based on the study of the 2D-autocovariance function (2D-ACVF) computed on the experimental digitised map. They are two independent methods able to extract shared and complementary information from the 2D-PAGE map. Both methods make it possible to obtain fundamental information on the sample complexity and the separation performance and to single out ordered patterns present in spot positions: the availability of two independent procedures to compute the same separation parameters is a powerful tool for estimating the reliability of the obtained results. The SMO procedure is a unique tool to quantitatively estimate the degree of spot overlapping present in the map, while the 2D-ACVF method is particularly powerful in simply singling out the presence of order in the spot positions from the complexity of the whole 2D map, i.e., spot trains. The procedures were validated by extensive numerical computation on computer-generated maps describing experimental 2D-PAGE gels of protein mixtures. Their applicability to real samples was tested on reference maps obtained from literature sources. The review describes the most relevant information for proteomics: sample complexity, separation performance, overlapping extent, and identification of spot trains related to post-translational modifications (PTMs).
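As an illustration of the 2D-ACVF idea described above, the autocovariance of a digitised map can be computed efficiently via the Wiener-Khinchin relation; the sketch below is a minimal assumed implementation, not the authors' software:

```python
# Minimal 2D autocovariance function (2D-ACVF) of a digitised map.
# Uses the FFT-based Wiener-Khinchin relation; note this yields the
# circular (periodic) autocovariance, so maps should be mean-subtracted
# and, if needed, zero-padded to suppress wrap-around lags.
import numpy as np

def acvf2d(img):
    x = np.asarray(img, dtype=float)
    x -= x.mean()
    F = np.fft.fft2(x)
    acvf = np.fft.ifft2(F * np.conj(F)).real / x.size
    return np.fft.fftshift(acvf)  # zero lag moved to the array centre

# Ordered spot patterns (e.g., spot trains from PTMs) appear as
# periodic secondary maxima along the corresponding lag direction.
```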
Desynchronization of stochastically synchronized chemical oscillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snari, Razan; Tinsley, Mark R.; Faramarzi, Sadegh
Experimental and theoretical studies are presented on the design of perturbations that enhance desynchronization in populations of oscillators that are synchronized by periodic entrainment. A phase reduction approach is used to determine optimal perturbation timing based upon experimentally measured phase response curves. The effectiveness of the perturbation waveforms is tested experimentally in populations of periodically and stochastically synchronized chemical oscillators. The relevance of the approach to therapeutic methods for disrupting phase coherence in groups of stochastically synchronized neuronal oscillators is discussed.
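The abstract's central idea, timing a perturbation from the phase response curve (PRC) so as to spread oscillator phases, can be caricatured in a few lines. The toy model below uses an assumed sinusoidal PRC and arbitrary parameters, not the experimental chemical-oscillator system:

```python
# Toy population of noisy phase oscillators: periodic pulses entrain
# (contract) the phases; a single PRC-shaped kick of opposite sign
# expands them, i.e., desynchronizes the population.
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt = 200, 6000, 0.01
omega = 1.0 + 0.02 * rng.standard_normal(n)   # natural frequencies
theta = 2 * np.pi * rng.random(n)             # initial phases

def sync(th):
    return abs(np.exp(1j * th).mean())        # Kuramoto order parameter

for t in range(steps):
    theta += omega * dt + 0.05 * np.sqrt(dt) * rng.standard_normal(n)
    if t % 600 == 0:
        theta -= 0.3 * np.sin(theta)          # entraining pulse
    if t == 4000:
        theta += 1.5 * np.sin(theta)          # desynchronizing PRC kick
print(f"final order parameter: {sync(theta):.2f}")
```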
Identification of clinically relevant viridans streptococci by an oligonucleotide array.
Chen, Chao Chien; Teng, Lee Jene; Kaiung, Seng; Chang, Tsung Chain
2005-04-01
Viridans streptococci (VS) are common etiologic agents of subacute infective endocarditis and are capable of causing a variety of pyogenic infections. Many species of VS are difficult to differentiate by phenotypic traits. An oligonucleotide array based on 16S-23S rRNA gene intergenic spacer (ITS) sequences was developed to identify 11 clinically relevant VS. These 11 species were Streptococcus anginosus, S. constellatus, S. gordonii, S. intermedius, S. mitis, S. mutans, S. oralis, S. parasanguinis, S. salivarius, S. sanguinis, and S. uberis. The method consisted of PCR amplification of the ITS regions using a pair of universal primers, followed by hybridization of the digoxigenin-labeled PCR products to a panel of species-specific oligonucleotides immobilized on a nylon membrane. After 120 strains of the 11 species of VS and 91 strains of other bacteria were tested, the sensitivity and specificity of the oligonucleotide array were found to be 100% (120 of 120 strains) and 95.6% (87 of 91 strains), respectively. S. pneumoniae cross-hybridized to the probes used for the identification of S. mitis, and simple biochemical tests such as optochin susceptibility or bile solubility should be used to differentiate S. pneumoniae from S. mitis. In conclusion, identification of species of VS using the present oligonucleotide array is accurate and could be used as an alternative reliable method for species identification of strains of VS.
49 CFR 240.215 - Retaining information supporting determinations.
Code of Federal Regulations, 2010 CFR
2010-10-01
...; (3) Any relevant data furnished by a governmental agency concerning the person's motor vehicle... administered. (e) The information concerning demonstrated performance skills that the railroad shall retain... the performance skills test(s) that documents the relevant operating facts on which the evaluation is...
Evaluation of two-phase flow solvers using Level Set and Volume of Fluid methods
NASA Astrophysics Data System (ADS)
Bilger, C.; Aboukhedr, M.; Vogiatzaki, K.; Cant, R. S.
2017-09-01
Two principal methods have been used to simulate the evolution of two-phase immiscible flows of liquid and gas separated by an interface. These are the Level-Set (LS) method and the Volume of Fluid (VoF) method. Both methods attempt to represent the very sharp interface between the phases and to deal with the large jumps in physical properties associated with it. Both methods have their own strengths and weaknesses. For example, the VoF method is known to be prone to excessive numerical diffusion, while the basic LS method has some difficulty in conserving mass. Major progress has been made in remedying these deficiencies, and both methods have now reached a high level of physical accuracy. Nevertheless, there remains an issue, in that each of these methods has been developed by different research groups, using different codes, and most importantly the implementations have been fine-tuned to tackle different applications. Thus, it remains unclear what the remaining advantages and drawbacks of each method are relative to the other, and what might be the optimal way to unify them. In this paper, we address this gap by performing a direct comparison of two current state-of-the-art variations of these methods (LS: RCLSFoam and VoF: interPore) implemented in the same code (OpenFOAM). We subject both methods to a pair of benchmark test cases while using the same numerical meshes to examine a) the accuracy of curvature representation, b) the effect of tuning parameters, c) the ability to minimise spurious velocities and d) the ability to tackle fluids with very different densities. For each method, one of the test cases is chosen to be fairly benign while the other test case is expected to present a greater challenge. The results indicate that both methods can be made to work well on both test cases, while displaying different sensitivity to the relevant parameters.
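For orientation, the two approaches advect different interface fields; in their standard textbook forms (not specific to RCLSFoam or interPore), the transport equations for an incompressible velocity field u are:

```latex
\underbrace{\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = 0}_{\text{LS: signed-distance function } \phi}
\qquad\qquad
\underbrace{\frac{\partial \alpha}{\partial t} + \nabla\cdot(\alpha\,\mathbf{u}) = 0}_{\text{VoF: volume fraction } \alpha \in [0,1]}
```

The LS field must be periodically reinitialised to remain a signed distance (the source of mass loss), while the VoF fraction is conserved by construction but smears unless a sharp interface-capturing scheme is used.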
Hypothesis tests for the detection of constant speed radiation moving sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir
2015-07-01
Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, which are inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on empirically estimated means and variances of the signals delivered by the different channels have shown significant gains in terms of the tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations (a pedestrian source carrier under respectively high and low count rate radioactive background, and a vehicle source carrier under the same respectively high and low count rate radioactive background), the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter for signal-to-noise ratio variations between 2 and 0.8. (authors)
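The paper's test itself is not reproduced in the abstract. As a minimal illustration of a count-aware alternative to empirical mean/variance estimates, two Poisson counts (measurement window versus background window of equal duration) can be compared with an exact conditional binomial test:

```python
# Illustrative Poisson-aware detection test (not the authors' algorithm):
# conditional on the total, n1 ~ Binomial(n1 + n2, 0.5) under the null
# hypothesis of equal Poisson rates in the two windows.
from scipy.stats import binomtest

n1, n2 = 42, 25                    # hypothetical counts, equal live times
res = binomtest(n1, n1 + n2, p=0.5, alternative="greater")
alarm = res.pvalue < 1e-3          # false-alarm budget per comparison
print(f"p-value = {res.pvalue:.4f}, alarm = {alarm}")
```

Because the test conditions on the observed total, it keeps its nominal false-alarm rate at both high and low count rates, which is the property an empirically calibrated test tends to lose.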
Quantitative methods in assessment of neurologic function.
Potvin, A R; Tourtellotte, W W; Syndulko, K; Potvin, J
1981-01-01
Traditionally, neurologists have emphasized qualitative techniques for assessing results of clinical trials. However, in recent years qualitative evaluations have been increasingly augmented by quantitative tests for measuring neurologic functions pertaining to mental state, strength, steadiness, reactions, speed, coordination, sensation, fatigue, gait, station, and simulated activities of daily living. Quantitative tests have long been used by psychologists for evaluating asymptomatic function, assessing human information processing, and predicting proficiency in skilled tasks; however, their methodology has never been directly assessed for validity in a clinical environment. In this report, relevant contributions from the literature on asymptomatic human performance and that on clinical quantitative neurologic function are reviewed and assessed. While emphasis is focused on tests appropriate for evaluating clinical neurologic trials, evaluations of tests for reproducibility, reliability, validity, and examiner training procedures, and for effects of motivation, learning, handedness, age, and sex are also reported and interpreted. Examples of statistical strategies for data analysis, scoring systems, data reduction methods, and data display concepts are presented. Although investigative work still remains to be done, it appears that carefully selected and evaluated tests of sensory and motor function should be an essential factor for evaluating clinical trials in an objective manner.
Gilbert, Rebecca S; Nunez, Brandy; Sakurai, Kumi; Fielder, Thomas; Ni, Hsiao-Tzu
2016-03-24
Growing concerns about the safety of ART with respect to human gametes, embryos, clinical outcomes and the long-term health of offspring require improved methods of risk assessment to provide functionally relevant assays for quality control testing and pre-clinical studies prior to clinical implementation. The one-cell mouse embryo assay (MEA) is the assay most widely used for development and quality testing of human ART products; however, concerns exist due to the insensitivity/variability of this bioassay, which lacks standardization and involves subjective analysis by morphology alone rather than functional analysis of the developing embryos. We hypothesized that improvements to the MEA through the use of functional molecular biomarkers could enhance sensitivity and improve detection of suboptimal materials/conditions. Fresh one-cell transgenic mouse embryos with green fluorescent protein (GFP) expression driven by Pou6f1 or Cdx2 control elements were harvested and cultured to blastocysts in varied test and control conditions to compare assessment by standard morphology alone versus the added dynamic expression of GFP for screening and selection of critical raw materials and detection of suboptimal culture conditions. Transgenic mouse embryos expressing functionally relevant biomarkers of normal early embryo development can be used to monitor the developmental impact of culture conditions. This novel approach provides a superior MEA that is more meaningful and sensitive for detection of embryotoxicity than morphological assessment alone.
A real-time PCR diagnostic method for detection of Naegleria fowleri.
Madarová, Lucia; Trnková, Katarína; Feiková, Sona; Klement, Cyril; Obernauerová, Margita
2010-09-01
Naegleria fowleri is a free-living amoeba that can cause primary amoebic meningoencephalitis (PAM). While traditional methods for diagnosing PAM still rely on culture, more recent laboratory diagnostics are based on conventional PCR methods; however, only a few real-time PCR processes have been described so far. Here, we describe a real-time PCR-based diagnostic method using hybridization fluorescent-labelled probes, with a LightCycler instrument and accompanying software (Roche), targeting the Naegleria fowleri Mp2Cl5 gene sequence. Using this method, no cross-reactivity with other tested epidemiologically relevant prokaryotic and eukaryotic organisms was found. The detection limit of the reaction was 1 copy of the Mp2Cl5 DNA sequence. This assay could become useful in the rapid laboratory diagnostic assessment of the presence or absence of Naegleria fowleri. Copyright 2009 Elsevier Inc. All rights reserved.
Enhanced visualization of the retinal vasculature using depth information in OCT.
de Moura, Joaquim; Novo, Jorge; Charlón, Pablo; Barreira, Noelia; Ortega, Marcos
2017-12-01
Retinal vessel tree extraction is a crucial step for analyzing the microcirculation, a frequently needed process in the study of relevant diseases. To date, this has normally been done by using 2D image capture paradigms, offering a restricted visualization of the real layout of the retinal vasculature. In this work, we propose a new approach that automatically segments and reconstructs the 3D retinal vessel tree by combining near-infrared reflectance retinography information with Optical Coherence Tomography (OCT) sections. Our proposal identifies the vessels, estimates their calibers, and obtains the depth at all the positions of the entire vessel tree, thereby enabling the reconstruction of the 3D layout of the complete arteriovenous tree for subsequent analysis. The method was tested using 991 OCT images combined with their corresponding near-infrared reflectance retinography. The different stages of the methodology were validated using the opinion of an expert as a reference. The tests offered accurate results, showing coherent reconstructions of the 3D vasculature that can be analyzed in the diagnosis of relevant diseases affecting the retinal microcirculation, such as hypertension or diabetes, among others.
NASA Astrophysics Data System (ADS)
Treagust, David F.; Qureshi, Sheila S.; Vishnumolakala, Venkat Rao; Ojeil, Joseph; Mocerino, Mauro; Southam, Daniel C.
2018-04-01
Educational reforms in Qatar have seen the implementation of inquiry-based learning and other student-centred pedagogies. However, there have been few efforts to investigate how these adopted western pedagogies align with the high-context culture of Qatar. The study presented in this article highlights the implementation of a student-centred intervention called Process-Oriented Guided Inquiry Learning (POGIL) in selected independent Arabic government schools in Qatar. The study followed a theoretical framework composed of culturally relevant pedagogical practice and social constructivism in teaching and learning. A mixed-method research design involving experimental and comparison groups was utilised. Carefully structured learning materials, implemented systematically in a POGIL intervention, helped Grade 10 science students improve their perceptions of chemistry learning, as measured pre and post by the What Is Happening In this Class (WIHIC) questionnaire and a school-administered achievement test. The study further provided school-based mentoring and professional development opportunities for teachers in the region. Significantly, POGIL was found to be adaptable to the Arabic context.
Assessment of statistical significance and clinical relevance.
Kieser, Meinhard; Friede, Tim; Gondan, Matthias
2013-05-10
In drug development, it is well accepted that a successful study will demonstrate not only a statistically significant result but also a clinically relevant effect size. Whereas standard hypothesis tests are used to demonstrate the former, it is less clear how the latter should be established. In the first part of this paper, we consider the responder analysis approach and study the performance of locally optimal rank tests when the outcome distribution is a mixture of responder and non-responder distributions. We find that these tests are quite sensitive to their planning assumptions and have therefore not really any advantage over standard tests such as the t-test and the Wilcoxon-Mann-Whitney test, which perform overall well and can be recommended for applications. In the second part, we present a new approach to the assessment of clinical relevance based on the so-called relative effect (or probabilistic index) and derive appropriate sample size formulae for the design of studies aiming at demonstrating both a statistically significant and clinically relevant effect. Referring to recent studies in multiple sclerosis, we discuss potential issues in the application of this approach. Copyright © 2012 John Wiley & Sons, Ltd.
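For reference, the relative effect (probabilistic index) mentioned above is conventionally defined, for a control outcome X and a treatment outcome Y, as (standard definition; the paper's sample size formulae are not reproduced here):

```latex
p = P(X < Y) + \tfrac{1}{2}\,P(X = Y),
```

so that p = 1/2 means neither group tends to larger values; the Wilcoxon-Mann-Whitney statistic is the natural nonparametric estimator of p.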
NASA Astrophysics Data System (ADS)
Günter, Tuğçe; Alpat, Sibel Kılınç
2017-11-01
The purpose of this study was to investigate the effect of the case-based learning (CBL) method used in teaching "biochemical oxygen demand (BOD)," a topic in the environmental chemistry course at Dokuz Eylul University, on the academic achievement and opinions of students. The research had a quasi-experimental design and the study group consisted of 4th and 5th grade students (N = 18) attending the Chemistry Teaching Program at a university in Izmir. The "Biochemical Oxygen Demand Achievement Test (BODAT)" and a structured interview form were used as data collection tools. The BODAT post-test results showed a higher increase in the achievement scores of the experimental group, which may indicate the effectiveness of the CBL method in improving academic achievement in the relevant topic. In addition, the experimental and control group students had positive opinions regarding the method, the scenario, and the material. The students found the method, the scenario, and the material to be interesting, understandable/instructional, relatable to everyday life, suitable for the topic, and conducive to active participation.
Kekäläinen, J; Podgorniak, T; Puolakka, T; Hyvärinen, P; Vainikka, A
2014-11-01
Selectivity of recreational angling on fish behaviour was studied by examining whether capture order or lure type (natural v. artificial bait) in ice-fishing could explain behavioural variation among perch Perca fluviatilis individuals. It was also tested whether individually assessed personality predicts fish behaviour in groups, in the presence of natural predators. Perca fluviatilis showed individually repeatable behaviour both in individual and in group tests. Capture order, capture method, condition factor and past growth rate did not explain variation in individual behaviour. Individually determined boldness, as well as fish size, however, was positively associated with first entrance to the predator zone (i.e. initial risk taking) in group behaviour tests. Individually determined boldness also explained long-term activity and total time spent in the vicinity of predators in the group. These findings suggest that individual, laboratory-based boldness tests predict boldness of P. fluviatilis also in ecologically relevant conditions, i.e. in shoals and in the presence of natural predators. The present results, however, also indicate that the two angling methods examined may not be selective for certain behavioural types in comparison to each other. © 2014 The Fisheries Society of the British Isles.
Van Driessche, Stijn; Van Roie, Evelien; Vanwanseele, Benedicte; Delecluse, Christophe
2018-01-01
Isotonic testing and measures of rapid power production are emerging as functionally relevant test methods for detection of muscle aging. Our objective was to assess the reliability of rapid velocity and power measures in older adults using the isotonic mode of an isokinetic dynamometer. Sixty-three participants (aged 65 to 82 years) underwent a test-retest protocol with a one-week interval. Isotonic knee extension tests were performed at four different loads: 0%, 25%, 50% and 75% of maximal isometric strength. Peak velocity (pV) and power (pP) were determined as the highest values of the velocity and power curves. Rate of velocity development (RVD) and rate of power development (RPD) were calculated as the linear slopes of the velocity- and power-time curves. Relative and absolute measures of test-retest reliability were analyzed using intraclass correlation coefficients (ICC), standard error of measurement (SEM) and Bland-Altman analyses. Overall, reliability was high for pV, pP, RVD and RPD at 0%, 25% and 50% load (ICC: .85 - .98, SEM: 3% - 10%). A trend toward increased reliability at lower loads seemed apparent. The tests at 75% load led to range-of-motion failure and should be avoided. In addition, the results demonstrated that caution is advised when interpreting early-phase results (first 50 ms). To conclude, our results support the use of the isotonic mode of an isokinetic dynamometer for testing rapid power and velocity characteristics in older adults, which is of high clinical relevance given that these muscle characteristics are emerging as primary outcomes for preventive and rehabilitative interventions in aging research.
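As a sketch of the reliability statistics named above, the following minimal implementation computes a two-way random-effects, absolute-agreement, single-measure ICC(2,1) and an MSE-based SEM from an n x 2 test-retest matrix; the exact ICC model used in the paper is not stated in the abstract, so this is an assumed common choice:

```python
# ICC(2,1) (two-way random effects, absolute agreement, single measure)
# and SEM = sqrt(MSE) from an (n_subjects x k_sessions) score matrix.
import numpy as np

def icc21_sem(y):
    y = np.asarray(y, dtype=float)
    n, k = y.shape
    grand = y.mean()
    ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()   # sessions
    ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    icc = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    return icc, np.sqrt(mse)

# Example: icc, sem = icc21_sem(np.column_stack([test, retest]))
```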
Tang, L; Khan, S U; Muhammad, N A
2001-11-01
The purpose of this work is to develop a bio-relevant dissolution method for formulation screening in order to select an enhanced-bioavailability formulation of a poorly water-soluble drug. The methods used included a modified rotating disk apparatus for measuring the intrinsic dissolution rate of the new chemical entity (NCE) and the USP dissolution method II for evaluating dissolution profiles of the drug in three different dosage forms. The in vitro dissolution results were compared with the in vivo bioavailability for selecting a bio-relevant medium. The results showed that the solubility of the NCE was proportional to the concentration of sodium lauryl sulfate (SLS) in the media. The apparent intrinsic dissolution rate of the NCE varied linearly with the rotational speed of the disk, which indicated that the dissolution of the drug is a diffusion-controlled mechanism. The apparent intrinsic dissolution rate also varied linearly with the surfactant concentration in the media, which was interpreted using the empirical Noyes-Whitney theory. Three formulations were studied in three different SLS media using the bulk drug as a reference. The dissolution results were compared with the corresponding bioavailability results in dogs. In the 1% SLS medium (sink conditions), drug release from all the formulations was complete and the dissolution results were discriminative for the difference in particle size of the drug in the formulations. However, the data showed poor IVIV correlation. In the 0.5% SLS medium (non-sink conditions), the dissolution results showed the same rank order among the tested formulations as the bioavailability. The best IVIV correlation was obtained from the dissolution in the 0.25% SLS medium, an over-saturated condition. The conclusions are: a surfactant medium increases the apparent intrinsic dissolution rate of the NCE linearly due to an increase in solubility. A low concentration of surfactant in the medium (0.25%) is more bio-relevant than higher concentrations for the poorly water-soluble drug. Creating sink conditions (based on bulk drug solubilities) by using a high concentration of a surfactant in the dissolution medium may not be a proper approach to developing a bio-relevant dissolution method for a poorly water-soluble drug.
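The Noyes-Whitney relation invoked above is usually written in the following textbook form, which makes the linear dependence of dissolution rate on (surfactant-enhanced) solubility explicit:

```latex
\frac{dC}{dt} = \frac{D\,A}{V\,h}\,\bigl(C_s - C\bigr),
```

where D is the drug's diffusion coefficient, A the exposed surface area, h the diffusion boundary-layer thickness, V the medium volume, and Cs the saturation solubility; raising Cs with SLS therefore raises the apparent intrinsic dissolution rate proportionally, as observed.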
Causo, Maria Serena; Ciccotti, Giovanni; Bonella, Sara; Vuilleumier, Rodolphe
2006-08-17
Linearized mixed quantum-classical simulations are a promising approach for calculating time-correlation functions. At the moment, however, they suffer from some numerical problems that may compromise their efficiency and reliability in applications to realistic condensed-phase systems. In this paper, we present a method that improves upon the convergence properties of the standard algorithm for linearized calculations by implementing a cumulant expansion of the relevant averages. The effectiveness of the new approach is tested by applying it to the challenging computation of the diffusion of an excess electron in a metal-molten salt solution.
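For context, the device referred to above replaces a poorly converging exponential average with its low-order cumulants; truncated at second order, the generic expansion reads (standard form, not the paper's full working expressions):

```latex
\ln\left\langle e^{-\beta W}\right\rangle
\approx -\beta\,\langle W\rangle
+ \frac{\beta^{2}}{2}\Bigl(\langle W^{2}\rangle - \langle W\rangle^{2}\Bigr),
```

which is exact when W is Gaussian-distributed and typically converges far faster than the bare exponential average.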
A New Genomics-Driven Taxonomy of Bacteria and Archaea: Are We There Yet?
2016-01-01
Taxonomy is often criticized for being too conservative and too slow and having limited relevance because it has not taken into consideration the latest methods and findings. Yet the cumulative work product of its practitioners underpins contemporary microbiology and serves as a principal means of shaping and referencing knowledge. Using methods drawn from the field of exploratory data analysis, this minireview examines the current state of the field as it transitions from a taxonomy based on 16S rRNA gene sequences to one based on whole-genome sequences and tests the validity of some commonly held beliefs. PMID:27194687
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleary, M.P.
This paper provides comments on a companion journal paper on predictive modeling of hydraulic fracturing patterns (N.R. Warpinski et al., 1994). That paper was designed to compare various modeling methods and to demonstrate the most accurate methods under various geologic constraints. The comments of this paper center on potential deficiencies in the companion paper, which include: the limited number of actual comparisons offered between models; lacking or undocumented matching of predictive data with data from related field operations; and the relevance/impact of accurate modeling on overall hydraulic fracturing cost and production.
Ligozzi, Marco; Bernini, Cinzia; Bonora, Maria Grazia; de Fatima, Maria; Zuliani, Jessica; Fontana, Roberta
2002-01-01
A study was conducted to evaluate the new VITEK 2 system (bioMérieux) for identification and antibiotic susceptibility testing of gram-positive cocci. Clinical isolates of Staphylococcus aureus (n = 100), coagulase-negative staphylococci (CNS) (n = 100), Enterococcus spp. (n = 89), Streptococcus agalactiae (n = 29), and Streptococcus pneumoniae (n = 66) were examined with the ID-GPC identification card and with the AST-P515 (for staphylococci), AST-P516 (for enterococci and S. agalactiae) and AST-P506 (for pneumococci) susceptibility cards. The identification comparison methods were the API Staph for staphylococci and the API 20 Strep for streptococci and enterococci; for antimicrobial susceptibility testing, the agar dilution method according to the procedure of the National Committee for Clinical Laboratory Standards (NCCLS) was used. The VITEK 2 system correctly identified to the species level (only one choice or after simple supplementary tests) 99% of S. aureus, 96.5% of S. agalactiae, 96.9% of S. pneumoniae, 92.7% of Enterococcus faecalis, 91.3% of Staphylococcus haemolyticus, and 88% of Staphylococcus epidermidis but was least able to identify Enterococcus faecium (71.4% correct). More than 90% of gram-positive cocci were identified within 3 h. According to the NCCLS breakpoints, antimicrobial susceptibility testing with the VITEK 2 system gave 96% correct category agreement, 0.82% very major errors, 0.17% major errors, and 2.7% minor errors. Antimicrobial susceptibility testing showed category agreement from 94 to 100% for S. aureus, from 90 to 100% for CNS, from 91 to 100% for enterococci, from 96 to 100% for S. agalactiae, and from 91 to 100% for S. pneumoniae. Microorganism-antibiotic combinations that gave very major errors were CNS-erythromycin, CNS-oxacillin, enterococci-teicoplanin, and enterococci-high-concentration gentamicin. Major errors were observed for CNS-oxacillin and S. agalactiae-tetracycline combinations. In conclusion the results of this study indicate that the VITEK 2 system represents an accurate and acceptable means for performing identification and antibiotic susceptibility tests with medically relevant gram-positive cocci. PMID:11980942
Ranking the whole MEDLINE database according to a large training set using text indexing.
Suomela, Brian P; Andrade, Miguel A
2005-03-24
The MEDLINE database contains over 12 million references to scientific literature, with about 3/4 of recent articles including an abstract of the publication. Retrieval of entries using queries with keywords is useful for human users that need to obtain small selections. However, particular analyses of the literature or database developments may need the complete ranking of all the references in the MEDLINE database as to their relevance to a topic of interest. This report describes a method that does this ranking using the differences in word content between MEDLINE entries related to a topic and the whole of MEDLINE, in a computational time appropriate for an article search query engine. We tested the capabilities of our system to retrieve MEDLINE references relevant to the subject of stem cells. We took advantage of the existing annotation of references with terms from the MeSH hierarchical vocabulary (Medical Subject Headings, developed at the National Library of Medicine). A training set of 81,416 references was constructed by selecting entries annotated with the MeSH term stem cells or some child in its subtree. Frequencies of all nouns, verbs, and adjectives in the training set were computed, and the ratios of word frequencies in the training set to those in the entire MEDLINE were used to score references. Self-consistency of the algorithm, benchmarked with a test set containing the training set and an equal number of references randomly selected from MEDLINE, was better using nouns (79%) than adjectives (73%) or verbs (70%). The evaluation of the system with 6,923 references not used for training, containing 204 articles relevant to stem cells according to a human expert, indicated a recall of 65% for a precision of 65%. This strategy appears to be useful for predicting the relevance of MEDLINE references to a given concept. The method is simple and can be used with any user-defined training set. The choice of the part of speech of the words used for classification has important effects on performance. Lists of words, scripts, and additional information are available from the web address http://www.ogic.ca/projects/ks2004/.
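A minimal sketch of the scoring scheme described above, frequency ratios between the training set and the background corpus summed over a document's words, is given below; variable names and the simple whitespace tokenizer are assumptions, not the authors' scripts (which are available at the URL above):

```python
# Rank documents by the average log-ratio of word frequencies in a
# topic training set versus the whole corpus (toy sketch of the idea;
# no part-of-speech filtering, unlike the published method).
import math
from collections import Counter

def word_scores(train_docs, corpus_docs, eps=1e-9):
    f_train = Counter(w for d in train_docs for w in d.lower().split())
    f_all = Counter(w for d in corpus_docs for w in d.lower().split())
    n_t, n_a = sum(f_train.values()), sum(f_all.values())
    return {w: math.log((f_train[w] / n_t + eps) / (f_all[w] / n_a + eps))
            for w in f_train}

def rank(docs, scores):
    def doc_score(d):
        words = d.lower().split()
        return sum(scores.get(w, 0.0) for w in words) / max(len(words), 1)
    return sorted(docs, key=doc_score, reverse=True)
```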
Lanvers-Kaminsky, Claudia; Rüffer, Andrea; Würthwein, Gudrun; Gerss, Joachim; Zucchetti, Massimo; Ballerini, Andrea; Attarbaschi, Andishe; Smisek, Petr; Nath, Christa; Lee, Samiuela; Elitzur, Sara; Zimmermann, Martin; Möricke, Anja; Schrappe, Martin; Rizzari, Carmelo; Boos, Joachim
2018-02-01
In the international AIEOP-BFM ALL 2009 trial, asparaginase (ASE) activity was monitored after each dose of pegylated Escherichia coli ASE (PEG-ASE). Two methods were used: the aspartic acid β-hydroxamate (AHA) test and the medac asparaginase activity test (MAAT). As the latter method overestimates PEG-ASE activity because it is calibrated with E. coli ASE, a method comparison was performed using samples from the AIEOP-BFM ALL 2009 trial. PEG-ASE activities were determined using the MAAT and the AHA test in two sets of samples (first set: 630 samples; second set: 91 samples). Bland-Altman analysis was performed on the ratios between the MAAT and the AHA test. The mean difference between both methods, the limits of agreement, and 95% confidence intervals were calculated and compared for all samples and for samples grouped according to the calibration ranges of the MAAT and the AHA test. PEG-ASE activity determined using the MAAT was significantly higher than that determined using the AHA test (P < 0.001; Wilcoxon signed-rank test). Within the calibration range of the MAAT (30-600 U/L), PEG-ASE activities determined using the MAAT were on average 23% higher than those determined using the AHA test. This is consistent with the mean difference reported in the MAAT manual. With PEG-ASE activities >600 U/L, the discrepancies between the MAAT and the AHA test increased. Above the calibration range of the MAAT (>600 U/L) and the AHA test (>1000 U/L), a mean difference of 42% was determined. Because more than 70% of samples had PEG-ASE activities >600 U/L and required additional sample dilution, an overall mean difference of 37% was calculated for all samples (37% for the first and 34% for the second set). Comparison of the MAAT and the AHA test for PEG-ASE activity confirmed a mean difference of 23% between the methods for PEG-ASE activities between 30 and 600 U/L. The discrepancy increased in samples with >600 U/L PEG-ASE activity, which will be especially relevant when evaluating high PEG-ASE activities in relation to toxicity, efficacy, and population pharmacokinetics.
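As a sketch of the agreement analysis used above, Bland-Altman limits for paired method ratios are conveniently computed on the log scale and back-transformed; the snippet below is an assumed minimal version, not the trial's statistical code:

```python
# Bland-Altman analysis of ratios between two assays: geometric mean
# ratio and 95% limits of agreement, computed on log-ratios.
import numpy as np

def bland_altman_ratio(a, b):
    r = np.log(np.asarray(a, float) / np.asarray(b, float))
    m, s = r.mean(), r.std(ddof=1)
    loa = (np.exp(m - 1.96 * s), np.exp(m + 1.96 * s))
    return np.exp(m), loa   # e.g. 1.23 reads as "23% higher by method a"
```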
NASA Technical Reports Server (NTRS)
Wilkenfeld, J. M.; Judge, R. J. R.; Harlacher, B. L.
1982-01-01
A combined experimental and analytical program to develop system electrical test procedures for the qualification of spacecraft against damage produced by space-electron-induced discharges (EID) occurring on spacecraft dielectric outer surfaces is described. Data on the response of a simple satellite model, called CAN, to electron-induced discharges are presented. The experimental results were compared to predicted behavior and to the response of the CAN to electrical injection techniques simulating blowoff and arc discharges. Also included is a review of significant results from other ground tests and the P78-2 program, forming a database from which to specify those test procedures that optimally simulate the response of spacecraft to EID. The electrical and electron-spraying test data were evaluated to provide a first-cut determination of the best methods for performing electrical excitation qualification tests from the point of view of simulation fidelity.
Cooper, J J; Brayford, M J; Laycock, P A
2014-08-01
A new method is described which can be used to determine the setting times of small amounts of high-value bone cements. The test was developed to measure how the setting times of a commercially available synthetic calcium sulfate cement (Stimulan, Biocomposites, UK) in two forms (standard and Rapid Cure) vary with the addition of clinically relevant antibiotics. The importance of being able to accurately quantify these setting times is discussed. The results demonstrate that this new method, which is shown to correlate with the Vicat needle, gives reliable and repeatable data, with additional benefits described in the article. The majority of the antibiotics mixed were found to retard the setting reaction of the calcium sulfate cement.
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1998-01-01
This project concerns the development of high order, non-oscillatory type schemes for computational fluid dynamics. Algorithm analysis, implementation, and applications are performed. Collaborations with NASA scientists have been carried out to ensure that the research is relevant to NASA objectives. The combination of the ENO finite difference method with the spectral method in two space dimensions is considered, jointly with Cai [3]. The resulting scheme behaves well for the two-dimensional test problems with or without shocks. Jointly with Cai and Gottlieb, we have also considered one-sided filters for spectral approximations to discontinuous functions [2]. We proved theoretically the existence of filters to recover spectral accuracy up to the discontinuity. We also constructed such filters for practical calculations.
Autofocus method for automated microscopy using embedded GPUs.
Castillo-Secilla, J M; Saval-Calvo, M; Medina-Valdès, L; Cuenca-Asensi, S; Martínez-Álvarez, A; Sánchez, C; Cristóbal, G
2017-03-01
In this paper we present a method for autofocusing images of sputum smears taken from a microscope, which combines the finding of the optimal focus distance with an algorithm for extending the depth of field (EDoF). Our multifocus fusion method produces a unique image where all the relevant objects of the analyzed scene are well focused, independently of their distance to the sensor. This process is computationally expensive, which makes its automation on traditional embedded processors unfeasible. For this purpose, a low-cost optimized implementation is proposed using the resource-limited embedded GPU integrated on a cutting-edge NVIDIA system-on-chip. The extensive tests performed on different sputum smear image sets show the real-time capabilities of our implementation while maintaining the quality of the output image.
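A minimal CPU sketch of the two ingredients named above, a per-slice focus metric and a per-pixel sharpest-slice fusion, is shown below using OpenCV; the paper's contribution is the optimized embedded-GPU implementation, which this does not attempt to reproduce:

```python
# Variance-of-Laplacian focus metric plus a naive extended-depth-of-
# field (EDoF) fusion that keeps, per pixel, the sharpest z-slice.
import cv2
import numpy as np

def focus_metric(img):
    # Higher variance of the Laplacian indicates a sharper image.
    return cv2.Laplacian(img, cv2.CV_64F).var()

def fuse_stack(stack):
    # stack: (z, h, w) grayscale z-stack as a NumPy array.
    sharp = np.stack([np.abs(cv2.Laplacian(s, cv2.CV_64F)) for s in stack])
    best = sharp.argmax(axis=0)                 # sharpest slice per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]

# best_slice = max(stack, key=focus_metric)    # optimal focus distance
```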
International Harmonization and Cooperation in the Validation of Alternative Methods.
Barroso, João; Ahn, Il Young; Caldeira, Cristiane; Carmichael, Paul L; Casey, Warren; Coecke, Sandra; Curren, Rodger; Desprez, Bertrand; Eskes, Chantra; Griesinger, Claudius; Guo, Jiabin; Hill, Erin; Roi, Annett Janusch; Kojima, Hajime; Li, Jin; Lim, Chae Hyung; Moura, Wlamir; Nishikawa, Akiyoshi; Park, HyeKyung; Peng, Shuangqing; Presgrave, Octavio; Singer, Tim; Sohn, Soo Jung; Westmoreland, Carl; Whelan, Maurice; Yang, Xingfen; Yang, Ying; Zuang, Valérie
The development and validation of scientific alternatives to animal testing is important not only from an ethical perspective (implementation of the 3Rs), but also to improve safety assessment decision making through the use of mechanistic information of higher relevance to humans. To be effective in these efforts, it is, however, imperative that validation centres, industry, regulatory bodies, academia and other interested parties ensure strong international cooperation, cross-sector collaboration and intense communication in the design, execution, and peer review of validation studies. Such an approach is critical to achieving harmonized and more transparent approaches to method validation, peer review and recommendation, which will ultimately expedite the international acceptance of valid alternative methods or strategies by regulatory authorities and their implementation and use by stakeholders. It also allows greater efficiency and effectiveness to be achieved by avoiding duplication of effort and leveraging limited resources. In view of achieving these goals, the International Cooperation on Alternative Test Methods (ICATM) was established in 2009 by validation centres from Europe, the USA, Canada and Japan. ICATM was later joined by Korea in 2011 and currently also counts Brazil and China as observers. This chapter describes the existing differences across world regions and the major efforts carried out to achieve consistent international cooperation and harmonization in the validation and adoption of alternative approaches to animal testing.
A novel accelerated oxidative stability screening method for pharmaceutical solids.
Zhu, Donghua Alan; Zhang, Geoff G Z; George, Karen L S T; Zhou, Deliang
2011-08-01
Despite the fact that oxidation is the second most frequent degradation pathway for pharmaceuticals, means of evaluating the oxidative stability of pharmaceutical solids, especially effective stress testing, are still lacking. This paper describes a novel experimental method for peroxide-mediated oxidative stress testing on pharmaceutical solids. The method utilizes urea-hydrogen peroxide, a molecular complex that undergoes solid-state decomposition and releases hydrogen peroxide vapor at elevated temperatures (e.g., 30°C), as a source of peroxide. The experimental setting for this method is simple, convenient, and can be operated routinely in most laboratories. The fundamental parameter of the system, that is, hydrogen peroxide vapor pressure, was determined using a modified spectrophotometric method. The feasibility and utility of the proposed method in solid form selection have been demonstrated using various solid forms of ephedrine. No degradation was detected for ephedrine hydrochloride after exposure to the hydrogen peroxide vapor for 2 weeks, whereas both anhydrate and hemihydrate free base forms degraded rapidly under the test conditions. In addition, both the anhydrate and the hemihydrate free base degraded faster when exposed to hydrogen peroxide vapor at 30°C under dry condition than at 30°C/75% relative humidity (RH). A new degradation product was also observed under the drier condition. The proposed method provides more relevant screening conditions for solid dosage forms, and is useful in selecting optimal solid form(s), determining potential degradation products, and formulation screening during development. Copyright © 2011 Wiley-Liss, Inc.
Landman, David; Salamera, Julius; Quale, John
2013-12-01
Carbapenem-resistant Enterobacter species are emerging nosocomial pathogens. As with most multidrug-resistant Gram-negative pathogens, the polymyxins are often the only therapeutic option. In this study involving clinical isolates of E. cloacae and E. aerogenes, susceptibility testing methods with polymyxin B were analyzed. All isolates underwent testing by the broth microdilution (in duplicate) and agar dilution (in duplicate) methods, and select isolates were examined by the Etest method. Selected isolates were also examined for heteroresistance by population analysis profiling. Using a susceptibility breakpoint of ≤2 μg/ml, categorical agreement by all four dilution tests (two broth microdilution and two agar dilution) was achieved in only 76/114 (67%) of E. cloacae isolates (65 susceptible, 11 resistant). Thirty-eight (33%) had either conflicting or uninterpretable results (multiple skip wells, i.e., wells that exhibit no growth although growth does occur at higher concentrations). Of the 11 consistently resistant isolates, five had susceptible MICs as determined by Etest. Heteroresistant subpopulations were detected in eight of eight isolates tested, with greater percentages in isolates with uninterpretable MICs. For E. aerogenes, categorical agreement between the four dilution tests was obtained in 48/56 (86%), with conflicting and/or uninterpretable results in 8/56 (14%). For polymyxin susceptibility testing of Enterobacter species, close attention must be paid to the presence of multiple skip wells, leading to uninterpretable results. Susceptibility also should not be assumed based on the results of a single test. Until the clinical relevance of skip wells is defined, interpretation of polymyxin susceptibility tests for Enterobacter species should be undertaken with extreme caution.
The Field Relevance of NHTSA's Oblique Research Moving Deformable Barrier Tests.
Prasad, Priya; Dalmotas, Dainius; German, Alan
2014-11-01
A small overlap frontal crash test has recently been introduced by the Insurance Institute for Highway Safety (IIHS) in its frontal rating scheme. Another small overlap frontal crash test is under development by the National Highway Traffic Safety Administration (NHTSA). Whereas the IIHS test is conducted against a fixed rigid barrier, the NHTSA test is conducted with a moving deformable barrier that overlaps 35% of the vehicle being tested, with a 15-degree angle between the longitudinal axes of the barrier and the test vehicle. The field relevance of the IIHS test has been the subject of a paper by Prasad et al. (2014). The current study is aimed at examining the field relevance of the NHTSA test. The field relevance is indicated by the frequency of occurrence of real-world crashes that are simulated by the test conditions, the proportion of serious-to-fatal real-world injuries explained by the test condition, and the rates of serious injury to the head, chest, and other body regions in the real-world crashes resembling the test condition. The database examined for real-world crashes is NASS. Results of the study indicate that 1.4% of all frontal 11-to-1 o'clock crashes are simulated by the test conditions, and that these crashes account for 2.4% to 4.5% of all frontal serious-to-fatal (MAIS3+F) injuries. Injury rates of the head and the chest are substantially lower in far-side than in near-side frontal impacts. Crash test ATD rotational responses of the head in the tests overpredict the real-world risk of serious-to-fatal brain injuries.
Curran, Geoffrey M; Bauer, Mark; Mittman, Brian; Pyne, Jeffrey M; Stetler, Cheryl
2012-03-01
This study proposes methods for blending design components of clinical effectiveness and implementation research. Such blending can provide benefits over pursuing these lines of research independently; for example, more rapid translational gains, more effective implementation strategies, and more useful information for decision makers. This study proposes a "hybrid effectiveness-implementation" typology, describes a rationale for the use of such designs, outlines the design decisions that must be faced, and provides several real-world examples. An effectiveness-implementation hybrid design is one that takes a dual focus a priori in assessing clinical effectiveness and implementation. We propose 3 hybrid types: (1) testing effects of a clinical intervention on relevant outcomes while observing and gathering information on implementation; (2) dual testing of clinical and implementation interventions/strategies; and (3) testing of an implementation strategy while observing and gathering information on the clinical intervention's impact on relevant outcomes. The hybrid typology proposed herein must be considered a construct still in evolution. Although traditional clinical effectiveness and implementation trials are likely to remain the most common approach to moving a clinical intervention from efficacy research to public health impact, judicious use of the proposed hybrid designs could speed the translation of research findings into routine practice.
False Memory in Aging Resulting From Self-Referential Processing
2013-01-01
Objectives. Referencing the self is known to enhance accurate memory, but less is known about how the strategy affects false memory, particularly for highly self-relevant information. Because older adults are more prone to false memories, we tested whether self-referencing increases false memories with age. Method. In 2 studies, older and younger adults rated adjectives for self-descriptiveness and later completed a surprise recognition test composed of words rated previously for self-descriptiveness and novel lure words. Lure words were subsequently rated for self-descriptiveness in order to assess the impact of self-relevance on false memory. Study 2 introduced commonness judgments as a control condition, such that participants completed a recognition test on adjectives rated for commonness in addition to adjectives in the self-descriptiveness condition. Results. Across both studies, self-referencing produced a response bias that increased hit rates for both older and younger adults but also increased false alarms as information became more self-descriptive, particularly for older adults. Discussion. Although the present study supports previous literature showing a boost in memory for self-referenced information, the increase in false alarms, especially in older adults, highlights the potential for memory errors, particularly for information that is strongly related to the self. PMID:23576449
One-year test-retest reliability of intrinsic connectivity network fMRI in older adults
Guo, Cong C.; Kurth, Florian; Zhou, Juan; Mayer, Emeran A.; Eickhoff, Simon B; Kramer, Joel H.; Seeley, William W.
2014-01-01
“Resting-state” or task-free fMRI can assess intrinsic connectivity network (ICN) integrity in health and disease, suggesting a potential for use of these methods as disease-monitoring biomarkers. Numerous analytical options are available, including model-driven ROI-based correlation analysis and model-free, independent component analysis (ICA). High test-retest reliability will be a necessary feature of a successful ICN biomarker, yet available reliability data remains limited. Here, we examined ICN fMRI test-retest reliability in 24 healthy older subjects scanned roughly one year apart. We focused on the salience network, a disease-relevant ICN not previously subjected to reliability analysis. Most ICN analytical methods proved reliable (intraclass coefficients > 0.4) and could be further improved by wavelet analysis. Seed-based ROI correlation analysis showed high map-wise reliability, whereas graph theoretical measures and temporal concatenation group ICA produced the most reliable individual unit-wise outcomes. Including global signal regression in ROI-based correlation analyses reduced reliability. Our study provides a direct comparison between the most commonly used ICN fMRI methods and potential guidelines for measuring intrinsic connectivity in aging control and patient populations over time. PMID:22446491
Students Explaining Science—Assessment of Science Communication Competence
NASA Astrophysics Data System (ADS)
Kulgemeyer, Christoph; Schecker, Horst
2013-12-01
Science communication competence (SCC) is an important educational goal in the school science curricula of several countries. However, there is a lack of research about the structure and the assessment of SCC. This paper specifies the theoretical framework of SCC by a competence model. We developed a qualitative assessment method for SCC that is based on an expert-novice dialog: an older student (explainer, expert) explains a physics phenomenon to a younger peer (addressee, novice) in a controlled test setting. The explanations are video-recorded and analysed by qualitative content analysis. The method was applied in a study with 46 secondary school students as explainers. Our aims were (a) to evaluate whether our model covers the relevant features of SCC, (b) to validate the assessment method and (c) to find characteristics of addressee-adequate explanations. A performance index was calculated to quantify the explainers' levels of competence on an ordinal scale. We present qualitative and quantitative evidence that the index is adequate for assessment purposes. It correlates with results from a written SCC test and a perspective taking test (convergent validity). Addressee-adequate explanations can be characterized by use of graphical representations and deliberate switches between scientific and everyday language.
ITALICS: an algorithm for normalization and DNA copy number calling for Affymetrix SNP arrays.
Rigaill, Guillem; Hupé, Philippe; Almeida, Anna; La Rosa, Philippe; Meyniel, Jean-Philippe; Decraene, Charles; Barillot, Emmanuel
2008-03-15
Affymetrix SNP arrays can be used to measure the DNA copy number of 11,000–500,000 SNPs along the genome. Their high density facilitates the precise localization of genomic alterations and makes them a powerful tool for studies of cancers and copy number polymorphism. Like other microarray technologies, SNP arrays are influenced by non-relevant sources of variation, which require correction. Moreover, the amplitude of variation induced by non-relevant effects is similar to or greater than that of the biologically relevant effect (i.e. true copy number), making it difficult to estimate non-relevant effects accurately without also modelling the biologically relevant effect. We addressed this problem by developing ITALICS, a normalization method that estimates both biological and non-relevant effects in an alternate, iterative manner, accurately eliminating irrelevant effects. We compared our normalization method with other existing and available methods, and found that ITALICS outperformed them on several in-house datasets and one public dataset. These results were validated biologically by quantitative PCR. The R package ITALICS (ITerative and Alternative normaLIzation and Copy number calling for affymetrix Snp arrays) has been submitted to Bioconductor.
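ITALICS itself ships as an R/Bioconductor package, so the sketch below is only a minimal Python illustration of the alternating idea described in the record above: hold the current copy-number estimate fixed, fit the non-relevant effects on what it leaves unexplained, subtract them, and repeat. The function name, the running-median copy-number estimate and the generic covariate matrix are our illustrative assumptions, not the package's actual algorithm.

```python
import numpy as np

def iterative_normalize(log_ratios, covariates, n_iter=5, window=101):
    """Alternately estimate the biological (copy-number) effect and
    non-relevant effects, ITALICS-style (sketch only).

    log_ratios : (n_snps,) raw log-ratio per SNP, in genome order
    covariates : (n_snps, k) nuisance covariates (e.g. GC content)
    """
    signal = log_ratios.copy()
    for _ in range(n_iter):
        # 1) crude biological estimate: running median over neighbours
        pad = window // 2
        padded = np.pad(signal, pad, mode="edge")
        bio = np.array([np.median(padded[i:i + window])
                        for i in range(len(signal))])
        # 2) fit nuisance effects on what the biology does not explain
        resid = log_ratios - bio
        X = np.column_stack([np.ones(len(resid)), covariates])
        coef, *_ = np.linalg.lstsq(X, resid, rcond=None)
        # 3) remove the fitted non-relevant effects and iterate
        signal = log_ratios - X @ coef
    return signal
```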
2010-01-01
Background The identification of health services research in databases such as PubMed/Medline is a cumbersome task. This task becomes even more difficult if the field of interest involves the use of diverse methods and data sources, as is the case with nurse staffing research. This type of research investigates the association between nurse staffing parameters and nursing and patient outcomes. A comprehensively developed search strategy may help identify nurse staffing research in PubMed/Medline. Methods A set of relevant references in PubMed/Medline was identified by means of three systematic reviews. This development set was used to detect candidate free-text and MeSH terms. The frequency of these terms was compared to a random sample from PubMed/Medline in order to identify terms specific to nurse staffing research, which were then used to develop sensitive, precise and balanced search strategies. To determine their precision, the newly developed search strategies were tested against a) the pool of relevant references extracted from the systematic reviews, b) a reference set identified from an electronic journal screening, and c) a sample from PubMed/Medline. Finally, all newly developed strategies were compared to PubMed's Health Services Research Queries (PubMed's HSR Queries). Results The sensitivities of the newly developed search strategies were almost 100% in all three test sets; precision ranged from 6.1% to 32.0%. PubMed's HSR Queries were less sensitive (83.3% to 88.2%) than the new search strategies, with only minor differences in precision (5.0% to 32.0%). Conclusions As with other literature on health services research, nurse staffing studies are difficult to identify in PubMed/Medline. Depending on the purpose of the search, researchers can choose between high sensitivity, which entails retrieving a large number of references, and high precision, which entails an increased risk of missing relevant references. More standardized terminology (e.g. consistent use of the term "nurse staffing") could improve the precision of future searches in this field. Empirically selected search terms can help to develop effective search strategies. The high consistency between all test sets confirmed the validity of our approach. PMID:20731858
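The sensitivity and precision figures reported above follow the standard retrieval definitions. As a minimal sketch (function and variable names are ours, not the study's), a candidate search strategy can be scored against a gold-standard set of relevant references like this:

```python
def evaluate_search(retrieved_ids, relevant_ids):
    """Sensitivity (recall) and precision of a literature search
    against a gold-standard set of relevant references."""
    retrieved, relevant = set(retrieved_ids), set(relevant_ids)
    hits = retrieved & relevant
    sensitivity = len(hits) / len(relevant) if relevant else 0.0
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    return sensitivity, precision

# toy example: 3 of 4 relevant records found among 6 retrieved
sens, prec = evaluate_search({1, 2, 3, 4, 5, 6}, {2, 3, 4, 99})
print(f"sensitivity={sens:.1%}, precision={prec:.1%}")
```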
The flaws and human harms of animal experimentation.
Akhtar, Aysha
2015-10-01
Nonhuman animal ("animal") experimentation is typically defended by arguments that it is reliable, that animals provide sufficiently good models of human biology and diseases to yield relevant information, and that, consequently, its use provides major human health benefits. I demonstrate that a growing body of scientific literature critically assessing the validity of animal experimentation generally (and animal modeling specifically) raises important concerns about its reliability and predictive value for human outcomes and for understanding human physiology. The unreliability of animal experimentation across a wide range of areas undermines scientific arguments in favor of the practice. Additionally, I show how animal experimentation often significantly harms humans through misleading safety studies, potential abandonment of effective therapeutics, and direction of resources away from more effective testing methods. The resulting evidence suggests that the collective harms and costs to humans from animal experimentation outweigh potential benefits and that resources would be better invested in developing human-based testing methods.
AST Combustion Workshop: Diagnostics Working Group Report
NASA Technical Reports Server (NTRS)
Locke, Randy J.; Hicks, Yolanda R.; Hanson, Ronald K.
1996-01-01
A workshop was convened under NASA's Advanced Subsonics Technologies (AST) Program. Many of the principal combustion diagnosticians from industry, academia, and government laboratories were assembled in the Diagnostics/Testing Subsection of this workshop to discuss the requirements and obstacles to the successful implementation of advanced diagnostic techniques to the test environment of the proposed AST combustor. The participants, who represented the major relevant areas of advanced diagnostic methods currently applied to combustion and related fields, first established the anticipated AST combustor flowfield conditions. Critical flow parameters were then examined and prioritized as to their importance to combustor/fuel injector design and manufacture, environmental concerns, and computational interests. Diagnostic techniques were then evaluated in terms of current status, merits and obstacles for each flow parameter. All evaluations are presented in tabular form and recommendations are made on the best-suited diagnostic method to implement for each flow parameter in order of applicability and intrinsic value.
Linking ceragenins to water-treatment membranes to minimize biofouling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hibbs, Michael R.; Altman, Susan Jeanne; Feng, Yanshu
Ceragenins were used to create biofouling-resistant water-treatment membranes. Ceragenins are synthetically produced antimicrobial peptide mimics that display broad-spectrum bactericidal activity. While ceragenins have been used on bio-medical devices, their use on water-treatment membranes is novel. Biofouling impacts membrane separation processes for many industrial applications such as desalination, waste-water treatment, oil and gas extraction, and power generation. Biofouling results in a loss of permeate flux and an increase in energy use. Creation of biofouling-resistant membranes will assist in the creation of clean water with lower energy usage and energy with lower water usage. Five methods of attaching three different ceragenin molecules were developed and tested. Biofouling reduction was observed in the majority of the tests, indicating that ceragenins are a viable solution to biofouling on water-treatment membranes. Silane direct attachment appears to be the most promising attachment method if a high concentration of CSA-121a is used. Additional refinement of the attachment methods is needed in order to achieve our goal of a several-log reduction in biofilm cell density without impacting the membrane flux. Concurrently, biofilm-forming bacteria were isolated from source waters relevant for water treatment: wastewater, agricultural drainage, river water, seawater, and brackish groundwater. These isolates can be used for future testing of methods to control biofouling. Once isolated, the ability of the isolates to grow biofilms was tested with high-throughput multiwell methods. Based on these tests, the following species were selected for further testing in tube reactors and CDC reactors: Pseudomonas spp. (wastewater, agricultural drainage, and Colorado River water), Nocardia coeliaca or Rhodococcus spp. (wastewater), Pseudomonas fluorescens and Hydrogenophaga palleronii (agricultural drainage), Sulfitobacter donghicola, Rhodococcus fascians, Rhodobacter katedanii, and Paracoccus marcusii (seawater), and Sphingopyxis spp. (groundwater). The testing demonstrated the ability of these isolates to be used for biofouling control testing under laboratory conditions. Biofilm-forming bacteria were obtained from all the source water samples.
Automatic evaluation of the Valsalva sinuses from cine-MRI
NASA Astrophysics Data System (ADS)
Blanchard, Cédric; Sliwa, Tadeusz; Lalande, Alain; Mohan, Pauliah; Bouchot, Olivier; Voisin, Yvon
2011-03-01
MRI appears to be particularly attractive for the study of the Sinuses of Valsalva (SV); however, there is no global consensus on their suitable measurements. In this paper, we propose a new method, based on mathematical morphology and combining a numerical geodesic reconstruction with an area estimation, to automatically evaluate the SV from cine-MRI in a cross-sectional orientation. It consists of extracting the shape of the SV, detecting relevant points (commissures, cusps and the centre of the SV), measuring relevant distances, and classifying the valve as bicuspid or tricuspid through a metric evaluation of the SV. Our method was tested on 23 patient examinations, and radii calculations were compared with manual measurements. The classification of the valve as tricuspid or bicuspid was correct in all cases. Moreover, there is excellent correlation and concordance between manual and automatic measurements for images at the diastolic phase (r = 0.97; y = x - 0.02; p = NS; mean of differences = -0.1 mm; standard deviation of differences = 2.3 mm) and at the systolic phase (r = 0.96; y = 0.97x + 0.80; p = NS; mean of differences = -0.1 mm; standard deviation of differences = 2.4 mm). The cross-sectional orientation of the image acquisition plane, combined with our automatic method, provides a reliable morphometric evaluation of the SV, based on the automatic location of the centre of the SV and the commissure and cusp positions. Measurements of distances between relevant points allow a precise evaluation of the SV.
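The agreement statistics quoted above (r, regression line, mean and standard deviation of the differences) are the usual correlation-plus-Bland-Altman toolkit. A minimal sketch of how such a comparison can be computed, assuming paired manual and automatic radius measurements in millimetres (names are ours):

```python
import numpy as np
from scipy import stats

def agreement(manual, automatic):
    """Correlation, regression line and Bland-Altman statistics for
    comparing automatic with manual measurements (in mm)."""
    manual, automatic = np.asarray(manual), np.asarray(automatic)
    r, _ = stats.pearsonr(manual, automatic)
    slope, intercept, *_ = stats.linregress(manual, automatic)
    diffs = automatic - manual          # bias per paired measurement
    return r, slope, intercept, diffs.mean(), diffs.std(ddof=1)
```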
Effects of self-relevant cues and cue valence on autobiographical memory specificity in dysphoria.
Matsumoto, Noboru; Mochizuki, Satoshi
2017-04-01
Reduced autobiographical memory specificity (rAMS) is a characteristic memory bias observed in depression. To corroborate the capture hypothesis in the CaRFAX (capture and rumination, functional avoidance, executive capacity and control) model, we investigated the effects of self-relevant cues and cue valence on rAMS using an adapted Autobiographical Memory Test conducted with a nonclinical population. Hierarchical linear modelling indicated that the main effects of depression and self-relevant cues elicited rAMS. Moreover, the three-way interaction among valence, self-relevance, and depression scores was significant. A simple slope test revealed that dysphoric participants experienced rAMS in response to highly self-relevant positive cues and low self-relevant negative cues. These results partially supported the capture hypothesis in nonclinical dysphoria. It is important to consider cue valence in future studies examining the capture hypothesis.
McCabe, Catherine; Dinsmore, John; Brady, Anne Marie; Mckee, Gabrielle; O'Donnell, Sharon; Prendergast, David
2014-01-01
Background. Behavioural change and self-management in patients with chronic illness may help to control symptoms, avoid rehospitalization, enhance quality of life, and decrease mortality and morbidity. Objective. Guided by action research principles and using mixed methods, the aim of this project was to develop educational, motivational, and health-promoting peer-based videos, grounded in behavioural change principles, to support self-management in patients with COPD. Methods. Individuals (n = 32) living with COPD at home and involved in two community-based COPD support groups were invited to participate in this project. Focus group/individual interviews and a demographic questionnaire were used to collect data. Results. Analysis revealed 6 categories relevant to behavioural change: self-management, support, symptoms, knowledge, rehabilitation, and technology. Participants commented that content needed to be specific and that videos needed to be shorter, to be tailored to severity of condition, to demonstrate "normal" activities, to be positive, and to ensure that content is culturally relevant. Conclusions. This study demonstrated that detailed analysis of patient perspectives and needs for self-management is essential and should underpin the development of any framework, materials, and technology. The action research design principles provided an effective framework for eliciting the data, applying it to technology, and testing its relevance to the user. PMID:24959177
Sol, Marleen Elisabeth; Verschuren, Olaf; de Groot, Laura; de Groot, Janke Frederike
2017-02-13
Wheelchair mobility skills (WMS) training is regarded by children using a manual wheelchair and their parents as an important factor in improving participation and daily physical activity. Currently, there is no outcome measure available for the evaluation of WMS in children. Several wheelchair mobility outcome measures have been developed for adults, but none of these have been validated in children. The objective of this study is therefore to develop a WMS outcome measure for children, using the current knowledge from the literature in combination with the clinical expertise of health care professionals, children and their parents. Mixed methods approach. Phase 1: identification of WMS items through a systematic review using the 'COnsensus-based Standards for the selection of health Measurement Instruments' (COSMIN) recommendations. Phase 2: selection and validation of relevant WMS items for children, using a focus group and interviews with children using a manual wheelchair, their parents and health care professionals. Phase 3: feasibility assessment of the newly developed Utrecht Pediatric Wheelchair Mobility Skills Test (UP-WMST) through pilot testing. Phase 1: data analysis and synthesis of nine WMS-related outcome measures showed there is no widely used outcome measure with levels of evidence across all measurement properties. However, four outcome measures showed some levels of evidence on reliability and validity for adults. Twenty-two WMS items with the best clinimetric properties were selected for further analysis in phase 2. Phase 2: fifteen items were deemed relevant for children, one item needed adaptation, and six items were considered not relevant for assessing WMS in children. Phase 3: two health care professionals administered the UP-WMST in eight children. The instructions of the UP-WMST were clear, but the scoring method of the height-difference items needed adaptation. The outdoor items for rolling over a soft surface and the side slope were excluded from the final version of the UP-WMST for logistic reasons. The newly developed 15-item UP-WMST is a validated outcome measure that is easy to administer in children using a manual wheelchair. More research regarding reliability, construct validity and responsiveness is warranted before the UP-WMST can be used in practice.
Shakeri, Heman; Volkova, Victoriya; Wen, Xuesong; Deters, Andrea; Cull, Charley; Drouillard, James; Müller, Christian; Moradijamei, Behnaz; Jaberi-Douraki, Majid
2018-05-01
To assess phenotypic bacterial antimicrobial resistance (AMR) in different strata (e.g., host populations, environmental areas, manure, or sewage effluents) for epidemiological purposes, isolates of target bacteria can be obtained from a stratum using various sample types. Also, different sample processing methods can be applied. The MIC of each target antimicrobial drug for each isolate is measured. Statistical equivalence testing of the MIC data for the isolates allows evaluation of whether different sample types or sample processing methods yield equivalent estimates of the bacterial antimicrobial susceptibility in the stratum. We demonstrate this approach on the antimicrobial susceptibility estimates for (i) nontyphoidal Salmonella spp. from ground or trimmed meat versus cecal content samples of cattle in processing plants in 2013-2014 and (ii) nontyphoidal Salmonella spp. from urine, fecal, and blood human samples in 2015 (U.S. National Antimicrobial Resistance Monitoring System data). We found that the sample types for cattle yielded nonequivalent susceptibility estimates for several antimicrobial drug classes and thus may gauge distinct subpopulations of salmonellae. The quinolone and fluoroquinolone susceptibility estimates for nontyphoidal salmonellae from human blood are nonequivalent to those from urine or feces, conjecturally due to the fluoroquinolone (ciprofloxacin) use to treat infections caused by nontyphoidal salmonellae. We also demonstrate statistical equivalence testing for comparing sample processing methods for fecal samples (culturing one versus multiple aliquots per sample) to assess AMR in fecal Escherichia coli. These methods yield equivalent results, except for tetracyclines. Importantly, statistical equivalence testing provides the MIC difference at which the data from two sample types or sample processing methods differ statistically. Data users (e.g., microbiologists and epidemiologists) may then interpret the practical relevance of the difference. IMPORTANCE Bacterial antimicrobial resistance (AMR) needs to be assessed in different populations or strata for the purposes of surveillance and determination of the efficacy of interventions to halt AMR dissemination. To assess phenotypic antimicrobial susceptibility, isolates of target bacteria can be obtained from a stratum using different sample types or employing different sample processing methods in the laboratory. The MIC of each target antimicrobial drug for each of the isolates is measured, yielding the MIC distribution across the isolates from each sample type or sample processing method. We describe statistical equivalence testing for the MIC data for evaluating whether two sample types or sample processing methods yield equivalent estimates of the bacterial phenotypic antimicrobial susceptibility in the stratum. This includes estimating the MIC difference at which the data from the two approaches differ statistically. Data users (e.g., microbiologists, epidemiologists, and public health professionals) can then interpret whether that difference is practically relevant. Copyright © 2018 Shakeri et al.
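The paper's exact equivalence procedure is not reproduced here. As a hedged sketch of the general idea, two one-sided tests (TOST) can be run on log2-transformed MICs, with the equivalence margin expressed in two-fold dilution steps. Real MIC data are interval-censored at the tested dilutions, which a full analysis would need to handle; the margin, the names and the simple pooled-degrees-of-freedom t approximation below are our assumptions:

```python
import numpy as np
from scipy import stats

def tost_log2_mic(mic_a, mic_b, margin=1.0, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of two MIC samples
    on the log2 scale; `margin` is the equivalence bound in two-fold
    dilution steps. A sketch, not the study's exact procedure."""
    a, b = np.log2(mic_a), np.log2(mic_b)
    na, nb = len(a), len(b)
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / na + b.var(ddof=1) / nb)
    df = na + nb - 2                      # simple approximation
    p_lo = 1 - stats.t.cdf((diff + margin) / se, df)  # H0: diff <= -margin
    p_hi = stats.t.cdf((diff - margin) / se, df)      # H0: diff >= +margin
    p = max(p_lo, p_hi)
    # equivalent (within +/- margin dilution steps) if both tests reject
    return diff, p, p < alpha
```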
Seth, Mayank; Jackson, Karen V.; Winzelberg, Sarah; Giger, Urs
2012-01-01
Objective To compare accuracy and ease of use of a card agglutination assay, an immunochromatographic cartridge method, and a gel-based method for canine blood typing. Sample Blood samples from 52 healthy blood donor dogs, 10 dogs with immune-mediated hemolytic anemia (IMHA), and 29 dogs with other diseases. Procedures Blood samples were tested in accordance with manufacturer guidelines. Samples with low PCVs were created by the addition of autologous plasma to separately assess the effects of anemia on test results. Results Compared with a composite reference standard of agreement between 2 methods, the gel-based method was found to be 100% accurate. The card agglutination assay was 89% to 91% accurate, depending on test interpretation, and the immunochromatographic cartridge method was 93% accurate but 100% specific. Errors were observed more frequently in samples from diseased dogs, particularly those with IMHA. In the presence of persistent autoagglutination, dog erythrocyte antigen (DEA) 1.1 typing was not possible, except with the immunochromatographic cartridge method. Conclusions and Clinical Relevance The card agglutination assay and immunochromatographic cartridge method, performed by trained personnel, were suitable for in-clinic emergency DEA 1.1 blood typing. There may be errors, particularly for samples from dogs with IMHA, and the immunochromatographic cartridge method may have an advantage of allowing typing of samples with persistent autoagglutination. The laboratory gel-based method would be preferred for routine DEA 1.1 typing of donors and patients if it is available and time permits. Current DEA 1.1 typing techniques appear to be appropriately standardized and easy to use. PMID:22280380
Permutation testing of orthogonal factorial effects in a language-processing experiment using fMRI.
Suckling, John; Davis, Matthew H; Ooi, Cinly; Wink, Alle Meije; Fadili, Jalal; Salvador, Raymond; Welchew, David; Sendur, Levent; Maxim, Vochita; Bullmore, Edward T
2006-05-01
The block-paradigm of the Functional Image Analysis Contest (FIAC) dataset was analysed with the Brain Activation and Morphological Mapping software. Permutation methods in the wavelet domain were used for inference on cluster-based test statistics of orthogonal contrasts relevant to the factorial design of the study, namely: the average response across all active blocks, the main effect of speaker, the main effect of sentence, and the interaction between sentence and speaker. Extensive activation was seen with all these contrasts. In particular, different vs. same-speaker blocks produced elevated activation in bilateral regions of the superior temporal lobe and repetition suppression for linguistic materials (same vs. different-sentence blocks) in left inferior frontal regions. These are regions previously reported in the literature. Additional regions were detected in this study, perhaps due to the enhanced sensitivity of the methodology. Within-block sentence suppression was tested post-hoc by regression of an exponential decay model onto the extracted time series from the left inferior frontal gyrus, but no strong evidence of such an effect was found. The significance levels set for the activation maps are P-values at which we expect <1 false-positive cluster per image. Nominal type I error control was verified by empirical testing of a test statistic corresponding to a randomly ordered design matrix. The small size of the BOLD effect necessitates sensitive methods of detection of brain activation. Permutation methods permit the necessary flexibility to develop novel test statistics to meet this challenge.
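The study above permuted data in the wavelet domain and used cluster-based statistics; the sketch below shows only a simpler, generic flavour of permutation inference, a voxel-wise max-statistic test for a two-condition contrast that controls family-wise error by building the null distribution of the maximum statistic. Array shapes and names are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_max_stat(data_a, data_b, n_perm=1000):
    """FWE-corrected p-values via the max-statistic permutation method.
    data_a, data_b : (n_subjects, n_voxels) condition measurements."""
    both = np.vstack([data_a, data_b])
    n_a = len(data_a)
    observed = data_a.mean(0) - data_b.mean(0)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        # shuffle condition labels and recompute the contrast
        perm = rng.permutation(len(both))
        d = both[perm[:n_a]].mean(0) - both[perm[n_a:]].mean(0)
        max_null[i] = np.abs(d).max()
    # per-voxel p: fraction of permutation maxima exceeding the observed
    return (max_null[:, None] >= np.abs(observed)).mean(0)
```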
Effects of Coherence and Relevance on Shallow and Deep Text Processing.
ERIC Educational Resources Information Center
Lehman, Stephen; Schraw, Gregory
2002-01-01
Examines the effects of coherence and relevance on shallow and deeper text processing, testing the hypothesis that enhancing the relevance of text segments compensates for breaks in local and global coherence. Results reveal that breaks in local coherence had no effect on any outcome measures, whereas relevance enhanced deeper processing.…
NASA Astrophysics Data System (ADS)
Beaumont, Robert
Currently, there are no reliable methods for screening potential armour materials and hence full-scale ballistic trials are needed. These are both costly and time-consuming, in terms of both the actual test and the materials development that needs to take place to produce sufficient material to give a meaningful result. Whilst it will not be possible to dispense with ballistic trials before material deployment in armour applications, the ability to shorten the development cycle would be advantageous. The thermal shock performance of ceramic armour materials has been highlighted as a potential marker for ballistic performance. Hence the purpose of this study was to investigate this further. A new thermal shock technique that reproduced features relevant to ballistic testing was sought. As it would be beneficial to have a simple test that did not use much material, a water-drop method was adopted. This was combined with a variety of characterisation techniques, administered pre- and post-shock. The methods included measurement of the amplitude of ultrasonic wave transmission through the sample, alongside residual strength testing using a biaxial ball-on-ball configuration and reflected light and confocal microscopy. Once the protocols had been refined, the testing regime was applied to a group of ceramic materials. The materials selected were from two broad groups: alumina and carbide materials. Carbide ceramics show superior performance to alumina ceramics in ballistic applications, so it was essential that any screening test would easily be able to differentiate the two groups. Within the alumina family, two commercially available materials, AD995 and Sintox FA, were selected. These were tested alongside three developmental silicon carbide-boron carbide composites, which had identical chemical compositions but different microstructures and thus presented more of a challenge in terms of differentiation. The results from the various tests were used to make predictions about the relative ballistic performances. The tests showed that all of the composites would outperform the alumina materials. Further, all of the tests led to the prediction that AD995 would be better ballistically than Sintox FA, possibly by up to a factor of two. The predictions were in very good agreement with literature values for depth-of-penetration testing. The situation was more complex for the carbide materials, with different tests leading to slightly different predictions. However, the predictions from the ultrasonic tests were consistent with the available ballistic data. Indeed, the ultrasonic data proved to be the most consistent predictor of ballistic performance, supporting the view that the total defect population is more relevant than a ‘critical flaw’ concept. Thus, it can be concluded that with further development, and subject to validation across a wider spread of materials and microstructures, thermal shock testing coupled with ultrasonic measurements could form the basis of a future screening test for ceramics for armour applications.
Cesnaitis, Romanas; Sobanska, Marta A; Versonnen, Bram; Sobanski, Tomasz; Bonnomet, Vincent; Tarazona, Jose V; De Coen, Wim
2014-03-15
For the first REACH registration deadline, companies have submitted registrations with relevant hazard and exposure information for substances at the highest tonnage level (above 1000 tonnes per year). At this tonnage level, information on the long-term toxicity of a substance to sediment organisms is required. There are a number of available test guidelines, developed and accepted by various national/international organisations, which can be used to investigate long-term toxicity to sediment organisms. However, instead of testing, registrants may also use other options to address toxicity to sediment organisms, e.g. a weight-of-evidence approach, grouping of substances and read-across approaches, as well as substance-tailored exposure-driven testing. The current analysis of the data provided in the ECHA database focuses on the test methods applied and the test organisms used in the experimental studies to assess long-term toxicity to sediment organisms. The main guidelines used for the testing of substances registered under REACH are the OECD guidelines and the OSPAR Protocols on Methods for the Testing of Chemicals used in the Offshore Oil Industry ("Part A: A Sediment Bioassay using an Amphipod Corophium sp."), which explains why one of the most commonly used test organisms is the marine amphipod Corophium sp. In total, testing results for at least 40 species from seven phyla are provided in the database. However, it can be concluded that the ECHA database does not contain enough experimental data on toxicity to sediment organisms for it to be used extensively by the scientific community (e.g. for the development of non-testing methods to predict hazards to sediment organisms). © 2013.
A robust hypothesis test for the sensitive detection of constant speed radiation moving sources
NASA Astrophysics Data System (ADS)
Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Moline, Yoann; Sannié, Guillaume; Gameiro, Jordan; Normand, Stéphane; Méchin, Laurence
2015-09-01
Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single-channel and multichannel detection algorithms, which are inefficient at too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on the empirically estimated mean and variance of the signals delivered by the different channels have shown a significant gain in terms of the tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods that takes advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that, in the four relevant configurations (a pedestrian source carrier under respectively high and low count rate radioactive backgrounds, and a vehicle source carrier under the same respectively high and low count rate radioactive backgrounds), the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm probability. It also guarantees that the optimal coverage factor for this compromise remains stable regardless of signal-to-noise ratio variations between 2 and 0.8, therefore allowing the final user to parametrize the test with only prior knowledge of the background amplitude.
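The paper's actual test statistic is not spelled out in the abstract; the sketch below shows only the Poisson building block such a test can rest on, namely the probability of registering at least the observed counts in an integration window if only background were present. Names and the fixed alpha are our assumptions:

```python
from scipy import stats

def count_excess_test(window_counts, bkg_rate, dwell, alpha=1e-3):
    """One-sided Poisson test for a count excess in an integration
    window, exploiting the Poisson nature of the counting signal.
    window_counts : counts registered during the window
    bkg_rate      : background count rate (counts/s), known or estimated
    dwell         : window duration (s)
    """
    mu = bkg_rate * dwell
    # survival function: P(N >= window_counts | background only)
    p = stats.poisson.sf(window_counts - 1, mu)
    return p, p < alpha  # alarm if the excess is too unlikely
```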
Accuracy of five serologic tests for the follow up of Strongyloides stercoralis infection.
Buonfrate, Dora; Sequi, Marco; Mejia, Rojelio; Cimino, Ruben O; Krolewiecki, Alejandro J; Albonico, Marco; Degani, Monica; Tais, Stefano; Angheben, Andrea; Requena-Mendez, Ana; Muñoz, José; Nutman, Thomas B; Bisoffi, Zeno
2015-02-01
Traditional faecal-based methods have poor sensitivity for the detection of S. stercoralis and are therefore inadequate for post-treatment evaluation of infected patients, who should be carefully monitored to exclude the persistence of the infection. In a previous study, we demonstrated high accuracy of five serology tests for the screening and diagnosis of strongyloidiasis. The aim of this study is to evaluate the performance of the same five tests for the follow-up of patients infected with S. stercoralis. This was a retrospective study on anonymized, cryo-preserved samples available at the Centre for Tropical Diseases (Negrar, Verona, Italy). Samples were collected before treatment and from 3 to 12 months after treatment. The samples were tested with two commercially available ELISA tests (IVD, Bordier), two techniques based on a recombinant antigen (NIE-ELISA and NIE-LIPS) and one in-house IFAT. The results of each test were evaluated both in relation to the results of fecal examination and to those of a composite reference standard (classifying as positive a sample with positive stools and/or at least three positive serology tests). The associations between the independent variables (age and time) and the dependent variable (the value of each of the five serological tests) were analyzed with a linear mixed-effects regression model. For each test, a high proportion of samples demonstrated seroreversion or a relevant decline (optical density/relative light units halved, or a decrease of at least two titers for IFAT) at follow-up, a result confirmed by the linear mixed-effects model, which showed a trend toward seroreversion over time for all tests. In particular, IVD-ELISA (almost 90% of samples demonstrated a relevant decline) and IFAT (almost 87%) had the best performance. Considering only samples with complete negativization, NIE-ELISA showed the best performance (72.5% seroreversion). Serology is useful for the follow-up of patients infected with S. stercoralis and for determining a test of cure.
Virtual Design Method for Controlled Failure in Foldcore Sandwich Panels
NASA Astrophysics Data System (ADS)
Sturm, Ralf; Fischer, S.
2015-12-01
For certification, novel fuselage concepts have to prove equivalent crashworthiness standards compared to the existing metal reference design. Due to the brittle failure behaviour of CFRP, this requirement can only be fulfilled by controlled progressive crash kinematics. Experiments showed that the failure of a twin-walled fuselage panel can be controlled by a local modification of the core through-thickness compression strength. For folded cores, the required change in core properties can be integrated by a modification of the fold pattern. However, the complexity of folded cores requires a virtual design methodology for tailoring the fold pattern according to all static and crash-relevant requirements. In this context, a foldcore micromodel simulation method is presented to identify the structural response of twin-walled fuselage panels with a folded core under crash-relevant loading conditions. The simulations showed that a high degree of correlation is required before simulation can replace expensive testing. In the presented studies, the necessary correlation quality could only be obtained by including imperfections of the core material in the micromodel simulation approach.
Enhanced conformational sampling of carbohydrates by Hamiltonian replica-exchange simulation.
Mishra, Sushil Kumar; Kara, Mahmut; Zacharias, Martin; Koca, Jaroslav
2014-01-01
Knowledge of the structure and conformational flexibility of carbohydrates in an aqueous solvent is important to improving our understanding of how carbohydrates function in biological systems. In this study, we extend a variant of the Hamiltonian replica-exchange molecular dynamics (MD) simulation to improve the conformational sampling of saccharides in an explicit solvent. During the simulations, a biasing potential along the glycosidic-dihedral linkage between the saccharide monomer units in an oligomer is applied at various levels along the replica runs to enable effective transitions between various conformations. One reference replica runs under the control of the original force field. The method was tested on disaccharide structures and further validated on biologically relevant blood group B, Lewis X and Lewis A trisaccharides. The biasing potential-based replica-exchange molecular dynamics (BP-REMD) method provided a significantly improved sampling of relevant conformational states compared with standard continuous MD simulations, with modest computational costs. Thus, the proposed BP-REMD approach adds a new dimension to existing carbohydrate conformational sampling approaches by enhancing conformational sampling in the presence of solvent molecules explicitly at relatively low computational cost.
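As an illustration of the replica-exchange machinery described above (our sketch, not the authors' code), the Metropolis exchange step between two replicas that share the force field and differ only in their biasing potential depends on the bias energies alone, since the common unbiased term cancels:

```python
import numpy as np

rng = np.random.default_rng(1)

def try_exchange(x_i, x_j, bias_i, bias_j, beta):
    """Metropolis exchange between two Hamiltonian replicas differing
    only in their biasing potential along the glycosidic dihedrals.
    bias_* are callables returning the bias energy of a configuration;
    beta = 1/(kB*T)."""
    delta = beta * ((bias_i(x_j) + bias_j(x_i))
                    - (bias_i(x_i) + bias_j(x_j)))
    if delta <= 0 or rng.random() < np.exp(-delta):
        return x_j, x_i   # accepted: swap configurations
    return x_i, x_j       # rejected: keep configurations
```

Only the reference replica, which runs with zero bias, contributes to the final unbiased ensemble; the biased replicas exist to shuttle conformations across glycosidic-torsion barriers.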
Buttingsrud, Bård; Ryeng, Einar; King, Ross D; Alsberg, Bjørn K
2006-06-01
The requirement of aligning each individual molecule in a data set severely limits the types of molecules that can be analysed with traditional structure-activity relationship (SAR) methods. A method which solves this problem by using relations between objects is inductive logic programming (ILP). Another advantage of this methodology is its ability to include background knowledge as first-order logic. However, previous molecular ILP representations have not been effective in describing the electronic structure of molecules. We present a more unified and comprehensive representation based on Richard Bader's quantum topological atoms in molecules (AIM) theory, in which critical points in the electron density are connected through a network. AIM theory provides a wealth of chemical information about individual atoms and their bond connections, enabling a more flexible and chemically relevant representation. To obtain even more relevant rules with higher coverage, we apply manual postprocessing and interpretation of ILP rules. We have tested the usefulness of the new representation in SAR modelling by classifying compounds of low/high mutagenicity and a set of factor Xa inhibitors of high and low affinity.
Blatti, Charles; Sinha, Saurabh
2016-07-15
Analysis of co-expressed gene sets typically involves testing for enrichment of different annotations or 'properties' such as biological processes, pathways, transcription factor binding sites, etc., one property at a time. This common approach ignores any known relationships among the properties or the genes themselves. It is believed that known biological relationships among genes and their many properties may be exploited to more accurately reveal commonalities of a gene set. Previous work has sought to achieve this by building biological networks that combine multiple types of gene-gene or gene-property relationships, and performing network analysis to identify other genes and properties most relevant to a given gene set. Most existing network-based approaches for recognizing genes or annotations relevant to a given gene set collapse information about different properties to simplify (homogenize) the networks. We present a network-based method for ranking genes or properties related to a given gene set. Such related genes or properties are identified from among the nodes of a large, heterogeneous network of biological information. Our method involves a random walk with restarts, performed on an initial network with multiple node and edge types that preserve more of the original, specific property information than current methods that operate on homogeneous networks. In this first stage of our algorithm, we find the properties that are the most relevant to the given gene set and extract a subnetwork of the original network, comprising only these relevant properties. We then re-rank genes by their similarity to the given gene set, based on a second random walk with restarts, performed on the above subnetwork. We demonstrate the effectiveness of this algorithm for ranking genes related to Drosophila embryonic development and aggressive responses in the brains of social animals. DRaWR was implemented as an R package available at veda.cs.illinois.edu/DRaWR. blatti@illinois.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
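DRaWR is distributed as an R package; the Python sketch below shows only the core iteration that both of its stages rely on, a random walk with restarts over a column-normalized network, biased back toward the query gene set. The two-stage subnetwork extraction is omitted, and all names are ours:

```python
import numpy as np

def random_walk_with_restarts(W, restart_set, r=0.3, tol=1e-8):
    """Stationary probabilities of a random walk with restarts.
    W           : (n, n) column-normalized adjacency matrix, where
                  W[i, j] is the probability of stepping j -> i
    restart_set : indices of the query gene set
    r           : restart probability
    """
    n = W.shape[0]
    p0 = np.zeros(n)
    p0[list(restart_set)] = 1.0 / len(restart_set)
    p = p0.copy()
    while True:
        p_next = (1 - r) * (W @ p) + r * p0   # walk or jump home
        if np.abs(p_next - p).sum() < tol:
            return p_next                      # converged ranking
        p = p_next
```

Nodes (genes or properties) are then ranked by their stationary probability, so items well connected to the query set score highly even without a direct edge to it.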
NASA Astrophysics Data System (ADS)
Bijl, Piet; Hogervorst, Maarten A.; Toet, Alexander
2017-05-01
The Triangle Orientation Discrimination (TOD) methodology includes i) a widely applicable, accurate end-to-end EO/IR sensor test, ii) an image-based sensor system model and iii) a Target Acquisition (TA) range model. The method has been extensively validated against TA field performance for a wide variety of well- and under-sampled imagers, systems with advanced image processing techniques such as dynamic super resolution and local adaptive contrast enhancement, and sensors showing smear or noise drift, for both static and dynamic test stimuli and as a function of target contrast. Recently, significant progress has been made in various directions. Dedicated visual and NIR test charts for lab and field testing are available and thermal test benches are on the market. Automated sensor testing using an objective synthetic human observer is within reach. Both an analytical and an image-based TOD model have recently been developed and are being implemented in the European Target Acquisition model ECOMOS and in the EOSTAR TDA. Further, the methodology is being applied for design optimization of high-end security camera systems. Finally, results from a recent perception study suggest that DRI ranges for real targets can be predicted by replacing the relevant distinctive target features by TOD test patterns of the same characteristic size and contrast, enabling a new TA modeling approach. This paper provides an overview.
Brniak, Witold; Jachowicz, Renata; Pelka, Przemyslaw
2015-01-01
Even though orodispersible tablets (ODTs) have been used successfully in therapy for more than 20 years, there is still no compendial method for evaluating their disintegration time other than the pharmacopoeial disintegration test conducted in 800–900 mL of distilled water. Therefore, several alternative tests more relevant to in vivo conditions have been described by different researchers. The aim of this study was to compare these methods and correlate them with in vivo results. Six series of ODTs were prepared by direct compression. Their mechanical properties and disintegration times were measured with the pharmacopoeial and alternative methods and compared with the in vivo results. The highest correlation with oral disintegration time was found for an apparatus of our own construction with an additional weight and for the method proposed by Narazaki et al. The correlation coefficients were 0.9994 (p < 0.001) and 0.9907 (p < 0.001), respectively. The pharmacopoeial method correlated much more poorly with the in vivo data (r = 0.8925, p < 0.05). These results show that the development of novel biorelevant methods for determining ODT disintegration time is warranted and scientifically justified. PMID:27134547
Developing and Validating Personas in e-Commerce: A Heuristic Approach
NASA Astrophysics Data System (ADS)
Thoma, Volker; Williams, Bryn
A multi-method persona development process in a large e-commerce business is described. Personas are fictional representations of customers that describe typical user attributes to facilitate a user-centered approach in interaction design. In the current project persona attributes were derived from various data sources, such as stakeholder interviews, user tests and interviews, data mining, customer surveys, and ethnographic (direct observation, diary studies) research. The heuristic approach of using these data sources conjointly allowed for an early validation of relevant persona dimensions.
Achieving high-density states through shock-wave loading of precompressed samples
Jeanloz, Raymond; Celliers, Peter M.; Collins, Gilbert W.; Eggert, Jon H.; Lee, Kanani K. M.; McWilliams, R. Stewart; Brygoo, Stéphanie; Loubeyre, Paul
2007-01-01
Materials can be experimentally characterized to terapascal pressures by sending a laser-induced shock wave through a sample that is precompressed inside a diamond-anvil cell. This combination of static and dynamic compression methods has been experimentally demonstrated and ultimately provides access to the 10- to 100-TPa (0.1–1 Gbar) pressure range that is relevant to planetary science, testing first-principles theories of condensed matter, and experimentally studying a new regime of chemical bonding. PMID:17494771
Evaluation of the Radiometer whole blood glucose measuring system, EML 105.
Harff, G A; Janssen, W C; Rooijakkers, M L
1997-03-01
The performance of a new glucose electrode system from Radiometer was tested using two EML 105 analyzers (Radiometer Medical A/S, Copenhagen, Denmark). Results were very precise (both analyzers reported a CV of 1.0% at a glucose concentration of 13.4 mmol/l). Comparison of methods was performed according to the NCCLS EP9-T guideline. Patients' glucose results from both analyzers were lower than the results obtained with a Hitachi 911 (Boehringer Mannheim, Mannheim, Germany). There was no clinically relevant haematocrit dependency.
Assessment of NDE Reliability Data
NASA Technical Reports Server (NTRS)
Yee, B. G. W.; Chang, F. H.; Couchman, J. C.; Lemon, G. H.; Packman, P. F.
1976-01-01
Twenty sets of relevant Nondestructive Evaluation (NDE) reliability data have been identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations has been formulated. A model to grade the quality and validity of the data sets has been developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, have been formulated for each NDE method. A comprehensive computer program has been written to calculate the probability of flaw detection at several confidence levels by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. Probability of detection curves at 95 and 50 percent confidence levels have been plotted for individual sets of relevant data as well as for several sets of merged data with common sets of NDE parameters.
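The report above computes probabilities of flaw detection at 95 and 50 percent confidence from binomial hit/miss data. A common way to obtain such a bound (assumed here for illustration; the report's exact formulation may differ) is the one-sided Clopper-Pearson lower limit:

```python
from scipy import stats

def pod_lower_bound(detections, trials, confidence=0.95):
    """One-sided lower confidence bound on the probability of flaw
    detection (POD) from binomial hit/miss data (Clopper-Pearson)."""
    if detections == 0:
        return 0.0
    return stats.beta.ppf(1 - confidence, detections,
                          trials - detections + 1)

# 29 detections in 29 inspections gives the classic result that the
# POD exceeds roughly 0.90 at 95% confidence
print(round(pod_lower_bound(29, 29), 3))
```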