Statistically Controlling for Confounding Constructs Is Harder than You Think
Westfall, Jacob; Yarkoni, Tal
2016-01-01
Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest—in some cases approaching 100%—when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity. PMID:27031707
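The core claim of this abstract can be reproduced in a few lines. The sketch below, with purely illustrative parameters (latent correlation 0.7, reliability 0.6), simulates an outcome driven by construct A alone, regresses it on noisy measures of both A and B, and counts how often B's coefficient nonetheless comes out "significant":

```python
import numpy as np

def spurious_incremental_validity(n=500, rho=0.7, reliability=0.6,
                                  n_sims=2000, seed=0):
    """Fraction of simulations in which construct B 'shows' incremental
    validity even though the outcome depends on construct A alone.
    A and B are correlated latents (corr = rho) observed through noisy
    measures with the given reliability; all parameters are illustrative."""
    rng = np.random.default_rng(seed)
    err_sd = np.sqrt(1.0 / reliability - 1.0)  # error SD for unit-variance latents
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(size=n)
        b = rho * a + np.sqrt(1.0 - rho**2) * rng.normal(size=n)
        y = a + rng.normal(size=n)               # outcome caused by A only
        xa = a + err_sd * rng.normal(size=n)     # unreliable measure of A
        xb = b + err_sd * rng.normal(size=n)     # unreliable measure of B
        X = np.column_stack([np.ones(n), xa, xb])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - 3)
        cov = sigma2 * np.linalg.inv(X.T @ X)
        if abs(beta[2]) / np.sqrt(cov[2, 2]) > 1.96:  # nominal 5% two-sided test
            hits += 1
    return hits / n_sims
```

With moderate reliability the false-positive rate far exceeds the nominal 5% and grows with sample size; setting reliability=1.0 recovers the nominal rate, matching the abstract's argument.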
Construct Validity of Three Clerkship Performance Assessments
ERIC Educational Resources Information Center
Lee, Ming; Wimmers, Paul F.
2010-01-01
This study examined construct validity of three commonly used clerkship performance assessments: preceptors' evaluations, OSCE-type clinical performance measures, and the NBME [National Board of Medical Examiners] medicine subject examination. Six hundred and eighty-six students taking the inpatient medicine clerkship from 2003 to 2007…
ERIC Educational Resources Information Center
Daisley, Richard J.
2011-01-01
This article explores the feasibility of using the Myers-Briggs Type Indicator (MBTI) as a framework for instructor development in a professional services training environment. It explores the consistency of MBTI with common adult learning theory, addresses questions on MBTI's reliability and validity, and explores the applicability of MBTI to the…
A Primer on Experimental and Quasi-experimental Design.
ERIC Educational Resources Information Center
Dawson, Thomas E.
Counseling psychology is a relatively new field that is gaining autonomy and respect. Unfortunately, research efforts in the field may lack an appropriate research design. This paper considers some of the more common types of research design and the associated threats to their validity. An example of each design type is drawn from the counseling…
Experiment module concepts study. Volume 3: Module and subsystem design
NASA Technical Reports Server (NTRS)
Hunter, J. R.; Chiarappa, D. J.
1970-01-01
The final common module set exhibiting wide commonality is described. The set consists of three types of modules: one free flying module and two modules that operate attached to the space station. The common module designs provide for the experiment program as defined. The feasibility, economy, and practicality of these modules hinge on factors that do not affect the approach or results of the commonality process, but are important to the validity of the common module concepts. Implementation of the total experiment program requires thirteen common modules: five CM-1, five CM-3, and three CM-4 modules.
Recursive formulae and performance comparisons for first mode dynamics of periodic structures
NASA Astrophysics Data System (ADS)
Hobeck, Jared D.; Inman, Daniel J.
2017-05-01
Periodic structures are growing in popularity, especially in the energy harvesting and metastructures communities. Common types of these unique structures are referred to in the literature as zigzag, orthogonal spiral, fan-folded, and longitudinal zigzag structures. Many of these studies on periodic structures have two competing goals in common: (a) minimizing natural frequency, and (b) minimizing mass or volume. These goals suggest that no single design is best for all applications; therefore, there is a need for design optimization and comparison tools, which first require efficient, easy-to-implement models. Available structural dynamics models for these types of structures do provide exact analytical solutions; however, they are complex, tedious to implement, and supply more information than practical applications require, making them computationally inefficient. This paper presents experimentally validated recursive models that are able to very accurately and efficiently predict the dynamics of the four most common types of periodic structures. The proposed modeling technique employs a combination of static deflection formulae and Rayleigh's Quotient to estimate the first mode shape and natural frequency of periodic structures having any number of beams. Also included in this paper are the results of an extensive experimental validation study which show excellent agreement between model prediction and measurement. Lastly, the proposed models are used to evaluate the performance of each type of structure. Results of this performance evaluation reveal key advantages and disadvantages associated with each type of structure.
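The Rayleigh's-quotient step this abstract describes can be illustrated on the simplest case, a uniform cantilever, using a static deflection shape as the assumed first mode (a minimal sketch of the general idea, not the paper's recursive formulae):

```python
import numpy as np

def rayleigh_first_frequency(EI=1.0, m=1.0, L=1.0, n=10_000):
    """First natural frequency of a uniform cantilever via Rayleigh's
    quotient, using the static deflection shape under a uniform load
    as the assumed first mode shape."""
    x = np.linspace(0.0, L, n)
    phi = x**2 * (6 * L**2 - 4 * L * x + x**2)    # assumed mode shape (up to a constant)
    d2phi = 12 * L**2 - 24 * L * x + 12 * x**2    # its exact second derivative

    def trapz(f):  # simple trapezoidal rule on the grid x
        return np.sum((f[1:] + f[:-1]) / 2.0) * (x[1] - x[0])

    # Rayleigh's quotient: omega^2 = strain-energy integral / kinetic-energy integral
    omega2 = EI * trapz(d2phi**2) / (m * trapz(phi**2))
    return float(np.sqrt(omega2))
```

For EI = m = L = 1 this gives about 3.530, just above the exact first-mode coefficient of 3.516 in omega = 3.516*sqrt(EI/(m*L^4)), as expected since Rayleigh's quotient is an upper bound on the fundamental frequency.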
Stability of an empirical psychosocial taxonomy across type of diabetes and treatment.
Nouwen, A; Breton, M-C; Urquhart Law, G; Descôteaux, J
2007-01-01
The aims of the study were (i) to examine whether an empirical psychosocial taxonomy, based on key diabetes-related variables, is independent of type of diabetes and treatment, and (ii) to further establish the external validation of the taxonomy. In a cross-sectional study, 82 patients with Type 1 and 86 patients with Type 2 diabetes mellitus were assigned to one of three psychosocial patient profiles based on their Multidimensional Diabetes Questionnaire (MDQ) scores. General psychological and diabetes-specific measures were obtained through self-report and HbA(1c) was measured. Equal proportions of Type 1 and Type 2 patients, and of patients using insulin and oral medication/diet only were classified within each of the three psychosocial profiles. External validation confirmed the validity and distinctiveness of the patients' profiles. The patient profiles were independent of demographic variables, body mass index, duration of diabetes, complexity of treatment, number of complications, social desirability, and major stress levels. The Psychosocial Taxonomy for Patients with Diabetes provides a new way to categorize individuals who may have more in common than just their type of diabetes and/or its treatment and can help target interventions to individual patients' needs.
Discovery of cancer common and specific driver gene sets
2017-01-01
Cancer is known as a disease mainly caused by gene alterations. Discovery of mutated driver pathways or gene sets is becoming an important step to understand molecular mechanisms of carcinogenesis. However, systematically investigating commonalities and specificities of driver gene sets among multiple cancer types remains a great challenge, even though such an investigation would undoubtedly benefit the deciphering of cancers and be helpful for personalized therapy and precision medicine in cancer treatment. In this study, we propose two optimization models to de novo discover common driver gene sets among multiple cancer types (ComMDP) and specific driver gene sets of one or more cancer types relative to other cancers (SpeMDP), respectively. We first apply ComMDP and SpeMDP to simulated data to validate their efficiency. Then, we further apply these methods to 12 cancer types from The Cancer Genome Atlas (TCGA) and obtain several biologically meaningful driver pathways. As examples, we construct a common cancer pathway model for BRCA and OV, infer a complex driver pathway model for BRCA carcinogenesis based on common driver gene sets of BRCA with eight cancer types, and investigate specific driver pathways of the liquid cancer acute myeloid leukemia (LAML) versus other solid cancer types. In these analyses, additional candidate cancer genes are also found. PMID:28168295
Validation of classification algorithms for childhood diabetes identified from administrative data.
Vanderloo, Saskia E; Johnson, Jeffrey A; Reimer, Kim; McCrea, Patrick; Nuernberger, Kimberly; Krueger, Hans; Aydede, Sema K; Collet, Jean-Paul; Amed, Shazhan
2012-05-01
Type 1 diabetes is the most common form of diabetes among children; however, the proportion of cases of childhood type 2 diabetes is increasing. In Canada, the National Diabetes Surveillance System (NDSS) uses administrative health data to describe trends in the epidemiology of diabetes, but does not specify diabetes type. The objective of this study was to validate algorithms to classify diabetes type in children <20 yr identified using the NDSS methodology. We applied the NDSS case definition to children living in British Columbia between 1 April 1996 and 31 March 2007. Through an iterative process, four potential classification algorithms were developed based on demographic characteristics and drug-utilization patterns. Each algorithm was then validated against a gold standard clinical database. Algorithms based primarily on an age rule (i.e., age <10 at diagnosis categorized type 1 diabetes) were most sensitive in the identification of type 1 diabetes; algorithms with restrictions on drug utilization (i.e., no prescriptions for insulin ± glucose monitoring strips categorized type 2 diabetes) were most sensitive for identifying type 2 diabetes. One algorithm was identified as having the optimal balance of sensitivity (Sn) and specificity (Sp) for the identification of both type 1 (Sn: 98.6%; Sp: 78.2%; PPV: 97.8%) and type 2 diabetes (Sn: 83.2%; Sp: 97.5%; PPV: 73.7%). Demographic characteristics in combination with drug-utilization patterns can be used to differentiate diabetes type among cases of pediatric diabetes identified within administrative health databases. Validation of similar algorithms in other regions is warranted. © 2011 John Wiley & Sons A/S.
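The sensitivity, specificity, and PPV figures quoted for the classification algorithms come directly from a 2x2 confusion matrix; a minimal sketch with illustrative counts (not the study's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive predictive value from a
    2x2 confusion matrix (tp = true positives, fp = false positives,
    fn = false negatives, tn = true negatives)."""
    sensitivity = tp / (tp + fn)  # true cases correctly identified
    specificity = tn / (tn + fp)  # non-cases correctly identified
    ppv = tp / (tp + fp)          # positive calls that are correct
    return sensitivity, specificity, ppv
```

For example, 80 true positives, 5 false positives, 20 false negatives, and 95 true negatives yield Sn 0.80, Sp 0.95, and PPV of roughly 0.94, the trade-off pattern the validation study reports when tuning its algorithms.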
Common Metrics for Human-Robot Interaction
2006-03-01
interaction spectrum. By doing so, we believe that: (1) our metrics are broadly applicable to a wide range of applications and (2) we can assess… disambiguate or increase confidence for perceptual inference [2]). 1) Passive Perception: Passive perception involves interpreting sensor data
DOT National Transportation Integrated Search
2013-04-01
The project objective was to validate the results from ICT Project R27-1, which characterized in the : laboratory the strength, stiffness, and deformation behaviors of three different aggregate types : commonly used in Illinois for subgrade replaceme...
Gironés Muriel, Alberto; Campos Segovia, Ana; Ríos Gómez, Patricia
2018-01-01
The study of mediating variables and psychological responses to child surgery involves the evaluation of both the patient and the parents as regards different stressors. The objective was to obtain a reliable, reproducible, and valid evaluation tool that assesses the level of paternal involvement in relation to different stressors in the setting of surgery. A self-report questionnaire study was completed by 123 subjects of both sexes, subdivided into 2 populations due to their relationship with the hospital setting. The items were determined by a group of experts and analysed using the Lawshe validity index to establish an initial content validity. Subsequently, the reliability of the tool was determined by an item-re-item analysis of the 2 sub-populations. A factor analysis, using maximum likelihood estimation with varimax rotation, was performed to assess construct validity. A questionnaire of paternal concern was produced, consisting of 21 items with a Cronbach coefficient of 0.97, giving good precision and stability. The subsequent factor analysis gives an adequate validity to the questionnaire, with the determination of 10 common stressors that cover 74.08% of the common and non-common variance of the questionnaire. The proposed questionnaire is reliable, valid and easy to apply, and was developed to assess the level of paternal concern about the surgery of a child and to allow measures and programs to be applied through the prior assessment of these elements. Copyright © 2016 Asociación Española de Pediatría. Publicado por Elsevier España, S.L.U. All rights reserved.
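The Lawshe content validity index used above has a simple closed form per item; a minimal sketch (panel sizes and counts are illustrative, not the study's):

```python
def lawshe_cvr(n_essential, n_panelists):
    """Lawshe's content validity ratio for one item:
    CVR = (n_e - N/2) / (N/2), where n_e of N expert panelists rated
    the item 'essential'. Ranges from -1 (no expert) through 0
    (exactly half) to +1 (every expert)."""
    half = n_panelists / 2.0
    return (n_essential - half) / half
```

Items whose CVR falls below a critical value for the panel size are typically dropped, which is how a candidate pool is pruned to a final instrument such as the 21-item questionnaire described here.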
Development of assessment instruments to measure critical thinking skills
NASA Astrophysics Data System (ADS)
Sumarni, W.; Supardi, K. I.; Widiarti, N.
2018-04-01
Assessment instruments commonly used in schools are generally not oriented toward critical thinking skills. The purpose of this research is to develop assessment instruments that measure critical thinking skills and to test their validity, reliability, and practicality. This type of research is Research and Development. The preliminary step has two stages: a field study and a literature study. The development step consists of 1) instrument construction, 2) expert validation, 3) a limited-scale tryout, and 4) a narrow-scale tryout. The developed assessment instruments are an analysis essay and a problem-solving task. The instruments were declared valid, reliable, and practical.
A comparison of two patient classification instruments in an acute care hospital.
Seago, Jean Ann
2002-05-01
Patient classification systems are alternately praised and vilified by staff nurses, nurse managers, and nurse executives. Most nurses agree that substantial resources are used to create or find, implement, manage, and maintain the systems, and that the predictive ability of the instruments is intermittent. The purpose of this study is to compare the predictive validity of two types of patient classification instruments commonly used in acute care hospitals in California. Acute care hospitals in California are required by both the Joint Commission on Accreditation of Healthcare Organizations and California Title 22 to have a reliable and valid patient classification system (PCS). The two general types of systems commonly used are the summative task type PCS and the critical incident or criterion type PCS. There is little to assist nurse executives in deciding which type of PCS to choose. There is modest research demonstrating the validity and reliability of different PCSs but no published data comparing the predictive validity of the different types of systems. The unit of analysis is one patient shift called the study shift. The study shift is defined as the first day shift after the patient has been in the hospital for a full 24 hours. Data were collected using medical record review only. Both types, criterion and summative, of PCS data collection instruments were completed for all patients at both collection points. Each patient had a before and after score for each type of instrument. Three hundred forty-nine medical records for inpatients meeting the inclusion criteria were examined. The average patient age was 76 years, the average length of stay was 6.6 days with an average of 6.7 secondary diagnoses recorded. Fifty-five percent of the sample was female and the most common primary diagnosis was CHF, followed by COPD, CVA, and pneumonia. 
There was a difference in mean summative predictor score and the mean summative actual score of 1.57 points with the predictor score higher (P =.001; CI =.62--2.5). For the criterion instrument, 68.4% of the predictor criterion scores were in category 2 compared to 65.5% of the actual criterion scores. The criterion predictor agreed with the criterion actual score 45% of the time for category 1 patients, 87.3% of the time for category 2 patients, 77.1% of the time for category 3 patients and 72.7% of the time for category 4 patients, with an overall agreement between predictor and actual criterion scores of 79.9% (Kappa P <.001, indicating agreement is not by chance). The most significant finding of this study is that there are virtually no differences in the predictive ability of summative versus criterion patient classification instruments. Using the same patients, both types of instruments predicted the actual score over 78% of the time.
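The chance-corrected agreement reported for the criterion instrument (Kappa) can be computed directly from a predicted-versus-actual contingency table; a sketch with illustrative counts, not the study's data:

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a square predicted-vs-actual contingency
    table of counts: (p_observed - p_chance) / (1 - p_chance),
    where p_chance comes from the row and column marginals."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p_obs = np.trace(t) / n            # proportion of exact agreements
    p_chance = (t.sum(axis=1) @ t.sum(axis=0)) / n**2
    return (p_obs - p_chance) / (1.0 - p_chance)
```

A table with 80% raw agreement and balanced marginals gives kappa = 0.6, while a table of purely random agreement gives kappa = 0, which is why the study cites kappa rather than raw percent agreement to show that predictor-actual concordance is not by chance.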
Actor groups, related needs, and challenges at the climate downscaling interface
NASA Astrophysics Data System (ADS)
Rössler, Ole; Benestad, Rasmus; Diamando, Vlachogannis; Heike, Hübener; Kanamaru, Hideki; Pagé, Christian; Margarida Cardoso, Rita; Soares, Pedro; Maraun, Douglas; Kreienkamp, Frank; Christodoulides, Paul; Fischer, Andreas; Szabo, Peter
2016-04-01
At the climate downscaling interface, numerous downscaling techniques and different philosophies compete to be the best method in their specific terms. It remains unclear, however, to what extent and for which purposes these downscaling techniques are valid, or even the most appropriate choice. Until now, a common validation framework comparing all the available methods has been missing. The VALUE initiative closes this gap with such a common validation framework. An essential part of a validation framework for downscaling techniques is the definition of appropriate validation measures. The selection of validation measures should consider the needs of the stakeholder: some might need a temporal or spatial average of a certain variable, others might need temporal or spatial distributions of some variables, still others might need extremes for the variables of interest or even inter-variable dependencies. Hence, a close interaction of climate data providers and climate data users is necessary. Thus, the challenge in formulating a common validation framework mirrors the challenges between the climate data providers and the impact assessment community. This poster elaborates on the issues and challenges at the downscaling interface as seen within the VALUE community. It identifies three different actor groups: one group consisting of the climate data providers, the other two being climate data users (impact modellers and societal users). Hence, the downscaling interface faces classical transdisciplinary challenges. We depict a graphical illustration of the actors involved and their interactions. In addition, we identify four different types of issues that need to be considered: data-based, knowledge-based, communication-based, and structural issues. Each of these may, individually or jointly, hinder an optimal exchange of data and information between the actor groups at the downscaling interface.
Finally, some possible ways to tackle these issues are discussed.
Gray, Benjamin J; Bracken, Richard M; Turner, Daniel; Morgan, Kerry; Thomas, Michael; Williams, Sally P; Williams, Meurig; Rice, Sam; Stephens, Jeffrey W
2015-01-01
Background: Use of a validated risk-assessment tool to identify individuals at high risk of developing type 2 diabetes is currently recommended. It is under-reported, however, whether a different risk tool alters the predicted risk of an individual. Aim: This study explored any differences between commonly used validated risk-assessment tools for type 2 diabetes. Design and setting: Cross-sectional analysis of individuals who participated in a workplace-based risk assessment in Carmarthenshire, South Wales. Method: Retrospective analysis of 676 individuals (389 females and 287 males) who participated in a workplace-based diabetes risk-assessment initiative. Ten-year risk of type 2 diabetes was predicted using the validated QDiabetes®, Leicester Risk Assessment (LRA), FINDRISC, and Cambridge Risk Score (CRS) algorithms. Results: Differences between the risk-assessment tools were apparent following retrospective analysis of individuals. CRS categorised the highest proportion (13.6%) of individuals at 'high risk' followed by FINDRISC (6.6%), QDiabetes (6.1%), and, finally, the LRA was the most conservative risk tool (3.1%). Following further analysis by sex, over one-quarter of males were categorised at high risk using CRS (25.4%), whereas a greater percentage of females were categorised as high risk using FINDRISC (7.8%). Conclusion: The adoption of a different valid risk-assessment tool can alter the predicted risk of an individual and caution should be used to identify those individuals who really are at high risk of type 2 diabetes. PMID:26541180
An Overlooked Factor in Sexual Abuse: Psychological and Physical Force Examined.
ERIC Educational Resources Information Center
Johnson, Scott A.
1998-01-01
Separate studies of sex offenders in treatment while serving prison sentences and placed on probation suggest that psychological force is more commonly used in sexual assault than physical force. Seven types of psychological force are described, and the conceptual validity of this schematic for use in treatment is evaluated. (Author/EMK)
Knight, Stacey; Camp, Nicola J
2011-04-01
Current common wisdom posits that association analyses using family-based designs have inflated type 1 error rates (if relationships are ignored) and independent controls are more powerful than familial controls. We explore these suppositions. We show theoretically that family-based designs can have deflated type 1 error rates. Through simulation, we examine the validity and power of family designs for several scenarios: cases from randomly or selectively ascertained pedigrees; and familial or independent controls. Family structures considered are as follows: sibships, nuclear families, moderate-sized and extended pedigrees. Three methods were considered with the χ² test for trend: variance correction (VC), weighted (weights assigned to account for genetic similarity), and naïve (ignoring relatedness) as well as the Modified Quasi-likelihood Score (MQLS) test. Selectively ascertained pedigrees had similar levels of disease enrichment; random ascertainment had no such restriction. Data for 1,000 cases and 1,000 controls were created under the null and alternate models. The VC and MQLS methods were always valid. The naïve method was anti-conservative if independent controls were used and valid or conservative in designs with familial controls. The weighted association method was generally valid for independent controls, and was conservative for familial controls. With regard to power, independent controls were more powerful for small-to-moderate selectively ascertained pedigrees, but familial and independent controls were equivalent in the extended pedigrees and familial controls were consistently more powerful for all randomly ascertained pedigrees. These results suggest a more complex situation than previously assumed, which has important implications for study design and analysis. © 2011 Wiley-Liss, Inc.
Validation of a combined health literacy and numeracy instrument for patients with type 2 diabetes.
Luo, Huabin; Patil, Shivajirao P; Wu, Qiang; Bell, Ronny A; Cummings, Doyle M; Adams, Alyssa D; Hambidge, Bertha; Craven, Kay; Gao, Fei
2018-05-20
This study aimed to validate a new consolidated measure of health literacy and numeracy (health literacy scale [HLS] plus the subjective numeracy scale [SNS]) in patients with type 2 diabetes (T2DM). A convenience sample (N = 102) of patients with T2DM was recruited from an academic family medicine center in the southeastern US between September-December 2017. Participants completed a questionnaire that included the composite HLS/SNS (22 questions) and a commonly used objective measure of health literacy, the S-TOFHLA (40 questions). Internal reliability of the HLS/SNS was assessed using Cronbach's alpha. Criterion and construct validity were assessed against the S-TOFHLA. The composite HLS/SNS had good internal reliability (Cronbach's alpha = 0.83). A confirmatory factor analysis revealed there were four factors in the new instrument. Model fit indices showed good model-data fit (RMSEA = 0.08). The Spearman's rank order correlation coefficient between the HLS/SNS and the S-TOFHLA was 0.45 (p < 0.01). Our study suggests that the composite HLS/SNS is a reliable, valid instrument. Published by Elsevier B.V.
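The internal-reliability figure quoted here (Cronbach's alpha = 0.83) follows from the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total score); a minimal sketch on toy data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # per-item variances, summed
    total_var = items.sum(axis=1).var(ddof=1)    # variance of each respondent's total
    return k / (k - 1) * (1.0 - item_vars / total_var)
```

Perfectly parallel items yield alpha = 1, while items that covary only weakly pull alpha toward 0, which is the sense in which the reported 0.83 indicates good internal consistency.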
Simms, Leonard J; Calabrese, William R
2016-02-01
Traditional personality disorders (PDs) are associated with significant psychosocial impairment. DSM-5 Section III includes an alternative hybrid personality disorder (PD) classification approach, with both type and trait elements, but relatively little is known about the impairments associated with Section III traits. Our objective was to study the incremental validity of Section III traits (compared to normal-range traits, traditional PD criterion counts, and common psychiatric symptomatology) in predicting psychosocial impairment. To that end, 628 current/recent psychiatric patients completed measures of PD traits, normal-range traits, traditional PD criteria, psychiatric symptomatology, and psychosocial impairments. Hierarchical regressions revealed that Section III PD traits incrementally predicted psychosocial impairment over normal-range personality traits, PD criterion counts, and common psychiatric symptomatology. In contrast, the incremental effects for normal-range traits, PD symptom counts, and common psychiatric symptomatology were substantially smaller than for PD traits. These findings have implications for PD classification and the impairment literature more generally.
Forensic validation of the SNPforID 52-plex assay.
Musgrave-Brown, Esther; Ballard, David; Balogh, Kinga; Bender, Klaus; Berger, Burkhard; Bogus, Magdalena; Børsting, Claus; Brion, María; Fondevila, Manuel; Harrison, Cheryl; Oguzturun, Ceylan; Parson, Walther; Phillips, Chris; Proff, Carsten; Ramos-Luis, Eva; Sanchez, Juan J; Sánchez Diz, Paula; Sobrino Rey, Bea; Stradmann-Bellinghausen, Beate; Thacker, Catherine; Carracedo, Angel; Morling, Niels; Scheithauer, Richard; Schneider, Peter M; Syndercombe Court, Denise
2007-06-01
The advantages of single nucleotide polymorphism (SNP) typing in forensic genetics are well known and include a wider choice of high-throughput typing platforms, lower mutation rates, and improved analysis of degraded samples. However, if SNPs are to become a realistic supplement to current short tandem repeat (STR) typing methods, they must be shown to successfully and reliably analyse the challenging samples commonly encountered in casework situations. The European SNPforID consortium, supported by the EU GROWTH programme, has developed a multiplex of 52 SNPs for forensic analysis, with the amplification of all 52 loci in a single reaction followed by two single base extension (SBE) reactions which are detected with capillary electrophoresis. In order to validate this assay, a variety of DNA extracts were chosen to represent problems such as low copy number and degradation that are commonly seen in forensic casework. A total of 40 extracts were used in the study, each of which was sent to two of the five participating laboratories for typing in duplicate or triplicate. Laboratories were instructed to carry out their analyses as if they were dealing with normal casework samples. Results were reported back to the coordinating laboratory and compared with those obtained from traditional STR typing of the same extracts using Powerplex 16 (Promega). These results indicate that, although the ability to successfully type good quality, low copy number extracts is lower, the 52-plex SNP assay performed better than STR typing on degraded samples, and also on samples that were both degraded and of limited quantity, suggesting that SNP analysis can provide advantages over STR analysis in forensically relevant circumstances. However, there were also additional problems arising from contamination and primer quality issues and these are discussed.
Weighing the Evidence in Peters' Rule: Does Neuronal Morphology Predict Connectivity?
Rees, Christopher L; Moradi, Keivan; Ascoli, Giorgio A
2017-02-01
Although the importance of network connectivity is increasingly recognized, identifying synapses remains challenging relative to the routine characterization of neuronal morphology. Thus, researchers frequently employ axon-dendrite colocations as proxies of potential connections. This putative equivalence, commonly referred to as Peters' rule, has been recently studied at multiple levels and scales, fueling passionate debates regarding its validity. Our critical literature review identifies three conceptually distinct but often confused applications: inferring neuron type circuitry, predicting synaptic contacts among individual cells, and estimating synapse numbers within neuron pairs. Paradoxically, at the originally proposed cell-type level, Peters' rule remains largely untested. Leveraging Hippocampome.org, we validate and refine the relationship between axonal-dendritic colocations and synaptic circuits, clarifying the interpretation of existing and forthcoming data. Copyright © 2016 Elsevier Ltd. All rights reserved.
Bertram, Christof A; Gurtner, Corinne; Dettwiler, Martina; Kershaw, Olivia; Dietert, Kristina; Pieper, Laura; Pischon, Hannah; Gruber, Achim D; Klopfleisch, Robert
2018-07-01
Integration of new technologies, such as digital microscopy, into a highly standardized laboratory routine requires the validation of its performance in terms of reliability, specificity, and sensitivity. However, a validation study of digital microscopy is currently lacking in veterinary pathology. The aim of the current study was to validate the usability of digital microscopy in terms of diagnostic accuracy, speed, and confidence for diagnosing and differentiating common canine cutaneous tumor types and to compare it to classical light microscopy. Therefore, 80 histologic sections including 17 different skin tumor types were examined twice as glass slides and twice as digital whole-slide images by 6 pathologists with different levels of experience at 4 time points. Comparison of both methods found digital microscopy to be noninferior for differentiating individual tumor types within the categories of epithelial and mesenchymal tumors, but diagnostic concordance was slightly lower for differentiating individual round cell tumor types by digital microscopy. In addition, digital microscopy was associated with significantly shorter diagnostic time, but diagnostic confidence was lower and technical quality was considered inferior for whole-slide images compared with glass slides. Of note, diagnostic performance for whole-slide images scanned at 200× magnification was noninferior to that for slides scanned at 400×. In conclusion, digital microscopy differs only minimally from light microscopy in few aspects of diagnostic performance and overall appears adequate for the diagnosis of individual canine cutaneous tumors, with minor limitations for differentiating individual round cell tumor types and grading of mast cell tumors.
Do placebo based validation standards mimic real batch products behaviour? Case studies.
Bouabidi, A; Talbi, M; Bouklouze, A; El Karbane, M; Bourichi, H; El Guezzar, M; Ziemons, E; Hubert, Ph; Rozet, E
2011-06-01
Analytical method validation is a mandatory step to evaluate the ability of developed methods to provide accurate results for their routine application. Validation usually involves validation standards or quality control samples that are prepared in placebo or reconstituted matrix made of a mixture of all the ingredients composing the drug product except the active substance or the analyte under investigation. However, one of the main concerns with this approach is that it may miss an important source of variability that comes from the manufacturing process. The question that remains at the end of the validation step is about the transferability of the quantitative performance from validation standards to real authentic drug product samples. In this work, this topic is investigated through three case studies. Three analytical methods were validated using the commonly spiked placebo validation standards at several concentration levels as well as using samples coming from authentic batch samples (tablets and syrups). The results showed that, depending on the type of response function used as calibration curve, there were various degrees of difference in the accuracy of the results obtained with the two types of samples. Nonetheless, the use of spiked placebo validation standards was shown to mimic relatively well the quantitative behaviour of the analytical methods with authentic batch samples. Adding these authentic batch samples to the validation design may help the analyst select and confirm the most fit-for-purpose calibration curve and thus increase the accuracy and reliability of the results generated by the method in routine application. Copyright © 2011 Elsevier B.V. All rights reserved.
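The transferability question above boils down to whether a calibration curve fitted on spiked placebo standards back-calculates authentic batch samples accurately, often expressed as percent recovery. A self-contained sketch with made-up responses following a perfectly linear response function:

```python
def fit_line(x, y):
    # Ordinary least-squares slope and intercept for a calibration curve.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def recovery(conc, response, slope, intercept):
    # Back-calculated concentration vs nominal, as a percentage.
    back = (response - intercept) / slope
    return 100.0 * back / conc

# Hypothetical spiked-placebo standards with response = 2*conc + 1:
concs = [10, 20, 40, 80]
resps = [21.0, 41.0, 81.0, 161.0]
m, b = fit_line(concs, resps)
# A hypothetical authentic-batch sample with nominal 50 units:
print(round(recovery(50, 101.0, m, b), 1))  # → 100.0
```

With real batch data the recovery would deviate from 100%, and the size of that deviation for each candidate response function is what guides the choice of calibration curve.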
ERIC Educational Resources Information Center
Condon, Christopher; Clifford, Matthew
2012-01-01
Assessing school principal performance is both necessary and challenging. During the past five years, many states have begun using validated measures in summative assessments of novice principal competency as a basis for certification decisions. Although standardized tests are used for certification purposes, other types of assessments are being…
An automated genotyping tool for enteroviruses and noroviruses.
Kroneman, A; Vennema, H; Deforche, K; v d Avoort, H; Peñaranda, S; Oberste, M S; Vinjé, J; Koopmans, M
2011-06-01
Molecular techniques are established as routine in virological laboratories and virus typing through (partial) sequence analysis is increasingly common. Quality assurance for the use of typing data requires harmonization of genotype nomenclature, and agreement on target genes, depending on the level of resolution required, and robustness of methods. To develop and validate web-based open-access typing-tools for enteroviruses and noroviruses. An automated web-based typing algorithm was developed, starting with BLAST analysis of the query sequence against a reference set of sequences from viruses in the family Picornaviridae or Caliciviridae. The second step is phylogenetic analysis of the query sequence and a sub-set of the reference sequences, to assign the enterovirus type or norovirus genotype and/or variant, with profile alignment, construction of phylogenetic trees and bootstrap validation. Typing is performed on VP1 sequences of Human enterovirus A to D, and ORF1 and ORF2 sequences of genogroup I and II noroviruses. For validation, we used the tools to automatically type sequences in the RIVM and CDC enterovirus databases and the FBVE norovirus database. Using the typing-tools, 785(99%) of 795 Enterovirus VP1 sequences, and 8154(98.5%) of 8342 norovirus sequences were typed in accordance with previously used methods. Subtyping into variants was achieved for 4439(78.4%) of 5838 NoV GII.4 sequences. The online typing-tools reliably assign genotypes for enteroviruses and noroviruses. The use of phylogenetic methods makes these tools robust to ongoing evolution. This should facilitate standardized genotyping and nomenclature in clinical and public health laboratories, thus supporting inter-laboratory comparisons. Copyright © 2011 Elsevier B.V. All rights reserved.
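The first step of such a typing tool, finding the closest reference genotype before phylogenetic confirmation, can be sketched as a nearest-reference search by pairwise identity. The sequences and genotype labels below are toy examples, and the real tool uses BLAST plus profile alignment and bootstrapped trees rather than this naive comparison.

```python
def identity(a, b):
    # Fraction of matching positions between two aligned sequences.
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def assign_genotype(query, references):
    """Return the genotype label of the closest reference sequence;
    a stand-in for the tool's BLAST step only."""
    return max(references, key=lambda label: identity(query, references[label]))

# Toy reference set with hypothetical labels and sequences:
refs = {"GII.4": "ATGGCGTCTAAC", "GII.3": "ATGTTGACTAGC"}
print(assign_genotype("ATGGCGTCAAAC", refs))  # → GII.4
```

The phylogenetic second step is what makes the published tool robust to ongoing evolution; nearest-identity alone can misassign queries that sit between clusters.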
Connaughton, Joanne; Wand, Benedict
2017-08-01
Headache is the most common type of pain reported by people with schizophrenia. This study aimed to establish prevalence, characteristics and management of these headaches. One hundred participants with schizophrenia/schizoaffective disorder completed a reliable and valid headache questionnaire. Two clinicians independently classified each headache as migraine, tension-type, cervicogenic or other. The 12-month prevalence of headache (57%) was higher than the general population (46%) with no evidence of a relationship between psychiatric clinical characteristics and presence of headache. Prevalence of cervicogenic (5%) and migraine (18%) was comparable to the general population. Tension-type (16%) had a lower prevalence and 19% of participants experienced other headache. No one with migraine was prescribed migraine specific medication; no one with cervicogenic and tension-type received best-practice treatment. Headache is a common complaint in people with schizophrenia/schizoaffective disorder with most fitting recognised diagnostic criteria for which effective interventions are available. No one in this sample was receiving best-practice care for their headache.
Lingner, Thomas; Kataya, Amr R. A.; Reumann, Sigrun
2012-01-01
We recently developed the first algorithms specifically for plants to predict proteins carrying peroxisome targeting signals type 1 (PTS1) from genome sequences. As validated experimentally, the prediction methods are able to correctly predict unknown peroxisomal Arabidopsis proteins and to infer novel PTS1 tripeptides. The high prediction performance is primarily determined by the large number and sequence diversity of the underlying positive example sequences, which were mainly derived from EST databases. However, a few constructs remained cytosolic in experimental validation studies, indicating sequencing errors in some ESTs. To identify erroneous sequences, we validated subcellular targeting of additional positive example sequences in the present study. Moreover, we analyzed the distribution of prediction scores separately for each orthologous group of PTS1 proteins, which generally resembled normal distributions with group-specific mean values. The cytosolic sequences commonly represented outliers of low prediction scores and were located at the very tail of a fitted normal distribution. Three statistical methods for identifying outliers were compared in terms of sensitivity and specificity. Their combined application allows elimination of erroneous ESTs from positive example data sets. This new post-validation method will further improve the prediction accuracy of both PTS1 and PTS2 protein prediction models for plants, fungi, and mammals. PMID:22415050
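Two common low-tail outlier rules of the kind compared in the abstract, a z-score cutoff and Tukey's IQR fence, can be sketched as follows. The scores are invented, with one low value playing the role of a cytosolic (erroneous) EST; the paper's actual methods and cutoffs may differ.

```python
import statistics

def zscore_outliers(scores, cutoff=2.0):
    # Flag scores far below the mean, in standard-deviation units.
    mu, sd = statistics.mean(scores), statistics.stdev(scores)
    return [s for s in scores if (s - mu) / sd < -cutoff]

def iqr_outliers(scores, k=1.5):
    # Flag scores below Q1 - k*IQR (Tukey's rule, lower tail only).
    qs = statistics.quantiles(scores, n=4)
    q1, q3 = qs[0], qs[2]
    return [s for s in scores if s < q1 - k * (q3 - q1)]

# Hypothetical prediction scores for one orthologous group:
scores = [0.91, 0.88, 0.93, 0.90, 0.87, 0.92, 0.89, 0.05]
print(iqr_outliers(scores))  # → [0.05]
```

Requiring agreement between several such rules, as the combined application in the abstract does, trades a little sensitivity for better specificity.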
Wu, Jing; Zhu, Jifeng; Wang, Lanfen; Wang, Shumin
2017-01-01
Nucleotide-binding site and leucine-rich repeat (NBS-LRR) genes represent the largest and most important disease resistance genes in plants. The genome sequence of the common bean (Phaseolus vulgaris L.) provides valuable data for determining the genomic organization of NBS-LRR genes. However, data on the NBS-LRR genes in the common bean are limited. In total, 178 NBS-LRR-type genes and 145 partial genes (with or without a NBS) located on 11 common bean chromosomes were identified from genome sequences database. Furthermore, 30 NBS-LRR genes were classified into Toll/interleukin-1 receptor (TIR)-NBS-LRR (TNL) types, and 148 NBS-LRR genes were classified into coiled-coil (CC)-NBS-LRR (CNL) types. Moreover, the phylogenetic tree supported the division of these PvNBS genes into two obvious groups, TNL types and CNL types. We also built expression profiles of NBS genes in response to anthracnose and common bacterial blight using qRT-PCR. Finally, we detected nine disease resistance loci for anthracnose (ANT) and seven for common bacterial blight (CBB) using the developed NBS-SSR markers. Among these loci, NSSR24, NSSR73, and NSSR265 may be located at new regions for ANT resistance, while NSSR65 and NSSR260 may be located at new regions for CBB resistance. Furthermore, we validated NSSR24, NSSR65, NSSR73, NSSR260, and NSSR265 using a new natural population. Our results provide useful information regarding the function of the NBS-LRR proteins and will accelerate the functional genomics and evolutionary studies of NBS-LRR genes in food legumes. NBS-SSR markers represent a wide-reaching resource for molecular breeding in the common bean and other food legumes. Collectively, our results should be of broad interest to bean scientists and breeders. PMID:28848595
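The TNL/CNL split described above is, at its core, a classification by detected N-terminal domain. A toy sketch follows; the gene IDs and domain labels are hypothetical, and real pipelines detect domains from sequence rather than reading them from annotations.

```python
def classify_nbs_lrr(domains):
    """Classify an NBS-LRR gene by its N-terminal domain,
    following the TNL/CNL split described in the text."""
    if "TIR" in domains:
        return "TNL"   # Toll/interleukin-1 receptor type
    if "CC" in domains:
        return "CNL"   # coiled-coil type
    return "partial"   # lacks a recognized N-terminal domain

genes = {
    "PvNBS_001": ["TIR", "NBS", "LRR"],
    "PvNBS_002": ["CC", "NBS", "LRR"],
    "PvNBS_003": ["NBS"],
}
# PvNBS_001 → TNL, PvNBS_002 → CNL, PvNBS_003 → partial
print({g: classify_nbs_lrr(d) for g, d in genes.items()})
```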
Vispoel, Walter P; Morris, Carrie A; Kilinc, Murat
2018-01-01
We applied a new approach to generalizability theory (G-theory) involving parallel splits and repeated measures to evaluate common uses of the Paulhus Deception Scales based on polytomous and four types of dichotomous scoring. G-theory indices of reliability and validity accounting for specific-factor, transient, and random-response measurement error supported use of polytomous over dichotomous scores as contamination checks; as control, explanatory, and outcome variables; as aspects of construct validation; and as indexes of environmental effects on socially desirable responding. Polytomous scoring also flagged faking as dependably as the dichotomous scoring methods did. These findings argue strongly against the nearly exclusive use of dichotomous scoring for the Paulhus Deception Scales in practice and underscore the value of G-theory in demonstrating this. We provide guidelines for applying our G-theory techniques to other objectively scored clinical assessments, for using G-theory to estimate how changes to a measure might improve reliability, and for obtaining software to conduct G-theory analyses free of charge.
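The G-theory machinery referenced here ultimately reduces to ratios of variance components. A minimal sketch of a generalizability coefficient with several error sources, each averaged over some number of conditions; the variance values and the simple additive error model are invented for illustration, not taken from the paper.

```python
def g_coefficient(var_person, var_errors, n_conditions):
    """Generalizability coefficient: person (universe-score) variance over
    person variance plus averaged relative-error variance. Each error
    source's variance is divided by the number of conditions it is
    averaged over. Illustrative model, not the paper's exact design."""
    rel_error = sum(v / n for v, n in zip(var_errors, n_conditions))
    return var_person / (var_person + rel_error)

# Hypothetical components: specific-factor, transient, random-response error,
# averaged over 2 splits, 2 occasions, and 4 replications respectively.
print(round(g_coefficient(0.60, [0.10, 0.08, 0.12], [2, 2, 4]), 3))  # → 0.833
```

Increasing any `n_conditions` entry shows how adding splits or occasions would improve dependability, which is the "estimate how changes to a measure might improve reliability" use the abstract mentions.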
Dynamic (Vibration) Testing: Design-Certification of Aerospace System
NASA Technical Reports Server (NTRS)
Aggarwal, Pravin K.
2010-01-01
Various types of dynamic testing of structures for certification purposes are described, including vibration, shock and acoustic testing. Modal testing is discussed as it frequently complements dynamic testing and is part of the structural verification/validation process leading up to design certification. Examples of dynamic and modal testing are presented as well as the common practices, procedures and standards employed.
ERIC Educational Resources Information Center
Harnisch, Delwyn L.; And Others
This paper describes several common types of research studies in special education transition literature and the threats to their validity. It then describes how the evidential base may be broadened, how diverse sources of evidence can be combined to strengthen causal inferences, and the role of judgment within quasi-experimentation. The paper…
Benej, Martin; Bendlova, Bela; Vaclavikova, Eliska; Poturnajova, Martina
2011-10-06
Reliable and effective primary screening of mutation carriers is the key condition for common diagnostic use. The objective of this study is to optimize, evaluate, and validate high resolution melting (HRM) analysis for routine primary mutation screening. Due to their heterozygous nature, germline point mutations of the c-RET proto-oncogene, associated with multiple endocrine neoplasia type 2 (MEN2), are suitable for HRM analysis. Early identification of mutation carriers has a major impact on patients' survival due to the early onset of medullary thyroid carcinoma (MTC) and its resistance to conventional therapy. The authors performed a series of validation assays according to International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) guidelines for validation of analytical procedures, along with appropriate design and optimization experiments. After validation, the method was utilized for primary screening of 28 pathogenic c-RET mutations distributed among nine exons of the c-RET gene. Validation experiments confirmed the repeatability, robustness, accuracy, and reproducibility of HRM. All c-RET gene pathogenic variants were detected with no occurrence of false-positive/false-negative results. The data provide basic information about the design, establishment, and validation of HRM for primary screening of genetic variants in order to distinguish heterozygous point mutation carriers among wild-type sequence carriers. HRM analysis is a powerful and reliable tool for rapid and cost-effective primary screening, e.g., of c-RET gene germline and/or sporadic mutations, and can be used as a first-line potential diagnostic tool.
Dyadic coping in Latino couples: validity of the Spanish version of the Dyadic Coping Inventory.
Falconier, Mariana Karin; Nussbeck, Fridtjof; Bodenmann, Guy
2013-01-01
This study seeks to validate the Spanish version of the Dyadic Coping Inventory (DCI) in a Latino population with data from 113 heterosexual couples. Results for both partners confirm the factorial structure for the Spanish version (Subscales: Stress Communication, Emotion- and Problem-Focused Supportive, Delegated, and Negative Dyadic Coping, Emotion- and Problem-Focused Common Dyadic Coping, and Evaluation of Dyadic Coping; Aggregated Scales: Dyadic Coping by Oneself and by Partner) and support the discriminant validity of its subscales and the concurrent, and criterion validity of the subscales and aggregated scales. These results do not only indicate that the Spanish version of the DCI can be used reliably as a measure of coping in Spanish-speaking Latino couples, but they also suggest that this group relies on dyadic coping frequently and that this type of coping is associated with positive relationship functioning and individual coping. Limitations and implications are discussed.
Ko, Hsiu-Chia; Wang, Li-Ling; Xu, Yi-Ting
2013-03-01
Blogs offer audiences a forum through which they can exchange ideas and provide feedback about the everyday lives and experiences of the bloggers. Such interactions and communication between audiences and bloggers could be regarded as a kind of social support. The present study aims to identify and compare the types of social support offered by audiences to continuous popular diary-like and informative bloggers, and to explore the possible benefits that bloggers may obtain from such social support. Content analysis was used to analyze the 485 and 390 comments provided by the audiences to the A-list diary-like and informative blog posts, respectively. Results reveal that validation, compliment, and encouragement are the most common types of social support given by audiences to A-list bloggers. Chi-square test results show that the audiences offer more encouragement-type of social support to diary-like bloggers and more complimentary and informational social support to informative bloggers. Such types of social support may enhance A-list bloggers' self-esteem, boost their confidence, promote their self-understanding, and help them obtain the benefits of social validation, which in turn encourage bloggers to commit continuous self-disclosure.
Lehmann, Vicky; Makine, Ceylan; Karşıdağ, Cagatay; Kadıoğlu, Pinar; Karşıdağ, Kubilay; Pouwer, François
2011-07-26
Depression is a common co-morbid health problem in patients with diabetes that is underrecognised. Current international guidelines recommend screening for depression in patients with diabetes. Yet, few depression screening instruments have been validated for use in this particular group of patients. The aim of the present study was to investigate the psychometric properties of the Turkish version of the Centre for Epidemiologic Studies Depression Scale (CES-D) in patients with type 2 diabetes. A sample of 151 Turkish outpatients with type 2 diabetes completed the CES-D, the World Health Organization-Five Well-Being Index (WHO-5), and the Problem Areas in Diabetes scale (PAID). Exploratory factor analyses, various correlations, and Cronbach's alpha were investigated to test the validity and reliability of the CES-D in Turkish diabetes outpatients. The original four-factor structure proposed by Radloff was not confirmed. Exploratory factor analyses revealed a two-factor structure representing two subscales: (1) depressed mood combined with somatic symptoms of depression and (2) positive affect. However, one item showed insufficient factor loadings. Cronbach's alpha of the total score was high (0.88), as were split-half coefficients (0.77-0.90). The correlation of the CES-D with the WHO-5 was the strongest (r = -0.70), and supported concurrent validity. The CES-D appears to be a valid measure for the assessment of depression in Turkish diabetes patients. Future studies should investigate its sensitivity and specificity as well as test-retest reliability.
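Internal-consistency figures like the Cronbach's alpha of 0.88 reported here come from the ratio of summed item variances to total-score variance. A self-contained sketch; the item scores are fabricated, and perfectly parallel items give the ceiling value of 1.

```python
import statistics

def cronbach_alpha(items):
    # items: one list of scores per questionnaire item, same respondents,
    # population variances throughout for simplicity.
    k = len(items)
    n = len(items[0])
    item_var_sum = sum(statistics.pvariance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / statistics.pvariance(totals))

# Fabricated data: three perfectly parallel items over four respondents.
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(round(cronbach_alpha(items), 3))  # → 1.0
```

Real questionnaire data would give a value below 1, and dropping a weak item (like the low-loading CES-D item noted above) would typically be evaluated by recomputing alpha without it.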
Technical skills assessment toolbox: a review using the unitary framework of validity.
Ghaderi, Iman; Manji, Farouq; Park, Yoon Soo; Juul, Dorthea; Ott, Michael; Harris, Ilene; Farrell, Timothy M
2015-02-01
The purpose of this study was to create a technical skills assessment toolbox for the 35 basic and advanced skills/procedures that comprise the American College of Surgeons (ACS)/Association of Program Directors in Surgery (APDS) surgical skills curriculum and to provide a critical appraisal of the included tools using the contemporary framework of validity. Competency-based training has become the predominant model in surgical education, and assessment of performance is an essential component. Assessment methods must produce valid results to accurately determine the level of competency. A search was performed, using PubMed and Google Scholar, to identify tools that have been developed for assessment of the targeted technical skills. A total of 23 assessment tools for the 35 ACS/APDS skills modules were identified. Some tools, such as the Objective Structured Assessment of Technical Skill (OSATS) and the Operative Performance Rating System (OPRS), have been tested for more than 1 procedure. Therefore, 30 modules had at least 1 assessment tool, with some common surgical procedures being addressed by several tools. Five modules had none. Only 3 studies used Messick's framework to design their validity studies. The remaining studies used an outdated framework on the basis of "types of validity." When analyzed using the contemporary framework, few of these studies demonstrated validity for content, internal structure, and relationship to other variables. This study provides an assessment toolbox for common surgical skills/procedures. Our review shows that few authors have used the contemporary unitary concept of validity for development of their assessment tools. As we progress toward competency-based training, future studies should provide evidence for various sources of validity using the contemporary framework.
Hara, Yoriko; Koyama, Satoshi; Morinaga, Toru; Ito, Hisao; Kohno, Shusuke; Hirai, Hiroyuki; Kikuchi, Toshio; Tsuda, Toru; Ichino, Isao; Takei, Satoko; Yamada, Kentaro; Tsuboi, Koji; Breugelmans, Raoul; Ishihara, Yoko
2011-01-01
An appropriate questionnaire for measurement of the psychological burden of self-management or behavior modification in type 2 diabetes patients has yet to be developed in Japan. This study was conducted to test the reliability and validity of the Japanese version of the Appraisal of Diabetes Scale (ADS). The study enrolled 346 Japanese patients with type 2 diabetes: 200 men and 146 women who were 63.2 ± 10.1 and 62.2 ± 11.9 years of age and had HbA1c levels of 6.9 ± 1.2% and 7.3 ± 1.9%, respectively. The questionnaire was divided into three components: "Psychological impact of diabetes", "Sense of self-control", and "Efforts for symptom management". Cronbach's alpha was 0.746-0.628. Significant correlations were observed between "Sense of self-control" and self-managed dietary and exercise behaviors and HbA1c levels; between "Psychological impact of diabetes" and various treatments, symptoms causing anxiety, and HbA1c levels; and between "Efforts for symptom management" and dietary and nutritional behaviors. The questionnaire showed good evidence of internal consistency, test-retest reliability, and validity. Our results suggested that the Japanese version of the ADS may be a useful tool for the quick assessment of common anxieties and motivation toward treatment in patients with type 2 diabetes. 2010 Elsevier Ireland Ltd. All rights reserved.
Registration of in vivo MR to histology of rodent brains using blockface imaging
NASA Astrophysics Data System (ADS)
Uberti, Mariano; Liu, Yutong; Dou, Huanyu; Mosley, R. Lee; Gendelman, Howard E.; Boska, Michael
2009-02-01
Registration of MRI to histopathological sections can enhance bioimaging validation for use in pathobiologic, diagnostic, and therapeutic evaluations. However, commonly used registration methods fall short of this goal due to tissue shrinkage and tearing after brain extraction and preparation. In attempts to overcome these limitations we developed a software toolbox using 3D blockface imaging as the common space of reference. This toolbox includes a semi-automatic brain extraction technique using constraint level sets (CLS), 3D reconstruction methods for the blockface and MR volume, and a 2D warping technique using thin-plate splines with landmark optimization. Using this toolbox, the rodent brain volume is first extracted from the whole head MRI using CLS. The blockface volume is reconstructed followed by 3D brain MRI registration to the blockface volume to correct the global deformations due to brain extraction and fixation. Finally, registered MRI and histological slices are warped to corresponding blockface images to correct slice specific deformations. The CLS brain extraction technique was validated by comparing manual results showing 94% overlap. The image warping technique was validated by calculating target registration error (TRE). Results showed a registration accuracy of a TRE < 1 pixel. Lastly, the registration method and the software tools developed were used to validate cell migration in murine human immunodeficiency virus type one encephalitis.
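The accuracy criterion reported above, TRE below one pixel, is simply the mean landmark misalignment after warping. A sketch with invented 2D landmarks; real evaluations use many landmarks across many slices.

```python
import math

def target_registration_error(registered, truth):
    """Mean Euclidean distance between registered landmark positions
    and their ground-truth locations, in pixels (toy landmarks)."""
    dists = [math.dist(p, q) for p, q in zip(registered, truth)]
    return sum(dists) / len(dists)

# Hypothetical landmark pairs after warping MRI slices to blockface images:
reg = [(10.2, 5.1), (20.0, 8.9), (14.8, 12.0)]
tru = [(10.0, 5.0), (20.0, 9.0), (15.0, 12.0)]
print(round(target_registration_error(reg, tru), 3))  # → 0.175
```

A mean TRE well under one pixel, as here, is the kind of result the study reports for its thin-plate spline warping.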
The statistical validity of nursing home survey findings.
Woolley, Douglas C
2011-11-01
The Medicare nursing home survey is a high-stakes process whose findings greatly affect nursing homes, their current and potential residents, and the communities they serve. Therefore, survey findings must achieve high validity. This study looked at the validity of one key assessment made during a nursing home survey: the observation of the rate of errors in administration of medications to residents (med-pass). Statistical analysis of the case under study and of alternative hypothetical cases. A skilled nursing home affiliated with a local medical school. The nursing home administrators and the medical director. Observational study. The probability that state nursing home surveyors make a Type I or Type II error in observing med-pass error rates, based on the current case and on a series of postulated med-pass error rates. In the common situation such as our case, where med-pass errors occur at slightly above a 5% rate after 50 observations, and therefore trigger a citation, the chance that the true rate remains above 5% after a large number of observations is just above 50%. If the true med-pass error rate were as high as 10%, and the survey team wished to achieve 75% accuracy in determining that a citation was appropriate, they would have to make more than 200 med-pass observations. In the more common situation where med pass errors are closer to 5%, the team would have to observe more than 2000 med-passes to achieve even a modest 75% accuracy in their determinations. In settings where error rates are low, large numbers of observations of an activity must be made to reach acceptable validity of estimates for the true rates of errors. In observing key nursing home functions with current methodology, the State Medicare nursing home survey process does not adhere to well-known principles of valid error determination. Alternate approaches in survey methodology are discussed. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. 
All rights reserved.
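The sample-size arithmetic behind these claims is binomial. The sketch below computes the chance that a survey team observing n med-passes sees an error rate above the 5% citation threshold, for an assumed true error rate; it illustrates the kind of calculation the article describes rather than reproducing its exact analysis.

```python
import math

def prob_at_least(k, n, p):
    # P(X >= k) for X ~ Binomial(n, p), via the exact pmf.
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def citation_probability(n_obs, true_rate, threshold=0.05):
    """Chance that the observed med-pass error rate exceeds the
    citation threshold, given an assumed true rate (illustrative)."""
    k = math.floor(threshold * n_obs) + 1  # smallest count above threshold
    return prob_at_least(k, n_obs, true_rate)

# With a true rate of exactly 5%, 50 observations still cross the
# threshold (3 or more errors) a large fraction of the time:
print(round(citation_probability(50, 0.05), 3))
```

Varying `n_obs` shows why hundreds or thousands of observations are needed before the observed rate pins down the true rate with any confidence.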
Measures of crowding in the emergency department: a systematic review.
Hwang, Ula; McCarthy, Melissa L; Aronsky, Dominik; Asplin, Brent; Crane, Peter W; Craven, Catherine K; Epstein, Stephen K; Fee, Christopher; Handel, Daniel A; Pines, Jesse M; Rathlev, Niels K; Schafermeyer, Robert W; Zwemer, Frank L; Bernstein, Steven L
2011-05-01
Despite consensus regarding the conceptual foundation of crowding, and increasing research on factors and outcomes associated with crowding, there is no criterion standard measure of crowding. The objective was to conduct a systematic review of crowding measures and compare them in conceptual foundation and validity. This was a systematic, comprehensive review of four medical and health care citation databases to identify studies related to crowding in the emergency department (ED). Publications that "describe the theory, development, implementation, evaluation, or any other aspect of a 'crowding measurement/definition' instrument (qualitative or quantitative)" were included. A "measurement/definition" instrument is anything that assigns a value to the phenomenon of crowding in the ED. Data collected from papers meeting inclusion criteria were: study design, objective, crowding measure, and evidence of validity. All measures were categorized into five measure types (clinician opinion, input factors, throughput factors, output factors, and multidimensional scales). All measures were then indexed to six validation criteria (clinician opinion, ambulance diversion, left without being seen (LWBS), times to care, forecasting or predictions of future crowding, and other). There were 2,660 papers identified by databases; 46 of these papers met inclusion criteria, were original research studies, and were abstracted by reviewers. A total of 71 unique crowding measures were identified. The least commonly used type of crowding measure was clinician opinion, and the most commonly used were numerical counts (number or percentage) of patients and process times associated with patient care. Many measures had moderate to good correlation with validation criteria. Time intervals and patient counts are emerging as the most promising tools for measuring flow and nonflow (i.e., crowding), respectively. 
Standardized definitions of time intervals (flow) and numerical counts (nonflow) will assist with validation of these metrics across multiple sites and clarify which options emerge as the metrics of choice in this "crowded" field of measures. © 2011 by the Society for Academic Emergency Medicine.
Progress in Developing Transfer Functions for Surface Scanning Eddy Current Inspections
NASA Astrophysics Data System (ADS)
Shearer, J.; Heebl, J.; Brausch, J.; Lindgren, E.
2009-03-01
As US Air Force (USAF) aircraft continue to age, additional inspections are required for structural components. The validation of new inspections typically requires a capability demonstration of the method using representative structure with representative damage. To minimize the time and cost required to prepare such samples, electrical discharge machined (EDM) notches are commonly used to represent fatigue cracks in validation studies. However, the sensitivity to damage typically changes as a function of damage type. This requires a mathematical relationship to be developed between the responses from the two different flaw types to enable the use of EDM-notched samples to validate new inspections. This paper reviews progress to develop transfer functions for surface scanning eddy current inspections of aluminum and titanium alloys found in structural aircraft components. Multiple samples with well-characterized grown fatigue cracks and master gages with EDM notches, both with a range of flaw sizes, were used to collect flaw signals with USAF field inspection equipment. Analysis of this empirical data was used to develop a transfer function between the response from the EDM notches and grown fatigue cracks.
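In its simplest form, a transfer function of the kind described is a fitted curve mapping notch response to crack response at matched flaw sizes. The sketch below uses a least-squares line on fabricated signal amplitudes; real transfer functions may be nonlinear and instrument-specific.

```python
def fit_transfer(notch_signals, crack_signals):
    """Least-squares line mapping EDM-notch response to fatigue-crack
    response at matched flaw sizes (made-up signal values)."""
    n = len(notch_signals)
    mx = sum(notch_signals) / n
    my = sum(crack_signals) / n
    slope = (sum((x - mx) * (y - my)
                 for x, y in zip(notch_signals, crack_signals))
             / sum((x - mx) ** 2 for x in notch_signals))
    return slope, my - slope * mx

# Hypothetical eddy-current amplitudes for the same flaw lengths;
# the grown cracks respond more weakly than the machined notches.
notch = [1.0, 2.0, 3.0, 4.0]
crack = [0.6, 1.1, 1.6, 2.1]
m, b = fit_transfer(notch, crack)
print(round(m, 2), round(b, 2))  # → 0.5 0.1
```

With such a mapping in hand, a detection threshold demonstrated on notch samples can be translated into an equivalent threshold for real fatigue cracks.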
Forensic DNA Profiling and Database
Panneerchelvam, S.; Norazmi, M.N.
2003-01-01
The incredible power of DNA technology as an identification tool has brought tremendous change to criminal justice. A DNA database is an information resource for the forensic DNA typing community with details on commonly used short tandem repeat (STR) DNA markers. This article discusses the essential steps in the compilation of the Combined DNA Index System (CODIS) based on validated polymerase chain reaction (PCR)-amplified STRs and their use in crime detection. PMID:23386793
MicroRNAs expression profile in solid and unicystic ameloblastomas.
Setién-Olarra, A; Marichalar-Mendia, X; Bediaga, N G; Aguirre-Echebarria, P; Aguirre-Urizar, J M; Mosqueda-Taylor, A
2017-01-01
Odontogenic tumors (OT) represent a specific pathological category that includes some lesions with unpredictable biological behavior. Although most of these lesions are benign, some, such as the ameloblastoma, exhibit local aggressiveness and high recurrence rates. The most common types of ameloblastoma are the solid/multicystic (SA) and the unicystic ameloblastoma (UA); the latter is considered a much less aggressive entity than the SA. The microRNA system regulates the expression of many human genes, while its deregulation has been associated with neoplastic development. The aim of the current study was to determine the expression profiles of microRNAs present in the two most common types of ameloblastomas. MicroRNA expression profiles were assessed using TaqMan® Low Density Arrays (TLDAs) in 24 samples (8 SA, 8 UA and 8 control samples). The findings were validated using quantitative RT-qPCR in an independent cohort of 19 SA, 8 UA and 19 dentigerous cysts as controls. We identified 40 microRNAs differentially regulated in ameloblastomas, which are related to neoplastic development and differentiation, and to the osteogenic process. Further validation of the top-ranked microRNAs revealed significant differences in the expression of 6 of them in relation to UA, 7 in relation to SA and 1 (miR-489) that was related to both types. We identified a new microRNA signature for the ameloblastoma and for its main types, which may be useful to better understand the etiopathogenesis of this neoplasm. In addition, we identified a microRNA (miR-489) that may help differentiate solid from unicystic ameloblastoma.
Xu, Stanley; Clarke, Christina L; Newcomer, Sophia R; Daley, Matthew F; Glanz, Jason M
2018-05-16
Vaccine safety studies are often electronic health record (EHR)-based observational studies. These studies often face significant methodological challenges, including confounding and misclassification of adverse events. Vaccine safety researchers use the self-controlled case series (SCCS) study design to handle confounding and employ medical chart review to ascertain cases that are identified using EHR data. However, for common adverse events, limited resources often make it impossible to adjudicate all adverse events observed in electronic data. In this paper, we considered four approaches for analyzing SCCS data with confirmation rates estimated from an internal validation sample: (1) observed cases, (2) confirmed cases only, (3) known confirmation rate, and (4) multiple imputation (MI). We conducted a simulation study to evaluate these four approaches using type I error rates, percent bias, and empirical power. Our simulation results suggest that when misclassification of adverse events is present, approaches such as observed cases, confirmed cases only, and known confirmation rate may inflate the type I error, yield biased point estimates, and affect statistical power. The multiple imputation approach accounts for the uncertainty of confirmation rates estimated from an internal validation sample and yields a proper type I error rate, a largely unbiased point estimate, a proper variance estimate, and adequate statistical power. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
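The multiple-imputation approach can be sketched in a few lines: repeatedly impute true case status from the estimated confirmation rate, then pool the results with Rubin's rules. This is a toy illustration, not the authors' implementation; the confirmation rate (0.8), the event count (100), and the number of imputations are all assumed values.

```python
import random
import statistics

def rubin_combine(estimates, variances):
    """Pool point estimates and variances across m imputations (Rubin's rules)."""
    m = len(estimates)
    q_bar = statistics.mean(estimates)   # pooled point estimate
    w_bar = statistics.mean(variances)   # within-imputation variance
    b = statistics.variance(estimates)   # between-imputation variance
    return q_bar, w_bar + (1 + 1 / m) * b

# Toy setting: 100 EHR-identified events, confirmation rate 0.8 estimated
# from an internal validation sample (both numbers assumed for illustration).
random.seed(1)
m = 20
estimates, variances = [], []
for _ in range(m):
    # Impute true case status for each unconfirmed event
    confirmed = sum(random.random() < 0.8 for _ in range(100))
    p_hat = confirmed / 100
    estimates.append(p_hat)
    variances.append(p_hat * (1 - p_hat) / 100)

q_bar, total_var = rubin_combine(estimates, variances)
```

Note that the pooled variance exceeds the average within-imputation variance: the between-imputation term is what carries the uncertainty in the estimated confirmation rate.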
Self-validating type C thermocouples to 2300 °C using high temperature fixed points
NASA Astrophysics Data System (ADS)
Pearce, J. V.; Elliott, C. J.; Machin, G.; Ongrai, O.
2013-09-01
Above 1500 °C, tungsten-rhenium (W-Re) thermocouples are the most commonly used contact thermometers because they are practical and inexpensive. However, loss of calibration is generally very rapid, and, because these thermocouples become embrittled at high temperature, it is usually not possible to remove them from the process environments in which they are used for recalibration. Even if removal for recalibration were possible, it would be of, at best, very limited use due to large inhomogeneity effects. Ideally, these thermocouples require some mechanism to monitor their drift in situ. In this study, we describe self-validation of Type C (W5%Re/W26%Re) thermocouples by means of miniature high temperature fixed points comprising crucibles containing, respectively, Co-C, Pt-C, Ru-C, and Ir-C eutectic alloys. An overview of developments in this area is presented.
Syakalima, Michelo; Simuunza, Martin; Zulu, Victor Chisha
2018-02-01
Ethnoveterinary knowledge has rarely been recorded, and little or no effort has been made to exploit this knowledge despite its widespread use in Zambia. This study documented the types of plants used to treat important animal diseases in rural Zambia as a way of initiating their sustained documentation and scientific validation. The study was done in selected districts of the Southern Province of Zambia. The research was a participatory epidemiological study conducted in two phases. The first phase was a pre-study exploratory rapid rural appraisal conducted to familiarize the researchers with the study areas, and the second phase was a participatory rural appraisal to help gather the data. The frequency index was used to rank the commonly mentioned treatments. A number of diseases and traditional treatments were listed with the help of local veterinarians. Diseases included Corridor disease (theileriosis), foot and mouth disease, blackleg, bloody diarrhea, lumpy skin disease, fainting, mange, blindness, coughing, bloat, worms, cobra snakebite, hemorrhagic septicemia, and transmissible venereal tumors. The plant preparations were in most cases given to the livestock orally (as a drench). Leaves, barks, and roots were generally used depending on the plant type. Ethnoveterinary medicine is still widespread among the rural farmers in the province and in Zambia in general. Some medicines are commonly used across diseases, probably because they have a wide spectrum of action. These medicines should, therefore, be validated for use in conventional livestock healthcare systems in the country to reduce the cost of treatments.
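The "frequency index" used to rank treatments is, in its common form, simply the relative frequency of citation, i.e. the percentage of respondents who mention a remedy. A minimal sketch (the remedy names, counts, and respondent total are hypothetical):

```python
def frequency_index(mentions, respondents):
    """Relative frequency of citation: percent of respondents citing a remedy.
    This is one common form of 'frequency index'; the study's exact formula
    is not given in the abstract."""
    return 100.0 * mentions / respondents

# Hypothetical survey of 50 respondents
remedies = {"bark decoction": 34, "leaf drench": 21, "root infusion": 9}
ranked = sorted(remedies, key=lambda r: frequency_index(remedies[r], 50),
                reverse=True)
```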
Michaelides, Michalis P.
2010-01-01
Many studies have investigated the topic of change or drift in item parameter estimates in the context of item response theory (IRT). Content effects, such as instructional variation and curricular emphasis, as well as context effects, such as the wording, position, or exposure of an item have been found to impact item parameter estimates. The issue becomes more critical when items with estimates exhibiting differential behavior across test administrations are used as common for deriving equating transformations. This paper reviews the types of effects on IRT item parameter estimates and focuses on the impact of misbehaving or aberrant common items on equating transformations. Implications relating to test validity and the judgmental nature of the decision to keep or discard aberrant common items are discussed, with recommendations for future research into more informed and formal ways of dealing with misbehaving common items. PMID:21833230
Distinct Microbial Signatures Associated With Different Breast Cancer Types
Banerjee, Sagarika; Tian, Tian; Wei, Zhi; Shih, Natalie; Feldman, Michael D.; Peck, Kristen N.; DeMichele, Angela M.; Alwine, James C.; Robertson, Erle S.
2018-01-01
A dysbiotic microbiome can potentially contribute to the pathogenesis of many different diseases including cancer. Breast cancer is the second leading cause of cancer death in women. Thus, we investigated the diversity of the microbiome in the four major types of breast cancer: endocrine receptor (ER) positive, triple positive, Her2 positive and triple negative breast cancers. Using a whole genome and transcriptome amplification and a pan-pathogen microarray (PathoChip) strategy, we detected unique and common viral, bacterial, fungal and parasitic signatures for each of the breast cancer types. These were validated by PCR and Sanger sequencing. Hierarchical cluster analysis of the breast cancer samples, based on their detected microbial signatures, showed distinct patterns for the triple negative and triple positive samples, while the ER positive and Her2 positive samples shared similar microbial signatures. These signatures, unique or common to the different breast cancer types, provide a new line of investigation to gain further insights into prognosis, treatment strategies and clinical outcome, as well as better understanding of the role of the micro-organisms in the development and progression of breast cancer. PMID:29867857
Hogikyan, N D; Wodchis, W P; Terrell, J E; Bradford, C R; Esclamado, R M
2000-09-01
Unilateral vocal fold paralysis is a common clinical problem which frequently causes severe dysphonia. Various treatment options exist for this condition, with the type I thyroplasty being one of the more commonly performed surgical procedures for vocal rehabilitation. The Voice-Related Quality of Life (V-RQOL) Measure is a validated outcomes instrument for voice disorders. This study measured the V-RQOL of patients with unilateral vocal fold paralysis who had undergone a type I thyroplasty and compared these scores to those of patients with untreated and uncompensated unilateral vocal fold paralysis and to normals. Treated patients had significantly higher domain and overall V-RQOL scores than untreated patients, but still scored lower than normals. These differences held across gender and age. Patients who were more distant from surgery had lower V-RQOL scores than those who had been treated more recently. It is concluded that type I thyroplasty leads to a significantly higher V-RQOL for patients with unilateral vocal fold paralysis. This study also further demonstrates the utility of patient-oriented measures of treatment outcome.
STRBase: a short tandem repeat DNA database for the human identity testing community
Ruitberg, Christian M.; Reeder, Dennis J.; Butler, John M.
2001-01-01
The National Institute of Standards and Technology (NIST) has compiled and maintained a Short Tandem Repeat DNA Internet Database (http://www.cstl.nist.gov/biotech/strbase/), commonly referred to as STRBase, since 1997. This database is an information resource for the forensic DNA typing community with details on commonly used short tandem repeat (STR) DNA markers. STRBase consolidates and organizes the abundant literature on this subject to facilitate on-going efforts in DNA typing. Observed alleles and annotated sequence for each STR locus are described along with a review of STR analysis technologies. Additionally, commercially available STR multiplex kits are described, published polymerase chain reaction (PCR) primer sequences are reported, and validation studies conducted by a number of forensic laboratories are listed. To supplement the technical information, addresses for scientists and hyperlinks to organizations working in this area are available, along with the comprehensive reference list of over 1300 publications on STRs used for DNA typing purposes. PMID:11125125
Hierarchical atom type definitions and extensible all-atom force fields.
Jin, Zhao; Yang, Chunwei; Cao, Fenglei; Li, Feng; Jing, Zhifeng; Chen, Long; Shen, Zhe; Xin, Liang; Tong, Sijia; Sun, Huai
2016-03-15
The extensibility of a force field is key to solving the missing-parameter problem commonly found in force field applications. The extensibility of conventional force fields is traditionally managed in the parameterization procedure, which becomes impractical as the coverage of the force field increases above a threshold. A hierarchical atom-type definition (HAD) scheme is proposed to make atom type definitions extensible, which ensures that force fields developed based on the definitions are extensible. To demonstrate how HAD works and to prepare a foundation for future developments, two general force fields based on the AMBER and DFF functional forms are parameterized for common organic molecules. The force field parameters are derived from the same set of quantum mechanical data and experimental liquid data using an automated parameterization tool, and validated by calculating molecular and liquid properties. The hydration free energies are calculated successfully by introducing a polarization scaling factor to the dispersion term between the solvent and solute molecules. © 2015 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samudrala, Ram; Heffron, Fred; McDermott, Jason E.
2009-04-24
The type III secretion system is an essential component for virulence in many Gram-negative bacteria. Though components of the secretion system apparatus are conserved, its substrates, effector proteins, are not. We have used a machine learning approach to identify new secreted effectors. The method integrates evolutionary measures, such as the pattern of homologs in a range of other organisms, and sequence-based features, such as G+C content, amino acid composition, and the N-terminal 30 residues of the protein sequence. The method was trained on known effectors from Salmonella typhimurium and validated on a corresponding set of effectors from Pseudomonas syringae, after eliminating effectors with detectable sequence similarity. The method was able to identify all of the known effectors in P. syringae with a specificity of 84% and sensitivity of 82%. The reciprocal validation, training on P. syringae and validating on S. typhimurium, gave similar results with a specificity of 86% when the sensitivity level was 87%. These results show that type III effectors in disparate organisms share common features. We found that maximal performance is attained by including an N-terminal sequence of only 30 residues, which agrees with previous studies indicating that this region contains the secretion signal. We then used the method to define the most important residues in this putative secretion signal. Finally, we present novel predictions of secreted effectors in S. typhimurium, some of which have been experimentally validated, and apply the method to predict secreted effectors in the genetically intractable human pathogen Chlamydia trachomatis. This approach is a novel and effective way to identify secreted effectors in a broad range of pathogenic bacteria for further experimental characterization and provides insight into the nature of the type III secretion signal.
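The sequence-based features described above (G+C content, amino acid composition, and the N-terminal 30 residues) can be sketched as a simple feature extractor. The sequences below are invented, and the authors' actual feature encoding may differ:

```python
def effector_features(dna_seq, protein_seq, n_term=30):
    """Assemble a feature dictionary of the kind described: G+C content of the
    gene, amino acid composition of the protein, and its N-terminal residues.
    Illustrative sketch only, not the published pipeline."""
    dna = dna_seq.upper()
    gc = (dna.count("G") + dna.count("C")) / len(dna)   # G+C fraction
    amino_acids = "ACDEFGHIKLMNPQRSTVWY"
    comp = {aa: protein_seq.count(aa) / len(protein_seq) for aa in amino_acids}
    return {"gc_content": gc,
            "aa_composition": comp,
            "n_terminal": protein_seq[:n_term]}

# Hypothetical gene/protein pair
feats = effector_features("ATGGCGTTACCGGAA", "MALPESSERINE")
```

A classifier (the abstract does not name the algorithm) would then be trained on vectors built from these features for known effectors and non-effectors.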
Validation and Design of Sheet Retrofits
2010-10-31
enough to allow for rotation of the top of the wall without development of an axial force. Obviously, these walls are not load bearing. This type... structures are commonly constructed using CMU blocks to infill non-load-bearing walls (Hammons, 1999). Many of these structures were built in a... axial loads within the sheet. [Figure 1: Infill Masonry Wall Retrofit Concept] 2.1. Objective: The objective of the research documented in
Zaccaro, Heather N; Carbone, Emily C; Dsouza, Nishita; Xu, Michelle R; Byrne, Mary C; Kraemer, John D
2015-12-01
There is a need to develop motorcycle helmet surveillance approaches that are less labour intensive than direct observation (DO), which is the commonly recommended but never formally validated approach, particularly in developing settings. This study sought to assess public traffic camera feeds as an alternative to DO, in addition to the reliability of DO under field conditions. DO had high inter-rater reliability, κ=0.88 and 0.84, respectively, for cycle type and helmet type, which reinforces its use as a gold standard. However, traffic camera-based data collection was found to be unreliable, with κ=0.46 and 0.53 for cycle type and helmet type. When bicycles, motorcycles and scooters were classified based on traffic camera streams, only 68.4% of classifications concurred with those made via DO. Given the current technology, helmet surveillance via traffic camera streams is infeasible, and there remains a need for innovative traffic safety surveillance approaches in low-income urban settings. Published by the BMJ Publishing Group Limited.
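The inter-rater statistic reported above is Cohen's kappa, which discounts the agreement expected by chance. A minimal implementation follows; the ten vehicle classifications are invented for illustration:

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels (illustrative sketch)."""
    assert len(r1) == len(r2)
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n                   # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

# Two hypothetical observers classifying ten vehicles
obs1 = ["moto", "moto", "bike", "moto", "scooter",
        "moto", "bike", "moto", "moto", "scooter"]
obs2 = ["moto", "moto", "bike", "moto", "scooter",
        "bike", "bike", "moto", "moto", "scooter"]
kappa = cohens_kappa(obs1, obs2)
```

Here the raters agree on 9 of 10 items (po = 0.9) with chance agreement pe = 0.4, giving kappa ≈ 0.83, i.e. agreement well above chance.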
NASA Astrophysics Data System (ADS)
Peterman, Karen; Cranston, Kayla A.; Pryor, Marie; Kermish-Allen, Ruth
2015-11-01
This case study was conducted within the context of a place-based education project that was implemented with primary school students in the USA. The authors and participating teachers created a performance assessment of standards-aligned tasks to examine 6-10-year-old students' graph interpretation skills as part of an exploratory research project. Fifty-five students participated in a performance assessment interview at the beginning and end of a place-based investigation. Two forms of the assessment were created and counterbalanced within class at pre and post. In situ scoring was conducted such that responses were scored as correct versus incorrect during the assessment's administration. Criterion validity analysis demonstrated an age-level progression in student scores. Tests of discriminant validity showed that the instrument detected variability in interpretation skills across each of three graph types (line, bar, dot plot). Convergent validity was established by correlating in situ scores with those from the Graph Interpretation Scoring Rubric. Students' proficiency with interpreting different types of graphs matched expectations based on age and the standards-based progression of graphs across primary school grades. The assessment tasks were also effective at detecting pre-post gains in students' interpretation of line graphs and dot plots after the place-based project. The results of the case study are discussed in relation to the common challenges associated with performance assessment. Implications are presented in relation to the need for authentic and performance-based instructional and assessment tasks to respond to the Common Core State Standards and the Next Generation Science Standards.
de Laat, Joanne M; Tham, Emma; Pieterman, Carolina R C; Vriens, Menno R; Dorresteijn, Johannes A N; Bots, Michiel L; Nordenskjöld, Magnus; van der Luijt, Rob B; Valk, Gerlof D
2012-08-01
Endocrine diseases that can be part of the rare inheritable syndrome multiple endocrine neoplasia type 1 (MEN1) commonly occur in the general population. Patients at risk for MEN1, and consequently their families, must be identified to prevent morbidity through periodic screening for the detection and treatment of manifestations in an early stage. The aim of the study was to develop a model for predicting MEN1 in individual patients with sporadically occurring endocrine tumors. Cross-sectional study. In a nationwide study in The Netherlands, patients with sporadically occurring endocrine tumors in whom the referring physician suspected the MEN1 syndrome were identified between 1998 and 2011 (n=365). Logistic regression analysis with internal validation using bootstrapping and external validation with a cohort from Sweden was used. A MEN1 mutation was found in 15.9% of 365 patients. Recurrent primary hyperparathyroidism (pHPT; odds ratio (OR) 162.40); nonrecurrent pHPT (OR 25.78); pancreatic neuroendocrine tumors (pNETs) and duodenal NETs (OR 17.94); pituitary tumor (OR 4.71); NET of stomach, thymus, or bronchus (OR 25.84); positive family history of NET (OR 4.53); and age (OR 0.96) predicted MEN1. The c-statistic of the prediction model was 0.86 (95% confidence interval (95% CI) 0.81-0.90) in the derivation cohort and 0.77 (95% CI 0.66-0.88) in the validation cohort. With the prediction model, the risk of MEN1 can be calculated in patients suspected for MEN1 with sporadically occurring endocrine tumors.
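A hedged sketch of how a logistic prediction model of this kind turns the reported odds ratios into an individual risk. The intercept is assumed (it is not given in the abstract), so the absolute risks are illustrative only; what the sketch does show correctly is the logistic link and the age effect (OR 0.96 per year lowers predicted risk with age):

```python
import math

# Log-odds-ratio coefficients taken from the abstract; intercept is assumed.
LOG_ORS = {
    "recurrent_pHPT": math.log(162.40),
    "nonrecurrent_pHPT": math.log(25.78),
    "pNET_or_duodenal_NET": math.log(17.94),
    "pituitary_tumor": math.log(4.71),
    "thoracic_NET": math.log(25.84),      # NET of stomach, thymus, or bronchus
    "family_history_NET": math.log(4.53),
}
AGE_LOG_OR = math.log(0.96)  # per year of age
INTERCEPT = -4.0             # assumed value, for illustration only

def men1_risk(age, **features):
    """Predicted probability of MEN1 under the sketched logistic model."""
    lp = INTERCEPT + AGE_LOG_OR * age
    lp += sum(LOG_ORS[k] for k, v in features.items() if v)
    return 1.0 / (1.0 + math.exp(-lp))  # logistic link

risk_young = men1_risk(30, recurrent_pHPT=True)
risk_old = men1_risk(70, recurrent_pHPT=True)
```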
Review of meta-analyses evaluating surrogate endpoints for overall survival in oncology.
Sherrill, Beth; Kaye, James A; Sandin, Rickard; Cappelleri, Joseph C; Chen, Connie
2012-01-01
Overall survival (OS) is the gold standard in measuring the treatment effect of new drug therapies for cancer. However, practical factors may preclude the collection of unconfounded OS data, and surrogate endpoints are often used instead. Meta-analyses have been widely used for the validation of surrogate endpoints, specifically in oncology. This research reviewed published meta-analyses on the types of surrogate measures used in oncology studies and examined the extent of correlation between surrogate endpoints and OS for different cancer types. A search was conducted in October 2010 to compile available published evidence in the English language for the validation of disease progression-related endpoints as surrogates of OS, based on meta-analyses. We summarize published meta-analyses that quantified the correlation between progression-based endpoints and OS for multiple advanced solid-tumor types. We also discuss issues that affect the interpretation of these findings. Progression-free survival is the most commonly used surrogate measure in studies of advanced solid tumors, and correlation with OS is reported for a limited number of cancer types. Given the increased use of crossover in trials and the availability of second-/third-line treatment options available to patients after progression, it will become increasingly more difficult to establish correlation between effects on progression-free survival and OS in additional tumor types.
Distinguishing between debris flows and floods from field evidence in small watersheds
Pierson, Thomas C.
2005-01-01
Post-flood indirect measurement techniques to back-calculate flood magnitude are not valid for debris flows, which commonly occur in small steep watersheds during intense rainstorms. This is because debris flows can move much faster than floods in steep channel reaches and much slower than floods in low-gradient reaches. In addition, debris-flow deposition may drastically alter channel geometry in reaches where slope-area surveys are applied. Because high-discharge flows are seldom witnessed and automated samplers are commonly plugged or destroyed, determination of flow type often must be made on the basis of field evidence preserved at the site.
A test of inflated zeros for Poisson regression models.
He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan
2017-01-01
Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require fitting a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
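One simple form such a test can take (a sketch, not necessarily the authors' statistic) compares the observed zero count with the count expected under a fitted Poisson model, n·exp(-λ̂):

```python
import math
import random

def zero_inflation_stat(y):
    """Score-type check of excess zeros under a Poisson model (sketch):
    standardized difference between observed zeros and n * exp(-lambda_hat).
    The variance below ignores the estimation error in lambda_hat (rough)."""
    n = len(y)
    lam = sum(y) / n
    observed = sum(v == 0 for v in y)
    expected = n * math.exp(-lam)
    var = n * math.exp(-lam) * (1 - math.exp(-lam))
    return (observed - expected) / math.sqrt(var)

def rpois(lam):
    """Poisson draw via Knuth's multiplication algorithm."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

random.seed(7)
# Pure Poisson(2) sample vs. a zero-inflated one with 40% structural zeros
poisson_sample = [rpois(2.0) for _ in range(2000)]
zip_sample = [0 if random.random() < 0.4 else rpois(2.0) for _ in range(2000)]

z_plain = zero_inflation_stat(poisson_sample)
z_inflated = zero_inflation_stat(zip_sample)
```

The statistic is small for the pure Poisson sample and very large for the zero-inflated one; a calibrated test additionally needs the correct null variance, which is the technical content of papers like this one.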
Steinert, Janina Isabel; Cluver, Lucie Dale; Melendez-Torres, G J; Vollmer, Sebastian
2018-01-01
Composite indices have been prominently used in poverty research. However, the validity of these indices remains subject to debate. This paper examines the validity of a common type of composite poverty index using data from a cross-sectional survey of 2477 households in urban and rural KwaZulu-Natal, South Africa. Multiple-group comparisons in structural equation modelling were employed for testing differences in the measurement model across urban and rural groups. The analysis revealed substantial variations between urban and rural respondents both in the conceptualisation of poverty and in the weights and importance assigned to individual poverty indicators. The validity of a 'one size fits all' measurement model can therefore not be confirmed. In consequence, it becomes virtually impossible to determine a household's poverty level relative to the full sample. Findings from our analysis have important practical implications for how composite poverty indices can be used sensitively to identify poor people.
Transrectal Near-Infrared Optical Tomography for Prostate Imaging
2010-03-01
[Standard report documentation (SF 298) boilerplate omitted] Report date: 31-03-2010; report type: Annual. ...technology of trans-rectal near-infrared (NIR) optical tomography for accurate, selective prostate biopsy. Prostate cancer is the most common non... mice and recovered/homogenized for injection into the non-immune-suppressed dog's prostate gland. Under general anesthesia, ~2 cc of TVT cells
NASA Astrophysics Data System (ADS)
Zhang, Yuguang; Wen, Jihong; Zhao, Honggang; Yu, Dianlong; Cai, Li; Wen, Xisen
2013-08-01
We present the experimental realization and theoretical understanding of membrane-type acoustic metamaterials embedded with different masses at adjacent cells, capable of increasing the transmission loss at low frequency. Owing to the reverse vibration of adjacent cells, transmission loss (TL) peaks appear, and the magnitudes of the TL peaks exceed the results predicted for the composite wall. Compared with the commonly used configuration, i.e., all cells carrying identical masses, the nonuniformity of the attached masses produces an additional TL peak at much lower frequency. Finite element analysis was employed to validate and provide insights into the TL behavior of the structure.
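For context, the baseline against which such TL peaks are commonly compared is the normal-incidence mass law for a limp, impervious wall, a standard acoustics result (stated here as a reference point; whether the authors used exactly this form is not specified in the abstract). With surface density $m$ in kg/m$^2$ and frequency $f$ in Hz:

```latex
% Normal-incidence mass law for a limp wall (standard result)
TL_0 \approx 20\log_{10}(m f) - 47\ \mathrm{dB}
```

Metamaterial TL peaks exceeding this prediction at low frequency is precisely what makes membrane-type designs attractive, since the mass law otherwise demands heavy walls for low-frequency attenuation.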
Phetpeng, Sukanya; Kitpipit, Thitika; Thanakiatkrai, Phuvadol
2015-07-01
Improvised explosive devices (IEDs) made from household items are encountered in terrorist attacks worldwide. Assembling an IED leaves trace DNA on its components, but deflagration degrades DNA. To maximize the amount of DNA recovered, a systematic evaluation of DNA collection methods was carried out and the most efficient methods were implemented with IED casework evidence as a validation exercise. Six swab types and six moistening agents were used to collect dried buffy coat stains on four common IED substrates. The most efficient swab/moistening agent combinations were then compared with tape-lifting using three brands of adhesive tape and also with direct DNA extraction from evidence. The most efficient collection methods for different IED substrates (post-study protocol) were then implemented for IED casework and compared with the pre-study protocol using 195 pieces of IED evidence. There was no single best swab type or moistening agent. Swab type had the largest effect on DNA recovery percentages, but moistening agents, substrates, and the interactions between factors all affected DNA recovery. The most efficient swab/moistening agent combinations performed equally well when compared with the best adhesive tape and direct extraction. The post-study protocol significantly improved STR profiles obtained from IED evidence. This paper outlines a comprehensive study of DNA collection methods for trace DNA and the validation of the most efficient collection methods with IED evidence. The findings from both parts of this study emphasize the need to continuously re-evaluate standard operating protocols with empirical studies. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Reviewing the psychometric properties of contemporary circadian typology measures.
Di Milia, Lee; Adan, Ana; Natale, Vincenzo; Randler, Christoph
2013-12-01
The accurate measurement of circadian typology (CT) is critical because the construct has implications for a number of health disorders. In this review, we focus on the evidence to support the reliability and validity of the more commonly used CT scales: the Morningness-Eveningness Questionnaire (MEQ), reduced Morningness-Eveningness Questionnaire (rMEQ), the Composite Scale of Morningness (CSM), and the Preferences Scale (PS). In addition, we also consider the Munich ChronoType Questionnaire (MCTQ). In terms of reliability, the MEQ, CSM, and PS consistently report high levels of reliability (>0.80), whereas the reliability of the rMEQ is satisfactory. The stability of these scales is sound at follow-up periods up to 13 mos. The MCTQ is not a scale; therefore, its reliability cannot be assessed. Although it is possible to determine the stability of the MCTQ, these data are yet to be reported. Validity must be given equal weight in assessing the measurement properties of CT instruments. Most commonly reported are convergent and construct validity. The MEQ, rMEQ, and CSM are highly correlated, and this is to be expected, given that these scales share common items. The level of agreement between the MCTQ and the MEQ is satisfactory, but the correlation between these two constructs decreases in line with the number of "corrections" applied to the MCTQ. The interesting question is whether CT is best represented by a psychological preference for behavior or by using a biomarker such as sleep midpoint. Good-quality subjective and objective data suggest adequate construct validity for each of the CT instruments, but a major limitation of this literature is the lack of studies that assess the predictive validity of these instruments. We make a number of recommendations with the aim of advancing the science.
Future studies need to (1) focus on collecting data from representative samples that consider a number of environmental factors; (2) employ longitudinal designs to allow the predictive validity of CT measures to be assessed and preferably make use of objective data; (3) employ contemporary statistical approaches, including structural equation modeling and item-response models; and (4) provide better information concerning sample selection and a rationale for choosing cutoff points.
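As a concrete illustration of the internal-consistency figures cited above (e.g. reliability > 0.80), Cronbach's alpha can be computed directly from raw item scores. The sketch below uses made-up Likert-style responses, not data from any of the reviewed scales:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondent score rows
    (rows = respondents, columns = scale items)."""
    k = len(items[0])
    # Sum of per-item sample variances vs. variance of total scores
    item_vars = sum(variance(col) for col in zip(*items))
    total_var = variance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical responses: 6 respondents x 4 Likert-type items
scores = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 5],
]
print(round(cronbach_alpha(scores), 2))  # → 0.96
```

Values above roughly 0.80, as reported for the MEQ, CSM, and PS, are conventionally read as high internal consistency.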
Development and validation of a predictive risk model for all-cause mortality in type 2 diabetes.
Robinson, Tom E; Elley, C Raina; Kenealy, Tim; Drury, Paul L
2015-06-01
Type 2 diabetes is common and is associated with an approximately 80% increase in the rate of mortality. Management decisions may be assisted by an estimate of the patient's absolute risk of adverse outcomes, including death. This study aimed to derive a predictive risk model for all-cause mortality in type 2 diabetes. We used primary care data from a large national multi-ethnic cohort of patients with type 2 diabetes in New Zealand and linked mortality records to develop a predictive risk model for 5-year risk of mortality. We then validated this model using information from a separate cohort of patients with type 2 diabetes. 26,864 people were included in the development cohort, with a median follow-up time of 9.1 years. We developed three models, initially using demographic information and then progressively more clinical detail. The final model, which also included markers of renal disease, proved to give the best prediction of all-cause mortality, with a C-statistic of 0.80 in the development cohort and 0.79 in the validation cohort (7610 people), and was well calibrated. Ethnicity was a major factor, with hazard ratios of 1.37 for indigenous Maori, 0.41 for East Asian and 0.55 for Indo Asian compared with European (P<0.001). We have developed a model, using information usually available in primary care, that provides a good assessment of a patient's risk of death. Results are similar to models previously published from smaller cohorts in other countries and apply to a wider range of patient ethnic groups. Copyright © 2015. Published by Elsevier Ireland Ltd.
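The C-statistics of 0.80 and 0.79 reported above are concordance indices for survival data. A minimal sketch of Harrell's C on made-up follow-up data (the time, event, and risk-score values are hypothetical, not the study's cohort) illustrates how the metric is computed:

```python
import itertools

def c_statistic(times, events, risk_scores):
    """Harrell's C: fraction of usable pairs in which the higher-risk
    subject experiences the event earlier (risk ties count 0.5).
    Tied event times are skipped for simplicity."""
    concordant, usable = 0.0, 0
    for (t1, e1, r1), (t2, e2, r2) in itertools.combinations(
            zip(times, events, risk_scores), 2):
        if t1 == t2:
            continue
        # A pair is usable only if the shorter follow-up ended in an event
        if t1 < t2 and not e1:
            continue
        if t2 < t1 and not e2:
            continue
        usable += 1
        if r1 == r2:
            concordant += 0.5
        elif (r1 > r2) == (t1 < t2):
            concordant += 1.0
    return concordant / usable

# Hypothetical 5-year follow-up: time (years), death indicator, model risk score
times = [1.0, 2.0, 3.0, 4.0, 5.0]
events = [1, 1, 0, 1, 0]
scores = [0.9, 0.4, 0.3, 0.6, 0.1]
print(c_statistic(times, events, scores))  # → 0.875
```

A value of 0.5 indicates no discrimination and 1.0 perfect discrimination, so the 0.79-0.80 reported for the final model represents good discrimination.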
A nomenclator of extant and fossil taxa of the Melanopsidae (Gastropoda, Cerithioidea)
Neubauer, Thomas A.
2016-01-01
This nomenclator provides details on all published names in the family-, genus-, and species-group, as well as for a few infrasubspecific names introduced for, or attributed to, the family Melanopsidae. It includes nomenclaturally valid names, as well as junior homonyms, junior objective synonyms, nomina nuda, common incorrect subsequent spellings, and, as far as possible, a discussion of each name's current taxonomic status. The catalogue encompasses three family-group names, 79 genus-group names, and 1381 species-group names. All of them are given in their original combination and spelling (except for mandatory corrections required by the Code), along with their original source. For each family- and genus-group name, the original classification and the type genus and type species, respectively, are given. Data provided for species-group taxa are type locality, type horizon (for fossil taxa), and type specimens, as far as available. PMID:27551193
A miniature high-temperature fixed point for self-validation of type C thermocouples
NASA Astrophysics Data System (ADS)
Ongrai, O.; Pearce, J. V.; Machin, G.; Sweeney, S. J.
2011-10-01
Reliable high-temperature (>1500 °C) measurement is crucial for a wide range of industrial processes as well as specialized applications, e.g. aerospace. The most common type of sensor used for high-temperature measurement is the thermocouple. At and above 1500 °C, tungsten-rhenium (W-Re) thermocouples are the most commonly used temperature sensors due to their utility up to 2300 °C. However, the achievable accuracy of W-Re thermocouples is seriously limited by the effects of their inhomogeneity, drift and hysteresis. Furthermore, due to their embrittlement at high temperature, the removal of these thermocouples from environments such as nuclear power plants or materials processing furnaces for recalibration is generally not possible. Even if removal for recalibration were possible, this would be of, at best, very limited use due to large inhomogeneity effects. Ideally, these thermocouples require some mechanism to monitor their drift in situ. In this study, we describe a miniature Co-C eutectic fixed-point cell to evaluate the stability of type C (W5%Re/W26%Re) thermocouples by means of in situ calibration.
Clemes, Stacy A; Biddle, Stuart J H
2013-02-01
Pedometers are increasingly being used to measure physical activity in children and adolescents. This review provides an overview of common measurement issues relating to their use. Studies addressing the following measurement issues in children/adolescents (aged 3-18 years) were included: pedometer validity and reliability, monitoring period, wear time, reactivity, and data treatment and reporting. Pedometer surveillance studies in children/adolescents (aged: 4-18 years) were also included to enable common measurement protocols to be highlighted. In children > 5 years, pedometers provide a valid and reliable, objective measure of ambulatory activity. Further evidence is required on pedometer validity in preschool children. Across all ages, optimal monitoring frames to detect habitual activity have yet to be determined; most surveillance studies use 7 days. It is recommended that standardized wear time criteria are established for different age groups, and that wear times are reported. As activity varies between weekdays and weekend days, researchers interested in habitual activity should include both types of day in surveillance studies. There is conflicting evidence on the presence of reactivity to pedometers. Pedometers are a suitable tool to objectively assess ambulatory activity in children (> 5 years) and adolescents. This review provides recommendations to enhance the standardization of measurement protocols.
jsc2018m000314_Spinning_Science_Multi-use_Variable-g_Platform_Arrives_at_the_Space_Station-MP4
2018-05-09
Spinning Science: Multi-use Variable-g Platform Arrives at the Space Station --- The Multi-use Variable-gravity Platform (MVP) Validation mission will install and test the MVP, a new hardware platform developed and owned by Techshot Inc., on the International Space Station (ISS). Though the MVP is designed for research with many different kinds of organisms and cell types, this validation mission will focus on Drosophila melanogaster, more commonly known as the fruit fly. This platform will be especially important for fruit fly research, as it will allow researchers to study larger sample sizes of Drosophila melanogaster than in other previous hardware utilizing centrifuges and it will be able to support fly colonies for multiple generations.
NASA Astrophysics Data System (ADS)
Pathak, Ashish; Raessi, Mehdi
2016-11-01
Using an in-house computational framework, we have studied the interaction of water waves with pitching flap-type ocean wave energy converters (WECs). The computational framework solves the full 3D Navier-Stokes equations and captures important effects, including fluid-solid interaction and nonlinear and viscous effects. The results of the computational tool are first compared against experimental data on the response of a flap-type WEC in a wave tank, and excellent agreement is demonstrated. Further simulations at the model and prototype scales are presented to assess the validity of Froude scaling. The simulations are used to address some important questions, such as the validity range of common WEC modeling approaches that rely heavily on Froude scaling and inviscid potential flow theory. Additionally, the simulations examine the role of the Keulegan-Carpenter (KC) number, which is often used as a measure of the relative importance of viscous drag on bodies exposed to oscillating flows. The performance of flap-type WECs is investigated at various KC numbers to establish the relationship between viscous drag and KC number for such a geometry. That is of significant importance because such a relationship exists only for simple geometries, e.g., a cylinder. Support from the National Science Foundation is gratefully acknowledged.
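The Keulegan-Carpenter number discussed above is conventionally defined as KC = U·T/L, the oscillatory flow excursion relative to the body's characteristic length. A trivial sketch, using hypothetical model-scale values rather than the study's actual wave conditions:

```python
def keulegan_carpenter(velocity_amplitude, period, length):
    """KC = U*T/L. Large KC means the flow excursion is large relative
    to the body, so flow separation and viscous drag matter more."""
    return velocity_amplitude * period / length

# Hypothetical model-scale conditions for a flap-type WEC
U = 0.5   # oscillatory velocity amplitude (m/s)
T = 2.0   # wave period (s)
L = 0.4   # characteristic flap width (m)
print(keulegan_carpenter(U, T, L))  # → 2.5
```

Because KC does not scale the same way as Froude number for all geometries, matching Froude number between model and prototype can leave viscous (KC-dependent) effects mismatched, which is the concern the abstract raises.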
Martin, Caroline J Hollins; Kenney, Laurence; Pratt, Thomas; Granat, Malcolm H
2015-01-01
There is limited understanding of the type and extent of maternal postures that midwives should encourage or support during labor. The aims of this study were to identify a set of postures and movements commonly seen during labor, to develop an activity monitoring system for use during labor, and to validate this system design. Volunteer student midwives simulated maternal activity during labor in a laboratory setting. Participants (N = 15) wore monitors adhered to the left thigh and left shank, and adopted 13 common postures of laboring women for 3 minutes each. Simulated activities were recorded using a video camera. Postures and movements were coded from the video, and statistical analysis was conducted of the agreement between the coded video data and the outputs of the activity monitoring system. Excellent agreement between the 2 raters of the video recordings was found (Cohen's κ = 0.95). Both sensitivity and specificity of the activity monitoring system were greater than 80% for standing, lying, kneeling, and sitting (legs dangling). This validated system can be used to measure the elected activity of laboring women and report on the effects of postures on length of first stage, pain experience, birth satisfaction, and neonatal condition. This validated maternal posture-monitoring system is available as a reference and for use by researchers who wish to develop research in this area. © 2015 by the American College of Nurse-Midwives.
What We are Learning about Airborne Particles from MISR Multi-angle Imaging
NASA Astrophysics Data System (ADS)
Kahn, Ralph
The NASA Earth Observing System's Multi-angle Imaging SpectroRadiometer (MISR) instrument has been collecting global observations in 36 angular-spectral channels about once per week for over 14 years. Regarding airborne particles, MISR is contributing in three broad areas: (1) aerosol optical depth (AOD), especially over land surfaces, including bright desert, (2) wildfire smoke, desert dust, and volcanic ash injection and near-source plume height, and (3) aerosol type, the aggregate of qualitative constraints on particle size, shape, and single-scattering albedo (SSA). Early advances in the retrieval of these quantities focused on AOD, for which surface-based sun photometers provided a global network of ground truth, and plume height, for which ground-based and airborne lidar offered near-coincident validation data. MISR monthly, global AOD products contributed directly to the advances in modeling aerosol impacts on climate made between the Intergovernmental Panel on Climate Change (IPCC) third and fourth assessment reports. MISR stereo-derived plume heights are now being used to constrain source inventories for the AeroCom aerosol-climate modeling effort. The remaining challenge for the MISR aerosol effort is to refine and validate our global aerosol type product. Unlike AOD and plume height, aerosol type as retrieved by MISR is a qualitative classification derived from multi-dimensional constraints, so evaluation must be done on a categorical basis. Coincident aerosol type validation data are far less common than for AOD and, except for rare Golden Days during aircraft field campaigns, amount to remote sensing retrievals from suborbital instruments having uncertainties comparable to those of the MISR product itself. And satellite remote sensing retrievals of aerosol type are much more sensitive to scene conditions, such as surface variability and AOD, than either AOD or plume height.
MISR aerosol type retrieval capability and information content have been demonstrated in case studies using the MISR Operational and, especially, the MISR Research aerosol retrieval algorithms. Refinements to the Operational algorithm, as indicated by these studies, are required to generate a high-quality next-generation aerosol type product from the MISR data. This presentation will briefly review the MISR AOD and plume height product attributes and will then focus on the MISR aerosol type product: validation, data quality, and refinements.
Review of meta-analyses evaluating surrogate endpoints for overall survival in oncology
Sherrill, Beth; Kaye, James A; Sandin, Rickard; Cappelleri, Joseph C; Chen, Connie
2012-01-01
Overall survival (OS) is the gold standard in measuring the treatment effect of new drug therapies for cancer. However, practical factors may preclude the collection of unconfounded OS data, and surrogate endpoints are often used instead. Meta-analyses have been widely used for the validation of surrogate endpoints, specifically in oncology. This research reviewed published meta-analyses on the types of surrogate measures used in oncology studies and examined the extent of correlation between surrogate endpoints and OS for different cancer types. A search was conducted in October 2010 to compile available published evidence in the English language for the validation of disease progression-related endpoints as surrogates of OS, based on meta-analyses. We summarize published meta-analyses that quantified the correlation between progression-based endpoints and OS for multiple advanced solid-tumor types. We also discuss issues that affect the interpretation of these findings. Progression-free survival is the most commonly used surrogate measure in studies of advanced solid tumors, and correlation with OS is reported for a limited number of cancer types. Given the increased use of crossover in trials and the availability of second-/third-line treatment options available to patients after progression, it will become increasingly more difficult to establish correlation between effects on progression-free survival and OS in additional tumor types. PMID:23109809
Examining the dimensions and correlates of workplace stress among Australian veterinarians
Smith, Derek R; Leggat, Peter A; Speare, Richard; Townley-Jones, Maureen
2009-12-08
Background Although stress is known to be a common occupational health issue in the veterinary profession, few studies have investigated its broad domains or the internal validity of the survey instrument used for assessment. Methods We analysed data from over 500 veterinarians in Queensland, Australia, who were surveyed during 2006-07. Results The most common causes of stress were reported to be long hours worked per day, not having enough holidays per year, not having enough rest breaks per day, the attitude of customers, lack of recognition from the public and not having enough time per patient. Age, gender and practice type were statistically associated with various aspects of work-related stress. Strong correlations were found between having too many patients per day and not having enough time per patient; between not having enough holidays and long working hours; and also between not having enough rest breaks per day and long working hours. Factor analysis revealed four dimensions of stress comprising a mixture of career, professional and practice-related items. The internal validity of our stress questionnaire was shown to be high during statistical analysis. Conclusion Overall, this study suggests that workplace stress is fairly common among Australian veterinarians and represents an issue that occupies several distinct areas within their professional life. PMID:19995450
Sittig, Dean F; Ash, Joan S; Feblowitz, Joshua; Meltzer, Seth; McMullen, Carmit; Guappone, Ken; Carpenter, Jim; Richardson, Joshua; Simonaitis, Linas; Evans, R Scott; Nichol, W Paul; Middleton, Blackford
2011-01-01
Background Clinical decision support (CDS) is a valuable tool for improving healthcare quality and lowering costs. However, there is no comprehensive taxonomy of types of CDS and there has been limited research on the availability of various CDS tools across current electronic health record (EHR) systems. Objective To develop and validate a taxonomy of front-end CDS tools and to assess support for these tools in major commercial and internally developed EHRs. Study design and methods We used a modified Delphi approach with a panel of 11 decision support experts to develop a taxonomy of 53 front-end CDS tools. Based on this taxonomy, a survey on CDS tools was sent to a purposive sample of commercial EHR vendors (n=9) and leading healthcare institutions with internally developed state-of-the-art EHRs (n=4). Results Responses were received from all healthcare institutions and 7 of 9 EHR vendors (response rate: 85%). All 53 types of CDS tools identified in the taxonomy were found in at least one surveyed EHR system, but only 8 functions were present in all EHRs. Medication dosing support and order facilitators were the most commonly available classes of decision support, while expert systems (eg, diagnostic decision support, ventilator management suggestions) were the least common. Conclusion We developed and validated a comprehensive taxonomy of front-end CDS tools. A subsequent survey of commercial EHR vendors and leading healthcare institutions revealed a small core set of common CDS tools, but identified significant variability in the remainder of clinical decision support content. PMID:21415065
An ALE meta-analysis on the audiovisual integration of speech signals.
Erickson, Laura C; Heeg, Elizabeth; Rauschecker, Josef P; Turkeltaub, Peter E
2014-11-01
The brain improves speech processing through the integration of audiovisual (AV) signals. Situations involving AV speech integration may be crudely dichotomized into those where auditory and visual inputs contain (1) equivalent, complementary signals (validating AV speech) or (2) inconsistent, different signals (conflicting AV speech). This simple framework may allow the systematic examination of broad commonalities and differences between AV neural processes engaged by various experimental paradigms frequently used to study AV speech integration. We conducted an activation likelihood estimation meta-analysis of 22 functional imaging studies comprising 33 experiments, 311 subjects, and 347 foci examining "conflicting" versus "validating" AV speech. Experimental paradigms included content congruency, timing synchrony, and perceptual measures, such as the McGurk effect or synchrony judgments, across AV speech stimulus types (sublexical to sentence). Colocalization of conflicting AV speech experiments revealed consistency across at least two contrast types (e.g., synchrony and congruency) in a network of dorsal stream regions in the frontal, parietal, and temporal lobes. There was consistency across all contrast types (synchrony, congruency, and percept) in the bilateral posterior superior/middle temporal cortex. Although fewer studies were available, validating AV speech experiments were localized to other regions, such as ventral stream visual areas in the occipital and inferior temporal cortex. These results suggest that while equivalent, complementary AV speech signals may evoke activity in regions related to the corroboration of sensory input, conflicting AV speech signals recruit widespread dorsal stream areas likely involved in the resolution of conflicting sensory signals. Copyright © 2014 Wiley Periodicals, Inc.
Hacisalihoglu, Gokhan; Larbi, Bismark; Settles, A Mark
2010-01-27
The objective of this study was to explore the potential of near-infrared reflectance (NIR) spectroscopy to determine individual seed composition in common bean (Phaseolus vulgaris L.). NIR spectra and analytical measurements of seed weight, protein, and starch were collected from 267 individual bean seeds representing 91 diverse genotypes. Partial least-squares (PLS) regression models were developed with 61 bean accessions randomly assigned to a calibration data set and 30 accessions assigned to an external validation set. Protein gave the most accurate PLS regression, with the external validation set having a standard error of prediction (SEP) = 1.6%. PLS regressions for seed weight and starch had sufficient accuracy for seed sorting applications, with SEP = 41.2 mg and 4.9%, respectively. Seed color had a clear effect on the NIR spectra, with black beans having a distinct spectral type. Seed coat color did not impact the accuracy of PLS predictions. This research demonstrates that NIR is a promising technique for simultaneous sorting of multiple seed traits in single bean seeds with no sample preparation.
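The SEP values quoted above follow the usual chemometrics definition: the bias-corrected standard deviation of the prediction residuals on the external validation set. A minimal sketch with made-up protein values (not the study's data):

```python
from math import sqrt

def sep(reference, predicted):
    """Standard error of prediction: SD of residuals after removing
    the mean bias (a common chemometrics validation metric)."""
    residuals = [p - r for p, r in zip(predicted, reference)]
    bias = sum(residuals) / len(residuals)
    return sqrt(sum((e - bias) ** 2 for e in residuals)
                / (len(residuals) - 1))

# Hypothetical protein values (%) for 5 external-validation seeds
reference = [20.1, 22.4, 18.9, 25.0, 21.3]  # analytical measurements
predicted = [20.8, 21.9, 19.5, 24.2, 22.0]  # model predictions
print(round(sep(reference, predicted), 2))  # → 0.73
```

Reporting SEP on an external set held out of calibration, as the study does, guards against the over-optimistic errors a model shows on its own calibration samples.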
Nestle, Frank O
2008-01-01
Psoriasis is one of the most common chronic inflammatory disorders with a strong genetic background. Recent progress in the understanding of both the immunological as well as the genetic basis has provided an unprecedented opportunity to move scientific insights from the bench to bedside. Based on insights from laboratory research, targeted immunotherapies are now available for the benefit of patients suffering from psoriasis. The success of these therapies has validated insights into disease pathogenesis and also provides the opportunity to increase our understanding about the pathways underpinning autoimmune-type inflammation in the skin.
Discovering system requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bahill, A.T.; Bentz, B.; Dean, F.F.
1996-07-01
Cost and schedule overruns are often caused by poor requirements that are produced by people who do not understand the requirements process. This report provides a high-level overview of the system requirements process, explaining types, sources, and characteristics of good requirements. System requirements, however, are seldom stated by the customer. Therefore, this report shows ways to help you work with your customer to discover the system requirements. It also explains terminology commonly used in the requirements development field, such as verification, validation, technical performance measures, and the various design reviews.
Galbusera, Fabio; Brayda-Bruno, Marco; Freutel, Maren; Seitz, Andreas; Steiner, Malte; Wehrle, Esther; Wilke, Hans-Joachim
2012-01-01
Previous surveys showed a poor quality of the web sites providing health information about low back pain. However, the rapid and continuous evolution of Internet content may call into question the current validity of those investigations. The present study aims to quantitatively assess the quality of the Internet information about low back pain retrieved with the most commonly employed search engines. An Internet search with the keywords "low back pain" was performed with Google, Yahoo!® and Bing™ in the English language. The top 30 hits obtained with each search engine were evaluated by five independent raters, following criteria derived from previous works, and the ratings were averaged. All search results were categorized as declaring compliance with a quality standard for health information (e.g. HONCode) or not, and by web site type (Institutional, Free informative, Commercial, News, Social Network, Unknown). The quality of the hits retrieved by the three search engines was extremely similar. The web sites had a clear purpose and were easy to navigate, but mostly lacked validity and quality of the provided links. Conformity to a quality standard was correlated with a markedly greater quality of the web sites in all respects. Institutional web sites had the best validity and ease of use. Free informative web sites had good quality but a markedly lower validity compared with Institutional web sites. Commercial web sites provided more biased information. News web sites were well designed and easy to use, but lacked validity. The average quality of the hits retrieved by the most commonly employed search engines could be defined as satisfactory and compares favorably with previous investigations. Awareness of the user about checking the quality of the information remains of concern.
Peters, Tansy; Bertrand, Sophie; Björkman, Jonas T; Brandal, Lin T; Brown, Derek J; Erdõsi, Tímea; Heck, Max; Ibrahem, Salha; Johansson, Karin; Kornschober, Christian; Kotila, Saara M; Le Hello, Simon; Lienemann, Taru; Mattheus, Wesley; Nielsen, Eva Møller; Ragimbeau, Catherine; Rumore, Jillian; Sabol, Ashley; Torpdahl, Mia; Trees, Eija; Tuohy, Alma; de Pinna, Elizabeth
2017-01-01
Multilocus variable-number tandem repeat analysis (MLVA) is a rapid and reproducible typing method that is an important tool for investigation, as well as detection, of national and multinational outbreaks of a range of food-borne pathogens. Salmonella enterica serovar Enteritidis is the most common Salmonella serovar associated with human salmonellosis in the European Union/European Economic Area and North America. Fourteen laboratories from 13 countries in Europe and North America participated in a validation study for MLVA of S. Enteritidis targeting five loci. Following normalisation of fragment sizes using a set of reference strains, a blinded set of 24 strains with known allele sizes was analysed by each participant. The S. Enteritidis 5-loci MLVA protocol was shown to produce internationally comparable results as more than 90% of the participants reported less than 5% discrepant MLVA profiles. All 14 participating laboratories performed well, even those where experience with this typing method was limited. The raw fragment length data were consistent throughout, and the inter-laboratory validation helped to standardise the conversion of raw data to repeat numbers with at least two countries updating their internal procedures. However, differences in assigned MLVA profiles remain between well-established protocols and should be taken into account when exchanging data. PMID:28277220
The surgical treatment of acromioclavicular joint injuries
Boffano, Michele; Mortera, Stefano; Wafa, Hazem; Piana, Raimondo
2017-01-01
Acromioclavicular joint (ACJ) injuries are common, but their incidence is probably underestimated. As the treatment of some sub-types is still debated, we reviewed the available literature to obtain an overview of current management. We analysed the literature using the PubMed search engine. There is consensus on the treatment of Rockwood type I and type II lesions and for high-grade injuries of types IV, V and VI. The treatment of type III injuries remains controversial, as none of the studies has proven a significant benefit of one procedure when compared with another. Several approaches can be considered in reaching a valid solution for treating ACJ lesions. The final outcome is affected by both vertical and horizontal post-operative ACJ stability. Synthetic devices, positioned using early open or arthroscopic procedures, are the main choice for young people. Type III injuries should be managed surgically only in cases with high-demand sporting or working activities. Cite this article: EFORT Open Rev 2017;2:432–437. DOI: 10.1302/2058-5241.2.160085. PMID:29209519
Nohle, David G; Ayers, Leona W
2005-01-01
Background The Association for Pathology Informatics (API) Extensible Markup Language (XML) TMA Data Exchange Specification (TMA DES) proposed in April 2003 provides a community-based, open source tool for sharing tissue microarray (TMA) data in a common format. Each tissue core within an array has separate data, including digital images; therefore an organized, common approach to produce, navigate and publish such data facilitates viewing, sharing and merging TMA data from different laboratories. The AIDS and Cancer Specimen Resource (ACSR) is an HIV/AIDS tissue bank consortium sponsored by the National Cancer Institute (NCI) Division of Cancer Treatment and Diagnosis (DCTD). The ACSR offers HIV-related malignancies and uninfected control tissues in microarrays (TMA), accompanied by de-identified clinical data, to approved researchers. Exporting our TMA data into the proposed API specified format offers an opportunity to evaluate the API specification in an applied setting and to explore its usefulness. Results A document type definition (DTD) that governs the allowed common data elements (CDE) in TMA DES export XML files was written, tested and evolved and is in routine use by the ACSR. This DTD defines TMA DES CDEs, which are implemented in an external file that can be supplemented by internal DTD extensions for locally defined TMA data elements (LDE). Conclusion ACSR implementation of the TMA DES demonstrated the utility of the specification and allowed application of a DTD to validate the language of the API specified XML elements and to identify possible enhancements within our TMA data management application. Improvements to the specification have additionally been suggested by our experience in importing other institutions' exported TMA data. Enhancements to TMA DES to remove ambiguous situations and clarify the data should be considered. Better specified identifiers and hierarchical relationships will make automatic use of the data possible.
Our tool can be used to reorder data and add identifiers; upgrading data for changes in the specification can be automatically accomplished. Using a DTD (optionally reflecting our proposed enhancements) can provide stronger validation of exported TMA data. PMID:15871741
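The kind of structural check such a DTD enforces can be sketched in a few lines. This is a minimal illustration only: the element names below are invented placeholders, not the actual TMA DES common data element names from the API specification.

```python
import xml.etree.ElementTree as ET

# Required elements per <core> record; names are illustrative, not TMA DES CDEs.
REQUIRED_CDES = {"block_identifier", "core_row", "core_column", "diagnosis"}

def missing_cdes(xml_text: str) -> set:
    """Return the set of required elements absent from any <core> record."""
    root = ET.fromstring(xml_text)
    missing = set()
    for core in root.iter("core"):
        present = {child.tag for child in core}
        missing |= REQUIRED_CDES - present
    return missing

sample = """<tma_export>
  <core>
    <block_identifier>ACSR-001</block_identifier>
    <core_row>1</core_row>
    <core_column>2</core_column>
  </core>
</tma_export>"""

print(missing_cdes(sample))  # the sample core lacks a diagnosis element
```

A real DTD validator would also check element order and nesting; this sketch only checks presence of the required elements.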
Hulteen, Ryan M; Lander, Natalie J; Morgan, Philip J; Barnett, Lisa M; Robertson, Samuel J; Lubans, David R
2015-10-01
It has been suggested that young people should develop competence in a variety of 'lifelong physical activities' to ensure that they can be active across the lifespan. The primary aim of this systematic review is to report the methodological properties, validity, reliability, and test duration of field-based measures that assess movement skill competency in lifelong physical activities. A secondary aim was to clearly define those characteristics unique to lifelong physical activities. A search of four electronic databases (Scopus, SPORTDiscus, ProQuest, and PubMed) was conducted between June 2014 and April 2015 with no date restrictions. Studies addressing the validity and/or reliability of lifelong physical activity tests were reviewed. Included articles were required to assess lifelong physical activities using process-oriented measures, as well as report at least one type of validity or reliability. Assessment criteria for methodological quality were adapted from a checklist used in a previous review of sport skill outcome assessments. Movement skill assessments for eight different lifelong physical activities (badminton, cycling, dance, golf, racquetball, resistance training, swimming, and tennis) in 17 studies were identified for inclusion. Methodological quality, validity, reliability, and test duration (time to assess a single participant) for each article were assessed. Moderate to excellent reliability results were found in 16 of 17 studies, with 71% reporting inter-rater reliability and 41% reporting intra-rater reliability. Only four studies in this review reported test-retest reliability. Ten studies reported validity results; content validity was cited in 41% of these studies. Construct validity was reported in 24% of studies, while criterion validity was only reported in 12% of studies. Numerous assessments for lifelong physical activities may exist, yet only assessments for eight lifelong physical activities were included in this review.
Generalizability of results may be more applicable if more heterogeneous samples are used in future research. Moderate to excellent levels of inter- and intra-rater reliability were reported in the majority of studies. However, future work should look to establish test-retest reliability. Validity was less commonly reported than reliability, and further types of validity other than content validity need to be established in future research. Specifically, predictive validity of 'lifelong physical activity' movement skill competency is needed to support the assertion that such activities provide the foundation for a lifetime of activity.
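Inter-rater reliability of the kind reported in these studies is commonly quantified with Cohen's kappa. A dependency-free sketch, with ratings invented purely for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters (two-rater sketch)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance from each rater's marginal frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters scoring 8 skill attempts as pass/fail (hypothetical data)
a = ["pass", "pass", "fail", "pass", "fail", "fail", "pass", "pass"]
b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]
print(round(cohens_kappa(a, b), 3))
```

Kappa near 0 indicates chance-level agreement, and values approaching 1 indicate strong agreement; published cut-offs for "moderate" and "excellent" vary by convention.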
Lee, Joseph G L; Gregory, Kyle R; Baker, Hannah M; Ranney, Leah M; Goldstein, Adam O
2016-01-01
Most smokers become addicted to tobacco products before they are legally able to purchase these products. We systematically reviewed the literature on protocols to assess underage purchase and their ecological validity. We conducted a systematic search in May 2015 in PubMed and PsycINFO. We independently screened records for inclusion. We conducted a narrative review and examined implications of two types of legal authority for protocols that govern underage buy enforcement in the United States: criminal (state-level laws prohibiting sales to youth) and administrative (federal regulations prohibiting sales to youth). Ten studies experimentally assessed underage buy protocols and 44 studies assessed the association between youth characteristics and tobacco sales. Protocols that mimicked real-world youth behaviors were consistently associated with substantially greater likelihood of a sale to a youth. Many of the tested protocols appear to be designed for compliance with criminal law rather than administrative enforcement in ways that limited ecological validity. This may be due to concerns about entrapment. For administrative enforcement in particular, entrapment may be less of an issue than commonly thought. Commonly used underage buy protocols poorly represent the reality of youths' access to tobacco from retailers. Compliance check programs should allow youth to present themselves naturally and attempt to match the community's demographic makeup.
Exploring rationality in schizophrenia.
Revsbech, Rasmus; Mortensen, Erik Lykke; Owen, Gareth; Nordgaard, Julie; Jansson, Lennart; Sæbye, Ditte; Flensborg-Madsen, Trine; Parnas, Josef
2015-06-01
Empirical studies of rationality (syllogisms) in patients with schizophrenia have obtained different results. One study found that patients reason more logically if the syllogism is presented with unusual content. To explore syllogism-based rationality in schizophrenia. Thirty-eight first-admitted patients with schizophrenia and 38 healthy controls solved 29 syllogisms that varied in presentation content (ordinary v. unusual) and validity (valid v. invalid). Statistical tests were made of unadjusted and adjusted group differences in models adjusting for intelligence and neuropsychological test performance. Controls outperformed patients on all syllogism types, but the difference between the two groups was only significant for valid syllogisms presented with unusual content. However, when adjusting for intelligence and neuropsychological test performance, all group differences became non-significant. When taking intelligence and neuropsychological performance into account, patients with schizophrenia and controls perform similarly on syllogism tests of rationality. © The Royal College of Psychiatrists 2015. This is an open access article distributed under the terms of the Creative Commons Non-Commercial, No Derivatives (CC BY-NC-ND) licence.
NASA Astrophysics Data System (ADS)
Parkin, G.; O'Donnell, G.; Ewen, J.; Bathurst, J. C.; O'Connell, P. E.; Lavabre, J.
1996-02-01
Validation methods commonly used to test catchment models are not capable of demonstrating a model's fitness for making predictions for catchments where the catchment response is not known (including hypothetical catchments, and future conditions of existing catchments which are subject to land-use or climate change). This paper describes the first use of a new method of validation (Ewen and Parkin, 1996. J. Hydrol., 175: 583-594) designed to address these types of application; the method involves making 'blind' predictions of selected hydrological responses which are considered important for a particular application. SHETRAN (a physically based, distributed catchment modelling system) is tested on a small Mediterranean catchment. The test involves quantification of the uncertainty in four predicted features of the catchment response (continuous hydrograph, peak discharge rates, monthly runoff, and total runoff), and comparison of observations with the predicted ranges for these features. The results of this test are considered encouraging.
Content validation of terms and definitions in a wound glossary.
Milne, Catherine T; Paine, Tim; Sullivan, Valerie; Sawyer, Allen
2011-12-01
A common language and lexicon provide the easiest means of mutual understanding. Inconsistency in terminology makes effective information exchange difficult. Previous studies identified the need to determine standard, accepted definitions for the vocabulary frequently used in wound care. The objective of this study was to establish content validation for these terms and develop an evidence-based glossary for this specialty. Members of the Association for the Advancement of Wound Care Quality of Care Task Force reviewed literature to determine glossary content generation and the associated literature-based definitions. Thirty-nine wound care professionals from wound care stakeholder professional organizations in the United States and Canada participated in the content validation process. Participants were asked to quantify the degree of validity using a 367-item, 4-point Likert-type scale. On a scale of 1 to 4, the mean score of the entire instrument was 3.84. The instrument's overall scale content validity index was 0.96. Terms with an item content validity index of less than 0.70 were removed from the glossary, leaving 365 items with established content validity. Qualitative data analysis revealed themes suggesting that enhanced communication between providers improves patient outcomes. The need for ongoing updates of the glossary was also identified. The wound care glossary in its finalized form proved valid. An evidence-based glossary bridges the chasm of miscommunication and nonstandardization so that wound care, as an emerging specialized medical science field, can move forward to optimize both process and clinical outcomes.
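The item- and scale-level content validity indices used in this kind of study can be computed directly from panel ratings. The glossary terms below are real wound-care vocabulary, but the ratings are invented for illustration:

```python
def item_cvi(ratings):
    """Proportion of experts rating the item 3 or 4 on the 4-point scale."""
    return sum(r >= 3 for r in ratings) / len(ratings)

def scale_cvi(items):
    """Average of the item-level CVIs (the S-CVI/Ave convention)."""
    cvis = [item_cvi(r) for r in items.values()]
    return sum(cvis) / len(cvis)

# Hypothetical ratings from 10 panellists for three glossary terms
ratings = {
    "eschar":      [4, 4, 3, 4, 3, 4, 4, 3, 4, 4],  # I-CVI = 1.0
    "slough":      [4, 3, 3, 4, 2, 4, 3, 4, 4, 3],  # I-CVI = 0.9
    "undermining": [2, 1, 3, 2, 2, 3, 2, 1, 2, 2],  # I-CVI = 0.2, dropped
}

# Apply the study's retention rule: keep items with I-CVI >= 0.70
retained = {t: r for t, r in ratings.items() if item_cvi(r) >= 0.70}
```

With these invented ratings the term "undermining" falls below the 0.70 threshold and is removed, mirroring the pruning step described in the abstract.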
Why bother with testing? The validity of immigrants' self-assessed language proficiency.
Edele, Aileen; Seuring, Julian; Kristen, Cornelia; Stanat, Petra
2015-07-01
Due to its central role in social integration, immigrants' language proficiency is a matter of considerable societal concern and scientific interest. This study examines whether commonly applied self-assessments of linguistic skills yield results that are similar to those of competence tests and thus whether these self-assessments are valid measures of language proficiency. Analyses of data for immigrant youth reveal moderate correlations between language test scores and two types of self-assessments (general ability estimates and concrete performance estimates) for the participants' first and second languages. More importantly, multiple regression models using self-assessments and models using test scores yield different results. This finding holds true for a variety of analyses and for both types of self-assessments. Our findings further suggest that self-assessed language skills are systematically biased in certain groups. Subjective measures thus seem to be inadequate estimates of language skills, and future research should use them with caution when research questions pertain to actual language skills rather than self-perceptions. Copyright © 2015 Elsevier Inc. All rights reserved.
Hippisley-Cox, Julia; Coupland, Carol
2015-01-01
Objective To derive and validate a set of clinical risk prediction algorithms to estimate the 10-year risk of 11 common cancers. Design Prospective open cohort study using routinely collected data from 753 QResearch general practices in England. We used 565 practices to develop the scores and 188 for validation. Subjects 4.96 million patients aged 25–84 years in the derivation cohort; 1.64 million in the validation cohort. Patients were free of the relevant cancer at baseline. Methods Cox proportional hazards models in the derivation cohort to derive 10-year risk algorithms. Risk factors considered included age, ethnicity, deprivation, body mass index, smoking, alcohol, previous cancer diagnoses, family history of cancer, relevant comorbidities and medication. Measures of calibration and discrimination in the validation cohort. Outcomes Incident cases of blood, breast, bowel, gastro-oesophageal, lung, oral, ovarian, pancreas, prostate, renal tract and uterine cancers. Cancers were recorded on any one of four linked data sources (general practitioner (GP), mortality, hospital or cancer records). Results We identified 228 241 incident cases during follow-up of the 11 types of cancer. Of these 25 444 were blood; 41 315 breast; 32 626 bowel; 12 808 gastro-oesophageal; 32 187 lung; 4811 oral; 6635 ovarian; 7119 pancreatic; 35 256 prostate; 23 091 renal tract; 6949 uterine cancers. The lung cancer algorithm had the best performance with an R2 of 64.2%; D statistic of 2.74; receiver operating characteristic curve statistic of 0.91 in women. The sensitivity for the top 10% of women at highest risk of lung cancer was 67%. Performance of the algorithms in men was very similar to that for women. Conclusions We have developed and validated prediction models to quantify the absolute risk of 11 common cancers. They can be used to identify patients at high risk of cancers for prevention or further assessment.
The algorithms could be integrated into clinical computer systems and used to identify high-risk patients. Web calculator: There is a simple web calculator to implement the Qcancer 10 year risk algorithm together with the open source software for download (available at http://qcancer.org/10yr/). PMID:25783428
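The conversion from a Cox model's linear predictor to a 10-year absolute risk, the quantity such a calculator reports, can be sketched as below. The coefficient names, values, and baseline survival are invented for illustration and are not QCancer's actual parameters:

```python
import math

def ten_year_risk(baseline_survival, coefficients, covariates):
    """Absolute risk 1 - S0(t)^exp(lp) from a Cox model.

    baseline_survival is S0(10 years) for the reference patient; the
    linear predictor lp sums coefficient * covariate contributions.
    """
    lp = sum(coefficients[k] * covariates[k] for k in coefficients)
    return 1.0 - baseline_survival ** math.exp(lp)

# Illustrative coefficients (log hazard ratios) and one patient's covariates
coefs = {"age_per_decade": 0.50, "smoker": 0.80, "family_history": 0.40}
patient = {"age_per_decade": 2.0, "smoker": 1.0, "family_history": 0.0}
risk = ten_year_risk(0.99, coefs, patient)
```

A reference patient (all covariates zero) gets exactly the baseline risk, here 1 - 0.99 = 1%, while elevated covariates scale the cumulative hazard multiplicatively.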
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
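The likelihood ratio mechanics behind such a nested-model test can be sketched without any statistics library; for even degrees of freedom the chi-square survival function has a simple closed form. The log-likelihood values below are invented for illustration, not taken from the paper:

```python
import math

def likelihood_ratio_test(loglik_restricted, loglik_full, df):
    """LR statistic and p-value for nested models.

    For even df the chi-square survival function is
    exp(-x/2) * sum_{k < df/2} (x/2)^k / k!, used here to stay
    dependency-free (df must be even for this closed form).
    """
    stat = 2.0 * (loglik_full - loglik_restricted)
    half = stat / 2.0
    p = math.exp(-half) * sum(half**k / math.factorial(k) for k in range(df // 2))
    return stat, p

# E.g. a semi-nonparametric model with 2 extra shape parameters versus the
# standard-Gumbel model it nests (log-likelihoods invented for illustration)
stat, p = likelihood_ratio_test(-1204.7, -1198.2, df=2)
```

A small p-value rejects the restricted (standard Gumbel) specification in favour of the more flexible nesting model, which is the decision rule the abstract describes.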
Steffan, Jean; Olivry, Thierry; Forster, Sophie L; Seewald, Wolfgang
2012-10-01
Hypersensitivity (allergic) dermatitis (HD) is commonly seen in cats, causing pruritus and various patterns of skin lesions, including at least one of the following: head and neck excoriations, self-induced alopecia, eosinophilic plaques and miliary dermatitis. Few studies have evaluated the efficacy of therapeutic interventions for feline HD, and although various scales have been considered, none has been formally validated for the assessment of disease severity and its response to therapy. To design and validate a novel scale (SCORing Feline Allergic Dermatitis; SCORFAD) to assess the value of different criteria used as outcome measures for the treatment of feline HD and to set minimal thresholds for defining the clinical success of tested interventions. One hundred client-owned cats. The SCORFAD scale was designed to include the four most frequently identified lesion types in feline HD (eosinophilic plaque, head and neck excoriations, self-induced alopecia and miliary dermatitis) across 10 body regions. The extent and severity of each lesion type were graded prior to inclusion and after 3 and 6 weeks in a clinical study to compare the efficacy of two doses of ciclosporin with placebo. The SCORFAD scale was found to exhibit satisfactory content, construct and criterion validity, and sensitivity to change. The percentage reduction in SCORFAD from baseline was determined to be the most valid assessment of clinical response. Inter- and intra-observer reliability was not assessed. The SCORFAD scale is proposed for use as a validated tool for the assessment of disease severity and response to therapeutic interventions in clinical trials for feline HD. © 2012 The Authors. Veterinary Dermatology © 2012 ESVD and ACVD.
Analysis of Altered Micro RNA Expression Profiles in Focal Cortical Dysplasia IIB.
Li, Lin; Liu, Chang-Qing; Li, Tian-Fu; Guan, Yu-Guang; Zhou, Jian; Qi, Xue-Ling; Yang, Yu-Tao; Deng, Jia-Hui; Xu, Zhi-Qing David; Luan, Guo-Ming
2016-04-01
Focal cortical dysplasia type IIB is a commonly encountered subtype of developmental malformation of the cerebral cortex and is often associated with pharmacoresistant epilepsy. In this study, to investigate the molecular etiology of focal cortical dysplasia type IIB, the authors performed micro ribonucleic acid (RNA) microarray analysis on surgical specimens from 5 children (2 female and 3 male, mean age 73.4 months, range 50-112 months) diagnosed with focal cortical dysplasia type IIB and matched normal tissue adjacent to the lesion. In all, 24 micro RNAs were differentially expressed in focal cortical dysplasia type IIB, and the microarray results were validated using quantitative real-time polymerase chain reaction (PCR). Then the putative target genes of the differentially expressed micro RNAs were identified by bioinformatics analysis. Moreover, the biological significance of the target genes was evaluated by investigating the pathways in which the genes were enriched, and the Hippo signaling pathway was proposed to be highly related to the pathogenesis of focal cortical dysplasia type IIB. © The Author(s) 2015.
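Quantitative real-time PCR validation of microarray hits typically relies on the 2^-ddCt relative-expression calculation, which can be sketched as follows. The Ct values are invented for illustration:

```python
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method.

    Each sample's target-gene Ct is first normalised to a reference
    gene, then the case is compared against the control tissue.
    """
    d_case = ct_target_case - ct_ref_case  # dCt in lesion tissue
    d_ctrl = ct_target_ctrl - ct_ref_ctrl  # dCt in adjacent normal tissue
    return 2.0 ** -(d_case - d_ctrl)

# Hypothetical Ct values for one miRNA in dysplastic vs. adjacent normal tissue
fc = fold_change(24.0, 18.0, 27.0, 18.5)
```

Lower Ct means earlier amplification and thus higher abundance, so in this invented example the miRNA comes up roughly 5.7-fold higher in the lesion than in adjacent normal tissue.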
Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R
2013-01-01
Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used.
Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.
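The core question in such a sensitivity analysis, whether the interspecies rank order survives a change of modelling assumption, reduces to a simple comparison. The species names are real crocodylians, but the stress values are invented for illustration:

```python
def rank_order(scores):
    """Species sorted by descending response magnitude (e.g. peak stress)."""
    return [name for name, _ in sorted(scores.items(), key=lambda kv: -kv[1])]

# Hypothetical peak-stress results for three mandible models under two
# different simulation assumptions; all numbers are invented.
by_surface_area   = {"C. porosus": 41.0, "C. johnstoni": 55.0, "G. gangeticus": 63.0}
by_tooth_position = {"C. porosus": 58.0, "C. johnstoni": 49.0, "G. gangeticus": 62.0}

# Does the comparative conclusion depend on the assumption?
same_pattern = rank_order(by_surface_area) == rank_order(by_tooth_position)
```

In this invented example the rank order flips between the two assumption sets, which is exactly the kind of assumption-dependence the study warns about.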
NASA Astrophysics Data System (ADS)
Malof, Jordan M.; Reichman, Daniël.; Collins, Leslie M.
2018-04-01
A great deal of research has been focused on the development of computer algorithms for buried threat detection (BTD) in ground penetrating radar (GPR) data. Most recently proposed BTD algorithms are supervised, and therefore they employ machine learning models that infer their parameters using training data. Cross-validation (CV) is a popular method for evaluating the performance of such algorithms, in which the available data is systematically split into N disjoint subsets, and an algorithm is repeatedly trained on N-1 subsets and tested on the excluded subset. There are several common types of CV in BTD, which vary principally upon the spatial criterion used to partition the data: site-based, lane-based, region-based, etc. The performance metrics obtained via CV are often used to suggest the superiority of one model over others; however, most studies utilize just one type of CV, and the impact of this choice is unclear. Here we employ several types of CV to evaluate algorithms from a recent large-scale BTD study. The results indicate that the rank-order of the performance of the algorithms varies substantially depending upon which type of CV is used. For example, the rank-1 algorithm for region-based CV is the lowest ranked algorithm for site-based CV. This suggests that any algorithm results should be interpreted carefully with respect to the type of CV employed. We discuss some potential interpretations of performance, given a particular type of CV.
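The different CV types described here are all instances of leave-one-group-out splitting, differing only in which spatial key defines the groups. A minimal sketch, with toy data whose field names and values are invented:

```python
def group_cv_splits(samples, key):
    """Leave-one-group-out splits, grouping by a spatial key
    ('site', 'lane', ...). Each group is held out once as the test set."""
    groups = sorted({s[key] for s in samples})
    for held_out in groups:
        train = [s for s in samples if s[key] != held_out]
        test = [s for s in samples if s[key] == held_out]
        yield held_out, train, test

# Toy GPR alarms tagged with site and lane (values are invented)
data = [
    {"site": "A", "lane": 1, "label": 1},
    {"site": "A", "lane": 2, "label": 0},
    {"site": "B", "lane": 1, "label": 1},
    {"site": "B", "lane": 3, "label": 0},
]

site_folds = list(group_cv_splits(data, "site"))
lane_folds = list(group_cv_splits(data, "lane"))
```

Even on this toy set the two criteria produce different numbers of folds with different train/test compositions, which is why metrics computed under one CV type need not transfer to another.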
Anxiety measures validated in perinatal populations: a systematic review.
Meades, Rose; Ayers, Susan
2011-09-01
Research and screening of anxiety in the perinatal period is hampered by a lack of psychometric data on self-report anxiety measures used in perinatal populations. This paper aimed to review self-report measures that have been validated with perinatal women. A systematic search was carried out of four electronic databases. Additional papers were obtained through searching identified articles. Thirty studies were identified that reported validation of an anxiety measure with perinatal women. Most commonly validated self-report measures were the General Health Questionnaire (GHQ), State-Trait Anxiety Inventory (STAI), and Hospital Anxiety and Depression Scales (HADS). Of the 30 studies included, 11 used a clinical interview to provide criterion validity. Remaining studies reported one or more other forms of validity (factorial, discriminant, concurrent and predictive) or reliability. The STAI shows criterion, discriminant and predictive validity and may be most useful for research purposes as a specific measure of anxiety. The Kessler 10 (K-10) may be the best short screening measure due to its ability to differentiate anxiety disorders. The Depression Anxiety Stress Scales 21 (DASS-21) measures multiple types of distress, shows appropriate content, and remains to be validated against clinical interview in perinatal populations. Nineteen studies did not report sensitivity or specificity data. The early stages of research into perinatal anxiety, the multitude of measures in use, and methodological differences restrict comparison of measures across studies. There is a need for further validation of self-report measures of anxiety in the perinatal period to enable accurate screening and detection of anxiety symptoms and disorders. Copyright © 2010 Elsevier B.V. All rights reserved.
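The sensitivity and specificity figures this review calls for come from a 2x2 table of screening results against a clinical-interview diagnosis; the calculation is straightforward. The counts below are invented for illustration:

```python
def screening_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity of a screening cut-off against
    a reference-standard (clinical interview) diagnosis."""
    sensitivity = tp / (tp + fn)  # proportion of true cases detected
    specificity = tn / (tn + fp)  # proportion of non-cases correctly ruled out
    return sensitivity, specificity

# Hypothetical 2x2 table: 200 perinatal women screened, 40 with an
# interview-confirmed anxiety disorder
sens, spec = screening_accuracy(tp=32, fp=24, fn=8, tn=136)
```

With these invented counts the screen detects 80% of true cases while correctly ruling out 85% of non-cases; reporting both numbers (plus the cut-off used) is what many of the reviewed studies omitted.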
Validation of ocean color sensors using a profiling hyperspectral radiometer
NASA Astrophysics Data System (ADS)
Ondrusek, M. E.; Stengel, E.; Rella, M. A.; Goode, W.; Ladner, S.; Feinholz, M.
2014-05-01
Validation measurements of satellite ocean color sensors require in situ measurements that are accurate, repeatable and traceable enough to distinguish variability between in situ measurements and variability in the signal being observed on orbit. The utility of using a Satlantic Profiler II equipped with HyperOCR radiometers (Hyperpro) for validating ocean color sensors is tested by assessing the stability of the calibration coefficients and by comparing Hyperpro in situ measurements to other instruments and between different Hyperpros in a variety of water types. Calibration and characterization of the NOAA Satlantic Hyperpro instrument are described, and concurrent measurements of water-leaving radiances conducted during cruises are presented for this profiling instrument and other profiling, above-water and moored instruments. The moored optical instruments are the US operated Marine Optical BuoY (MOBY) and the French operated Boussole Buoy. In addition, Satlantic processing versions are described in terms of accuracy and consistency. A new multi-cast approach is compared to the most commonly used single cast method. Analysis comparisons are conducted in turbid and blue water conditions. Examples of validation matchups with VIIRS ocean color data are presented. With careful data collection and analysis, the Satlantic Hyperpro profiling radiometer has proven to be a reliable and consistent tool for satellite ocean color validation.
A calibration hierarchy for risk models was defined: from utopia to empirical data.
Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W
2016-06-01
Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects should be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive by stimulating the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
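The difference between the weaker and stronger calibration levels can be made concrete with a few lines of code. This is a simplified sketch with an invented toy validation set, not the authors' simulation design:

```python
def mean_calibration(predicted, observed):
    """Mean calibration: average predicted risk vs. overall event rate."""
    return sum(predicted) / len(predicted), sum(observed) / len(observed)

def moderate_calibration(predicted, observed, bins=2):
    """Moderate calibration: event rate within groups of similar predicted
    risk ('R% events among patients with a predicted risk of R%')."""
    pairs = sorted(zip(predicted, observed))
    size = len(pairs) // bins
    out = []
    for i in range(bins):
        chunk = pairs[i * size:(i + 1) * size]
        out.append((sum(p for p, _ in chunk) / len(chunk),   # mean predicted
                    sum(o for _, o in chunk) / len(chunk)))  # event rate
    return out

# Toy validation set: predicted risks and 0/1 outcomes (invented)
pred = [0.1, 0.1, 0.2, 0.2, 0.6, 0.6, 0.7, 0.7]
obs  = [0,   0,   1,   0,   1,   0,   1,   1]
```

Mean calibration compares just two scalars, while moderate calibration compares predicted and observed rates within each risk group; strong calibration would demand agreement for every covariate pattern, which is what the authors argue is unrealistic to verify.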
Multinomial logistic regression in workers' health
NASA Astrophysics Data System (ADS)
Grilo, Luís M.; Grilo, Helena L.; Gonçalves, Sónia P.; Junça, Ana
2017-11-01
In European countries, namely Portugal, it is common to hear people mention that they are exposed to excessive and continuous psychosocial stressors at work. This is increasing across diverse activity sectors, such as the services sector. A representative sample was collected from a Portuguese services organization by applying an internationally validated survey, whose variables were measured in five ordered categories on a Likert-type scale. A multinomial logistic regression model is used to estimate the probability of each category of the dependent variable, general health perception, in which, among other independent variables, burnout appears as statistically significant.
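A multinomial (softmax) logistic regression of the kind used above can be sketched in a few lines. The example below is illustrative only: the data are simulated, "burnout" is an invented standardized predictor, and a three-category outcome stands in for the five-category health-perception variable.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_multinomial(X, y, n_classes, lr=0.1, steps=2000):
    # Plain gradient ascent on the multinomial logistic log-likelihood.
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])   # add intercept column
    W = np.zeros((Xb.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                        # one-hot outcomes
    for _ in range(steps):
        P = softmax(Xb @ W)
        W += lr * Xb.T @ (Y - P) / X.shape[0]
    return W

def predict_proba(W, X):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return softmax(Xb @ W)

# Simulated data: higher burnout pushes the outcome toward category 0
# ("poor" health perception), category 2 is "good".
rng = np.random.default_rng(3)
n = 600
burnout = rng.normal(0, 1, n)
true_logits = np.column_stack([1.5 * burnout, np.zeros(n), -1.5 * burnout])
y = np.array([rng.choice(3, p=row) for row in softmax(true_logits)])

W = fit_multinomial(burnout[:, None], y, n_classes=3)
probs = predict_proba(W, burnout[:, None])
accuracy = (probs.argmax(axis=1) == y).mean()
```

The fitted coefficient for burnout is larger for the "poor" category than for the "good" one, mirroring the kind of statistically significant effect the abstract reports.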
Introduction of Total Variation Regularization into Filtered Backprojection Algorithm
NASA Astrophysics Data System (ADS)
Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
In this paper we extend the state-of-the-art filtered backprojection (FBP) method with the concept of Total Variation (TV) regularization. We compare the performance of the new algorithm with the most common form of regularization in FBP image reconstruction, apodizing functions. The methods are validated in terms of the cross-correlation coefficient between the reconstructed and true images of the radioactive tracer distribution, using a standard Derenzo-type phantom. We demonstrate that the proposed approach yields higher cross-correlation values than the standard FBP method.
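The abstract applies TV regularization inside FBP reconstruction; the core idea of the TV penalty can be illustrated on a much simpler 1-D problem. The sketch below is our own toy example, not the authors' algorithm: it denoises a piecewise-constant profile (a stand-in for a phantom cross-section) by gradient descent on a smoothed TV objective, with the smoothing constant `eps` and step size chosen arbitrarily.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, lr=0.02, steps=2000, eps=1e-3):
    # Gradient descent on 0.5*||x - y||^2 + lam * TV(x), where the
    # non-smooth |d| in TV is replaced by sqrt(d^2 + eps) so that the
    # objective is differentiable everywhere.
    x = y.copy()
    for _ in range(steps):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)   # derivative of the smoothed |d|
        g_tv = np.zeros_like(x)
        g_tv[:-1] -= w                 # chain rule through d_i = x[i+1] - x[i]
        g_tv[1:] += w
        x -= lr * ((x - y) + lam * g_tv)
    return x

# A piecewise-constant "phantom" profile with additive Gaussian noise.
rng = np.random.default_rng(4)
truth = np.concatenate([np.zeros(50), np.ones(50)])
noisy = truth + rng.normal(0, 0.3, truth.size)
denoised = tv_denoise_1d(noisy)
```

The TV term suppresses noise within each flat segment while largely preserving the edge, which is why it is attractive as a regularizer for tomographic reconstruction compared with simple apodization.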
Information bias in health research: definition, pitfalls, and adjustment methods
Althubaiti, Alaa
2016-01-01
As with other fields, medical sciences are subject to different sources of bias. While understanding sources of bias is a key element for drawing valid conclusions, bias in health research continues to be a very sensitive issue that can affect the focus and outcome of investigations. Information bias, otherwise known as misclassification, is one of the most common sources of bias affecting the validity of health research. It originates from the approach used to obtain or confirm study measurements. This paper seeks to raise awareness of information bias in observational and experimental research study designs, as well as to enrich discussions concerning bias problems. Specifying the types of bias can be essential to limiting their effects, and the use of adjustment methods may serve to improve clinical evaluation and health care practice. PMID:27217764
Martinez-Millana, A; Fernandez-Llatas, C; Sacchi, L; Segagni, D; Guillen, S; Bellazzi, R; Traver, V
2015-08-01
The application of statistics and mathematics to large amounts of data is providing healthcare systems with new tools for screening and managing multiple diseases. Nonetheless, these tools have many technical and clinical limitations, as they are based on datasets with specific characteristics. This proposition paper describes a novel architecture focused on providing a validation framework for discrimination and prediction models in the screening of Type 2 diabetes. To this end, the architecture has been designed to gather different data sources under a common data structure and, furthermore, to be controlled by a centralized component (Orchestrator) in charge of directing the interaction flows among data sources, models, and graphical user interfaces. This innovative approach aims to overcome the data dependency of the models by providing a validation framework for the models as they are used within clinical settings.
Peters, Tansy; Bertrand, Sophie; Björkman, Jonas T; Brandal, Lin T; Brown, Derek J; Erdősi, Tímea; Heck, Max; Ibrahem, Salha; Johansson, Karin; Kornschober, Christian; Kotila, Saara M; Le Hello, Simon; Lienemann, Taru; Mattheus, Wesley; Nielsen, Eva Møller; Ragimbeau, Catherine; Rumore, Jillian; Sabol, Ashley; Torpdahl, Mia; Trees, Eija; Tuohy, Alma; de Pinna, Elizabeth
2017-03-02
Multilocus variable-number tandem repeat analysis (MLVA) is a rapid and reproducible typing method that is an important tool for investigation, as well as detection, of national and multinational outbreaks of a range of food-borne pathogens. Salmonella enterica serovar Enteritidis is the most common Salmonella serovar associated with human salmonellosis in the European Union/European Economic Area and North America. Fourteen laboratories from 13 countries in Europe and North America participated in a validation study for MLVA of S. Enteritidis targeting five loci. Following normalisation of fragment sizes using a set of reference strains, a blinded set of 24 strains with known allele sizes was analysed by each participant. The S. Enteritidis 5-loci MLVA protocol was shown to produce internationally comparable results as more than 90% of the participants reported less than 5% discrepant MLVA profiles. All 14 participating laboratories performed well, even those where experience with this typing method was limited. The raw fragment length data were consistent throughout, and the inter-laboratory validation helped to standardise the conversion of raw data to repeat numbers with at least two countries updating their internal procedures. However, differences in assigned MLVA profiles remain between well-established protocols and should be taken into account when exchanging data. This article is copyright of The Authors, 2017.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mbah, Chamberlain; Thierens, Hubert
Purpose: To identify the main causes underlying the failure of prediction models for radiation therapy toxicity to replicate. Methods and Materials: Data were used from two German cohorts of breast cancer patients with similar characteristics and radiation therapy treatments, Individual Radiation Sensitivity (ISE) (n=418) and Mammary Carcinoma Risk Factor Investigation (MARIE) (n=409). The toxicity endpoint chosen was telangiectasia. The LASSO (least absolute shrinkage and selection operator) logistic regression method was used to build a predictive model for a dichotomized endpoint (Radiation Therapy Oncology Group/European Organization for the Research and Treatment of Cancer score 0, 1, or ≥2). Internal areas under the receiver operating characteristic curve (inAUCs) were calculated by a naïve approach whereby the training data (ISE) were also used for calculating the AUC. Cross-validation was also applied to calculate the AUC within the same cohort, a second type of inAUC. Internal AUCs from cross-validation were calculated within ISE and MARIE separately. Models trained on one dataset (ISE) were applied to a test dataset (MARIE) and AUCs calculated (exAUCs). Results: Internal AUCs from the naïve approach were generally larger than inAUCs from cross-validation, owing to overfitting of the training data. Internal AUCs from cross-validation were also generally larger than the exAUCs, reflecting heterogeneity in the predictors between cohorts. The best models with the largest inAUCs from cross-validation within both cohorts had a number of common predictors: hypertension, normalized total boost, and presence of estrogen receptors. Surprisingly, the effect (coefficient in the prediction model) of hypertension on telangiectasia incidence was positive in ISE and negative in MARIE. Other predictors were also not common between the 2 cohorts, illustrating that overcoming overfitting does not completely solve the problem of replication failure of prediction models.
Conclusions: Overfitting and cohort heterogeneity are the 2 main causes of replication failure of prediction models across cohorts. Cross-validation and similar techniques (eg, bootstrapping) cope with overfitting, but the development of validated predictive models for radiation therapy toxicity requires strategies that deal with cohort heterogeneity.
Validation of scores of use of inhalation devices: valuation of errors
Zambelli-Simões, Letícia; Martins, Maria Cleusa; Possari, Juliana Carneiro da Cunha; Carvalho, Greice Borges; Coelho, Ana Carla Carvalho; Cipriano, Sonia Lucena; de Carvalho-Pinto, Regina Maria; Cukier, Alberto; Stelmach, Rafael
2015-01-01
Abstract Objective: To validate two scores quantifying the ability of patients to use metered dose inhalers (MDIs) or dry powder inhalers (DPIs); to identify the most common errors made during their use; and to identify the patients in need of an educational program for the use of these devices. Methods: This study was conducted in three phases: validation of the reliability of the inhaler technique scores; validation of the contents of the two scores using a convenience sample; and testing for criterion validation and discriminant validation of these instruments in patients who met the inclusion criteria. Results: The convenience sample comprised 16 patients. Interobserver disagreement was found in 19% and 25% of the DPI and MDI scores, respectively. After expert analysis on the subject, the scores were modified and were applied in 72 patients. The most relevant difficulty encountered during the use of both types of devices was the maintenance of total lung capacity after a deep inhalation. The degree of correlation of the scores by observer was 0.97 (p < 0.0001). There was good interobserver agreement in the classification of patients as able/not able to use a DPI (50%/50% and 52%/58%; p < 0.01) and an MDI (49%/51% and 54%/46%; p < 0.05). Conclusions: The validated scores allow the identification and correction of inhaler technique errors during consultations and, as a result, improvement in the management of inhalation devices. PMID:26398751
Targeting the NFκB signaling pathways for breast cancer prevention and therapy.
Wang, Wei; Nag, Subhasree A; Zhang, Ruiwen
2015-01-01
The activation of nuclear factor-kappaB (NFκB), a proinflammatory transcription factor, is a commonly observed phenomenon in breast cancer. It facilitates the development of a hormone-independent, invasive, high-grade, and late-stage tumor phenotype. Moreover, commonly used cancer chemotherapy and radiotherapy approaches activate NFκB, leading to the development of invasive breast cancers that show resistance to chemotherapy, radiotherapy, and endocrine therapy. Inhibition of NFκB increases the sensitivity of cancer cells to the apoptotic effects of chemotherapeutic agents and radiation, and restores hormone sensitivity, which correlates with increased disease-free survival in patients with breast cancer. In this review article, we focus on the role of the NFκB signaling pathways in the development and progression of breast cancer and the validity of NFκB as a potential target for breast cancer prevention and therapy. We also discuss recent findings that NFκB may have tumor-suppressing activity in certain cancer types. Finally, this review covers the state-of-the-art development of NFκB inhibitors for cancer therapy and prevention, the challenges in target validation, and the pharmacology and toxicology evaluations of these agents from the bench to the bedside.
Farmer, Cristan A; Aman, Michael G
2010-01-01
Although often lacking "malice", aggression is fairly common in children with intellectual or developmental disability (I/DD). Despite this, there are no scales available that are appropriate for an in-depth analysis of aggressive behavior in this population. Such scales are needed for the study of aggressive behavior, which is a common target symptom in clinical trials. We assessed the reliability and validity of the Children's Scale of Hostility and Aggression: Reactive/Proactive (C-SHARP), a new aggression scale created for children with I/DD. Data are presented from a survey of 365 children with I/DD aged 3-21 years. Interrater reliability was very high for the Problem Scale, which characterizes type of aggression. Reliability was lower but largely acceptable for the Provocation Scale, which assesses motivation. Validity of the Problem Scale was supported by expected differences in children with autism, Down syndrome, comorbid disruptive behavior disorders (DBDs) and ADHD. The Provocation Scale, which categorizes behavior as proactive or reactive, showed expected differences in children with DBD, but was less effective in those with ADHD. The C-SHARP appears to have fundamentally sound psychometric characteristics, although more research is needed.
Statistical analysis of target acquisition sensor modeling experiments
NASA Astrophysics Data System (ADS)
Deaver, Dawne M.; Moyer, Steve
2015-05-01
The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners which recommend the number and types of test samples required to yield a statistically significant result.
Maggi, Stefania; Noale, Marianna; Zambon, Alberto; Limongi, Federica; Romanato, Giovanna; Crepaldi, Gaetano
2008-04-01
The metabolic syndrome (MetS) is represented by the co-occurrence of multiple metabolic and physiologic risk factors for both type 2 diabetes mellitus and atherosclerotic cardiovascular diseases. In spite of its high frequency and association with morbidity and mortality in the adult population, very little is known about its magnitude in the elderly and about the validity of the commonly used diagnostic criteria. The objective of this paper is to assess the prevalence rate of MetS and the validity of the Adult Treatment Panel III (ATP III) diagnostic criteria in an elderly Caucasian cohort, considering data from the Italian Longitudinal Study on Aging (ILSA), a population-based study with a sample of 5632 individuals aged 65-84 years at baseline (1992). Logistic regression models and ROC curves were used to test the validity of the proposed cut-off levels. The prevalence of MetS was 31.5% in men and 59.8% in women. The cut-off levels suggested for both men and women by the ATP III panel indicated a significant association with the MetS for all components. However, the ROC analysis would suggest lower levels for glycaemia (106 mg/dl) in men, and higher levels for blood pressure in both men and women (145/95 and 135/90, respectively). In conclusion, MetS is very common in aged Caucasians, and the diagnostic criteria proposed by the ATP III panel seem appropriate in older individuals. Small adjustments in the cut-off levels could be suggested for glycaemia (men) and blood pressure (men and women).
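A cut-off analysis of the kind reported above (ROC-based selection of a diagnostic threshold) can be sketched as follows. The numbers are invented for illustration (normal glycaemia distributions, not ILSA data), and Youden's J is just one common criterion for picking a point on the ROC curve.

```python
import numpy as np

def youden_cutoff(values, labels):
    # Scan every observed value as a candidate cut-off and keep the one
    # maximizing Youden's J = sensitivity + specificity - 1.
    best_cut, best_j = None, -1.0
    for c in np.unique(values):
        pred = values >= c
        sens = pred[labels == 1].mean()      # true-positive rate at this cut
        spec = (~pred)[labels == 0].mean()   # true-negative rate at this cut
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = c, j
    return best_cut, best_j

# Invented glycaemia values (mg/dl): cases shifted above controls.
rng = np.random.default_rng(1)
controls = rng.normal(95, 10, 1000)
cases = rng.normal(115, 10, 1000)
values = np.concatenate([controls, cases])
labels = np.concatenate([np.zeros(1000, int), np.ones(1000, int)])

cut, j = youden_cutoff(values, labels)
```

With these assumed distributions the selected cut-off lands near the midpoint of the two means, which is the same logic by which ROC analysis can suggest shifting a panel-recommended threshold up or down.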
Collins, Lisa M.; Part, Chérie E.
2013-01-01
Simple Summary In this review paper we discuss the different modeling techniques that have been used in animal welfare research to date. We look at what questions they have been used to answer, the advantages and pitfalls of the methods, and how future research can best use these approaches to answer some of the most important upcoming questions in farm animal welfare. Abstract The use of models in the life sciences has greatly expanded in scope and advanced in technique in recent decades. However, the range, type and complexity of models used in farm animal welfare is comparatively poor, despite the great scope for use of modeling in this field of research. In this paper, we review the different modeling approaches used in farm animal welfare science to date, discussing the types of questions they have been used to answer, the merits and problems associated with the method, and possible future applications of each technique. We find that the most frequently published types of model used in farm animal welfare are conceptual and assessment models; two types of model that are frequently (though not exclusively) based on expert opinion. Simulation, optimization, scenario, and systems modeling approaches are rarer in animal welfare, despite being commonly used in other related fields. Finally, common issues such as a lack of quantitative data to parameterize models, and model selection and validation are discussed throughout the review, with possible solutions and alternative approaches suggested. PMID:26487411
Amano, Nobuko; Nakamura, Tomiyo
2018-02-01
The visual estimation method is commonly used in hospitals and other care facilities to evaluate food intake through estimation of plate waste. In Japan, no previous studies have investigated the validity and reliability of this method under the routine conditions of a hospital setting. The present study aimed to evaluate the validity and reliability of the visual estimation method in long-term inpatients with different levels of eating disability caused by Alzheimer's disease. The patients were provided different therapeutic diets presented in various food types. This study was performed between February and April 2013, and 82 patients with Alzheimer's disease were included. Plate waste was evaluated for the 3 main daily meals for a total of 21 days (7 consecutive days in each of the 3 months), yielding a total of 4851 meals, of which 3984 were included. Plate waste was measured by the nurses through the visual estimation method, and by the hospital's registered dietitians through the actual measurement method. The actual measurement method was first validated to serve as a reference, and the level of agreement between the two methods was then determined. The month, time of day, type of food provided, and patients' physical characteristics were considered for analysis. For the 3984 meals included in the analysis, the level of agreement between the measurement methods was 78.4%. Disagreement consisted of 3.8% underestimation and 17.8% overestimation. Cronbach's α (0.60, P < 0.001) indicated that the reliability of the visual estimation method was within the acceptable range. The visual estimation method was found to be a valid and reliable method for estimating food intake in patients with different levels of eating impairment. The successful implementation and use of the method depends upon adequate training and motivation of the nurses and care staff involved. Copyright © 2017 European Society for Clinical Nutrition and Metabolism. 
Published by Elsevier Ltd. All rights reserved.
Spear, Marcia
2010-01-01
There has been a steady increase in the number of individuals who undergo dermal filler and botulinum toxin Type A injections. The majority of these procedures are performed by nurse providers. The purpose of this study was to collect national data on current practice among nursing providers within the American Society of Plastic Surgical Nurses (ASPSN). The goal was to use the national data to develop a document of the competencies necessary to guide the practice of providers of dermal fillers and botulinum toxin Type A injections. A survey tool was developed, validated for content by expert nursing providers among the ASPSN membership, and disseminated via e-mail to the ASPSN membership. In addition, data from investigator training and mentoring, together with evidence from a review of the literature, were incorporated into the competency document using the Competency Outcomes and Performance Assessment (COPA) model. Common core issues became apparent, including contraindications for the use of botulinum toxin Type A and dermal fillers, postprocedure complications, and strategies for managing complications. The data also revealed that there is no common method by which providers are taught to assess the aesthetic patient, and a lack of collaborative relationships in current practice. Overwhelmingly, the respondents supported the need for defined practice competencies. A competency document to guide the practice of providers of dermal fillers and botulinum toxin Type A injections was developed for the completion of this DNP project.
Hamm, Elisa; Wee, Joy
2017-01-01
Background Comparative effectiveness research on wheelchairs available in low-resource areas is needed to enable effective use of limited funds. Mobility on commonly encountered rolling environments is a key aspect of function. High variation in capacity among wheelchair users can mask changes in mobility because of wheelchair design. A repeated measures protocol in which the participants use one type of wheelchair and then another minimises the impact of individual variation. Objectives The Aspects of Wheelchair Mobility Test (AWMT) was designed to be used in repeated measures studies in low-resource areas. It measures the impact of different wheelchair types on physical performance in commonly encountered rolling environments and provides an opportunity for qualitative and quantitative participant response. This study sought to confirm the ability of the AWMT to discern differences in mobility because of wheelchair design. Method Participants were wheelchair users at a boarding school for students with disabilities in a low-resource area. Each participant completed timed tests on measured tracks on rough and smooth surfaces, in tight spaces and over curbs. Four types of wheelchairs designed for use in low-resource areas were included. Results The protocol demonstrated the ability to discriminate changes in mobility of individuals because of wheelchair type. Conclusion Comparative effectiveness studies with this protocol can enable beneficial change. This is illustrated by design alterations by wheelchair manufacturers in response to results. PMID:28936413
Yorulmaz, Orçun; Gençöz, Tülin; Woody, Sheila
2010-01-01
Recent findings have suggested some potential psychological vulnerability factors for development of obsessive-compulsive (OC) symptoms, including cognitive factors of appraisal and thought control, religiosity, self-esteem and personality characteristics such as neuroticism. Studies demonstrating these associations usually come from Western cultures, but there may be cultural differences relevant to these vulnerability factors and OC symptoms. The present study examined the relationship between putative vulnerability factors and OC symptoms by comparing non-clinical samples from Turkey and Canada, two countries with quite different cultural characteristics. The findings revealed some common correlates such as neuroticism and certain types of metacognition, including appraisals of responsibility/threat estimation and perfectionism/need for certainty, as well as thought-action fusion. However, culture-specific factors were also indicated in the type of thought control participants used. For OC disorder symptoms, Turkish participants were more likely to utilize worry and thought suppression, while Canadian participants tended to use self-punishment more frequently. The association with common factors supports the cross-cultural validity of some factors, whereas unique factors suggest cultural features that may be operative in cognitive processes relevant to OC symptoms.
Ohta, Hidetoshi; Kawashima, Makoto
2014-01-01
A few types of steerable capsule endoscopes have been proposed but disappointingly their systems were not applicable to common endoscopic treatment or pathological diagnosis. This study validates the possibility of treatment and biopsy by using an internet-linked (wireless control via the internet) robotic capsule endoscope (iRoboCap). iRoboCap consisted of three parts: an imaging unit, a movement control unit and a therapeutic tool unit. Two types of iRoboCaps were designed, one was a submarine type (iRoboCap-S) and the other was an amphibious type (iRoboCap-A). They were remotely and wirelessly steered by a portable tablet device using Bluetooth and via the internet. The success rates of biopsy or clipping were evaluated in a phantom. Although the two prototypes have various problems that need improving, we hope that our robotic and wireless innovations have opened the door to new endoscopic procedures and will pioneer various new applications in medicine.
Burton, Charles L; Bonanno, George A
2016-08-01
Flexibility in self-regulatory behaviors has proved to be an important quality for adjusting to stressful life events and requires individuals to have a diverse repertoire of emotion regulation abilities. However, the most commonly used emotion regulation questionnaires assess frequency of behavior rather than ability, with little evidence linking these measures to observable capacity to enact a behavior. The aim of the current investigation was to develop and validate a Flexible Regulation of Emotional Expression (FREE) Scale that measures a person's ability to enhance and suppress displayed emotion across an array of hypothetical contexts. In Studies 1 and 2, a series of confirmatory factor analyses revealed that the FREE Scale consists of 4 first-order factors divided by regulation and emotional valence type that can contribute to 2 higher order factors: expressive enhancement ability and suppression ability. In Study 1, we also compared the FREE Scale to other commonly used emotion regulation measures, which revealed that suppression ability is conceptually distinct from suppression frequency. In Study 3, we compared the FREE Scale with a composite of traditional frequency-based indices of expressive regulation to predict performance in a previously validated emotional modulation paradigm. Participants' enhancement and suppression ability scores on the FREE Scale predicted their corresponding performance on the laboratory task, even when controlling for baseline expressiveness. These studies suggest that the FREE Scale is a valid and flexible measure of expressive regulation ability. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Sarntivijai, Sirarat; Vasant, Drashtti; Jupp, Simon; Saunders, Gary; Bento, A Patrícia; Gonzalez, Daniel; Betts, Joanna; Hasan, Samiul; Koscielny, Gautier; Dunham, Ian; Parkinson, Helen; Malone, James
2016-01-01
The Centre for Therapeutic Target Validation (CTTV - https://www.targetvalidation.org/) was established to generate therapeutic target evidence from genome-scale experiments and analyses. CTTV aims to support the validity of therapeutic targets by integrating existing and newly generated data. Data integration has been achieved in some resources by mapping metadata such as disease and phenotypes to the Experimental Factor Ontology (EFO). Additionally, the relationship between ontology descriptions of rare and common diseases and their phenotypes can offer insights into shared biological mechanisms and potential drug targets. Ontologies are not ideal for representing the "sometimes associated" type of relationship required. This work addresses two challenges: annotation of diverse big data, and representation of complex, sometimes associated relationships between concepts. Semantic mapping uses a combination of custom scripting, our annotation tool 'Zooma', and expert curation. Disease-phenotype associations were generated using literature mining on Europe PubMed Central abstracts and were manually verified by experts for validity. Representation of the disease-phenotype association was achieved by the Ontology of Biomedical AssociatioN (OBAN), a generic association representation model. OBAN represents associations between a subject and an object, i.e., a disease and its associated phenotypes, together with the source of evidence for that association. Indirect disease-to-disease associations are exposed through shared phenotypes. This was applied to the use case of linking rare to common diseases at the CTTV. EFO yields an average of over 80% mapping coverage in all data sources. A 42% precision is obtained from the manual verification of the text-mined disease-phenotype associations. 
This results in 1452 and 2810 disease-phenotype pairs for IBD and autoimmune disease, respectively, and contributes towards 11,338 rare disease associations (merged with existing published work [Am J Hum Genet 97:111-24, 2015]). An OBAN result file is downloadable at http://sourceforge.net/p/efo/code/HEAD/tree/trunk/src/efoassociations/. Twenty common diseases are linked to 85 rare diseases by shared phenotypes. A generalizable OBAN model for association representation is presented in this study. Here we present solutions to large-scale annotation-ontology mapping in the CTTV knowledge base, a process for disease-phenotype mining, and propose a generic association model, 'OBAN', as a means to integrate diseases using shared phenotypes. EFO is released monthly and available for download at http://www.ebi.ac.uk/efo/.
Correcting evaluation bias of relational classifiers with network cross validation
Neville, Jennifer; Gallagher, Brian; Eliassi-Rad, Tina; ...
2011-01-04
Recently, a number of modeling techniques have been developed for data mining and machine learning in relational and network domains where the instances are not independent and identically distributed (i.i.d.). These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, there has been little focus on how these same dependencies affect our ability to draw accurate conclusions about the performance of the models. More specifically, the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it difficult to use these tests to assess the models in an unbiased manner. In this work, we examine the task of within-network classification and the question of whether two algorithms will learn models that will result in significantly different levels of performance. We show that the commonly used form of evaluation (paired t-test on overlapping network samples) can result in an unacceptable level of Type I error. Furthermore, we show that Type I error increases as (1) the correlation among instances increases and (2) the size of the evaluation set increases (i.e., the proportion of labeled nodes in the network decreases). Lastly, we propose a method for network cross-validation that, combined with paired t-tests, produces more acceptable levels of Type I error while still providing reasonable levels of statistical power (i.e., 1 − Type II error).
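The Type I error inflation described above comes from treating correlated performance differences as if they were i.i.d. The toy simulation below is ours, not the authors' network data: clusters stand in for linked instances, and `rho` is an assumed within-cluster correlation among the per-instance differences under a true null of no performance difference.

```python
import numpy as np

def naive_paired_t_reject(diffs, crit=1.96):
    # Paired t-test that (wrongly, here) treats every per-instance
    # difference as i.i.d.; crit approximates the 5% two-sided z cutoff.
    n = diffs.size
    t = diffs.mean() / (diffs.std(ddof=1) / np.sqrt(n))
    return abs(t) > crit

def type1_rate(rho, trials=2000, clusters=20, per_cluster=25, seed=2):
    # Null hypothesis is true (no real difference between classifiers),
    # but instances within a cluster share a random effect, inducing a
    # within-cluster correlation of rho among the differences.
    rng = np.random.default_rng(seed)
    rejects = 0
    for _ in range(trials):
        shared = rng.normal(0.0, np.sqrt(rho), clusters).repeat(per_cluster)
        noise = rng.normal(0.0, np.sqrt(1.0 - rho), clusters * per_cluster)
        rejects += naive_paired_t_reject(shared + noise)
    return rejects / trials

iid_rate = type1_rate(rho=0.0)    # should sit near the nominal 5%
corr_rate = type1_rate(rho=0.3)   # inflated well above the nominal level
```

The rejection rate stays near 5% when the independence assumption holds but climbs sharply once the differences are correlated, mirroring the effect the abstract documents for overlapping network samples.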
Discounts and rebates granted to public payers for medicines in European countries
Vogler, Sabine; Zimmermann, Nina; Habl, Claudia; Piessnegger, Jutta; Bucsics, Anna
2012-01-01
Objective: The objective of this study was to provide an overview of the existence and types of discounts and rebates granted to public payers by the pharmaceutical industry in European countries. Methods: Data were collected via a questionnaire in spring 2011. Officials from public authorities for pharmaceutical pricing and reimbursement represented in the PPRI (Pharmaceutical Pricing and Reimbursement Information) network provided the information and reviewed the compilation. Results: Information is available from 31 European countries. Discounts and rebates granted to public payers by the pharmaceutical industry were reported for 25 European countries. Such discounts exist in both the in- and out-patient sectors in 21 countries, and in the in-patient sector only in four countries. Six countries reported not having any regulations or agreements regarding discounts and rebates granted by industry. The most common discounts and rebates are price reductions and refunds linked to sales volume, but types such as in-kind support, price-volume agreements, and risk-sharing agreements are also in place. A mix of various types of discounts and rebates is common. Many of these arrangements are confidential. Differences among the surveyed countries exist regarding the types, organizational and legal framework, validity and frequency of updates, and the amount of the discounts and rebates granted. Conclusions: In Europe, discounts and rebates on medicines granted by the pharmaceutical industry to public payers are common tools to contain public pharmaceutical expenditure. They appear to be used as a complementary measure when price regulation does not achieve the desired results, and in the few European countries with no or limited price regulation. The confidential character of many of these arrangements impedes transparency and may lead to a distortion of medicine prices. An analysis of the impact of these measures is recommended. PMID:23093898
Development of Metabolic Function Biomarkers in the Common Marmoset, Callithrix jacchus
Ziegler, Toni E.; Colman, Ricki J.; Tardif, Suzette D.; Sosa, Megan E.; Wegner, Fredrick H.; Wittwer, Daniel J.; Shrestha, Hemanta
2013-01-01
Metabolic assessment of a nonhuman primate model of metabolic syndrome and obesity requires biomarkers specific to the species. While the rhesus monkey has a number of specific assays for assessing metabolic syndrome, the marmoset does not. Furthermore, the common marmoset (Callithrix jacchus) has a small blood volume that necessitates using a single blood sample for multiple analyses. The common marmoset holds great potential as an alternative primate model for the study of human disease, but assay methods need to be developed and validated for the biomarkers of metabolic syndrome. Here we report on the adaptation, development and validation of commercially available immunoassays for common marmoset samples in small volumes. We have performed biological validations for insulin, adiponectin, leptin, and ghrelin to demonstrate the use of these biomarkers in examining metabolic syndrome and other related diseases in the common marmoset. PMID:23447060
Armour, John A. L.; Palla, Raquel; Zeeuwen, Patrick L. J. M.; den Heijer, Martin; Schalkwijk, Joost; Hollox, Edward J.
2007-01-01
Recent work has demonstrated an unexpected prevalence of copy number variation in the human genome, and has highlighted the part this variation may play in predisposition to common phenotypes. Some important genes vary in copy number over a wide range (e.g. DEFB4, which commonly varies between two and seven copies), and have posed formidable technical challenges for accurate copy number typing, so that there are no simple, cheap, high-throughput approaches suitable for large-scale screening. We have developed a simple comparative PCR method based on dispersed repeat sequences, using a single pair of precisely designed primers to amplify products simultaneously from both test and reference loci, which are subsequently distinguished and quantified via internal sequence differences. We have validated the method for the measurement of copy number at DEFB4 by comparison of results from >800 DNA samples with copy number measurements by MAPH/REDVR, MLPA and array-CGH. The new Paralogue Ratio Test (PRT) method can require as little as 10 ng genomic DNA, appears to be comparable in accuracy to the other methods, and for the first time provides a rapid, simple and inexpensive method for copy number analysis, suitable for application to typing thousands of samples in large case-control association studies. PMID:17175532
Extrachromosomal oncogene amplification drives tumor evolution and genetic heterogeneity
Turner, Kristen M.; Deshpande, Viraj; Beyter, Doruk; Koga, Tomoyuki; Rusert, Jessica; Lee, Catherine; Li, Bin; Arden, Karen; Ren, Bing; Nathanson, David A.; Kornblum, Harley I.; Taylor, Michael D.; Kaushal, Sharmeela; Cavenee, Webster K.; Wechsler-Reya, Robert; Furnari, Frank B.; Vandenberg, Scott R.; Rao, P. Nagesh; Wahl, Geoffrey M.; Bafna, Vineet; Mischel, Paul S.
2017-01-01
Human cells have twenty-three pairs of chromosomes but in cancer, genes can be amplified in chromosomes or in circular extrachromosomal DNA (ECDNA), whose frequency and functional significance are not understood. We performed whole genome sequencing, structural modeling and cytogenetic analyses of 17 different cancer types, including 2572 metaphases, and developed ECdetect to conduct unbiased integrated ECDNA detection and analysis. ECDNA was found in nearly half of human cancers varying by tumor type, but almost never in normal cells. Driver oncogenes were amplified most commonly on ECDNA, elevating transcript level. Mathematical modeling predicted that ECDNA amplification elevates oncogene copy number and increases intratumoral heterogeneity more effectively than chromosomal amplification, which we validated by quantitative analyses of cancer samples. These results suggest that ECDNA contributes to accelerated evolution in cancer. PMID:28178237
Exploring rationality in schizophrenia
Mortensen, Erik Lykke; Owen, Gareth; Nordgaard, Julie; Jansson, Lennart; Sæbye, Ditte; Flensborg-Madsen, Trine; Parnas, Josef
2015-01-01
Background Empirical studies of rationality (syllogisms) in patients with schizophrenia have obtained different results. One study found that patients reason more logically if the syllogism is presented through an unusual content. Aims To explore syllogism-based rationality in schizophrenia. Method Thirty-eight first-admitted patients with schizophrenia and 38 healthy controls solved 29 syllogisms that varied in presentation content (ordinary v. unusual) and validity (valid v. invalid). Statistical tests were made of unadjusted and adjusted group differences in models adjusting for intelligence and neuropsychological test performance. Results Controls outperformed patients on all syllogism types, but the difference between the two groups was only significant for valid syllogisms presented with unusual content. However, when adjusting for intelligence and neuropsychological test performance, all group differences became non-significant. Conclusions When taking intelligence and neuropsychological performance into account, patients with schizophrenia and controls perform similarly on syllogism tests of rationality. Declaration of interest None. PMID:27703730
2014-01-01
Background To validate physical activity estimates by the SenseWear Pro3 activity monitor compared with indirect calorimetry during simulated free living in patients diagnosed with osteoarthritis of the hip pre or post total hip arthroplasty. Methods Twenty patients diagnosed with hip osteoarthritis (10 pre- and 10 post total hip arthroplasty; 40% female; age: 63.3 ± 9.0; BMI: 23.7 ± 3.7). All patients completed a 2-hour protocol of simulated free living with 8 different typical physical activity types. Energy consumption (kcal/min) was estimated by the SenseWear Pro3 Armband activity monitor and validated against indirect calorimetry (criterion method) by means of a portable unit (Cosmed K4b2). Bias and variance were analyzed using functional ANOVA. Results Mean bias during all activities was 1.5 kcal/min, 95% CI [1.3; 1.8], corresponding to 72% (overestimation). Normal gait speed showed an overestimation of 2.8 kcal/min, 95% CI [2.3; 3.3] (93%), while an underestimation of -1.1 kcal/min, 95% CI [-1.8; -0.3] (-25%) was recorded during stair climbing. Activities dominated by upper body movements showed large overestimation, with 4.37 kcal/min, 95% CI [3.8; 5.1] (170%) recorded during gardening. Both bias and variance appeared to be dependent on activity type. Conclusion The activity monitor generally overestimated the energy consumption during common activities of low to medium intensity in the patient group. The size and direction of the bias were highly dependent on the activity type, which indicates that the activity monitor is of limited value in patients with hip osteoarthritis and that the results do not express the real energy expenditure. PMID:24552503
Automated Cervical Screening and Triage, Based on HPV Testing and Computer-Interpreted Cytology.
Yu, Kai; Hyun, Noorie; Fetterman, Barbara; Lorey, Thomas; Raine-Bennett, Tina R; Zhang, Han; Stamps, Robin E; Poitras, Nancy E; Wheeler, William; Befano, Brian; Gage, Julia C; Castle, Philip E; Wentzensen, Nicolas; Schiffman, Mark
2018-04-11
State-of-the-art cervical cancer prevention includes human papillomavirus (HPV) vaccination among adolescents and screening/treatment of cervical precancer (CIN3/AIS and, less strictly, CIN2) among adults. HPV testing provides sensitive detection of precancer but, to reduce overtreatment, secondary "triage" is needed to predict women at highest risk. Those with the highest-risk HPV types or abnormal cytology are commonly referred to colposcopy; however, expert cytology services are critically lacking in many regions. To permit completely automatable cervical screening/triage, we designed and validated a novel triage method, a cytologic risk score algorithm based on computer-scanned liquid-based slide features (FocalPoint, BD, Burlington, NC). We compared it with abnormal cytology in predicting precancer among 1839 women testing HPV positive (HC2, Qiagen, Germantown, MD) in 2010 at Kaiser Permanente Northern California (KPNC). Precancer outcomes were ascertained by record linkage. As additional validation, we compared the algorithm prospectively with cytology results among 243 807 women screened at KPNC (2016-2017). All statistical tests were two-sided. Among HPV-positive women, the algorithm matched the triage performance of abnormal cytology. Combined with HPV16/18/45 typing (Onclarity, BD, Sparks, MD), the automatable strategy referred 91.7% of HPV-positive CIN3/AIS cases to immediate colposcopy while deferring 38.4% of all HPV-positive women to one-year retesting (compared with 89.1% and 37.4%, respectively, for typing and cytology triage). In the 2016-2017 validation, the predicted risk scores strongly correlated with cytology (P < .001). High-quality cervical screening and triage performance is achievable using this completely automated approach. Automated technology could permit extension of high-quality cervical screening/triage coverage to currently underserved regions.
Craddick, Karen; Eccles, Dayl; Kwasnik, Abigail; O’Sullivan, Teresa A.
2015-01-01
Objective. To qualitatively analyze free-text responses gathered as part of a previously published survey in order to systematically identify common concerns facing pharmacy experiential education (EE) programs. Methods. In 2011, EE directors at all 118 accredited pharmacy schools in the US were asked in a survey to describe the most pressing issues facing their programs. Investigators performed qualitative, thematic analysis of responses and compared results against demographic data (institution type, class size, number of practice sites, number and type of EE faculty member/staff). Expert and novice investigators identified common themes via an iterative process. To check validity, additional expert and novice reviewers independently coded responses. The Cohen kappa coefficient was calculated and showed good agreement between investigators and reviewers. Results. Seventy-eight responses were received (66% response rate) representing 75% of publicly funded institutions and 71% of schools with class sizes 51-150. Themes identified as common concerns were site capacity, workload/financial support, quality assurance, preceptor development, preceptor stipends, assessment, onboarding, and support/recognition from administration. Good agreement (mean percent agreement 93%, κ range=0.59-0.92) was found between investigators and reviewers. Conclusion. Site capacity for student placements continues to be the foremost concern for many experiential education programs. New concerns about preceptor development and procedures for placing and orienting students at individual practice sites (ie, “onboarding”) have emerged and must be addressed as new accreditation standards are implemented. PMID:25741022
Kenow, Kevin P.; Ge, Zhongfu; Fara, Luke J.; Houdek, Steven C.; Lubinski, Brian R.
2016-01-01
Avian botulism type E is responsible for extensive waterbird mortality on the Great Lakes, yet the actual site of toxin exposure remains unclear. Beached carcasses are often used to describe the spatial aspects of botulism mortality outbreaks, but lack specificity of offshore toxin source locations. We detail methodology for developing a neural network model used for predicting waterbird carcass motions in response to wind, wave, and current forcing, in lieu of a complex analytical relationship. This empirically trained model uses current velocity, wind velocity, significant wave height, and wave peak period in Lake Michigan simulated by the Great Lakes Coastal Forecasting System. A detailed procedure is further developed to use the model for back-tracing waterbird carcasses found on beaches in various parts of Lake Michigan, which was validated using drift data for radiomarked common loon (Gavia immer) carcasses deployed at a variety of locations in northern Lake Michigan during September and October of 2013. The back-tracing model was further used on 22 non-radiomarked common loon carcasses found along the shoreline of northern Lake Michigan in October and November of 2012. The model-estimated origins of those cases pointed to some common source locations offshore that coincide with concentrations of common loons observed during aerial surveys. The neural network source tracking model provides a promising approach for identifying locations of botulinum neurotoxin type E intoxication and, in turn, contributes to developing an understanding of the dynamics of toxin production and possible trophic transfer pathways.
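The study above replaces a complex analytical drift relationship with a trained neural network; the backward time-stepping that any such drift model feeds can be illustrated with a deliberately simple hand-rolled rule (a steady current plus a fixed wind-leeway fraction). This is a hedged sketch, not the paper's model: the function names, the leeway value, and the toy forcing are all hypothetical.

```python
import numpy as np

def back_trace(pos, times, velocity_fn, leeway=0.03):
    """Step a beach-recovery position backwards in time under a simple
    drift rule: surface current plus a wind-leeway fraction.
    (Hypothetical stand-in for the paper's neural-network drift model.)"""
    track = [np.asarray(pos, dtype=float)]
    for t0, t1 in zip(times[:-1], times[1:]):
        dt = t1 - t0
        current, wind = velocity_fn(track[-1], t0)
        # Subtract the drift increment to move backwards in time.
        track.append(track[-1] - dt * (np.asarray(current) + leeway * np.asarray(wind)))
    return np.array(track)

# Toy forcing: steady 1 m/s eastward current, no wind (positions in m, times in s)
steady = lambda p, t: (np.array([1.0, 0.0]), np.array([0.0, 0.0]))
track = back_trace([10.0, 0.0], [0.0, 1.0, 2.0], steady)
```

With the steady eastward current, the carcass found at x = 10 m back-traces to x = 8 m two time units earlier; the real model substitutes simulated Lake Michigan currents, winds, and wave parameters at each step.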
Chubak, Jessica; Boudreau, Denise M; Wirtz, Heidi S; McKnight, Barbara; Weiss, Noel S
2013-10-02
Studies of the effects of exposures after cancer diagnosis on cancer recurrence and survival can provide important information to the growing group of cancer survivors. Observational studies that address this issue generally fall into one of two categories: 1) those using health plan automated data that contain "continuous" information on exposures, such as studies that use pharmacy records; and 2) survey or interview studies that collect information directly from patients once or periodically postdiagnosis. Reverse causation, confounding, selection bias, and information bias are common in observational studies of cancer outcomes in relation to exposures after cancer diagnosis. We describe these biases, focusing on sources of bias specific to these types of studies, and we discuss approaches for reducing them. Attention to known challenges in epidemiologic research is critical for the validity of studies of postdiagnosis exposures and cancer outcomes. PMID:23940288
Numerical investigation on effect of blade shape for stream water wheel performance.
NASA Astrophysics Data System (ADS)
Yah, N. F.; Oumer, A. N.; Aziz, A. A.; Sahat, I. M.
2018-04-01
Stream water wheels are one of the oldest and most commonly used types of wheels for the production of energy. Moreover, they are economical, efficient and sustainable. However, few research works on them are available in the open literature. This paper aims to develop a numerical model for investigating the effect of blade shape on the performance of a stream water wheel. The numerical model was simulated using the Computational Fluid Dynamics (CFD) method, and the developed model was validated by comparing the simulation results with experimental data obtained from the literature. The performance of straight, curved type 1 and curved type 2 blades was observed, and the power generated by each blade design was identified. The inlet velocity was set to 0.3 m/s with a static pressure outlet. The obtained results indicate that the highest power was generated by curved type 2 compared to the straight blade and curved type 1. From the CFD results, curved type 1 generated 0.073 W while curved type 2 generated 0.064 W. The results obtained were consistent with the experimental results; hence, the numerical model can be used as a guide to numerically predict water wheel performance.
Results of an Internet survey determining the most frequently used ankle scores by AOFAS members.
Lau, Johnny T C; Mahomed, Nizar M; Schon, Lew C
2005-06-01
With technological advances in ankle arthroplasty, there has been parallel development in the outcome instruments used to assess the results of surgery. The literature recommends the use of valid, reliable, and responsive ankle scores, but the ankle scores commonly used in clinical practice remain undefined. An internet survey of members of the American Orthopaedic Foot and Ankle Society (AOFAS) was conducted to determine which three ankle scores they perceived as most commonly used in the literature, which ones they believe are validated, which ones they prefer, and which they use in practice. According to respondents, the three most commonly used scores were the AOFAS Ankle score, the Foot Function Index (FFI), and the Musculoskeletal Outcomes Data Evaluation and Management System (MODEMS). The respondents believed that the AOFAS Ankle score, FFI, and MODEMS were validated. The FFI and MODEMS are validated, but the AOFAS ankle score is not validated. Most respondents preferred using the AOFAS Ankle score. The use of the empirical AOFAS Ankle score continues among AOFAS members.
Siemann, Julia; Herrmann, Manfred; Galashan, Daniela
2018-01-25
The present study examined whether feature-based cueing affects early or late stages of flanker conflict processing using EEG and fMRI. Feature cues either directed participants' attention to the upcoming colour of the target or were neutral. Validity-specific modulations during interference processing were investigated using the N200 event-related potential (ERP) component and BOLD signal differences. Additionally, both data sets were integrated using an fMRI-constrained source analysis. Finally, the results were compared with a previous study in which spatial instead of feature-based cueing was applied to an otherwise identical flanker task. Feature-based and spatial attention recruited a common fronto-parietal network during conflict processing. Irrespective of attention type (feature-based; spatial), this network responded to focussed attention (valid cueing) as well as context updating (invalid cueing), hinting at domain-general mechanisms. However, spatially and non-spatially directed attention also demonstrated domain-specific activation patterns for conflict processing that were observable in distinct EEG and fMRI data patterns as well as in the respective source analyses. Conflict-specific activity in visual brain regions was comparable between both attention types. We assume that the distinction between spatially and non-spatially directed attention types primarily applies to temporal differences (domain-specific dynamics) between signals originating in the same brain regions (domain-general localization).
MISR Global Aerosol Product Assessment by Comparison with AERONET
NASA Technical Reports Server (NTRS)
Kahn, Ralph A.; Gaitley, Barbara J.; Garay, Michael J.; Diner, David J.; Eck, Thomas F.; Smirnov, Alexander; Holben, Brent N.
2010-01-01
A statistical approach is used to assess the quality of the MISR Version 22 (V22) aerosol products. Aerosol Optical Depth (AOD) retrieval results are improved relative to the early post-launch values reported by Kahn et al. [2005a], varying with particle type category. Overall, about 70% to 75% of MISR AOD retrievals fall within 0.05 or 20% AOD of the paired validation data, and about 50% to 55% are within 0.03 or 10% AOD, except at sites where dust, or mixed dust and smoke, are commonly found. Retrieved particle microphysical properties amount to categorical values, such as three groupings in size: "small," "medium," and "large." For particle size, ground-based AERONET sun photometer Angstrom Exponents are used to assess statistically the corresponding MISR values, which are interpreted in terms of retrieved size categories. Coincident Single-Scattering Albedo (SSA) and fraction AOD spherical data are too limited for statistical validation. V22 distinguishes two or three size bins, depending on aerosol type, and about two bins in SSA (absorbing vs. non-absorbing), as well as spherical vs. non-spherical particles, under good retrieval conditions. Particle type sensitivity varies considerably with conditions, and is diminished for mid-visible AOD below about 0.15 or 0.2. Based on these results, specific algorithm upgrades are proposed, and are being investigated by the MISR team for possible implementation in future versions of the product.
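The envelope criterion quoted in the abstract (a retrieval "falls within 0.05 or 20% AOD" of the paired validation value) is straightforward to compute; a minimal sketch with fabricated AOD pairs, not MISR/AERONET data:

```python
import numpy as np

def fraction_within_envelope(retrieved, reference, abs_tol, rel_tol):
    """Fraction of retrievals within max(abs_tol, rel_tol * reference)
    of the reference value (the greater of an absolute and relative bound)."""
    retrieved = np.asarray(retrieved, dtype=float)
    reference = np.asarray(reference, dtype=float)
    envelope = np.maximum(abs_tol, rel_tol * reference)
    return float(np.mean(np.abs(retrieved - reference) <= envelope))

# Illustrative (fabricated) AOD pairs
ref = np.array([0.10, 0.20, 0.50, 0.05, 0.30])
ret = np.array([0.13, 0.22, 0.45, 0.12, 0.31])
frac = fraction_within_envelope(ret, ref, abs_tol=0.05, rel_tol=0.20)
```

Here 4 of the 5 fabricated pairs fall inside the envelope, so `frac` is 0.8; the paper reports roughly 0.70 to 0.75 for the real retrieval set under the (0.05, 20%) criterion.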
Operationalizing and Validating Disciplinary Literacy in Secondary Education
ERIC Educational Resources Information Center
Spires, Hiller A.; Kerkhoff, Shea N.; Graham, Abbey C. K.; Thompson, Isaac; Lee, John K.
2018-01-01
The goal of this study was to define the construct and establish the validity of disciplinary literacy, which has recently gained attention from the implementation of the Common Core State Standards (National Governors Association Center for Best Practices & Council of Chief State School Officers in Common Core State Standards for English…
FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.
Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver
2014-06-14
Advances in sequencing technologies challenge the efficient import and validation of FASTA formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, light-weight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software that needs to quickly and accurately parse large amounts of sequence data. For end-users, FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualify it for large data sets such as those commonly produced by massively parallel (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
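FastaValidator itself is a Java library; as a language-neutral illustration of what "validating FASTA" minimally involves, here is a hedged Python sketch. The alphabet, the error messages, and the rules enforced are our own choices for the example, not the library's API:

```python
def validate_fasta(text, alphabet=set("ACGTUNacgtun")):
    """Minimal FASTA check: the file starts with a '>' header, every record
    has at least one sequence line, and sequence lines use only the given
    alphabet. Returns (ok, message)."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if not lines or not lines[0].startswith(">"):
        return False, "file must begin with a '>' header line"
    have_seq = True  # sentinel so the first header is accepted
    for ln in lines:
        if ln.startswith(">"):
            if not have_seq:
                return False, "header with no preceding sequence"
            have_seq = False
        elif set(ln) <= alphabet:
            have_seq = True
        else:
            return False, f"invalid characters in line: {ln[:20]!r}"
    return (True, "ok") if have_seq else (False, "final record has no sequence")

ok, msg = validate_fasta(">seq1\nACGTACGT\n>seq2\nGGNNA\n")
```

A production validator (like FastaValidator) additionally streams the input rather than holding it in memory, which is what makes high-throughput NGS-scale validation feasible.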
On the stochastic dissemination of faults in an admissible network
NASA Technical Reports Server (NTRS)
Kyrala, A.
1987-01-01
The dynamic distribution of faults in a general type network is discussed. The starting point is a uniquely branched network in which each pair of nodes is connected by a single branch. Mathematical expressions for the uniquely branched network transition matrix are derived to show that sufficient stationarity exists to ensure the validity of the use of the Markov Chain model to analyze networks. In addition the conditions for the use of Semi-Markov models are discussed. General mathematical expressions are derived in an examination of branch redundancy techniques commonly used to increase reliability.
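The Markov chain machinery the report relies on is standard: given sufficient stationarity, the fault-state distribution after n steps is the initial distribution times the n-th power of the transition matrix. A minimal sketch, with a hypothetical 3-state per-branch fault model (the states and probabilities are illustrative, not taken from the report):

```python
import numpy as np

# Hypothetical per-branch fault states: 0 = healthy, 1 = degraded, 2 = failed.
# Each row sums to 1; state 2 is absorbing.
P = np.array([
    [0.95, 0.04, 0.01],
    [0.00, 0.80, 0.20],
    [0.00, 0.00, 1.00],
])

def state_distribution(p0, P, n):
    """Distribution after n steps of a homogeneous (stationary) Markov chain."""
    return p0 @ np.linalg.matrix_power(P, n)

p0 = np.array([1.0, 0.0, 0.0])   # branch starts healthy
p10 = state_distribution(p0, P, 10)
```

The probability mass in the absorbing "failed" state grows monotonically with n, which is the kind of quantity the branch-redundancy techniques discussed in the report are designed to suppress.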
Nutakki, Kavitha; Hingtgen, Cynthia M; Monahan, Patrick; Varni, James W; Swigonski, Nancy L
2013-02-21
Neurofibromatosis type 1 (NF1) is a common autosomal dominant genetic disorder with significant impact on health-related quality of life (HRQOL). Research in understanding the pathogenetic mechanisms of neurofibroma development has led to the use of new clinical trials for the treatment of NF1. One of the most important outcomes of a trial is improvement in quality of life, however, no condition specific HRQOL instrument for NF1 exists. The objective of this study was to develop an NF1 HRQOL instrument as a module of PedsQL™ and to test for its initial feasibility, internal consistency reliability and validity in adults with NF1. The NF1 specific HRQOL instrument was developed using a standard method of PedsQL™ module development - literature review, focus group/semi-structured interviews, cognitive interviews and experts' review of initial draft, pilot testing and field testing. Field testing involved 134 adults with NF1. Feasibility was measured by the percentage of missing responses, internal consistency reliability was measured with Cronbach's alpha and validity was measured by the known-groups method. Feasibility, measured by the percentage of missing responses was 4.8% for all subscales on the adult version of the NF1-specific instrument. Internal consistency reliability for the Total Score (alpha =0.97) and subscale reliabilities ranging from 0.72 to 0.96 were acceptable for group comparisons. The PedsQL™ NF1 module distinguished between NF1 adults with excellent to very good, good, and fair to poor health status. The results demonstrate the initial feasibility, reliability and validity of the PedsQL™ NF1 module in adult patients. The PedsQL™ NF1 Module can be used to understand the multidimensional nature of NF1 on the HRQOL patients with this disorder.
Are the binary typology models of alcoholism valid in polydrug abusers?
Pombo, Samuel; da Costa, Nuno F; Figueira, Maria L
2015-01-01
To evaluate the dichotomy of type I/II and type A/B alcoholism typologies in opiate-dependent patients with a comorbid alcohol dependence problem (ODP-AP). The validity assessment process comprised the information regarding the history of alcohol use (internal validity), cognitive-behavioral variables regarding substance use (external validity), and indicators of treatment during 6-month follow-up (predictive validity). ODP-AP subjects classified as type II/B presented an early and much more severe drinking problem and a worse clinical prognosis when considering opiate treatment variables as compared with ODP-AP subjects defined as type I/A. Furthermore, type II/B patients endorse more general positive beliefs and expectancies related to the effect of alcohol and tend to drink heavily across several intra- and interpersonal situations as compared with type I/A patients. These findings confirm two different forms of alcohol dependence, recognized as a low-severity/vulnerability subgroup and a high-severity/vulnerability subgroup, in an opiate-dependent population with a lifetime diagnosis of alcohol dependence.
Wan, Eric Yuk Fai; Fong, Daniel Yee Tak; Fung, Colman Siu Cheung; Yu, Esther Yee Tak; Chin, Weng Yee; Chan, Anca Ka Chun; Lam, Cindy Lo Kuen
2017-06-01
This study aimed to develop and validate an all-cause mortality risk prediction model for Chinese primary care patients with type 2 diabetes mellitus (T2DM) in Hong Kong. A population-based retrospective cohort study was conducted on 132,462 Chinese patients who had received public primary care services during 2010. Each gender sample was randomly split on a 2:1 basis into derivation and validation cohorts and was followed up for a median period of 5 years. Gender-specific mortality risk prediction models showing the interaction effect between predictors and age were derived using Cox proportional hazards regression with a forward stepwise approach. The developed models were compared with pre-existing models by Harrell's C-statistic and calibration plots using the validation cohort. Common predictors of increased mortality risk in both genders included: age; smoking habit; diabetes duration; use of anti-hypertensive agents, insulin and lipid-lowering drugs; body mass index; hemoglobin A1c; systolic blood pressure (BP); total cholesterol to high-density lipoprotein-cholesterol ratio; urine albumin to creatinine ratio (urine ACR); and estimated glomerular filtration rate (eGFR). The prediction models showed better discrimination, with Harrell's C-statistics of 0.768 (males) and 0.782 (females), and better calibration in the plots than previously established models. Our newly developed gender-specific models provide a more accurate predicted 5-year mortality risk for Chinese diabetic patients than other established models.
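Harrell's C-statistic used above to compare discrimination can be computed directly: among all comparable patient pairs (the one with the earlier event time actually had the event), it is the fraction in which the higher predicted risk failed first. A naive O(n²) sketch on fabricated survival data; real survival packages handle censoring and tied times more carefully:

```python
import numpy as np

def harrells_c(time, event, risk):
    """Naive Harrell's C. A pair (i, j) is comparable if time[i] < time[j]
    and subject i had the event; it is concordant if risk[i] > risk[j].
    Risk-score ties count 0.5."""
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant = comparable = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Fabricated data: risk ordering matches failure ordering exactly, so C = 1.0
t = [2, 4, 6, 8]
e = [1, 1, 1, 0]      # last subject censored
r = [0.9, 0.7, 0.5, 0.1]
c = harrells_c(t, e, r)
```

A C of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the reported 0.768 and 0.782 indicate moderately strong discrimination.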
Instrumental and statistical methods for the comparison of class evidence
NASA Astrophysics Data System (ADS)
Liszewski, Elisa Anne
Trace evidence is a major field within forensic science. Association of trace evidence samples can be problematic due to sample heterogeneity and a lack of quantitative criteria for comparing spectra or chromatograms. The aim of this study is to evaluate different types of instrumentation for their ability to discriminate among samples of various types of trace evidence. Chemometric analysis, including techniques such as Agglomerative Hierarchical Clustering, Principal Components Analysis, and Discriminant Analysis, was employed to evaluate instrumental data. First, automotive clear coats were analyzed by using microspectrophotometry to collect UV absorption data. In total, 71 samples were analyzed with classification accuracy of 91.61%. An external validation was performed, resulting in a prediction accuracy of 81.11%. Next, fiber dyes were analyzed using UV-Visible microspectrophotometry. While several physical characteristics of cotton fiber can be identified and compared, fiber color is considered to be an excellent source of variation, and thus was examined in this study. Twelve dyes were employed, some being visually indistinguishable. Several different analyses and comparisons were done, including an inter-laboratory comparison and external validations. Lastly, common plastic samples and other polymers were analyzed using pyrolysis-gas chromatography/mass spectrometry, and their pyrolysis products were then analyzed using multivariate statistics. The classification accuracy varied dependent upon the number of classes chosen, but the plastics were grouped based on composition. The polymers were used as an external validation and misclassifications occurred with chlorinated samples all being placed into the category containing PVC.
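The Principal Components Analysis step common to these chemometric workflows can be sketched with a plain SVD on fabricated "spectra"; the group sizes, noise level, and offset below are illustrative only, not the study's data:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centred data onto its leading principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt are the PCs
    return Xc @ Vt[:n_components].T

# Two fabricated sample groups differing along one spectral direction
rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 0.1, size=(10, 50))
group_b = rng.normal(0.0, 0.1, size=(10, 50)) + np.linspace(0.0, 1.0, 50)
X = np.vstack([group_a, group_b])
scores = pca_scores(X, n_components=2)

# The first PC should separate the two groups (sign of the PC is arbitrary)
sep = scores[:10, 0].mean() - scores[10:, 0].mean()
```

Clustering or discriminant analysis is then run on these low-dimensional scores rather than the raw spectra, which is what makes quantitative association criteria for class evidence tractable.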
Damond, F; Benard, A; Balotta, Claudia; Böni, Jürg; Cotten, Matthew; Duque, Vitor; Ferns, Bridget; Garson, Jeremy; Gomes, Perpetua; Gonçalves, Fátima; Gottlieb, Geoffrey; Kupfer, Bernd; Ruelle, Jean; Rodes, Berta; Soriano, Vicente; Wainberg, Mark; Taieb, Audrey; Matheron, Sophie; Chene, Genevieve; Brun-Vezinet, Francoise
2011-10-01
Accurate HIV-2 plasma viral load quantification is crucial for adequate HIV-2 patient management and for the proper conduct of clinical trials and international cohort collaborations. This study compared the homogeneity of HIV-2 RNA quantification when using HIV-2 assays from ACHIEV2E study sites and either in-house PCR calibration standards or common viral load standards supplied to all collaborators. Each of the 12 participating laboratories quantified blinded HIV-2 samples, using its own HIV-2 viral load assay and standard as well as centrally validated and distributed common HIV-2 group A and B standards (http://www.hiv.lanl.gov/content/sequence/HelpDocs/subtypes-more.html). Aliquots of HIV-2 group A and B strains, each at 2 theoretical concentrations (2.7 and 3.7 log10 copies/ml), were tested. Intralaboratory, interlaboratory, and overall variances of quantification results obtained with both standards were compared using F tests. For HIV-2 group A quantifications, overall and interlaboratory and/or intralaboratory variances were significantly lower when using the common standard than when using in-house standards at the concentration levels of 2.7 log10 copies/ml and 3.7 log10 copies/ml, respectively. For HIV-2 group B, a high heterogeneity was observed and the variances did not differ according to the type of standard used. In this international collaboration, the use of a common standard improved the homogeneity of HIV-2 group A RNA quantification only. The diversity of HIV-2 group B, particularly in PCR primer-binding regions, may explain the heterogeneity in quantification of this strain. Development of a validated HIV-2 viral load assay that accurately quantifies distinct circulating strains is needed. PMID:21813718
Prevalence of primary headache disorders in Fayoum Governorate, Egypt.
El-Sherbiny, Naglaa A; Masoud, Mohamed; Shalaby, Nevin M; Shehata, Hatem S
2015-01-01
There is an abundance of epidemiological studies of headache in developed and western countries; however, data from developing countries, including Egypt, are still lacking. This study aims to determine the prevalence of primary headache disorders in both urban and rural sectors of Fayoum governorate, Egypt. A total of 2600 subjects were included using multi-stage stratified systematic random sampling, with a response rate of 91.3%. A pre-designed, Arabic-version, interviewer-administered, pilot-tested structured questionnaire was developed according to the International Classification of Headache Disorders, 3rd edition (beta version); the questionnaire was validated and the strength of agreement in headache diagnosis was good. The 1-year headache prevalence was 51.4%, and headache was more prevalent in urban dwellers. The most common primary headache type was episodic tension-type headache (prevalence 24.5%), followed by episodic migraine (prevalence 17.3%); both types peaked in midlife. Headache disorders were more common in females, with the exception of cluster headache, which showed the expected male predominance. The risk of chronic headache increased more than one-and-a-half-fold among participants who were female, married, or highly educated. More than 60% of our participants did not seek medical advice for their headache problem; this percentage was higher in rural areas. Primary headache disorders are common in Egypt; the prevalence rate was comparable with western countries, with the exception of episodic tension-type headache. Headache remains under-estimated and under-recognized in Egypt, and this problem should be targeted by health care providers.
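Prevalence estimates of this kind carry sampling uncertainty; a minimal sketch of a Wilson 95% confidence interval, with counts reconstructed approximately from the reported percentages:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Approximate counts from the abstract: ~2374 respondents (91.3% of 2600),
# ~51.4% reporting headache in the past year.
n = 2374
cases = round(0.514 * n)
lo, hi = wilson_ci(cases, n)
print(f"prevalence = {cases/n:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```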
Clinical assessment of organizational strategy: An examination of healthy adults.
Banerjee, Pia; White, Desirée A
2015-06-01
During the assessment of patients with cognitive difficulties, clinicians often examine strategic processing, particularly the ability to use organization-based strategies to efficiently complete various tasks. Several commonly used neuropsychological tasks are currently thought to provide measures of organizational strategic processing, but empirical evidence for the construct validity of these strategic measures is needed before interpreting them as measuring the same underlying ability. This is particularly important for the assessment of organizational strategic processing because the measures span cognitive domains (e.g., memory strategy, language strategy) as well as types of organization. In the present study, 200 adults were administered cognitive tasks commonly used in clinical practice to assess organizational strategic processing. Factor analysis was used to examine whether these measures of organizational strategic processing, which involved different cognitive domains and types of organization, could be operationalized as measuring a unitary construct. A very good-fitting model of the data demonstrated no significant shared variance among any of the strategic variables from different tasks (root mean square error of approximation < .0001, standardized root-mean-square residual = .045, comparative fit index = 1.000). These findings suggest that organizational strategic processing is highly specific to the demands and goals of individual tasks even when tasks share commonalities such as involving the same cognitive domain. In the design of neuropsychological batteries involving the assessment of organizational strategic processing, it is recommended that various strategic measures across cognitive domains and types of organizational processing are selected as guided by each patient's individual cognitive difficulties.
Validating a large geophysical data set: Experiences with satellite-derived cloud parameters
NASA Technical Reports Server (NTRS)
Kahn, Ralph; Haskins, Robert D.; Knighton, James E.; Pursch, Andrew; Granger-Gallegos, Stephanie
1992-01-01
We are validating the global cloud parameters derived from the satellite-borne HIRS2 and MSU atmospheric sounding instrument measurements, and are using the analysis of these data as one prototype for studying large geophysical data sets in general. The HIRS2/MSU data set contains a total of 40 physical parameters, filling 25 MB/day; raw HIRS2/MSU data are available for a period exceeding 10 years. Validation involves developing a quantitative sense for the physical meaning of the derived parameters over the range of environmental conditions sampled. This is accomplished by comparing the spatial and temporal distributions of the derived quantities with similar measurements made using other techniques, and with model results. The data handling needed for this work is possible only with the help of a suite of interactive graphical and numerical analysis tools. Level 3 (gridded) data is the common form in which large data sets of this type are distributed for scientific analysis. We find that Level 3 data is inadequate for the data comparisons required for validation. Level 2 data (individual measurements in geophysical units) is needed. A sampling problem arises when individual measurements, which are not uniformly distributed in space or time, are used for the comparisons. Standard 'interpolation' methods involve fitting the measurements for each data set to surfaces, which are then compared. We are experimenting with formal criteria for selecting geographical regions, based upon the spatial frequency and variability of measurements, that allow us to quantify the uncertainty due to sampling. As part of this project, we are also dealing with ways to keep track of constraints placed on the output by assumptions made in the computer code. 
The need to work with Level 2 data introduces a number of other data handling issues, such as accessing data files across machine types, meeting large data storage requirements, accessing other validated data sets, processing speed and throughput for interactive graphical work, and problems relating to graphical interfaces.
CT imaging-based determination and classification of anatomic variations of left gastric vein.
Wu, Yongyou; Chen, Guangqiang; Wu, Pengfei; Zhu, Jianbin; Peng, Wei; Xing, Chungen
2017-03-01
Precise determination and classification of left gastric vein (LGV) anatomy are helpful in planning gastric surgery, in particular for resection of gastric cancer. However, the anatomy of the LGV is highly variable, and a systematic classification of its variations has yet to be proposed. We aimed to investigate the anatomical variations in the LGV using CT imaging and to develop a new nomenclature system. We reviewed CT images and tracked the course of the LGV in 825 adults. The frequencies of common and variable LGV anatomical courses were recorded. Anatomic variations of the LGV were classified into different types, mainly based on its course. The inflow sites of the LGV into the portal system were also considered if the common hepatic artery (CHA) or splenic artery (SA) could not be used as a frame of reference due to variations. Detailed anatomy and courses of the LGV were depicted on CT images. Using the CHA and SA as frames of reference, the routes of the LGV were divided into six types (i.e., PreS, RetroS, Mid, PreCH, RetroCH, and Supra). The inflow sites were classified into four types (i.e., PV, SV, PSV, and LPV). The new classification, based mainly on the courses of the LGV, was validated with MDCT in the 805 cases with an identifiable LGV: type I, RetroCH, 49.8% (401/805); type II, PreS, 20.6% (166/805); type III, Mid, 20.0% (161/805); type IV, RetroS, 7.3% (59/805); type V, Supra, 1.5% (12/805); and type VI, PreCH, 0.7% (6/805). Type VII, designated for cases in which the SA and CHA could not be used as frames of reference, was not observed in this series. Detailed depiction of the anatomy and courses of the LGV on CT images allowed us to evaluate and develop a new classification and nomenclature system for the anatomical variations of the LGV.
The second phase of the MicroArray Quality Control (MAQC-II) project evaluated common practices for developing and validating microarray-based models aimed at predicting toxicological and clinical endpoints. Thirty-six teams developed classifiers for 13 endpoints - some easy, som...
Fernandez, Ana; Salvador-Carulla, Luis; Choi, Isabella; Calvo, Rafael; Harvey, Samuel B; Glozier, Nicholas
2018-01-01
Common mental disorders are the most common reason for long-term sickness absence in most developed countries. Prediction algorithms for the onset of common mental disorders may help target indicated work-based prevention interventions. We aimed to develop and validate a risk algorithm to predict the onset of common mental disorders at 12 months in a working population. We conducted a secondary analysis of the Household, Income and Labour Dynamics in Australia Survey, a longitudinal, nationally representative household panel in Australia. Data from the 6189 working participants who did not meet the criteria for a common mental disorder at baseline were non-randomly split into training and validation databases, based on state of residence. Common mental disorders were assessed with the mental component score of the 36-Item Short Form Health Survey questionnaire (score ⩽45). Risk algorithms were constructed following recommendations made by the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement. Different risk factors were identified for women and men in the final risk algorithms. In the training data, the model for women had a C-index of 0.73 and an effect size (Hedges' g) of 0.91. In men, the C-index was 0.76 and the effect size was 1.06. In the validation data, the C-index was 0.66 for women and 0.73 for men, with positive predictive values of 0.28 and 0.26, respectively. Conclusion: It is possible to develop an algorithm with good discrimination for identifying the overall and modifiable risks of onset of common mental disorders among working men. Such models have the potential to change the way that prevention of common mental disorders in the workplace is conducted, but different models may be required for women.
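The C-index reported above is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case; a self-contained sketch with hypothetical risks and outcomes:

```python
from itertools import combinations

def c_index(scores, outcomes):
    """Concordance index for a binary outcome: fraction of case/non-case
    pairs in which the case has the higher predicted risk; score ties
    count as half-concordant."""
    pairs = concordant = 0.0
    for (s_i, y_i), (s_j, y_j) in combinations(zip(scores, outcomes), 2):
        if y_i == y_j:
            continue  # only pairs with different outcomes are informative
        pairs += 1
        case = s_i if y_i == 1 else s_j      # score of the case
        noncase = s_j if y_i == 1 else s_i   # score of the non-case
        if case > noncase:
            concordant += 1
        elif case == noncase:
            concordant += 0.5
    return concordant / pairs

# Hypothetical predicted risks and 12-month onset outcomes.
risk = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
onset = [1, 1, 0, 1, 0, 0, 1, 0]
print(round(c_index(risk, onset), 2))  # → 0.75
```

A C-index of 0.5 corresponds to chance-level discrimination; values around 0.7-0.8, as in the abstract, indicate useful discrimination.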
Bobo, William V; Cooper, William O; Stein, C Michael; Olfson, Mark; Mounsey, Jackie; Daugherty, James; Ray, Wayne A
2012-08-24
We developed and validated an automated database case definition for diabetes in children and youth to facilitate pharmacoepidemiologic investigations of medications and the risk of diabetes. The present study was part of an in-progress retrospective cohort study of antipsychotics and diabetes in Tennessee Medicaid enrollees aged 6-24 years. Diabetes was identified from diabetes-related medical care encounters: hospitalizations, outpatient visits, and filled prescriptions. The definition required either a primary inpatient diagnosis or at least two other encounters of different types, most commonly an outpatient diagnosis with a prescription. Type 1 diabetes was defined by insulin prescriptions with at most one oral hypoglycemic prescription; other cases were considered type 2 diabetes. The definition was validated for cohort members in the 15 county region geographically proximate to the investigators. Medical records were reviewed and adjudicated for cases that met the automated database definition as well as for a sample of persons with other diabetes-related medical care encounters. The study included 64 cases that met the automated database definition. Records were adjudicated for 46 (71.9%), of which 41 (89.1%) met clinical criteria for newly diagnosed diabetes. The positive predictive value for type 1 diabetes was 80.0%. For type 2 and unspecified diabetes combined, the positive predictive value was 83.9%. The estimated sensitivity of the definition, based on adjudication for a sample of 30 cases not meeting the automated database definition, was 64.8%. These results suggest that the automated database case definition for diabetes may be useful for pharmacoepidemiologic studies of medications and diabetes.
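The overall positive predictive value quoted above follows directly from the adjudication counts; a minimal check of that arithmetic:

```python
# Adjudication counts from the abstract.
adjudicated = 46   # records reviewed among the 64 definition-positive cases
confirmed = 41     # met clinical criteria for newly diagnosed diabetes

# Positive predictive value: confirmed true cases / cases flagged and reviewed.
ppv = confirmed / adjudicated
print(f"overall PPV = {ppv:.1%}")  # → 89.1%
```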
Barcak, Daniel; Oros, Mikulas; Hanzelova, Vladimira; Scholz, Tomas
2017-08-16
Tapeworms of the genus Caryophyllaeus Gmelin, 1790 (Caryophyllidea: Caryophyllaeidae), common parasites of cyprinid fishes, are reviewed and taxonomic status of 42 nominal taxa that have been placed in the genus during its long history is clarified. The following seven species occurring in the Palaearctic Region are recognised as valid: C. laticeps (Pallas, 1781), C. auriculatus (Kulakovskaya, 1961), C. balticus (Szidat, 1941) comb. n. (syn. Khawia baltica Szidat, 1941), C. brachycollis Janiszewska, 1953, C. fimbriceps Annenkova-Chlopina, 1919, C. syrdarjensis Skrjabin, 1913, and newly described Caryophyllaeus chondrostomi sp. n. (= C. laticeps morphotype 4 of Bazsalovicsová et al., 2014) from common nase, Chondrostoma nasus (Linnaeus), found in Austria and Slovakia. The new species differs by the paramuscular or cortical position of preovarian vitelline follicles, a large, robust body (up to 64 mm long), conspicuously long vas deferens, flabellate scolex with small wrinkles on the anterior margin, and anteriormost testes located in a relatively short distance from the anterior extremity. Caryophyllaeus kashmirenses Mehra, 1930 and Caryophyllaeus prussicus (Szidat, 1937) comb. n. are considered to be species inquirendae, C. truncatus von Siebold in Baird, 1853 and C. tuba von Siebold in Baird, 1853 are nomina nuda. Data on the morphology, host spectra, distribution and known life-cycles of valid species are provided. Phylogenetic interrelations of four species of the genus including its type species and newly described C. chondrostomi were assessed based on an analysis of sequences of lsrDNA and cox1. A key to identification of all valid species of Caryophyllaeus is also provided.
Khatun, Jainab; Hamlett, Eric; Giddings, Morgan C
2008-03-01
The identification of peptides by tandem mass spectrometry (MS/MS) is a central method of proteomics research, but due to the complexity of MS/MS data and the large databases searched, the accuracy of peptide identification algorithms remains limited. To improve the accuracy of identification we applied a machine-learning approach using a hidden Markov model (HMM) to capture the complex and often subtle links between a peptide sequence and its MS/MS spectrum. Our model, HMM_Score, represents ion types as HMM states and calculates the maximum joint probability for a peptide/spectrum pair using emission probabilities from three factors: the amino acids adjacent to each fragmentation site, the mass dependence of ion types and the intensity dependence of ion types. The Viterbi algorithm is used to calculate the most probable assignment between ion types in a spectrum and a peptide sequence, then a correction factor is added to account for the propensity of the model to favor longer peptides. An expectation value is calculated based on the model score to assess the significance of each peptide/spectrum match. We trained and tested HMM_Score on three data sets generated by two different mass spectrometer types. For a reference data set recently reported in the literature and validated using seven identification algorithms, HMM_Score produced 43% more positive identification results at a 1% false positive rate than the best of two other commonly used algorithms, Mascot and X!Tandem. HMM_Score is a highly accurate platform for peptide identification that works well for a variety of mass spectrometer and biological sample types. The program is freely available on ProteomeCommons via an OpenSource license. See http://bioinfo.unc.edu/downloads/ for the download link.
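The Viterbi step described above can be sketched as a standard log-space dynamic program; the two-state model and probabilities below are illustrative toys, not HMM_Score's actual parameters:

```python
import numpy as np

# Toy 2-state HMM: states stand in for ion types, observations for
# discretised peak features. All probabilities are illustrative only.
states = ["b-ion", "y-ion"]
start = np.log([0.5, 0.5])
trans = np.log([[0.7, 0.3], [0.4, 0.6]])   # state-to-state transitions
emit = np.log([[0.9, 0.1], [0.2, 0.8]])    # P(observation | state)
obs = [0, 1, 1, 0]

def viterbi(obs, start, trans, emit):
    """Most probable state sequence for obs, computed in log space."""
    v = start + emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = v[:, None] + trans          # every state-to-state move
        back.append(scores.argmax(axis=0))   # best predecessor per state
        v = scores.max(axis=0) + emit[:, o]
    path = [int(v.argmax())]
    for ptr in reversed(back):               # backtrack through pointers
        path.append(int(ptr[path[-1]]))
    return list(reversed(path))

print([states[i] for i in viterbi(obs, start, trans, emit)])
```

HMM_Score's actual model adds the factors named in the abstract (adjacent amino acids, mass and intensity dependence of ion types) to the emission probabilities; the dynamic program itself is the same shape.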
NASA Astrophysics Data System (ADS)
Deng, Xiang; Huang, Haibin; Zhu, Lei; Du, Guangwei; Xu, Xiaodong; Sun, Yiyong; Xu, Chenyang; Jolly, Marie-Pierre; Chen, Jiuhong; Xiao, Jie; Merges, Reto; Suehling, Michael; Rinck, Daniel; Song, Lan; Jin, Zhengyu; Jiang, Zhaoxia; Wu, Bin; Wang, Xiaohong; Zhang, Shuai; Peng, Weijun
2008-03-01
Comprehensive quantitative evaluation of tumor segmentation technique on large scale clinical data sets is crucial for routine clinical use of CT based tumor volumetry for cancer diagnosis and treatment response evaluation. In this paper, we present a systematic validation study of a semi-automatic image segmentation technique for measuring tumor volume from CT images. The segmentation algorithm was tested using clinical data of 200 tumors in 107 patients with liver, lung, lymphoma and other types of cancer. The performance was evaluated using both accuracy and reproducibility. The accuracy was assessed using 7 commonly used metrics that can provide complementary information regarding the quality of the segmentation results. The reproducibility was measured by the variation of the volume measurements from 10 independent segmentations. The effect of disease type, lesion size and slice thickness of image data on the accuracy measures were also analyzed. Our results demonstrate that the tumor segmentation algorithm showed good correlation with ground truth for all four lesion types (r = 0.97, 0.99, 0.97, 0.98, p < 0.0001 for liver, lung, lymphoma and other, respectively). The segmentation algorithm can produce relatively reproducible volume measurements on all lesion types (coefficient of variation in the range of 10-20%). Our results show that the algorithm is insensitive to lesion size (coefficient of determination close to 0) and slice thickness of image data (p > 0.90). The validation framework used in this study has the potential to facilitate the development of new tumor segmentation algorithms and assist large scale evaluation of segmentation techniques for other clinical applications.
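The reproducibility measure above, the coefficient of variation across repeated segmentations of one lesion, can be sketched as follows; the volumes are hypothetical:

```python
import numpy as np

# Hypothetical volumes (ml) from 10 independent segmentations of one lesion.
volumes = np.array([12.1, 11.4, 13.0, 12.6, 11.8, 12.3, 13.2, 11.9, 12.5, 12.0])

# Coefficient of variation: sample standard deviation over the mean.
cv = volumes.std(ddof=1) / volumes.mean()
print(f"CV = {cv:.1%}")
```

A CV in the 10-20% range, as reported in the abstract, would mean the spread of repeated measurements is 10-20% of the mean volume.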
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doris, E.
2012-04-01
There is a growing body of qualitative and a limited body of quantitative literature supporting the common assertion that policy drives development of clean energy resources. Recent work in this area indicates that the impact of policy depends on policy type, length of time in place, and economic and social contexts of implementation. This work aims to inform policymakers about the impact of different policy types and to assist in the staging of those policies to maximize individual policy effectiveness and development of the market. To do so, this paper provides a framework for policy development to support the market for distributed photovoltaic systems. Next steps include mathematical validation of the framework and development of specific policy pathways given state economic and resource contexts.
Study on Shear Performance of Cold-formed Steel Composite Wall with New Type of Stud
NASA Astrophysics Data System (ADS)
Wang, Chungang; Yue, Sizhe; Liu, Hong; Zhang, Zhuangnan
2018-03-01
The shear resistance of single oriented-strand board walls and single gypsum board walls can be improved to different degrees by increasing the strength of the steel. Experimental data from the literature were used, and the test specimens were simulated and validated with ABAQUS finite element analysis. The research showed that the compressive bearing capacity of the new-stud composite wall was much better than that of the common-stud composite wall, so the establishment and study of all models were based on the new section stud. The analysis results show that, when the new type of stud is used, the shear resistance of the single oriented-strand board wall can be improved efficiently by increasing the strength of the steel, but the shear resistance of the single gypsum board wall increases only slightly.
Common path in-line holography using enhanced joint object reference digital interferometers
Kelner, Roy; Katz, Barak; Rosen, Joseph
2014-01-01
Joint object reference digital interferometer (JORDI) is a recently developed system capable of recording holograms of various types [Opt. Lett. 38(22), 4719 (2013)]. Presented here is a new enhanced system design that is based on the previous JORDI. While the previous JORDI has been based purely on diffractive optical elements, displayed on spatial light modulators, the present design incorporates an additional refractive objective lens, thus enabling hologram recording with improved resolution and increased system applicability. Experimental results demonstrate successful hologram recording for various types of objects, including transmissive, reflective, three-dimensional, phase and highly scattering objects. The resolution limit of the system is analyzed and experimentally validated. Finally, the suitability of JORDI for microscopic applications is verified as a microscope objective based configuration of the system is demonstrated. PMID:24663838
Most Common Publication Types in Radiology Journals: What is the Level of Evidence?
Rosenkrantz, Andrew B; Pinnamaneni, Niveditha; Babb, James S; Doshi, Ankur M
2016-05-01
This study aimed to assess the most common publication types in radiology journals, as well as temporal trends and association with citation frequency. PubMed was searched to extract all published articles having the following "Publication Type" indices: "validation studies," "meta-analysis," "clinical trial," "comparative study," "evaluation study," "guideline," "multicenter study," "randomized study," "review," "editorial," "case report," and "technical report." The percentage of articles within each category published within clinical radiology journals was computed. Normalized percentages for each category were also computed on an annual basis. Citation counts within a 2-year window following publication were obtained using Web of Science. Overall trends were assessed. Publication types with the highest fraction in radiology journals were technical reports, evaluation studies, and case reports (4.8% to 5.8%). Publication types with the lowest fraction in radiology journals were randomized trials, multicenter studies, and meta-analyses (0.8% to 1.5%). Case reports showed a significant decrease since 1999, with accelerating decline since 2007 (P = 0.002). Publication types with highest citation counts were meta-analyses, guidelines, and multicenter studies (8.1 ± 10.7 to 12.9 ± 5.1). Publication types with lowest citation counts were case reports, editorials, and technical reports (1.4 ± 2.4 to 2.9 ± 4.3). The representation in radiology journals and citation frequency of the publication types showed weak inverse correlation (r = -0.372). Radiology journals have historically had relatively greater representation of less frequently cited publication types. Various strategies, including methodological training, multidisciplinary collaboration, national support networks, as well as encouragement of higher level of evidence by funding agencies and radiology journals themselves, are warranted to improve the impact of radiological research. 
76 FR 16858 - Proposed Information Collection (Statement of Marital Relationship); Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-25
... validity of a common law marriage. DATES: Written comments and recommendations on the proposed collection.... VA uses the information collected to determine whether a common law marriage was valid under the law of the place where the parties resided at the time of the marriage or under the law of the place...
Evaluation of Asphalt Mixture Low-Temperature Performance in Bending Beam Creep Test.
Pszczola, Marek; Jaczewski, Mariusz; Rys, Dawid; Jaskula, Piotr; Szydlowski, Cezary
2018-01-10
Low-temperature cracking is one of the most common road pavement distress types in Poland. While bitumen performance can be evaluated in detail using bending beam rheometer (BBR) or dynamic shear rheometer (DSR) tests, none of the normalized test methods gives a comprehensive representation of low-temperature performance of the asphalt mixtures. This article presents the Bending Beam Creep test performed at temperatures from -20 °C to +10 °C in order to evaluate the low-temperature performance of asphalt mixtures. Both validation of the method and its utilization for the assessment of eight types of wearing courses commonly used in Poland were described. The performed test indicated that the source of bitumen and its production process (and not necessarily only bitumen penetration) had a significant impact on the low-temperature performance of the asphalt mixtures, comparable to the impact of binder modification (neat, polymer-modified, highly modified) and the aggregate skeleton used in the mixture (Stone Mastic Asphalt (SMA) vs. Asphalt Concrete (AC)). Obtained Bending Beam Creep test results were compared with the BBR bitumen test. Regression analysis confirmed that performing solely bitumen tests is insufficient for comprehensive low-temperature performance analysis.
Nanomaterial-Based Electrochemical Immunosensors for Clinically Significant Biomarkers
Ronkainen, Niina J.; Okon, Stanley L.
2014-01-01
Nanotechnology has played a crucial role in the development of biosensors over the past decade. The development, testing, optimization, and validation of new biosensors has become a highly interdisciplinary effort involving experts in chemistry, biology, physics, engineering, and medicine. The sensitivity, the specificity and the reproducibility of biosensors have improved tremendously as a result of incorporating nanomaterials in their design. In general, nanomaterials-based electrochemical immunosensors amplify the sensitivity by facilitating greater loading of the larger sensing surface with biorecognition molecules as well as improving the electrochemical properties of the transducer. The most common types of nanomaterials and their properties will be described. In addition, the utilization of nanomaterials in immunosensors for biomarker detection will be discussed since these biosensors have enormous potential for a myriad of clinical uses. Electrochemical immunosensors provide a specific and simple analytical alternative as evidenced by their brief analysis times, inexpensive instrumentation, lower assay cost as well as good portability and amenability to miniaturization. The role nanomaterials play in biosensors, their ability to improve detection capabilities in low concentration analytes yielding clinically useful data and their impact on other biosensor performance properties will be discussed. Finally, the most common types of electroanalytical detection methods will be briefly touched upon. PMID:28788700
Semaille, P
2009-09-01
Addiction is characterized by the inability to control one's consumption of a substance or certain behaviors, and by continuation of the behavior despite knowledge of its adverse effects. Addictions to substances such as heroin, cocaine, etc., are well known, but other potentially addictive substances are becoming more common in Belgium: MDMA, GHB/GBL, crystal meth, etc. The existence of addictions without a substance (also called behavioral addictions) is now well recognized: gambling addiction seems to be the most common and has been recognized as a disease by the WHO, but cyberaddiction, sex addiction, work addiction, shopping addiction, etc., are also observed. Screening for poly-addiction, or for addiction to a single substance or behavior, should be a systematic part of every patient's history. This screening should be facilitated through the development and validation of a cross-cutting scale. Particular attention should be paid to certain groups, both in primary prevention and in screening: men, adolescents and young adults, university and high-school students, clubbers, athletes, prisoners, ethnic minorities, and people with mental disorders such as depression. Primary care workers, and especially general practitioners, are best placed to detect these different forms of addiction, to offer appropriate care according to the patient's characteristics and type of addiction, and to identify high-risk situations for relapse.
Evaluation of Asphalt Mixture Low-Temperature Performance in Bending Beam Creep Test
Rys, Dawid; Jaskula, Piotr; Szydlowski, Cezary
2018-01-01
Low-temperature cracking is one of the most common road pavement distress types in Poland. While bitumen performance can be evaluated in detail using bending beam rheometer (BBR) or dynamic shear rheometer (DSR) tests, none of the normalized test methods gives a comprehensive representation of low-temperature performance of the asphalt mixtures. This article presents the Bending Beam Creep test performed at temperatures from −20 °C to +10 °C in order to evaluate the low-temperature performance of asphalt mixtures. Both validation of the method and its utilization for the assessment of eight types of wearing courses commonly used in Poland were described. The performed test indicated that the source of bitumen and its production process (and not necessarily only bitumen penetration) had a significant impact on the low-temperature performance of the asphalt mixtures, comparable to the impact of binder modification (neat, polymer-modified, highly modified) and the aggregate skeleton used in the mixture (Stone Mastic Asphalt (SMA) vs. Asphalt Concrete (AC)). Obtained Bending Beam Creep test results were compared with the BBR bitumen test. Regression analysis confirmed that performing solely bitumen tests is insufficient for comprehensive low-temperature performance analysis. PMID:29320443
Xu, Jingting; Hu, Hong; Dai, Yang
The identification of enhancers is a challenging task. Various types of epigenetic information, including histone modification, have been utilized to construct enhancer prediction models based on a diverse panel of machine learning schemes. However, DNA methylation profiles generated from whole-genome bisulfite sequencing (WGBS) have not been fully explored for their potential in enhancer prediction, despite the fact that low methylated regions (LMRs) have been implied to be distal active regulatory regions. In this work, we propose a prediction framework, LMethyR-SVM, using LMRs identified from cell-type-specific WGBS DNA methylation profiles and a weighted support vector machine learning framework. In LMethyR-SVM, the set of cell-type-specific LMRs is divided into three subsets: reliable positive, likely positive, and likely negative, according to their resemblance to a small set of experimentally validated enhancers in the VISTA database, based on an estimated non-parametric density distribution. The prediction model is then obtained by solving a weighted support vector machine. We demonstrate the performance of LMethyR-SVM using WGBS DNA methylation profiles derived from the human embryonic stem cell type (H1) and the fetal lung fibroblast cell type (IMR90). The predicted enhancers are highly conserved, with a reasonable validation rate based on a set of commonly used positive markers including transcription factors, p300 binding, and DNase-I hypersensitive sites. In addition, we show evidence that a large fraction of the LMethyR-SVM-predicted enhancers are not predicted by ChromHMM in the H1 cell type and are more enriched for FANTOM5 enhancers. Our work suggests that low methylated regions detected from WGBS data are useful as complementary resources to histone modification marks in developing models for the prediction of cell-type-specific enhancers.
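The confidence-weighting idea at the core of LMethyR-SVM can be sketched in a few lines: each candidate region carries a per-example weight reflecting how trustworthy its label is ("reliable positive" versus merely "likely" labels). The sketch below uses a weighted logistic loss as a simple stand-in for the weighted SVM; the features, weights, and values are all simulated for illustration and are not the authors' implementation.

```python
import numpy as np

# Confidence-weighted training in the spirit of LMethyR-SVM:
# "reliable positive" examples get full weight, "likely" labels get
# reduced weight. A weighted logistic loss stands in for the weighted
# SVM; all names and values here are hypothetical.
rng = np.random.default_rng(0)
X_pos = rng.normal([0.1, 2.0], 0.3, (40, 2))   # enhancer-like regions
X_neg = rng.normal([0.8, 0.5], 0.3, (40, 2))   # background regions
X = np.vstack([X_pos, X_neg])
y = np.array([1.0] * 40 + [0.0] * 40)

w_conf = np.full(80, 0.5)   # "likely" labels: half weight
w_conf[:10] = 1.0           # first 10 positives: validated, full weight

Xb = np.hstack([X, np.ones((80, 1))])          # add intercept column
beta = np.zeros(3)
for _ in range(500):                           # weighted gradient descent
    p = 1.0 / (1.0 + np.exp(-Xb @ beta))
    grad = Xb.T @ (w_conf * (p - y)) / 80
    beta -= 0.5 * grad

acc = np.mean(((1.0 / (1.0 + np.exp(-Xb @ beta))) > 0.5) == (y == 1))
print(f"training accuracy: {acc:.2f}")
```

The per-example weight enters the loss gradient multiplicatively, so low-confidence labels pull the decision boundary less, which is the same design choice the weighted SVM makes.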
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marrero-Faz, M.; Hernandezperez, G.
The Cuban Archipelago is an Early Tertiary thrust belt derived from the collision of the Cretaceous volcanic arc from the south with the North American continental margin (Jurassic-Cretaceous). The main characteristics of the hydrocarbon potential of Cuba are: (1) widespread existence of Jurassic-Cretaceous source rocks and active generation of different types of oils; (2) hydrocarbons are reservoired in a wide range of rock types, most commonly in thrusted, fractured carbonates of Jurassic to Cretaceous age, which constitute the most important reservoir type in Cuba; (3) high areal density of different trap types, the most important being the hinterland-dipping thrust sheet play; and (4) migration and trapping of hydrocarbons mainly in the Eocene. Migration is supposed to be mostly lateral; vertical migration is not excluded in the south and in some parts of the North Province. A significant number of untested, apparently valid exploration plays remain in both onshore and offshore areas of Cuba.
Anemia and mortality in older persons: does the type of anemia affect survival?
Shavelle, Robert M; MacKenzie, Ross; Paculdo, David R
2012-03-01
Anemia is a common condition among community-dwelling older adults. The present study investigates the effect of type of anemia on subsequent mortality. We analyzed data from participants of the Third National Health and Nutrition Survey who were aged ≥50 and had valid hemoglobin levels determined by laboratory measurement. Anemia was defined by World Health Organization criteria. 7,171 subjects met our inclusion criterion. Of those with anemia (n = 862, deaths = 491), 24% had nutritional anemia, 11% had anemia of chronic renal disease, 26% had anemia of chronic inflammation, and 39% had unexplained anemia. We found an overall relative risk (RR) for mortality of 1.8 (p < 0.001) comparing those with anemia to those without, after adjusting for age, sex, and race. After we controlled for a number of chronic medical conditions, the overall RR was 1.6. Compared to persons without anemia, we found the following RRs for the type of anemia: nutritional (2.34, p < 0.0001), chronic renal disease (1.70, p < 0.0001), chronic inflammation (1.48, p < 0.0001), and unexplained (1.26, p < 0.01). Anemia is common although not severe in older non-institutionalized adults. When compared with non-anemic older adults, those with nutritional anemia or anemia due to chronic renal disease have the highest mortality risk.
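For intuition about the relative risks reported above, an unadjusted risk ratio can be computed directly from a 2x2 table. The anemic group's counts (491 deaths among 862) come from the abstract; the number of deaths among non-anemic participants is not reported, so the value below is an assumed count chosen only to make the arithmetic concrete (the paper's RRs are additionally adjusted for age, sex, and race).

```python
# Unadjusted relative risk from a 2x2 table. The non-anemic death
# count (2000) is hypothetical; only 491/862 is from the abstract.
def relative_risk(d_exp, n_exp, d_unexp, n_unexp):
    """Risk ratio: P(death | exposed) / P(death | unexposed)."""
    return (d_exp / n_exp) / (d_unexp / n_unexp)

rr = relative_risk(491, 862, 2000, 6309)
print(f"unadjusted RR = {rr:.2f}")
```

With these assumed counts the crude ratio lands near the adjusted 1.8 the paper reports, but in general covariate adjustment can move the estimate substantially.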
Zheng, Kai; Fear, Kathleen; Chaffee, Bruce W; Zimmerman, Christopher R; Karls, Edward M; Gatwood, Justin D; Stevenson, James G; Pearlman, Mark D
2011-12-01
To develop a theoretically informed and empirically validated survey instrument for assessing prescribers' perception of computerized drug-drug interaction (DDI) alerts. The survey is grounded in the unified theory of acceptance and use of technology and an adapted accident causation model. Development of the instrument was also informed by a review of the extant literature on prescribers' attitude toward computerized medication safety alerts and common prescriber-provided reasons for overriding. To refine and validate the survey, we conducted a two-stage empirical validation study consisting of a pretest with a panel of domain experts followed by a field test among all eligible prescribers at our institution. The resulting survey instrument contains 28 questionnaire items assessing six theoretical dimensions: performance expectancy, effort expectancy, social influence, facilitating conditions, perceived fatigue, and perceived use behavior. Satisfactory results were obtained from the field validation; however, a few potential issues were also identified. We analyzed these issues accordingly and the results led to the final survey instrument as well as usage recommendations. High override rates of computerized medication safety alerts have been a prevalent problem. They are usually caused by, or manifested in, issues of poor end user acceptance. However, standardized research tools for assessing and understanding end users' perception are currently lacking, which inhibits knowledge accumulation and consequently forgoes improvement opportunities. The survey instrument presented in this paper may help fill this methodological gap. We developed and empirically validated a survey instrument that may be useful for future research on DDI alerts and other types of computerized medication safety alerts more generally.
Van Iddekinge, Chad H; Roth, Philip L; Putka, Dan J; Lanivich, Stephen E
2011-11-01
A common belief among researchers is that vocational interests have limited value for personnel selection. However, no comprehensive quantitative summaries of interest validity research have been conducted to substantiate claims for or against the use of interests. To help address this gap, we conducted a meta-analysis of relations between interests and employee performance and turnover using data from 74 studies and 141 independent samples. Overall validity estimates (corrected for measurement error in the criterion but not for range restriction) for single interest scales were .14 for job performance, .26 for training performance, -.19 for turnover intentions, and -.15 for actual turnover. Several factors appeared to moderate interest-criterion relations. For example, validity estimates were larger when interests were theoretically relevant to the work performed in the target job. The type of interest scale also moderated validity, such that corrected validities were larger for scales designed to assess interests relevant to a particular job or vocation (e.g., .23 for job performance) than for scales designed to assess a single, job-relevant realistic, investigative, artistic, social, enterprising, or conventional (i.e., RIASEC) interest (.10) or a basic interest (.11). Finally, validity estimates were largest when studies used multiple interests for prediction, either by using a single job- or vocation-focused scale (which tends to tap multiple interests) or by using a regression-weighted composite of several RIASEC or basic interest scales. Overall, the results suggest that vocational interests may hold more promise for predicting employee performance and turnover than researchers have thought. (c) 2011 APA, all rights reserved.
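The phrase "corrected for measurement error in the criterion but not for range restriction" refers to the standard disattenuation formula: the observed correlation is divided by the square root of the criterion's reliability. The numbers below are illustrative, not values taken from the meta-analysis.

```python
import math

# Disattenuation for criterion unreliability:
#   r_corrected = r_observed / sqrt(r_yy)
# where r_yy is the reliability of the criterion measure.
def correct_for_criterion_unreliability(r_obs, r_yy):
    return r_obs / math.sqrt(r_yy)

# Hypothetical example: observed r = .11, criterion reliability = .60
r_c = correct_for_criterion_unreliability(r_obs=0.11, r_yy=0.60)
print(f"corrected validity = {r_c:.2f}")
```

Because the divisor is always at most 1, the correction can only raise the estimate, which is why corrected validities are larger than raw correlations.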
Kozyreva, Varvara K.; Truong, Chau-Linda; Greninger, Alexander L.; Crandall, John; Mukhopadhyay, Rituparna
2017-01-01
Public health microbiology laboratories (PHLs) are on the cusp of unprecedented improvements in pathogen identification, antibiotic resistance detection, and outbreak investigation by using whole-genome sequencing (WGS). However, considerable challenges remain due to the lack of common standards. Here, we describe the validation of WGS on the Illumina platform for routine use in PHLs according to Clinical Laboratory Improvements Act (CLIA) guidelines for laboratory-developed tests (LDTs). We developed a validation panel comprising 10 Enterobacteriaceae isolates, 5 Gram-positive cocci, 5 Gram-negative nonfermenting species, 9 Mycobacterium tuberculosis isolates, and 5 miscellaneous bacteria. The genome coverage range was 15.71× to 216.4× (average, 79.72×; median, 71.55×); the limit of detection (LOD) for single nucleotide polymorphisms (SNPs) was 60×. The accuracy, reproducibility, and repeatability of base calling were >99.9%. The accuracy of phylogenetic analysis was 100%. The specificity and sensitivity inferred from multilocus sequence typing (MLST) and genome-wide SNP-based phylogenetic assays were 100%. The following objectives were accomplished: (i) the establishment of the performance specifications for WGS applications in PHLs according to CLIA guidelines, (ii) the development of quality assurance and quality control measures, (iii) the development of a reporting format for end users with or without WGS expertise, (iv) the availability of a validation set of microorganisms, and (v) the creation of a modular template for the validation of WGS processes in PHLs. The validation panel, sequencing analytics, and raw sequences could facilitate multilaboratory comparisons of WGS data. Additionally, the WGS performance specifications and modular template are adaptable for the validation of other platforms and reagent kits. PMID:28592550
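The coverage figures above (for example, the 60× LOD for SNP calling) follow from a back-of-envelope calculation: mean coverage is total sequenced bases divided by genome size. The read count, read length, and genome size below are hypothetical values chosen for illustration.

```python
# Mean genome coverage: (number of reads * read length) / genome size.
# All inputs here are hypothetical, not from the validation study.
def mean_coverage(n_reads, read_len, genome_size):
    return n_reads * read_len / genome_size

cov = mean_coverage(n_reads=2_000_000, read_len=150, genome_size=4_600_000)
print(f"mean coverage = {cov:.1f}x, meets 60x SNP LOD: {cov >= 60}")
```

A run planned this way would clear the 60× SNP-calling threshold with some margin, which matters because real coverage is uneven across the genome.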
Stuart, Lauren N; Volmar, Keith E; Nowak, Jan A; Fatheree, Lisa A; Souers, Rhona J; Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Astles, J Rex; Nakhleh, Raouf E
2017-09-01
A cooperative agreement between the College of American Pathologists (CAP) and the United States Centers for Disease Control and Prevention was undertaken to measure laboratories' awareness and implementation of an evidence-based laboratory practice guideline (LPG) on immunohistochemical (IHC) validation practices published in 2014, and to establish new benchmark data on IHC laboratory practices. A 2015 survey on IHC assay validation practices was sent to laboratories subscribed to specific CAP proficiency testing programs and to additional nonsubscribing laboratories that perform IHC testing. Specific questions were designed to capture laboratory practices not addressed in a 2010 survey. The analysis was based on responses from 1085 laboratories that perform IHC staining. Ninety-six percent (809 of 844) always documented validation of IHC assays. Sixty percent (648 of 1078) had separate procedures for predictive and nonpredictive markers, 42.7% (220 of 515) had procedures for laboratory-developed tests, 50% (349 of 697) had procedures for testing cytologic specimens, and 46.2% (363 of 785) had procedures for testing decalcified specimens. Minimum case numbers were specified by 85.9% (720 of 838) of laboratories for nonpredictive markers and 76% (584 of 768) for predictive markers. Median concordance requirements were 95% for both types. For initial validation, 75.4% (538 of 714) of laboratories adopted the 20-case minimum for nonpredictive markers and 45.9% (266 of 579) adopted the 40-case minimum for predictive markers as outlined in the 2014 LPG. The most common method for validation was correlation with morphology and expected results. Laboratories also reported which assay changes necessitated revalidation and their minimum case requirements. Benchmark data on current IHC validation practices and procedures may help laboratories understand the issues and influence further refinement of LPG recommendations.
Paap, Kenneth R.; Sawi, Oliver
2014-01-01
A sample of 58 bilingual and 62 monolingual university students completed four tasks commonly used to test for bilingual advantages in executive functioning (EF): antisaccade, attentional network test, Simon, and color-shape switching. Across the four tasks, 13 different indices were derived that are assumed to reflect individual differences in inhibitory control, monitoring, or switching. The effects of bilingualism on the 13 measures were explored by directly comparing the means of the two language groups and through regression analyses using a continuous measure of bilingualism and multiple demographic characteristics as predictors. Across the 13 different measures and two types of data analysis there were very few significant results, and those that did occur supported a monolingual advantage. An equally important goal was to assess convergent validity through cross-task correlations of indices assumed to measure the same component of executive functioning. Most of the correlations using difference-score measures were non-significant, and many were near zero. Although modestly higher levels of convergent validity are sometimes reported, a review of the existing literature suggests that bilingual advantages (or disadvantages) may reflect task-specific differences that are unlikely to generalize to important general differences in EF. Finally, as cautioned by Salthouse, assumed measures of executive functioning may also be threatened by a lack of discriminant validity separating individual or group differences in EF from those in general fluid intelligence or simple processing speed. PMID:25249988
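The convergent-validity check described above amounts to correlating difference scores across tasks. The simulation below illustrates why such correlations tend to be near zero even when a shared trait exists: trial-level noise in each task's difference score swamps the small shared individual-difference variance. All distributions and task names are made up for illustration.

```python
import numpy as np

# Simulated cross-task correlation of difference scores (e.g.,
# incongruent-minus-congruent RT). The shared "inhibition" trait has
# variance 100, while each task adds noise with variance 1600, so the
# expected correlation is only about 100/1700 ~ .06.
rng = np.random.default_rng(1)
n = 120
true_inhibition = rng.normal(0, 10, n)                    # shared trait
task1_effect = true_inhibition + rng.normal(30, 40, n)    # noisy task 1
task2_effect = true_inhibition + rng.normal(25, 40, n)    # noisy task 2

r = np.corrcoef(task1_effect, task2_effect)[0, 1]
print(f"cross-task correlation of difference scores: r = {r:.2f}")
```

This is the classic unreliability-of-difference-scores problem: each score is itself a difference of two noisy means, so most of its variance is error, and correlations between tasks are attenuated toward zero.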
Xu, Feng; Hilpert, Peter; Randall, Ashley K; Li, Qiuping; Bodenmann, Guy
2016-08-01
The Dyadic Coping Inventory (DCI, Bodenmann, 2008) assesses how couples support each other when facing individual (e.g., workload) and common (e.g., parenting) stressors. Specifically, the DCI measures partners' perceptions of their own (Self) and their partners' behaviors (Partner) when facing individual stressors, and partners' common coping behaviors when facing common stressors (Common). To date, the DCI has been validated in 6 different languages from individualistic Western cultures; however, because culture can affect interpersonal interactions, it is unknown whether the DCI is a reliable measure of coping behaviors for couples living in collectivistic Eastern cultures. Based on data from 474 Chinese couples (N = 948 individuals), the current study examined the Chinese version of the DCI's factorial structure, measurement invariance (MI), and construct validity of test scores. Using 3 cultural groups (China, Switzerland, and the United States [U.S.]), confirmatory factor analysis revealed a 5-factor structure regarding Self and Partner and a 2-factor structure regarding Common dyadic coping (DC). Results from analyses of MI indicated that the DCI subscales met the criteria for configural, metric, and full/partial scalar invariance across cultures (Chinese-Swiss and Chinese-U.S.) and genders (Chinese men and women). Results further revealed good construct validity of the DCI test scores. In all, the Chinese version of the DCI can be used for measuring Chinese couples' coping behaviors, and is available for cross-cultural studies examining DC behaviors between Western and Eastern cultures. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Reusable software parts and the semi-abstract data type
NASA Technical Reports Server (NTRS)
Cohen, Sanford G.
1986-01-01
The development of reusable software parts has been an area of intense discussion within the software community for many years. An approach is described for developing reusable parts for missile guidance, navigation, and control applications which meet the following criteria: (1) reusable; (2) tailorable; (3) efficient; (4) simple to use; and (5) protected against misuse. Validating the feasibility of developing reusable parts which possess these characteristics is the basis of the Common Ada Missile Packages (CAMP) program. Under CAMP, over 200 reusable software parts were developed, including parts for navigation, Kalman filtering, signal processing, and autopilot functions. Six different methods are presented for designing reusable software parts.
Platelet function is modified by common sequence variation in megakaryocyte super enhancers
Petersen, Romina; Lambourne, John J.; Javierre, Biola M.; Grassi, Luigi; Kreuzhuber, Roman; Ruklisa, Dace; Rosa, Isabel M.; Tomé, Ana R.; Elding, Heather; van Geffen, Johanna P.; Jiang, Tao; Farrow, Samantha; Cairns, Jonathan; Al-Subaie, Abeer M.; Ashford, Sofie; Attwood, Antony; Batista, Joana; Bouman, Heleen; Burden, Frances; Choudry, Fizzah A.; Clarke, Laura; Flicek, Paul; Garner, Stephen F.; Haimel, Matthias; Kempster, Carly; Ladopoulos, Vasileios; Lenaerts, An-Sofie; Materek, Paulina M.; McKinney, Harriet; Meacham, Stuart; Mead, Daniel; Nagy, Magdolna; Penkett, Christopher J.; Rendon, Augusto; Seyres, Denis; Sun, Benjamin; Tuna, Salih; van der Weide, Marie-Elise; Wingett, Steven W.; Martens, Joost H.; Stegle, Oliver; Richardson, Sylvia; Vallier, Ludovic; Roberts, David J.; Freson, Kathleen; Wernisch, Lorenz; Stunnenberg, Hendrik G.; Danesh, John; Fraser, Peter; Soranzo, Nicole; Butterworth, Adam S.; Heemskerk, Johan W.; Turro, Ernest; Spivakov, Mikhail; Ouwehand, Willem H.; Astle, William J.; Downes, Kate; Kostadima, Myrto; Frontini, Mattia
2017-01-01
Linking non-coding genetic variants associated with the risk of diseases or disease-relevant traits to target genes is a crucial step to realize GWAS potential in the introduction of precision medicine. Here we set out to determine the mechanisms underpinning variant association with platelet quantitative traits using cell type-matched epigenomic data and promoter long-range interactions. We identify potential regulatory functions for 423 of 565 (75%) non-coding variants associated with platelet traits and we demonstrate, through ex vivo and proof of principle genome editing validation, that variants in super enhancers play an important role in controlling archetypical platelet functions. PMID:28703137
Understanding the cognitive and motivational underpinnings of sexual passion from a dualistic model.
Philippe, Frederick L; Vallerand, Robert J; Bernard-Desrosiers, Léa; Guilbault, Valérie; Rajotte, Guillaume
2017-11-01
Sexual passion has always been conceptualized as a one-dimensional phenomenon that emerges from interactions with partners. Drawing from the literature on passionate activities, sexual passion was defined in terms of its intrapersonal motivational and cognitive components and examined from a dualistic perspective. More specifically, in 5 studies, we investigated how 2 types of sexual passion, harmonious and obsessive, can lead to clearly distinct subjective, relational, and cognitive outcomes. Study 1 validated a scale measuring harmonious and obsessive sexual passion, and showed that each type of sexual passion leads to common, but also distinct, subjective consequences during sexual activity engagement for both singles and romantically engaged individuals. Studies 2 and 3 differentiated the constructs of harmonious and obsessive sexual passion from competing constructs existing in the literature and provided evidence for its predictive validity regarding various relational outcomes, including relationship sustainability over time. Finally, Studies 4 and 5 investigated the cognitive consequences of each type of sexual passion by showing how they reflect distinct levels of integration of sexual and relational representations, and how they can lead to biased processing of sexual information (Study 4) and conflict with ongoing sex-unrelated goals (Studies 5a and 5b). Overall, the present series of studies provides a new look at sexual passion from a motivational and cognitive intrapersonal perspective that is not restricted to interpersonal ramifications with partners. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Nielsen, Alex Christian Yde; Böttiger, Blenda; Midgley, Sofie Elisabeth; Nielsen, Lars Peter
2013-11-01
As the number of new enteroviruses and human parechoviruses seems ever growing, the necessity for updated diagnostics is relevant. We have updated an enterovirus assay and combined it with a previously published assay for human parechovirus, resulting in a multiplex one-step RT-PCR assay. The multiplex assay was validated by analysing its sensitivity and specificity compared to the respective monoplex assays, and good concordance was found. Furthermore, the enterovirus assay was able to detect 42 reference strains from all 4 species, plus an additional 9 genotypes during panel testing and routine usage. During 15 months of routine use, from October 2008 to December 2009, we received and analysed 2187 samples (stool, cerebrospinal fluid, blood, respiratory, and autopsy samples) from 1546 patients, and detected enteroviruses and parechoviruses in 171 (8%) and 66 (3%) of the samples, respectively. 180 of the positive samples could be genotyped by PCR and sequencing, and the most common genotypes found were human parechovirus type 3, echovirus 9, enterovirus 71, Coxsackievirus A16, and echovirus 25. During 2009 in Denmark, both enterovirus and human parechovirus type 3 showed a similar seasonal pattern, with a peak during the summer and autumn. Human parechovirus type 3 was almost invariably found in children less than 4 months of age. In conclusion, a multiplex assay was developed allowing simultaneous detection of 2 viruses which can cause similar clinical symptoms. Copyright © 2013 Elsevier B.V. All rights reserved.
Fuzzy-PI-based centralised control of semi-isolated FP-SEPIC/ZETA BDC in a PV/battery hybrid system
NASA Astrophysics Data System (ADS)
Mahendran, Venmathi; Ramabadran, Ramaprabha
2016-11-01
Multiport converters with a centralised controller have been most commonly used in stand-alone photovoltaic (PV)/battery hybrid systems to supply the load smoothly without any disturbances. This study presents the performance analysis of a four-port SEPIC/ZETA bidirectional converter (FP-SEPIC/ZETA BDC) using various types of centralised control schemes: a fuzzy-tuned proportional integral controller (Fuzzy-PI), a fuzzy logic controller (FLC), and a conventional proportional integral (PI) controller. The proposed FP-SEPIC/ZETA BDC with these control strategies is derived for simultaneous power management of a PV source using a distributed maximum power point tracking (DMPPT) algorithm, a rechargeable battery, and a load by means of a centralised controller. The steady-state and dynamic responses of the FP-SEPIC/ZETA BDC are analysed using the three different types of controllers under line and load regulation. The Fuzzy-PI-based control scheme improves the dynamic response of the system when compared with the FLC and the conventional PI controller. The power balance between the ports is achieved by a pseudorandom carrier modulation scheme. The response of the FP-SEPIC/ZETA BDC is also validated experimentally using a 500 W hardware prototype. The effectiveness of the control strategy is validated using simulation and experimental results.
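At its core, the scheme compared above wraps a discrete PI law around the converter; the fuzzy layer of the Fuzzy-PI controller adapts the gains online, whereas the sketch below keeps them fixed. The first-order plant model, gains, and setpoint are illustrative stand-ins, not the paper's converter model.

```python
# Minimal discrete PI regulation loop. The fuzzy layer in a Fuzzy-PI
# scheme would retune kp/ki each step; here they are constants.
# Plant model (crude first-order lag), gains, and setpoint are invented.
def simulate_pi(kp=0.8, ki=0.5, setpoint=48.0, steps=1000, dt=0.01):
    v, integ = 0.0, 0.0              # output voltage, integrator state
    for _ in range(steps):
        err = setpoint - v           # regulation error
        integ += err * dt            # integral action removes offset
        u = kp * err + ki * integ    # PI control law
        v += dt * (10.0 * u - v)     # Euler step of a first-order plant
    return v

v_final = simulate_pi()
print(f"settled output: {v_final:.1f} V")
```

The integral term is what drives the steady-state error to zero; with only the proportional term, the loop would settle below the setpoint, which is one reason gain scheduling (fuzzy or otherwise) focuses on the transient rather than the final value.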
Current HPLC Methods for Assay of Nano Drug Delivery Systems.
Tekkeli, Serife Evrim Kepekci; Kiziltas, Mustafa Volkan
2017-01-01
In nano drug formulations, the release mechanism is a critical process for understanding controlled and targeted drug delivery systems. To achieve high bioavailability and specificity so that the drug reaches its therapeutic goal, the active substance must be loaded into the nanoparticles efficiently. Therefore, the amount in biological fluids or tissues and the amount remaining in the nanocarriers are very important parameters for understanding the potential of nano drug delivery systems. To this end, suitable and validated quantitation methods are required to determine released drug concentrations from nanopharmaceutical formulations. HPLC (high-performance liquid chromatography) is one of the most common techniques used to determine the drug content released from nano drug formulations under different physical conditions and over different periods of time. Since there are many types of HPLC methods, depending on detector and column types, it is a challenge for researchers to choose a simple, fast, and validated HPLC technique for their nano drug delivery systems. This review's goal is to compare HPLC methods currently used for different nano drug delivery systems in order to provide detailed and useful information for researchers. Copyright© Bentham Science Publishers.
Joint Attention, Social-Cognition, and Recognition Memory in Adults
Kim, Kwanguk; Mundy, Peter
2012-01-01
The early emerging capacity for Joint Attention (JA), or socially coordinated visual attention, is thought to be integral to the development of social-cognition in childhood. Recent studies have also begun to suggest that JA affects adult cognition as well, but methodological limitations hamper research on this topic. To address this issue we developed a novel virtual reality paradigm that integrates eye-tracking and virtual avatar technology to measure two types of JA in adults, Initiating Joint Attention (IJA) and Responding to Joint Attention (RJA). Distinguishing these types of JA in research is important because they are thought to reflect unique, as well as common, constellations of processes involved in human social-cognition and social learning. We tested the validity of the differentiation of IJA and RJA in our paradigm in two studies of picture recognition memory in undergraduate students. Study 1 indicated that young adults correctly identified more pictures they had previously viewed in an IJA condition (67%) than in a RJA (58%) condition, η2 = 0.57. Study 2 controlled for IJA and RJA stimulus viewing time differences, and replicated the findings of Study 1. The implications of these results for the validity of the paradigm and research on the effects of JA on adult social-cognition are discussed. PMID:22712011
Guo, Chunguang; Zhang, Xiaodong; Fink, Stephen P; Platzer, Petra; Wilson, Keith; Willson, James K. V.; Wang, Zhenghe; Markowitz, Sanford D
2008-01-01
Expression microarrays identified a novel transcript, designated Ugene, whose expression is absent in normal colon and colon adenomas but is commonly induced in malignant colon cancers. These findings were validated by real-time PCR and Northern blot analysis in an independent panel of colon cancer cases. In addition, Ugene expression was found to be elevated in many other common cancer types, including breast, lung, uterus, and ovary. Immunofluorescence of V5-tagged Ugene revealed it to have a nuclear localization. In a pull-down assay, uracil DNA-glycosylase 2 (UNG2), an important enzyme in the base excision repair pathway, was identified as a partner protein that binds to Ugene. Co-immunoprecipitation and Western blot analysis confirmed the binding between the endogenous Ugene and UNG2 proteins. Using deletion constructs, we find that Ugene binds to the first 25 amino acids of the UNG2 NH2-terminus. We suggest that Ugene induction in cancer may contribute to the cancer phenotype by interacting with the base excision repair pathway. PMID:18676834
Comparison between ultrasonographic and clinical findings in 43 dogs with gallbladder mucoceles.
Choi, Jihye; Kim, Ahyoung; Keh, Seoyeon; Oh, Juyeon; Kim, Hyunwook; Yoon, Junghee
2014-01-01
Cholecystectomy is the current standard recommended treatment for dogs with gallbladder mucoceles. However, medical management with monitoring has also been recommended for asymptomatic dogs. The purpose of this retrospective study was to compare ultrasonographic patterns of gallbladder mucoceles with clinical disease status in a group of dogs. For each included dog, the ultrasonographic pattern of the mucocele was classified into one of six types: type 1, immobile echogenic bile; type 2, incomplete stellate pattern; type 3, typical stellate pattern; type 4, kiwi-like pattern and stellate combination; type 5, kiwi-like pattern with residual central echogenic bile; and type 6, kiwi-like pattern. A total of 43 dogs were included. Twenty-four dogs, including 11 dogs with gallbladder rupture, were symptomatic. Nineteen dogs were asymptomatic. Cholecystectomy (n = 19), medical therapy (n = 17), or monitoring (n = 6) treatments were applied according to clinical signs and owners' requests. One dog suspected of having gallbladder rupture was euthanized. Frequencies of gallbladder mucocele patterns were as follows: type 1 = 10 (23%), type 2 = 13 (30%), type 3 = 5 (12%), type 4 = 11 (26%), type 5 = 4 (9%), and type 6 = 0. In dogs with gallbladder rupture, type 2 (8/13) was the most common. No significant correlations were found between ultrasonographic patterns of gallbladder mucoceles and clinical disease status or gallbladder rupture. Findings indicated that ultrasonographic patterns of gallbladder mucoceles may not be valid bases for treatment recommendations in dogs. © 2013 American College of Veterinary Radiology.
Mouse Models of Type 2 Diabetes Mellitus in Drug Discovery.
Baribault, Helene
2016-01-01
Type 2 diabetes is a fast-growing epidemic in industrialized countries, associated with obesity, lack of physical exercise, aging, family history, and ethnic background. Diagnostic criteria are elevated fasting or postprandial blood glucose levels, a consequence of insulin resistance. Early intervention, together with lifestyle changes or monotherapy, can help patients reverse the progression of the disease. Systemic glucose toxicity can have devastating effects leading to pancreatic beta cell failure, blindness, nephropathy, and neuropathy, progressing to limb ulceration or even amputation. Existing treatments have numerous side effects and demonstrate variability in individual patient responsiveness. However, several emerging areas of discovery research are showing promise with the development of novel classes of antidiabetic drugs. The mouse has proven to be a reliable model for discovering and validating new treatments for type 2 diabetes mellitus. We review here commonly used methods to measure endpoints relevant to glucose metabolism that show good translatability to the diagnosis of type 2 diabetes in humans: baseline fasting glucose and insulin, glucose tolerance test, insulin sensitivity index, and body composition. Improvements in these clinical values are essential for the progression of a novel potential therapeutic molecule through a preclinical and clinical pipeline.
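The fasting glucose and insulin endpoints mentioned above are often combined into a surrogate insulin sensitivity index. A minimal sketch of one widely used index, HOMA-IR (the formula is standard; the example values below are hypothetical, not from this review):

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """Homeostatic Model Assessment of insulin resistance.

    HOMA-IR = (fasting glucose [mmol/L] * fasting insulin [uU/mL]) / 22.5
    Higher values indicate greater insulin resistance.
    """
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

# Hypothetical example values: glucose 5.0 mmol/L, insulin 9.0 uU/mL
print(round(homa_ir(5.0, 9.0), 2))  # -> 2.0
```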
Considerations Underlying the Use of Mixed Group Validation
ERIC Educational Resources Information Center
Jewsbury, Paul A.; Bowden, Stephen C.
2013-01-01
Mixed Group Validation (MGV) is an approach for estimating the diagnostic accuracy of tests. MGV is a promising alternative to the more commonly used Known Groups Validation (KGV) approach for estimating diagnostic accuracy. The advantage of MGV lies in the fact that the approach does not require a perfect external validity criterion or gold…
Plagnol, Vincent; Woodhouse, Samuel; Howarth, Karen; Lensing, Stefanie; Smith, Matt; Epstein, Michael; Madi, Mikidache; Smalley, Sarah; Leroy, Catherine; Hinton, Jonathan; de Kievit, Frank; Musgrave-Brown, Esther; Herd, Colin; Baker-Neblett, Katherine; Brennan, Will; Dimitrov, Peter; Campbell, Nathan; Morris, Clive; Rosenfeld, Nitzan; Clark, James; Gale, Davina; Platt, Jamie; Calaway, John; Jones, Greg; Forshew, Tim
2018-01-01
Circulating tumor DNA (ctDNA) analysis is being incorporated into cancer care; notably in profiling patients to guide treatment decisions. Responses to targeted therapies have been observed in patients with actionable mutations detected in plasma DNA at variant allele fractions (VAFs) below 0.5%. Highly sensitive methods are therefore required for optimal clinical use. To enable objective assessment of assay performance, detailed analytical validation is required. We developed the InVisionFirst™ assay, an assay based on enhanced tagged amplicon sequencing (eTAm-Seq™) technology to profile 36 genes commonly mutated in non-small cell lung cancer (NSCLC) and other cancer types for actionable genomic alterations in cell-free DNA. The assay has been developed to detect point mutations, indels, amplifications and gene fusions that commonly occur in NSCLC. For analytical validation, two 10mL blood tubes were collected from NSCLC patients and healthy volunteer donors. In addition, contrived samples were used to represent a wide spectrum of genetic aberrations and VAFs. Samples were analyzed by multiple operators, at different times and using different reagent lots. Results were compared with digital PCR (dPCR). The InVisionFirst assay demonstrated an excellent limit of detection, with 99.48% sensitivity for SNVs present at VAF range 0.25%-0.33%, 92.46% sensitivity for indels at 0.25% VAF and a high rate of detection at lower frequencies while retaining high specificity (99.9997% per base). The assay also detected ALK and ROS1 gene fusions, and DNA amplifications in ERBB2, FGFR1, MET and EGFR with high sensitivity and specificity. Comparison between the InVisionFirst assay and dPCR in a series of cancer patients showed high concordance. This analytical validation demonstrated that the InVisionFirst assay is highly sensitive, specific and robust, and meets analytical requirements for clinical applications. PMID:29543828
Jakusz, J.W.; Dieck, J.J.; Langrehr, H.A.; Ruhser, J.J.; Lubinski, S.J.
2016-01-11
Similar to an accuracy assessment (AA), validation involves generating random points based on the total area for each map class. However, instead of collecting field data, two or three individuals not involved with the photo-interpretative mapping separately review each of the points onscreen and record a best-fit vegetation type(s) for each site. Once the individual analyses are complete, results are joined together and a comparative analysis is performed. The objective of this initial analysis is to identify areas where the validation results were in agreement (matches) and areas where validation results were in disagreement (mismatches). The two or three individuals then perform an analysis, looking at each mismatched site, and agree upon a final validation class. (If two vegetation types at a specific site appear to be equally prevalent, the validation team is permitted to assign the site two best-fit vegetation types.) Following the validation team's comparative analysis of vegetation assignments, the data are entered into a database and compared to the mappers' vegetation assignments. Agreements and disagreements between the map and validation classes are identified, and a contingency table is produced. This document presents the AA processes/results for Pools 13 and La Grange, as well as the validation process/results for Pools 13 and 26 and Open River South.
Statistical Modelling of the Soil Dielectric Constant
NASA Astrophysics Data System (ADS)
Usowicz, Boguslaw; Marczewski, Wojciech; Bogdan Usowicz, Jerzy; Lipiec, Jerzy
2010-05-01
The dielectric constant of soil is a physical property that is highly sensitive to water content. It underlies several electrical techniques for determining water content, both direct (TDR, FDR, and other methods based on electrical conductance and/or capacitance) and indirect remote sensing (RS) methods. This work is devoted to a statistical approach to modelling the dielectric constant as a property that accounts for a wide range of specific soil compositions, porosities, and mass densities over the unsaturated water content range. Similar models are usually fitted to a few particular soil types; to change the soil type, one must switch to another model or adjust it by re-parametrizing the soil compounds, which makes it difficult to compare and relate results between models. The presented model was developed as a generic representation of soil: a hypothetical mixture of spheres, each representing a soil fraction in its proper phase state. The model generates a serial-parallel mesh of conductive and capacitive paths, which is analysed for its total conductive or capacitive property. The model was first developed to determine thermal conductivity and is now extended to the dielectric constant by analysing the capacitive mesh. The analysis is carried out by statistical means that obey the physical laws governing serial-parallel branching of the representative electrical mesh. The physical relevance of the analysis is established electrically, but the definition of the electrical mesh is controlled statistically: by parametrizing the compound fractions, by setting the number of representative spheres per unit volume per fraction, and by setting the number of fractions. In this way the model can cover the properties of nearly all possible soil types and phase states, within the Lorenz and Knudsen conditions.
In effect, the model can generate a hypothetical representative of a soil type, which enables clear comparison with results from other soil-type-dependent models. The paper focuses on properly representing the range of porosity found in commonly occurring soils. This work aims to implement the statistical-physical model of the dielectric constant for use in CMEM (Community Microwave Emission Model), which is applicable to SMOS (Soil Moisture and Ocean Salinity, an ESA mission) data. The model's input accepts soil fractions defined in common physical measures and, unlike other empirical models, does not need calibration. It does not depend on recognizing the soil by type; instead, it offers control of accuracy through proper determination of the soil compound fractions. SMOS employs CMEM driven only by the sand-clay-silt composition, whereas soil data in common use are split into tens or even hundreds of soil types depending on the region. We hope that determining the three-element sand-clay-silt compounds, in a few fractions, may help resolve the question of the relevance of soil data as input to CMEM for SMOS. Traditionally employed soil types are now converted to sand-clay-silt compounds, but these hardly cover effects of other specific properties such as porosity. The model should bring advantageous effects in validating SMOS observation data, and this is the aim of Cal/Val project 3275, in the campaigns of the SVRT (SMOS Validation and Retrieval Team). Acknowledgements: this work was funded in part by the PECS (Programme for European Cooperating States), No. 98084, "SWEX/R - Soil Water and Energy Exchange/Research".
Virtual ellipsometry on layered micro-facet surfaces.
Wang, Chi; Wilkie, Alexander; Harcuba, Petr; Novosad, Lukas
2017-09-18
Microfacet-based BRDF models are a common tool to describe light scattering from glossy surfaces. Apart from their wide-ranging applications in optics, such models also play a significant role in computer graphics for photorealistic rendering purposes. In this paper, we mainly investigate the computer graphics aspect of this technology, and present a polarisation-aware brute force simulation of light interaction with both single and multiple layered micro-facet surfaces. Such surface models are commonly used in computer graphics, but the resulting BRDF is ultimately often only approximated. Recently, there has been work to try to make these approximations more accurate, and to better understand the behaviour of existing analytical models. However, these brute force verification attempts still omitted the polarisation state of light and, as we found out, this renders them prone to mis-estimating the shape of the resulting BRDF lobe for some particular material types, such as smooth layered dielectric surfaces. For these materials, non-polarising computations can mis-estimate some areas of the resulting BRDF shape by up to 23%. But we also identified some other material types, such as dielectric layers over rough conductors, for which the difference turned out to be almost negligible. The main contribution of our work is to clearly demonstrate that the effect of polarisation is important for accurate simulation of certain material types, and that there are also other common materials for which it can apparently be ignored. As this required a BRDF simulator that we could rely on, a secondary contribution is that we went to considerable lengths to validate our software. We compare it against a state-of-the-art model from graphics, a library from optics, and also against ellipsometric measurements of real surface samples.
Bray, Benjamin D; Campbell, James; Cloud, Geoffrey C; Hoffman, Alex; James, Martin; Tyrrell, Pippa J; Wolfe, Charles D A; Rudd, Anthony G
2014-11-01
Case mix adjustment is required to allow valid comparison of outcomes across care providers. However, there is a lack of externally validated models suitable for use in unselected stroke admissions. We therefore aimed to develop and externally validate prediction models to enable comparison of 30-day post-stroke mortality outcomes using routine clinical data. Models were derived (n=9000 patients) and internally validated (n=18 169 patients) using data from the Sentinel Stroke National Audit Program, the national register of acute stroke in England and Wales. External validation (n=1470 patients) was performed in the South London Stroke Register, a population-based longitudinal study. Models were fitted using generalized estimating equations. Discrimination and calibration were assessed using receiver operating characteristic curve analysis and correlation plots. Two final models were derived. Model A included age (<60, 60-69, 70-79, 80-89, and ≥90 years), National Institutes of Health Stroke Scale (NIHSS) score on admission, presence of atrial fibrillation on admission, and stroke type (ischemic versus primary intracerebral hemorrhage). Model B was similar but included only the consciousness component of the NIHSS in place of the full NIHSS. Both models showed excellent discrimination and calibration in internal and external validation. The c-statistics in external validation were 0.87 (95% confidence interval, 0.84-0.89) and 0.86 (95% confidence interval, 0.83-0.89) for models A and B, respectively. We have derived and externally validated 2 models to predict mortality in unselected patients with acute stroke using commonly collected clinical variables. In settings where the ability to record the full NIHSS on admission is limited, the level of consciousness component of the NIHSS provides a good approximation of the full NIHSS for mortality prediction. © 2014 American Heart Association, Inc.
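The c-statistic reported for the two stroke models is the probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen survivor, i.e., the area under the ROC curve. A minimal sketch of the pairwise definition (the risks and outcomes below are hypothetical, not the study's data):

```python
def c_statistic(y_true, y_score):
    """Concordance (c) statistic: the probability that a randomly chosen
    event (y=1) has a higher predicted risk than a randomly chosen
    non-event (y=0). Ties count as half. Equivalent to the ROC AUC."""
    events = [s for s, y in zip(y_score, y_true) if y == 1]
    non_events = [s for s, y in zip(y_score, y_true) if y == 0]
    pairs = concordant = ties = 0
    for e in events:
        for n in non_events:
            pairs += 1
            if e > n:
                concordant += 1
            elif e == n:
                ties += 1
    return (concordant + 0.5 * ties) / pairs

# Hypothetical predicted 30-day mortality risks and observed outcomes
y_true = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.5, 0.7, 0.4, 0.2, 0.9]
print(round(c_statistic(y_true, y_score), 3))  # -> 0.889
```

In practice a library routine (e.g., a ROC AUC function) would be used on large registry data; the nested loop here is only to make the definition explicit.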
The Riso-Hudson Enneagram Type Indicator: Estimates of Reliability and Validity
ERIC Educational Resources Information Center
Newgent, Rebecca A.; Parr, Patricia E.; Newman, Isadore; Higgins, Kristin K.
2004-01-01
This investigation was conducted to estimate the reliability and validity of scores on the Riso-Hudson Enneagram Type Indicator (D. R. Riso & R. Hudson, 1999a). Results of 287 participants were analyzed. Alpha suggests an adequate degree of internal consistency. Evidence provides mixed support for construct validity using correlational and…
Prediction of type III secretion signals in genomes of gram-negative bacteria.
Löwer, Martin; Schneider, Gisbert
2009-06-15
Pathogenic bacteria infecting both animals and plants use various mechanisms to transport virulence factors across their cell membranes and channel these proteins into the infected host cell. The type III secretion system represents such a mechanism. Proteins transported via this pathway ("effector proteins") have to be distinguished from all other proteins that are not exported from the bacterial cell. Although a special targeting signal at the N-terminal end of effector proteins has been proposed in the literature, its exact characteristics remain unknown. In this study, we demonstrate that the signals encoded in the sequences of type III secretion system effectors can be consistently recognized and predicted by machine learning techniques. Known protein effectors were compiled from the literature and sequence databases, and served as training data for artificial neural networks and support vector machine classifiers. Common sequence features were most pronounced in the first 30 amino acids of the effector sequences. Classification accuracy yielded a cross-validated Matthews correlation of 0.63 and allowed for genome-wide prediction of potential type III secretion system effectors in 705 proteobacterial genomes (12% of proteins predicted as candidates), their chromosomes (11%) and plasmids (13%), as well as 213 Firmicute genomes (7%). We present a signal prediction method together with a comprehensive survey of potential type III secretion system effectors extracted from 918 published bacterial genomes. Our study demonstrates that the analyzed signal features are common across a wide range of species, and provides a substantial basis for the identification of exported pathogenic proteins as targets for future therapeutic intervention. The prediction software is publicly accessible from our web server (www.modlab.org).
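The Matthews correlation coefficient used to score the classifiers summarizes a binary confusion matrix in one chance-corrected value. A sketch of the standard formula (the confusion-matrix counts below are hypothetical, not taken from the study):

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from confusion-matrix counts.
    Ranges from -1 (total disagreement) through 0 (chance-level
    prediction) to +1 (perfect prediction)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical counts from one cross-validation fold
print(round(matthews_corrcoef(tp=40, tn=45, fp=5, fn=10), 2))  # -> 0.7
```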
Moscati, Arden; Verhulst, Brad; McKee, Kevin; Silberg, Judy; Eaves, Lindon
2018-01-01
Understanding the factors that contribute to behavioral traits is a complex task, and partitioning variance into latent genetic and environmental components is a useful beginning, but it should not also be the end. Many constructs are influenced by their contextual milieu, and accounting for background effects (such as gene-environment correlation) is necessary to avoid bias. This study introduces a method for examining the interplay between traits, in a longitudinal design using differential items in sibling pairs. The model is validated via simulation and power analysis, and we conclude with an application to paternal praise and ADHD symptoms in a twin sample. The model can help identify what type of genetic and environmental interplay may contribute to the dynamic relationship between traits using a cross-lagged panel framework. Overall, it presents a way to estimate and explicate the developmental interplay between a set of traits, free from many common sources of bias.
On the Prediction of Solar Cell Degradation in Space
NASA Astrophysics Data System (ADS)
Bourgoin, J. C.; Boizot, B.; Khirouni, K.; Khorenko, V.
2014-08-01
We discuss the validity of the procedure used to predict the End Of Life (EOL) performance of a solar cell in space. This procedure consists of measuring the performance of the cell after it has been irradiated at the EOL fluence over a time t_i that is very short compared with the duration t_m of the mission in space, i.e., with a considerably larger flux. We show that this procedure is valid only when the defects created by the irradiation do not anneal (thermally or by carrier injection) with a time constant shorter than t_m or larger than t_i. This can be a common situation, since annealing of irradiation-induced defects occurs in all types of cells, at least under specific conditions (temperature, intensity of illumination, flux, and nature of the irradiating particles). Using modeling, we illustrate the effect of injection or thermal annealing on EOL prediction in the case of GaInP, the material at the heart of modern high-efficiency space solar cells.
Catherine A., Tauber; Gilberto S., Albuquerque; Maurice J., Tauber
2011-01-01
Abstract Three species that Navás described – Leucochrysa (Nodita) azevedoi Navás, 1913, Leucochrysa (Nodita) camposi (Navás, 1933) and Leucochrysa (Nodita) morenoi (Navás, 1934) – have received recent taxonomic attention. All three have many similar external features; indeed Navás himself, as well as subsequent authors, confused the species with each other. Here, (a) misidentifications are corrected; (b) a neotype of Leucochrysa azevedoi is designated; (c) Leucochrysa (Nodita) morenoi, previously synonymized with Leucochrysa (Nodita) camposi, is recognized as a valid species [reinstated status]. All three species are redescribed and illustrated, with special emphasis on the types. Leucochrysa (Nodita) azevedoi was found to be relatively common in agricultural areas along Brazil’s Atlantic coast. The two other species are known only from their type localities: Leucochrysa (Nodita) camposi – coastal Ecuador, and Leucochrysa (Nodita) morenoi – Quito, Ecuador. PMID:21594110
Social discourses of healthy eating. A market segmentation approach.
Chrysochou, Polymeros; Askegaard, Søren; Grunert, Klaus G; Kristensen, Dorthe Brogård
2010-10-01
This paper proposes a framework of discourses regarding consumers' healthy eating as a useful conceptual scheme for market segmentation purposes. The objectives are: (a) to identify the appropriate number of health-related segments based on the underlying discursive subject positions of the framework, (b) to validate and further describe the segments based on their socio-demographic characteristics and attitudes towards healthy eating, and (c) to explore differences across segments in types of associations with food and health, as well as perceptions of food healthfulness. 316 Danish consumers participated in a survey that included measures of the underlying subject positions of the proposed framework, followed by a word association task that aimed to explore types of associations with food and health, and perceptions of food healthfulness. A latent class clustering approach revealed three consumer segments: the Common, the Idealists and the Pragmatists. Based on the addressed objectives, differences across the segments are described and implications of findings are discussed.
Leahy, P Devin; Puttlitz, Christian M
2016-01-01
This study examined the cervical spine range of motion (ROM) resulting from whiplash-type hyperextension and hyperflexion ligamentous injuries, and sought to improve the accuracy of specific diagnosis of these injuries. The study was accomplished by measurement of ROM throughout axial rotation, lateral bending, and flexion and extension, using a validated finite element model of the cervical spine that was modified to simulate hyperextension and/or hyperflexion injuries. It was found that the kinematic difference between hyperextension and hyperflexion injuries was minimal throughout the combined flexion and extension ROM measurement that is commonly used for clinical diagnosis of cervical ligamentous injury. However, the two injuries demonstrated substantially different ROM under axial rotation and lateral bending. It is recommended that bending axes beyond flexion and extension be incorporated into clinical diagnosis of cervical ligamentous injury.
Dexter, Alex; Race, Alan M; Steven, Rory T; Barnes, Jennifer R; Hulme, Heather; Goodwin, Richard J A; Styles, Iain B; Bunch, Josephine
2017-11-07
Clustering is widely used in MSI to segment anatomical features and differentiate tissue types, but existing approaches are both CPU and memory-intensive, limiting their application to small, single data sets. We propose a new approach that uses a graph-based algorithm with a two-phase sampling method that overcomes this limitation. We demonstrate the algorithm on a range of sample types and show that it can segment anatomical features that are not identified using commonly employed algorithms in MSI, and we validate our results on synthetic MSI data. We show that the algorithm is robust to fluctuations in data quality by successfully clustering data with a designed-in variance using data acquired with varying laser fluence. Finally, we show that this method is capable of generating accurate segmentations of large MSI data sets acquired on the newest generation of MSI instruments and evaluate these results by comparison with histopathology.
Pattern of adverse drug reactions reported by the community pharmacists in Nepal
Palaian, Subish; Ibrahim, Mohamed I.M.; Mishra, Pranaya
2010-01-01
The pharmacovigilance program in Nepal is less than a decade old, and is hospital centered. This study highlights the findings of a community based pharmacovigilance program involving the community pharmacists. Objectives: To collect the demographic details of the patients experiencing adverse drug reactions (ADR) reported by the community pharmacists; to identify the common drugs causing the ADRs and the common types of ADRs; and to carry out the causality, severity and preventability assessments of the reported ADRs. Methods: The baseline Knowledge-Attitude-Practices (KAP) of 116 community pharmacists from Pokhara valley towards drug safety was evaluated using a validated (Cronbach alpha=0.61) KAP questionnaire having 20 questions [(knowledge 11, attitude 5 and practice 4) maximum possible score 40]. Thirty community pharmacists with high scores were selected for three training sessions, each session lasting for one to two hours, covering the basic knowledge required for the community pharmacists for ADR reporting. A pharmacist from the regional pharmacovigilance center visited the trained community pharmacists every alternate day and collected the filled ADR reporting forms. Results: Altogether 71 ADRs, from 71 patients (37 males), were reported. Antibiotics/antibacterials caused 42% (n=37) of the total ADRs, followed by non-steroidal anti-inflammatory drugs [25% (n=22)]. The ibuprofen/paracetamol combination accounted for ten ADRs. The most common type of ADR was itching [17.2% (n=20)], followed by generalized edema [8.6% (n=10)]. In order to manage the ADRs, the patients needed medical treatment in 69% (n=49) of the cases. Over two thirds (69%) of the ADRs had a ‘possible’ association with the suspected drugs and a high percentage (70.4%) were of ‘mild (level 2)’ type. Nearly two thirds [64.7% (n=46)] of the ADRs were ‘definitely preventable’. Conclusion: The most common class of drugs causing ADRs was antibacterials/antibiotics.
The ibuprofen/paracetamol combination was responsible for the largest number of ADRs, and the most common ADRs were dermatological. Strengthening this program might improve the safe use of medicines in the community. PMID:25126141
Measurement of Harm Outcomes in Older Adults after Hospital Discharge: Reliability and Validity
Douglas, Alison; Letts, Lori; Eva, Kevin; Richardson, Julie
2012-01-01
Objectives. Defining and validating a measure of safety contributes to further validation of clinical measures. The objective was to define and examine the psychometric properties of the outcome “incidents of harm.” Methods. The Incident of Harm Caregiver Questionnaire was administered to caregivers of older adults discharged from hospital by telephone. Caregivers completed daily logs for one month and medical charts were examined. Results. Test-retest reliability (n = 38) was high for the occurrence of an incident of harm (yes/no; kappa = 1.0) and the type of incident (agreement = 100%). Validation against daily logs found no disagreement regarding occurrence or types of incidents. Validation with medical charts found no disagreement regarding incident occurrence and disagreement in half regarding incident type. Discussion. The data support the Incident of Harm Caregiver Questionnaire as a reliable and valid estimation of incidents for this sample and are important to researchers as a method to measure safety when validating clinical measures. PMID:22649728
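The test-retest kappa of 1.0 reported above is Cohen's chance-corrected agreement statistic for the yes/no harm item. A sketch of the computation (the responses below are hypothetical and deliberately imperfect, unlike the study's perfect agreement):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: observed agreement between two sets of ratings,
    corrected for the agreement expected by chance alone."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_exp = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical yes/no "incident of harm" responses at test and retest
test = ["yes", "no", "no", "yes", "no", "no"]
retest = ["yes", "no", "no", "yes", "no", "yes"]
print(round(cohens_kappa(test, retest), 3))  # -> 0.667
```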
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego-Blanco, Jorge R.; Hunke, Elizabeth C.; Urban, Nathan M.
2017-04-01
Here, we implement a variance-based distance metric (Dn) to objectively assess skill of sea ice models when multiple output variables or uncertainties in both model predictions and observations need to be considered. The metric compares observations and model data pairs on common spatial and temporal grids, improving upon highly aggregated metrics (e.g., total sea ice extent or volume) by capturing the spatial character of model skill. The Dn metric is a gamma-distributed statistic that is more general than the χ2 statistic commonly used to assess model fit, which requires the assumption that the model is unbiased and can only incorporate observational error in the analysis. The Dn statistic does not assume that the model is unbiased, and allows the incorporation of multiple observational data sets for the same variable and simultaneously for different variables, along with different types of variances that can characterize uncertainties in both observations and the model. This approach represents a step toward establishing a systematic framework for probabilistic validation of sea ice models. The methodology is also useful for model tuning by using the Dn metric as a cost function and incorporating model parametric uncertainty as part of a scheme to optimize model functionality. We apply this approach to evaluate different configurations of the standalone Los Alamos sea ice model (CICE), encompassing the parametric uncertainty in the model, and to find new sets of model configurations that produce better agreement than previous configurations between model and observational estimates of sea ice concentration and thickness.
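The abstract does not give the Dn formula, but its key feature is that variances of both the model and the observations enter the weighting, unlike a plain χ2 fit statistic (which admits only observational error). A generic variance-weighted distance in that spirit might look like the sketch below; the function name and grid values are illustrative assumptions, not the published metric:

```python
def variance_weighted_distance(model, obs, var_model, var_obs):
    """Illustrative variance-weighted distance between paired model and
    observation values on a common grid. Uncertainty in BOTH the model
    and the observations enters the per-point weighting. This is a
    generic sketch, not the published Dn formula."""
    assert len(model) == len(obs) == len(var_model) == len(var_obs)
    n = len(model)
    return sum(
        (m - o) ** 2 / (vm + vo)
        for m, o, vm, vo in zip(model, obs, var_model, var_obs)
    ) / n

# Hypothetical sea ice concentration values at four grid cells
model = [0.80, 0.65, 0.90, 0.40]
obs = [0.75, 0.70, 0.85, 0.50]
print(round(variance_weighted_distance(model, obs,
                                       [0.01] * 4, [0.01] * 4), 3))  # -> 0.219
```

Used as a cost function, smaller values indicate better model-observation agreement, which matches the tuning role described in the abstract.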
Schäfer, Axel; Lüdtke, Kerstin; Breuel, Franziska; Gerloff, Nikolas; Knust, Maren; Kollitsch, Christian; Laukart, Alex; Matej, Laura; Müller, Antje; Schöttker-Königer, Thomas; Hall, Toby
2018-08-01
Headache is a common and costly health problem. Although the pathogenesis of headache is heterogeneous, one reported contributing factor is dysfunction of the upper cervical spine. The flexion rotation test (FRT) is a commonly used diagnostic test to detect upper cervical movement impairment. The aim of this cross-sectional study was to investigate the concurrent validity of detecting high cervical ROM impairment during the FRT by comparing measurements established by an ultrasound-based system (gold standard) with eyeball estimation. A secondary aim was to investigate the intra-rater reliability of FRT ROM eyeball estimation. The examiner (six years of experience) was blinded to the data from the ultrasound-based device and to the symptoms of the patients. The FRT result (positive or negative) was based on visual estimation of a range of rotation less than 34° to either side. Concurrently, range of rotation was evaluated using the ultrasound-based device. A total of 43 subjects with headache (79% female), with a mean age of 35.05 years (SD 13.26), were included. According to the International Headache Society classification, 23 subjects had migraine, 4 tension-type headache, and 16 multiple headache forms. Sensitivity and specificity were 0.96 and 0.89 for combined rotation, indicating good concurrent validity. The area under the ROC curve was 0.95 (95% CI 0.91-0.98) for rotation to both sides. Intra-rater reliability for eyeball estimation was excellent, with a Fleiss' kappa of 0.79 for both right and left rotation. The results of this study indicate that the FRT is a valid and reliable test to detect impairment of upper cervical ROM in patients with headache.
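Sensitivity and specificity figures like those above come from a standard 2x2 confusion matrix against the gold standard. A quick sketch with hypothetical cell counts (the study reports only the summary statistics, so the counts below are illustrative):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts for FRT results scored against the ultrasound
# gold standard (not the study's actual cross-tabulation)
sens, spec = sens_spec(tp=26, fn=1, tn=14, fp=2)
```

With these toy counts, sensitivity is 26/27 ≈ 0.96 and specificity is 14/16 ≈ 0.88, close to the magnitudes reported.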
Review of GEM Radiation Belt Dropout and Buildup Challenges
NASA Astrophysics Data System (ADS)
Tu, Weichao; Li, Wen; Morley, Steve; Albert, Jay
2017-04-01
In summer 2015, the US NSF GEM (Geospace Environment Modeling) focus group named "Quantitative Assessment of Radiation Belt Modeling" started the "RB dropout" and "RB buildup" challenges, focused on quantitative modeling of radiation belt buildups and dropouts. This is a community effort which includes selecting challenge events, gathering the model inputs required to model radiation belt dynamics during these events (e.g., various magnetospheric waves, plasmapause and density models, electron phase space density data), simulating the challenge events using different types of radiation belt models, and validating the model results by comparison to in situ observations of radiation belt electrons (from Van Allen Probes, THEMIS, GOES, LANL/GEO, etc.). The goal is to quantitatively assess the relative importance of various acceleration, transport, and loss processes in the observed radiation belt dropouts and buildups. Since 2015, the community has selected four challenge events under four categories: "storm-time enhancements", "non-storm enhancements", "storm-time dropouts", and "non-storm dropouts". Model inputs and data for each selected event have been coordinated and shared within the community to establish a common basis for simulations and testing. Modelers within and outside the US with different types of radiation belt models (diffusion-type, diffusion-convection-type, test particle codes, etc.) have participated in our challenge and shared their simulation results and comparisons with spacecraft measurements. Significant progress has been made in quantitative modeling of radiation belt buildups and dropouts, as well as in assessing the models with new measures of model performance. In this presentation, I will review the activities of our "RB dropout" and "RB buildup" challenges and the progress achieved in understanding radiation belt physics and improving model validation and verification.
Development and validation of an all-cause mortality risk score in type 2 diabetes.
Yang, Xilin; So, Wing Yee; Tong, Peter C Y; Ma, Ronald C W; Kong, Alice P S; Lam, Christopher W K; Ho, Chung Shun; Cockram, Clive S; Ko, Gary T C; Chow, Chun-Chung; Wong, Vivian C W; Chan, Juliana C N
2008-03-10
Diabetes reduces life expectancy by 10 to 12 years, but whether death can be predicted in type 2 diabetes mellitus remains uncertain. A prospective cohort of 7583 type 2 diabetic patients enrolled since 1995 was censored on July 30, 2005, or after 6 years of follow-up, whichever came first. A restricted cubic spline model was used to check data linearity and to develop linear-transforming formulas. Data were randomly assigned to a training data set and a test data set. A Cox model was used to develop risk scores in the training data set; calibration and discrimination were assessed in the test data set. A total of 619 patients died during a median follow-up period of 5.51 years, resulting in a mortality rate of 18.69 per 1000 person-years. Age, sex, peripheral arterial disease, cancer history, insulin use, blood hemoglobin level, linear-transformed body mass index, random spot urinary albumin-creatinine ratio, and estimated glomerular filtration rate at enrollment were predictors of all-cause death. A risk score for all-cause mortality was developed using these predictors. The predicted and observed death rates in the test data set were similar (P > .70). The area under the receiver operating characteristic curve was 0.85 for 5 years of follow-up. Using the risk score to rank cause-specific deaths, the area under the receiver operating characteristic curve was 0.95 for genitourinary death, 0.85 for circulatory death, 0.85 for respiratory death, and 0.71 for neoplasm death. Death in type 2 diabetes mellitus can be predicted using a risk score consisting of commonly measured clinical and biochemical variables. Further validation is needed before clinical use.
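The discrimination statistic used here, the area under the ROC curve, can be computed directly as the Mann-Whitney probability that a randomly chosen case receives a higher risk score than a randomly chosen non-case. A small self-contained sketch (the scores are illustrative, not from the study):

```python
def auc_rank(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen case (death) outranks
    a randomly chosen non-case: Mann-Whitney U / (n_pos * n_neg)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count half
    return wins / (len(scores_pos) * len(scores_neg))

# Toy risk scores: three patients who died, four who survived
auc = auc_rank([0.9, 0.8, 0.6], [0.7, 0.4, 0.2, 0.1])
```

An AUC of 0.85, as reported for 5-year all-cause mortality, means a deceased patient's score exceeded a survivor's score in 85% of such random pairings.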
Development and Validation of a New Reliable Method for the Diagnosis of Avian Botulism.
Le Maréchal, Caroline; Rouxel, Sandra; Ballan, Valentine; Houard, Emmanuelle; Poezevara, Typhaine; Bayon-Auboyer, Marie-Hélène; Souillard, Rozenn; Morvan, Hervé; Baudouard, Marie-Agnès; Woudstra, Cédric; Mazuet, Christelle; Le Bouquin, Sophie; Fach, Patrick; Popoff, Michel; Chemaly, Marianne
2017-01-01
Liver is a reliable matrix for laboratory confirmation of avian botulism using real-time PCR. Here, we developed, optimized, and validated the analytical steps preceding PCR to maximize the detection of Clostridium botulinum group III in avian liver. These pre-PCR steps included enrichment incubation of the whole liver (maximum 25 g) at 37°C for at least 24 h in an anaerobic chamber and DNA extraction using an enzymatic digestion step followed by a DNA purification step. Conditions of sample storage before analysis appear to have a strong effect on the detection of group III C. botulinum strains, and our results support storage at temperatures below -18°C. Short-term storage at 5°C is possible for up to 24 h, but a decrease in sensitivity was observed at 48 h of storage at this temperature. Analysis of whole livers (maximum 25 g) is required, and pooling samples before enrichment culturing must be avoided. Pooling is, however, possible before or after DNA extraction under certain conditions. Whole livers should be 10-fold diluted in enrichment medium and homogenized using a Pulsifier® blender (Microgen, Surrey, UK) instead of a conventional paddle blender. Spiked liver samples showed a limit of detection of 5 spores/g liver for types C and D and 250 spores/g for type E. Using the method developed here, the analysis of 268 samples from 73 suspected outbreaks showed 100% specificity and 95.35% sensitivity compared with other PCR-based methods considered as reference. The mosaic type C/D was the most common neurotoxin type found in examined samples, which included both wild and domestic birds.
PMID:28076405
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Farooq, Mohammad U.
1986-01-01
The definition of proposed research addressing the development and validation of a methodology for the design and evaluation of user interfaces for interactive information systems is given. The major objectives of this research are: the development of a comprehensive, objective, and generalizable methodology for the design and evaluation of user interfaces for information systems; the development of equations and/or analytical models to characterize user behavior and the performance of a designed interface; the design of a prototype system for the development and administration of user interfaces; and the design and use of controlled experiments to support the research and test/validate the proposed methodology. The proposed design methodology views the user interface as a virtual machine composed of three layers: an interactive layer, a dialogue manager layer, and an application interface layer. A command language model of user system interactions is presented because of its inherent simplicity and structured approach based on interaction events. All interaction events have a common structure based on common generic elements necessary for a successful dialogue. It is shown that, using this model, various types of interfaces could be designed and implemented to accommodate various categories of users. The implementation methodology is discussed in terms of how to store and organize the information.
Fission yield and criticality excursion code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchard, A.
2000-06-30
The ANSI/ANS-8.3 standard allows a maximum yield not to exceed 2 x 10 fissions to be used in calculations of where the alarm system is required to be effective. It is common practice to use this allowance or to develop some other yield based on past criticality accident history or excursion experiments. The literature on the subject of yields discusses maximum yields larger and somewhat smaller than the ANS-8.3 permissive value. The ability to model criticality excursions and vary the various parameters to determine a credible maximum yield for operation-specific cases has been available for some time but is not in common use by criticality safety specialists. The topic of yields for various solutions, metals, oxide powders, etc. in various geometries and containers has been published by laboratory specialists and university staff and students for many decades but has not been accessible to practitioners. The need for best-estimate calculations of fission yields with a well-validated criticality excursion code has long been recognized, but no coordinated effort has been made so far to develop a generalized and well-validated excursion code for different types of systems. In this paper, the current practices for estimating fission yields are summarized, along with their shortcomings for 12-Rad zone (at SRS) and Criticality Alarm System (CAS) calculations. Finally, the need for a user-friendly excursion code is reemphasized.
Lund, Travis J.; Pilarz, Matthew; Velasco, Jonathan B.; Chakraverty, Devasmita; Rosploch, Kaitlyn; Undersander, Molly; Stains, Marilyne
2015-01-01
Researchers, university administrators, and faculty members are increasingly interested in measuring and describing instructional practices provided in science, technology, engineering, and mathematics (STEM) courses at the college level. Specifically, there is keen interest in comparing instructional practices between courses, monitoring changes over time, and mapping observed practices to research-based teaching. While increasingly common observation protocols (Reformed Teaching Observation Protocol [RTOP] and Classroom Observation Protocol in Undergraduate STEM [COPUS]) at the postsecondary level help achieve some of these goals, they also suffer from weaknesses that limit their applicability. In this study, we leverage the strengths of these protocols to provide an easy method that enables the reliable and valid characterization of instructional practices. This method was developed empirically via a cluster analysis using observations of 269 individual class periods, corresponding to 73 different faculty members, 28 different research-intensive institutions, and various STEM disciplines. Ten clusters, called COPUS profiles, emerged from this analysis; they represent the most common types of instructional practices enacted in the classrooms observed for this study. RTOP scores were used to validate the alignment of the 10 COPUS profiles with reformed teaching. Herein, we present a detailed description of the cluster analysis method, the COPUS profiles, and the distribution of the COPUS profiles across various STEM courses at research-intensive universities. PMID:25976654
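The cluster analysis described above can be illustrated with a minimal k-means pass over toy "instructional profiles"; the two-dimensional profiles and cluster count below are illustrative stand-ins for the real COPUS code fractions, and the implementation is a generic sketch rather than the authors' procedure:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: assign each class-period profile to the nearest
    centroid, then recompute centroids, repeating for a fixed number of
    iterations. X must be a float array of shape (n_profiles, n_features)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Squared Euclidean distance of every profile to every centroid
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two toy "profiles": fraction of time lecturing vs. on student activity
X = np.array([[0.9, 0.1], [0.85, 0.15], [0.2, 0.8], [0.25, 0.75]])
labels, centers = kmeans(X, k=2)
```

The lecture-heavy and activity-heavy class periods land in separate clusters; in the study, the analogous clusters over 269 observed class periods become the ten COPUS profiles.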
Development and validation of a microchip pulsed laser for ESA space altimeters
NASA Astrophysics Data System (ADS)
Couto, Bruno; Abreu, Hernâni; Gordo, Paulo; Amorim, António
2016-10-01
The development and validation of small-size laser sources for space-based range finding is of crucial importance to the development of miniature LIDAR devices for European space missions, particularly for planet lander probes. In this context, CENTRA-SIM is developing a passively Q-switched microchip laser in the 1.5 μm wavelength range. Pulses on the order of 2 ns and 100 μJ were found to be suitable for range finding for small landing platforms. Both glass and crystalline Yb-Er doped active media are commonly available. Crystalline media present higher thermal conductivity and hardness, which allows for higher pumping intensities. However, glass laser media present longer laser upper-state lifetimes, and 99% Yb-Er transfer efficiency makes phosphate glasses the typically preferred host for this type of application. In addition, passively Q-switched microchip lasers with Yb-Er doped phosphate glass have been reported to output >100 μJ pulses, while their crystalline-host counterparts achieve a few tens of μJ at best. Two different types of rate equation models have been found: models based on microscopic quantities and models based on macroscopic quantities. Based on the works of Zolotovskaya et al. and Spühler et al., we have developed a computer model that further exploits the equivalence between the two types of approaches. The simulation studies, using commercially available components, allowed us to design a compact laser emitting 80 μJ pulses with up to 30 kW peak power and 1 to 2 ns pulse width. We considered EAT14 Yb-Er doped glass as the active medium and Co2+:MgAl2O4 as the saturable absorber. The active medium is pumped by a 975 nm semiconductor laser focused into a 200 μm spot. Measurements on an experimental test bench were carried out to validate the numerical model. Several different combinations of saturable absorber length and output coupling were tested.
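The quoted pulse parameters can be sanity-checked with the simple square-pulse relation τ ≈ E / P_peak; the remark about the pulse-shape factor is an assumption of this sketch, not a statement from the paper:

```python
# Consistency check of the reported design figures
energy_j = 80e-6   # 80 uJ pulse energy
peak_w = 30e3      # 30 kW peak power

# Square-pulse approximation: pulse width ~ energy / peak power
width_s = energy_j / peak_w

# This gives roughly 2.7 ns; a realistic (non-square) pulse shape has a
# width-to-energy factor below 1, bringing it toward the reported 1-2 ns.
```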
Howard, Siobhán; Hughes, Brian M
2012-01-01
The Type D personality, identified by high negative affectivity paired with high social inhibition, has been associated with a number of health-related outcomes in (mainly) cardiac populations. However, despite its prevalence in the health-related literature, how this personality construct fits within existing personality theory has not been directly tested. Using a sample of 134 healthy university students, this study examined the Type D personality in terms of two well-established personality traits: introversion and neuroticism. Construct, concurrent, and discriminant validity of this personality type was established through examination of the associations between the Type D personality and psychometrically assessed anxiety, depression, and stress, as well as measurement of resting cardiovascular function. Results showed that while the Type D personality was easily represented using alternative measures of both introversion and neuroticism, associations with anxiety, depression, and stress were mainly accounted for by neuroticism. Conversely, however, associations with resting cardiac output were attributable to the negative affectivity-social inhibition synergy explicit within the Type D construct. Consequently, both the construct and concurrent validity of this personality type were confirmed, with discriminant validity evident on examination of physiological indices of well-being.
Hibbard, S; Tang, P C; Latko, R; Park, J H; Munn, S; Bolz, S; Somerville, A
2000-12-01
Thematic Apperception Test (Murray, 1943) responses of 69 Asian American (hereafter, Asian) and 83 White students were coded for defenses according to the Defense Mechanism Manual (Cramer, 1991b) and studied for differential validity in predicting paper-and-pencil measures of relevant constructs. Three tests for differential validity were used: (a) differences between validity coefficients, (b) interactions between predictor and ethnicity in criterion prediction, and (c) differences between groups in mean prediction errors using a common regression equation. Modest differential validity was found. Surprisingly, the DMM scales were slightly stronger predictors of their criteria among Asians than among Whites; when a common regression equation was used, desirable criteria were overpredicted for Asians, whereas undesirable ones were overpredicted for Whites. The results were not affected by acculturation level or English vocabulary among the Asians.
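The first of the three differential-validity tests, the difference between validity coefficients from two independent groups, is conventionally carried out with Fisher's r-to-z transformation. A sketch using the study's group sizes but hypothetical correlations (the abstract does not report the coefficients themselves):

```python
import math

def fisher_z_diff(r1, n1, r2, n2):
    """Test statistic for the difference between two independent
    correlations: z = (z1 - z2) / sqrt(1/(n1-3) + 1/(n2-3)),
    where z_i = atanh(r_i) is Fisher's r-to-z transform."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (z1 - z2) / se

# Hypothetical validity coefficients for the Asian (n=69) and
# White (n=83) groups; values are illustrative only
z = fisher_z_diff(r1=0.45, n1=69, r2=0.30, n2=83)
```

With these toy values z ≈ 1.05, below the usual 1.96 cutoff, i.e., no significant difference between the two validity coefficients.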
NASA Astrophysics Data System (ADS)
Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra
2013-03-01
In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, model selection is usually based on some measure of goodness-of-fit between the model prediction and observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has been widely adopted in chemometrics and econometrics) in RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability when compared with the LOO.
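The MCCV versus LOO comparison can be sketched in a few lines: LOO leaves out each site exactly once, while MCCV repeatedly leaves out a random fraction (here 30%, an illustrative choice) and averages the prediction error. The sketch below uses ordinary least squares on synthetic data rather than the paper's GLSR:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_predict(Xtr, ytr, Xte):
    """OLS with intercept: fit on (Xtr, ytr), predict at Xte."""
    beta, *_ = np.linalg.lstsq(np.c_[np.ones(len(Xtr)), Xtr], ytr, rcond=None)
    return np.c_[np.ones(len(Xte)), Xte] @ beta

def loo_mse(X, y):
    """Leave-one-out: each site is held out once."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        pred = fit_predict(X[mask], y[mask], X[~mask])
        errs.append((pred[0] - y[i]) ** 2)
    return float(np.mean(errs))

def mccv_mse(X, y, n_splits=200, leave_frac=0.3):
    """Monte Carlo CV: repeatedly hold out a random fraction of sites."""
    n = len(y)
    k = max(1, int(leave_frac * n))
    errs = []
    for _ in range(n_splits):
        idx = rng.permutation(n)
        te, tr = idx[:k], idx[k:]
        pred = fit_predict(X[tr], y[tr], X[te])
        errs.append(np.mean((pred - y[te]) ** 2))
    return float(np.mean(errs))

# Toy "regional" data: one catchment descriptor predicting a flood quantile
X = rng.normal(size=(40, 1))
y = 2.0 + 1.5 * X[:, 0] + rng.normal(scale=0.5, size=40)
```

Because MCCV holds out a larger fraction per split, its error estimate penalizes overfitted models more strongly, which is one intuition for why it tends to select more parsimonious regression equations.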
Ingram, Paul B; Ternes, Michael S
2016-05-01
This study synthesized research evaluation of the effectiveness of the over-reporting validity scales of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) for detecting intentionally feigned over-endorsements of symptoms using a moderated meta-analysis. After identifying experimental and quasi-experimental studies for inclusion (k = 25) in which the validity scales of the MMPI-2-RF were compared between groups of respondents, moderated meta-analyses were conducted for each of its five over-reporting scales. These meta-analyses explored the general effectiveness of each scale across studies, as well as the impact that several moderators had on scale performance, including comparison group, study type (i.e. real versus simulation), age, education, sex, and diagnosis. The over-reporting scales of the MMPI-2-RF act as effective general measures for the detection of malingering and over endorsement of symptoms with individual scales ranging in effectiveness from an effect size of 1.08 (Symptom Validity; FBS-r) to 1.43 (Infrequent Pathology; Fp-r), each with different patterns of moderating influence. The MMPI-2-RF validity scales effectively discriminate between groups of respondents presenting in either an honest manner or with patterned exaggeration and over-endorsement of symptoms. The magnitude of difference observed between honest and malingering groups was substantially narrower than might be expected using traditional cut-scores for the validity scales, making interpretation within the evaluation context particularly important. While all over-reporting scales are effective, the FBS-r and RBS scales are those least influenced by common and context specific moderating influences, such as respondent or comparison grouping.
Thomas, Hannah J; Scott, James G; Coates, Jason M; Connor, Jason P
2018-05-03
Intervention on adolescent bullying is reliant on valid and reliable measurement of victimization and perpetration experiences across different behavioural expressions. This study developed and validated a survey tool that integrates measurement of both traditional and cyber bullying to test a theoretically driven multi-dimensional model. Adolescents from 10 mainstream secondary schools completed a baseline and follow-up survey (N = 1,217; M age = 14 years; 66.2% male). The Bullying and cyberbullying Scale for Adolescents (BCS-A) developed for this study comprised parallel victimization and perpetration subscales, each with 20 items. Additional measures of bullying (Olweus Global Bullying and the Forms of Bullying Scale [FBS]), as well as measures of internalizing and externalizing problems, school connectedness, social support, and personality, were used to further assess validity. Factor structure was determined, and then, the suitability of items was assessed according to the following criteria: (1) factor interpretability, (2) item correlations, (3) model parsimony, and (4) measurement equivalence across victimization and perpetration experiences. The final models comprised four factors: physical, verbal, relational, and cyber. The final scale was revised to two 13-item subscales. The BCS-A demonstrated acceptable concurrent and convergent validity (internalizing and externalizing problems, school connectedness, social support, and personality), as well as predictive validity over 6 months. The BCS-A has sound psychometric properties. This tool establishes measurement equivalence across types of involvement and behavioural forms common among adolescents. An improved measurement method could add greater rigour to the evaluation of intervention programmes and also enable interventions to be tailored to subscale profiles. © 2018 The British Psychological Society.
[Development and validity of workplace bullying in nursing-type inventory (WPBN-TI)].
Lee, Younju; Lee, Mihyoung
2014-04-01
The purpose of this study was to develop an instrument to assess bullying of nurses and to test the validity and reliability of the instrument. The initial thirty items of the WPBN-TI were identified through a review of the literature on types of bullying related to nursing and in-depth interviews with 14 nurses who had experienced bullying at work. Sixteen items were developed through 2 content validity tests by 9 experts and 10 nurses. The final WPBN-TI instrument was evaluated by 458 nurses from five general hospitals in the Incheon metropolitan area. The SPSS 18.0 program was used to assess the instrument based on internal consistency reliability, construct validity, and criterion validity. The WPBN-TI consisted of 16 items with three distinct factors (verbal and nonverbal bullying, work-related bullying, and external threats), which explained 60.3% of the total variance. The convergent validity and discriminant validity for the WPBN-TI were 100.0% and 89.7%, respectively. Known-groups validity of the WPBN-TI was demonstrated through the mean difference between subjective perceptions of bullying. The criterion validity for the WPBN-TI exceeded .70. The reliability of the WPBN-TI was a Cronbach's α of .91. The WPBN-TI, with high validity and reliability, is suitable for determining types of bullying in the nursing workplace.
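The Cronbach's α reported for the WPBN-TI is the standard internal-consistency coefficient. A minimal implementation on toy item responses (the responses below are illustrative, not WPBN-TI data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances) / variance(totals)),
    where k is the number of items and totals are per-respondent sum scores."""
    items = np.asarray(items, dtype=float)   # shape (respondents, items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy responses from 4 respondents to 3 items on a 5-point scale
scores = [[4, 5, 4],
          [2, 2, 3],
          [5, 4, 5],
          [1, 2, 1]]
alpha = cronbach_alpha(scores)
```

Highly correlated items, as in this toy matrix, push α toward 1; the instrument's reported α of .91 indicates strong internal consistency across its 16 items.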
2002-02-01
NVLAP procedures are compatible with, among others, the most recent official publications of ISO/IEC 17025 (formerly ISO/IEC Guide 25), ISO Guides 2, 30... and the relevant requirements of ISO 9002-1994. NVLAP Handbook 150-20 contains information that is specific to Common Criteria...
Role of NSO compounds during primary cracking of a Type II kerogen and a Type III lignite
Behar, F.; Lorant, F.; Lewan, M.
2008-01-01
The aim of this work is to follow the generation of NSO compounds during the artificial maturation of an immature Type II kerogen and a Type III lignite in order to determine the different sources of the petroleum potential during primary cracking. Experiments were carried out in closed-system pyrolysis in the temperature range from 225 to 350 °C. Two types of NSOs were recovered: one is soluble in n-pentane and the second in dichloromethane. A kinetic scheme was optimised including both kerogen and NSO cracking. It was validated by complementary experiments carried out on isolated asphaltenes generated from the Type II kerogen and on the total n-pentane and DCM extracts generated from the Type III lignite. Results show that kerogen and lignite first decompose into DCM NSOs with minor generation of hydrocarbons. Then, the main source of petroleum potential originates from secondary cracking of both DCM and n-pentane NSOs through successive decomposition reactions. These results confirm the model proposed by Tissot [Tissot, B., 1969. Premières données sur les mécanismes et la cinétique de la formation du pétrole dans les bassins sédimentaires. Simulation d'un schéma réactionnel sur ordinateur. Oil and Gas Science and Technology 24, 470-501] in which the main source of hydrocarbons is not the insoluble organic matter, but the NSO fraction. As secondary cracking of the NSOs largely overlaps that of the kerogen, it was demonstrated that bulk kinetics in open system is a result of both kerogen and NSO cracking. Thus, another kinetic scheme for primary cracking in open system was built as a combination of kerogen and NSO cracking. This new kinetic scheme accounts for both the rate and amounts of hydrocarbons generated in a closed pyrolysis system. Thus, the concept of successive steps for hydrocarbon generation is valid for the two types of pyrolysis system and, for the first time, a common kinetic scheme is available for extrapolating results to natural case studies. © 2007 Elsevier Ltd. All rights reserved.
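The successive-steps idea (kerogen decomposes to NSO compounds, which then crack to hydrocarbons) can be illustrated with the analytical solution of a two-step first-order series reaction; the rate constants below are illustrative, not the paper's optimised values:

```python
import math

def successive_cracking(k1, k2, t):
    """Series reaction kerogen -(k1)-> NSO -(k2)-> hydrocarbons, both steps
    first order, starting from a unit amount of kerogen. Returns the
    fractions of kerogen (K), NSO intermediate (N), and hydrocarbons (HC)
    remaining/produced at time t (classic analytical solution)."""
    K = math.exp(-k1 * t)
    if abs(k1 - k2) < 1e-12:
        N = k1 * t * math.exp(-k1 * t)          # degenerate equal-rate case
    else:
        N = k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    HC = 1.0 - K - N                             # mass balance
    return K, N, HC

# Illustrative rates: primary cracking faster than NSO secondary cracking
K, N, HC = successive_cracking(k1=0.05, k2=0.02, t=30.0)
```

With k1 > k2, the NSO pool builds up before hydrocarbons dominate, mirroring the paper's finding that the NSO fraction, not the insoluble organic matter, is the main direct source of hydrocarbons.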
Hackley, Paul C.; Araujo, Carla Viviane; Borrego, Angeles G.; Bouzinos, Antonis; Cardott, Brian; Cook, Alan C.; Eble, Cortland; Flores, Deolinda; Gentzis, Thomas; Gonçalves, Paula Alexandra; Filho, João Graciano Mendonça; Hámor-Vidó, Mária; Jelonek, Iwona; Kommeren, Kees; Knowles, Wayne; Kus, Jolanta; Mastalerz, Maria; Menezes, Taíssa Rêgo; Newman, Jane; Pawlewicz, Mark; Pickel, Walter; Potter, Judith; Ranasinghe, Paddy; Read, Harold; Reyes, Julito; Rodriguez, Genaro De La Rosa; de Souza, Igor Viegas Alves Fernandes; Suarez-Ruiz, Isabel; Sýkorová, Ivana; Valentine, Brett J.
2015-01-01
Vitrinite reflectance generally is considered the most robust thermal maturity parameter available for application to hydrocarbon exploration and petroleum system evaluation. However, until 2011 there was no standardized methodology available to provide guidelines for vitrinite reflectance measurements in shale. Efforts to correct this deficiency resulted in publication of ASTM D7708: Standard test method for microscopical determination of the reflectance of vitrinite dispersed in sedimentary rocks. In 2012-2013, an interlaboratory exercise was conducted to establish precision limits for the D7708 measurement technique. Six samples, representing a wide variety of shale, were tested in duplicate by 28 analysts in 22 laboratories from 14 countries. Samples ranged from immature to overmature (0.31-1.53% Ro), from organic-lean to organic-rich (1-22 wt.% total organic carbon), and contained Type I (lacustrine), Type II (marine), and Type III (terrestrial) kerogens. Repeatability limits (maximum difference between valid repetitive results from same operator, same conditions) ranged from 0.03-0.11% absolute reflectance, whereas reproducibility limits (maximum difference between valid results obtained on same test material by different operators, different laboratories) ranged from 0.12-0.54% absolute reflectance. Repeatability and reproducibility limits degraded consistently with increasing maturity and decreasing organic content. However, samples with terrestrial kerogens (Type III) fell off this trend, showing improved levels of reproducibility due to higher vitrinite content and improved ease of identification. Operators did not consistently meet the reporting requirements of the test method, indicating that a common reporting template is required to improve data quality. 
The most difficult problem encountered was the petrographic distinction of solid bitumens and low-reflecting inert macerals from vitrinite when vitrinite occurred with reflectance ranges overlapping the other components. Discussion among participants suggested this problem could not be easily corrected via kerogen concentration or solvent extraction and is related to operator training and background. No statistical difference in mean reflectance was identified between participants reporting bitumen reflectance vs. vitrinite reflectance vs. a mixture of bitumen and vitrinite reflectance values, suggesting empirical conversion schemes should be treated with caution. Analysis of reproducibility limits obtained during this exercise in comparison to reproducibility limits from historical interlaboratory exercises suggests use of a common methodology (D7708) improves interlaboratory precision. Future work will investigate opportunities to improve reproducibility in high maturity, organic-lean shale varieties.
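Repeatability and reproducibility limits of the kind quoted in this exercise are conventionally defined (e.g., in ASTM E691/E177 practice) as 2.8 times the corresponding standard deviations, i.e., the largest absolute difference expected between two valid results at roughly 95% confidence. A sketch with hypothetical standard deviations (the study reports limits, not the underlying standard deviations):

```python
def precision_limits(sr, sR):
    """Repeatability limit r and reproducibility limit R: 2.8 times the
    repeatability (within-lab) and reproducibility (between-lab) standard
    deviations, per common ASTM interlaboratory-study practice."""
    return 2.8 * sr, 2.8 * sR

# Illustrative standard deviations for a low-maturity shale round robin
r, R = precision_limits(sr=0.02, sR=0.06)
```

With these toy inputs, r = 0.056 and R = 0.168 (absolute % reflectance), magnitudes comparable to the lower ends of the ranges reported above.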
Talmud, Philippa J; Hingorani, Aroon D; Cooper, Jackie A; Marmot, Michael G; Brunner, Eric J; Kumari, Meena; Kivimäki, Mika; Humphries, Steve E
2010-01-14
To assess the performance of a panel of common single nucleotide polymorphisms (genotypes) associated with type 2 diabetes in distinguishing incident cases of future type 2 diabetes (discrimination), and to examine the effect of adding genetic information to previously validated non-genetic (phenotype-based) models developed to estimate the absolute risk of type 2 diabetes. Workplace-based prospective cohort study with three five-yearly medical screenings. 5535 initially healthy people (mean age 49 years; 33% women), of whom 302 developed new-onset type 2 diabetes over 10 years. Non-genetic variables included in two established risk models - the Cambridge type 2 diabetes risk score (age, sex, drug treatment, family history of type 2 diabetes, body mass index, smoking status) and the Framingham offspring study type 2 diabetes risk score (age, sex, parental history of type 2 diabetes, body mass index, high density lipoprotein cholesterol, triglycerides, fasting glucose) - and 20 single nucleotide polymorphisms associated with susceptibility to type 2 diabetes. Cases of incident type 2 diabetes were defined on the basis of a standard oral glucose tolerance test, self-report of a doctor's diagnosis, or the use of anti-diabetic drugs. A genetic score based on the number of risk alleles carried (range 0-40; area under receiver operating characteristic curve 0.54, 95% confidence interval 0.50 to 0.58) and a genetic risk function in which carriage of risk alleles was weighted according to the summary odds ratios of their effect from meta-analyses of genetic studies (area under receiver operating characteristic curve 0.55, 0.51 to 0.59) did not effectively discriminate cases of diabetes. The Cambridge risk score (area under curve 0.72, 0.69 to 0.76) and the Framingham offspring risk score (area under curve 0.78, 0.75 to 0.82) led to better discrimination of cases than did genotype-based tests.
Adding genetic information to phenotype-based risk models did not improve discrimination and provided only a small improvement in model calibration and a modest net reclassification improvement of about 5% when added to the Cambridge risk score but not when added to the Framingham offspring risk score. The phenotype-based risk models provided greater discrimination for type 2 diabetes than did models based on 20 common independently inherited diabetes risk alleles. The addition of genotypes to phenotype-based risk models produced only minimal improvement in the accuracy of risk estimation assessed by recalibration and, at best, a minor net reclassification improvement. The major translational application of the currently known common, small-effect genetic variants influencing susceptibility to type 2 diabetes is likely to come from the insight they provide into the causes of disease and potential therapeutic targets.
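The discrimination statistic reported above is the area under the ROC curve, and the unweighted genetic score is simply a count of risk alleles. A minimal sketch of both, using invented genotype data (0/1/2 risk alleles per SNP) rather than the study's 20-SNP panel, with the AUC computed via its Mann-Whitney (rank) formulation:

```python
# Sketch: unweighted genetic risk score (count of risk alleles across loci)
# and its ROC AUC via the Mann-Whitney U statistic.
# Genotypes and case/control labels below are invented for illustration.

def risk_score(genotypes):
    """Sum of risk alleles carried across all SNPs (range 0..2*n_snps)."""
    return sum(genotypes)

def auc(scores_cases, scores_controls):
    """ROC AUC = P(case score > control score) + 0.5 * P(tie)."""
    wins = ties = 0
    for sc in scores_cases:
        for sn in scores_controls:
            if sc > sn:
                wins += 1
            elif sc == sn:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_cases) * len(scores_controls))

# Each inner list: risk-allele counts at 3 hypothetical SNPs.
cases = [[2, 1, 2], [1, 1, 1], [2, 2, 1], [1, 2, 2]]
controls = [[0, 1, 1], [1, 0, 2], [1, 1, 1], [2, 2, 1], [2, 0, 0]]

case_scores = [risk_score(g) for g in cases]
control_scores = [risk_score(g) for g in controls]
print("AUC =", auc(case_scores, control_scores))  # 0.5 = chance, 1.0 = perfect
```

An AUC near 0.54, as reported for the 20-SNP score, means the score ranks a random case above a random control only slightly more often than chance.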
Barnett, David; Louzao, Raul; Gambell, Peter; De, Jitakshi; Oldaker, Teri; Hanson, Curtis A
2013-01-01
As a matter of good laboratory practice, laboratories running flow cytometry and other cell-based fluorescence assays are required to validate all assays, which, when used in clinical practice, may pass through regulatory review processes using criteria often defined with a soluble analyte in plasma or serum samples in mind. Recently the U.S. Food and Drug Administration (FDA) has entered into a public dialogue regarding its regulatory interest in laboratory developed tests (LDTs), or so-called "home brew" assays, performed in clinical laboratories. The absence of well-defined guidelines for validation of cell-based assays using fluorescence detection has thus become a subject of concern for the International Council for Standardization of Haematology (ICSH) and the International Clinical Cytometry Society (ICCS). Accordingly, a group of over 40 international experts in the areas of test development, test validation, and clinical practice of a variety of assay types using flow cytometry and/or morphologic image analysis was invited to develop a set of practical guidelines useful to in vitro diagnostic (IVD) innovators, clinical laboratories, regulatory scientists, and laboratory inspectors. The focus of the group was restricted to fluorescence reporter reagents, although some common principles are shared with immunohistochemistry and immunocytochemistry techniques and are noted where appropriate. The work product of this two-year effort is the content of this special issue of this journal, published as 5 separate articles, this being Validation of Cell-based Fluorescence Assays: Practice Guidelines from the ICSH and ICCS - Part IV - Postanalytic considerations. © 2013 International Clinical Cytometry Society.
Davis, Bruce H; Dasgupta, Amar; Kussick, Steven; Han, Jin-Yeong; Estrellado, Annalee
2013-01-01
As a matter of good laboratory practice, laboratories running flow cytometry and other cell-based fluorescence assays are required to validate all assays, which, when used in clinical practice, may pass through regulatory review processes using criteria often defined with a soluble analyte in plasma or serum samples in mind. Recently the U.S. Food and Drug Administration (FDA) has entered into a public dialogue regarding its regulatory interest in laboratory developed tests (LDTs), or so-called "home brew" assays, performed in clinical laboratories. The absence of well-defined guidelines for validation of cell-based assays using fluorescence detection has thus become a subject of concern for the International Council for Standardization of Haematology (ICSH) and the International Clinical Cytometry Society (ICCS). Accordingly, a group of over 40 international experts in the areas of test development, test validation, and clinical practice of a variety of assay types using flow cytometry and/or morphologic image analysis was invited to develop a set of practical guidelines useful to in vitro diagnostic (IVD) innovators, clinical laboratories, regulatory scientists, and laboratory inspectors. The focus of the group was restricted to fluorescence reporter reagents, although some common principles are shared with immunohistochemistry and immunocytochemistry techniques and are noted where appropriate. The work product of this two-year effort is the content of this special issue of this journal, published as 5 separate articles, this being Validation of Cell-based Fluorescence Assays: Practice Guidelines from the ICSH and ICCS - Part II - Preanalytical issues. © 2013 International Clinical Cytometry Society.
Theilmann, Wiebke; Löscher, Wolfgang; Socala, Katarzyna; Frieling, Helge; Bleich, Stefan; Brandt, Claudia
2014-06-01
Electroconvulsive therapy is the most effective therapy for major depressive disorder (MDD). The remission rate is above 50% in previously pharmacoresistant patients, but the mechanisms of action are not fully understood. Electroconvulsive stimulation (ECS) in rodents mimics antidepressant electroconvulsive therapy (ECT) in humans and is widely used to investigate the underlying mechanisms of ECT. For findings in animal models to have translational value, it is essential to establish models with the highest construct, face, and predictive validity possible. The commonly used model for ECT in rodents does not meet the demand for high construct validity. For ECT, cortical surface electrodes are used to induce therapeutic seizures, whereas ECS in rodents is exclusively performed with auricular or corneal electrodes. However, the stimulation site has a major impact on the type and spread of the induced seizure activity and on its antidepressant effect. We propose a method in which ECS is performed via screw electrodes placed above the motor cortex of rats to closely simulate the clinical situation and thereby increase the construct validity of the model. Cortical ECS in rats reliably induced seizures comparable to those of human ECT. Cortical ECS was more effective than auricular ECS at reducing immobility in the forced swim test. Importantly, auricular stimulation had a negative influence on the general health condition of the rats, with signs of fear during the stimulation sessions. These results suggest that auricular ECS in rats is not a suitable ECT model. Cortical ECS in rats promises to be a valid method to mimic ECT. Copyright © 2014 Elsevier Ltd. All rights reserved.
Measuring Resource Utilization: A Systematic Review of Validated Self-Reported Questionnaires.
Leggett, Laura E; Khadaroo, Rachel G; Holroyd-Leduc, Jayna; Lorenzetti, Diane L; Hanson, Heather; Wagg, Adrian; Padwal, Raj; Clement, Fiona
2016-03-01
A variety of methods may be used to obtain costing data. Although administrative data are most commonly used, the data available in these datasets are often limited. An alternative method of obtaining costing data is through self-reported questionnaires. Currently, there are no systematic reviews that summarize self-reported resource utilization instruments from the published literature. The aim of the study was to identify validated self-report healthcare resource use instruments and to map their attributes. A systematic review was conducted. The search identified articles using terms like "healthcare utilization" and "questionnaire." All abstracts and full texts were considered in duplicate. For inclusion, studies had to assess the validity of a self-reported resource use questionnaire, report original data, include adult populations, and the questionnaire had to be publicly available. Data such as the type of resource utilization assessed by each questionnaire and the validation findings were extracted from each study. In all, 2343 unique citations were retrieved; 2297 were excluded during abstract review. Forty-six studies were reviewed in full text, and 15 studies were included in this systematic review. Six assessed resource utilization of patients with chronic conditions; 5 assessed mental health service utilization; 3 assessed resource utilization by a general population; and 1 assessed utilization in older populations. The most frequently measured resources included visits to general practitioners and inpatient stays; nonmedical resources were least frequently measured. Self-reported questionnaires on resource utilization had good agreement with administrative data, although visits to general practitioners, outpatient days, and nurse visits had poorer agreement. Self-reported questionnaires are a valid method of collecting data on healthcare resource utilization.
Randomized controlled trials in dentistry: common pitfalls and how to avoid them.
Fleming, Padhraig S; Lynch, Christopher D; Pandis, Nikolaos
2014-08-01
Clinical trials are used to appraise the effectiveness of clinical interventions throughout medicine and dentistry. Randomized controlled trials (RCTs) are established as the optimal primary design and are published with increasing frequency within the biomedical sciences, including dentistry. This review outlines common pitfalls associated with the conduct of randomized controlled trials in dentistry. Common failings in RCT design leading to various types of bias, including selection, performance, detection and attrition bias, are discussed in this review. Moreover, methods of minimizing and eliminating bias are presented to ensure that maximal benefit is derived from RCTs within dentistry. Well-designed RCTs have both upstream and downstream uses, acting as a template for development and populating systematic reviews to permit more precise estimates of treatment efficacy and effectiveness. However, there is increasing awareness of waste in clinical research, whereby resource-intensive studies fail to provide a commensurate level of scientific evidence. Waste may stem either from inappropriate design or from inadequate reporting of RCTs; the importance of robust conduct of RCTs within dentistry is clear. Optimal reporting of randomized controlled trials within dentistry is necessary to ensure that trials are reliable and valid. Common shortcomings leading to important forms of bias are discussed and approaches to minimizing these issues are outlined. Copyright © 2014 Elsevier Ltd. All rights reserved.
Bacterial genome sequencing in clinical microbiology: a pathogen-oriented review.
Tagini, F; Greub, G
2017-11-01
In recent years, whole-genome sequencing (WGS) has been perceived as a technology with the potential to revolutionise clinical microbiology. Herein, we reviewed the literature on the use of WGS for the most commonly encountered pathogens in clinical microbiology laboratories: Escherichia coli and other Enterobacteriaceae, Staphylococcus aureus and coagulase-negative staphylococci, streptococci and enterococci, mycobacteria and Chlamydia trachomatis. For each pathogen group, we focused on five different aspects: the genome characteristics, the most common genomic approaches and the clinical uses of WGS for (i) typing and outbreak analysis, (ii) virulence investigation and (iii) in silico antimicrobial susceptibility testing. Of all the clinical usages, the most frequent and straightforward usage was to type bacteria and to trace outbreaks back. A next step toward standardisation was made thanks to the development of several new genome-wide multi-locus sequence typing systems based on WGS data. Although virulence characterisation could help in various particular clinical settings, it was done mainly to describe outbreak strains. An increasing number of studies compared genotypic to phenotypic antibiotic susceptibility testing, with mostly promising results. However, routine implementation will preferentially be done in the workflow of particular pathogens, such as mycobacteria, rather than as a broadly applicable generic tool. Overall, concrete uses of WGS in routine clinical microbiology or infection control laboratories were done, but the next big challenges will be the standardisation and validation of the procedures and bioinformatics pipelines in order to reach clinical standards.
USDA-ARS's Scientific Manuscript database
Information to support application of hydrologic and water quality (H/WQ) models abounds, yet modelers commonly use arbitrary, ad hoc methods to conduct, document, and report model calibration, validation, and evaluation. Consistent methods are needed to improve model calibration, validation, and e...
Use of vitamin D supplements during infancy in an international feeding trial.
Lehtonen, Eveliina; Ormisson, Anne; Nucci, Anita; Cuthbertson, David; Sorkio, Susa; Hyytinen, Mila; Alahuhta, Kirsi; Berseth, Carol; Salonen, Marja; Taback, Shayne; Franciscus, Margaret; González-Frutos, Teba; Korhonen, Tuuli E; Lawson, Margaret L; Becker, Dorothy J; Krischer, Jeffrey P; Knip, Mikael; Virtanen, Suvi M
2014-04-01
To examine the use of vitamin D supplements during infancy among the participants in an international infant feeding trial. Longitudinal study. Information about vitamin D supplementation was collected through a validated FFQ at the age of 2 weeks and monthly between the ages of 1 month and 6 months. Infants (n 2159) with a biological family member affected by type 1 diabetes and with increased human leucocyte antigen-conferred susceptibility to type 1 diabetes from twelve European countries, the USA, Canada and Australia. Daily use of vitamin D supplements was common during the first 6 months of life in Northern and Central Europe (>80% of the infants), with somewhat lower rates observed in Southern Europe (>60%). In Canada, vitamin D supplementation was more common among exclusively breast-fed than other infants (e.g., 71% v. 44% at 6 months of age). Less than 2% of infants in the USA and Australia received any vitamin D supplementation. Higher gestational age, older maternal age and longer maternal education were associated study-wide with greater use of vitamin D supplements. Most of the infants received vitamin D supplements during the first 6 months of life in the European countries, whereas in Canada only half and in the USA and Australia very few were given supplementation.
Research misconduct and data fraud in clinical trials: prevalence and causal factors.
George, Stephen L
2016-02-01
The disclosure of cases of research misconduct in clinical trials, conventionally defined as fabrication, falsification or plagiarism, has been a disturbingly common phenomenon in recent years. Such cases can potentially harm patients enrolled on the trials in question or patients treated based on the results of those trials and can seriously undermine the scientific and public trust in the validity of clinical trial results. Here, I review what is known about the prevalence of research misconduct in general and the contributing or causal factors leading to the misconduct. The evidence on prevalence is unreliable and fraught with definitional problems and with study design issues. Nevertheless, the evidence taken as a whole seems to suggest that cases of the most serious types of misconduct, fabrication and falsification (i.e., data fraud), are relatively rare but that other types of questionable research practices are quite common. There have been many individual, institutional and scientific factors proposed for misconduct but, as is the case with estimates of prevalence, reliable empirical evidence on the strength and relative importance of these factors is lacking. However, it seems clear that the view of misconduct as being simply the result of aberrant or self-delusional personalities likely underestimates the effect of other important factors and inhibits the development of effective prevention strategies.
Bawadi, Hiba A.; Al-Shwaiyat, Naseem M.; Tayyem, Reema F.; Mekary, Rania; Tuuri, Georgianna
2011-01-01
Aim: This study was conducted to develop a meal-planning exchange list for Middle Eastern foods commonly included in the Jordanian cuisine. Forty types of appetizers and another 40 types of desserts were selected, with five different recipes for each item. Recipes were collected from different housewives and Arabic cookbooks. Ingredients' weight and dish net weight were recorded based on an average recipe, and dishes were prepared accordingly. Dishes were proximately analyzed following the AOAC procedures. Proximate analysis was compared to the WHO food composition tables (FCT) for use in the Middle East, and with food analysis software (ESHA). Results: Significant correlations (P < 0.001) were found between macronutrient content obtained from proximate analysis and that obtained from ESHA. The correlation coefficients (r) were 0.92 for carbohydrate, 0.86 for protein, and 0.86 for fat. Strong correlations were also detected between proximate analysis and FCT for carbohydrate (r=0.91, P<0.001) and protein (r=0.81, P<0.001) contents. The correlation for fat was weaker, though still significant (r=0.62, P<0.001). Conclusion: A valid exchange system for traditional desserts and appetizers is now available and ready to be used by dietitians and health care providers in Jordan and the Arab world. PMID:21841913
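The validation statistic above is the Pearson correlation between laboratory proximate analysis and database or software estimates. A minimal sketch of that comparison, with invented gram values (not the study's data):

```python
# Sketch: Pearson correlation between macronutrient values from laboratory
# proximate analysis and from a food-composition estimate, the kind of
# comparison used to validate an exchange list. Values below are invented.

import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between paired lists x, y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Carbohydrate per serving (g): lab analysis vs. database estimate
lab      = [24.1, 30.5, 18.2, 41.0, 27.3, 35.8]
database = [22.9, 31.8, 19.5, 39.2, 28.0, 37.1]
print(f"r = {pearson_r(lab, database):.3f}")
```

Values of r near 0.9, as reported for carbohydrate and protein, indicate the two methods rank and scale the dishes very similarly; the weaker r for fat suggests more scatter for that nutrient.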
Lund, Travis J; Pilarz, Matthew; Velasco, Jonathan B; Chakraverty, Devasmita; Rosploch, Kaitlyn; Undersander, Molly; Stains, Marilyne
2015-01-01
Researchers, university administrators, and faculty members are increasingly interested in measuring and describing instructional practices provided in science, technology, engineering, and mathematics (STEM) courses at the college level. Specifically, there is keen interest in comparing instructional practices between courses, monitoring changes over time, and mapping observed practices to research-based teaching. While increasingly common observation protocols (Reformed Teaching Observation Protocol [RTOP] and Classroom Observation Protocol in Undergraduate STEM [COPUS]) at the postsecondary level help achieve some of these goals, they also suffer from weaknesses that limit their applicability. In this study, we leverage the strengths of these protocols to provide an easy method that enables the reliable and valid characterization of instructional practices. This method was developed empirically via a cluster analysis using observations of 269 individual class periods, corresponding to 73 different faculty members, 28 different research-intensive institutions, and various STEM disciplines. Ten clusters, called COPUS profiles, emerged from this analysis; they represent the most common types of instructional practices enacted in the classrooms observed for this study. RTOP scores were used to validate the alignment of the 10 COPUS profiles with reformed teaching. Herein, we present a detailed description of the cluster analysis method, the COPUS profiles, and the distribution of the COPUS profiles across various STEM courses at research-intensive universities. © 2015 T. J. Lund et al. CBE—Life Sciences Education © 2015 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Western Blotting of the Endocannabinoid System.
Wager-Miller, Jim; Mackie, Ken
2016-01-01
Measuring expression levels of G protein-coupled receptors (GPCRs) is an important step for understanding the distribution, function, and regulation of these receptors. A common approach for detecting proteins from complex biological systems is Western blotting. In this chapter, we describe a general approach to Western blotting protein components of the endocannabinoid system using sodium dodecyl sulfate-polyacrylamide gel electrophoresis and nitrocellulose membranes, with a focus on detecting type 1 cannabinoid (CB1) receptors. When this technique is carefully used, specifically with validation of the primary antibodies, it can provide quantitative information on protein expression levels. Additional information can also be inferred from Western blotting such as potential posttranslational modifications that can be further evaluated by specific analytical techniques.
Particle Substructure. A Common Theme of Discovery in this Century
DOE R&D Accomplishments Database
Panofsky, W. K. H.
1984-02-01
Some examples of modern developments in particle physics are given which demonstrate that the fundamental rules of quantum mechanics, applied to all forces in nature as they became understood, have retained their validity. The well-established laws of electricity and magnetism, reformulated in terms of quantum mechanics, have exhibited a truly remarkable numerical agreement between theory and experiment over an enormous range of observation. As experimental techniques have grown from the top of a laboratory bench to the large accelerators of today, the basic components of experimentation have changed vastly in scale but only little in basic function. More important, the motivation of those engaged in this type of experimentation has hardly changed at all.
Electronic Publishing or Electronic Information Handling?
NASA Astrophysics Data System (ADS)
Heck, A.
The current dramatic evolution in information technology is bringing major modifications to the way scientists communicate. The concept of 'electronic publishing' is too restrictive and has often different, sometimes conflicting, interpretations. It is thus giving way to the broader notion of 'electronic information handling', encompassing the diverse types of information, the different media, as well as the various communication methodologies and technologies. New problems and challenges also result from this new information culture, especially on legal, ethical, and educational grounds. The procedures for validating 'published material' and for evaluating scientific activities will have to be adjusted too. 'Fluid' information is becoming a common concept. Electronic publishing cannot be conceived without links to knowledge bases, nor without intelligent information retrieval tools.
29 CFR 1607.5 - General standards for validity studies.
Code of Federal Regulations, 2010 CFR
2010-07-01
A. Acceptable types of validity studies. For the purposes of satisfying these guidelines, users may rely upon criterion-related validity studies, content validity studies or construct validity…
Conwell, Darwin L; Banks, Peter A; Sandhu, Bimaljit S; Sherman, Stuart; Al-Kaade, Samer; Gardner, Timothy B; Anderson, Michelle A; Wilcox, C Mel; Lewis, Michele D; Muniraj, Thiruvengadam; Forsmark, Christopher E; Cote, Gregory A; Guda, Nalini M; Tian, Ye; Romagnuolo, Joseph; Wisniewski, Stephen R; Brand, Randall; Gelrud, Andres; Slivka, Adam; Whitcomb, David C; Yadav, Dhiraj
2017-08-01
Our aim was to validate recent epidemiologic trends and describe the distribution of TIGAR-O risk factors in chronic pancreatitis (CP) patients. The NAPS-2 Continuation and Validation (NAPS2-CV) study prospectively enrolled 521 CP patients from 13 US centers from 2008 to 2012. CP was defined by definitive changes in imaging, endoscopy, or histology. Data were analyzed after stratification by demographic factors, physician-defined etiology, participating center, and TIGAR-O risk factors. Demographics and physician-defined etiology in the NAPS2-CV study were similar to the original NAPS2 study. Mean age was 53 years (IQR 43, 62) with 55% males and 87% white. Overall, alcohol was the single most common etiology (46%) followed by idiopathic etiology (24%). Alcohol etiology was significantly more common in males, middle-aged (35-65 years), and non-whites. Females and elderly (≥65 years) were more likely to have idiopathic etiology, while younger patients (<35 years) to have genetic etiology. Variability in etiology was noted by participating centers (e.g., alcohol etiology ranged from 27 to 67% among centers enrolling ≥25 patients). Smoking was the most commonly identified (59%) risk factor followed by alcohol (53%), idiopathic (30%), obstructive (19%), and hyperlipidemia (13%). The presence of multiple TIGAR-O risk factors was common, with 1, 2, ≥3 risk factors observed in 27.6, 47.6, and 23.6% of the cohort, respectively. Our data validate the current epidemiologic trends in CP. Alcohol remains the most common physician-defined etiology, while smoking was the most commonly identified TIGAR-O risk factor. Identification of multiple risk factors suggests CP to be a complex disease.
ERIC Educational Resources Information Center
Wiliam, Dylan
2010-01-01
The idea that validity should be considered a property of inferences, rather than of assessments, has developed slowly over the past century. In early writings about the validity of educational assessments, validity was defined as a property of an assessment. The most common definition was that an assessment was valid to the extent that it…
ERIC Educational Resources Information Center
Plotnikoff, Ronald C.; Lippke, Sonia; Reinbold-Matthews, Melissa; Courneya, Kerry S.; Karunamuni, Nandini; Sigal, Ronald J.; Birkett, Nicholas
2007-01-01
This study was designed to test the validity of a transtheoretical model's physical activity (PA) stage measure with intention and different intensities of behavior in a large population-based sample of adults living with diabetes (Type 1 diabetes, n = 697; Type 2 diabetes, n = 1,614) and examine different age groups. The overall…
Effective Temperatures for Young Stars in Binaries
NASA Astrophysics Data System (ADS)
Muzzio, Ryan; Avilez, Ian; Prato, Lisa A.; Biddle, Lauren I.; Allen, Thomas; Wright-Garba, Nuria Meilani Laure; Wittal, Matthew
2017-01-01
We have observed about 100 multi-star systems, within the star forming regions Taurus and Ophiuchus, to investigate the individual stellar and circumstellar properties of both components in young T Tauri binaries. Near-infrared spectra were collected using the Keck II telescope’s NIRSPEC spectrograph and imaging data were taken with Keck II’s NIRC2 camera, both behind adaptive optics. Some properties are straightforward to measure; however, determining effective temperature is challenging as the standard method of estimating spectral type and relating spectral type to effective temperature can be subjective and unreliable. We explicitly looked for a relationship between effective temperatures empirically determined in Mann et al. (2015) and equivalent width ratios of H-band Fe and OH lines for main sequence spectral type templates common to both our infrared observations and to the sample of Mann et al. We find a fit for a wide range of temperatures and are currently testing the validity of using this method as a way to determine effective temperature robustly. Support for this research was provided by an REU supplement to NSF award AST-1313399.
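The calibration described above amounts to fitting effective temperature against an equivalent-width line ratio for templates with known temperatures, then inverting the relation for a target. A minimal least-squares sketch under that assumption; the ratios and temperatures below are invented, not from Mann et al. (2015) or these observations:

```python
# Sketch: calibrate Teff against an H-band EW(Fe)/EW(OH) ratio using
# spectral-type templates with empirically known temperatures, then apply
# the relation to a target star. All numbers are invented for illustration.

def linear_fit(x, y):
    """Ordinary least squares: returns (slope, intercept) of y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Template stars: equivalent-width ratio vs. known Teff (K)
ratio = [0.6, 0.9, 1.3, 1.8, 2.4]
teff  = [3100.0, 3300.0, 3600.0, 3900.0, 4300.0]

a, b = linear_fit(ratio, teff)

target_ratio = 1.5               # measured for the young binary component
print(f"estimated Teff = {a * target_ratio + b:.0f} K")
```

The appeal of such a line-ratio calibration is that it sidesteps the subjective spectral-type step the abstract mentions: the temperature comes directly from measured equivalent widths.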
Wu, Miao; Yan, Chuanbo; Liu, Huiqiang; Liu, Qian
2018-06-29
Ovarian cancer is one of the most common gynecologic malignancies. Accurate classification of ovarian cancer types (serous carcinoma, mucinous carcinoma, endometrioid carcinoma, clear cell carcinoma) is an essential part of the differential diagnosis. Computer-aided diagnosis (CADx) can provide useful advice to help pathologists reach the correct diagnosis. In our study, we employed a deep convolutional neural network (DCNN) based on AlexNet to automatically classify the different types of ovarian cancer from cytological images. The DCNN consists of five convolutional layers, three max pooling layers, and two fully connected layers. We then trained the model on two groups of input data separately: one was the original image data and the other was augmented image data produced by image enhancement and image rotation. The testing results, obtained by 10-fold cross-validation, show that the accuracy of the classification models improved from 72.76 to 78.20% when augmented images were used as training data. The developed scheme is useful for classifying ovarian cancers from cytological images. © 2018 The Author(s).
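The accuracy figures above come from 10-fold cross-validation: the data are split into ten folds, each fold serves once as the test set while the rest train the model, and the ten accuracies are averaged. A minimal sketch of that scheme with a trivial threshold classifier on synthetic one-dimensional features, standing in for the actual DCNN:

```python
# Sketch: the 10-fold cross-validation scheme used to estimate classification
# accuracy, shown with a trivial threshold classifier on synthetic 1-D data
# instead of the DCNN and cytological images described above.

import random

random.seed(0)
# Synthetic data: class-1 samples tend to have larger feature values.
data = [(random.gauss(0.0, 1.0), 0) for _ in range(100)] + \
       [(random.gauss(2.0, 1.0), 1) for _ in range(100)]
random.shuffle(data)

def train(samples):
    """'Train' a threshold classifier: midpoint of the two class means."""
    c0 = [x for x, y in samples if y == 0]
    c1 = [x for x, y in samples if y == 1]
    return (sum(c0) / len(c0) + sum(c1) / len(c1)) / 2

def k_fold_accuracy(samples, k=10):
    fold = len(samples) // k
    accs = []
    for i in range(k):
        held_out = samples[i * fold:(i + 1) * fold]   # test fold i
        training = samples[:i * fold] + samples[(i + 1) * fold:]
        thr = train(training)
        correct = sum((x > thr) == bool(y) for x, y in held_out)
        accs.append(correct / len(held_out))
    return sum(accs) / k                               # mean over the k folds

print(f"10-fold CV accuracy: {k_fold_accuracy(data):.3f}")
```

Because every sample is tested exactly once on a model that never saw it, the averaged accuracy is a less optimistic estimate than training-set accuracy.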
Gimeno, Pascal; Maggio, Annie-Françoise; Bousquet, Claudine; Quoirez, Audrey; Civade, Corinne; Bonnet, Pierre-Antoine
2012-08-31
Esters of phthalic acid, more commonly named phthalates, may be present in cosmetic products as ingredients or contaminants. Their presence as contaminants can be due to the manufacturing process, to the raw materials used or to the migration of phthalates from packaging when plastic (polyvinyl chloride, PVC) is used. Eight phthalates (DBP, DEHP, BBP, DMEP, DnPP, DiPP, DPP, and DiBP), classified H360 or H361, are forbidden in cosmetics according to the European regulation on cosmetics 1223/2009. A GC/MS method was developed for the assay of 12 phthalates in cosmetics, including the 8 regulated phthalates. Analyses are carried out on a GC/MS system in electron impact ionization mode (EI). The separation of phthalates is obtained on a cross-linked 5%-phenyl/95%-dimethylpolysiloxane capillary column, 30 m × 0.25 mm (i.d.) × 0.25 μm film thickness, using a temperature gradient. Phthalate quantification is performed by external calibration using an internal standard. Validation elements obtained on standard solutions highlight a satisfactory system conformity (resolution > 1.5), a common quantification limit of 0.25 ng injected, an acceptable linearity between 0.5 μg mL⁻¹ and 5.0 μg mL⁻¹, as well as a precision and an accuracy in agreement with in-house specifications. Cosmetic samples ready for analytical injection are analyzed after a dilution in ethanol, whereas more complex cosmetic matrices, like milks and creams, are assayed after a liquid/liquid extraction using tert-butyl methyl ether (TBME). Depending on the type of cosmetics analyzed, the common limits of quantification for the 12 phthalates were set at 0.5 or 2.5 μg g⁻¹. All samples were assayed using the analytical approach described in the ISO 12787 international standard "Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques". This analytical protocol is particularly adapted when it is not possible to make reconstituted sample matrices. Copyright © 2012 Elsevier B.V. All rights reserved.
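The quantification scheme the abstract describes (external calibration with an internal standard) can be sketched as follows. This is an illustrative outline only: the helper names, peak-area ratios, and standard concentrations are made up for the example, not taken from the validated method.

```python
# Hedged sketch of quantification by external calibration with an
# internal standard. All numbers are illustrative toy data.

def response_ratio(analyte_area, istd_area):
    """Analyte/internal-standard peak-area ratio."""
    return analyte_area / istd_area

def fit_calibration(concs, ratios):
    """Least-squares slope/intercept of response ratio vs. concentration."""
    n = len(concs)
    mx = sum(concs) / n
    my = sum(ratios) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(concs, ratios))
    sxx = sum((x - mx) ** 2 for x in concs)
    slope = sxy / sxx
    return slope, my - slope * mx

def quantify(ratio, slope, intercept, dilution=1.0):
    """Back-calculate a sample concentration, correcting for dilution."""
    return (ratio - intercept) / slope * dilution

# Calibration standards spanning the validated 0.5-5.0 μg/mL range
concs = [0.5, 1.0, 2.5, 5.0]
ratios = [0.10, 0.20, 0.50, 1.00]   # perfectly linear toy responses
slope, intercept = fit_calibration(concs, ratios)
print(round(quantify(0.30, slope, intercept), 2))  # → 1.5
```

A real method would also carry the dilution or extraction factor through to μg g⁻¹ of product; here the dilution is left at 1 for simplicity.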
LRSSLMDA: Laplacian Regularized Sparse Subspace Learning for MiRNA-Disease Association prediction
Huang, Li
2017-01-01
Predicting novel microRNA (miRNA)-disease associations is clinically significant due to miRNAs' potential roles as diagnostic biomarkers and therapeutic targets for various human diseases. Previous studies have demonstrated the viability of utilizing different types of biological data to computationally infer new disease-related miRNAs. Yet researchers face the challenge of how to effectively integrate diverse datasets and make reliable predictions. In this study, we presented a computational model named Laplacian Regularized Sparse Subspace Learning for MiRNA-Disease Association prediction (LRSSLMDA), which projected miRNAs'/diseases' statistical feature profile and graph theoretical feature profile to a common subspace. It used Laplacian regularization to preserve the local structures of the training data and an L1-norm constraint to select important miRNA/disease features for prediction. The strength of dimensionality reduction enabled the model to be easily extended to much higher dimensional datasets than those exploited in this study. Experimental results showed that LRSSLMDA outperformed ten previous models: the AUC of 0.9178 in global leave-one-out cross validation (LOOCV) and the AUC of 0.8418 in local LOOCV indicated the model's superior prediction accuracy, and the average AUC of 0.9181±0.0004 in 5-fold cross validation justified its accuracy and stability. In addition, three types of case studies further demonstrated its predictive power. Potential miRNAs related to Colon Neoplasms, Lymphoma, Kidney Neoplasms, Esophageal Neoplasms and Breast Neoplasms were predicted by LRSSLMDA; respectively, 98%, 88%, 96%, 98% and 98% of the top 50 predictions were validated by experimental evidence. Therefore, we conclude that LRSSLMDA would be a valuable computational tool for miRNA-disease association prediction. PMID:29253885
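The global leave-one-out cross-validation (LOOCV) scheme used to evaluate LRSSLMDA can be illustrated with a minimal sketch. The `nn_score` scorer and the toy data below are placeholders standing in for the actual subspace-learning model, which is not reproduced here.

```python
# Hedged sketch: LOOCV evaluation with AUC. The 1-nearest-positive
# scorer is a toy stand-in for the real prediction model.

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formula."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def loocv_scores(X, y, score_fn):
    """Hold out each sample in turn; score it against the remaining data."""
    out = []
    for i in range(len(X)):
        train = [(x, t) for j, (x, t) in enumerate(zip(X, y)) if j != i]
        out.append(score_fn(X[i], train))
    return out

def nn_score(x, train):
    """Toy scorer: similarity to the nearest known positive example."""
    pos = [xi for xi, t in train if t == 1]
    return max(-abs(x - xi) for xi in pos)

X = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]   # toy 1-D "feature profiles"
y = [0, 0, 0, 1, 1, 1]                 # known association labels
scores = loocv_scores(X, y, nn_score)
print(auc(y, scores))                   # → 1.0 on this separable toy set
```

In the paper's "global" LOOCV every unknown pair is ranked against the held-out known association; this sketch keeps only the hold-out/score/AUC skeleton of that procedure.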
Henderson, Valerie C; Kimmelman, Jonathan; Fergusson, Dean; Grimshaw, Jeremy M; Hackam, Dan G
2013-01-01
The vast majority of medical interventions introduced into clinical development prove unsafe or ineffective. One prominent explanation for the dismal success rate is flawed preclinical research. We conducted a systematic review of preclinical research guidelines and organized recommendations according to the type of validity threat (internal, construct, or external) or programmatic research activity they primarily address. We searched MEDLINE, Google Scholar, Google, and the EQUATOR Network website for all preclinical guideline documents published up to April 9, 2013 that addressed the design and conduct of in vivo animal experiments aimed at supporting clinical translation. To be eligible, documents had to provide guidance on the design or execution of preclinical animal experiments and represent the aggregated consensus of four or more investigators. Data from included guidelines were independently extracted by two individuals for discrete recommendations on the design and implementation of preclinical efficacy studies. These recommendations were then organized according to the type of validity threat they addressed. A total of 2,029 citations were identified through our search strategy. From these, we identified 26 guidelines that met our eligibility criteria, most of which were directed at neurological or cerebrovascular drug development. Together, these guidelines offered 55 different recommendations. Some of the most common recommendations included performance of a power calculation to determine sample size, randomized treatment allocation, and characterization of disease phenotype in the animal model prior to experimentation. By identifying the most recurrent recommendations among preclinical guidelines, we provide a starting point for developing preclinical guidelines in other disease domains. We also provide a basis for the study and evaluation of preclinical research practice. Please see later in the article for the Editors' Summary.
Performance of a cognitive load inventory during simulated handoffs: Evidence for validity.
Young, John Q; Boscardin, Christy K; van Dijk, Savannah M; Abdullah, Ruqayyah; Irby, David M; Sewell, Justin L; Ten Cate, Olle; O'Sullivan, Patricia S
2016-01-01
Advancing patient safety during handoffs remains a public health priority. The application of cognitive load theory offers promise, but is currently limited by the inability to measure cognitive load types. To develop and collect validity evidence for a revised self-report inventory that measures cognitive load types during a handoff. Based on prior published work, input from experts in cognitive load theory and handoffs, and a think-aloud exercise with residents, a revised Cognitive Load Inventory for Handoffs was developed. The Cognitive Load Inventory for Handoffs has items for intrinsic, extraneous, and germane load. Second- and sixth-year students recruited from a Dutch medical school participated in four simulated handoffs (two simple and two complex cases). At the end of each handoff, study participants completed the Cognitive Load Inventory for Handoffs, Paas' Cognitive Load Scale, and one global rating item each for intrinsic load, extraneous load, and germane load. Factor and correlational analyses were performed to collect evidence for validity. Confirmatory factor analysis yielded a single factor that combined intrinsic and germane loads. The extraneous load items performed poorly and were removed from the model. The score from the combined intrinsic and germane load items associated, as predicted by cognitive load theory, with a commonly used measure of overall cognitive load (Pearson's r = 0.83, p < 0.001), case complexity (beta = 0.74, p < 0.001), level of experience (beta = -0.96, p < 0.001), and handoff accuracy (r = -0.34, p < 0.001). These results offer encouragement that intrinsic load during handoffs may be measured via a self-report measure. Additional work is required to develop an adequate measure of extraneous load.
NASA Astrophysics Data System (ADS)
Asal Kzar, Ahmed; Mat Jafri, M. Z.; Hwee San, Lim; Al-Zuky, Ali A.; Mutter, Kussay N.; Hassan Al-Saleh, Anwar
2016-06-01
There are many techniques for addressing the water quality problem, but remote sensing techniques have proven successful, especially when artificial neural networks are used as mathematical models with these techniques. The Hopfield neural network is a common type of artificial neural network that is fast, simple, and efficient, but it runs into difficulty when dealing with images that have more than two colours, such as remote sensing images. This work attempts to solve this problem by modifying the network so that it can handle colour remote sensing images for water quality mapping. A Feed-forward Hopfield Neural Network Algorithm (FHNNA) was modified and used with a satellite colour image from the Thailand Earth Observation System (THEOS) for TSS mapping in the Penang Strait, Malaysia, through the classification of TSS concentrations. The new algorithm is based essentially on three modifications: using the HNN as a feed-forward network, considering the weights of bitplanes, and a non-self architecture (zero diagonal of the weight matrix); in addition, it depends on validation data. The resulting map was colour-coded for visual interpretation. The efficiency of the new algorithm was established by the high correlation coefficient (R = 0.979) and the low root mean square error (RMSE = 4.301) between the two groups into which the validation data were divided: one used by the algorithm and the other used for validating the results. The comparison was with the minimum distance classifier. Therefore, TSS mapping of polluted water in the Penang Strait, Malaysia, can be performed using FHNNA with a remote sensing technique (THEOS). This is a new and useful application of the HNN, providing a new model for water quality mapping with remote sensing techniques, an important environmental problem.
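The "non-self architecture" (zero-diagonal weight matrix) mentioned among the modifications can be illustrated with a generic Hopfield sketch. This is standard Hebbian Hopfield recall, not the authors' FHNNA; the stored pattern, asynchronous update rule, and pattern size are illustrative assumptions.

```python
# Hedged sketch: Hopfield-style associative recall with the diagonal of
# the weight matrix forced to zero ("non-self architecture").

def hebbian_weights(patterns):
    """W[i][j] = sum over patterns of p[i]*p[j], with a zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, sweeps=5):
    """Asynchronous (sequential) sign updates over a fixed sweep budget."""
    state = list(state)
    n = len(state)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(W[i][j] * state[j] for j in range(n))
            state[i] = 1 if h >= 0 else -1
    return state

stored = [1, -1, 1, -1]                 # one bipolar toy pattern
W = hebbian_weights([stored])
noisy = [1, -1, 1, 1]                   # stored pattern, last bit flipped
print(recall(W, noisy))                 # → [1, -1, 1, -1]
```

With a single stored pattern the corrupted input settles back onto the stored attractor; the zero diagonal prevents each unit from simply reinforcing its own (possibly corrupted) state.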
ERIC Educational Resources Information Center
Carpentier, Normand
2007-01-01
This article offers reflection on the validity of relational data such as used in social network analysis. Ongoing research on the transformation of the support network of caregivers of persons with an Alzheimer-type disease provides the data to fuel the debate on the validity of participant report. More specifically, we sought to understand the…
Mink, Richard B; Schwartz, Alan; Herman, Bruce E; Turner, David A; Curran, Megan L; Myers, Angela; Hsu, Deborah C; Kesselheim, Jennifer C; Carraccio, Carol L
2018-02-01
Entrustable professional activities (EPAs) represent the routine and essential activities that physicians perform in practice. Although some level-of-supervision scales have been proposed, they have not been validated. In this study, the investigators created level-of-supervision scales for EPAs common to the pediatric subspecialties and then examined their validity in a study conducted by the Subspecialty Pediatrics Investigator Network (SPIN). SPIN Steering Committee members used a modified Delphi process to develop unique scales for six of the seven common EPAs. The investigators sought validity evidence in a multisubspecialty study in which pediatric fellowship program directors and Clinical Competency Committees used the scales to evaluate fellows in fall 2014 and spring 2015. Separate scales for the six EPAs, each with five levels of progressive entrustment, were created. In both fall and spring, more than 300 fellows in each year of training from over 200 programs were assessed. In both periods and for each EPA, there was a progressive increase in entrustment levels, with second-year fellows rated higher than first-year fellows (P < .001) and third-year fellows rated higher than second-year fellows (P < .001). For each EPA, spring ratings were higher (P < .001) than those in the fall. Interrater reliability was high (Janson and Olsson's iota = 0.73). The supervision scales developed for these six common pediatric subspecialty EPAs demonstrated strong validity evidence for use in EPA-based assessment of pediatric fellows. They may also inform the development of scales in other specialties.
Factorial validity of the Childhood Trauma Questionnaire in Italian psychiatric patients.
Innamorati, Marco; Erbuto, Denise; Venturini, Paola; Fagioli, Francesca; Ricci, Federica; Lester, David; Amore, Mario; Girardi, Paolo; Pompili, Maurizio
2016-11-30
Early adverse experiences are associated with neurobiological changes, and these may underlie the increased risk of psychopathology. The Childhood Trauma Questionnaire (CTQ-SF) is the most commonly used instrument for assessing childhood maltreatment. Thus, the aim of our study was to investigate the factorial validity of an Italian version of the CTQ-SF in a sample of psychiatric inpatients by means of confirmatory and exploratory factor analyses. The sample was composed of 471 psychiatric in-patients and out-patients (206 males and 265 females) aged 16-80 years (mean age = 34.4 years [SD = 16.3]) consecutively admitted to two psychiatric departments. All patients were administered the Italian version of the CTQ-SF. We tested five different factor models, none of which showed good fit, while the exploratory factor analysis supported the adequacy of a solution with three factors (Emotional Neglect/Abuse, Sexual Abuse, Physical Neglect/Abuse). The three factors had satisfactory internal consistency (ordinal Cronbach alphas > 0.90). Our study supports results from previous research indicating the lack of structural invariance of the CTQ-SF in cross-cultural adaptations of the test, and the fact that, when measuring different types of childhood maltreatment, the distinction between abuse and neglect may not be valid. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
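Internal consistency statistics like the alphas reported above can be sketched with the classical Cronbach's alpha formula. Note this is a simplified stand-in: the study reports ordinal alphas, which are computed from a polychoric correlation matrix rather than raw covariances, and the item scores below are invented.

```python
# Hedged sketch: classical (covariance-based) Cronbach's alpha.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))

def cronbach_alpha(items):
    """items: list of per-item score lists, one score per respondent."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var / var(totals))

# Toy data: three items that mostly track each other -> high alpha
items = [
    [1, 2, 3, 4, 5],
    [2, 2, 3, 5, 5],
    [1, 3, 3, 4, 4],
]
print(round(cronbach_alpha(items), 3))  # → 0.954
```

When items covary strongly the total-score variance dwarfs the sum of item variances, pushing alpha toward 1, which is the pattern behind the "> 0.90" values the abstract reports.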
Pettengill, James; Strain, Errol; Allard, Marc W.; Ahmed, Rafiq; Zhao, Shaohua; Brown, Eric W.
2014-01-01
Phage typing has been used for the epidemiological surveillance of Salmonella enterica serovar Enteritidis for over 2 decades. However, knowledge of the genetic and evolutionary relationships between phage types is very limited, making differences difficult to interpret. Here, single nucleotide polymorphisms (SNPs) identified from whole-genome comparisons were used to determine the relationships between some S. Enteritidis phage types (PTs) commonly associated with food-borne outbreaks in the United States. Emphasis was placed on the predominant phage types PT8, PT13a, and PT13 in North America. With >89,400 bp surveyed across 98 S. Enteritidis isolates representing 14 distinct phage types, 55 informative SNPs were discovered within 23 chromosomally anchored loci. To maximize the discriminatory and evolutionary partitioning of these highly homogeneous strains, sequences comprising informative SNPs were concatenated into a single combined data matrix and subjected to phylogenetic analysis. The resultant phylogeny allocated most S. Enteritidis isolates into two distinct clades (clades I and II) and four subclades. Synapomorphic (shared and derived) sets of SNPs capable of distinguishing individual clades/subclades were identified. However, individual phage types appeared to be evolutionarily disjunct when mapped to this phylogeny, suggesting that phage typing may not be valid for making phylogenetic inferences. Furthermore, the set of SNPs identified here represents useful genetic markers for strain differentiation of more clonal S. Enteritidis strains and provides core genotypic markers for future development of a SNP typing scheme with S. Enteritidis. PMID:24574287
Validity threats: overcoming interference with proposed interpretations of assessment data.
Downing, Steven M; Haladyna, Thomas M
2004-03-01
Factors that interfere with the ability to interpret assessment scores or ratings in the proposed manner threaten validity. To be interpreted in a meaningful manner, all assessments in medical education require sound, scientific evidence of validity. The purpose of this essay is to discuss 2 major threats to validity: construct under-representation (CU) and construct-irrelevant variance (CIV). Examples of each type of threat for written, performance and clinical performance examinations are provided. The CU threat to validity refers to undersampling the content domain. Using too few items, cases or clinical performance observations to adequately generalise to the domain represents CU. Variables that systematically (rather than randomly) interfere with the ability to meaningfully interpret scores or ratings represent CIV. Issues such as flawed test items written at inappropriate reading levels or statistically biased questions represent CIV in written tests. For performance examinations, such as standardised patient examinations, flawed cases or cases that are too difficult for student ability contribute CIV to the assessment. For clinical performance data, systematic rater error, such as halo or central tendency error, represents CIV. The term face validity is rejected as representative of any type of legitimate validity evidence, although the fact that the appearance of the assessment may be an important characteristic other than validity is acknowledged. There are multiple threats to validity in all types of assessment in medical education. Methods to eliminate or control validity threats are suggested.
Sikirzhytskaya, Aliaksandra; Sikirzhytski, Vitali; Lednev, Igor K
2014-01-01
Body fluids are a common and important type of forensic evidence. In particular, the identification of menstrual blood stains is often a key step during the investigation of rape cases. Here, we report on the application of near-infrared Raman microspectroscopy for differentiating menstrual blood from peripheral blood. We observed that the menstrual and peripheral blood samples have similar but distinct Raman spectra. Advanced statistical analysis of the multiple Raman spectra that were automatically (Raman mapping) acquired from the 40 dried blood stains (20 donors for each group) allowed us to build a classification model with maximum (100%) sensitivity and specificity. We also demonstrated that despite certain common constituents, menstrual blood can be readily distinguished from vaginal fluid. All of the classification models were verified using cross-validation methods. The proposed method overcomes the problems associated with currently used biochemical methods, which are destructive, time consuming and expensive. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The development and cross-validation of an MMPI typology of murderers.
Holcomb, W R; Adams, N A; Ponder, H M
1985-06-01
A sample of 80 male offenders charged with premeditated murder was divided into five personality types using MMPI scores. A hierarchical clustering procedure was used, with a subsequent internal cross-validation analysis using a second sample of 80 premeditated murderers. A discriminant analysis resulted in a 96.25% correct classification of subjects from the second sample into the five types. Clinical data from a mental status interview schedule supported the external validity of these types. There were significant differences among the five types in hallucinations, disorientation, hostility, depression, and paranoid thinking. Both similarities and differences between the present typology and prior research were discussed. Additional research questions were suggested.
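The cross-validation classification step can be illustrated with a nearest-centroid rule, a simple stand-in for the discriminant analysis actually used; the two-scale profiles, type centroids, and hit rate below are invented for the example.

```python
# Hedged sketch: assigning hold-out profiles to previously derived types
# by nearest centroid, a toy stand-in for discriminant classification.

def centroid(rows):
    """Component-wise mean of a list of equal-length profiles."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(x, centroids):
    """Index of the nearest centroid (squared Euclidean distance)."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, c)) for c in centroids]
    return dists.index(min(dists))

# Two toy "types" derived from a first sample (2 scales per profile)
type_a = [[70, 55], [72, 53], [68, 57]]
type_b = [[50, 75], [48, 77], [52, 73]]
cents = [centroid(type_a), centroid(type_b)]

# Cross-validation on a second sample with known type membership
second = [([69, 54], 0), ([51, 76], 1), ([71, 56], 0), ([49, 74], 1)]
hits = sum(classify(x, cents) == t for x, t in second)
print(hits / len(second))   # → 1.0 (all second-sample profiles recovered)
```

The published 96.25% figure is the analogous hit rate computed over 80 second-sample profiles and five types rather than this two-type toy.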
Fang, Chao-Hua; Chang, Chia-Ming; Lai, Yu-Shu; Chen, Wen-Chuan; Song, Da-Yong; McClean, Colin J; Kao, Hao-Yuan; Qu, Tie-Bing; Cheng, Cheng-Kung
2015-11-01
Excellent clinical and kinematical performance is commonly reported after medial pivot knee arthroplasty. However, there is conflicting evidence as to whether the posterior cruciate ligament should be retained. This study simulated how the posterior cruciate ligament, post-cam mechanism and medial tibial insert morphology may affect postoperative kinematics. After the computational intact knee model was validated against the motion of a normal knee, four TKA models were built based on a medial pivot prosthesis: PS type, modified PS type, CR type with PCL retained and CR type with PCL sacrificed. Anteroposterior translation and axial rotation of the femoral condyles on the tibia during 0°-135° knee flexion were analyzed. There was no significant difference in kinematics between the intact knee model and reported data for a normal knee. In all TKA models, normal motion was almost fully restored, except for the CR type with PCL sacrificed. Sacrificing the PCL produced paradoxical anterior femoral translation and tibial external rotation during full flexion. Either the posterior cruciate ligament or a post-cam mechanism is necessary for medial pivot prostheses to regain normal kinematics after total knee arthroplasty. The morphology of the medial tibial insert was also shown to produce a small but noticeable effect on knee kinematics.
Performance evaluation of an agent-based occupancy simulation model
Luo, Xuan; Lam, Khee Poh; Chen, Yixing; ...
2017-01-17
Occupancy is an important factor driving building performance. Static and homogeneous occupant schedules, commonly used in building performance simulation, contribute to issues such as performance gaps between simulated and measured energy use in buildings. Stochastic occupancy models have been recently developed and applied to better represent spatial and temporal diversity of occupants in buildings. However, there is very limited evaluation of the usability and accuracy of these models. This study used measured occupancy data from a real office building to evaluate the performance of an agent-based occupancy simulation model: the Occupancy Simulator. The occupancy patterns of various occupant types were first derived from the measured occupant schedule data using statistical analysis. Then the performance of the simulation model was evaluated and verified based on (1) whether the distribution of observed occupancy behavior patterns follows the theoretical ones included in the Occupancy Simulator, and (2) whether the simulator can reproduce a variety of occupancy patterns accurately. Results demonstrated the feasibility of applying the Occupancy Simulator to simulate a range of occupancy presence and movement behaviors for regular types of occupants in office buildings, and to generate stochastic occupant schedules at the room and individual occupant levels for building performance simulation. For future work, model validation is recommended, which includes collecting and using detailed interval occupancy data of all spaces in an office building to validate the simulated occupant schedules from the Occupancy Simulator.
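A stochastic occupancy model of the kind evaluated here can be sketched as a two-state (absent/present) Markov chain. The transition probabilities, hourly time step, and single-occupant scope are illustrative assumptions, not the Occupancy Simulator's calibrated behavior models.

```python
# Hedged sketch: a two-state Markov-chain occupant schedule generator.
# Each run yields a stochastic 0/1 presence schedule; seeding makes a
# given run reproducible.

import random

def simulate_day(p_arrive, p_leave, steps=24, seed=1):
    """One occupant, hourly steps; returns a 0/1 presence schedule."""
    rng = random.Random(seed)
    state, schedule = 0, []       # start absent
    for _ in range(steps):
        if state == 0 and rng.random() < p_arrive:
            state = 1             # arrival event
        elif state == 1 and rng.random() < p_leave:
            state = 0             # departure event
        schedule.append(state)
    return schedule

day = simulate_day(p_arrive=0.3, p_leave=0.1)
print(len(day), sum(day))         # schedule length and occupied hours
```

Drawing many such schedules (different seeds) gives the kind of occupant-level stochastic schedule ensemble that building performance simulation can consume in place of a single static profile.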
Jarrett, Steven M; Glaze, Ryan M; Schurig, Ira; Arthur, Winfred
2017-08-01
The relationship between team sex composition and team performance on a complex psychomotor task was examined because these types of tasks are commonly used in the lab-based teams literature. Despite well-documented sex-based differences on complex psychomotor tasks, the preponderance of studies (mainly lab based) that use these tasks makes no mention of the sex composition of teams across or within experimental conditions. A sample of 123 four-person teams with varying team sex composition learned and performed a complex psychomotor task, Steel Beasts Pro PE. Each team completed a 5-hr protocol in which they conducted several performance missions. The results indicated significant, large mean differences such that teams with larger proportions of males had higher performance scores. These findings demonstrate the potential effect of team sex composition on the validity of studies that use complex psychomotor tasks to explore and investigate team performance-related phenomena when (a) team sex composition is not a focal variable of interest and (b) it is not accounted for or controlled. Given the prevalence of complex psychomotor action-based tasks in lab-based team studies, it is important to understand and control for the impact of team sex composition on team performance. When team sex composition is not controlled for, either methodologically or statistically, it may affect the validity of the results in team studies using these types of tasks.
IT Security Standards and Legal Metrology - Transfer and Validation
NASA Astrophysics Data System (ADS)
Thiel, F.; Hartmann, V.; Grottker, U.; Richter, D.
2014-08-01
Legal Metrology's requirements can be transferred into the IT security domain by applying a generic set of standardized rules provided by the Common Criteria (ISO/IEC 15408). We will outline the transfer and cross-validation of such an approach. One example is the integration of Legal Metrology's requirements into a recently developed Common Criteria based Protection Profile for a Smart Meter Gateway, designed under the leadership of Germany's Federal Office for Information Security. The requirements on utility meters laid down in the Measuring Instruments Directive (MID) are incorporated. A verification approach to check that Legal Metrology's requirements are met through their interpretation as Common Criteria's generic requirements is also presented.
Modern data science for analytical chemical data - A comprehensive review.
Szymańska, Ewa
2018-10-22
Efficient and reliable analysis of chemical analytical data is a great challenge due to the increase in data size, variety and velocity. New methodologies, approaches and methods are being proposed not only by the chemometrics community but also by other data science communities to extract relevant information from big datasets and deliver their value to different applications. Besides the common goal of big data analysis, different perspectives on and terms for big data are being discussed in the scientific literature and public media. The aim of this comprehensive review is to present common trends in the analysis of chemical analytical data across different data science fields, together with their data type-specific and generic challenges. Firstly, common data science terms used in different data science fields are summarized and discussed. Secondly, systematic methodologies to plan and run big data analysis projects are presented together with their steps. Moreover, different analysis aspects such as assessing data quality, selecting data pre-processing strategies, data visualization and model validation are considered in more detail. Finally, an overview of standard and new data analysis methods is provided and their suitability for big analytical chemical datasets is briefly discussed. Copyright © 2018 Elsevier B.V. All rights reserved.
Hackett, Lucien; Reed, Darren; Halaki, Mark; Ginn, Karen A
2014-04-01
No direct evidence exists to support the validity of using surface electrodes to record muscle activity from serratus anterior, an important and commonly investigated shoulder muscle. The aims of this study were to determine the validity of examining muscle activation patterns in serratus anterior using surface electromyography and to determine whether intramuscular electromyography is representative of serratus anterior muscle activity. Seven asymptomatic subjects performed dynamic and isometric shoulder flexion, extension, abduction, adduction and dynamic bench press plus tests. Surface electrodes were placed over serratus anterior and around intramuscular electrodes in serratus anterior. Load was ramped during isometric tests from 0% to 100% maximum load and dynamic tests were performed at 70% maximum load. EMG signals were normalised using five standard maximum voluntary contraction tests. Surface electrodes significantly underestimated serratus anterior muscle activity compared with the intramuscular electrodes during dynamic flexion, dynamic abduction, isometric flexion, isometric abduction and bench press plus tests. All other test conditions showed no significant differences including the flexion normalisation test where maximum activation was recorded from both electrode types. Low correlation between signals was recorded using surface and intramuscular electrodes during concentric phases of dynamic abduction and flexion. It is not valid to use surface electromyography to assess muscle activation levels in serratus anterior during isometric exercises where the electrodes are not placed at the angle of testing and dynamic exercises. Intramuscular electrodes are as representative of the serratus anterior muscle activity as surface electrodes. Copyright © 2014 Elsevier Ltd. All rights reserved.
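The normalisation step mentioned above (expressing EMG amplitude relative to maximum voluntary contraction tests) can be sketched as follows; the RMS windowing, signal values, and function names are illustrative assumptions rather than the study's processing pipeline.

```python
# Hedged sketch: amplitude normalisation of an EMG window to the best
# maximum voluntary contraction (MVC) trial, giving %MVC.

def rms(xs):
    """Root-mean-square amplitude of a signal window."""
    return (sum(x * x for x in xs) / len(xs)) ** 0.5

def normalise_to_mvc(task_window, mvc_windows):
    """Express task RMS amplitude as a percentage of the best MVC trial."""
    mvc = max(rms(w) for w in mvc_windows)
    return 100.0 * rms(task_window) / mvc

# Toy signal windows (arbitrary units)
mvc_trials = [[0.8, -0.9, 1.0, -0.85], [0.7, -0.75, 0.8, -0.7]]
task = [0.4, -0.45, 0.5, -0.4]
print(round(normalise_to_mvc(task, mvc_trials), 1))  # → 49.3
```

Normalising both surface and intramuscular signals to their own MVC tests is what makes activation levels from the two electrode types comparable in studies like this one.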
Shao, Hui; Fonseca, Vivian; Stoecker, Charles; Liu, Shuqian; Shi, Lizheng
2018-05-03
There is an urgent need to update diabetes prediction models, which have relied on the United Kingdom Prospective Diabetes Study (UKPDS) dating back to 1970s European populations. The objective of this study was to develop a risk engine with multiple risk equations using a recent patient cohort with type 2 diabetes mellitus reflective of the US population. A total of 17 risk equations for predicting diabetes-related microvascular and macrovascular events, hypoglycemia, mortality, and progression of diabetes risk factors were estimated using data from the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial (n = 10,251). Internal and external validation processes were used to assess performance of the Building, Relating, Assessing, and Validating Outcomes (BRAVO) risk engine. One-way sensitivity analysis was conducted to examine the impact of risk factors on mortality at the population level. The BRAVO risk engine added several risk factors, including severe hypoglycemia and common US racial/ethnic categories, compared with the UKPDS risk engine. The BRAVO risk engine also modeled the mortality escalation associated with intensive glycemic control (i.e., glycosylated hemoglobin < 6.5%). External validation showed good predictive power on 28 endpoints observed in other clinical trials (slope = 1.071, R² = 0.86). The BRAVO risk engine for the US diabetes cohort provides an alternative to the UKPDS risk engine. It can be applied to assist clinical and policy decision making, such as cost-effective resource allocation in the USA.
Estimation of in-vivo neurotransmitter release by brain microdialysis: the issue of validity.
Di Chiara, G.; Tanda, G.; Carboni, E.
1996-11-01
Although microdialysis is commonly understood as a method of sampling low molecular weight compounds in the extracellular compartment of tissues, this definition appears insufficient to specifically describe brain microdialysis of neurotransmitters. In fact, transmitter overflow from the brain into dialysates is critically dependent upon the composition of the perfusing Ringer. Therefore, the dialysing Ringer not only recovers the transmitter from the extracellular brain fluid but is a main determinant of its in-vivo release. Two types of brain microdialysis are distinguished: quantitative micro-dialysis and conventional microdialysis. Quantitative microdialysis provides an estimate of neurotransmitter concentrations in the extracellular fluid in contact with the probe. However, this information might poorly reflect the kinetics of neurotransmitter release in vivo. Conventional microdialysis involves perfusion at a constant rate with a transmitter-free Ringer, resulting in the formation of a steep neurotransmitter concentration gradient extending from the Ringer into the extracellular fluid. This artificial gradient might be critical for the ability of conventional microdialysis to detect and resolve phasic changes in neurotransmitter release taking place in the implanted area. On the basis of these characteristics, conventional microdialysis of neurotransmitters can be conceptualized as a model of the in-vivo release of neurotransmitters in the brain. As such, the criteria of face-validity, construct-validity and predictive-validity should be applied to select the most appropriate experimental conditions for estimating neurotransmitter release in specific brain areas in relation to behaviour.
Endogenous protein "barcode" for data validation and normalization in quantitative MS analysis.
Lee, Wooram; Lazar, Iulia M
2014-07-01
Quantitative proteomic experiments with mass spectrometry detection are typically conducted by using stable isotope labeling and label-free quantitation approaches. Proteins with housekeeping functions and stable expression levels, such as actin, tubulin, and glyceraldehyde-3-phosphate dehydrogenase, are frequently used as endogenous controls. Recent studies have shown that the expression level of such common housekeeping proteins is, in fact, dependent on various factors such as cell type, cell cycle, or disease status and can change in response to a biochemical stimulation. The interference of such phenomena can, therefore, substantially compromise their use for data validation, alter the interpretation of results, and lead to erroneous conclusions. In this work, we advance the concept of a protein "barcode" for data normalization and validation in quantitative proteomic experiments. The barcode comprises a novel set of proteins that was generated from cell cycle experiments performed with MCF7, an estrogen receptor positive breast cancer cell line, and MCF10A, a nontumorigenic immortalized breast cell line. The protein set was selected from a list of ~3700 proteins identified in different cellular subfractions and cell cycle stages of MCF7/MCF10A cells, based on the stability of spectral count data generated with an LTQ ion trap mass spectrometer. A total of 11 proteins qualified as endogenous standards for the nuclear and 62 for the cytoplasmic barcode, respectively. The validation of the protein sets was performed with a complementary SKBR3/Her2+ cell line.
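Selecting barcode candidates "based on the stability of spectral count data" can be illustrated with a simple coefficient-of-variation filter. This is a hedged sketch: the CV criterion and the 0.25 cutoff are illustrative assumptions, not the paper's exact selection rule.

```python
import statistics

def stable_proteins(spectral_counts, cv_cutoff=0.25):
    """Return proteins whose spectral counts are stable across conditions,
    i.e. coefficient of variation (stdev / mean) below a cutoff.
    The CV criterion and cutoff are illustrative, not the paper's rule."""
    stable = []
    for protein, counts in spectral_counts.items():
        mean = statistics.mean(counts)
        if mean > 0 and statistics.stdev(counts) / mean < cv_cutoff:
            stable.append(protein)
    return stable

# Hypothetical counts across four cell-cycle stages:
counts = {"ACTB": [100, 102, 98, 101], "VAR1": [10, 50, 5, 80]}
candidates = stable_proteins(counts)  # only the stable protein survives
```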
Safety validation test equipment operation
NASA Astrophysics Data System (ADS)
Kurosaki, Tadaaki; Watanabe, Takashi
1992-08-01
An overview of the activities conducted on safety validation test equipment operation for materials used for NASA manned missions is presented. Safety validation tests, such as flammability, odor, and offgassing tests, were conducted in accordance with NASA-NHB-8060.1C using test subjects in common with those used by NASA, and the equipment used was qualified for its functions and performance in accordance with NASDA-CR-99124, 'Safety Validation Test Qualification Procedures.' Test procedure systems were established by preparing 'Common Procedures for Safety Validation Test' as well as test procedures for flammability, offgassing, and odor tests. A test operation organization chaired by the General Manager of the Parts and Material Laboratory of NASDA (National Space Development Agency of Japan) was established, and the test leaders and operators in the organization were qualified in accordance with the specified procedures. One hundred and one tests have been conducted so far by the Parts and Material Laboratory at the request of manufacturers, submitted through the Space Station Group and the Safety and Product Assurance for Manned Systems Office.
Valenta, Sabine; De Geest, Sabina; Fierz, Katharina; Beckmann, Sonja; Halter, Jörg; Schanz, Urs; Nair, Gayathri; Kirsch, Monika
2017-04-01
To give a first description of the perception of late effects among long-term survivors after allogeneic haematopoietic stem cell transplantation (HSCT) and to validate the German Brief Illness Perception Questionnaire (BIPQ). This is a secondary analysis of data from the cross-sectional, mixed-method PROVIVO study, which included 376 survivors from two Swiss HSCT centres. First, we analysed the sample characteristics and the distribution for each BIPQ item. Secondly, we tested three validity types following the American Educational Research Association (AERA) Standards: content validity indices (CVIs) were assessed based on an expert survey (n = 9). A confirmatory factor analysis (CFA) explored the internal structure, and correlations tested validity in relation to other variables, including data from the Hospital Anxiety and Depression Scale (HADS), the number and burden of late effects, and clinical variables. In total, 319 HSCT recipients returned completed BIPQs. For this sample, the most feared threat to post-transplant life was long-lasting late effects (median = 8/10). The expert survey revealed an overall acceptable CVI (0.82); three items (on personal control, treatment control and causal representation) yielded low CVIs (<0.78). The CFA confirmed that the BIPQ fits the underlying construct, the Common-Sense Model (CSM) (χ²(df) = 956.321, p = 0.00). The HADS scores correlated strongly with the item emotional representation (r = 0.648; r = 0.656). Given its overall content validity, the German BIPQ is a promising instrument to gain deeper insights into patients' perceptions of HSCT late effects. However, as three items revealed potential problems, improvements and adaptations of the translation are required. Following these revisions, validity evidence should be re-examined through an in-depth patient survey. Copyright © 2017 Elsevier Ltd. All rights reserved.
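The item-level content validity index used above is simply the proportion of expert raters who judge an item relevant. A minimal sketch, assuming a 4-point relevance scale where ratings of 3 or 4 count as relevant (the abstract does not state the scale, so this is an assumption):

```python
def item_cvi(expert_ratings, relevant=(3, 4)):
    """Item-level content validity index: proportion of experts rating the
    item relevant. The 4-point scale with 3/4 counted as relevant is an
    assumed convention, not stated in the abstract."""
    return sum(r in relevant for r in expert_ratings) / len(expert_ratings)

# With 9 experts (as in the study), 7 relevant ratings give CVI ~ 0.78,
# which is the cutoff the abstract uses to flag problematic items.
cvi = item_cvi([4, 4, 3, 3, 4, 3, 4, 2, 1])
```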
Ay, Ali; Bulut, Hulya
2015-08-01
Many ostomy patients experience peristomal skin lesions. A descriptive study was conducted to assess the validity, usability, and reliability of the Peristomal Skin Lesions Assessment instrument (SACS instrument) adapted to Turkish from English. The SACS Instrument consists of 2 main assessments: lesion type (utilizing definitions and photographs) and lesion area by location around the ostomy. The study was performed in 2 stages: 1) the SACS language was changed and its content validity established; and 2) the instrument's content validity and inter-observer agreement (consistency) were determined among pairs of nurses who used the tool to assess peristomal skin lesions. Patients (included if they were >18 years old and receiving treatment/observation at 1 of the 4 participating stomatherapy units) and 8 stomatherapy nurses also completed appropriate sociodemographic questionnaires. Of the 393 patients screened during the 7-month study, 100 (average age 56.74 ± 14.03 years, 55 men) participated; most (79) had a planned operation. A little more than half (59) of the patients had colorectal cancer and 28 had their stoma site marked preoperatively by a stomatherapy nurse. The most common peristomal skin lesion risk factors were having an ileostomy and unplanned surgery. The content validity index of the entire Turkish SACS instrument was 1, and the inter-observer agreement Kappa statistic was very good (K = 0.90, 95% CI 0.80-0.99). Individual SACS item K values ranged from K = 0.84 (95% CI 0.63-1) to K = 1 (95% CI 1). Most (62.5%) nurses found the terms and pictures used in the SACS classification adequate and suitable, and 50% believed the Turkish version of the SACS instrument was a valid and suitable assessment tool for use by Turkish stomatherapy nurses. Validity and reliability studies involving larger and more diverse patient and nurse samples are warranted.
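The inter-observer agreement statistic reported above is Cohen's kappa, which corrects raw agreement between two raters for agreement expected by chance. A self-contained sketch with hypothetical lesion-type labels (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels
    (e.g. lesion types) to the same cases."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Chance agreement: product of each rater's marginal label frequencies.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two nurses rating four hypothetical peristomal lesions:
kappa = cohens_kappa(["L1", "L2", "L1", "L3"], ["L1", "L2", "L1", "L3"])
```

Values of kappa above roughly 0.8, as reported in the study, are conventionally described as very good agreement.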
Cawley, John
2015-01-01
The method of instrumental variables (IV) is useful for estimating causal effects. Intuitively, it exploits exogenous variation in the treatment, sometimes called natural experiments or instruments. This study reviews the literature in health-services research and medical research that applies the method of instrumental variables, documents trends in its use, and offers examples of various types of instruments. A literature search of the PubMed and EconLit research databases for English-language journal articles published after 1990 yielded a total of 522 original research articles. Citation counts for each article were derived from the Web of Science. A selective review was conducted, with articles prioritized based on number of citations, validity and power of the instrument, and type of instrument. The average annual number of papers in health services research and medical research that apply the method of instrumental variables rose from 1.2 in 1991-1995 to 41.8 in 2006-2010. Commonly used instruments (natural experiments) in health and medicine are relative distance to a medical care provider offering the treatment and the medical care provider's historic tendency to administer the treatment. Less common but still noteworthy instruments include randomization of treatment for reasons other than research, randomized encouragement to undertake the treatment, day of week of admission as an instrument for waiting time for surgery, and genes as an instrument for whether the respondent has a heritable condition. The use of the method of IV has increased dramatically in the past 20 years, and a wide range of instruments have been used. Applications of the method of IV have in several cases upended conventional wisdom that was based on correlations and led to important insights about health and healthcare. Future research should pursue new applications of existing instruments and search for new instruments that are powerful and valid.
Penelo, Eva; Estévez-Guerra, Gabriel J; Fariña-López, Emilio
2018-03-01
To study the internal structure and measurement invariance of the Physical Restraint Use Questionnaire and to compare perceptions, experience and training regarding the use of physical restraint on older people between nursing staff working in hospitals and nursing homes. Physical restraint of patients is still common in many countries, and thus, it is important to study the attitudes of nursing staff. One of the most common tools used to assess perceptions regarding its use is the Physical Restraint Use Questionnaire. However, gaps exist in its internal structure and measurement invariance across different groups of respondents. Cross-sectional multicentre survey. Data were collected from nurses working in eight Spanish hospitals and 19 nursing homes. All registered nurses and nurse assistants (N = 3,838) were contacted, of whom 1,635 agreed to participate. Confirmatory factor analysis was performed to determine the internal structure and measurement invariance of the Physical Restraint Use Questionnaire, after which scale scores and other measures of experience and training were compared between hospital-based (n = 855) and nursing home-based (n = 780) nurses. The Physical Restraint Use Questionnaire showed three invariant factors across type of facility, and also across professional category and sex. Nursing staff working in both types of facility scored similarly; prevention of therapy disruption and prevention of falls were rated as the most important reasons. Nurses working in nursing homes reported using restraint "many times" more frequently (52.9% vs. 38.6%) and reported a severe lack of training less often (18.2% vs. 58.7%), more often perceiving their training as adequate (33.4% vs. 17.7%), than hospital-based nurses. These findings support the Physical Restraint Use Questionnaire as a valid and reliable tool for assessing the importance given to the use of physical restraint on older people by nursing professionals, regardless of the setting being studied.
This information would help to design physical restraint training for nursing staff more specifically and to plan institutional interventions aimed at reducing its use. © 2018 John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Hartley, S. L.; MacLean, W. E., Jr.
2006-01-01
Background: Likert-type scales are increasingly being used among people with intellectual disability (ID). These scales offer an efficient method for capturing a wide range of variance in self-reported attitudes and behaviours. This review is an attempt to evaluate the reliability and validity of Likert-type scales in people with ID. Methods:…
Antonopoulou, Z; Konstantakopoulos, G; Tzinieri-Coccosis, M; Sinodinou, C
2017-01-01
The self-report Early Trauma Inventory (ETI-SR-SF) was developed by Bremner et al in 2007 and has proven to be a valid tool for the assessment of childhood trauma. The inventory covers four types of traumatic experiences: general trauma, physical abuse, emotional abuse and sexual abuse. The primary aim of the present study was to assess the internal consistency, test-retest reliability, convergent validity and factor structure of the Greek version of the ETI-SR-SF. The study sample consisted of 605 individuals (402 women), undergraduate and postgraduate students of Athens universities with a mean age of 24.3 years. All participants completed a questionnaire on demographic characteristics, the Greek version of the ETI-SR-SF and the Greek version of the Trauma Symptoms Checklist (TSC-40). Both the ETI-SR-SF and TSC-40 were re-administered to 56 participants after three to four weeks. The ETI-SR-SF was found to display high levels of internal consistency (Cronbach's α=0.91) and test-retest reliability (ICC=0.93). In addition, the internal structure of every subscale was examined by means of factor analysis, which revealed that the items in every subscale contribute to a single factor explaining a large proportion of the variance. The correlation between total scores of the ETI-SR-SF and TSC-40 was strong and significant (r=0.42, p<0.001), indicating satisfactory convergent validity. The most frequently reported type of childhood trauma was corporal punishment, at a rate of 89.9%, followed by emotional abuse (67.2%) and sexual abuse (27%). These rates are higher than those found in the international literature, indicating that the various types of early traumatic experience are very common phenomena in the Greek student population. This finding should alert experts and requires replication and further investigation in studies with larger samples.
The findings of the present study suggest that the Greek version of the self-report Early Trauma Inventory (ETI-SR-SF) is a valid and reliable tool useful for future studies on childhood traumatic experiences in Greek populations. Moreover, according to our preliminary findings further investigation of the childhood trauma in Greece appears to be very much warranted.
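The internal consistency statistic reported above, Cronbach's alpha, is computed from the item variances and the variance of the total score. A minimal sketch on a tiny made-up score matrix (rows = respondents, columns = items):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents x items matrix of scores."""
    k = len(scores[0])  # number of items

    def var(values):  # sample variance (ddof = 1)
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Three hypothetical respondents answering two items:
alpha = cronbach_alpha([[1, 2], [2, 3], [3, 5]])
```

Alpha approaches 1 when items covary strongly, which is the sense in which the reported α = 0.91 indicates high internal consistency.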
Martinez-Vega, Ingrid Patricia; Doubova, Svetlana V; Aguirre-Hernandez, Rebeca; Infante-Castañeda, Claudia
2016-03-02
The aim of this study was to adapt and validate the Distress Scale for Mexican patients with type 2 diabetes and hypertension (DSDH17M). Two family medicine clinics affiliated with the Mexican Institute of Social Security. 722 patients with type 2 diabetes and/or hypertension (235 patients with diabetes, 233 patients with hypertension and 254 patients with both diseases). A cross-sectional survey. The validation procedures included: (1) content validity using a group of experts, (2) construct validity from exploratory factor analysis, (3) internal consistency using Cronbach's α, (4) convergent validity between DSDH17M and anxiety and depression using the Spearman correlation coefficient, (5) discriminative validity through the Wilcoxon rank-sum test and (6) test-retest reliability using intraclass correlation coefficient. The DSDH17M has 17 items and three factors explaining 67% of the total variance. Cronbach α ranged from 0.83 to 0.91 among factors. The first factor of 'Regime-related Distress and Emotional Burden' moderately correlated with anxiety and depression scores. Discriminative validity revealed that patients with obesity, those with stressful events and those who did not adhere to pharmacological treatment had significantly higher distress scores in all DSDH17M domains. Test-retest intraclass correlation coefficient for DSDH17M ranged from 0.92 to 0.97 among factors. DSDH17M is a valid and reliable tool to identify distress of patients with type 2 diabetes and hypertension. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Zou, Jinfeng; Wang, Edwin
2017-04-01
With the development of technology for detecting circulating tumor cells (CTCs) and cell-free DNAs (cfDNAs) in blood, serum, and plasma, non-invasive diagnosis of cancer is becoming promising. A few studies reported good correlations between signals from tumor tissues and CTCs or cfDNAs, making it possible to detect cancers using CTCs and cfDNAs. However, such detection cannot tell which cancer type the person has. To meet this challenge, we developed an algorithm, eTumorType, to identify cancer types based on copy number variations (CNVs) of the cancer founding clone. eTumorType integrates cancer hallmark concepts and a few computational techniques such as stochastic gradient boosting, voting, centroid, and leading patterns. eTumorType has been trained and validated on a large dataset including 18 common cancer types and 5327 tumor samples. eTumorType produced high accuracies (0.86-0.96) and high recall rates (0.79-0.92) for predicting colon, brain, prostate, and kidney cancers. In addition, relatively high accuracies (0.78-0.92) and recall rates (0.58-0.95) have also been achieved for predicting ovarian, breast luminal, lung, endometrial, stomach, head and neck, leukemia, and skin cancers. These results suggest that eTumorType could be used for non-invasive diagnosis to determine cancer types based on CNVs of CTCs and cfDNAs. Copyright © 2017 Beijing Institute of Genomics, Chinese Academy of Sciences and Genetics Society of China. Production and hosting by Elsevier B.V. All rights reserved.
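One of the component techniques named in the abstract, centroid classification, assigns a sample to the class whose mean training profile is nearest. This is only an illustrative sketch of that single component, not the eTumorType pipeline (which also combines boosting, voting, and leading patterns):

```python
def nearest_centroid(train_profiles, train_labels, sample):
    """Assign a CNV profile (a numeric vector) to the cancer type whose
    mean training profile is closest in squared Euclidean distance."""
    labels = sorted(set(train_labels))
    centroids = {}
    for label in labels:
        rows = [p for p, y in zip(train_profiles, train_labels) if y == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(labels, key=lambda c: dist2(sample, centroids[c]))

# Toy two-feature CNV profiles for two hypothetical cancer types:
X = [[0, 0], [0, 2], [10, 10], [10, 12]]
y = ["brain", "brain", "colon", "colon"]
label = nearest_centroid(X, y, [1, 1])
```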
Whitehead, Nicholas P; Bible, Kenneth L; Kim, Min Jeong; Odom, Guy L; Adams, Marvin E; Froehner, Stanley C
2016-12-15
Duchenne muscular dystrophy (DMD) is a severe, degenerative muscle disease that is commonly studied using the mdx mouse. The mdx diaphragm muscle closely mimics the pathophysiological changes in DMD muscles. mdx diaphragm force is commonly assessed ex vivo, precluding time course studies. Here we used ultrasonography to evaluate time-dependent changes in diaphragm function in vivo, by measuring diaphragm movement amplitude. In mdx mice, diaphragm amplitude decreased with age and values were much lower than for wild-type mice. Importantly, diaphragm amplitude strongly correlated with ex vivo specific force values. Micro-dystrophin administration increased mdx diaphragm amplitude by 26% after 4 weeks. Diaphragm amplitude correlated positively with ex vivo force values and negatively with diaphragm fibrosis, a major cause of DMD muscle weakness. These studies validate diaphragm ultrasonography as a reliable technique for assessing time-dependent changes in mdx diaphragm function in vivo. This technique will be valuable for testing potential therapies for DMD. Duchenne muscular dystrophy (DMD) is a severe, degenerative muscle disease caused by dystrophin mutations. The mdx mouse is a widely used animal model of DMD. The mdx diaphragm muscle most closely recapitulates key features of DMD muscles, including progressive fibrosis and considerable force loss. Diaphragm function in mdx mice is commonly evaluated by specific force measurements ex vivo. While useful, this method only measures force from a small muscle sample at one time point. Therefore, accurate assessment of diaphragm function in vivo would provide an important advance to study the time course of functional decline and treatment benefits. Here, we evaluated an ultrasonography technique for measuring time-dependent changes of diaphragm function in mdx mice. 
Diaphragm movement amplitude values for mdx mice were considerably lower than those for wild-type, decreased from 8 to 18 months of age, and correlated strongly with ex vivo specific force. We then investigated the time course of diaphragm amplitude changes following administration of an adeno-associated viral vector expressing Flag-micro-dystrophin (AAV-μDys) to young adult mdx mice. Diaphragm amplitude peaked 4 weeks after AAV-μDys administration, and was 26% greater than control mdx mice at this time. This value decreased slightly to 21% above mdx controls after 12 weeks of treatment. Importantly, diaphragm amplitude again correlated strongly with ex vivo specific force. Also, diaphragm amplitude and specific force negatively correlated with fibrosis levels in the muscle. Together, our results validate diaphragm ultrasonography as a reliable technique for assessing time-dependent changes in dystrophic diaphragm function in vivo, and for evaluating potential therapies for DMD. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
Some Chinese folk prescriptions for wind-cold type common cold.
Hai-Long, Zhai; Shimin, Chen; Yalan, Lu
2015-07-01
Although self-limiting, the common cold (gǎn mào) is highly prevalent. There are no effective antivirals to cure the common cold and few effective measures to prevent it. However, for thousands of years, Chinese people have treated the common cold with natural herbs. According to traditional Chinese medicine (TCM) theory (zhōng yī lǐ lùn), the common cold is considered an exterior syndrome, which can be further divided into the wind-cold type (fēng hán xíng), the wind-heat type (fēng rè xíng), and the summer heat dampness type (shǔ rè xíng). Since the most common type of common cold caught in winter and spring is the wind-cold type, this article introduces some Chinese folk prescriptions for the wind-cold type common cold, for people of normal and weak physique, respectively. For thousands of years, Chinese folk prescriptions for the common cold, as complementary and alternative medicine (CAM; bǔ chōng yǔ tì dài yī xué), have proven to be effective, convenient, cheap, and, most importantly, safe. The Chinese folk prescriptions (zhōng guó mín jiān chǔ fāng) for the wind-cold type common cold are quite suitable for general practitioners, or for patients with the wind-cold type common cold themselves, to treat the disease. Of course, their pharmacological features and mechanisms of action need to be further studied.
Put the Family Back in Family Health History: A Multiple-Informant Approach.
Lin, Jielu; Marcum, Christopher S; Myers, Melanie F; Koehly, Laura M
2017-05-01
An accurate family health history is essential for individual risk assessment. This study uses a multiple-informant approach to examine whether family members have consistent perceptions of shared familial risk for four common chronic conditions (heart disease, Type 2 diabetes, high cholesterol, and hypertension) and whether accounting for inconsistency in family health history reports leads to more accurate risk assessment. In 2012-2013, individual and family health histories were collected from 127 adult informants of 45 families in the Greater Cincinnati Area. Pedigrees were linked within each family to assess inter-informant (in)consistency regarding common biological family members' health histories. An adjusted risk assessment based on pooled pedigrees of multiple informants was evaluated to determine whether it could more accurately identify individuals affected by common chronic conditions, using self-reported disease diagnoses as a validation criterion. Analysis was completed in 2015-2016. Inter-informant consistency in family health history reports was 54% for heart disease, 61% for Type 2 diabetes, 43% for high cholesterol, and 41% for hypertension. Compared with the unadjusted risk assessment, the adjusted risk assessment correctly identified an additional 7%-13% of the individuals who had been diagnosed, with a ≤2% increase in cases that were predicted to be at risk but had not been diagnosed. Considerable inconsistency exists in individual knowledge of their family health history. Accounting for such inconsistency can, nevertheless, lead to a more accurate genetic risk assessment tool. A multiple-informant approach is potentially powerful when coupled with technology to support clinical decisions. Published by Elsevier Inc.
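Pooling pedigrees across informants can be sketched as a union over reports: a relative is flagged for a condition if any informant reports it. This is a simplified illustration of the multiple-informant idea; the union rule is an assumption, not the paper's exact adjustment algorithm.

```python
def pooled_family_history(informant_reports):
    """Pool several informants' reports on a shared pedigree.
    `informant_reports` maps informant -> {relative: set of conditions}.
    A relative is flagged for a condition if ANY informant reports it
    (an assumed pooling rule for illustration)."""
    pooled = {}
    for reports in informant_reports.values():
        for relative, conditions in reports.items():
            pooled.setdefault(relative, set()).update(conditions)
    return pooled

# Two hypothetical siblings reporting on the same relatives:
reports = {
    "sibling1": {"mother": {"hypertension"}, "uncle": set()},
    "sibling2": {"mother": {"type 2 diabetes"}, "uncle": {"heart disease"}},
}
pooled = pooled_family_history(reports)
```

The pooled pedigree captures conditions that any single informant would have missed, which is how the adjusted assessment can identify additional diagnosed individuals.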
Ganna, Andrea; Lee, Donghwan; Ingelsson, Erik; Pawitan, Yudi
2015-07-01
It is common and advised practice in biomedical research to validate experimental or observational findings in a population different from the one where the findings were initially assessed. This practice increases the generalizability of the results and decreases the likelihood of reporting false-positive findings. Validation becomes critical when dealing with high-throughput experiments, where the large number of tests increases the chance to observe false-positive results. In this article, we review common approaches to determine statistical thresholds for validation and describe the factors influencing the proportion of significant findings from a 'training' sample that are replicated in a 'validation' sample. We refer to this proportion as rediscovery rate (RDR). In high-throughput studies, the RDR is a function of false-positive rate and power in both the training and validation samples. We illustrate the application of the RDR using simulated data and real data examples from metabolomics experiments. We further describe an online tool to calculate the RDR using t-statistics. We foresee two main applications. First, if the validation study has not yet been collected, the RDR can be used to decide the optimal combination between the proportion of findings taken to validation and the size of the validation study. Secondly, if a validation study has already been done, the RDR estimated using the training data can be compared with the observed RDR from the validation data; hence, the success of the validation study can be assessed. © The Author 2014. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
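The abstract describes the rediscovery rate as a function of false-positive rate and power in both samples. Under a simple two-group mixture model (a fraction pi1 of tested hypotheses truly non-null, independent samples), the expected RDR can be written in closed form. This is a simplified sketch of the quantity described, not the authors' exact formulation:

```python
def rediscovery_rate(pi1, power_train, power_valid, alpha_train, alpha_valid):
    """Expected fraction of training-significant findings that are also
    significant in the validation sample, under a two-group mixture:
    pi1 = proportion of truly non-null hypotheses; alpha_* = type I error
    rates; power_* = power in each sample. Illustrative model only."""
    pi0 = 1.0 - pi1
    p_sig_train = pi1 * power_train + pi0 * alpha_train
    p_sig_both = (pi1 * power_train * power_valid
                  + pi0 * alpha_train * alpha_valid)
    return p_sig_both / p_sig_train

# If only 1% of hypotheses are non-null, many training hits are false
# positives, so the expected RDR is well below the validation power:
rdr = rediscovery_rate(0.01, 0.8, 0.9, 0.05, 0.05)
```

The two limiting cases are intuitive: if every hypothesis is non-null the RDR equals the validation power, and if none are, it equals the validation alpha.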
Subclinical Carotid Atherosclerosis in Asymptomatic Subjects With Type 2 Diabetes Mellitus.
Rubinat, Esther; Marsal, Josep Ramon; Vidal, Teresa; Cebrian, Cristina; Falguera, Mireia; Vilanova, Ma Belen; Betriu, Àngels; Fernández, Elvira; Franch, Josep; Mauricio, Dídac
2016-01-01
Subjects with type 2 diabetes mellitus are considered to be at high risk for cardiovascular disease. The identification of carotid atherosclerosis is a validated surrogate marker of cardiovascular disease. Nurses are key professionals in the improvement and intensification of cardiovascular preventive strategies. The aim is to study the presence of carotid atherosclerosis in a group of asymptomatic subjects with type 2 diabetes mellitus and no previous clinical cardiovascular disease. A total of 187 patients with type 2 diabetes mellitus and 187 age- and sex-matched subjects without type 2 diabetes mellitus were studied in this cross-sectional, observational, cohort study. Standard operational procedures were applied by the nursing team regarding physical examination and carotid ultrasound assessment. Common, bulb, and internal carotid arteries were explored by measuring intima-media thickness and identifying atherosclerotic plaques. Carotid intima-media thickness (c-IMT) and carotid plaque prevalence were significantly greater in diabetic subjects than in the control group. Carotid plaques and c-IMT were more frequent in men than in women and increased with increasing age. In the multivariate analysis, age, gender, waist circumference, systolic blood pressure, and hypercholesterolemia were positively associated with c-IMT, whereas age, gender, and weight were positively associated with carotid plaque. The current nurse-led study shows that subjects with type 2 diabetes mellitus have a high prevalence of subclinical atherosclerosis that is associated with cardiovascular risk factors.
Schiffman, Eric; Ohrbach, Richard; Truelove, Edmond; Look, John; Anderson, Gary; Goulet, Jean-Paul; List, Thomas; Svensson, Peter; Gonzalez, Yoly; Lobbezoo, Frank; Michelotti, Ambra; Brooks, Sharon L.; Ceusters, Werner; Drangsholt, Mark; Ettlin, Dominik; Gaul, Charly; Goldberg, Louis J.; Haythornthwaite, Jennifer A.; Hollender, Lars; Jensen, Rigmor; John, Mike T.; De Laat, Antoon; de Leeuw, Reny; Maixner, William; van der Meulen, Marylee; Murray, Greg M.; Nixdorf, Donald R.; Palla, Sandro; Petersson, Arne; Pionchon, Paul; Smith, Barry; Visscher, Corine M.; Zakrzewska, Joanna; Dworkin, Samuel F.
2015-01-01
Aims The original Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Axis I diagnostic algorithms have been demonstrated to be reliable. However, the Validation Project determined that the RDC/TMD Axis I validity was below the target sensitivity of ≥ 0.70 and specificity of ≥ 0.95. Consequently, these empirical results supported the development of revised RDC/TMD Axis I diagnostic algorithms that were subsequently demonstrated to be valid for the most common pain-related TMD and for one temporomandibular joint (TMJ) intra-articular disorder. The original RDC/TMD Axis II instruments were shown to be both reliable and valid. Working from these findings and revisions, two international consensus workshops were convened, from which recommendations were obtained for the finalization of new Axis I diagnostic algorithms and new Axis II instruments. Methods Through a series of workshops and symposia, a panel of clinical and basic science pain experts modified the revised RDC/TMD Axis I algorithms by using comprehensive searches of published TMD diagnostic literature followed by review and consensus via a formal structured process. The panel's recommendations for further revision of the Axis I diagnostic algorithms were assessed for validity by using the Validation Project's data set, and for reliability by using newly collected data from the ongoing TMJ Impact Project—the follow-up study to the Validation Project. New Axis II instruments were identified through a comprehensive search of the literature providing valid instruments that, relative to the RDC/TMD, are shorter in length, are available in the public domain, and currently are being used in medical settings. 
Results: The newly recommended Diagnostic Criteria for TMD (DC/TMD) Axis I protocol includes both a valid screener for detecting any pain-related TMD and valid diagnostic criteria for differentiating the most common pain-related TMD (sensitivity ≥ 0.86, specificity ≥ 0.98) and for one intra-articular disorder (sensitivity of 0.80 and specificity of 0.97). Diagnostic criteria for other common intra-articular disorders lack adequate validity for clinical diagnoses but can be used for screening purposes. Inter-examiner reliability for the clinical assessment associated with the validated DC/TMD criteria for pain-related TMD is excellent (kappa ≥ 0.85). Finally, a comprehensive classification system that includes both the common and less common TMD is also presented. The Axis II protocol retains selected original RDC/TMD screening instruments augmented with new instruments to assess jaw function as well as behavioral and additional psychosocial factors. The Axis II protocol is divided into screening and comprehensive self-report instrument sets. The screening instruments’ 41 questions assess pain intensity, pain-related disability, psychological distress, jaw functional limitations, and parafunctional behaviors, and a pain drawing is used to assess locations of pain. The comprehensive instruments, composed of 81 questions, assess in further detail jaw functional limitations and psychological distress as well as additional constructs of anxiety and presence of comorbid pain conditions. Conclusion: The recommended evidence-based new DC/TMD protocol is appropriate for use in both clinical and research settings. More comprehensive instruments augment short and simple screening instruments for Axis I and Axis II. These validated instruments allow for identification of patients with a range of simple to complex TMD presentations. PMID:24482784
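The sensitivity and specificity figures quoted in this abstract follow directly from a diagnostic 2x2 table. A minimal sketch; the counts below are invented for illustration and are not the Validation Project's data:

```python
# Hedged sketch: computing diagnostic-validity metrics (sensitivity,
# specificity) from a 2x2 confusion matrix. Counts are hypothetical.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) for a diagnostic test."""
    sensitivity = tp / (tp + fn)   # true-positive rate among those with the disorder
    specificity = tn / (tn + fp)   # true-negative rate among those without it
    return sensitivity, specificity

# Example: 86 of 100 cases detected, 98 of 100 non-cases correctly ruled out
sens, spec = sensitivity_specificity(tp=86, fn=14, tn=98, fp=2)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.86, specificity=0.98
```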
Habtewold, Tesfa Dejenie; Alemu, Sisay Mulugeta; Haile, Yohannes Gebreegziabhere
2016-04-15
Depression is a common comorbidity among patients with type 2 diabetes. There are several reports supporting a bidirectional association between depression and type 2 diabetes. However, data from non-western countries are limited. Therefore, the aim of this study was to assess the sociodemographic, clinical, and psychosocial factors associated with comorbid depression among type 2 diabetic outpatients presenting to Black Lion General Specialized Hospital, Addis Ababa, Ethiopia. This institution-based cross-sectional study was conducted on a random sample of 276 type 2 diabetic outpatients. Patients were evaluated for depression using the validated nine-item Patient Health Questionnaire (PHQ-9). Risk factors for depression were identified using multiple logistic regression analysis. In total, 264 study participants were interviewed, for a response rate of 95.6%. The prevalence of depression was 44.7%. In the multivariate analysis, the statistically significant risk factors for depression were monthly family income ≤ 650 (p-value = 0.056; OR = 2.0; 95% CI = 1.01, 4.2), presence of ≥3 diabetic complications (p-value = 0.03; OR = 3.3; 95% CI = 1.1, 10.0), diabetic nephropathy (p-value = 0.01; OR = 2.9; 95% CI = 1.2, 6.7), negative life events (p-value = 0.01; OR = 2.4; 95% CI = 1.2, 4.5), and poor social support (p-value = 0.001; OR = 2.7; 95% CI = 1.5, 5.0). This study demonstrated that depression is a common comorbid health problem, with a prevalence rate of 44.7%. The presence of diabetic complications, low monthly family income, diabetic nephropathy, negative life events, and poor social support were the statistically significant risk factors associated with depression. We presume that the burden of mental health problems, especially depression, is high among people with type 2 diabetes mellitus. Therefore, specific attention is needed to diagnose depression early and treat it promptly.
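The odds ratios and 95% confidence intervals reported above are standard outputs of logistic regression; for a single binary risk factor they can be sketched from a 2x2 table with a Wald interval. A minimal pure-Python sketch using hypothetical counts, not the study's data:

```python
import math

# Hedged sketch: odds ratio (OR) and 95% Wald confidence interval from a
# 2x2 exposure-by-outcome table. All counts below are invented.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a=exposed cases, b=exposed non-cases, c=unexposed cases, d=unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Example: depression vs. poor social support (hypothetical counts)
or_, lo, hi = odds_ratio_ci(a=60, b=40, c=58, d=106)
print(f"OR={or_:.1f}, 95% CI=({lo:.1f}, {hi:.1f})")  # OR=2.7, 95% CI=(1.6, 4.6)
```

A full multivariable analysis would adjust each OR for the other covariates, which this single-table sketch does not do.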
A Review of Validation Research on Psychological Variables Used in Hiring Police Officers.
ERIC Educational Resources Information Center
Malouff, John M.; Schutte, Nicola S.
This paper reviews the methods and findings of published research on the validity of police selection procedures. As a preface to the review, the typical police officer selection process is briefly described. Several common methodological deficiencies of the validation research are identified and discussed in detail: (1) use of past-selection…
Why Does a Method That Fails Continue To Be Used: The Answer
Templeton, Alan R.
2009-01-01
It has been claimed that hundreds of researchers use nested clade phylogeographic analysis (NCPA) based on what the method promises rather than requiring objective validation of the method. The supposed failure of NCPA is based upon the argument that validating it by using positive controls ignored type I error, and that computer simulations have shown a high type I error rate. The first argument is factually incorrect: the previously published validation analysis fully accounted for both type I and type II errors. The simulations that indicate a 75% type I error rate have serious flaws and only evaluate outdated versions of NCPA. These outdated type I error rates fall precipitously when the 2003 version of single-locus NCPA is used or when the 2002 multi-locus version of NCPA is used. It is shown that the treewise type I errors in single-locus NCPA can be corrected to the desired nominal level by a simple statistical procedure, and that multilocus NCPA reconstructs a simulated scenario used to discredit NCPA with 100% accuracy. Hence, NCPA is not a failed method at all; rather, it has been validated both by actual data and by simulated data in a manner that satisfies the published criteria given by its critics. The critics have come to different conclusions because they have focused on the pre-2002 versions of NCPA and have failed to take into account the extensive developments in NCPA since 2002. Hence, researchers can choose to use NCPA based upon objective critical validation that shows that NCPA delivers what it promises. PMID:19335340
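Correcting per-test ("treewise") type I error rates to a nominal familywise level, as mentioned above, is a standard multiple-testing adjustment. The sketch below shows the generic Dunn-Sidak form purely as an illustration; the abstract does not specify which simple procedure NCPA actually uses:

```python
# Hedged sketch: per-test alpha that keeps the familywise type I error rate
# at a nominal level across n independent tests (Dunn-Sidak adjustment).
# Illustrative only; not necessarily the procedure used in NCPA.

def sidak_alpha(nominal_alpha, n_tests):
    """Per-test alpha so the familywise type I error rate equals nominal_alpha."""
    return 1 - (1 - nominal_alpha) ** (1 / n_tests)

per_test = sidak_alpha(0.05, 10)
print(round(per_test, 5))  # 0.00512
```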
Development and validation of a cost-utility model for Type 1 diabetes mellitus.
Wolowacz, S; Pearson, I; Shannon, P; Chubb, B; Gundgaard, J; Davies, M; Briggs, A
2015-08-01
To develop a health economic model to evaluate the cost-effectiveness of new interventions for Type 1 diabetes mellitus by their effects on long-term complications (measured through mean HbA1c) while capturing the impact of treatment on hypoglycaemic events. Through a systematic review, we identified complications associated with Type 1 diabetes mellitus and data describing the long-term incidence of these complications. An individual patient simulation model was developed and included the following complications: cardiovascular disease, peripheral neuropathy, microalbuminuria, end-stage renal disease, proliferative retinopathy, ketoacidosis, cataract, hypoglycaemia and adverse birth outcomes. Risk equations were developed from published cumulative incidence data and hazard ratios for the effect of HbA1c, age and duration of diabetes. We validated the model by comparing model predictions with observed outcomes from studies used to build the model (internal validation) and from other published data (external validation). We performed illustrative analyses for typical patient cohorts and a hypothetical intervention. Model predictions were within 2% of expected values in the internal validation and within 8% of observed values in the external validation (percentages represent absolute differences in the cumulative incidence). The model utilized high-quality, recent data specific to people with Type 1 diabetes mellitus. In the model validation, results deviated less than 8% from expected values. © 2014 Research Triangle Institute d/b/a RTI Health Solutions. Diabetic Medicine © 2014 Diabetes UK.
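The validation criterion described above (predictions within 2% and 8% of observed cumulative incidence, expressed as absolute differences) amounts to a simple check. A minimal sketch with invented incidence values, not the model's outputs:

```python
# Hedged sketch: validating a model by the absolute difference between
# predicted and observed cumulative incidence. Values are invented.

def max_abs_deviation(predicted, observed):
    """Largest absolute difference between paired cumulative incidence values."""
    return max(abs(p - o) for p, o in zip(predicted, observed))

predicted = [0.10, 0.22, 0.35]   # hypothetical model predictions
observed  = [0.11, 0.24, 0.33]   # hypothetical study outcomes
dev = max_abs_deviation(predicted, observed)
print(f"max deviation = {dev:.2f}")
print("passes 2% criterion:", dev <= 0.02 + 1e-12)  # tolerance for float rounding
```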
HIV reservoirs: the new frontier.
Iglesias-Ussel, Maria D; Romerio, Fabio
2011-01-01
Current antiretroviral therapies suppress viremia to very low levels, but are ineffective in eliminating reservoirs of persistent HIV infection. Efforts toward the development of therapies aimed at HIV reservoirs are complicated by the evidence that HIV establishes persistent productive and nonproductive infection in a number of cell types and through a variety of mechanisms. Moreover, immunologically privileged sites such as the brain also act as HIV sanctuaries. To facilitate the advancement of our knowledge in this new area of research, in vitro models of HIV persistence in different cellular reservoirs have been developed, particularly in CD4+ T-cells that represent the largest pool of persistently infected cells in the body. Whereas each model presents clear advantages, they all share one common limitation: they are systems attempting to recapitulate extremely complex virus-cell interactions occurring in vivo, which we know very little about. Potentially conflicting results arising from different models may be difficult to interpret without validation with clinical samples. Addressing these issues, among others, merits careful consideration for the identification of valid targets and the design of effective strategies for therapy, which may increase the success of efforts toward HIV eradication.
Comparing the performance of biomedical clustering methods.
Wiwie, Christian; Baumbach, Jan; Röttger, Richard
2015-11-01
Identifying groups of similar objects is a popular first step in biomedical data analysis, but it is error-prone and impossible to perform manually. Many computational methods have been developed to tackle this problem. Here we assessed 13 well-known methods using 24 data sets ranging from gene expression to protein domains. Performance was judged on the basis of 13 common cluster validity indices. We developed a clustering analysis platform, ClustEval (http://clusteval.mpi-inf.mpg.de), to promote streamlined evaluation, comparison and reproducibility of clustering results in the future. This allowed us to objectively evaluate the performance of all tools on all data sets with up to 1,000 different parameter sets each, resulting in a total of more than 4 million calculated cluster validity indices. We observed that there was no universal best performer, but on the basis of this wide-ranging comparison we were able to develop a short guideline for biomedical clustering tasks. ClustEval allows biomedical researchers to pick the appropriate tool for their data type and allows method developers to compare their tool to the state of the art.
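The cluster validity indices used to judge performance above can each be computed from a partition and the data alone. A minimal pure-Python sketch of one common index, the silhouette coefficient, on a toy 1-D dataset (ClustEval evaluates 13 such indices; this is just one illustrative member):

```python
# Hedged sketch: mean silhouette coefficient for 1-D data under Euclidean
# distance. Toy data, two well-separated clusters.

def silhouette(points, labels):
    """Mean silhouette coefficient over all points."""
    clusters = {}
    for i, label in enumerate(labels):
        clusters.setdefault(label, []).append(i)
    scores = []
    for i in range(len(points)):
        same = [j for j in clusters[labels[i]] if j != i]
        if not same:                     # singleton cluster: define s = 0
            scores.append(0.0)
            continue
        a = sum(abs(points[i] - points[j]) for j in same) / len(same)
        b = min(                          # nearest other cluster's mean distance
            sum(abs(points[i] - points[j]) for j in members) / len(members)
            for label, members in clusters.items()
            if label != labels[i]
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(points)

data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
labels = [0, 0, 0, 1, 1, 1]
print(round(silhouette(data, labels), 2))  # 0.96 (near 1: tight, well-separated clusters)
```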
Flight Simulator and Training Human Factors Validation
NASA Technical Reports Server (NTRS)
Glaser, Scott T.; Leland, Richard
2009-01-01
Loss of control has been identified as the leading cause of aircraft accidents in recent years. Efforts have been made to better equip pilots to deal with these types of events, commonly referred to as upsets. A major challenge in these endeavors has been recreating the motion environments found in flight as the majority of upsets take place well beyond the normal operating envelope of large aircraft. The Environmental Tectonics Corporation has developed a simulator motion base, called GYROLAB, that is capable of recreating the sustained accelerations, or G-forces, and motions of flight. A two part research study was accomplished that coupled NASA's Generic Transport Model with a GYROLAB device. The goal of the study was to characterize physiological effects of the upset environment and to demonstrate that a sustained motion based simulator can be an effective means for upset recovery training. Two groups of 25 Air Transport Pilots participated in the study. The results showed reliable signs of pilot arousal at specific stages of similar upsets. Further validation also demonstrated that sustained motion technology was successful in improving pilot performance during recovery following an extensive training program using GYROLAB technology.
Reliability and Validity of Wisconsin Upper Respiratory Symptom Survey, Korean Version
Yang, Su-Young; Kang, Weechang; Yeo, Yoon; Park, Yang-Chun
2011-01-01
Background: The Wisconsin Upper Respiratory Symptom Survey (WURSS) is a self-administered questionnaire developed in the United States to evaluate the severity of the common cold and its reliability has been validated. We developed a Korean language version of this questionnaire by using a sequential forward and backward translation approach. The purpose of this study was to validate the Korean version of the Wisconsin Upper Respiratory Symptom Survey (WURSS-K) in Korean patients with common cold. Methods: This multicenter prospective study enrolled 107 participants who were diagnosed with common cold and consented to participate in the study. The WURSS-K includes 1 global illness severity item, 32 symptom-based items, 10 functional quality-of-life (QOL) items, and 1 item assessing global change. The SF-8 was used as an external comparator. Results: The participants were 54 women and 53 men aged 18 to 42 years. The WURSS-K showed good reliability in 10 domains, with Cronbach’s alphas ranging from 0.67 to 0.96 (mean: 0.84). Comparison of the reliability coefficients of the WURSS-K and WURSS yielded a Pearson correlation coefficient of 0.71 (P = 0.02). Validity of the WURSS-K was evaluated by comparing it with the SF-8, which yielded a Pearson correlation coefficient of −0.267 (P < 0.001). The Guyatt’s responsiveness index of the WURSS-K ranged from 0.13 to 0.46, and the correlation coefficient with the WURSS was 0.534 (P < 0.001), indicating a close correlation between the WURSS-K and WURSS. Conclusions: The WURSS-K is a reliable, valid, and responsive disease-specific questionnaire for assessing symptoms and QOL in Korean patients with common cold. PMID:21691034
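The internal-consistency reliability figures above (Cronbach's alphas of 0.67 to 0.96) come from a standard formula over item scores. A minimal pure-Python sketch with made-up item responses, not WURSS-K data:

```python
# Hedged sketch: Cronbach's alpha, alpha = k/(k-1) * (1 - sum(item variances)
# / variance of total scores), using population variances. Data are invented.

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's scores across respondents."""
    k = len(items)
    n = len(items[0])

    def var(xs):                       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

items = [[3, 4, 2, 5], [2, 4, 3, 5], [3, 5, 2, 4]]   # 3 items x 4 respondents
print(round(cronbach_alpha(items), 2))  # 0.89
```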
Nutrition Knowledge and Training Needs in the School Environment
NASA Astrophysics Data System (ADS)
Jones, Anna Marie
The nutrition environment in schools can influence the risk for childhood overweight and obesity, which in turn can have life-long implications for risk of chronic disease. This dissertation aimed to examine the nutrition environment in primary public schools in California with regard to the amount of nutrition education provided in the classroom, the nutrition knowledge of teachers, and the training needs of school nutrition personnel. To determine the nutrition knowledge of teachers, a valid and reliable questionnaire was developed. The systematic process involved cognitive interviews, a mail-based pretest that utilized a random sample of addresses in California, and validity and reliability testing in a sample of university students. Results indicated that the questionnaire had adequate construct validity, internal consistency reliability, and test-retest reliability. Following validation, the knowledge questionnaire was used in a study of public school teachers in California to determine the relationship between demographic and classroom characteristics and nutrition knowledge, in addition to barriers to nutrition education and resources used to plan nutrition lessons. Nutrition knowledge was not found to be associated with teaching nutrition in the classroom; however, it was associated with gender, identifying as Hispanic or Latino, and grade level grouping taught. The most common barriers to nutrition education were time and unrelated subject matter. The most commonly used resources to plan nutrition lessons were Dairy Council of California educational materials. The school nutrition program was the second area of the school nutrition environment to be examined, and the primary focus was to determine the perceived training needs of California school nutrition personnel. 
Respondents indicated a need for training in topics related to: program management; the Healthy, Hunger-Free Kids Act of 2010; nutrition, health and wellness; planning, preparing, and serving meals; and communication and marketing. Those employed in residential child care institutions expressed a strong need for training specific to this type of program. Overall, the results of this dissertation contribute to the body of knowledge about nutrition in the school environment and raise interesting questions to be examined in future studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anh Bui; Nam Dinh; Brian Williams
In addition to the validation data plan, the development of advanced techniques for calibration and validation of complex multiscale, multiphysics nuclear reactor simulation codes is a main objective of the CASL VUQ plan. Advanced modeling of LWR systems normally involves a range of physico-chemical models describing multiple interacting phenomena, such as thermal hydraulics, reactor physics, coolant chemistry, etc., which occur over a wide range of spatial and temporal scales. To a large extent, the accuracy of (and uncertainty in) overall model predictions is determined by the correctness of various sub-models, which are not conservation-laws based, but empirically derived from measurement data. Such sub-models normally require extensive calibration before the models can be applied to analysis of real reactor problems. This work demonstrates a case study of calibration of a common model of subcooled flow boiling, which is an important multiscale, multiphysics phenomenon in LWR thermal hydraulics. The calibration process is based on a new strategy of model-data integration, in which all sub-models are simultaneously analyzed and calibrated using multiple sets of data of different types. Specifically, both data on large-scale distributions of void fraction and fluid temperature and data on small-scale physics of wall evaporation were simultaneously used in this work’s calibration. In a departure from traditional (or common-sense) practice of tuning/calibrating complex models, a modern calibration technique based on statistical modeling and Bayesian inference was employed, which allowed simultaneous calibration of multiple sub-models (and related parameters) using different datasets. Quality of data (relevancy, scalability, and uncertainty) could be taken into consideration in the calibration process. 
This work presents a step forward in the development and realization of the “CIPS Validation Data Plan” at the Consortium for Advanced Simulation of LWRs to enable quantitative assessment of the CASL modeling of Crud-Induced Power Shift (CIPS) phenomenon, in particular, and the CASL advanced predictive capabilities, in general. This report is prepared for the Department of Energy’s Consortium for Advanced Simulation of LWRs program’s VUQ Focus Area.
Validation of the "Pain Block" concrete ordinal scale for children aged 4 to 7 years.
Jung, Jin Hee; Lee, Jin Hee; Kim, Do Kyun; Jung, Jae Yun; Chang, Ikwan; Kwon, Hyuksool; Shin, Jonghwan; Paek, So Hyun; Oh, Sohee; Kwak, Young Ho
2018-04-01
Pain scales using faces are commonly used tools for assessing pain in children capable of communicating. However, some children require other types of pain scales because they have difficulties in understanding faces pain scales. The goal of this study was to develop and validate the "Pain Block" concrete ordinal scale for 4- to 7-year-old children. This was a multicenter prospective observational study in the emergency department. Psychometric properties (convergent validity, discriminative validity, responsivity, and reliability) were compared between the "Pain Block" pain scale and the Faces Pain Scale-Revised (FPS-R) to assess the validity of the "Pain Block" scale. A total of 163 children (mean age, 5.5 years) were included in this study. The correlation coefficient between the FPS-R and the Pain Block scale was 0.82 for all participants and increased with age. Agreement between the 2 pain scales was acceptable, with 95.0% of the values within the predetermined limit. The differences in mean scores between the painful group and nonpainful group were 3.3 (95% confidence interval, 2.6-4.1) and 3.8 (95% confidence interval, 3.1-4.6) for the FPS-R and Pain Block, respectively. The pain scores for both pain scales were significantly decreased when analgesics or pain-relieving procedures were administered (difference in Pain Block, 2.4 [1.4-3.3]; and difference in FPS-R, 2.3 [1.3-3.3]). The Pain Block pain scale could be used to assess pain in 4- to 7-year-old children capable of understanding and counting up to the number 5, even if they do not understand the FPS-R pain scale.
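The agreement figure above (95.0% of values within a predetermined limit) is the proportion of paired score differences falling inside that limit. A minimal sketch with invented scores and limit, not the study's data:

```python
# Hedged sketch: fraction of paired pain-score differences within an
# agreement limit. Scores and the 2-point limit are invented.

def agreement_rate(scores_a, scores_b, limit):
    """Fraction of paired observations whose absolute difference is <= limit."""
    pairs = list(zip(scores_a, scores_b))
    return sum(abs(a - b) <= limit for a, b in pairs) / len(pairs)

fps_r  = [2, 4, 6, 8, 10]   # hypothetical FPS-R scores
blocks = [2, 5, 6, 7, 2]    # hypothetical Pain Block scores
print(agreement_rate(fps_r, blocks, limit=2))  # 0.8
```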
Li, Zhandong; Shi, Qiuling; Liu, Meng; Jia, Liqun; He, Bin; Yang, Yufei; Liu, Jie; Lin, Hongsheng; Lin, Huei-Kai; Li, Pingping; Wang, Xin Shelley
2017-11-01
The MD Anderson Symptom Inventory (MDASI) is a brief, yet thorough, patient-reported outcomes measure for assessing the severity of common cancer-related symptoms and their interference with daily functioning. We report the development of an MDASI version tailored for use with Traditional Chinese Medicine in China (the MDASI-TCM). Chinese-speaking patients with mixed cancer types (n = 317) participated in the study. The development and validation process included four steps: 1) identify candidate TCM-specific items, with input from patients, oncologists, and TCM specialists; 2) eliminate candidate TCM items lacking relevance, based on patient report; 3) psychometrically examine the MDASI-TCM's validity and reliability in cancer patients receiving TCM-based care; and 4) cognitively debrief patients to assess the MDASI-TCM's relevance, understandability, and acceptability. Seven TCM-specific symptom items (sweating, feeling cold, constipation, bitter taste, coughing, palpitations, and heat in palms/soles) were clinically and psychometrically meaningful to add to the core MDASI. Approximately 61% of patients had moderate to severe symptoms (rated ≥5 on the MDASI-TCM's 0-10 scale). Cronbach α coefficients were .90 for symptom-severity items and .93 for interference items, indicating internal consistency reliability. Known-group validity was substantiated by the MDASI-TCM's detection of differences in symptom severity according to performance status (P < .001) and interference levels by cancer stage (P < .05). Cognitive debriefing indicated that patients found the MDASI-TCM to be an understandable, easy-to-use tool. The Chinese MDASI-TCM is a valid, reliable, and concise measure of symptom severity and interference that can be used to assess Chinese cancer patients and survivors receiving TCM-based care. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Davis, Bruce H; Wood, Brent; Oldaker, Teri; Barnett, David
2013-01-01
Good laboratory practice requires validation of all flow cytometry and other cell-based fluorescence assays; when used in clinical practice, such assays may pass through regulatory review processes using criteria often defined with a soluble analyte in plasma or serum samples in mind. Recently the U.S. Food and Drug Administration (FDA) has entered into a public dialogue regarding its regulatory interest in laboratory developed tests (LDTs) or so-called "home brew" assays performed in clinical laboratories. The absence of well-defined guidelines for validation of cell-based assays using fluorescence detection has thus become a subject of concern for the International Council for Standardization of Haematology (ICSH) and International Clinical Cytometry Society (ICCS). Accordingly, a group of over 40 international experts in the areas of test development, test validation, and clinical practice of a variety of assay types using flow cytometry and/or morphologic image analysis were invited to develop a set of practical guidelines useful to in vitro diagnostic (IVD) innovators, clinical laboratories, regulatory scientists, and laboratory inspectors. The focus of the group was restricted to fluorescence reporter reagents, although some common principles are shared by immunohistochemistry or immunocytochemistry techniques and noted where appropriate. The work product of this two-year effort is the content of this special issue of this journal, which is published as 5 separate articles, this being Validation of Cell-based Fluorescence Assays: Practice Guidelines from the ICSH and ICCS - Part I - Rationale and aims. © 2013 International Clinical Cytometry Society.
NASA Astrophysics Data System (ADS)
Susanti, L. B.; Poedjiastoeti, S.; Taufikurohmah, T.
2018-04-01
The purpose of this study is to explain the validity of the guided inquiry and mind mapping-based worksheet developed in this study. The worksheet implemented the phases of the guided inquiry teaching model in order to train students’ creative thinking skills. The creative thinking skills trained in this study included fluency, flexibility, originality, and elaboration. The types of validity used in this study included content and construct validity. This is a development study using the Research and Development (R & D) method. The data were collected using review and validation sheets; the data sources were a chemistry lecturer and a chemistry teacher. The data were then analyzed descriptively. The results showed that the worksheet is very valid and can be used as a learning medium, with validity percentages ranging from 82.5% to 92.5%.
Zhou, Li; Hongsermeier, Tonya; Boxwala, Aziz; Lewis, Janet; Kawamoto, Kensaku; Maviglia, Saverio; Gentile, Douglas; Teich, Jonathan M; Rocha, Roberto; Bell, Douglas; Middleton, Blackford
2013-01-01
At present, there are no widely accepted, standard approaches for representing computer-based clinical decision support (CDS) intervention types and their structural components. This study aimed to identify key requirements for the representation of five widely utilized CDS intervention types: alerts and reminders, order sets, infobuttons, documentation templates/forms, and relevant data presentation. An XML schema was proposed for representing these interventions and their core structural elements (e.g., general metadata, applicable clinical scenarios, CDS inputs, CDS outputs, and CDS logic) in a shareable manner. The schema was validated by building CDS artifacts for 22 different interventions, targeted toward guidelines and clinical conditions called for in the 2011 Meaningful Use criteria. Custom style sheets were developed to render the XML files in human-readable form. The CDS knowledge artifacts were shared via a public web portal. Our experience also identifies gaps in existing standards and informs future development of standards for CDS knowledge representation and sharing.
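The abstract above describes representing CDS interventions as XML artifacts with elements for metadata, clinical scenario, inputs, outputs, and logic. A hedged sketch of what one such artifact instance might look like; the element names are hypothetical and are not the study's actual schema:

```python
# Hedged sketch: a hypothetical shareable CDS artifact (alert intervention)
# with the structural elements named in the abstract, parsed with the
# standard library. Not the study's real XML schema.
import xml.etree.ElementTree as ET

ARTIFACT = """\
<cdsArtifact type="alert">
  <metadata><title>Diabetes HbA1c reminder</title></metadata>
  <clinicalScenario>adult outpatient with type 2 diabetes</clinicalScenario>
  <inputs><data>most recent HbA1c result</data></inputs>
  <outputs><message>HbA1c overdue; consider ordering</message></outputs>
  <logic>if months_since_last_hba1c &gt; 6 then trigger</logic>
</cdsArtifact>
"""

root = ET.fromstring(ARTIFACT)
print(root.get("type"), "->", root.findtext("metadata/title"))
# alert -> Diabetes HbA1c reminder
```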
Modeling Acceleration of a System of Two Objects Using the Concept of Limits
NASA Astrophysics Data System (ADS)
Sokolowski, Andrzej
2018-01-01
Traditional school laboratory exercises on a system of moving objects connected by strings involve deriving expressions for the system acceleration, a = (∑F)/m, and sketching a graph of acceleration vs. force. Because these expressions take the form of rational functions, they present great opportunities for broadening the scope of the analysis by using a more sophisticated math apparatus—the concept of limits. Using the idea of limits allows for extending both predictions and explanations of this type of motion that are—according to Redish—essential goals of teaching physics. This type of analysis, known in physics as limiting case analysis, allows for generalizing inferences by evaluating or estimating values of algebraic functions based on their extreme inputs. In practice, such a transition provides opportunities for deriving valid conclusions for cases when direct laboratory measurements are not possible. While using limits is common for scientists, the idea of applying limits in school practice is not visible, and testing students' ability in this area is also rare.
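Limiting case analysis as described above can be checked numerically. A sketch using an Atwood machine (one concrete instance of two string-connected objects, chosen here for illustration): a = (m2 - m1)g/(m1 + m2), so letting m2 grow without bound should recover free fall, a -> g:

```python
# Hedged sketch: numerical limiting case analysis for an Atwood machine.
# The system choice and g = 9.81 m/s^2 are illustrative assumptions.

G = 9.81  # m/s^2

def accel(m1, m2):
    """Acceleration of an ideal Atwood machine with hanging masses m1 < m2."""
    return (m2 - m1) * G / (m1 + m2)

m1 = 1.0
for m2 in (10.0, 1e3, 1e6):
    print(m2, round(accel(m1, m2), 4))
# The printed accelerations approach g as m2 grows, matching the
# limiting-case prediction lim_{m2 -> inf} a = g.
```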
Adaptive multi-resolution Modularity for detecting communities in networks
NASA Astrophysics Data System (ADS)
Chen, Shi; Wang, Zhi-Zhong; Bao, Mei-Hua; Tang, Liang; Zhou, Ji; Xiang, Ju; Li, Jian-Ming; Yi, Chen-He
2018-02-01
Community structure is a common topological property of complex networks, which attracted much attention from various fields. Optimizing quality functions for community structures is a kind of popular strategy for community detection, such as Modularity optimization. Here, we introduce a general definition of Modularity, by which several classical (multi-resolution) Modularity can be derived, and then propose a kind of adaptive (multi-resolution) Modularity that can combine the advantages of different Modularity. By applying the Modularity to various synthetic and real-world networks, we study the behaviors of the methods, showing the validity and advantages of the multi-resolution Modularity in community detection. The adaptive Modularity, as a kind of multi-resolution method, can naturally solve the first-type limit of Modularity and detect communities at different scales; it can quicken the disconnecting of communities and delay the breakup of communities in heterogeneous networks; and thus it is expected to generate the stable community structures in networks more effectively and have stronger tolerance against the second-type limit of Modularity.
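One common member of the multi-resolution Modularity family discussed above is the Reichardt-Bornholdt form Q(gamma) = sum_c [L_c/L - gamma(d_c/2L)^2], where L_c is the number of edges inside community c, d_c its total degree, and L the edge count; the paper's own general definition may differ. A minimal sketch on a toy graph of two triangles joined by one edge:

```python
# Hedged sketch: multi-resolution Modularity with resolution parameter gamma.
# Toy graph: two triangles (nodes 0-2 and 3-5) joined by edge (2, 3).

edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]

def modularity(edges, labels, gamma=1.0):
    L = len(edges)
    intra = {}   # L_c: edges inside each community
    deg = {}     # d_c: total degree of each community
    for u, v in edges:
        if labels[u] == labels[v]:
            intra[labels[u]] = intra.get(labels[u], 0) + 1
        for node in (u, v):
            deg[labels[node]] = deg.get(labels[node], 0) + 1
    return sum(intra.get(c, 0) / L - gamma * (deg[c] / (2 * L)) ** 2
               for c in deg)

labels = [0, 0, 0, 1, 1, 1]          # the two triangles as communities
print(round(modularity(edges, labels, gamma=1.0), 3))  # 0.357
```

Raising gamma penalizes large communities and so favors finer partitions, which is how a multi-resolution method probes community structure at different scales.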
NASA Astrophysics Data System (ADS)
Nitheesh Kumar, P.; Khan, Vishwas Chandra; Balaganesan, G.; Pradhan, A. K.; Sivakumar, M. S.
2018-04-01
The present study is concerned with the repair of through-thickness corrosion or leaking defects in metallic pipelines using a commercially available metallic seal and glass/epoxy composite. Pipe specimens are made with three different types of the most commonly occurring through-thickness corrosion/leaking defects. The metallic seal is applied over the through-thickness corrosion/leaking defect and reinforced with a glass/epoxy composite overwrap. The main objective of the metallic seal is to arrest the leak at live pressure. After the metallic seal is reinforced with the glass/epoxy composite overwrap, the repaired composite wrap is able to sustain high pressures. Burst tests are performed for different configurations of the metallic seal, and the optimum configuration is determined. The optimum configurations of the metallic seal for the three different types of through-thickness corrosion/leaking defects are further reinforced with the glass/epoxy composite wrap, and the experimental failure pressure is determined by performing the burst test. An analytical model as per ISO 24817 has been developed to validate the experimental results.
Šulc, Miloslav; Kotíková, Zora; Paznocht, Luboš; Pivec, Vladimír; Hamouz, Karel; Lachman, Jaromír
2017-12-15
Certain potato cultivars are capable of producing anthocyanin pigments in the potato skin and flesh, and those pigments have been shown, together with other phytochemicals, to promote good health. Six common anthocyanidins (cyanidin, delphinidin, petunidin, pelargonidin, malvidin and peonidin) were analyzed weekly for 15 weeks in red- and purple-fleshed potato cultivars (Red Emma, Königspurpur, Valfi and Blaue de la Mancha) grown in field conditions using a validated LC-(+ESI)MS/MS method. Pelargonidin was the major type detected in red-fleshed cultivars, whereas petunidin was the major type detected in the purple ones. Neither cyanidin nor delphinidin were found in any of the cultivars. The anthocyanidin levels observed were as high as 78 mg/100 g FW during tuber growth; however, fully matured tubers contained only 10-39 mg anthocyanidins/100 g FW. Anthocyanidin levels were moderately correlated with global solar irradiation (r < 0.6252) but not with rainfall or daily temperature. Copyright © 2017 Elsevier Ltd. All rights reserved.
2017-01-01
Parent report is commonly used to assess language and attention in children for research and clinical purposes. It is therefore important to understand the convergent validity of parent-report tools in comparison to direct assessments of language and attention. In particular, cultural and linguistic background may influence this convergence. In this study a group of six- to eight-year old children (N = 110) completed direct assessments of language and attention and their parents reported on the same areas. Convergence between assessment types was explored using correlations. Possible influences of ethnicity (Hispanic or non-Hispanic) and of parent report language (English or Spanish) were explored using hierarchical linear regression. Correlations between parent report and direct child assessments were significant for both language and attention, suggesting convergence between assessment types. Ethnicity and parent report language did not moderate the relationships between direct child assessments and parent report tools for either attention or language. PMID:28683131
Ebert, Kerry Danahy
2017-01-01
Publications in anesthesia journals: quality and clinical relevance.
Lauritsen, Jakob; Moller, Ann M
2004-11-01
Clinicians performing evidence-based anesthesia rely on anesthesia journals for clinically relevant information. The objective of this study was to analyze the proportion of clinically relevant articles in five high impact anesthesia journals. We evaluated all articles published in Anesthesiology, Anesthesia & Analgesia, British Journal of Anaesthesia, Anaesthesia, and Acta Anaesthesiologica Scandinavica from January to June, 2000. Articles were assessed and classified according to type, outcome, and design; 1379 articles consisting of 5468 pages were evaluated and categorized. The most common types of article were animal and laboratory research (31.2%) and randomized clinical trial (20.4%). A clinically relevant article was defined as an article that used a statistically valid method and had a clinically relevant end-point. Altogether 18.6% of the pages had as their subject matter clinically relevant trials. We compared the Journal Impact Factor (a measure of the number of citations per article in a journal) and the proportion of clinically relevant pages and found that they were inversely proportional to each other.
NASA Astrophysics Data System (ADS)
Feng, L.; Xie, J.; Ritzwoller, M. H.
2017-12-01
Two major types of surface wave anisotropy are commonly observed by seismologists but are only rarely interpreted jointly: apparent radial anisotropy, which is the difference in propagation speed between horizontally and vertically polarized waves inferred from Love and Rayleigh waves, and apparent azimuthal anisotropy, which is the directional dependence of surface wave speeds (usually Rayleigh waves). We describe a method of inversion that interprets simultaneous observations of radial and azimuthal anisotropy under the assumption of a hexagonally symmetric elastic tensor with a tilted symmetry axis defined by dip and strike angles. With a full-waveform numerical solver based on the spectral element method (SEM), we verify the validity of the forward theory used for the inversion. We also present two examples, in the US and Tibet, in which we have successfully applied the tomographic method to demonstrate that the two types of apparent anisotropy can be interpreted jointly as a tilted hexagonally symmetric medium.
Newman, Ian R; Gibb, Maia; Thompson, Valerie A
2017-07-01
It is commonly assumed that belief-based reasoning is fast and automatic, whereas rule-based reasoning is slower and more effortful. Dual-Process theories of reasoning rely on this speed-asymmetry explanation to account for a number of reasoning phenomena, such as base-rate neglect and belief-bias. The goal of the current study was to test this hypothesis about the relative speed of belief-based and rule-based processes. Participants solved base-rate problems (Experiment 1) and conditional inferences (Experiment 2) under a challenging deadline; they then gave a second response in free time. We found that fast responses were informed by rules of probability and logical validity, and that slow responses incorporated belief-based information. Implications for Dual-Process theories and future research options for dissociating Type I and Type II processes are discussed.
Characterization of Textile-Insulated Capacitive Biosensors
Ng, Charn Loong; Reaz, Mamun Bin Ibne
2017-01-01
Capacitive biosensors are an emerging technology revolutionizing wearable sensing systems and personal healthcare devices. They are capable of continuously measuring bioelectrical signals from the human body while utilizing textiles as an insulator. Different textile types have their own unique properties that alter skin-electrode capacitance and the performance of capacitive biosensors. This paper aims to identify the best textile insulator to be used with capacitive biosensors by analysing the characteristics of 6 types of common textile materials (cotton, linen, rayon, nylon, polyester, and PVC-textile) while evaluating their impact on the performance of a capacitive biosensor. A textile-insulated capacitive (TEX-C) biosensor was developed and validated on 3 subjects. Experimental results revealed that higher skin-electrode capacitance of a TEX-C biosensor yields a lower noise floor and better signal quality. Natural fabric such as cotton and linen were the two best insulating materials to integrate with a capacitive biosensor. They yielded the lowest noise floor of 2 mV and achieved consistent electromyography (EMG) signals measurements throughout the performance test. PMID:28287493
Development and validation of a Database Forensic Metamodel (DBFM)
Al-dhaqm, Arafat; Razak, Shukor; Othman, Siti Hajar; Ngadi, Asri; Ahmed, Mohammed Nazir; Ali Mohammed, Abdulalem
2017-01-01
Database Forensics (DBF) is a widespread area of knowledge. It has many complex features and is well known amongst database investigators and practitioners. Several models and frameworks have been created specifically to allow knowledge-sharing and effective DBF activities. However, these are often narrow in focus and address only specific database incident types. We analysed 60 such models in an attempt to uncover how many DBF activities are common across models even when the actions vary. We then generated a unified abstract view of DBF in the form of a metamodel: we identified and extracted common concepts, reconciled their definitions, and combined them into the proposed metamodel. We applied a metamodelling process to guarantee that this metamodel is comprehensive and consistent. PMID:28146585
Coherent population transfer in multilevel systems with magnetic sublevels. II. Algebraic analysis
NASA Astrophysics Data System (ADS)
Martin, J.; Shore, B. W.; Bergmann, K.
1995-07-01
We extend previous theoretical work on coherent population transfer by stimulated Raman adiabatic passage for states involving nonzero angular momentum. The pump and Stokes fields are either copropagating or counterpropagating with the corresponding linearly polarized electric-field vectors lying in a common plane with the magnetic-field direction. Zeeman splitting lifts the magnetic sublevel degeneracy. We present an algebraic analysis of dressed-state properties to explain the behavior noted in numerical studies. In particular, we discuss conditions which are likely to lead to a failure of complete population transfer. The applied strategy, based on simple methods of linear algebra, will also be successful for other types of discrete multilevel systems, provided the rotating-wave and adiabatic approximations are valid.
NASA Technical Reports Server (NTRS)
Wong, K. W.
1974-01-01
In lunar phototriangulation, there is a complete lack of accurate ground control points. The accuracy analysis of the results of lunar phototriangulation must, therefore, be completely dependent on statistical procedure. It was the objective of this investigation to examine the validity of the commonly used statistical procedures, and to develop both mathematical techniques and computer software for evaluating (1) the accuracy of lunar phototriangulation; (2) the contribution of the different types of photo support data on the accuracy of lunar phototriangulation; (3) accuracy of absolute orientation as a function of the accuracy and distribution of both the ground and model points; and (4) the relative slope accuracy between any triangulated pass points.
Power-spectral-density relationship for retarded differential equations
NASA Technical Reports Server (NTRS)
Barker, L. K.
1974-01-01
The power spectral density (PSD) relationship between input and output of a set of linear differential-difference equations of the retarded type with real constant coefficients and delays is discussed. The form of the PSD relationship is identical with that applicable to unretarded equations. Since the PSD relationship is useful if and only if the system described by the equations is stable, the stability must be determined before applying the PSD relationship. Since it is sometimes difficult to determine the stability of retarded equations, such equations are often approximated by simpler forms. It is pointed out that some common approximations can lead to erroneous conclusions regarding the stability of a system and, therefore, to the possibility of obtaining PSD results which are not valid.
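The identical form of the PSD relationship can be made concrete with a minimal sketch (an illustration of the general idea, not code from the report; the equation, function, and parameter names are ours). For the scalar retarded equation x'(t) = -a·x(t - τ) + u(t), the transfer function is H(s) = 1/(s + a·e^(-sτ)), and the output PSD equals |H(iω)|² times the input PSD, exactly as for an unretarded first-order lag. The stability check a·τ < π/2 below is the classical condition for this particular equation and must pass before the PSD relation is meaningful:

```python
import numpy as np

def psd_gain(omega, a=1.0, tau=0.5):
    """|H(i*omega)|**2 for the retarded equation x'(t) = -a*x(t - tau) + u(t).

    The transfer function is H(s) = 1/(s + a*exp(-s*tau)); the PSD relation
    S_out = |H(i*omega)|**2 * S_in has the same form as for an unretarded
    equation, but is valid only if the system is stable.
    """
    s = 1j * np.asarray(omega, dtype=float)
    H = 1.0 / (s + a * np.exp(-s * tau))
    return np.abs(H) ** 2

# Check stability BEFORE applying the PSD relationship (a*tau < pi/2
# is the classical stability condition for this scalar equation).
a, tau = 1.0, 0.5
assert a * tau < np.pi / 2, "retarded system unstable; PSD relation invalid"
gains = psd_gain(np.linspace(0.0, 10.0, 5), a, tau)
```

At ω = 0 the gain reduces to 1/a², and it rolls off at high frequency, mirroring the unretarded case.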
Hassett, Michael J; Uno, Hajime; Cronin, Angel M; Carroll, Nikki M; Hornbrook, Mark C; Ritzwoller, Debra
2017-12-01
Recurrent cancer is common, costly, and lethal, yet we know little about it in community-based populations. Electronic health records and tumor registries contain vast amounts of data regarding community-based patients, but usually lack recurrence status. Existing algorithms that use structured data to detect recurrence have limitations. We developed algorithms to detect the presence and timing of recurrence after definitive therapy for stages I-III lung and colorectal cancer using 2 data sources that contain a widely available type of structured data (claims or electronic health record encounters) linked to gold-standard recurrence status: Medicare claims linked to the Cancer Care Outcomes Research and Surveillance study, and the Cancer Research Network Virtual Data Warehouse linked to registry data. Twelve potential indicators of recurrence were used to develop separate models for each cancer in each data source. Detection models maximized area under the ROC curve (AUC); timing models minimized average absolute error. Algorithms were compared by cancer type/data source, and contrasted with an existing binary detection rule. Detection model AUCs (>0.92) exceeded existing prediction rules. Timing models yielded absolute prediction errors that were small relative to follow-up time (<15%). Similar covariates were included in all detection and timing algorithms, though differences by cancer type and dataset challenged efforts to create 1 common algorithm for all scenarios. Valid and reliable detection of recurrence using big data is feasible. These tools will enable extensive, novel research on quality, effectiveness, and outcomes for lung and colorectal cancer patients and those who develop recurrence.
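The detection models above were tuned by maximizing the area under the ROC curve. As a generic illustration of that criterion (not the authors' implementation), AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case, which can be computed directly from ranks:

```python
def roc_auc(labels, scores):
    """Rank-based AUC: probability a random positive outscores a random
    negative, with ties counted as half. Requires both classes present."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking yields 1.0, chance-level scoring yields 0.5, matching the >0.92 benchmark quoted in the abstract.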
Less common etiologies of exocrine pancreatic insufficiency
Singh, Vikesh K; Haupt, Mark E; Geller, David E; Hall, Jerry A; Quintana Diez, Pedro M
2017-01-01
Exocrine pancreatic insufficiency (EPI), an important cause of maldigestion and malabsorption, results from primary pancreatic diseases or secondarily impaired exocrine pancreatic function. Besides cystic fibrosis and chronic pancreatitis, the most common etiologies of EPI, other causes of EPI include unresectable pancreatic cancer, metabolic diseases (diabetes); impaired hormonal stimulation of exocrine pancreatic secretion by cholecystokinin (CCK); celiac or inflammatory bowel disease (IBD) due to loss of intestinal brush border proteins; and gastrointestinal surgery (asynchrony between motor and secretory functions, impaired enteropancreatic feedback, and inadequate mixing of pancreatic secretions with food). This paper reviews such conditions that have less straightforward associations with EPI and examines the role of pancreatic enzyme replacement therapy (PERT). Relevant literature was identified by database searches. Most patients with inoperable pancreatic cancer develop EPI (66%-92%). EPI occurs in patients with type 1 (26%-57%) or type 2 diabetes (20%-36%) and is typically mild to moderate; by definition, all patients with type 3c (pancreatogenic) diabetes have EPI. EPI occurs in untreated celiac disease (4%-80%), but typically resolves on a gluten-free diet. EPI manifests in patients with IBD (14%-74%) and up to 100% of gastrointestinal surgery patients (47%-100%; dependent on surgical site). With the paucity of published studies on PERT use for these conditions, recommendations for or against PERT use remain ambiguous. The authors conclude that there is an urgent need to conduct robust clinical studies to understand the validity and nature of associations between EPI and medical conditions beyond those with proven mechanisms, and examine the potential role for PERT. PMID:29093615
Ryland, S; Bishea, G; Brun-Conti, L; Eyring, M; Flanagan, B; Jergovich, T; MacDougall, D; Suzuki, E
2001-01-01
The 1990s saw the introduction of significantly new types of paint binder chemistries into the automotive finish coat market. Considering the pronounced changes in the binders that can now be found in automotive paints and their potential use in a wide variety of finishes worldwide, the Paint Subgroup of the Scientific Working Group for Materials Analysis (SWGMAT) initiated a validation study to investigate the ability of commonly accepted methods of forensic paint examination to differentiate between these newer types of paints. Nine automotive paint systems typical of original equipment applications were acquired from General Motors Corporation in 1992. They consisted of steel panels coated with typical electrocoat primers and/or primer surfacers followed by a black nonmetallic base coat and clear coat. The primary purpose of this study was to evaluate the discrimination power of common forensic techniques when applied to the newer generation original automotive finishes. The second purpose was to evaluate interlaboratory reproducibility of automotive paint spectra collected on a variety of Fourier transform infrared (FT-IR) spectrometers and accessories normally used for forensic paint examinations. The results demonstrate that infrared spectroscopy is an effective tool for discriminating between the major automotive paint manufacturers' formulation types which are currently used in original finishes. Furthermore, and equally important, the results illustrate that the mid-infrared spectra of these finishes are generally quite reproducible even when comparing data from different laboratories, commercial FT-IR instruments, and accessories in a "real world," mostly uncontrolled, environment.
Is Echinococcus intermedius a valid species?
USDA-ARS?s Scientific Manuscript database
Medical and veterinary sciences require scientific names to discriminate pathogenic organisms in our living environment. Various species concepts have been proposed for metazoan animals. There are, however, constant controversies over their validity because of lack of a common criterion to define ...
Near-infrared metallicities, radial velocities, and spectral types for 447 nearby M dwarfs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newton, Elisabeth R.; Charbonneau, David; Irwin, Jonathan
We present metallicities, radial velocities, and near-infrared (NIR) spectral types for 447 M dwarfs determined from moderate resolution (R ≈ 2000) NIR spectra obtained with the NASA Infrared Telescope Facility (IRTF)/SpeX. These M dwarfs are primarily targets of the MEarth Survey, a transiting planet survey searching for super Earths around mid-to-late M dwarfs within 33 pc. We present NIR spectral types for each star and new spectral templates for the IRTF in the Y, J, H, and K-bands, created using M dwarfs with near-solar metallicities. We developed two spectroscopic distance calibrations that use NIR spectral type or an index based on the curvature of the K-band continuum. Our distance calibration has a scatter of 14%. We searched 27 NIR spectral lines and 10 spectral indices for metallicity-sensitive features, taking into account correlated noise in our estimates of the errors on these parameters. We calibrated our relation using 36 M dwarfs in common proper motion pairs with an F-, G-, or K-type star of known metallicity. We validated the physical association of these pairs using proper motions, radial velocities, and spectroscopic distance estimates. Our resulting metallicity calibration uses the sodium doublet at 2.2 μm as the sole indicator for metallicity. It has an accuracy of 0.12 dex inferred from the scatter between the metallicities of the primaries and the estimated metallicities of the secondaries. Our relation is valid for NIR spectral types from M1V to M5V and for –1.0 dex < [Fe/H] < +0.35 dex. We present a new color-color metallicity relation using J – H and J – K colors that directly relates two observables: the distance from the M dwarf main sequence and the equivalent width of the sodium line at 2.2 μm. We used radial velocities of M dwarf binaries, observations at different epochs, and comparison between our measurements and precisely measured radial velocities to demonstrate a 4 km s⁻¹ accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernemann, Inga, E-mail: bernemann@imp.uni-hannover.de; Mueller, Thomas; Blasczyk, Rainer
Highlights: • Marmoset bone marrow-derived MSCs differentiate in suspension into adipogenic, osteogenic and chondrogenic lineages. • Marmoset MSCs integrate into collagen type I scaffolds and differentiate excellently into adipogenic cells. • The common marmoset monkey is a suitable model for soft tissue engineering in human regenerative medicine. -- Abstract: In regenerative medicine, human cell replacement therapy offers great potential, especially by cell types differentiated from immunologically and ethically unproblematic mesenchymal stem cells (MSCs). In terms of an appropriate carrier material, collagen scaffolds with a homogeneous pore size of 65 µm were optimal for cell seeding and cultivation. However, before clinical application and transplantation of MSC-derived cells in scaffolds, the safety and efficiency, but also possible interference with differentiation due to the material, must be preclinically tested. The common marmoset monkey (Callithrix jacchus) is a preferable non-human primate animal model for this aim due to its genetic and physiological similarities to the human. Marmoset bone marrow-derived MSCs were successfully isolated, cultured and differentiated in suspension into adipogenic, osteogenic and chondrogenic lineages by defined factors. The differentiation capability could be determined by FACS. Specific marker genes for all three cell types could be detected by RT-PCR. Furthermore, MSCs seeded on collagen I scaffolds and differentiated along the adipogenic lineage showed, after 28 days of differentiation, high cell viability and homogeneous distribution on the material, which was validated by calcein AM and EthD staining. As proof of adipogenic cells, the intracellular lipid vesicles in the cells were stained with Oil Red O. The generation of fat vacuoles was clearly distinguishable and was furthermore confirmed at the molecular level by expression of specific marker genes.
The results of the study proved both the differentiation potential of marmoset MSCs into adipogenic, osteogenic and chondrogenic lineages and the suitability of collagen scaffolds as a carrier material that does not disturb differentiation of primate mesenchymal stem cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorensen, J; Duran, C; Stingo, F
Purpose: To characterize the effect of virtual monochromatic reconstructions on several commonly used texture analysis features in DECT of the chest. Further, to assess the effect of monochromatic energy levels on the ability of these textural features to identify tissue types. Methods: 20 consecutive patients underwent chest CTs for evaluation of lung nodules using a Siemens Somatom Definition Flash DECT scanner. Virtual monochromatic images were constructed at 10 keV intervals from 40–190 keV. For each patient, an ROI delineated the lesion under investigation, and cylindrical ROIs were placed within 5 different healthy tissues (blood, fat, muscle, lung, and liver). Several histogram- and Grey Level Co-occurrence Matrix (GLCM)-based texture features were then evaluated in each ROI at each energy level. As a means of validation, these feature values were then used in a random forest classifier to attempt to identify the tissue types present within each ROI. Their predictive accuracy at each energy level was recorded. Results: All textural features changed considerably with virtual monochromatic energy, particularly below 70 keV. Most features exhibited a global minimum or maximum around 80 keV, and while feature values changed with energy above this, patient ranking was generally unaffected. As expected, blood demonstrated the lowest inter-patient variability for all features, while lung lesions (encompassing many different pathologies) exhibited the highest. The accuracy of these features in identifying tissues (76% accuracy) was highest at 80 keV, but no clear relationship between energy and classification accuracy was found. Two common misclassifications (blood vs liver and muscle vs fat) accounted for the majority (24 of 28) of the errors observed. Conclusion: All textural features were highly dependent on virtual monochromatic energy level, especially below 80 keV, and were more stable above this energy.
However, in a random forest model, these commonly used features were able to reliably differentiate between most tissue types regardless of energy level. Dr Godoy has received a dual-energy CT research grant from Siemens Healthcare. That grant did not directly fund this research.
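As a sketch of the kind of feature involved (illustrative only; the study's actual feature set and implementation are not specified here), a Grey Level Co-occurrence Matrix counts how often pairs of grey levels co-occur at a fixed pixel offset, and features such as contrast are weighted sums over that matrix:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Symmetric, normalized co-occurrence matrix for one pixel offset.

    img must be an integer array with values in [0, levels).
    """
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = img[y, x], img[y + dy, x + dx]
            g[i, j] += 1
            g[j, i] += 1  # symmetric counting
    return g / g.sum()

def glcm_contrast(g):
    """Contrast feature: sum over (i - j)^2 * P(i, j)."""
    idx = np.arange(g.shape[0])
    i, j = np.meshgrid(idx, idx, indexing="ij")
    return float(((i - j) ** 2 * g).sum())
```

A uniform region has zero contrast; a checkerboard, where every horizontal neighbor pair differs, maximizes it.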
Reliability and validity of advanced theory-of-mind measures in middle childhood and adolescence.
Hayward, Elizabeth O; Homer, Bruce D
2017-09-01
Although theory-of-mind (ToM) development is well documented for early childhood, there is increasing research investigating changes in ToM reasoning in middle childhood and adolescence. However, the psychometric properties of most advanced ToM measures for use with older children and adolescents have not been firmly established. We report on the reliability and validity of widely used, conventional measures of advanced ToM with this age group. Notable issues with both reliability and validity of several of the measures were evident in the findings. With regard to construct validity, results do not reveal a clear empirical commonality between tasks, and, after accounting for comprehension, developmental trends were evident in only one of the tasks investigated. Statement of contribution What is already known on this subject? Second-order false belief tasks have acceptable internal consistency. The Eyes Test has poor internal consistency. Validity of advanced theory-of-mind tasks is often based on the ability to distinguish clinical from typical groups. What does this study add? This study examines internal consistency across six widely used advanced theory-of-mind tasks. It investigates validity of tasks based on comprehension of items by typically developing individuals. It further assesses construct validity, or commonality between tasks.
Atik Altınok, Yasemin; Özgür, Suriye; Meseri, Reci; Özen, Samim; Darcan, Şükran; Gökşen, Damla
2017-12-15
The aim of this study was to show the reliability and validity of a Turkish version of the Diabetes Eating Problem Survey-Revised (DEPS-R) in children and adolescents with type 1 diabetes mellitus. A total of 200 children and adolescents with type 1 diabetes, ages 9-18 years, completed the DEPS-R Turkish version. In addition to tests of validity, confirmatory factor analysis was conducted to investigate the factor structure of the 16-item Turkish version of DEPS-R. The Turkish version of DEPS-R demonstrated satisfactory Cronbach's α (0.847) and was significantly correlated with age (r=0.194; p<0.01), hemoglobin A1c levels (r=0.303; p<0.01), and body mass index-standard deviation score (r=0.412; p<0.01), indicating criterion validity. Median DEPS-R scores of the Turkish version for the total sample, females, and males were 11.0, 11.5, and 10.5, respectively. Disturbed eating behaviors and insulin restriction were associated with poor metabolic control. A short, self-administered diabetes-specific screening tool for disordered eating behavior can be used routinely in the clinical care of adolescents with type 1 diabetes. The Turkish version of DEPS-R is a valid screening tool for disordered eating behaviors in type 1 diabetes and is potentially important for the early detection of disordered eating behaviors.
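For readers unfamiliar with the reliability statistic quoted above, Cronbach's α is computed from the item variances and the variance of the summed score. This short sketch (ours, not the study's code) shows the standard formula α = k/(k-1) · (1 - Σσᵢ²/σ²_total):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

Perfectly consistent items give α = 1; values near 0.85, as reported for the DEPS-R, indicate strong internal consistency.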
Keemei: cloud-based validation of tabular bioinformatics file formats in Google Sheets.
Rideout, Jai Ram; Chase, John H; Bolyen, Evan; Ackermann, Gail; González, Antonio; Knight, Rob; Caporaso, J Gregory
2016-06-13
Bioinformatics software often requires human-generated tabular text files as input and has specific requirements for how those data are formatted. Users frequently manage these data in spreadsheet programs, which is convenient for researchers who are compiling the requisite information because the spreadsheet programs can easily be used on different platforms including laptops and tablets, and because they provide a familiar interface. It is increasingly common for many different researchers to be involved in compiling these data, including study coordinators, clinicians, lab technicians and bioinformaticians. As a result, many research groups are shifting toward using cloud-based spreadsheet programs, such as Google Sheets, which support the concurrent editing of a single spreadsheet by different users working on different platforms. Most of the researchers who enter data are not familiar with the formatting requirements of the bioinformatics programs that will be used, so validating and correcting file formats is often a bottleneck prior to beginning bioinformatics analysis. We present Keemei, a Google Sheets Add-on, for validating tabular files used in bioinformatics analyses. Keemei is available free of charge from Google's Chrome Web Store. Keemei can be installed and run on any web browser supported by Google Sheets. Keemei currently supports the validation of two widely used tabular bioinformatics formats, the Quantitative Insights into Microbial Ecology (QIIME) sample metadata mapping file format and the Spatially Referenced Genetic Data (SRGD) format, but is designed to easily support the addition of others. Keemei will save researchers time and frustration by providing a convenient interface for tabular bioinformatics file format validation. 
By allowing everyone involved with data entry for a project to easily validate their data, it will reduce the validation and formatting bottlenecks that are commonly encountered when human-generated data files are first used with a bioinformatics system. Simplifying the validation of essential tabular data files, such as sample metadata, will reduce common errors and thereby improve the quality and reliability of research outcomes.
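A minimal sketch of the sort of tabular validation Keemei performs (the column names and rules here are illustrative placeholders, not Keemei's actual specification or API):

```python
import csv
import io

def validate_mapping(text, required=("#SampleID", "Description")):
    """Flag common problems in a tab-separated metadata mapping file.

    Checks for missing required columns, ragged rows, and duplicate
    sample IDs; returns a list of human-readable error strings.
    """
    rows = list(csv.reader(io.StringIO(text), delimiter="\t"))
    errors = []
    header = rows[0] if rows else []
    for col in required:
        if col not in header:
            errors.append(f"missing required column: {col}")
    seen = set()
    for n, row in enumerate(rows[1:], start=2):
        if len(row) != len(header):
            errors.append(f"row {n}: expected {len(header)} fields, got {len(row)}")
        if row and row[0] in seen:
            errors.append(f"row {n}: duplicate sample ID {row[0]!r}")
        if row:
            seen.add(row[0])
    return errors
```

Running such checks before analysis catches exactly the formatting bottlenecks the abstract describes, before a bioinformatics pipeline ever sees the file.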
ERIC Educational Resources Information Center
Wu, Chia-Huei; Wu, Chin-Yu
2008-01-01
Subjective well-being is an increasingly common indicator of adequacy of psychiatric services. An easy-to-administer assessment tool of subjective well-being that is conceptually sound, valid, and reliable is needed for use in persons with schizophrenia. The purpose of this paper was to validate the 5-item Satisfaction with Life Scale…
ERIC Educational Resources Information Center
Maanen, Annette; Dewald-Kaufmann, Julia F.; Oort, Frans J.; de Bruin, Eduard J.; Smits, Marcel G.; Short, Michelle A.; Gradisar, Michael; Kerkhof, Gerard A.; Meijer, Anne Marie
2014-01-01
Background: Sleep reduction, resulting from insufficient or poor sleep, is a common phenomenon in adolescents. Due to its severe negative psychological and behavioral daytime consequences, it is important to have a short reliable and valid measure to assess symptoms of sleep reduction. Objective: This study aims to validate the Sleep Reduction…
DESCQA: Synthetic Sky Catalog Validation Framework
NASA Astrophysics Data System (ADS)
Mao, Yao-Yuan; Uram, Thomas D.; Zhou, Rongpu; Kovacs, Eve; Ricker, Paul M.; Kalmbach, J. Bryce; Padilla, Nelson; Lanusse, François; Zu, Ying; Tenneti, Ananth; Vikraman, Vinu; DeRose, Joseph
2018-04-01
The DESCQA framework provides rigorous validation protocols for assessing the quality of simulated sky catalogs in a straightforward and comprehensive way. DESCQA enables the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via the provision of a common interface within an automated framework. An interactive web interface is also available at portal.nersc.gov/project/lsst/descqa.
Validity and reliability of the NAB Naming Test.
Sachs, Bonnie C; Rush, Beth K; Pedraza, Otto
2016-05-01
Confrontation naming is commonly assessed in neuropsychological practice, but few standardized measures of naming exist and those that do are susceptible to the effects of education and culture. The Neuropsychological Assessment Battery (NAB) Naming Test is a 31-item measure used to assess confrontation naming. Despite adequate psychometric information provided by the test publisher, there has been limited independent validation of the test. In this study, we investigated the convergent and discriminant validity, internal consistency, and alternate forms reliability of the NAB Naming Test in a sample of adults (Form 1: n = 247, Form 2: n = 151) clinically referred for neuropsychological evaluation. Results indicate adequate-to-good internal consistency and alternate forms reliability. We also found strong convergent validity as demonstrated by relationships with other neurocognitive measures. We found preliminary evidence that the NAB Naming Test demonstrates a more pronounced ceiling effect than other commonly used measures of naming. To our knowledge, this represents the largest published independent validation study of the NAB Naming Test in a clinical sample. Our findings suggest that the NAB Naming Test demonstrates adequate validity and reliability and merits consideration in the test arsenal of clinical neuropsychologists.
Boruff, Jill T; Harrison, Pamela
2018-01-01
This scoping review investigates how knowledge and skills are assessed in the information literacy (IL) instruction for students in physical therapy, occupational therapy, or speech-language pathology, regardless of whether the instruction was given by a librarian. The objectives were to discover what assessment measures were used, determine whether these assessment methods were tested for reliability and validity, and provide librarians with guidance on assessment methods to use in their instruction in evidence-based practice contexts. A scoping review methodology was used. A systematic search strategy was run in Ovid MEDLINE and adapted for CINAHL; EMBASE; Education Resources Information Center (ERIC) (EBSCO); Library and Information Science Abstracts (LISA); Library, Information Science & Technology Abstracts (LISTA); and Proquest Theses and Dissertations from 1990 to January 16, 2017. Forty articles were included for data extraction. Three major themes emerged: types of measures used, type and context of librarian involvement, and skills and outcomes described. Thirty-four measures of attitude and thirty-seven measures of performance were identified. Course products were the most commonly used type of performance measure. Librarians were involved in almost half the studies, most frequently as instructor, but also as author or assessor. Information literacy skills such as question formulation and database searching were described in studies that did not involve a librarian. Librarians involved in instructional assessment can use rubrics such as the Valid Assessment of Learning in Undergraduate Education (VALUE) when grading assignments to improve the measurement of knowledge and skills in course-integrated IL instruction. The Adapted Fresno Test could be modified to better suit the real-life application of IL knowledge and skills.
Experimental validation of a damage detection approach on a full-scale highway sign support truss
NASA Astrophysics Data System (ADS)
Yan, Guirong; Dyke, Shirley J.; Irfanoglu, Ayhan
2012-04-01
Highway sign support structures enhance traffic safety by allowing messages to be delivered to motorists related to directions and warning of hazards ahead, and facilitating the monitoring of traffic speed and flow. These structures are exposed to adverse environmental conditions while in service. Strong wind and vibration accelerate their deterioration. Typical damage to this type of structure includes local fatigue fractures and partial loosening of bolted connections. The occurrence of these types of damage can lead to a failure in large portions of the structure, jeopardizing the safety of passing traffic. Therefore, it is important to have effective damage detection approaches to ensure the integrity of these structures. In this study, an extension of the Angle-between-String-and-Horizon (ASH) flexibility-based approach [32] is applied to locate damage in sign support truss structures at bay level. Ambient excitations (e.g. wind) can be considered as a significant source of vibration in these structures. Considering that ambient excitation is immeasurable, a pseudo ASH flexibility matrix constructed from output-only derived operational deflection shapes is proposed. A damage detection method based on the use of pseudo flexibility matrices is proposed to address several of the challenges posed in real-world applications. Tests are conducted on a 17.5-m long full-scale sign support truss structure to validate the effectiveness of the proposed method. Damage cases associated with loosened bolts and weld failures are considered. These cases are realistic for this type of structure. The results successfully demonstrate the efficacy of the proposed method to locate the two common forms of damage on sign support truss structures instrumented with a few accelerometers.
Boruff, Jill T.; Harrison, Pamela
2018-01-01
Objective This scoping review investigates how knowledge and skills are assessed in the information literacy (IL) instruction for students in physical therapy, occupational therapy, or speech-language pathology, regardless of whether the instruction was given by a librarian. The objectives were to discover what assessment measures were used, determine whether these assessment methods were tested for reliability and validity, and provide librarians with guidance on assessment methods to use in their instruction in evidence-based practice contexts. Methods A scoping review methodology was used. A systematic search strategy was run in Ovid MEDLINE and adapted for CINAHL; EMBASE; Education Resources Information Center (ERIC) (EBSCO); Library and Information Science Abstracts (LISA); Library, Information Science & Technology Abstracts (LISTA); and Proquest Theses and Dissertations from 1990 to January 16, 2017. Forty articles were included for data extraction. Results Three major themes emerged: types of measures used, type and context of librarian involvement, and skills and outcomes described. Thirty-four measures of attitude and thirty-seven measures of performance were identified. Course products were the most commonly used type of performance measure. Librarians were involved in almost half the studies, most frequently as instructor, but also as author or assessor. Information literacy skills such as question formulation and database searching were described in studies that did not involve a librarian. Conclusion Librarians involved in instructional assessment can use rubrics such as the Valid Assessment of Learning in Undergraduate Education (VALUE) when grading assignments to improve the measurement of knowledge and skills in course-integrated IL instruction. The Adapted Fresno Test could be modified to better suit the real-life application of IL knowledge and skills. PMID:29339931
ERIC Educational Resources Information Center
Sipps, Gary J.; Alexander, Ralph A.
1987-01-01
The construct validity of extraversion-introversion, as measured by the Myers-Briggs Type Indicator (MBTI) and the Eysenck Personality Inventory, was explored. Findings supported the complexity of extraversion-introversion. Two MBTI scales, Extraversion-Introversion and Judging-Perceiving, were factorially valid measures of impulsivity…
Loss-of-function mutations in SLC30A8 protect against type 2 diabetes.
Flannick, Jason; Thorleifsson, Gudmar; Beer, Nicola L; Jacobs, Suzanne B R; Grarup, Niels; Burtt, Noël P; Mahajan, Anubha; Fuchsberger, Christian; Atzmon, Gil; Benediktsson, Rafn; Blangero, John; Bowden, Don W; Brandslund, Ivan; Brosnan, Julia; Burslem, Frank; Chambers, John; Cho, Yoon Shin; Christensen, Cramer; Douglas, Desirée A; Duggirala, Ravindranath; Dymek, Zachary; Farjoun, Yossi; Fennell, Timothy; Fontanillas, Pierre; Forsén, Tom; Gabriel, Stacey; Glaser, Benjamin; Gudbjartsson, Daniel F; Hanis, Craig; Hansen, Torben; Hreidarsson, Astradur B; Hveem, Kristian; Ingelsson, Erik; Isomaa, Bo; Johansson, Stefan; Jørgensen, Torben; Jørgensen, Marit Eika; Kathiresan, Sekar; Kong, Augustine; Kooner, Jaspal; Kravic, Jasmina; Laakso, Markku; Lee, Jong-Young; Lind, Lars; Lindgren, Cecilia M; Linneberg, Allan; Masson, Gisli; Meitinger, Thomas; Mohlke, Karen L; Molven, Anders; Morris, Andrew P; Potluri, Shobha; Rauramaa, Rainer; Ribel-Madsen, Rasmus; Richard, Ann-Marie; Rolph, Tim; Salomaa, Veikko; Segrè, Ayellet V; Skärstrand, Hanna; Steinthorsdottir, Valgerdur; Stringham, Heather M; Sulem, Patrick; Tai, E Shyong; Teo, Yik Ying; Teslovich, Tanya; Thorsteinsdottir, Unnur; Trimmer, Jeff K; Tuomi, Tiinamaija; Tuomilehto, Jaakko; Vaziri-Sani, Fariba; Voight, Benjamin F; Wilson, James G; Boehnke, Michael; McCarthy, Mark I; Njølstad, Pål R; Pedersen, Oluf; Groop, Leif; Cox, David R; Stefansson, Kari; Altshuler, David
2014-04-01
Loss-of-function mutations protective against human disease provide in vivo validation of therapeutic targets, but none have yet been described for type 2 diabetes (T2D). Through sequencing or genotyping of ~150,000 individuals across 5 ancestry groups, we identified 12 rare protein-truncating variants in SLC30A8, which encodes an islet zinc transporter (ZnT8) and harbors a common variant (p.Trp325Arg) associated with T2D risk and glucose and proinsulin levels. Collectively, carriers of protein-truncating variants had 65% reduced T2D risk (P = 1.7 × 10⁻⁶), and non-diabetic Icelandic carriers of a frameshift variant (p.Lys34Serfs*50) demonstrated reduced glucose levels (-0.17 s.d., P = 4.6 × 10⁻⁴). The two most common protein-truncating variants (p.Arg138* and p.Lys34Serfs*50) individually associate with T2D protection and encode unstable ZnT8 proteins. Previous functional study of SLC30A8 suggested that reduced zinc transport increases T2D risk, and phenotypic heterogeneity was observed in mouse Slc30a8 knockouts. In contrast, loss-of-function mutations in humans provide strong evidence that SLC30A8 haploinsufficiency protects against T2D, suggesting ZnT8 inhibition as a therapeutic strategy in T2D prevention.
Experimental Diabetes Mellitus in Different Animal Models
Al-awar, Amin; Veszelka, Médea; Szűcs, Gergő; Attieh, Zouhair; Murlasits, Zsolt; Török, Szilvia; Pósa, Anikó; Varga, Csaba
2016-01-01
Animal models have historically played a critical role in the exploration and characterization of disease pathophysiology and target identification and in the evaluation of novel therapeutic agents and treatments in vivo. Diabetes mellitus, commonly known as diabetes, is a group of metabolic disorders characterized by high blood glucose levels over a prolonged period. To avoid late complications of diabetes and their related costs, primary prevention and early treatment are necessary. Because current therapies have limited effectiveness against its chronic course, new treatment strategies need to be developed. We provide an overview of the pathophysiological features of diabetes in relation to its complications in mouse and rat models of type 1 and type 2 diabetes, including Zucker Diabetic Fatty (ZDF) rats, BB rats, LEW.1AR1/-iddm rats, Goto-Kakizaki rats, chemically induced diabetic models, the Nonobese Diabetic (NOD) mouse, and the Akita mouse model. The advantages and disadvantages of these models are also addressed in this review. This paper briefly reviews the wide range of pathophysiological and molecular mechanisms associated with type 1 and type 2 diabetes, particularly focusing on the challenges associated with the evaluation and predictive validation of these models as ideal animal models for preclinical assessments and for discovering new drugs and therapeutic agents for translational application in humans. PMID:27595114
Rome III survey of irritable bowel syndrome among ethnic Malays
Lee, Yeong Yeh; Waid, Anuar; Tan, Huck Joo; Chua, Andrew Seng Boon; Whitehead, William E
2012-01-01
AIM: To survey irritable bowel syndrome (IBS) using Rome III criteria among Malays from the north-eastern region of Peninsular Malaysia. METHODS: A previously validated Malay language Rome III IBS diagnostic questionnaire was used in the current study. A prospective sample of 232 Malay subjects (80% power) was initially screened. Using a stratified random sampling strategy, a total of 221 Malay subjects (112 subjects in a “full time job” and 109 subjects in “no full time job”) were recruited. Subjects were visitors (friends and relatives) within the hospital compound and were representative of the local community. Red flags and psychosocial alarm symptoms were also assessed in the current study using previously translated and validated questionnaires. Subjects with IBS were sub-typed into constipation-predominant, diarrhea-predominant, mixed type and un-subtyped. Univariable and multivariable analyses were used to test for association between socioeconomic factors and presence of red flags and psychosocial alarm features among the Malays with IBS. RESULTS: IBS was present in 10.9% (24/221), red flags in 22.2% (49/221) and psychosocial alarm features in 9.0% (20/221). Red flags were more commonly reported in subjects with IBS (83.3%) than psychosocial alarm features (20.8%, P < 0.001). Subjects with IBS were older (mean age 41.4 years vs 36.9 years, P = 0.08), but no difference in gender was noted (P = 0.4). Using univariable analysis, IBS was significantly associated with a tertiary education, high individual income above RM1000, married status, ex-smoker and the presence of red flags (all P < 0.05). In multiple logistic regression analysis, only the presence of red flags was significantly associated with IBS (odds ratio: 0.02, 95%CI: 0.004-0.1, P < 0.001). The commonest IBS sub-type was mixed type (58.3%), followed by constipation-predominant (20.8%), diarrhea-predominant (16.7%) and un-subtyped (4.2%). 
Four of 13 Malay females (30.8%) with IBS also had menstrual pain. Most subjects with IBS had at least one red flag (70.8%), 12.5% had two red flags, and 16.7% had none. The commonest red flag was a change in bowel habit in subjects > 50 years old, reported by 16.7% of subjects with IBS. CONCLUSION: Using the Rome III criteria, IBS was common among ethnic Malays from the north-eastern region of Peninsular Malaysia. PMID:23197894
Verhoef, J; Toussaint, P J; Putter, H; Zwetsloot-Schonk, J H M; Vliet Vlieland, T P M
2005-10-01
Coordinated teams with multidisciplinary team conferences are generally seen as a solution to the management of complex health conditions. However, problems with the process of communication during team conferences have been reported, such as the absence of a common language or viewpoint and the exchange of irrelevant or repeated information. To determine the outcome of interventions aimed at improving communication during team conferences, a reliable and valid assessment method is needed. The aim of this study was to investigate the feasibility of a theory-based measurement instrument for assessing the process of communication during multidisciplinary team conferences in rheumatology. An observation instrument was developed based on communication theory. The instrument distinguishes three types of communication: (I) grounding activities, (II) coordination of non-team activities, and (III) coordination of team activities. To assess the process of communication during team conferences in a rheumatology clinic with inpatient and day patient facilities, team conferences were videotaped. To determine the inter-rater reliability, the instrument was applied independently by two investigators to 20 conferences concerning 10 patients with rheumatoid arthritis admitted to the inpatient unit. Content validity was determined by analysing and comparing the results of initial and follow-up team conferences of 25 consecutive patients with rheumatoid arthritis admitted to the day patient unit (Wilcoxon signed rank test). The inter-rater reliability was excellent, with intra-class correlation coefficients >0.98 for both type I and type III communication in 10 initial and 10 follow-up conferences (type II was not observed). An analysis of an additional 25 initial and 86 follow-up team conferences showed that time spent on grounding (type I) made up the greater part of the communication (87%, S.D. 14, and 60%, S.D. 29, in initial and follow-up conferences, respectively), significantly more than the time spent on coordination (p-values < 0.001 and 0.02 for categories II and III, respectively). Moreover, significantly less time was spent on grounding in follow-up than in initial team conferences, whereas the time spent on coordination (type III) increased (both p-values < 0.001). This theory-based measurement instrument for describing and evaluating the communication process during team conferences proved reliable and valid in this pilot study. Its usefulness for detecting changes in the communication process, e.g. after implementing systems for re-structuring team conferences mediated by ICT applications, should be further examined.
NASA Astrophysics Data System (ADS)
Khademi, April; Hosseinzadeh, Danoush
2014-03-01
Alzheimer's disease (AD) is the most common form of dementia in the elderly, characterized by extracellular deposition of amyloid plaques (AP). Using animal models, AP loads have been manually measured from histological specimens to understand disease etiology as well as response to treatment. Due to the manual nature of these approaches, obtaining the AP load is laborious, subjective, and error prone. Automated algorithms can be designed to alleviate these challenges by objectively segmenting AP. In this paper, we focus on the development of a novel algorithm for AP segmentation based on robust preprocessing and a Type II fuzzy system. Type II fuzzy systems offer advantages over traditional Type I fuzzy systems, since ambiguity in the membership function can be modeled and exploited to generate excellent segmentation results. The ambiguity in the membership function is defined as an adaptively changing parameter that is tuned based on the local contrast characteristics of the image. Using transgenic mouse brains with AP ground truth, validation studies were carried out showing a high degree of overlap and a low degree of oversegmentation (0.8233 and 0.0917, respectively). The results highlight that such a framework is able to handle plaques of various types (diffuse, punctate), plaques with varying Aβ concentrations, as well as intensity variation caused by treatment effects or staining variability.
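The abstract reports overlap and oversegmentation scores without naming the exact metric; a Dice-style overlap coefficient is a common choice for validating a segmentation against ground truth. The sketch below is illustrative only (the masks are toy data, not the study's):

```python
def dice_overlap(seg, truth):
    # Dice coefficient: 2|A ∩ B| / (|A| + |B|), with binary masks
    # represented as sets of pixel coordinates
    inter = len(seg & truth)
    return 2 * inter / (len(seg) + len(truth))

# Toy 2x2-neighbourhood masks (hypothetical, for illustration)
seg   = {(0, 0), (0, 1), (1, 0), (1, 1)}
truth = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice_overlap(seg, truth))  # 0.75
```

A value near 1 indicates near-perfect agreement; the study's reported overlap of 0.8233 would correspond to a high degree of agreement on this kind of scale.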
Maleki, Farzaneh; Hosseini Nodeh, Zahra; Rahnavard, Zahra; Arab, Masoume
2016-01-01
Since type-2 diabetes is the most common chronic disease among Iranian female adolescents, we applied the theory of planned behavior (TPB) to examine the effect of training on adolescents' intention to adopt nutritional behaviors that help prevent type-2 diabetes. In this experimental study, 200 girls aged 11-14 years from 8 schools in Tehran (100 in each of the intervention and control groups) were recruited using a two-stage cluster sampling method. For the intervention group, an educational program was designed based on the theory of planned behavior and presented in 6 workshop sessions on preventing type-2 diabetes. The data were collected before and two months after the workshops using a valid and reliable (α = 0.72 and r = 0.80) author-made questionnaire based on Ajzen's TPB questionnaire manual. The data were analyzed using t-tests, chi-square tests, and analysis of covariance. The two groups were homogeneous in demographic characteristics before the education, but the mean score of the theory components (attitudes, subjective norms, perceived behavioral control, and intention) was higher in the control group. All of the theory components increased significantly after the education in the intervention group (p < 0.001). Training based on the theory of planned behavior enhances the intention to adhere to nutritional behaviors that help prevent type-2 diabetes among the studied female adolescents.
Harvey, H Benjamin; Chow, David; Boston, Marion; Zhao, Jing; Lucey, Leonard; Monticciolo, Debra L
2014-07-01
The aim of this study was to evaluate the findings of the first year of validation site surveys performed by the ACR pursuant to new federal accreditation requirements for nonhospital advanced diagnostic imaging (ADI) facilities. In the first year of validation site surveys (November 2012 to November 2013), the ACR surveyed 943 ADI facilities across 21 states. Data were extracted from these site survey reports and analyzed on the basis of the survey outcomes and the frequency and type of deficiencies and recommendations. Follow-up data were obtained from the ACR for facilities deemed noncompliant on the site survey to determine if these facilities adequately took the corrective actions necessary to maintain accreditation. Of the 943 ADI facilities surveyed, 45% (n = 421) were deemed compliant with the ACR accreditation standards, and 55% (n = 522) had one or more deficiencies. Failure to produce the required personnel documentation and absence of mandatory written policies were the two most common causes of deficiencies. Facilities accredited in more modalities tended to fare better in the site surveys, with the number of accredited modalities at a facility negatively associated with the likelihood of a deficiency (P = .007). Of the facilities with deficiencies, 73% (n = 382) took the necessary corrective actions to maintain accreditation, 27% (n = 140) were in the process of taking corrective actions, and no facility has lost accreditation because of an inability to adequately address the deficiencies. Nonbinding recommendations were made to 37% (n = 346) of facilities, and facilities with deficiencies were statistically more likely to receive recommendations (P < .001). Initial site surveys of ADI facilities demonstrated a high proportion of deficient facilities, but no facility has lost accreditation because of an inability to correct these deficiencies. 
Knowledge of the most common sources of deficiencies and recommendations can assist ACR-accredited ADI facilities in better preparing for validation site surveys, reducing the likelihood of facility noncompliance. Copyright © 2014. Published by Elsevier Inc.
Fox, Carrie; Bernardino, Lourdes; Cochran, Jill; Essig, Mary; Bridges, Kristie Grove
2017-11-01
Assessing pediatric patients for insulin resistance is one way to identify those who are at a high risk of developing type 2 diabetes mellitus. The homoeostasis model assessment (HOMA) is a measure of insulin resistance based on fasting blood glucose and insulin levels. Although this measure is widely used in research, cutoff values for pediatric populations have not been established. To assess the validity of HOMA cutoff values used in pediatric studies published in peer-reviewed journals. Studies published from January 2010 to December 2015 were identified through MEDLINE. Initial screening of abstracts was done to select studies that were conducted in pediatric populations and used HOMA to assess insulin resistance. Subsequent full-text review narrowed the list to only those studies that used a specific HOMA score to diagnose insulin resistance. Each study was classified as using a predetermined fixed HOMA cutoff value or a cutoff that was a percentile specific to that population. For studies that used a predetermined cutoff value, the references cited to provide evidence in support of that cutoff were evaluated. In the 298 articles analyzed, 51 different HOMA cutoff values were used to classify patients as having insulin resistance. Two hundred fifty-five studies (85.6%) used a predetermined fixed cutoff value, but only 72 (28.2%) of those studies provided a reference that supported its use. One hundred ten studies (43%) that used a fixed cutoff either cited a study that did not mention HOMA or provided no reference at all. Tracing of citation history indicated that the most commonly used cutoff values were ultimately based on studies that did not validate their use for defining insulin resistance. Little evidence exists to support HOMA cutoff values commonly used to define insulin resistance in pediatric studies. These findings highlight the importance of validating study design elements when training medical students and novice investigators. 
Using available data to generate population ranges for HOMA would improve its clinical utility.
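For reference, the HOMA insulin-resistance score discussed above is conventionally computed from fasting glucose and insulin; a minimal sketch (the 22.5 normalizing constant applies when glucose is in mmol/L):

```python
def homa_ir(glucose_mmol_l: float, insulin_uu_ml: float) -> float:
    """HOMA-IR = (fasting glucose [mmol/L] x fasting insulin [uU/mL]) / 22.5.

    If glucose is measured in mg/dL, the denominator becomes 405
    (i.e., 22.5 x 18, the mmol/L-to-mg/dL conversion factor).
    """
    return glucose_mmol_l * insulin_uu_ml / 22.5

# Example: fasting glucose 5.0 mmol/L, fasting insulin 10 uU/mL
print(round(homa_ir(5.0, 10.0), 2))  # 2.22
```

The studies reviewed above disagree precisely on where to draw the cutoff on this continuous score, which is the validity problem the article documents.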
Ahadi, Alireza; Sablok, Gaurav; Hutvagner, Gyorgy
2017-04-07
MicroRNAs (miRNAs) are ∼19-22 nucleotide (nt) long regulatory RNAs that regulate gene expression by recognizing and binding to complementary sequences on mRNAs. The key step in revealing the function of a miRNA is the identification of its target genes. Recent biochemical advances, including PAR-CLIP and HITS-CLIP, allow for improved miRNA target predictions and are widely used to validate miRNA targets. Here, we present miRTar2GO, a model trained on the common rules of miRNA-target interactions, Argonaute (Ago) CLIP-Seq data, and experimentally validated miRNA-target interactions. miRTar2GO is designed to predict miRNA target sites using more relaxed miRNA-target binding characteristics. More importantly, miRTar2GO allows for the prediction of cell-type-specific miRNA targets. We have evaluated miRTar2GO against other widely used miRNA target prediction algorithms and demonstrated that it produced significantly higher F1 and G scores. Target predictions, binding specifications, results of the pathway analysis, and gene ontology enrichment of miRNA targets are freely available at http://www.mirtar2go.org. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
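The F1 and G scores used to compare miRTar2GO against other predictors are both computed from precision and recall; the snippet below assumes the common definitions (F1 as the harmonic mean, G as the geometric mean), which may differ in detail from the paper's:

```python
import math

def f1_score(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

def g_score(precision: float, recall: float) -> float:
    # Geometric mean of precision and recall
    # (one common definition of the G score)
    return math.sqrt(precision * recall)

# Hypothetical precision/recall values, for illustration only
p, r = 0.8, 0.6
print(round(f1_score(p, r), 3), round(g_score(p, r), 3))  # 0.686 0.693
```

Both metrics reward predictors that balance precision and recall, which matters when relaxed binding rules raise recall at some cost in precision.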
41 CFR 60-3.5 - General standards for validity studies.
Code of Federal Regulations, 2010 CFR
2010-07-01
... should avoid making employment decisions on the basis of measures of knowledges, skills, or abilities... General standards for validity studies. A. Acceptable types of validity studies. For the purposes of... of these guidelines, section 14 of this part. New strategies for showing the validity of selection...
The Most Common Geometric and Semantic Errors in CityGML Datasets
NASA Astrophysics Data System (ADS)
Biljecki, F.; Ledoux, H.; Du, X.; Stoter, J.; Soon, K. H.; Khoo, V. H. S.
2016-10-01
To be used as input in most simulation and modelling software, 3D city models should be geometrically and topologically valid, and semantically rich. In this paper we investigate the quality of currently available CityGML datasets: we validate the geometry/topology of the 3D primitives (Solid and MultiSurface), and we validate whether the semantics of the boundary surfaces of buildings are correct. We analysed all the CityGML datasets we could find, both from portals of cities and on different websites, plus a few that were made available to us. We have thus validated 40M surfaces in 16M 3D primitives and 3.6M buildings found in 37 CityGML datasets originating from 9 countries, produced by several companies with diverse software and acquisition techniques. The results indicate that CityGML datasets without errors are rare, and those that are nearly valid are mostly simple LOD1 models. We report on the most common errors we have found and analyse them. One main observation is that many of these errors could be automatically fixed or prevented with simple modifications to the modelling software. Our principal aim is to highlight the most common errors so that they are not repeated in the future. We hope that our paper and the open-source software we have developed will help raise awareness of data quality among data providers and 3D GIS software producers.
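As a toy illustration of the inventory step that precedes validation, the sketch below counts Solid and MultiSurface primitives in a CityGML-like XML fragment. The snippet and its namespace handling are simplified assumptions; real geometric/topological validation (e.g., checking that each Solid is watertight) is far more involved and is what the authors' open-source software performs:

```python
import xml.etree.ElementTree as ET

# Minimal CityGML-like fragment (real files use the full CityGML/GML
# namespace set and much richer structure)
SNIPPET = """
<CityModel xmlns:gml="http://www.opengis.net/gml">
  <gml:Solid/>
  <gml:MultiSurface/>
  <gml:Solid/>
</CityModel>
"""

def count_primitives(xml_text):
    root = ET.fromstring(xml_text)
    counts = {"Solid": 0, "MultiSurface": 0}
    for elem in root.iter():
        local = elem.tag.rsplit("}", 1)[-1]  # strip the namespace URI
        if local in counts:
            counts[local] += 1
    return counts

print(count_primitives(SNIPPET))  # {'Solid': 2, 'MultiSurface': 1}
```

Scaled up, this kind of traversal is how one arrives at totals such as the 16M primitives inventoried in the paper, before any per-primitive validity checks run.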
Porto, Paolo; Walling, Des E
2012-10-01
Information on rates of soil loss from agricultural land is a key requirement for assessing both on-site soil degradation and potential off-site sediment problems. Many models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution as a function of the local topography, hydrometeorology, soil type, and land management, but empirical data remain essential for validating and calibrating such models and prediction procedures. Direct measurements using erosion plots are, however, costly, and the results obtained relate to a small enclosed area that may not be representative of the wider landscape. In recent years, the use of fallout radionuclides, in particular caesium-137 (¹³⁷Cs) and excess lead-210 (²¹⁰Pbₑₓ), has been shown to provide a very effective means of documenting rates of soil loss and soil and sediment redistribution in the landscape. Several of the assumptions associated with the theoretical conversion models used with such measurements remain essentially unvalidated. This contribution describes the results of a measurement programme involving five experimental plots located in southern Italy, aimed at validating several of the basic assumptions commonly associated with the use of mass balance models for estimating rates of soil redistribution on cultivated land from ¹³⁷Cs and ²¹⁰Pbₑₓ measurements. Overall, the results confirm the general validity of these assumptions and the importance of taking account of the fate of fresh fallout. However, further work is required to validate the conversion models used with fallout radionuclide measurements; such work could usefully direct attention to different environments and to validating both the final estimates of soil redistribution rates and the assumptions of the models employed. Copyright © 2012 Elsevier Ltd. All rights reserved.
Barbosa, Margarida; Saavedra, Ana; Severo, Milton; Maier, Christoph; Carvalho, Davide
2017-04-01
Diabetic peripheral neuropathy is very common in the diabetic population, so early screening for foot pathology is of the utmost importance. The Michigan Neuropathy Screening Instrument (MNSI) is an easy, brief, and noninvasive screening tool. The aim of this study was to validate the semantics and characteristics of both sections of the Portuguese translation of the MNSI for Portuguese diabetic patients. A cross-sectional study was performed on 87 type 1 and type 2 diabetic patients at our outpatient endocrinology department. The final sample comprised 76 patients. Nerve conduction studies were requested, but only a subsample of 42 patients agreed to undergo them. The scale was internally consistent (Cronbach's alpha > 0.70) in both section A (a clinical history questionnaire) and section B (a physical examination), and the scores of the two sections were positively correlated (r = 0.70; P < 0.001). With regard to stability, MNSI scores between test and retest showed high stability (intraclass correlation coefficient = 0.91). Receiver-operating characteristic (ROC) analysis demonstrated the instrument's validity, with ROC curve values for section A, section B, and sections A + B of 0.913, 0.798, and 0.906, respectively. With cutoffs of ≥ 3 in section A and ≥ 2 in section B, we obtained sensitivities of 100% and 86%; specificities of 64% and 61%; positive predictive values of 80% and 73%; and negative predictive values of 100% and 79%, respectively. The Portuguese MNSI is a reliable and valid tool for screening diabetic neuropathy. © 2016 World Institute of Pain.
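The sensitivity, specificity, and predictive values reported for the MNSI cutoffs all follow directly from a 2x2 confusion matrix; the sketch below uses hypothetical counts (not the study's data) chosen only to approximate the section A figures:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    # Standard screening-test metrics from a 2x2 confusion matrix:
    # tp/fp/fn/tn = true/false positives and negatives
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts, illustrative only
m = diagnostic_metrics(tp=20, fp=5, fn=0, tn=9)
print(m["sensitivity"], round(m["specificity"], 2))  # 1.0 0.64
```

Note that, unlike sensitivity and specificity, PPV and NPV depend on the prevalence of neuropathy in the sample, so they transfer less directly to other populations.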
Simulation models in population breast cancer screening: A systematic review.
Koleva-Kolarova, Rositsa G; Zhan, Zhuozhao; Greuter, Marcel J W; Feenstra, Talitha L; De Bock, Geertruida H
2015-08-01
The aim of this review was to critically evaluate published simulation models for breast cancer screening of the general population and provide direction for future modeling. A systematic literature search was performed to identify simulation models with more than one application. A framework for qualitative assessment was developed that incorporated model type; input parameters; modeling approach; transparency of input data sources/assumptions; sensitivity analyses and risk of bias; validation; and outcomes. Predicted mortality reduction (MR) and cost-effectiveness (CE) were compared with estimates from meta-analyses of randomized controlled trials (RCTs) and with acceptability thresholds. Seven original simulation models were distinguished, all sharing common input parameters. The modeling approach was based on tumor progression (except in one model), with internal and cross validation of the resulting models but without any external validation. Differences in lead times for invasive or non-invasive tumors, and the option for cancers not to progress, were not explicitly modeled. The models tended to overestimate the MR due to screening (11-24%) compared with the 10% MR (95% CI: -2% to 21%) from optimal RCTs. Only recently have potential harms of regular breast cancer screening been reported. Most scenarios resulted in acceptable cost-effectiveness estimates given current thresholds. The selected models have been repeatedly applied in various settings to inform decision making, and the critical analysis revealed a high risk of bias in their outcomes. Given the importance of the models, there is a need for externally validated models that use systematic evidence for input data, allowing more critical evaluation of breast cancer screening. Copyright © 2015 Elsevier Ltd. All rights reserved.
von Zerssen, D; Barthelmes, H; Pössl, J; Black, C; Garzynski, E; Wessel, E; Hecht, H
1998-01-01
The Biographical Personality Interview (BPI) was applied to 179 subjects (158 psychiatric patients and 21 probands from the general population); 100 patients and 20 healthy controls served as a validation sample; the others had been interviewed during the training period or did not meet the inclusion criteria for the validation of the BPI. The acceptance of the interview was high, the inter-rater reliability of the ratings of premorbid personality structures ("types") varied between 0.81 and 0.88 per type. Concurrent validity of the typological constructs as assessed by means of the BPI was inferred from the intercorrelations of type scores and correlations of these scores with questionnaire data and proved to be adequate. Clinical validity of the assessment was indicated by statistically significant differences between diagnostic groups. Problems and further developments of the instrument and its application are discussed.
2011-01-01
Background As genetics technology proceeds, practices of genetic testing have become more heterogeneous: many different types of tests are finding their way to the public in different settings and for a variety of purposes. This diversification is relevant to the discourse on ethical, legal and societal issues (ELSI) surrounding genetic testing, which must evolve to encompass these differences. One important development is the rise of personal genome testing on the basis of genetic profiling: the testing of multiple genetic variants simultaneously for the prediction of common multifactorial diseases. Currently, an increasing number of companies are offering personal genome tests directly to consumers and are spurring ELSI-discussions, which stand in need of clarification. This paper presents a systematic approach to the ELSI-evaluation of personal genome testing for multifactorial diseases along the lines of its test characteristics. Discussion This paper addresses four test characteristics of personal genome testing: its being a non-targeted type of testing, its high analytical validity, low clinical validity and problematic clinical utility. These characteristics raise their own specific ELSI, for example: non-targeted genetic profiling poses serious problems for information provision and informed consent. Questions about the quantity and quality of the necessary information, as well as about moral responsibilities with regard to the provision of information are therefore becoming central themes within ELSI-discussions of personal genome testing. Further, the current low level of clinical validity of genetic profiles raises questions concerning societal risks and regulatory requirements, whereas simultaneously it causes traditional ELSI-issues of clinical genetics, such as psychological and health risks, discrimination, and stigmatization, to lose part of their relevance. 
Also, classic notions of clinical utility are challenged by the newer notion of 'personal utility.' Summary Consideration of test characteristics is essential to any valuable discourse on the ELSI of personal genome testing for multifactorial diseases. Four key characteristics of the test - targeted/non-targeted testing, analytical validity, clinical validity and clinical utility - together determine the applicability and the relevance of ELSI to specific tests. The paper identifies and discusses four areas of interest for the ELSI-debate on personal genome testing: informational problems, risks, regulatory issues, and the notion of personal utility. PMID:21672210
Wearable ECG Based on Impulse-Radio-Type Human Body Communication.
Wang, Jianqing; Fujiwara, Takuya; Kato, Taku; Anzai, Daisuke
2016-09-01
Human body communication (HBC) provides a promising physical layer for wireless body area networks (BANs) in healthcare and medical applications because of its low propagation loss and high security. In this study, we developed a wearable electrocardiogram (ECG) that employs impulse radio (IR)-type HBC technology for transmitting vital signals on the human body in a wearable BAN scenario. The HBC-based wearable ECG has two notable features. First, the wideband nature of the IR scheme yields very low radiated power, so the transceiver readily complies with extremely-weak-radio regulations and requires no license; this greatly eases the adoption and spread of the wearable ECG. Second, sharing the sensing and transmitting electrodes through time division and capacitive coupling substantially simplifies the HBC-based ECG structure and contributes to its miniaturization. To verify the validity of the HBC-based ECG, we evaluated its communication performance and ECG acquisition performance. The measured bit error rate, below 10^-3 at 1.25 Mb/s, showed good physical-layer communication performance, and the acquired ECG waveform and various heart-rate variability parameters in the time and frequency domains agreed well with a commercially available radio-frequency ECG and a Holter ECG. These results demonstrate the validity and feasibility of the HBC-based ECG for healthcare applications. To our knowledge, this is the first real-time ECG transmission realized with HBC technology.
NASA Astrophysics Data System (ADS)
Zhang, Dianjun; Zhou, Guoqing
2015-12-01
Soil moisture (SM) is a key variable widely used in many environmental studies. The land surface temperature versus vegetation index (LST-VI) space has become a common way to estimate SM in optical remote sensing applications. A normalized LST-VI space is established from normalized LST and VI to obtain comparable SM in Zhang et al. (Validation of a practical normalized soil moisture model with in situ measurements in humid and semiarid regions [J]. International Journal of Remote Sensing, DOI: 10.1080/01431161.2015.1055610). The boundary conditions in that study constrained point A (the driest bare soil) and point B (the wettest bare soil) for surface energy closure; however, no constraint was imposed on point D (full vegetation cover). In this paper, several vegetation types, such as crop, grass, and mixed forest, are simulated with the Noah LSM 3.2 land surface model to analyze their effects on soil moisture estimation. The location of point D changes with vegetation type: the normalized LST of point D for forest is much lower than for crop and grass, whereas the location of point D is essentially unchanged between crop and grass.
Measurement of the local food environment: a comparison of existing data sources.
Bader, Michael D M; Ailshire, Jennifer A; Morenoff, Jeffrey D; House, James S
2010-03-01
Studying the relation between the residential environment and health requires valid, reliable, and cost-effective methods to collect data on residential environments. This 2002 study compared the level of agreement between measures of the presence of neighborhood businesses drawn from 2 common sources of data used for research on the built environment and health: listings of businesses from commercial databases and direct observations of city blocks by raters. Kappa statistics were calculated for 6 types of businesses-drugstores, liquor stores, bars, convenience stores, restaurants, and grocers-located on 1,663 city blocks in Chicago, Illinois. Logistic regressions estimated whether disagreement between measurement methods was systematically correlated with the socioeconomic and demographic characteristics of neighborhoods. Levels of agreement between the 2 sources were relatively high, with significant (P < 0.001) kappa statistics for each business type ranging from 0.32 to 0.70. Most business types were more likely to be reported by direct observations than in the commercial database listings. Disagreement between the 2 sources was not significantly correlated with the socioeconomic and demographic characteristics of neighborhoods. Results suggest that researchers should have reasonable confidence using whichever method (or combination of methods) is most cost-effective and theoretically appropriate for their research design.
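The agreement measure used above is the kappa statistic, which corrects raw agreement for chance. A minimal sketch of Cohen's kappa for a binary business-presence indicator follows (illustrative data, not the study's; the function name is ours):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters coding the same city blocks."""
    assert len(a) == len(b)
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    # Chance agreement from each source's marginal positive rate
    pa = sum(a) / n
    pb = sum(b) / n
    p_chance = pa * pb + (1 - pa) * (1 - pb)
    return (p_obs - p_chance) / (1 - p_chance)

# Presence/absence of one business type on 8 blocks:
# commercial database listing vs. direct observation (made-up values)
db  = [1, 0, 1, 1, 0, 0, 1, 0]
obs = [1, 0, 1, 0, 0, 0, 1, 1]
```

A kappa near 0 means agreement is no better than chance; values in the 0.32 to 0.70 range reported above indicate fair to substantial agreement.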
Fall classification by machine learning using mobile phones.
Albert, Mark V; Kording, Konrad; Herrmann, Megan; Jayaraman, Arun
2012-01-01
Fall prevention is a critical component of health care; falls are a common source of injury in the elderly and are associated with significant levels of mortality and morbidity. Automatically detecting falls can allow rapid response to potential emergencies; in addition, knowing the cause or manner of a fall can be beneficial for prevention studies or a more tailored emergency response. The purpose of this study is to demonstrate techniques to not only reliably detect a fall but also to automatically classify the type. We asked 15 subjects to simulate four different types of falls-left and right lateral, forward trips, and backward slips-while wearing mobile phones and previously validated, dedicated accelerometers. Nine subjects also wore the devices for ten days, to provide data for comparison with the simulated falls. We applied five machine learning classifiers to a large time-series feature set to detect falls. Support vector machines and regularized logistic regression were able to identify a fall with 98% accuracy and classify the type of fall with 99% accuracy. This work demonstrates how current machine learning approaches can simplify data collection for prevention in fall-related research as well as improve rapid response to potential injuries due to falls.
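The detection step rests on features extracted from windows of accelerometer samples. The sketch below shows simple magnitude-based feature extraction plus a naive peak-threshold screening rule; it is far simpler than the SVM and regularized logistic-regression classifiers used in the study, and the threshold is a hypothetical placeholder:

```python
import math

def extract_features(accel):
    """Summary features from a window of 3-axis accelerometer samples.
    accel: list of (x, y, z) tuples in units of g."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in accel]
    n = len(mags)
    mean = sum(mags) / n
    var = sum((m - mean) ** 2 for m in mags) / n
    return {"mean": mean, "std": math.sqrt(var),
            "peak": max(mags), "range": max(mags) - min(mags)}

def looks_like_fall(feats, peak_g=2.5):
    """Naive screening rule: a fall produces a large acceleration spike.
    The 2.5 g threshold is illustrative, not taken from the paper."""
    return feats["peak"] >= peak_g
```

In practice such features would feed a trained classifier rather than a fixed threshold, which is what distinguishes detection (fall vs. no fall) from classification of fall type.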
Psychometric evaluation of dietary habits questionnaire for type 2 diabetes mellitus
NASA Astrophysics Data System (ADS)
Sami, W.; Ansari, T.; Butt, N. S.; Hamid, M. R. Ab
2017-09-01
This research evaluated the psychometric properties of the English version of dietary habits questionnaires developed for type 2 diabetic patients. There is a scarcity of literature on standardized questionnaires for assessing the dietary habits of type 2 diabetics in Saudi Arabia. As dietary habits vary from country to country, this was an attempt to develop questionnaires that can serve as a baseline. Through an intensive literature review, four questionnaires were developed or modified and subsequently tested for psychometric properties. Prior to the pilot study, a pre-test was conducted to evaluate face validity and content validity. The pilot study was conducted from 23 October to 22 November 2016 to evaluate the questionnaires' reliability and validity. A systematic random sampling technique was used to collect data from 132 patients by the direct investigation method. The questionnaires assessing diabetes mellitus knowledge (0.891), dietary knowledge (0.869), dietary attitude (0.841) and dietary practices (0.874) had good internal consistency reliability. Factor analysis conducted on the dietary attitude questionnaire showed a valid 5-factor solution; the loadings were positive in direction and free from factorial complexity. Relying on the data obtained from type 2 diabetics, these questionnaires can be considered reliable and valid for the assessment of dietary habits in the populations of Saudi Arabia and neighbouring Gulf countries.
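The internal consistency figures reported above are Cronbach's alpha values, computed from item variances and the variance of respondents' total scores. A minimal sketch (illustrative data, not the study's):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.
    items: list of k lists, each holding one item's scores for the
    same n respondents, in the same respondent order."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(var(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))
```

Values around 0.84 to 0.89, as reported above, are conventionally read as good internal consistency.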
29 CFR 779.204 - Common types of “enterprise.”
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Common types of “enterprise.” 779.204 Section 779.204 Labor... Business Unit § 779.204 Common types of “enterprise.” (a) The single establishment business. In the simplest type of organization, the entire business ordinarily is one enterprise. The entire business...
Concurrent Validity of the Online Version of the Keirsey Temperament Sorter II.
ERIC Educational Resources Information Center
Kelly, Kevin R.; Jugovic, Heidi
2001-01-01
Data from the Keirsey Temperament Sorter II online instrument and Myers Briggs Type Indicator (MBTI) for 203 college freshmen were analyzed. Positive correlations appeared between the concurrent MBTI and Keirsey measures of psychological type, giving preliminary support to the validity of the online version of Keirsey. (Contains 28 references.)…
Examinee Noneffort and the Validity of Program Assessment Results
ERIC Educational Resources Information Center
Wise, Steven L.; DeMars, Christine E.
2010-01-01
Educational program assessment studies often use data from low-stakes tests to provide evidence of program quality. The validity of scores from such tests, however, is potentially threatened by examinee noneffort. This study investigated the extent to which one type of noneffort--rapid-guessing behavior--distorted the results from three types of…
Guerrero-Romero, Fernando; Rodríguez-Morán, Martha
2010-03-01
To validate a method for screening cases of type 2 diabetes and monitoring at-risk people in a community in northern Mexico. The screening instrument for type 2 diabetes (ITD, for its Spanish acronym) was developed using a multiple logistic regression analysis that made it possible to determine the association between a new diagnosis of diabetes (the dependent variable) and 11 known risk factors. Internal validation was performed through v-fold cross-validation, together with external validation through the monitoring of a cohort of healthy individuals. In order to estimate the relative risk (RR) of developing type 2 diabetes, the total ITD score is calculated on the basis of an individual's risk factors and compared against a curve that shows the probability of that individual developing the disease. Of the 525 people in the cohort, 438 (83.4%) were followed for an average of 7 years (4.5 to 10 years), for a total of 2,696 person-years; 62 (14.2%) developed diabetes during follow-up. Individuals scoring 55 points based on their risk factors had a significantly higher risk of developing diabetes within 7 years (RR = 6.1; 95% CI: 1.7 to 11.1); the risk was even higher for those with a score of 75 points (RR = 9.4; 95% CI: 2.1 to 11.5). The ITD is easy to use and a valid screening alternative for type 2 diabetes. Its use will allow more individuals to benefit from disease prevention methods and early diagnosis without substantially increasing costs and with minimal use of laboratory resources.
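An instrument of this kind sums point weights derived from logistic regression coefficients and maps the total score to a risk probability. The sketch below shows the general mechanics only; every weight, factor name, and coefficient is a hypothetical placeholder, not the published ITD:

```python
import math

# Hypothetical point weights for illustration only; NOT the published ITD weights.
WEIGHTS = {"age_over_45": 20, "bmi_over_30": 25, "family_history": 20,
           "hypertension": 10, "sedentary": 10}

def itd_style_score(risk_factors):
    """Sum the point weights for the risk factors an individual presents."""
    return sum(WEIGHTS[f] for f in risk_factors)

def risk_probability(score, beta0=-5.0, beta1=0.05):
    """Map a total score to a probability via a logistic curve.
    beta0 and beta1 are illustrative coefficients, not fitted values."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * score)))
```

The key property preserved from the design above is monotonicity: a higher total score always maps to a higher estimated probability of disease.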
Reliability and validity of the Symptoms of Depression Questionnaire (SDQ)
Pedrelli, Paola; Blais, Mark A.; Alpert, Jonathan E.; Shelton, Richard C.; Walker, Rosemary S. W.; Fava, Maurizio
2015-01-01
Current measures for major depressive disorder focus primarily on the assessment of depressive symptoms, while often omitting other common features. However, the presence of comorbid features in the anxiety spectrum influences outcome and may affect treatment. More comprehensive measures of depression are needed that include the assessment of symptoms in the anxiety–depression spectrum. This study examines the reliability and validity of the Symptoms of Depression Questionnaire (SDQ), which assesses irritability, anger attacks, and anxiety symptoms together with the commonly considered symptoms of depression. Analysis of the factor structure of the SDQ identified 5 subscales, including one in the anxiety–depression spectrum, with adequate internal consistency and concurrent validity. The SDQ may be a valuable new tool to better characterize depression and to identify and administer more targeted interventions. PMID:25275853
Caro-Bautista, Jorge; Morilla-Herrera, Juan Carlos; Villa-Estrada, Francisca; Cuevas-Fernández-Gallego, Magdalena; Lupiáñez-Pérez, Inmaculada; Morales-Asencio, José Miguel
2016-01-01
To undertake the cultural adaptation and psychometric assessment of the Summary of Diabetes Self-Care Activities measure (SDSCA) in a Spanish population with type 2 diabetes mellitus. Clinimetric validation study. Primary health care centers of District Malaga and Valle del Guadalhorce. Three hundred thirty-one persons with type 2 diabetes mellitus. The SDSCA validated in a Mexican population was subjected to semantic and content equivalence using a Delphi method, and its legibility was determined by the INFLESZ scale. Subsequently, psychometric validation was conducted through exploratory and confirmatory factor analysis (hereinafter EFA and CFA), internal consistency, test-retest reliability and discriminant validity. Two rounds were needed to achieve consensus among the panel members. The index then showed good readability. The EFA suggested a 3-factor model (diet, exercise and self-analysis) with 7 items, which explained 79.16% of variance. The results of the CFA showed a good fit of the SDSCA-Sp. Internal consistency was moderate to low (Cronbach's α = 0.62) and test-retest reliability was evaluated in 198 patients (t = 0.462-0.796, p < 0.001) with a total correlation of 0.764 (p < 0.0001). The SDSCA-Sp is a valid instrument for assessing self-care in type 2 DM in clinical practice and research, with clinimetric properties similar to previous studies. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
Dubinsky, Eric A; Butkus, Steven R; Andersen, Gary L
2016-11-15
Sources of fecal indicator bacteria are difficult to identify in watersheds that are impacted by a variety of non-point sources. We developed a molecular source tracking test using the PhyloChip microarray that detects and distinguishes fecal bacteria from humans, birds, ruminants, horses, pigs and dogs with a single test. The multiplexed assay targets 9001 different 25-mer fragments of 16S rRNA genes that are common to the bacterial community of each source type. Both random forests and SourceTracker were tested as discrimination tools, with SourceTracker classification producing superior specificity and sensitivity for all source types. Validation with 12 different mammalian sources in mixtures found 100% correct identification of the dominant source and 84-100% specificity. The test was applied to identify sources of fecal indicator bacteria in the Russian River watershed in California. We found widespread contamination by human sources during the wet season proximal to settlements with antiquated septic infrastructure and during the dry season at beaches during intense recreational activity. The test was more sensitive than common fecal indicator tests that failed to identify potential risks at these sites. Conversely, upstream beaches and numerous creeks with less reliance on onsite wastewater treatment contained no fecal signal from humans or other animals; however these waters did contain high counts of fecal indicator bacteria after rain. Microbial community analysis revealed that increased E. coli and enterococci at these locations did not co-occur with common fecal bacteria, but rather co-varied with copiotrophic bacteria that are common in freshwaters with high nutrient and carbon loading, suggesting runoff likely promoted the growth of environmental strains of E. coli and enterococci. 
These results indicate that machine-learning classification of PhyloChip microarray data can outperform conventional single marker tests that are used to assess health risks, and is an effective tool for distinguishing numerous fecal and environmental sources of pathogen indicators. Copyright © 2016 Elsevier Ltd. All rights reserved.
Bazsalovicsová, Eva; Králová-Hromadová, Ivica; Stefka, Jan; Scholz, Tomáš
2012-05-01
Sequence structure of the complete internal transcribed spacers 1 and 2 (ITS1 and ITS2) of the ribosomal DNA region and partial mitochondrial cytochrome c oxidase subunit I (cox1) gene sequences were studied in the monozoic tapeworm Atractolytocestus sagittatus (Kulakovskaya et Akhmerov, 1965) (Cestoda: Caryophyllidea), a parasite of common carp (Cyprinus carpio carpio L.). Intraindividual sequence diversity was observed in both ribosomal spacers. In ITS1, a total of 19 recombinant clones yielded eight different sequence types (pairwise sequence identity 99.7-100%) which, however, did not resemble the structure typical for divergent intragenomic ITS copies (paralogues). Polymorphism took the form of single nucleotide mutations present exclusively in single clones, and variation in the number of short repetitive motifs was not observed. In ITS2, a total of 21 recombinant clones yielded ten different sequence types (pairwise sequence identity 97.5-100%). They were mostly characterized by a varying number of (TCGT)n repeats, sorting the ITS2 sequences into two sequence variants, which reflected the structure specific for ITS paralogues. The third DNA region analysed, the mitochondrial cox1 gene (669 bp), was 100% identical in all studied A. sagittatus individuals. Comparison of molecular data on A. sagittatus with those on Atractolytocestus huronensis Anthony, 1958, an invasive parasite of common carp, showed that interspecific differences significantly exceeded intraspecific variation in both ribosomal spacers (81.4-82.5% identity in ITS1, 74.4-75.2% in ITS2) as well as in mitochondrial cox1, which confirms the validity of both congeneric tapeworms parasitic in the same fish host.
Regression: The Apple Does Not Fall Far From the Tree.
Vetter, Thomas R; Schober, Patrick
2018-05-15
Researchers and clinicians are frequently interested in either: (1) assessing whether there is a relationship or association between 2 or more variables and quantifying this association; or (2) determining whether 1 or more variables can predict another variable. The strength of such an association is mainly described by the correlation. However, regression analysis and regression models can be used not only to identify whether there is a significant relationship or association between variables but also to generate estimations of such a predictive relationship between variables. This basic statistical tutorial discusses the fundamental concepts and techniques related to the most common types of regression analysis and modeling, including simple linear regression, multiple regression, logistic regression, ordinal regression, and Poisson regression, as well as the common yet often underrecognized phenomenon of regression toward the mean. The various types of regression analysis are powerful statistical techniques, which when appropriately applied, can allow for the valid interpretation of complex, multifactorial data. Regression analysis and models can assess whether there is a relationship or association between 2 or more observed variables and estimate the strength of this association, as well as determine whether 1 or more variables can predict another variable. Regression is thus being applied more commonly in anesthesia, perioperative, critical care, and pain research. However, it is crucial to note that regression can identify plausible risk factors; it does not prove causation (a definitive cause and effect relationship). The results of a regression analysis instead identify independent (predictor) variable(s) associated with the dependent (outcome) variable. As with other statistical methods, applying regression requires that certain assumptions be met, which can be tested with specific diagnostics.
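As a concrete instance of the simplest model in the tutorial above, an ordinary least squares fit of y = a + b·x can be computed in closed form from sample means and cross-products. A minimal sketch (our own illustration, not code from the tutorial):

```python
def simple_linear_regression(xs, ys):
    """Ordinary least squares fit of y = a + b*x for paired observations.
    Returns (intercept a, slope b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)           # sum of squares of x
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # cross-products
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    return a, b
```

The same least-squares idea generalizes to multiple regression; logistic, ordinal, and Poisson regression instead fit coefficients by maximum likelihood, but the interpretation of coefficients as associations, not causes, carries over exactly as the tutorial cautions.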
Polygenic determinants in extremes of high-density lipoprotein cholesterol
Dron, Jacqueline S.; Wang, Jian; Low-Kam, Cécile; Khetarpal, Sumeet A.; Robinson, John F.; McIntyre, Adam D.; Ban, Matthew R.; Cao, Henian; Rhainds, David; Dubé, Marie-Pierre; Rader, Daniel J.; Lettre, Guillaume; Tardif, Jean-Claude
2017-01-01
HDL cholesterol (HDL-C) remains a superior biochemical predictor of CVD risk, but its genetic basis is incompletely defined. In patients with extreme HDL-C concentrations, we concurrently evaluated the contributions of multiple large- and small-effect genetic variants. In a discovery cohort of 255 unrelated lipid clinic patients with extreme HDL-C levels, we used a targeted next-generation sequencing panel to evaluate rare variants in known HDL metabolism genes, simultaneously with common variants bundled into a polygenic trait score. Two additional cohorts were used for validation and included 1,746 individuals from the Montréal Heart Institute Biobank and 1,048 individuals from the University of Pennsylvania. Findings were consistent between cohorts: we found rare heterozygous large-effect variants in 18.7% and 10.9% of low- and high-HDL-C patients, respectively. We also found common variant accumulation, indicated by extreme polygenic trait scores, in an additional 12.8% and 19.3% of overall cases of low- and high-HDL-C extremes, respectively. Thus, the genetic basis of extreme HDL-C concentrations encountered clinically is frequently polygenic, with contributions from both rare large-effect and common small-effect variants. Multiple types of genetic variants should be considered as contributing factors in patients with extreme dyslipidemia. PMID:28870971
Polygenic determinants in extremes of high-density lipoprotein cholesterol.
Dron, Jacqueline S; Wang, Jian; Low-Kam, Cécile; Khetarpal, Sumeet A; Robinson, John F; McIntyre, Adam D; Ban, Matthew R; Cao, Henian; Rhainds, David; Dubé, Marie-Pierre; Rader, Daniel J; Lettre, Guillaume; Tardif, Jean-Claude; Hegele, Robert A
2017-11-01
HDL cholesterol (HDL-C) remains a superior biochemical predictor of CVD risk, but its genetic basis is incompletely defined. In patients with extreme HDL-C concentrations, we concurrently evaluated the contributions of multiple large- and small-effect genetic variants. In a discovery cohort of 255 unrelated lipid clinic patients with extreme HDL-C levels, we used a targeted next-generation sequencing panel to evaluate rare variants in known HDL metabolism genes, simultaneously with common variants bundled into a polygenic trait score. Two additional cohorts were used for validation and included 1,746 individuals from the Montréal Heart Institute Biobank and 1,048 individuals from the University of Pennsylvania. Findings were consistent between cohorts: we found rare heterozygous large-effect variants in 18.7% and 10.9% of low- and high-HDL-C patients, respectively. We also found common variant accumulation, indicated by extreme polygenic trait scores, in an additional 12.8% and 19.3% of overall cases of low- and high-HDL-C extremes, respectively. Thus, the genetic basis of extreme HDL-C concentrations encountered clinically is frequently polygenic, with contributions from both rare large-effect and common small-effect variants. Multiple types of genetic variants should be considered as contributing factors in patients with extreme dyslipidemia. Copyright © 2017 by the American Society for Biochemistry and Molecular Biology, Inc.
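A polygenic trait score of the kind used above is, at its core, a weighted sum of effect-allele counts across trait-associated variants. A minimal sketch (variant names and weights are hypothetical placeholders, not the study's panel):

```python
def polygenic_score(genotypes, weights):
    """Weighted allele-count score across trait-associated variants.
    genotypes: dict variant_id -> effect-allele count (0, 1, or 2).
    weights: dict variant_id -> per-allele effect size.
    Variants absent from an individual's genotype count as 0 copies."""
    return sum(weights[v] * genotypes.get(v, 0) for v in weights)

# Illustrative only: two made-up variants with made-up effect sizes
example_weights = {"rsA": 0.3, "rsB": -0.1}
```

Individuals whose scores fall in the extreme tails of the population distribution are the "extreme polygenic trait score" cases described above, a common-variant explanation complementing rare large-effect variants.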
Health surveillance for occupational respiratory disease.
Lewis, L; Fishwick, D
2013-07-01
Occupational lung diseases remain common, and health surveillance is one approach used to assist identification of early cases. To identify areas of good practice within respiratory health surveillance and to formulate recommendations for practice. Published literature was searched since 1990 using a semi-systematic methodology. A total of 561 documents were identified on Medline and Embase combined. Other search engines did not identify relevant documents that had not already been identified by these two main searches. Seventy-nine of these were assessed further and 36 documents were included for the full analysis. Respiratory health surveillance remains a disparate process, even within disease type. A standard validated questionnaire and associated guidance should be developed. Lung function testing was common and generally supported by the evidence. Cross-sectional interpretation of lung function in younger workers needs careful assessment in order to best identify early cases of disease. More informed interpretation of the forced expiratory volume in 1 s/forced vital capacity ratio, for example by using a lower limit of normal for each worker, and of longitudinal lung function information is advised. Immunological tests appear useful in small groups of workers exposed to common occupational allergens. Education, training and improved occupational health policies are likely to improve uptake of health surveillance, to ensure that those who fail health surveillance at any point are handled appropriately.
Classification of mineral deposits into types using mineralogy with a probabilistic neural network
Singer, Donald A.; Kouda, Ryoichi
1997-01-01
In order to determine whether it is desirable to quantify mineral-deposit models further, a test of the ability of a probabilistic neural network to classify deposits into types based on mineralogy was conducted. Presence or absence of ore and alteration mineralogy in well-typed deposits were used to train the network. To reduce the number of minerals considered, the analyzed data were restricted to minerals present in at least 20% of at least one deposit type. An advantage of this restriction is that single or rare occurrences of minerals did not dominate the results. Probabilistic neural networks can provide mathematically sound confidence measures based on Bayes theorem and are relatively insensitive to outliers. Founded on Parzen density estimation, they require no assumptions about distributions of random variables used for classification, even handling multimodal distributions. They train quickly and work as well as, or better than, multiple-layer feedforward networks. Tests were performed with a probabilistic neural network employing a Gaussian kernel and separate sigma weights for each class and each variable. The training set was reduced to the presence or absence of 58 reported minerals in eight deposit types. The training set included: 49 Cyprus massive sulfide deposits; 200 kuroko massive sulfide deposits; 59 Comstock epithermal vein gold districts; 17 quartz-alunite epithermal gold deposits; 25 Creede epithermal gold deposits; 28 sedimentary-exhalative zinc-lead deposits; 28 Sado epithermal vein gold deposits; and 100 porphyry copper deposits. The most common training problem was the error of classifying about 27% of Cyprus-type deposits in the training set as kuroko.
In independent tests with deposits not used in the training set, 88% of 224 kuroko massive sulfide deposits were classed correctly, as were 92% of 25 porphyry copper deposits, 78% of 9 Comstock epithermal gold-silver districts, and 83% of six quartz-alunite epithermal gold deposits. Across all deposit types, 88% of deposits in the validation dataset were correctly classed. Misclassifications were most common if a deposit was characterized by only a few minerals, e.g., pyrite, chalcopyrite, and sphalerite. The success rate jumped to 98% correctly classed deposits when just two rock types were added. Such a high success rate of the probabilistic neural network suggests that not only should this preliminary test be expanded to include other deposit types, but that other deposit features should be added.
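The classifier described above, Parzen density estimation with a Gaussian kernel over presence/absence mineral vectors, can be sketched as follows. This is a toy illustration with hypothetical deposit types and feature vectors; the authors' network additionally fitted separate sigma weights for each class and each variable, which the single shared sigma here omits:

```python
import math

def pnn_classify(x, training, sigma=0.5):
    """Probabilistic neural network sketch: estimate a Parzen density per
    class with a Gaussian kernel and return the highest-density class.
    x: presence/absence (0/1) feature vector for an unknown deposit.
    training: dict mapping class name -> list of 0/1 training vectors."""
    def kernel(a, b):
        d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return math.exp(-d2 / (2 * sigma ** 2))

    densities = {cls: sum(kernel(x, t) for t in vecs) / len(vecs)
                 for cls, vecs in training.items()}
    return max(densities, key=densities.get)
```

Because each class density is an average of kernel values, the per-class sums also yield the Bayes-style confidence measures mentioned above when normalized across classes.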
The Practice and Products of Communication Inquiry and Education.
ERIC Educational Resources Information Center
Warren, Clay
1982-01-01
The ability to communicate effectively is fundamental to communication education. For internal validity, communication educators need to concentrate on knowledge-building (competence) and skills training (performance). For external validity, the speech communication discipline must establish a common understanding of its work and send clear…
Décary, Simon; Ouellet, Philippe; Vendittoli, Pascal-André; Roy, Jean-Sébastien; Desmeules, François
2017-01-01
More evidence on the diagnostic validity of physical examination tests for knee disorders is needed to reduce reliance on frequently used and costly imaging tests. To conduct a systematic review of systematic reviews (SR) and meta-analyses (MA) evaluating the diagnostic validity of physical examination tests for knee disorders. A structured literature search was conducted in five databases until January 2016. Methodological quality was assessed using the AMSTAR. Seventeen reviews were included, with a mean AMSTAR score of 5.5 ± 2.3. Based on six SR, only the Lachman test for ACL injuries is diagnostically valid when individually performed (likelihood ratio (LR+): 10.2; LR-: 0.2). Based on two SR, the Ottawa Knee Rule is a valid screening tool for knee fractures (LR-: 0.05). Based on one SR, the EULAR criteria had a post-test probability of 99% for the diagnosis of knee osteoarthritis. Based on two SR, a complete physical examination performed by a trained health provider was found to be diagnostically valid for ACL, PCL and meniscal injuries, as well as for cartilage lesions. When individually performed, common physical tests are rarely able to rule in or rule out a specific knee disorder, with the exception of the Lachman test for ACL injuries. There is low-quality evidence concerning the validity of combining history elements and physical tests. Copyright © 2016 Elsevier Ltd. All rights reserved.
Pathophysiology and management of multivalvular disease
Unger, Philippe; Clavel, Marie-Annick; Lindman, Brian R.; Mathieu, Patrick; Pibarot, Philippe
2016-01-01
Multivalvular disease (MVD) is a common condition with a complex pathophysiology, dependent on the specific combination of valve lesions. Diagnosis is challenging as several echocardiographic methods commonly used for the assessment of stenosis or regurgitation have been validated only in patients with single valve disease. Decisions about the timing and type of treatment should be made by a multidisciplinary heart valve team, on a case-by-case basis. Several factors should be considered, including the severity and consequences of the MVD, the patient’s life expectancy and comorbidities, the surgical risk associated with combined valve procedures, the long-term risk of morbidity and mortality associated with multiple valve prostheses, and the likelihood and risk of reoperation. The introduction of transcatheter valve therapies into clinical practice has provided new treatment options for patients with MVD, and decision-making algorithms on how to combine surgical and percutaneous treatment options are evolving rapidly. In this Review, we discuss the pathophysiology, diagnosis, and treatment of MVD, focussing on the combination of valve pathologies that are most often encountered in clinical practice. PMID:27121305
Self-perceived oral health and salivary proteins in children with type 1 diabetes.
Javed, F; Sundin, U; Altamash, M; Klinge, B; Engström, P-E
2009-01-01
The aim was to validate self-perceived oral health with salivary IgG as an inflammatory parameter in children with type 1 diabetes. Unstimulated whole saliva samples were collected from 36 children with well controlled and 12 with poorly controlled type 1 diabetes and 40 non-diabetic children (controls). Salivary flow rate, random blood glucose level, salivary protein concentration and immunoglobulin A and G levels were recorded using standard techniques. Data concerning oral health and diabetes status were collected. Self-perceived gingival bleeding (bleeding gums), bad breath and dry mouth were more common in diabetic children than in controls (P < 0.05). Gingival bleeding was more frequently perceived by children with poorly controlled than with well-controlled type 1 diabetes (P < 0.05), and than by controls (P < 0.001). Bad breath was more commonly perceived by children with poorly controlled than with well-controlled type 1 diabetes (P < 0.05), and than by controls (P < 0.0001). Salivary flow rate was lower in the diabetic children than in controls (P < 0.01), with no difference between children with poorly controlled and well-controlled type 1 diabetes. Salivary IgG per mg protein was higher in the diabetics than in the control group (P < 0.0001). IgG per mg protein levels were also higher in children with poorly controlled than with well-controlled type 1 diabetes (P < 0.05). There was no difference in IgA per mg protein and total protein concentrations between children with poorly controlled and well-controlled type 1 diabetes. Self-perceived gingival bleeding and salivary IgG per mg protein concentration were increased in children with type 1 diabetes compared with controls. These variables were also increased in children with poorly controlled compared with well-controlled type 1 diabetes.
Nakashima, Ken'ichiro; Isobe, Chikae; Toshihiko, Souma; Ura, Mitsuhiro
2013-06-01
Moderating effects of group type on the relationship between in-group social values and group identity were investigated. Previous research has indicated that values attached to the in-group, such as its status, privileges, and power, lead to increased group identity. However, these studies have not investigated the role of the type of in-groups on this effect. We conducted an experiment that manipulated the in-group type. In the common-identity type of in-group condition, formation of in- and out-groups on the basis of social categorization was established. In the common-bond type of in-group condition, interactions between the group members were conducted. Results indicated that in the former condition, the degree of in-group social values affected group identity; however, this effect was not found in the latter condition. These results suggest that social values of the in-group have an asymmetric effect on group identity, depending upon the in-group type as a common-identity or common-bond group.
Detection of selective sweeps in cattle using genome-wide SNP data
2013-01-01
Background The domestication and subsequent selection by humans to create breeds and biological types of cattle undoubtedly altered the patterning of variation within their genomes. Strong selection to fix advantageous large-effect mutations underlying domesticability, breed characteristics or productivity created selective sweeps in which variation was lost in the chromosomal region flanking the selected allele. Selective sweeps have now been identified in the genomes of many animal species including humans, dogs, horses, and chickens. Here, we attempt to identify and characterise regions of the bovine genome that have been subjected to selective sweeps. Results Two datasets were used for the discovery and validation of selective sweeps via the fixation of alleles at a series of contiguous SNP loci. BovineSNP50 data were used to identify 28 putative sweep regions among 14 diverse cattle breeds. Affymetrix BOS 1 prescreening assay data for five breeds were used to identify 85 regions and validate 5 regions identified using the BovineSNP50 data. Many genes are located within these regions and the lack of sequence data for the analysed breeds precludes the nomination of selected genes or variants and limits the prediction of the selected phenotypes. However, phenotypes that we predict to have historically been under strong selection include horned-polled, coat colour, stature, ear morphology, and behaviour. Conclusions The bias towards common SNPs in the design of the BovineSNP50 assay led to the identification of recent selective sweeps associated with breed formation and common to only a small number of breeds rather than ancient events associated with domestication which could potentially be common to all European taurines. 
The limited SNP density, or marker resolution, of the BovineSNP50 assay significantly impacted the rate of false discovery of selective sweeps, however, we found sweeps in common between breeds which were confirmed using an ultra-high-density assay scored in a small number of animals from a subset of the breeds. No sweep regions were shared between indicine and taurine breeds reflecting their divergent selection histories and the very different environmental habitats to which these sub-species have adapted. PMID:23758707
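The sweep discovery described above rests on finding runs of contiguous SNP loci that are fixed within a breed. As a rough illustration of that idea only (not the authors' actual pipeline; the genotype matrix, run length and function name below are invented for the example), one can scan a genotype matrix for stretches of monomorphic markers:

```python
import numpy as np

def fixed_runs(genotypes, min_run=5):
    """Return (start, end) index pairs for runs of >= min_run contiguous
    monomorphic SNPs. genotypes: (n_animals, n_snps) array of allele counts."""
    # A SNP is "fixed" in the sample if every animal carries the same genotype.
    fixed = (genotypes == genotypes[0]).all(axis=0)
    runs, start = [], None
    for i, f in enumerate(fixed):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= min_run:
                runs.append((start, i))
            start = None
    if start is not None and len(fixed) - start >= min_run:
        runs.append((start, len(fixed)))
    return runs

# Toy data: 4 animals x 12 SNPs, with SNPs 3..9 fixed to mimic a sweep.
rng = np.random.default_rng(0)
g = rng.integers(0, 3, size=(4, 12))
g[0] = (g[1] + 1) % 3   # force every column polymorphic to start with
g[:, 3:10] = 2          # then fix a contiguous block
print(fixed_runs(g, min_run=5))  # → [(3, 10)]
```

Real analyses additionally account for SNP density, allele-frequency ascertainment and between-breed sharing, which this sketch ignores.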
Chung, Wen Wei; Chua, Siew Siang; Lai, Pauline Siew Mei; Morisky, Donald E
2015-01-01
Medication non-adherence is a prevalent problem worldwide, but to date no gold standard is available to assess such behavior. This study evaluated the psychometric properties, particularly the concurrent validity, of the English version of the Malaysian Medication Adherence Scale (MALMAS) among people with type 2 diabetes in Malaysia. Individuals with type 2 diabetes, aged 21 years and above, who used at least one anti-diabetes agent and could communicate in English were recruited. The MALMAS was compared with the 8-item Morisky Medication Adherence Scale (MMAS-8) to assess its convergent validity, while concurrent validity was evaluated based on levels of glycated hemoglobin (HbA1C). Participants answered the MALMAS twice: at baseline and 4 weeks later. The study involved 136 participants. The MALMAS achieved acceptable internal consistency (Cronbach's alpha=0.565) and stable reliability, as the test-retest scores showed fair correlation (Spearman's rho=0.412). The MALMAS correlated well with the MMAS-8 (Spearman's rho=0.715). Participants who were adherent to their anti-diabetes medications had significantly lower median HbA1C values than those who were non-adherent (7.90 versus 8.55%, p=0.032). The odds of adherent participants achieving good glycemic control were 3.36 times (95% confidence interval: 1.09-10.37) those of non-adherent participants. This confirms the concurrent validity of the MALMAS. The sensitivity of the MALMAS was 88.9%, while its specificity was 29.6%. These findings further substantiate the reliability and validity of the MALMAS, in particular its concurrent validity and sensitivity, for assessing medication adherence of people with type 2 diabetes in Malaysia. PMID:25909363
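The adherence-versus-glycemic-control comparison above reduces to 2x2-table arithmetic: an odds ratio with a Wald confidence interval, plus sensitivity and specificity. A minimal sketch of those calculations (the cell counts below are made up for illustration and are not taken from the study):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI for a 2x2 table:
    a=exposed cases, b=exposed non-cases, c=unexposed cases, d=unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts, chosen only to exercise the formulas:
print(odds_ratio_ci(40, 20, 20, 56))
sens, spec = sens_spec(tp=48, fn=6, tn=24, fp=57)
print(round(sens, 3), round(spec, 3))
```

With real study data the same arithmetic would reproduce the reported OR of 3.36 and its CI; here only the formulas are shown.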
ERIC Educational Resources Information Center
Teo, Timothy; Tan, Lynde
2012-01-01
This study applies the theory of planned behavior (TPB), a theory commonly used in commercial settings, to the educational context to explain pre-service teachers' technology acceptance, and examines the theory's validity when used for this purpose. It has found evidence that the TPB is a valid model to explain pre-service…
LeBeau, Richard T; Mesri, Bita; Craske, Michelle G
2016-10-30
With DSM-5, the APA began providing guidelines for anxiety disorder severity assessment that incorporates newly developed self-report scales. The scales share a common template, are brief, and are free of copyright restrictions. Initial validation studies have been promising, but the English-language versions of the scales have not been formally validated in clinical samples. Forty-seven individuals with a principal diagnosis of Social Anxiety Disorder (SAD) completed a diagnostic assessment, as well as the DSM-5 SAD severity scale and several previously validated measures. The scale demonstrated internal consistency, convergent validity, and discriminant validity. The next steps in the validation process are outlined. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Bias, Confounding, and Interaction: Lions and Tigers, and Bears, Oh My!
Vetter, Thomas R; Mascha, Edward J
2017-09-01
Epidemiologists seek to make a valid inference about the causal effect between an exposure and a disease in a specific population, using representative sample data from that population. Clinical researchers likewise seek to make a valid inference about the association between an intervention and outcome(s) in a specific population, based upon their randomly collected, representative sample data. Both do so by using the available data about the sample variable to make a valid estimate about its corresponding or underlying, but unknown, population parameter. Random error in an experiment can be due to the natural, periodic fluctuation or variation in the accuracy or precision of virtually any data sampling technique or health measurement tool or scale. In a clinical research study, random error can be due not only to innate human variability but also to pure chance. Systematic error in an experiment arises from an innate flaw in the data sampling technique or measurement instrument. In the clinical research setting, systematic error is more commonly referred to as systematic bias. The most commonly encountered types of bias in anesthesia, perioperative, critical care, and pain medicine research include recall bias, observational bias (Hawthorne effect), attrition bias, misclassification or informational bias, and selection bias. A confounding variable (confounding factor or confounder) is a factor that is associated with, and correlates (positively or negatively) with, both the exposure of interest and the outcome of interest. Confounding is typically not an issue in a randomized trial because the randomized groups are sufficiently balanced on all potential confounding variables, both observed and nonobserved. However, confounding can be a major problem with any observational (nonrandomized) study.
Ignoring confounding in an observational study will often result in a "distorted" or incorrect estimate of the association or treatment effect. Interaction among variables, also known as effect modification, exists when the effect of 1 explanatory variable on the outcome depends on the particular level or value of another explanatory variable. Bias and confounding are common potential explanations for statistically significant associations between exposure and outcome when the true relationship is noncausal. Understanding interactions is vital to proper interpretation of treatment effects. These complex concepts should be consistently and appropriately considered whenever one is not only designing but also analyzing and interpreting data from a randomized trial or observational study.
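The "distorted estimate" problem described above is easy to reproduce numerically: simulate a confounder Z that drives both exposure X and outcome Y, then compare the crude regression slope of Y on X with the Z-adjusted one. A toy sketch with simulated data only (all effect sizes are arbitrary; the true effect of X on Y is zero):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
z = rng.normal(size=n)             # confounder (e.g., disease severity)
x = 0.8 * z + rng.normal(size=n)   # exposure influenced by the confounder
y = 0.5 * z + rng.normal(size=n)   # outcome driven by the confounder only

# Crude model y ~ x: the slope is confounded by z.
crude = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0][1]
# Adjusted model y ~ x + z: conditioning on z removes the distortion.
adj = np.linalg.lstsq(np.column_stack([np.ones(n), x, z]), y, rcond=None)[0][1]

print(f"crude slope {crude:.3f}, adjusted slope {adj:.3f}")
```

The crude slope comes out clearly positive even though X has no causal effect on Y, while the adjusted slope sits near zero, which is exactly the distortion that ignoring confounding produces in an observational analysis.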
Unraveling Genetic Modifiers in the Gria4 Mouse Model of Absence Epilepsy
Frankel, Wayne N.; Mahaffey, Connie L.; McGarr, Tracy C.; Beyer, Barbara J.; Letts, Verity A.
2014-01-01
Absence epilepsy (AE) is a common type of genetic generalized epilepsy (GGE), particularly in children. AE and GGE are complex genetic diseases with few causal variants identified to date. Gria4 deficient mice provide a model of AE, one for which the common laboratory inbred strain C3H/HeJ (HeJ) harbors a natural IAP retrotransposon insertion in Gria4 that reduces its expression 8-fold. Between C3H and non-seizing strains such as C57BL/6, genetic modifiers alter disease severity. Even C3H substrains have surprising variation in the duration and incidence of spike-wave discharges (SWD), the characteristic electroencephalographic feature of absence seizures. Here we discovered extensive IAP retrotransposition in the C3H substrain, and identified a HeJ-private IAP in the Pcnxl2 gene, which encodes a putative multi-transmembrane protein of unknown function, resulting in decreased expression. By creating new Pcnxl2 frameshift alleles using TALEN mutagenesis, we show that Pcnxl2 deficiency is responsible for mitigating the seizure phenotype, making Pcnxl2 the first known modifier gene for absence seizures in any species. This finding gave us a handle on genetic complexity between strains, directing us to use another C3H substrain to map additional modifiers, including validation of a Chr 15 locus that profoundly affects the severity of SWD episodes. Together these new findings expand our knowledge of how natural variation modulates seizures, and highlight the feasibility of characterizing and validating modifiers in mouse strains and substrains in the post-genome sequence era. PMID:25010494
Metabolic costs of daily activity in older adults (Chores XL) study: design and methods.
Corbett, Duane B; Wanigatunga, Amal A; Valiani, Vincenzo; Handberg, Eileen M; Buford, Thomas W; Brumback, Babette; Casanova, Ramon; Janelle, Christopher M; Manini, Todd M
2017-06-01
For over 20 years, normative data have guided the prescription of physical activity. These data have since been applied to research and used to plan interventions. While they seemingly provide accurate estimates of the metabolic cost of daily activities in young adults, their accuracy for older adults is less clear. As such, a thorough evaluation of the metabolic cost of daily activities in community-dwelling adults across the lifespan is needed. The Metabolic Costs of Daily Activity in Older Adults Study is a cross-sectional study designed to compare the metabolic cost of daily activities in 250 community-dwelling adults across the lifespan. Participants (20+ years) performed 38 common daily activities while expiratory gases were measured using a portable indirect calorimeter (Cosmed K4b2). The metabolic cost was examined as a metabolic equivalent (MET) value (O2 uptake relative to 3.5 mL·min⁻¹·kg⁻¹), as a function of work rate (metabolic economy), and as a value relative to resting and peak oxygen uptake. The primary objective is to determine age-related differences in the metabolic cost of common lifestyle and exercise activities. Secondary objectives include (a) investigating the effect of functional impairment on the metabolic cost of daily activities, (b) evaluating the validity of perception-based measurement of exertion across the lifespan, and (c) validating activity sensors for estimating the type and intensity of physical activity. Results of this study are expected to improve the effectiveness with which physical activity and nutrition are recommended for adults across the lifespan.
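The MET convention used in this design defines one metabolic equivalent as an oxygen uptake of 3.5 mL·min⁻¹·kg⁻¹, so converting a calorimeter reading to METs is a two-step division. A minimal sketch (the function name and example values are my own, not from the study):

```python
def vo2_to_mets(vo2_ml_min, body_mass_kg, one_met=3.5):
    """Convert absolute O2 uptake (mL/min) to METs for a given body mass.
    one_met is the conventional 3.5 mL/min/kg resting value."""
    relative_vo2 = vo2_ml_min / body_mass_kg  # mL/min/kg
    return relative_vo2 / one_met

# e.g., 1050 mL/min measured in a 75 kg adult:
print(vo2_to_mets(1050, 75))  # → 4.0
```

An activity measured at 4 METs costs four times the conventional resting rate, which is how normative compendium values are typically expressed.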
Classifying GABAergic interneurons with semi-supervised projected model-based clustering.
Mihaljević, Bojan; Benavides-Piccione, Ruth; Guerra, Luis; DeFelipe, Javier; Larrañaga, Pedro; Bielza, Concha
2015-09-01
A recently introduced pragmatic scheme promises to be a useful catalog of interneuron names. We sought to automatically classify digitally reconstructed interneuronal morphologies according to this scheme. Simultaneously, we sought to discover possible subtypes of these types that might emerge during automatic classification (clustering). We also investigated which morphometric properties were most relevant for this classification. We used a set of 118 digitally reconstructed interneuronal morphologies, classified into the common basket (CB), horse-tail (HT), large basket (LB), and Martinotti (MA) interneuron types by 42 of the world's leading neuroscientists, and quantified by five simple morphometric properties of the axon and four of the dendrites. We labeled each neuron with the type most commonly assigned to it by the experts. We then removed this class information for each type separately, and applied semi-supervised clustering to those cells (keeping the others' cluster membership fixed), to assess separation from other types and look for the formation of new groups (subtypes). We performed this same experiment unlabeling the cells of two types at a time, and of half the cells of a single type at a time. The clustering model is a finite mixture of Gaussians which we adapted for the estimation of local (per-cluster) feature relevance. We performed the described experiments on three different subsets of the data, formed according to how many experts agreed on type membership: at least 18 experts (the full data set), at least 21 (73 neurons), and at least 26 (47 neurons). Interneurons with more reliable type labels were classified more accurately. We classified HT cells with 100% accuracy, MA cells with 73% accuracy, and CB and LB cells with 56% and 58% accuracy, respectively. We identified three subtypes of the MA type, one subtype each of the CB and LB types, and no subtypes of HT (it was a single, homogeneous type).
We obtained maximum (adapted) Silhouette width and ARI values of 1, 0.83, 0.79, and 0.42 when unlabeling the HT, CB, LB, and MA types, respectively, confirming the quality of the formed cluster solutions. The subtypes identified when unlabeling a single type also emerged when unlabeling two types at a time, confirming their validity. Axonal morphometric properties were more relevant than dendritic ones, with the axonal polar histogram length in the [π, 2π) angle interval being particularly useful. The applied semi-supervised clustering method can accurately discriminate among CB, HT, LB, and MA interneuron types while discovering potential subtypes, and is therefore useful for neuronal classification. The discovery of potential subtypes suggests that some of these types are more heterogeneous than previously thought. Finally, axonal variables seem to be more relevant than dendritic ones for distinguishing among the CB, HT, LB, and MA interneuron types. Copyright © 2015 Elsevier B.V. All rights reserved.
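The evaluation machinery named above (a Gaussian mixture for clustering, assessed with Silhouette width and the adjusted Rand index against expert labels) can be illustrated on synthetic data. This is a plain unsupervised sketch, not the authors' semi-supervised adaptation with per-cluster feature relevance; the two-cluster toy "morphometric" features are invented:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score, adjusted_rand_score

rng = np.random.default_rng(0)
# Two well-separated synthetic clusters standing in for interneuron types,
# each described by 4 toy morphometric features.
X = np.vstack([rng.normal(0, 1, (60, 4)), rng.normal(5, 1, (60, 4))])
true = np.array([0] * 60 + [1] * 60)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)

# The two cluster-quality metrics reported in the study, here on toy data:
print("silhouette:", round(silhouette_score(X, labels), 2))
print("ARI vs. expert labels:", adjusted_rand_score(true, labels))
```

On real morphometries the clusters overlap far more (hence the study's ARI of 0.42 for the heterogeneous MA type), which is what makes the semi-supervised constraint on the reliably labeled cells useful.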
Understanding and Validity in Qualitative Research.
ERIC Educational Resources Information Center
Maxwell, Joseph A.
1992-01-01
Details the philosophical and practical dimensions of five types of validity used in qualitative research: descriptive, interpretive, theoretical, generalizable, and evaluative, with corresponding issues of understanding. Presents this typology as a checklist of the kinds of threats to validity that may arise. (SK)
STR-validator: an open source platform for validation and process control.
Hansson, Oskar; Gill, Peter; Egeland, Thore
2014-11-01
This paper addresses two problems faced when short tandem repeat (STR) systems are validated for forensic purposes: (1) validation is extremely time consuming and expensive, and (2) there is strong consensus about what to validate but not how. The first problem is solved by powerful data processing functions to automate calculations. Utilising an easy-to-use graphical user interface, strvalidator (hereafter referred to as STR-validator) can greatly increase the speed of validation. The second problem is exemplified by a series of analyses, and subsequent comparison with published material, highlighting the need for a common validation platform. If adopted by the forensic community STR-validator has the potential to standardise the analysis of validation data. This would not only facilitate information exchange but also increase the pace at which laboratories are able to switch to new technology. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Rapid fabrication of microfluidic chips based on the simplest LED lithography
NASA Astrophysics Data System (ADS)
Li, Yue; Wu, Ping; Luo, Zhaofeng; Ren, Yuxuan; Liao, Meixiang; Feng, Lili; Li, Yuting; He, Liqun
2015-05-01
Microfluidic chips are generally fabricated by a soft lithography method employing commercial lithography equipment. These heavy machines require a stringent room environment and high lamp power, and their cost remains too high for most ordinary laboratories. Here we present a novel microfluidics fabrication method utilizing a portable ultraviolet (UV) LED as an alternative UV source for photolithography. With this approach, we can reproduce several common microchannels just as conventional commercial exposure machines do, and both the verticality of the channel sidewall and the lithography resolution proved acceptable. Further microfluidics applications such as mixing, blood typing and microdroplet generation were implemented to validate the practicability of the chips. This simple but innovative method dramatically decreases the cost and requirements of chip fabrication and may prove more accessible to ordinary laboratories.
Classical Statistics and Statistical Learning in Imaging Neuroscience
Bzdok, Danilo
2017-01-01
Brain-imaging research has predominantly generated insight by means of classical statistics, including regression-type analyses and null-hypothesis testing using the t-test and ANOVA. In recent years, statistical learning methods have enjoyed increasing popularity, especially for applications to rich and complex data, including cross-validated out-of-sample prediction using pattern classification and sparsity-inducing regression. This concept paper discusses the implications of inferential justifications and algorithmic methodologies in common data analysis scenarios in neuroimaging. It retraces how classical statistics and statistical learning originated from different historical contexts, build on different theoretical foundations, make different assumptions, and evaluate different outcome metrics to permit differently nuanced conclusions. The present considerations should help reduce current confusion between model-driven classical hypothesis testing and data-driven learning algorithms for investigating the brain with imaging techniques. PMID:29056896
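The contrast drawn above can be made concrete on one dataset: a t-test asks whether group means differ (inference), while a cross-validated classifier asks how well group membership can be predicted out of sample (learning). A toy sketch on simulated "activation" values (not neuroimaging data; the effect size and sample size are invented):

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200
# One simulated "voxel" whose mean differs modestly between two groups.
group = np.repeat([0, 1], n // 2)
activation = rng.normal(loc=0.4 * group, scale=1.0, size=n)

# Classical statistics: null-hypothesis test on the group means.
t, p = ttest_ind(activation[group == 0], activation[group == 1])
print(f"t = {t:.2f}, p = {p:.4f}")

# Statistical learning: cross-validated out-of-sample prediction accuracy.
acc = cross_val_score(LogisticRegression(), activation.reshape(-1, 1),
                      group, cv=5).mean()
print(f"5-fold CV accuracy = {acc:.2f}")
```

The two numbers answer different questions: a tiny p-value can coexist with near-chance predictive accuracy, which is one source of the confusion the paper addresses.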
Consolidated View on Space Software Engineering Problems - An Empirical Study
NASA Astrophysics Data System (ADS)
Silva, N.; Vieira, M.; Ricci, D.; Cotroneo, D.
2015-09-01
Independent software verification and validation (ISVV) has been a key process for engineering quality assessment for decades, and is considered in several international standards. The “European Space Agency (ESA) ISVV Guide” is used in the European space market to drive ISVV tasks and plans, and to select applicable tasks and techniques. Software artefacts clearly have room for improvement, given the number of issues found during ISVV tasks. This article presents the analysis of a large set of ISVV issues originating from three different ESA missions, amounting to more than 1000 issues. The study presents the main types, triggers and impacts of the ISVV issues found, and sets the path for a global software engineering improvement based on the most common deficiencies identified in space projects.
Plant-derived therapeutics for the treatment of metabolic syndrome.
Graf, Brittany L; Raskin, Ilya; Cefalu, William T; Ribnicky, David M
2010-10-01
Metabolic syndrome is defined as a set of coexisting metabolic disorders that increase an individual's likelihood of developing type 2 diabetes, cardiovascular disease and stroke. Medicinal plants, some of which have been used for thousands of years, serve as an excellent source of bioactive compounds for the treatment of metabolic syndrome because they contain a wide range of phytochemicals with diverse metabolic effects. In order for botanicals to be effectively used against metabolic syndrome, however, botanical preparations must be characterized and standardized through the identification of their active compounds and respective modes of action, followed by validation in controlled clinical trials with clearly defined endpoints. This review assesses examples of commonly known and partially characterized botanicals to describe specific considerations for the phytochemical, preclinical and clinical characterization of botanicals associated with metabolic syndrome.
Validating soil phosphorus routines in the SWAT model
USDA-ARS?s Scientific Manuscript database
Phosphorus transfer from agricultural soils to surface waters is an important environmental issue. Commonly used models like SWAT have not always been updated to reflect improved understanding of soil P transformations and transfer to runoff. Our objective was to validate the ability of the P routin...
Moving beyond Traditional Methods of Survey Validation
ERIC Educational Resources Information Center
Maul, Andrew
2017-01-01
In his focus article, "Rethinking Traditional Methods of Survey Validation," published in this issue of "Measurement: Interdisciplinary Research and Perspectives," Andrew Maul wrote that it is commonly believed that self-report, survey-based instruments can be used to measure a wide range of psychological attributes, such as…
Gupta, V; Vinay, D G; Rafiq, S; Kranthikumar, M V; Janipalli, C S; Giambartolomei, C; Evans, D M; Mani, K R; Sandeep, M N; Taylor, A E; Kinra, S; Sullivan, R M; Bowen, L; Timpson, N J; Smith, G D; Dudbridge, F; Prabhakaran, D; Ben-Shlomo, Y; Reddy, K S; Ebrahim, S; Chandak, G R
2012-02-01
Evaluation of the association of 31 common single nucleotide polymorphisms (SNPs) with fasting glucose, fasting insulin, HOMA-beta cell function (HOMA-β), HOMA-insulin resistance (HOMA-IR) and type 2 diabetes in the Indian population. We genotyped 3,089 sib pairs recruited in the Indian Migration Study from four cities in India (Lucknow, Nagpur, Hyderabad and Bangalore) for 31 SNPs in 24 genes previously associated with type 2 diabetes in European populations. We conducted within-sib-pair analysis for type 2 diabetes and its related quantitative traits. The risk-allele frequencies of all the SNPs were comparable with those reported in western populations. We demonstrated significant associations of CXCR4 (rs932206), CDKAL1 (rs7756992) and TCF7L2 (rs7903146, rs12255372) with fasting glucose, with β values of 0.007 (p = 0.05), 0.01 (p = 0.01), 0.007 (p = 0.05), 0.01 (p = 0.003) and 0.08 (p = 0.01), respectively. Variants in NOTCH2 (rs10923931), TCF-2 (also known as HNF1B) (rs757210), ADAM30 (rs2641348) and CDKN2A/B (rs10811661) significantly predicted fasting insulin, with β values of -0.06 (p = 0.04), 0.05 (p = 0.05), -0.08 (p = 0.01) and -0.08 (p = 0.02), respectively. For HOMA-IR, we detected associations with TCF-2, ADAM30 and CDKN2A/B, with β values of 0.05 (p = 0.04), -0.07 (p = 0.03) and -0.08 (p = 0.02), respectively. We also found significant associations of ADAM30 (β = -0.05; p = 0.01) and CDKN2A/B (β = -0.05; p = 0.03) with HOMA-β. THADA variant (rs7578597) was associated with type 2 diabetes (OR 1.5; 95% CI 1.04, 2.22; p = 0.03). We validated the association of seven established loci with intermediate traits related to type 2 diabetes in an Indian population using a design resistant to population stratification.
Gurnani, Ashita S.; Saurman, Jessica L.; Chapman, Kimberly R.; Steinberg, Eric G.; Martin, Brett; Chaisson, Christine E.; Mez, Jesse; Tripodis, Yorghos; Stern, Robert A.
2016-01-01
Two of the most commonly used methods to assess memory functioning in studies of cognitive aging and dementia are story memory and list learning tests. We hypothesized that the most commonly used story memory test, Wechsler's Logical Memory, would generate more pronounced practice effects than a well validated but less common list learning test, the Neuropsychological Assessment Battery (NAB) List Learning test. Two hundred eighty-seven older adults, ages 51 to 100 at baseline, completed both tests as part of a larger neuropsychological test battery on an annual basis. Up to five years of recall scores from participants who were diagnosed as cognitively normal (n = 96) or with mild cognitive impairment (MCI; n = 72) or Alzheimer's disease (AD; n = 121) at their most recent visit were analyzed with linear mixed effects regression to examine the interaction between the type of test and the number of times exposed to the test. Other variables, including age at baseline, sex, education, race, time (years) since baseline, and clinical diagnosis, were also entered as fixed effects predictor variables. The results indicated that both tests produced significant practice effects in controls and MCI participants; in contrast, participants with AD declined or remained stable. However, for the delayed, but not the immediate, recall condition, Logical Memory generated more pronounced practice effects than NAB List Learning (b = 0.16, p < .01 for controls). These differential practice effects were moderated by clinical diagnosis, such that controls and MCI participants, but not participants with AD, improved more on Logical Memory delayed recall than on NAB List Learning delayed recall over five annual assessments.
Because the Logical Memory test is ubiquitous in cognitive aging and neurodegenerative disease research, its tendency to produce marked practice effects—especially on the delayed recall condition—suggests a threat to its validity as a measure of new learning, an essential construct for dementia diagnosis. PMID:27711147
IN SITU ESTIMATES OF FOREST LAI FOR MODIS DATA VALIDATION
Satellite remote sensor data are commonly used to assess ecosystem conditions through synoptic monitoring of terrestrial vegetation extent, biomass, and seasonal dynamics. Two commonly used vegetation indices that can be derived from various remote sensor systems include the Norm...
The Intuitive Eating Scale: Development and Preliminary Validation
ERIC Educational Resources Information Center
Hawks, Steven; Merrill, Ray M.; Madanat, Hala N.
2004-01-01
This article describes the development and validation of an instrument designed to measure the concept of intuitive eating. To ensure face and content validity for items used in the Likert-type Intuitive Eating Scale (IES), content domain was clearly specified and a panel of experts assessed the validity of each item. Based on responses from 391…
Cho, Minwoo; Kim, Jee Hyun; Kong, Hyoun Joong; Hong, Kyoung Sup; Kim, Sungwan
2018-05-01
The colonoscopy adenoma detection rate depends largely on physician experience and skill, and overlooked colorectal adenomas could develop into cancer. This study assessed a system that detects polyps and summarizes meaningful information from colonoscopy videos. One hundred thirteen consecutive patients had colonoscopy videos prospectively recorded at the Seoul National University Hospital. Informative video frames were extracted using a MATLAB support vector machine (SVM) model and classified as bleeding, polypectomy, tool, residue, thin wrinkle, folded wrinkle, or common. Thin wrinkle, folded wrinkle, and common frames were reanalyzed using SVM for polyp detection. The SVM model was applied hierarchically for effective classification and optimization of the SVM. The mean classification accuracy according to type was over 93%; sensitivity was over 87%. The mean sensitivity for polyp detection was 82.1%, and the positive predictive value (PPV) was 39.3%. Polyps detected using the system were larger (6.3 ± 6.4 vs. 4.9 ± 2.5 mm; P = 0.003) and more often pedunculated (Yamada type III, 10.2 vs. 0%; P < 0.001; Yamada type IV, 2.8 vs. 0%; P < 0.001) than polyps missed by the system. There were no statistically significant differences in polyp distribution or histology between the groups. Informative frames and suspected polyps were presented on a timeline. This summary was evaluated using the system usability scale questionnaire; 89.3% of participants expressed positive opinions. We developed and verified a system to extract meaningful information from colonoscopy videos. Although further improvement and validation of the system are needed, the proposed system is useful for physicians and patients.
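"Applied hierarchically" above means a cascade: a first classifier routes frames by type, and only the frames of interest are passed to a second classifier for polyp detection. A loose structural sketch of that cascade on synthetic features (scikit-learn rather than MATLAB, invented feature vectors and labels, and none of the study's actual preprocessing):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Synthetic "frame features": 300 frames x 5 features.
X = rng.normal(size=(300, 5))
frame_type = (X[:, 0] > 0).astype(int)  # 1 = wrinkle/common frame, 0 = other
# Polyps only occur in the frames that stage 2 actually sees.
has_polyp = ((X[:, 1] > 0) & (frame_type == 1)).astype(int)

# Stage 1: classify frame type; stage 2: polyp detection on routed frames only.
stage1 = SVC().fit(X, frame_type)
mask = stage1.predict(X) == 1
stage2 = SVC().fit(X[mask], has_polyp[mask])

print("frames routed to stage 2:", int(mask.sum()),
      "| predicted polyp frames:", int(stage2.predict(X[mask]).sum()))
```

Fitting each stage only on the frames it will actually see at run time is the point of the hierarchy: it keeps the polyp detector from being swamped by bleeding, tool, and residue frames.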
A novel type of very long baseline astronomical intensity interferometer
NASA Astrophysics Data System (ADS)
Borra, Ermanno F.
2013-12-01
This article presents a novel type of very long baseline astronomical interferometer that uses the fluctuations, as a function of time, of the intensity measured by a quadratic detector, which is a common type of astronomical detector. The theory on which the technique is based is validated by laboratory experiments. Its principal advantage comes from the fact that the angular structure of an astronomical object is simply determined from the visibility of the minima of the spectrum of the intensity fluctuations measured by the detector, as a function of the frequency of the fluctuations, while keeping the spacing between mirrors constant. This would allow a simple setup capable of high angular resolution because it could use an extremely large baseline. Another major advantage is that it allows a more efficient use of telescope time, because observations at a single baseline are sufficient, whereas amplitude and intensity interferometers need several observations at different baselines. The fact that one does not have to move the telescopes would also allow detecting faster time variations, because having to move the telescopes sets a lower limit on the time variations that can be detected. The technique uses wave interaction effects and thus has some characteristics in common with intensity interferometry. A disadvantage of the technique, as in intensity interferometry, is that it needs strong sources when observing at high frequencies (e.g., the visible). This is a minor disadvantage in the radio region. At high frequencies, this disadvantage is mitigated by the fact that, as in intensity interferometry, the requirements on the optical quality of the mirrors are far less severe than in amplitude interferometry, so that poor-quality large reflectors (e.g., Cherenkov telescopes) can be used in the optical region.
Ferrero, Giulio; Cordero, Francesca; Tarallo, Sonia; Arigoni, Maddalena; Riccardo, Federica; Gallo, Gaetano; Ronco, Guglielmo; Allasia, Marco; Kulkarni, Neha; Matullo, Giuseppe; Vineis, Paolo; Calogero, Raffaele A; Pardini, Barbara; Naccarati, Alessio
2018-01-09
The role of non-coding RNAs in different biological processes and diseases is continuously expanding. Next-generation sequencing together with the parallel improvement of bioinformatics analyses allows the accurate detection and quantification of an increasing number of RNA species. With the aim of exploring new potential biomarkers for disease classification, a clear overview of the expression levels of common/unique small RNA species among different biospecimens is necessary. However, except for miRNAs in plasma, there are no substantial indications about the pattern of expression of various small RNAs in multiple specimens among healthy humans. By analysing small RNA-sequencing data from 243 samples, we have identified and compared the most abundantly and uniformly expressed miRNAs and non-miRNA species of comparable size with the library preparation in four different specimens (plasma exosomes, stool, urine, and cervical scrapes). Eleven miRNAs were commonly detected among all different specimens while 231 miRNAs were globally unique across them. Classification analysis using these miRNAs provided an accuracy of 99.6% to recognize the sample types. piRNAs and tRNAs were the most represented non-miRNA small RNAs detected in all specimen types that were analysed, particularly in urine samples. With the present data, the most uniformly expressed small RNAs in each sample type were also identified. A signature of small RNAs for each specimen could represent a reference gene set in validation studies by RT-qPCR. Overall, the data reported hereby provide an insight of the constitution of the human miRNome and of other small non-coding RNAs in various specimens of healthy individuals.
Hedlund, Lena; Gyllensten, Amanda Lundvik; Waldegren, Tomas; Hansson, Lars
2016-05-01
Motor disturbances and disturbed self-recognition are common features that affect mobility in persons with schizophrenia spectrum disorder and bipolar disorder. Physiotherapists in Scandinavia assess and treat movement difficulties in persons with severe mental illness. The Body Awareness Scale Movement Quality and Experience (BAS MQ-E) is a new and shortened version of the commonly used Body Awareness Scale-Health (BAS-H). The purpose of this study was to investigate the inter-rater reliability and concurrent validity of the BAS MQ-E in persons with severe mental illness. Concurrent validity was examined by investigating the relationships with neurological soft signs, alexithymia, fatigue, anxiety, and mastery. Sixty-two persons with severe mental illness participated in the study. The results showed satisfactory inter-rater reliability (n = 53) and concurrent validity (n = 62) with neurological soft signs, especially cognitively and perceptually based signs. There was also concurrent validity linked to physical fatigue and aspects of alexithymia. The scores on the BAS MQ-E were in general higher for persons with schizophrenia compared with persons with other diagnoses within the schizophrenia spectrum disorders and bipolar disorder. The clinical implications are presented in the discussion.
Dobbs, Thomas; Hutchings, Hayley A; Whitaker, Iain S
2017-09-24
Skin cancer is the most common malignancy worldwide, often occurring on the face, where the cosmetic outcome of treatment is paramount. A number of skin cancer-specific patient-reported outcome measures (PROMs) exist; however, none adequately considers the difference in type of reconstruction from a patient's point of view. The aim of this study is to 'anglicise' (to UK English) a recently developed US PROM for facial skin cancer (the FACE-Q Skin Cancer Module) and to validate this UK version of the PROM. The validation will also involve an assessment of the items for relevance to facial reconstruction patients. This will either validate this new measure for use in the clinical care and research of various facial reconstructive options, or provide evidence that a more specific PROM is required. This is a prospective validation study of the FACE-Q Skin Cancer Module in a UK facial skin cancer population with a specific focus on the difference between types of reconstruction. The face and content validity of the FACE-Q questionnaire will initially be assessed by a review process involving patients, skin cancer specialists and methodologists. An assessment will be made of whether questions are relevant and whether any questions are missing. Initial validation will then be carried out by recruiting a cohort of 100 study participants with skin cancer of the face pre-operatively. All eligible patients will be invited to complete the questionnaire preoperatively and postoperatively. Psychometric analysis will be performed to test validity, reliability and responsiveness to change. Subgroup analysis will be performed on patients undergoing different forms of reconstruction post-excision of their skin cancer. This study has been approved by the West Midlands, Edgbaston Research Ethics Committee (Ref 16/WM/0445). All personal data collected will be anonymised and patient-specific data will only be reported in terms of group demographics. 
Identifiable data collected will include the patient name and date of birth. Other collected personal data will include their diagnosis, treatment performed, method of reconstruction and complications. A unique identifier will be applied to each patient so that pretreatment and post-treatment questionnaire results can be compared. All data acquisition and storage will be in accordance with the Data Protection Act 1998. Following completion of the study, all records will be stored in the Abertawe Bro Morgannwg University (ABMU) Health Board archive facility. Only qualified personnel working on the project will have access to the data. The outputs from this work will be published as widely as possible in peer-reviewed journals and it is our aim to make this open access. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Mitchell, Travis D.; Urli, Kristina E.; Breitenbach, Jacques; Yelverton, Chris
2007-01-01
Objective: This study aimed to evaluate the validity of the sacral base pressure test in diagnosing sacroiliac joint dysfunction. It also determined the predictive powers of the test in determining which type of sacroiliac joint dysfunction was present. Methods: This was a double-blind experimental study with 62 participants. The results from the sacral base pressure test were compared against a cluster of previously validated tests of sacroiliac joint dysfunction to determine its validity and predictive powers. The external rotation of the feet, occurring during the sacral base pressure test, was measured using a digital inclinometer. Results: There was no statistically significant difference in the results of the sacral base pressure test between the types of sacroiliac joint dysfunction. In terms of validity, the sacral base pressure test was useful in identifying positive cases of sacroiliac joint dysfunction. It was fairly helpful in correctly diagnosing patients with negative test results; however, it had only "slight" agreement with the diagnosis on the κ interpretation scale. Conclusions: In this study, the sacral base pressure test was not a valid test for determining the presence of sacroiliac joint dysfunction or the type of dysfunction present. Further research comparing the agreement of the sacral base pressure test or other sacroiliac joint dysfunction tests with a criterion standard of diagnosis is necessary. PMID:19674694
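The "slight" agreement mentioned above refers to the Landis-Koch interpretation of Cohen's κ, which compares the observed agreement between the test and the reference cluster against the agreement expected by chance. A minimal sketch with hypothetical 2×2 counts (the study's raw agreement table is not given here):

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both raters positive, b = test+/ref-, c = test-/ref+, d = both negative."""
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical counts for 62 participants (illustration only)
k = cohens_kappa(30, 15, 10, 7)
print(round(k, 2))  # → 0.07, below 0.20, i.e. 'slight' on the Landis-Koch scale
```

Note that a test can identify many positives correctly (high raw agreement on positives) and still earn a near-zero κ when chance agreement is high, which is the pattern this abstract describes.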
Reliability and validity of the adolescent health profile-types.
Riley, A W; Forrest, C B; Starfield, B; Green, B; Kang, M; Ensminger, M
1998-08-01
The purpose of this study was to demonstrate the preliminary reliability and validity of a set of 13 profiles of adolescent health that describe distinct patterns of health and health service requirements on four domains of health. Reliability and validity were tested in four ethnically diverse population samples of urban and rural youths aged 11 to 17 years in public schools (N = 4,066). The reliability of the classification procedure and construct validity were examined in terms of the predicted and actual distributions of age, gender, race, socioeconomic status, and family type. School achievement, medical conditions, and the proportion of youths with a psychiatric disorder also were examined as tests of construct validity. The classification method was shown to produce consistent results across the four populations in terms of proportions of youths assigned with specific sociodemographic characteristics. Variations in health described by specific profiles showed expected relations to sociodemographic characteristics, family structure, school achievement, medical disorders, and psychiatric disorders. This taxonomy of health profile-types appears to effectively describe a set of patterns that characterize adolescent health. The profile-types provide a unique and practical method for identifying subgroups having distinct needs for health services, with potential utility for health policy and planning. Such integrative reporting methods are critical for more effective utilization of health status instruments in health resource planning and policy development.
Leveraging molecular datasets for biomarker-based clinical trial design in glioblastoma.
Tanguturi, Shyam K; Trippa, Lorenzo; Ramkissoon, Shakti H; Pelton, Kristine; Knoff, David; Sandak, David; Lindeman, Neal I; Ligon, Azra H; Beroukhim, Rameen; Parmigiani, Giovanni; Wen, Patrick Y; Ligon, Keith L; Alexander, Brian M
2017-07-01
Biomarkers can improve clinical trial efficiency, but designing and interpreting biomarker-driven trials require knowledge of relationships among biomarkers, clinical covariates, and endpoints. We investigated these relationships across genomic subgroups of glioblastoma (GBM) within our institution (DF/BWCC), validated results in The Cancer Genome Atlas (TCGA), and demonstrated potential impacts on clinical trial design and interpretation. We identified genotyped patients at DF/BWCC, and clinical associations across 4 common GBM genomic biomarker groups were compared along with overall survival (OS), progression-free survival (PFS), and survival post-progression (SPP). Significant associations were validated in TCGA. Biomarker-based clinical trials were simulated using various assumptions. Epidermal growth factor receptor (EGFR)(+) and p53(-) subgroups were more likely isocitrate dehydrogenase (IDH) wild-type. Phosphatidylinositol-3 kinase (PI3K)(+) patients were older, and patients with O6-DNA methylguanine-methyltransferase (MGMT)-promoter methylation were more often female. OS, PFS, and SPP were all longer for IDH mutant and MGMT methylated patients, but there was no independent prognostic value for other genomic subgroups. PI3K(+) patients had shorter PFS among IDH wild-type tumors, however, and no DF/BWCC long-term survivors were either EGFR(+) (0% vs 7%, P = .014) or p53(-) (0% vs 10%, P = .005). The degree of biomarker overlap impacted the efficiency of Bayesian-adaptive clinical trials, while PFS and OS distribution variation had less impact. Biomarker frequency was proportionally associated with sample size in all designs. We identified several associations between GBM genomic subgroups and clinical or molecular prognostic covariates and validated known prognostic factors in all survival periods. These results are important for biomarker-based trial design and interpretation of biomarker-only and nonrandomized trials. © The Author(s) 2017. 
Published by Oxford University Press on behalf of the Society for Neuro-Oncology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Sedova, Petra; Brown, Robert D; Zvolsky, Miroslav; Kadlecova, Pavla; Bryndziar, Tomas; Volny, Ondrej; Weiss, Viktor; Bednarik, Josef; Mikulik, Robert
2015-09-01
Stroke is a common cause of mortality and morbidity in Eastern Europe. However, detailed epidemiological data are not available. The National Registry of Hospitalized Patients (NRHOSP) is a nationwide registry of prospectively collected data regarding each hospitalization in the Czech Republic since 1998. As a first step in the evaluation of stroke epidemiology in the Czech Republic, we validated stroke cases in NRHOSP. Any hospital in the Czech Republic with a sufficient number of cases was included. We randomly selected 10 of all 72 hospitals and then 50 patients from each hospital in 2011 stratified according to stroke diagnosis (International Classification of Diseases Tenth Revision [ICD-10] cerebrovascular codes I60, I61, I63, I64, and G45). Discharge summaries from hospitalization were reviewed independently by 2 reviewers and compared with NRHOSP for accuracy of discharge diagnosis. Any disagreements were adjudicated by a third reviewer. Of 500 requested discharge summaries, 484 (97%) were available. Validators confirmed diagnosis in NRHOSP as follows: transient ischemic attack (TIA) or any stroke type in 82% (95% confidence interval [CI], 79-86), any stroke type in 85% (95% CI, 81-88), I63/cerebral infarction in 82% (95% CI, 74-89), I60/subarachnoid hemorrhage in 91% (95% CI, 85-97), I61/intracerebral hemorrhage in 91% (95% CI, 85-96), and G45/TIA in 49% (95% CI, 39-58). The most important reason for disagreement was use of I64/stroke, not specified for patients with I63. The accuracy of coding of the stroke ICD-10 codes for subarachnoid hemorrhage (I60) and intracerebral hemorrhage (I61) included in a Czech Republic national registry was high. The accuracy of coding for I63/cerebral infarction was somewhat lower than for ICH and SAH. Copyright © 2015 National Stroke Association. Published by Elsevier Inc. All rights reserved.
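The validation percentages above are binomial proportions reported with 95% confidence intervals. A minimal sketch of the normal-approximation (Wald) interval, using a hypothetical confirmed count chosen only to be consistent with the reported "any stroke type" figure of 85% (81-88):

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)   # z = 1.96 for a 95% interval
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical: 409 of the 484 available summaries confirmed (illustration only)
p, lo, hi = wald_ci(409, 484)
print(f"{p:.0%} (95% CI, {lo:.0%}-{hi:.0%})")  # → 85% (95% CI, 81%-88%)
```

For proportions near 0 or 1, or small n, a Wilson or exact (Clopper-Pearson) interval is usually preferred over the Wald form sketched here.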
A qualitative examination of the content validity of the EQ-5D-5L in patients with type 2 diabetes.
Matza, Louis S; Boye, Kristina S; Stewart, Katie D; Curtis, Bradley H; Reaney, Matthew; Landrian, Amanda S
2015-12-01
The EQ-5D is frequently used to derive utilities for patients with type 2 diabetes (T2D). Despite widely available quantitative psychometric data on the EQ-5D, little is known about content validity in this population. Thus, the purpose of this qualitative study was to examine content validity of the EQ-5D in patients with T2D. Patients with T2D in the UK completed concept elicitation interviews, followed by administration of the EQ-5D-5L and cognitive interviewing focused on the instrument's relevance, clarity, and comprehensiveness. A total of 25 participants completed interviews (52.0 % male; mean age = 53.5 years). Approximately half (52 %) reported that the EQ-5D-5L was relevant to their experience with T2D. When asked if each individual item was relevant to their experience with T2D, responses varied widely (24.0 % said the self-care item was relevant; 68.0 % said the anxiety/depression item was relevant). Participants frequently said items were not relevant to themselves, but could be relevant to patients with more severe diabetes. Most participants (92.0 %) reported that T2D and/or its treatment/monitoring requirements had an impact on their quality of life that was not captured by the EQ-5D-5L. Common missing concepts included food awareness/restriction (n = 13, 52.0 %); activities (n = 11, 44.0 %); emotional functioning other than depression/anxiety (n = 8, 32.0 %); and social/relationship functioning (n = 8, 32.0 %). The results highlight strengths and potential limitations of the EQ-5D-5L, including missing content that could be important for some patients with T2D. Suggestions for addressing limitations are provided.
Validating the Interpretations of PISA and TIMSS Tasks: A Rating Study
ERIC Educational Resources Information Center
Rindermann, Heiner; Baumeister, Antonia E. E.
2015-01-01
Scholastic tests regard cognitive abilities to be domain-specific competences. However, high correlations between competences indicate either high task similarity or a dependence on common factors. The present rating study examined the validity of 12 Programme for International Student Assessment (PISA) and Third or Trends in International…
Using Evaluation to Guide and Validate Improvements to the Utah Master Naturalist Program
ERIC Educational Resources Information Center
Larese-Casanova, Mark
2015-01-01
Integrating evaluation into an Extension program offers multiple opportunities to understand program success through achieving program goals and objectives, delivering programming using the most effective techniques, and refining program audiences. It is less common that evaluation is used to guide and validate the effectiveness of program…
Validity of the American Sign Language Discrimination Test
ERIC Educational Resources Information Center
Bochner, Joseph H.; Samar, Vincent J.; Hauser, Peter C.; Garrison, Wayne M.; Searls, J. Matt; Sanders, Cynthia A.
2016-01-01
American Sign Language (ASL) is one of the most commonly taught languages in North America. Yet, few assessment instruments for ASL proficiency have been developed, none of which have adequately demonstrated validity. We propose that the American Sign Language Discrimination Test (ASL-DT), a recently developed measure of learners' ability to…
40 CFR 72.90 - Annual compliance certification report.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) using a common stack, were monitored or accounted for through the missing data procedures and reported in the quarterly monitoring reports, including whether conditionally valid data, as defined in § 72.2, were reported in the quarterly report. If conditionally valid data were reported, the owner or operator...
Validating an Asthma Case Detection Instrument in a Head Start Sample
ERIC Educational Resources Information Center
Bonner, Sebastian; Matte, Thomas; Rubin, Mitchell; Sheares, Beverley J.; Fagan, Joanne K.; Evans, David; Mellins, Robert B.
2006-01-01
Although specific tests screen children in preschool programs for vision, hearing, and dental conditions, there are no published validated instruments to detect preschool-age children with asthma, one of the most common pediatric chronic conditions affecting children in economically disadvantaged communities of color. As part of an asthma…
Validating the Type D personality construct in Chinese patients with coronary heart disease.
Yu, Doris S F; Thompson, David R; Yu, Cheuk Man; Pedersen, Susanne S; Denollet, Johan
2010-08-01
Type D personality predicts poor prognosis in coronary heart disease (CHD), but little is known about Type D in non-Western cultures. We examined the (a) validity of the Type D construct and its assessment with the DS14 scale in the Chinese culture, (b) prevalence of Type D, and (c) gender vs. Type D discrepancies in depression/anxiety, among Chinese patients with CHD. Patients with CHD (N=326) completed the Chinese version of the DS14. The NEO Five Factor Inventory (NEO-FFI), Hospital Anxiety and Depression Scale (HADS), and Stress Symptom Checklist (SSC) were administered to subsamples to establish construct and discriminant validity. Administration of the DS14, HADS, and SSC was repeated at 1 month after hospital discharge in 66 patients, and stability of the DS14 was examined in another subsample of 100 patients. The theoretical structure of the Type D construct in the Chinese culture was supported (χ²/df=2.89, root mean square error of approximation=0.08, normed fit index=0.91, non-normed fit index=0.91, comparative fit index=0.93). The Negative Affectivity (NA) and Social Inhibition (SI) subscales of the DS14 in the entire sample were internally consistent (Cronbach's alpha=0.89/0.81), measured stable traits (3-month test-retest ICC=0.76/0.74), and correlated significantly with the neuroticism (NA/neuroticism, r=0.78, P<.001) and extraversion subscales (SI/extraversion, r=-0.64, P<.001) of the NEO-FFI, respectively. The prevalence of Type D personality was 31%. Type D was not related to transient emotional states. However, Chinese patients with a Type D personality were at increased concurrent risk of anxiety (P=.002) and depression (P=.016). Type D personality is a cross-culturally valid construct, is associated with an increased risk of anxiety and depression, and deserves prompt attention in estimating the prognostic risk of Chinese CHD patients. Copyright 2010 Elsevier Inc. All rights reserved.
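The internal-consistency figures quoted for the NA and SI subscales (Cronbach's α = 0.89/0.81) come from the standard formula α = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch with toy item data (not the DS14 responses):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score lists
    (one inner list per item, scores aligned across respondents)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(var(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]  # per-respondent total scores
    return k / (k - 1) * (1 - item_vars / var(totals))

# Toy data: 3 items, 4 respondents (illustration only)
scores = [[4, 2, 3, 1], [5, 2, 4, 1], [4, 3, 3, 2]]
print(round(cronbach_alpha(scores), 2))  # → 0.93
```

Alpha rises when items covary strongly relative to their individual variances, which is why it is read as a measure of internal consistency of a subscale.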
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strons, Philip; Bailey, James L.; Davis, John
2016-03-01
In this work, we apply computational fluid dynamics (CFD) to model airflow and particulate transport. This modeling is then compared with field validation studies to both inform and validate the modeling assumptions. Based on the results of field tests, modeling assumptions and boundary conditions are refined, and the process is repeated until the results are found to be reliable with a high level of confidence.
Validation of prospective portion size and latency to eat as measures of reactivity to snack foods.
van den Akker, Karolien; Bongers, Peggy; Hanssen, Imke; Jansen, Anita
2017-09-01
In experimental studies that investigate reactivity to the sight and smell of highly palatable snack foods, ad libitum food intake is commonly used as a behavioural outcome measure. However, this measure has several drawbacks. The current study investigated two intake-related measures not yet validated for food cue exposure research involving common snack foods: prospective portion size and latency to eat. We aimed to validate these measures by assessing prospective portion size and eating latencies in female undergraduate students who either underwent snack food exposure or a control exposure. Furthermore, we correlated prospective portion size and latency to eat with commonly used measures of food cue reactivity, i.e., self-reported desire to eat, salivation, and ad libitum food intake. Results showed increases in prospective portion size after food cue exposure but not after control exposure. Latency to eat did not differ between the two conditions. Prospective portion size correlated positively with desire to eat and food intake, and negatively with latency to eat. Latency to eat was also negatively correlated with desire to eat and food intake. It is concluded that the current study provides initial evidence for the prospective portion size task as a valid measure of reactivity to snack foods in a Dutch female and mostly healthy weight student population. Copyright © 2017 Elsevier Ltd. All rights reserved.
A high-throughput fluorescence polarization assay for inhibitors of gyrase B.
Glaser, Bryan T; Malerich, Jeremiah P; Duellman, Sarah J; Fong, Julie; Hutson, Christopher; Fine, Richard M; Keblansky, Boris; Tang, Mary J; Madrid, Peter B
2011-02-01
DNA gyrase, a type II topoisomerase that introduces negative supercoils into DNA, is a validated antibacterial drug target. The holoenzyme is composed of 2 subunits, gyrase A (GyrA) and gyrase B (GyrB), which form a functional A₂B₂ heterotetramer required for bacterial viability. A novel fluorescence polarization (FP) assay has been developed and optimized to detect inhibitors that bind to the adenosine triphosphate (ATP) binding domain of GyrB. Guided by the crystal structure of the natural product novobiocin bound to GyrB, a novel novobiocin-Texas Red probe (Novo-TRX) was designed and synthesized for use in a high-throughput FP assay. The binding kinetics of the interaction of Novo-TRX with GyrB from Francisella tularensis has been characterized, as well as the effect of common buffer additives on the interaction. The assay was developed into a 21-µL, 384-well assay format and has been validated for use in high-throughput screening against a collection of Food and Drug Administration-approved compounds. The assay performed with an average Z' factor of 0.80 and was able to identify GyrB inhibitors from a screening library.
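The Z' factor reported above is the standard screening-assay quality statistic of Zhang et al. (1999), computed from the means and standard deviations of positive- and negative-control wells; values above about 0.5 indicate an excellent assay. A minimal sketch with hypothetical polarization readings (not the study's data):

```python
import statistics

def z_prime(pos_controls, neg_controls):
    """Z'-factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    mp, mn = statistics.mean(pos_controls), statistics.mean(neg_controls)
    sp, sn = statistics.stdev(pos_controls), statistics.stdev(neg_controls)
    return 1 - 3 * (sp + sn) / abs(mp - mn)

# Hypothetical control wells (polarization readings in mP, illustration only)
pos = [200, 205, 198, 202, 195]
neg = [60, 58, 63, 61, 59]
print(round(z_prime(pos, neg), 2))  # → 0.88
```

Intuitively, Z' measures how many control-well standard deviations separate the positive and negative signal bands; a wide, tight separation leaves room to call hits reliably.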
Marijuana abstinence effects in marijuana smokers maintained in their home environment.
Budney, A J; Hughes, J R; Moore, B A; Novy, P L
2001-10-01
Although withdrawal symptoms are commonly reported by persons seeking treatment for marijuana dependence, the validity and clinical significance of a marijuana withdrawal syndrome has not been established. This controlled outpatient study examined the reliability and specificity of the abstinence effects that occur when daily marijuana users abruptly stop smoking marijuana. Twelve daily marijuana smokers were assessed on 16 consecutive days during which they smoked marijuana as usual (days 1-5), abstained from smoking marijuana (days 6-8), returned to smoking marijuana (days 9-13), and again abstained from smoking marijuana (days 14-16). An overall measure of withdrawal discomfort increased significantly during the abstinence phases and returned to baseline when marijuana smoking resumed. Craving for marijuana, decreased appetite, sleep difficulty, and weight loss reliably changed across the smoking and abstinence phases. Aggression, anger, irritability, restlessness, and strange dreams increased significantly during one abstinence phase, but not the other. Collateral observers confirmed participant reports of these symptoms. This study validated several specific effects of marijuana abstinence in heavy marijuana users, and showed they were reliable and clinically significant. These withdrawal effects appear similar in type and magnitude to those observed in studies of nicotine withdrawal.
Effects of dietary restriction on adipose mass and biomarkers of healthy aging in human.
Lettieri-Barbato, Daniele; Giovannetti, Esmeralda; Aquilano, Katia
2016-11-29
In developing countries the rise of obesity and obesity-related metabolic disorders, such as cardiovascular diseases and type 2 diabetes, reflects changes in lifestyle habits and wrong dietary choices. Dietary restriction (DR) regimens have been shown to extend health span and lifespan in many animal models, including primates. Identifying biomarkers predictive of clinical benefits of treatment is one of the primary goals of precision medicine. To monitor the clinical outcomes of DR interventions in humans, several biomarkers are commonly adopted. However, a validated link between the behavior of such biomarkers and the effects of DR is lacking at present. Through a systematic analysis of human intervention studies, we evaluated the effect size of DR (i.e. calorie restriction, very low calorie diet, intermittent fasting, alternate day fasting) on health-related biomarkers. We found that DR is effective in reducing total and visceral adipose mass and improving the inflammatory cytokine profile and the adiponectin/leptin ratio. By analysing the levels of canonical biomarkers of healthy aging, we also validated the changes of insulin, IGF-1 and IGFBP-1,2 as monitors of DR effects. Collectively, we developed a useful platform to evaluate human responses to dietary regimens low in calories.
Short arc orbit determination for altimeter calibration and validation on TOPEX/POSEIDON
NASA Technical Reports Server (NTRS)
Williams, B. G.; Christensen, E. J.; Yuan, D. N.; Mccoll, K. C.; Sunseri, R. F.
1993-01-01
TOPEX/POSEIDON (T/P) is a joint mission of the United States' National Aeronautics and Space Administration (NASA) and the French Centre National d'Etudes Spatiales (CNES), launched August 10, 1992. It carries two radar altimeters which alternately share a common antenna. There are two project-designated verification sites, a NASA site off the coast at Pt. Conception, CA and a CNES site near Lampedusa Island in the Mediterranean Sea. Altimeter calibration and validation for T/P is performed over these highly instrumented sites by comparing the spacecraft's altimeter radar range to a computed range based on in situ measurements, which include the estimated orbit position. This paper presents selected results of orbit determination over each of these sites to support altimeter verification. A short arc orbit determination technique is used to estimate a locally accurate position of T/P from less than one revolution of satellite laser ranging (SLR) data. This technique is relatively insensitive to gravitational and non-gravitational force modeling errors, as demonstrated by covariance analysis and by comparison to orbits determined from longer arcs of data and other tracking data types, such as Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) and Global Positioning System Demonstration Receiver (GPSDR) data.
A generalization of the power law distribution with nonlinear exponent
NASA Astrophysics Data System (ADS)
Prieto, Faustino; Sarabia, José María
2017-01-01
The power law distribution is usually used to fit data in the upper tail of the distribution. However, it is commonly not valid for modeling data over the whole range. In this paper, we present a new family of distributions, the so-called Generalized Power Law (GPL), which can be useful for modeling data over the whole range and possesses power law tails. To do that, we model the exponent of the power law using a non-linear function which depends on the data and two parameters. Then, we provide some basic properties and some specific models of that new family of distributions. After that, we study a relevant model of the family, with special emphasis on the quantile and hazard functions, and the corresponding estimation and testing methods. Finally, as empirical evidence, we study how debt is distributed across municipalities in Spain. We check that the power law model is only valid in the upper tail; we show analytically and graphically the competence of the new model with municipal debt data in the whole range; and we compare the new distribution with other well-known distributions including the Lognormal, the Generalized Pareto, the Fisk, the Burr type XII and the Dagum models.
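As context for the upper-tail fitting the abstract contrasts against: the exponent of a pure power-law tail is commonly estimated by maximum likelihood over the observations above a threshold x_min (the Clauset-Shalizi-Newman form). A minimal sketch against a synthetic Pareto sample; the municipal-debt data are not reproduced here:

```python
import math
import random

def pareto_alpha_mle(data, x_min):
    """MLE of the power-law (Pareto) density exponent for observations >= x_min:
    alpha_hat = 1 + n / sum(ln(x_i / x_min))."""
    tail = [x for x in data if x >= x_min]
    n = len(tail)
    return 1 + n / sum(math.log(x / x_min) for x in tail)

# Synthetic Pareto(alpha=2.5) sample via inverse-transform sampling (illustration only)
random.seed(0)
alpha, x_min = 2.5, 1.0
sample = [x_min * (1 - random.random()) ** (-1 / (alpha - 1)) for _ in range(10_000)]
est = pareto_alpha_mle(sample, x_min)
print(round(est, 2))  # close to 2.5 (standard error here is roughly 0.015)
```

In practice the estimate is sensitive to the choice of x_min, which is exactly the weakness the GPL family above is designed to avoid by modeling the whole range.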
Bongers, Peggy; Jansen, Anita
2016-01-01
In eating research, it is common practice to group people into different eater types, such as emotional, external and restrained eaters. This categorization is generally based on scores on self-report questionnaires. However, recent studies have started to raise questions about the validity of such questionnaires. In the realm of emotional eating, a considerable number of studies, both in the lab and in naturalistic settings, fail to demonstrate increased food intake in emotional situations in self-described emotional eaters. The current paper provides a review of experimental and naturalistic studies investigating the relationships between self-reported emotional eater status, mood, and food consumption. It is concluded that emotional eating scales lack predictive and discriminative validity; they cannot be assumed to measure accurately what they intend to measure, namely increased food intake in response to negative emotions. The review is followed by a discussion of alternative interpretations of emotional eating scores that have been suggested in the past few years, i.e., concerned eating, uncontrolled eating, a tendency to attribute overeating to negative affect, and cue-reactive eating. PMID:28008323
Saub, R; Locker, D; Allison, P
2008-09-01
To compare two methods of developing short forms of the Malaysian Oral Health Impact Profile (OHIP-M) measure. Cross-sectional data obtained using the long form of the OHIP-M were used to produce two types of OHIP-M short forms, derived using two different methods, namely regression and item frequency methods. The short version derived using the regression method is known as Reg-SOHIP(M) and that derived using the frequency method as Freq-SOHIP(M). Both short forms contained 14 items. The two forms were then compared in terms of their content, scores, reliability, validity and ability to distinguish between groups. Out of 14 items, only four were in common. The form derived from the frequency method contained more high-prevalence items and yielded higher scores than the form derived from the regression method. Both methods produced a reliable and valid measure. However, the frequency method produced a measure that was slightly better at distinguishing between groups. Regardless of the method used, both forms performed equally well when tested for their cross-sectional psychometric properties.
ERIC Educational Resources Information Center
Köksal, Mustafa Serdar; Ertekin, Pelin; Çolakoglu, Özgür Murat
2014-01-01
The purpose of this study is to investigate association of data collectors' differences with the differences in reliability and validity of scores regarding affective variables (motivation toward science learning and science attitude) that are measured by Likert-type scales. Four researchers trained in data collection and seven science teachers…
NASA Astrophysics Data System (ADS)
van Noort, Thomas; Achten, Peter; Plasmeijer, Rinus
We present a typical synergy between dynamic types (dynamics) and generalised algebraic datatypes (GADTs). The former provides a clean approach to integrating dynamic typing in a statically typed language. It allows values to be wrapped together with their type in a uniform package, deferring type unification until run time using a pattern match annotated with the desired type. The latter allows for the explicit specification of constructor types, so as to enforce their structural validity. In contrast to ADTs, GADTs are heterogeneous structures, since each constructor type is implicitly universally quantified. Unfortunately, pattern matching only enforces structural validity and does not provide instantiation information on polymorphic types. Consequently, functions that manipulate such values, such as a type-safe update function, are cumbersome due to boilerplate type representation administration. In this paper we focus on improving such functions by providing a new GADT annotation via a natural synergy with dynamics. We formally define the semantics of the annotation and touch on other novel applications of this technique, such as type dispatching and enforcing type equality invariants on GADT values.
MaNIDA: an operational infrastructure for shipborne data
NASA Astrophysics Data System (ADS)
Macario, Ana; Scientific MaNIDA Team
2013-04-01
The Marine Network for Integrated Data Access (MaNIDA) aims to build a sustainable e-Infrastructure to support discovery and re-use of data archived in a distributed network of data providers in Germany (see related abstracts in session ESSI1.2 and session ESSI2.2). Because one of the primary focuses of MaNIDA is the underway data acquired on board German academic research vessels, we will be addressing various issues related to cruise-level metadata, shiptrack navigation, sampling events conducted during the cruise (event logs), standardization of device-related (type, name, parameters) and place-related (gazetteer) vocabularies, QA/QC procedures (near-real-time and post-cruise validation, corrections, quality flags), as well as ingestion and management of contextual information (e.g. various types of cruise-related reports and project-related information). One of MaNIDA's long-term goals is to offer an integrative "one-stop-shop" framework for management and access of ship-related information based on international standards and interoperability. This access framework will be freely available and is intended for scientists, funding agencies and the public. The master "catalog" we are building currently contains information from 13 German academic research vessels and their respective cruises (to date ~1900 cruises, with an expected growth rate of ~150 cruises annually). Moreover, MaNIDA's operational infrastructure will additionally provide a direct pipeline to the SeaDataNet Cruise Summary Report Inventory, among others. In this presentation, we will focus on the extensions we are currently implementing to support automated acquisition and standardized transfer of various types of data from German research vessels to hosts on land. Our concept for nationwide common QA/QC procedures for various types of underway data (including a versioning concept) and common workflows will also be presented.
The "linking" of cruise-related information with quality-controlled data and data products (e.g., digital terrain models), publications, cruise-related reports, people and other contextual information will be additionally shown in the framework of a prototype for R.V. Polarstern.
NASA Astrophysics Data System (ADS)
Reed Espinosa, W.; Vanderlei Martins, J.; Remer, Lorraine A.; Puthukkudy, Anin; Orozco, Daniel; Dolgos, Gergely
2018-03-01
This work provides a synopsis of aerosol phase function (F11) and polarized phase function (F12) measurements made by the Polarized Imaging Nephelometer (PI-Neph) during the Studies of Emissions, Atmospheric Composition, Clouds and Climate Coupling by Regional Surveys (SEAC4RS) and the Deep Convection Clouds and Chemistry (DC3) field campaigns. In order to more easily explore this extensive dataset, an aerosol classification scheme is developed that identifies the different aerosol types measured during the deployments. This scheme makes use of ancillary data that include trace gases, chemical composition, aerodynamic particle size and geographic location, all independent of PI-Neph measurements. The PI-Neph measurements are then grouped according to their ancillary data classifications and the resulting scattering patterns are examined in detail. These results represent the first published airborne measurements of F11 and -F12/F11 for many common aerosol types. We then explore whether PI-Neph light-scattering measurements alone are sufficient to reconstruct the results of this ancillary data classification algorithm. Principal component analysis (PCA) is used to reduce the dimensionality of the multi-angle PI-Neph scattering data and the individual measurements are examined as a function of ancillary data classification. Clear clustering is observed in the PCA score space, corresponding to the ancillary classification results, suggesting that, indeed, a strong link exists between the angular-scattering measurements and the aerosol type or composition. Two techniques are used to quantify the degree of clustering and it is found that in most cases the results of the ancillary data classification can be predicted from PI-Neph measurements alone with better than 85 % recall. This result both emphasizes the validity of the ancillary data classification as well as the PI-Neph's ability to distinguish common aerosol types without additional information.
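The "better than 85 % recall" figure quoted above is a per-class quantity: the fraction of measurements truly belonging to an aerosol class that the scattering-only classifier recovers. A minimal sketch of that computation, with hypothetical class labels rather than campaign data:

```python
from collections import defaultdict

def per_class_recall(true_labels, predicted_labels):
    """Recall for each class: correctly recovered / total truly in that class."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p in zip(true_labels, predicted_labels):
        totals[t] += 1
        if t == p:
            hits[t] += 1
    return {c: hits[c] / totals[c] for c in totals}

# Hypothetical labels: ancillary-data classes vs. classes predicted from
# PI-Neph scattering alone (illustrative, not the SEAC4RS/DC3 data)
truth = ["dust", "dust", "smoke", "smoke", "smoke", "marine", "marine", "marine"]
pred  = ["dust", "dust", "smoke", "smoke", "dust",  "marine", "marine", "marine"]
recalls = per_class_recall(truth, pred)  # dust 1.0, smoke ~0.67, marine 1.0
```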
De Girolamo, A; Lippolis, V; Nordkvist, E; Visconti, A
2009-06-01
Fourier transform near-infrared spectroscopy (FT-NIR) was used for rapid and non-invasive analysis of deoxynivalenol (DON) in durum and common wheat. The relevance of using ground wheat samples with a homogeneous particle size distribution to minimize measurement variations and avoid DON segregation among particles of different sizes was established. Calibration models for durum wheat, common wheat and durum + common wheat samples, with particle size <500 µm, were obtained by using partial least squares (PLS) regression with an external validation technique. Values of root mean square error of prediction (RMSEP, 306-379 µg kg⁻¹) were comparable and not too far from values of root mean square error of cross-validation (RMSECV, 470-555 µg kg⁻¹). Coefficients of determination (r²) indicated an "approximate to good" level of prediction of the DON content by FT-NIR spectroscopy in the PLS calibration models (r² = 0.71-0.83), and a "good" discrimination between low and high DON contents in the PLS validation models (r² = 0.58-0.63). A "limited to good" practical utility of the models was ascertained by range error ratio (RER) values higher than 6. A qualitative model, based on 197 calibration samples, was developed to discriminate between blank and naturally contaminated wheat samples by setting a cut-off at 300 µg kg⁻¹ DON to separate the two classes. The model correctly classified 69% of the 65 validation samples, with most misclassified samples (16 of 20) showing DON contamination levels quite close to the cut-off level. These findings suggest that FT-NIR analysis is suitable for the determination of DON in unprocessed wheat at levels far below the maximum permitted limits set by the European Commission.
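The RMSEP and range error ratio (RER) figures quoted above are simple functions of reference vs. predicted concentrations on a validation set. A minimal sketch with made-up DON values (not the paper's data), where an RER above 6 indicates at least limited practical utility:

```python
import math

def rmsep(reference, predicted):
    """Root mean square error of prediction over an external validation set."""
    n = len(reference)
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n)

def range_error_ratio(reference, predicted):
    """RER: range of the reference values divided by the prediction error."""
    return (max(reference) - min(reference)) / rmsep(reference, predicted)

# Hypothetical DON reference vs. FT-NIR-predicted values (microgram per kg)
ref = [200.0, 500.0, 900.0, 1500.0, 2400.0, 3100.0]
prd = [450.0, 300.0, 1150.0, 1300.0, 2700.0, 2900.0]

err = rmsep(ref, prd)          # about 236 microgram per kg
rer = range_error_ratio(ref, prd)  # well above the 6 threshold
```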
Psychometric evaluation of commonly used game-specific skills tests in rugby: A systematic review
Oorschot, Sander; Chiwaridzo, Matthew; CM Smits-Engelsman, Bouwien
2017-01-01
Objectives To (1) give an overview of commonly used game-specific skills tests in rugby and (2) evaluate the available psychometric information on these tests. Methods The databases PubMed, MEDLINE, CINAHL and Africa Wide Information were systematically searched for articles published between January 1995 and March 2017. First, commonly used game-specific skills tests were identified. Second, the available psychometrics of these tests were evaluated and the methodological quality of the studies assessed using the Consensus-based Standards for the selection of health Measurement Instruments checklist. Studies included in the first step had to report detailed information on the construct and testing procedure of at least one game-specific skill; studies included in the second step additionally had to report at least one psychometric property evaluating reliability, validity or responsiveness. Results 287 articles were identified in the first step, of which 30 met the inclusion criteria; 64 articles were identified in the second step, of which 10 were included. Reactive agility, tackling and simulated rugby games were the most commonly used tests. All 10 studies reporting psychometrics reported reliability outcomes, revealing mainly strong evidence. However, all studies scored poor or fair on methodological quality. Four studies reported validity outcomes, in which mainly moderate evidence was indicated, but all articles had fair methodological quality. Conclusion Game-specific skills tests indicated mainly high reliability and validity evidence, but the studies lacked methodological quality. Reactive agility seems to be a promising domain, but the specific tests need further development. Future high-methodological-quality studies are required in order to develop valid and reliable test batteries for rugby talent identification. Trial registration number PROSPERO CRD42015029747. PMID:29259812
Splenomegaly - Diagnostic validity, work-up, and underlying causes.
Curovic Rotbain, Emelie; Lund Hansen, Dennis; Schaffalitzky de Muckadell, Ove; Wibrand, Flemming; Meldgaard Lund, Allan; Frederiksen, Henrik
2017-01-01
Our aim was to assess the validity of the ICD-10 code for splenomegaly in the Danish National Registry of Patients (DNRP), and to investigate which underlying diseases explained the observed splenomegaly. Splenomegaly is a common finding in patients referred to an internal medicine department and can be caused by a large spectrum of diseases, including haematological diseases and liver cirrhosis. However, some patients remain without a causal diagnosis despite extensive medical work-up. We identified 129 patients through the DNRP who had been given the ICD-10 splenomegaly diagnosis code in 1994-2013 at Odense University Hospital, Denmark, excluding patients with prior splenomegaly, malignant haematological neoplasia or liver cirrhosis. Medical records were reviewed for validity of the splenomegaly diagnosis and the diagnostic work-up, and the underlying disease was determined. The positive predictive value (PPV) with 95% confidence interval (CI) was calculated for the splenomegaly diagnosis code. Patients with idiopathic splenomegaly in on-going follow-up were also invited to be investigated for Gaucher disease. The overall PPV was 92% (95% CI: 85, 96). Haematological diseases were the underlying causal diagnosis in 39%, hepatic diseases in 18%, infectious diseases in 10% and other diseases in 8%; 25% of patients with splenomegaly remained without a causal diagnosis. Lymphoma was the most common haematological causal diagnosis and liver cirrhosis the most common hepatic causal diagnosis. None of the investigated patients with idiopathic splenomegaly had Gaucher disease. Our findings show that the splenomegaly diagnosis in the DNRP is valid and can be used in registry-based studies. However, because of suspected significant under-coding, supplementary data sources should be considered in addition, in order to attain a more representative population. Haematological diseases were the most common cause; however, in a large fraction of patients no causal diagnosis was found.
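A PPV with an interval like the one reported above can be reproduced approximately with a Wilson score interval. In the sketch below, the count of 119 confirmed diagnoses is an assumption consistent with the reported 92% of 129 patients, not a figure taken from the paper:

```python
import math

def ppv_with_wilson_ci(true_positives, total_coded, z=1.96):
    """Positive predictive value with an approximate 95% Wilson score interval."""
    n = total_coded
    p = true_positives / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, center - half, center + half

# Assumed count: 119 of 129 register diagnoses confirmed on record review
# gives PPV of roughly 92%, with a CI close to the reported (85, 96)
ppv, ci_lo, ci_hi = ppv_with_wilson_ci(119, 129)
```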
Loss-of-function mutations in SLC30A8 protect against type 2 diabetes
Flannick, Jason; Thorleifsson, Gudmar; Beer, Nicola L.; Jacobs, Suzanne B. R.; Grarup, Niels; Burtt, Noël P.; Mahajan, Anubha; Fuchsberger, Christian; Atzmon, Gil; Benediktsson, Rafn; Blangero, John; Bowden, Don W.; Brandslund, Ivan; Brosnan, Julia; Burslem, Frank; Chambers, John; Cho, Yoon Shin; Christensen, Cramer; Douglas, Desirée A.; Duggirala, Ravindranath; Dymek, Zachary; Farjoun, Yossi; Fennell, Timothy; Fontanillas, Pierre; Forsén, Tom; Gabriel, Stacey; Glaser, Benjamin; Gudbjartsson, Daniel F.; Hanis, Craig; Hansen, Torben; Hreidarsson, Astradur B.; Hveem, Kristian; Ingelsson, Erik; Isomaa, Bo; Johansson, Stefan; Jørgensen, Torben; Jørgensen, Marit Eika; Kathiresan, Sekar; Kong, Augustine; Kooner, Jaspal; Kravic, Jasmina; Laakso, Markku; Lee, Jong-Young; Lind, Lars; Lindgren, Cecilia M; Linneberg, Allan; Masson, Gisli; Meitinger, Thomas; Mohlke, Karen L; Molven, Anders; Morris, Andrew P.; Potluri, Shobha; Rauramaa, Rainer; Ribel-Madsen, Rasmus; Richard, Ann-Marie; Rolph, Tim; Salomaa, Veikko; Segrè, Ayellet V.; Skärstrand, Hanna; Steinthorsdottir, Valgerdur; Stringham, Heather M.; Sulem, Patrick; Tai, E Shyong; Teo, Yik Ying; Teslovich, Tanya; Thorsteinsdottir, Unnur; Trimmer, Jeff K.; Tuomi, Tiinamaija; Tuomilehto, Jaakko; Vaziri-Sani, Fariba; Voight, Benjamin F.; Wilson, James G.; Boehnke, Michael; McCarthy, Mark I.; Njølstad, Pål R.; Pedersen, Oluf; Groop, Leif; Cox, David R.; Stefansson, Kari; Altshuler, David
2014-01-01
Loss-of-function mutations protective against human disease provide in vivo validation of therapeutic targets [1-3], yet none are described for type 2 diabetes (T2D). Through sequencing or genotyping of ~150,000 individuals across five ethnicities, we identified 12 rare protein-truncating variants in SLC30A8, which encodes an islet zinc transporter (ZnT8) [4] and harbors a common variant (p.Trp325Arg) associated with T2D risk, glucose, and proinsulin levels [5-7]. Collectively, protein-truncating variant carriers had 65% reduced T2D risk (p = 1.7×10^-6), and non-diabetic Icelandic carriers of a frameshift variant (p.Lys34SerfsX50) demonstrated reduced glucose levels (-0.17 s.d., p = 4.6×10^-4). The two most common protein-truncating variants (p.Arg138X and p.Lys34SerfsX50) individually associate with T2D protection and encode unstable ZnT8 proteins. Previous functional study of SLC30A8 suggested that reduced zinc transport increases T2D risk [8,9], yet phenotypic heterogeneity was observed in rodent Slc30a8 knockouts [10-15]. In contrast, loss-of-function mutations in humans provide strong evidence that SLC30A8 haploinsufficiency protects against T2D, suggesting ZnT8 inhibition as a therapeutic strategy in T2D prevention. PMID:24584071
Al-Abri, Mohammed; Al-Asmi, Abdullah; Al-Shukairi, Aisha; Al-Qanoobi, Arwa; Nandhagopal, Ramachandiran; Jacob, Povothoor; Gujjar, Arunodaya
2015-01-01
Objectives: Epilepsy is a common neurological disorder with a median lifetime prevalence of 14 per 1000 subjects. Sleep disorders can influence epileptic seizures. The most common sleep disorder is obstructive sleep apnea syndrome (OSAS), which occurs in 2% of adult women and 4% of adult men in the general population. The aim of this study was to estimate the frequency of OSAS among patients with epilepsy and to study seizure characteristics among patients with co-morbid OSAS. Methods: Patients with a confirmed diagnosis of epilepsy who attended the Sultan Qaboos University Hospital neurology clinic were recruited for the study between June 2011 and April 2012. Patients were screened for OSAS by direct interview using the validated Arabic version of the Berlin questionnaire. Patients identified as high-risk underwent polysomnography. Results: A total of 100 patients with epilepsy (55 men and 45 women) were screened for OSAS. Generalized and focal seizures were found in 67% of male and 27% of female patients. Six percent of the participants had epilepsy of undetermined type. Only 9% of the sample was found to be at high risk of OSAS based on the Berlin questionnaire. No significant correlation was found between risk of OSAS, type of epilepsy, and anti-epileptic drugs. Conclusion: The risk of OSAS was marginally greater in patients with epilepsy compared to the general population, with an overall prevalence of 9%. PMID:25829998
Computer validation in toxicology: historical review for FDA and EPA good laboratory practice.
Brodish, D L
1998-01-01
The application of computer validation principles to Good Laboratory Practice is a fairly recent phenomenon. As automated data collection systems have become more common in toxicology facilities, the U.S. Food and Drug Administration and the U.S. Environmental Protection Agency have begun to focus inspections in this area. This historical review documents the development of regulatory guidance on computer validation in toxicology over the past several decades. An overview of the components of a computer life cycle is presented, including the development of systems descriptions, validation plans, validation testing, system maintenance, SOPs, change control, security considerations, and system retirement. Examples are provided for implementation of computer validation principles on laboratory computer systems in a toxicology facility.
Gonzalez, Araceli; Rozenman, Michelle; Langley, Audra K; Kendall, Philip C; Ginsburg, Golda S; Compton, Scott; Walkup, John T; Birmaher, Boris; Albano, Anne Marie; Piacentini, John
2017-06-01
Anxiety disorders are among the most common mental health problems in youth, and faulty interpretation bias has been positively linked to anxiety severity, even within anxiety-disordered youth. Quick, reliable assessment of interpretation bias may be useful for identifying youth with certain types of anxiety or for assessing changes in cognitive bias during intervention. This study examined the factor structure, reliability, and validity of the Self-report of Ambiguous Social Situations for Youth (SASSY) scale, a self-report measure developed to assess interpretation bias in youth. Participants (N=488, ages 7 to 17) met diagnostic criteria for Social Phobia, Generalized Anxiety Disorder, and/or Separation Anxiety Disorder. An exploratory factor analysis was performed on baseline data from youth participating in a large randomized clinical trial and yielded two factors (Accusation/Blame, Social Rejection). The SASSY full scale and Social Rejection factor demonstrated adequate internal consistency, convergent validity with social anxiety, and discriminant validity, as evidenced by non-significant correlations with measures of non-social anxiety. Further, the SASSY Social Rejection factor accurately distinguished children and adolescents with Social Phobia from those with other anxiety disorders, supporting its criterion validity. Given the relevance to youth with social phobia, pre- and post-intervention data were examined to test sensitivity to treatment effects; results suggested that SASSY scores decreased for treatment responders. Findings suggest the potential utility of the SASSY Social Rejection factor as a quick, reliable, and efficient way of assessing interpretation bias in anxious youth, particularly as related to social concerns, in research and clinical settings.
Broström, Anders; Arestedt, Kristofer Franzén; Nilsen, Per; Strömberg, Anna; Ulander, Martin; Svanborg, Eva
2010-12-01
Continuous positive airway pressure (CPAP) is the treatment of choice for obstructive sleep apnoea syndrome (OSAS), but side-effects are common. No validated self-rating scale measuring side-effects of CPAP treatment exists today. The aim was to develop the side-effects to CPAP treatment inventory (SECI), and to investigate the validity and reliability of the instrument among patients with OSAS. SECI was developed on the basis of: (1) in-depth interviews with 23 patients; (2) examination of the scientific literature; and (3) consensus agreement of a multi-professional expert panel. This yielded 15 different types of side-effects related to CPAP treatment. Each side-effect has three sub-questions (scales): perceived frequency (a) and magnitude (b) of the side-effect, as well as its perceived impact on CPAP use (c). A cross-sectional descriptive design was used. A total of 329 patients with OSAS with an average of 39 months of CPAP treatment (2 weeks to 182 months) were recruited. Data were collected with SECI, and obtained from medical records (clinical variables and data related to CPAP treatment). Construct validity was confirmed with factor analysis (principal component analysis with orthogonal rotation). A logical two-factor solution, the device subscale and the symptom subscale, emerged across all three scales. The symptom subscale described physical and psychological side-effects, and the device subscale described mask- and device-related side-effects. Internal consistency reliability of the three scales was good (Cronbach's α = 0.74-0.86) and acceptable for the subscales (Cronbach's α = 0.62-0.86). The satisfactory measurement properties of this new instrument are promising and indicate that SECI can be used to measure side-effects of CPAP treatment. © 2010 European Sleep Research Society.
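The Cronbach's alpha values reported above have a simple closed form: alpha = k/(k-1) × (1 − Σ item variances / variance of total scores). A minimal sketch with hypothetical item scores (three items, five respondents), not SECI data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns (one list per item)."""
    k = len(items)
    n = len(items[0])

    def variance(xs):  # population variance; consistent use cancels in the ratio
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across the k items
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Hypothetical Likert-type responses: 3 items rated by 5 respondents
items = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 3, 5, 3],
]
alpha = cronbach_alpha(items)  # high internal consistency for these toy data
```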
Translation and Validation of the Persian Version the Boston Carpal Tunnel Syndrome Questionnaire.
Hassankhani, Golnaz Ghayyem; Moradi, Ali; Birjandinejad, Ali; Vahedi, Ehsan; Kachooei, Amir R; Ebrahimzadeh, Mohammad H
2018-01-01
Carpal tunnel syndrome (CTS) is recognized as the most common type of neuropathy. Questionnaires are the method of choice for evaluating patients with CTS. The Boston Carpal Tunnel Questionnaire (BCTQ) is one of the best-known questionnaires evaluating the functional and symptomatic aspects of CTS. This study was performed to evaluate the validity and reliability of the Persian version of the BCTQ. First, both parts of the original questionnaire (Symptom Severity Scale and Functional Status Scale) were translated into Persian by two expert translators. The translated questionnaire was revised after merging and confirmed by an orthopedic hand surgeon. The confirmed questionnaire was translated back into the original language (English) to check for any possible content inequality between the original questionnaire and its final translated version. The final Persian questionnaire was answered by 10 patients suffering from CTS to elucidate its comprehensibility; afterwards, it was completed by 142 participants along with the Persian version of the Quick-DASH questionnaire. After 2 to 6 days, the translated questionnaire was refilled by some of the previous patients who had not received any substantial medical treatment during that period. Among all 142 patients, 13.4% were male and 86.6% were female. The reliability of the questionnaire was tested using Cronbach's alpha and the intraclass correlation coefficient (ICC). Cronbach's alpha was 0.859 for the symptom severity scale (SSS) and 0.878 for the functional status scale (FSS). ICCs were calculated as 0.538 for the SSS and 0.773 for the FSS. In addition, construct validity of the SSS and FSS against the Quick-DASH was 0.641 and 0.701, respectively. Based on our results, the Persian version of the BCTQ is valid and reliable. Level of evidence: II.
Validation of the CMT Pediatric Scale as an outcome measure of disability
Burns, Joshua; Ouvrier, Robert; Estilow, Tim; Shy, Rosemary; Laurá, Matilde; Pallant, Julie F.; Lek, Monkol; Muntoni, Francesco; Reilly, Mary M.; Pareyson, Davide; Acsadi, Gyula; Shy, Michael E.; Finkel, Richard S.
2012-01-01
Objective Charcot-Marie-Tooth disease (CMT) is a common heritable peripheral neuropathy. There is no treatment for any form of CMT, although clinical trials are increasingly occurring. Patients usually develop symptoms during the first two decades of life, but there are no established outcome measures of disease severity or response to treatment. We identified a set of items that represent a range of impairment levels and conducted a series of validation studies to build a patient-centered multi-item rating scale of disability for children with CMT. Methods As part of the Inherited Neuropathies Consortium, patients aged 3-20 years with a variety of CMT types were recruited from the USA, UK, Italy and Australia. Initial development stages involved definition of the construct, item pool generation, peer review and pilot testing. Based on data from 172 patients, a series of validation studies was conducted, including item and factor analysis, reliability testing, Rasch modeling and sensitivity analysis. Results Seven areas for measurement were identified (strength, dexterity, sensation, gait, balance, power, endurance), and a psychometrically robust 11-item scale was constructed (Charcot-Marie-Tooth disease Pediatric Scale: CMTPedS). Rasch analysis supported the viability of the CMTPedS as a unidimensional measure of disability in children with CMT. It showed good overall model fit, no evidence of misfitting items, no person misfit, and it was well targeted for children with CMT. Interpretation The CMTPedS is a well-tolerated outcome measure that can be completed in 25 minutes. It is a reliable, valid and sensitive global measure of disability for children with CMT from the age of 3 years. PMID:22522479
Farkas, József; Kovács, László Á; Gáspár, László; Nafz, Anna; Gaszner, Tamás; Ujvári, Balázs; Kormos, Viktória; Csernus, Valér; Hashimoto, Hitoshi; Reglődi, Dóra; Gaszner, Balázs
2017-06-23
Major depression is a common cause of chronic disability. Despite decades of effort, no unequivocally accepted animal model is available for studying depression. We tested the validity of a new model based on the three-hit concept of vulnerability and resilience. Genetic predisposition (hit 1, mutation of the pituitary adenylate cyclase-activating polypeptide, PACAP, gene), early-life adversity (hit 2, 180-min maternal deprivation, MD180) and chronic variable mild stress (hit 3, CVMS) were combined. Physical, endocrinological, behavioral and functional morphological tools were used to validate the model. Body and adrenal weight changes as well as corticosterone titers proved that CVMS was effective. The forced swim test indicated increased depression in CVMS PACAP heterozygous (Hz) mice with MD180 history, accompanied by an elevated anxiety level in the marble burying test. Corticotropin-releasing factor neurons in the oval division of the bed nucleus of the stria terminalis showed increased FosB expression, which was refractory to CVMS exposure in wild-type and Hz mice. Urocortin1 neurons became over-active in CVMS-exposed PACAP knockout (KO) mice with MD180 history, suggesting the contribution of the centrally projecting Edinger-Westphal nucleus to the reduced depression and anxiety levels of stressed KO mice. Serotoninergic neurons of the dorsal raphe nucleus lost their ability to adapt to CVMS in MD180 mice. In conclusion, the construct and face validity criteria suggest that MD180 PACAP Hz mice on a CD1 background upon CVMS may be used as a reliable model for the three-hit theory. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Silva de Lima, Ana Lígia; Evers, Luc J W; Hahn, Tim; Bataille, Lauren; Hamilton, Jamie L; Little, Max A; Okuma, Yasuyuki; Bloem, Bastiaan R; Faber, Marjan J
2017-08-01
Despite the large number of studies that have investigated the use of wearable sensors to detect gait disturbances such as freezing of gait (FOG) and falls, there is little consensus regarding appropriate methodologies for how to optimally apply such devices. Here, an overview of the use of wearable systems to assess FOG and falls in Parkinson's disease (PD), and of their validation performance, is presented. A systematic search of the PubMed and Web of Science databases was performed using a group of concept key words. The final search was performed in January 2017, and articles were selected based upon a set of eligibility criteria. In total, 27 articles were selected. Of those, 23 related to FOG and 4 to falls. FOG studies were performed in either laboratory or home settings, with sample sizes ranging from 1 up to 48 PD patients, presenting Hoehn and Yahr stages from 2 to 4. The shin was the most common sensor location and the accelerometer was the most frequently used sensor type. Validity measures ranged from 73-100% for sensitivity and 67-100% for specificity. Falls and fall-risk studies were all home-based, including sample sizes of 1 up to 107 PD patients, mostly using one accelerometer-containing sensor worn at various body locations. Despite the promising validation initiatives reported in these studies, they were all performed with relatively small sample sizes, and there was significant variability in the outcomes measured and results reported. Given these limitations, the validation of sensor-derived assessments of PD features would benefit from more focused research efforts, increased collaboration among researchers, aligned data collection protocols, and shared data sets.
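The sensitivity and specificity ranges quoted above come directly from confusion counts against a ground-truth annotation. A minimal sketch, with hypothetical counts chosen to match the lower ends of the reported ranges rather than any study's data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts for a FOG detector: 73 of 100 true FOG
# episodes flagged, 67 of 100 non-FOG windows correctly passed
sens, spec = sensitivity_specificity(tp=73, fn=27, tn=67, fp=33)  # 0.73, 0.67
```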
Costa, Raquel; Probst, Michel; Bastos, Tânia; Vilhena, Estela; Seabra, André; Corredeira, Rui
2017-06-22
People with schizophrenia have low physical activity levels that can be explained by restricted motivation. The Behavioural Regulation in Exercise Questionnaire-2 is a 19-item scale commonly used to assess five different motivational subtypes for physical activity. However, there are limited psychometric analyses of this version in the schizophrenia context. Moreover, there is a lack of information on the psychometric properties of version 3 of this questionnaire, which has 24 items and six different motivational subtypes. The aim of this study was to examine the construct validity of both Portuguese versions in people with schizophrenia. A total of 118 persons with schizophrenia were included (30 women). Cronbach's alpha was used for internal consistency, Pearson's correlation for the retained motivation types, confirmatory factor analysis for the structural validity of version 2 and exploratory factor analysis for the factor structure of version 3. Analyses of version 2 provided an adequate fit index for the five-factor structure. Exploratory analyses suggested retaining 2 factors of version 3. The results of this study suggest that version 3 is an appropriate measure of controlled and autonomous motivation for physical activity in people with schizophrenia and support its use in clinical practice and research. Implications for Rehabilitation: This study supports the need to identify the reasons why people with schizophrenia practice physical activity. For that purpose, it is important to use valid and cost-effective instruments. The Portuguese version of BREQ-2 confirmed a 5-factor model and showed adequate fit for application in people with schizophrenia. However, the incremental index values were lower than expected. The Portuguese version of BREQ-3 showed acceptable psychometric properties for assessing controlled and autonomous motivation for physical activity in people with schizophrenia.
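The Cronbach's alpha used above for internal consistency can be computed directly from item scores. A minimal sketch with hypothetical Likert-type data (not the study's data), using population variances throughout:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    `items` is a list of item-score lists, one per questionnaire item,
    each of equal length (one score per respondent)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical data: 3 items, 5 respondents (Likert-type scores)
items = [
    [3, 4, 5, 2, 4],
    [3, 5, 5, 1, 4],
    [2, 4, 4, 2, 3],
]
print(round(cronbach_alpha(items), 3))  # 0.942
```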
Does rational selection of training and test sets improve the outcome of QSAR modeling?
Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander
2012-10-22
Prior to using a quantitative structure-activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
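Of the rational division methods named above, Kennard-Stone is the simplest to sketch: seed the training set with the two most distant samples, then repeatedly add the sample whose nearest selected neighbour is farthest away. A dependency-free sketch over a hypothetical one-dimensional descriptor set (illustrative only, O(n²) and not the study's implementation):

```python
import math

def kennard_stone(X, n_train):
    """Kennard-Stone selection: pick `n_train` indices spanning descriptor space.
    Starts from the two most distant points, then repeatedly adds the sample
    whose minimum distance to the already-selected set is maximal (maximin)."""
    n = len(X)
    # seed with the most distant pair
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda p: math.dist(X[p[0]], X[p[1]]))
    selected = [i0, j0]
    remaining = [k for k in range(n) if k not in selected]
    while len(selected) < n_train:
        k = max(remaining,
                key=lambda r: min(math.dist(X[r], X[s]) for s in selected))
        selected.append(k)
        remaining.remove(k)
    return sorted(selected), sorted(remaining)  # train indices, test indices

# Hypothetical 1-D "descriptors": the rational split keeps the extremes in training
X = [[0.0], [0.1], [0.5], [0.9], [1.0]]
train, test = kennard_stone(X, 3)
print(train, test)  # [0, 2, 4] [1, 3]
```

The design property to note is that, unlike random division, the test compounds always fall inside the descriptor space spanned by the training set.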
Chen, Zhijun; Wu, Chaozhong; Zhong, Ming; Lyu, Nengchao; Huang, Zhen
2015-08-01
Drowsy/distracted driving has become one of the leading causes of traffic crashes. Previous studies have examined only certain drowsy/distracted driving behaviors, mainly using dedicated sensor devices such as biological and visual sensors. The objective of this study is to extract common features for identifying drowsy/distracted driving from a set of common vehicle motion parameters. An intelligent vehicle was used to collect vehicle motion parameters. Fifty licensed drivers (37 males and 13 females, M = 32.5 years, SD = 6.2) were recruited to carry out road experiments in Wuhan, China, and vehicle motion data were collected under four driving scenarios: talking, watching the roadside, drinking, and driving under the influence of drowsiness. In the first scenario, the drivers were exposed to a set of questions and asked to repeat a few sentences that had been proven valid in inducing driving distraction. Watching the roadside, drinking and driving under drowsiness were assessed by an observer and by self-reporting from the drivers. The common features of vehicle motion under the four types of drowsy/distracted driving were analyzed using descriptive statistics and then the Wilcoxon rank sum test. The results indicated a significant difference in lateral acceleration rates and yaw rate acceleration between normal driving and drowsy/distracted driving: under drowsy/distracted driving, the lateral acceleration rates and yaw rate acceleration were significantly larger than during normal driving. The lateral acceleration rates were shown to suddenly increase or decrease by more than 2.0 m/s³ and the yaw rate acceleration by more than 2.5°/s². The standard deviation of acceleration rate (SDA) and the standard deviation of yaw rate acceleration (SDY) were identified as the common features of vehicle motion for distinguishing drowsy/distracted driving from normal driving.
In order to identify a time window for effectively extracting the two common features, a double-window method was used, and the optimal "Parent Window" and "Child Window" lengths were found to be 55 s and 6 s, respectively. The study results can be used to develop a driver-assistance system that warns drivers when any of the four types of drowsy/distracted driving is detected. Copyright © 2015. Published by Elsevier Ltd.
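The double-window idea can be sketched as a child window sliding inside each parent window, with the largest child-window standard deviation serving as the parent-window feature. The window lengths and signal below are hypothetical stand-ins (the study uses 55 s and 6 s windows over lateral-acceleration-rate and yaw-rate-acceleration traces):

```python
from statistics import stdev

def window_sd_features(signal, parent, child):
    """Double-window feature sketch: slide a child window inside each parent
    window and take the largest child-window standard deviation as that
    parent window's feature (e.g. SDA over a lateral-acceleration-rate trace).
    Window lengths are given in samples for simplicity."""
    features = []
    for p0 in range(0, len(signal) - parent + 1, parent):
        parent_seg = signal[p0:p0 + parent]
        sds = [stdev(parent_seg[c0:c0 + child])
               for c0 in range(0, parent - child + 1)]
        features.append(max(sds))
    return features

# Hypothetical traces: a calm trace vs. one with a sudden jerk spike
calm  = [0.0] * 20
spiky = [0.0] * 10 + [0.0, 0.0, 2.5, -2.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
print(window_sd_features(calm, parent=10, child=3))   # [0.0, 0.0]
print(window_sd_features(spiky, parent=10, child=3))  # spike window stands out
```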
Gold, T
1992-07-01
There are strong indications that microbial life is widespread at depth in the crust of the Earth, just as such life has been identified in numerous ocean vents. This life is not dependent on solar energy and photosynthesis for its primary energy supply, and it is essentially independent of the surface circumstances. Its energy supply comes from chemical sources, due to fluids that migrate upward from deeper levels in the Earth. In mass and volume it may be comparable with all surface life. Such microbial life may account for the presence of biological molecules in all carbonaceous materials in the outer crust, and the inference that these materials must have derived from biological deposits accumulated at the surface is therefore not necessarily valid. Subsurface life may be widespread among the planetary bodies of our solar system, since many of them have equally suitable conditions below, while having totally inhospitable surfaces. One may even speculate that such life may be widely disseminated in the universe, since planetary type bodies with similar subsurface conditions may be common as solitary objects in space, as well as in other solar-type systems.
Targeting IFN-λ: therapeutic implications.
Eslam, Mohammed; George, Jacob
2016-12-01
Type III interferons (IFN-λ), the most recently discovered family of IFNs, share common features with other family members but also have many distinctive activities. IFN-λ uniquely has a different receptor complex and a more focused pattern of tissue expression and signaling effects than other classes of IFNs. Multiple genome-wide association studies (GWAS) and subsequent validation reports suggest a pivotal role for polymorphisms near the IFNL3 gene in hepatitis C clearance and control, as well as for several other epithelial-cell-tropic viruses. Apart from its antiviral activity, IFN-λ possesses anti-tumor, immune-inflammatory and homeostatic functions. The overlap of IFN-λ effects with those of type I IFN, combined with a restricted tissue expression pattern, renders IFN-λ an attractive therapeutic target for viral infection, cancer and autoimmune diseases, with limited side effects. Areas covered: This review summarizes the current and future therapeutic opportunities offered by this most recently discovered family of interferons. Expert opinion: Our knowledge of IFN-λ is rapidly expanding. Though many remaining questions and challenges require elucidation, the unique characteristics of IFN-λ increase enthusiasm that multiple therapeutic options will emerge.
Working memory capacity and the scope and control of attention.
Shipstead, Zach; Harrison, Tyler L; Engle, Randall W
2015-08-01
Complex span and visual arrays are two common measures of working memory capacity that are respectively treated as measures of attention control and storage capacity. A recent analysis of these tasks concluded that (1) complex span performance has a relatively stronger relationship to fluid intelligence and (2) this is due to the requirement that people engage control processes while performing this task. The present study examines the validity of these conclusions by examining two large data sets that include a more diverse set of visual arrays tasks and several measures of attention control. We conclude that complex span and visual arrays account for similar amounts of variance in fluid intelligence. The disparity relative to the earlier analysis is attributed to the present study involving a more complete measure of the latent ability underlying the performance of visual arrays. Moreover, we find that both types of working memory task have strong relationships to attention control. This indicates that the ability to engage attention in a controlled manner is a critical aspect of working memory capacity, regardless of the type of task that is used to measure this construct.
Iskar, Murat; Zeller, Georg; Blattmann, Peter; Campillos, Monica; Kuhn, Michael; Kaminska, Katarzyna H; Runz, Heiko; Gavin, Anne-Claude; Pepperkok, Rainer; van Noort, Vera; Bork, Peer
2013-01-01
In pharmacology, it is crucial to understand the complex biological responses that drugs elicit in the human organism and how well they can be inferred from model organisms. We therefore identified a large set of drug-induced transcriptional modules from genome-wide microarray data of drug-treated human cell lines and rat liver, and first characterized their conservation. Over 70% of these modules were common for multiple cell lines and 15% were conserved between the human in vitro and the rat in vivo system. We then illustrate the utility of conserved and cell-type-specific drug-induced modules by predicting and experimentally validating (i) gene functions, e.g., 10 novel regulators of cellular cholesterol homeostasis and (ii) new mechanisms of action for existing drugs, thereby providing a starting point for drug repositioning, e.g., novel cell cycle inhibitors and new modulators of α-adrenergic receptor, peroxisome proliferator-activated receptor and estrogen receptor. Taken together, the identified modules reveal the conservation of transcriptional responses towards drugs across cell types and organisms, and improve our understanding of both the molecular basis of drug action and human biology. PMID:23632384
p53 Regulates Bone Differentiation and Osteosarcoma Formation | Center for Cancer Research
Osteosarcoma is an uncommon cancer that usually begins in the large bones of the arm or leg, but is the second leading cause of cancer-related death in children and young adults. The tumor suppressor protein, p53, appears to be an important player in osteosarcomagenesis in part because these cancers are one of the most common to develop in patients with Li-Fraumeni syndrome, which is caused by an inherited mutation in p53. However, the precise role of p53 in osteosarcoma development has not been established. To begin investigating its importance to the formation of normal bone and osteosarcomas, Jing Huang, Ph.D., of CCR’s Laboratory of Cancer Biology and Genetics, and his colleagues, isolated bone marrow-derived mesenchymal stem cells (BMSCs) from p53 wild type (WT) and knock out (KO) mice using a recently validated approach. Because BMSCs are one of the cells-of-origin of osteosarcoma, they serve as a useful model system. BMSCs contain a subset of multipotent stem cells that can differentiate into several cell types, including osteoblasts, and are important mediators of bone homeostasis.
Ali, Nora A; Mourad, Hebat-Allah M; ElSayed, Hany M; El-Soudani, Magdy; Amer, Hassanein H; Daoud, Ramez M
2016-11-01
Interference is one of the most important problems in LTE and LTE-Advanced networks. In this paper, interference was investigated in terms of the downlink signal-to-interference-and-noise ratio (SINR). To compare the different frequency reuse methods that have been developed to enhance the SINR, it is helpful to have a generalized expression for studying the performance of the different methods. This paper therefore introduces general expressions for the SINR in homogeneous and heterogeneous networks. In homogeneous networks, the expression was applied to the most common types of frequency reuse techniques: soft frequency reuse (SFR) and fractional frequency reuse (FFR). The expression was examined by comparing it with previously developed ones in the literature, and the comparison showed that it is valid for any type of frequency reuse scheme and any network topology. Furthermore, the expression was extended to heterogeneous networks (HetNets), covering the problem of both co-tier and cross-tier interference, and was examined by the same method as in the homogeneous case.
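The generalized expression itself is not reproduced in the abstract, but the quantity it generalizes is standard. A minimal sketch with hypothetical received powers; the paper's actual expression, which parametrizes reuse scheme and network topology, is necessarily more elaborate:

```python
import math

def sinr_db(p_serving_w, p_interferers_w, noise_w):
    """Downlink SINR sketch: serving-cell received power over the sum of
    co-channel interferer powers plus thermal noise (all in watts).
    Which cells count as interferers depends on the reuse scheme: under
    full reuse every neighbour interferes on the user's sub-band, while
    SFR/FFR exclude cells not transmitting on that sub-band."""
    sinr = p_serving_w / (sum(p_interferers_w) + noise_w)
    return 10 * math.log10(sinr)

# Hypothetical powers: removing one co-channel interferer, as a reuse
# scheme effectively does, raises the downlink SINR
full_reuse = sinr_db(1e-9, [2e-10, 1e-10], 1e-12)
with_reuse = sinr_db(1e-9, [1e-10], 1e-12)
print(round(full_reuse, 2), round(with_reuse, 2))
```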
Conservation and retrieval of information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, M.
This is a summary of the findings of a Nordic working group formed in 1990 and given the task of establishing a basis for a common Nordic view of the need for information conservation for nuclear waste repositories by investigating the following: (1) the type of information that should be conserved; (2) the form in which the information should be kept; (3) the quality of the information as regards both type and form; and (4) the problems of future retrieval of information, including retrieval after very long periods of time. High-level waste from nuclear power generation will remain radioactive for very long times, even though the major part of the radioactivity will have decayed within 1000 yr. Certain information about the waste must be kept for long time periods because future generations may, intentionally or inadvertently, come into contact with the radioactive waste. Current-day waste management would benefit from an early identification of documents to be part of an archive for radioactive waste repositories. The same reasoning is valid for repositories for other toxic wastes.
Huprich, Steven K; Nelson, Sharon M
2014-05-01
Several personality disorders (PDs) have been of interest in the clinical literature, yet have not been adequately represented in the diagnostic manuals; among these are the masochistic, self-defeating, depressive, and narcissistic PDs. The theoretical and empirical relationships among these disorders are reviewed. It is proposed that a particular type of self-structure, malignant self-regard (MSR), may account for similarities among all of them and provide a better framework for understanding the nature of these personality types and their discrimination from related constructs. Subsequently, a questionnaire to assess MSR was created and evaluated for its psychometric properties. The measure was found to be reliable (Cronbach's alpha = .93) and valid, given its correlations with measures of self-defeating, depressive, and vulnerably narcissistic personalities (rs ranging from .66 to .76). MSR can also be meaningfully differentiated from a nomological network of related constructs, including neuroticism, extraversion, depression, and grandiose narcissism. The utility of assessing self-structures, such as MSR, in the diagnostic manuals is discussed. Copyright © 2014. Published by Elsevier Inc.
Tornambè, A; Manfra, L; Canepa, S; Oteri, F; Martuccio, G; Cicero, A M; Magaletti, E
2018-02-01
The OECD TG 215 method (2000) (method C.14 of EC Regulation 440/2008) was developed on the rainbow trout (Oncorhynchus mykiss) to assess the chronic toxicity (28 d) of chemicals to juvenile fish. It allows the use of other well-documented species, provided suitable conditions for evaluating their growth are identified. The OECD proposes the European sea bass (Dicentrarchus labrax, L. 1758) as the Mediterranean species among the vertebrates recommended in the OECD guidelines for the toxicity testing of chemicals. In this context, our study aims to propose the adaptation of the growth test (OECD TG 215, 2000) to D. labrax. For this purpose, toxicity tests were performed with sodium dodecyl sulfate, a reference toxicant commonly used in fish toxicity assays. The main aspects of the testing procedure were reviewed: fish size (weight), environmental conditions, dilution water type, experimental design, loading rate and stocking density, feeding (food type and ration), and test validity criteria. The experience gained from growth tests with the sea bass supports promoting its inclusion among the species to be used for the C.14 method. Copyright © 2016. Published by Elsevier Inc.
Outcomes with the Boston Type 1 Keratoprosthesis at Instituto de Microcirugía Ocular IMO.
Güell, Jose L; Arcos, Edilio; Gris, Oscar; Aristizabal, Diego; Pacheco, Miguel; Sanchez, Claudia L; Manero, Felicidad
2011-07-01
To report the outcomes of the Boston Type 1 Keratoprosthesis at our institution. Retrospective case series analysis. We analyzed 54 eyes of 53 patients who underwent Boston Type 1 Keratoprosthesis surgery at our institution from July 2006 to March 2011. Preoperative and postoperative parameters were collected and analyzed. Visual acuity and keratoprosthesis stability. Common preoperative diagnoses were penetrating keratoplasty failure in 49 eyes (90.7%), chronic keratitis in 2 eyes (3.7%), ocular cicatricial pemphigoid in 1 eye (1.85%), Stevens-Johnson syndrome in 1 eye (1.85%) and corneal vascularization in 1 eye (1.85%). Additionally, 40 eyes (74%) had preoperative glaucoma, and an Ahmed valve was implanted in 55% of them. Preoperative BCVA ranged from 20/200 to light perception. At an average follow-up of 20.15 ± 12.7 months (range, 1-56), postoperative vision improved to ⩾20/200 in 18 eyes (33.3%) and ⩾20/50 in 4 eyes (7.4%). Graft retention was 96%. The Boston Type 1 Keratoprosthesis is a valid option for high-risk patients. The design improvements in the Boston keratoprosthesis, as well as the daily implementation of therapeutic methods, have notably diminished the occurrence of the most serious complications, such as corneal necrosis and endophthalmitis. As such, glaucoma and its subsequent complications now stand as the most prevalent prognostic factor in the long term.
Brown, J B; Nakatsui, Masahiko; Okuno, Yasushi
2014-12-01
The cost of pharmaceutical R&D has risen enormously, both worldwide and in Japan. However, Japan faces a particularly difficult situation in that its population is aging rapidly, and the cost of pharmaceutical R&D affects not only the industry but the entire medical system as well. To help reduce costs, the newly launched K supercomputer is available for big-data drug discovery and structural simulation-based drug discovery. We have implemented both primary (direct) and secondary (infrastructure, data processing) methods for the two types of drug discovery, custom-tailored to make maximal use of the 88,128 compute nodes/CPUs of K, and evaluated the implementations. We present two types of results. In the first, we executed the virtual screening of nearly 19 billion compound-protein interactions and calculated the accuracy of predictions against publicly available experimental data. In the second investigation, we implemented a very computationally intensive binding free energy algorithm and found that our binding free energies were considerably accurate when validated against another type of publicly available experimental data. The common feature of both types of results is the scale at which the computations were executed. The frameworks presented in this article provide perspectives and applications that, while tuned to the computing resources available in Japan, are equally applicable to any equivalent large-scale infrastructure provided elsewhere. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Development and Validation of a Taiwanese Communication Progression in Science Education
ERIC Educational Resources Information Center
Hsin, Ming-Chin; Chien, Sung-pei; Hsu, Yin-Shao; Lin, Chen-Yung; Yore, Larry D.
2016-01-01
Common core standards, interdisciplinary education, and discipline-specific literacy are common international education reforms. The constructive-interpretative language arts pairs (speaking-listening, writing-reading, representing-viewing) and the communication, construction, and persuasion functions of language are central in these movements.…
Carloni, Elisa; Amagliani, Giulia; Omiccioli, Enrica; Ceppetelli, Veronica; Del Mastro, Michele; Rotundo, Luca; Brandi, Giorgio; Magnani, Mauro
2017-06-01
Pasta is the Italian product par excellence and it is now popular worldwide. Pasta of a superior quality is made with pure durum wheat. In Italy, addition of Triticum aestivum (common wheat) during manufacturing is not allowed and, without adequate labeling, its presence is considered an adulteration. PCR-related techniques can be employed for the detection of common wheat contaminations. In this work, we demonstrated that a previously published method for the detection of T. aestivum, based on the gliadin gene, is inadequate. Moreover, a new molecular method, based on DNA extraction from semolina and real-time PCR determination of T. aestivum in Triticum spp., was validated. This multiplex real-time PCR, based on the dual-labeled probe strategy, guarantees target detection specificity and sensitivity in a short period of time. Moreover, the molecular analysis of common wheat contamination in commercial wheat and flours is described for the first time. Copyright © 2016 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Simkin, Mark G.
2008-01-01
Data-validation routines enable computer applications to test data to ensure their accuracy, completeness, and conformance to industry or proprietary standards. This paper presents five programming cases that require students to validate five different types of data: (1) simple user data entries, (2) UPC codes, (3) passwords, (4) ISBN numbers, and…
[Conversion ratio between different botulinum neuroprotein product in neurological practice].
Orlova, O R; Timerbaeva, S L; Khatkova, S E; Kostenko, E V; Krasavina, D A; Zakharov, D V
Despite nearly 30 years of experience with botulinum toxin type A (BTA) in clinical practice, many fundamental questions about the therapy remain open. Five botulinum toxin type A products were in use for neurological indications in the Russian Federation in 2017. They contain different amounts of active neuroprotein (150 kDa) in a therapeutic dose of the drug, which may have a potential impact on efficacy and duration of action. The current SmPC of each BTA states that its units of activity are unique and cannot be compared with those of any other BTA. In scientific publications one can find many details concerning the equivalence of doses of onabotulinumtoxinA (Botox) and abobotulinumtoxinA (Dysport), with the reported ratio of units varying from 1:1 to 1:11. However, according to clinical guidelines, systematic reviews and high-quality research evidence of recent years, the ratio of units of abobotulinumtoxinA (Dysport) to onabotulinumtoxinA (Botox) is 3(2.5):1. Use of a fixed ratio of units is possible only when switching from one drug to another or in the case of limited access to a specific drug. Botulinum toxin type A is the first-line therapy in the treatment of several neurological diseases. The most commonly used botulinum toxin type A products (Botox, Dysport, Xeomin) have a significant evidence base that confirms their efficacy and optimal safety profile. The main difference between botulinum toxin type A products is their potential activity, i.e., activity units and total therapeutic dose.
PREDICTING APHASIA TYPE FROM BRAIN DAMAGE MEASURED WITH STRUCTURAL MRI
Yourganov, Grigori; Smith, Kimberly G.; Fridriksson, Julius; Rorden, Chris
2015-01-01
Chronic aphasia is a common consequence of a left-hemisphere stroke. Since the early insights by Broca and Wernicke, studying the relationship between the loci of cortical damage and patterns of language impairment has been one of the concerns of aphasiology. We utilized multivariate classification in a cross-validation framework to predict the type of chronic aphasia from the spatial pattern of brain damage. Our sample consisted of 98 patients with five types of aphasia (Broca’s, Wernicke’s, global, conduction, and anomic), classified based on scores on the Western Aphasia Battery. Binary lesion maps were obtained from structural MRI scans (obtained at least 6 months poststroke, and within 2 days of behavioural assessment); after spatial normalization, the lesions were parcellated into a disjoint set of brain areas. The proportion of damage to the brain areas was used to classify patients’ aphasia type. To create this parcellation, we relied on five brain atlases; our classifier (support vector machine) could differentiate between different kinds of aphasia using any of the five parcellations. In our sample, the best classification accuracy was obtained when using a novel parcellation that combined two previously published brain atlases, with the first atlas providing the segmentation of grey matter, and the second atlas used to segment the white matter. For each aphasia type, we computed the relative importance of different brain areas for distinguishing it from other aphasia types; our findings were consistent with previously published reports of lesion locations implicated in different types of aphasia. Overall, our results revealed that automated multivariate classification could distinguish between aphasia types based on damage to atlas-defined brain areas. PMID:26465238
Mentasti, Massimo; Tewolde, Rediat; Aslett, Martin; Harris, Simon R.; Afshar, Baharak; Underwood, Anthony; Harrison, Timothy G.
2016-01-01
Sequence-based typing (SBT), analogous to multilocus sequence typing (MLST), is the current “gold standard” typing method for investigation of legionellosis outbreaks caused by Legionella pneumophila. However, as common sequence types (STs) cause many infections, some investigations remain unresolved. In this study, various whole-genome sequencing (WGS)-based methods were evaluated according to published guidelines, including (i) a single nucleotide polymorphism (SNP)-based method, (ii) extended MLST using different numbers of genes, (iii) determination of gene presence or absence, and (iv) a kmer-based method. L. pneumophila serogroup 1 isolates (n = 106) from the standard “typing panel,” previously used by the European Society for Clinical Microbiology Study Group on Legionella Infections (ESGLI), were tested together with another 229 isolates. Over 98% of isolates were considered typeable using the SNP- and kmer-based methods. Percentages of isolates with complete extended MLST profiles ranged from 99.1% (50 genes) to 86.8% (1,455 genes), while only 41.5% produced a full profile with the gene presence/absence scheme. Replicates demonstrated that all methods offer 100% reproducibility. Indices of discrimination range from 0.972 (ribosomal MLST) to 0.999 (SNP based), and all values were higher than that achieved with SBT (0.940). Epidemiological concordance is generally inversely related to discriminatory power. We propose that an extended MLST scheme with ∼50 genes provides optimal epidemiological concordance while substantially improving the discrimination offered by SBT and can be used as part of a hierarchical typing scheme that should maintain backwards compatibility and increase discrimination where necessary. This analysis will be useful for the ESGLI to design a scheme that has the potential to become the new gold standard typing method for L. pneumophila. PMID:27280420
David, Sophia; Mentasti, Massimo; Tewolde, Rediat; Aslett, Martin; Harris, Simon R; Afshar, Baharak; Underwood, Anthony; Fry, Norman K; Parkhill, Julian; Harrison, Timothy G
2016-08-01
Sequence-based typing (SBT), analogous to multilocus sequence typing (MLST), is the current "gold standard" typing method for investigation of legionellosis outbreaks caused by Legionella pneumophila. However, as common sequence types (STs) cause many infections, some investigations remain unresolved. In this study, various whole-genome sequencing (WGS)-based methods were evaluated according to published guidelines, including (i) a single nucleotide polymorphism (SNP)-based method, (ii) extended MLST using different numbers of genes, (iii) determination of gene presence or absence, and (iv) a kmer-based method. L. pneumophila serogroup 1 isolates (n = 106) from the standard "typing panel," previously used by the European Society for Clinical Microbiology Study Group on Legionella Infections (ESGLI), were tested together with another 229 isolates. Over 98% of isolates were considered typeable using the SNP- and kmer-based methods. Percentages of isolates with complete extended MLST profiles ranged from 99.1% (50 genes) to 86.8% (1,455 genes), while only 41.5% produced a full profile with the gene presence/absence scheme. Replicates demonstrated that all methods offer 100% reproducibility. Indices of discrimination range from 0.972 (ribosomal MLST) to 0.999 (SNP based), and all values were higher than that achieved with SBT (0.940). Epidemiological concordance is generally inversely related to discriminatory power. We propose that an extended MLST scheme with ∼50 genes provides optimal epidemiological concordance while substantially improving the discrimination offered by SBT and can be used as part of a hierarchical typing scheme that should maintain backwards compatibility and increase discrimination where necessary. This analysis will be useful for the ESGLI to design a scheme that has the potential to become the new gold standard typing method for L. pneumophila. Copyright © 2016 David et al.
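The indices of discrimination quoted above (0.940 for SBT, up to 0.999 for the SNP-based method) are conventionally computed as Simpson's index of discrimination: the probability that two isolates drawn at random from the panel receive different types. A minimal sketch over a hypothetical isolate panel (not the study's data):

```python
from collections import Counter

def discrimination_index(type_assignments):
    """Simpson's index of discrimination:
    D = 1 - (1 / (N * (N - 1))) * sum over types of n_j * (n_j - 1)."""
    n = len(type_assignments)
    counts = Counter(type_assignments).values()
    return 1 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Hypothetical panel: a scheme that subdivides a common ST discriminates better
sbt_types = ["ST1"] * 6 + ["ST2"] * 2 + ["ST3"] * 2
wgs_types = ["ST1a"] * 3 + ["ST1b"] * 3 + ["ST2"] * 2 + ["ST3"] * 2
print(round(discrimination_index(sbt_types), 3),
      round(discrimination_index(wgs_types), 3))  # 0.622 0.822
```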
Predicting aphasia type from brain damage measured with structural MRI.
Yourganov, Grigori; Smith, Kimberly G; Fridriksson, Julius; Rorden, Chris
2015-12-01
Chronic aphasia is a common consequence of a left-hemisphere stroke. Since the early insights by Broca and Wernicke, studying the relationship between the loci of cortical damage and patterns of language impairment has been one of the concerns of aphasiology. We utilized multivariate classification in a cross-validation framework to predict the type of chronic aphasia from the spatial pattern of brain damage. Our sample consisted of 98 patients with five types of aphasia (Broca's, Wernicke's, global, conduction, and anomic), classified based on scores on the Western Aphasia Battery (WAB). Binary lesion maps were obtained from structural MRI scans (obtained at least 6 months poststroke, and within 2 days of behavioural assessment); after spatial normalization, the lesions were parcellated into a disjoint set of brain areas. The proportion of damage to the brain areas was used to classify patients' aphasia type. To create this parcellation, we relied on five brain atlases; our classifier (support vector machine - SVM) could differentiate between different kinds of aphasia using any of the five parcellations. In our sample, the best classification accuracy was obtained when using a novel parcellation that combined two previously published brain atlases, with the first atlas providing the segmentation of grey matter, and the second atlas used to segment the white matter. For each aphasia type, we computed the relative importance of different brain areas for distinguishing it from other aphasia types; our findings were consistent with previously published reports of lesion locations implicated in different types of aphasia. Overall, our results revealed that automated multivariate classification could distinguish between aphasia types based on damage to atlas-defined brain areas. Copyright © 2015 Elsevier Ltd. All rights reserved.
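The cross-validated classification pipeline described above can be sketched in leave-one-out form. Note this sketch substitutes a nearest-centroid classifier for the paper's SVM to stay dependency-free, and the lesion-proportion vectors, region count, and labels are all hypothetical:

```python
import math

def loo_accuracy(X, y):
    """Leave-one-out cross-validation with a nearest-centroid classifier
    (a stand-in for the paper's SVM). Each row of X holds the proportion of
    damage to each atlas-defined brain area; y holds the aphasia types."""
    correct = 0
    for i in range(len(X)):
        # build per-class centroids from every patient except the held-out one
        by_class = {}
        for j, label in enumerate(y):
            if j != i:
                by_class.setdefault(label, []).append(X[j])
        centroids = {lab: [sum(col) / len(col) for col in zip(*rows)]
                     for lab, rows in by_class.items()}
        pred = min(centroids, key=lambda lab: math.dist(X[i], centroids[lab]))
        correct += pred == y[i]
    return correct / len(X)

# Hypothetical lesion-proportion vectors over 3 illustrative regions
X = [[0.8, 0.1, 0.0], [0.7, 0.2, 0.1], [0.9, 0.0, 0.1],
     [0.1, 0.8, 0.1], [0.0, 0.9, 0.2], [0.2, 0.7, 0.0]]
y = ["Broca", "Broca", "Broca", "Wernicke", "Wernicke", "Wernicke"]
print(loo_accuracy(X, y))  # 1.0 on this cleanly separable toy data
```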
Gracie, David J; Hamlin, P John; Ford, Alexander C
2018-05-01
The impact of irritable bowel syndrome (IBS)-type symptoms on the natural history of inflammatory bowel disease (IBD) is uncertain. We aimed to address this in a longitudinal study of secondary care patients. Longitudinal disease activity was defined by disease flare, escalation of medical therapy, hospitalization, or intestinal resection. The number of investigations performed and clinics attended determined healthcare utilization. Psychological well-being and quality of life were assessed using validated questionnaires. These outcomes were compared over a minimum period of 2 years between patients reporting IBS-type symptoms and patients with quiescent disease, occult inflammation, and active disease at baseline. In 360 IBD patients, there were no differences in longitudinal disease activity between patients with IBS-type symptoms and patients with quiescent disease or occult inflammation. Disease flare and escalation of medical therapy were more common in patients with active disease than in patients with IBS-type symptoms (hazard ratio (HR) = 3.16; 95% confidence interval (CI) 1.93-5.19 and HR = 3.24; 95% CI 1.98-5.31, respectively). A greater number of investigations were performed in patients with IBS-type symptoms than in those with quiescent disease (P = 0.008), but not compared with patients with occult inflammation or active disease. Anxiety, depression, and somatization scores at follow-up were higher, and quality-of-life scores lower, in patients with IBS-type symptoms than in patients with quiescent disease, but were similar to those in patients with active disease. IBS-type symptoms in IBD were associated with increased healthcare utilization, psychological comorbidity, and reduced quality of life, but not with adverse disease activity outcomes during extended follow-up.
Applicability of Type A/B alcohol dependence in the general population.
Tam, Tammy W; Mulia, Nina; Schmidt, Laura A
2014-05-01
This study examined the concurrent and predictive validity of Type A/B alcohol dependence in the general population-a typology developed in clinical populations to gauge severity of dependence. Data were drawn from Waves 1 and 2 of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). The sample included 1,172 alcohol-dependent drinkers at baseline who were reinterviewed three years later. Latent class analysis was used to derive Type A/B classification using variables replicating the original Type A/B typology. Predictive validity of the Type A/B classification was assessed by multivariable linear and logistic regressions. A two-class solution consistent with Babor's original Type A/B typology adequately fit the data. Type B alcoholics in the general population, compared to Type As, had higher alcohol severity and more co-occurring drug, mental, and physical health problems. In the absence of treatment services utilization, Type B drinkers had two times the odds of being alcohol dependent three years later. Among those who utilized alcohol treatment services, Type B membership was predictive of heavy drinking and drug dependence, but not alcohol dependence, three years later. Findings suggest that Type A/B classification is both generalizable to, and valid within, the US general population of alcohol dependent drinkers. Results highlight the value of treatment for mitigating the persistence of dependence among Type B alcoholics in the general population. Screening for markers of vulnerability to Type B dependence could be of clinical value for health care providers to determine appropriate intervention. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
2011-01-01
Background: Early detection of common mental disorders, such as depression and anxiety, among children and adolescents requires the use of validated, culturally sensitive, and developmentally appropriate screening instruments. The Arab region has a high proportion of youth, yet Arabic-language screening instruments for mental disorders among this age group are virtually absent. Methods: We carried out construct and clinical validation of the recently developed Arab Youth Mental Health (AYMH) scale as a screening tool for depression/anxiety. The scale was administered to 10-14 year old children attending a social service center in Beirut, Lebanon (N = 153). The clinical assessment was conducted by a child and adolescent clinical psychiatrist employing the DSM-IV criteria. We tested the scale's sensitivity, specificity, and internal consistency. Results: Scale scores were generally significantly associated with how participants responded to standard questions on health, mental health, and happiness, indicating good construct validity. The results revealed that the scale exhibited good internal consistency (Cronbach's alpha = 0.86) and specificity (79%). However, it exhibited moderate sensitivity for girls (71%) and poor sensitivity for boys (50%). Conclusions: The AYMH scale is useful as a screening tool for general mental health states and a valid screening instrument for common mental disorders among girls. It is not a valid instrument for detecting depression and anxiety among boys in an Arab culture. PMID:21435213
Modelling guidelines--terminology and guiding principles
NASA Astrophysics Data System (ADS)
Refsgaard, Jens Christian; Henriksen, Hans Jørgen
2004-01-01
Some scientists argue, with reference to Popper's scientific philosophical school, that models cannot be verified or validated. Other scientists and many practitioners nevertheless use these terms, but with very different meanings. In response to a growing number of examples of model malpractice and mistrust of the credibility of models, several modelling guidelines have been elaborated in recent years with the aim of improving the quality of modelling studies. This gap between the lack of consensus in the scientific community and the strongly perceived need for commonly agreed modelling guidelines constrains the optimal use and benefits of models. This paper proposes a framework for quality assurance guidelines, including a consistent terminology and a foundation for a methodology bridging the gap between scientific philosophy and pragmatic modelling. A distinction is made between the conceptual model, the model code and the site-specific model. A conceptual model is subject to confirmation or falsification, like scientific theories. A model code may be verified within given ranges of applicability and ranges of accuracy, but it can never be universally verified. Similarly, a model may be validated, but only with reference to site-specific applications and to pre-specified performance (accuracy) criteria. Thus, a model's validity will always be limited in terms of space, time, boundary conditions and types of application. This implies a continuous interaction between manager and modeller in order to establish suitable accuracy criteria and predictions associated with uncertainty analysis.
Chastek, Benjamin J; Oleen-Burkey, Merrikay; Lopez-Bresnahan, Maria V
2010-01-01
Relapse is a common measure of disease activity in relapsing-remitting multiple sclerosis (MS). The objective of this study was to test the content validity of an operational algorithm for detecting relapse in claims data. A claims-based relapse detection algorithm was tested by comparing its detection rate over a 1-year period with relapses identified based on medical chart review. According to the algorithm, MS patients in a US healthcare claims database who had either (1) a primary claim for MS during hospitalization or (2) a corticosteroid claim following a MS-related outpatient visit were designated as having a relapse. Patient charts were examined for explicit indication of relapse or care suggestive of relapse. Positive and negative predictive values were calculated. Medical charts were reviewed for 300 MS patients, half of whom had a relapse according to the algorithm. The claims-based criteria correctly classified 67.3% of patients with relapses (positive predictive value) and 70.0% of patients without relapses (negative predictive value; kappa 0.373: p < 0.001). Alternative algorithms did not improve on the predictive value of the operational algorithm. Limitations of the algorithm include lack of differentiation between relapsing-remitting MS and other types, and that it does not incorporate measures of function and disability. The claims-based algorithm appeared to successfully detect moderate-to-severe MS relapse. This validated definition can be applied to future claims-based MS studies.
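The reported validation figures can be reproduced from a 2×2 confusion table. The counts below are reconstructed (hypothetical, but internally consistent) from the abstract's numbers: 300 charts, 150 algorithm-flagged relapses, 67.3% positive predictive value and 70.0% negative predictive value; the kappa of 0.373 then follows directly.

```python
# Counts reconstructed from the reported rates (not taken from the study data):
tp, fp = 101, 49    # algorithm-positive: chart-confirmed relapse vs. not
fn, tn = 45, 105    # algorithm-negative: missed relapse vs. true negative
n = tp + fp + fn + tn

ppv = tp / (tp + fp)    # positive predictive value
npv = tn / (tn + fn)    # negative predictive value

# Cohen's kappa: observed agreement corrected for chance agreement
po = (tp + tn) / n
pe = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
kappa = (po - pe) / (1 - pe)

print(f"PPV={ppv:.3f}  NPV={npv:.3f}  kappa={kappa:.3f}")
# PPV=0.673  NPV=0.700  kappa=0.373 -- matching the reported values
```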
A defense of the common morality.
Beauchamp, Tom L
2003-09-01
Phenomena of moral conflict and disagreement have led writers in ethics to two antithetical conclusions: either valid moral distinctions hold universally, or they hold only relative to a particular and contingent moral framework and so cannot be applied with universal validity. Responding to three articles in this issue of the Journal that criticize his previously published views on the common morality, the author maintains that one can consistently deny universality to some justified moral norms and claim universality for others. Universality is located in the common morality, and nonuniversality in other parts of the moral life, called "particular moralities." The existence of universal moral standards is defended in terms of: (1) a theory of the objectives of morality, (2) an account of the norms that achieve those objectives, and (3) an account of normative justification (both pragmatic and coherentist).
Bialas, Andrzej
2010-01-01
The paper discusses the security issues of intelligent sensors that are able to measure and process data and communicate with other information technology (IT) devices or systems. Such sensors are often used in high-risk applications. To improve their robustness, sensor systems should be developed in a rigorous way that provides assurance. One such assurance methodology is the Common Criteria (ISO/IEC 15408), used for IT products and systems. The contribution of the paper is a Common Criteria-compliant, pattern-based method for the security development of intelligent sensors. The paper concisely presents this method and its evaluation on a sensor detecting methane in a mine, focusing on the definition and solution of the intelligent sensor's security problem. The aim of the validation is to evaluate and improve the introduced method.
EPICS Input Output Controller (IOC) Record Reference Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, J.B.; Kraimer, M.R.
1994-12-01
This manual describes all supported EPICS record types. The first chapter gives an introduction and describes the field summary table. The second chapter describes the fields in database common, i.e. the fields that are present in every record type. The third chapter describes the input and output fields that are common to many record types and have the same usage wherever they appear. Following the third chapter is a separate chapter for each record type, containing a description of all the fields for that record type except those in database common.
ERIC Educational Resources Information Center
Inglis, Sandra Cheldelin
An instrument to measure student reports of perceived teacher invitations and of teacher behaviors traditionally considered effective was developed and validated. Invitational (I-type) factors and effective (E-type) factors were correlated with academic achievement. Scale items were suggested by the literature, and then rated and categorized by…
Unachukwu, Uchenna J; Ahmed, Selena; Kavalier, Adam; Lyles, James T; Kennelly, Edward J
2010-08-01
Recent investigations have associated white teas with anti-carcinogenic, immune-boosting, and antioxidative properties that may impact human health in a manner comparable to green teas. An in-depth chemical analysis of white tea types was conducted to quantify polyphenols and antioxidant potential of 8 commercially available white teas, and to compare them to green tea. Extraction and HPLC protocols were optimized and validated for the quantification of 9 phenolic and 3 methylxanthine compounds to examine inter- and intra-variation in white and green tea types and subtypes. A sampling strategy was devised to assess various subtypes procured from different commercial sources. Variation in antioxidant activity and total phenolic content (TPC) of both tea types was further assessed by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and Folin-Ciocalteu (F-C) assays, respectively. Total catechin content (TCC) for white teas ranged widely, from 14.40 to 369.60 mg/g of dry plant material for water extracts and 47.16 to 163.94 mg/g for methanol extracts. TCC for green teas also ranged more than 10-fold, from 21.38 to 228.20 mg/g of dry plant material for water extracts and 32.23 to 141.24 mg/g for methanol extracts. These findings indicate that statements suggesting a hierarchical order of catechin content among tea types are inconclusive and should be made with attention to a sampling strategy that specifies the tea subtype and its source. Certain white teas have quantities of total catechins comparable to some green teas, but lower antioxidant capacity, suggesting that white teas have fewer non-catechin antioxidants present. Practical Application: In this investigation white and green teas were extracted in ways that mimic common tea preparation practices, and their chemical profiles were determined using validated analytical chemistry methods. The results suggest certain green and white tea types have comparable levels of catechins with potential health-promoting qualities.
Specifically, the polyphenolic content of green teas was found to be similar to certain white tea varieties, which makes the latter tea type a potential substitute for people interested in consuming polyphenols for health reasons. Moreover, this study is among the first to demonstrate the effect subtype sampling, source of procurement, cultivation, and processing practices have on the final white tea product, as such analysis has previously been mostly carried out on green teas.
ERIC Educational Resources Information Center
He, Jian-Ping; Burstein, Marcy; Schmitz, Anja; Merikangas, Kathleen R.
2013-01-01
The Strengths and Difficulties Questionnaire (SDQ) is one of the most commonly used instruments for screening psychopathology in children and adolescents. This study evaluated the hypothesized five-factor structure of the SDQ and examined its convergent validity against comprehensive clinical diagnostic assessments. Data were derived from the…
A Cross-Validation Study of the School Attitude Assessment Survey (SAAS).
ERIC Educational Resources Information Center
McCoach, D. Betsy
Factors commonly associated with underachievement in the research literature include low self-concept, low self-motivation/self-regulation, negative attitude toward school, and negative peer influence. This study attempts to isolate these four factors within a secondary school population. The purpose of the study was to design a valid and reliable…
The Development, Evaluation, and Validation of a Financial Stress Scale for Undergraduate Students
ERIC Educational Resources Information Center
Northern, Jebediah J.; O'Brien, William H.; Goetz, Paul W.
2010-01-01
Financial stress is commonly experienced among college students and is associated with adverse academic, mental health, and physical health outcomes. Surprisingly, no validated measures of financial stress have been developed for undergraduate populations. The present study was conducted to generate and evaluate a measure of financial stress for…
Repeatability and Validity of the Combined Arm-Leg (Cruiser) Ergometer
ERIC Educational Resources Information Center
Simmelink, Elisabeth K.; Wempe, Johan B.; Geertzen, Jan H. B.; Dekker, Rienk
2009-01-01
The measurement of physical fitness of lower limb amputees is difficult, as the commonly used ergometer tests have limitations. A combined arm-leg (Cruiser) ergometer might be valuable. The aim of this study was to establish the repeatability and validity of the combined arm-leg (Cruiser) ergometer. Thirty healthy volunteers carried out three…
ERIC Educational Resources Information Center
Honey, Emma; McConachie, Helen; Turner, Michelle; Rodgers, Jacqui
2012-01-01
The repetitive behaviour questionnaire (RBQ) (Turner, 1995) is one of the three most commonly used interview/questionnaire measures of repetitive behaviour (Honey et al., in preparation). Despite this there is a scarcity of information concerning its structure, reliability and validity. The psychometric properties of the RBQ were examined when…
miR-Sens--a retroviral dual-luciferase reporter to detect microRNA activity in primary cells.
Beillard, Emmanuel; Ong, Siau Chi; Giannakakis, Antonis; Guccione, Ernesto; Vardy, Leah A; Voorhoeve, P Mathijs
2012-05-01
MicroRNA-mRNA interactions are commonly validated and deconstructed in cell lines transfected with luciferase reporters. However, due to cell type-specific variations in microRNA or RNA-binding protein abundance, such assays may not reliably reflect microRNA activity in other cell types that are less easily transfected. In order to measure miRNA activity in primary cells, we constructed miR-Sens, a MSCV-based retroviral vector that encodes both a Renilla luciferase reporter gene controlled by microRNA binding sites in its 3' UTR and a Firefly luciferase normalization gene. miR-Sens sensors can be efficiently transduced in primary cells such as human fibroblasts and mammary epithelial cells, and allow the detection of overexpressed and, more importantly, endogenous microRNAs. Notably, we find that the relative luciferase activity is correlated to the miRNA expression, allowing quantitative measurement of microRNA activity. We have subsequently validated the miR-Sens 3' UTR vectors with known human miRNA-372, miRNA-373, and miRNA-31 targets (LATS2 and TXNIP). Overall, we observe that miR-Sens-based assays are highly reproducible, allowing detection of the independent contribution of multiple microRNAs to 3' UTR-mediated translational control of LATS2. In conclusion, miR-Sens is a new tool for the efficient study of microRNA activity in primary cells or panels of cell lines. This vector will not only be useful for studies on microRNA biology, but also more broadly on other factors influencing the translation of mRNAs.
NASA Astrophysics Data System (ADS)
Ben Cheikh, Bassem; Bor-Angelier, Catherine; Racoceanu, Daniel
2017-03-01
Breast carcinomas are cancers that arise from the epithelial cells of the breast, which are the cells that line the lobules and the lactiferous ducts. Breast carcinoma is the most common type of breast cancer and can be divided into different subtypes based on architectural features and growth patterns, recognized during a histopathological examination. The tumor microenvironment (TME) is the cellular environment in which tumor cells develop. Being composed of various cell types having different biological roles, the TME is recognized as playing an important role in the progression of the disease. The architectural heterogeneity in breast carcinomas and the spatial interactions with the TME are, to date, not well understood. Developing a spatial model of tumor architecture and of spatial interactions with the TME can advance our understanding of tumor heterogeneity. Furthermore, generating synthetic histological datasets can contribute to validating and comparing analytical methods that are used in digital pathology. In this work, we propose a modeling method, based on mathematical morphology, that applies to different breast carcinoma subtypes and TME spatial distributions. The model relies on a few morphological parameters that give access to a large spectrum of breast tumor architectures and are able to differentiate ductal carcinoma in situ (DCIS) and histological subtypes of invasive carcinomas such as ductal (IDC) and lobular (ILC) carcinoma. In addition, a subset of the model's parameters controls the spatial distribution of the TME relative to the tumor. The model has been validated by comparing morphological features between real and simulated images.
van der Wal, Martijn; Bloemen, Monica; Verhaegen, Pauline; Tuinebreijer, Wim; de Vet, Henrica; van Zuijlen, Paul; Middelkoop, Esther
2013-01-01
Color measurements are an essential part of scar evaluation. Thus, vascularization (erythema) and pigmentation (melanin) are common outcome parameters in scar research. The aim of this study was to investigate the clinimetric properties and clinical feasibility of the Mexameter, Colorimeter, and the DSM II ColorMeter for objective measurements on skin and scars. Fifty scars with a mean age of 6 years (2 months to 53 years) were included. Reliability was tested using the single-measure interobserver intraclass correlation coefficient. Validity was determined by measuring the Pearson correlation with the Fitzpatrick skin type classification (for skin) and the Patient and Observer Scar Assessment Scale (for scar tissue). All three instruments provided reliable readings (intraclass correlation coefficient ≥ 0.83; confidence interval: 0.71-0.90) on normal skin and scar tissue. Parameters with the highest correlations with the Fitzpatrick classification were melanin (Mexameter), 0.72; ITA (Colorimeter), -0.74; and melanin (DSM II), 0.70. On scars, the highest correlations with the Patient and Observer Scar Assessment Scale vascularization scores were the following: erythema (Mexameter), 0.59; LAB2 (Colorimeter), 0.69; and erythema (DSM II), 0.66. For hyperpigmentation, the highest correlations were melanin (Mexameter), 0.75; ITA (Colorimeter), -0.80; and melanin (DSM II), 0.83. This study shows that all three instruments can provide reliable color data on skin and scars with a single measurement. The authors also demonstrated that they can assist in objective skin type classification. For scar assessment, the most valid parameters in each instrument were identified.
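The headline statistic above is a single-measure interobserver intraclass correlation coefficient. The sketch below computes the simpler one-way form, ICC(1,1), on hypothetical ratings; the study may well have used a two-way variant, so this is an illustration of the idea rather than a reproduction of its analysis.

```python
def icc_oneway(ratings):
    """One-way random-effects, single-measure ICC(1,1).

    ratings: list of rows, one per target (e.g. scar); columns are raters.
    """
    n = len(ratings)        # number of targets
    k = len(ratings[0])     # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-target and within-target mean squares
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfect agreement between three raters -> ICC of 1.0
perfect = [[2, 2, 2], [5, 5, 5], [9, 9, 9], [4, 4, 4]]
print(icc_oneway(perfect))            # 1.0

# Slightly noisy ratings -> ICC between 0 and 1
noisy = [[2, 3, 2], [5, 4, 6], [9, 8, 9], [4, 5, 4]]
print(round(icc_oneway(noisy), 2))    # 0.93
```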
Peterman, Karen; Withy, Kelley; Boulay, Rachel
2018-06-01
A common challenge in the evaluation of K-12 science education is identifying valid scales that are an appropriate fit for both a student's age and the educational outcomes of interest. Though many new scales have been validated in recent years, there is much to learn about the appropriate educational contexts and audiences for these measures. This study investigated two such scales, the DEVISE Self-Efficacy for Science scale and the Career Interest Questionnaire (CIQ), within the context of two related health sciences projects. Consistent patterns were found in the reliability of each scale across three age groups (middle school, high school, early college) and within the context of each project. As expected, self-efficacy and career interest, as measured through these scales, were found to be correlated. The pattern of results for CIQ scores was also similar to that reported in other literature. This study provides examples of how practitioners can validate established measures for new and specific contexts and provides some evidence to support the use of the scales studied in health science education contexts.
Ocular findings in patients with the Hermansky-Pudlak syndrome (types 1 and 3)
Jardón, Javier; Izquierdo, Natalio J.; Renta, Jessica Y; García-Rodríguez, Omar; Cadilla, Carmen L.
2014-01-01
Purpose: To describe and compare ocular findings in patients with Hermansky-Pudlak syndrome (HPS) types 1 and 3. Methods: This is a retrospective case series of 64 patients with HPS evaluated from 1999 to 2009 at an outpatient private ophthalmologic clinic. Patients underwent genetic analysis of selected albinism genes (tyrosinase and P genes) and HPS genes (HPS-1 and HPS-3) by screening for common mutations and exon sequencing with DNA screening. Descriptive and non-parametric statistical analyses were done. Results: Nearly 70% of the patients were homozygous for common Puerto Rican mutations in the HPS1 gene (16-BP DUP, 53.6%), while 30% had the 3904-BP DEL HPS3 gene mutation. BCVA was poorer in patients with type 1 HPS than in patients with type 3 HPS (p<0.001); esotropia was more common among patients with type 1 HPS (p<0.018), while exotropia was more common among patients with type 3 HPS. Total iris transillumination was more common in patients with type 1 HPS and minimal iris transillumination in patients with type 3 HPS (p<0.001). The maculae were translucent in patients with type 1 HPS, while patients with type 3 HPS had opaque maculae (p<0.001). Conclusions: Patients with type 1 HPS had poorer BCVA, an increased incidence of esotropia, and lighter iris and macular appearance. In contrast, patients with type 3 HPS had more exotropia. In addition, to our knowledge this is the largest series of type 3 HPS patients ever reported. PMID:24766090
Ball, Samuel A.; Nich, Charla; Rounsaville, Bruce J.; Eagan, Dorothy; Carroll, Kathleen M.
2013-01-01
The concurrent and predictive validity of 2 different methods of Millon Clinical Multiaxial Inventory–III subtyping (protocol sorting, cluster analysis) was evaluated in 125 recently detoxified opioid-dependent outpatients in a 12-week randomized clinical trial. Participants received naltrexone and relapse prevention group counseling and were assigned to 1 of 3 intervention conditions: (a) no-incentive vouchers, (b) incentive vouchers alone, or (c) incentive vouchers plus relationship counseling. Affective disturbance was the most common Axis I protocol-sorted subtype (66%), antisocial–narcissistic was the most common Axis II subtype (46%), and cluster analysis suggested that a 2-cluster solution (high vs. low psychiatric severity) was optimal. Predictive validity analyses indicated less symptom improvement for the higher problem subtypes, and patient treatment matching analyses indicated that some subtypes had better outcomes in the no-incentive voucher conditions. PMID:15301655
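A two-cluster "high vs. low psychiatric severity" solution of the kind described above can be illustrated with a minimal one-dimensional k-means. The severity scores below are hypothetical, not MCMI-III data, and real cluster analyses of multi-scale inventories operate on full profiles rather than a single score.

```python
def kmeans_1d(xs, iters=20):
    """Two-cluster 1-D k-means with deterministic init at the extremes."""
    c = [min(xs), max(xs)]
    for _ in range(iters):
        groups = ([], [])
        for x in xs:
            # index 1 (True) if x is closer to the upper centroid
            groups[abs(x - c[1]) < abs(x - c[0])].append(x)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    labels = [int(abs(x - c[1]) < abs(x - c[0])) for x in xs]
    return c, labels

scores = [12, 15, 11, 14, 13, 41, 45, 39, 44, 42]   # hypothetical severity scores
centroids, labels = kmeans_1d(scores)
print(centroids)   # [13.0, 42.2]
print(labels)      # [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```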
Army Communicator. Volume 37, Number 2, Summer 2012
2012-01-01
…solution will have to meet four criteria: FIPS 140-2 validated crypto; approved data-at-rest; Common Access Card enablement; and enterprise management… Common Access Cards, Federal Information Processing Standard 140-2 certifications, and software compliance are just a few of the… Acronyms: BMC – Brigade Modernization Command; CAC – Common Access Card; FIPS – Federal Information Processing Standard; GIG – Global Information Grid.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynoso, F; Cho, S
Purpose: To develop and validate a Monte Carlo (MC) model of a Phillips RT-250 orthovoltage unit to test various beam spectrum modulation strategies for in vitro/in vivo studies. A model of this type would enable the production of unconventional beams from a typical orthovoltage unit for novel therapeutic applications such as gold nanoparticle-aided radiotherapy. Methods: The MCNP5 code system was used to create an MC model of the head of the RT-250 and a 30 × 30 × 30 cm³ water phantom. For the x-ray machine head, the current model includes the vacuum region, beryllium window, collimators, inherent filters and exterior steel housing. For increased computational efficiency, the primary x-ray spectrum from the target was calculated with a well-validated analytical software package. Calculated percentage-depth-dose (PDD) values and photon spectra were validated against experimental data from film and Compton-scatter spectrum measurements. Results: The model was validated for three common settings of the machine, namely 250 kVp (0.25 mm Cu), 125 kVp (2 mm Al), and 75 kVp (2 mm Al). The MC results for the PDD curves were compared with film measurements and showed good agreement at all depths, with a maximum difference of 4% around dmax and under 2.5% at all other depths. The primary photon spectra were also measured and compared with the MC results, showing reasonable agreement between the two and validating the input spectra and the final spectra predicted by the current MC model. Conclusion: The current MC model accurately predicted the dosimetric and spectral characteristics of each beam from the RT-250 orthovoltage unit, demonstrating its applicability and reliability for beam spectrum modulation tasks. It accomplished this without the need to model bremsstrahlung x-ray production from the target, while improving computational efficiency by at least two orders of magnitude. Supported by DOD/PCRP grant W81XWH-12-1-0198.
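The dosimetric validation above compares simulated and film-measured percentage-depth-dose (PDD) curves point by point. A sketch of that agreement check, with entirely hypothetical PDD values, looks like this:

```python
# Hypothetical PDD curves (%), sampled at a few depths (cm); not study data.
depths    = [0, 1, 2, 5, 10]
measured  = [100.0, 92.0, 84.0, 62.0, 38.0]   # film measurement
simulated = [100.0, 93.5, 83.0, 61.0, 37.2]   # Monte Carlo result

# Relative difference (%) at each depth; the study's acceptance criterion
# was a maximum difference of 4% near dmax and under 2.5% elsewhere.
diffs = [abs(m - s) / m * 100 for m, s in zip(measured, simulated)]
for d, x in zip(depths, diffs):
    print(f"depth {d:2d} cm: {x:.1f}% difference")
print(f"max PDD difference: {max(diffs):.1f}%")
```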
Liu, Xiaoli; Dai, Long; Chen, Bo; Feng, Nongping; Wu, Qianhui; Lin, Yonghai; Zhang, Lan; Tan, Dong; Zhang, Jinhua; Tu, Huijuan; Li, Changfeng; Wang, Wenjuan
2016-01-01
To evaluate the validity and reliability of the Diabetes Self-management Knowledge, Attitude, and Behavior Assessment Scale (DSKAB), we selected 460 community patients with diabetes and administered the scale, which had been refined through two rounds of the Delphi method and a pilot study. Investigators surveyed the patients face to face. By drawing lots, we randomly selected 25 community patients with diabetes for a repeat investigation after one week. The validity analyses included face validity, content validity, construct validity and discriminant validity. The reliability analyses included Cronbach's α coefficient, θ coefficient, Ω coefficient, split-half reliability and test-retest reliability. A total of 460 questionnaires were distributed; 442 were returned and 432 were valid. The score of the scale was 254.59 ± 28.90; the scores of the knowledge, attitude and behavior sub-scales were 82.44 ± 11.24, 63.53 ± 5.77 and 108.61 ± 17.55, respectively. The scale had excellent face validity and content validity. The correlation coefficients among the three sub-scales and the full scale ranged from 0.71 to 0.91, P<0.001. The cumulative variance contribution rate of the common factors of the scale and the three sub-scales ranged from 57.28% to 67.19%, exceeding the approved standard of 50%; there were 25 common factors, and 91 of the 98 items had factor loadings ≥0.40 on their relevant common factors, indicating good construct validity. The scores of the high and low groups on the three sub-scales were: knowledge, (91.12 ± 3.62) and (69.96 ± 11.20); attitude, (68.75 ± 4.51) and (58.79 ± 4.87); behavior, (129.38 ± 8.53) and (89.65 ± 11.34). The mean scores of the three sub-scales differed significantly between the high-score and low-score groups (t = -19.45, -16.24 and -30.29, respectively; P<0.001), indicating good discriminant validity. 
The Cronbach's α coefficient of the scale and the three sub-scales ranged from 0.79 to 0.93, the θ coefficient from 0.86 to 0.95, the Ω coefficient from 0.90 to 0.98, and the split-half reliability from 0.89 to 0.95. Test-retest reliability of the scale was 0.51; for the three sub-scales it ranged from 0.46 to 0.52, P<0.05. The validity and reliability of the Diabetes Self-management Knowledge, Attitude, and Behavior Assessment Scale are excellent; it is a suitable instrument to evaluate self-management in patients with diabetes.
Yucel, Cigdem; Taskin, Lale; Low, Lisa Kane
2015-12-01
Although obstetrical interventions are used commonly in Turkey, there is no standardized evidence-based assessment tool to evaluate maternity care outcomes. The Optimality Index-US (OI-US) is an evidence-based tool that was developed for the purpose of measuring aggregate perinatal care processes and outcomes against an optimal or best possible standard. This index has been validated and used in the Netherlands, the USA, and the UK to date. The objective of this study was to adapt the OI-US to assess maternity care outcomes in Turkey. Translation and back translation were used to develop the Optimality Index-Turkey (OI-TR) version. To evaluate the content validity of the OI-TR, an expert panel group (n=10) reviewed the items and evidence-based quality of the OI-TR for application in Turkey. Following the content validity process, the OI-TR was used to assess 150 healthy and 150 high-risk pregnant women who gave birth at a high-volume, urban maternity hospital in Turkey. The scores of the two groups were compared to assess the discriminant validity of the OI-TR. The percentage of agreement between two raters and the Kappa statistic were calculated to evaluate the reliability. Content validity was established for the OI-TR by an expert group. Discriminant validity was assessed by comparing the OI scores of healthy pregnant women (mean OI score=77.65%) and those of high-risk pregnant women (mean OI score=78.60%). The percentage of agreement between the two raters was 96.19%, and inter-rater agreement was established for each item in the OI-TR. The OI-TR is a valid and reliable tool that can be used to assess maternity care outcomes in Turkey. The results of this study indicate that although the risk statuses of the women differed, the type of care they received was essentially the same, as measured by the OI-TR. Care was not individualised based on risk and, for a majority of items, was inconsistent with evidence-based practice, which is not optimal.
Use of the OI-TR will provide a standardized way to assess maternity care processes and outcomes in Turkey, which can inform future research aimed at improving those outcomes. Copyright © 2015 Elsevier Ltd. All rights reserved.
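The two inter-rater reliability statistics used in this study, percentage of agreement and the Kappa statistic, can be sketched in a few lines. The binary item ratings below are hypothetical, not OI-TR data:

```python
# Sketch of percent agreement and Cohen's kappa for two raters on toy ratings.
from collections import Counter

def percent_agreement(r1, r2):
    return 100 * sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n        # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[c] * c2[c] for c in set(r1) | set(r2)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
print(percent_agreement(rater_a, rater_b))  # 90.0
print(round(cohens_kappa(rater_a, rater_b), 2))
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside the raw percentage; it is the more conservative of the two figures.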
Laganà, Luciana; Prilutsky, Roxanne R.
2016-01-01
Introduction Older women often experience various types of stressors, including the death of a spouse and associated financial stress (often with a lack of social support), emotional stress due to factors such as caregiving and being single, and the challenges of the aging process. These circumstances could produce or aggravate anxious symptomatology that can in turn compound the negative effects of aging. A brief scale of perceived stress that is not confounded with health status and covers multiple culturally relevant potential stressors is needed for quick use in busy medical settings. Aim To assess the reliability and the validity of an original stress scale designed to measure perceptions of stress beyond health status in a non-clinical convenience sample of community-dwelling older women. Method In this cross-sectional pilot investigation, via conducting item-total correlations and correlational tests of validity, we studied the psychometric properties of our measure using data from volunteer older subjects (mainly low-income and from non-Caucasian backgrounds). The domains covered by the nine items of the tool were selected based on a literature review of common stressors experienced by older adults, especially by older women. Data were collected face-to-face using a demographic list, a well-established depression measure, a brief posttraumatic stress disorder (PTSD) screener, and our 9-item stress tool. Primary outcomes: reliability and validity of the scale of older women’s non-medical stress. Secondary outcomes: demographic characteristics of the sample and correlations between stress items. Results Based on our sample of older women (N=40, mean age 71 years), good internal consistency between the items of the stress scale was found (Cronbach’s α=.66). The findings of the data analyses also revealed that our psychometric tool has good convergent validity with the PTSD screener (r=.53).
Moreover, in contrast with most other stress tools, it has strong discriminant validity (r=.11) with a well-validated depression scale. Conclusion Our results suggest that this new measure is psychometrically strong. Future research directions encompass using larger samples, ideally including older men with the modification of the scale’s name, as well as validating this tool against more measures. Clinical implications of our findings are briefly discussed. PMID:27390770
Hariharan, Prasanna; D’Souza, Gavin A.; Horner, Marc; Morrison, Tina M.; Malinauskas, Richard A.; Myers, Matthew R.
2017-01-01
A “credible” computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing “model credibility” is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a “threshold-based” validation approach that provides a well-defined acceptance criterion, a function of how close the simulation and experimental results are to the safety threshold, for establishing model validity. The validation criterion developed following the threshold approach is not only a function of the Comparison Error, E (the difference between experiments and simulations), but also takes into account the risk to patient safety posed by E. The method is applicable to scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood-contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate whether the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results (“S”) of velocity and viscous shear stress were compared with inter-laboratory experimental measurements (“D”). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by a direct comparison between CFD and experimental results using Student’s t-test.
However, following the threshold-based approach, a Student’s t-test comparing |S-D| and |Threshold-S| showed that, relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. For Re = 6500, however, at certain locations where the shear stress is close to the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in the experiments, simulations, and threshold or by increasing the sample size of the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility when evaluating medical devices. PMID:28594889
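The decision logic behind the threshold-based approach can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration: the paper's actual criterion is a Student's t-test over uncertainty-quantified datasets, whereas here a location simply passes when its comparison error |S-D| is smaller than its safety margin |Threshold-S|. All numbers, including the threshold value, are invented:

```python
# Sketch of the threshold-based acceptance idea: the same comparison error
# is tolerable far from the safety threshold but not close to it.
def threshold_check(sim, exp, threshold):
    """Return True when every location's comparison error fits inside its margin."""
    for s, d in zip(sim, exp):
        comparison_error = abs(s - d)        # |S - D|
        safety_margin = abs(threshold - s)   # |Threshold - S|
        if comparison_error >= safety_margin:
            return False
    return True

hemolysis_threshold = 600.0  # hypothetical viscous shear-stress threshold, Pa

# Simulated (S) vs measured (D) shear stress at several locations.
sim_low_re,  exp_low_re  = [210.0, 250.0, 330.0], [230.0, 240.0, 355.0]  # far below
sim_high_re, exp_high_re = [540.0, 580.0, 595.0], [560.0, 570.0, 620.0]  # near threshold

print(threshold_check(sim_low_re, exp_low_re, hemolysis_threshold))   # True
print(threshold_check(sim_high_re, exp_high_re, hemolysis_threshold)) # False
```

The two calls mirror the paper's finding: identical comparison errors validate the low-Re case but fail the high-Re case, where the simulation sits close to the hemolysis threshold.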
Development of a Content-Valid Standardized Orthopedic Assessment Tool (SOAT)
ERIC Educational Resources Information Center
Lafave, Mark; Katz, Larry; Butterwick, Dale
2008-01-01
Content validation of an instrument that measures student performance in OSCE-type practical examinations is a critical step in a tool's overall validity and reliability [Hopkins (1998), "Educational and Psychological Measurement and Evaluation" (8th ed.). Toronto: Allyn & Bacon]. The purpose of the paper is to outline the process…
Social Power and Forms of Change: Implications for Psychopolitical Validity
ERIC Educational Resources Information Center
Speer, Paul W.
2008-01-01
Prilleltensky's notion of psychopolitical validity elevates power as a key phenomenon of interest within community psychology. Importantly, two types of psychopolitical validity are articulated: epistemic--the explicit study of power, and transformative--understanding the role of power in social change. In this article, the author develops the…
Face Validity of Test and Acceptance of Generalized Personality Interpretations
ERIC Educational Resources Information Center
Delprato, Dennis J.
1975-01-01
The degree to which variations in the face validity of psychological tests affected students' willingness to accept personality interpretations was studied. Acceptance of personality interpretations was compared for four types of tests which varied in face validity. The relationship between judged accuracy and rated likability of the…
Designing the Nuclear Energy Attitude Scale.
ERIC Educational Resources Information Center
Calhoun, Lawrence; And Others
1988-01-01
Presents a refined method for designing a valid and reliable Likert-type scale to test attitudes toward the generation of electricity from nuclear energy. Discusses various tests of validity that were used on the nuclear energy scale. Reports results of administration and concludes that the test is both reliable and valid. (CW)
Multiple Versus Single Set Validation of Multivariate Models to Avoid Mistakes.
Harrington, Peter de Boves
2018-01-02
Validation of multivariate models is of current importance for a wide range of chemical applications. Although important, it is often neglected. The common practice is to use a single external validation set for evaluation. This approach is deficient and may mislead investigators with results that are specific to the single validation set of data. In addition, no statistics are available regarding the precision of a derived figure of merit (FOM). A statistical approach using bootstrapped Latin partitions is advocated. This validation method makes efficient use of the data because each object is used exactly once for validation. It was reviewed a decade earlier, but primarily for the optimization of chemometric models; this review presents the reasons it should be used for generalized statistical validation. Average FOMs with confidence intervals are reported, and powerful matched-sample statistics may be applied for comparing models and methods. Examples demonstrate the problems with single validation sets.
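The procedure advocated above can be sketched under simplifying assumptions: each bootstrap shuffles the objects into stratified ("Latin") partitions that preserve class proportions, every object is validated exactly once per bootstrap, and the figure of merit is averaged across bootstraps with a confidence interval. The data and the trivial nearest-mean classifier below are toy stand-ins for a real chemometric model:

```python
# Sketch of bootstrapped Latin partitions on a toy one-feature dataset.
import random
from statistics import mean, stdev

def latin_partitions(labels, n_parts, rng):
    """Assign object indices to partitions, keeping class proportions even."""
    parts = [[] for _ in range(n_parts)]
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    for idx in by_class.values():
        rng.shuffle(idx)                       # new random split each bootstrap
        for j, i in enumerate(idx):
            parts[j % n_parts].append(i)       # deal class members round-robin
    return parts

def nearest_mean_accuracy(x, y, train, test):
    means = {c: mean(x[i] for i in train if y[i] == c) for c in set(y)}
    hits = sum(min(means, key=lambda c: abs(x[i] - means[c])) == y[i] for i in test)
    return hits / len(test)

rng = random.Random(0)
x = [1.0, 1.2, 0.9, 1.1, 3.0, 3.2, 2.9, 3.1]   # one feature, two separable classes
y = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']

foms = []
for _ in range(20):                             # 20 bootstraps of 2 Latin partitions
    parts = latin_partitions(y, 2, rng)
    accs = [nearest_mean_accuracy(x, y, parts[1 - k], parts[k]) for k in (0, 1)]
    foms.append(mean(accs))                     # each object validated exactly once
ci = 1.96 * stdev(foms) / len(foms) ** 0.5
print(f"accuracy = {mean(foms):.2f} +/- {ci:.2f}")
```

Because the bootstraps yield a distribution of FOMs rather than a single number, an average with a confidence interval can be reported, and matched-sample statistics can compare two models bootstrap-by-bootstrap.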
S-type granitic magmas—petrogenetic issues, models and evidence
NASA Astrophysics Data System (ADS)
Clemens, J. D.
2003-04-01
Despite a perception that it represents a perverse divergence, it is perfectly possible to believe in the existence of S- and I-type granites (and the implications for the nature of their protoliths), and to disbelieve in the applicability of the restite-unmixing model for chemical variation in granitic magmas. White and Chappell erected the S-I classification with impeccable validity. The isotopic evidence demands contrasting source reservoirs for S- and I-type granitic magmas. However, the major advance was not the classification, but the recognition that highly contrasting parental materials must be involved in the genesis of granitic magmas. The restite-unmixing model is commonly seen as a companion to the S-I classification, but it is really a separate issue. This model implies that the compositions of granites 'image' those of their source rocks in a simple way. However, there are other equally valid models that can explain the data, and none of them represents a unique solution. The most cogent explanation for the high-grade metasedimentary enclaves in most S-type granites is that they represent mid-crustal xenoliths; restitic enclaves are either rare or absent. Inherited zircons in S-type rocks are certainly restitic. However, the occurrence of a substantial restitic zircon population does not imply an equally substantial restitic component in the rest of the rock. Zircon and zirconium behaviours are controlled by disequilibrium and kinetics, and Zr contents of granitic rocks can rarely be used to infer magma temperatures. Since the dominant ages among inherited zircons in Lachlan Fold Belt (LFB) S-type granites are Ordovician and Proterozoic, it seems likely that crust of this age, but geochemically different from the exposed rocks, not only underlies much of the LFB but also forms a component in the granite magma sources. The evidence is overwhelming that the dark, microgranular enclaves that occur in both S- and I-type granites are igneous in origin. 
They represent globules of quenched, more mafic magma mingled and modified by exchange with the host granitic magma. However, magma mixing does not appear to be a significant process affecting the chemical evolution of the host magmas. Likewise, the multicomponent mixing models erected for some granitic rock suites are mathematically nonunique and, in some cases, violate constraints from isotopic studies. S- and I-type magmas commonly retain their distinct identities. This suggests limited source mixing, limited magma mixing and limited wall-rock assimilation. Though intermediate types certainly exist, they are probably relatively minor in volume. Crystal fractionation probably plays the major role in the differentiation of very many granitic magmas, including most S-types, especially those emplaced at high crustal levels or in the volcanic environment. Minor mechanisms include magma mixing, wall-rock assimilation and restite unmixing. Isotopic variations within plutons and in granite suites could be caused by source heterogeneities, magma mixing, assimilation and even by isotopic disequilibrium. However, source heterogeneity, coupled with the inefficiency of magma mixing is probably the major cause of observed heterogeneity. Normal geothermal gradients are seldom sufficient to provide the necessary heat for partial melting of the crust, and crustal thickening likewise fails to provide sufficient heat. Generally, the mantle must be the major heat source. This might be provided through mantle upwelling and crustal thinning, and possibly through the intra- and underplating of mafic magmas. Upper crustal extension seems to have been common in regions undergoing granitic magmatism. Migmatites probably provide poor analogues of granite source regions because they are mostly formed by fluid-present reactions. Granitic magmas are mostly formed by fluid-absent processes. 
Where we do see rare evidence for arrested fluid-absent partial melting, the melt fraction is invariably concentrated into small shear zones, veinlets and small dykes. Thus, it seems likely that dyking is important in transporting granitic magma on a variety of scales and at many crustal levels. However, one major missing link in the chain is the mechanism by which melt fractions, in small-scale segregations occurring over a wide area, can be gathered and focused to efficiently feed much wider-spaced major magma conduits. Answers may lie in the geometry of the melting zones and in the tendency of younger propagating fractures to curve toward and merge with older ones. Self-organization almost certainly plays a role.
Demarco, Maria; Carter-Pokras, Olivia; Hyun, Noorie; Castle, Philip E; He, Xin; Dallal, Cher M; Chen, Jie; Gage, Julia C; Befano, Brian; Fetterman, Barbara; Lorey, Thomas; Poitras, Nancy; Raine-Bennett, Tina R; Wentzensen, Nicolas; Schiffman, Mark
2018-05-01
As cervical cancer screening shifts from cytology to human papillomavirus (HPV) testing, a major question is the clinical value of identifying individual HPV types. We aimed to validate Onclarity (Becton Dickinson Diagnostics, Sparks, MD), a nine-channel HPV test recently approved by the FDA, by assessing (i) the association of Onclarity types/channels with precancer/cancer; (ii) HPV type/channel agreement between the results of Onclarity and cobas (Roche Molecular Systems, Pleasanton, CA), another FDA-approved test; and (iii) Onclarity typing for all types/channels compared to typing results from a research assay (linear array [LA]; Roche). We compared Onclarity to histopathology, cobas, and LA. We tested a stratified random sample (n = 9,701) of discarded routine clinical specimens that had tested positive by Hybrid Capture 2 (HC2; Qiagen, Germantown, MD). A subset had already been tested by cobas and LA (n = 1,965). Cervical histopathology was ascertained from electronic health records. Hierarchical Onclarity channels showed a significant linear association with histological severity. Onclarity and cobas had excellent agreement on partial typing of HPV16, HPV18, and the other 12 types as a pool (sample-weighted kappa value of 0.83); cobas was slightly more sensitive for HPV18 and slightly less sensitive for the pooled high-risk types. Typing by Onclarity showed excellent agreement with types and groups of types identified by LA (kappa values from 0.80 for HPV39/68/35 to 0.97 for HPV16). Onclarity typing results corresponded well to histopathology and to an already validated HPV DNA test and could provide additional clinical typing if such discrimination is determined to be clinically desirable. This is a work of the U.S. Government and is not subject to copyright protection in the United States. Foreign copyrights may apply.
NASA Astrophysics Data System (ADS)
Wahyuni, A.
2018-05-01
This research aimed to find out whether the cooperative learning model of type Student Team Achievement Division (STAD) is more effective than cooperative learning of type Think-Pair-Share (TPS) in SMP Negeri 7 Yogyakarta. This was a quasi-experimental study using two experimental groups. The population was all seventh-grade students in SMP Negeri 7 Yogyakarta, comprising 5 classes; 2 classes were selected randomly from the population as the sample. The data-collection instrument was a description test. Instrument validity was assessed through content validity and construct validity, while instrument reliability was measured using the Cronbach's alpha formula. To investigate the effectiveness of cooperative learning types STAD and TPS with respect to students' mathematical communication skills, the data were analyzed with a one-sample test; the effectiveness of the two types was then compared using a t-test. A normality test was not conducted because the sample comprised more than 30 students, while homogeneity was tested using the Kolmogorov-Smirnov test. The analysis was performed at the 5% significance level. The results show the following: 1) the cooperative learning models of types STAD and TPS are both effective in terms of the mathematical communication skills of junior high school students; 2) the STAD-type cooperative learning model is more effective than the TPS-type model in terms of those skills.
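The between-group comparison described in this record can be sketched with an independent-samples t statistic. The scores below are invented, and the pooled-variance form is an assumption, since the abstract does not specify which t-test variant was used:

```python
# Sketch of a pooled-variance two-sample t statistic on invented scores.
from statistics import mean, variance
from math import sqrt

def pooled_t(a, b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

stad = [78, 82, 75, 88, 90, 73, 85, 80]  # hypothetical STAD-group scores
tps  = [70, 74, 68, 79, 77, 65, 72, 71]  # hypothetical TPS-group scores

t = pooled_t(stad, tps)
print(round(t, 2))
```

A t statistic well beyond the critical value at the 5% level (about 2.14 for 14 degrees of freedom) would support the abstract's conclusion that one model outperforms the other.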
Lebel, Alexandre; Daepp, Madeleine I. G.; Block, Jason P.; Walker, Renée; Lalonde, Benoît; Kestens, Yan; Subramanian, S. V.
2017-01-01
This paper reviews studies of the validity of commercially available business (CAB) data on food establishments (“the foodscape”), offering a meta-analysis of characteristics associated with CAB quality and a case study evaluating the performance of commonly-used validity indicators describing the foodscape. Existing validation studies report a broad range in CAB data quality, although most studies conclude that CAB quality is “moderate” to “substantial”. We conclude that current studies may underestimate the quality of CAB data. We recommend that future validation studies use density-adjusted and exposure measures to offer a more meaningful characterization of the relationship of data error with spatial exposure. PMID:28358819
Evolving forecasting classifications and applications in health forecasting
Soyiri, Ireneous N; Reidpath, Daniel D
2012-01-01
Health forecasting forewarns the health community about future health situations and disease episodes so that health systems can better allocate resources and manage demand. The tools used for developing health forecasts and measuring their accuracy and validity are commonly not well defined, although they are usually adapted forms of statistical procedures. This review identifies previous typologies used in classifying the forecasting methods commonly applied to health conditions or situations. It then discusses the strengths and weaknesses of these methods and presents the choices available for measuring the accuracy of health-forecasting models, including a note on discrepancies in the modes of validation. PMID:22615533