Choi, William; Tong, Xiuli; Cain, Kate
2016-08-01
This 1-year longitudinal study examined the role of Cantonese lexical tone sensitivity in predicting English reading comprehension and the pathways underlying their relation. Multiple measures of Cantonese lexical tone sensitivity, English lexical stress sensitivity, Cantonese segmental phonological awareness, general auditory sensitivity, English word reading, and English reading comprehension were administered to 133 Cantonese-English unbalanced bilingual second graders. Structural equation modeling analysis identified transfer of Cantonese lexical tone sensitivity to English reading comprehension. This transfer was realized through a direct pathway via English stress sensitivity and also an indirect pathway via English word reading. These results suggest that prosodic sensitivity is an important factor influencing English reading comprehension and that it needs to be incorporated into theoretical accounts of reading comprehension across languages. Copyright © 2016 Elsevier Inc. All rights reserved.
Kim, Unyong; Oh, Myung Jin; Seo, Youngsuk; Jeon, Yinae; Eom, Joon-Ho; An, Hyun Joo
2017-09-01
Glycosylation of recombinant human erythropoietins (rhEPOs) is significantly associated with the drug's quality and potency. Thus, comprehensive characterization of glycosylation is vital to assess biotherapeutic quality and establish the equivalency of biosimilar rhEPOs. However, current glycan analysis mainly focuses on N-glycans because analytical tools to liberate O-glycans with high sensitivity are lacking. We developed a selective and sensitive method to profile native O-glycans on rhEPOs. O-glycosylation on rhEPO, including O-acetylation on a sialic acid, was comprehensively characterized. Details such as O-glycan structure and O-acetyl-modification site were obtained from tandem MS. This method may be applied to QC and batch analysis of not only rhEPOs but also other biotherapeutics bearing multiple O-glycosylations.
Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model
Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance
2014-01-01
Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2015-05-01
Sensitivity analysis is an essential paradigm in Earth and Environmental Systems modeling. However, the term "sensitivity" has a clear definition, based in partial derivatives, only when specified locally around a particular point (e.g., optimal solution) in the problem space. Accordingly, no unique definition exists for "global sensitivity" across the problem space, when considering one or more model responses to different factors such as model parameters or forcings. A variety of approaches have been proposed for global sensitivity analysis, based on different philosophies and theories, and each of these formally characterizes a different "intuitive" understanding of sensitivity. These approaches focus on different properties of the model response at a fundamental level and may therefore lead to different (even conflicting) conclusions about the underlying sensitivities. Here we revisit the theoretical basis for sensitivity analysis, summarize and critically evaluate existing approaches in the literature, and demonstrate their flaws and shortcomings through conceptual examples. We also demonstrate the difficulty involved in interpreting "global" interaction effects, which may undermine the value of existing interpretive approaches. With this background, we identify several important properties of response surfaces that are associated with the understanding and interpretation of sensitivities in the context of Earth and Environmental System models. Finally, we highlight the need for a new, comprehensive framework for sensitivity analysis that effectively characterizes all of the important sensitivity-related properties of model response surfaces.
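The core point of this abstract, that derivative-based sensitivity is only defined locally and can therefore imply different factor rankings at different points in the problem space, can be illustrated with a small sketch. The two-factor model and the evaluation points below are hypothetical, chosen only to make the rank reversal visible:

```python
import numpy as np

def model(x):
    # Hypothetical two-factor response surface (illustrative only)
    return np.sin(x[0]) + 0.1 * x[1] ** 2

def local_sensitivity(f, x, h=1e-6):
    """Central finite-difference partial derivatives at a single point."""
    x = np.asarray(x, dtype=float)
    grads = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        grads[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grads

# Near the origin factor 0 looks dominant; elsewhere factor 1 does.
g_a = local_sensitivity(model, [0.0, 1.0])        # |df/dx0| = 1.0, |df/dx1| = 0.2
g_b = local_sensitivity(model, [np.pi / 2, 5.0])  # |df/dx0| = 0.0, |df/dx1| = 1.0
```

The same model thus yields conflicting "most important factor" answers depending on where the derivative is taken, which is exactly why a global definition of sensitivity is non-unique.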
Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende
2014-01-01
Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY, that consider carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct the parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporated a comprehensive R package—Flexible Modeling Framework (FME) and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the plant production-related parameters (e.g., PPDF1 and PRDX) are most sensitive to the model cost function. Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of target variables. Global sensitivity and uncertainty analysis indicate that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on variables and seasons. This study also demonstrates that using the cutting-edge R functions such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.
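The variogram analogy at the heart of VARS can be sketched in a few lines: the directional variogram of the response surface along one factor measures how much the response changes over perturbations of a given scale in that factor. This is a minimal illustration of the concept, not the VARS framework itself; the two-factor model is hypothetical:

```python
import numpy as np

def gamma(f, dim, factor, h, n=4000, seed=0):
    """Directional variogram of the response surface along one factor:
    gamma(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2], x uniform on the unit cube."""
    rng = np.random.default_rng(seed)
    x = rng.random((n, dim)) * (1.0 - h)      # keep both points inside [0, 1]^dim
    xh = x.copy()
    xh[:, factor] += h
    d = np.apply_along_axis(f, 1, xh) - np.apply_along_axis(f, 1, x)
    return 0.5 * np.mean(d ** 2)

def model(x):
    # Hypothetical response: factor 0 oscillates, factor 1 is a gentle trend
    return np.sin(2 * np.pi * x[0]) + 0.5 * x[1]

g0 = gamma(model, 2, 0, h=0.05)   # large: factor 0 varies strongly at this scale
g1 = gamma(model, 2, 1, h=0.05)   # exactly 0.5 * (0.5 * 0.05)**2 for a linear factor
```

Evaluating gamma across a range of h values is what yields scale-dependent sensitivity information; derivative-based and variance-based indices correspond to the small-h and large-h ends of that spectrum.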
Optimization Issues with Complex Rotorcraft Comprehensive Analysis
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Tarzanin, Frank J.; Hirsh, Joel E.; Young, Darrell K.
1998-01-01
This paper investigates the use of the general purpose automatic differentiation (AD) tool called Automatic Differentiation of FORTRAN (ADIFOR) as a means of generating sensitivity derivatives for use in Boeing Helicopter's proprietary comprehensive rotor analysis code (VII). ADIFOR transforms an existing computer program into a new program that performs a sensitivity analysis in addition to the original analysis. In this study both the pros (exact derivatives, no step-size problems) and cons (more CPU, more memory) of ADIFOR are discussed. The size (based on the number of lines) of the VII code after ADIFOR processing increased by 70 percent and resulted in substantial computer memory requirements at execution. The ADIFOR derivatives took about 75 percent longer to compute than the finite-difference derivatives. However, the ADIFOR derivatives are exact and are not functions of step-size. The VII sensitivity derivatives generated by ADIFOR are compared with finite-difference derivatives. The ADIFOR and finite-difference derivatives are used in three optimization schemes to solve a low vibration rotor design problem.
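The trade-off described above, exact automatic-differentiation derivatives versus step-size-dependent finite differences, can be sketched with a toy forward-mode AD type. This is a minimal conceptual illustration, not how ADIFOR transforms FORTRAN source:

```python
import math

class Dual:
    """Minimal forward-mode AD value: carries f and df/dx together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def sin(x):
    if isinstance(x, Dual):                      # chain rule for sin
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)
    return math.sin(x)

def f(x):                       # toy response: x^2 + sin(x)
    return x * x + sin(x)

ad = f(Dual(1.0, 1.0)).dot      # exact: 2 + cos(1), independent of any step size
h = 1e-5
fd = (f(1.0 + h) - f(1.0 - h)) / (2 * h)   # finite difference: accuracy depends on h
```

As in the paper's findings, the AD derivative costs extra bookkeeping on every operation but is immune to the step-size tuning that finite differencing requires.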
Comprehensive analysis of transport aircraft flight performance
NASA Astrophysics Data System (ADS)
Filippone, Antonio
2008-04-01
This paper reviews the state of the art in comprehensive performance codes for fixed-wing aircraft. The importance of system analysis in flight performance is discussed. The paper highlights the role of aerodynamics, propulsion, flight mechanics, aeroacoustics, flight operation, numerical optimisation, stochastic methods and numerical analysis. The latter discipline is used to investigate the sensitivities of the sub-systems to uncertainties in critical state parameters or functional parameters. The paper critically discusses the data used for performance analysis, and the areas where progress is required. Comprehensive analysis codes can be used for mission fuel planning, envelope exploration, competition analysis, a wide variety of environmental studies, marketing analysis, aircraft certification and conceptual aircraft design. A comprehensive program that uses the multi-disciplinary approach for transport aircraft is presented. The model includes a geometry deck, a separate engine input deck with the main parameters, a database of engine performance from an independent simulation, and an operational deck. The comprehensive code has modules for deriving the geometry from bitmap files, an aerodynamics model for all flight conditions, a flight mechanics model for flight envelopes and mission analysis, an aircraft noise model and engine emissions. The model is validated at different levels. Validation of the aerodynamic model is done against the scale models DLR-F4 and F6. A general model analysis and flight envelope exploration are shown for the Boeing B-777-300 with GE-90 turbofan engines with intermediate passenger capacity (394 passengers in 2 classes). Validation of the flight model is done by sensitivity analysis on the wetted area (or profile drag), on the specific air range, the brake-release gross weight and the aircraft noise.
A variety of results is shown, including specific air range charts, take-off weight-altitude charts, payload-range performance, atmospheric effects, economic Mach number and noise trajectories at F.A.R. landing points.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.
2010-09-15
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to a sensitivity analysis that further removes unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to well reproduce the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions.
The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article.
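The error-propagation idea in DRGEP can be sketched compactly: each species receives an importance equal to the maximum, over all graph paths from a target species, of the product of direct interaction coefficients along the path, and species below a threshold are dropped. The toy graph and coefficients below are invented for illustration; real DRGEP derives the coefficients from reaction rates:

```python
import heapq

def drgep_importance(graph, target):
    """Path-dependent importance of each species to a target:
    R = max over paths of the product of direct interaction coefficients.
    graph maps species -> {neighbour: coefficient in [0, 1]}."""
    best = {target: 1.0}
    heap = [(-1.0, target)]            # max-product search (Dijkstra-style)
    while heap:
        neg_p, s = heapq.heappop(heap)
        p = -neg_p
        if p < best.get(s, 0.0):
            continue
        for nbr, c in graph.get(s, {}).items():
            q = p * c
            if q > best.get(nbr, 0.0):
                best[nbr] = q
                heapq.heappush(heap, (-q, nbr))
    return best

# Hypothetical toy mechanism; species names and coefficients are made up.
graph = {"fuel": {"A": 0.9, "B": 0.05},
         "A": {"C": 0.5},
         "B": {"C": 0.9}}
importance = drgep_importance(graph, "fuel")
skeletal = {s for s, r in importance.items() if r >= 0.1}   # error-threshold cut
```

Note that species C is kept through its strong path via A (0.9 x 0.5 = 0.45) even though its path via B is negligible, which is the "propagation" that plain DRG misses.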
NASA Technical Reports Server (NTRS)
Friedmann, P. P.; Venkatesan, C.; Yuan, K.
1992-01-01
This paper describes the development of a new structural optimization capability aimed at the aeroelastic tailoring of composite rotor blades with straight and swept tips. The primary objective is to reduce vibration levels in forward flight without diminishing the aeroelastic stability margins of the blade. In the course of this research activity a number of complicated tasks have been addressed: (1) development of a new aeroelastic stability and response analysis; (2) formulation of a new comprehensive sensitivity analysis, which facilitates the generation of the appropriate approximations for the objective and the constraints; (3) physical understanding of the new model and, in particular, determination of its potential for aeroelastic tailoring; and (4) combination of the newly developed analysis capability, the sensitivity derivatives and the optimizer into a comprehensive optimization capability. The first three tasks have been completed and the fourth task is in progress.
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it generates simultaneously three philosophically different families of global sensitivity metrics, including (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL is also enabled with two novel features; the first one being a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second feature is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features in conjunction with bootstrapping enable the user to monitor the stability, robustness, and convergence of GSA with the increase in sample size for any given case study. VARS-TOOL has been shown to achieve robust and stable results within 1-2 orders of magnitude smaller sample sizes (fewer model runs) than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development and new capabilities and features are forthcoming.
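The stratification property that PLHS preserves while growing the sample can be seen in a sketch of standard (non-progressive) Latin hypercube sampling: each of n sample points occupies a distinct equal-probability stratum in every dimension. This is a sketch of plain LHS only; the progressive extension in PLHS is not reproduced here:

```python
import numpy as np

def latin_hypercube(n, dim, seed=42):
    """Standard Latin hypercube sample on [0, 1)^dim: exactly one point
    per equal-probability stratum in every dimension."""
    rng = np.random.default_rng(seed)
    # jitter within strata, then decouple dimensions by shuffling each column
    u = (np.arange(n)[:, None] + rng.random((n, dim))) / n
    for j in range(dim):
        rng.shuffle(u[:, j])
    return u

sample = latin_hypercube(8, 3)
```

Because every one-dimensional projection is stratified, LHS-type designs give much more uniform coverage per model run than simple random sampling, which is what makes the small sample sizes quoted above plausible.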
Comprehensive genetic testing for female and male infertility using next-generation sequencing.
Patel, Bonny; Parets, Sasha; Akana, Matthew; Kellogg, Gregory; Jansen, Michael; Chang, Chihyu; Cai, Ying; Fox, Rebecca; Niknazar, Mohammad; Shraga, Roman; Hunter, Colby; Pollock, Andrew; Wisotzkey, Robert; Jaremko, Malgorzata; Bisignano, Alex; Puig, Oscar
2018-05-19
To develop a comprehensive genetic test for female and male infertility in support of medical decisions during assisted reproductive technology (ART) protocols. We developed a next-generation sequencing (NGS) gene panel consisting of 87 genes including promoters, 5' and 3' untranslated regions, exons, and selected introns. In addition, sex chromosome aneuploidies and Y chromosome microdeletions were analyzed concomitantly using the same panel. The NGS panel was analytically validated by retrospective analysis of 118 genomic DNA samples with known variants in loci representative of female and male infertility. Our results showed analytical accuracy of > 99%, with > 98% sensitivity for single-nucleotide variants (SNVs) and > 91% sensitivity for insertions/deletions (indels). Clinical sensitivity was assessed with samples containing variants representative of male and female infertility, and it was 100% for SNVs/indels, CFTR IVS8-5T variants, sex chromosome aneuploidies, and copy number variants (CNVs) and > 93% for Y chromosome microdeletions. Cost analysis shows potential savings when comparing this single NGS assay with the standard approach, which includes multiple assays. A single, comprehensive, NGS panel can simplify the ordering process for healthcare providers, reduce turnaround time, and lower the overall cost of testing for genetic assessment of infertility in females and males, while maintaining accuracy.
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
NASA Technical Reports Server (NTRS)
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
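The complex-variable approach used above for the DYMORE structural sensitivities rests on the complex-step derivative, which avoids the subtractive cancellation that limits finite differences. A minimal sketch, with a hypothetical scalar response standing in for a structural solver output:

```python
import cmath

def response(x):
    # Hypothetical smooth response (stand-in for a structural solver output)
    return x ** 3 + cmath.sin(x)

def complex_step(f, x, h=1e-30):
    """df/dx = Im(f(x + i*h)) / h: accurate to machine precision even for
    tiny h, because no subtraction of nearly equal values occurs."""
    return f(x + 1j * h).imag / h

d = complex_step(response, 1.5)
exact = 3 * 1.5 ** 2 + cmath.cos(1.5).real
```

The step size can be driven far below anything a finite-difference scheme tolerates, which is why complex-variable results serve as a trustworthy reference for verifying adjoint-based sensitivities.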
SEP thrust subsystem performance sensitivity analysis
NASA Technical Reports Server (NTRS)
Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.
1973-01-01
This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions.
The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
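The Kreisselmeier-Steinhauser function mentioned above folds many objectives or constraints into a single smooth, differentiable envelope, which is what makes it attractive inside gradient-based optimizers. A minimal sketch with hypothetical normalized objective values:

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth, conservative upper
    bound on max(g). rho controls how tightly it hugs the true maximum."""
    g = np.asarray(g, dtype=float)
    m = g.max()                              # shift for numerical stability
    return m + np.log(np.exp(rho * (g - m)).sum()) / rho

objs = [0.8, 1.2, 1.19]                      # hypothetical normalized objectives
ks = ks_aggregate(objs)
```

The bound max(g) <= KS(g) <= max(g) + ln(n)/rho means larger rho trades smoothness for tightness, a standard tuning consideration in multiobjective formulations of this kind.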
Byers, Helen; Wallis, Yvonne; van Veen, Elke M; Lalloo, Fiona; Reay, Kim; Smith, Philip; Wallace, Andrew J; Bowers, Naomi; Newman, William G; Evans, D Gareth
2016-11-01
The sensitivity of testing BRCA1 and BRCA2 remains unresolved as the frequency of deep intronic splicing variants has not been defined in high-risk familial breast/ovarian cancer families. This variant category is reported at significant frequency in other tumour predisposition genes, including NF1 and MSH2. We carried out comprehensive whole gene RNA analysis on 45 high-risk breast/ovarian and male breast cancer families with no identified pathogenic variant on exonic sequencing and copy number analysis of BRCA1/2. In addition, we undertook variant screening of a 10-gene high/moderate risk breast/ovarian cancer panel by next-generation sequencing. DNA testing identified the causative variant in 50/56 (89%) breast/ovarian/male breast cancer families with Manchester scores of ≥50 with two variants being confirmed to affect splicing on RNA analysis. RNA sequencing of BRCA1/BRCA2 on 45 individuals from high-risk families identified no deep intronic variants and did not suggest loss of RNA expression as a cause of lost sensitivity. Panel testing in 42 samples identified a known RAD51D variant, a high-risk ATM variant in another breast/ovarian cancer family and a truncating CHEK2 mutation. Current exonic sequencing and copy number analysis variant detection methods of BRCA1/2 have high sensitivity in high-risk breast/ovarian cancer families. Sequence analysis of RNA does not identify any variants undetected by current analysis of BRCA1/2. However, RNA analysis clarified the pathogenicity of variants of unknown significance detected by current methods. The low diagnostic uplift achieved through sequence analysis of the other known breast/ovarian cancer susceptibility genes indicates that further high-risk genes remain to be identified.
Yu, Yuncui; Jia, Lulu; Meng, Yao; Hu, Lihua; Liu, Yiwei; Nie, Xiaolu; Zhang, Meng; Zhang, Xuan; Han, Sheng; Peng, Xiaoxia; Wang, Xiaoling
2018-04-01
Establishing a comprehensive clinical evaluation system is critical in enacting national drug policy and promoting rational drug use. In China, the 'Clinical Comprehensive Evaluation System for Pediatric Drugs' (CCES-P) project, which aims to compare drugs based on clinical efficacy and cost effectiveness to help decision makers, was recently proposed; therefore, a systematic and objective method is required to guide the process. An evidence-based multi-criteria decision analysis model that involved an analytic hierarchy process (AHP) was developed, consisting of nine steps: (1) select the drugs to be reviewed; (2) establish the evaluation criterion system; (3) determine the criterion weight based on the AHP; (4) construct the evidence body for each drug under evaluation; (5) select comparative measures and calculate the original utility score; (6) place a common utility scale and calculate the standardized utility score; (7) calculate the comprehensive utility score; (8) rank the drugs; and (9) perform a sensitivity analysis. The model was applied to the evaluation of three different inhaled corticosteroids (ICSs) used for asthma management in children (a total of 16 drugs with different dosage forms and strengths or different manufacturers). By applying the drug analysis model, the 16 ICSs under review were successfully scored and evaluated. Budesonide suspension for inhalation (drug ID number: 7) ranked the highest, with a comprehensive utility score of 80.23, followed by fluticasone propionate inhaled aerosol (drug ID number: 16), with a score of 79.59, and budesonide inhalation powder (drug ID number: 6), with a score of 78.98. In the sensitivity analysis, the ranking of the top five and lowest five drugs remained unchanged, suggesting this model is generally robust. An evidence-based drug evaluation model based on AHP was successfully developed.
The model incorporates sufficient utility and flexibility for aiding the decision-making process, and can be a useful tool for the CCES-P.
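Step 3 of the model above, deriving criterion weights with the AHP, is conventionally done by taking the principal eigenvector of a reciprocal pairwise-comparison matrix. A minimal sketch; the three criteria and comparison values are hypothetical, not taken from the CCES-P study:

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights from the principal eigenvector of a reciprocal
    pairwise-comparison matrix (the standard AHP prioritization step)."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    v = vecs[:, np.argmax(vals.real)].real    # principal eigenvector
    w = np.abs(v)
    return w / w.sum()                         # normalize to sum to 1

# Hypothetical comparisons: efficacy vs safety vs cost on the Saaty 1-9 scale
A = [[1,     3,     5],
     [1 / 3, 1,     3],
     [1 / 5, 1 / 3, 1]]
w = ahp_weights(A)   # efficacy should carry the largest weight
```

In practice the principal eigenvalue is also compared against the matrix size to compute a consistency ratio, so that contradictory expert judgments can be flagged before the weights are used.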
Correlation of SA349/2 helicopter flight-test data with a comprehensive rotorcraft model
NASA Technical Reports Server (NTRS)
Yamauchi, Gloria K.; Heffernan, Ruth M.; Gaubert, Michel
1986-01-01
A comprehensive rotorcraft analysis model was used to predict blade aerodynamic and structural loads for comparison with flight test data. The data were obtained from an SA349/2 helicopter with an advanced geometry rotor. Sensitivity of the correlation to wake geometry, blade dynamics, and blade aerodynamic effects was investigated. Blade chordwise pressure coefficients were predicted for the blade transonic regimes using the model coupled with two finite-difference codes.
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS), for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.
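The bootstrap assessment of ranking reliability described above can be sketched generically: resample the per-run sensitivity measures with replacement and count how often each factor keeps the rank position inferred from the full sample. This is an illustrative sketch of the general idea, not the STAR-VARS implementation; the sample data are hypothetical:

```python
import numpy as np

def ranking_reliability(samples, n_boot=500, seed=1):
    """Bootstrap check of a factor ranking: the fraction of resamples in
    which each rank position is filled by the same factor as in the full
    sample. samples: (n_runs, n_factors) per-run sensitivity measures."""
    rng = np.random.default_rng(seed)
    s = np.asarray(samples, dtype=float)
    full = np.argsort(-np.abs(s).mean(axis=0))     # ranking from all runs
    hits = np.zeros(s.shape[1])
    for _ in range(n_boot):
        idx = rng.integers(0, len(s), len(s))      # resample runs with replacement
        hits += (np.argsort(-np.abs(s[idx]).mean(axis=0)) == full)
    return hits / n_boot

# Hypothetical, well-separated factor effects: the ranking should be stable.
samples = np.column_stack([np.full(50, 5.0), np.full(50, 1.0), np.full(50, 0.2)])
rel = ranking_reliability(samples)
```

Reliabilities near 1 indicate a ranking robust to sampling variability; values that drop as factors' effects overlap are the signal that more model runs are needed.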
Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie
2013-06-04
Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
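The screening step named above, the method of elementary effects, perturbs one parameter at a time at several random base points and ranks parameters by the mean absolute effect. A minimal sketch with a hypothetical linear stand-in for an LCA inventory model (a linear model makes the expected ranking exact):

```python
import numpy as np

def mu_star(f, dim, r=20, delta=0.1, seed=7):
    """Morris screening: mean absolute elementary effect per factor,
    from r one-at-a-time perturbations at random base points in [0, 1]^dim."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, dim))
    for k in range(r):
        x = rng.random(dim) * (1.0 - delta)   # keep x + delta inside the cube
        fx = f(x)
        for i in range(dim):
            xp = x.copy()
            xp[i] += delta
            ee[k, i] = (f(xp) - fx) / delta
    return np.abs(ee).mean(axis=0)

def toy_lca(x):
    # Hypothetical inventory model: factor 0 dominant, factor 2 inert
    return 10.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

screen = mu_star(toy_lca, 3)   # factors ranked 0 > 1 > 2 for further analysis
```

Only the factors surviving this cheap screen would then go to the more expensive contribution-to-variance test, which is the two-step structure the abstract proposes.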
Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan
2018-01-31
The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing.
Statistical sensitivity analysis of a simple nuclear waste repository model
NASA Astrophysics Data System (ADS)
Ronen, Y.; Lucius, J. L.; Blow, E. M.
1980-06-01
This work is a preliminary step in a comprehensive sensitivity analysis of the modeling of a nuclear waste repository. The purpose of the complete analysis is to determine which modeling parameters and physical data are most important in determining key design performance criteria, and then to obtain the uncertainty in the design for safety considerations. The theory for a statistical screening design methodology is developed for later use in the overall program. The theory was applied to the test case of determining the relative importance of the sensitivity of the near-field temperature distribution in a single-level salt repository to modeling parameters. The exact values of the sensitivities to these physical and modeling parameters were then obtained using direct methods of recalculation. The sensitivity coefficients found to be important for the sample problem were the thermal loading, the distance between spent fuel canisters, and the canister radius. Other important parameters were those related to salt properties at a point of interest in the repository.
Biffi, E; Menegon, A; Regalia, G; Maida, S; Ferrigno, G; Pedrocchi, A
2011-08-15
Modern drug discovery for Central Nervous System pathologies has recently focused its attention to in vitro neuronal networks as models for the study of neuronal activities. Micro Electrode Arrays (MEAs), a widely recognized tool for pharmacological investigations, enable the simultaneous study of the spiking activity of discrete regions of a neuronal culture, providing an insight into the dynamics of networks. Taking advantage of MEAs features and making the most of the cross-correlation analysis to assess internal parameters of a neuronal system, we provide an efficient method for the evaluation of comprehensive neuronal network activity. We developed an intra network burst correlation algorithm, we evaluated its sensitivity and we explored its potential use in pharmacological studies. Our results demonstrate the high sensitivity of this algorithm and the efficacy of this methodology in pharmacological dose-response studies, with the advantage of analyzing the effect of drugs on the comprehensive correlative properties of integrated neuronal networks. Copyright © 2011 Elsevier B.V. All rights reserved.
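The burst correlation analysis described above rests on normalized cross-correlation between binned spike-count trains. A minimal generic sketch (not the authors' algorithm; circular shifts via np.roll are used for brevity, and the names are assumptions):

```python
import numpy as np

def burst_cross_correlation(train_a, train_b, max_lag):
    """Normalized cross-correlation of two binned spike-count trains
    over a range of lags; the peak value is a simple synchrony index
    (1.0 for identical trains at zero lag). Circular shifts are a
    simplification -- a real analysis would handle edges explicitly."""
    a = np.asarray(train_a, dtype=float)
    b = np.asarray(train_b, dtype=float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
    lags = list(range(-max_lag, max_lag + 1))
    cc = [float(np.sum(a * np.roll(b, k)) / denom) for k in lags]
    return lags, cc
```

The height and lag of the peak then summarize how tightly two recording sites burst together, which is the quantity a dose-response curve would track.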
NASA Astrophysics Data System (ADS)
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental changes. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (on the order of 100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.
Metrics for Evaluating the Accuracy of Solar Power Forecasting: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, J.; Hodge, B. M.; Florita, A.
2013-10-01
Forecasting solar energy generation is a challenging task due to the variety of solar power systems and weather regimes encountered. Forecast inaccuracies can result in substantial economic losses and power system reliability issues. This paper presents a suite of generally applicable and value-based metrics for solar forecasting for a comprehensive set of scenarios (i.e., different time horizons, geographic locations, applications, etc.). In addition, a comprehensive framework is developed to analyze the sensitivity of the proposed metrics to three types of solar forecasting improvements using a design of experiments methodology, in conjunction with response surface and sensitivity analysis methods. The results show that the developed metrics can efficiently evaluate the quality of solar forecasts, and assess the economic and reliability impact of improved solar forecasting.
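The paper's metric suite is broader than any single statistic; as a hedged illustration, a few of the standard deterministic-forecast error measures such suites build on can be computed as follows (function name and dictionary layout are mine):

```python
import numpy as np

def forecast_metrics(forecast, actual):
    """A small subset of standard forecast error statistics:
    mean bias error, mean absolute error, and root mean square error."""
    e = np.asarray(forecast, dtype=float) - np.asarray(actual, dtype=float)
    return {
        "MBE": float(e.mean()),                  # systematic over/under-forecast
        "MAE": float(np.abs(e).mean()),          # typical error magnitude
        "RMSE": float(np.sqrt((e ** 2).mean())), # penalizes large errors more
    }
```

Value-based evaluation, as in the paper, would then weight these errors by their economic or reliability consequences rather than treating all errors equally.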
NASA Astrophysics Data System (ADS)
Haghnegahdar, Amin; Elshamy, Mohamed; Yassin, Fuad; Razavi, Saman; Wheater, Howard; Pietroniro, Al
2017-04-01
Complex physically-based environmental models are being increasingly used as the primary tool for watershed planning and management due to advances in computation power and data acquisition. Model sensitivity analysis plays a crucial role in understanding the behavior of these complex models and improving their performance. Due to the non-linearity and interactions within these complex models, global sensitivity analysis (GSA) techniques should be adopted to provide a comprehensive understanding of model behavior and identify its dominant controls. In this study we adopt a multi-basin multi-criteria GSA approach to systematically assess the behavior of the Modélisation Environmentale-Surface et Hydrologie (MESH) model across various hydroclimatic conditions in Canada including areas in the Great Lakes Basin, Mackenzie River Basin, and South Saskatchewan River Basin. MESH is a semi-distributed physically-based coupled land surface-hydrology modelling system developed by Environment and Climate Change Canada (ECCC) for various water resources management purposes in Canada. We use a novel method, called Variogram Analysis of Response Surfaces (VARS), to perform sensitivity analysis. VARS is a variogram-based GSA technique that can efficiently provide a spectrum of sensitivity information across a range of scales within the parameter space. We use multiple metrics to identify dominant controls of model response (e.g. streamflow) to model parameters under various conditions such as high flows, low flows, and flow volume. We also investigate the influence of initial conditions on model behavior as part of this study. Our preliminary results suggest that this type of GSA can significantly help with estimating model parameters, decreasing calibration computational burden, and reducing prediction uncertainty.
DOT National Transportation Integrated Search
2009-11-01
The new Mechanistic-Empirical Pavement Design Guide (NCHRP 1-37A and 1-40D) is based on fundamental engineering principles and is far more comprehensive than the current empirical AASHTO Design Guide developed for conditions more than 40 years previo...
USDA-ARS?s Scientific Manuscript database
Current morphometric methods that comprehensively measure shape cannot compare the disparate leaf shapes found in flowering plants and are sensitive to processing artifacts. Here we describe a persistent homology approach to measuring shape. Persistent homology is a topological method (concerned wit...
Listening comprehension across the adult lifespan
Sommers, Mitchell S.; Hale, Sandra; Myerson, Joel; Rose, Nathan; Tye-Murray, Nancy; Spehar, Brent
2011-01-01
Short Summary The current study provides the first systematic assessment of listening comprehension across the adult lifespan. A total of 433 participants ranging in age from 20-90 listened to spoken passages and answered comprehension questions following each passage. In addition, measures of auditory sensitivity were obtained from all participants to determine if hearing loss and listening comprehension changed similarly across the adult lifespan. As expected, auditory sensitivity declined from age 20 to age 90. In contrast, listening comprehension remained relatively unchanged until approximately age 65-70, with declines evident only for the oldest participants. PMID:21716112
Geostationary Coastal and Air Pollution Events (GEO-CAPE) Sensitivity Analysis Experiment
NASA Technical Reports Server (NTRS)
Lee, Meemong; Bowman, Kevin
2014-01-01
Geostationary Coastal and Air pollution Events (GEO-CAPE) is a NASA decadal survey mission to be designed to provide surface reflectance at high spectral, spatial, and temporal resolutions from a geostationary orbit necessary for studying regional-scale air quality issues and their impact on global atmospheric composition processes. GEO-CAPE's Atmospheric Science Questions explore the influence of both gases and particles on air quality, atmospheric composition, and climate. The objective of the GEO-CAPE Observing System Simulation Experiment (OSSE) is to analyze the sensitivity of ozone to the global and regional NOx emissions and improve the science impact of GEO-CAPE with respect to the global air quality. The GEO-CAPE OSSE team at Jet propulsion Laboratory has developed a comprehensive OSSE framework that can perform adjoint-sensitivity analysis for a wide range of observation scenarios and measurement qualities. This report discusses the OSSE framework and presents the sensitivity analysis results obtained from the GEO-CAPE OSSE framework for seven observation scenarios and three instrument systems.
ERIC Educational Resources Information Center
Perkins, Kyle
In this paper four classes of procedures for measuring the instructional sensitivity of reading comprehension test items are reviewed. True experimental designs are not recommended because some of the most important reading comprehension variables do not lend themselves to experimental manipulation. "Ex post facto" factorial designs are…
Preoperative localization strategies for primary hyperparathyroidism: an economic analysis.
Lubitz, Carrie C; Stephen, Antonia E; Hodin, Richard A; Pandharipande, Pari
2012-12-01
Strategies for localizing parathyroid pathology preoperatively vary in cost and accuracy. Our purpose was to compute and compare comprehensive costs associated with common localization strategies. A decision-analytic model was developed to evaluate comprehensive, short-term costs of parathyroid localization strategies for patients with primary hyperparathyroidism. Eight strategies were compared. Probabilities of accurate localization were extracted from the literature, and costs associated with each strategy were based on 2011 Medicare reimbursement schedules. Differential cost considerations included outpatient versus inpatient surgeries, operative time, and costs of imaging. Sensitivity analyses were performed to determine effects of variability in key model parameters upon model results. Ultrasound (US) followed by 4D-CT was the least expensive strategy ($5,901), followed by US alone ($6,028), and 4D-CT alone ($6,110). Strategies including sestamibi (SM) were more expensive, with associated expenditures of up to $6,329 for contemporaneous US and SM. Four-gland, bilateral neck exploration (BNE) was the most expensive strategy ($6,824). Differences in cost were dependent upon differences in the sensitivity of each strategy for detecting single-gland disease, which determined the proportion of patients able to undergo outpatient minimally invasive parathyroidectomy. In sensitivity analysis, US alone was preferred over US followed by 4D-CT only when both the sensitivity of US alone for detecting an adenoma was ≥ 94 %, and the sensitivity of 4D-CT following negative US was ≤ 39 %. 4D-CT alone was the least costly strategy when US sensitivity was ≤ 31 %. Among commonly used strategies for preoperative localization of parathyroid pathology, US followed by selective 4D-CT is the least expensive.
He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong
2016-02-01
Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values greatly improve simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating net primary productivity (NPP) of Larix olgensis forest in Wangqing, Jilin Province. First, with a contrastive analysis between field measurement data and the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the sensitive parameters that had strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, and calculated the global, first-order, and second-order sensitivity indices. The results showed that the BIOME-BGC model could well simulate the NPP of L. olgensis forest in the sample plot. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result, as well as the interactions between parameters in the BIOME-BGC model. The influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than that of the other parameters' interactions.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the ''global sensitivity'' of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to 'variogram analysis', that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
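The variogram idea underlying VARS can be made concrete with an empirical directional variogram of a response surface. This is a toy sketch on the unit cube, not the STAR-VARS sampling strategy; the function and parameter names are assumptions.

```python
import numpy as np

def directional_variogram(model, n_params, dim, lags, n_base=200, seed=0):
    """Empirical directional variogram along one factor:
    gamma(h) = 0.5 * mean[(f(x + h*e_dim) - f(x))^2].
    VARS integrates gamma over a range of h to obtain scale-dependent
    sensitivity measures; here we just evaluate it at a few lags.
    Base points are drawn in the unit cube (boundary handling omitted
    for brevity). `model` maps an (n, p) array to an (n,) array."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n_base, n_params))
    out = []
    for h in lags:
        Xh = X.copy()
        Xh[:, dim] += h                  # shift along the chosen factor
        d = model(Xh) - model(X)
        out.append(0.5 * float(np.mean(d ** 2)))
    return np.array(out)
```

A steep variogram at small lags signals strong local (derivative-like, Morris-style) sensitivity, while the behaviour at large lags reflects variance-based (Sobol-style) information, which is the sense in which both are special cases of the VARS spectrum.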
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T.; Jessee, Matthew Anderson
The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including 3 deterministic and 3 Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 represents one of the most comprehensive revisions in the history of SCALE, providing several new capabilities and significant improvements in many existing features.
Improvement of the cost-benefit analysis algorithm for high-rise construction projects
NASA Astrophysics Data System (ADS)
Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir
2018-03-01
The specific nature of high-rise investment projects, entailing long-term construction and high risks, implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For development of the improved algorithm of cost-benefit analysis for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors put together the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the "Project analysis scenario" flowchart, improving quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping for better cost-benefit project analysis, given the broad range of risks in high-rise construction; and analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in implementation of high-rise projects.
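The discounting machinery named above (weighted average cost of capital feeding a dynamic cost-benefit calculation) reduces to two small formulas. A minimal sketch with illustrative numbers, not the article's model:

```python
def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    """Weighted average cost of capital: market-value weights applied
    to the cost of equity and the after-tax cost of debt."""
    v = equity + debt
    return (equity / v) * cost_equity + (debt / v) * cost_debt * (1 - tax_rate)

def npv(rate, cash_flows):
    """Net present value of cash flows CF_0..CF_n discounted at `rate`,
    the core of a dynamic cost-benefit analysis."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
```

Sensitivity analysis of critical ratios then amounts to re-evaluating npv while perturbing one input (e.g. the discount rate or a cash-flow driver) at a time.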
Measuring the Instructional Sensitivity of ESL Reading Comprehension Items.
ERIC Educational Resources Information Center
Brutten, Sheila R.; And Others
A study attempted to estimate the instructional sensitivity of items in three reading comprehension tests in English as a second language (ESL). Instructional sensitivity is a test-item construct defined as the tendency for a test item to vary in difficulty as a function of instruction. Similar tasks were given to readers at different proficiency…
Novel Image Encryption Scheme Based on Chebyshev Polynomial and Duffing Map
2014-01-01
We present a novel image encryption algorithm using Chebyshev polynomial based on permutation and substitution and Duffing map based on substitution. Comprehensive security analysis has been performed on the designed scheme using key space analysis, visual testing, histogram analysis, information entropy calculation, correlation coefficient analysis, differential analysis, key sensitivity test, and speed test. The study demonstrates that the proposed image encryption algorithm shows advantages of a key space of more than 10^113 and a desirable level of security based on the good statistical results and theoretical arguments. PMID:25143970
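Two of the security checks listed above, adjacent-pixel correlation and differential analysis (NPCR), have standard concrete forms. A generic sketch of the metrics, not the paper's scheme; the function names are mine:

```python
import numpy as np

def adjacent_pixel_correlation(img):
    """Correlation of horizontally adjacent pixels: close to 1 for a
    natural image, close to 0 for a well-encrypted cipher image."""
    a = img[:, :-1].ravel().astype(float)
    b = img[:, 1:].ravel().astype(float)
    return float(np.corrcoef(a, b)[0, 1])

def npcr(cipher1, cipher2):
    """Number of Pixels Change Rate (percent) between two cipher images
    produced from plaintexts differing in one pixel; values near 100%
    indicate strong plaintext sensitivity."""
    return float(np.mean(cipher1 != cipher2) * 100)
```

A typical workflow computes adjacent_pixel_correlation on both plain and cipher images, and npcr between cipher images of two plaintexts that differ by a single pixel.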
Changes in vegetation cover associated with urban planning efforts may affect regional meteorology and air quality. Here we use a comprehensive coupled meteorology-air quality model (WRF-CMAQ) to simulate the influence of planned land use changes from green infrastructure impleme...
O'Flaherty, Brigid M; Li, Yan; Tao, Ying; Paden, Clinton R; Queen, Krista; Zhang, Jing; Dinwiddie, Darrell L; Gross, Stephen M; Schroth, Gary P; Tong, Suxiang
2018-06-01
Next generation sequencing (NGS) technologies have revolutionized the genomics field and are becoming more commonplace for identification of human infectious diseases. However, due to the low abundance of viral nucleic acids (NAs) in relation to host, viral identification using direct NGS technologies often lacks sufficient sensitivity. Here, we describe an approach based on two complementary enrichment strategies that significantly improves the sensitivity of NGS-based virus identification. To start, we developed two sets of DNA probes to enrich virus NAs associated with respiratory diseases. The first set of probes spans the genomes, allowing for identification of known viruses and full genome sequencing, while the second set targets regions conserved among viral families or genera, providing the ability to detect both known and potentially novel members of those virus groups. Efficiency of enrichment was assessed by NGS testing reference virus and clinical samples with known infection. We show significant improvement in viral identification using enriched NGS compared to unenriched NGS. Without enrichment, we observed an average of 0.3% targeted viral reads per sample. However, after enrichment, 50%-99% of the reads per sample were the targeted viral reads for both the reference isolates and clinical specimens using both probe sets. Importantly, dramatic improvements on genome coverage were also observed following virus-specific probe enrichment. The methods described here provide improved sensitivity for virus identification by NGS, allowing for a more comprehensive analysis of disease etiology. © 2018 O'Flaherty et al.; Published by Cold Spring Harbor Laboratory Press.
Vanasse, A; Courteau, M; Ethier, J-F
2018-04-01
To synthesize concepts and approaches related to the analysis of patterns or processes of care and patient outcomes into a comprehensive model of care trajectories, focusing on hospital readmissions for patients with chronic ambulatory care sensitive conditions (ACSCs). Narrative literature review. Published studies between January 2000 and November 2017, using the concepts of 'continuity', 'pathway', 'episode', and 'trajectory', and focused on readmissions and chronic ACSCs, were collected in electronic databases. Qualitative content analysis was performed with emphasis on key constituents to build a comprehensive model. Specific common constituents are shared by the concepts reviewed: they focus on the patient, aim to measure and improve outcomes, follow specific periods of time and consider other factors related to care providers, care units, care settings, and treatments. Using these common denominators, the comprehensive '6W' multidimensional model of care trajectories was created. Considering patients' attributes and their chronic ACSC illness course ('who' and 'why' dimensions), this model reflects their patterns of health care use across care providers ('which'), care units ('where'), and treatments ('what'), at specific periods of time ('when'). The '6W' model of care trajectories could provide valuable information on 'missed opportunities' to reduce readmission rates and improve quality of both ambulatory and inpatient care. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Erguler, Kamil; Stumpf, Michael P H
2011-05-01
The size and complexity of cellular systems make building predictive models an extremely difficult task. In principle dynamical time-course data can be used to elucidate the structure of the underlying molecular mechanisms, but a central and recurring problem is that many and very different models can be fitted to experimental data, especially when the latter are limited and subject to noise. Even given a model, estimating its parameters remains challenging in real-world systems. Here we present a comprehensive analysis of 180 systems biology models, which allows us to classify the parameters with respect to their contribution to the overall dynamical behaviour of the different systems. Our results reveal candidate elements of control in biochemical pathways that differentially contribute to dynamics. We introduce sensitivity profiles that concisely characterize parameter sensitivity and demonstrate how this can be connected to variability in data. Systematically linking data and model sloppiness allows us to extract features of dynamical systems that determine how well parameters can be estimated from time-course measurements, and associates the extent of data required for parameter inference with the model structure, and also with the global dynamical state of the system. The comprehensive analysis of so many systems biology models reaffirms the inability to estimate precisely most model or kinetic parameters as a generic feature of dynamical systems, and provides safe guidelines for performing better inferences and model predictions in the context of reverse engineering of mathematical models for biological systems.
Rosas, Samuel; Krill, Michael K; Amoo-Achampong, Kelms; Kwon, KiHyun; Nwachukwu, Benedict U; McCormick, Frank
2017-08-01
Clinical examination of the shoulder joint has gained attention as clinicians aim to use an evidence-based examination of the biceps tendon, with the desire for a proper diagnosis while minimizing costly imaging procedures. The purpose of this study is to create a decision tree analysis that enables the development of a clinical algorithm for diagnosing long head of biceps (LHB) pathology. A literature review of Level I and II diagnostic studies was conducted to extract characteristics of clinical tests for LHB pathology through a systematic review of PubMed, Medline, Ovid, and Cochrane Review databases. Tests were combined in series and parallel to determine sensitivities and specificities, and positive and negative likelihood ratios were determined for each combination using a subjective pretest probability. The "gold standard" for diagnosis in all included studies was arthroscopy or arthrotomy. The optimal testing modality was use of the uppercut test combined with tenderness to palpation of the biceps tendon. This combination achieved a sensitivity of 88.4% when performed in parallel and a specificity of 93.8% when performed in series. These tests used in combination optimize post-test probability accuracy greater than any single individual test. Performing the uppercut test and the biceps groove tenderness to palpation test together has the highest sensitivity and specificity of known physical examination maneuvers to aid in the diagnosis of LHB pathology compared with diagnostic arthroscopy (a practical, evidence-based, comprehensive examination). A decision tree analysis aids this practical, evidence-based, comprehensive examination by estimating post-test diagnostic accuracy from an ordinal-scale pretest probability. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
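The series/parallel combination of diagnostic tests described above follows standard formulas (assuming conditional independence of the tests), and the pretest-to-post-test step uses the positive likelihood ratio. A hedged sketch, not the study's code:

```python
def combine_parallel(se1, sp1, se2, sp2):
    """Two tests in parallel (positive if EITHER is positive):
    sensitivity rises, specificity falls."""
    return 1 - (1 - se1) * (1 - se2), sp1 * sp2

def combine_series(se1, sp1, se2, sp2):
    """Two tests in series (positive only if BOTH are positive):
    specificity rises, sensitivity falls."""
    return se1 * se2, 1 - (1 - sp1) * (1 - sp2)

def post_test_probability(pretest, sensitivity, specificity):
    """Bayes' rule via the positive likelihood ratio
    LR+ = sensitivity / (1 - specificity), applied on the odds scale."""
    lr_pos = sensitivity / (1 - specificity)
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * lr_pos
    return post_odds / (1 + post_odds)
```

With the abstract's parallel sensitivity (0.884) and series specificity (0.938), post_test_probability converts any clinician-supplied pretest probability into a post-test estimate.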
Examining the relationship between comprehension and production processes in code-switched language
Guzzardo Tamargo, Rosa E.; Valdés Kroff, Jorge R.; Dussias, Paola E.
2016-01-01
We employ code-switching (the alternation of two languages in bilingual communication) to test the hypothesis, derived from experience-based models of processing (e.g., Boland, Tanenhaus, Carlson, & Garnsey, 1989; Gennari & MacDonald, 2009), that bilinguals are sensitive to the combinatorial distributional patterns derived from production and that they use this information to guide processing during the comprehension of code-switched sentences. An analysis of spontaneous bilingual speech confirmed the existence of production asymmetries involving two auxiliary + participle phrases in Spanish–English code-switches. A subsequent eye-tracking study with two groups of bilingual code-switchers examined the consequences of the differences in distributional patterns found in the corpus study for comprehension. Participants’ comprehension costs mirrored the production patterns found in the corpus study. Findings are discussed in terms of the constraints that may be responsible for the distributional patterns in code-switching production and are situated within recent proposals of the links between production and comprehension. PMID:28670049
NASA Astrophysics Data System (ADS)
Harshan, Suraj
The main objective of the present thesis is the improvement of the TEB/ISBA (SURFEX) urban land surface model (ULSM) through comprehensive evaluation, sensitivity analysis, and optimization experiments using energy balance, radiative, and air temperature data observed during 11 months at a tropical suburban site in Singapore. Overall the performance of the model is satisfactory, with a small underestimation of net radiation and an overestimation of sensible heat flux. Weaknesses in predicting the latent heat flux are apparent, with smaller model values during daytime, and the model also significantly underpredicts both the daytime peak and nighttime storage heat. Surface temperatures of all facets are generally overpredicted. Significant variation exists in the model behaviour between dry and wet seasons. The vegetation parametrization used in the model is inadequate to represent the moisture dynamics, producing unrealistically low latent heat fluxes during a particularly dry period. The comprehensive evaluation of the ULSM shows the need for accurate estimation of input parameter values for the present site. Since obtaining many of these parameters through empirical methods is not feasible, the present study employed a two-step approach aimed at providing information about the most sensitive parameters and an optimized parameter set from model calibration. Two well-established sensitivity analysis methods (global: Sobol and local: Morris) and a state-of-the-art multiobjective evolutionary algorithm (Borg) were employed for sensitivity analysis and parameter estimation. Experiments were carried out for three different weather periods. The analysis indicates that roof-related parameters are the most important ones in controlling the behaviour of the sensible heat flux and net radiation flux, with roof and road albedo as the most influential parameters. Soil moisture initialization parameters are important in controlling the latent heat flux.
The built (town) fraction has a significant influence on all fluxes considered. Comparison between the Sobol and Morris methods shows similar sensitivities, indicating the robustness of the present analysis and that the Morris method can be employed as a computationally cheaper alternative to Sobol's method. The optimization and sensitivity experiments for the three periods (dry, wet, and mixed) show noticeable differences in parameter sensitivity and parameter convergence, indicating inadequacies in model formulation. The existence of a significant proportion of less sensitive parameters may indicate an over-parametrized model. The Borg MOEA showed great promise in optimizing the input parameter set. The optimized model, modified to use site-specific values for the thermal roughness length parametrization, shows improved performance for outgoing longwave radiation flux, overall surface temperature, heat storage flux, and sensible heat flux.
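The Morris elementary-effects screening used above can be sketched in a few lines of Python. The model below is an invented linear stand-in for the urban land surface model (the parameter names are illustrative, not the actual TEB/ISBA inputs); for such a linear response the mean absolute elementary effect mu* recovers each coefficient exactly.

```python
import random

# Hypothetical stand-in for the ULSM: a flux as a linear function of three
# illustrative parameters. Not the actual TEB/ISBA model.
def model(x):
    roof_albedo, road_albedo, soil_moisture = x
    return 4.0 * roof_albedo + 0.5 * road_albedo + 0.0 * soil_moisture

def morris_mu_star(f, k, trajectories=50, delta=0.1, seed=1):
    """Mean absolute elementary effect (mu*) for each of k factors in [0, 1]."""
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(trajectories):
        x = [rng.random() * (1.0 - delta) for _ in range(k)]  # room to add delta
        base = f(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta  # perturb one factor at a time
            effects[i].append(abs(f(xp) - base) / delta)
    return [sum(e) / len(e) for e in effects]

mu_star = morris_mu_star(model, k=3)
# For this linear toy model mu* is approximately [4.0, 0.5, 0.0]:
# "roof albedo" screens as most influential, "soil moisture" as inert.
```

Ranking factors by mu* is what makes Morris a cheap screening alternative: it needs only k extra model runs per trajectory, versus the much larger sample sizes of variance-based Sobol indices.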
Peng, Qiliang; Shen, Yi; Lin, Kaisu; Zou, Li; Shen, Yuntian; Zhu, Yaqun
2018-05-15
Recently, accumulating evidence has revealed that microRNA-106 (miR-106) may serve as a non-invasive and cost-effective biomarker for gastric cancer (GC) detection. However, inconsistent results have prevented its application in clinical practice. Therefore, a comprehensive meta-analysis was conducted to evaluate the diagnostic performance of miR-106 alone and of miR-106-related combination markers for GC detection. In addition, an integrative bioinformatics analysis was performed to explore the function of miR-106 at the systems biology level. The results showed a sensitivity of 0.71 (95% CI 0.65-0.76) and a specificity of 0.82 (0.72-0.88), with an area under the curve (AUC) of 0.80 (0.76-0.83), for miR-106 alone. miR-106-related combination markers improved the pooled sensitivity, specificity, and AUC, with discriminatory ability of 0.78 (0.65-0.87), 0.83 (0.77-0.89), and 0.88 (0.85-0.90), respectively, in the present analysis. Furthermore, targets of miR-106 were obtained and enriched by gene ontology and Kyoto Encyclopedia of Genes and Genomes pathway analyses, revealing their associations with the occurrence and development of GC. Hub genes and significant modules identified from the protein-protein interaction networks constructed from miR-106 targets were likewise found to be closely associated with the initiation and progression of GC. Our comprehensive and integrative analysis revealed that miR-106 may be suitable as a diagnostic biomarker for GC, while microRNA combination biomarkers may provide a new alternative for clinical application. However, large-scale population-based studies and biological experiments are necessary to further investigate the diagnostic value of miR-106.
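The sensitivity and specificity figures pooled above come from per-study 2x2 diagnostic tables. A minimal fixed-effect sketch of that pooling is below; the study counts are invented for illustration, and published meta-analyses such as this one use bivariate random-effects models rather than simple count pooling.

```python
# Minimal fixed-effect pooling of diagnostic accuracy across studies.
# The 2x2 counts are invented; real diagnostic meta-analyses fit
# bivariate random-effects models instead of summing counts.
studies = [
    # (true_pos, false_neg, true_neg, false_pos)
    (45, 15, 80, 20),
    (30, 14, 60, 12),
    (55, 25, 90, 18),
]

def pooled_accuracy(studies):
    tp = sum(s[0] for s in studies)
    fn = sum(s[1] for s in studies)
    tn = sum(s[2] for s in studies)
    fp = sum(s[3] for s in studies)
    sensitivity = tp / (tp + fn)   # fraction of cancers the marker detects
    specificity = tn / (tn + fp)   # fraction of controls correctly ruled out
    return sensitivity, specificity

sens, spec = pooled_accuracy(studies)
```

With these invented counts the pooled sensitivity is about 0.71 and specificity about 0.82, the same order as the abstract's estimates, which is why a single marker with moderate accuracy motivates combination panels.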
Skjerning, Halfdan; Hourihane, Jonathan; Husby, Steffen; DunnGalvin, Audrey
2017-10-01
Coeliac disease (CD) is a chronic immune-mediated disease in genetically susceptible individuals, induced by ingested gluten. The treatment for CD is a lifelong gluten-free diet (GFD). The GFD involves restrictions in diet that may impact on a person's Health-Related Quality of Life (HRQoL). The aim of the present study was to develop the Coeliac Disease Quality of Life questionnaire (CDQL): a comprehensive CD-specific HRQoL measure that can be completed by children, adolescents, and adults or by proxy. The questionnaire was developed in three phases. In phase 1, focus group methods and qualitative analysis of verbatim transcripts generated CD-specific items for a prototype instrument designed to sensitively capture patient concerns. In phase 2, CD patients completed the prototype CDQL. The questionnaire was refined through analysis of the data and cognitive interviewing. In phase 3, the final version of the CDQL was answered by Danish respondents. The psychometric properties of the CDQL were assessed, and the HRQoL data were analyzed. The CDQL was completed by 422 respondents. The CDQL has 12 patient background items, 2 generic HRQoL items, and 30 CD-specific HRQoL items. The CD-specific HRQoL items were distributed across eight scales with acceptable to excellent reliability. Comprehensiveness and understandability were demonstrated through feedback from cognitive interviews with children, adolescents, and adults. Content validity was ensured by involving patients and clinicians in the development of the questionnaire. Sensitivity of the questionnaire was demonstrated in the differences found among children's, adolescents', and adults' perceptions of their HRQoL in relation to having CD. The CDQL comprehensively measures HRQoL in CD and is psychometrically robust. The questionnaire may prove useful in tracking HRQoL in CD across age groups.
Maccario, Roberta; Rouhani, Saba; Drake, Tom; Nagy, Annie; Bamadio, Modibo; Diarra, Seybou; Djanken, Souleymane; Roschnik, Natalie; Clarke, Siân E; Sacko, Moussa; Brooker, Simon; Thuilliez, Josselin
2017-06-12
The expansion of malaria prevention and control to school-aged children is receiving increasing attention, but there are still limited data on the costs of such interventions. This paper analyses the costs of a comprehensive school-based intervention strategy, delivered by teachers, that included participatory malaria educational activities, distribution of long-lasting insecticide-treated nets (LLIN), and Intermittent Parasite Clearance in schools (IPCs) in southern Mali. Costs were collected alongside a randomised controlled trial conducted in 80 primary schools in Sikasso Region in Mali in 2010-2012. Cost data were compiled between November 2011 and March 2012 for the 40 intervention schools (6413 children). A provider perspective was adopted. Using an ingredients approach, costs were classified by cost category and by activity. Total costs and cost per child were estimated for the actual intervention, as well as for a simpler version of the programme more suited for scale-up by the government. Univariate sensitivity analysis was performed. The economic cost of the comprehensive intervention was estimated at $10.38 per child (financial cost $8.41), with malaria education, LLIN distribution and IPCs costing $2.13 (20.5%), $5.53 (53.3%) and $2.72 (26.2%) per child, respectively. Human resources were found to be the key cost driver, and training costs were the greatest contributor to overall programme costs. Sensitivity analysis showed that an adapted intervention delivering one LLIN instead of two would lower the economic cost to $8.66 per child, and that excluding LLIN distribution in schools altogether, for example in settings where malaria control already includes universal distribution of LLINs at community level, would reduce costs to $4.89 per child. A comprehensive school-based control strategy may be a feasible and affordable way to address the burden of malaria among schoolchildren in the Sahel.
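A univariate (one-way) cost sensitivity analysis like the one described can be sketched directly from the per-child component costs reported in the abstract. The ±20% variation range below is illustrative; the study varied specific programme inputs (e.g., number of LLINs delivered) rather than a uniform percentage.

```python
# One-way sensitivity analysis on the per-child economic cost, using the
# component costs reported above ($2.13 education, $5.53 LLIN, $2.72 IPC).
# The +/-20% range is an illustrative assumption.
base = {"education": 2.13, "llin": 5.53, "ipc": 2.72}

def total(components):
    return sum(components.values())

def one_way(base, rel_change=0.20):
    """Recompute the total cost with each component varied in turn."""
    out = {}
    for name in base:
        lo = dict(base); lo[name] = base[name] * (1 - rel_change)
        hi = dict(base); hi[name] = base[name] * (1 + rel_change)
        out[name] = (total(lo), total(hi))
    return out

ranges = one_way(base)
# The LLIN component produces the widest range, i.e. under proportional
# uncertainty it is the dominant cost driver of the $10.38 total.
```

Sorting components by range width yields a tornado-diagram ordering, which is how univariate sensitivity results of this kind are usually presented.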
Harper, Marc; Gronenberg, Luisa; Liao, James; Lee, Christopher
2014-01-01
Discovering all the genetic causes of a phenotype is an important goal in functional genomics. We combine an experimental design for detecting independent genetic causes of a phenotype with a high-throughput sequencing analysis that maximizes sensitivity for comprehensively identifying them. Testing this approach on a set of 24 mutant strains generated for a metabolic phenotype with many known genetic causes, we show that this pathway-based phenotype sequencing analysis greatly improves sensitivity of detection compared with previous methods, and reveals a wide range of pathways that can cause this phenotype. We demonstrate our approach on a metabolic re-engineering phenotype, the PEP/OAA metabolic node in E. coli, which is crucial to a substantial number of metabolic pathways and under renewed interest for biofuel research. Out of 2157 mutations in these strains, pathway-phenoseq discriminated just five gene groups (12 genes) as statistically significant causes of the phenotype. Experimentally, these five gene groups, and the next two high-scoring pathway-phenoseq groups, either have a clear connection to the PEP metabolite level or offer an alternative path of producing oxaloacetate (OAA), and thus clearly explain the phenotype. These high-scoring gene groups also show strong evidence of positive selection pressure, compared with strictly neutral selection in the rest of the genome.
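Scoring gene groups by how surprisingly often they are hit by mutations can be illustrated with a hypergeometric tail test. This is a simplified stand-in with invented counts; the actual pathway-phenoseq statistic described above is more involved than a bare enrichment p-value.

```python
from math import comb

def hypergeom_tail(k, K, n, N):
    """P(X >= k) where X ~ Hypergeometric: N genes total, K in the pathway,
    n mutated genes drawn without replacement."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

# Invented numbers: 4000 genes, a 12-gene group, 40 mutated genes overall,
# 5 of which fall inside the group. Expected overlap is only 0.12, so
# observing 5 hits is a strong enrichment signal.
p = hypergeom_tail(5, 12, 40, 4000)
```

A small p here is the kind of evidence that lets a handful of gene groups stand out against thousands of background mutations.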
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2015-12-01
Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested by introducing many different factors in EESMs (i.e., model parameters, forcings, boundary conditions, etc.) to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
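The building block of the variogram analogy is the directional variogram gamma(h) = 0.5 E[(y(x + h e_i) - y(x))^2], which measures response variability at perturbation scale h along factor i. The sketch below estimates it for a toy response surface; it is not the STAR-VARS algorithm (no star-based sampling, no integration across scales), just the underlying quantity.

```python
import random

def directional_variogram(f, factor, h, x_dim, n_pairs=200, seed=0):
    """Estimate gamma(h) = 0.5 * E[(f(x + h*e_factor) - f(x))^2],
    with all factors sampled uniformly in [0, 1]."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_pairs):
        x = [rng.random() for _ in range(x_dim)]
        x[factor] *= (1.0 - h)          # keep x + h inside the unit interval
        xh = list(x)
        xh[factor] += h
        acc += (f(xh) - f(x)) ** 2
    return 0.5 * acc / n_pairs

# Toy response surface: steep in factor 0, nearly flat in factor 1.
f = lambda x: 3.0 * x[0] + 0.1 * x[1]

g0 = directional_variogram(f, factor=0, h=0.1, x_dim=2)
g1 = directional_variogram(f, factor=1, h=0.1, x_dim=2)
# For a linear response gamma(h) = 0.5*(slope*h)^2, so g0 = 0.045 >> g1.
```

Because gamma(h) is evaluated across a range of h, sensitivity information at small scales (derivative-like, Morris) and large scales (variance-like, Sobol) both emerge from the same object, which is the sense in which those methods are limiting cases of VARS.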
Egidi, Giovanna; Caramazza, Alfonso
2014-12-01
According to recent research on language comprehension, the semantic features of a text are not the only determinants of whether incoming information is understood as consistent. Listeners' pre-existing affective states play a crucial role as well. The current fMRI experiment examines the effects of happy and sad moods during comprehension of consistent and inconsistent story endings, focusing on brain regions previously linked to two integration processes: inconsistency detection, evident in stronger responses to inconsistent endings, and fluent processing (accumulation), evident in stronger responses to consistent endings. The analysis evaluated whether differences in the BOLD response for consistent and inconsistent story endings correlated with self-reported mood scores after a mood induction procedure. Mood strongly affected regions previously associated with inconsistency detection. Happy mood increased sensitivity to inconsistency in regions specific for inconsistency detection (e.g., left IFG, left STS), whereas sad mood increased sensitivity to inconsistency in regions less specific for language processing (e.g., right med FG, right SFG). Mood affected more weakly regions involved in accumulation of information. These results show that mood can influence activity in areas mediating well-defined language processes, and highlight that integration is the result of context-dependent mechanisms. The finding that language comprehension can involve different networks depending on people's mood highlights the brain's ability to reorganize its functions. Copyright © 2014 Elsevier Inc. All rights reserved.
Parametric sensitivity analysis of leachate transport simulations at landfills.
Bou-Zeid, E; El-Fadel, M
2004-01-01
This paper presents a case study in simulating leachate generation and transport at a 2000 ton/day landfill facility and assesses leachate migration away from the landfill in order to control associated environmental impacts, particularly on groundwater wells down gradient of the site. The site offers unique characteristics in that it is a former quarry converted to a landfill and is planned to have refuse depths that could reach 100 m, making it one of the deepest in the world. Leachate quantity and potential percolation into the subsurface are estimated using the Hydrologic Evaluation of Landfill Performance (HELP) model. A three-dimensional subsurface model (PORFLOW) was adopted to simulate groundwater flow and contaminant transport away from the site. A comprehensive sensitivity analysis of leachate transport control parameters was also conducted. The sensitivity analysis suggests that changes in partition coefficient, source strength, aquifer hydraulic conductivity, and dispersivity have the most significant impact on model output, indicating that these parameters should be carefully selected when similar modeling studies are performed. Copyright 2004 Elsevier Ltd.
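Why dispersivity ranks among the most sensitive parameters can be illustrated with the leading term of the 1-D Ogata-Banks analytical solution for a continuous source. This is only a sketch: the study used 3-D PORFLOW simulations, and the distances, velocity, and dispersivity values below are invented.

```python
from math import erfc, sqrt

def concentration(x, t, v, alpha_l, c0=1.0):
    """Leading term of the 1-D Ogata-Banks solution for advective-dispersive
    transport from a continuous source: C = (c0/2)*erfc((x - v*t)/(2*sqrt(D*t))),
    with D = alpha_l * v the longitudinal dispersion coefficient."""
    d = alpha_l * v
    return 0.5 * c0 * erfc((x - v * t) / (2.0 * sqrt(d * t)))

# One-at-a-time sweep of longitudinal dispersivity (m), everything else fixed.
# x = 50 m from the source, t = 40 days, seepage velocity v = 1 m/day.
x, t, v = 50.0, 40.0, 1.0
profile = {alpha: concentration(x, t, v, alpha) for alpha in (0.5, 2.0, 8.0)}
# Larger dispersivity spreads the plume front, sharply raising the relative
# concentration arriving ahead of the advective front at a downgradient well.
```

The strong spread of `profile` across plausible dispersivity values is exactly the behaviour that makes this parameter a priority for careful field estimation.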
NASA Astrophysics Data System (ADS)
Ney, Michael; Abdulhalim, Ibrahim
2016-03-01
Skin cancer detection at its early stages has been the focus of a large number of experimental and theoretical studies during the past decades. Among these studies, two prominent approaches presenting high potential are reflectometric sensing in the THz wavelength region and polarimetric imaging techniques in the visible wavelengths. While the main contrast agent and source of sensitivity of THz radiation to cancer-related tissue alterations was considered to be the elevated water content in cancerous tissue, the polarimetric approach has been verified to enable cancerous tissue differentiation based on cancer-induced structural alterations to the tissue. This study examines combining the THz and polarimetric approaches in order to achieve higher detection sensitivity than previously obtained with purely reflectometric THz measurements. For this purpose, a comprehensive Monte Carlo (MC) simulation of radiative transfer in a complex skin tissue model fitted to the THz domain has been developed, which considers the skin's stratified structure, tissue material optical dispersion, surface roughness, scatterers, and substructure organelles. Additionally, a narrow-beam Mueller matrix differential analysis technique is suggested for assessing skin-cancer-induced changes in the polarimetric image, enabling the tissue model and MC simulation to be utilized for determining the imaging parameters that result in maximal detection sensitivity.
Schall, Sonja; von Kriegstein, Katharina
2014-01-01
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.
Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.
2014-01-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine (MA)-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544
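The metabolome-wide significance threshold (FDR < 0.05) used above is typically enforced with the Benjamini-Hochberg step-up procedure. A minimal sketch follows; the p-values are invented stand-ins for the 301 metabolite association tests, not the study's actual statistics.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return indices of hypotheses rejected at FDR level q (BH step-up):
    sort p-values, find the largest rank k with p_(k) <= q*k/m, reject the
    k smallest."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k_max = rank
    return sorted(order[:k_max])

# Invented p-values standing in for many metabolite association tests.
pvals = [0.0001, 0.0004, 0.0009, 0.02, 0.3, 0.7, 0.04, 0.8, 0.05, 0.6]
hits = benjamini_hochberg(pvals)
```

Note that BH controls the expected fraction of false discoveries among the hits, not the per-test error rate, which is why it is the standard choice for omics-scale association screens.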
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2014-12-01
Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
The Trans-Pacific Partnership Agreement: Trading away our health?
Ruckert, Arne; Schram, Ashley; Labonté, Ronald
2015-04-29
There is long-standing interest by the public health community in the potential implications of trade and investment agreements for public health. Our commentary highlights the main pathways by which the Trans-Pacific Partnership (TPP), a comprehensive trade and investment agreement currently under negotiation, might undermine population health (based on analysis of and commentary about leaked chapters of the TPP), and calls for a more transparent and health-sensitive TPP negotiation process. We argue that use of comprehensive health impact assessments could be helpful in identifying how the potentially serious health consequences of the TPP and similar future international trade and investment agreements can be avoided, minimized or mitigated.
Surrogate-based Analysis and Optimization
NASA Technical Reports Server (NTRS)
Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin
2005-01-01
A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time-consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
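The core SBAO loop can be illustrated with the smallest possible example: sample an "expensive" objective at a few design points, fit a cheap surrogate by least squares, and optimize the surrogate instead. Real SBAO work typically uses kriging or radial basis functions in many dimensions; the 1-D quadratic below is an invented illustration, not the paper's rocket-injector case.

```python
# Fit a 1-D quadratic surrogate y = a*x^2 + b*x + c to a handful of samples
# of an "expensive" objective, then minimize the cheap surrogate.

def expensive_objective(x):
    return (x - 2.0) ** 2 + 3.0         # pretend each call took hours of CFD

def solve3(m, v):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [vi] for row, vi in zip(m, v)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [a - f * b for a, b in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_quadratic(xs, ys):
    """Least squares via the normal equations for columns [x^2, x, 1]."""
    cols = [[x * x for x in xs], list(xs), [1.0] * len(xs)]
    ata = [[sum(u * w for u, w in zip(c1, c2)) for c2 in cols] for c1 in cols]
    aty = [sum(c * y for c, y in zip(col, ys)) for col in cols]
    return solve3(ata, aty)

xs = [0.0, 1.0, 2.5, 4.0]               # design of experiments (fixed points here)
ys = [expensive_objective(x) for x in xs]
a, b, c = fit_quadratic(xs, ys)
x_opt = -b / (2.0 * a)                  # surrogate minimizer, approx. x = 2
```

After evaluating the expensive model only four times, the surrogate can be queried (and differentiated) essentially for free, which is what makes downstream sensitivity and optimization studies affordable.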
Creating a Roadmap for Delivering Gender-sensitive Comprehensive Care for Women Veterans
deKleijn, Miriam; Lagro-Janssen, Antoine L M; Canelo, Ismelda; Yano, Elizabeth M
2015-04-01
Women Veterans are a significant minority of users of the VA healthcare system, limiting provider and staff experience meeting their needs in environments historically designed for men. The VA is nonetheless committed to ensuring that women Veterans have access to comprehensive care in environments sensitive to their needs. We sought to determine what aspects of care need to be tailored to the needs of women Veterans in order for the VA to deliver gender-sensitive comprehensive care. Modified Delphi expert panel process. Eleven clinicians and social scientists with expertise in women's health, primary care, and mental health. Importance of tailoring over 100 discrete aspects of care derived from the Institute of Medicine's definition of comprehensive care and literature-based domains of sex-sensitive care on a 5-point scale. Panelists rated over half of the aspects of care as very-to-extremely important (median score 4+) to tailor to the needs of women Veterans. The panel arrived at 14 priority recommendations that broadly encompassed the importance of (1) the design/delivery of services sensitive to trauma histories, (2) adapting to women's preferences and information needs, and (3) sex awareness and cultural transformation in every facet of VA operations. We used expert panel methods to arrive at consensus on top priority recommendations for improving delivery of sex-sensitive comprehensive care in VA settings. Accomplishment of their breadth will require national, regional, and local strategic action and multilevel stakeholder engagement, and will support VA's national efforts at improving customer service for all Veterans.
Joint Services Electronics Program.
1985-12-31
year a comprehensive experimental study of the collision-enhanced Hanle-type resonances in Na vapor with various buffer gases has been completed...demonstrated theoretically that the collision-enhanced Hanle resonances are equivalent to the phenomenon of collision-induced transverse optical pumping. The...for the sensitivity of the mean sojourn times. We also developed a set of new equations based on perturbation analysis which calculates theoretically
Quantifying the Variability in Species' Vulnerability to Ocean Acidification
NASA Astrophysics Data System (ADS)
Kroeker, K. J.; Kordas, R. L.; Crim, R.; Gattuso, J.; Hendriks, I.; Singh, G. G.
2012-12-01
Ocean acidification represents a threat to marine species and ecosystems worldwide. As such, understanding the potential ecological impacts of acidification is a high priority for science, management, and policy. As research on the biological impacts of ocean acidification continues to expand at an exponential rate, a comprehensive understanding of the generalities and/or variability in organisms' responses and the corresponding levels of certainty of these potential responses is essential. Meta-analysis is a quantitative technique for summarizing the results of primary research studies and provides a transparent method to examine the generalities and/or variability in scientific results across numerous studies. Here, we perform the most comprehensive meta-analysis to date by synthesizing the results of 228 studies examining the biological impacts of ocean acidification. Our results reveal decreased survival, calcification, growth, reproduction and development in response to acidification across a broad range of marine organisms, as well as significant trait-mediated variation among taxonomic groups and enhanced sensitivity among early life history stages. In addition, our results reveal a pronounced sensitivity of molluscs to acidification, especially among the larval stages, and enhanced vulnerability to acidification with concurrent exposure to increased seawater temperatures across a diversity of organisms.
Borsu, Laetitia; Intrieri, Julie; Thampi, Linta; Yu, Helena; Riely, Gregory; Nafa, Khedoudja; Chandramohan, Raghu; Ladanyi, Marc; Arcila, Maria E
2016-11-01
Although next-generation sequencing (NGS) is a robust technology for comprehensive assessment of EGFR-mutant lung adenocarcinomas with acquired resistance to tyrosine kinase inhibitors, it may not provide sufficiently rapid and sensitive detection of the EGFR T790M mutation, the most clinically relevant resistance biomarker. Here, we describe a digital PCR (dPCR) assay for rapid T790M detection on aliquots of NGS libraries prepared for comprehensive profiling, fully maximizing broad genomic analysis on limited samples. Tumor DNAs from patients with EGFR-mutant lung adenocarcinomas and acquired resistance to epidermal growth factor receptor inhibitors were prepared for Memorial Sloan-Kettering-Integrated Mutation Profiling of Actionable Cancer Targets sequencing, a hybrid capture-based assay interrogating 410 cancer-related genes. Precapture library aliquots were used for rapid EGFR T790M testing by dPCR, and results were compared with NGS and locked nucleic acid-PCR Sanger sequencing (reference high sensitivity method). Seventy resistance samples showed 99% concordance with the reference high sensitivity method in accuracy studies. Input as low as 2.5 ng provided a sensitivity of 1% and improved further with increasing DNA input. dPCR on libraries required less DNA and showed better performance than direct genomic DNA. dPCR on NGS libraries is a robust and rapid approach to EGFR T790M testing, allowing most economical utilization of limited material for comprehensive assessment. The same assay can also be performed directly on any limited DNA source and cell-free DNA. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
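Digital PCR quantification rests on Poisson statistics of random template partitioning: from the fraction of negative partitions one recovers the mean copies per partition as lambda = -ln(n_neg/n_total). The sketch below applies this standard correction to invented partition counts; it is not the authors' specific T790M assay math.

```python
from math import log

def dpcr_copies(n_negative, n_total):
    """Poisson-corrected template estimate for a digital PCR run:
    lambda = -ln(fraction of negative partitions) is the mean copies
    per partition; lambda * n_total estimates total input copies."""
    lam = -log(n_negative / n_total)
    return lam, lam * n_total

# Illustrative run with invented counts: 20,000 partitions, mutant (T790M)
# channel vs a total-EGFR reference channel.
lam_mut, mut_copies = dpcr_copies(n_negative=19800, n_total=20000)
lam_all, all_copies = dpcr_copies(n_negative=12000, n_total=20000)
mutant_fraction = mut_copies / all_copies   # roughly 2% mutant allele here
```

Counting single partitions is what gives dPCR its ~1% mutant-allele sensitivity from nanogram inputs: a handful of positive partitions above background is a quantifiable signal, without any standard curve.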
Kim, Dong Seong; Park, Jong Sou
2014-01-01
It is important to assess the availability of virtualized systems in IT business infrastructures. Previous work on availability modeling and analysis of virtualized systems used a simplified configuration and assumption in which only one virtual machine (VM) runs on a virtual machine monitor (VMM) hosted on a physical server. In this paper, we show a comprehensive availability model using stochastic reward nets (SRN). The model takes into account (i) the detailed failure and recovery behaviors of multiple VMs, (ii) various other failure modes and corresponding recovery behaviors (e.g., hardware faults, failure and recovery due to Mandelbugs and aging-related bugs), and (iii) dependency between different subcomponents (e.g., between physical host failure and VMM) in a virtualized server system. We also show numerical analysis on steady-state availability, downtime in hours per year, transaction loss, and sensitivity analysis. This model provides a new finding on how to increase system availability by combining software rejuvenation at both the VM and VMM levels in a judicious manner. PMID:25165732
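The headline metrics (steady-state availability and downtime in hours per year) can be illustrated with the simplest possible serial model of a host/VMM/VM stack, where each layer has a mean time to failure and a mean time to repair. The MTTF/MTTR values below are invented, and a serial MTTF/MTTR product is a far cruder abstraction than the paper's SRN model, which also captures Mandelbugs, aging, and rejuvenation.

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability of a single repairable component."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Serial stack: the service is up only if host AND VMM AND VM are all up.
# MTTF/MTTR values are invented for illustration.
stack = {
    "host": (8760.0, 4.0),    # fails ~once a year, 4 h to repair
    "vmm":  (2920.0, 1.0),    # fails ~three times a year, 1 h to recover
    "vm":   (1460.0, 0.5),    # fails ~six times a year, 30 min to restart
}

a_total = 1.0
for mttf, mttr in stack.values():
    a_total *= availability(mttf, mttr)

downtime_hours_per_year = (1.0 - a_total) * 8760.0   # about 10 h/year here
```

Even this crude model shows why rejuvenation at both VM and VMM levels matters: the product form means every layer's unavailability adds (to first order) to the total downtime, so improving only one layer leaves the others as the floor.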
Listening comprehension across the adult lifespan.
Sommers, Mitchell S; Hale, Sandra; Myerson, Joel; Rose, Nathan; Tye-Murray, Nancy; Spehar, Brent
2011-01-01
Although age-related declines in perceiving spoken language are well established, the primary focus of research has been on perception of phonemes, words, and sentences. In contrast, relatively few investigations have been directed at establishing the effects of age on the comprehension of extended spoken passages. Moreover, most previous work has used extreme-group designs in which the performance of a group of young adults is contrasted with that of a group of older adults and little if any information is available regarding changes in listening comprehension across the adult lifespan. Accordingly, the goals of the current investigation were to determine whether there are age differences in listening comprehension across the adult lifespan and, if so, whether similar trajectories are observed for age-related changes in auditory sensitivity and listening comprehension. This study used a cross-sectional lifespan design in which approximately 60 individuals in each of 7 decades, from age 20 to 89 yr (a total of 433 participants), were tested on three different measures of listening comprehension. In addition, we obtained measures of auditory sensitivity from all participants. Changes in auditory sensitivity across the adult lifespan exhibited the progressive high-frequency loss typical of age-related hearing impairment. Performance on the listening comprehension measures, however, demonstrated a very different pattern, with scores on all measures remaining relatively stable until age 65 to 70 yr, after which significant declines were observed. Follow-up analyses indicated that this same general pattern was observed across three different types of passages (lectures, interviews, and narratives) and three different question types (information, integration, and inference). 
Multiple regression analyses indicated that low-frequency pure-tone average was the single largest contributor to age-related variance in listening comprehension for individuals older than 65 yr, but that age accounted for significant variance even after controlling for auditory sensitivity. Results suggest that age-related reductions in auditory sensitivity account for a sizable portion of the individual variance in listening comprehension observed across the adult lifespan. Other potential contributors, including a possible role for age-related declines in perceptual and cognitive abilities, are discussed. Clinically, the results suggest that amplification is likely to improve listening comprehension but that increased audibility alone may not be sufficient to maintain listening comprehension beyond age 65 to 70 yr. Additional research will be needed to identify potential target abilities for training or other rehabilitation procedures that could supplement sensory aids to provide additional improvements in listening comprehension.
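The hierarchical-regression logic described above can be sketched as follows. The data are synthetic (generated with assumed coefficients), not the study's; the point is only to show how one tests whether age explains variance beyond auditory sensitivity by comparing R² with and without the age predictor.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(20, 89, n)
pta = 0.4 * age + rng.normal(0, 5, n)            # pure-tone average worsens with age
comp = 100 - 0.3 * pta - 0.2 * age + rng.normal(0, 3, n)  # comprehension score

def r_squared(predictors, y):
    """OLS R^2 with an intercept, via least squares."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_pta = r_squared([pta], comp)          # auditory sensitivity alone
r2_both = r_squared([pta, age], comp)    # add age: incremental variance?
```

If `r2_both` exceeds `r2_pta`, age carries variance not captured by hearing alone, which is the pattern the study reports.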
Lee, Jaime B; Moore Sohlberg, McKay
2013-05-01
This pilot study investigated the impact of direct attention training combined with metacognitive facilitation on reading comprehension in individuals with aphasia. A single-subject, multiple baseline design was employed across 4 participants to evaluate potential changes in reading comprehension resulting from an 8-week intervention using Attention Process Training-3 (APT-3). The primary outcome measure was a maze reading task. Pre- and posttesting included attention and reading comprehension measures. Visual inspection of graphed performance data across conditions was used as the primary method of analysis. Treatment effect sizes were calculated for changes in reading comprehension probes from baseline to maintenance phases. Two of the study's 4 participants demonstrated improvements in maze reading, with corresponding effect sizes that were small in magnitude according to benchmarks for aphasia treatment research. All 4 participants made improvements on select standardized measures of attention. Interventions that include a metacognitive component with direct attention training may elicit improvements in participants' attention and allocation of resources. Maze passage reading is a repeated measure that appears sensitive to treatment-related changes in reading comprehension. Issues for future research related to measurement, candidacy, and clinical delivery are discussed.
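As a hedged illustration of the effect-size step described above, one common single-case metric is Busk and Serlin's d: the mean change from baseline to maintenance expressed in baseline standard-deviation units. The probe scores here are invented for illustration, not the study's data.

```python
from statistics import mean, stdev

baseline = [12, 14, 13, 12, 15]        # maze reading probes, baseline phase
maintenance = [18, 17, 19, 20, 18]     # probes in the maintenance phase

def busk_serlin_d(phase_a, phase_b):
    """Standardized mean difference using the baseline-phase SD."""
    return (mean(phase_b) - mean(phase_a)) / stdev(phase_a)

d = busk_serlin_d(baseline, maintenance)
```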
NASA Astrophysics Data System (ADS)
Chen, Shichao; Zhu, Yizheng
2017-02-01
Sensitivity is a critical index that measures the temporal fluctuation of the retrieved optical pathlength in a quantitative phase imaging system. However, an accurate and comprehensive analysis for sensitivity evaluation is still lacking in the current literature. In particular, previous theoretical studies of fundamental sensitivity based on Gaussian noise models are not applicable to modern cameras and detectors, which are dominated by shot noise. In this paper, we derive two shot noise-limited theoretical sensitivities, the Cramér-Rao bound and the algorithmic sensitivity, for wavelength shifting interferometry, which is a major category of on-axis interferometry techniques in quantitative phase imaging. Based on the derivations, we show that the shot noise-limited model permits accurate estimation of theoretical sensitivities directly from measured data. These results can provide important insights into fundamental constraints on system performance and can be used to guide system design and optimization. The same concepts can be generalized to other quantitative phase imaging techniques as well.
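For context, the generic shot-noise limit on interferometric phase scales as 1/√N for N detected photoelectrons. This is the textbook bound, not the paper's wavelength-shifting-specific Cramér-Rao derivation, and the photon count below is an assumed example.

```python
import math

def phase_std_shot_noise(n_photons: float) -> float:
    """Minimum phase standard deviation (rad) at the shot-noise limit."""
    return 1.0 / math.sqrt(n_photons)

# ~1e6 detected photoelectrons per pixel gives ~1 mrad phase sensitivity,
# i.e. sub-nanometre optical pathlength at visible wavelengths.
sigma_phi = phase_std_shot_noise(1e6)
```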
Lu, R; Xiao, Y
2017-07-18
Objective: To evaluate the clinical value of ultrasonic elastography and a comprehensive ultrasonography scoring method in the diagnosis of cervical lesions. Methods: A total of 116 patients were selected from the Department of Gynecology of the first hospital affiliated with Central South University from March 2014 to September 2015. All of the lesions were preoperatively examined by Doppler ultrasound and elastography. The elasticity score was determined by a 5-point scoring method. The strain ratio was calculated by comparing the average strain measured in the lesion with that of adjacent tissue of the same depth, size, and shape. All of these ultrasonic parameters were quantified and summed to produce comprehensive ultrasonography scores. Using surgical pathology as the gold standard, the sensitivity, specificity, and accuracy of Doppler ultrasound, the elasticity score and strain ratio methods, and the comprehensive ultrasonography scoring method were comparatively analyzed. Results: (1) The sensitivity, specificity, and accuracy of Doppler ultrasound in diagnosing cervical lesions were 82.89% (63/76), 85.0% (34/40), and 83.62% (97/116), respectively. (2) The sensitivity, specificity, and accuracy of the elasticity score method were 77.63% (59/76), 82.5% (33/40), and 79.31% (92/116), respectively; those of the strain ratio method were 84.21% (64/76), 87.5% (35/40), and 85.34% (99/116), respectively. (3) The sensitivity, specificity, and accuracy of the comprehensive ultrasonography scoring method were 90.79% (69/76), 92.5% (37/40), and 91.38% (106/116), respectively. Conclusion: (1) Ultrasonic elastography has clear diagnostic value for cervical lesions, and strain ratio measurement is more objective than the elasticity score method. (2) The combined application of the comprehensive ultrasonography scoring method, ultrasonic elastography, and conventional sonography was more accurate than any single parameter.
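The reported percentages follow directly from the counts in the abstract (76 lesions positive and 40 negative by pathology); for example, for Doppler ultrasound:

```python
def pct(numerator: int, denominator: int) -> float:
    """Percentage rounded to two decimals, as reported in the abstract."""
    return round(100.0 * numerator / denominator, 2)

# Doppler ultrasound: 63 of 76 true positives, 34 of 40 true negatives.
sensitivity = pct(63, 76)         # 82.89
specificity = pct(34, 40)         # 85.0
accuracy = pct(63 + 34, 76 + 40)  # 83.62
```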
NASA Technical Reports Server (NTRS)
1970-01-01
A comprehensive search, review, and analysis was made of various technical documents relating to pyrotechnics and high-explosives testing, handling, storage, and manufacturing; physical and chemical characteristics; and accidents and incidents. Of approximately 5000 technical abstracts reviewed, 300 applicable documents were analyzed in detail. These 300 documents were then converted to a subject matrix so that they may be readily referenced for application to current programs. It was generally concluded that information in several important categories was lacking; two of the more important were pyrotechnics sensitivity testing and TNT equivalency testing. A general recommendation resulting from this study was that this activity continue and that a comprehensive data bank be generated that would allow immediate access to a large volume of pertinent information in a relatively short period of time.
PeTTSy: a computational tool for perturbation analysis of complex systems biology models.
Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A
2016-03-10
Over the last decade sensitivity analysis techniques have been shown to be very useful for analysing complex and high-dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses the analysis of oscillatory systems. It examines the sensitivity of models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters.
PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and signalling systems. It allows for simulation and analysis of models under a variety of environmental conditions and for experimental optimisation of complex combined experiments. With its unique set of tools it makes a valuable addition to the current library of sensitivity analysis toolboxes. We believe that this software will be of great use to the wider biological, systems biology and modelling communities.
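The kind of question such toolboxes answer can be sketched with a toy finite-difference example. This is a stand-alone Python sketch on an assumed one-parameter decay ODE, not PeTTSy itself (which is a MATLAB package): it estimates the normalized sensitivity of a model output to a parameter perturbation.

```python
def simulate(k, x0=1.0, dt=0.001, t_end=1.0):
    """Euler integration of dx/dt = -k*x; returns x(t_end)."""
    x, t = x0, 0.0
    while t < t_end - 1e-12:
        x += dt * (-k * x)
        t += dt
    return x

def normalized_sensitivity(k, h=1e-6):
    """S = (dY/dk) * (k / Y), estimated by central differences."""
    y = simulate(k)
    dy = (simulate(k + h) - simulate(k - h)) / (2 * h)
    return dy * k / y

# For dx/dt = -k*x, x(T) = x0*exp(-k*T), so S should be close to -k*T.
s = normalized_sensitivity(2.0)
```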
Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V; Petway, Joy R
2017-07-12
This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH₃-N, and NO₃-N. Results indicate that the integrated FME-GLUE-based model, with good Nash-Sutcliffe coefficients (0.53-0.69) and correlation coefficients (0.76-0.83), successfully simulates the concentrations of ON-N, NH₃-N, and NO₃-N. Moreover, the Arrhenius constant was the only parameter to which the ON-N and NH₃-N simulations were sensitive, whereas the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were influential in the ON-N and NO₃-N simulations, as measured by global sensitivity analysis.
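Two ingredients named above can be sketched briefly: the Nash-Sutcliffe efficiency used to score simulations, and GLUE's behavioural/non-behavioural cut on that likelihood measure. The observed and simulated values and the 0.5 cutoff are illustrative, not the study's WSP data.

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - SS_res/SS_tot; 1 is a perfect fit, <=0 is no better than the mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [4.0, 5.5, 6.0, 5.0, 4.5]   # e.g. measured NH3-N concentrations
sim = [4.2, 5.1, 6.3, 4.8, 4.6]   # model output for one parameter set
nse = nash_sutcliffe(obs, sim)

# GLUE retains only "behavioural" parameter sets above a likelihood cutoff.
behavioural = nse > 0.5
```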
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods needed to analyse ODE models describing hundreds or thousands of biochemical species and reactions have so far been missing. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large-scale biochemical reaction networks. We present the approach for time-discrete measurements and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
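The adjoint idea can be shown on the smallest possible model. For dx/dt = -k·x with objective J = x(T), the adjoint λ(t) = exp(k(t-T)) yields the closed-form gradient dJ/dk = -x0·T·exp(-kT), which we check against finite differences. This is a toy scalar model for illustration, not the ErbB network; the appeal of adjoints is that finite differences need one extra simulation per parameter, while the adjoint cost is essentially parameter-count independent.

```python
import math

x0, k, T = 1.0, 0.8, 2.0

def objective(k_val):
    """x(T) for dx/dt = -k*x, solved analytically."""
    return x0 * math.exp(-k_val * T)

# Adjoint gradient: dJ/dk = -integral_0^T lambda(t) * x(t) dt = -x0*T*exp(-k*T)
grad_adjoint = -x0 * T * math.exp(-k * T)

# Central finite-difference check.
h = 1e-6
grad_fd = (objective(k + h) - objective(k - h)) / (2 * h)
```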
Mechelke, Matthias; Herlet, Jonathan; Benz, J Philipp; Schwarz, Wolfgang H; Zverlov, Vladimir V; Liebl, Wolfgang; Kornberger, Petra
2017-12-01
The rising importance of accurately detecting oligosaccharides in biomass hydrolyzates or as ingredients in food, such as beverages and infant milk products, demands tools to sensitively analyze the broad range of available oligosaccharides. Over the last decades, HPAEC-PAD has been developed into one of the major technologies for this task and represents a popular alternative to state-of-the-art LC-MS oligosaccharide analysis. This work presents the first comprehensive study giving an overview of the separation of 38 analytes as well as enzymatic hydrolyzates of six different polysaccharides, focusing on oligosaccharides. The high sensitivity of the PAD comes at the cost of its stability, due to recession of the gold electrode. Through an in-depth analysis of the sensitivity drop over time for 35 analytes, including xylo- (XOS), arabinoxylo- (AXOS), laminari- (LOS), manno- (MOS), glucomanno- (GMOS), and cellooligosaccharides (COS), we developed an analyte-specific one-phase decay model for this effect over time. Using this model resulted in significantly improved data normalization when using an internal standard. Our results thereby allow a quantification approach that takes the inevitable and analyte-specific PAD response drop into account. Graphical abstract: HPAEC-PAD analysis of oligosaccharides and determination of PAD response drop leading to improved data normalization.
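A one-phase decay of detector response, as described above, has the standard form y(t) = plateau + (y0 - plateau)·e^(-kt). The sketch below uses invented parameter values (not the paper's fitted constants) to show why analyte-specific rates matter: naively dividing by an internal standard over- or under-corrects unless each analyte's own decay is modeled.

```python
import math

def pad_response(t_hours, y0, plateau, k):
    """One-phase exponential decay: y(t) = plateau + (y0 - plateau)*exp(-k*t)."""
    return plateau + (y0 - plateau) * math.exp(-k * t_hours)

# Analyte and internal standard decay at different (analyte-specific) rates.
analyte = pad_response(24, y0=100.0, plateau=60.0, k=0.05)
standard = pad_response(24, y0=100.0, plateau=70.0, k=0.03)

# Normalizing by the standard's modeled relative drop corrects the analyte
# signal only partially, because the decay rates differ.
corrected = analyte / (standard / 100.0)
```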
Kroeker, Kristy J; Kordas, Rebecca L; Crim, Ryan; Hendriks, Iris E; Ramajo, Laura; Singh, Gerald S; Duarte, Carlos M; Gattuso, Jean-Pierre
2013-01-01
Ocean acidification represents a threat to marine species worldwide, and forecasting the ecological impacts of acidification is a high priority for science, management, and policy. As research on the topic expands at an exponential rate, a comprehensive understanding of the variability in organisms' responses and corresponding levels of certainty is necessary to forecast the ecological effects. Here, we perform the most comprehensive meta-analysis to date by synthesizing the results of 228 studies examining biological responses to ocean acidification. The results reveal decreased survival, calcification, growth, development and abundance in response to acidification when the broad range of marine organisms is pooled together. However, the magnitude of these responses varies among taxonomic groups, suggesting there is some predictable trait-based variation in sensitivity, despite the investigation of approximately 100 new species in recent research. The results also reveal an enhanced sensitivity of mollusk larvae, but suggest that an enhanced sensitivity of early life history stages is not universal across all taxonomic groups. In addition, the variability in species' responses is enhanced when they are exposed to acidification in multi-species assemblages, suggesting that it is important to consider indirect effects and exercise caution when forecasting abundance patterns from single-species laboratory experiments. Furthermore, the results suggest that other factors, such as nutritional status or source population, could cause substantial variation in organisms' responses. Last, the results highlight a trend towards enhanced sensitivity to acidification when taxa are concurrently exposed to elevated seawater temperature. PMID:23505245
Testing the effectiveness of simplified search strategies for updating systematic reviews.
Rice, Maureen; Ali, Muhammad Usman; Fitzpatrick-Lewis, Donna; Kenny, Meghan; Raina, Parminder; Sherifali, Diana
2017-08-01
The objective of the study was to test the overall effectiveness of a simplified search strategy (SSS) for updating systematic reviews. We identified nine systematic reviews undertaken by our research group for which both comprehensive and SSS updates were performed. Three relevant performance measures were estimated: sensitivity, precision, and number needed to read (NNR). The update reference searches for all nine included systematic reviews identified a total of 55,099 citations that were screened, resulting in the final inclusion of 163 randomized controlled trials. Compared with the reference searches, the SSS resulted in 8,239 hits and had a median sensitivity of 83.3%, while precision and NNR were 4.5 times better. During analysis, we found that the SSS performed better for clinically focused topics, with a median sensitivity of 100% and precision and NNR 6 times better than for the reference searches. For broader topics, the sensitivity of the SSS was 80%, while precision and NNR were 5.4 times better compared with the reference searches. The SSS performed well for clinically focused topics and, with a median sensitivity of 100%, could be a viable alternative to a conventional comprehensive search strategy for updating this type of systematic review, particularly considering budget constraints and the volume of new literature being published. For broader topics, 80% sensitivity is likely to be considered too low for a systematic review update in most cases, although it might be acceptable when updating a scoping or rapid review. Copyright © 2017 Elsevier Inc. All rights reserved.
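The three performance measures above derive from simple retrieval counts. The numbers below are illustrative (chosen to echo the aggregate figures in the abstract, not per-review data): a simplified search that retrieves 135 of the 163 included trials in 8,239 hits.

```python
def search_metrics(relevant_found, relevant_total, hits):
    """Sensitivity and precision in percent; NNR = hits per relevant record."""
    sensitivity = 100.0 * relevant_found / relevant_total
    precision = 100.0 * relevant_found / hits
    nnr = hits / relevant_found   # number needed to read = 1/precision
    return sensitivity, precision, nnr

sens, prec, nnr = search_metrics(135, 163, 8239)
```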
Liu, Rong; Yin, Zhibin; Leng, Yixin; Hang, Wei; Huang, Benli
2018-01-01
Laser desorption laser postionization time-of-flight mass spectrometry (LDPI-TOFMS) was employed for the direct analysis and determination of typical basic dyes. It was also used for the analysis and comprehensive understanding of complex materials such as blue ballpoint pen inks. The simultaneous appearance of fragmental and molecular information largely simplifies and facilitates unambiguous identification of dyes via variable energy of the 266 nm postionization laser. More specifically, by optimizing the postionization laser energy while holding the desorption laser energy constant, the structurally significant results show definite differences in the fragmentation patterns, which offer opportunities for discrimination of isomeric species with identical molecular weight. Moreover, relatively high spectral resolution can be acquired without sacrificing sensitivity. In contrast to laser desorption/ionization mass spectrometry (LDI-MS), LDPI-MS simultaneously offers valuable molecular information about trace-level dyes, solvents, and additives in inks, thereby enabling direct determination and comprehensive understanding of blue ballpoint inks and giving a high level of confidence in discriminating complicated evidentiary samples. In addition, direct analysis of the inks not only avoids tedious sample preparation, significantly shortening the overall analysis time and improving throughput, but also minimizes sample consumption, which is important for rare and precious samples. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.
2008-12-01
A detailed model to simulate trichloroethene (TCE) dechlorination in anaerobic groundwater systems has been developed and implemented through PHAST, a robust and flexible geochemical modeling platform. The approach is comprehensive but retains flexibility such that models of varying complexity can be used to simulate TCE biodegradation in the vicinity of nonaqueous phase liquid (NAPL) source zones. The complete model considers a full suite of biological (e.g., dechlorination, fermentation, sulfate and iron reduction, electron donor competition, toxic inhibition, pH inhibition), physical (e.g., flow and mass transfer), and geochemical processes (e.g., pH modulation, gas formation, mineral interactions). Example simulations with the model demonstrated that the feedback between biological, physical, and geochemical processes is critical. Successful simulation of a thirty-two-month column experiment with site soil and complex groundwater chemistry, exhibiting both anaerobic dechlorination and endogenous respiration, provided confidence in the modeling approach. A comprehensive suite of batch simulations was then conducted to estimate the sensitivity of predicted TCE degradation to the 36 model input parameters. A local sensitivity analysis was first employed to rank the importance of parameters, revealing that 5 parameters consistently dominated model predictions across a range of performance metrics. A global sensitivity analysis was then performed to evaluate the influence of a variety of full parameter data sets available in the literature. The modeling study was performed as part of the SABRE (Source Area BioREmediation) project, a public/private consortium whose charter is to determine if enhanced anaerobic bioremediation can result in effective and quantifiable treatment of chlorinated solvent DNAPL source areas.
The modelling conducted has provided valuable insight into the complex interactions between processes in the evolving biogeochemical systems, particularly at the laboratory scale.
Schall, Sonja; von Kriegstein, Katharina
2014-01-01
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas. PMID:24466026
Simulation analysis of an integrated model for dynamic cellular manufacturing system
NASA Astrophysics Data System (ADS)
Hao, Chunfeng; Luan, Shichao; Kong, Jili
2017-05-01
Application of a dynamic cellular manufacturing system (DCMS) is a well-known strategy to improve manufacturing efficiency in production environments with high variety and low volume of production. Often, neither the trade-off between inter- and intra-cell material movements nor the trade-off between hiring and firing of operators is examined in detail. This paper presents simulation results of an integrated mixed-integer model, including sensitivity analysis, for several numerical examples. The comprehensive model includes cell formation, inter- and intracellular materials handling, inventory and backorder holding, operator assignment (including resource adjustment), and flexible production routing. The model considers multi-period production planning with flexible resources (machines and operators), where each period has different demands. The results verify the validity and sensitivity of the proposed model using a genetic algorithm.
Enhanced Lipidome Coverage in Shotgun Analyses by using Gas-Phase Fractionation
NASA Astrophysics Data System (ADS)
Nazari, Milad; Muddiman, David C.
2016-11-01
A high resolving power shotgun lipidomics strategy using gas-phase fractionation and data-dependent acquisition (DDA) was applied toward comprehensive characterization of lipids in hen ovarian tissue in an untargeted fashion. Using this approach, a total of 822 unique lipids across a diverse range of lipid categories and classes were identified based on their MS/MS fragmentation patterns. Classes of glycerophospholipids and glycerolipids, such as glycerophosphocholines (PC), glycerophosphoethanolamines (PE), and triglycerides (TG), are often the most abundant peaks observed in shotgun lipidomics analyses. These ions suppress the signal from low-abundance ions and hinder the chances of characterizing low-abundance lipids when DDA is used. These issues were circumvented by utilizing gas-phase fractionation, where DDA was performed on narrow m/z ranges instead of a broad m/z range. Employing gas-phase fractionation increased sensitivity by more than an order of magnitude in both positive- and negative-ion modes. Furthermore, the enhanced sensitivity increased the number of lipids identified by a factor of ≈4 and facilitated identification of low-abundance lipids from classes such as cardiolipins that are often difficult to observe in untargeted shotgun analyses and require sample-specific preparation steps prior to analysis. This method serves as a resource for comprehensive profiling of lipids from many different categories and classes in an untargeted manner, as well as for targeted and quantitative analyses of individual lipids. Furthermore, this comprehensive analysis of the lipidome can serve as a species- and tissue-specific database for confident identification in other MS-based datasets, such as mass spectrometry imaging.
Systems and methods for analyzing building operations sensor data
Mezic, Igor; Eisenhower, Bryan A.
2015-05-26
Systems and methods are disclosed for analyzing building sensor information and decomposing the information therein into a more manageable and more useful form. Certain embodiments integrate energy-based and spectral-based analysis methods with parameter sampling and uncertainty/sensitivity analysis to achieve a more comprehensive perspective of building behavior. The results of this analysis may be presented to a user via a plurality of visualizations and/or used to automatically adjust certain building operations. In certain embodiments, advanced spectral techniques, including Koopman-based operations, are employed to discern features from the collected building sensor data.
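A minimal numpy sketch of the Koopman-style spectral idea mentioned above: fit a linear operator that maps each sensor snapshot to the next (the dynamic mode decomposition approach), then read dynamic features off its eigenvalues. The data here are synthetic, generated from an assumed linear system rather than real building sensors.

```python
import numpy as np

A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])          # hidden dynamics generating the data

x = np.array([1.0, 1.0])                 # initial "sensor" state
snapshots = [x]
for _ in range(10):
    x = A_true @ x
    snapshots.append(x)

X = np.array(snapshots[:-1]).T           # states at time t
Y = np.array(snapshots[1:]).T            # states at time t+1
A_dmd = Y @ np.linalg.pinv(X)            # best linear map with Y ~= A_dmd @ X

eigs = np.sort(np.linalg.eigvals(A_dmd).real)  # recovered mode decay rates
```

With noise-free linear data the fitted operator recovers the true eigenvalues (0.8 and 0.9), i.e. the decay rates of the two underlying modes.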
Comprehensive Assessment of Osteoporosis and Bone Fragility with CT Colonography
Murthy, Naveen S.; Khosla, Sundeep; Clarke, Bart L.; Bruining, David H.; Kopperdahl, David L.; Lee, David C.; Keaveny, Tony M.
2016-01-01
Purpose: To evaluate the ability of additional analysis of computed tomographic (CT) colonography images to provide a comprehensive osteoporosis assessment. Materials and Methods: This Health Insurance Portability and Accountability Act–compliant study was approved by our institutional review board with a waiver of informed consent. Diagnosis of osteoporosis and assessment of fracture risk were compared between biomechanical CT analysis and dual-energy x-ray absorptiometry (DXA) in 136 women (age range, 43–92 years), each of whom underwent CT colonography and DXA within a 6-month period (between January 2008 and April 2010). Blinded to the DXA data, biomechanical CT analysis was retrospectively applied to CT images by using phantomless calibration and finite element analysis to measure bone mineral density and bone strength at the hip and spine. Regression, Bland-Altman, and reclassification analyses and paired t tests were used to compare results. Results: For bone mineral density T scores at the femoral neck, biomechanical CT analysis was highly correlated (R2 = 0.84) with DXA, did not differ from DXA (P = .15, paired t test), and was able to identify osteoporosis (as defined by DXA), with 100% sensitivity in eight of eight patients (95% confidence interval [CI]: 67.6%, 100%) and 98.4% specificity in 126 of 128 patients (95% CI: 94.5%, 99.6%). Considering both the hip and spine, the classification of patients at high risk for fracture by biomechanical CT analysis—those with osteoporosis or “fragile bone strength”—agreed well with classifications for clinical osteoporosis by DXA (T score ≤−2.5 at the hip or spine), with 82.8% sensitivity in 24 of 29 patients (95% CI: 65.4%, 92.4%) and 85.7% specificity in 66 of 77 patients (95% CI: 76.2%, 91.8%). Conclusion: Retrospective biomechanical CT analysis of CT colonography for colorectal cancer screening provides a comprehensive osteoporosis assessment without requiring changes in imaging protocols.
© RSNA, 2015 Online supplemental material is available for this article. An earlier incorrect version of this article appeared online. This article was corrected on July 24, 2015. PMID:26200602
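The DXA-style classification used above can be sketched as follows: a T-score expresses bone mineral density (BMD) as a deficit in young-adult standard deviations, with osteoporosis defined as T ≤ −2.5 and osteopenia as −2.5 < T ≤ −1.0. The reference mean, SD, and example BMD below are assumed values for illustration, not the study's calibration.

```python
def t_score(bmd, young_adult_mean, young_adult_sd):
    """BMD deficit in young-adult standard deviations."""
    return (bmd - young_adult_mean) / young_adult_sd

def classify(t):
    """WHO-style T-score categories."""
    if t <= -2.5:
        return "osteoporosis"
    if t <= -1.0:
        return "osteopenia"
    return "normal"

# Example femoral-neck BMD (g/cm^2) against an assumed reference population.
t = t_score(bmd=0.52, young_adult_mean=0.85, young_adult_sd=0.12)
label = classify(t)
```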
Novel railway-subgrade vibration monitoring technology using phase-sensitive OTDR
NASA Astrophysics Data System (ADS)
Wang, Zhaoyong; Lu, Bin; Zheng, Hanrong; Ye, Qing; Pan, Zhengqing; Cai, Haiwen; Qu, Ronghui; Fang, Zujie; Zhao, Howell
2017-04-01
High-speed railways are being developed rapidly, and their safety, covering both infrastructure and train operation, is vital. This paper presents a railway-subgrade vibration monitoring scheme based on phase-sensitive OTDR for railway safety. The subgrade vibration is detected and reconstructed. Multi-dimension comprehensive analysis (MDCA) is proposed to identify running-train signals and illegal construction activity along the railway. To the best of our knowledge, this is the first time a railway-subgrade vibration monitoring scheme of this kind has been proposed. The scheme is proved effective by field tests for real-time train tracking and monitoring of activities along the railway. It provides a new passive, distributed way to monitor railway-subgrade vibration in all weather conditions.
Zhu, Ming-Zhi; Chen, Gui-Lin; Wu, Jian-Lin; Li, Na; Liu, Zhong-Hua; Guo, Ming-Quan
2018-04-23
Medicinal plants are gaining increasing attention worldwide due to their empirical therapeutic efficacy and their role as a huge natural compound pool for new drug discovery and development. The efficacy, safety and quality of medicinal plants are the main concerns, and these depend heavily on the comprehensive analysis of the chemical components of the plants. With the advances in mass spectrometry (MS) techniques, comprehensive analysis and fast identification of complex phytochemical components have become feasible and may meet the needs of medicinal plant analysis. Our aim is to provide an overview of the latest developments in MS and its hyphenated techniques and their applications in the comprehensive analysis of medicinal plants. Applications of various MS and hyphenated techniques for the analysis of medicinal plants, including but not limited to one-dimensional and multiple-dimensional chromatography coupled to MS, ambient ionisation MS, and mass spectral databases, are reviewed and compared in this work. Recent advances in MS and its hyphenated techniques have made MS one of the most powerful tools for the analysis of complex extracts from medicinal plants, owing to its excellent separation and identification ability, high sensitivity and resolution, and wide detection dynamic range. To achieve high-throughput or multi-dimensional analysis of medicinal plants, state-of-the-art MS and its hyphenated techniques have played, and will continue to play, a major role as the main platform for research aimed at understanding both their empirical therapeutic efficacy and their quality control. Copyright © 2018 John Wiley & Sons, Ltd.
Longitudinal study of factors affecting taste sense decline in old-old individuals.
Ogawa, T; Uota, M; Ikebe, K; Arai, Y; Kamide, K; Gondo, Y; Masui, Y; Ishizaki, T; Inomata, C; Takeshita, H; Mihara, Y; Hatta, K; Maeda, Y
2017-01-01
The sense of taste plays a pivotal role in personal assessment of the nutritional value, safety and quality of foods. Although it is commonly recognised that taste sensitivity decreases with age, alterations in that sensitivity over time in an old-old population have not been previously reported. Furthermore, no known studies have utilised comprehensive variables regarding taste changes and related factors for assessments. Here, we report novel findings from a 3-year longitudinal study designed to elucidate taste sensitivity decline and its related factors in old-old individuals. We enrolled 621 subjects aged 79-81 years who participated in the Septuagenarians, Octogenarians, Nonagenarians Investigation with Centenarians Study for baseline assessments performed in 2011 and 2012, and then conducted follow-up assessments 3 years later in 328 of those. Assessment of general health, an oral examination and determination of taste sensitivity were performed for each. We also evaluated cognitive function using Montreal Cognitive Assessment findings, and excluded from analysis those with a score lower than 20 in order to ensure the validity and reliability of the subjects' answers. Contributing variables were selected using univariate analysis, then analysed with multivariate logistic regression analysis. We found that males showed significantly greater declines in taste sensitivity for sweet and sour tastes than females. Additionally, subjects with lower cognitive scores showed a significantly greater decrease in sensitivity to salty taste in multivariate analysis. In conclusion, our longitudinal study revealed that gender and cognitive status are major factors affecting taste sensitivity in geriatric individuals. © 2016 John Wiley & Sons Ltd.
Coyle, Kathryn; Carrier, Marc; Lazo-Langner, Alejandro; Shivakumar, Sudeep; Zarychanski, Ryan; Tagalakis, Vicky; Solymoss, Susan; Routhier, Nathalie; Douketis, James; Coyle, Douglas
2017-03-01
Unprovoked venous thromboembolism (VTE) can be the first manifestation of cancer. It is unclear whether extensive screening for occult cancer, including a comprehensive computed tomography (CT) scan of the abdomen/pelvis, is cost-effective in this patient population. Our objectives were to assess the health care costs, number of missed cancer cases and health-related utility values of a limited screening strategy with and without the addition of a comprehensive CT scan of the abdomen/pelvis, and to identify the extent of testing that allows early detection of occult cancers in these circumstances. A cost-effectiveness analysis was conducted using data collected alongside the SOME randomized controlled trial, which compared an extensive occult cancer screening strategy including a CT of the abdomen/pelvis with a more limited screening strategy in patients with a first unprovoked VTE. Analyses were conducted with a one-year time horizon from a Canadian health care perspective. The primary analysis was based on complete cases, with sensitivity analysis using appropriate multiple imputation methods to account for missing data. Data from a total of 854 patients with a first unprovoked VTE were included in these analyses. The addition of a comprehensive CT scan was associated with higher costs ($551 CDN) with no improvement in utility values or in the number of missed cancers. Results were consistent when adopting multiple imputation methods. The addition of a comprehensive CT scan of the abdomen/pelvis for the screening of occult cancer in patients with unprovoked VTE is not cost-effective, as it is both more costly and no more effective in detecting occult cancer. Copyright © 2017 Elsevier Ltd. All rights reserved.
Almeida, Suzana C; George, Steven Z; Leite, Raquel D V; Oliveira, Anamaria S; Chaves, Thais C
2018-05-17
We aimed to empirically derive psychosocial and pain sensitivity subgroups using cluster analysis within a sample of individuals with chronic musculoskeletal pain (CMP) and to investigate derived subgroups for differences in pain and disability outcomes. Eighty female participants with CMP answered psychosocial and disability scales and were assessed for pressure pain sensitivity. A cluster analysis was used to derive subgroups, and analysis of variance (ANOVA) was used to investigate differences between subgroups. Psychosocial factors (kinesiophobia, pain catastrophizing, anxiety, and depression) and overall pressure pain threshold (PPT) were entered into the cluster analysis. Three subgroups were empirically derived: cluster 1 (high pain sensitivity and high psychosocial distress; n = 12) characterized by low overall PPT and high psychosocial scores; cluster 2 (high pain sensitivity and intermediate psychosocial distress; n = 39) characterized by low overall PPT and intermediate psychosocial scores; and cluster 3 (low pain sensitivity and low psychosocial distress; n = 29) characterized by high overall PPT and low psychosocial scores compared to the other subgroups. Cluster 1 showed higher values for mean pain intensity (F (2,77) = 10.58, p < 0.001) compared with cluster 3, and cluster 1 showed higher values for disability (F (2,77) = 3.81, p = 0.03) compared with both clusters 2 and 3. Only cluster 1 was distinct from cluster 3 according to both pain and disability outcomes. Pain catastrophizing, depression, and anxiety were the psychosocial variables that best differentiated the subgroups. Overall, these results call attention to the importance of considering pain sensitivity and psychosocial variables to obtain a more comprehensive characterization of CMP patients' subtypes.
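The subgrouping workflow described above (cluster participants on psychosocial scores plus overall PPT, then compare outcomes across derived subgroups with ANOVA) can be sketched as follows. The study's exact clustering algorithm and data are not given here, so this is a hypothetical illustration using a plain k-means on synthetic, standardized data:

```python
import numpy as np
from scipy.stats import f_oneway

def kmeans(X, k, iters=100):
    """Plain Lloyd k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):                      # each next seed is the point
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])         # farthest from existing seeds
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        new = np.array([X[labels == j].mean(0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

# Hypothetical standardized inputs (kinesiophobia, catastrophizing, anxiety,
# depression, sign-flipped overall PPT) for 80 participants in 3 latent groups
# sized like the clusters reported above (12 / 39 / 29).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.5, size=(n, 5))
               for m, n in [(2.0, 12), (0.0, 39), (-2.0, 29)]])
labels = kmeans(X, 3)

# One-way ANOVA: does a simulated pain-intensity outcome differ by subgroup?
pain = X.mean(axis=1) + rng.normal(0, 0.3, len(X))
F, p = f_oneway(*[pain[labels == j] for j in range(3)])
```

A real analysis would standardize the actual scale scores before clustering and follow a significant omnibus F with post hoc pairwise comparisons, as the study did when contrasting cluster 1 with clusters 2 and 3.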
Savas, Jeffrey N.; De Wit, Joris; Comoletti, Davide; Zemla, Roland; Ghosh, Anirvan
2015-01-01
Ligand-receptor interactions represent essential biological triggers which regulate many diverse and important cellular processes. We have developed a discovery-based proteomic biochemical protocol which couples affinity purification with multidimensional liquid chromatographic tandem mass spectrometry (LCLC-MS/MS) and bioinformatic analysis. Compared to previous approaches, our analysis increases sensitivity, shortens analysis duration, and boosts comprehensiveness. In this protocol, receptor extracellular domains are fused with the Fc region of IgG to generate fusion proteins that are purified from transfected HEK293T cells. These “ecto-Fcs” are coupled to protein A beads and serve as baits for binding assays with prey proteins extracted from rodent brain. After capture, the affinity purified proteins are digested into peptides and comprehensively analyzed by LCLC-MS/MS with ion trap mass spectrometers. In four working days, this protocol can generate shortlists of candidate ligand-receptor protein-protein interactions. Our “Ecto-Fc MS” approach outperforms antibody-based approaches and provides a reproducible and robust framework to identify extracellular ligand-receptor interactions. PMID:25101821
Motif-based analysis of large nucleotide data sets using MEME-ChIP
Ma, Wenxiu; Noble, William S; Bailey, Timothy L
2014-01-01
MEME-ChIP is a web-based tool for analyzing motifs in large DNA or RNA data sets. It can analyze peak regions identified by ChIP-seq, cross-linking sites identified by CLIP-seq and related assays, as well as sets of genomic regions selected using other criteria. MEME-ChIP performs de novo motif discovery, motif enrichment analysis, motif location analysis and motif clustering, providing a comprehensive picture of the DNA or RNA motifs that are enriched in the input sequences. MEME-ChIP performs two complementary types of de novo motif discovery: weight matrix–based discovery for high accuracy, and word-based discovery for high sensitivity. Motif enrichment analysis using DNA or RNA motifs from human, mouse, worm, fly and other model organisms provides even greater sensitivity. MEME-ChIP's interactive HTML output groups and aligns significant motifs to ease interpretation. This protocol takes less than 3 h, and it provides motif discovery approaches that are distinct from and complementary to other online methods. PMID:24853928
Gilchrist, Elizabeth S; Nesterenko, Pavel N; Smith, Norman W; Barron, Leon P
2015-03-20
There has recently been increased interest in coupling ion chromatography (IC) to high-resolution mass spectrometry (HRMS) to enable highly sensitive and selective analysis. Herein, the first comprehensive study focusing on the direct coupling of suppressed IC to HRMS without the need for post-suppressor organic solvent modification is presented. Chromatographic selectivity and the added HRMS sensitivity offered by organic solvent-modified IC eluents on a modern hyper-crosslinked polymeric anion-exchange resin (IonPac AS18) are shown using isocratic eluents containing 5-50 mM hydroxide with 0-80% methanol or acetonitrile for a range of low molecular weight anions (<165 Da). Comprehensive experiments on IC thermodynamics over the temperature range 20-45 °C, with eluents containing up to 60% acetonitrile or methanol, revealed markedly different retention behaviour and selectivity for the selected analytes on the same polymer-based ion-exchange resin. Optimised sensitivity with HRMS was achieved with as little as 30-40% organic eluent content. Analytical performance characteristics are presented and compared with other IC-MS-based studies. This study also presents the first application of IC-HRMS to the forensic detection of trace low-order anionic explosive residues in latent human fingermarks. Copyright © 2015 Elsevier B.V. All rights reserved.
Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V.; Petway, Joy R.
2017-01-01
This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH3-N and NO3-N. Results indicate that the integrated FME-GLUE-based model, with good Nash–Sutcliffe coefficients (0.53–0.69) and correlation coefficients (0.76–0.83), successfully simulates the concentrations of ON-N, NH3-N and NO3-N. Moreover, the Arrhenius constant was the only parameter to which the ON-N and NH3-N simulations were sensitive. However, the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were influential in the ON-N and NO3-N simulations, as measured by global sensitivity analysis. PMID:28704958
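The Nash–Sutcliffe coefficient cited above as a goodness-of-fit measure is one minus the ratio of the model's squared error to the variance of the observations: 1 indicates a perfect fit, and 0 a fit no better than predicting the observed mean. A minimal sketch with hypothetical concentration values (not the study's data):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

obs = np.array([4.2, 5.1, 6.3, 5.8, 4.9])    # hypothetical NH3-N observations
nash_sutcliffe(obs, obs)                      # perfect simulation -> 1.0
nash_sutcliffe(obs, np.full(5, obs.mean()))   # mean predictor -> 0.0
```

On this scale, the reported coefficients of 0.53-0.69 indicate simulations that explain roughly half to two-thirds of the observed variance.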
Characterization of iron-doped lithium niobate for holographic storage applications
NASA Technical Reports Server (NTRS)
Shah, R. R.; Kim, D. M.; Rabson, T. A.; Tittel, F. K.
1976-01-01
A comprehensive characterization of the chemical and holographic properties of eight systematically chosen Fe:LiNbO3 crystals is performed in order to determine optimum performance of the crystals in holographic storage and display applications. The discussion covers determination of Fe(2+) and Fe(3+) ion concentrations in the Fe:LiNbO3 system from optical absorption and EPR measurements; establishment of the relation between the photorefractive sensitivity and the Fe(2+) and Fe(3+) concentrations; study of the spectral dependence of the photorefractive sensitivity and of the effects of oxygen annealing and other impurities on it; analysis of the diffraction efficiency curves for different crystals and corresponding sensitivities with the dynamic theory of hologram formation; and determination of the bulk photovoltaic fields as a function of Fe(2+) concentration. In addition to the absolute Fe(2+) concentration, the relative concentrations of Fe(2+) and Fe(3+) ions are also important in determining the photorefractive sensitivity. There exists an optimal set of crystal characteristics for which the photorefractive sensitivity is most favorable.
Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.
2015-01-01
Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208
Mirabelli, Mario F; Zenobi, Renato
2018-04-17
A novel capillary ionization source based on atmospheric pressure photoionization (cAPPI) was developed and used for the direct interfacing of solid-phase microextraction (SPME) and mass spectrometry (MS). The efficiency of the source was evaluated for direct and dopant-assisted photoionization, analyzing both polar (e.g., triazines and organophosphorus pesticides) and nonpolar (polycyclic aromatic hydrocarbons, PAHs) compounds. The results show that the range of compound polarities that can be addressed by direct SPME-MS can be substantially extended by using cAPPI, compared with other sensitive techniques such as direct analysis in real time (DART) and dielectric barrier discharge ionization (DBDI). The new source delivers very high sensitivity, down to the sub-parts-per-trillion (ppt) level, making it a viable alternative to previously reported, less comprehensive direct approaches.
Liu, Ting; He, Xiang-ge
2006-05-01
To evaluate the overall diagnostic capabilities of frequency-doubling technology (FDT) in patients with primary glaucoma, with standard automated perimetry (SAP) and/or optic disc appearance as the gold standard. A comprehensive electronic search in MEDLINE, EMBASE, Cochrane Library, BIOSIS Previews, HMIC, IPA, OVID, CNKI, CBMdisc, VIP information, CMCC, CCPD, SSreader and 21dmedia and a manual search in related textbooks, journals, congress articles and their references were performed to identify relevant English- and Chinese-language articles. Eligibility criteria were established according to validity criteria for diagnostic research published by the Cochrane Methods Group on Screening and Diagnostic Tests. The quality of the included articles was assessed and relevant material was extracted for study. Statistical analysis was performed with Meta Test version 0.6 software. Heterogeneity of the included articles was tested and used to select the appropriate effect model for calculating pooled weighted sensitivity and specificity. A summary receiver operating characteristic (SROC) curve was established and the area under the curve (AUC) was calculated. Finally, sensitivity analysis was performed. Fifteen English articles (21 studies) of 206 retrieved articles were included in the present study, with a total of 3172 patients. The reported sensitivity of FDT ranged from 0.51 to 1.00, and specificity from 0.58 to 1.00. The pooled weighted sensitivity and specificity for FDT with 95% confidence intervals (95% CI) after correction for standard error were 0.86 (0.80 - 0.90) and 0.87 (0.81 - 0.91), respectively. The AUC of the SROC was 93.01%. Sensitivity analysis demonstrated no disproportionate influence of any individual study. The included articles are of good quality, and on the basis of this meta-analysis FDT can be a highly efficient diagnostic test for primary glaucoma. However, a high-quality prospective study is still required for further analysis.
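Pooled weighted sensitivity of the kind reported above is commonly computed by inverse-variance weighting of logit-transformed study estimates. The sketch below shows that fixed-effect form with illustrative counts; the paper's own data and software (Meta Test 0.6) are not reproduced here, and a real analysis would also handle heterogeneity and the SROC model:

```python
import math

def pooled_logit_sensitivity(counts):
    """Fixed-effect inverse-variance pooling of logit(sensitivity).

    counts: list of (true_positives, false_negatives) tuples, one per study.
    """
    num = den = 0.0
    for tp, fn in counts:
        sens = tp / (tp + fn)
        logit = math.log(sens / (1 - sens))
        var = 1 / tp + 1 / fn           # approximate variance of logit(sens)
        w = 1 / var                      # inverse-variance weight
        num += w * logit
        den += w
    pooled = num / den
    return 1 / (1 + math.exp(-pooled))   # back-transform to a proportion

# Illustrative counts (not taken from the reviewed studies)
pooled_logit_sensitivity([(90, 10), (80, 20)])   # -> ~0.843
```

The logit transform keeps the pooled estimate and its confidence bounds inside (0, 1), which matters when individual studies report sensitivities near 1.00, as several did here.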
Next-Generation Molecular Testing of Newborn Dried Blood Spots for Cystic Fibrosis.
Lefterova, Martina I; Shen, Peidong; Odegaard, Justin I; Fung, Eula; Chiang, Tsoyu; Peng, Gang; Davis, Ronald W; Wang, Wenyi; Kharrazi, Martin; Schrijver, Iris; Scharfe, Curt
2016-03-01
Newborn screening for cystic fibrosis enables early detection and management of this debilitating genetic disease. Implementing comprehensive CFTR analysis using Sanger sequencing as a component of confirmatory testing of all screen-positive newborns has remained impractical due to relatively lengthy turnaround times and high cost. Here, we describe CFseq, a highly sensitive, specific, rapid (<3 days), and cost-effective assay for comprehensive CFTR gene analysis from dried blood spots, the common newborn screening specimen. The unique design of CFseq integrates optimized dried blood spot sample processing, a novel multiplex amplification method from as little as 1 ng of genomic DNA, and multiplex next-generation sequencing of 96 samples in a single run to detect all relevant CFTR mutation types. Sequence data analysis utilizes publicly available software supplemented by an expert-curated compendium of >2000 CFTR variants. Validation studies across 190 dried blood spots demonstrated 100% sensitivity and a positive predictive value of 100% for single-nucleotide variants and insertions and deletions and complete concordance across the polymorphic poly-TG and consecutive poly-T tracts. Additionally, we accurately detected both a known exon 2,3 deletion and a previously undetected exon 22,23 deletion. CFseq is thus able to replace all existing CFTR molecular assays with a single robust, definitive assay at significant cost and time savings and could be adapted to high-throughput screening of other inherited conditions. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Tsiachristas, Apostolos; Burgers, Laura; Rutten-van Mölken, Maureen P M H
2015-12-01
Disease management programs (DMPs) for cardiovascular risk (CVR) and chronic obstructive pulmonary disease (COPD) are increasingly implemented in The Netherlands to improve care and patients' health behavior. The aim of this study was to provide evidence about the (cost-)effectiveness of Dutch DMPs as implemented in daily practice. We compared the physical activity, smoking status, quality-adjusted life-years (QALYs), and yearly costs per patient between the most and the least comprehensive DMPs in four disease categories: primary CVR prevention, secondary CVR prevention, both types of CVR prevention, and COPD (N = 1034). Propensity score matching increased comparability between DMPs. A 2-year cost-utility analysis was performed from the health care and societal perspectives. Sensitivity analysis was performed to estimate the impact of DMP development and implementation costs on cost-effectiveness. Patients in the most comprehensive DMPs increased their physical activity more (except for primary CVR prevention) and had higher smoking cessation rates. The incremental QALYs ranged from -0.032 to 0.038 across all diseases. From a societal perspective, the most comprehensive DMPs decreased costs in primary CVR prevention (certainty 57%), secondary CVR prevention (certainty 88%), and both types of CVR prevention (certainty 98%). Moreover, the implementation of comprehensive DMPs led to QALY gains in secondary CVR prevention (certainty 92%) and COPD (certainty 69%). The most comprehensive DMPs for CVR and COPD have the potential to be cost saving, effective, or cost-effective compared with the least comprehensive DMPs. The challenge for Dutch stakeholders is to find the optimal mixture of interventions that is most suited for each target group. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Automatic differentiation evaluated as a tool for rotorcraft design and optimization
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.
1995-01-01
This paper investigates the use of automatic differentiation (AD) as a means for generating sensitivity analyses in rotorcraft design and optimization. This technique transforms an existing computer program into a new program that performs sensitivity analysis in addition to the original analysis. Where the original FORTRAN program calculates a set of dependent (output) variables from a set of independent (input) variables, the new FORTRAN program calculates the partial derivatives of the dependent variables with respect to the independent variables. The AD technique is a systematic implementation of the chain rule of differentiation; it produces derivatives to machine accuracy at a cost comparable with that of finite-differencing methods. For this study, an analysis code that consists of the Langley-developed hover analysis HOVT, the comprehensive rotor analysis CAMRAD/JA, and associated preprocessors is processed through the AD preprocessor ADIFOR 2.0. The resulting derivatives are compared with derivatives obtained from finite-differencing techniques. The derivatives obtained with ADIFOR 2.0 are exact to within machine accuracy and, unlike derivatives obtained with finite-differencing techniques, do not depend on the selection of a step size.
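Forward-mode AD of the kind ADIFOR implements by FORTRAN source transformation can be illustrated compactly with dual numbers, which propagate a value and its derivative together through the chain rule. The Python sketch below is an analogy for exposition, not the ADIFOR mechanism: note that the derivative is exact and involves no step size, in contrast with finite differencing.

```python
class Dual:
    """Dual number a + b*eps (eps^2 = 0): carries a value and its derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    @staticmethod
    def _lift(x):
        return x if isinstance(x, Dual) else Dual(x)

    def __add__(self, o):
        o = Dual._lift(o)                   # sum rule
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, o):
        o = Dual._lift(o)                   # product rule
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def f(x):
    # Any composition of + and * differentiates automatically:
    return 3 * x * x + 2 * x + 1            # f'(x) = 6x + 2

x = Dual(2.0, 1.0)                          # seed derivative dx/dx = 1
y = f(x)                                    # y.val = f(2) = 17, y.der = f'(2) = 14
```

A source-transformation tool like ADIFOR achieves the same effect by emitting new statements that update derivative variables alongside each original assignment, which is why its derivatives match analytic values to machine accuracy.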
NASA Astrophysics Data System (ADS)
Brill, Nicolai; Wirtz, Mathias; Merhof, Dorit; Tingart, Markus; Jahr, Holger; Truhn, Daniel; Schmitt, Robert; Nebelung, Sven
2016-07-01
Polarization-sensitive optical coherence tomography (PS-OCT) is a light-based, high-resolution, real-time, noninvasive, and nondestructive imaging modality yielding quasimicroscopic cross-sectional images of cartilage. As yet, comprehensive parameterization and quantification of birefringence and tissue properties have not been performed on human cartilage. PS-OCT and algorithm-based image analysis were used to objectively grade human cartilage degeneration in terms of surface irregularity, tissue homogeneity, signal attenuation, as well as birefringence coefficient and band width, height, depth, and number. Degeneration-dependent changes were noted for the former three parameters exclusively, thereby questioning the diagnostic value of PS-OCT in the assessment of human cartilage degeneration.
Sampling and sensitivity analyses tools (SaSAT) for computational modelling
Hoare, Alexander; Regan, David G; Wilson, David P
2008-01-01
SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated. PMID:18304361
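Two of SaSAT's core tasks, stratified (Latin hypercube) sampling of parameter space and rank-correlation-based factor prioritisation, can be sketched outside Matlab®. The following Python illustration with a toy three-parameter model is hypothetical and not SaSAT code: each parameter axis is divided into equal-probability strata with exactly one sample per stratum, and Spearman rank correlation then ranks parameters by influence on the model output.

```python
import numpy as np
from scipy.stats import spearmanr

def latin_hypercube(n_samples, n_params, rng):
    """One stratified sample per equal-probability bin, per parameter."""
    u = rng.uniform(size=(n_samples, n_params))
    strata = np.array([rng.permutation(n_samples) for _ in range(n_params)]).T
    return (strata + u) / n_samples          # values in [0, 1)

rng = np.random.default_rng(0)
samples = latin_hypercube(100, 3, rng)       # e.g. 3 model parameters

# Toy "model": output dominated by parameter 0, mildly affected by parameter 1
output = 2.0 * samples[:, 0] + 0.3 * samples[:, 1] + rng.normal(0, 0.05, 100)

# Spearman (rank) correlation of each parameter with the output
rho, _ = spearmanr(np.column_stack([samples, output]))
influence = rho[:-1, -1]                     # parameter-vs-output correlations
```

In practice the unit-interval samples would be mapped through each parameter's inverse cumulative distribution, and partial rank correlation coefficients would be preferred when parameters covary.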
Wang, Z X; Chen, S L; Wang, Q Q; Liu, B; Zhu, J; Shen, J
2015-06-01
The aim of this study was to evaluate the accuracy of magnetic resonance imaging in the detection of triangular fibrocartilage complex injury through a meta-analysis. A comprehensive literature search was conducted before 1 April 2014. All studies comparing magnetic resonance imaging results with arthroscopy or open surgery findings were reviewed, and 25 studies that satisfied the eligibility criteria were included. Data were pooled to yield a pooled sensitivity and specificity of 0.83 and 0.82, respectively. In the detection of central and peripheral tears, magnetic resonance imaging had pooled sensitivities of 0.90 and 0.88 and pooled specificities of 0.97 and 0.97, respectively. Six high-quality studies using Ringler's recommended magnetic resonance imaging parameters were selected for analysis to determine whether optimal imaging protocols yielded better results. The pooled sensitivity and specificity of these six studies were 0.92 and 0.82, respectively. The overall accuracy of magnetic resonance imaging was acceptable. For peripheral tears, the pooled data showed a relatively high accuracy. Magnetic resonance imaging with appropriate parameters is an ideal method for diagnosing different types of triangular fibrocartilage complex tears. © The Author(s) 2015.
Gomes, Ciro Martins; Mazin, Suleimy Cristina; dos Santos, Elisa Raphael; Cesetti, Mariana Vicente; Bächtold, Guilherme Albergaria Brízida; Cordeiro, João Henrique de Freitas; Theodoro, Fabrício Claudino Estrela Terra; Damasco, Fabiana dos Santos; Carranza, Sebastián Andrés Vernal; Santos, Adriana de Oliveira; Roselino, Ana Maria; Sampaio, Raimunda Nonata Ribeiro
2015-01-01
The diagnosis of mucocutaneous leishmaniasis (MCL) is hampered by the absence of a gold standard. An accurate diagnosis is essential because of the high toxicity of the medications for the disease. This study aimed to assess the ability of polymerase chain reaction (PCR) to identify MCL and to compare these results with clinical research recently published by the authors. A systematic literature review based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA Statement was performed using comprehensive search criteria and communication with the authors. A meta-analysis considering the estimates of the univariate and bivariate models was performed. Specificity near 100% was common among the papers. The primary reason for accuracy differences was sensitivity. The meta-analysis, which was only possible for PCR samples of lesion fragments, revealed a sensitivity of 71% [95% confidence interval (CI) = 0.59; 0.81] and a specificity of 93% (95% CI = 0.83; 0.98) in the bivariate model. The search for measures that could increase the sensitivity of PCR should be encouraged. The quality of the collected material and the optimisation of the amplification of genetic material should be prioritised. PMID:25946238
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Comprehensive Iran... Prohibited Sources 25.703-3 Comprehensive Iran Sanctions, Accountability, and Divestment Act of 2010, section... of goods or services with a person that exports certain sensitive technology to Iran, as determined...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Comprehensive Iran... Prohibited Sources 25.703-3 Comprehensive Iran Sanctions, Accountability, and Divestment Act of 2010, section... goods or services with a person that exports certain sensitive technology to Iran, as determined by the...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Comprehensive Iran... Prohibited Sources 25.703-3 Comprehensive Iran Sanctions, Accountability, and Divestment Act of 2010, section... goods or services with a person that exports certain sensitive technology to Iran, as determined by the...
ERIC Educational Resources Information Center
Solan, Harold A.; Shelley-Tremblay, John F.; Hansen, Peter C.; Larson, Steven
2007-01-01
The authors examined the relationships between reading comprehension, visual attention, and magnocellular processing in 42 Grade 7 students. The goal was to quantify the sensitivity of visual attention and magnocellular visual processing as concomitants of poor reading comprehension in the absence of either vision therapy or cognitive…
Lin, Nan; Yang, Xiaohong; Li, Jing; Wang, Shaonan; Hua, Huimin; Ma, Yujun; Li, Xingshan
2018-04-01
Neuroimaging studies have found that theory of mind (ToM) and discourse comprehension involve similar brain regions. These brain regions may be associated with three cognitive components that are necessarily or frequently involved in ToM and discourse comprehension, including social concept representation and retrieval, domain-general semantic integration, and domain-specific integration of social semantic contents. Using fMRI, we investigated the neural correlates of these three cognitive components by exploring how discourse topic (social/nonsocial) and discourse processing period (ending/beginning) modulate brain activation in a discourse comprehension (and also ToM) task. Different sets of brain areas showed sensitivity to discourse topic, discourse processing period, and the interaction between them, respectively. The most novel finding was that the right temporoparietal junction and middle temporal gyrus showed sensitivity to discourse processing period only during social discourse comprehension, indicating that they selectively contribute to domain-specific semantic integration. Our finding indicates how different domains of semantic information are processed and integrated in the brain and provides new insights into the neural correlates of ToM and discourse comprehension.
Clinical color vision testing and correlation with visual function.
Zhao, Jiawei; Davé, Sarita B; Wang, Jiangxia; Subramanian, Prem S
2015-09-01
To determine if Hardy-Rand-Rittler (H-R-R) and Ishihara testing are accurate estimates of color vision in subjects with acquired visual dysfunction. Assessment of diagnostic tools. Twenty-two subjects with optic neuropathy (aged 18-65) and 18 control subjects were recruited prospectively from an outpatient clinic. Individuals with visual acuity (VA) <20/200 or with congenital color blindness were excluded. All subjects underwent a comprehensive eye examination including VA, color vision, and contrast sensitivity testing. Color vision was assessed using H-R-R and Ishihara plates and Farnsworth D-15 (D-15) discs. D-15 is the accepted standard for detecting and classifying color vision deficits. Contrast sensitivity was measured using Pelli-Robson contrast sensitivity charts. No relationship was found between H-R-R and D-15 scores (P = .477). H-R-R score and contrast sensitivity were positively correlated (P = .003). On multivariate analysis, contrast sensitivity (β = 8.61, P < .001) and VA (β = 2.01, P = .022) both showed association with H-R-R scores. Similar to H-R-R, Ishihara score did not correlate with D-15 score (P = .973), but on multivariate analysis was related to contrast sensitivity (β = 8.69, P < .001). H-R-R and Ishihara scores had an equivalent relationship with contrast sensitivity (P = .069). Neither H-R-R nor Ishihara testing appears to assess color identification in patients with optic neuropathy. Both H-R-R and Ishihara testing are correlated with contrast sensitivity, and these tests may be useful clinical surrogates for contrast sensitivity testing. Copyright © 2015 Elsevier Inc. All rights reserved.
Liu, Huan; Xie, Yanming
2011-10-01
Clinical literature evaluation of post-marketing traditional Chinese medicine is a comprehensive assessment of a drug's efficacy, safety, and economy based on the systematic collection and analysis of the published literature, and it forms part of evidence-based medicine evaluation. Literature evaluation occupies a foundational and key position in the post-marketing clinical evaluation of Chinese medicine. Through literature evaluation, one can fully grasp the available information, orient the secondary development of marketed traditional Chinese medicine varieties, further clarify clinical indications, and refine the medicines. This paper discusses the main steps and emphases of clinical literature evaluation. Safety-oriented literature evaluation should stress the comprehensive collection of drug safety information. Efficacy evaluation should note the particular advantages of traditional Chinese medicine in improving syndromes and patients' quality of life. Economic literature evaluation should attend to the reliability, sensitivity, and practicability of its conclusions.
Results and lessons learned from MODIS polarization sensitivity characterization
NASA Astrophysics Data System (ADS)
Sun, J.; Xiong, X.; Wang, X.; Qiu, S.; Xiong, S.; Waluschka, E.
2006-08-01
In addition to radiometric, spatial, and spectral calibration requirements, MODIS design specifications include polarization sensitivity requirements of less than 2% for all Reflective Solar Bands (RSB) except for the band centered at 412nm. To the best of our knowledge, MODIS was the first imaging radiometer that went through comprehensive system level (end-to-end) polarization characterization. MODIS polarization sensitivity was measured pre-launch at a number of sensor view angles using a laboratory Polarization Source Assembly (PSA) that consists of a rotatable source, a polarizer (Ahrens prism design), and a collimator. This paper describes MODIS polarization characterization approaches used by MODIS Characterization Support Team (MCST) at NASA/GSFC and addresses issues and concerns in the measurements. Results (polarization factor and phase angle) using different analyzing methods are discussed. Also included in this paper is a polarization characterization comparison between Terra and Aqua MODIS. Our previous and recent analysis of MODIS RSB polarization sensitivity could provide useful information for future Earth-observing sensor design, development, and characterization.
Chen, Qinghua; Raghavan, Prashant; Mukherjee, Sugoto; Jameson, Mark J; Patrie, James; Xin, Wenjun; Xian, Junfang; Wang, Zhenchang; Levine, Paul A; Wintermark, Max
2015-10-01
The aim of this study was to systematically compare a comprehensive array of magnetic resonance (MR) imaging features in terms of their sensitivity and specificity to diagnose cervical lymph node metastases in patients with thyroid cancer. The study included 41 patients with thyroid malignancy who underwent surgical excision of cervical lymph nodes and had preoperative MR imaging ≤4weeks prior to surgery. Three head and neck neuroradiologists independently evaluated all the MR images. Using the pathology results as reference, the sensitivity, specificity and interobserver agreement of each MR imaging characteristic were calculated. On multivariate analysis, no single imaging feature was significantly correlated with metastasis. In general, imaging features demonstrated high specificity, but poor sensitivity and moderate interobserver agreement at best. Commonly used MR imaging features have limited sensitivity at correctly identifying cervical lymph node metastases in patients with thyroid cancer. A negative neck MR scan should not dissuade a surgeon from performing a neck dissection in patients with thyroid carcinomas.
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T.; Jessee, Matthew Anderson
The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.
Jin, H; Yuan, L; Li, C; Kan, Y; Hao, R; Yang, J
2014-03-01
The purpose of this study was to systematically review and perform a meta-analysis of published data regarding the diagnostic performance of positron emission tomography (PET) or PET/computed tomography (PET/CT) in prosthetic infection after arthroplasty. A comprehensive computer literature search of studies published through May 31, 2012 regarding PET or PET/CT in patients with suspected prosthetic infection was performed in the PubMed/MEDLINE, Embase and Scopus databases. Pooled sensitivity and specificity of PET or PET/CT were calculated on a per-prosthesis basis. The area under the receiver-operating characteristic (ROC) curve was calculated to measure the accuracy of PET or PET/CT in patients with suspected prosthetic infection. Fourteen studies comprising 838 prostheses with suspected infection after arthroplasty were included in this meta-analysis. The pooled sensitivity of PET or PET/CT in detecting prosthetic infection was 86% (95% confidence interval [CI] 82-90%) on a per-prosthesis basis. The pooled specificity was 86% (95% CI 83-89%) on a per-prosthesis basis. The area under the ROC curve was 0.93 on a per-prosthesis basis. In patients with suspected prosthetic infection, FDG PET or PET/CT demonstrated high sensitivity and specificity and is an accurate method in this setting. Nevertheless, possible sources of false-positive results and other influencing factors should be kept in mind.
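As an illustration of the arithmetic behind such pooled estimates, the sketch below sums hypothetical per-study 2x2 counts; the study counts are invented for illustration, and published meta-analyses typically fit a bivariate random-effects model rather than this simple pooling.

```python
# Minimal sketch of pooled sensitivity/specificity from per-study 2x2 counts.
# Each study is a (TP, FP, FN, TN) tuple; counts are hypothetical.

def pooled_sensitivity(studies):
    """Fraction of truly infected cases detected, pooled across studies."""
    tp = sum(s[0] for s in studies)
    fn = sum(s[2] for s in studies)
    return tp / (tp + fn)

def pooled_specificity(studies):
    """Fraction of truly uninfected cases correctly ruled out."""
    tn = sum(s[3] for s in studies)
    fp = sum(s[1] for s in studies)
    return tn / (tn + fp)

studies = [(40, 5, 6, 50), (35, 4, 7, 45), (28, 3, 4, 38)]  # hypothetical
print(round(pooled_sensitivity(studies), 2))
print(round(pooled_specificity(studies), 2))
```

Simple pooling weights each study by its case count; random-effects models additionally account for between-study heterogeneity.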
Yousefifard, Mahmoud; Baikpour, Masoud; Ghelichkhani, Parisa; Asady, Hadi; Darafarin, Abolfazl; Amini Esfahani, Mohammad Reza; Hosseini, Mostafa; Yaseri, Mehdi; Safari, Saeed
2016-01-01
Introduction: The potential benefit of ultrasonography for detection of thoracic bone fractures has been proven in various surveys but no comprehensive conclusion has been drawn yet; therefore, the present study aimed to conduct a thorough meta-analytic systematic review on this subject. Methods: Two reviewers independently carried out a comprehensive systematic search in Medline, EMBASE, ISI Web of Knowledge, Scopus, Cochrane Library, and ProQuest databases. Data were summarized as true positive, false positive, true negative and false negative and were analyzed via STATA 11.0 software using a mixed-effects binary regression model. Sources of heterogeneity were further assessed through subgroup analysis. Results: Data on 1667 patients (807 subjects with and 860 cases without thoracic fractures), whose age ranged from 0 to 92 years, were extracted from 17 surveys. Pooled sensitivity and specificity of ultrasonography in detection of thoracic bone fractures were 0.97 (95% CI: 0.90-0.99; I2= 88.88, p<0.001) and 0.94 (95% CI: 0.86-0.97; I2= 71.97, p<0.001), respectively. The same measures for chest radiography were found to be 0.77 (95% CI: 0.56-0.90; I2= 97.76, p<0.001) and 1.0 (95% CI: 0.91-1.00; I2= 97.24, p<0.001), respectively. The sensitivity of ultrasonography was higher in detection of rib fractures, compared to fractures of the sternum or clavicle (97% vs. 91%). Moreover, sensitivity was higher when the procedure was carried out by a radiologist rather than an emergency medicine specialist (96% vs. 90%). Conclusion: Based on the findings of the present meta-analysis, the screening performance characteristics of ultrasonography in detection of thoracic bone fractures were found to be superior to those of radiography. These characteristics were most prominent in detection of rib fractures and in cases where the examination was performed by a radiologist. PMID:27274514
Cost analysis of PET and comprehensive lifestyle modification for the reversal of atherosclerosis.
Delgado, Rigoberto I; Swint, J Michael; Lairson, David R; Johnson, Nils P; Gould, K Lance; Sdringola, Stefano
2014-01-01
We present a preliminary cost analysis of a combination intervention using PET and comprehensive lifestyle modification to reverse atherosclerosis. With a sensitivity of 92%-95% and specificity of 85%-95%, PET is an essential tool for high-precision diagnosis of coronary artery disease, accurately guiding optimal treatment for both symptomatic and asymptomatic patients. PET imaging provides a powerful visual and educational aid for helping patients identify and adopt appropriate treatments. However, little is known about the operational cost of using the technology for this purpose. The analysis was done in the context of the Century Health Study for Cardiovascular Medicine (Century Trial), a 1,300-patient, randomized study combining PET imaging with lifestyle changes. Our methodology included a microcosting and time study focusing on estimating average direct and indirect costs. The total cost of the Century Trial in present-value terms is $9.2 million, which is equal to $7,058 per patient. Sensitivity analysis indicates that the present value of total costs is likely to range between $8.8 and $9.7 million, which is equivalent to $6,655-$7,606 per patient. The clinical relevance of the Century Trial is significant since it is, to our knowledge, the first randomized controlled trial to combine high-precision imaging with lifestyle strategies. The Century Trial is in its second year of a 5-y protocol, and we present preliminary findings. The results of this cost study, however, provide policy makers with an early estimate of the costs of implementing, at large scale, a combined intervention such as the Century Trial. Further, we believe that imaging-guided lifestyle management may have considerable potential for improving outcomes and reducing health-care costs by eliminating unnecessary invasive procedures.
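The per-patient figure in such a cost analysis is the present-value total divided by enrollment. The sketch below shows the standard present-value discounting underlying such totals; the flat yearly cost and the 3% discount rate are assumptions for illustration, not values reported by the study.

```python
# Present value of a stream of yearly program costs (hypothetical numbers).
# Year-0 costs are undiscounted; later years are divided by (1 + rate)^t.

def present_value(yearly_costs, rate):
    return sum(c / (1 + rate) ** t for t, c in enumerate(yearly_costs))

costs = [2_000_000] * 5          # assumed flat $2M/year over a 5-y protocol
pv = present_value(costs, 0.03)  # 3% discount rate is an assumption
per_patient = pv / 1300          # trial enrollment from the abstract
print(round(pv), round(per_patient))
```

Varying the discount rate and yearly cost stream is exactly the kind of sensitivity analysis that produces the $8.8-$9.7 million range quoted above.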
Using Television Commercials to Develop Reading Comprehension.
ERIC Educational Resources Information Center
Bowman, James D.; Bowman, S. Ray
1991-01-01
Suggests that teachers can use reluctant readers' sensitivity to and sophistication with "musicomedy" to both assess and develop reading comprehension. Discusses several class activities using the musical and humorous expressions in television commercials for fulfilling this objective. (RS)
Comprehensive Evaluation of the Contribution of X Chromosome Genes to Platinum Sensitivity
Gamazon, Eric R.; Im, Hae Kyung; O’Donnell, Peter H.; Ziliak, Dana; Stark, Amy L.; Cox, Nancy J.; Dolan, M. Eileen; Huang, Rong Stephanie
2011-01-01
Utilizing a genome-wide gene expression dataset generated from Affymetrix GeneChip® Human Exon 1.0ST array, we comprehensively surveyed the role of 322 X chromosome gene expression traits on cellular sensitivity to cisplatin and carboplatin. We identified 31 and 17 X chromosome genes whose expression levels are significantly correlated (after multiple testing correction) with sensitivity to carboplatin and cisplatin, respectively, in the combined HapMap CEU and YRI populations (false discovery rate, FDR<0.05). Of those, 14 overlap for both cisplatin and carboplatin. Employing an independent gene expression quantification method, the Illumina Sentrix Human-6 Expression BeadChip, measured on the same HapMap cell lines, we found that 4 and 2 of these genes are significantly associated with carboplatin and cisplatin sensitivity respectively in both analyses. Two genes, CTPS2 and DLG3, were identified by both genome-wide gene expression analyses as correlated with cellular sensitivity to both platinating agents. The expression of DLG3 gene was also found to correlate with cellular sensitivity to platinating agents in NCI60 cancer cell lines. In addition, we evaluated the role of X chromosome gene expression to the observed differences in sensitivity to the platinums between CEU and YRI derived cell lines. Of the 34 distinct genes significantly correlated with either carboplatin or cisplatin sensitivity, 14 are differentially expressed (defined as p<0.05) between CEU and YRI. Thus, sex chromosome genes play a role in cellular sensitivity to platinating agents and differences in the expression level of these genes are an important source of variation that should be included in comprehensive pharmacogenomic studies. PMID:21252287
Implementation Strategies for Gender-Sensitive Public Health Practice: A European Workshop.
Oertelt-Prigione, Sabine; Dalibert, Lucie; Verdonk, Petra; Stutz, Elisabeth Zemp; Klinge, Ineke
2017-11-01
Providing a robust scientific background for the focus on gender-sensitive public health and a systematic approach to its implementation. Within the FP7-EUGenMed project ( http://eugenmed.eu ) a workshop on sex and gender in public health was convened on February 2-3, 2015. The experts participated in moderated discussion rounds to (1) assemble available knowledge and (2) identify structural influences on practice implementation. The findings were summarized and analyzed in iterative rounds to define overarching strategies and principles. The participants discussed the rationale for implementing gender-sensitive public health and identified priorities and key stakeholders to engage in the process. Communication strategies and specific promotion strategies with distinct stakeholders were defined. A comprehensive list of gender-sensitive practices was established using the recently published taxonomy of the Expert Recommendations for Implementing Change (ERIC) project as a blueprint. A clearly defined implementation strategy should be mandated for all new projects in the field of gender-sensitive public health. Our tool can support researchers and practitioners with the analysis of current and past research as well as with the planning of new projects.
Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement
Yang, Bo; Hu, Di; Wu, Lei
2016-01-01
A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame and a resonant signal transducer. The hair post is utilized to sense and deliver the physical signals of the acceleration and the air flow rate. The physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for acceleration or air flow sensing, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation. All of these frequencies fall in a reasonable modal distribution and are separated from interference modes. The input-output analysis of the new hair sensor demonstrates that the scale factor of the acceleration is 12.35 Hz/g, the scale factor of the angular velocity is 0.404 nm/deg/s and the sensitivity of the air flow is 1.075 Hz/(m/s)2, which verifies the multifunctional sensing characteristics of the hair sensor. Besides, structural optimization of the hair post is used to improve the sensitivity to the air flow rate and the acceleration: the analysis results illustrate that a hollow circular hair post increases the air-flow sensitivity and a II-shaped hair post increases the acceleration sensitivity. Moreover, the thermal analysis confirms that the frequency-difference scheme for the resonant transducer largely eliminates temperature influences on measurement accuracy. The air flow analysis indicates that increasing the surface area of the hair post significantly improves the efficiency of signal transmission. In summary, the structure of the new hair sensor is proved feasible by comprehensive simulation and analysis. PMID:27399716
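A scale factor of this kind maps a measured resonant-frequency shift back to the physical quantity. The sketch below inverts the reported scale factors (12.35 Hz/g for acceleration; 1.075 Hz/(m/s)^2 for air flow, read here as a shift proportional to the square of flow speed, which is an interpretive assumption); the frequency shifts themselves are hypothetical.

```python
# Convert resonant-frequency shifts back to physical quantities using the
# scale factors reported in the abstract. Measured shifts are hypothetical.

ACCEL_SCALE_HZ_PER_G = 12.35      # Hz of shift per g of acceleration
FLOW_SCALE_HZ_PER_MS2 = 1.075     # Hz per (m/s)^2, i.e., per flow speed squared

def accel_from_shift(df_hz):
    """Acceleration in g recovered from a frequency shift in Hz."""
    return df_hz / ACCEL_SCALE_HZ_PER_G

def flow_from_shift(df_hz):
    """Air flow speed in m/s, assuming shift ~ (flow speed)^2."""
    return (df_hz / FLOW_SCALE_HZ_PER_MS2) ** 0.5

print(round(accel_from_shift(24.7), 2))  # hypothetical 24.7 Hz shift
print(round(flow_from_shift(4.3), 2))    # hypothetical 4.3 Hz shift
```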
Zhang, Ying; Tobias, Herbert J.; Brenna, J. Thomas
2014-01-01
Comprehensive two dimensional gas chromatography (GC×GC) provides greater separation space than conventional GC. Because of fast peak elution, a time of flight mass spectrometer (TOFMS) is the usual structure-specific detector of choice. The quantitative capabilities of a novel GC×GC fast quadrupole MS were investigated with electron ionization (EI), and CH4 or NH3 positive chemical ionization (PCI) for analysis of endogenous urinary steroids targeted in anti-doping tests. Average precisions for steroid quantitative analysis from replicate urine extractions were 6% (RSD) for EI and 8% for PCI-NH3. The average limits of detection (LOD) calculated by quantification ions for 12 target steroids spiked into steroid-free urine matrix (SFUM) were 2.6 ng mL−1 for EI, 1.3 ng mL−1 for PCI-CH4, and 0.3 ng mL−1 for PCI-NH3, all in mass scanning mode. The measured limits of quantification (LOQ) with full mass scan GC×GC-qMS were comparable with the LOQ values measured by one-dimensional GC-MS in single ion monitoring (SIM) mode. PCI-NH3 yields fewer fragments and greater (pseudo)molecular ion abundances than EI or PCI-CH4. These data show a benchtop GC×GC-qMS system has the sensitivity, specificity, and resolution to analyze urinary steroids at normal urine concentrations, and that PCI-NH3, not currently available on most GC×GC-TOFMS instruments, is of particular value for generation of structure-specific ions. PMID:22606686
Warren, Tessa; Dickey, Michael Walsh; Liburd, Teljer L
2017-07-01
The rational inference, or noisy channel, account of language comprehension predicts that comprehenders are sensitive to the probabilities of different interpretations for a given sentence and adapt as these probabilities change (Gibson, Bergen & Piantadosi, 2013). This account provides an important new perspective on aphasic sentence comprehension: aphasia may increase the likelihood of sentence distortion, leading people with aphasia (PWA) to rely more on the prior probability of an interpretation and less on the form or structure of the sentence (Gibson, Sandberg, Fedorenko, Bergen & Kiran, 2015). We report the results of a sentence-picture matching experiment that tested the predictions of the rational inference account and other current models of aphasic sentence comprehension across a variety of sentence structures. Consistent with the rational inference account, PWA showed similar sensitivity to the probability of particular kinds of form distortions as age-matched controls, yet overall their interpretations relied more on prior probability and less on sentence form. As predicted by rational inference, but not by other models of sentence comprehension in aphasia, PWA's interpretations were more faithful to the form for active and passive sentences than for direct object and prepositional object sentences. However, contra rational inference, there was no evidence that individual PWA's severity of syntactic or semantic impairment predicted their sensitivity to form versus the prior probability of a sentence, as cued by semantics. These findings confirm and extend previous findings that suggest the rational inference account holds promise for explaining aphasic and neurotypical comprehension, but they also raise new challenges for the account. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wu, Yiping; Liu, Shu-Guang
2012-01-01
R program language-Soil and Water Assessment Tool-Flexible Modeling Environment (R-SWAT-FME) (Wu and Liu, 2012) is a comprehensive modeling framework that adopts an R package, Flexible Modeling Environment (FME) (Soetaert and Petzoldt, 2010), for the Soil and Water Assessment Tool (SWAT) model (Arnold and others, 1998; Neitsch and others, 2005). This framework provides the functionalities of parameter identifiability, model calibration, and sensitivity and uncertainty analysis with instant visualization. This user's guide shows how to apply this framework for a customized SWAT project.
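The kind of local sensitivity analysis such frameworks automate can be sketched as a one-at-a-time perturbation of each parameter; the toy "model", parameter names, and values below are hypothetical stand-ins for illustration, not SWAT or FME code.

```python
# One-at-a-time (local) parameter sensitivity: perturb each parameter by a
# small relative step and record the normalized change in model output.
# The toy model is a hypothetical stand-in for a hydrologic simulator.

def model(params):
    # hypothetical response: "runoff" as a linear function of two parameters
    cn, k = params["cn"], params["k"]
    return 0.8 * cn - 5.0 * k

def sensitivities(params, rel_step=0.01):
    base = model(params)
    out = {}
    for name in params:
        perturbed = dict(params)
        perturbed[name] = params[name] * (1 + rel_step)
        # normalized (elasticity-style) sensitivity: (dY/Y) / (dX/X)
        out[name] = ((model(perturbed) - base) / base) / rel_step
    return out

s = sensitivities({"cn": 75.0, "k": 2.0})
print({name: round(v, 3) for name, v in s.items()})
```

FME's `sensFun` computes the same kind of local sensitivity functions (plus global methods) directly against the wrapped SWAT model.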
Ribas-Maynou, J; García-Peiró, A; Fernández-Encinas, A; Abad, C; Amengual, M J; Prada, E; Navarro, J; Benet, J
2013-09-01
Sperm DNA fragmentation (SDF) is becoming an important test to assess male infertility. Several different tests are available, but no consensus has yet been reached as to which tests are most predictive of infertility. Few publications have reported a comprehensive analysis comparing these methods within the same population. The objective of this study was to analyze the differences between the five most common methodologies, to study their correlations and to establish their cut-off values, sensitivity and specificity in predicting male infertility. We found differences in SDF between fertile donors and infertile patients with the TUNEL, SCSA, SCD and alkaline Comet assays, but none with the neutral Comet assay. The alkaline Comet assay was the best at predicting male infertility, followed by TUNEL, SCD and SCSA, whereas the neutral Comet assay had no predictive power. For our patient population, threshold values for infertility were 20.05% for the TUNEL assay, 18.90% for SCSA, 22.75% for the SCD test, 45.37% for alkaline Comet and 34.37% for neutral Comet. This comprehensive study establishes that all techniques except the neutral Comet assay are useful for distinguishing fertile from infertile men. © 2013 American Society of Andrology and European Academy of Andrology.
Seok, Junhee; Kaushal, Amit; Davis, Ronald W; Xiao, Wenzhong
2010-01-18
The large amount of high-throughput genomic data has facilitated the discovery of the regulatory relationships between transcription factors and their target genes. While early methods for discovery of transcriptional regulation relationships from microarray data often focused on the high-throughput experimental data alone, more recent approaches have explored the integration of external knowledge bases of gene interactions. In this work, we develop an algorithm that provides improved performance in the prediction of transcriptional regulatory relationships by supplementing the analysis of microarray data with a new method of integrating information from an existing knowledge base. Using a well-known dataset of yeast microarrays and the Yeast Proteome Database, a comprehensive collection of known information of yeast genes, we show that knowledge-based predictions demonstrate better sensitivity and specificity in inferring new transcriptional interactions than predictions from microarray data alone. We also show that comprehensive, direct and high-quality knowledge bases provide better prediction performance. Comparison of our results with ChIP-chip data and growth fitness data suggests that our predicted genome-wide regulatory pairs in yeast are reasonable candidates for follow-up biological verification. High quality, comprehensive, and direct knowledge bases, when combined with appropriate bioinformatic algorithms, can significantly improve the discovery of gene regulatory relationships from high throughput gene expression data.
Cost-Effectiveness and Quality of Care of a Comprehensive ART Program in Malawi
Orlando, Stefano; Diamond, Samantha; Palombi, Leonardo; Sundaram, Maaya; Shear Zinner, Lauren; Marazzi, Maria Cristina; Mancinelli, Sandro; Liotta, Giuseppe
2016-01-01
The aim of this study is to assess the cost-effectiveness of a holistic, comprehensive human immunodeficiency virus (HIV) treatment program in Malawi. Comprehensive cost data for the year 2010 were collected at 30 facilities from the public network of health centers providing antiretroviral treatment (ART) throughout the country; two of these facilities were operated by the Disease Relief through Excellent and Advanced Means (DREAM) program. The outcomes analysis was carried out over five years comparing two cohorts of patients on treatment: 1) 2387 patients who started ART in the two DREAM centers during 2008, and 2) patients who started ART in Malawi in the same year under the Ministry of Health program. Assuming the 2010 costs remained constant over the five years, the cost-effectiveness analysis was undertaken from a health sector and national perspective; a sensitivity analysis included two hypotheses of ART impact on patients' income. The total cost per patient per year (PPPY) was $314.5 for the DREAM protocol and $188.8 for the other Malawi ART sites, with 737 disability-adjusted life years (DALYs) saved among the DREAM program patients compared with the others. The incremental cost-effectiveness ratio was $1640 per DALY saved; it ranged between $896 and $1268 for the national and health sector perspectives, respectively. The cost per DALY saved remained under $2154, the AFR-E WHO regional gross domestic product per capita threshold for a program to be considered very cost-effective. A comprehensive HIV/acquired immune deficiency syndrome treatment program that joins ART with laboratory monitoring, treatment adherence reinforcement and malnutrition control can be very cost-effective in the sub-Saharan African setting. PMID:27227921
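The cost-effectiveness figures above rest on the standard incremental cost-effectiveness ratio (ICER): incremental cost divided by incremental effect. A minimal sketch of the calculation, with purely hypothetical inputs rather than the study's cohort data:

```python
def icer(cost_new, cost_comparator, effect_new, effect_comparator):
    """Incremental cost-effectiveness ratio: extra cost incurred per
    extra unit of health effect (here, per DALY saved)."""
    return (cost_new - cost_comparator) / (effect_new - effect_comparator)

# Hypothetical program costing $300 per patient-year versus a $190
# comparator, averting an extra 0.05 DALYs per patient-year:
ratio = icer(300.0, 190.0, 0.05, 0.0)  # about $2200 per DALY saved
```

The resulting ratio is then judged against a willingness-to-pay threshold, such as the regional GDP per capita benchmark cited in the abstract.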
Gao, Xiaoli; Zhang, Qibin; Meng, Da; Issac, Giorgis; Zhao, Rui; Fillmore, Thomas L.; Chu, Rosey K.; Zhou, Jianying; Tang, Keqi; Hu, Zeping; Moore, Ronald J.; Smith, Richard D.; Katze, Michael G.; Metz, Thomas O.
2012-01-01
Lipidomics is a critical part of metabolomics and aims to study all the lipids within a living system. We present here the development and evaluation of a sensitive capillary UPLC-MS method for comprehensive top-down/bottom-up lipid profiling. Three different stationary phases were evaluated in terms of peak capacity, linearity, reproducibility, and limit of quantification (LOQ) using a mixture of lipid standards representative of the lipidome. The relative standard deviations of the retention times and peak abundances of the lipid standards were 0.29% and 7.7%, respectively, when using the optimized method. The linearity was acceptable at >0.99 over 3 orders of magnitude, and the LOQs were sub-fmol. To demonstrate the performance of the method in the analysis of complex samples, we analyzed lipids extracted from a human cell line, rat plasma, and a model human skin tissue, identifying 446, 444, and 370 unique lipids, respectively. Overall, the method provided either higher coverage of the lipidome, greater measurement sensitivity, or both, when compared to other approaches of global, untargeted lipid profiling based on chromatography coupled with MS. PMID:22354571
Nutritional Lipidomics: Molecular Metabolism, Analytics, and Diagnostics
Smilowitz, Jennifer T.; Zivkovic, Angela M.; Wan, Yu-Jui Yvonne; Watkins, Steve M.; Nording, Malin L.; Hammock, Bruce D.; German, J. Bruce
2013-01-01
The field of lipidomics is providing nutritional science a more comprehensive view of lipid intermediates. Lipidomics research takes advantage of the increased accuracy and sensitivity of mass detection by mass spectrometry, together with new bioinformatics toolsets, to characterize the structures and abundances of complex lipids. Yet translating lipidomics to practice via nutritional interventions is still in its infancy. No single instrumentation platform is able to solve the varying analytical challenges of the different molecular lipid species. Biochemical pathways of lipid metabolism remain incomplete, and the tools to map lipid compositional data to pathways are still being assembled. Biology itself is dauntingly complex, and simply separating biological structures remains a key challenge to lipidomics. Nonetheless, the strategy of combining tandem analytical methods to perform the sensitive, high-throughput, quantitative and comprehensive analysis of lipid metabolites of very large numbers of molecules is poised to drive the field forward rapidly. Among the next steps for nutrition research to understand the changes in structures, compositions and functions of lipid biomolecules in response to diet is to describe their distribution within discrete functional compartments: lipoproteins. Additionally, lipidomics must tackle the task of assigning the functions of lipids as signaling molecules, nutrient sensors, and intermediates of metabolic pathways. PMID:23818328
ERIC Educational Resources Information Center
Kim, Young-Suk Grace; Petscher, Yaacov
2016-01-01
Emerging evidence suggests that children's sensitivity to suprasegmental phonology such as stress and timing (i.e., prosodic sensitivity) contributes to reading. The primary goal of this study was to investigate pathways of the relation of prosodic sensitivity to reading (word reading and reading comprehension) using data from 370 first-grade…
How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?
Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J
2004-01-01
There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. To investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost-sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. 
Articles that reported sensitivity analyses where results crossed the cost-utility threshold above the base-case results (n = 25) were of somewhat higher quality, and were more likely to justify their sensitivity analysis parameters, than those that did not (n = 45), but the overall quality rating was only moderate. Sensitivity analyses for economic parameters are widely reported and often identify whether choosing different assumptions leads to a different conclusion regarding cost effectiveness. Changes in HR-QOL and cost parameters should be used to test alternative guideline recommendations when there is uncertainty regarding these parameters. Changes in discount rates less frequently produce results that would change the conclusion about cost effectiveness. Improving the overall quality of published studies and describing the justifications for parameter ranges would allow more meaningful conclusions to be drawn from sensitivity analyses.
Hall, Sheldon K.; Ooi, Ean H.; Payne, Stephen J.
2015-01-01
Purpose: A sensitivity analysis has been performed on a mathematical model of radiofrequency ablation (RFA) in the liver. The purpose is to identify the most important parameters in the model, defined as those that produce the largest changes in the prediction. This is important in understanding the role of uncertainty and when comparing the model predictions to experimental data. Materials and methods: The Morris method was chosen to perform the sensitivity analysis because it is ideal for models with many parameters or that take a significant length of time to solve. A comprehensive literature review was performed to obtain the ranges over which the model parameters are expected to vary, crucial input information. Results: The most important parameters in predicting the ablation zone size in our model of RFA are those representing the blood perfusion, electrical conductivity and the cell death model. The size of the 50 °C isotherm is sensitive to the electrical properties of tissue while the heat source is active, and to the thermal parameters during cooling. Conclusions: The parameter ranges chosen for the sensitivity analysis are believed to represent all that is currently known about their values in combination. The Morris method is able to compute global parameter sensitivities taking into account the interaction of all parameters, something that has not been done before. Research is needed to better understand the uncertainties in the cell death, electrical conductivity and perfusion models; the other parameters are only of second order, providing a significant simplification. PMID:26000972
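The Morris screening method used above estimates parameter importance from "elementary effects": one-at-a-time perturbations evaluated at many random base points. A simplified toy implementation (not the authors' RFA code, and without the trajectory design of the full method):

```python
import random

def elementary_effects(f, n_params, n_trajectories=20, delta=0.1, seed=0):
    """Morris screening sketch: mean absolute elementary effect per
    parameter, from one-at-a-time perturbations at random base points."""
    rng = random.Random(seed)
    mu_star = [0.0] * n_params
    for _ in range(n_trajectories):
        x = [rng.random() for _ in range(n_params)]
        fx = f(x)
        for i in range(n_params):
            xp = list(x)
            xp[i] += delta          # perturb one parameter only
            mu_star[i] += abs((f(xp) - fx) / delta)
    return [m / n_trajectories for m in mu_star]
```

Parameters with a large mean absolute elementary effect dominate the output; in the full Morris method the spread of the effects additionally flags nonlinearity and interactions.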
Fang, Y G; Chen, N N; Cheng, Y B; Sun, S J; Li, H X; Sun, F; Xiang, Y
2015-12-01
Urinary neutrophil gelatinase-associated lipocalin (uNGAL) is relatively specific in lupus nephritis (LN) patients. However, its diagnostic value has not been evaluated. The aim of this review was to determine the value of uNGAL for diagnosis and estimating activity in LN. A comprehensive search was performed on PubMed, EMBASE, Web of Knowledge, Cochrane electronic databases through December 2014. Meta-analysis of sensitivity and specificity was performed with a random-effects model. Additionally, summary receiver operating characteristic (SROC) curves and area under the curve (AUC) values were calculated. Fourteen studies were selected for this review. With respect to diagnosing LN, the pooled sensitivity and specificity were 73.6% (95% confidence interval (CI), 61.9-83.3) and 78.1% (95% CI, 69.0-85.6), respectively. The SROC-AUC value was 0.8632. Regarding estimating LN activity, the pooled sensitivity and specificity were 66.2% (95% CI, 60.4-71.7) and 62.1% (95% CI, 57.9-66.3), respectively. The SROC-AUC value was 0.7583. In predicting renal flares, the pooled sensitivity and specificity were 77.5% (95% CI, 68.1-85.1) and 65.3% (95% CI, 60.0-70.3), respectively. The SROC-AUC value was 0.7756. In conclusion, this meta-analysis indicates that uNGAL has relatively fair sensitivity and specificity in diagnosing LN, estimating LN activity and predicting renal flares, suggesting that uNGAL is a potential biomarker in diagnosing LN and monitoring LN activity. © The Author(s) 2015.
Sensitivity analysis of 1-D dynamical model for basin analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, S.
1987-01-01
Geological processes related to petroleum generation, migration and accumulation are very complicated in terms of the time and variables involved, and it is very difficult to simulate these processes by laboratory experiments. For this reason, many mathematical/computer models have been developed to simulate these geological processes based on geological, geophysical and geochemical principles. The sensitivity analysis in this study is a comprehensive examination of how geological, geophysical and geochemical parameters influence the reconstructions of geohistory, thermal history and hydrocarbon generation history using the 1-D fluid flow/compaction model developed in the Basin Modeling Group at the University of South Carolina. This study shows the effects of commonly used parameters such as depth, age, lithology, porosity, permeability, unconformity (eroded thickness and erosion time), temperature at the sediment surface, bottom hole temperature, present-day heat flow, thermal gradient, thermal conductivity, and kerogen type and content on the evolution of formation thickness, porosity, permeability and pressure with time and depth; heat flow with time; temperature with time and depth; vitrinite reflectance (Ro) and TTI with time and depth; the oil window in terms of time and depth; and the amount of hydrocarbons generated with time and depth. Lithology, present-day heat flow and thermal conductivity are the most sensitive parameters in the reconstruction of temperature history.
Jha, Ashwini Kumar; Tang, Wen Hao; Bai, Zhi Bin; Xiao, Jia Quan
2014-01-01
To perform a meta-analysis reviewing the sensitivity and specificity of computed tomography (CT) and different known CT signs for the diagnosis of strangulation in patients with acute small bowel obstruction (SBO). A comprehensive PubMed search was performed for all reports that evaluated the use of CT and discussed different CT criteria for the diagnosis of acute SBO. Articles published in the English language from January 1978 to June 2008 were included. Review articles, case reports, pictorial essays and articles without original data were excluded. The bivariate random-effects model was used to obtain pooled sensitivity and pooled specificity. A summary receiver operating characteristic curve was calculated using Meta-DiSc. The software OpenBUGS 3.0.3 was used to summarize the data. A total of 12 studies fulfilled the inclusion criteria. The pooled sensitivity and specificity of CT in the diagnosis of strangulation were 0.720 (95% CI 0.674 to 0.763) and 0.866 (95% CI 0.837 to 0.892), respectively. Among the different CT signs, mesenteric edema had the highest pooled sensitivity (0.741) and lack of bowel wall enhancement had the highest pooled specificity (0.991). This review demonstrates that CT is highly sensitive as well as specific in the preoperative diagnosis of strangulated SBO, in accordance with the published studies. Our analysis also shows that "presence of mesenteric fluid" is the most sensitive, and "lack of bowel wall enhancement" the most specific, CT sign of strangulation, and it justifies the need for large-scale prospective studies to validate these results and to determine a clinical protocol.
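For intuition, pooling sensitivities across studies can be sketched as an inverse-variance average of logit-transformed values. This is a deliberate simplification of the bivariate random-effects model the review actually used (which additionally models between-study variance and the sensitivity/specificity correlation), and the cell counts below are made up:

```python
import math

def pooled_sensitivity(tp_fn_pairs):
    """Inverse-variance pooling of logit-transformed sensitivities.
    A simplified fixed-effect sketch, not the bivariate random-effects
    model used in the review."""
    num = den = 0.0
    for tp, fn in tp_fn_pairs:
        sens = (tp + 0.5) / (tp + fn + 1.0)   # 0.5 continuity correction
        logit = math.log(sens / (1.0 - sens))
        var = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)
        num += logit / var
        den += 1.0 / var
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))

# Made-up per-study counts of (true positives, false negatives):
pooled = pooled_sensitivity([(72, 28), (80, 20)])
```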
A comprehensive probabilistic analysis model of oil pipelines network based on Bayesian network
NASA Astrophysics Data System (ADS)
Zhang, C.; Qin, T. X.; Jiang, B.; Huang, C.
2018-02-01
Oil pipeline networks are among the most important facilities for energy transportation, but accidents on them may result in serious disasters. Analysis models for these accidents have been established mainly on three methods: event trees, accident simulation and Bayesian networks. Among these methods, the Bayesian network is well suited to probabilistic analysis, but existing models do not consider all the important influencing factors, and no deployment rule for the factors has been established. This paper proposes a probabilistic analysis model of oil pipeline networks based on a Bayesian network. Most of the important influencing factors, including the key environmental conditions and emergency response, are considered in this model. Moreover, the paper also introduces a deployment rule for these factors. The model can be used in probabilistic analysis and sensitivity analysis of oil pipeline network accidents.
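A Bayesian network of the kind proposed reduces, in the smallest case, to a prior, a conditional probability table, and Bayes' rule. A two-node sketch with hypothetical probabilities (not taken from the paper):

```python
# Minimal two-node Bayesian network, Corrosion -> Leak,
# with hypothetical probabilities.
p_corrosion = 0.05                          # prior on the risk factor
p_leak_given = {True: 0.30, False: 0.01}    # CPT: P(leak | corrosion state)

# Marginalize over the parent to get P(leak) ...
p_leak = sum(p_leak_given[c] * (p_corrosion if c else 1 - p_corrosion)
             for c in (True, False))
# ... and invert with Bayes' rule for the diagnostic query.
p_corrosion_given_leak = p_leak_given[True] * p_corrosion / p_leak
```

A full pipeline-network model chains many such nodes (environment, third-party damage, emergency response), and sensitivity analysis amounts to re-running such queries while varying the CPT entries.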
Nestorov, I A; Aarons, L J; Rowland, M
1997-08-01
Sensitivity analysis studies the effects of the inherent variability and uncertainty in model parameters on the model outputs and may be a useful tool at all stages of the pharmacokinetic modeling process. The present study examined the sensitivity of a whole-body physiologically based pharmacokinetic (PBPK) model for the distribution kinetics of nine 5-n-alkyl-5-ethyl barbituric acids in arterial blood and 14 tissues (lung, liver, kidney, stomach, pancreas, spleen, gut, muscle, adipose, skin, bone, heart, brain, testes) after i.v. bolus administration to rats. The aims were to obtain new insights into the model used, to rank the model parameters involved according to their impact on the model outputs and to study the changes in the sensitivity induced by the increase in the lipophilicity of the homologues on ascending the series. Two approaches for sensitivity analysis have been implemented. The first, based on the Matrix Perturbation Theory, uses a sensitivity index defined as the normalized sensitivity of the 2-norm of the model compartmental matrix to perturbations in its entries. The second approach uses the traditional definition of the normalized sensitivity function as the relative change in a model state (a tissue concentration) corresponding to a relative change in a model parameter. Autosensitivity has been defined as sensitivity of a state to any of its parameters; cross-sensitivity as the sensitivity of a state to any other states' parameters. Using the two approaches, the sensitivity of representative tissue concentrations (lung, liver, kidney, stomach, gut, adipose, heart, and brain) to the following model parameters: tissue-to-unbound plasma partition coefficients, tissue blood flows, unbound renal and intrinsic hepatic clearance, permeability surface area product of the brain, have been analyzed. Both the tissues and the parameters were ranked according to their sensitivity and impact. 
The following general conclusions were drawn: (i) the overall sensitivity of the system to all parameters involved is small due to the weak connectivity of the system structure; (ii) the time course of both the auto- and cross-sensitivity functions for all tissues depends on the dynamics of the tissues themselves, e.g., the higher the perfusion of a tissue, the higher are both its cross-sensitivity to other tissues' parameters and the cross-sensitivities of other tissues to its parameters; and (iii) with a few exceptions, there is not a marked influence of the lipophilicity of the homologues on either the pattern or the values of the sensitivity functions. The estimates of the sensitivity and the subsequent tissue and parameter rankings may be extended to other drugs, sharing the same common structure of the whole body PBPK model, and having similar model parameters. Results show also that the computationally simple Matrix Perturbation Analysis should be used only when an initial idea about the sensitivity of a system is required. If comprehensive information regarding the sensitivity is needed, the numerically expensive Direct Sensitivity Analysis should be used.
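The normalized sensitivity function used in the second approach, the relative change in a model state per relative change in a parameter, can be approximated by finite differences. A sketch on a hypothetical one-compartment model rather than the authors' whole-body PBPK model:

```python
import math

def normalized_sensitivity(model, params, name, t, rel_step=1e-4):
    """Finite-difference normalized sensitivity: relative change in the
    model output per relative change in parameter `name` at time t."""
    base = model(params, t)
    bumped = dict(params)
    bumped[name] *= 1.0 + rel_step
    return (model(bumped, t) - base) / base / rel_step

# Hypothetical one-compartment model, C(t) = (dose/V) * exp(-CL/V * t);
# not the paper's whole-body PBPK model.
def conc(p, t):
    return p["dose"] / p["V"] * math.exp(-p["CL"] / p["V"] * t)

params = {"dose": 10.0, "V": 5.0, "CL": 1.0}
s_cl = normalized_sensitivity(conc, params, "CL", t=5.0)
```

For this toy model the analytic value at t = 5 is -CL*t/V = -1: a 1% rise in clearance lowers the concentration by about 1%.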
Plasticity of the Arabidopsis Root System under Nutrient Deficiencies
Gruber, Benjamin D.; Giehl, Ricardo F.H.; Friedel, Swetlana; von Wirén, Nicolaus
2013-01-01
Plant roots show a particularly high variation in their morphological response to different nutrient deficiencies. Although such changes often determine the nutrient efficiency or stress tolerance of plants, it is surprising that a comprehensive and comparative analysis of root morphological responses to different nutrient deficiencies has not yet been conducted. Since one reason for this is an inherent difficulty in obtaining nutrient-deficient conditions in agar culture, we first identified conditions appropriate for producing nutrient-deficient plants on agar plates. Based on a careful selection of agar specifically for each nutrient being considered, we grew Arabidopsis (Arabidopsis thaliana) plants at four levels of deficiency for 12 nutrients and quantified seven root traits. In combination with measurements of biomass and elemental concentrations, we observed that the nutritional status and type of nutrient determined the extent and type of changes in root system architecture (RSA). The independent regulation of individual root traits further pointed to a differential sensitivity of root tissues to nutrient limitations. To capture the variation in RSA under different nutrient supplies, we used principal component analysis and developed a root plasticity chart representing the overall modulations in RSA under a given treatment. This systematic comparison of RSA responses to nutrient deficiencies provides a comprehensive view of the overall changes in root plasticity induced by the deficiency of single nutrients and provides a solid basis for the identification of nutrient-sensitive steps in the root developmental program. PMID:23852440
Baby-MONITOR: A Composite Indicator of NICU Quality
Kowalkowski, Marc A.; Zupancic, John A. F.; Pietz, Kenneth; Richardson, Peter; Draper, David; Hysong, Sylvia J.; Thomas, Eric J.; Petersen, Laura A.; Gould, Jeffrey B.
2014-01-01
BACKGROUND AND OBJECTIVES: NICUs vary in the quality of care delivered to very low birth weight (VLBW) infants. NICU performance on 1 measure of quality only modestly predicts performance on others. Composite measurement of quality of care delivery may provide a more comprehensive assessment of quality. The objective of our study was to develop a robust composite indicator of the quality of NICU care provided to VLBW infants that accurately discriminates performance among NICUs. METHODS: We developed a composite indicator, Baby-MONITOR, based on 9 measures of quality chosen by a panel of experts. Measures were standardized, equally weighted, and averaged. We used the California Perinatal Quality Care Collaborative database to perform a cross-sectional analysis of care given to VLBW infants between 2004 and 2010. Performance on the Baby-MONITOR is not an absolute marker of quality but indicates overall performance relative to that of the other NICUs. We used sensitivity analyses to assess the robustness of the composite indicator, varying assumptions and methods. RESULTS: Our sample included 9023 VLBW infants in 22 California regional NICUs. We found significant variations within and between NICUs on the measured components of the Baby-MONITOR. Risk-adjusted composite scores discriminated performance among this sample of NICUs. Sensitivity analysis that included different approaches to normalization, weighting, and aggregation of individual measures showed the Baby-MONITOR to be robust (r = 0.89–0.99). CONCLUSIONS: The Baby-MONITOR may be a useful tool to comprehensively assess the quality of care delivered by NICUs. PMID:24918221
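The composite recipe described in the methods (standardize, weight equally, average) can be sketched directly; the measure names below are invented, and the sketch omits the study's risk adjustment:

```python
import statistics

def composite_score(measures):
    """Composite indicator sketch: z-score each quality measure across
    units, weight equally, and average (risk adjustment omitted)."""
    n_units = len(next(iter(measures.values())))
    z = {}
    for name, values in measures.items():
        mean = statistics.mean(values)
        sd = statistics.stdev(values)
        z[name] = [(v - mean) / sd for v in values]
    # equal weights: plain average of the z-scores for each unit
    return [statistics.mean([z[m][i] for m in z]) for i in range(n_units)]

# Hypothetical measures for three NICUs (names invented):
scores = composite_score({"infection": [1.0, 2.0, 3.0],
                          "growth": [10.0, 20.0, 30.0]})
```

Because scores are z-based, a unit's composite only indicates performance relative to the other units in the sample, exactly as the abstract cautions.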
Lötsch, Jörn; Geisslinger, Gerd; Heinemann, Sarah; Lerch, Florian; Oertel, Bruno G.; Ultsch, Alfred
2018-01-01
The comprehensive assessment of pain-related human phenotypes requires combinations of nociceptive measures that produce complex high-dimensional data, posing challenges to bioinformatic analysis. In this study, we assessed established experimental models of heat hyperalgesia of the skin, consisting of local ultraviolet-B (UV-B) irradiation or capsaicin application, in 82 healthy subjects using a variety of noxious stimuli. We extended the original heat stimulation by applying cold and mechanical stimuli and assessing the hypersensitization effects with a clinically established quantitative sensory testing (QST) battery (German Research Network on Neuropathic Pain). This study provided a 246 × 10 data matrix (82 subjects assessed at baseline, following UV-B application, and following capsaicin application) with respect to 10 QST parameters, which we analyzed using machine-learning techniques. We observed statistically significant effects of the hypersensitization treatments on 9 different QST parameters. Supervised machine-learned analysis, implemented as random forests followed by ABC analysis, pointed to heat pain thresholds as the most relevantly affected QST parameter. However, decision tree analysis indicated that UV-B additionally modulated sensitivity to cold. Unsupervised machine-learning techniques, implemented as emergent self-organizing maps, hinted at subgroups responding to topical application of capsaicin. The distinction among subgroups was based on sensitivity to pressure pain, which could be attributed to sex differences, with women being more sensitive than men. Thus, while UV-B and capsaicin share a major component of heat pain sensitization, they differ in their effects on QST parameter patterns in healthy subjects, suggesting a lack of redundancy between these models. PMID:28700537
NUMERICAL FLOW AND TRANSPORT SIMULATIONS SUPPORTING THE SALTSTONE FACILITY PERFORMANCE ASSESSMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G.
2009-02-28
The Saltstone Disposal Facility Performance Assessment (PA) is being revised to incorporate requirements of Section 3116 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (NDAA), and updated data and understanding of vault performance since the 1992 PA (Cook and Fowler 1992) and related Special Analyses. A hybrid approach was chosen for modeling contaminant transport from vaults and future disposal cells to exposure points. A higher-resolution, largely deterministic analysis is performed on a best-estimate Base Case scenario using the PORFLOW numerical analysis code, and a few additional sensitivity cases are simulated to examine alternative scenarios and parameter settings. Stochastic analysis is performed on a simpler representation of the SDF system using the GoldSim code to estimate uncertainty and sensitivity about the Base Case. This report describes the development of the PORFLOW models supporting the SDF PA and presents sample results to illustrate model behaviors and define impacts relative to key facility performance objectives. The SDF PA document, when issued, should be consulted for a comprehensive presentation of results.
Monte Carlo capabilities of the SCALE code system
Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...
2014-09-12
SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.
Tambur, Anat R.; Leventhal, Joseph; Kaufman, Dixon B.; Friedewald, John; Miller, Joshua; Abecassis, Michael M.
2014-01-01
Background Patients with human leukocyte antigen antibodies constitute a significantly disadvantaged population among those awaiting renal transplantation. We speculated that a better understanding of the patients’ antibody makeup would allow a more “immunologic” evaluation of crossmatch data, facilitate the use of virtual crossmatch (XM), and lead to greater transplantability of these patients. Methods We retrospectively compared the transplantability and transplant outcome of two consecutive patient populations transplanted in our center. Group I (n=374) was evaluated using solid-phase-based testing for determination of percentage panel reactive antibody (“PRA screen”) with limited antibody identification testing. Group II (n=333) was tested in a more comprehensive manner, with major emphasis on antibody identification, antibody strength assignment, and the use of pronase for crossmatch. Results Given this approach, 49% (166/333) of the transplanted patients in group II were sensitized compared with 40% (150/374) of the recipients in group I; P=0.012. Transplant outcome at 1-year posttransplant was similar in both groups. Conclusions We conclude that comprehensive evaluation of human leukocyte antigen sensitization and application of immunologic principles in analyzing compatibility between donor and recipient can increase the transplantability of sensitized patients while maintaining similar outcomes. Our approach is in line with the new United Network for Organ Sharing guidelines for calculated panel reactive antibody and virtual XM analysis. We hope this report will prompt additional transplant programs to consider how they will use the new United Network for Organ Sharing algorithms. PMID:18946342
Karzmark, Peter; Deutsch, Gayle K
2018-01-01
This investigation was designed to determine the predictive accuracy of a comprehensive neuropsychological test battery and a brief neuropsychological test battery with regard to the capacity to perform instrumental activities of daily living (IADLs). Accuracy statistics, including measures of sensitivity, specificity, positive and negative predictive power, and positive likelihood ratio, were calculated for both types of batteries. The sample was drawn from a general neurological group of adults (n = 117) that included a number of older participants (age >55; n = 38). Standardized neuropsychological assessments were administered to all participants and comprised the Halstead-Reitan Battery and portions of the Wechsler Adult Intelligence Scale-III. The comprehensive test battery yielded a moderate increase over base rate in predictive accuracy that generalized to older individuals. There was only limited support for using a brief battery: although sensitivity was high, specificity was low. We found that a comprehensive neuropsychological test battery provided good classification accuracy for predicting IADL capacity.
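The accuracy statistics listed above follow standard confusion-matrix definitions; a small sketch with made-up counts:

```python
def accuracy_stats(tp, fp, fn, tn):
    """Standard confusion-matrix accuracy statistics."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "PPV": tp / (tp + fp),            # positive predictive power
        "NPV": tn / (tn + fn),            # negative predictive power
        "LR+": sensitivity / (1 - specificity),
    }

# Made-up counts: 80 true positives, 10 false positives,
# 20 false negatives, 90 true negatives.
stats = accuracy_stats(80, 10, 20, 90)
```

The brief-battery pattern the study reports (high sensitivity, low specificity) shows up here as a small LR+: a positive result adds little diagnostic weight when specificity is poor.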
Cost Benefit of Comprehensive Primary and Preventive School-Based Health Care.
Padula, William V; Connor, Katherine A; Mueller, Josiah M; Hong, Jonathan C; Velazquez, Gabriela Calderon; Johnson, Sara B
2018-01-01
The Rales Health Center is a comprehensive school-based health center at an urban elementary/middle school. Rales Health Center provides a full range of pediatric services using an enriched staffing model consisting of a pediatrician, nurse practitioner, registered nurses, and medical office assistant. This staffing model provides greater care but costs more than traditional school-based health centers staffed by part-time nurses. The objective was to analyze the cost benefit of Rales Health Center's enhanced staffing model compared with a traditional school-based health center (standard care), focusing on asthma care, which is among the most prevalent chronic conditions of childhood. In 2016, a cost-benefit analysis using a decision tree determined the net social benefit of Rales Health Center compared with standard care from the U.S. societal perspective, based on the 2015-2016 academic year. It was assumed that Rales Health Center could handle greater patient throughput related to asthma, decrease prescription costs, reduce parental resource use in terms of missed work time, and improve student attendance. Univariate and multivariate probabilistic sensitivity analyses were conducted. The expected cost to operate Rales Health Center was $409,120, compared with the standard care cost of $172,643. Total monetized incremental benefits of Rales Health Center were estimated to be $993,414. The expected net social benefit for Rales Health Center was $756,937, which demonstrated substantial societal benefit at a return of $4.20 for every dollar invested. This net social benefit estimate was robust to sensitivity analyses. Despite the greater cost associated with the Rales Health Center's enhanced staffing model, the results of this analysis highlight the cost benefit of providing comprehensive, high-quality pediatric care in schools, particularly schools with a large proportion of underserved students. Copyright © 2018 American Journal of Preventive Medicine.
Published by Elsevier Inc. All rights reserved.
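The headline figures above can be reproduced directly. A minimal sketch of the incremental net-social-benefit arithmetic, using only the dollar values reported in the abstract:

```python
# Reproducing the Rales Health Center cost-benefit arithmetic reported above.
# All dollar figures come from the abstract; the framing (incremental cost vs.
# monetized benefit) is the standard net-social-benefit calculation.
rales_cost = 409_120        # expected cost to operate Rales Health Center
standard_cost = 172_643     # expected cost of standard care
incremental_benefit = 993_414

incremental_cost = rales_cost - standard_cost
net_social_benefit = incremental_benefit - incremental_cost
roi = incremental_benefit / incremental_cost   # dollars returned per dollar invested

print(net_social_benefit)   # 756937, matching the reported $756,937
print(round(roi, 2))        # 4.2, matching the reported $4.20 per dollar
```

The check confirms the published net social benefit and return ratio are internally consistent.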
Tang, Zhentao; Hou, Wenqian; Liu, Xiuming; Wang, Mingfeng; Duan, Yixiang
2016-08-26
Integral analysis plays an important role in the study and quality control of substances with complex matrices in our daily life. As a preliminary step toward integral analysis of substances with complex matrices, developing a relatively comprehensive and sensitive methodology might offer more informative and reliable characteristic components. Flavoring mixtures, representative substances with complex matrices, are now widely used in various fields. To better study and control the quality of flavoring mixtures as additives in the food industry, an in-house fabricated solid-phase microextraction (SPME) fiber was prepared based on sol-gel technology in this work. The active organic component of the fiber coating was multi-walled carbon nanotubes (MWCNTs) functionalized with hydroxyl-terminated polydimethyldiphenylsiloxane, which integrates the non-polar and polar chains of both materials. In this way, more sensitive extraction capability for a wider range of compounds can be obtained in comparison with commercial SPME fibers. Preliminary integral analysis of three similar types of samples was realized by the optimized SPME-GC-MS method. With the obtained GC-MS data, a valid and well-fit model was established by partial least square discriminant analysis (PLS-DA) for classification of these samples (R2X=0.661, R2Y=0.996, Q2=0.986). Validation of the model (R2=0.266, Q2=-0.465) also confirmed its potential to predict the "belongingness" of new samples. With the PLS-DA and SPSS methods, further screening of markers among the three similar batches of samples may be helpful for monitoring and controlling the quality of flavoring mixtures as additives in the food industry. In turn, the reliability and effectiveness of the GC-MS data verified the comprehensive and efficient extraction performance of the in-house fabricated fiber. Copyright © 2016 Elsevier B.V. All rights reserved.
Cooperberg, Matthew R.; Ramakrishna, Naren R.; Duff, Steven B.; Hughes, Kathleen E.; Sadownik, Sara; Smith, Joseph A.; Tewari, Ashutosh K.
2012-01-01
Objectives To characterize the costs and outcomes associated with radical prostatectomy (open, laparoscopic, or robot-assisted) and radiation therapy (dose-escalated 3-dimensional conformal radiation, intensity-modulated radiation, brachytherapy, or combination), using a comprehensive, lifetime decision analytic model. Patients and Methods A Markov model was constructed to follow hypothetical men with low-, intermediate-, and high-risk prostate cancer over their lifetimes following primary treatment; probabilities of outcomes were based on an exhaustive literature search yielding 232 unique publications. Patients could experience remission, recurrence, salvage treatment, metastasis, death from prostate cancer, and death from other causes. Utilities for each health state were determined, and disutilities were applied for complications and toxicities of treatment. Costs were determined from the U.S. payer perspective, with incorporation of patient costs in a sensitivity analysis. Results Differences in quality-adjusted life years across modalities were modest, ranging from 10.3 to 11.3 for low-risk patients, 9.6 to 10.5 for intermediate-risk patients, and 7.8 to 9.3 for high-risk patients. There were no statistically significant differences among surgical modalities, which tended to be more effective than radiation modalities, with the exception of combination external beam + brachytherapy for high-risk disease. Radiation modalities were consistently more expensive than surgical modalities; costs ranged from $19,901 (robot-assisted prostatectomy for low-risk disease) to $50,276 (combination radiation for high-risk disease). These findings were robust to an extensive set of sensitivity analyses. Conclusions Our analysis found small differences in outcomes and substantial differences in payer and patient costs across treatment alternatives. 
These findings may inform future policy discussions regarding strategies to improve efficiency of treatment selection for localized prostate cancer. PMID:23279038
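The lifetime Markov model above follows a cohort through health states, accumulating discounted quality-adjusted life years (QALYs). A minimal sketch of that kind of cohort calculation; the states, transition probabilities, utilities, horizon, and discount rate here are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

# A minimal Markov cohort sketch of a lifetime decision-analytic model.
# All numbers below are illustrative assumptions, not the paper's inputs.
P = np.array([
    [0.94, 0.04, 0.02],   # remission -> remission / metastasis / dead
    [0.00, 0.70, 0.30],   # metastasis -> metastasis / dead
    [0.00, 0.00, 1.00],   # dead is absorbing
])
utility = np.array([0.85, 0.50, 0.00])   # QALY weight for one year in each state
discount = 0.03                          # annual discount rate

cohort = np.array([1.0, 0.0, 0.0])       # whole cohort starts in remission
qalys = 0.0
for year in range(40):                   # 40-year ("lifetime") horizon
    qalys += (cohort @ utility) / (1 + discount) ** year
    cohort = cohort @ P                  # advance the cohort one annual cycle

print(round(qalys, 1))   # discounted quality-adjusted life expectancy
```

Real models of this kind add recurrence, salvage, and toxicity states, and apply disutilities for complications, but the accumulation loop is the same.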
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Jiao; Scheibe, Timothy D.; Mahadevan, Radhakrishnan
2011-01-24
Uranium contamination is a serious concern at several sites, motivating the development of novel treatment strategies such as the Geobacter-mediated reductive immobilization of uranium. However, this bioremediation strategy has not yet been optimized for sustained uranium removal. While several reactive-transport models have been developed to represent Geobacter-mediated bioremediation of uranium, these models often lack a detailed quantitative description of the microbial processes (e.g., biomass build-up in both groundwater and sediments, the electron transport system) and of the interaction between biogeochemical and hydrological processes. In this study, a novel multi-scale model was developed by integrating our recent model of the electron capacitance of Geobacter (Zhao et al., 2010) with a comprehensive simulator of coupled fluid flow, hydrologic transport, heat transfer, and biogeochemical reactions. This mechanistic reactive-transport model accurately reproduces the experimental data for the bioremediation of uranium with acetate amendment. We subsequently performed global sensitivity analysis with the reactive-transport model in order to identify the main sources of prediction uncertainty caused by synergistic effects of biological, geochemical, and hydrological processes. The proposed approach successfully captured significant contributing factors across time and space, thereby improving the structure and parameterization of the comprehensive reactive-transport model. The global sensitivity analysis also provides a potentially useful tool to evaluate uranium bioremediation strategies. The simulations suggest that under difficult environments (e.g., highly contaminated with U(VI) at a high migration rate of solutes), the efficiency of uranium removal can be improved by adding Geobacter species to the contaminated site (bioaugmentation) in conjunction with the addition of electron donor (biostimulation). The simulations also highlight the interactive effect of initial cell concentration and flow rate on U(VI) reduction.
ERIC Educational Resources Information Center
Spencer, Mercedes; Wagner, Richard K.
2018-01-01
The purpose of this meta-analysis was to examine the comprehension problems of children who have a specific reading comprehension deficit (SCD), which is characterized by poor reading comprehension despite adequate decoding. The meta-analysis included 86 studies of children with SCD who were assessed in reading comprehension and oral language…
Wirth, Meg E.; Balk, Deborah; Delamonica, Enrique; Storeygard, Adam; Sacks, Emma; Minujin, Alberto
2006-01-01
OBJECTIVE: This analysis seeks to set the stage for equity-sensitive monitoring of the health-related Millennium Development Goals (MDGs). METHODS: We use data from international household-level surveys (Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS)) to demonstrate that establishing an equity baseline is necessary and feasible, even in low-income and data-poor countries. We assess data from six countries using 11 health indicators and six social stratifiers. Simple bivariate stratification is complemented by simultaneous stratification to expose the compound effect of multiple forms of vulnerability. FINDINGS: The data reveal that inequities are complex and interactive: inferences cannot be drawn about the nature or extent of inequities in health outcomes from a single stratifier or indicator. CONCLUSION: The MDGs and other development initiatives must become more comprehensive and explicit in their analysis and tracking of inequities. The design of policies to narrow health gaps must take into account country-specific inequities. PMID:16878225
Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...
2014-01-01
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
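The quantitative methods compared above estimate variance-based (Sobol') indices from structured samples. A self-contained sketch of a first-order Sobol' estimator using the Saltelli column-swap sampling scheme on the standard Ishigami test function; this illustrates the class of method, not the PSUADE implementation:

```python
import numpy as np

# First-order Sobol' indices via the Saltelli estimator, demonstrated on the
# Ishigami test function (a common SA benchmark with known analytical indices).
rng = np.random.default_rng(0)

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

n, d = 20_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))   # total output variance

S1 = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # swap one input column at a time
    S1.append(np.mean(fB * (ishigami(ABi) - fA)) / var)

# Analytical first-order indices for the Ishigami function (a=7, b=0.1)
# are approximately [0.314, 0.442, 0.0].
print([round(s, 2) for s in S1])
```

Each index is the fraction of output variance attributable to one input alone; the sample cost (here 2n model runs plus n per parameter) is why the abstract reports thousands of samples for quantitative SA.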
Sensitivity of Rayleigh wave ellipticity and implications for surface wave inversion
NASA Astrophysics Data System (ADS)
Cercato, Michele
2018-04-01
The use of Rayleigh wave ellipticity has gained increasing popularity in recent years for investigating earth structures, especially for near-surface soil characterization. In spite of its widespread application, the sensitivity of the ellipticity function to the soil structure has rarely been explored in a comprehensive and systematic manner. To this end, a new analytical method is presented for computing the sensitivity of Rayleigh wave ellipticity with respect to the structural parameters of a layered elastic half-space. This method takes advantage of the minor decomposition of the surface wave eigenproblem and is numerically stable at high frequency. This numerical procedure enabled retrieval of the sensitivity for typical near-surface and crustal geological scenarios, pointing out the key parameters for ellipticity interpretation under different circumstances. On this basis, a thorough analysis is performed to assess how ellipticity data can efficiently complement surface wave dispersion information in a joint inversion algorithm. The results of synthetic and real-world examples are illustrated to analyse quantitatively the diagnostic potential of the ellipticity data with respect to the soil structure, focusing on the possible sources of misinterpretation in data inversion.
Sensitivity of the Boundary Plasma to the Plasma-Material Interface
Canik, John M.; Tang, X. -Z.
2017-01-01
While the sensitivity of the scrape-off layer and divertor plasma to the highly uncertain cross-field transport assumptions is widely recognized, the plasma is also sensitive to the details of the plasma-material interface (PMI) models used as part of comprehensive predictive simulations. In this paper, these PMI sensitivities are studied by varying the relevant sub-models within the SOLPS plasma transport code. Two aspects are explored: the sheath model used as a boundary condition in SOLPS, and fast particle reflection rates for ions impinging on a material surface. Both of these have been the subject of recent high-fidelity simulation efforts aimed at improving the understanding and prediction of these phenomena. It is found that in both cases quantitative changes to the plasma solution result from modification of the PMI model, with a larger impact in the case of the reflection coefficient variation. This indicates the necessity to better quantify the uncertainties within the PMI models themselves and to perform thorough sensitivity analysis to propagate these throughout the boundary model; this is especially important for validation against experiment, where the error in the simulation is a critical and less-studied piece of the code-experiment comparison.
NASA Technical Reports Server (NTRS)
1973-01-01
An improved method for estimating aircraft weight and cost using a unique and fundamental approach was developed. The results of this study were integrated into a comprehensive digital computer program, which is intended for use at the preliminary design stage of aircraft development. The program provides a means of computing absolute values for weight and cost, and enables the user to perform trade studies with a sensitivity to detail design and overall structural arrangement. Both batch and interactive graphics modes of program operation are available.
Kazarian, Artaches A; Taylor, Mark R; Haddad, Paul R; Nesterenko, Pavel N; Paull, Brett
2013-12-01
The comprehensive separation and detection of hydrophobic and hydrophilic active pharmaceutical ingredients (APIs), their counter-ions (organic, inorganic) and excipients, using a single mixed-mode chromatographic column and a dual injection approach, is presented. Using a mixed-mode Thermo Fisher Acclaim Trinity P1 column, APIs, their counter-ions and possible degradants were first separated using a combination of anion-exchange, cation-exchange and hydrophobic interactions, with a mobile phase consisting of a dual organic modifier/salt concentration gradient. A complementary method was also developed using the same column for the separation of hydrophilic bulk excipients, using hydrophilic interaction liquid chromatography (HILIC) under high organic solvent mobile phase conditions. These two methods were then combined within a single gradient run using dual sample injection, with the first injection at the start of the applied gradient (mixed-mode retention of solutes), followed by a second sample injection at the end of the gradient (HILIC retention of solutes). Detection using both ultraviolet absorbance and refractive index enabled the sensitive detection of APIs and UV-absorbing counter-ions, together with quantitative determination of bulk excipients. The developed approach was applied successfully to the analysis of dry powder inhalers (Flixotide®, Spiriva®), enabling comprehensive quantification of all APIs and excipients in the sample. Copyright © 2013 Elsevier B.V. All rights reserved.
Fu, Shi-Feng; Zhang, Ping; Jiang, Jin-Long
2012-02-01
Assessment of land resources carrying capacity is the key point of planning environment impact assessment and the main foundation for determining whether the planning could be implemented or not. With the help of the spatial analysis function of a Geographic Information System, and selecting altitude, slope, land use type, distance from resident land, distance from main traffic roads, and distance from environmentally sensitive areas as the sensitive factors, a comprehensive assessment of the ecological sensitivity and its spatial distribution in Zhangzhou Merchants Economic and Technological Development Zone, Fujian Province of East China was conducted, and the assessment results were combined with the planning land layout diagram for the ecological suitability analysis. In the Development Zone, 84.0% of resident land, 93.1% of industrial land, 86.0% of traffic land, and 76.0% of other constructive lands in the planning were located in insensitive and gently sensitive areas; thus, the implementation of the land use planning generally had little impact on the ecological environment, and the land resources in the planning area were able to meet the land use demand. The assessment of the population carrying capacity with ecological land as the limiting factor indicated that, considering the highly sensitive area and 60% of the moderately sensitive area as ecological land, the population within the Zone under the planning could reach 240,000, and the available land area per capita could be 134.0 m2. Such a planned population scale is appropriate, according to the related standards of constructive land.
System analysis in rotorcraft design: The past decade
NASA Technical Reports Server (NTRS)
Galloway, Thomas L.
1988-01-01
Rapid advances in the technology of electronic digital computers and the need for an integrated synthesis approach in developing future rotorcraft programs have led to increased emphasis on system analysis techniques in rotorcraft design. The task in systems analysis is to deal with complex, interdependent, and conflicting requirements in a structured manner so that rational and objective decisions can be made. Whether the results are wisdom or rubbish depends upon the validity and, sometimes more importantly, the consistency of the inputs, the correctness of the analysis, and a sensible choice of measures of effectiveness to draw conclusions. In rotorcraft design this means combining design requirements, technology assessment, sensitivity analysis, and review techniques currently in use by NASA and Army organizations in developing research programs and vehicle specifications for rotorcraft. These procedures span simple graphical approaches to comprehensive analysis on large mainframe computers. Examples of recent applications to military and civil missions are highlighted.
Vibration analysis of the SA349/2 helicopter
NASA Technical Reports Server (NTRS)
Heffernan, Ruth; Precetti, Dominique; Johnson, Wayne
1991-01-01
Helicopter airframe vibration is examined using calculations and measurements for the SA349/2 research helicopter. The hub loads, which transmit excitations to the fuselage, are predicted using a comprehensive rotorcraft analysis and correlated with measured hub loads. The predicted and measured hub loads are then coupled with finite element models representing the SA349/2 fuselage. The resulting vertical acceleration at the pilot seat is examined. Adjustments are made to the airframe structural models to examine the sensitivity of predicted vertical acceleration to the model. Changes of a few percent to the damping and frequency of specific modes lead to large reductions in predicted vibration and to major improvements in the correlations with measured pilot-seat vertical acceleration.
Intonation and Gesture as Bootstrapping Devices in Speaker Uncertainty
ERIC Educational Resources Information Center
Hübscher, Iris; Esteve-Gibert, Núria; Igualada, Alfonso; Prieto, Pilar
2017-01-01
This study investigates 3- to 5-year-old children's sensitivity to lexical, intonational and gestural information in the comprehension of speaker uncertainty. Most previous studies on children's understanding of speaker certainty and uncertainty across languages have focused on the comprehension of lexical markers, and little is known about the…
Scott, Jamie S; Sterling, Sarah A; To, Harrison; Seals, Samantha R; Jones, Alan E
2016-07-01
Matrix-assisted laser desorption ionisation time-of-flight mass spectrometry (MALDI-TOF MS) has shown promise in decreasing time to identification of causative organisms compared to traditional methods; however, the utility of MALDI-TOF MS in a heterogeneous clinical setting is uncertain. To perform a systematic review on the operational performance of the Bruker MALDI-TOF MS system and evaluate published cut-off values compared to traditional blood cultures. A comprehensive literature search was performed. Studies were included if they performed direct MALDI-TOF MS analysis of blood culture specimens in human patients with suspected bacterial infections using the Bruker Biotyper software. Sensitivities and specificities of the combined studies were estimated using a hierarchical random effects linear model (REML) incorporating cut-off scores of ≥1.7 and ≥2.0. Fifty publications were identified, with 11 studies included after final review. The estimated sensitivity utilising a cut-off of ≥2.0 from the combined studies was 74.6% (95% CI = 67.9-89.3%), with an estimated specificity of 88.0% (95% CI = 74.8-94.7%). When assessing a cut-off of ≥1.7, the combined sensitivity increases to 92.8% (95% CI = 87.4-96.0%), but the estimated specificity decreased to 81.2% (95% CI = 61.9-96.6%). In this analysis, MALDI-TOF MS showed acceptable sensitivity and specificity in bacterial speciation with the current recommended cut-off point compared to blood cultures; however, lowering the cut-off point from ≥2.0 to ≥1.7 would increase the sensitivity of the test without significant detrimental effect on the specificity, which could improve clinician confidence in their results.
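The per-study quantities pooled above are ordinary sensitivities and specificities from 2x2 confusion tables. A minimal sketch with normal-approximation 95% CIs; the counts below are hypothetical (chosen to land near the pooled ≥2.0-cutoff estimates), and the paper itself pools such values with a hierarchical REML model rather than this simple formula:

```python
import math

# Per-study sensitivity and specificity from a 2x2 table, with
# normal-approximation 95% CIs. Counts are hypothetical, for illustration.
tp, fn = 112, 38    # true positives, false negatives (hypothetical)
tn, fp = 88, 12     # true negatives, false positives (hypothetical)

def prop_ci(k, n, z=1.96):
    """Point estimate and normal-approximation 95% CI for a proportion k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, (p - half, p + half)

sens, sens_ci = prop_ci(tp, tp + fn)
spec, spec_ci = prop_ci(tn, tn + fp)
print(round(sens, 3), round(spec, 3))   # 0.747 0.88
```

Lowering the score cut-off (≥2.0 to ≥1.7) reclassifies borderline identifications as positive, which is exactly the sensitivity-up, specificity-down trade the abstract reports.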
Wang, Zhen; Kwok, Kevin W H; Lui, Gilbert C S; Zhou, Guang-Jie; Lee, Jae-Seong; Lam, Michael H W; Leung, Kenneth M Y
2014-06-01
Due to a lack of saltwater toxicity data in tropical regions, toxicity data generated from temperate or cold water species endemic to North America and Europe are often adopted to derive water quality guidelines (WQG) for protecting tropical saltwater species. If chemical toxicity to most saltwater organisms increases with water temperature, the use of temperate species data and associated WQG may result in under-protection to tropical species. Given the differences in species composition and environmental attributes between tropical and temperate saltwater ecosystems, there are conceivable uncertainties in such 'temperate-to-tropic' extrapolations. This study aims to compare temperate and tropical saltwater species' acute sensitivity to 11 chemicals through a comprehensive meta-analysis, by comparing species sensitivity distributions (SSDs) between the two groups. A 10 percentile hazardous concentration (HC10) is derived from each SSD, and then a temperate-to-tropic HC10 ratio is computed for each chemical. Our results demonstrate that temperate and tropical saltwater species display significantly different sensitivity towards all test chemicals except cadmium, although such differences are small with the HC10 ratios ranging from 0.094 (un-ionised ammonia) to 2.190 (pentachlorophenol) only. Temperate species are more sensitive to un-ionised ammonia, chromium, lead, nickel and tributyltin, whereas tropical species are more sensitive to copper, mercury, zinc, phenol and pentachlorophenol. Through comparison of a limited number of taxon-specific SSDs, we observe that there is a general decline in chemical sensitivity from algae to crustaceans, molluscs and then fishes. Following a statistical analysis of the results, we recommend an extrapolation factor of two for deriving tropical WQG from temperate information. Copyright © 2013 Elsevier Ltd. All rights reserved.
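An HC10 is simply a low percentile of the fitted species sensitivity distribution. A sketch under the common log-normal SSD assumption: fit a normal to log10-transformed acute toxicity values and take its 10th percentile. The toxicity values below are hypothetical illustration data, not the datasets compiled in this meta-analysis:

```python
import numpy as np

# HC10 from a log-normal species sensitivity distribution (SSD).
# Hypothetical acute toxicity values (e.g. LC50s in ug/L), one per species.
lc50 = np.array([12.0, 35.0, 8.5, 150.0, 60.0, 22.0, 95.0, 40.0])

logv = np.log10(lc50)
mu, sigma = logv.mean(), logv.std(ddof=1)
z10 = -1.2815515655             # 10th percentile of the standard normal
hc10 = 10 ** (mu + z10 * sigma)

print(round(hc10, 1))   # concentration hazardous to the most sensitive 10% of species
```

A temperate-to-tropic HC10 ratio is then just the quotient of two such values fitted to the temperate and tropical species pools for the same chemical.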
Multidisciplinary optimization of controlled space structures with global sensitivity equations
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.
1991-01-01
A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.
DOT National Transportation Integrated Search
1999-09-01
This booklet presents some of the successes of the community-sensitive transportation facility development process. Although a comprehensive process is described here, not every project involves the full range of steps. By applying the techniques out...
Abbas, Muhammad A; Kim, Tea-Yon; Lee, Sang Uck; Kang, Yong Soo; Bang, Jin Ho
2016-01-13
Gold nanoclusters (Au NCs) with molecule-like behavior have emerged as a new light harvester in various energy conversion systems. Despite several important strides made recently, efforts toward the utilization of NCs as a light harvester have been primarily restricted to proving their potency and feasibility. In solar cell applications, ground-breaking research with a power conversion efficiency (PCE) of more than 2% has recently been reported. Because of the lack of complete characterization of metal cluster-sensitized solar cells (MCSSCs), however, comprehensive understanding of the interfacial events and limiting factors which dictate their performance remains elusive. In this regard, we provide deep insight into MCSSCs for the first time by performing in-depth electrochemical impedance spectroscopy (EIS) analysis combined with physical characterization and density functional theory (DFT) calculations of Au NCs. In particular, we focused on the effect of the size of the Au NCs and electrolytes on the performance of MCSSCs and reveal that they are significantly influential on important solar cell characteristics such as the light absorption capability, charge injection kinetics, interfacial charge recombination, and charge transport. Besides offering comprehensive insights, this work represents an important stepping stone toward the development of MCSSCs by accomplishing a new PCE record of 3.8%.
Guidelines for a Comprehensive Care Program to Ostomized Patients and Families: a Nursing proposal
de Figueiredo, Paula Alvarenga; Alvim, Neide Aparecida Titonelli
2016-01-01
Objectives: describe care needs and demands that mark the discursive practices of ostomized clients and family members, and discuss guidelines for a comprehensive care program for ostomized clients and their families, organized by macrosociological categories. Method: the Creative and Sensitive Method was used, involving 17 ostomized subjects and family members at a municipal outpatient clinic. The ethical aspects were complied with. A characterization form was used, as well as Creativity and Sensitivity Dynamics: "speaking map", "body-knowledge" and "calendar". Critical Discourse Analysis was applied. Results: the health needs and care demands of the ostomized patients and their family members, in their multiple dimensions, were constituted in the home, community, outpatient, and social contexts, implying new orientations for nursing care. The unveiling of the data brought elements that constituted guidelines, in a macrosociological approach, to achieve the expanded integrality of nursing care. Conclusion: the ostomized clients are sui generis (unique in their kind), calling for strategies that respond to and distinguish their specificities. Elaborating a public health policy that improves and reorganizes the care demands, taking into account these individual biopsychosocial and spiritual aspects, is a possible and irrevocable target in the attempt to achieve better conditions of health and wellbeing. PMID:27192418
Sinai, A; Crone, N E; Wied, H M; Franaszczuk, P J; Miglioretti, D; Boatman-Reich, D
2009-01-01
We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping.
Intracranial mapping of auditory perception: Event-related responses and electrocortical stimulation
Sinai, A.; Crone, N.E.; Wied, H.M.; Franaszczuk, P.J.; Miglioretti, D.; Boatman-Reich, D.
2010-01-01
Objective We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Methods Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. Results ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60 Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Conclusions Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. Significance These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping. PMID:19070540
3D Simulations of Void collapse in Energetic Materials
NASA Astrophysics Data System (ADS)
Rai, Nirmal Kumar; Udaykumar, H. S.
2017-06-01
Voids present in the microstructure of heterogeneous energetic materials affect their sensitivity towards ignition. It is established that the morphology of voids can play a significant role in sensitivity enhancement of energetic materials. Depending on the void shape, sensitivity can be either increased or decreased under given loading conditions. In the past, the effects of different void shapes (e.g., triangular, elliptical, cylindrical) on the sensitivity of energetic materials have been analyzed. However, most of these studies were performed in 2D and are limited by the plane strain assumption. Axisymmetric studies have also been performed to incorporate 3D effects; however, axisymmetric modeling is limited to certain geometries, such as spheres. This work analyzes the effects of various void shapes in three dimensions on the ignition behavior of HMX. Various void shapes are analyzed, including spheres and prolate and oblate spheroids at different orientations. Three-dimensional void collapse simulations are performed on a single void to quantify the effects of void morphology on initiation. A Cartesian grid based Eulerian solver, SCIMITAR3D, is used to perform the void collapse simulations. Various aspects of void morphology (size, thickness, elongation, orientation) are considered to obtain a comprehensive analysis. Also, 2D plane strain calculations are compared with the three-dimensional analysis to evaluate the salient differences between 2D and 3D modeling.
Gao, Zilong; Lv, Juan; Wang, Min
2017-02-01
Some controversies still exist regarding the detection of Epstein-Barr virus (EBV) DNA and the risk of periodontal diseases. Hence, a comprehensive meta-analysis of all available literature was performed to clarify the relationship between EBV and periodontitis. A comprehensive search was conducted within the PUBMED, EMBASE, and WANFANG databases up to October 10th, 2016 according to inclusion and exclusion criteria, and 21 case-control studies were finally obtained. Odds ratios (ORs) with 95% confidence intervals (CIs) were used to assess the strength of associations. Publication bias was assessed by the Begg and Egger tests. Sensitivity analysis was used to investigate the reliability and stability of the results. According to the data from the included trials, the association between overall increased risk of periodontitis and the detection of EBV was significant (OR = 6.199, 95% CI = 3.119-12.319, P < 0.001). In the disease-type analysis, the pooled ORs for chronic periodontitis and aggressive periodontitis were 6.586 (95% CI = 3.042-14.262, P < 0.001) and 8.361 (95% CI = 2.109-33.143, P = 0.003), respectively. In the subgroup analysis by ethnicity, our results suggested that high EBV detection frequencies were correlated with increased risks of periodontitis in Asians, Europeans, and Americans (P < 0.001). Subgroup analysis by sample type showed that both subgingival plaque (SgP) samples and tissue samples were suitable for EBV detection (P < 0.001). EBV was detected more readily in samples from ≥5 (6) mm periodontal pocket sites than from ≤3-mm sites (P = 0.023). This meta-analysis indicates that frequent detection of EBV correlates with increased risk of periodontal diseases. SgP and tissue samples are suitable for detecting EBV in patients with periodontitis. Finally, our results suggest that detecting EBV in samples from ≥5 (6) mm periodontal pocket sites is more sensitive than from ≤3-mm sites.
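The pooled odds ratios reported above come from standard meta-analytic pooling on the log-odds scale. A minimal sketch of fixed-effect inverse-variance pooling (the study's actual weighting scheme and software are not stated, and the per-study ORs and CIs below are hypothetical, for illustration only):

```python
import math

def pool_odds_ratios(ors, cis, z=1.96):
    """Inverse-variance fixed-effect pooling of odds ratios.

    ors: list of study ORs; cis: list of (lower, upper) 95% CIs.
    The SE of each log-OR is recovered from the width of its CI.
    """
    log_ors = [math.log(o) for o in ors]
    ses = [(math.log(hi) - math.log(lo)) / (2 * z) for lo, hi in cis]
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * lo_ for w, lo_ in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - z * pooled_se),
            math.exp(pooled + z * pooled_se))

# Hypothetical per-study ORs and CIs, for illustration only
or_pooled, lo, hi = pool_odds_ratios(
    [5.2, 7.1, 6.0], [(2.0, 13.5), (3.1, 16.2), (2.4, 15.0)])
```

Random-effects pooling (e.g., DerSimonian-Laird), which meta-analyses typically report when heterogeneity is present, adds a between-study variance term to each weight.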
Nakagawa, Hitoshi; Nagasaka, Takeshi; Cullings, Harry M; Notohara, Kenji; Hoshijima, Naoko; Young, Joanne; Lynch, Henry T; Tanaka, Noriaki; Matsubara, Nagahide
2009-06-01
It is sometimes difficult to diagnose Lynch syndrome by the simple but strict clinical criteria, or even by definitive genetic testing for causative germline mutations of mismatch repair genes. Thus, a practical and efficient screening strategy to select patients highly likely to have Lynch syndrome is desirable. We performed a comprehensive study to evaluate the methylation status of the whole MLH1 promoter region by direct bisulfite sequencing of the entire MLH1 promoter in Lynch and non-Lynch colorectal cancers (CRCs). We then established a convenient assay to detect methylation in the key CpG islands responsible for the silencing of MLH1 expression. We studied the methylation status of MLH1 as well as the CpG island methylator phenotype (CIMP) and immunohistochemical analysis of mismatch repair proteins in 16 cases of Lynch CRC and 19 cases of sporadic CRC with high-frequency microsatellite instability (MSI-H). Sensitivity to detect Lynch syndrome by MLH1 (CCAAT) methylation was 88% and the specificity was 84%. The positive likelihood ratio (PLR) was 5.5 and the negative likelihood ratio (NLR) was 0.15. Sensitivity by mutational analysis of BRAF was 100%, specificity was 84%, PLR was 6.3, and NLR was zero. By CIMP analysis, sensitivity was 88%, specificity was 79%, PLR was 4.2, and NLR was 0.16. BRAF mutation or MLH1 methylation analysis combined with MSI testing could be a good alternative to screen Lynch syndrome patients in a cost-effective manner. Although the assay for CIMP status also showed acceptable sensitivity and specificity, it may not be practical because of its rather complicated procedure.
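The likelihood ratios above follow directly from sensitivity and specificity. A minimal sketch (the 2×2 counts below are illustrative values consistent with the reported 88% sensitivity and 84% specificity, not the study's actual data):

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    plr = sens / (1 - spec)   # positive likelihood ratio
    nlr = (1 - sens) / spec   # negative likelihood ratio
    return sens, spec, plr, nlr

# Illustrative counts for 16 Lynch and 19 sporadic cases, chosen to match
# the reported 88% sensitivity / 84% specificity of MLH1 (CCAAT) methylation
sens, spec, plr, nlr = diagnostic_metrics(tp=14, fn=2, fp=3, tn=16)
```

With these counts, PLR comes out near 5.5 and NLR near 0.15, matching the reported values.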
Gillam, Ronald B.; Evans, Julia L.; Sergeev, Alexander V.
2017-01-01
Purpose With Aim 1, we compared the comprehension of and sensitivity to canonical and noncanonical word order structures in school-age children with specific language impairment (SLI) and same-age typically developing (TD) children. Aim 2 centered on the developmental improvement of sentence comprehension in the groups. With Aim 3, we compared the comprehension error patterns of the groups. Method Using a “Whatdunit” agent selection task, 117 children with SLI and 117 TD children (ages 7:0–11:11, years:months) propensity matched on age, gender, mother's education, and family income pointed to the picture that best represented the agent in semantically implausible canonical structures (subject–verb–object, subject relative) and noncanonical structures (passive, object relative). Results The SLI group performed worse than the TD group across sentence types. TD children demonstrated developmental improvement across each sentence type, but children with SLI showed improvement only for canonical sentences. Both groups chose the object noun as agent significantly more often than the noun appearing in a prepositional phrase. Conclusions In the absence of semantic–pragmatic cues, comprehension of canonical and noncanonical sentences by children with SLI is limited, with noncanonical sentence comprehension being disproportionately limited. The children's ability to make proper semantic role assignments to the noun arguments in sentences, especially noncanonical, is significantly hindered. PMID:28832884
Investigation of hypoxia off the Changjiang Estuary using a coupled model of ROMS-CoSiNE
NASA Astrophysics Data System (ADS)
Zhou, Feng; Chai, Fei; Huang, Daji; Xue, Huijie; Chen, Jianfang; Xiu, Peng; Xuan, Jiliang; Li, Jia; Zeng, Dingyong; Ni, Xiaobo; Wang, Kui
2017-12-01
The cause of the large variability of hypoxia off the Changjiang Estuary has not been well understood, partly because of the various nutrient sources and complex physical-biological processes involved. The Regional Ocean Modeling System (ROMS) coupled with the Carbon, Silicate and Nitrogen Ecosystem (CoSiNE) model was used to investigate the 2006 hypoxic event in the East China Sea, the largest on record there. Model performance was evaluated comprehensively by comparing a suite of quantitative metrics and spatiotemporal patterns between the simulated results and observed data. The simulated results are generally consistent with the observations and reproduce the development of hypoxia and the observed vertical profiles of dissolved oxygen. Event-scale reduction of hypoxia occurred during the weakening of stratification in mid-July and mid-September, due to strong stirring caused by tropical storms or strong northerly wind. A change in wind direction altered the pathway of the Changjiang Diluted Water and consequently caused variation in the hypoxic location. An increase in river discharge led to an expansion of hypoxic water during the summer monsoon. Sensitivity analysis suggested that the hypoxia extent was affected by changes in the nutrient concentrations of the Changjiang as well as those of the Kuroshio, and also indicated the importance of sediment oxygen consumption to the size of the hypoxic zone. These results demonstrate that a prognostic 3D model is useful for investigating the highly variable hypoxia, with comprehensive consideration of multiple factors related to both physical and biological processes from the estuary to the shelf break of the East China Sea.
Kim, Jihyun; Yum, Hyesun; Jang, Moonhee; Shin, Ilchung; Yang, Wonkyung; Baeck, Seungkyung; Suh, Joon Hyuk; Lee, Sooyeun; Han, Sang Beom
2016-01-01
Hair is a highly relevant specimen used to verify drug exposure in victims of drug-facilitated crime (DFC). In the present study, a new analytical method involving ultrahigh-performance liquid chromatography-tandem mass spectrometry was developed to determine the presence of model drugs, including zolazepam and tiletamine and their metabolites, in hair specimens from DFCs. The incorporation of zolazepam and tiletamine into hair after a single exposure was investigated in Long-Evans rats using the ratio of the hair concentration to the area under the curve. For rapid and simple sample preparation, methanol extraction and protein precipitation were performed for hair and plasma, respectively. No interference was observed in drug-free hair or plasma, except for hair-derived diphenhydramine in blank hair. The coefficients of variation of the matrix effects were below 12%, and the recoveries of the analytes exceeded 70% in all of the matrices. The precision and accuracy results were satisfactory. The limits of quantification ranged from 20 to 50 pg in 10 mg of hair. The drug incorporation rates were 0.03 ± 0.01% for zolazepam and 2.09 ± 0.51% for tiletamine in pigmented hair. We applied the present method to real hair samples to determine the drug used in seven cases. These results suggest that this comprehensive and sensitive hair analysis method can successfully verify a drug after a single exposure and can be applied in forensic and clinical toxicology laboratories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
The dye-sensitized solar cell database.
Venkatraman, Vishwesh; Raju, Rajesh; Oikonomopoulos, Solon P; Alsberg, Bjørn K
2018-04-03
Dye-sensitized solar cells (DSSCs) have garnered considerable attention in recent years. The solar-energy-to-power conversion efficiency of a DSSC is influenced by various components of the cell, such as the dye, electrolyte, electrodes, and additives, leading to varying experimental configurations. A large number of metal-based and metal-free dye sensitizers have now been reported, and tools using such data to indicate new directions for design and development are on the rise. DSSCDB, the first dye-sensitized solar cell database of its kind, aims to provide users with up-to-date information from publications on the molecular structures of the dyes, experimental details, and reported measurements (efficiencies and spectral properties), and thereby facilitate a comprehensive and critical evaluation of the data. Currently, the DSSCDB contains over 4,000 experimental observations spanning multiple dye classes, such as triphenylamines, carbazoles, coumarins, phenothiazines, ruthenium complexes, and porphyrins. The DSSCDB offers a web-based, comprehensive source of property data for dye-sensitized solar cells. Access to the database is available through the following URL: www.dyedb.com.
ERIC Educational Resources Information Center
Deevy, Patricia; Leonard, Laurence B.; Marchman, Virginia A.
2017-01-01
Purpose: This study tested the feasibility of a method designed to assess children's sensitivity to tense/agreement information in fronted auxiliaries during online comprehension of questions (e.g., "Are the nice little dogs running?"). We expected that a group of children who were proficient in auxiliary use would show this sensitivity,…
Cooperberg, Matthew R; Ramakrishna, Naren R; Duff, Steven B; Hughes, Kathleen E; Sadownik, Sara; Smith, Joseph A; Tewari, Ashutosh K
2013-03-01
WHAT'S KNOWN ON THE SUBJECT? AND WHAT DOES THE STUDY ADD?: Multiple treatment alternatives exist for localised prostate cancer, with few high-quality studies directly comparing their effectiveness and costs. The present study is the most comprehensive cost-effectiveness analysis to date for localised prostate cancer, conducted with a lifetime horizon and accounting for survival, health-related quality-of-life, and the cost impact of secondary treatments and other downstream events, as well as primary treatment choices. The analysis found minor differences, generally slightly favouring surgical methods, in quality-adjusted life years across treatment options. However, radiation therapy (RT) was consistently more expensive than surgery, and some alternatives, e.g. intensity-modulated RT for low-risk disease, were dominated, that is, both more expensive and less effective than competing alternatives. To characterise the costs and outcomes associated with radical prostatectomy (open, laparoscopic, or robot-assisted) and radiation therapy (RT: dose-escalated three-dimensional conformal RT, intensity-modulated RT, brachytherapy, or combination), using a comprehensive, lifetime decision analytical model. A Markov model was constructed to follow hypothetical men with low-, intermediate-, and high-risk prostate cancer over their lifetimes after primary treatment; probabilities of outcomes were based on an exhaustive literature search yielding 232 unique publications. In each Markov cycle, patients could have remission, recurrence, salvage treatment, metastasis, death from prostate cancer, or death from other causes. Utilities for each health state were determined, and disutilities were applied for complications and toxicities of treatment. Costs were determined from the USA payer perspective, with incorporation of patient costs in a sensitivity analysis.
Differences in quality-adjusted life years across treatments were modest, ranging from 10.3 to 11.3 for low-risk patients, 9.6 to 10.5 for intermediate-risk patients, and 7.8 to 9.3 for high-risk patients. There were no statistically significant differences among surgical methods, which tended to be more effective than RT methods, with the exception of combined external beam + brachytherapy for high-risk disease. RT methods were consistently more expensive than surgical methods; costs ranged from $19,901 (robot-assisted prostatectomy for low-risk disease) to $50,276 (combined RT for high-risk disease). These findings were robust to an extensive set of sensitivity analyses. Our analysis found small differences in outcomes and substantial differences in payer and patient costs across treatment alternatives. These findings may inform future policy discussions about strategies to improve the efficiency of treatment selection for localised prostate cancer. © 2012 BJU International.
Deanda, Stephanie; Arias-Trejo, Natalia; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret
2015-01-01
Although the extant literature provides robust evidence of the influence of language exposure and socioeconomic status (SES) on language acquisition, it is unknown how sensitive the early receptive vocabulary system is to these factors. The current study investigates the effects of minimal second language exposure and SES on the comprehension vocabulary of 16-month-old children in the language in which they receive the greatest exposure. Study 1 revealed that minimal second language exposure and SES exert significant and independent effects on a direct measure of vocabulary comprehension in English-dominant and English monolingual children (N = 72). In Study 2, we replicated the effect of minimal second language exposure in Spanish-dominant and Spanish monolingual children (N = 86); however, no effect of SES on vocabulary was obtained. Our results emphasize the sensitivity of the language system to minimal changes in the environment in early development. PMID:26957947
Davidson, Meghan M; Ellis Weismer, Susan
2017-07-01
This study examined the extent to which a discrepant comprehension-production profile (i.e., relatively more delayed comprehension than production) is characteristic of the early language phenotype in autism spectrum disorders (ASD) and tracked the developmental progression of the profile. Our findings indicated that a discrepant comprehension-production profile distinguished toddlers (30 months) with ASD from late talkers without ASD (91% sensitivity, 100% specificity) in groups that were comparable on expressive language, age, and socioeconomic status. Longitudinal data for children with ASD revealed that the discrepant profile steadily decreased from 30 to 44 months until there was no significant comprehension-production difference at 66 months. In conclusion, results suggest that lower comprehension than production may be an age-specific marker of toddlers with ASD.
ANALYSIS OF GLYCANS DERIVED FROM GLYCOCONJUGATES BY CAPILLARY ELECTROPHORESIS-MASS SPECTROMETRY
Mechref, Yehia
2012-01-01
The high structural variation of glycans derived from glycoconjugates, which substantially increases with the molecular size of a protein, contributes to the complexity of glycosylation patterns commonly associated with glycoconjugates. In the case of glycoproteins, such variation originates from the multiple glycosylation sites of proteins and the number of glycan structures associated with each site (microheterogeneity). The ability to comprehensively characterize a highly complex mixture of glycans has been analytically stimulating and challenging. Although the most powerful mass spectrometric (MS) and tandem MS techniques are capable of providing a wealth of structural information, they are still not able to readily identify isomeric glycan structures without high-order tandem MS (MSn). The analysis of isomeric glycan structures has been achieved using several separation methods, including high-pH anion exchange chromatography (HPAEC), hydrophilic interaction chromatography (HILIC), and gas chromatography (GC). However, capillary electrophoresis (CE) and microfluidic capillary electrophoresis (MCE) offer high separation efficiency and resolution, allowing the separation of closely related glycan structures. Therefore, interfacing CE and MCE to MS is a powerful analytical approach, allowing potentially comprehensive and sensitive analysis of complex glycan samples. This review describes and discusses the utility of different CE and MCE approaches in the structural characterization of glycoproteins and the feasibility of interfacing these approaches to mass spectrometry. PMID:22180203
Wils, Julien; Fonfrède, Michèle; Augereau, Christine; Watine, Joseph
2014-01-01
Several tools are available to help evaluate the quality of clinical practice guidelines (CPG). The AGREE instrument (Appraisal of guidelines for research & evaluation) is the most consensual tool but it has been designed to assess CPG methodology only. The European federation of laboratory medicine (EFLM) recently designed a check-list dedicated to laboratory medicine which is supposed to be comprehensive and which therefore makes it possible to evaluate more thoroughly the quality of CPG in laboratory medicine. In the present work we test the comprehensiveness of this check-list on a sample of CPG written in French and published in Annales de biologie clinique (ABC). Thus we show that some work remains to be achieved before a truly comprehensive check-list is designed. We also show that there is some room for improvement for the CPG published in ABC, for example regarding the fact that some of these CPG do not provide any information about allowed durations of transport and of storage of biological samples before analysis, or about standards of minimal analytical performance, or about the sensitivities or the specificities of the recommended tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnett, Jonathan L.; Miley, Harry S.; Milbrath, Brian D.
In 2014 the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) undertook the Integrated Field Exercise (IFE) in Jordan. The exercise consisted of a simulated 0.5-2 kT underground explosion triggering an On-site Inspection (OSI) to search for evidence of a Treaty violation. This research evaluates two OSI techniques, laboratory-based gamma-spectrometry of soil samples and in situ gamma-spectrometry, for 17 particulate radionuclides indicative of nuclear weapon tests. The detection sensitivity is evaluated using real IFE and model data, and the evaluation indicates that higher-sensitivity laboratory measurements are the optimal technique within the IFE and OSI timeframes.
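Detection sensitivity in gamma spectrometry is conventionally quantified with Currie's decision threshold and detection limit. A sketch under the paired-blank assumption (how the IFE evaluation itself computed sensitivity is not stated here, so this is only the textbook formulation):

```python
import math

def currie_limits(background_counts):
    """Currie critical level L_C and detection limit L_D (in counts) for a
    measurement with a paired, well-known background, at the conventional
    5% false-positive / 5% false-negative risk levels."""
    b = background_counts
    l_c = 2.33 * math.sqrt(b)          # decision threshold
    l_d = 2.71 + 4.65 * math.sqrt(b)   # a priori detection limit
    return l_c, l_d

# Example: a 400-count background gives L_C ≈ 46.6 and L_D ≈ 95.7 counts
l_c, l_d = currie_limits(background_counts=400.0)
```

Lower backgrounds (e.g., shielded laboratory counting versus in situ measurement) shrink both limits, which is one way the laboratory technique gains its sensitivity advantage.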
Tan, Susanna K; Burgener, Elizabeth B; Waggoner, Jesse J; Gajurel, Kiran; Gonzalez, Sarah; Chen, Sharon F; Pinsky, Benjamin A
2016-01-01
Background. Cytomegalovirus (CMV) is a major cause of morbidity and mortality in immunocompromised patients, with CMV pneumonitis among the most severe manifestations of infection. Although bronchoalveolar lavage (BAL) samples are frequently tested for CMV, the clinical utility of such testing remains uncertain. Methods. Retrospective analysis of adult patients undergoing BAL testing via CMV polymerase chain reaction (PCR), shell vial culture, and conventional viral culture between August 2008 and May 2011 was performed. Cytomegalovirus diagnostic methods were compared with a comprehensive definition of CMV pneumonitis that takes into account signs and symptoms, underlying host immunodeficiency, radiographic findings, and laboratory results. Results. Seven hundred five patients underwent 1077 bronchoscopy episodes with 1090 BAL specimens sent for CMV testing. Cytomegalovirus-positive patients were more likely to be hematopoietic cell transplant recipients (26% vs 8%, P < .0001) and less likely to have an underlying condition not typically associated with lung disease (3% vs 20%, P < .0001). Histopathology was performed in only 17.3% of CMV-positive bronchoscopy episodes. When CMV diagnostic methods were evaluated against the comprehensive definition, the sensitivity and specificity of PCR, shell vial culture, and conventional culture were 91.3% and 94.6%, 54.4% and 97.4%, and 28.3% and 96.5%, respectively. Compared with culture, PCR provided significantly higher sensitivity and negative predictive value (P ≤ .001), without significantly lower positive predictive value. Cytomegalovirus quantitation did not improve test performance, resulting in a receiver operating characteristic curve with an area under the curve of 0.53. Conclusions. Cytomegalovirus PCR combined with a comprehensive clinical definition provides a pragmatic approach for the diagnosis of CMV pneumonitis.
Time to angiographic reperfusion in acute ischemic stroke: decision analysis.
Vagal, Achala S; Khatri, Pooja; Broderick, Joseph P; Tomsick, Thomas A; Yeatts, Sharon D; Eckman, Mark H
2014-12-01
Our objective was to use decision analytic modeling to compare 2 treatment strategies of intravenous recombinant tissue-type plasminogen activator (r-tPA) alone versus combined intravenous r-tPA/endovascular therapy in a subgroup of patients with large vessel (internal carotid artery terminus, M1, and M2) occlusion based on varying times to angiographic reperfusion and varying rates of reperfusion. We developed a decision model using Interventional Management of Stroke (IMS) III trial data and comprehensive literature review. We performed 1-way sensitivity analyses for time to reperfusion and 2-way sensitivity for time to reperfusion and rate of reperfusion success. We also performed probabilistic sensitivity analyses to address uncertainty in total time to reperfusion for the endovascular approach. In the base case, endovascular approach yielded a higher expected utility (6.38 quality-adjusted life years) than the intravenous-only arm (5.42 quality-adjusted life years). One-way sensitivity analyses demonstrated superiority of endovascular treatment to intravenous-only arm unless time to reperfusion exceeded 347 minutes. Two-way sensitivity analysis demonstrated that endovascular treatment was preferred when probability of reperfusion is high and time to reperfusion is small. Probabilistic sensitivity results demonstrated an average gain for endovascular therapy of 0.76 quality-adjusted life years (SD 0.82) compared with the intravenous-only approach. In our post hoc model with its underlying limitations, endovascular therapy after intravenous r-tPA is the preferred treatment as compared with intravenous r-tPA alone. However, if time to reperfusion exceeds 347 minutes, intravenous r-tPA alone is the recommended strategy. This warrants validation in a randomized, prospective trial among patients with large vessel occlusions. © 2014 American Heart Association, Inc.
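The one-way threshold analysis can be illustrated with a toy model in which the endovascular strategy's expected QALYs decline linearly with time to reperfusion. The base-case QALY values are taken from the abstract, but the linear form and the decline rate are assumptions for illustration, so the crossing point will not match the reported 347 minutes exactly:

```python
def expected_qaly_endovascular(time_to_reperfusion_min,
                               qaly_at_90min=6.38, decline_per_min=0.004):
    """Hypothetical linear model: expected QALYs for the endovascular
    strategy decline as reperfusion is delayed (slope is illustrative)."""
    return qaly_at_90min - decline_per_min * (time_to_reperfusion_min - 90)

QALY_IV_ONLY = 5.42  # base-case value for the intravenous-only strategy

def preferred_strategy(time_to_reperfusion_min):
    """One-way sensitivity: which strategy has higher expected utility
    at a given time to reperfusion."""
    if expected_qaly_endovascular(time_to_reperfusion_min) > QALY_IV_ONLY:
        return "endovascular"
    return "iv-only"
```

Sweeping the time argument reproduces the qualitative finding: the endovascular strategy is preferred up to some threshold delay, after which intravenous r-tPA alone wins.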
Lucas, Rebecca; Norbury, Courtenay Frazier
2014-11-01
Many children with autism spectrum disorders (ASD) have reading comprehension difficulties, but the level of processing at which comprehension is most vulnerable and the influence of language phenotype on comprehension skill is currently unclear. We explored comprehension at sentence and passage levels across language phenotypes. Children with ASD and age-appropriate language skills (n = 25) demonstrated similar syntactic and semantic facilitation to typically developing peers. In contrast, few children with ASD and language impairments (n = 25) could read beyond the single word level. Those who could read sentences benefited from semantic coherence, but were less sensitive to syntactic coherence. At the passage level, the strongest predictor of comprehension was vocabulary knowledge. This emphasizes that the intimate relationship between language competence and both decoding skill and comprehension is evident at the sentence, as well as the passage level, for children with ASD.
msap: a tool for the statistical analysis of methylation-sensitive amplified polymorphism data.
Pérez-Figueroa, A
2013-05-01
In this study msap, an R package that analyses methylation-sensitive amplified polymorphism (MSAP or MS-AFLP) data, is presented. The program provides a deep analysis of epigenetic variation starting from a binary data matrix indicating the banding pattern between the isoschizomeric endonucleases HpaII and MspI, which have differential sensitivity to cytosine methylation. After comparing the restriction fragments, the program determines whether each fragment is susceptible to methylation (representative of epigenetic variation) or shows no evidence of methylation (representative of genetic variation). The package provides, in a user-friendly command line interface, a pipeline of different analyses of the variation (genetic and epigenetic) among user-defined groups of samples, as well as the classification of the methylation occurrences in those groups. Statistical testing provides support to the analyses. A comprehensive report of the analyses and several useful plots can help researchers to assess the epigenetic and genetic variation in their MSAP experiments. msap is downloadable from CRAN (http://cran.r-project.org/) and its own webpage (http://msap.r-forge.R-project.org/). The package is intended to be easy to use even for those unfamiliar with the R command line environment. Advanced users may take advantage of the available source code to adapt msap to more complex analyses. © 2013 Blackwell Publishing Ltd.
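The scoring step can be sketched as a classification of each locus's HpaII/MspI banding pattern. The sketch below follows the common MSAP interpretation (Python rather than R; msap's exact labels, error thresholds, and conventions may differ):

```python
# Classification of a single locus in a single sample from its HpaII/MspI
# banding pattern (1 = band present, 0 = absent), per common MSAP scoring.
def classify_band(hpaii, mspi):
    if hpaii == 1 and mspi == 1:
        return "unmethylated"                    # both isoschizomers cut CCGG
    if hpaii == 0 and mspi == 1:
        return "internal cytosine methylation"   # MspI cuts C5mCGG; HpaII cannot
    if hpaii == 1 and mspi == 0:
        return "hemimethylated"                  # HpaII cuts hemimethylated sites
    return "uninformative"                       # full methylation or absence

def is_methylation_susceptible(patterns, error_threshold=0.05):
    """Treat a locus as methylation-susceptible (epigenetic variation) if the
    fraction of samples showing a methylation-related pattern exceeds an
    error threshold; otherwise treat it as purely genetic variation."""
    flagged = [classify_band(h, m) != "unmethylated" for h, m in patterns]
    return sum(flagged) / len(flagged) > error_threshold
```

Loci failing the threshold are then analysed as ordinary AFLP (genetic) markers, while the flagged loci carry the epigenetic signal.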
Species sensitivity distributions (SSD) require a large number of measured toxicity values to define a chemical’s toxicity to multiple species. This investigation comprehensively evaluated the accuracy of SSDs generated from toxicity values predicted from interspecies correlation...
Advanced techniques for determining long term compatibility of materials with propellants
NASA Technical Reports Server (NTRS)
Green, R. L.
1972-01-01
The search for advanced measurement techniques for determining the long-term compatibility of materials with propellants was conducted in several parts. A comprehensive survey of the existing measurement and testing technology for determining material-propellant interactions was performed. Existing techniques judged able to meet the requirements, or adaptable to meet them, were selected, and areas of refinement or change were recommended to improve others. Investigations were also performed to determine the feasibility and advantages of developing and using new techniques to achieve significant improvements over existing ones. The most interesting demonstration was that of a new technique, volatile metal chelate analysis. Rivaling neutron activation analysis in sensitivity and specificity, the volatile metal chelate technique was fully demonstrated.
Tau-independent Phase Analysis: A Novel Method for Accurately Determining Phase Shifts.
Tackenberg, Michael C; Jones, Jeff R; Page, Terry L; Hughey, Jacob J
2018-06-01
Estimations of period and phase are essential in circadian biology. While many techniques exist for estimating period, comparatively few methods are available for estimating phase. Current approaches to analyzing phase often vary between studies and are sensitive to coincident changes in period and the stage of the circadian cycle at which the stimulus occurs. Here we propose a new technique, tau-independent phase analysis (TIPA), for quantifying phase shifts in multiple types of circadian time-course data. Through comprehensive simulations, we show that TIPA is both more accurate and more precise than the standard actogram approach. TIPA is computationally simple and therefore will enable accurate and reproducible quantification of phase shifts across multiple subfields of chronobiology.
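A conventional period-dependent baseline for phase-shift quantification fits a sinusoid of known period to the pre- and post-stimulus segments and differences the fitted phases. TIPA is designed precisely to avoid this dependence on an assumed period, so the sketch below illustrates the baseline approach, not TIPA itself:

```python
import math

def fit_phase(times, values, period):
    """Least-squares phase (radians) of a sinusoid with known period,
    from the fit values ~ a*cos(wt) + b*sin(wt); phase = atan2(b, a)."""
    w = 2 * math.pi / period
    a = sum(v * math.cos(w * t) for t, v in zip(times, values))
    b = sum(v * math.sin(w * t) for t, v in zip(times, values))
    return math.atan2(b, a)

def phase_shift_hours(pre, post, period=24.0):
    """Phase shift in hours between (times, values) segments recorded before
    and after a stimulus, wrapped to (-period/2, period/2]; positive values
    correspond to a delay under this convention."""
    d = fit_phase(*post, period) - fit_phase(*pre, period)
    d_h = d / (2 * math.pi) * period
    return (d_h + period / 2) % period - period / 2
```

The weakness the abstract points out is visible here: if the stimulus also changes the free-running period, the fixed `period` argument biases the estimate, which is the coupling TIPA removes.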
Full-Spectrum-Analysis Isotope ID
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Dean J.; Harding, Lee; Thoreson, Gregory G.
2017-06-28
FSAIsotopeID analyzes gamma-ray spectra to identify radioactive isotopes (radionuclides). The algorithm fits the entire spectrum with combinations of pre-computed templates for a comprehensive set of radionuclides with varying thicknesses and compositions of shielding materials. The isotope identification algorithm is suitable for the analysis of spectra collected by gamma-ray sensors ranging from medium-resolution detectors, such as NaI, to high-resolution detectors, such as HPGe. In addition to analyzing static measurements, the isotope identification algorithm is applied to radiation search applications. The search subroutine maintains a running background spectrum that is passed to the isotope identification algorithm, and it also selects temporal integration periods that optimize responsiveness and sensitivity. Gain stabilization is supported for both types of applications.
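The full-spectrum template fit can be sketched as a non-negative least-squares decomposition of an observed spectrum over the template library (the toy templates and channel count below are illustrative; the actual algorithm additionally handles shielding variants, background, and gain, and its fitting procedure is not specified here):

```python
import numpy as np
from scipy.optimize import nnls

# Toy template library: each column is a unit spectrum for one radionuclide
# over 3 channels (a real system would use hundreds of channels and many
# shielding-variant templates per nuclide).
templates = np.array([
    [0.7, 0.1],
    [0.2, 0.3],
    [0.1, 0.6],
])  # columns: "isotope A", "isotope B" (illustrative)

# Noise-free mixture: 100 units of A plus 50 units of B
observed = 100.0 * templates[:, 0] + 50.0 * templates[:, 1]

# Non-negative least squares recovers the mixing coefficients; nonzero
# coefficients flag which nuclides are present in the spectrum.
coeffs, residual = nnls(templates, observed)
```

With counting noise added, one would threshold the coefficients (or compare fit residuals across candidate template sets) before declaring an identification.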
ERIC Educational Resources Information Center
Wood, Sarah G.; Hart, Sara A.; Little, Callie W.; Phillips, Beth M.
2016-01-01
Past research suggests that reading comprehension test performance does not rely solely on targeted cognitive processes such as word reading, but also on other nontarget aspects such as test anxiety. Using a genetically sensitive design, we sought to understand the genetic and environmental etiology of the association between test anxiety and…
A Temperature Sensor Based on a Polymer Optical Fiber Macro-Bend
Moraleda, Alberto Tapetado; García, Carmen Vázquez; Zaballa, Joseba Zubia; Arrue, Jon
2013-01-01
The design and development of a plastic optical fiber (POF) macro-bend temperature sensor is presented. The sensor has a linear response versus temperature at a fixed bend radius, with a sensitivity of 1.92·10⁻³ (°C)⁻¹. The sensor system uses a dummy fiber-optic sensor for reference purposes and achieves a resolution below 0.3 °C. A comprehensive experimental analysis was carried out to provide insight into the effect of different surrounding media on practical macro-bend POF sensor implementation. Experimental results are successfully compared with bend loss calculations. PMID:24077323
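As a toy illustration of the reported linear response, the calibration can be inverted to recover temperature from normalized transmitted power. Only the 1.92·10⁻³ (°C)⁻¹ sensitivity comes from the abstract; the reference temperature and the sign of the slope are assumptions for illustration:

```python
# Hypothetical linear calibration sketch for a macro-bend POF temperature
# sensor. S is the reported sensitivity; T_REF and the slope sign are assumed.
S = 1.92e-3          # sensitivity, (degC)^-1
T_REF = 25.0         # assumed reference temperature, degC

def normalized_power(temp_c):
    """Modeled normalized transmitted power at temperature temp_c."""
    return 1.0 + S * (temp_c - T_REF)

def temperature_from_power(p_norm):
    """Invert the linear calibration to recover temperature."""
    return T_REF + (p_norm - 1.0) / S

print(temperature_from_power(normalized_power(40.0)))  # ~40.0
```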
NASA Astrophysics Data System (ADS)
Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.
2017-10-01
Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This capability is crucial, since even small indices may need to be estimated accurately in order to achieve a more accurate attribution of input influence and a more reliable interpretation of the mathematical model results.
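A minimal sketch of the variance-based (Sobol) sensitivity estimation the study performs, using plain Monte Carlo sampling on an invented two-parameter toy model (the paper itself uses Sobol sequences, Latin hypercube sampling, and a Fibonacci lattice rule on the Unified Danish Eulerian Model):

```python
import numpy as np

# First-order Sobol index estimation by Monte Carlo on a toy additive model.
# For f = 3*x1 + 1*x2 with x uniform on [0,1], the analytic indices are
# S1 = 9/10 and S2 = 1/10.
rng = np.random.default_rng(42)

def model(x):
    return 3.0 * x[:, 0] + 1.0 * x[:, 1]

n, d = 200_000, 2
A = rng.random((n, d))
B = rng.random((n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

indices = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                      # A with column i taken from B
    # Saltelli-style first-order estimator
    indices.append(np.mean(fB * (model(ABi) - fA)) / var)

print(indices)   # close to the analytic values [0.9, 0.1]
```

Quasi-Monte Carlo point sets such as Sobol sequences replace `rng.random` here and typically reduce the integration error, which matters most when the indices being estimated are small.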
NASA Astrophysics Data System (ADS)
Mehrpooya, Mehdi; Ansarinasab, Hojat; Moftakhari Sharifzadeh, Mohammad Mehdi; Rosen, Marc A.
2017-10-01
An integrated power plant with a net electrical power output of 3.71 × 10⁵ kW is developed and investigated. The electrical efficiency of the process is found to be 60.1%. The process includes three main sub-systems: a molten carbonate fuel cell system, a heat recovery section and a cryogenic carbon dioxide capture process. Conventional and advanced exergoeconomic methods are used for analyzing the process. Advanced exergoeconomic analysis is a comprehensive evaluation tool which combines an exergetic approach with economic analysis procedures. With this method, investment and exergy destruction costs of the process components are divided into endogenous/exogenous and avoidable/unavoidable parts. Results of the conventional exergoeconomic analyses demonstrate that the combustion chamber has the largest exergy destruction rate (182 MW) and cost rate (13,100 $/h). Also, the total process cost rate can be decreased by reducing the cost rate of the fuel cell and improving the efficiency of the combustion chamber and heat recovery steam generator. Based on the total avoidable endogenous cost rate, the priority for modification is the heat recovery steam generator, a compressor and a turbine of the power plant, in rank order. A sensitivity analysis is performed to investigate how the exergoeconomic factors respond to variations in the effective parameters.
Methodology for determining major constituents of ayahuasca and their metabolites in blood.
McIlhenny, Ethan H; Riba, Jordi; Barbanoj, Manel J; Strassman, Rick; Barker, Steven A
2012-03-01
There is an increasing interest in potential medical applications of ayahuasca, a South American psychotropic plant tea with a long cultural history of indigenous medical and religious use. Clinical research into ayahuasca will require specific, sensitive and comprehensive methods for the characterization and quantitation of these compounds and their metabolites in blood. A combination of two analytical techniques (high-performance liquid chromatography with ultraviolet and/or fluorescence detection and gas chromatography with nitrogen-phosphorus detection) has been used for the analysis of some of the constituents of ayahuasca in blood following its oral consumption. We report here a single methodology for the direct analysis of 14 of the major alkaloid components of ayahuasca, including several known and potential metabolites of N,N-dimethyltryptamine and the harmala alkaloids in blood. The method uses 96-well plate/protein precipitation/filtration for plasma samples, and analysis by HPLC-ion trap-ion trap-mass spectrometry using heated electrospray ionization to reduce matrix effects. The method expands the list of compounds capable of being monitored in blood following ayahuasca administration while providing a simplified approach to their analysis. The method has adequate sensitivity, specificity and reproducibility to make it useful for clinical research with ayahuasca. Copyright © 2011 John Wiley & Sons, Ltd.
Wang, Hong; Wu, Qi-nan; Wu, Cheng-ying; Fan, Xiu-he; Jiang, Zheng; Gu, Wei; Yue, Wei
2015-01-01
To establish a simple, rapid and efficient method for the determination of different inorganic elements in Euryale Semen from different habitats. Inductively coupled plasma-optical emission spectrometry (ICP-OES) was applied to determine inorganic elements in Euryale Semen, and the results were analyzed by principal component analysis. Euryale Semen from different habitats contained between 22 and 26 kinds of inorganic elements, including micronutrient elements such as Iron, Zinc, Selenium, Copper, Molybdenum, Chromium and Cobalt, as well as macronutrient elements such as Potassium, Calcium, Sodium, Magnesium and Phosphorus. Five factors were extracted and used to comprehensively evaluate Euryale Semen from 20 different habitats covering almost all of China. The comprehensive function was F = 0.38828F1 + 0.25603F2 + 0.07617F3 + 0.06860F4 + 0.04868F5, which resulted in the top three samples coming from Jiangsu Gaoyou, Hunan Xiangxi and Jiangsu Suzhou, respectively. The study indicates that ICP-OES is a quick, accurate and sensitive method to determine the contents of inorganic elements in Euryale Semen, which provides a scientific and reliable reference for its quality control and safety assessment.
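The quoted comprehensive function is a plain weighted sum of the five principal-component factor scores. A hypothetical one-sample evaluation, with the weights taken from the abstract and the factor scores invented:

```python
# Evaluate the comprehensive score F = 0.38828*F1 + ... + 0.04868*F5
# for one sample. Weights are from the abstract; factor scores are invented.
weights = [0.38828, 0.25603, 0.07617, 0.06860, 0.04868]
factor_scores = [1.2, -0.3, 0.5, 0.0, 0.8]   # hypothetical F1..F5 for one sample

F = sum(w * f for w, f in zip(weights, factor_scores))
print(F)   # ~0.4662 for these invented scores
```

Ranking habitats then amounts to computing F for every sample and sorting.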
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-18
... to Iran AGENCY: Department of Defense (DoD), General Services Administration (GSA), and National... representation to implement section 106 of the Comprehensive Iran Sanctions, Accountability, and Divestment Act... certain sensitive technology to Iran. DATES: Effective Date: May 18, 2012. FOR FURTHER INFORMATION CONTACT...
The Processing of Case in Near-Native Spanish
ERIC Educational Resources Information Center
Jegerski, Jill
2015-01-01
This article reports a study that sought to determine whether non-native sentence comprehension can show sensitivity to two different types of Spanish case marking. Sensitivity to case violations was generally more robust with indirect objects in ditransitive constructions than with differential object marking of animate direct objects, even among…
Wood, James M; Lilienfeld, Scott O; Nezworski, M Teresa; Garb, Howard N; Allen, Keli Holloway; Wildermuth, Jessica L
2010-06-01
Gacono and Meloy (2009) have concluded that the Rorschach Inkblot Test is a sensitive instrument with which to discriminate psychopaths from nonpsychopaths. We examined the association of psychopathy with 37 Rorschach variables in a meta-analytic review of 173 validity coefficients derived from 22 studies comprising 780 forensic participants. All studies included the Hare Psychopathy Checklist or one of its versions (Hare, 1980, 1991, 2003) and Exner's (2003) Comprehensive System for the Rorschach. Mean validity coefficients of Rorschach variables in the meta-analysis ranged from -.113 to .239, with a median validity of .070 and a mean validity of .062. Psychopathy displayed a significant and medium-sized association with the number of Aggressive Potential responses (weighted mean validity coefficient = .232) and small but significant associations with the Sum of Texture responses, Cooperative Movement = 0, the number of Personal responses, and the Egocentricity Index (weighted mean validity coefficients = .097 to .159). The remaining 32 Rorschach variables were not significantly related to psychopathy. The present findings contradict the view that the Rorschach is a clinically sensitive instrument for discriminating psychopaths from nonpsychopaths.
A multi-model assessment of terrestrial biosphere model data needs
NASA Astrophysics Data System (ADS)
Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.
2017-12-01
Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight on data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED) model outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure to overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's Hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. 
Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial models to date, and provides a comprehensive roadmap for constraining model uncertainties through model development and data collection.
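The final combination step can be sketched as a first-order variance decomposition: each parameter's contribution to predictive variance is approximated by its squared output sensitivity times its parameter variance, then normalized. A toy illustration with invented numbers (a simplification, not PEcAn's actual implementation):

```python
import numpy as np

# First-order partitioning of predictive variance among parameters:
# contribution_i ~ (d output / d parameter_i)^2 * Var(parameter_i).
# All numbers below are invented for illustration.
sensitivity = np.array([2.0, 0.5, 1.0])   # output sensitivity to each parameter
par_var = np.array([0.1, 4.0, 0.25])      # parameter variances (e.g. from meta-analysis)

contrib = sensitivity**2 * par_var        # per-parameter variance contribution
fraction = contrib / contrib.sum()        # fractional contribution to predictive variance
print(fraction)                           # the middle parameter dominates here
```

Note how a parameter with modest sensitivity but large uncertainty (the second one) can dominate the predictive uncertainty, which is exactly why such analyses guide data collection priorities.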
Lötsch, Jörn; Geisslinger, Gerd; Heinemann, Sarah; Lerch, Florian; Oertel, Bruno G; Ultsch, Alfred
2017-08-16
The comprehensive assessment of pain-related human phenotypes requires combinations of nociceptive measures that produce complex high-dimensional data, posing challenges to bioinformatic analysis. In this study, we assessed established experimental models of heat hyperalgesia of the skin, consisting of local ultraviolet-B (UV-B) irradiation or capsaicin application, in 82 healthy subjects using a variety of noxious stimuli. We extended the original heat stimulation by applying cold and mechanical stimuli and assessing the hypersensitization effects with a clinically established quantitative sensory testing (QST) battery (German Research Network on Neuropathic Pain). This study provided a 246 × 10-sized data matrix (82 subjects assessed at baseline, following UV-B application, and following capsaicin application) with respect to 10 QST parameters, which we analyzed using machine-learning techniques. We observed statistically significant effects of the hypersensitization treatments in 9 different QST parameters. Supervised machine-learned analysis implemented as random forests followed by ABC analysis pointed to heat pain thresholds as the most relevantly affected QST parameter. However, decision tree analysis indicated that UV-B additionally modulated sensitivity to cold. Unsupervised machine-learning techniques, implemented as emergent self-organizing maps, hinted at subgroups responding to topical application of capsaicin. The distinction among subgroups was based on sensitivity to pressure pain, which could be attributed to sex differences, with women being more sensitive than men. 
Thus, while UV-B and capsaicin share a major component of heat pain sensitization, they differ in their effects on QST parameter patterns in healthy subjects, suggesting a lack of redundancy between these models. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.
Kinase Pathway Dependence in Primary Human Leukemias Determined by Rapid Inhibitor Screening
Tyner, Jeffrey W.; Yang, Wayne F.; Bankhead, Armand; Fan, Guang; Fletcher, Luke B.; Bryant, Jade; Glover, Jason M.; Chang, Bill H.; Spurgeon, Stephen E.; Fleming, William H.; Kovacsovics, Tibor; Gotlib, Jason R.; Oh, Stephen T.; Deininger, Michael W.; Zwaan, C. Michel; Den Boer, Monique L.; van den Heuvel-Eibrink, Marry M.; O’Hare, Thomas; Druker, Brian J.; Loriaux, Marc M.
2012-01-01
Kinases are dysregulated in most cancers, but the frequency of specific kinase mutations is low, indicating a complex etiology of kinase dysregulation. Here we report a strategy to rapidly identify functionally important kinase targets, irrespective of the etiology of kinase pathway dysregulation, ultimately enabling a correlation of patient genetic profiles to clinically effective kinase inhibitors. Our methodology assessed the sensitivity of primary leukemia patient samples to a panel of 66 small-molecule kinase inhibitors over 3 days. Screening of 151 leukemia patient samples revealed a wide diversity of drug sensitivities, with 70% of the clinical specimens exhibiting hypersensitivity to one or more drugs. From this data set, we developed an algorithm to predict kinase pathway dependence based on analysis of inhibitor sensitivity patterns. Applying this algorithm correctly identified pathway dependence in proof-of-principle specimens with known oncogenes, including a rare FLT3 mutation outside regions covered by standard molecular diagnostic tests. Interrogation of all 151 patient specimens with this algorithm identified a diversity of gene targets and signaling pathways that could aid prioritization of deep sequencing data sets, permitting a cumulative analysis to understand kinase pathway dependence within leukemia subsets. In a proof-of-principle case, we showed that in vitro drug sensitivity could predict both a clinical response and the development of drug resistance. Taken together, our results suggested that drug target scores derived from a comprehensive kinase inhibitor panel could predict pathway dependence in cancer cells while simultaneously identifying potential therapeutic options. PMID:23087056
Zhang, Li; Tang, Min; Min, Zhiqian; Lu, Jun; Lei, Xiaoyan; Zhang, Xiaoling
2016-06-01
Magnetic resonance imaging (MRI) is increasingly being used to examine patients with suspected breast cancer. This study aimed to determine the diagnostic performance of combined dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and diffusion-weighted imaging (DWI) for breast cancer detection. A comprehensive search of the PUBMED, EMBASE, Web of Science, and Cochrane Library databases was performed up to September 2014. Statistical analysis included pooling of sensitivity and specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR), and diagnostic accuracy using the summary receiver operating characteristic (SROC). All analyses were conducted using STATA (version 12.0), RevMan (version 5.2), and Meta-Disc 1.4 software programs. Fourteen studies were analyzed, which included a total of 1140 patients with 1276 breast lesions. The pooled sensitivity and specificity of combined DCE-MRI and DWI were 91.6% and 85.5%, respectively. The pooled sensitivity and specificity of DWI-MRI were 86.0% and 75.6%, respectively. The pooled sensitivity and specificity of DCE-MRI were 93.2% and 71.1%, respectively. The area under the SROC curve (AUC-SROC) was 0.94 for combined DCE-MRI and DWI and 0.85 for DCE-MRI alone. Deeks testing confirmed no significant publication bias in any of the studies. Combined DCE-MRI and DWI had higher diagnostic accuracy than either DCE-MRI or DWI alone for the diagnosis of breast cancer. © The Foundation Acta Radiologica 2015.
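Simple count-based pooling of diagnostic accuracy measures can be sketched as below. The per-study 2×2 counts are invented, and formal meta-analyses such as this one typically use weighted or bivariate random-effects models rather than raw count pooling:

```python
# Pool sensitivity/specificity and derived measures across studies by
# summing 2x2 table counts. Counts below are invented for illustration.
studies = [
    # (TP, FN, TN, FP)
    (45, 5, 80, 15),
    (30, 2, 60, 10),
    (25, 3, 40, 8),
]
tp = sum(s[0] for s in studies)
fn = sum(s[1] for s in studies)
tn = sum(s[2] for s in studies)
fp = sum(s[3] for s in studies)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
plr = sensitivity / (1 - specificity)     # positive likelihood ratio
nlr = (1 - sensitivity) / specificity     # negative likelihood ratio
dor = plr / nlr                           # diagnostic odds ratio = TP*TN/(FP*FN)
print(round(sensitivity, 3), round(specificity, 3))   # 0.909 0.845
```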
Optical modeling of waveguide coupled TES detectors towards the SAFARI instrument for SPICA
NASA Astrophysics Data System (ADS)
Trappe, N.; Bracken, C.; Doherty, S.; Gao, J. R.; Glowacka, D.; Goldie, D.; Griffin, D.; Hijmering, R.; Jackson, B.; Khosropanah, P.; Mauskopf, P.; Morozov, D.; Murphy, A.; O'Sullivan, C.; Ridder, M.; Withington, S.
2012-09-01
The next generation of space missions targeting far-infrared wavelengths will require large-format arrays of extremely sensitive detectors. Transition Edge Sensor (TES) array technology is being developed for future Far-Infrared (FIR) space applications such as the SAFARI instrument for SPICA, where low noise and high sensitivity are required to achieve ambitious science goals. In this paper we describe a modal analysis of multi-moded horn antennas feeding integrating cavities housing TES detectors with superconducting film absorbers. In high-sensitivity TES detector technology the ability to control the electromagnetic and thermo-mechanical environment of the detector is critical. Simulating and understanding optical behaviour of such detectors at far-IR wavelengths is difficult and requires development of existing analysis tools. The proposed modal approach offers a computationally efficient technique to describe the partially coherent response of the full pixel in terms of optical efficiency and power leakage between pixels. Initial work carried out as part of an ESA technical research project on optical analysis is described, and a prototype SAFARI pixel design is analyzed in which the optical coupling between the incoming field and the pixel (comprising horn, cavity with an air gap, and thin absorber layer) is included in the model to allow a comprehensive optical characterization. The modal approach described is based on the mode-matching technique, where the horn and cavity are described in the traditional way while a technique to include the absorber was developed. Radiation leakage between pixels is also included, making this a powerful analysis tool.
NASA Astrophysics Data System (ADS)
Caton, R. G.; Colman, J. J.; Parris, R. T.; Nickish, L.; Bullock, G.
2017-12-01
The Air Force Research Laboratory, in collaboration with NorthWest Research Associates, is developing advanced software capabilities for high-fidelity simulations of high frequency (HF) sky wave propagation and performance analysis of HF systems. Based on the HiCIRF (High-frequency Channel Impulse Response Function) platform [Nickisch et al., doi:10.1029/2011RS004928], the new Air Force Coverage Analysis Program (AFCAP) provides the modular capabilities necessary for a comprehensive sensitivity study of the large number of variables which define simulations of HF propagation modes. In this paper, we report on an initial exercise of AFCAP to analyze the sensitivities of the tool to various environmental and geophysical parameters. Through examination of the channel scattering function and amplitude-range-Doppler output on two-way propagation paths with injected target signals, we compare simulated returns over a range of geophysical conditions as well as varying definitions for environmental noise, meteor clutter, and sea-state models for Bragg backscatter. We also investigate the impacts of including clutter effects due to field-aligned backscatter from small-scale ionization structures at varied levels of severity as defined climatologically by the WideBand Model (WBMOD). In the absence of additional user-provided information, AFCAP relies on the International Reference Ionosphere (IRI) model to define the ionospheric state for use in 2D ray tracing algorithms. Because the AFCAP architecture includes the option for insertion of a user-defined gridded ionospheric representation, we compare output from the tool using the IRI and ionospheric definitions from assimilative models such as GPSII (GPS Ionospheric Inversion).
Design optimization of condenser microphone: a design of experiment perspective.
Tan, Chee Wee; Miao, Jianmin
2009-06-01
A well-designed condenser microphone backplate is very important in the attainment of good frequency response characteristics--high sensitivity and wide bandwidth with flat response--and low mechanical-thermal noise. To study the design optimization of the backplate, a 2⁶ factorial design with a single replicate, which consists of six backplate parameters and four responses, has been undertaken on a comprehensive condenser microphone model developed by Zuckerwar. Through the elimination of insignificant parameters via normal probability plots of the effect estimates, the projection of an unreplicated factorial design into a replicated one can be performed to carry out an analysis of variance on the factorial design. The air gap and slot have significant effects on the sensitivity, mechanical-thermal noise, and bandwidth, while the slot/hole location interaction has major influence over the latter two responses. An organized and systematic approach to designing the backplate is summarized.
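Effect estimation in a two-level factorial design can be sketched as follows, using a 2³ design on an invented response model for brevity (the paper uses a 2⁶ design with a single replicate):

```python
import itertools
import numpy as np

# Main-effect estimation in a two-level factorial design. The "true" response
# model below is invented: factor A strong, B weak, C inert, plus an AB
# interaction that averages out of the main effects.
levels = np.array(list(itertools.product([-1, 1], repeat=3)))  # 8 runs

def response(x):
    a, b, c = x
    return 10 + 4 * a + 1 * b + 0 * c + 2 * a * b

y = np.array([response(x) for x in levels])

# main effect of factor i: mean(y at level +1) - mean(y at level -1)
effects = [y[levels[:, i] == 1].mean() - y[levels[:, i] == -1].mean()
           for i in range(3)]
print(effects)   # [8.0, 2.0, 0.0] for factors A, B, C
```

Plotting such effect estimates on a normal probability plot, as the paper does, separates the few large real effects from the cloud of near-zero ones, so the inert factors can be dropped and their runs reused as replicates for ANOVA.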
Panuwet, Parinya; Hunter, Ronald E.; D’Souza, Priya E.; Chen, Xianyu; Radford, Samantha A.; Cohen, Jordan R.; Marder, M. Elizabeth; Kartavenka, Kostya; Ryan, P. Barry; Barr, Dana Boyd
2015-01-01
The ability to quantify levels of target analytes in biological samples accurately and precisely, in biomonitoring, involves the use of highly sensitive and selective instrumentation such as tandem mass spectrometers and a thorough understanding of highly variable matrix effects. Typically, matrix effects are caused by co-eluting matrix components that alter the ionization of target analytes as well as the chromatographic response of target analytes, leading to reduced or increased sensitivity of the analysis. Thus, before the desired accuracy and precision standards of laboratory data are achieved, these effects must be characterized and controlled. Here we present our review and observations of matrix effects encountered during the validation and implementation of tandem mass spectrometry-based analytical methods. We also provide systematic, comprehensive laboratory strategies needed to control challenges posed by matrix effects in order to ensure delivery of the most accurate data for biomonitoring studies assessing exposure to environmental toxicants. PMID:25562585
Space shuttle entry and landing navigation analysis
NASA Technical Reports Server (NTRS)
Jones, H. L.; Crawford, B. S.
1974-01-01
An aided-inertial navigation system for the entry phase of a Space Shuttle mission, which uses a Kalman filter to mix IMU data with data derived from external navigation aids, is evaluated. A drag pseudo-measurement used during radio blackout is treated as an additional external aid. A comprehensive truth model with 101 states is formulated and used to generate detailed error budgets at several significant time points -- end of blackout, start of final approach, over runway threshold, and touchdown. Sensitivity curves illustrating the effect of variations in the size of individual error sources on navigation accuracy are presented. The sensitivity of the navigation system performance to filter modifications is analyzed. The projected overall performance is shown in the form of time histories of position and velocity error components. The detailed results are summarized and interpreted, and suggestions are made concerning possible software improvements.
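The aided-inertial idea can be illustrated schematically: a Kalman filter propagates a drifting inertial estimate and corrects it with noisy external fixes. This 1-D sketch with invented noise values is far simpler than the paper's 101-state truth model:

```python
import numpy as np

# Schematic 1-D Kalman filter mixing a drifting inertial position estimate
# with noisy external position fixes. All numbers are illustrative.
rng = np.random.default_rng(1)
x_true = 0.0
x_est, p = 0.0, 1.0            # state estimate and its error variance
q, r = 0.01, 0.25              # process (IMU drift) and measurement noise variances

for _ in range(50):
    x_true += 1.0                          # vehicle moves 1 unit per step
    x_est += 1.0 + rng.normal(0, q**0.5)   # inertial propagation (accumulates drift)
    p += q                                 # covariance prediction
    z = x_true + rng.normal(0, r**0.5)     # external navigation-aid fix
    k = p / (p + r)                        # Kalman gain
    x_est += k * (z - x_est)               # measurement update
    p *= (1 - k)                           # covariance update

print(abs(x_est - x_true))                 # small: the aid bounds the drift
```

Without the measurement update the estimate's error variance grows without bound; the external aid (or, during blackout, the drag pseudo-measurement) is what keeps it bounded.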
Model of urban water management towards water sensitive city: a literature review
NASA Astrophysics Data System (ADS)
Maftuhah, D. I.; Anityasari, M.; Sholihah, M.
2018-04-01
Nowadays, many cities face complex issues involving climate change and social, economic, cultural, and environmental problems, especially concerning urban water. In other words, a city must struggle to ensure its sustainability in all aspects. This research focuses on how to ensure city sustainability and resilience in urban water management. Much research has been conducted not only on urban water management but also on sustainability itself. Moreover, water sustainability is shifting from urban water management toward the water sensitive city. This transition requires comprehensive consideration of aspects such as social and institutional dynamics, technical innovation, and local context. This study reviews the literature on models of urban water management and the transition towards water sensitivity. Research findings suggest that many different models have been developed for urban water management, but they are not yet comprehensive, and only a few studies discuss the transition towards the water sensitive and resilient city. Identifying the drawbacks of previous research reveals the gap that this study addresses. The paper therefore contributes a general framework for urban water management modelling studies.
Empirical Observations on the Sensitivity of Hot Cathode Ionization Type Vacuum Gages
NASA Technical Reports Server (NTRS)
Summers, R. L.
1969-01-01
A study of empirical methods of predicting the relative sensitivities of hot cathode ionization gages is presented. Using previously published gage sensitivities, several rules for predicting relative sensitivity are tested. The relative sensitivity to different gases is shown to be invariant with gage type in the linear range of gage operation. The total ionization cross section, molecular and molar polarizability, and refractive index are demonstrated to be useful parameters for predicting relative gage sensitivity. Using data from the literature, the probable error of predictions of relative gage sensitivity based on these molecular properties is found to be about 10 percent. A comprehensive table of predicted relative sensitivities, based on empirical methods, is presented.
Terrain-analysis procedures for modeling radar backscatter
Schaber, Gerald G.; Pike, Richard J.; Berlin, Graydon Lennis
1978-01-01
The collection and analysis of detailed information on the surface of natural terrain are important aspects of radar-backscatter modeling. Radar is especially sensitive to surface-relief changes on the millimeter-to-decimeter scale for conventional K-band (~1-cm wavelength) to L-band (~25-cm wavelength) radar systems. Surface roughness statistics that characterize these changes in detail have been generated from sets of field measurements by a comprehensive set of seven programmed calculations for radar-backscatter modeling. The seven programs are: 1) formatting of data in readable form for the subsequent topographic analysis programs; 2) relief analysis; 3) power spectral analysis; 4) power spectrum plots; 5) slope angle between slope reversals; 6) slope angle against slope interval plots; and 7) base-length slope angle and curvature. This complete Fortran IV software package, 'Terrain Analysis', is here presented for the first time. It was originally developed a decade ago for investigations of lunar morphology and surface trafficability for the Apollo Lunar Roving Vehicle.
Putting lexical constraints in context into the visual-world paradigm.
Novick, Jared M; Thompson-Schill, Sharon L; Trueswell, John C
2008-06-01
Prior eye-tracking studies of spoken sentence comprehension have found that the presence of two potential referents, e.g., two frogs, can guide listeners toward a Modifier interpretation of Put the frog on the napkin... despite strong lexical biases associated with Put that support a Goal interpretation of the temporary ambiguity (Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M. & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632-1634; Trueswell, J. C., Sekerina, I., Hill, N. M. & Logrip, M. L. (1999). The kindergarten-path effect: Studying on-line sentence processing in young children. Cognition, 73, 89-134). This pattern is not expected under constraint-based parsing theories: cue conflict between the lexical evidence (which supports the Goal analysis) and the visuo-contextual evidence (which supports the Modifier analysis) should result in uncertainty about the intended analysis and partial consideration of the Goal analysis. We reexamined these put studies (Experiment 1) by introducing a response time-constraint and a spatial contrast between competing referents (a frog on a napkin vs. a frog in a bowl). If listeners immediately interpret on the... as the start of a restrictive modifier, then their eye movements should rapidly converge on the intended referent (the frog on something). However, listeners showed this pattern only when the phrase was unambiguously a Modifier (Put the frog that's on the...). Syntactically ambiguous trials resulted in transient consideration of the Competitor animal (the frog in something). A reading study was also run on the same individuals (Experiment 2) and performance was compared between the two experiments. Those individuals who relied heavily on lexical biases to resolve a complement ambiguity in reading (The man heard/realized the story had been...) 
showed increased sensitivity to both lexical and contextual constraints in the put-task; i.e., increased consideration of the Goal analysis in 1-Referent Scenes, but also adeptness at using spatial constraints of prepositions (in vs. on) to restrict referential alternatives in 2-Referent Scenes. These findings cross-validate visual world and reading methods and support multiple-constraint theories of sentence processing in which individuals differ in their sensitivity to lexical contingencies.
Sloan, Jamison; Sun, Yunwei; Carrigan, Charles
2016-05-01
Enforcement of the Comprehensive Nuclear Test Ban Treaty (CTBT) will involve monitoring for radiologic indicators of underground nuclear explosions (UNEs). A UNE produces a variety of radioisotopes which then decay through connected radionuclide chains. A particular species of interest is xenon, namely the four isotopes (131m)Xe, (133m)Xe, (133)Xe, and (135)Xe. Due to their half lives, some of these isotopes can exist in the subsurface for more than 100 days. This convenient timescale, combined with modern detection capabilities, makes the xenon family a desirable candidate for UNE detection. Ratios of these isotopes as a function of time have been studied in the past for distinguishing nuclear explosions from civilian nuclear applications. However, the initial yields from UNEs have been treated as fixed values. In reality, these independent yields are uncertain to a large degree. This study quantifies the uncertainty in xenon ratios as a result of these uncertain initial conditions to better bound the values that xenon ratios can assume. We have successfully used a combination of analytical and sampling-based statistical methods to reliably bound xenon isotopic ratios. We have also conducted a sensitivity analysis and found that xenon isotopic ratios are primarily sensitive to only a few of many uncertain initial conditions. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
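A toy Monte Carlo sketch of the sampling-based approach: propagate uncertain initial xenon yields through exponential decay and report percentile bounds on an isotopic ratio. The yield distributions are invented, and a real analysis must also include production from iodine precursors in the connected decay chains:

```python
import numpy as np

# Propagate uncertain initial yields into a Xe-133/Xe-135 activity-free
# number ratio via independent exponential decay. Half-lives: Xe-133 ~5.243 d,
# Xe-135 ~9.14 h. Yield distributions are invented for illustration.
rng = np.random.default_rng(7)
lam133 = np.log(2) / 5.243            # decay constant, per day
lam135 = np.log(2) / (9.14 / 24.0)    # decay constant, per day

n = 100_000
y133 = rng.normal(1.0, 0.1, n)        # uncertain initial yields (hypothetical)
y135 = rng.normal(1.0, 0.1, n)

t = 3.0                               # days after the event
ratio = (y133 * np.exp(-lam133 * t)) / (y135 * np.exp(-lam135 * t))

lo, hi = np.percentile(ratio, [2.5, 97.5])
print(np.median(ratio), lo, hi)       # median ~158 with a spread from the yields
```

The percentile band is the kind of bound the study seeks: even with fixed decay constants, uncertain initial conditions smear each ratio into an interval rather than a single discriminating value.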
Wan, Wei; Li, Huan; Xie, Hongjie; Hong, Yang; Long, Di; Zhao, Limin; Han, Zhongying; Cui, Yaokui; Liu, Baojian; Wang, Cunguang; Yang, Wenting
2017-01-01
Lake surface water temperature (LSWT) is sensitive to long-term changes in thermal structure of lakes and regional air temperature. In the context of global climate change, recent studies showed a significant warming trend of LSWT based on investigating 291 lakes (71% are large lakes, ≥50 km2 each) globally. However, further efforts are needed to examine variation in LSWT at finer regional spatial and temporal scales. The Tibetan Plateau (TP), known as ‘the Roof of the World’ and ‘Asia’s water towers’, exerts large influences on and is sensitive to regional and even global climates. Aiming to examine detailed changing patterns and potential driving mechanisms of temperature variations of lakes across the TP region, this paper presents the first comprehensive data set of 15-year (2001–2015) nighttime and daytime LSWT for 374 lakes (≥10 km2 each), using MODIS (Moderate Resolution Imaging Spectroradiometer) Land Surface Temperature (LST) products as well as four lake boundary shapefiles (i.e., 2002, 2005, 2009, and 2014) derived from Landsat/CBERS/GaoFen-1 satellite images. The data set itself reveals significant information on LSWT and its changes over the TP and is an indispensable variable for numerous applications related to climate change, water budget analysis (particularly lake evaporation), water storage changes, glacier melting and permafrost degradation, etc. PMID:28742066
NASA Technical Reports Server (NTRS)
Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.
2005-01-01
This paper addresses the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and sub-system reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters. We present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold-down with launch commit criteria, engine altitude start (1 in. start), multiple altitude restart (less than 1 restart), component, subsystem and system design, manufacturing/ground operation support/pre- and post-flight checkouts and inspection, and extensiveness of the development program. We present some sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
ERIC Educational Resources Information Center
Allen, Daniel N.; Thaler, Nicholas S.; Barchard, Kimberly A.; Vertinski, Mary; Mayfield, Joan
2012-01-01
The Comprehensive Trail Making Test (CTMT) is a relatively new version of the Trail Making Test that has a number of appealing features, including a large normative sample that allows raw scores to be converted to standard "T" scores adjusted for age. Preliminary validity information suggests that CTMT scores are sensitive to brain…
Comprehensive Trail Making Test Performance in Children and Adolescents with Traumatic Brain Injury
ERIC Educational Resources Information Center
Allen, Daniel N.; Thaler, Nicholas S.; Ringdahl, Erik N.; Barney, Sally J.; Mayfield, Joan
2012-01-01
The sensitivity of the Trail Making Test to brain damage has been well-established over many years, making it one of the most commonly used tests in clinical neuropsychological evaluations. The current study examined the validity of scores from a newer version of the Trail Making Test, the Comprehensive Trail Making Test (CTMT), in children and…
ERIC Educational Resources Information Center
Roodsaz, Rahil
2018-01-01
As part of Western European development aid policy, comprehensive sexuality education (CSE) is increasingly promoted in resource-poor countries. This paper engages with CSE promotion in Bangladesh funded by the Dutch Government. It unpacks the "collaboration" by looking at how a paradox is played out between the universal ideals…
Simulation on a car interior aerodynamic noise control based on statistical energy analysis
NASA Astrophysics Data System (ADS)
Chen, Xin; Wang, Dengfeng; Ma, Zhengdong
2012-09-01
Accurate simulation of interior aerodynamic noise is an important question in car interior noise reduction. The unsteady aerodynamic pressure on body surfaces has been shown to be the key factor in controlling car interior aerodynamic noise at high frequencies and high speeds. In this paper, a detailed statistical energy analysis (SEA) model is built, and the vibro-acoustic power inputs are loaded onto the model to obtain valid results for car interior noise analysis. The model is a solid foundation for further optimization of car interior noise control. After the subsystems whose power contributions to car interior noise are most sensitive are identified by comprehensive SEA analysis, the sound pressure level of car interior aerodynamic noise can be reduced by improving their sound and damping characteristics. Further vehicle testing results show that the interior acoustic performance can be improved by using a detailed SEA model, comprising more than 80 subsystems, together with the calculation of unsteady aerodynamic pressure on body surfaces and the improvement of the materials' sound and damping properties. A reduction of more than 2 dB at the central frequencies of the spectrum above 800 Hz was achieved. The proposed optimization method can serve as a reference for car interior aerodynamic noise control using a detailed SEA model integrated with unsteady computational fluid dynamics (CFD) and sensitivity analysis of acoustic contributions.
Hoffmann, Sebastian
2015-01-01
The development of non-animal skin sensitization test methods and strategies is progressing quickly. Either individually or in combination, their predictive capacity is usually described in comparison to local lymph node assay (LLNA) results. In this process, the important lesson from other endpoints, such as skin or eye irritation, to account for the variability of reference test results, here the LLNA, has not yet been fully acknowledged. In order to provide assessors as well as method and strategy developers with appropriate estimates, we investigated the variability of EC3 values from repeated substance testing using the publicly available NICEATM (NTP Interagency Center for the Evaluation of Alternative Toxicological Methods) LLNA database. Repeat experiments for more than 60 substances were analyzed, once taking the vehicle into account and once combining data over all vehicles. In general, variability was higher when different vehicles were used. In terms of skin sensitization potential, i.e., discriminating sensitizers from non-sensitizers, the false positive rate ranged from 14-20%, while the false negative rate was 4-5%. In terms of skin sensitization potency, the rate of assigning a substance to the next higher or next lower potency class was approximately 10-15%. In addition, general estimates of EC3 variability are provided that can be used for modelling purposes. With our analysis we stress the importance of considering LLNA variability in the assessment of skin sensitization test methods and strategies, and we provide estimates thereof.
Global sensitivity analysis of water age and temperature for informing salmonid disease management
NASA Astrophysics Data System (ADS)
Javaheri, Amir; Babbar-Sebens, Meghna; Alexander, Julie; Bartholomew, Jerri; Hallett, Sascha
2018-06-01
Many rivers in the Pacific Northwest region of North America are anthropogenically manipulated via dam operations, leading to system-wide impacts on hydrodynamic conditions and aquatic communities. Understanding how dam operations alter abiotic and biotic variables is important for designing management actions. For example, in the Klamath River, dam outflows could be manipulated to alter water age and temperature to reduce the risk of parasite infections in salmon by diluting or altering the viability of parasite spores. However, the sensitivity of water age and temperature to riverine conditions such as bathymetry can affect outcomes from dam operations. To examine this issue in detail, we conducted a global sensitivity analysis of water age and temperature to a comprehensive set of hydraulic and meteorological parameters in the Klamath River, California, where management of salmonid disease is a high priority. We applied an analysis technique that combined Latin hypercube and one-at-a-time sampling methods, and included simulation runs with the hydrodynamic numerical model of the Lower Klamath. We found that flow rate and bottom roughness were the two most important parameters influencing water age. Water temperature was most sensitive to inflow temperature, followed by air temperature, solar radiation, wind speed, flow rate, and wet bulb temperature, in that order. Our results are relevant for managers because they provide a framework for predicting how water within 'high infection risk' sections of the river will respond to dam water (low infection risk) input. Moreover, these data will be useful for prioritizing the use of water age (dilution) versus temperature (spore viability) under certain contexts when considering flow manipulation as a method to reduce the risk of infection and disease in Klamath River salmon.
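The combination of Latin hypercube sampling and one-at-a-time (OAT) perturbation used in the study above can be sketched on a toy model. The `toy_water_age` function, its coefficients, and the parameter bounds below are hypothetical stand-ins for the actual hydrodynamic model:

```python
import random

def toy_water_age(flow, roughness, inflow_temp):
    """Hypothetical stand-in for the hydrodynamic model: water age (days) falls
    with flow rate and rises with bottom roughness; inflow temperature is inert
    here, so OAT should rank it last."""
    return 100.0 / flow + 5.0 * roughness

def latin_hypercube(n, bounds, seed=1):
    """One stratified draw per interval and parameter, randomly permuted per column."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        strata = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)
        cols.append([lo + s * (hi - lo) for s in strata])
    return list(zip(*cols))

def oat_spread(base, bounds, model, frac=0.1):
    """One-at-a-time: nudge each parameter by +/- frac of its range and record
    the resulting swing in the model output."""
    swings = []
    for i, (lo, hi) in enumerate(bounds):
        delta = frac * (hi - lo)
        up, dn = list(base), list(base)
        up[i] += delta
        dn[i] -= delta
        swings.append(abs(model(*up) - model(*dn)))
    return swings

bounds = [(5.0, 50.0), (0.01, 0.1), (4.0, 20.0)]  # flow, roughness, inflow temp
samples = latin_hypercube(40, bounds)
# Average OAT swing over the LHS base points ranks parameter influence globally
avg_swing = [0.0, 0.0, 0.0]
for point in samples:
    for i, s in enumerate(oat_spread(point, bounds, toy_water_age)):
        avg_swing[i] += s / len(samples)
```

Averaging the local OAT swings over the stratified base points is what makes the screening "global": influence is judged across the whole parameter space rather than at a single nominal point.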
Wang, X W; Pappoe, F; Huang, Y; Cheng, X W; Xu, D F; Wang, H; Xu, Y H
2015-01-01
The Xpert MTB/RIF assay has been recommended by WHO to replace conventional microscopy, culture, and drug resistance tests. It simultaneously detects both Mycobacterium tuberculosis infection (TB) and resistance to rifampicin (RIF) within two hours. The objective was to review the available research studies on the accuracy of the Xpert MTB/RIF assay for diagnosing pulmonary TB and RIF-resistance in children. A comprehensive search of Pubmed and Embase was performed up to October 28, 2014. We identified published articles estimating the diagnostic accuracy of the Xpert MTB/RIF assay in children with or without HIV using culture or culture plus clinical TB as the reference standard. The QUADAS-2 tool was used to evaluate the quality of the studies. A summary estimation of sensitivity, specificity, diagnostic odds ratios (DOR), and the area under the summary ROC curve (AUC) was performed. Meta-analysis was used to establish the overall accuracy. 11 diagnostic studies with 3801 patients were included in the systematic review. The overall analysis revealed a moderate sensitivity and high specificity of 65% (95% CI: 61 - 69%) and 99% (95% CI: 98 - 99%), respectively, and a pooled diagnostic odds ratio of 164.09 (95% CI: 111.89 - 240.64). The AUC value was found to be 0.94. The pooled sensitivity and specificity for paediatric rifampicin resistance were 94.0% (95% CI: 80.0 - 93.0%) and 99.0% (95% CI: 95.0 - 98.0%), respectively. Hence, the Xpert MTB/RIF assay performs well for diagnosing paediatric pulmonary tuberculosis and for detecting rifampicin resistance. The Xpert MTB/RIF is sensitive and specific for diagnosing paediatric pulmonary TB. It is also effective in detecting rifampicin resistance. It can, therefore, be used as an initial diagnostic tool.
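Pooled diagnostic accuracy measures of the kind reported above can be illustrated from per-study 2x2 counts. This summed-counts pooling is a didactic simplification of the formal meta-analytic models (e.g., bivariate random-effects) such reviews actually use, and the per-study counts below are made up:

```python
def pooled_diagnostics(studies):
    """studies: list of (TP, FP, FN, TN) tuples. Returns pooled sensitivity,
    specificity, and diagnostic odds ratio from summed counts (a simple
    fixed-pool illustration, not a formal random-effects meta-analysis)."""
    tp = sum(s[0] for s in studies)
    fp = sum(s[1] for s in studies)
    fn = sum(s[2] for s in studies)
    tn = sum(s[3] for s in studies)
    sens = tp / (tp + fn)                 # P(test+ | disease+)
    spec = tn / (tn + fp)                 # P(test- | disease-)
    dor = (tp * tn) / (fp * fn)           # = (sens/(1-sens)) / ((1-spec)/spec)
    return sens, spec, dor

# Hypothetical per-study 2x2 counts, shaped like the "moderate sensitivity,
# high specificity" pattern the review reports
studies = [(45, 2, 25, 150), (60, 3, 30, 200), (30, 1, 20, 120)]
sens, spec, dor = pooled_diagnostics(studies)
```

The diagnostic odds ratio collapses both error rates into a single number, which is why a test with mediocre sensitivity can still post a large DOR when specificity is near-perfect.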
Tarafdar, Abhrajyoti; Sinha, Alok
2017-10-01
A carcinogenic risk assessment of polycyclic aromatic hydrocarbons in soils and sediments was conducted using a probabilistic approach from a national perspective. Published monitoring data of polycyclic aromatic hydrocarbons present in soils and sediments at different study points across India were collected and converted to their corresponding BaP equivalent concentrations. These BaP equivalent concentrations were used to evaluate comprehensive cancer risk for two different age groups. Monte Carlo simulation and sensitivity analysis were applied to quantify uncertainties in the risk estimation. The analysis yields 90th-percentile cancer risk values of 1.770E-5 for children and 3.156E-5 for adults in heavily polluted site soils. Overall carcinogenic risks of polycyclic aromatic hydrocarbons in soils of India were mostly within acceptable limits. However, the food ingestion exposure route places sediments in a high-risk zone. The 90th-percentile risk values from sediments are 7.863E-05 for children and 3.999E-04 for adults. Sensitivity analysis reveals exposure duration and the relative skin adherence factor for soil as the most influential parameters of the assessment, followed by the BaP equivalent concentration of polycyclic aromatic hydrocarbons. For sediments, the biota-to-sediment accumulation factor of fish in terms of BaP has the greatest influence on the total outcome, followed by BaP equivalent concentration and exposure duration. Individual exposure route analysis showed dermal contact for soils and food ingestion for sediments as the main exposure pathways. Some specific locations, such as the surrounding areas of Bhavnagar, Raniganj, Sunderban, Raipur, and Delhi, demand potential strategies for carcinogenic risk management and reduction. The current study is probably the first attempt to provide information on the carcinogenic risk of polycyclic aromatic hydrocarbons in soils and sediments across India.
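The probabilistic risk estimation described above can be sketched as a Monte Carlo loop over an incremental lifetime cancer risk (ILCR) equation for soil ingestion. The input distributions and exposure values below are illustrative placeholders rather than the study's inputs; 7.3 per (mg/kg-day) is the commonly cited oral slope factor for BaP:

```python
import math
import random

def ingestion_ilcr(conc_mg_kg, intake_mg_day, ef_days_yr, ed_yr, bw_kg, at_days, slope):
    """Incremental lifetime cancer risk via soil ingestion:
    ILCR = (C * IR * EF * ED * 1e-6) / (BW * AT) * SF  (1e-6 converts mg soil to kg)."""
    return (conc_mg_kg * intake_mg_day * ef_days_yr * ed_yr * 1e-6) / (bw_kg * at_days) * slope

def monte_carlo_p90(n=50000, seed=7):
    """Propagate assumed input distributions and return the 90th-percentile risk.
    Distributions here are placeholders for whatever the assessment specifies."""
    rng = random.Random(seed)
    risks = []
    for _ in range(n):
        conc = rng.lognormvariate(math.log(1.0), 0.5)  # BaP-equivalent conc., mg/kg
        ed = rng.uniform(20.0, 30.0)                   # exposure duration, years
        bw = rng.normalvariate(60.0, 8.0)              # adult body weight, kg
        # IR = 100 mg/day, EF = 350 days/yr, AT = 70 yr averaging time, SF = 7.3
        risks.append(ingestion_ilcr(conc, 100.0, 350.0, ed, bw, 70.0 * 365.0, 7.3))
    risks.sort()
    return risks[int(0.9 * n)]

p90_risk = monte_carlo_p90()
```

Reporting the 90th percentile of the simulated risk distribution, rather than a single deterministic estimate, is what lets the assessment compare a conservative upper bound against the conventional 1E-6 to 1E-4 acceptability range.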
Potential diagnostic value of serum p53 antibody for detecting colorectal cancer: A meta-analysis.
Meng, Rongqin; Wang, Yang; He, Liang; He, Yuanqing; Du, Zedong
2018-04-01
Numerous studies have assessed the diagnostic value of serum p53 (s-p53) antibody in patients with colorectal cancer (CRC); however, results remain controversial. The present study aimed to comprehensively and quantitatively summarize the potential diagnostic value of s-p53 antibody in CRC. Databases including PubMed and Embase were systematically searched for studies regarding s-p53 antibody diagnosis in CRC published on or prior to 31 July 2016. The quality of all the included studies was assessed using the quality assessment of studies of diagnostic accuracy (QUADAS) tool. The pooled sensitivity, pooled specificity, positive likelihood ratio (PLR) and negative likelihood ratio (NLR) were analyzed and compared with overall accuracy measures using diagnostic odds ratios (DORs) and area under the curve (AUC) analysis. Publication bias and heterogeneity were also assessed. A total of 11 trials that enrolled a combined 3,392 participants were included in the meta-analysis. Approximately 72.73% (8/11) of the included studies were of high quality (QUADAS score >7), and all were retrospective case-control studies. The pooled sensitivity was 0.19 [95% confidence interval (CI), 0.18-0.21] and the pooled specificity was 0.93 (95% CI, 0.92-0.94). Results also demonstrated a PLR of 4.56 (95% CI, 3.27-6.34), an NLR of 0.78 (95% CI, 0.71-0.85) and a DOR of 6.70 (95% CI, 4.59-9.76). The area under the symmetrical summary receiver operating characteristic curve was 0.73. Furthermore, no evidence of publication bias or heterogeneity was observed in the meta-analysis. Meta-analysis data indicated that s-p53 antibody possesses potential diagnostic value for CRC. However, its discriminative power is somewhat limited due to the low sensitivity.
Zhang, Peige; Zhang, Li; Zheng, Shaoping; Yu, Cheng; Xie, Mingxing; Lv, Qing
2016-01-01
To evaluate the overall performance of acoustic radiation force impulse imaging (ARFI) in differentiating between benign and malignant lymph nodes (LNs) by conducting a meta-analysis. PubMed, Embase, Web of Science, the Cochrane Library and the China National Knowledge Infrastructure were comprehensively searched for potential studies through August 13th, 2016. Studies that investigated the diagnostic power of ARFI for the differential diagnosis of benign and malignant LNs by using virtual touch tissue quantification (VTQ) or virtual touch tissue imaging quantification (VTIQ) were collected. The included articles were published in English or Chinese. Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) was used to evaluate the methodological quality. The pooled sensitivity, specificity, and the area under the summary receiver operating characteristic (SROC) curve (AUC) were calculated by means of a bivariate mixed-effects regression model. Meta-regression analysis was performed to identify potential sources of between-study heterogeneity. Fagan plot analysis was used to explore the clinical utilities. Publication bias was assessed using Deeks' funnel plot. Nine studies involving 1084 LNs from 929 patients were included in the meta-analysis. The summary sensitivity and specificity of ARFI in detecting malignant LNs were 0.87 (95% confidence interval [CI], 0.83-0.91) and 0.88 (95% CI, 0.82-0.92), respectively. The AUC was 0.93 (95% CI, 0.90-0.95). The pooled DOR was 49.59 (95% CI, 26.11-94.15). Deeks' funnel plot revealed no significant publication bias. ARFI is a promising tool for the differentiation of benign and malignant LNs with high sensitivity and specificity.
Wan, Bing; Wang, Siqi; Tu, Mengqi; Wu, Bo; Han, Ping; Xu, Haibo
2017-03-01
The purpose of this meta-analysis was to evaluate the diagnostic accuracy of perfusion magnetic resonance imaging (MRI) as a method for differentiating glioma recurrence from pseudoprogression. The PubMed, Embase, Cochrane Library, and Chinese Biomedical databases were searched comprehensively for relevant studies up to August 3, 2016 according to specific inclusion and exclusion criteria. The quality of the included studies was assessed according to the quality assessment of diagnostic accuracy studies (QUADAS-2). After performing heterogeneity and threshold effect tests, pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were calculated. Publication bias was evaluated visually by a funnel plot and quantitatively using Deeks' funnel plot asymmetry test. The area under the summary receiver operating characteristic curve was calculated to demonstrate the diagnostic performance of perfusion MRI. Eleven studies covering 416 patients and 418 lesions were included in this meta-analysis. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.88 (95% confidence interval [CI] 0.84-0.92), 0.77 (95% CI 0.69-0.84), 3.93 (95% CI 2.83-5.46), 0.16 (95% CI 0.11-0.22), and 27.17 (95% CI 14.96-49.35), respectively. The area under the summary receiver operating characteristic curve was 0.8899. There was no notable publication bias. Sensitivity analysis showed that the meta-analysis results were stable and credible. While perfusion MRI is not the ideal diagnostic method for differentiating glioma recurrence from pseudoprogression, it could improve diagnostic accuracy. Therefore, further research on combining perfusion MRI with other imaging modalities is warranted.
Kulstein, G; Wiegand, P
2018-01-01
Body fluids like blood and saliva are commonly encountered during investigations of high-volume crimes like homicides. The identification of the cellular origin and the composition of a trace can link suspects or victims to a certain crime scene and provide probative value for criminal investigations. To remove all evidence from the crime scene, perpetrators often wash away their traces. Characteristically, items bearing exposed stains such as blood are commonly cleaned or laundered to free them from potentially visible leftovers. In most cases, investigators do not submit laundered items for DNA analysis. However, some studies have already shown that items can still be used for DNA analysis even after they have been laundered. Nonetheless, a systematic evaluation of laundered blood and saliva traces that compares different established and newly developed methods for body fluid identification (BFI) is still missing. Herein, we present the results of a comprehensive study of laundered blood- and saliva-stained pieces of cloth that were subjected to a broad range of methods for BFI, including conventional approaches as well as molecular mRNA profiling. The study included the evaluation of cellular origin as well as DNA profiling of blood- and saliva-stained (synthetic fiber and cotton) pieces of cloth that had been washed at various temperatures, once or multiple times. Our experiments demonstrate that, while STR profiling appears sufficiently sensitive for the individualization of laundered items, there is a lack of BFI approaches with the same sensitivity and specificity that would allow characterization of the cellular origin of challenging, particularly laundered, blood and saliva samples.
A novel approach in water quality assessment based on fuzzy logic.
Gharibi, Hamed; Mahvi, Amir Hossein; Nabizadeh, Ramin; Arabalibeik, Hossein; Yunesian, Masud; Sowlat, Mohammad Hossein
2012-12-15
The present work aimed at developing a novel water quality index based on fuzzy logic, that is, a comprehensive artificial intelligence (AI) approach to the development of environmental indices for routine assessment of surface water quality, particularly for human drinking purposes. Twenty parameters were included based on their critical importance for the overall water quality and their potential impact on human health. To assess the performance of the proposed index under actual conditions, a case study was conducted at Mamloo dam, Iran, employing water quality data of four sampling stations in the water basin of the dam from 2006 to 2009. Results of this study indicated that the general quality of water in all the sampling stations over all the years of the study period is fairly low (yearly averages are usually in the range of 45-55). According to the results of the ANOVA test, water quality did not significantly change over time in any of the sampling stations (P > 0.05). In addition, comparison of the outputs of the proposed fuzzy-based index with those of the NSF water quality index (the WQI) and the Canadian Water Quality Index (CWQI) showed similar results, and all indices were sensitive to changes in the level of water quality parameters. However, the index proposed by the present study produced more stringent outputs compared to the WQI and CWQI. Results of the sensitivity analysis suggested that the index is robust against changes in the rules. In conclusion, the proposed index seems to produce accurate and reliable results and can therefore be used as a comprehensive tool for water quality assessment, especially for the analysis of human drinking water. Copyright © 2012 Elsevier Ltd. All rights reserved.
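A fuzzy-logic water quality score of the kind proposed above can be sketched with triangular membership functions and a small Mamdani-style rule base. This two-parameter toy is only illustrative: the paper's index covers twenty parameters with a full rule set, and every breakpoint and output score below is an assumed placeholder.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_wqi(do_mg_l, turbidity_ntu):
    """Tiny two-parameter sketch: fuzzify inputs, fire min/max rules, and
    defuzzify as a firing-strength-weighted average of rule output scores
    on a 0-100 quality scale. Breakpoints and scores are illustrative."""
    do_low = tri(do_mg_l, 0.0, 2.0, 6.0)
    do_good = tri(do_mg_l, 4.0, 8.0, 14.0)
    turb_low = tri(turbidity_ntu, -1.0, 0.0, 20.0)
    turb_high = tri(turbidity_ntu, 10.0, 50.0, 100.0)
    # Rules as (firing strength, crisp output score) pairs
    rules = [
        (min(do_good, turb_low), 90.0),   # good DO and clear water -> high quality
        (min(do_low, turb_high), 15.0),   # low DO and turbid water -> poor quality
        (max(do_low, turb_high), 40.0),   # either stressor present -> mediocre
    ]
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 50.0     # neutral score if no rule fires

score = fuzzy_wqi(8.0, 2.0)
```

Because quality degrades gradually through overlapping membership functions rather than at hard thresholds, this style of index avoids the cliff-edge behavior of crisp classification schemes.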
Oliver, D; Kotlicka-Antczak, M; Minichino, A; Spada, G; McGuire, P; Fusar-Poli, P
2018-03-01
Primary indicated prevention is reliant on accurate tools to predict the onset of psychosis. The gold standard assessment for detecting individuals at clinical high risk (CHR-P) for psychosis in the UK and many other countries is the Comprehensive Assessment for At Risk Mental States (CAARMS). While the prognostic accuracy of CHR-P instruments has been assessed in general, this is the first study to specifically analyse that of the CAARMS. As such, the CAARMS was used as the index test, with the reference index being psychosis onset within 2 years. Six independent studies were analysed using MIDAS (STATA 14), with a total of 1876 help-seeking subjects referred to high risk services (CHR-P+: n=892; CHR-P-: n=984). Area under the curve (AUC), summary receiver operating characteristic curves (SROC), quality assessment, likelihood ratios, and probability modified plots were computed, along with sensitivity analyses and meta-regressions. The current meta-analysis confirmed that the 2-year prognostic accuracy of the CAARMS is only acceptable (AUC=0.79 95% CI: 0.75-0.83) and not outstanding as previously reported. In particular, specificity was poor. Sensitivity of the CAARMS is inferior compared to the SIPS, while specificity is comparably low. However, due to the difficulties in performing these types of studies, power in this meta-analysis was low. These results indicate that refining and improving the prognostic accuracy of the CAARMS should be the mainstream area of research for the next era. Avenues of prediction improvement are critically discussed and presented to better benefit patients and improve outcomes of first episode psychosis. Copyright © 2017 The Authors. Published by Elsevier Masson SAS. All rights reserved.
Comprehensive two-dimensional gas chromatography for the analysis of Fischer-Tropsch oil products.
van der Westhuizen, Rina; Crous, Renier; de Villiers, André; Sandra, Pat
2010-12-24
The Fischer-Tropsch (FT) process involves a series of catalysed reactions of carbon monoxide and hydrogen, originating from coal, natural gas or biomass, leading to a variety of synthetic chemicals and fuels. The benefits of comprehensive two-dimensional gas chromatography (GC×GC) compared to one-dimensional GC (1D-GC) for the detailed investigation of the oil products of low and high temperature FT processes are presented. GC×GC provides more accurate quantitative data to construct Anderson-Schulz-Flory (ASF) selectivity models that correlate the FT product distribution with reaction variables. On the other hand, the high peak capacity and sensitivity of GC×GC allow the detailed study of components present at trace level. Analyses of the aromatic and oxygenated fractions of a high temperature FT (HT-FT) process are presented. GC×GC data have been used to optimise or tune the HT-FT process by using a lab-scale micro-FT-reactor. Copyright © 2010 Elsevier B.V. All rights reserved.
Comparison of Computed and Measured Vortex Evolution for a UH-60A Rotor in Forward Flight
NASA Technical Reports Server (NTRS)
Ahmad, Jasim Uddin; Yamauchi, Gloria K.; Kao, David L.
2013-01-01
A Computational Fluid Dynamics (CFD) simulation using the Navier-Stokes equations was performed to determine the evolutionary and dynamical characteristics of the vortex flowfield for a highly flexible aeroelastic UH-60A rotor in forward flight. The experimental wake data were acquired using Particle Image Velocimetry (PIV) during a test of the full-scale UH-60A rotor in the National Full-Scale Aerodynamics Complex 40- by 80-Foot Wind Tunnel. The PIV measurements were made in a stationary cross-flow plane at 90 deg rotor azimuth. The CFD simulation was performed using the OVERFLOW CFD solver loosely coupled with the rotorcraft comprehensive code CAMRAD II. Characteristics of vortices captured in the PIV plane from different blades are compared with CFD calculations. The blade airloads were calculated using two different turbulence models. A limited spatial, temporal, and CFD/comprehensive-code coupling sensitivity analysis was performed in order to verify the unsteady helicopter simulations with a moving rotor grid system.
Food allergy and risk assessment: Current status and future directions
NASA Astrophysics Data System (ADS)
Remington, Benjamin C.
2017-09-01
Risk analysis is a three-part, interactive process that consists of a scientific risk assessment, a risk management strategy and an exchange of information through risk communication. Quantitative risk assessment methodologies are now available and widely used for assessing risks regarding the unintentional consumption of major, regulated allergens, but new or modified proteins can also pose a risk of de novo sensitization. The risks due to de novo sensitization to new food allergies are harder to quantify. There is a need for a systematic, comprehensive battery of tests and an assessment strategy to identify and characterise de novo sensitization to new proteins and the risks associated with them. A risk assessment must be tailored to the risk management questions and needs. Consequently, the hazard and risk assessment methods applied and the desired information are determined by the requested outcome for risk management purposes and the decisions to be made. The COST Action network (ImpARAS, www.imparas.eu) has recently started to discuss these risk management criteria from first principles and will continue with the broader subject of improving strategies for allergen risk assessment throughout 2016-2018/9.
Wood, Sarah G; Hart, Sara A; Little, Callie W; Phillips, Beth M
2016-07-01
Past research suggests that reading comprehension test performance does not rely solely on targeted cognitive processes such as word reading, but also on other non-target aspects such as test anxiety. Using a genetically sensitive design, we sought to understand the genetic and environmental etiology of the association between test anxiety and reading comprehension as measured by a high-stakes test. Mirroring the behavioral literature of test anxiety, three different dimensions of test anxiety were examined in relation to reading comprehension, namely intrusive thoughts, autonomic reactions, and off-task behaviors. Participants included 426 sets of twins from the Florida Twin Project on Reading. The results indicated test anxiety was negatively associated with reading comprehension test performance, specifically through common shared environmental influences. The significant contribution of test anxiety to reading comprehension on a high-stakes test supports the notion that non-targeted factors may be interfering with accurately assessing students' reading abilities.
Owusu, Cynthia; Koroukian, Siran M.; Schluchter, Mark; Bakaki, Paul; Berger, Nathan A.
2011-01-01
Background The Vulnerable Elders Survey (VES-13) has been validated for screening older cancer patients for a Comprehensive Geriatric Assessment (CGA). To identify a widely acceptable approach that encourages oncologists to screen older cancer patients for a CGA, we examined the Eastern Cooperative Oncology Group Performance Status (ECOG-PS) and Karnofsky Index of Performance Status (KPS) scales’ ability to identify abnormalities on a CGA and compared the performance of the two instruments with the VES-13. Methods We enrolled 117 participants, ≥65 years with stage I–IV cancer, into this cross-sectional study. Our primary outcome variable was the presence of ≥two abnormalities on the CGA (yes or no). We employed receiver operating characteristic curve analysis to compare the discriminatory abilities of the three instruments to identify ≥two abnormalities on the CGA. Results Of the 117 participants, 43% had ≥two abnormalities on the CGA. The VES-13 was predictive of ≥two abnormalities on the CGA, area under the curve (AUC)=0.85 [(95% CI: 0.78–0.92); sensitivity=88%, specificity=69%, at cut-off ≥3]. The ECOG-PS and KPS showed similar discriminatory powers, AUC=0.88 [(95% CI: 0.83–0.94); sensitivity=94%, specificity=55%, at cut-off ≥1]; and AUC=0.90 [(95% CI: 0.84–0.96); sensitivity=78%, specificity=91%, at cut-off ≤80%], respectively. Conclusion The ECOG-PS and KPS were equivalent to the VES-13 in identifying older cancer patients with at least two abnormalities on the CGA. Given that oncologists are already conversant with the KPS and ECOG-PS, these two instruments offer medical oncologists a widely acceptable approach for screening older patients for a CGA. PMID:21927633
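The sensitivity/specificity figures above come from dichotomizing each screening score at a cutoff. A minimal sketch of that calculation (the scores and outcomes below are hypothetical, not the study's data):

```python
def sens_spec(scores, has_ge2_abnormalities, cutoff):
    """Sensitivity and specificity of a screening score at a given cutoff.
    A score >= cutoff is treated as a positive screen."""
    pairs = list(zip(scores, has_ge2_abnormalities))
    tp = sum(1 for s, y in pairs if s >= cutoff and y)
    fn = sum(1 for s, y in pairs if s < cutoff and y)
    tn = sum(1 for s, y in pairs if s < cutoff and not y)
    fp = sum(1 for s, y in pairs if s >= cutoff and not y)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical VES-13-style scores and CGA outcomes (>=2 abnormalities)
scores = [0, 1, 3, 5, 7, 2, 4, 0]
truth = [False, False, True, True, True, False, True, False]
sens, spec = sens_spec(scores, truth, cutoff=3)
```

Sweeping the cutoff over all observed score values and plotting sensitivity against (1 - specificity) traces the ROC curve whose AUC the study reports.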
Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu
2016-10-01
The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese. It also contributed significantly to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody. © The Author(s) 2016.
2009-01-01
Background Large discrepancies in signature composition and outcome concordance have been observed between different microarray breast cancer expression profiling studies. This is often ascribed to differences in array platform as well as biological variability. We conjecture that other reasons for the observed discrepancies are the measurement error associated with each feature and the choice of preprocessing method. Microarray data are known to be subject to technical variation and the confidence intervals around individual point estimates of expression levels can be wide. Furthermore, the estimated expression values also vary depending on the selected preprocessing scheme. In microarray breast cancer classification studies, however, these two forms of feature variability are almost always ignored and hence their exact role is unclear. Results We have performed a comprehensive sensitivity analysis of microarray breast cancer classification under the two types of feature variability mentioned above. We used data from six state-of-the-art preprocessing methods, using a compendium consisting of eight different datasets, involving 1131 hybridizations, containing data from both one- and two-color array technology. For a wide range of classifiers, we performed a joint study on performance, concordance and stability. In the stability analysis we explicitly tested classifiers for their noise tolerance by using perturbed expression profiles that are based on uncertainty information directly related to the preprocessing methods. Our results indicate that signature composition is strongly influenced by feature variability, even if the array platform and the stratification of patient samples are identical. In addition, we show that there is often a high level of discordance between individual class assignments for signatures constructed on data coming from different preprocessing schemes, even if the actual signature composition is identical.
Conclusion Feature variability can have a strong impact on breast cancer signature composition, as well as the classification of individual patient samples. We therefore strongly recommend that feature variability is considered in analyzing data from microarray breast cancer expression profiling experiments. PMID:19941644
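The noise-tolerance test described above perturbs expression profiles and checks how often class assignments change. A toy sketch of that stability check (the one-gene threshold classifier and values are hypothetical, standing in for a real signature):

```python
import random

def concordance_under_noise(samples, classify, sigma, n_rounds=20, seed=0):
    """Fraction of class assignments that stay unchanged when each feature
    is perturbed with Gaussian measurement error of scale `sigma`."""
    rng = random.Random(seed)
    baseline = [classify(x) for x in samples]
    agree = total = 0
    for _ in range(n_rounds):
        for x, b in zip(samples, baseline):
            noisy = [v + rng.gauss(0, sigma) for v in x]
            agree += classify(noisy) == b
            total += 1
    return agree / total

# Toy one-gene "signature": threshold a single log-expression value
classify = lambda x: "poor" if x[0] > 0 else "good"
samples = [[2.0], [-2.0], [0.1], [-0.1]]
stability = concordance_under_noise(samples, classify, sigma=0.5)
```

Samples far from the decision boundary keep their labels under perturbation, while borderline samples flip frequently, which is exactly the discordance mechanism the study highlights.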
Fraysse, Bodvaël; Barthélémy, Inès; Qannari, El Mostafa; Rouger, Karl; Thorin, Chantal; Blot, Stéphane; Le Guiner, Caroline; Chérel, Yan; Hogrel, Jean-Yves
2017-04-12
Accelerometric analysis of gait abnormalities in golden retriever muscular dystrophy (GRMD) dogs is of limited sensitivity and produces highly complex data. The use of discriminant analysis may enable simpler and more sensitive evaluation of treatment benefits in this important preclinical model. Accelerometry was performed twice monthly between the ages of 2 and 12 months on 8 healthy and 20 GRMD dogs. Seven accelerometric parameters were analysed using linear discriminant analysis (LDA). Manipulation of the dependent and independent variables produced three distinct models. The ability of each model to detect gait alterations and their pattern change with age was tested using a leave-one-out cross-validation approach. Selecting genotype (healthy or GRMD) as the dependent variable resulted in a model (Model 1) allowing good discrimination between the gait phenotypes of GRMD and healthy dogs. However, this model was not sufficiently representative of the disease progression. In Model 2, age in months was added as a supplementary dependent variable (GRMD_2 to GRMD_12 and Healthy_2 to Healthy_9.5), resulting in a high overall misclassification rate (83.2%). To improve accuracy, a third model (Model 3) was created in which age was also included as an explanatory variable. This resulted in an overall misclassification rate lower than 12%. Model 3 was evaluated using blinded data pertaining to 81 healthy and GRMD dogs. In all but one case, the model correctly matched gait phenotype to the actual genotype. Finally, we used Model 3 to reanalyse data from a previous study regarding the effects of immunosuppressive treatments on muscular dystrophy in GRMD dogs. Our model identified a significant effect of immunosuppressive treatments on gait quality, corroborating the original findings, with the added advantages of direct statistical analysis with greater sensitivity and more comprehensible data representation.
Gait analysis using LDA allows for improved analysis of accelerometry data by applying a decision-making analysis approach to the evaluation of preclinical treatment benefits in GRMD dogs.
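The leave-one-out cross-validation used to test the models above can be sketched with a minimal nearest-class-mean classifier (a simplified stand-in for LDA under equal, spherical covariances; the two accelerometric features and values are invented for illustration):

```python
def loo_misclassification(samples, labels):
    """Leave-one-out error of a nearest-class-mean classifier."""
    errors, n = 0, len(samples)
    for i in range(n):
        train = [(x, y) for j, (x, y) in enumerate(zip(samples, labels)) if j != i]
        means = {}
        for cls in set(y for _, y in train):
            pts = [x for x, y in train if y == cls]
            means[cls] = [sum(col) / len(pts) for col in zip(*pts)]
        dist2 = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
        pred = min(means, key=lambda c: dist2(samples[i], means[c]))
        errors += pred != labels[i]
    return errors / n

# Hypothetical accelerometric parameters (e.g. regularity, stride power)
healthy = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9], [0.95, 1.05]]
grmd = [[0.4, 2.0], [0.5, 1.9], [0.45, 2.1], [0.55, 1.8]]
rate = loo_misclassification(healthy + grmd, ["healthy"] * 4 + ["GRMD"] * 4)
```

Each dog is held out in turn, the discriminant is refit on the rest, and the held-out dog is classified; the pooled error rate is the cross-validated misclassification rate reported in the abstract.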
The Genetic and Environmental Foundation of the Simple View of Reading in Chinese
Ho, Connie Suk-Han; Chow, Bonnie Wing-Yin; Wong, Simpson Wai-Lap; Waye, Mary M. Y.; Bishop, Dorothy V. M.
2012-01-01
The Simple View of Reading (SVR) in Chinese was examined in a genetically sensitive design. A total of 270 pairs of Chinese twins (190 pairs of monozygotic twins and 80 pairs of same-sex dizygotic twins) were tested on Chinese vocabulary and word reading at a mean age of 7.8 years, and on reading comprehension of sentences and passages one year later. Results of behavior-genetic analyses showed that both vocabulary and word reading had significant independent genetic influences on reading comprehension, and the two factors together accounted for most but not all of the genetic influences on reading comprehension. In addition, sentence comprehension had a stronger genetic correlation with word reading, while passage comprehension showed a trend of stronger genetic overlap with vocabulary. These findings suggest that the genetic foundation of the SVR in Chinese is largely supported in that language comprehension and decoding are two core skills for reading comprehension in nonalphabetic as well as alphabetic written languages. PMID:23112862
ENGLISH, LIANNE; BARNES, MARCIA A.; FLETCHER, JACK M.; DENNIS, MAUREEN; RAGHUBAR, KIMBERLY P.
2011-01-01
Spina bifida meningomyelocele (SBM) is a neurodevelopmental disorder associated with intact word decoding and deficient text and discourse comprehension. This study investigated the ability to adjust reading in accordance with specified reading goals in 79 children and adolescents with SBM (9–19 years of age) and 39 controls (8–17 years of age). Both groups demonstrated slower reading times and enhanced comprehension when reading to study or to come up with a title than when reading for specific information or for entertainment. For both groups, verbal working memory contributed to comprehension performance in those reading conditions hypothesized to require more cognitive effort. Despite their sensitivity to the goals of reading, the group with SBM answered fewer comprehension questions correctly across all reading goal conditions. The results are discussed in relation to the hypothesized cognitive underpinnings of comprehension deficits in SBM and to current models of text comprehension. PMID:20338082
Report for the NGFA-5 project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaing, C; Jackson, P; Thissen, J
The objective of this project is to provide DHS a comprehensive evaluation of the current genomic technologies including genotyping, TaqMan PCR, multiple locus variable tandem repeat analysis (MLVA), microarray and high-throughput DNA sequencing in the analysis of biothreat agents from complex environmental samples. To effectively compare the sensitivity and specificity of the different genomic technologies, we used SNP TaqMan PCR, MLVA, microarray and high-throughput Illumina and 454 sequencing to test various strains from B. anthracis, B. thuringiensis, BioWatch aerosol filter extracts or soil samples that were spiked with B. anthracis, and samples that were previously collected during DHS and EPA environmental release exercises that were known to contain B. thuringiensis spores. The results of all the samples against the various assays are discussed in this report.
NASA Technical Reports Server (NTRS)
Anderson, K. A.; Chase, L. M.; Lin, R. P.; Mccoy, J. E.; Mcguire, R. E.
1974-01-01
The lunar particle shadows and boundary layer experiments aboard the Apollo 15 and 16 subsatellites and scientific reduction and analysis of the data to date are discussed with emphasis on four major topics: solar particles; interplanetary particle phenomena; lunar interactions; and topology and dynamics of the magnetosphere at lunar orbit. The studies of solar and interplanetary particles concentrated on the low energy region which was essentially unexplored, and the studies of lunar interaction pointed up the transition from single particle to plasma characteristics. The analysis concentrated on the electron angular distributions as highly sensitive indicators of localized magnetization of the lunar surface. Magnetosphere experiments provided the first electric field measurements in the distant magnetotail, as well as comprehensive low energy particle measurements at lunar distance.
A mechanistic modelling approach to polymer dissolution using magnetic resonance microimaging.
Kaunisto, Erik; Abrahmsen-Alami, Susanna; Borgquist, Per; Larsson, Anette; Nilsson, Bernt; Axelsson, Anders
2010-10-15
In this paper a computationally efficient mathematical model describing the swelling and dissolution of a polyethylene oxide tablet is presented. The model was calibrated against polymer release, front position and water concentration profile data inside the gel layer, using two different diffusion models. The water concentration profiles were obtained from magnetic resonance microimaging data which, in addition to the previously used texture analysis method, can help to validate and discriminate between the mechanisms of swelling, diffusion and erosion in relation to the dissolution process. Critical parameters were identified through a comprehensive sensitivity analysis, and the effect of hydrodynamic shearing was investigated by using two different stirring rates. Good agreement was obtained between the experimental results and the model. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lazarus, P.; Brazier, A.; Hessels, J. W. T.; Karako-Argaman, C.; Kaspi, V. M.; Lynch, R.; Madsen, E.; Patel, C.; Ransom, S. M.; Scholz, P.; Swiggum, J.; Zhu, W. W.; Allen, B.; Bogdanov, S.; Camilo, F.; Cardoso, F.; Chatterjee, S.; Cordes, J. M.; Crawford, F.; Deneva, J. S.; Ferdman, R.; Freire, P. C. C.; Jenet, F. A.; Knispel, B.; Lee, K. J.; van Leeuwen, J.; Lorimer, D. R.; Lyne, A. G.; McLaughlin, M. A.; Siemens, X.; Spitler, L. G.; Stairs, I. H.; Stovall, K.; Venkataraman, A.
2015-10-01
The on-going Arecibo Pulsar-ALFA (PALFA) survey began in 2004 and is searching for radio pulsars in the Galactic plane at 1.4 GHz. Here we present a comprehensive description of one of its main data reduction pipelines that is based on the PRESTO software and includes new interference-excision algorithms and candidate selection heuristics. This pipeline has been used to discover 40 pulsars, bringing the survey’s discovery total to 144 pulsars. Of the new discoveries, eight are millisecond pulsars (MSPs; P < 10 ms) and one is a Fast Radio Burst (FRB). This pipeline has also re-detected 188 previously known pulsars, 60 of them previously discovered by the other PALFA pipelines. We present a novel method for determining the survey sensitivity that accurately takes into account the effects of interference and red noise: we inject synthetic pulsar signals with various parameters into real survey observations and then attempt to recover them with our pipeline. We find that the PALFA survey achieves the sensitivity to MSPs predicted by theoretical models but suffers a degradation for P ≳ 100 ms that gradually becomes up to ~10 times worse for P > 4 s at DM < 150 pc cm⁻³. We estimate 33 ± 3% of the slower pulsars are missed, largely due to red noise. A population synthesis analysis using the sensitivity limits we measured suggests the PALFA survey should have found 224 ± 16 un-recycled pulsars in the data set analyzed, in agreement with the 241 actually detected. The reduced sensitivity could have implications on estimates of the number of long-period pulsars in the Galaxy.
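The injection-recovery method described above can be illustrated with a toy model: add a periodic pulse of known amplitude to white noise, fold at the known period, and count how often the folded peak clears a detection threshold (all parameters here are invented for illustration; the real pipeline injects into actual survey data and runs the full search):

```python
import random

def injection_recovery(n_trials, amp, detect_snr=6.0, nbins=64, npulses=200, seed=0):
    """Toy injection-recovery test: fraction of injected pulsars recovered.
    `amp` is the per-pulse amplitude in units of the noise sigma."""
    rng = random.Random(seed)
    recovered = 0
    for _ in range(n_trials):
        folded = [0.0] * nbins
        for _ in range(npulses):
            for b in range(nbins):
                folded[b] += rng.gauss(0, 1)  # white noise in every phase bin
            folded[0] += amp                  # the pulse always lands in bin 0
        off = folded[1:]                      # off-pulse bins for noise stats
        mean = sum(off) / len(off)
        var = sum((x - mean) ** 2 for x in off) / (len(off) - 1)
        snr = (folded[0] - mean) / var ** 0.5
        if snr > detect_snr:
            recovered += 1
    return recovered / n_trials
```

Repeating this over a grid of periods, DMs and amplitudes yields an empirical sensitivity curve; the study's key point is that red noise and interference in real data degrade recovery for long periods in a way idealized models miss.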
Shi, Ruo-Yang; Yao, Qiu-Ying; Wu, Lian-Ming; Xu, Jian-Rong
2018-06-01
We compared the diagnostic performance of diffusion weighted imaging (DWI) acquired with 1.5T and 3.0T magnetic resonance (MR) units in differentiating malignant breast lesions from benign ones. A comprehensive search of the PubMed and Embase databases was performed for studies reported from January 1, 2000 to February 19, 2016. The quality of the included studies was assessed. Statistical analysis included pooling of diagnostic sensitivity and specificity and assessing data inhomogeneity and publication bias. A total of 61 studies were included after a full-text review. These included 4778 patients and 5205 breast lesions. The overall sensitivity and specificity were 90% (95% confidence interval [CI], 88%-92%) and 86% (95% CI, 82%-89%), respectively. The pooled diagnostic odds ratio was 53 (95% CI, 37-74). For breast cancer versus benign lesions, the area under the curve was 0.94 (95% CI, 0.92-0.96). For the 44 studies that used a 1.5T MR unit, the pooled sensitivity and specificity were 91% (95% CI, 89%-92%) and 86% (95% CI, 81%-90%), respectively. For the 17 studies that used a 3.0T MR unit, the pooled sensitivity and specificity were 88% (95% CI, 83%-91%) and 84% (95% CI, 78%-89%), respectively. Publication bias and significant heterogeneity were observed; however, no threshold effect was found among the 61 studies. No significant difference was found in the sensitivity or specificity between the subgroups. The results of the comparison between the subgroups that had used either a 1.5T or 3.0T MR unit suggest that the diagnostic accuracy for breast cancer compared with benign lesions is not significantly different. Copyright © 2017 Elsevier Inc. All rights reserved.
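Pooling sensitivity or specificity across studies, as above, is often done on the logit scale. A simplified fixed-effect sketch (published DWI meta-analyses typically use bivariate random-effects models; the per-study counts below are hypothetical):

```python
import math

def pooled_proportion(events, totals):
    """Inverse-variance pooled proportion on the logit scale with a 95% CI.
    A 0.5 continuity correction guards against zero cells."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        e_adj, n_adj = e + 0.5, n + 1.0
        p = e_adj / n_adj
        logits.append(math.log(p / (1 - p)))
        weights.append(1 / (1 / e_adj + 1 / (n_adj - e_adj)))
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5
    inv = lambda x: 1 / (1 + math.exp(-x))
    return inv(pooled), inv(pooled - 1.96 * se), inv(pooled + 1.96 * se)

# Hypothetical true positives / diseased counts from three studies
sens, lo, hi = pooled_proportion([45, 90, 28], [50, 100, 30])
```

Larger studies get proportionally more weight, and back-transforming the pooled logit and its interval yields figures of the "90% (95% CI, 88%-92%)" form reported in the abstract.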
Muhammad, Noor Azimah; Shamsuddin, Khadijah; Omar, Khairani; Shah, Shamsul Azhar; Mohd Amin, Rahmah
2014-01-01
Parenting behaviour is culturally sensitive. The aims of this study were (1) to translate the Parental Bonding Instrument into Malay (PBI-M) and (2) to determine its factorial structure and validity among the Malaysian population. The PBI-M was generated through a standard translation process and comprehension testing. For validation, the PBI-M was administered to 248 college students aged 18 to 22 years. Participants in the comprehension testing had difficulty understanding negative items. Five translated double-negative items were replaced with five positive items with similar meanings. Exploratory factor analysis showed a three-factor model for the PBI-M with acceptable reliability. Four negative items (items 3, 4, 8, and 16) and item 19 were omitted from the final PBI-M list because of incorrect placement or low factor loading (< 0.32). Of the final 20 items of the PBI-M, 10 items loaded on the care factor, five on the autonomy factor and five on the overprotection factor. All the items loaded positively on their respective factors. The Malaysian population favoured positive items in answering questions. The PBI-M confirmed the three-factor model consisting of care, autonomy and overprotection. The PBI-M is a valid and reliable instrument for assessing Malaysian parenting style. Confirmatory factor analysis may further support this finding. Keywords: Malaysia, parenting, questionnaire, validity.
Wang, Xijun; Wang, Huiyu; Zhang, Aihua; Lu, Xin; Sun, Hui; Dong, Hui; Wang, Ping
2012-02-03
The mother and lateral roots of Aconitum carmichaelii Debx, named "Chuanwu" (CW) and "Fuzi", respectively, have been used to relieve joint pain and treat rheumatic diseases for over 2000 years. However, the herb has a very narrow therapeutic range, and the toxicological risk of its usage remains very high. The traditional Chinese processing approach, Paozhi (a detoxifying measure), can decompose poisonous Aconitum alkaloids into less toxic or nontoxic derivatives and plays an important role in detoxification. The difference in metabolomic characters among the crude and processed preparations is still unclear, limited by the lack of sensitive and reliable biomarkers. Therefore, this paper was designed to investigate comprehensive metabolomic characters of the crude and its processed products by UPLC-Q-TOF-HDMS combined with pattern recognition methods and ingenuity pathway analysis (IPA). Significant differences in metabolic profiles and changes of metabolite biomarkers of interest between the crude and processed preparations were well observed. The underlying regulations of Paozhi-perturbed metabolic pathways are discussed according to the identified metabolites, and four metabolic pathways are identified using IPA. The present study demonstrates that metabolomic analysis could greatly facilitate and provide useful information to further comprehensively understand the pharmacological activity and potential toxicity of processed Aconite roots in the clinic.
High resolution Physio-chemical Tissue Analysis: Towards Non-invasive In Vivo Biopsy
NASA Astrophysics Data System (ADS)
Xu, Guan; Meng, Zhuo-Xian; Lin, Jian-Die; Deng, Cheri X.; Carson, Paul L.; Fowlkes, J. Brian; Tao, Chao; Liu, Xiaojun; Wang, Xueding
2016-02-01
Conventional gold standard histopathologic diagnosis requires information of both high resolution structural and chemical changes in tissue. Providing optical information at ultrasonic resolution, photoacoustic (PA) technique could provide highly sensitive and highly accurate tissue characterization noninvasively in the authentic in vivo environment, offering a replacement for histopathology. A two-dimensional (2D) physio-chemical spectrogram (PCS) combining micrometer to centimeter morphology and chemical composition simultaneously can be generated for each biological sample with PA measurements at multiple optical wavelengths. This spectrogram presents a unique 2D “physio-chemical signature” for any specific type of tissue. Comprehensive analysis of PCS, termed PA physio-chemical analysis (PAPCA), can lead to very rich diagnostic information, including the contents of all relevant molecular and chemical components along with their corresponding histological microfeatures, comparable to those accessible by conventional histology. PAPCA could contribute to the diagnosis of many diseases involving diffusive patterns such as fatty liver.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentry, T.; Schadt, C.; Zhou, J.
Microarray technology has the unparalleled potential to simultaneously determine the dynamics and/or activities of most, if not all, of the microbial populations in complex environments such as soils and sediments. Researchers have developed several types of arrays that characterize the microbial populations in these samples based on their phylogenetic relatedness or functional genomic content. Several recent studies have used these microarrays to investigate ecological issues; however, most have only analyzed a limited number of samples, with relatively few experiments utilizing the full high-throughput potential of microarray analysis. This is due in part to the unique analytical challenges that these samples present with regard to sensitivity, specificity, quantitation, and data analysis. This review discusses specific applications of microarrays to microbial ecology research along with some of the latest studies addressing the difficulties encountered during analysis of complex microbial communities within environmental samples. With continued development, microarray technology may ultimately achieve its potential for comprehensive, high-throughput characterization of microbial populations in near real-time.
Mu, John C.; Tootoonchi Afshar, Pegah; Mohiyuddin, Marghoob; Chen, Xi; Li, Jian; Bani Asadi, Narges; Gerstein, Mark B.; Wong, Wing H.; Lam, Hugo Y. K.
2015-01-01
A high-confidence, comprehensive human variant set is critical in assessing accuracy of sequencing algorithms, which are crucial in precision medicine based on high-throughput sequencing. Although recent works have attempted to provide such a resource, they still do not encompass all major types of variants including structural variants (SVs). Thus, we leveraged the massive high-quality Sanger sequences from the HuRef genome to construct by far the most comprehensive gold set of a single individual, which was cross validated with deep Illumina sequencing, population datasets, and well-established algorithms. It was a necessary effort to completely reanalyze the HuRef genome as its previously published variants were mostly reported five years ago, suffering from compatibility, organization, and accuracy issues that prevent their direct use in benchmarking. Our extensive analysis and validation resulted in a gold set with high specificity and sensitivity. In contrast to the current gold sets of the NA12878 or HS1011 genomes, our gold set is the first that includes small variants, deletion SVs and insertion SVs up to a hundred thousand base-pairs. We demonstrate the utility of our HuRef gold set to benchmark several published SV detection tools. PMID:26412485
Wu, Y.; Liu, S.
2012-01-01
Parameter optimization and uncertainty issues are a great challenge for the application of large environmental models like the Soil and Water Assessment Tool (SWAT), which is a physically-based hydrological model for simulating water and nutrient cycles at the watershed scale. In this study, we present a comprehensive modeling environment for SWAT, including automated calibration, and sensitivity and uncertainty analysis capabilities through integration with the R package Flexible Modeling Environment (FME). To address challenges (e.g., calling the model in R and transferring variables between Fortran and R) in developing such a two-language coupling framework, 1) we converted the Fortran-based SWAT model to an R function (R-SWAT) using the RFortran platform, and alternatively 2) we compiled SWAT as a Dynamic Link Library (DLL). We then wrapped SWAT (via R-SWAT) with FME to perform complex applications including parameter identifiability, inverse modeling, and sensitivity and uncertainty analysis in the R environment. The final R-SWAT-FME framework has the following key functionalities: automatic initialization of R, running Fortran-based SWAT and R commands in parallel, transferring parameters and model output between SWAT and R, and inverse modeling with visualization. To examine this framework and demonstrate how it works, a case study simulating streamflow in the Cedar River Basin in Iowa in the United States was used, and we compared it with the built-in auto-calibration tool of SWAT in parameter optimization. Results indicate that both methods performed well and similarly in searching a set of optimal parameters. Nonetheless, the R-SWAT-FME is more attractive due to its instant visualization, and potential to take advantage of other R packages (e.g., inverse modeling and statistical graphics).
The methods presented in the paper are readily adaptable to other model applications that require capability for automated calibration, and sensitivity and uncertainty analysis.
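The core pattern above, wrapping a simulator as a callable function so a calibration routine can drive it, can be sketched in a few lines (this Python sketch is only an analogy to the R-SWAT-FME coupling; the linear-reservoir "model" and grid search stand in for SWAT and FME's inverse-modeling routines):

```python
def model(params):
    """Stand-in for an external simulator: a toy linear-reservoir
    streamflow response with release coefficient k (hypothetical)."""
    k = params["k"]
    inflow = [5, 10, 20, 15, 8, 4]
    storage, flow = 0.0, []
    for q_in in inflow:
        storage += q_in
        q_out = k * storage
        storage -= q_out
        flow.append(q_out)
    return flow

def sse(simulated, observed):
    """Sum-of-squared-errors objective for calibration."""
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))

observed = model({"k": 0.3})                  # synthetic "observations"
candidates = [i / 100 for i in range(5, 96)]  # grid over k in [0.05, 0.95]
best_k = min(candidates, key=lambda k: sse(model({"k": k}), observed))
```

Once the simulator is a plain function of a parameter dictionary, any optimizer, sensitivity method, or MCMC sampler can call it, which is what wrapping SWAT with FME achieves in R.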
Moss, Robert; Grosse, Thibault; Marchant, Ivanny; Lassau, Nathalie; Gueyffier, François; Thomas, S. Randall
2012-01-01
Mathematical models that integrate multi-scale physiological data can offer insight into physiological and pathophysiological function, and may eventually assist in individualized predictive medicine. We present a methodology for performing systematic analyses of multi-parameter interactions in such complex, multi-scale models. Human physiology models are often based on or inspired by Arthur Guyton's whole-body circulatory regulation model. Despite the significance of this model, it has not been the subject of a systematic and comprehensive sensitivity study. Therefore, we use this model as a case study for our methodology. Our analysis of the Guyton model reveals how the multitude of model parameters combine to affect the model dynamics, and how interesting combinations of parameters may be identified. It also includes a “virtual population” from which “virtual individuals” can be chosen, on the basis of exhibiting conditions similar to those of a real-world patient. This lays the groundwork for using the Guyton model for in silico exploration of pathophysiological states and treatment strategies. The results presented here illustrate several potential uses for the entire dataset of sensitivity results and the “virtual individuals” that we have generated, which are included in the supplementary material. More generally, the presented methodology is applicable to modern, more complex multi-scale physiological models. PMID:22761561
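The "virtual population" idea above, sampling parameter sets around baseline values and selecting individuals whose simulated outputs match a condition, can be sketched as follows (the two parameters and the surrogate output are invented for illustration and are far simpler than Guyton's model):

```python
import random

def make_virtual_population(n, baseline, spread=0.2, seed=1):
    """Sample n parameter sets uniformly within ±spread of baseline values."""
    rng = random.Random(seed)
    return [{k: v * rng.uniform(1 - spread, 1 + spread)
             for k, v in baseline.items()}
            for _ in range(n)]

def mean_arterial_pressure(p):
    """Toy surrogate model output: MAP rises with resistance and volume."""
    return 90 * p["resistance"] * p["blood_volume"]

baseline = {"resistance": 1.0, "blood_volume": 1.0}
population = make_virtual_population(500, baseline)
# Select "virtual hypertensives": individuals whose simulated MAP > 100
hypertensive = [p for p in population if mean_arterial_pressure(p) > 100]
```

Each selected parameter set is a "virtual individual" exhibiting the target condition, which can then be used for in silico exploration of interventions, as the abstract proposes.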
Metabolomic profiling of anionic metabolites by capillary electrophoresis mass spectrometry.
Soga, Tomoyoshi; Igarashi, Kaori; Ito, Chiharu; Mizobuchi, Katsuo; Zimmermann, Hans-Peter; Tomita, Masaru
2009-08-01
We describe a sheath flow capillary electrophoresis time-of-flight mass spectrometry (CE-TOFMS) method in the negative mode using a platinum electrospray ionization (ESI) spray needle, which allows the comprehensive analysis of anionic metabolites. The material of the spray needle had a significant effect on the measurement of anions. A stainless steel spray needle was oxidized and corroded at the anodic electrode due to electrolysis. The precipitation of iron oxides (rust) plugged the capillary outlet, resulting in shortened capillary lifetime. Many anionic metabolites also formed complexes with the iron oxides or with migrating nickel ion, which was also generated by electrolysis and moved toward the cathode (the capillary inlet). The metal-anion complex formation significantly reduced detection sensitivity of the anionic compounds. The use of a platinum ESI needle prevented both oxidation of the metals and needle corrosion. Sensitivity using the platinum needle increased from several- to 63-fold, with the largest improvements for anions exhibiting high metal chelating properties such as carboxylic acids, nucleotides, and coenzyme A compounds. The detection limits for most anions were between 0.03 and 0.87 micromol/L (0.8 and 24 fmol) at a signal-to-noise ratio of 3. This method is quantitative, sensitive, and robust, and its utility was demonstrated by the analysis of the metabolites in the central metabolic pathways extracted from mouse liver.
Barrett, Christian L.; Cho, Byung-Kwan
2011-01-01
Immuno-precipitation of protein–DNA complexes followed by microarray hybridization is a powerful and cost-effective technology for discovering protein–DNA binding events at the genome scale. It is still an unresolved challenge to comprehensively, accurately and sensitively extract binding event information from the produced data. We have developed a novel strategy composed of an information-preserving signal-smoothing procedure, higher order derivative analysis and application of the principle of maximum entropy to address this challenge. Importantly, our method does not require any input parameters to be specified by the user. Using genome-scale binding data of two Escherichia coli global transcription regulators for which a relatively large number of experimentally supported sites are known, we show that ∼90% of known sites were resolved to within four probes, or ∼88 bp. Over half of the sites were resolved to within two probes, or ∼38 bp. Furthermore, we demonstrate that our strategy delivers significant quantitative and qualitative performance gains over available methods. Such accurate and sensitive binding site resolution has important consequences for accurately reconstructing transcriptional regulatory networks, for motif discovery, for furthering our understanding of local and non-local factors in protein–DNA interactions and for extending the usefulness horizon of the ChIP-chip platform. PMID:21051353
Strategies of molecular imprinting-based fluorescence sensors for chemical and biological analysis.
Yang, Qian; Li, Jinhua; Wang, Xiaoyan; Peng, Hailong; Xiong, Hua; Chen, Lingxin
2018-07-30
One pressing concern today is to construct sensors that can withstand various disturbances for the highly selective and sensitive detection of trace analytes in complicated samples. Molecularly imprinted polymers (MIPs) with tailor-made binding sites are preferred as recognition elements in sensors for effective target detection, and fluorescence measurement enables highly sensitive detection and user-friendly operation. Accordingly, molecular imprinting-based fluorescence sensors (MI-FL sensors) have attracted great research interest in many fields such as chemical and biological analysis. Herein, we comprehensively review the recent advances in MI-FL sensor construction and applications, giving insights into sensing principles and signal transduction mechanisms, and focusing on general construction strategies for intrinsically fluorescent or nonfluorescent analytes and on strategies for improving sensing performance, particularly sensitivity. Construction strategies are thoroughly overviewed, including the traditional indirect methods of competitive binding against pre-bound fluorescent indicators, the employment of fluorescent functional monomers, the embedding of fluorescent substances, and novel rational designs such as hierarchical architectures (core-shell/hollow and mesoporous structures), post-imprinting modification, and ratiometric fluorescence detection. Furthermore, MI-FL sensor-based microdevices are discussed, involving micromotors, test strips, and microfluidics, which are more portable for rapid point-of-care detection and in-field diagnosis. Finally, the current challenges and future perspectives of MI-FL sensors are proposed. Copyright © 2018 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aalseth, Craig E.; Day, Anthony R.; Haas, Derek A.
On-Site Inspection (OSI) is a key component of the verification regime for the Comprehensive Nuclear-Test-Ban Treaty (CTBT). Measurements of radionuclide isotopes created by an underground nuclear explosion are a valuable signature of a Treaty violation. Argon-37 is produced from neutron interaction with calcium in soil, 40Ca(n,α)37Ar. For OSI, the 35-day half-life of 37Ar provides both high specific activity and sufficient time for completion of an inspection before decay limits sensitivity. This paper presents a low-background internal-source gas proportional counter with an 37Ar measurement sensitivity level equivalent to 45.1 mBq/SCM in whole air.
Wright, Robin; Parrish, Mark L; Cadera, Emily; Larson, Lynnelle; Matson, Clinton K; Garrett-Engele, Philip; Armour, Chris; Lum, Pek Yee; Shoemaker, Daniel D
2003-07-30
Increased levels of HMG-CoA reductase induce cell type- and isozyme-specific proliferation of the endoplasmic reticulum. In yeast, the ER proliferations induced by Hmg1p consist of nuclear-associated stacks of smooth ER membranes known as karmellae. To identify genes required for karmellae assembly, we compared the composition of populations of homozygous diploid S. cerevisiae deletion mutants following 20 generations of growth with and without karmellae. Using an initial population of 1,557 deletion mutants, 120 potential mutants were identified as a result of three independent experiments. Each experiment produced a largely non-overlapping set of potential mutants, suggesting that differences in specific growth conditions could be used to maximize the comprehensiveness of similar parallel analysis screens. Only two genes, UBC7 and YAL011W, were identified in all three experiments. Subsequent analysis of individual mutant strains confirmed that each experiment was identifying valid mutations, based on the mutant's sensitivity to elevated HMG-CoA reductase and inability to assemble normal karmellae. The largest class of HMG-CoA reductase-sensitive mutations was a subset of genes that are involved in chromatin structure and transcriptional regulation, suggesting that karmellae assembly requires changes in transcription or that the presence of karmellae may interfere with normal transcriptional regulation. Copyright 2003 John Wiley & Sons, Ltd.
Increased incidence of head and neck cancer in liver transplant recipients: a meta-analysis.
Liu, Qian; Yan, Lifeng; Xu, Cheng; Gu, Aihua; Zhao, Peng; Jiang, Zhao-Yan
2014-10-22
It is unclear whether liver transplantation is associated with an increased incidence of post-transplant head and neck cancer. This comprehensive meta-analysis evaluated the association between liver transplantation and the risk of head and neck cancer using data from all available studies. PubMed and Web of Science were systematically searched to identify all relevant publications up to March 2014. Standardized incidence ratios (SIRs) and 95% confidence intervals (CIs) for the risk of head and neck cancer in liver transplant recipients were calculated. Tests for heterogeneity, sensitivity, and publication bias were also performed. Of the 964 identified articles, 10 were deemed eligible. These studies included data on 56,507 patients with a total follow-up of 129,448.9 patient-years. The incidence of head and neck cancer was 3.836-fold higher in liver transplant recipients than in the general population (SIR = 3.836, 95% CI 2.754-4.918, P < 0.001). No heterogeneity or publication bias was observed. Sensitivity analysis indicated that omission of any single study resulted in an SIR for head and neck cancer between 3.488 (95% CI: 2.379-4.598) and 4.306 (95% CI: 3.020-5.592). Liver transplant recipients are at higher risk of developing head and neck cancer than the general population.
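The leave-one-out sensitivity analysis described here can be sketched generically. The inverse-variance fixed-effect pooling and toy effect sizes below are illustrative assumptions, not the study's data:

```python
def pooled(effects, variances):
    """Inverse-variance weighted (fixed-effect) pooled estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

def leave_one_out(effects, variances):
    """Re-pool with each study omitted in turn, as in a sensitivity analysis."""
    return [pooled(effects[:i] + effects[i + 1:], variances[:i] + variances[i + 1:])
            for i in range(len(effects))]

# Toy per-study effect sizes (e.g., log SIRs) with equal variances:
loo = leave_one_out([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])  # [2.5, 2.0, 1.5]
```

The spread of the leave-one-out estimates shows how much any single study drives the pooled result, which is what the reported 3.488-4.306 SIR range summarizes.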
Yang, Zemao; Dai, Zhigang; Lu, Ruike; Wu, Bibo; Tang, Qing; Xu, Ying; Cheng, Chaohua; Su, Jianguang
2017-11-29
Drought stress results in significant crop yield losses. Comparative transcriptome analysis between tolerant and sensitive species can provide insights into drought tolerance mechanisms in jute. We present a comprehensive study on drought tolerance in two jute species: a drought-tolerant species (Corchorus olitorius L., GF) and a drought-sensitive species (Corchorus capsularis L., YY). In total, 45,831 non-redundant unigenes with an average sequence length of 1421 bp were identified. A higher number of differentially expressed genes (DEGs) was discovered in YY (794) than in GF (39), implying that YY was relatively more vulnerable or hyper-responsive to drought stress at the molecular level. Two main pathways significantly involved in the scavenging of reactive oxygen species (ROS), phenylpropanoid biosynthesis and the peroxisome pathway, were identified, and 14 unigenes in these two pathways showed significant differential expression in response to increased superoxide. Our classification analysis grouped 1769 transcription factors (TFs) into 81 families and 948 protein kinases (PKs) into 122 families. In YY, we identified 34 TF DEGs and 23 PK DEGs, including 19 receptor-like kinases (RLKs). Most of these RLKs were downregulated during drought stress, implying a role as negative regulators of the drought tolerance mechanism in jute.
ERIC Educational Resources Information Center
Bae, Jiyoung
2012-01-01
This study explored L2 literacy ability and intercultural sensitivity of Korean late elementary to early middle school students learning English as a foreign language. This study investigated the latent variable structure of L2 literacy abilities, including fluency, vocabulary, reading comprehension, and writing abilities, and intercultural…
Requirements for Next Generation Comprehensive Analysis of Rotorcraft
NASA Technical Reports Server (NTRS)
Johnson, Wayne; Datta, Anubhav
2008-01-01
The unique demands of rotorcraft aeromechanics analysis have led to the development of software tools that are described as comprehensive analyses. The next generation of rotorcraft comprehensive analyses will be driven and enabled by the tremendous capabilities of high performance computing, particularly modular and scalable software executed on multiple cores. Development of a comprehensive analysis based on high performance computing both demands and permits a new analysis architecture. This paper describes a vision of the requirements for this next generation of comprehensive analyses of rotorcraft. The requirements for what must be included are described and substantiated, and justification is provided for what should be excluded. With this guide, a path to the next generation code can be found.
Accuracy of magnetic resonance venography in diagnosing cerebral venous sinus thrombosis.
Gao, Liansheng; Xu, Weilin; Li, Tao; Yu, Xiaobo; Cao, Shenglong; Xu, Hangzhe; Yan, Feng; Chen, Gao
2018-05-17
The non-specific clinical manifestations and lack of effective diagnostic techniques have made cerebral venous sinus thrombosis (CVST) difficult to recognize and easy to misdiagnose. Several studies have suggested that different types of magnetic resonance venography (MRV) have advantages in diagnosing CVST. We conducted this meta-analysis to assess the accuracy of MRV in identifying CVST. We searched the Embase, PubMed, and Chinese Biomedical (CBM) databases comprehensively to retrieve eligible articles up to Mar 31, 2018. The methodological quality of each article was evaluated individually. The summary diagnostic accuracy of MRV for CVST was obtained from pooled analysis with random-effects models. Sensitivity analysis, subgroup analysis, and meta-regression were used to explore the sources of heterogeneity. A trim and fill analysis was conducted to correct the funnel plot asymmetry. The meta-analysis synthesized 12 articles containing 27 cohorts with a total of 1933 cases. The pooled sensitivity and specificity were 0.86 (95% CI: 0.83, 0.89) and 0.94 (95% CI: 0.93, 0.95), respectively. The pooled diagnostic odds ratio (DOR) was 75.24 (95% CI: 38.33, 147.72). The area under the curve (AUC) was 0.9472 (95% CI: 0.9229, 0.9715). Subgroup analysis and meta-regression analysis revealed the technical types of MRV and the methods of counting cases contributing to the heterogeneity. The trim and fill method confirmed that publication bias has little effect on our results. MRV has excellent diagnostic performance and is accurate in confirming CVST. Copyright © 2018 Elsevier Ltd. All rights reserved.
A Comprehensive Evaluation of a Two-Channel Portable Monitor to “Rule in” Obstructive Sleep Apnea
Ward, Kim L.; McArdle, Nigel; James, Alan; Bremner, Alexandra P.; Simpson, Laila; Cooper, Matthew N.; Palmer, Lyle J.; Fedson, Annette C.; Mukherjee, Sutapa; Hillman, David R.
2015-01-01
Study Objectives: We hypothesized that a dual-channel portable monitor (PM) device could accurately identify patients who have a high pretest probability of obstructive sleep apnea (OSA), and we evaluated factors that may contribute to variability between PM and polysomnography (PSG) results. Methods: Consecutive clinic patients (N = 104) with possible OSA completed a home PM study, a PM study simultaneous with laboratory PSG, and a second home PM study. Uniform data analysis methods were applied to both PM and PSG data. Primary outcomes of interest were the positive likelihood ratio (LR+) and sensitivity of the PM device to “rule-in” OSA, defined as an apnea-hypopnea index (AHI) ≥ 5 events/h on PSG. Effects of different test environment and study nights, and order of study and analysis methods (manual compared to automated) on PM diagnostic accuracy were assessed. Results: The PM has adequate LR+ (4.8), sensitivity (80%), and specificity (83%) for detecting OSA in the unattended home setting when benchmarked against laboratory PSG, with better LR+ (> 5) and specificity (100%) and unchanged sensitivity (80%) in the simultaneous laboratory comparison. There were no significant night-night (all p > 0.10) or study order effects (home or laboratory first, p = 0.08) on AHI measures. Manual PM data review improved case finding accuracy, although this was not statistically significant (all p > 0.07). Misclassification was more frequent where OSA was mild. Conclusions: Overall performance of the PM device is consistent with current recommended criteria for an “acceptable” device to confidently “rule-in” OSA (AHI ≥ 5 events/h) in a high pretest probability clinic population. Our data support the utility of simple two-channel diagnostic devices to confirm the diagnosis of OSA in the home environment. Commentary: A commentary on this article appears in this issue on page 411. 
Citation: Ward KL, McArdle N, James A, Bremner AP, Simpson L, Cooper MN, Palmer LJ, Fedson AC, Mukherjee S, Hillman DR. A comprehensive evaluation of a two-channel portable monitor to “rule in” obstructive sleep apnea. J Clin Sleep Med 2015;11(4):433–444. PMID:25580606
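The positive likelihood ratio reported above is related to sensitivity and specificity by LR+ = sensitivity / (1 - specificity). A minimal sketch using the rounded figures from the abstract (the published 4.8 was presumably computed from unrounded counts):

```python
def lr_positive(sensitivity, specificity):
    """LR+: how many times more likely a positive test is in disease
    than in non-disease; values above ~5 are often taken to 'rule in'."""
    return sensitivity / (1.0 - specificity)

# Home-setting figures from the abstract (rounded): sensitivity 80%, specificity 83%.
lr = lr_positive(0.80, 0.83)  # about 4.7 with these rounded inputs
```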
Hurtado-Fernández, Elena; Pacchiarotta, Tiziana; Gómez-Romero, María; Schoenmaker, Bart; Derks, Rico; Deelder, André M; Mayboroda, Oleg A; Carrasco-Pancorbo, Alegría; Fernández-Gutiérrez, Alberto
2011-10-21
We have developed an analytical method using UHPLC-UV/ESI-TOF MS for the comprehensive profiling of the metabolites found in the methanolic extracts of 13 different varieties of avocado at two different ripening degrees. Both chromatographic and detection parameters were optimized in order to maximize the number of compounds detected and the sensitivity. After achieving the optimum conditions, we performed a complete analytical validation of the method with respect to its linearity, sensitivity, precision, accuracy and possible matrix effects. The LODs ranged from 1.64 to 730.54 ppb (in negative polarity) for benzoic acid and chrysin, respectively, whilst they were found within the range from 0.51 to 310.23 ppb in positive polarity. The RSDs for repeatability test did not exceed 7.01% and the accuracy ranged from 97.2% to 102.0%. Our method was then applied to the analysis of real avocado samples and advanced data processing and multivariate statistical analysis (PCA, PLS-DA) were carried out to discriminate/classify the examined avocado varieties. About 200 compounds belonging to various structural classes were tentatively identified; we are certain about the identity of around 60 compounds, 20 of which have been quantified in terms of their own commercially available standard. Copyright © 2011 Elsevier B.V. All rights reserved.
Lu, Xing; Jin, Xin; Yang, Suwei; Xia, Yanfei
2018-03-01
To comprehensively evaluate the associations between the depth of anesthesia and postoperative delirium (POD) or postoperative cognitive dysfunction (POCD), we conducted a meta-analysis. The quality of the included studies was assessed using the Cochrane evaluation system. We searched the Cochrane Library, Embase, and PubMed databases without language restriction for publications up to August 2017. Following the PRISMA guideline, results associated with POCD and POD were compared separately between low and high bispectral index (BIS) groups under a fixed-effects or random-effects model. Risk ratios (RRs) and 95% confidence intervals (95% CIs) were used as the effect sizes for pooling the results. Sensitivity analysis was performed to evaluate the stability of the results, and publication bias in the included studies was assessed using Egger's test. In total, 4 high-quality studies were selected for this meta-analysis. The pooled results for POCD showed no significant difference between the low and high BIS groups (RR (95% CI) = 0.84 (0.21, 3.45), P > 0.05). Sensitivity analysis showed that the pooled results for POCD were not stable (RR (95% CI) = 0.41 (0.17, 0.99)-1.88 (1.09, 3.22), P = 0.046). Additionally, no significant publication bias for POCD was found (P = 0.385). There was no significant correlation between the depth of anesthesia and POCD. Copyright © 2017 Elsevier Inc. All rights reserved.
Spencer, Mercedes; Wagner, Richard K
2018-06-01
The purpose of this meta-analysis was to examine the comprehension problems of children who have a specific reading comprehension deficit (SCD), which is characterized by poor reading comprehension despite adequate decoding. The meta-analysis included 86 studies of children with SCD who were assessed in reading comprehension and oral language (vocabulary, listening comprehension, storytelling ability, and semantic and syntactic knowledge). Results indicated that children with SCD had deficits in oral language (d = -0.78, 95% CI [-0.89, -0.68]), but these deficits were not as severe as their deficit in reading comprehension (d = -2.78, 95% CI [-3.01, -2.54]). When compared to reading comprehension age-matched normal readers, the oral language skills of the two groups were comparable (d = 0.32, 95% CI [-0.49, 1.14]), which suggests that the oral language weaknesses of children with SCD represent a developmental delay rather than developmental deviance. Theoretical and practical implications of these findings are discussed.
A high-efficiency HPGe coincidence system for environmental analysis.
Britton, R; Davies, A V; Burnett, J L; Jackson, M J
2015-08-01
The Comprehensive Nuclear-Test-Ban Treaty (CTBT) is supported by a network of certified laboratories which must meet certain sensitivity requirements for CTBT-relevant radionuclides. At the UK CTBT Radionuclide Laboratory (GBL15), a high-efficiency, dual-detector gamma spectroscopy system has been developed to improve the sensitivity of measurements for treaty compliance, greatly reducing the time required for each sample. Utilising list-mode acquisition, each sample can be counted once and processed multiple times to further improve sensitivity. For the 8 key radionuclides considered, Minimum Detectable Activities (MDAs) were improved by up to 37% in standard mode (when compared to a typical CTBT detector system), with the acquisition time required to achieve the CTBT sensitivity requirements reduced from 6 days to only 3. When utilising the system in coincidence mode, the MDA for 60Co in a high-activity source was improved by a factor of 34 when compared to a standard CTBT detector, and a factor of 17 when compared to the dual-detector system operating in standard mode. These MDA improvements will allow the accurate and timely quantification of radionuclides that decay via both singular and cascade γ emission, greatly enhancing the effectiveness of CTBT laboratories. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
Yu, Wen; Taylor, J Alex; Davis, Michael T; Bonilla, Leo E; Lee, Kimberly A; Auger, Paul L; Farnsworth, Chris C; Welcher, Andrew A; Patterson, Scott D
2010-03-01
Despite recent advances in qualitative proteomics, the automatic identification of peptides with optimal sensitivity and accuracy remains a difficult goal. To address this deficiency, a novel algorithm, Multiple Search Engines, Normalization and Consensus, is described. The method employs six search engines and a re-scoring engine to search MS/MS spectra against protein and decoy sequences. After the peptide hits from each engine are normalized to error rates estimated from the decoy hits, peptide assignments are then deduced using a minimum consensus model. These assignments are produced in a series of progressively relaxed false-discovery rates, thus enabling a comprehensive interpretation of the data set. Additionally, the estimated false-discovery rate was found to have good concordance with the observed false-positive rate calculated from known identities. Benchmarking against standard protein data sets (ISBv1, sPRG2006) and their published analysis demonstrated that the Multiple Search Engines, Normalization and Consensus algorithm consistently achieved significantly higher sensitivity in peptide identifications, which led to increased or more robust protein identifications in all data sets compared with prior methods. The sensitivity and the false-positive rate of peptide identification exhibit an inverse-proportional and linear relationship with the number of participating search engines.
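The decoy-based error-rate estimate that the consensus model builds on can be sketched in its generic target-decoy form (an illustration only, not the authors' exact normalization; the scores below are hypothetical):

```python
def fdr_at_threshold(psms, threshold):
    """Estimate the false-discovery rate among accepted hits as
    (#decoy hits >= threshold) / (#target hits >= threshold)."""
    targets = sum(1 for score, is_decoy in psms if score >= threshold and not is_decoy)
    decoys = sum(1 for score, is_decoy in psms if score >= threshold and is_decoy)
    return decoys / targets if targets else 0.0

# Hypothetical peptide-spectrum matches as (score, is_decoy) pairs:
psms = [(9.1, False), (8.7, False), (8.2, True), (7.9, False), (7.5, True)]
fdr = fdr_at_threshold(psms, 8.0)  # 1 decoy / 2 targets = 0.5
```

Sweeping the threshold produces the series of progressively relaxed false-discovery rates the abstract describes.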
Grande, Antonio Jose; Reid, Hamish; Thomas, Emma; Foster, Charlie; Darton, Thomas C
2016-08-01
Dengue fever is a ubiquitous arboviral infection in tropical and sub-tropical regions, whose incidence has increased over recent decades. In the absence of a rapid point of care test, the clinical diagnosis of dengue is complex. The World Health Organisation has outlined diagnostic criteria for making the diagnosis of dengue infection, which includes the use of the tourniquet test (TT). To assess the quality of the evidence supporting the use of the TT and perform a diagnostic accuracy meta-analysis comparing the TT to antibody response measured by ELISA. A comprehensive literature search was conducted in the following databases to April, 2016: MEDLINE (PubMed), EMBASE, Cochrane Central Register of Controlled Trials, BIOSIS, Web of Science, SCOPUS. Studies comparing the diagnostic accuracy of the tourniquet test with ELISA for the diagnosis of dengue were included. Two independent authors extracted data using a standardized form. A total of 16 studies with 28,739 participants were included in the meta-analysis. Pooled sensitivity for dengue diagnosis by TT was 58% (95% Confidence Interval (CI), 43%-71%) and the specificity was 71% (95% CI, 60%-80%). In the subgroup analysis sensitivity for non-severe dengue diagnosis was 55% (95% CI, 52%-59%) and the specificity was 63% (95% CI, 60%-66%), whilst sensitivity for dengue hemorrhagic fever diagnosis was 62% (95% CI, 53%-71%) and the specificity was 60% (95% CI, 48%-70%). Receiver-operator characteristics demonstrated a test accuracy (AUC) of 0.70 (95% CI, 0.66-0.74). The tourniquet test is widely used in resource poor settings despite currently available evidence demonstrating only a marginal benefit in making a diagnosis of dengue infection alone. The protocol for this systematic review was registered at CRD42015020323.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hayeon, E-mail: kimh2@upmc.edu; Gill, Beant; Beriwal, Sushil
Purpose: To conduct a cost-effectiveness analysis to determine whether stereotactic body radiation therapy (SBRT) is a cost-effective therapy compared with radiofrequency ablation (RFA) for patients with unresectable colorectal cancer (CRC) liver metastases. Methods and Materials: A cost-effectiveness analysis was conducted using a Markov model and 1-month cycle over a lifetime horizon. Transition probabilities, quality of life utilities, and costs associated with SBRT and RFA were captured in the model on the basis of a comprehensive literature review and Medicare reimbursements in 2014. Strategies were compared using the incremental cost-effectiveness ratio, with effectiveness measured in quality-adjusted life years (QALYs). To account for model uncertainty, 1-way and probabilistic sensitivity analyses were performed. Strategies were evaluated with a willingness-to-pay threshold of $100,000 per QALY gained. Results: In base case analysis, treatment costs for 3 fractions of SBRT and 1 RFA procedure were $13,000 and $4397, respectively. Median survival was assumed the same for both strategies (25 months). The SBRT costs $8202 more than RFA while gaining 0.05 QALYs, resulting in an incremental cost-effectiveness ratio of $164,660 per QALY gained. In 1-way sensitivity analyses, results were most sensitive to variation of median survival from both treatments. Stereotactic body radiation therapy was economically reasonable if better survival was presumed (>1 month gain) or if used for large tumors (>4 cm). Conclusions: If equal survival is assumed, SBRT is not cost-effective compared with RFA for inoperable colorectal liver metastases. However, if better local control leads to small survival gains with SBRT, this strategy becomes cost-effective.
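The incremental cost-effectiveness ratio follows directly from the reported incremental cost and QALY gain. A minimal sketch (the small gap to the published $164,660/QALY presumably reflects rounding of the 0.05 QALY figure):

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

# Reported deltas: SBRT costs $8202 more than RFA and gains 0.05 QALYs.
ratio = icer(8202.0, 0.05)  # about $164,040 per QALY with these rounded inputs
exceeds_wtp = ratio > 100_000  # above the $100,000/QALY willingness-to-pay threshold
```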
Mommen, Geert P M; Meiring, Hugo D; Heck, Albert J R; de Jong, Ad P J M
2013-07-16
In proteomics, comprehensive analysis of peptide mixtures necessitates multiple dimensions of separation prior to mass spectrometry analysis to reduce sample complexity and increase the dynamic range of analysis. The main goal of this work was to improve the performance of (online) multidimensional protein identification technology (MudPIT) in terms of sensitivity, compatibility, and recovery. The method employs weak anion and strong cation mixed-bed ion exchange chromatography (ACE) in the first separation dimension and reversed phase chromatography (RP) in the second separation dimension (Motoyama et al., Anal. Chem. 2007, 79, 3623-34). We demonstrated that the chromatographic behavior of peptides in ACE chromatography depends on both the WAX/SCX mixing ratio and the ionic strength of the mobile phase system. This property allowed us to replace the conventional salt gradient with a (discontinuous) salt-free pH gradient. First-dimensional separation of peptides was accomplished with mixtures of aqueous formic acid and dimethylsulfoxide at increasing concentrations. The overall performance of this mobile phase system was found comparable to ammonium acetate buffers in application to ACE chromatography, but clearly outperformed strong cation exchange for use in first-dimensional peptide separation. The dramatically improved compatibility between (salt-free) ion exchange chromatography and reversed phase chromatography-mass spectrometry allowed us to downscale the dimensions of the RP analytical column to 25 μm i.d. for an additional 2- to 3-fold improvement in performance compared to current technology. The achieved levels of sensitivity, orthogonality, and compatibility demonstrate the potential of salt-free ACE MudPIT for the ultrasensitive, multidimensional analysis of very modest amounts of sample material.
2013-01-01
Background Understanding the process of amino acid fermentation as a comprehensive system is a challenging task. Previously, we developed a literature-based dynamic simulation model, which included transcriptional regulation, transcription, translation, and enzymatic reactions related to glycolysis, the pentose phosphate pathway, the tricarboxylic acid (TCA) cycle, and the anaplerotic pathway of Escherichia coli. During simulation, cell growth was defined so as to reproduce the experimental cell growth profile of fed-batch cultivation in jar fermenters. However, to confirm the biological appropriateness of our model, sensitivity analysis and experimental validation were required. Results We constructed an l-glutamic acid fermentation simulation model by removing sucAB, a gene encoding α-ketoglutarate dehydrogenase. We then performed systematic sensitivity analysis for l-glutamic acid production; the results of this process corresponded with previous experimental data regarding l-glutamic acid fermentation. Furthermore, it allowed us to predict the possibility that accumulation of 3-phosphoglycerate in the cell would regulate the carbon flux into the TCA cycle and lead to an increase in the yield of l-glutamic acid via fermentation. We validated this hypothesis through a fermentation experiment involving a model l-glutamic acid-production strain, E. coli MG1655 ΔsucA, in which the phosphoglycerate kinase gene had been amplified to cause accumulation of 3-phosphoglycerate. The observed increase in l-glutamic acid production verified the biologically meaningful predictive power of our dynamic metabolic simulation model. Conclusions In this study, dynamic simulation using a literature-based model was shown to be useful for elucidating the precise mechanisms involved in fermentation processes inside the cell. Further exhaustive sensitivity analysis will facilitate identification of novel factors involved in the metabolic regulation of amino acid fermentation.
PMID:24053676
Nishio, Yousuke; Ogishima, Soichi; Ichikawa, Masao; Yamada, Yohei; Usuda, Yoshihiro; Masuda, Tadashi; Tanaka, Hiroshi
2013-09-22
Understanding the process of amino acid fermentation as a comprehensive system is a challenging task. Previously, we developed a literature-based dynamic simulation model, which included transcriptional regulation, transcription, translation, and enzymatic reactions related to glycolysis, the pentose phosphate pathway, the tricarboxylic acid (TCA) cycle, and the anaplerotic pathway of Escherichia coli. During simulation, cell growth was defined so as to reproduce the experimental cell growth profile of fed-batch cultivation in jar fermenters. However, to confirm the biological appropriateness of our model, sensitivity analysis and experimental validation were required. We constructed an L-glutamic acid fermentation simulation model by removing sucAB, a gene encoding α-ketoglutarate dehydrogenase. We then performed systematic sensitivity analysis for L-glutamic acid production; the results of this process corresponded with previous experimental data regarding L-glutamic acid fermentation. Furthermore, it allowed us to predict the possibility that accumulation of 3-phosphoglycerate in the cell would regulate the carbon flux into the TCA cycle and lead to an increase in the yield of L-glutamic acid via fermentation. We validated this hypothesis through a fermentation experiment involving a model L-glutamic acid-production strain, E. coli MG1655 ΔsucA, in which the phosphoglycerate kinase gene had been amplified to cause accumulation of 3-phosphoglycerate. The observed increase in L-glutamic acid production verified the biologically meaningful predictive power of our dynamic metabolic simulation model. In this study, dynamic simulation using a literature-based model was shown to be useful for elucidating the precise mechanisms involved in fermentation processes inside the cell. Further exhaustive sensitivity analysis will facilitate identification of novel factors involved in the metabolic regulation of amino acid fermentation.
Career Decision-Making and the Military Family: Toward a Comprehensive Model
1990-03-01
specify and test the model. Appendices A through E contain the papers on the five major sub-models. II. IN SEARCH OF A COMPREHENSIVE MODEL Employee ... employees (e.g., Mobley et al., 1979). In other words, the civilian has considerable opportunity to incorporate experience content into career... belongingness) to growth needs (esteem, self-actualization). The emphasis is upon display of sensitivity, responsiveness and accommodation by management
Salo, Päivi M.; Arbes, Samuel J.; Jaramillo, Renee; Calatroni, Agustin; Weir, Charles H.; Sever, Michelle L.; Hoppin, Jane A.; Rose, Kathryn M.; Liu, Andrew H.; Gergen, Peter J.; Mitchell, Herman E.; Zeldin, Darryl C.
2014-01-01
Background Allergic sensitization is an important risk factor for the development of atopic disease. The National Health and Nutrition Examination Survey (NHANES) 2005–2006 provides the most comprehensive information on IgE-mediated sensitization in the general US population. Objective We investigated clustering, sociodemographic and regional patterns of allergic sensitization and examined risk factors associated with IgE-mediated sensitization. Methods Data for this cross-sectional analysis were obtained from NHANES 2005–2006. Participants aged ≥1 year (N=9440) were tested for sIgEs to inhalant and food allergens; participants ≥6 years were tested for 19 sIgEs, and children aged 1–5 years for 9 sIgEs. Serum samples were analyzed using the ImmunoCAP System. Information on demographics and participant characteristics was collected by questionnaire. Results Of the study population aged 6 and older, 44.6% had detectable sIgEs, while 36.2% of children aged 1–5 years were sensitized to ≥1 allergen. Allergen-specific IgEs clustered into 7 groups that might have largely reflected biological cross-reactivity. Although sensitization to individual allergens and allergen types showed regional variation, the overall prevalence of sensitization did not differ across census regions, except in early childhood. In multivariate modeling, young age, male gender, non-Hispanic black race/ethnicity, geographic location (census region), and reported pet avoidance measures were most consistently associated with IgE-mediated sensitization. Conclusions The overall prevalence of allergic sensitization does not vary across US census regions, except in early life, although allergen-specific sensitization differs by sociodemographic and regional factors. Biological cross-reactivity may be an important, but not a sole, contributor to the clustering of allergen-specific IgEs. 
Clinical implications IgE-mediated sensitization shows clustering patterns and differs by sociodemographic and regional factors, but the overall prevalence of sensitization may not vary across US census regions. PMID:24522093
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-05-01
Physical parameterizations in General Circulation Models (GCMs) contain various uncertain parameters that greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to comprehensive objective evaluation metrics. Unlike traditional optimization methods, two extra steps, one that determines parameter sensitivity and one that chooses the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method improves the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
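The three-step idea can be sketched on a toy objective. Below, step 1 screens sensitivity by one-at-a-time perturbation, step 2 grid-searches starting values for the most sensitive parameter, and step 3 refines it; a simple 1-D coordinate search stands in here for the downhill simplex used in the paper. The parameter names (`entrain`, `autoconv`) and the quadratic score are invented for illustration.

```python
# Illustrative "three-step" tuning on a toy skill-score function.

def score(params):
    """Toy objective: lower is better (e.g., model-minus-observation error)."""
    a, b = params["entrain"], params["autoconv"]
    return (a - 0.7) ** 2 + 0.01 * (b - 2.0) ** 2

def screen_sensitivity(base, delta=0.1):
    """Step 1: absolute score change per one-at-a-time perturbation."""
    s0 = score(base)
    return {k: abs(score({**base, k: base[k] * (1 + delta)}) - s0) for k in base}

def coarse_init(base, key, candidates):
    """Step 2: pick the best starting value for the sensitive parameter."""
    best = min(candidates, key=lambda v: score({**base, key: v}))
    return {**base, key: best}

def refine(params, key, step=0.05, iters=50):
    """Step 3 stand-in: coordinate descent on the sensitive parameter."""
    cur = dict(params)
    for _ in range(iters):
        for cand in (cur[key] - step, cur[key] + step):
            if score({**cur, key: cand}) < score(cur):
                cur[key] = cand
    return cur

base = {"entrain": 0.3, "autoconv": 1.0}
sens = screen_sensitivity(base)
key = max(sens, key=sens.get)          # most sensitive parameter
start = coarse_init(base, key, [0.2, 0.5, 0.8])
tuned = refine(start, key)
print(key, round(tuned[key], 2))
```

The point of the two extra steps is visible even here: only the sensitive parameter is searched, so the expensive refinement stage touches a much smaller space.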
Tulla, Kiara A; Maker, Ajay V
2018-03-01
Predicting the biologic behavior of intraductal papillary mucinous neoplasm (IPMN) remains challenging. Current guidelines utilize patient symptoms and imaging characteristics to determine appropriate surgical candidates. However, the majority of resected cysts prove to be low-risk lesions, many of which could feasibly have been kept under surveillance. We herein characterize the most promising and up-to-date molecular diagnostics in order to identify optimal components of a molecular signature to distinguish levels of IPMN dysplasia. A comprehensive systematic review of pertinent literature, including our own experience, was conducted based on the PRISMA guidelines. Molecular diagnostics in IPMN patient tissue, duodenal secretions, cyst fluid, saliva, and serum were evaluated and organized into the following categories: oncogenes, tumor suppressor genes, glycoproteins, markers of the immune response, proteomics, DNA/RNA mutations, and next-generation sequencing/microRNA. Specific targets in each of these categories, and in aggregate, were identified by their ability both to characterize a cyst as an IPMN and to determine the level of cyst dysplasia. Combining molecular signatures with clinical and imaging features in this era of next-generation sequencing and advanced computational analysis will enhance the sensitivity and specificity of current models to predict the biologic behavior of IPMN.
Zhou, Xue-mei; Zhao, Peng; Wang, Wei; Zou, Jie; Cheng, Tian-he; Peng, Xiong-bo; Sun, Meng-xiang
2015-01-01
Autophagy is an evolutionarily conserved mechanism in both animals and plants and has been shown to be involved in various essential developmental processes in plants. Nicotiana tabacum is considered an ideal model plant and has been widely used to study the roles of autophagy in plant development and in the response to various stresses. However, only a few autophagy-related genes (ATGs) have been identified in tobacco to date. Here, we identified 30 ATGs belonging to 16 different groups in tobacco through a genome-wide survey. Comprehensive expression profile analysis reveals a broad expression pattern of these ATGs, which could be detected in all tissues tested under normal growth conditions. Our series of tests further reveals that the majority of ATGs are sensitive and responsive to different stresses, including nutrient starvation, plant hormones, heavy metals, and other abiotic stresses, suggesting a central role of autophagy, likely as an effector, in the plant response to various environmental cues. This work offers a detailed survey of all ATGs in tobacco and also suggests manifold functions of autophagy in both normal plant growth and the plant response to environmental stresses. PMID:26205094
Zou, Wei; She, Jianwen; Tolstikov, Vladimir V.
2013-01-01
Currently available biomarkers lack sensitivity and/or specificity for early detection of cancer. To address this challenge, a robust and complete workflow for metabolic profiling and data mining is described in detail. Three independent and complementary analytical techniques for metabolic profiling are applied: hydrophilic interaction liquid chromatography (HILIC–LC), reversed-phase liquid chromatography (RP–LC), and gas chromatography (GC). All three techniques are coupled to a mass spectrometer (MS) in the full scan acquisition mode, and both unsupervised and supervised methods are used for data mining. Univariate and multivariate feature selection are used to determine subsets of potentially discriminative predictors. These predictors are further identified by obtaining accurate masses and isotopic ratios using selected ion monitoring (SIM) and data-dependent MS/MS and/or accurate mass MSn ion tree scans utilizing high resolution MS. A list combining all of the identified potential biomarkers generated from different platforms and algorithms is used for pathway analysis. Such a workflow, combining comprehensive metabolic profiling and advanced data mining techniques, may provide a powerful approach for metabolic pathway analysis and biomarker discovery in cancer research. Two case studies based on previously published data are included to illustrate the application of the workflow. PMID:24958150
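The univariate feature-selection step in such a workflow can be sketched as ranking metabolite features by a two-sample t statistic between case and control profiles. The intensity table and feature names below are invented for illustration; a real pipeline would operate on peak tables from the HILIC-LC, RP-LC, and GC-MS runs.

```python
# Hedged sketch: rank toy metabolite features by Welch t statistic.

import math

def t_stat(xs, ys):
    """Welch t statistic for two independent samples."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Toy intensity table: feature -> (case intensities, control intensities)
features = {
    "lactate":   ([9.1, 8.8, 9.4, 9.0], [7.0, 7.2, 6.9, 7.1]),
    "glutamine": ([5.0, 5.2, 4.9, 5.1], [5.1, 5.0, 5.2, 4.9]),
    "choline":   ([6.5, 6.9, 6.4, 6.8], [5.9, 6.0, 5.8, 6.1]),
}

ranked = sorted(features, key=lambda f: abs(t_stat(*features[f])), reverse=True)
print(ranked)
```

Features surviving this filter would then go to the identification stage (accurate mass, SIM, MS/MS) before any pathway analysis.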
Examining the Effects of Classroom Discussion on Students' Comprehension of Text: A Meta-Analysis
ERIC Educational Resources Information Center
Murphy, P. Karen; Wilkinson, Ian A. G.; Soter, Anna O.; Hennessey, Maeghan N.; Alexander, John F.
2009-01-01
The role of classroom discussions in comprehension and learning has been the focus of investigations since the early 1960s. Despite this long history, no syntheses have quantitatively reviewed the vast body of literature on classroom discussions for their effects on students' comprehension and learning. This comprehensive meta-analysis of…
ERIC Educational Resources Information Center
Lan, Yi-Chin; Lo, Yu-Ling; Hsu, Ying-Shao
2014-01-01
Comprehension is the essence of reading. Finding appropriate and effective reading strategies to support students' reading comprehension has always been a critical issue for educators. This article presents findings from a meta-analysis of 17 studies of metacognitive strategy instruction on students' reading comprehension in computerized…
Osago, Harumi; Shibata, Tomoko; Hara, Nobumasa; Kuwata, Suguru; Kono, Michihaya; Uchio, Yuji; Tsuchiya, Mikako
2014-12-15
We developed a method using liquid chromatography-electrospray ionization-tandem mass spectrometry (LC-ESI-MS/MS) with a selected reaction monitoring (SRM) mode for simultaneous quantitative analysis of glycosaminoglycans (GAGs). Using one-shot analysis with our MS/MS method, we demonstrated the simultaneous quantification of a total of 23 variously sulfated disaccharides of four GAG classes (8 chondroitin/dermatan sulfates, 1 hyaluronic acid, 12 heparan sulfates, and 2 keratan sulfates) with a sensitivity of less than 0.5 pmol within 20 min. We showed the differences in the composition of GAG classes and the sulfation patterns between porcine articular cartilage and yellow ligament. In addition to the internal disaccharides described above, some saccharides derived from the nonreducing terminal were detected simultaneously. The simultaneous quantification of both internal and nonreducing terminal saccharides could be useful to estimate the chain length of GAGs. This method would help to establish comprehensive "GAGomic" analysis of biological tissues. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Effati, Meysam; Thill, Jean-Claude; Shabani, Shahin
2015-04-01
The contention of this paper is that many social science research problems are too "wicked" to be suitably studied using conventional statistical and regression-based methods of data analysis. This paper argues that an integrated geospatial approach based on methods of machine learning is well suited to this purpose. Recognizing the intrinsic wickedness of traffic safety issues, such an approach is used to unravel the complexity of traffic crash severity on highway corridors as an example of such problems. The support vector machine (SVM) and coactive neuro-fuzzy inference system (CANFIS) algorithms are tested as inferential engines to predict crash severity and uncover spatial and non-spatial factors that systematically relate to crash severity, while a sensitivity analysis is conducted to determine the relative influence of crash severity factors. Different specifications of the two methods are implemented, trained, and evaluated against crash events recorded over a 4-year period on a regional highway corridor in Northern Iran. Overall, the SVM model outperforms CANFIS by a notable margin. The combined use of spatial analysis and artificial intelligence is effective at identifying leading factors of crash severity, while explicitly accounting for spatial dependence and spatial heterogeneity effects. Thanks to the demonstrated effectiveness of a sensitivity analysis, this approach produces comprehensive results that are consistent with existing traffic safety theories and supports the prioritization of effective safety measures that are geographically targeted and behaviorally sound on regional highway corridors.
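The SVM component can be illustrated with a minimal, self-contained sketch: a linear SVM trained by sub-gradient descent on the hinge loss, labeling toy crash records as severe (+1) or non-severe (-1). The two features, the data values, and all hyperparameters are invented assumptions; the paper's model is far richer, with spatial covariates and tuned specifications.

```python
# Minimal linear SVM via sub-gradient descent on hinge loss (toy data).

import random

def train_linear_svm(data, lam=0.01, epochs=200, lr=0.05, seed=0):
    """data: list of (features, label) with label in {-1, +1}."""
    rng = random.Random(seed)
    dim = len(data[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:                      # inside margin: push boundary
                w = [wi + lr * (y * xi - lam * wi) for wi, xi in zip(w, x)]
                b += lr * y
            else:                               # only apply regularization
                w = [wi - lr * lam * wi for wi in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Invented records: (speed_kmh/100, road curvature) -> severe?
data = [([1.2, 0.8], 1), ([1.1, 0.9], 1), ([1.3, 0.7], 1),
        ([0.4, 0.2], -1), ([0.5, 0.1], -1), ([0.3, 0.3], -1)]
w, b = train_linear_svm(list(data))
print([predict(w, b, x) for x, _ in data])
```

A sensitivity analysis like the paper's would then perturb each input feature and measure the change in predicted severity, ranking factors by influence.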
Comparative sequence analysis suggests a conserved gating mechanism for TRP channels
Palovcak, Eugene; Delemotte, Lucie; Klein, Michael L.
2015-01-01
The transient receptor potential (TRP) channel superfamily plays a central role in transducing diverse sensory stimuli in eukaryotes. Although dissimilar in sequence and domain organization, all known TRP channels act as polymodal cellular sensors and form tetrameric assemblies similar to those of their distant relatives, the voltage-gated potassium (Kv) channels. Here, we investigated the related questions of whether the allosteric mechanism underlying polymodal gating is common to all TRP channels, and how this mechanism differs from that underpinning Kv channel voltage sensitivity. To provide insight into these questions, we performed comparative sequence analysis on large, comprehensive ensembles of TRP and Kv channel sequences, contextualizing the patterns of conservation and correlation observed in the TRP channel sequences in light of the well-studied Kv channels. We report sequence features that are specific to TRP channels and, based on insight from recent TRPV1 structures, we suggest a model of TRP channel gating that differs substantially from the one mediating voltage sensitivity in Kv channels. The common mechanism underlying polymodal gating involves the displacement of a defect in the H-bond network of S6 that changes the orientation of the pore-lining residues at the hydrophobic gate. PMID:26078053
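A toy version of the comparative-sequence step above is to score each alignment column by Shannon entropy, where low entropy marks a conserved position. The four-sequence mini "alignment" below is invented; the actual analysis used large ensembles of TRP and Kv channel sequences and also examined correlations between columns.

```python
# Per-column conservation of a toy multiple sequence alignment.

import math
from collections import Counter

def column_entropy(column):
    """Shannon entropy (bits) of residue frequencies in one MSA column."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

alignment = [
    "GWLLD",
    "GWILD",
    "GWVME",
    "GWLFD",
]

entropies = [column_entropy(col) for col in zip(*alignment)]
conserved = [i for i, h in enumerate(entropies) if h == 0.0]
print(conserved)
```

Positions conserved within one family but not the other (TRP vs. Kv) are the kind of sequence features the authors use to argue for a distinct gating mechanism.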
Kien, C Lawrence; Bunn, Janice Y; Poynter, Matthew E; Stevens, Robert; Bain, James; Ikayeva, Olga; Fukagawa, Naomi K; Champagne, Catherine M; Crain, Karen I; Koves, Timothy R; Muoio, Deborah M
2013-04-01
Relative to diets enriched in palmitic acid (PA), diets rich in oleic acid (OA) are associated with reduced risk of type 2 diabetes. To gain insight into mechanisms underlying these observations, we applied comprehensive lipidomic profiling to specimens collected from healthy adults enrolled in a randomized, crossover trial comparing a high-PA diet to a low-PA/high-OA (HOA) diet. Effects on insulin sensitivity (SI) and disposition index (DI) were assessed by intravenous glucose tolerance testing. In women, but not men, SI and DI were higher during HOA. The effect of HOA on SI correlated positively with physical fitness upon enrollment. Principal components analysis of either fasted or fed-state metabolites identified one factor affected by diet and heavily weighted by the PA/OA ratio of serum and muscle lipids. In women, this factor correlated inversely with SI in the fasted and fed states. Medium-chain acylcarnitines emerged as strong negative correlates of SI, and the HOA diet was accompanied by lower serum and muscle ceramide concentrations and reductions in molecular biomarkers of inflammatory and oxidative stress. This study provides evidence that the dietary PA/OA ratio impacts diabetes risk in women.
A comprehensive prediction and evaluation method of pilot workload
Feng, Chuanyan; Wanyan, Xiaoru; Yang, Kun; Zhuang, Damin; Wu, Xu
2018-01-01
BACKGROUND: The prediction and evaluation of pilot workload is a key problem in the human factors airworthiness of the cockpit. OBJECTIVE: A pilot traffic pattern task was designed in a flight simulation environment in order to carry out pilot workload prediction and improve the evaluation method. METHODS: Predictions of typical flight subtasks and dynamic workloads (cruise, approach, and landing) were built up based on multiple resource theory, and favorable validity was achieved through correlation analysis between sensitive physiological data and the predicted values. RESULTS: Statistical analysis indicated that eye movement indices (fixation frequency, mean fixation time, saccade frequency, mean saccade time, and mean pupil diameter), electrocardiogram indices (mean normal-to-normal interval and the ratio between low frequency and the sum of low frequency and high frequency), and electrodermal activity indices (mean tonic and mean phasic) were all sensitive to the typical workloads of subjects. CONCLUSION: A multinomial logistic regression model based on a combination of physiological indices (fixation frequency, mean normal-to-normal interval, the ratio between low frequency and the sum of low frequency and high frequency, and mean tonic) was constructed, and its discrimination accuracy was a relatively high 84.85%. PMID:29710742
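The classifier named in the conclusion can be sketched as softmax (multinomial) logistic regression trained by gradient descent on four-dimensional feature vectors standing in for the physiological indices. All numbers below are invented for illustration; the study's data and fitted coefficients are not reproduced here.

```python
# Hedged sketch: multinomial logistic regression on toy physiological features
# (fixation frequency, mean NN interval, LF/(LF+HF), mean tonic EDA).

import math

def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def train(xs, ys, n_classes, lr=0.1, epochs=500):
    dim = len(xs[0])
    w = [[0.0] * dim for _ in range(n_classes)]
    b = [0.0] * n_classes
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = softmax([sum(wk[j] * x[j] for j in range(dim)) + b[k]
                         for k, wk in enumerate(w)])
            for k in range(n_classes):
                err = p[k] - (1.0 if k == y else 0.0)  # gradient of cross-entropy
                for j in range(dim):
                    w[k][j] -= lr * err * x[j]
                b[k] -= lr * err
    return w, b

def predict(w, b, x):
    scores = [sum(wk[j] * x[j] for j in range(len(x))) + b[k]
              for k, wk in enumerate(w)]
    return scores.index(max(scores))

# classes: 0 = cruise, 1 = approach, 2 = landing (increasing workload)
xs = [[2.1, 0.9, 0.4, 1.0], [2.0, 0.95, 0.38, 1.1],   # cruise
      [3.0, 0.8, 0.55, 2.0], [3.1, 0.82, 0.52, 2.1],  # approach
      [4.2, 0.7, 0.7, 3.2], [4.0, 0.72, 0.68, 3.0]]   # landing
ys = [0, 0, 1, 1, 2, 2]
w, b = train(xs, ys, 3)
print([predict(w, b, x) for x in xs])
```

Reported accuracies like the 84.85% above come from held-out or cross-validated predictions, not training fit as in this toy.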
Harlaar, Nicole; Kovas, Yulia; Dale, Philip S.; Petrill, Stephen A.; Plomin, Robert
2013-01-01
Although evidence suggests that individual differences in reading and mathematics skills are correlated, this relationship has typically only been studied in relation to word decoding or global measures of reading. It is unclear whether mathematics is differentially related to word decoding and reading comprehension. The current study examined these relationships at both a phenotypic and etiological level in a population-based cohort of 5162 twin pairs at age 12. Multivariate genetic analyses of latent phenotypic factors of mathematics, word decoding and reading comprehension revealed substantial genetic and shared environmental correlations among all three domains. However, the phenotypic and genetic correlations between mathematics and reading comprehension were significantly greater than between mathematics and word decoding. Independent of mathematics, there was also evidence for genetic and nonshared environmental links between word decoding and reading comprehension. These findings indicate that word decoding and reading comprehension have partly distinct relationships with mathematics in the middle school years. PMID:24319294
On the comprehensibility and perceived privacy protection of indirect questioning techniques.
Hoffmann, Adrian; Waubert de Puiseau, Berenike; Schmidt, Alexander F; Musch, Jochen
2017-08-01
On surveys that assess sensitive personal attributes, indirect questioning aims at increasing respondents' willingness to answer truthfully by protecting confidentiality. However, the assumption that subjects understand questioning procedures fully and trust them to protect their privacy is rarely tested. In a scenario-based design, we compared four indirect questioning procedures in terms of their comprehensibility and perceived privacy protection. All indirect questioning techniques were found to be less comprehensible by respondents than a conventional direct question used for comparison. Less-educated respondents experienced more difficulties when confronted with any indirect questioning technique. Regardless of education, the crosswise model was found to be the most comprehensible among the four indirect methods. Indirect questioning in general was perceived to increase privacy protection in comparison to a direct question. Unexpectedly, comprehension and perceived privacy protection did not correlate. We recommend assessing these factors separately in future evaluations of indirect questioning.
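The crosswise model examined above has a simple estimator worth noting. Under the crosswise design (Yu, Tian & Tang, 2008), respondents report only whether their answer to the sensitive item matches a non-sensitive item with known prevalence p; the sensitive prevalence is then recovered as pi_hat = (lam + p - 1) / (2p - 1), where lam is the observed match rate. The counts below are a simulated illustration, not survey data from this study.

```python
# Crosswise-model prevalence estimator (numerical sketch).

def crosswise_estimate(n_match, n_total, p):
    """Estimate sensitive-attribute prevalence from crosswise responses."""
    if p == 0.5:
        raise ValueError("p = 0.5 leaves the model unidentified")
    lam = n_match / n_total          # observed proportion of "same" answers
    return (lam + p - 1) / (2 * p - 1)

# Simulated check: true prevalence 0.30, non-sensitive prevalence p = 0.80.
# Expected match rate: pi*p + (1-pi)*(1-p) = 0.30*0.80 + 0.70*0.20 = 0.38
est = crosswise_estimate(n_match=380, n_total=1000, p=0.80)
print(round(est, 2))
```

Because no respondent ever states their sensitive status directly, comprehensibility of this indirection, the focus of the study above, is what determines whether the privacy protection is actually perceived.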
Premberg, Åsa; Taft, Charles; Hellström, Anna-Lena; Berg, Marie
2012-05-17
A father's experience of the birth of his first child is important not only for his birth-giving partner but also for the father himself, his relationship with the mother, and the newborn. No validated questionnaire assessing first-time fathers' experiences during childbirth is currently available. Hence, the aim of this study was to develop and validate an instrument to assess first-time fathers' experiences of childbirth. Domains and items were initially derived from interviews with first-time fathers and supplemented by a literature search and a focus group interview with midwives. The comprehensibility, comprehension, and relevance of the items were evaluated by four paternity research experts, and a preliminary questionnaire was pilot tested in eight first-time fathers. A revised questionnaire was completed by 200 first-time fathers (response rate = 81%). Exploratory factor analysis using principal component analysis with varimax rotation was performed, and multitrait scaling analysis was used to test scaling assumptions. External validity was assessed by means of known-groups analysis. Factor analysis yielded four factors comprising 22 items and accounting for 48% of the variance. The domains found were Worry, Information, Emotional support, and Acceptance. Multitrait analysis confirmed the convergent and discriminant validity of the domains; however, Cronbach's alpha did not meet conventional reliability standards in two domains. The questionnaire was sensitive to differences between groups of fathers hypothesized to differ on important sociodemographic or clinical variables. The questionnaire adequately measures important dimensions of first-time fathers' childbirth experience and may be used to assess aspects of fathers' experiences during childbirth. To obtain the FTFQ and permission for its use, please contact the corresponding author.
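The reliability check mentioned above, Cronbach's alpha, has a simple closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of the scale total). The three-item response matrix below is invented to show the computation and is not taken from the questionnaire.

```python
# Cronbach's alpha for a toy 3-item scale with 6 respondents.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of k response lists, one per item, same respondents."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]   # per-respondent scale total
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

items = [
    [3, 4, 2, 5, 4, 3],   # item 1
    [3, 5, 2, 4, 4, 3],   # item 2
    [2, 4, 3, 5, 3, 3],   # item 3
]
print(round(cronbach_alpha(items), 2))
```

A conventional cutoff of about 0.70 is what the two problematic domains in the study failed to meet.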
DOE Office of Scientific and Technical Information (OSTI.GOV)
David C. White
2005-09-14
NABIR funding at the University of Tennessee Center for Biomarker Analysis (CBA) has led to several key contributions to the investigation of bioremediation of metals and radionuclides. This lab has played an integral part in assessing microbial communities at the field scale at the ORNL FRC (Istok et al., 2004) and two UMTRA sites (Anderson et al., 2003; Chang et al., 2001). Our work over the period of the grant has resulted in 42 peer-reviewed publications, 62 presentations (14 of which were international), and one patent pending. Currently CBA has 2 papers in press. The main objectives relating to the field portion of this program were to provide comprehensive biomarker analysis for NABIR collaborators to enhance the understanding of microbial geo-bioprocesses involved in the effective immobilization of metals (we have worked with, and published or are currently publishing with, 10 groups of NABIR investigators). The laboratory portion of our research centered on methods development and has led to three major innovations that could result in a systematic way of evaluating sites for potential bioremediation. The first of these is the development of an in situ sampling device (Peacock et al., 2004; Anderson et al., 2003; Istok et al., 2004) for the collection and concentration of microbial biomass. The second is the development of expanded lipid analysis based on the significantly greater sensitivity and selectivity of LC/MS/MS, which allows the analysis of respiratory quinones, diglycerides, sterols, intact phospholipids, polyhydroxyalkanoates, and potentially archaeols and caldarchaeols from archaea. These new analyses are accomplished more rapidly and with increased sensitivity and resolution than in the past (Lytle et al., 2000a, 2000b, 2001a; Geyer et al., 2004). The third advance is the coupling of lipid analysis with 13C enrichment experiments (Lytle et al., 2001b; Geyer et al., 2005).
With this technique it is now possible to follow the active portion of the in situ microbial community with a resolution heretofore unattainable. These three advances in technology have been initially demonstrated at the NABIR Field Research Center (FRC) in Oak Ridge, TN and at the UMTRA Old Rifle site in Colorado. Microbial communities are of primary importance in the use of bioimmobilization strategies for metals and radionuclides from contaminated groundwater and sediments. These communities represent a potentially transformable agent that is able to affect virtually all biogeochemical pathways. Microorganisms can alter metal chemistry and mobility through reduction, accumulation, and immobilization and have been shown to be responsible for mineral formation and dissolution. Research is directed to provide collaborating NABIR investigators a rapid, comprehensive, and cost-effective suite of biomarker measurements to quantify microbial community structure, activity, and effectiveness, thereby providing defensible evidence that a desired bioprocess is occurring or may occur at a given site.
ERIC Educational Resources Information Center
Stewart, Andrew J.; Haigh, Matthew; Ferguson, Heather J.
2013-01-01
Statements of the form if… then… can be used to communicate conditional speech acts such as tips and promises. Conditional promises require the speaker to have perceived control over the outcome event, whereas conditional tips do not. In an eye-tracking study, we examined whether readers are sensitive to information about perceived speaker control…
ERIC Educational Resources Information Center
Pavias, Marcella; van den Broek, Paul; Hickendorff, Marian; Beker, Katinka; Van Leijenhorst, Linda
2016-01-01
This study examined the contributions of developmental changes in social-cognitive ability throughout adolescence to the development of narrative comprehension. We measured the effects of sensitivity to the causal structure of narratives and of sensitivity to differences in social-cognitive processing demands on narrative recall by children (8-10…
ERIC Educational Resources Information Center
Shohet, Cilly; Jaegermann, Nurit
2012-01-01
The Mediational Intervention for Sensitizing Caregivers (MISC) model is a comprehensive developmental approach to help adults understand their role in child development by enhancing the quality of adult-child interactions. This article describes how the Irving B. Harris Program for Infants, Toddlers and Their Families at Bar-Ilan University…
Alrajab, Saadah; Youssef, Asser M; Akkus, Nuri I; Caldito, Gloria
2013-09-23
Ultrasonography is being increasingly utilized in acute care settings with expanding applications. Pneumothorax evaluation by ultrasonography is a fast, safe, easy and inexpensive alternative to chest radiographs. In this review, we provide a comprehensive analysis of the current literature comparing ultrasonography and chest radiography for the diagnosis of pneumothorax. We searched English-language articles in MEDLINE, EMBASE and Cochrane Library dealing with both ultrasonography and chest radiography for diagnosis of pneumothorax. In eligible studies that met strict inclusion criteria, we conducted a meta-analysis to evaluate the diagnostic accuracy of pleural ultrasonography in comparison with chest radiography for the diagnosis of pneumothorax. We reviewed 601 articles and selected 25 original research articles for detailed review. Only 13 articles met all of our inclusion criteria and were included in the final analysis. One study used lung sliding sign alone, 12 studies used lung sliding and comet tail signs, and 6 studies searched for lung point in addition to the other two signs. Ultrasonography had a pooled sensitivity of 78.6% (95% CI, 68.1 to 98.1) and a specificity of 98.4% (95% CI, 97.3 to 99.5). Chest radiography had a pooled sensitivity of 39.8% (95% CI, 29.4 to 50.3) and a specificity of 99.3% (95% CI, 98.4 to 100). Our meta-regression and subgroup analyses indicate that consecutive sampling of patients compared to convenience sampling provided higher sensitivity results for both ultrasonography and chest radiography. Consecutive versus nonconsecutive sampling and trauma versus nontrauma settings were significant sources of heterogeneity. In addition, subgroup analysis showed significant variations related to operator and type of probe used. Our study indicates that ultrasonography is more accurate than chest radiography for detection of pneumothorax. 
The results support the previous investigations in this field, add new valuable information obtained from subgroup analysis, and provide accurate estimates for the performance parameters of both bedside ultrasonography and chest radiography for pneumothorax evaluation.
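The pooled accuracy estimates reported above can be illustrated with a minimal sketch. The per-study counts below are hypothetical, and the aggregate-and-normal-approximation approach is a deliberate simplification of the hierarchical models typically used in diagnostic meta-analysis:

```python
import math

def pooled_rate(successes, totals):
    """Pool per-study counts and return the rate with a 95% normal-approximation CI."""
    s, n = sum(successes), sum(totals)
    p = s / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - 1.96 * se, p + 1.96 * se

# Hypothetical per-study counts: true positives among pneumothorax-positive cases
tp = [18, 25, 40]
pos = [22, 30, 52]
sens, lo, hi = pooled_rate(tp, pos)
print(f"pooled sensitivity = {sens:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

The same function applied to true negatives over condition-negative cases yields the pooled specificity.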
Proteomics: a new approach to the study of disease.
Chambers, G; Lawrie, L; Cash, P; Murray, G I
2000-11-01
The global analysis of cellular proteins has recently been termed proteomics and is a key area of research that is developing in the post-genome era. Proteomics uses a combination of sophisticated techniques including two-dimensional (2D) gel electrophoresis, image analysis, mass spectrometry, amino acid sequencing, and bio-informatics to resolve comprehensively, to quantify, and to characterize proteins. The application of proteomics provides major opportunities to elucidate disease mechanisms and to identify new diagnostic markers and therapeutic targets. This review aims to explain briefly the background to proteomics and then to outline proteomic techniques. Applications to the study of human disease conditions ranging from cancer to infectious diseases are reviewed. Finally, possible future advances are briefly considered, especially those which may lead to faster sample throughput and increased sensitivity for the detection of individual proteins. Copyright 2000 John Wiley & Sons, Ltd.
Direct analysis of [6,6-(2)H2]glucose and [U-(13)C6]glucose dry blood spot enrichments by LC-MS/MS.
Coelho, Margarida; Mendes, Vera M; Lima, Inês S; Martins, Fátima O; Fernandes, Ana B; Macedo, M Paula; Jones, John G; Manadas, Bruno
2016-06-01
A liquid chromatography tandem mass spectrometry (LC-MS/MS) using multiple reaction monitoring (MRM) in a triple-quadrupole scan mode was developed and comprehensively validated for the determination of [6,6-(2)H2]glucose and [U-(13)C6]glucose enrichments from dried blood spots (DBS) without prior derivatization. The method is demonstrated with dried blood spots obtained from rats administered with a primed-constant infusion of [U-(13)C6]glucose and an oral glucose load enriched with [6,6-(2)H2]glucose. The sensitivity is sufficient for analysis of the equivalent to <5μL of blood and the overall method was accurate and precise for the determination of DBS isotopic enrichments. Copyright © 2016 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Spencer, Mercedes; Wagner, Richard K.
2017-01-01
We conducted a meta-analysis of 16 existing studies to examine the nature of the comprehension problems for children who were second-language learners with poor reading comprehension despite adequate decoding. Results indicated that these children had deficits in oral language (d = -0.80), but these deficits were not as severe as their reading…
CH-47D Rotating System Fault Sensing for Condition Based Maintenance
2011-03-01
replacement. This research seeks to create an analytical model in the Rotorcraft Comprehensive Analysis System which will enable the identification of…
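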
Microarray R-based analysis of complex lysate experiments with MIRACLE
List, Markus; Block, Ines; Pedersen, Marlene Lemvig; Christiansen, Helle; Schmidt, Steffen; Thomassen, Mads; Tan, Qihua; Baumbach, Jan; Mollenhauer, Jan
2014-01-01
Motivation: Reverse-phase protein arrays (RPPAs) allow sensitive quantification of relative protein abundance in thousands of samples in parallel. Typical challenges involved in this technology are antibody selection, sample preparation and optimization of staining conditions. The issue of combining effective sample management and data analysis, however, has been widely neglected. Results: This motivated us to develop MIRACLE, a comprehensive and user-friendly web application bridging the gap between spotting and array analysis by conveniently keeping track of sample information. Data processing includes correction of staining bias, estimation of protein concentration from response curves, normalization for total protein amount per sample and statistical evaluation. Established analysis methods have been integrated with MIRACLE, offering experimental scientists an end-to-end solution for sample management and for carrying out data analysis. In addition, experienced users have the possibility to export data to R for more complex analyses. MIRACLE thus has the potential to further spread utilization of RPPAs as an emerging technology for high-throughput protein analysis. Availability: Project URL: http://www.nanocan.org/miracle/ Contact: mlist@health.sdu.dk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25161257
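The per-sample normalization step described above (scaling each antibody signal by a total-protein measure) can be sketched as follows; the sample names and readings are hypothetical, and the calculation is a simplified stand-in for MIRACLE's actual processing pipeline:

```python
# Hypothetical RPPA readings: one antibody signal per sample, plus a
# total-protein stain signal used for per-sample normalization.
signals = {"s1": 1200.0, "s2": 800.0, "s3": 1500.0}   # antibody of interest
total_protein = {"s1": 2.0, "s2": 1.0, "s3": 2.5}     # total-protein stain

# Normalize: signal per unit total protein, so samples with more loaded
# protein are not spuriously scored as having higher target abundance.
normalized = {k: signals[k] / total_protein[k] for k in signals}
print(normalized)
```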
Armstrong, Susan M; Wither, Joan E; Borowoy, Alan M; Landolt-Marticorena, Carolina; Davis, Aileen M; Johnson, Sindhu R
2017-01-01
Case ascertainment through self-report is a convenient but often inaccurate method to collect information. The purposes of this study were to develop, assess the sensibility, and validate a tool to identify cases of systemic autoimmune rheumatic diseases (SARD) in the outpatient setting. The SARD tool was administered to subjects sampled from specialty clinics. Determinants of sensibility - comprehensibility, feasibility, validity, and acceptability - were evaluated using a numeric rating scale from 1-7. Comprehensibility was evaluated using the Flesch Reading Ease and the Flesch-Kincaid Grade Level. Self-reported diagnoses were validated against medical records using Cohen's κ statistic. There were 141 participants [systemic lupus erythematosus (SLE), systemic sclerosis (SSc), rheumatoid arthritis, Sjögren syndrome (SS), inflammatory myositis (polymyositis/dermatomyositis; PM/DM), and controls] who completed the questionnaire. The Flesch Reading Ease score was 77.1 and the Flesch-Kincaid Grade Level was 4.4. Respondents endorsed (mean ± SD) comprehensibility (6.12 ± 0.92), feasibility (5.94 ± 0.81), validity (5.35 ± 1.10), and acceptability (3.10 ± 2.03). The SARD tool had a sensitivity of 0.91 (95% CI 0.88-0.94) and a specificity of 0.99 (95% CI 0.96-1.00). The agreement between the SARD tool and medical record was κ = 0.82 (95% CI 0.77-0.88). Subgroup analysis by SARD found κ coefficients for SLE to be κ = 0.88 (95% CI 0.79-0.97), SSc κ = 1.0 (95% CI 1.0-1.0), PM/DM κ = 0.72 (95% CI 0.49-0.95), and SS κ = 0.85 (95% CI 0.71-0.99). The screening questions had sensitivity ranging from 0.96 to 1.0 and specificity ranging from 0.88 to 1.0. This SARD case ascertainment tool has demonstrable sensibility and validity. The use of both screening and confirmatory questions confers added accuracy.
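The validation of self-report against medical records above uses Cohen's κ, which can be computed from two raters' binary labels as below; the labels are hypothetical and the function is a generic implementation, not the study's analysis code:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' binary labels (1 = case, 0 = non-case)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n     # observed agreement
    p1a, p1b = sum(a) / n, sum(b) / n              # marginal 'case' rates
    pe = p1a * p1b + (1 - p1a) * (1 - p1b)         # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical labels: self-report vs. medical record for 10 subjects
self_report = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
medical_rec = [1, 1, 0, 0, 0, 1, 0, 0, 1, 0]
kappa = cohens_kappa(self_report, medical_rec)
print(f"kappa = {kappa:.2f}")
```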
Characterization of LHY-821, a novel moderately differentiated endometrial carcinoma cell line.
Hu, Qian; Yu, Li; Chen, Rui; Zhang, Yan; Xie, Ya; Liao, Qinping
2012-08-01
Endometrial cancer is a major problem for women, but only a small number of comprehensively characterized cell models are available for study. Here, we established a new cell line derived from a Stage IIIc(1) Grade 2 endometrial adenocarcinoma. The cell line, designated LHY-821, was characterized using growth curves, karyotyping, immunohistochemical staining, immunoblotting, drug sensitivity assays, invasion assays, and xenografting in nude mice. LHY-821 has a doubling time of about 46 h and a colony-forming efficiency of approximately 71%. These cells express high levels of progesterone receptor but not estrogen receptor and are sensitive to medroxyprogesterone acetate (MPA). LHY-821 also expresses pan-cytokeratin, PTEN, p53, β-catenin, IGF-1, and IGF-2. In addition, karyotype analysis revealed that LHY-821 possessed a near-diploid karyotype including 6q-, 10p-, Xq-, 13q+, 17p+, and Triplo-12. LHY-821 showed high tumorigenicity in nude mice (100%) and weak invasiveness. Chemosensitivity tests showed that LHY-821 was sensitive to both carboplatin and paclitaxel. LHY-821 is an immortalized cell line that has survived more than 80 serial passages; it may provide a novel tool to study the molecular mechanisms and potential treatments for endometrial cancer.
NASA Astrophysics Data System (ADS)
Sharma, Dheeraj; Singh, Deepika; Pandey, Sunil; Yadav, Shivendra; Kondekar, P. N.
2017-11-01
In this work, we present a comprehensive comparison of full-gate and short-gate dielectrically modulated (DM) electrically doped tunnel field-effect transistor (SGDM-EDTFET) based biosensors of equivalent dimensions. In both structures, the dielectric constant and the charge density are used as sensing parameters for detecting charged and non-charged biomolecules in the given solution. In the SGDM-EDTFET architecture, the reduced gate length significantly improves the tunneling current owing to strong coupling between the gate and the channel region, which ensures higher drain-current sensitivity for detection of the biomolecules. Moreover, the sensitivity of the dual-metal SGDM-EDTFET is compared with that of the single-metal SGDM-EDTFET to assess the sensing capability of both devices for biosensor applications. Further, the effect on the sensing parameters, i.e., the ON-current (ION) and the ION/IOFF ratio, is analysed for the dual-metal SGDM-EDTFET in comparison with the dual-metal SGDM-EDFET. The comparison shows that the dual-metal SGDM-EDTFET based biosensor attains relatively better sensitivity and can serve as a suitable candidate for biosensing applications.
NASA Astrophysics Data System (ADS)
Cao, Ensi; Yang, Yuqing; Cui, Tingting; Zhang, Yongjia; Hao, Wentao; Sun, Li; Peng, Hua; Deng, Xiao
2017-01-01
LaFeO3-δ nanoparticles were prepared by a citric sol-gel method with different raw-material choices and calcination processes. Choosing polyethylene glycol instead of ethylene glycol as the raw material, together with an additional pre-calcination at 400 °C rather than direct calcination at 600 °C, decreased the resistance by reducing the activation energy Ea. Meanwhile, choosing ethylene glycol as the raw material with the additional pre-calcination enhanced the sensitivity to ethanol. Comprehensive analysis of the sensitivity together with the XRD, SEM, TEM, and XPS results indicates that the sensing performance of LaFeO3-δ is determined mainly by the oxygen species adsorbed on Fe ions, with a certain contribution from native active oxygen. The best sensitivity, 46.1 to 200 ppm ethanol at the optimal working temperature of 112 °C, was obtained by the sample using ethylene glycol as the raw material with additional pre-calcination, which originates from its uniformly sized, well-dispersed particles as well as the high atomic ratio of Fe/La in the surface region.
Larrabee, Glenn J
2014-11-01
Literature on test validity and performance validity is reviewed to propose a framework for specification of an ability-focused battery (AFB). Factor analysis supports six domains of ability: (1) verbal symbolic; (2) visuoperceptual and visuospatial judgment and problem solving; (3) sensorimotor skills; (4) attention/working memory; (5) processing speed; and (6) learning and memory (which can be divided into verbal and visual subdomains). The AFB should include at least three measures for each of the six domains, selected based on various criteria for validity including sensitivity to presence of disorder, sensitivity to severity of disorder, correlation with important activities of daily living, and containing embedded/derived measures of performance validity. Criterion groups should include moderate and severe traumatic brain injury, and Alzheimer's disease. Validation groups should also include patients with left and right hemisphere stroke, to determine measures sensitive to lateralized cognitive impairment and so that the moderating effects of auditory comprehension impairment and neglect can be analyzed on AFB measures. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Techniques of Force and Pressure Measurement in the Small Joints of the Wrist.
Schreck, Michael J; Kelly, Meghan; Canham, Colin D; Elfar, John C
2018-01-01
The alteration of forces across joints can result in instability and subsequent disability. Previous methods of force measurement, such as pressure-sensitive films, load cells, and pressure-sensing transducers, have been used to estimate biomechanical forces across joints; more recent studies have used a nondestructive method that allows assessment of joint forces under ligamentous restraint. A comprehensive review of the literature was performed to explore the numerous biomechanical methods used to estimate intra-articular forces. Methods such as pressure-sensitive films, load cells, and pressure-sensing transducers require significant intra-articular disruption and thus may yield inaccurate measurements, especially in small joints such as those of the wrist and hand. Nondestructive methods of joint force measurement, using either distraction-based joint reaction force methods or finite element analysis, may offer a more accurate assessment; however, given their recent inception, further studies are needed to improve and validate their use.
Holtkamp, Hannah; Grabmann, Gerlinde; Hartinger, Christian G
2016-04-01
Electrophoretic methods have been widely applied in research on the roles of metal complexes in biological systems. In particular, CE, often hyphenated to a sensitive MS detector, has provided valuable information on the modes of action of metal-based pharmaceuticals, and more recently new methods have been added to the electrophoretic toolbox. The range of applications continues to expand as a result of enhanced CE-to-MS interfacing, with sensitivity often at picomolar level, and evolved separation modes allowing for innovative sample analysis. This article is a follow-up to previous reviews about CE methods in metallodrug research (Electrophoresis, 2003, 24, 2023-2037; Electrophoresis, 2007, 28, 3436-3446; Electrophoresis, 2012, 33, 622-634), also providing a comprehensive overview of metal species studied by electrophoretic methods hyphenated to MS. It highlights the latest CE developments, takes a sneak peek into gel electrophoresis, traces biomolecule labeling, and focuses on the importance of early-stage drug development. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Turkec, Aydin; Lucas, Stuart J; Karacanli, Burçin; Baykut, Aykut; Yuksel, Hakki
2016-03-01
Detection of GMO material in crop and food samples is the primary step in GMO monitoring and regulation, with the increasing number of GM events on the world market requiring detection solutions with high multiplexing capacity. In this study, we test the suitability of a high-density oligonucleotide microarray platform for direct, quantitative detection of GMOs found in the Turkish feed market. We tested 1830 different 60-nt probes designed to cover the GM cassettes of 12 different GM cultivars (3 soya, 9 maize), as well as plant species-specific and contamination controls, and developed a data analysis method aiming to provide maximum throughput and sensitivity. The system was able to identify each cultivar specifically, and in 10/12 cases was sensitive enough to detect GMO DNA at concentrations of ⩽1%. These GMOs could also be quantified using the microarray, as their fluorescence signals increased linearly with GMO concentration. Copyright © 2015 Elsevier Ltd. All rights reserved.
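Quantification from a linearly increasing fluorescence signal amounts to fitting a calibration line and inverting it for an unknown sample. A minimal sketch, using hypothetical calibration points rather than the study's data:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical calibration: % GMO in sample vs. probe fluorescence (a.u.),
# made perfectly linear here for illustration.
conc = [0.1, 0.5, 1.0, 2.0, 5.0]
fluo = [12.0, 60.0, 120.0, 240.0, 600.0]
slope, intercept = linear_fit(conc, fluo)

# Invert the calibration line to estimate an unknown sample's GMO content.
unknown_fluo = 180.0
estimated_pct = (unknown_fluo - intercept) / slope
print(f"estimated GMO content: {estimated_pct:.2f}%")
```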
Ahmad, Mansur; Hollender, Lars; Anderson, Quentin; Kartha, Krishnan; Ohrbach, Richard; Truelove, Edmond L; John, Mike T; Schiffman, Eric L
2009-06-01
As part of the Multisite Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project, comprehensive temporomandibular joint diagnostic criteria were developed for image analysis using panoramic radiography, magnetic resonance imaging (MRI), and computerized tomography (CT). Interexaminer reliability was estimated using the kappa (κ) statistic, and agreement between rater pairs was characterized by overall, positive, and negative percent agreement. Computerized tomography was the reference standard for assessing the validity of other imaging modalities for detecting osteoarthritis (OA). For the radiologic diagnosis of OA, reliability of the 3 examiners was poor for panoramic radiography (κ = 0.16), fair for MRI (κ = 0.46), and close to the threshold for excellent for CT (κ = 0.71). Using MRI, reliability was excellent for diagnosing disc displacement (DD) with reduction (κ = 0.78) and DD without reduction (κ = 0.94), and good for effusion (κ = 0.64). Overall percent agreement for pairwise ratings was ≥82% for all conditions. Positive percent agreement for diagnosing OA was 19% for panoramic radiography, 59% for MRI, and 84% for CT. Using MRI, positive percent agreement for diagnoses of any DD was 95% and of effusion was 81%. Negative percent agreement was ≥88% for all conditions. Compared with CT, panoramic radiography and MRI had poor and marginal sensitivity, respectively, but excellent specificity in detecting OA. Comprehensive image analysis criteria for the RDC/TMD Validation Project were developed, which can reliably be used for assessing OA using CT and for disc position and effusion using MRI.
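Overall, positive, and negative percent agreement can be computed from paired ratings as sketched below; the ratings are hypothetical, and the PPA/NPA formulas shown (twice the matched pairs over the summed marginal counts) are one common convention, not necessarily the exact definition used in the project:

```python
def agreement(a, b):
    """Overall, positive, and negative percent agreement for two raters'
    binary ratings (1 = condition present, 0 = absent)."""
    n = len(a)
    both_pos = sum(x == 1 and y == 1 for x, y in zip(a, b))
    both_neg = sum(x == 0 and y == 0 for x, y in zip(a, b))
    overall = 100.0 * (both_pos + both_neg) / n
    ppa = 100.0 * 2 * both_pos / (sum(a) + sum(b))
    npa = 100.0 * 2 * both_neg / ((n - sum(a)) + (n - sum(b)))
    return overall, ppa, npa

# Hypothetical OA ratings (1 = OA present) by two examiners on 8 joints
r1 = [1, 1, 0, 0, 1, 0, 0, 0]
r2 = [1, 0, 0, 0, 1, 0, 0, 1]
overall, ppa, npa = agreement(r1, r2)
print(f"overall={overall:.0f}% PPA={ppa:.1f}% NPA={npa:.0f}%")
```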
Comprehensive Analysis of the Soybean (Glycine max) GmLAX Auxin Transporter Gene Family
Chai, Chenglin; Wang, Yongqin; Valliyodan, Babu; Nguyen, Henry T.
2016-01-01
The phytohormone auxin plays a critical role in the regulation of plant growth and development as well as plant responses to abiotic stresses. This is achieved mainly through its uneven distribution in the plant via a polar auxin transport process. Auxin transporters are major players in polar auxin transport. The AUXIN RESISTANT 1/LIKE AUX1 (AUX/LAX) auxin influx carriers belong to the amino acid permease family of proton-driven transporters and function in the uptake of indole-3-acetic acid (IAA). In this study, genome-wide comprehensive analysis of the soybean AUX/LAX (GmLAX) gene family, including phylogenetic relationships, chromosome localization, and gene structure, was carried out. A total of 15 GmLAX genes, including seven duplicated gene pairs, were identified in the soybean genome. They were distributed on 10 chromosomes. Despite their high percentage identities at the protein level, GmLAXs exhibited versatile tissue-specific expression patterns, indicating coordinated functioning during plant growth and development. Most GmLAXs were responsive to drought and dehydration stresses and to auxin and abscisic acid (ABA) stimuli, in a tissue- and/or time-point-sensitive mode. Several GmLAX members were involved in responding to salt stress. Sequence analysis revealed that promoters of GmLAXs contained different combinations of stress-related cis-regulatory elements. These results suggest that the soybean GmLAXs are under the control of a very complex regulatory network, responding to various internal and external signals. This study helps to identify candidate GmLAXs for further analysis of their roles in soybean development and adaptation to adverse environments. PMID:27014306
Liu, Xin-Guang; Yang, Hua; Cheng, Xiao-Lan; Liu, Lei; Qin, Yong; Wang, Qi; Qi, Lian-Wen; Li, Ping
2014-08-01
Analysis and quality control of Ginkgo biloba have been studied comprehensively. However, little attention has been devoted to the simultaneous extraction and analysis of flavonols and terpene trilactones, especially the direct quantification of flavonol glycosides. This work describes a rapid strategy for one-step extraction and quantification of these components. A matrix solid-phase dispersion (MSPD) method was designed for the extraction of ginkgo ingredients and compared with heat-reflux and ultrasonic extraction methods. An ultra-high performance liquid chromatography-triple-quadrupole tandem mass spectrometry (UHPLC-QQQ-MS) method was developed for detection of the 18 components, including 10 original flavonol glycosides, 3 aglycones, and 5 lactones. Subsequently, the proposed strategy was used for the analysis of 12 G. biloba tablets. Results showed that MSPD produced comparable extraction efficiency but consumed less time and required lower solvent volumes than conventional methods. Without hydrolysis, the detected concentrations were much closer to the original contents in the sample. The total flavonol glycoside contents in ginkgo tablets ranged from 3.59 to 125.21 μg mg(-1), and the terpene trilactone contents varied from 3.45 to 57.8 μg mg(-1) among different manufacturers. In conclusion, the proposed MSPD and UHPLC-QQQ-MS strategy is rapid and sensitive, providing a comprehensive profile of chemical constituents, especially the genuine flavonol glycosides, for improved quality control of ginkgo products. Copyright © 2014 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Colenbrander, Danielle; Nickels, Lyndsey; Kohnen, Saskia
2017-01-01
Background: Identifying reading comprehension difficulties is challenging. There are many comprehension tests to choose from, and a child's diagnosis can be influenced by various factors such as a test's format and content and the choice of diagnostic criteria. We investigate these issues with reference to the Neale Analysis of Reading Ability…
Smrke, Samo; Vovk, Irena
2013-05-10
The coupling of thin-layer chromatography with mass spectrometry (TLC-MS) for the analysis of monomeric flavanols and proanthocyanidins in samples presented as complex matrices has been studied. The elution conditions for TLC-MS were optimised and full scans were compared with selected reaction monitoring for the MS detection of compounds. The performance of silica gel and cellulose plates with different developing solvents in TLC-MS was assessed. Cellulose plates provided superior sensitivity while ionisation suppression was encountered with silica plates. The use of a HILIC guard column beyond the elution head was found to facilitate detection of monomer compounds on silica plates. A new comprehensive TLC×MS procedure for screening flavanols in the entire chromatogram was developed as an alternative to the use of 4-dimethylaminocinnamaldehyde to determine the locations of compounds on the plate. This new procedure was applied to detect flavanols in the peel of Punica granatum L. fruits and in seeds of Juniperus communis L., in which flavanols and proanthocyanidin dimers and trimers were detected for the first time. Copyright © 2013 Elsevier B.V. All rights reserved.
Vendelova, Emilia; Camargo de Lima, Jeferson; Lorenzatto, Karina Rodrigues; Monteiro, Karina Mariante; Mueller, Thomas; Veepaschit, Jyotishman; Grimm, Clemens; Brehm, Klaus; Hrčková, Gabriela; Lutz, Manfred B.; Ferreira, Henrique B.
2016-01-01
Accumulating evidence has assigned a central role to parasite-derived proteins in immunomodulation. Here, we report on the proteomic identification and characterization of immunomodulatory excretory-secretory (ES) products from the metacestode larva (tetrathyridium) of the tapeworm Mesocestoides corti (syn. M. vogae). We demonstrate that ES products, but not larval homogenates, inhibit the stimuli-driven release of the pro-inflammatory, Th1-inducing cytokine IL-12p70 by murine bone marrow-derived dendritic cells (BMDCs). Within the ES fraction, we biochemically narrowed down the immunosuppressive activity to glycoproteins, since the active components were lipid-free but sensitive to heat and carbohydrate treatment. Finally, using bioassay-guided chromatographic analyses assisted by comparative proteomics of active and inactive fractions of the ES products, we defined a comprehensive list of candidate proteins released by M. corti tetrathyridia as potential suppressors of DC functions. Our study provides a comprehensive library of somatic and ES products and highlights some candidate parasite factors that might drive the subversion of DC functions to facilitate the persistence of M. corti tetrathyridia in their hosts. PMID:27736880
Kocak, D; Ozel, M Z; Gogus, F; Hamilton, J F; Lewis, A C
2012-12-15
The grilling of meat may generate dangerous levels of mutagenic and carcinogenic nitrosamines (NAs). Meat and vegetable samples underwent a two-step solid-phase extraction before analysis by comprehensive gas chromatography with a nitrogen chemiluminescence detection system (GCxGC-NCD). The GCxGC-NCD method showed high selectivity, sensitivity and equimolarity in its response to six specific NAs. NA contamination of charcoal-grilled lamb at various stages of cooking and with various fat contents and also charcoal-grilled vegetables were investigated. The grilling of lamb on unready charcoal resulted in the formation of considerable quantities of NAs. Grilling lamb on properly prepared, ready charcoal resulted in an increase in total concentrations of six NAs from 0 to 4.51 μg kg(-1) over a period of 16 min. Increasing the fat content of the grilled lamb from 5% to 20% caused a modest increase in total concentrations of the six investigated NAs from 4.51 to 5.30 μg kg(-1). Copyright © 2012 Elsevier Ltd. All rights reserved.
Neuropsychological correlates of sustained attention in schizophrenia.
Chen, E Y; Lam, L C; Chen, R Y; Nguyen, D G; Chan, C K; Wilkins, A J
1997-04-11
We employed a simple and relatively undemanding task of monotone counting for the assessment of sustained attention in schizophrenic patients. The monotone counting task has been validated neuropsychologically and is particularly sensitive to right prefrontal lesions. We compared the performance of schizophrenic patients with age- and education-matched controls. We then explored the extent to which a range of commonly employed neuropsychological tasks in schizophrenia research are related to attentional impairment as measured in this way. Monotone counting performance was found to be correlated with digit span (WAIS-R-HK), information (WAIS-R-HK), comprehension (WAIS-R-HK), logical memory (immediate recall) (Wechsler Memory Scale, WMS), and visual reproduction (WMS). Multiple regression analysis also identified visual reproduction, digit span and comprehension as significant predictors of attention performance. In contrast, logical memory (delayed recall) (WMS), similarity (WAIS-R-HK), semantic fluency, and Wisconsin Card Sorting Test (perseverative errors) were not correlated with attention. In addition, no significant correlation between sustained attention and symptoms was found. These findings are discussed in the context of a weakly modular cognitive system where attentional impairment may contribute selectively to a range of other cognitive deficits.
Transcriptomic Dose-Response Analysis for Mode of Action ...
Microarray and RNA-seq technologies can play an important role in assessing the health risks associated with environmental exposures. The utility of gene expression data to predict hazard has been well documented. Early toxicogenomics studies used relatively high, single doses with minimal replication; thus, they were not useful for understanding health risks at environmentally relevant doses. Until the past decade, application of toxicogenomics to dose-response assessment and determination of chemical mode of action has been limited. New transcriptomic biomarkers have evolved to detect chemical hazards in multiple tissues, together with pathway methods to study biological effects across the full dose-response range and critical time course. Comprehensive low-dose datasets are now available, and with the use of transcriptomic benchmark dose estimation techniques within a mode-of-action framework, the ability to incorporate informative genomic data into human health risk assessment has substantially improved. The key advantage of applying transcriptomic technology to risk assessment is both the sensitivity and the comprehensive examination of direct and indirect molecular changes that lead to adverse outcomes. This book chapter also addresses future applications of toxicogenomics technologies for mode-of-action determination and risk assessment.
ERIC Educational Resources Information Center
García, J. Ricardo; Cain, Kate
2014-01-01
The twofold purpose of this meta-analysis was to determine the relative importance of decoding skills to reading comprehension in reading development and to identify which reader characteristics and reading assessment characteristics contribute to differences in the decoding and reading comprehension correlation. A meta-analysis of 110 studies…
ERIC Educational Resources Information Center
Lonchamp, F.
This is a presentation of the results of a factor analysis of a battery of tests intended to measure listening and reading comprehension in English as a second language. The analysis sought to answer the following questions: (1) whether the factor analysis method yields results when applied to tests which are not specifically designed for this…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biloni, H.; Lindenvald, N.; Sabato, J.A.
1961-01-01
The inclusions in uranium of nuclear purity (UC, UH3, UO2, UO, UN, and the complexes arising from the intersolubility of U with C and N, or with C, N, and O) were analyzed metallographically, and the results reported by other authors were discussed critically. The existence of the fine precipitate reticular substructure, sensitive to thermal treatments, which generally appears in uranium, was analyzed. Its origins were discussed in accordance with bibliographic data. Complementary data aiding its understanding are given from the metallographic analysis of U-Al and U-Fe alloys with low Al and Fe concentrations. (tr-auth)
Oxidation Mechanisms of Toluene and Benzene
NASA Technical Reports Server (NTRS)
Bittker, David A.
1995-01-01
An expanded and improved version of a previously published benzene oxidation mechanism is presented and shown to model published experimental data fairly successfully. This benzene submodel is coupled to a modified version of a toluene oxidation submodel from the recent literature. This complete mechanism is shown to successfully model published experimental toluene oxidation data for a highly mixed flow reactor and for higher temperature ignition delay times in a shock tube. A comprehensive sensitivity analysis showing the most important reactions is presented for both the benzene and toluene reacting systems. The NASA Lewis toluene mechanism's modeling capability is found to be equivalent to that of the previously published mechanism which contains a somewhat different benzene submodel.
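A sensitivity analysis of a kinetic mechanism perturbs each rate coefficient and observes the change in a model output. A toy illustration on a single first-order reaction (not the NASA Lewis mechanism), using central finite differences to estimate a normalized local sensitivity coefficient:

```python
import math

def conversion(k, t):
    """Fraction of A converted at time t for a first-order reaction A -> products."""
    return 1.0 - math.exp(-k * t)

# Normalized local sensitivity S = (k / y) * dy/dk, estimated by central
# differences. For a full mechanism this would be repeated for every
# reaction's rate coefficient to rank the most important reactions.
k, t, dk = 2.0, 0.5, 1e-6
y = conversion(k, t)
dydk = (conversion(k + dk, t) - conversion(k - dk, t)) / (2 * dk)
S = (k / y) * dydk
print(f"normalized sensitivity = {S:.4f}")
```

For this analytic case the exact value is S = kt·e^(-kt)/(1 - e^(-kt)), so the finite-difference estimate can be checked directly.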
Campton, Daniel E; Ramirez, Arturo B; Nordberg, Joshua J; Drovetto, Nick; Clein, Alisa C; Varshavskaya, Paulina; Friemel, Barry H; Quarre, Steve; Breman, Amy; Dorschner, Michael; Blau, Sibel; Blau, C Anthony; Sabath, Daniel E; Stilwell, Jackie L; Kaldjian, Eric P
2015-05-06
Circulating tumor cells (CTCs) are malignant cells that have migrated from solid cancers into the blood, where they are typically present in rare numbers. There is great interest in using CTCs to monitor response to therapies, to identify clinically actionable biomarkers, and to provide a non-invasive window on the molecular state of a tumor. Here we characterize the performance of the AccuCyte®--CyteFinder® system, a comprehensive, reproducible and highly sensitive platform for collecting, identifying and retrieving individual CTCs from microscopic slides for molecular analysis after automated immunofluorescence staining for epithelial markers. All experiments employed a density-based cell separation apparatus (AccuCyte) to separate nucleated cells from the blood and transfer them to microscopic slides. After staining, the slides were imaged using a digital scanning microscope (CyteFinder). Precisely counted model CTCs (mCTCs) from four cancer cell lines were spiked into whole blood to determine recovery rates. Individual mCTCs were removed from slides using a single-cell retrieval device (CytePicker™) for whole genome amplification and subsequent analysis by PCR and Sanger sequencing, whole exome sequencing, or array-based comparative genomic hybridization. Clinical CTCs were evaluated in blood samples from patients with different cancers in comparison with the CellSearch® system. AccuCyte--CyteFinder presented high-resolution images that allowed identification of mCTCs by morphologic and phenotypic features. Spike-in mCTC recoveries were between 90 and 91%. More than 80% of single-digit spike-in mCTCs were identified and even a single cell in 7.5 mL could be found. Analysis of single SKBR3 mCTCs identified presence of a known TP53 mutation by both PCR and whole exome sequencing, and confirmed the reported karyotype of this cell line. Patient sample CTC counts matched or exceeded CellSearch CTC counts in a small feasibility cohort. 
The AccuCyte--CyteFinder system is a comprehensive and sensitive platform for identification and characterization of CTCs that has been applied to the assessment of CTCs in cancer patient samples as well as the isolation of single cells for genomic analysis. It thus enables accurate non-invasive monitoring of CTCs and evolving cancer biology for personalized, molecularly-guided cancer treatment.
Assessing the environmental impacts of aircraft noise and emissions
NASA Astrophysics Data System (ADS)
Mahashabde, Anuja; Wolfe, Philip; Ashok, Akshay; Dorbian, Christopher; He, Qinxian; Fan, Alice; Lukachko, Stephen; Mozdzanowska, Aleksandra; Wollersheim, Christoph; Barrett, Steven R. H.; Locke, Maryalice; Waitz, Ian A.
2011-01-01
With the projected growth in demand for commercial aviation, many anticipate increased environmental impacts associated with noise, air quality, and climate change. Therefore, decision-makers and stakeholders are seeking policies, technologies, and operational procedures that balance environmental and economic interests. The main objective of this paper is to address shortcomings in current decision-making practices for aviation environmental policies. We review knowledge of the noise, air quality, and climate impacts of aviation, and demonstrate how including environmental impact assessment and quantifying uncertainties can enable a more comprehensive evaluation of aviation environmental policies. A comparison is presented between the cost-effectiveness analysis currently used for aviation environmental policy decision-making and an illustrative cost-benefit analysis. We focus on assessing a subset of the engine NOx emissions certification stringency options considered at the eighth meeting of the International Civil Aviation Organization’s Committee on Aviation Environmental Protection. The FAA Aviation environmental Portfolio Management Tool (APMT) is employed to conduct the policy assessments. We show that different conclusions may be drawn about the same policy options depending on whether benefits and interdependencies are estimated in terms of health and welfare impacts versus changes in NOx emissions inventories as is the typical practice. We also show that these conclusions are sensitive to a variety of modeling uncertainties. While our more comprehensive analysis makes the best policy option less clear, it represents a more accurate characterization of the scientific and economic uncertainties underlying impacts and the policy choices.
Audrézet, Marie Pierre; Munck, Anne; Scotet, Virginie; Claustres, Mireille; Roussey, Michel; Delmas, Dominique; Férec, Claude; Desgeorges, Marie
2015-02-01
Newborn screening (NBS) for cystic fibrosis (CF) was implemented throughout France in 2002. It involves a four-tiered procedure: immunoreactive trypsin (IRT)/DNA/IRT/sweat test. The aim of this study was to assess the performance of molecular CFTR gene analysis from the French NBS cohort, to evaluate CF incidence, mutation detection rate, and allelic heterogeneity. During the 8-year period, 5,947,148 newborns were screened for cystic fibrosis. The data were collected by the Association Française pour le Dépistage et la Prévention des Handicaps de l'Enfant. The mutations identified were classified into four groups based on their potential for causing disease, and a diagnostic algorithm was proposed. Combining the genetic and sweat test results, 1,160 neonates were diagnosed as having cystic fibrosis. The corresponding incidence, including both the meconium ileus (MI) and false-negative cases, was calculated at 1 in 4,726 live births. The CF30 kit, completed with a comprehensive CFTR gene analysis, provides an excellent detection rate of 99.77% for the mutated alleles, enabling the identification of a complete genotype in 99.55% of affected neonates. With more than 200 different mutations characterized, we confirmed the French allelic heterogeneity. The very good sensitivity, specificity, and positive predictive value obtained suggest that the four-tiered IRT/DNA/IRT/sweat test procedure may provide an effective strategy for newborn screening for cystic fibrosis.
MUHAMMAD, Noor Azimah; SHAMSUDDIN, Khadijah; OMAR, Khairani; SHAH, Shamsul Azhar; MOHD AMIN, Rahmah
2014-01-01
Background: Parenting behaviour is culturally sensitive. The aims of this study were (1) to translate the Parental Bonding Instrument into Malay (PBI-M) and (2) to determine its factorial structure and validity among the Malaysian population. Methods: The PBI-M was generated from a standard translation process and comprehension testing. The validation study of the PBI-M was administered to 248 college students aged 18 to 22 years. Results: Participants in the comprehension testing had difficulty understanding negative items. Five translated double negative items were replaced with five positive items with similar meanings. Exploratory factor analysis showed a three-factor model for the PBI-M with acceptable reliability. Four negative items (items 3, 4, 8, and 16) and item 19 were omitted from the final PBI-M list because of incorrect placement or low factor loading (< 0.32). Out of the final 20 items of the PBI-M, there were 10 items for the care factor, five items for the autonomy factor and five items for the overprotection factor. All the items loaded positively on their respective factors. Conclusion: The Malaysian population favoured positive items in answering questions. The PBI-M confirmed the three-factor model that consisted of care, autonomy and overprotection. The PBI-M is a valid and reliable instrument to assess the Malaysian parenting style. Confirmatory factor analysis may further support this finding. Keywords: Malaysia, parenting, questionnaire, validity PMID:25977634
Spencer, Mercedes; Wagner, Richard K.
2016-01-01
We conducted a meta-analysis of 16 existing studies to examine the nature of the comprehension problems for children who were second-language learners with poor reading comprehension despite adequate decoding. Results indicated that these children had deficits in oral language (d = −0.80), but these deficits were not as severe as their reading comprehension deficit (d = −2.47). Second-language learners also had weaker oral language skills compared to native-speaking children regardless of comprehension status (d = −0.84). We discuss theoretical and practical implications of the finding that second-language learners who are poor at reading comprehension despite adequate decoding have deficits in oral language but the deficit is not sufficient to explain their deficit in reading comprehension. PMID:28461711
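The d values reported in this abstract are standardized mean differences (Cohen's d). A minimal sketch with made-up group summaries (not this study's data) showing how such an effect size is computed from group means, standard deviations, and sample sizes:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    # Standardized mean difference using the pooled standard deviation
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

# Hypothetical summaries: poor comprehenders (group 1) vs. controls (group 2)
d = cohens_d(m1=85.0, s1=10.0, n1=40, m2=93.0, s2=10.0, n2=40)
print(round(d, 2))  # -0.8: group 1 scores 0.8 pooled SDs below group 2
```

A negative d, as in the abstract, thus indicates the target group scored below the comparison group, with the magnitude expressed in pooled standard-deviation units.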
Aszyk, Justyna; Woźniak, Mateusz Kacper; Kubica, Paweł; Kot-Wasik, Agata; Namieśnik, Jacek; Wasik, Andrzej
2017-09-29
Flavouring compounds are an essential part of e-liquid products for cigarettes. In general, they are regarded as safe for ingestion, but they may have unrecognized risks when they are inhaled. In some cases, manufacturers do not currently abide by the Tobacco Products Directive (2014/40/EU) and do not declare the detailed contents of e-liquids on their labels. To help evaluate the health impact of flavouring substances, there is a need for comprehensive approaches to determine their concentrations in e-liquids. For this purpose, a GC-EI-MS method was developed and validated for the simultaneous determination of 46 commonly used flavour additives in e-liquids. The proposed method performed well in terms of the key validation parameters: accuracy (84-113%), inter- and intra-day precision (0.1-10% and 1-11%, respectively), and sensitivity (limit of detection: 3-87 ng/mL). The sample preparation step was based on a simple "dilute & shoot" approach. This study describes a method complementary to the LC-MS/MS procedure described in Part I. Both approaches are suitable for the comprehensive determination of 88 flavouring compounds and nicotine and can be used as tools for the rapid evaluation of the quality and safety of e-cigarette products. Copyright © 2017 Elsevier B.V. All rights reserved.
Stößel, Maria; Rehra, Lena; Haastert-Talini, Kirsten
2017-10-01
The rat median nerve injury and repair model is becoming increasingly important for research on novel bioartificial nerve grafts. It allows follow-up evaluation of the recovery of forepaw functional ability with several sensitive techniques. The reflex-based grasping test, the skilled forelimb reaching staircase test, as well as electrodiagnostic recordings have been described as useful in this context. Currently, however, no standard values exist for comparison or comprehensive correlation of results obtained with each of the three methods after nerve gap repair in adult rats. Here, we bilaterally reconstructed 7-mm median nerve gaps with autologous nerve grafts (ANG) or autologous muscle-in-vein grafts (MVG), respectively. During 8 and 12 weeks of observation, functional recovery of each paw was separately monitored using the grasping test (weekly), the staircase test, and noninvasive electrophysiological recordings from the thenar muscles (both every 4 weeks). Evaluation was completed by histomorphometrical analyses at 8 and 12 weeks postsurgery. This comprehensive evaluation detected a significant difference in the recovery of forepaw functional motor ability between the ANG and MVG groups. Correlation between the different functional tests precisely displayed the recovery of distinct levels of forepaw functional ability over time. Thus, this multimodal evaluation model represents a valuable preclinical model for peripheral nerve reconstruction approaches.
Cunningham, Charles E; Kostrzewa, Linda; Rimas, Heather; Chen, Yvonne; Deal, Ken; Blatz, Susan; Bowman, Alida; Buchanan, Don H; Calvert, Randy; Jennings, Barbara
2013-01-01
Patients value health service teams that function effectively. Organizational justice is linked to the performance, health, and emotional adjustment of the members of these teams. We used a discrete-choice conjoint experiment to study the organizational justice improvement preferences of pediatric health service providers. Using themes from a focus group with 22 staff, we composed 14 four-level organizational justice improvement attributes. A sample of 652 staff (76 % return) completed 30 choice tasks, each presenting three hospitals defined by experimentally varying the attribute levels. Latent class analysis yielded three segments. Procedural justice attributes were more important to the Decision Sensitive segment, 50.6 % of the sample. They preferred to contribute to and understand how all decisions were made and expected management to act promptly on more staff suggestions. Interactional justice attributes were more important to the Conduct Sensitive segment (38.5 %). A universal code of respectful conduct, consequences encouraging respectful interaction, and management's response when staff disagreed with them were more important to this segment. Distributive justice attributes were more important to the Benefit Sensitive segment, 10.9 % of the sample. Simulations predicted that, while Decision Sensitive (74.9 %) participants preferred procedural justice improvements, Conduct (74.6 %) and Benefit Sensitive (50.3 %) participants preferred interactional justice improvements. Overall, 97.4 % of participants would prefer an approach combining procedural and interactional justice improvements. Efforts to create the health service environments that patients value need to be comprehensive enough to address the preferences of segments of staff who are sensitive to different dimensions of organizational justice.
Eberhardt, Melanie; Nadig, Aparna
2018-01-01
We present two experiments examining the universality and uniqueness of reduced context sensitivity in language processing in Autism Spectrum Disorders (ASD), as proposed by the Weak Central Coherence account (Happé & Frith, 2006, Journal of Autism and Developmental Disorders, 36(1), 25). That is, do all children with ASD exhibit decreased context sensitivity, and is this characteristic specific to ASD versus other neurodevelopmental conditions? Experiment 1, conducted in English, was a comparison of children with ASD with normal language and their typically-developing peers on a picture selection task where interpretation of sentential context was required to identify homonyms. Contrary to the predictions of Weak Central Coherence, the ASD-normal language group exhibited no difficulty on this task. Experiment 2, conducted in German, compared children with ASD with variable language abilities, typically-developing children, and a second control group of children with Language Impairment (LI) on a sentence completion task where a context sentence had to be considered to produce the continuation of an ambiguous sentence fragment. Both ASD-variable language and LI groups exhibited reduced context sensitivity and did not differ from each other. Finally, to directly test which factors contribute to reduced context sensitivity, we conducted a regression analysis for each experiment, entering nonverbal IQ, structural language ability, and autism diagnosis as predictors. For both experiments structural language ability emerged as the only significant predictor. These convergent findings demonstrate that reduced sensitivity to context in language processing is linked to low structural language rather than ASD diagnosis. Copyright © 2016 Elsevier Ltd. All rights reserved.
Nosheen, Erum; Shah, Syed Mujtaba; Hussain, Hazrat; Murtaza, Ghulam
2016-09-01
This article presents a comprehensive comparative report on the grafting of ZnS with renowned ruthenium (Ru) dyes (N3, N719, and Z907) and gives insight into their charge-transfer interaction and sensitization mechanism for boosting solar cell efficiency. The influence of dye concentration on cell performance is also reported. ZnS nanoparticles synthesized by a simple coprecipitation method, with an average particle size of 15 ± 2 nm, were characterized by X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), energy dispersive X-ray analysis (EDAX), tunneling electron microscopy (TEM), and UV-Visible (UV-Vis) spectroscopy. UV-Vis, photoluminescence (PL), and Fourier transform infrared (FT-IR) spectroscopy confirmed the successful grafting of these dyes onto the ZnS nanoparticle surface. The low-energy metal-to-ligand charge-transfer (MLCT) bands of the dyes are mainly affected on grafting over the nanoparticle surface. Moreover, the current-voltage (I-V) results confirm the efficiency enhancement in ZnS solid state dye sensitized solar cells (SSDSSCs) owing to effective sensitization of this material with Ru dyes, and they help in finding the optimum dye concentration for nanoparticle sensitization. The highest rise in overall solar cell efficiency, 64% over the reference device, was observed for the 0.3 mM N719-ZnS sample owing to increased open circuit voltage (Voc) and fill factor (FF). Experimental and proposed results were found to be in good agreement with each other. Copyright © 2016 Elsevier B.V. All rights reserved.
Cognitive capital, equity and child-sensitive social protection in Asia and the Pacific.
Samson, Michael; Fajth, Gaspar; François, Daphne
2016-01-01
Promoting child development and welfare delivers human rights and builds sustainable economies through investment in 'cognitive capital'. This analysis looks at conditions that support optimal brain development in childhood and highlights how social protection promotes these conditions and strengthens the achievement of the Sustainable Development Goals (SDGs) in Asia and the Pacific. Embracing child-sensitive social protection offers multiple benefits. The region has been a leader in global poverty reduction, but the underlying pattern of economic growth exacerbates inequality and is increasingly unsustainable. The strategy of channelling low-skilled rural labour to industrial jobs left millions of children behind with limited opportunities for development. Building child-sensitive social protection and investing better in children's cognitive capacity could check these trends and trigger powerful long-term human capital development, enabling labour productivity to grow faster than populations age. While governments are investing more in social protection, the region's spending remains low by international comparison. Investment is particularly inadequate where it yields the highest returns: during the first 1000 days of life. Five steps are recommended for moving forward: (1) build cognitive capital by adjusting the region's development paradigms to better reflect the economic and social returns from investing in children; (2) understand and better track child poverty and vulnerability; (3) progressively build universal, child-sensitive systems that strengthen comprehensive interventions within life-cycle frameworks; (4) mobilise national resources for early childhood investments and child-sensitive social protection; and (5) leverage the SDGs and other channels of national and international collaboration.
Post, Harm; Penning, Renske; Fitzpatrick, Martin A; Garrigues, Luc B; Wu, W; MacGillavry, Harold D; Hoogenraad, Casper C; Heck, Albert J R; Altelaar, A F Maarten
2017-02-03
Because of the low stoichiometry of protein phosphorylation, targeted enrichment prior to LC-MS/MS analysis is still essential. The trend in phosphoproteome analysis is shifting toward an increasing number of biological replicates per experiment, ideally starting from very low sample amounts, placing new demands on enrichment protocols to make them less labor-intensive, more sensitive, and less prone to variability. Here we assessed an automated enrichment protocol using Fe(III)-IMAC cartridges on an AssayMAP Bravo platform to meet these demands. The automated Fe(III)-IMAC-based enrichment workflow proved to be more effective when compared to a TiO2-based enrichment using the same platform and a manual Ti(IV)-IMAC-based enrichment workflow. As initial samples, a dilution series of both human HeLa cell and primary rat hippocampal neuron lysates was used, going down to 0.1 μg of peptide starting material. The optimized workflow proved to be efficient, sensitive, and reproducible, identifying, localizing, and quantifying thousands of phosphosites from just micrograms of starting material. To further test the automated workflow in genuine biological applications, we monitored EGF-induced signaling in hippocampal neurons, starting with only 200,000 primary cells, resulting in ∼50 μg of protein material. This revealed a comprehensive phosphoproteome, showing regulation of multiple members of the MAPK pathway and a reduced phosphorylation status of two glutamate receptors involved in synaptic plasticity.
Rustagi, Tarun; Njei, Basile
2014-08-01
This study aimed to perform a structured meta-analysis of all eligible studies to assess the overall diagnostic use of magnetic resonance cholangiopancreatography (MRCP) alone or with secretin enhancement (secretin-enhanced MRCP [S-MRCP]) in the detection of pancreas divisum. Two authors independently performed a comprehensive search of PubMed, MEDLINE, and the Cochrane Library from inception to September 2013. Studies were included if they allowed construction of 2 × 2 contingency tables of MRCP and/or S-MRCP compared with a criterion standard. DerSimonian-Laird random effect models were used to estimate the pooled sensitivity, specificity, and quantitative receiver operating characteristics. Of 51 citations, 10 studies with 1474 patients were included. Secretin-enhanced MRCP had a higher overall diagnostic performance than MRCP (S-MRCP: pooled sensitivity, 86% [95% confidence interval (CI), 77%-93%]; specificity, 97% [95% CI, 94%-99%]; and area under the curve, 0.93 ± 0.056; compared with MRCP: sensitivity, 52% [95% CI, 45%-59%]; specificity, 97% [95% CI, 94%-99%]; and area under the curve, 0.76 ± 0.104). Pooled diagnostic odds ratios were 72.19 (95% CI, 5.66-938.8) and 23.39 (95% CI, 7.93-69.02) for S-MRCP and MRCP, respectively. Visual inspection of the funnel plot showed low potential for publication bias. Secretin-enhanced MRCP has a much higher diagnostic accuracy than MRCP and should be preferred for diagnosis of pancreas divisum.
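The DerSimonian-Laird random-effects pooling named in this abstract estimates a between-study variance and reweights studies accordingly. A hedged sketch using hypothetical study-level sensitivities (not the data from this meta-analysis), pooled on the logit scale:

```python
import math

def dersimonian_laird(effects, variances):
    # DerSimonian-Laird random-effects pooling of study-level effects
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance estimate
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se

# Hypothetical per-study sensitivities and diseased-patient counts
sens = [0.82, 0.90, 0.78, 0.88]
n = [50, 40, 60, 45]
logits = [math.log(p / (1 - p)) for p in sens]
var = [1.0 / (p * (1 - p) * ni) for p, ni in zip(sens, n)]  # approx. logit variance
pooled_logit, se = dersimonian_laird(logits, var)
pooled_sens = 1.0 / (1.0 + math.exp(-pooled_logit))
lo = 1.0 / (1.0 + math.exp(-(pooled_logit - 1.96 * se)))
hi = 1.0 / (1.0 + math.exp(-(pooled_logit + 1.96 * se)))
print(f"pooled sensitivity ≈ {pooled_sens:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Pooling on the logit scale keeps the back-transformed estimate and confidence limits inside the 0-1 range, which is why it is commonly used for diagnostic accuracy meta-analyses.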
Drury, Helena; Shah, Shivani; Stern, Jeremy S; Crawford, Sarah; Channon, Shelley
2018-05-01
Previous research has reported that aspects of social cognition such as nonliteral language comprehension are impaired in adults with Tourette's syndrome (TS), but little is known about social cognition in children and adolescents with TS. The present study aims to evaluate a measure of sarcasm comprehension suitable for use with children and adolescents (Experiment 1), and to examine sarcasm comprehension in children and adolescents with TS-alone or TS and attention deficit hyperactivity disorder (ADHD; Experiment 2). In Experiment 1, the measure of sarcasm comprehension was found to be sensitive to differences in nonliteral language comprehension for typically-developing children aged 10 to 11 years old compared to children aged 8 to 9 years old; the older group performed significantly better on the comprehension of scenarios ending with either direct or indirect sarcastic remarks, whereas the two age groups did not differ on the comprehension of scenarios ending with sincere remarks. In Experiment 2, both the TS-alone and TS+ADHD groups performed below the level of the control participants on the comprehension of indirect sarcasm items but not on the comprehension of direct sarcasm items and sincere items. Those with TS+ADHD also performed below the level of the control participants on measures of interference control and fluency. The findings are discussed with reference to the possible contribution of executive functioning and mentalizing to the patterns of performance.
Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal
2017-11-24
Lipidomic analysis of biological samples in clinical research represents a challenging task for analytical methods, given the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography-mass spectrometry (MS) lipidomic approaches, represented by three analytical methods, in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. Methods are compared in one laboratory using an identical analytical protocol to ensure comparable conditions. An ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and a DI-MS method are used for this comparison as the most widely used methods for lipidomic analysis, together with an ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method showing promising results in metabolomics analyses. The nontargeted analysis of pooled samples is performed using all tested methods, and 610 lipid species within 23 lipid classes are identified. The DI method provides the most comprehensive results due to the identification of some polar lipid classes that are not identified by the UHPLC and UHPSFC methods. On the other hand, the UHPSFC method provides excellent sensitivity for less polar lipid classes and the highest sample throughput, within a 10 min method time. The sample consumption of the DI method is 125 times higher than that of the other methods, while only 40 μL of organic solvent is used for one sample analysis, compared to 3.5 mL and 4.9 mL in the case of the UHPLC and UHPSFC methods, respectively. Methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. Results show the applicability of all tested methods for the lipidomic analysis of biological samples, depending on the analysis requirements.
Copyright © 2017 Elsevier B.V. All rights reserved.
Pefkou, Maria; Arnal, Luc H; Fontolan, Lorenzo; Giraud, Anne-Lise
2017-08-16
Recent psychophysics data suggest that speech perception is not limited by the capacity of the auditory system to encode fast acoustic variations through neural γ activity, but rather by the time given to the brain to decode them. Whether the decoding process is bounded by the capacity of θ rhythm to follow syllabic rhythms in speech, or constrained by a more endogenous top-down mechanism, e.g., involving β activity, is unknown. We addressed the dynamics of auditory decoding in speech comprehension by challenging syllable tracking and speech decoding using comprehensible and incomprehensible time-compressed auditory sentences. We recorded EEGs in human participants and found that neural activity in both θ and γ ranges was sensitive to syllabic rate. Phase patterns of slow neural activity consistently followed the syllabic rate (4-14 Hz), even when this rate went beyond the classical θ range (4-8 Hz). The power of θ activity increased linearly with syllabic rate but showed no sensitivity to comprehension. Conversely, the power of β (14-21 Hz) activity was insensitive to the syllabic rate, yet reflected comprehension on a single-trial basis. We found different long-range dynamics for θ and β activity, with β activity building up in time while more contextual information becomes available. This is consistent with the roles of θ and β activity in stimulus-driven versus endogenous mechanisms. These data show that speech comprehension is constrained by concurrent stimulus-driven θ and low-γ activity, and by endogenous β activity, but not primarily by the capacity of θ activity to track the syllabic rhythm. SIGNIFICANCE STATEMENT Speech comprehension partly depends on the ability of the auditory cortex to track syllable boundaries with θ-range neural oscillations. The reason comprehension drops when speech is accelerated could hence be because θ oscillations can no longer follow the syllabic rate. 
Here, we presented subjects with comprehensible and incomprehensible accelerated speech, and show that neural phase patterns in the θ band consistently reflect the syllabic rate, even when speech becomes too fast to be intelligible. The drop in comprehension, however, is signaled by a significant decrease in the power of low-β oscillations (14-21 Hz). These data suggest that speech comprehension is not limited by the capacity of θ oscillations to adapt to syllabic rate, but by an endogenous decoding process. Copyright © 2017 the authors 0270-6474/17/377930-09$15.00/0.
Comprehensive School Reform and Student Achievement: A Meta-Analysis.
ERIC Educational Resources Information Center
Borman, Geoffrey D.; Hewes, Gina M.; Overman, Laura T.; Brown, Shelly
Using 232 studies, this meta-analysis reviewed the research on the achievement effects of the nationally disseminated and externally developed school improvement programs known as "whole-school" or "comprehensive" reforms. In addition to reviewing the overall achievement effects of comprehensive school reform (CSR), the meta…
New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)
NASA Astrophysics Data System (ADS)
Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.
2017-09-01
Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for the neutron scattering angular distribution [2] and constrained Chi sensitivities for the prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in keff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries, e.g., ENDF/B, JEFF, JENDL, and TENDL, allowing comparison of the broad range of results it is possible to obtain. The IRPhE database of reactor physics measurements is now also accessible within the tool, in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite is now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra, will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. 
The impact of such changes is verified through calculations which are compared to a 'direct' measurement found by adjustment of the original ENDF format file.
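The keff uncertainty propagation that a tool of this kind performs pairs sensitivity profiles with covariance data through the standard first-order "sandwich rule", var(k) = SᵀCS. A minimal sketch of that rule, using a hypothetical three-group sensitivity vector and covariance matrix (illustrative numbers, not data from any real evaluation):

```python
import numpy as np

# First-order "sandwich rule" for k-eff uncertainty from nuclear data
# covariances: var(k) = S^T C S.  S and C are hypothetical three-group
# values, not taken from any evaluated library.
S = np.array([0.12, 0.30, 0.05])      # relative sensitivities, (dk/k)/(ds/s)
C = np.array([                        # relative covariance matrix
    [4.0e-4, 1.0e-4, 0.0],
    [1.0e-4, 9.0e-4, 2.0e-4],
    [0.0,    2.0e-4, 1.0e-4],
])

var_k = S @ C @ S                     # relative variance of k-eff
unc_k_pct = 100.0 * np.sqrt(var_k)    # 1-sigma relative uncertainty, in %
```

With these numbers the propagated 1-sigma uncertainty comes out at roughly 1% in keff; swapping in a different covariance library amounts to swapping the matrix `C`.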
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harding, R., E-mail: ruth.harding2@wales.nhs.uk; Trnková, P.; Lomax, A. J.
Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.
García Vicente, Ana María; Delgado-Bolton, Roberto C; Amo-Salas, Mariano; López-Fidalgo, Jesús; Caresia Aróztegui, Ana Paula; García Garzón, José Ramón; Orcajo Rincón, Javier; García Velloso, María José; de Arcocha Torres, María; Alvárez Ruíz, Soledad
2017-08-01
The detection of occult cancer in patients suspected of having a paraneoplastic neurological syndrome (PNS) poses a diagnostic challenge. The aim of our study was to perform a systematic review and meta-analysis to assess the diagnostic performance of FDG PET for the detection of occult malignant disease responsible for PNS. A systematic review of the literature (MEDLINE, EMBASE, Cochrane, and DARE) was undertaken to identify studies published in any language. The search strategy was structured after addressing clinical questions regarding the validity or usefulness of the test, following the PICO framework. Inclusion criteria were studies involving patients with PNS in whom FDG PET was performed to detect malignancy, and which reported sufficient primary data to allow calculation of diagnostic accuracy parameters. When possible, a meta-analysis was performed to calculate the joint sensitivity, specificity, and detection rate for malignancy (with 95% confidence intervals [CIs]), as well as a subgroup analysis based on patient characteristics (antibodies, syndrome). The comprehensive literature search revealed 700 references. Sixteen studies met the inclusion criteria and were ultimately selected. Most of the studies were retrospective (12/16). For the quality assessment, the QUADAS-2 tool was applied to assess the risk of bias. Across 16 studies (793 patients), the joint sensitivity, specificity, and detection rate for malignancy with FDG PET were 0.87 (95% CI: 0.80-0.93), 0.86 (95% CI: 0.83-0.89), and 14.9% (95% CI: 11.5-18.7), respectively. The area under the curve (AUC) of the summary ROC curve was 0.917. Homogeneity of results was observed for sensitivity but not for specificity. Some of the individual studies showed large 95% CIs as a result of small sample size. 
The results of our meta-analysis reveal high diagnostic performance of FDG PET in the detection of malignancy responsible for PNS, not affected by the presence of onconeural antibodies or clinical characteristics.
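In its simplest fixed-effect form, the joint ("pooled") sensitivity reported by such a meta-analysis is the ratio of summed true positives to summed positives across studies, with a normal-approximation confidence interval. Published meta-analyses typically fit bivariate random-effects models instead, so the following is only a back-of-envelope sketch with made-up per-study counts:

```python
from math import sqrt

# Hypothetical (true positive, false negative) counts for four studies;
# these are illustrative, not the studies pooled in the meta-analysis above.
studies = [(18, 2), (40, 6), (25, 4), (60, 9)]

tp = sum(s[0] for s in studies)
fn = sum(s[1] for s in studies)
sens = tp / (tp + fn)                       # pooled (fixed-effect) sensitivity

# Wald 95% CI on the pooled proportion.
se = sqrt(sens * (1 - sens) / (tp + fn))
ci = (sens - 1.96 * se, sens + 1.96 * se)
```

The same pooling applied to (true negative, false positive) counts yields the joint specificity; heterogeneity between studies, noted above for specificity, is what motivates the random-effects alternative.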
NASA Astrophysics Data System (ADS)
Song, X.; Chen, X.; Dai, H.; Hammond, G. E.; Song, H. S.; Stegen, J.
2016-12-01
The hyporheic zone is an active region for biogeochemical processes such as carbon and nitrogen cycling, where groundwater and surface water with distinct biogeochemical and thermal properties mix and interact with each other. The biogeochemical dynamics within the hyporheic zone are driven by both river water and groundwater hydraulic dynamics, which are directly affected by climate change scenarios. In addition, the hydraulic and thermal properties of local sediments and microbial and chemical processes also play important roles in biogeochemical dynamics. Thus, a comprehensive understanding of the biogeochemical processes in the hyporheic zone requires a coupled thermo-hydro-biogeochemical model. As multiple uncertainty sources are involved in the integrated model, it is important to identify its key modules/parameters through sensitivity analysis. In this study, we develop a 2D cross-section model of the hyporheic zone at the DOE Hanford site adjacent to the Columbia River and use this model to quantify module and parametric sensitivity in the assessment of climate change. To achieve this purpose, we (1) develop a facies-based groundwater flow and heat transfer model that incorporates facies geometry and heterogeneity characterized from a field data set, (2) derive multiple reaction networks/pathways from batch experiments with in-situ samples and integrate temperature-dependent reactive transport modules into the flow model, (3) assign multiple climate change scenarios to the coupled model by analyzing historical river stage data, and (4) apply a variance-based global sensitivity analysis to quantify scenario/module/parameter uncertainty hierarchically. The objectives of the research are to (1) identify the key controlling factors of the coupled thermo-hydro-biogeochemical model in the assessment of climate change, and (2) quantify the carbon consumption in the hyporheic zone under different climate change scenarios.
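A variance-based global sensitivity analysis of the kind described attributes output variance to inputs via Sobol indices, usually estimated by Monte Carlo. A minimal sketch on a toy linear stand-in model (not the coupled thermo-hydro-biogeochemical model itself), for which the analytic first-order index of x1 is 0.9:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in model: output driven strongly by x1, weakly by x2.
# For uniform inputs on [0,1], Var(Y) = 9/12 + 1/12, so S1 = 0.9 exactly.
def model(x1, x2):
    return 3.0 * x1 + x2

n = 200_000
a1, a2 = rng.random(n), rng.random(n)   # sample matrix A
b1, b2 = rng.random(n), rng.random(n)   # sample matrix B

y_a = model(a1, a2)
y_b = model(b1, b2)
y_ab1 = model(b1, a2)                   # A with the x1 column taken from B

# Saltelli-style estimator of the first-order Sobol index of x1.
var_y = np.var(np.concatenate([y_a, y_b]))
s1 = np.mean(y_b * (y_ab1 - y_a)) / var_y
```

The same pick-freeze construction, applied per input, ranks modules and parameters by their variance contribution, which is what the hierarchical scenario/module/parameter decomposition above builds on.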
Kao, Hua-Lin; Yeh, Yi-Chen; Lin, Chin-Hsuan; Hsu, Wei-Fang; Hsieh, Wen-Yu; Ho, Hsiang-Ling; Chou, Teh-Ying
2016-11-01
Analysis of the targetable driver mutations is now recommended in all patients with advanced lung adenocarcinoma. Molecular-based methods are usually adopted; however, along with the implementation of highly sensitive and/or mutation-specific antibodies, immunohistochemistry (IHC) has been considered an alternative method for identifying driver mutations in lung adenocarcinomas. A total of 205 lung adenocarcinomas were examined for EGFR mutations and ALK and ROS1 rearrangements using real-time PCR, fluorescence in situ hybridization (FISH) and IHC in parallel. The performance of different commercially available IHC antibody clones toward targetable driver mutations was evaluated. The association between these driver mutations and clinicopathological characteristics was also analyzed. Of the 205 cases studied, 58.5% were found to harbor EGFR mutations, 6.3% ALK rearrangements and 1.0% ROS1 rearrangements. Compared to molecular-based methods, IHC of EGFR mutations showed excellent specificity but suboptimal sensitivity, while IHC of ALK and ROS1 rearrangements demonstrated high sensitivity and specificity. No significant difference regarding the performance of different antibody clones toward these driver mutations was observed, except that clone SP125 showed a higher sensitivity than 43B2 in the detection of p.L858R of EGFR. In circumstances such as poor quality of nucleic acids or low content of tumor cells, IHC with EGFR mutation-specific antibodies could be used as an alternative method. Patients negative for EGFR mutations are then subjected to further analysis of ALK and ROS1 rearrangements using IHC methods. Herein, we propose a lung adenocarcinoma testing algorithm for the application of IHC in therapeutic diagnosis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Paul, D.; Biswas, R.
2018-05-01
We report a highly sensitive localized surface plasmon resonance (LSPR) based photonic crystal fiber (PCF) sensor created by embedding an array of gold nanospheres into the first layer of air-holes of the PCF. We present a comprehensive analysis based on progressive variation of the refractive indices of analytes as well as the sizes of the nanospheres. In the proposed sensing scheme, the refractive indices of the analytes are varied from 1 to 1.41 (RIU), accompanied by alteration of the sizes of the nanospheres ranging from 40 to 70 nm. The entire study has been executed for different material based PCFs (viz. phosphate and crown) and the corresponding results have been analyzed and compared. We observe a declining trend in modal loss in each set of PCFs with increasing RI of the analyte. Lower loss has been observed in the case of crown based PCF. The sensor shows a highest sensitivity of ∼27,000 nm/RIU for crown based PCF with a nanosphere of 70 nm, with an average wavelength interrogation sensitivity of ∼5333.53 nm/RIU. In the case of phosphate based PCF, the highest sensitivity is found to be ∼18,000 nm/RIU with an average interrogation sensitivity of ∼4555.56 nm/RIU for a 40 nm Au nanosphere. Moreover, additional sensing parameters have been evaluated to highlight the design of the modelled LSPR based photonic crystal fiber sensor. As such, the resolution (R), limit of detection (LOD) and sensitivity (S) of the proposed sensor in each case (viz. phosphate and crown PCF) have been discussed using the wavelength interrogation technique. The proposed study provides a basis for detailed investigation of the LSPR phenomenon in PCFs utilizing noble metal nanospheres (AuNPs).
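The wavelength-interrogation sensitivity quoted above is simply the resonance shift per unit refractive-index change, S = Δλ/Δn, and the index resolution follows from the instrument's wavelength resolution. A sketch reproducing the ∼27,000 nm/RIU figure; the shift, index step, and the 0.1 nm spectral resolution are illustrative assumptions, not values from the paper:

```python
# Wavelength-interrogation figures of merit for an LSPR-PCF sensor.
# All numbers below are illustrative assumptions.
d_lambda_peak = 270.0          # nm, resonance wavelength shift
d_n = 0.01                     # RIU, analyte index change producing that shift
spectral_res = 0.1             # nm, assumed spectrometer resolution

S = d_lambda_peak / d_n        # sensitivity, nm/RIU
R = spectral_res / S           # smallest detectable index change, RIU
```

With a 0.1 nm instrument, a 27,000 nm/RIU sensor would resolve index changes of about 4e-6 RIU; this S-to-R conversion is the basis of the resolution and LOD discussion mentioned in the abstract.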
NASA Astrophysics Data System (ADS)
Sunilkumar, K.; Narayana Rao, T.; Saikranthi, K.; Purnachandra Rao, M.
2015-09-01
This study presents a comprehensive evaluation of five widely used multisatellite precipitation estimates (MPEs) against a 1° × 1° gridded rain gauge data set as ground truth over India. One decade of observations is used to assess the performance of various MPEs (Climate Prediction Center (CPC)-South Asia data set, CPC Morphing Technique (CMORPH), Precipitation Estimation From Remotely Sensed Information Using Artificial Neural Networks, Tropical Rainfall Measuring Mission's Multisatellite Precipitation Analysis (TMPA-3B42), and Global Precipitation Climatology Project). All MPEs have high detection skills of rain with larger probability of detection (POD) and smaller "missing" values. However, the detection sensitivity differs from one product (and also one region) to the other. While CMORPH has the lowest sensitivity in detecting rain, CPC shows the highest sensitivity and often overdetects rain, as evidenced by large POD and false alarm ratio and small missing values. All MPEs show higher rain sensitivity over eastern India than western India. These differential sensitivities are found to alter the biases in rain amount differently. All MPEs show similar spatial patterns of seasonal rain bias and root-mean-square error, but their spatial variability across India is complex and pronounced. The MPEs overestimate the rainfall over the dry regions (northwest and southeast India) and severely underestimate over mountainous regions (west coast and northeast India), whereas the bias is relatively small over the core monsoon zone. Higher occurrence of virga rain due to subcloud evaporation and possible missing of small-scale convective events by gauges over the dry regions are the main reasons for the observed overestimation of rain by MPEs. The decomposed components of total bias show that the major part of overestimation is due to false precipitation.
The severe underestimation of rain along the west coast is attributed to the predominant occurrence of shallow rain and underestimation of moderate to heavy rain by MPEs. The decomposed components suggest that the missed precipitation and hit bias are the leading error sources for the total bias along the west coast. All evaluation metrics are found to be nearly equal in two contrasting monsoon seasons (southwest and northeast), indicating that the performance of MPEs does not change with the season, at least over southeast India. Among various MPEs, the performance of TMPA is found to be better than others, as it reproduced most of the spatial variability exhibited by the reference.
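The detection skill measures used throughout this evaluation (POD, false alarm ratio, missing values) come straight from a rain/no-rain contingency table of hits, false alarms, and misses. A minimal sketch with hypothetical counts (not the study's data):

```python
# Categorical skill scores from a rain/no-rain contingency table.
# Counts are hypothetical, for illustration only.
hits = 820          # satellite says rain, gauge says rain
false_alarms = 140  # satellite says rain, gauge says no rain
misses = 90         # satellite says no rain, gauge says rain

pod = hits / (hits + misses)                     # probability of detection
far = false_alarms / (hits + false_alarms)       # false alarm ratio
freq_bias = (hits + false_alarms) / (hits + misses)  # frequency bias
```

A product like CPC in the study above pairs a high POD with a high FAR and a frequency bias above 1, which is exactly the "overdetects rain" pattern described.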
Gamazon, Eric R.; Lamba, Jatinder K.; Pounds, Stanley; Stark, Amy L.; Wheeler, Heather E.; Cao, Xueyuan; Im, Hae K.; Mitra, Amit K.; Rubnitz, Jeffrey E.; Ribeiro, Raul C.; Raimondi, Susana; Campana, Dario; Crews, Kristine R.; Wong, Shan S.; Welsh, Marleen; Hulur, Imge; Gorsic, Lidija; Hartford, Christine M.; Zhang, Wei; Cox, Nancy J.; Dolan, M. Eileen
2013-01-01
A whole-genome approach was used to investigate the genetic determinants of cytarabine-induced cytotoxicity. We performed a meta-analysis of genome-wide association studies involving 523 lymphoblastoid cell lines (LCLs) from individuals of European, African, Asian, and African American ancestry. Several of the highest-ranked single-nucleotide polymorphisms (SNPs) were within the mutated in colorectal cancers (MCC) gene. MCC expression was induced by cytarabine treatment from 1.7- to 26.6-fold in LCLs. A total of 33 SNPs ranked at the top of the meta-analysis (P < 10−5) were successfully tested in a clinical trial of patients randomized to receive low-dose or high-dose cytarabine plus daunorubicin and etoposide; of these, 18 showed association (P < .05) with either cytarabine 50% inhibitory concentration in leukemia cells or clinical response parameters (minimal residual disease, overall survival (OS), and treatment-related mortality). This count (n = 18) was significantly greater than expected by chance (P = .016). For rs1203633, LCLs with AA genotype were more sensitive to cytarabine-induced cytotoxicity (P = 1.31 × 10−6) and AA (vs GA or GG) genotype was associated with poorer OS (P = .015), likely as a result of greater treatment-related mortality (P = .0037) in patients with acute myeloid leukemia (AML). This multicenter AML02 study trial was registered at www.clinicaltrials.gov as #NCT00136084. PMID:23538338
Canstein, C; Cachot, P; Faust, A; Stalder, A F; Bock, J; Frydrychowicz, A; Küffer, J; Hennig, J; Markl, M
2008-03-01
The knowledge of local vascular anatomy and function in the human body is of high interest for the diagnosis and treatment of cardiovascular disease. A comprehensive analysis of the hemodynamics in the thoracic aorta is presented based on the integration of flow-sensitive 4D MRI with state-of-the-art rapid prototyping technology and computational fluid dynamics (CFD). Rapid prototyping was used to transform aortic geometries as measured by contrast-enhanced MR angiography into realistic vascular models with large anatomical coverage. Integration into a flow circuit with patient-specific pulsatile in-flow conditions and application of flow-sensitive 4D MRI permitted detailed analysis of local and global 3D flow dynamics in a realistic vascular geometry. Visualization of characteristic 3D flow patterns and quantitative comparisons of the in vitro experiments with in vivo data and CFD simulations in identical vascular geometries were performed to evaluate the accuracy of vascular model systems. The results indicate the potential of such patient-specific model systems for detailed experimental simulation of realistic vascular hemodynamics. Further studies are warranted to examine the influence of refined boundary conditions of the human circulatory system such as fluid-wall interaction and their effect on normal and pathological blood flow characteristics associated with vascular geometry. (c) 2008 Wiley-Liss, Inc.
Paulitschke, Verena; Berger, Walter; Paulitschke, Philipp; Hofstätter, Elisabeth; Knapp, Bernhard; Dingelmaier-Hovorka, Ruth; Födinger, Dagmar; Jäger, Walter; Szekeres, Thomas; Meshcheryakova, Anastasia; Bileck, Andrea; Pirker, Christine; Pehamberger, Hubert; Gerner, Christopher; Kunstfeld, Rainer
2015-03-01
The FDA-approved BRAF inhibitor vemurafenib achieves outstanding clinical response rates in patients with melanoma, but early resistance is common. Understanding the pathologic mechanisms of drug resistance and identification of effective therapeutic alternatives are key scientific challenges in the melanoma setting. Using proteomic techniques, including shotgun analysis and 2D-gel electrophoresis, we identified a comprehensive signature of the vemurafenib-resistant M24met in comparison with the vemurafenib-sensitive A375 melanoma cell line. The resistant cells were characterized by loss of differentiation, induction of transformation, enhanced expression of the lysosomal compartment, increased potential for metastasis, migration, adherence and Ca2(+) ion binding, enhanced expression of the MAPK pathway and extracellular matrix proteins, and epithelial-mesenchymal transformation. The main features were verified by shotgun analysis with QEXACTIVE orbitrap MS, electron microscopy, lysosomal staining, Western blotting, and adherence assay in a VM-1 melanoma cell line with acquired vemurafenib resistance. On the basis of the resistance profile, we were able to successfully predict that a novel resveratrol-derived COX-2 inhibitor, M8, would be active against the vemurafenib-resistant but not the vemurafenib-sensitive melanoma cells. Using high-throughput methods for cell line and drug characterization may thus offer a new way to identify key features of vemurafenib resistance, facilitating the design of effective rational therapeutic alternatives. ©2015 American Association for Cancer Research.
Aspartame sensitivity? A double blind randomised crossover study.
Sathyapalan, Thozhukat; Thatcher, Natalie J; Hammersley, Richard; Rigby, Alan S; Courts, Fraser L; Pechlivanis, Alexandros; Gooderham, Nigel J; Holmes, Elaine; le Roux, Carel W; Atkin, Stephen L
2015-01-01
Aspartame is a commonly used intense artificial sweetener, being approximately 200 times sweeter than sucrose. There have been concerns over aspartame since approval in the 1980s, including a large anecdotal database reporting severe symptoms. The objective of this study was to compare the acute symptom effects of aspartame to a control preparation. This was a double-blind randomized crossover study conducted in a clinical research unit in the United Kingdom. Forty-eight individuals who had self-reported sensitivity to aspartame were compared with 48 age- and gender-matched aspartame non-sensitive individuals. They were given aspartame (100 mg)-containing or control snack bars randomly at least 7 days apart. The main outcome measures were acute effects of aspartame measured using repeated ratings of 14 symptoms, biochemistry and metabonomics. Aspartame sensitive and non-sensitive participants differed psychologically at baseline in handling feelings and perceived stress. Sensitive participants had higher triglycerides (2.05 ± 1.44 vs. 1.26 ± 0.84 mmol/L; p value 0.008) and lower HDL-C (1.16 ± 0.34 vs. 1.35 ± 0.54 mmol/L; p value 0.04), reflected in 1H NMR serum analysis that showed differences in the baseline lipid content between the two groups. Urine metabonomic studies showed no significant differences. None of the rated symptoms differed between aspartame and control bars, or between sensitive and control participants. However, aspartame sensitive participants rated more symptoms, particularly in the first test session, whether this was aspartame or control. Aspartame and control bars affected GLP-1, GIP, tyrosine and phenylalanine levels equally in both aspartame sensitive and non-sensitive subjects. Using a comprehensive battery of psychological tests, biochemistry and state-of-the-art metabonomics, there was no evidence of any acute adverse responses to aspartame.
This independent study gives reassurance to both regulatory bodies and the public that acute ingestion of aspartame does not have any detectable psychological or metabolic effects in humans. ISRCTN Registry ISRCTN39650237.
Chen, Xiaoxia; Zhao, Jing; Chen, Tianshu; Gao, Tao; Zhu, Xiaoli; Li, Genxi
2018-01-01
Comprehensive analysis of the expression level and location of tumor-associated membrane proteins (TMPs) is of vital importance for the profiling of tumor cells. Currently, two kinds of independent techniques, i.e. ex situ detection and in situ imaging, are usually required for the quantification and localization of TMPs respectively, resulting in some inevitable problems. Methods: Herein, based on a well-designed and fluorophore-labeled DNAzyme, we develop an integrated and facile method, in which imaging and quantification of TMPs in situ are achieved simultaneously in a single system. The labeled DNAzyme not only produces localized fluorescence for the visualization of TMPs but also catalyzes the cleavage of a substrate to produce quantitative fluorescent signals that can be collected from solution for the sensitive detection of TMPs. Results: Results from the DNAzyme-based in situ imaging and quantification of TMPs match well with traditional immunofluorescence and western blotting. In addition to the advantage of two-in-one, the DNAzyme-based method is highly sensitive, allowing the detection of TMPs in only 100 cells. Moreover, the method is nondestructive: cells retain their physiological activity after analysis and can be cultured for other applications. Conclusion: The integrated system provides solid results for both imaging and quantification of TMPs, making it a competitive alternative to some traditional techniques for the analysis of TMPs, with potential application as a toolbox in the future.
Burnett, Jonathan L; Miley, Harry S; Milbrath, Brian D
2016-03-01
In 2014 the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) undertook an Integrated Field Exercise (IFE14) in Jordan. The exercise consisted of a simulated 0.5-2 kT underground nuclear explosion triggering an On-site Inspection (OSI) to search for evidence of a Treaty violation. This research paper evaluates two of the OSI techniques used during the IFE14, laboratory-based gamma-spectrometry of soil samples and in-situ gamma-spectrometry, both of which were implemented to search for 17 OSI relevant particulate radionuclides indicative of nuclear explosions. The detection sensitivity is evaluated using real IFE and model data. It indicates that higher sensitivity laboratory measurements are the optimum technique during the IFE and within the Treaty/Protocol-specified OSI timeframes. Copyright © 2016 Elsevier Ltd. All rights reserved.
Yang, Weichao; Xu, Kui; Lian, Jijian; Bin, Lingling; Ma, Chao
2018-05-01
Flooding is a serious challenge that increasingly affects residents as well as policymakers, and flood vulnerability assessment is becoming increasingly relevant worldwide. The purpose of this study is to develop an approach that reveals the relationship between exposure, sensitivity and adaptive capacity for better flood vulnerability assessment, based on the fuzzy comprehensive evaluation method (FCEM) and the coordinated development degree model (CDDM). The approach is organized into three parts: establishment of an index system; assessment of exposure, sensitivity and adaptive capacity; and multiple flood vulnerability assessment. A hydrodynamic model and statistical data are employed to establish the index system; FCEM is used to evaluate exposure, sensitivity and adaptive capacity; and CDDM is applied to express the relationship of the three components of vulnerability. Six multiple flood vulnerability types and four levels are proposed to assess flood vulnerability from multiple perspectives. The approach is then applied to assess the spatial pattern of flood vulnerability in Hainan's eastern area, China. Based on the results of the multiple flood vulnerability assessment, a decision-making process for rational allocation of limited resources is proposed and applied to the study area. The study shows that multiple flood vulnerability assessment can evaluate vulnerability more completely and give decision makers more comprehensive information for decision-making. In summary, this study provides a new way to assess flood vulnerability and support disaster prevention decisions. Copyright © 2018 Elsevier Ltd. All rights reserved.
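One common formulation of the coordinated development degree model combines a coordination degree C (how balanced the subsystem scores are) with a development degree T (their weighted level) into D = sqrt(C·T). Whether this exact form matches the study's CDDM is an assumption, and the scores and equal weights below are purely illustrative:

```python
# A common CDDM form for three subsystem scores in [0, 1].
# Scores, weights, and the exact formula are illustrative assumptions.
exposure, sensitivity, adaptive = 0.6, 0.8, 0.5
w = (1 / 3, 1 / 3, 1 / 3)

mean = (exposure + sensitivity + adaptive) / 3
C = ((exposure * sensitivity * adaptive) / mean ** 3) ** (1 / 3)  # coordination degree
T = w[0] * exposure + w[1] * sensitivity + w[2] * adaptive        # development degree
D = (C * T) ** 0.5                                                # coordinated development degree
```

C equals 1 only when the three scores are identical, so D penalizes imbalance between exposure, sensitivity and adaptive capacity even when their average is high; thresholds on D then define vulnerability types and levels.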
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-11-01
Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from the traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial value for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
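The "three-step" idea (screen for sensitive parameters, pick a starting point, then run downhill simplex on the reduced set) can be sketched on a toy objective; SciPy's Nelder-Mead implementation stands in for the downhill simplex, and the objective, threshold, and perturbation size are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective standing in for the comprehensive evaluation metric:
# parameters x[0] and x[1] are sensitive, x[2] is effectively inert.
def metric(x):
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + 1e-8 * x[2] ** 2

x0 = np.zeros(3)

# Step 1: one-at-a-time screening to find the sensitive parameters.
eps = 0.1
base = metric(x0)
sens = [abs(metric(x0 + eps * np.eye(3)[i]) - base) for i in range(3)]
tunable = [i for i, s in enumerate(sens) if s > 1e-3]

# Steps 2-3: start from the screened point, then downhill simplex
# (Nelder-Mead) over the reduced parameter set only.
def reduced(y):
    x = x0.copy()
    x[tunable] = y
    return metric(x)

res = minimize(reduced, x0[tunable], method="Nelder-Mead")
```

Dropping the insensitive parameter shrinks the simplex dimension, which is exactly how the screening step accelerates convergence in the methodology above.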
Cantwell, H; O'Keeffe, M
2006-02-01
The Premi Test, a test kit designed for the rapid screening of antimicrobial residues in meat, fish and eggs, was evaluated and compared with the (modified) One-Plate Test, an agar diffusion assay. The performance characteristics described for qualitative, screening methods in Commission Decision 2002/657/EC were used for the evaluation. The Premi Test was found to detect a range of antimicrobials to MRL levels in kidney fluid but to have poorer sensitivity for some antimicrobials such as tetracyclines, sulphonamides, flumequine and streptomycin. The test was found not to be sensitive for the banned antimicrobial chloramphenicol. The One-Plate Test was found to detect most tetracyclines and flumequine to MRL levels but to be less sensitive than the Premi Test for most of the other classes of antimicrobials. Neither test alone provides a comprehensive screening test for antimicrobial residues in kidney at MRL levels. However, the Premi Test is fast, easy to use and rugged and, in combination with other antimicrobial tests, may be used to provide a comprehensive screening system for antimicrobials in tissues.
Bedside diagnosis of dysphagia: a systematic review.
O'Horo, John C; Rogus-Pulia, Nicole; Garcia-Arguello, Lisbeth; Robbins, JoAnne; Safdar, Nasia
2015-04-01
Dysphagia is associated with aspiration, pneumonia, and malnutrition, but remains challenging to identify at the bedside. A variety of exam protocols and maneuvers are commonly used, but the efficacy of these maneuvers is highly variable. We conducted a comprehensive search of 7 databases, including MEDLINE, Embase, and Scopus, from each database's earliest inception through June 9, 2014. Studies reporting diagnostic performance of a bedside examination maneuver compared to a reference gold standard (videofluoroscopic swallow study or flexible endoscopic evaluation of swallowing with sensory testing) were included for analysis. From each study, data were abstracted based on the type of diagnostic method and reference standard study population and inclusion/exclusion characteristics, design, and prediction of aspiration. The search strategy identified 38 articles meeting inclusion criteria. Overall, most bedside examinations lacked sufficient sensitivity to be used for screening purposes across all patient populations examined. Individual studies found dysphonia assessments, abnormal pharyngeal sensation assessments, dual axis accelerometry, and 1 description of water swallow testing to be sensitive tools, but none were reported as consistently sensitive. A preponderance of identified studies was in poststroke adults, limiting the generalizability of results. No bedside screening protocol has been shown to provide adequate predictive value for presence of aspiration. Several individual exam maneuvers demonstrated reasonable sensitivity, but reproducibility and consistency of these protocols was not established. More research is needed to design an optimal protocol for dysphagia detection. © 2015 Society of Hospital Medicine.
NASA Astrophysics Data System (ADS)
Meng, Xia; Guo, Luo
2017-07-01
Huangnan Tibetan Autonomous Prefecture is located in the three-river source region (the TRSR) in the Qinghai-Tibetan Plateau, China, which is characterized by ecological sensitivity and vulnerability. In this paper, we integrated remote sensing images, field investigation, and socioeconomic data and, with the help of the analytic hierarchy process (AHP) and comprehensive index methods, built a sensitivity assessment system to calculate ecological sensitivity scores and assign levels for the study area. Results show that moderately to highly ecologically sensitive areas account for 54.02% of the study area and are distributed in its south, north and northeast; the most ecologically sensitive areas are mainly located in Zeekog, in the northwest of Huangnan, while the other counties have relatively lower sensitivity. The results will facilitate future regional management and planning for decision-makers.
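In an AHP-based assessment system like this, factor weights are derived from a pairwise comparison matrix, conventionally as its normalized principal eigenvector, with a consistency ratio check. A sketch with a hypothetical 3x3 judgment matrix (the values are not the study's):

```python
import numpy as np

# Illustrative pairwise comparison matrix for three sensitivity factors
# (hypothetical judgments on Saaty's 1-9 scale, with reciprocals below
# the diagonal).
A = np.array([
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
])

# AHP weights: principal eigenvector of A, normalized to sum to 1.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()

# Consistency ratio CR = CI / RI, with RI = 0.58 for n = 3 (Saaty's table);
# judgments are conventionally accepted when CR < 0.1.
n = 3
ci = (vals.real[k] - n) / (n - 1)
cr = ci / 0.58
```

The resulting weights (here roughly 0.64/0.26/0.10) then multiply the normalized factor scores in the comprehensive index to give each grid cell's sensitivity score.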
Wang, Zixing; Wang, Yuyan; Sui, Xin; Zhang, Wei; Shi, Ruihong; Zhang, Yingqiang; Dang, Yonghong; Qiao, Zhen; Zhang, Biao; Song, Wei; Jiang, Jingmei
2015-07-01
Widely used (18)F 2'-deoxy-2'-fluoro-d-glucose (FDG) positron emission tomography (PET) can be problematic with false positives in cancer imaging. This study aims to investigate the diagnostic accuracy of a candidate PET tracer, (18)F 2',3'-dideoxy-3'-fluoro-2-thiothymidine (FLT), in diagnosing pulmonary lesions compared with FDG. After comprehensive search and study selection, a meta-analysis was performed on data from 548 patients pooled from 17 studies for evaluating FLT accuracy, in which data from 351 patients pooled from ten double-tracer studies was used for direct comparison with FDG. Weighted sensitivity and specificity were used as main indicators of test performance. Individual data was extracted and patient subgroup analyses were performed. Overall, direct comparisons showed lower sensitivity (0.80 vs. 0.89) yet higher specificity (0.82 vs. 0.66) for FLT compared with FDG (both p<0.01). Patient subgroup analysis showed FLT was less sensitive than FDG in detecting lung cancers staged as T1 or T2, and those ≤2.0 cm in diameter (0.81 vs. 0.93, and 0.53 vs. 0.78, respectively, both p<0.05), but was comparable for cancers staged as T3 or T4, and those >2.0 cm in diameter (0.95 vs. 1.00, 0.96 vs. 0.88, both p>0.05). For benignities, FLT performed better compared with FDG in ruling out inflammation-based lesions (0.57 vs. 0.32, p<0.05), and demonstrated greater specificity regardless of lesion sizes. Although FLT cannot replace FDG in detecting small and early lung cancers, it may help to prevent patients with larger or inflammatory lesions from cancer misdiagnosis or even over-treatment. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Bittner, Mario; Faes, Livia; Boehni, Sophie C; Bachmann, Lucas M; Schlingemann, Reinier O; Schmid, Martin K
2016-12-07
Colour Doppler analysis of ophthalmic vessels has been proposed as a promising tool in the diagnosis of various eye diseases, but the available diagnostic evidence has not yet been assessed systematically. We performed a comprehensive systematic review of the literature on the diagnostic properties of Colour Doppler imaging (CDI) assessing ophthalmic vessels and provide an inventory of the available evidence. Eligible papers were searched electronically in (Pre) Medline, Embase and Scopus, and via cross-checking of reference lists. The minimum requirement for inclusion was the availability of original data and the possibility to construct a two-by-two table. Study selection, critical appraisal using the QUADAS II instrument and extraction of salient study characteristics were performed in duplicate. Sensitivity and specificity were computed for each study. We included 11 studies (15 two-by-two tables) of moderate methodological quality enrolling 820 participants (range 30 to 118). Overall, 44.4% of participants were female (range 37-59% in specific subgroups). CDI was assessed for the diagnosis of internal carotid stenosis, diabetic retinopathy, glaucoma, and branch or central retinal vein occlusion. There were insufficient data to pool the results for specific illnesses. For the assessments of ophthalmic arteries, mean sensitivity was 0.69 (range 0.27-0.96) with a corresponding mean specificity of 0.83 (range 0.70-0.96). Mean sensitivity of the central retinal artery assessments was 0.58 (range 0.31-0.84) and the corresponding mean specificity was 0.82 (range 0.63-0.94). Robust assessments of the diagnostic value of colour Doppler analysis remain uncommon, limiting the possibilities to extrapolate its true potential for clinical practice. PROSPERO 2014:CRD42014014027.
Willett, N J; Thote, T; Hart, M; Moran, S; Guldberg, R E; Kamath, R V
2016-09-01
The development of effective therapies for cartilage protection has been limited by a lack of efficient quantitative cartilage imaging modalities in pre-clinical in vivo models. Our objectives were two-fold: first, to validate a new contrast-enhanced 3D imaging analysis technique, equilibrium partitioning of an ionic contrast agent-micro computed tomography (EPIC-μCT), in a rat medial meniscal transection (MMT) osteoarthritis (OA) model; and second, to quantitatively assess the sensitivity of EPIC-μCT to detect the effects of matrix metalloproteinase inhibitor (MMPi) therapy on cartilage degeneration. Rats underwent MMT surgery and tissues were harvested at 1, 2, and 3 weeks post-surgery or rats received an MMPi or vehicle treatment and tissues harvested 3 weeks post-surgery. Parameters of disease progression were evaluated using histopathology and EPIC-μCT. Correlations and power analyses were performed to compare the techniques. EPIC-μCT was shown to provide simultaneous 3D quantification of multiple parameters, including cartilage degeneration and osteophyte formation. In MMT animals treated with MMPi, OA progression was attenuated, as measured by 3D parameters such as lesion volume and osteophyte size. A post-hoc power analysis showed that 3D parameters for EPIC-μCT were more sensitive than 2D parameters requiring fewer animals to detect a therapeutic effect of MMPi. 2D parameters were comparable between EPIC-μCT and histopathology. This study demonstrated that EPIC-μCT has high sensitivity to provide 3D structural and compositional measurements of cartilage and bone in the joint. EPIC-μCT can be used in combination with histology to provide a comprehensive analysis to screen new potential therapies. Copyright © 2016 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
Impact of comprehensive two-dimensional gas chromatography with mass spectrometry on food analysis.
Tranchida, Peter Q; Purcaro, Giorgia; Maimone, Mariarosa; Mondello, Luigi
2016-01-01
Comprehensive two-dimensional gas chromatography with mass spectrometry has been on the separation-science scene for about 15 years. This three-dimensional method has made a great positive impact on various fields of research, and among these that related to food analysis is certainly at the forefront. The present critical review is based on the use of comprehensive two-dimensional gas chromatography with mass spectrometry in the untargeted (general qualitative profiling and fingerprinting) and targeted analysis of food volatiles; attention is focused not only on its potential in such applications, but also on how recent advances in comprehensive two-dimensional gas chromatography with mass spectrometry will potentially be important for food analysis. Additionally, emphasis is devoted to the many instances in which straightforward gas chromatography with mass spectrometry is a sufficiently-powerful analytical tool. Finally, possible future scenarios in the comprehensive two-dimensional gas chromatography with mass spectrometry food analysis field are discussed. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Instruction of Research-Based Comprehension Strategies in Basal Reading Programs
ERIC Educational Resources Information Center
Pilonieta, Paola
2010-01-01
Research supports using research-based comprehension strategies; however, comprehension strategy instruction is not highly visible in basal reading programs or classroom instruction, resulting in many students who struggle with comprehension. A content analysis examined which research-based comprehension strategies were presented in five…
Comprehensive rotorcraft analysis methods
NASA Technical Reports Server (NTRS)
Stephens, Wendell B.; Austin, Edward E.
1988-01-01
The development and application of comprehensive rotorcraft analysis methods in the field of rotorcraft technology are described. These large scale analyses and the resulting computer programs are intended to treat the complex aeromechanical phenomena that describe the behavior of rotorcraft. They may be used to predict rotor aerodynamics, acoustics, performance, stability and control, handling qualities, loads and vibrations, structures, dynamics, and aeroelastic stability characteristics for a variety of applications including research, preliminary and detail design, and evaluation and treatment of field problems. The principal comprehensive methods developed or under development in recent years and generally available to the rotorcraft community because of US Army Aviation Research and Technology Activity (ARTA) sponsorship of all or part of the software systems are the Rotorcraft Flight Simulation (C81), Dynamic System Coupler (DYSCO), Coupled Rotor/Airframe Vibration Analysis Program (SIMVIB), Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics (CAMRAD), General Rotorcraft Aeromechanical Stability Program (GRASP), and Second Generation Comprehensive Helicopter Analysis System (2GCHAS).
Comprehensive care plus creative architecture.
Easter, James G
2005-01-01
The delivery of high-quality, comprehensive cancer care and the treatment environment go hand in hand with the patient's recovery. When the planning and design of a comprehensive cancer care program runs parallel to the operational expectations and functional standards, the building users (patients, staff, and physicians) benefit significantly. This behavioral response requires a sensitive interface during the campus master planning, architectural programming, and design phases. Each building component and user functioning along the "continuum of care" will have different expectations, programmatic needs, and design responses. This article addresses the community- and hospital-based elements of this continuum. The environment does affect the patient care and the care-giving team members. It may be a positive or, unfortunately, a negative response.
Little, Callie W.
2015-01-01
The present study is an examination of the genetic and environmental effects on the associations among reading fluency, spelling and earlier reading comprehension on a later reading comprehension outcome (FCAT) in a combined sample of 3rd and 4th grade students using data from the 2011-2012 school year of the Florida Twin project on Reading (Taylor et al., 2013). A genetically sensitive model was applied to the data with results indicating a common genetic component among all four measures, along with shared and non-shared environmental influences common between reading fluency, spelling and FCAT. PMID:26770052
DOT National Transportation Integrated Search
1995-05-01
In October, of 1992, the Housatonic Area Regional Transit (HART) District published a planning study providing an in-depth analysis of its fixed route bus transit service. This comprehensive operational analysis (COA) was the first detailed analysis ...
Azadeh, Ali; Sheikhalishahi, Mohammad
2015-06-01
A unique framework for performance optimization of generation companies (GENCOs) based on health, safety, environment, and ergonomics (HSEE) indicators is presented. To rank this sector of industry, a combination of data envelopment analysis (DEA), principal component analysis (PCA), and Taguchi methods is used for all branches of GENCOs. These methods are applied in an integrated manner to measure the performance of each GENCO. The preferred model among DEA, PCA, and Taguchi is selected based on sensitivity analysis and maximum correlation between rankings. To achieve the stated objectives, noise is introduced into the input data. The results show that Taguchi outperforms the other methods. Moreover, a comprehensive experiment is carried out to identify the most influential factor for ranking GENCOs. The approach developed in this study could be used for continuous assessment and improvement of GENCOs' performance in supplying energy with respect to HSEE factors. The results of such studies would help managers to better understand their weak and strong points in terms of HSEE factors.
Identification of misspelled words without a comprehensive dictionary using prevalence analysis.
Turchin, Alexander; Chu, Julia T; Shubina, Maria; Einbinder, Jonathan S
2007-10-11
Misspellings are common in medical documents and can be an obstacle to information retrieval. We evaluated an algorithm to identify misspelled words through analysis of their prevalence in a representative body of text. We evaluated the algorithm's accuracy of identifying misspellings of 200 anti-hypertensive medication names on 2,000 potentially misspelled words randomly selected from narrative medical documents. Prevalence ratios (the frequency of the potentially misspelled word divided by the frequency of the non-misspelled word) in physician notes were computed by the software for each of the words. The software results were compared to the manual assessment by an independent reviewer. Area under the ROC curve for identification of misspelled words was 0.96. Sensitivity, specificity, and positive predictive value were 99.25%, 89.72% and 82.9% for the prevalence ratio threshold (0.32768) with the highest F-measure (0.903). Prevalence analysis can be used to identify and correct misspellings with high accuracy.
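The prevalence-ratio idea above can be sketched in a few lines: count word frequencies in a corpus, divide the frequency of a candidate word by that of its correctly spelled counterpart, and flag the candidate when the ratio falls below a threshold. The corpus, words, and counts below are invented for illustration; only the threshold value (0.32768) comes from the abstract.

```python
from collections import Counter

# Hypothetical word frequencies from a body of physician notes.
corpus = ["lisinopril"] * 970 + ["lisinoprill"] * 3 + ["atenolol"] * 500
counts = Counter(corpus)

THRESHOLD = 0.32768  # prevalence-ratio cutoff with the highest F-measure in the study

def prevalence_ratio(candidate: str, reference: str, counts: Counter) -> float:
    """Frequency of the potentially misspelled word divided by the
    frequency of its non-misspelled counterpart."""
    if counts[reference] == 0:
        return float("inf")
    return counts[candidate] / counts[reference]

def is_misspelling(candidate: str, reference: str, counts: Counter) -> bool:
    """Flag the candidate as a misspelling when it is rare relative
    to the reference spelling."""
    return prevalence_ratio(candidate, reference, counts) < THRESHOLD

print(is_misspelling("lisinoprill", "lisinopril", counts))  # True
```

A correctly spelled word compared against itself yields a ratio of 1.0 and is never flagged, which is why rare-but-valid words need a reference spelling to compare against.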
de Boer, Johannes F.; Leitgeb, Rainer; Wojtkowski, Maciej
2017-01-01
Optical coherence tomography (OCT) has become one of the most successful optical technologies implemented in medicine and clinical practice, mostly due to the possibility of non-invasive and non-contact imaging by detecting back-scattered light. OCT has gone through a tremendous development over the past 25 years. From its initial inception in 1991 [Science 254, 1178 (1991)] it has become an indispensable medical imaging technology in ophthalmology. Also in fields like cardiology and gastroenterology the technology is envisioned to become a standard of care. A key contributor to the success of OCT has been the sensitivity and speed advantage offered by Fourier domain OCT. In this review paper the development of FD-OCT will be revisited, providing a single comprehensive framework to derive the sensitivity advantage of both SD- and SS-OCT. We point out the key aspects of the physics and the technology that have enabled a more than 2 orders of magnitude increase in sensitivity, and as a consequence an increase in the imaging speed without loss of image quality. This speed increase provided a paradigm shift from point sampling to comprehensive 3D in vivo imaging, whose clinical impact is still actively explored by a large number of researchers worldwide. PMID:28717565
Comprehensive Design Reliability Activities for Aerospace Propulsion Systems
NASA Technical Reports Server (NTRS)
Christenson, R. L.; Whitley, M. R.; Knight, K. C.
2000-01-01
This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion system mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided: one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources are listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.
NASA Astrophysics Data System (ADS)
Desa, M. S. M.; Ibrahim, M. H. W.; Shahidan, S.; Ghadzali, N. S.; Misri, Z.
2018-04-01
Acoustic emission (AE) is a non-destructive testing (NDT) technique that can be used to assess damage in concrete structures, such as cracking and corrosion, to monitor structural stability, and to track the energy released as cracks open and grow. This article gives a comprehensive review of the AE technique and its application to concrete structures for structural health monitoring (SHM). Previous research on the use of AE in structural engineering, including dams, bridges, and buildings, is reviewed. The assessment of the AE technique focuses on the fundamentals of parametric and signal waveform analysis and on its capability for structural monitoring. Moreover, the applications of AE are summarized and highlighted for future reference.
Diaby, Vakaramoko; Goeree, Ron
2014-02-01
In recent years, the quest for more comprehensiveness, structure and transparency in reimbursement decision-making in healthcare has prompted the research into alternative decision-making frameworks. In this environment, multi-criteria decision analysis (MCDA) is arising as a valuable tool to support healthcare decision-making. In this paper, we present the main MCDA decision support methods (elementary methods, value-based measurement models, goal programming models and outranking models) using a case study approach. For each family of methods, an example of how an MCDA model would operate in a real decision-making context is presented from a critical perspective, highlighting the parameters setting, the selection of the appropriate evaluation model as well as the role of sensitivity and robustness analyses. This study aims to provide a step-by-step guide on how to use MCDA methods for reimbursement decision-making in healthcare.
Performance optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.
1991-01-01
As part of a center-wide activity at NASA Langley Research Center to develop multidisciplinary design procedures by accounting for discipline interactions, a performance design optimization procedure is developed. The procedure optimizes the aerodynamic performance of rotor blades by selecting the point of taper initiation, root chord, taper ratio, and maximum twist which minimize hover horsepower while not degrading forward flight performance. The procedure uses HOVT (a strip theory momentum analysis) to compute the horsepower required for hover and the comprehensive helicopter analysis program CAMRAD to compute the horsepower required for forward flight and maneuver. The optimization algorithm consists of the general purpose optimization program CONMIN and approximate analyses. Sensitivity analyses consisting of derivatives of the objective function and constraints are carried out by forward finite differences. The procedure is applied to a test problem which is an analytical model of a wind tunnel model of a utility rotor blade.
Good modeling practice guidelines for applying multimedia models in chemical assessments.
Buser, Andreas M; MacLeod, Matthew; Scheringer, Martin; Mackay, Don; Bonnell, Mark; Russell, Mark H; DePinto, Joseph V; Hungerbühler, Konrad
2012-10-01
Multimedia mass balance models of chemical fate in the environment have been used for over 3 decades in a regulatory context to assist decision making. As these models become more comprehensive, reliable, and accepted, there is a need to recognize and adopt principles of Good Modeling Practice (GMP) to ensure that multimedia models are applied with transparency and adherence to accepted scientific principles. We propose and discuss 6 principles of GMP for applying existing multimedia models in a decision-making context, namely 1) specification of the goals of the model assessment, 2) specification of the model used, 3) specification of the input data, 4) specification of the output data, 5) conduct of a sensitivity and possibly also uncertainty analysis, and finally 6) specification of the limitations and limits of applicability of the analysis. These principles are justified and discussed with a view to enhancing the transparency and quality of model-based assessments. Copyright © 2012 SETAC.
Methods for the analysis of azo dyes employed in food industry--A review.
Yamjala, Karthik; Nainar, Meyyanathan Subramania; Ramisetti, Nageswara Rao
2016-02-01
A wide variety of azo dyes are generally added for coloring food products not only to make them visually aesthetic but also to reinstate the original appearance lost during the production process. However, many countries in the world have banned the use of most of the azo dyes in food and their usage is highly regulated by domestic and export food supplies. The regulatory authorities and food analysts adopt highly sensitive and selective analytical methods for monitoring as well as assuring the quality and safety of food products. The present manuscript presents a comprehensive review of various analytical techniques used in the analysis of azo dyes employed in food industries of different parts of the world. A brief description on the use of different extraction methods such as liquid-liquid, solid phase and membrane extraction has also been presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
Peckner, Ryan; Myers, Samuel A; Jacome, Alvaro Sebastian Vaca; Egertson, Jarrett D; Abelin, Jennifer G; MacCoss, Michael J; Carr, Steven A; Jaffe, Jacob D
2018-05-01
Mass spectrometry with data-independent acquisition (DIA) is a promising method to improve the comprehensiveness and reproducibility of targeted and discovery proteomics, in theory by systematically measuring all peptide precursors in a biological sample. However, the analytical challenges involved in discriminating between peptides with similar sequences in convoluted spectra have limited its applicability in important cases, such as the detection of single-nucleotide polymorphisms (SNPs) and alternative site localizations in phosphoproteomics data. We report Specter (https://github.com/rpeckner-broad/Specter), an open-source software tool that uses linear algebra to deconvolute DIA mixture spectra directly through comparison to a spectral library, thus circumventing the problems associated with typical fragment-correlation-based approaches. We validate the sensitivity of Specter and its performance relative to that of other methods, and show that Specter is able to successfully analyze cases involving highly similar peptides that are typically challenging for DIA analysis methods.
NASA Astrophysics Data System (ADS)
Solano, Ilaria; Parisse, Pietro; Gramazio, Federico; Ianeselli, Luca; Medagli, Barbara; Cavalleri, Ornella; Casalis, Loredana; Canepa, Maurizio
2017-11-01
The comprehension of the mechanisms of interaction between functional layers and proteins is relevant for the development of sensitive and precise biosensors. Here we report a study that combines Atomic Force Microscopy and Spectroscopic Ellipsometry to investigate the His-Ni-NTA mediated interaction of 6His-tagged Small Ubiquitin-like Modifier (SUMO) protein with self-assembled monolayers of NTA-terminated alkanethiols. The use of AFM-based nanolithographic tools and the analysis of ellipsometric spectra in situ and ex situ provided us with a solid method to disentangle the effects of the Ni(II)-mediated interaction between the NTA layer and the 6His-tagged SUMO and to accurately determine, under physiological conditions, the thickness of the SUMO layer. This investigation is a first step towards the study of layered systems of greater complexity, of which the NTA/6His-tagged SUMO is a prototypical example.
Cues, quantification, and agreement in language comprehension.
Tanner, Darren; Bulkes, Nyssa Z
2015-12-01
We investigated factors that affect the comprehension of subject-verb agreement in English, using quantification as a window into the relationship between morphosyntactic processes in language production and comprehension. Event-related brain potentials (ERPs) were recorded while participants read sentences with grammatical and ungrammatical verbs, in which the plurality of the subject noun phrase was either doubly marked (via overt plural quantification and morphological marking on the noun) or singly marked (via only plural morphology on the noun). Both acceptability judgments and the ERP data showed heightened sensitivity to agreement violations when quantification provided an additional cue to the grammatical number of the subject noun phrase, over and above plural morphology. This is consistent with models of grammatical comprehension that emphasize feature prediction in tandem with cue-based memory retrieval. Our results additionally contrast with those of prior studies that showed no effects of plural quantification on agreement in language production. These findings therefore highlight some nontrivial divergences in the cues and mechanisms supporting morphosyntactic processing in language production and comprehension.
Del Din, Silvia; Godfrey, Alan; Rochester, Lynn
2016-05-01
Measurement of gait is becoming important as a tool to identify disease and disease progression, yet to date its application is limited largely to specialist centers. Wearable devices enable gait to be measured in naturalistic environments; however, questions remain regarding validity. Previous research suggests that, when compared with a laboratory reference, measurement accuracy is acceptable for mean gait characteristics but not for variability or asymmetry characteristics. Some fundamental reasons for this have been presented (e.g., synchronization, different sampling frequencies), but to date this has not been systematically examined. The aims of this study were to: 1) quantify a comprehensive range of gait characteristics measured using a single triaxial accelerometer-based monitor; 2) examine outcomes and monitor performance in measuring gait in older adults and those with Parkinson's disease (PD); and 3) carry out a detailed comparison with characteristics derived from an instrumented walkway to account for any discrepancies. Fourteen gait characteristics were quantified in 30 people with incident PD and 30 healthy age-matched controls. Of the 14 gait characteristics compared, agreement between instruments was excellent for four (ICCs 0.913-0.983), moderate for four (ICCs 0.508-0.766), and poor for six characteristics (ICCs 0.637-0.370). Further analysis revealed that these differences reflect an increased sensitivity of accelerometry to detect motion, rather than measurement error. This is most likely because accelerometry measures gait as a continuous activity rather than as discrete footfall events, per instrumented tools. The increased sensitivity shown for these characteristics will be of particular interest to researchers keen to interpret "real-world" gait data. In conclusion, use of a body-worn monitor is recommended for the measurement of gait but is likely to yield more sensitive data for asymmetry and variability features.
Shuttle filter study. Volume 1: Characterization and optimization of filtration devices
NASA Technical Reports Server (NTRS)
1974-01-01
A program to develop a new technology base for filtration equipment and comprehensive fluid particulate contamination management techniques was conducted. The study has application to the systems used in the space shuttle and space station projects. The scope of the program is as follows: (1) characterization and optimization of filtration devices, (2) characterization of contaminant generation and contaminant sensitivity at the component level, and (3) development of a comprehensive particulate contamination management plan for space shuttle fluid systems.
Chongqing, Tan; Liubao, Peng; Xiaohui, Zeng; Jianhe, Li; Xiaomin, Wan; Gannong, Chen; Siying, Wang; Lihui, Ouyang; Ziying, Zhao
2014-03-01
Postoperative adjuvant chemotherapy with capecitabine and oxaliplatin was first recommended for resectable gastric cancer patients in the 2011 Chinese National Comprehensive Cancer Network Clinical Practice Guidelines in Oncology: Gastric Cancer, but the economic influence of this therapy in China is unknown. The aim of the present study was to determine the cost-effectiveness of adjuvant chemotherapy with capecitabine and oxaliplatin after a gastrectomy with extended (D2) lymph-node dissection, compared with a D2 gastrectomy alone, for patients with stage II-IIIB gastric cancer. On the basis of data from the CLASSIC trial, a Markov model was created to determine economic and clinical data for patients in the chemotherapy and surgery group (CSG) and the surgery-only group (SOG). The costs, presented in 2010 US dollars and estimated from the perspective of the Chinese health-care system, were obtained from the published literature and the local health system. The utilities were based on published literature. Costs, life years (LYs), quality-adjusted life years (QALYs), and incremental cost-effectiveness ratios (ICER) were estimated. A lifetime horizon and a 3 % annual discount rate were used. One-way and probabilistic sensitivity analyses were performed. For the base case, the CSG compared with SOG would increase LYs and QALYs in a 3-, 5-, 10- or 30-year time horizon (except the QALYs at 3 or 5 years). In the short run (such as in 3 or 5 years), the medical costs would increase owing to adjuvant chemotherapy of capecitabine plus oxaliplatin after D2 gastrectomy, but in the long run the costs would decline. The ICERs suggested that the SOG was dominant at 3 or 5 years and the CSG was dominant at 10 or 30 years. The one-way sensitivity analysis showed that the utility of disease-free survival for 1-10 years for the SOG and the cost of oxaliplatin were the most influential parameters. 
The probabilistic sensitivity analysis predicted a 98.6 % likelihood that the ICER for the CSG would be less than US$13,527/QALY (three times the per capita gross domestic product of China). For patients in China with resectable disease, our results suggest that adjuvant chemotherapy with capecitabine plus oxaliplatin after a D2 gastrectomy is cost-saving and dominant in the long run on the basis of a current clinical trial, compared with treatment with a D2 gastrectomy alone.
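The decision rule in the study above rests on the incremental cost-effectiveness ratio: incremental cost divided by incremental effect, compared against a willingness-to-pay threshold (here three times China's per capita GDP, US$13,527/QALY). The sketch below uses invented cost and QALY figures purely for illustration; they are not the trial's numbers, and only the threshold comes from the abstract.

```python
def icer(cost_a: float, cost_b: float, effect_a: float, effect_b: float) -> float:
    """Incremental cost-effectiveness ratio of strategy A vs. strategy B,
    expressed as cost per unit of effect (e.g., per QALY) gained."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# Hypothetical 30-year figures (illustration only, not from the CLASSIC trial):
cost_csg, qaly_csg = 21_000.0, 8.4   # chemotherapy + surgery group
cost_sog, qaly_sog = 19_500.0, 7.9   # surgery-only group

WTP = 13_527.0  # willingness-to-pay: 3x per capita GDP of China (per the abstract)

ratio = icer(cost_csg, cost_sog, qaly_csg, qaly_sog)
print(ratio)        # 3000.0 dollars per QALY gained
print(ratio < WTP)  # True: cost-effective at this threshold
```

When the incremental effect is positive and the incremental cost is negative, the new strategy "dominates" (cheaper and more effective) and no ratio comparison is needed.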
Treglia, Giorgio; Sadeghi, Ramin; Annunziata, Salvatore; Zakavi, Seyed Rasoul; Caldarella, Carmelo; Muoio, Barbara; Bertagna, Francesco; Ceriani, Luca; Giovanella, Luca
2013-12-01
To systematically review and meta-analyse published data about the diagnostic performance of Fluorine-18-Fluorodeoxyglucose ((18)F-FDG) positron emission tomography (PET) and PET/computed tomography (PET/CT) in osteomyelitis related to diabetic foot. A comprehensive literature search of studies on (18)F-FDG-PET and PET/CT in patients with diabetic foot was performed. Pooled sensitivity, specificity, positive and negative likelihood ratio (LR+ and LR-) and diagnostic odds ratio (DOR) and area under the summary ROC curve of (18)F-FDG-PET and PET/CT in patients with osteomyelitis related to diabetic foot were calculated. Nine studies comprising 299 patients with diabetic foot were included in the qualitative analysis (systematic review) and discussed. The quantitative analysis (meta-analysis) of four selected studies provided the following results on a per patient-based analysis: sensitivity was 74% [95% confidence interval (95%CI): 60-85%], specificity 91% (95%CI: 85-96%), LR+ 5.56 (95%CI: 2.02-15.27), LR- 0.37 (95%CI: 0.10-1.35), and DOR 16.96 (95%CI: 2.06-139.66). The area under the summary ROC curve was 0.874. In patients with suspected osteomyelitis related to diabetic foot (18)F-FDG-PET and PET/CT demonstrated a high specificity, being potentially useful tools if combined with other imaging methods such as MRI. Nevertheless, the literature focusing on the use of (18)F-FDG-PET and PET/CT in this setting remains still limited. Copyright © 2013 Elsevier Ltd. All rights reserved.
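The per-patient summary measures reported above (sensitivity, specificity, LR+, LR-, DOR) all derive from the pooled two-by-two tables. A naive illustration is to sum counts across studies and apply the standard definitions; note that formal meta-analyses such as this one use weighted or bivariate random-effects models rather than simple summation, and the study counts below are invented, not the study's data.

```python
# Hypothetical per-study 2x2 counts: (TP, FN, TN, FP). Not real data.
studies = [
    (20, 8, 45, 4),
    (15, 5, 60, 6),
    (25, 7, 50, 5),
]

tp = sum(s[0] for s in studies)
fn = sum(s[1] for s in studies)
tn = sum(s[2] for s in studies)
fp = sum(s[3] for s in studies)

sensitivity = tp / (tp + fn)               # proportion of diseased correctly detected
specificity = tn / (tn + fp)               # proportion of non-diseased correctly excluded
lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio (LR+)
lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio (LR-)
dor = lr_pos / lr_neg                      # diagnostic odds ratio (DOR)

print(sensitivity)  # 0.75
```

A high specificity with modest sensitivity, as reported for (18)F-FDG-PET in diabetic foot osteomyelitis, translates into a large LR+ (useful for ruling in disease) but an LR- too close to 1 to confidently rule it out.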
Cheng, Ji; Gao, Jinbo; Shuai, Xiaoming; Wang, Guobin; Tao, Kaixiong
2016-06-28
Bariatric surgery has emerged as a competitive strategy for obese patients. However, its comparative efficacy against non-surgical treatments remains ill-defined, especially among nonseverely obese populations. We therefore conducted a systematic review and meta-analysis as an addition to the current literature. Literature was retrieved from the PubMed, Web of Science, EMBASE and Cochrane Library databases. Randomized trials comparing surgical with non-surgical therapies for obesity were included. A Revised Jadad's Scale and Risk of Bias Summary were employed for methodological assessment. Subgroup analysis, sensitivity analysis and publication bias assessment were performed to identify the source of heterogeneity, test the stability of the outcomes and detect potential publication bias, respectively. 25 randomized trials were eligible for inclusion, comprising 1194 participants in total. Both groups displayed good comparability concerning baseline parameters (P > 0.05). The pooled results for the primary endpoints (weight loss and diabetic remission) revealed a significant advantage for surgical patients over those receiving non-surgical treatments (P < 0.05). Furthermore, except for certain cardiovascular indicators, bariatric surgery was superior to conventional arms in terms of secondary metabolic parameters (P < 0.05). Additionally, the pooled outcomes were confirmed to be stable by sensitivity analysis. Although Egger's test (P < 0.01) and Begg's test (P < 0.05) indicated the presence of publication bias among the included studies, the "Trim-and-Fill" method verified that the pooled outcomes remained stable. Bariatric surgery is a better therapeutic option for weight loss, irrespective of follow-up duration, surgical technique and obesity level.
Advancing Clinical Proteomics via Analysis Based on Biological Complexes: A Tale of Five Paradigms.
Goh, Wilson Wen Bin; Wong, Limsoon
2016-09-02
Despite advances in proteomic technologies, idiosyncratic data issues such as incomplete coverage and inconsistency persist, resulting in large data holes. Moreover, because of naïve reliance on statistical testing and its accompanying p values, differential protein signatures identified from such proteomics data have little diagnostic power. Thus, deploying conventional analytics on proteomics data is insufficient for identifying novel drug targets or precise yet sensitive biomarkers. Complex-based analysis is a new analytical approach that has potential to resolve these issues but requires formalization. We categorize complex-based analysis into five method classes or paradigms and propose an even-handed yet comprehensive evaluation rubric based on both simulated and real data. The first four paradigms are well represented in the literature. The fifth and newest paradigm, the network-paired (NP) paradigm, represented by a method called Extremely Small SubNET (ESSNET), dominates in precision-recall and reproducibility, maintains strong performance in small sample sizes, and sensitively detects low-abundance complexes. In contrast, the commonly used over-representation analysis (ORA) and direct-group (DG) test paradigms maintain good overall precision but have severe reproducibility issues. The other two paradigms considered here are the hit-rate and rank-based network analysis paradigms; both of these have good precision-recall and reproducibility, but they do not consider low-abundance complexes. Therefore, given its strong performance, NP/ESSNET may prove to be a useful approach for improving the analytical resolution of proteomics data. Additionally, given its stability, it may also be a powerful new approach toward functional enrichment tests, much like its ORA and DG counterparts.
Validation of a Comprehensive Early Childhood Allergy Questionnaire.
Minasyan, Anna; Babajanyan, Arman; Campbell, Dianne E; Nanan, Ralph
2015-09-01
Parental questionnaires to assess incidence of pediatric allergic disease have been validated for use in school-aged children. Currently, there is no validated questionnaire-based assessment of food allergy, atopic dermatitis (AD), and asthma for infants and young children. The Comprehensive Early Childhood Allergy Questionnaire was designed for detecting AD, asthma, and IgE-mediated food allergies in children aged 1-5 years. A nested case-control design was applied. Parents of 150 children attending pediatric outpatient clinics completed the questionnaire before being clinically assessed by a pediatrician for allergies. Sensitivity, specificity, and reproducibility of the questionnaire were assessed. Seventy-seven children were diagnosed with one or more current allergic diseases. The questionnaire demonstrated high overall sensitivity of 0.93 (95% CI 0.86-0.98) with a specificity of 0.79 (95% CI 0.68-0.88). Questionnaire reproducibility was good with a kappa agreement rate for symptom-related questions of 0.45-0.90. Comprehensive Early Childhood Allergy Questionnaire accurately and reliably reflects the presence of allergies in children aged 1-5 years. Its use is warranted as a tool for determining prevalence of allergies in this pediatric age group. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
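The validation metrics reported here (sensitivity, specificity, and Cohen's kappa for reproducibility) follow standard definitions; a minimal sketch with hypothetical 2×2 counts (not the study's actual tables):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 diagnostic table."""
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(a, b, c, d):
    """Cohen's kappa from a 2x2 agreement table:
    a = both yes, b = first yes only, c = second yes only, d = both no."""
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical counts for illustration only
se, sp = sens_spec(tp=72, fn=5, tn=58, fp=15)
kappa = cohens_kappa(30, 4, 3, 63)
```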
Moshtagh-Khorasani, Majid; Akbarzadeh-T, Mohammad-R; Jahangiri, Nader; Khoobdel, Mehdi
2009-01-01
BACKGROUND: Aphasia diagnosis is particularly challenging due to linguistic uncertainty and vagueness, inconsistencies in the definition of aphasic syndromes, a large number of imprecise measurements, and natural diversity and subjectivity in test subjects as well as in the opinions of experts who diagnose the disease. METHODS: Fuzzy probability is proposed here as the basic framework for handling the uncertainties in medical diagnosis, and particularly aphasia diagnosis. To efficiently construct this fuzzy probabilistic mapping, statistical analysis is performed that constructs input membership functions and determines an effective set of input features. RESULTS: Considering the high sensitivity of performance measures to different distributions of testing/training sets, a statistical t-test of significance is applied to compare the fuzzy approach with neural network (NN) results as well as with the authors' earlier work using fuzzy logic. The proposed fuzzy probability estimator clearly provides better diagnosis for both classes of data sets. Specifically, for the first and second types of fuzzy probability classifiers, i.e. the spontaneous speech and comprehensive models, P-values are 2.24E-08 and 0.0059, respectively, strongly rejecting the null hypothesis. CONCLUSIONS: The technique is applied and compared on both comprehensive and spontaneous speech test data for diagnosis of four aphasia types: Anomic, Broca, Global and Wernicke. Statistical analysis confirms that the proposed approach can significantly improve accuracy using fewer aphasia features. PMID:21772867
Metabolomics method to comprehensively analyze amino acids in different domains.
Gu, Haiwei; Du, Jianhai; Carnevale Neto, Fausto; Carroll, Patrick A; Turner, Sally J; Chiorean, E Gabriela; Eisenman, Robert N; Raftery, Daniel
2015-04-21
Amino acids play essential roles in both metabolism and the proteome. Many studies have profiled free amino acids (FAAs) or proteins; however, few have connected the measurement of FAA with individual amino acids in the proteome. In this study, we developed a metabolomics method to comprehensively analyze amino acids in different domains, using two examples of different sample types and disease models. We first examined the responses of FAAs and insoluble-proteome amino acids (IPAAs) to the Myc oncogene in Tet21N human neuroblastoma cells. The metabolic and proteomic amino acid profiles were quite different, even under the same Myc condition, and their combination provided a better understanding of the biological status. In addition, amino acids were measured in 3 domains (FAAs, free and soluble-proteome amino acids (FSPAAs), and IPAAs) to study changes in serum amino acid profiles related to colon cancer. A penalized logistic regression model based on the amino acids from the three domains had better sensitivity and specificity than that from each individual domain. To the best of our knowledge, this is the first study to perform a combined analysis of amino acids in different domains, and indicates the useful biological information available from a metabolomics analysis of the protein pellet. This study lays the foundation for further quantitative tracking of the distribution of amino acids in different domains, with opportunities for better diagnosis and mechanistic studies of various diseases.
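A penalized logistic regression of the kind used to combine the three amino acid domains can be sketched in plain Python; this is a generic L2-penalized fit by gradient descent on toy data, not the study's actual model or penalty:

```python
import math

def fit_l2_logistic(X, y, lam=0.01, lr=0.5, epochs=2000):
    """L2-penalized logistic regression via batch gradient descent.
    Generic sketch; not the exact model or penalty used in the study."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw = [lam * wj for wj in w]          # gradient of the L2 penalty
        gb = 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi   # sigmoid(z) - label
            for j in range(d):
                gw[j] += err * xi[j] / n
            gb += err / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

# Toy single-feature data standing in for per-domain amino acid levels
X, y = [[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1]
w, b = fit_l2_logistic(X, y)
```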
NASA Astrophysics Data System (ADS)
Fu, Haiyan; Yin, Qiaobo; Xu, Lu; Wang, Weizheng; Chen, Feng; Yang, Tianming
2017-07-01
Geographical origin and authenticity against fraud are two essential aspects of food quality. In this work, a comprehensive quality evaluation method based on FT-NIR spectroscopy and chemometrics was proposed to address the geographical origins and authentication of Chinese Ganoderma lucidum (GL). Classification of 25 groups of GL samples (7 common species from 15 producing areas) was performed using near-infrared spectroscopy and interval-combination one-versus-one least squares support vector machine (IC-OVO-LS-SVM). Untargeted analysis of 4 adulterants (cheaper mushrooms) was performed by one-class partial least squares (OCPLS) modeling for each of the 7 GL species. After outlier diagnosis and comparison of the influence of different preprocessing methods and spectral intervals on classification, IC-OVO-LS-SVM with standard normal variate (SNV) spectra obtained a total classification accuracy of 0.9317 and an average sensitivity and specificity of 0.9306 and 0.9971, respectively. With SNV or second-order derivative (D2) spectra, OCPLS could detect adulterant doping levels of 2% or more for 5 of the 7 GL species and of 5% or more for the other 2 GL species. This study demonstrates the feasibility of using new chemometrics and NIR spectroscopy for fine classification of GL geographical origins and species as well as for untargeted analysis of multiple adulterants.
Celińska, Ewelina; Olkowicz, Mariola; Grajek, Włodzimierz
2015-08-01
A worldwide effort is being pursued towards the production of flavors and fragrances (F&F) independently of traditional sources, as well as of depleting fossil fuel supplies. Biotechnological production of F&F by microbes has emerged as a viable solution to current market limitations. Amongst a wide variety of fragrant chemicals, 2-phenylethanol (2-PE) is of significant interest to both the scientific and industrial communities. Although the general outline of the 2-PE synthesis pathway is known, the involvement of particular molecular entities in this pathway has not been elucidated in Yarrowia lipolytica to date. The aim of this study was to map the molecular entities involved in 2-PE synthesis in Y. lipolytica. To acquire a comprehensive landscape of the proteins directly and indirectly involved in L-Phe degradation and 2-PE synthesis, we took advantage of the comprehensiveness and sensitivity of high-throughput quantitative LC-MS/MS analysis. Amongst a number of proteins involved in amino acid turnover and central carbon metabolism, enzymes involved in L-Phe conversion to 2-PE were identified. Results on the yeast-to-hypha transition in relation to the nature of the provided nitrogen source are also presented. © FEMS 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Mourão, Marta P B; Denekamp, Ilse; Kuijper, Sjoukje; Kolk, Arend H J; Janssen, Hans-Gerd
2016-03-25
Tuberculosis is one of the world's most pressing public health problems, particularly in developing countries. Chromatography-based methods have been used to tackle this epidemic by focusing on biomarker detection. Unfortunately, interference from lipids in the sputum matrix, particularly cholesterol, adversely affects the identification and detection of the marker compounds. The present contribution describes the serial combination of normal-phase liquid chromatography (NPLC) with thermally assisted hydrolysis and methylation followed by gas chromatography-mass spectrometry (THM-GC-MS) to overcome the difficulties of biomarker evaluation. The in-series combination consists of an LC analysis in which fractions are collected and then transferred to the THM-GC-MS system. This was done either with comprehensive coupling, transferring all the fractions, or with hyphenated interfacing, i.e. off-line multiple heart-cutting, transferring only selected fractions. Owing to the high sensitivity and selectivity of LC as a sample pre-treatment method, and to the high specificity of the MS detector, this analytical approach, NPLC × THM-GC-MS, is extremely sensitive. The results obtained indicate that this analytical set-up is able to detect down to 1 × 10(3) mycobacteria/mL of Mycobacterium tuberculosis strain 124 spiked in blank sputum samples. It is a powerful analytical tool and also has great potential for full automation. If further studies demonstrate its usefulness when applied blindly to real sputum specimens, this technique could compete with current smear microscopy in the early diagnosis of tuberculosis. Copyright © 2015 Elsevier B.V. All rights reserved.
Gobba, F; Ghersi, R; Martinelli, Simona; Richeldi, Arianna; Clerici, Piera; Grazioli, P
2008-01-01
Data on self-reported symptoms and/or functional impairments are important in research on work-related musculoskeletal disorders (WRMSDs). The availability of internationally standardized questionnaires is therefore extremely important, since they permit comparison of studies performed in different countries. The aim was the translation into Italian and validation of the Nordic Musculoskeletal Questionnaire (NMQ), a tool widely used in studies on WRMSDs in the international scientific literature. The extended Canadian version of the NMQ was translated into Italian. As per current guidelines, cross-cultural adaptation was performed by translation of the items from French, back-translation by independent mother-tongue translators and committee review. The resulting version of the questionnaire underwent pre-testing in 3 independent groups of subjects. Comprehensibility, reliability (internal consistency and reproducibility) and sensitivity were evaluated. After translation/back-translation and review of the items, the comprehensibility of the Italian version of the questionnaire was judged good in a group of 40 workers. Internal consistency was evaluated using Cronbach's alpha in the same group and in a further 98 engineering workers: the results were on the whole acceptable. Reproducibility, tested with Cohen's kappa in the 40 workers, was good to excellent. In a preliminary evaluation performed in 30 outpatients of a Rehabilitation Service, sensitivity was very good. The results show that the Italian version of the Nordic Musculoskeletal Questionnaire is valid for self-administration and can be applied in "field" studies on self-reported musculoskeletal symptoms and functional impairments in groups of workers.
Robson, Holly; Keidel, James L; Ralph, Matthew A Lambon; Sage, Karen
2012-01-01
Wernicke's aphasia is a condition which results in severely disrupted language comprehension following a lesion to the left temporo-parietal region. A phonological analysis deficit has traditionally been held to be at the root of the comprehension impairment in Wernicke's aphasia, a view consistent with current functional neuroimaging which finds areas in the superior temporal cortex responsive to phonological stimuli. However behavioural evidence to support the link between a phonological analysis deficit and auditory comprehension has not been yet shown. This study extends seminal work by Blumstein, Baker, and Goodglass (1977) to investigate the relationship between acoustic-phonological perception, measured through phonological discrimination, and auditory comprehension in a case series of Wernicke's aphasia participants. A novel adaptive phonological discrimination task was used to obtain reliable thresholds of the phonological perceptual distance required between nonwords before they could be discriminated. Wernicke's aphasia participants showed significantly elevated thresholds compared to age and hearing matched control participants. Acoustic-phonological thresholds correlated strongly with auditory comprehension abilities in Wernicke's aphasia. In contrast, nonverbal semantic skills showed no relationship with auditory comprehension. The results are evaluated in the context of recent neurobiological models of language and suggest that impaired acoustic-phonological perception underlies the comprehension impairment in Wernicke's aphasia and favour models of language which propose a leftward asymmetry in phonological analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Wong, Bernice Y. L.
1986-01-01
Successful instructional strategies for enhancing the reading comprehension and comprehension test performance of learning disabled students are described. Students are taught to self-monitor their comprehension of expository materials and stories through recognition and analysis of recurrent elements and problem passages, content summarization,…
Mahong, Bancha; Roytrakul, Suttiruk; Phaonaklop, Narumon; Wongratana, Janewit; Yokthongwattana, Kittisak
2012-03-01
Oxygenic photosynthetic organisms often suffer from excessive irradiance, which causes harmful effects to chloroplast proteins and lipids. Photoprotection and the photosystem II repair process are the mechanisms that plants deploy to counteract the damaging effects of irradiance stress. Although the protective and repair mechanisms appear similar in most plants, species differ in their levels of tolerance to high light. Such diversity may originate from differences at the molecular level, i.e., in perception of the light stress, signal transduction and expression of stress-responsive genes. Comprehensive analysis of overall changes in the total pool of proteins in an organism can be performed using a proteomic approach. In this study, we employed a 2-DE/LC-MS/MS-based comparative proteomic approach to analyze total proteins of the light-sensitive model unicellular green alga Chlamydomonas reinhardtii in response to excessive irradiance. Results showed that, among all the differentially expressed proteins, several heat-shock proteins and molecular chaperones were surprisingly down-regulated after 3-6 h of high light exposure. We discuss the possible link between this down-regulation and the light-sensitive nature of this model alga.
Lv, Xiaolong; Lan, Shanrong; Guy, Kateta Malangisha; Yang, Jinghua; Zhang, Mingfang; Hu, Zhongyuan
2016-01-01
Watermelon (Citrullus lanatus) is a xerophyte with relatively higher tolerance to drought and salt stresses, but greater sensitivity to cold stress, than most model plants. These characteristics make it a potential model crop for research on salt, drought and cold tolerance. In this study, a genome-wide comprehensive analysis of the ClNAC transcription factor (TF) family was carried out for the first time, to investigate their transcriptional profiles and potential functions in response to these abiotic stresses. The expression profiling analysis reveals that several NAC TFs are highly responsive to abiotic stresses and development; for instance, subfamily IV NACs may play roles in maintaining water status under drought or salt conditions, as well as in water and metabolite conduction and translocation toward the fruit. In contrast, the rapid and negative responses of most ClNACs to low-temperature adversity may be related to the sensitivity to cold stress. Crosstalk among these abiotic stresses and hormone (abscisic acid and jasmonic acid) pathways is also discussed based on the expression of ClNAC genes. Our results provide useful insights for the functional mining of the NAC family in watermelon, as well as into the mechanisms underlying abiotic stress tolerance in other cash crops. PMID:27491393
Two retailer-supplier supply chain models with default risk under trade credit policy.
Wu, Chengfeng; Zhao, Qiuhong
2016-01-01
The purpose of this paper is to formulate two non-cooperative replenishment models in which demand and default risk are functions of the trade credit period: a Nash equilibrium model and a supplier-Stackelberg model. First, we present the optimal results of decentralized and centralized decisions without trade credit. Second, we derive the existence and uniqueness conditions of the optimal solutions under the two games, respectively. We then present a set of theorems and a corollary to determine the optimal solutions. Finally, we provide an example and sensitivity analysis to illustrate the proposed strategy and optimal solutions. Sensitivity analysis reveals that the total supply chain profits under the two games are both higher than the results under the centralized decision, provided the optimal trade credit period is not too short. It also reveals that the trade credit period, demand, retailer's profit and supplier's profit depend strongly on the demand-increase coefficient, wholesale price, default risk coefficient and production cost. The major contribution of the paper is a comprehensive comparison between the results of decentralized and centralized decisions without trade credit and the Nash equilibrium and supplier-Stackelberg models with trade credit, yielding several interesting managerial insights and practical implications.
Cheah, Kin Wai; Yusup, Suzana; Gurdeep Singh, Haswin Kaur; Uemura, Yoshimitsu; Lam, Hon Loong
2017-12-01
This work describes the economic feasibility of hydroprocessed diesel fuel production via catalytic decarboxylation of rubber seed oil in Malaysia. A comprehensive techno-economic assessment is developed using Aspen HYSYS V8.0 software for process modelling and economic cost estimation. The profitability profile and minimum fuel selling price of synthetic fuel production using rubber seed oil as biomass feedstock are assessed under a set of assumptions for what can plausibly be achieved within a 10-year timeframe. In this study, the renewable diesel processing facility is modelled as capable of processing 65,000 L of inedible oil per day and producing a total of 20 million litres of renewable diesel per annum, with an assumed 347 operating days per year. At the forecast renewable diesel retail price of 3.64 RM per kg, the pioneering renewable diesel project offers a promising return on investment of 12.1% and a net return as high as 1.35 million RM. Sensitivity analysis showed that renewable diesel production cost is most sensitive to the rubber seed oil price and the hydrogen gas price, reflecting the relative importance of feedstock prices in the overall profitability profile. Copyright © 2017 Elsevier Ltd. All rights reserved.
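A one-at-a-time price sensitivity of the kind reported can be sketched as follows; all quantities, prices and the fixed cost are illustrative assumptions, not figures from the Aspen HYSYS model, and the sketch works on annual profit rather than the study's minimum fuel selling price:

```python
def annual_profit(prices):
    """Hypothetical one-year cash flow in RM; quantities and the fixed cost
    are illustrative assumptions, not outputs of the Aspen HYSYS model."""
    revenue = 17_000_000 * prices["diesel"]      # diesel sold, kg/yr x RM/kg
    cost = (18_000_000 * prices["feedstock"]     # rubber seed oil, kg/yr
            + 350_000 * prices["hydrogen"]       # H2 make-up, kg/yr
            + 8_000_000)                         # fixed operating cost, RM/yr
    return revenue - cost

base = {"feedstock": 2.0, "hydrogen": 15.0, "diesel": 3.64}  # RM/kg, assumed

# One-at-a-time +/-20% swings: the larger the swing, the more sensitive
swing = {}
for key in base:
    hi, lo = dict(base), dict(base)
    hi[key] *= 1.2
    lo[key] *= 0.8
    swing[key] = abs(annual_profit(hi) - annual_profit(lo))
```

With these illustrative figures, the feedstock price dominates the hydrogen price, mirroring the ordering reported among the cost inputs.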
Comprehensive study of observables in Compton scattering on the nucleon
NASA Astrophysics Data System (ADS)
Grießhammer, Harald W.; McGovern, Judith A.; Phillips, Daniel R.
2018-03-01
We present an analysis of 13 observables in Compton scattering on the proton. Cross sections, asymmetries with polarised beam and/or targets, and polarisation-transfer observables are investigated for energies up to the Δ(1232) resonance to determine their sensitivity to the proton's dipole scalar and spin polarisabilities. The Chiral Effective Field Theory Compton amplitude we use is complete at N^4LO, O(e^2 δ^4), for photon energies ω ∼ m_π, and so has an accuracy of a few per cent there. At photon energies in the resonance region, it is complete at NLO, O(e^2 δ^0), and so its accuracy there is about 20%. We find that for energies from the pion-production threshold to about 250 MeV, multiple asymmetries have significant sensitivity to presently ill-determined combinations of proton spin polarisabilities. We also argue that the broad outcomes of this analysis will be replicated in complementary theoretical approaches, e.g., dispersion relations. Finally, we show that below the pion-production threshold, 6 observables suffice to reconstruct the Compton amplitude, and above it 11 are required. Although not necessary for polarisability extractions, this opens the possibility of performing "complete" Compton-scattering experiments. An interactive Mathematica notebook, including results for the neutron, is available from judith.mcgovern@manchester.ac.uk.
Cancer risk of polycyclic aromatic hydrocarbons (PAHs) in the soils from Jiaozhou Bay wetland.
Yang, Wei; Lang, Yinhai; Li, Guoliang
2014-10-01
To estimate the cancer risk from exposure to PAHs in Jiaozhou Bay wetland soils, a probabilistic health risk assessment was conducted based on Monte Carlo simulations. A sensitivity analysis was performed to determine the input variables that contribute most to the cancer risk estimate. Three age groups were selected to estimate the cancer risk via four exposure pathways (soil ingestion, food ingestion, dermal contact and inhalation). The results revealed that the 95th-percentile cancer risks for children, teens and adults were 9.11×10(-6), 1.04×10(-5) and 7.08×10(-5), respectively. The cancer risks for the three age groups were within the acceptable range (10(-6)-10(-4)), indicating no potential cancer risk. Among the exposure pathways, food ingestion was the major one. Of the 7 carcinogenic PAHs, BaP caused the highest cancer risk. Sensitivity analysis demonstrated that the exposure duration (ED) and the sum of the 7 carcinogenic PAH concentrations in soil converted to BaP equivalents (CSsoil) contribute most to the total uncertainty. This study provides a comprehensive risk assessment of carcinogenic PAHs in Jiaozhou Bay wetland soils and may inform strategies for cancer risk prevention and control. Copyright © 2014 Elsevier Ltd. All rights reserved.
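A Monte Carlo assessment of this kind draws exposure parameters from distributions and propagates them through the standard chronic-daily-intake equation; the sketch below covers the soil ingestion pathway only, and the distributions, values and slope factor are illustrative assumptions, not the study's inputs:

```python
import random

def ingestion_risk(rng):
    """One draw of incremental lifetime cancer risk via soil ingestion.
    Distributions, values, and the slope factor are illustrative
    assumptions, not the parameters used in the study."""
    cs = rng.triangular(0.1, 2.0, 0.5)   # BaPeq concentration in soil, mg/kg
    ir = rng.triangular(50, 200, 100)    # soil ingestion rate, mg/day
    ef = 350                             # exposure frequency, days/yr
    ed = rng.triangular(10, 40, 24)      # exposure duration (ED), yr
    bw = rng.normalvariate(60, 8)        # body weight, kg
    at = 70 * 365                        # averaging time, days
    sf = 7.3                             # oral slope factor, (mg/kg-day)^-1
    cdi = cs * ir * 1e-6 * ef * ed / (bw * at)  # chronic daily intake
    return cdi * sf

rng = random.Random(42)
risks = sorted(ingestion_risk(rng) for _ in range(10_000))
p95 = risks[int(0.95 * len(risks))]      # 95th-percentile risk
```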
Becker, M; Zweckmair, T; Forneck, A; Rosenau, T; Potthast, A; Liebner, F
2013-03-15
Gas chromatographic analysis of complex carbohydrate mixtures requires highly effective and reliable derivatisation strategies for successful separation, identification, and quantitation of all constituents. Different single-step (per-trimethylsilylation, isopropylidenation) and two-step approaches (ethoximation-trimethylsilylation, ethoximation-trifluoroacetylation, benzoximation-trimethylsilylation, benzoximation-trifluoroacetylation) have been comprehensively studied with regard to chromatographic characteristics, informational value of mass spectra, ease of peak assignment, robustness toward matrix effects, and quantitation using a set of reference compounds that comprise eight monosaccharides (C(5)-C(6)), glycolaldehyde, and dihydroxyacetone. It has been shown that isopropylidenation and the two oximation-trifluoroacetylation approaches are least suitable for complex carbohydrate matrices. Whereas the former is limited to compounds that contain vicinal dihydroxy moieties in cis configuration, the latter two methods are sensitive to traces of trifluoroacetic acid which strongly supports decomposition of ketohexoses. It has been demonstrated for two "real" carbohydrate-rich matrices of biological and synthetic origin, respectively, that two-step ethoximation-trimethylsilylation is superior to other approaches due to the low number of peaks obtained per carbohydrate, good peak separation performance, structural information of mass spectra, low limits of detection and quantitation, minor relative standard deviations, and low sensitivity toward matrix effects. Copyright © 2013 Elsevier B.V. All rights reserved.
Comprehensive benefit analysis of regional water resources based on multi-objective evaluation
NASA Astrophysics Data System (ADS)
Chi, Yixia; Xue, Lianqing; Zhang, Hui
2018-01-01
The purpose of comprehensive benefit analysis of water resources is to maximize the combined social, economic and ecological-environmental benefits. To address the shortcomings of the traditional analytic hierarchy process (AHP) in the evaluation of water resources, this study proposed a comprehensive benefit evaluation index covering social, economic and environmental benefits from the perspective of the social, economic and environmental systems; determined the index weights by an improved fuzzy AHP; calculated the relative index of comprehensive water resources benefit; and analyzed the comprehensive benefit of water resources in Xiangshui County with a multi-objective evaluation model. Based on water resources data for Xiangshui County, 20 main comprehensive benefit assessment factors across the county's 5 districts were evaluated. The results showed that the comprehensive benefit of Xiangshui County was 0.7317, indicating that under the current water resources situation there is room for further socioeconomic development.
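Fuzzy AHP variants differ in detail, but the core step of deriving criterion weights from a pairwise comparison matrix can be sketched with ordinary power iteration; the 3×3 matrix below is a hypothetical example, not the study's judgments:

```python
def ahp_weights(matrix, iters=200):
    """Criterion weights as the normalized principal eigenvector of a
    pairwise comparison matrix, computed by power iteration."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Hypothetical Saaty-scale judgments: social vs economic vs environmental
pairwise = [
    [1.0, 1 / 2, 3.0],
    [2.0, 1.0, 4.0],
    [1 / 3, 1 / 4, 1.0],
]
weights = ahp_weights(pairwise)
```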
Noh, Ka-Won; Lee, Mi-Sook; Lee, Seung Eun; Song, Ji-Young; Shin, Hyun-Tae; Kim, Yu Jin; Oh, Doo Yi; Jung, Kyungsoo; Sung, Minjung; Kim, Mingi; An, Sungbin; Han, Joungho; Shim, Young Mog; Zo, Jae Ill; Kim, Jhingook; Park, Woong-Yang; Lee, Se-Hoon; Choi, Yoon-La
2017-11-01
Most anaplastic lymphoma kinase (ALK)-rearranged non-small cell lung cancers (NSCLCs) show a good clinical response to ALK inhibitors. However, some ALK-rearranged NSCLC patients show variable primary responses for unknown reasons. Previous studies focused on the clinical aspects of ALK fusions in small cohorts, or were conducted in vitro and/or in vivo to investigate the function of ALK. One suggested theory describes how echinoderm microtubule-associated protein-like 4 (EML4)-ALK variants contribute to differing sensitivities to ALK inhibitors. Until now, there has been no integrated comprehensive study dissecting ALK at the molecular level on a large scale. Here, we report the largest extensive molecular analysis of 158 ALK-rearranged NSCLCs and have investigated these findings in a cell line construct experiment. We discovered that NSCLCs with EML4-ALK short forms (variant 3/others) had more advanced stage and more frequent metastases than cases with the long forms (variant 1/others) (p = 0.057, p < 0.05). In vitro experiments revealed that EML4-ALK short forms show lower sensitivity to ALK inhibitors than do long forms. Clinical analysis also showed a trend toward worse PFS for the short forms. Interestingly, we found that the breakpoints of ALK are evenly distributed, mainly in intron 19, and that almost all of them undergo non-homologous end-joining repair to generate ALK fusions. We also discovered four novel somatic ALK mutations in NSCLC (T1151R, R1192P, A1280V, and L1535Q) that confer primary resistance; all of them showed strong resistance to ALK inhibitors, as G1202R does. Through targeted deep sequencing, we discovered three novel ALK fusion partners (GCC2, LMO7, and PHACTR1), and different ALK fusion partners showed different intracellular localization.
Given our findings that EML4-ALK variants, new somatic ALK mutations, and novel ALK fusion partners may affect sensitivity to ALK inhibitors, we stress the importance of taking ALK molecular profiling into consideration in targeted therapy. Copyright © 2017 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.
Integrated modeling analysis of a novel hexapod and its application in active surface
NASA Astrophysics Data System (ADS)
Yang, Dehua; Zago, Lorenzo; Li, Hui; Lambert, Gregory; Zhou, Guohua; Li, Guoping
2011-09-01
This paper presents the concept and integrated modeling analysis of a novel mechanism, a 3-CPS/RPPS hexapod, for supporting segmented reflectors for radio telescopes and eventually segmented mirrors of optical telescopes. The concept comprises a novel type of hexapod with an original organization of actuators, and hence of degrees of freedom, based on a swaying-arm design concept. With specially designed connecting joints between panels/segments, an iso-static master-slave active surface can then be achieved for any triangular and/or hexagonal panel/segment pattern. The integrated modeling comprises all the multifold sizing and performance aspects that must be evaluated concurrently in order to optimize and validate the design and the configuration. In particular, comprehensive investigations of kinematic behavior, dynamic analysis, wave-front error and sensitivity analysis are carried out, using tools such as MATLAB/SimMechanics, CALFEM and ANSYS. Notably, we introduce the finite element method as a competent approach for analyzing the multi-degree-of-freedom mechanism. Experimental verifications already performed to validate individual aspects of the integrated concept are also presented, together with the results obtained.
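As a rough illustration of the kinematic analysis step, the sketch below computes actuator lengths for a generic Stewart-platform hexapod via inverse kinematics. The joint layout, dimensions, and the `leg_lengths` helper are illustrative assumptions, not the paper's 3-CPS/RPPS formulation.

```python
import numpy as np

def leg_lengths(base_pts, platform_pts, position, rpy):
    """Inverse kinematics of a generic Stewart-platform hexapod: each
    actuator length is the distance from a base joint to the matching
    platform joint after the platform is rotated (roll-pitch-yaw) and
    translated."""
    r, p, y = rpy
    cr, sr = np.cos(r), np.sin(r)
    cp, sp = np.cos(p), np.sin(p)
    cy, sy = np.cos(y), np.sin(y)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    return np.linalg.norm(platform_pts @ R.T + position - base_pts, axis=1)

# Six base joints on a unit circle; platform joints at half that radius.
angles = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
base = np.column_stack([np.cos(angles), np.sin(angles), np.zeros(6)])
platform = 0.5 * base
lengths = leg_lengths(base, platform, np.array([0.0, 0.0, 1.0]), (0.0, 0.0, 0.0))
print(np.round(lengths, 4))  # all six legs equal for this neutral pose
```

The same function, differentiated numerically with respect to pose, also yields the Jacobian used in sensitivity studies of such mechanisms.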
Fiber-pigtailed silicon photonic sensors for methane leak detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teng, Chu; Xiong, Chi; Zhang, Eric
We present comprehensive characterization of silicon photonic sensors for methane leak detection. Sensitivity of 40 ppmv after 1 second integration is reported. Fourier domain characterization of on-chip etalon drifts is used for further sensor improvement.
Stewart Air National Guard Base, NY, C-5M Painting Refurbishment Assessment
2012-12-06
is asthma due to sensitization. After sensitization, any exposure, even to levels below the occupational exposure limit, can produce an asthmatic... Consultative Services Division provide a comprehensive exposure and risk assessment of the corrosion control process conducted on the C-5M in Building... 101 at Stewart Air National Guard Base, NY. This facility was previously a fuels systems maintenance facility. At the time of this assessment, the
Improving Reading Comprehension Using Digital Text: A Meta-Analysis of Interventions
ERIC Educational Resources Information Center
Berkeley, Sheri; Kurz, Leigh Ann; Boykin, Andrea; Evmenova, Anya S.
2015-01-01
Much is known about how to improve students' comprehension when reading printed text; less is known about outcomes when reading digital text. The purpose of this meta-analysis was to analyze research on the impact of digital text interventions. A comprehensive literature search resulted in 27 group intervention studies with 16,513 participants.…
Robinson, Jo; Spittal, Matthew J; Carter, Greg
2016-01-01
Objective To examine the efficacy of psychological and psychosocial interventions for reductions in repeated self-harm. Design We conducted a systematic review, meta-analysis and meta-regression to examine the efficacy of psychological and psychosocial interventions to reduce repeat self-harm in adults. We included a sensitivity analysis of studies with a low risk of bias for the meta-analysis. For the meta-regression, we examined whether the type, intensity (primary analyses) and other components of intervention or methodology (secondary analyses) modified the overall intervention effect. Data sources A comprehensive search of MEDLINE, PsycInfo and EMBASE (from 1999 to June 2016) was performed. Eligibility criteria for selecting studies Randomised controlled trials of psychological and psychosocial interventions for adult self-harm patients. Results Forty-five trials were included with data available from 36 (7354 participants) for the primary analysis. Meta-analysis showed a significant benefit of all psychological and psychosocial interventions combined (risk ratio 0.84; 95% CI 0.74 to 0.96; number needed to treat=33); however, sensitivity analyses showed that this benefit was non-significant when restricted to a limited number of high-quality studies. Meta-regression showed that the type of intervention did not modify the treatment effects. Conclusions Consideration of a psychological or psychosocial intervention over and above treatment as usual is worthwhile; with the public health benefits of ensuring that this practice is widely adopted potentially worth the investment. However, the specific type and nature of the intervention that should be delivered is not yet clear. Cognitive–behavioural therapy or interventions with an interpersonal focus and targeted on the precipitants to self-harm may be the best candidates on the current evidence. Further research is required. PMID:27660314
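The reported number needed to treat follows from the pooled risk ratio and the control-group event rate (NNT = 1 / absolute risk reduction). A minimal sketch, in which the 19% control-group repetition rate is an illustrative assumption rather than a figure from the review:

```python
import math

def nnt_from_rr(control_event_rate, risk_ratio):
    """Number needed to treat from a risk ratio, given the control-group
    event rate: ARR = CER * (1 - RR), NNT = 1 / ARR."""
    absolute_risk_reduction = control_event_rate * (1.0 - risk_ratio)
    return 1.0 / absolute_risk_reduction

# RR 0.84 as pooled in the review; the 19% control-group repetition
# rate is an illustrative assumption for this sketch.
print(math.ceil(nnt_from_rr(0.19, 0.84)))  # 33
```

Because the NNT scales inversely with the control-group event rate, the same risk ratio implies a much larger NNT in lower-risk populations.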
NASA Astrophysics Data System (ADS)
Huang, M. J.; Liu, Zhao-Cheng; Jhang, Jhen-Huei
2002-11-01
This study demonstrates the feasibility of applying phase-shifting electronic speckle pattern interferometry to measure the deformation field of the front panel of a cathode ray tube, to support analysis aimed at enhancing its implosion-resistance capacity under violent collapse. Two effects, the air exhaustion and shrink band constraint effects, are comprehensively investigated. The angle of an adjustable mirror is switched to provide the three sensitivity vectors required for 3D displacement measurement. A Fourier filtration is employed to remove speckle noise and establish a noise-free phase map. Inconsistent points are identified and masked to prevent any possible divergence during phase unwrapping. The results show that the accuracy of this method is satisfactory.
Servo control of an optical trap.
Wulff, Kurt D; Cole, Daniel G; Clark, Robert L
2007-08-01
A versatile optical trap has been constructed to control the position of trapped objects and ultimately to apply specified forces using feedback control. While the design, development, and use of optical traps have been extensive, and feedback control has played a critical role in pushing the state of the art, few comprehensive examinations of feedback control of optical traps have been undertaken. Furthermore, as the requirements are pushed to ever smaller distances and forces, the performance of optical traps reaches limits. It is well understood that feedback control can have both positive and negative effects in controlled systems. We give an analysis of the trapping limits and introduce an optical trap with a feedback control scheme that dramatically improves the trap's sensitivity at low frequencies.
Statistical models of lunar rocks and regolith
NASA Technical Reports Server (NTRS)
Marcus, A. H.
1973-01-01
The mathematical, statistical, and computational approaches used in the investigation of the interrelationship of lunar fragmental material, regolith, lunar rocks, and lunar craters are described. The first two phases of the work explored the sensitivity of the production model of fragmental material to mathematical assumptions, and then completed earlier studies on the survival of lunar surface rocks with respect to competing processes. The third phase combined earlier work into a detailed statistical analysis and probabilistic model of regolith formation by lithologically distinct layers, interpreted as modified crater ejecta blankets. The fourth phase of the work dealt with problems encountered in combining the results of the entire project into a comprehensive, multipurpose computer simulation model for the craters and regolith. Highlights of each phase of research are given.
A Flight Dynamics Model for a Multi-Actuated Flexible Rocket Vehicle
NASA Technical Reports Server (NTRS)
Orr, Jeb S.
2011-01-01
A comprehensive set of motion equations for a multi-actuated flight vehicle is presented. The dynamics are derived from a vector approach that generalizes the classical linear perturbation equations for flexible launch vehicles into a coupled three-dimensional model. The effects of nozzle and aerosurface inertial coupling, sloshing propellant, and elasticity are incorporated without restrictions on the position, orientation, or number of model elements. The present formulation is well suited to matrix implementation for large-scale linear stability and sensitivity analysis and is also shown to be extensible to nonlinear time-domain simulation through the application of a special form of Lagrange's equations in quasi-coordinates. The model is validated through frequency-domain response comparison with a high-fidelity planar implementation.
Koyama, Tomonori; Inada, Naoko; Tsujii, Hiromi; Kurita, Hiroshi
2008-08-01
An original combination score (i.e. the sum of Vocabulary and Comprehension subtracted from the sum of Block Design and Digit Span) was created from the four Wechsler Intelligence Scale for Children-Third Edition (WISC-III) subtests identified by discriminant analysis of WISC-III data from 139/129 children with/without pervasive developmental disorders (PDD; mean age, 8.3/8.1 years), and its utility for predicting PDD was examined. Its best cut-off was 2/3, with sensitivity, specificity, and positive and negative predictive values of 0.68, 0.61, 0.65 and 0.64, respectively. The score seems useful, so long as clinicians are aware of its limitations and use it only as a supplemental measure in PDD diagnosis.
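The combination score is a simple arithmetic index; a minimal sketch of how it might be computed and screened against the reported 2/3 cut-off (the scaled scores and the `screens_positive` helper are illustrative assumptions):

```python
def wisc_combination_score(block_design, digit_span, vocabulary, comprehension):
    """(Block Design + Digit Span) - (Vocabulary + Comprehension),
    computed from WISC-III subtest scaled scores."""
    return (block_design + digit_span) - (vocabulary + comprehension)

def screens_positive(score, cutoff=3):
    """With the reported best cut-off of 2/3, a score of 3 or more is
    treated as suggestive of PDD (a supplemental measure only)."""
    return score >= cutoff

score = wisc_combination_score(12, 11, 9, 10)  # illustrative scaled scores
print(score, screens_positive(score))  # 4 True
```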
Right or wrong? The brain's fast response to morally objectionable statements.
Van Berkum, Jos J A; Holleman, Bregje; Nieuwland, Mante; Otten, Marte; Murre, Jaap
2009-09-01
How does the brain respond to statements that clash with a person's value system? We recorded event-related brain potentials while respondents from contrasting political-ethical backgrounds completed an attitude survey on drugs, medical ethics, social conduct, and other issues. Our results show that value-based disagreement is unlocked by language extremely rapidly, within 200 to 250 ms after the first word that indicates a clash with the reader's value system (e.g., "I think euthanasia is an acceptable/unacceptable..."). Furthermore, strong disagreement rapidly influences the ongoing analysis of meaning, which indicates that even very early processes in language comprehension are sensitive to a person's value system. Our results testify to rapid reciprocal links between neural systems for language and for valuation.
Silagi, Marcela Lima; Rabelo, Camila Maia; Schochat, Eliane; Mansur, Letícia Lessa
2017-11-13
To analyze the effect of education on sentence listening comprehension in cognitively healthy elderly adults. A total of 111 healthy elderly adults of both genders, aged 60-80 years, were divided into two groups according to educational level: low education (0-8 years of formal education) and high education (≥9 years of formal education). The participants were assessed using the Revised Token Test, an instrument that supports the evaluation of auditory comprehension of orders with different working memory and syntactic complexity demands. The indicators used for performance analysis were the number of correct responses (accuracy analysis) and task execution time (temporal analysis) in the different blocks. The low-education group had a lower number of correct responses than the high-education group on all blocks of the test. In the temporal analysis, participants with low education had longer execution times for commands on the first four blocks, which are more related to working memory. However, the two groups had similar execution times for blocks more related to syntactic comprehension. Education influenced sentence listening comprehension in the elderly. Temporal analysis allowed inferences about the relationship between comprehension and other cognitive abilities, and showed that the low-education elderly did not use effective compensation strategies to improve their performance on the task. Therefore, low educational level, associated with aging, may potentiate the risk of language decline.
Bair, Eric; Gaynor, Sheila; Slade, Gary D.; Ohrbach, Richard; Fillingim, Roger B.; Greenspan, Joel D.; Dubner, Ronald; Smith, Shad B.; Diatchenko, Luda; Maixner, William
2016-01-01
The classification of most chronic pain disorders gives emphasis to anatomical location of the pain to distinguish one disorder from the other (eg, back pain vs temporomandibular disorder [TMD]) or to define subtypes (eg, TMD myalgia vs arthralgia). However, anatomical criteria overlook etiology, potentially hampering treatment decisions. This study identified clusters of individuals using a comprehensive array of biopsychosocial measures. Data were collected from a case–control study of 1031 chronic TMD cases and 3247 TMD-free controls. Three subgroups were identified using supervised cluster analysis (referred to as the adaptive, pain-sensitive, and global symptoms clusters). Compared with the adaptive cluster, participants in the pain-sensitive cluster showed heightened sensitivity to experimental pain, and participants in the global symptoms cluster showed both greater pain sensitivity and greater psychological distress. Cluster membership was strongly associated with chronic TMD: 91.5% of TMD cases belonged to the pain-sensitive and global symptoms clusters, whereas 41.2% of controls belonged to the adaptive cluster. Temporomandibular disorder cases in the pain-sensitive and global symptoms clusters also showed greater pain intensity, jaw functional limitation, and more comorbid pain conditions. Similar results were obtained when the same methodology was applied to a smaller case–control study consisting of 199 chronic TMD cases and 201 TMD-free controls. During a median 3-year follow-up period of TMD-free individuals, participants in the global symptoms cluster had greater risk of developing first-onset TMD (hazard ratio = 2.8) compared with participants in the other 2 clusters. Cross-cohort predictive modeling was used to demonstrate the reliability of the clusters. PMID:26928952
Wei, Zhenglun Alan; Trusty, Phillip M; Tree, Mike; Haggerty, Christopher M; Tang, Elaine; Fogel, Mark; Yoganathan, Ajit P
2017-01-04
Cardiovascular simulations have great potential as a clinical tool for planning and evaluating patient-specific treatment strategies for those suffering from congenital heart diseases, specifically Fontan patients. However, several bottlenecks have delayed wider deployment of the simulations for clinical use; the main obstacle is simulation cost. Currently, time-averaged clinical flow measurements are utilized as numerical boundary conditions (BCs) in order to reduce the computational power and time needed to offer surgical planning within a clinical time frame. Nevertheless, pulsatile blood flow is observed in vivo, and its significant impact on numerical simulations has been demonstrated. Therefore, it is imperative to carry out a comprehensive study analyzing the sensitivity of using time-averaged BCs. In this study, sensitivity is evaluated based on the discrepancies between hemodynamic metrics calculated using time-averaged and pulsatile BCs; smaller discrepancies indicate less sensitivity. The current study incorporates a comparison between 3D patient-specific CFD simulations using both the time-averaged and pulsatile BCs for 101 Fontan patients. The sensitivity analysis involves two clinically important hemodynamic metrics: hepatic flow distribution (HFD) and indexed power loss (iPL). Paired demographic group comparisons revealed that HFD sensitivity is significantly different between single and bilateral superior vena cava cohorts but no other demographic discrepancies were observed for HFD or iPL. Multivariate regression analyses show that the best predictors for sensitivity involve flow pulsatilities, time-averaged flow rates, and geometric characteristics of the Fontan connection. These predictors provide patient-specific guidelines to determine the effectiveness of analyzing patient-specific surgical options with time-averaged BCs within a clinical time frame. Copyright © 2016 Elsevier Ltd. All rights reserved.
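Sensitivity in this setting is quantified by the discrepancy between hemodynamic metrics computed under the two boundary-condition types. A minimal sketch of one plausible discrepancy measure (the relative-difference definition and all numbers are illustrative assumptions, not the study's exact formulation):

```python
def relative_discrepancy(pulsatile_value, time_averaged_value):
    """Relative difference between a hemodynamic metric computed with
    pulsatile vs time-averaged boundary conditions; smaller values
    indicate lower sensitivity to the time-averaging simplification."""
    return abs(pulsatile_value - time_averaged_value) / abs(pulsatile_value)

# Hypothetical hepatic flow distribution (fraction of hepatic flow to one
# lung) and indexed power loss values under the two BC types.
print(round(relative_discrepancy(0.55, 0.60), 3))    # 0.091
print(round(relative_discrepancy(0.030, 0.027), 3))  # 0.1
```

A patient-specific threshold on such a discrepancy is one way to decide whether the cheaper time-averaged simulation suffices for surgical planning.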
[Text Comprehensibility of Hospital Report Cards].
Sander, U; Kolb, B; Christoph, C; Emmert, M
2016-12-01
Objectives: Recently, the number of hospital report cards that compare the quality of hospitals and present information from German quality reports has greatly increased. The objectives of this study were to a) identify suitable methods for measuring the readability and comprehensibility of hospital report cards, b) obtain reliable information on the comprehensibility of the texts for laymen, c) give recommendations for improvements and d) recommend public health actions. Methods: The readability and comprehensibility of the texts were tested with a) a computer-aided evaluation of formal text characteristics (the Flesch readability index in its German adaptation and the first Wiener Sachtextformel), b) an expert-based heuristic analysis of the readability and comprehensibility of the texts (counting technical terms and analyzing text simplicity as well as brevity and conciseness using the Hamburg intelligibility model) and c) a survey of subjects about the comprehensibility of individual technical terms, their assessment of the comprehensibility of the presentations and their decisions in favour of one of the 5 presented clinics on the basis of better quality data. In addition, the correlation between the results of the text analysis and the results of the survey was tested. Results: The computer-aided evaluations showed poor comprehensibility values. The assessment of text simplicity using the Hamburg intelligibility model also showed poor comprehensibility values (-0.3). On average, 6.8% of the words used were technical terms. A review of 10 technical terms revealed that in all cases only a minority of respondents (from 4.4% to 39.1%) knew exactly what was meant by each of them. Most subjects (62.4%) also believed that unclear terms worsened their understanding of the information offered.
The correlation analysis showed that presentations with a lower frequency of technical terms and better values for text simplicity were better understood. Conclusion: Determining the frequency of technical terms and assessing text simplicity using the Hamburg intelligibility model were suitable methods for determining the readability and comprehensibility of presentations of quality indicators. The analysis showed predominantly poor comprehensibility values and indicated the need to improve the texts of report cards. © Georg Thieme Verlag KG Stuttgart · New York.
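For reference, the Flesch reading-ease formula in Amstad's German adaptation is 180 − ASL − 58.5 × ASW, where ASL is the average sentence length in words and ASW the average number of syllables per word. A minimal sketch, assuming word, sentence, and syllable counts are already available (automatic syllable counting is the hard part in practice):

```python
def flesch_german(total_words, total_sentences, total_syllables):
    """Flesch reading ease, Amstad's German adaptation:
    180 - ASL - 58.5 * ASW. Higher scores mean easier text."""
    asl = total_words / total_sentences   # average sentence length
    asw = total_syllables / total_words   # average syllables per word
    return 180.0 - asl - 58.5 * asw

# Illustrative counts for a short passage (not taken from the study).
print(round(flesch_german(150, 10, 270), 1))  # 59.7
```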
Muneer, Sowbiya; Jeong, Hai Kyoung; Park, Yoo Gyeong; Jeong, Byoung Ryong
2018-05-25
The rose is one of the most commercially grown and costly ornamental plants because of its aesthetic beauty and aroma. A large number of pests attack its buds, flowers, leaves, and stem at every growing stage due to its high sugar content. The most common pests on roses are aphids, which are considered the major cause of product loss. Aphid infestations lead to major changes in rose plants, such as large, irregular holes in petals and leaves and devoured tissues. It is hypothesized that different cut rose cultivars have different levels of sensitivity or resistance to aphids, since different levels of infestation are observed in commercial cut rose production greenhouses. The present work compared four cut rose cultivars, bred in Korea, that were either resistant or sensitive to aphid infestation at different flower developmental stages. An integrative study was conducted using comprehensive proteome analyses. Proteins related to ubiquitin metabolism and the stress response were differentially expressed due to aphid infestation. The regulation and possible functions of the identified proteins are presented in detail. The differential expression of the identified proteins was validated by immunoblotting and blue native PAGE. In addition, total sugar and carbohydrate contents were also observed.
Two-dimensional Layered MoS2 Biosensors Enable Highly Sensitive Detection of Biomolecules
NASA Astrophysics Data System (ADS)
Lee, Joonhyung; Dak, Piyush; Lee, Yeonsung; Park, Heekyeong; Choi, Woong; Alam, Muhammad A.; Kim, Sunkook
2014-12-01
We present a MoS2 biosensor to electrically detect prostate-specific antigen (PSA) in a highly sensitive and label-free manner. Unlike previous MoS2-FET-based biosensors, the device configuration of our biosensors does not require a dielectric layer such as HfO2, owing to the hydrophobicity of MoS2. Such oxide-free operation improves sensitivity and simplifies sensor design. For quantitative and selective detection of the PSA antigen, anti-PSA antibody was immobilized on the sensor surface. Introduction of the PSA antigen onto the anti-PSA-immobilized sensor surface then resulted in a label-free immunoassay format. The measured off-state current of the device decreased significantly as the applied PSA concentration was increased. The minimum detectable concentration of PSA is 1 pg/mL, which is several orders of magnitude below the clinical cut-off level of ~4 ng/mL. In addition, we also provide a systematic theoretical analysis of the sensor platform, including the charge state of the protein at the specific pH level and self-consistent channel transport. Taken together, the experimental demonstration and the theoretical framework provide a comprehensive description of the performance potential of dielectric-free MoS2-based biosensor technology.
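A common way to express such a current-based response is the relative change in off-state current; the sketch below uses that convention with hypothetical currents (both the definition and the numbers are illustrative assumptions, not the paper's exact analysis):

```python
def off_current_response(i_baseline, i_with_analyte):
    """Relative decrease of the off-state current, in percent:
    (I0 - I) / I0 * 100."""
    return (i_baseline - i_with_analyte) / i_baseline * 100.0

# Hypothetical off-state currents (in amperes) before and after
# introducing PSA onto the functionalized sensor surface.
print(round(off_current_response(1.0e-10, 7.0e-11), 1))  # 30.0
```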
EUS for the staging of gastric cancer: a meta-analysis.
Mocellin, Simone; Marchet, Alberto; Nitti, Donato
2011-06-01
The role of EUS in the locoregional staging of gastric carcinoma is undefined. We aimed to comprehensively review and quantitatively summarize the available evidence on the staging performance of EUS. We systematically searched the MEDLINE, Cochrane, CANCERLIT, and EMBASE databases for relevant studies published until July 2010. Formal meta-analysis of diagnostic accuracy parameters was performed by using a bivariate random-effects model. Fifty-four studies enrolling 5601 patients with gastric cancer undergoing disease staging with EUS were eligible for the meta-analysis. EUS staging accuracy across eligible studies was measured by computing overall sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR). EUS can differentiate T1-2 from T3-4 gastric cancer with high accuracy, with overall sensitivity, specificity, PLR, NLR, and DOR of 0.86 (95% CI, 0.81-0.90), 0.91 (95% CI, 0.89-0.93), 9.8 (95% CI, 7.5-12.8), 0.15 (95% CI, 0.11-0.21), and 65 (95% CI, 41-105), respectively. In contrast, the diagnostic performance of EUS for lymph node status is less reliable, with overall sensitivity, specificity, PLR, NLR, and DOR of 0.69 (95% CI, 0.63-0.74), 0.84 (95% CI, 0.81-0.88), 4.4 (95% CI, 3.6-5.4), 0.37 (95% CI, 0.32-0.44), and 12 (95% CI, 9-16), respectively. Results regarding single T categories (including T1 substages) and Bayesian nomograms to calculate posttest probabilities for any target condition prevalence are also provided. Statistical heterogeneity was generally high; unfortunately, subgroup analysis did not identify a consistent source of the heterogeneity. Our results support the use of EUS for the locoregional staging of gastric cancer, which can affect the therapeutic management of these patients. However, clinicians must be aware of the performance limits of this staging tool. Copyright © 2011 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
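The likelihood ratios and diagnostic odds ratio relate to sensitivity and specificity as PLR = sens/(1 − spec), NLR = (1 − sens)/spec, and DOR = PLR/NLR, and post-test probabilities follow from Bayes' rule on the odds scale. A minimal sketch using the T-stage point estimates; note that the paper's pooled PLR/NLR/DOR come from a bivariate random-effects model, so they differ slightly from these naive ratios, and the 30% pretest probability is an illustrative assumption:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive/negative likelihood ratios and the diagnostic odds ratio."""
    plr = sensitivity / (1.0 - specificity)
    nlr = (1.0 - sensitivity) / specificity
    return plr, nlr, plr / nlr  # DOR = PLR / NLR

def posttest_probability(pretest_prob, likelihood_ratio):
    """Bayes on the odds scale: posttest odds = pretest odds * LR."""
    odds = pretest_prob / (1.0 - pretest_prob) * likelihood_ratio
    return odds / (1.0 + odds)

# Point estimates for discriminating T1-2 from T3-4 disease.
plr, nlr, dor = likelihood_ratios(0.86, 0.91)
print(round(plr, 1), round(nlr, 2), round(dor))   # 9.6 0.15 62
print(round(posttest_probability(0.30, plr), 2))  # 0.8
```

This odds-scale update is exactly what the Bayesian nomograms mentioned in the abstract encode graphically.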
Noori, Hamid R; Cosa Linan, Alejandro; Spanagel, Rainer
2016-09-01
Cue reactivity to natural and social rewards is essential for motivational behavior. However, cue reactivity to drug rewards can also elicit craving in addicted subjects. The degree to which drug and natural rewards share neural substrates is not known. The objective of this study is to conduct a comprehensive meta-analysis of neuroimaging studies on drug, gambling and natural stimuli (food and sex) to identify the common and distinct neural substrates of cue reactivity to drug and natural rewards. Neural cue reactivity studies were selected for the meta-analysis by means of activation likelihood estimations, followed by sensitivity and clustering analyses of averaged neuronal response patterns. Data from 176 studies (5573 individuals) suggests largely overlapping neural response patterns towards all tested reward modalities. Common cue reactivity to natural and drug rewards was expressed by bilateral neural responses within anterior cingulate gyrus, insula, caudate head, inferior frontal gyrus, middle frontal gyrus and cerebellum. However, drug cues also generated distinct activation patterns in medial frontal gyrus, middle temporal gyrus, posterior cingulate gyrus, caudate body and putamen. Natural (sexual) reward cues induced unique activation of the pulvinar in thalamus. Neural substrates of cue reactivity to alcohol, drugs of abuse, food, sex and gambling are largely overlapping and comprise a network that processes reward, emotional responses and habit formation. This suggests that cue-mediated craving involves mechanisms that are not exclusive for addictive disorders but rather resemble the intersection of information pathways for processing reward, emotional responses, non-declarative memory and obsessive-compulsive behavior. Copyright © 2016 Elsevier B.V. and ECNP. All rights reserved.
Cognitive capital, equity and child-sensitive social protection in Asia and the Pacific
Samson, Michael; Fajth, Gaspar; François, Daphne
2016-01-01
Promoting child development and welfare delivers human rights and builds sustainable economies through investment in ‘cognitive capital’. This analysis looks at conditions that support optimal brain development in childhood and highlights how social protection promotes these conditions and strengthens the achievement of the Sustainable Development Goals (SDGs) in Asia and the Pacific. Embracing child-sensitive social protection offers multiple benefits. The region has been a leader in global poverty reduction but the underlying pattern of economic growth exacerbates inequality and is increasingly unsustainable. The strategy of channelling low-skilled rural labour to industrial jobs left millions of children behind with limited opportunities for development. Building child-sensitive social protection and investing better in children's cognitive capacity could check these trends and trigger powerful long-term human capital development—enabling labour productivity to grow faster than populations age. While governments are investing more in social protection, the region's spending remains low by international comparison. Investment is particularly inadequate where it yields the highest returns: during the first 1000 days of life. Five steps are recommended for moving forward: (1) building cognitive capital by adjusting the region's development paradigms to reflect better the economic and social returns from investing in children; (2) better understanding and tracking child poverty and vulnerability; (3) progressively building universal, child-sensitive systems that strengthen comprehensive interventions within life cycle frameworks; (4) mobilising national resources for early childhood investments and child-sensitive social protection; and (5) leveraging the SDGs and other channels of national and international collaboration. PMID:28588990
Reyman, M; Verrijn Stuart, A A; van Summeren, M; Rakhshandehroo, M; Nuboer, R; de Boer, F K; van den Ham, H J; Kalkhoven, E; Prakken, B; Schipper, H S
2014-01-01
Childhood obesity is accompanied by low-grade systemic inflammation, which contributes to the development of insulin resistance and cardiovascular complications later in life. As vitamin D exhibits profound immunomodulatory functions and vitamin D deficiency is highly prevalent in childhood obesity, we hypothesized that vitamin D deficiency in childhood obesity coincides with enhanced systemic inflammation and reduced insulin sensitivity. In a cross-sectional study of 64 obese and 32 healthy children aged 6-16 years, comprehensive profiling of 32 circulating inflammatory mediators was performed, together with assessment of 25-hydroxyvitamin D (25(OH)D) levels and measures for insulin sensitivity. Severe vitamin D insufficiency, which is further referred to as vitamin D deficiency, was defined as a 25(OH)D level ≤37.5 nmol l(-1), and was highly prevalent in obese (56%) versus healthy control children (16%). Throughout the study, 25(OH)D-deficient children were compared with the other children, including 25(OH)D insufficient (37.5-50 nmol l(-1)) and 25(OH)D sufficient children (≥50 nmol l(-1)). First, 25(OH)D-deficient obese children showed a lower insulin sensitivity than other obese children, as measured by a lower quantitative insulin sensitivity check index. Second, the association between 25(OH)D deficiency and insulin resistance in childhood obesity was confirmed with multiple regression analysis. Third, 25(OH)D-deficient obese children showed higher levels of the inflammatory mediators cathepsin S, chemerin and soluble vascular adhesion molecule (sVCAM), compared with the other obese children. Finally, hierarchical cluster analysis revealed an over-representation of 25(OH)D deficiency in obese children expressing inflammatory mediator clusters with high levels of cathepsin S, sVCAM and chemerin. 25(OH)D deficiency in childhood obesity was associated with enhanced systemic inflammation and reduced insulin sensitivity. 
The high cathepsin S and sVCAM levels may reflect activation of a pro-inflammatory, pro-diabetic and atherogenic pathway, which could be inhibited by vitamin D supplementation.
Luo, Mingxu; Lv, You; Guo, Xiuyu; Song, Hongmei; Su, Guoqiang; Chen, Bo
2017-08-01
Multidetector computed tomography (MDCT) has exhibited wide ranges of sensitivities and specificities for lymph node assessment of gastric cancer (GC) across individual studies. The present meta-analysis was carried out to evaluate the value of MDCT in the diagnosis of preoperative lymph node metastasis (LNM) and to explore the factors that might explain the heterogeneity of its diagnostic accuracy in GC. A comprehensive search was conducted to collect all the relevant studies on the value of MDCT in assessing LNM of GC within the PubMed, Cochrane library and Embase databases up to Feb 2, 2016. Two investigators independently screened the studies, extracted data, and evaluated the quality of included studies. The sensitivity, specificity, and area under the ROC curve (AUC) were pooled to estimate the overall accuracy of MDCT. Meta-regression and subgroup analysis were carried out to identify the possible factors influencing the heterogeneity of the accuracy. A total of 27 studies with 6519 subjects were finally included. Overall, the pooled sensitivity, specificity, and AUC were 0.67 (95% CI: 0.56-0.77), 0.86 (95% CI: 0.81-0.90), and 0.86 (95% CI: 0.83-0.89), respectively. Meta-regression revealed that MDCT section thickness, proportion of serosal invasion, and publication year were the main significant impact factors on sensitivity, and MDCT section thickness, multiplanar reformation (MPR), and reference standard were the main significant impact factors on specificity. After the included studies were divided into 2 groups (Group A: studies with a proportion of serosa-invasive GC subjects ≥50%; Group B: studies with a proportion of serosa-invasive GC subjects <50%), the pooled sensitivity in Group A was significantly higher than in Group B (0.84 [95% CI: 0.75-0.90] vs 0.55 [95% CI: 0.41-0.68], P < .01).
For early gastric cancer (EGC), the pooled sensitivity, specificity, and AUC were 0.34 (95% CI: 0.15-0.61), 0.91 (95% CI: 0.84-0.95), and 0.83 (95% CI: 0.80-0.86), respectively. To summarize, MDCT tends to be adequate to assess preoperative LNM in serosa-invasive GC, but insufficient for non-serosa-invasive GC (particularly for EGC) owing to its low sensitivity. Proportion of serosa-invasive GC subjects, MDCT section thickness, MPR, and reference standard are the main factors influencing its diagnostic accuracy.
... happens when the light-sensitive cells in the macula slowly break down. You gradually lose your central vision. A common early symptom is that straight lines appear crooked. Regular comprehensive eye exams can detect macular degeneration before the disease causes vision loss. Treatment can ...
A Multidisciplinary Approach to High Throughput Nuclear Magnetic Resonance Spectroscopy
Pourmodheji, Hossein; Ghafar-Zadeh, Ebrahim; Magierowski, Sebastian
2016-01-01
Nuclear Magnetic Resonance (NMR) is a non-contact, powerful structure-elucidation technique for biochemical analysis. NMR spectroscopy is used extensively in a variety of life science applications including drug discovery. However, existing NMR technology is limited in that it cannot run a large number of experiments simultaneously in one unit. Recent advances in micro-fabrication technologies have attracted the attention of researchers to overcome these limitations and significantly accelerate the drug discovery process by developing the next generation of high-throughput NMR spectrometers using Complementary Metal Oxide Semiconductor (CMOS) technology. In this paper, we examine this paradigm shift and explore new design strategies for the development of the next generation of high-throughput NMR spectrometers using CMOS technology. A CMOS NMR system consists of an array of high sensitivity micro-coils integrated with interfacing radio-frequency circuits on the same chip. Herein, we first discuss the key challenges and recent advances in the field of CMOS NMR technology, and then a new design strategy is put forward for the design and implementation of highly sensitive and high-throughput CMOS NMR spectrometers. We thereafter discuss the functionality and applicability of the proposed techniques by demonstrating the results. For microelectronic researchers starting to work in the field of CMOS NMR technology, this paper serves as a tutorial with a comprehensive review of state-of-the-art technologies and their performance levels. Based on these levels, the CMOS NMR approach offers unique advantages for the high resolution, time-sensitive and high-throughput biomolecular analysis required in a variety of life science applications including drug discovery. PMID:27294925
NASA Astrophysics Data System (ADS)
Borge, Rafael; Alexandrov, Vassil; José del Vas, Juan; Lumbreras, Julio; Rodríguez, Encarnacion
Meteorological inputs play a vital role in regional air quality modelling. An extensive sensitivity analysis of the Weather Research and Forecasting (WRF) model was performed, in the framework of the Integrated Assessment Modelling System for the Iberian Peninsula (SIMCA) project. Up to 23 alternative model configurations, including Planetary Boundary Layer schemes, Microphysics, Land-surface models, Radiation schemes, Sea Surface Temperature and Four-Dimensional Data Assimilation, were tested in a 3 km spatial resolution domain. Model results for the most significant meteorological variables were assessed through a series of common statistics. The physics options identified to produce better results (Yonsei University Planetary Boundary Layer, WRF Single-Moment 6-class microphysics, Noah Land-surface model, Eta Geophysical Fluid Dynamics Laboratory longwave radiation and MM5 shortwave radiation schemes) along with other relevant user settings (time-varying Sea Surface Temperature and combined grid-observational nudging) were included in a "best case" configuration. This setup was tested and found to produce more accurate estimation of temperature, wind and humidity fields at surface level than any other configuration for the two episodes simulated. Planetary Boundary Layer height predictions showed a reasonable agreement with estimations derived from routine atmospheric soundings. Although some seasonal and geographical differences were observed, the model showed an acceptable behaviour overall. Besides being useful to define the most appropriate setup of the WRF model for air quality modelling over the Iberian Peninsula, this study provides a general overview of WRF sensitivity and can constitute a reference for future mesoscale meteorological modelling exercises.
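Candidate configurations in an exercise like this are ranked by verification statistics against observations, typically mean bias, RMSE and correlation. A hypothetical sketch of those three metrics (the paper's exact metric set is not specified here):

```python
import math

def verification_stats(model, obs):
    """Mean bias, RMSE and Pearson correlation between a modelled and an
    observed time series, e.g. 2-m temperature at a station. Illustrative
    only; not the study's own verification code."""
    n = len(obs)
    bias = sum(m - o for m, o in zip(model, obs)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    mm = sum(model) / n
    om = sum(obs) / n
    cov = sum((m - mm) * (o - om) for m, o in zip(model, obs))
    r = cov / math.sqrt(sum((m - mm) ** 2 for m in model) *
                        sum((o - om) ** 2 for o in obs))
    return bias, rmse, r
```

A "best case" setup would then be the configuration minimizing RMSE (and |bias|) across variables and stations.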
Swenson, Darrell J.; Geneser, Sarah E.; Stinstra, Jeroen G.; Kirby, Robert M.; MacLeod, Rob S.
2012-01-01
The electrocardiogram (ECG) is ubiquitously employed as a diagnostic and monitoring tool for patients experiencing cardiac distress and/or disease. It is widely known that changes in heart position resulting from, for example, posture of the patient (sitting, standing, lying) and respiration significantly affect the body-surface potentials; however, few studies have quantitatively and systematically evaluated the effects of heart displacement on the ECG. The goal of this study was to evaluate the impact of positional changes of the heart on the ECG in the specific clinical setting of myocardial ischemia. To carry out the necessary comprehensive sensitivity analysis, we applied a relatively novel and highly efficient statistical approach, the generalized polynomial chaos-stochastic collocation method, to a boundary element formulation of the electrocardiographic forward problem, and we drove these simulations with measured epicardial potentials from whole-heart experiments. Results of the analysis identified regions on the body-surface where the potentials were especially sensitive to realistic heart motion. The standard deviation (STD) of ST-segment voltage changes caused by the apex of a normal heart, swinging forward and backward or side-to-side was approximately 0.2 mV. Variations were even larger, 0.3 mV, for a heart exhibiting elevated ischemic potentials. These variations could be large enough to mask or to mimic signs of ischemia in the ECG. Our results suggest possible modifications to ECG protocols that could reduce the diagnostic error related to postural changes in patients possibly suffering from myocardial ischemia. PMID:21909818
Proteomic Analysis of Lipid Raft-Like Detergent-Resistant Membranes of Lens Fiber Cells.
Wang, Zhen; Schey, Kevin L
2015-12-01
Plasma membranes of lens fiber cells have high levels of long-chain saturated fatty acids, cholesterol, and sphingolipids-key components of lipid rafts. Thus, lipid rafts are expected to constitute a significant portion of fiber cell membranes and play important roles in lens biology. The purpose of this study was to characterize the lens lipid raft proteome. Quantitative proteomics, both label-free and iTRAQ methods, were used to characterize lens fiber cell lipid raft proteins. Detergent-resistant, lipid raft membrane (DRM) fractions were isolated by sucrose gradient centrifugation. To confirm protein localization to lipid rafts, protein sensitivity to cholesterol removal by methyl-β-cyclodextrin was quantified by iTRAQ analysis. A total of 506 proteins were identified in raft-like detergent-resistant membranes. Proteins identified support important functions of raft domains in fiber cells, including trafficking, signal transduction, and cytoskeletal organization. In cholesterol-sensitivity studies, 200 proteins were quantified and 71 proteins were strongly affected by cholesterol removal. Lipid raft markers flotillin-1 and flotillin-2 and a significant fraction of AQP0, MP20, and AQP5 were found in the DRM fraction and were highly sensitive to cholesterol removal. Connexins 46 and 50 were more abundant in nonraft fractions, but a small fraction of each was found in the DRM fraction and was strongly affected by cholesterol removal. Quantification of modified AQP0 confirmed that fatty acylation targeted this protein to membrane raft domains. These data represent the first comprehensive profile of the lipid raft proteome of lens fiber cells and provide information on membrane protein organization in these cells.
Schmitt, Michael; Heib, Florian
2013-10-07
Drop shape analysis is one of the most important and frequently used methods to characterise surfaces in the scientific and industrial communities. An especially large number of studies, which use contact angle measurements to analyse surfaces, are characterised by incorrect or misdirected conclusions such as the determination of surface energies from poorly performed contact angle determinations. In particular, the characterisation of surfaces, which leads to correlations between the contact angle and other effects, must be critically validated for some publications. A large number of works exist concerning the theoretical and thermodynamic aspects of two- and tri-phase boundaries. The linkage between theory and experiment is generally performed by an axisymmetric drop shape analysis, that is, simulations of the theoretical drop profiles by numerical integration onto a number of points on the drop meniscus (approximately 20). These methods work very well for axisymmetric profiles such as those obtained by pendant drop measurements, but in the case of a sessile drop on real surfaces, additional unknown and poorly understood surface-dependent effects must be considered. We present a special experimental and practical investigation as another way to transition from experiment to theory. This procedure was developed to be especially sensitive to small variations in the dependence of the dynamic contact angle on the surface; as a result, this procedure will allow the properties of the surface to be monitored with higher precision and sensitivity. In this context, water drops on a (111) silicon wafer are dynamically measured by video recording and by inclining the surface, which results in a sequence of non-axisymmetric drops. The drop profiles are analysed by commercial software and by the developed and presented high-precision drop shape analysis. 
In addition to the enhanced sensitivity for contact angle determination, this analysis technique, in combination with innovative fit algorithms and data presentations, can result in enhanced reproducibility and comparability of the contact angle measurements in terms of the material characterisation in a comprehensible way.
Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study.
Eggenberger, Noëmi; Preisig, Basil C; Schumacher, Rahel; Hopfner, Simone; Vanbellingen, Tim; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Cazzoli, Dario; Müri, René M
2016-01-01
Co-speech gestures are omnipresent and a crucial element of human interaction by facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension in terms of accuracy in a decision task. Twenty aphasic patients and 30 healthy controls watched videos in which speech was either combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. In aphasic patients, the incongruent condition resulted in a significant decrease of accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline accuracy. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase the accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared to controls. Co-speech gestures play an important role for aphasic patients as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients' comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes.
Improved word comprehension in Global aphasia using a modified semantic feature analysis treatment.
Munro, Philippa; Siyambalapitiya, Samantha
2017-01-01
Limited research has investigated treatment of single word comprehension in people with aphasia, despite numerous studies examining treatment of naming deficits. This study employed a single case experimental design to examine efficacy of a modified semantic feature analysis (SFA) therapy in improving word comprehension in an individual with Global aphasia, who presented with a semantically based comprehension impairment. Ten treatment sessions were conducted over a period of two weeks. Following therapy, the participant demonstrated improved comprehension of treatment items and generalisation to control items, measured by performance on a spoken word picture matching task. Improvements were also observed on other language assessments (e.g. subtests of WAB-R; PALPA subtest 47) and were largely maintained over a period of 12 weeks without further therapy. This study provides support for the efficacy of a modified SFA therapy in remediating single word comprehension in individuals with aphasia with a semantically based comprehension deficit.
Predicting Employment Outcomes of Consumers of State-Operated Comprehensive Rehabilitation Centers
ERIC Educational Resources Information Center
Beach, David Thomas
2009-01-01
This study used records from a state-operated comprehensive rehabilitation center to investigate possible predictive factors related to completing comprehensive rehabilitation center programs and successful vocational rehabilitation (VR) case closure. An analysis of demographic data of randomly selected comprehensive rehabilitation center…
Yeari, Menahem; van den Broek, Paul
2016-09-01
It is a well-accepted view that the prior semantic (general) knowledge that readers possess plays a central role in reading comprehension. Nevertheless, computational models of reading comprehension have not integrated the simulation of semantic knowledge and online comprehension processes under a unified mathematical algorithm. The present article introduces a computational model that integrates the landscape model of comprehension processes with latent semantic analysis representation of semantic knowledge. In three sets of simulations of previous behavioral findings, the integrated model successfully simulated the activation and attenuation of predictive and bridging inferences during reading, as well as centrality estimations and recall of textual information after reading. Analyses of the computational results revealed new theoretical insights regarding the underlying mechanisms of the various comprehension phenomena.
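In a latent semantic analysis representation, the relatedness between a text unit and stored semantic knowledge reduces to cosine similarity between concept vectors. A minimal illustration with made-up vectors (not the model's actual LSA space):

```python
import math

def lsa_cosine(v1, v2):
    """Cosine similarity between two LSA concept vectors -- the basic
    operation by which an LSA-backed comprehension model scores how strongly
    a proposition relates to background knowledge."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = (math.sqrt(sum(a * a for a in v1)) *
            math.sqrt(sum(b * b for b in v2)))
    return dot / norm

# identical directions score 1.0; orthogonal (unrelated) vectors score 0.0
related = lsa_cosine([0.8, 0.1, 0.3], [0.7, 0.2, 0.4])
```

In an integrated model of this kind, such similarity scores could feed the activation values that the landscape model updates from reading cycle to reading cycle.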
[On-Orbit Multispectral Sensor Characterization Based on Spectral Tarps].
Li, Xin; Zhang, Li-ming; Chen, Hong-yao; Xu, Wei-wei
2016-03-01
Multispectral remote sensing has become a primary means of research in biomass monitoring, climate change, disaster prediction, and related fields. Spectral sensitivity is essential in the quantitative analysis of remote sensing data. When a sensor operates in space, it is influenced by cosmic radiation, severe temperature changes, chemical molecular contamination, and cosmic dust. As a result, the spectral sensitivity degrades over time, which has great implications for the accuracy and consistency of the physical measurements. This paper presents a method for characterizing this degradation based on man-made spectral targets. First, a degradation model is established. Then, combined with the equivalent reflectance of the spectral targets as measured and inverted from the image, the degradation can be characterized. Simulation and on-orbit experiment results showed that, using the proposed method, changes in center wavelength and bandwidth can be monitored. The proposed method has great significance for improving the accuracy of long time-series remote sensing data products and the comprehensive utilization of multi-sensor data products.
Fozooni, Tahereh; Ravan, Hadi; Sasan, Hosseinali
2017-12-01
Due to their unique properties, such as programmability, ligand-binding capability, and flexibility, nucleic acids can serve as analytes and/or recognition elements for biosensing. To improve the sensitivity of nucleic acid-based biosensing and hence the detection of a few copies of the target molecule, different modern amplification methodologies, namely target- and signal-based amplification strategies, have already been developed. These recent signal amplification technologies, which are capable of amplifying the signal intensity without changing the targets' copy number, have resulted in fast, reliable, and sensitive methods for nucleic acid detection. Working in cell-free settings, researchers have been able to optimize a variety of complex and quantitative methods suitable for deployment in live-cell conditions. In this study, a comprehensive review of the signal amplification technologies for the detection of nucleic acids is provided. We classify the signal amplification methodologies into enzymatic and non-enzymatic strategies with a primary focus on the methods that enable us to shift away from in vitro detection to in vivo imaging. Finally, the future challenges and limitations of detection under cellular conditions are discussed.
Marques, Márcia M C; Junta, Cristina M; Zárate-Blades, Carlos R; Sakamoto-Hojo, Elza Tiemi; Donadi, Eduardo A; Passos, Geraldo A S
2009-07-01
Since circulating leukocytes, mainly B and T cells, continuously maintain vigilant and comprehensive immune surveillance, these cells could be used as reporters for signs of infection or other pathologies, including cancer. Activated lymphocyte clones trigger a sensitive transcriptional response, which could be identified by gene expression profiling. To assess this hypothesis, we conducted microarray analysis of the gene expression profile of lymphocytes isolated from immunocompetent BALB/c mice subcutaneously injected with different numbers of tumorigenic B61 fibrosarcoma cells. Flow cytometry demonstrated that the number of circulating T (CD3(+)CD4(+) or CD3(+)CD8(+)) or B (CD19(+)) cells did not change. However, the lymphocytes isolated from tumor cell-injected animals expressed a unique transcriptional profile that was identifiable before the development of a palpable tumor mass. This finding demonstrates that the transcriptional response appears before alterations in the main lymphocyte subsets and that the gene expression profile of peripheral lymphocytes can serve as a sensitive and accurate method for the early detection of cancer.
An Integrated Framework for Multipollutant Air Quality Management and Its Application in Georgia
NASA Astrophysics Data System (ADS)
Cohan, Daniel S.; Boylan, James W.; Marmur, Amit; Khan, Maudood N.
2007-10-01
Air protection agencies in the United States increasingly confront non-attainment of air quality standards for multiple pollutants sharing interrelated emission origins. Traditional approaches to attainment planning face important limitations that are magnified in the multipollutant context. Recognizing those limitations, the Georgia Environmental Protection Division has adopted an integrated framework to address ozone, fine particulate matter, and regional haze in the state. Rather than applying atmospheric modeling merely as a final check of an overall strategy, photochemical sensitivity analysis is conducted upfront to compare the effectiveness of controlling various precursor emission species and source regions. Emerging software enables the modeling of health benefits and associated economic valuations resulting from air pollution control. Photochemical sensitivity and health benefits analyses, applied together with traditional cost and feasibility assessments, provide a more comprehensive characterization of the implications of various control options. The fuller characterization both informs the selection of control options and facilitates the communication of impacts to affected stakeholders and the public. Although the integrated framework represents a clear improvement over previous attainment-planning efforts, key remaining shortcomings are also discussed.
Shields, Margaret V; Abdullah, Leath; Namdari, Surena
2016-06-01
Propionibacterium acnes is the most common cause of infection after shoulder arthroplasty. Whereas there are several methods that can aid in the diagnosis of P. acnes infection, there is not a single "gold standard" because of the difficulties inherent in identifying this bacterium. We present an evidence-based discussion of the demographic, clinical, and radiographic predictors of P. acnes infection and review the current options for diagnosis. This review was written after a comprehensive analysis of the current literature related to shoulder periprosthetic joint infection and P. acnes identification. Of the techniques reviewed, α-defensin had the highest sensitivity in detecting P. acnes infection (63%). C-reactive protein level and erythrocyte sedimentation rate were often normal in cases of infection. Whereas P. acnes can be challenging to successfully diagnose, there are several options that are considered preferable because of their higher sensitivities and specificities. The current gold standard is intraoperative culture, but major advances in molecular techniques may provide future improvements in diagnostic accuracy. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Performance of the Spot Vision Screener in Children Younger Than 3 Years of Age.
Forcina, Blake D; Peterseim, M Millicent; Wilson, M Edward; Cheeseman, Edward W; Feldman, Samuel; Marzolf, Amanda L; Wolf, Bethany J; Trivedi, Rupal H
2017-06-01
To evaluate the use of the Spot Vision Screener (Spot; Welch Allyn, Skaneateles Falls, New York, USA) for detection of amblyopia risk factors in children aged 6 months to 3 years, as defined by the 2013 guidelines of the American Association for Pediatric Ophthalmology and Strabismus. Reliability analysis. In this study, children seen from June 1, 2012, to April 30, 2016 were tested with the Spot during a routine visit. Enrolled children underwent a comprehensive eye examination including cycloplegic refraction and sensorimotor testing within 6 months of the testing date by a pediatric ophthalmologist masked to the Spot results. A total of 184 children were included. The Spot successfully obtained readings in 89.7% of patients. Compared with the ophthalmologist's examination, the Spot had an overall sensitivity of 89.8% and a specificity of 70.4%. The Spot achieved good sensitivity and specificity for detection of amblyopia risk factors in this young cohort, particularly in the older subgroup. Our data offer support for automated vision screening in young children. Copyright © 2017 Elsevier Inc. All rights reserved.
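Sensitivity and specificity figures like the 89.8% and 70.4% reported above come from a 2x2 comparison of screener output against the masked ophthalmologist's examination. A toy sketch of that computation (the data are invented, not the study's):

```python
def screen_accuracy(screen_pos, gold_pos):
    """Sensitivity and specificity of a screening device against a reference
    ('gold standard') examination. screen_pos / gold_pos are parallel lists
    of booleans: True means a risk factor was flagged / actually present."""
    tp = sum(1 for s, g in zip(screen_pos, gold_pos) if s and g)
    tn = sum(1 for s, g in zip(screen_pos, gold_pos) if not s and not g)
    fn = sum(1 for s, g in zip(screen_pos, gold_pos) if not s and g)
    fp = sum(1 for s, g in zip(screen_pos, gold_pos) if s and not g)
    sensitivity = tp / (tp + fn)   # flagged among truly affected
    specificity = tn / (tn + fp)   # cleared among truly unaffected
    return sensitivity, specificity
```

For a screener, high sensitivity is usually prioritized, since a false negative means a child with amblyopia risk factors goes unreferred.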
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bingol, Kerem; Li, Da-Wei; Zhang, Bo
Identification of metabolites in complex mixtures represents a key step in metabolomics. A new strategy is introduced, which is implemented in a new public web server, COLMARm, that permits the co-analysis of up to three 2D NMR spectra, namely 13C-1H HSQC, 1H-1H TOCSY, and 13C-1H HSQC-TOCSY for the comprehensive, accurate, and efficient performance of this task. The highly versatile and interactive nature of COLMARm permits its application to a wide range of metabolomics samples independent of the magnetic field. Database query is performed using the HSQC spectrum and the top metabolite hits are then validated against the TOCSY-type experiment(s) by superimposing the expected cross-peaks on the mixture spectrum. In this way the user can directly accept or reject candidate metabolites by taking advantage of the complementary spectral information offered by these experiments and their different sensitivities. The power of COLMARm is demonstrated for a human serum sample uncovering the existence of 14 metabolites that hitherto were not identified by NMR.
A Method for Comprehensive Glycosite-Mapping and Direct Quantitation of Serum Glycoproteins.
Hong, Qiuting; Ruhaak, L Renee; Stroble, Carol; Parker, Evan; Huang, Jincui; Maverakis, Emanual; Lebrilla, Carlito B
2015-12-04
A comprehensive glycan map was constructed for the top eight abundant glycoproteins in plasma using both specific and nonspecific enzyme digestions followed by nano liquid chromatography (LC)-chip/quadrupole time-of-flight mass spectrometry (MS) analysis. Glycopeptides were identified using an in-house software tool, GPFinder. A sensitive and reproducible multiple reaction monitoring (MRM) technique on a triple quadrupole MS was developed and applied to quantify immunoglobulins G, A, M, and their site-specific glycans simultaneously and directly from human serum/plasma without protein enrichments. A total of 64 glycopeptides and 15 peptides were monitored for IgG, IgA, and IgM in a 20 min ultra high performance (UP)LC gradient. The absolute protein contents were quantified using peptide calibration curves. The glycopeptide ion abundances were normalized to the respective protein abundances to separate protein glycosylation from protein expression. This technique yields higher method reproducibility and less sample loss when compared with the quantitation method that involves protein enrichments. The absolute protein quantitation has a wide linear range (3-4 orders of magnitude) and low limit of quantitation (femtomole level). This rapid and robust quantitation technique, which provides quantitative information for both proteins and glycosylation, will further facilitate disease biomarker discoveries.
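The normalization step described above divides each glycopeptide's ion abundance by its parent protein's abundance, so that changes in glycosylation are not confounded with changes in protein expression. A minimal sketch with illustrative glycoform names and numbers:

```python
def normalized_glycoforms(glycopeptide_abund, protein_abund):
    """Normalize glycopeptide ion abundances to the parent protein's
    abundance (e.g. from a peptide calibration curve), yielding per-glycoform
    fractions that track glycosylation rather than protein expression.
    All names and values here are hypothetical."""
    return {glycoform: abund / protein_abund
            for glycoform, abund in glycopeptide_abund.items()}

# toy MRM readout for one glycosite of a hypothetical IgG sample
igg_site = {"G0F": 50.0, "G1F": 25.0, "G2F": 10.0}
fractions = normalized_glycoforms(igg_site, protein_abund=100.0)
```

Reporting these ratios rather than raw ion abundances is what lets the method separate site-specific glycosylation differences from overall protein-level differences between samples.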
NASA Technical Reports Server (NTRS)
Mccllough, J. R.; Sharpe, A.; Doetsch, K. H.
1980-01-01
The SIMFAC has played a vital role in the design, development, and performance verification of the shuttle remote manipulator system (SRMS) to be installed in the space shuttle orbiter. The facility provides for realistic man-in-the-loop operation of the SRMS by an operator in the operator complex, a flight-like crew station patterned after the orbiter aft flight deck with all necessary man-machine interface elements, including SRMS displays and controls and simulated out-of-the-window and CCTV scenes. The characteristics of the manipulator system, including arm and joint servo dynamics and control algorithms, are simulated by a comprehensive mathematical model within the simulation subsystem of the facility. Major studies carried out using SIMFAC include: SRMS parameter sensitivity evaluations; the development, evaluation, and verification of operating procedures; and malfunction simulation and analysis of malfunction performance. Among the most important and comprehensive man-in-the-loop simulations carried out to date on SIMFAC are those which support SRMS performance verification and certification when the SRMS is part of the integrated orbiter-manipulator system.
Comprehensive analysis of information dissemination in disasters
NASA Astrophysics Data System (ADS)
Zhang, N.; Huang, H.; Su, Boni
2016-11-01
China is a country that experiences a large number of disasters. The number of deaths caused by large-scale disasters and accidents in the past 10 years is around 900,000. More than 92.8 percent of these deaths could have been avoided if an effective pre-warning system had been deployed. Knowledge of the information dissemination characteristics of different information media, taking into consideration governmental assistance (information published by a government) during disasters in urban areas, plays a critical role in increasing the time available to respond and reducing the number of deaths and economic losses. In this paper we have developed a comprehensive information dissemination model to optimize the efficiency of pre-warning mechanisms. This model can also be used to disseminate information to evacuees making real-time evacuation plans. We analyzed each individual information dissemination model for pre-warning in disasters, considering 14 media: short message service (SMS), phone, television, radio, news portals, Wechat, microblogs, email, newspapers, loudspeaker vehicles, loudspeakers, oral communication, and passive information acquisition via the visual and auditory senses. Since governmental assistance is very useful in a disaster, we calculated the sensitivity of the governmental assistance ratio. The results provide useful references for information dissemination during disasters in urban areas.
Sullivan, Marianne; Green, Donna
2016-01-06
This study assesses the accuracy and comprehensiveness of online public health education materials from the three Australian cities with active lead mines and/or smelters: Broken Hill, Mount Isa and Port Pirie. Qualitative content analysis of online Australian material with comparison to international best practice where possible. All materials provided incomplete information about the health effects of lead and pathways of exposure compared to best practice materials. Inconsistent strategies to reduce exposure to lead were identified among the Australian cities, and some evidence-based best practices were not included. The materials normalised environmental lead and neglected to identify that there is no safe level of lead, or that primary prevention is the best strategy for protecting children's health. Health education materials need to clearly state health risks from lead across developmental stages and for sensitive populations, integrate a primary prevention perspective, and provide comprehensive evidence-based recommendations for reducing lead exposure in and around the home. Families who rely on information provided by these online public education materials are likely to be inadequately informed about the importance of protecting their children from exposure to lead and strategies for doing so.
Novel risk score of contrast-induced nephropathy after percutaneous coronary intervention.
Ji, Ling; Su, XiaoFeng; Qin, Wei; Mi, XuHua; Liu, Fei; Tang, XiaoHong; Li, Zi; Yang, LiChuan
2015-08-01
Contrast-induced nephropathy (CIN) post-percutaneous coronary intervention (PCI) is a major cause of acute kidney injury. In this study, we established a comprehensive risk score model to assess the risk of CIN after the PCI procedure, which could be easily used in a clinical environment. A total of 805 PCI patients, divided into an analysis cohort (70%) and a validation cohort (30%), were enrolled retrospectively in this study. Risk factors for CIN were identified using univariate analysis and multivariate logistic regression in the analysis cohort. A risk score model was developed based on the multivariate regression coefficients. The sensitivity and specificity of the new risk score system were validated in the validation cohort. Comparisons between the new risk score model and previously reported models were performed. The incidence of post-PCI CIN in the analysis cohort (n = 565) was 12%. A considerably high CIN incidence (50%) was observed in patients with chronic kidney disease (CKD). Age >75, body mass index (BMI) >25, myoglobin level, cardiac function level, hypoalbuminaemia, history of CKD, intra-aortic balloon pump (IABP) and peripheral vascular disease (PVD) were identified as independent risk factors of post-PCI CIN. A novel risk score model was established using the multivariate regression coefficients; it showed the highest sensitivity and specificity (0.917, 95% CI 0.877-0.957) compared with previous models. A new post-PCI CIN risk score model was developed based on a retrospective study of 805 patients. Application of this model might be helpful to predict CIN in patients undergoing the PCI procedure. © 2015 Asian Pacific Society of Nephrology.
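Risk scores of this kind are typically built by scaling each logistic regression coefficient against the smallest coefficient and rounding to integer points; a patient's score is then the sum of points for the risk factors present. A minimal sketch of that conversion, with illustrative coefficients (not the study's fitted values):

```python
def build_point_scores(coefficients):
    """Convert logistic regression coefficients into integer points by
    scaling against the smallest coefficient and rounding."""
    base = min(coefficients.values())
    return {factor: round(beta / base) for factor, beta in coefficients.items()}

def risk_score(points, patient_factors):
    """Sum the points for every risk factor the patient has."""
    return sum(points[f] for f in patient_factors)

# Illustrative coefficients only (NOT the study's values)
coefs = {"age>75": 0.45, "BMI>25": 0.50, "CKD_history": 1.40, "IABP": 0.90}
points = build_point_scores(coefs)           # {"age>75": 1, "BMI>25": 1, "CKD_history": 3, "IABP": 2}
score = risk_score(points, ["age>75", "CKD_history"])  # 1 + 3 = 4
```

Patients are then stratified by comparing the total score against cutoffs chosen from the ROC analysis in the validation cohort.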
Dranitsaris, G; Altmayer, C; Quirt, I
1997-06-01
Several randomised comparative trials have shown that granulocyte colony-stimulating factor (G-CSF) reduces the duration of neutropenia, hospitalisation and intravenous antibacterial use in patients with cancer who are receiving high-dosage antineoplastic therapy. However, one area that has received less attention is the role of G-CSF in standard-dosage antineoplastic regimens. One such treatment that is considered to have a low potential for inducing fever and neutropenia is the CHOP regimen (cyclophosphamide, doxorubicin, vincristine and prednisone) for non-Hodgkin's lymphoma. We conducted a cost-benefit analysis from a societal perspective in order to estimate the net cost or benefit of prophylactic G-CSF in this patient population. This included direct costs for hospitalisation with antibacterial support, as well as indirect societal costs, such as time off work and antineoplastic therapy delays secondary to neutropenia. The findings were then tested by a comprehensive sensitivity analysis. The administration of G-CSF at a dosage of 5 micrograms/kg/day for 11 doses following CHOP resulted in an overall net cost of $Can1257. In the sensitivity analysis, lowering the G-CSF dosage to 2 micrograms/kg/day generated a net benefit of $Can6564, indicating a situation that was cost saving to society. The results of the current study suggest that the use of G-CSF in patients receiving CHOP antineoplastic therapy produces a situation that is close to achieving cost neutrality. However, low-dosage (2 micrograms/kg/day) G-CSF is an economically attractive treatment strategy because it may result in overall savings to society.
Chen, Jin-Qiu; Wakefield, Lalage M; Goldstein, David J
2015-06-06
There is an emerging demand for the use of molecular profiling to facilitate biomarker identification and development, and to stratify patients for more efficient treatment decisions with reduced adverse effects. In the past decade, great strides have been made to advance genomic, transcriptomic and proteomic approaches to address these demands. While there has been much progress with these large scale approaches, profiling at the protein level still faces challenges due to limitations in clinical sample size, poor reproducibility, unreliable quantitation, and lack of assay robustness. A novel automated capillary nano-immunoassay (CNIA) technology has been developed. This technology offers precise and accurate measurement of proteins and their post-translational modifications using either charge-based or size-based separation formats. The system not only uses ultralow nanogram levels of protein but also allows multi-analyte analysis using a parallel single-analyte format for increased sensitivity and specificity. The high sensitivity and excellent reproducibility of this technology make it particularly powerful for analysis of clinical samples. Furthermore, the system can distinguish and detect specific protein post-translational modifications that conventional Western blot and other immunoassays cannot easily capture. This review will summarize and evaluate the latest progress to optimize the CNIA system for comprehensive, quantitative protein and signaling event characterization. It will also discuss how the technology has been successfully applied in both discovery research and clinical studies, for signaling pathway dissection, proteomic biomarker assessment, targeted treatment evaluation and quantitative proteomic analysis. Lastly, a comparison of this novel system with other conventional immuno-assay platforms is performed.
New Ways to Detect Pediatric Sickle Cell Retinopathy: A Comprehensive Review.
Pahl, Daniel A; Green, Nancy S; Bhatia, Monica; Chen, Royce W S
2017-11-01
Sickle retinopathy reflects disease-related vascular injury of the eye, which can potentially result in visual loss from vitreous hemorrhage or retinal detachment. Here we review sickle retinopathy among children with sickle cell disease and describe the epidemiology, pediatric risk factors, pathophysiology, ocular findings, and treatment. Newer, more sensitive ophthalmological imaging modalities are available for retinal imaging, including ultra-widefield fluorescein angiography, spectral-domain optical coherence tomography, and optical coherence tomography angiography. Optical coherence tomography angiography provides a noninvasive view of retinal vascular layers that could previously not be imaged and can be quantified for comparative or prospective analyses. Ultra-widefield fluorescein angiography provides a more comprehensive view of the peripheral retina than traditional imaging techniques. Screening for retinopathy by standard fundoscopic imaging modalities detects a prevalence of approximately 10%. In contrast, these newer methods allow for a more sensitive examination that includes the retinal periphery, where sickle retinopathy is often first detectable. Use of these new imaging modalities may detect a higher prevalence of early sickle pathology among children than has previously been reported. Earlier detection may help in better understanding the pathogenesis of sickle retinopathy and guide future screening and treatment paradigms.
Weekes, Anthony J; Thacker, Gregory; Troha, Daniel; Johnson, Angela K; Chanler-Berat, Jordan; Norton, H James; Runyon, Michael
2016-09-01
We determine the diagnostic accuracy of goal-directed echocardiography, cardiac biomarkers, and computed tomography (CT) in early identification of severe right ventricular dysfunction in normotensive emergency department patients with pulmonary embolism compared with comprehensive echocardiography. This was a prospective observational study of consecutive normotensive patients with confirmed pulmonary embolism. Investigators, blinded to clot burden and biomarkers, performed qualitative goal-directed echocardiography for right ventricular dysfunction: right ventricular enlargement (diameter greater than or equal to that of the left ventricle), severe right ventricular systolic dysfunction, and septal bowing. Brain natriuretic peptide and troponin cutoffs of greater than or equal to 90 pg/mL and greater than or equal to 0.07 ng/mL and CT right ventricular:left ventricular diameter ratio greater than or equal to 1.0 were also compared with comprehensive echocardiography. One hundred sixteen normotensive pulmonary embolism patients (111 confirmed by CT, 5 by ventilation-perfusion scan) were enrolled. Twenty-six of 116 patients (22%) had right ventricular dysfunction on comprehensive echocardiography. Goal-directed echocardiography had a sensitivity of 100% (95% confidence interval [CI] 87% to 100%), specificity of 99% (95% CI 94% to 100%), positive likelihood ratio (+LR) of 90.0 (95% CI 16.3 to 499.8), and negative likelihood ratio (-LR) of 0 (95% CI 0 to 0.13). Brain natriuretic peptide had a sensitivity of 88% (95% CI 70% to 98%), specificity of 68% (95% CI 57% to 78%), +LR of 2.8 (95% CI 2.0 to 3.9), and -LR of 0.17 (95% CI 0.06 to 0.43). Troponin had a sensitivity of 62% (95% CI 41% to 80%), specificity of 93% (95% CI 86% to 98%), +LR of 9.2 (95% CI 4.1 to 20.9), and -LR of 0.41 (95% CI 0.24 to 0.62). CT had a sensitivity of 91% (95% CI 72% to 99%), specificity of 79% (95% CI 69% to 87%), +LR of 4.3 (95% CI 2.8 to 6.7), and -LR of 0.11 (95% CI 0.03 to 0.34). 
Goal-directed echocardiography was highly accurate for early severe right ventricular dysfunction identification and pulmonary embolism risk-stratification. Brain natriuretic peptide was sensitive but less specific, whereas troponin had lower sensitivity but higher specificity. CT had good sensitivity and moderate specificity. Copyright © 2016 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
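The likelihood ratios quoted above follow directly from sensitivity and specificity: +LR = sensitivity / (1 − specificity) and −LR = (1 − sensitivity) / specificity. Small discrepancies with the published values arise because the study computed them from raw counts rather than rounded percentages. A quick check using the rounded brain natriuretic peptide figures:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a diagnostic test.
    +LR = sens / (1 - spec); -LR = (1 - sens) / spec."""
    positive_lr = sensitivity / (1 - specificity)
    negative_lr = (1 - sensitivity) / specificity
    return positive_lr, negative_lr

# Brain natriuretic peptide: sensitivity 88%, specificity 68% (rounded)
plr, nlr = likelihood_ratios(0.88, 0.68)
# plr ~ 2.75 (reported 2.8), nlr ~ 0.18 (reported 0.17)
```

The same two lines reproduce the troponin and CT ratios to within rounding of the published percentages.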
NASA Astrophysics Data System (ADS)
Rodrigues, Raquel O.; Bañobre-López, Manuel; Gallo, Juan; Tavares, Pedro B.; Silva, Adrián M. T.; Lima, Rui; Gomes, Helder T.
2016-07-01
The poor heating efficiency of most reported magnetic nanoparticles (MNPs), together with the lack of comprehensive biocompatibility and haemodynamic studies, hampers the spread of multifunctional nanoparticles as the next generation of therapeutic bio-agents in medicine. The present work reports the synthesis and characterization, with special focus on biological/toxicological compatibility, of superparamagnetic nanoparticles with diameters around 18 nm, suitable for theranostic applications (i.e. simultaneous diagnosis and therapy of cancer). To gain more insight into the complex nanoparticle-red blood cell (RBC) membrane interaction, the deformability of human RBCs in contact with MNPs was assessed for the first time with a microfluidic extensional approach and used as an indicator of haematological disorders, in comparison with a conventional haematological test, i.e. haemolysis analysis. The microfluidic results highlight the potential of this microfluidic tool over traditional haemolysis analysis: it detected small increments in the rigidity of the blood cells when traditional haemotoxicology analysis showed no significant alteration (haemolysis rates lower than 2%). The detected rigidity is predicted to be due to the wrapping of small MNPs by the bilayer membrane of the RBCs, which is directly related to MNP size, shape and composition. The proposed microfluidic tool adds a new dimension to the field of nanomedicine, allowing it to be applied as a high-sensitivity technique capable of bringing a better understanding of the biological impact of nanoparticles developed for clinical applications.
von Roepenack-Lahaye, Edda; Degenkolb, Thomas; Zerjeski, Michael; Franz, Mathias; Roth, Udo; Wessjohann, Ludger; Schmidt, Jürgen; Scheel, Dierk; Clemens, Stephan
2004-02-01
Large-scale metabolic profiling is expected to develop into an integral part of functional genomics and systems biology. The metabolome of a cell or an organism is chemically highly complex. Therefore, comprehensive biochemical phenotyping requires a multitude of analytical techniques. Here, we describe a profiling approach that combines separation by capillary liquid chromatography with the high resolution, high sensitivity, and high mass accuracy of quadrupole time-of-flight mass spectrometry. About 2000 different mass signals can be detected in extracts of Arabidopsis roots and leaves. Many of these originate from Arabidopsis secondary metabolites. Detection based on retention times and exact masses is robust and reproducible. The dynamic range is sufficient for the quantification of metabolites. Assessment of the reproducibility of the analysis showed that biological variability exceeds technical variability. Tools were optimized or established for automatic data deconvolution and processing. Subtle differences between samples can be detected, as tested with the chalcone synthase-deficient tt4 mutant. The accuracy of time-of-flight mass analysis allows calculation of elemental compositions and tentative identification of metabolites. In-source fragmentation and tandem mass spectrometry can be used to gain structural information. This approach has the potential to significantly contribute to establishing the metabolome of Arabidopsis and other model systems. The principles of separation and mass analysis of this technique, together with its sensitivity and resolving power, greatly expand the range of metabolic profiling.
Effects of strain differences and vehicles on results of local lymph node assays.
Anzai, Takayuki; Ullmann, Ludwig G; Hayashi, Daisuke; Satoh, Tetsuo; Kumazawa, Takeshi; Sato, Keizo
2010-01-01
The Local Lymph Node Assay (LLNA) is now regarded as the worldwide standard. The analysis of accumulated LLNA data reveals that the animal strains and vehicles employed are likely to affect LLNA results. Here we show that an obvious strain difference in the local lymph node response was observed between DMSO-treated CBA/CaOlaHsd and CBA/CaHsdRcc mice. We also show that a vehicle difference in the response was observed when CBA/CaHsdRcc mice were exposed to six vehicles: 4:1 v/v acetone/olive oil (AOO), ethanol/water (70% EtOH), N,N-dimethylformamide (DMF), 2-butanone (BN), propylene glycol (PG), and dimethylsulfoxide (DMSO). The dpm/LN level was lowest in the 70% EtOH group and highest in the DMSO group. When alpha-hexylcinnamaldehyde (HCA) was used as a sensitizer for the LLNA, HCA was a weak sensitizer when AOO or DMSO was used as the vehicle, but a moderate sensitizer when the other four vehicles were used. This study showed that there are vehicle differences in the local lymph node response (dpm/LN level) in the LLNA and that the sensitization potency of HCA may be classified into different categories when different vehicles are used. This suggests that careful consideration should be exercised in selecting a vehicle for the LLNA. A further comprehensive study will be needed to investigate why vehicle differences are observed in the LLNA.
Cameron, Janette D; Gallagher, Robyn; Pressler, Susan J; McLennan, Skye N; Ski, Chantal F; Tofler, Geoffrey; Thompson, David R
2016-02-01
Cognitive impairment occurs in up to 80% of patients with heart failure (HF). The National Institute for Neurological Disorders and Stroke (NINDS) and the Canadian Stroke Network (CSN) recommend a 5-minute cognitive screening protocol that has yet to be psychometrically evaluated in HF populations. The aim of this study was to conduct a secondary analysis of the sensitivity and specificity of the NINDS-CSN brief cognitive screening protocol in HF patients. The Montreal Cognitive Assessment (MoCA) was administered to 221 HF patients. The NINDS-CSN screen comprises 3 MoCA items, with lower scores indicating poorer cognitive function. Receiver operating characteristic (ROC) curves were constructed to determine the sensitivity, specificity, and appropriate cutoff scores of the NINDS-CSN screen. In an HF population aged 76 ± 12 years, 136 (62%) were characterized with cognitive impairment (MoCA <26). Scores on the NINDS-CSN screen ranged from 3-11. The area under the ROC curve indicated good accuracy in screening for cognitive impairment (0.88; P < .01; 95% CI 0.83-0.92). A cutoff score of ≤9 provided 89% sensitivity and 71% specificity. The NINDS-CSN protocol offers clinicians a feasible telephone method to screen for cognitive impairment in patients with HF. Future studies should include a neuropsychologic battery to more comprehensively examine the diagnostic accuracy of brief cognitive screening protocols. Copyright © 2016 Elsevier Inc. All rights reserved.
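Choosing a cutoff on a screening score trades sensitivity against specificity: each candidate cutoff yields one point on the ROC curve, and the reported cutoff is the one with the preferred balance. A minimal sketch of how sensitivity and specificity are computed at a given cutoff (the data below are illustrative, not the study's; on the NINDS-CSN screen lower scores indicate poorer cognition, so scores at or below the cutoff are flagged positive):

```python
def sens_spec_at_cutoff(scores, impaired, cutoff):
    """Sensitivity and specificity when a score <= cutoff counts as a positive screen."""
    tp = sum(1 for s, y in zip(scores, impaired) if y and s <= cutoff)
    fn = sum(1 for s, y in zip(scores, impaired) if y and s > cutoff)
    tn = sum(1 for s, y in zip(scores, impaired) if not y and s > cutoff)
    fp = sum(1 for s, y in zip(scores, impaired) if not y and s <= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative data: screen scores and true impairment status (MoCA < 26)
scores   = [4, 6, 8, 9, 10, 10, 11, 8]
impaired = [True, True, True, True, True, False, False, False]
sens, spec = sens_spec_at_cutoff(scores, impaired, cutoff=9)
# sens = 4/5 = 0.8, spec = 2/3
```

Sweeping `cutoff` over all observed score values and plotting sensitivity against 1 − specificity traces the full ROC curve whose area the study reports.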
Bedside Diagnosis of Dysphagia: A Systematic Review
O’Horo, John C.; Rogus-Pulia, Nicole; Garcia-Arguello, Lisbeth; Robbins, JoAnne; Safdar, Nasia
2015-01-01
Background: Dysphagia is associated with aspiration, pneumonia and malnutrition, but remains challenging to identify at the bedside. A variety of exam protocols and maneuvers are commonly used, but the efficacy of these maneuvers is highly variable. Methods: We conducted a comprehensive search of seven databases, including MEDLINE, EMBASE and Scopus, from each database's earliest inception through June 5th, 2013. Studies reporting diagnostic performance of a bedside examination maneuver compared to a reference gold standard (videofluoroscopic swallow study [VFSS] or flexible endoscopic evaluation of swallowing with sensory testing [FEEST]) were included for analysis. From each study, data were abstracted based on the type of diagnostic method and reference standard, study population and inclusion/exclusion characteristics, design, and prediction of aspiration. Results: The search strategy identified 38 articles meeting inclusion criteria. Overall, most bedside examinations lacked sufficient sensitivity to be used for screening purposes across all patient populations examined. Individual studies found dysphonia assessments, abnormal pharyngeal sensation assessments, dual-axis accelerometry, and one description of water swallow testing to be sensitive tools, but none were reported as consistently sensitive. A preponderance of identified studies was in post-stroke adults, limiting the generalizability of results. Conclusions: No bedside screening protocol has been shown to provide adequate predictive value for presence of aspiration. Several individual exam maneuvers demonstrated reasonable sensitivity, but the reproducibility and consistency of these protocols was not established. More research is needed to design an optimal protocol for dysphagia detection. PMID:25581840
ERIC Educational Resources Information Center
Melby-Lervag, Monica; Lervag, Arne
2011-01-01
We present a meta-analysis of cross-linguistic transfer of oral language (vocabulary and listening comprehension), phonology (decoding and phonological awareness) and reading comprehension. Our findings show a small meta-correlation between first (L1) and second (L2) oral language and a moderate to large correlation between L1 and L2 phonological…
Living near nuclear power plants and thyroid cancer risk: A systematic review and meta-analysis.
Kim, Jaeyoung; Bang, Yejin; Lee, Won Jin
2016-02-01
There has been public concern regarding the safety of residing near nuclear power plants, and the extent of risk for thyroid cancer among adults living near nuclear power plants has not been fully explored. In the present study, a systematic review and meta-analysis of epidemiologic studies was conducted to investigate the association between living near nuclear power plants and the risk of thyroid cancer. A comprehensive literature search was performed on studies published up to March 2015 on the association between nuclear power plants and thyroid cancer risk. The summary standardized incidence ratio (SIR), standardized mortality ratio (SMR), and 95% confidence intervals (CIs) were calculated using a random-effect model of meta-analysis. Sensitivity analyses were performed by study quality. Thirteen studies were included in the meta-analysis, covering 36 nuclear power stations in 10 countries. Overall, summary estimates showed no significant increased thyroid cancer incidence or mortality among residents living near nuclear power plants (summary SIR=0.98; 95% CI 0.87-1.11, summary SMR=0.80; 95% CI 0.62-1.04). The pooled estimates did not reveal different patterns of risk by gender, exposure definition, or reference population. However, sensitivity analysis by exposure definition showed that living less than 20 km from nuclear power plants was associated with a significant increase in the risk of thyroid cancer in well-designed studies (summary OR=1.75; 95% CI 1.17-2.64). Our study does not support an association between living near nuclear power plants and risk of thyroid cancer but does support a need for well-designed future studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
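The pooled SIR/SMR figures above come from inverse-variance weighting of log-ratios under a random-effects model. A minimal DerSimonian-Laird sketch of that pooling (the input ratios and standard errors are illustrative, not the thirteen studies' actual data):

```python
import math

def dersimonian_laird(ratios, std_errs):
    """Random-effects pooling of ratio estimates (SIR/SMR/OR) on the log scale."""
    logs = [math.log(r) for r in ratios]
    w = [1.0 / se ** 2 for se in std_errs]
    fixed = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    # Cochran's Q and the DL estimate of between-study variance tau^2
    q = sum(wi * (li - fixed) ** 2 for wi, li in zip(w, logs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(logs) - 1)) / c)
    # re-weight with tau^2 added to each study's variance
    wr = [1.0 / (se ** 2 + tau2) for se in std_errs]
    pooled = sum(wi * li for wi, li in zip(wr, logs)) / sum(wr)
    se_pooled = math.sqrt(1.0 / sum(wr))
    ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))
    return math.exp(pooled), ci

# Illustrative study-level SIRs and standard errors of their logs
sir, (lo, hi) = dersimonian_laird([0.9, 1.1, 0.95, 1.05], [0.10, 0.15, 0.12, 0.20])
```

A pooled 95% CI that straddles 1.0, as here and in the review's summary estimates, indicates no statistically significant excess incidence or mortality.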
Immediate sensitivity to structural constraints in pronoun resolution
Chow, Wing-Yee; Lewis, Shevaun; Phillips, Colin
2014-01-01
Real-time interpretation of pronouns is sometimes sensitive to the presence of grammatically-illicit antecedents and sometimes not. This occasional sensitivity has been taken as evidence that structural constraints do not immediately impact the initial antecedent retrieval for pronoun interpretation. We argue that it is important to separate effects that reflect the initial antecedent retrieval process from those that reflect later processes. We present results from five reading comprehension experiments. Both the current results and previous evidence support the hypothesis that agreement features and structural constraints immediately constrain the antecedent retrieval process for pronoun interpretation. Occasional sensitivity to grammatically-illicit antecedents may be due to repair processes triggered when the initial retrieval fails to return a grammatical antecedent. PMID:25018739
Kline, Kimberly N
2007-01-01
This study discusses the implications for cultural sensitivity of the rhetorical choices in breast cancer education materials developed specifically for African American audiences by national organizations. Using the PEN-3 model of cultural sensitivity as an analytic framework for a generative rhetorical criticism, this study revealed that adaptations have been made in some pamphlets to acknowledge African American cultural values related to community, self-reliance, spirituality, and distrust of the Western medical establishment, but many messages could be revised to achieve a more comprehensive, balanced, accurate, and audience-specific discussion of the breast cancer issue. Achieving cultural sensitivity in health promotion materials necessitates attention to nuanced meanings in messages, revision of questionable arguments and evidence, and avoidance of ambiguity.
Peng, Qiliang; Zhang, Xueli; Min, Ming; Zou, Li; Shen, Peipei; Zhu, Yaqun
2017-07-04
This systematic analysis aimed to investigate the value of microRNA-21 (miR-21) in colorectal cancer for multiple purposes, including diagnosis and prognosis, as well as its predictive power in combination biomarkers. Fifty-seven eligible studies were included in our meta-analysis, including 25 studies for diagnostic meta-analysis and 32 for prognostic meta-analysis. For the diagnostic meta-analysis of miR-21 alone, the overall pooled results for sensitivity, specificity, and area under the curve (AUC) were 0.64 (95% CI: 0.53-0.74), 0.85 (0.79-0.90), and 0.85 (0.81-0.87), respectively. Circulating samples presented corresponding values of 0.72 (0.63-0.79), 0.84 (0.78-0.89), and 0.86 (0.83-0.89), respectively. For the diagnostic meta-analysis of miR-21-related combination biomarkers, the above three parameters were 0.79 (0.69-0.86), 0.79 (0.68-0.87), and 0.86 (0.83-0.89), respectively. Notably, subgroup analysis suggested that miRNA combination markers in circulation exhibited high predictive power, with sensitivity of 0.85 (0.70-0.93), specificity of 0.86 (0.77-0.92), and AUC of 0.92 (0.89-0.94). For the prognostic meta-analysis, patients with higher expression of miR-21 had significant shorter disease-free survival [DFS; pooled hazard ratio (HR): 1.60; 95% CI: 1.20-2.15] and overall survival (OS; 1.54; 1.27-1.86). The combined HR in tissues for DFS and OS were 1.76 (1.31-2.36) and 1.58 (1.30-1.93), respectively. Our comprehensive systematic review revealed that circulating miR-21 may be suitable as a diagnostic biomarker, while tissue miR-21 could be a prognostic marker for colorectal cancer. In addition, miRNA combination biomarkers may provide a new approach for clinical application.
Prediction During Natural Language Comprehension.
Willems, Roel M; Frank, Stefan L; Nijhof, Annabel D; Hagoort, Peter; van den Bosch, Antal
2016-06-01
The notion of prediction is studied in cognitive neuroscience with increasing intensity. We investigated the neural basis of 2 distinct aspects of word prediction, derived from information theory, during story comprehension. We assessed the effect of entropy of next-word probability distributions as well as surprisal. A computational model determined entropy and surprisal for each word in 3 literary stories. Twenty-four healthy participants listened to the same 3 stories while their brain activation was measured using fMRI. Reversed speech fragments were presented as a control condition. Brain areas sensitive to entropy were left ventral premotor cortex, left middle frontal gyrus, right inferior frontal gyrus, left inferior parietal lobule, and left supplementary motor area. Areas sensitive to surprisal were left inferior temporal sulcus ("visual word form area"), bilateral superior temporal gyrus, right amygdala, bilateral anterior temporal poles, and right inferior frontal sulcus. We conclude that prediction during language comprehension can occur at several levels of processing, including at the level of word form. Our study exemplifies the power of combining computational linguistics with cognitive neuroscience, and additionally underlines the feasibility of studying continuous spoken language materials with fMRI. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
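The two information-theoretic quantities are straightforward to compute from a next-word probability distribution: entropy measures the uncertainty about the upcoming word before it is heard, while surprisal measures how unexpected the word actually encountered is. A small sketch with a toy distribution (not the model or corpus used in the study):

```python
import math

def entropy(dist):
    """Entropy (bits) of a next-word probability distribution: -sum p * log2 p."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def surprisal(p_word):
    """Surprisal (bits) of the word actually encountered: -log2 P(word | context)."""
    return -math.log2(p_word)

# Toy next-word distribution after some context
dist = {"dog": 0.5, "cat": 0.25, "mouse": 0.25}
h = entropy(dist)            # 1.5 bits of uncertainty before the word arrives
s = surprisal(dist["cat"])   # 2.0 bits if "cat" is the word that occurs
```

Note that entropy is a property of the whole distribution (one value per context), whereas surprisal depends on which word actually occurred, which is why the two measures can dissociate in the brain.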
Comprehensive School Reform and Achievement: A Meta-Analysis. Educator's Summary
ERIC Educational Resources Information Center
Center for Data-Driven Reform in Education (NJ3), 2008
2008-01-01
Which comprehensive school reform programs have been proven to help elementary and secondary students achieve? To find out, this review summarizes evidence on comprehensive school reform (CSR) models in elementary and secondary schools. Comprehensive school reform models are programs used schoolwide to improve student achievement. They typically…
Boudewyn, Megan A.; Long, Debra L.; Traxler, Matthew J.; Lesh, Tyler A.; Dave, Shruti; Mangun, George R.; Carter, Cameron S.; Swaab, Tamara Y.
2016-01-01
The establishment of reference is essential to language comprehension. The goal of this study was to examine listeners’ sensitivity to referential ambiguity as a function of individual variation in attention, working memory capacity, and verbal ability. Participants listened to stories in which two entities were introduced that were either very similar (e.g., two oaks) or less similar (e.g., one oak and one elm). The manipulation rendered an anaphor in a subsequent sentence (e.g., oak) ambiguous or unambiguous. EEG was recorded as listeners comprehended the story, after which participants completed tasks to assess working memory, verbal ability, and the ability to use context in task performance. Power in the alpha and theta frequency bands when listeners received critical information about the discourse entities (e.g., oaks) was used to index attention and the involvement of the working memory system in processing the entities. These measures were then used to predict an ERP component that is sensitive to referential ambiguity, the Nref, which was recorded when listeners received the anaphor. Nref amplitude at the anaphor was predicted by alpha power during the earlier critical sentence: Individuals with increased alpha power in ambiguous compared with unambiguous stories were less sensitive to the anaphor's ambiguity. Verbal ability was also predictive of greater sensitivity to referential ambiguity. Finally, increased theta power in the ambiguous compared with unambiguous condition was associated with higher working-memory span. These results highlight the role of attention and working memory in referential processing during listening comprehension. PMID:26401815
Functional organization of the face-sensitive areas in human occipital-temporal cortex.
Shao, Hanyu; Weng, Xuchu; He, Sheng
2017-08-15
Human occipital-temporal cortex features several areas sensitive to faces, presumably forming the biological substrate for face perception. To date, there are piecemeal insights regarding the functional organization of these regions. They have come, however, from studies that are far from homogeneous with regard to the regions involved, the experimental design, and the data analysis approach. In order to provide an overall view of the functional organization of the face-sensitive areas, it is necessary to conduct a comprehensive study that taps into the pivotal functional properties of all the face-sensitive areas, within the context of the same experimental design, and uses multiple data analysis approaches. In this study, we identified the most robustly activated face-sensitive areas in bilateral occipital-temporal cortices (i.e., AFP, aFFA, pFFA, OFA, pcSTS, pSTS) and systemically compared their regionally averaged activation and multivoxel activation patterns to 96 images from 16 object categories, including faces and non-faces. This condition-rich and single-image analysis approach critically samples the functional properties of a brain region, allowing us to test how two basic functional properties, namely face-category selectivity and face-exemplar sensitivity are distributed among these regions. Moreover, by examining the correlational structure of neural responses to the 96 images, we characterize their interactions in the greater face-processing network. We found that (1) r-pFFA showed the highest face-category selectivity, followed by l-pFFA, bilateral aFFA and OFA, and then bilateral pcSTS. 
In contrast, bilateral AFP and pSTS showed low face-category selectivity; (2) l-aFFA, l-pcSTS, and bilateral AFP showed evidence of face-exemplar sensitivity; (3) r-OFA showed high overall response similarities with bilateral LOC and r-pFFA, suggesting it might be a transitional stage between general and face-selective information processing; (4) r-aFFA showed high face-selective response similarity with r-pFFA and r-OFA, indicating that it is specifically involved in processing face information. Results also reveal two properties of these face-sensitive regions across the two hemispheres: (1) the averaged left intra-hemispheric response similarity for the images was lower than the averaged right intra-hemispheric and the inter-hemispheric response similarity, implying convergence of face processing towards the right hemisphere, and (2) the response similarities between homologous regions in the two hemispheres decreased as information processing proceeded beyond the early, more posterior, processing stage (OFA), indicating an increasing degree of hemispheric specialization and right-hemisphere bias for face information processing. This study contributes to an emerging picture of how faces are processed within the occipital and temporal cortex. Copyright © 2017 Elsevier Inc. All rights reserved.
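The response-similarity comparisons summarized above reduce to correlating per-image response profiles between regions over the 96 images. A toy sketch with synthetic profiles (region names are reused purely for illustration; the values are not study data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical regionally averaged responses of three regions to 96 images.
# Two regions share a common signal; the third has an unrelated profile.
n_images = 96
shared = rng.normal(size=n_images)                # shared face/object signal
r_pFFA = shared + 0.3 * rng.normal(size=n_images)
r_OFA = shared + 0.3 * rng.normal(size=n_images)
r_pSTS = rng.normal(size=n_images)                # unrelated profile

def response_similarity(a, b):
    """Pearson correlation between two per-image response profiles."""
    return np.corrcoef(a, b)[0, 1]

sim_ffa_ofa = response_similarity(r_pFFA, r_OFA)
sim_ffa_psts = response_similarity(r_pFFA, r_pSTS)
```

Regions driven by a common signal yield a high profile correlation; unrelated regions hover near zero, which is the logic behind the correlational-structure analysis.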
Tatone, Elise H; Gordon, Jessica L; Hubbs, Jessie; LeBlanc, Stephen J; DeVries, Trevor J; Duffield, Todd F
2016-08-01
Several rapid tests for on-farm use have been validated for the detection of hyperketonemia (HK) in dairy cattle; however, the reported sensitivity and specificity of each method vary, and no single study has compared them all. Meta-analysis of diagnostic test accuracy is becoming more common in the human medical literature, but there are few veterinary examples. The objective of this work was to perform a systematic review and meta-analysis to determine the point-of-care testing method with the highest combined sensitivity and specificity, to determine the optimal threshold for each method, and to identify gaps in the literature. A comprehensive literature search resulted in 5196 references. After removing duplicates and performing relevance screening, 23 studies were included for the qualitative synthesis and 18 for the meta-analysis. The three index tests evaluated in the meta-analysis were: the Precision Xtra(®) handheld device measuring beta-hydroxybutyrate (BHB) concentration in whole blood, and the Ketostix(®) and KetoTest(®) semi-quantitative strips measuring the concentration of acetoacetate in urine and BHB in milk, respectively. The diagnostic accuracy of the three index tests relative to the reference standard measurement of BHB in serum or whole blood between 1.0 and 1.4 mmol/L was compared using the hierarchical summary receiver operating characteristic (HSROC) method. Subgroup analysis was conducted for each index test to examine the accuracy at different thresholds. The impact of the reference standard threshold, the reference standard method, the prevalence of HK in the population, the primary study source, and the risk of bias of the primary study was explored using meta-regression. The Precision Xtra(®) device had the highest summary sensitivity for whole blood BHB at 1.2 mmol/L, 94.8% (CI95%: 92.6-97.0), and specificity, 97.5% (CI95%: 96.9-98.1). The threshold employed (1.2-1.4 mmol/L) did not impact the diagnostic accuracy of the test.
The Ketostix(®) and KetoTest(®) strips had the highest summary sensitivity and specificity when the trace and weak positive thresholds were used, respectively. The source of publication, the prevalence of HK, and the reference standard employed did not affect the estimated sensitivity and specificity of the tests. Including only peer-reviewed studies reduced the number of primary studies evaluating the Precision Xtra(®) by 43% and Ketostix(®) by 33%. Diagnosing HK with blood, urine, or milk is valid; however, the lower diagnostic accuracy of the urine and milk tests should be considered when making economic and treatment decisions. Copyright © 2016 Elsevier B.V. All rights reserved.
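At the level of a single 2x2 table, the sensitivity and specificity summarized above are simple proportions with binomial confidence intervals; the HSROC model then pools these across studies. A minimal sketch with illustrative counts, not data from the meta-analysis:

```python
import math

def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity, each with a 95% Wald CI, from a 2x2 table."""
    sens = tp / (tp + fn)   # true positives among diseased animals
    spec = tn / (tn + fp)   # true negatives among healthy animals

    def wald_ci(p, n):
        half = 1.96 * math.sqrt(p * (1 - p) / n)
        return (p - half, p + half)

    return sens, wald_ci(sens, tp + fn), spec, wald_ci(spec, tn + fp)

# Hypothetical counts chosen so the point estimates resemble the summary values
sens, sens_ci, spec, spec_ci = diagnostic_accuracy(tp=91, fp=12, fn=5, tn=468)
```

Wald intervals are the simplest choice; meta-analytic models use random effects on the logit scale, but the per-study inputs are exactly these four counts.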
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fields, Laura; Genser, Krzysztof; Hatcher, Robert
Geant4 is the leading detector simulation toolkit used in high energy physics to design detectors and to optimize calibration and reconstruction software. It employs a set of carefully validated physics models to simulate interactions of particles with matter across a wide range of interaction energies. These models, especially the hadronic ones, rely largely on directly measured cross-sections and phenomenological predictions with physically motivated parameters estimated by theoretical calculation or measurement. Because these models are tuned to cover a very wide range of possible simulation tasks, they may not always be optimized for a given process or a given material. This raises several critical questions, e.g. how sensitive Geant4 predictions are to variations of the model parameters, what uncertainties are associated with a particular tune of a Geant4 physics model or a group of models, or how to consistently derive guidance for Geant4 model development and improvement from a wide range of available experimental data. We have designed and implemented a comprehensive, modular, user-friendly software toolkit to study and address such questions. It allows one to easily modify parameters of one or several Geant4 physics models involved in the simulation, and to perform collective analysis of multiple variants of the resulting physics observables of interest and comparison against a variety of corresponding experimental data. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g. a flexible run-time configurable workflow, comprehensive bookkeeping, and an easy-to-expand collection of analytical components. Design, implementation technology, and key functionalities of the toolkit are presented and illustrated with results obtained with Geant4 key hadronic models.
NASA Technical Reports Server (NTRS)
Cho, S. Y.; Yetter, R. A.; Dryer, F. L.
1992-01-01
Various chemically reacting flow problems highlighting chemical and physical fundamentals rather than flow geometry are presently investigated by means of a comprehensive mathematical model that incorporates multicomponent molecular diffusion, complex chemistry, and heterogeneous processes, in the interest of obtaining sensitivity-related information. The sensitivity equations were decoupled from those of the model, and then integrated one time-step behind the integration of the model equations, and analytical Jacobian matrices were applied to improve the accuracy of sensitivity coefficients that are calculated together with model solutions.
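The staggered scheme described, integrating the sensitivity equations one time-step behind the model equations, can be illustrated on a toy first-order model. A sketch assuming dy/dt = -k*y, whose parametric sensitivity s = dy/dk obeys the decoupled equation ds/dt = -k*s - y (this is not the paper's reacting-flow model):

```python
import math

k, dt, n_steps = 0.5, 1e-4, 10000   # integrate from t = 0 to t = 1.0
y, s = 1.0, 0.0                     # initial state and initial sensitivity
for _ in range(n_steps):
    y_prev = y
    y += dt * (-k * y)              # model equation step (forward Euler)
    s += dt * (-k * s - y_prev)     # sensitivity step, one step behind,
                                    # driven by the lagged model state

# Analytical solution for the check: y(t) = exp(-k t), s(t) = -t * exp(-k t)
y_exact = math.exp(-k * 1.0)
s_exact = -1.0 * math.exp(-k * 1.0)
```

Here the analytical Jacobian of the right-hand side (simply -k) appears directly in the sensitivity equation, which is the same structural role the paper's analytical Jacobian matrices play for the full chemistry.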
Aspartame Sensitivity? A Double Blind Randomised Crossover Study
Sathyapalan, Thozhukat; Thatcher, Natalie J.; Hammersley, Richard; Rigby, Alan S.; Pechlivanis, Alexandros; Gooderham, Nigel J.; Holmes, Elaine; le Roux, Carel W.; Atkin, Stephen L.; Courts, Fraser
2015-01-01
Background Aspartame is a commonly used intense artificial sweetener, being approximately 200 times sweeter than sucrose. There have been concerns over aspartame since its approval in the 1980s, including a large anecdotal database reporting severe symptoms. The objective of this study was to compare the acute symptom effects of aspartame to a control preparation. Methods This was a double-blind randomized crossover study conducted in a clinical research unit in the United Kingdom. Forty-eight individuals who self-reported sensitivity to aspartame were compared with 48 age- and gender-matched aspartame non-sensitive individuals. They were given an aspartame-containing (100 mg) or control snack bar in random order at least 7 days apart. The main outcome measures were acute effects of aspartame measured using repeated ratings of 14 symptoms, biochemistry, and metabonomics. Results Aspartame-sensitive and non-sensitive participants differed psychologically at baseline in handling feelings and perceived stress. Sensitive participants had higher triglycerides (2.05 ± 1.44 vs. 1.26 ± 0.84 mmol/L; p = 0.008) and lower HDL-C (1.16 ± 0.34 vs. 1.35 ± 0.54 mmol/L; p = 0.04), reflected in 1H NMR serum analysis that showed differences in the baseline lipid content between the two groups. Urine metabonomic studies showed no significant differences. None of the rated symptoms differed between aspartame and control bars, or between sensitive and non-sensitive participants. However, aspartame-sensitive participants rated more symptoms, particularly in the first test session, whether the bar was aspartame or control. Aspartame and control bars affected GLP-1, GIP, tyrosine, and phenylalanine levels equally in both aspartame-sensitive and non-sensitive subjects. Conclusion Using a comprehensive battery of psychological tests, biochemistry, and state-of-the-art metabonomics, there was no evidence of any acute adverse responses to aspartame.
This independent study gives reassurance to both regulatory bodies and the public that acute ingestion of aspartame does not have any detectable psychological or metabolic effects in humans. Trial Registration ISRCTN Registry ISRCTN39650237 PMID:25786106
Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study
Eggenberger, Noëmi; Preisig, Basil C.; Schumacher, Rahel; Hopfner, Simone; Vanbellingen, Tim; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Cazzoli, Dario; Müri, René M.
2016-01-01
Background Co-speech gestures are omnipresent and a crucial element of human interaction, facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study investigated the influence of congruence between speech and co-speech gestures on comprehension, measured as accuracy in a decision task. Method Twenty aphasic patients and 30 healthy controls watched videos in which speech was combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. Results In aphasic patients, the incongruent condition resulted in a significant decrease in accuracy, while the congruent condition led to a significant increase in accuracy relative to baseline. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared with controls. Conclusion Co-speech gestures play an important role for aphasic patients, as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients’ comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes. PMID:26735917
Rational Deletion Cloze Processing Strategies: ESL and Native English.
ERIC Educational Resources Information Center
Markham, Paul L.
1987-01-01
Explores cloze sensitivity to global comprehension by means of retrospective interview techniques. No significant differences were found between English as a second language (ESL) college students (N=14) and native English-speaking students (N=14) in their processing strategies. (Author/CB)
Comprehensive risk analysis for structure type selection.
DOT National Transportation Integrated Search
2010-04-01
Optimization of bridge selection and design traditionally has been sought in terms of the finished structure. This study presents a more comprehensive risk-based analysis that includes user costs and accidents during the construction phase. Costs f...
Comprehensive drought characteristics analysis based on a nonlinear multivariate drought index
NASA Astrophysics Data System (ADS)
Yang, Jie; Chang, Jianxia; Wang, Yimin; Li, Yunyun; Hu, Hui; Chen, Yutong; Huang, Qiang; Yao, Jun
2018-02-01
It is vital to identify drought events and to evaluate multivariate drought characteristics based on a composite drought index for better drought risk assessment and sustainable development of water resources. However, most composite drought indices are constructed by linear combination, principal component analysis, or the entropy weight method, all of which assume a linear relationship among the component drought indices. In this study, a multidimensional copula function was applied to construct a nonlinear multivariate drought index (NMDI), exploiting the copula's dependence structure and flexibility to capture the complicated, nonlinear relationships involved. The NMDI combines meteorological, hydrological, and agricultural variables (precipitation, runoff, and soil moisture) to reflect these variables simultaneously. Based on the constructed NMDI and runs theory, drought events for a particular area were identified in terms of three characteristics: duration, peak, and severity. Finally, multivariate drought risk was analyzed as a tool for providing reliable support in drought decision-making. The results indicate that: (1) multidimensional copulas can effectively capture the complicated, nonlinear relationships among multiple variables; (2) compared with single and other composite drought indices, the NMDI is slightly more sensitive in capturing recorded drought events; and (3) drought risk shows spatial variation; of the five partitions studied, the Jing River Basin as well as the upstream and midstream of the Wei River Basin are characterized by a higher multivariate drought risk. In general, multidimensional copulas provide a reliable way to model the nonlinear relationships involved when constructing a comprehensive drought index and evaluating multivariate drought characteristics.
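The construction described, empirical marginal distributions combined through a fitted copula into a joint non-exceedance probability, can be illustrated with a Gaussian copula on synthetic series. This is a sketch only: the study's copula family, data, and calibration are not reproduced, and the three series below are invented stand-ins for precipitation, runoff, and soil moisture.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal, rankdata

rng = np.random.default_rng(1)

# Toy monthly series with correlated deficits (synthetic, 20 years)
n = 240
precip = rng.gamma(2.0, 30.0, n)
runoff = 0.4 * precip + rng.normal(0, 5, n)
soil = 0.2 * precip + rng.normal(0, 3, n)
X = np.column_stack([precip, runoff, soil])

# Empirical marginal CDFs -> uniform scores, then a Gaussian copula fitted
# via the correlation of the normal scores.
U = np.apply_along_axis(rankdata, 0, X) / (n + 1)
Z = norm.ppf(U)
R = np.corrcoef(Z, rowvar=False)           # copula correlation matrix
mvn = multivariate_normal(mean=np.zeros(3), cov=R)

# Joint non-exceedance probability as a simple nonlinear multivariate index:
# low values indicate a simultaneous deficit in all three variables.
nmdi = mvn.cdf(Z)
driest = int(np.argmin(nmdi))
```

The joint CDF is inherently nonlinear in the marginals, which is the property that distinguishes this kind of index from a weighted linear combination.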
Ethical considerations in the study of online illness narratives: a qualitative review.
Heilferty, Catherine McGeehin
2011-05-01
The aim of this review was to describe differences in ethical approaches to research on Internet communication during illness and to report conclusions relevant to a proposed narrative analysis of parent blogs of childhood illness. As the study of the online expression of illness experiences becomes more expansive, discussion of related ethical issues is central to promoting research trustworthiness and rigour. Ethical considerations are central to the patient-provider relationship. The EBSCO Host, CINAHL, Medline, Communication & Mass Media Complete, and Google Scholar databases were searched from January 1990 to September 2009 using the terms 'Internet research and ethics', 'Internet research, illness and ethics' and 'blog, Internet research and ethics'. Of the 4114 references found, 21 met the inclusion criteria for the review. The review was designed to be a comprehensive assessment of the concepts analysed and the qualitative research measures taken concerning ethics in Internet research across formats. Three main approaches to ethical conduct in Internet research on illness experiences were found: human subjects, representation, and open source approaches. The personal and sensitive nature of online illness narratives demands their consideration in health care as 'human subjects' research. The best hope for ethical treatment of author-participants is the creation of a comprehensive plan for addressing any and all potential ethical conflicts that may arise in the collection, analysis and reporting of data, taking into consideration rapid changes in technology. © 2011 Blackwell Publishing Ltd.
Croll, Peter R
2011-02-01
To ensure that patient confidentiality is securely maintained, health ICT applications that contain sensitive personal information demand comprehensive privacy policies. Determining the adequacy of these policies to meet legal conformity together with clinical users and patient expectation is demanding in practice. Organisations and agencies looking to analyse their Privacy and Security policies can benefit from guidance provided by outside entities such as the Privacy Office of their State or Government together with law firms and ICT specialists. The advice given is not uniform and often open to different interpretations. Of greater concern is the possibility of overlooking any important aspects that later result in a data breach. Based on three case studies, this paper considers whether a more formal approach to privacy analysis could be taken that would help identify the full coverage of a Privacy Impact Analysis and determine the deficiencies with an organisation's current policies and approach. A diagrammatic model showing the relationships between Confidentiality, Privacy, Trust, Security and Safety is introduced. First the validity of this model is determined by mapping it against the real-world case studies taken from three healthcare services that depend on ICT. Then, by using software engineering methods, a formal mapping of the relationships is undertaken to identify a full set of policies needed to satisfy the model. How effective this approach may prove as a generic method for deriving a comprehensive set of policies in health ICT applications is finally discussed. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Marketing considerations in home health care.
Tanner, D J
1985-12-01
Methods for conducting a comprehensive analysis of the potential for strategic entry or expansion in the home health-care (HHC) market are discussed. By conducting a comprehensive analysis of the HHC market, hospital pharmacists can evaluate the feasibility of developing and implementing a hospital-based HHC service. A comprehensive market analysis should include an initial assessment of potential product-line offerings, development of strengths-and-weaknesses and opportunities-and-threats profiles, evaluations of competing providers of HHC and regulatory issues, and formulation of a business plan. The potential impact of program structure, operations management, product pricing, advertising and promotion, and marketing controls should also be considered. The hospital pharmacist has a unique opportunity to further the organizational objectives of the hospital by participating in the provision of HHC; a comprehensive market analysis represents a useful method of assessing the benefits and costs associated with providing integrated HHC services.
Development of a Self-Report Measure of Reward Sensitivity: A Test in Current and Former Smokers.
Hughes, John R; Callas, Peter W; Priest, Jeff S; Etter, Jean-Francois; Budney, Alan J; Sigmon, Stacey C
2017-06-01
Tobacco use or abstinence may increase or decrease reward sensitivity. Most existing measures of reward sensitivity were developed decades ago, and few have undergone extensive psychometric testing. We developed a 58-item survey of the anticipated enjoyment from, wanting for, and frequency of common rewards (the Rewarding Events Inventory-REI). The current analysis focuses on ratings of anticipated enjoyment. The first validation study recruited current and former smokers from Internet sites. The second study recruited smokers who wished to quit and monetarily reinforced them to stay abstinent in a laboratory study and a comparison group of former smokers. In both studies, participants completed the inventory on two occasions, 3-7 days apart. They also completed four anhedonia scales and a behavioral test of reduced reward sensitivity. Half of the enjoyment ratings loaded on four factors: socializing, active hobbies, passive hobbies, and sex/drug use. Cronbach's alpha coefficients were all ≥0.73 for overall mean and factor scores. Test-retest correlations were all ≥0.83. Correlations of the overall and factor scores with frequency of rewards and anhedonia scales were 0.19-0.53, except for the sex/drugs factor. The scores did not correlate with behavioral tests of reward and did not differ between current and former smokers. Lower overall mean enjoyment score predicted a shorter time to relapse. Internal reliability and test-retest reliability of the enjoyment outcomes of the REI are excellent, and construct and predictive validity are modest but promising. The REI is comprehensive and up-to-date, yet is short enough to use on repeated occasions. Replication tests, especially predictive validity tests, are needed. Both use of and abstinence from nicotine appear to increase or decrease how rewarding nondrug rewards are; however, self-report scales to test this have limitations. 
Our inventory of enjoyment from 58 rewards appears to be reliable and valid as well as comprehensive and up-to-date, yet is short enough to use on repeated occasions. Replication tests, especially of the predictive validity of our scale, are needed. © The Author 2017. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
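The internal-consistency statistic reported above, Cronbach's alpha, is straightforward to compute from an item-score matrix. A minimal sketch with toy ratings (not REI data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy ratings: 5 respondents x 4 highly consistent items
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [1, 2, 1, 1],
    [3, 3, 4, 3],
])
alpha = cronbach_alpha(scores)
```

Because the four toy items move together across respondents, the total-score variance greatly exceeds the summed item variances and alpha comes out high, the same pattern as the ≥0.73 coefficients reported above.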
Goldstein, B C; Harris, K C; Klein, M D
1993-02-01
This study investigated the relationship between reading comprehension and oral storytelling abilities. Thirty-one Latino junior high school students with learning handicaps were selected as subjects based on learning handicapped designation, home language, and language proficiency status. Reading comprehension was measured by the Reading Comprehension subtest of the Peabody Individual Achievement Test. Storytelling was measured by (a) the Oral Production subtest of the Language Assessment Scales using the standard scoring protocol and (b) a story structure analysis. A comparison of the standard scoring protocol and reading comprehension revealed no relationship, while the comparison of the story structure analysis and reading comprehension revealed a significant correlation. The implications of these results for language assessment of bilingual students are discussed.
The effectiveness of the Herbst appliance for patients with Class II malocclusion: a meta-analysis
Yang, Xin; Zhu, Yafen; Long, Hu; Zhou, Yang; Jian, Fan; Ye, Niansong; Gao, Meiya
2016-01-01
Summary Objective: To systematically review the literature on the effects of the Herbst appliance in patients with Class II malocclusion. Method: We performed a comprehensive literature search of PubMed, Web of Science, Embase, CENTRAL, SIGLE, and ClinicalTrial.gov up to December 2014. The selection criteria were: randomized controlled trials or clinical controlled trials; use of any kind of Herbst appliance to correct Class II division 1 malocclusion; and skeletal and/or dental changes evaluated on lateral cephalograms. The exclusion criteria were: syndromic patients; individual case reports and case series; and surgical interventions. Article screening, data extraction, assessment of risk of bias, and evaluation of evidence quality through GRADE were conducted independently by two trained orthodontists; disagreements were resolved by group discussion among all authors. Sensitivity analysis and subgroup analysis were then performed to evaluate the robustness of the meta-analysis. Results: Twelve clinical controlled trials met the above criteria and were included in this analysis. Eleven measures reported during the active treatment period, four angular (SNA, SNB, ANB, mandibular plane angle) and seven linear (Co-Go, Co-Gn, overjet, overbite, molar relationship, A point-OLp, Pg-OLp), were statistically pooled. Meta-analysis and sensitivity analysis showed consistent results for all measures except SNA, ANB, and overbite. Subgroup analysis showed significant changes in SNA, overbite, and Pg-OLp. Publication bias was detected for SNB, mandibular plane angle, and A point-OLp. Conclusion: The Herbst appliance is effective for patients with Class II malocclusion during the active treatment period.
In particular, there were clear changes in the dental discrepancy and skeletal changes in Co-Gn. As for its long-term effects, more evidence is needed to draw conclusions. PMID:26306822
Space shuttle post-entry and landing analysis. Volume 2: Appendices
NASA Technical Reports Server (NTRS)
Crawford, B. S.; Duiven, E. M.
1973-01-01
Four candidate navigation systems for the space shuttle orbiter approach and landing phase are evaluated in detail. These include three conventional navaid systems and a single-station one-way Doppler system. In each case, a Kalman filter is assumed to be mechanized in the onboard computer, blending the navaid data with IMU and altimeter data. Filter state dimensions ranging from 6 to 24 are involved in the candidate systems. Comprehensive truth models with state dimensions ranging from 63 to 82 are formulated and used to generate detailed error budgets and sensitivity curves illustrating the effect of variations in the size of individual error sources on touchdown accuracy. The projected overall performance of each system is shown in the form of time histories of position and velocity error components.
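The filtering arrangement described, an onboard Kalman filter blending inertial propagation with navaid measurements, can be reduced to a one-dimensional sketch. The noise levels below are hypothetical and the model is a scalar random walk, nothing like the 6- to 24-state shuttle mechanizations, but the predict/gain/update cycle is the same:

```python
import numpy as np

rng = np.random.default_rng(2)

# Truth: a slowly drifting position; measurements: noisy navaid-like fixes
n_steps = 200
true_pos = np.cumsum(rng.normal(0.0, 0.1, n_steps))
meas = true_pos + rng.normal(0.0, 1.0, n_steps)

q, r = 0.1 ** 2, 1.0 ** 2   # process and measurement noise variances
x, p = 0.0, 1.0             # state estimate and its error variance
est = []
for z in meas:
    p += q                  # predict: random-walk model inflates variance
    k_gain = p / (p + r)    # Kalman gain
    x += k_gain * (z - x)   # update with the measurement residual
    p *= (1.0 - k_gain)     # posterior variance
    est.append(x)
est = np.array(est)

# Filtering should beat the raw measurements in RMS position error
rms_raw = np.sqrt(np.mean((meas - true_pos) ** 2))
rms_kf = np.sqrt(np.mean((est - true_pos) ** 2))
```

The error-budget exercise in the report amounts to perturbing individual noise variances like `q` and `r` (and, in the full filters, many sensor-specific terms) and recording the effect on terminal accuracy.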
Navigating legal constraints in clinical data warehousing: a case study in personalized medicine.
Jefferys, Benjamin R; Nwankwo, Iheanyi; Neri, Elias; Chang, David C W; Shamardin, Lev; Hänold, Stefanie; Graf, Norbert; Forgó, Nikolaus; Coveney, Peter
2013-04-06
Personalized medicine relies in part upon comprehensive data on patient treatment and outcomes, both for analysis leading to improved models that provide the basis for enhanced treatment, and for direct use in clinical decision-making. A data warehouse is an information technology for combining and standardizing multiple databases. Data warehousing of clinical data is constrained by many legal and ethical considerations, owing to the sensitive nature of the data being stored. We describe an unconstrained clinical data warehousing architecture, some of the legal constraints that have led us to reconsider this architecture, and the legal and technical solutions to these constraints developed for the clinical data warehouse in the personalized medicine project p-medicine. We also propose some changes to the legal constraints that will further enable clinical research.
Keidser, Gitte; Best, Virginia; Freeston, Katrina; Boyce, Alexandra
2015-01-01
It is well-established that communication involves the working memory system, which becomes increasingly engaged in understanding speech as the input signal degrades. The more resources allocated to recovering a degraded input signal, the fewer resources, referred to as cognitive spare capacity (CSC), remain for higher-level processing of speech. Using simulated natural listening environments, the aims of this paper were to (1) evaluate an English version of a recently introduced auditory test to measure CSC that targets the updating process of the executive function, (2) investigate if the test predicts speech comprehension better than the reading span test (RST) commonly used to measure working memory capacity, and (3) determine if the test is sensitive to increasing the number of attended locations during listening. In Experiment I, the CSC test was presented using a male and a female talker, in quiet and in spatially separated babble- and cafeteria-noises, in an audio-only and in an audio-visual mode. Data collected on 21 listeners with normal and impaired hearing confirmed that the English version of the CSC test is sensitive to population group, noise condition, and clarity of speech, but not presentation modality. In Experiment II, performance by 27 normal-hearing listeners on a novel speech comprehension test presented in noise was significantly associated with working memory capacity, but not with CSC. Moreover, this group showed no significant difference in CSC as the number of talker locations in the test increased. There was no consistent association between the CSC test and the RST. It is recommended that future studies investigate the psychometric properties of the CSC test, and examine its sensitivity to the complexity of the listening environment in participants with both normal and impaired hearing. PMID:25999904
ERIC Educational Resources Information Center
Coyne, Michael D.; Zipoli, Richard P., Jr.; Chard, David J.; Faggella-Luby, Michael; Ruby, Maureen; Santoro, Lana E.; Baker, Scott
2009-01-01
This article examines the role of direct instruction in promoting listening and reading comprehension. Instructional examples from 2 programs of intervention research focused on improving comprehension; the Story Read Aloud Program and the Embedded Story Structure Routine are used to illustrate principles of direct instruction. An analysis of…
Meslec, Nicoleta; Aggarwal, Ishani; Curseu, Petru L
2016-01-01
A group's collective intelligence reflects its capacity to perform well across a variety of cognitive tasks and it transcends the individual intelligence of its members. Previous research shows that group members' social sensitivity is a potential antecedent of collective intelligence, yet it is still unclear whether individual or group-level indices are responsible for the positive association between social sensitivity and collective intelligence. In a comprehensive manner, we test the extent to which both compositional (lowest and highest individual score) and compilational aspects (emergent group level) of social sensitivity are associated with collective intelligence. This study has implications for research that explores groups as information processors, and for group design as it indicates how a group should be composed with respect to social sensitivity if the group is to reach high levels of collective intelligence. Our empirical results indicate that collectively intelligent groups are those in which the least socially sensitive group member has a rather high score on social sensitivity. Differently stated, (socially sensitive) group members cannot compensate for the lack of social sensitivity of the other group members.