Confidence bounds for normal and lognormal distribution coefficients of variation
Steve Verrill
2003-01-01
This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
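The exact approach rests on the fact that, for a normal sample, sqrt(n)·x̄/s follows a noncentral t distribution whose noncentrality parameter is sqrt(n) divided by the CV, so a confidence interval follows by inverting that distribution. Below is a minimal hedged sketch of this inversion (SciPy root-finding; variable names and brackets are ours, not the paper's):

```python
# Hedged sketch: "exact" CI for the CV of a normal sample via inversion
# of the noncentral t distribution; not the paper's code.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import nct

def cv_confint(x, alpha=0.05):
    x = np.asarray(x, float)
    n, df = len(x), len(x) - 1
    t_obs = np.sqrt(n) * x.mean() / x.std(ddof=1)  # ~ noncentral t(df, sqrt(n)/CV)
    def solve(p):  # noncentrality delta with nct.cdf(t_obs, df, delta) = p
        return brentq(lambda d: nct.cdf(t_obs, df, d) - p, 1e-8, 100 * abs(t_obs) + 100)
    # the cdf decreases in delta, so alpha/2 yields the larger delta (smaller CV)
    return np.sqrt(n) / solve(alpha / 2), np.sqrt(n) / solve(1 - alpha / 2)

rng = np.random.default_rng(1)
print(cv_confint(rng.normal(10.0, 2.0, size=25)))  # true CV = 0.2
```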
Measurement of EUV lithography pupil amplitude and phase variation via image-based methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levinson, Zachary; Verduijn, Erik; Wood, Obert R.
2016-04-01
Here, an approach to image-based EUV aberration metrology using binary mask targets and iterative model-based solutions to extract both the amplitude and phase components of the aberrated pupil function is presented. The approach is enabled through previously developed modeling, fitting, and extraction algorithms. We seek to examine the behavior of pupil amplitude variation in real optical systems. Optimized target images were captured under several conditions to fit the resulting pupil responses. Both the amplitude and phase components of the pupil function were extracted from a zone-plate-based EUV mask microscope. The pupil amplitude variation was expanded in three different bases: Zernike polynomials, Legendre polynomials, and Hermite polynomials. It was found that the Zernike polynomials describe pupil amplitude variation most effectively of the three.
Causse, Elsa; Félonneau, Marie-Line
2014-01-01
Research on uniqueness has largely focused on cross-cultural comparisons and tends to postulate a certain form of within-culture homogeneity. Taking the opposite course from this classic posture, we aimed to test an integrative approach enabling the study of within-culture variations in uniqueness. This approach considered different sources of variation: social status, gender, life contexts, and interpersonal comparison. Four hundred seventy-nine participants completed a measure based on descriptions of "self" and "other." Results showed important variations in uniqueness. An interaction between social status and life contexts revealed the expression of uniqueness in the low-status group. This study highlights the complexity of uniqueness, which appears to be related to both cultural ideology and social hierarchy.
Corriveau, H; Arsenault, A B; Dutil, E; Lepage, Y
1992-01-01
An evaluation based on the Bobath approach to treatment has previously been developed and partially validated. The purpose of the present study was to verify the content validity of this evaluation using a statistical approach known as principal components analysis. Thirty-eight hemiplegic subjects participated in the study. Scores on each of six parameters (sensorium, active movements, muscle tone, reflex activity, postural reactions, and pain) were analyzed on three occasions across a 2-month period. Each analysis produced three factors that together contained 70% of the variation in the data set. The first component mainly reflected variations in mobility, the second mainly variations in muscle tone, and the third mainly variations in sensorium and pain. The results of this exploratory analysis highlight the fact that some of the parameters are not only important but also interrelated. These results seem to partially support the conceptual framework substantiating the Bobath approach to treatment.
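For readers unfamiliar with the technique, a principal components analysis of this kind reduces to fitting an orthogonal basis to the subjects × parameters score matrix and reading off explained variance and loadings. A minimal sketch (random placeholder scores, so the 70% figure will not be reproduced):

```python
# Minimal PCA sketch for a subjects x parameters score matrix;
# the data here are random placeholders, not the clinical scores.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
scores = rng.normal(size=(38, 6))              # 38 subjects, 6 parameters
pca = PCA(n_components=3).fit(scores)
print(pca.explained_variance_ratio_.sum())     # variance captured by 3 factors
print(pca.components_)                         # loadings per parameter
```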
Divanoglou, A; Tasiemski, T; Augutis, M; Trok, K
2017-06-01
Active Rehabilitation (AR) is a community peer-based approach that started in Sweden in 1976. As a key component of the approach, AR training camps provide intensive, goal-oriented, intentional, group-based, customised training and peer-support opportunities in a community environment for individuals with spinal cord injury. Prospective cross-sectional study. To describe the profile of the organisations that use components of the AR approach, and to explore the characteristics and the international variations of the approach. Twenty-two organisations from 21 countries in Europe, Asia and Africa reported using components of the AR approach during the past 10 years. An electronic survey was developed and distributed through a personalised email. Sampling involved prospective identification of organisations that met the inclusion criteria and snowball strategies. While there were many collaborating links between the organisations, RG Active Rehabilitation from Sweden and Motivation Charitable Trust from the United Kingdom were identified as key supporting organisations. The 10 key elements of the AR approach were found to be used uniformly across the participating organisations. Small variations were associated with differences in country income and key supporting organisation. This is the first study to describe the key elements and international variations of the AR approach. It will provide the basis for further studies exploring the effectiveness of the approach; it will likely facilitate international collaboration on research and operational aspects, and it could potentially support closer integration with the health-care system and long-term funding of these programmes.
Total variation approach for adaptive nonuniformity correction in focal-plane arrays.
Vera, Esteban; Meza, Pablo; Torres, Sergio
2011-01-15
In this Letter we propose an adaptive scene-based nonuniformity correction method for fixed-pattern noise removal in imaging arrays. It is based on the minimization of the total variation of the estimated irradiance, and the resulting function is optimized by an isotropic total variation approach making use of an alternating minimization strategy. The proposed method provides enhanced results when applied to a diverse set of real IR imagery, accurately estimating the nonuniformity parameters of each detector in the focal-plane array at a fast convergence rate, while also producing fewer ghosting artifacts.
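A toy version of the core idea (estimate a per-detector additive fixed-pattern term by descending the total variation of the corrected frames) can be written in a few lines. This is a hedged sketch of plain TV-gradient descent, not the authors' alternating-minimization algorithm, and boundary handling is deliberately crude:

```python
import numpy as np

def tv_grad(u, eps=1e-6):
    # d(TV)/du for smoothed isotropic TV: -div(grad u / |grad u|).
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
    px, py = ux / mag, uy / mag
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

def estimate_offsets(frames, n_iter=200, lr=0.05):
    # Additive fixed-pattern noise b, found by lowering the TV of the
    # corrected frames u = f - b, averaged over the frame sequence.
    b = np.zeros_like(frames[0])
    for _ in range(n_iter):
        dJ_db = np.mean([-tv_grad(f - b) for f in frames], axis=0)
        b -= lr * dJ_db
        b -= b.mean()        # the global offset is unobservable
    return b
```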
Measurement System Analyses - Gauge Repeatability and Reproducibility Methods
NASA Astrophysics Data System (ADS)
Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej
2018-02-01
The submitted article focuses on a detailed explanation of the average and range method (the Automotive Industry Action Group's Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility method (the Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and the results were compared on the basis of numerical evaluation. Both methods were additionally compared and their advantages and disadvantages discussed. One difference between the methods lies in the calculation of variation components: the AIAG method calculates variation components from standard deviations (so the sum of the components does not total 100 %), whereas the honest GRR study calculates variation components from variances, where the sum of all components (part-to-part variation, EV and AV) gives the total variation of 100 %. Acceptance of both methods in the professional community, future use, and acceptance by the manufacturing industry are also discussed. Nowadays, the AIAG method is the leading method in industry.
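The variance-based bookkeeping that makes an honest GRR study sum to 100 % can be reproduced from a crossed parts × operators × replicates ANOVA. A sketch using the standard random-effects estimators (negative estimates clamped to zero; the array layout is our convention):

```python
import numpy as np

def grr_components(x):
    # x: parts x operators x replicates measurement array.
    p, o, r = x.shape
    grand = x.mean()
    pm, om, cm = x.mean(axis=(1, 2)), x.mean(axis=(0, 2)), x.mean(axis=2)
    ms_p = o * r * ((pm - grand) ** 2).sum() / (p - 1)
    ms_o = p * r * ((om - grand) ** 2).sum() / (o - 1)
    ms_po = r * ((cm - pm[:, None] - om[None, :] + grand) ** 2).sum() / ((p - 1) * (o - 1))
    ms_e = ((x - cm[:, :, None]) ** 2).sum() / (p * o * (r - 1))
    comp = {"EV (repeatability)":   ms_e,
            "interaction":          max((ms_po - ms_e) / r, 0.0),
            "AV (reproducibility)": max((ms_o - ms_po) / (p * r), 0.0),
            "PV (part-to-part)":    max((ms_p - ms_po) / (o * r), 0.0)}
    total = sum(comp.values())
    return {k: 100 * v / total for k, v in comp.items()}   # sums to 100 %

rng = np.random.default_rng(0)
data = rng.normal(size=(10, 3, 2)) + np.arange(10)[:, None, None]
print(grr_components(data))   # part-to-part variation should dominate
```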
ERIC Educational Resources Information Center
Santos, Elvira Santos; Garcia, Irma Cruz Gavilan; Gomez, Eva Florencia Lejarazo; Vilchis-Reyes, Miguel Angel
2010-01-01
A series of experiments based on problem-solving and collaborative-learning pedagogies are described that encourage students to interpret results and draw conclusions from data. Different approaches including parallel library synthesis, solvent variation, and leaving group variation are used to study a nucleophilic aromatic substitution of…
Doets, Esmée L; Cavelaars, Adrienne E J M; Dhonukshe-Rutten, Rosalie A M; van 't Veer, Pieter; de Groot, Lisette C P G M
2012-05-01
To signal key issues for harmonising approaches for establishing micronutrient recommendations by explaining observed variation in recommended intakes of folate, vitamin B12, Fe and Zn for adults and elderly people. We explored differences in recommended intakes of folate, vitamin B12, Fe and Zn for adults between nine reports on micronutrient recommendations. Approaches used for setting recommendations were compared as well as eminence-based decisions regarding the selection of health indicators indicating adequacy of intakes and the consulted evidence base. In nearly all reports, recommendations were based on the average nutrient requirement. Variation in recommended folate intakes (200-400 μg/d) was related to differences in the consulted evidence base, whereas variation in vitamin B12 recommendations (1.4-3.0 μg/d) was due to the selection of different CV (10-20 %) and health indicators (maintenance of haematological status or basal losses). Variation in recommended Fe intakes (men 8-10 mg/d, premenopausal women 14.8-19.6 mg/d, postmenopausal women 7.5-10.0 mg/d) was explained by different assumed reference weights and bioavailability factors (10-18 %). Variation in Zn recommendations (men 7-14 mg/d, women 4.9-9.0 mg/d) was also explained by different bioavailability factors (24-48 %) as well as differences in the consulted evidence base. For the harmonisation of approaches for setting recommended intakes of folate, vitamin B12, Fe and Zn across European countries, standardised methods are needed to (i) select health indicators and define adequate biomarker concentrations, (ii) make assumptions about inter-individual variation in requirements, (iii) derive bioavailability factors and (iv) collate, select, interpret and integrate evidence on requirements.
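Part of the spread in recommendations follows from simple arithmetic: when a recommended intake is set as the average requirement plus twice the assumed inter-individual CV, the CV choice alone shifts the number. A toy illustration with an assumed (hypothetical) average vitamin B12 requirement:

```python
# RI = EAR * (1 + 2 * CV); the EAR value below is hypothetical.
ear_b12 = 2.0                          # μg/d, assumed average requirement
for cv in (0.10, 0.15, 0.20):
    print(cv, ear_b12 * (1 + 2 * cv))  # 2.4, 2.6, 2.8 μg/d
```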
Wightman, Jade; Julio, Flávia; Virués-Ortega, Javier
2014-05-01
Experimental functional analysis is an assessment methodology to identify the environmental factors that maintain problem behavior in individuals with developmental disabilities and in other populations. Functional analysis provides the basis for the development of reinforcement-based approaches to treatment. This article reviews the procedures, validity, and clinical implementation of the methodological variations of functional analysis and function-based interventions. We present several variations of functional analysis methodology in addition to the typical functional analysis, including brief functional analysis, single-function tests, latency-based functional analysis, functional analysis of precursors, and trial-based functional analysis. We also present the three general categories of function-based interventions: extinction, antecedent manipulation, and differential reinforcement. Functional analysis methodology is a valid and efficient approach to the assessment of problem behavior and the selection of treatment strategies.
Localized Principal Component Analysis based Curve Evolution: A Divide and Conquer Approach
Appia, Vikram; Ganapathy, Balaji; Yezzi, Anthony; Faber, Tracy
2014-01-01
We propose a novel localized principal component analysis (PCA) based curve evolution approach which evolves the segmenting curve semi-locally within various target regions (divisions) in an image and then combines these locally accurate segmentation curves to obtain a global segmentation. The training data for our approach consists of training shapes and associated auxiliary (target) masks. The masks indicate the various regions of the shape exhibiting highly correlated variations locally which may be rather independent of the variations in the distant parts of the global shape. Thus, in a sense, we are clustering the variations exhibited in the training data set. We then use a parametric model to implicitly represent each localized segmentation curve as a combination of the local shape priors obtained by representing the training shapes and the masks as a collection of signed distance functions. We also propose a parametric model to combine the locally evolved segmentation curves into a single hybrid (global) segmentation. Finally, we combine the evolution of these semilocal and global parameters to minimize an objective energy function. The resulting algorithm thus provides a globally accurate solution, which retains the local variations in shape. We present some results to illustrate how our approach performs better than the traditional approach with fully global PCA.
Variational approach to direct and inverse problems of atmospheric pollution studies
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey
2016-04-01
We present the development of a variational approach for solving interrelated problems of atmospheric hydrodynamics and chemistry concerning air pollution transport and transformations. The proposed approach allows us to carry out complex studies of different-scale physical and chemical processes using the methods of direct and inverse modeling [1-3]. We formulate the problems of risk/vulnerability and uncertainty assessment, sensitivity studies, variational data assimilation procedures [4], etc. A computational technology of constructing consistent mathematical models and methods of their numerical implementation is based on the variational principle in the weak constraint formulation, specifically designed to account for uncertainties in models and observations. Algorithms for direct and inverse modeling are designed with the use of global and local adjoint problems. Implementing the idea of adjoint integrating factors provides unconditionally monotone and stable discrete-analytic approximations for convection-diffusion-reaction problems [5,6]. The general framework is applied to the direct and inverse problems for the models of transport and transformation of pollutants in Siberian and Arctic regions. The work has been partially supported by the RFBR grant 14-01-00125 and RAS Presidium Program I.33P. References: 1. V. Penenko, A. Baklanov, E. Tsvetova and A. Mahura. Direct and inverse problems in a variational concept of environmental modeling. Pure and Applied Geophysics (2012), v. 169: 447-465. 2. V. V. Penenko, E. A. Tsvetova, and A. V. Penenko. Development of variational approach for direct and inverse problems of atmospheric hydrodynamics and chemistry. Izvestiya, Atmospheric and Oceanic Physics, 2015, Vol. 51, No. 3, p. 311-319, DOI: 10.1134/S0001433815030093. 3. V. V. Penenko, E. A. Tsvetova, A. V. Penenko. Methods based on the joint use of models and observational data in the framework of variational approach to forecasting weather and atmospheric composition quality. Russian Meteorology and Hydrology, V. 40, Issue 6, Pages 365-373, DOI: 10.3103/S1068373915060023. 4. A. V. Penenko and V. V. Penenko. Direct data assimilation method for convection-diffusion models based on splitting scheme. Computational Technologies, 19(4):69-83, 2014. 5. V. V. Penenko, E. A. Tsvetova, A. V. Penenko. Variational approach and Euler's integrating factors for environmental studies. Computers and Mathematics with Applications, 2014, V. 67, Issue 12, Pages 2240-2256, DOI: 10.1016/j.camwa.2014.04.004. 6. V. V. Penenko, E. A. Tsvetova. Variational methods of constructing monotone approximations for atmospheric chemistry models. Numerical Analysis and Applications, 2013, V. 6, Issue 3, pp. 210-220, DOI: 10.1134/S199542391303004X.
Zhu, X Q; Gasser, R B
1998-06-01
In this study, we assessed single-strand conformation polymorphism (SSCP)-based approaches for their capacity to fingerprint sequence variation in ribosomal DNA (rDNA) of ascaridoid nematodes of veterinary and/or human health significance. The second internal transcribed spacer region (ITS-2) of rDNA was utilised as the target region because it is known to provide species-specific markers for this group of parasites. ITS-2 was amplified by PCR from genomic DNA derived from individual parasites and subjected to analysis. Direct SSCP analysis of amplicons from seven taxa (Toxocara vitulorum, Toxocara cati, Toxocara canis, Toxascaris leonina, Baylisascaris procyonis, Ascaris suum and Parascaris equorum) showed that the single-strand (ss) ITS-2 patterns produced allowed their unequivocal identification to species. While no variation in SSCP patterns was detected in the ITS-2 within the four species for which multiple samples were available, the method allowed the direct display of four distinct sequence types of ITS-2 among individual worms of T. cati. Comparison of SSCP/sequencing with the methods of dideoxy fingerprinting (ddF) and restriction endonuclease fingerprinting (REF) revealed that ddF also allowed the definition of the four sequence types, whereas REF displayed three of the four. The findings indicate the usefulness of the SSCP-based approaches for the identification of ascaridoid nematodes to species, the direct display of sequence variation in rDNA and the detection of population variation. The ability to fingerprint microheterogeneity in ITS-2 rDNA using such approaches also has implications for studying fundamental aspects relating to mutational change in rDNA.
A time-parallel approach to strong-constraint four-dimensional variational data assimilation
NASA Astrophysics Data System (ADS)
Rao, Vishwas; Sandu, Adrian
2016-05-01
A parallel-in-time algorithm based on an augmented Lagrangian approach is proposed to solve four-dimensional variational (4D-Var) data assimilation problems. The assimilation window is divided into multiple sub-intervals, which allows parallelization of the cost function and gradient computations. The solutions to the continuity equations across interval boundaries are added as constraints. The augmented Lagrangian approach leads to a different formulation of the variational data assimilation problem than the weakly constrained 4D-Var. A combination of serial and parallel 4D-Vars to increase performance is also explored. The methodology is illustrated on data assimilation problems involving the Lorenz-96 and the shallow water models.
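The structure of the method can be suggested with a toy augmented Lagrangian: each sub-interval carries its own initial state, the per-interval misfits are computable in parallel, and continuity across boundaries is enforced through multipliers and a penalty. A sketch under a made-up linear model and made-up shapes (not the paper's Lorenz-96 setup):

```python
import numpy as np

def model_step(x):                 # placeholder one-sub-interval forecast
    return 0.95 * x + 0.1 * np.roll(x, 1)

def aug_lagrangian(X, obs, lam, rho):
    # X: (K, n) initial states of K sub-intervals; obs: (K, n) toy data;
    # lam: (K-1, n) multipliers on the continuity constraints.
    J = sum(0.5 * np.sum((model_step(x0) - y) ** 2) for x0, y in zip(X, obs))
    c = np.array([X[k + 1] - model_step(X[k]) for k in range(len(X) - 1)])
    return J + np.sum(lam * c) + 0.5 * rho * np.sum(c ** 2)
```

In the usual augmented Lagrangian loop, the multipliers would be updated as lam ← lam + rho·c between (parallelizable) inner minimizations over X.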
Zhe, Shandian; Xu, Zenglin; Qi, Yuan; Yu, Peng
2014-01-01
A key step for Alzheimer's disease (AD) study is to identify associations between genetic variations and intermediate phenotypes (e.g., brain structures). At the same time, it is crucial to develop a noninvasive means for AD diagnosis. Although these two tasks (association discovery and disease diagnosis) have been treated separately by a variety of approaches, they are tightly coupled due to their common biological basis. We hypothesize that the two tasks can potentially benefit each other by a joint analysis, because (i) the association study discovers correlated biomarkers from different data sources, which may help improve diagnosis accuracy, and (ii) the disease status may help identify disease-sensitive associations between genetic variations and MRI features. Based on this hypothesis, we present a new sparse Bayesian approach for joint association study and disease diagnosis. In this approach, common latent features are extracted from different data sources based on sparse projection matrices and used to predict multiple disease severity levels based on Gaussian process ordinal regression; in return, the disease status is used to guide the discovery of relationships between the data sources. The sparse projection matrices not only reveal the associations but also select groups of biomarkers related to AD. To learn the model from data, we develop an efficient variational expectation maximization algorithm. Simulation results demonstrate that our approach achieves higher accuracy in both predicting ordinal labels and discovering associations between data sources than alternative methods. We apply our approach to an imaging genetics dataset of AD. Our joint analysis approach not only identifies meaningful and interesting associations between genetic variations, brain structures, and AD status, but also achieves significantly higher accuracy for predicting ordinal AD stages than the competing methods.
Mapping evolutionary process: a multi-taxa approach to conservation prioritization.
Thomassen, Henri A; Fuller, Trevon; Buermann, Wolfgang; Milá, Borja; Kieswetter, Charles M; Jarrín-V, Pablo; Cameron, Susan E; Mason, Eliza; Schweizer, Rena; Schlunegger, Jasmin; Chan, Janice; Wang, Ophelia; Peralvo, Manuel; Schneider, Christopher J; Graham, Catherine H; Pollinger, John P; Saatchi, Sassan; Wayne, Robert K; Smith, Thomas B
2011-03-01
Human-induced land use changes are causing extensive habitat fragmentation. As a result, many species are not able to shift their ranges in response to climate change and will likely need to adapt in situ to changing climate conditions. Consequently, a prudent strategy to maintain the ability of populations to adapt is to focus conservation efforts on areas where levels of intraspecific variation are high. By doing so, the potential for an evolutionary response to environmental change is maximized. Here, we use modeling approaches in conjunction with environmental variables to model species distributions and patterns of genetic and morphological variation in seven Ecuadorian amphibian, bird, and mammal species. We then used reserve selection software to prioritize areas for conservation based on intraspecific variation or species-level diversity. Reserves selected using species richness and complementarity showed little overlap with those based on genetic and morphological variation. Priority areas for intraspecific variation were mainly located along the slopes of the Andes and were largely concordant among species, but were not well represented in existing reserves. Our results imply that in order to maximize representation of intraspecific variation in reserves, genetic and morphological variation should be included in conservation prioritization.
Fokkema, Ivo F A C; den Dunnen, Johan T; Taschner, Peter E M
2005-08-01
The completion of the human genome project has initiated, as well as provided the basis for, the collection and study of all sequence variation between individuals. Direct access to up-to-date information on sequence variation is currently provided most efficiently through web-based, gene-centered, locus-specific databases (LSDBs). We have developed the Leiden Open (source) Variation Database (LOVD) software approaching the "LSDB-in-a-Box" idea for the easy creation and maintenance of a fully web-based gene sequence variation database. LOVD is platform-independent and uses PHP and MySQL open source software only. The basic gene-centered and modular design of the database follows the recommendations of the Human Genome Variation Society (HGVS) and focuses on the collection and display of DNA sequence variations. With minimal effort, the LOVD platform is extendable with clinical data. The open set-up should both facilitate and promote functional extension with scripts written by the community. The LOVD software is freely available from the Leiden Muscular Dystrophy pages (www.DMD.nl/LOVD/). To promote the use of LOVD, we currently offer curators the possibility to set up an LSDB on our Leiden server.
Analysis of bHLH coding genes using gene co-expression network approach.
Srivastava, Swati; Sanchita; Singh, Garima; Singh, Noopur; Srivastava, Gaurava; Sharma, Ashok
2016-07-01
Network analysis provides a powerful framework for the interpretation of data. It uses novel reference network-based metrics for module evolution, which can be used to identify modules of highly connected genes showing variation in a co-expression network. In this study, a co-expression network-based approach was used for analyzing genes from microarray data. Our approach consists of a simple but robust rank-based network construction. The publicly available gene expression data of Solanum tuberosum under cold and heat stresses were used to create and analyze a gene co-expression network. The analysis provides a highly co-expressed module of bHLH-coding genes based on correlation values. Our approach was to analyze the variation in gene expression across the time periods of stress through the co-expression network. As a result, seed genes were identified that show multiple connections with other genes in the same cluster. Seed genes were found to vary across the different time periods of stress. These seed genes may be further utilized as marker genes for developing stress-tolerant plant species.
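A rank-based construction of the kind described reduces to Spearman correlations, a hard threshold, and a connectivity ranking to nominate seed genes. A sketch with random placeholder expression data (the threshold and matrix sizes are arbitrary):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
expr = rng.normal(size=(50, 12))           # 50 genes x 12 stress time points
rho, _ = spearmanr(expr, axis=1)           # gene-by-gene rank correlations
adj = (np.abs(rho) >= 0.8) & ~np.eye(len(expr), dtype=bool)
degree = adj.sum(axis=1)
seed_genes = np.argsort(degree)[::-1][:5]  # most-connected candidate seeds
print(seed_genes, degree[seed_genes])
```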
Reschovsky, James D; Hadley, Jack; Romano, Patrick S
2013-10-01
Control for area differences in population health (casemix adjustment) is necessary to measure geographic variations in medical spending. Studies use various casemix adjustment methods, resulting in very different geographic variation estimates. We study casemix adjustment methodological issues and evaluate alternative approaches using claims from 1.6 million Medicare beneficiaries in 60 representative communities. Two key casemix adjustment methods were evaluated: controlling for patient conditions obtained from diagnoses on claims, and controlling for the expenditures of those at the end of life. We failed to find evidence of bias in the former approach attributable to area differences in physician diagnostic patterns, as others have found, and found that the assumption underpinning the latter approach (that persons close to death are equally sick across areas) cannot be supported. Diagnosis-based approaches are more appropriate when current rather than prior-year diagnoses are used. Population health likely explains more than 75% to 85% of cost variations across fixed sets of areas.
Elbel, Brian; Corcoran, Sean P; Schwartz, Amy Ellen
2016-01-01
A common policy approach to reducing childhood obesity aims to shape the environment in which children spend most of their time: neighborhoods and schools. This paper uses richly detailed data on the body mass index (BMI) of all New York City public school students in grades K-8 to assess the potential for place-based approaches to reduce child obesity. We document variation in the prevalence of obesity across NYC public schools and census tracts, and then estimate the extent to which this variation can be explained by differences in individual-level predictors (such as race and household income). Both unadjusted and adjusted variability across neighborhoods and schools suggest place-based policies have the potential to meaningfully reduce child obesity, but under most realistic scenarios the improvement would be modest.
NASA Astrophysics Data System (ADS)
Pathak, Nidhi; Kaur, Sukhdeep; Singh, Sukhmander
2018-05-01
In this paper, self-focusing/defocusing effects have been studied by taking into account the combined effect of ponderomotive and relativistic nonlinearity during laser-plasma interaction with density variation. The formulation is based on the numerical analysis of a second-order nonlinear differential equation for an appropriate set of laser and plasma parameters, employing the moment theory approach. We found that self-focusing increases with increasing laser intensity and density variation. The results obtained are valuable in high-harmonic generation, inertial confinement fusion, and charged-particle acceleration.
Optimization of equivalent uniform dose using the L-curve criterion.
Chvetsov, Alexei V; Dempsey, James F; Palta, Jatinder R
2007-10-07
Optimization of equivalent uniform dose (EUD) in inverse planning for intensity-modulated radiation therapy (IMRT) prevents variation in radiobiological effect between different radiotherapy treatment plans, which is due to variation in the pattern of dose nonuniformity. For instance, the survival fraction of clonogens would be consistent with the prescription when the optimized EUD is equal to the prescribed EUD. One of the problems in the practical implementation of this approach is that the spatial dose distribution in EUD-based inverse planning would be underdetermined because an unlimited number of nonuniform dose distributions can be computed for a prescribed value of EUD. Together with ill-posedness of the underlying integral equation, this may significantly increase the dose nonuniformity. To optimize EUD and keep dose nonuniformity within reasonable limits, we implemented into an EUD-based objective function an additional criterion which ensures the smoothness of beam intensity functions. This approach is similar to the variational regularization technique which was previously studied for the dose-based least-squares optimization. We show that the variational regularization together with the L-curve criterion for the regularization parameter can significantly reduce dose nonuniformity in EUD-based inverse planning.
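Two ingredients named above are easy to state compactly: the generalized-mean EUD of a dose vector, and an L-curve scan that picks the regularization weight at the corner of the (log residual, log seminorm) curve. The sketch below uses a linear least-squares surrogate of the inverse problem; A, b, L, and the lambda grid are illustrative stand-ins, not the planning system's operators:

```python
import numpy as np

def eud(dose, a):
    # generalized-mean EUD of a voxel dose vector (equal voxel volumes)
    return np.mean(dose ** a) ** (1.0 / a)

def l_curve_lambda(A, b, L, lambdas):
    r, s = [], []
    for lam in lambdas:
        # x minimizes ||A x - b||^2 + lam^2 ||L x||^2
        K = np.vstack([A, lam * L])
        rhs = np.concatenate([b, np.zeros(L.shape[0])])
        x = np.linalg.lstsq(K, rhs, rcond=None)[0]
        r.append(np.log(np.linalg.norm(A @ x - b)))
        s.append(np.log(np.linalg.norm(L @ x)))
    r, s = np.array(r), np.array(s)
    dr, ds = np.gradient(r), np.gradient(s)
    curv = (dr * np.gradient(ds) - ds * np.gradient(dr)) / (dr ** 2 + ds ** 2) ** 1.5
    return lambdas[np.argmax(np.abs(curv))]   # corner = maximum curvature
```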
Variation block-based genomics method for crop plants.
Kim, Yul Ho; Park, Hyang Mi; Hwang, Tae-Young; Lee, Seuk Ki; Choi, Man Soo; Jho, Sungwoong; Hwang, Seungwoo; Kim, Hak-Min; Lee, Dongwoo; Kim, Byoung-Chul; Hong, Chang Pyo; Cho, Yun Sung; Kim, Hyunmin; Jeong, Kwang Ho; Seo, Min Jung; Yun, Hong Tai; Kim, Sun Lim; Kwon, Young-Up; Kim, Wook Han; Chun, Hye Kyung; Lim, Sang Jong; Shin, Young-Ah; Choi, Ik-Young; Kim, Young Sun; Yoon, Ho-Sung; Lee, Suk-Ha; Lee, Sunghoon
2014-06-15
In contrast with wild species, cultivated crop genomes consist of reshuffled recombination blocks that arose through crossing and selection. Accordingly, recombination block-based genomics analysis can be an effective approach for screening target loci for agricultural traits. We propose the variation block method, a three-step process for recombination block detection and comparison. The first step is to detect variations by comparing the short-read DNA sequences of the cultivar to the reference genome of the target crop. Next, sequence blocks with variation patterns are examined and defined. The boundaries between the variation-containing sequence blocks are regarded as recombination sites. All the assumed recombination sites in the cultivar set are used to split the genomes, and the resulting sequence regions are termed variation blocks. Finally, the genomes are compared using the variation blocks. The variation block method identified recurring recombination blocks accurately and successfully represented block-level diversities in the publicly available genomes of 31 soybean and 23 rice accessions. The practicality of this approach was demonstrated by the identification of a putative locus determining soybean hilum color. We suggest that the variation block method is an efficient genomics method for recombination block-level comparison of crop genomes. We expect that this method will facilitate the development of crop genomics by bringing genomics technologies to the field of crop breeding.
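The block-splitting step lends itself to a very small sketch: consecutive variant positions whose presence/absence pattern across cultivars is identical are merged into one block, and pattern changes mark putative recombination sites. This is our simplified reading of the method, not the authors' pipeline:

```python
import numpy as np

def variation_blocks(genotypes):
    # genotypes: cultivars x positions 0/1 matrix of variant calls
    blocks, start = [], 0
    for j in range(1, genotypes.shape[1]):
        if not np.array_equal(genotypes[:, j], genotypes[:, j - 1]):
            blocks.append((start, j))        # positions [start, j) share a pattern
            start = j
    blocks.append((start, genotypes.shape[1]))
    return blocks

g = np.array([[0, 0, 1, 1, 1],
              [1, 1, 1, 0, 0],
              [0, 0, 0, 0, 1]])
print(variation_blocks(g))   # [(0, 2), (2, 3), (3, 4), (4, 5)]
```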
Sinha, Rituparna; Samaddar, Sandip; De, Rajat K
2015-01-01
Copy number variation (CNV) is a form of structural alteration in the mammalian DNA sequence, which are associated with many complex neurological diseases as well as cancer. The development of next generation sequencing (NGS) technology provides us a new dimension towards detection of genomic locations with copy number variations. Here we develop an algorithm for detecting CNVs, which is based on depth of coverage data generated by NGS technology. In this work, we have used a novel way to represent the read count data as a two dimensional geometrical point. A key aspect of detecting the regions with CNVs, is to devise a proper segmentation algorithm that will distinguish the genomic locations having a significant difference in read count data. We have designed a new segmentation approach in this context, using convex hull algorithm on the geometrical representation of read count data. To our knowledge, most algorithms have used a single distribution model of read count data, but here in our approach, we have considered the read count data to follow two different distribution models independently, which adds to the robustness of detection of CNVs. In addition, our algorithm calls CNVs based on the multiple sample analysis approach resulting in a low false discovery rate with high precision.
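As a point of reference for the depth-of-coverage signal the method works with, a greatly simplified read-depth caller can flag windows whose coverage departs from the genome-wide norm. This robust z-score sketch stands in for, and is much weaker than, the convex-hull segmentation described above:

```python
import numpy as np

def flag_cnv_windows(depth, win=1000, z=3.0):
    # depth: per-base read depth; windows flagged by robust z-score (MAD).
    n = len(depth) // win
    w = depth[: n * win].reshape(n, win).mean(axis=1)
    mad = 1.4826 * np.median(np.abs(w - np.median(w)))
    zscores = (w - np.median(w)) / mad
    return np.where(zscores > z)[0], np.where(zscores < -z)[0]  # gains, losses
```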
Andrew D. Richardson; David Y. Hollinger; John D. Aber; Scott V. Ollinger; Bobby H. Braswell
2007-01-01
Tower-based eddy covariance measurements of forest-atmosphere carbon dioxide (CO2) exchange from many sites around the world indicate that there is considerable year-to-year variation in net ecosystem exchange (NEE). Here, we use a statistical modeling approach to partition the interannual variability in NEE (and its component fluxes, ecosystem...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brizard, Alain J.; Tronci, Cesare
The variational formulations of guiding-center Vlasov-Maxwell theory based on Lagrange, Euler, and Euler-Poincaré variational principles are presented. Each variational principle yields a different route to incorporating guiding-center polarization and magnetization effects into the guiding-center Maxwell equations. The conservation laws of energy, momentum, and angular momentum are also derived by the Noether method, where the guiding-center stress tensor is now shown to be explicitly symmetric.
The energetic cost of walking: a comparison of predictive methods.
Kramer, Patricia Ann; Sylvester, Adam D
2011-01-01
The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we used modern humans as our model organism, these results can be extended to other species.
Nicholas C. Coops; Richard H. Waring; Todd A. Schroeder
2009-01-01
Although long-lived tree species experience considerable environmental variation over their life spans, their geographical distributions reflect sensitivity mainly to mean monthly climatic conditions.We introduce an approach that incorporates a physiologically based growth model to illustrate how a half-dozen tree species differ in their responses to monthly variation...
NASA Astrophysics Data System (ADS)
Wong, Kin-Yiu; Gao, Jiali
2007-12-01
Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to any realistic system beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good agreement with experiments. We hope that our method could be used by non-path-integral experts or experimentalists as a "black box" for any given system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shalashilin, Dmitrii V.; Burghardt, Irene
2008-08-28
In this article, two coherent-state based methods of quantum propagation, namely, coupled coherent states (CCS) and Gaussian-based multiconfiguration time-dependent Hartree (G-MCTDH), are put on the same formal footing, using a derivation from a variational principle in Lagrangian form. By this approach, oscillations of the classical-like Gaussian parameters and oscillations of the quantum amplitudes are formally treated in an identical fashion. We also suggest a new approach denoted here as coupled coherent states trajectories (CCST), which completes the family of Gaussian-based methods. Using the same formalism for all related techniques allows their systematization and a straightforward comparison of their mathematical structure and cost.
Oppel, Steffen; Powell, Abby N.; O'Brien, Diane M.
2010-01-01
The use of stored nutrients for reproduction represents an important component of life-history variation. Recent studies from several species have used stable isotopes to estimate the reliance on stored body reserves in reproduction. Such approaches rely on population-level dietary endpoints to characterize stored reserves (“capital”) and current diet (“income”). Individual variation in diet choice has so far not been incorporated in such approaches, but is crucial for assessing variation in nutrient allocation strategies. We investigated nutrient allocation to egg production in a large-bodied sea duck in northern Alaska, the king eider (Somateria spectabilis). We first used Bayesian isotopic mixing models to quantify at the population level the amount of endogenous carbon and nitrogen invested into egg proteins based on carbon and nitrogen isotope ratios. We then defined the isotopic signature of the current diet of every nesting female based on isotope ratios of eggshell membranes, because diets varied isotopically among individual king eiders on breeding grounds. We used these individual-based dietary isotope signals to characterize nutrient allocation for each female in the study population. At the population level, the Bayesian and the individual-based approaches yielded identical results, and showed that king eiders used an income strategy for the synthesis of egg proteins. The majority of the carbon and nitrogen in albumen (C: 86 ± 18%, N: 99 ± 1%) and the nitrogen in lipid-free yolk (90 ± 15%) were derived from food consumed on breeding grounds. Carbon in lipid-free yolk derived evenly from endogenous sources and current diet (exogenous C: 54 ± 24%), but source contribution was highly variable among individual females. These results suggest that even large-bodied birds traditionally viewed as capital breeders use exogenous nutrients for reproduction. We recommend that investigations of nutrient allocation should incorporate individual variation into mixing models to reveal intraspecific variation in reproductive strategies.
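The allocation estimate in such studies ultimately rests on two-endpoint mixing arithmetic: the fraction of a tissue's carbon or nitrogen derived from current diet is the position of the tissue's isotope ratio between the capital and income endpoints. A sketch with invented endpoint values:

```python
# Two-endpoint isotopic mixing; the delta values below are invented.
def income_fraction(d_tissue, d_capital, d_income):
    return (d_tissue - d_capital) / (d_income - d_capital)

print(income_fraction(d_tissue=-22.0, d_capital=-25.0, d_income=-21.5))  # ~0.86
```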
Hjort, Jan; Hugg, Timo T; Antikainen, Harri; Rusanen, Jarmo; Sofiev, Mikhail; Kukkonen, Jaakko; Jaakkola, Maritta S; Jaakkola, Jouni J K
2016-05-01
Despite the recent developments in physically and chemically based analysis of atmospheric particles, no models exist for resolving the spatial variability of pollen concentration at urban scale. We developed a land use regression (LUR) approach for predicting spatial fine-scale allergenic pollen concentrations in the Helsinki metropolitan area, Finland, and evaluated the performance of the models against available empirical data. We used grass pollen data monitored at 16 sites in an urban area during the peak pollen season and geospatial environmental data. The main statistical method was generalized linear model (GLM). GLM-based LURs explained 79% of the spatial variation in the grass pollen data based on all samples, and 47% of the variation when samples from two sites with very high concentrations were excluded. In model evaluation, prediction errors ranged from 6% to 26% of the observed range of grass pollen concentrations. Our findings support the use of geospatial data-based statistical models to predict the spatial variation of allergenic grass pollen concentrations at intra-urban scales. A remote sensing-based vegetation index was the strongest predictor of pollen concentrations for exposure assessments at local scales. The LUR approach provides new opportunities to estimate the relations between environmental determinants and allergenic pollen concentration in human-modified environments at fine spatial scales. This approach could potentially be applied to estimate retrospectively pollen concentrations to be used for long-term exposure assessments. Hjort J, Hugg TT, Antikainen H, Rusanen J, Sofiev M, Kukkonen J, Jaakkola MS, Jaakkola JJ. 2016. Fine-scale exposure to allergenic pollen in the urban environment: evaluation of land use regression approach. Environ Health Perspect 124:619-626; http://dx.doi.org/10.1289/ehp.1509761.
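A land use regression of this kind is, at its core, a GLM of monitored concentrations on geospatial predictors. A sketch with synthetic data and a Poisson family (the paper's exact predictor set and error family may differ):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
ndvi = rng.uniform(0.1, 0.8, 16)                    # vegetation index per site
grass = rng.uniform(0.0, 1.0, 16)                   # grass cover near each site
pollen = rng.poisson(np.exp(1 + 2 * ndvi + grass))  # synthetic counts
X = sm.add_constant(np.column_stack([ndvi, grass]))
fit = sm.GLM(pollen, X, family=sm.families.Poisson()).fit()
print(fit.params)                                    # fitted LUR coefficients
```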
Marinho, V C; Richards, D; Niederman, R
2001-05-01
Variation in health care, and more particularly in dental care, was recently chronicled in a Reader's Digest investigative report. The conclusions of this report are consistent with sound scientific studies conducted in various areas of health care, including dental care, which demonstrate substantial variation in the care provided to patients. This variation in care parallels the certainty with which clinicians and faculty members often articulate strongly held, but very different, opinions. Using a case-based dental scenario, we present systematic evidence-based methods for accessing dental health care information, evaluating this information for validity and importance, and using this information to make informed curricular and clinical decisions. We also discuss barriers inhibiting these systematic approaches to evidence-based clinical decision making and methods for effectively promoting behavior change in health care professionals.
Genetic approaches in comparative and evolutionary physiology.
Storz, Jay F; Bridgham, Jamie T; Kelly, Scott A; Garland, Theodore
2015-08-01
Whole animal physiological performance is highly polygenic and highly plastic, and the same is generally true for the many subordinate traits that underlie performance capacities. Quantitative genetics, therefore, provides an appropriate framework for the analysis of physiological phenotypes and can be used to infer the microevolutionary processes that have shaped patterns of trait variation within and among species. In cases where specific genes are known to contribute to variation in physiological traits, analyses of intraspecific polymorphism and interspecific divergence can reveal molecular mechanisms of functional evolution and can provide insights into the possible adaptive significance of observed sequence changes. In this review, we explain how the tools and theory of quantitative genetics, population genetics, and molecular evolution can inform our understanding of mechanism and process in physiological evolution. For example, lab-based studies of polygenic inheritance can be integrated with field-based studies of trait variation and survivorship to measure selection in the wild, thereby providing direct insights into the adaptive significance of physiological variation. Analyses of quantitative genetic variation in selection experiments can be used to probe interrelationships among traits and the genetic basis of physiological trade-offs and constraints. We review approaches for characterizing the genetic architecture of physiological traits, including linkage mapping and association mapping, and systems approaches for dissecting intermediary steps in the chain of causation between genotype and phenotype. We also discuss the promise and limitations of population genomic approaches for inferring adaptation at specific loci. We end by highlighting the role of organismal physiology in the functional synthesis of evolutionary biology.
NASA Technical Reports Server (NTRS)
Liu, Gao-Lian
1991-01-01
Advances in inverse design and optimization theory in engineering fields in China are presented. Two original approaches, the image-space approach and the variational approach, are discussed in terms of turbomachine aerodynamic inverse design. Other areas of research in turbomachine aerodynamic inverse design include the improved mean-streamline (stream surface) method and optimization theory based on optimal control. Among the additional engineering fields discussed are the following: the inverse problem of heat conduction, free-surface flow, variational cogeneration of optimal grid and flow field, and optimal meshing theory of gears.
Environmental flow assessments for transformed estuaries
NASA Astrophysics Data System (ADS)
Sun, Tao; Zhang, Heyue; Yang, Zhifeng; Yang, Wei
2015-01-01
Here, we propose an approach to environmental flow assessment that considers spatial pattern variations in potential habitats affected by river discharges and tidal currents in estuaries. The approach comprises four steps: identifying and simulating the distributions of critical environmental factors for habitats of typical species in an estuary; mapping of suitable habitats based on spatial distributions of the Habitat Suitability Index (HSI) and adopting the habitat aggregation index to understand fragmentation of potential suitable habitats; defining variations in water requirements for a certain species using trade-off analysis for different protection objectives; and recommending environmental flows in the estuary considering the compatibility and conflict of freshwater requirements for different species. This approach was tested using a case study in the Yellow River Estuary. Recommended environmental flows were determined by incorporating the requirements of four types of species into the assessments. Greater variability in freshwater inflows could be incorporated into the recommended environmental flows considering the adaptation of potential suitable habitats with variations in the flow regime. Environmental flow allocations should be conducted in conjunction with land use conflict management in estuaries. Based on the results presented here, the proposed approach offers flexible assessment of environmental flow for aquatic ecosystems that may be subject to future change.
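One common convention for the HSI step (not necessarily the one used in this paper) combines single-factor suitabilities by geometric mean, so that any factor near zero drags the composite down:

```python
import numpy as np

def hsi(suitabilities):
    # geometric mean of single-factor suitability indices in [0, 1]
    s = np.asarray(suitabilities, float)
    return s.prod() ** (1.0 / len(s))

print(hsi([0.9, 0.6, 0.8]))   # ~0.76
```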
Understanding the P×S Aspect of Within-Person Variation: A Variance Partitioning Approach
Lakey, Brian
2016-01-01
This article reviews a variance partitioning approach to within-person variation based on Generalizability Theory and the Social Relations Model. The approach conceptualizes an important part of within-person variation as Person × Situation (P×S) interactions: differences among persons in their profiles of responses across the same situations. The approach provided the first quantitative method for capturing within-person variation and demonstrated very large P×S effects for a wide range of constructs, including anxiety, five-factor personality traits, perceived social support, leadership, and task performance. Although P×S effects are commonly very large, conceptual and analytic obstacles have thwarted consistent progress. For example, how does one develop a psychological, versus purely statistical, understanding of P×S effects? How does one forecast future behavior when the criterion is a P×S effect? How can understanding P×S effects contribute to psychological theory? This review describes potential solutions to these and other problems developed in the course of conducting research on the P×S aspect of social support. Additional problems that need resolution are identified.
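In the simplest two-facet design (persons × situations, one response per cell), the variance partition behind such estimates comes straight from the ANOVA mean squares, with the residual carrying the P×S interaction confounded with error. A sketch using the standard Generalizability Theory estimators (placeholder data):

```python
import numpy as np

def ps_components(y):
    # y: persons x situations response matrix, one observation per cell.
    p, s = y.shape
    grand = y.mean()
    pm, sm = y.mean(axis=1), y.mean(axis=0)
    ms_p = s * ((pm - grand) ** 2).sum() / (p - 1)
    ms_s = p * ((sm - grand) ** 2).sum() / (s - 1)
    resid = y - pm[:, None] - sm[None, :] + grand
    ms_res = (resid ** 2).sum() / ((p - 1) * (s - 1))
    return {"person":       max((ms_p - ms_res) / s, 0.0),
            "situation":    max((ms_s - ms_res) / p, 0.0),
            "PxS (+error)": ms_res}

rng = np.random.default_rng(0)
print(ps_components(rng.normal(size=(40, 8))))
```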
Copy number variation of individual cattle genomes using next-generation sequencing
USDA-ARS?s Scientific Manuscript database
Copy number variations (CNVs) affect a wide range of phenotypic traits; however, CNVs in or near segmental duplication regions are often intractable. Using a read depth approach based on next-generation sequencing, we examined genome-wide copy number differences among five taurine (three Angus, one ...
ERIC Educational Resources Information Center
Squires, Lauren M.
2011-01-01
This dissertation investigates the sociolinguistic perception of morphosyntactic variation and is motivated by exemplar-based approaches to grammar. The study uses syntactic priming experiments to test the effects of participants' exposure to subject-verb agreement variants. Experiments also manipulate the gender, social status, and individual…
Hybrid generative-discriminative approach to age-invariant face recognition
NASA Astrophysics Data System (ADS)
Sajid, Muhammad; Shafique, Tamoor
2018-03-01
Age-invariant face recognition is still a challenging research problem due to the complex aging process involving types of facial tissues, skin, fat, muscles, and bones. Most of the related studies that have addressed the aging problem are focused on generative representation (aging simulation) or discriminative representation (feature-based approaches). Designing an appropriate hybrid approach taking into account both the generative and discriminative representations for age-invariant face recognition remains an open problem. We perform hybrid matching to achieve robustness to aging variations. This approach automatically segments the eyes, nose-bridge, and mouth regions, which are relatively less sensitive to aging variations compared with the rest of the facial regions, which are age-sensitive. The aging variations of age-sensitive facial parts are compensated using a demographic-aware generative model based on a bridged denoising autoencoder. The age-insensitive facial parts are represented by pixel average vector-based local binary patterns. Deep convolutional neural networks are used to extract relative features of age-sensitive and age-insensitive facial parts. Finally, the feature vectors of age-sensitive and age-insensitive facial parts are fused to achieve the recognition results. Extensive experimental results on the morphological face database II (MORPH II), the face and gesture recognition network (FG-NET), and the verification subset of the cross-age celebrity dataset (CACD-VS) demonstrate the effectiveness of the proposed method for age-invariant face recognition.
NASA Astrophysics Data System (ADS)
Amin, Asad; Nasim, Wajid; Mubeen, Muhammad; Kazmi, Dildar Hussain; Lin, Zhaohui; Wahid, Abdul; Sultana, Syeda Refat; Gibbs, Jim; Fahad, Shah
2017-09-01
Unpredictable precipitation trends, largely influenced by climate change, have prolonged droughts and floods in South Asia. Statistical analyses of monthly, seasonal, and annual precipitation trends were carried out at different temporal (1996-2015 and 2041-2060) and spatial scales (39 meteorological stations) in Pakistan. A statistical downscaling model (SimCLIM) was used for the future precipitation projection (2041-2060), with an ensemble approach combined with representative concentration pathways (RCPs) at the medium level used for the future projections. The magnitude and slope of trends were derived by applying the Mann-Kendall and Sen's slope statistical approaches, and geo-statistical tools were used to generate precipitation trend maps. Base and projected precipitation were compared through maps and graphical visualization, which facilitates trend detection. The results project that the precipitation trend increased at more than 70% of weather stations for February, March, April, August, and September in the base years. The precipitation trend decreased from February to April but increased from July to October in the projected years. The strongest decreasing trend occurred in January in the base years and decreased further in the projected years. The greatest variation in precipitation trends between projected and base years occurred from February to April. Variations in the projected precipitation trend for Punjab and Baluchistan were most pronounced in March and April. Seasonal analysis shows large variation in winter, with an increasing trend at more than 30% of weather stations, rising to 40% for projected precipitation. High risk was found in the base-year pre-monsoon season, where 90% of weather stations showed an increasing trend, but in the projected years this dropped to 33%. Finally, the annual precipitation trend increased at more than 90% of meteorological stations in the base years (1996-2015) but decreased in the projected years (2041-2060) at up to 76% of stations. These results reveal that the overall precipitation trend is decreasing in future years, which may prolong drought at 14% of the weather stations under study.
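Both trend statistics named above are standard; the minimal implementation below (assuming no tied values and a series long enough, roughly n > 10, for the normal approximation) computes the Mann-Kendall Z statistic with a two-sided p-value and Sen's slope for a single station's series:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall_z(x):
    """Mann-Kendall trend test (normal approximation, no tie correction).
    Positive Z indicates an increasing trend."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return z, 2 * (1 - norm.cdf(abs(z)))   # two-sided p-value

def sens_slope(x):
    """Sen's slope: median of all pairwise slopes."""
    x = np.asarray(x, dtype=float)
    return np.median([(x[j] - x[i]) / (j - i)
                      for i in range(len(x) - 1) for j in range(i + 1, len(x))])

# Hypothetical annual precipitation series (mm) for one station.
precip = np.array([610, 585, 640, 655, 630, 700, 690, 720, 745, 730, 760, 780])
z, p = mann_kendall_z(precip)
print(f"Z = {z:.2f}, p = {p:.3f}, Sen's slope = {sens_slope(precip):.1f} mm/yr")
```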
Petri net modeling of high-order genetic systems using grammatical evolution.
Moore, Jason H; Hahn, Lance W
2003-11-01
Understanding how DNA sequence variations impact human health through a hierarchy of biochemical and physiological systems is expected to improve the diagnosis, prevention, and treatment of common, complex human diseases. We have previously developed a hierarchical dynamic systems approach based on Petri nets for generating biochemical network models that are consistent with genetic models of disease susceptibility. This modeling approach uses an evolutionary computation approach called grammatical evolution as a search strategy for optimal Petri net models. We have previously demonstrated that this approach routinely identifies biochemical network models that are consistent with a variety of genetic models in which disease susceptibility is determined by nonlinear interactions between two DNA sequence variations. In the present study, we evaluate whether the Petri net approach is capable of identifying biochemical networks that are consistent with disease susceptibility due to higher order nonlinear interactions between three DNA sequence variations. The results indicate that our model-building approach is capable of routinely identifying good, but not perfect, Petri net models. Ideas for improving the algorithm for this high-dimensional problem are presented.
Jesse, Stephen; Kalinin, Sergei V
2009-02-25
An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
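A minimal sketch of this workflow (illustrative; the cube shape and the use of scikit-learn are assumptions, and random data stands in for measurements) reshapes a spectroscopic-imaging cube into a pixels × spectrum matrix, ranks components by explained variance, and reconstructs a de-noised, compressed cube from the leading components:

```python
import numpy as np
from sklearn.decomposition import PCA

nx, ny, ns = 64, 64, 256                   # spatial grid, spectrum length
rng = np.random.default_rng(0)
cube = rng.normal(size=(nx, ny, ns))       # stand-in for measured data

X = cube.reshape(nx * ny, ns)              # one row per pixel spectrum

pca = PCA(n_components=5)
scores = pca.fit_transform(X)              # scores: pixels x components
print("variance explained:", pca.explained_variance_ratio_.round(3))

# De-noise/compress by keeping only the leading ranked components.
cube_denoised = pca.inverse_transform(scores).reshape(nx, ny, ns)

# Each score column, reshaped to the spatial grid, is a component map
# suitable for the correlation-function analysis mentioned above.
map0 = scores[:, 0].reshape(nx, ny)
```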
Tenti, Lorenzo; Maynau, Daniel; Angeli, Celestino; Calzado, Carmen J
2016-07-21
A new strategy based on orthogonal valence-bond analysis of the wave function combined with intermediate Hamiltonian theory has been applied to the evaluation of the magnetic coupling constants in two AF systems. This approach provides both a quantitative estimate of the J value and a detailed analysis of the main physical mechanisms controlling the coupling, using a combined perturbative + variational scheme. The procedure requires a selection of the dominant excitations to be treated variationally. Two methods have been employed: a brute-force selection, using a logic similar to that of the CIPSI approach, or entanglement measures, which identify the most interacting orbitals in the system. Once a reduced set of excitations (about 300 determinants) is established, the interaction matrix is dressed at the second order of perturbation by the remaining excitations of the CI space. The diagonalization of the dressed matrix provides J values in good agreement with experimental ones, at a very low cost. This approach demonstrates the key role of d → d* excitations in the quantitative description of the magnetic coupling, as well as the importance of using an extended active space, including the bridging ligand orbitals, for the binuclear model of the intermediates of multicopper oxidases. The method is a promising tool for dealing with complex systems containing several active centers, as an alternative to both pure variational and DFT approaches.
Mean Field Variational Bayesian Data Assimilation
NASA Astrophysics Data System (ADS)
Vrettas, M.; Cornford, D.; Opper, M.
2012-04-01
Current data assimilation schemes propose a range of approximate solutions to the classical data assimilation problem, particularly state estimation. Broadly there are three main active research areas: ensemble Kalman filter methods which rely on statistical linearization of the model evolution equations, particle filters which provide a discrete point representation of the posterior filtering or smoothing distribution, and 4DVAR methods which seek the most likely posterior smoothing solution. In this paper we present a recent extension to our variational Bayesian algorithm which seeks the most probable posterior distribution over the states, within the family of non-stationary Gaussian processes. Our original work on variational Bayesian approaches to data assimilation sought the best approximating time-varying Gaussian process to the posterior smoothing distribution for stochastic dynamical systems. This approach was based on minimising the Kullback-Leibler divergence between the true posterior over paths and our Gaussian process approximation. So long as the observation density was sufficiently high to bring the posterior smoothing density close to Gaussian, the algorithm proved very effective on lower-dimensional systems. However, for higher-dimensional systems the algorithm was computationally very demanding. We have been developing a mean field version of the algorithm which treats the state variables at a given time as being independent in the posterior approximation, but still accounts for the relationships between them in the mean solution arising from the original dynamical system. In this work we present the new mean field variational Bayesian approach, illustrating its performance on a range of classical data assimilation problems. We discuss the potential and limitations of the new approach. We emphasise that the variational Bayesian approach we adopt, in contrast to other variational approaches, provides a bound on the marginal likelihood of the observations given parameters in the model, which also allows inference of parameters such as observation errors, and parameters in the model and model error representation, particularly if this is written as a deterministic form with small additive noise. We stress that our approach can address very long time window and weak constraint settings. Like traditional variational approaches, our Bayesian variational method has the benefit of being posed as an optimisation problem. We finish with a sketch of the future directions for our approach.
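The KL construction referred to above can be summarized compactly in generic notation (a sketch, not the authors' exact formulation): for an approximating process q over state paths x and observations y,

```latex
\ln p(y \mid \theta)
  = \underbrace{\mathbb{E}_{q}\!\left[\ln \frac{p(y, x \mid \theta)}{q(x)}\right]}_{\mathcal{F}(q,\,\theta)}
  + \mathrm{KL}\!\bigl(q(x)\,\|\,p(x \mid y, \theta)\bigr)
  \;\ge\; \mathcal{F}(q,\theta),
```

so minimizing the KL divergence is equivalent to maximizing the lower bound F, and maximizing F over the parameters θ is what permits the inference of observation errors and model-error parameters noted above.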
Tseng, Shu-Ping; Li, Shou-Hsien; Hsieh, Chia-Hung; Wang, Hurng-Yi; Lin, Si-Min
2014-10-01
Dating the time of divergence and understanding speciation processes are central to the study of the evolutionary history of organisms but are notoriously difficult. The difficulty is largely rooted in variations in the ancestral population size or in the genealogy variation across loci. To depict the speciation processes and divergence histories of three monophyletic Takydromus species endemic to Taiwan, we sequenced 20 nuclear loci and combined them with one mitochondrial locus published in GenBank. The data were analysed by a multispecies coalescent approach within a Bayesian framework. Divergence dating based on the gene tree approach showed high variation among loci, and the divergence was estimated at an earlier date than when derived by the species-tree approach. To test whether variations in the ancestral population size accounted for the majority of this variation, we conducted computer inferences using isolation-with-migration (IM) and approximate Bayesian computation (ABC) frameworks. The results revealed that gene flow during the early stage of speciation was strongly favoured over the isolation model, and the initiation of the speciation process was far earlier than the dates estimated by gene- and species-based divergence dating. Due to their limited dispersal ability, it is suggested that geographical isolation may have played a major role in the divergence of these Takydromus species. Nevertheless, this study reveals a more complex situation and demonstrates that gene flow during the speciation process cannot be overlooked and may have a great impact on divergence dating. By using multilocus data and incorporating Bayesian coalescence approaches, we provide a more biologically realistic framework for delineating the divergence history of Takydromus. © 2014 John Wiley & Sons Ltd.
Organizational Training across Cultures: Variations in Practices and Attitudes
ERIC Educational Resources Information Center
Hassi, Abderrahman; Storti, Giovanna
2011-01-01
Purpose: The purpose of this paper is to provide a synthesis based on a review of the existing literature with respect to the variations in training practices and attitudes across national cultures. Design/methodology/approach: A content analysis technique was adopted with a comparative cross-cultural management perspective as a backdrop to…
Continuous, age-related plumage variation in male Kirtland's Warblers
John R. Probst; Deahn M. Donner; Michael A. Bozek
2007-01-01
The ability to age individual birds visually in the field based on plumage variation could provide important demographic and biogeographical information. We describe an approach to infer ages from a distribution of plumage scores of free-ranging male Kirtland's Warblers (Dendroica kirtlandii). We assigned ages to males using a scoring scheme (0-...
A parameter-free variational coupling approach for trimmed isogeometric thin shells
NASA Astrophysics Data System (ADS)
Guo, Yujie; Ruess, Martin; Schillinger, Dominik
2017-04-01
The non-symmetric variant of Nitsche's method was recently applied successfully for variationally enforcing boundary and interface conditions in non-boundary-fitted discretizations. In contrast to its symmetric variant, it does not require stabilization terms and therefore does not depend on the appropriate estimation of stabilization parameters. In this paper, we further consolidate the non-symmetric Nitsche approach by establishing its application in isogeometric thin shell analysis, where variational coupling techniques are of particular interest for enforcing interface conditions along trimming curves. To this end, we extend its variational formulation within Kirchhoff-Love shell theory, combine it with the finite cell method, and apply the resulting framework to a range of representative shell problems based on trimmed NURBS surfaces. We demonstrate that the non-symmetric variant applied in this context is stable and can lead to the same accuracy in terms of displacements and stresses as its symmetric counterpart. Based on our numerical evidence, the non-symmetric Nitsche method is a viable parameter-free alternative to the symmetric variant in elastostatic shell analysis.
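For orientation, the non-symmetric Nitsche variant can be written down for a scalar model problem (a generic sketch, not the Kirchhoff-Love shell formulation of the paper): find u_h such that, for all test functions v_h,

```latex
\int_{\Omega} \nabla u_h \cdot \nabla v_h \, d\Omega
  \;-\; \int_{\Gamma_D} (\partial_n u_h)\, v_h \, d\Gamma
  \;+\; \int_{\Gamma_D} (\partial_n v_h)\,(u_h - g) \, d\Gamma
  \;=\; \int_{\Omega} f\, v_h \, d\Omega ,
```

where g is the prescribed boundary (or interface) value on Γ_D. The sign flip on the third term relative to the symmetric variant is what removes the penalty term, and with it the need to estimate a stabilization parameter.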
Can traits predict individual growth performance? A test in a hyperdiverse tropical forest.
Poorter, Lourens; Castilho, Carolina V; Schietti, Juliana; Oliveira, Rafael S; Costa, Flávia R C
2018-07-01
The functional trait approach has, as a central tenet, that plant traits are functional and shape individual performance, but this has rarely been tested in the field. Here, we tested the individual-based trait approach in a hyperdiverse Amazonian tropical rainforest and evaluated intraspecific variation in trait values, plant strategies at the individual level, and whether traits are functional and predict individual performance. We evaluated > 1300 tree saplings belonging to > 383 species, measured 25 traits related to growth and defense, and evaluated the effects of environmental conditions, plant size, and traits on stem growth. A total of 44% of the trait variation was observed within species, indicating a strong potential for acclimation. Individuals showed two strategy spectra, related to tissue toughness and organ size vs leaf display. In this nutrient- and light-limited forest, traits measured at the individual level were surprisingly poor predictors of individual growth performance because of convergence of traits and growth rates. Functional trait approaches based on individuals or species are conceptually fundamentally different: the species-based approach focuses on the potential and the individual-based approach on the realized traits and growth rates. Counterintuitively, the individual approach leads to a poor prediction of individual performance, although it provides a more realistic view on community dynamics. © 2018 The Authors. New Phytologist © 2018 New Phytologist Trust.
The Energetic Cost of Walking: A Comparison of Predictive Methods
Kramer, Patricia Ann; Sylvester, Adam D.
2011-01-01
Background The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is “best”, but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. Methodology/Principal Findings We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Conclusion Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we used modern humans as our model organism, these results can be extended to other species. PMID:21731693
Population Modeling Approach to Optimize Crop Harvest Strategy. The Case of Field Tomato.
Tran, Dinh T; Hertog, Maarten L A T M; Tran, Thi L H; Quyen, Nguyen T; Van de Poel, Bram; Mata, Clara I; Nicolaï, Bart M
2017-01-01
In this study, the aim is to develop a population model-based approach to optimize fruit harvesting strategies with regard to fruit quality and its derived economic value. This approach was applied to the case of tomato fruit harvesting under Vietnamese conditions. Fruit growth and development of tomato (cv. "Savior") was monitored in terms of fruit size and color during both the Vietnamese winter and summer growing seasons. A kinetic tomato fruit growth model was applied to quantify biological fruit-to-fruit variation in terms of their physiological maturation. This model was successfully calibrated. Finally, the model was extended to translate the fruit-to-fruit variation at harvest into the economic value of the harvested crop. It can be concluded that a model-based approach to the optimization of harvest date and harvest frequency with regard to the economic value of the crop as such is feasible. This approach allows growers to optimize their harvesting strategy by harvesting the crop at more uniform maturity stages, meeting the stringent retail demands for homogeneous high quality product. The total farm profit would still depend on the impact a change in harvesting strategy might have on related expenditures. This model-based harvest optimisation approach can be easily transferred to other fruit and vegetable crops, improving the homogeneity of postharvest product streams.
Reliable Detection of Herpes Simplex Virus Sequence Variation by High-Throughput Resequencing.
Morse, Alison M; Calabro, Kaitlyn R; Fear, Justin M; Bloom, David C; McIntyre, Lauren M
2017-08-16
High-throughput sequencing (HTS) has resulted in data for a number of herpes simplex virus (HSV) laboratory strains and clinical isolates. The knowledge of these sequences has been critical for investigating viral pathogenicity. However, the assembly of complete herpesviral genomes, including HSV, is complicated due to the existence of large repeat regions and arrays of smaller reiterated sequences that are commonly found in these genomes. In addition, the inherent genetic variation in populations of isolates for viruses and other microorganisms presents an additional challenge to many existing HTS sequence assembly pipelines. Here, we evaluate two approaches for the identification of genetic variants in HSV1 strains using Illumina short read sequencing data. The first, a reference-based approach, identifies variants from reads aligned to a reference sequence and the second, a de novo assembly approach, identifies variants from reads aligned to de novo assembled consensus sequences. Of critical importance for both approaches is the reduction in the number of low complexity regions through the construction of a non-redundant reference genome. We compared variants identified in the two methods. Our results indicate that approximately 85% of variants are identified regardless of the approach. The reference-based approach to variant discovery captures an additional 15% representing variants divergent from the HSV1 reference possibly due to viral passage. Reference-based approaches are significantly less labor-intensive and identify variants across the genome where de novo assembly-based approaches are limited to regions where contigs have been successfully assembled. In addition, regions of poor quality assembly can lead to false variant identification in de novo consensus sequences. For viruses with a well-assembled reference genome, a reference-based approach is recommended.
NASA Technical Reports Server (NTRS)
Wallis, Graham B.
1989-01-01
Some features of two recent approaches to two-phase potential flow are presented. The first approach is based on a set of progressive examples that can be analyzed using common techniques, such as conservation laws, and taken together appear to lead in the direction of a general theory. The second approach is based on variational methods, a classical approach to conservative mechanical systems that has a respectable history of application to single-phase flows. This latter approach, exemplified by several recent papers by Geurst, appears generally to be consistent with the former approach, at least in those cases for which it is possible to obtain comparable results. Each approach has a justifiable theoretical base and is self-consistent. Moreover, both approaches appear to give the right prediction for several well-defined situations.
Papers in Syntax. Working Papers in Linguistics No. 42.
ERIC Educational Resources Information Center
Kathol, Andreas, Ed.; Pollard, Carl, Ed.
1993-01-01
This collection of working papers in syntax includes: "Null Objects in Mandarin Chinese" (Christie Block); "Toward a Linearization-Based Approach to Word Order Variation in Japanese" (Mike Calcagno); "A Lexical Approach to Inalienable Possession Constructions in Korean" (Chung, Chan); "Chinese NP Structure"…
PoMo: An Allele Frequency-Based Approach for Species Tree Estimation
De Maio, Nicola; Schrempf, Dominik; Kosiol, Carolin
2015-01-01
Incomplete lineage sorting can cause incongruencies of the overall species-level phylogenetic tree with the phylogenetic trees for individual genes or genomic segments. If these incongruencies are not accounted for, it is possible to incur several biases in species tree estimation. Here, we present a simple maximum likelihood approach that accounts for ancestral variation and incomplete lineage sorting. We use a POlymorphisms-aware phylogenetic MOdel (PoMo) that we have recently shown to efficiently estimate mutation rates and fixation biases from within and between-species variation data. We extend this model to perform efficient estimation of species trees. We test the performance of PoMo in several different scenarios of incomplete lineage sorting using simulations and compare it with existing methods both in accuracy and computational speed. In contrast to other approaches, our model does not use coalescent theory but is allele frequency based. We show that PoMo is well suited for genome-wide species tree estimation and that on such data it is more accurate than previous approaches. PMID:26209413
Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar
2018-01-01
Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it was extracted based on the frame difference with Sauvola local adaptive thresholding algorithms. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features can be used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. It is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes that are loaded into the multi-processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm was applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target in appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and is computationally efficient compared to state-of-the-art methods. PMID:29438421
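A minimal sketch of the temporal-saliency step (frame differencing followed by Sauvola local adaptive thresholding; the frame sources and window size are hypothetical, with scikit-image supplying the Sauvola filter):

```python
import numpy as np
from skimage.filters import threshold_sauvola

def temporal_saliency(prev_frame, curr_frame, window_size=25):
    """Binary map of candidate moving-target regions.

    prev_frame, curr_frame: grayscale frames as float arrays in [0, 1].
    The absolute frame difference highlights motion; Sauvola's local
    adaptive threshold binarizes it robustly under uneven contrast.
    """
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    return diff > threshold_sauvola(diff, window_size=window_size)

# Toy usage with synthetic frames: a bright patch appears between frames.
rng = np.random.default_rng(1)
prev = rng.random((120, 160)) * 0.1
curr = prev.copy()
curr[50:60, 70:80] += 0.8
print("candidate moving pixels:", int(temporal_saliency(prev, curr).sum()))
```

The spatial-saliency and discriminative-learning stages would then score appearance details only inside these candidate regions, which is what keeps the full pipeline tractable.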
NASA Astrophysics Data System (ADS)
Singh, K.; Sandu, A.; Bowman, K. W.; Parrington, M.; Jones, D. B. A.; Lee, M.
2011-08-01
Chemistry transport models determine the evolving chemical state of the atmosphere by solving the fundamental equations that govern physical and chemical transformations subject to initial conditions of the atmospheric state and surface boundary conditions, e.g., surface emissions. The development of data assimilation techniques synthesizes model predictions with measurements in a rigorous mathematical framework that provides observational constraints on these conditions. Two families of data assimilation methods are currently widely used: variational and Kalman filter (KF). The variational approach is based on control theory and formulates data assimilation as a minimization problem of a cost functional that measures the model-observations mismatch. The Kalman filter approach is rooted in statistical estimation theory and provides the analysis covariance together with the best state estimate. Suboptimal Kalman filters employ different approximations of the covariances in order to make the computations feasible with large models. Each family of methods has both merits and drawbacks. This paper compares several data assimilation methods used for global chemical data assimilation. Specifically, we evaluate data assimilation approaches for improving estimates of the summertime global tropospheric ozone distribution in August 2006 based on ozone observations from the NASA Tropospheric Emission Spectrometer and the GEOS-Chem chemistry transport model. The resulting analyses are compared against independent ozonesonde measurements to assess the effectiveness of each assimilation method. All assimilation methods provide notable improvements over the free model simulations, which differ from the ozonesonde measurements by about 20 % (below 200 hPa). Four-dimensional variational data assimilation with window lengths between five days and two weeks is the most accurate method, with mean differences between analysis profiles and ozonesonde measurements of 1-5 %. Two sequential assimilation approaches (three-dimensional variational and suboptimal KF), although derived from different theoretical considerations, provide similar ozone estimates, with relative differences of 5-10 % between the analyses and ozonesonde measurements. Adjoint sensitivity analysis techniques are used to explore the role of uncertainties in ozone precursors and their emissions on the distribution of tropospheric ozone. A novel technique is introduced that projects 3-D variational increments back to an equivalent initial condition, which facilitates comparison with 4-D variational techniques.
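For reference, the 4D variational method evaluated above minimizes a cost functional of the standard form (generic notation, not the paper's exact operators):

```latex
J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^{\mathsf T}\mathbf{B}^{-1}(x_0 - x_b)
 + \tfrac{1}{2}\sum_{k=0}^{N}
   \bigl(\mathcal{H}_k(x_k) - y_k\bigr)^{\mathsf T}\mathbf{R}_k^{-1}
   \bigl(\mathcal{H}_k(x_k) - y_k\bigr),
\qquad x_k = \mathcal{M}_{0 \to k}(x_0),
```

where x_b is the background state, B and R_k the background and observation error covariances, H_k the observation operator mapping the model state to the observations y_k, and M the chemistry transport model propagating the initial condition across the assimilation window; the gradient of J with respect to x_0 is supplied by the adjoint model.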
Level set formulation of two-dimensional Lagrangian vortex detection methods
NASA Astrophysics Data System (ADS)
Hadjighasem, Alireza; Haller, George
2016-10-01
We propose here the use of the variational level set methodology to capture Lagrangian vortex boundaries in 2D unsteady velocity fields. This method reformulates earlier approaches that seek material vortex boundaries as extremum solutions of variational problems. We demonstrate the performance of this technique for two different variational formulations built upon different notions of coherence. The first formulation uses an energy functional that penalizes the deviation of a closed material line from piecewise uniform stretching [Haller and Beron-Vera, J. Fluid Mech. 731, R4 (2013)]. The second energy function is derived for a graph-based approach to vortex boundary detection [Hadjighasem et al., Phys. Rev. E 93, 063107 (2016)]. Our level-set formulation captures an a priori unknown number of vortices simultaneously at relatively low computational cost. We illustrate the approach by identifying vortices from different coherence principles in several examples.
Evaluating abundance and trends in a Hawaiian avian community using state-space analysis
Camp, Richard J.; Brinck, Kevin W.; Gorresen, P.M.; Paxton, Eben H.
2016-01-01
Estimating population abundances and patterns of change over time are important in both ecology and conservation. Trend assessment typically entails fitting a regression to a time series of abundances to estimate population trajectory. However, year-to-year changes in abundance estimates are due to both true variation in population size (process variation) and variation due to imperfect sampling and model fit. State-space models are a relatively new method that can be used to partition the error components and quantify trends based only on process variation. We compare a state-space modelling approach with a more traditional linear regression approach to assess trends in uncorrected raw counts and detection-corrected abundance estimates of forest birds at Hakalau Forest National Wildlife Refuge, Hawai‘i. Most species demonstrated similar trends using either method. In general, evidence for trends using state-space models was less strong than for linear regression, as measured by estimates of precision. However, while the state-space models may sacrifice precision, the expectation is that these estimates provide a better representation of the real-world biological processes of interest because they partition process variation (environmental and demographic variation) and observation variation (sampling and model variation). The state-space approach also provides annual estimates of abundance, which can be used by managers to set conservation strategies, and can be linked to factors that vary by year, such as climate, to better understand processes that drive population trends.
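The error partitioning can be stated compactly; a common minimal form (a generic sketch, for example a random walk with drift on the log scale, not necessarily the exact model fitted in the study) separates the two sources:

```latex
n_{t+1} = n_t + r + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0,\,\sigma^2_{\mathrm{proc}}) \quad \text{(process)},
\\
y_t = n_t + \eta_t, \qquad \eta_t \sim \mathcal{N}(0,\,\sigma^2_{\mathrm{obs}}) \quad \text{(observation)},
```

where n_t is the true log abundance, y_t the log of the (detection-corrected) estimate, r the trend of interest, and the two variances capture process variation and sampling/model variation, respectively; the trend is then assessed on the process component alone.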
In Silico Detection of Sequence Variations Modifying Transcriptional Regulation
Andersen, Malin C; Engström, Pär G; Lithwick, Stuart; Arenillas, David; Eriksson, Per; Lenhard, Boris; Wasserman, Wyeth W; Odeberg, Jacob
2008-01-01
Identification of functional genetic variation associated with increased susceptibility to complex diseases can elucidate genes and underlying biochemical mechanisms linked to disease onset and progression. For genes linked to genetic diseases, most identified causal mutations alter an encoded protein sequence. Technological advances for measuring RNA abundance suggest that a significant number of undiscovered causal mutations may alter the regulation of gene transcription. However, it remains a challenge to separate causal genetic variations from linked neutral variations. Here we present an in silico driven approach to identify possible genetic variation in regulatory sequences. The approach combines phylogenetic footprinting and transcription factor binding site prediction to identify variation in candidate cis-regulatory elements. The bioinformatics approach has been tested on a set of SNPs that are reported to have a regulatory function, as well as background SNPs. In the absence of additional information about an analyzed gene, the poor specificity of binding site prediction is prohibitive to its application. However, when additional data is available that can give guidance on which transcription factor is involved in the regulation of the gene, the in silico binding site prediction improves the selection of candidate regulatory polymorphisms for further analyses. The bioinformatics software generated for the analysis has been implemented as a Web-based application system entitled RAVEN (regulatory analysis of variation in enhancers). The RAVEN system is available at http://www.cisreg.ca for all researchers interested in the detection and characterization of regulatory sequence variation. PMID:18208319
A dictionary learning approach for Poisson image deblurring.
Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong
2013-07-01
The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, the pixel-based total variation regularization term, and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio value, and the method noise, the proposed algorithm outperforms state-of-the-art methods.
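Schematically, the three-term objective can be written as (a hedged paraphrase in generic notation; the weights and the sparse-coding penalty are indicated loosely):

```latex
\min_{u,\,\{\alpha_i\}} \;
  \sum_i \Bigl( \tfrac{\mu}{2}\,\lVert R_i u - D\alpha_i \rVert_2^2
              + \lambda_1 \lVert \alpha_i \rVert_0 \Bigr)
  \; + \; \lambda_2\,\mathrm{TV}(u)
  \; + \; \int_\Omega \bigl( Hu - f \log(Hu) \bigr),
```

where H is the blur operator, f the observed counts, D the learned dictionary, and R_i the patch-extraction operator; the last term is the negative Poisson log-likelihood up to an additive constant, and alternating minimization with variable splitting decouples the three terms.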
NASA Astrophysics Data System (ADS)
Sun, Y.; Ditmar, P.; Riva, R.
2016-12-01
Time-varying gravity field solutions of the GRACE satellite mission enable an observation of Earth's mass transport on a monthly basis since 2002. One of the remaining challenges is how to complement these solutions with sufficiently accurate estimates of very low-degree spherical harmonic coefficients, particularly degree-1 coefficients and C20. An absence or inaccurate estimation of these coefficients may result in strong biases in mass transport estimates. Variations in degree-1 coefficients reflect geocenter motion, and variations in the C20 coefficients describe changes in the Earth's dynamic oblateness (ΔJ2). In this study, we developed a novel methodology to estimate monthly variations in degree-1 and C20 coefficients by combining GRACE data with oceanic mass anomalies (combination approach). Unlike the method by Swenson et al. (2008), the proposed approach exploits noise covariance information of both input datasets and thus produces stochastically optimal solutions. A numerical simulation study is carried out to verify the correctness and performance of the proposed approach. We demonstrate that solutions obtained with the proposed approach have a significantly higher quality, as compared to the method by Swenson et al. Finally, we apply the proposed approach to real monthly GRACE solutions. To evaluate the obtained results, we calculate mass transport time-series over selected regions where minimal mass anomalies are expected. A clear reduction in the RMS of the mass transport time-series (more than 50 %) is observed there when the degree-1 and C20 coefficients obtained with the proposed approach are used. In particular, the seasonal pattern in the mass transport time-series disappears almost entirely. The traditional approach (degree-1 coefficients based on Swenson et al. (2008) and C20 based on SLR data), in contrast, does not reduce that RMS or even makes it larger (e.g., over the Sahara desert). We further show that the degree-1 variations play a major role in the observed improvement. At the same time, the usage of the C20 solutions obtained with the combination approach yields a similar accuracy of mass anomaly estimates, as compared to the results based on SLR analysis. The computed degree-1 and C20 coefficients will be made publicly available.
Variational Approach to Enhanced Sampling and Free Energy Calculations
NASA Astrophysics Data System (ADS)
Valsson, Omar; Parrinello, Michele
2014-08-01
The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented, which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
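In the published form of this approach, the functional of the bias V(s) over the collective variables s reads (p(s) is a preset target distribution and β the inverse temperature):

```latex
\Omega[V] = \frac{1}{\beta}\,
  \ln \frac{\displaystyle\int ds\; e^{-\beta\,[F(s) + V(s)]}}
           {\displaystyle\int ds\; e^{-\beta F(s)}}
  \;+\; \int ds\; p(s)\, V(s),
```

which is convex in V and is minimized by V(s) = -F(s) - (1/β) ln p(s), so the free energy surface F(s) is read off directly from the optimized bias.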
Global mapping of transposon location.
Gabriel, Abram; Dapprich, Johannes; Kunkel, Mark; Gresham, David; Pratt, Stephen C; Dunham, Maitreya J
2006-12-15
Transposable genetic elements are ubiquitous, yet their presence or absence at any given position within a genome can vary between individual cells, tissues, or strains. Transposable elements have profound impacts on host genomes by altering gene expression, assisting in genomic rearrangements, causing insertional mutations, and serving as sources of phenotypic variation. Characterizing a genome's full complement of transposons requires whole genome sequencing, precluding simple studies of the impact of transposition on interindividual variation. Here, we describe a global mapping approach for identifying transposon locations in any genome, using a combination of transposon-specific DNA extraction and microarray-based comparative hybridization analysis. We use this approach to map the repertoire of endogenous transposons in different laboratory strains of Saccharomyces cerevisiae and demonstrate that transposons are a source of extensive genomic variation. We also apply this method to mapping bacterial transposon insertion sites in a yeast genomic library. This unique whole genome view of transposon location will facilitate our exploration of transposon dynamics, as well as defining bases for individual differences and adaptive potential.
NASA Astrophysics Data System (ADS)
Still, C. J.; Griffith, D.; Edwards, E.; Forrestel, E.; Lehmann, C.; Anderson, M.; Craine, J.; Pau, S.; Osborne, C.
2014-12-01
Variation in plant species traits, such as photosynthetic and hydraulic properties, can indicate vulnerability or resilience to climate change, and feed back to broad-scale spatial and temporal patterns in biogeochemistry, demographics, and biogeography. Yet, predicting how vegetation will respond to future environmental changes is severely limited by the inability of our models to represent species-level trait variation in processes and properties, as current generation process-based models are mostly based on the generalized and abstracted concept of plant functional types (PFTs) which were originally developed for hydrological modeling. For example, there are close to 11,000 grass species, but most vegetation models have only a single C4 grass and one or two C3 grass PFTs. However, while species trait databases are expanding rapidly, they have been produced mostly from unstructured research, with a focus on easily researched traits that are not necessarily the most important for determining plant function. Additionally, implementing realistic species-level trait variation in models is challenging. Combining related and ecologically similar species in these models might ameliorate this limitation. Here we argue for an intermediate, lineage-based approach to PFTs, which draws upon recent advances in gene sequencing and phylogenetic modeling, and where trait complex variations and anatomical features are constrained by a shared evolutionary history. We provide an example of this approach with grass lineages that vary in photosynthetic pathway (C3 or C4) and other functional and structural traits. We use machine learning approaches and geospatial databases to infer the most important environmental controls and climate niche variation for the distribution of grass lineages, and utilize a rapidly expanding grass trait database to demonstrate examples of lineage-based grass PFTs. For example, grasses in the Andropogoneae are typically tall species that dominate wet and seasonally burned ecosystems, whereas Chloridoideae grasses are associated with semi-arid regions. These two C4 lineages are expected to respond quite differently to climate change, but are often modelled as a single PFT.
FROG - Fingerprinting Genomic Variation Ontology
Bhardwaj, Anshu
2015-01-01
Genetic variations play a crucial role in differential phenotypic outcomes. Given the complexity in establishing this correlation and the enormous data available today, it is imperative to design machine-readable, efficient methods to store, label, search and analyze this data. A semantic approach, FROG ("FingeRprinting Ontology of Genomic variations"), is implemented to label variation data based on its location, function and interactions. FROG has six levels to describe the variation annotation, namely, chromosome, DNA, RNA, protein, variations and interactions. Each level is a conceptual aggregation of logically connected attributes, each of which comprises various properties for the variant. For example, at the chromosome level, one of the attributes is the location of variation, which has two properties, allosomes or autosomes. Another attribute is variation kind, which has four properties, namely, indel, deletion, insertion, substitution. Likewise, there are 48 attributes and 278 properties to capture the variation annotation across six levels. Each property is then assigned a bit score which in turn leads to the generation of a binary fingerprint based on the combination of these properties (mostly taken from existing variation ontologies). FROG is a novel and unique method designed for the purpose of labeling the entire variation data generated till date for efficient storage, search and analysis. A web-based platform is designed as a test case for users to navigate sample datasets and generate fingerprints. The platform is available at http://ab-openlab.csir.res.in/frog. PMID:26244889
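A minimal sketch of the fingerprinting idea (illustrative only; the attribute and property names below are invented stand-ins, not FROG's actual 48 attributes and 278 properties) assigns each property one bit position and encodes a variant's annotation as a binary string:

```python
# Ordered (attribute, property) pairs; each pair owns one fingerprint bit.
SCHEMA = [
    ("chromosome.location", "allosome"),
    ("chromosome.location", "autosome"),
    ("variation.kind", "indel"),
    ("variation.kind", "deletion"),
    ("variation.kind", "insertion"),
    ("variation.kind", "substitution"),
]
BIT = {pair: i for i, pair in enumerate(SCHEMA)}

def fingerprint(annotations):
    """Binary fingerprint from a variant's (attribute, property) annotations."""
    bits = ["0"] * len(SCHEMA)
    for pair in annotations:
        bits[BIT[pair]] = "1"
    return "".join(bits)

# A hypothetical autosomal substitution:
print(fingerprint([("chromosome.location", "autosome"),
                   ("variation.kind", "substitution")]))  # -> 010001
```

Fingerprints in this form can be stored, searched, and compared with plain bitwise operations, which is the storage and search efficiency the approach targets.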
Effective Implementation of a Comprehension-Improvement Approach in Secondary Schools.
ERIC Educational Resources Information Center
Levine, Daniel U.; Sherk, John K.
This report describes in depth the implementation and impact of instructional strategies to improve students' comprehension skills at three diverse urban secondary schools. While activities and characteristics varied, educators at all three locations were implementing local variations of a school-improvement approach based on the use of the…
2015-07-01
[Fragmentary excerpt; only figure-caption text survives: Fig. 5 shows the variation of the root-mean-square (RMS) displacement of the protein's center of mass, which depends on the temperature; global motion is examined by analyzing the RMS displacement of the center of mass and other global quantities during the simulation, including the energy of each residue, its mobility, and its mean-square displacement.]
Griffiths, Jason I.; Fronhofer, Emanuel A.; Garnier, Aurélie; Seymour, Mathew; Altermatt, Florian; Petchey, Owen L.
2017-01-01
The development of video-based monitoring methods allows for rapid, dynamic and accurate monitoring of individuals or communities, compared to slower traditional methods, with far-reaching ecological and evolutionary applications. Large amounts of data are generated using video-based methods, which can be effectively processed using machine learning (ML) algorithms into meaningful ecological information. ML uses user-defined classes (e.g. species), derived from a subset (i.e. training data) of video-observed quantitative features (e.g. phenotypic variation), to infer classes in subsequent observations. However, phenotypic variation often changes due to environmental conditions, which may lead to poor classification if environmentally induced variation in phenotypes is not accounted for. Here we describe a framework for classifying species under changing environmental conditions based on random forest classification. A sliding window approach was developed that restricts the temporal and environmental conditions of the training data to improve the classification. We tested our approach by applying the classification framework to experimental data. The experiment used a set of six ciliate species to monitor changes in community structure and behavior over hundreds of generations, in dozens of species combinations and across a temperature gradient. Differences in biotic and abiotic conditions caused simplistic classification approaches to be unsuccessful. In contrast, the sliding window approach allowed classification to be highly successful, as phenotypic differences driven by environmental change could be captured by the classifier. Importantly, classification using the random forest algorithm showed comparable success when validated against traditional, slower, manual identification. Our framework allows for reliable classification in dynamic environments, and may help to improve strategies for long-term monitoring of species in changing environments. Our classification pipeline can be applied in fields assessing species community dynamics, such as eco-toxicology, ecology and evolutionary ecology. PMID:28472193
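A minimal sketch of the sliding-window step (illustrative; the feature matrix, time stamps, and window half-width are hypothetical) trains a random forest only on reference observations whose conditions fall within a window around each new observation, so environmentally driven phenotype drift is tracked:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_with_window(train_X, train_y, train_t, test_X, test_t,
                         half_window=2.0):
    """Predict a class for each test row using a forest fit only on
    training rows whose time stamp lies within +/- half_window of the
    test row's time stamp (one fit per unique test time, for speed).
    Assumes NumPy arrays and that every window contains training data."""
    preds = np.empty(len(test_X), dtype=train_y.dtype)
    for t in np.unique(test_t):
        window = np.abs(train_t - t) <= half_window
        rf = RandomForestClassifier(n_estimators=100, random_state=0)
        rf.fit(train_X[window], train_y[window])
        sel = test_t == t
        preds[sel] = rf.predict(test_X[sel])
    return preds
```

Restricting the training window along a temperature axis instead of time gives the environmental variant of the same idea.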
Jayachandran, Devaraj; Laínez-Aguirre, José; Rundell, Ann; Vik, Terry; Hannemann, Robert; Reklaitis, Gintaras; Ramkrishna, Doraiswami
2015-01-01
6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite 6-thioguanine nucleotide (6-TGN) through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces a significant variation in drug response among the patient population. Despite 6-MP's widespread use and observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to pharmacogenomics to predict clinical responses are proving far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of scarce data in clinical settings, a model reduction approach based on global sensitivity analysis is used to minimize the parameter space. For accurate estimation of sensitive parameters, robust optimal experimental design based on the D-optimality criterion was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose scheduling with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e. gene expression-enzyme phenotype-drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient's ability to metabolize the drug instead of the traditional standard-dose-for-all approach. PMID:26226448
The effect of individually-induced processes on image-based overlay and diffraction-based overlay
NASA Astrophysics Data System (ADS)
Oh, SeungHwa; Lee, Jeongjin; Lee, Seungyoon; Hwang, Chan; Choi, Gilheyun; Kang, Ho-Kyu; Jung, EunSeung
2014-04-01
In this paper, a set of wafers with separated processes was prepared, and overlay measurement results were compared for two methods: IBO and DBO. Based on the experimental results, a theoretical approach to the relationship between overlay mark deformation and overlay variation is presented. Moreover, overlay reading simulation was used to verify and predict overlay variation due to deformation of the overlay mark caused by the induced processes. This study provides an understanding of the effects of individual processes on overlay measurement error. Additionally, a guideline for selecting the proper overlay measurement scheme for a specific layer is presented.
Moore, Jason H; Boczko, Erik M; Summar, Marshall L
2005-02-01
Understanding how DNA sequence variations impact human health through a hierarchy of biochemical and physiological systems is expected to improve the diagnosis, prevention, and treatment of common, complex human diseases. We have previously developed a hierarchical dynamic systems approach based on Petri nets for generating biochemical network models that are consistent with genetic models of disease susceptibility. This modeling approach uses an evolutionary computation approach called grammatical evolution as a search strategy for optimal Petri net models. We have previously demonstrated that this approach routinely identifies biochemical network models that are consistent with a variety of genetic models in which disease susceptibility is determined by nonlinear interactions between two or more DNA sequence variations. We review here this approach and then discuss how it can be used to model biochemical and metabolic data in the context of genetic studies of human disease susceptibility.
Vogeler, Iris; Mackay, Alec; Vibart, Ronaldo; Rendel, John; Beautrais, Josef; Dennis, Samuel
2016-09-15
Farm system and nutrient budget models are increasingly being used in analysis to inform on-farm decision making and evaluate land use policy options at regional scales. These analyses are generally based on the use of average annual pasture yields. In New Zealand (NZ), as in many countries, there is considerable inter-annual variation in pasture growth rates due to climate. In this study a modelling approach was used to (i) include inter-annual variability as an integral part of the analysis and (ii) test the approach in an economic analysis of irrigation in a case study within the Hawkes Bay Region of New Zealand. The Agricultural Production Systems Simulator (APSIM) was used to generate pasture dry matter yields (DMY) for 20 different years under both dryland and irrigation. The generated DMY were linked to outputs from farm-scale modelling for both Sheep and Beef Systems (Farmax Pro) and Dairy Systems (Farmax® Dairy Pro) to calculate farm production over the 20 years. Variation in DMY and associated livestock production due to inter-annual variation in climate was large, with coefficients of variation of up to 20%. Irrigation decreased this inter-annual variation. On average, irrigation with unlimited available water increased income by $831 to $1195/ha, but when irrigation was limited to 250 mm/ha/year, income increased by only $525 to $883/ha. Using pasture responses in individual years to capture the inter-annual variation, rather than the pasture response averaged over 20 years, resulted in lower financial benefits. In the case study, income from irrigation based on an average year was 10 to >20% higher than that obtained from individual years. Copyright © 2016 Elsevier B.V. All rights reserved.
Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.; ...
2017-01-03
Over the last decade or so, reconstruction methods using ℓ1 regularization, often categorized as compressed sensing (CS) algorithms, have significantly improved the capabilities of high fidelity imaging in electron tomography. The most popular ℓ1 regularization approach within electron tomography has been total variation (TV) regularization. In addition to reducing unwanted noise, TV regularization encourages a piecewise constant solution with sparse boundary regions. In this paper we propose an alternative ℓ1 regularization approach for electron tomography based on higher order total variation (HOTV). Like TV, the HOTV approach promotes solutions with sparse boundary regions. In smooth regions, however, the solution is not limited to piecewise constant behavior. We demonstrate that this allows for more accurate reconstruction of a broader class of images – even those for which TV was designed – particularly when dealing with pragmatic tomographic sampling patterns and very fine image features. In conclusion, we develop results for an electron tomography data set as well as a phantom example, and we also make comparisons with discrete tomography approaches.
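In generic notation, the HOTV reconstruction swaps the first-order difference operator of TV for a kth-order one (a schematic statement of the approach, with A the tomographic projection operator and b the measured data):

```latex
\hat{u} \;=\; \arg\min_{u}\; \tfrac{1}{2}\,\lVert A u - b \rVert_2^2
  \;+\; \lambda\,\lVert D^{(k)} u \rVert_1 ,
```

where k = 1 recovers classical TV (sparse gradients, hence piecewise constant solutions), while k = 2, 3 penalize higher-order differences, permitting piecewise polynomial behavior in smooth regions while keeping boundary regions sparse.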
Natural Allelic Variations in Highly Polyploid Saccharum Complex
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Jian; Yang, Xiping; Resende, Jr., Marcio F. R.
2016-06-08
Sugarcane (Saccharum spp.) is an important sugar and biofuel crop with highly polyploid and complex genomes. The Saccharum complex, comprising the Saccharum genus and a few related genera, is an important genetic resource for sugarcane breeding. A large amount of natural variation exists within the Saccharum complex. Though understanding their allelic variation has been challenging, it is critical to dissect allelic structure and to identify the alleles controlling important traits in sugarcane. To characterize natural variations in the Saccharum complex, a target enrichment sequencing approach was used to assay 12 representative germplasm accessions. In total, 55,946 highly efficient probes were designed based on the sorghum genome and the sugarcane unigene set, targeting a total of 6 Mb of the sugarcane genome. A pipeline specifically tailored for polyploid sequence variant and genotype calling was established. BWA-MEM and the sorghum genome proved to be an acceptable aligner and reference, respectively, for sugarcane target enrichment sequence analysis. Genetic variations including 1,166,066 non-redundant SNPs, 150,421 InDels, 919 gene copy number variations, and 1,257 gene presence/absence variations were detected. SNPs from three different callers (Samtools, Freebayes, and GATK) were compared and the validation rates were nearly 90%. Based on the SNP loci of each accession and their ploidy levels, 999,258 single dosage SNPs were identified and most loci were estimated to be largely homozygous. An average of 34,397 haplotype blocks per accession was inferred. The highest divergence time among the Saccharum spp. was estimated at 1.2 million years ago (MYA). Saccharum spp. diverged from Erianthus and Sorghum approximately 5 and 6 MYA, respectively. Furthermore, the target enrichment sequencing approach provided an effective way to discover and catalog natural allelic variation in highly polyploid or heterozygous genomes.
Natural Allelic Variations in Highly Polyploidy Saccharum Complex
Song, Jian; Yang, Xiping; Resende, Jr., Marcio F. R.; ...
2016-06-08
Sugarcane (Saccharum spp.) is an important sugar and biofuel crop with highly polyploid and complex genomes. The Saccharum complex, comprising the Saccharum genus and a few related genera, is an important genetic resource for sugarcane breeding. A large amount of natural variation exists within the Saccharum complex. Although characterizing this allelic variation has been challenging, it is critical for dissecting allelic structure and identifying the alleles controlling important traits in sugarcane. To characterize natural variations in the Saccharum complex, a target enrichment sequencing approach was used to assay 12 representative germplasm accessions. In total, 55,946 highly efficient probes were designed based on the sorghum genome and the sugarcane unigene set, targeting a total of 6 Mb of the sugarcane genome. A pipeline specifically tailored for polyploid sequence variant and genotype calling was established. BWA-MEM and the sorghum genome proved to be an acceptable aligner and reference, respectively, for sugarcane target enrichment sequence analysis. Genetic variations including 1,166,066 non-redundant SNPs, 150,421 InDels, 919 gene copy number variations, and 1,257 gene presence/absence variations were detected. SNPs from three different callers (Samtools, Freebayes, and GATK) were compared and the validation rates were nearly 90%. Based on the SNP loci of each accession and their ploidy levels, 999,258 single dosage SNPs were identified and most loci were estimated to be largely homozygous. An average of 34,397 haplotype blocks per accession was inferred. The highest divergence time among the Saccharum spp. was estimated at 1.2 million years ago (MYA). Saccharum spp. diverged from Erianthus and Sorghum approximately 5 and 6 MYA, respectively. The target enrichment sequencing approach thus provided an effective way to discover and catalog natural allelic variation in highly polyploid or heterozygous genomes.
NASA Astrophysics Data System (ADS)
Asahi, H.; Nam, S. I.; Stein, R. H.; Mackensen, A.; Son, Y. J.
2017-12-01
The usability of planktic foraminiferal census data in Arctic paleoceanography is limited by the predominance of Neogloboquadrina pachyderma (sinistral). Although recent studies have suggested that their morphological variation is potentially useful, its application has been restricted to the central Arctic Ocean. Here we present their regional distribution, using 80 surface sediment samples from the central and western Arctic Ocean. Among seven morphological variations encountered, the distinct presence of "large-sized" N. pachyderma morphotypes at the summer sea-ice edge in the western Arctic demonstrates strong potential as a sea-ice distribution indicator. Based on their regional patterns, we further developed planktic foraminifer (PF)-based transfer functions (TFs) to reconstruct summer surface-water temperature, salinity, and sea-ice concentration in the western and central Arctic. Comparison of the PF-based sea-ice reconstructions with pre-existing approaches showed their respective advantages and disadvantages: the PF-based approach performs best near or within heavily ice-covered regions, the dinocyst-based approach in regions of extensive seasonal ice retreat, and the IP25-based approach reflects a wide range of sea-ice coverage overall; these differences are likely attributable to (a) taphonomic information loss, (b) different seasonal production patterns, or a combination of both. The application of these TFs to a sediment core from Northwind Ridge suggests general warming, freshening, and sea-ice reduction after 6.0 ka. This generally agrees with PF stable isotope records and sea-ice reconstructions from the dinocyst-based TF at proximal locations, indicating that the sea-ice behavior at Northwind Ridge is notably different from the IP25-based sea-ice reconstructions reported from elsewhere in the Arctic Ocean. The lack of regional coverage of PF-based reconstructions still hampers assessment of whether the observed inconsistency is simply caused by the different regional coverage of the data and/or by different sensitivities of the proxies. Additional PF census data and isotope signatures from cores in different ice regimes of the Arctic Ocean (e.g., the Lomonosov and Mendeleev Ridges) will therefore allow further examination of this inconsistency.
Effective Implementation of a Comprehension-Improvement Approach in Secondary Schools. Summary.
ERIC Educational Resources Information Center
Levine, Daniel U.; Sherk, John K.
This document summarizes a report on the implementation and impact of instructional strategies to improve students' comprehension skills at three diverse urban secondary schools. While activities and characteristics varied, educators at all three locations were implementing local variations of a school-improvement approach based on the use of the…
Bannister-Tyrrell, Melanie; Williams, Craig; Ritchie, Scott A.; Rau, Gina; Lindesay, Janette; Mercer, Geoff; Harley, David
2013-01-01
The impact of weather variation on dengue transmission in Cairns, Australia, was determined by applying a process-based dengue simulation model (DENSiM) that incorporated local meteorologic, entomologic, and demographic data. Analysis showed that inter-annual weather variation is one of the significant determinants of dengue outbreak receptivity. Cross-correlation analyses showed that DENSiM simulated epidemics of similar relative magnitude and timing to those historically recorded in reported dengue cases in Cairns during 1991–2009 (r = 0.372, P < 0.01). The DENSiM model can now be used to study the potential impacts of future climate change on dengue transmission. Understanding the impact of climate variation on the geographic range, seasonality, and magnitude of dengue transmission will enhance development of adaptation strategies to minimize future disease burden in Australia. PMID:23166197
NASA Astrophysics Data System (ADS)
Kim, Sungho
2017-06-01
Automatic target recognition (ATR) is a traditionally challenging problem in military applications because of the wide range of infrared (IR) image variations and the limited number of training images. IR variations are caused by various three-dimensional target poses, noncooperative weather conditions (fog and rain), and difficult target acquisition environments. Recently, deep convolutional neural network-based approaches for RGB images (RGB-CNN) showed breakthrough performance in computer vision problems, such as object detection and classification. Direct application of RGB-CNN to the IR ATR problem fails because of the IR database problems (limited database size and IR image variations). An IR variation-reduced deep CNN (IVR-CNN) is presented to cope with these problems. The problem of limited IR database size is solved by a commercial thermal simulator (OKTAL-SE). The second problem, IR variations, is mitigated by the proposed shifted ramp function-based intensity transformation, which can suppress the background and enhance the target contrast simultaneously. The experimental results on the synthesized IR images generated by the thermal simulator (OKTAL-SE) validated the feasibility of IVR-CNN for military ATR applications.
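The abstract does not give the exact parameterization of the shifted ramp transformation, so the mapping below is an assumption: one plausible ramp that clips intensities below a shift point (background suppression) and linearly stretches those above it (target contrast enhancement). All image values are invented:

```python
import numpy as np

def shifted_ramp_transform(img, shift, slope=4.0):
    """Hypothetical shifted-ramp intensity mapping (the paper's exact
    form is not given here): values below `shift` collapse toward zero,
    values above it are linearly stretched, then clipped to [0, 1]."""
    out = slope * (img.astype(np.float32) - shift)
    return np.clip(out, 0.0, 1.0)

# Toy IR frame in [0, 1]: dim noisy background plus a brighter target patch.
rng = np.random.default_rng(0)
frame = 0.3 + 0.05 * rng.standard_normal((64, 64))
frame[28:36, 28:36] += 0.25

enhanced = shifted_ramp_transform(frame, shift=0.4)
print("background mean:", enhanced[:10, :10].mean())    # ~0 after suppression
print("target mean:    ", enhanced[28:36, 28:36].mean())  # stretched upward
```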
Cisneros, Laura M; Fagan, Matthew E; Willig, Michael R
2016-01-01
Assembly of species into communities following human disturbance (e.g., deforestation, fragmentation) may be governed by spatial (e.g., dispersal) or environmental (e.g., niche partitioning) mechanisms. Variation partitioning has been used to broadly disentangle spatial and environmental mechanisms, and approaches utilizing functional and phylogenetic characteristics of communities have been implemented to determine the relative importance of particular environmental (or niche-based) mechanisms. Nonetheless, few studies have integrated these quantitative approaches to comprehensively assess the relative importance of particular structuring processes. We employed a novel variation partitioning approach to evaluate the relative importance of particular spatial and environmental drivers of taxonomic, functional, and phylogenetic aspects of bat communities in a human-modified landscape in Costa Rica. Specifically, we estimated the amount of variation in species composition (taxonomic structure) and in two aspects of functional and phylogenetic structure (i.e., composition and dispersion) along a forest loss and fragmentation gradient that are uniquely explained by landscape characteristics (i.e., environment) or space to assess the importance of competing mechanisms. The unique effects of space on taxonomic, functional and phylogenetic structure were consistently small. In contrast, landscape characteristics (i.e., environment) played an appreciable role in structuring bat communities. Spatially-structured landscape characteristics explained 84% of the variation in functional or phylogenetic dispersion, and the unique effects of landscape characteristics significantly explained 14% of the variation in species composition. Furthermore, variation in bat community structure was primarily due to differences in dispersion of species within functional or phylogenetic space along the gradient, rather than due to differences in functional or phylogenetic composition. Variation among bat communities was related to environmental mechanisms, especially niche-based (i.e., environmental) processes, rather than spatial mechanisms. High variation in functional or phylogenetic dispersion, as opposed to functional or phylogenetic composition, suggests that loss or gain of niche space is driving the progressive loss or gain of species with particular traits from communities along the human-modified gradient. Thus, environmental characteristics associated with landscape structure influence functional or phylogenetic aspects of bat communities by effectively altering the ways in which species partition niche space.
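The variation partitioning used in this study rests on comparing explained variance (R²) from regressions on environment alone, space alone, and both together; the unique and shared fractions then fall out by subtraction. A minimal sketch with synthetic data, where the predictor structure and effect sizes are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def r2(X, y):
    """Variance in y explained by a linear model on predictors X."""
    return LinearRegression().fit(X, y).score(X, y)

# Hypothetical data: y = a community attribute (e.g., functional dispersion),
# E = landscape/environment predictors, S = spatial predictors.
rng = np.random.default_rng(42)
n = 120
S = rng.standard_normal((n, 3))
E = 0.6 * S[:, :1] + rng.standard_normal((n, 4))   # partly spatially structured
y = E @ np.array([0.8, 0.3, 0.0, 0.1]) + 0.2 * rng.standard_normal(n)

r2_E, r2_S = r2(E, y), r2(S, y)
r2_ES = r2(np.hstack([E, S]), y)
unique_E = r2_ES - r2_S       # environment independent of space
unique_S = r2_ES - r2_E       # space independent of environment
shared = r2_E + r2_S - r2_ES  # spatially structured environment (can be ~0)
print(f"unique env: {unique_E:.2f}, unique space: {unique_S:.2f}, "
      f"shared: {shared:.2f}")
```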
NASA Astrophysics Data System (ADS)
Saturnino, Diana; Langlais, Benoit; Amit, Hagay; Mandea, Mioara; Civet, François; Beucler, Éric
2017-04-01
A complete description of the main geomagnetic field temporal variation is crucial to understand dynamics in the core. This variation, termed secular variation (SV), is known with high accuracy at ground magnetic observatory locations. However, the description of its spatial variability is hampered by the globally uneven distribution of the observatories. For the past two decades, satellites have provided global coverage of the field changes. Their surveys of the geomagnetic field have been used to derive and improve global spherical harmonic (SH) models through strict data selection schemes that minimise external field contributions. But discrepancies remain between ground measurements and field predictions by these models. Indeed, the global models do not reproduce small spatial scales of the field temporal variations. To overcome this problem we propose a modified Virtual Observatory (VO) approach by defining a globally homogeneous mesh of VOs at satellite altitude. With this approach we directly extract time series of the field and its temporal variation from satellite measurements, as is done at observatory locations. As satellite measurements are acquired at different altitudes, a correction for altitude is needed. We therefore apply an Equivalent Source Dipole (ESD) technique for each VO and each given time interval to reduce all measurements to a unique location, leading to time series similar to those available at ground magnetic observatories. Synthetic data are first used to validate the new VO-ESD approach. Then, we apply our scheme to measurements from the Swarm mission. For the first time, a global mesh of VO time series at 2.5 degree resolution is built. The VO-ESD derived time series are locally compared to ground observations as well as to satellite-based model predictions. The approach is able to describe detailed temporal variations of the field at local scales. The VO-ESD time series are also used to derive global SH models. Without regularization, these models describe the secular trend of the magnetic field well. As more data become available, the derivation of longer VO-ESD time series will allow the study of temporal variation features such as geomagnetic jerks.
Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls
NASA Astrophysics Data System (ADS)
Guha Ray, A.; Baidya, D. K.
2012-09-01
Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall highlights the fact that high sensitivity of a particular variable on a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (R_f) for each random variable based on the combined effects of the failure probability (P_f) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables on these failure modes. P_f is calculated by Monte Carlo simulation, and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that R_f for the friction angle of the backfill soil (φ_1) increases, and that for the cohesion of the foundation soil (c_2) decreases, with an increase in the variation of φ_1, while R_f for the unit weights (γ_1 and γ_2) of both soils and for the friction angle of the foundation soil (φ_2) remains almost constant under variation of the soil properties. The results compared well with some existing deterministic and probabilistic methods, and the approach was found to be cost-effective. It is seen that if the variation of φ_1 remains within 5%, a significant reduction in cross-sectional area can be achieved, but if the variation is more than 7–8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
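The failure-probability half of this procedure is straightforward to sketch. The limit state, parameter distributions, and wall geometry below are purely illustrative (a single sliding mode with Rankine active pressure), not the paper's design cases:

```python
import numpy as np

# Hypothetical sliding-mode check for a gravity retaining wall.
rng = np.random.default_rng(7)
n = 200_000
phi1 = np.deg2rad(rng.normal(34.0, 2.0, n))   # backfill friction angle (rad)
gamma1 = rng.normal(18.0, 0.9, n)             # backfill unit weight (kN/m^3)
c2 = rng.normal(10.0, 2.5, n)                 # foundation cohesion (kPa)

Ka = np.tan(np.pi / 4 - phi1 / 2) ** 2        # Rankine active earth coefficient
H, W = 5.0, 110.0                             # wall height (m), weight (kN/m)
driving = 0.5 * Ka * gamma1 * H**2            # active thrust per metre of wall
resisting = W * np.tan(np.deg2rad(28.0)) + c2 * 3.0  # base friction + cohesion

# Failure probability for the sliding mode: fraction of samples where
# the driving force exceeds the resistance.
Pf = np.mean(driving > resisting)
print(f"P_f (sliding) ~ {Pf:.4f}")
```

Repeating this for each failure mode, and crossing the resulting P_f values with per-variable sensitivities (F-tests in the paper), is what yields the risk factors R_f.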
Tapio, I; Värv, S; Bennewitz, J; Maleviciute, J; Fimland, E; Grislis, Z; Meuwissen, T H E; Miceikiene, I; Olsaker, I; Viinalass, H; Vilkki, J; Kantanen, J
2006-12-01
Northern European indigenous cattle breeds are currently endangered and at risk of becoming extinct. We analyzed variation at 20 microsatellite loci in 23 indigenous, 3 old imported, and 9 modern commercial cattle breeds that are presently distributed in northern Europe. We measured the breeds' allelic richness and heterozygosity, and studied their genetic relationships with a neighbor-joining tree based on the Chord genetic distance matrix. We used the Weitzman approach and the core set diversity measure of Eding et al. (2002) to quantify the contribution of each breed to the maximum amount of genetic diversity and to identify breeds important for the conservation of genetic diversity. We defined a "safe set" of 11 breeds that are not endangered and estimated the reduction in genetic diversity if all nonsafe (endangered) breeds were lost. We then calculated the increase in genetic diversity obtained by adding each of the nonsafe breeds, one by one, to the safe set (the safe-set-plus-one approach). The neighbor-joining tree grouped the northern European cattle breeds into Black-and-White type, Baltic Red, and Nordic cattle groups. Väne cattle, Bohus Poll, and Danish Jersey had the highest relative contributions to the maximum amount of genetic diversity when diversity was quantified by the Weitzman measure. These breeds not only showed phylogenetic distinctiveness but also low within-population variation. When the Eding et al. method was applied, Eastern Finncattle and Lithuanian White Backed cattle contributed the most genetic variation. If the nonsafe set of breeds were lost, the reduction in genetic diversity would be substantial (72%) based on the Weitzman approach, but relatively small (1.81%) based on the Eding et al. method. The safe set contained only 66% of the observed microsatellite alleles. The safe-set-plus-one approach indicated that Bohus Poll and Väne cattle contributed most to the Weitzman diversity, whereas the Eastern Finncattle contribution was the highest according to the Eding et al. method. Our results indicate that both the Weitzman and Eding et al. methods recognize the importance of local populations as a valuable resource of genetic variation.
Scaling up functional traits for ecosystem services with remote sensing: concepts and methods.
Abelleira Martínez, Oscar J; Fremier, Alexander K; Günter, Sven; Ramos Bendaña, Zayra; Vierling, Lee; Galbraith, Sara M; Bosque-Pérez, Nilsa A; Ordoñez, Jenny C
2016-07-01
Ecosystem service-based management requires an accurate understanding of how human modification influences ecosystem processes, and these relationships are most accurate when based on functional traits. Although trait variation is typically sampled at local scales, remote sensing methods can facilitate scaling up trait variation to the regional scales needed for ecosystem service management. We review concepts and methods for scaling up plant and animal functional traits from local to regional spatial scales with the goal of assessing impacts of human modification on ecosystem processes and services. We focus our objectives on considerations and approaches for (1) conducting local plot-level sampling of trait variation and (2) scaling up trait variation to regional spatial scales using remotely sensed data. We show that sampling methods for scaling up traits need to account for the modification of trait variation due to land cover change and species introductions. Sampling intraspecific variation, stratification by land cover type or landscape context, or inference of traits from published sources may be necessary depending on the traits of interest. Passive and active remote sensing are useful for mapping plant phenological, chemical, and structural traits, and combining these methods can significantly improve their capacity for mapping plant trait variation. These methods can also be used to map landscape and vegetation structure in order to infer animal trait variation. Due to high context dependency, relationships between trait variation and remotely sensed data are not directly transferable across regions. We end our review with a brief synthesis of issues to consider and an outlook for the development of these approaches. Research is needed that relates typical functional trait metrics, such as the community-weighted mean, to remote sensing data, and that relates variation in traits that cannot be remotely sensed to other proxies. Our review narrows the gap between functional trait and remote sensing methods for ecosystem service management.
NASA Astrophysics Data System (ADS)
Nicgorski, Dana; Avitabile, Peter
2010-07-01
Frequency-based substructuring is a very popular approach for the generation of system models from component measured data. Analytically, the approach has been shown to produce accurate results. However, implementation with actual test data can be difficult and can compromise the predicted system response. In order to produce good results, extreme care is needed in the measurement of the drive point and transfer impedances of the structure, as well as in observing all the conditions for a linear time-invariant system. Several studies have been conducted to show the sensitivity of the technique to small variations that often occur during typical testing of structures. These variations have been observed in actual tested configurations and have been substantiated with analytical models built to replicate the problems typically encountered. The use of analytically simulated issues helps to clearly isolate the effects of typical measurement difficulties often observed in test data. This paper presents some of these common problems and provides guidance and recommendations for data to be used with this modeling approach.
NASA Astrophysics Data System (ADS)
Gallup, G. A.; Gerratt, J.
1985-09-01
The van der Waals energy between the two parts of a system is a very small fraction of the total electronic energy. In such cases, calculations have been based on perturbation theory. However, such an approach involves certain difficulties. For this reason, van der Waals energies have also been directly calculated from total energies. But such a method has definite limitations as to the size of systems which can be treated, and recently ab initio calculations have been combined with damped semiempirical long-range dispersion potentials to treat larger systems. In this procedure, large basis set superposition errors occur, which must be removed by the counterpoise method. The present investigation is concerned with an approach which is intermediate between the previously considered procedures. The first step in the new approach involves a variational calculation based upon valence bond functions. The procedure includes also the optimization of excited orbitals, and an approximation of atomic integrals and Hamiltonian matrix elements.
Mertens, Franz G.; Cooper, Fred; Arevalo, Edward; ...
2016-09-15
In this paper, we discuss the behavior of solitary wave solutions of the nonlinear Schrödinger equation (NLSE) as they interact with complex potentials, using a four-parameter variational approximation based on a dissipation functional formulation of the dynamics. We concentrate on spatially periodic potentials with the periods of the real and imaginary parts being either the same or different. Our results for the time evolution of the collective coordinates of our variational ansatz are in good agreement with direct numerical simulation of the NLSE. We compare our method with a collective coordinate approach of Kominis and give examples where the two methods give qualitatively different answers. In our variational approach, we are able to give analytic results for the small oscillation frequency of the solitary wave's oscillating parameters which agree with the numerical solution of the collective coordinate equations. We also verify that instabilities set in when the slope dp(t)/dv(t) becomes negative when plotted parametrically as a function of time, where p(t) is the momentum of the solitary wave and v(t) its velocity.
Variation in reaction norms: Statistical considerations and biological interpretation.
Morrissey, Michael B; Liefting, Maartje
2016-09-01
Analysis of reaction norms, the functions by which the phenotype produced by a given genotype depends on the environment, is critical to studying many aspects of phenotypic evolution. Different techniques are available for quantifying different aspects of reaction norm variation. We examine what biological inferences can be drawn from some of the more readily applicable analyses for studying reaction norms. We adopt a strongly biologically motivated view, but draw on statistical theory to highlight strengths and drawbacks of different techniques. In particular, consideration of some formal statistical theory leads to revision of some recently, and forcefully, advocated opinions on reaction norm analysis. We clarify what simple analysis of the slope between mean phenotype in two environments can tell us about reaction norms, explore the conditions under which polynomial regression can provide robust inferences about reaction norm shape, and explore how different existing approaches may be used to draw inferences about variation in reaction norm shape. We show how mixed model-based approaches can provide more robust inferences than more commonly used multistep statistical approaches, and derive new metrics of the relative importance of variation in reaction norm intercepts, slopes, and curvatures. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
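One of the readily applicable analyses discussed here is polynomial regression of phenotype on environment, whose coefficients map onto reaction norm intercept (elevation), slope (plasticity), and curvature (shape). A minimal sketch with invented data for a single genotype:

```python
import numpy as np

# Hypothetical reaction-norm data: phenotype of one genotype measured
# across an environmental gradient (values illustrative).
rng = np.random.default_rng(3)
env = np.linspace(-2.0, 2.0, 25)
phenotype = 1.0 + 0.5 * env - 0.2 * env**2 + 0.1 * rng.standard_normal(env.size)

# Quadratic fit: coefficients are returned highest degree first.
a2, a1, a0 = np.polyfit(env, phenotype, deg=2)
# Intercept = value at env 0, slope = first derivative there,
# curvature = second derivative (2 * a2).
print(f"intercept={a0:.2f}, slope={a1:.2f}, curvature={2 * a2:.2f}")
```

The mixed-model approaches the paper advocates would instead fit these intercept, slope, and curvature terms as random effects across genotypes, yielding variances (and covariances) of the three components in a single model.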
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hovel, Harold; Prettyman, Kevin
A side-by-side analysis was performed on the then-current technology options, along with roadmaps to push each particular option forward. Variations in turnkey line processes can and do result in variations in finished solar device performance. Together with variations in starting material quality, the result is a distribution of efficiencies. Forensic analysis and characterization of each crystalline-Si-based technology will determine the most promising approach with respect to cost, efficiency, and reliability. Forensic analysis will also shed light on the causes of binning variations. Si solar cells from each turnkey supplier were forensically analyzed using a host of techniques.
Turner, Thomas L.; Stewart, Andrew D.; Fields, Andrew T.; Rice, William R.; Tarone, Aaron M.
2011-01-01
Body size is a classic quantitative trait with evolutionarily significant variation within many species. Locating the alleles responsible for this variation would help understand the maintenance of variation in body size in particular, as well as quantitative traits in general. However, successful genome-wide association of genotype and phenotype may require very large sample sizes if alleles have low population frequencies or modest effects. As a complementary approach, we propose that population-based resequencing of experimentally evolved populations allows for considerable power to map functional variation. Here, we use this technique to investigate the genetic basis of natural variation in body size in Drosophila melanogaster. Significant differentiation of hundreds of loci in replicate selection populations supports the hypothesis that the genetic basis of body size variation is very polygenic in D. melanogaster. Significantly differentiated variants are limited to single genes at some loci, allowing precise hypotheses to be formed regarding causal polymorphisms, while other significant regions are large and contain many genes. By using significantly associated polymorphisms as a priori candidates in follow-up studies, these data are expected to provide considerable power to determine the genetic basis of natural variation in body size. PMID:21437274
Efficient Mean Field Variational Algorithm for Data Assimilation (Invited)
NASA Astrophysics Data System (ADS)
Vrettas, M. D.; Cornford, D.; Opper, M.
2013-12-01
Data assimilation algorithms combine available observations of physical systems with the assumed model dynamics in a systematic manner, to produce better estimates of initial conditions for prediction. Broadly, they can be categorized into three main approaches: (a) sequential algorithms, (b) sampling methods, and (c) variational algorithms, which transform the density estimation problem into an optimization problem. However, given finite computational resources, only a handful of ensemble Kalman filters and 4DVar algorithms have been applied operationally to very high dimensional geophysical applications, such as weather forecasting. In this paper we present a recent extension to our variational Bayesian algorithm which seeks the 'optimal' posterior distribution over the continuous time states, within a family of non-stationary Gaussian processes. Our initial work on variational Bayesian approaches to data assimilation, unlike the well-known 4DVar method which seeks only the most probable solution, computes the best time-varying Gaussian process approximation to the posterior smoothing distribution for dynamical systems that can be represented by stochastic differential equations. This approach was based on minimising the Kullback-Leibler divergence, over paths, between the true posterior and our Gaussian process approximation. While the observations were informative enough to keep the posterior smoothing density close to Gaussian, the algorithm proved very effective on low-dimensional systems (e.g. O(10)D). However, for higher dimensional systems, the high computational demands make the algorithm prohibitively expensive. To overcome the difficulties presented in the original framework and make our approach more efficient in higher dimensional systems, we have been developing a new mean field version of the algorithm which treats the state variables at any given time as being independent in the posterior approximation, while still accounting for their relationships in the mean solution arising from the original system dynamics. Here we present this new mean field approach, illustrating its performance on a range of benchmark data assimilation problems whose dimensionality varies from O(10) to O(10^3)D. We emphasise that the variational Bayesian approach we adopt, unlike other variational approaches, provides a natural bound on the marginal likelihood of the observations given the model parameters, which also allows for inference of (hyper-)parameters such as observational errors, parameters in the dynamical model, and the model error representation. We also stress that since our approach is intrinsically parallel it can be implemented very efficiently to address very long data assimilation time windows. Moreover, like most traditional variational approaches, our Bayesian variational method has the benefit of being posed as an optimisation problem, so its complexity can be tuned to the available computational resources. We finish with a sketch of possible future directions.
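The cost of the mean field assumption can be illustrated in closed form: for a Gaussian posterior, the fully factorized q that minimizes KL(q||p) matches the means exactly but takes its variances from the diagonal of the precision matrix, underestimating the true marginal variances whenever states are correlated. A toy 2-D sketch, with an invented covariance, not one of the paper's benchmark systems:

```python
import numpy as np

# Toy 2-D Gaussian "posterior" with correlated state variables.
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])           # true posterior covariance
Lambda = np.linalg.inv(Sigma)            # precision matrix

# Best mean-field (factorized Gaussian) approximation under KL(q || p):
# each factor's variance is the reciprocal of the precision diagonal.
q_var = 1.0 / np.diag(Lambda)
print("true marginal variances:", np.diag(Sigma))   # [1.0, 1.0]
print("mean-field variances:   ", q_var)            # [0.36, 0.36]
```

This variance shrinkage (here 1.0 down to 1 − 0.8² = 0.36) is the price of treating state variables as independent; the mean solution, by contrast, is recovered exactly, consistent with the abstract's emphasis on retaining relationships in the mean.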
NASA Astrophysics Data System (ADS)
Clément, A.; Laurens, S.
2011-07-01
The Structural Health Monitoring of civil structures subjected to ambient vibrations is very challenging. Indeed, the variations of environmental conditions and the difficulty of characterizing the excitation make damage detection a hard task. Auto-regressive (AR) model coefficients are often used as a damage-sensitive feature. The presented work proposes a comparison of the AR approach with a state-space feature formed by the Jacobian matrix of the dynamical process. Since damage detection can be formulated as a novelty detection problem, the Mahalanobis distance is applied to flag new points relative to an undamaged reference collection of feature vectors. Data from a concrete beam subjected to temperature variations and damaged by several static loadings are analyzed. It is observed that the damage-sensitive features are in fact also sensitive to temperature variations. However, the use of the Mahalanobis distance makes the detection of cracking possible with both of them. Early damage (before cracking) is only revealed by the AR coefficients, with good sensitivity.
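The AR-coefficient pipeline described here is compact enough to sketch end to end: fit an AR model to each response record, collect the coefficients of undamaged records as a reference set, and score new records by Mahalanobis distance. The simulated "structure" and damage mechanism below are invented for illustration, not the tested beam:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)

def response(damage=0.0, n=2000):
    """Toy ambient-vibration response: an AR(2) system whose dynamics
    shift slightly with damage (hypothetical mechanism)."""
    a1, a2 = 1.6 - damage, -0.8
    x = np.zeros(n)
    e = rng.standard_normal(n)
    for t in range(2, n):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + e[t]
    return x

def ar_features(signal, lags=6):
    """AR coefficients as the damage-sensitive feature vector
    (drop the constant term)."""
    return AutoReg(signal, lags=lags).fit().params[1:]

# Undamaged reference collection of feature vectors.
ref = np.array([ar_features(response()) for _ in range(50)])
mu, cov_inv = ref.mean(axis=0), np.linalg.inv(np.cov(ref.T))

def mahalanobis(f):
    d = f - mu
    return float(np.sqrt(d @ cov_inv @ d))

print("undamaged:", mahalanobis(ar_features(response())))
print("damaged:  ", mahalanobis(ar_features(response(damage=0.1))))
```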
Adapting legume crops to climate change using genomic approaches.
Mousavi-Derazmahalleh, Mahsa; Bayer, Philipp E; Hane, James K; Valliyodan, Babu; Nguyen, Henry T; Nelson, Matthew N; Erskine, William; Varshney, Rajeev K; Papa, Roberto; Edwards, David
2018-03-30
Our agricultural system, and hence food security, is threatened by a combination of pressures, such as an increasing population, the impacts of climate change, and the need for more sustainable development. Evolutionary adaptation may help some species to overcome environmental changes through new selection pressures driven by climate change. However, the success of evolutionary adaptation depends on various factors, one of which is the extent of genetic variation available within a species. Genomic approaches provide an exceptional opportunity to identify genetic variation that can be employed in crop improvement programs. In this review, we illustrate some of the routinely used genomics-based methods as well as recent breakthroughs that facilitate assessment of genetic variation and discovery of adaptive genes in legumes. Although additional information is needed, the current utility of selection tools indicates a robust ability to utilize existing variation among legumes to address the challenges of climate uncertainty. © 2018 The Authors. Plant, Cell & Environment Published by John Wiley & Sons Ltd.
Aragón, Pedro; Fitze, Patrick S.
2014-01-01
Geographical body size variation has long interested evolutionary biologists, and a range of mechanisms have been proposed to explain the observed patterns. It is considered to be more puzzling in ectotherms than in endotherms, and integrative approaches are necessary for testing non-exclusive alternative mechanisms. Using lacertid lizards as a model, we adopted an integrative approach, testing different hypotheses for both sexes while incorporating temporal, spatial, and phylogenetic autocorrelation at the individual level. We used data on the Spanish Sand Racer species group from a field survey to disentangle different sources of body size variation through environmental and individual genetic data, while accounting for temporal and spatial autocorrelation. A variation partitioning method was applied to separate independent and shared components of ecology and phylogeny, and to estimate their significance. We then fed back our models by controlling for the relevant independent components. The pattern was consistent with the geographical Bergmann's cline and the experimental temperature-size rule: adults were larger at lower temperatures (and/or higher elevations). This result was confirmed with an additional multi-year independent data set derived from the literature. Variation partitioning showed no sex differences in phylogenetic inertia but did show sex differences in the independent component of ecology, primarily due to growth differences. Interestingly, only after controlling for independent components did primary productivity also emerge as an important predictor of size variation in both sexes. This study highlights the importance of integrating individual-based genetic information, relevant ecological parameters, and temporal and spatial autocorrelation in sex-specific models to detect potentially important hidden effects. Our individual-based approach, devoted to extracting and controlling for independent components, was useful for revealing hidden effects linked with alternative non-exclusive hypotheses, such as that of primary productivity. Also, including measurement date allowed disentangling and controlling for short-term temporal autocorrelation reflecting sex-specific growth plasticity. PMID:25090025
Capomaccio, Stefano; Milanesi, Marco; Bomba, Lorenzo; Cappelli, Katia; Nicolazzi, Ezequiel L; Williams, John L; Ajmone-Marsan, Paolo; Stefanon, Bruno
2015-08-01
Genome-wide association studies (GWAS) have been widely applied to disentangle the genetic basis of complex traits. In cattle breeds, classical GWAS approaches with medium-density marker panels are far from conclusive, especially for complex traits. This is due to the intrinsic limitations of GWAS and the assumptions that are made to step from the association signals to the functional variations. Here, we applied a gene-based strategy to prioritize genotype-phenotype associations found for milk production and quality traits with classical approaches in three Italian dairy cattle breeds with different sample sizes (Italian Brown n = 745; Italian Holstein n = 2058; Italian Simmental n = 477). Although classical regression on single markers revealed only a single genome-wide significant genotype-phenotype association (in Italian Holstein), the gene-based approach identified specific genes in each breed that are associated with milk physiology and mammary gland development. As no standard method has yet been established to step from variation to functional units (i.e., genes), the strategy proposed here may contribute to revealing new genes that play significant roles in complex traits, such as those investigated here, by amplifying weak association signals through a gene-centric approach. © 2015 Stichting International Foundation for Animal Genetics.
Moody, Julia; Septimus, Edward; Hickok, Jason; Huang, Susan S; Platt, Richard; Gombosev, Adrijana; Terpstra, Leah; Avery, Taliser; Lankiewicz, Julie; Perlin, Jonathan B
2013-02-01
A range of strategies and approaches have been developed for preventing health care-associated infections. Understanding the variation in practices among facilities is necessary to improve compliance with existing programs and aid the implementation of new interventions. In 2009, HCA Inc administered an electronic survey to measure compliance with evidence-based infection prevention practices as well as identify variation in products or methods, such as use of special approach technology for central vascular catheters and ventilator care. Responding adult intensive care units (ICUs) were those considering participation in a clinical trial to reduce health care-associated infections. Responses from 99 ICUs in 55 hospitals indicated that many evidence-based practices were used consistently, including methicillin-resistant Staphylococcus aureus (MRSA) screening and use of contact precautions for MRSA-positive patients. Other practices exhibited wide variability, including discontinuation of precautions and use of antimicrobial technology or chlorhexidine patches for central vascular catheters. MRSA decolonization was not a predominant practice in ICUs. In this large, community-based health care system, there was substantial variation in the products and methods used to reduce health care-associated infections. Despite system-wide emphasis on basic practices as a precursor to adding special approach technologies, this survey showed that these technologies were commonplace, including in facilities where improvement in basic practices was needed. Copyright © 2013 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
ERIC Educational Resources Information Center
Ursavas, Omer Faruk; Reisoglu, Ilknur
2017-01-01
Purpose: The purpose of this paper is to explore the validity of extended technology acceptance model (TAM) in explaining pre-service teachers' Edmodo acceptance and the variation of variables related to TAM among pre-service teachers having different cognitive styles. Design/methodology/approach: Structural equation modeling approach was used to…
Addressing unwarranted clinical variation: A rapid review of current evidence.
Harrison, Reema; Manias, Elizabeth; Mears, Stephen; Heslop, David; Hinchcliff, Reece; Hay, Liz
2018-05-15
Unwarranted clinical variation (UCV) can be described as variation that can only be explained by differences in health system performance. There is a lack of clarity regarding how to define and identify UCV and, once identified, to determine whether it is sufficiently problematic to warrant action. As such, the implementation of systemic approaches to reducing UCV is challenging. A review of approaches to understand, identify, and address UCV was undertaken to determine how conceptual and theoretical frameworks currently attempt to define UCV, the approaches used to identify UCV, and the evidence of their effectiveness. Rapid evidence assessment (REA) methodology was used. A range of text words, synonyms, and subject headings were developed for the major concepts of unwarranted clinical variation, standards (and deviation from these standards), and health care environment. Two electronic databases (Medline and Pubmed) were searched from January 2006 to April 2017, in addition to hand searching of relevant journals, reference lists, and grey literature. Results were merged using reference-management software (Endnote) and duplicates removed. Inclusion criteria were independently applied to potentially relevant articles by 3 reviewers. Findings were presented in a narrative synthesis to highlight key concepts addressed in the published literature. A total of 48 relevant publications were included in the review; 21 articles were identified as eligible from the database search, 4 from hand searching published work and 23 from the grey literature. The search process highlighted the voluminous literature reporting clinical variation internationally; yet, there is a dearth of evidence regarding systematic approaches to identifying or addressing UCV. Wennberg's classification framework is commonly cited in relation to classifying variation, but no single approach is agreed upon to systematically explore and address UCV. The instances of UCV that warrant investigation and action are largely determined at a systems level currently, and stakeholder engagement in this process is limited. Lack of consensus on an evidence-based definition for UCV remains a substantial barrier to progress in this field. © 2018 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Laursen, Sandra L.; Hassi, Marja-Liisa; Kogan, Marina; Weston, Timothy J.
2014-01-01
Slow faculty uptake of research-based, student-centered teaching and learning approaches limits the advancement of U.S. undergraduate mathematics education. A study of inquiry-based learning (IBL) as implemented in over 100 course sections at 4 universities provides an example of such multicourse, multi-institution uptake. Despite variation in how…
Quantifying individual variation in the propensity to attribute incentive salience to reward cues.
Meyer, Paul J; Lovic, Vedran; Saunders, Benjamin T; Yager, Lindsay M; Flagel, Shelly B; Morrow, Jonathan D; Robinson, Terry E
2012-01-01
If reward-associated cues acquire the properties of incentive stimuli they can come to powerfully control behavior, and potentially promote maladaptive behavior. Pavlovian incentive stimuli are defined as stimuli that have three fundamental properties: they are attractive, they are themselves desired, and they can spur instrumental actions. We have found, however, that there is considerable individual variation in the extent to which animals attribute Pavlovian incentive motivational properties ("incentive salience") to reward cues. The purpose of this paper was to develop criteria for identifying and classifying individuals based on their propensity to attribute incentive salience to reward cues. To do this, we conducted a meta-analysis of a large sample of rats (N = 1,878) subjected to a classic Pavlovian conditioning procedure. We then used the propensity of animals to approach a cue predictive of reward (one index of the extent to which the cue was attributed with incentive salience), to characterize two behavioral phenotypes in this population: animals that approached the cue ("sign-trackers") vs. others that approached the location of reward delivery ("goal-trackers"). This variation in Pavlovian approach behavior predicted other behavioral indices of the propensity to attribute incentive salience to reward cues. Thus, the procedures reported here should be useful for making comparisons across studies and for assessing individual variation in incentive salience attribution in small samples of the population, or even for classifying single animals.
A Trait-Based Approach to Advance Coral Reef Science.
Madin, Joshua S; Hoogenboom, Mia O; Connolly, Sean R; Darling, Emily S; Falster, Daniel S; Huang, Danwei; Keith, Sally A; Mizerek, Toni; Pandolfi, John M; Putnam, Hollie M; Baird, Andrew H
2016-06-01
Coral reefs are biologically diverse and ecologically complex ecosystems constructed by stony corals. Despite decades of research, basic coral population biology and community ecology questions remain. Quantifying trait variation among species can help resolve these questions, but progress has been hampered by a paucity of trait data for the many, often rare, species and by a reliance on nonquantitative approaches. Therefore, we propose filling data gaps by prioritizing traits that are easy to measure, estimating key traits for species with missing data, and identifying 'supertraits' that capture a large amount of variation for a range of biological and ecological processes. Such an approach can accelerate our understanding of coral ecology and our ability to protect critically threatened global ecosystems. Copyright © 2016 Elsevier Ltd. All rights reserved.
Probabilistic flood damage modelling at the meso-scale
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno
2014-05-01
Decisions on flood risk management and adaptation are usually based on risk analyses. Such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments. Most damage models have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood damage models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we show how the model BT-FLEMO (Bagging decision Tree based Flood Loss Estimation MOdel) can be applied on the meso-scale, namely on the basis of ATKIS land-use units. The model is applied in 19 municipalities that were affected during the 2002 flood by the River Mulde in Saxony, Germany. The application of BT-FLEMO provides a probability distribution of estimated damage to residential buildings per municipality. Validation is undertaken on the one hand via a comparison with eight other damage models, including stage-damage functions as well as multi-variate models, and on the other hand against official damage data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of damage estimation remain high. The significant advantage of the probabilistic flood loss estimation model BT-FLEMO is thus that it inherently provides quantitative information about the uncertainty of the prediction. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64.
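The mechanism that makes a bagged-tree loss model inherently probabilistic is that each bootstrap tree returns its own estimate, so a new case yields a distribution of damage ratios rather than a point value. A minimal sketch with invented predictors and loss response (not the FLEMO variables or data):

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

# Hypothetical training data in the spirit of a multi-variate flood loss
# model: water depth (m), inundation duration (h), building value index.
rng = np.random.default_rng(5)
n = 500
X = np.column_stack([rng.uniform(0, 3, n),        # water depth
                     rng.uniform(1, 120, n),      # duration
                     rng.uniform(0.5, 2.0, n)])   # building value index
loss = (0.2 * X[:, 0] + 0.001 * X[:, 1] * X[:, 0] + 0.05 * X[:, 2]
        + 0.05 * rng.standard_normal(n))
y = np.clip(loss, 0.0, 1.0)                       # damage ratio in [0, 1]

# Bagging with the default base estimator (a decision tree regressor).
model = BaggingRegressor(n_estimators=100, random_state=0).fit(X, y)

# Per-tree predictions for one new case give a damage distribution,
# i.e. a built-in uncertainty estimate.
case = np.array([[1.5, 48.0, 1.0]])
per_tree = np.array([tree.predict(case)[0] for tree in model.estimators_])
print(f"median damage ratio {np.median(per_tree):.2f}, "
      f"90% interval [{np.quantile(per_tree, 0.05):.2f}, "
      f"{np.quantile(per_tree, 0.95):.2f}]")
```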
Identification and ranking of environmental threats with ecosystem vulnerability distributions.
Zijp, Michiel C; Huijbregts, Mark A J; Schipper, Aafke M; Mulder, Christian; Posthuma, Leo
2017-08-24
Responses of ecosystems to human-induced stress vary in space and time, because both stressors and ecosystem vulnerabilities vary in space and time. Presently, ecosystem impact assessments mainly take into account variation in stressors, without considering variation in ecosystem vulnerability. We developed a method to address ecosystem vulnerability variation by quantifying ecosystem vulnerability distributions (EVDs) based on monitoring data of local species compositions and environmental conditions. The method incorporates spatial variation of both abiotic and biotic variables to quantify variation in responses among species and ecosystems. We show that EVDs can be derived based on a selection of locations, existing monitoring data and a selected impact boundary, and can be used in stressor identification and ranking for a region. A case study on Ohio's freshwater ecosystems, with freshwater fish as target species group, showed that physical habitat impairment and nutrient loads ranked highest as current stressors, with species losses higher than 5% for at least 6% of the locations. EVDs complement existing approaches of stressor assessment and management, which typically account only for variability in stressors, by accounting for variation in the vulnerability of the responding ecosystems.
Bit storage and bit flip operations in an electromechanical oscillator.
Mahboob, I; Yamaguchi, H
2008-05-01
The Parametron was first proposed as a logic-processing system almost 50 years ago. In this approach the two stable phases of an excited harmonic oscillator provide the basis for logic operations. Computer architectures based on LC oscillators were developed for this approach, but high power consumption and difficulties with integration meant that the Parametron was rendered obsolete by the transistor. Here we propose an approach to mechanical logic based on nanoelectromechanical systems that is a variation on the Parametron architecture and, as a first step towards a possible nanomechanical computer, we demonstrate both bit storage and bit flip operations.
A clustering approach to segmenting users of internet-based risk calculators.
Harle, C A; Downs, J S; Padman, R
2011-01-01
Risk calculators are widely available Internet applications that deliver quantitative health risk estimates to consumers. Although these tools are known to have varying effects on risk perceptions, little is known about who will be more likely to accept objective risk estimates. The aim of this study was to identify clusters of online health consumers that help explain variation in individual improvement in risk perceptions from web-based quantitative disease risk information. A secondary analysis was performed on data collected in a field experiment that measured people's pre-diabetes risk perceptions before and after visiting a realistic health promotion website that provided quantitative risk information. K-means clustering was performed on numerous candidate variable sets, and the different segmentations were evaluated based on between-cluster variation in risk perception improvement. Variation in responses to risk information was best explained by clustering on pre-intervention absolute pre-diabetes risk perceptions and an objective estimate of personal risk. Members of a high-risk overestimater cluster showed large improvements in their risk perceptions, but clusters of both moderate-risk and high-risk underestimaters were much more muted in improving their optimistically biased perceptions. Cluster analysis provided a unique approach for segmenting health consumers and predicting their acceptance of quantitative disease risk information. These clusters suggest that health consumers were very responsive to good news but tended not to incorporate bad news into their self-perceptions much. These findings help to quantify variation among online health consumers and may inform the targeted marketing of, and improvements to, risk communication tools on the Internet.
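The winning segmentation clusters consumers on two variables: pre-intervention perceived risk and an objective risk estimate. A sketch of that step with invented data; the cluster labels simply read off whether a segment's perceptions sit above or below the objective estimates:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical consumers: perceived pre-diabetes risk (%) before the
# intervention and an objective model-based risk estimate (%).
rng = np.random.default_rng(11)
perceived = np.clip(rng.normal(30, 15, 300), 0, 100)
objective = np.clip(rng.normal(35, 10, 300), 0, 100)
X = np.column_stack([perceived, objective])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for k in range(3):
    seg = X[km.labels_ == k]
    gap = seg[:, 0].mean() - seg[:, 1].mean()  # perception minus objective
    label = "overestimaters" if gap > 0 else "underestimaters"
    print(f"cluster {k}: n={len(seg)}, mean gap={gap:+.1f} pp ({label})")
```

Comparing each segment's perception change after the intervention against these gaps is what reveals the asymmetry the study reports: overestimaters correct downward readily, underestimaters resist correcting upward.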
NASA Astrophysics Data System (ADS)
Yu, Maolin; Du, R.
2005-08-01
Sheet metal stamping is one of the most commonly used manufacturing processes, and hence much research has been carried out on it for economic gain. A search of the literature, however, shows that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same set of dies, product quality may still vary owing to a number of factors, such as inhomogeneity of the workpiece material, loading error, and lubrication. At present, few methods can predict this quality variation, let alone identify what contributes to it. As a result, trial-and-error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict product quality variation and identify the sensitive design/process parameters. The new approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With acceptable accuracy, the inverse FEM (also called one-step FEM) requires much less computation than the usual incremental FEM and hence can be used to predict quality variations under various conditions. LHS is a statistical method through which the sensitivity analysis can be carried out. The result of the sensitivity analysis has clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented, including drawing a rectangular box and drawing a two-step rectangular box.
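Latin Hypercube Sampling stratifies each parameter's range so that far fewer model runs cover the design space than plain Monte Carlo would need. A sketch of the sampling step using SciPy's qmc module; the three stamping parameters and their ranges are invented for illustration:

```python
import numpy as np
from scipy.stats import qmc

# LHS over three hypothetical stamping parameters:
# blank-holder force (kN), friction coefficient, sheet thickness (mm).
sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=50)                 # 50 runs, stratified in [0, 1)^3
lo = np.array([100.0, 0.05, 0.8])
hi = np.array([400.0, 0.20, 1.2])
designs = qmc.scale(unit, lo, hi)           # map to physical ranges

# Each row is the input for one inverse-FEM run; regressing the predicted
# quality metric (e.g. maximum thinning) on these columns then ranks the
# sensitivity of each design/process parameter.
print(designs[:3])
```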
A model-based approach for the scattering-bar printing avoidance
NASA Astrophysics Data System (ADS)
Du, Yaojun; Li, Liang; Zhang, Jingjing; Shao, Feng; Zuniga, Christian; Deng, Yunfei
2018-03-01
As semiconductor manufacturing advances to smaller technology nodes, scattering bars (SBs) are more crucial than ever for ensuring good on-wafer printability of line-space and hole patterns. The main pattern with small pitches requires a very narrow PV (process variation) band, and a delicate SB addition scheme is thus needed to maintain a sufficient PW (process window) for the semi-iso and iso patterns. In general, SBs that are wider, longer, and closer to the main feature are more effective in enhancing printability; on the other hand, they are also more likely to print on the wafer, resulting in undesired defects that transfer to subsequent processes. In this work, we have developed a model-based approach for scattering-bar printing avoidance (SPA). A specially designed optical model was tuned on a broad range of test patterns containing a variation of CDs and SB placements with both printing and non-printing scattering bars. A printing threshold is then obtained to check for unwanted SB printing. The accuracy of this threshold is verified by pre-designed test patterns. The printing threshold associated with our novel SPA model allows us to set up a proper SB rule.
NASA Astrophysics Data System (ADS)
Liang, Zhang; Yanqing, Hou; Jie, Wu
2016-12-01
The multi-antenna synchronized receiver (using a common clock) is widely applied in GNSS-based attitude determination (AD), terrain deformation monitoring, and many other applications, since the high-accuracy single-differenced carrier phase can be used to improve the positioning or AD accuracy. The line bias (LB) parameter (which isolates the fractional bias) must therefore be calibrated in the single-differenced phase equations. In past decades, researchers have estimated the LB as a constant parameter in advance and compensated for it in real time. However, the constant-LB assumption is inappropriate in practical applications because of changes in the physical length and permittivity of the cables, caused by environmental temperature variation, and the instability of the receiver's internal circuit transmission delay. Considering the LB drift (or colored LB) in practical circumstances, this paper introduces a real-time estimator using an autoregressive moving average (ARMA) based prediction/whitening filter model or a moving average (MA) based constant calibration model. In the ARMA-based filter model, four cases, namely AR(1), ARMA(1, 1), AR(2) and ARMA(2, 1), are applied for LB prediction. The real-time relative positioning model using the ARMA-predicted LB is derived, and it is theoretically proved that its positioning accuracy is better than that of the traditional double-differenced carrier phase (DDCP) model. The drifting LB is defined by an integral function of the phase temperature changing rate, which is a random walk process if the phase temperature changing rate is white noise; this is validated by analysis of the AR model coefficient. The autocovariance function shows that the LB indeed varies in time and that estimating it as a constant is not safe, which is also demonstrated by the analysis of the LB variation of each visible satellite during a zero- and short-baseline BDS/GPS experiment. Compared to the DDCP approach, in the zero-baseline experiment, the LB constant calibration (LBCC) and MA approaches improved the positioning accuracy of the vertical component, while slightly degrading the accuracy of the horizontal components. The ARMA(1, 0) model, however, improved the positioning accuracy of all three components, with 40% and 50% improvement of the vertical component for BDS and GPS, respectively. In the short-baseline experiment, compared to the DDCP approach, the LBCC approach yielded poor positioning solutions and degraded the AD accuracy; both MA and ARMA-based filter approaches improved the AD accuracy. Moreover, the ARMA(1, 0) and ARMA(1, 1) models performed relatively better, with the ARMA(1, 1) and MA models improving the elevation-angle accuracy for GPS by 55% and 48%, respectively. Furthermore, the drifting LB variation is found to be continuous and slowly cumulative; the variation magnitudes in units of length are almost identical on the different carrier-phase frequencies, so the LB variations on different frequencies are strongly correlated. Consequently, the wide-lane LB in units of cycles is very stable, while the narrow-lane LB varies considerably in time. This reasoning probably also explains the phenomenon that the wide-lane LB originating in the satellites is stable while the narrow-lane LB varies. The results of the ARMA-based filters are better than those of the MA model, which probably implies that modeling the drifting LB can further improve precise point positioning accuracy.
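A hedged sketch of one-step-ahead line-bias prediction with an ARMA model, using statsmodels in place of whatever estimator the authors implemented; the synthetic series mimics a slowly drifting, temperature-driven LB:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
lb = np.cumsum(0.002 * rng.standard_normal(500))  # random-walk-like drifting LB (cycles)

# ARMA(1, 1); use order (1, 0, 0) or (2, 0, 1) for the paper's other cases.
res = ARIMA(lb, order=(1, 0, 1)).fit()
print("one-step LB prediction:", res.forecast(steps=1)[0])
```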
Model-based review of Doppler global velocimetry techniques with laser frequency modulation
NASA Astrophysics Data System (ADS)
Fischer, Andreas
2017-06-01
Optical measurements of flow velocity fields are of crucial importance for understanding the behavior of complex flows. One flow field measurement technique is Doppler global velocimetry (DGV). A large variety of DGV approaches exist, e.g., applying different kinds of laser frequency modulation. In order to investigate the measurement capabilities, especially of the newer DGV approaches with laser frequency modulation, a model-based review of all DGV measurement principles is performed. The DGV principles can be categorized by the respective number of required time steps. The systematic review of all DGV principles reveals drawbacks and benefits of the different measurement approaches with respect to the temporal resolution, the spatial resolution and the measurement range. Furthermore, the Cramér-Rao bound for photon shot noise is calculated and discussed, which represents a fundamental limit of the achievable measurement uncertainty. As a result, all DGV techniques provide similar minimal uncertainty limits. With $N_{\text{photons}}$ as the number of scattered photons, the minimal standard deviation of the flow velocity is about $10^6\,\mathrm{m/s}/\sqrt{N_{\text{photons}}}$, calculated for a perpendicular arrangement of the illumination and observation directions and a laser wavelength of 895 nm. As a further result, the signal processing efficiencies are determined with a Monte Carlo simulation. Except for the newest correlation-based DGV method, the signal processing algorithms are already optimal or near the optimum. Finally, the different DGV approaches are compared regarding errors due to temporal variations of the scattered light intensity and the flow velocity. The influence of a linear variation of the scattered light intensity can be reduced by maximizing the number of time steps, because this acquires more information for the correction of this systematic effect. However, more time steps can result in a flow velocity measurement with a lower temporal resolution when operating at the maximal frame rate of the camera. DGV without laser frequency modulation then provides the highest temporal resolution and is not sensitive to temporal variations, but it is sensitive to spatial variations of the scattered light intensity. In contrast, all DGV variants suffer from velocity variations during the measurement. In summary, the experimental conditions and the measurement task finally decide the ideal choice among the reviewed DGV methods.
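The shot-noise bound quoted above implies a $1/\sqrt{N}$ scaling; this tiny sketch simply evaluates it for a few photon counts (it relies on the $10^6\,\mathrm{m/s}$ prefactor as reconstructed from the abstract):

```python
import numpy as np

for n_photons in (1e6, 1e9, 1e12):
    sigma_v = 1e6 / np.sqrt(n_photons)   # minimal flow-velocity std (m/s)
    print(f"N = {n_photons:.0e}: sigma_v ~ {sigma_v:.3f} m/s")
```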
Patterns of breast cancer mortality trends in Europe.
Amaro, Joana; Severo, Milton; Vilela, Sofia; Fonseca, Sérgio; Fontes, Filipa; La Vecchia, Carlo; Lunet, Nuno
2013-06-01
To identify patterns of variation in breast cancer mortality in Europe (1980-2010), using a model-based approach. Mortality data were obtained from the World Health Organization database, and mixed models were used to describe the time trends in the age-standardized mortality rates (ASMR). Model-based clustering was used to identify clusters of countries with homogeneous variation in ASMR. Three patterns were identified. Patterns 1 and 2 are characterized by stable or slightly increasing trends in ASMR in the first half of the period analysed, with a clear decline thereafter; in pattern 1 the median ASMR is higher and the highest rates were reached sooner. Pattern 3 is characterized by a rapid increase in mortality until 1999, declining slowly thereafter. This study provides a general model for the description and interpretation of the variation in breast cancer mortality in Europe, based on three main patterns. Copyright © 2013 Elsevier Ltd. All rights reserved.
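A sketch of the model-based clustering idea using a Gaussian mixture as a generic stand-in; in the study the inputs would be fitted trend descriptors per country, here replaced by a toy feature matrix:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
features = rng.normal(size=(30, 3))   # 30 countries x 3 trend descriptors (toy data)

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(features)
patterns = gmm.predict(features)
print(np.bincount(patterns))          # number of countries assigned to each pattern
```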
A Survey on Gas Sensing Technology
Liu, Xiao; Cheng, Sitian; Liu, Hong; Hu, Sha; Zhang, Daqiang; Ning, Huansheng
2012-01-01
Sensing technology has been widely investigated and utilized for gas detection. Due to the different applicability and inherent limitations of different gas sensing technologies, researchers have been working on different scenarios with enhanced gas sensor calibration. This paper reviews the descriptions, evaluation, comparison and recent developments of existing gas sensing technologies. A classification of sensing technologies is given, based on the variation of electrical and other properties. Sensing methods based on electrical variation are introduced in detail and further classified according to sensing material, including metal oxide semiconductors, polymers, carbon nanotubes, and moisture-absorbing materials. Methods based on other kinds of variation, such as optical, calorimetric, acoustic and gas-chromatographic, are presented in a general way. Several suggestions related to future development are also discussed. Furthermore, this paper focuses on sensitivity and selectivity as performance indicators for comparing different sensing technologies, analyzes the factors that influence these two indicators, and lists several corresponding approaches for improvement. PMID:23012563
Airborne data measurement system errors reduction through state estimation and control optimization
NASA Astrophysics Data System (ADS)
Sebryakov, G. G.; Muzhichek, S. M.; Pavlov, V. I.; Ermolin, O. V.; Skrinnikov, A. A.
2018-02-01
The paper discusses the problem of reducing airborne data measurement system errors through state estimation and control optimization. The proposed approaches are based on methods of experiment design and the theory of systems with random abrupt structure variation. The paper considers various control criteria as applied to an aircraft data measurement system. The physics of the criteria is explained, and the mathematical description and the sequence of steps for applying each criterion are shown. A formula is given for the posterior estimation of the airborne data measurement system state vector for systems with structure variations.
Bogers, Sophie Helen
2018-01-01
Biological cell-based therapies for the treatment of joint disease in veterinary patients include autologous-conditioned serum, platelet-rich plasma, and expanded or non-expanded mesenchymal stem cell products. This narrative review outlines the processing and known mechanisms of action of these therapies and reviews current preclinical and clinical efficacy in joint disease in the context of the processing type and study design. The significance of variation for biological activity and, consequently, regulatory approval is also discussed. There is significant variation in study outcomes for canine and equine cell-based products derived from whole blood or stem cell sources such as adipose tissue and bone marrow. Variation can be attributed to changing bio-composition due to factors including preparation technique and source. In addition, study design factors, such as the selection of cases with early vs. late stage osteoarthritis (OA) or with intra-articular soft tissue injury, contribute to outcome variation. In this under-regulated field, variation raises concerns for product safety, consistency, and efficacy. Cell-based therapies used for OA meet the Food and Drug Administration's (FDA's) definition of a drug; researchers must therefore consider their approach to veterinary cell-based research with future regulatory demands in mind. This review explains the USA's FDA guidelines as an example pathway for cell-based therapies to demonstrate safety, effectiveness, and manufacturing consistency. An understanding of the variation in production consistency, effectiveness, and regulatory concerns is essential for practitioners and researchers to determine which products are indicated for the treatment of joint disease and which tactics can improve the quality of future research. PMID:29713634
Machine learning approaches in medical image analysis: From detection to diagnosis.
de Bruijne, Marleen
2016-10-01
Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Gong, Changfei; Han, Ce; Gan, Guanghui; Deng, Zhenxiang; Zhou, Yongqiang; Yi, Jinling; Zheng, Xiaomin; Xie, Congying; Jin, Xiance
2017-04-01
Dynamic myocardial perfusion CT (DMP-CT) imaging provides quantitative functional information for diagnosis and risk stratification of coronary artery disease by calculating myocardial perfusion hemodynamic parameter (MPHP) maps. However, the radiation dose delivered by the dynamic sequential scan protocol can be high. The purpose of this work is to develop a pre-contrast normal-dose scan induced structure tensor total variation regularization, based on the penalized weighted least-squares (PWLS) criterion, to improve the image quality of DMP-CT with a low-mAs CT acquisition. For simplicity, the present approach is termed 'PWLS-ndiSTV'. Specifically, the ndiSTV regularization takes into account the spatio-temporal structure information of DMP-CT data and further exploits the higher-order derivatives of the objective images to enhance denoising performance. Subsequently, an effective optimization algorithm based on the split-Bregman approach was adopted to minimize the associated objective function. Evaluations with a modified dynamic XCAT phantom and preclinical porcine datasets have demonstrated that the proposed PWLS-ndiSTV approach achieves promising gains over existing approaches in terms of mitigating noise-induced artifacts, preserving edge details, and accurately calculating MPHP maps.
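A minimal PWLS-plus-total-variation sketch to illustrate the general idea only: the paper's ndiSTV regularizer and split-Bregman solver are not reproduced, and a generic quasi-Newton optimizer with a smoothed isotropic TV penalty is used instead:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
truth = np.zeros((16, 16)); truth[4:12, 4:12] = 1.0
y = truth + 0.2 * rng.standard_normal(truth.shape)   # noisy low-dose frame
w = np.ones_like(y)                                  # statistical weights (all equal here)
lam, eps = 0.15, 1e-6

def objective(x_flat):
    x = x_flat.reshape(y.shape)
    data = 0.5 * np.sum(w * (x - y) ** 2)            # weighted least-squares data term
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    tv = np.sum(np.sqrt(dx ** 2 + dy ** 2 + eps))    # smoothed isotropic TV penalty
    return data + lam * tv

res = minimize(objective, y.ravel(), method="L-BFGS-B", options={"maxiter": 100})
denoised = res.x.reshape(y.shape)
```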
A probabilistic bridge safety evaluation against floods.
Liao, Kuo-Wei; Muto, Yasunori; Chen, Wei-Lun; Wu, Bang-Ho
2016-01-01
To further capture the influence of uncertain factors on river bridge safety evaluation, a probabilistic approach is adopted. Because this is a system-level and nonlinear problem, MPP-based reliability analyses are not suitable. A sampling approach such as Monte Carlo simulation (MCS) or importance sampling is often adopted instead. To enhance the efficiency of the sampling approach, this study utilizes Bayesian least squares support vector machines to construct a response surface, followed by an MCS, providing a more precise safety index. Although several factors impact the flood-resistant reliability of a bridge, previous experience and studies show that the reliability of the bridge itself plays a key role. Thus, the goal of this study is to analyze the system reliability of a selected bridge that includes five limit states. The random variables considered here include the water surface elevation, water velocity, local scour depth, soil properties and wind load. Because the first three variables are deeply affected by river hydraulics, a probabilistic HEC-RAS-based simulation is performed to capture the uncertainties in those random variables. The accuracy and variation of our solutions are confirmed by a direct MCS to ensure the applicability of the proposed approach. The results of a numerical example indicate that the proposed approach can efficiently provide an accurate bridge safety evaluation while maintaining satisfactory variation.
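A sketch of the surrogate-plus-MCS idea under stated assumptions: a generic SVR stands in for the Bayesian least-squares SVM, and the limit state is a toy function, not the bridge model:

```python
# Train a cheap surrogate of the limit state on a few expensive evaluations,
# then estimate the failure probability P(g < 0) by Monte Carlo on the surrogate.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)

def g(x):  # toy limit state: failure when g < 0
    return 3.0 - x[:, 0] - 0.5 * x[:, 1] ** 2

X_train = rng.normal(size=(200, 2))
surrogate = SVR(C=10.0, gamma="scale").fit(X_train, g(X_train))

X_mc = rng.normal(size=(100_000, 2))         # Monte Carlo sample of the random variables
pf = np.mean(surrogate.predict(X_mc) < 0.0)  # failure-probability estimate
print(f"estimated Pf = {pf:.4f}")
```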
Whiley, David M; Jacob, Kevin; Nakos, Jennifer; Bletchly, Cheryl; Nimmo, Graeme R; Nissen, Michael D; Sloots, Theo P
2012-06-01
Numerous real-time PCR assays have been described for detection of the influenza A H275Y alteration. However, the performance of these methods can be undermined by sequence variation in the regions flanking the codon of interest. This is a problem encountered more broadly in microbial diagnostics. In this study, we developed a modification of hybridization probe-based melting curve analysis, whereby primers are used to mask proximal mutations in the sequence targets of hybridization probes, so as to limit the potential for sequence variation to interfere with typing. The approach was applied to the H275Y alteration of the influenza A (H1N1) 2009 strain, as well as to a Neisseria gonorrhoeae mutation associated with antimicrobial resistance. Assay performance was assessed using influenza A and N. gonorrhoeae strains characterized by DNA sequencing. The modified hybridization probe-based approach proved successful in limiting the effects of proximal mutations, with the results of melting curve analyses being 100% consistent with the results of DNA sequencing for all influenza A and N. gonorrhoeae strains tested. Notably, these included influenza A and N. gonorrhoeae strains exhibiting additional mutations in hybridization probe targets. Of particular interest was that the H275Y assay correctly typed influenza A strains harbouring a T822C nucleotide substitution, previously shown to interfere with H275Y typing methods. Overall, our modified hybridization probe-based approach provides a simple means of circumventing problems caused by sequence variation, and offers improved detection of the influenza A H275Y alteration and potentially other resistance mechanisms.
NASA Technical Reports Server (NTRS)
Lyle, Karen H.
2014-01-01
Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology validation via flight testing. This paper explores the implementation of probabilistic methods in the sensitivity analysis of the structural response of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD). HIAD architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during re-entry. However, designers are hesitant to adopt these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. In the example presented here, the structural parameters of an existing HIAD model have been varied to illustrate the design approach utilizing uncertainty-based methods. Surrogate models have been used to reduce computational expense by several orders of magnitude. The suitability of the design is assessed based on the variation in the resulting cone angle; the acceptable cone angle variation would be determined by the aerodynamic requirements.
Full-wave multiscale anisotropy tomography in Southern California
NASA Astrophysics Data System (ADS)
Lin, Yu-Pin; Zhao, Li; Hung, Shu-Huei
2014-12-01
Understanding the spatial variation of anisotropy in the upper mantle is important for characterizing the lithospheric deformation and mantle flow dynamics. In this study, we apply a full-wave approach to image the upper-mantle anisotropy in Southern California using 5954 SKS splitting data. Three-dimensional sensitivity kernels combined with a wavelet-based model parameterization are adopted in a multiscale inversion. Spatial resolution lengths are estimated based on a statistical resolution matrix approach, showing a finest resolution length of ~25 km in regions with densely distributed stations. The anisotropic model displays structural fabric in relation to surface geologic features such as the Salton Trough, the Transverse Ranges, and the San Andreas Fault. The depth variation of anisotropy does not suggest a lithosphere-asthenosphere decoupling. At long wavelengths, the fast directions of anisotropy are aligned with the absolute plate motion inside the Pacific and North American plates.
Improving Grasp Skills Using Schema Structured Learning
NASA Technical Reports Server (NTRS)
Platt, Robert; Grupen, ROderic A.; Fagg, Andrew H.
2006-01-01
In the control-based approach to robotics, complex behavior is created by sequencing and combining control primitives. While it is desirable for the robot to autonomously learn the correct control sequence, searching through the large number of potential solutions can be time consuming. This paper constrains this search to variations of a generalized solution encoded in a framework known as an action schema. A new algorithm, SCHEMA STRUCTURED LEARNING, is proposed that repeatedly executes variations of the generalized solution in search of instantiations that satisfy action schema objectives. This approach is tested in a grasping task where Dexter, the UMass humanoid robot, learns which reaching and grasping controllers maximize the probability of grasp success.
NASA Astrophysics Data System (ADS)
Feng, Xinzeng; Hormuth, David A.; Yankeelov, Thomas E.
2018-06-01
We present an efficient numerical method to quantify the spatial variation of glioma growth based on subject-specific medical images using a mechanically-coupled tumor model. The method is illustrated in a murine model of glioma in which we consider the tumor as a growing elastic mass that continuously deforms the surrounding healthy-appearing brain tissue. As an inverse parameter identification problem, we quantify the volumetric growth of glioma and the growth component of deformation by fitting the model predicted cell density to the cell density estimated using the diffusion-weighted magnetic resonance imaging data. Numerically, we developed an adjoint-based approach to solve the optimization problem. Results on a set of experimentally measured, in vivo rat glioma data indicate good agreement between the fitted and measured tumor area and suggest a wide variation of in-plane glioma growth with the growth-induced Jacobian ranging from 1.0 to 6.0.
Panthier, Frédéric; Lareyre, Fabien; Audouin, Marie; Raffort, Juliette
2018-03-01
Pelvi-ureteric junction obstruction corresponds to an impairment of urinary transport that can lead to renal dysfunction if not treated. Several mechanisms can cause the obstruction of the ureter including intrinsic factors or extrinsic factors such as the presence of crossing vessels. The treatment of the disease relies on surgical approaches, pyeloplasty being the standard reference. The technique consists in removing the pathologic ureteric segment and renal pelvis and transposing associated crossing vessels if present. The vascular anatomy of the pelvi-ureteric junction is complex and varies among individuals, and this can impact on the disease development and its surgical treatment. In this review, we summarize current knowledge on vascular anatomic variations in the pelvi-ureteric junction. Based on anatomic characteristics, we discuss implications for surgical approaches during pyeloplasty and vessel transposition.
Lee, A H; Yau, K K
2001-01-01
To identify factors associated with hospital length of stay (LOS) and to model variations in LOS within Diagnosis Related Groups (DRGs). A proportional hazards frailty modelling approach is proposed that accounts for patient transfers and the inherent correlation of patients clustered within hospitals. The investigation is based on patient discharge data extracted for a group of obstetrical DRGs. Application of the frailty approach has highlighted several significant factors after adjustment for patient casemix and random hospital effects. In particular, patients admitted for childbirth with private medical insurance coverage have higher risk of prolonged hospitalization compared to public patients. The determination of pertinent factors provides important information to hospital management and clinicians in assessing the risk of prolonged hospitalization. The analysis also enables the comparison of inter-hospital variations across adjacent DRGs.
Towards a Unified Framework for Pose, Expression, and Occlusion Tolerant Automatic Facial Alignment.
Seshadri, Keshav; Savvides, Marios
2016-10-01
We propose a facial alignment algorithm that is able to jointly deal with the presence of facial pose variation, partial occlusion of the face, and varying illumination and expressions. Our approach proceeds from sparse to dense landmarking steps using a set of specific models trained to best account for the shape and texture variation manifested by facial landmarks and facial shapes across pose and various expressions. We also propose the use of a novel l1-regularized least squares approach that we incorporate into our shape model, which is an improvement over the shape model used by several prior Active Shape Model (ASM) based facial landmark localization algorithms. Our approach is compared against several state-of-the-art methods on many challenging test datasets and exhibits a higher fitting accuracy on all of them.
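The l1-regularized least-squares step in isolation, as a hedged sketch: given a basis of shape modes A and an observed shape b, solve for sparse coefficients. sklearn's Lasso is used as a generic solver; the paper's shape model, landmarks, and constraints are not reproduced, and all sizes are toy values:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
A = rng.standard_normal((136, 30))   # 68 landmarks (x, y) x 30 shape modes (toy sizes)
coef_true = np.zeros(30); coef_true[:4] = [1.5, -0.8, 0.5, 0.3]
b = A @ coef_true + 0.05 * rng.standard_normal(136)

fit = Lasso(alpha=0.05).fit(A, b)    # l1 penalty keeps few shape modes active
print("active modes:", np.flatnonzero(fit.coef_))
```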
Khana, Diba; Rossen, Lauren M; Hedegaard, Holly; Warner, Margaret
2018-01-01
Hierarchical Bayes models have been used in disease mapping to examine small-scale geographic variation. State-level geographic variation for less common causes of mortality has been reported; however, county-level variation is rarely examined. Due to concerns about statistical reliability and confidentiality, county-level mortality rates based on fewer than 20 deaths are suppressed under the Division of Vital Statistics, National Center for Health Statistics (NCHS) statistical reliability criteria, precluding an examination of spatio-temporal variation in less common causes of mortality, such as suicide rates (SRs), at the county level using direct estimates. Existing Bayesian spatio-temporal modeling strategies can be applied via Integrated Nested Laplace Approximation (INLA) in R to a large number of rare mortality outcomes to enable examination of spatio-temporal variation on smaller geographic scales such as counties. This method allows examination of spatio-temporal variation across the entire U.S., even where the data are sparse. We used mortality data from 2005-2015 to explore spatio-temporal variation in SRs as one application of the Bayesian spatio-temporal modeling strategy in R-INLA to predict year- and county-specific SRs. Specifically, hierarchical Bayesian spatio-temporal models were implemented with spatially structured and unstructured random effects, correlated time effects, time-varying confounders and space-time interaction terms in the software R-INLA, borrowing strength across both counties and years to produce smoothed county-level SRs. Model-based estimates of SRs were mapped to explore geographic variation.
Fast and efficient indexing approach for object recognition
NASA Astrophysics Data System (ADS)
Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi
1999-08-01
This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme rests on a unified image feature detection approach based on Zernike moments. A set of low-level features, e.g. high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then produced by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.
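A sketch of the Haar preprocessing step only, assuming the PyWavelets library; the local Zernike-moment features and the index construction itself are not shown:

```python
# Single-level 2-D Haar decomposition whose approximation band would feed
# the subsequent local feature computation.
import numpy as np
import pywt

image = np.random.default_rng(6).random((128, 128))   # toy grayscale image
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")           # approximation + detail bands
print(cA.shape)                                       # (64, 64): halved resolution
```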
Xu, Henglong; Jiang, Yong; Xu, Guangjian
2016-11-15
Body-size spectra have proved to be a useful taxon-free representation for summarizing community structure in bioassessment. The spatial variations in annual cycles of the body-size spectra of planktonic ciliates and their environmental drivers were studied based on an annual dataset. Samples were collected biweekly at five stations in a bay of the Yellow Sea, northern China, during a 1-year cycle. Based on a multivariate approach, the second-stage analysis, it was shown that the annual cycles of the body-size spectra differed significantly among the five sampling stations. Correlation analysis demonstrated that the spatial variations in the body-size spectra were significantly related to changes in environmental conditions, especially dissolved nitrogen, alone or in combination with salinity and dissolved oxygen. Based on these results, it is suggested that nutrients may be the environmental drivers shaping the spatial variations in annual cycles of planktonic ciliates in terms of body-size spectra in marine ecosystems. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Todling, Ricardo
2015-01-01
Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.
Meslamani, Jamel; Rognan, Didier; Kellenberger, Esther
2011-05-01
The sc-PDB database is an annotated archive of druggable binding sites extracted from the Protein Data Bank. It contains all-atom coordinates for 8166 protein-ligand complexes, chosen for their geometrical and physico-chemical properties. The sc-PDB provides a functional annotation for proteins, a chemical description for ligands and the detailed intermolecular interactions for complexes. The sc-PDB now includes a hierarchical classification of all the binding sites within a functional class. The sc-PDB entries were first clustered according to the protein name, irrespective of the species. For each cluster, we identified dissimilar sites (e.g. catalytic and allosteric sites of an enzyme). SCOPE AND APPLICATIONS: The classification of sc-PDB targets by binding site diversity is intended to facilitate chemogenomics approaches to drug design. In ligand-based approaches, it avoids comparing ligands that do not share the same binding site. In structure-based approaches, it permits quantitative evaluation of the diversity of the binding site definition (variations in size, sequence and/or structure). The sc-PDB database is freely available at: http://bioinfo-pharma.u-strasbg.fr/scPDB.
Emerman, Amy B; Bowman, Sarah K; Barry, Andrew; Henig, Noa; Patel, Kruti M; Gardner, Andrew F; Hendrickson, Cynthia L
2017-07-05
Next-generation sequencing (NGS) is a powerful tool for genomic studies, translational research, and clinical diagnostics that enables the detection of single nucleotide polymorphisms, insertions and deletions, copy number variations, and other genetic variations. Target enrichment technologies improve the efficiency of NGS by sequencing only regions of interest, which reduces sequencing costs while increasing coverage of the selected targets. Here we present NEBNext Direct®, a hybridization-based target-enrichment approach that addresses many of the shortcomings of traditional target-enrichment methods. This approach features a simple, 7-hr workflow that uses enzymatic removal of off-target sequences to achieve a high specificity for regions of interest. Additionally, unique molecular identifiers are incorporated for the identification and filtering of PCR duplicates. The same protocol can be used across a wide range of input amounts, input types, and panel sizes, enabling NEBNext Direct to be broadly applicable across a wide variety of research and diagnostic needs. © 2017 by John Wiley & Sons, Inc.
Face recognition via edge-based Gabor feature representation for plastic surgery-altered images
NASA Astrophysics Data System (ADS)
Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.
2014-12-01
Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as the eyes, nose, eyebrows, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which depends on the shapes of the significant facial components, to address the texture variation problems induced by plastic surgery. To ensure that the significant facial components yield useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases, with illumination and expression problems, and on the plastic surgery database, with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
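A minimal sketch of the edge-then-Gabor pipeline using OpenCV; the file name, edge thresholds, and Gabor parameters are illustrative, and the paper's illumination normalization step is omitted:

```python
import cv2
import numpy as np

face = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
assert face is not None, "example expects a grayscale face image on disk"
edges = cv2.Canny(face, 80, 160).astype(np.float32) / 255.0

features = []
for theta in np.arange(0, np.pi, np.pi / 8):          # 8 orientations
    kernel = cv2.getGaborKernel((31, 31), 4.0, theta, 10.0, 0.5, 0, ktype=cv2.CV_32F)
    features.append(cv2.filter2D(edges, cv2.CV_32F, kernel))

feature_stack = np.stack(features)                    # Gabor responses on the edge map
```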
Modelling vortex-induced fluid-structure interaction.
Benaroya, Haym; Gabbai, Rene D
2008-04-13
The principal goal of this research is developing physics-based, reduced-order, analytical models of nonlinear fluid-structure interactions associated with offshore structures. Our primary focus is to generalize Hamilton's variational framework so that systems of flow-oscillator equations can be derived from first principles. This is an extension of earlier work that led to a single energy equation describing the fluid-structure interaction. It is demonstrated here that flow-oscillator models are a subclass of the general, physics-based framework. A flow-oscillator model is a reduced-order mechanical model, generally comprising two mechanical oscillators, one modelling the structural oscillation and the other a nonlinear oscillator representing the fluid behaviour coupled to the structural motion. Reduced-order analytical model development continues to be carried out using a Hamilton's principle-based variational approach. This provides flexibility in the long run for generalizing the modelling paradigm to complex, three-dimensional problems with multiple degrees of freedom, although such extension is very difficult. As both experimental and analytical capabilities advance, the critical research path to developing and implementing fluid-structure interaction models entails formulating generalized equations of motion, as a superset of the flow-oscillator models, and developing experimentally derived, semi-analytical functions to describe key terms in the governing equations of motion. The developed variational approach yields a system of governing equations that will allow modelling of multiple-degree-of-freedom systems. The extensions derived generalize Hamilton's variational formulation for such problems. The Navier-Stokes equations are derived and coupled to the structural oscillator. This general model has been shown to be a superset of the flow-oscillator model. Based on different assumptions, one can derive a variety of flow-oscillator models.
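As an illustration of the flow-oscillator class discussed above (not the authors' variationally derived system), a widely used form couples a structural oscillator to a van der Pol wake oscillator:

```latex
% Structural displacement y is forced by the wake variable q, which obeys a
% van der Pol equation forced by the structural acceleration; \omega_s is the
% vortex-shedding frequency, and A/D is an empirical coupling coefficient
% scaled by the cylinder diameter D. Representative of the class only.
\begin{align}
  m\ddot{y} + c\dot{y} + ky &= F_0\, q(t),\\
  \ddot{q} + \varepsilon\,\omega_s\left(q^2 - 1\right)\dot{q} + \omega_s^2\, q &= \frac{A}{D}\,\ddot{y}
\end{align}
```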
NASA Astrophysics Data System (ADS)
Yu, Chang Ho; Fan, Zhihua; Lioy, Paul J.; Baptista, Ana; Greenberg, Molly; Laumbach, Robert J.
2016-09-01
Air concentrations of traffic-related air pollutants (TRAPs) vary in space and time within urban communities, presenting challenges for estimating human exposure and potential health effects. Conventional stationary monitoring stations/networks cannot effectively capture spatial characteristics. Alternatively, mobile monitoring approaches have become popular for measuring TRAPs along roadways or roadsides. However, these linear mobile monitoring approaches cannot thoroughly distinguish spatial variability from temporal variations in monitored TRAP concentrations. In this study, we used a novel mobile monitoring approach to simultaneously characterize spatial/temporal variations in roadside concentrations of TRAPs in urban settings. We evaluated the effectiveness of this mobile monitoring approach by performing concurrent measurements along two parallel paths perpendicular to a major roadway and/or along heavily trafficked roads at a very narrow scale (one block apart) within a short time period (<30 min) in an urban community. Based on traffic and particulate matter (PM) source information, we selected 4 neighborhoods to study. The sampling activities utilized real-time monitors, including a battery-operated PM2.5 monitor (SidePak), a condensation particle counter (CPC 3007), a black carbon (BC) monitor (Micro-Aethalometer), a carbon monoxide (CO) monitor (Langan T15), a portable temperature/humidity data logger (HOBO U12), and a GPS-based tracker (Trackstick). Sampling was conducted for ∼3 h in the morning (7:30-10:30) on 7 separate days in March/April and 6 days in May/June 2012. Two simultaneous samplings were made at 5 spatially distributed locations on parallel roads, usually one block apart, in each neighborhood. The 5-min averaged BC concentrations (AVG ± SD, [range]) were 2.53 ± 2.47 [0.09-16.3] μg/m3, particle number concentrations (PNC) were 33,330 ± 23,451 [2512-159,130] particles/cm3, PM2.5 mass concentrations were 8.87 ± 7.65 [0.27-46.5] μg/m3, and CO concentrations were 1.22 ± 0.60 [0.22-6.29] ppm in the community. The traffic-related air pollutants BC and PNC, but not PM2.5 or CO, varied spatially depending on proximity to local stationary/mobile sources. Seasonal differences were observed for all four TRAPs, which were significantly higher in the colder months than in the warmer months. The coefficients of variation (CVs) in concurrent measurements from the two parallel routes were around 0.21 ± 0.17, and the variation was attributed to meteorological variation (25%), temporal variability (19%), concentration level (6%), and spatial variability (2%). Overall, the study findings suggest this mobile monitoring approach can effectively capture and distinguish spatial/temporal characteristics in TRAP concentrations for communities impacted by heavy motor vehicle traffic and mixed urban air pollution sources.
Quantifying Individual Variation in the Propensity to Attribute Incentive Salience to Reward Cues
Meyer, Paul J.; Lovic, Vedran; Saunders, Benjamin T.; Yager, Lindsay M.; Flagel, Shelly B.; Morrow, Jonathan D.; Robinson, Terry E.
2012-01-01
If reward-associated cues acquire the properties of incentive stimuli they can come to powerfully control behavior, and potentially promote maladaptive behavior. Pavlovian incentive stimuli are defined as stimuli that have three fundamental properties: they are attractive, they are themselves desired, and they can spur instrumental actions. We have found, however, that there is considerable individual variation in the extent to which animals attribute Pavlovian incentive motivational properties (“incentive salience”) to reward cues. The purpose of this paper was to develop criteria for identifying and classifying individuals based on their propensity to attribute incentive salience to reward cues. To do this, we conducted a meta-analysis of a large sample of rats (N = 1,878) subjected to a classic Pavlovian conditioning procedure. We then used the propensity of animals to approach a cue predictive of reward (one index of the extent to which the cue was attributed with incentive salience), to characterize two behavioral phenotypes in this population: animals that approached the cue (“sign-trackers”) vs. others that approached the location of reward delivery (“goal-trackers”). This variation in Pavlovian approach behavior predicted other behavioral indices of the propensity to attribute incentive salience to reward cues. Thus, the procedures reported here should be useful for making comparisons across studies and for assessing individual variation in incentive salience attribution in small samples of the population, or even for classifying single animals. PMID:22761718
Méndez-Vigo, Belén; Picó, F Xavier; Ramiro, Mercedes; Martínez-Zapater, José M; Alonso-Blanco, Carlos
2011-12-01
Extensive natural variation has been described for the timing of flowering initiation in many annual plants, including the model wild species Arabidopsis (Arabidopsis thaliana), which is presumed to be involved in adaptation to different climates. However, the environmental factors that might shape this genetic variation, as well as the molecular bases of climatic adaptation by modifications of flowering time, remain mostly unknown. To approach both goals, we characterized the flowering behavior in relation to vernalization of 182 Arabidopsis wild genotypes collected in a native region spanning a broad climatic range. Phenotype-environment association analyses identified strong altitudinal clines (0-2600 m) in seven out of nine flowering-related traits. Altitudinal clines were dissected in terms of minimum winter temperature and precipitation, indicating that these are the main climatic factors that might act as selective pressures on flowering traits. In addition, we used an association analysis approach with four candidate genes, FRIGIDA (FRI), FLOWERING LOCUS C (FLC), PHYTOCHROME C (PHYC), and CRYPTOCHROME2, to decipher the genetic bases of this variation. Eleven different loss-of-function FRI alleles of low frequency accounted for up to 16% of the variation for most traits. Furthermore, an FLC allelic series of six novel putative loss- and change-of-function alleles, with low to moderate frequency, revealed that a broader FLC functional diversification might contribute to flowering variation. Finally, environment-genotype association analyses showed that the spatial patterns of FRI, FLC, and PHYC polymorphisms are significantly associated with winter temperatures and spring and winter precipitations, respectively. These results support that allelic variation in these genes is involved in climatic adaptation.
Panoptes: web-based exploration of large scale genome variation data.
Vauterin, Paul; Jeffery, Ben; Miles, Alistair; Amato, Roberto; Hart, Lee; Wright, Ian; Kwiatkowski, Dominic
2017-10-15
The size and complexity of modern large-scale genome variation studies demand novel approaches for exploring and sharing the data. In order to unlock the potential of these data for a broad audience of scientists with various areas of expertise, a unified exploration framework is required that is accessible, coherent and user-friendly. Panoptes is an open-source software framework for collaborative visual exploration of large-scale genome variation data and associated metadata in a web browser. It relies on technology choices that allow it to operate in near real-time on very large datasets. It can be used to browse rich, hybrid content in a coherent way, and offers interactive visual analytics approaches to assist the exploration. We illustrate its application using genome variation data of Anopheles gambiae, Plasmodium falciparum and Plasmodium vivax. Freely available at https://github.com/cggh/panoptes, under the GNU Affero General Public License. paul.vauterin@gmail.com. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Li, Xingyu; Plataniotis, Konstantinos N
2015-07-01
In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging conditions. Method: Unlike existing normalization methods that either address a partial cause of color variation or lump the causes together, our method identifies the causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining with an illuminant normalization module and a spectral normalization module, respectively. In the evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method in terms of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness with respect to preserving histological information. As the saturation-weighted statistics proposed in this study generate stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods, as the proposed method is the only approach that succeeds in preserving histological information after normalization. The proposed color normalization solution should be useful for mitigating the effects of color variation in pathology images on subsequent quantitative analysis.
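One common illuminant-normalization step (gray-world white balance), shown as a generic stand-in for the paper's illuminant module; the spectral (stain) normalization module is not reproduced:

```python
import numpy as np

def gray_world(rgb):
    """Scale each channel so the image's mean color becomes neutral gray."""
    rgb = rgb.astype(np.float64)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(rgb * gains, 0, 255).astype(np.uint8)

slide = (np.random.default_rng(7).random((64, 64, 3)) * 255).astype(np.uint8)
balanced = gray_world(slide)   # toy RGB tile, white-balanced
```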
Rodushkin, I; Bergman, T; Douglas, G; Engström, E; Sörlin, D; Baxter, D C
2007-02-05
Different analytical approaches for differentiating the origin of vendace and whitefish caviars from brackish and fresh waters were tested using inductively coupled plasma double-focusing sector field mass spectrometry (ICP-SFMS) and multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS). These approaches involve identifying differences in elemental concentrations or sample-specific variations in isotopic composition (Sr and Os). Concentrations of 72 elements were determined by ICP-SFMS following microwave-assisted digestion in vendace and whitefish caviar samples from Sweden (from both brackish and fresh water), Finland and the USA, as well as in unprocessed vendace roe and the salt used in caviar production. This data set allows identification of elements whose contents in caviar can be affected by salt addition as well as by contamination during production and packaging. Long-term method reproducibility was assessed for all analytes based on replicate caviar preparations/analyses, and variations in element concentrations in caviar from different harvests were evaluated. The greatest utility for differentiation was demonstrated for elements whose concentrations differ between brackish and fresh waters (e.g. As, Br, Sr). Elemental ratios, specifically Sr/Ca, Sr/Mg and Sr/Ba, are especially useful for authentication of vendace caviar processed from brackish-water roe, owing to the significant differences between caviars from different sources, the limited between-harvest variations, and the relatively high concentrations in samples, allowing precise determination by modern analytical instrumentation. The variation in the 87Sr/86Sr ratio for vendace caviar from different harvests (on the order of 0.05-0.1%) is at least 10-fold smaller than the differences between caviars processed from brackish- and freshwater roe. Hence, Sr isotope ratio measurements (either by ICP-SFMS or by MC-ICP-MS) have great potential for origin differentiation. By contrast, it was impossible to differentiate between Swedish caviar processed from brackish-water roe and Finnish freshwater caviar based solely on 187Os/188Os ratios.
Caumes, Géraldine; Borrel, Alexandre; Abi Hussein, Hiba; Camproux, Anne-Claude; Regad, Leslie
2017-09-01
Small molecules interact with their protein target in surface cavities known as binding pockets. Pocket-based approaches are very useful in all phases of drug design. Their first step is estimating the binding pocket based on the protein structure. The available pocket-estimation methods produce different pockets for the same target. The aim of this work is to investigate the effects of different pocket-estimation methods on the results of pocket-based approaches. We focused on the effect of three pocket-estimation methods on a pocket-ligand (PL) classification. This pocket-based approach is useful for understanding the correspondence between the pocket and ligand spaces and for developing pharmacological profiling models. We found that pocket-estimation methods yield different binding pockets in terms of boundaries and properties. These differences are responsible for the variation in the PL classification results, which can have an impact on the detected correspondence between pocket and ligand profiles. Thus, we highlight the importance of the choice of pocket-estimation method in pocket-based approaches. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Chen, Cheng; Wang, Wei; Ozolek, John A.; Rohde, Gustavo K.
2013-01-01
We describe a new supervised learning-based template matching approach for segmenting cell nuclei from microscopy images. The method uses examples selected by a user to build a statistical model that captures the texture and shape variations of the nuclear structures in a given dataset to be segmented. Segmentation of subsequent, unlabeled images is then performed by finding the model instance that best matches (in the normalized cross correlation sense) the local neighborhood in the input image. We demonstrate the application of our method to segmenting nuclei from a variety of imaging modalities, and quantitatively compare our results to several other methods. Quantitative results using both simulated and real image data show that, while certain methods may work well for certain imaging modalities, our software is able to obtain high accuracy across all the imaging modalities studied. Results also demonstrate that, relative to several existing methods, the proposed template-based method is more robust in the sense of better handling variations in illumination and variations in texture from different imaging modalities, providing smoother and more accurate segmentation borders, and better handling cluttered nuclei. PMID:23568787
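The matching criterion in isolation, as a hedged sketch using scikit-image: normalized cross-correlation of a template (standing in for a learned model instance) against an input image. The statistical shape/texture model training is not shown:

```python
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(8)
image = rng.random((200, 200))
template = image[50:70, 80:100].copy()         # pretend this is a model instance

ncc = match_template(image, template)          # normalized cross-correlation map
row, col = np.unravel_index(np.argmax(ncc), ncc.shape)
print("best match at", (row, col), "score", ncc.max())
```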
Still-to-video face recognition in unconstrained environments
NASA Astrophysics Data System (ADS)
Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing
2015-02-01
Face images from video sequences captured in unconstrained environments usually contain several kinds of variation, e.g. in pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems, such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearance and the limited number of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are introduced to avoid overfitting. In order to deal with the single-image-per-person problem, we exploit face variations learned from training sets to synthesize virtual samples for the gallery samples. We adopt a learning algorithm combining both an affine/convex hull-based approach and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method outperforms state-of-the-art methods impressively.
Reasoning over genetic variance information in cause-and-effect models of neurodegenerative diseases
Naz, Mufassra; Kodamullil, Alpha Tom
2016-01-01
The work we present here is based on the recent extension of the syntax of the Biological Expression Language (BEL), which now allows for the representation of genetic variation information in cause-and-effect models. In our article, we describe how genetic variation information can be used to identify candidate disease mechanisms in diseases with complex aetiology such as Alzheimer's disease and Parkinson's disease. In those diseases, we have to assume that many genetic variants contribute moderately to the overall dysregulation, which in the case of neurodegenerative diseases has such a long incubation time until the first clinical symptoms are detectable. Owing to the multilevel nature of dysregulation events, systems biomedicine modelling approaches need to combine mechanistic information from various levels, including gene expression, microRNA (miRNA) expression, protein–protein interaction, genetic variation and pathways. OpenBEL, the open source version of BEL, has recently been extended to meet this requirement, and we demonstrate in our article how candidate mechanisms for early dysregulation events in Alzheimer's disease can be identified based on an integrative mining approach that identifies 'chains of causation' that include single nucleotide polymorphism information in BEL models. PMID:26249223
Spectral reconstruction for shifted-excitation Raman difference spectroscopy (SERDS).
Guo, Shuxia; Chernavskaia, Olga; Popp, Jürgen; Bocklitz, Thomas
2018-08-15
Fluorescence emission is one of the major obstacles to applying Raman spectroscopy in biological investigations. It is usually several orders of magnitude more intense than Raman scattering and hampers further analysis. In cases where the fluorescence emission is too intense to be efficiently removed via routine mathematical baseline correction algorithms, an alternative approach is needed. One such approach is shifted-excitation Raman difference spectroscopy (SERDS), where two Raman spectra are recorded with two slightly different excitation wavelengths. Ideally, the fluorescence emission at the two excitations does not change, while the Raman spectrum shifts according to the excitation wavelength. Hence the fluorescence is removed in the difference of the two recorded Raman spectra. For better interpretability, a spectral reconstruction procedure is necessary to recover the fluorescence-free Raman spectrum. This is challenging due to the intensity variations between the two recorded Raman spectra, caused by unavoidable experimental changes, as well as the presence of noise. Existing approaches suffer from drawbacks like loss of spectral resolution, fluorescence residuals, and artefacts. In this contribution, we propose a reconstruction method based on non-negative least squares (NNLS), where the intensity variations between the two measurements are utilized in the reconstruction model. The method achieved fluorescence-free reconstruction on three real-world SERDS datasets without significant information loss. Thereafter, we quantified the performance of the reconstruction on artificial datasets from four aspects: reconstructed spectral resolution, precision of reconstruction, signal-to-noise ratio (SNR), and fluorescence residual. The artificial datasets were constructed with varied Raman-to-fluorescence intensity ratio (RFIR), SNR, full-width at half-maximum (FWHM), excitation wavelength shift, and fluorescence variation between the two spectra. It was demonstrated that the NNLS approach provides a faithful reconstruction without significantly changing the spectral resolution. Meanwhile, the reconstruction is largely robust to fluorescence variations between the two spectra. Last but not least, the SNR was improved after reconstruction for extremely noisy SERDS datasets. Copyright © 2018 Elsevier B.V. All rights reserved.
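A hedged sketch of NNLS-based SERDS reconstruction on synthetic data: the difference of the two shifted-excitation spectra is modeled as (I - S_d) x, where S_d shifts by the excitation offset d, and the fluorescence-free spectrum x is recovered with non-negative least squares. The intensity-variation weighting described in the paper is omitted, and a circular shift is used for simplicity:

```python
import numpy as np
from scipy.optimize import nnls

n, d = 300, 4                                   # channels, excitation shift in pixels
x_true = np.zeros(n)
for pos, amp in [(80, 1.0), (150, 0.6), (220, 0.8)]:
    x_true += amp * np.exp(-0.5 * ((np.arange(n) - pos) / 3.0) ** 2)  # Raman bands
fluor = 5.0 * np.exp(-np.arange(n) / 200.0)     # broad fluorescence background

y1 = x_true + fluor
y2 = np.roll(x_true, d) + fluor                 # Raman shifts, fluorescence does not

D = np.eye(n) - np.roll(np.eye(n), d, axis=0)   # (I - S_d) difference operator
x_rec, _ = nnls(D, y1 - y2)                     # fluorescence cancels in the difference
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

The non-negativity constraint is what resolves the ambiguity left by the singular difference operator: among all spectra consistent with the difference, NNLS favors the sparse, baseline-free one, which is exactly the Raman line spectrum.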
NASA Astrophysics Data System (ADS)
Rodionov, S. N.; Martin, J. H.
1999-07-01
A novel approach to climate forecasting on an interannual time scale is described. The approach is based on concepts and techniques from artificial intelligence and expert systems. The suitability of this approach to climate diagnostics and forecasting problems and its advantages compared with conventional forecasting techniques are discussed. The article highlights some practical aspects of the development of climatic expert systems (CESs) and describes an implementation of such a system for the North Atlantic (CESNA). Particular attention is paid to the content of CESNA's knowledge base and those conditions that make climatic forecasts one to several years in advance possible. A detailed evaluation of the quality of the experimental real-time forecasts made by CESNA for the winters of 1995-1996, 1996-1997 and 1997-1998 is presented.
A Web-Based Genetic Polymorphism Learning Approach for High School Students and Science Teachers
ERIC Educational Resources Information Center
Amenkhienan, Ehichoya; Smith, Edward J.
2006-01-01
Variation and polymorphism are concepts that are central to genetics and genomics, primary biological disciplines in which high school students and undergraduates require a solid foundation. From 1998 through 2002, a web-based genetics education program was developed for high school teachers and students. The program included an exercise on using…
ERIC Educational Resources Information Center
Basturkmen, Helen
2012-01-01
Outwardly the rhetorical organisation of sections of research reports in different disciplines can appear similar. Close examination, however, may reveal subtle differences. Numerous studies have drawn on the genre-based approach developed by Swales (1990, 2004) to investigate the schematic structure of sections of articles in a range of…
ERIC Educational Resources Information Center
Gao, Su; Wang, Jian
2016-01-01
Students' frequent exposure to inquiry-based science teaching is presumed more effective than their exposure to traditional didactic instruction in helping improve competence in content knowledge and problem solving. Framed through theoretical perspectives of inquiry-based instruction and culturally relevant pedagogy, this study examines this…
Bilingual Children's Acquisition of the Past Tense: A Usage-Based Approach
ERIC Educational Resources Information Center
Paradis, Johanne; Nicoladis, Elena; Crago, Martha; Genesee, Fred
2011-01-01
Bilingual and monolingual children's (mean age = 4;10) elicited production of the past tense in both English and French was examined in order to test predictions from Usage-Based theory regarding the sensitivity of children's acquisition rates to input factors such as variation in exposure time and the type/token frequency of morphosyntactic…
ERIC Educational Resources Information Center
Aditomo, Anindito; Goodyear, Peter; Bliuc, Ana-Maria; Ellis, Robert A.
2013-01-01
Learning through inquiry is a widely advocated pedagogical approach. However, there is currently little systematic knowledge about the practice of inquiry-based learning (IBL) in higher education. This study examined descriptions of learning tasks that were put forward as examples of IBL by 224 university teachers from various disciplines in three…
Warner, Daniel A
2014-11-01
Environmental factors strongly influence phenotypic variation within populations. The environment contributes to this variation in two ways: (1) by acting as a determinant of phenotypic variation (i.e., plastic responses) and (2) as an agent of selection that "chooses" among existing phenotypes. Understanding how these two environmental forces contribute to phenotypic variation is a major goal in the field of evolutionary biology and a primary objective of my research program. The objective of this article is to provide a framework to guide studies of environmental sources of phenotypic variation (specifically, developmental plasticity and maternal effects, and their adaptive significance). Two case studies from my research on reptiles are used to illustrate the general approaches I have taken to address these conceptual topics. Some key points for advancing our understanding of environmental influences on phenotypic variation include (1) merging laboratory-based research that identifies specific environmental effects with field studies to validate ecological relevance; (2) using controlled experimental approaches that mimic complex environments found in nature; (3) integrating data across biological fields (e.g., genetics, morphology, physiology, behavior, and ecology) under an evolutionary framework to provide novel insights into the underlying mechanisms that generate phenotypic variation; (4) assessing fitness consequences using measurements of survival and/or reproductive success across ontogeny (from embryos to adults) and under multiple ecologically-meaningful contexts; and (5) quantifying the strength and form of natural selection in multiple populations over multiple periods of time to understand the spatial and temporal consistency of phenotypic selection. Research programs that focus on organisms that are amenable to these approaches will provide the most promise for advancing our understanding of the environmental factors that generate the remarkable phenotypic diversity observed within populations. © The Author 2014. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.
Damage assessment in multilayered MEMS structures under thermal fatigue
NASA Astrophysics Data System (ADS)
Maligno, A. R.; Whalley, D. C.; Silberschmidt, V. V.
2011-07-01
This paper reports on the application of a Physics of Failure (PoF) methodology to assessing the reliability of a micro electro mechanical system (MEMS). Numerical simulations, based on the finite element method (FEM) with a sub-domain approach, were used to examine the onset of damage due to temperature variations (e.g. yielding of metals, which may lead to thermal fatigue). In this work, remeshing techniques were employed in order to develop a damage tolerance approach based on the assumption that initial flaws exist in the multi-layered structure.
A Non-parametric Approach to Constrain the Transfer Function in Reverberation Mapping
NASA Astrophysics Data System (ADS)
Li, Yan-Rong; Wang, Jian-Min; Bai, Jin-Ming
2016-11-01
Broad emission lines of active galactic nuclei stem from a spatially extended region (broad-line region, BLR) that is composed of discrete clouds and photoionized by the central ionizing continuum. The temporal behaviors of these emission lines are blurred echoes of continuum variations (i.e., reverberation mapping, RM) and directly reflect the structures and kinematic information of BLRs through the so-called transfer function (also known as the velocity-delay map). Based on the previous works of Rybicki and Press and Zu et al., we develop an extended, non-parametric approach to determine the transfer function for RM data, in which the transfer function is expressed as a sum of a family of relatively displaced Gaussian response functions. Therefore, arbitrary shapes of transfer functions associated with complicated BLR geometry can be seamlessly included, enabling us to relax the presumption of a specified transfer function frequently adopted in previous studies and to let it be determined by observation data. We formulate our approach in a previously well-established framework that incorporates the statistical modeling of continuum variations as a damped random walk process and takes into account long-term secular variations which are irrelevant to RM signals. The application to RM data shows the fidelity of our approach.
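As a rough illustration of the linear structure of this approach, the sketch below (Python, with illustrative variable names) expresses the transfer function as non-negative weights on a family of displaced Gaussians and fits them to an observed line light curve; the paper's full method additionally models the continuum as a damped random walk and removes long-term secular trends, which this sketch omits.

```python
import numpy as np
from scipy.optimize import nnls

def fit_transfer_function(t_line, line_flux, t_cont, cont_flux,
                          lag_centers, width):
    """Fit non-negative weights w_k of a transfer function
    Psi(tau) = sum_k w_k * exp(-(tau - tau_k)^2 / (2 width^2))
    so that (Psi * continuum)(t) matches the line light curve."""
    taus = np.linspace(0.0, max(lag_centers) + 3 * width, 200)
    dtau = taus[1] - taus[0]
    # continuum interpolated at every (t - tau) pair
    c = np.interp(np.subtract.outer(t_line, taus), t_cont, cont_flux)
    X = np.zeros((len(t_line), len(lag_centers)))
    for k, mu in enumerate(lag_centers):
        psi = np.exp(-0.5 * ((taus - mu) / width) ** 2)
        X[:, k] = c @ psi * dtau          # column: continuum echoed at lag mu
    w, _ = nnls(X, np.asarray(line_flux, float))
    return w
```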
Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei
2015-11-01
Expert physicians develop their own ways of doing things. The influence of such practice variation in clinical learning is insufficiently understood. Our grounded theory study explored how residents make sense of, and behave in relation to, the procedural variations of faculty surgeons. We sampled senior postgraduate surgical residents to construct a theoretical framework for how residents make sense of procedural variations. Using a constructivist grounded theory approach, we used marginal participant observation in the operating room across 56 surgical cases (146 hours), field interviews (38), and formal interviews (6) to develop a theoretical framework for residents' ways of dealing with procedural variations. Data analysis used constant comparison to iteratively refine the framework and data collection until theoretical saturation was reached. The core category of the constructed theory was called thresholds of principle and preference and it captured how faculty members position some procedural variations as negotiable and others not. The term thresholding was coined to describe residents' daily experiences of spotting, mapping, and negotiating their faculty members' thresholds and defending their own emerging thresholds. Thresholds of principle and preference play a key role in workplace-based medical education. Postgraduate medical learners are occupied on a day-to-day level with thresholding and attempting to make sense of the procedural variations of faculty. Workplace-based teaching and assessment should include an understanding of the integral role of thresholding in shaping learners' development. Future research should explore the nature and impact of thresholding in workplace-based learning beyond the surgical context.
Despeckling Polsar Images Based on Relative Total Variation Model
NASA Astrophysics Data System (ADS)
Jiang, C.; He, X. F.; Yang, L. J.; Jiang, J.; Wang, D. Y.; Yuan, Y.
2018-04-01
The relative total variation (RTV) algorithm, which can effectively separate structure information from texture in an image, is employed to extract the main structures of the image. However, applying RTV directly to polarimetric SAR (PolSAR) image filtering will not preserve polarimetric information. A new RTV approach based on the complex Wishart distribution is therefore proposed that takes the polarimetric properties of PolSAR data into account. The proposed polarization RTV (PolRTV) algorithm can be used for PolSAR image filtering. L-band Airborne SAR (AIRSAR) San Francisco data are used to demonstrate the effectiveness of the proposed algorithm in speckle suppression, structural information preservation, and polarimetric property preservation.
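For intuition, the sketch below computes a scalar relative-total-variation measure on an intensity image, assuming simple box windows; it shows only the structure-texture discrimination idea, not the Wishart-based PolRTV filtering itself.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def rtv_measure(img, win=5, eps=1e-3):
    """Relative total variation per pixel: windowed total variation D divided
    by windowed 'inherent variation' L. Texture gives large D but small L
    (oscillating gradients cancel in the sum); structure keeps both large."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    Dx = uniform_filter(np.abs(gx), win)
    Dy = uniform_filter(np.abs(gy), win)
    Lx = np.abs(uniform_filter(gx, win))
    Ly = np.abs(uniform_filter(gy, win))
    return Dx / (Lx + eps) + Dy / (Ly + eps)
```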
NASA Astrophysics Data System (ADS)
Beck, Christoph; Philipp, Andreas; Jacobeit, Jucundus
2015-08-01
This contribution investigates the relationship between the large-scale atmospheric circulation and interannual variations of the standardized precipitation index (SPI) in Central Europe. To this end, circulation types (CT) have been derived from a variety of circulation type classifications (CTC) applied to daily sea level pressure (SLP) data, and mean circulation indices of vorticity (V), zonality (Z) and meridionality (M) have been calculated. Occurrence frequencies of CTs and circulation indices have been utilized as predictors within multiple regression models (MRM) for the estimation of gridded 3-month SPI values over Central Europe, for the period 1950 to 2010. CTC-based MRMs used in the analyses comprise variants concerning the basic method for CT classification, the number of CTs, the size and location of the spatial domain used for CTCs and the exclusive use of CT frequencies or the combined use of CT frequencies and mean circulation indices as predictors. Adequate MRM predictor combinations have been identified by applying stepwise multiple regression analyses within a resampling framework. The performance (robustness) of the resulting MRMs has been quantified based on a leave-one-out cross-validation procedure applying several skill scores. Furthermore, the relative importance of individual predictors has been estimated for each MRM. From these analyses, it can be stated that model skill is improved by (i) the consideration of vorticity characteristics within CTCs, (ii) a relatively small size of the spatial domain to which CTCs are applied and (iii) the inclusion of mean circulation indices. However, model skill exhibits distinct variations between seasons and regions. Whereas promising skill can be stated for the western and northwestern parts of the Central European domain, only unsatisfactory skill is reached in the more continental regions and particularly during summer. Thus, it can be concluded that the presented approaches feature the potential for the downscaling of Central European drought index variations from the large-scale circulation, at least for some regions. Further improvements of CTC-based approaches may be expected from the optimization of CTCs for explaining the SPI, e.g. via the inclusion of additional variables in the classification procedure.
A fully traits-based approach to modeling global vegetation distribution.
van Bodegom, Peter M; Douma, Jacob C; Verheijen, Lieneke M
2014-09-23
Dynamic Global Vegetation Models (DGVMs) are indispensable for our understanding of climate change impacts. The application of traits in DGVMs is increasingly refined. However, a comprehensive analysis of the direct impacts of trait variation on global vegetation distribution does not yet exist. Here, we present such analysis as proof of principle. We run regressions of trait observations for leaf mass per area, stem-specific density, and seed mass from a global database against multiple environmental drivers, making use of findings of global trait convergence. This analysis explained up to 52% of the global variation of traits. Global trait maps, generated by coupling the regression equations to gridded soil and climate maps, showed up to orders of magnitude variation in trait values. Subsequently, nine vegetation types were characterized by the trait combinations that they possess using Gaussian mixture density functions. The trait maps were input to these functions to determine global occurrence probabilities for each vegetation type. We prepared vegetation maps, assuming that the most probable (and thus, most suited) vegetation type at each location will be realized. This fully traits-based vegetation map predicted 42% of the observed vegetation distribution correctly. Our results indicate that a major proportion of the predictive ability of DGVMs with respect to vegetation distribution can be attained by three traits alone if traits like stem-specific density and seed mass are included. We envision that our traits-based approach, our observation-driven trait maps, and our vegetation maps may inspire a new generation of powerful traits-based DGVMs.
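A schematic of the classification step, with hypothetical trait arrays: each vegetation type is characterized by a Gaussian mixture density over the three-trait space, and each grid cell is assigned the type under which its mapped trait combination is most probable.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_cells(traits_by_type, trait_map, n_components=2):
    """traits_by_type: dict mapping vegetation type -> (n_obs, 3) array of
    [leaf mass per area, stem-specific density, seed mass] observations.
    trait_map: (n_cells, 3) array read off the gridded trait maps."""
    models = {t: GaussianMixture(n_components=n_components).fit(X)
              for t, X in traits_by_type.items()}
    types = list(models)
    # log-density of each cell's trait combination under each type's mixture
    scores = np.column_stack([models[t].score_samples(trait_map) for t in types])
    return [types[i] for i in scores.argmax(axis=1)]  # most suited type wins
```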
Probabilistic, meso-scale flood loss modelling
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno
2016-04-01
Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during the last years, they are still not standard practice for flood risk assessments, much less for flood loss modelling. The state of the art in flood loss modelling is still the use of simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al. submitted). The application of bagging decision tree based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken on the one hand via a comparison with eight deterministic loss models including stage-damage functions as well as multi-variate models. On the other hand, the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto A, Kreibich H, Merz B, Schröter K (submitted) Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
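A minimal scikit-learn sketch of the core idea, with hypothetical feature and loss arrays: each tree of the bagged ensemble yields its own prediction, and the spread of these per-tree predictions provides the probability distribution of the estimated loss.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

def fit_loss_model(X_train, y_train, n_trees=100):
    """Bagging decision trees for flood loss estimation."""
    return BaggingRegressor(DecisionTreeRegressor(),
                            n_estimators=n_trees).fit(X_train, y_train)

def loss_distribution(model, X_new):
    """One prediction per tree yields a loss distribution per unit,
    from which a mean and uncertainty bands can be read off."""
    preds = np.stack([est.predict(X_new) for est in model.estimators_])
    return preds.mean(axis=0), np.percentile(preds, [5, 50, 95], axis=0)
```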
NASA Astrophysics Data System (ADS)
Farrokhabadi, A.; Mokhtari, J.; Koochi, A.; Abadyan, M.
2015-06-01
In this paper, the impact of the Casimir attraction on the electromechanical stability of nanowire-fabricated nanotweezers is investigated using a theoretical continuum mechanics model. The Dirichlet mode is considered, and an asymptotic solution based on a path integral approach is applied to include the effect of vacuum fluctuations in the model. The Euler-Bernoulli beam theory is employed to derive the nonlinear governing equation of the nanotweezers. The governing equations are solved by three different approaches, i.e. the modified variational iteration method, the generalized differential quadrature method, and a lumped parameter model. Various perspectives on the problem, including a comparison with the van der Waals force regime, the variation of instability parameters and the effects of geometry, are addressed in the present paper. The proposed approach is beneficial for the precise determination of the electrostatic response of nanotweezers in the presence of the Casimir force.
Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; ...
2015-11-12
Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.
Norlyk, Annelise; Harder, Ingegerd
2010-03-01
This article contributes to the debate about phenomenology as a research approach in nursing by providing a systematic review of what nurse researchers hold as phenomenology in published empirical studies. Based on the assumption that presentations of phenomenological approaches in peer-reviewed journals have consequences for the quality of future research, the aim was to analyze articles presenting phenomenological studies and, in light of the findings, raise a discussion about addressing scientific criteria. The analysis revealed considerable variations, ranging from brief to detailed descriptions of the stated phenomenological approach, and from inconsistencies to methodological clarity and rigor. Variations, apparent inconsistencies, and omissions made it unclear what makes a phenomenological study phenomenological. There is a need to clarify how the principles of phenomenological philosophy are implemented in a particular study before publishing. This should include an articulation of the methodological keywords, of the investigated phenomenon, and of how an open attitude was adopted.
Multi-classification of cell deformation based on object alignment and run length statistic.
Li, Heng; Liu, Zhiwen; An, Xing; Shi, Yonggang
2014-01-01
Cellular morphology is widely applied in digital pathology and is essential for improving our understanding of the basic physiological processes of organisms. One of the main issues in application is to develop efficient methods for measuring cell deformation. We propose an innovative indirect approach to analyze dynamic cell morphology in image sequences. The proposed approach considers both the cellular shape change and the cytoplasm variation, and takes each frame in the image sequence into account. The cell deformation is measured by the minimum energy function of object alignment, which is invariant to object pose. An indirect analysis strategy based on run-length statistics is then employed to overcome the limitation of gradual deformation. We demonstrate the power of the proposed approach with one application: multi-classification of cell deformation. Experimental results show that the proposed method is sensitive to morphology variation and performs better than standard shape representation methods.
Limited-angle effect compensation for respiratory binned cardiac SPECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Wenyuan; Yang, Yongyi, E-mail: yy@ece.iit.edu; Wernick, Miles N.
Purpose: In cardiac single photon emission computed tomography (SPECT), respiratory-binned study is used to combat the motion blur associated with respiratory motion. However, owing to the variability in respiratory patterns during data acquisition, the acquired data counts can vary significantly both among respiratory bins and among projection angles within individual bins. If not properly accounted for, such variation could lead to artifacts similar to limited-angle effect in image reconstruction. In this work, the authors aim to investigate several reconstruction strategies for compensating the limited-angle effect in respiratory binned data for the purpose of reducing the image artifacts. Methods: The authors first consider a model based correction approach, in which the variation in acquisition time is directly incorporated into the imaging model, such that the data statistics are accurately described among both the projection angles and respiratory bins. Afterward, the authors consider an approximation approach, in which the acquired data are rescaled to accommodate the variation in acquisition time among different projection angles while the imaging model is kept unchanged. In addition, the authors also consider the use of a smoothing prior in reconstruction for suppressing the artifacts associated with limited-angle effect. In our evaluation study, the authors first used Monte Carlo simulated imaging with 4D NCAT phantom wherein the ground truth is known for quantitative comparison. The authors evaluated the accuracy of the reconstructed myocardium using a number of metrics, including regional and overall accuracy of the myocardium, uniformity and spatial resolution of the left ventricle (LV) wall, and detectability of perfusion defect using a channelized Hotelling observer. As a preliminary demonstration, the authors also tested the different approaches on five sets of clinical acquisitions. Results: The quantitative evaluation results show that the three compensation methods could all, but to different extents, reduce the reconstruction artifacts over no compensation. In particular, the model based approach reduced the mean-squared-error of the reconstructed myocardium by as much as 40%. Compared to the approach of data rescaling, the model based approach further improved both the overall and regional accuracy of the myocardium; it also further improved the lesion detectability and the uniformity of the LV wall. When ML reconstruction was used, the model based approach was notably more effective for improving the LV wall; when MAP reconstruction was used, the smoothing prior could reduce the noise level and artifacts with little or no increase in bias, but at the cost of a slight resolution loss of the LV wall. The improvements in image quality by the different compensation methods were also observed in the clinical acquisitions. Conclusions: Compensating for the uneven distribution of acquisition time among both projection angles and respiratory bins can effectively reduce the limited-angle artifacts in respiratory-binned cardiac SPECT reconstruction. Direct incorporation of the time variation into the imaging model together with a smoothing prior in reconstruction can lead to the most improvement in the accuracy of the reconstructed myocardium.
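A sketch of the simpler approximation (data-rescaling) strategy, under an assumed projection-array layout; the model-based alternative instead folds the per-angle acquisition times into the imaging model and is not shown here.

```python
import numpy as np

def rescale_projections(proj, t_acq, t_ref=None):
    """Rescale the counts acquired at each projection angle of a respiratory
    bin to a common reference acquisition time, leaving the imaging model
    unchanged. proj: (n_angles, ...) array of binned projections; t_acq:
    per-angle acquisition times (layout assumed for illustration)."""
    t_acq = np.asarray(t_acq, dtype=float)
    if t_ref is None:
        t_ref = t_acq.mean()
    scale = t_ref / np.clip(t_acq, 1e-12, None)
    return proj * scale.reshape(-1, *([1] * (proj.ndim - 1)))
```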
Computational fluid mechanics utilizing the variational principle of modeling damping seals
NASA Technical Reports Server (NTRS)
Abernathy, J. M.
1986-01-01
A computational fluid dynamics code for application to traditional incompressible flow problems has been developed. The method is actually a slight compressibility approach which takes advantage of the bulk modulus and finite sound speed of all real fluids. The finite element numerical analog uses a dynamic differencing scheme based, in part, on a variational principle for computational fluid dynamics. The code was developed in order to study the feasibility of damping seals for high speed turbomachinery. Preliminary seal analyses have been performed.
Finite-temperature Gutzwiller approximation from the time-dependent variational principle
NASA Astrophysics Data System (ADS)
Lanatà, Nicola; Deng, Xiaoyu; Kotliar, Gabriel
2015-08-01
We develop an extension of the Gutzwiller approximation to finite temperatures based on the Dirac-Frenkel variational principle. Our method does not rely on any entropy inequality, and is substantially more accurate than the approaches proposed in previous works. We apply our theory to the single-band Hubbard model at different fillings, and show that our results compare quantitatively well with dynamical mean field theory in the metallic phase. We discuss potential applications of our technique within the framework of first-principle calculations.
Thermodynamical properties of liquid lanthanides-A variational approach
NASA Astrophysics Data System (ADS)
Patel, H. P.; Thakor, P. B.; Sonvane, Y. A.
2015-06-01
Thermodynamical properties like entropy (S), internal energy (E) and Helmholtz free energy (F) of liquid lanthanides, obtained using a variational principle based on the Gibbs-Bogoliubov (GB) inequality with a Percus-Yevick hard-sphere reference system, are reported in the present investigation. To describe the electron-ion interaction we have used our newly constructed parameter-free model potential along with the Sarkar et al. local field correction function. We conclude that our newly constructed model potential is capable of explaining the thermodynamical properties of liquid lanthanides.
Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks
NASA Astrophysics Data System (ADS)
Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.
2015-03-01
The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which are to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
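As a small illustration of the error-propagation step in the indirect strategy (the simple product model and the independence of component errors are assumptions of this sketch, not the paper's exact component models):

```python
import numpy as np

def soc_stock_with_error(conc, conc_se, bd, bd_se, depth_m):
    """Indirect SOC stock (kg m^-2) from organic carbon concentration
    (kg kg^-1) and bulk density (kg m^-3) over a layer of given depth, with
    standard error propagation for a product of independent estimates."""
    stock = conc * bd * depth_m
    rel_var = (conc_se / conc) ** 2 + (bd_se / bd) ** 2
    return stock, stock * np.sqrt(rel_var)
```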
Quantification of the tissue-culture induced variation in barley (Hordeum vulgare L.)
Bednarek, Piotr T; Orłowska, Renata; Koebner, Robert MD; Zimny, Janusz
2007-01-01
Background: When plant tissue is passaged through in vitro culture, many regenerated plants appear to be no longer clonal copies of their donor genotype. Among the factors that affect this so-called tissue culture induced variation are explant genotype, explant tissue origin, medium composition, and the length of time in culture. Variation is understood to be generated via a combination of genetic and/or epigenetic changes. A lack of any phenotypic variation between regenerants does not necessarily imply a concomitant lack of genetic (or epigenetic) change, and it is therefore of interest to assay the outcomes of tissue culture at the genotypic level. Results: A variant of methylation-sensitive AFLP, based on the isoschizomeric combinations Acc65I/MseI and KpnI/MseI, was applied to analyze, at both the sequence and methylation levels, the outcomes of regeneration from tissue culture in barley. Both sequence mutation and alteration in methylation pattern were detected. Two sets of regenerants from each of five DH donor lines were compared. One set was derived via androgenesis, and the other via somatic embryogenesis, developed from immature embryos. These comparisons delivered a quantitative assessment of the various types of somaclonal variation induced. The average level of variation was 6%, of which almost 1.7% could be accounted for by nucleotide mutation, and the remainder by changes in methylation state. The nucleotide mutation rates and the rate of epimutations were substantially similar between the andro- and embryo-derived sets of regenerants across all the donors. Conclusion: We have developed an AFLP-based approach that is capable of describing the qualitative and quantitative characteristics of the tissue culture-induced variation. We believe that this approach will find particular value in the study of patterns of inheritance of somaclonal variation, since non-heritable variation is of little interest for the improvement of plant species which are sexually propagated. Of significant biological interest is the conclusion that the mode of regeneration has no significant effect on the balance between sequence and methylation state change induced by the tissue culture process. PMID:17335560
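A toy sketch of how such isoschizomer band profiles might be scored; the scoring rule below is a simplification assumed for illustration (a change detected in both assays is counted as a sequence mutation, a change in only one as a methylation-state change), not the paper's full classification of banding-pattern classes.

```python
import numpy as np

def classify_variation(donor_acc, donor_kpn, regen_acc, regen_kpn):
    """Compare binary AFLP band profiles (1 = band present) of a regenerant
    against its donor for the Acc65I/MseI and KpnI/MseI assays."""
    diff_acc = donor_acc != regen_acc
    diff_kpn = donor_kpn != regen_kpn
    seq_mut = np.sum(diff_acc & diff_kpn)   # both assays changed
    methyl = np.sum(diff_acc ^ diff_kpn)    # exactly one assay changed
    total = donor_acc.size
    return {"sequence_%": 100.0 * seq_mut / total,
            "methylation_%": 100.0 * methyl / total}
```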
Assessing Variations in Areal Organization for the Intrinsic Brain: From Fingerprints to Reliability
Xu, Ting; Opitz, Alexander; Craddock, R. Cameron; Wright, Margaret J.; Zuo, Xi-Nian; Milham, Michael P.
2016-01-01
Resting state fMRI (R-fMRI) is a powerful in-vivo tool for examining the functional architecture of the human brain. Recent studies have demonstrated the ability to characterize transitions between functionally distinct cortical areas through the mapping of gradients in intrinsic functional connectivity (iFC) profiles. To date, this novel approach has primarily been applied to iFC profiles averaged across groups of individuals, or in one case, a single individual scanned multiple times. Here, we used a publicly available R-fMRI dataset, in which 30 healthy participants were scanned 10 times (10 min per session), to investigate differences in full-brain transition profiles (i.e., gradient maps, edge maps) across individuals, and their reliability. 10-min R-fMRI scans were sufficient to achieve high accuracies in efforts to “fingerprint” individuals based upon full-brain transition profiles. Regarding test–retest reliability, the image-wise intraclass correlation coefficient (ICC) was moderate, and vertex-level ICC varied depending on region; larger durations of data yielded higher reliability scores universally. Initial application of gradient-based methodologies to a recently published dataset obtained from twins suggested inter-individual variation in areal profiles might have genetic and familial origins. Overall, these results illustrate the utility of gradient-based iFC approaches for studying inter-individual variation in brain function. PMID:27600846
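A minimal sketch of the fingerprinting analysis (array shapes and names assumed): identification succeeds when a subject's session-one gradient map correlates most strongly with their own session-two map.

```python
import numpy as np

def fingerprint_accuracy(maps_s1, maps_s2):
    """maps_s1, maps_s2: (n_subjects, n_vertices) arrays of gradient/edge
    maps from two sessions. A subject is identified correctly when their
    session-1 map is most similar (Pearson correlation) to their own
    session-2 map among all subjects."""
    a = np.asarray(maps_s1, float)
    b = np.asarray(maps_s2, float)
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean(axis=1, keepdims=True)
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    sim = a @ b.T                      # pairwise correlation matrix
    hits = sim.argmax(axis=1) == np.arange(sim.shape[0])
    return hits.mean()
```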
Raji, J. A.; Atkinson, Carter T.
2016-01-01
The distribution and amount of genetic variation within and between populations of plant species are important for their adaptability to future habitat changes and also critical for their restoration and overall management. This study was initiated to assess the genetic status of the remnant population of Melicope zahlbruckneri, a critically endangered species in Hawaii, and to determine the extent of genetic variation and diversity in order to propose valuable conservation approaches. Estimated genetic structure of individuals based on molecular marker allele frequencies identified genetic groups with low overall differentiation but identified the most genetically diverse individuals within the population. Analysis of Amplified Fragment Length Polymorphism (AFLP) marker loci in the population based on a Bayesian model and multivariate statistics classified the population into four subgroups. We inferred a mixed species population structure based on Bayesian clustering and the frequency of unique alleles. The percentage of polymorphic fragments (PPF) ranged from 18.8 to 64.6% for all marker loci, with an average of 54.9% within the population. Inclusion of all surviving M. zahlbruckneri trees in future restorative planting at new sites is suggested, and approaches for longer term maintenance of genetic variability are discussed. To our knowledge, this study represents the first report of molecular genetic analysis of the remaining population of M. zahlbruckneri and also illustrates the importance of genetic variability for conservation of a small endangered population.
NASA Technical Reports Server (NTRS)
Noor, A. K.; Malik, M.
2000-01-01
A study is made of the effects of variation in the lamination and geometric parameters and boundary conditions of multi-layered composite panels on the accuracy of the detailed response characteristics obtained by five different modeling approaches. The modeling approaches considered include four two-dimensional models, each with five parameters to characterize the deformation in the thickness direction, and a predictor-corrector approach with twelve displacement parameters. The two-dimensional models are first-order shear deformation theory, third-order theory, a theory based on trigonometric variation of the transverse shear stresses through the thickness, and a discrete layer theory. The combination of the following four key elements distinguishes the present study from previous studies reported in the literature: (1) the standard of comparison is taken to be the solutions obtained by using three-dimensional continuum models for each of the individual layers; (2) both mechanical and thermal loadings are considered; (3) boundary conditions other than simply supported edges are considered; and (4) quantities compared include detailed through-the-thickness distributions of transverse shear and transverse normal stresses. Based on the numerical studies conducted, the predictor-corrector approach appears to be the most effective technique for obtaining accurate transverse stresses; for thermal loading, none of the two-dimensional models is adequate for calculating transverse normal stresses, even when used in conjunction with three-dimensional equilibrium equations.
Speed, speed variation and crash relationships for urban arterials.
Wang, Xuesong; Zhou, Qingya; Quddus, Mohammed; Fan, Tianxiang; Fang, Shou'en
2018-04-01
Speed and speed variation are closely associated with traffic safety. There is, however, a dearth of research on this subject for the case of urban arterials in general, and in the context of developing nations. In downtown Shanghai, the traffic conditions in each direction are very different by time of day, and speed characteristics during peak hours are also greatly different from those during off-peak hours. Considering that traffic demand changes with time and in different directions, arterials in this study were divided into one-way segments by the direction of flow, and time of day was differentiated and controlled for. In terms of data collection, traditional fixed-based methods have been widely used in previous studies, but they fail to capture the spatio-temporal distributions of speed along a road. A new approach is introduced to estimate speed variation by integrating spatio-temporal speed fluctuation of a single vehicle with speed differences between vehicles using taxi-based high frequency GPS data. With this approach, this paper aims to comprehensively establish a relationship between mean speed, speed variation and traffic crashes for the purpose of formulating effective speed management measures, specifically using an urban dataset. From a total of 234 one-way road segments from eight arterials in Shanghai, mean speed, speed variation, geometric design features, traffic volume, and crash data were collected. Because the safety effects of mean speed and speed variation may vary at different segment lengths, arterials with similar signal spacing density were grouped together. To account for potential correlations among these segments, a hierarchical Poisson log-normal model with random effects was developed. Results show that a 1% increase in mean speed on urban arterials was associated with a 0.7% increase in total crashes, and larger speed variation was also associated with increased crash frequency. Copyright © 2018 Elsevier Ltd. All rights reserved.
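A simplified, non-hierarchical analogue of such a crash model can be fit with statsmodels (column names are hypothetical; the paper's hierarchical Poisson log-normal random effects are not reproduced here). The coefficient on the log of mean speed is then directly interpretable as an elasticity, i.e., the percent change in expected crashes per 1% change in speed.

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_crash_model(df):
    """Poisson log-linear crash frequency model for one-way segments.
    Assumed columns: 'crashes', 'mean_speed', 'speed_var', 'volume',
    'seg_len_km'. Exposure (volume x length) enters as an offset."""
    model = smf.glm(
        "crashes ~ np.log(mean_speed) + speed_var",
        data=df,
        family=sm.families.Poisson(),
        offset=np.log(df["volume"] * df["seg_len_km"]),
    )
    return model.fit()
```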
Yue, Yong; Osipov, Arsen; Fraass, Benedick; Sandler, Howard; Zhang, Xiao; Nissen, Nicholas; Hendifar, Andrew; Tuli, Richard
2017-02-01
To stratify risks of pancreatic adenocarcinoma (PA) patients using pre- and post-radiotherapy (RT) PET/CT images, and to assess the prognostic value of texture variations in predicting therapy response of patients. Twenty-six PA patients treated with RT from 2011-2013 with pre- and post-treatment 18F-FDG-PET/CT scans were identified. Tumor locoregional texture was calculated using a 3D kernel-based approach, and texture variations were identified by fitting discrepancies of texture maps of pre- and post-treatment images. A total of 48 texture and clinical variables were identified and evaluated for association with overall survival (OS). The prognostic heterogeneity features were selected using lasso/elastic net regression, and were further evaluated by multivariate Cox analysis. Median age was 69 y (range, 46-86 y). The texture map and temporal variations between pre- and post-treatment were well characterized by histograms and statistical fitting. The lasso analysis identified seven predictors (age, node stage, post-RT SUVmax, variations of homogeneity, variance, sum mean, and cluster tendency). The multivariate Cox analysis identified five significant variables: age, node stage, variations of homogeneity, variance, and cluster tendency (with P=0.020, 0.040, 0.065, 0.078, and 0.081, respectively). The patients were stratified into two groups based on the risk score of multivariate analysis with log-rank P=0.001: a low risk group (n=11) with a longer mean OS (29.3 months) and higher texture variation (>30%), and a high risk group (n=15) with a shorter mean OS (17.7 months) and lower texture variation (<15%). Locoregional metabolic texture response provides a feasible approach for evaluating and predicting clinical outcomes following treatment of PA with RT. The proposed method can be used to stratify patient risk and help select appropriate treatment strategies for individual patients toward implementing response-driven adaptive RT.
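A schematic of the two-stage selection-then-Cox workflow, with hypothetical column names; note that the elastic-net screening step below naively regresses survival time and ignores censoring, a simplification relative to the paper's analysis.

```python
from sklearn.linear_model import ElasticNetCV
from lifelines import CoxPHFitter

def select_and_fit(df, feature_cols, time_col="os_months", event_col="event"):
    """Screen candidate texture/clinical variables with an elastic net,
    then fit a multivariate Cox model on the selected subset."""
    enet = ElasticNetCV(cv=5).fit(df[feature_cols], df[time_col])
    selected = [c for c, b in zip(feature_cols, enet.coef_) if b != 0.0]
    cph = CoxPHFitter()
    cph.fit(df[selected + [time_col, event_col]],
            duration_col=time_col, event_col=event_col)
    return selected, cph
```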
NASA Astrophysics Data System (ADS)
Serpieri, Roberto; Travascio, Francesco
2016-03-01
In poroelasticity, the effective stress law relates the external stress applied to the medium to the macroscopic strain of the solid phase and the interstitial pressure of the fluid saturating the mixture. Such a relationship was first introduced by Terzaghi in the form of a principle. To date, no poroelastic theory is capable of recovering a stress partitioning law in agreement with Terzaghi's postulated one in the absence of ad hoc constitutive assumptions on the medium. We recently proposed a variational macroscopic continuum description of two-phase poroelasticity to derive a general biphasic formulation at finite deformations, termed the variational macroscopic theory of porous media (VMTPM). This approach proceeds from the inclusion of the intrinsic volumetric strain among the kinematic descriptors alongside macroscopic displacements and, as a variational theory, uses the Hamilton least-action principle as the unique primitive concept of mechanics invoked to derive momentum balance equations. In a previous related work it was shown that, for the subclass of undrained problems, VMTPM predicts that stress is partitioned in the two phases in strict compliance with Terzaghi's law, irrespective of the microstructural and constitutive features of a given medium. In the present contribution, we further develop the linearized framework of VMTPM to arrive at a general operative formula that allows the quantitative determination of stress partitioning in a jacketed test over a generic isotropic biphasic specimen. This formula is quantitative and general, in that it relates the partial phase stresses to the externally applied stress as a function of partitioning coefficients that are all derived by strictly following a purely variational and purely macroscopic approach, and in the absence of any specific hypothesis on the microstructural or constitutive features of a given medium. To achieve this result, the stiffness coefficients of the theory are derived by using exclusively variational arguments. We also derive, again based on purely variational arguments, the boundary conditions attained across the boundary of a poroelastic saturated medium in contact with an impermeable surface. A technique to retrieve bounds for the resulting elastic moduli, based on Hashin's composite spheres assemblage method, is also reported. Notably, in spite of the minimal mechanical hypotheses introduced, a rich mechanical behavior is observed.
Robust 3D face landmark localization based on local coordinate coding.
Song, Mingli; Tao, Dacheng; Sun, Shengpeng; Chen, Chun; Maybank, Stephen J
2014-12-01
In the 3D facial animation and synthesis community, input faces are usually required to be labeled by a set of landmarks for parameterization. Because of variations in pose, expression and resolution, automatic 3D face landmark localization remains a challenge. In this paper, a novel landmark localization approach is presented. The approach is based on local coordinate coding (LCC) and consists of two stages. In the first stage, we perform nose detection, relying on the fact that the nose shape is usually invariant under variations in pose, expression, and resolution. Then, we use the iterative closest points algorithm to find a 3D affine transformation that aligns the input face to a reference face. In the second stage, we perform resampling to build correspondences between the input 3D face and the training faces. Then, an LCC-based localization algorithm is proposed to obtain the positions of the landmarks in the input face. Experimental results show that the proposed method is comparable to state-of-the-art methods in terms of its robustness, flexibility, and accuracy.
Impact of MR Acquisition Parameters on DTI Scalar Indexes: A Tractography Based Approach.
Barrio-Arranz, Gonzalo; de Luis-García, Rodrigo; Tristán-Vega, Antonio; Martín-Fernández, Marcos; Aja-Fernández, Santiago
2015-01-01
Acquisition parameters play a crucial role in Diffusion Tensor Imaging (DTI), as they have a major impact on the values of scalar measures such as Fractional Anisotropy (FA) or Mean Diffusivity (MD) that are usually the focus of clinical studies based on white matter analysis. This paper presents an analysis of the impact of the variation of several acquisition parameters on these scalar measures, with a novel double focus. First, a tractography-based approach is employed, motivated by the significant number of clinical studies that are carried out using this technique. Second, the consequences of simultaneous changes in multiple parameters are analyzed: number of gradient directions, b-value and voxel resolution. Results indicate that the FA is most affected by changes in the number of gradients and voxel resolution, while MD is especially influenced by variations in the b-value. Although the choice of tractography algorithm has an effect on the numerical values of the final scalar measures, these measures evolve in parallel as the acquisition parameters are modified.
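For reference, both scalar indexes are simple functions of the diffusion tensor's eigenvalues; a minimal implementation:

```python
import numpy as np

def fa_md(evals):
    """Fractional Anisotropy and Mean Diffusivity from the three eigenvalues
    of a diffusion tensor. evals: (..., 3) array."""
    evals = np.asarray(evals, dtype=float)
    md = evals.mean(axis=-1)
    num = np.sqrt(((evals - md[..., None]) ** 2).sum(axis=-1))
    den = np.sqrt((evals ** 2).sum(axis=-1))
    return np.sqrt(1.5) * num / den, md  # (FA, MD)
```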
Camargo, Anyela; Papadopoulou, Dimitra; Spyropoulou, Zoi; Vlachonasios, Konstantinos; Doonan, John H; Gay, Alan P
2014-01-01
Computer-vision based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should therefore be possible to use such approaches to select robust genotypes. However, plants are morphologically complex, and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features, but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all the observed shape variation can be described by five principal components. We describe an easily implemented pipeline including image segmentation, feature extraction and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment, and the computer routines for image processing and data analysis have been implemented using open source software. The source code for the data analysis is written in R. The equations to calculate the image descriptors have also been provided.
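The study's own pipeline is implemented in R; as an illustrative Python analogue under the same segment-extract-reduce design, the sketch below uses generic skimage region descriptors (not the paper's exact feature set) and assumes the plant is the largest bright object in the image.

```python
import numpy as np
from skimage import io, filters, measure
from sklearn.decomposition import PCA

def rosette_features(path):
    """Otsu segmentation followed by generic region-based shape descriptors."""
    img = io.imread(path, as_gray=True)
    mask = img > filters.threshold_otsu(img)   # assumes plant brighter than soil
    regions = measure.regionprops(measure.label(mask))
    r = max(regions, key=lambda p: p.area)     # keep the largest object
    return [r.area, r.perimeter, r.eccentricity, r.solidity, r.extent]

def shape_pca(feature_matrix, n_components=5):
    # a handful of principal components reportedly capture most shape variation
    return PCA(n_components=n_components).fit_transform(feature_matrix)
```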
Zheng, Guangyong; Hamdani, Saber; Essemine, Jemaa; Song, Qingfeng; Wang, Hongru
2017-01-01
Mining natural variations is a major approach to identify new options to improve crop light use efficiency. So far, successes in identifying photosynthetic parameters positively related to crop biomass accumulation through this approach are scarce, possibly due to the earlier emphasis on properties related to leaf instead of canopy photosynthetic efficiency. This study aims to uncover rice (Oryza sativa) natural variations to identify leaf physiological parameters that are highly correlated with biomass accumulation, a surrogate of canopy photosynthesis. To do this, we systematically investigated 14 photosynthetic parameters and four morphological traits in a rice population, which consists of 204 U.S. Department of Agriculture-curated minicore accessions collected globally and 11 elite Chinese rice cultivars in both Beijing and Shanghai. To identify key components responsible for the variance of biomass accumulation, we applied a stepwise feature-selection approach based on linear regression models. Although there are large variations in photosynthetic parameters measured in different environments, we observed that photosynthetic rate under low light (Alow) was highly related to biomass accumulation and also exhibited high genomic heritability in both environments, suggesting its great potential to be used as a target for future rice breeding programs. Large variations in Alow among modern rice cultivars further suggest the great potential of using this parameter in contemporary rice breeding for the improvement of biomass and, hence, yield potential. PMID:28739819
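A compact sketch of greedy forward selection with linear models, on hypothetical arrays; the paper's exact stepwise criterion may differ.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def forward_select(X, y, names, max_feats=5):
    """At each step, add the predictor (e.g., a photosynthetic parameter)
    that most improves cross-validated R^2 for biomass accumulation y."""
    chosen, remaining, best = [], list(range(X.shape[1])), -np.inf
    while remaining and len(chosen) < max_feats:
        score, j = max((cross_val_score(LinearRegression(),
                                        X[:, chosen + [k]], y,
                                        cv=5, scoring="r2").mean(), k)
                       for k in remaining)
        if score <= best:
            break
        best = score
        chosen.append(j)
        remaining.remove(j)
    return [names[j] for j in chosen], best
```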
On Exact Solutions of Rarefaction-Rarefaction Interactions in Compressible Isentropic Flow
NASA Astrophysics Data System (ADS)
Jenssen, Helge Kristian
2017-12-01
Consider the interaction of two centered rarefaction waves in one-dimensional, compressible gas flow with pressure function p(ρ) = a^2 ρ^γ with γ > 1. The classic hodograph approach of Riemann provides linear 2nd order equations for the time and space variables t, x as functions of the Riemann invariants r, s within the interaction region. It is well known that t(r, s) can be given explicitly in terms of the hypergeometric function. We present a direct calculation (based on works by Darboux and Martin) of this formula, and show how the same approach provides an explicit formula for x(r, s) in terms of Appell functions (two-variable hypergeometric functions). Motivated by the issue of vacuum and total variation estimates for 1-d Euler flows, we then use the explicit t-solution to monitor the density field and its spatial variation in interactions of two centered rarefaction waves. It is found that the variation is always non-monotone, and that there is an overall increase in density variation if and only if γ > 3. We show that infinite duration of the interaction is characterized by approach toward vacuum in the interaction region, and that this occurs if and only if the Riemann problem defined by the extreme initial states generates a vacuum. Finally, it is verified that the minimal density in such interactions decays at rate O(1)/t.
Buehler, James W; Holtgrave, David R
2007-03-29
Controversy and debate can arise whenever public health agencies determine how program funds should be allocated among constituent jurisdictions. Two common strategies for making such allocations are expert review of competitive applications and the use of funding formulas. Despite widespread use of funding formulas by public health agencies in the United States, formula allocation strategies in public health have been subject to relatively little formal scrutiny, with the notable exception of the attention focused on formula funding of HIV care programs. To inform debates and deliberations in the selection of a formula-based approach, we summarize key challenges to formula-based funding, based on prior reviews of federal programs in the United States. The primary challenge lies in identifying data sources and formula calculation methods that both reflect and serve program objectives, with or without adjustments for variations in the cost of delivering services, the availability of local resources, capacity, or performance. Simplicity and transparency are major advantages of formula-based allocations, but these advantages can be offset if formula-based allocations are perceived to under- or over-fund some jurisdictions, which may result from how guaranteed minimum funding levels are set or from "hold-harmless" provisions intended to blunt the effects of changes in formula design or random variations in source data. While fairness is considered an advantage of formula-based allocations, the design of a formula may implicitly reflect unquestioned values concerning equity versus equivalence in setting funding policies. Whether or how past or projected trends are taken into account can also have substantial impacts on allocations. Insufficient attention has been focused on how the approach to designing funding formulas in public health should differ for treatment or service versus prevention programs. Further evaluations of formula-based versus competitive allocation methods are needed to promote the optimal use of public health funds. In the meantime, those who use formula-based strategies to allocate funds should be familiar with the nuances of this approach.
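To make the mechanics concrete, here is a deliberately simplified allocation sketch (all parameter names and the particular design are hypothetical): each jurisdiction is floored at the larger of a guaranteed minimum and a hold-harmless fraction of its prior award, and the remaining funds are distributed in proportion to a need indicator.

```python
def allocate(total, need, prior=None, min_award=0.0, hh_frac=0.0):
    """Proportional formula allocation with guaranteed minimums and an
    optional hold-harmless floor (fraction of last cycle's award)."""
    prior = prior or [0.0] * len(need)
    floors = [max(min_award, hh_frac * p) for p in prior]
    remainder = total - sum(floors)
    if remainder < 0:
        raise ValueError("floors exceed available funds")
    total_need = float(sum(need))
    return [f + remainder * d / total_need for f, d in zip(floors, need)]

# e.g., three jurisdictions, 100 units of funds:
print(allocate(100.0, need=[10, 30, 60], prior=[40, 30, 30],
               min_award=5.0, hh_frac=0.5))   # -> [25.0, 30.0, 45.0]
```

Such floors illustrate why hold-harmless provisions can make allocations feel under- or over-funded relative to pure proportionality.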
Day, Ryan; Joo, Hyun; Chavan, Archana; Lennox, Kristin P.; Chen, Ann; Dahl, David B.; Vannucci, Marina; Tsai, Jerry W.
2012-01-01
As an alternative to the common template-based protein structure prediction methods based on main-chain position, a novel side-chain-centric approach has been developed. Together with a Bayesian loop modeling procedure and a combination scoring function, the Stone Soup algorithm was applied to the CASP9 set of template-based modeling targets. Although the method did not generate perturbations to the template structures as large as necessary, the analysis of the results gives unique insights into the differences in packing between the target structures and their templates. Considerable variation in packing is found between target and template structures even when the structures are close, and this variation is due to two- and three-body packing interactions. Apart from the inherent restrictions in the packing representation of the PDB, the first steps in correctly defining those regions of variable packing have been mapped primarily to local interactions, as the packing at the secondary and tertiary structure levels is largely conserved. Of the scoring functions used, a loop scoring function based on water structure exhibited some promise for discrimination. These results present a clear structural path for further development of a side-chain-centered approach to template-based modeling. PMID:23266765
Tan, Chun-Wei; Kumar, Ajay
2014-07-10
Accurate iris recognition from distantly acquired face or eye images requires effective strategies that can account for significant variations in segmented iris image quality. Such variations are highly correlated with the consistency of the encoded iris features, and knowledge of such fragile bits can be exploited to improve matching accuracy. A non-linear approach is proposed to simultaneously account for both the local consistency of iris bits and the overall quality of the weight map. Our approach therefore more effectively penalizes the fragile bits while simultaneously rewarding more consistent bits. In order to achieve a more stable characterization of local iris features, a Zernike moment-based phase encoding of iris features is proposed. Such Zernike moment-based phase features are computed from partially overlapping regions to more effectively accommodate local pixel-region variations in the normalized iris images. A joint strategy is adopted to simultaneously extract and combine both the global and localized iris features. The superiority of the proposed iris matching strategy is ascertained by comparison with several state-of-the-art iris matching algorithms on three publicly available databases: UBIRIS.v2, FRGC, and CASIA.v4-distance. Our experimental results suggest that the proposed strategy achieves significant improvement in iris matching accuracy over competing approaches in the literature, i.e., average improvements of 54.3%, 32.7% and 42.6% in equal error rates for UBIRIS.v2, FRGC, and CASIA.v4-distance, respectively.
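As a rough illustration of the fragile-bit idea, a consistency-weighted Hamming distance between binary iris codes is sketched below; the weight map here is a random stand-in, not the paper's non-linear construction:

```python
import numpy as np

def weighted_hamming(code_a, code_b, mask_a, mask_b, weights):
    """Weighted Hamming distance between two binary iris codes.

    code_*  : bool arrays of encoded iris bits
    mask_*  : bool arrays, True where a bit is valid (not occluded)
    weights : per-bit consistency weights in [0, 1]; fragile bits get
              low weight, stable bits high weight
    """
    valid = mask_a & mask_b
    disagreement = (code_a ^ code_b) & valid
    w = weights * valid
    return (w * disagreement).sum() / max(w.sum(), 1e-12)

# toy usage
rng = np.random.default_rng(0)
a = rng.random(2048) < 0.5
b = a.copy()
b[:200] ^= True                 # flip some bits to mimic noise
mask = np.ones(2048, dtype=bool)
wts = rng.random(2048)          # stand-in consistency map
print(weighted_hamming(a, b, mask, mask, wts))
```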
Simulating effects of microtopography on wetland specific yield and hydroperiod
Summer, David M.; Wang, Xixi
2011-01-01
Specific yield and hydroperiod have proven to be useful parameters in hydrologic analysis of wetlands. Specific yield is a critical parameter to quantitatively relate hydrologic fluxes (e.g., rainfall, evapotranspiration, and runoff) and water level changes. Hydroperiod measures the temporal variability and frequency of land-surface inundation. Conventionally, hydrologic analyses used these concepts without considering the effects of land surface microtopography and assumed a smoothly-varying land surface. However, these microtopographic effects could result in small-scale variations in land surface inundation and water depth above or below the land surface, which in turn affect ecologic and hydrologic processes of wetlands. The objective of this chapter is to develop a physically-based approach for estimating specific yield and hydroperiod that enables the consideration of microtopographic features of wetlands, and to illustrate the approach at sites in the Florida Everglades. The results indicate that the physically-based approach can better capture the variations of specific yield with water level, in particular when the water level falls between the minimum and maximum land surface elevations. The suggested approach for hydroperiod computation predicted that the wetlands might be completely dry or completely wet much less frequently than suggested by the conventional approach neglecting microtopography. One reasonable generalization may be that the hydroperiod approaches presented in this chapter can be a more accurate prediction tool for water resources management to meet the specific hydroperiod threshold as required by a species of plant or animal of interest.
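A minimal sketch of the underlying idea, under the simplifying assumption that inundated cells store water as open water (Sy = 1) while exposed cells store it in soil pores (the chapter's formulation is more complete):

```python
import numpy as np

def effective_specific_yield(water_level, surface_elev, sy_soil=0.2):
    """Simplified microtopographic specific yield: where the land surface
    lies below the water level, storage is open water (Sy = 1); elsewhere
    it is soil storage (Sy = sy_soil). surface_elev is an elevation grid."""
    inundated_fraction = (surface_elev < water_level).mean()
    return inundated_fraction * 1.0 + (1.0 - inundated_fraction) * sy_soil

# toy usage: specific yield varies with water level only between the
# minimum and maximum land-surface elevations
rng = np.random.default_rng(0)
elev = rng.normal(0.0, 0.1, size=(200, 200))   # microtopography, metres
for h in (-0.3, 0.0, 0.3):
    print(h, round(effective_specific_yield(h, elev), 3))
```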
Assessing FRET using Spectral Techniques
Leavesley, Silas J.; Britain, Andrea L.; Cichon, Lauren K.; Nikolaev, Viacheslav O.; Rich, Thomas C.
2015-01-01
Förster resonance energy transfer (FRET) techniques have proven invaluable for probing the complex nature of protein–protein interactions, protein folding, and intracellular signaling events. These techniques have traditionally been implemented with the use of one or more fluorescence band-pass filters, either as fluorescence microscopy filter cubes, or as dichroic mirrors and band-pass filters in flow cytometry. In addition, new approaches for measuring FRET, such as fluorescence lifetime and acceptor photobleaching, have been developed. Hyperspectral techniques for imaging and flow cytometry have also been shown to be promising for performing FRET measurements. In this study, we have compared traditional (filter-based) FRET approaches to three spectral-based approaches: the ratio of acceptor-to-donor peak emission, linear spectral unmixing, and linear spectral unmixing with a correction for direct acceptor excitation. All methods are estimates of FRET efficiency, except for the one-filter-set and three-filter-set FRET indices, which are included for consistency with prior literature. In the first part of this study, spectrofluorimetric data were collected from a CFP–Epac–YFP FRET probe that has been used for intracellular cAMP measurements. All comparisons were performed using the same spectrofluorimetric datasets as input data, to provide a relevant comparison. Linear spectral unmixing resulted in measurements with the lowest coefficient of variation (0.10) as well as accurate fits using the Hill equation. FRET efficiency methods produced coefficients of variation of less than 0.20, while FRET indices produced coefficients of variation greater than 8.00. These results demonstrate that spectral FRET measurements provide improved response over standard, filter-based measurements. Using spectral approaches, single-cell measurements were conducted through hyperspectral confocal microscopy, linear unmixing, and cell segmentation with quantitative image analysis. Results from these studies confirmed that spectral imaging is effective for measuring subcellular, time-dependent FRET dynamics and that additional fluorescent signals can be readily separated from FRET signals, enabling multilabel studies of molecular interactions. PMID:23929684
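At its core, linear spectral unmixing is a (non-negative) least-squares fit of reference emission spectra to the measured spectrum. A minimal sketch with synthetic, Gaussian "CFP/YFP-like" endmembers (illustrative values only, not the paper's calibration):

```python
import numpy as np
from scipy.optimize import nnls

# Columns: reference emission spectra ("endmembers") of donor, acceptor,
# and background, sampled at the same wavelengths as the measurement.
wavelengths = np.linspace(450, 600, 151)
donor = np.exp(-0.5 * ((wavelengths - 475) / 15) ** 2)      # CFP-like
acceptor = np.exp(-0.5 * ((wavelengths - 527) / 15) ** 2)   # YFP-like
background = 0.05 * np.ones_like(wavelengths)
S = np.column_stack([donor, acceptor, background])

# Synthetic measured spectrum: a mixture plus noise.
true_abund = np.array([0.3, 0.7, 1.0])
noise = 0.01 * np.random.default_rng(1).normal(size=wavelengths.size)
measured = S @ true_abund + noise

# Non-negative least squares recovers component abundances; apparent FRET
# efficiency can then be estimated from the unmixed acceptor/donor signals
# (after correcting for direct acceptor excitation).
abund, _ = nnls(S, measured)
print(dict(zip(["donor", "acceptor", "background"], abund.round(3))))
```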
A variational approach to parameter estimation in ordinary differential equations.
Kaschek, Daniel; Timmer, Jens
2012-08-14
Ordinary differential equations are widely used in systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters such as rate constants, initial conditions or steady-state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire time courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, neither of which is constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations that includes the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular, this implies that small motifs of large reaction networks can be analysed independently of the rest. Through the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.
NASA Astrophysics Data System (ADS)
Christen, Hans M.; Ohkubo, Isao; Rouleau, Christopher M.; Jellison, Gerald E., Jr.; Puretzky, Alex A.; Geohegan, David B.; Lowndes, Douglas H.
2005-01-01
Parallel (multi-sample) approaches, such as discrete combinatorial synthesis or continuous compositional-spread (CCS), can significantly increase the rate of materials discovery and process optimization. Here we review our generalized CCS method, based on pulsed-laser deposition, in which the synchronization between laser firing and substrate translation (behind a fixed slit aperture) yields the desired variations of composition and thickness. In situ alloying makes this approach applicable to the non-equilibrium synthesis of metastable phases. Deposition on a heater plate with a controlled spatial temperature variation can additionally be used for growth-temperature-dependence studies. Composition and temperature variations are controlled on length scales large enough to yield sample sizes sufficient for conventional characterization techniques (such as temperature-dependent measurements of resistivity or magnetic properties). This technique has been applied to various experimental studies, and we present here the results for the growth of electro-optic materials (SrxBa1-xNb2O6) and magnetic perovskites (Sr1-xCaxRuO3), and discuss the application to the understanding and optimization of catalysts used in the synthesis of dense forests of carbon nanotubes.
Automatic video summarization driven by a spatio-temporal attention model
NASA Astrophysics Data System (ADS)
Barland, R.; Saadane, A.
2008-02-01
According to the literature, automatic video summarization techniques can be classified into two categories according to the nature of their output: "video skims", which are generated using portions of the original video, and "key-frame sets", which correspond to images selected from the original video that have significant semantic content. The difference between these two categories narrows when automatic procedures are considered. Most published approaches are based on the image signal and use either pixel characterization, histogram techniques, or block-based image decomposition. However, few of them integrate properties of the Human Visual System (HVS). In this paper, we propose to extract key-frames for video summarization by studying the variations of salient information between two consecutive frames. For each frame, a saliency map is produced that simulates human visual attention by a bottom-up (signal-dependent) approach. This approach includes three parallel channels for processing three early visual features: intensity, color and temporal contrasts. For each channel, the variations of the salient information between two consecutive frames are computed. These outputs are then combined to produce the global saliency variation, which determines the key-frames. Psychophysical experiments have been defined and conducted to analyze the relevance of the proposed key-frame extraction algorithm.
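A toy sketch of the selection principle — score transitions by the change in salient information between consecutive frames and keep the largest — with a crude single-channel saliency standing in for the paper's three-channel HVS model:

```python
import numpy as np

def intensity_saliency(frame):
    """Crude bottom-up saliency: absolute deviation from mean luminance.
    Stands in for the paper's intensity/color/temporal contrast channels."""
    return np.abs(frame - frame.mean())

def select_keyframes(frames, top_k=3):
    """Score each transition by the change in salient information between
    consecutive frames; return indices with the largest changes."""
    sal = [intensity_saliency(f) for f in frames]
    variation = np.array([np.abs(sal[i + 1] - sal[i]).mean()
                          for i in range(len(sal) - 1)])
    return np.argsort(variation)[::-1][:top_k] + 1  # frame after each jump

# toy usage: 20 synthetic grayscale frames with a scene change at frame 10
rng = np.random.default_rng(0)
frames = [rng.random((48, 64)) * (0.2 if i < 10 else 1.0) for i in range(20)]
print(select_keyframes(frames))
```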
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yan-Rong; Wang, Jian-Min; Bai, Jin-Ming, E-mail: liyanrong@mail.ihep.ac.cn
Broad emission lines of active galactic nuclei stem from a spatially extended region (broad-line region, BLR) that is composed of discrete clouds and photoionized by the central ionizing continuum. The temporal behaviors of these emission lines are blurred echoes of continuum variations (i.e., reverberation mapping, RM) and directly reflect the structures and kinematic information of BLRs through the so-called transfer function (also known as the velocity-delay map). Based on the previous works of Rybicki and Press and Zu et al., we develop an extended, non-parametric approach to determine the transfer function for RM data, in which the transfer function is expressed as a sum of a family of relatively displaced Gaussian response functions. Therefore, arbitrary shapes of transfer functions associated with complicated BLR geometry can be seamlessly included, enabling us to relax the presumption of a specified transfer function frequently adopted in previous studies and to let it be determined by observation data. We formulate our approach in a previously well-established framework that incorporates the statistical modeling of continuum variations as a damped random walk process and takes into account long-term secular variations which are irrelevant to RM signals. The application to RM data shows the fidelity of our approach.
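A minimal numerical sketch of this representation (illustrative parameter values; the paper infers the weights within a Bayesian framework):

```python
import numpy as np

def transfer_function(tau, weights, centers, sigma):
    """Transfer function expressed as a sum of relatively displaced
    Gaussian response functions (weights to be inferred from RM data)."""
    g = np.exp(-0.5 * ((tau[:, None] - centers[None, :]) / sigma) ** 2)
    g /= sigma * np.sqrt(2.0 * np.pi)
    return g @ weights

def line_from_continuum(t, continuum, weights, centers, sigma):
    """Emission-line light curve as the continuum convolved with the
    transfer function (the blurred echo of continuum variations)."""
    dt = t[1] - t[0]
    tau = np.arange(0.0, centers.max() + 5 * sigma, dt)
    psi = transfer_function(tau, weights, centers, sigma)
    return np.convolve(continuum, psi)[: t.size] * dt

# toy usage: a double-peaked response, as might arise from a disk-like BLR
t = np.linspace(0.0, 200.0, 400)
continuum = 1.0 + 0.3 * np.sin(2 * np.pi * t / 57.0)
line = line_from_continuum(t, continuum,
                           weights=np.array([0.6, 0.4]),
                           centers=np.array([10.0, 30.0]), sigma=4.0)
print(line[:5].round(3))
```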
Radiobiological Implications of Fukushima Nuclear Accident for Personalized Medical Approach.
Fukunaga, Hisanori; Yokoya, Akinari; Taki, Yasuyuki; Prise, Kevin M
2017-05-01
On March 11, 2011, a devastating earthquake and subsequent tsunami caused serious damage to areas of the Pacific coast in Fukushima prefecture and prompted fears among the residents about a possible meltdown of the Fukushima Daiichi Nuclear Power Plant reactors. As of 2017, over six years have passed since the Fukushima nuclear crisis and yet the full ramifications of the biological exposures to this accidental release of radioactive substances remain unclear. Furthermore, although several genetic studies have determined that the variation in radiation sensitivity among different individuals is wider than expected, personalized medical approaches for Fukushima victims have seemed to be insufficient. In this commentary, we discuss radiobiological issues arising from low-dose radiation exposure, from the cell-based to the population level. We also introduce the scientific utility of the Integrative Japanese Genome Variation Database (iJGVD), an online database released by the Tohoku Medical Megabank Organization, Tohoku University that covered the whole genome sequences of 2,049 healthy individuals in the northeastern part of Japan in 2016. Here we propose a personalized radiation risk assessment and medical approach, which considers the genetic variation of radiation sensitivity among individuals, for next-step developments in radiological protection.
A variational Bayes spatiotemporal model for electromagnetic brain mapping.
Nathoo, F S; Babul, A; Moiseev, A; Virji-Babul, N; Beg, M F
2014-03-01
In this article, we present a new variational Bayes approach for solving the neuroelectromagnetic inverse problem arising in studies involving electroencephalography (EEG) and magnetoencephalography (MEG). This high-dimensional spatiotemporal estimation problem involves the recovery of time-varying neural activity at a large number of locations within the brain, from electromagnetic signals recorded at a relatively small number of external locations on or near the scalp. Framing this problem within the context of spatial variable selection for an underdetermined functional linear model, we propose a spatial mixture formulation where the profile of electrical activity within the brain is represented through location-specific spike-and-slab priors based on a spatial logistic specification. The prior specification accommodates spatial clustering in brain activation, while also allowing for the inclusion of auxiliary information derived from alternative imaging modalities, such as functional magnetic resonance imaging (fMRI). We develop a variational Bayes approach for computing estimates of neural source activity, and incorporate a nonparametric bootstrap for interval estimation. The proposed methodology is compared with several alternative approaches through simulation studies, and is applied to the analysis of a multimodal neuroimaging study examining the neural response to face perception using EEG, MEG, and fMRI.
Lee, Yi-Hsuan; von Davier, Alina A
2013-07-01
Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs or may be too time-consuming to be considered on a regular basis with an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment for customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
Classification-Based Spatial Error Concealment for Visual Communications
NASA Astrophysics Data System (ADS)
Chen, Meng; Zheng, Yefeng; Wu, Min
2006-12-01
In an error-prone transmission environment, error concealment is an effective technique to reconstruct damaged visual content. Due to large variations in image characteristics, different concealment approaches are necessary to accommodate the different nature of the lost image content. In this paper, we address this issue and propose using classification to integrate the state-of-the-art error concealment techniques. The proposed approach takes advantage of multiple concealment algorithms and adaptively selects the suitable algorithm for each damaged image area. With growing awareness that the design of sender and receiver systems should be jointly considered for efficient and reliable multimedia communications, we propose a set of classification-based block concealment schemes, including receiver-side classification, sender-side attachment, and sender-side embedding. Our experimental results provide extensive performance comparisons and demonstrate that the proposed classification-based error concealment approaches outperform the conventional approaches.
Evaluation of redundancy analysis to identify signatures of local adaptation.
Capblancq, Thibaut; Luu, Keurcien; Blum, Michael G B; Bazin, Eric
2018-05-26
Ordination is a common tool in ecology that aims at representing complex biological information in a reduced space. In landscape genetics, ordination methods such as principal component analysis (PCA) have been used to detect adaptive variation based on genomic data. Taking advantage of environmental data in addition to genotype data, redundancy analysis (RDA) is another ordination approach that is useful for detecting adaptive variation. This paper proposes a test statistic based on RDA to search for loci under selection. We compare redundancy analysis to pcadapt, which is a non-constrained ordination method, and to a latent factor mixed model (LFMM), which is a univariate genotype-environment association method. Individual-based simulations identify evolutionary scenarios where RDA genome scans have greater statistical power than genome scans based on PCA. By constraining the analysis with environmental variables, RDA performs better than PCA in identifying adaptive variation when selection gradients are weakly correlated with population structure. Additionally, we show that while RDA and LFMM have similar power to identify genetic markers associated with environmental variables, the RDA-based procedure has the advantage of identifying the main selective gradients as combinations of environmental variables. To give a concrete illustration of RDA in population genomics, we apply this method to the detection of outliers and selective gradients in an SNP data set of Populus trichocarpa (Geraldes et al., 2013). The RDA-based approach identifies the main selective gradient contrasting southern and coastal populations with northern and continental populations on the northwestern American coast.
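A simplified sketch of an RDA-style genome scan — multivariate regression of genotypes on environment followed by an SVD of the fitted values, with loci flagged by extreme loadings; this illustrates the principle, not the paper's exact test statistic:

```python
import numpy as np

def rda_locus_scores(G, E, n_axes=2):
    """Simplified RDA: regress centered genotypes on environmental
    variables, then take an SVD of the fitted (constrained) values.
    Rows of G are individuals, columns are loci; columns of E are
    environmental variables."""
    Gc = G - G.mean(axis=0)
    Ec = E - E.mean(axis=0)
    B, *_ = np.linalg.lstsq(Ec, Gc, rcond=None)  # multivariate regression
    fitted = Ec @ B
    _, _, vt = np.linalg.svd(fitted, full_matrices=False)
    return vt[:n_axes].T                         # per-locus loadings

def outlier_loci(scores, z=3.0):
    """Flag loci whose loadings are extreme on any constrained axis."""
    zscores = (scores - scores.mean(axis=0)) / scores.std(axis=0)
    return np.where((np.abs(zscores) > z).any(axis=1))[0]

# toy usage: 100 individuals, 500 loci, 2 environmental gradients,
# with locus 0 made responsive to the first gradient
rng = np.random.default_rng(0)
E = rng.normal(size=(100, 2))
G = rng.binomial(2, 0.5, size=(100, 500)).astype(float)
G[:, 0] += 1.5 * E[:, 0]
print(outlier_loci(rda_locus_scores(G, E)))
```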
NASA Astrophysics Data System (ADS)
Camporesi, Roberto
2011-06-01
We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary: we assume only a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as the other more advanced approaches: the Laplace transform, linear systems, the general theory of linear equations with variable coefficients, and the variation of constants method. The approach presented here can be used in a first course on differential equations for science and engineering majors.
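A worked instance of the idea for a second-order equation (standard material consistent with, but not quoted from, the paper):

```latex
% Factorization: for constants a \neq b,
\[
  y'' - (a+b)\,y' + ab\,y = f(t)
  \quad\Longleftrightarrow\quad
  (D-a)(D-b)\,y = f(t), \qquad D = \tfrac{d}{dt}.
\]
% The impulsive response g solves the homogeneous equation with
% g(0) = 0, g'(0) = 1, obtained by solving two first-order problems in
% cascade:
\[
  g(t) = \frac{e^{at} - e^{bt}}{a-b},
  \qquad
  y_p(t) = \int_0^t g(t-\tau)\, f(\tau)\, d\tau ,
\]
% which yields a particular solution by superposition of delayed impulses.
```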
Zhang, Bitao; Pi, YouGuo
2013-07-01
The traditional integer-order proportional-integral-differential (IO-PID) controller is sensitive to parameter variation and/or external load disturbance of a permanent magnet synchronous motor (PMSM). A fractional-order proportional-integral-differential (FO-PID) control scheme based on a robustness tuning method has been proposed to enhance robustness, but that robustness addresses only open-loop gain variation of the controlled plant. In this paper, an enhanced robust fractional-order proportional-plus-integral (ERFOPI) controller based on a neural network is proposed. The control law of the ERFOPI controller acts on a fractional-order implement function (FOIF) of the tracking error rather than on the tracking error directly, which, according to theoretical analysis, can enhance the robust performance of the system. Tuning rules and approaches, based on phase margin, crossover frequency specification and robustness to gain variation, are introduced to obtain the parameters of the ERFOPI controller, and a neural network algorithm is used to adjust the parameter of the FOIF. Simulation and experimental results show that the proposed method not only achieves favorable tracking performance but is also robust with regard to external load disturbance and parameter variation.
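For orientation, the generic fractional-order PI form and the classic tuning constraints named above (textbook relations; the ERFOPI law differs in acting on the FOIF of the error):

```latex
% Generic fractional-order PI controller, 0 < \lambda < 2:
\[
  C(s) = K_p + \frac{K_i}{s^{\lambda}} .
\]
% Tuning constraints for gain-crossover frequency \omega_c and phase margin
% \varphi_m, plus the flat-phase condition for robustness to gain variation:
\[
  \bigl| C(j\omega_c)\, G(j\omega_c) \bigr| = 1, \qquad
  \arg\!\bigl[ C(j\omega_c)\, G(j\omega_c) \bigr] = -\pi + \varphi_m, \qquad
  \left. \frac{d}{d\omega} \arg\!\bigl[ C(j\omega)\, G(j\omega) \bigr]
  \right|_{\omega = \omega_c} = 0 .
\]
```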
Dhakal, Rajendra; Kim, E S; Jo, Yong-Hwa; Kim, Sung-Soo; Kim, Nam-Young
2017-03-01
We present a concept for the characterization of a micro-fabricated resonator incorporating an air-bridge metal-insulator-semiconductor (MIS) capacitor to continuously monitor an individual's glucose levels based on frequency variation. The investigation revealed that the micro-resonator based on the MIS capacitor holds considerable promise for implementation and recognition as a glucose sensor for human serum. The discrepancy in complex permittivity arising from the enhanced capacitor was exploited for the detection and determination of random glucose concentration levels: variation of the capacitance results in a corresponding variation of the resonance frequency. Moreover, the micro-resonator with the enhanced MIS capacitor provides a resolution of 112.38 × 10^-3 pF per mg/dl, a minimum detectable glucose level of 7.45 mg/dl, and a limit of quantification of 22.58 mg/dl. Additionally, this approach offers long-term reliability for mediator-free glucose sensing, with a relative standard deviation of less than 0.5%.
Traditional and modern plant breeding methods with examples in rice (Oryza sativa L.).
Breseghello, Flavio; Coelho, Alexandre Siqueira Guedes
2013-09-04
Plant breeding can be broadly defined as alterations caused in plants as a result of their use by humans, ranging from unintentional changes resulting from the advent of agriculture to the application of molecular tools for precision breeding. The vast diversity of breeding methods can be simplified into three categories: (i) plant breeding based on observed variation by selection of plants based on natural variants appearing in nature or within traditional varieties; (ii) plant breeding based on controlled mating by selection of plants presenting recombination of desirable genes from different parents; and (iii) plant breeding based on monitored recombination by selection of specific genes or marker profiles, using molecular tools for tracking within-genome variation. The continuous application of traditional breeding methods in a given species could lead to the narrowing of the gene pool from which cultivars are drawn, rendering crops vulnerable to biotic and abiotic stresses and hampering future progress. Several methods have been devised for introducing exotic variation into elite germplasm without undesirable effects. Cases in rice are given to illustrate the potential and limitations of different breeding approaches.
NASA Astrophysics Data System (ADS)
Boissard, C.; Chervier, F.; Dutot, A. L.
2007-08-01
Using a statistical approach based on artificial neural networks, an emission algorithm (ISO_LF) accounting for high-frequency (instantaneous) to low-frequency (seasonal) variations was developed for isoprene. ISO_LF was optimised using an isoprene emission database (ISO-DB) specifically designed for this work. ISO-DB consists of 1321 emission rates collected from the literature, together with 34 environmental variables, measured or assessed using NCDC (National Climatic Data Center) or NCEP (National Centers for Environmental Prediction) meteorological databases. ISO-DB covers a large variety of emitters (25 species) and environmental conditions (10° S to 60° N). When only instantaneous environmental regressors (air temperature and photosynthetically active radiation, PAR) were used, a maximum of 60% of the overall isoprene variability was captured and the highest emissions were underestimated. Considering a total of 9 high-frequency (instantaneous) to low-frequency (up to 3 weeks) regressors, ISO_LF accounts for up to 91% of the isoprene emission variability, whatever the emission range, species or climate. Diurnal and seasonal variations are correctly reproduced for Ulex europaeus, with a maximum factor of discrepancy of 4. ISO_LF was found to be mainly sensitive to air temperature cumulated over 3 weeks (T21) and to instantaneous light (L0) and air temperature (T0) variations. T21, T0 and L0 alone account for 76% of the overall variability. The use of ISO_LF for non-stored monoterpene emissions was shown to give poor results.
Mapping copy number variation by population-scale genome sequencing.
Mills, Ryan E; Walter, Klaudia; Stewart, Chip; Handsaker, Robert E; Chen, Ken; Alkan, Can; Abyzov, Alexej; Yoon, Seungtai Chris; Ye, Kai; Cheetham, R Keira; Chinwalla, Asif; Conrad, Donald F; Fu, Yutao; Grubert, Fabian; Hajirasouliha, Iman; Hormozdiari, Fereydoun; Iakoucheva, Lilia M; Iqbal, Zamin; Kang, Shuli; Kidd, Jeffrey M; Konkel, Miriam K; Korn, Joshua; Khurana, Ekta; Kural, Deniz; Lam, Hugo Y K; Leng, Jing; Li, Ruiqiang; Li, Yingrui; Lin, Chang-Yun; Luo, Ruibang; Mu, Xinmeng Jasmine; Nemesh, James; Peckham, Heather E; Rausch, Tobias; Scally, Aylwyn; Shi, Xinghua; Stromberg, Michael P; Stütz, Adrian M; Urban, Alexander Eckehart; Walker, Jerilyn A; Wu, Jiantao; Zhang, Yujun; Zhang, Zhengdong D; Batzer, Mark A; Ding, Li; Marth, Gabor T; McVean, Gil; Sebat, Jonathan; Snyder, Michael; Wang, Jun; Ye, Kenny; Eichler, Evan E; Gerstein, Mark B; Hurles, Matthew E; Lee, Charles; McCarroll, Steven A; Korbel, Jan O
2011-02-03
Genomic structural variants (SVs) are abundant in humans, differing from other forms of variation in extent, origin and functional impact. Despite progress in SV characterization, the nucleotide resolution architecture of most SVs remains unknown. We constructed a map of unbalanced SVs (that is, copy number variants) based on whole genome DNA sequencing data from 185 human genomes, integrating evidence from complementary SV discovery approaches with extensive experimental validations. Our map encompassed 22,025 deletions and 6,000 additional SVs, including insertions and tandem duplications. Most SVs (53%) were mapped to nucleotide resolution, which facilitated analysing their origin and functional impact. We examined numerous whole and partial gene deletions with a genotyping approach and observed a depletion of gene disruptions amongst high frequency deletions. Furthermore, we observed differences in the size spectra of SVs originating from distinct formation mechanisms, and constructed a map of SV hotspots formed by common mechanisms. Our analytical framework and SV map serve as a resource for sequencing-based association studies.
Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A
2011-10-01
Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.
Variational optical flow estimation based on stick tensor voting.
Rashwan, Hatem A; Garcia, Miguel A; Puig, Domenec
2013-07-01
Variational optical flow techniques allow the estimation of flow fields from spatio-temporal derivatives. They are based on minimizing a functional that contains a data term and a regularization term. Recently, numerous approaches have been presented for improving the accuracy of the estimated flow fields. Among them, tensor voting has been shown to be particularly effective in the preservation of flow discontinuities. This paper presents an adaptation of the data term by using anisotropic stick tensor voting in order to gain robustness against noise and outliers with significantly lower computational cost than (full) tensor voting. In addition, an anisotropic complementary smoothness term depending on directional information estimated through stick tensor voting is utilized in order to preserve discontinuity capabilities of the estimated flow fields. Finally, a weighted non-local term that depends on both the estimated directional information and the occlusion state of pixels is integrated during the optimization process in order to denoise the final flow field. The proposed approach yields state-of-the-art results on the Middlebury benchmark.
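For context, the classic quadratic functional from which this family of methods departs (the paper replaces both terms with tensor-voting-based versions):

```latex
% Classic quadratic (Horn--Schunck) energy for the flow field w = (u, v):
\[
  E(u,v) = \int_\Omega
    \underbrace{\bigl( I_x u + I_y v + I_t \bigr)^2}_{\text{data term}}
    + \alpha \underbrace{\bigl( |\nabla u|^2 + |\nabla v|^2 \bigr)}_{\text{regularization}}
  \, dx \, dy ,
\]
% with spatio-temporal image derivatives I_x, I_y, I_t and smoothness
% weight \alpha > 0. Minimizing E yields Euler--Lagrange equations that
% are solved for the flow.
```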
Spatial resolution enhancement of satellite image data using fusion approach
NASA Astrophysics Data System (ADS)
Lestiana, H.; Sukristiyanti
2018-02-01
Object identification using remote sensing data is problematic when the spatial resolution does not match the scale of the object. The fusion approach is one method to solve this problem, improving object recognition and increasing object information by combining data from multiple sensors. Image fusion can be used to estimate environmental components that need to be monitored from multiple perspectives, such as evapotranspiration estimation, 3D ground-based characterisation, smart city applications, urban environments, terrestrial mapping, and water vegetation. In published applications of the fusion method, visible objects on land have been recognized readily, and the added object information over land has broadened the range of environmental components that can be estimated. The difficulty of recognizing invisible objects, such as Submarine Groundwater Discharge (SGD), especially in tropical areas, might likewise be reduced by the fusion method. The weak signature of such objects in sea surface temperature remains a challenge to be solved.
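As one concrete member of the fusion family (an illustration of the technique class; the chapter does not single out this algorithm), the Brovey transform pan-sharpens multispectral bands with a higher-resolution panchromatic band:

```python
import numpy as np

def brovey_fusion(ms, pan):
    """Brovey-transform pan-sharpening: each (upsampled) multispectral band
    is rescaled by the ratio of the high-resolution panchromatic band to
    the sum of the multispectral bands."""
    total = ms.sum(axis=-1, keepdims=True)
    return ms * (pan[..., None] / np.maximum(total, 1e-9))

# toy usage with synthetic, co-registered imagery
rng = np.random.default_rng(0)
ms = rng.random((128, 128, 3))   # multispectral bands, upsampled to pan grid
pan = rng.random((128, 128))     # panchromatic band
fused = brovey_fusion(ms, pan)
print(fused.shape)
```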
NASA Technical Reports Server (NTRS)
Dash, S. M.; York, B. J.; Sinha, N.; Dvorak, F. A.
1987-01-01
An overview of parabolic and PNS (Parabolized Navier-Stokes) methodology developed to treat highly curved sub- and supersonic wall jets is presented. The fundamental database to which these models were applied is discussed in detail. The analysis of strong curvature effects was found to require a semi-elliptic extension of the parabolic modeling to account for turbulent contributions to the normal pressure variations, as well as an extension of the turbulence models utilized to account for the highly enhanced mixing rates observed in situations with large convex curvature. A noniterative, pressure-split procedure is shown to extend parabolic models to account for such normal pressure variations in an efficient manner, requiring minimal additional run time over a standard parabolic approach. A new PNS methodology is presented to solve this problem, extending parabolic methodology via the addition of a characteristic-based wave solver. Applications of this approach to analyze the interaction of wave and turbulence processes in wall jets are presented.
A Novel Framework for Characterizing Exposure-Related ...
Descriptions of where and how individuals spend their time are important for characterizing exposures to chemicals in consumer products and in indoor environments. Herein we create an agent-based model (ABM) that is able to simulate longitudinal patterns in behaviors. By basing our ABM upon a needs-based artificial intelligence (AI) system, we create agents that mimic human decisions on these exposure-relevant behaviors. In a case study of adults, we use the AI to predict the inter-individual variation in the start time and duration of four behaviors: sleeping, eating, commuting, and working. The results demonstrate that the ABM can capture both inter-individual variation and how decisions on one behavior can affect subsequent behaviors. This record presents NERL's research on the use of agent-based modeling in exposure assessments, with the aim of obtaining feedback on the approach from leading experts in the field.
Biometric identification based on novel frequency domain facial asymmetry measures
NASA Astrophysics Data System (ADS)
Mitra, Sinjini; Savvides, Marios; Vijaya Kumar, B. V. K.
2005-03-01
In the modern world, the ever-growing need to ensure a system's security has spurred the growth of the newly emerging technology of biometric identification. The present paper introduces a novel set of facial biometrics based on quantified facial asymmetry measures in the frequency domain. In particular, we show that these biometrics work well for face images showing expression variations and have the potential to do so in presence of illumination variations as well. A comparison of the recognition rates with those obtained from spatial domain asymmetry measures based on raw intensity values suggests that the frequency domain representation is more robust to intra-personal distortions and is a novel approach for performing biometric identification. In addition, some feature analysis based on statistical methods comparing the asymmetry measures across different individuals and across different expressions is presented.
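A toy sketch of the general idea of frequency-domain asymmetry features — our own illustration, not the paper's specific measures:

```python
import numpy as np

def frequency_asymmetry(face):
    """Illustrative frequency-domain asymmetry features: difference between
    a face image and its left-right mirror, summarized by the magnitude
    spectrum of that difference."""
    diff = face - face[:, ::-1]              # spatial asymmetry map
    spectrum = np.abs(np.fft.fft2(diff))     # frequency-domain representation
    h, w = face.shape
    return spectrum[: h // 2, : w // 2].ravel()

# toy usage on a synthetic grayscale "face"
rng = np.random.default_rng(0)
face = rng.random((64, 64))
print(frequency_asymmetry(face).shape)
```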
Yin, Zhong; Zhang, Jianhua
2014-07-01
Identifying abnormal changes of mental workload (MWL) over time is crucial for preventing accidents due to cognitive overload and inattention of human operators in safety-critical human-machine systems. It is known that various neuroimaging technologies can be used to identify MWL variations. In order to classify MWL into a few discrete levels using representative MWL indicators and small-sized training samples, a novel EEG-based approach combining locally linear embedding (LLE), support vector clustering (SVC) and support vector data description (SVDD) techniques is proposed and evaluated using experimentally measured data. The MWL indicators from different cortical regions are first elicited using the LLE technique. Then, the SVC approach is used to find the clusters of these MWL indicators and thereby to detect MWL variations. It is shown that the clusters can be interpreted as the binary-class MWL. Furthermore, a trained binary SVDD classifier is shown to be capable of detecting slight variations of those indicators. By combining the two schemes, a SVC-SVDD framework is proposed, where the clear-cut (smaller) cluster is detected by SVC first and then a subsequent SVDD model is utilized to divide the overlapped (larger) cluster into two classes. Finally, three-class MWL levels (low, normal and high) can be identified automatically. The experimental data analysis results are compared with those of several existing methods. It has been demonstrated that the proposed framework can lead to acceptable computational accuracy and has the advantages of both unsupervised and supervised training strategies.
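An illustrative stand-in for the front end of such a pipeline, using scikit-learn's LLE and a one-class SVM (a close relative of SVC/SVDD boundary methods; the actual SVC-SVDD cascade is not reproduced here):

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.svm import OneClassSVM

# Synthetic EEG feature vectors: a dominant "normal load" cluster plus a
# smaller "high load" group shifted away from it.
rng = np.random.default_rng(0)
eeg_features = np.vstack([rng.normal(0, 1, size=(150, 32)),
                          rng.normal(3, 1, size=(30, 32))])

# LLE elicits low-dimensional MWL indicators from the EEG features.
indicators = LocallyLinearEmbedding(
    n_neighbors=10, n_components=3).fit_transform(eeg_features)

# A one-class SVM separates the dominant cluster from atypical samples.
labels = OneClassSVM(nu=0.2, gamma="scale").fit_predict(indicators)
print((labels == -1).sum(), "samples flagged outside the main cluster")
```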
A new approach to children's footwear based on foot type classification.
Mauch, M; Grau, S; Krauss, I; Maiwald, C; Horstmann, T
2009-08-01
Current shoe designs do not account for comprehensive 3-D foot shape, which means they are unable to reproduce the wide variability in foot morphology. Therefore, the purpose of this study was to capture these variations in children's feet by classifying them into groups (types) and thereby provide a basis for their implementation in the design of children's shoes. The feet of 2867 German children were measured using a 3-D foot scanner. Cluster analysis was then applied to classify the feet into three different foot types. The characteristics of these foot types differ with respect to volume and forefoot shape, both within and between shoe sizes. This new approach is in clear contrast to previous systems, since it captures the variability of foot morphology in a more comprehensive way by using a foot-typing system, and therefore paves the way for the unimpaired development of children's feet. Previous shoe systems do not allow for the wide variations in foot morphology; the new approach, based on different morphological foot types derived from 3-D measurements relevant to shoe construction, can be directly applied to create specific designs for children's shoes.
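A minimal sketch of the clustering step on synthetic foot measures (the variables and values here are illustrative; the study clustered richer 3-D scanner measurements):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative per-child foot measures (mm): length, ball width,
# heel width, instep height.
rng = np.random.default_rng(0)
feet = np.vstack([
    rng.normal([205, 78, 52, 48], [6, 3, 2, 2], size=(100, 4)),  # slender
    rng.normal([205, 84, 56, 54], [6, 3, 2, 2], size=(100, 4)),  # average
    rng.normal([205, 90, 60, 60], [6, 3, 2, 2], size=(100, 4)),  # voluminous
])
X = StandardScaler().fit_transform(feet)

# Three clusters correspond to three foot types; a width/volume-oriented
# shoe design can then be derived per type and per size.
types = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(types))
```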
Feng, Haihua; Karl, William Clem; Castañon, David A
2008-05-01
In this paper, we develop a new unified approach for laser radar range anomaly suppression, range profiling, and segmentation. This approach combines an object-based hybrid scene model for representing the range distribution of the field and a statistical mixture model for the range data measurement noise. The image segmentation problem is formulated as a minimization problem which jointly estimates the target boundary together with the target region range variation and background range variation directly from the noisy and anomaly-filled range data. This formulation allows direct incorporation of prior information concerning the target boundary, target ranges, and background ranges into an optimal reconstruction process. Curve evolution techniques and a generalized expectation-maximization algorithm are jointly employed as an efficient solver for minimizing the objective energy, resulting in a coupled pair of object and intensity optimization tasks. The method directly and optimally extracts the target boundary, avoiding a suboptimal two-step process involving image smoothing followed by boundary extraction. Experiments are presented demonstrating that the proposed approach is robust to anomalous pixels (missing data) and capable of producing accurate estimation of the target boundary and range values from noisy data.
Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei
2017-01-01
Background: Expert physicians develop their own ways of doing things. The influence of such practice variation in clinical learning is insufficiently understood. Our grounded theory study explored how residents make sense of, and behave in relation to, the procedural variations of faculty surgeons. Method: Using a constructivist grounded theory approach, we sampled senior postgraduate surgical residents and used marginal participant observation in the operating room across 56 surgical cases (146 hours), field interviews (38), and formal interviews (6) to develop a theoretical framework for residents' ways of dealing with procedural variations. Data analysis used constant comparison to iteratively refine the framework and data collection until theoretical saturation was reached. Results: The core category of the constructed theory was called thresholds of principle and preference, and it captured how faculty members position some procedural variations as negotiable and others not. The term thresholding was coined to describe residents' daily experiences of spotting, mapping, and negotiating their faculty members' thresholds and defending their own emerging thresholds. Conclusions: Thresholds of principle and preference play a key role in workplace-based medical education. Postgraduate medical learners are occupied on a day-to-day level with thresholding and attempting to make sense of the procedural variations of faculty. Workplace-based teaching and assessment should include an understanding of the integral role of thresholding in shaping learners' development. Future research should explore the nature and impact of thresholding in workplace-based learning beyond the surgical context. PMID:26505105
The estimation of absorbed dose rates for non-human biota: an extended inter-comparison.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batlle, J. V. I.; Beaugelin-Seiller, K.; Beresford, N. A.
An exercise to compare 10 approaches for the calculation of unweighted whole-body absorbed dose rates was conducted for 74 radionuclides and five of the ICRP's Reference Animals and Plants, or RAPs (duck, frog, flatfish egg, rat and elongated earthworm), selected for this exercise to cover a range of body sizes, dimensions and exposure scenarios. Results were analysed using a non-parametric method requiring no specific hypotheses about the statistical distribution of data. The obtained unweighted absorbed dose rates for internal exposure compare well between the different approaches, with 70% of the results falling within a range of variation of ±20%. The variation is greater for external exposure, although 90% of the estimates are within an order of magnitude of one another. There are some discernible patterns where specific models over- or under-predicted. These are explained based on the methodological differences, including the number of daughter products included in the calculation of dose rate for a parent nuclide; source-target geometry; databases for discrete energy and yield of radionuclides; rounding errors in integration algorithms; and intrinsic differences in calculation methods. For certain radionuclides, these factors combine to generate systematic variations between approaches. Overall, the technique chosen to interpret the data enabled methodological differences in dosimetry calculations to be quantified and compared, allowing the identification of common issues between different approaches and providing greater assurance on the fundamental dose conversion coefficient approaches used in available models for assessing radiological effects to biota.
Atlas warping for brain morphometry
NASA Astrophysics Data System (ADS)
Machado, Alexei M. C.; Gee, James C.
1998-06-01
In this work, we describe an automated approach to morphometry based on spatial normalizations of the data, and demonstrate its application to the analysis of gender differences in the human corpus callosum. The purpose is to describe a population by a reduced and representative set of variables, from which a prior model can be constructed. Our approach is rooted in the assumption that individual anatomies can be considered as quantitative variations on a common underlying qualitative plane. We can therefore imagine that a given individual's anatomy is a warped version of some referential anatomy, also known as an atlas. The spatial warps which transform a labeled atlas into anatomic alignment with a population yield immediate knowledge about organ size and shape in the group. Furthermore, variation within the set of spatial warps is directly related to the anatomic variation among the subjects. Specifically, the shape statistics--mean and variance of the mappings--for the population can be calculated in a special basis, and an eigendecomposition of the variance performed to identify the most significant modes of shape variation. The results obtained with the corpus callosum study confirm the existence of substantial anatomical differences between males and females, as reported in previous experimental work.
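A compact sketch of the shape-statistics step — mean warp and principal modes of variation from a set of displacement fields (generic PCA, without the special basis used in the paper):

```python
import numpy as np

def shape_modes(displacement_fields, n_modes=3):
    """Statistics of spatial warps: mean deformation and principal modes
    of anatomical variation via an eigendecomposition of the covariance
    (computed here through an SVD of the centered, flattened warps)."""
    W = np.stack([d.ravel() for d in displacement_fields])  # subjects x dofs
    mean = W.mean(axis=0)
    _, sing, Vt = np.linalg.svd(W - mean, full_matrices=False)
    var = sing ** 2 / (W.shape[0] - 1)                      # mode variances
    return mean, Vt[:n_modes], var[:n_modes]

# toy usage: 20 subjects, 2-D displacement fields on a 16x16 grid
rng = np.random.default_rng(0)
fields = [rng.normal(size=(16, 16, 2)) for _ in range(20)]
mean, modes, var = shape_modes(fields)
print(modes.shape, var.round(2))
```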
Demidov, German; Simakova, Tamara; Vnuchkova, Julia; Bragin, Anton
2016-10-22
Multiplex polymerase chain reaction (PCR) is a common enrichment technique for targeted massively parallel sequencing (MPS) protocols. MPS is widely used in biomedical research and clinical diagnostics as a fast and accurate tool for the detection of short genetic variations. However, identification of larger variations such as structural variants and copy number variations (CNVs) remains a challenge for targeted MPS. Some approaches and tools for structural variant detection have been proposed, but they have limitations and often require datasets of a certain type, size and expected number of amplicons affected by CNVs. In this paper, we describe a novel algorithm for high-resolution germline CNV detection in PCR-enriched targeted sequencing data and present the accompanying tool. We have developed a machine learning algorithm for the detection of large duplications and deletions in targeted sequencing data generated with a PCR-based enrichment step. We have performed verification studies and established the algorithm's sensitivity and specificity. We have compared the developed tool with other available methods applicable to the described data and shown its higher performance. We showed that our method has high specificity and sensitivity for high-resolution copy number detection in targeted sequencing data, using a large cohort of samples.
Perlman, Christopher
2018-01-01
Mental health is known to vary geographically. Different rates of utilization of mental health services across local areas reflect geographic variation in mental health and the complexity of health care. Variations and inequalities in how the health care system addresses risks are two critical issues for addressing population mental health. This study examines these issues by analyzing the utilization of mental health services in Toronto at the neighbourhood level. We adopted a shared-component spatial modeling approach that allows simultaneous analysis of two main types of health service utilization: doctor visits and hospitalizations related to mental health conditions. Our results reveal geographic variation in both types of mental health service utilization across neighbourhoods in Toronto. We identified hot and cold spots of mental health risks that are common to both types of utilization or specific to only one. Based on the evidence found, we discuss intervention strategies, focusing on the hotspots and on the provision of physician and hospital services, to improve mental health in the neighbourhoods. Limitations of the study and further research directions are also discussed. PMID:29587426
Chemical disorder as an engineering tool for spin polarization in Mn3Ga -based Heusler systems
NASA Astrophysics Data System (ADS)
Chadov, S.; D'Souza, S. W.; Wollmann, L.; Kiss, J.; Fecher, G. H.; Felser, C.
2015-03-01
Our study highlights spin-polarization mechanisms in metals by focusing on the mobilities of conducting electrons with different spins instead of their quantities. Here, we engineer electron mobility by applying chemical disorder induced by nonstoichiometric variations. As a practical example, we discuss the scheme that establishes such variations in tetragonal Mn3Ga Heusler material. We justify this approach using first-principles calculations of the spin-projected conductivity components based on the Kubo-Greenwood formalism. It follows that, in the majority of cases, even a small substitution of some other transition element instead of Mn may lead to a substantial increase in spin polarization along the tetragonal axis.
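For reference, the conductivity-based definition of spin polarization that this mobility-focused argument targets, as opposed to the density-of-states version (standard definitions):

```latex
% Transport spin polarization from spin-resolved conductivities
% (per tensor component in a tetragonal crystal, e.g., along the c axis):
\[
  P_\sigma = \frac{\sigma_{\uparrow} - \sigma_{\downarrow}}
                  {\sigma_{\uparrow} + \sigma_{\downarrow}} ,
\]
% in contrast to the density-of-states polarization
\[
  P_N = \frac{N_{\uparrow}(E_F) - N_{\downarrow}(E_F)}
             {N_{\uparrow}(E_F) + N_{\downarrow}(E_F)} ,
\]
% which counts carriers at the Fermi level but ignores their mobilities.
```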
Comments on the variational modified-hypernetted-chain theory for simple fluids
NASA Astrophysics Data System (ADS)
Rosenfeld, Yaakov
1986-02-01
The variational modified-hypernetted-chain (VMHNC) theory, based on the approximation of universality of the bridge functions, is reformulated. The new formulation includes recent calculations by Lado and by Lado, Foiles, and Ashcroft as two stages in a systematic approach, which is analyzed. A variational iterative procedure for solving the exact (diagrammatic) equations for the fluid structure, formally identical to the VMHNC, is described, presenting the theory of simple classical fluids as a one-iteration theory. An accurate method for calculating the pair structure for a given potential, and for inverting structure-factor data in order to obtain the potential and the thermodynamic functions, follows from our analysis.
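For orientation, the standard relations underlying MHNC-type closures (the VMHNC's variational determination of the bridge function is only indicated):

```latex
% Ornstein--Zernike relation with h(r) = g(r) - 1:
\[
  h(r) = c(r) + \rho \int c\bigl(|\mathbf{r} - \mathbf{r}'|\bigr)\,
                           h(r')\, d\mathbf{r}' ,
\]
% exact closure in terms of the bridge function B(r):
\[
  g(r) = \exp\!\bigl[ -\beta u(r) + h(r) - c(r) + B(r) \bigr] .
\]
% HNC sets B(r) = 0; modified-HNC theories borrow B(r) from a reference
% system, B(r) \approx B_{\mathrm{HS}}(r; \eta^{*}), with the hard-sphere
% packing fraction \eta^{*} fixed variationally (the universality ansatz).
```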
Pavanello, Michele; Tung, Wei-Cheng; Adamowicz, Ludwik
2009-11-14
Efficient optimization of the basis set is key to achieving very high accuracy in variational calculations of molecular systems employing basis functions that are explicitly dependent on the interelectron distances. In this work we present a method for a systematic enlargement of basis sets of explicitly correlated functions based on the iterative-complement-interaction approach developed by Nakatsuji [Phys. Rev. Lett. 93, 030403 (2004)]. We illustrate the performance of the method in variational calculations of H3, where we use explicitly correlated Gaussian functions with shifted centers. The total variational energy (-1.674 547 421 hartree) and the binding energy (-15.74 cm^-1) obtained in the calculation with 1000 Gaussians are the most accurate results to date.
Adaptive control applied to Space Station attitude control system
NASA Technical Reports Server (NTRS)
Lam, Quang M.; Chipman, Richard; Hu, Tsay-Hsin G.; Holmes, Eric B.; Sunkel, John
1992-01-01
This paper presents an adaptive control approach to enhance the performance of the attitude control system used by Space Station Freedom. The proposed control law was developed based on the direct adaptive control, or model reference adaptive control, scheme. Performance comparisons, subject to inertia variation, of the adaptive controller and the fixed-gain linear quadratic regulator currently implemented for the Space Station are conducted. Both the fixed-gain and the adaptive-gain controllers are able to maintain Station stability for inertia variations of up to 35 percent. However, when a 50 percent inertia variation is applied to the Station, only the adaptive controller is able to maintain the Station attitude.
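A minimal statement of the direct MRAC structure referred to above (standard Lyapunov-based form; implementation details for the Station are not given in the abstract):

```latex
% Reference model, plant, control law, and tracking error:
\[
  \dot{x}_m = A_m x_m + B_m r, \qquad
  \dot{x} = A x + B u, \qquad
  u = K_x^{\top}(t)\, x + K_r^{\top}(t)\, r, \qquad
  e = x - x_m .
\]
% Lyapunov-based adaptation of the gains, driving e \to 0 despite
% uncertainty in A (e.g., inertia variation):
\[
  \dot{K}_x = -\Gamma_x\, x\, e^{\top} P B, \qquad
  \dot{K}_r = -\Gamma_r\, r\, e^{\top} P B, \qquad
  A_m^{\top} P + P A_m = -Q, \quad P, Q \succ 0 .
\]
```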
A thermodynamic approach to the 'mitosis/apoptosis' ratio in cancer
NASA Astrophysics Data System (ADS)
Lucia, Umberto; Ponzetto, Antonio; Deisboeck, Thomas S.
2015-10-01
Cancer can be considered as an open, complex, (bio-thermo)dynamic and self-organizing system. Consequently, an entropy generation approach has been employed to analyze its mitosis/apoptosis ratio. Specifically, a novel thermodynamic anticancer strategy is suggested, based on the variation of entropy generation caused by the application of external fields, for example electromagnetic fields, for therapeutic purposes. Ultimately, this innovative approach could support conventional therapies, particularly for inoperable tumors or advanced stages of cancer, when a larger tumor burden is diagnosed and therapeutic options are often limited.
A System for Dosage-Based Functional Genomics in Poplar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henry, Isabelle M.; Zinkgraf, Matthew S.; Groover, Andrew T.
2015-08-28
Altering gene dosage through variation in gene copy number is a powerful approach to addressing questions regarding gene regulation, quantitative trait loci, and heterosis, but one that is not easily applied to sexually propagated species. Elite poplar (Populus spp) varieties are created through interspecific hybridization, followed by clonal propagation. Altered gene dosage relationships are believed to contribute to hybrid performance. Clonal propagation allows for replication and maintenance of meiotically unstable ploidy or structural variants and provides an alternative approach to investigating gene dosage effects not possible in sexually propagated species. Here, we built a genome-wide structural variation system for dosage-based functional genomics and breeding of poplar. We pollinated Populus deltoides with gamma-irradiated Populus nigra pollen to produce >500 F1 seedlings containing dosage lesions in the form of deletions and insertions of chromosomal segments (indel mutations). Using high-precision dosage analysis, we detected indel mutations in ~55% of the progeny. These indels varied in length, position, and number per individual, cumulatively tiling >99% of the genome, with an average of 10 indels per gene. Combined with future phenotype and transcriptome data, this population will provide an excellent resource for creating and characterizing dosage-based variation in poplar, including the contribution of dosage to quantitative traits and heterosis.
Litchfield, Kevin; Thomsen, Hauke; Mitchell, Jonathan S; Sundquist, Jan; Houlston, Richard S; Hemminki, Kari; Turnbull, Clare
2015-09-09
A sizable fraction of testicular germ cell tumour (TGCT) risk is expected to be explained by heritable factors. Recent genome-wide association studies (GWAS) have successfully identified a number of common SNPs associated with TGCT. It is, however, unclear how much common variation is left to be accounted for by other, yet to be identified, common SNPs, and what contribution common genetic variation makes to the heritable risk of TGCT. We approached this question using two complementary analytical techniques. We undertook a population-based analysis of the Swedish family-cancer database, through which we estimated the heritability of TGCT at 48.9% (CI: 47.2%-52.3%). We also applied Genome-Wide Complex Trait Analysis to 922 cases and 4,842 controls to estimate the heritability of TGCT. The heritability explained by known common risk SNPs identified by GWAS was 9.1%, whereas the heritability explained by all common SNPs was 37.4% (CI: 27.6%-47.2%). These complementary findings indicate that the known TGCT SNPs explain only a small proportion of the heritability and that many additional common SNPs remain to be identified. The data also suggest that a fraction of the heritability of TGCT is likely to be explained by other classes of genetic variation, such as rare disease-causing alleles.
Automatic segmentation of colon glands using object-graphs.
Gunduz-Demir, Cigdem; Kandemir, Melih; Tosun, Akif Burak; Sokmensuer, Cenk
2010-02-01
Gland segmentation is an important step to automate the analysis of biopsies that contain glandular structures. However, this remains a challenging problem, as variation in staining, fixation, and sectioning procedures leads to a considerable amount of artifacts and variance in tissue sections, which may in turn produce large variation in gland appearance. In this work, we report a new approach for gland segmentation. This approach decomposes the tissue image into a set of primitive objects and segments glands making use of the organizational properties of these objects, which are quantified through the definition of object-graphs. As opposed to the previous literature, the proposed approach employs object-based information for the gland segmentation problem, instead of using pixel-based information alone. Working with images of colon tissues, our experiments demonstrate that the proposed object-graph approach yields high segmentation accuracies for the training and test sets and significantly improves the segmentation performance of its pixel-based counterparts. The experiments also show that the object-based structure of the proposed approach provides more tolerance to artifacts and variances in tissues.
CHAMP: a locally adaptive unmixing-based hyperspectral anomaly detection algorithm
NASA Astrophysics Data System (ADS)
Crist, Eric P.; Thelen, Brian J.; Carrara, David A.
1998-10-01
Anomaly detection offers a means by which to identify potentially important objects in a scene without prior knowledge of their spectral signatures. As such, this approach is less sensitive to variations in target class composition, atmospheric and illumination conditions, and sensor gain settings than would be a spectral matched filter or similar algorithm. The best existing anomaly detectors generally fall into one of two categories: those based on local Gaussian statistics, and those based on linear mixing models. Unmixing-based approaches better represent the real distribution of data in a scene, but are typically derived and applied on a global or scene-wide basis. Locally adaptive approaches allow detection of more subtle anomalies by accommodating the spatial non-homogeneity of background classes in a typical scene, but provide a poorer representation of the true underlying background distribution. The CHAMP algorithm combines the best attributes of both approaches, applying a linear-mixing-model approach in a spatially adaptive manner. The algorithm itself, and test results on simulated and actual hyperspectral image data, are presented in this paper.
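As a rough illustration of the locally adaptive ingredient discussed above, the sketch below scores each pixel against Gaussian statistics estimated in a sliding window (an RX-style detector); CHAMP's linear-mixing component is not reproduced, and the window size and regularization are assumptions.

    import numpy as np

    def local_rx_scores(cube, win=15):
        """Sliding-window RX anomaly score (local Gaussian ingredient only)."""
        rows, cols, bands = cube.shape
        half = win // 2
        scores = np.zeros((rows, cols))
        for i in range(rows):
            for j in range(cols):
                r0, r1 = max(0, i - half), min(rows, i + half + 1)
                c0, c1 = max(0, j - half), min(cols, j + half + 1)
                block = cube[r0:r1, c0:c1].reshape(-1, bands)
                mu = block.mean(axis=0)
                cov = np.cov(block, rowvar=False) + 1e-6 * np.eye(bands)
                d = cube[i, j] - mu
                scores[i, j] = d @ np.linalg.solve(cov, d)   # squared Mahalanobis
        return scores

    scores = local_rx_scores(np.random.rand(40, 40, 8))   # synthetic cube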
Cluster-based exposure variation analysis
2013-01-01
Background: Static posture, repetitive movements and lack of physical variation are known risk factors for work-related musculoskeletal disorders, and thus need to be properly assessed in occupational studies. The aims of this study were (i) to investigate the effectiveness of a conventional exposure variation analysis (EVA) in discriminating exposure time lines and (ii) to compare it with a new cluster-based method for analysis of exposure variation. Methods: For this purpose, we simulated a repeated cyclic exposure varying within each cycle between “low” and “high” exposure levels in a “near” or “far” range, and with “low” or “high” velocities (exposure change rates). The duration of each cycle was also manipulated by selecting a “small” or “large” standard deviation of the cycle time. These parameters reflected three dimensions of exposure variation, i.e. range, frequency and temporal similarity. Each simulation trace included two realizations of 100 concatenated cycles with either low (ρ = 0.1), medium (ρ = 0.5) or high (ρ = 0.9) correlation between the realizations. These traces were analyzed by conventional EVA and a novel cluster-based EVA (C-EVA). Principal component analysis (PCA) was applied to the marginal distributions of (1) the EVA of each of the realizations (univariate approach), (2) a combination of the EVA of both realizations (multivariate approach), and (3) C-EVA. The least number of principal components describing more than 90% of the variability in each case was selected, and the projection of marginal distributions along the selected principal components was calculated. A linear classifier was then applied to these projections to discriminate between the simulated exposure patterns, and the accuracy of classified realizations was determined. Results: C-EVA classified exposures more correctly than the univariate and multivariate EVA approaches; classification accuracy was 49%, 47% and 52% for EVA (univariate and multivariate) and C-EVA, respectively (p < 0.001). All three methods performed poorly in discriminating exposure patterns differing with respect to the variability in cycle time duration. Conclusion: While C-EVA had a higher accuracy than conventional EVA, both failed to detect differences in temporal similarity. The data-driven optimality of data reduction and the capability of handling multiple exposure time lines in a single analysis are the advantages of C-EVA. PMID:23557439
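A minimal sketch of the dimension-reduction-plus-classification step described in the Methods above; the EVA binning, the 90% variability rule, and the linear classifier come from the abstract, while the synthetic marginal distributions and labels are stand-ins.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    X = rng.dirichlet(np.ones(12), size=200)   # 200 traces x 12 EVA bins (fractions of time)
    y = rng.integers(0, 4, size=200)           # 4 simulated exposure patterns

    pca = PCA(n_components=0.90)               # least components covering >90% variability
    Z = pca.fit_transform(X)
    acc = LinearDiscriminantAnalysis().fit(Z, y).score(Z, y)
    print(pca.n_components_, "components, training accuracy:", acc)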
NASA Astrophysics Data System (ADS)
Jochimsen, Thies H.; Schulz, Jessica; Busse, Harald; Werner, Peter; Schaudinn, Alexander; Zeisig, Vilia; Kurch, Lars; Seese, Anita; Barthel, Henryk; Sattler, Bernhard; Sabri, Osama
2015-06-01
This study explores the possibility of using simultaneous positron emission tomography—magnetic resonance imaging (PET-MRI) to estimate the lean body mass (LBM) in order to obtain a standardized uptake value (SUV) which is less dependent on the patients' adiposity. This approach is compared to (1) the commonly-used method based on a predictive equation for LBM, and (2) to using an LBM derived from PET-CT data. It is hypothesized that an MRI-based correction of SUV provides a robust method due to the high soft-tissue contrast of MRI. A straightforward approach to calculate an MRI-derived LBM is presented. It is based on the fat and water images computed from the two-point Dixon MRI primarily used for attenuation correction in PET-MRI. From these images, a water fraction was obtained for each voxel. Averaging over the whole body yielded the weight-normalized LBM. Performance of the new approach in terms of reducing variations of 18F-Fludeoxyglucose SUVs in brain and liver across 19 subjects was compared with results using predictive methods and PET-CT data to estimate the LBM. The MRI-based method reduced the coefficient of variation of SUVs in the brain by 41 ± 10% which is comparable to the reduction by the PET-CT method (35 ± 10%). The reduction of the predictive LBM method was 29 ± 8%. In the liver, the reduction was less clear, presumably due to other sources of variation. In conclusion, employing the Dixon data in simultaneous PET-MRI for calculation of lean body mass provides a brain SUV which is less dependent on patient adiposity. The reduced dependency is comparable to that obtained by CT and predictive equations. Therefore, it is more comparable across patients. The technique does not impose an overhead in measurement time and is straightforward to implement.
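The weight-normalised LBM computation described here reduces to a per-voxel water fraction averaged over a body mask. A minimal numpy sketch, assuming synthetic Dixon images and illustrative dose and uptake values:

    import numpy as np

    rng = np.random.default_rng(0)
    water = rng.random((64, 64, 64))       # Dixon water image (a.u.)
    fat = rng.random((64, 64, 64))         # Dixon fat image (a.u.)
    body = (water + fat) > 0.2             # crude body mask (assumption)
    frac_water = water[body] / (water[body] + fat[body])
    lbm_fraction = frac_water.mean()       # weight-normalised LBM
    weight_kg, dose_mbq = 80.0, 350.0      # patient weight, injected dose
    lbm_g = lbm_fraction * weight_kg * 1000.0
    uptake_kbq_ml = 12.0                   # voxel activity concentration
    suv_lbm = uptake_kbq_ml * lbm_g / (dose_mbq * 1000.0)  # SUV normalised by LBM
    print(f"LBM fraction {lbm_fraction:.2f}, SUV-LBM {suv_lbm:.2f}")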
Zhao, Peng; Wang, Qing-Hong; Tian, Cheng-Ming; Kakishima, Makoto
2015-01-01
The species in genus Melampsora are the causal agents of leaf rust diseases on willows in natural habitats and plantations. However, the classification and recognition of species diversity are challenging because morphological characteristics are scant and morphological variation in Melampsora on willows has not been thoroughly evaluated. Thus, the taxonomy of Melampsora species on willows remains confused, especially in China where 31 species were reported based on either European or Japanese taxonomic systems. To clarify the species boundaries of Melampsora species on willows in China, we tested two approaches for species delimitation inferred from morphological and molecular variations. Morphological species boundaries were determined based on numerical taxonomic analyses of morphological characteristics in the uredinial and telial stages by cluster analysis and one-way analysis of variance. Phylogenetic species boundaries were delineated based on the generalized mixed Yule-coalescent (GMYC) model analysis of the sequences of the internal transcribed spacer (ITS1 and ITS2) regions including the 5.8S and D1/D2 regions of the large nuclear subunit of the ribosomal RNA gene. Numerical taxonomic analyses of 14 morphological characteristics recognized in the uredinial-telial stages revealed 22 morphological species, whereas the GMYC results recovered 29 phylogenetic species. In total, 17 morphological species were in concordance with the phylogenetic species and 5 morphological species were in concordance with 12 phylogenetic species. Both the morphological and molecular data supported 14 morphological characteristics, including 5 newly recognized characteristics and 9 traditionally emphasized characteristics, as effective for the differentiation of Melampsora species on willows in China. Based on the concordance and discordance of the two species delimitation approaches, we concluded that integrative taxonomy by using both morphological and molecular variations was an effective approach for delimitating Melampsora species on willows in China. PMID:26680416
NASA Astrophysics Data System (ADS)
Carlo Ponzo, Felice; Ditommaso, Rocco
2015-04-01
This study presents an innovative strategy for the automatic evaluation of the time-varying fundamental frequency and related damping factor of nonlinear structures during strong-motion phases. Most methods for damage detection are based on assessing variations of the dynamic parameters characterizing the monitored structure. A crucial aspect of these methods is the automatic and accurate estimation of both structural eigenfrequencies and related damping factors, also during nonlinear behaviour. A new method, named Short-Time Impulse Response Function (STIRF), based on nonlinear interferometric analysis combined with the Fourier Transform (FT), is proposed here to allow scientists and engineers to characterize frequency and damping variations of a monitored structure. The STIRF approach helps overcome some limitations of techniques based on the simple Fourier Transform. These techniques provide good results when the response of the monitored system is stationary, but fail when the system exhibits non-stationary, time-varying behaviour: non-stationary input and soil-foundation and/or adjacent-structure interaction phenomena can expose the inadequacy of classical techniques for analysing the nonlinear and/or non-stationary behaviour of structures. Using this kind of approach, it is possible to improve some existing methods for automatic damage detection, providing stable results also during the strong-motion phase. Results are consistent with those obtained using other techniques. The main advantage of the proposed approach (STIRF) for Structural Health Monitoring lies in the simplicity of interpreting the nonlinear variations of the fundamental frequency and the related equivalent viscous damping factor. The proposed methodology has been tested on both numerical and experimental models, also using data retrieved from shaking-table tests. Based on the results provided in this study, the methodology appears able to evaluate fast variations (over time) of the dynamic parameters of a generic reinforced concrete framed structure. Further analyses are necessary to better calibrate the length of the moving time-window (in order to minimize spurious frequencies within each Interferometric Response Function evaluated on both weak- and strong-motion phases) and to verify the possibility of using the STIRF to analyse the nonlinear behaviour of general systems. Acknowledgements: This study was partially funded by the Italian Civil Protection Department within the project DPC-RELUIS 2014 - RS4 ''Seismic observatory of structures and health monitoring''. References: R. Ditommaso, F.C. Ponzo (2015). Automatic evaluation of the fundamental frequency variations and related damping factor of reinforced concrete framed structures using the Short Time Impulse Response Function (STIRF). Engineering Structures, 82 (2015), 104-112. http://dx.doi.org/10.1016/j.engstruct.2014.10.023.
Kim, Dongchul; Kang, Mingon; Biswas, Ashis; Liu, Chunyu; Gao, Jean
2016-08-10
Inferring gene regulatory networks is one of the most interesting research areas in systems biology. Many inference methods have been developed using a variety of computational models and approaches. However, there are two issues to solve. First, depending on the structural or computational model of an inference method, the results tend to be inconsistent due to the innately different advantages and limitations of the methods. Therefore, the combination of dissimilar approaches is in demand as an alternative way to overcome the limitations of standalone methods through complementary integration. Second, sparse linear regression penalized by a regularization parameter (lasso) and bootstrapping-based sparse linear regression methods have been suggested in state-of-the-art network inference methods, but they are not effective for data with a small sample size, and a true regulator can be missed if the target gene is strongly affected by an indirect regulator with high correlation or by another true regulator. We present two novel network inference methods based on the integration of three different criteria: (i) a z-score to measure the variation of gene expression from knockout data, (ii) mutual information for the dependency between two genes, and (iii) linear regression-based feature selection. Based on these criteria, we propose a lasso-based random feature selection algorithm (LARF) to achieve better performance, overcoming the limitations of bootstrapping mentioned above. In this work, there are three main contributions. First, our z-score-based method to measure gene expression variations from knockout data is more effective than similar criteria in related works. Second, we confirmed that true regulator selection can be effectively improved by LARF. Lastly, we verified that an integrative approach can clearly outperform a single method when two different methods are effectively joined. In the experiments, our methods were validated by outperforming the state-of-the-art methods on DREAM challenge data, and LARF was then applied to inference of gene regulatory networks associated with psychiatric disorders.
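The three criteria can be illustrated with a toy sketch: a knockout z-score, mutual information, and lasso over random feature subsets. The simulated data, thresholds, and subset sizes are assumptions, and the paper's actual LARF algorithm is more involved.

    import numpy as np
    from sklearn.feature_selection import mutual_info_regression
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)
    expr = rng.normal(size=(30, 10))                 # samples x candidate regulators
    target = 0.8 * expr[:, 0] + rng.normal(scale=0.3, size=30)

    # (i) z-score of target variation under a simulated knockout of gene 0
    ko_target = target - 0.8 * expr[:, 0]
    z0 = abs(ko_target.mean() - target.mean()) / target.std()

    # (ii) mutual information between each candidate and the target
    mi = mutual_info_regression(expr, target, random_state=0)

    # (iii) lasso over random feature subsets, voting for selected regulators
    votes = np.zeros(expr.shape[1])
    for _ in range(200):
        subset = rng.choice(expr.shape[1], size=5, replace=False)
        coef = Lasso(alpha=0.05).fit(expr[:, subset], target).coef_
        votes[subset[np.abs(coef) > 1e-6]] += 1
    print("z:", round(z0, 2), "top MI gene:", mi.argmax(), "top-voted gene:", votes.argmax())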
Barman, Linda; Silén, Charlotte; Bolander Laksov, Klara
2014-12-01
This paper reports on how teachers within health sciences education translate outcome-based education (OBE) into practice when they design courses. The study is an empirical contribution to the debate about outcome- and competency-based approaches in health sciences education. A qualitative method was used to study how teachers from 14 different study programmes designed courses before and after OBE was implemented. Using an interpretative approach, analysis of documents and interviews was carried out. The findings show that teachers enacted OBE either to design for more competency-oriented teaching-learning, or to further detail knowledge and thus move towards reductionism. Teachers mainly understood the outcome-based framework as useful to support students' learning, although the demand for accountability created tension and became a bureaucratic hindrance to design for development of professional competence. The paper shows variations of how teachers enacted the same outcome-based framework for instructional design. These differences can add a richer understanding of how outcome- or competency-based approaches relate to teaching-learning at a course level.
Palgrave, Christopher J.; Gilmour, Linzi; Lowden, C. Stewart; Lillico, Simon G.; Mellencamp, Martha A.; Whitelaw, C. Bruce A.
2011-01-01
African swine fever virus (ASFV) causes a highly infectious disease of domestic pigs, with virulent isolates producing a rapidly fatal hemorrhagic fever. In contrast, the porcine species endogenous to Africa tolerate infection. The ability of the virus to persist in one host while killing another genetically related host implies that disease severity may be, in part, modulated by host genetic variation. To complement transcription profiling approaches to identify the underlying genetic variation in the host response to ASFV, we have taken a candidate gene approach based on known signaling pathways that interact with the virus-encoded immunomodulatory protein A238L. We report the sequencing of these genes from different pig species and the identification and initial in vitro characterization of polymorphic variation in RELA (p65; v-rel reticuloendotheliosis viral oncogene homolog A), the major component of the NF-κB transcription factor. Warthog RELA and domestic pig RELA differ at three amino acids. Transient cell transfection assays indicate that this variation is reflected in reduced NF-κB activity in vitro for warthog RELA but not for domestic pig RELA. Induction assays indicate that warthog RELA and domestic pig RELA are elevated essentially to the same extent. Finally, mutational studies indicate that the S531P site conveys the majority of the functional variation between warthog RELA and domestic pig RELA. We propose that the variation in RELA identified between the warthog and domestic pig has the potential to underlie the difference between tolerance and rapid death upon ASFV infection. PMID:21450812
Variational Approach in the Theory of Liquid-Crystal State
NASA Astrophysics Data System (ADS)
Gevorkyan, E. V.
2018-03-01
The variational calculus of Leonhard Euler is a basis for modern mathematics and theoretical physics. The efficiency of the variational approach in the statistical theory of the liquid-crystal state, and more generally in condensed-state theory, is shown. In particular, the developed approach allows us to correctly introduce effective pair interactions and to optimize simple models of liquid crystals with the help of realistic intermolecular potentials.
Nishizawa, Masafumi; Hoshide, Satoshi; Okawara, Yukie; Matsuo, Takefumi; Kario, Kazuomi
2017-01-01
At the time of the Great East Japan earthquake and tsunami (March 2011), the authors developed a web-based, information and communications technology (ICT)-based blood pressure (BP) monitoring system (the Disaster CArdiovascular Prevention [DCAP] Network) and introduced it in a catastrophically damaged area (Minamisanriku town) to help control the survivors' BP. Using this system, home BP (HBP) was monitored and the data were automatically transmitted to a central computer database and to the survivors' attending physicians. The study participants, 341 hypertensive patients, continued to use this system for 4 years after the disaster, and all of the obtained HBP readings were analyzed. This DCAP HBP-guided approach helped achieve a decrease in the participants' HBP (initial average: 151.3±20.0/86.9±10.2 mm Hg to 120.2±12.1/70.8±10.2 mm Hg) over the 4 years. In addition, the amplitude of seasonal BP variation was suppressed, and the interval from the summer lowest HBP values to the winter peak HBP values was gradually prolonged. This ICT-based approach was useful for achieving strict HBP control and minimizing seasonal BP variation even in a catastrophically damaged area during the 4-year period after the disaster, suggesting that it could become a routine way to monitor BP in the community. ©2016 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, H. M. Abdul; Ukkusuri, Satish V.
2017-06-29
EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES executes quickly but underestimates emissions in most cases because it ignores the speed variation at congested networks with signalized intersections. In contrast, the atomic second-by-second speed-profile (i.e., per-vehicle trajectory) technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find LDSs for EPA-MOVES that produce emission estimates better than the average-speed-based technique with execution time faster than the atomic speed-profile approach. We applied HC-DTW to sample data from a signalized corridor and found that it can significantly reduce computational time without compromising accuracy. The technique developed in this research can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing computational time with reasonably accurate estimates. The method is highly appropriate for transportation networks with high variation in speed, such as signalized intersections. Experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.
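A minimal sketch of the HC-DTW idea, assuming synthetic speed profiles: pairwise DTW distances feed average-linkage hierarchical clustering, and one representative profile per cluster would then serve as a MOVES link-driving-schedule. Cluster count and profile lengths are placeholders.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def dtw(a, b):
        """Plain O(len(a)*len(b)) dynamic-time-warping distance."""
        D = np.full((len(a) + 1, len(b) + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[-1, -1]

    rng = np.random.default_rng(2)
    profiles = [rng.uniform(0, 15, size=60) for _ in range(20)]  # speed traces (m/s)
    n = len(profiles)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = dtw(profiles[i], profiles[j])
    labels = fcluster(linkage(squareform(dist), method="average"),
                      t=4, criterion="maxclust")
    print("cluster sizes:", np.bincount(labels)[1:])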
Interpretable functional principal component analysis.
Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo
2016-09-01
Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.
A Decision-Based Modified Total Variation Diffusion Method for Impulse Noise Removal
Zhu, Qingxin; Song, Xiuli; Tao, Jinsong
2017-01-01
Impulse noise removal usually employs median filtering, switching median filtering, the total variation L1 method, and their variants. These approaches, however, often introduce excessive smoothing and can result in extensive blurring of visual features, and thus are suitable only for images with low-density noise. A new method to remove noise is proposed in this paper to overcome this limitation; it divides pixels into different categories based on different noise characteristics. If an image is corrupted by salt-and-pepper noise, the pixels are divided into corrupted and noise-free; if the image is corrupted by random-valued impulses, the pixels are divided into corrupted, noise-free, and possibly corrupted. Pixels falling into different categories are processed differently. If a pixel is corrupted, modified total variation diffusion is applied; if the pixel is possibly corrupted, weighted total variation diffusion is applied; otherwise, the pixel is left unchanged. Experimental results show that the proposed method is robust to different noise strengths and suitable for different images, with strong noise removal capability as shown by PSNR/SSIM results as well as the visual quality of restored images. PMID:28536602
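For the salt-and-pepper case, the decision step reduces to flagging extreme-valued pixels and diffusing only those. The sketch below uses a plain discrete Laplacian as a simplified surrogate for the paper's modified total variation diffusion; iteration count and step size are assumptions.

    import numpy as np

    def denoise_salt_pepper(img, iters=30, lam=0.2):
        """Diffuse only pixels flagged as extreme-valued (0 or 255);
        noise-free pixels are left unchanged, as the paper prescribes."""
        out = img.astype(float).copy()
        corrupted = (img == 0) | (img == 255)
        for _ in range(iters):
            # 4-neighbour diffusion step (simplified surrogate for modified TV)
            up, down = np.roll(out, 1, 0), np.roll(out, -1, 0)
            left, right = np.roll(out, 1, 1), np.roll(out, -1, 1)
            lap = up + down + left + right - 4.0 * out
            out[corrupted] += lam * lap[corrupted]
        return out

    noisy = np.where(np.random.rand(64, 64) < 0.1, 0, 128).astype(np.uint8)
    clean = denoise_salt_pepper(noisy)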
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jelev, L., E-mail: ljelev@abv.bg; Surchev, L.
2008-09-15
In routine clinical practice, variations of the radial artery are the main reason for technical failure during transradial catheterization. If these variations are well documented, however, they do not represent a problem in the transradial approach. We therefore report a rare variant of the radial artery that is unusual but potentially valuable for clinical practice: it arises at a right angle from the brachial artery and passes behind the biceps brachii tendon. Based on our findings and on an extensive literature review, we propose for the first time a clinically oriented classification of the variations of the radial artery. This classification is related to catheterization success at the usual access site of the radial artery at the wrist.
NASA Technical Reports Server (NTRS)
Asenov, Asen; Kaya, S.; Davies, J. H.; Saini, S.
2000-01-01
We use the density gradient (DG) simulation approach to study, in 3D, the effect of local oxide thickness fluctuations on the threshold voltage of decanano MOSFETs in a statistical manner. A description of the reconstruction procedure for the random 2D surfaces representing the 'atomistic' Si-SiO2 interface variations is presented. The procedure is based on power spectrum synthesis in the Fourier domain and can include either Gaussian or exponential spectra. The simulations show that threshold voltage variations induced by oxide thickness fluctuations become significant when the gate length of the devices becomes comparable to the correlation length of the fluctuations. The extent of quantum corrections in the simulations with respect to the classical case and the dependence of threshold variations on the oxide thickness are examined.
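The Fourier-domain reconstruction procedure can be sketched compactly: synthesize the chosen power spectrum, attach random phases, and inverse-transform. Grid size, correlation length, and RMS amplitude below are placeholders.

    import numpy as np

    def rough_interface(n=128, dx=0.2, corr_len=1.8, rms=0.3, gaussian=True, seed=0):
        """Random 2D interface via power-spectrum synthesis in Fourier space."""
        rng = np.random.default_rng(seed)
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
        kx, ky = np.meshgrid(k, k)
        k2 = kx**2 + ky**2
        if gaussian:
            psd = np.exp(-k2 * corr_len**2 / 4.0)        # Gaussian spectrum
        else:
            psd = (1.0 + k2 * corr_len**2) ** (-1.5)     # exponential correlations
        phase = np.exp(2j * np.pi * rng.random((n, n)))  # random phases
        surf = np.fft.ifft2(np.sqrt(psd) * phase).real
        return surf * rms / surf.std()                   # scale to target RMS

    surface = rough_interface()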
Geometric constrained variational calculus. III: The second variation (Part II)
NASA Astrophysics Data System (ADS)
Massa, Enrico; Luria, Gianvittorio; Pagani, Enrico
2016-03-01
The problem of minimality for constrained variational calculus is analyzed within the class of piecewise differentiable extremaloids. A fully covariant representation of the second variation of the action functional based on a family of local gauge transformations of the original Lagrangian is proposed. The necessity of pursuing a local adaptation process, rather than the global one described in [1] is seen to depend on the value of certain scalar attributes of the extremaloid, here called the corners’ strengths. On this basis, both the necessary and the sufficient conditions for minimality are worked out. In the discussion, a crucial role is played by an analysis of the prolongability of the Jacobi fields across the corners. Eventually, in the appendix, an alternative approach to the concept of strength of a corner, more closely related to Pontryagin’s maximum principle, is presented.
The normal-equivalent: a patient-specific assessment of facial harmony.
Claes, P; Walters, M; Gillett, D; Vandermeulen, D; Clement, J G; Suetens, P
2013-09-01
Evidence-based practice in oral and maxillofacial surgery would greatly benefit from an objective assessment of facial harmony or gestalt. Normal reference faces have previously been introduced, but they describe harmony in facial form as an average only and fail to report on harmonic variations found between non-dysmorphic faces. In this work, facial harmony, in all its complexity, is defined using a face-space, which describes all possible variations within a non-dysmorphic population; this was sampled here, based on 400 healthy subjects. Subsequently, dysmorphometrics, which involves the measurement of morphological abnormalities, is employed to construct the normal-equivalent within the given face-space of a presented dysmorphic face. The normal-equivalent can be seen as a synthetic identical but unaffected twin that is a patient-specific and population-based normal. It is used to extract objective scores of facial discordancy. This technique, along with a comparing approach, was used on healthy subjects to establish ranges of discordancy that are accepted to be normal, as well as on two patient examples before and after surgical intervention. The specificity of the presented normal-equivalent approach was confirmed by correctly attributing abnormality and providing regional depictions of the known dysmorphologies. Furthermore, it proved to be superior to the comparing approach. Copyright © 2013 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Park, Juneyoung; Abdel-Aty, Mohamed; Lee, Jaeyoung
2016-09-01
Although many researchers have estimated crash modification factors (CMFs) for specific treatments (or countermeasures), there is a lack of prior studies exploring the variation of CMFs. Thus, the main objectives of this study are: (a) to estimate CMFs for the installation of different types of roadside barriers, and (b) to determine the changes in safety effects for different crash types, severities, and conditions. Two observational before-after analyses (i.e., empirical Bayes (EB) and full Bayes (FB) approaches) were utilized in this study to estimate CMFs. To consider the variation of safety effects based on different vehicle, driver, weather, and time-of-day information, the crashes were categorized by vehicle size (passenger and heavy), driver age (young, middle, and old), weather condition (normal and rain), and time of day (daytime and nighttime). The results show that the addition of roadside barriers is effective in reducing severe crashes of all types and run-off-roadway (ROR) crashes. On the other hand, it was found that roadside barriers tend to increase crashes of all types across all severities. The results indicate that the treatment might increase the total number of crashes but be helpful in reducing injury and severe crashes. In this study, the variation of CMFs was determined for ROR crashes based on the different vehicle, driver, weather, and time information. The variation of CMFs can enhance the reliability of CMFs for different roadway conditions in the decision-making process. It is also recommended to identify the safety effects of specific treatments for different crash types and severity levels with consideration of vehicle, driver, weather, and time-of-day information. Copyright © 2016 Elsevier Ltd and National Safety Council. All rights reserved.
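For reference, a minimal sketch of the empirical Bayes before-after computation in its standard (Hauer-style) form; the numbers and the safety-performance-function predictions are illustrative, not values from the study.

    # Empirical Bayes before-after CMF estimate (standard formulation sketch;
    # inputs are illustrative placeholders).
    def eb_cmf(obs_before, obs_after, spf_before, spf_after, overdispersion):
        w = 1.0 / (1.0 + overdispersion * spf_before)       # EB weight
        eb_before = w * spf_before + (1.0 - w) * obs_before # EB-adjusted baseline
        expected_after = eb_before * (spf_after / spf_before)
        return obs_after / expected_after                   # CMF

    print(eb_cmf(obs_before=52, obs_after=30,
                 spf_before=45.0, spf_after=44.0, overdispersion=0.21))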
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galarraga, Haize; Warren, Robert J.; Lados, Diana A.
Electron beam melting (EBM) is a metal powder bed fusion additive manufacturing (AM) technology that is used to fabricate three-dimensional near-net-shaped parts directly from computer models. Ti-6Al-4V is the most widely used and studied alloy for this technology and is the focus of this work in its ELI (Extra Low Interstitial) variation. The mechanisms of microstructure formation and evolution, and their subsequent influence on the mechanical properties of the alloy in the as-fabricated condition, have been documented by various researchers. In the present work, the thermal history resulting in the formation of the as-fabricated microstructure was analyzed and studied by a thermal simulation. Subsequently, different heat treatments were performed based on three approaches in order to study the effects of heat treatment on the distinctive microstructure formed during the EBM fabrication process. In the first approach, the effect of cooling rate after the solutionizing process was studied. In the second approach, the variation of α lath thickness during annealing treatment and its correlation with mechanical properties was established. In the last approach, several solutionizing and aging experiments were conducted.
SU-E-T-429: Uncertainties of Cell Surviving Fractions Derived From Tumor-Volume Variation Curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvetsov, A
2014-06-01
Purpose: To evaluate uncertainties of the cell surviving fraction reconstructed from tumor-volume variation curves during radiation therapy, using sensitivity analysis based on linear perturbation theory. Methods: The time-dependent tumor-volume functions V(t) were calculated using a two-level cell population model, which is based on the separation of the entire tumor cell population into two subpopulations: oxygenated viable cells and lethally damaged cells. The sensitivity function is defined as S(t) = [δV(t)/V(t)]/[δx/x], where δV(t)/V(t) is the time-dependent relative variation of the volume V(t) and δx/x is the relative variation of the radiobiological parameter x. The sensitivity analysis was performed using the direct perturbation method, where the radiobiological parameter x was changed by a certain error and the tumor volume was recalculated to evaluate the corresponding tumor-volume variation. Tumor-volume variation curves and sensitivity functions were computed for different values of the cell surviving fraction from the practically important interval S2 = 0.1-0.7 using the two-level cell population model. Results: The sensitivity function of tumor volume to the cell surviving fraction reached a relatively large value of 2.7 for S2 = 0.7 and approached zero as S2 approached zero. Assuming a systematic error of 3-4%, we obtain that the relative error in S2 is less than 20% in the range S2 = 0.4-0.7. This result is important because large values of S2, which are associated with poor treatment outcome, should be measured with relatively small uncertainties. For very small values of S2 < 0.3, the relative error can be larger than 20%; however, the absolute error does not increase significantly. Conclusion: Tumor-volume curves measured during radiotherapy can be used for the evaluation of cell surviving fractions usually observed in radiation therapy with conventional fractionation.
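The direct perturbation recipe translates directly into code: evaluate V(t) at x and at x(1 + δ), and form the ratio of relative changes. The sketch below uses a toy exponential-regression stand-in for the two-level cell population model.

    import numpy as np

    def sensitivity(volume_model, x, t, dx_rel=0.01):
        """Direct-perturbation estimate of S(t) = [dV/V] / [dx/x]."""
        v0 = volume_model(t, x)
        v1 = volume_model(t, x * (1.0 + dx_rel))
        return (v1 - v0) / v0 / dx_rel

    # Toy exponential-regression stand-in for the two-level population model
    toy = lambda t, s2: np.exp(-(1.0 - s2) * 0.1 * t)
    t = np.linspace(0.0, 40.0, 81)
    print(f"S(t=40) for S2=0.5: {sensitivity(toy, 0.5, t)[-1]:.2f}")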
Buckley, Mike
2016-03-24
Collagen is one of the most ubiquitous proteins in the animal kingdom and the dominant protein in extracellular tissues such as bone, skin and other connective tissues, in which it acts primarily as a supporting scaffold. It has been widely investigated scientifically, not only as a biomedical material for regenerative medicine, but also for its role as a food source for both humans and livestock. Due to the long-term stability of collagen, as well as its abundance in bone, it has been proposed as a source of biomarkers for species identification, not only for heat- and pressure-rendered animal feed but also in ancient archaeological and palaeontological specimens, typically carried out by peptide mass fingerprinting (PMF) as well as in-depth liquid chromatography (LC)-based tandem mass spectrometric methods. Through the analysis of the three most common domesticated species (cow, sheep, and pig), this research investigates the advantages of each approach over the other, examining sites of sequence variation in relation to known functional properties of the collagen molecule. Results indicate that the previously identified species biomarkers from PMF analysis are not among the most variable type 1 collagen peptides present in these tissues, the latter of which can be detected by LC-based methods. However, it is clear that the highly repetitive sequence motif of collagen throughout the molecule, combined with the variability of the sites and relative abundance levels of hydroxylation, can result in high-scoring false positive peptide matches using these LC-based methods. Additionally, the greater sequence variation of the alpha 2(I) chain, in comparison to the alpha 1(I) chain, did not appear to be specific to any particular functional properties, implying that intra-chain functional constraints on sequence variation are not as great as inter-chain constraints. However, although some of the most variable peptides were only observed with LC-based methods, until the range of publicly available collagen sequences improves, the simplicity of the PMF approach and the suitable range of peptide sequence variation observed make it the ideal method for initial taxonomic identification, with further analysis by LC-based methods only when required.
The diversity and evolution of ecological and environmental citizen science.
Pocock, Michael J O; Tweddle, John C; Savage, Joanna; Robinson, Lucy D; Roy, Helen E
2017-01-01
Citizen science-the involvement of volunteers in data collection, analysis and interpretation-simultaneously supports research and public engagement with science, and its profile is rapidly rising. Citizen science represents a diverse range of approaches, but until now this diversity has not been quantitatively explored. We conducted a systematic internet search and discovered 509 environmental and ecological citizen science projects. We scored each project for 32 attributes based on publicly obtainable information and used multiple factor analysis to summarise this variation to assess citizen science approaches. We found that projects varied according to their methodological approach from 'mass participation' (e.g. easy participation by anyone anywhere) to 'systematic monitoring' (e.g. trained volunteers repeatedly sampling at specific locations). They also varied in complexity from approaches that are 'simple' to those that are 'elaborate' (e.g. provide lots of support to gather rich, detailed datasets). There was a separate cluster of entirely computer-based projects but, in general, we found that the range of citizen science projects in ecology and the environment showed continuous variation and cannot be neatly categorised into distinct types of activity. While the diversity of projects begun in each time period (pre 1990, 1990-99, 2000-09 and 2010-13) has not increased, we found that projects tended to have become increasingly different from each other as time progressed (possibly due to changing opportunities, including technological innovation). Most projects were still active so consequently we found that the overall diversity of active projects (available for participation) increased as time progressed. Overall, understanding the landscape of citizen science in ecology and the environment (and its change over time) is valuable because it informs the comparative evaluation of the 'success' of different citizen science approaches. Comparative evaluation provides an evidence-base to inform the future development of citizen science activities.
Pascual, Laura; Xu, Jiaxin; Causse, Mathilde
2013-01-01
Integrative systems biology proposes new approaches to decipher the variation of phenotypic traits. In an effort to link the genetic variation and the physiological and molecular bases of fruit composition, the proteome (424 protein spots), metabolome (26 compounds), enzymatic profile (26 enzymes), and phenotypes of eight tomato accessions, covering the genetic diversity of the species, and four of their F1 hybrids, were characterized at two fruit developmental stages (cell expansion and orange-red). The contents of metabolites varied among the genetic backgrounds, while enzyme profiles were less variable, particularly at the cell expansion stage. Frequent genotype-by-stage interactions suggested that the trends observed for one accession at a physiological level may change in another accession. In agreement with this, the inheritance modes varied between crosses and stages. Although additivity was predominant, 40% of the traits were non-additively inherited. Relationships among traits revealed associations between different levels of expression and provided information on several key proteins. Notably, the role of fructokinase, invertase, and cysteine synthase in the variation of metabolites was highlighted. Several stress-related proteins also appeared related to fruit weight differences. These key proteins might be targets for improving metabolite contents of the fruit. This systems biology approach provides better understanding of the networks controlling the genetic variation of tomato fruit composition. In addition, the wide data sets generated provide an ideal framework to develop innovative integrated hypotheses and will be highly valuable for the research community. PMID:24151307
Identifying environmental correlates of intraspecific genetic variation.
Harrisson, K A; Yen, J D L; Pavlova, A; Rourke, M L; Gilligan, D; Ingram, B A; Lyon, J; Tonkin, Z; Sunnucks, P
2016-09-01
Genetic variation is critical to the persistence of populations and their capacity to adapt to environmental change. The distribution of genetic variation across a species' range can reveal critical information that is not necessarily represented in species occurrence or abundance patterns. We identified environmental factors associated with the amount of intraspecific, individual-based genetic variation across the range of a widespread freshwater fish species, the Murray cod Maccullochella peelii. We used two different approaches to statistically quantify the relative importance of predictor variables, allowing for nonlinear relationships: a random forest model and a Bayesian approach. The latter also accounted for population history. Both approaches identified associations between homozygosity by locus and both disturbance to the natural flow regime and mean annual flow. Homozygosity by locus was negatively associated with disturbance to the natural flow regime, suggesting that river reaches with more disturbed flow regimes may support larger, more genetically diverse populations. Our findings are consistent with the hypothesis that artificially induced perennial flows in regulated channels may provide greater and more consistent habitat and reduce the frequency of population bottlenecks that can occur frequently under the highly variable and unpredictable natural flow regime of the system. Although extensive river regulation across eastern Australia has not had an overall positive effect on Murray cod numbers over the past century, regulation may not represent the primary threat to Murray cod survival. Instead, pressures other than flow regulation may be more critical to the persistence of Murray cod (for example, reduced frequency of large floods, overfishing and chemical pollution).
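A minimal sketch of the random-forest half of the analysis, with synthetic data in which homozygosity by locus decreases with flow disturbance, as the study reports; predictor names and effect sizes are stand-ins.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(3)
    n = 300
    X = np.column_stack([
        rng.uniform(0, 1, n),       # flow-regime disturbance index
        rng.lognormal(1, 0.5, n),   # mean annual flow
    ])
    hl = 0.5 - 0.3 * X[:, 0] + rng.normal(scale=0.05, size=n)  # homozygosity by locus
    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, hl)
    print(dict(zip(["flow_disturbance", "mean_annual_flow"],
                   rf.feature_importances_.round(3))))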
NASA Astrophysics Data System (ADS)
Bartels, A.; Bartel, T.; Canadija, M.; Mosler, J.
2015-09-01
This paper deals with the thermomechanical coupling in dissipative materials. The focus lies on finite strain plasticity theory and the temperature increase resulting from plastic deformation. For this type of problem, two fundamentally different modeling approaches can be found in the literature: (a) models based on thermodynamical considerations and (b) models based on the so-called Taylor-Quinney factor. While a naive straightforward implementation of thermodynamically consistent approaches usually leads to an over-prediction of the temperature increase due to plastic deformation, models relying on the Taylor-Quinney factor often violate fundamental physical principles such as the first and the second law of thermodynamics. In this paper, a thermodynamically consistent framework is elaborated which indeed allows the realistic prediction of the temperature evolution. In contrast to previously proposed frameworks, it is based on a fully three-dimensional, finite strain setting and it naturally covers coupled isotropic and kinematic hardening - also based on non-associative evolution equations. Considering a variationally consistent description based on incremental energy minimization, it is shown that the aforementioned problem (thermodynamical consistency and a realistic temperature prediction) is essentially equivalent to correctly defining the decomposition of the total energy into stored and dissipative parts. Interestingly, this decomposition shows strong analogies to the Taylor-Quinney factor. In this respect, the Taylor-Quinney factor can be well motivated from a physical point of view. Furthermore, certain intervals for this factor can be derived in order to guarantee that fundamental physically principles are fulfilled a priori. Representative examples demonstrate the predictive capabilities of the final constitutive modeling framework.
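For orientation, the Taylor-Quinney factor enters the classical adiabatic heating estimate ΔT = β σ ε_p / (ρ c). A short numeric check with illustrative steel-like values (not the paper's model):

    # Adiabatic temperature rise from plastic work via a Taylor-Quinney
    # factor beta; all values are illustrative placeholders.
    beta = 0.9              # fraction of plastic work converted to heat
    sigma_y = 350e6         # flow stress, Pa
    eps_p = 0.2             # plastic strain increment
    rho, c = 7850.0, 460.0  # density (kg/m^3) and heat capacity (J/kg/K)
    dT = beta * sigma_y * eps_p / (rho * c)
    print(f"temperature rise: {dT:.1f} K")   # ~17 K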
Bessems, Jos G M; Paini, Alicia; Gajewska, Monika; Worth, Andrew
2017-12-01
Route-to-route extrapolation is a common part of human risk assessment. Data from oral animal toxicity studies are commonly used to assess the safety of various but specific human dermal exposure scenarios. Using theoretical examples of various user scenarios, it was concluded that delineation of a generally applicable human dermal limit value is not a practicable approach, due to the wide variety of possible human exposure scenarios and their consequences for internal exposure. This paper uses physiologically based kinetic (PBK) modelling approaches to predict animal as well as human internal exposure dose metrics and, for the first time, introduces the concept of Margin of Internal Exposure (MOIE) based on these internal dose metrics. Caffeine was chosen to illustrate this approach: it is a substance that is often found in cosmetics and for which oral repeated-dose toxicity data were available. A rat PBK model was constructed in order to convert the oral NOAEL to rat internal exposure dose metrics, i.e. the area under the curve (AUC) and the maximum concentration (Cmax), both in plasma. A human oral PBK model was constructed and calibrated using human volunteer data and adapted to accommodate dermal absorption following human dermal exposure. Use of the MOIE approach based on internal dose metric predictions provides excellent opportunities to investigate the consequences of variations in human dermal exposure scenarios. It can accommodate within-day variation in plasma concentrations and is scientifically more robust than assuming just an exposure in mg/kg bw/day. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
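The MOIE concept can be sketched with a one-compartment stand-in for the PBK models: compute AUC and Cmax for the animal point of departure and for the human scenario, then take their ratios. All kinetic constants below are illustrative, not the paper's caffeine parameters.

    import numpy as np

    def conc(t, dose_mg_per_kg, bw_kg, vd_l, ka, ke):
        """One-compartment oral model with first-order absorption (mg/L)."""
        a = dose_mg_per_kg * bw_kg / vd_l
        return a * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

    def auc(c, t):
        """Trapezoid-rule area under the curve."""
        return float(((c[1:] + c[:-1]) / 2.0 * np.diff(t)).sum())

    t = np.linspace(0.0, 24.0, 2401)
    rat = conc(t, dose_mg_per_kg=20.0, bw_kg=0.25, vd_l=0.25, ka=1.5, ke=0.4)
    human = conc(t, dose_mg_per_kg=0.03, bw_kg=60.0, vd_l=36.0, ka=1.0, ke=0.12)
    print("MOIE (AUC): ", auc(rat, t) / auc(human, t))
    print("MOIE (Cmax):", rat.max() / human.max())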
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hub, Martina; Thieke, Christian; Kessler, Marc L.
2012-04-15
Purpose: In fractionated radiation therapy, image guidance with daily tomographic imaging becomes more and more clinical routine. In principle, this allows for daily computation of the delivered dose and for accumulation of these daily dose distributions to determine the actually delivered total dose to the patient. However, uncertainties in the mapping of the images can translate into errors of the accumulated total dose, depending on the dose gradient. In this work, an approach to estimate the uncertainty of mapping between medical images is proposed that identifies areas bearing a significant risk of inaccurate dose accumulation. Methods: This method accounts for the geometric uncertainty of image registration and the heterogeneity of the dose distribution, which is to be mapped. Its performance is demonstrated in context of dose mapping based on b-spline registration. It is based on evaluation of the sensitivity of dose mapping to variations of the b-spline coefficients combined with evaluation of the sensitivity of the registration metric with respect to the variations of the coefficients. It was evaluated based on patient data that was deformed based on a breathing model, where the ground truth of the deformation, and hence the actual true dose mapping error, is known. Results: The proposed approach has the potential to distinguish areas of the image where dose mapping is likely to be accurate from other areas of the same image, where a larger uncertainty must be expected. Conclusions: An approach to identify areas where dose mapping is likely to be inaccurate was developed and implemented. This method was tested for dose mapping, but it may be applied in context of other mapping tasks as well.
Bias correction of satellite-based rainfall data
NASA Astrophysics Data System (ADS)
Bhattacharya, Biswa; Solomatine, Dimitri
2015-04-01
Limited hydro-meteorological data availability in many catchments restricts the possibility of reliable hydrological analyses, especially for near-real-time predictions. However, the variety of satellite-based and meteorological-model rainfall products provides new opportunities. Often the accuracy of these rainfall products, when compared to rain gauge measurements, is not impressive. The systematic differences of these rainfall products from gauge observations can be partially compensated by adopting a bias (error) correction. Many such methods correct the satellite-based rainfall data by comparing their mean value to the mean value of rain gauge data. Refined approaches may first identify a suitable time scale at which the different data products are better comparable and then employ a bias correction at that time scale. More elegant methods use quantile-to-quantile bias correction, which, however, assumes that the available (often limited) sample size is sufficient for comparing probabilities of the different rainfall products. Analysis of rainfall data and understanding of the process of its generation reveal that the bias in different rainfall data varies in space and time. The time aspect is sometimes taken into account by considering seasonality. In this research we have adopted a bias correction approach that takes into account the variation of rainfall in space and time. A clustering-based approach is employed in which every new data point (e.g. of the Tropical Rainfall Measuring Mission (TRMM)) is first assigned to a specific cluster of that data product and then, by identifying the corresponding cluster of gauge data, the bias correction specific to that cluster is applied. The presented approach considers the space-time variation of rainfall and as a result the corrected data are more realistic. Keywords: bias correction, rainfall, TRMM, satellite rainfall
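A minimal sketch of the clustering-based correction described above, assuming k-means clusters in a satellite-product feature space and a multiplicative per-cluster bias factor (neither choice is specified by the abstract):

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_cluster_bias(sat_features, sat_rain, gauge_rain, n_clusters=5, seed=0):
    """Cluster the satellite product's feature space (e.g. intensity, season,
    location), then learn one bias factor per cluster from collocated gauges."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(sat_features)
    labels = km.predict(sat_features)
    factors = np.ones(n_clusters)
    for k in range(n_clusters):
        m = labels == k
        if sat_rain[m].sum() > 0:
            # multiplicative bias factor specific to this cluster
            factors[k] = gauge_rain[m].sum() / sat_rain[m].sum()
    return km, factors

def correct(km, factors, new_features, new_sat_rain):
    # assign each new satellite data point to its cluster, apply that cluster's bias
    return new_sat_rain * factors[km.predict(new_features)]

# toy demo with synthetic data
rng = np.random.default_rng(1)
feats = rng.normal(size=(500, 3))
gauge = rng.gamma(2.0, 2.0, 500)
sat = gauge * rng.uniform(0.6, 1.4, 500)        # biased satellite estimate
km, f = fit_cluster_bias(feats, sat, gauge)
corrected = correct(km, f, feats, sat)
```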
Pappa-Louisi, A; Agrafiotou, P; Papachristos, K
2010-07-01
The combined effect of the ion-pairing reagent concentration, C(ipr), and the organic modifier content, phi, on retention under phi-gradient conditions at different constant C(ipr) was treated in this study using two approaches. In the first approach, the prediction of the retention time of a sample solute is based on directly fitting a proper retention model to 3-D phi-gradient retention data obtained under the same linear phi variation but with different slopes and durations of the initial isocratic part, and in the presence of various constant C(ipr) values in the eluent. The second approach is based on a retention model describing the combined effect of C(ipr) and phi on the retention of solutes in isocratic mode, and consequently analyzes isocratic data obtained in mobile phases containing different C(ipr) values. The effectiveness of the above approaches was tested by predicting the retention of a mixture of 16 underivatized amino acids using mobile phases containing acetonitrile as organic modifier and sodium dodecyl sulfate as ion-pairing reagent. Of these approaches, only the first gives satisfactory predictions and can be successfully used in the optimization of ion-pair chromatographic separations under gradient conditions. The failure of the second approach to predict the retention of solutes in gradient elution mode in the presence of different C(ipr) values was attributed to slow changes in the distribution equilibrium of ion-pairing reagents caused by the phi variation.
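The first approach above amounts to integrating an isocratic retention model along the programmed gradient. The sketch below solves the fundamental equation of gradient elution numerically for a hypothetical LSS-type model with an added ion-pair term; the model form and all constants are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

def k_isocratic(phi, c_ipr, logk0=2.5, S=4.0, a=0.8):
    """Hypothetical isocratic retention model: LSS dependence on phi plus an
    illustrative ion-pair term in c_ipr (not the paper's fitted model)."""
    return 10.0 ** (logk0 + a * np.log10(1.0 + 50.0 * c_ipr) - S * phi)

def gradient_retention_time(phi_of_t, c_ipr, t0=1.0, dt=0.001, t_max=120.0):
    """Solve the fundamental equation of gradient elution: the integral of
    dt / (t0 * k(phi(t))) from 0 to (tR - t0) equals 1."""
    integral, t = 0.0, 0.0
    while integral < 1.0 and t < t_max:
        integral += dt / (t0 * k_isocratic(phi_of_t(t), c_ipr))
        t += dt
    return t + t0   # retention time includes the column dead time t0

# linear gradient: 10 min isocratic hold at phi=0.05, then a 1 %/min ramp
ramp = lambda t: 0.05 + max(0.0, t - 10.0) * 0.01
for c in (0.0, 0.002, 0.005):   # ion-pair reagent concentrations (M), illustrative
    print(c, round(gradient_retention_time(ramp, c), 2))
```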
ERIC Educational Resources Information Center
Aagaard, Jesper
2017-01-01
In time, phenomenology has become a viable approach to conducting qualitative studies in education. Popular and well-established methods include descriptive and hermeneutic phenomenology. Based on critiques of the essentialism and receptivity of these two methods, however, this article offers a third variation of empirical phenomenology:…
ERIC Educational Resources Information Center
Guskey, Thomas R.; Jung, Lee Ann
2012-01-01
The field of education is moving rapidly toward a standards-based approach to grading. School leaders have become increasingly aware of the tremendous variation that exists in grading practices, even among teachers of the same courses in the same department in the same school. Consequently, students' grades often have little relation to their…
Individualized cattle copy number and segmental duplication maps using next generation sequencing
USDA-ARS?s Scientific Manuscript database
Copy Number Variations (CNVs) affect a wide range of phenotypic traits; however, CNVs in or near segmental duplication regions are often intractable. Using a read depth approach based on next generation sequencing, we examined genome-wide copy number differences among five taurine (three Angus, one ...
An investigation of equilibrium concepts
NASA Technical Reports Server (NTRS)
Prozan, R. J.
1982-01-01
A different approach to modeling the thermochemistry of rocket engine combustion phenomena is presented. The methodology described is based on the hypothesis of a new variational principle applicable to compressible fluid mechanics. This hypothesis is extended to treat the thermochemical behavior of a reacting (equilibrium) gas in an open system.
Investigating Psychometric Isomorphism for Traditional and Performance-Based Assessment
ERIC Educational Resources Information Center
Fay, Derek M.; Levy, Roy; Mehta, Vandhana
2018-01-01
A common practice in educational assessment is to construct multiple forms of an assessment that consists of tasks with similar psychometric properties. This study utilizes a Bayesian multilevel item response model and descriptive graphical representations to evaluate the psychometric similarity of variations of the same task. These approaches for…
Navigating across Cultures: Narrative Constructions of Lived Experience
ERIC Educational Resources Information Center
Pufall-Jones, Elizabeth; Mistry, Jayanthi
2010-01-01
In this study, we investigated how individuals from diverse backgrounds learn to navigate the many worlds in which they live and explore how variations in life experiences are associated with aspects of navigating across cultures. We conducted the study using a phenomenological approach based on retrospective personal narratives from 19 young…
A system for dosage-based functional genomics in poplar
Isabelle M. Henry; Matthew S. Zinkgraf; Andrew T. Groover; Luca Comai
2015-01-01
Altering gene dosage through variation in gene copy number is a powerful approach to addressing questions regarding gene regulation, quantitative trait loci, and heterosis, but one that is not easily applied to sexually transmitted species. Elite poplar (Populus spp) varieties are created through interspecific hybridization, followed by...
NASA Astrophysics Data System (ADS)
Saturnino, Diana; Langlais, Benoit; Amit, Hagay; Civet, François; Mandea, Mioara; Beucler, Éric
2018-03-01
A detailed description of the main geomagnetic field and of its temporal variations (i.e., the secular variation or SV) is crucial to understanding the geodynamo. Although the SV is known with high accuracy at ground magnetic observatory locations, the globally uneven distribution of the observatories hampers the determination of a detailed global pattern of the SV. Over the past two decades, satellites have provided global surveys of the geomagnetic field which have been used to derive global spherical harmonic (SH) models through strict data selection schemes to minimise external field contributions. However, discrepancies remain between ground measurements and field predictions by these models; indeed the global models do not reproduce small spatial scales of the field temporal variations. To overcome this problem we propose to directly extract time series of the field and its temporal variation from satellite measurements as is done at observatory locations. We follow a Virtual Observatory (VO) approach and define a global mesh of VOs at satellite altitude. For each VO and each given time interval we apply an Equivalent Source Dipole (ESD) technique to reduce all measurements to a unique location. Synthetic data are first used to validate the new VO-ESD approach. Then, we apply our scheme to data from the first two years of the Swarm mission. For the first time, a 2.5° resolution global mesh of VO time series is built. The VO-ESD derived time series are locally compared to ground observations as well as to satellite-based model predictions. Our approach is able to describe detailed temporal variations of the field at local scales. The VO-ESD time series are then used to derive global spherical harmonic models. For a simple SH parametrization the model describes well the secular trend of the magnetic field both at satellite altitude and at the surface. As more data become available, longer VO-ESD time series can be derived and consequently used to study sharp temporal variation features, such as geomagnetic jerks.
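A toy, single-dipole version of the ESD reduction step described above: fit an equivalent dipole to scattered vector measurements by linear least squares (the dipole field is linear in the moment) and evaluate it at the VO location. The real method fits a mesh of dipole sources per VO and per time bin; this sketch only shows the reduction principle.

```python
import numpy as np

MU0_4PI = 1e-7  # mu0 / (4*pi), SI units

def dipole_design_row(r):
    """3x3 matrix G such that B = G @ m for a point dipole at the origin."""
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0_4PI * (3.0 * np.outer(rhat, rhat) - np.eye(3)) / d**3

def esd_reduce(positions, fields, vo_position):
    """Fit a single equivalent dipole to scattered vector measurements and
    evaluate its field at the virtual observatory location."""
    G = np.vstack([dipole_design_row(r) for r in positions])
    b = np.concatenate(fields)
    m, *_ = np.linalg.lstsq(G, b, rcond=None)
    return dipole_design_row(vo_position) @ m

# synthetic test: recover the field of a known dipole from noiseless samples
rng = np.random.default_rng(0)
m_true = np.array([0.0, 0.0, 8e22])                       # roughly Earth-like moment
pts = rng.normal(size=(200, 3))
pts = 6.8e6 * pts / np.linalg.norm(pts, axis=1)[:, None]  # shell at ~400 km altitude
obs = [dipole_design_row(p) @ m_true for p in pts]
vo = np.array([0.0, 0.0, 6.8e6])
print(esd_reduce(pts, obs, vo), dipole_design_row(vo) @ m_true)  # should agree
```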
Understanding the individual to implement the ecosystem approach to fisheries management.
Ward, Taylor D; Algera, Dirk A; Gallagher, Austin J; Hawkins, Emily; Horodysky, Andrij; Jørgensen, Christian; Killen, Shaun S; McKenzie, David J; Metcalfe, Julian D; Peck, Myron A; Vu, Maria; Cooke, Steven J
2016-01-01
Ecosystem-based approaches to fisheries management (EAFMs) have emerged as requisite for sustainable use of fisheries resources. At the same time, however, there is a growing recognition of the degree of variation among individuals within a population, as well as the ecological consequences of this variation. Managing resources at an ecosystem level calls on practitioners to consider evolutionary processes, and ample evidence from the realm of fisheries science indicates that anthropogenic disturbance can drive changes in predominant character traits (e.g. size at maturity). Eco-evolutionary theory suggests that human-induced trait change and the modification of selective regimens might contribute to ecosystem dynamics at a similar magnitude to species extirpation, extinction and ecological dysfunction. Given the dynamic interaction between fisheries and target species via harvest and subsequent ecosystem consequences, we argue that individual diversity in genetic, physiological and behavioural traits are important considerations under EAFMs. Here, we examine the role of individual variation in a number of contexts relevant to fisheries management, including the potential ecological effects of rapid trait change. Using select examples, we highlight the extent of phenotypic diversity of individuals, as well as the ecological constraints on such diversity. We conclude that individual phenotypic diversity is a complex phenomenon that needs to be considered in EAFMs, with the ultimate realization that maintaining or increasing individual trait diversity may afford not only species, but also entire ecosystems, with enhanced resilience to environmental perturbations. Put simply, individuals are the foundation from which population- and ecosystem-level traits emerge and are therefore of central importance for the ecosystem-based approaches to fisheries management.
The oxygen-18 isotope approach for measuring aquatic metabolism in high-productivity waters
Tobias, C.R.; Böhlke, J.K.; Harvey, J.W.
2007-01-01
We examined the utility of δ18O2 measurements in estimating gross primary production (P), community respiration (R), and net metabolism (P:R) through diel cycles in a productive agricultural stream located in the midwestern U.S.A. Large diel swings in O2 (∼200 μmol L-1) were accompanied by large diel variation in δ18O2 (∼10‰). Simultaneous gas transfer measurements and laboratory-derived isotopic fractionation factors for O2 during respiration (αr) were used in conjunction with the diel monitoring of O2 and δ18O2 to calculate P, R, and P:R using three independent isotope-based methods. These estimates were compared to each other and against the traditional "open-channel diel O2-change" technique that lacked δ18O2. A principal advantage of the δ18O2 measurements was quantification of diel variation in R, which increased by up to 30% during the day; the diel pattern in R was variable and not necessarily predictable from assumed temperature effects on R. The P, R, and P:R estimates calculated using the isotope-based approaches showed high sensitivity to the assumed system fractionation factor (αr). The optimum modeled αr values (0.986-0.989) were roughly consistent with the laboratory-derived values, but larger (i.e., less fractionation) than αr values typically reported for enzyme-limited respiration in open water environments. Because of large diel variation in O2, P:R could not be estimated by directly applying the typical steady-state solution to the O2 and 18O-O2 mass balance equations in the absence of gas transfer data. Instead, our results indicate that a modified steady-state solution (the daily mean value approach) could be used with time-averaged O2 and δ18O2 measurements to calculate P:R independent of gas transfer. This approach was applicable under specifically defined, net heterotrophic conditions. The diel cycle of increasing daytime R and decreasing nighttime R was only partially explained by temperature variation, but could be consistent with the diel production/consumption of labile dissolved organic carbon from photosynthesis.
Novel image processing approach to detect malaria
NASA Astrophysics Data System (ADS)
Mas, David; Ferrer, Belen; Cojoc, Dan; Finaurini, Sara; Mico, Vicente; Garcia, Javier; Zalevsky, Zeev
2015-09-01
In this paper we present a novel image processing algorithm providing good preliminary capabilities for in vitro detection of malaria. The proposed concept is based upon analysis of the temporal variation of each pixel. Changes in dark pixels indicate that intracellular activity has occurred, signalling the presence of the malaria parasite inside the cell. Preliminary experimental results, involving analysis of red blood cells that were either healthy or infected with malaria parasites, validated the potential benefit of the proposed numerical approach.
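A minimal sketch of the per-pixel temporal analysis described above: flag dark pixels (candidate cell interiors) whose intensity varies strongly over time. The thresholds and the use of the temporal standard deviation are illustrative assumptions.

```python
import numpy as np

def flag_infected_cells(frames, dark_percentile=20.0, var_threshold=4.0):
    """frames: (T, H, W) time-lapse stack of a blood-cell field of view.
    Flags dark pixels (cell interiors) whose temporal variation exceeds a
    threshold, which the paper associates with parasite activity."""
    mean_img = frames.mean(axis=0)
    dark = mean_img < np.percentile(mean_img, dark_percentile)
    temporal_std = frames.std(axis=0)
    return dark & (temporal_std > var_threshold)

# toy stack: static bright background, one dark "cell" with a fluctuating interior
rng = np.random.default_rng(0)
stack = np.full((100, 64, 64), 200.0) + rng.normal(0, 1, (100, 64, 64))
stack[:, 30:38, 30:38] = 60.0                                # dark cell body
stack[:, 33:36, 33:36] += rng.normal(0, 8, (100, 3, 3))      # intracellular motion
print(flag_infected_cells(stack).sum(), "suspicious pixels")  # the 3x3 active region
```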
2016-09-07
approach in co-simulation with fluid-dynamics solvers is used. An original variational formulation is developed for the inverse problem of...by the inverse solution meshing. The same approach is used to map the structural and fluid interface kinematics and loads during the fluid structure...co-simulation. The inverse analysis is verified by reconstructing the deformed solution obtained with a corresponding direct formulation, based on
Guo, Xiao-Shuang; Situ, Shu-Ping; Wang, Xue-Mei; Ding, Xiang; Wang, Xin-Ming; Yan, Cai-Qing; Li, Xiao-Ying; Zheng, Mei
2014-05-01
Two simulations were conducted with different secondary organic aerosol (SOA) methods, the VBS (volatility basis set) approach and SORGAM (secondary organic aerosol model), which have been coupled in the WRF/Chem (weather research and forecasting model with chemistry) model. Ground-based observation data from 18 to 25 November 2008 were used to examine the model performance for SOA in the Pearl River Delta (PRD) region. The results showed that the VBS approach could better reproduce the temporal variation and magnitude of SOA compared with SORGAM: the mean absolute deviation and correlation coefficient between the observed and the simulated data were -4.88 microg m-3 and 0.91, respectively, with the VBS approach, versus -5.32 microg m-3 and 0.18 with SORGAM. This is mainly because the VBS approach considers SOA precursors with a wider volatility range and the process of chemical aging in SOA formation. The spatiotemporal distribution of SOA in the PRD from the VBS simulation was also analyzed. The results indicated that SOA has a significant diurnal variation, with the maximal SOA concentration occurring at noon and in the early afternoon. Because of transport and the considerable spatial variation of O3, the SOA concentrations differed among PRD cities, and the highest concentration of SOA was observed in the downwind area, including Zhongshan, Zhuhai and Jiangmen.
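The VBS approach referenced above distributes organic mass over volatility bins by equilibrium absorptive partitioning. A minimal Donahue-type partitioning sketch, with an illustrative basis set unrelated to the WRF/Chem configuration used in the study:

```python
import numpy as np

def vbs_partition(c_total, c_star, c_oa_background=0.0, n_iter=100):
    """Equilibrium partitioning over volatility bins (Donahue-type VBS).
    c_total: total (gas + particle) organic mass in each bin (ug/m3)
    c_star:  saturation concentration of each bin (ug/m3)
    Returns the particle-phase mass per bin."""
    c_oa = max(c_oa_background, 1e-6)            # initial guess for aerosol mass
    for _ in range(n_iter):                      # fixed-point iteration
        xi = 1.0 / (1.0 + c_star / c_oa)         # particle fraction per bin
        c_oa = c_oa_background + np.sum(c_total * xi)
    return c_total * xi

# illustrative basis set: bins at C* = 1, 10, 100, 1000 ug/m3
c_star = np.array([1.0, 10.0, 100.0, 1000.0])
c_total = np.array([2.0, 4.0, 6.0, 8.0])         # hypothetical oxidized organic mass
print(vbs_partition(c_total, c_star, c_oa_background=5.0))
```

Chemical aging, the other VBS advantage named above, is typically represented by shifting mass toward lower-volatility bins at each oxidation step; that step is omitted here.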
Aghdasi, Nava; Whipple, Mark; Humphreys, Ian M; Moe, Kris S; Hannaford, Blake; Bly, Randall A
2018-06-01
Successful multidisciplinary treatment of skull base pathology requires precise preoperative planning. Current surgical approach (pathway) selection for these complex procedures depends on an individual surgeon's experience and background training. Because of anatomical variation in both normal tissue and pathology (eg, tumor), a successful surgical pathway used on one patient is not necessarily the best approach for another patient. The question is how to define and obtain optimized patient-specific surgical approach pathways. In this article, we demonstrate that the surgeon's knowledge and decision making in preoperative planning can be modeled by a multiobjective cost function in a retrospective analysis of actual complex skull base cases. Two different approaches, a weighted-sum approach and Pareto optimality, were used with a defined cost function to derive optimized surgical pathways based on preoperative computed tomography (CT) scans and manually designated pathology. With the first method, the surgeon's preferences were input as a set of weights for each objective before the search. In the second approach, the surgeon's preferences were used to select a surgical pathway from the computed Pareto optimal set. Using preoperative CT and magnetic resonance imaging, the patient-specific surgical pathways derived by these methods were similar (85% agreement) to the actual approaches performed on patients. In one case where the actual surgical approach was different, revision surgery was required and was performed utilizing the computationally derived approach pathway.
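A compact sketch of the two selection schemes described above, with hypothetical objective values per candidate pathway; the study's actual cost-function terms are not reproduced here.

```python
import numpy as np

def weighted_sum_best(costs, weights):
    """costs: (n_pathways, n_objectives). Pick the pathway minimizing w.dot(cost),
    with the surgeon's preferences encoded as weights before the search."""
    return int(np.argmin(costs @ np.asarray(weights)))

def pareto_front(costs):
    """Indices of non-dominated pathways: no other pathway is <= in all
    objectives and strictly < in at least one."""
    n = costs.shape[0]
    keep = []
    for i in range(n):
        dominated = any(
            np.all(costs[j] <= costs[i]) and np.any(costs[j] < costs[i])
            for j in range(n) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# hypothetical objectives per candidate pathway:
# [tissue disruption, proximity to critical structures, pathway length]
costs = np.array([[0.2, 0.9, 0.4],
                  [0.5, 0.3, 0.6],
                  [0.4, 0.4, 0.3],
                  [0.6, 0.5, 0.7]])
print("weighted-sum choice:", weighted_sum_best(costs, [0.5, 0.3, 0.2]))
print("Pareto-optimal set:", pareto_front(costs))   # surgeon then picks from this set
```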
Adaptive windowing and windowless approaches to estimate dynamic functional brain connectivity
NASA Astrophysics Data System (ADS)
Yaesoubi, Maziar; Calhoun, Vince D.
2017-08-01
In this work, we discuss estimation of the dynamic dependence of a multi-variate signal. Commonly used approaches are often based on a locality assumption (e.g. sliding window), which can miss spontaneous changes due to blurring with local but unrelated changes. We discuss recent approaches to overcome this limitation, including 1) a wavelet-space approach, essentially adapting the window to the underlying frequency content, and 2) a sparse signal representation, which removes any locality assumption. The latter is especially useful when there is no prior knowledge of the validity of such an assumption, as in brain analysis. Results on several large resting-fMRI data sets highlight the potential of these approaches.
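For reference, a minimal sliding-window estimate of dynamic connectivity, i.e. the locality-based baseline that the wavelet-space and sparse-representation approaches above aim to improve on:

```python
import numpy as np

def sliding_window_fc(ts, win=30, step=5):
    """ts: (T, n_regions) fMRI time series. Returns an (n_windows,
    n_regions, n_regions) stack of windowed correlation matrices."""
    T = ts.shape[0]
    return np.stack([np.corrcoef(ts[s:s + win].T)
                     for s in range(0, T - win + 1, step)])

# toy example: two regions whose coupling switches sign mid-scan
rng = np.random.default_rng(0)
t = np.arange(300)
a = rng.normal(size=300)
b = np.where(t < 150, a, -a) + 0.5 * rng.normal(size=300)
fc = sliding_window_fc(np.column_stack([a, b]))
print(fc[0, 0, 1], fc[-1, 0, 1])   # positive early, negative late
```

A change falling inside one window is smeared across it, which is exactly the blurring effect the abstract warns about.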
DOT National Transportation Integrated Search
1982-02-01
Previous experiments have demonstrated illusions due to variations in both length and width of runways in nighttime 'black hole' approaches. Even though approach lighting is not designed to provide vertical guidance, it is possible that cues from app...
A new approach for measuring power spectra and reconstructing time series in active galactic nuclei
NASA Astrophysics Data System (ADS)
Li, Yan-Rong; Wang, Jian-Min
2018-05-01
We provide a new approach to measure power spectra and reconstruct time series in active galactic nuclei (AGNs) based on the fact that the Fourier transform of AGN stochastic variations is a series of complex Gaussian random variables. The approach parametrizes a stochastic series in frequency domain and transforms it back to time domain to fit the observed data. The parameters and their uncertainties are derived in a Bayesian framework, which also allows us to compare the relative merits of different power spectral density models. The well-developed fast Fourier transform algorithm together with parallel computation enables an acceptable time complexity for the approach.
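A sketch of the core property the approach exploits: a Gaussian stochastic time series can be parametrized in the frequency domain by drawing independent complex Gaussian Fourier coefficients with variances set by a model PSD, then inverse-transforming to the time domain. The bending power-law PSD and its parameters below are illustrative only.

```python
import numpy as np

def simulate_from_psd(psd_func, n, dt, rng):
    """Draw a Gaussian time series whose Fourier coefficients are independent
    complex normals with variance set by the model PSD, then invert."""
    freqs = np.fft.rfftfreq(n, dt)
    coeffs = np.zeros(len(freqs), dtype=complex)
    sd = np.sqrt(psd_func(freqs[1:]) / 2.0)
    coeffs[1:] = rng.normal(0.0, sd) + 1j * rng.normal(0.0, sd)
    if n % 2 == 0:
        coeffs[-1] = coeffs[-1].real * np.sqrt(2.0)   # Nyquist bin must be real
    return np.fft.irfft(coeffs, n)

# bending power-law PSD of the kind often used for AGN variability (illustrative)
psd = lambda f: 1.0 / (f * (1.0 + (f / 0.01) ** 2))
rng = np.random.default_rng(42)
lc = simulate_from_psd(psd, n=2048, dt=1.0, rng=rng)
print(lc.mean(), lc.std())
```

In the paper's Bayesian setting the coefficients become free parameters fitted to the observed (irregularly sampled) light curve; this sketch only shows the forward direction.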
NASA Astrophysics Data System (ADS)
Camporesi, Roberto
2016-01-01
We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order, based on the factorization of the differential operator. The approach is elementary: we assume only a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of the other more advanced approaches: the Laplace transform, linear systems, the general theory of linear equations with variable coefficients, and variation of parameters. The approach presented here can be used in a first course on differential equations for science and engineering majors.
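For the second-order case with distinct roots, the factorization idea can be summarized as follows (a standard special case, spelled out here for concreteness):

```latex
\[
  y'' + a_1 y' + a_0 y = f(t), \qquad
  p(D) = (D - r_1)(D - r_2), \quad r_1 \neq r_2,
\]
\[
  g(t) = \frac{e^{r_1 t} - e^{r_2 t}}{r_1 - r_2}, \qquad
  y_p(t) = \int_0^t g(t - s)\, f(s)\, ds,
\]
```

where the impulsive response g is obtained by solving two first-order equations in cascade, and the particular solution y_p follows by convolution of g with the forcing term.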
Lee, Chung-Hao; Amini, Rouzbeh; Gorman, Robert C.; Gorman, Joseph H.; Sacks, Michael S.
2013-01-01
Estimation of regional tissue stresses in the functioning heart valve remains an important goal in our understanding of normal valve function and in developing novel engineered tissue strategies for valvular repair and replacement. Methods to accurately estimate regional tissue stresses are thus needed for this purpose, and in particular to develop accurate, statistically informed means to validate computational models of valve function. Moreover, there exists no currently accepted method to evaluate engineered heart valve tissues and replacement heart valve biomaterials undergoing valvular stresses in blood contact. While we have utilized mitral valve anterior leaflet valvuloplasty as an experimental approach to address this limitation, robust computational techniques to estimate implant stresses are required. In the present study, we developed a novel numerical analysis approach for estimation of the in-vivo stresses of the central region of interest (ROI) of the mitral valve anterior leaflet (MVAL) delimited by a sonocrystal transducer array. The in-vivo material properties of the MVAL were simulated using an inverse FE modeling approach based on three pseudo-hyperelastic constitutive models: the neo-Hookean, exponential-type isotropic, and full collagen-fiber mapped transversely isotropic models. A series of numerical replications with varying structural configurations were developed by incorporating measured statistical variations in MVAL local preferred fiber directions and fiber splay. These model replications were then used to investigate how known variations in the valve tissue microstructure influence the estimated ROI stresses and their variation at each time point during a cardiac cycle. Simulations were also able to include estimates of the variation in tissue stresses for an individual specimen dataset over the cardiac cycle. Of the three material models, the transversely isotropic model produced the most accurate results, with ROI-averaged stresses at the fully loaded state of 432.6±46.5 kPa and 241.4±40.5 kPa in the radial and circumferential directions, respectively. We conclude that the present approach can provide robust instantaneous mean and variation estimates of tissue stresses of the central regions of the MVAL. PMID:24275434
Wong, Kin-Yiu; Gao, Jiali
2008-09-09
In this paper, we describe an automated integration-free path-integral (AIF-PI) method, based on Kleinert's variational perturbation (KP) theory, to treat internuclear quantum-statistical effects in molecular systems. We have developed an analytical method to obtain the centroid potential as a function of the variational parameter in the KP theory, which avoids numerical difficulties in path-integral Monte Carlo or molecular dynamics simulations, especially at the limit of zero-temperature. Consequently, the variational calculations using the KP theory can be efficiently carried out beyond the first order, i.e., the Giachetti-Tognetti-Feynman-Kleinert variational approach, for realistic chemical applications. By making use of the approximation of independent instantaneous normal modes (INM), the AIF-PI method can readily be applied to many-body systems. Previously, we have shown that in the INM approximation, the AIF-PI method is accurate for computing the quantum partition function of a water molecule (3 degrees of freedom) and the quantum correction factor for the collinear H(3) reaction rate (2 degrees of freedom). In this work, the accuracy and properties of the KP theory are further investigated by using the first three order perturbations on an asymmetric double-well potential, the bond vibrations of H(2), HF, and HCl represented by the Morse potential, and a proton-transfer barrier modeled by the Eckart potential. The zero-point energy, quantum partition function, and tunneling factor for these systems have been determined and are found to be in excellent agreement with the exact quantum results. Using our new analytical results at the zero-temperature limit, we show that the minimum value of the computed centroid potential in the KP theory is in excellent agreement with the ground state energy (zero-point energy) and the position of the centroid potential minimum is the expectation value of particle position in wave mechanics. The fast convergent property of the KP theory is further examined in comparison with results from the traditional Rayleigh-Ritz variational approach and Rayleigh-Schrödinger perturbation theory in wave mechanics. The present method can be used for thermodynamic and quantum dynamic calculations, including to systematically determine the exact value of zero-point energy and to study kinetic isotope effects for chemical reactions in solution and in enzymes.
A Variational Approach to the Analysis of Dissipative Electromechanical Systems
Allison, Andrew; Pearce, Charles E. M.; Abbott, Derek
2014-01-01
We develop a method for systematically constructing Lagrangian functions for dissipative mechanical, electrical, and electromechanical systems. We derive the equations of motion for some typical electromechanical systems using deterministic principles that are strictly variational. We do not use any ad hoc features that are added on after the analysis has been completed, such as the Rayleigh dissipation function. We generalise the concept of potential, and define generalised potentials for dissipative lumped system elements. Our innovation offers a unified approach to the analysis of electromechanical systems where there are energy and power terms in both the mechanical and electrical parts of the system. Using our novel technique, we can take advantage of the analytic approach from mechanics, and we can apply these powerful analytical methods to electrical and to electromechanical systems. We can analyse systems that include non-conservative forces. Our methodology is deterministic, does not require any special intuition, and is thus suitable for automation via a computer-based algebra package. PMID:24586221
Reproducibility and quantitation of amplicon sequencing-based detection
Zhou, Jizhong; Wu, Liyou; Deng, Ye; Zhi, Xiaoyang; Jiang, Yi-Huei; Tu, Qichao; Xie, Jianping; Van Nostrand, Joy D; He, Zhili; Yang, Yunfeng
2011-01-01
To determine the reproducibility and quantitation of the amplicon sequencing-based detection approach for analyzing microbial community structure, a total of 24 microbial communities from a long-term global change experimental site were examined. Genomic DNA obtained from each community was used to amplify 16S rRNA genes with two or three barcode tags as technical replicates in the presence of a small quantity (0.1% wt/wt) of genomic DNA from Shewanella oneidensis MR-1 as the control. The technical reproducibility of the amplicon sequencing-based detection approach is quite low, with an average operational taxonomic unit (OTU) overlap of 17.2%±2.3% between two technical replicates, and 8.2%±2.3% among three technical replicates, which is most likely due to problems associated with random sampling processes. Such variations in technical replicates could have substantial effects on estimating β-diversity but less on α-diversity. A high variation was also observed in the control across different samples (for example, 66.7-fold for the forward primer), suggesting that the amplicon sequencing-based detection approach could not be quantitative. In addition, various strategies were examined to improve the comparability of amplicon sequencing data, such as increasing biological replicates, and removing singleton sequences and less-representative OTUs across biological replicates. Finally, as expected, various statistical analyses with preprocessed experimental data revealed clear differences in the composition and structure of microbial communities between warming and non-warming, or between clipping and non-clipping. Taken together, these results suggest that amplicon sequencing-based detection is useful in analyzing microbial community structure even though it is not reproducible and quantitative. However, great caution should be taken in experimental design and data interpretation when the amplicon sequencing-based detection approach is used for quantitative analysis of the β-diversity of microbial communities. PMID:21346791
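For concreteness, a small sketch of a replicate-overlap computation consistent with the figures above; whether the paper normalizes shared OTUs by mean richness or by the union is not stated, so the exact formula here is an assumption.

```python
def otu_overlap(rep_a, rep_b):
    """Percent OTU overlap between two technical replicates, computed as
    shared OTUs over the mean richness of the two replicates (one plausible
    reading of the reported overlap statistic)."""
    a, b = set(rep_a), set(rep_b)
    shared = len(a & b)
    return 100.0 * shared / ((len(a) + len(b)) / 2.0)

# toy replicates sharing only part of their OTU inventories
rep1 = {"otu_%d" % i for i in range(0, 800)}
rep2 = {"otu_%d" % i for i in range(650, 1450)}
print("%.1f%% overlap" % otu_overlap(rep1, rep2))  # ~18.8%, near the reported mean
```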
A Probabilistic Approach to Predict Thermal Fatigue Life for Ball Grid Array Solder Joints
NASA Astrophysics Data System (ADS)
Wei, Helin; Wang, Kuisheng
2011-11-01
Numerous studies of the reliability of solder joints have been performed. Most life prediction models are limited to a deterministic approach. However, manufacturing induces uncertainty in the geometry parameters of solder joints, and the environmental temperature varies widely due to end-user diversity, creating uncertainties in the reliability of solder joints. In this study, a methodology for accounting for variation in the lifetime prediction for lead-free solder joints of ball grid array packages (PBGA) is demonstrated. The key aspects of the solder joint parameters and the cyclic temperature range related to reliability are involved. Probabilistic solutions of the inelastic strain range and thermal fatigue life based on the Engelmaier model are developed to determine the probability of solder joint failure. The results indicate that the standard deviation increases significantly when more random variations are involved. Using the probabilistic method, the influence of each variable on the thermal fatigue life is quantified. This information can be used to optimize product design and process validation acceptance criteria. The probabilistic approach creates the opportunity to identify the root causes of failed samples from product fatigue tests and field returns. The method can be applied to better understand how variation affects parameters of interest in an electronic package design with area array interconnections.
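A minimal Monte Carlo sketch of the probabilistic idea above, propagating scatter in joint geometry and cyclic temperature range through an Engelmaier-type strain/life model. All distributions and constants below are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Engelmaier-type fatigue model with manufacturing and use-condition scatter.
L_D   = rng.normal(10.0e-3, 0.2e-3, n)    # distance from neutral point (m), mfg. spread
h     = rng.normal(0.4e-3, 0.02e-3, n)    # solder joint standoff height (m), mfg. spread
dCTE  = 14e-6                             # CTE mismatch (1/K), assumed fixed
dT    = rng.normal(60.0, 8.0, n)          # cyclic temperature range (K), end-user spread
eps_f = 0.325                             # fatigue ductility coefficient (assumed)
c     = -0.442                            # fatigue ductility exponent (simplified)

d_gamma = (L_D / h) * dCTE * dT                      # cyclic shear strain range
N_f = 0.5 * (d_gamma / (2.0 * eps_f)) ** (1.0 / c)   # cycles to failure

print("median life: %.0f cycles" % np.median(N_f))
print("P(failure before 1000 cycles): %.3f" % np.mean(N_f < 1000))
```

The spread of N_f, rather than a single deterministic life, is what supports the failure-probability and root-cause arguments made in the abstract.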
Evaluation of an Integrated Framework for Biodiversity with a New Metric for Functional Dispersion
Presley, Steven J.; Scheiner, Samuel M.; Willig, Michael R.
2014-01-01
Growing interest in understanding ecological patterns from phylogenetic and functional perspectives has driven the development of metrics that capture variation in evolutionary histories or ecological functions of species. Recently, an integrated framework based on Hill numbers was developed that measures three dimensions of biodiversity based on abundance, phylogeny and function of species. This framework is highly flexible, allowing comparison of those diversity dimensions, including different aspects of a single dimension and their integration into a single measure. The behavior of those metrics with regard to variation in data structure has not been explored in detail, yet is critical for ensuring an appropriate match between the concept and its measurement. We evaluated how each metric responds to particular data structures and developed a new metric for functional biodiversity. The phylogenetic metric is sensitive to variation in the topology of phylogenetic trees, including variation in the relative lengths of basal, internal and terminal branches. In contrast, the functional metric exhibited multiple shortcomings: (1) species that are functionally redundant contribute nothing to functional diversity and (2) a single highly distinct species causes functional diversity to approach the minimum possible value. We introduced an alternative, improved metric based on functional dispersion that solves both of these problems. In addition, the new metric exhibited more desirable behavior when based on multiple traits. PMID:25148103
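A small sketch of the quantities discussed above: Hill numbers of order q for abundance diversity, plus an abundance-weighted dispersion measure in the spirit of the proposed functional metric (its exact formulation here is an assumption). Note how the two functionally redundant pairs in the example inflate richness but contribute little dispersion.

```python
import numpy as np

def hill_number(p, q):
    """Effective number of species of order q (Hill number)."""
    p = np.asarray(p, float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return np.exp(-np.sum(p * np.log(p)))    # limit as q -> 1 (exp of Shannon)
    return np.sum(p ** q) ** (1.0 / (1.0 - q))

def functional_dispersion(p, traits):
    """Abundance-weighted mean distance to the community trait centroid;
    redundant species near the centroid add almost nothing, and one highly
    distinct species cannot collapse the value to its minimum."""
    p = np.asarray(p, float)
    traits = np.asarray(traits, float)
    centroid = p @ traits
    return np.sum(p * np.linalg.norm(traits - centroid, axis=1))

p = [0.5, 0.3, 0.15, 0.05]
traits = [[0.0, 1.0], [0.1, 0.9], [2.0, 0.2], [2.1, 0.1]]   # two redundant pairs
print(hill_number(p, 0), hill_number(p, 1), hill_number(p, 2))
print(functional_dispersion(p, traits))
```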
The Further Evolution of Cooperation
NASA Astrophysics Data System (ADS)
Axelrod, Robert; Dion, Douglas
1988-12-01
Axelrod's model of the evolution of cooperation was based on the iterated Prisoner's Dilemma. Empirical work following this approach has helped establish the prevalence of cooperation based on reciprocity. Theoretical work has led to a deeper understanding of the role of other factors in the evolution of cooperation: the number of players, the range of possible choices, variation in the payoff structure, noise, the shadow of the future, population dynamics, and population structure.
Akdemir, Hülya; Suzerer, Veysel; Tilkat, Engin; Onay, Ahmet; Çiftçi, Yelda Ozden
2016-12-01
Determination of genetic stability of in vitro-grown plantlets is needed for safe and large-scale production of mature trees. In this study, genetic variation of long-term micropropagated mature pistachio developed through direct shoot bud regeneration using apical buds (protocol A) and in vitro-derived leaves (protocol B) was assessed via DNA-based molecular markers. Randomly amplified polymorphic DNA (RAPD), inter-simple sequence repeat (ISSR), and amplified fragment length polymorphism (AFLP) markers were employed, and the PIC values obtained from RAPD (0.226), ISSR (0.220), and AFLP (0.241) showed that micropropagation of pistachio for different periods of time resulted in "reasonable polymorphism" among the donor plant and its 18 clones. Mantel's test showed consistent polymorphism levels between marker systems based on similarity matrices. In conclusion, this is the first study on the occurrence of genetic variability in long-term micropropagated mature pistachio plantlets. The results clearly indicated that the different marker approaches used in this study are reliable for assessing tissue culture-induced variations in long-term cultured pistachio plantlets.
Zhang, L; Price, R; Aweeka, F; Bellibas, S E; Sheiner, L B
2001-02-01
A small-scale clinical investigation was done to quantify the penetration of stavudine (D4T) into cerebrospinal fluid (CSF). A model-based analysis estimates the steady-state ratio of AUCs of CSF and plasma concentrations (R(AUC)) to be 0.270, and the mean residence time of drug in the CSF to be 7.04 h. The analysis illustrates the advantages of a causal (scientific, predictive) model-based approach to analysis over a noncausal (empirical, descriptive) approach when the data, as here, demonstrate certain problematic features commonly encountered in clinical data, namely (i) few subjects, (ii) sparse sampling, (iii) repeated measures, (iv) imbalance, and (v) individual design variation. These features generally require special attention in data analysis. The causal-model-based analysis deals with features (i) and (ii), both of which reduce efficiency, by combining data from different studies and adding subject-matter prior information. It deals with features (iii)-(v), all of which prevent 'averaging' individual data points directly, first by adjusting in the model for interindividual data differences due to design differences, secondly by explicitly differentiating between interpatient, interoccasion, and measurement error variation, and lastly by defining a scientifically meaningful estimand (R(AUC)) that is independent of design.
NASA Technical Reports Server (NTRS)
Meyer, Peter; Green, Robert O.; Staenz, Karl; Itten, Klaus I.
1994-01-01
A geocoding procedure for remotely sensed data of airborne systems in rugged terrain is affected by several factors: buffeting of the aircraft by turbulence, variations in ground speed, changes in altitude, attitude variations, and surface topography. The current investigation was carried out with an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) scene of central Switzerland (Rigi) from NASA's Multi Aircraft Campaign (MAC) in Europe (1991). The parametric approach reconstructs for every pixel the observation geometry based on the flight line, aircraft attitude, and surface topography. To utilize the data for analysis of materials on the surface, the AVIRIS data are corrected to apparent reflectance using algorithms based on MODTRAN (moderate resolution transfer code).
Keefer, Matthew W; Wilson, Sara E; Dankowicz, Harry; Loui, Michael C
2014-03-01
Recent research in ethics education shows a potentially problematic variation in content, curricular materials, and instruction. While ethics instruction is now widespread, studies have identified significant variation in both the goals and methods of ethics education, leaving researchers to conclude that many approaches may be inappropriately paired with goals that are unachievable. This paper speaks to these concerns by demonstrating the importance of aligning classroom-based assessments to clear ethical learning objectives in order to help students and instructors track their progress toward meeting those objectives. Two studies at two different universities demonstrate the usefulness of classroom-based, formative assessments for improving the quality of students' case responses in computational modeling and research ethics.
Image denoising by a direct variational minimization
NASA Astrophysics Data System (ADS)
Janev, Marko; Atanacković, Teodor; Pilipović, Stevan; Obradović, Radovan
2011-12-01
In this article we introduce a novel method for image de-noising which combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach in the field of image processing. It is based on a direct minimization of an energy functional containing a minimal surface regularizer that uses a fractional gradient. The minimization is performed on every predefined patch of the image independently. By doing so, we avoid the use of an artificial-time PDE model, with its inherent problems of finding the optimal stopping time as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using information on the level of discontinuities on a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator and still obtain minimal degradation, we combine a Ritz variational method for the actual minimization on a patch with a complementary fractional variational principle. Thus, the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, comparing the proposed method with several PDE-based methods, where we get significantly better denoising results, especially on oscillatory regions.
Tang, Tao; Stevenson, R Jan; Infante, Dana M
2016-10-15
Regional variation in both natural environment and human disturbance can influence performance of ecological assessments. In this study we calculated 5 types of benthic diatom multimetric indices (MMIs) with 3 different approaches to account for variation in ecological assessments. We used: site groups defined by ecoregions or diatom typologies; the same or different sets of metrics among site groups; and unmodeled or modeled MMIs, where models accounted for natural variation in metrics within site groups by calculating an expected reference condition for each metric and each site. We used data from the USEPA's National Rivers and Streams Assessment to calculate the MMIs and evaluate changes in MMI performance. MMI performance was evaluated with indices of precision, bias, responsiveness, sensitivity and relevancy which were respectively measured as MMI variation among reference sites, effects of natural variables on MMIs, difference between MMIs at reference and highly disturbed sites, percent of highly disturbed sites properly classified, and relation of MMIs to human disturbance and stressors. All 5 types of MMIs showed considerable discrimination ability. Using different metrics among ecoregions sometimes reduced precision, but it consistently increased responsiveness, sensitivity, and relevancy. Site specific metric modeling reduced bias and increased responsiveness. Combined use of different metrics among site groups and site specific modeling significantly improved MMI performance irrespective of site grouping approach. Compared to ecoregion site classification, grouping sites based on diatom typologies improved precision, but did not improve overall performance of MMIs if we accounted for natural variation in metrics with site specific models. We conclude that using different metrics among ecoregions and site specific metric modeling improve MMI performance, particularly when used together. Applications of these MMI approaches in ecological assessments introduced a tradeoff with assessment consistency when metrics differed across site groups, but they justified the convenient and consistent use of ecoregions.
NASA Astrophysics Data System (ADS)
Mirkhalili, Seyedhamzeh
2016-07-01
Chlorophyll is an extremely important biomolecule, critical in photosynthesis, which allows plants to absorb energy from light. At the base of the ocean food web are single-celled algae and other plant-like organisms known as phytoplankton. Like plants on land, phytoplankton use chlorophyll and other light-harvesting pigments to carry out photosynthesis. Where phytoplankton grow depends on available sunlight, temperature, and nutrient levels. In this research, a GIS approach using ArcGIS software and QuikSCAT satellite data was applied to visualize WIND, SST (Sea Surface Temperature) and CHL (Chlorophyll) variations in the Caspian Sea. Results indicate that the increase in chlorophyll concentration in coastal areas is primarily driven by terrestrial nutrients and does not imply that warmer SST will lead to an increase in chlorophyll concentration and, consequently, phytoplankton abundance.
Excitonic Order and Superconductivity in the Two-Orbital Hubbard Model: Variational Cluster Approach
NASA Astrophysics Data System (ADS)
Fujiuchi, Ryo; Sugimoto, Koudai; Ohta, Yukinori
2018-06-01
Using the variational cluster approach based on the self-energy functional theory, we study the possible occurrence of excitonic order and superconductivity in the two-orbital Hubbard model with intra- and inter-orbital Coulomb interactions. It is known that an antiferromagnetic Mott insulator state appears in the regime of strong intra-orbital interaction, a band insulator state appears in the regime of strong inter-orbital interaction, and an excitonic insulator state appears between them. In addition to these states, we find that the s±-wave superconducting state appears in the small-correlation regime, and the d_{x^2-y^2}-wave superconducting state appears on the boundary of the antiferromagnetic Mott insulator state. We calculate the single-particle spectral function of the model and compare the band gap formation due to the superconducting and excitonic orders.
Identifying Interacting Genetic Variations by Fish-Swarm Logic Regression
Yang, Aiyuan; Yan, Chunxia; Zhu, Feng; Zhao, Zhongmeng; Cao, Zhi
2013-01-01
Understanding associations between genotypes and complex traits is a fundamental problem in human genetics. A major open problem in mapping phenotypes is that of identifying a set of interacting genetic variants, which might contribute to complex traits. Logic regression (LR) is a powerful multivariant association tool. Several LR-based approaches have been successfully applied to different datasets. However, these approaches are not adequate with regard to accuracy and efficiency. In this paper, we propose a new LR-based approach, called fish-swarm logic regression (FSLR), which improves the logic regression process by incorporating swarm optimization. In our approach, a school of fish agents are conducted in parallel. Each fish agent holds a regression model, while the school searches for better models through various preset behaviors. A swarm algorithm improves the accuracy and the efficiency by speeding up the convergence and preventing it from dropping into local optimums. We apply our approach on a real screening dataset and a series of simulation scenarios. Compared to three existing LR-based approaches, our approach outperforms them by having lower type I and type II error rates, being able to identify more preset causal sites, and performing at faster speeds. PMID:23984382
A New Proof of the Expected Frequency Spectrum under the Standard Neutral Model.
Hudson, Richard R
2015-01-01
The sample frequency spectrum is an informative and frequently employed approach for summarizing DNA variation data. Under the standard neutral model the expectation of the sample frequency spectrum has been derived by at least two distinct approaches. One relies on using results from diffusion approximations to the Wright-Fisher Model. The other is based on Pólya urn models that correspond to the standard coalescent model. A new proof of the expected frequency spectrum is presented here. It is a proof by induction and does not require diffusion results and does not require the somewhat complex sums and combinatorics of the derivations based on urn models.
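The result being re-proved, in the standard notation:

```latex
\[
  \mathrm{E}[\xi_i] \;=\; \frac{\theta}{i}, \qquad i = 1, \dots, n-1,
\]
```

where ξ_i is the number of segregating sites at which the derived allele appears i times in a sample of n sequences, and θ is the scaled population mutation parameter.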
Karadimas, H.; Hemery, F.; Roland, P.; Lepage, E.
2000-01-01
In medical software development, the use of databases plays a central role. However, most databases have heterogeneous encodings and data models. Dealing with these variations in the application code directly is error-prone and reduces the potential reuse of the produced software. Several approaches to overcome these limitations have been proposed in the medical database literature and are presented here. We present a simple solution, based on a Java library and a central metadata description file in XML. This development approach presents several benefits in software design and development cycles, the main one being simplicity of maintenance. PMID:11079915
Hybrid Data Assimilation without Ensemble Filtering
NASA Technical Reports Server (NTRS)
Todling, Ricardo; Akkraoui, Amal El
2014-01-01
The Global Modeling and Assimilation Office is preparing to upgrade its three-dimensional variational system to a hybrid approach in which the ensemble is generated using a square-root ensemble Kalman filter (EnKF) and the variational problem is solved using the Grid-point Statistical Interpolation system. As in most EnKF applications, we found it necessary to employ a combination of multiplicative and additive inflations to compensate for sampling and modeling errors, respectively, and to maintain the small-member ensemble solution close to the variational solution; we also found it necessary to re-center the members of the ensemble about the variational analysis. During tuning of the filter we found re-centering and additive inflation to play a considerably larger role than expected, particularly in a dual-resolution context when the variational analysis is run at higher resolution than the ensemble. This led us to consider a hybrid strategy in which the members of the ensemble are generated by simply converting the variational analysis to the resolution of the ensemble and applying additive inflation, thus bypassing the EnKF. Comparisons of this so-called filter-free hybrid procedure with an EnKF-based hybrid procedure and a control non-hybrid, traditional scheme show both hybrid strategies to provide equally significant improvement over the control; more interestingly, the filter-free procedure was found to give qualitatively similar results to the EnKF-based procedure.
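A schematic of the filter-free member generation described above; the coarsening operator and the additive-inflation sampler are stand-ins for the operational ones.

```python
import numpy as np

def filter_free_ensemble(var_analysis_hi, coarsen, perturbation_draw, n_members):
    """Generate ensemble members by converting the high-resolution variational
    analysis to ensemble resolution and adding additive-inflation perturbations,
    bypassing the EnKF entirely."""
    base = coarsen(var_analysis_hi)     # variational analysis at ensemble resolution
    # zero-mean perturbations keep the ensemble centered on the analysis,
    # so no separate re-centering step is needed here
    return [base + perturbation_draw() for _ in range(n_members)]

# toy 1-D demo with hypothetical operators
rng = np.random.default_rng(3)
x_hi = np.sin(np.linspace(0, 2 * np.pi, 400))             # stand-in analysis field
coarsen = lambda x: x.reshape(100, 4).mean(axis=1)        # 4:1 resolution reduction
draw = lambda: 0.05 * rng.normal(size=100)                # additive inflation sample
ens = filter_free_ensemble(x_hi, coarsen, draw, n_members=32)
print(np.mean(ens, axis=0)[:3], coarsen(x_hi)[:3])        # mean stays near the analysis
```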
Variation in clinical decision-making for induction of labour: a qualitative study.
Nippita, Tanya A; Porter, Maree; Seeho, Sean K; Morris, Jonathan M; Roberts, Christine L
2017-09-22
Unexplained variation in induction of labour (IOL) rates exist between hospitals, even after accounting for casemix and hospital differences. We aimed to explore factors that influence clinical decision-making for IOL that may be contributing to the variation in IOL rates between hospitals. We undertook a qualitative study involving semi-structured, audio-recorded interviews with obstetricians and midwives. Using purposive sampling, participants known to have diverse opinions on IOL were selected from ten Australian maternity hospitals (based on differences in hospital IOL rate, size, location and case-mix complexities). Transcripts were indexed, coded, and analysed using the Framework Approach to identify main themes and subthemes. Forty-five participants were interviewed (21 midwives, 24 obstetric medical staff). Variations in decision-making for IOL were based on the obstetrician's perception of medical risk in the pregnancy (influenced by the obstetrician's personality and knowledge), their care relationship with the woman, how they involved the woman in decision-making, and resource availability. The role of a 'gatekeeper' in the procedural aspects of arranging an IOL also influenced decision-making. There was wide variation in the clinical decision-making practices of obstetricians and less accountability for decision-making in hospitals with a high IOL rate, with the converse occurring in hospitals with low IOL rates. Improved communication, standardised risk assessment and accountability for IOL offer potential for reducing variation in hospital IOL rates.
Support-vector-based emergent self-organising approach for emotional understanding
NASA Astrophysics Data System (ADS)
Nguwi, Yok-Yen; Cho, Siu-Yeung
2010-12-01
This study discusses the computational analysis of general emotion understanding from questionnaires methodology. The questionnaires method approaches the subject by investigating the real experience that accompanied the emotions, whereas the other laboratory approaches are generally associated with exaggerated elements. We adopted a connectionist model called support-vector-based emergent self-organising map (SVESOM) to analyse the emotion profiling from the questionnaires method. The SVESOM first identifies the important variables by giving discriminative features with high ranking. The classifier then performs the classification based on the selected features. Experimental results show that the top rank features are in line with the work of Scherer and Wallbott [(1994), 'Evidence for Universality and Cultural Variation of Differential Emotion Response Patterning', Journal of Personality and Social Psychology, 66, 310-328], which approached the emotions physiologically. While the performance measures show that using the full features for classifications can degrade the performance, the selected features provide superior results in terms of accuracy and generalisation.
Decoding brain cancer dynamics: a quantitative histogram-based approach using temporal MRI
NASA Astrophysics Data System (ADS)
Zhou, Mu; Hall, Lawrence O.; Goldgof, Dmitry B.; Russo, Robin; Gillies, Robert J.; Gatenby, Robert A.
2015-03-01
Brain tumor heterogeneity remains a challenge for probing brain cancer evolutionary dynamics. In light of evolution, it is a priority to inspect the cancer system from a time-domain perspective since it explicitly tracks the dynamics of cancer variations. In this paper, we study the problem of exploring brain tumor heterogeneity from temporal clinical magnetic resonance imaging (MRI) data. Our goal is to discover evidence-based knowledge from such temporal imaging data, where multiple clinical MRI scans from Glioblastoma multiforme (GBM) patients are generated during therapy. In particular, we propose a quantitative histogram-based approach that builds a prediction model to measure the difference in histograms obtained from pre- and post-treatment. The study could significantly assist radiologists by providing a metric to identify distinctive patterns within each tumor, which is crucial for the goal of providing patient-specific treatments. We examine the proposed approach for a practical application - clinical survival group prediction. Experimental results show that our approach achieved 90.91% accuracy.
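A minimal sketch of the histogram-difference idea above, computing simple change features between pre- and post-treatment ROI intensity histograms; the bin count and the specific features are illustrative assumptions.

```python
import numpy as np

def histogram_change(pre, post, bins=32, rng_minmax=None):
    """Normalized intensity histograms of a tumor ROI before and after
    treatment, plus simple change features (L1 distance, mean shift)."""
    lo, hi = rng_minmax if rng_minmax else (min(pre.min(), post.min()),
                                            max(pre.max(), post.max()))
    h_pre, edges = np.histogram(pre, bins=bins, range=(lo, hi), density=True)
    h_post, _ = np.histogram(post, bins=bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    l1 = np.sum(np.abs(h_post - h_pre)) * width
    return {"l1_distance": l1, "mean_shift": post.mean() - pre.mean()}

# toy ROIs: post-treatment intensities shift downward (illustrative only)
rng = np.random.default_rng(0)
pre = rng.normal(120, 15, 5000)     # hypothetical pre-treatment ROI voxels
post = rng.normal(105, 20, 5000)    # hypothetical post-treatment ROI voxels
print(histogram_change(pre, post))  # features like these could feed the classifier
```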
A regularized clustering approach to brain parcellation from functional MRI data
NASA Astrophysics Data System (ADS)
Dillon, Keith; Wang, Yu-Ping
2017-08-01
We consider a data-driven approach for the subdivision of an individual subject's functional Magnetic Resonance Imaging (fMRI) scan into regions of interest, i.e., brain parcellation. The approach is based on a computational technique for calculating resolution from inverse problem theory, which we apply to neighborhood selection for brain connectivity networks. This can be efficiently calculated even for very large images, and explicitly incorporates regularization in the form of spatial smoothing and a noise cutoff. We demonstrate the reproducibility of the method on multiple scans of the same subjects, as well as the variations between subjects.
Hydrodynamic cavitation: from theory towards a new experimental approach
NASA Astrophysics Data System (ADS)
Lucia, Umberto; Gervino, Gianpiero
2009-09-01
Hydrodynamic cavitation is analysed by a global thermodynamic principle following an approach based on the maximum irreversible entropy variation, which has already given promising results for open systems and has been successfully applied to specific engineering problems. In this paper we present a new phenomenological method to evaluate the conditions inducing cavitation. We think this method could be useful in the design of turbomachinery and related technologies: it represents both an original physical approach to cavitation and an economic saving in planning, because the theoretical analysis could allow engineers to reduce the experimental tests and the costs of the design process.
Li, Shou-Li; Vasemägi, Anti; Ramula, Satu
2016-01-01
Background and Aims Assessing the demographic consequences of genetic variation is fundamental to invasion biology. However, genetic and demographic approaches are rarely combined to explore the effects of genetic variation on invasive populations in natural environments. This study combined population genetics, demographic data and a greenhouse experiment to investigate the consequences of genetic variation for the population fitness of the perennial, invasive herb Lupinus polyphyllus. Methods Genetic and demographic data were collected from 37 L. polyphyllus populations representing different latitudes in Finland, and genetic variation was characterized based on 13 microsatellite loci. Associations between genetic variation and population size, population density, latitude and habitat were investigated. Genetic variation was then explored in relation to four fitness components (establishment, survival, growth, fecundity) measured at the population level, and the long-term population growth rate (λ). For a subset of populations genetic variation was also examined in relation to the temporal variability of λ. A further assessment was made of the role of natural selection in the observed variation of certain fitness components among populations under greenhouse conditions. Key Results It was found that genetic variation correlated positively with population size, particularly at higher latitudes, and differed among habitat types. Average seedling establishment per population increased with genetic variation in the field, but not under greenhouse conditions. Quantitative genetic divergence (QST) based on seedling establishment in the greenhouse was smaller than allelic genetic divergence (F′ST), indicating that unifying selection has a prominent role in this fitness component. Genetic variation was not associated with average survival, growth or fecundity measured at the population level, λ or its variability. Conclusions The study suggests that although genetic variation may facilitate plant invasions by increasing seedling establishment, it may not necessarily affect the long-term population growth rate. Therefore, established invasions may be able to grow equally well regardless of their genetic diversity. PMID:26420202
García-Roger, Eduardo Moisés; Franch, Belen; Carmona, María José; Serra, Manuel
2017-01-01
Fluctuations in environmental parameters are increasingly being recognized as essential features of any habitat. The quantification of whether environmental fluctuations are prevalently predictable or unpredictable is remarkably relevant to understanding the evolutionary responses of organisms. However, when characterizing the relevant features of natural habitats, ecologists typically face two problems: (1) gathering long-term data and (2) handling the hard-won data. This paper takes advantage of the free access to long-term recordings of remote sensing data (27 years, Landsat TM/ETM+) to assess a set of environmental models for estimating environmental predictability. The case study included 20 Mediterranean saline ponds and lakes, and the focal variable was the water-surface area. This study first aimed to produce a method for accurately estimating the water-surface area from satellite images. Saline ponds can develop salt-crusted areas that make it difficult to distinguish between soil and water. This challenge was addressed using a novel pipeline that combines band ratio water indices and the short near-infrared band as a salt filter. The study then extracted the predictable and unpredictable components of variation in the water-surface area. Two different approaches, each showing variations in the parameters, were used to obtain the stochastic variation around a regular pattern, with the objective of dissecting the effect of assumptions on predictability estimations. The first approach, which is based on Colwell's predictability metrics, transforms the focal variable into a nominal one. The resulting discrete categories define the relevant variations in the water-surface area. In the second approach, we introduced General Additive Model (GAM) fitting as a new metric for quantifying predictability. Both approaches produced a wide range of predictability for the studied ponds. Some model assumptions, which are considered very different a priori, had minor effects, whereas others produced predictability estimations that showed some degree of divergence. We hypothesize that these diverging estimations of predictability reflect the effect of fluctuations on different types of organisms. The fluctuation analysis described in this manuscript is applicable to a wide variety of systems, both aquatic and non-aquatic, and will be valuable for quantifying and characterizing predictability, which is essential in the context of the expected global increase in the unpredictability of environmental fluctuations. We advocate that a priori information for the organisms of interest should be used to select the most suitable metrics for estimating predictability, and we provide some guidelines for this approach. PMID:29121667
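The first approach mentioned above rests on Colwell's (1974) predictability metrics, which are computed from a states-by-time contingency table of the discretized focal variable. A generic sketch (not the authors' pipeline) follows; it assumes a matrix of observation counts and uses natural logarithms.

    import numpy as np

    def colwell_metrics(counts):
        """counts[i, j]: observations of state i in time period j.
        Returns (predictability P, constancy C, contingency M); P = C + M."""
        Z = counts.sum()
        H = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
        HX = H(counts.sum(axis=0) / Z)   # entropy of the time marginals
        HY = H(counts.sum(axis=1) / Z)   # entropy of the state marginals
        HXY = H(counts.ravel() / Z)      # joint entropy
        s = counts.shape[0]              # number of states
        C = 1.0 - HY / np.log(s)
        M = (HX + HY - HXY) / np.log(s)
        return C + M, C, M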
2013-01-01
Background SNPs&GO is a method for the prediction of deleterious Single Amino acid Polymorphisms (SAPs) using protein functional annotation. In this work, we present the web server implementation of SNPs&GO (WS-SNPs&GO). The server is based on Support Vector Machines (SVM) and, for a given protein, its input comprises the sequence and/or its three-dimensional structure (when available), a set of target variations, and its functional Gene Ontology (GO) terms. The output of the server provides, for each protein variation, the probability of its association with human disease. Results The server consists of two main components, including updated versions of the sequence-based SNPs&GO (recently scored as one of the best algorithms for predicting deleterious SAPs) and of the structure-based SNPs&GO3d programs. Sequence- and structure-based algorithms are extensively tested on a large set of annotated variations extracted from the SwissVar database. Selecting a balanced dataset with more than 38,000 SAPs, the sequence-based approach achieves 81% overall accuracy, 0.61 correlation coefficient and an Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve of 0.88. For the subset of ~6,600 variations mapped on protein structures available at the Protein Data Bank (PDB), the structure-based method scores with 84% overall accuracy, 0.68 correlation coefficient, and 0.91 AUC. When tested on a new blind set of variations, the results of the server are 79% and 83% overall accuracy for the sequence-based and structure-based inputs, respectively. Conclusions WS-SNPs&GO is a valuable tool that includes in a unique framework information derived from protein sequence, structure, evolutionary profile, and protein function. WS-SNPs&GO is freely available at http://snps.biofold.org/snps-and-go. PMID:23819482
Long-term Assessment of Carbon Budget of Terrestrial Ecosystems of Russia
NASA Astrophysics Data System (ADS)
Maksyutov, S. S.; Shvidenko, A.; Shchepashchenko, D.; Kraxner, F.
2016-12-01
We present a reanalysis of the Terrestrial Ecosystems Full Verified Carbon Account (FCA) for Russia for the period 2000-2012, based on the understanding that FCA is an underspecified (fuzzy) system. The methodology is based on the integration of major approaches to carbon cycle assessment, followed by harmonization and mutual constraint of the results obtained by independent methods. The landscape-ecosystem approach (LEA) was used for the systemic design of the account, with empirical assessment of the LEA based on a relevant combination of pool-based and flux-based methods. The information background of the LEA is presented in the form of an Integrated Land Information System, which includes a hybrid land cover (HLC) at 150 m resolution and relevant attributive databases. The HLC was developed based on a multi-sensor remote sensing concept (using 12 different satellite products), geographically weighted regression and Geo-Wiki validation (Schepaschenko et al. 2015). Carbon fluxes based on long-term measurements were corrected using seasonal climatic indicators of individual years. Uncertainties of intermediate and final results within the LEA are calculated by sequential algorithms. Results of the LEA were compared with those obtained by eddy covariance, process-based models of different types, inverse modeling and GOSAT Level 4 products. Uncertainty of the final results was calculated using a Bayesian approach. It has been shown that the terrestrial vegetation of Russia served as a net carbon sink in the range of 480-650 Tg C yr-1 during the studied period, mostly due to forests, with an interannual variation of around 10-20% at the country scale. The regional variation was significantly higher, depending on the specifics of seasonal weather and the accompanying regimes of natural disturbances. The overall uncertainty of the FCA is estimated at 22-25% on an annual basis and 7-9% for the period average.
Secure and Robust Iris Recognition Using Random Projections and Sparse Representations.
Pillai, Jaishanker K; Patel, Vishal M; Chellappa, Rama; Ratha, Nalini K
2011-09-01
Noncontact biometrics such as face and iris have additional benefits over contact-based biometrics such as fingerprint and hand geometry. However, three important challenges need to be addressed in a noncontact biometrics-based authentication system: the ability to handle unconstrained acquisition, robust and accurate matching, and privacy enhancement without compromising security. In this paper, we propose a unified framework based on random projections and sparse representations that can simultaneously address all three issues mentioned above in relation to iris biometrics. Our proposed quality measure can handle segmentation errors and a wide variety of possible artifacts during iris acquisition. We demonstrate how the proposed approach can be easily extended to handle alignment variations and recognition from iris videos, resulting in a robust and accurate system. The proposed approach includes enhancements to privacy and security by providing ways to create cancelable iris templates. Results on public data sets show significant benefits of the proposed approach.
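The cancelable-template idea can be illustrated with a plain Gaussian random projection: only the projected vector is stored, and revoking a template amounts to issuing a new projection matrix. This is a schematic sketch of that one ingredient, not the authors' full system (which also involves sparse-representation matching and the quality measure); function names and the threshold are illustrative.

    import numpy as np

    def make_projection(dim_in, dim_out, seed):
        # A new seed "cancels" old templates without re-enrolling the iris itself.
        rng = np.random.default_rng(seed)
        return rng.standard_normal((dim_out, dim_in)) / np.sqrt(dim_out)

    def enroll(iris_features, R):
        return R @ iris_features

    def match(probe_features, template, R, threshold=0.5):
        d = np.linalg.norm(R @ probe_features - template) / np.linalg.norm(template)
        return d < threshold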
A Multistage Approach for Image Registration.
Bowen, Francis; Hu, Jianghai; Du, Eliza Yingzi
2016-09-01
Successful image registration is an important step for object recognition, target detection, remote sensing, multimodal content fusion, scene blending, and disaster assessment and management. The geometric and photometric variations between images adversely affect the ability of an algorithm to estimate the transformation parameters that relate the two images. Local deformations, lighting conditions, object obstructions, and perspective differences all contribute to the challenges faced by traditional registration techniques. In this paper, a novel multistage registration approach is proposed that is resilient to viewpoint differences, image content variations, and lighting conditions. Robust registration is realized through the utilization of a novel region descriptor which couples the spatial and texture characteristics of invariant feature points. The proposed region descriptor is exploited in a multistage approach. A multistage process allows the utilization of the graph-based descriptor in many scenarios, thus allowing the algorithm to be applied to a broader set of images. Each successive stage of the registration technique is evaluated through an effective similarity metric which determines subsequent action. The registration of aerial and street view images from pre- and post-disaster scenes provides strong evidence that the proposed method estimates more accurate global transformation parameters than traditional feature-based methods. Experimental results show the robustness and accuracy of the proposed multistage image registration methodology.
Validation of high-throughput single cell analysis methodology.
Devonshire, Alison S; Baradez, Marc-Olivier; Morley, Gary; Marshall, Damian; Foy, Carole A
2014-05-01
High-throughput quantitative polymerase chain reaction (qPCR) approaches enable profiling of multiple genes in single cells, bringing new insights to complex biological processes and offering opportunities for single cell-based monitoring of cancer cells and stem cell-based therapies. However, workflows with well-defined sources of variation are required for clinical diagnostics and testing of tissue-engineered products. In a study of neural stem cell lines, we investigated the performance of lysis, reverse transcription (RT), preamplification (PA), and nanofluidic qPCR steps at the single cell level in terms of efficiency, precision, and limit of detection. We compared protocols using a separate lysis buffer with cell capture directly in RT-PA reagent. The two methods were found to have similar lysis efficiencies, whereas the direct RT-PA approach showed improved precision. Digital PCR was used to relate preamplified template copy numbers to Cq values and reveal where low-quality signals may affect the analysis. We investigated the impact of calibration and data normalization strategies as a means of minimizing the impact of inter-experimental variation on gene expression values and found that both approaches can improve data comparability. This study provides validation and guidance for the application of high-throughput qPCR workflows for gene expression profiling of single cells. Copyright © 2014 Elsevier Inc. All rights reserved.
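The relation between preamplified template copy number and Cq values referred to above is conventionally linear in log10(copies); the standard-curve arithmetic is sketched below with illustrative slope and intercept values (a slope near -3.32 corresponds to 100% amplification efficiency).

    # Standard curve: Cq = intercept + slope * log10(copies); values are illustrative.
    slope, intercept = -3.4, 38.0

    def copies_from_cq(cq):
        return 10 ** ((cq - intercept) / slope)

    def amplification_efficiency(slope):
        return 10 ** (-1.0 / slope) - 1.0   # 1.0 corresponds to 100% efficiency

    print(copies_from_cq(32.0))             # ~58 template copies
    print(amplification_efficiency(slope))  # ~0.97, i.e. ~97% efficient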
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mai, Sebastian; Marquetand, Philipp; González, Leticia
2014-08-21
An efficient perturbational treatment of spin-orbit coupling within the framework of high-level multi-reference techniques has been implemented in the most recent version of the COLUMBUS quantum chemistry package, extending the existing fully variational two-component (2c) multi-reference configuration interaction singles and doubles (MRCISD) method. The proposed scheme follows related implementations of quasi-degenerate perturbation theory (QDPT) model space techniques. Our model space is built either from uncontracted, large-scale scalar relativistic MRCISD wavefunctions or based on the scalar-relativistic solutions of the linear-response-theory-based multi-configurational averaged quadratic coupled cluster method (LRT-MRAQCC). The latter approach allows for a consistent, approximately size-consistent and size-extensive treatment of spin-orbit coupling. The approach is described in detail and compared to a number of related techniques. The inherent accuracy of the QDPT approach is validated by comparing cuts of the potential energy surfaces of acrolein and its S, Se, and Te analogues with the corresponding data obtained from matching fully variational spin-orbit MRCISD calculations. The conceptual availability of approximate analytic gradients with respect to geometrical displacements is an attractive feature of the 2c-QDPT-MRCISD and 2c-QDPT-LRT-MRAQCC methods for structure optimization and ab initio molecular dynamics simulations.
Performance variation in motor imagery brain-computer interface: a brief review.
Ahn, Minkyu; Jun, Sung Chan
2015-03-30
Brain-computer interface (BCI) technology has attracted significant attention over recent decades, and has made remarkable progress. However, BCI still faces a critical hurdle, in that performance varies greatly across and even within subjects, an obstacle that degrades the reliability of BCI systems. Understanding the causes of these problems is important if we are to create more stable systems. In this short review, we report the most recent studies and findings on performance variation, especially in motor imagery-based BCI; these studies have found that low-performance groups have a less-developed brain network that is incapable of motor imagery. Further, psychological and physiological states influence performance variation within subjects. We propose a possible strategic approach to deal with this variation, which may contribute to improving the reliability of BCI. In addition, the limitations of current work and opportunities for future studies are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Miller, Scott R.; Bebout, Brad M.
2004-01-01
Physiological and molecular phylogenetic approaches were used to investigate variation among 12 cyanobacterial strains in their tolerance of sulfide, an inhibitor of oxygenic photosynthesis. Cyanobacteria from sulfidic habitats were found to be phylogenetically diverse and exhibited an approximately 50-fold variation in photosystem II performance in the presence of sulfide. Whereas the degree of tolerance was positively correlated with sulfide levels in the environment, a strain's phenotype could not be predicted from the tolerance of its closest relatives. These observations suggest that sulfide tolerance is a dynamic trait primarily shaped by environmental variation. Despite differences in absolute tolerance, similarities among strains in the effects of sulfide on chlorophyll fluorescence induction indicated a common mode of toxicity. Based on similarities with treatments known to disrupt the oxygen-evolving complex, it was concluded that sulfide toxicity resulted from inhibition of the donor side of photosystem II.
NASA Astrophysics Data System (ADS)
Savastano, Giorgio; Komjathy, Attila; Verkhoglyadova, Olga; Mazzoni, Augusto; Crespi, Mattia; Wei, Yong; Mannucci, Anthony J.
2017-04-01
It is well known that tsunamis can produce gravity waves that propagate up to the ionosphere generating disturbed electron densities in the E and F regions. These ionospheric disturbances can be studied in detail using ionospheric total electron content (TEC) measurements collected by continuously operating ground-based receivers from the Global Navigation Satellite Systems (GNSS). Here, we present results using a new approach, named VARION (Variometric Approach for Real-Time Ionosphere Observation), and estimate slant TEC (sTEC) variations in a real-time scenario. Using the VARION algorithm we compute TEC variations at 56 GPS receivers in Hawaii as induced by the 2012 Haida Gwaii tsunami event. We observe TEC perturbations with amplitudes of up to 0.25 TEC units and traveling ionospheric perturbations (TIDs) moving away from the earthquake epicenter at an approximate speed of 316 m/s. We perform a wavelet analysis to analyze localized variations of power in the TEC time series and we find perturbation periods consistent with a tsunami typical deep ocean period. Finally, we present comparisons with the real-time tsunami MOST (Method of Splitting Tsunami) model produced by the NOAA Center for Tsunami Research and we observe variations in TEC that correlate in time and space with the tsunami waves.
Erranz, M Benjamín; Wilhelm, B Jan; Riquelme, V Raquel; Cruces, R Pablo
2015-01-01
Acute respiratory distress syndrome (ARDS) is the most severe form of respiratory failure. Theoretically, any acute lung condition can lead to ARDS, but only a small percentage of individuals actually develop the disease. On this basis, genetic factors have been implicated in the risk of developing ARDS. Based on the pathophysiology of this disease, many candidate genes have been evaluated as potential modifiers of ARDS in patients, as well as in animal models. Recent experimental data and clinical studies suggest that variations of genes involved in key processes of tissue, cellular and molecular lung damage may influence susceptibility to and prognosis of ARDS. However, the pathogenesis of pediatric ARDS is complex, and therefore, it can be expected that many genes might contribute. Genetic variations such as single nucleotide polymorphisms and copy-number variations are likely associated with susceptibility to ARDS in children with primary lung injury. Genome-wide association (GWA) studies can objectively examine these variations, and help identify important new genes and pathogenetic pathways for future analysis. This approach might also have diagnostic and therapeutic implications, such as predicting patient risk or developing a personalized therapeutic approach to this serious syndrome. Copyright © 2015. Published by Elsevier España, S.L.U.
Zeng, Dong; Gao, Yuanyuan; Huang, Jing; Bian, Zhaoying; Zhang, Hua; Lu, Lijun; Ma, Jianhua
2016-10-01
Multienergy computed tomography (MECT) allows identifying and differentiating different materials through simultaneous capture of multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in each energy window compared with the whole energy window, MECT images reconstructed by analytical approaches often suffer from a poor signal-to-noise ratio and strong streak artifacts. To address this particular challenge, this work presents a penalized weighted least-squares (PWLS) scheme incorporating the new concept of structure tensor total variation (STV) regularization, henceforth referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing higher-order derivatives of the desired MECT images. It thus provides more robust measures of image variation, which can eliminate the patchy artifacts often observed with total variation (TV) regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Extensive experiments with a digital XCAT phantom and a meat specimen clearly demonstrate that the present PWLS-STV algorithm achieves greater gains than existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of both quantitative and visual quality evaluations. Copyright © 2016 Elsevier Ltd. All rights reserved.
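A simplified sketch of the PWLS idea on a denoising analogue (identity system matrix, ordinary TV in place of the structure tensor TV, smoothed with a small epsilon); the step size and iteration count are illustrative, and a real MECT reconstruction would involve the projection operator and an alternating solver as in the paper.

    import numpy as np

    def pwls_tv_denoise(y, w, beta=0.05, step=0.2, iters=200, eps=1e-6):
        """Gradient descent on 0.5 * sum(w * (x - y)**2) + beta * TV_eps(x);
        y is a noisy 2-D image, w holds per-pixel statistical weights."""
        x = y.copy()
        for _ in range(iters):
            gx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences
            gy = np.diff(x, axis=0, append=x[-1:, :])
            mag = np.sqrt(gx**2 + gy**2 + eps)
            # TV subgradient: negative divergence of the normalized gradient field
            div = (np.diff(gx / mag, axis=1, prepend=0.0) +
                   np.diff(gy / mag, axis=0, prepend=0.0))
            x -= step * (w * (x - y) - beta * div)
        return x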
Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David
2015-01-01
Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
Quantifying Proportional Variability
Heath, Joel P.; Borowski, Peter
2013-01-01
Real quantities can undergo such a wide variety of dynamics that the mean is often a meaningless reference point for measuring variability. Despite their widespread application, techniques like the Coefficient of Variation are not truly proportional and exhibit pathological properties. The non-parametric measure Proportional Variability (PV) [1] resolves these issues and provides a robust way to summarize and compare variation in quantities exhibiting diverse dynamical behaviour. Instead of being based on deviation from an average value, variation is simply quantified by comparing the numbers to each other, requiring no assumptions about central tendency or underlying statistical distributions. While PV has been introduced before and has already been applied in various contexts to population dynamics, here we present a deeper analysis of this new measure, derive analytical expressions for the PV of several general distributions and present new comparisons with the Coefficient of Variation, demonstrating cases in which PV is the more favorable measure. We show that PV provides an easily interpretable approach for measuring and comparing variation that can be generally applied throughout the sciences, from contexts ranging from stock market stability to climate variation. PMID:24386334
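The PV measure itself is simple to compute: every pair of values is compared directly through the ratio of the smaller to the larger. A minimal implementation of the published formula, assuming non-negative data:

    import numpy as np
    from itertools import combinations

    def proportional_variability(z):
        """PV: mean over all pairs (i, j) of 1 - min(z_i, z_j) / max(z_i, z_j).
        Assumes non-negative values; a pair of two zeros is undefined here."""
        z = np.asarray(z, dtype=float)
        d = [1.0 - min(a, b) / max(a, b) for a, b in combinations(z, 2)]
        return float(np.mean(d))

    print(proportional_variability([2.0, 2.0, 2.0]))  # 0.0: no variability
    print(proportional_variability([1.0, 10.0]))      # 0.9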
Integrating evolutionary and functional approaches to infer adaptation at specific loci.
Storz, Jay F; Wheat, Christopher W
2010-09-01
Inferences about adaptation at specific loci are often exclusively based on the static analysis of DNA sequence variation. Ideally, population-genetic evidence for positive selection serves as a stepping-off point for experimental studies to elucidate the functional significance of the putatively adaptive variation. We argue that inferences about adaptation at specific loci are best achieved by integrating the indirect, retrospective insights provided by population-genetic analyses with the more direct, mechanistic insights provided by functional experiments. Integrative studies of adaptive genetic variation may sometimes be motivated by experimental insights into molecular function, which then provide the impetus to perform population genetic tests to evaluate whether the functional variation is of adaptive significance. In other cases, studies may be initiated by genome scans of DNA variation to identify candidate loci for recent adaptation. Results of such analyses can then motivate experimental efforts to test whether the identified candidate loci do in fact contribute to functional variation in some fitness-related phenotype. Functional studies can provide corroborative evidence for positive selection at particular loci, and can potentially reveal specific molecular mechanisms of adaptation.
Slob, Wout
2006-07-01
Probabilistic dietary exposure assessments that are fully based on Monte Carlo sampling from the raw intake data may not be appropriate. This paper shows that the data should first be analysed by using a statistical model that is able to take the various dimensions of food consumption patterns into account. A (parametric) model is discussed that takes into account the interindividual variation in (daily) consumption frequencies, as well as in amounts consumed. Further, the model can be used to include covariates, such as age, sex, or other individual attributes. Some illustrative examples show how this model may be used to estimate the probability of exceeding an (acute or chronic) exposure limit. These results are compared with the results based on directly counting the fraction of observed intakes exceeding the limit value. This comparison shows that the latter method is not adequate, in particular for the acute exposure situation. A two-step approach for probabilistic (acute) exposure assessment is proposed: first analyse the consumption data by a (parametric) statistical model as discussed in this paper, and then use Monte Carlo techniques for combining the variation in concentrations with the variation in consumption (by sampling from the statistical model). This approach results in an estimate of the fraction of the population as a function of the fraction of days at which the exposure limit is exceeded by the individual.
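The proposed two-step approach can be sketched as follows; all distributions and parameter values here are illustrative stand-ins for the fitted statistical model, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    n_individuals, n_days, limit = 10_000, 365, 50.0   # limit in illustrative units

    # Step 1 (stand-in for the fitted model): individual consumption frequencies
    # and individual log-mean daily amounts.
    freq = rng.beta(2, 8, n_individuals)               # P(consume on a given day)
    mu_amount = rng.normal(3.0, 0.4, n_individuals)

    # Step 2: Monte Carlo, combining consumption variation with concentrations.
    exceed_days = np.zeros(n_individuals)
    for _ in range(n_days):
        consumes = rng.random(n_individuals) < freq
        amount = np.where(consumes, rng.lognormal(mu_amount, 0.5), 0.0)
        conc = rng.lognormal(0.0, 0.8, n_individuals)  # residue concentration
        exceed_days += amount * conc > limit

    # Fraction of the population exceeding the limit on more than 1% of days:
    print(np.mean(exceed_days / n_days > 0.01))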
Linking 1D coastal ocean modelling to environmental management: an ensemble approach
NASA Astrophysics Data System (ADS)
Mussap, Giulia; Zavatarelli, Marco; Pinardi, Nadia
2017-12-01
The use of a one-dimensional interdisciplinary numerical model of the coastal ocean as a tool contributing to the formulation of ecosystem-based management (EBM) is explored. The focus is on the definition of an experimental design based on ensemble simulations, integrating variability linked to scenarios (characterised by changes in the system forcing) and to the concurrent variation of selected, and poorly constrained, model parameters. The modelling system used was previously designed specifically for use in "data-rich" areas, so that horizontal dynamics can be resolved by a diagnostic approach and external inputs can be parameterised by properly calibrated nudging schemes. Ensembles determined by changes in the simulated environmental (physical and biogeochemical) dynamics, under joint forcing and parameterisation variations, highlight the uncertainties associated with the application of specific scenarios that are relevant to EBM, providing an assessment of the reliability of the predicted changes. The work was carried out by implementing the coupled modelling system BFM-POM1D in an area of the Gulf of Trieste (northern Adriatic Sea) considered homogeneous from the point of view of hydrological properties, and forcing it with changing climatic (warming) and anthropogenic (reduction of the land-based nutrient input) pressures. Model parameters affected by considerable uncertainties (due to the lack of relevant observations) were varied jointly with the scenarios of change. The resulting large set of ensemble simulations provided a general estimation of the model uncertainties related to the joint variation of pressures and model parameters. Information on the variability of the model results is meant to convey, efficiently and comprehensibly, the uncertainty and reliability of the model results to non-technical EBM planners and stakeholders, so that model-based information can contribute effectively to EBM.
Quality Internships for School Leaders: Meeting the Challenge
ERIC Educational Resources Information Center
Gaudreau, Patricia A.; Kufel, Andrew P.; Parks, David J.
2006-01-01
An internship is essential for the development of competency-based leadership. Variation in the quality of time spent in clinical settings depends on the use of approaches that provide interns with opportunities to observe, participate in, and reflect on the problems of leadership and management found in schools. In essence, the internship is an…
Nonlocal dark solitons under competing cubic-quintic nonlinearities.
Chen, L; Wang, Q; Shen, M; Zhao, H; Lin, Y-Y; Jeng, C-C; Lee, R-K; Krolikowski, W
2013-01-01
We investigate properties of dark solitons under competing nonlocal cubic-local quintic nonlinearities. Analytical results, based on a variational approach and confirmed by direct numerical simulations, reveal the existence of unique dark soliton solutions whose width is independent of the degree of nonlocality, due to the competing cubic-quintic nonlinearities.
Brittle fracture phase-field modeling of a short-rod specimen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escobar, Ivana; Tupek, Michael R.; Bishop, Joseph E.
2015-09-01
Predictive simulation capabilities for modeling fracture evolution provide further insight into quantities of interest in comparison to experimental testing. Based on the variational approach to fracture, phase-field modeling achieves the goal of robustly modeling fracture in brittle materials and captures complex crack topologies in three dimensions.
Pre-Service English Teachers' Beliefs on Speaking Skill Based on Motivational Orientations
ERIC Educational Resources Information Center
Dinçer, Ali; Yesilyurt, Savas
2013-01-01
This study aimed to explore pre-service English teachers' perceptions of teaching speaking in Turkey, the importance they give to this language skill, and their self-evaluation of their speaking competence. With case design and maximum variation sampling approach, seven pre-service English teachers' beliefs about speaking skills were gathered in…
Estuaries in the Pacific Northwest have major intraannual and within estuary variation in sources and magnitudes of nutrient inputs. To develop an approach for setting nutrient criteria for these systems, we conducted a case study for Yaquina Bay, OR based on a synthesis of resea...
Plant trait detection with multi-scale spectrometry
NASA Astrophysics Data System (ADS)
Gamon, J. A.; Wang, R.
2017-12-01
Proximal and remote sensing using imaging spectrometry offers new opportunities for detecting plant traits, with benefits for phenotyping, productivity estimation, stress detection, and biodiversity studies. Using proximal and airborne spectrometry, we evaluated variation in plant optical properties at various spatial and spectral scales with the goal of identifying optimal scales for distinguishing plant traits related to photosynthetic function. Using directed approaches based on physiological vegetation indices, and statistical approaches based on spectral information content, we explored alternate ways of distinguishing plant traits with imaging spectrometry. With both leaf traits and canopy structure contributing to the signals, results exhibit a strong scale dependence. Our results demonstrate the benefits of multi-scale experimental approaches within a clear conceptual framework when applying remote sensing methods to plant trait detection for phenotyping, productivity, and biodiversity studies.
NASA Astrophysics Data System (ADS)
Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-01
The present work compares dissimilarity-based and covariance-based unsupervised chemometric classification approaches using total synchronous fluorescence spectroscopy data sets acquired for cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups, and hence leads to poor class separation. The present work shows that the classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix instead.
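The contrast between the two decompositions can be made concrete: the covariance route eigendecomposes the covariance of the data matrix (PCA), while the dissimilarity route double-centers a squared pairwise dissimilarity matrix before the eigenanalysis (classical multidimensional scaling). A generic sketch, not tied to the fluorescence data:

    import numpy as np

    def pca_scores(X, k=2):
        """Covariance-based decomposition of X (samples x variables)."""
        Xc = X - X.mean(axis=0)
        vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        order = np.argsort(vals)[::-1][:k]
        return Xc @ vecs[:, order]

    def mds_scores(D, k=2):
        """Dissimilarity-based decomposition of D (samples x samples)."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
        B = -0.5 * J @ (D ** 2) @ J             # double-centered matrix
        vals, vecs = np.linalg.eigh(B)
        order = np.argsort(vals)[::-1][:k]
        return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))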
NASA Astrophysics Data System (ADS)
Slaski, G.; Ohde, B.
2016-09-01
The article presents the results of a statistical dispersion analysis of the energy and power demand for tractive purposes of a battery electric vehicle. The authors compare the data distributions for different values of average speed in two approaches, namely a short and a long period of observation. The short period of observation (generally around several hundred meters) follows from a previously proposed macroscopic energy consumption model based on the average speed per road section. This approach yielded high values of the standard deviation, with coefficients of variation (the ratio of standard deviation to mean) of around 0.7-1.2. The long period of observation (several kilometers long) is similar in length to the standardized speed cycles used in testing vehicle energy consumption and available range. The data were analysed to determine the impact of observation length on the variation in energy and power demand. The analysis was based on a simulation of electric power and energy consumption performed with speed profile data recorded in the Poznan agglomeration.
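The dispersion statistics described here amount to grouping energy-consumption samples by average speed and computing the coefficient of variation in each bin; a sketch with hypothetical column names and synthetic data standing in for the recorded speed profiles:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "avg_speed_kmh": rng.uniform(5, 90, 5000),
        "energy_kwh_per_km": rng.lognormal(-1.8, 0.6, 5000),
    })
    bins = pd.cut(df["avg_speed_kmh"], bins=range(0, 100, 10))
    stats = df.groupby(bins, observed=True)["energy_kwh_per_km"].agg(["mean", "std"])
    stats["cv"] = stats["std"] / stats["mean"]   # coefficient of variation per bin
    print(stats)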
Real-time probabilistic covariance tracking with efficient model update.
Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li
2012-05-01
The recently proposed covariance region descriptor has been proven robust and versatile for a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but with a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with incremental covariance tensor learning (ICTL). To address appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges, including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
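The covariance region descriptor and the Riemannian similarity it relies on can be sketched as follows; the five-dimensional per-pixel feature vector is one common, illustrative choice, and the distance is the affine-invariant metric computed from generalized eigenvalues.

    import numpy as np
    from scipy.linalg import eigh

    def region_covariance(patch):
        """Covariance of per-pixel features [x, y, intensity, |dI/dy|, |dI/dx|]."""
        h, w = patch.shape
        ys, xs = np.mgrid[0:h, 0:w]
        gy, gx = np.gradient(patch.astype(float))
        F = np.stack([xs.ravel(), ys.ravel(), patch.ravel().astype(float),
                      np.abs(gy).ravel(), np.abs(gx).ravel()])
        return np.cov(F)

    def covariance_distance(C1, C2):
        """sqrt(sum(log(lambda_i)^2)) over generalized eigenvalues of (C1, C2)."""
        lam = eigh(C1, C2, eigvals_only=True)
        return float(np.sqrt(np.sum(np.log(lam) ** 2)))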
Adaptive metric learning with deep neural networks for video-based facial expression recognition
NASA Astrophysics Data System (ADS)
Liu, Xiaofeng; Ge, Yubin; Yang, Chao; Jia, Ping
2018-01-01
Video-based facial expression recognition has become increasingly important for many applications in the real world. Although numerous efforts have been made for the single-sequence case, balancing the complex distribution of intra- and interclass variations between sequences remains a great difficulty in this area. We propose the adaptive (N+M)-tuplet clusters loss function and optimize it together with the softmax loss in the training phase. The variations introduced by personal attributes are alleviated using similarity measurements of multiple samples in the feature space, with far fewer comparisons than conventional deep metric learning approaches, which enables metric calculations for large-data applications (e.g., videos). Both the spatial and temporal relations are well explored by a unified framework that consists of an Inception-ResNet network with long short-term memory and a two-fully-connected-layer branch structure. Our proposed method has been evaluated on three well-known databases, and the experimental results show that it outperforms many state-of-the-art approaches.
Lloyd, Christopher W; Shmuylovich, Leonid; Holland, Mark R; Miller, James G; Kovács, Sándor J
2011-08-01
Myocardial tissue characterization represents an extension of currently available echocardiographic imaging. The systematic variation of backscattered energy during the cardiac cycle (the "cyclic variation" of backscatter) has been employed to characterize cardiac function in a wide range of investigations. However, the mechanisms responsible for observed cyclic variation remain incompletely understood. As a step toward determining the features of cardiac structure and function that are responsible for the observed cyclic variation, the present study makes use of a kinematic approach to diastolic function quantitation to identify diastolic function determinants that influence the magnitude and timing of cyclic variation. Echocardiographic measurements of 32 subjects provided data for determination of the relation of the cyclic variation of backscatter to diastolic function, characterized in terms of E-wave-determined, kinematic model-based parameters of chamber stiffness, viscosity/relaxation and load. The normalized time delay of cyclic variation appears to be related to the relative viscoelasticity of the chamber and predictive of the kinematic filling dynamics as determined using the parameterized diastolic filling formalism (with r-values ranging from .44 to .59). The magnitude of cyclic variation does not appear to be strongly related to the kinematic parameters. Copyright © 2011 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Finite-temperature time-dependent variation with multiple Davydov states
NASA Astrophysics Data System (ADS)
Wang, Lu; Fujihashi, Yuta; Chen, Lipeng; Zhao, Yang
2017-03-01
The Dirac-Frenkel time-dependent variational approach with Davydov Ansätze is a sophisticated, yet efficient technique to obtain an accurate solution to many-body Schrödinger equations for energy and charge transfer dynamics in molecular aggregates and light-harvesting complexes. We extend this variational approach to finite temperature dynamics of the spin-boson model by adopting a Monte Carlo importance sampling method. In order to demonstrate the applicability of this approach, we compare calculated real-time quantum dynamics of the spin-boson model with that from numerically exact iterative quasiadiabatic propagator path integral (QUAPI) technique. The comparison shows that our variational approach with the single Davydov Ansätze is in excellent agreement with the QUAPI method at high temperatures, while the two differ at low temperatures. Accuracy in dynamics calculations employing a multitude of Davydov trial states is found to improve substantially over the single Davydov Ansatz, especially at low temperatures. At a moderate computational cost, our variational approach with the multiple Davydov Ansatz is shown to provide accurate spin-boson dynamics over a wide range of temperatures and bath spectral densities.
Zhao, Jian; Yang, Ping; Zhao, Yue
2017-06-01
The speckle-pattern-based characteristics of digital image correlation (DIC) restrict its application in engineering fields and nonlaboratory environments, since a serious decorrelation effect occurs under localized sudden illumination variation. The simple and efficient speckle pattern adjustment and optimization approach presented in this paper aims to provide a novel speckle pattern robust enough to resist local illumination variation. The new speckle pattern, called the neighborhood binary speckle pattern, is derived from the original speckle pattern by thresholding the pixels of a neighborhood at the value of its central pixel and treating the result as a binary number. The efficiency of the proposed speckle pattern is evaluated in six experimental scenarios. The experimental results indicate that DIC measurements based on the neighborhood binary speckle pattern provide reliable and accurate results, even when the local brightness and contrast of the deformed images have been seriously changed. It is expected that the new speckle pattern will have further potential value in engineering applications.
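The transform described above (thresholding a neighborhood at its central pixel value and reading the result as a binary number) can be sketched for a 3x3 neighborhood as follows; this illustrates the operation itself, not the authors' implementation.

    import numpy as np

    def neighborhood_binary_pattern(img):
        """Threshold the 8 neighbors of each interior pixel at the central
        value and pack the resulting bits into an 8-bit code."""
        img = img.astype(float)
        center = img[1:-1, 1:-1]
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
        code = np.zeros(center.shape, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(offsets):
            nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
            code |= (nb >= center).astype(np.uint8) << bit
        return code

The resulting code is insensitive to any illumination change that preserves the local intensity ordering, which is why such patterns resist changes in brightness and contrast.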
Sondak, D.; Shadid, J. N.; Oberai, A. A.; ...
2015-04-29
New large eddy simulation (LES) turbulence models for incompressible magnetohydrodynamics (MHD) derived from the variational multiscale (VMS) formulation for finite element simulations are introduced. The new models include the variational multiscale formulation, a residual-based eddy viscosity model, and a mixed model that combines both of these component models. Each model contains terms that are proportional to the residual of the incompressible MHD equations and is therefore numerically consistent. Moreover, each model is also dynamic, in that its effect vanishes when this residual is small. The new models are tested on the decaying MHD Taylor-Green vortex at low and high Reynolds numbers. The evaluation of the models is based on comparisons with available data from direct numerical simulations (DNS) of the time evolution of energies as well as energy spectra at various discrete times. Thus a numerical study, on a sequence of meshes, is presented that demonstrates that the large eddy simulation approaches the DNS solution for these quantities with spatial mesh refinement.
The Use of Learning Study in Designing Examples for Teaching Physics
NASA Astrophysics Data System (ADS)
Guo, Jian-Peng; Yang, Ling-Yan; Ding, Yi
2017-07-01
Researchers have consistently demonstrated that studying multiple examples is more effective than studying one example because comparing multiple examples can promote schema construction and facilitate discernment of critical aspects. Teachers, however, are usually absent from those self-led text-based studies. In this experimental study, a learning study approach based on variation theory was adopted to examine the effectiveness of teachers' different ways of designing multiple examples in helping students learn a physics principle. Three hundred and fifty-one tenth-grade students learned to distinguish action-reaction from equilibrium (a) by comparing examples that varied critical aspects first separately and then simultaneously, or (b) by comparing examples that separately varied critical aspects only. Results showed that students with average academic attainment benefited more from comparing examples in the first condition. Students with higher academic attainment learned equally within both conditions. This finding supports the advantage of simultaneous variation. The characteristics of students and instructional support should be taken into account when considering the effectiveness of patterns of variation.
Schmieder, Daniela A.; Benítez, Hugo A.; Borissov, Ivailo M.; Fruciano, Carmelo
2015-01-01
External morphology is commonly used to identify bats as well as to investigate flight and foraging behavior, typically relying on simple length and area measures or ratios. However, geometric morphometrics is increasingly used in the biological sciences to analyse variation in shape and discriminate among species and populations. Here we compare the ability of traditional versus geometric morphometric methods in discriminating between closely related bat species – in this case European horseshoe bats (Rhinolophidae, Chiroptera) – based on morphology of the wing, body and tail. In addition to comparing morphometric methods, we used geometric morphometrics to detect interspecies differences as shape changes. Geometric morphometrics yielded improved species discrimination relative to traditional methods. The predicted shape for the variation along the between group principal components revealed that the largest differences between species lay in the extent to which the wing reaches in the direction of the head. This strong trend in interspecific shape variation is associated with size, which we interpret as an evolutionary allometry pattern. PMID:25965335
Fajardo, Alex; Piper, Frida I
2011-01-01
• The focus of the trait-based approach to study community ecology has mostly been on trait comparisons at the interspecific level. Here we quantified intraspecific variation and covariation of leaf mass per area (LMA) and wood density (WD) in monospecific forests of the widespread tree species Nothofagus pumilio to determine its magnitude and whether it is related to environmental conditions and ontogeny. We also discuss probable mechanisms controlling the trait variation found. • We collected leaf and stem woody tissues from 30-50 trees of different ages (ontogeny) from each of four populations at differing elevations (i.e. temperatures), placed at each of three locations differing in soil moisture. • The total variation in LMA (coefficient of variation (CV) = 21.14%) was twice that of WD (CV = 10.52%). The total variation in traits was never less than 23% when compared with interspecific studies. Differences in elevation (temperature) for the most part explained variation in LMA, while differences in soil moisture and ontogeny explained the variation in WD. Traits covaried similarly in the altitudinal gradient only. • Functional traits of N. pumilio exhibited nonnegligible variation; LMA varied for the most part with temperature, while WD mostly varied with moisture and ontogeny. We demonstrate that environmental variation can cause important trait variation without species turnover. © The Authors (2010). Journal compilation © New Phytologist Trust (2010).
How many landmarks are enough to characterize shape and size variation?
Watanabe, Akinobu
2018-01-01
Accurate characterization of morphological variation is crucial for generating reliable results and conclusions concerning changes and differences in form. Despite the prevalence of landmark-based geometric morphometric (GM) data in the scientific literature, a formal treatment of whether sampled landmarks adequately capture shape variation has remained elusive. Here, I introduce LaSEC (Landmark Sampling Evaluation Curve), a computational tool to assess the fidelity of morphological characterization by landmarks. This task is achieved by calculating how subsampled data converge to the pattern of shape variation in the full dataset as landmark sampling is increased incrementally. While the number of landmarks needed to adequately characterize shape variation depends on the individual dataset, LaSEC helps the user (1) identify under- and oversampling of landmarks; (2) assess the robustness of morphological characterization; and (3) determine the number of landmarks that can be removed without compromising shape information. In practice, this knowledge could reduce the time and cost associated with data collection, maintain statistical power in certain analyses, and enable the incorporation of incomplete, but important, specimens into the dataset. Results based on simulated shape data also reveal general properties of landmark data, including statistical consistency, whereby sampling additional landmarks tends to asymptotically improve the accuracy of morphological characterization. As landmark-based GM data become more widely adopted, LaSEC provides a systematic approach to evaluating and refining the collection of shape data, a goal paramount for the accumulation and analysis of accurate morphological information.
Spatiotemporal Interpolation for Environmental Modelling
Susanto, Ferry; de Souza, Paulo; He, Jing
2016-01-01
A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently of the spatial dimensions, is proposed in this paper. We reviewed and compared three widely used spatial interpolation techniques: ordinary kriging, inverse distance weighting and the triangular irregular network. We also propose a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of data from Tasmania's South Esk hydrology model developed by CSIRO. Root mean squared error statistics were used for performance evaluation. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications. PMID:27509497
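The inverse distance weighting referred to throughout estimates each query point as a weighted average of known values, with weights proportional to an inverse power of distance. A compact version:

    import numpy as np

    def idw(known_xy, known_values, query_xy, power=2.0, eps=1e-12):
        """w_i = 1 / d_i**power; estimate = sum(w_i * v_i) / sum(w_i).
        eps guards against division by zero at exact hits."""
        d = np.linalg.norm(known_xy[None, :, :] - query_xy[:, None, :], axis=2)
        w = 1.0 / np.maximum(d, eps) ** power
        return (w @ known_values) / w.sum(axis=1)

In the reduction approach, such a spatial interpolator is applied at each time step, with the temporal dimension handled separately.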
NASA Astrophysics Data System (ADS)
Kozhikkottu, Vivek J.
The scaling of integrated circuits into the nanometer regime has led to variations emerging as a primary concern for designers of integrated circuits. Variations are an inevitable consequence of the semiconductor manufacturing process, and also arise due to the side-effects of operation of integrated circuits (voltage, temperature, and aging). Conventional design approaches, which are based on design corners or worst-case scenarios, leave designers with an undesirable choice between the considerable overheads associated with over-design and significantly reduced manufacturing yield. Techniques for variation-tolerant design at the logic, circuit and layout levels of the design process have been developed and are in commercial use. However, with the incessant increase in variations due to technology scaling and design trends such as near-threshold computing, these techniques are no longer sufficient to contain the effects of variations, and there is a need to address variations at all stages of design. This thesis addresses the problem of variation-tolerant design at the earliest stages of the design process, where the system-level design decisions that are made can have a very significant impact. There are two key aspects to making system-level design variation-aware. First, analysis techniques must be developed to project the impact of variations on system-level metrics such as application performance and energy. Second, variation-tolerant design techniques need to be developed to absorb the residual impact of variations (that cannot be contained through lower-level techniques). In this thesis, we address both these facets by developing robust and scalable variation-aware analysis and variation mitigation techniques at the system level. The first contribution of this thesis is a variation-aware system-level performance analysis framework. We address the key challenge of translating the per-component clock frequency distributions into a system-level application performance distribution. This task is particularly complex and challenging due to the inter-dependencies between components' execution, indirect effects of shared resources, and interactions between multiple system-level "execution paths". We argue that accurate variation-aware performance analysis requires Monte-Carlo based repeated system execution. Our proposed analysis framework leverages emulation to significantly speed up performance analysis without sacrificing the generality and accuracy achieved by Monte-Carlo based simulations. Our experiments show speedups of around 60x compared to state-of-the-art hardware-software co-simulation tools and also underscore the framework's potential to enable variation-aware design and exploration at the system level. Our second contribution addresses the problem of designing variation-tolerant SoCs using recovery based design, a popular circuit design paradigm that addresses variations by eliminating guard-bands and operating circuits at close to "zero margins" while detecting and recovering from timing errors. While previous efforts have demonstrated the potential benefits of recovery based design, we identify several challenges that need to be addressed in order to apply this technique to SoCs. We present a systematic design framework to apply recovery based design at the system level. We propose to partition SoCs into "recovery islands", wherein each recovery island consists of one or more SoC components that can recover independently of the rest of the SoC.
We present a variation-aware design methodology that partitions a given SoC into recovery islands and computes the optimal operating points for each island, taking into account the various trade-offs involved. Our experiments demonstrate that the proposed design framework achieves an average of 32% energy savings over conventional worst-case designs, with negligible losses in performance. The third contribution of this thesis introduces disproportionate allocation of shared system resources as a means to combat the adverse impact of within-die variations on multi-core platforms. For multi-threaded programs executing on variation-impacted multi-core platforms, we make the key observation that thread performance is not only a function of the frequency of the core on which it is executing, but also depends upon the amount of shared system resources allocated to it. We utilize this insight to design a variation-aware runtime scheme which allocates the ways of a last-level shared L2 cache amongst the different cores/threads of a multi-core platform, taking into account both application characteristics as well as chip-specific variation profiles. Our experiments on 100 quad-core chips, each with a distinct variation profile, show an average 15% performance improvement for a suite of multi-threaded benchmarks. Our final contribution investigates the variation-tolerant design of domain-specific accelerators and demonstrates how the unique architectural properties of these accelerators can be leveraged to create highly effective variation tolerance mechanisms. We explore this concept through the variation-tolerant design of a vector processor that efficiently executes applications from the domains of recognition, mining and synthesis (RMS). We develop a novel design approach for variation tolerance, which leverages the unique nature of the vector reduction operations performed by this processor to effectively predict and preempt the occurrence of timing errors under variations and subsequently restore the correct output at the end of each vector reduction operation. We implement the above predict, preempt and restore operations by suitably enhancing the processor hardware and the application software, and demonstrate considerable energy benefits (32% on average) across six applications from the domains of RMS. In conclusion, our work provides system designers with powerful tools and mechanisms in their efforts to combat variations, resulting in improved designer productivity and variation-tolerant systems.
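As a rough sketch of the Monte Carlo-style analysis this thesis argues for (the component names, cycle counts, and frequency distributions below are hypothetical, and a toy latency model stands in for the thesis's emulation-based system execution):

```python
import numpy as np

rng = np.random.default_rng(0)
N_CHIPS = 10_000  # Monte Carlo samples, one per fabricated-chip instance

# Hypothetical per-component clock frequencies (GHz) under process variation
f_cpu = rng.normal(2.0, 0.15, N_CHIPS)
f_dsp = rng.normal(1.5, 0.20, N_CHIPS)

# Toy application model: two dependent phases plus a shared-resource penalty
# paid at the rate of the slower component
CYCLES_CPU, CYCLES_DSP, BUS_CYCLES = 4e9, 2e9, 1e8
runtime = (CYCLES_CPU / f_cpu + CYCLES_DSP / f_dsp
           + BUS_CYCLES / np.minimum(f_cpu, f_dsp)) / 1e9  # seconds

# The per-component frequency distributions have become an application-level
# performance distribution, which is what system-level design decisions need
print(f"mean runtime {runtime.mean():.2f} s, 95th percentile {np.quantile(runtime, 0.95):.2f} s")
```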
NASA Technical Reports Server (NTRS)
Miles, J. H.
1981-01-01
A predicted standing wave pressure and phase angle profile for a hard wall rectangular duct with a region of converging-diverging area variation is compared to published experimental measurements in a study of sound propagation without flow. The factor-of-1/2 area variation used is of sufficient magnitude to produce large reflections. The prediction is based on a transmission matrix approach developed for the analysis of sound propagation in a variable area duct with and without flow. The agreement between the measured and predicted results is shown to be excellent.
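To illustrate the transmission matrix idea, the sketch below approximates a converging-diverging hard-wall duct by piecewise-constant plane-wave segments and cascades their transfer matrices to obtain a standing-wave pressure profile; the geometry, frequency, and rigid termination are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

rho, c, f = 1.21, 343.0, 500.0               # air density, sound speed, frequency (assumed)
k = 2.0 * np.pi * f / c                      # free-space wavenumber
N = 400                                      # number of piecewise-constant segments
x = np.linspace(0.0, 1.0, N + 1)             # duct axis, m
S = 0.01 * (1.0 - 0.5 * np.exp(-((x - 0.5) / 0.1) ** 2))  # ~factor-of-1/2 throat

def segment_matrix(kL, Z):
    """Plane-wave transfer matrix of a uniform segment, acting on (pressure, volume velocity)."""
    return np.array([[np.cos(kL), 1j * Z * np.sin(kL)],
                     [1j * np.sin(kL) / Z, np.cos(kL)]])

# March from a rigid termination (zero volume velocity) back toward the source
pU = np.array([1.0 + 0j, 0.0 + 0j])
profile = [pU[0]]
for i in range(N - 1, -1, -1):
    pU = segment_matrix(k * (x[i + 1] - x[i]), rho * c / S[i]) @ pU
    profile.append(pU[0])
pressure = np.array(profile[::-1])
amplitude, phase = np.abs(pressure), np.angle(pressure)  # standing-wave profiles
```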
Single-cell copy number variation detection
2011-01-01
Detection of chromosomal aberrations from a single cell by array comparative genomic hybridization (single-cell array CGH), instead of from a population of cells, is an emerging technique. However, such detection is challenging because of the genome artifacts and the DNA amplification process inherent to the single cell approach. Current normalization algorithms result in inaccurate aberration detection for single-cell data. We propose a normalization method based on channel, genome composition and recurrent genome artifact corrections. We demonstrate that the proposed channel clone normalization significantly improves the copy number variation detection in both simulated and real single-cell array CGH data. PMID:21854607
NASA Technical Reports Server (NTRS)
Achtemeier, Gary L.; Scott, Robert W.; Chen, J.
1991-01-01
A summary is presented of the progress toward the completion of a comprehensive diagnostic objective analysis system based upon the calculus of variations. The approach was to first develop the objective analysis subject to the constraints that the final product satisfies the five basic primitive equations for a dry inviscid atmosphere: the two nonlinear horizontal momentum equations, the continuity equation, the hydrostatic equation, and the thermodynamic equation. Then, having derived the basic model, the equations for moist atmospheric processes and the radiative transfer equation would be added to it.
NASA Astrophysics Data System (ADS)
Ezz-Eldien, S. S.; Doha, E. H.; Bhrawy, A. H.; El-Kalaawy, A. A.; Machado, J. A. T.
2018-04-01
In this paper, we propose a new accurate and robust numerical technique to approximate the solutions of fractional variational problems (FVPs) depending on indefinite integrals with a type of fixed Riemann-Liouville fractional integral. The proposed technique is based on the shifted Chebyshev polynomials as basis functions for the fractional integral operational matrix (FIOM). Together with the Lagrange multiplier method, these problems are then reduced to a system of algebraic equations, which greatly simplifies the solution process. Numerical examples are carried out to confirm the accuracy, efficiency and applicability of the proposed algorithm.
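For context, the (left) Riemann-Liouville fractional integral underlying such FVPs is standardly defined as below; the paper's exact formulation (limits, order, and the fixed-integral variant) may differ from this generic form:

```latex
% Left Riemann-Liouville fractional integral of order \alpha > 0
I_{a}^{\alpha} f(t) \;=\; \frac{1}{\Gamma(\alpha)} \int_{a}^{t} (t-s)^{\alpha-1} f(s)\, ds
```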
Effect of costing methods on unit cost of hospital medical services.
Riewpaiboon, Arthorn; Malaroje, Saranya; Kongsawatt, Sukalaya
2007-04-01
To explore the variance of unit costs of hospital medical services due to different costing methods employed in the analysis. Retrospective and descriptive study at Kaengkhoi District Hospital, Saraburi Province, Thailand, in the fiscal year 2002. The process started with a calculation of unit costs of medical services as a base case. After that, the unit costs were re-calculated based on various methods. Finally, the variations of the results obtained from the various methods and the base case were computed and compared. The total annualized capital cost of buildings and capital items calculated by the accounting-based approach (averaging the capital purchase prices throughout their useful life) was 13.02% lower than that calculated by the economic-based approach (combination of depreciation cost and interest on the undepreciated portion over the useful life). A change of discount rate from 3% to 6% resulted in a 4.76% increase of the hospital's total annualized capital cost. When the useful life of durable goods was changed from 5 to 10 years, the total annualized capital cost of the hospital decreased by 17.28% from that of the base case. Regarding alternative criteria of indirect cost allocation, the unit cost of medical services changed by a range of -6.99% to +4.05%. We also explored the effect on the unit cost of medical services in one department: across the various costing methods, including departmental allocation methods, unit costs ranged between -85% and +32% relative to the base case. Based on the variation analysis, the economic-based approach was suitable for capital cost calculation. For the useful life of capital items, an appropriate duration should be studied and standardized. Regarding allocation criteria, single-output criteria might be more efficient than combined-output and complicated ones. For the departmental allocation methods, the micro-costing method was the most suitable method at the time of the study. These different costing methods should be standardized and developed as guidelines, since they could affect implementation of the national health insurance scheme and health financing management.
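The gap between the two capital-costing approaches can be made concrete with a small sketch; the price, rate, and lifetime below are illustrative, not the hospital's data, and the economic approach is written as the standard capital recovery factor (equivalent annual cost):

```python
def accounting_annualized(price, useful_life):
    """Accounting-based approach: purchase price averaged over the useful life."""
    return price / useful_life

def economic_annualized(price, useful_life, discount_rate):
    """Economic-based approach: depreciation plus interest on the undepreciated
    balance, i.e. price times the capital recovery factor r / (1 - (1+r)^-n)."""
    r, n = discount_rate, useful_life
    return price * r / (1.0 - (1.0 + r) ** -n)

price, n = 100_000, 5
print(accounting_annualized(price, n))      # 20000.0
print(economic_annualized(price, n, 0.03))  # ~21835.5 -> higher, matching the paper's direction
```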
A variational approach to niche construction.
Constant, Axel; Ramstead, Maxwell J D; Veissière, Samuel P L; Campbell, John O; Friston, Karl J
2018-04-01
In evolutionary biology, niche construction is sometimes described as a genuine evolutionary process whereby organisms, through their activities and regulatory mechanisms, modify their environment such as to steer their own evolutionary trajectory, and that of other species. There is ongoing debate, however, on the extent to which niche construction ought to be considered a bona fide evolutionary force, on a par with natural selection. Recent formulations of the variational free-energy principle as applied to the life sciences describe the properties of living systems, and their selection in evolution, in terms of variational inference. We argue that niche construction can be described using a variational approach. We propose new arguments to support the niche construction perspective, and to extend the variational approach to niche construction to current perspectives in various scientific fields. © 2018 The Authors.
An empirical approach to modeling methylmercury concentrations in an Adirondack stream watershed
Burns, Douglas A.; Nystrom, Elizabeth A.; Wolock, David M.; Bradley, Paul M.; Riva-Murray, Karen
2014-01-01
Inverse empirical models can inform and improve more complex process-based models by quantifying the principal factors that control water quality variation. Here we developed a multiple regression model that explains 81% of the variation in filtered methylmercury (FMeHg) concentrations in Fishing Brook, a fourth-order stream in the Adirondack Mountains, New York, a known "hot spot" of Hg bioaccumulation. This model builds on previous observations that wetland-dominated riparian areas are the principal source of MeHg to this stream, and is based on 43 samples collected during a 33-month period in 2007-2009. Explanatory variables include those that represent the effects of water temperature, streamflow, and modeled riparian water table depth on seasonal and annual patterns of FMeHg concentrations. An additional variable represents the effects of an upstream pond on decreasing FMeHg concentrations. Model results suggest that temperature-driven effects on net Hg methylation rates are the principal control on annual FMeHg concentration patterns. Additionally, streamflow dilutes FMeHg concentrations during the cold dormant season. The model further indicates that depth and persistence of the riparian water table as simulated by TOPMODEL are dominant controls on FMeHg concentration patterns during the warm growing season, especially evident when concentrations during the dry summer of 2007 were less than half of those in the wetter summers of 2008 and 2009. This modeling approach may help identify the principal factors that control variation in surface water FMeHg concentrations in other settings, which can guide the appropriate application of process-based models.
Physician professionalism and accountability: the role of collaborative improvement networks.
Miles, Paul V; Conway, Patrick H; Pawlson, L Gregory
2013-06-01
The medical profession is facing an imperative to deliver more patient-centered care, improve quality, and reduce unnecessary costs and waste. With significant unexplained variation in resource use and outcomes, even physicians and health care organizations with "the best" reputations cannot assume they always deliver the best care possible. Going forward, physicians will need to demonstrate professionalism and accountability in a different way: to their peers, to society in general, and to individual patients. The new accountability includes quality and clinical outcomes but also resource utilization, appropriateness and patient-centeredness of recommended care, and the responsibility to help improve systems of care. The pediatric collaborative improvement network model represents an important framework for helping transform health care. For individual physicians, participation in a multisite network offers the opportunity to demonstrate accountability by measuring and improving care as part of an approach that addresses the problems of small sample size, attribution, and unnecessary variation in care by pooling patients from individual practices and requiring standardization of care to participate. For patients and families, the model helps ensure that they are likely to receive the current best evidence-based recommendation. Finally, this model aligns with payers' goals of purchasing value-based care, rewarding quality and improvement, and reducing unnecessary variation around current best evidence-based, effective, and efficient care. In addition, within the profession, the American Board of Pediatrics recognizes participation in a multisite quality improvement network as one of the most rigorous and meaningful approaches for a diplomate to meet practice performance maintenance of certification requirements.
NASA Astrophysics Data System (ADS)
Thomas, Yoann; Mazurié, Joseph; Alunno-Bruscia, Marianne; Bacher, Cédric; Bouget, Jean-François; Gohin, Francis; Pouvreau, Stéphane; Struski, Caroline
2011-11-01
In order to assess the potential of various marine ecosystems for shellfish aquaculture and to evaluate their carrying capacities, there is a need to clarify the response of exploited species to environmental variations using robust ecophysiological models and available environmental data. For a large range of applications and comparison purposes, a non-specific approach based on 'generic' individual growth models offers many advantages. In this context, we simulated the response of blue mussel (Mytilus edulis L.) to the spatio-temporal fluctuations of the environment in Mont Saint-Michel Bay (North Brittany) by forcing a generic growth model based on Dynamic Energy Budgets with satellite-derived environmental data (i.e. temperature and food). After a calibration step based on data from mussel growth surveys, the model was applied over nine years on a large area covering the entire bay. These simulations provide an evaluation of the spatio-temporal variability in mussel growth and also show the ability of the DEB model to integrate satellite-derived data and to predict spatial and temporal growth variability of mussels. Observed seasonal, inter-annual and spatial growth variations are well simulated. The large-scale application highlights the strong link between food and mussel growth. The methodology described in this study may be considered as a suitable approach to account for environmental effects (food and temperature variations) on physiological responses (growth and reproduction) of filter feeders in varying environments. Such physiological responses may then be useful for evaluating the suitability of coastal ecosystems for shellfish aquaculture.
Genomic Selection in Multi-environment Crop Trials.
Oakey, Helena; Cullis, Brian; Thompson, Robin; Comadran, Jordi; Halpin, Claire; Waugh, Robbie
2016-05-03
Genomic selection in crop breeding introduces modeling challenges not found in animal studies. These include the need to accommodate replicate plants for each line, consider spatial variation in field trials, address line by environment interactions, and capture nonadditive effects. Here, we propose a flexible single-stage genomic selection approach that resolves these issues. Our linear mixed model incorporates spatial variation through environment-specific terms, and also randomization-based design terms. It considers marker and marker-by-environment interactions, using ridge regression best linear unbiased prediction to extend genomic selection to multiple environments. Since the approach uses the raw data from line replicates, the line genetic variation is partitioned into marker and nonmarker residual genetic variation (i.e., additive and nonadditive effects). This results in a more precise estimate of marker genetic effects. Using barley height data from trials of up to 477 cultivars in two different years, we demonstrate that our new genomic selection model improves predictions compared to current models. Analyzing single trials revealed improvements in predictive ability of up to 5.7%. For the multiple environment trial (MET) model, combining both year trials improved predictive ability up to 11.4% compared to a single environment analysis. Benefits were significant even when fewer markers were used. Compared to a single-year standard model run with 3490 markers, our partitioned MET model achieved the same predictive ability using between 500 and 1000 markers, depending on the trial. Our approach can be used to increase accuracy and confidence in the selection of the best lines for breeding and/or to reduce costs by using fewer markers. Copyright © 2016 Oakey et al.
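A minimal sketch of the ridge regression BLUP step at the core of such models (simulated data; it omits the paper's spatial, design, and marker-by-environment terms):

```python
import numpy as np

def rrblup(Z, y, lam):
    """Ridge-regression BLUP of marker effects: solve (Z'Z + lam*I) u = Z'y."""
    return np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)

rng = np.random.default_rng(0)
Z = rng.integers(0, 3, size=(200, 500)).astype(float)  # marker scores, 200 lines
u_true = rng.normal(0.0, 0.1, 500)                     # simulated marker effects
y = Z @ u_true + rng.normal(0.0, 1.0, 200)             # phenotypes
gebv = Z @ rrblup(Z, y, lam=50.0)                      # genomic estimated breeding values
```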
NASA Astrophysics Data System (ADS)
Fillion, Anthony; Bocquet, Marc; Gratton, Serge
2018-04-01
The analysis in nonlinear variational data assimilation is the solution of a non-quadratic minimization. Thus, the analysis efficiency relies on its ability to locate a global minimum of the cost function. If this minimization uses a Gauss-Newton (GN) method, it is critical for the starting point to be in the attraction basin of a global minimum. Otherwise the method may converge to a local extremum, which degrades the analysis. With chaotic models, the number of local extrema often increases with the temporal extent of the data assimilation window, making the former condition harder to satisfy. This is unfortunate because the assimilation performance also increases with this temporal extent. However, a quasi-static (QS) minimization may overcome these local extrema. It accomplishes this by gradually injecting the observations in the cost function. This method was introduced by Pires et al. (1996) in a 4D-Var context. We generalize this approach to four-dimensional strong-constraint nonlinear ensemble variational (EnVar) methods, which are based on both a nonlinear variational analysis and the propagation of dynamical error statistics via an ensemble. This forces one to consider the cost function minimizations in the broader context of cycled data assimilation algorithms. We adapt this QS approach to the iterative ensemble Kalman smoother (IEnKS), an exemplar of nonlinear deterministic four-dimensional EnVar methods. Using low-order models, we quantify the positive impact of the QS approach on the IEnKS, especially for long data assimilation windows. We also examine the computational cost of QS implementations and suggest cheaper algorithms.
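The quasi-static idea can be illustrated with a toy strong-constraint problem, injecting observations one at a time and warm-starting each solve; the logistic map, error variances, and window length are arbitrary stand-ins for a chaotic forecast model:

```python
import numpy as np
from scipy.optimize import least_squares

def model_step(x):
    # chaotic logistic map standing in for the forecast model
    return 3.9 * x * (1.0 - x)

def residuals(x0, xb, obs, n_obs):
    # weighted background and observation misfits of a strong-constraint cost function
    r = [(x0[0] - xb) / 0.1]
    x = x0[0]
    for k in range(n_obs):
        x = model_step(x)
        r.append((x - obs[k]) / 0.01)
    return np.array(r)

rng = np.random.default_rng(1)
truth0, xb = 0.62, 0.55
x, obs = truth0, []
for _ in range(8):
    x = model_step(x)
    obs.append(x + rng.normal(0.0, 0.01))

guess = np.array([xb])
for n in range(1, len(obs) + 1):   # quasi-static: lengthen the window gradually
    guess = least_squares(residuals, guess, args=(xb, obs, n), method='lm').x
print(guess[0])                    # should track truth0 despite the chaotic dynamics
```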
Windhorst, Dafna A; Mileva-Seitz, Viara R; Rippe, Ralph C A; Tiemeier, Henning; Jaddoe, Vincent W V; Verhulst, Frank C; van IJzendoorn, Marinus H; Bakermans-Kranenburg, Marian J
2016-08-01
In a longitudinal cohort study, we investigated the interplay of harsh parenting and genetic variation across a set of functionally related dopamine genes, in association with children's externalizing behavior. This is one of the first studies to employ gene-based and gene-set approaches in tests of Gene by Environment (G × E) effects on complex behavior. This approach can offer an important alternative or complement to candidate gene and genome-wide environmental interaction (GWEI) studies in the search for genetic variation underlying individual differences in behavior. Genetic variants in 12 autosomal dopaminergic genes were available in an ethnically homogenous part of a population-based cohort. Harsh parenting was assessed with maternal (n = 1881) and paternal (n = 1710) reports at age 3. Externalizing behavior was assessed with the Child Behavior Checklist (CBCL) at age 5 (71 ± 3.7 months). We conducted gene-set analyses of the association between variation in dopaminergic genes and externalizing behavior, stratified for harsh parenting. The association was statistically significant or approached significance for children without harsh parenting experiences, but was absent in the group with harsh parenting. Similarly, significant associations between single genes and externalizing behavior were only found in the group without harsh parenting. Effect sizes in the groups with and without harsh parenting did not differ significantly. Gene-environment interaction tests were conducted for individual genetic variants, resulting in two significant interaction effects (rs1497023 and rs4922132) after correction for multiple testing. Our findings are suggestive of G × E interplay, with associations between dopamine genes and externalizing behavior present in children without harsh parenting, but not in children with harsh parenting experiences. Harsh parenting may overrule the role of genetic factors in externalizing behavior. Gene-based and gene-set analyses offer promising new alternatives to analyses focusing on single candidate polymorphisms when examining the interplay between genetic and environmental factors.
Colloquium: Search for a drifting proton-electron mass ratio from H2
NASA Astrophysics Data System (ADS)
Ubachs, W.; Bagdonaite, J.; Salumbides, E. J.; Murphy, M. T.; Kaper, L.
2016-04-01
An overview is presented of the H2 quasar absorption method to search for a possible variation of the proton-electron mass ratio μ = m_p/m_e on a cosmological time scale. The method is based on a comparison between wavelengths of absorption lines in the H2 Lyman and Werner bands as observed at high redshift with wavelengths of the same lines measured at zero redshift in the laboratory. For such a comparison, sensitivity coefficients to a relative variation of μ are calculated for all individual lines and included in the fitting routine deriving a value for Δμ/μ. Details of the analysis of astronomical spectra, obtained with large 8-10 m class optical telescopes equipped with high-resolution echelle grating based spectrographs, are explained. The methods and results of the laboratory molecular spectroscopy of H2, in particular the laser-based metrology studies for the determination of rest wavelengths of the Lyman and Werner band absorption lines, are reviewed. Theoretical physics scenarios delivering a rationale for a varying μ are discussed briefly, as well as alternative spectroscopic approaches to probe variation of μ other than the H2 method. A recent approach to detect a dependence of the proton-to-electron mass ratio on environmental conditions, such as the presence of strong gravitational fields, is also highlighted. Currently some 56 H2 absorption systems are known and listed. Their usefulness for detecting μ variation is discussed, in terms of column densities and brightness of background quasar sources, along with future observational strategies. The astronomical observations of ten quasar systems analyzed so far set a constraint on a varying proton-electron mass ratio of |Δμ/μ| < 5 × 10⁻⁶ (3σ), which is a null result, holding for redshifts in the range z = 2.0-4.2. This corresponds to look-back times of (10-12.4) × 10⁹ years into cosmic history. Attempts to interpret the results from these ten H2 absorbers in terms of a spatial variation of μ are currently hampered by the small sample size and their coincidental distribution in a relatively narrow band across the sky.
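The comparison rests on the standard relation used in this literature, in which each H2 line i carries its own sensitivity coefficient K_i (notation as commonly written; the paper's exact conventions may differ):

```latex
\lambda_i^{\mathrm{obs}} \;=\; \lambda_i^{\mathrm{lab}}\,(1+z)\left(1 + K_i\,\frac{\Delta\mu}{\mu}\right)
```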
Knoll, Florian; Raya, José G; Halloran, Rafael O; Baete, Steven; Sigmund, Eric; Bammer, Roland; Block, Tobias; Otazo, Ricardo; Sodickson, Daniel K
2015-01-01
Radial spin echo diffusion imaging allows motion-robust imaging of tissues with very low T2 values like articular cartilage with high spatial resolution and signal-to-noise ratio (SNR). However, in vivo measurements are challenging due to the significantly slower data acquisition speed of spin-echo sequences and the less efficient k-space coverage of radial sampling, which raises the demand for accelerated protocols by means of undersampling. This work introduces a new reconstruction approach for undersampled DTI. A model-based reconstruction implicitly exploits redundancies in the diffusion weighted images by reducing the number of unknowns in the optimization problem and compressed sensing is performed directly in the target quantitative domain by imposing a Total Variation (TV) constraint on the elements of the diffusion tensor. Experiments were performed for an anisotropic phantom and the knee and brain of healthy volunteers (3 and 2 volunteers, respectively). Evaluation of the new approach was conducted by comparing the results to reconstructions performed with gridding, combined parallel imaging and compressed sensing, and a recently proposed model-based approach. The experiments demonstrated improvement in terms of reduction of noise and streaking artifacts in the quantitative parameter maps as well as a reduction of angular dispersion of the primary eigenvector when using the proposed method, without introducing systematic errors into the maps. This may enable an essential reduction of the acquisition time in radial spin echo diffusion tensor imaging without degrading parameter quantification and/or SNR. PMID:25594167
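As a hedged sketch of imposing a TV constraint directly in the quantitative (tensor-element) domain, here is a generic gradient-descent solver for a least-squares data term plus a smoothed TV penalty; the linear operator A is a placeholder for the paper's model-based radial sampling operator, not its actual implementation:

```python
import numpy as np

def tv_grad(u, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation penalty on a 2-D map."""
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
    px, py = ux / mag, uy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div  # negative divergence of the normalized gradient field

def reconstruct(A, y, shape, lam=0.05, step=1e-3, iters=500):
    """Gradient descent on ||A d - y||^2 / 2 + lam * TV(d), where d is a map of
    one diffusion-tensor element (A: placeholder forward/sampling operator)."""
    d = np.zeros(shape)
    for _ in range(iters):
        grad = (A.T @ (A @ d.ravel() - y)).reshape(shape) + lam * tv_grad(d)
        d -= step * grad
    return d
```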
Conomos, Matthew P; Miller, Michael B; Thornton, Timothy A
2015-05-01
Population structure inference with genetic data has been motivated by a variety of applications in population genetics and genetic association studies. Several approaches have been proposed for the identification of genetic ancestry differences in samples where study participants are assumed to be unrelated, including principal components analysis (PCA), multidimensional scaling (MDS), and model-based methods for proportional ancestry estimation. Many genetic studies, however, include individuals with some degree of relatedness, and existing methods for inferring genetic ancestry fail in related samples. We present a method, PC-AiR, for robust population structure inference in the presence of known or cryptic relatedness. PC-AiR utilizes genome-screen data and an efficient algorithm to identify a diverse subset of unrelated individuals that is representative of all ancestries in the sample. The PC-AiR method directly performs PCA on the identified ancestry representative subset and then predicts components of variation for all remaining individuals based on genetic similarities. In simulation studies and in applications to real data from Phase III of the HapMap Project, we demonstrate that PC-AiR provides a substantial improvement over existing approaches for population structure inference in related samples. We also demonstrate significant efficiency gains, where a single axis of variation from PC-AiR provides better prediction of ancestry in a variety of structure settings than using 10 (or more) components of variation from widely used PCA and MDS approaches. Finally, we illustrate that PC-AiR can provide improved population stratification correction over existing methods in genetic association studies with population structure and relatedness. © 2015 WILEY PERIODICALS, INC.
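The core PC-AiR idea (learn the axes of variation from an unrelated, ancestry-representative subset, then project everyone else) can be sketched as follows; the genotype simulation and subset choice are placeholders, and the real method's kinship-based subset selection is omitted:

```python
import numpy as np

def pcair_sketch(G, unrelated_idx, n_pc=10):
    """Toy version of PC-AiR: run PCA on unrelated samples only, then predict
    components of variation for all samples (including relatives) from the SNP
    loadings. G is an (n_samples, n_snps) standardized genotype matrix."""
    _, _, Vt = np.linalg.svd(G[unrelated_idx], full_matrices=False)
    loadings = Vt[:n_pc].T           # SNP weights learned free of relatedness
    return G @ loadings              # projected PCs for every individual

rng = np.random.default_rng(1)
G = rng.normal(size=(100, 1000))     # stand-in genotypes
pcs = pcair_sketch(G, unrelated_idx=np.arange(60))
```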
NASA Astrophysics Data System (ADS)
Vasco, D. W.
2018-04-01
Following an approach used in quantum dynamics, an exponential representation of the hydraulic head transforms the diffusion equation governing pressure propagation into an equivalent set of ordinary differential equations. Using a reservoir simulator to determine one set of dependent variables leaves a reduced set of equations for the path of a pressure transient. Unlike the current approach for computing the path of a transient, based on a high-frequency asymptotic solution, the trajectories resulting from this new formulation are valid for arbitrary spatial variations in aquifer properties. For a medium containing interfaces and layers with sharp boundaries, the trajectory mechanics approach produces paths that are compatible with travel time fields produced by a numerical simulator, while the asymptotic solution produces paths that bend too strongly into high permeability regions. The breakdown of the conventional asymptotic solution, due to the presence of sharp boundaries, has implications for model parameter sensitivity calculations and the solution of the inverse problem. For example, near an abrupt boundary, trajectories based on the asymptotic approach deviate significantly from regions of high sensitivity observed in numerical computations. In contrast, paths based on the new trajectory mechanics approach coincide with regions of maximum sensitivity to permeability changes.
NASA Astrophysics Data System (ADS)
Susanti, Ana; Suhartono; Jati Setyadi, Hario; Taruk, Medi; Haviluddin; Pamilih Widagdo, Putut
2018-03-01
Money currency availability in Bank Indonesia can be examined through the inflow and outflow of money currency. The objective of this research is to forecast the inflow and outflow of money currency in each Representative Office (RO) of BI in East Java by using a hybrid exponential smoothing model based on the state space approach and a calendar variation model. The hybrid model is expected to generate more accurate forecasts. Two studies are discussed in this research. The first is a study of the hybrid model on simulation data containing patterns of trend, seasonality, and calendar variation. The second is a study applying the hybrid model to forecast the inflow and outflow of money currency in each RO of BI in East Java. The first set of results indicates that the exponential smoothing model cannot capture the calendar variation pattern, yielding RMSE values 10 times the standard deviation of the errors. The second set of results indicates that the hybrid model can capture the patterns of trend, seasonality, and calendar variation, yielding RMSE values approaching the standard deviation of the errors. In the applied study, the hybrid model gives more accurate forecasts for five variables: the inflow of money currency in Surabaya, Malang, and Jember, and the outflow of money currency in Surabaya and Kediri. Otherwise, the time series regression model yields better forecasts for three variables: the outflow of money currency in Malang and Jember, and the inflow of money currency in Kediri.
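One plausible reading of such a hybrid (the function and variable names are hypothetical, and the authors' exact decomposition may differ) regresses out a calendar-variation dummy and fits a state-space exponential smoothing model to the remainder:

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def hybrid_forecast(y, holiday_dummy, future_dummy):
    """Regress out a calendar-variation dummy (e.g. an Eid-month indicator),
    then model the remainder with exponential smoothing."""
    X = np.column_stack([np.ones(len(y)), holiday_dummy])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # calendar component
    remainder = y - X @ beta
    ets = ExponentialSmoothing(remainder, trend="add",
                               seasonal="add", seasonal_periods=12).fit()
    horizon = len(future_dummy)
    Xf = np.column_stack([np.ones(horizon), future_dummy])
    return ets.forecast(horizon) + Xf @ beta           # recombine both components
```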
Estimating Travel Time in Bank Filtration Systems from a Numerical Model Based on DTS Measurements.
des Tombe, Bas F; Bakker, Mark; Schaars, Frans; van der Made, Kees-Jan
2018-03-01
An approach is presented to determine the seasonal variations in travel time in a bank filtration system using a passive heat tracer test. The temperature in the aquifer varies seasonally because of temperature variations of the infiltrating surface water and at the soil surface. Temperature was measured with distributed temperature sensing along fiber optic cables that were inserted vertically into the aquifer with direct push equipment. The approach was applied to a bank filtration system consisting of a sequence of alternating, elongated recharge basins and rows of recovery wells. A SEAWAT model was developed to simulate coupled flow and heat transport. The model of a two-dimensional vertical cross section is able to simulate the temperature of the water at the well and the measured vertical temperature profiles reasonably well. MODPATH was used to compute flowpaths and the travel time distribution. At the study site, temporal variation of the pumping discharge was the dominant factor influencing the travel time distribution. For an equivalent system with a constant pumping rate, variations in the travel time distribution are caused by variations in the temperature-dependent viscosity. As a result, travel times increase in the winter, when a larger fraction of the water travels through the warmer, lower part of the aquifer, and decrease in the summer, when the upper part of the aquifer is warmer. © 2017 The Authors. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.
Yellowstone bison genetics: let us move forward
Halbert, Natalie D.; Gogan, Peter J.P.; Hedrick, Philip W.; Wahl, Jacquelyn M.; Derr, James N.
2012-01-01
White and Wallen (2012) disagree with the conclusions and suggestions made in our recent assessment of population structure among Yellowstone National Park (YNP) bison based on 46 autosomal microsatellite loci in 661 animals (Halbert et al. 2012). First, they suggest that "the existing genetic substructure (that we observed) was artificially created." Specifically, they suggest that the substructure observed between the northern and central populations is the result of human activities, both historical and recent. In fact, the genetic composition of all known existing bison herds was created by, or has been influenced by, anthropogenic activities, although this obviously does not reduce the value of these herds for genetic conservation (Dratch and Gogan 2010). For perspective, many, if not most, species of conservation concern have been influenced by human actions and as a result currently exist as isolated populations. However, it is quite difficult to distinguish between genetic differences caused by human actions and important ancestral variation contained in separate populations without data from early time periods. Therefore, to avoid losing genetic variation that may be significant, the generally accepted management approach is to attempt to retain this variation based on the observed population genetic subdivision (Hedrick et al. 1986).
ERIC Educational Resources Information Center
Fraser, Duncan; Linder, Cedric
2009-01-01
Contemporary learning research and development that is embedded in primary and secondary schooling is increasingly acknowledging the significance of a variation approach for enhancing the possibility of learning. However, the variation approach has so far attracted very little attention in higher education, but where it has, the results have been…
Combining Formal and Functional Approaches to Topic Structure
ERIC Educational Resources Information Center
Zellers, Margaret; Post, Brechtje
2012-01-01
Fragmentation between formal and functional approaches to prosodic variation is an ongoing problem in linguistic research. In particular, the frameworks of the Phonetics of Talk-in-Interaction (PTI) and Empirical Phonology (EP) take very different theoretical and methodological approaches to this kind of variation. We argue that it is fruitful to…
A monolithic Lagrangian approach for fluid-structure interaction problems
NASA Astrophysics Data System (ADS)
Ryzhakov, P. B.; Rossi, R.; Idelsohn, S. R.; Oñate, E.
2010-11-01
The current work presents a monolithic method for the solution of fluid-structure interaction problems involving flexible structures and free-surface flows. The technique presented is based upon the utilization of a Lagrangian description for both the fluid and the structure. A linear displacement-pressure interpolation pair is used for the fluid, whereas the structure utilizes a standard displacement-based formulation. A slight fluid compressibility is assumed, which makes it possible to relate the mechanical pressure to the local volume variation. The method described features a global pressure condensation which in turn enables the definition of a purely displacement-based linear system of equations. A matrix-free technique is used for the solution of such a linear system, leading to an efficient implementation. The result is a robust method which allows dealing with FSI problems involving arbitrary variations in the shape of the fluid domain. The method is completely free of spurious added-mass effects.
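One plausible form of the assumed pressure-volume coupling, with κ an effective bulk modulus (the abstract does not state the paper's exact constitutive relation, so this is an assumption), is:

```latex
p \;=\; -\,\kappa\,\frac{\Delta V}{V}
```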
IPA (v1): a framework for agent-based modelling of soil water movement
NASA Astrophysics Data System (ADS)
Mewes, Benjamin; Schumann, Andreas H.
2018-06-01
In the last decade, agent-based modelling (ABM) became a popular modelling technique in social sciences, medicine, biology, and ecology. ABM was designed to simulate systems that are highly dynamic and sensitive to small variations in their composition and their state. As hydrological systems, and natural systems in general, often show dynamic and non-linear behaviour, ABM can be an appropriate way to model these systems. Nevertheless, only a few studies have utilized the ABM method for process-based modelling in hydrology. The percolation of water through the unsaturated soil is highly responsive to the current state of the soil system; small variations in composition lead to major changes in the transport system. Hence, we present a new approach for modelling the movement of water through a soil column: autonomous water agents that transport water through the soil while interacting with their environment as well as with other agents under physical laws.
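A minimal sketch of the agent idea (not the IPA model itself; the layer capacities, parcel volumes, and percolation rule below are invented for illustration):

```python
class WaterAgent:
    """Autonomous water parcel percolating through a 1-D soil column."""
    def __init__(self, volume, layer=0):
        self.volume, self.layer = volume, layer

    def step(self, soil):
        cell = soil[self.layer]
        if cell["water"] + self.volume <= cell["capacity"]:
            cell["water"] += self.volume   # free storage: the agent is absorbed here
            return False
        if self.layer + 1 < len(soil):     # otherwise percolate to the next layer
            self.layer += 1
            return True
        return False                       # drained out of the bottom of the column

soil = [{"capacity": 0.30, "water": 0.28}, {"capacity": 0.40, "water": 0.10}]
agents = [WaterAgent(0.001) for _ in range(100)]
while agents:                              # advance all active agents each time step
    agents = [a for a in agents if a.step(soil)]
print(soil)
```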
Kainz, Philipp; Pfeiffer, Michael; Urschler, Martin
2017-01-01
Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNN) for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge and compare our approach to the simultaneously developed other approaches that participated in the same challenge. On two test sets, we demonstrate our segmentation performance and show that we achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.
Monitoring of rapid blood pH variations by CO detection in breath with tunable diode laser
NASA Astrophysics Data System (ADS)
Kouznetsov, Andrian I.; Stepanov, Eugene V.; Zyrianov, Pavel V.; Shulagin, Yurii A.; Diachenko, Alexander I.; Gurfinkel, Youri I.
1997-06-01
Detection of endogenous carbon monoxide content in breath with tunable diode lasers (TDL) was proposed for noninvasive monitoring of rapid blood pH variation. The applied approach is based on the high sensitivity of the haemoglobin and myoglobin affinity for CO to the blood pH value, and on the ability to rapidly detect small variations of CO content in expired air. Breath CO absorption in the 4.7 μm spectral region was carefully measured using a PbSSe tunable diode laser that provides 1 ppb CO concentration sensitivity and a 10 s time constant. The TDL gas analyzer was used to monitor the expired air of the persons studied in physiological tests including hyperventilation and physical load. Simultaneous blood tests were conducted to demonstrate the correlation between blood and breath chemical parameters.
Jastram, John D.; Moyer, Douglas; Hyer, Kenneth
2009-01-01
Fluvial transport of sediment into the Chesapeake Bay estuary is a persistent water-quality issue with major implications for the overall health of the bay ecosystem. Accurately and precisely estimating the suspended-sediment concentrations (SSC) and loads that are delivered to the bay, however, remains challenging. Although manual sampling of SSC produces an accurate series of point-in-time measurements, robust extrapolation to unmeasured periods (especially high-flow periods) has proven to be difficult. Sediment concentrations typically have been estimated using regression relations between individual SSC values and associated streamflow values; however, suspended-sediment transport during storm events is extremely variable, and it is often difficult to relate a unique SSC to a given streamflow. With this limitation for estimating SSC, innovative approaches for generating detailed records of suspended-sediment transport are needed. One effective method for improved suspended-sediment determination involves the continuous monitoring of turbidity as a surrogate for SSC. Turbidity measurements are theoretically well correlated to SSC because turbidity represents a measure of water clarity that is directly influenced by suspended sediments; thus, turbidity-based estimation models typically are effective tools for generating SSC data. The U.S. Geological Survey, in cooperation with the U.S. Environmental Protection Agency Chesapeake Bay Program and Virginia Department of Environmental Quality, initiated continuous turbidity monitoring on three major tributaries of the bay - the James, Rappahannock, and North Fork Shenandoah Rivers - to evaluate the use of turbidity as a sediment surrogate in rivers that deliver sediment to the bay. Results of this surrogate approach were compared to the traditionally applied streamflow-based approach for estimating SSC. Additionally, evaluation and comparison of these two approaches were conducted for nutrient estimations. Results demonstrate that the application of turbidity-based estimation models provides an improved method for generating a continuous record of SSC, relative to the classical approach that uses streamflow as a surrogate for SSC. Turbidity-based estimates of SSC were found to be more accurate and precise than SSC estimates from streamflow-based approaches. The turbidity-based SSC estimation models explained 92 to 98 percent of the variability in SSC, while streamflow-based models explained 74 to 88 percent of the variability in SSC. Furthermore, the mean absolute error of turbidity-based SSC estimates was 50 to 87 percent less than the corresponding values from the streamflow-based models. Statistically significant differences were detected between the distributions of residual errors and estimates from the two approaches, indicating that the turbidity-based approach yields estimates of SSC with greater precision than the streamflow-based approach. Similar improvements were identified for turbidity-based estimates of total phosphorus, which is strongly related to turbidity because total phosphorus occurs predominantly in particulate form. Total nitrogen estimation models based on turbidity and streamflow generated estimates of similar quality, with the turbidity-based models providing slight improvements in the quality of estimations. This result is attributed to the understanding that nitrogen transport is dominated by dissolved forms that relate less directly to streamflow and turbidity.
Improvements in concentration estimation resulted in improved estimates of load. Turbidity-based suspended-sediment loads estimated for the James River at Cartersville, VA, monitoring station exhibited tighter confidence interval bounds and a coefficient of variation of 12 percent, compared with a coefficient of variation of 38 percent for the streamflow-based load.
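A common form of such a turbidity surrogate rating, sketched here as a log-linear regression with Duan's smearing bias correction (the USGS models in the study may include additional terms):

```python
import numpy as np

def fit_surrogate(turbidity, ssc):
    """Fit ln(SSC) = b0 + b1 ln(turbidity); return coefficients and Duan's
    smearing factor for bias-corrected back-transformation."""
    X = np.column_stack([np.ones(len(turbidity)), np.log(turbidity)])
    b, *_ = np.linalg.lstsq(X, np.log(ssc), rcond=None)
    smear = np.mean(np.exp(np.log(ssc) - X @ b))  # Duan's smearing estimator
    return b, smear

def estimate_ssc(turbidity, b, smear):
    """Continuous SSC record from a continuous turbidity record."""
    return smear * np.exp(b[0] + b[1] * np.log(turbidity))
```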
Galarraga, Haize; Warren, Robert J.; Lados, Diana A.; ...
2017-01-06
Electron beam melting (EBM) is a metal powder bed fusion additive manufacturing (AM) technology that is used to fabricate three-dimensional near-net-shaped parts directly from computer models. Ti-6Al-4V is the most widely used and studied alloy for this technology and is the focus of this work in its ELI (Extra Low Interstitial) variation. The mechanisms of microstructure formation, evolution, and its subsequent influence on mechanical properties of the alloy in the as-fabricated condition have been documented by various researchers. In the present work, the thermal history resulting in the formation of the as-fabricated microstructure was analyzed and studied by a thermal simulation. Subsequently, different heat treatments were performed based on three approaches in order to study the effects of heat treatments on the singular and exclusive microstructure formed during the EBM fabrication process. In the first approach, the effect of cooling rate after the solutionizing process was studied. In the second approach, the variation of α lath thickness during annealing treatment and correlation with mechanical properties was established. In the last approach, several solutionizing and aging experiments were conducted.
Contactless and pose invariant biometric identification using hand surface.
Kanhangad, Vivek; Kumar, Ajay; Zhang, David
2011-05-01
This paper presents a novel approach for hand matching that achieves significantly improved performance even in the presence of large hand pose variations. The proposed method utilizes a 3-D digitizer to simultaneously acquire intensity and range images of the user's hand presented to the system in an arbitrary pose. The approach involves determination of the orientation of the hand in 3-D space followed by pose normalization of the acquired 3-D and 2-D hand images. Multimodal (2-D as well as 3-D) palmprint and hand geometry features, which are simultaneously extracted from the user's pose-normalized textured 3-D hand, are used for matching. Individual matching scores are then combined using a new dynamic fusion strategy. Our experiments on a database of 114 subjects with significant pose variations yielded encouraging results. The consistent (across the various hand features considered) performance improvement achieved with the pose correction demonstrates the usefulness of the proposed approach for hand-based biometric systems with unconstrained and contact-free imaging. The experimental results also suggest that the dynamic fusion approach employed in this work helps to achieve a performance improvement of 60% (in terms of EER) over the case when matching scores are combined using the weighted sum rule.
Stochastic approach to the derivation of emission limits for wastewater treatment plants.
Stransky, D; Kabelkova, I; Bares, V
2009-01-01
A stochastic approach to the derivation of WWTP emission limits meeting probabilistically defined environmental quality standards (EQS) is presented. The stochastic model is based on the mixing equation, with input data defined by probability density distributions and solved by Monte Carlo simulations. The approach was tested on a study catchment for total phosphorus (P(tot)). The model assumes independence of the input variables, which was verified for the dry-weather situation. Discharges and P(tot) concentrations both in the study creek and in the WWTP effluent follow a log-normal probability distribution. Variation coefficients of P(tot) concentrations differ considerably along the stream (c(v)=0.415-0.884). The selected value of the variation coefficient (c(v)=0.420) affects the derived mean value (C(mean)=0.13 mg/l) of the P(tot) EQS (C(90)=0.2 mg/l). Even after the supposed improvement of water quality upstream of the WWTP to the level of the P(tot) EQS, the WWTP emission limits calculated would be lower than the values achievable with the best available technology (BAT). Thus, minimum dilution ratios for the meaningful application of the combined approach to the derivation of P(tot) emission limits for Czech streams are discussed.
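A hedged sketch of the Monte Carlo mixing-equation core follows; the distribution parameters are invented for illustration (the study's fitted log-normal parameters are not given in the abstract), and a simple bisection stands in for the limit derivation:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
# Log-normal inputs (parameters assumed for illustration)
Q_up = rng.lognormal(np.log(0.50), 0.40, n)    # upstream discharge, m3/s
C_up = rng.lognormal(np.log(0.10), 0.42, n)    # upstream P_tot, mg/l
Q_eff = rng.lognormal(np.log(0.05), 0.30, n)   # WWTP effluent discharge, m3/s

def downstream_c90(c_eff):
    """90th-percentile downstream concentration from the mixing equation."""
    c_down = (Q_up * C_up + Q_eff * c_eff) / (Q_up + Q_eff)
    return np.quantile(c_down, 0.90)

# Bisect for the largest effluent limit still meeting the EQS (C90 = 0.2 mg/l)
lo, hi = 0.0, 10.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if downstream_c90(mid) < 0.2 else (lo, mid)
print(f"P_tot emission limit ~ {lo:.2f} mg/l")
```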
NASA Astrophysics Data System (ADS)
Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng
2018-04-01
In this work we present a novel convolution computing architecture based on metal oxide resistive random access memory (RRAM) to process the image data stored in the RRAM arrays. The proposed image storage architecture shows better speed and device-consumption efficiency than the previous kernel storage architecture. We further improve the architecture for high-accuracy, low-power computing by utilizing binary storage and a series resistor. For a 28 × 28 image and 10 kernels with a size of 3 × 3, compared with the previous kernel storage approach, the newly proposed architecture shows excellent performance, including: 1) almost 100% accuracy within 20% LRS variation and 90% HRS variation; 2) a speed boost of more than 67×; and 3) 71.4% energy savings.
Ishii, Lisa E
2013-06-01
Unsustainable health care costs coupled with opportunity for improvement in health care outcomes in the United States are stimulating meaningful transformation in the way we deliver care. One approach in this transformation focuses on minimizing unnecessary variation in physician practices, instead focusing on evidence-based medicine in a more uniform manner. Clinical practice guidelines contain evidence-based recommendations, articulate goals of care, and can help to reduce unnecessary variation. While thousands of clinical practice guidelines are in existence, a clinical gap exists between knowledge and clinical performance. With thoughtful guidelines implementation strategies in place, organizations can begin to close the gap and translate best practice knowledge into care. Health systems that have done this effectively have seen improved clinical outcomes, improved patient satisfaction, and lower cost per patient.
The use of resighting data to estimate the rate of population growth of the snail kite in Florida
Dreitz, V.J.; Nichols, J.D.; Hines, J.E.; Bennetts, R.E.; Kitchens, W.M.; DeAngelis, D.L.
2002-01-01
The rate of population growth (lambda) is an important demographic parameter used to assess the viability of a population and to develop management and conservation agendas. We examined the use of resighting data to estimate lambda for the snail kite population in Florida from 1997-2000. The analyses consisted of (1) a robust design approach that derives an estimate of lambda from estimates of population size and (2) the Pradel (1996) temporal symmetry model (TSM) approach that directly estimates lambda using an open-population capture-recapture model. Besides resighting data, both approaches required information on the number of unmarked individuals that were sighted during the sampling periods. The point estimates of lambda differed between the robust design and TSM approaches, but the 95% confidence intervals overlapped substantially. We believe the differences may be the result of sparse data and do not indicate the inappropriateness of either modelling technique. We focused on the results of the robust design because this approach provided estimates for all study years. Variation among these estimates was smaller than levels of variation among ad hoc estimates based on previously reported index statistics. We recommend that lambda of snail kites be estimated using capture-resighting methods rather than ad hoc counts.
Estimation of cardiac conductivities in ventricular tissue by a variational approach
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Veneziani, Alessandro
2015-11-01
The bidomain model is the current standard model to simulate cardiac potential propagation. The numerical solution of this system of partial differential equations strongly depends on the model parameters, and in particular on the cardiac conductivities. Unfortunately, it is quite problematic to measure these parameters in vivo, and even more so in clinical practice, resulting in no common agreement in the literature. In this paper we consider a variational data assimilation approach to estimating those parameters. We consider the parameters as control variables to minimize the mismatch between the computed and the measured potentials under the constraint of the bidomain system. The existence of a minimizer of the misfit function is proved with the phenomenological Rogers-McCulloch ionic model, which completes the bidomain system. We significantly improve on the numerical approaches in the literature by resorting to a derivative-based optimization method, resolving some challenges due to discontinuity. The improvement in computational efficiency is confirmed by a 2D test as a direct comparison with approaches in the literature. The core of our numerical results is in 3D, on both idealized and real geometries, with the minimal ionic model. We demonstrate the reliability and the stability of the conductivity estimation approach in the presence of noise and with an imperfect knowledge of other model parameters.
Shryock, Daniel F.; Havrilla, Caroline A.; DeFalco, Lesley; Esque, Todd C.; Custer, Nathan; Wood, Troy E.
2015-01-01
Local adaptation influences plant species' responses to climate change and their performance in ecological restoration. Fine-scale physiological or phenological adaptations that direct demographic processes may drive intraspecific variability when baseline environmental conditions change. Landscape genomics characterizes adaptive differentiation by identifying environmental drivers of adaptive genetic variability and mapping the associated landscape patterns. We applied such an approach to Sphaeralcea ambigua, an important restoration plant in the arid southwestern United States, by analyzing variation at 153 amplified fragment length polymorphism loci in the context of environmental gradients separating 47 Mojave Desert populations. We identified 37 potentially adaptive loci through a combination of genome scan approaches. We then used a generalized dissimilarity model (GDM) to relate variability in potentially adaptive loci with spatial gradients in temperature, precipitation, and topography. We identified non-linear thresholds in loci frequencies driven by summer maximum temperature and water stress, along with continuous variation corresponding to temperature seasonality. Two GDM-based approaches for mapping predicted patterns of local adaptation are compared. Additionally, we assess uncertainty in spatial interpolations through a novel spatial bootstrapping approach. Our study presents robust, accessible methods for deriving spatially-explicit models of adaptive genetic variability in non-model species that will inform climate change modelling and ecological restoration.
Grote, Steffi; Condit, Richard; Hubbell, Stephen; Wirth, Christian; Rüger, Nadja
2013-01-01
For trees in tropical forests, competition for light is thought to be a central process that offers opportunities for niche differentiation through light gradient partitioning. In previous studies, a canopy index based on three-dimensional canopy census data has been shown to be a good predictor of species-specific demographic rates across the entire tree community on Barro Colorado Island, Panama, and has allowed quantifying between-species variation in light response. However, almost all other forest census plots lack data on the canopy structure. Hence, this study aims at assessing whether position-based neighborhood competition indices can replace information from canopy census data and produce similar estimates of the interspecific variation of light responses. We used inventory data from the census plot at Barro Colorado Island and calculated neighborhood competition indices with varying relative effects of the size and distance of neighboring trees. Among these indices, we selected the one that was most strongly correlated with the canopy index. We then compared outcomes of hierarchical Bayesian models for species-specific recruitment and growth rates including either the canopy index or the selected neighborhood competition index as predictor. Mean posterior estimates of light response parameters were highly correlated between models (r>0.85) and indicated that most species regenerate and grow better in higher light. Both light estimation approaches consistently found that the interspecific variation of light response was larger for recruitment than for growth rates. However, the classification of species into different groups of light response, e.g. weaker than linear (decelerating) vs. stronger than linear (accelerating) differed between approaches. These results imply that while the classification into light response groups might be biased when using neighborhood competition indices, they may be useful for determining species rankings and between-species variation of light response and therefore enable large comparative studies between different forest census plots. PMID:24324723
Expert system constant false alarm rate processor
NASA Astrophysics Data System (ADS)
Baldygo, William J., Jr.; Wicks, Michael C.
1993-10-01
The requirements for high detection probability and low false alarm probability in modern wide area surveillance radars are rarely met due to spatial variations in clutter characteristics. Many filtering and CFAR detection algorithms have been developed to effectively deal with these variations; however, any single algorithm is likely to exhibit excessive false alarms and intolerably low detection probabilities in a dynamically changing environment. A great deal of research has led to advances in the state of the art in Artificial Intelligence (AI) and numerous areas have been identified for application to radar signal processing. The approach suggested here, discussed in a patent application submitted by the authors, is to intelligently select the filtering and CFAR detection algorithms being executed at any given time, based upon the observed characteristics of the interference environment. This approach requires sensing the environment, employing the most suitable algorithms, and applying an appropriate multiple algorithm fusion scheme or consensus algorithm to produce a global detection decision.
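The expert system described above selects among conventional filtering and CFAR detectors; for context, a minimal sketch of one such constituent algorithm, cell-averaging CFAR, follows. The window sizes and the exponential-noise threshold scaling are textbook assumptions, not details taken from the authors' patent application.

import numpy as np

def ca_cfar(power, n_train=16, n_guard=2, pfa=1e-4):
    # Cell-averaging CFAR: each cell is compared against a threshold set from
    # the mean power of surrounding training cells (guard cells and the cell
    # under test excluded). The scaling factor alpha assumes exponentially
    # distributed noise power.
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    half = n_train // 2 + n_guard
    detections = np.zeros(len(power), dtype=bool)
    for i in range(half, len(power) - half):
        lead = power[i - half : i - n_guard]          # training cells before
        lag = power[i + n_guard + 1 : i + half + 1]   # training cells after
        noise = (lead.sum() + lag.sum()) / n_train
        detections[i] = power[i] > alpha * noise
    return detections

An expert system in the spirit of the abstract would switch among several such detectors (cell-averaging, ordered-statistic, and so on) as the sensed interference environment changes.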
A variational approach to behavioral and neuroelectrical laws.
Noventa, Stefano; Vidotto, Giulio
2012-09-01
Variational methods play a fundamental and unifying role in several fields of physics, chemistry, engineering, economics, and biology, as they allow one to derive the behavior of a system as a consequence of an optimality principle. A possible application of these methods to a model of perception is given by considering a psychophysical law as the solution of an Euler-Lagrange equation. A general class of Lagrangians is identified by requiring the measurability of prothetic continua on interval scales. The associated Hamiltonian (the energy of the process) is tentatively connected with neurophysiological aspects. As an example of the suggested approach, a particular choice of the Lagrangian, which is a sufficient condition to obtain classical psychophysical laws while accounting for psychophysical adaptation and the stationarity of neuronal activity, is used to explore a possible relation between a behavioral law and a neuroelectrical response based on the Naka-Rushton model.
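For reference, the Naka-Rushton response function mentioned above has the standard saturating form R(I) = R_max * I^n / (I^n + sigma^n). A one-line Python sketch (default parameter values are arbitrary, for illustration only):

def naka_rushton(intensity, r_max=1.0, sigma=0.5, n=2.0):
    # Saturating neural response: sigma is the semi-saturation constant,
    # i.e. the intensity at which the response reaches half of r_max.
    return r_max * intensity ** n / (intensity ** n + sigma ** n)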
Schuemie, Martijn J; Mons, Barend; Weeber, Marc; Kors, Jan A
2007-06-01
Gene and protein name identification in text requires a dictionary approach to relate synonyms to the same gene or protein, and to link names to external databases. However, existing dictionaries are incomplete. We investigate two complementary methods for automatic generation of a comprehensive dictionary: combination of information from existing gene and protein databases and rule-based generation of spelling variations. Both methods have been reported in literature before, but have hitherto not been combined and evaluated systematically. We combined gene and protein names from several existing databases of four different organisms. The combined dictionaries showed a substantial increase in recall on three different test sets, as compared to any single database. Application of 23 spelling variation rules to the combined dictionaries further increased recall. However, many rules appeared to have no effect and some appeared to have a detrimental effect on precision.
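The 23 spelling variation rules themselves are not enumerated in the abstract; the Python sketch below shows generic examples of the rule type involved (hyphen/space/case rewrites), purely as an assumption-labeled illustration of the approach.

import re

def spelling_variants(name):
    # Generate candidate spelling variants of a gene/protein name with a few
    # illustrative rewrite rules; real systems apply larger curated rule sets.
    variants = {name}
    variants.add(name.replace("-", " "))
    variants.add(name.replace("-", ""))
    variants.add(re.sub(r"(?<=[A-Za-z])(?=\d)", "-", name))  # letter-digit boundary
    variants.add(name.lower())
    variants.add(name.upper())
    variants.discard("")
    return variants

print(spelling_variants("IL2"))  # -> {'IL2', 'IL-2', 'il2'}

Each generated variant is added to the dictionary as a synonym of the original entry, which is what drives the recall increase reported above.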
Geometrical optics approach in liquid crystal films with three-dimensional director variations.
Panasyuk, G; Kelly, J; Gartland, E C; Allender, D W
2003-04-01
A formal geometrical optics approach (GOA) to the optics of nematic liquid crystals whose optic axis (director) varies in more than one dimension is described. The GOA is applied to the propagation of light through liquid crystal films whose director varies in three spatial dimensions. As an example, the GOA is applied to the calculation of light transmittance for the case of a liquid crystal cell which exhibits the homeotropic to multidomainlike transition (HMD cell). Properties of the GOA solution are explored, and comparison with the Jones calculus solution is also made. For variations on a smaller scale, where the Jones calculus breaks down, the GOA provides a fast, accurate method for calculating light transmittance. The results of light transmittance calculations for the HMD cell based on the director patterns provided by two methods, direct computer calculation and a previously developed simplified model, are in good agreement.
A non-destructive selection criterion for fibre content in jute : II. Regression approach.
Arunachalam, V; Iyer, R D
1974-01-01
An experiment with ten populations of jute, comprising varieties and mutants of the two species Corchorus olitorius and C. capsularis, was conducted at two different locations with the object of evolving an effective criterion for selecting superior single plants for fibre yield. At Delhi, variation existed only between varieties as a group and mutants as a group, while at Pusa variation also existed among the mutant populations of C. capsularis. A multiple regression approach was used to find the optimum combination of characters for prediction of fibre yield. A process of successive elimination of characters based on the coefficient of determination provided by individual regression equations was employed to arrive at the optimal set of characters for predicting fibre yield. It was found that plant height, basal and mid-diameters, and basal and mid-dry fibre weights would provide such an optimal set.
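The elimination process described above resembles backward elimination driven by the coefficient of determination. A minimal Python sketch under that assumption follows; the stopping tolerance r2_drop_tol is an illustrative choice, not a value from the paper.

import numpy as np

def backward_eliminate(X, y, names, r2_drop_tol=0.01):
    # Successively drop the predictor whose removal costs the least R**2,
    # stopping when any further removal would drop R**2 by more than tol.
    def r2(cols):
        Xc = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
        beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
        resid = y - Xc @ beta
        return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

    cols = list(range(X.shape[1]))
    while len(cols) > 1:
        full = r2(cols)
        loss, j = min((full - r2([c for c in cols if c != j]), j) for j in cols)
        if loss > r2_drop_tol:
            break
        cols.remove(j)
    return [names[j] for j in cols]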
Di Salvo, Francesca; Meneghini, Elisabetta; Vieira, Veronica; Baili, Paolo; Mariottini, Mauro; Baldini, Marco; Micheli, Andrea; Sant, Milena
2015-01-01
Introduction The study investigated the geographic variation of mortality risk for hematological malignancies (HMs) in order to identify potential high-risk areas near an Italian petrochemical refinery. Material and methods A population-based case-control study was conducted and residential histories for 171 cases and 338 sex- and age-matched controls were collected. Confounding factors were obtained from interviews with consenting relatives for 109 HM deaths and 267 controls. To produce mortality risk maps, two different approaches were applied. We mapped (1) adaptive kernel density relative risk estimates (KDE) for case-control studies, which estimate a spatial relative risk function using the ratio between case and control densities, and (2) odds ratios estimated from case-control data using generalized additive models (GAMs) to smooth the effect of location, a proxy for exposure, while adjusting for confounding variables. Results No high-risk areas for HM mortality were identified among all subjects (men and women combined) by either approach. Using the adaptive KDE approach, we found a significant increase in death risk only among women in a large area 2–6 km southeast of the refinery, and the application of GAMs also identified a similarly located significant high-risk area among women only (global p-value<0.025). Potential confounding risk factors we considered in the GAM did not alter the results. Conclusion Both approaches identified a high-risk area close to the refinery among women only. These spatial methods are useful tools for public policy management to determine priority areas for intervention. Our findings suggest several directions for further research in order to identify other potential environmental exposures that may be assessed in forthcoming studies based on detailed exposure modeling. PMID:26073202
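The first of the two mapping approaches, the case/control density ratio, can be sketched compactly. Note that the study used an adaptive-bandwidth KDE, whereas SciPy's gaussian_kde shown here is fixed-bandwidth, so this is an illustrative simplification.

import numpy as np
from scipy.stats import gaussian_kde

def kde_log_relative_risk(cases_xy, controls_xy, grid_xy):
    # Spatial log relative-risk surface: log ratio of case density to
    # control density at each grid point. Inputs are (n, 2) coordinate
    # arrays; gaussian_kde expects the transposed (2, n) layout.
    f_cases = gaussian_kde(cases_xy.T)
    f_controls = gaussian_kde(controls_xy.T)
    return np.log(f_cases(grid_xy.T)) - np.log(f_controls(grid_xy.T))

Values well above zero on the resulting grid flag candidate high-risk areas, subject to a significance assessment such as the tolerance contours used in the study.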
Boumans, Iris J M M; de Boer, Imke J M; Hofstede, Gert Jan; Bokkers, Eddie A M
2018-07-01
Domesticated pigs, Sus scrofa, vary considerably in feeding, social interaction and growth patterns. This variation originates partly from genetic variation that affects physiological factors and partly from behavioural strategies (avoid or approach) in competitive food resource situations. Currently, it is unknown how variation in physiological factors and in behavioural strategies among animals contributes to variation in feeding, social interaction and growth patterns in animals. The aim of this study was to unravel causation of variation in these patterns among pigs. We used an agent-based model to explore the effects of physiological factors and behavioural strategies in pigs on variation in feeding, social interaction and growth patterns. Model results show that variation in feeding, social interaction and growth patterns are caused partly by chance, such as time effects and coincidence of conflicts. Furthermore, results show that seemingly contradictory empirical findings in literature can be explained by variation in pig characteristics (i.e. growth potential, positive feedback, dominance, and coping style). Growth potential mainly affected feeding and growth patterns, whereas positive feedback, dominance and coping style affected feeding patterns, social interaction patterns, as well as growth patterns. Variation in behavioural strategies among pigs can reduce aggression at group level, but also make some pigs more susceptible to social constraints inhibiting them from feeding when they want to, especially low-ranking pigs and pigs with a passive coping style. Variation in feeding patterns, such as feeding rate or meal frequency, can indicate social constraints. Feeding patterns, however, can say something different about social constraints at group versus individual level. A combination of feeding patterns, such as a decreased feed intake, an increased feeding rate, and an increased meal frequency might, therefore, be needed to measure social constraints at individual level. Copyright © 2018 Elsevier Inc. All rights reserved.
Optimal filtering and Bayesian detection for friction-based diagnostics in machines.
Ray, L R; Townsend, J R; Ramasubramanian, A
2001-01-01
Non-model-based diagnostic methods typically rely on measured signals that must be empirically related to process behavior or incipient faults. The difficulty in interpreting a signal that is indirectly related to the fundamental process behavior is significant. This paper presents an integrated non-model and model-based approach to detecting when process behavior varies from a proposed model. The method, which is based on nonlinear filtering combined with maximum likelihood hypothesis testing, is applicable to dynamic systems whose constitutive model is well known, and whose process inputs are poorly known. Here, the method is applied to friction estimation and diagnosis during motion control in a rotating machine. A nonlinear observer estimates friction torque in a machine from shaft angular position measurements and the known input voltage to the motor. The resulting friction torque estimate can be analyzed directly for statistical abnormalities, or it can be directly compared to friction torque outputs of an applicable friction process model in order to diagnose faults or model variations. Nonlinear estimation of friction torque provides a variable on which to apply diagnostic methods that is directly related to model variations or faults. The method is evaluated experimentally by its ability to detect normal load variations in a closed-loop controlled motor driven inertia with bearing friction and an artificially-induced external line contact. Results show an ability to detect statistically significant changes in friction characteristics induced by normal load variations over a wide range of underlying friction behaviors.
Fornel, Rodrigo; Cordeiro-Estrela, Pedro; de Freitas, Thales Renato O.
2018-01-01
We tested the association between chromosomal polymorphism and skull shape and size variation in two groups of the subterranean rodent Ctenomys. The hypothesis is based on the premise that chromosomal rearrangements in small populations, as occur in Ctenomys, produce reproductive isolation and allow the independent diversification of populations. The mendocinus group has species with low chromosomal diploid number variation (2n=46-48), while species from the torquatus group have a higher karyotype variation (2n=42-70). We analyzed the shape and size variation of skull and mandible by a geometric morphometric approach, with univariate and multivariate statistical analysis, in 12 species from the mendocinus and torquatus groups of the genus Ctenomys. We used 763 adult skulls in dorsal, ventral, and lateral views, and 515 mandibles in lateral view, with 93 landmarks across the four views. Although we expected more phenotypic variation in the torquatus than the mendocinus group, our results rejected the hypothesis of an association between chromosomal polymorphism and skull shape and size variation. Moreover, the torquatus group did not show more variation than mendocinus. Habitat heterogeneity associated with biomechanical constraints, and other factors like geography, phylogeny, and demography, may affect skull morphological evolution in Ctenomys. PMID:29668015
Robust optimization-based DC optimal power flow for managing wind generation uncertainty
NASA Astrophysics Data System (ADS)
Boonchuay, Chanwit; Tomsovic, Kevin; Li, Fangxing; Ongsakul, Weerakorn
2012-11-01
Integrating wind generation into the wider grid causes a number of challenges to traditional power system operation. Given the relatively large wind forecast errors, congestion management tools based on optimal power flow (OPF) need to be improved. In this paper, a robust optimization (RO)-based DCOPF is proposed to determine the optimal generation dispatch and locational marginal prices (LMPs) for a day-ahead competitive electricity market considering the risk of dispatch cost variation. The basic concept is to use the dispatch to hedge against the possibility of reduced or increased wind generation. The proposed RO-based DCOPF is compared with a stochastic non-linear programming (SNP) approach on a modified PJM 5-bus system. Primary test results show that the proposed DCOPF model can provide lower dispatch cost than the SNP approach.
Random matrix approach to the dynamics of stock inventory variations
NASA Astrophysics Data System (ADS)
Zhou, Wei-Xing; Mu, Guo-Hua; Kertész, János
2012-09-01
It is well accepted that investors can be classified into groups owing to distinct trading strategies, which forms the basic assumption of many agent-based models for financial markets when agents are not zero-intelligent. However, empirical tests of these assumptions are still very rare due to the lack of order flow data. Here we adopt the order flow data of Chinese stocks to tackle this problem by investigating the dynamics of inventory variations for individual and institutional investors that contain rich information about the trading behavior of investors and have a crucial influence on price fluctuations. We find that the distributions of cross-correlation coefficients C_ij have power-law forms in the bulk that are followed by exponential tails, and there are more positive coefficients than negative ones. In addition, it is more likely that two individuals or two institutions have a stronger inventory variation correlation than one individual and one institution. We find that the largest and the second largest eigenvalues (λ1 and λ2) of the correlation matrix cannot be explained by random matrix theory and the projections of investors' inventory variations on the first eigenvector u(λ1) are linearly correlated with stock returns, where individual investors play a dominating role. The investors are classified into three categories based on the cross-correlation coefficients C_VR between inventory variations and stock returns. A strong Granger causality is unveiled from stock returns to inventory variations, which means that a large proportion of individuals hold the reversing trading strategy and a small part of individuals hold the trending strategy. Our empirical findings have scientific significance in the understanding of investors' trading behavior and in the construction of agent-based models for emerging stock markets.
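A standard way to test which correlation-matrix eigenvalues exceed random-matrix expectations is to compare them against the Marchenko-Pastur upper edge. A minimal Python sketch follows; it uses the asymptotic bound for standardized data and is a generic illustration, not the authors' exact procedure.

import numpy as np

def significant_eigenvalues(inventory):
    # inventory: (T, N) array of inventory variations for N investors
    # over T periods. Eigenvalues of the cross-correlation matrix above
    # the Marchenko-Pastur upper edge carry non-random structure.
    T, N = inventory.shape
    z = (inventory - inventory.mean(0)) / inventory.std(0)
    C = (z.T @ z) / T
    eigvals, eigvecs = np.linalg.eigh(C)
    q = T / N                                  # assumes T >= N
    lambda_max = (1 + 1 / np.sqrt(q)) ** 2     # MP upper edge
    keep = eigvals > lambda_max
    return eigvals[keep], eigvecs[:, keep]

The retained eigenvectors (such as u(λ1) above) can then be projected onto the data and correlated with returns.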
NASA Astrophysics Data System (ADS)
Jin, Seung-Seop; Jung, Hyung-Jo
2014-03-01
It is well known that the dynamic properties of a structure such as natural frequencies depend not only on damage but also on environmental conditions (e.g., temperature). The variation in dynamic characteristics of a structure due to environmental conditions may mask damage of the structure. Without taking the change of environmental conditions into account, false-positive or false-negative damage diagnoses may occur, so that structural health monitoring becomes unreliable. In order to address this problem, an approach that constructs a regression model based on structural responses while considering environmental factors has commonly been used. The key to success of this approach is the formulation between the input and output variables of the regression model to take into account the environmental variations. However, it is quite challenging to determine proper environmental variables and measurement locations in advance for fully representing the relationship between the structural responses and the environmental variations. One alternative (i.e., novelty detection) is to remove the variations caused by environmental factors from the structural responses by using multivariate statistical analysis (e.g., principal component analysis (PCA), factor analysis, etc.). The success of this method depends deeply on the accuracy of the description of the normal condition. Generally, there is no prior information on the normal condition during data acquisition, so the normal condition is determined subjectively, with human intervention. The proposed method is a novel adaptive multivariate statistical analysis for structural damage detection under environmental change. One advantage of this method is the ability of a generative learning to capture the intrinsic characteristics of the normal condition. The proposed method is tested on numerically simulated data for a range of measurement noise under environmental variation. A comparative study with conventional methods (i.e., fixed reference scheme) demonstrates the superior performance of the proposed method for structural damage detection.
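As background for the PCA-based alternative discussed above, here is a minimal sketch of removing assumed environmental components from natural-frequency data and scoring residual novelty. The choice of n_env discarded components is an assumption for illustration.

import numpy as np

def pca_novelty_index(train_freqs, test_freqs, n_env=2):
    # train_freqs, test_freqs: (n_samples, n_features) natural-frequency data.
    # Discard the first n_env principal components of the training data
    # (assumed to carry environmental, e.g. temperature-driven, trends)
    # and score each test sample by its residual norm.
    mu = train_freqs.mean(0)
    U, s, Vt = np.linalg.svd(train_freqs - mu, full_matrices=False)
    V_env = Vt[:n_env]                                    # environmental subspace
    centered = test_freqs - mu
    resid = centered - centered @ V_env.T @ V_env
    return np.linalg.norm(resid, axis=1)                  # large values flag damage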
Carroll, Carlos; Roberts, David R; Michalak, Julia L; Lawler, Joshua J; Nielsen, Scott E; Stralberg, Diana; Hamann, Andreas; Mcrae, Brad H; Wang, Tongli
2017-11-01
As most regions of the earth transition to altered climatic conditions, new methods are needed to identify refugia and other areas whose conservation would facilitate persistence of biodiversity under climate change. We compared several common approaches to conservation planning focused on climate resilience over a broad range of ecological settings across North America and evaluated how commonalities in the priority areas identified by different methods varied with regional context and spatial scale. Our results indicate that priority areas based on different environmental diversity metrics differed substantially from each other and from priorities based on spatiotemporal metrics such as climatic velocity. Refugia identified by diversity or velocity metrics were not strongly associated with the current protected area system, suggesting the need for additional conservation measures including protection of refugia. Despite the inherent uncertainties in predicting future climate, we found that variation among climatic velocities derived from different general circulation models and emissions pathways was less than the variation among the suite of environmental diversity metrics. To address uncertainty created by this variation, planners can combine priorities identified by alternative metrics at a single resolution and downweight areas of high variation between metrics. Alternately, coarse-resolution velocity metrics can be combined with fine-resolution diversity metrics in order to leverage the respective strengths of the two groups of metrics as tools for identification of potential macro- and microrefugia that in combination maximize both transient and long-term resilience to climate change. Planners should compare and integrate approaches that span a range of model complexity and spatial scale to match the range of ecological and physical processes influencing persistence of biodiversity and identify a conservation network resilient to threats operating at multiple scales. © 2017 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Dong, Jian; Kudo, Hiroyuki
2017-03-01
Compressed sensing (CS) is attracting growing interest in sparse-view computed tomography (CT) image reconstruction. The most standard approach of CS is total variation (TV) minimization. However, images reconstructed by TV usually suffer from distortions, especially in reconstruction of practical CT images, in the form of patchy artifacts, improper serrated edges, and loss of image textures. Most existing CS approaches, including TV, achieve image quality improvement by applying linear transforms to the object image, but linear transforms usually fail to take discontinuities such as edges and image textures into account, which is considered to be the key reason for image distortions. Discussion of nonlinear-filter-based image processing has a long history, and nonlinear filters are known to yield better results than linear filters in image processing tasks such as denoising. The median root prior was first utilized by Alenius as a nonlinear transform in CT image reconstruction, with significant gains obtained. Subsequently, Zhang developed the application of nonlocal means-based CS. It is gradually becoming clear that nonlinear-transform-based CS is superior to linear-transform-based CS in improving image quality; however, to our knowledge, this has not been clearly concluded in any previous paper. In this work, we investigated the image quality differences between conventional TV minimization and nonlinear sparsifying transform based CS, as well as the image quality differences among different nonlinear sparsifying transform based CS methods in sparse-view CT image reconstruction. Additionally, we accelerated the implementation of the nonlinear sparsifying transform based CS algorithm.
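For orientation, a toy gradient-descent sketch of the smoothed TV objective is given below, applied to denoising rather than to full sparse-view CT reconstruction (which would replace the pixelwise fidelity term with a projection-domain one). Step size, smoothing eps, and the periodic boundary handling are simplifications.

import numpy as np

def tv_denoise(img, lam=0.1, step=0.2, iters=200, eps=1e-8):
    # Gradient descent on 0.5*||u - img||^2 + lam * sum sqrt(|grad u|^2 + eps).
    # Forward differences for the gradient, backward differences (with
    # periodic wrap, for brevity) for the divergence.
    u = np.asarray(img, float).copy()
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - img) - lam * div)
    return u

Nonlinear sparsifying alternatives in the spirit of the abstract would replace the TV gradient term with, e.g., a nonlocal-means or median-root-prior update.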
Zimmerman, Guthrie S; Millspaugh, Joshua J; Link, William A; Woods, Rami J; Gutiérrez, R J
2013-12-01
Population cycles have long interested biologists. The ruffed grouse, Bonasa umbellus, is one such species whose populations cycle over most of their range. Thus, much effort has been expended to understand the mechanisms that might control cycles in this and other species. Corticosterone metabolites are widely used in studies of animals to measure physiological stress. We evaluated corticosterone metabolites in feces of territorial male grouse as a potential tool to study mechanisms governing grouse cycles. However, like most studies of corticosterone in wild animals, we did not know the identity of all individuals for which we had fecal samples. This presented an analytical problem that resulted in either pseudoreplication or confounding. Therefore, we derived an analytical approach that accommodated uncertainty in individual identification. Because we had relatively low success capturing birds, we estimated turnover probabilities of birds on territorial display sites based on capture histories of the limited number of birds we captured. Hence, we developed a study design and modeling approach to quantify variation in corticosterone levels among individuals and through time that would be applicable to any field study of corticosterone in wild animals. Specifically, we wanted a sampling design and model flexible enough to partition variation among individuals, spatial units, and years, while incorporating environmental covariates that would represent potential mechanisms. We conducted our study during the decline phase of the grouse cycle and found high variation among corticosterone samples (11.33-443.92 ng/g [mean=113.99 ng/g, SD=69.08, median=99.03 ng/g]). However, there were relatively small differences in corticosterone levels among years, and levels declined throughout each breeding season, which was the opposite of our prediction that stress hormones would correlate with a declining population. We partitioned the residual variation into site, bird, and repetition (i.e., multiple samples collected from the same bird on the same day). After accounting for years and three general periods within breeding seasons, 42% of the residual variation among observations was attributable to differences among individual birds. Thus, we attribute little influence of site on stress level of birds in our study, but disentangling individual from site effects is difficult because site and bird are confounded. Our model structures provide analytical approaches for studying species having different ecologies. Our approach also demonstrates that even incomplete information on the individual identity of birds within samples is useful for analyzing these types of data. Copyright © 2013 Elsevier Inc. All rights reserved.
Borghei-Razavi, Hamid; Tomio, Ryosuke; Fereshtehnejad, Seyed-Mohammad; Shibao, Shunsuke; Schick, Uta; Toda, Masahiro; Yoshida, Kazunari; Kawase, Takeshi
2016-02-01
Objectives Numerous surgical approaches have been developed to access the petroclival region. The Kawase approach, through the middle fossa, is a well-described option for addressing cranial base lesions of the petroclival region. Our aim was to gather data about the variation of cranial nerve locations in diverse petroclival pathologies and clarify the most common pathologic variations confirmed during the anterior petrosal approach. Method A retrospective analysis was made of both videos and operative and histologic records of 40 petroclival tumors from January 2009 to September 2013 in which the Kawase approach was used. The anatomical variations of cranial nerves IV-VI related to the tumor were divided into several location categories: superior lateral (SL), inferior lateral (IL), superior medial (SM), inferior medial (IM), and encased (E). These data were then analyzed taking into consideration pathologic subgroups of meningioma, epidermoid, and schwannoma. Results In 41% of meningiomas, the trigeminal nerve is encased by the tumor. In 38% of the meningiomas, the trigeminal nerve is in the SL part of the tumor, and it is in 20% of the IL portion of the tumor. In 38% of the meningiomas, the trochlear nerve is encased by the tumor. The abducens nerve is not always visible (35%). The pathologic nerve pattern differs from that of meningiomas for epidermoid and trigeminal schwannomas. Conclusion The pattern of cranial nerves IV-VI is linked to the type of petroclival tumor. In a meningioma, tumor origin (cavernous, upper clival, tentorial, and petrous apex) is the most important predictor of the location of cranial nerves IV-VI. Classification of four subtypes of petroclival meningiomas using magnetic resonance imaging is very useful to predict the location of deviated cranial nerves IV-VI intraoperatively.
Roushangar, Kiyoumars; Alizadeh, Farhad; Adamowski, Jan
2018-08-01
Understanding precipitation on a regional basis is an important component of water resources planning and management. The present study outlines a methodology based on continuous wavelet transform (CWT) and multiscale entropy (CWME), combined with self-organizing map (SOM) and k-means clustering techniques, to measure and analyze the complexity of precipitation. Historical monthly precipitation data from 1960 to 2010 at 31 rain gauges across Iran were preprocessed by CWT. The multi-resolution CWT approach segregated the major features of the original precipitation series by unfolding the structure of the time series, which was often ambiguous. The entropy concept was then applied to the components obtained from CWT to measure the dispersion, uncertainty, disorder, and diversification of the subcomponents. Based on different validity indices, k-means clustering captured homogeneous areas more accurately, and additional analysis was performed based on the outcome of this approach. The 31 rain gauges in this study were clustered into 6 groups, each one having a unique CWME pattern across different time scales. The results of clustering showed that hydrologic similarity (multiscale variation of precipitation) was not based on geographic contiguity. According to the pattern of entropy across the scales, each cluster was assigned an entropy signature that provided an estimation of the entropy pattern of precipitation data in each cluster. Based on the pattern of mean CWME for each cluster, a characteristic signature was assigned, which provided an estimation of the CWME of a cluster across scales of 1-2, 3-8, and 9-13 months relative to other stations. The validity of the homogeneous clusters demonstrated the usefulness of the proposed approach to regionalize precipitation. Further analysis based on wavelet coherence (WTC) was performed by selecting the central rain gauge in each cluster and analyzing it against temperature, wind, the Multivariate ENSO Index (MEI), and the East Atlantic (EA) and North Atlantic Oscillation (NAO) indices. The results revealed that all climatic features except NAO influenced precipitation in Iran during the 1960-2010 period. Copyright © 2018 Elsevier Inc. All rights reserved.
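The entropy measure applied to the CWT subcomponents is not fully specified in the abstract; shown below is a generic sample-entropy estimator of the kind used in multiscale entropy analyses, which could be applied per wavelet scale. The embedding dimension m and tolerance fraction are conventional defaults, assumed here.

import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    # Sample entropy: -log of the conditional probability that sequences
    # matching for m points (within tolerance r) also match for m+1 points.
    # O(n^2) pairwise version; returns inf if no (m+1)-matches exist.
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def match_pairs(mm):
        emb = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
        return ((d <= r).sum() - len(emb)) / 2   # unordered pairs, no self-matches
    return -np.log(match_pairs(m + 1) / match_pairs(m))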
A skeleton family generator via physics-based deformable models.
Krinidis, Stelios; Chatzis, Vassilios
2009-01-01
This paper presents a novel approach for object skeleton family extraction. The introduced technique utilizes a 2-D physics-based deformable model that parameterizes the object's shape. The deformation equations are solved using modal analysis, and, depending on the model's physical characteristics, a different skeleton is produced each time, generating in this way a family of skeletons. The theoretical properties and the experiments presented demonstrate that the obtained skeletons match hand-labeled skeletons provided by human subjects, even in the presence of significant noise, shape variations, cuts, and tears, and have the same topology as the original skeletons. In particular, the proposed approach produces no spurious branches, without the need for any skeleton pruning method.
Low bit rate coding of Earth science images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1993-01-01
In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.
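To make the RVQ idea concrete: each stage quantizes the residual left by the previous one, so the rate grows per stage while each codebook stays small. A minimal encoding sketch follows; codebook training (e.g., per-stage k-means) and the variable-rate entropy coding of the paper are omitted.

import numpy as np

def rvq_encode(vectors, codebooks):
    # vectors: (n, dim); codebooks: list of (n_codewords, dim) arrays,
    # one per stage. Returns per-stage codeword indices and the final
    # residual, i.e. the remaining quantization error.
    residual = vectors.copy()
    indices = []
    for cb in codebooks:
        d = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)          # nearest codeword at this stage
        indices.append(idx)
        residual = residual - cb[idx]
    return indices, residual

In a sub-band framework like the one above, an encoder of this kind would be applied per sub-band, with rate allocated according to each band's energy.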
Brauer, Chris J; Unmack, Peter J; Beheregaray, Luciano B
2017-12-01
Understanding whether small populations with low genetic diversity can respond to rapid environmental change via phenotypic plasticity is an outstanding research question in biology. RNA sequencing (RNA-seq) has recently provided the opportunity to examine variation in gene expression, a surrogate for phenotypic variation, in nonmodel species. We used a comparative RNA-seq approach to assess expression variation within and among adaptively divergent populations of a threatened freshwater fish, Nannoperca australis, found across a steep hydroclimatic gradient in the Murray-Darling Basin, Australia. These populations evolved under contrasting selective environments (e.g., dry/hot lowland; wet/cold upland) and represent opposite ends of the species' spectrum of genetic diversity and population size. We tested the hypothesis that environmental variation among isolated populations has driven the evolution of divergent expression at ecologically important genes using differential expression (DE) analysis and an anova-based comparative phylogenetic expression variance and evolution model framework based on 27,425 de novo assembled transcripts. Additionally, we tested whether gene expression variance within populations was correlated with levels of standing genetic diversity. We identified 290 DE candidate transcripts, 33 transcripts with evidence for high expression plasticity, and 50 candidates for divergent selection on gene expression after accounting for phylogenetic structure. Variance in gene expression appeared unrelated to levels of genetic diversity. Functional annotation of the candidate transcripts revealed that variation in water quality is an important factor influencing expression variation for N. australis. Our findings suggest that gene expression variation can contribute to the evolutionary potential of small populations. © 2017 John Wiley & Sons Ltd.
Thickness related textural properties of retinal nerve fiber layer in color fundus images.
Odstrcilik, Jan; Kolar, Radim; Tornow, Ralf-Peter; Jan, Jiri; Budai, Attila; Mayer, Markus; Vodakova, Martina; Laemmer, Robert; Lamos, Martin; Kuna, Zdenek; Gazarek, Jiri; Kubena, Tomas; Cernosek, Pavel; Ronzhina, Marina
2014-09-01
Images of the ocular fundus are routinely utilized in ophthalmology. Since an examination using a fundus camera is a relatively fast and cheap procedure, it can serve as a proper diagnostic tool for screening of retinal diseases such as glaucoma. One of the glaucoma symptoms is progressive atrophy of the retinal nerve fiber layer (RNFL), resulting in variations of the RNFL thickness. Here, we introduce a novel approach to capture these variations using computer-aided analysis of the RNFL textural appearance in standard and easily available color fundus images. The proposed method uses features based on Gaussian Markov random fields and local binary patterns, together with various regression models, for prediction of the RNFL thickness. The approach allows description of the changes in RNFL texture that directly reflect variations in the RNFL thickness. Evaluation of the method is carried out on 16 normal ("healthy") and 8 glaucomatous eyes. We achieved significant correlation (normals: ρ=0.72±0.14; p≪0.05, glaucomatous: ρ=0.58±0.10; p≪0.05) between the model-predicted output and the RNFL thickness measured by optical coherence tomography, which is currently regarded as a standard glaucoma assessment device. The evaluation thus revealed good applicability of the proposed approach to measure possible RNFL thinning. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
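Of the texture families mentioned above, gray-level co-occurrence statistics are the simplest to sketch. The snippet below uses scikit-image (version 0.19 or later for the graycomatrix spelling); the distance and angle choices are illustrative, not the paper's settings, and the GMRF and local binary pattern features are omitted.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(patch_8bit):
    # patch_8bit: uint8 image patch (values 0-255). Returns a few GLCM
    # statistics averaged over the chosen distances and angles; such
    # features feed a regression model predicting RNFL thickness.
    glcm = graycomatrix(patch_8bit, distances=[1, 2],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "correlation", "energy")}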
Multiscale spatial and temporal estimation of the b-value
NASA Astrophysics Data System (ADS)
García-Hernández, R.; D'Auria, L.; Barrancos, J.; Padilla, G.
2017-12-01
The estimation of the spatial and temporal variations of the Gutenberg-Richter b-value is of great importance in different seismological applications. One of the problems affecting its estimation is the heterogeneous distribution of the seismicity, which makes the estimate strongly dependent upon the selected spatial and/or temporal scale. This is especially important in volcanoes, where dense clusters of earthquakes often overlap the background seismicity. Proposed solutions for estimating temporal variations of the b-value include considering equally spaced time intervals or variable intervals having an equal number of earthquakes. Similar approaches have been proposed to image the spatial variations of this parameter as well. We propose a novel multiscale approach, based on the method of Ogata and Katsura (1993), allowing a consistent estimation of the b-value regardless of the considered spatial and/or temporal scales. Our method, named MUST-B (MUltiscale Spatial and Temporal characterization of the B-value), consists of computing estimates of the b-value at multiple temporal and spatial scales, extracting for a given spatio-temporal point a statistical estimate of the b-value as well as an indication of the characteristic spatio-temporal scale. The approach also includes a consistent estimation of the completeness magnitude (Mc) and of the uncertainties on both b and Mc. We applied this method to example datasets for volcanic (Tenerife, El Hierro) and tectonic areas (Central Italy), as well as an example application at global scale.
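The single-window estimator underlying such analyses, before any multiscale machinery, is usually the Aki-Utsu maximum-likelihood form. A minimal Python sketch with Utsu's binning correction and the Shi-Bolt standard error follows; the magnitude bin width dm is an assumption.

import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    # Aki-Utsu MLE: b = log10(e) / (mean(M) - (Mc - dm/2)) for magnitudes
    # at or above the completeness magnitude mc; dm/2 corrects for binned
    # magnitudes. The Shi-Bolt formula gives the standard error.
    m = np.asarray(mags, float)
    m = m[m >= mc]
    b = np.log10(np.e) / (m.mean() - (mc - dm / 2))
    se = 2.3 * b**2 * m.std(ddof=1) / np.sqrt(len(m))
    return b, se

A multiscale scheme in the spirit of MUST-B would evaluate this estimator over many nested spatio-temporal windows and combine the resulting estimates statistically.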
Effects of Scenery, Lighting, Glideslope, and Experience on Timing the Landing Flare
ERIC Educational Resources Information Center
Palmisano, Stephen; Favelle, Simone; Sachtler, W. L.
2008-01-01
This study examined three visual strategies for timing the initiation of the landing flare based on perceptions of either: (a) a critical height above ground level; (b) a critical runway width angle ([psi]); or (c) a critical time-to-contact (TTC) with the runway. Visual displays simulated landing approaches with trial-to-trial variations in…
A Features-Based Approach for Teaching Singapore English
ERIC Educational Resources Information Center
Schaetzel, Kirsten; Lim, Beng Soon; Low, Ee Ling
2010-01-01
Research into Singapore English (SgE) has undergone many paradigm shifts from the 1970s to the present. This paper first begins with a consideration of how variation in the English language used in Singapore has been studied. It then identifies the two main varieties of English commonly described in Singapore, namely, Standard SgE (SSE) and…
Socioeconomic Indicators for Analyzing Convergence: The Case of Greece--1960-2004
ERIC Educational Resources Information Center
Liargovas, Panagiotis G.; Fotopoulos, Georgios
2009-01-01
The purpose of this paper is to use socioeconomic indicators for analyzing convergence within Greece at regional (NUTS II) and prefecture levels (NUTS III) since 1960. We use two alternative approaches. The first one is based on the coefficient of variation and the second one on quality of life rankings. We confirm the decline of regional…
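The coefficient-of-variation approach to regional (sigma-)convergence reduces to a one-line computation per year; a minimal Python sketch, with the input layout assumed:

import numpy as np

def sigma_convergence(gdp_per_capita_by_year):
    # Input: (years, regions) array of per-capita income. A declining
    # coefficient of variation over time indicates sigma-convergence.
    arr = np.asarray(gdp_per_capita_by_year, float)
    return arr.std(axis=1, ddof=1) / arr.mean(axis=1)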
Appraising the reliability of visual impact assessment methods
Nickolaus R. Feimer; Kenneth H. Craik; Richard C. Smardon; Stephen R.J. Sheppard
1979-01-01
This paper presents the research approach and selected results of an empirical investigation aimed at the evaluation of selected observer-based visual impact assessment (VIA) methods. The VIA methods under examination were chosen to cover a range of VIA methods currently in use in both applied and research settings. Variation in three facets of VIA methods were...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bildhauer, Michael, E-mail: bibi@math.uni-sb.de; Fuchs, Martin, E-mail: fuchs@math.uni-sb.de
2012-12-15
We discuss several variants of the TV-regularization model used in image recovery. The proposed alternatives are either of nearly linear growth or even of linear growth, but with some weak ellipticity properties. The main feature of the paper is the investigation of the analytic properties of the corresponding solutions.
NASA Astrophysics Data System (ADS)
Emaminejad, Nastaran; Wahi-Anwar, Muhammad; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael
2018-02-01
Translation of radiomics into clinical practice requires confidence in its interpretations. This may be obtained via understanding and overcoming the limitations in current radiomic approaches. Currently there is a lack of standardization in radiomic feature extraction. In this study we examined a few factors that are potential sources of inconsistency in characterizing lung nodules, namely 1) different choices of parameters and algorithms in feature calculation, 2) two CT image dose levels, and 3) different CT reconstruction algorithms (WFBP, denoised WFBP, and iterative). We investigated the effect of variation of these factors on the entropy texture features of lung nodules. CT images of 19 lung nodules identified through our lung cancer screening program were processed by a CAD tool, which provided contours. The radiomic features were extracted by calculating 36 GLCM-based and 4 histogram-based entropy features, in addition to 2 intensity-based features. A robustness index was calculated across different image acquisition parameters to illustrate the reproducibility of features. Most GLCM-based and all histogram-based entropy features were robust across the two CT image dose levels. Denoising of images slightly improved the robustness of some entropy features at WFBP. Iterative reconstruction improved robustness in fewer cases and caused more variation in entropy feature values and their robustness. Across different choices of parameters and algorithms, texture features showed a wide range of variation, as much as 75% for individual nodules. The results indicate the need for harmonization of feature calculations and identification of optimum parameters and algorithms in a radiomics study.
A time series modeling approach in risk appraisal of violent and sexual recidivism.
Bani-Yaghoub, Majid; Fedoroff, J Paul; Curry, Susan; Amundsen, David E
2010-10-01
For over half a century, various clinical and actuarial methods have been employed to assess the likelihood of violent recidivism. Yet there is a need for new methods that can improve the accuracy of recidivism predictions. This study proposes a new time series modeling approach that generates high levels of predictive accuracy over short and long periods of time. The proposed approach outperformed two widely used actuarial instruments (i.e., the Violence Risk Appraisal Guide and the Sex Offender Risk Appraisal Guide). Furthermore, analysis of temporal risk variations based on specific time series models can add valuable information to the risk assessment and management of violent offenders.
NASA Astrophysics Data System (ADS)
Gupta, Puneet; Kahng, Andrew B.; Kim, Youngmin; Sylvester, Dennis
2006-03-01
Focus is one of the major sources of linewidth variation. CD variation caused by defocus is largely systematic after the layout is finished. In particular, dense lines "smile" through focus while isolated lines "frown" in typical Bossung plots. This well-defined systematic behavior of focus-dependent CD variation allows us to develop a self-compensating design methodology. In this work, we propose a novel design methodology that allows explicit compensation of focus-dependent CD variation, either within a cell (self-compensated cells) or across cells in a critical path (self-compensated design). By creating iso and dense variants for each library cell, we can achieve designs that are more robust to focus variation. Optimization with a mixture of iso and dense cell variants is possible both for area and leakage power, with the latter providing an interesting complement to existing leakage reduction techniques such as dual-Vth. We implement both heuristic and Mixed-Integer Linear Programming (MILP) solution methods to address this optimization, and experimentally compare their results. Our results indicate that designing with a self-compensated cell library incurs ~12% area penalty and ~6% leakage increase over original layouts while compensating for focus-dependent CD variation (i.e., the design meets timing constraints across a large range of focus variation). We observe ~27% area penalty and ~7% leakage increase at the worst-case defocus condition using only single-pitch cells. The area penalty of circuits after using either the heuristic or MILP optimization approach is reduced to ~3% while maintaining timing. We also apply our optimizations to leakage, which traditionally shows very large variability due to its exponential relationship with gate CD. We conclude that a mixed iso/dense library combined with a sensitivity-based optimization approach yields much better area/timing/leakage tradeoffs than using a self-compensated cell library alone. Self-compensated design shows an average of 25% leakage reduction at the worst defocus condition for the benchmark designs that we have studied.
Geomorphic determinants of species composition of alpine tundra, Glacier National Park, U.S.A.
George P. Malanson,; Bengtson, Lindsey E.; Fagre, Daniel B.
2012-01-01
Because the distribution of alpine tundra is associated with spatially limited cold climates, global warming may threaten its local extent or existence. This notion has been challenged, however, based on observations of the diversity of alpine tundra in small areas primarily due to topographic variation. The importance of diversity in temperature or moisture conditions caused by topographic variation is an open question, and we extend this to geomorphology more generally. The extent to which geomorphic variation per se, based on relatively easily assessed indicators, can account for the variation in alpine tundra community composition is analyzed versus the inclusion of broad indicators of regional climate variation. Visual assessments of topography are quantified and reduced using principal components analysis (PCA). Observations of species cover are reduced using detrended correspondence analysis (DCA). A “best subsets” regression approach using the Akaike Information Criterion for selection of variables is compared to a simple stepwise regression with DCA scores as the dependent variable and scores on significant PCA axes plus more direct measures of topography as independent variables. Models with geographic coordinates (representing regional climate gradients) excluded explain almost as much variation in community composition as models with them included, although they are important contributors to the latter. The geomorphic variables in the model are those associated with local moisture differences such as snowbeds. The potential local variability of alpine tundra can be a buffer against climate change, but change in precipitation may be as important as change in temperature.
Variations in the implementation and characteristics of chiropractic services in VA.
Lisi, Anthony J; Khorsan, Raheleh; Smith, Monica M; Mittman, Brian S
2014-12-01
In 2004, the US Department of Veterans Affairs expanded its delivery of chiropractic care by establishing onsite chiropractic clinics at select facilities across the country. Systematic information regarding the planning and implementation of these clinics and describing their features and performance is lacking. To document the planning, implementation, key features and performance of VA chiropractic clinics, and to identify variations and their underlying causes and key consequences as well as their implications for policy, practice, and research on the introduction of new clinical services into integrated health care delivery systems. Comparative case study of 7 clinics involving site visit-based and telephone-based interviews with 118 key stakeholders, including VA clinicians, clinical leaders and administrative staff, and selected external stakeholders, as well as reviews of key documents and administrative data on clinic performance and service delivery. Interviews were recorded, transcribed, and analyzed using a mixed inductive (exploratory) and deductive approach. Interview data revealed considerable variations in clinic planning and implementation processes and clinic features, as well as perceptions of clinic performance and quality. Administrative data showed high variation in patterns of clinic patient care volume over time. A facility's initial willingness to establish a chiropractic clinic, along with a higher degree of perceived evidence-based and collegial attributes of the facility chiropractor, emerged as key factors associated with higher and more consistent delivery of chiropractic services and higher perceived quality of those services.
Secular temperature trends for the southern Rocky Mountains over the last five centuries
NASA Astrophysics Data System (ADS)
Berkelhammer, M.; Stott, L. D.
2012-09-01
Pre-instrumental surface temperature variability in the Southwestern United States has traditionally been reconstructed using variations in the annual ring widths of high altitude trees that live near a growth-limiting isotherm. A number of studies have suggested that the response of some trees to temperature variations is non-stationary, warranting the development of alternative approaches towards reconstructing past regional temperature variability. Here we present a five-century temperature reconstruction for a high-altitude site in the Rocky Mountains derived from the oxygen isotopic composition of cellulose (δ18Oc) from Bristlecone Pine trees. The record is independent of the co-located growth-based reconstruction while providing the same temporal resolution and absolute age constraints. The empirical correlation between δ18Oc and instrumental temperatures is used to produce a temperature transfer function. A forward-model for cellulose isotope variations, driven by meteorological data and output from an isotope-enabled General Circulation Model, is used to evaluate the processes that propagate the temperature signal to the proxy. The cellulose record documents persistent multidecadal variations in δ18Oc that are attributable to temperature shifts on the order of 1°C but no sustained monotonic rise in temperature or a step-like increase since the late 19th century. The isotope-based temperature history is consistent with both regional wood density-based temperature estimates and some sparse early instrumental records.
Towards a street-level pollen concentration and exposure forecast
NASA Astrophysics Data System (ADS)
van der Molen, Michiel; Krol, Maarten; van Vliet, Arnold; Heuvelink, Gerard
2015-04-01
Atmospheric pollen is an increasing source of nuisance for people in industrialised countries and is associated with significant costs of medication and sick leave. Citizen pollen warnings are often based either on emission maps derived from local temperature-sum approaches or on long-range atmospheric model approaches. In practice, locally observed pollen may originate both from local sources (plants in streets and gardens) and from long-range transport. We argue that making this distinction is relevant because the diurnal and spatial variation in pollen concentrations is much larger for pollen from local sources than for pollen from long-range transport, due to boundary layer processes. This may have an important impact on the exposure of citizens to pollen and on mitigation strategies. However, little is known about the partitioning of pollen into local and long-range origin categories. Our objective is to study how the concentrations of pollen from different sources vary temporally and spatially, and how the source region influences exposure and mitigation strategies. We built a Hay Fever Forecast system (HFF) based on WRF-chem, Allergieradar.nl, and geo-statistical downscaling techniques. HFF distinguishes between local sources (individual trees) and regional sources (based on tree distribution maps). We show first results on how the diurnal variation of pollen concentrations depends on source proximity. Ultimately, we will compare the model with local pollen counts, patient nuisance scores and medicine use.
Yang, Zhutian; Qiu, Wei; Sun, Hongjian; Nallanathan, Arumugam
2016-01-01
Due to the increasing complexity of electromagnetic signals, there exists a significant challenge for radar emitter signal recognition. To address this challenge, multi-component radar emitter recognition under a complicated noise environment is studied in this paper. A novel radar emitter recognition approach based on the three-dimensional distribution feature and transfer learning is proposed. A cubic feature for the time-frequency-energy distribution is proposed to describe the intra-pulse modulation information of radar emitters. Furthermore, the feature is reconstructed by using transfer learning in order to obtain a feature robust against signal-to-noise ratio (SNR) variation. Finally, the relevance vector machine is used to classify radar emitter signals. Simulations demonstrate that the approach proposed in this paper has better performance in accuracy and robustness than existing approaches. PMID:26927111
Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-05
The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix. Copyright © 2017 Elsevier B.V. All rights reserved.
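The dissimilarity-based decomposition advocated above is, in its classical form, principal coordinates analysis: double-centre the squared dissimilarities and eigen-decompose that matrix rather than the covariance matrix. A minimal sketch follows, assuming a symmetric pairwise dissimilarity matrix D as input; it illustrates the general technique, not the authors' exact implementation.

import numpy as np

def pcoa_scores(D, n_components=2):
    # Classical scaling (PCoA): eigen-decompose the double-centred squared
    # dissimilarity matrix B = -0.5 * J D^2 J, so between-group dissimilarity
    # structure, rather than overall variance, drives the leading factors.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:n_components]
    return V[:, order] * np.sqrt(np.clip(w[order], 0, None))

Plotting the leading coordinates then shows the class separation that the covariance-based decomposition tends to blur.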
Forecasting seasonal hydrologic response in major river basins
NASA Astrophysics Data System (ADS)
Bhuiyan, A. M.
2014-05-01
Seasonal precipitation variation due to natural climate variation influences stream flow and the apparent frequency and severity of extreme hydrological conditions such as floods and droughts. To study hydrologic response and understand the occurrence of extreme hydrological events, the relevant forcing variables must be identified. This study attempts to assess the historical occurrence and context of extreme hydrologic flow events and to quantify their relation to relevant climate variables. Once identified, the flow data and climate variables are evaluated to identify the primary indicators of hydrologic extreme event occurrence. Existing studies focus on developing basin-scale forecasting techniques based on climate anomalies in El Nino/La Nina episodes linked to global climate. Building on earlier work, the goal of this research is to quantify variations in historical river flows at the seasonal temporal scale, and at regional to continental spatial scales. The work identifies and quantifies runoff variability of major river basins and correlates flow with environmental forcing variables such as El Nino, La Nina, and the sunspot cycle. These variables are expected to be the primary external natural indicators of inter-annual and inter-seasonal patterns of regional precipitation and river flow. Relations between continental-scale hydrologic flows and external climate variables are evaluated through direct correlations in a seasonal context with environmental phenomena such as sunspot numbers (SSN), the Southern Oscillation Index (SOI), and the Pacific Decadal Oscillation (PDO). Methods including stochastic time series analysis and artificial neural networks are developed to represent the seasonal variability evident in the historical records of river flows. River flows are categorized into low, average, and high flow levels to evaluate and simulate flow variations under associated climate variable variations. Results demonstrated that no single method is best suited to represent scenarios leading to extreme flow conditions. For selected flow scenarios, the persistence model performance may be comparable to more complex multivariate approaches, and complex methods did not always improve flow estimation. Overall model performance indicates that including river flows and forcing variables on average improves extreme event forecasting skill. As a means to further refine the flow estimation, an ensemble forecast method is implemented to provide a likelihood-based indication of expected river flow magnitude and variability. Results indicate seasonal flow variations are well captured in the ensemble range; therefore, the ensemble approach can often prove efficient in estimating extreme river flow conditions. The discriminant prediction approach, a probabilistic measure to forecast streamflow, is also adopted to assess model performance. Results show the efficiency of the method in terms of representing uncertainties in the forecasts.
NASA Astrophysics Data System (ADS)
Yasmirullah, Septia Devi Prihastuti; Iriawan, Nur; Sipayung, Feronika Rosalinda
2017-11-01
The success of regional economic establishment could be measured by economic growth. Since Act No. 32 of 2004 was implemented, economic imbalance among Indonesia's regencies has been increasing. This condition runs contrary to the government's goal of building societal welfare through the development of economic activity in each region. This research aims to examine economic growth through the distribution of bank credit to each of Indonesia's regencies. The data analyzed in this research are hierarchically structured and follow a normal distribution at the first level. Two modeling approaches are employed: a global one-level Bayesian approach and a two-level hierarchical Bayesian approach. The results show that the hierarchical Bayesian model yields better estimates than the global one-level Bayesian model. This indicates that the differing economic growth of each province is significantly influenced by variations in micro-level characteristics within the province, which are in turn significantly affected by city and province characteristics at the second level.
Coarse-graining errors and numerical optimization using a relative entropy framework
NASA Astrophysics Data System (ADS)
Chaimovich, Aviel; Shell, M. Scott
2011-03-01
The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, Srel, that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
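As an illustration of the relative-entropy idea, the sketch below fits a one-dimensional Gaussian coarse-grained model to samples from a double-well reference ensemble. It relies on the fact that, for an identity mapping, minimizing Srel over the CG parameters is equivalent to minimizing the cross-entropy of the reference configurations under the CG model; the reference ensemble, model form, and optimizer settings are illustrative assumptions, not the paper's system:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_reference(n):
    # Reference "atomistic" ensemble: Boltzmann samples from a double-well
    # potential U(x) = (x^2 - 1)^2, drawn by simple rejection sampling.
    xs = []
    while len(xs) < n:
        x = rng.uniform(-3, 3)
        if rng.uniform() < np.exp(-(x**2 - 1.0) ** 2):
            xs.append(x)
    return np.array(xs)

x_ref = sample_reference(5000)

# CG model: a Gaussian with parameters (mu, log_sigma). Minimizing S_rel
# over these parameters reduces to gradient descent on the mean negative
# log-likelihood of the reference samples under the CG model.
mu, log_s = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    s = np.exp(log_s)
    z = (x_ref - mu) / s
    g_mu = -np.mean(z) / s          # d<NLL>/d mu
    g_ls = 1.0 - np.mean(z**2)      # d<NLL>/d log_sigma
    mu, log_s = mu - lr * g_mu, log_s - lr * g_ls

print(f"CG Gaussian: mu = {mu:.3f}, sigma = {np.exp(log_s):.3f}")
```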
Milenković, Jana; Dalmış, Mehmet Ufuk; Žgajnar, Janez; Platel, Bram
2017-09-01
New ultrafast view-sharing sequences have enabled breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to be performed at high spatial and temporal resolution. The aim of this study is to evaluate the diagnostic potential of textural features that quantify the spatiotemporal changes of the contrast-agent uptake in computer-aided diagnosis of malignant and benign breast lesions imaged with high spatial and temporal resolution DCE-MRI. The proposed approach is based on the textural analysis quantifying the spatial variation of six dynamic features of the early-phase contrast-agent uptake of a lesion's largest cross-sectional area. The textural analysis is performed by means of the second-order gray-level co-occurrence matrix, gray-level run-length matrix and gray-level difference matrix. This yields 35 textural features to quantify the spatial variation of each of the six dynamic features, providing a feature set of 210 features in total. The proposed feature set is evaluated based on receiver operating characteristic (ROC) curve analysis in a cross-validation scheme for random forests (RF) and two support vector machine classifiers, with linear and radial basis function (RBF) kernel. Evaluation is done on a dataset with 154 breast lesions (83 malignant and 71 benign) and compared to a previous approach based on 3D morphological features and the average and standard deviation of the same dynamic features over the entire lesion volume as well as their average for the smaller region of the strongest uptake rate. The area under the ROC curve (AUC) obtained by the proposed approach with the RF classifier was 0.8997, which was significantly higher (P = 0.0198) than the performance achieved by the previous approach (AUC = 0.8704) on the same dataset. Similarly, the proposed approach obtained a significantly higher result for both SVM classifiers with RBF (P = 0.0096) and linear kernel (P = 0.0417) obtaining AUC of 0.8876 and 0.8548, respectively, compared to AUC values of previous approach of 0.8562 and 0.8311, respectively. The proposed approach based on 2D textural features quantifying spatiotemporal changes of the contrast-agent uptake significantly outperforms the previous approach based on 3D morphology and dynamic analysis in differentiating the malignant and benign breast lesions, showing its potential to aid clinical decision making. © 2017 American Association of Physicists in Medicine.
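A minimal sketch of the second-order texture step, assuming a recent scikit-image (which provides graycomatrix/graycoprops); the placeholder array stands in for a quantized dynamic-feature map of a lesion cross-section:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
# Placeholder 8-bit map of a dynamic feature over the lesion's largest
# cross-sectional area; in the study this would come from DCE-MRI.
lesion = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

glcm = graycomatrix(lesion, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
for prop in ["contrast", "homogeneity", "energy", "correlation"]:
    # Average each texture feature over the two co-occurrence directions.
    print(prop, graycoprops(glcm, prop).mean())
```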
NASA Astrophysics Data System (ADS)
DesRoches, Aaron J.; Butler, Karl E.
2016-12-01
Variations in self-potentials (SP) measured at surface during pumping of a heterogeneous confined fractured rock aquifer have been monitored and modelled in order to investigate capabilities and limitations of SP methods in estimating aquifer hydraulic properties. SP variations were recorded around a pumping well using an irregular grid of 31 non-polarizing Pb-PbCl2 electrodes that were referenced to a remote electrode and connected to a commercial multiplexer and digitizer/data logger through a passive lowpass filter on each channel. The lowpass filter reduced noise by a factor of 10 compared to levels obtained using the data logger's integration-based sampling method for powerline noise suppression alone. SP signals showed a linear relationship with water levels observed in the pumping and monitoring wells over the pumping period, with an apparent electrokinetic coupling coefficient of -3.4 mV per metre. Following recent developments in SP methodology, variability of the SP response between different electrodes is taken as a proxy for lateral variations in hydraulic head within the aquifer and used to infer lateral variations in the aquifer's apparent transmissivity. In order to demonstrate the viability of this approach, SP is modelled numerically to determine its sensitivity to (i) lateral variations in the hydraulic conductivity of the confined aquifer and (ii) the electrical conductivity of the confining layer and conductive well casing. In all cases, SP simulated on the surface still varies linearly with hydraulic head modelled at the base of the confining layer, although the apparent coupling coefficient changes to varying degrees. Using the linear relationship observed in the field, drawdown curves were inferred for each electrode location using SP variations observed over the duration of the pumping period. Transmissivity estimates, obtained by fitting the Theis model to inferred drawdown curves at all 31 electrodes, fell within a narrow range of (2.0-4.2) × 10^-3 m^2/s and were consistent with values measured in the pumping and monitoring wells. This approach will be of particular interest where monitoring wells are lacking for direct measurement, and SP on the surface can be used to quickly estimate hydraulic properties.
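The two-step logic of the field analysis, converting SP to apparent drawdown through the linear coupling coefficient and then fitting the Theis model, can be sketched as follows; the pumping rate, electrode distance, and synthetic SP record are placeholder assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import exp1

C = -3.4e-3          # apparent coupling coefficient, V per m of head change
Q = 1.0e-3           # pumping rate, m^3/s (assumed)
r = 25.0             # electrode-to-well distance, m (assumed)

t = np.logspace(1, 4, 40)             # time since pumping started, s
sp = C * 0.8 * np.log(t / 5.0)        # placeholder SP record, V
drawdown = sp / C                     # step 1: inferred drawdown, m

def theis(t, T, S):
    # Theis solution: s = Q/(4 pi T) W(u), with W(u) the exponential integral.
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Step 2: fit transmissivity T and storativity S to the inferred drawdown.
(T_fit, S_fit), _ = curve_fit(theis, t, drawdown, p0=(1e-3, 1e-4),
                              bounds=([1e-6, 1e-8], [1e-1, 1e-1]))
print(f"T = {T_fit:.2e} m^2/s, S = {S_fit:.2e}")
```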
Multi-observation PET image analysis for patient follow-up quantitation and therapy assessment
NASA Astrophysics Data System (ADS)
David, S.; Visvikis, D.; Roux, C.; Hatt, M.
2011-09-01
In positron emission tomography (PET) imaging, an early therapeutic response is usually characterized by variations of semi-quantitative parameters restricted to maximum SUV measured in PET scans during the treatment. Such measurements do not reflect overall tumor volume and radiotracer uptake variations. The proposed approach is based on multi-observation image analysis for merging several PET acquisitions to assess tumor metabolic volume and uptake variations. The fusion algorithm is based on iterative estimation using a stochastic expectation maximization (SEM) algorithm. The proposed method was applied to simulated and clinical follow-up PET images. We compared the multi-observation fusion performance to threshold-based methods proposed for the assessment of the therapeutic response based on functional volumes. On simulated datasets, the adaptive threshold applied independently on both images led to higher errors than the ASEM fusion, and on clinical datasets it failed to provide coherent measurements for four patients out of seven due to aberrant delineations. The ASEM method demonstrated improved and more robust estimation, leading to more pertinent measurements. Future work will consist of extending the methodology and applying it to clinical multi-tracer datasets in order to evaluate its potential impact on the biological tumor volume definition for radiotherapy applications.
NASA Astrophysics Data System (ADS)
Bell, L. R.; Dowling, J. A.; Pogson, E. M.; Metcalfe, P.; Holloway, L.
2017-01-01
Accurate, efficient auto-segmentation methods are essential for the clinical efficacy of adaptive radiotherapy delivered with highly conformal techniques. Current atlas-based auto-segmentation techniques are adequate in this respect; however, they fail to account for inter-observer variation. An atlas-based segmentation method that incorporates inter-observer variation is proposed. This method is validated on a whole breast radiotherapy cohort containing 28 CT datasets with CTVs delineated by eight observers. To optimise atlas accuracy, the cohort was divided into categories by mean body mass index and laterality, with atlases generated for each in a leave-one-out approach. Observer CTVs were merged and thresholded to generate an auto-segmentation model representing both inter-observer and inter-patient differences. For each category, the atlas was registered to the left-out dataset to enable propagation of the auto-segmentation from atlas space. Auto-segmentation time was recorded. The segmentation was compared to the gold-standard contour using the dice similarity coefficient (DSC) and mean absolute surface distance (MASD). Comparison with the smallest and largest CTV was also made. This atlas-based auto-segmentation method incorporating inter-observer variation was shown to be efficient (<4 min) and accurate for whole breast radiotherapy, with good agreement (DSC > 0.7, MASD < 9.3 mm) between the auto-segmented contours and CTV volumes.
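For reference, the dice similarity coefficient used for validation reduces to a few lines; the toy masks below are illustrative:

```python
import numpy as np

def dice(auto: np.ndarray, gold: np.ndarray) -> float:
    # DSC = 2|A intersect B| / (|A| + |B|); 1.0 means perfect overlap.
    inter = np.logical_and(auto, gold).sum()
    return 2.0 * inter / (auto.sum() + gold.sum())

a = np.zeros((10, 10), bool); a[2:7, 2:7] = True   # auto-segmented mask
b = np.zeros((10, 10), bool); b[3:8, 3:8] = True   # gold-standard mask
print(dice(a, b))  # 0.64 for these toy masks
```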
Linear programming computational experience with onyx
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atrek, E.
1994-12-31
ONYX is a linear programming software package based on an efficient variation of the gradient projection method. When fully configured, it is intended for application to industrial size problems. While the computational experience is limited at the time of this abstract, the technique is found to be robust and competitive with existing methodology in terms of both accuracy and speed. An overview of the approach is presented together with a description of program capabilities, followed by a discussion of up-to-date computational experience with the program. Conclusions include advantages of the approach and envisioned future developments.
Optimal control of underactuated mechanical systems: A geometric approach
NASA Astrophysics Data System (ADS)
Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela
2010-08-01
In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.
Robust-mode analysis of hydrodynamic flows
NASA Astrophysics Data System (ADS)
Roy, Sukesh; Gord, James R.; Hua, Jia-Chen; Gunaratne, Gemunu H.
2017-04-01
The emergence of techniques to extract high-frequency high-resolution data introduces a new avenue for modal decomposition to assess the underlying dynamics, especially of complex flows. However, this task requires the differentiation of robust, repeatable flow constituents from noise and other irregular features of a flow. Traditional approaches involving low-pass filtering and principal component analysis have shortcomings. The approach outlined here, referred to as robust-mode analysis, is based on Koopman decomposition. Three applications to (a) a counter-rotating cellular flame state, (b) variations in financial markets, and (c) turbulent injector flows are provided.
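Robust-mode analysis builds on Koopman (dynamic mode) decomposition; a minimal textbook-style sketch of the decomposition step on a snapshot matrix is shown below, with random data standing in for flow measurements and the truncation rank chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 101))   # 200 spatial points, 101 snapshots
X1, X2 = X[:, :-1], X[:, 1:]          # snapshot pairs x_k -> x_{k+1}

U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 10                                # truncation rank (heuristic choice)
Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vh[:r].conj().T

A_tilde = Ur.conj().T @ X2 @ Vr @ np.linalg.inv(Sr)   # reduced operator
eigvals, W = np.linalg.eig(A_tilde)
modes = X2 @ Vr @ np.linalg.inv(Sr) @ W               # exact DMD modes
print(np.abs(eigvals))   # |lambda| near 1 marks persistent (robust) modes
```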
Analysing photonic structures in plants
Vignolini, Silvia; Moyroud, Edwige; Glover, Beverley J.; Steiner, Ullrich
2013-01-01
The outer layers of a range of plant tissues, including flower petals, leaves and fruits, exhibit an intriguing variation of microscopic structures. Some of these structures include ordered periodic multilayers and diffraction gratings that give rise to interesting optical appearances. The colour arising from such structures is generally brighter than pigment-based colour. Here, we describe the main types of photonic structures found in plants and discuss the experimental approaches that can be used to analyse them. These experimental approaches allow identification of the physical mechanisms producing structural colours with a high degree of confidence. PMID:23883949
Jiang, Rui ; Yang, Hua ; Zhou, Linqi ; Kuo, C.-C. Jay ; Sun, Fengzhu ; Chen, Ting
2007-01-01
The increasing demand for the identification of genetic variation responsible for common diseases has translated into a need for sophisticated methods for effectively prioritizing mutations occurring in disease-associated genetic regions. In this article, we prioritize candidate nonsynonymous single-nucleotide polymorphisms (nsSNPs) through a bioinformatics approach that takes advantage of a set of improved numeric features derived from protein-sequence information and a new statistical learning model called “multiple selection rule voting” (MSRV). The sequence-based features can maximize the scope of applications of our approach, and the MSRV model can capture subtle characteristics of individual mutations. Systematic validation of the approach demonstrates that this approach is capable of prioritizing causal mutations for both simple monogenic diseases and complex polygenic diseases. Further studies of familial Alzheimer diseases and diabetes show that the approach can enrich mutations underlying these polygenic diseases among the top of candidate mutations. Application of this approach to unclassified mutations suggests that there are 10 suspicious mutations likely to cause diseases, and there is strong support for this in the literature. PMID:17668383
NASA Astrophysics Data System (ADS)
Aviles, Angelica I.; Widlak, Thomas; Casals, Alicia; Nillesen, Maartje M.; Ammari, Habib
2017-06-01
Cardiac motion estimation is an important diagnostic tool for detecting heart diseases and it has been explored with modalities such as MRI and conventional ultrasound (US) sequences. US cardiac motion estimation still presents challenges because of complex motion patterns and the presence of noise. In this work, we propose a novel approach to estimate cardiac motion using ultrafast ultrasound data. Our solution is based on a variational formulation characterized by the L2-regularized class. Displacement is represented by a lattice of B-splines and we ensure robustness, in the sense of eliminating outliers, by applying a maximum likelihood type estimator. While this is an important part of our solution, the main object of this work is to combine low-rank data representation with topology preservation. Low-rank data representation (achieved by finding the k-dominant singular values of a Casorati matrix arranged from the data sequence) speeds up the global solution and achieves noise reduction. On the other hand, topology preservation (achieved by monitoring the Jacobian determinant) allows one to radically rule out distortions while carefully controlling the size of allowed expansions and contractions. Our variational approach is carried out on a realistic dataset as well as on a simulated one. We demonstrate how our proposed variational solution deals with complex deformations through careful numerical experiments. The low-rank constraint speeds up the convergence of the optimization problem while topology preservation ensures a more accurate displacement. Beyond cardiac motion estimation, our approach is promising for the analysis of other organs that exhibit motion.
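The low-rank step described above can be sketched directly: arrange the image sequence as a Casorati matrix and keep its k dominant singular values; the shapes and the choice of k are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
frames = rng.standard_normal((64, 64, 30))        # H x W x T sequence
casorati = frames.reshape(-1, frames.shape[-1])   # (H*W) x T Casorati matrix

U, s, Vh = np.linalg.svd(casorati, full_matrices=False)
k = 5                                             # dominant values kept
low_rank = (U[:, :k] * s[:k]) @ Vh[:k]            # rank-k approximation
denoised = low_rank.reshape(frames.shape)

# Residual energy removed by the low-rank constraint (mostly noise).
print(np.linalg.norm(casorati - low_rank) / np.linalg.norm(casorati))
```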
Gao, Yonghui; Chen, Xiaoli; Wang, Jianhua; Shangguan, Shaofang; Dai, Yaohua; Zhang, Ting; Liu, Junling
2013-06-20
With the increasing interest in copy number variation as it pertains to human genomic variation, common phenotypes, and disease susceptibility, there is a pressing need for methods to accurately identify copy number. In this study, we developed a simple approach that combines multiplex PCR with matrix-assisted laser desorption ionization time-of-flight mass spectrometry for submicroscopic copy number variation detection. Two pairs of primers were used to simultaneously amplify query and endogenous control regions in the same reaction. Using a base extension reaction, the two amplicons were then distinguished and quantified in a mass spectrometry map. The peak ratio between the test region and the endogenous control region was manually calculated. The relative copy number could be determined by comparing the peak ratio between the test and control samples. This method generated a copy number measurement comparable to those produced by two other commonly used methods - multiplex ligation-dependent probe amplification and quantitative real-time PCR. Furthermore, it can discriminate a wide range of copy numbers. With a typical 384-format SpectroCHIP, at least six loci on 384 samples can be analyzed simultaneously in a hexaplex assay, making this assay adaptable for high throughput, and potentially applicable for large-scale association studies. Copyright © 2013 Elsevier B.V. All rights reserved.
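The quantitation arithmetic amounts to a normalized peak ratio; a minimal sketch with placeholder peak areas, assuming a reference sample of known copy number two:

```python
# Relative copy number from mass-spectrometry peak ratios: the query/control
# ratio of the test sample is normalized by that of a reference sample of
# known copy number. Peak areas below are placeholder values.
def copy_number(test_query, test_control, ref_query, ref_control, ref_cn=2):
    test_ratio = test_query / test_control
    ref_ratio = ref_query / ref_control
    return ref_cn * test_ratio / ref_ratio

# e.g. a test sample whose query/control peak ratio is 1.5x the reference
print(copy_number(3.0, 1.0, 2.0, 1.0))  # -> 3.0 copies
```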
Modelling approaches: the case of schizophrenia.
Heeg, Bart M S; Damen, Joep; Buskens, Erik; Caleo, Sue; de Charro, Frank; van Hout, Ben A
2008-01-01
Schizophrenia is a chronic disease characterized by periods of relative stability interrupted by acute episodes (or relapses). The course of the disease may vary considerably between patients. Patient histories show considerable inter- and even intra-individual variability. We provide a critical assessment of the advantages and disadvantages of three modelling techniques that have been used in schizophrenia: decision trees, (cohort and micro-simulation) Markov models and discrete event simulation models. These modelling techniques are compared in terms of building time, data requirements, medico-scientific experience, simulation time, clinical representation, and their ability to deal with patient heterogeneity, the timing of events, prior events, patient interaction, interaction between co-variates and variability (first-order uncertainty). We note that, depending on the research question, the optimal modelling approach should be selected based on the expected differences between the comparators, the number of co-variates, the number of patient subgroups, the interactions between co-variates, and simulation time. Finally, it is argued that in case micro-simulation is required for the cost-effectiveness analysis of schizophrenia treatments, a discrete event simulation model is best suited to accurately capture all of the relevant interdependencies in this chronic, highly heterogeneous disease with limited long-term follow-up data.
Using the knowledge-to-action framework to guide the timing of dialysis initiation.
Sood, Manish M; Manns, Braden; Nesrallah, Gihad
2014-05-01
The optimal time at which to initiate chronic dialysis remains unknown. Using a contemporary knowledge translation approach (the knowledge-to-action framework), a pan-Canadian collaboration (CANN-NET) set out to study the scope of the problem, then develop and disseminate evidence-based guidelines addressing the timing of dialysis initiation. The purpose of this review is to summarize the key findings and describe the planned Canadian knowledge translation strategy for improving knowledge and practices pertaining to the timing of dialysis initiation. New research has provided considerable insights regarding the initiation of dialysis. A Canadian cohort study identified significant variation in the estimated glomerular filtration rate level at dialysis initiation, and a survey of providers identified related knowledge gaps that might be amenable to knowledge translation interventions. A recent knowledge synthesis/guideline concluded that early dialysis initiation is costly, and provides no measurable clinical benefits. A systematic knowledge translation intervention including a multifaceted approach may aid in reducing variation in practice and improving the quality of care. Utilizing the knowledge-to-action framework, we identified practice variation and key barriers to the optimal timing for dialysis initiation that may be amenable to knowledge translation strategies.
Liu, Siyang; Huang, Shujia; Rao, Junhua; Ye, Weijian; Krogh, Anders; Wang, Jun
2015-01-01
Comprehensive recognition of genomic variation in one individual is important for understanding disease and developing personalized medication and treatment. Many tools based on DNA re-sequencing exist for identification of single nucleotide polymorphisms, small insertions and deletions (indels) as well as large deletions. However, these approaches consistently display a substantial bias against the recovery of complex structural variants and novel sequence in individual genomes and do not provide interpretation information such as the annotation of ancestral state and formation mechanism. We present a novel approach implemented in a single software package, AsmVar, to discover, genotype and characterize different forms of structural variation and novel sequence from population-scale de novo genome assemblies up to nucleotide resolution. Application of AsmVar to several human de novo genome assemblies captures a wide spectrum of structural variants and novel sequences present in the human population with high sensitivity and specificity. Our method provides a direct solution for investigating structural variants and novel sequences from de novo genome assemblies, facilitating the construction of population-scale pan-genomes. Our study also highlights the usefulness of the de novo assembly strategy for definition of genome structure.
Mode instability in one-dimensional anharmonic lattices: Variational equation approach
NASA Astrophysics Data System (ADS)
Yoshimura, K.
1999-03-01
The stability of normal mode oscillations has been studied in detail under the single-mode excitation condition for the Fermi-Pasta-Ulam-β lattice. Numerical experiments indicate that the mode stability depends strongly on k/N, where k is the wave number of the initially excited mode and N is the number of degrees of freedom in the system. It has been found that this feature does not change when N increases. We propose an average variational equation (an approximate version of the variational equation) as a theoretical tool to facilitate a linear stability analysis. It is shown that this strong k/N dependence of the mode stability can be explained from the viewpoint of the linear stability of the relevant orbits. We introduce a low-dimensional approximation of the average variational equation, which approximately describes the time evolution of variations in four normal mode amplitudes. The linear stability analysis based on this four-mode approximation demonstrates that the parametric instability mechanism plays a crucial role in the strong k/N dependence of the mode stability.
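A minimal numerical sketch of the kind of single-mode-excitation experiment described above: integrate the FPU-β lattice with a symplectic (velocity Verlet) scheme and project onto the fixed-end normal modes, so that growth of initially empty modes signals instability. Parameter values are illustrative, not those of the paper:

```python
import numpy as np

N, beta, dt, steps = 32, 1.0, 0.05, 20000
k0, amp = 4, 1.0                          # initially excited mode, amplitude
n = np.arange(1, N + 1)                   # particle indices, fixed ends

def mode(k):
    # Normal modes of the fixed-end harmonic chain.
    return np.sin(np.pi * k * n / (N + 1))

def force(q):
    d = np.diff(np.concatenate(([0.0], q, [0.0])))  # bond extensions
    f = d + beta * d**3                             # FPU-beta bond forces
    return f[1:] - f[:-1]                           # net force per particle

q, p = amp * mode(k0), np.zeros(N)
for _ in range(steps):                    # velocity Verlet (symplectic)
    p += 0.5 * dt * force(q)
    q += dt * p
    p += 0.5 * dt * force(q)

# Project onto normal modes; nonzero amplitudes at k != k0 signal instability.
A = np.array([2.0 / (N + 1) * (mode(k) @ q) for k in range(1, N + 1)])
print(np.round(np.abs(A), 3))
```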
The diversity and evolution of ecological and environmental citizen science
Tweddle, John C.; Savage, Joanna; Robinson, Lucy D.; Roy, Helen E.
2017-01-01
Citizen science—the involvement of volunteers in data collection, analysis and interpretation—simultaneously supports research and public engagement with science, and its profile is rapidly rising. Citizen science represents a diverse range of approaches, but until now this diversity has not been quantitatively explored. We conducted a systematic internet search and discovered 509 environmental and ecological citizen science projects. We scored each project for 32 attributes based on publicly obtainable information and used multiple factor analysis to summarise this variation to assess citizen science approaches. We found that projects varied according to their methodological approach from ‘mass participation’ (e.g. easy participation by anyone anywhere) to ‘systematic monitoring’ (e.g. trained volunteers repeatedly sampling at specific locations). They also varied in complexity from approaches that are ‘simple’ to those that are ‘elaborate’ (e.g. provide lots of support to gather rich, detailed datasets). There was a separate cluster of entirely computer-based projects but, in general, we found that the range of citizen science projects in ecology and the environment showed continuous variation and cannot be neatly categorised into distinct types of activity. While the diversity of projects begun in each time period (pre 1990, 1990–99, 2000–09 and 2010–13) has not increased, we found that projects tended to have become increasingly different from each other as time progressed (possibly due to changing opportunities, including technological innovation). Most projects were still active so consequently we found that the overall diversity of active projects (available for participation) increased as time progressed. Overall, understanding the landscape of citizen science in ecology and the environment (and its change over time) is valuable because it informs the comparative evaluation of the ‘success’ of different citizen science approaches. Comparative evaluation provides an evidence-base to inform the future development of citizen science activities. PMID:28369087
Optimum design of bolted composite lap joints under mechanical and thermal loading
NASA Astrophysics Data System (ADS)
Kradinov, Vladimir Yurievich
A new approach is developed for the analysis and design of mechanically fastened composite lap joints under mechanical and thermal loading. Based on the combined complex potential and variational formulation, the solution method satisfies the equilibrium equations exactly while the boundary conditions are satisfied by minimizing the total potential. This approach is capable of modeling finite laminate planform dimensions, uniform and variable laminate thickness, laminate lay-up, interaction among bolts, bolt torque, bolt flexibility, bolt size, bolt-hole clearance and interference, insert dimensions and insert material properties. Compared to finite element analysis, the robustness of the method does not decrease when modeling the interaction of many bolts; also, the method is more suitable for parametric study and design optimization. The Genetic Algorithm (GA), a powerful optimization technique for functions with multiple extrema in multi-dimensional search spaces, is applied in conjunction with the complex potential and variational formulation to achieve optimum designs of bolted composite lap joints. The objective of the optimization is to acquire a design that ensures the highest strength of the joint. The fitness function for the GA optimization is based on the average stress failure criterion predicting net-section, shear-out, and bearing failure modes in bolted lap joints. The criterion accounts for the stress distribution in the thickness direction at the bolt location by applying an approach utilizing a beam on an elastic foundation formulation.
Fu, Guifang; Dai, Xiaotian; Symanzik, Jürgen; Bushman, Shaun
2017-01-01
Leaf shape traits have long been a focus of many disciplines, but the complex genetic and environmental interactive mechanisms regulating leaf shape variation have not yet been investigated in detail. The question of the respective roles of genes and environment and how they interact to modulate leaf shape is a thorny evolutionary problem, and sophisticated methodology is needed to address it. In this study, we investigated a framework-level approach that inputs shape image photographs and genetic and environmental data, and then outputs the relative importance ranks of all variables after integrating shape feature extraction, dimension reduction, and tree-based statistical models. The power of the proposed framework was confirmed by simulation and a Populus szechuanica var. tibetica data set. This new methodology resulted in the detection of novel shape characteristics, and also confirmed some previous findings. The quantitative modeling of a combination of polygenetic, plastic, epistatic, and gene-environment interactive effects, as investigated in this study, will improve the discernment of quantitative leaf shape characteristics, and the methods are ready to be applied to other leaf morphology data sets. Unlike the majority of approaches in the quantitative leaf shape literature, this framework-level approach is data-driven, without assuming any pre-known shape attributes, landmarks, or model structures. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
Teichtmeister, S.; Aldakheel, F.
2016-01-01
This work outlines a novel variational-based theory for the phase-field modelling of ductile fracture in elastic–plastic solids undergoing large strains. The phase-field approach regularizes sharp crack surfaces within a pure continuum setting by a specific gradient damage modelling. It is linked to a formulation of gradient plasticity at finite strains. The framework includes two independent length scales which regularize both the plastic response as well as the crack discontinuities. This ensures that the damage zones of ductile fracture are inside of plastic zones, and guarantees on the computational side a mesh objectivity in post-critical ranges. PMID:27002069
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2014-01-01
This report describes a modeling and simulation approach for disturbance patterns representative of the environment experienced by a digital system in an electromagnetic reverberation chamber. The disturbance is modeled by a multi-variate statistical distribution based on empirical observations. Extended versions of the Rejection Sampling and Inverse Transform Sampling techniques are developed to generate multi-variate random samples of the disturbance. The results show that Inverse Transform Sampling returns samples with higher fidelity relative to the empirical distribution. This work is part of an ongoing effort to develop a resilience assessment methodology for complex safety-critical distributed systems.
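The report's multi-variate extensions are not reproduced here, but the underlying one-dimensional inverse transform sampling from an empirical distribution can be sketched as follows, with placeholder observations:

```python
import numpy as np

rng = np.random.default_rng(4)
# Placeholder empirical observations standing in for measured disturbances.
obs = rng.gamma(shape=2.0, scale=1.5, size=10_000)

# Build the empirical CDF, then invert it at uniform random probabilities;
# np.quantile interpolates a piecewise-linear inverse CDF.
xs = np.sort(obs)
u = rng.uniform(size=5)
samples = np.quantile(xs, u)
print(samples)
```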
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pupyshev, V.I.; Scherbinin, A.V.; Stepanov, N.F.
1997-11-01
The approach based on the multiplicative form of a trial wave function within the framework of the variational method, initially proposed by Kirkwood and Buckingham, is shown to be an effective analytical tool in the quantum mechanical study of atoms and molecules. As an example, an elementary proof is given of the fact that the ground state energy of a molecular system placed in a box with walls of finite height goes to the corresponding eigenvalue of the Dirichlet boundary value problem as the height of the walls grows to infinity. © 1997 American Institute of Physics.
Continuous time wavelet entropy of auditory evoked potentials.
Cek, M Emre; Ozgoren, Murat; Savaci, F Acar
2010-01-01
In this paper, the continuous time wavelet entropy (CTWE) of auditory evoked potentials (AEP) has been characterized by evaluating the relative wavelet energies (RWE) in specified EEG frequency bands. Thus, the rapid variations of CTWE due to the auditory stimulation could be detected in the post-stimulus time interval. This approach removes the probability of missing the information hidden in short time intervals. The discrete-time and continuous-time wavelet-based entropy variations were compared on non-target and target AEP data. It was observed that CTWE can also be an alternative method to analyze entropy as a function of time. © 2009 Elsevier Ltd. All rights reserved.
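A minimal sketch of a continuous-wavelet entropy computation, assuming the PyWavelets package: relative wavelet energies across scales at each instant, then a Shannon entropy as a function of time. The synthetic signal and scale range are illustrative; the paper's specific EEG frequency bands are not reproduced:

```python
import numpy as np
import pywt

fs = 500.0
t = np.arange(0, 1.0, 1 / fs)
sig = np.sin(2*np.pi*10*t) + 0.5*np.sin(2*np.pi*40*t)  # placeholder "AEP"

scales = np.arange(1, 64)
coef, freqs = pywt.cwt(sig, scales, "morl", sampling_period=1/fs)
energy = coef**2                               # scale x time energy map

p = energy / energy.sum(axis=0, keepdims=True)  # relative energies per instant
ctwe = -(p * np.log(p + 1e-12)).sum(axis=0)     # wavelet entropy vs. time
print(ctwe[:5])
```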
Nondestructive evaluation of nuclear-grade graphite
NASA Astrophysics Data System (ADS)
Kunerth, D. C.; McJunkin, T. R.
2012-05-01
The material of choice for the core of the high-temperature gas-cooled reactors being developed by the U.S. Department of Energy's Next Generation Nuclear Plant Program is graphite. Graphite is a composite material whose properties are highly dependent on the base material and manufacturing methods. In addition to the material variations intrinsic to the manufacturing process, graphite will also undergo changes in material properties resulting from radiation damage and possible oxidation within the reactor. Idaho National Laboratory is presently evaluating the viability of conventional nondestructive evaluation techniques to characterize the material variations inherent to manufacturing and in-service degradation. Approaches of interest include x-ray radiography, eddy currents, and ultrasonics.
Development of a new model for short period ocean tidal variations of Earth rotation
NASA Astrophysics Data System (ADS)
Schuh, Harald
2015-08-01
Within project SPOT (Short Period Ocean Tidal variations in Earth rotation) we develop a new high frequency Earth rotation model based on empirical ocean tide models. The main purpose of the SPOT model is its application to space geodetic observations such as GNSS and VLBI. We consider an empirical ocean tide model, which does not require hydrodynamic ocean modeling to determine ocean tidal angular momentum. We use here the EOT11a model of Savcenko & Bosch (2012), which is extended for some additional minor tides (e.g. M1, J1, T2). As empirical tidal models do not provide ocean tidal currents, which are required for the computation of oceanic relative angular momentum, we implement an approach first published by Ray (2001) to estimate ocean tidal current velocities for all tides considered in the extended EOT11a model. The approach itself is tested by application to tidal heights from hydrodynamic ocean tide models, which also provide tidal current velocities. Based on the tidal heights and the associated current velocities the oceanic tidal angular momentum (OTAM) is calculated. For the computation of the related short period variation of Earth rotation, we have re-examined the Euler-Liouville equation for an elastic Earth model with a liquid core. The focus here is on the consistent calculation of the elastic Love numbers and associated Earth model parameters, which are considered in the Euler-Liouville equation for diurnal and sub-diurnal periods in the frequency domain.
Intraspecific variation buffers projected climate change impacts on Pinus contorta
Oney, Brian; Reineking, Björn; O'Neill, Gregory; Kreyling, Juergen
2013-01-01
Species distribution modeling (SDM) is an important tool to assess the impact of global environmental change. Many species exhibit ecologically relevant intraspecific variation, and few studies have analyzed its relevance for SDM. Here, we compared three SDM techniques for the highly variable species Pinus contorta. First, applying a conventional SDM approach, we used MaxEnt to model the subject as a single species (species model), based on presence–absence observations. Second, we used MaxEnt to model each of the three most prevalent subspecies independently and combined their projected distributions (subspecies model). Finally, we used a universal growth transfer function (UTF), an approach to incorporate intraspecific variation utilizing provenance trial tree growth data. Different model approaches performed similarly when predicting current distributions. MaxEnt model discrimination was greater (AUC – species model: 0.94, subspecies model: 0.95, UTF: 0.89), but the UTF was better calibrated (slope and bias – species model: 1.31 and −0.58, subspecies model: 1.44 and −0.43, UTF: 1.01 and 0.04, respectively). Contrastingly, for future climatic conditions, projections of lodgepole pine habitat suitability diverged. In particular, when the species' intraspecific variability was acknowledged, the species was projected to better tolerate climatic change as related to suitable habitat without migration (subspecies model: 26% habitat loss or UTF: 24% habitat loss vs. species model: 60% habitat loss), and given unlimited migration may increase amount of suitable habitat (subspecies model: 8% habitat gain or UTF: 12% habitat gain vs. species model: 51% habitat loss) in the climatic period 2070–2100 (SRES A2 scenario, HADCM3). We conclude that models derived from within-species data produce different and better projections, and coincide with ecological theory. Furthermore, we conclude that intraspecific variation may buffer against adverse effects of climate change. A key future research challenge lies in assessing the extent to which species can utilize intraspecific variation under rapid environmental change. PMID:23467191
Li, Shou-Li; Vasemägi, Anti; Ramula, Satu
2016-01-01
Assessing the demographic consequences of genetic variation is fundamental to invasion biology. However, genetic and demographic approaches are rarely combined to explore the effects of genetic variation on invasive populations in natural environments. This study combined population genetics, demographic data and a greenhouse experiment to investigate the consequences of genetic variation for the population fitness of the perennial, invasive herb Lupinus polyphyllus. Genetic and demographic data were collected from 37 L. polyphyllus populations representing different latitudes in Finland, and genetic variation was characterized based on 13 microsatellite loci. Associations between genetic variation and population size, population density, latitude and habitat were investigated. Genetic variation was then explored in relation to four fitness components (establishment, survival, growth, fecundity) measured at the population level, and the long-term population growth rate (λ). For a subset of populations genetic variation was also examined in relation to the temporal variability of λ. A further assessment was made of the role of natural selection in the observed variation of certain fitness components among populations under greenhouse conditions. It was found that genetic variation correlated positively with population size, particularly at higher latitudes, and differed among habitat types. Average seedling establishment per population increased with genetic variation in the field, but not under greenhouse conditions. Quantitative genetic divergence (Q(ST)) based on seedling establishment in the greenhouse was smaller than allelic genetic divergence (F'(ST)), indicating that unifying selection has a prominent role in this fitness component. Genetic variation was not associated with average survival, growth or fecundity measured at the population level, λ or its variability. The study suggests that although genetic variation may facilitate plant invasions by increasing seedling establishment, it may not necessarily affect the long-term population growth rate. Therefore, established invasions may be able to grow equally well regardless of their genetic diversity. © The Author 2015. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Zolfaghari, Mohammad R.
2009-07-01
Recent achievements in computer and information technology have provided the necessary tools to extend the application of probabilistic seismic hazard mapping from its traditional engineering use to many other applications. Examples for such applications are risk mitigation, disaster management, post disaster recovery planning and catastrophe loss estimation and risk management. Due to the lack of proper knowledge with regard to factors controlling seismic hazards, there are always uncertainties associated with all steps involved in developing and using seismic hazard models. While some of these uncertainties can be controlled by more accurate and reliable input data, the majority of the data and assumptions used in seismic hazard studies remain with high uncertainties that contribute to the uncertainty of the final results. In this paper a new methodology for the assessment of seismic hazard is described. The proposed approach provides practical facility for better capture of spatial variations of seismological and tectonic characteristics, which allows better treatment of their uncertainties. In the proposed approach, GIS raster-based data models are used in order to model geographical features in a cell-based system. The cell-based source model proposed in this paper provides a framework for implementing many geographically referenced seismotectonic factors into seismic hazard modelling. Examples for such components are seismic source boundaries, rupture geometry, seismic activity rate, focal depth and the choice of attenuation functions. The proposed methodology provides improvements in several aspects of the standard analytical tools currently being used for assessment and mapping of regional seismic hazard. The proposed methodology makes the best use of the recent advancements in computer technology in both software and hardware. The proposed approach is well structured to be implemented using conventional GIS tools.
Messori, Andrea; Trippoli, Sabrina; Bonacchi, Massimo; Sani, Guido
2009-08-01
Value-based methods are increasingly used to reimburse therapeutic innovation, and the payment-by-results approach has been proposed for handling interventions with limited therapeutic evidence. Because most left ventricular assist devices are supported by preliminary efficacy data, we examined the effectiveness data of the HeartMate (Thoratec Corp, Pleasanton, CA) device to explore the application of the payment-by-results approach to these devices and to develop a model for handling reimbursements. According to our model, after establishing the societal economic countervalue for each month of life saved, each patient treated with one such device is associated with the payment of this countervalue for every month of survival lived beyond the final date of estimated life expectancy without left ventricular assist devices. Our base-case analysis, which used the published data of 68 patients who received the HeartMate device, was run with a monthly countervalue of €5000, no adjustment for quality of life, and a baseline life expectancy of 150 days without left ventricular assist devices. Sensitivity analysis was aimed at testing the effect of quality-of-life adjustments and changes in life expectancy without the device. In our base-case analysis, the mean total reimbursement per patient was €82,426 (range, €0 to €250,000; N = 68), generated as the sum of monthly payments. This average value was close to the current price of the HeartMate device (€75,000). Sensitivity testing showed that the base-case reimbursement of €82,426 was little influenced by variations in life expectancy, whereas variations in utility had a more pronounced impact. Our report delineates an innovative procedure for appropriately allocating economic resources in this area of invasive cardiology.
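The payment-by-results arithmetic described above reduces to a simple rule; the sketch below follows the base case (monthly countervalue of €5000, 150-day baseline life expectancy), with the rounding of partial months being an assumption:

```python
# A fixed countervalue is paid for every month of survival lived beyond the
# estimated life expectancy without the device; rounding down to whole
# months is an assumption of this sketch.
def reimbursement(survival_days, baseline_days=150, monthly_value=5000):
    extra_months = max(0, (survival_days - baseline_days) // 30)
    return extra_months * monthly_value

for days in (100, 150, 450, 1650):
    print(days, "->", reimbursement(days), "euro")
```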
van Kasteren, Yasmin; Bradford, Dana; Zhang, Qing; Karunanithi, Mohan; Ding, Hang
2017-06-13
An ongoing challenge for smart homes research for aging-in-place is how to make sense of the large amounts of data from in-home sensors to facilitate real-time monitoring and develop reliable alerts. The objective of our study was to explore the usefulness of a routine-based approach for making sense of smart home data for the elderly. Maximum variation sampling was used to select three cases for an in-depth mixed methods exploration of the daily routines of three elderly participants in a smart home trial using 180 days of power use and motion sensor data and longitudinal interview data. Sensor data accurately matched self-reported routines. By comparing daily movement data with personal routines, it was possible to identify changes in routine that signaled illness, recovery from bereavement, and gradual deterioration of sleep quality and daily movement. Interview and sensor data also identified changes in routine with variations in temperature and daylight hours. The findings demonstrated that a routine-based approach makes interpreting sensor data easy, intuitive, and transparent. They highlighted the importance of understanding and accounting for individual differences in preferences for routinization and the influence of the cyclical nature of daily routines, social or cultural rhythms, and seasonal changes in temperature and daylight hours when interpreting information based on sensor data. This research has demonstrated the usefulness of a routine-based approach for making sense of smart home data, which has furthered the understanding of the challenges that need to be addressed in order to make real-time monitoring and effective alerts a reality. ©Yasmin van Kasteren, Dana Bradford, Qing Zhang, Mohan Karunanithi, Hang Ding. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 13.06.2017.
NASA Astrophysics Data System (ADS)
Shiri, Jalal
2018-06-01
Among different reference evapotranspiration (ETo) modeling approaches, mass transfer-based methods have been less studied. These approaches utilize temperature and wind speed records. On the other hand, the empirical equations proposed in this context generally produce weak simulations, except when a local calibration is used for improving their performance. This might be a crucial drawback for those equations in case of local data scarcity for the calibration procedure. So, application of heuristic methods can be considered as a substitute for improving the performance accuracy of the mass transfer-based approaches. However, given that the wind speed records usually have higher variation magnitudes than the other meteorological parameters, application of a wavelet transform for coupling with heuristic models would be necessary. In the present paper, a coupled wavelet-random forest (WRF) methodology was proposed for the first time to improve the performance accuracy of the mass transfer-based ETo estimation approaches using cross-validation data management scenarios at both local and cross-station scales. The obtained results revealed that the new coupled WRF model (with minimum scatter index values of 0.150 and 0.192 for local and external applications, respectively) improved the performance accuracy of the single RF models as well as the empirical equations to a great extent.
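A minimal sketch of the coupled wavelet-heuristic idea, assuming PyWavelets and scikit-learn: decompose the wind speed series into wavelet sub-series and feed them, together with temperature, to a random forest targeting ETo. The synthetic data, wavelet choice, and decomposition level are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n = 730
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 1, n)
wind = np.abs(rng.normal(2, 1, n))
eto = 0.3 * temp + 0.8 * wind + rng.normal(0, 0.3, n)   # synthetic target

# Multilevel DWT of wind; reconstruct each sub-series to full length.
coeffs = pywt.wavedec(wind, "db4", level=3)
subseries = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    subseries.append(pywt.waverec(kept, "db4")[:n])

X = np.column_stack([temp] + subseries)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, eto)
print("R^2 on training data:", round(model.score(X, eto), 3))
```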
NASA Astrophysics Data System (ADS)
Jayasree, P. K.; Arun, K. V.; Oormila, R.; Sreelakshmi, H.
2018-05-01
As per Indian Standards, laterally loaded piles are usually analysed using the method adopted by IS 2911-2010 (Part 1/Section 2). But practising engineers are of the opinion that the IS method is very conservative in design. This work aims at determining the extent to which the conventional IS design approach is conservative. This is done through a comparative study between the IS approach and a theoretical model based on Vesic's equation. Bore log details for six different bridges were collected from the Kerala Public Works Department. Cast-in-situ fixed-head piles embedded in three soil conditions, both end-bearing and friction piles, were considered and analyzed separately. Piles were also modelled in STAAD.Pro software based on the IS approach and the results were validated using the Matlock and Reese (In Proceedings of fifth international conference on soil mechanics and foundation engineering, 1961) equation. The results were presented as the percentage variation in values of bending moment and deflection obtained by the different methods. The results obtained from the mathematical model based on Vesic's equation and those obtained as per the IS approach were compared, and the IS method was found to be uneconomical and conservative.
Traditional and New Influenza Vaccines
Wong, Sook-San
2013-01-01
SUMMARY The challenges in successful vaccination against influenza using conventional approaches lie in their variable efficacy in different age populations, the antigenic variability of the circulating virus, and the production and manufacturing limitations to ensure safe, timely, and adequate supply of vaccine. The conventional influenza vaccine platform is based on stimulating immunity against the major neutralizing antibody target, hemagglutinin (HA), by virus attenuation or inactivation. Improvements to this conventional system have focused primarily on improving production and immunogenicity. Cell culture, reverse genetics, and baculovirus expression technology allow for safe and scalable production, while adjuvants, dose variation, and alternate routes of delivery aim to improve vaccine immunogenicity. Fundamentally different approaches that are currently under development hope to signal new generations of influenza vaccines. Such approaches target nonvariable regions of antigenic proteins, with the idea of stimulating cross-protective antibodies and thus creating a “universal” influenza vaccine. While such approaches have obvious benefits, there are many hurdles yet to clear. Here, we discuss the process and challenges of the current influenza vaccine platform as well as new approaches that are being investigated based on the same antigenic target and newer technologies based on different antigenic targets. PMID:23824369
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-10-16
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors comprises those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected for mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
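A minimal sketch of the low-degree-polynomial idea: fit one short polynomial per pixel mapping its (monotonic) response back to log-luminance, then correct by polynomial evaluation. The simulated logarithmic sensor model and the degree-1 choice are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n_pix, n_cal = 1000, 12
lum = np.logspace(-1, 3, n_cal)                  # calibration stimuli

# Simulated logarithmic pixel responses with per-pixel gain/offset mismatch.
gain = rng.normal(20.0, 1.5, (n_pix, 1))
offset = rng.normal(100.0, 8.0, (n_pix, 1))
resp = offset + gain * np.log(lum) + rng.normal(0, 0.2, (n_pix, n_cal))

# Degree-1 fit of log-luminance versus response, one polynomial per pixel;
# the monotonic response is what makes this approximately-linear fit work.
coefs = np.array([np.polyfit(resp[i], np.log(lum), 1) for i in range(n_pix)])

test = offset[:, 0] + gain[:, 0] * np.log(50.0)  # uniform scene, no correction
corrected = coefs[:, 0] * test + coefs[:, 1]     # per-pixel FPN correction
print("spread before:", test.std().round(2), "after:", corrected.std().round(4))
```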
On the variability of the Priestley-Taylor coefficient over water bodies
NASA Astrophysics Data System (ADS)
Assouline, Shmuel; Li, Dan; Tyler, Scott; Tanny, Josef; Cohen, Shabtai; Bou-Zeid, Elie; Parlange, Marc; Katul, Gabriel G.
2016-01-01
Deviations in the Priestley-Taylor (PT) coefficient αPT from its accepted 1.26 value are analyzed over large lakes, reservoirs, and wetlands where stomatal or soil controls are minimal or absent. The data sets feature wide variations in water body sizes and climatic conditions. Neither surface temperature nor sensible heat flux variations alone, which proved successful in characterizing αPT variations over some crops, explain measured deviations in αPT over water. It is shown that the relative transport efficiency of turbulent heat and water vapor is key to explaining variations in αPT over water surfaces, thereby offering a new perspective over the concept of minimal advection or entrainment introduced by PT. Methods that allow the determination of αPT based on low-frequency sampling (i.e., 0.1 Hz) are then developed and tested, which are usable with standard meteorological sensors that filter some but not all turbulent fluctuations. Using approximations to the Gram determinant inequality, the relative transport efficiency is derived as a function of the correlation coefficient between temperature and water vapor concentration fluctuations (RTq). The proposed approach reasonably explains the measured deviations from the conventional αPT = 1.26 value even when RTq is determined from air temperature and water vapor concentration time series that are Gaussian-filtered and subsampled to a cutoff frequency of 0.1 Hz. Because over water bodies, RTq deviations from unity are often associated with advection and/or entrainment, linkages between αPT and RTq offer both a diagnostic approach to assess their significance and a prognostic approach to correct the 1.26 value when using routine meteorological measurements of temperature and humidity.
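The low-frequency diagnostic can be sketched as follows: Gaussian-filter and subsample temperature and humidity series to a 0.1 Hz cutoff, then compute the correlation coefficient RTq. The synthetic series and filter width are assumptions, and the paper's mapping from RTq to αPT is not reproduced:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(7)
fs = 10.0                              # raw sampling rate, Hz
n = 36000                              # one hour of 10 Hz data
common = rng.standard_normal(n).cumsum()          # shared eddy structure
T = 0.02 * common + rng.normal(0, 0.05, n)        # temperature fluctuations
q = 0.01 * common + rng.normal(0, 0.05, n)        # water vapor fluctuations

step = int(fs / 0.1)                   # subsample to 0.1 Hz
T_lp = gaussian_filter1d(T, sigma=fs)[::step]     # low-pass, then decimate
q_lp = gaussian_filter1d(q, sigma=fs)[::step]

R_Tq = np.corrcoef(T_lp, q_lp)[0, 1]   # T-q correlation coefficient
print(round(R_Tq, 3))
```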
Decision support for the selection of reference sites using 137Cs as a soil erosion tracer
NASA Astrophysics Data System (ADS)
Arata, Laura; Meusburger, Katrin; Bürge, Alexandra; Zehringer, Markus; Ketterer, Michael E.; Mabit, Lionel; Alewell, Christine
2017-08-01
The classical approach of using 137Cs as a soil erosion tracer is based on the comparison between stable reference sites and sites affected by soil redistribution processes; it enables the derivation of soil erosion and deposition rates. The method is associated with potentially large sources of uncertainty with major parts of this uncertainty being associated with the selection of the reference sites. We propose a decision support tool to Check the Suitability of reference Sites (CheSS). Commonly, the variation among 137Cs inventories of spatial replicate reference samples is taken as the sole criterion to decide on the suitability of a reference inventory. Here we propose an extension of this procedure using a repeated sampling approach, in which the reference sites are resampled after a certain time period. Suitable reference sites are expected to present no significant temporal variation in their decay-corrected 137Cs depth profiles. Possible causes of variation are assessed by a decision tree. More specifically, the decision tree tests for (i) uncertainty connected to small-scale variability in 137Cs due to its heterogeneous initial fallout (such as in areas affected by the Chernobyl fallout), (ii) signs of erosion or deposition processes and (iii) artefacts due to the collection, preparation and measurement of the samples; (iv) finally, if none of the above can be assigned, this variation might be attributed to turbation
processes (e.g. bioturbation, cryoturbation and mechanical turbation, such as avalanches or rockfalls). As an example, CheSS was applied in one Swiss alpine valley, where the apparent temporal variability called into question the suitability of the selected reference sites. In general, we suggest the application of CheSS as a first step towards a comprehensible approach to testing the suitability of reference sites.
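The core of the repeated-sampling check is decay correction to a common date followed by a test for temporal change. A minimal sketch of that step, assuming a two-sample t-test and entirely made-up inventories (the paper's decision tree covers more causes than this):

```python
import numpy as np
from scipy import stats

CS137_HALF_LIFE_Y = 30.17
LAMBDA = np.log(2) / CS137_HALF_LIFE_Y

def decay_correct(inventory_bq_m2, years_since_reference):
    """Correct a measured 137Cs inventory back to a common reference date."""
    return inventory_bq_m2 * np.exp(LAMBDA * years_since_reference)

# hypothetical replicate inventories (Bq/m2) from two campaigns 5 years apart
first  = np.array([2100., 2250., 1980., 2150.])
second = np.array([2050., 2180., 1900., 2075.])
second_corr = decay_correct(second, 5.0)   # bring to the first campaign's date

t, p = stats.ttest_ind(first, second_corr)
print(f"p = {p:.3f}")
if p < 0.05:
    print("significant temporal variation: inspect causes (fallout "
          "heterogeneity, erosion/deposition, sampling artefacts, turbation)")
else:
    print("no significant variation: site remains a plausible reference")
```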
Paving the COWpath: data-driven design of pediatric order sets
Zhang, Yiye; Padman, Rema; Levin, James E
2014-01-01
Objective Evidence indicates that users incur significant physical and cognitive costs in the use of order sets, a core feature of computerized provider order entry systems. This paper develops data-driven approaches for automating the construction of order sets that match closely with user preferences and workflow while minimizing physical and cognitive workload. Materials and methods We developed and tested optimization-based models embedded with clustering techniques using physical and cognitive click cost criteria. By judiciously learning from users’ actual actions, our methods identify items for constituting order sets that are relevant according to historical ordering data and grouped on the basis of order similarity and ordering time. We evaluated performance of the methods using 47 099 orders from the year 2011 for asthma, appendectomy and pneumonia management in a pediatric inpatient setting. Results In comparison with existing order sets, those developed using the new approach significantly reduce the physical and cognitive workload associated with usage by 14–52%. This approach is also capable of accommodating variations in clinical conditions that affect order set usage and development. Discussion There is a critical need to investigate the cognitive complexity imposed on users by complex clinical information systems, and to design their features according to ‘human factors’ best practices. Optimizing order set generation using cognitive cost criteria introduces a new approach that can potentially improve ordering efficiency, reduce unintended variations in order placement, and enhance patient safety. Conclusions We demonstrate that data-driven methods offer a promising approach for designing order sets that are generalizable, data-driven, condition-based, and up to date with current best practices. PMID:24674844
An analytical approach to separate climate and human contributions to basin streamflow variability
NASA Astrophysics Data System (ADS)
Li, Changbin; Wang, Liuming; Wanrui, Wang; Qi, Jiaguo; Linshan, Yang; Zhang, Yuan; Lei, Wu; Cui, Xia; Wang, Peng
2018-04-01
Climate variability and anthropogenic regulations are two interwoven factors in the ecohydrologic system across large basins. Understanding the roles that these two factors play under various hydrologic conditions is of great significance for basin hydrology and sustainable water utilization. In this study, we present an analytical approach that couples the water balance method with the Budyko hypothesis to derive effectiveness coefficients (ECs) of climate change, as a way to disentangle the contributions of climate change and human activities to the variability of river discharges under different hydro-transitional situations. The climate-dominated streamflow change (ΔQc) obtained with the EC approach was compared with estimates from the elasticity method and the sensitivity index. The results suggest that the EC approach is valid and applicable for hydrologic study at the large basin scale. Analyses of various scenarios revealed that the contributions of climate change and human activities to river discharge variation differed among the regions of the study area. Over the past several decades, climate change dominated hydro-transitions from dry to wet, while human activities played key roles in the reduction of streamflow during wet-to-dry periods. The remarkable decline of discharge upstream was mainly due to human interventions, although climate contributed more to increasing runoff during dry periods in the semi-arid downstream. The derived effectiveness coefficients indicated a contribution ratio of 49% for climate and 51% for human activities at the basin scale from 1956 to 2015. This simple, mathematically derived approach, together with the case example of temporal segmentation and spatial zoning, could help clarify the variation of river discharge in more detail at the large basin scale against the background of climate change and human regulation.
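For orientation, the elasticity-style attribution that the abstract benchmarks against can be sketched in a few lines. All numbers and elasticity values below are hypothetical placeholders, not results from the paper:

```python
def attribute_streamflow_change(Q1, Q2, P1, P2, E01, E02, eps_P, eps_E0):
    """Split an observed streamflow change into climate- and human-driven
    parts with the climate elasticity method. eps_P and eps_E0 are the
    precipitation and potential-evaporation elasticities (e.g. Budyko-derived;
    note they satisfy eps_P + eps_E0 = 1 under the Budyko framework)."""
    dQ_obs = Q2 - Q1
    dQ_clim = (eps_P * (P2 - P1) / P1 + eps_E0 * (E02 - E01) / E01) * Q1
    dQ_human = dQ_obs - dQ_clim
    total = abs(dQ_clim) + abs(dQ_human)
    return dQ_clim, dQ_human, 100 * abs(dQ_clim) / total, 100 * abs(dQ_human) / total

# hypothetical pre-/post-change period means: Q, P, E0 in mm/yr
print(attribute_streamflow_change(Q1=120, Q2=95, P1=480, P2=455,
                                  E01=900, E02=940, eps_P=2.0, eps_E0=-1.0))
```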
[Pharmacogenetics and the treatment of addiction].
Schellekens, Arnt
2013-01-01
This article describes the current scientific knowledge regarding pharmacogenetic predictors of treatment outcome for substance-dependent patients. PubMed was searched for articles on pharmacogenetics and addiction. This search yielded 53 articles, of which 27 were selected. The most promising pharmacogenetic findings are related to the treatment of alcohol dependence. Genetic variation in the µ-opioid receptor (OPRM1) and the serotonin transporter (5-HTTLPR) appear to be associated with treatment outcomes for naltrexone and ondansetron, respectively. Genetic variation in CYP2D6 is related to efficacy of methadone treatment for opiate dependence. Pharmacogenetics may help explain the great inter-individual variation in treatment response. In the future, treatment matching, based on genetic characteristics of individual patients, could lead to a 'personalized medicine' approach. Pharmacogenetic matching of naltrexone in alcohol-dependent carriers of the OPRM1 G-allele currently seems most promising.
Nigenda-Morales, Sergio F; Hu, Yibo; Beasley, James C; Ruiz-Piña, Hugo A; Valenzuela-Galván, David; Wayne, Robert K
2018-06-01
Skin pigmentation and coat pigmentation are two of the best-studied examples of traits under natural selection given their quantifiable fitness interactions with the environment (e.g., camouflage) and signalling with other organisms (e.g., warning coloration). Previous morphological studies have found that skin pigmentation variation in the Virginia opossum (Didelphis virginiana) is associated with variation in precipitation and temperatures across its distribution range following Gloger's rule (lighter pigmentation in temperate environments). To investigate the molecular mechanism associated with skin pigmentation variation, we used RNA-Seq and quantified gene expression of wild opossums from tropical and temperate populations. Using differential expression analysis and a co-expression network approach, we found that expression variation in genes with melanocytic and immune functions is significantly associated with the degree of skin pigmentation variation and may be underlying this phenotypic difference. We also found evidence suggesting that the Wnt/β-catenin signalling pathway might be regulating the depigmentation observed in temperate populations. Based on our study results, we present several alternative hypotheses that may explain Gloger's rule pattern of skin pigmentation variation in opossum, including changes in pathogen diversity supporting a pathogen-resistant hypothesis, thermal stress associated with temperate environments, and pleiotropic and epistatic interactions between melanocytic and immune genes. © 2018 John Wiley & Sons Ltd.
Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard
2013-04-01
The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
A generalized Condat's algorithm of 1D total variation regularization
NASA Astrophysics Data System (ADS)
Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly
2017-09-01
A common way of solving the denoising problem is to utilize total variation (TV) regularization. Many efficient numerical algorithms have been developed for solving the TV regularization problem. Condat described a fast direct algorithm to compute the processed 1D signal. There also exists a direct linear-time algorithm for 1D TV denoising referred to as the taut string algorithm. Condat's algorithm is based on a dual problem to the 1D TV regularization. In this paper, we propose a variant of Condat's algorithm based on the direct 1D TV regularization problem. Combining Condat's algorithm with the taut string approach leads to a clear geometric description of the extremal function. Computer simulation results are provided to illustrate the performance of the proposed algorithm for restoration of degraded signals.
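To make the underlying problem concrete, the sketch below minimizes the same 1D TV objective, 0.5*||u - y||^2 + lam * sum(|u[i+1] - u[i]|), but via projected gradient on the dual (Chambolle-style) rather than Condat's direct method, which is more involved; the step size and iteration count are illustrative:

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=2000, tau=0.25):
    """Illustrative 1D total variation denoising via projected gradient on
    the dual problem (the same dual formulation Condat's method exploits).
    tau <= 0.25 guarantees convergence since ||D D^T|| <= 4."""
    p = np.zeros(len(y) - 1)                       # dual variable, one per jump
    for _ in range(n_iter):
        u = y + np.diff(p, prepend=0, append=0)    # u = y - D^T p
        p = np.clip(p + tau * np.diff(u), -lam, lam)  # ascent step + box project
    return y + np.diff(p, prepend=0, append=0)

rng = np.random.default_rng(1)
x = np.repeat([0.0, 1.0, 0.3], 100)                # piecewise-constant signal
y = x + 0.1 * rng.standard_normal(x.size)
u = tv_denoise_1d(y, lam=0.5)
print(np.abs(u - x).mean())                        # mean absolute error
```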
Tang, Jian; Jiang, Xiaoliang
2017-01-01
Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach built on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The local means in this objective function carry a multiplicative factor that estimates the bias field in the transformed domain. The bias field prior is then fully exploited, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved jointly. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
Hidden Markov Model-Based CNV Detection Algorithms for Illumina Genotyping Microarrays.
Seiser, Eric L; Innocenti, Federico
2014-01-01
Somatic alterations in DNA copy number have been well studied in numerous malignancies, yet the role of germline DNA copy number variation in cancer is still emerging. Genotyping microarrays generate allele-specific signal intensities to determine genotype, but may also be used to infer DNA copy number using additional computational approaches. Numerous tools have been developed to analyze Illumina genotype microarray data for copy number variant (CNV) discovery, although commonly utilized algorithms freely available to the public employ approaches based upon the use of hidden Markov models (HMMs). QuantiSNP, PennCNV, and GenoCN utilize HMMs with six copy number states but vary in how transition and emission probabilities are calculated. Performance of these CNV detection algorithms has been shown to be variable between both genotyping platforms and data sets, although HMM approaches generally outperform other current methods. Low sensitivity is prevalent with HMM-based algorithms, suggesting the need for continued improvement in CNV detection methodologies.
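The shared machinery behind QuantiSNP, PennCNV, and GenoCN is a multi-state HMM decoded over array intensities. The following is a stripped-down illustration of that idea, not any of those tools: a NumPy Viterbi decoder with Gaussian emissions over log R ratio (LRR) values, using placeholder state means and transition probabilities. (The two copy-neutral states are only separable with B-allele frequencies, which this sketch omits.)

```python
import numpy as np

def viterbi_gaussian(obs, means, sd, log_trans, log_init):
    """Most likely copy-number state path for LRR observations, assuming
    Gaussian emissions per state."""
    n_states = len(means)
    log_emit = (-0.5 * ((obs[:, None] - means) / sd) ** 2
                - np.log(sd * np.sqrt(2 * np.pi)))
    delta = log_init + log_emit[0]
    back = np.zeros((len(obs), n_states), dtype=int)
    for t in range(1, len(obs)):
        scores = delta[:, None] + log_trans       # predecessor scores
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = np.empty(len(obs), dtype=int)
    path[-1] = delta.argmax()
    for t in range(len(obs) - 1, 0, -1):          # backtrace
        path[t - 1] = back[t, path[t]]
    return path

# illustrative LRR means for copy numbers 0..5 (placeholder values)
means = np.array([-3.5, -0.66, 0.0, 0.0, 0.4, 0.68])
n = len(means)
trans = np.full((n, n), 0.001)                    # sticky transition matrix
np.fill_diagonal(trans, 1 - 0.001 * (n - 1))
log_trans, log_init = np.log(trans), np.log(np.full(n, 1.0 / n))

rng = np.random.default_rng(2)
lrr = np.concatenate([rng.normal(0.0, 0.2, 200),  # normal diploid
                      rng.normal(-0.66, 0.2, 50), # one-copy deletion
                      rng.normal(0.0, 0.2, 200)])
states = viterbi_gaussian(lrr, means, 0.2, log_trans, log_init)
print(np.unique(states[200:250], return_counts=True))
```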
NASA Astrophysics Data System (ADS)
Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong
2016-12-01
We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, but at the same time achieves superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high density objects. For comparison, prior images generated by total-variation minimization (TVM) algorithm, as a realization of fully iterative approach, were also utilized as intermediate images. From the simulation and real experimental results, it has been shown that PDART drastically accelerates the reconstruction to an acceptable quality of prior images. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher quality images than those by a conventional in-painting method. Furthermore, the results were comparable to the fully iterative MAR that uses high-quality TVM prior images.
Data-driven Inference and Investigation of Thermosphere Dynamics and Variations
NASA Astrophysics Data System (ADS)
Mehta, P. M.; Linares, R.
2017-12-01
This paper presents a methodology for data-driven inference and investigation of thermosphere dynamics and variations. The approach uses data-driven modal analysis to extract the most energetic modes of variation for neutral thermospheric species using proper orthogonal decomposition, where the time-independent modes or basis represent the dynamics and the time-dependent coefficients or amplitudes represent the model parameters. The data-driven modal analysis approach combined with sparse, discrete observations is used to infer amplitudes for the dynamic modes and to calibrate the energy content of the system. In this work, two different data types, namely the number density measurements from TIMED/GUVI and the mass density measurements from CHAMP/GRACE, are simultaneously ingested for an accurate and self-consistent specification of the thermosphere. The assimilation process is achieved with a non-linear least squares solver and allows estimation/tuning of the model parameters or amplitudes rather than the driver. In this work, we use the Naval Research Lab's MSIS model to derive the most energetic modes for six different species: He, O, N2, O2, H, and N. We examine the dominant drivers of variations for helium in MSIS and observe that seasonal latitudinal variation accounts for about 80% of the dynamic energy, with a strong preference of helium for the winter hemisphere. We also observe enhanced helium presence near the poles at GRACE altitudes during periods of low solar activity (Feb 2007), as previously deduced. We will also examine the storm-time response of helium derived from observations. The results are expected to be useful in tuning/calibration of the physics-based models.
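Proper orthogonal decomposition itself reduces to an SVD of mean-removed snapshots. A self-contained sketch on synthetic data; the field, grid, and mode shapes are invented for illustration and stand in for the MSIS-derived density snapshots:

```python
import numpy as np

# rows = snapshots in time, columns = spatial grid points (synthetic field)
rng = np.random.default_rng(3)
t = np.linspace(0, 2 * np.pi, 200)[:, None]
x = np.linspace(0, 1, 500)[None, :]
field = (np.sin(t) * np.sin(np.pi * x)            # dominant seasonal-like mode
         + 0.2 * np.cos(3 * t) * np.sin(3 * np.pi * x)
         + 0.01 * rng.standard_normal((200, 500)))

mean = field.mean(axis=0)
U, s, Vt = np.linalg.svd(field - mean, full_matrices=False)
energy = s**2 / np.sum(s**2)                      # fraction of variance per mode
print("first three modes capture", energy[:3].round(3))
modes = Vt[:3]                                    # time-independent spatial basis
amplitudes = U[:, :3] * s[:3]                     # time-dependent coefficients
```

Assimilation then amounts to re-estimating the `amplitudes` from sparse observations (e.g., with nonlinear least squares) while keeping the `modes` fixed.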
Set-Based Approach to Design under Uncertainty and Applications to Shaping a Hydrofoil
2016-01-01
given requirements. This notion of set-based design was pioneered by Toyota and adopted by the U.S. Navy [1]. It responds to most real-world design...in such a way that all desired shape variations are allowed both on the suction and pressure side. Figure 2 gives a schematic representation of the...of the hydrofoil. The control points of the pressure side have been changed in different ways to ensure the overall hydrodynamic performance
Laser induced thermal therapy (LITT) for pediatric brain tumors: case-based review
Riordan, Margaret
2014-01-01
The integration of laser induced thermal therapy (LITT) with magnetic resonance imaging (MRI) has created new options for treating surgically challenging tumors in locations where the approach itself would otherwise have represented an intrinsic comorbidity. As new applications and variations of its use are discussed, we present a case-based review of the history, development, and subsequent updates of minimally invasive MRI-guided laser interstitial thermal therapy (MRgLITT) ablation in pediatric brain tumors. PMID:26835340
Lin, Guoping; Candela, Y; Tillement, O; Cai, Zhiping; Lefèvre-Seguin, V; Hare, J
2012-12-15
A method based on thermal bistability for ultralow-threshold microlaser optimization is demonstrated. When sweeping the pump laser frequency across a pump resonance, the dynamic thermal bistability slows down the power variation. The resulting line shape modification enables a real-time monitoring of the laser characteristic. We demonstrate this method for a functionalized microsphere exhibiting a submicrowatt laser threshold. This approach is confirmed by comparing the results with a step-by-step recording in quasi-static thermal conditions.
NASA Astrophysics Data System (ADS)
Kim, Cheol-kyun; Kim, Jungchan; Choi, Jaeseung; Yang, Hyunjo; Yim, Donggyu; Kim, Jinwoong
2007-03-01
As the minimum transistor length gets smaller, the variation and uniformity of transistor length seriously affect device performance. So, the importance of optical proximity correction (OPC) and resolution enhancement technology (RET) cannot be overemphasized. However, the OPC process is regarded by some as a necessary evil in device performance. In fact, every group, including process and design, is interested in whole-chip CD variation trends and CD uniformity, which represent the real wafer. Recently, design-based metrology systems have become capable of detecting differences between the design database and wafer SEM images, and they are able to extract information on whole-chip CD variation. According to the results, OPC abnormalities were identified and design feedback items were also disclosed. Other approaches have been pursued by EDA companies, such as model-based OPC verification. Model-based verification is done for the full chip area using a well-calibrated model; its object is the prediction of potential weak points on the wafer and fast feedback to OPC and design before reticle fabrication. In order to achieve robust design and sufficient device margin, an appropriate combination of a design-based metrology system and model-based verification tools is very important. Therefore, we evaluated a design-based metrology system and matched it with a model-based verification system to find the optimum combination of the two. In our study, a huge amount of data from wafer results is classified and analyzed by statistical methods and sorted into OPC feedback and design feedback items. Additionally, a novel DFM flow is proposed using the combination of design-based metrology and model-based verification tools.
A multicriteria framework for producing local, regional, and national insect and disease risk maps
Frank J. Jr. Krist; Frank J. Sapio
2010-01-01
The construction of the 2006 National Insect and Disease Risk Map, compiled by the USDA Forest Service, State and Private Forestry Area, Forest Health Protection Unit, resulted in the development of a GIS-based, multicriteria approach for insect and disease risk mapping that can account for regional variations in forest health concerns and threats. This risk mapping...
Hamiltonian formulation of the KdV equation
NASA Astrophysics Data System (ADS)
Nutku, Y.
1984-06-01
We consider the canonical formulation of Whitham's variational principle for the KdV equation. This Lagrangian is degenerate and we have found it necessary to use Dirac's theory of constrained systems in constructing the Hamiltonian. Earlier discussions of the Hamiltonian structure of the KdV equation were based on various different decompositions of the field which is avoided by this new approach.
Learning Effects in the Block Design Task: A Stimulus Parameter-Based Approach
ERIC Educational Resources Information Center
Miller, Joseph C.; Ruthig, Joelle C.; Bradley, April R.; Wise, Richard A.; Pedersen, Heather A.; Ellison, Jo M.
2009-01-01
Learning effects were assessed for the block design (BD) task, on the basis of variation in 2 stimulus parameters: perceptual cohesiveness (PC) and set size uncertainty (U). Thirty-one nonclinical undergraduate students (19 female) each completed 3 designs for each of 4 varied sets of the stimulus parameters (high-PC/high-U, high-PC/low-U,…
Guidance Concept for a Mars Ascent Vehicle First Stage
NASA Technical Reports Server (NTRS)
Queen, Eric M.
2000-01-01
This paper presents a guidance concept for use on the first stage of a Mars Ascent Vehicle (MAV). The guidance is based on a calculus of variations approach similar to that used for the final phase of the Apollo Earth return guidance. A three degree-of-freedom (3DOF) Monte Carlo simulation is used to evaluate performance and robustness of the algorithm.
Historical range of variability in live and dead wood biomass: a regional-scale simulation study
Etsuko Nonaka; Thomas A. Spies; Michael C. Wimberly; Janet L. Ohmann
2007-01-01
The historical range of variability (HRV) in landscape structure and composition created by natural disturbance can serve as a general guide for evaluating ecological conditions of managed landscapes. HRV approaches to evaluating landscapes have been based on age classes or developmental stages, which may obscure variation in live and dead stand structure. Developing...
Stereoscopic Vascular Models of the Head and Neck: A Computed Tomography Angiography Visualization
ERIC Educational Resources Information Center
Cui, Dongmei; Lynch, James C.; Smith, Andrew D.; Wilson, Timothy D.; Lehman, Michael N.
2016-01-01
Computer-assisted 3D models are used in some medical and allied health science schools; however, they are often limited to online use and 2D flat screen-based imaging. Few schools take advantage of 3D stereoscopic learning tools in anatomy education and clinically relevant anatomical variations when teaching anatomy. A new approach to teaching…
Effect of inlet modelling on surface drainage in coupled urban flood simulation
NASA Astrophysics Data System (ADS)
Jang, Jiun-Huei; Chang, Tien-Hao; Chen, Wei-Bo
2018-07-01
For a highly developed urban area with complete drainage systems, flood simulation is necessary for describing the flow dynamics from rainfall, to surface runoff, and to sewer flow. In this study, a coupled flood model based on diffusion wave equations was proposed to simulate one-dimensional sewer flow and two-dimensional overland flow simultaneously. The overland flow model provides details on the rainfall-runoff process to estimate the excess runoff that enters the sewer system through street inlets for sewer flow routing. Three types of inlet modelling are considered in this study: the manhole-based approach, which ignores the street inlets by draining surface water directly into manholes; the inlet-manhole approach, which drains surface water into manholes that are each connected to multiple inlets; and the inlet-node approach, which drains surface water into sewer nodes that are connected to individual inlets. The simulation results were compared with a high-intensity rainstorm event that occurred in 2015 in Taipei City. In the verification of the maximum flood extent, the two approaches that considered street inlets performed considerably better than the one without street inlets. In terms of temporal flood variation, using manholes as receivers leads to overall inefficient draining of the surface water, whether by the manhole-based approach or by the inlet-manhole approach. The inlet-node approach is more reasonable than the inlet-manhole approach because it greatly reduces the fluctuation of the sewer water level. It is also more efficient in draining surface water, reducing flood volume by 13% compared with the inlet-manhole approach and by 41% compared with the manhole-based approach. The results show that inlet modelling has a strong influence on drainage efficiency in coupled flood simulation.
Estimating evaporative vapor generation from automobiles based on parking activities.
Dong, Xinyi; Tschantz, Michael; Fu, Joshua S
2015-07-01
A new approach is proposed to quantify evaporative vapor generation based on real parking activity data. Compared with existing methods, two improvements are applied in this new approach to reduce the uncertainties: first, evaporative vapor generation from diurnal parking events is usually calculated based on an estimated average parking duration for the whole fleet, whereas in this study the vapor generation rate is calculated based on the distribution of parking activities. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive the hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then combined with Wade-Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates better describe the temporal variations of vapor generation, and that the weighted vapor generation rate is 5-8% less than the calculation that does not consider parking activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
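The weighting step is simple arithmetic once the hourly pieces exist. A sketch with invented numbers; the actual incremental rates come from Wade-Reddy's equation and observed temperatures, both omitted here:

```python
import numpy as np

# hypothetical hourly incremental vapor generation rates (g/h per vehicle)
hourly_rate = np.array([0.2, 0.2, 0.2, 0.2, 0.3, 0.5, 0.8, 1.0,
                        1.3, 1.6, 1.9, 2.1, 2.2, 2.1, 1.8, 1.5,
                        1.1, 0.8, 0.6, 0.4, 0.3, 0.3, 0.2, 0.2])
# hypothetical fraction of the fleet parked during each hour of the day
parked = np.array([0.95, 0.96, 0.96, 0.96, 0.94, 0.88, 0.75, 0.60,
                   0.55, 0.58, 0.60, 0.62, 0.60, 0.58, 0.57, 0.58,
                   0.62, 0.70, 0.80, 0.86, 0.90, 0.92, 0.94, 0.95])

daily_total = np.sum(hourly_rate * parked)          # g/day per vehicle
weighted_rate = daily_total / 24.0                  # activity-weighted g/h
naive_rate = hourly_rate.mean()                     # ignores parking activity
print(f"weighted {weighted_rate:.2f} g/h vs naive {naive_rate:.2f} g/h")
```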
An optimization framework for measuring spatial access over healthcare networks.
Li, Zihao; Serban, Nicoleta; Swann, Julie L
2015-07-17
Measurement of healthcare spatial access over a network involves accounting for demand, supply, and network structure. Popular approaches are based on floating catchment areas; however the methods can overestimate demand over the network and fail to capture cascading effects across the system. Optimization is presented as a framework to measure spatial access. Questions related to when and why optimization should be used are addressed. The accuracy of the optimization models compared to the two-step floating catchment area method and its variations is analytically demonstrated, and a case study of specialty care for Cystic Fibrosis over the continental United States is used to compare these approaches. The optimization models capture a patient's experience rather than their opportunities and avoid overestimating patient demand. They can also capture system effects due to change based on congestion. Furthermore, the optimization models provide more elements of access than traditional catchment methods. Optimization models can incorporate user choice and other variations, and they can be useful towards targeting interventions to improve access. They can be easily adapted to measure access for different types of patients, over different provider types, or with capacity constraints in the network. Moreover, optimization models allow differences in access in rural and urban areas.
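As a concrete point of reference, the catchment-based baseline that the optimization framework is compared against, the two-step floating catchment area (2SFCA) method, fits in a few lines; the toy supply, demand, and distance values are hypothetical. Note how every provider within reach counts toward a location's score, which is how demand can effectively be counted toward several catchments at once:

```python
import numpy as np

def two_step_fca(supply, demand, dist, d0):
    """Classic 2SFCA accessibility scores.
    supply[j]: provider capacity; demand[i]: population at location i;
    dist[i, j]: travel cost; d0: catchment threshold."""
    within = dist <= d0
    # step 1: provider-to-population ratio within each provider's catchment
    ratio = supply / (within * demand[:, None]).sum(axis=0)
    # step 2: sum the ratios of all providers reachable from each location
    return (within * ratio[None, :]).sum(axis=1)

demand = np.array([5000., 12000., 3000.])           # three population centers
supply = np.array([10., 4.])                        # two providers
dist = np.array([[10., 40.],                        # travel times (minutes)
                 [25., 15.],
                 [60., 20.]])
print(two_step_fca(supply, demand, dist, d0=30.0))
```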
A finite element based method for solution of optimal control problems
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.; Calise, Anthony J.
1989-01-01
A temporal finite element based on a mixed form of the Hamiltonian weak principle is presented for optimal control problems. The mixed form of this principle contains both states and costates as primary variables that are expanded in terms of elemental values and simple shape functions. Unlike other variational approaches to optimal control problems, however, time derivatives of the states and costates do not appear in the governing variational equation. Instead, the only quantities whose time derivatives appear therein are virtual states and virtual costates. Also noteworthy among characteristics of the finite element formulation is the fact that in the algebraic equations which contain costates, they appear linearly. Thus, the remaining equations can be solved iteratively without initial guesses for the costates; this reduces the size of the problem by about a factor of two. Numerical results are presented herein for an elementary trajectory optimization problem which show very good agreement with the exact solution along with excellent computational efficiency and self-starting capability. The goal is to evaluate the feasibility of this approach for real-time guidance applications. To this end, a simplified two-stage, four-state model for an advanced launch vehicle application is presented which is suitable for finite element solution.
Evaluation of body-wise and organ-wise registrations for abdominal organs
NASA Astrophysics Data System (ADS)
Xu, Zhoubing; Panjwani, Sahil A.; Lee, Christopher P.; Burke, Ryan P.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Abramson, Richard G.; Landman, Bennett A.
2016-03-01
Identifying cross-sectional and longitudinal correspondence in the abdomen on computed tomography (CT) scans is necessary for quantitatively tracking change and understanding population characteristics, yet abdominal image registration is a challenging problem. The key difficulty is the huge variation in organ dimensions and shapes across subjects. The current standard registration method uses the global, or body-wise, registration technique, which is based on the global topology for alignment. This method, although producing decent results, is substantially influenced by outliers, leaving room for significant improvement. Here, we study a new image registration approach using local (organ-wise) registration by first creating organ-specific bounding boxes and then using these regions of interest (ROIs) for aligning references to the target. Based on the Dice Similarity Coefficient (DSC), Mean Surface Distance (MSD) and Hausdorff Distance (HD), the organ-wise approach is demonstrated to have significantly better results, by minimizing the distorting effects of organ variations. This paper compares exclusively the two registration methods by providing novel quantitative and qualitative comparison data and is a subset of the more comprehensive problem of improving multi-atlas segmentation by using organ normalization.
X-ray computed tomography using curvelet sparse regularization.
Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias
2015-04-01
Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
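The alternating direction method of multipliers (ADMM) loop at the heart of such reconstructions has a compact generic form. The sketch below solves a sparse-regularized least-squares problem; the identity transform stands in for the curvelet frame so the example stays self-contained, and the random matrix stands in for the CT system operator. Both substitutions are assumptions for illustration:

```python
import numpy as np

def soft(v, k):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_sparse(A, b, lam=0.1, rho=1.0, n_iter=200):
    """ADMM skeleton for min 0.5*||Ax - b||^2 + lam*||x||_1. A curvelet-
    regularized reconstruction would threshold curvelet coefficients of x
    instead of x itself."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Atb = A.T @ b
    M = A.T @ A + rho * np.eye(n)
    for _ in range(n_iter):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # data-fidelity step
        z = soft(x + u, lam / rho)                   # sparsity (shrinkage) step
        u = u + x - z                                # dual update
    return z

rng = np.random.default_rng(4)
A = rng.standard_normal((80, 120))                   # underdetermined "projector"
x_true = np.zeros(120)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]               # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(80)
print(np.round(admm_sparse(A, b)[[5, 40, 77]], 2))   # recovered coefficients
```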
Note onset deviations as musical piece signatures.
Serrà, Joan; Özaslan, Tan Hakan; Arcos, Josep Lluis
2013-01-01
A competent interpretation of a musical composition presents several non-explicit departures from the written score. Timing variations are perhaps the most important ones: they are fundamental for expressive performance and a key ingredient for conferring a human-like quality to machine-based music renditions. However, the nature of such variations is still an open research question, with diverse theories that indicate a multi-dimensional phenomenon. In the present study, we consider event-shift timing variations and show that sequences of note onset deviations are robust and reliable predictors of the musical piece being played, irrespective of the performer. In fact, our results suggest that only a few consecutive onset deviations are already enough to identify a musical composition with statistically significant accuracy. We consider a mid-size collection of commercial recordings of classical guitar pieces and follow a quantitative approach based on the combination of standard statistical tools and machine learning techniques with the semi-automatic estimation of onset deviations. Besides the reported results, we believe that the considered materials and the methodology followed widen the testing ground for studying musical timing and could open new perspectives in related research fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chi, Alexander; Gao Mingcheng; Sinacore, James
2009-09-01
Purpose: To compare the dose distribution between customized planning (CP) and adopting a single plan (SP) in multifractionated high-dose-rate brachytherapy and to establish predictors for the necessity of CP in a given patient. Methods and Materials: A total of 50 computed tomography-based plans for 10 patients were evaluated. Each patient had received 6 Gy for five fractions. The clinical target volume and organs at risk (i.e., rectum, bladder, sigmoid, and small bowel) were delineated on each computed tomography scan. For the SP approach, the same dwell position and time was used for all fractions. For the CP approach, the dwell position and time were reoptimized for each fraction. Applicator position variation was determined by measuring the distance between the posterior bladder wall and the tandem at the level of the vaginal fornices. Results: The organs-at-risk D2cc (dose to 2 cc volume) was increased with the SP approach. The dose variation was statistically similar between the tandem-and-ring and tandem-and-ovoid groups. The bladder D2cc dose was 81.95-105.42 Gy2 for CP and 82.11-122.49 Gy2 for SP. In 5 of the 10 patients, the bladder would have been significantly overdosed with the SP approach. The variation of the posterior bladder wall distance from that in the first fraction was correlated with the increase in the bladder D2cc (SP/CP), with a correlation coefficient of -0.59. Conclusion: Our results support the use of CP instead of the SP approach to help avoid a significant overdose to the bladder. This is especially true for a decrease in the posterior wall distance of ≥0.5 cm compared with that in the first fraction.
BLOOM: BLoom filter based oblivious outsourced matchings.
Ziegeldorf, Jan Henrik; Pennekamp, Jan; Hellmanns, David; Schwinger, Felix; Kunze, Ike; Henze, Martin; Hiller, Jens; Matzutt, Roman; Wehrle, Klaus
2017-07-26
Whole genome sequencing has become fast, accurate, and cheap, paving the way towards the large-scale collection and processing of human genome data. Unfortunately, this dawning genome era does not only promise tremendous advances in biomedical research but also causes unprecedented privacy risks for the many. Handling storage and processing of large genome datasets through cloud services greatly aggravates these concerns. Current research efforts thus investigate the use of strong cryptographic methods and protocols to implement privacy-preserving genomic computations. We propose FHE-BLOOM and PHE-BLOOM, two efficient approaches for genetic disease testing using homomorphically encrypted Bloom filters. Both approaches allow the data owner to securely outsource storage and computation to an untrusted cloud. FHE-BLOOM is fully secure in the semi-honest model while PHE-BLOOM slightly relaxes security guarantees in a trade-off for highly improved performance. We implement and evaluate both approaches on a large dataset of up to 50 patient genomes each with up to 1000000 variations (single nucleotide polymorphisms). For both implementations, overheads scale linearly in the number of patients and variations, while PHE-BLOOM is faster by at least three orders of magnitude. For example, testing disease susceptibility of 50 patients with 100000 variations requires only a total of 308.31 s (σ=8.73 s) with our first approach and a mere 0.07 s (σ=0.00 s) with the second. We additionally discuss security guarantees of both approaches and their limitations as well as possible extensions towards more complex query types, e.g., fuzzy or range queries. Both approaches handle practical problem sizes efficiently and are easily parallelized to scale with the elastic resources available in the cloud. The fully homomorphic scheme, FHE-BLOOM, realizes a comprehensive outsourcing to the cloud, while the partially homomorphic scheme, PHE-BLOOM, trades a slight relaxation of security guarantees against performance improvements by at least three orders of magnitude.
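The underlying data structure is an ordinary Bloom filter; the contribution of FHE-BLOOM and PHE-BLOOM is evaluating it under homomorphic encryption. A plain-Python illustration of the unencrypted primitive (the variant identifiers are made up, and filter size and hash count are arbitrary choices):

```python
import hashlib

class BloomFilter:
    """Plain (unencrypted) Bloom filter illustrating the matching primitive;
    the paper's schemes additionally encrypt the bit array homomorphically
    before outsourcing it to the cloud."""
    def __init__(self, m=1 << 20, k=7):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        # derive k indices from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # no false negatives; false positives at a rate set by m and k
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
for snp in ["rs334:T", "rs1801133:A"]:       # hypothetical patient variants
    bf.add(snp)
print(bf.might_contain("rs334:T"), bf.might_contain("rs429358:C"))
```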
Klimstra, J.D.; O'Connell, A.F.; Pistrang, M.J.; Lewis, L.M.; Herrig, J.A.; Sauer, J.R.
2007-01-01
Science-based monitoring of biological resources is important for a greater understanding of ecological systems and for assessment of the target population using theory-based management approaches. When selecting variables to monitor, managers first need to carefully consider their objectives, the geographic and temporal scale at which they will operate, and the effort needed to implement the program. Generally, monitoring can be divided into two categories: index and inferential. Although index monitoring is usually easier to implement, analysis of index data requires strong assumptions about consistency in detection rates over time and space, and parameters are often biased when detectability and spatial variation are not accounted for. In most cases, individuals are not always available for detection during sampling periods, and the entire area of interest cannot be sampled. Conversely, inferential monitoring is more rigorous because it is based on nearly unbiased estimators of spatial distribution. Thus, we recommend that detectability and spatial variation be considered for all monitoring programs that intend to make inferences about the target population or the area of interest. Application of these techniques is especially important for the monitoring of Threatened and Endangered (T&E) species because it is critical to determine whether population size is increasing or decreasing with some level of certainty. Use of estimation-based methods and probability sampling will reduce many of the biases inherently associated with index data and provide meaningful information with respect to changes that occur in target populations. We incorporated inferential monitoring into protocols for T&E species spanning a wide range of taxa on the Cherokee National Forest in the Southern Appalachian Mountains. We review the various approaches employed for different taxa and discuss design issues, sampling strategies, data analysis, and the details of estimating detectability using site occupancy. These techniques provide a science-based approach for monitoring and can be of value to all resource managers responsible for management of T&E species.
Pavell, Anthony; Hughes, Keith A
2010-01-01
This article describes a method for achieving the load equivalence model, described in Parenteral Drug Association Technical Report 1, using a mass-based approach. The item and load bracketing approach allows for mixed equipment load size variation for operational flexibility along with decreased time to introduce new items to the operation. The article discusses the utilization of approximately 67 items/components (Table IV) identified for routine sterilization with varying quantities required weekly. The items were assessed for worst-case identification using four temperature-related criteria. The criteria were used to provide a data-based identification of worst-case items, and/or item equivalence, to carry forward into cycle validation using a variable load pattern. The mass approach to maximum load determination was used to bracket routine production use and allows for variable loading patterns. The result of the item mapping and load bracketing data is "a proven acceptable range" of sterilizing conditions including loading configuration and location. The application of these approaches, while initially more time/test-intensive than alternate approaches, provides a method of cycle validation with long-term benefit of ease of ongoing qualification, minimizing time and requirements for new equipment qualification for similar loads/use, and for rapid and rigorous assessment of new items for sterilization.
Design Issues in Small-Area Studies of Environment and Health
Elliott, Paul; Savitz, David A.
2008-01-01
Background Small-area studies are part of the tradition of spatial epidemiology, which is concerned with the analysis of geographic patterns of disease with respect to environmental, demographic, socioeconomic, and other factors. We focus on etiologic research, where the aim is to make inferences about spatially varying environmental factors influencing the risk of disease. Methods and results We illustrate the approach through three exemplars: a) magnetic fields from overhead electric power lines and the occurrence of childhood leukemia, which illustrates the use of geographic information systems to focus on areas with high exposure prevalence; b) drinking-water disinfection by-products and reproductive outcomes, taking advantage of large between- to within-area variability in exposures from the water supply; and c) chronic exposure to air pollutants and cardiorespiratory health, where issues of socioeconomic confounding are particularly important. Discussion The small-area epidemiologic approach assigns exposure estimates to individuals based on location of residence or other geographic variables such as workplace or school. In this way, large populations can be studied, increasing the ability to investigate rare exposures or rare diseases. The approach is most effective when there is well-defined exposure variation across geographic units, limited within-area variation, and good control for potential confounding across areas. Conclusions In conjunction with traditional individual-based approaches, small-area studies offer a valuable addition to the armamentarium of the environmental epidemiologist. Modeling of exposure patterns coupled with collection of individual-level data on subsamples of the population should lead to improved risk estimates (i.e., less potential for bias) and help strengthen etiologic inference. PMID:18709174
Genealogical and evolutionary inference with the human Y chromosome.
Stumpf, M P; Goldstein, D B
2001-03-02
Population genetics has emerged as a powerful tool for unraveling human history. In addition to the study of mitochondrial and autosomal DNA, attention has recently focused on Y-chromosome variation. Ambiguities and inaccuracies in data analysis, however, pose an important obstacle to further development of the field. Here we review the methods available for genealogical inference using Y-chromosome data. Approaches can be divided into those that do and those that do not use an explicit population model in genealogical inference. We describe the strengths and weaknesses of these model-based and model-free approaches, as well as difficulties associated with the mutation process that affect both methods. In the case of genealogical inference using microsatellite loci, we use coalescent simulations to show that relatively simple generalizations of the mutation process can greatly increase the accuracy of genealogical inference. Because model-free and model-based approaches have different biases and limitations, we conclude that there is considerable benefit in the continued use of both types of approaches.
Disentangling sampling and ecological explanations underlying species-area relationships
Cam, E.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Alpizar-Jara, R.; Flather, C.H.
2002-01-01
We used a probabilistic approach to address the influence of sampling artifacts on the form of species-area relationships (SARs). We developed a model in which the increase in observed species richness is a function of sampling effort exclusively. We assumed that effort depends on area sampled, and we generated species-area curves under that model. These curves can be realistic looking. We then generated SARs from avian data, comparing SARs based on counts with those based on richness estimates. We used an approach to estimation of species richness that accounts for species detection probability and, hence, for variation in sampling effort. The slopes of SARs based on counts are steeper than those of curves based on estimates of richness, indicating that the former partly reflect failure to account for species detection probability. SARs based on estimates reflect ecological processes exclusively, not sampling processes. This approach permits investigation of ecologically relevant hypotheses. The slope of SARs is not influenced by the slope of the relationship between habitat diversity and area. In situations in which not all of the species are detected during sampling sessions, approaches to estimation of species richness integrating species detection probability should be used to investigate the rate of increase in species richness with area.
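The paper's central point, that count-based species-area relationships steepen when detection probability rises with area, can be reproduced with a toy calculation. Everything below (the power-law richness, the effort-detection link) is a hypothetical illustration, not the authors' avian data:

```python
import numpy as np

# true richness follows S = c * A^z; detection probability grows with
# sampling effort, assumed here to scale with sqrt(area)
A = np.logspace(0, 3, 20)                    # areas
S_true = 10 * A ** 0.2                       # true species-area relationship
p_detect = 1 - np.exp(-0.05 * np.sqrt(A))    # hypothetical detection model
S_count = S_true * p_detect                  # expected raw counts

z_true = np.polyfit(np.log(A), np.log(S_true), 1)[0]
z_count = np.polyfit(np.log(A), np.log(S_count), 1)[0]
# the count-based slope exceeds the true slope, mirroring the paper's finding
print(f"true z = {z_true:.2f}, count-based z = {z_count:.2f}")
```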
Scene-based nonuniformity correction using local constant statistics.
Zhang, Chao; Zhao, Wenyi
2008-06-01
In scene-based nonuniformity correction, the statistical approach assumes all possible values of the true-scene pixel are seen at each pixel location. This global-constant-statistics assumption does not distinguish fixed pattern noise from spatial variations in the average image. This often causes "ghosting" artifacts in the corrected images, since existing spatial variations are treated as noise. We introduce a new statistical method to reduce the ghosting artifacts. Our method adopts local constant statistics, assuming that the temporal signal distribution is constant not globally but only locally: the distribution is statistically constant in a local region around each pixel but uneven at larger scales. Under the assumption that the fixed pattern noise concentrates in a higher spatial-frequency domain than the distribution variation, we apply a wavelet method to the gain and offset images of the noise and separate the pattern noise from the spatial variations in the temporal distribution of the scene. We compare the results to the global-constant-statistics method using a clean sequence with large artificial pattern noise. We also apply the method to a challenging CCD video sequence and an LWIR sequence to show how effective it is in reducing noise and the ghosting artifacts.
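The spatial-frequency split at the heart of this idea can be prototyped compactly. A minimal offset-only sketch, with a Gaussian low-pass standing in for the paper's wavelet decomposition and purely synthetic frames; both substitutions are assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_offset_fpn(frames, smooth_sigma=5.0):
    """The temporal mean mixes true scene structure with fixed pattern noise
    (FPN); a spatial low-pass keeps the slowly varying scene part and
    attributes the high-spatial-frequency residual to FPN."""
    temporal_mean = frames.mean(axis=0)
    scene_part = gaussian_filter(temporal_mean, smooth_sigma)  # low freq.
    return temporal_mean - scene_part                          # high freq. = FPN

rng = np.random.default_rng(5)
true_fpn = rng.normal(0, 2.0, (64, 64))                  # per-pixel offset noise
frames = rng.uniform(50, 200, (500, 64, 64)) + true_fpn  # moving-scene stack
fpn_hat = estimate_offset_fpn(frames)
corrected = frames - fpn_hat
# the estimate should track the injected pattern noise closely
print(np.corrcoef(true_fpn.ravel(), fpn_hat.ravel())[0, 1].round(2))
```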