Sample records for independent methods based

  1. A comparison of analysis methods to estimate contingency strength.

    PubMed

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
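
    As an illustration of how an interval-based contingency strength estimate can be computed, the minimal Python sketch below scores each observation interval for response and reinforcer occurrence and takes the difference between the two conditional probabilities. The function name and toy data are hypothetical; this is a sketch of one common contingency definition, not the authors' implementation.

```python
import numpy as np

def interval_contingency(response, reinforcer):
    """Interval-based contingency strength: P(reinforcer | response interval)
    minus P(reinforcer | no-response interval). Both inputs are scored per
    observation interval (1 = event occurred, 0 = it did not)."""
    response = np.asarray(response, dtype=bool)
    reinforcer = np.asarray(reinforcer, dtype=bool)
    p_given_resp = reinforcer[response].mean() if response.any() else 0.0
    p_given_none = reinforcer[~response].mean() if (~response).any() else 0.0
    return p_given_resp - p_given_none

# Toy partial-interval record (hypothetical data)
resp = [1, 0, 1, 1, 0, 0, 1, 0]
sr   = [1, 0, 0, 1, 0, 0, 1, 0]
print(interval_contingency(resp, sr))  # 0.75 - 0.0 = 0.75
```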

  2. Independent component analysis for automatic note extraction from musical trills

    NASA Astrophysics Data System (ADS)

    Brown, Judith C.; Smaragdis, Paris

    2004-05-01

    The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.
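
    A minimal sketch of the PCA-versus-ICA comparison described above, using scikit-learn's FastICA on two synthetically mixed sinusoids standing in for trill notes; the tone frequencies, mixing matrix, and two-channel setup are made up for illustration and are not the authors' data or pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4000)
s1 = np.sin(2 * np.pi * 440 * t)            # one "note"
s2 = np.sin(2 * np.pi * 554 * t)            # the trill partner note
S = np.c_[s1, s2]
A = np.array([[1.0, 0.6], [0.7, 1.0]])      # hypothetical two-channel mixing
X = S @ A.T + 0.01 * rng.standard_normal((len(t), 2))

pca_est = PCA(n_components=2).fit_transform(X)               # decorrelates only
ica_est = FastICA(n_components=2, random_state=0).fit_transform(X)

# Correlation of each estimate with the true sources (up to sign and order);
# ICA typically recovers the sources far more cleanly than PCA here.
for name, est in [("PCA", pca_est), ("ICA", ica_est)]:
    c = np.abs(np.corrcoef(est.T, S.T)[:2, 2:])
    print(name, c.round(2))
```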

  3. Problem based learning: the effect of real time data on the website to student independence

    NASA Astrophysics Data System (ADS)

    Setyowidodo, I.; Pramesti, Y. S.; Handayani, A. D.

    2018-05-01

    Science learning has developed as an integrative science rather than as disciplinary education, yet in reality national character development has not yet succeeded in forming more creative and independent Indonesian people. Problem Based Learning based on real-time data on a website is a learning method that focuses on developing high-level thinking skills in problem-oriented situations by integrating technology into learning. The essence of this study is the presentation of authentic problems situated in real-time data on the website. The purpose of this research is to develop student independence through Problem Based Learning based on real-time website data. The type of this research is development research, implemented using a purposive sampling technique. The study found an increase in student self-reliance, with 47% of students in the very high category and 53% in the high category. This learning method can therefore be said to be effective in improving students' learning independence in problem-oriented situations.

  4. Microbial composition during Chinese soy sauce koji-making based on culture dependent and independent methods.

    PubMed

    Yan, Yin-zhuo; Qian, Yu-lin; Ji, Feng-di; Chen, Jing-yu; Han, Bei-zhong

    2013-05-01

    Koji-making is a key process for the production of high-quality soy sauce. The microbial composition during koji-making was investigated by culture-dependent and culture-independent methods to determine the predominant bacterial and fungal populations. The culture-dependent methods used were direct culture and colony morphology observation, and PCR amplification of 16S/26S rDNA fragments followed by sequencing analysis. The culture-independent method was based on the analysis of 16S/26S rDNA clone libraries. There were differences between the results obtained by the different methods; however, sufficient overlap existed between them to identify potentially significant microbial groups. Sixteen and 20 different bacterial species were identified using culture-dependent and culture-independent methods, respectively, of which seven species were identified by both. The most predominant bacterial genera were Weissella and Staphylococcus. Six different fungal species were identified by each of the culture-dependent and culture-independent methods, of which only three species were identified by both. The most predominant fungi were Aspergillus and Candida species. This work illustrated the importance of a comprehensive polyphasic approach in the analysis of microbial composition during soy sauce koji-making, knowledge of which will enable further optimization of the microbial composition and quality control of koji to upgrade traditional Chinese soy sauce products. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. An evaluation method of computer usability based on human-to-computer information transmission model.

    PubMed

    Ogawa, K

    1992-01-01

    This paper proposes a new evaluation and prediction method for computer usability. This method is based on our two previously proposed information transmission measures created from a human-to-computer information transmission model. The model has three information transmission levels: the device, software, and task content levels. Two measures, called the device independent information measure (DI) and the computer independent information measure (CI), defined on the software and task content levels respectively, are given as the amount of information transmitted. Two information transmission rates are defined as DI/T and CI/T, where T is the task completion time: the device independent information transmission rate (RDI), and the computer independent information transmission rate (RCI). The method utilizes the RDI and RCI rates to evaluate the relative usability of software and device operations on different computer systems. Experiments using three different systems, in this case on a graphical information input task, confirm that the method offers an efficient way of determining computer usability.
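
    The two transmission rates are simple ratios of transmitted information to task completion time. A tiny sketch with hypothetical bit counts and task time:

```python
def transmission_rates(DI_bits, CI_bits, T_seconds):
    """Device- and computer-independent information transmission rates
    (RDI = DI/T, RCI = CI/T), as defined in the abstract above."""
    return DI_bits / T_seconds, CI_bits / T_seconds

# Hypothetical task: 120 bits at the software level, 90 bits at the task
# content level, completed in 60 s -> RDI = 2.0 bit/s, RCI = 1.5 bit/s
print(transmission_rates(120, 90, 60))
```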

  6. Naive Bayes Bearing Fault Diagnosis Based on Enhanced Independence of Data

    PubMed Central

    Zhang, Nannan; Wu, Lifeng; Yang, Jing; Guan, Yong

    2018-01-01

    The bearing is the key component of rotating machinery, and its performance directly determines the reliability and safety of the system. Data-based bearing fault diagnosis has become a research hotspot. Naive Bayes (NB), which is based on an independence assumption, is widely used in fault diagnosis. However, bearing data are not completely independent, which reduces the performance of NB algorithms. In order to solve this problem, we propose an NB bearing fault diagnosis method based on enhanced independence of the data. The method treats the data vector from two aspects: the attribute features and the sample dimension. After this processing, the limitation that the independence hypothesis places on NB classification is reduced. First, we extract the statistical characteristics of the original bearing signals. Then, the Decision Tree algorithm is used to select the important features of the time-domain signal, and features with low correlation are selected. Next, the Selective Support Vector Machine (SSVM) is used to prune the data along the sample dimension and remove redundant vectors. Finally, we use NB to diagnose the fault with the low-correlation data. The experimental results show that this independence enhancement of the data is effective for bearing fault diagnosis. PMID:29401730
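
    A rough scikit-learn sketch of the feature-selection-then-Naive-Bayes idea described above. The SSVM sample-pruning step is omitted and synthetic features stand in for real bearing statistics, so this illustrates the workflow only, not the authors' algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for time-domain bearing statistics (RMS, kurtosis, ...)
X, y = make_classification(n_samples=600, n_features=20, n_informative=6,
                           random_state=0)

pipe = make_pipeline(
    SelectFromModel(DecisionTreeClassifier(random_state=0)),  # keep important features
    GaussianNB(),                                             # NB on the reduced set
)
print(cross_val_score(pipe, X, y, cv=5).mean())
```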

  7. Device-Independent Tests of Entropy

    NASA Astrophysics Data System (ADS)

    Chaves, Rafael; Brask, Jonatan Bohr; Brunner, Nicolas

    2015-09-01

    We show that the entropy of a message can be tested in a device-independent way. Specifically, we consider a prepare-and-measure scenario with classical or quantum communication, and develop two different methods for placing lower bounds on the communication entropy, given observable data. The first method is based on the framework of causal inference networks. The second technique, based on convex optimization, shows that quantum communication provides an advantage over classical communication, in the sense of requiring a lower entropy to reproduce given data. These ideas may serve as a basis for novel applications in device-independent quantum information processing.

  8. Brain tumor segmentation based on local independent projection-based classification.

    PubMed

    Huang, Meiyan; Yang, Wei; Wu, Yao; Jiang, Jun; Chen, Wufan; Feng, Qianjin

    2014-10-01

    Brain tumor segmentation is an important procedure for early tumor diagnosis and radiotherapy planning. Although numerous brain tumor segmentation methods have been presented, enhancing tumor segmentation methods is still challenging because brain tumor MRI images exhibit complex characteristics, such as high diversity in tumor appearance and ambiguous tumor boundaries. To address this problem, we propose a novel automatic tumor segmentation method for MRI images. This method treats tumor segmentation as a classification problem. Additionally, the local independent projection-based classification (LIPC) method is used to classify each voxel into different classes. A novel classification framework is derived by introducing the local independent projection into the classical classification model. Locality is important in the calculation of local independent projections for LIPC. Locality is also considered in determining whether local anchor embedding is more applicable in solving linear projection weights compared with other coding methods. Moreover, LIPC considers the data distribution of different classes by learning a softmax regression model, which can further improve classification performance. In this study, 80 brain tumor MRI images with ground truth data are used as training data and 40 images without ground truth data are used as testing data. The segmentation results of testing data are evaluated by an online evaluation tool. The average dice similarities of the proposed method for segmenting complete tumor, tumor core, and contrast-enhancing tumor on real patient data are 0.84, 0.685, and 0.585, respectively. These results are comparable to other state-of-the-art methods.
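
    The evaluation metric reported above is the Dice similarity between a predicted and a ground-truth mask; a minimal implementation on toy binary masks (hypothetical data, not the study's images) is sketched below.

```python
import numpy as np

def dice_similarity(seg, truth):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    seg, truth = np.asarray(seg, bool), np.asarray(truth, bool)
    denom = seg.sum() + truth.sum()
    return 2.0 * np.logical_and(seg, truth).sum() / denom if denom else 1.0

a = np.zeros((4, 4), bool); a[1:3, 1:3] = True   # predicted tumor mask
b = np.zeros((4, 4), bool); b[1:3, 1:4] = True   # ground-truth mask
print(dice_similarity(a, b))  # 2*4 / (4 + 6) = 0.8
```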

  9. Functional connectivity analysis of the neural bases of emotion regulation: A comparison of independent component method with density-based k-means clustering method.

    PubMed

    Zou, Ling; Guo, Qian; Xu, Yi; Yang, Biao; Jiao, Zhuqing; Xiang, Jianbo

    2016-04-29

    Functional magnetic resonance imaging (fMRI) is an important tool in neuroscience for assessing connectivity and interactions between distant areas of the brain. To find and characterize the coherent patterns of brain activity as a means of identifying brain systems for the cognitive reappraisal of the emotion task, both density-based k-means clustering and independent component analysis (ICA) methods can be applied to characterize the interactions between brain regions involved in cognitive reappraisal of emotion. Our results reveal that compared with the ICA method, the density-based k-means clustering method provides a higher sensitivity of polymerization. In addition, it is more sensitive to those relatively weak functional connection regions. Thus, the study concludes that in the process of receiving emotional stimuli, the relatively obvious activation areas are mainly distributed in the frontal lobe, cingulum and near the hypothalamus. Furthermore, the density-based k-means clustering method provides a more reliable approach for follow-up studies of brain functional connectivity.

  10. SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, M; Jiang, S; Lu, W

    Purpose: To propose a hybrid method that combines the advantages of model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone-convolution/superposition (CCCS) or the Monte-Carlo method, models dose deposition in the patient body accurately; however, due to a lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. On the contrary, the measurement-based method characterizes the beam properties accurately but lacks the capability of dose deposition modeling in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator; here we used a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: (1) calculate D_model using CCCS; (2) calculate D_ΔDRT using ΔDRT; (3) combine: D = D_model + D_ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and an IMRT plan. The results were compared to doses calculated by the treatment planning system (TPS). The agreement between the hybrid method and the TPS was within 3%, 3 mm for over 98% of the volume in the phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple-commissioning criteria for an independent dose calculator.
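
    A toy sketch of the three-step combination D = D_model + D_ΔDRT on made-up 1-D dose arrays; the real CCCS and ray-tracing engines are replaced by trivial stand-ins, so this only illustrates how the tabulated difference feeds the measurement-based correction.

```python
import numpy as np

# Toy 1-D depth-dose arrays standing in for real calculations (hypothetical values)
d_cccs_commissioned = np.array([1.00, 0.95, 0.88, 0.80])  # model, published data
measurement         = np.array([1.02, 0.96, 0.90, 0.81])  # water-phantom scan

# Commission the measurement-based correction with the difference table,
# then evaluate it with a (here trivial) stand-in for the ΔDRT calculator.
delta_table = measurement - d_cccs_commissioned
def delta_drt(depth_idx):
    return delta_table[depth_idx]

d_model  = d_cccs_commissioned        # step 1: CCCS dose
d_delta  = delta_drt(np.arange(4))    # step 2: ΔDRT correction
d_hybrid = d_model + d_delta          # step 3: combined independent dose
print(d_hybrid)
```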

  11. Analysis of an optimization-based atomistic-to-continuum coupling method for point defects

    DOE PAGES

    Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; ...

    2015-11-16

    Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.

  12. Differential expression analysis for RNAseq using Poisson mixed models

    PubMed Central

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny

    2017-01-01

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. PMID:28369632
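
    One common way to write such a Poisson mixed model, with a relatedness random effect and an independent over-dispersion term, is sketched below; the exact MACAU parameterization may differ in detail, so treat this as an illustrative formulation.

```latex
% y_i: read count for sample i, N_i: total count, x_i: covariates,
% K: relatedness (kinship) matrix; g captures sample non-independence,
% e captures independent over-dispersion.
\begin{align*}
  y_{i} &\sim \mathrm{Poisson}\!\left(N_{i}\,\lambda_{i}\right),\\
  \log \lambda_{i} &= x_{i}^{\top}\beta + g_{i} + e_{i},\\
  g &\sim \mathcal{N}\!\left(0,\ \sigma_{g}^{2} K\right), \qquad
  e_{i} \sim \mathcal{N}\!\left(0,\ \sigma_{e}^{2}\right).
\end{align*}
```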

  13. Comparison of Standard Culture-Based Method to Culture-Independent Method for Evaluation of Hygiene Effects on the Hand Microbiome.

    PubMed

    Zapka, C; Leff, J; Henley, J; Tittl, J; De Nardo, E; Butler, M; Griggs, R; Fierer, N; Edmonds-Wilson, S

    2017-03-28

    Hands play a critical role in the transmission of microbiota on one's own body, between individuals, and on environmental surfaces. Effectively measuring the composition of the hand microbiome is important to hand hygiene science, which has implications for human health. Hand hygiene products are evaluated using standard culture-based methods, but standard test methods for culture-independent microbiome characterization are lacking. We sampled the hands of 50 participants using swab-based and glove-based methods prior to and following four hand hygiene treatments (using a nonantimicrobial hand wash, alcohol-based hand sanitizer [ABHS], a 70% ethanol solution, or tap water). We compared results among culture plate counts, 16S rRNA gene sequencing of DNA extracted directly from hands, and sequencing of DNA extracted from culture plates. Glove-based sampling yielded higher numbers of unique operational taxonomic units (OTUs) but had less diversity in bacterial community composition than swab-based sampling. We detected treatment-induced changes in diversity only by using swab-based samples ( P < 0.001); we were unable to detect changes with glove-based samples. Bacterial cell counts significantly decreased with use of the ABHS ( P < 0.05) and ethanol control ( P < 0.05). Skin hydration at baseline correlated with bacterial abundances, bacterial community composition, pH, and redness across subjects. The importance of the method choice was substantial. These findings are important to ensure improvement of hand hygiene industry methods and for future hand microbiome studies. On the basis of our results and previously published studies, we propose recommendations for best practices in hand microbiome research. IMPORTANCE The hand microbiome is a critical area of research for diverse fields, such as public health and forensics. The suitability of culture-independent methods for assessing effects of hygiene products on microbiota has not been demonstrated. This is the first controlled laboratory clinical hand study to have compared traditional hand hygiene test methods with newer culture-independent characterization methods typically used by skin microbiologists. This study resulted in recommendations for hand hygiene product testing, development of methods, and future hand skin microbiome research. It also demonstrated the importance of inclusion of skin physiological metadata in skin microbiome research, which is atypical for skin microbiome studies. Copyright © 2017 Zapka et al.

  14. On the use of thick-airfoil theory to design airfoil families in which thickness and lift are varied independently

    NASA Technical Reports Server (NTRS)

    Barger, R. L.

    1974-01-01

    A method has been developed for designing families of airfoils in which the members of a family have the same basic type of pressure distribution but vary in thickness ratio or lift, or both. Thickness ratio and lift may be prescribed independently. The method, which is based on the Theodorsen thick-airfoil theory, permits moderate variations from the basic shape on which the family is based.

  15. Case-Based Independent Study for Medical Students in Emergency Psychiatry

    ERIC Educational Resources Information Center

    Hirshbein, Laura D.; Gay, Tamara

    2005-01-01

    OBJECTIVE: Brief cases designed for independent study were developed to allow third-year medical students some exposure to important concepts in emergency psychiatry during their required psychiatry clerkship. METHODS: Five independent study cases were given to University of Michigan third-year medical students during their psychiatry clerkship,…

  16. Optimal model-based sensorless adaptive optics for epifluorescence microscopy.

    PubMed

    Pozzi, Paolo; Soloviev, Oleg; Wilding, Dean; Vdovin, Gleb; Verhaegen, Michel

    2018-01-01

    We report on a universal sample-independent sensorless adaptive optics method, based on modal optimization of the second moment of the fluorescence emission from a point-like excitation. Our method employs a sample-independent precalibration, performed only once for the particular system, to establish the direct relation between the image quality and the aberration. The method is potentially applicable to any form of microscopy with epifluorescence detection, including the practically important case of incoherent fluorescence emission from a three dimensional object, through minor hardware modifications. We have applied the technique successfully to a widefield epifluorescence microscope and to a multiaperture confocal microscope.
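
    The image-quality metric named above is the second moment of the emission from a point-like excitation; a minimal sketch of that metric is given below, assuming (as the lead-in here hypothesizes) that the modal optimization drives it toward a minimum for a sharp spot. Function name and toy images are illustrative.

```python
import numpy as np

def second_moment(img):
    """Second moment of the emission spot about its centroid; smaller
    values correspond to a tighter, less aberrated spot."""
    img = np.asarray(img, float)
    img = img / img.sum()
    y, x = np.indices(img.shape)
    cy, cx = (img * y).sum(), (img * x).sum()
    return (img * ((y - cy) ** 2 + (x - cx) ** 2)).sum()

yy, xx = np.mgrid[:64, :64]
sharp   = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 2.0 ** 2))  # tight spot
blurred = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 6.0 ** 2))  # aberrated
print(second_moment(sharp) < second_moment(blurred))  # True
```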

  17. An adaptive signal-processing approach to online adaptive tutoring.

    PubMed

    Bergeron, Bryan; Cline, Andrew

    2011-01-01

    Conventional intelligent or adaptive tutoring online systems rely on domain-specific models of learner behavior based on rules, deep domain knowledge, and other resource-intensive methods. We have developed and studied a domain-independent methodology of adaptive tutoring based on domain-independent signal-processing approaches that obviate the need for the construction of explicit expert and student models. A key advantage of our method over conventional approaches is a lower barrier to entry for educators who want to develop adaptive online learning materials.

  18. A Practical Guide to Calibration of a GSSHA Hydrologic Model Using ERDC Automated Model Calibration Software - Effective and Efficient Stochastic Global Optimization

    DTIC Science & Technology

    2012-02-01

    parameter estimation method, but rather to carefully describe how to use the ERDC software implementation of MLSL that accommodates the PEST model ... model-independent LM-method-based parameter estimation software PEST (Doherty, 2004, 2007a, 2007b), which quantifies model-to-measurement misfit ... et al. (2011) focused on one drawback associated with LM-based model-independent parameter estimation as implemented in PEST; viz., that it requires

  19. Differential expression analysis for RNAseq using Poisson mixed models.

    PubMed

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang

    2017-06-20

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. Validity and power of association testing in family-based sampling designs: evidence for and against the common wisdom.

    PubMed

    Knight, Stacey; Camp, Nicola J

    2011-04-01

    Current common wisdom posits that association analyses using family-based designs have inflated type 1 error rates (if relationships are ignored) and that independent controls are more powerful than familial controls. We explore these suppositions. We show theoretically that family-based designs can have deflated type 1 error rates. Through simulation, we examine the validity and power of family designs for several scenarios: cases from randomly or selectively ascertained pedigrees; and familial or independent controls. Family structures considered are as follows: sibships, nuclear families, moderate-sized and extended pedigrees. Three methods were considered with the χ² test for trend: variance correction (VC), weighted (weights assigned to account for genetic similarity), and naïve (ignoring relatedness), as well as the Modified Quasi-likelihood Score (MQLS) test. Selectively ascertained pedigrees had similar levels of disease enrichment; random ascertainment had no such restriction. Data for 1,000 cases and 1,000 controls were created under the null and alternate models. The VC and MQLS methods were always valid. The naïve method was anti-conservative if independent controls were used and valid or conservative in designs with familial controls. The weighted association method was generally valid for independent controls, and was conservative for familial controls. With regard to power, independent controls were more powerful for small-to-moderate selectively ascertained pedigrees, but familial and independent controls were equivalent in the extended pedigrees and familial controls were consistently more powerful for all randomly ascertained pedigrees. These results suggest a more complex situation than previously assumed, which has important implications for study design and analysis. © 2011 Wiley-Liss, Inc.

  1. Multi-spectrometer calibration transfer based on independent component analysis.

    PubMed

    Liu, Yan; Xu, Hao; Xia, Zhenzhen; Gong, Zhiyong

    2018-02-26

    Calibration transfer is indispensable for practical applications of near infrared (NIR) spectroscopy due to the need for precise and consistent measurements across different spectrometers. In this work, a method for multi-spectrometer calibration transfer is described based on independent component analysis (ICA). A spectral matrix is first obtained by aligning the spectra measured on different spectrometers. Then, by using independent component analysis, the aligned spectral matrix is decomposed into the mixing matrix and the independent components of the different spectrometers. The differing measurements between spectrometers can then be standardized by correcting the coefficients within the independent components. Two NIR datasets of corn and edible oil samples measured with three and four spectrometers, respectively, were used to test the reliability of this method. The results for both datasets reveal that spectral measurements across different spectrometers can be transferred simultaneously, and that partial least squares (PLS) models built with the measurements from one spectrometer can correctly predict the spectra transferred from another.

  2. A new framework of statistical inferences based on the valid joint sampling distribution of the observed counts in an incomplete contingency table.

    PubMed

    Tian, Guo-Liang; Li, Hui-Qiong

    2017-08-01

    Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independency assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap testing hypothesis methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independency assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independency assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables and the analysis results again confirm the conclusions obtained from the simulation studies.

  3. Performance of human fecal anaerobe-associated PCR-based assays in a multi-laboratory method evaluation study

    EPA Science Inventory

    A number of PCR-based methods for detecting human fecal material in environmental waters have been developed over the past decade, but these methods have rarely received independent comparative testing. Here, we evaluated ten of these methods (BacH, BacHum-UCD, B. thetaiotaomic...

  4. A Combined Independent Source Separation and Quality Index Optimization Method for Fetal ECG Extraction from Abdominal Maternal Leads

    PubMed Central

    Billeci, Lucia; Varanini, Maurizio

    2017-01-01

    The non-invasive fetal electrocardiogram (fECG) technique has recently received considerable interest in monitoring fetal health. The aim of our paper is to propose a novel fECG algorithm based on the combination of the criteria of independent source separation and of a quality index optimization (ICAQIO-based). The algorithm was compared with two methods applying the two different criteria independently—the ICA-based and the QIO-based methods—which were previously developed by our group. All three methods were tested on the recently implemented Fetal ECG Synthetic Database (FECGSYNDB). Moreover, the performance of the algorithm was tested on real data from the PhysioNet fetal ECG Challenge 2013 Database. The proposed combined method outperformed the other two algorithms on the FECGSYNDB (ICAQIO-based: 98.78%, QIO-based: 97.77%, ICA-based: 97.61%). Significant differences were obtained in particular in the conditions when uterine contractions and maternal and fetal ectopic beats occurred. On the real data, all three methods obtained very high performances, with the QIO-based method proving slightly better than the other two (ICAQIO-based: 99.38%, QIO-based: 99.76%, ICA-based: 99.37%). The findings from this study suggest that the proposed method could potentially be applied as a novel algorithm for accurate extraction of fECG, especially in critical recording conditions. PMID:28509860

  5. Has Stewart approach improved our ability to diagnose acid-base disorders in critically ill patients?

    PubMed

    Masevicius, Fabio D; Dubin, Arnaldo

    2015-02-04

    The Stewart approach-the application of basic physical-chemical principles of aqueous solutions to blood-is an appealing method for analyzing acid-base disorders. These principles mainly dictate that pH is determined by three independent variables, which change primarily and independently of one another. In blood plasma in vivo these variables are: (1) the PCO2; (2) the strong ion difference (SID)-the difference between the sums of all the strong (i.e., fully dissociated, chemically nonreacting) cations and all the strong anions; and (3) the nonvolatile weak acids (Atot). Accordingly, the pH and the bicarbonate levels (dependent variables) are only altered when one or more of the independent variables change. Moreover, the source of H(+) is the dissociation of water to maintain electroneutrality when the independent variables are modified. The basic principles of the Stewart approach in blood, however, have been challenged in different ways. First, the presumed independent variables are actually interdependent, as occurs in situations such as: (1) the Hamburger effect (a chloride shift when CO2 is added to venous blood from the tissues); (2) the loss of Donnan equilibrium (a chloride shift from the interstitium to the intravascular compartment to balance the decrease of Atot secondary to capillary leak); and (3) the compensatory response to a primary disturbance in either independent variable. Second, the concept of water dissociation in response to changes in SID is controversial and lacks experimental evidence. In addition, the Stewart approach is not better than the conventional method for understanding acid-base disorders such as hyperchloremic metabolic acidosis secondary to a chloride-rich fluid load. Finally, several attempts were made to demonstrate the clinical superiority of the Stewart approach. These studies, however, have severe methodological drawbacks. In contrast, the largest study on this issue indicated the interchangeability of the Stewart and conventional methods. Although the introduction of the Stewart approach provided new insight into acid-base physiology, the method has not significantly improved our ability to understand, diagnose, and treat acid-base alterations in critically ill patients.
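
    As a worked example of the strong ion difference, the small sketch below sums one common (but not the only possible) choice of strong cations and anions at roughly normal plasma concentrations; the ion list and values are illustrative, not a clinical recommendation.

```python
def strong_ion_difference(na, k, ca, mg, cl, lactate=0.0):
    """Apparent strong ion difference (mEq/L): strong cations minus strong
    anions. Which ions to include is a modelling choice."""
    return (na + k + ca + mg) - (cl + lactate)

# Roughly normal plasma values (mEq/L) -> SID in the vicinity of 40 mEq/L
print(strong_ion_difference(na=140, k=4, ca=2.5, mg=1.0, cl=105, lactate=1.0))
```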

  6. A comparative study of interface reconstruction methods for multi-material ALE simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kucharik, Milan; Garimalla, Rao; Schofield, Samuel

    2009-01-01

    In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells, and the Moment-of-Fluid method (MOF). We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two, while the solutions with VOF using the wrong material order are considerably worse.

  7. Naturally-Emerging Technology-Based Leadership Roles in Three Independent Schools: A Social Network-Based Case Study Using Fuzzy Set Qualitative Comparative Analysis

    ERIC Educational Resources Information Center

    Velastegui, Pamela J.

    2013-01-01

    This hypothesis-generating case study investigates the naturally emerging roles of technology brokers and technology leaders in three independent schools in New York involving 92 school educators. A multiple and mixed method design utilizing Social Network Analysis (SNA) and fuzzy set Qualitative Comparative Analysis (FSQCA) involved gathering…

  8. A Multifunctional Interface Method for Coupling Finite Element and Finite Difference Methods: Two-Dimensional Scalar-Field Problems

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    2002-01-01

    A multifunctional interface method with capabilities for variable-fidelity modeling and multiple method analysis is presented. The methodology provides an effective capability by which domains with diverse idealizations can be modeled independently to exploit the advantages of one approach over another. The multifunctional method is used to couple independently discretized subdomains, and it is used to couple the finite element and the finite difference methods. The method is based on a weighted residual variational method and is presented for two-dimensional scalar-field problems. A verification test problem and a benchmark application are presented, and the computational implications are discussed.

  9. An Evaluation of Attitude-Independent Magnetometer-Bias Determination Methods

    NASA Technical Reports Server (NTRS)

    Hashmall, J. A.; Deutschmann, Julie

    1996-01-01

    Although several algorithms now exist for determining three-axis magnetometer (TAM) biases without the use of attitude data, there are few studies on the effectiveness of these methods, especially in comparison with attitude dependent methods. This paper presents the results of a comparison of three attitude independent methods and an attitude dependent method for computing TAM biases. The comparisons are based on in-flight data from the Extreme Ultraviolet Explorer (EUVE), the Upper Atmosphere Research Satellite (UARS), and the Compton Gamma Ray Observatory (GRO). The effectiveness of an algorithm is measured by the accuracy of attitudes computed using biases determined with that algorithm. The attitude accuracies are determined by comparison with known, extremely accurate, star-tracker-based attitudes. In addition, the effect of knowledge of calibration parameters other than the biases on the effectiveness of all bias determination methods is examined.
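
    A toy sketch of the attitude-independent idea, fitting a bias vector so that measured and reference field magnitudes agree in a least-squares sense. This illustrates the general principle only; it is not any of the specific algorithms compared in the paper, and all numbers are made up.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
true_bias = np.array([120.0, -80.0, 45.0])             # nT, hypothetical value
B_true = rng.normal(size=(200, 3)) * 1e4               # "reference" field vectors
B_meas = B_true + true_bias + rng.normal(scale=20.0, size=(200, 3))

def residuals(b):
    # attitude-independent constraint: |B_meas - b| should match |B_ref|
    return np.linalg.norm(B_meas - b, axis=1) - np.linalg.norm(B_true, axis=1)

fit = least_squares(residuals, x0=np.zeros(3))
print(fit.x.round(1))   # close to the injected bias
```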

  10. Systems-based biological concordance and predictive reproducibility of gene set discovery methods in cardiovascular disease.

    PubMed

    Azuaje, Francisco; Zheng, Huiru; Camargo, Anyela; Wang, Haiying

    2011-08-01

    The discovery of novel disease biomarkers is a crucial challenge for translational bioinformatics. Demonstration of both their classification power and reproducibility across independent datasets are essential requirements to assess their potential clinical relevance. Small datasets and multiplicity of putative biomarker sets may explain lack of predictive reproducibility. Studies based on pathway-driven discovery approaches have suggested that, despite such discrepancies, the resulting putative biomarkers tend to be implicated in common biological processes. Investigations of this problem have been mainly focused on datasets derived from cancer research. We investigated the predictive and functional concordance of five methods for discovering putative biomarkers in four independently-generated datasets from the cardiovascular disease domain. A diversity of biosignatures was identified by the different methods. However, we found strong biological process concordance between them, especially in the case of methods based on gene set analysis. With a few exceptions, we observed lack of classification reproducibility using independent datasets. Partial overlaps between our putative sets of biomarkers and the primary studies exist. Despite the observed limitations, pathway-driven or gene set analysis can predict potentially novel biomarkers and can jointly point to biomedically-relevant underlying molecular mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. USE OF BACTEROIDES PCR-BASED METHODS TO EXAMINE FECAL CONTAMINATION SOURCES IN TROPICAL COASTAL WATERS

    EPA Science Inventory

    Several library independent Microbial Source Tracking methods have been developed to rapidly determine the source of fecal contamination. Thus far, none of these methods have been tested in tropical marine waters. In this study, we used a Bacteroides 16S rDNA PCR-based...

  12. Data-Driven Model Reduction and Transfer Operator Approximation

    NASA Astrophysics Data System (ADS)

    Klus, Stefan; Nüske, Feliks; Koltai, Péter; Wu, Hao; Kevrekidis, Ioannis; Schütte, Christof; Noé, Frank

    2018-06-01

    In this review paper, we will present different data-driven dimension reduction techniques for dynamical systems that are based on transfer operator theory as well as methods to approximate transfer operators and their eigenvalues, eigenfunctions, and eigenmodes. The goal is to point out similarities and differences between methods developed independently by the dynamical systems, fluid dynamics, and molecular dynamics communities such as time-lagged independent component analysis, dynamic mode decomposition, and their respective generalizations. As a result, extensions and best practices developed for one particular method can be carried over to other related methods.
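
    A minimal exact-DMD sketch in the spirit of the methods surveyed: fit a linear operator between successive snapshots via a truncated SVD and return its eigenvalues and modes. The helper name and the synthetic data are illustrative, not taken from the review.

```python
import numpy as np

def dmd_modes(snapshots, rank=None):
    """Exact DMD: least-squares operator A with Y ≈ A X, returned via the
    eigendecomposition of its rank-reduced projection."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if rank is not None:
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Toy data: two undamped oscillating spatial patterns on 64 points, 100 steps
t = np.arange(100)
x = np.linspace(0, 1, 64)[:, None]
data = np.sin(2*np.pi*x) * np.cos(0.3*t) + 0.5*np.sin(6*np.pi*x) * np.cos(1.1*t)
lam, phi = dmd_modes(data, rank=4)
print(np.abs(lam))   # eigenvalue magnitudes near 1 for these conservative dynamics
```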

  13. Through-the-Wall Localization of a Moving Target by Two Independent Ultra Wideband (UWB) Radar Systems

    PubMed Central

    Kocur, Dušan; Švecová, Mária; Rovňáková, Jana

    2013-01-01

    In the case of through-the-wall localization of moving targets by ultra wideband (UWB) radars, there are applications in which handheld sensors equipped only with one transmitting and two receiving antennas are applied. Sometimes, the radar using such a small antenna array is not able to localize the target with the required accuracy. With a view to improve through-the-wall target localization, cooperative positioning based on a fusion of data retrieved from two independent radar systems can be used. In this paper, the novel method of the cooperative localization referred to as joining intersections of the ellipses is introduced. This method is based on a geometrical interpretation of target localization where the target position is estimated using a properly created cluster of the ellipse intersections representing potential positions of the target. The performance of the proposed method is compared with the direct calculation method and two alternative methods of cooperative localization using data obtained by measurements with the M-sequence UWB radars. The direct calculation method is applied for the target localization by particular radar systems. As alternative methods of cooperative localization, the arithmetic average of the target coordinates estimated by two single independent UWB radars and the Taylor series method is considered. PMID:24021968

  14. Through-the-wall localization of a moving target by two independent ultra wideband (UWB) radar systems.

    PubMed

    Kocur, Dušan; Svecová, Mária; Rovňáková, Jana

    2013-09-09

    In the case of through-the-wall localization of moving targets by ultra wideband (UWB) radars, there are applications in which handheld sensors equipped only with one transmitting and two receiving antennas are applied. Sometimes, the radar using such a small antenna array is not able to localize the target with the required accuracy. With a view to improve through-the-wall target localization, cooperative positioning based on a fusion of data retrieved from two independent radar systems can be used. In this paper, the novel method of the cooperative localization referred to as joining intersections of the ellipses is introduced. This method is based on a geometrical interpretation of target localization where the target position is estimated using a properly created cluster of the ellipse intersections representing potential positions of the target. The performance of the proposed method is compared with the direct calculation method and two alternative methods of cooperative localization using data obtained by measurements with the M-sequence UWB radars. The direct calculation method is applied for the target localization by particular radar systems. As alternative methods of cooperative localization, the arithmetic average of the target coordinates estimated by two single independent UWB radars and the Taylor series method is considered.

  15. Extinction-ratio-independent electrical method for measuring chirp parameters of Mach-Zehnder modulators using frequency-shifted heterodyne.

    PubMed

    Zhang, Shangjian; Wang, Heng; Zou, Xinhai; Zhang, Yali; Lu, Rongguo; Liu, Yong

    2015-06-15

    An extinction-ratio-independent electrical method is proposed for measuring the chirp parameters of Mach-Zehnder electro-optic intensity modulators based on frequency-shifted optical heterodyning. The method utilizes electrical spectrum analysis of the heterodyne products between the intensity-modulated optical signal and the frequency-shifted optical carrier, and achieves intrinsic chirp parameter measurement in the microwave region with high frequency resolution and a wide frequency range for a Mach-Zehnder modulator with a finite extinction ratio. Moreover, the proposed method avoids calibrating the responsivity fluctuation of the photodiode despite the photodetection involved. Chirp parameters as a function of modulation frequency are experimentally measured and compared to those obtained with the conventional optical spectrum analysis method. Our method enables an extinction-ratio-independent and calibration-free electrical measurement of Mach-Zehnder intensity modulators by using the high-resolution frequency-shifted heterodyne technique.

  16. Centralized, decentralized, and independent control of a flexible manipulator on a flexible base

    NASA Technical Reports Server (NTRS)

    Li, Feiyue; Bainum, Peter M.; Xu, Jianke

    1991-01-01

    The dynamics and control of a flexible manipulator arm with payload mass on a flexible base in space are considered. The controllers are provided by one torquer at the center of the base and one torquer at the connection joint of the robot and the base. The nonlinear dynamics of the system is modeled by applying the finite element method and the Lagrangian formulation. Three control strategies are considered and compared, i.e., centralized control, decentralized control, and independent control. All these control designs are based on linear quadratic regulator theory. A mathematical decomposition is used in the decentralization process so that the coupling between the subsystems is weak, while a physical decomposition is used in the independent control design process. For both the decentralized and the independent controls, the stability of the overall linear system is checked before numerical simulations are initiated. Two numerical examples show that the responses of the independent control system are close to those of the centralized control system, while the responses of the decentralized control system are not.
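
    All three designs rest on linear quadratic regulator theory; the sketch below computes a centralized LQR gain for a made-up two-mode model (rigid base rotation plus one flexible mode) with two torquers. The matrices are illustrative, not the paper's finite element model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# States: base angle, base rate, modal coordinate, modal rate (hypothetical model)
A = np.array([[0, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, -4.0, -0.02]])       # flexible mode: ω² = 4, light damping
B = np.array([[0, 0],
              [1.0, 0.2],
              [0, 0],
              [0.2, 1.0]])                # two torquers, weakly cross-coupled
Q = np.diag([10.0, 1.0, 10.0, 1.0])
R = np.eye(2)

P = solve_continuous_are(A, B, Q, R)       # Riccati solution
K = np.linalg.solve(R, B.T @ P)            # centralized LQR gain, u = -K x
print(np.linalg.eigvals(A - B @ K).real.max())  # all closed-loop poles stable (< 0)
```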

  17. Similar negative impacts of temperature on global wheat yield estimated by three independent methods

    USDA-ARS?s Scientific Manuscript database

    The potential impact of global temperature change on global wheat production has recently been assessed with different methods, scaling and aggregation approaches. Here we show that grid-based simulations, point-based simulations, and statistical regressions produce similar estimates of temperature ...

  18. A method for analyzing clustered interval-censored data based on Cox's model.

    PubMed

    Kor, Chew-Teng; Cheng, Kuang-Fu; Chen, Yi-Hau

    2013-02-28

    Methods for analyzing interval-censored data are well established. Unfortunately, these methods are inappropriate for studies with correlated data. In this paper, we focus on developing a method for analyzing clustered interval-censored data. Our method is based on Cox's proportional hazards model with a piecewise-constant baseline hazard function. The correlation structure of the data can be modeled by using Clayton's copula or an independence model with proper adjustment in the covariance estimation. We establish estimating equations for the regression parameters and baseline hazards (and a parameter in the copula) simultaneously. Simulation results confirm that the point estimators follow a multivariate normal distribution, and our proposed variance estimators are reliable. In particular, we found that the approach with the independence model worked well even when the true correlation model was derived from Clayton's copula. We applied our method to a family-based cohort study of pandemic H1N1 influenza in Taiwan during 2009-2010. Using the proposed method, we investigate the impact of vaccination and family contacts on the incidence of pH1N1 influenza. Copyright © 2012 John Wiley & Sons, Ltd.

  19. A Fully Automated Trial Selection Method for Optimization of Motor Imagery Based Brain-Computer Interface.

    PubMed

    Zhou, Bangyan; Wu, Xiaopei; Lv, Zhao; Zhang, Lei; Guo, Xiaojin

    2016-01-01

    Independent component analysis (ICA), a promising spatial filtering method, can separate motor-related independent components (MRICs) from multichannel electroencephalogram (EEG) signals. However, unpredictable burst interferences may significantly degrade the performance of an ICA-based brain-computer interface (BCI) system. In this study, we proposed a new algorithmic framework to address this issue by combining a single-trial-based ICA filter with a zero-training classifier. We developed a two-round data selection method to automatically identify badly corrupted EEG trials in the training set. The "high quality" training trials were utilized to optimize the ICA filter. In addition, we proposed an accuracy-matrix method to locate the artifact data segments within a single trial and investigated which types of artifacts can influence the performance of ICA-based MIBCIs. Twenty-six EEG datasets of three-class motor imagery were used to validate the proposed methods, and the classification accuracies were compared with those obtained by the frequently used common spatial pattern (CSP) spatial filtering algorithm. The experimental results demonstrated that the proposed optimization strategy could effectively improve the stability, practicality and classification performance of ICA-based MIBCI. The study revealed that rational use of the ICA method may be crucial in building a practical ICA-based MIBCI system.

  20. Estimating biodiversity of fungi in activated sludge communities using culture-independent methods.

    PubMed

    Evans, Tegan N; Seviour, Robert J

    2012-05-01

    Fungal diversity of communities in several activated sludge plants treating different influent wastes was determined by comparative sequence analyses of their 18S rRNA genes. Methods for DNA extraction and choice of primers for PCR amplification were both optimised using denaturing gradient gel electrophoresis profile patterns. Phylogenetic analysis revealed that the levels of fungal biodiversity in some communities, like those treating paper pulp wastes, were low, and most of the fungi detected in all communities examined were novel uncultured representatives of the major fungal subdivisions, in particular, the newly described clade Cryptomycota. The fungal populations in activated sludge revealed by these culture-independent methods were markedly different to those based on culture-dependent data. Members of the genera Penicillium, Cladosporium, Aspergillus and Mucor, which have been commonly identified in mixed liquor, were not identified in any of these plant communities. Non-fungal eukaryotic 18S rRNA genes were also amplified with the primer sets used. This is the first report where culture-independent methods have been applied to flocculated activated sludge biomass samples to estimate fungal community composition and, as expected, the data obtained gave a markedly different view of their population biodiversity compared to that based on culture-dependent methods.

  1. The value of a year's general education for reducing the symptoms of dementia.

    PubMed

    Brent, Robert J

    2018-01-01

    We present a method for estimating the benefits of years of education for reducing dementia symptoms based on the cost savings that would accrue from continuing independent living rather than relying on formal or informal carers. Our method for estimating the benefits of education involves three steps: first, taking a year of education and seeing how much it lowers dementia; second, using this reduction in dementia to estimate how much independent living is affected; and third, applying the change in caregiving costs associated with that change in independent living. We apply our method for estimating education benefits to a National Alzheimer's Coordinating Center sample of 17,239 participants at 32 US Alzheimer's disease centres between September 2005 and May 2015.

  2. [Extraction of evoked related potentials by using the combination of independent component analysis and wavelet analysis].

    PubMed

    Zou, Ling; Chen, Shuyue; Sun, Yuqiang; Ma, Zhenghua

    2010-08-01

    In this paper we present a new method that combines Independent Component Analysis (ICA) and a wavelet de-noising algorithm to extract Evoked Related Potentials (ERPs). First, the extended Infomax ICA algorithm is used to analyze the EEG signals and obtain the independent components (ICs). Then, the Wave Shrink (WS) method is applied to the demixed ICs as an intermediate step, the EEG data are rebuilt by applying the inverse ICA to the new ICs, and the ERPs are extracted by averaging the de-noised EEG data over several trials. The experimental results showed that both the combined method and the ICA method could remove eye artifacts and muscle artifacts mixed into the ERPs, while the combined method could also retain the brain neural activity mixed into the noisy ICs and could efficiently extract weak ERPs from strong background artifacts.

  3. Ranking and averaging independent component analysis by reproducibility (RAICAR).

    PubMed

    Yang, Zhi; LaConte, Stephen; Weng, Xuchu; Hu, Xiaoping

    2008-06-01

    Independent component analysis (ICA) is a data-driven approach that has exhibited great utility for functional magnetic resonance imaging (fMRI). Standard ICA implementations, however, do not provide the number and relative importance of the resulting components. In addition, ICA algorithms utilizing gradient-based optimization give decompositions that are dependent on initialization values, which can lead to dramatically different results. In this work, a new method, RAICAR (Ranking and Averaging Independent Component Analysis by Reproducibility), is introduced to address these issues for spatial ICA applied to fMRI. RAICAR utilizes repeated ICA realizations and relies on the reproducibility between them to rank and select components. Different realizations are aligned based on correlations, leading to aligned components. Each component is ranked and thresholded based on between-realization correlations. Furthermore, different realizations of each aligned component are selectively averaged to generate the final estimate of the given component. Reliability and accuracy of this method are demonstrated with both simulated and experimental fMRI data. Copyright 2007 Wiley-Liss, Inc.
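
    A rough sketch of the reproducibility idea: run ICA several times with different seeds, match components across runs by absolute correlation, and score each component by its mean best-match correlation. This simplifies RAICAR (it omits the full cross-realization alignment and selective averaging) and uses synthetic data, so it is an illustration rather than the published algorithm.

```python
import numpy as np
from sklearn.decomposition import FastICA

def raicar_like(X, n_components, n_runs=10):
    """Reproducibility score per component: mean, over repeated ICA runs,
    of the best absolute correlation with the first run's components."""
    runs = []
    for seed in range(n_runs):
        S = FastICA(n_components=n_components, random_state=seed).fit_transform(X)
        runs.append(S.T)                       # components x samples
    ref = runs[0]
    reproducibility = np.zeros(n_components)
    for k, comp in enumerate(ref):
        best = [np.abs([np.corrcoef(comp, c)[0, 1] for c in other]).max()
                for other in runs[1:]]         # best match in each other run
        reproducibility[k] = np.mean(best)
    return reproducibility

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
S = np.c_[np.sin(2 * np.pi * 7 * t), np.sign(np.sin(2 * np.pi * 3 * t))]
X = S @ rng.normal(size=(2, 5)) + 0.05 * rng.normal(size=(2000, 5))
print(raicar_like(X, n_components=2).round(2))   # strong sources reproduce well
```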

  4. Approach-Method Interaction: The Role of Teaching Method on the Effect of Context-Based Approach in Physics Instruction

    ERIC Educational Resources Information Center

    Pesman, Haki; Ozdemir, Omer Faruk

    2012-01-01

    The purpose of this study is to explore not only the effect of context-based physics instruction on students' achievement and motivation in physics, but also how the use of different teaching methods influences it (interaction effect). Therefore, two two-level independent variables were defined: teaching approach (contextual and non-contextual…

  5. Fall Risk Assessment Through Automatic Combination of Clinical Fall Risk Factors and Body-Worn Sensor Data.

    PubMed

    Greene, Barry R; Redmond, Stephen J; Caulfield, Brian

    2017-05-01

    Falls are the leading global cause of accidental death and disability in older adults and are the most common cause of injury and hospitalization. Accurate, early identification of patients at risk of falling, could lead to timely intervention and a reduction in the incidence of fall-related injury and associated costs. We report a statistical method for fall risk assessment using standard clinical fall risk factors (N = 748). We also report a means of improving this method by automatically combining it, with a fall risk assessment algorithm based on inertial sensor data and the timed-up-and-go test. Furthermore, we provide validation data on the sensor-based fall risk assessment method using a statistically independent dataset. Results obtained using cross-validation on a sample of 292 community dwelling older adults suggest that a combined clinical and sensor-based approach yields a classification accuracy of 76.0%, compared to either 73.6% for sensor-based assessment alone, or 68.8% for clinical risk factors alone. Increasing the cohort size by adding an additional 130 subjects from a separate recruitment wave (N = 422), and applying the same model building and validation method, resulted in a decrease in classification performance (68.5% for combined classifier, 66.8% for sensor data alone, and 58.5% for clinical data alone). This suggests that heterogeneity between cohorts may be a major challenge when attempting to develop fall risk assessment algorithms which generalize well. Independent validation of the sensor-based fall risk assessment algorithm on an independent cohort of 22 community dwelling older adults yielded a classification accuracy of 72.7%. Results suggest that the present method compares well to previously reported sensor-based fall risk assessment methods in assessing falls risk. Implementation of objective fall risk assessment methods on a large scale has the potential to improve quality of care and lead to a reduction in associated hospital costs, due to fewer admissions and reduced injuries due to falling.

  6. Improving semi-text-independent method of writer verification using difference vector

    NASA Astrophysics Data System (ADS)

    Li, Xin; Ding, Xiaoqing

    2009-01-01

The semi-text-independent method of writer verification, based on a linear framework, can use all characters of two handwritings to discriminate between writers when the text contents are known. The handwritings may contain only a small number of characters, which may even be entirely different between the two samples. This fills the gap between the classical text-dependent and text-independent methods of writer verification. Moreover, this paper exploits the content information, i.e., which character each sample represents, within the semi-text-independent method. Two types of standard templates, generated from many writer-unknown handwritten samples and printed samples of each character, are introduced to represent the content information of each character. Difference vectors are obtained by subtracting the standard templates from the original feature vectors of the character samples, and these replace the original vectors in the writer verification process. By removing a large amount of content information while retaining the style information, the verification accuracy of the semi-text-independent method is improved. On a handwriting database of 30 writers, when the query and reference handwritings each consist of 30 distinct characters, the average equal error rate (EER) of writer verification reaches 9.96%; when the handwritings contain 50 characters, the average EER falls to 6.34%, which is 23.9% lower than the EER obtained without the difference vectors.
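
    The core operation can be sketched in a few lines of NumPy: subtract each character's writer-independent standard template from its feature vector so that mostly style information remains, then compare two handwritings on those residuals. The feature layout and the cosine-similarity comparison are assumptions made for illustration, not the paper's exact pipeline.

      # Hedged sketch of the difference-vector idea.
      import numpy as np

      def difference_vectors(features, templates, char_ids):
          """features: (n_chars, d) array; templates: dict mapping character id -> (d,) template."""
          return np.stack([features[i] - templates[c] for i, c in enumerate(char_ids)])

      def handwriting_similarity(diff_a, diff_b):
          # One simple choice: cosine similarity between mean difference vectors of two handwritings.
          a, b = diff_a.mean(axis=0), diff_b.mean(axis=0)
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))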

  7. Retention of colonoscopy skills after virtual reality simulator training by independent and proctored methods.

    PubMed

    Snyder, Christopher W; Vandromme, Marianne J; Tyra, Sharon L; Hawn, Mary T

    2010-07-01

    Virtual reality (VR) simulators may enhance surgical resident colonoscopy skills, but the duration of skill retention and the effects of different simulator training methods are unknown. Medical students participating in a randomized trial of independent (automated simulator feedback only) versus proctored (human expert feedback plus simulator feedback) simulator training performed a standardized VR colonoscopy scenario at baseline, at the end of training (posttraining), and after a median 4.5 months without practice (retention). Performances were scored on a 10-point scale based on expert proficiency criteria and compared for the independent and proctored groups. Thirteen trainees (8 proctored, 5 independent) were included. Performance at retention testing was significantly better than baseline (median score 10 vs. 5, P < 0.0001), and no different from posttraining (median score 10 vs. 10, P = 0.19). Score changes from baseline to retention and from posttraining to retention were no different for the proctored and independent groups. Overinsufflation and excessive force were the most common reasons for nonproficiency at retention. After proficiency-based VR simulator training, colonoscopy skills are retained for several months, regardless of whether an independent or proctored approach is used. Error avoidance skills may not be retained as well as speed and efficiency skills.

  8. Differential resistance of drinking water bacterial populations to monochloramine disinfection.

    PubMed

    Chiao, Tzu-Hsin; Clancy, Tara M; Pinto, Ameet; Xi, Chuanwu; Raskin, Lutgarde

    2014-04-01

The impact of monochloramine disinfection on the complex bacterial community structure in drinking water systems was investigated using culture-dependent and culture-independent methods. Changes in viable bacterial diversity were monitored using culture-independent methods that distinguish between live and dead cells based on membrane integrity, providing a highly conservative measure of viability. Samples were collected from lab-scale and full-scale drinking water filters exposed to monochloramine for a range of contact times. Culture-independent detection of live cells was based on propidium monoazide (PMA) treatment to selectively remove DNA from membrane-compromised cells. Quantitative PCR (qPCR) and pyrosequencing of 16S rRNA genes were used to quantify the DNA of live bacteria and characterize the bacterial communities, respectively. The inactivation rate determined by the culture-independent PMA-qPCR method (1.5-log removal at 664 mg·min/L) was lower than the inactivation rate measured by the culture-based methods (4-log removal at 66 mg·min/L). Moreover, drastic changes in the live bacterial community structure were detected during monochloramine disinfection using PMA-pyrosequencing, while the community structure appeared to remain stable when pyrosequencing was performed on samples that were not subject to PMA treatment. Genera that increased in relative abundance during monochloramine treatment include Legionella, Escherichia, and Geobacter in the lab-scale system and Mycobacterium, Sphingomonas, and Coxiella in the full-scale system. These results demonstrate that bacterial populations in drinking water exhibit differential resistance to monochloramine, and that the disinfection process selects for resistant bacterial populations.
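
    For reference, a log-removal figure such as the values quoted above is computed from live-cell (or colony) counts before and after treatment as shown below; the numbers in the example are illustrative, not the study's data.

      # Log removal from counts before and after disinfection (illustrative numbers only).
      import math

      def log_removal(count_before, count_after):
          return math.log10(count_before / count_after)

      # A 4-log removal means the count dropped by a factor of 10,000:
      print(log_removal(1e7, 1e3))   # -> 4.0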

  9. Silicon-on-insulator-based polarization-independent 1×3 broadband beam splitter with adiabatic coupling

    NASA Astrophysics Data System (ADS)

    Gong, Yuanhao; Liu, Lei; Chang, Limin; Li, Zhiyong; Tan, Manqing; Yu, Yude

    2017-10-01

We propose and numerically simulate a polarization-independent 1×3 broadband beam splitter based on silicon-on-insulator (SOI) technology with adiabatic coupling. The designed structure is simulated with the beam propagation method (BPM) and achieves a simulated transmission uniformity across the three outputs of better than 0.3 dB for TE polarization and 0.8 dB for TM polarization over a 180 nm bandwidth.

  10. An Attempt at Matching Waking Events Into Dream Reports by Independent Judges

    PubMed Central

    Wang, Jia Xi; Shen, He Yong

    2018-01-01

Correlations between memories and dreaming have typically been studied by linking conscious experiences and dream reports, which has illustrated that dreaming reflects waking life events, thoughts, and emotions. As some research suggests that sleep has a memory consolidation function and that dreams reflect this, researching this relationship further may uncover useful insights. However, most related research has been conducted using the self-report method, which asks participants to judge the relationship between their own conscious experiences and dreams. This method may cause errors when the research purpose is to make comparisons between different groups, because individual differences cannot be balanced out when the results are compared among groups. Based on a knowledge of metaphors and symbols, we developed two operationalized definitions for independent judges to match conscious experiences and dreams, descriptive incorporation and metaphorical incorporation, and tested their reliability for the matching purpose. Two independent judges were asked to complete a linking task for 212 paired event-dream reports. Results showed that almost half of the dreams could be matched by independent judges, and that the independent-judge method provided proportions for the linking task similar to those of the self-report method. PMID:29681873

  11. Dielectric properties-based method for rapid and nondestructive moisture sensing in almonds

    USDA-ARS?s Scientific Manuscript database

A dielectric-based method is presented for moisture determination in almonds independent of bulk density. The dielectric properties of almonds were measured between 5 and 15 GHz, in 1-GHz increments, for samples with moisture contents ranging from 4.8% to 16.5%, wet basis, bulk densities ranging ...

  12. An Information-Correction Method for Testlet-Based Test Analysis: From the Perspectives of Item Response Theory and Generalizability Theory. Research Report. ETS RR-17-27

    ERIC Educational Resources Information Center

    Li, Feifei

    2017-01-01

    An information-correction method for testlet-based tests is introduced. This method takes advantage of both generalizability theory (GT) and item response theory (IRT). The measurement error for the examinee proficiency parameter is often underestimated when a unidimensional conditional-independence IRT model is specified for a testlet dataset. By…

  13. Cross-domain expression recognition based on sparse coding and transfer learning

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Zhang, Weiyi; Huang, Yong

    2017-05-01

Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. However, in actual expression recognition applications, this condition is hardly satisfied because of differences in lighting, shading, race, and so on. In order to solve this problem and improve the performance of expression recognition in practical applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First, a common primitive model, that is, a dictionary, is learned. Then, based on the idea of transfer learning, the learned primitive pattern is transferred to facial expressions and the corresponding feature representation is obtained by sparse coding. Experimental results on the CK+, JAFFE, and NVIE databases show that the transfer learning method based on sparse coding can effectively improve the recognition rate in the cross-domain expression recognition task and is suitable for practical facial expression recognition applications.
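
    A hedged scikit-learn sketch of the transfer idea follows: learn a dictionary on source-domain features, then sparse-code target-domain expression features on that fixed dictionary and train a classifier on the codes. The dimensions, parameters, and synthetic data are placeholders, not the paper's setup.

      # Sketch: dictionary learned on a source domain, reused to sparse-code a target domain.
      import numpy as np
      from sklearn.decomposition import DictionaryLearning, sparse_encode
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)
      X_source = rng.normal(size=(300, 64))      # source-domain feature vectors
      X_target = rng.normal(size=(120, 64))      # target-domain expression features
      y_target = rng.integers(0, 6, size=120)    # synthetic expression labels

      dico = DictionaryLearning(n_components=16, alpha=1.0, max_iter=100, random_state=0)
      dico.fit(X_source)                          # the shared "primitive model" (dictionary)

      codes = sparse_encode(X_target, dico.components_, algorithm="lasso_lars", alpha=0.1)
      clf = LinearSVC().fit(codes, y_target)      # classifier trained on the transferred codes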

  14. Discontinuous Finite Element Quasidiffusion Methods

    DOE PAGES

    Anistratov, Dmitriy Yurievich; Warsa, James S.

    2018-05-21

Here in this paper, two-level methods for solving transport problems in one-dimensional slab geometry based on the quasi-diffusion (QD) method are developed. A linear discontinuous finite element method (LDFEM) is derived for the spatial discretization of the low-order QD (LOQD) equations. It involves special interface conditions at the cell edges based on the idea of QD boundary conditions (BCs). We consider different kinds of QD BCs to formulate the necessary cell-interface conditions. We develop two-level methods with independent discretization of the high-order transport equation and LOQD equations, where the transport equation is discretized using the method of characteristics and the LDFEM is applied to the LOQD equations. We also formulate closures that lead to the discretization consistent with a LDFEM discretization of the transport equation. The proposed methods are studied by means of test problems formulated with the method of manufactured solutions. Numerical experiments are presented demonstrating the performance of the proposed methods. Lastly, we also show that the method with independent discretization has the asymptotic diffusion limit.

  15. Discontinuous Finite Element Quasidiffusion Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anistratov, Dmitriy Yurievich; Warsa, James S.

Here in this paper, two-level methods for solving transport problems in one-dimensional slab geometry based on the quasi-diffusion (QD) method are developed. A linear discontinuous finite element method (LDFEM) is derived for the spatial discretization of the low-order QD (LOQD) equations. It involves special interface conditions at the cell edges based on the idea of QD boundary conditions (BCs). We consider different kinds of QD BCs to formulate the necessary cell-interface conditions. We develop two-level methods with independent discretization of the high-order transport equation and LOQD equations, where the transport equation is discretized using the method of characteristics and the LDFEM is applied to the LOQD equations. We also formulate closures that lead to the discretization consistent with a LDFEM discretization of the transport equation. The proposed methods are studied by means of test problems formulated with the method of manufactured solutions. Numerical experiments are presented demonstrating the performance of the proposed methods. Lastly, we also show that the method with independent discretization has the asymptotic diffusion limit.

  16. Single Transducer Ultrasonic Imaging Method that Eliminates the Effect of Plate Thickness Variation in the Image

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    1996-01-01

    This article describes a single transducer ultrasonic imaging method that eliminates the effect of plate thickness variation in the image. The method thus isolates ultrasonic variations due to material microstructure. The use of this method can result in significant cost savings because the ultrasonic image can be interpreted correctly without the need for machining to achieve precise thickness uniformity during nondestructive evaluations of material development. The method is based on measurement of ultrasonic velocity. Images obtained using the thickness-independent methodology are compared with conventional velocity and c-scan echo peak amplitude images for monolithic ceramic (silicon nitride), metal matrix composite and polymer matrix composite materials. It was found that the thickness-independent ultrasonic images reveal and quantify correctly areas of global microstructural (pore and fiber volume fraction) variation due to the elimination of thickness effects. The thickness-independent ultrasonic imaging method described in this article is currently being commercialized under a cooperative agreement between NASA Lewis Research Center and Sonix, Inc.

  17. The value of a year’s general education for reducing the symptoms of dementia

    PubMed Central

    Brent, Robert J.

    2017-01-01

We present a method for estimating the benefits of years of education for reducing dementia symptoms, based on the cost savings that would accrue from continuing independent living rather than relying on formal or informal carers. Our method involves three steps: first, taking a year of education and estimating how much it lowers dementia symptoms; second, using this reduction to estimate how much independent living is affected; and third, applying the change in caregiving costs associated with the change in independent living. We apply our method to a National Alzheimer’s Coordinating Center sample of 17,239 participants at 32 US Alzheimer’s disease centres between September 2005 and May 2015. PMID:29743729
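
    The three-step chain can be written out as a small calculation; every coefficient below is purely hypothetical and merely stands in for the estimates the paper derives from its data.

      # Hypothetical illustration of the three-step logic: education -> dementia symptoms
      # -> independent living -> caregiving cost savings. All numbers are made up.
      def education_benefit(extra_years,
                            symptom_drop_per_year=0.5,          # hypothetical step 1 estimate
                            independence_gain_per_symptom=0.02, # hypothetical step 2 estimate
                            annual_care_cost=50_000.0):         # hypothetical step 3 input
          symptom_reduction = extra_years * symptom_drop_per_year
          independence_gain = symptom_reduction * independence_gain_per_symptom
          return independence_gain * annual_care_cost           # expected yearly cost saving

      print(education_benefit(1.0))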

  18. A Multilevel Testlet Model for Dual Local Dependence

    ERIC Educational Resources Information Center

    Jiao, Hong; Kamata, Akihito; Wang, Shudong; Jin, Ying

    2012-01-01

    The applications of item response theory (IRT) models assume local item independence and that examinees are independent of each other. When a representative sample for psychometric analysis is selected using a cluster sampling method in a testlet-based assessment, both local item dependence and local person dependence are likely to be induced.…

  19. An Extension of IRT-Based Equating to the Dichotomous Testlet Response Theory Model

    ERIC Educational Resources Information Center

    Tao, Wei; Cao, Yi

    2016-01-01

    Current procedures for equating number-correct scores using traditional item response theory (IRT) methods assume local independence. However, when tests are constructed using testlets, one concern is the violation of the local item independence assumption. The testlet response theory (TRT) model is one way to accommodate local item dependence.…

  20. Factors Affecting Student Retention at One Independent School in the Southwest

    ERIC Educational Resources Information Center

    Ahlstrom, Dan Roger

    2013-01-01

    This mixed-methods case study determined the factors and examined the issues associated with student retention at a faith-based independent day school in southwestern United States of America. The data included online surveys, personal interviews, collection of archival information, and the researcher's extensive field notes. Surveys (530) were…

  1. Transfer matrix method for dynamics modeling and independent modal space vibration control design of linear hybrid multibody system

    NASA Astrophysics Data System (ADS)

    Rong, Bao; Rui, Xiaoting; Lu, Kun; Tao, Ling; Wang, Guoping; Ni, Xiaojun

    2018-05-01

In this paper, an efficient method for dynamics modeling and vibration control design of a linear hybrid multibody system (MS) is studied based on the transfer matrix method. The natural vibration characteristics of a linear hybrid MS are solved using low-order transfer equations. Then, by constructing a new body dynamics equation, augmented operator, and augmented eigenvector, the orthogonality of the augmented eigenvectors of a linear hybrid MS is satisfied, and its state-space model expressed in each independent modal space is obtained easily. Based on this dynamics model, a robust independent modal space fuzzy controller is designed for vibration control of a general MS, and genetic optimization of some critical control parameters of the fuzzy tuners is also presented. Two illustrative examples are presented; the results show that the method is computationally efficient and achieves excellent control performance.

  2. Method and apparatus for control of a magnetic structure

    DOEpatents

    Challenger, Michael P.; Valla, Arthur S.

    1996-06-18

    A method and apparatus for independently adjusting the spacing between opposing magnet arrays in charged particle based light sources. Adjustment mechanisms between each of the magnet arrays and the supporting structure allow the gap between the two magnet arrays to be independently adjusted. In addition, spherical bearings in the linkages to the magnet arrays permit the transverse angular orientation of the magnet arrays to also be adjusted. The opposing magnet arrays can be supported above the ground by the structural support.

  3. Detecting coupled collective motions in protein by independent subspace analysis

    NASA Astrophysics Data System (ADS)

    Sakuraba, Shun; Joti, Yasumasa; Kitao, Akio

    2010-11-01

Protein dynamics evolves in a high-dimensional space, comprising anharmonic, strongly correlated motional modes. Such correlation often plays an important role in analyzing protein function. In order to identify significantly correlated collective motions, here we employ independent subspace analysis based on the subspace joint approximate diagonalization of eigenmatrices algorithm for the analysis of molecular dynamics (MD) simulation trajectories. From the 100 ns MD simulation of T4 lysozyme, we extract several independent subspaces in each of which collective modes are significantly correlated, and identify the other modes as independent. This method successfully detects the modes along which long-tailed non-Gaussian probability distributions are obtained. Based on time cross-correlation analysis, we identified a series of events among domain motions and more localized motions in the protein, indicating the connection between functionally relevant phenomena which have been independently revealed by experiments.

  4. Development and application of pulmonary structure-function registration methods: towards pulmonary image-guidance tools for improved airway targeted therapies and outcomes

    NASA Astrophysics Data System (ADS)

    Guo, Fumin; Pike, Damien; Svenningsen, Sarah; Coxson, Harvey O.; Drozd, John J.; Yuan, Jing; Fenster, Aaron; Parraga, Grace

    2014-03-01

Objectives: We aimed to develop a way to rapidly generate multi-modality (MRI-CT) pulmonary imaging structure-function maps using novel non-rigid image registration methods. This objective is part of our overarching goal to provide an image processing pipeline to generate pulmonary structure-function maps and guide airway-targeted therapies. Methods: Anatomical 1H and functional 3He MRI were acquired in 5 healthy asymptomatic ex-smokers and 7 ex-smokers with chronic obstructive pulmonary disease (COPD) at inspiration breath-hold. Thoracic CT was performed within ten minutes of MRI using the same breath-hold volume. A landmark-based affine registration method, previously validated for COPD imaging, was based on corresponding fiducial markers located in both CT and 1H MRI coronal slices and was compared with shape-based CT-MRI non-rigid registration. Shape-based CT-MRI registration was developed by first identifying the shapes of the lung cavities manually, and then registering the two shapes using affine and thin-plate spline algorithms. We compared registration accuracy using the fiducial localization error (FLE) and target registration error (TRE). Results: For landmark-based registration, the TRE was 8.4±5.3 mm for the whole lung and 7.8±4.6 mm for the R and L lungs registered independently (p=0.4). For shape-based registration, the TRE was 8.0±4.6 mm for the whole lung compared to 6.9±4.4 mm for the R and L lungs registered independently, and this difference was significant (p=0.01). The difference between shape-based (6.9±4.4 mm) and landmark-based (7.8±4.6 mm) R and L lung registration was also significant (p=0.04). Conclusion: Shape-based registration TRE was significantly improved compared to landmark-based registration when considering the L and R lungs independently.
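
    A hedged sketch of how a target registration error of this kind is computed: the estimated transform is applied to landmarks held out of the registration, and the residual distances to the corresponding target landmarks are averaged. The affine example values are placeholders.

      # Target registration error (TRE) for an estimated transform (illustrative values).
      import numpy as np

      def target_registration_error(transform, moving_pts, fixed_pts):
          """transform: callable mapping (n, 3) moving-space points into fixed space."""
          mapped = transform(moving_pts)
          return float(np.linalg.norm(mapped - fixed_pts, axis=1).mean())

      # Example with a placeholder affine transform (A, t) estimated elsewhere:
      A, t = np.eye(3), np.array([1.0, -2.0, 0.5])
      landmarks_moving = np.random.rand(10, 3) * 100.0   # held-out landmarks, moving image
      landmarks_fixed = np.random.rand(10, 3) * 100.0    # corresponding landmarks, fixed image
      tre = target_registration_error(lambda p: p @ A.T + t, landmarks_moving, landmarks_fixed)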

  5. Culture-independent diagnostics for health security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doggett, Norman A.; Mukundan, Harshini; Lefkowitz, Elliot J.

The past decade has seen considerable development in the diagnostic application of nonculture methods, including nucleic acid amplification-based methods and mass spectrometry, for the diagnosis of infectious diseases. The implications of these new culture-independent diagnostic tests (CIDTs) include bypassing the need to culture organisms, thus potentially affecting public health surveillance systems, which continue to use isolates as the basis of their surveillance programs and to assess phenotypic resistance to antimicrobial agents. CIDTs may also affect the way public health practitioners detect and respond to a bioterrorism event. In response to a request from the Department of Homeland Security, Los Alamos National Laboratory and the Centers for Disease Control and Prevention cosponsored a workshop to review the impact of CIDTs on the rapid detection and identification of biothreat agents. Four panel discussions were held that covered nucleic acid amplification–based diagnostics, mass spectrometry, antibody-based diagnostics, and next-generation sequencing. Exploiting the extensive expertise available at this workshop, we identified the key features, benefits, and limitations of the various CIDT methods for providing rapid pathogen identification that are critical to the response and mitigation of a bioterrorism event. After the workshop we conducted a thorough review of the literature, investigating the current state of these 4 culture-independent diagnostic methods. Furthermore, this article combines information from the literature review and the insights obtained at the workshop.

  6. Culture-independent diagnostics for health security

    DOE PAGES

    Doggett, Norman A.; Mukundan, Harshini; Lefkowitz, Elliot J.; ...

    2016-06-17

The past decade has seen considerable development in the diagnostic application of nonculture methods, including nucleic acid amplification-based methods and mass spectrometry, for the diagnosis of infectious diseases. The implications of these new culture-independent diagnostic tests (CIDTs) include bypassing the need to culture organisms, thus potentially affecting public health surveillance systems, which continue to use isolates as the basis of their surveillance programs and to assess phenotypic resistance to antimicrobial agents. CIDTs may also affect the way public health practitioners detect and respond to a bioterrorism event. In response to a request from the Department of Homeland Security, Los Alamos National Laboratory and the Centers for Disease Control and Prevention cosponsored a workshop to review the impact of CIDTs on the rapid detection and identification of biothreat agents. Four panel discussions were held that covered nucleic acid amplification–based diagnostics, mass spectrometry, antibody-based diagnostics, and next-generation sequencing. Exploiting the extensive expertise available at this workshop, we identified the key features, benefits, and limitations of the various CIDT methods for providing rapid pathogen identification that are critical to the response and mitigation of a bioterrorism event. After the workshop we conducted a thorough review of the literature, investigating the current state of these 4 culture-independent diagnostic methods. Furthermore, this article combines information from the literature review and the insights obtained at the workshop.

  7. Culture-Independent Diagnostics for Health Security.

    PubMed

    Doggett, Norman A; Mukundan, Harshini; Lefkowitz, Elliot J; Slezak, Tom R; Chain, Patrick S; Morse, Stephen; Anderson, Kevin; Hodge, David R; Pillai, Segaran

    2016-01-01

    The past decade has seen considerable development in the diagnostic application of nonculture methods, including nucleic acid amplification-based methods and mass spectrometry, for the diagnosis of infectious diseases. The implications of these new culture-independent diagnostic tests (CIDTs) include bypassing the need to culture organisms, thus potentially affecting public health surveillance systems, which continue to use isolates as the basis of their surveillance programs and to assess phenotypic resistance to antimicrobial agents. CIDTs may also affect the way public health practitioners detect and respond to a bioterrorism event. In response to a request from the Department of Homeland Security, Los Alamos National Laboratory and the Centers for Disease Control and Prevention cosponsored a workshop to review the impact of CIDTs on the rapid detection and identification of biothreat agents. Four panel discussions were held that covered nucleic acid amplification-based diagnostics, mass spectrometry, antibody-based diagnostics, and next-generation sequencing. Exploiting the extensive expertise available at this workshop, we identified the key features, benefits, and limitations of the various CIDT methods for providing rapid pathogen identification that are critical to the response and mitigation of a bioterrorism event. After the workshop we conducted a thorough review of the literature, investigating the current state of these 4 culture-independent diagnostic methods. This article combines information from the literature review and the insights obtained at the workshop.

  8. Multigrid and Krylov Subspace Methods for the Discrete Stokes Equations

    NASA Technical Reports Server (NTRS)

    Elman, Howard C.

    1996-01-01

    Discretization of the Stokes equations produces a symmetric indefinite system of linear equations. For stable discretizations, a variety of numerical methods have been proposed that have rates of convergence independent of the mesh size used in the discretization. In this paper, we compare the performance of four such methods: variants of the Uzawa, preconditioned conjugate gradient, preconditioned conjugate residual, and multigrid methods, for solving several two-dimensional model problems. The results indicate that where it is applicable, multigrid with smoothing based on incomplete factorization is more efficient than the other methods, but typically by no more than a factor of two. The conjugate residual method has the advantage of being both independent of iteration parameters and widely applicable.
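
    As a point of reference for the saddle-point structure, a minimal, unpreconditioned Uzawa iteration for the discrete Stokes system [[A, B^T], [B, 0]] [u; p] = [f; g] is sketched below; the preconditioned variants and multigrid smoothers compared in the paper are not reproduced here.

      # Minimal Uzawa iteration for the Stokes saddle-point system (dense, unpreconditioned).
      import numpy as np

      def uzawa(A, B, f, g, omega=1.0, tol=1e-8, max_iter=500):
          p = np.zeros(B.shape[0])
          u = np.zeros(A.shape[0])
          for _ in range(max_iter):
              u = np.linalg.solve(A, f - B.T @ p)   # velocity solve (A symmetric positive definite)
              r = B @ u - g                         # divergence (continuity) residual
              if np.linalg.norm(r) < tol:
                  break
              p = p + omega * r                     # pressure update with relaxation parameter omega
          return u, p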

  9. Independent component analysis for the extraction of reliable protein signal profiles from MALDI-TOF mass spectra.

    PubMed

    Mantini, Dante; Petrucci, Francesca; Del Boccio, Piero; Pieragostino, Damiana; Di Nicola, Marta; Lugaresi, Alessandra; Federici, Giorgio; Sacchetta, Paolo; Di Ilio, Carmine; Urbani, Andrea

    2008-01-01

Independent component analysis (ICA) is a signal processing technique that can be utilized to recover independent signals from a set of their linear mixtures. We propose ICA for the analysis of signals obtained from large proteomics investigations such as clinical multi-subject studies based on MALDI-TOF MS profiling. The method is validated on simulated and experimental data, demonstrating its capability to correctly extract protein profiles from MALDI-TOF mass spectra. A comparison of peak detection against one open-source and two commercial methods shows its superior reliability in reducing the false discovery rate of protein peak masses. Moreover, integrating ICA with statistical tests for detecting differences in peak intensities between experimental groups allows the identification of protein peaks that could be indicators of a diseased state. This data-driven approach proves to be a promising tool for biomarker-discovery studies based on MALDI-TOF MS technology. The MATLAB implementation of the method described in the article and both simulated and experimental data are freely available at http://www.unich.it/proteomica/bioinf/.

  10. Separation of GRACE geoid time-variations using Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Frappart, F.; Ramillien, G.; Maisongrande, P.; Bonnet, M.

    2009-12-01

Independent Component Analysis (ICA) is a blind separation method based on the simple assumptions of the independence of the sources and the non-Gaussianity of the observations. An approach based on this numerical method is used here to extract hydrological signals over land and oceans from the polluting striping noise, due to orbit repetitiveness, present in the GRACE global mass anomalies. We took advantage of the availability of monthly Level-2 solutions from three official providers (i.e., CSR, JPL, and GFZ), which can be considered different observations of the same phenomenon. The efficiency of the methodology is first demonstrated on a synthetic case. Applied to one month of GRACE solutions, it clearly separates the total water storage change from the meridionally oriented spurious gravity signals on the continents, but not on the oceans. For continental water storage, this technique gives hydrological patterns equivalent to those of the destriping method, with less smoothing. The methodology is then used to filter the complete series of the 2002-2009 GRACE solutions.

  11. Quantifying (dis)agreement between direct detection experiments in a halo-independent way

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feldstein, Brian; Kahlhoefer, Felix, E-mail: brian.feldstein@physics.ox.ac.uk, E-mail: felix.kahlhoefer@physics.ox.ac.uk

We propose an improved method to study recent and near-future dark matter direct detection experiments with small numbers of observed events. Our method determines in a quantitative and halo-independent way whether the experiments point towards a consistent dark matter signal and identifies the best-fit dark matter parameters. To achieve true halo independence, we apply a recently developed method based on finding the velocity distribution that best describes a given set of data. For a quantitative global analysis we construct a likelihood function suitable for small numbers of events, which allows us to determine the best-fit particle physics properties of dark matter considering all experiments simultaneously. Based on this likelihood function we propose a new test statistic that quantifies how well the proposed model fits the data and how large the tension between different direct detection experiments is. We perform Monte Carlo simulations in order to determine the probability distribution function of this test statistic and to calculate the p-value for both the dark matter hypothesis and the background-only hypothesis.
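
    The general recipe can be sketched under assumptions: compute a likelihood-based test statistic on the observed counts, simulate pseudo-experiments under the hypothesis being tested, and take the p-value as the tail fraction. The Poisson likelihood-ratio statistic below is a stand-in, not the paper's statistic.

      # Monte Carlo p-value for a counting-experiment test statistic (illustrative only).
      import numpy as np

      def poisson_llr(observed, expected):
          observed = np.asarray(observed, float)
          expected = np.asarray(expected, float)
          with np.errstate(divide="ignore", invalid="ignore"):
              term = np.where(observed > 0, observed * np.log(observed / expected), 0.0)
          return 2.0 * np.sum(term - (observed - expected))

      def mc_p_value(observed, expected, n_sim=100_000, seed=0):
          rng = np.random.default_rng(seed)
          t_obs = poisson_llr(observed, expected)
          sims = rng.poisson(expected, size=(n_sim, len(expected)))
          t_sim = np.array([poisson_llr(s, expected) for s in sims])
          return float((t_sim >= t_obs).mean())   # tail fraction = Monte Carlo p-value

      print(mc_p_value(observed=[0, 3, 1], expected=[1.2, 1.0, 0.8]))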

  12. Sequence-independent construction of ordered combinatorial libraries with predefined crossover points.

    PubMed

    Jézéquel, Laetitia; Loeper, Jacqueline; Pompon, Denis

    2008-11-01

    Combinatorial libraries coding for mosaic enzymes with predefined crossover points constitute useful tools to address and model structure-function relationships and for functional optimization of enzymes based on multivariate statistics. The presented method, called sequence-independent generation of a chimera-ordered library (SIGNAL), allows easy shuffling of any predefined amino acid segment between two or more proteins. This method is particularly well adapted to the exchange of protein structural modules. The procedure could also be well suited to generate ordered combinatorial libraries independent of sequence similarities in a robotized manner. Sequence segments to be recombined are first extracted by PCR from a single-stranded template coding for an enzyme of interest using a biotin-avidin-based method. This technique allows the reduction of parental template contamination in the final library. Specific PCR primers allow amplification of two complementary mosaic DNA fragments, overlapping in the region to be exchanged. Fragments are finally reassembled using a fusion PCR. The process is illustrated via the construction of a set of mosaic CYP2B enzymes using this highly modular approach.

  13. Studying generalised dark matter interactions with extended halo-independent methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahlhoefer, Felix; Wild, Sebastian

    2016-10-20

The interpretation of dark matter direct detection experiments is complicated by the fact that neither the astrophysical distribution of dark matter nor the properties of its particle physics interactions with nuclei are known in detail. To address both of these issues in a very general way we develop a new framework that combines the full formalism of non-relativistic effective interactions with state-of-the-art halo-independent methods. This approach makes it possible to analyse direct detection experiments for arbitrary dark matter interactions and quantify the goodness-of-fit independent of astrophysical uncertainties. We employ this method in order to demonstrate that the degeneracy between astrophysical uncertainties and particle physics unknowns is not complete. Certain models can be distinguished in a halo-independent way using a single ton-scale experiment based on liquid xenon, while other models are indistinguishable with a single experiment but can be separated using combined information from several target elements.

  14. Calculation of electronic coupling matrix elements for ground and excited state electron transfer reactions: Comparison of the generalized Mulliken-Hush and block diagonalization methods

    NASA Astrophysics Data System (ADS)

    Cave, Robert J.; Newton, Marshall D.

    1997-06-01

    Two independent methods are presented for the nonperturbative calculation of the electronic coupling matrix element (Hab) for electron transfer reactions using ab initio electronic structure theory. The first is based on the generalized Mulliken-Hush (GMH) model, a multistate generalization of the Mulliken Hush formalism for the electronic coupling. The second is based on the block diagonalization (BD) approach of Cederbaum, Domcke, and co-workers. Detailed quantitative comparisons of the two methods are carried out based on results for (a) several states of the system Zn2OH2+ and (b) the low-lying states of the benzene-Cl atom complex and its contact ion pair. Generally good agreement between the two methods is obtained over a range of geometries. Either method can be applied at an arbitrary nuclear geometry and, as a result, may be used to test the validity of the Condon approximation. Examples of nonmonotonic behavior of the electronic coupling as a function of nuclear coordinates are observed for Zn2OH2+. Both methods also yield a natural definition of the effective distance (rDA) between donor (D) and acceptor (A) sites, in contrast to earlier approaches which required independent estimates of rDA, generally based on molecular structure data.
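
    For orientation, the two-state generalized Mulliken-Hush coupling is commonly quoted in the form sketched below; conventions, units, and the multistate generalization used in the paper should be checked against the original.

      # Commonly quoted two-state GMH expression (verify conventions against the paper):
      #   H_ab = |mu_12| * dE_12 / sqrt(dmu_12**2 + 4 * mu_12**2)
      import math

      def gmh_coupling(delta_e, mu_12, delta_mu_12):
          """delta_e: adiabatic energy gap; mu_12: transition dipole along the charge-transfer
          direction; delta_mu_12: difference of adiabatic-state dipoles along that direction."""
          return abs(mu_12) * delta_e / math.sqrt(delta_mu_12 ** 2 + 4.0 * mu_12 ** 2)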

  15. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.

    PubMed

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-09-01

We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use one-dimensional kernels regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.

  16. Computing convex quadrangulations☆

    PubMed Central

    Schiffer, T.; Aurenhammer, F.; Demuth, M.

    2012-01-01

    We use projected Delaunay tetrahedra and a maximum independent set approach to compute large subsets of convex quadrangulations on a given set of points in the plane. The new method improves over the popular pairing method based on triangulating the point set. PMID:22389540

  17. Discriminant analysis of resting-state functional connectivity patterns on the Grassmann manifold

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Liu, Yong; Jiang, Tianzi; Liu, Zhening; Hao, Yihui; Liu, Haihong

    2010-03-01

The functional networks, extracted from fMRI images using independent component analysis, have been demonstrated to be informative for distinguishing brain states of cognitive functions and neurological diseases. In this paper, we propose a novel algorithm for discriminant analysis of functional networks encoded by spatial independent components. The functional networks of each individual are used as bases for a linear subspace, referred to as a functional connectivity pattern, which facilitates a comprehensive characterization of temporal signals of fMRI data. The functional connectivity patterns of different individuals are analyzed on the Grassmann manifold by adopting a principal angle based subspace distance. In conjunction with a support vector machine classifier, a forward component selection technique is proposed to select independent components for constructing the most discriminative functional connectivity pattern. The discriminant analysis method has been applied to an fMRI based schizophrenia study with 31 schizophrenia patients and 31 healthy individuals. The experimental results demonstrate that the proposed method not only achieves a promising classification performance for distinguishing schizophrenia patients from healthy controls, but also identifies discriminative functional networks that are informative for schizophrenia diagnosis.
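
    A hedged sketch of a principal-angle-based subspace distance of this kind is given below, with each subject's functional connectivity pattern represented by the column span of its ICA spatial maps; the specific Grassmann metric used in the paper may differ.

      # Principal angles between two subjects' connectivity subspaces, via QR and SVD.
      import numpy as np

      def principal_angles(A, B):
          """A, B: (n_voxels, k) matrices whose columns span each subject's subspace."""
          Qa, _ = np.linalg.qr(A)
          Qb, _ = np.linalg.qr(B)
          s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
          return np.arccos(np.clip(s, -1.0, 1.0))   # principal angles in radians

      def grassmann_distance(A, B):
          return float(np.linalg.norm(principal_angles(A, B)))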

  18. Accuracy Evaluation of the Unified P-Value from Combining Correlated P-Values

    PubMed Central

    Alves, Gelio; Yu, Yi-Kuo

    2014-01-01

Meta-analysis methods that combine P-values into a single unified P-value are frequently employed to improve confidence in hypothesis testing. An assumption made by most meta-analysis methods is that the P-values to be combined are independent, which may not always be true. To investigate the accuracy of the unified P-value from combining correlated P-values, we have evaluated a family of statistical methods that combine independent, weighted independent, correlated, and weighted correlated P-values. Statistical accuracy evaluation by combining simulated correlated P-values showed that correlation among P-values can have a significant effect on the accuracy of the combined P-value obtained. Among the statistical methods evaluated, those that weight P-values compute more accurate combined P-values than those that do not. Also, statistical methods that utilize the correlation information have the best performance, producing significantly more accurate combined P-values. In our study we have demonstrated that statistical methods that combine P-values based on the assumption of independence can produce inaccurate P-values when combining correlated P-values, even when the P-values are only weakly correlated. Therefore, to prevent drawing false conclusions during hypothesis testing, our study advises caution when interpreting the P-value obtained from combining P-values of unknown correlation. However, when the correlation information is available, the weighting-capable statistical method, first introduced by Brown and recently modified by Hou, seems to perform the best amongst the methods investigated. PMID:24663491
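
    As context, the independence-assuming combinations are available in SciPy, as sketched below; the correlation-adjusted variant of Brown (and its weighted modification by Hou) discussed above is not implemented there and would additionally require the covariance information.

      # Independence-assuming p-value combination (Fisher) and a weighted variant (Stouffer).
      import numpy as np
      from scipy.stats import combine_pvalues

      p = np.array([0.03, 0.20, 0.07, 0.50])
      stat_f, p_fisher = combine_pvalues(p, method="fisher")        # assumes independent p-values
      stat_s, p_stouffer = combine_pvalues(p, method="stouffer",
                                           weights=np.array([2.0, 1.0, 1.0, 1.0]))
      print(p_fisher, p_stouffer)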

  19. Independent Evaluation of Middle School-Based Drug Prevention Curricula: A Systematic Review.

    PubMed

    Flynn, Anna B; Falco, Mathea; Hocini, Sophia

    2015-11-01

    Lack of robust program evaluation has hindered the effectiveness of school-based drug abuse prevention curricula overall. Independently evaluated randomized controlled trials (RCTs) of universal, middle school-based drug abuse prevention curricula are the most useful indicators of whether such programs are effective or ineffective. To conduct a systematic review identifying independently evaluated RCTs of universal, middle school-based drug abuse prevention curricula; extract data on study quality and substance use outcomes; and assess evidence of program effectiveness. PsycInfo, Educational Resources Information Center, Science Citation Index, Social Science Citation Index, Cumulative Index to Nursing and Allied Health Literature, MEDLINE, EMBASE, and the Cochrane Database of Systematic Reviews were searched between January 1, 1984, and March 15, 2015. Search terms included variations of drug, alcohol, tobacco, and marijuana use, as well as school, prevention, and effectiveness. Studies included in the review were RCTs carried out by independent evaluators of universal school-based drug prevention curricula available for dissemination in the United States that reported alcohol, tobacco, marijuana, or other drug use outcomes. Two researchers extracted data on study quality and outcomes independently using a data extraction form and met to resolve disagreements. A total of 5071 publications were reviewed, with 13 articles meeting final inclusion criteria. Of the 13 articles, 6 RCTs of 4 distinct school-based curricula were identified for inclusion. Outcomes were reported for 42 single-drug measures in the independent RCTs, with just 3 presenting statistically significant (P < .05) differences between the intervention group and the control group. One program revealed statistically significant positive effects at final follow-up (Lions-Quest Skills for Adolescence). The results of our review demonstrate the dearth of independent research that appropriately evaluates the effectiveness of universal, middle school-based drug prevention curricula. Independent evaluations show little evidence of effectiveness for widely used programs. New methods may be necessary to approach school-based adolescent drug prevention.

  20. Comparison of Standard Culture-Based Method to Culture-Independent Method for Evaluation of Hygiene Effects on the Hand Microbiome

    PubMed Central

    Leff, J.; Henley, J.; Tittl, J.; De Nardo, E.; Butler, M.; Griggs, R.; Fierer, N.

    2017-01-01

Hands play a critical role in the transmission of microbiota on one’s own body, between individuals, and on environmental surfaces. Effectively measuring the composition of the hand microbiome is important to hand hygiene science, which has implications for human health. Hand hygiene products are evaluated using standard culture-based methods, but standard test methods for culture-independent microbiome characterization are lacking. We sampled the hands of 50 participants using swab-based and glove-based methods prior to and following four hand hygiene treatments (using a nonantimicrobial hand wash, alcohol-based hand sanitizer [ABHS], a 70% ethanol solution, or tap water). We compared results among culture plate counts, 16S rRNA gene sequencing of DNA extracted directly from hands, and sequencing of DNA extracted from culture plates. Glove-based sampling yielded higher numbers of unique operational taxonomic units (OTUs) but had less diversity in bacterial community composition than swab-based sampling. We detected treatment-induced changes in diversity only by using swab-based samples (P < 0.001); we were unable to detect changes with glove-based samples. Bacterial cell counts significantly decreased with use of the ABHS (P < 0.05) and ethanol control (P < 0.05). Skin hydration at baseline correlated with bacterial abundances, bacterial community composition, pH, and redness across subjects. The importance of the method choice was substantial. These findings are important to ensure improvement of hand hygiene industry methods and for future hand microbiome studies. On the basis of our results and previously published studies, we propose recommendations for best practices in hand microbiome research. PMID:28351915

  1. Partial least squares density modeling (PLS-DM) - a new class-modeling strategy applied to the authentication of olives in brine by near-infrared spectroscopy.

    PubMed

    Oliveri, Paolo; López, M Isabel; Casolino, M Chiara; Ruisánchez, Itziar; Callao, M Pilar; Medini, Luca; Lanteri, Silvia

    2014-12-03

A new class-modeling method, referred to as partial least squares density modeling (PLS-DM), is presented. The method is based on partial least squares (PLS), using a distance-based sample density measurement as the response variable. Potential function probability density is subsequently calculated on PLS scores and used, jointly with residual Q statistics, to develop efficient class models. The influence of adjustable model parameters on the resulting performance has been critically studied by means of cross-validation and application of the Pareto optimality criterion. The method has been applied to verify the authenticity of olives in brine from cultivar Taggiasca, based on near-infrared (NIR) spectra recorded on homogenized solid samples. Two independent test sets were used for model validation. The final optimal model was characterized by high efficiency and a well-balanced trade-off between sensitivity and specificity, compared with the results obtained by applying well-established class-modeling methods such as soft independent modeling of class analogy (SIMCA) and unequal dispersed classes (UNEQ). Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Using independent component analysis for electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Yan, Peimin; Mo, Yulong

    2004-05-01

Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations of the electrical conductivity of the human body. Because the conductivity distribution varies inside the body, EIT produces multi-channel data. In order to obtain all the information contained at different tissue locations, it is necessary to image the individual conductivity distributions. In this paper we consider applying ICA to EIT on the signal subspace (the individual conductivity distributions). Using ICA, the signal subspace is then decomposed into statistically independent components. The individual conductivity distributions are reconstructed using the sensitivity theorem. Computer simulations show that the full information contained in the multi-conductivity distribution can be obtained by this method.

  3. Modified independent modal space control method for active control of flexible systems

    NASA Technical Reports Server (NTRS)

    Baz, A.; Poh, S.

    1987-01-01

A modified independent modal space control (MIMSC) method is developed for designing active vibration control systems for large flexible structures. The method accounts for the interaction between the controlled and residual modes. It also incorporates optimal placement procedures for selecting the optimal locations of the actuators in the structure in order to minimize the structural vibrations as well as the actuation energy. A key feature of the MIMSC method is the time sharing of a small number of actuators, in the modal space, to control a large number of modes effectively. Numerical examples are presented to illustrate the application of the method to generic flexible systems. The results obtained suggest the potential of the devised method in designing efficient active control systems for large flexible structures.

  4. SU-E-T-49: A Multi-Institutional Study of Independent Dose Verification for IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baba, H; Tachibana, H; Kamima, T

    2015-06-15

Purpose: AAPM TG114 does not cover independent verification for IMRT. We conducted a study of independent dose verification for IMRT in seven institutes to show its feasibility. Methods: 384 IMRT plans for prostate and head and neck (HN) sites were collected from the institutes, where planning was performed using Eclipse and Pinnacle3 with two techniques, step and shoot (S&S) and sliding window (SW). All of the institutes used the same independent dose verification software program (Simple MU Analysis: SMU, Triangle Product, Ishikawa, JP), which is Clarkson-based and uses CT images to compute the radiological path length. An ion-chamber measurement in a water-equivalent slab phantom was performed to compare the doses computed using the TPS and the independent dose verification program. Additionally, the agreement between the dose computed in patient CT images using the TPS and using the SMU was assessed. The dose of the composite beams in each plan was evaluated. Results: The agreement between the measurement and the SMU was −2.3±1.9% and −5.6±3.6% for the prostate and HN sites, respectively. The agreement between the TPSs and the SMU was −2.1±1.9% and −3.0±3.7% for the prostate and HN sites, respectively. There was a negative systematic difference with similar standard deviation, and the difference was larger for the HN site. The S&S technique showed a statistically significant difference from the SW technique, because the Clarkson-based method in the independent program cannot account for, and thus underestimates, the dose under the MLC. Conclusion: The accuracy would be improved if the Clarkson-based algorithm were modified for IMRT, and the tolerance level would then be within 5%.

  5. Audio Recording for Independent Confirmation of Clinical Assessments in Generalized Anxiety Disorder.

    PubMed

    Targum, Steven D; Murphy, Christopher; Khan, Jibran; Zumpano, Laura; Whitlock, Mark; Simen, Arthur A; Binneman, Brendon

    2018-04-01

Objective: Determining whether a medication intervention is necessary for patients with generalized anxiety disorder (GAD) is not always straightforward and might benefit from a second opinion. However, second opinions are time consuming, expensive, and not practical in most settings. We obtained independent, second-opinion reviews of the primary clinician's assessment via audio-digital recording. Design: An audio-digital recording of key site-based assessments was used to generate site-independent "dual" reviews of the clinical presentation, symptom severity, and medication requirements of patients with GAD as part of the screening procedures for a clinical trial (ClinicalTrials.gov: NCT02310568). Results: Site-independent reviewers affirmed the diagnosis, symptom severity metrics, and treatment requirements of 90 moderately ill patients with GAD. The patients endorsed excessive worry that was hard to control and essentially all six of the associated DSM-IV-TR anxiety symptoms. The Hamilton Rating Scale for Anxiety scores revealed moderately severe anxiety with a high Pearson's correlation (r=0.852) between site-based and independent raters and minimal scoring discordance on each scale item. Based upon their independent reviews, these "second" opinions confirmed that these GAD patients warranted a new medication intervention. Thirty patients (33.3%) reported a previous history of a major depressive episode (MDE) and had significantly more depressive symptoms than patients without a history of MDE. Conclusion: The audio-digital recording method provides a useful second opinion that can affirm the need for a different treatment intervention in these anxious patients. A second live assessment would have required additional clinic time and added patient burden. The audio-digital recording method is less burdensome than live second-opinion assessments and might have utility in both research and clinical practice settings.

  6. Grouping individual independent BOLD effects: a new way to ICA group analysis

    NASA Astrophysics Data System (ADS)

    Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott

    2009-04-01

A new group analysis method to summarize task-related BOLD responses based on independent component analysis (ICA) is presented. In contrast to the previously proposed group ICA (gICA) method, which first combines multi-subject fMRI data in either the temporal or spatial domain and applies ICA decomposition only once to the combined data to extract task-related BOLD effects, the method presented here applies ICA decomposition to each individual subject's fMRI data to first find the independent BOLD effects specific to that subject. The task-related independent BOLD component is then selected from the components resulting from the single-subject ICA decomposition and grouped across subjects to derive the group inference. In this new ICA group analysis (ICAga) method, one does not need to assume that the task-related BOLD time courses are identical across brain areas and subjects, as is done in the grand ICA decomposition of spatially concatenated fMRI data. Neither does one need to assume that, after spatial normalization, voxels at the same coordinates represent exactly the same functional or structural brain anatomy across different subjects. These two assumptions have been problematic given recent BOLD activation evidence. Further, since the independent BOLD effects are obtained from each individual subject, the ICAga method can better account for individual differences in the task-related BOLD effects, unlike the gICA approach, in which the task-related BOLD effects can only be accounted for by a single unified BOLD model across multiple subjects. As a result, the newly proposed ICAga method is able to better fit the task-related BOLD effects at the individual level and thus allows more appropriate grouping of multi-subject BOLD effects in the group analysis.

  7. Comparison of tissue viability imaging and colorimetry: skin blanching.

    PubMed

    Zhai, Hongbo; Chan, Heidi P; Farahmand, Sara; Nilsson, Gert E; Maibach, Howard I

    2009-02-01

Operator-independent assessment of skin blanching is important in the development and evaluation of topically applied steroids. Spectroscopic instruments based on hand-held probes, however, include elements of operator dependence such as differences in applied pressure and probe misalignment, while laser Doppler-based methods are better suited for demonstration of skin vasodilatation than for vasoconstriction. To demonstrate the potential of the emerging technology of Tissue Viability Imaging (TiVi) in the objective and operator-independent assessment of skin blanching. The WheelsBridge TiVi600 Tissue Viability Imager was used for quantification of human skin blanching with the Minolta chromameter CR 200 as an independent colorimeter reference method. Desoximetasone gel 0.05% was applied topically on the volar side of the forearm under occlusion for 6 h in four healthy adults. In a separate study, the induction of blanching in the occlusion phase was mapped using a transparent occlusion cover. The relative uncertainty in the blanching estimate produced by the Tissue Viability Imager was about 5% and similar to that of the chromameter operated by a single user and taking the a* parameter as a measure of blanching. Estimation of skin blanching could also be performed in the presence of a transient paradoxical erythema, using the integrated TiVi software. The successive induction of skin blanching during the occlusion phase could readily be mapped by the Tissue Viability Imager. TiVi seems to be suitable for operator-independent and remote mapping of human skin blanching, eliminating the main disadvantages of methods based on hand-held probes.

  8. Sparsity-driven coupled imaging and autofocusing for interferometric SAR

    NASA Astrophysics Data System (ADS)

    Zengin, Oğuzcan; Khwaja, Ahmed Shaharyar; Çetin, Müjdat

    2018-04-01

    We propose a sparsity-driven method for coupled image formation and autofocusing based on multi-channel data collected in interferometric synthetic aperture radar (IfSAR). Relative phase between SAR images contains valuable information. For example, it can be used to estimate the height of the scene in SAR interferometry. However, this relative phase could be degraded when independent enhancement methods are used over SAR image pairs. Previously, Ramakrishnan et al. proposed a coupled multi-channel image enhancement technique, based on a dual descent method, which exhibits better performance in phase preservation compared to independent enhancement methods. Their work involves a coupled optimization formulation that uses a sparsity enforcing penalty term as well as a constraint tying the multichannel images together to preserve the cross-channel information. In addition to independent enhancement, the relative phase between the acquisitions can be degraded due to other factors as well, such as platform location uncertainties, leading to phase errors in the data and defocusing in the formed imagery. The performance of airborne SAR systems can be affected severely by such errors. We propose an optimization formulation that combines Ramakrishnan et al.'s coupled IfSAR enhancement method with the sparsity-driven autofocus (SDA) approach of Önhon and Çetin to alleviate the effects of phase errors due to motion errors in the context of IfSAR imaging. Our method solves the joint optimization problem with a Lagrangian optimization method iteratively. In our preliminary experimental analysis, we have obtained results of our method on synthetic SAR images and compared its performance to existing methods.
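
    The abstract does not give the exact cost function, but a plausible generic form of such a coupled, sparsity-driven objective with phase-error parameters (the symbols and the magnitude-coupling constraint below are assumptions made for illustration, not the authors' formulation) is

    $$
    \min_{\mathbf{x}_1,\mathbf{x}_2,\boldsymbol{\phi}}\;\sum_{i=1}^{2}\big\|\mathbf{y}_i-\mathbf{C}(\boldsymbol{\phi})\,\mathbf{x}_i\big\|_2^2+\lambda\sum_{i=1}^{2}\big\|\mathbf{x}_i\big\|_1\quad\text{subject to}\quad\big\|\,|\mathbf{x}_1|-|\mathbf{x}_2|\,\big\|_2^2\le\epsilon,
    $$

    where the y_i are the two IfSAR channels, C(φ) is the SAR observation operator parameterized by the unknown phase errors φ, the ℓ1 terms enforce sparsity, and the constraint ties the channel magnitudes together so that the relative (interferometric) phase is preserved; the Lagrangian of such a problem is then minimized alternately over the images and the phase errors.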

  9. Certified randomness in quantum physics.

    PubMed

    Acín, Antonio; Masanes, Lluis

    2016-12-07

    The concept of randomness plays an important part in many disciplines. On the one hand, the question of whether random processes exist is fundamental for our understanding of nature. On the other, randomness is a resource for cryptography, algorithms and simulations. Standard methods for generating randomness rely on assumptions about the devices that are often not valid in practice. However, quantum technologies enable new methods for generating certified randomness, based on the violation of Bell inequalities. These methods are referred to as device-independent because they do not rely on any modelling of the devices. Here we review efforts to design device-independent randomness generators and the associated challenges.
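
    As textbook background for the Bell-inequality route to certified randomness mentioned above (standard material, not specific to this review), the canonical example is the CHSH inequality:

    $$
    S=\langle a_0 b_0\rangle+\langle a_0 b_1\rangle+\langle a_1 b_0\rangle-\langle a_1 b_1\rangle,\qquad |S|\le 2\ \text{(local hidden-variable models)},\qquad |S|\le 2\sqrt{2}\ \text{(quantum)}.
    $$

    Here a_x, b_y = ±1 are the outcomes of the two devices under settings x and y. Any observed |S| > 2 certifies, without modelling the devices, that the outcomes cannot be fully predetermined, and the size of the violation lower-bounds the amount of extractable randomness.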

  10. Source separation on hyperspectral cube applied to dermatology

    NASA Astrophysics Data System (ADS)

    Mitra, J.; Jolivot, R.; Vabres, P.; Marzani, F. S.

    2010-03-01

    This paper proposes a method of quantification of the components underlying the human skin that are supposed to be responsible for the effective reflectance spectrum of the skin over the visible wavelength. The method is based on independent component analysis assuming that the epidermal melanin and the dermal haemoglobin absorbance spectra are independent of each other. The method extracts the source spectra that correspond to the ideal absorbance spectra of melanin and haemoglobin. The noisy melanin spectrum is fixed using a polynomial fit and the quantifications associated with it are reestimated. The results produce feasible quantifications of each source component in the examined skin patch.

  11. Material-Independent Nanotransfer onto a Flexible Substrate Using Mechanical-Interlocking Structure.

    PubMed

    Seo, Min-Ho; Choi, Seon-Jin; Park, Sang Hyun; Yoo, Jae-Young; Lim, Sung Kyu; Lee, Jae-Shin; Choi, Kwang-Wook; Jo, Min-Seung; Kim, Il-Doo; Yoon, Jun-Bo

    2018-05-22

    Nanowire-transfer technology has received much attention thanks to its capability to fabricate high-performance flexible nanodevices with high simplicity and throughput. However, it is still challenging to extend the conventional nanowire-transfer method to the fabrication of a wide range of devices since a chemical-adhesion-based nanowire-transfer mechanism is complex and time-consuming, hindering successful transfer of diverse nanowires made of various materials. Here, we introduce a material-independent mechanical-interlocking-based nanowire-transfer (MINT) method, fabricating ultralong and fully aligned nanowires on a large flexible substrate (2.5 × 2 cm²) in a highly robust manner. For the material-independent nanotransfer, we developed a mechanics-based nanotransfer method, which employs a dry-removable amorphous carbon (a-C) sacrificial layer between a vacuum-deposited nanowire and the underlying master mold. The controlled etching of the sacrificial layer enables the formation of a mechanical-interlocking structure under the nanowire, facilitating peeling off of the nanowire from the master mold robustly and reliably. Using the developed MINT method, we successfully fabricated various metallic and semiconductor nanowire arrays on flexible substrates. We further demonstrated that the developed method is well suited to the reliable fabrication of highly flexible and high-performance nanoelectronic devices. As examples, a fully aligned gold (Au) microheater array exhibited high bending stability (10⁶ cycling) and ultrafast (∼220 ms) heating operation up to ∼100 °C. An ultralong Au heater-embedded cuprous-oxide (Cu₂O) nanowire chemical gas sensor showed significantly improved reversible reaction kinetics toward NO₂ with 10-fold enhancement in sensitivity at 100 °C.

  12. Selection of independent components based on cortical mapping of electromagnetic activity

    NASA Astrophysics Data System (ADS)

    Chan, Hui-Ling; Chen, Yong-Sheng; Chen, Li-Fen

    2012-10-01

    Independent component analysis (ICA) has been widely used to attenuate interference caused by noise components from the electromagnetic recordings of brain activity. However, the scalp topographies and associated temporal waveforms provided by ICA may be insufficient to distinguish functional components from artifactual ones. In this work, we proposed two component selection methods, both of which first estimate the cortical distribution of the brain activity for each component, and then determine the functional components based on the parcellation of brain activity mapped onto the cortical surface. Among all independent components, the first method can identify the dominant components, which have strong activity in the selected dominant brain regions, whereas the second method can identify those inter-regional associating components, which have similar component spectra between a pair of regions. For a targeted region, its component spectrum enumerates the amplitudes of its parceled brain activity across all components. The selected functional components can be remixed to reconstruct the focused electromagnetic signals for further analysis, such as source estimation. Moreover, the inter-regional associating components can be used to estimate the functional brain network. The accuracy of the cortical activation estimation was evaluated on the data from simulation studies, whereas the usefulness and feasibility of the component selection methods were demonstrated on the magnetoencephalography data recorded from a gender discrimination study.

  13. System and Method for Detecting Unauthorized Device Access by Comparing Multiple Independent Spatial-Time Data Sets from Other Devices

    NASA Technical Reports Server (NTRS)

    Westmeyer, Paul A. (Inventor); Wertenberg, Russell F. (Inventor); Krage, Frederick J. (Inventor); Riegel, Jack F. (Inventor)

    2017-01-01

    An authentication procedure utilizes multiple independent sources of data to determine whether usage of a device, such as a desktop computer, is authorized. When a comparison indicates an anomaly from the baseline usage data, the system provides a notice that access to the first device is not authorized.

  14. Using Educational Data Mining Methods to Assess Field-Dependent and Field-Independent Learners' Complex Problem Solving

    ERIC Educational Resources Information Center

    Angeli, Charoula; Valanides, Nicos

    2013-01-01

    The present study investigated the problem-solving performance of 101 university students and their interactions with a computer modeling tool in order to solve a complex problem. Based on their performance on the hidden figures test, students were assigned to three groups of field-dependent (FD), field-mixed (FM), and field-independent (FI)…

  15. Crude oil price analysis and forecasting based on variational mode decomposition and independent component analysis

    NASA Astrophysics Data System (ADS)

    E, Jianwei; Bao, Yanling; Ye, Jimin

    2017-10-01

    As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and the fluctuation of its price has attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models have often failed to predict it accurately. This paper therefore proposes a hybrid method that combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA) model, called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors that influence the crude oil price and to predict its future values. The major steps are as follows. First, the VMD model is applied to the original signal (the crude oil price) to adaptively decompose it into mode functions. Second, independent components are separated by ICA, and the way these components affect the crude oil price is analyzed. Finally, the crude oil price is forecast with the ARIMA model; the forecast trend indicates that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA models, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
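
    A sketch of such a hybrid pipeline, with hedges: the VMD step is replaced by a crude band-pass filter bank (a stand-in, since the adaptive VMD decomposition itself is not reproduced here), FastICA separates the components, and a statsmodels ARIMA model produces the forecast; all names and settings are illustrative, not the paper's.

```python
# Illustrative VMD-ICA-ARIMA-style pipeline (not the authors' code).
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA
from statsmodels.tsa.arima.model import ARIMA

def filterbank_modes(x, bands=((0.001, 0.05), (0.05, 0.2), (0.2, 0.45))):
    """Stand-in for VMD: split the series into a few frequency bands."""
    modes = []
    for lo, hi in bands:
        b, a = butter(4, [lo, hi], btype="bandpass")
        modes.append(filtfilt(b, a, x))
    return np.column_stack(modes)                     # (n_samples, n_modes)

rng = np.random.default_rng(1)
price = np.cumsum(rng.standard_normal(1000)) + 50.0   # synthetic "crude oil price"

modes = filterbank_modes(price)
ica = FastICA(n_components=modes.shape[1], random_state=0)
components = ica.fit_transform(modes)                 # independent influence factors

# Fit ARIMA on the original price and forecast; the separated components can be
# inspected individually to interpret which factors drive the series.
model = ARIMA(price, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=10)
print(forecast[:3])
```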

  16. Parent Training: A Review of Methods for Children with Developmental Disabilities

    ERIC Educational Resources Information Center

    Matson, Johnny L.; Mahan, Sara; LoVullo, Santino V.

    2009-01-01

    Great strides have been made in the development of skills and procedures to aid children with developmental disabilities to establish maximum independence and quality of life. Paramount among the treatment methods that have empirical support are treatments based on applied behavior analysis. These methods are often very labor intensive. Thus,…

  17. On an additive partial correlation operator and nonparametric estimation of graphical models.

    PubMed

    Lee, Kuang-Yao; Li, Bing; Zhao, Hongyu

    2016-09-01

    We introduce an additive partial correlation operator as an extension of partial correlation to the nonlinear setting, and use it to develop a new estimator for nonparametric graphical models. Our graphical models are based on additive conditional independence, a statistical relation that captures the spirit of conditional independence without having to resort to high-dimensional kernels for its estimation. The additive partial correlation operator completely characterizes additive conditional independence, and has the additional advantage of putting marginal variation on appropriate scales when evaluating interdependence, which leads to more accurate statistical inference. We establish the consistency of the proposed estimator. Through simulation experiments and analysis of the DREAM4 Challenge dataset, we demonstrate that our method performs better than existing methods in cases where the Gaussian or copula Gaussian assumption does not hold, and that a more appropriate scaling for our method further enhances its performance.

  18. On an additive partial correlation operator and nonparametric estimation of graphical models

    PubMed Central

    Li, Bing; Zhao, Hongyu

    2016-01-01

    We introduce an additive partial correlation operator as an extension of partial correlation to the nonlinear setting, and use it to develop a new estimator for nonparametric graphical models. Our graphical models are based on additive conditional independence, a statistical relation that captures the spirit of conditional independence without having to resort to high-dimensional kernels for its estimation. The additive partial correlation operator completely characterizes additive conditional independence, and has the additional advantage of putting marginal variation on appropriate scales when evaluating interdependence, which leads to more accurate statistical inference. We establish the consistency of the proposed estimator. Through simulation experiments and analysis of the DREAM4 Challenge dataset, we demonstrate that our method performs better than existing methods in cases where the Gaussian or copula Gaussian assumption does not hold, and that a more appropriate scaling for our method further enhances its performance. PMID:29422689

  19. Improving consensus contact prediction via server correlation reduction.

    PubMed

    Gao, Xin; Bu, Dongbo; Xu, Jinbo; Li, Ming

    2009-05-06

    Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we find that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method, which assumes that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by using maximum likelihood estimation and extracts independent latent servers from them by using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated, where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06. These methods demonstrate an average accuracy of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Reducing server correlation and optimally combining independent latent servers show a significant improvement over the traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction.

  20. Impact imaging of aircraft composite structure based on a model-independent spatial-wavenumber filter.

    PubMed

    Qiu, Lei; Liu, Bin; Yuan, Shenfang; Su, Zhongqing

    2016-01-01

    The spatial-wavenumber filtering technique is an effective approach to distinguish the propagating direction and wave mode of Lamb waves in the spatial-wavenumber domain. Therefore, it has gradually been studied for damage evaluation in recent years. For on-line impact monitoring in practical applications, however, the main problem is how to realize spatial-wavenumber filtering of an impact signal when high-spatial-resolution wavenumbers cannot be measured or an accurate wavenumber curve cannot be modeled. In this paper, a new model-independent spatial-wavenumber-filter-based impact imaging method is proposed. In this method, a 2D cross-shaped array constructed from two linear piezoelectric (PZT) sensor arrays is used to acquire the impact signal on-line. The continuous complex Shannon wavelet transform is adopted to extract the frequency narrowband signals from the frequency wideband impact response signals of the PZT sensors. A model-independent spatial-wavenumber filter is designed based on the spatial-wavenumber filtering technique. Based on the designed filter, a wavenumber searching and best-match mechanism is proposed to implement the spatial-wavenumber filtering of the frequency narrowband signals without modeling, which can be used to obtain a wavenumber-time image of the impact relative to a linear PZT sensor array. By using the two wavenumber-time images of the 2D cross-shaped array, the impact direction can be estimated without blind angle. The impact distance relative to the 2D cross-shaped array can be calculated by using the difference in time-of-flight between the frequency narrowband signals of two different central frequencies and the corresponding group velocities. Validations performed on a carbon fiber composite laminate plate and an aircraft composite oil tank show good impact localization accuracy of the model-independent spatial-wavenumber-filter-based impact imaging method. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Synchronization of discrete-time neural networks with delays and Markov jump topologies based on tracker information.

    PubMed

    Yang, Xinsong; Feng, Zhiguo; Feng, Jianwen; Cao, Jinde

    2017-01-01

    In this paper, synchronization in an array of discrete-time neural networks (DTNNs) with time-varying delays coupled by Markov jump topologies is considered. It is assumed that the switching information can be collected by a tracker with a certain probability and transmitted from the tracker to the controller precisely. The controller then selects suitable control gains based on the received switching information to synchronize the network. This new control scheme makes full use of the received information and overcomes the shortcomings of mode-dependent and mode-independent control schemes. Moreover, the proposed control method includes both the mode-dependent and mode-independent control techniques as special cases. By using the linear matrix inequality (LMI) method and designing new Lyapunov functionals, delay-dependent conditions are derived to guarantee that the DTNNs with Markov jump topologies are asymptotically synchronized. Compared with existing results on Markov systems, which are obtained by using mode-dependent and mode-independent methods separately, our result has greater flexibility in practical applications. Numerical simulations are finally given to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
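
    The delay- and mode-dependent LMI conditions of the paper are not reproduced here; the sketch below only illustrates the basic discrete-time Lyapunov idea that underlies such conditions, namely that error dynamics e(k+1) = A e(k) are asymptotically stable exactly when some positive definite P satisfies AᵀPA − P < 0 (the matrix A is a made-up example):

```python
# Basic discrete-time Lyapunov check underlying LMI-based synchronization
# conditions (no delays or Markov switching -- only the core idea).
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.6, 0.2],
              [0.0, 0.7]])        # example error-dynamics matrix
Q = np.eye(2)                      # any positive definite choice

# Solve A' P A - P = -Q for P, then check that P is positive definite.
P = solve_discrete_lyapunov(A.T, Q)
eigs = np.linalg.eigvalsh(P)
print("P positive definite:", bool(np.all(eigs > 0)))   # True -> asymptotically stable
```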

  2. MOVES2014: Heavy-duty Vehicle Emissions Report

    EPA Science Inventory

    This report updates MOVES methods for evaluating current HD diesel NOx emission rates based on comparisons to independent data from EPA’s IUVP and Houston drayage programs. The report also details methods/assumptions made for HD gasoline HC, CO and NOx emission rates using reduct...

  3. DrugE-Rank: improving drug–target interaction prediction of new candidate drugs or targets by ensemble learning to rank

    PubMed Central

    Yuan, Qingjun; Gao, Junning; Wu, Dongliang; Zhang, Shihua; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2016-01-01

    Motivation: Identifying drug–target interactions is an important task in drug discovery. To reduce the heavy time and financial cost of experiments, many computational approaches have been proposed. Although these approaches have used many different principles, their performance is far from satisfactory, especially in predicting drug–target interactions of new candidate drugs or targets. Methods: Approaches based on machine learning for this problem can be divided into two types: feature-based and similarity-based methods. Learning to rank is the most powerful technique in the feature-based methods. Similarity-based methods are well accepted, due to their idea of connecting the chemical and genomic spaces, represented by drug and target similarities, respectively. We propose a new method, DrugE-Rank, to improve the prediction performance by nicely combining the advantages of the two different types of methods. That is, DrugE-Rank uses learning to rank (LTR), for which multiple well-known similarity-based methods can be used as components of ensemble learning. Results: The performance of DrugE-Rank is thoroughly examined by three main experiments using data from DrugBank: (i) cross-validation on FDA (US Food and Drug Administration) approved drugs before March 2014; (ii) independent test on FDA approved drugs after March 2014; and (iii) independent test on FDA experimental drugs. Experimental results show that DrugE-Rank outperforms competing methods significantly, especially achieving more than 30% improvement in Area under Prediction Recall curve for FDA approved new drugs and FDA experimental drugs. Availability: http://datamining-iip.fudan.edu.cn/service/DrugE-Rank Contact: zhusf@fudan.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307615

  4. Fully automated registration of first-pass myocardial perfusion MRI using independent component analysis.

    PubMed

    Milles, J; van der Geest, R J; Jerosch-Herold, M; Reiber, J H C; Lelieveldt, B P F

    2007-01-01

    This paper presents a novel method for registration of cardiac perfusion MRI. The presented method successfully corrects for breathing motion without any manual interaction, using Independent Component Analysis to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of ICA and used to compute the displacement caused by breathing for each frame. Qualitative and quantitative validation of the method is carried out using 46 clinical-quality, short-axis, perfusion MR datasets comprising 100 images each. Validation experiments showed a reduction of the average LV motion from 1.26 ± 0.87 to 0.64 ± 0.46 pixels. Time-intensity curves are also improved after registration, with an average error reduced from 2.65 ± 7.89% to 0.87 ± 3.88% between registered data and the manual gold standard. We conclude that this fully automatic ICA-based method shows excellent accuracy, robustness and computation speed, adequate for use in a clinical environment.

  5. A component prediction method for flue gas of natural gas combustion based on nonlinear partial least squares method.

    PubMed

    Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun

    2014-01-01

    Quantitative analysis of the flue gas of a natural gas-fired generator is significant for energy conservation and emission reduction. The traditional partial least squares (PLS) method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with an extended input based on a radial basis function neural network (RBFNN) is used for component prediction of flue gas. In the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden-layer nodes of the RBFNN form the extension term of the original independent input matrix. Partial least squares regression is then performed on the extended input matrix and the output matrix to establish the component prediction model of the flue gas. A near-infrared spectral dataset of flue gas from natural gas combustion is used to evaluate the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of the prediction values of the proposed method for methane, carbon monoxide, and carbon dioxide are reduced by 4.74%, 21.76%, and 5.32%, respectively, compared to those of PLS. Hence, the proposed method has higher predictive capability and better robustness.
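
    A small sketch of the extended-input idea under stated assumptions (k-means centers and a median-distance width for the RBF layer are choices made here for illustration, not the paper's settings):

```python
# PLS with an RBF-expanded input: original inputs + RBF hidden-layer outputs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics.pairwise import rbf_kernel

def rbf_extended_pls(X, Y, n_centers=20, n_components=5):
    centers = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X).cluster_centers_
    # width from the median pairwise distance between centers (heuristic)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    gamma = 1.0 / (2.0 * np.median(d[d > 0]) ** 2)
    Phi = rbf_kernel(X, centers, gamma=gamma)          # hidden-layer outputs
    X_ext = np.hstack([X, Phi])                        # original inputs + RBF extension term
    pls = PLSRegression(n_components=n_components).fit(X_ext, Y)
    return pls, centers, gamma

# usage with synthetic spectra (stand-in for near-infrared absorbances -> gas fractions)
rng = np.random.default_rng(0)
X = rng.random((200, 50))                              # 200 spectra, 50 wavelengths
Y = np.sin(X[:, :3]).sum(axis=1, keepdims=True) + 0.01 * rng.standard_normal((200, 1))
pls, centers, gamma = rbf_extended_pls(X, Y)
Phi_new = rbf_kernel(X[:5], centers, gamma=gamma)
print(pls.predict(np.hstack([X[:5], Phi_new])))
```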

  6. Addressing Spatial Dependence Bias in Climate Model Simulations—An Independent Component Analysis Approach

    NASA Astrophysics Data System (ADS)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2018-02-01

    Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.

  7. Figure-ground segmentation based on class-independent shape priors

    NASA Astrophysics Data System (ADS)

    Li, Yang; Liu, Yang; Liu, Guojun; Guo, Maozu

    2018-01-01

    We propose a method to generate figure-ground segmentation by incorporating shape priors into the graph-cuts algorithm. Given an image, we first obtain a linear representation of an image and then apply directional chamfer matching to generate class-independent, nonparametric shape priors, which provide shape clues for the graph-cuts algorithm. We then enforce shape priors in a graph-cuts energy function to produce object segmentation. In contrast to previous segmentation methods, the proposed method shares shape knowledge for different semantic classes and does not require class-specific model training. Therefore, the approach obtains high-quality segmentation for objects. We experimentally validate that the proposed method outperforms previous approaches using the challenging PASCAL VOC 2010/2012 and Berkeley (BSD300) segmentation datasets.

  8. A stepwedge-based method for measuring breast density: observer variability and comparison with human reading

    NASA Astrophysics Data System (ADS)

    Diffey, Jenny; Berks, Michael; Hufton, Alan; Chung, Camilla; Verow, Rosanne; Morrison, Joanna; Wilson, Mary; Boggis, Caroline; Morris, Julie; Maxwell, Anthony; Astley, Susan

    2010-04-01

    Breast density is positively linked to the risk of developing breast cancer. We have developed a semi-automated, stepwedge-based method that has been applied to the mammograms of 1,289 women in the UK breast screening programme to measure breast density by volume and area. 116 images were analysed by three independent operators to assess inter-observer variability; 24 of these were analysed on 10 separate occasions by the same operator to determine intra-observer variability. 168 separate images were analysed using the stepwedge method and by two radiologists who independently estimated percentage breast density by area. There was little intra-observer variability in the stepwedge method (average coefficients of variation 3.49% - 5.73%). There were significant differences in the volumes of glandular tissue obtained by the three operators. This was attributed to variations in the operators' definition of the breast edge. For fatty and dense breasts, there was good correlation between breast density assessed by the stepwedge method and the radiologists. This was also observed between radiologists, despite significant inter-observer variation. Based on analysis of thresholds used in the stepwedge method, radiologists' definition of a dense pixel is one in which the percentage of glandular tissue is between 10 and 20% of the total thickness of tissue.

  9. Estimation of the engineering elastic constants of a directionally solidified superalloy for finite element structural analysis

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Kalluri, Sreeramesh

    1991-01-01

    The temperature-dependent engineering elastic constants of a directionally solidified nickel-base superalloy were estimated from the single-crystal elastic constants of nickel and MAR-M002 superalloy by using Wells' method. In this method, the directionally solidified (columnar-grained) nickel-base superalloy was modeled as a transversely isotropic material, and the five independent elastic constants of the transversely isotropic material were determined from the three independent elastic constants of a cubic single crystal. Solidification for both the single crystals and the directionally solidified superalloy was assumed to be along the (001) direction. Temperature-dependent Young's moduli in longitudinal and transverse directions, shear moduli, and Poisson's ratios were tabulated for the directionally solidified nickel-base superalloy. These engineering elastic constants could be used as input for performing finite element structural analysis of directionally solidified turbine engine components.
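
    Wells' derivation itself is not reproduced here, but the following sketch shows how the five independent engineering constants of a transversely isotropic material are assembled into the compliance (and hence stiffness) matrix used as finite element input; the numerical values are placeholders, not the superalloy data of the report:

```python
# Voigt 6x6 compliance matrix of a transversely isotropic material from its five
# engineering constants; axis 3 (z) is taken as the solidification direction.
import numpy as np

def transversely_isotropic_compliance(E_L, E_T, G_LT, nu_LT, nu_TT):
    G_TT = E_T / (2.0 * (1.0 + nu_TT))   # in-plane shear follows from in-plane isotropy
    S = np.zeros((6, 6))
    S[0, 0] = S[1, 1] = 1.0 / E_T
    S[2, 2] = 1.0 / E_L
    S[0, 1] = S[1, 0] = -nu_TT / E_T
    S[0, 2] = S[2, 0] = S[1, 2] = S[2, 1] = -nu_LT / E_L
    S[3, 3] = S[4, 4] = 1.0 / G_LT       # shear planes containing the longitudinal axis
    S[5, 5] = 1.0 / G_TT                 # in-plane shear
    return S

# placeholder values (Pa), not the report's data
S = transversely_isotropic_compliance(E_L=130e9, E_T=100e9, G_LT=95e9,
                                      nu_LT=0.38, nu_TT=0.30)
C = np.linalg.inv(S)                     # stiffness matrix for FE input
print(np.round(C / 1e9, 1))              # GPa
```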

  10. Global root zone storage capacity from satellite-based evaporation data

    NASA Astrophysics Data System (ADS)

    Wang-Erlandsson, Lan; Bastiaanssen, Wim; Gao, Hongkai; Jägermeyr, Jonas; Senay, Gabriel; van Dijk, Albert; Guerschman, Juan; Keys, Patrick; Gordon, Line; Savenije, Hubert

    2016-04-01

    We present an "earth observation-based" method for estimating root zone storage capacity - a critical, yet uncertain parameter in hydrological and land surface modelling. By assuming that vegetation optimises its root zone storage capacity to bridge critical dry periods, we were able to use state-of-the-art satellite-based evaporation data computed with independent energy balance equations to derive gridded root zone storage capacity at global scale. This approach does not require soil or vegetation information, is model independent, and is in principle scale-independent. In contrast to traditional look-up table approaches, our method captures the variability in root zone storage capacity within land cover type, including in rainforests where direct measurements of root depth otherwise are scarce. Implementing the estimated root zone storage capacity in the global hydrological model STEAM improved evaporation simulation overall, and in particular during the least evaporating months in sub-humid to humid regions with moderate to high seasonality. We find that evergreen forests are able to create a large storage to buffer for extreme droughts (with a return period of up to 60 years), in contrast to short vegetation and crops (which seem to adapt to a drought return period of about 2 years). The presented method to estimate root zone storage capacity eliminates the need for soils and rooting depth information, which could be a game-changer in global land surface modelling.
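
    A single-pixel sketch of the accumulated-deficit idea behind this estimate, under simplifying assumptions (daily series, no return-period scaling, illustrative names):

```python
# Track the running water deficit the vegetation must bridge when evaporation
# exceeds precipitation; its maximum over the record approximates the root zone
# storage capacity for this pixel.
import numpy as np

def root_zone_storage_capacity(precip, evap):
    """precip, evap: daily series (mm/day) for one grid cell."""
    deficit, max_deficit = 0.0, 0.0
    for p, e in zip(precip, evap):
        deficit = max(0.0, deficit + e - p)   # grows when E > P, shrinks otherwise
        max_deficit = max(max_deficit, deficit)
    return max_deficit                         # mm of storage needed to bridge dry spells

# usage: synthetic seasonal climate with a pronounced dry season
days = np.arange(3 * 365)
precip = np.clip(3.0 + 3.0 * np.sin(2 * np.pi * days / 365), 0.0, None)
evap = np.full_like(precip, 3.0)
print(f"estimated root zone storage capacity: {root_zone_storage_capacity(precip, evap):.1f} mm")
```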

  11. Excitation-resolved multispectral method for imaging pharmacokinetic parameters in dynamic fluorescent molecular tomography

    NASA Astrophysics Data System (ADS)

    Chen, Maomao; Zhou, Yuan; Su, Han; Zhang, Dong; Luo, Jianwen

    2017-04-01

    Imaging of the pharmacokinetic parameters in dynamic fluorescence molecular tomography (DFMT) can provide three-dimensional metabolic information for biological studies and drug development. However, owing to the ill-posed nature of the FMT inverse problem, the relatively low quality of the parametric images makes it difficult to investigate the different metabolic processes of the fluorescent targets with small distances. An excitation-resolved multispectral DFMT method is proposed; it is based on the fact that the fluorescent targets with different concentrations show different variations in the excitation spectral domain and can be considered independent signal sources. With an independent component analysis method, the spatial locations of different fluorescent targets can be decomposed, and the fluorescent yields of the targets at different time points can be recovered. Therefore, the metabolic process of each component can be independently investigated. Simulations and phantom experiments are carried out to evaluate the performance of the proposed method. The results demonstrated that the proposed excitation-resolved multispectral method can effectively improve the reconstruction accuracy of the parametric images in DFMT.

  12. An Exploration of Alternative Scoring Methods Using Curriculum-Based Measurement in Early Writing

    ERIC Educational Resources Information Center

    Allen, Abigail A.; Poch, Apryl L.; Lembke, Erica S.

    2018-01-01

    This manuscript describes two empirical studies of alternative scoring procedures used with curriculum-based measurement in writing (CBM-W). Study 1 explored the technical adequacy of a trait-based rubric in first grade. Study 2 explored the technical adequacy of a trait-based rubric, production-dependent, and production-independent scores in…

  13. Principal component analysis-based unsupervised feature extraction applied to in silico drug discovery for posttraumatic stress disorder-mediated heart disease.

    PubMed

    Taguchi, Y-h; Iwadate, Mitsuo; Umeyama, Hideaki

    2015-04-30

    Feature extraction (FE) is difficult, particularly if there are more features than samples, as small sample numbers often result in biased outcomes or overfitting. Furthermore, multiple sample classes often complicate FE because evaluating performance, as is usual in supervised FE, is generally harder than in the two-class problem. Developing sample-classification-independent unsupervised methods would solve many of these problems. Two principal component analysis (PCA)-based FE approaches were therefore tested as sample-classification-independent unsupervised FE methods: variational Bayes PCA (VBPCA), which was extended to perform unsupervised FE, and conventional PCA (CPCA)-based unsupervised FE. VBPCA- and CPCA-based unsupervised FE both performed well when applied to simulated data and to a posttraumatic stress disorder (PTSD)-mediated heart disease data set that had multiple categorical class observations in mRNA/microRNA expression of stressed mouse heart. A critical set of PTSD miRNAs/mRNAs were identified that show aberrant expression between treatment and control samples, and significant, negative correlation with one another. Moreover, greater stability and biological feasibility than conventional supervised FE was also demonstrated. Based on the results obtained, in silico drug discovery was performed as translational validation of the methods. Our two proposed unsupervised FE methods (CPCA- and VBPCA-based) worked well on simulated data, and outperformed two conventional supervised FE methods on a real data set. Thus, the two methods appear comparably effective for FE on categorical multiclass data sets, with potential translational utility for in silico drug discovery.

  14. Abel's Theorem Simplifies Reduction of Order

    ERIC Educational Resources Information Center

    Green, William R.

    2011-01-01

    We give an alternative to the standard method of reduction of order, in which one uses one solution of a homogeneous, linear, second order differential equation to find a second, linearly independent solution. Our method, based on Abel's Theorem, is shorter, less complex and extends to higher order equations.
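
    The underlying identity is standard: for $y''+p(x)y'+q(x)y=0$ with one known solution $y_1$, Abel's theorem gives the Wronskian without knowing the second solution,

    $$
    W(x)=y_1y_2'-y_1'y_2=C\,e^{-\int p(x)\,dx},
    $$

    and dividing by $y_1^2$ and integrating yields the second, linearly independent solution in a single step:

    $$
    y_2(x)=y_1(x)\int\frac{C\,e^{-\int p(x)\,dx}}{y_1(x)^{2}}\,dx.
    $$

    For example, $y''-\tfrac{2}{x}y'+\tfrac{2}{x^{2}}y=0$ with $y_1=x$ gives $W=Ce^{\int 2/x\,dx}=Cx^{2}$ and $y_2=x\int\frac{Cx^{2}}{x^{2}}\,dx=Cx^{2}$, i.e. $y_2=x^{2}$.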

  15. Statistical inference for remote sensing-based estimates of net deforestation

    Treesearch

    Ronald E. McRoberts; Brian F. Walters

    2012-01-01

    Statistical inference requires expression of an estimate in probabilistic terms, usually in the form of a confidence interval. An approach to constructing confidence intervals for remote sensing-based estimates of net deforestation is illustrated. The approach is based on post-classification methods using two independent forest/non-forest classifications because...

  16. Improving estimates of genetic maps: a meta-analysis-based approach.

    PubMed

    Stewart, William C L

    2007-07-01

    Inaccurate genetic (or linkage) maps can reduce the power to detect linkage, increase type I error, and distort haplotype and relationship inference. To improve the accuracy of existing maps, I propose a meta-analysis-based method that combines independent map estimates into a single estimate of the linkage map. The method uses the variance of each independent map estimate to combine them efficiently, whether the map estimates use the same set of markers or not. As compared with a joint analysis of the pooled genotype data, the proposed method is attractive for three reasons: (1) it has comparable efficiency to the maximum likelihood map estimate when the pooled data are homogeneous; (2) relative to existing map estimation methods, it can have increased efficiency when the pooled data are heterogeneous; and (3) it avoids the practical difficulties of pooling human subjects data. On the basis of simulated data modeled after two real data sets, the proposed method can reduce the sampling variation of linkage maps commonly used in whole-genome linkage scans. Furthermore, when the independent map estimates are also maximum likelihood estimates, the proposed method performs as well as or better than when they are estimated by the program CRIMAP. Since variance estimates of maps may not always be available, I demonstrate the feasibility of three different variance estimators. Overall, the method should prove useful to investigators who need map positions for markers not contained in publicly available maps, and to those who wish to minimize the negative effects of inaccurate maps. Copyright 2007 Wiley-Liss, Inc.
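
    The abstract indicates a variance-weighted combination of independent map estimates; the fixed-effect inverse-variance rule below illustrates that basic building block for a single marker interval (the published estimator may differ in detail):

```python
# Fixed-effect, inverse-variance combination of independent estimates of the
# same map distance (the basic meta-analysis building block described above).
import numpy as np

def combine_estimates(estimates, variances):
    """Return the combined estimate and its variance."""
    estimates = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)       # weights = inverse variances
    combined = np.sum(w * estimates) / np.sum(w)
    combined_var = 1.0 / np.sum(w)
    return combined, combined_var

# three independent estimates (in centimorgans) of the same marker interval
theta = [12.1, 10.4, 11.3]
var = [0.9, 2.5, 1.6]
est, v = combine_estimates(theta, var)
print(f"combined estimate: {est:.2f} cM, standard error: {np.sqrt(v):.2f} cM")
```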

  17. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Tolson, Bryan

    2017-04-01

    The increasing complexity and runtime of environmental models lead to the current situation in which the calibration of all model parameters, or the estimation of all of their uncertainties, is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While examining the convergence of calibration and uncertainty methods is state of the art, the convergence of the sensitivity methods is usually not checked. At most, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, can itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method independency of the convergence-testing method, we applied it to three widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al. 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence-testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) for which the true indexes of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on the model independency by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations and therefore enables checking of already processed (and published) sensitivity results. This is one step towards reliable and transferable published sensitivity results.
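
    MVA itself is not sketched here (its details are not given in this record); the block below only shows a simplified radial variant of the Morris elementary-effects screening, i.e. one of the SA methods whose convergence the proposed test is meant to check, so that "sensitivity index" has a concrete meaning:

```python
# Simplified radial elementary-effects screening (Morris-style mu*), for a model
# with inputs on the unit hypercube.
import numpy as np

def morris_mu_star(model, n_params, n_trajectories=50, delta=0.1, seed=0):
    """mu*: mean absolute elementary effect per parameter."""
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_trajectories, n_params))
    for t in range(n_trajectories):
        x = rng.uniform(0.0, 1.0 - delta, size=n_params)
        f0 = model(x)
        for i in range(n_params):
            x_step = x.copy()
            x_step[i] += delta
            effects[t, i] = (model(x_step) - f0) / delta
    return np.abs(effects).mean(axis=0)

# toy model: parameter 0 matters a lot, parameter 2 not at all
model = lambda x: 5.0 * x[0] + 1.0 * x[1] ** 2 + 0.0 * x[2]
print(morris_mu_star(model, n_params=3))
```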

  18. State-independent uncertainty relations and entanglement detection

    NASA Astrophysics Data System (ADS)

    Qian, Chen; Li, Jun-Li; Qiao, Cong-Feng

    2018-04-01

    The uncertainty relation is one of the key ingredients of quantum theory. Despite the great efforts devoted to this subject, most of the variance-based uncertainty relations are state-dependent and suffer from the triviality problem of zero lower bounds. Here we develop a method to obtain uncertainty relations with state-independent lower bounds. The method works by exploring the eigenvalues of a Hermitian matrix composed of the Bloch vectors of incompatible observables and is applicable to both pure and mixed states and to an arbitrary number of N-dimensional observables. The uncertainty relation for the incompatible observables can be explained by geometric relations related to the parallel postulate and the inequalities in Horn's conjecture on sums of Hermitian matrices. Practical entanglement criteria are also presented based on the derived uncertainty relations.

  19. Haplowebs as a graphical tool for delimiting species: a revival of Doyle's "field for recombination" approach and its application to the coral genus Pocillopora in Clipperton

    PubMed Central

    2010-01-01

    Background: Usual methods for inferring species boundaries from molecular sequence data rely either on gene trees or on population genetic analyses. Another way of delimiting species, based on a view of species as "fields for recombination" (FFRs) characterized by mutual allelic exclusivity, was suggested in 1995 by Doyle. Here we propose to use haplowebs (haplotype networks with additional connections between haplotypes found co-occurring in heterozygous individuals) to visualize and delineate single-locus FFRs (sl-FFRs). Furthermore, we introduce a method to quantify the reliability of putative species boundaries according to the number of independent markers that support them, and illustrate this approach with a case study of taxonomically difficult corals of the genus Pocillopora collected around Clipperton Island (far eastern Pacific). Results: One haploweb built from intron sequences of the ATP synthase β subunit gene revealed the presence of two sl-FFRs among our 74 coral samples, whereas a second one built from ITS sequences turned out to be composed of four sl-FFRs. As a third independent marker, we performed a combined analysis of two regions of the mitochondrial genome: since haplowebs are not suited to analyze non-recombining markers, individuals were sorted into four haplogroups according to their mitochondrial sequences. Among all possible bipartitions of our set of samples, thirteen were supported by at least one molecular dataset, none by two and only one by all three datasets: this congruent pattern obtained from independent nuclear and mitochondrial markers indicates that two species of Pocillopora are present in Clipperton. Conclusions: Our approach builds on Doyle's method and extends it by introducing an intuitive, user-friendly graphical representation and by proposing a conceptual framework to analyze and quantify the congruence between sl-FFRs obtained from several independent markers. Like delineation methods based on population-level statistical approaches, our method can distinguish closely-related species that have not yet reached reciprocal monophyly at most or all of their loci; like tree-based approaches, it can yield meaningful conclusions using a number of independent markers as low as three. Future efforts will aim to develop programs that speed up the construction of haplowebs from FASTA sequence alignments and help perform the congruence analysis outlined in this article. PMID:21118572
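
    A minimal sketch of the haploweb idea as a graph, with made-up example data (an illustration of the concept, not the authors' software): nodes are haplotypes, the usual network links connect similar haplotypes, extra links connect haplotypes co-occurring in heterozygous individuals, and pools of haplotypes connected through heterozygotes delimit candidate sl-FFRs.

```python
import networkx as nx

# haplotype-network edges (e.g. single-mutation links) -- made-up example data
network_edges = [("H1", "H2"), ("H2", "H3"), ("H4", "H5")]

# genotypes: the one or two haplotypes each individual carries at this locus
individuals = {
    "ind1": ("H1", "H1"),   # homozygote
    "ind2": ("H1", "H3"),   # heterozygote -> connects H1 and H3
    "ind3": ("H4", "H5"),   # heterozygote -> connects H4 and H5
    "ind4": ("H5", "H5"),
    "ind5": ("H2", "H2"),
}

haploweb = nx.Graph()
haploweb.add_edges_from(network_edges, kind="network")       # mutational links (for drawing)
cooccurrence = nx.Graph()
for a, b in individuals.values():
    cooccurrence.add_node(a)
    cooccurrence.add_node(b)
    if a != b:
        cooccurrence.add_edge(a, b)
        haploweb.add_edge(a, b, kind="heterozygote")          # extra haploweb connection

# candidate single-locus FFRs: pools of haplotypes connected through heterozygotes
sl_ffrs = list(nx.connected_components(cooccurrence))
print(sl_ffrs)   # e.g. [{'H1', 'H3'}, {'H4', 'H5'}, {'H2'}]
```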

  20. Haplowebs as a graphical tool for delimiting species: a revival of Doyle's "field for recombination" approach and its application to the coral genus Pocillopora in Clipperton.

    PubMed

    Flot, Jean-François; Couloux, Arnaud; Tillier, Simon

    2010-11-30

    Usual methods for inferring species boundaries from molecular sequence data rely either on gene trees or on population genetic analyses. Another way of delimiting species, based on a view of species as "fields for recombination" (FFRs) characterized by mutual allelic exclusivity, was suggested in 1995 by Doyle. Here we propose to use haplowebs (haplotype networks with additional connections between haplotypes found co-occurring in heterozygous individuals) to visualize and delineate single-locus FFRs (sl-FFRs). Furthermore, we introduce a method to quantify the reliability of putative species boundaries according to the number of independent markers that support them, and illustrate this approach with a case study of taxonomically difficult corals of the genus Pocillopora collected around Clipperton Island (far eastern Pacific). One haploweb built from intron sequences of the ATP synthase β subunit gene revealed the presence of two sl-FFRs among our 74 coral samples, whereas a second one built from ITS sequences turned out to be composed of four sl-FFRs. As a third independent marker, we performed a combined analysis of two regions of the mitochondrial genome: since haplowebs are not suited to analyze non-recombining markers, individuals were sorted into four haplogroups according to their mitochondrial sequences. Among all possible bipartitions of our set of samples, thirteen were supported by at least one molecular dataset, none by two and only one by all three datasets: this congruent pattern obtained from independent nuclear and mitochondrial markers indicates that two species of Pocillopora are present in Clipperton. Our approach builds on Doyle's method and extends it by introducing an intuitive, user-friendly graphical representation and by proposing a conceptual framework to analyze and quantify the congruence between sl-FFRs obtained from several independent markers. Like delineation methods based on population-level statistical approaches, our method can distinguish closely-related species that have not yet reached reciprocal monophyly at most or all of their loci; like tree-based approaches, it can yield meaningful conclusions using a number of independent markers as low as three. Future efforts will aim to develop programs that speed up the construction of haplowebs from FASTA sequence alignments and help perform the congruence analysis outlined in this article.

  1. DrugE-Rank: improving drug-target interaction prediction of new candidate drugs or targets by ensemble learning to rank.

    PubMed

    Yuan, Qingjun; Gao, Junning; Wu, Dongliang; Zhang, Shihua; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2016-06-15

    Identifying drug-target interactions is an important task in drug discovery. To reduce the heavy time and financial cost of experiments, many computational approaches have been proposed. Although these approaches have used many different principles, their performance is far from satisfactory, especially in predicting drug-target interactions of new candidate drugs or targets. Approaches based on machine learning for this problem can be divided into two types: feature-based and similarity-based methods. Learning to rank is the most powerful technique in the feature-based methods. Similarity-based methods are well accepted, due to their idea of connecting the chemical and genomic spaces, represented by drug and target similarities, respectively. We propose a new method, DrugE-Rank, to improve the prediction performance by nicely combining the advantages of the two different types of methods. That is, DrugE-Rank uses learning to rank (LTR), for which multiple well-known similarity-based methods can be used as components of ensemble learning. The performance of DrugE-Rank is thoroughly examined by three main experiments using data from DrugBank: (i) cross-validation on FDA (US Food and Drug Administration) approved drugs before March 2014; (ii) independent test on FDA approved drugs after March 2014; and (iii) independent test on FDA experimental drugs. Experimental results show that DrugE-Rank outperforms competing methods significantly, especially achieving more than 30% improvement in Area under Prediction Recall curve for FDA approved new drugs and FDA experimental drugs. Availability: http://datamining-iip.fudan.edu.cn/service/DrugE-Rank Contact: zhusf@fudan.edu.cn Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  2. A method to estimate weight and dimensions of aircraft gas turbine engines. Volume 1: Method of analysis

    NASA Technical Reports Server (NTRS)

    Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.

    1977-01-01

    Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.

  3. [Evaluation of three methods for constructing craniofacial mid-sagittal plane based on the cone beam computed tomography].

    PubMed

    Wang, S W; Li, M; Yang, H F; Zhao, Y J; Wang, Y; Liu, Y

    2016-04-18

    To compare the accuracy of the iterative closest point (ICP) algorithm, the Procrustes analysis (PA) algorithm, and a traditional landmark-based method for constructing the mid-sagittal plane (MSP) from cone beam computed tomography (CBCT), and to provide a theoretical basis for establishing a coordinate system for CBCT images and for symmetry analysis. Ten patients were selected and scanned by CBCT before orthodontic treatment. The scan data were imported into Mimics 10.0 to reconstruct three-dimensional skulls, and the MSP of each skull was generated by the ICP algorithm, the PA algorithm and the landmark-based method. MSP extraction by the ICP or PA algorithm involved three steps. First, the 3D skull was processed with the reverse engineering software Geomagic Studio 2012 to obtain a mirror skull. Then, the original skull and its mirror were registered, either by the ICP algorithm in Geomagic Studio 2012 or by the PA algorithm in NX Imageware 11.0. Finally, the registered data were merged into a new data set from which the MSP of the original data was calculated in Geomagic Studio 2012. In the traditional landmark-based method, conducted in the software InVivoDental 5.0, the mid-sagittal plane was determined by sella (S), nasion (N) and basion (Ba). The distances from 9 pairs of symmetric anatomical landmarks to the three sagittal planes were measured, and the absolute values were compared. A one-way ANOVA was used to analyze the differences among the 3 MSPs, with pairwise comparisons performed by the LSD method. The MSPs calculated by the three methods were all usable for clinical analysis, as judged from the frontal view. However, there were significant differences among the distances from the 9 pairs of symmetric anatomical landmarks to the MSPs (F=10.932, P=0.001). The LSD test showed no significant difference between the ICP algorithm and the landmark-based method (P=0.11), but a significant difference between the PA algorithm and the landmark-based method (P=0.01). The mid-sagittal plane of a 3D skull can be generated based on the ICP or PA algorithm, and there was no significant difference between the ICP algorithm and the landmark-based method. For subjects with no evident asymmetry, the ICP algorithm is feasible for clinical analysis.

  4. Fetal ECG extraction using independent component analysis by Jade approach

    NASA Astrophysics Data System (ADS)

    Giraldo-Guzmán, Jader; Contreras-Ortiz, Sonia H.; Lasprilla, Gloria Isabel Bautista; Kotas, Marian

    2017-11-01

    Fetal ECG monitoring is a useful method to assess the health of the fetus and detect abnormal conditions. In this paper we propose an approach to extract the fetal ECG from abdominal and chest signals using independent component analysis based on the joint approximate diagonalization of eigenmatrices (JADE) approach. The JADE approach avoids redundancy, which reduces matrix dimension and computational costs. Signals were filtered with a high-pass filter to eliminate low-frequency noise. Several levels of decomposition were tested until the fetal ECG was recognized in one of the separated source outputs. The proposed method shows fast and good performance.

  5. Fast algorithms for evaluating the stress field of dislocation lines in anisotropic elastic media

    NASA Astrophysics Data System (ADS)

    Chen, C.; Aubry, S.; Oppelstrup, T.; Arsenlis, A.; Darve, E.

    2018-06-01

    In dislocation dynamics (DD) simulations, the most computationally intensive step is the evaluation of the elastic interaction forces among dislocation ensembles. Because the pair-wise interaction between dislocations is long-range, this force calculation step can be significantly accelerated by the fast multipole method (FMM). We implemented and compared four different methods in isotropic and anisotropic elastic media: one based on the Taylor series expansion (Taylor FMM), one based on the spherical harmonics expansion (Spherical FMM), one kernel-independent method based on Chebyshev interpolation (Chebyshev FMM), and a new kernel-independent method that we call the Lagrange FMM. The Taylor FMM is an existing method, used in ParaDiS, one of the most popular DD simulation software packages. The Spherical FMM employs a more compact multipole representation than the Taylor FMM does and is thus more efficient. However, both the Taylor FMM and the Spherical FMM are difficult to derive in anisotropic elastic media because the interaction force is complex and has no closed analytical formula. The Chebyshev FMM requires only the ability to evaluate the interaction between dislocations and thus can be applied easily in anisotropic elastic media. But it has a relatively large memory footprint, which limits its usage. The Lagrange FMM was designed to be a memory-efficient black-box method. Various numerical experiments are presented to demonstrate the convergence and the scalability of the four methods.
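
    The Chebyshev FMM mentioned above rests on low-rank interpolation of the interaction kernel between well-separated clusters. The following self-contained 1D illustration shows only that building block (not a full FMM), and the 1/|x−y| kernel is merely a stand-in for the anisotropic dislocation kernel:

```python
# Low-rank kernel approximation via interpolation at Chebyshev nodes:
# K(x, y) ~ U(x) K(x_hat, y_hat) V(y)^T for well-separated clusters.
import numpy as np

def cheb_nodes(n, a, b):
    # Chebyshev nodes of the first kind mapped to [a, b]
    k = np.arange(n)
    x = np.cos((2 * k + 1) * np.pi / (2 * n))
    return 0.5 * (a + b) + 0.5 * (b - a) * x

def lagrange_matrix(x, nodes):
    # L[i, j] = j-th Lagrange basis polynomial (on `nodes`) evaluated at x[i]
    L = np.ones((len(x), len(nodes)))
    for j in range(len(nodes)):
        for m in range(len(nodes)):
            if m != j:
                L[:, j] *= (x - nodes[m]) / (nodes[j] - nodes[m])
    return L

# stand-in kernel between well-separated 1D clusters
K = lambda x, y: 1.0 / np.abs(x[:, None] - y[None, :])

rng = np.random.default_rng(0)
targets = rng.uniform(0.0, 1.0, 200)
sources = rng.uniform(3.0, 4.0, 250)
weights = rng.standard_normal(250)

p = 8                                               # Chebyshev nodes per cluster
xt, yt = cheb_nodes(p, 0.0, 1.0), cheb_nodes(p, 3.0, 4.0)
U = lagrange_matrix(targets, xt)                    # 200 x p
V = lagrange_matrix(sources, yt)                    # 250 x p
approx = U @ (K(xt, yt) @ (V.T @ weights))
exact = K(targets, sources) @ weights
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```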

  6. Fast and secure encryption-decryption method based on chaotic dynamics

    DOEpatents

    Protopopescu, Vladimir A.; Santoro, Robert T.; Tolliver, Johnny S.

    1995-01-01

    A method and system for the secure encryption of information. The method comprises the steps of dividing a message of length L into its character components; generating m chaotic iterates from m independent chaotic maps; producing an "initial" value based upon the m chaotic iterates; transforming the "initial" value to create a pseudo-random integer; repeating the steps of generating, producing and transforming until a pseudo-random integer sequence of length L is created; and encrypting the message as ciphertext based upon the pseudo random integer sequence. A system for accomplishing the invention is also provided.
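
    The patent's exact transformation from chaotic iterates to pseudo-random integers is not specified in this record; the sketch below simply follows the enumerated steps with logistic maps and XOR, as an illustration only (not the patented algorithm, and not cryptographically secure):

```python
# Keystream from m independent chaotic (logistic) maps, used to XOR-encrypt a
# message character by character.  The iterate-to-integer mapping is a simple
# assumption made for this illustration.
def logistic(x, r=3.99):
    return r * x * (1.0 - x)

def keystream(length, seeds):
    """One pseudo-random byte per character, derived from m independent chaotic maps."""
    states = list(seeds)                           # m independent initial conditions (the key)
    stream = []
    for _ in range(length):
        states = [logistic(x) for x in states]     # step all m chaotic maps
        initial = sum(states) / len(states)        # an "initial" value from the m iterates
        stream.append(int(initial * 256) % 256)    # transform to a pseudo-random integer 0..255
    return stream

def encrypt(message, seeds):
    data = message.encode()
    ks = keystream(len(data), seeds)
    return bytes(b ^ k for b, k in zip(data, ks))

def decrypt(ciphertext, seeds):
    ks = keystream(len(ciphertext), seeds)
    return bytes(b ^ k for b, k in zip(ciphertext, ks)).decode()

key = (0.3141592, 0.6180339, 0.7071067)            # m = 3 chaotic-map seeds
ct = encrypt("independent methods based", key)
print(decrypt(ct, key))                            # -> "independent methods based"
```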

  7. The High School & Beyond Data Set: Academic Self-Concept Measures.

    ERIC Educational Resources Information Center

    Strein, William

    A series of confirmatory factor analyses using both LISREL VI (maximum likelihood method) and LISCOMP (weighted least squares method using covariance matrix based on polychoric correlations) and including cross-validation on independent samples were applied to items from the High School and Beyond data set to explore the measurement…

  8. Migration monitoring with automated technology

    Treesearch

    Rhonda L. Millikin

    2005-01-01

    Automated technology can supplement ground-based methods of migration monitoring by providing: (1) unbiased and automated sampling; (2) independent validation of current methods; (3) a larger sample area for landscape-level analysis of habitat selection for stopover, and (4) an opportunity to study flight behavior. In particular, radar-acoustic sensor fusion can...

  9. Hybrid Optimization of Object-Based Classification in High-Resolution Images Using Continous ANT Colony Algorithm with Emphasis on Building Detection

    NASA Astrophysics Data System (ADS)

    Tamimi, E.; Ebadi, H.; Kiani, A.

    2017-09-01

    Automatic building detection from High Spatial Resolution (HSR) images is one of the most important issues in Remote Sensing (RS). Due to the limited number of spectral bands in HSR images, using additional features can improve accuracy. However, adding features increases the probability of including dependent features, which reduces accuracy. In addition, several parameters must be determined for Support Vector Machine (SVM) classification. Therefore, it is necessary to simultaneously determine classification parameters and select independent features according to image type. An optimization algorithm is an efficient way to solve this problem. On the other hand, pixel-based classification faces several challenges, such as producing salt-and-pepper results and high computational time for high-dimensional data. Hence, in this paper, a novel method is proposed to optimize object-based SVM classification by applying a continuous Ant Colony Optimization (ACO) algorithm. The advantages of the proposed method are a relatively high level of automation, independence of image scene and type, reduced post-processing for building edge reconstruction, and improved accuracy. The proposed method was evaluated against pixel-based SVM and Random Forest (RF) classification in terms of accuracy. In comparison with optimized pixel-based SVM classification, the results showed that the proposed method improved the quality factor and overall accuracy by 17% and 10%, respectively. The Kappa coefficient was also improved by 6% relative to RF classification. The processing time of the proposed method was relatively low because the unit of analysis was the image object. These results show the superiority of the proposed method in terms of time and accuracy.
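
    As a hedged sketch of the optimization step, the code below tunes SVM hyper-parameters with a simple continuous ACO-style sampler (an archive of good solutions plus Gaussian sampling around them). The dataset, parameter ranges and archive sizes are placeholders, not the paper's feature set or algorithm settings.

```python
# Continuous ACO-style search over SVM hyper-parameters, scored by
# cross-validated accuracy. Iris stands in for object-based image features.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)             # placeholder features/labels

def fitness(log_c, log_gamma):
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return cross_val_score(clf, X, y, cv=5).mean()

# Archive of candidate solutions in log10 space: ([log C, log gamma], score).
archive = []
for _ in range(10):
    p = rng.uniform([-2, -4], [3, 1])
    archive.append((p, fitness(*p)))

for _ in range(20):                           # ACO-style iterations
    archive.sort(key=lambda s: -s[1])
    archive = archive[:10]                    # keep the best solutions
    params = np.array([s[0] for s in archive])
    sigma = params.std(axis=0) + 1e-3         # archive spread guides the sampling
    for _ in range(5):                        # new "ants" sample near good solutions
        centre = params[rng.integers(0, len(params))]
        p = centre + rng.normal(0.0, sigma)
        archive.append((p, fitness(*p)))

best_params, best_score = max(archive, key=lambda s: s[1])
print("best log10(C), log10(gamma):", best_params, "CV accuracy:", best_score)
```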

  10. An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition

    NASA Astrophysics Data System (ADS)

    Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.

    2018-04-01

    Interferometric SAR is sensitive to earth surface undulation. The accuracy of interferometric parameters plays a significant role in precise digital elevation model (DEM). The interferometric calibration is to obtain high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and interferometric calibration model is established. We take Weinan of Shanxi province as an example and choose 4 TerraDEM-X image pairs to carry out interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain the accuracy of DEM products better than 2.43 m in the flat area and 6.97 m in the mountainous area, which can prove the correctness and effectiveness of the proposed IPD based interferometric calibration method. The results provide a technical basis for topographic mapping of 1 : 50000 and even larger scale in the flat area and mountainous area.

  11. Multiple UAV Cooperation for Wildfire Monitoring

    NASA Astrophysics Data System (ADS)

    Lin, Zhongjie

    Wildfires have been a major factor in the development and management of the world's forests. An accurate assessment of wildfire status is imperative for fire management. This thesis is dedicated to the topic of utilizing multiple unmanned aerial vehicles (UAVs) to cooperatively monitor a large-scale wildfire. This is achieved through estimation of the wildfire spreading situation based on on-line measurements and a cooperation strategy designed to ensure efficiency. First, based on an understanding of the physical characteristics of wildfire propagation behavior, a wildfire model and a Kalman filter-based method are proposed to estimate the wildfire rate of spread and the fire front contour profile. With abundant on-line measurements from the UAVs' on-board sensors, the proposed method allows a wildfire monitoring mission to benefit from on-line information updating, increased flexibility, and accurate estimation. An independent wildfire simulator is utilized to verify the effectiveness of the proposed method. Second, based on the filter analysis, the wildfire spreading situation and the vehicle dynamics, the influence of different UAV cooperation strategies on the overall mission performance is studied. The multi-UAV cooperation problem is formulated in a distributed network. A consensus-based method is proposed to help address the problem. The optimal cooperation strategy of the UAVs is obtained through mathematical analysis. The derived optimal cooperation strategy is then tested in an independent fire simulation environment to verify its effectiveness.
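
    A minimal sketch of the on-line estimation idea (not the thesis model): a constant-velocity Kalman filter tracking a fire-front position and its rate of spread from noisy position measurements; the noise levels and measurement model are assumed.

```python
# Constant-velocity Kalman filter: state = [front position, rate of spread].
import numpy as np

dt = 60.0                                     # measurement interval [s], illustrative
F = np.array([[1.0, dt], [0.0, 1.0]])         # state transition
H = np.array([[1.0, 0.0]])                    # only the front position is measured
Q = np.diag([1.0, 0.01])                      # process noise (assumed)
R = np.array([[25.0]])                        # measurement noise (assumed)

x = np.array([0.0, 0.0])                      # initial state estimate
P = np.eye(2) * 100.0                         # initial uncertainty

rng = np.random.default_rng(1)
true_ros = 0.5                                # true rate of spread [m/s]
for k in range(1, 61):
    z = true_ros * dt * k + rng.normal(0, 5)  # noisy front-position measurement

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated rate of spread:", x[1], "m/s (true 0.5)")
```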

  12. Application of extremum seeking for time-varying systems to resonance control of RF cavities

    DOE PAGES

    Scheinker, Alexander

    2016-09-13

    A recently developed form of extremum seeking for time-varying systems is implemented in hardware for the resonance control of radio-frequency cavities without phase measurements. Normal conducting RF cavity resonance control is performed via a slug tuner, while superconducting TESLA-type cavity resonance control is performed via piezo actuators. The controller maintains resonance by minimizing reflected power by utilizing model-independent adaptive feedback. Unlike standard phase-measurement-based resonance control, the presented approach is not sensitive to arbitrary phase shifts of the RF signals due to temperature-dependent cable length or phase-measurement hardware changes. The phase independence of this method removes common slowly varying drifts and the required periodic recalibration of phase-based methods. A general overview of the adaptive controller is presented along with proof-of-principle experimental results at room temperature. Lastly, this method allows us to both maintain a cavity at a desired resonance frequency and also to dynamically modify its resonance frequency to track the unknown time-varying frequency of an RF source, thereby maintaining maximal cavity field strength, based only on power-level measurements.
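
    The loop below is a generic, hedged sketch of perturbation-based extremum seeking on a toy cavity model: the tuner setting is dithered, the measured reflected power is demodulated, and the estimate moves downhill using power measurements only. It is not the specific time-varying ES scheme or the hardware parameters used in the paper.

```python
# Generic sinusoidal-perturbation extremum seeking minimizing reflected power.
import numpy as np

def reflected_power(theta, theta_resonance):
    # Toy cavity model: reflected power is minimal at the resonant tuner setting.
    return 1.0 - 1.0 / (1.0 + 50.0 * (theta - theta_resonance) ** 2)

dt = 1e-3                       # controller time step [s] (illustrative)
omega = 2 * np.pi * 50.0        # dither frequency [rad/s]
a, k = 0.05, 200.0              # dither amplitude and adaptation gain (illustrative)

theta_hat = 0.3                 # initial tuner setting
theta_res = 0.7                 # unknown resonance (would drift slowly in practice)

for n in range(20000):
    t = n * dt
    theta = theta_hat + a * np.sin(omega * t)            # dithered actuator command
    p = reflected_power(theta, theta_res)                # power-level measurement only
    theta_hat -= k * dt * a * np.sin(omega * t) * p      # demodulation gives a gradient estimate

print("converged tuner setting:", round(theta_hat, 3), "(resonance at", theta_res, ")")
```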

  14. Direct sampling of cystic fibrosis lungs indicates that DNA-based analyses of upper-airway specimens can misrepresent lung microbiota.

    PubMed

    Goddard, Amanda F; Staudinger, Benjamin J; Dowd, Scot E; Joshi-Datar, Amruta; Wolcott, Randall D; Aitken, Moira L; Fligner, Corinne L; Singh, Pradeep K

    2012-08-21

    Recent work using culture-independent methods suggests that the lungs of cystic fibrosis (CF) patients harbor a vast array of bacteria not conventionally implicated in CF lung disease. However, sampling lung secretions in living subjects requires that expectorated specimens or collection devices pass through the oropharynx. Thus, contamination could confound results. Here, we compared culture-independent analyses of throat and sputum specimens to samples directly obtained from the lungs at the time of transplantation. We found that CF lungs with advanced disease contained relatively homogenous populations of typical CF pathogens. In contrast, upper-airway specimens from the same subjects contained higher levels of microbial diversity and organisms not typically considered CF pathogens. Furthermore, sputum exhibited day-to-day variation in the abundance of nontypical organisms, even in the absence of clinical changes. These findings suggest that oropharyngeal contamination could limit the accuracy of DNA-based measurements on upper-airway specimens. This work highlights the importance of sampling procedures for microbiome studies and suggests that methods that account for contamination are needed when DNA-based methods are used on clinical specimens.

  15. Quantification method for the appearance of melanin pigmentation using independent component analysis

    NASA Astrophysics Data System (ADS)

    Ojima, Nobutoshi; Okiyama, Natsuko; Okaguchi, Saya; Tsumura, Norimichi; Nakaguchi, Toshiya; Hori, Kimihiko; Miyake, Yoichi

    2005-04-01

    In the cosmetics industry, skin color is very important because skin color gives a direct impression of the face. In particular, many people suffer from melanin pigmentation such as liver spots and freckles. However, it is very difficult to evaluate melanin pigmentation using conventional colorimetric values because these values contain information on various skin chromophores simultaneously. Therefore, it is necessary to extract density information for each skin chromophore independently. Isolation of the melanin component image from a single skin image based on independent component analysis (ICA) was reported in 2003; however, that work did not provide a quantification method for melanin pigmentation. This paper introduces a quantification method based on the ICA of a skin color image to isolate melanin pigmentation. The image acquisition system we used consists of commercially available equipment such as digital cameras and lighting sources with polarized light. The images taken were analyzed using ICA to extract the melanin component images, and a Laplacian of Gaussian (LoG) filter was applied to extract the pigmented area. As a result, for skin images including those showing melanin pigmentation and acne, the method worked well. Finally, the total amount of extracted area corresponded strongly with the subjective rating values for the appearance of pigmentation. Further analysis is needed to characterize the appearance of pigmentation with respect to the size of the pigmented area and its spatial gradation.
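
    A minimal sketch of the final extraction step (LoG filtering and thresholding of a melanin-component image) is given below; the melanin image is synthetic here, whereas in the paper it comes from the ICA decomposition of a skin colour image, and the threshold choice is illustrative.

```python
# Laplacian of Gaussian blob detection on a synthetic melanin-component image.
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(0)
melanin = rng.normal(0.0, 0.02, (256, 256))          # background skin texture
yy, xx = np.mgrid[:256, :256]
melanin += 0.5 * np.exp(-((yy - 128) ** 2 + (xx - 100) ** 2) / (2 * 8.0 ** 2))  # a "spot"

log_response = gaussian_laplace(melanin, sigma=4.0)  # bright blobs give strong minima
mask = log_response < -3.0 * log_response.std()      # threshold (illustrative choice)

print("extracted pigmented area (pixels):", mask.sum())
```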

  16. Efficiency of chemotherapy coupled with thermotherapy against citrus HLB

    USDA-ARS?s Scientific Manuscript database

    Six independent experiments were carried out to evaluate the effectiveness of the chemotherapy coupled with the thermotherapy on pot-contained HLB-affected plants based on our previous results from graft-based methods. Three-year old potted HLB-affected citrus plants were exposed to 4 thermotherapy ...

  17. Skill Transfer and Virtual Training for IND Response Decision-Making: Project Summary and Next Steps

    DTIC Science & Technology

    2016-04-12

    are likely to be very productive partners—independent video-game developers and academic game degree programs—are not familiar with working with...experimental validation. • Independent Video-Game Developers. Small companies and individuals that pursue video-game design and development can be...complexity, such as an improvised nuclear device (IND) detonation. The effort has examined game-based training methods to determine their suitability

  18. Application of Model-based Systems Engineering Methods to Development of Combat System Architectures

    DTIC Science & Technology

    2009-04-22

  19. Additive Partial Least Squares for efficient modelling of independent variance sources demonstrated on practical case studies.

    PubMed

    Luoma, Pekka; Natschläger, Thomas; Malli, Birgit; Pawliczek, Marcin; Brandstetter, Markus

    2018-05-12

    A model recalibration method based on additive Partial Least Squares (PLS) regression is generalized for multi-adjustment scenarios of independent variance sources (referred to as additive PLS, aPLS). aPLS allows for effortless model readjustment under changing measurement conditions and the combination of independent variance sources with the initial model by means of additive modelling. We demonstrate these distinguishing features on two NIR spectroscopic case studies. In case study 1, aPLS was used as a readjustment method for an emerging offset. The achieved RMS error of prediction (1.91 a.u.) was similar to that before the offset occurred (2.11 a.u.). In case study 2, a calibration combining different variance sources was conducted. The achieved performance was sufficient, with an absolute error below 0.8% of the mean concentration, thereby compensating for the negative effects of two independent variance sources. The presented results show the applicability of the aPLS approach. The main advantages of the method are that the original model stays unadjusted and that the modelling is conducted on concrete changes in the spectra, thus supporting efficient (in most cases straightforward) modelling. Additionally, the method is put into context of existing machine learning algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.
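
    The sketch below is a rough illustration of additive recalibration in the spirit of aPLS: a base PLS model is left untouched and a small second PLS model is fitted to the change introduced by a new variance source, with predictions formed as the sum of the two. Data shapes, the synthetic offset and component counts are assumptions, not the authors' exact procedure.

```python
# Additive-style PLS recalibration: base model + correction model on residuals.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_wl = 200
spectra = rng.normal(size=(80, n_wl))
concentration = spectra[:, 30] * 2.0 + rng.normal(scale=0.05, size=80)

base = PLSRegression(n_components=5).fit(spectra, concentration)   # stays unchanged

# A new measurement condition introduces an additive offset in the spectra.
offset = 0.3 * np.sin(np.linspace(0, 3, n_wl))
recal_X = spectra[:20] + offset
recal_y = concentration[:20]

# Fit a small correction model on the residuals of the unchanged base model.
residual = recal_y - base.predict(recal_X).ravel()
correction = PLSRegression(n_components=2).fit(recal_X, residual)

def predict_adjusted(X_new):
    return base.predict(X_new).ravel() + correction.predict(X_new).ravel()

test_X = spectra[40:] + offset
rmse = np.sqrt(np.mean((predict_adjusted(test_X) - concentration[40:]) ** 2))
print("RMSE under the new condition with additive correction:", rmse)
```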

  20. Design and optimization of a modal-independent linear ultrasonic motor.

    PubMed

    Zhou, Shengli; Yao, Zhiyuan

    2014-03-01

    To simplify the design of the linear ultrasonic motor (LUSM) and improve its output performance, a method of modal decoupling for LUSMs is proposed in this paper. The specific embodiment of this method is the decoupling of the traditional LUSM stator's complex vibration into two simple vibrations, with each vibration implemented by one vibrator. Because the two vibrators are designed independently, their frequencies can be tuned independently and frequency consistency is easy to achieve. Thus, the method can simplify the design of the LUSM. Based on this method, a prototype modal-independent LUSM is designed and fabricated. The motor reaches its maximum thrust force of 47 N, maximum unloaded speed of 0.43 m/s, and maximum power of 7.85 W at an applied voltage of 200 Vpp. The motor's structure is then optimized by controlling the difference between the two vibrators' resonance frequencies to reach larger output speed, thrust, and power. The optimized results show that when the frequency difference is 73 Hz, the output force, speed, and power reach their maximum values. At an input voltage of 200 Vpp, the motor reaches its maximum thrust force of 64.2 N, maximum unloaded speed of 0.76 m/s, maximum power of 17.4 W, maximum thrust-weight ratio of 23.7, and maximum efficiency of 39.6%.

  1. Calculation of electronic coupling matrix elements for ground and excited state electron transfer reactions: Comparison of the generalized Mulliken{endash}Hush and block diagonalization methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cave, R.J.; Newton, M.D.

    1997-06-01

    Two independent methods are presented for the nonperturbative calculation of the electronic coupling matrix element (H_ab) for electron transfer reactions using ab initio electronic structure theory. The first is based on the generalized Mulliken–Hush (GMH) model, a multistate generalization of the Mulliken–Hush formalism for the electronic coupling. The second is based on the block diagonalization (BD) approach of Cederbaum, Domcke, and co-workers. Detailed quantitative comparisons of the two methods are carried out based on results for (a) several states of the system Zn₂OH₂⁺ and (b) the low-lying states of the benzene–Cl atom complex and its contact ion pair. Generally good agreement between the two methods is obtained over a range of geometries. Either method can be applied at an arbitrary nuclear geometry and, as a result, may be used to test the validity of the Condon approximation. Examples of nonmonotonic behavior of the electronic coupling as a function of nuclear coordinates are observed for Zn₂OH₂⁺. Both methods also yield a natural definition of the effective distance (r_DA) between donor (D) and acceptor (A) sites, in contrast to earlier approaches which required independent estimates of r_DA, generally based on molecular structure data. © 1997 American Institute of Physics.

  2. Culture-independent discovery of natural products from soil metagenomes.

    PubMed

    Katz, Micah; Hover, Bradley M; Brady, Sean F

    2016-03-01

    Bacterial natural products have proven to be invaluable starting points in the development of many currently used therapeutic agents. Unfortunately, traditional culture-based methods for natural product discovery have been deemphasized by pharmaceutical companies due in large part to high rediscovery rates. Culture-independent, or "metagenomic," methods, which rely on the heterologous expression of DNA extracted directly from environmental samples (eDNA), have the potential to provide access to metabolites encoded by a large fraction of the earth's microbial biosynthetic diversity. As soil is both ubiquitous and rich in bacterial diversity, it is an appealing starting point for culture-independent natural product discovery efforts. This review provides an overview of the history of soil metagenome-driven natural product discovery studies and elaborates on the recent development of new tools for sequence-based, high-throughput profiling of environmental samples used in discovering novel natural product biosynthetic gene clusters. We conclude with several examples of these new tools being employed to facilitate the recovery of novel secondary metabolite encoding gene clusters from soil metagenomes and the subsequent heterologous expression of these clusters to produce bioactive small molecules.

  3. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  4. A New Method for Reconstructing Sea-Level and Deep-Sea-Temperature Variability over the Past 5.3 Million Years

    NASA Astrophysics Data System (ADS)

    Rohling, E. J.

    2014-12-01

    Ice volume (and hence sea level) and deep-sea temperature are key measures of global climate change. Sea level has been documented using several independent methods over the past 0.5 million years (Myr). Older periods, however, lack such independent validation; all existing records are related to deep-sea oxygen isotope (δ18O) data that are influenced by processes unrelated to sea level. For deep-sea temperature, only one continuous high-resolution (Mg/Ca-based) record exists, with related sea-level estimates, spanning the past 1.5 Myr. We have recently presented a novel sea-level reconstruction, with associated estimates of deep-sea temperature, which independently validates the previous 0-1.5 Myr reconstruction and extends it back to 5.3 Myr ago. A series of caveats applies to this new method, especially in older times of its application, as is always the case with new methods. Independent validation exercises are needed to elucidate where consistency exists, and where solutions drift away from each other. A key observation from our new method is that a large temporal offset existed during the onset of Plio-Pleistocene ice ages, between a marked cooling step at 2.73 Myr ago and the first major glaciation at 2.15 Myr ago. This observation relies on relative changes within the dataset, which are more robust than absolute values. I will discuss our method and its main caveats and avenues for improvement.

  5. Evaluation of a Moderate Resolution, Satellite-Based Impervious Surface Map Using an Independent, High-Resolution Validation Dataset

    EPA Science Inventory

    Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data ...

  6. Changing Multiple Adolescent Health Behaviors through School-Based Interventions: A Review of the Literature

    ERIC Educational Resources Information Center

    Busch, Vincent; de Leeuw, Johannes Rob Josephus; de Harder, Alinda; Schrijvers, Augustinus Jacobus Petrus

    2013-01-01

    Background: In approaches to health promotion in adolescents, unhealthy behaviors are no longer regarded as independent processes, but as interrelated. This article presents a systematic literature review of school-based interventions targeting multiple adolescent behaviors simultaneously. Methods: A systematic literature search was performed…

  7. QUO VADIS SOURCE TRACKING? TOWARDS A STRATEGIC FRAMEWORK FOR ENVIRONMENTAL MONITORING OF FECAL POLLUTION

    EPA Science Inventory

    Advances in microbial source tracking (MST) have largely been driven by the need to comply with water quality standards based on traditional indicator bacteria. Recently, a number of PCR-based, culture- and library-independent methods have been gaining popularity among source tra...

  8. Conceptual Change through Changing the Process of Comparison

    ERIC Educational Resources Information Center

    Wasmann-Frahm, Astrid

    2009-01-01

    Classification can serve as a tool for conceptualising ideas about vertebrates. Training enhances classification skills as well as sharpening concepts. The method described in this paper is based on the "hybrid-model" of comparison that proposes two independently working processes: associative and theory-based. The two interact during a…

  9. The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Xu, X.; Tong, S.; Wang, L.

    2017-12-01

    Multiple suppression is a difficult problem in seismic data processing. Traditional multiple-attenuation techniques are based on minimizing the output energy of the seismic signal; this criterion relies on second-order statistics and cannot attenuate multiples when the primaries and multiples are non-orthogonal. In order to solve this problem, we combine a feedback iteration method based on the wave equation with an improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, to match the predicted multiples to the true multiples in amplitude and phase, we design an expanded pseudo-multichannel matching filter to obtain a more accurate matching result. Finally, we apply an improved FastICA algorithm, based on maximizing the non-Gaussianity of the output signals, to the matched multiples and obtain a better separation of the primaries and the multiples. The advantage of our method is that no a priori information is needed for the multiple prediction, and a better separation result can be achieved. The method has been applied to several synthetic data sets generated by the finite-difference modelling technique and to the Sigsbee2B model multiple data, in which the primaries and multiples are non-orthogonal. The experiments show that after three to four iterations we obtain accurate multiple predictions. Using our matching method and FastICA adaptive multiple subtraction, we can not only effectively preserve the useful signal energy in the seismic records but also effectively suppress the free-surface multiples, especially the multiples related to the middle and deep sections.
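
    As a hedged, single-channel illustration of the matching-and-subtraction idea (the paper uses an expanded pseudo-multichannel filter and ICA-based subtraction), the sketch below shapes a predicted multiple trace to a recorded trace with a short least-squares filter and subtracts it; the traces are synthetic.

```python
# Least-squares matching filter for adaptive multiple subtraction (1 trace).
import numpy as np
from scipy.linalg import toeplitz

n = 400
primary = np.zeros(n)
primary[[50, 180, 300]] = [1.0, -0.7, 0.5]
true_multiple = np.zeros(n)
true_multiple[[120, 240]] = [0.6, -0.4]
data = primary + true_multiple
predicted = np.roll(true_multiple, -3) * 0.8       # prediction has wrong amplitude/time shift

# Convolution matrix for a short matching filter, solved by least squares.
flen = 11
pad = np.concatenate([predicted, np.zeros(flen - 1)])
M = toeplitz(pad, np.zeros(flen))[:n, :]           # columns are delayed copies of the prediction
f, *_ = np.linalg.lstsq(M, data, rcond=None)
matched = M @ f

primaries_est = data - matched                     # estimate of the primaries
print("residual multiple energy:", np.sum((primaries_est - primary) ** 2))
```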

  10. Improved application of independent component analysis to functional magnetic resonance imaging study via linear projection techniques.

    PubMed

    Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li

    2009-02-01

    Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The well-accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data containing interdependent sources and confounding factors. This interdependency can arise, for instance, from fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer-generated data and real resting-state fMRI data. Both simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating activation induced by each task as well as by both tasks.

  11. System-independent characterization of materials using dual-energy computed tomography

    DOE PAGES

    Azevedo, Stephen G.; Martz, Jr., Harry E.; Aufderheide, III, Maurice B.; ...

    2016-02-01

    In this study, we present a new decomposition approach for dual-energy computed tomography (DECT) called SIRZ that provides precise and accurate material description, independent of the scanner, over diagnostic energy ranges (30 to 200 keV). System independence is achieved by explicitly including a scanner-specific spectral description in the decomposition method, and a new X-ray-relevant feature space. The feature space consists of electron density, ρ_e, and a new effective atomic number, Z_e, which is based on published X-ray cross sections. Reference materials are used in conjunction with the system spectral response so that additional beam-hardening correction is not necessary. The technique is tested against other methods on DECT data of known specimens scanned by diverse spectra and systems. Uncertainties in accuracy and precision are less than 3% and 2% respectively for the (ρ_e, Z_e) results compared to prior methods that are inaccurate and imprecise (over 9%).

  12. The Removal of EOG Artifacts From EEG Signals Using Independent Component Analysis and Multivariate Empirical Mode Decomposition.

    PubMed

    Wang, Gang; Teng, Chaolin; Li, Kuo; Zhang, Zhonglin; Yan, Xiangguo

    2016-09-01

    The recorded electroencephalography (EEG) signals are usually contaminated by electrooculography (EOG) artifacts. In this paper, by using independent component analysis (ICA) and multivariate empirical mode decomposition (MEMD), the ICA-based MEMD method was proposed to remove EOG artifacts (EOAs) from multichannel EEG signals. First, the EEG signals were decomposed by the MEMD into multiple multivariate intrinsic mode functions (MIMFs). The EOG-related components were then extracted by reconstructing the MIMFs corresponding to EOAs. After performing the ICA of EOG-related signals, the EOG-linked independent components were distinguished and rejected. Finally, the clean EEG signals were reconstructed by implementing the inverse transform of ICA and MEMD. The results of simulated and real data suggested that the proposed method could successfully eliminate EOAs from EEG signals and preserve useful EEG information with little loss. Compared with other existing techniques, the proposed method achieved a marked improvement in terms of increased signal-to-noise ratio and decreased mean square error after removing EOAs.

  13. Frequency-domain-independent vector analysis for mode-division multiplexed transmission

    NASA Astrophysics Data System (ADS)

    Liu, Yunhe; Hu, Guijun; Li, Jiao

    2018-04-01

    In this paper, we propose a demultiplexing method based on the frequency-domain independent vector analysis (FD-IVA) algorithm for mode-division multiplexing (MDM) systems. FD-IVA extends frequency-domain independent component analysis (FD-ICA) from univariate to multivariate variables and provides an efficient way to eliminate the permutation ambiguity. In order to verify the performance of the FD-IVA algorithm, a 6×6 MDM system is simulated. The simulation results show that the FD-IVA algorithm has essentially the same bit-error-rate (BER) performance as the FD-ICA algorithm and the frequency-domain least mean squares (FD-LMS) algorithm. Meanwhile, the convergence speed of the FD-IVA algorithm is the same as that of FD-ICA. However, compared with the FD-ICA and the FD-LMS, the FD-IVA has a markedly lower computational complexity.

  14. Virtual shelves in a digital library: a framework for access to networked information sources.

    PubMed

    Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E

    1995-01-01

    Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. This framework uses the metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers are assigned to information sources. Call numbers are based on standard vocabulary codes. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. The framework has been implemented in two different systems. One system is based on the Open System Foundation/Distributed Computing Environment and the other is based on the World Wide Web. This framework applies traditional methods of library classification and cataloging in new ways. It is compatible with two traditional styles of selecting information: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library and information science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources.
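
    The two-stage mapping can be illustrated with a toy lookup (all codes, shelf identifiers and URLs below are invented):

```python
# Two-stage resolution: call number -> virtual shelf identifier -> network location.
CALL_NUMBER_TO_SHELF = {
    "MeSH:D003920": "shelf:endocrine-disorders",
    "MeSH:D006331": "shelf:cardiology",
}

LOCATION_DIRECTORY = {                              # updated when servers move
    "shelf:endocrine-disorders": "https://shelf1.example.org/endocrine/",
    "shelf:cardiology": "https://shelf7.example.org/cardio/",
}

def resolve(call_number: str) -> str:
    """Resolve a location-independent call number to an actual network location."""
    shelf_id = CALL_NUMBER_TO_SHELF[call_number]    # first mapping (classification)
    return LOCATION_DIRECTORY[shelf_id]             # second mapping (directory lookup)

print(resolve("MeSH:D003920"))
```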

  15. Huygens Titan Probe Trajectory Reconstruction Using Traditional Methods and the Program to Optimize Simulated Trajectories II

    NASA Technical Reports Server (NTRS)

    Striepe, Scott A.; Blanchard, Robert C.; Kirsch, Michael F.; Fowler, Wallace T.

    2007-01-01

    On January 14, 2005, ESA's Huygens probe separated from NASA's Cassini spacecraft, entered the Titan atmosphere and landed on its surface. As part of NASA Engineering Safety Center Independent Technical Assessment of the Huygens entry, descent, and landing, and an agreement with ESA, NASA provided results of all EDL analyses and associated findings to the Huygens project team prior to probe entry. In return, NASA was provided the flight data from the probe so that trajectory reconstruction could be done and simulation models assessed. Trajectory reconstruction of the Huygens entry probe at Titan was accomplished using two independent approaches: a traditional method and a POST2-based method. Results from both approaches are discussed in this paper.

  16. Testing the statistical compatibility of independent data sets

    NASA Astrophysics Data System (ADS)

    Maltoni, M.; Schwetz, T.

    2003-08-01

    We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ2 minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistics is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed.
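
    A hedged numerical illustration of the idea, for the simplest case of two data sets sharing a single fitted parameter (a common mean with unit variances): the statistic is the joint-fit chi-square minus the sum of the individual minima, compared against a chi-square distribution with one degree of freedom.

```python
# Compatibility test for two independent data sets sharing one fitted parameter.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
set_a = rng.normal(0.0, 1.0, 50)       # data set A (true mean 0)
set_b = rng.normal(0.5, 1.0, 50)       # data set B (slightly discrepant mean)

def chi2_of_mean(data, mu):
    return np.sum((data - mu) ** 2)    # unit variances for simplicity

# Individual minima (each set fitted on its own).
chi2_a_min = chi2_of_mean(set_a, set_a.mean())
chi2_b_min = chi2_of_mean(set_b, set_b.mean())

# Joint minimum (one common mean fitted to both sets).
mu_joint = np.concatenate([set_a, set_b]).mean()
chi2_joint_min = chi2_of_mean(set_a, mu_joint) + chi2_of_mean(set_b, mu_joint)

# Difference statistic; degrees of freedom = (1 + 1) parameters - 1 shared = 1.
pg_statistic = chi2_joint_min - (chi2_a_min + chi2_b_min)
p_value = chi2.sf(pg_statistic, df=1)
print("compatibility statistic:", pg_statistic, "p-value:", p_value)
```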

  17. Global root zone storage capacity from satellite-based evaporation

    NASA Astrophysics Data System (ADS)

    Wang-Erlandsson, Lan; Bastiaanssen, Wim G. M.; Gao, Hongkai; Jägermeyr, Jonas; Senay, Gabriel B.; van Dijk, Albert I. J. M.; Guerschman, Juan P.; Keys, Patrick W.; Gordon, Line J.; Savenije, Hubert H. G.

    2016-04-01

    This study presents an "Earth observation-based" method for estimating root zone storage capacity - a critical, yet uncertain parameter in hydrological and land surface modelling. By assuming that vegetation optimises its root zone storage capacity to bridge critical dry periods, we were able to use state-of-the-art satellite-based evaporation data computed with independent energy balance equations to derive gridded root zone storage capacity at global scale. This approach does not require soil or vegetation information, is model independent, and is in principle scale independent. In contrast to a traditional look-up table approach, our method captures the variability in root zone storage capacity within land cover types, including in rainforests where direct measurements of root depths otherwise are scarce. Implementing the estimated root zone storage capacity in the global hydrological model STEAM (Simple Terrestrial Evaporation to Atmosphere Model) improved evaporation simulation overall, and in particular during the least evaporating months in sub-humid to humid regions with moderate to high seasonality. Our results suggest that several forest types are able to create a large storage to buffer for severe droughts (with a very long return period), in contrast to, for example, savannahs and woody savannahs (medium length return period), as well as grasslands, shrublands, and croplands (very short return period). The presented method to estimate root zone storage capacity eliminates the need for poor resolution soil and rooting depth data that form a limitation for achieving progress in the global land surface modelling community.
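
    The mass-balance idea can be sketched as follows (synthetic daily series stand in for the satellite-based evaporation and precipitation data; the storage capacity is taken as the largest cumulative dry-period deficit the vegetation must bridge):

```python
# Running water deficit between evaporation and precipitation; the maximum
# deficit is used as an estimate of the root zone storage capacity.
import numpy as np

rng = np.random.default_rng(0)
days = 3 * 365
evaporation = np.full(days, 3.0)                             # mm/day, roughly constant demand
precipitation = rng.gamma(shape=0.3, scale=12.0, size=days)  # mm/day, intermittent rain
for start in (150, 515, 880):                                # annual dry seasons (synthetic)
    precipitation[start:start + 110] *= 0.1

deficit = 0.0
max_deficit = 0.0
for e, p in zip(evaporation, precipitation):
    deficit = max(0.0, deficit + e - p)      # running storage deficit (mm)
    max_deficit = max(max_deficit, deficit)

print("estimated root zone storage capacity: %.0f mm" % max_deficit)
```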

  18. A Framework for the Flexible Content Packaging of Learning Objects and Learning Designs

    ERIC Educational Resources Information Center

    Lukasiak, Jason; Agostinho, Shirley; Burnett, Ian; Drury, Gerrard; Goodes, Jason; Bennett, Sue; Lockyer, Lori; Harper, Barry

    2004-01-01

    This paper presents a platform-independent method for packaging learning objects and learning designs. The method, entitled a Smart Learning Design Framework, is based on the MPEG-21 standard, and uses IEEE Learning Object Metadata (LOM) to provide bibliographic, technical, and pedagogical descriptors for the retrieval and description of learning…

  19. A cell cycle-independent, conditional gene inactivation strategy for differentially tagging wild-type and mutant cells.

    PubMed

    Nagarkar-Jaiswal, Sonal; Manivannan, Sathiya N; Zuo, Zhongyuan; Bellen, Hugo J

    2017-05-31

    Here, we describe a novel method based on intronic MiMIC insertions described in Nagarkar-Jaiswal et al. (2015) to perform conditional gene inactivation in Drosophila. Mosaic analysis in Drosophila cannot be easily performed in post-mitotic cells. We therefore developed Flip-Flop, a flippase-dependent in vivo cassette-inversion method that marks wild-type cells with the endogenous EGFP-tagged protein, whereas mutant cells are marked with mCherry upon inversion. We document the ease and usefulness of this strategy in differential tagging of wild-type and mutant cells in mosaics. We use this approach to phenotypically characterize the loss of SNF4Aγ, encoding the γ subunit of the AMP Kinase complex. The Flip-Flop method is efficient and reliable, and permits conditional gene inactivation based on both spatial and temporal cues, in a cell cycle- and developmental stage-independent fashion, creating a platform for systematic screens of gene function in developing and adult flies with unprecedented detail.

  20. Evaluating the predictive performance of empirical estimators of natural mortality rate using information on over 200 fish species

    USGS Publications Warehouse

    Then, Amy Y.; Hoenig, John M; Hall, Norman G.; Hewitt, David A.

    2015-01-01

    Many methods have been developed in the last 70 years to predict the natural mortality rate, M, of a stock based on empirical evidence from comparative life history studies. These indirect or empirical methods are used in most stock assessments to (i) obtain estimates of M in the absence of direct information, (ii) check on the reasonableness of a direct estimate of M, (iii) examine the range of plausible M estimates for the stock under consideration, and (iv) define prior distributions for Bayesian analyses. The two most cited empirical methods have appeared in the literature over 2500 times to date. Despite the importance of these methods, there is no consensus in the literature on how well these methods work in terms of prediction error or how their performance may be ranked. We evaluate estimators based on various combinations of maximum age (tmax), growth parameters, and water temperature by seeing how well they reproduce >200 independent, direct estimates of M. We use tenfold cross-validation to estimate the prediction error of the estimators and to rank their performance. With updated and carefully reviewed data, we conclude that a tmax-based estimator performs the best among all estimators evaluated. The tmax-based estimators in turn perform better than the Alverson–Carney method based on tmax and the von Bertalanffy K coefficient, Pauly’s method based on growth parameters and water temperature and methods based just on K. It is possible to combine two independent methods by computing a weighted mean but the improvement over the tmax-based methods is slight. Based on cross-validation prediction error, model residual patterns, model parsimony, and biological considerations, we recommend the use of a tmax-based estimator (M = 4.899 tmax^-0.916, prediction error = 0.32) when possible and a growth-based method (M = 4.118 K^0.73 L∞^-0.33, prediction error = 0.6, length in cm) otherwise.
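
    The two recommended estimators, as quoted above, can be written directly as functions (units as conventionally used: tmax in years, K per year, L∞ in cm):

```python
# Empirical natural-mortality estimators quoted in the abstract.
def m_from_tmax(tmax_years: float) -> float:
    """Natural mortality from maximum age: M = 4.899 * tmax^-0.916."""
    return 4.899 * tmax_years ** -0.916

def m_from_growth(k: float, l_inf_cm: float) -> float:
    """Natural mortality from growth parameters: M = 4.118 * K^0.73 * Linf^-0.33."""
    return 4.118 * k ** 0.73 * l_inf_cm ** -0.33

# Example: a fish with tmax = 20 years, K = 0.2 per year, L-infinity = 60 cm.
print(m_from_tmax(20.0))        # roughly 0.32 per year
print(m_from_growth(0.2, 60.0)) # roughly 0.33 per year
```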

  1. Eigencentrality based on dissimilarity measures reveals central nodes in complex networks

    PubMed Central

    Alvarez-Socorro, A. J.; Herrera-Almarza, G. C.; González-Díaz, L. A.

    2015-01-01

    One of the most important problems in complex network theory is locating the entities that are essential or play a main role within the network. For this purpose, the use of dissimilarity measures (specific to the theory of classification and data mining) to enrich the centrality measures in complex networks is proposed. The centrality method used is eigencentrality, which is based on the heuristic that the centrality of a node depends on how central the nodes in its immediate neighbourhood are (a rich-get-richer phenomenon). This can be described as an eigenvalue problem; however, information about the neighbourhood and the connections between neighbours is not taken into account, neglecting their relevance when one evaluates the centrality/importance/influence of a node. The contribution calculated by the dissimilarity measure is parameter independent, making the proposed method parameter independent as well. Finally, we perform a comparative study of our method versus other methods reported in the literature, obtaining more accurate and computationally less expensive results in most cases. PMID:26603652

  2. Surface entropy of liquids via a direct Monte Carlo approach - Application to liquid Si

    NASA Technical Reports Server (NTRS)

    Wang, Z. Q.; Stroud, D.

    1990-01-01

    Two methods are presented for a direct Monte Carlo evaluation of the surface entropy S(s) of a liquid interacting by specified, volume-independent potentials. The first method is based on an application of the approach of Ferrenberg and Swendsen (1988, 1989) to Monte Carlo simulations at two different temperatures; it gives much more reliable results for S(s) in liquid Si than previous calculations based on numerical differentiation. The second method expresses the surface entropy directly as a canonical average at fixed temperature.

  3. An Independent Asteroseismic Analysis of the Fundamental Parameters and Internal Structure of the Solar-like Oscillator KIC 6225718

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Li, Yan

    2017-09-01

    Asteroseismology is a useful tool that is usually used to probe stellar interiors and to determine stellar fundamental parameters, such as stellar mass, radius, and surface gravity. In order to probe stellar interiors, comparisons between observations and models are usually made with the χ²-minimization method. The work of Wu & Li reported that the best parameter determined by the χ²-matching process is the acoustic radius for pure p-mode oscillations. In the present work, based on the theoretical calculations of Wu & Li, we independently analyze the seismic observations of KIC 6225718 to determine its fundamental parameters and to investigate its interior properties. First, in order to test the method, we use it on the Sun to determine its fundamental parameters and to investigate its interior. Second, we independently determine the fundamental parameters of KIC 6225718 without any other non-seismic constraint. Therefore, those determined fundamental parameters are independent of those determined by other methods. They can be regarded as independent references in other analyses. Finally, we analyze the stellar internal structure and find that KIC 6225718 has a convective core with a size of 0.078-0.092 R⊙. Its overshooting parameter f_ov in the core is around 0.010. In addition, its center hydrogen X_c is about 0.264-0.355.

  4. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    PubMed

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

    Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity, and accuracy, as well as the F1 measure, were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. An average F1 score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals was observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for evaluators who need to test their algorithms with realistic data and share reproducible results.

  5. Influence maximization in social networks under an independent cascade-based model

    NASA Astrophysics Data System (ADS)

    Wang, Qiyao; Jin, Yuehui; Lin, Zhen; Cheng, Shiduan; Yang, Tan

    2016-02-01

    The rapid growth of online social networks is important for viral marketing. Influence maximization refers to the process of finding influential users who maximize the spread of information or product adoption. An independent cascade-based model for influence maximization, called IMIC-OC, was proposed to calculate positive influence. We assumed that influential users spread positive opinions. At the beginning, users held positive or negative opinions as their initial opinions. When more users became involved in the discussions, users balanced their own opinions against those of their neighbors. The number of users who did not change positive opinions was used to determine positive influence. Corresponding influential users who had maximum positive influence were then obtained. Experiments were conducted on three real networks, namely, Facebook, HEP-PH and Epinions, to calculate maximum positive influence based on the IMIC-OC model and two other baseline methods. The proposed model resulted in larger positive influence, thus indicating better performance compared with the baseline methods.
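
    The underlying independent cascade mechanism and greedy seed selection can be sketched compactly (the IMIC-OC model additionally tracks positive and negative opinions, which this illustration omits; the graph, activation probability and run counts are arbitrary):

```python
# Monte Carlo influence estimation under the independent cascade (IC) model,
# plus greedy seed selection.
import random

def simulate_ic(graph, seeds, p=0.1):
    """One IC cascade: each newly activated node gets one chance per neighbour."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def expected_spread(graph, seeds, p=0.1, runs=200):
    return sum(simulate_ic(graph, seeds, p) for _ in range(runs)) / runs

def greedy_seeds(graph, k, p=0.1):
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: expected_spread(graph, seeds + [n], p))
        seeds.append(best)
    return seeds

# Tiny example graph as an adjacency list (directed edges).
g = {0: [1, 2], 1: [2, 3], 2: [3, 4], 3: [5], 4: [5], 5: []}
random.seed(0)
print(greedy_seeds(g, k=2))
```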

  6. Predictors of Cerebral Palsy in Very Preterm Infants: The EPIPAGE Prospective Population-Based Cohort Study

    ERIC Educational Resources Information Center

    Beaino, Ghada; Khoshnood, Babak; Kaminski, Monique; Pierrat, Veronique; Marret, Stephane; Matis, Jacqueline; Ledesert, Bernard; Thiriez, Gerard; Fresson, Jeanne; Roze, Jean-Christophe; Zupan-Simunek, Veronique; Arnaud, Catherine; Burguet, Antoine; Larroque, Beatrice; Breart, Gerard; Ancel, Pierre-Yves

    2010-01-01

    Aim: The aim of this study was to assess the independent role of cerebral lesions on ultrasound scan, and several other neonatal and obstetric factors, as potential predictors of cerebral palsy (CP) in a large population-based cohort of very preterm infants. Method: As part of EPIPAGE, a population-based prospective cohort study, perinatal data…

  7. Fast wavelet based algorithms for linear evolution equations

    NASA Technical Reports Server (NTRS)

    Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

    1992-01-01

    A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon–Zygmund-type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when applied to hyperbolic equations in one space dimension and parabolic equations in multidimensions.

  8. A simple linear model for estimating ozone AOT40 at forest sites from raw passive sampling data.

    PubMed

    Ferretti, Marco; Cristofolini, Fabiana; Cristofori, Antonella; Gerosa, Giacomo; Gottardini, Elena

    2012-08-01

    A rapid, empirical method is described for estimating weekly AOT40 from ozone concentrations measured with passive samplers at forest sites. The method is based on linear regression and was developed after three years of measurements in Trentino (northern Italy). It was tested against an independent set of data from passive sampler sites across Italy. It provides good weekly estimates compared with those measured by conventional monitors (0.85 ≤ R² ≤ 0.970; 97 ≤ RMSE ≤ 302). Estimates obtained using passive sampling at forest sites are comparable to those obtained by another estimation method based on modelling hourly concentrations (R² = 0.94; 131 ≤ RMSE ≤ 351). Regression coefficients of passive sampling are similar to those obtained with conventional monitors at forest sites. Testing against an independent dataset generated by passive sampling provided similar results (0.86 ≤ R² ≤ 0.99; 65 ≤ RMSE ≤ 478). Errors tend to accumulate when weekly AOT40 estimates are summed to obtain the total AOT40 over the May-July period, and the median deviation between the two estimation methods based on passive sampling is 11%. The method proposed does not require any assumptions, complex calculation or modelling technique, and can be useful when other estimation methods are not feasible, either in principle or in practice. However, the method is not useful when estimates of hourly concentrations are of interest.
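
    The kind of simple linear model described can be sketched as follows; the calibration numbers are synthetic, not the Trentino data:

```python
# Weekly AOT40 regressed on weekly mean ozone from co-located passive samplers,
# then applied at passive-sampler-only forest sites.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
weekly_mean_ppb = rng.uniform(20, 70, 40)                       # passive sampler weekly means
weekly_aot40 = 55.0 * np.clip(weekly_mean_ppb - 35, 0, None) + rng.normal(0, 150, 40)

model = LinearRegression().fit(weekly_mean_ppb.reshape(-1, 1), weekly_aot40)
print("slope, intercept:", model.coef_[0], model.intercept_)

new_site_mean = np.array([[48.0]])                              # forest site with passive sampling only
print("estimated weekly AOT40 (ppb h):", model.predict(new_site_mean)[0])
```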

  9. Automatic Generation of Boundary Conditions Using Demons Nonrigid Image Registration for Use in 3-D Modality-Independent Elastography

    PubMed Central

    Ou, Jao J.; Ong, Rowena E.; Miga, Michael I.

    2013-01-01

    Modality-independent elastography (MIE) is a method of elastography that reconstructs the elastic properties of tissue using images acquired under different loading conditions and a biomechanical model. Boundary conditions are a critical input to the algorithm and are often determined by time-consuming point correspondence methods requiring manual user input. This study presents a novel method of automatically generating boundary conditions by nonrigidly registering two image sets with a demons diffusion-based registration algorithm. The use of this method was successfully performed in silico using magnetic resonance and X-ray-computed tomography image data with known boundary conditions. These preliminary results produced boundary conditions with an accuracy of up to 80% compared to the known conditions. Demons-based boundary conditions were utilized within a 3-D MIE reconstruction to determine an elasticity contrast ratio between tumor and normal tissue. Two phantom experiments were then conducted to further test the accuracy of the demons boundary conditions and the MIE reconstruction arising from the use of these conditions. Preliminary results show a reasonable characterization of the material properties on this first attempt and a significant improvement in the automation level and viability of the method. PMID:21690002

  10. Automatic generation of boundary conditions using demons nonrigid image registration for use in 3-D modality-independent elastography.

    PubMed

    Pheiffer, Thomas S; Ou, Jao J; Ong, Rowena E; Miga, Michael I

    2011-09-01

    Modality-independent elastography (MIE) is a method of elastography that reconstructs the elastic properties of tissue using images acquired under different loading conditions and a biomechanical model. Boundary conditions are a critical input to the algorithm and are often determined by time-consuming point correspondence methods requiring manual user input. This study presents a novel method of automatically generating boundary conditions by nonrigidly registering two image sets with a demons diffusion-based registration algorithm. The use of this method was successfully performed in silico using magnetic resonance and X-ray-computed tomography image data with known boundary conditions. These preliminary results produced boundary conditions with an accuracy of up to 80% compared to the known conditions. Demons-based boundary conditions were utilized within a 3-D MIE reconstruction to determine an elasticity contrast ratio between tumor and normal tissue. Two phantom experiments were then conducted to further test the accuracy of the demons boundary conditions and the MIE reconstruction arising from the use of these conditions. Preliminary results show a reasonable characterization of the material properties on this first attempt and a significant improvement in the automation level and viability of the method.
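
    A hedged sketch of the core registration step using SimpleITK is shown below: a demons registration between the two image states produces a displacement field that can be sampled at boundary nodes. The file names, node coordinate and filter settings are placeholders, and this is not the authors' code.

```python
# Demons nonrigid registration; the displacement field is sampled at a
# boundary node to supply a displacement boundary condition.
import SimpleITK as sitk

fixed = sitk.Cast(sitk.ReadImage("undeformed_volume.nii"), sitk.sitkFloat32)
moving = sitk.Cast(sitk.ReadImage("deformed_volume.nii"), sitk.sitkFloat32)

demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(1.5)            # Gaussian smoothing of the field (voxels)
displacement_field = demons.Execute(fixed, moving)

transform = sitk.DisplacementFieldTransform(displacement_field)

# Sample the displacement at a (hypothetical) boundary node in physical coordinates.
node = (12.3, 45.6, 7.8)
displaced = transform.TransformPoint(node)
boundary_condition = tuple(d - n for d, n in zip(displaced, node))
print("prescribed displacement at node:", boundary_condition)
```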

  11. A Mobile Outdoor Augmented Reality Method Combining Deep Learning Object Detection and Spatial Relationships for Geovisualization

    PubMed Central

    Rao, Jinmeng; Qiao, Yanjun; Ren, Fu; Wang, Junxing; Du, Qingyun

    2017-01-01

    The purpose of this study was to develop a robust, fast and markerless mobile augmented reality method for registration, geovisualization and interaction in uncontrolled outdoor environments. We propose a lightweight deep-learning-based object detection approach for mobile or embedded devices; the vision-based detection results of this approach are combined with spatial relationships by means of the host device’s built-in Global Positioning System receiver, Inertial Measurement Unit and magnetometer. Virtual objects generated based on geospatial information are precisely registered in the real world, and an interaction method based on touch gestures is implemented. The entire method is independent of the network to ensure robustness to poor signal conditions. A prototype system was developed and tested on the Wuhan University campus to evaluate the method and validate its results. The findings demonstrate that our method achieves a high detection accuracy, stable geovisualization results and interaction. PMID:28837096

  12. A Mobile Outdoor Augmented Reality Method Combining Deep Learning Object Detection and Spatial Relationships for Geovisualization.

    PubMed

    Rao, Jinmeng; Qiao, Yanjun; Ren, Fu; Wang, Junxing; Du, Qingyun

    2017-08-24

    The purpose of this study was to develop a robust, fast and markerless mobile augmented reality method for registration, geovisualization and interaction in uncontrolled outdoor environments. We propose a lightweight deep-learning-based object detection approach for mobile or embedded devices; the vision-based detection results of this approach are combined with spatial relationships by means of the host device's built-in Global Positioning System receiver, Inertial Measurement Unit and magnetometer. Virtual objects generated based on geospatial information are precisely registered in the real world, and an interaction method based on touch gestures is implemented. The entire method is independent of the network to ensure robustness to poor signal conditions. A prototype system was developed and tested on the Wuhan University campus to evaluate the method and validate its results. The findings demonstrate that our method achieves a high detection accuracy, stable geovisualization results and interaction.

  13. The Problem of Constructive Misalignment in International Business Education: A Three-Stage Integrated Approach to Enhancing Teaching and Learning

    ERIC Educational Resources Information Center

    Zhao, Shasha

    2016-01-01

    Past evidence suggests that constructive misalignment is particularly problematic in International Business (IB) education, though this paradigm has received limited research attention. Building on the literature of three independent teaching methods (threshold concept, problem-based learning, and technology-based learning), this study contributes…

  14. Education Quality in Kazakhstan in the Context of Competence-Based Approach

    ERIC Educational Resources Information Center

    Nabi, Yskak; Zhaxylykova, Nuriya Ermuhametovna; Kenbaeva, Gulmira Kaparbaevna; Tolbayev, Abdikerim; Bekbaeva, Zeinep Nusipovna

    2016-01-01

    The background of this paper is to present how the education system of Kazakhstan has evolved during the last 24 years of independence, highlighting the contemporary transformational processes. Our aim was to assess education quality in the context of a competence-based approach. Methods: Analysis of references, interviewing, experimental work.…

  15. Two-port connecting-layer-based sandwiched grating by a polarization-independent design.

    PubMed

    Li, Hongtao; Wang, Bo

    2017-05-02

    In this paper, a two-port connecting-layer-based sandwiched beam splitter grating with polarization-independent behavior is designed and reported. The grating separates the transmitted polarized light into two diffraction orders of equal energy, realizing a nearly 50/50 output with good uniformity. For the given wavelength of 800 nm and period of 780 nm, a simplified modal method yields an optimal duty cycle and an estimate of the grating depth. To obtain precise grating parameters, rigorous coupled-wave analysis is then employed to refine the grating depth and the thickness of the connecting layer. Based on the optimized design, a high-efficiency two-port output grating with wideband performance is obtained. Importantly, the diffraction efficiencies calculated with the two analytical methods agree well with each other. The grating is therefore significant as a practical photonic element in engineering.

  16. Effectiveness of problem-based learning in Chinese pharmacy education: a meta-analysis.

    PubMed

    Zhou, Jiyin; Zhou, Shiwen; Huang, Chunji; Xu, Rufu; Zhang, Zuo; Zeng, Shengya; Qian, Guisheng

    2016-01-19

    This review provides a critical overview of problem-based learning (PBL) practices in Chinese pharmacy education. PBL has yet to be widely applied in pharmaceutical education in China. The results of those studies that have been conducted are published in Chinese and thus may not be easily accessible to international researchers. Therefore, this meta-analysis was carried out to review the effectiveness of PBL. Databases were searched for studies in accordance with the inclusion criteria. Two reviewers independently performed the study identification and data extraction. A meta-analysis was conducted using Revman 5.3 software. Sixteen randomized controlled trials were included. The meta-analysis revealed that PBL had a positive association with higher theoretical scores (SMD = 1.17, 95% CI [0.77, 11.57], P < 0.00001). The questionnaire results show that PBL methods are superior to conventional teaching methods in improving students' learning interest, independent analysis skills, scope of knowledge, self-study, team spirit, and oral expression. This meta-analysis indicates that PBL pedagogy is superior to traditional lecture-based teaching in Chinese pharmacy education. PBL methods could be an optional, supplementary method of pharmaceutical teaching in China. However, Chinese pharmacy colleges and universities should revise PBL curricula according to their own needs, which would maximize the effectiveness of PBL.
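
    For orientation, the sketch below shows the inverse-variance pooling of standardized mean differences that underlies this kind of meta-analysis, assuming NumPy; the per-study effect sizes and variances are invented placeholders, not data from the review.

        import numpy as np

        smd = np.array([1.30, 0.95, 1.10, 1.45])   # per-study standardized mean differences (placeholders)
        var = np.array([0.05, 0.08, 0.06, 0.10])   # per-study variances of the SMD (placeholders)

        weights = 1.0 / var                         # fixed-effect inverse-variance weights
        pooled = np.sum(weights * smd) / np.sum(weights)
        se = np.sqrt(1.0 / np.sum(weights))
        print(f"pooled SMD = {pooled:.2f}, "
              f"95% CI [{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")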

  17. Robust signal recovery using the prolate spherical wave functions and maximum correntropy criterion

    NASA Astrophysics Data System (ADS)

    Zou, Cuiming; Kou, Kit Ian

    2018-05-01

    Signal recovery is one of the most important problems in signal processing. This paper proposes a novel signal recovery method based on prolate spherical wave functions (PSWFs). PSWFs are a family of special functions that have been shown to perform well in signal recovery. However, existing PSWF-based recovery methods use the mean square error (MSE) criterion, which relies on the assumption that the noise is Gaussian. For non-Gaussian noise, such as impulsive noise or outliers, the MSE criterion is sensitive and may lead to large reconstruction errors. Unlike existing PSWF-based recovery methods, the proposed method employs the maximum correntropy criterion (MCC), which does not depend on the noise distribution and therefore reduces the impact of large, non-Gaussian noise. Experimental results on synthetic signals corrupted by various types of noise show that the proposed MCC-based signal recovery method is more robust than existing methods.
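
    As a hedged illustration of the robust-fitting idea only, the sketch below maximizes a correntropy-style objective by iteratively reweighted least squares, with a generic cosine basis standing in for the PSWF expansion; it is not the authors' algorithm, and all names and values are assumptions.

        import numpy as np

        def mcc_fit(basis, y, sigma=0.2, iters=20):
            """Correntropy-style weights exp(-r^2 / (2 sigma^2)) suppress impulsive outliers."""
            coef = np.linalg.lstsq(basis, y, rcond=None)[0]   # MSE initialisation
            for _ in range(iters):
                r = y - basis @ coef
                w = np.exp(-r ** 2 / (2 * sigma ** 2))
                coef = np.linalg.solve(basis.T @ (w[:, None] * basis), basis.T @ (w * y))
            return coef

        rng = np.random.default_rng(5)
        t = np.linspace(0, 1, 200)
        basis = np.column_stack([np.cos(2 * np.pi * k * t) for k in range(6)])
        y = basis @ np.array([1.0, 0.5, -0.3, 0.2, 0.0, 0.1]) + rng.normal(0, 0.05, t.size)
        y[rng.choice(t.size, 10, replace=False)] += 5.0       # impulsive outliers
        print(np.round(mcc_fit(basis, y), 2))                  # close to the true coefficients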

  18. Proficiency-based laparoscopic and endoscopic training with virtual reality simulators: a comparison of proctored and independent approaches.

    PubMed

    Snyder, Christopher W; Vandromme, Marianne J; Tyra, Sharon L; Hawn, Mary T

    2009-01-01

    Virtual reality (VR) simulators for laparoscopy and endoscopy may be valuable tools for resident education. However, the cost of such training in terms of trainee and instructor time may vary depending upon whether an independent or proctored approach is employed. We performed a randomized controlled trial to compare independent and proctored methods of proficiency-based VR simulator training. Medical students were randomized to independent or proctored training groups. Groups were compared with respect to the number of training hours and task repetitions required to achieve expert level proficiency on laparoscopic and endoscopic simulators. Cox regression modeling was used to compare time to proficiency between groups, with adjustment for appropriate covariates. Thirty-six medical students (18 independent, 18 proctored) were enrolled. Achievement of overall simulator proficiency required a median of 11 hours of training (range, 6-21 hours). Laparoscopic and endoscopic proficiency were achieved after a median of 11 (range, 6-32) and 10 (range, 5-27) task repetitions, respectively. The number of repetitions required to achieve proficiency was similar between groups. After adjustment for covariates, trainees in the independent group achieved simulator proficiency with significantly fewer hours of training (hazard ratio, 2.62; 95% confidence interval, 1.01-6.85; p = 0.048). Our study quantifies the cost, in instructor and trainee hours, of proficiency-based laparoscopic and endoscopic VR simulator training, and suggests that proctored instruction does not offer any advantages to trainees. The independent approach may be preferable for surgical residency programs desiring to implement VR simulator training.
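
    A minimal sketch of the time-to-proficiency comparison described above, assuming the lifelines package; the DataFrame rows and column names are invented placeholders, not the trial data.

        import pandas as pd
        from lifelines import CoxPHFitter

        df = pd.DataFrame({
            "hours_to_proficiency": [9, 11, 14, 8, 12, 21],   # placeholder values
            "reached_proficiency":  [1, 1, 1, 1, 1, 0],       # 0 = censored
            "independent_group":    [1, 0, 1, 1, 0, 0],       # 1 = independent, 0 = proctored
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="hours_to_proficiency", event_col="reached_proficiency")
        print(cph.summary)   # the hazard ratio for `independent_group` plays the role of the reported HR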

  19. Independent components analysis to increase efficiency of discriminant analysis methods (FDA and LDA): Application to NMR fingerprinting of wine.

    PubMed

    Monakhova, Yulia B; Godelmann, Rolf; Kuballa, Thomas; Mushtakova, Svetlana P; Rutledge, Douglas N

    2015-08-15

    Discriminant analysis (DA) methods, such as linear discriminant analysis (LDA) or factorial discriminant analysis (FDA), are well-known chemometric approaches for solving classification problems in chemistry. In most applications, principle components analysis (PCA) is used as the first step to generate orthogonal eigenvectors and the corresponding sample scores are utilized to generate discriminant features for the discrimination. Independent components analysis (ICA) based on the minimization of mutual information can be used as an alternative to PCA as a preprocessing tool for LDA and FDA classification. To illustrate the performance of this ICA/DA methodology, four representative nuclear magnetic resonance (NMR) data sets of wine samples were used. The classification was performed regarding grape variety, year of vintage and geographical origin. The average increase for ICA/DA in comparison with PCA/DA in the percentage of correct classification varied between 6±1% and 8±2%. The maximum increase in classification efficiency of 11±2% was observed for discrimination of the year of vintage (ICA/FDA) and geographical origin (ICA/LDA). The procedure to determine the number of extracted features (PCs, ICs) for the optimum DA models was discussed. The use of independent components (ICs) instead of principle components (PCs) resulted in improved classification performance of DA methods. The ICA/LDA method is preferable to ICA/FDA for recognition tasks based on NMR spectroscopic measurements. Copyright © 2015 Elsevier B.V. All rights reserved.
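
    A hedged sketch of an ICA-then-discriminant-analysis pipeline of this kind, assuming scikit-learn; the data matrix, labels, and number of components are placeholders, not the wine NMR data.

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 500))      # placeholder NMR spectra (samples x variables)
        y = rng.integers(0, 3, size=120)     # placeholder grape-variety labels

        # Independent components replace principal components as the features fed to DA.
        model = make_pipeline(FastICA(n_components=10, random_state=0, max_iter=1000),
                              LinearDiscriminantAnalysis())
        scores = cross_val_score(model, X, y, cv=5)
        print("mean correct classification:", scores.mean())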

  20. Phylogenetic analysis of a spontaneous cocoa bean fermentation metagenome reveals new insights into its bacterial and fungal community diversity.

    PubMed

    Illeghems, Koen; De Vuyst, Luc; Papalexandratou, Zoi; Weckx, Stefan

    2012-01-01

    This is the first report on the phylogenetic analysis of the community diversity of a single spontaneous cocoa bean box fermentation sample through a metagenomic approach involving 454 pyrosequencing. Several sequence-based and composition-based taxonomic profiling tools were used and evaluated to avoid software-dependent results and their outcome was validated by comparison with previously obtained culture-dependent and culture-independent data. Overall, this approach revealed a wider bacterial (mainly γ-Proteobacteria) and fungal diversity than previously found. Further, the use of a combination of different classification methods, in a software-independent way, helped to understand the actual composition of the microbial ecosystem under study. In addition, bacteriophage-related sequences were found. The bacterial diversity depended partially on the methods used, as composition-based methods predicted a wider diversity than sequence-based methods, and as classification methods based solely on phylogenetic marker genes predicted a more restricted diversity compared with methods that took all reads into account. The metagenomic sequencing analysis identified Hanseniaspora uvarum, Hanseniaspora opuntiae, Saccharomyces cerevisiae, Lactobacillus fermentum, and Acetobacter pasteurianus as the prevailing species. Also, the presence of occasional members of the cocoa bean fermentation process was revealed (such as Erwinia tasmaniensis, Lactobacillus brevis, Lactobacillus casei, Lactobacillus rhamnosus, Lactococcus lactis, Leuconostoc mesenteroides, and Oenococcus oeni). Furthermore, the sequence reads associated with viral communities were of a restricted diversity, dominated by Myoviridae and Siphoviridae, and reflecting Lactobacillus as the dominant host. To conclude, an accurate overview of all members of a cocoa bean fermentation process sample was revealed, indicating the superiority of metagenomic sequencing over previously used techniques.

  1. Rational-operator-based depth-from-defocus approach to scene reconstruction.

    PubMed

    Li, Ang; Staunton, Richard; Tjahjadi, Tardi

    2013-09-01

    This paper presents a rational-operator-based approach to depth from defocus (DfD) for the reconstruction of three-dimensional scenes from two-dimensional images, which enables fast DfD computation that is independent of scene textures. Two variants of the approach, one using the Gaussian rational operators (ROs) that are based on the Gaussian point spread function (PSF) and the second based on the generalized Gaussian PSF, are considered. A novel DfD correction method is also presented to further improve the performance of the approach. Experimental results are considered for real scenes and show that both approaches outperform existing RO-based methods.

  2. A new ICA-based fingerprint method for the automatic removal of physiological artifacts from EEG recordings

    PubMed Central

    Tamburro, Gabriella; Fiedler, Patrique; Stone, David; Haueisen, Jens

    2018-01-01

    Background: EEG may be affected by artefacts hindering the analysis of brain signals. Data-driven methods like independent component analysis (ICA) are successful approaches to remove artefacts from the EEG. However, the ICA-based methods developed so far are often affected by limitations, such as: the need for visual inspection of the separated independent components (subjectivity problem) and, in some cases, for the independent and simultaneous recording of the inspected artefacts to identify the artefactual independent components; a potentially heavy manipulation of the EEG signals; the use of linear classification methods; the use of simulated artefacts to validate the methods; no testing in dry electrode or high-density EEG datasets; applications limited to specific conditions and electrode layouts. Methods: Our fingerprint method automatically identifies EEG ICs containing eyeblinks, eye movements, myogenic artefacts and cardiac interference by evaluating 14 temporal, spatial, spectral, and statistical features composing the IC fingerprint. Sixty-two real EEG datasets containing cued artefacts are recorded with wet and dry electrodes (128 wet and 97 dry channels). For each artefact, 10 nonlinear SVM classifiers are trained on fingerprints of expert-classified ICs. Training groups include randomly chosen wet and dry datasets decomposed in 80 ICs. The classifiers are tested on the IC-fingerprints of different datasets decomposed into 20, 50, or 80 ICs. The SVM performance is assessed in terms of accuracy, False Omission Rate (FOR), Hit Rate (HR), False Alarm Rate (FAR), and sensitivity (p). For each artefact, the quality of the artefact-free EEG reconstructed using the classification of the best SVM is assessed by visual inspection and SNR. Results: The best SVM classifier for each artefact type achieved average accuracy of 1 (eyeblink), 0.98 (cardiac interference), and 0.97 (eye movement and myogenic artefact). Average classification sensitivity (p) was 1 (eyeblink), 0.997 (myogenic artefact), 0.98 (eye movement), and 0.48 (cardiac interference). Average artefact reduction ranged from a maximum of 82% for eyeblinks to a minimum of 33% for cardiac interference, depending on the effectiveness of the proposed method and the amplitude of the removed artefact. The performance of the SVM classifiers did not depend on the electrode type, whereas it was better for lower decomposition levels (50 and 20 ICs). Discussion: Apart from cardiac interference, SVM performance and average artefact reduction indicate that the fingerprint method has an excellent overall performance in the automatic detection of eyeblinks, eye movements and myogenic artefacts, which is comparable to that of existing methods. Being also independent from simultaneous artefact recording, electrode number, type and layout, and decomposition level, the proposed fingerprint method can have useful applications in clinical and experimental EEG settings. PMID:29492336
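
    For illustration, a minimal sketch of training one nonlinear SVM on IC fingerprint feature vectors, assuming scikit-learn; the 14 features and expert labels below are randomly generated placeholders, not the authors' data.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        fingerprints = rng.normal(size=(400, 14))    # one 14-feature row per IC (placeholder)
        is_eyeblink = rng.integers(0, 2, size=400)   # expert classification (placeholder)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
        clf.fit(fingerprints, is_eyeblink)

        new_ic = rng.normal(size=(1, 14))
        print("flag as eyeblink IC:", bool(clf.predict(new_ic)[0]))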

  3. Research on the digital education resources of sharing pattern in independent colleges based on cloud computing environment

    NASA Astrophysics Data System (ADS)

    Xiong, Ting; He, Zhiwen

    2017-06-01

    Cloud computing, first proposed by Google in the United States, is an Internet-centred approach that provides standard, open network sharing services. With the rapid development of higher education in China, the educational resources provided by colleges and universities fall far short of actual teaching needs. Cloud computing, which uses Internet technology to provide shared resources, has therefore become an important means of sharing digital education applications in current higher education. Within a cloud computing environment, this paper analyses the existing problems in sharing digital educational resources among independent colleges in Jiangxi Province. Drawing on the mass storage, efficient operation, and low cost characteristic of cloud computing, the authors explore and design a sharing model for the digital educational resources of higher education in independent colleges. Finally, the design of the shared model is put into practical application.

  4. A mathematical definition of the financial bubbles and crashes

    NASA Astrophysics Data System (ADS)

    Watanabe, Kota; Takayasu, Hideki; Takayasu, Misako

    2007-09-01

    We check the validity of a mathematical method for detecting financial bubbles or crashes, which is based on fitting the data with an exponential function. We show that the period of a bubble can be determined nearly uniquely, independent of the precision of the data. The method is widely applicable to stock market data such as the Internet bubble.
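
    A small sketch of the underlying computation, assuming SciPy: an exponential function is fitted to a synthetic price series and the fitted growth rate is inspected; the series and any threshold are placeholders, not the authors' detection rule.

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.arange(200, dtype=float)
        price = 100.0 * np.exp(0.012 * t) + np.random.default_rng(2).normal(0, 2, t.size)

        def exponential(t, p0, growth):
            return p0 * np.exp(growth * t)

        (p0, growth), _ = curve_fit(exponential, t, price, p0=(price[0], 0.01))
        print(f"fitted growth rate per step: {growth:.4f}")
        # A sustained positive (negative) fitted growth rate over a window would be read
        # as a bubble (crash) period under a definition of this kind.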

  5. Influence of Previous Knowledge, Language Skills and Domain-Specific Interest on Observation Competency

    ERIC Educational Resources Information Center

    Kohlhauf, Lucia; Rutke, Ulrike; Neuhaus, Birgit

    2011-01-01

    Many epoch-making biological discoveries (e.g. Darwinian Theory) were based upon observations. Nevertheless, observation is often regarded as "just looking" rather than a basic scientific skill. As observation is one of the main research methods in biological sciences, it must be considered as an independent research method and systematic practice…

  6. Using a computer-based simulation with an artificial intelligence component and discovery learning to formulate training needs for a new technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hillis, D.R.

    A computer-based simulation with an artificial intelligence component and discovery learning was investigated as a method to formulate training needs for new or unfamiliar technologies. Specifically, the study examined if this simulation method would provide for the recognition of applications and knowledge/skills which would be the basis for establishing training needs. The study also examined the effect of field-dependence/independence on recognition of applications and knowledge/skills. A pretest-posttest control group experimental design involving fifty-eight college students from an industrial technology program was used. The study concluded that the simulation was effective in developing recognition of applications and the knowledge/skills for a new or unfamiliar technology. And, the simulation's effectiveness for providing this recognition was not limited by an individual's field-dependence/independence.

  7. Neural correlates of the natural observation of an emotionally loaded video

    PubMed Central

    Gonzalez-Santos, Leopoldo

    2018-01-01

    Studies based on a paradigm of free or natural viewing have revealed characteristics that allow us to know how the brain processes stimuli within a natural environment. This method has been little used to study brain function. With a connectivity approach, we examine the processing of emotions using an exploratory method to analyze functional magnetic resonance imaging (fMRI) data. This research describes our approach to modeling stress paradigms suitable for neuroimaging environments. We showed a short film (4.54 minutes) with high negative emotional valence and high arousal content to 24 healthy male subjects (36.42 years old; SD = 12.14) during fMRI. Independent component analysis (ICA) was used to identify networks based on spatial statistical independence. Through this analysis we identified the sensorimotor system and its influence on the dorsal attention and default-mode networks, which in turn have reciprocal activity and modulate networks described as emotional. PMID:29883494

  8. Generating Personalized Web Search Using Semantic Context

    PubMed Central

    Xu, Zheng; Chen, Hai-Yan; Yu, Jie

    2015-01-01

    The “one size fits all” criticism of search engines is that when queries are submitted, the same results are returned to different users. In order to solve this problem, personalized search is proposed, since it can provide different search results based upon the preferences of users. However, existing methods concentrate more on the long-term and independent user profile, and thus reduce the effectiveness of personalized search. In this paper, the proposed method captures user context to provide accurate user preferences for effective personalized search. First, the short-term query context is generated to identify related concepts of the query. Second, the user context is generated based on the click-through data of users. Finally, a forgetting factor is introduced to merge the independent user context in a user session, which maintains the evolution of user preferences. Experimental results fully confirm that our approach can successfully represent user context according to individual user information needs. PMID:26000335
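
    A toy sketch of the forgetting-factor merge described above, using plain Python dictionaries as term-weight vectors; the terms, weights, and factor are invented placeholders.

        def merge_context(previous, current, forgetting=0.8):
            """Exponentially decay the old context and add the new session's weights."""
            merged = {term: forgetting * w for term, w in previous.items()}
            for term, w in current.items():
                merged[term] = merged.get(term, 0.0) + w
            return merged

        profile = {}
        for session_context in [{"jaguar": 1.0, "car": 0.7}, {"jaguar": 0.5, "speed": 0.9}]:
            profile = merge_context(profile, session_context)
        print(profile)   # recent sessions dominate; older preferences decay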

  9. Analysis of evolutionary conservation patterns and their influence on identifying protein functional sites.

    PubMed

    Fang, Chun; Noguchi, Tamotsu; Yamana, Hayato

    2014-10-01

    Evolutionary conservation information included in the position-specific scoring matrix (PSSM) has been widely adopted by sequence-based methods for identifying protein functional sites, because all functional sites, whether in ordered or disordered proteins, are found to be conserved to some extent. However, different functional sites have different conservation patterns: some are linear and contextual, some are mingled with highly variable residues, and others seem to be conserved independently. Each value in a PSSM is calculated independently of the others, without carrying the contextual information of residues in the sequence. Therefore, adopting the direct output of a PSSM for prediction fails to consider the relationship between the conservation patterns of residues and the distribution of conservation scores in the PSSM. In order to demonstrate the importance of combining PSSMs with the specific conservation patterns of functional sites for prediction, three different PSSM-based methods for identifying three kinds of functional sites have been analyzed. Results suggest that different PSSM-based methods differ in their capability to identify different patterns of functional sites, and that better combining PSSMs with the specific conservation patterns of residues would substantially improve prediction.

  10. SU-E-T-24: A Simple Correction-Based Method for Independent Monitor Unit (MU) Verification in Monte Carlo (MC) Lung SBRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokhrel, D; Badkul, R; Jiang, H

    2014-06-01

    Purpose: Lung SBRT delivers hypo-fractionated doses in small non-IMRT fields with tissue-heterogeneity-corrected plans. An independent MU verification is mandatory for safe and effective delivery of the treatment plan. This report compares planned MUs obtained from the iPlan XVMC algorithm against a spreadsheet-based hand calculation using the commonly used, simple TMR-based method. Methods: Treatment plans of 15 patients who underwent MC-based lung SBRT to 50 Gy in 5 fractions with PTV V100% = 95% were studied. The ITV was delineated on MIP images based on 4D-CT scans. PTVs (ITV + 5 mm margins) ranged from 10.1 to 106.5 cc (average 48.6 cc). MC SBRT plans were generated using a combination of non-coplanar conformal arcs/beams with the iPlan XVMC algorithm (BrainLAB iPlan ver. 4.1.2) for a Novalis-TX equipped with micro-MLCs and a 6 MV SRS (1000 MU/min) beam. These plans were recomputed using the heterogeneity-corrected pencil-beam (PB-hete) algorithm without changing any beam parameters, such as MLCs or MUs. The dose ratio PB-hete/MC gave beam-by-beam inhomogeneity correction factors (ICFs), termed the individual correction. For the independent second check, MC MUs were verified using a TMR-based hand calculation with an average ICF (average correction), since the TMR-based hand calculation systematically underestimated MC MUs by about 5%. Also, the first 10 MC plans were verified with an ion-chamber measurement in a homogeneous phantom. Results: For both beams and arcs, the mean PB-hete dose was systematically overestimated by 5.5±2.6% and the mean hand-calculated MU systematically underestimated by 5.5±2.5% compared to XVMC. With the individual correction, mean hand-calculated MUs matched XVMC within -0.3±1.4% and 0.4±1.4% for beams and arcs, respectively. After the average 5% correction, hand-calculated MUs matched XVMC within 0.5±2.5% and 0.6±2.0% for beams and arcs, respectively. A small dependence on tumor volume (TV) and field size (FS) was also observed. The ion-chamber measurements agreed within ±3.0%. Conclusion: PB-hete overestimates dose to lung tumor relative to XVMC; the XVMC algorithm is much more complex and more accurate in the presence of tissue heterogeneities. Measurement at the machine is time consuming and needs extra resources, and direct measurement of dose for heterogeneous treatment plans is not yet clinically practiced. This simple correction-based method was very helpful for the independent second check of MC lung SBRT plans and is routinely used in our clinic. A look-up table can be generated to include the TV/FS dependence in the ICFs.
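
    As a reading aid, the sketch below applies an average inhomogeneity correction factor (ICF) to a TMR-based hand calculation and compares it with the planned Monte Carlo MUs; the numbers are placeholders roughly consistent with the 5% correction reported above, not clinical data.

        def corrected_hand_mu(tmr_hand_mu, icf):
            """Scale the TMR-based hand-calculated MU by an inhomogeneity correction factor."""
            return tmr_hand_mu * icf

        def percent_difference(check_mu, planned_mu):
            return 100.0 * (check_mu - planned_mu) / planned_mu

        planned_mc_mu = 250.0   # MU from the Monte Carlo plan (placeholder)
        tmr_hand_mu = 237.5     # uncorrected hand calculation, about 5% low (placeholder)
        average_icf = 1.05      # average correction of the kind described above

        check = corrected_hand_mu(tmr_hand_mu, average_icf)
        print(f"corrected second check differs from plan by "
              f"{percent_difference(check, planned_mc_mu):+.1f}%")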

  11. Analysis of pressure distortion testing

    NASA Technical Reports Server (NTRS)

    Koch, K. E.; Rees, R. L.

    1976-01-01

    The development of a distortion methodology, method D, was documented, and its application to steady state and unsteady data was demonstrated. Three methodologies based upon DIDENT, a NASA-LeRC distortion methodology based upon the parallel compressor model, were investigated by applying them to a set of steady state data. The best formulation was then applied to an independent data set. The good correlation achieved with this data set showed that method E, one of the above methodologies, is a viable concept. Unsteady data were analyzed by using the method E methodology. This analysis pointed out that the method E sensitivities are functions of pressure defect level as well as corrected speed and pattern.

  12. A new evaluation tool to obtain practice-based evidence of worksite health promotion programs.

    PubMed

    Dunet, Diane O; Sparling, Phillip B; Hersey, James; Williams-Piehota, Pamela; Hill, Mary D; Hanssen, Carl; Lawrenz, Frances; Reyes, Michele

    2008-10-01

    The Centers for Disease Control and Prevention developed the Swift Worksite Assessment and Translation (SWAT) evaluation method to identify promising practices in worksite health promotion programs. The new method complements research studies and evaluation studies of evidence-based practices that promote healthy weight in working adults. We used nationally recognized program evaluation standards of utility, feasibility, accuracy, and propriety as the foundation for our 5-step method: 1) site identification and selection, 2) site visit, 3) post-visit evaluation of promising practices, 4) evaluation capacity building, and 5) translation and dissemination. An independent, outside evaluation team conducted process and summative evaluations of SWAT to determine its efficacy in providing accurate, useful information and its compliance with evaluation standards. The SWAT evaluation approach is feasible in small and medium-sized workplace settings. The independent evaluation team judged SWAT favorably as an evaluation method, noting among its strengths its systematic and detailed procedures and service orientation. Experts in worksite health promotion evaluation concluded that the data obtained by using this evaluation method were sufficient to allow them to make judgments about promising practices. SWAT is a useful, business-friendly approach to systematic, yet rapid, evaluation that comports with program evaluation standards. The method provides a new tool to obtain practice-based evidence of worksite health promotion programs that help prevent obesity and, more broadly, may advance public health goals for chronic disease prevention and health promotion.

  13. Simultaneous ocular and muscle artifact removal from EEG data by exploiting diverse statistics.

    PubMed

    Chen, Xun; Liu, Aiping; Chen, Qiang; Liu, Yu; Zou, Liang; McKeown, Martin J

    2017-09-01

    Electroencephalography (EEG) recordings are frequently contaminated by both ocular and muscle artifacts. These are normally dealt with separately, by employing blind source separation (BSS) techniques relying on either second-order or higher-order statistics (SOS and HOS, respectively). When HOS-based methods are used, it is usually under the assumption that the artifacts are statistically independent of the EEG. When SOS-based methods are used, it is assumed that artifacts have autocorrelation characteristics distinct from the EEG. In reality, ocular and muscle artifacts neither completely satisfy the assumption of strict temporal independence from the EEG nor have completely distinct autocorrelation characteristics, suggesting that exploiting HOS or SOS alone may be insufficient to remove these artifacts. Here we employ a novel BSS technique, independent vector analysis (IVA), to exploit HOS and SOS simultaneously in removing ocular and muscle artifacts. Numerical simulations and application to real EEG recordings were used to explore the utility of the IVA approach. IVA was superior in isolating both ocular and muscle artifacts, especially for raw EEG data with low signal-to-noise ratio, and also integrated the usually separate SOS and HOS steps into a single unified step. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. A new ICA-based fingerprint method for the automatic removal of physiological artifacts from EEG recordings.

    PubMed

    Tamburro, Gabriella; Fiedler, Patrique; Stone, David; Haueisen, Jens; Comani, Silvia

    2018-01-01

    EEG may be affected by artefacts hindering the analysis of brain signals. Data-driven methods like independent component analysis (ICA) are successful approaches to remove artefacts from the EEG. However, the ICA-based methods developed so far are often affected by limitations, such as: the need for visual inspection of the separated independent components (subjectivity problem) and, in some cases, for the independent and simultaneous recording of the inspected artefacts to identify the artefactual independent components; a potentially heavy manipulation of the EEG signals; the use of linear classification methods; the use of simulated artefacts to validate the methods; no testing in dry electrode or high-density EEG datasets; applications limited to specific conditions and electrode layouts. Our fingerprint method automatically identifies EEG ICs containing eyeblinks, eye movements, myogenic artefacts and cardiac interference by evaluating 14 temporal, spatial, spectral, and statistical features composing the IC fingerprint. Sixty-two real EEG datasets containing cued artefacts are recorded with wet and dry electrodes (128 wet and 97 dry channels). For each artefact, 10 nonlinear SVM classifiers are trained on fingerprints of expert-classified ICs. Training groups include randomly chosen wet and dry datasets decomposed in 80 ICs. The classifiers are tested on the IC-fingerprints of different datasets decomposed into 20, 50, or 80 ICs. The SVM performance is assessed in terms of accuracy, False Omission Rate (FOR), Hit Rate (HR), False Alarm Rate (FAR), and sensitivity ( p ). For each artefact, the quality of the artefact-free EEG reconstructed using the classification of the best SVM is assessed by visual inspection and SNR. The best SVM classifier for each artefact type achieved average accuracy of 1 (eyeblink), 0.98 (cardiac interference), and 0.97 (eye movement and myogenic artefact). Average classification sensitivity (p) was 1 (eyeblink), 0.997 (myogenic artefact), 0.98 (eye movement), and 0.48 (cardiac interference). Average artefact reduction ranged from a maximum of 82% for eyeblinks to a minimum of 33% for cardiac interference, depending on the effectiveness of the proposed method and the amplitude of the removed artefact. The performance of the SVM classifiers did not depend on the electrode type, whereas it was better for lower decomposition levels (50 and 20 ICs). Apart from cardiac interference, SVM performance and average artefact reduction indicate that the fingerprint method has an excellent overall performance in the automatic detection of eyeblinks, eye movements and myogenic artefacts, which is comparable to that of existing methods. Being also independent from simultaneous artefact recording, electrode number, type and layout, and decomposition level, the proposed fingerprint method can have useful applications in clinical and experimental EEG settings.

  15. Virtual shelves in a digital library: a framework for access to networked information sources.

    PubMed Central

    Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E

    1995-01-01

    OBJECTIVE: Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. DESIGN: This framework uses a metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers are assigned to information sources. Call numbers are based on standard vocabulary codes. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. RESULTS: The framework has been implemented in two different systems. One system is based on the Open System Foundation/Distributed Computing Environment and the other is based on the World Wide Web. CONCLUSIONS: This framework applies traditional methods of library classification and cataloging in new ways. It is compatible with two traditional styles of selecting information: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library and information science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources. PMID:8581554
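
    A toy sketch of the two-stage lookup described in the DESIGN section, using plain Python dictionaries; all call numbers, shelf identifiers, and URLs are invented placeholders.

        call_number_to_shelf = {          # location-independent call numbers (invented)
            "WB-141": "shelf:clinical-diagnosis",
            "QS-504": "shelf:histology",
        }
        location_directory = {            # the only table that changes when servers move
            "shelf:clinical-diagnosis": "http://server-a.example.org/",
            "shelf:histology": "http://server-b.example.org/",
        }

        def resolve(call_number):
            shelf = call_number_to_shelf[call_number]   # call number -> virtual shelf
            return location_directory[shelf]            # virtual shelf -> network location

        print(resolve("WB-141"))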

  16. Charge carrier mobility in thin films of organic semiconductors by the gated van der Pauw method

    PubMed Central

    Rolin, Cedric; Kang, Enpu; Lee, Jeong-Hwan; Borghs, Gustaaf; Heremans, Paul; Genoe, Jan

    2017-01-01

    Thin film transistors based on high-mobility organic semiconductors are prone to contact problems that complicate the interpretation of their electrical characteristics and the extraction of important material parameters such as the charge carrier mobility. Here we report on the gated van der Pauw method for the simple and accurate determination of the electrical characteristics of thin semiconducting films, independently from contact effects. We test our method on thin films of seven high-mobility organic semiconductors of both polarities: device fabrication is fully compatible with common transistor process flows and device measurements deliver consistent and precise values for the charge carrier mobility and threshold voltage in the high-charge carrier density regime that is representative of transistor operation. The gated van der Pauw method is broadly applicable to thin films of semiconductors and enables a simple and clean parameter extraction independent from contact effects. PMID:28397852
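
    For illustration, the sketch below solves the van der Pauw equation exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1 for the sheet resistance at a single gate voltage, assuming SciPy; repeating this across gate voltages is how a gated measurement would build up contact-independent mobility data. The resistance values are placeholders.

        import numpy as np
        from scipy.optimize import brentq

        def sheet_resistance(r_a, r_b):
            """Solve exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1 for the sheet resistance R_s."""
            f = lambda r_s: np.exp(-np.pi * r_a / r_s) + np.exp(-np.pi * r_b / r_s) - 1.0
            return brentq(f, 1e-3, 1e12)

        # Placeholder four-terminal resistances measured at one gate voltage.
        r_s = sheet_resistance(r_a=1.2e5, r_b=1.5e5)
        print(f"sheet resistance: {r_s:.3e} ohm/sq")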

  17. Separated-pair independent particle model and the generalized Brillouin theorem: ab initio calculations on the dissociation of polyatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundberg, Kenneth Randall

    1976-01-01

    A method is developed to optimize the separated-pair independent particle (SPIP) wave function; it is a special case of the separated-pair theory obtained by using two-term natural expansions of the geminals. The orbitals are optimized by a theory based on the generalized Brillouin theorem and iterative configuration interaction (CI) calculations in the space of the SPIP function and its single excitations. The geminal expansion coefficients are optimized by serial 2 x 2 CI calculations. Formulas are derived for the matrix elements. An algorithm to implement the method is presented, and the work needed to evaluate the molecular integrals is discussed.

  18. A systematic review of novel technology for monitoring infant and newborn heart rate.

    PubMed

    Kevat, Ajay C; Bullen, Denise V R; Davis, Peter G; Kamlin, C Omar F

    2017-05-01

    Heart rate (HR) is a vital sign for assessing the need for resuscitation. We performed a systematic review of studies assessing novel methods of measuring HR in newborns and infants in the neonatal unit. Two investigators completed independent literature searches. Identified papers were independently evaluated, and relevant data were extracted and analysed. This systematic review identified seven new technologies, including camera-based photoplethysmography, reflectance pulse oximetry, laser Doppler methods, capacitive sensors, piezoelectric sensors, electromyography and a digital stethoscope. Clinicians should be aware of several of these, which may become available for clinical use in the near future. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.

  19. Calibration-free in vivo transverse blood flowmetry based on cross correlation of slow-time profiles from photoacoustic microscopy

    PubMed Central

    Zhou, Yong; Liang, Jinyang; Maslov, Konstantin I.; Wang, Lihong V.

    2013-01-01

    We propose a cross-correlation-based method to measure blood flow velocity by using photoacoustic microscopy. Unlike in previous auto-correlation-based methods, the measured flow velocity here is independent of particle size. Thus, an absolute flow velocity can be obtained without calibration. We first measured the flow velocity ex vivo, using defibrinated bovine blood. Then, flow velocities in vessels with different structures in a mouse ear were quantified in vivo. We further measured the flow variation in the same vessel and at a vessel bifurcation. All the experimental results indicate that our method can be used to accurately quantify blood velocity in vivo. PMID:24081077
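
    A hedged sketch of the cross-correlation idea on synthetic slow-time profiles, assuming NumPy: the lag that maximizes the correlation between signals recorded a known distance apart gives the transit time and hence the speed. The sampling rate, separation, and waveforms are invented placeholders, not the authors' data.

        import numpy as np

        fs = 1000.0          # slow-time sampling rate, Hz (placeholder)
        separation = 50e-6   # distance between the two measurement positions, m (placeholder)
        t = np.arange(0, 0.2, 1 / fs)
        envelope = lambda tt: np.exp(-((tt - 0.08) / 0.02) ** 2)
        profile_a = np.sin(2 * np.pi * 40 * t) * envelope(t)
        true_delay = 0.005   # 5 ms transit time between positions
        profile_b = np.sin(2 * np.pi * 40 * (t - true_delay)) * envelope(t - true_delay)

        lags = np.arange(-t.size + 1, t.size)
        xcorr = np.correlate(profile_b, profile_a, mode="full")
        delay = lags[np.argmax(xcorr)] / fs
        print(f"estimated speed: {separation / delay * 1e3:.2f} mm/s")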

  20. Competence-Based, Research-Related Lab Courses for Materials Modeling: The Case of Organic Photovoltaics

    ERIC Educational Resources Information Center

    Schellhammer, Karl Sebastian; Cuniberti, Gianaurelio

    2017-01-01

    We are hereby presenting a didactic concept for an advanced lab course that focuses on the design of donor materials for organic solar cells. Its research-related and competence-based approach qualifies the students to independently and creatively apply computational methods and to profoundly and critically discuss the results obtained. The high…

  1. Comparison of Methods for Demonstrating Passage of Time When Using Computer-Based Video Prompting

    ERIC Educational Resources Information Center

    Mechling, Linda C.; Bryant, Kathryn J.; Spencer, Galen P.; Ayres, Kevin M.

    2015-01-01

    Two different video-based procedures for presenting the passage of time (how long a step lasts) were examined. The two procedures were presented within the framework of video prompting to promote independent multi-step task completion across four young adults with moderate intellectual disability. The two procedures demonstrating passage of the…

  2. The need for sustainability and alignment of future support for National Immunization Technical Advisory Groups (NITAGs) in low and middle-income countries.

    PubMed

    Howard, Natasha; Bell, Sadie; Walls, Helen; Blanchard, Laurence; Brenzel, Logan; Jit, Mark; Mounier-Jack, Sandra

    2018-02-22

    National Immunisation Technical Advisory Groups (NITAGs) provide independent guidance to health ministries to support evidence-based and nationally relevant immunisation decisions. We examined NITAGs' value, sustainability, and need for support in low and middle-income countries, drawing from a mixed-methods study including 130 global and national-level key informant interviews. NITAGs were particularly valued for providing independent and nationally owned evidence-based decision-making (EBDM), but needed to be integrated within national processes to effectively balance independence and influence. Participants agreed that most NITAGs, being relatively new, would need developmental and strengthening support for at least a decade. While national governments could support NITAG functioning, external support is likely needed for requisite capacity building. This might come from Gavi mechanisms and WHO, but would require alignment among stakeholders to be effective.

  3. Argumentation Based Joint Learning: A Novel Ensemble Learning Approach

    PubMed Central

    Xu, Junyi; Yao, Li; Li, Le

    2015-01-01

    Recently, ensemble learning methods have been widely used to improve classification performance in machine learning. In this paper, we present a novel ensemble learning method: argumentation based multi-agent joint learning (AMAJL), which integrates ideas from multi-agent argumentation, ensemble learning, and association rule mining. In AMAJL, argumentation technology is introduced as an ensemble strategy to integrate multiple base classifiers and generate a high performance ensemble classifier. We design an argumentation framework named Arena as a communication platform for knowledge integration. Through argumentation based joint learning, high quality individual knowledge can be extracted, and thus a refined global knowledge base can be generated and used independently for classification. We perform numerous experiments on multiple public datasets using AMAJL and other benchmark methods. The results demonstrate that our method can effectively extract high quality knowledge for ensemble classifier and improve the performance of classification. PMID:25966359

  4. Verification of Sulfate Attack Penetration Rates for Saltstone Disposal Unit Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, G. P.

    Recent Special Analysis modeling of Saltstone Disposal Units considers sulfate attack on concrete and utilizes degradation rates estimated from Cementitious Barriers Partnership software simulations. This study provides an independent verification of those simulation results using an alternative analysis method and an independent characterization data source. The sulfate penetration depths estimated herein are similar to the best-estimate values in SRNL-STI-2013-00118 Rev. 2 and well below the nominal values subsequently used to define Saltstone Special Analysis base cases.

  5. SURVEYS OF FALLOUT SHELTER--A COMPARISON BETWEEN AERIAL PHOTOGRAPHIC AND DOCUMENTARY METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleinecke, D.C.

    1960-02-01

    In 1959 a large part of Contra Costa County, California, was surveyed for fallout shelter areas. This survey was based on an examination of the tax assessor's records of existing buildings. A portion of this area was also surveyed independently by a method based on aerial photography. A statistical comparison of the results of these two surveys indicates that the aerial photographic method was more efficient than the documentary method in locating potential shelter space in buildings of heavy construction. This result, however, is probably not operationally significant. There is reason to believe that a combination of these two survey methods could be devised which would be operationally preferable to either method. (auth)

  6. ECG-based gating in ultra high field cardiovascular magnetic resonance using an independent component analysis approach.

    PubMed

    Krug, Johannes W; Rose, Georg; Clifford, Gari D; Oster, Julien

    2013-11-19

    In Cardiovascular Magnetic Resonance (CMR), the synchronization of image acquisition with heart motion is performed in clinical practice by processing the electrocardiogram (ECG). The ECG-based synchronization is well established for MR scanners with magnetic fields up to 3 T. However, this technique is prone to errors in ultra high field environments, e.g. in 7 T MR scanners as used in research applications. The high magnetic fields cause severe magnetohydrodynamic (MHD) effects which disturb the ECG signal. Image synchronization is thus less reliable and yields artefacts in CMR images. A strategy based on Independent Component Analysis (ICA) was pursued in this work to enhance the ECG contribution and attenuate the MHD effect. ICA was applied to 12-lead ECG signals recorded inside a 7 T MR scanner. An automatic source identification procedure was proposed to identify an independent component (IC) dominated by the ECG signal. The identified IC was then used for detecting the R-peaks. The presented ICA-based method was compared to other R-peak detection methods using 1) the raw ECG signal, 2) the raw vectorcardiogram (VCG), 3) the state-of-the-art gating technique based on the VCG, 4) an updated version of the VCG-based approach and 5) the ICA of the VCG. ECG signals from eight volunteers were recorded inside the MR scanner. Recordings with an overall length of 87 min accounting for 5457 QRS complexes were available for the analysis. The records were divided into a training and a test dataset. In terms of R-peak detection within the test dataset, the proposed ICA-based algorithm achieved a detection performance with an average sensitivity (Se) of 99.2%, a positive predictive value (+P) of 99.1%, with an average trigger delay and jitter of 5.8 ms and 5.0 ms, respectively. Long term stability of the demixing matrix was shown based on two measurements of the same subject, each being separated by one year, for which an average detection performance of Se = 99.4% and +P = 99.7% was achieved. Compared to the state-of-the-art VCG-based gating technique at 7 T, the proposed method increased the sensitivity and positive predictive value within the test dataset by 27.1% and 42.7%, respectively. The presented ICA-based method allows the estimation and identification of an IC dominated by the ECG signal. R-peak detection based on this IC outperforms the state-of-the-art VCG-based technique in a 7 T MR scanner environment.
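
    As an illustration of the final step only, the sketch below detects R peaks on a synthetic ECG-dominated component with a standard peak detector, assuming SciPy; it is not the authors' source-identification procedure, and all signal parameters are placeholders.

        import numpy as np
        from scipy.signal import find_peaks

        fs = 500.0                                  # sampling rate, Hz (placeholder)
        t = np.arange(0, 10, 1 / fs)
        ecg_ic = np.zeros(t.size)
        ecg_ic[::int(0.8 * fs)] = 1.0               # crude 75-bpm R-peak train (placeholder)
        ecg_ic += 0.05 * np.random.default_rng(3).normal(size=t.size)

        # A refractory period (minimum distance) and a relative height threshold
        # are typical constraints for R-peak detection.
        peaks, _ = find_peaks(ecg_ic, distance=int(0.3 * fs), height=0.5 * ecg_ic.max())
        print(f"detected {peaks.size} R peaks, mean RR = {np.mean(np.diff(peaks)) / fs:.2f} s")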

  7. Determination of lactic microflora of kefir grains and kefir beverage by using culture-dependent and culture-independent methods.

    PubMed

    Kesmen, Zülal; Kacmaz, Nazife

    2011-01-01

    In this study, we investigated the bacterial compositions of kefir grains and kefir beverages collected from different regions of Turkey by using culture-independent and culture-dependent methods. In the culture-independent detection, 10 different species of bacteria were detected in total by using the polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) analysis of the 16S rRNA gene V3 region. Among these species, Lactobacillus kefiranofaciens was the most dominant one in the kefir grains, while Lactococcus lactis was found to be significantly prevalent in the kefir beverages. In the culture-dependent detection, the primary differentiation and grouping of the isolates from kefir beverages and kefir grains were performed using repetitive sequence-based PCR (rep-PCR) fingerprinting, and the results were validated by 16S rDNA full-length sequencing. According to the results of culture-dependent methods, the most frequently isolated species were L. lactis, Leuconostoc mesenteroides, and Lactobacillus kefiri, respectively. Only 3 species, which are L. lactis, Lactobacillus acidophilus, and Streptococcus thermophilus, were detected with both culture-dependent and culture-independent methods. This study showed that the combination of both methods is necessary for a detailed and reliable investigation of microbial communities in kefir grains and kefir beverages. Due to their artisan- and region-dependent microflora, kefir products can be a source of interesting lactic acid bacteria, either new taxa or strains with specific functional properties, which might be used for the development of new starter cultures and innovative food products. Therefore, an increasing demand exists for new strains that show desirable effects on the product characteristics. Artisan dairy products are a candidate source of such microorganisms. For this reason, in this study, the bacterial compositions of kefir grains and kefir beverages obtained from different regions of Turkey were studied using culture-dependent and culture-independent molecular methods. © 2011 Institute of Food Technologists®

  8. DNA-based culture-independent analysis detects the presence of group a streptococcus in throat samples from healthy adults in Japan.

    PubMed

    Kulkarni, Tejaswini; Aikawa, Chihiro; Nozawa, Takashi; Murase, Kazunori; Maruyama, Fumito; Nakagawa, Ichiro

    2016-10-11

    Group A Streptococcus (GAS; Streptococcus pyogenes) causes a range of mild to severe infections in humans. It can also colonize healthy persons asymptomatically. Therefore, it is important to study GAS carriage in healthy populations, as carriage of it might lead to subsequent disease manifestation, clonal spread in the community, and/or diversification of the organism. Throat swab culture is the gold standard method for GAS detection. Advanced culture-independent methods provide rapid and efficient detection of microorganisms directly from clinical samples. We investigated the presence of GAS in throat swab samples from healthy adults in Japan using culture-dependent and culture-independent methods. Two throat swab samples were collected from 148 healthy volunteers. One was cultured on selective medium, while total DNA extracted from the other was polymerase chain reaction (PCR) amplified with two GAS-specific primer pairs: one was a newly designed 16S rRNA-specific primer pair, the other a previously described V-Na+-ATPase primer pair. Although only 5 (3.4%) of the 148 samples were GAS-positive by the culture-dependent method, 146 (98.6%) were positive for the presence of GAS DNA by the culture-independent method. To obtain serotype information by emm typing, we performed nested PCR using newly designed emm primers. We detected four different emm types in 25 (16.9%) samples, and these differed from the common emm types associated with GAS-associated diseases in Japan. The emm types detected in the healthy volunteers indicate that the presence of unique emm types might be associated with GAS carriage. Our results suggest that culture-independent methods should be considered for profiling GAS in healthy hosts, with a view to obtaining a better understanding of these organisms. The GAS-specific primers (16S rRNA and V-Na+-ATPase) used in this study can be used to estimate the maximum potential GAS carriage in people.

  9. Simultaneous optimization method for absorption spectroscopy postprocessing.

    PubMed

    Simms, Jean M; An, Xinliang; Brittelle, Mack S; Ramesh, Varun; Ghandhi, Jaal B; Sanders, Scott T

    2015-05-10

    A simultaneous optimization method is proposed for absorption spectroscopy postprocessing. This method is particularly useful for thermometry measurements based on congested spectra, as commonly encountered in combustion applications of H2O absorption spectroscopy. A comparison test demonstrated that the simultaneous optimization method had greater accuracy, greater precision, and was more user-independent than the common step-wise postprocessing method previously used by the authors. The simultaneous optimization method was also used to process experimental data from an environmental chamber and a constant volume combustion chamber, producing results with errors on the order of only 1%.

  10. Anti-cancer agents based on 6-trifluoromethoxybenzimidazole derivatives and method of making

    DOEpatents

    Gakh, Andrei A.; Vovk, Mykhaylo V.; Mel'nychenko, Nina V.; Sukach, Volodymyr A.

    2012-08-14

    The present disclosure relates to novel compounds having the structural Formulas (1a,1b), stereoisomers, tautomers, racemics, prodrugs, metabolites thereof, or pharmaceutically acceptable salt and/or solvate thereof as chemotherapy agents for treating of cancer, particularly androgen-independent prostate cancer. The disclosure also relates to methods for preparing said compounds, and to pharmaceutical compositions comprising said compounds.

  11. Anti-cancer agents based on 6-trifluoromethoxybenzimidazole derivatives and method of making

    DOEpatents

    Gakh, Andrei A; Vovk, Mykhaylo V; Mel'nychenko, Nina V; Sukach, Volodymyr A

    2012-10-23

    The present disclosure relates to novel compounds having the structural Formulas (1a,1b), stereoisomers, tautomers, racemics, prodrugs, metabolites thereof, or pharmaceutically acceptable salt and/or solvate thereof as chemotherapy agents for treating of cancer, particularly androgen-independent prostate cancer. The disclosure also relates to methods for preparing said compounds, and to pharmaceutical compositions comprising said compounds.

  12. Independence screening for high dimensional nonlinear additive ODE models with applications to dynamic gene regulatory networks.

    PubMed

    Xue, Hongqi; Wu, Shuang; Wu, Yichao; Ramirez Idarraga, Juan C; Wu, Hulin

    2018-05-02

    Mechanism-driven low-dimensional ordinary differential equation (ODE) models are often used to model viral dynamics at cellular levels and epidemics of infectious diseases. However, low-dimensional mechanism-based ODE models are limited for modeling infectious diseases at molecular levels such as transcriptomic or proteomic levels, which is critical to understand pathogenesis of diseases. Although linear ODE models have been proposed for gene regulatory networks (GRNs), nonlinear regulations are common in GRNs. The reconstruction of large-scale nonlinear networks from time-course gene expression data remains an unresolved issue. Here, we use high-dimensional nonlinear additive ODEs to model GRNs and propose a 4-step procedure to efficiently perform variable selection for nonlinear ODEs. To tackle the challenge of high dimensionality, we couple the 2-stage smoothing-based estimation method for ODEs and a nonlinear independence screening method to perform variable selection for the nonlinear ODE models. We have shown that our method possesses the sure screening property and it can handle problems with non-polynomial dimensionality. Numerical performance of the proposed method is illustrated with simulated data and a real data example for identifying the dynamic GRN of Saccharomyces cerevisiae. Copyright © 2018 John Wiley & Sons, Ltd.

  13. ASPIC: a novel method to predict the exon-intron structure of a gene that is optimally compatible to a set of transcript sequences.

    PubMed

    Bonizzoni, Paola; Rizzi, Raffaella; Pesole, Graziano

    2005-10-05

    Currently available methods to predict splice sites are mainly based on the independent and progressive alignment of transcript data (mostly ESTs) to the genomic sequence. Apart from often being computationally expensive, this approach is vulnerable to several problems; hence the need to develop novel strategies. We propose a method, based on a novel multiple genome-EST alignment algorithm, for the detection of splice sites. To avoid limitations of splice site prediction (mainly, over-prediction) due to independent single EST alignments to the genomic sequence, our approach performs a multiple alignment of transcript data to the genomic sequence based on the combined analysis of all available data. We recast the problem of predicting constitutive and alternative splicing as an optimization problem, where the optimal multiple transcript alignment minimizes the number of exons and hence of splice site observations. We have implemented a splice site predictor based on this algorithm in the software tool ASPIC (Alternative Splicing PredICtion). It is distinguished from other methods based on BLAST-like tools by the incorporation of entirely new ad hoc procedures for accurate and computationally efficient transcript alignment, and adopts dynamic programming for the refinement of intron boundaries. ASPIC also provides the minimal set of non-mergeable transcript isoforms compatible with the detected splicing events. The ASPIC web resource is dynamically interconnected with the Ensembl and Unigene databases and also implements an upload facility. Extensive benchmarking shows that ASPIC outperforms other existing methods in the detection of novel splicing isoforms and in the minimization of over-predictions. ASPIC also requires less computation time for processing a single gene and an EST cluster. The ASPIC web resource is available at http://aspic.algo.disco.unimib.it/aspic-devel/.

  14. TU-AB-BRA-02: An Efficient Atlas-Based Synthetic CT Generation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, X

    2016-06-15

    Purpose: A major obstacle for MR-only radiotherapy is the need to generate an accurate synthetic CT (sCT) from MR image(s) of a patient for the purposes of dose calculation and DRR generation. We propose here an accurate and efficient atlas-based sCT generation method, which has a computation speed largely independent of the number of atlases used. Methods: Atlas-based sCT generation requires a set of atlases with co-registered CT and MR images. Unlike existing methods that align each atlas to the new patient independently, we first create an average atlas and pre-align every atlas to the average atlas space. When a new patient arrives, we compute only one deformable image registration to align the patient MR image to the average atlas, which indirectly aligns the patient to all pre-aligned atlases. A patch-based non-local weighted fusion is performed in the average atlas space to generate the sCT for the patient, which is then warped back to the original patient space. We further adapt a PatchMatch algorithm that can quickly find top matches between patches of the patient image and all atlas images, which makes the patch fusion step also independent of the number of atlases used. Results: Nineteen brain tumour patients with both CT and T1-weighted MR images are used as testing data and a leave-one-out validation is performed. Each sCT generated is compared against the original CT image of the same patient on a voxel-by-voxel basis. The proposed method produces a mean absolute error (MAE) of 98.6±26.9 HU overall. The accuracy is comparable with a conventional implementation scheme, but the computation time is reduced from over an hour to four minutes. Conclusion: An average atlas space patch fusion approach can produce highly accurate sCT estimations very efficiently. Further validation on dose computation accuracy and using a larger patient cohort is warranted. The author is a full time employee of Elekta, Inc.
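
    The sketch below illustrates patch-based non-local weighted fusion at a single voxel, assuming the patient MR patch and the atlas MR/CT patches are already aligned in the average atlas space; the Gaussian similarity weighting, the bandwidth, and the patch size are illustrative choices, and the PatchMatch search step is not reproduced.

      import numpy as np

      rng = np.random.default_rng(7)
      n_atlases, patch_len = 12, 27                   # e.g. 3x3x3 patches, flattened
      patient_patch = rng.random(patch_len)           # MR patch around one voxel
      atlas_mr = rng.random((n_atlases, patch_len))   # matching atlas MR patches
      atlas_ct = rng.uniform(-1000, 1500, n_atlases)  # atlas CT value at the centre voxel (HU)

      h = 0.5                                         # bandwidth of the similarity kernel (assumed)
      dist2 = ((atlas_mr - patient_patch) ** 2).mean(axis=1)
      weights = np.exp(-dist2 / (h ** 2))             # more similar MR patch -> larger weight
      sct_value = float((weights * atlas_ct).sum() / weights.sum())
      print(f"synthetic CT estimate at this voxel: {sct_value:.1f} HU")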

  15. Speckle noise reduction technique for Lidar echo signal based on self-adaptive pulse-matching independent component analysis

    NASA Astrophysics Data System (ADS)

    Xu, Fan; Wang, Jiaxing; Zhu, Daiyin; Tu, Qi

    2018-04-01

    Speckle noise has always been a particularly tricky problem when improving the ranging capability and accuracy of Lidar systems, especially in harsh environments. Currently, effective speckle de-noising techniques are extremely scarce and should be further developed. In this study, a speckle noise reduction technique is proposed based on independent component analysis (ICA). Since normally few changes happen in the shape of the laser pulse itself, the authors employed the laser source as a reference pulse and executed the ICA decomposition to find the optimal matching position. In order to make the algorithm self-adaptive, the local Mean Square Error (MSE) is defined as a criterion for evaluating the iteration results. The experimental results demonstrate that the self-adaptive pulse-matching ICA (PM-ICA) method can effectively decrease the speckle noise and recover the useful Lidar echo signal component with high quality. In particular, the proposed method achieves a 4 dB greater improvement in signal-to-noise ratio (SNR) than a traditional homomorphic wavelet method.
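
    A rough sketch of the pulse-matching idea on synthetic data is given below; stacking the echo with the reference pulse and selecting the unmixed component with the smallest local MSE around the expected pulse position is an illustrative simplification (standard FastICA is used instead of the paper's specific algorithm, and all names, window sizes, and signals are assumptions).

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      n = 2048
      t = np.arange(n)
      reference = np.exp(-0.5 * ((t - 1000) / 40.0) ** 2)   # idealized laser source pulse
      echo = 0.8 * reference + 0.5 * rng.standard_normal(n) # noisy Lidar echo (speckle stand-in)

      # stack echo and reference as two observed mixtures and unmix with FastICA
      X = np.column_stack([echo, reference])
      ica = FastICA(n_components=2, random_state=0)
      S = ica.fit_transform(X)                              # (samples x components)

      def local_mse(a, b, center=1000, half_width=100):
          """Mean squared error restricted to a window around the expected pulse."""
          sl = slice(center - half_width, center + half_width)
          a_n = a[sl] / (np.abs(a[sl]).max() + 1e-12)
          b_n = b[sl] / (np.abs(b[sl]).max() + 1e-12)
          return float(np.mean((a_n - b_n) ** 2))

      # pick the component (up to sign) that best matches the reference pulse locally
      errors = [min(local_mse(s, reference), local_mse(-s, reference)) for s in S.T]
      recovered = S[:, int(np.argmin(errors))]
      print("selected component index:", int(np.argmin(errors)))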

  16. Automated EEG artifact elimination by applying machine learning algorithms to ICA-based features.

    PubMed

    Radüntz, Thea; Scouten, Jon; Hochmuth, Olaf; Meffert, Beate

    2017-08-01

    Biological and non-biological artifacts cause severe problems when dealing with electroencephalogram (EEG) recordings. Independent component analysis (ICA) is a widely used method for eliminating various artifacts from recordings. However, evaluating and classifying the calculated independent components (IC) as artifact or EEG is not fully automated at present. In this study, we propose a new approach for automated artifact elimination, which applies machine learning algorithms to ICA-based features. We compared the performance of our classifiers with the visual classification results given by experts. The best result with an accuracy rate of 95% was achieved using features obtained by range filtering of the topoplots and IC power spectra combined with an artificial neural network. Compared with the existing automated solutions, our proposed method is not limited to specific types of artifacts, electrode configurations, or number of EEG channels. The main advantage of the proposed method is that it provides an automatic, reliable, real-time capable, and practical tool, which avoids the need for the time-consuming manual selection of ICs during artifact removal.
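
    A toy sketch of one ingredient, range filtering of IC power spectra followed by a small neural-network classifier, is given below on random stand-in data; the filter width, network size, and labels are illustrative assumptions, and the topoplot features and expert labels of the study are not reproduced.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def range_filter(x, width=5):
          """Local max-minus-min ('range') filter along a 1-D feature vector."""
          pad = width // 2
          xp = np.pad(x, pad, mode="edge")
          return np.array([xp[i:i + width].max() - xp[i:i + width].min()
                           for i in range(len(x))])

      rng = np.random.default_rng(1)
      n_components, n_bins = 200, 64
      spectra = rng.random((n_components, n_bins))     # stand-in IC power spectra
      labels = rng.integers(0, 2, n_components)        # 1 = artifact, 0 = brain source (synthetic)

      features = np.array([range_filter(s) for s in spectra])
      clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
      clf.fit(features[:150], labels[:150])
      print("held-out accuracy on synthetic data:", clf.score(features[150:], labels[150:]))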

  17. Comparing Two Independent Satellite-Based Algorithms for Detecting and Tracking Ash Clouds by Using SEVIRI Sensor.

    PubMed

    Falconieri, Alfredo; Cooke, Michael C; Filizzola, Carolina; Marchese, Francesco; Pergola, Nicola; Tramutoli, Valerio

    2018-01-27

    The Eyjafjallajökull (Iceland) volcanic eruption of April-May 2010 caused unprecedented air-traffic disruption in Northern Europe, revealing some important weaknesses of current operational ash-monitoring and forecasting systems and encouraging the improvement of methods and procedures to better support the activities of Volcanic Ash Advisory Centers (VAACs). In this work, we compare two established satellite-based algorithms for ash detection, namely RST ASH and the operational London VAAC method, both exploiting sensor data of the spinning enhanced visible and infrared imager (SEVIRI). We analyze similarities and differences in the identification of ash clouds during the different phases of the Eyjafjallajökull eruption. The work reveals, in some cases, a certain complementary behavior of the two techniques, whose combination might improve the identification of ash-affected areas in specific conditions. This is indicated by the quantitative comparison of the merged SEVIRI ash product, achieved integrating outputs of the RST ASH and London VAAC methods, with independent atmospheric infrared sounder (AIRS) DDA (dust-detection algorithm) observations.

  18. Automated EEG artifact elimination by applying machine learning algorithms to ICA-based features

    NASA Astrophysics Data System (ADS)

    Radüntz, Thea; Scouten, Jon; Hochmuth, Olaf; Meffert, Beate

    2017-08-01

    Objective. Biological and non-biological artifacts cause severe problems when dealing with electroencephalogram (EEG) recordings. Independent component analysis (ICA) is a widely used method for eliminating various artifacts from recordings. However, evaluating and classifying the calculated independent components (IC) as artifact or EEG is not fully automated at present. Approach. In this study, we propose a new approach for automated artifact elimination, which applies machine learning algorithms to ICA-based features. Main results. We compared the performance of our classifiers with the visual classification results given by experts. The best result with an accuracy rate of 95% was achieved using features obtained by range filtering of the topoplots and IC power spectra combined with an artificial neural network. Significance. Compared with the existing automated solutions, our proposed method is not limited to specific types of artifacts, electrode configurations, or number of EEG channels. The main advantage of the proposed method is that it provides an automatic, reliable, real-time capable, and practical tool, which avoids the need for the time-consuming manual selection of ICs during artifact removal.

  19. Comparing Two Independent Satellite-Based Algorithms for Detecting and Tracking Ash Clouds by Using SEVIRI Sensor

    PubMed Central

    Cooke, Michael C.; Filizzola, Carolina

    2018-01-01

    The Eyjafjallajökull (Iceland) volcanic eruption of April–May 2010 caused unprecedented air-traffic disruption in Northern Europe, revealing some important weaknesses of current operational ash-monitoring and forecasting systems and encouraging the improvement of methods and procedures to better support the activities of Volcanic Ash Advisory Centers (VAACs). In this work, we compare two established satellite-based algorithms for ash detection, namely RSTASH and the operational London VAAC method, both exploiting sensor data of the spinning enhanced visible and infrared imager (SEVIRI). We analyze similarities and differences in the identification of ash clouds during the different phases of the Eyjafjallajökull eruption. The work reveals, in some cases, a certain complementary behavior of the two techniques, whose combination might improve the identification of ash-affected areas in specific conditions. This is indicated by the quantitative comparison of the merged SEVIRI ash product, achieved integrating outputs of the RSTASH and London VAAC methods, with independent atmospheric infrared sounder (AIRS) DDA (dust-detection algorithm) observations. PMID:29382058

  20. Unfolding the Second Riemann sheet with Pade Approximants: hunting resonance poles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masjuan, Pere; Departamento de Fisica Teorica y del Cosmos, Universidad de Granada, Campus de Fuentenueva, E-18071 Granada

    2011-05-23

    Based on Pade Theory, a new procedure for extracting the pole mass and width of resonances is proposed. The method is systematic and provides a model-independent treatment for the prediction and the errors of the approximation.
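
    For orientation, the standard [N/M] Padé approximant built from the Taylor coefficients of an amplitude F(s) is sketched below (a textbook definition; the authors' specific fitting and error-estimation procedure is not reproduced). The resonance pole position then follows from the pole of the approximant continued to the second Riemann sheet.

      \[
        P^{N}_{M}(s) \;=\; \frac{\sum_{n=0}^{N} a_n\, s^{n}}{1 + \sum_{m=1}^{M} b_m\, s^{m}},
        \qquad
        P^{N}_{M}(s) - F(s) \;=\; \mathcal{O}\!\left(s^{\,N+M+1}\right),
      \]
      \[
        \sqrt{s_{\mathrm{pole}}} \;=\; M_R - \tfrac{i}{2}\,\Gamma_R
        \quad \text{(one common convention for reading off the pole mass and width).}
      \]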

  1. SU-F-T-268: A Feasibility Study of Independent Dose Verification for Vero4DRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamashita, M; Kokubo, M; Institute of Biomedical Research and Innovation, Kobe, Hyogo

    2016-06-15

    Purpose: Vero4DRT (Mitsubishi Heavy Industries Ltd.) has been released for a few years. The treatment planning system (TPS) of Vero4DRT is dedicated, so measurement has been the only method of dose verification. There have been no reports of independent dose verification using a Clarkson-based algorithm for Vero4DRT. An independent dose verification software program for the general-purpose linac using a modified Clarkson-based algorithm was adapted for Vero4DRT. In this study, we evaluated the accuracy of the independent dose verification program and the feasibility of the secondary check for Vero4DRT. Methods: iPlan (Brainlab AG) was used as the TPS. Pencil Beam Convolution was used as the dose calculation algorithm for IMRT and X-ray Voxel Monte Carlo was used for the others. Simple MU Analysis (SMU, Triangle Products, Japan) was used as the independent dose verification software program, in which CT-based dose calculation was performed using a modified Clarkson-based algorithm. In this study, 120 patients' treatment plans were collected in our institute. The treatments were performed using conventional irradiation for lung and prostate, SBRT for lung, and step-and-shoot IMRT for prostate. Doses from the TPS and the SMU were compared, and confidence limits (CLs, Mean ± 2SD %) were compared to those from the general-purpose linac. Results: As the results of the CLs, the conventional irradiation (lung, prostate), SBRT (lung) and IMRT (prostate) show 2.2 ± 3.5% (CL of the general-purpose linac: 2.4 ± 5.3%), 1.1 ± 1.7% (−0.3 ± 2.0%), 4.8 ± 3.7% (5.4 ± 5.3%) and −0.5 ± 2.5% (−0.1 ± 3.6%), respectively. The CLs for Vero4DRT are similar to those for the general-purpose linac. Conclusion: The independent dose verification for the new linac is clinically available as a secondary check, and we performed the check with a tolerance level similar to that of the general-purpose linac. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).
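
    As a minimal illustration of the confidence-limit statistic used above (Mean ± 2SD of the per-plan dose difference, in percent), the sketch below uses made-up numbers; the variable names and values are not from the study.

      import numpy as np

      # hypothetical point doses for five plans (cGy); illustrative values only
      tps_dose = np.array([200.0, 198.5, 202.1, 199.0, 201.3])    # treatment planning system
      indep_dose = np.array([197.8, 196.9, 200.5, 195.2, 199.7])  # independent Clarkson-type check

      diff_pct = (tps_dose - indep_dose) / tps_dose * 100.0
      mean, sd = diff_pct.mean(), diff_pct.std(ddof=1)
      print(f"confidence limit: {mean:.1f} % +/- {2.0 * sd:.1f} %")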

  2. Automatic classification of artifactual ICA-components for artifact removal in EEG signals.

    PubMed

    Winkler, Irene; Haufe, Stefan; Tangermann, Michael

    2011-08-02

    Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand labeled by experts as artifactual or brain sources and tested on 1080 new components of RT data of the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used data with different channel setups and from new subjects. Based on six features only, the optimized linear classifier performed on a level with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components. We propose a universal and efficient classifier of ICA components for the subject-independent removal of artifacts from EEG data. Based on linear methods, it is applicable for different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye- and muscle artifacts. Its performance and generalization ability are demonstrated on data from different EEG studies.

  3. Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.

    PubMed

    Kim, Yuneung; Lim, Johan; Park, DoHwan

    2015-11-01

    In this paper, we study a nonparametric procedure to test independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To do so, we propose a score-based modification of the Kendall's tau statistic for bivariate interval-censored data. Our modification defines the Kendall's tau statistic with expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and application to the AIDS study. We compare our method to alternative approaches such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
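
    A sketch of the general form of such a modification, in standard notation and not necessarily the paper's exact score-based weighting: with C_{ij} and D_{ij} indicating that the pair (i, j) is concordant or discordant, the censored-data version replaces the indicators by their estimated expectations given the observed intervals,

      \[
        \hat{\tau} \;=\; \binom{n}{2}^{-1} \sum_{i<j}
          \Big( \widehat{E}\big[\,C_{ij} \mid \text{observed intervals}\,\big]
              \;-\; \widehat{E}\big[\,D_{ij} \mid \text{observed intervals}\,\big] \Big),
      \]

    where the expectations are computed over the unobserved event times, for example from fitted marginal distributions.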

  4. Spatiotemporal patterns of ERP based on combined ICA-LORETA analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacai; Guo, Taomei; Xu, Yaqin; Zhao, Xiaojie; Yao, Li

    2007-03-01

    In contrast to the fMRI methods widely used up to now, this method aims to understand more profoundly how the brain systems work under a sentence processing task and to map accurately the spatiotemporal patterns of activity of the large neuronal populations in the human brain from the analysis of ERP data recorded on the scalp. In this study, an event-related brain potential (ERP) paradigm recording the on-line responses to the processing of sentences is chosen as an example. In order both to exploit the millisecond temporal resolution of ERPs and to overcome their insensitivity to the cerebral location of ERP sources, we separate these sources in space and time with a combined method of independent component analysis (ICA) and low-resolution tomography (LORETA) algorithms. ICA blindly separates the input ERP data into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain sources. The spatial map associated with each ICA component is then analyzed with LORETA to locate its cerebral sources throughout the full brain, under the assumption that neighboring neurons are simultaneously and synchronously activated. Our results show that the cerebral computation mechanism underlying content-word reading is mediated by the orchestrated activity of several spatially distributed brain sources located in the temporal, frontal, and parietal areas, which activate at distinct time intervals and are grouped into different statistically independent components. Thus ICA-LORETA analysis provides an encouraging and effective method for studying brain dynamics from ERP data.

  5. A Unimodal Model for Double Observer Distance Sampling Surveys.

    PubMed

    Becker, Earl F; Christ, Aaron M

    2015-01-01

    Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption which assumes there is one distance, the apex of the detection function, where the 2 observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
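
    A minimal sketch of a two-piece (split) normal detection curve with a single apex is given below; the parameterization and scaling convention are illustrative assumptions and may differ from the published model, and covariates would enter by letting the apex and the two spread parameters depend on them.

      import numpy as np

      def two_piece_normal(x, mu, sigma_left, sigma_right):
          """Unimodal detection curve equal to 1 at x = mu, decaying with
          possibly different spreads on either side of the apex."""
          x = np.asarray(x, dtype=float)
          sigma = np.where(x < mu, sigma_left, sigma_right)
          return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

      distances = np.linspace(0.0, 400.0, 5)  # metres from the transect line (illustrative)
      print(two_piece_normal(distances, mu=80.0, sigma_left=50.0, sigma_right=120.0))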

  6. The analysis of morphometric data on rocky mountain wolves and arctic wolves using statistical method

    NASA Astrophysics Data System (ADS)

    Ammar Shafi, Muhammad; Saifullah Rusiman, Mohd; Hamzah, Nor Shamsidah Amir; Nor, Maria Elena; Ahmad, Noor’ani; Azia Hazida Mohamad Azmi, Nur; Latip, Muhammad Faez Ab; Hilmi Azman, Ahmad

    2018-04-01

    Morphometrics is a quantitative analysis based on the shape and size of specimens. Morphometric quantitative analyses are commonly used to analyse the fossil record, the shape and size of specimens, and other characteristics. The aim of the study is to find the differences between rocky mountain wolves and arctic wolves based on gender. The sample utilised secondary data which included seven independent variables and two dependent variables. Statistical modelling was used in the analysis, namely analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA). The results showed significant differences between arctic wolves and rocky mountain wolves based on the independent factors and gender.

  7. A machine independent expert system for diagnosing environmentally induced spacecraft anomalies

    NASA Technical Reports Server (NTRS)

    Rolincik, Mark J.

    1991-01-01

    A new rule-based, machine-independent analytical tool for diagnosing spacecraft anomalies, the EnviroNET expert system, was developed. Expert systems provide an effective method for storing knowledge, allow computers to sift through large amounts of data and pinpoint significant parts, and, most importantly, use heuristics in addition to algorithms, which allows approximate reasoning and inference and the ability to attack problems that are not rigidly defined. The EnviroNET expert system knowledge base currently contains over two hundred rules and links to databases which include past environmental data, satellite data, and previously known anomalies. The environmental causes considered are bulk charging, single event upsets (SEU), surface charging, and total radiation dose.

  8. Removal of EMG and ECG artifacts from EEG based on wavelet transform and ICA.

    PubMed

    Zhou, Weidong; Gotman, Jean

    2004-01-01

    In this study, the methods of wavelet threshold de-noising and independent component analysis (ICA) are introduced. ICA is a novel signal processing technique based on high order statistics, and is used to separate independent components from measurements. The extended ICA algorithm does not need to calculate the higher order statistics, converges fast, and can be used to separate subGaussian and superGaussian sources. A pre-whitening procedure is performed to de-correlate the mixed signals before extracting sources. The experimental results indicate the electromyogram (EMG) and electrocardiograph (ECG) artifacts in electroencephalograph (EEG) can be removed by a combination of wavelet threshold de-noising and ICA.
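
    A rough sketch of the combination described above, using PyWavelets for soft-threshold de-noising and scikit-learn's FastICA for source separation on synthetic multichannel data; the wavelet, decomposition level, and universal-threshold rule are illustrative assumptions rather than the paper's settings, and the extended-ICA variant is replaced here by standard FastICA.

      import numpy as np
      import pywt
      from sklearn.decomposition import FastICA

      def wavelet_denoise(signal, wavelet="db4", level=4):
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          # universal threshold estimated from the finest detail coefficients
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
          coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[: len(signal)]

      rng = np.random.default_rng(2)
      eeg = rng.standard_normal((8, 1024))            # stand-in 8-channel recording
      denoised = np.array([wavelet_denoise(ch) for ch in eeg])

      # unmix into independent components; artifact components would then be zeroed
      # before reconstructing the cleaned channels with the mixing matrix
      ica = FastICA(n_components=8, random_state=0)
      sources = ica.fit_transform(denoised.T)         # (samples x components)
      print(sources.shape)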

  9. Real-Time Subject-Independent Pattern Classification of Overt and Covert Movements from fNIRS Signals

    PubMed Central

    Rana, Mohit; Prasad, Vinod A.; Guan, Cuntai; Birbaumer, Niels; Sitaram, Ranganatha

    2016-01-01

    Recently, studies have reported the use of Near Infrared Spectroscopy (NIRS) for developing Brain–Computer Interface (BCI) by applying online pattern classification of brain states from subject-specific fNIRS signals. The purpose of the present study was to develop and test a real-time method for subject-specific and subject-independent classification of multi-channel fNIRS signals using support-vector machines (SVM), so as to determine its feasibility as an online neurofeedback system. Towards this goal, we used left versus right hand movement execution and movement imagery as study paradigms in a series of experiments. In the first two experiments, activations in the motor cortex during movement execution and movement imagery were used to develop subject-dependent models that obtained high classification accuracies thereby indicating the robustness of our classification method. In the third experiment, a generalized classifier-model was developed from the first two experimental data, which was then applied for subject-independent neurofeedback training. Application of this method in new participants showed mean classification accuracy of 63% for movement imagery tasks and 80% for movement execution tasks. These results, and their corresponding offline analysis reported in this study demonstrate that SVM based real-time subject-independent classification of fNIRS signals is feasible. This method has important applications in the field of hemodynamic BCIs, and neuro-rehabilitation where patients can be trained to learn spatio-temporal patterns of healthy brain activity. PMID:27467528

  10. Speaker-independent phoneme recognition with a binaural auditory image model

    NASA Astrophysics Data System (ADS)

    Francis, Keith Ivan

    1997-09-01

    This dissertation presents phoneme recognition techniques based on a binaural fusion of outputs of the auditory image model and subsequent azimuth-selective phoneme recognition in a noisy environment. Background information concerning speech variations, phoneme recognition, current binaural fusion techniques and auditory modeling issues is explained. The research is constrained to sources in the frontal azimuthal plane of a simulated listener. A new method based on coincidence detection of neural activity patterns from the auditory image model of Patterson is used for azimuth-selective phoneme recognition. The method is tested in various levels of noise and the results are reported in contrast to binaural fusion methods based on various forms of correlation to demonstrate the potential of coincidence-based binaural phoneme recognition. This method overcomes smearing of fine speech detail typical of correlation-based methods. Nevertheless, coincidence is able to measure similarity of left and right inputs and fuse them into useful feature vectors for phoneme recognition in noise.

  11. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance

    USGS Publications Warehouse

    Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.

    2017-01-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.

  12. Culture-based indicators of fecal contamination and molecular microbial indicators rarely correlate with Campylobacter spp. in recreational waters.

    PubMed

    Hellein, Kristen N; Battie, Cynthia; Tauchman, Eric; Lund, Deanna; Oyarzabal, Omar A; Lepo, Joe Eugene

    2011-12-01

    Campylobacter spp. are the leading cause of gastroenteritis worldwide. Most human infections result from contaminated food; however, infections are also caused by recreational waterway contamination. Campylobacter culture is technically challenging and enumeration by culture-based methods is onerous. Thus, we employed qPCR to quantify Campylobacter spp. in fresh- and marine-water samples, raw sewage and animal feces. Multiplex PCR determined whether Campylobacter jejuni or C. coli, most commonly associated with human disease, were present in qPCR-positive samples. Campylobacters were detected in raw sewage, and in feces of all avian and mammalian species tested. Campylobacter-positive concentrations ranged from 68 to 2.3 × 10⁶ cells per 500 mL. Although C. jejuni and C. coli were rare in waterways, they were prevalent in sewage and feces. Campylobacter-specific qPCR screening of environmental waters did not correlate with the regulatory EPA method 1600 (Enterococcus culture), nor with culture-independent, molecular-based microbial source tracking indicators, such as human polyomavirus, human Bacteroidales and Methanobrevibacter smithii. Our results suggest that neither the standard EPA method nor the newly proposed culture-independent methods are appropriate surrogates for Campylobacter contamination in water. Thus, assays for specific pathogens may be necessary to protect human health, especially in waters that are contaminated with sewage and animal feces.

  13. Control method of Three-phase Four-leg converter based on repetitive control

    NASA Astrophysics Data System (ADS)

    Hui, Wang

    2018-03-01

    This research takes the magnetic levitation wind power generation system as its object. In order to address the power quality problem caused by unbalanced loads in the power supply system, an independent control strategy for the three-phase four-leg converter is proposed, combining the characteristics of the magnetic levitation wind power generation system with the repetitive control principle. Based on the symmetric component method, a second-order generalized integrator is used to extract the positive- and negative-sequence signals, and decoupled control is carried out in the synchronous rotating reference frame: the positive- and negative-sequence voltages are regulated by double closed PI loops, while a PI regulator combined with repetitive control is introduced to eliminate the steady-state error associated with the fundamental-frequency fluctuation of the zero-sequence component. Simulation results in Matlab/Simulink show that the proposed control scheme can effectively suppress the disturbance caused by unbalanced loads and maintain load voltage balance. The scheme is easy to implement and remarkably improves the power quality of the independent power supply system.

  14. Isolation of circulating tumor cells from pancreatic cancer by automated filtration

    PubMed Central

    Brychta, Nora; Drosch, Michael; Driemel, Christiane; Fischer, Johannes C.; Neves, Rui P.; Esposito, Irene; Knoefel, Wolfram; Möhlendick, Birte; Hille, Claudia; Stresemann, Antje; Krahn, Thomas; Kassack, Matthias U.; Stoecklein, Nikolas H.; von Ahsen, Oliver

    2017-01-01

    It is now widely recognized that the isolation of circulating tumor cells based on cell surface markers might be hindered by variability in their protein expression. Especially in pancreatic cancer, isolation based only on EpCAM expression has produced very diverse results. Methods that are independent of surface markers and therefore independent of phenotypical changes in the circulating cells might increase CTC recovery also in pancreatic cancer. We compared an EpCAM-dependent (IsoFlux) and a size-dependent (automated Siemens Healthineers filtration device) isolation method for the enrichment of pancreatic cancer CTCs. The recovery rate of the filtration-based approach is dramatically superior to that of the EpCAM-dependent approach, especially for cells with low EpCAM expression (filtration: 52%, EpCAM-dependent: 1%). As storage and shipment of clinical samples is important for centralized analyses, we also evaluated the use of frozen diagnostic leukapheresis (DLA) as a source for isolating CTCs and subsequent genetic analysis such as KRAS mutation detection analysis. Using frozen DLA samples of pancreatic cancer patients, we detected CTCs in 42% of the samples by automated filtration. PMID:29156783

  15. Isolation of circulating tumor cells from pancreatic cancer by automated filtration.

    PubMed

    Brychta, Nora; Drosch, Michael; Driemel, Christiane; Fischer, Johannes C; Neves, Rui P; Esposito, Irene; Knoefel, Wolfram; Möhlendick, Birte; Hille, Claudia; Stresemann, Antje; Krahn, Thomas; Kassack, Matthias U; Stoecklein, Nikolas H; von Ahsen, Oliver

    2017-10-17

    It is now widely recognized that the isolation of circulating tumor cells based on cell surface markers might be hindered by variability in their protein expression. Especially in pancreatic cancer, isolation based only on EpCAM expression has produced very diverse results. Methods that are independent of surface markers and therefore independent of phenotypical changes in the circulating cells might increase CTC recovery also in pancreatic cancer. We compared an EpCAM-dependent (IsoFlux) and a size-dependent (automated Siemens Healthineers filtration device) isolation method for the enrichment of pancreatic cancer CTCs. The recovery rate of the filtration-based approach is dramatically superior to that of the EpCAM-dependent approach, especially for cells with low EpCAM expression (filtration: 52%, EpCAM-dependent: 1%). As storage and shipment of clinical samples is important for centralized analyses, we also evaluated the use of frozen diagnostic leukapheresis (DLA) as a source for isolating CTCs and subsequent genetic analysis such as KRAS mutation detection analysis. Using frozen DLA samples of pancreatic cancer patients, we detected CTCs in 42% of the samples by automated filtration.

  16. Frequency analysis of uncertain structures using imprecise probability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modares, Mehdi; Bergerson, Joshua

    2015-01-01

    Two new methods for finite element based frequency analysis of a structure with uncertainty are developed. An imprecise probability formulation based on enveloping p-boxes is used to quantify the uncertainty present in the mechanical characteristics of the structure. For each element, independent variations are considered. Using the two developed methods, P-box Frequency Analysis (PFA) and Interval Monte-Carlo Frequency Analysis (IMFA), sharp bounds on natural circular frequencies at different probability levels are obtained. These methods establish a framework for handling incomplete information in structural dynamics. Numerical example problems are presented that illustrate the capabilities of the new methods along with discussions on their computational efficiency.
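
    A rough, much-simplified sketch of the interval Monte-Carlo idea for a single spring-mass element is given below; representing the imprecise stiffness by sampled intervals and propagating both endpoints through omega = sqrt(k/m) is an illustrative stand-in for the finite-element formulation, and all numbers are made up.

      import numpy as np

      rng = np.random.default_rng(3)
      mass = 2.0                                        # kg, assumed crisp
      n_samples = 10_000

      # each sample draws a stiffness *interval* [k_lo, k_hi] (imprecise probability),
      # then both endpoints are propagated through omega = sqrt(k / m)
      k_centers = rng.normal(500.0, 25.0, n_samples)    # N/m, illustrative distribution
      k_half_width = 10.0                               # N/m, illustrative interval half-width
      k_lo, k_hi = k_centers - k_half_width, k_centers + k_half_width

      omega_lo = np.sqrt(k_lo / mass)
      omega_hi = np.sqrt(k_hi / mass)

      # bounds on the 5th-percentile natural circular frequency (rad/s)
      print("lower bound:", np.percentile(omega_lo, 5))
      print("upper bound:", np.percentile(omega_hi, 5))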

  17. A Meta-Analytic Study Concerning the Effect of Computer-Based Teaching on Academic Success in Turkey

    ERIC Educational Resources Information Center

    Batdi, Veli

    2015-01-01

    This research aims to investigate the effect of computer-based teaching (CBT) on students' academic success. The research used a meta-analytic method to reach a general conclusion by statistically calculating the results of a number of independent studies. In total, 78 studies (62 master's theses, 4 PhD theses, and 12 articles) concerning this…

  18. Fuzzy-logic based strategy for validation of multiplex methods: example with qualitative GMO assays.

    PubMed

    Bellocchi, Gianni; Bertholet, Vincent; Hamels, Sandrine; Moens, W; Remacle, José; Van den Eede, Guy

    2010-02-01

    This paper illustrates the advantages that a fuzzy-based aggregation method could bring to the validation of a multiplex method for GMO detection (DualChip GMO kit, Eppendorf). Guidelines for validation of chemical, bio-chemical, pharmaceutical and genetic methods have been developed, and ad hoc validation statistics are available and routinely used for in-house and inter-laboratory testing and decision-making. Fuzzy logic allows summarising the information obtained by independent validation statistics into one synthetic indicator of overall method performance. The microarray technology, introduced for simultaneous identification of multiple GMOs, poses specific validation issues (patterns of performance for a variety of GMOs at different concentrations). A fuzzy-based indicator for overall evaluation is illustrated in this paper and applied to validation data for different genetically modified elements. Remarks are drawn from the analytical results. The fuzzy-logic-based rules were shown to improve the interpretation of results and to facilitate overall evaluation of the multiplex method.

  19. fMRI capture of auditory hallucinations: Validation of the two-steps method.

    PubMed

    Leroy, Arnaud; Foucher, Jack R; Pins, Delphine; Delmaire, Christine; Thomas, Pierre; Roser, Mathilde M; Lefebvre, Stéphanie; Amad, Ali; Fovet, Thomas; Jaafari, Nemat; Jardri, Renaud

    2017-10-01

    Our purpose was to validate a reliable method to capture brain activity concomitant with hallucinatory events, which constitute frequent and disabling experiences in schizophrenia. Capturing hallucinations using functional magnetic resonance imaging (fMRI) remains very challenging. We previously developed a method based on a two-steps strategy including (1) multivariate data-driven analysis of per-hallucinatory fMRI recording and (2) selection of the components of interest based on a post-fMRI interview. However, two tests still need to be conducted to rule out critical pitfalls of conventional fMRI capture methods before this two-steps strategy can be adopted in hallucination research: replication of these findings on an independent sample and assessment of the reliability of the hallucination-related patterns at the subject level. To do so, we recruited a sample of 45 schizophrenia patients suffering from frequent hallucinations, 20 schizophrenia patients without hallucinations and 20 matched healthy volunteers; all participants underwent four different experiments. The main findings are (1) high accuracy in reporting unexpected sensory stimuli in an MRI setting; (2) good detection concordance between hypothesis-driven and data-driven analysis methods (as used in the two-steps strategy) when controlled unexpected sensory stimuli are presented; (3) good agreement of the two-steps method with the online button-press approach to capture hallucinatory events; (4) high spatial consistency of hallucinatory-related networks detected using the two-steps method on two independent samples. By validating the two-steps method, we advance toward the possible transfer of such technology to new image-based therapies for hallucinations. Hum Brain Mapp 38:4966-4979, 2017. © 2017 Wiley Periodicals, Inc.

  20. Validity, reliability and support for implementation of independence-scaled procedural assessment in laparoscopic surgery.

    PubMed

    Kramp, Kelvin H; van Det, Marc J; Veeger, Nic J G M; Pierie, Jean-Pierre E N

    2016-06-01

    There is no widely used method to evaluate procedure-specific laparoscopic skills. The first aim of this study was to develop a procedure-based assessment method. The second aim was to compare its validity, reliability and feasibility with currently available global rating scales (GRSs). An independence-scaled procedural assessment was created by linking the procedural key steps of the laparoscopic cholecystectomy to an independence scale. Subtitled and blinded videos of a novice, an intermediate and an almost competent trainee were evaluated with GRSs (OSATS and GOALS) and the independence-scaled procedural assessment by seven surgeons, three senior trainees and six scrub nurses. Participants received a short introduction to the GRSs and independence-scaled procedural assessment before assessment. The validity was estimated with the Friedman and Wilcoxon test and the reliability with the intra-class correlation coefficient (ICC). A questionnaire was used to evaluate user opinion. Independence-scaled procedural assessment and GRS scores improved significantly with surgical experience (OSATS p = 0.001, GOALS p < 0.001, independence-scaled procedural assessment p < 0.001). The ICCs of the OSATS, GOALS and independence-scaled procedural assessment were 0.78, 0.74 and 0.84, respectively, among surgeons. The ICCs increased when the ratings of scrub nurses were added to those of the surgeons. The independence-scaled procedural assessment was not considered more of an administrative burden than the GRSs (p = 0.692). A procedural assessment created by combining procedural key steps with an independence scale is a valid, reliable and acceptable assessment instrument in surgery. In contrast to the GRSs, the reliability of the independence-scaled procedural assessment exceeded the threshold of 0.8, indicating that it can also be used for summative assessment. It furthermore seems that scrub nurses can assess the operative competence of surgical trainees.

  1. Determinants of Occupational and Residential Functioning in Bipolar Disorder

    PubMed Central

    Depp, Colin A; Mausbach, Brent T; Bowie, Christopher; Wolyniec, Paula; Thornquist, Mary H.; Luke, James R.; McGrath, John A.; Pulver, Ann E.; Harvey, Philip D.; Patterson, Thomas L

    2013-01-01

    Background Bipolar disorder is associated with reduced rates of employment and residential independence. The influence of cognitive impairment and affective symptoms on these functional attainments has received little previous attention and is the focus of this study. Method A total of 229 adult outpatients with bipolar disorder without active substance use disorders and with an average of mild severity of affective symptoms were included in the analyses. After adjusting for sociodemographic and illness history covariates, univariate and multivariate analyses were used to evaluate the independent and interactive associations of neurocognitive ability, performance-based functional capacity, and affective symptom severity with residential independence, occupational status and number of hours worked. Results A total of 30% of the sample was unemployed and 18% were not independently residing. Neurocognitive ability was the strongest predictor of any employment, but depressive symptom severity was the only variable significantly related to hours worked. The strongest predictor of residential independence was performance-based functional capacity. Affective symptoms and neurocognitive ability were independent (non-interactive) predictors of occupational and residential status. Limitations This is a cross-sectional study and thus causal direction among variables is unknown. The sample was ethnically homogeneous and thus the results may not generalize to ethnically diverse samples. Conclusions This study confirmed elevated rates of unemployment and residential non-independence in adults with bipolar disorder. Interventions targeting cognitive deficits and functional capacity may increase the likelihood of any employment or residential independence, respectively. Interventions targeting depressive symptoms may be most influential on work outcomes among those already employed. PMID:22129770

  2. Building Better Planet Populations for EXOSIMS

    NASA Astrophysics Data System (ADS)

    Garrett, Daniel; Savransky, Dmitry

    2018-01-01

    The Exoplanet Open-Source Imaging Mission Simulator (EXOSIMS) software package simulates ensembles of space-based direct imaging surveys to provide a variety of science and engineering yield distributions for proposed mission designs. These mission simulations rely heavily on assumed distributions of planetary population parameters including semi-major axis, planetary radius, eccentricity, albedo, and orbital orientation to provide heuristics for target selection and to simulate planetary systems for detection and characterization. The distributions are encoded in PlanetPopulation modules within EXOSIMS which are selected by the user in the input JSON script when a simulation is run. The earliest written PlanetPopulation modules available in EXOSIMS are based on planet population models where the planetary parameters are considered to be independent from one another. While independent parameters allow for quick computation of heuristics and sampling for simulated planetary systems, results from planet-finding surveys have shown that many parameters (e.g., semi-major axis/orbital period and planetary radius) are not independent. We present new PlanetPopulation modules for EXOSIMS which are built on models based on planet-finding survey results where semi-major axis and planetary radius are not independent and provide methods for sampling their joint distribution. These new modules enhance the ability of EXOSIMS to simulate realistic planetary systems and give more realistic science yield distributions.

  3. Analysis of Factors Influencing Energy Consumption at an Air Force Base.

    DTIC Science & Technology

    1995-12-01

    include them in energy consumption projections. [Table 2-3: Selected Independent Variables (Morill, 1985)] ... the most appropriate method for forecasting energy consumption (Weck, 1981; Tinsley, 1981; Morill, 1985). This section will present a brief ...

  4. Automatic Classification of Artifactual ICA-Components for Artifact Removal in EEG Signals

    PubMed Central

    2011-01-01

    Background Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. Methods We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand labeled by experts as artifactual or brain sources and tested on 1080 new components of RT data of the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used data with different channel setups and from new subjects. Results Based on six features only, the optimized linear classifier performed on a level with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components. Conclusions We propose a universal and efficient classifier of ICA components for the subject-independent removal of artifacts from EEG data. Based on linear methods, it is applicable for different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye- and muscle artifacts. Its performance and generalization ability are demonstrated on data from different EEG studies. PMID:21810266

  5. Comparing generalized ensemble methods for sampling of systems with many degrees of freedom

    DOE PAGES

    Lincoff, James; Sasmal, Sukanya; Head-Gordon, Teresa

    2016-11-03

    Here, we compare two standard replica exchange methods using temperature and dielectric constant as the scaling variables for independent replicas against two new corresponding enhanced sampling methods based on non-equilibrium statistical cooling (temperature) or descreening (dielectric). We test the four methods on a rough 1D potential as well as for alanine dipeptide in water, for which their relatively small phase space allows for the ability to define quantitative convergence metrics. We show that both dielectric methods are inferior to the temperature enhanced sampling methods, and in turn show that temperature cool walking (TCW) systematically outperforms the standard temperature replica exchange (TREx) method. We extend our comparisons of the TCW and TREx methods to the 5 residue met-enkephalin peptide, in which we evaluate the Kullback-Leibler divergence metric to show that the rate of convergence between two independent trajectories is faster for TCW compared to TREx. Finally we apply the temperature methods to the 42 residue amyloid-β peptide in which we find non-negligible differences in the disordered ensemble using TCW compared to the standard TREx. All four methods have been made available as software through the OpenMM Omnia software consortium.

  6. Comparing generalized ensemble methods for sampling of systems with many degrees of freedom.

    PubMed

    Lincoff, James; Sasmal, Sukanya; Head-Gordon, Teresa

    2016-11-07

    We compare two standard replica exchange methods using temperature and dielectric constant as the scaling variables for independent replicas against two new corresponding enhanced sampling methods based on non-equilibrium statistical cooling (temperature) or descreening (dielectric). We test the four methods on a rough 1D potential as well as for alanine dipeptide in water, for which their relatively small phase space allows for the ability to define quantitative convergence metrics. We show that both dielectric methods are inferior to the temperature enhanced sampling methods, and in turn show that temperature cool walking (TCW) systematically outperforms the standard temperature replica exchange (TREx) method. We extend our comparisons of the TCW and TREx methods to the 5 residue met-enkephalin peptide, in which we evaluate the Kullback-Leibler divergence metric to show that the rate of convergence between two independent trajectories is faster for TCW compared to TREx. Finally we apply the temperature methods to the 42 residue amyloid-β peptide in which we find non-negligible differences in the disordered ensemble using TCW compared to the standard TREx. All four methods have been made available as software through the OpenMM Omnia software consortium (http://www.omnia.md/).
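
    As a small illustration of the convergence metric mentioned above, the sketch below computes a symmetrized Kullback-Leibler divergence between histograms of the same observable collected from two independent trajectories; the binning, the synthetic data, and the symmetrization are illustrative assumptions, not the authors' analysis settings.

      import numpy as np

      def kl_divergence(p, q, eps=1e-12):
          """KL divergence between two normalized histograms."""
          p = p / p.sum()
          q = q / q.sum()
          return float(np.sum(p * np.log((p + eps) / (q + eps))))

      rng = np.random.default_rng(4)
      traj_a = rng.normal(0.0, 1.0, 50_000)     # e.g. a dihedral-angle time series
      traj_b = rng.normal(0.05, 1.1, 50_000)    # second, independent trajectory

      bins = np.linspace(-5, 5, 51)
      p, _ = np.histogram(traj_a, bins=bins)
      q, _ = np.histogram(traj_b, bins=bins)
      print("symmetrized KL:", 0.5 * (kl_divergence(p, q) + kl_divergence(q, p)))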

  7. Simulated electronic heterodyne recording and processing of pulsed-laser holograms

    NASA Technical Reports Server (NTRS)

    Decker, A. J.

    1979-01-01

    The electronic recording of pulsed-laser holograms is proposed. The polarization sensitivity of each resolution element of the detector is controlled independently to add an arbitrary phase to the image waves. This method, which can be used to simulate heterodyne recording and to process three-dimensional optical images, is based on a similar method for heterodyne recording and processing of continuous-wave holograms.

  8. An overview of very high level software design methods

    NASA Technical Reports Server (NTRS)

    Asdjodi, Maryam; Hooper, James W.

    1988-01-01

    Very High Level design methods emphasize automatic transfer of requirements to formal design specifications, and/or may concentrate on automatic transformation of formal design specifications that include some semantic information of the system into machine executable form. Very high level design methods range from general, domain-independent methods to approaches implementable for specific applications or domains. Using AI techniques, abstract programming methods, domain heuristics, software engineering tools, library-based programming and other methods, different approaches to higher-level software design are being developed. Though one finds that a given approach does not always fall exactly in any specific class, this paper provides a classification for very high level design methods including examples for each class. These methods are analyzed and compared based on their basic approaches, strengths and feasibility for future expansion toward automatic development of software systems.

  9. Nanoengineered CIGS thin films for low cost photovoltaics

    NASA Astrophysics Data System (ADS)

    Eldada, Louay; Taylor, Matthew; Sang, Baosheng; McWilliams, Scott; Oswald, Robert; Stanbery, Billy J.

    2008-08-01

    Low cost manufacturing of Cu(In,Ga)Se2 (CIGS) films for high efficiency photovoltaic devices by the innovative Field-Assisted Simultaneous Synthesis and Transfer (FASST®) process is reported. The FASST® process is a two-stage reactive transfer printing method relying on chemical reaction between two separate precursor films to form CIGS, one deposited on the substrate and the other on a printing plate in the first stage. In the second stage these precursors are brought into intimate contact and rapidly reacted under pressure in the presence of an applied electrostatic field. The method utilizes physical mechanisms characteristic of anodic wafer bonding and rapid thermal annealing, effectively creating a sealed micro-reactor that ensures high material utilization efficiency, direct control of reaction pressure, and low thermal budget. The use of two independent ink-based or PVD-based nanoengineered precursor thin films provides the benefits of independent composition and flexible deposition technique optimization, and eliminates pre-reaction prior to the second stage FASST® synthesis of CIGS. High quality CIGS with large grains on the order of several microns are formed in just several minutes based on compositional and structural analysis by XRF, SIMS, SEM and XRD. Cell efficiencies of 12.2% have been achieved using this method.

  10. Evaluating convex roof entanglement measures.

    PubMed

    Tóth, Géza; Moroder, Tobias; Gühne, Otfried

    2015-04-24

    We show a powerful method to compute entanglement measures based on convex roof constructions. In particular, our method is applicable to measures that, for pure states, can be written as low order polynomials of operator expectation values. We show how to compute the linear entropy of entanglement, the linear entanglement of assistance, and a bound on the dimension of the entanglement for bipartite systems. We discuss how to obtain the convex roof of the three-tangle for three-qubit states. We also show how to calculate the linear entropy of entanglement and the quantum Fisher information based on partial information or device independent information. We demonstrate the usefulness of our method by concrete examples.
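
    For context, the convex roof construction referred to above can be written as follows; these are the standard textbook definitions of the linear entropy of entanglement and its convex-roof extension, not formulas reproduced from the paper.

        % Standard definitions (for context; not quoted from the paper).
        % Linear entropy of entanglement of a pure bipartite state:
        E_{\mathrm{lin}}(|\psi\rangle) \;=\; 1 - \operatorname{Tr}\!\left(\varrho_A^{2}\right),
        \qquad \varrho_A = \operatorname{Tr}_B\, |\psi\rangle\langle\psi| .
        % Convex-roof extension to mixed states (minimum over all pure-state decompositions):
        E_{\mathrm{lin}}(\varrho) \;=\; \min_{\{p_k,\,|\psi_k\rangle\}} \sum_k p_k\, E_{\mathrm{lin}}(|\psi_k\rangle),
        \qquad \varrho = \sum_k p_k\, |\psi_k\rangle\langle\psi_k| .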

  11. Self-stabilizing byzantine-fault-tolerant clock synchronization system and method

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R. (Inventor)

    2012-01-01

    Systems and methods for rapid Byzantine-fault-tolerant self-stabilizing clock synchronization are provided. The systems and methods are based on a protocol comprising a state machine and a set of monitors that execute once every local oscillator tick. The protocol is independent of application-specific requirements. The faults are assumed to be arbitrary and/or malicious. All timing measures of variables are based on the node's local clock, and thus no central clock or externally generated pulse is used. Instances of the protocol are shown to tolerate bursts of transient failures and to deterministically converge with a convergence time that is linear in the synchronization period, as predicted.

  12. Continuous-variable measurement-device-independent quantum key distribution with photon subtraction

    NASA Astrophysics Data System (ADS)

    Ma, Hong-Xin; Huang, Peng; Bai, Dong-Yun; Wang, Shi-Yu; Bao, Wan-Su; Zeng, Gui-Hua

    2018-04-01

    It has been found that non-Gaussian operations can be applied to increase and distill entanglement between Gaussian entangled states. We show the successful use of a non-Gaussian operation, in particular the photon subtraction operation, in the continuous-variable measurement-device-independent quantum key distribution (CV-MDI-QKD) protocol. The proposed method can be implemented based on existing technologies. Security analysis shows that the photon subtraction operation can remarkably increase the maximal transmission distance of the CV-MDI-QKD protocol, which precisely makes up for a shortcoming of the original CV-MDI-QKD protocol, and that one-photon subtraction has the best performance. Moreover, the proposed protocol provides a feasible method for the experimental implementation of the CV-MDI-QKD protocol.

  13. A SVM-based quantitative fMRI method for resting-state functional network detection.

    PubMed

    Song, Xiaomu; Chen, Nan-kuei

    2014-09-01

    Resting-state functional magnetic resonance imaging (fMRI) aims to measure baseline neuronal connectivity independent of specific functional tasks and to capture changes in the connectivity due to neurological diseases. Most existing network detection methods rely on a fixed threshold to identify functionally connected voxels under the resting state. Due to fMRI non-stationarity, the threshold cannot adapt to variations in data characteristics across sessions and subjects, and generates unreliable mapping results. In this study, a new method is presented for resting-state fMRI data analysis. Specifically, resting-state network mapping is formulated as an outlier detection process that is implemented using a one-class support vector machine (SVM). The results are refined by using a spatial-feature-domain prototype selection method and two-class SVM reclassification. The final decision on each voxel is made by comparing its probabilities of being functionally connected and unconnected, rather than by applying a threshold. Multiple features for resting-state analysis were extracted and examined using an SVM-based feature selection method, and the most representative features were identified. The proposed method was evaluated using synthetic and experimental fMRI data. A comparison study was also performed with independent component analysis (ICA) and correlation analysis. The experimental results show that the proposed method can provide comparable or better network detection performance than ICA and correlation analysis. The method is potentially applicable to various resting-state quantitative fMRI studies. Copyright © 2014 Elsevier Inc. All rights reserved.
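
    A minimal sketch of the two-stage SVM idea described above, written with scikit-learn; the feature matrix, the prototype-selection shortcut, and all parameter values here are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from sklearn.svm import OneClassSVM, SVC

        # Hypothetical input: one feature vector per voxel (rows), e.g. correlation
        # with a seed time course, low-frequency power fraction, etc.
        rng = np.random.default_rng(1)
        voxel_features = rng.normal(size=(5000, 4))

        # Stage 1: treat functionally connected voxels as "outliers" of the bulk.
        ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(voxel_features)
        scores = ocsvm.decision_function(voxel_features).ravel()   # negative = outlier

        # Stage 2: reclassify with a two-class SVM trained on prototype voxels
        # (here simply the most/least extreme voxels; the paper uses a
        # spatial-feature-domain prototype selection step instead).
        pos = np.argsort(scores)[:200]    # strongest outliers -> "connected" prototypes
        neg = np.argsort(scores)[-200:]   # most typical voxels -> "unconnected" prototypes
        X = np.vstack([voxel_features[pos], voxel_features[neg]])
        y = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
        clf = SVC(kernel="rbf", probability=True, gamma="scale").fit(X, y)

        # Final decision: compare class probabilities instead of a fixed threshold.
        proba = clf.predict_proba(voxel_features)[:, 1]
        network_mask = proba > 0.5
        print(network_mask.sum(), "voxels labeled as part of the resting-state network")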

  14. Security analysis of quadratic phase based cryptography

    NASA Astrophysics Data System (ADS)

    Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Healy, John J.; Sheridan, John T.

    2016-09-01

    The linear canonical transform (LCT) is essential in modeling coherent light field propagation through first-order optical systems. Recently, a generic optical system, known as a Quadratic Phase Encoding System (QPES), for encrypting a two-dimensional (2D) image has been reported, in which the individual LCT parameters, together with two phase keys, serve as the keys of the cryptosystem. However, it is important that such encryption systems also satisfy certain dynamic security properties. Therefore, in this work, we examine cryptographic evaluation methods, such as the Avalanche Criterion and Bit Independence, which indicate the degree of security of cryptographic algorithms, on QPES. We compare our simulation results with the conventional Fourier and Fresnel transform based DRPE systems. The results show that the LCT-based DRPE has better avalanche and bit independence characteristics than the conventional Fourier and Fresnel based encryption systems.
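
    The avalanche criterion mentioned above can be estimated for any bit-level encryption function by flipping single plaintext bits and measuring how many ciphertext bits change; the sketch below is generic and uses a toy stand-in cipher, not the QPES/DRPE implementation evaluated in the paper.

        import numpy as np

        def avalanche_score(encrypt, plaintext_bits, trials=100, rng=None):
            """Estimate the avalanche criterion of an arbitrary bit-level cipher:
            flip one randomly chosen input bit and measure the fraction of output
            bits that change (the ideal value is 0.5). `encrypt` maps a 0/1 array
            to a 0/1 array and is a placeholder for the system under test."""
            rng = rng or np.random.default_rng()
            base = encrypt(plaintext_bits)
            fractions = []
            for _ in range(trials):
                flipped = plaintext_bits.copy()
                i = rng.integers(len(flipped))
                flipped[i] ^= 1
                fractions.append(np.mean(encrypt(flipped) != base))
            return float(np.mean(fractions))

        # Toy stand-in cipher (XOR with a fixed keystream) just to exercise the
        # metric; a real test would wrap the QPES/DRPE encryption in this interface.
        key = np.random.default_rng(7).integers(0, 2, size=256)
        toy_encrypt = lambda bits: bits ^ key
        bits = np.random.default_rng(3).integers(0, 2, size=256)
        print(avalanche_score(toy_encrypt, bits))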

  15. A Subject-Independent Method for Automatically Grading Electromyographic Features During a Fatiguing Contraction

    PubMed Central

    Jesunathadas, Mark; Poston, Brach; Santello, Marco; Ye, Jieping; Panchanathan, Sethuraman

    2014-01-01

    Many studies have attempted to monitor fatigue from electromyogram (EMG) signals. However, fatigue affects EMG in a subject-specific manner. We present here a subject-independent framework for monitoring the changes in EMG features that accompany muscle fatigue, based on principal component analysis and factor analysis. The proposed framework is based on several time- and frequency-domain features, unlike most existing work, which is based on two to three features. Results show that latent factors obtained from factor analysis on these features provide a robust and unified framework. This framework learns a model from the EMG signals of multiple subjects that form a reference group, and monitors the changes in EMG features during a sustained submaximal contraction of a test subject on a scale from zero to one. The framework was tested on EMG signals collected from 12 muscles of eight healthy subjects. The distribution of factor scores of the test subject, when mapped onto the framework, was similar for both the subject-specific and subject-independent cases. PMID:22498666
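
    The following sketch illustrates the general workflow implied above: fit PCA and a factor model to pooled reference-group EMG features, then project a test subject's features and rescale one factor score to a zero-to-one range. The feature set, the 90% variance rule, and the use of the first factor as the fatigue index are assumptions made here for illustration, not the authors' exact choices.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA, FactorAnalysis

        # Hypothetical reference data: rows = time windows pooled over the reference
        # subjects, columns = time- and frequency-domain EMG features (RMS, mean
        # frequency, median frequency, zero crossings, ...).
        rng = np.random.default_rng(0)
        reference_features = rng.normal(size=(2000, 8))
        test_features = rng.normal(size=(300, 8))        # one fatiguing contraction

        scaler = StandardScaler().fit(reference_features)
        ref_z = scaler.transform(reference_features)

        # PCA to choose how many latent dimensions to keep, then a factor model.
        pca = PCA().fit(ref_z)
        n_factors = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.90) + 1)
        fa = FactorAnalysis(n_components=n_factors).fit(ref_z)

        # Project the test subject onto the reference factors and rescale the first
        # factor score to [0, 1] as a subject-independent fatigue-progression index.
        scores = fa.transform(scaler.transform(test_features))[:, 0]
        fatigue_index = (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)
        print(fatigue_index[:5])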

  16. A clinically observed discrepancy between image-based and log-based MLC positions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neal, Brian, E-mail: bpn2p@virginia.edu; Ahmed, Mahmoud; Kathuria, Kunal

    2016-06-15

    Purpose: To present a clinical case in which real-time intratreatment imaging identified a multileaf collimator (MLC) leaf to be consistently deviating from its programmed and logged position by >1 mm. Methods: An EPID-based exit-fluence dosimetry system designed to prevent gross delivery errors was used to capture cine images during treatment. The author serendipitously identified, by visual inspection, a suspected MLC leaf displacement that was not otherwise detected. The leaf position as recorded on the EPID images was measured, and log files were analyzed for the treatment in question, the prior day's treatment, and the daily MLC test patterns acquired on those treatment days. Additional standard test patterns were used to quantify the leaf position. Results: Whereas the log file reported no difference between planned and recorded positions, image-based measurements showed the leaf to be 1.3 ± 0.1 mm medial from the planned position. This offset was confirmed with the test pattern irradiations. Conclusions: It has been clinically observed that log-file-derived leaf positions can differ from the actual positions by >1 mm, and therefore cannot be considered to be the actual leaf positions. This cautions against the use of log-based methods for MLC or patient quality assurance without independent confirmation of log integrity. Frequent verification of MLC positions through independent means is a necessary precondition to trusting log-file records. Intratreatment EPID imaging provides a method to capture departures from MLC planned positions.

  17. Style-independent document labeling: design and performance evaluation

    NASA Astrophysics Data System (ADS)

    Mao, Song; Kim, Jong Woo; Thoma, George R.

    2003-12-01

    The Medical Article Records System (MARS) has been developed at the U.S. National Library of Medicine (NLM) for automated data entry of bibliographical information from medical journals into MEDLINE, the premier bibliographic citation database at NLM. Currently, a rule-based algorithm (called ZoneCzar) is used for labeling important bibliographical fields (title, author, affiliation, and abstract) on medical journal article page images. While rules have been created for medical journals with regular layout types, new rules have to be created manually for any input journals with arbitrary or new layout types. Therefore, it is of interest to label journal articles independently of their layout styles. In this paper, we first describe a system (called ZoneMatch) for automated generation of crucial geometric and non-geometric features of important bibliographical fields based on string-matching and clustering techniques. The rule-based algorithm is then modified to use these features to perform style-independent labeling. We then describe a performance evaluation method for quantitatively evaluating our algorithm and characterizing its error distributions. Experimental results show that the labeling performance of the rule-based algorithm is significantly improved when the generated features are used.

  18. An expert system for natural language processing

    NASA Technical Reports Server (NTRS)

    Hennessy, John F.

    1988-01-01

    A solution to the natural language processing problem is proposed that uses a rule-based system, written in OPS5, to replace the traditional parsing method. The advantages of using a rule-based system are explored. Specifically, the extensibility of a rule-based solution is discussed, as well as the value of maintaining rules that function independently. Finally, the power of using semantics to supplement the syntactic analysis of a sentence is considered.

  19. Classifying Facial Actions

    PubMed Central

    Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.

    2010-01-01

    The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284

  20. Adobe acrobat: an alternative electronic teaching file construction methodology independent of HTML restrictions.

    PubMed

    Katzman, G L

    2001-03-01

    The goal of the project was to create a method by which an in-house digital teaching file could be constructed that was simple, inexpensive, independent of hypertext markup language (HTML) restrictions, and appears identical on multiple platforms. To accomplish this, Microsoft PowerPoint and Adobe Acrobat were used in succession to assemble digital teaching files in the Acrobat portable document file format. They were then verified to appear identically on computers running Windows, Macintosh Operating Systems (OS), and the Silicon Graphics Unix-based OS as either a free-standing file using Acrobat Reader software or from within a browser window using the Acrobat browser plug-in. This latter display method yields a file viewed through a browser window, yet remains independent of underlying HTML restrictions, which may confer an advantage over simple HTML teaching file construction. Thus, a hybrid of HTML-distributed Adobe Acrobat generated WWW documents may be a viable alternative for digital teaching file construction and distribution.

  1. Culture-independent Profiling of the Fecal Microbiome to Identify Microbial Species Associated with a Diarrheal Outbreak in Immunocompromised Mice.

    PubMed

    Misic, Ana M; Miedel, Emily L; Brice, Angela K; Cole, Stephen; Zhang, Grace F; Dyer, Cecilia D; Secreto, Anthony; Smith, Abigail L; Danet-Desnoyers, Gwenn; Beiting, Daniel P

    2018-06-13

    Immunocompromised mice are used frequently in biomedical research, in part because they accommodate the engraftment and study of primary human cells within a mouse model; however, these animals are susceptible to opportunistic infections and require special husbandry considerations. In 2015, an outbreak marked by high morbidity but low mortality swept through a colony of immunocompromised mice; this outbreak rapidly affected 75% of the colony and ultimately required complete depopulation of the barrier suite. Conventional microbiologic and molecular diagnostics were unsuccessful in determining the cause; therefore, we explored culture-independent methods to broadly profile the microbial community in the feces of affected animals. This approach identified 4 bacterial taxa (Candidatus Arthromitus, Clostridium celatum, Clostridiales bacterium VE202-01, and Bifidobacterium pseudolongum strain PV8-2) that were significantly enriched in the affected mice. Based on these results, specific changes were made to the animal husbandry procedures for immunocompromised mice. This case report highlights the utility of culture-independent methods in laboratory animal diagnostics.

  2. ISE: An Integrated Search Environment. The manual

    NASA Technical Reports Server (NTRS)

    Chu, Lon-Chan

    1992-01-01

    Integrated Search Environment (ISE), a software package that implements hierarchical searches with meta-control, is described in this manual. ISE is a collection of problem-independent routines to support solving searches. These are mainly core routines for solving a search problem; they handle the control of searches and maintain search-related statistics. By separating the problem-dependent and problem-independent components in ISE, new search methods based on a combination of existing methods can be developed by coding a single master control program. Further, new applications solved by searches can be developed by coding the problem-dependent parts and reusing the problem-independent parts already developed. Potential users of ISE are designers of new application solvers and new search algorithms, and users of experimental application solvers and search algorithms. ISE is designed to be user-friendly and information rich. In this manual, the organization of ISE is described and several experiments carried out with ISE are also described.

  3. Transactional Database Transformation and Its Application in Prioritizing Human Disease Genes

    PubMed Central

    Xiang, Yang; Payne, Philip R.O.; Huang, Kun

    2013-01-01

    Binary (0,1) matrices, commonly known as transactional databases, can represent many kinds of application data, including gene-phenotype data where “1” represents a confirmed gene-phenotype relation and “0” represents an unknown relation. It is natural to ask what information is hidden behind these “0”s and “1”s. Unfortunately, recent matrix completion methods, though very effective in many cases, are less likely to infer something interesting from these (0,1)-matrices. To answer this challenge, we propose IndEvi, a very succinct and effective algorithm to perform independent-evidence-based transactional database transformation. Each entry of a (0,1)-matrix is evaluated by “independent evidence” (maximal supporting patterns) extracted from the whole matrix for this entry. The value of an entry, whether 0 or 1, has no effect on its independent evidence. An experiment on a gene-phenotype database shows that our method is highly promising in ranking candidate genes and predicting unknown disease genes. PMID:21422495

  4. BLIND EXTRACTION OF AN EXOPLANETARY SPECTRUM THROUGH INDEPENDENT COMPONENT ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waldmann, I. P.; Tinetti, G.; Hollis, M. D. J.

    2013-03-20

    Blind-source separation techniques are used to extract the transmission spectrum of the hot-Jupiter HD189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data using the sole assumption that non-Gaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than parametric methods can be obtained for 11 spectral bins with bin sizes of ~0.09 μm. This represents a reasonable trade-off between a higher degree of objectivity for the non-parametric methods and smaller standard errors for the parametric de-trending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.
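
    A minimal sketch of the underlying idea: separating a transit-like signal from non-Gaussian systematics across spectral channels with ICA. The toy light curve, mixing model, and component-selection heuristic below are assumptions for illustration and do not reproduce the paper's pipeline.

        import numpy as np
        from sklearn.decomposition import FastICA

        # Hypothetical input: each row is the normalized time series of one spectral
        # channel; astrophysical signal and instrumental systematics are mixed.
        rng = np.random.default_rng(2)
        n_channels, n_times = 18, 600
        time = np.linspace(0, 1, n_times)
        transit = 1.0 - 0.01 * ((time > 0.45) & (time < 0.55))      # toy light curve
        systematic = 0.005 * np.sign(np.sin(40 * time))             # non-Gaussian noise
        mixing = rng.uniform(0.5, 1.5, size=(n_channels, 2))
        data = mixing @ np.vstack([transit, systematic]) \
               + 1e-4 * rng.normal(size=(n_channels, n_times))

        # ICA assumes only that the underlying sources are statistically independent
        # and non-Gaussian; no parametric model of the systematics is imposed.
        ica = FastICA(n_components=2, random_state=0)
        sources = ica.fit_transform(data.T).T     # shape (2, n_times)

        # Identify the transit-like component, e.g. by correlation with the channel mean.
        corr = [abs(np.corrcoef(s, data.mean(axis=0))[0, 1]) for s in sources]
        transit_component = sources[int(np.argmax(corr))]
        print(transit_component.shape)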

  5. Classification of high-resolution multispectral satellite remote sensing images using extended morphological attribute profiles and independent component analysis

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei

    2017-07-01

    In this study, extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with the radial basis function (RBF) kernel was further applied for the classification. Based on the two major independent components, geometrical features were extracted using the EAPs method. Three morphological attributes were calculated and extracted for each independent component: area, standard deviation, and moment of inertia. The extracted geometrical features were classified using the RLS approach and the commonly used LIB-SVM support vector machine library. The Worldview-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification, by 2% compared with the EAPs and principal component analysis (PCA) method, and by 6% compared with APs applied to the original high-resolution multispectral data. Moreover, the results suggest that both the GURLS and LIB-SVM libraries are well suited for multispectral remote sensing image classification. The GURLS library is easy to use, with automatic parameter selection, but its computation time may be longer than that of the LIB-SVM library. This study should be helpful for the classification of high-resolution multispectral satellite remote sensing images.

  6. The Independent Evolution Method Is Not a Viable Phylogenetic Comparative Method

    PubMed Central

    2015-01-01

    Phylogenetic comparative methods (PCMs) use data on species traits and phylogenetic relationships to shed light on evolutionary questions. Recently, Smaers and Vinicius suggested a new PCM, Independent Evolution (IE), which purportedly employs a novel model of evolution based on Felsenstein’s Adaptive Peak Model. The authors found that IE improves upon previous PCMs by producing more accurate estimates of ancestral states, as well as separate estimates of evolutionary rates for each branch of a phylogenetic tree. Here, we document substantial theoretical and computational issues with IE. When data are simulated under a simple Brownian motion model of evolution, IE produces severely biased estimates of ancestral states and changes along individual branches. We show that these branch-specific changes are essentially ancestor-descendant or “directional” contrasts, and draw parallels between IE and previous PCMs such as “minimum evolution”. Additionally, while comparisons of branch-specific changes between variables have been interpreted as reflecting the relative strength of selection on those traits, we demonstrate through simulations that regressing IE estimated branch-specific changes against one another gives a biased estimate of the scaling relationship between these variables, and provides no advantages or insights beyond established PCMs such as phylogenetically independent contrasts. In light of our findings, we discuss the results of previous papers that employed IE. We conclude that Independent Evolution is not a viable PCM, and should not be used in comparative analyses. PMID:26683838

  7. A novel method for determining calibration and behavior of PVDF ultrasonic hydrophone probes in the frequency range up to 100 MHz.

    PubMed

    Bleeker, H J; Lewin, P A

    2000-01-01

    A new calibration technique for PVDF ultrasonic hydrophone probes is described. The current implementation of the technique allows determination of the hydrophone frequency response between 2 and 100 MHz and is based on the comparison of theoretically predicted and experimentally determined pressure-time waveforms produced by a focused, circular source. The simulation model was derived from the time-domain algorithm that solves the nonlinear KZK (Khokhlov-Zabolotskaya-Kuznetsov) equation describing acoustic wave propagation. The calibration data were experimentally verified using independent calibration procedures in the frequency range from 2 to 40 MHz, using a combined time delay spectrometry and reciprocity approach or calibration data provided by the National Physical Laboratory (NPL), UK. The results of this verification indicated good agreement between the results obtained using KZK and the above-mentioned independent calibration techniques from 2 to 40 MHz, with a maximum discrepancy of 18% at 30 MHz. The frequency responses obtained using different hydrophone designs, including several membrane and needle probes, are presented, and it is shown that the technique developed provides a desirable tool for independent verification of primary calibration techniques such as those based on optical interferometry. Fundamental limitations of the presented calibration method are also examined.

  8. Independent EEG Sources Are Dipolar

    PubMed Central

    Delorme, Arnaud; Palmer, Jason; Onton, Julie; Oostenveld, Robert; Makeig, Scott

    2012-01-01

    Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition ‘dipolarity’ defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison). PMID:22355308

  9. Description of quasiparticle and satellite properties via cumulant expansions of the retarded one-particle Green's function

    DOE PAGES

    Mayers, Matthew Z.; Hybertsen, Mark S.; Reichman, David R.

    2016-08-22

    A cumulant-based GW approximation for the retarded one-particle Green's function is proposed, motivated by an exact relation between the improper Dyson self-energy and the cumulant generating function. We explore qualitative aspects of this method within a simple one-electron independent phonon model, where it is seen that the method preserves the energy moment of the spectral weight while also reproducing the exact Green's function in the weak-coupling limit. For the three-dimensional electron gas, this method predicts multiple satellites at the bottom of the band, albeit with inaccurate peak spacing. However, its quasiparticle properties and correlation energies are more accurate than both previous cumulant methods and standard G0W0. These results point to features that may be exploited within the framework of cumulant-based methods and suggest promising directions for future exploration and improvements of cumulant-based GW approaches.

  10. HIV-1 protease cleavage site prediction based on two-stage feature selection method.

    PubMed

    Niu, Bing; Yuan, Xiao-Cheng; Roeper, Preston; Su, Qiang; Peng, Chun-Rong; Yin, Jing-Yuan; Ding, Juan; Li, HaiPeng; Lu, Wen-Cong

    2013-03-01

    Knowledge of the mechanism of HIV protease cleavage specificity is critical to the design of specific and effective HIV inhibitors. An accurate, robust, and rapid method to correctly predict the cleavage sites in proteins is crucial when searching for possible HIV inhibitors. In this article, HIV-1 protease specificity was studied using the correlation-based feature subset (CfsSubset) selection method combined with a genetic algorithm. Thirty important biochemical features were found based on a jackknife test from the original data set containing 4,248 features. Using the AdaBoost method with the thirty selected features, the prediction model yields an accuracy of 96.7% for the jackknife test and 92.1% for an independent set test, an improvement over the original feature set of 6.7% and 77.4%, respectively. Our feature selection scheme could be a useful technique for finding effective competitive inhibitors of HIV protease.
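
    A hedged sketch of the downstream setup described above: a feature-subset selector feeding AdaBoost, evaluated with a jackknife (leave-one-out) test. The mutual-information selector stands in for the CfsSubset/GA combination, and the random data are placeholders for encoded cleavage-site sequences.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.pipeline import make_pipeline

        # Hypothetical encoded sequences: rows = candidate cleavage sites, columns =
        # sequence-derived features; y = 1 if cleaved by HIV-1 protease, else 0.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 200))
        y = rng.integers(0, 2, size=120)

        # Stand-in for CfsSubset + GA: keep the 30 most informative features, then
        # classify with AdaBoost, mirroring the downstream setup described above.
        model = make_pipeline(
            SelectKBest(mutual_info_classif, k=30),
            AdaBoostClassifier(n_estimators=100, random_state=0),
        )

        # Jackknife (leave-one-out) estimate of accuracy.
        acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
        print(f"jackknife accuracy: {acc:.3f}")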

  11. 77 FR 47043 - Draft 2012 Marine Mammal Stock Assessment Reports

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-07

    ...], by any of the following methods: Electronic Submissions: Submit all electronic public comments via... mortality exceeds the potential biological removal level; (B) which, based on the best available scientific... Pacific independent Scientific Review Groups (SRGs), reviewed the status of marine mammal stocks as...

  12. Exploring Older Adults' Health Information Seeking Behaviors

    ERIC Educational Resources Information Center

    Manafo, Elizabeth; Wong, Sharon

    2012-01-01

    Objective: To explore older adults' (55-70 years) health information-seeking behaviors. Methods: Using a qualitative methodology, based on grounded theory, data were collected using in-depth interviews. Participants were community-living, older adults in Toronto, Canada who independently seek nutrition and health information. Interview transcripts…

  13. Amplicon Sequencing of the slpH Locus Permits Culture-Independent Strain Typing of Lactobacillus helveticus in Dairy Products

    PubMed Central

    Moser, Aline; Wüthrich, Daniel; Bruggmann, Rémy; Eugster-Meier, Elisabeth; Meile, Leo; Irmler, Stefan

    2017-01-01

    The advent of massively parallel sequencing technologies has opened up possibilities for studying the bacterial diversity of ecosystems without the need for enrichment or single-strain isolation. By exploiting 78 genome data sets from Lactobacillus helveticus strains, we found that the slpH locus, which encodes a putative surface layer protein, displays sufficient genetic heterogeneity to be a suitable target for strain typing. Based on high-throughput slpH gene sequencing and the detection of single-base DNA sequence variations, we established a culture-independent method to assess the biodiversity of the L. helveticus strains present in fermented dairy food. When we applied the method to study the L. helveticus strain composition in 15 natural whey cultures (NWCs) collected at different production facilities for Gruyère, a protected designation of origin (PDO) cheese, we detected a total of 10 sequence types (STs). In addition, we monitored the development of a three-strain mix in raclette cheese for 17 weeks. PMID:28775722

  14. Blood flow estimation in gastroscopic true-color images

    NASA Astrophysics Data System (ADS)

    Jacoby, Raffael S.; Herpers, Rainer; Zwiebel, Franz M.; Englmeier, Karl-Hans

    1995-05-01

    The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. The quantity of blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by approximately calculating the hemoglobin concentration from a spectrophotometric analysis of endoscopic true-color images, which are recorded during routine examinations. A system model based on the reflectance spectroscopic law of Kubelka-Munk is derived, which enables an estimation of the hemoglobin concentration by means of the color values of the images. Additionally, a transformation of the color values is developed in order to improve luminance independence. Applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed. The obtained results are mostly independent of luminance. An initial validation of the presented method is performed by a quantitative estimation of the reproducibility.

  15. Estimation of longitudinal force, lateral vehicle speed and yaw rate for four-wheel independent driven electric vehicles

    NASA Astrophysics Data System (ADS)

    Chen, Te; Xu, Xing; Chen, Long; Jiang, Haobing; Cai, Yingfeng; Li, Yong

    2018-02-01

    Accurate estimation of longitudinal force, lateral vehicle speed, and yaw rate is of great significance for torque allocation and stability control of four-wheel independently driven electric vehicles (4WID-EVs). A fusion method is proposed to estimate the longitudinal force, lateral vehicle speed, and yaw rate for 4WID-EVs. The electric driving wheel model (EDWM) is introduced into the longitudinal force estimation; the longitudinal force observer (LFO) is first designed based on an adaptive high-order sliding mode observer (HSMO), and the convergence of the LFO is analyzed and proved. Based on the estimated longitudinal force, an estimation strategy is then presented in which a strong tracking filter (STF) is used to estimate lateral vehicle speed and yaw rate simultaneously. Finally, co-simulation via CarSim and Matlab/Simulink is carried out to demonstrate the effectiveness of the proposed method. The performance of the LFO in practice is verified by an experiment on a chassis dynamometer bench.

  16. Automated processing of the single-lead electrocardiogram for the detection of obstructive sleep apnoea.

    PubMed

    de Chazal, Philip; Heneghan, Conor; Sheridan, Elaine; Reilly, Richard; Nolan, Philip; O'Malley, Mark

    2003-06-01

    A method for the automatic processing of the electrocardiogram (ECG) for the detection of obstructive apnoea is presented. The method screens nighttime single-lead ECG recordings for the presence of major sleep apnoea and provides a minute-by-minute analysis of disordered breathing. A large independently validated database of 70 ECG recordings acquired from normal subjects and subjects with obstructive and mixed sleep apnoea, each of approximately eight hours in duration, was used throughout the study. Thirty-five of these recordings were used for training and 35 retained for independent testing. A wide variety of features based on heartbeat intervals and an ECG-derived respiratory signal were considered. Classifiers based on linear and quadratic discriminants were compared. Feature selection and regularization of classifier parameters were used to optimize classifier performance. Results show that the normal recordings could be separated from the apnoea recordings with a 100% success rate and a minute-by-minute classification accuracy of over 90% is achievable.
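
    The kind of discriminant classifiers compared above can be set up in a few lines; the per-minute feature matrix below is synthetic, and the shrinkage/regularization settings are illustrative stand-ins for the regularization of classifier parameters mentioned in the abstract.

        import numpy as np
        from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                                   QuadraticDiscriminantAnalysis)
        from sklearn.model_selection import cross_val_score

        # Hypothetical per-minute features: e.g. mean/SD of RR intervals, spectral
        # power of the RR series, and amplitude of an ECG-derived respiratory signal.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 12))        # 2000 minutes of recording
        y = rng.integers(0, 2, size=2000)      # 1 = disordered-breathing minute

        lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # regularized linear
        qda = QuadraticDiscriminantAnalysis(reg_param=0.1)                 # regularized quadratic

        for name, clf in [("LDA", lda), ("QDA", qda)]:
            acc = cross_val_score(clf, X, y, cv=5).mean()
            print(name, f"{acc:.3f}")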

  17. Driver Fatigue Classification With Independent Component by Entropy Rate Bound Minimization Analysis in an EEG-Based System.

    PubMed

    Chai, Rifai; Naik, Ganesh R; Nguyen, Tuan Nghia; Ling, Sai Ho; Tran, Yvonne; Craig, Ashley; Nguyen, Hung T

    2017-05-01

    This paper presents a two-class electroencephalography-based classification of driver fatigue (fatigue state versus alert state) in 43 healthy participants. The system uses independent component analysis by entropy rate bound minimization (ERBM-ICA) for source separation, autoregressive (AR) modeling for feature extraction, and a Bayesian neural network as the classification algorithm. The classification results demonstrate a sensitivity of 89.7%, a specificity of 86.8%, and an accuracy of 88.2%. The combination of ERBM-ICA (source separator), AR (feature extractor), and Bayesian neural network (classifier) provides the best outcome, with a p-value < 0.05 and the highest value of area under the receiver operating curve (AUC-ROC = 0.93), compared with other methods such as power spectral density as the feature extractor (AUC-ROC = 0.81). The results of this study suggest the method could be used effectively in a countermeasure device for driver fatigue identification and other adverse event applications.

  18. Size-independent neural networks based first-principles method for accurate prediction of heat of formation of fuels

    NASA Astrophysics Data System (ADS)

    Yang, GuanYa; Wu, Jiang; Chen, ShuGuang; Zhou, WeiJun; Sun, Jian; Chen, GuanHua

    2018-06-01

    A neural-network-based first-principles method for predicting heat of formation (HOF) was previously demonstrated to achieve chemical accuracy over a broad spectrum of target molecules [L. H. Hu et al., J. Chem. Phys. 119, 11501 (2003)]. However, its accuracy deteriorates with increasing molecular size. A closer inspection reveals a systematic correlation between the prediction error and the molecular size, which appears correctable by further statistical analysis, calling for a more sophisticated machine learning algorithm. Despite the apparent difference between simple and complex molecules, all the essential physical information is already present in a carefully selected set of small-molecule representatives. A model that can capture the fundamental physics would be able to predict large and complex molecules from information extracted only from a database of small molecules. To this end, a size-independent, multi-step multi-variable linear regression-neural network-B3LYP method is developed in this work, which successfully improves the overall prediction accuracy by training with smaller molecules only. In particular, the calculation errors for larger molecules are drastically reduced to the same magnitudes as those of the smaller molecules. Specifically, the method is based on a 164-molecule database consisting of molecules made of hydrogen and carbon. Four molecular descriptors were selected to encode a molecule's characteristics, among which the raw HOF calculated from B3LYP and the molecular size are included. Upon the size-independent machine learning correction, the mean absolute deviation (MAD) of the B3LYP/6-311+G(3df,2p)-calculated HOF is reduced from 16.58 to 1.43 kcal/mol and from 17.33 to 1.69 kcal/mol for the training and testing sets (small molecules), respectively. Furthermore, the MAD of the testing set (large molecules) is reduced from 28.75 to 1.67 kcal/mol.

  19. NOTE: Entropy-based automated classification of independent components separated from fMCG

    NASA Astrophysics Data System (ADS)

    Comani, S.; Srinivasan, V.; Alleva, G.; Romani, G. L.

    2007-03-01

    Fetal magnetocardiography (fMCG) is a noninvasive technique suitable for the prenatal diagnosis of the fetal heart function. Reliable fetal cardiac signals can be reconstructed from multi-channel fMCG recordings by means of independent component analysis (ICA). However, the identification of the separated components is usually accomplished by visual inspection. This paper discusses a novel automated system based on entropy estimators, namely approximate entropy (ApEn) and sample entropy (SampEn), for the classification of independent components (ICs). The system was validated on 40 fMCG datasets of normal fetuses with the gestational age ranging from 22 to 37 weeks. Both ApEn and SampEn were able to measure the stability and predictability of the physiological signals separated with ICA, and the entropy values of the three categories were significantly different at p <0.01. The system performances were compared with those of a method based on the analysis of the time and frequency content of the components. The outcomes of this study showed a superior performance of the entropy-based system, in particular for early gestation, with an overall ICs detection rate of 98.75% and 97.92% for ApEn and SampEn respectively, as against a value of 94.50% obtained with the time-frequency-based system.
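
    Sample entropy, one of the two estimators used above, can be computed directly as sketched below; the embedding dimension m = 2 and tolerance r = 0.2·SD are common defaults and are not necessarily the values used in the paper, and the implementation is a plain, unoptimized one.

        import numpy as np

        def sample_entropy(x, m=2, r_factor=0.2):
            """Sample entropy SampEn(m, r) of a 1-D signal: negative log of the
            conditional probability that sequences matching for m points (within
            tolerance r) also match for m+1 points. Lower values = more regular,
            as expected for a clean cardiac component."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            r = r_factor * np.std(x)

            def count_matches(length):
                templates = np.array([x[i:i + length] for i in range(n - length)])
                count = 0
                for i in range(len(templates)):
                    dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                    count += np.sum(dist <= r)
                return count

            b = count_matches(m)
            a = count_matches(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else np.inf

        # A periodic (cardiac-like) component should score lower than broadband noise.
        t = np.linspace(0, 10, 2000)
        print(sample_entropy(np.sin(2 * np.pi * 1.4 * t)))                  # regular
        print(sample_entropy(np.random.default_rng(0).normal(size=2000)))   # irregular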

  20. Accounting for Non-Gaussian Sources of Spatial Correlation in Parametric Functional Magnetic Resonance Imaging Paradigms II: A Method to Obtain First-Level Analysis Residuals with Uniform and Gaussian Spatial Autocorrelation Function and Independent and Identically Distributed Time-Series.

    PubMed

    Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Lacey, Simon; Sathian, K

    2018-02-01

    In a recent study Eklund et al. have shown that cluster-wise family-wise error (FWE) rate-corrected inferences made in parametric statistical method-based functional magnetic resonance imaging (fMRI) studies over the past couple of decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; principally because the spatial autocorrelation functions (sACFs) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggest otherwise. Hence, the residuals from general linear model (GLM)-based fMRI activation estimates in these studies may not have possessed a homogenously Gaussian sACF. Here we propose a method based on the assumption that heterogeneity and non-Gaussianity of the sACF of the first-level GLM analysis residuals, as well as temporal autocorrelations in the first-level voxel residual time-series, are caused by unmodeled MRI signal from neuronal and physiological processes as well as motion and other artifacts, which can be approximated by appropriate decompositions of the first-level residuals with principal component analysis (PCA), and removed. We show that application of this method yields GLM residuals with significantly reduced spatial correlation, nearly Gaussian sACF and uniform spatial smoothness across the brain, thereby allowing valid cluster-based FWE-corrected inferences based on assumption of Gaussian spatial noise. We further show that application of this method renders the voxel time-series of first-level GLM residuals independent, and identically distributed across time (which is a necessary condition for appropriate voxel-level GLM inference), without having to fit ad hoc stochastic colored noise models. Furthermore, the detection power of individual subject brain activation analysis is enhanced. This method will be especially useful for case studies, which rely on first-level GLM analysis inferences.

  1. An analysis of random projection for changeable and privacy-preserving biometric verification.

    PubMed

    Wang, Yongjin; Plataniotis, Konstantinos N

    2010-10-01

    Changeability and privacy protection are important factors for widespread deployment of biometrics-based verification systems. This paper presents a systematic analysis of a random-projection (RP)-based method for addressing these problems. The employed method transforms biometric data using a random matrix with each entry an independent and identically distributed Gaussian random variable. The similarity- and privacy-preserving properties, as well as the changeability of the biometric information in the transformed domain, are analyzed in detail. Specifically, RP on both high-dimensional image vectors and dimensionality-reduced feature vectors is discussed and compared. A vector translation method is proposed to improve the changeability of the generated templates. The feasibility of the introduced solution is well supported by detailed theoretical analyses. Extensive experimentation on a face-based biometric verification problem shows the effectiveness of the proposed method.
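
    A minimal sketch of the RP transform discussed above: project the biometric feature vector with a user-specific Gaussian random matrix and optionally add a translation vector. The dimensions, the seed-as-key convention, and the synthetic "faces" are illustrative assumptions rather than the paper's exact formulation.

        import numpy as np

        def make_projection(seed, out_dim, in_dim):
            """User-specific random matrix with i.i.d. N(0, 1/out_dim) entries;
            issuing a new seed yields a new, cancellable template."""
            rng = np.random.default_rng(seed)
            return rng.normal(scale=1.0 / np.sqrt(out_dim), size=(out_dim, in_dim))

        def transform_template(feature_vec, seed, out_dim=128, translation=None):
            """Project a biometric feature vector and optionally add a secret
            translation vector to further change the template (illustrative of the
            vector-translation idea, not the paper's exact construction)."""
            R = make_projection(seed, out_dim, feature_vec.size)
            t = np.zeros(out_dim) if translation is None else translation
            return R @ feature_vec + t

        # Similarity is approximately preserved: close faces stay close after RP.
        rng = np.random.default_rng(42)
        face_a = rng.normal(size=2000)                 # e.g. a vectorized face image
        face_b = face_a + 0.05 * rng.normal(size=2000)
        seed = 12345                                   # user-specific key
        ta, tb = transform_template(face_a, seed), transform_template(face_b, seed)
        print(np.linalg.norm(face_a - face_b), np.linalg.norm(ta - tb))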

  2. A Probabilistic Framework for Peptide and Protein Quantification from Data-Dependent and Data-Independent LC-MS Proteomics Experiments

    PubMed Central

    Richardson, Keith; Denny, Richard; Hughes, Chris; Skilling, John; Sikora, Jacek; Dadlez, Michał; Manteca, Angel; Jung, Hye Ryung; Jensen, Ole Nørregaard; Redeker, Virginie; Melki, Ronald; Langridge, James I.; Vissers, Johannes P.C.

    2013-01-01

    A probability-based quantification framework is presented for the calculation of relative peptide and protein abundance in label-free and label-dependent LC-MS proteomics data. The results are accompanied by credible intervals and regulation probabilities. The algorithm takes into account data uncertainties via Poisson statistics modified by a noise contribution that is determined automatically during an initial normalization stage. Protein quantification relies on assignments of component peptides to the acquired data. These assignments are generally of variable reliability and may not be present across all of the experiments comprising an analysis. It is also possible for a peptide to be identified to more than one protein in a given mixture. For these reasons the algorithm accepts a prior probability of peptide assignment for each intensity measurement. The model is constructed in such a way that outliers of any type can be automatically reweighted. Two discrete normalization methods can be employed. The first method is based on a user-defined subset of peptides, while the second method relies on the presence of a dominant background of endogenous peptides for which the concentration is assumed to be unaffected. Normalization is performed using the same computational and statistical procedures employed by the main quantification algorithm. The performance of the algorithm will be illustrated on example data sets, and its utility demonstrated for typical proteomics applications. The quantification algorithm supports relative protein quantification based on precursor and product ion intensities acquired by means of data-dependent methods, originating from all common isotopically-labeled approaches, as well as label-free ion intensity-based data-independent methods. PMID:22871168

  3. Assessing the significance of global and local correlations under spatial autocorrelation: a nonparametric approach.

    PubMed

    Viladomat, Júlia; Mazumder, Rahul; McInturff, Alex; McCauley, Douglas J; Hastie, Trevor

    2014-06-01

    We propose a method to test the correlation of two random fields when they are both spatially autocorrelated. In this scenario, the assumption of independence for the pair of observations in the standard test does not hold, and as a result we reject in many cases where there is no effect (the precision of the null distribution is overestimated). Our method recovers the null distribution taking into account the autocorrelation. It uses Monte-Carlo methods, and focuses on permuting, and then smoothing and scaling one of the variables to destroy the correlation with the other, while maintaining at the same time the initial autocorrelation. With this simulation model, any test based on the independence of two (or more) random fields can be constructed. This research was motivated by a project in biodiversity and conservation in the Biology Department at Stanford University. © 2014, The International Biometric Society.
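
    A toy sketch of the Monte Carlo recipe described above: permute one field to destroy its association with the other, then smooth and rescale it so that it approximately keeps its original autocorrelation, and use the resulting correlations as the null distribution. The Gaussian smoothing kernel and its bandwidth are simplifying assumptions rather than the authors' exact procedure.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def null_correlations(field_x, field_y, n_sim=999, sigma=3.0, rng=None):
            """Null distribution of corr(X, Y) for spatially autocorrelated fields:
            permute Y to destroy its association with X, then smooth and rescale it
            so that it keeps (approximately) its autocorrelation and variance."""
            rng = rng or np.random.default_rng()
            x = field_x.ravel()
            nulls = np.empty(n_sim)
            for k in range(n_sim):
                permuted = rng.permutation(field_y.ravel()).reshape(field_y.shape)
                smoothed = gaussian_filter(permuted, sigma=sigma)
                smoothed = ((smoothed - smoothed.mean()) / smoothed.std()
                            * field_y.std() + field_y.mean())
                nulls[k] = np.corrcoef(x, smoothed.ravel())[0, 1]
            return nulls

        # Two autocorrelated but unrelated fields: the naive test would over-reject.
        rng = np.random.default_rng(0)
        X = gaussian_filter(rng.normal(size=(80, 80)), 3.0)
        Y = gaussian_filter(rng.normal(size=(80, 80)), 3.0)
        obs = np.corrcoef(X.ravel(), Y.ravel())[0, 1]
        nulls = null_correlations(X, Y, rng=rng)
        p_value = (1 + np.sum(np.abs(nulls) >= abs(obs))) / (1 + len(nulls))
        print(obs, p_value)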

  4. Material point method of modelling and simulation of reacting flow of oxygen

    NASA Astrophysics Data System (ADS)

    Mason, Matthew; Chen, Kuan; Hu, Patrick G.

    2014-07-01

    Aerospace vehicles are continually being designed to sustain flight at higher speeds and higher altitudes than previously attainable. At hypersonic speeds, gases within a flow begin to chemically react and the fluid's physical properties are modified. It is desirable to model these effects within the Material Point Method (MPM). The MPM is a combined Eulerian-Lagrangian particle-based solver that calculates the physical properties of individual particles and uses a background grid for information storage and exchange. This study introduces chemically reacting flow modelling within the MPM numerical algorithm and illustrates a simple application using the AeroElastic Material Point Method (AEMPM) code. The governing equations of reacting flows are introduced and their direct application within an MPM code is discussed. A flow of 100% oxygen is illustrated and the results are compared with independently developed computational non-equilibrium algorithms. Observed trends agree well with results from an independently developed source.

  5. Specter: linear deconvolution for targeted analysis of data-independent acquisition mass spectrometry proteomics.

    PubMed

    Peckner, Ryan; Myers, Samuel A; Jacome, Alvaro Sebastian Vaca; Egertson, Jarrett D; Abelin, Jennifer G; MacCoss, Michael J; Carr, Steven A; Jaffe, Jacob D

    2018-05-01

    Mass spectrometry with data-independent acquisition (DIA) is a promising method to improve the comprehensiveness and reproducibility of targeted and discovery proteomics, in theory by systematically measuring all peptide precursors in a biological sample. However, the analytical challenges involved in discriminating between peptides with similar sequences in convoluted spectra have limited its applicability in important cases, such as the detection of single-nucleotide polymorphisms (SNPs) and alternative site localizations in phosphoproteomics data. We report Specter (https://github.com/rpeckner-broad/Specter), an open-source software tool that uses linear algebra to deconvolute DIA mixture spectra directly through comparison to a spectral library, thus circumventing the problems associated with typical fragment-correlation-based approaches. We validate the sensitivity of Specter and its performance relative to that of other methods, and show that Specter is able to successfully analyze cases involving highly similar peptides that are typically challenging for DIA analysis methods.
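
    The core linear-algebra step can be illustrated as below: a DIA mixture spectrum is expressed as a non-negative combination of library spectra, so the fitted coefficients attribute intensity to individual peptides. This is a simplified stand-in for Specter's actual algorithm, which is available at the URL above.

        import numpy as np
        from scipy.optimize import nnls

        # Columns of L are library fragment spectra (one per peptide precursor) on a
        # common m/z grid; s is one acquired DIA mixture spectrum on the same grid.
        rng = np.random.default_rng(0)
        n_bins, n_peptides = 500, 40
        L = np.abs(rng.normal(size=(n_bins, n_peptides)))
        true_coeff = np.zeros(n_peptides)
        true_coeff[[3, 17, 28]] = [5.0, 2.0, 1.0]          # three co-isolated peptides
        s = L @ true_coeff + 0.01 * np.abs(rng.normal(size=n_bins))

        # Non-negative least squares attributes the mixture intensity to library
        # entries, separating peptides even when their fragment ions interleave.
        coeff, residual = nnls(L, s)
        print(np.argsort(coeff)[::-1][:5], residual)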

  6. Opinion control in complex networks

    NASA Astrophysics Data System (ADS)

    Masuda, Naoki

    2015-03-01

    In many political elections, the electorate appears to be a composite of partisan and independent voters. Given that partisans are not likely to convert to a different party, an important goal for a political party could be to mobilize independent voters toward the party with the help of strong leadership, mass media, partisans, and the effects of peer-to-peer influence. Based on the exact solution of classical voter model dynamics in the presence of perfectly partisan voters (i.e., zealots), we propose a computational method that uses pinning control strategy to maximize the share of a party in a social network of independent voters. The party, corresponding to the controller or zealots, optimizes the nodes to be controlled given the information about the connectivity of independent voters and the set of nodes that the opposing party controls. We show that controlling hubs is generally a good strategy, but the optimized strategy is even better. The superiority of the optimized strategy is particularly eminent when the independent voters are connected as directed (rather than undirected) networks.

  7. Including foreshocks and aftershocks in time-independent probabilistic seismic hazard analyses

    USGS Publications Warehouse

    Boyd, Oliver S.

    2012-01-01

    Time‐independent probabilistic seismic‐hazard analysis treats each source as being temporally and spatially independent; hence foreshocks and aftershocks, which are both spatially and temporally dependent on the mainshock, are removed from earthquake catalogs. Yet, intuitively, these earthquakes should be considered part of the seismic hazard, capable of producing damaging ground motions. In this study, I consider the mainshock and its dependents as a time‐independent cluster, each cluster being temporally and spatially independent from any other. The cluster has a recurrence time of the mainshock; and, by considering the earthquakes in the cluster as a union of events, dependent events have an opportunity to contribute to seismic ground motions and hazard. Based on the methods of the U.S. Geological Survey for a high‐hazard site, the inclusion of dependent events causes ground motions that are exceeded at probability levels of engineering interest to increase by about 10% but could be as high as 20% if variations in aftershock productivity can be accounted for reliably.

  8. One-Year Efficacy Testing of Enabling Mothers to Prevent Pediatric Obesity through Web-Based Education and Reciprocal Determinism (EMPOWER) Randomized Control Trial

    ERIC Educational Resources Information Center

    Knowlden, Adam; Sharma, Manoj

    2016-01-01

    Background: The purpose of this study was to evaluate the efficacy of the Enabling Mothers to Prevent Pediatric Obesity through Web-Based Education and Reciprocal Determinism (EMPOWER) intervention at 1-year, postintervention follow-up. Method: A mixed between-within subjects design was used to evaluate the trial. Independent variables included a…

  9. The Effect of Dynamic Assessment on Adult Learners of Arabic: A Mixed-Method Study at the Defense Language Institute Foreign Language Center

    ERIC Educational Resources Information Center

    Fahmy, Mohsen M.

    2013-01-01

    Dynamic assessment (DA) is based on Vygotsky's (1978) sociocultural theory and his Zone of Proximal Development (ZPD). ZPD is the range of abilities bordered by the learner's assisted and independent performances. Previous studies showed promising results for DA in tutoring settings. However, they did not use proficiency-based rubrics to measure…

  10. Lung segmentation from HRCT using united geometric active contours

    NASA Astrophysics Data System (ADS)

    Liu, Junwei; Li, Chuanfu; Xiong, Jin; Feng, Huanqing

    2007-12-01

    Accurate lung segmentation from high-resolution CT images is a challenging task due to fine tracheal structures, missing boundary segments, and complex lung anatomy. One popular method is based on gray-level thresholding; however, its results are usually rough. A united geometric active contours model based on level sets is proposed for lung segmentation in this paper. In particular, this method combines local boundary information and a region statistics-based model synchronously: 1) the boundary term ensures the integrity of the lung tissue; 2) the region term makes the level set function evolve with global characteristics, independent of the initial settings. A penalizing energy term is introduced into the model, which forces the level set function to evolve without re-initialization. The method is found to be much more efficient for lung segmentation than other methods based only on boundary or region information. Results are shown by 3D lung surface reconstruction, which indicates that the method will play an important role in the design of computer-aided diagnostic (CAD) systems.

  11. Similar Estimates of Temperature Impacts on Global Wheat Yield by Three Independent Methods

    NASA Technical Reports Server (NTRS)

    Liu, Bing; Asseng, Senthold; Muller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.

    2016-01-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  12. Similar estimates of temperature impacts on global wheat yield by three independent methods

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; Rosenzweig, Cynthia; Aggarwal, Pramod K.; Alderman, Phillip D.; Anothai, Jakarat; Basso, Bruno; Biernath, Christian; Cammarano, Davide; Challinor, Andy; Deryng, Delphine; Sanctis, Giacomo De; Doltra, Jordi; Fereres, Elias; Folberth, Christian; Garcia-Vila, Margarita; Gayler, Sebastian; Hoogenboom, Gerrit; Hunt, Leslie A.; Izaurralde, Roberto C.; Jabloun, Mohamed; Jones, Curtis D.; Kersebaum, Kurt C.; Kimball, Bruce A.; Koehler, Ann-Kristin; Kumar, Soora Naresh; Nendel, Claas; O'Leary, Garry J.; Olesen, Jørgen E.; Ottman, Michael J.; Palosuo, Taru; Prasad, P. V. Vara; Priesack, Eckart; Pugh, Thomas A. M.; Reynolds, Matthew; Rezaei, Ehsan E.; Rötter, Reimund P.; Schmid, Erwin; Semenov, Mikhail A.; Shcherbak, Iurii; Stehfest, Elke; Stöckle, Claudio O.; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Thorburn, Peter; Waha, Katharina; Wall, Gerard W.; Wang, Enli; White, Jeffrey W.; Wolf, Joost; Zhao, Zhigan; Zhu, Yan

    2016-12-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  13. Determination of the optimal number of components in independent components analysis.

    PubMed

    Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N

    2018-03-01

    Independent components analysis (ICA) may be considered one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals, is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
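
    The random-split validation idea can be sketched in a few lines of Python. This is a minimal illustration loosely following the Random_ICA description above (split the samples into two random blocks, extract the same number of ICs from each block, and check how well the components can be matched across blocks); the function names, the matching rule and the 0.9 threshold are illustrative assumptions, not the published procedure.

        # Minimal sketch of a Random_ICA-style validation (illustrative, not the published method)
        import numpy as np
        from sklearn.decomposition import FastICA

        def random_ica_score(X, n_components, n_splits=10, seed=0):
            """Mean best-match correlation between ICs of two random halves of X."""
            rng = np.random.default_rng(seed)
            scores = []
            for _ in range(n_splits):
                idx = rng.permutation(X.shape[0])
                half = X.shape[0] // 2
                A, B = X[idx[:half]], X[idx[half:]]
                ica_a = FastICA(n_components=n_components, random_state=0).fit(A)
                ica_b = FastICA(n_components=n_components, random_state=0).fit(B)
                # cross-correlate every component of block A with every component of block B
                corr = np.abs(np.corrcoef(ica_a.components_, ica_b.components_)
                              [:n_components, n_components:])
                # match each A-component with its best-correlated B-component
                scores.append(corr.max(axis=1).mean())
            return float(np.mean(scores))

        # Keep the largest number of ICs whose cross-block correlation stays high,
        # e.g. above an (illustrative) 0.9 threshold:
        # best_n = max(n for n in range(2, 15) if random_ica_score(X, n) > 0.9)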

  14. A method for operative quantitative interpretation of multispectral images of biological tissues

    NASA Astrophysics Data System (ADS)

    Lisenko, S. A.; Kugeiko, M. M.

    2013-10-01

    A method has been developed for rapid (operative) retrieval of spatial distributions of biophysical parameters of a biological tissue from its multispectral image. The method is based on multiple regressions between linearly independent components of the diffuse reflection spectrum of the tissue and the unknown parameters. The possibilities of the method are illustrated by an example of determining biophysical parameters of the skin (concentrations of melanin, hemoglobin and bilirubin, blood oxygenation, and scattering coefficient of the tissue). Examples of quantitative interpretation of the experimental data are presented.
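
    A minimal sketch of the regression idea above, assuming the reflectance spectra are reduced to a few linearly independent components (here via PCA) and each biophysical parameter is regressed on those components; the arrays, component count and parameter list are illustrative placeholders, not the authors' calibration data.

        # Sketch: independent spectral components + multiple regression (placeholder data)
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        # R_train: (n_spectra, n_wavelengths) simulated diffuse reflectance spectra
        # params_train: (n_spectra, n_params), e.g. melanin, hemoglobin, oxygenation, scattering
        R_train = np.random.rand(500, 64)
        params_train = np.random.rand(500, 4)

        model = make_pipeline(PCA(n_components=6), LinearRegression())
        model.fit(R_train, params_train)

        # Each pixel spectrum of a multispectral image is then mapped to parameter estimates
        R_pixels = np.random.rand(1000, 64)
        param_maps = model.predict(R_pixels)    # (n_pixels, n_params)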

  15. New error calibration tests for gravity models using subset solutions and independent data - Applied to GEM-T3

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.

    1993-01-01

    A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto satellite observations, and the results of these projections are then compared with data residuals computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3 to show that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.

  16. Gradient-free MCMC methods for dynamic causal modelling

    DOE PAGES

    Sengupta, Biswa; Friston, Karl J.; Penny, Will D.

    2015-03-14

    Here, we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-based samplers are more efficient compared with the random walk Metropolis sampler or slice-sampling; yet adaptive MCMC sampling is more promising in terms of compute time. Slice-sampling yields the highest number of independent samples from the target density -- albeit at an almost 1000% increase in computational time, in comparison to the most efficient algorithm (i.e., the adaptive MCMC sampler).
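
    The comparison metric above (independent samples per unit computational time) can be illustrated with a crude effective-sample-size estimate; the autocorrelation cutoff used below is a simplification and not the diagnostic used in the paper.

        # Sketch: crude effective sample size (ESS) from the chain autocorrelation
        import numpy as np

        def effective_sample_size(chain):
            """ESS estimate for a 1D chain, truncating the ACF at the first non-positive lag."""
            x = np.asarray(chain, dtype=float) - np.mean(chain)
            n = x.size
            acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * np.var(chain))
            tau = 1.0
            for rho in acf[1:]:
                if rho <= 0:
                    break
                tau += 2.0 * rho
            return n / tau

        # Samplers are then compared by effective_sample_size(chain) / wall_clock_seconds.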

  17. Halftoning method for the generation of motion stimuli

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.; Stone, Leland S.

    1989-01-01

    This paper describes a novel computer-graphic technique for the generation of a broad class of motion stimuli for vision research, which uses color table animation in conjunction with a single base image. Using this technique, contrast and temporal frequency can be varied with a negligible amount of computation, once a single base image is produced. Since only two bit planes are needed to display a single drifting grating, an eight-bit/pixel display can be used to generate four-component plaids, in which each component of the plaid has independently programmable contrast and temporal frequency. Because the contrast and temporal frequencies of the various components are mutually independent, a large number of two-dimensional stimulus motions can be produced from a single image file.

  18. Boosting specificity of MEG artifact removal by weighted support vector machine.

    PubMed

    Duan, Fang; Phothisonothai, Montri; Kikuchi, Mitsuru; Yoshimura, Yuko; Minabe, Yoshio; Watanabe, Kastumi; Aihara, Kazuyuki

    2013-01-01

    An automatic artifact removal method for magnetoencephalogram (MEG) recordings is presented in this paper. The proposed method is based on independent components analysis (ICA) and a support vector machine (SVM). However, unlike previous studies, we consider two factors that influence the performance. First, the class imbalance of the independent components (ICs) of MEG is handled by a weighted SVM. Second, instead of simply setting a fixed weight for each class, a re-weighting scheme is used to preserve useful MEG ICs. Experimental results on a manually marked MEG dataset showed that the proposed method could correctly distinguish the artifacts from the MEG ICs. Meanwhile, 99.72% ± 0.67% of MEG ICs were preserved. The classification accuracy was 97.91% ± 1.39%. In addition, the method was found not to be sensitive to individual differences. The cross-validation (leave-one-subject-out) results showed an average accuracy of 97.41% ± 2.14%.
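
    A minimal sketch of the class-weighting step, assuming each independent component has already been summarized by a feature vector; the features, weights and data below are illustrative, and the paper's re-weighting scheme is not reproduced.

        # Sketch: weighted SVM for imbalanced artifact-vs-brain IC classification (illustrative)
        import numpy as np
        from sklearn.svm import SVC

        # X: (n_ICs, n_features) features describing each independent component
        # y: 1 for artifact ICs (minority class), 0 for brain ICs (majority class)
        X = np.random.rand(200, 10)
        y = (np.random.rand(200) < 0.1).astype(int)

        # Penalize mistakes on brain ICs more strongly so that useful MEG ICs are preserved
        clf = SVC(kernel="rbf", class_weight={0: 5.0, 1: 1.0}).fit(X, y)
        artifact_flags = clf.predict(X)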

  19. Determination of Problematic ICD-9-CM Subcategories for Further Study of Coding Performance: Delphi Method

    PubMed Central

    Zeng, Xiaoming; Bell, Paul D

    2011-01-01

    In this study, we report on a qualitative method known as the Delphi method, used in the first part of a research study for improving the accuracy and reliability of ICD-9-CM coding. A panel of independent coding experts interacted methodically to determine that the three criteria to identify a problematic ICD-9-CM subcategory for further study were cost, volume, and level of coding confusion caused. The Medicare Provider Analysis and Review (MEDPAR) 2007 fiscal year data set as well as suggestions from the experts were used to identify coding subcategories based on cost and volume data. Next, the panelists performed two rounds of independent ranking before identifying Excisional Debridement as the subcategory that causes the most confusion among coders. As a result, they recommended it for further study aimed at improving coding accuracy and reducing variation. This framework can be adopted at different levels for similar studies in need of a schema for determining problematic subcategories of code sets. PMID:21796264

  20. Development of Boundary Condition Independent Reduced Order Thermal Models using Proper Orthogonal Decomposition

    NASA Astrophysics Data System (ADS)

    Raghupathy, Arun; Ghia, Karman; Ghia, Urmila

    2008-11-01

    Compact Thermal Models (CTMs) to represent IC packages have traditionally been developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides complete thermal information accurately with fewer computational resources can be used effectively in system-level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the number of degrees of freedom, or variables, in the computations for such a problem. POD, along with the Galerkin projection, allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary-condition-independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.
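
    A minimal sketch of the POD step, assuming full-order temperature snapshots are available as columns of a matrix; the Galerkin projection of the governing equations onto the retained modes is problem-specific and only indicated in the comments. Array sizes and the energy threshold are illustrative.

        # Sketch: POD basis from temperature snapshots via SVD (placeholder data)
        import numpy as np

        # snapshots: (n_nodes, n_snapshots) temperature fields from full-order simulations
        snapshots = np.random.rand(5000, 200)

        U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.9999)) + 1     # modes capturing ~99.99% of the energy
        Phi = U[:, :r]                                   # reduced POD basis (n_nodes, r)

        # Reduced-order state: T(t) ≈ Phi @ a(t); a(t) follows from Galerkin projection of the
        # heat equation onto Phi. For reconstruction of a known field:
        a = Phi.T @ snapshots[:, 0]
        T_approx = Phi @ a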

  1. Superpixel-based spatial amplitude and phase modulation using a digital micromirror device.

    PubMed

    Goorden, Sebastianus A; Bertolotti, Jacopo; Mosk, Allard P

    2014-07-28

    We present a superpixel method for full spatial phase and amplitude control of a light beam using a digital micromirror device (DMD) combined with a spatial filter. We combine square regions of nearby micromirrors into superpixels by low pass filtering in a Fourier plane of the DMD. At each superpixel we are able to independently modulate the phase and the amplitude of light, while retaining a high resolution and the very high speed of a DMD. The method achieves a measured fidelity F = 0.98 for a target field with fully independent phase and amplitude at a resolution of 8 × 8 pixels per diffraction limited spot. For the LG10 orbital angular momentum mode the calculated fidelity is F = 0.99993, using 768 × 768 DMD pixels. The superpixel method reduces the errors when compared to the state of the art Lee holography method for these test fields by 50% and 18%, with a comparable light efficiency of around 5%. Our control software is publicly available.

  2. Cross-evaluation of ground-based, multi-satellite and reanalysis precipitation products: Applicability of the Triple Collocation method across Mainland China

    NASA Astrophysics Data System (ADS)

    Li, Changming; Tang, Guoqiang; Hong, Yang

    2018-07-01

    Evaluating the reliability of satellite and reanalysis precipitation products is critical but challenging over ungauged or poorly gauged regions. The Triple Collocation (TC) method is a reliable approach to estimate the accuracy of any three independent inputs in the absence of truth values. This study assesses the uncertainty of three types of independent precipitation products, i.e., satellite-based, ground-based and model reanalysis, over Mainland China using the TC method. The ground-based data set is the China Gauge-based Daily Precipitation Analysis (CGDPA). The reanalysis data set is the ERA-Interim product of the European Centre for Medium-Range Weather Forecasts. The satellite-based products include five mainstream satellite products. The comparison and evaluation are conducted at 0.25° and daily resolutions from 2013 to 2015. First, the effectiveness of the TC method is evaluated in South China, which has a dense gauge network. The results demonstrate that the TC method is reliable because the correlation coefficient (CC) and root mean square error (RMSE) derived from TC are close to those derived from ground observations, with only 9% and 7% mean relative differences, respectively. Then, the TC method is applied in Mainland China, with special attention paid to the Tibetan Plateau (TP), known as the Earth's third pole, which has few ground stations. Results indicate that (1) the overall performance of IMERG is better than that of the other satellite products over Mainland China, followed by 3B42V7, CMORPH-CRT and PERSIANN-CDR; (2) in the TP, CGDPA shows the best overall performance over gauged grid cells; however, over ungauged regions, IMERG and ERA-Interim slightly outperform CGDPA with similar RMSE but higher mean CC (0.63, 0.61, and 0.58, respectively). This highlights the strengths and potential of remote sensing and reanalysis data over the TP and reconfirms the inherent uncertainty of CGDPA due to interpolation from sparsely gauged data. The study concludes that the TC method provides not only reliable cross-validation results over Mainland China but also a new perspective for comparatively assessing multi-source precipitation products, particularly over poorly gauged regions such as the TP.
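
    The covariance-notation form of the TC estimator can be sketched as follows, assuming three collocated, independent estimates with mutually uncorrelated errors; the synthetic data and the purely additive error model are illustrative simplifications of the study's setup.

        # Sketch: Triple Collocation error-variance estimates from pairwise covariances
        import numpy as np

        def triple_collocation_errvar(x, y, z):
            C = np.cov(np.vstack([x, y, z]))
            var_ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
            var_ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
            var_ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
            return var_ex, var_ey, var_ez

        # Synthetic example: one truth series plus independent noise for each product
        rng = np.random.default_rng(0)
        truth = rng.gamma(2.0, 3.0, size=1000)
        sat   = truth + rng.normal(0, 2.0, 1000)   # satellite product
        gauge = truth + rng.normal(0, 1.0, 1000)   # gauge analysis
        rean  = truth + rng.normal(0, 3.0, 1000)   # reanalysis
        print(triple_collocation_errvar(sat, gauge, rean))   # roughly (4.0, 1.0, 9.0)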

  3. ADSA Foundation Scholar Award: Trends in culture-independent methods for assessing dairy food quality and safety: emerging metagenomic tools.

    PubMed

    Yeung, Marie

    2012-12-01

    Enhancing the quality and safety of dairy food is critical to maintaining the competitiveness of dairy products in the food and beverage market and in reinforcing consumer confidence in the dairy industry. Raw milk quality has a significant effect on finished product quality. Several microbial groups found in raw milk have been shown to adversely affect the shelf life of pasteurized milk. Current microbiological criteria used to define milk quality are based primarily on culture-dependent methods, some of which are perceived to lack the desired sensitivity and specificity. To supplement traditional methods, culture-independent methods are increasingly being used to identify specific species or microbial groups, and to detect indicator genes or proteins in raw milk or dairy products. Some molecular subtyping techniques have been developed to track the transmission of microbes in dairy environments. The burgeoning "-omics" technologies offer new and exciting opportunities to enhance our understanding of food quality and safety in relation to microbes. Metagenomics has the potential to characterize microbial diversity, detect nonculturable microbes, and identify unique sequences or other factors associated with dairy product quality and safety. In this review, fluid milk will be used as the primary example to examine the adequacy and validity of conventional methods, the current trend of culture-independent methods, and the potential applications of metagenomics in dairy food research. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  4. La Aplicacion de las Bases de Datos al Estudio Historico del Espanol (The Application of Databases to the Historical Study of Spanish).

    ERIC Educational Resources Information Center

    Nadal, Gloria Claveria; Lancis, Carlos Sanchez

    1997-01-01

    Notes that the employment of databases to the study of the history of a language is a method that allows for substantial improvement in investigative quality. Illustrates this with the example of the application of this method to two studies of the history of Spanish developed in the Language and Information Seminary of the Independent University…

  5. Offline signature verification using convolution Siamese network

    NASA Astrophysics Data System (ADS)

    Xing, Zi-Jian; Yin, Fei; Wu, Yi-Chao; Liu, Cheng-Lin

    2018-04-01

    This paper presents an offline signature verification approach using a convolutional Siamese neural network. Unlike existing methods, which consider feature extraction and metric learning as two independent stages, we adopt a deep-learning-based framework which combines the two stages and can be trained end-to-end. The experimental results on two public offline databases (GPDSsynthetic and CEDAR) demonstrate the superiority of our method on the offline signature verification problem.

  6. Content-based intermedia synchronization

    NASA Astrophysics Data System (ADS)

    Oh, Dong-Young; Sampath-Kumar, Srihari; Rangan, P. Venkat

    1995-03-01

    Inter-media synchronization methods developed until now have been based on syntactic timestamping of video frames and audio samples. These methods are not fully appropriate for the synchronization of multimedia objects which may have to be accessed individually by their contents, e.g. content-based data retrieval. We propose a content-based multimedia synchronization scheme in which a media stream is viewed as a hierarchical composition of smaller objects that are logically structured based on their contents, and synchronization is achieved by deriving temporal relations among the logical units of media objects. Content-based synchronization offers several advantages, such as elimination of the need for time stamping, freedom from limitations of jitter, synchronization of independently captured media objects in video editing, and compensation for inherent asynchronies in capture times of video and audio.

  7. Prediction system of hydroponic plant growth and development using algorithm Fuzzy Mamdani method

    NASA Astrophysics Data System (ADS)

    Sudana, I. Made; Purnawirawan, Okta; Arief, Ulfa Mediaty

    2017-03-01

    Hydroponics is a method of farming without soil. One hydroponic plant is watercress (Nasturtium officinale). The development and growth of hydroponic watercress are influenced by nutrient levels, acidity and temperature. These independent variables can be used as the system's input variables to predict the level of plant growth and development. The prediction system uses the fuzzy Mamdani algorithm. The system was built on the Fuzzy Inference System (FIS) functionality of the Fuzzy Logic Toolbox (FLT) in MATLAB R2007b. FIS is a computing system that works on the principle of fuzzy reasoning, which is similar to human reasoning. Basically, FIS consists of four units: a fuzzification unit, a fuzzy logic reasoning unit, a knowledge base unit and a defuzzification unit. In addition, the effect of the independent variables on plant growth and development can be visualized with the three-dimensional FIS output surface diagram, and statistical tests on the data from the prediction system are performed with the multiple linear regression method, including multiple linear regression analysis, the T test, the F test, the coefficient of determination and predictor contributions, calculated using SPSS (Statistical Product and Service Solutions) software.

  8. Implementation of a Serial Replica Exchange Method in a Physics-Based United-Residue (UNRES) Force Field

    PubMed Central

    Shen, Hujun; Czaplewski, Cezary; Liwo, Adam; Scheraga, Harold A.

    2009-01-01

    The kinetic-trapping problem in simulating protein folding can be overcome by using a Replica Exchange Method (REM). However, in implementing REM in molecular dynamics simulations, synchronization between processors on parallel computers is required, and communication between processors limits its ability to sample conformational space in a complex system efficiently. To minimize communication between processors during the simulation, a Serial Replica Exchange Method (SREM) has been proposed recently by Hagan et al. (J. Phys. Chem. B 2007, 111, 1416–1423). Here, we report the implementation of this new SREM algorithm with our physics-based united-residue (UNRES) force field. The method has been tested on the protein 1E0L with a temperature-independent UNRES force field and on terminally blocked deca-alanine (Ala10) and 1GAB with the recently introduced temperature-dependent UNRES force field. With the temperature-independent force field, SREM reproduces the results of REM but is more efficient in terms of wall-clock time and scales better on distributed-memory machines. However, exact application of SREM to the temperature-dependent UNRES algorithm requires the determination of a four-dimensional distribution of UNRES energy components instead of a one-dimensional energy distribution for each temperature, which is prohibitively expensive. Hence, we assumed that the temperature dependence of the force field can be ignored for neighboring temperatures. This version of SREM worked for Ala10 which is a simple system but failed to reproduce the thermodynamic results as well as regular REM on the more complex 1GAB protein. Hence, SREM can be applied to the temperature-independent but not to the temperature-dependent UNRES force field. PMID:20011673
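
    Both REM and SREM rest on the standard temperature-swap Metropolis criterion, sketched below; the UNRES-specific energy terms and the temperature dependence of the force field discussed above are not modeled here.

        # Sketch: replica-exchange temperature-swap acceptance rule (generic, not UNRES-specific)
        import math
        import random

        def accept_swap(beta_i, beta_j, E_i, E_j):
            """Accept a swap between replicas at inverse temperatures beta = 1/(k_B T)
            with potential energies E_i, E_j, using min(1, exp[(beta_i - beta_j)(E_i - E_j)])."""
            delta = (beta_i - beta_j) * (E_i - E_j)
            return delta >= 0 or random.random() < math.exp(delta)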

  9. Unified design of sinusoidal-groove fused-silica grating.

    PubMed

    Feng, Jijun; Zhou, Changhe; Cao, Hongchao; Lu, Peng

    2010-10-20

    A general design rule of deep-etched subwavelength sinusoidal-groove fused-silica grating as a highly efficient polarization-independent or polarization-selective device is studied based on the simplified modal method, which shows that the device structure depends little on the incident wavelength, but mainly on the ratio of groove depth to incident wavelength and the ratio of wavelength to grating period. These two ratios could be used as the design guidelines for wavelength-independent structure from deep ultraviolet to far infrared. The optimized grating profile with a different function as a polarizing beam splitter, a polarization-independent two-port beam splitter, or a polarization-independent grating with high efficiency of -1st order is obtained at a wavelength of 1064 nm, and verified by using the rigorous coupled-wave analysis. The performance of the sinusoidal grating is better than a conventional rectangular one, which could be useful for practical applications.

  10. Functional Performance in Young Australian Children with Achondroplasia

    ERIC Educational Resources Information Center

    Ireland, Penelope Jane; McGill, James; Zankl, Andreas; Ware, Robert S.; Pacey, Verity; Ault, Jenny; Savarirayan, Ravi; Sillence, David; Thompson, Elizabeth M.; Townshend, Sharron; Johnston, Leanne Marie

    2011-01-01

    Aim: The aim of this study was to determine population-specific developmental milestones for independence in self-care, mobility, and social cognitive skills in children with achondroplasia, the most common skeletal dysplasia. Methods: Population-based recruitment from October 2008 to October 2010 identified 44 Australian children with…

  11. Judaism and Montessori

    ERIC Educational Resources Information Center

    Coates, Miriam

    2011-01-01

    Judaism, as a religion and a culture, places a high value on education and scholarly pursuits. As Jewish schools of varying affiliations and denominations look for ways to improve and revive programming, some are exploring the Montessori method. Based on education that follows the child, Montessori focuses on respect, independence, and preparing…

  12. Probing the nature and resistance of the molecule-electrode contact in SAM-based junctions.

    PubMed

    Sangeeth, C S Suchand; Wan, Albert; Nijhuis, Christian A

    2015-07-28

    It is challenging to quantify the contact resistance and to determine the nature of the molecule-electrode contacts in molecular two-terminal junctions. Here we show that potentiodynamic and temperature dependent impedance measurements give insights into the nature of the SAM-electrode interface and other bottlenecks of charge transport (the capacitance of the SAM (C(SAM)) and the resistance of the SAM (R(SAM))), unlike DC methods, independently of each other. We found that the resistance of the top-electrode-SAM contact for junctions with the form of Ag(TS)-SC(n)//GaO(x)/EGaIn with n = 10, 12, 14, 16 or 18 is bias and temperature independent and hence Ohmic (non-rectifying) in nature, and is orders of magnitude smaller than R(SAM). The C(SAM) and R(SAM) are independent of the temperature, indicating that the mechanism of charge transport in these SAM-based junctions is coherent tunneling and the charge carrier trapping at the interfaces is negligible.

  13. Affected States Soft Independent Modeling by Class Analogy from the Relation Between Independent Variables, Number of Independent Variables and Sample Size

    PubMed Central

    Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, İrem Ersöz

    2013-01-01

    Objective: The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to examine whether the method is affected by the number of independent variables, the relationship between variables and the sample size. Study Design: Simulation study. Material and Methods: The SIMCA model is performed in two stages. Simulations were done to determine whether the method is influenced by the number of independent variables, the relationship between variables and the sample size. The conditions considered had equal sample sizes in both groups of 30, 100 or 1000 samples; 2, 3, 5, 10, 50 or 100 variables; and relationships between variables that were quite high, medium or quite low. Results: The average classification accuracies of the simulations, which were carried out 1000 times for each condition of the trial plan, are given as tables. Conclusion: Diagnostic accuracy increases as the number of independent variables increases. SIMCA is a method that can be used when the relationship between variables is quite high, the number of independent variables is large and there are outlier values in the data. PMID:25207065

  14. Determining the optimal number of independent components for reproducible transcriptomic data analysis.

    PubMed

    Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei

    2017-09-11

    Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.

  15. Web-based, virtual course units as a didactic concept for medical teaching.

    PubMed

    Schultze-Mosgau, Stefan; Zielinski, Thomas; Lochner, Jürgen

    2004-06-01

    The objective was to develop a web-based, virtual series of lectures for evidence-based, standardized knowledge transfer independent of location and time with possibilities for interactive participation and a concluding web-based online examination. Within the framework of a research project, specific Intranet and Internet capable course modules were developed together with a concluding examination. The concept of integrating digital and analogue course units supported by sound was based on FlashCam (Nexus Concepts), Flash MX (Macromedia), HTML and JavaScript. A Web server/SGI Indigo Unix server was used as a platform by the course provider. A variety of independent formats (swf, avi, mpeg, DivX, etc.) were integrated in the individual swf modules. An online examination was developed to monitor the learning effect. The examination papers are automatically forwarded by email after completion. The results are also returned to the user automatically after they have been processed by a key program and an evaluation program. The system requirements for the user PC have deliberately been kept low (Internet Explorer 5.0, Flash-Player 6, 56 kbit/s modem, 200 MHz PC). Navigation is intuitive. Users were provided with a technical online introduction and a FAQ list. Eighty-two students of dentistry in their 3rd to 5th years of study completed a questionnaire to assess the course content and the user friendliness (SPSS V11) with grades 1 to 6 (1 = 'excellent' and 6 = 'unsatisfactory'). The course units can be viewed under the URL: http://giga.rrze.uni-erlangen.de/movies/MKG/trailer and URL: http://giga.rrze.uni-erlangen.de/movies/MKG/demo/index. Some 89% of the students gave grades 1 (excellent) and 2 (good) for accessibility independent of time and 83% for access independent of location. Grades 1 and 2 were allocated for an objectivization of the knowledge transfer by 67% of the students and for the use of video sequences for demonstrating surgical techniques by 91% of the students. The course units were used as an optional method of studying by 87% of the students; 76% of the students made use of this facility from home; 83% of the students used Internet Explorer as a browser; 60% used online streaming and 35% downloading as the preferred method for data transfer. The course units contribute to an evidence-based objectivization of multimedia knowledge transfer independent of time and location. Online examinations permit automatic monitoring and evaluation of the learning effect. The modular structure permits easy updating of course contents. Hyperlinks with literature sources facilitate study.

  16. Multi-Source Learning for Joint Analysis of Incomplete Multi-Modality Neuroimaging Data

    PubMed Central

    Yuan, Lei; Wang, Yalin; Thompson, Paul M.; Narayan, Vaibhav A.; Ye, Jieping

    2013-01-01

    Incomplete data present serious problems when integrating large-scale brain imaging data sets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. We address this problem by proposing two novel learning methods where all the samples (with at least one available data source) can be used. In the first method, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. Our second method learns a base classifier for each data source independently, based on which we represent each source using a single column of prediction scores; we then estimate the missing prediction scores, which, combined with the existing prediction scores, are used to build a multi-source fusion model. To illustrate the proposed approaches, we classify patients from the ADNI study into groups with Alzheimer’s disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI’s 780 participants (172 AD, 397 MCI, 211 Normal) have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithms. Comprehensive experiments show that our proposed methods yield stable and promising results. PMID:24014189

  17. Analysis of the semi-permanent house in Merauke city in terms of aesthetic value in architecture

    NASA Astrophysics Data System (ADS)

    Topan, Anton; Octavia, Sari; Soleman, Henry

    2018-05-01

    Semi-permanent houses, also called “Rumah Kancingan”, are the houses most commonly found in Merauke city. They are called semi-permanent because the main structure is wood even though the walls are brick. This research analyzes the semi-permanent house in terms of aesthetic value. It is a qualitative study, with data collected through a questionnaire, direct field observation and a literature study. The questionnaire data were then processed with SPSS to obtain the influence of the independent variables on the dependent variable. It was found that color, ornament, door-window shape and roof shape (the independent variables) account for 97.1% of the influence on the aesthetics of the semi-permanent house, and based on the SPSS coefficient output the independent variables have p-values < 0.05, which means they have a significant effect on the aesthetic variable. The semi-permanent and wooden-structure variables give an effect of 98.6% on aesthetics, and based on the SPSS coefficient results these variables also have p-values < 0.05, which means they have a significant effect on the aesthetic variable.

  18. Fast label-free detection of Legionella spp. in biofilms by applying immunomagnetic beads and Raman spectroscopy.

    PubMed

    Kusić, Dragana; Rösch, Petra; Popp, Jürgen

    2016-03-01

    Legionellae colonize biofilms, can form biofilms by themselves and multiply intracellularly within the protozoa commonly found in water distribution systems. Approximately half of the known species are pathogenic and have been connected to severe multisystem Legionnaires' disease. The detection methods for Legionella spp. in water samples are still based on cultivation, which is time-consuming due to the slow growth of these bacteria. Here, we developed a cultivation-independent, label-free and fast detection method for legionellae in a biofilm matrix based on the Raman spectroscopic analysis of single cells isolated via immunomagnetic separation (IMS). A database comprising the Raman spectra of single bacterial cells captured and separated from the biofilms formed by each species was used to build the identification method based on a support vector machine (SVM) discriminative classifier. The complete method allows the detection of Legionella spp. in 100 min. Cross-reactivity of Legionella spp.-specific immunomagnetic beads with the other studied genera was tested; only small amounts of Pseudomonas aeruginosa, Klebsiella pneumoniae and Escherichia coli cells, compared to the initial number of cells, were isolated by the immunobeads. Nevertheless, the Raman spectra collected from isolated non-targeted bacteria were well-discriminated from the Raman spectra collected from isolated Legionella cells, whereby the Raman spectra of the independent dataset of Legionella strains were assigned with an accuracy of 98.6%. In addition, Raman spectroscopy was also used to differentiate between isolated Legionella species. Copyright © 2016 Elsevier GmbH. All rights reserved.

  19. Quantitative Assessment of In-solution Digestion Efficiency Identifies Optimal Protocols for Unbiased Protein Analysis*

    PubMed Central

    León, Ileana R.; Schwämmle, Veit; Jensen, Ole N.; Sprenger, Richard R.

    2013-01-01

    The majority of mass spectrometry-based protein quantification studies uses peptide-centric analytical methods and thus strongly relies on efficient and unbiased protein digestion protocols for sample preparation. We present a novel objective approach to assess protein digestion efficiency using a combination of qualitative and quantitative liquid chromatography-tandem MS methods and statistical data analysis. In contrast to previous studies we employed both standard qualitative as well as data-independent quantitative workflows to systematically assess trypsin digestion efficiency and bias using mitochondrial protein fractions. We evaluated nine trypsin-based digestion protocols, based on standard in-solution or on spin filter-aided digestion, including new optimized protocols. We investigated various reagents for protein solubilization and denaturation (dodecyl sulfate, deoxycholate, urea), several trypsin digestion conditions (buffer, RapiGest, deoxycholate, urea), and two methods for removal of detergents before analysis of peptides (acid precipitation or phase separation with ethyl acetate). Our data-independent quantitative liquid chromatography-tandem MS workflow quantified over 3700 distinct peptides with 96% completeness between all protocols and replicates, with an average 40% protein sequence coverage and an average of 11 peptides identified per protein. Systematic quantitative and statistical analysis of physicochemical parameters demonstrated that deoxycholate-assisted in-solution digestion combined with phase transfer allows for efficient, unbiased generation and recovery of peptides from all protein classes, including membrane proteins. This deoxycholate-assisted protocol was also optimal for spin filter-aided digestions as compared with existing methods. PMID:23792921

  20. Improving MEG source localizations: an automated method for complete artifact removal based on independent component analysis.

    PubMed

    Mantini, D; Franciotti, R; Romani, G L; Pizzella, V

    2008-03-01

    The major limitation for the acquisition of high-quality magnetoencephalography (MEG) recordings is the presence of disturbances of physiological and technical origins: eye movements, cardiac signals, muscular contractions, and environmental noise are serious problems for MEG signal analysis. In recent years, multi-channel MEG systems have undergone rapid technological developments in terms of noise reduction, and many processing methods have been proposed for artifact rejection. Independent component analysis (ICA) has already been shown to be an effective and generally applicable technique for concurrently removing artifacts and noise from MEG recordings. However, no standardized automated system based on ICA has become available so far, because of the intrinsic difficulty in the reliable categorization of the source signals obtained with this technique. In this work, approximate entropy (ApEn), a measure of data regularity, is successfully used for the classification of the signals produced by ICA, allowing for an automated artifact rejection. The proposed method has been tested using MEG data sets collected during somatosensory, auditory and visual stimulation. It was demonstrated to be effective in attenuating both biological artifacts and environmental noise, in order to reconstruct clear signals that can be used for improving brain source localizations.
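
    The regularity measure used above for the automatic classification, approximate entropy, can be sketched directly; the embedding length m, tolerance r and any decision threshold are the usual generic choices, not the settings of the paper.

        # Sketch: approximate entropy (ApEn) of a 1D signal, e.g. an ICA source component
        import numpy as np

        def approximate_entropy(x, m=2, r_factor=0.2):
            x = np.asarray(x, dtype=float)
            n = x.size
            r = r_factor * np.std(x)

            def phi(m):
                # all length-m templates of the signal
                templ = np.array([x[i:i + m] for i in range(n - m + 1)])
                # Chebyshev distance between every pair of templates (self-matches included)
                dist = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
                counts = np.mean(dist <= r, axis=1)
                return np.mean(np.log(counts))

            return phi(m) - phi(m + 1)

        # Lower ApEn means a more regular signal; artifact components tend to score lower.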

  1. Simulated color: a diagnostic tool for skin lesions like port-wine stain

    NASA Astrophysics Data System (ADS)

    Randeberg, Lise L.; Svaasand, Lars O.

    2001-05-01

    A device independent method for skin color visualization has been developed. Colors reconstructed from a reflectance spectrum are presented on a computer screen by sRGB (standard Red Green Blue) color coordinates. The colors are presented as adjacent patches surrounded by a medium grey border. CIELAB color coordinates and the CIE (International Commission on Illumination) color difference ΔE are computed. The change in skin color due to a change in average blood content or scattering properties in dermis is investigated. This is done by analytical simulations based on the diffusion approximation. It is found that an 11% change in average blood content and a 15% change in scattering properties will give a visible color change. A supposed visibility limit for ΔE is given. This value is based on experimental testing and the known properties of the human visual system. This limit value can be used as a tool to determine when to terminate laser treatment of port-wine stain due to low treatment response, i.e. low ΔE between treatments. The visualization method presented seems promising for medical applications as port-wine stain diagnostics. The method gives good possibilities for electronic transfer of data between clinics because it is device independent.

  2. A comparison of gradual sedation levels using the Comfort-B scale and bispectral index in children on mechanical ventilation in the pediatric intensive care unit

    PubMed Central

    Silva, Cláudia da Costa; Alves, Marta Maria Osório; El Halal, Michel Georges dos Santos; Pinheiro, Sabrina dos Santos; Carvalho, Paulo Roberto Antonacci

    2013-01-01

    Objective Compare the scores resulting from the Comfort-B scale with the bispectral index in children in an intensive care unit. Methods Eleven children between the ages of 1 month and 16 years requiring mechanical ventilation and sedation were simultaneously classified based on the bispectral index and the Comfort-B scale. Their behavior was recorded using digital photography, and the record was later evaluated by three independent evaluators. Agreement tests (Bland-Altman and Kappa) were then performed. The correlation between the two methods (Pearson correlation) was tested. Results In total, 35 observations were performed on 11 patients. Based on the Kappa coefficient, the agreement among evaluators ranged from 0.56 to 0.75 (p<0.001). There was a positive and consistent association between the bispectral index and the Comfort-B scale [r=0.424 (p=0.011) to r=0.498 (p=0.002)]. Conclusion Due to the strong correlation between the independent evaluators and the consistent correlation between the two methods, the results suggest that the Comfort-B scale is reproducible and useful in classifying the level of sedation in children requiring mechanical ventilation. PMID:24553512

  3. Study on the medical meteorological forecast of the number of hypertension inpatient based on SVR

    NASA Astrophysics Data System (ADS)

    Zhai, Guangyu; Chai, Guorong; Zhang, Haifeng

    2017-06-01

    The purpose of this study is to build a hypertension prediction model by examining the meteorological factors for hypertension incidence. The method selects the standardized data of relative humidity, air temperature, visibility, wind speed and air pressure of Lanzhou from 2010 to 2012 (calculating the maximum, minimum and average values with 5 days as a unit) as the input variables of Support Vector Regression (SVR) and the standardized data of hypertension incidence for the same period as the output dependent variable; the optimal prediction parameters are obtained by a cross-validation algorithm, and then, through SVR learning and training, an SVR forecast model for hypertension incidence is built. The results show that the hypertension prediction model has 15 input independent variables, a training accuracy of 0.005 and a final error of 0.0026389. The forecast accuracy of the SVR model is 97.1429%, which is higher than those of the statistical forecast equation and the neural network prediction method. It is concluded that the SVR model provides a new method for hypertension prediction, with simple calculation, small error, and better historical sample fitting and independent-sample forecasting capability.
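
    A minimal sketch of the SVR workflow described above (15 meteorological predictors, hyperparameters chosen by cross-validation), using placeholder data and an illustrative hyperparameter grid rather than the study's actual series.

        # Sketch: cross-validated SVR forecast model (placeholder data, illustrative grid)
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import GridSearchCV

        X = np.random.rand(200, 15)     # 5-day max/min/mean of the five meteorological variables
        y = np.random.rand(200)         # hypertension admissions per 5-day window

        pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
        grid = GridSearchCV(pipe,
                            {"svr__C": [1, 10, 100], "svr__epsilon": [0.005, 0.01, 0.1]},
                            cv=5)
        grid.fit(X, y)
        forecast = grid.predict(X[-5:])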

  4. SU-F-T-469: A Clinically Observed Discrepancy Between Image-Based and Log- Based MLC Position

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neal, B; Ahmed, M; Siebers, J

    2016-06-15

    Purpose: To present a clinical case which challenges the base assumption of log-file based QA, by showing that the actual position of an MLC leaf can suddenly deviate from its programmed and logged position by >1 mm as observed with real-time imaging. Methods: An EPID-based exit-fluence dosimetry system designed to prevent gross delivery errors was used in cine mode to capture portal images during treatment. Visual monitoring identified an anomalous MLC leaf pair gap not otherwise detected by the automatic position verification. The position of the erred leaf was measured on EPID images and log files were analyzed for the treatment in question, the prior day’s treatment, and for daily MLC test patterns acquired on those treatment days. Additional standard test patterns were used to quantify the leaf position. Results: Whereas the log file reported no difference between planned and recorded positions, image-based measurements showed the leaf to be 1.3±0.1 mm medial from the planned position. This offset was confirmed with the test pattern irradiations. Conclusion: It has been clinically observed that log-file derived leaf positions can differ from their actual positions by >1 mm, and therefore cannot be considered to be the actual leaf positions. This cautions against the use of log-based methods for MLC or patient quality assurance without independent confirmation of log integrity. Frequent verification of MLC positions through independent means is a necessary precondition to trusting log file records. Intra-treatment EPID imaging provides a method to capture departures from MLC planned positions. Work was supported in part by Varian Medical Systems.

  5. Continental-Scale Validation of Modis-Based and LEDAPS Landsat ETM + Atmospheric Correction Methods

    NASA Technical Reports Server (NTRS)

    Ju, Junchang; Roy, David P.; Vermote, Eric; Masek, Jeffrey; Kovalskyy, Valeriy

    2012-01-01

    The potential of Landsat data processing to provide systematic continental scale products has been demonstrated by several projects including the NASA Web-enabled Landsat Data (WELD) project. The recent free availability of Landsat data increases the need for robust and efficient atmospheric correction algorithms applicable to large volume Landsat data sets. This paper compares the accuracy of two Landsat atmospheric correction methods: a MODIS-based method and the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) method. Both methods are based on the 6SV radiative transfer code but have different atmospheric characterization approaches. The MODIS-based method uses the MODIS Terra derived dynamic aerosol type, aerosol optical thickness, and water vapor to atmospherically correct ETM+ acquisitions in each coincident orbit. The LEDAPS method uses aerosol characterizations derived independently from each Landsat acquisition and assumes a fixed continental aerosol type and uses ancillary water vapor. Validation results are presented comparing ETM+ atmospherically corrected data generated using these two methods with AERONET corrected ETM+ data for 95 10 km × 10 km 30 m subsets, a total of nearly 8 million 30 m pixels, located across the conterminous United States. The results indicate that the MODIS-based method has better accuracy than the LEDAPS method for the ETM+ red and longer wavelength bands.

  6. Ground-based cloud classification by learning stable local binary patterns

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Shi, Cunzhao; Wang, Chunheng; Xiao, Baihua

    2018-07-01

    Feature selection and extraction is the first step in implementing pattern classification. The same is true for ground-based cloud classification. Histogram features based on local binary patterns (LBPs) are widely used to classify texture images. However, the conventional uniform LBP approach cannot capture all the dominant patterns in cloud texture images, thereby resulting in low classification performance. In this study, a robust feature extraction method by learning stable LBPs is proposed, based on the averaged ranks of the occurrence frequencies of all rotation-invariant patterns defined in the LBPs of cloud images. The proposed method is validated with a ground-based cloud classification database comprising five cloud types. Experimental results demonstrate that the proposed method achieves significantly higher classification accuracy than the uniform LBP, local texture patterns (LTP), dominant LBP (DLBP), completed LBP (CLBP) and salient LBP (SaLBP) methods in this cloud image database and under different noise conditions. The performance of the proposed method is also comparable with that of the popular deep convolutional neural network (DCNN) method, but with lower computational complexity. Furthermore, the proposed method also achieves superior performance on an independent test data set.
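
    A minimal sketch of the stable-LBP idea, assuming grayscale cloud images: rotation-invariant LBP codes are ranked by occurrence frequency in each training image, and the codes with the best average rank are kept as histogram bins. The selection rule below is a simplified stand-in for the published ranking procedure, and the data are placeholders.

        # Sketch: selecting "stable" rotation-invariant LBP codes as histogram features
        import numpy as np
        from skimage.feature import local_binary_pattern

        P, R = 8, 1
        N_CODES = 256                      # possible 'ror' codes for P = 8

        def lbp_histogram(image):
            codes = local_binary_pattern(image, P, R, method="ror").astype(int)
            hist = np.bincount(codes.ravel(), minlength=N_CODES)
            return hist / hist.sum()

        def stable_pattern_indices(train_images, n_keep=30):
            ranks = []
            for img in train_images:
                freq = lbp_histogram(img)
                ranks.append(np.argsort(np.argsort(-freq)))   # rank of each code by frequency
            mean_rank = np.mean(ranks, axis=0)
            return np.argsort(mean_rank)[:n_keep]             # codes with the best average rank

        train_images = [np.random.rand(64, 64) for _ in range(20)]   # placeholder cloud images
        stable_idx = stable_pattern_indices(train_images)
        # Classifier features for an image are then lbp_histogram(img)[stable_idx].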

  7. Simultaneous, proportional, multi-axis prosthesis control using multichannel surface EMG.

    PubMed

    Yatsenko, Dimitri; McDonnall, Daniel; Guillory, K Shane

    2007-01-01

    Most upper limb prosthesis controllers only allow the individual selection and control of single joints of the limb. The main limiting factor for simultaneous multi-joint control is usually the availability of reliable independent control signals that can intuitively be used. In this paper, a novel method is presented for extraction of individual muscle source signals from surface EMG array recordings, based on EMG energy orthonormalization along principal movement vectors. In cases where independently-controllable muscles are present in residual limbs, this method can be used to provide simultaneous, multi-axis, proportional control of prosthetic systems. Initial results are presented for simultaneous control of wrist rotation, wrist flexion/extension, and grip open/close for two intact subjects under both isometric and non-isometric conditions and for one subject with transradial amputation.

  8. Independent Manipulation of Topological Charges and Polarization Patterns of Optical Vortices

    PubMed Central

    Yang, Ching-Han; Chen, Yuan-Di; Wu, Shing-Trong; Fuh, Andy Ying-Guey

    2016-01-01

    We present a simple and flexible method to generate various vectorial vortex beams (VVBs) with a Pancharatnam phase based on the scheme of double reflections from a single liquid crystal spatial light modulator (SLM). In this configuration, VVBs are constructed by the superposition of two orthogonally polarized orbital angular momentum (OAM) eigenstates. To verify the optical properties of the generated beams, Stokes polarimetry is developed to measure the states of polarization (SOP) over the transverse plane, while a Shack–Hartmann wavefront sensor is used to measure the OAM charge of beams. It is shown that both the simulated and the experimental results are in good qualitative agreement. In addition, polarization patterns and OAM charges of generated beams can be controlled independently using the proposed method. PMID:27526858

  9. Redundant array of independent disks: practical on-line archiving of nuclear medicine image data.

    PubMed

    Lear, J L; Pratt, J P; Trujillo, N

    1996-02-01

    While various methods for long-term archiving of nuclear medicine image data exist, none support rapid on-line search and retrieval of information. We assembled a 90-Gbyte redundant array of independent disks (RAID) system using ten 9-Gbyte disk drives. The system was connected to a personal computer and software was used to partition the array into 4-Gbyte sections. All studies (50,000) acquired over a 7-year period were archived in the system. Based on patient name/number and study date, information could be located within 20 seconds and retrieved for display and analysis in less than 5 seconds. RAID offers a practical, redundant method for long-term archiving of nuclear medicine studies that supports rapid on-line retrieval.

  10. Continental-scale Validation of MODIS-based and LEDAPS Landsat ETM+ Atmospheric Correction Methods

    NASA Technical Reports Server (NTRS)

    Ju, Junchang; Roy, David P.; Vermote, Eric; Masek, Jeffrey; Kovalskyy, Valeriy

    2012-01-01

    The potential of Landsat data processing to provide systematic continental scale products has been demonstrated by several projects including the NASA Web-enabled Landsat Data (WELD) project. The recent free availability of Landsat data increases the need for robust and efficient atmospheric correction algorithms applicable to large volume Landsat data sets. This paper compares the accuracy of two Landsat atmospheric correction methods: a MODIS-based method and the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) method. Both methods are based on the 6SV radiative transfer code but have different atmospheric characterization approaches. The MODIS-based method uses the MODIS Terra derived dynamic aerosol type, aerosol optical thickness, and water vapor to atmospherically correct ETM+ acquisitions in each coincident orbit. The LEDAPS method uses aerosol characterizations derived independently from each Landsat acquisition and assumes a fixed continental aerosol type and uses ancillary water vapor. Validation results are presented comparing ETM+ atmospherically corrected data generated using these two methods with AERONET corrected ETM+ data for 95 10 km×10 km 30 m subsets, a total of nearly 8 million 30 m pixels, located across the conterminous United States. The results indicate that the MODIS-based method has better accuracy than the LEDAPS method for the ETM+ red and longer wavelength bands.

  11. Towards a Novel Integrated Approach for Estimating Greenhouse Gas Emissions in Support of International Agreements

    NASA Astrophysics Data System (ADS)

    Reimann, S.; Vollmer, M. K.; Henne, S.; Brunner, D.; Emmenegger, L.; Manning, A.; Fraser, P. J.; Krummel, P. B.; Dunse, B. L.; DeCola, P.; Tarasova, O. A.

    2016-12-01

    In the recently adopted Paris Agreement the community of signatory states has agreed to limit the future global temperature increase between +1.5 °C and +2.0 °C, compared to pre-industrial times. To achieve this goal, emission reduction targets have been submitted by individual nations (called Intended Nationally Determined Contributions, INDCs). Inventories will be used for checking progress towards these envisaged goals. These inventories are calculated by combining information on specific activities (e.g. passenger cars, agriculture) with activity-related, typically IPCC-sanctioned, emission factors - the so-called bottom-up method. These calculated emissions are reported on an annual basis and are checked by external bodies by using the same method. A second independent method estimates emissions by translating greenhouse gas measurements made at regionally representative stations into regional/global emissions using meteorologically-based transport models. In recent years this so-called top-down approach has been substantially advanced into a powerful tool and emission estimates at the national/regional level have become possible. This method is already used in Switzerland, in the United Kingdom and in Australia to estimate greenhouse gas emissions and independently support the national bottom-up emission inventories within the UNFCCC framework. Examples of the comparison of the two independent methods will be presented and the added-value will be discussed. The World Meteorological Organization (WMO) and partner organizations are currently developing a plan to expand this top-down approach and to expand the globally representative GAW network of ground-based stations and remote-sensing platforms and integrate their information with atmospheric transport models. This Integrated Global Greenhouse Gas Information System (IG3IS) initiative will help nations to improve the accuracy of their country-based emissions inventories and their ability to evaluate the success of emission reductions strategies. This could foster trans-national collaboration on methodologies for estimation of emissions. Furthermore, more accurate emission knowledge will clarify the value of emission reduction efforts and could encourage countries to strengthen their reduction pledges.

  12. A Scheme for Obtaining Secure S-Boxes Based on Chaotic Baker's Map

    NASA Astrophysics Data System (ADS)

    Gondal, Muhammad Asif; Abdul Raheem; Hussain, Iqtadar

    2014-09-01

    In this paper, a method for obtaining cryptographically strong 8 × 8 substitution boxes (S-boxes) is presented. The method is based on chaotic baker's map and a "mini version" of a new block cipher with block size 8 bits and can be easily and efficiently performed on a computer. The cryptographic strength of some 8 × 8 S-boxes randomly produced by the method is analyzed. The results show (1) all of them are bijective; (2) the nonlinearity of each output bit of them is usually about 100; (3) all of them approximately satisfy the strict avalanche criterion and output bits independence criterion; (4) they all have an almost equiprobable input/output XOR distribution.
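
    The abstract does not spell out the cipher construction, but the core idea of deriving a bijective 8 × 8 S-box from a chaotic map can be illustrated with a minimal sketch. Here the logistic map is used purely as a stand-in for the baker's-map/mini-cipher construction of the paper, and the seed and map parameter are arbitrary assumptions: the 256 chaotic iterates are ranked, and the ranking itself is the permutation of 0-255.

```python
import numpy as np

def chaotic_sbox(x0=0.713, r=3.99):
    """Build a bijective 8x8 S-box by ranking 256 chaotic-map iterates.

    The logistic map stands in for the chaotic baker's map of the paper;
    any map with good mixing yields a permutation via argsort ranking.
    """
    x = x0
    samples = np.empty(256)
    for i in range(256):
        x = r * x * (1.0 - x)          # logistic map iteration
        samples[i] = x
    sbox = np.argsort(samples)          # rank -> permutation of 0..255
    return sbox.astype(np.uint8)

def is_bijective(sbox):
    return len(set(int(v) for v in sbox)) == 256

if __name__ == "__main__":
    s = chaotic_sbox()
    print("bijective:", is_bijective(s))
    print("first 16 entries:", s[:16])
```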

  13. Independent component analysis-based algorithm for automatic identification of Raman spectra applied to artistic pigments and pigment mixtures.

    PubMed

    González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio

    2015-03-01

    A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
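
    The full pipeline (PCA, ICA, metrics and the reliability factor) is not reproduced here; the following sketch only illustrates the unmixing step on synthetic data, with Gaussian bands standing in for reference pigment Raman spectra and scikit-learn's FastICA as an assumed implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
wavenumber = np.linspace(200, 1800, 800)

def band(center, width):
    return np.exp(-0.5 * ((wavenumber - center) / width) ** 2)

# Two synthetic "pigment" spectra (stand-ins for reference Raman spectra).
s1 = band(450, 15) + 0.6 * band(1080, 20)
s2 = band(610, 18) + 0.8 * band(1350, 25)
S = np.vstack([s1, s2])

# Several measured mixtures with different mixing ratios plus noise.
A = np.array([[0.7, 0.3], [0.4, 0.6], [0.55, 0.45]])
X = A @ S + 0.01 * rng.standard_normal((3, S.shape[1]))

# ICA recovers statistically independent source spectra (up to scale/order).
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X.T).T

# Correlate estimates against references to identify the components.
for est in S_est:
    corrs = [abs(np.corrcoef(est, ref)[0, 1]) for ref in S]
    print("best match: reference", int(np.argmax(corrs)), "corr %.2f" % max(corrs))
```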

  14. Integral imaging based light field display with enhanced viewing resolution using holographic diffuser

    NASA Astrophysics Data System (ADS)

    Yan, Zhiqiang; Yan, Xingpeng; Jiang, Xiaoyu; Gao, Hui; Wen, Jun

    2017-11-01

    An integral imaging based light field display method using a holographic diffuser is proposed, and enhanced viewing resolution is gained over conventional integral imaging systems. The holographic diffuser is fabricated with controlled diffusion characteristics, which interpolates the discrete light field of the reconstructed points to approximate the original light field. The viewing resolution can thus be improved and made independent of the limitation imposed by the Nyquist sampling frequency. An integral imaging system with low Nyquist sampling frequency is constructed, and reconstructed scenes of high viewing resolution using the holographic diffuser are demonstrated, verifying the feasibility of the method.

  15. Information Security Scheme Based on Computational Temporal Ghost Imaging.

    PubMed

    Jiang, Shan; Wang, Yurong; Long, Tao; Meng, Xiangfeng; Yang, Xiulun; Shu, Rong; Sun, Baoqing

    2017-08-09

    An information security scheme based on computational temporal ghost imaging is proposed. A sequence of independent 2D random binary patterns is used as the encryption key and multiplied with the 1D data stream. The cipher text is obtained by summing the weighted encryption key. The decryption process can be realized by a correlation measurement between the encrypted information and the encryption key. Due to the intrinsic high-level randomness of the key, the security of this method is well guaranteed. The feasibility of this method and its robustness against both occlusion and additive noise attacks are demonstrated by simulation.
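
    A minimal numerical sketch of the correlation-based idea, under simplifying assumptions (the 2D key patterns are flattened to 1D and the pattern count is arbitrary): each key pattern weights the data stream to give one cipher value, and the stream is recovered by correlating the cipher values with the key patterns.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256                    # length of the 1D data stream
K = 4000                   # number of random key patterns (assumed)

data = (np.sin(np.linspace(0, 6 * np.pi, N)) + 1.0) / 2.0   # secret 1D stream
key = rng.integers(0, 2, size=(K, N)).astype(float)          # binary key patterns

# Encryption: each cipher value is the key-weighted sum of the data stream.
cipher = key @ data                                           # shape (K,)

# Decryption: second-order correlation between cipher values and key patterns.
recovered = (cipher - cipher.mean()) @ (key - key.mean(axis=0)) / K

# Rescale for comparison; correlation with the original should be high.
recovered = (recovered - recovered.min()) / (recovered.max() - recovered.min())
print("correlation with original: %.3f" % np.corrcoef(recovered, data)[0, 1])
```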

  16. Stratospheric aerosol particle size distribution based on multi-color polarization measurements of the twilight sky

    NASA Astrophysics Data System (ADS)

    Ugolnikov, Oleg S.; Maslov, Igor A.

    2018-03-01

    Polarization measurements of the twilight background with Wide-Angle Polarization Camera (WAPC) are used to detect the depolarization effect caused by stratospheric aerosol near the altitude of 20 km. Based on a number of observations in central Russia in spring and summer 2016, we found the parameters of lognormal size distribution of aerosol particles. This confirmed the previously published results of the colorimetric method as applied to the same twilights. The mean particle radius (about 0.1 micrometers) and size distribution are also in agreement with the recent data of in situ and space-based remote sensing of stratospheric aerosol. Methods considered here provide two independent techniques of the stratospheric aerosol study based on the twilight sky analysis.

  17. Equilibrium Molecular Thermodynamics from Kirkwood Sampling

    PubMed Central

    2015-01-01

    We present two methods for barrierless equilibrium sampling of molecular systems based on the recently proposed Kirkwood method (J. Chem. Phys.2009, 130, 134102). Kirkwood sampling employs low-order correlations among internal coordinates of a molecule for random (or non-Markovian) sampling of the high dimensional conformational space. This is a geometrical sampling method independent of the potential energy surface. The first method is a variant of biased Monte Carlo, where Kirkwood sampling is used for generating trial Monte Carlo moves. Using this method, equilibrium distributions corresponding to different temperatures and potential energy functions can be generated from a given set of low-order correlations. Since Kirkwood samples are generated independently, this method is ideally suited for massively parallel distributed computing. The second approach is a variant of reservoir replica exchange, where Kirkwood sampling is used to construct a reservoir of conformations, which exchanges conformations with the replicas performing equilibrium sampling corresponding to different thermodynamic states. Coupling with the Kirkwood reservoir enhances sampling by facilitating global jumps in the conformational space. The efficiency of both methods depends on the overlap of the Kirkwood distribution with the target equilibrium distribution. We present proof-of-concept results for a model nine-atom linear molecule and alanine dipeptide. PMID:25915525
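
    Kirkwood sampling itself (building the low-order correlations among internal coordinates) is not reproduced here; the sketch below only illustrates the biased Monte Carlo idea with a generic independence proposal: trial states are drawn independently from a fixed proposal distribution standing in for the Kirkwood distribution and accepted with the Metropolis-Hastings ratio for a toy one-dimensional "potential".

```python
import numpy as np

rng = np.random.default_rng(2)

def target_logp(x):
    """Toy 1D 'potential energy' surface: a double well at unit temperature."""
    return -(x ** 2 - 1.0) ** 2 / 0.5

# Independence proposal q standing in for the Kirkwood distribution:
# a broad Gaussian that overlaps the target.
q_mu, q_sigma = 0.0, 1.5
def proposal_logp(x):
    return -0.5 * ((x - q_mu) / q_sigma) ** 2

x = 0.0
samples = []
for step in range(20000):
    x_new = rng.normal(q_mu, q_sigma)                 # independent trial move
    # Metropolis-Hastings acceptance ratio for an independence sampler.
    log_alpha = (target_logp(x_new) - target_logp(x)
                 + proposal_logp(x) - proposal_logp(x_new))
    if np.log(rng.random()) < log_alpha:
        x = x_new
    samples.append(x)

samples = np.array(samples[2000:])                    # discard burn-in
print("mean %.3f  var %.3f" % (samples.mean(), samples.var()))
```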

  18. Investigating Microadaptation in One-to-One Human Tutoring

    ERIC Educational Resources Information Center

    Siler, Stephanie Ann; VanLehn, Kurt

    2015-01-01

    The authors investigated whether some advantages of tutoring over other instructional methods are due to microadaptation, or tutors basing their actions on assessments of tutees that they develop during tutoring. In a 2 × 2 between-subjects experiment, independent variables were shared experience (tutors either worked with the same or a different…

  19. Saturated linkage map construction in Rubus idaeus using genotyping by sequencing and genome-independent imputation

    USDA-ARS?s Scientific Manuscript database

    Rapid development of highly saturated genetic maps aids molecular breeding, which can accelerate gain per breeding cycle in woody perennial plants such as Rubus idaeus (red raspberry). Recently, robust genotyping methods based on high-throughput sequencing were developed, which provide high marker d...

  20. Mission and Methods of Democratizing the Classroom.

    ERIC Educational Resources Information Center

    Slaton, Christa Daryl

    1993-01-01

    Too many college students seem conditioned (by authoritarian teaching styles) to serve as "clerks" to the decision makers and power holders. To help students learn to think critically and independently, this article advises faculty to create practica based on televotes and mediation training, creative projects (such as monopoly games and…

  1. Factors Contributing to Teacher Retention in Georgia

    ERIC Educational Resources Information Center

    Locklear, Tina M.

    2010-01-01

    The purpose of this mixed method, survey-based inquiry was to determine how Georgia public high school faculty members perceive various pressures and experiences associated with a career in education. These perceptions were then analyzed as possible indicators of teacher attrition in order to improve retention rates. The independent demographic…

  2. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance.

    PubMed

    Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S

    2017-10-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.

  3. Multi-Particle Interferometry Based on Double Entangled States

    NASA Technical Reports Server (NTRS)

    Pittman, Todd B.; Shih, Y. H.; Strekalov, D. V.; Sergienko, A. V.; Rubin, M. H.

    1996-01-01

    A method for producing a 4-photon entangled state based on the use of two independent pair sources is discussed. Of particular interest is that each of the pair sources produces a two-photon state which is simultaneously entangled in both polarization and space-time variables. Performing certain measurements which exploit this double entanglement provides an opportunity for verifying the recent demonstration of nonlocality by Greenberger, Horne, and Zeilinger.

  4. Plate/shell structure topology optimization of orthotropic material for buckling problem based on independent continuous topological variables

    NASA Astrophysics Data System (ADS)

    Ye, Hong-Ling; Wang, Wei-Wei; Chen, Ning; Sui, Yun-Kang

    2017-10-01

    The purpose of the present work is to study the buckling problem in plate/shell topology optimization of orthotropic material. A model of buckling topology optimization is established based on the independent, continuous, and mapping method, which takes structural mass as the objective and buckling critical loads as constraints. Firstly, the composite exponential function (CEF) and the power function (PF) are introduced as filter functions to recognize the element mass, the element stiffness matrix, and the element geometric stiffness matrix. The filter functions of the orthotropic material stiffness are deduced. These filter functions are then introduced into the buckling topology optimization formulation and the design sensitivity is analyzed. Furthermore, the buckling constraints are approximately expressed as explicit functions with respect to the design variables based on a first-order Taylor expansion, and the objective function is standardized based on a second-order Taylor expansion. The optimization model is therefore translated into a quadratic program. Finally, the dual sequence quadratic programming (DSQP) algorithm and the global convergence method of moving asymptotes algorithm, each with the two filter functions (CEF and PF), are applied to solve the optimization model. Three numerical examples show that DSQP&CEF has the best performance in terms of structural mass and discreteness.

  5. Iterative integral parameter identification of a respiratory mechanics model.

    PubMed

    Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey

    2012-07-18

    Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
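
    The paper's model is second order and the identification is iterative; as a simpler illustration of the integral idea, the sketch below identifies the single-compartment equation P(t) = R·Q(t) + E·V(t) + P0 by integrating both sides, which replaces noisy instantaneous terms with smooth cumulative integrals and reduces the problem to ordinary least squares. The simulated waveforms and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 0.01
t = np.arange(0, 3.0, dt)

# Simulated ventilation data (assumed, for illustration): sinusoidal flow.
Q = 0.5 * np.sin(2 * np.pi * t / 3.0)                 # flow [L/s]
V = np.cumsum(Q) * dt                                 # volume [L]
R_true, E_true, P0_true = 10.0, 25.0, 5.0
P = R_true * Q + E_true * V + P0_true + 0.5 * rng.standard_normal(t.size)

# Integral formulation: int(P) = R*(V - V0) + E*int(V) + P0*t
int_P = np.cumsum(P) * dt
int_V = np.cumsum(V) * dt
X = np.column_stack([V - V[0], int_V, t])
R_est, E_est, P0_est = np.linalg.lstsq(X, int_P, rcond=None)[0]
print("R %.2f  E %.2f  P0 %.2f" % (R_est, E_est, P0_est))
```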

  6. Retrieval of Snow and Rain From Combined X- and W-Band Airborne Radar Measurements

    NASA Technical Reports Server (NTRS)

    Liao, Liang; Meneghini, Robert; Tian, Lin; Heymsfield, Gerald M.

    2008-01-01

    Two independent airborne dual-wavelength techniques, based on nadir measurements of radar reflectivity factors and Doppler velocities, respectively, are investigated with respect to their capability of estimating microphysical properties of hydrometeors. The data used to investigate the methods are taken from the ER-2 Doppler radar (X-band) and Cloud Radar System (W-band) airborne Doppler radars during the Cirrus Regional Study of Tropical Anvils and Cirrus Layers-Florida Area Cirrus Experiment campaign in 2002. Validity is assessed by the degree to which the methods produce consistent retrievals of the microphysics. For deriving snow parameters, the reflectivity-based technique has a clear advantage over the Doppler-velocity-based approach because of the large dynamic range in the dual-frequency ratio (DFR) with respect to the median diameter Do and the fact that the difference in mean Doppler velocity at the two frequencies, i.e., the differential Doppler velocity (DDV), in snow is small relative to the measurement errors and is often not uniquely related to Do. The DFR and DDV can also be used to independently derive Do in rain. At W-band, the DFR-based algorithms are highly sensitive to attenuation from rain, cloud water, and water vapor. Thus, the retrieval algorithms depend on various assumptions regarding these components, whereas the DDV-based approach is unaffected by attenuation. In view of the difficulties and ambiguities associated with the attenuation correction at W-band, the DDV approach in rain is more straightforward and potentially more accurate than the DFR method.

  7. Choice of optical system is critical for the security of double random phase encryption systems

    NASA Astrophysics Data System (ADS)

    Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Cassidy, Derek; Zhao, Liang; Ryle, James P.; Healy, John J.; Sheridan, John T.

    2017-06-01

    The linear canonical transform (LCT) is used in modeling a coherent light-field propagation through first-order optical systems. Recently, a generic optical system, known as the quadratic phase encoding system (QPES), for encrypting a two-dimensional image has been reported. In such systems, two random phase keys and the individual LCT parameters (α,β,γ) serve as secret keys of the cryptosystem. It is important that such encryption systems also satisfy some dynamic security properties. We, therefore, examine such systems using two cryptographic evaluation methods, the avalanche effect and bit independence criterion, which indicate the degree of security of the cryptographic algorithms using QPES. We compared our simulation results with the conventional Fourier and the Fresnel transform-based double random phase encryption (DRPE) systems. The results show that the LCT-based DRPE has an excellent avalanche and bit independence characteristics compared to the conventional Fourier and Fresnel-based encryption systems.
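
    The QPES/LCT encryption itself is not implemented here; the sketch below only shows how an avalanche-effect measurement is typically set up, with SHA-256 used as an arbitrary stand-in "cipher": flip one input bit at a time and record the fraction of output bits that change, which should stay close to 50% for a secure transform.

```python
import hashlib
import numpy as np

def encrypt(data: bytes) -> bytes:
    """Stand-in 'cipher': SHA-256 digest of the input (illustration only)."""
    return hashlib.sha256(data).digest()

def bit_difference(a: bytes, b: bytes) -> float:
    x = np.frombuffer(a, dtype=np.uint8) ^ np.frombuffer(b, dtype=np.uint8)
    return np.unpackbits(x).mean()        # fraction of differing output bits

rng = np.random.default_rng(4)
plaintext = bytes(rng.integers(0, 256, size=64, dtype=np.uint8))
base = encrypt(plaintext)

ratios = []
for bit in range(len(plaintext) * 8):      # flip each input bit in turn
    flipped = bytearray(plaintext)
    flipped[bit // 8] ^= 1 << (bit % 8)
    ratios.append(bit_difference(base, encrypt(bytes(flipped))))

print("mean avalanche ratio: %.3f (ideal 0.5)" % np.mean(ratios))
```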

  8. A Channelization-Based DOA Estimation Method for Wideband Signals

    PubMed Central

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566
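
    The digital channelizer and the TOPS variant are beyond a short sketch; the following illustrates the Channelization-ISM idea only: narrowband MUSIC is run independently on each sub-channel's snapshots and the per-channel DOA estimates are averaged. The sub-channel snapshots are simulated directly under a narrowband uniform-linear-array model, an assumption standing in for the channelizer output.

```python
import numpy as np

rng = np.random.default_rng(5)
M, snapshots, true_doas = 8, 400, np.array([-20.0, 15.0])   # ULA, two sources

def steering(theta_deg, d_over_lambda):
    return np.exp(-2j * np.pi * d_over_lambda * np.arange(M)
                  * np.sin(np.deg2rad(theta_deg)))

def music_doas(X, n_src, d_over_lambda, grid=np.linspace(-60, 60, 1201)):
    R = X @ X.conj().T / X.shape[1]                    # sample covariance
    _, V = np.linalg.eigh(R)                           # ascending eigenvalues
    En = V[:, :M - n_src]                              # noise subspace
    p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(g, d_over_lambda)) ** 2
                  for g in grid])                      # MUSIC pseudospectrum
    order = np.argsort(p)[::-1]
    picks = []
    for idx in order:                                  # crude separated peak picking
        if all(abs(grid[idx] - grid[j]) > 5 for j in picks):
            picks.append(idx)
        if len(picks) == n_src:
            break
    return np.sort(grid[picks])

# Each sub-channel has its own center frequency, hence its own d/lambda.
estimates = []
for d_over_lambda in (0.40, 0.45, 0.50):               # three simulated sub-channels
    A = np.column_stack([steering(t, d_over_lambda) for t in true_doas])
    S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
    X = A @ S + 0.1 * (rng.standard_normal((M, snapshots))
                       + 1j * rng.standard_normal((M, snapshots)))
    estimates.append(music_doas(X, 2, d_over_lambda))

print("per-channel estimates:", estimates)
print("arithmetic mean DOAs :", np.mean(estimates, axis=0))
```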

  9. Effectiveness of Jigsaw learning compared to lecture-based learning in dental education.

    PubMed

    Sagsoz, O; Karatas, O; Turel, V; Yildiz, M; Kaya, E

    2017-02-01

    The objective of this study was to evaluate the success levels of students using the Jigsaw learning method in dental education. Fifty students with similar grade point average (GPA) scores were selected and randomly assigned into one of two groups (n = 25). A pretest concerning 'adhesion and bonding agents in dentistry' was administered to all students before classes. The Jigsaw learning method was applied to the experimental group for 3 weeks. At the same time, the control group was taking classes using the lecture-based learning method. At the end of the 3 weeks, all students were retested (post-test) on the subject. A retention test was administered 3 weeks after the post-test. Mean scores were calculated for each test for the experimental and control groups, and the data obtained were analysed using the independent samples t-test. No significant difference was determined between the Jigsaw and lecture-based methods at pretest or post-test. The highest mean test score was observed in the post-test with the Jigsaw method. In the retention test, success with the Jigsaw method was significantly higher than that with the lecture-based method. The Jigsaw method is as effective as the lecture-based method. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Automatic Identification of Artifact-Related Independent Components for Artifact Removal in EEG Recordings.

    PubMed

    Zou, Yuan; Nathan, Viswam; Jafari, Roozbeh

    2016-01-01

    Electroencephalography (EEG) is the recording of electrical activity produced by the firing of neurons within the brain. These activities can be decoded by signal processing techniques. However, EEG recordings are always contaminated with artifacts which hinder the decoding process. Therefore, identifying and removing artifacts is an important step. Researchers often clean EEG recordings with assistance from independent component analysis (ICA), since it can decompose EEG recordings into a number of artifact-related and event-related potential (ERP)-related independent components. However, existing ICA-based artifact identification strategies mostly restrict themselves to a subset of artifacts, e.g., identifying eye movement artifacts only, and have not been shown to reliably identify artifacts caused by nonbiological origins like high-impedance electrodes. In this paper, we propose an automatic algorithm for the identification of general artifacts. The proposed algorithm consists of two parts: 1) an event-related feature-based clustering algorithm used to identify artifacts which have physiological origins; and 2) the electrode-scalp impedance information employed for identifying nonbiological artifacts. The results on EEG data collected from ten subjects show that our algorithm can effectively detect, separate, and remove both physiological and nonbiological artifacts. Qualitative evaluation of the reconstructed EEG signals demonstrates that our proposed method can effectively enhance the signal quality, especially the quality of ERPs, even for those that barely display ERPs in the raw EEG. The performance results also show that our proposed method can effectively identify artifacts and subsequently enhance the classification accuracies compared to four commonly used automatic artifact removal methods.
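
    The paper's event-related feature clustering and impedance criterion are not reproduced here; the sketch shows only the generic ICA clean-up step it builds on: decompose the channels with FastICA, flag an artifact component (a simple kurtosis threshold is used as an illustrative stand-in for the authors' criteria), zero it, and back-project the remaining components.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
fs, T, n_ch = 250, 10, 8
t = np.arange(fs * T) / fs

# Synthetic "EEG": ongoing oscillations plus a spiky eye-blink-like artifact.
sources = np.vstack([np.sin(2 * np.pi * f * t) for f in (6, 10, 22)])
blink = np.zeros_like(t)
blink[::fs] = 8.0                                        # one spike per second
mixing = rng.standard_normal((n_ch, 4))
eeg = mixing @ np.vstack([sources, blink]) + 0.2 * rng.standard_normal((n_ch, t.size))

# ICA decomposition: rows of S are independent components.
ica = FastICA(n_components=4, random_state=0)
S = ica.fit_transform(eeg.T).T                           # (components, samples)

# Flag artifact components by excess kurtosis (spiky ICs), then remove them.
k = kurtosis(S, axis=1)
artifact = k > 5.0
print("component kurtosis:", np.round(k, 1), "-> removing", np.where(artifact)[0])
S_clean = S.copy()
S_clean[artifact] = 0.0

# Back-project to channel space: x = A s  (mixing_ is the estimated A).
eeg_clean = ica.mixing_ @ S_clean + ica.mean_[:, None]
print("cleaned shape:", eeg_clean.shape)
```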

  11. Automatic Identification of Artifact-related Independent Components for Artifact Removal in EEG Recordings

    PubMed Central

    Zou, Yuan; Nathan, Viswam; Jafari, Roozbeh

    2017-01-01

    Electroencephalography (EEG) is the recording of electrical activity produced by the firing of neurons within the brain. These activities can be decoded by signal processing techniques. However, EEG recordings are always contaminated with artifacts which hinder the decoding process. Therefore, identifying and removing artifacts is an important step. Researchers often clean EEG recordings with assistance from Independent Component Analysis (ICA), since it can decompose EEG recordings into a number of artifact-related and event related potential (ERP)-related independent components (ICs). However, existing ICA-based artifact identification strategies mostly restrict themselves to a subset of artifacts, e.g. identifying eye movement artifacts only, and have not been shown to reliably identify artifacts caused by non-biological origins like high-impedance electrodes. In this paper, we propose an automatic algorithm for the identification of general artifacts. The proposed algorithm consists of two parts: 1) an event-related feature based clustering algorithm used to identify artifacts which have physiological origins and 2) the electrode-scalp impedance information employed for identifying non-biological artifacts. The results on EEG data collected from 10 subjects show that our algorithm can effectively detect, separate, and remove both physiological and non-biological artifacts. Qualitative evaluation of the reconstructed EEG signals demonstrates that our proposed method can effectively enhance the signal quality, especially the quality of ERPs, even for those that barely display ERPs in the raw EEG. The performance results also show that our proposed method can effectively identify artifacts and subsequently enhance the classification accuracies compared to four commonly used automatic artifact removal methods. PMID:25415992

  12. An ICA-based method for the segmentation of pigmented skin lesions in macroscopic images.

    PubMed

    Cavalcanti, Pablo G; Scharcanski, Jacob; Di Persia, Leandro E; Milone, Diego H

    2011-01-01

    Segmentation is an important step in computer-aided diagnostic systems for pigmented skin lesions, since a good definition of the lesion area and its boundary in the image is very important for distinguishing benign from malignant cases. In this paper a new skin lesion segmentation method is proposed. This method uses Independent Component Analysis to locate skin lesions in the image, and this location information is further refined by a level-set segmentation method. Our method was evaluated on 141 images and achieved an average segmentation error of 16.55%, lower than the results for comparable state-of-the-art methods proposed in the literature.

  13. Applying open source data visualization tools to standard based medical data.

    PubMed

    Kopanitsa, Georgy; Taranik, Maxim

    2014-01-01

    Presentation of medical data in personal health records (PHRs) requires flexible, platform-independent tools to ensure easy access to the information. The different backgrounds of patients, especially elderly people, require simple graphical presentation of the data. Data in PHRs can be collected from heterogeneous sources. The use of standard-based medical data allows the development of generic visualization methods. Focusing on the deployment of open source tools, in this paper we applied JavaScript libraries to create data presentations for standard-based medical data.

  14. Method of plasma etching Ga-based compound semiconductors

    DOEpatents

    Qiu, Weibin; Goddard, Lynford L.

    2012-12-25

    A method of plasma etching Ga-based compound semiconductors includes providing a process chamber and a source electrode adjacent to the process chamber. The process chamber contains a sample comprising a Ga-based compound semiconductor. The sample is in contact with a platen which is electrically connected to a first power supply, and the source electrode is electrically connected to a second power supply. The method includes flowing SiCl₄ gas into the chamber, flowing Ar gas into the chamber, and flowing H₂ gas into the chamber. RF power is supplied independently to the source electrode and the platen. A plasma is generated based on the gases in the process chamber, and regions of a surface of the sample adjacent to one or more masked portions of the surface are etched to create a substantially smooth etched surface including features having substantially vertical walls beneath the masked portions.

  15. Estimation of 1RM for knee extension based on the maximal isometric muscle strength and body composition.

    PubMed

    Kanada, Yoshikiyo; Sakurai, Hiroaki; Sugiura, Yoshito; Arai, Tomoaki; Koyama, Soichiro; Tanabe, Shigeo

    2017-11-01

    [Purpose] To create a regression formula in order to estimate 1RM for knee extensors, based on the maximal isometric muscle strength measured using a hand-held dynamometer and data regarding the body composition. [Subjects and Methods] Measurement was performed in 21 healthy males in their twenties to thirties. Single regression analysis was performed, with measurement values representing 1RM and the maximal isometric muscle strength as dependent and independent variables, respectively. Furthermore, multiple regression analysis was performed, with data regarding the body composition incorporated as another independent variable, in addition to the maximal isometric muscle strength. [Results] Through single regression analysis with the maximal isometric muscle strength as an independent variable, the following regression formula was created: 1RM (kg)=0.714 + 0.783 × maximal isometric muscle strength (kgf). On multiple regression analysis, only the total muscle mass was extracted. [Conclusion] A highly accurate regression formula to estimate 1RM was created based on both the maximal isometric muscle strength and body composition. Using a hand-held dynamometer and body composition analyzer, it was possible to measure these items in a short time, and obtain clinically useful results.
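
    The single-regression formula reported in the abstract can be applied directly; the multiple-regression step with body composition is only illustrated on hypothetical data below, since its coefficients are not given in the abstract.

```python
import numpy as np

def estimate_1rm(isometric_kgf: float) -> float:
    """Published single-regression formula from the abstract."""
    return 0.714 + 0.783 * isometric_kgf

print("isometric 40 kgf -> estimated 1RM %.1f kg" % estimate_1rm(40.0))

# Illustrative multiple regression on hypothetical data (coefficients below
# are NOT from the paper): 1RM ~ isometric strength + total muscle mass.
rng = np.random.default_rng(7)
iso = rng.uniform(25, 60, 21)                  # maximal isometric strength [kgf]
muscle = rng.uniform(40, 65, 21)               # total muscle mass [kg]
one_rm = 0.7 + 0.6 * iso + 0.25 * muscle + rng.normal(0, 1.5, 21)

X = np.column_stack([np.ones(21), iso, muscle])
coef, *_ = np.linalg.lstsq(X, one_rm, rcond=None)
print("fitted intercept/slopes:", np.round(coef, 3))
```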

  16. Crystal structure of low-symmetry rondorfite

    NASA Astrophysics Data System (ADS)

    Rastsvetaeva, R. K.; Zadov, A. E.; Chukanov, N. V.

    2008-03-01

    The crystal structure of an aluminum-rich variety of the mineral rondorfite with the composition Ca16[Mg2(Si7Al)(O31OH)]Cl4 from the skarns of the Verkhne-Chegemskoe plateau (the Kabardino-Balkarian Republic, the Northern Caucasus Region, Russia) was solved in the triclinic space group with the unit-cell parameters a = 15.100(2) Å, b = 15.110(2) Å, c = 15.092(2) Å, α = 90.06(1)°, β = 90.01(1)°, γ = 89.93(1)°, Z = 4, sp. gr. P1. The structural model consisting of 248 independent atoms was determined by the phase-correction method and refined to R = 3.8% with anisotropic displacement parameters based on all 7156 independent reflections with F > 3σ(F). The crystal structure is based on pentamers consisting of four Si tetrahedra linked by the central Mg tetrahedron. The structure can formally be refined in the cubic space group (a = 15.105 Å, sp. gr. Fd-3, seven independent positions) with anisotropic displacement parameters to R = 2.74% based on 579 reflections with F > 3σ(F) without accounting for more than 1000 observed reflections, which are inconsistent with the cubic symmetry of the crystal structure.

  17. Stakeholder insights on the planning and development of an independent benchmark standard for responsible food marketing.

    PubMed

    Cairns, Georgina; Macdonald, Laura

    2016-06-01

    A mixed methods qualitative survey investigated stakeholder responses to the proposal to develop an independently defined, audited and certifiable set of benchmark standards for responsible food marketing. Its purpose was to inform the policy planning and development process. A majority of respondents were supportive of the proposal. A majority also viewed the engagement and collaboration of a broad base of stakeholders in its planning and development as potentially beneficial. Positive responses were associated with views that policy controls can and should be extended to include all forms of marketing, that obesity and non-communicable disease prevention and control are a shared responsibility and an urgent policy priority, and with prior experience of independent standardisation as a policy lever for good practice. Strong policy leadership, demonstrable utilisation of the evidence base in its development and deployment and a conceptually clear communications plan were identified as priority targets for future policy planning. Future research priorities include generating more evidence on the feasibility of developing an effective community of practice and theory of change, the strengths and limitations of these and developing an evidence-based step-wise communications strategy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.

  19. Implementation of support vector machine for classification of speech marked hijaiyah letters based on Mel frequency cepstrum coefficient feature extraction

    NASA Astrophysics Data System (ADS)

    Adhi Pradana, Wisnu; Adiwijaya; Novia Wisesty, Untari

    2018-03-01

    Support Vector Machine, commonly called SVM, is one method that can be used for data classification. SVM classifies data from two different classes with a hyperplane. In this study, a system was built using SVM to develop Arabic speech recognition. In developing the system, two kinds of speakers were tested: dependent speakers and independent speakers. The system achieved an accuracy of 85.32% for dependent speakers and 61.16% for independent speakers.
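
    The abstract does not state the feature configuration or kernel; the sketch below shows the generic pipeline under assumed settings (13 MFCCs averaged over time, an RBF kernel), with librosa assumed as the feature-extraction library and hypothetical file and label lists.

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def mfcc_features(path: str, sr: int = 16000, n_mfcc: int = 13) -> np.ndarray:
    """Time-averaged MFCC vector for one utterance (hypothetical file path)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# 'files' and 'labels' are assumed lists of utterance paths and letter labels.
def train_letter_classifier(files, labels):
    X = np.vstack([mfcc_features(f) for f in files])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                              random_state=0, stratify=labels)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # kernel choice is an assumption
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)                # classifier and test accuracy
```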

  20. RAVE—a Detector-independent vertex reconstruction toolkit

    NASA Astrophysics Data System (ADS)

    Waltenberger, Wolfgang; Mitaroff, Winfried; Moser, Fabian

    2007-10-01

    A detector-independent toolkit for vertex reconstruction (RAVE) is being developed, along with a standalone framework (VERTIGO) for testing, analyzing and debugging. The core algorithms represent the state of the art for geometric vertex finding and fitting by both linear (Kalman filter) and robust estimation methods. Main design goals are ease of use, flexibility for embedding into existing software frameworks, extensibility, and openness. The implementation is based on modern object-oriented techniques, is coded in C++ with interfaces for Java and Python, and follows an open-source approach. A beta release is available. VERTIGO = "vertex reconstruction toolkit and interface to generic objects".

  1. Bibliometrics for Social Validation.

    PubMed

    Hicks, Daniel J

    2016-01-01

    This paper introduces a bibliometric, citation network-based method for assessing the social validation of novel research, and applies this method to the development of high-throughput toxicology research at the US Environmental Protection Agency. Social validation refers to the acceptance of novel research methods by a relevant scientific community; it is formally independent of the technical validation of methods, and is frequently studied in history, philosophy, and social studies of science using qualitative methods. The quantitative methods introduced here find that high-throughput toxicology methods are spread throughout a large and well-connected research community, which suggests high social validation. Further assessment of social validation involving mixed qualitative and quantitative methods is discussed in the conclusion.

  2. Bibliometrics for Social Validation

    PubMed Central

    2016-01-01

    This paper introduces a bibliometric, citation network-based method for assessing the social validation of novel research, and applies this method to the development of high-throughput toxicology research at the US Environmental Protection Agency. Social validation refers to the acceptance of novel research methods by a relevant scientific community; it is formally independent of the technical validation of methods, and is frequently studied in history, philosophy, and social studies of science using qualitative methods. The quantitative methods introduced here find that high-throughput toxicology methods are spread throughout a large and well-connected research community, which suggests high social validation. Further assessment of social validation involving mixed qualitative and quantitative methods is discussed in the conclusion. PMID:28005974

  3. Algorithm and Application of Gcp-Independent Block Adjustment for Super Large-Scale Domestic High Resolution Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.

    2018-04-01

    Accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and for small/medium-scale mapping of large areas abroad or with large volumes of images. In this paper, considering the geometric features of optical satellite imagery and building on a widely used optimization method for constrained problems, the Alternating Direction Method of Multipliers (ADMM), together with RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for super large-scale domestic high-resolution optical satellite imagery - GISIBA (GCP-Independent Satellite Imagery Block Adjustment), which is easy to parallelize and highly efficient. In this method, virtual "average" control points are built to solve the rank-defect problem and to support qualitative and quantitative analysis in block adjustment without control. The test results show that the horizontal and vertical accuracies of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaic problem of adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, experiments using GF-1 and ZY-3 satellite images over several typical test areas demonstrate the reliability, accuracy and performance of the developed procedure.

  4. Molecular cancer classification using a meta-sample-based regularized robust coding method.

    PubMed

    Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen

    2014-01-01

    Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present the meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based cluster method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples, and then encodes a testing sample as the sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension reduction based methods.
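
    A simplified sketch of the meta-sample-plus-coding idea rather than the exact MRRCC estimator: meta-samples are taken as per-class k-means centroids, a test profile is coded against the stacked meta-samples with an l1-regularized regression, and the class with the smallest coding residual wins. The clustering choice and regularization weight are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Lasso

def fit_meta_samples(X_by_class, n_meta=5):
    """X_by_class: dict label -> (n_samples, n_genes) training matrix."""
    metas = {}
    for label, X in X_by_class.items():
        k = min(n_meta, X.shape[0])
        metas[label] = KMeans(n_clusters=k, n_init=10,
                              random_state=0).fit(X).cluster_centers_
    return metas

def classify(x, metas, alpha=0.01):
    """Code test profile x against stacked meta-samples, pick smallest residual."""
    labels = list(metas)
    D = np.vstack([metas[l] for l in labels]).T          # dictionary (genes x atoms)
    sizes = [metas[l].shape[0] for l in labels]
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(D, x)
    coef, residuals, start = coder.coef_, [], 0
    for n in sizes:                                      # class-wise coding residual
        c = np.zeros_like(coef)
        c[start:start + n] = coef[start:start + n]
        residuals.append(np.linalg.norm(x - D @ c))
        start += n
    return labels[int(np.argmin(residuals))]
```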

  5. Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time‐to‐Event Analysis

    PubMed Central

    Gong, Xiajing; Hu, Meng

    2018-01-01

    Abstract Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time‐to‐event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high‐dimensional data featured by a large number of predictor variables. Our results showed that ML‐based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high‐dimensional data. The prediction performances of ML‐based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML‐based methods provide a powerful tool for time‐to‐event analysis, with a built‐in capacity for high‐dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. PMID:29536640
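
    A minimal pure-NumPy illustration of the concordance index used to compare the methods: for every comparable pair (the earlier time must be an observed event), a risk score is concordant if the subject who fails earlier has the higher predicted risk. The simulated data and risk scores are assumptions.

```python
import numpy as np

def concordance_index(time, event, risk):
    """Fraction of comparable pairs whose risk ordering matches survival ordering."""
    conc = ties = total = 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # pair is comparable if subject i fails first with an observed event
            if event[i] == 1 and time[i] < time[j]:
                total += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (conc + 0.5 * ties) / total

rng = np.random.default_rng(8)
x = rng.standard_normal(200)                       # a single predictor
time = rng.exponential(np.exp(-x))                 # higher x -> earlier failure
event = (rng.random(200) < 0.8).astype(int)        # roughly 20% censoring

print("c-index of x as risk score: %.3f" % concordance_index(time, event, x))
print("c-index of random score   : %.3f"
      % concordance_index(time, event, rng.standard_normal(200)))
```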

  6. Conceptual Design of a Communication-Based Deep Space Navigation Network

    NASA Technical Reports Server (NTRS)

    Anzalone, Evan J.; Chuang, C. H.

    2012-01-01

    As the need grows for increased autonomy and position knowledge accuracy to support missions beyond Earth orbit, engineers must develop more advanced navigation sensors and systems that operate independently of Earth-based analysis and processing. Several spacecraft are approaching this problem using inter-spacecraft radiometric tracking and onboard autonomous optical navigation methods. This paper proposes an alternative implementation to aid in spacecraft position fixing. The proposed Network-Based Navigation technique takes advantage of the communication data being sent between spacecraft and between spacecraft and ground control to embed navigation information. The navigation system uses these packets to provide navigation estimates to an onboard navigation filter to augment traditional ground-based radiometric tracking techniques. As opposed to using digital signal measurements to capture inherent information of the transmitted signal itself, this method relies on the embedded navigation packet headers to calculate a navigation estimate. This method is heavily dependent on clock accuracy, and the initial results show the promising performance of a notional system.

  7. Meta-analysis of pathway enrichment: combining independent and dependent omics data sets.

    PubMed

    Kaever, Alexander; Landesfeind, Manuel; Feussner, Kirstin; Morgenstern, Burkhard; Feussner, Ivo; Meinicke, Peter

    2014-01-01

    A major challenge in current systems biology is the combination and integrative analysis of large data sets obtained from different high-throughput omics platforms, such as mass spectrometry based Metabolomics and Proteomics or DNA microarray or RNA-seq-based Transcriptomics. Especially in the case of non-targeted Metabolomics experiments, where it is often impossible to unambiguously map ion features from mass spectrometry analysis to metabolites, the integration of more reliable omics technologies is highly desirable. A popular method for the knowledge-based interpretation of single data sets is the (Gene) Set Enrichment Analysis. In order to combine the results from different analyses, we introduce a methodical framework for the meta-analysis of p-values obtained from Pathway Enrichment Analysis (Set Enrichment Analysis based on pathways) of multiple dependent or independent data sets from different omics platforms. For dependent data sets, e.g. obtained from the same biological samples, the framework utilizes a covariance estimation procedure based on the nonsignificant pathways in single data set enrichment analysis. The framework is evaluated and applied in the joint analysis of Metabolomics mass spectrometry and Transcriptomics DNA microarray data in the context of plant wounding. In extensive studies of simulated data set dependence, the introduced correlation could be fully reconstructed by means of the covariance estimation based on pathway enrichment. By restricting the range of p-values of pathways considered in the estimation, the overestimation of correlation, which is introduced by the significant pathways, could be reduced. When applying the proposed methods to the real data sets, the meta-analysis was shown not only to be a powerful tool to investigate the correlation between different data sets and summarize the results of multiple analyses but also to distinguish experiment-specific key pathways.
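
    The covariance estimation from nonsignificant pathways is not reproduced here; the sketch shows only the core combination step, a Stouffer-type meta-analysis of one pathway's p-values in which an assumed common correlation between data sets inflates the variance of the summed z-scores for the dependent case.

```python
import numpy as np
from scipy.stats import norm

def stouffer_combine(p_values, corr=0.0):
    """Combine one pathway's one-sided p-values across data sets.

    corr: assumed common pairwise correlation between the data sets' z-scores
    (0 for independent data sets; >0 inflates the variance of the summed z).
    """
    z = norm.isf(np.asarray(p_values, dtype=float))   # p -> z
    k = z.size
    var = k + corr * k * (k - 1)                      # sum of covariance matrix entries
    z_comb = z.sum() / np.sqrt(var)
    return norm.sf(z_comb)                            # combined p-value

p_metabolomics, p_transcriptomics = 0.004, 0.02
print("independent data sets :", stouffer_combine([p_metabolomics, p_transcriptomics]))
print("dependent (rho = 0.5) :", stouffer_combine([p_metabolomics, p_transcriptomics],
                                                  corr=0.5))
```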

  8. Robust independent modal space control of a coupled nano-positioning piezo-stage

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Yang, Fufeng; Rui, Xiaoting

    2018-06-01

    In order to accurately control a coupled 3-DOF nano-positioning piezo-stage, this paper designs a hybrid controller. In this controller, a hysteresis observer based on a Bouc-Wen model is first established to compensate for the hysteresis nonlinearity of the piezoelectric actuator. Compared to hysteresis compensations using the Preisach model and the Prandtl-Ishlinskii model, the compensation method using the hysteresis observer is computationally lighter. Then, based on the proposed dynamics model, by constructing the modal filter, a robust H∞ independent modal space controller is designed and utilized to decouple the piezo-stage and deal with the unmodeled dynamics, disturbance, and hysteresis compensation error. The effectiveness of the proposed controller is demonstrated experimentally. The experimental results show that the proposed controller achieves high-precision positioning.
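
    The observer and the H∞ modal controller are beyond a short sketch; shown below is just the Bouc-Wen hysteresis equation that the observer compensates, integrated with forward Euler for an n = 1 form with illustrative (not identified) parameter values.

```python
import numpy as np

def bouc_wen_response(u, dt, d=1.0, alpha=0.6, beta=0.3, gamma=0.2):
    """Bouc-Wen hysteresis (n = 1): y = d*u - h,
    dh/dt = alpha*d*du - beta*|du|*h - gamma*du*|h|.

    Parameter values are illustrative; the paper identifies them for its stage.
    """
    h = 0.0
    y = np.empty_like(u)
    for k in range(len(u)):
        du = (u[k] - u[k - 1]) / dt if k > 0 else 0.0
        dh = alpha * d * du - beta * abs(du) * h - gamma * du * abs(h)
        h += dh * dt
        y[k] = d * u[k] - h
    return y

dt = 1e-4
t = np.arange(0, 0.2, dt)
u = 5.0 * np.sin(2 * np.pi * 20 * t)          # driving voltage
y = bouc_wen_response(u, dt)
print("max deviation from linear response: %.3f" % np.max(np.abs(y - u)))
```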

  9. Gradient-free MCMC methods for dynamic causal modelling.

    PubMed

    Sengupta, Biswa; Friston, Karl J; Penny, Will D

    2015-05-15

    In this technical note we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-based samplers are more efficient compared with random walk Metropolis sampler or slice-sampling; yet adaptive MCMC sampling is more promising in terms of compute time. Slice-sampling yields the highest number of independent samples from the target density - albeit at almost 1000% increase in computational time, in comparison to the most efficient algorithm (i.e., the adaptive MCMC sampler). Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
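
    A minimal sketch of the metric the note uses to compare samplers: the effective (roughly independent) sample size of a chain, estimated from its autocorrelation; dividing it by wall-clock time gives independent samples per unit computational time. The AR(1) trace below is a stand-in for an MCMC chain.

```python
import numpy as np

def effective_sample_size(chain, max_lag=200):
    """ESS from the initial positive sequence of autocorrelations."""
    x = chain - chain.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf = acf / acf[0]
    rho_sum = 0.0
    for lag in range(1, min(max_lag, x.size)):
        if acf[lag] <= 0:          # truncate at first non-positive autocorrelation
            break
        rho_sum += acf[lag]
    return x.size / (1.0 + 2.0 * rho_sum)

rng = np.random.default_rng(9)
# AR(1) chain mimicking a correlated MCMC trace (phi controls mixing speed).
phi, n = 0.9, 20000
chain = np.zeros(n)
for i in range(1, n):
    chain[i] = phi * chain[i - 1] + rng.standard_normal()

ess = effective_sample_size(chain)
print("ESS %.0f of %d draws; ESS per 1000 draws: %.1f" % (ess, n, 1000 * ess / n))
```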

  10. Colorimetric characterization models based on colorimetric characteristics evaluation for active matrix organic light emitting diode panels.

    PubMed

    Gong, Rui; Xu, Haisong; Tong, Qingfen

    2012-10-20

    The colorimetric characterization of active matrix organic light emitting diode (AMOLED) panels suffers from their poor channel independence. Based on the colorimetric characteristics evaluation of channel independence and chromaticity constancy, an accurate colorimetric characterization method, namely, the polynomial compensation model (PC model) considering channel interactions was proposed for AMOLED panels. In this model, polynomial expressions are employed to calculate the relationship between the prediction errors of XYZ tristimulus values and the digital inputs to compensate the XYZ prediction errors of the conventional piecewise linear interpolation assuming the variable chromaticity coordinates (PLVC) model. The experimental results indicated that the proposed PC model outperformed other typical characterization models for the two tested AMOLED smart-phone displays and for the professional liquid crystal display monitor as well.
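
    The PLVC base model and the polynomial order of the PC model are not reproduced here; the sketch only illustrates the compensation idea: fit a polynomial of the RGB digital inputs (including cross terms, which capture channel interactions) to the residual XYZ prediction errors of a base model, then add the fitted correction. The base model and synthetic data are stand-ins.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
rgb = rng.random((300, 3))                         # digital inputs scaled to 0-1
primaries = np.array([[0.41, 0.36, 0.18],
                      [0.21, 0.72, 0.07],
                      [0.02, 0.12, 0.95]])
xyz_meas = rgb @ primaries.T
xyz_meas += 0.03 * (rgb[:, [0]] * rgb[:, [1]])     # synthetic channel interaction
xyz_base = rgb @ primaries.T                       # base model ignores interaction

# Fit a polynomial of the digital inputs to the residual prediction errors.
poly = PolynomialFeatures(degree=2, include_bias=False)
F = poly.fit_transform(rgb)                        # includes r*g, r*b, g*b cross terms
corr_model = LinearRegression().fit(F, xyz_meas - xyz_base)

xyz_corrected = xyz_base + corr_model.predict(F)
print("RMS error before: %.4f  after: %.4f"
      % (np.sqrt(np.mean((xyz_meas - xyz_base) ** 2)),
         np.sqrt(np.mean((xyz_meas - xyz_corrected) ** 2))))
```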

  11. A novel knowledge-based potential for RNA 3D structure evaluation

    NASA Astrophysics Data System (ADS)

    Yang, Yi; Gu, Qi; Zhang, Ben-Gong; Shi, Ya-Zhou; Shao, Zhi-Gang

    2018-03-01

    Ribonucleic acids (RNAs) play a vital role in biology, and knowledge of their three-dimensional (3D) structure is required to understand their biological functions. Recently structural prediction methods have been developed to address this issue, but a series of RNA 3D structures are generally predicted by most existing methods. Therefore, the evaluation of the predicted structures is generally indispensable. Although several methods have been proposed to assess RNA 3D structures, the existing methods are not precise enough. In this work, a new all-atom knowledge-based potential is developed for more accurately evaluating RNA 3D structures. The potential not only includes local and nonlocal interactions but also fully considers the specificity of each RNA by introducing a retraining mechanism. Based on extensive test sets generated from independent methods, the proposed potential correctly distinguished the native state and ranked near-native conformations to effectively select the best. Furthermore, the proposed potential precisely captured RNA structural features such as base-stacking and base-pairing. Comparisons with existing potential methods show that the proposed potential is very reliable and accurate in RNA 3D structure evaluation. Project supported by the National Science Foundation of China (Grants Nos. 11605125, 11105054, 11274124, and 11401448).

  12. CAE "FOCUS" for modelling and simulating electron optics systems: development and application

    NASA Astrophysics Data System (ADS)

    Trubitsyn, Andrey; Grachev, Evgeny; Gurov, Victor; Bochkov, Ilya; Bochkov, Victor

    2017-02-01

    Electron optics is a theoretical basis of scientific instrument engineering, and mathematical simulation of the underlying processes is the basis for the contemporary design of complex electron-optical devices. Such numerical simulation problems are effectively solved by means of CAE systems. CAE "FOCUS", developed by the authors, includes fast and accurate methods: the boundary element method (BEM) for electric field calculation, the Runge-Kutta-Fehlberg method for charged-particle trajectory computation with control of the calculation accuracy, and original methods for finding the conditions of angular and time-of-flight focusing. CAE "FOCUS" is organized as a collection of modules, each of which solves an independent (sub)task. A range of physical and analytical devices, in particular a high-power microfocus X-ray tube, has been developed using this software.
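
    A minimal sketch of the trajectory-integration step: SciPy's adaptive RK45 integrator (an embedded Runge-Kutta pair of the Fehlberg type) propagates an electron through a static field with user-set tolerances controlling the accuracy. The uniform field below is an arbitrary stand-in for the BEM-computed field of the CAE.

```python
import numpy as np
from scipy.integrate import solve_ivp

Q_E, M_E = -1.602e-19, 9.109e-31          # electron charge [C] and mass [kg]

def efield(x, y):
    """Stand-in uniform field [V/m]; the CAE would evaluate the BEM field here."""
    return np.array([0.0, -1.0e4])

def equations_of_motion(t, state):
    x, y, vx, vy = state
    ax, ay = (Q_E / M_E) * efield(x, y)
    return [vx, vy, ax, ay]

# Electron launched at 1 keV along x; adaptive step control sets the accuracy.
v0 = np.sqrt(2 * 1000 * 1.602e-19 / M_E)
sol = solve_ivp(equations_of_motion, (0.0, 5e-9), [0.0, 0.0, v0, 0.0],
                method="RK45", rtol=1e-8, atol=1e-12, dense_output=True)

x_end, y_end = sol.y[0, -1], sol.y[1, -1]
print("after 5 ns: x = %.4f m, y = %.6f m" % (x_end, y_end))
```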

  13. Affected States soft independent modeling by class analogy from the relation between independent variables, number of independent variables and sample size.

    PubMed

    Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, Irem Ersöz

    2013-03-01

    The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to determine whether the method is affected by the number of independent variables, the relationships between variables, and the sample size. Simulation study. The SIMCA model is performed in two stages. Simulations were run to determine whether the method is influenced by the number of independent variables, the relationships between variables, and the sample size. The simulated conditions had equal sample sizes in both groups of 30, 100, and 1000 samples; 2, 3, 5, 10, 50, and 100 variables; and relationships between variables that were very high, medium, and very low. Average classification accuracies of the simulations, which were repeated 1000 times for each condition of the trial plan, are given as tables. Diagnostic accuracy is seen to increase as the number of independent variables increases. SIMCA is a method that can be used when the relationships between variables are very high, the number of independent variables is large, and the data contain outlier values.
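
    A minimal SIMCA-style sketch under simplifying assumptions: one PCA model is fitted per class, a new sample is scored by its orthogonal residual distance to each class subspace, and the nearest class is assigned; the acceptance thresholds and F-statistics of full SIMCA are omitted.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_simca(X_by_class, n_components=2):
    """One PCA model per class: class mean and loadings are stored in the PCA."""
    models = {}
    for label, X in X_by_class.items():
        models[label] = PCA(n_components=min(n_components, X.shape[1],
                                             X.shape[0] - 1)).fit(X)
    return models

def residual_distance(pca, x):
    """Orthogonal distance of x to the class PCA subspace."""
    reconstructed = pca.inverse_transform(pca.transform(x[None, :]))[0]
    return np.linalg.norm(x - reconstructed)

def classify(models, x):
    dists = {label: residual_distance(pca, x) for label, pca in models.items()}
    return min(dists, key=dists.get), dists

# Hypothetical usage with two classes whose variables are strongly correlated.
rng = np.random.default_rng(11)
x1a = rng.standard_normal((100, 1))
class_a = np.hstack([x1a, 0.9 * x1a + 0.1 * rng.standard_normal((100, 1)),
                     rng.standard_normal((100, 1))])
x1b = rng.standard_normal((100, 1))
class_b = np.hstack([x1b, -0.9 * x1b + 0.1 * rng.standard_normal((100, 1)),
                     rng.standard_normal((100, 1))])
models = fit_simca({"A": class_a, "B": class_b})
print(classify(models, np.array([1.0, 0.9, 0.0])))   # lies in class A's subspace
```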

  14. Applying independent component analysis to detect silent speech in magnetic resonance imaging signals.

    PubMed

    Abe, Kazuhiro; Takahashi, Toshimitsu; Takikawa, Yoriko; Arai, Hajime; Kitazawa, Shigeru

    2011-10-01

    Independent component analysis (ICA) can be usefully applied to functional imaging studies to evaluate the spatial extent and temporal profile of task-related brain activity. It requires no a priori assumptions about the anatomical areas that are activated or the temporal profile of the activity. We applied spatial ICA to detect a voluntary but hidden response of silent speech. To validate the method against a standard model-based approach, we used the silent speech of a tongue twister as a 'Yes' response to single questions that were delivered at given times. In the first task, we attempted to estimate one number that was chosen by a participant from 10 possibilities. In the second task, we increased the possibilities to 1000. In both tasks, spatial ICA was as effective as the model-based method for determining the number in the subject's mind (80-90% correct per digit), but spatial ICA outperformed the model-based method in terms of time, especially in the 1000-possibility task. In the model-based method, calculation time increased by 30-fold, to 15 h, because of the necessity of testing 1000 possibilities. In contrast, the calculation time for spatial ICA remained as short as 30 min. In addition, spatial ICA detected an unexpected response that occurred by mistake. This advantage was validated in a third task, with 13 500 possibilities, in which participants had the freedom to choose when to make one of four responses. We conclude that spatial ICA is effective for detecting the onset of silent speech, especially when it occurs unexpectedly. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  15. Research on Application of Automatic Weather Station Based on Internet of Things

    NASA Astrophysics Data System (ADS)

    Jianyun, Chen; Yunfan, Sun; Chunyan, Lin

    2017-12-01

    In this paper, the Internet of Things is briefly introduced, and then its application in the weather station is studied. A method of data acquisition and transmission based on the NB-IoT communication mode is proposed. With Internet of Things technology, digital sensors, and independent power supplies as the technical basis, intelligent interconnection is realized in the construction of the automatic weather station, forming an automatic weather station based on the Internet of Things. A network structure for the automatic weather station based on Internet of Things technology is constructed to realize the independent operation of intelligent sensors and wireless data transmission. Networked collection and dissemination of meteorological data are studied: the data platform performs data analysis, lays preliminary groundwork for meteorological information publishing standards, and provides a data interface for networked meteorological information receiving terminals, serving smart-city and intelligent meteorological services.

  16. The ratio method: A new tool to study one-neutron halo nuclei

    DOE PAGES

    Capel, Pierre; Johnson, R. C.; Nunes, F. M.

    2013-10-02

    Recently a new observable to study halo nuclei was introduced, based on the ratio between breakup and elastic angular cross sections. This new observable is shown by the analysis of specific reactions to be independent of the reaction mechanism and to provide nuclear-structure information of the projectile. Here we explore the details of this ratio method, including the sensitivity to binding energy and angular momentum of the projectile. We also study the reliability of the method with breakup energy. Lastly, we provide guidelines and specific examples for experimentalists who wish to apply this method.

  17. Estimation of scattering object characteristics for image reconstruction using a nonzero background.

    PubMed

    Jin, Jing; Astheimer, Jeffrey; Waag, Robert

    2010-06-01

    Two methods are described to estimate the boundary of a 2-D penetrable object and the average sound speed in the object. One method is for circular objects centered in the coordinate system of the scattering observation. This method uses an orthogonal function expansion for the scattering. The other method is for noncircular, essentially convex objects. This method uses cross correlation to obtain time differences that determine a family of parabolas whose envelope is the boundary of the object. A curve-fitting method and a phase-based method are described to estimate and correct the offset of an uncentered radial or elliptical object. A method based on the extinction theorem is described to estimate absorption in the object. The methods are applied to calculated scattering from a circular object with an offset and to measured scattering from an offset noncircular object. The results show that the estimated boundaries, sound speeds, and absorption slopes agree very well with independently measured or true values when the assumptions of the methods are reasonably satisfied.
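
    The cross-correlation step at the heart of the second method can be illustrated generically: the sketch below recovers the time difference between two noisy copies of a pulse from the lag of their cross-correlation peak; in the paper, each such time difference parameterizes one parabola of the envelope that defines the boundary. The sample rate, pulse shape, and noise level here are hypothetical.

    ```python
    # Minimal sketch: estimating a time difference by cross-correlation (hypothetical signals).
    import numpy as np

    fs = 10e6                                    # sample rate in Hz (assumed)
    t = np.arange(0, 200e-6, 1 / fs)
    pulse = np.exp(-((t - 50e-6) ** 2) / (2 * (2e-6) ** 2)) * np.sin(2 * np.pi * 2e6 * t)

    true_delay = 7.3e-6                          # delay to recover
    shifted = np.interp(t - true_delay, t, pulse, left=0.0, right=0.0)

    rng = np.random.default_rng(1)
    a = pulse + 0.05 * rng.normal(size=t.size)
    b = shifted + 0.05 * rng.normal(size=t.size)

    # Full cross-correlation; the lag of the peak gives the time difference.
    xc = np.correlate(b, a, mode="full")
    lag = np.argmax(xc) - (len(a) - 1)
    print(f"estimated delay: {lag / fs * 1e6:.2f} us (true {true_delay * 1e6:.2f} us)")
    ```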

  18. A group ICA based framework for evaluating resting fMRI markers when disease categories are unclear: application to schizophrenia, bipolar, and schizoaffective disorders

    PubMed Central

    Du, Yuhui; Pearlson, Godfrey D; Liu, Jingyu; Sui, Jing; Yu, Qingbao; He, Hao; Castro, Eduardo; Calhoun, Vince D.

    2015-01-01

    Schizophrenia (SZ), bipolar disorder (BP) and schizoaffective disorder (SAD) share some common symptoms, and there is a debate about whether SAD is an independent category. To the best of our knowledge, no study has been done to differentiate these three disorders or to investigate the distinction of SAD as an independent category using fMRI data. The present study aims to explore biomarkers from resting-state fMRI networks for differentiating these disorders and to investigate the relationship among these disorders based on fMRI networks, with an emphasis on SAD. Firstly, a novel group ICA method, group information guided independent component analysis (GIG-ICA), was applied to extract subject-specific brain networks from fMRI data of 20 healthy controls (HC), 20 SZ patients, 20 BP patients, 20 patients suffering SAD with manic episodes (SADM), and 13 patients suffering SAD with depressive episodes exclusively (SADD). Then, five-level one-way analysis of covariance and multiclass support vector machine recursive feature elimination were employed to identify discriminative regions from the networks. Subsequently, the t-distributed stochastic neighbor embedding (t-SNE) projection and hierarchical clustering methods were implemented to investigate the relationship among those groups. Finally, to evaluate the generalization ability, 16 new subjects were classified based on the identified regions and the model trained on the original 93 subjects. Results show that the discriminative regions mainly include frontal, parietal, precuneus, cingulate, supplementary motor, cerebellar, insula and supramarginal cortices, which performed well in distinguishing the different groups. SADM and SADD were the most similar to each other, although SADD had greater similarity to SZ compared to other groups, which indicates SAD may be an independent category. BP was closer to HC compared with the other psychotic disorders. In summary, resting-state fMRI brain networks extracted via GIG-ICA show promising potential for differentiating SZ, BP, and SAD. PMID:26216278
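
    As a generic sketch of the feature-selection, classification, and group-relationship steps described above (not the GIG-ICA pipeline itself), the following uses scikit-learn's recursive feature elimination with a linear multiclass SVM, a t-SNE embedding, and hierarchical clustering of group mean profiles on simulated features; the group names and data are placeholders.

    ```python
    # Minimal sketch of the selection/classification/embedding steps (synthetic data).
    import numpy as np
    from sklearn.feature_selection import RFE
    from sklearn.svm import SVC
    from sklearn.manifold import TSNE
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    n_per_group, n_features = 20, 50
    groups = ["HC", "SZ", "BP", "SAD"]
    X = rng.normal(size=(len(groups) * n_per_group, n_features))
    y = np.repeat(np.arange(len(groups)), n_per_group)
    X[:, :5] += y[:, None] * 0.8                     # 5 weakly discriminative features

    # Recursive feature elimination with a linear multiclass SVM.
    selector = RFE(SVC(kernel="linear"), n_features_to_select=10).fit(X, y)
    X_sel = X[:, selector.support_]

    # Low-dimensional embedding of the selected features, then hierarchical clustering
    # of the group mean profiles to inspect which diagnostic groups lie closest.
    emb = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(X_sel)
    print("t-SNE embedding shape:", emb.shape)
    means = np.vstack([X_sel[y == g].mean(axis=0) for g in range(len(groups))])
    tree = linkage(means, method="average")
    print(dict(zip(groups, fcluster(tree, t=2, criterion="maxclust"))))
    ```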

  19. Predictors of Virological Response in 3,235 Chronic HCV Egyptian Patients Treated with Peginterferon Alpha-2a Compared with Peginterferon Alpha-2b Using Statistical Methods and Data Mining Techniques.

    PubMed

    El Raziky, Maissa; Fathalah, Waleed Fouad; Zakaria, Zeinab; Eldeen, Hadeel Gamal; Abul-Fotouh, Amr; Salama, Ahmed; Awad, Abubakr; Esmat, Gamal; Mabrouk, Mahasen

    2016-05-01

    Despite the appearance of new oral antiviral drugs, pegylated interferon (PEG-IFN)/RBV may remain the standard of care therapy for some time, and several viral and host factors are reported to be correlated with therapeutic effects. This study aimed to reveal the independent variables associated with failure of sustained virological response (SVR) to PEG-IFN alpha-2a versus PEG-IFN alpha-2b in the treatment of naïve chronic hepatitis C virus (HCV) Egyptian patients using both statistical methods and data mining techniques. This retrospective cohort study included 3,235 chronic hepatitis C patients enrolled in a large Egyptian medical center: 1,728 patients had been treated with PEG-IFN alpha-2a plus ribavirin (RBV) and 1,507 patients with PEG-IFN alpha-2b plus RBV between 2007 and 2011. Both multivariate analysis and a Reduced Error Pruning Tree (REPTree)-based model were used to reveal the independent variables associated with treatment response. In both treatment types, alpha-fetoprotein (AFP) >10 ng/mL and HCV viremia >600 × 10³ IU/mL were the independent baseline variables associated with failure of SVR, while male gender, decreased hemoglobin, and thyroid-stimulating hormone were the independent variables associated with good response (P < 0.05). The REPTree-based model showed that low AFP was the factor of the initial split (best predictor) of response for either PEG-IFN alpha-2a or PEG-IFN alpha-2b (cutoff value 8.53, 4.89 ng/mL, AUROC = 0.68 and 0.61, P = 0.05). Serum AFP >10 ng/mL and viral load >600 × 10³ IU/mL are variables associated with failure of response in both treatment types. The REPTree-based model could be used to assess predictors of response.

  20. Comparison of DGT with traditional extraction methods for assessing arsenic bioavailability to Brassica chinensis in different soils.

    PubMed

    Dai, Yunchao; Nasir, Mubasher; Zhang, Yulin; Gao, Jiakai; Lv, Yamin; Lv, Jialong

    2018-01-01

    Several predictive models and methods have been used for heavy metal bioavailability, but there is no universally accepted approach for evaluating the bioavailability of arsenic (As) in soil. The technique of diffusive gradients in thin-films (DGT) is a promising tool, but there is considerable debate with respect to its suitability. The DGT method was compared with traditional chemical extraction techniques (soil solution, NaHCO₃, NH₄Cl, HCl, and total As methods) for estimating As bioavailability in soil, based on a greenhouse experiment using Brassica chinensis grown in various soils from 15 provinces in China. In addition, we assessed whether these methods are independent of soil properties. The correlations between plant and soil As concentrations measured with traditional extraction techniques were pH and iron oxide (Feₒₓ) dependent, indicating that these methods are influenced by soil properties. In contrast, DGT measurements were independent of soil properties and also showed a better correlation coefficient than the traditional techniques. Thus, the DGT technique is superior to traditional techniques and should be preferred for evaluating As bioavailability in different types of soils. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Optimization of Robust HPLC Method for Quantitation of Ambroxol Hydrochloride and Roxithromycin Using a DoE Approach.

    PubMed

    Patel, Rashmin B; Patel, Nilay M; Patel, Mrunali R; Solanki, Ajay B

    2017-03-01

    The aim of this work was to develop and optimize a robust HPLC method for the separation and quantitation of ambroxol hydrochloride and roxithromycin utilizing a Design of Experiments (DoE) approach. The Plackett-Burman design was used to assess the impact of independent variables (concentration of organic phase, mobile phase pH, flow rate and column temperature) on peak resolution, USP tailing and number of plates. A central composite design was utilized to evaluate the main, interaction, and quadratic effects of the independent variables on the selected dependent variables. The optimized HPLC method was validated based on the ICH Q2(R1) guideline and was used to separate and quantify ambroxol hydrochloride and roxithromycin in tablet formulations. The findings showed that the DoE approach could be effectively applied to optimize a robust HPLC method for quantification of ambroxol hydrochloride and roxithromycin in tablet formulations. Statistical comparison between results of the proposed and a reported HPLC method revealed no significant difference, indicating the suitability of the proposed HPLC method for analysis of ambroxol hydrochloride and roxithromycin in pharmaceutical formulations. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
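
    The screening stage of a DoE workflow can be illustrated with a generic two-level design and a main-effects fit; the sketch below is not the authors' Plackett-Burman/central-composite design, and the factor names, coded levels, and simulated response are hypothetical stand-ins for real HPLC runs.

    ```python
    # Minimal sketch: two-level screening of four factors and main-effect estimation
    # (generic illustration of the DoE idea; factors and the response are simulated).
    import itertools
    import numpy as np

    factors = ["organic_fraction", "pH", "flow_rate", "column_temp"]
    design = np.array(list(itertools.product([-1, 1], repeat=len(factors))), float)

    rng = np.random.default_rng(0)

    def run_experiment(row):
        # Stand-in for an HPLC run returning peak resolution; replace with real measurements.
        true_effects = np.array([1.2, -0.8, 0.3, 0.1])
        return 5.0 + row @ true_effects + rng.normal(0, 0.2)

    resolution = np.array([run_experiment(r) for r in design])

    # Fit intercept + main effects by least squares; large |coefficient| flags factors
    # worth carrying into the central composite (response-surface) stage.
    Xmat = np.column_stack([np.ones(len(design)), design])
    coef, *_ = np.linalg.lstsq(Xmat, resolution, rcond=None)
    for name, c in zip(factors, coef[1:]):
        print(f"{name:>16s}: effect ~ {c:+.2f}")
    ```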

  2. Hydrological, water-quality, and ecological data for streams in Independence, Missouri, June 2005 through September 2013

    USGS Publications Warehouse

    Niesen, Shelley L.; Christensen, Eric D.

    2015-01-01

    Water-quality, hydrological, and ecological data collected from June 2005 through September 2013 from the Little Blue River and smaller streams within the City of Independence, Missouri, are presented in this report. These data were collected as a part of an ongoing cooperative study between the U.S. Geological Survey and the City of Independence Water Pollution Control Department to characterize the water quality and ecological condition of Independence streams. The quantities, sources of selected constituents, and processes affecting water quality and aquatic life were evaluated to determine the resulting ecological condition of streams within Independence. Data collected for this study fulfill the municipal separate sewer system permit requirements for the City of Independence and can be used to provide a baseline with which city managers can determine the effectiveness of current (2014) and future best management practices within Independence. Continuous streamflow and water-quality data, collected during base flow and stormflow, included physical and chemical properties, inorganic constituents, common organic micro-constituents, pesticides in streambed sediment and surface water, fecal indicator bacteria and microbial source tracking data, and suspended sediment. Dissolved oxygen, pH, specific conductance, water temperature, and turbidity data were measured continuously at seven sites within Independence. Base-flow and stormflow samples were collected at eight gaged and two ungaged sites. Fecal source samples were collected as references for microbial source tracking, and sewage influent samples were collected as additional source samples. Dry-weather screening was done on 11 basins within Independence to identify potential contaminant sources to the streams. Benthic macroinvertebrate community surveys and habitat assessments were done on 10 stream sites and 2 comparison sites outside the city. Sampling and laboratory procedures and quality-assurance and quality-control methods used in data collection for this study are described in this report.

  3. Three-way parallel independent component analysis for imaging genetics using multi-objective optimization.

    PubMed

    Ulloa, Alvaro; Jingyu Liu; Vergara, Victor; Jiayu Chen; Calhoun, Vince; Pattichis, Marios

    2014-01-01

    In the biomedical field, current technology allows for the collection of multiple data modalities from the same subject. In consequence, there is an increasing interest in methods to analyze multi-modal data sets. Methods based on independent component analysis have proven to be effective in jointly analyzing multiple modalities, including brain imaging and genetic data. This paper describes a new algorithm, three-way parallel independent component analysis (3pICA), for jointly identifying genomic loci associated with brain function and structure. The proposed algorithm relies on the use of multi-objective optimization methods to identify correlations among the modalities and maximally independent sources within each modality. We test the robustness of the proposed approach by varying the effect size, cross-modality correlation, noise level, and dimensionality of the data. Simulation results suggest that 3pICA is robust to data with SNR levels from 0 to 10 dB and effect sizes from 0 to 3, while presenting its best performance with high cross-modality correlations and more than one subject per 1,000 variables. In an experimental study with 112 human subjects, the method identified links between a genetic component (pointing to brain function and mental disorder associated genes, including PPP3CC, KCNQ5, and CYP7B1), a functional component related to signal decreases in the default mode network during the task, and a brain structure component indicating increases of gray matter in regions of the default mode network. Although such findings need further replication, the simulation and in-vivo results validate the three-way parallel ICA algorithm presented here as a useful tool in biomedical data decomposition applications.

  4. Method for the substantial reduction of quenching effects in luminescence spectrometry

    DOEpatents

    Demas, James N.; Jones, Wesley M.; Keller, Richard A.

    1989-01-01

    Method for reducing quenching effects in analytical luminescence measurements. Two embodiments of the present invention are described which relate to a form of time resolution based on the amplitudes and phase shifts of modulated emission signals. In the first embodiment, the measured modulated emission signal is substantially independent of sample quenching at sufficiently high frequencies. In the second embodiment, the modulated amplitude and the phase shift between the emission signal and the excitation source are simultaneously measured. Using either method, the observed modulated amplitude may be reduced to its unquenched value.

  5. Charmonium-nucleon interactions from the time-dependent HAL QCD method

    NASA Astrophysics Data System (ADS)

    Sugiura, Takuya; Ikeda, Yoichi; Ishii, Noriyoshi

    2018-03-01

    The charmonium-nucleon effective central interactions have been computed by the time-dependent HAL QCD method. This gives an updated result of a previous study based on the time-independent method, which is now known to be problematic because of the difficulty in achieving ground-state saturation. We discuss the consistency of the result with heavy-quark symmetry. No bound state is observed in the analysis of the scattering phase shift; however, this result will inform a future search for hidden-charm pentaquarks that takes channel-coupling effects into account.

  6. Retinal hemorrhage detection by rule-based and machine learning approach.

    PubMed

    Di Xiao; Shuang Yu; Vignarajan, Janardhan; Dong An; Mei-Ling Tay-Kearney; Kanagasingam, Yogi

    2017-07-01

    Robust detection of hemorrhages (HMs) in color fundus images is important for an automatic diabetic retinopathy grading system. Detecting hemorrhages that are close to or connected with retinal blood vessels was found to be challenging, yet most methods have not addressed this problem even when they mention the issue. In this paper, we propose a novel hemorrhage detection method that combines rule-based and machine learning approaches. We focus on improving the detection of hemorrhages that are close to or connected with retinal blood vessels, in addition to detecting independent hemorrhage regions. A preliminary test for detecting HM presence was conducted on images from two databases. We achieved sensitivity and specificity of 93.3% and 88% as well as 91.9% and 85.6% on the two datasets.

  7. Independent Component Analysis applied to Ground-based observations

    NASA Astrophysics Data System (ADS)

    Martins-Filho, Walter; Griffith, Caitlin; Pearson, Kyle; Waldmann, Ingo; Alvarez-Candal, Alvaro; Zellem, Robert Thomas

    2018-01-01

    Transit measurements of Jovian-sized exoplanetary atmospheres allow one to study the composition of exoplanets, largely independent of the planet’s temperature profile. However, measurements of hot-Jupiter transits must achieve a level of accuracy in the flux to determine the spectral modulation of the exoplanetary atmosphere. To accomplish this level of precision, we need to extract systematic errors, and, for ground-based measurements, the effects of Earth’s atmosphere, from the signal due to the exoplanet, which is several orders of magnitude smaller. The effects of the terrestrial atmosphere and some of the time-dependent systematic errors of ground-based transit measurements are treated mainly by dividing the host star by a reference star at each wavelength and time step of the transit. Recently, Independent Component Analysis (ICA) has been used to remove systematic effects from the raw data of space-based observations (Waldmann, 2014, 2012; Morello et al., 2016, 2015). ICA is a statistical method born from the ideas of blind-source separation studies, which can be used to de-trend several independent source signals of a data set (Hyvarinen and Oja, 2000). This technique requires no additional prior knowledge of the data set. In addition, this technique has the advantage of requiring no reference star. Here we apply the ICA to ground-based photometry of the exoplanet XO-2b recorded by the 61” Kuiper Telescope and compare the results of the ICA to those of a previous analysis from Zellem et al. (2015), which does not use ICA. We also simulate the effects of various conditions (concerning the systematic errors, noise and the stability of the object on the detector) to determine the conditions under which an ICA can be used with high precision to extract the light curve of exoplanetary photometry measurements.

  8. Independent Component Analysis applied to Ground-based observations

    NASA Astrophysics Data System (ADS)

    Martins-Filho, Walter; Griffith, Caitlin Ann; Pearson, Kyle; Waldmann, Ingo; Alvarez-Candal, Alvaro; Zellem, Robert

    2017-10-01

    Transit measurements of Jovian-sized exoplanetary atmospheres allow one to study the composition of exoplanets, largely independent of the planet’s temperature profile. However, measurements of hot-Jupiter transits must achieve a level of accuracy in the flux to determine the spectral modulations of the exoplanetary atmosphere. To accomplish this level of precision, we need to extract systematic errors, and, for ground-based measurements, the effects of Earth’s atmosphere, from the signal due to the exoplanet, which is several orders of magnitude smaller. The effects of the terrestrial atmosphere and some of the time-dependent systematic errors of ground-based transit measurements are treated mainly by dividing the host star by a reference star at each wavelength and time step of the transit. Recently, Independent Component Analysis (ICA) has been used to remove systematic effects from the raw data of space-based observations (Waldmann, 2014, 2012; Morello et al., 2016, 2015). ICA is a statistical method born from the ideas of blind-source separation studies, which can be used to de-trend several independent source signals of a data set (Hyvarinen and Oja, 2000). This technique requires no additional prior knowledge of the data set. In addition, this technique has the advantage of requiring no reference star. Here we apply the ICA to ground-based photometry of the exoplanet XO-2b recorded by the 61” Kuiper Telescope and compare the results of the ICA to those of a previous analysis from Zellem et al. (2015), which does not use ICA. We also simulate the effects of various conditions (concerning the systematic errors, noise and the stability of the object on the detector) to determine the conditions under which an ICA can be used with high precision to extract the light curve of exoplanetary photometry measurements.

  9. Model-independent curvature determination with 21 cm intensity mapping experiments

    NASA Astrophysics Data System (ADS)

    Witzemann, Amadeus; Bull, Philip; Clarkson, Chris; Santos, Mario G.; Spinelli, Marta; Weltman, Amanda

    2018-06-01

    Measurements of the spatial curvature of the Universe have improved significantly in recent years, but still tend to require strong assumptions to be made about the equation of state of dark energy (DE) in order to reach sub-percent precision. When these assumptions are relaxed, strong degeneracies arise that make it hard to disentangle DE and curvature, degrading the constraints. We show that forthcoming 21 cm intensity mapping experiments such as Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) are ideally designed to carry out model-independent curvature measurements, as they can measure the clustering signal at high redshift with sufficient precision to break many of the degeneracies. We consider two different model-independent methods, based on `avoiding' the DE-dominated regime and non-parametric modelling of the DE equation of state, respectively. Our forecasts show that HIRAX will be able to improve upon current model-independent constraints by around an order of magnitude, reaching percent-level accuracy even when an arbitrary DE equation of state is assumed. In the same model-independent analysis, the sample variance limit for a similar survey is another order of magnitude better.

  10. Model-independent curvature determination with 21cm intensity mapping experiments

    NASA Astrophysics Data System (ADS)

    Witzemann, Amadeus; Bull, Philip; Clarkson, Chris; Santos, Mario G.; Spinelli, Marta; Weltman, Amanda

    2018-04-01

    Measurements of the spatial curvature of the Universe have improved significantly in recent years, but still tend to require strong assumptions to be made about the equation of state of dark energy (DE) in order to reach sub-percent precision. When these assumptions are relaxed, strong degeneracies arise that make it hard to disentangle DE and curvature, degrading the constraints. We show that forthcoming 21cm intensity mapping experiments such as HIRAX are ideally designed to carry out model-independent curvature measurements, as they can measure the clustering signal at high redshift with sufficient precision to break many of the degeneracies. We consider two different model-independent methods, based on `avoiding' the DE-dominated regime and non-parametric modelling of the DE equation of state, respectively. Our forecasts show that HIRAX will be able to improve upon current model-independent constraints by around an order of magnitude, reaching percent-level accuracy even when an arbitrary DE equation of state is assumed. In the same model-independent analysis, the sample variance limit for a similar survey is another order of magnitude better.

  11. Noisy cooperative intermittent processes: From blinking quantum dots to human consciousness

    NASA Astrophysics Data System (ADS)

    Allegrini, Paolo; Paradisi, Paolo; Menicucci, Danilo; Bedini, Remo; Gemignani, Angelo; Fronzoni, Leone

    2011-07-01

    We study the superposition of a non-Poisson renewal process and a superimposed Poisson noise. The non-Poisson renewals mark the passage between meta-stable states in systems with self-organization. We propose methods to separately measure the amount of information due to the two independent processes, and we see that a superficial study based on the survival probabilities yields stretched-exponential relaxations. Our method is in fact able to unravel the inverse-power-law relaxation of the isolated non-Poisson processes, even when noise is present. We provide examples of this behavior in systems of diverse nature, from blinking nano-crystals to weak turbulence. Finally, we focus our discussion on events extracted from human electroencephalograms, and we discuss their connection with emerging properties of integrated neural dynamics, i.e. consciousness.

  12. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
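
    A minimal numerical sketch of the procedure described above, under assumed sampling parameters and with a synthetic record: segment a single long time history into equal sample records, form the equivalent ensemble, and check whether the ensemble averages are time invariant.

    ```python
    # Minimal sketch: equivalent-ensemble test of weak stationarity (synthetic record).
    import numpy as np

    rng = np.random.default_rng(0)
    fs, T = 1000.0, 60.0                      # sample rate [Hz] and record length [s] (assumed)
    t = np.arange(0, T, 1 / fs)
    x = rng.normal(size=t.size)               # stationary toy "turbulence" signal
    # x = x * (1 + 0.02 * t)                  # uncomment to make the record non-stationary

    # Segment the single long record into equal, finite sample records (the equivalent ensemble).
    n_records = 30
    records = x[: (x.size // n_records) * n_records].reshape(n_records, -1)

    # Equivalent-ensemble mean and mean-square at each instant within a record;
    # weak stationarity requires these averages to be (statistically) time invariant.
    ens_mean = records.mean(axis=0)
    ens_msq = (records ** 2).mean(axis=0)
    drift = np.ptp(ens_msq) / ens_msq.mean()
    print(f"relative spread of ensemble mean-square across time: {drift:.2f}")
    print(f"ensemble mean ranges over [{ens_mean.min():.2f}, {ens_mean.max():.2f}]")
    ```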

  13. Faculty Mentors', Graduate Students', and Performance-Based Assessments of Students' Research Skill Development

    ERIC Educational Resources Information Center

    Feldon, David F.; Maher, Michelle A.; Hurst, Melissa; Timmerman, Briana

    2015-01-01

    Faculty mentorship is thought to be a linchpin of graduate education in STEM disciplines. This mixed-method study investigates agreement between student mentees' and their faculty mentors' perceptions of the students' developing research knowledge and skills in STEM. We also compare both assessments against independent ratings of the students'…

  14. Muscle power is an independent determinant of pain and quality of life in knee osteoarthritis

    USDA-ARS?s Scientific Manuscript database

    OBJECTIVE: This study examined the relationships between leg muscle strength, power, and perceived disease severity in subjects with knee osteoarthritis (OA) in order to determine whether dynamic leg extensor muscle power would be associated with pain and quality of life in knee OA. METHODS: Baseli...

  15. Experimental Study of Sudden Solidification of Supercooled Water

    ERIC Educational Resources Information Center

    Bochnícek, Zdenek

    2014-01-01

    Two independent methods for measuring the mass of ice created during the sudden solidification of supercooled water are described. One is based on calorimetric measurement of the heat necessary to melt the ice, and the second interprets the volume change that accompanies the freezing of water. Experimental results are compared with the…

  16. Strategy Instruction versus Direct Instruction in the Education of Young Adults with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Blik, H.; Harskamp, E. G.; Naayer, H. M.

    2016-01-01

    In the Netherlands, students with intellectual disabilities (ID) attend practical education (PE). Teachers generally use demonstration as a form of direct instruction (DI) and students have difficulty working independently. Strategy instruction (SI) is a question-answer-based method that stimulates students' autonomy by getting them to verbalize…

  17. Naïve Bayes classification in R.

    PubMed

    Zhang, Zhongheng

    2016-06-01

    Naïve Bayes classification is a simple probabilistic classification method based on Bayes' theorem with the assumption of independence between features. The model is trained on a training dataset and then makes predictions with the predict() function. This article introduces the naiveBayes() and train() functions for performing Naïve Bayes classification.
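
    The article itself concerns the R functions naiveBayes() and train(); as a hedged Python analogue of the same workflow, the sketch below trains a Gaussian Naïve Bayes classifier with scikit-learn on a standard dataset and queries class posteriors.

    ```python
    # Minimal sketch: Gaussian Naïve Bayes (Python analogue of the R workflow described).
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    model = GaussianNB().fit(X_train, y_train)     # per-class Gaussians, independent features
    print("held-out accuracy:", model.score(X_test, y_test))
    print("class posteriors for one sample:", model.predict_proba(X_test[:1]).round(3))
    ```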

  18. Swarm: robust and fast clustering method for amplicon-based studies.

    PubMed

    Mahé, Frédéric; Rognes, Torbjørn; Quince, Christopher; de Vargas, Colomban; Dunthorn, Micah

    2014-01-01

    Popular de novo amplicon clustering methods suffer from two fundamental flaws: arbitrary global clustering thresholds, and input-order dependency induced by centroid selection. Swarm was developed to address these issues by first clustering nearly identical amplicons iteratively using a local threshold, and then by using clusters' internal structure and amplicon abundances to refine its results. This fast, scalable, and input-order independent approach reduces the influence of clustering parameters and produces robust operational taxonomic units.
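
    The toy sketch below (not the Swarm implementation itself) illustrates the idea of growing clusters iteratively with a local threshold (edit distance d = 1) rather than a single global cutoff; the sequences and distance function are minimal stand-ins.

    ```python
    # Minimal sketch: iterative clustering of amplicons with a local threshold d = 1
    # (toy illustration of the idea behind Swarm, not the actual implementation).
    def edit_distance(a, b):
        # Classic dynamic-programming Levenshtein distance.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    amplicons = ["ACGTACGT", "ACGTACGA", "ACGTACGG", "TTTTGGGG", "TTTTGGGC"]

    clusters = []
    unassigned = set(range(len(amplicons)))
    while unassigned:
        seed = unassigned.pop()
        cluster, frontier = {seed}, {seed}
        while frontier:                       # grow the swarm one local step at a time
            nxt = {j for i in frontier for j in list(unassigned)
                   if edit_distance(amplicons[i], amplicons[j]) <= 1}
            unassigned -= nxt
            cluster |= nxt
            frontier = nxt
        clusters.append(sorted(amplicons[i] for i in cluster))

    print(clusters)   # two clusters: the ACGT... swarm and the TTTT... swarm
    ```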

  19. Swarm: robust and fast clustering method for amplicon-based studies

    PubMed Central

    Rognes, Torbjørn; Quince, Christopher; de Vargas, Colomban; Dunthorn, Micah

    2014-01-01

    Popular de novo amplicon clustering methods suffer from two fundamental flaws: arbitrary global clustering thresholds, and input-order dependency induced by centroid selection. Swarm was developed to address these issues by first clustering nearly identical amplicons iteratively using a local threshold, and then by using clusters’ internal structure and amplicon abundances to refine its results. This fast, scalable, and input-order independent approach reduces the influence of clustering parameters and produces robust operational taxonomic units. PMID:25276506

  20. Prediction of sonic boom from experimental near-field overpressure data. Volume 1: Method and results

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Reiners, S. J.

    1975-01-01

    A computerized procedure for predicting sonic boom from experimental near-field overpressure data has been developed. The procedure extrapolates near-field pressure signatures for a specified flight condition to the ground by the Thomas method. Near-field pressure signatures are interpolated from a data base of experimental pressure signatures. The program is an independently operated ODIN (Optimal Design Integration) program which obtains flight path information from other ODIN programs or from input.

  1. Species limits in the Morelet's Alligator lizard (Anguidae: Gerrhonotinae).

    PubMed

    Solano-Zavaleta, Israel; Nieto-Montes de Oca, Adrián

    2018-03-01

    The widely distributed, Central American anguid lizard Mesaspis moreletii is currently recognized as a polytypic species with five subspecies (M. m. fulvus, M. m. moreletii, M. m. rafaeli, M. m. salvadorensis, and M. m. temporalis). We reevaluated the species limits within Mesaspis moreletii using DNA sequences of one mitochondrial and three nuclear genes. The multi-locus data set included samples of all of the subspecies of M. moreletii, the other species of Mesaspis in Central America (M. cuchumatanus and M. monticola), and some populations assignable to M. moreletii but of uncertain subspecific identity from Honduras and Nicaragua. We first used a tree-based method for delimiting species based on mtDNA data to identify potential evolutionarily independent lineages, and then analyzed the multilocus dataset with two species delimitation methods that use the multispecies coalescent model to evaluate different competing species delimitation models: the Bayes factors species delimitation method (BFD) implemented in *BEAST, and the Bayesian Phylogenetics and Phylogeography (BP&P) method. Our results suggest that M. m. moreletii, M. m. rafaeli, M. m. salvadorensis, and M. m. temporalis represent distinct evolutionarily independent lineages, and that the populations of uncertain status from Honduras and Nicaragua may represent additional undescribed species. Our results also suggest that M. m. fulvus is a synonym of M. m. moreletii. The biogeography of the Central American lineages of Mesaspis is discussed. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. The one-dimensional Ly α forest power spectrum from BOSS

    DOE PAGES

    Palanque-Delabrouille, Nathalie; Yèche, Christophe; Borde, Arnaud; ...

    2013-11-19

    For this research, we have developed two independent methods for measuring the one-dimensional power spectrum of the transmitted flux in the Lyman-α forest. The first method is based on a Fourier transform and the second on a maximum-likelihood estimator. The two methods are independent and have different systematic uncertainties. Determination of the noise level in the data spectra was subject to a new treatment, because of its significant impact on the derived power spectrum. We applied the two methods to 13 821 quasar spectra from SDSS-III/BOSS DR9 selected from a larger sample of over 60 000 spectra on the basis of their high quality, high signal-to-noise ratio (S/N), and good spectral resolution. The power spectra measured using either approach are in good agreement over all twelve redshift bins from z = 2.2 to z = 4.4, and over scales from 0.001 (km s⁻¹)⁻¹ to 0.02 (km s⁻¹)⁻¹. We determined the methodological and instrumental systematic uncertainties of our measurements. We provide a preliminary cosmological interpretation of our measurements using available hydrodynamical simulations. The improvement in precision over previously published results from SDSS is a factor of 2–3 for constraints on relevant cosmological parameters. For a ΛCDM model and using a constraint on H₀ that encompasses measurements based on the local distance ladder and on CMB anisotropies, we infer σ₈ = 0.83 ± 0.03 and nₛ = 0.97 ± 0.02 based on H I absorption in the range 2.1 < z < 3.7.
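
    As a generic illustration of the Fourier-transform route to a one-dimensional flux power spectrum (not the BOSS pipeline), the sketch below estimates P1D(k) from an evenly sampled synthetic flux-contrast field; the pixel spacing, normalisation, and k binning are illustrative assumptions.

    ```python
    # Minimal sketch: FFT estimate of a one-dimensional power spectrum
    # (synthetic, evenly sampled "flux" field; units and normalisation are illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    n, dv = 2048, 69.0                      # pixels and velocity spacing in km/s (assumed)
    delta = rng.normal(0, 0.1, n)           # toy flux-contrast field delta_F

    k = np.fft.rfftfreq(n, d=dv) * 2 * np.pi        # wavenumbers in (km/s)^-1
    delta_k = np.fft.rfft(delta) * dv               # discrete approximation of the FT
    p1d = np.abs(delta_k) ** 2 / (n * dv)           # P(k) with the usual 1/L normalisation

    # Band-average into a few logarithmic k bins between ~0.001 and 0.02 (km/s)^-1.
    bins = np.logspace(-3, np.log10(0.02), 8)
    idx = np.digitize(k[1:], bins)
    for b in range(1, len(bins)):
        sel = idx == b
        if sel.any():
            print(f"k ~ {k[1:][sel].mean():.4f}  P1D ~ {p1d[1:][sel].mean():.3f}")
    ```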

  3. Human cell structure-driven model construction for predicting protein subcellular location from biological images.

    PubMed

    Shao, Wei; Liu, Mingxia; Zhang, Daoqiang

    2016-01-01

    The systematic study of subcellular location patterns is very important for fully characterizing the human proteome. Nowadays, with the great advances in automated microscopic imaging, accurate bioimage-based classification methods to predict protein subcellular locations are highly desired. All existing models were constructed on the independent parallel hypothesis, where the cellular component classes are positioned independently in a multi-class classification engine; the important structural information of cellular compartments is thus missed. To address this problem and develop more accurate models, we propose a novel cell structure-driven classifier construction approach (SC-PSorter) that employs prior biological structural information in the learning model. Specifically, the structural relationship among the cellular components is reflected by a new codeword matrix under the error correcting output coding framework. Then, we construct multiple SC-PSorter-based classifiers corresponding to the columns of the error correcting output coding codeword matrix using a multi-kernel support vector machine classification approach. Finally, we perform classifier ensemble by combining those multiple SC-PSorter-based classifiers via majority voting. We evaluate our method on a collection of 1636 immunohistochemistry images from the Human Protein Atlas database. The experimental results show that our method achieves an overall accuracy of 89.0%, which is 6.4% higher than the state-of-the-art method. The dataset and code can be downloaded from https://github.com/shaoweinuaa/. Contact: dqzhang@nuaa.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
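
    A generic error-correcting output coding (ECOC) classifier with SVM base learners can be sketched with scikit-learn as below; note that this uses a random codeword matrix rather than the structure-driven codeword matrix that defines SC-PSorter, and the digits dataset is just a placeholder for the image features.

    ```python
    # Minimal sketch: error-correcting output codes with SVM base classifiers
    # (generic illustration; SC-PSorter's structure-driven codeword matrix is not reproduced,
    #  scikit-learn's random code is used instead).
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.multiclass import OutputCodeClassifier
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Each column of the (random) codeword matrix defines one binary SVM problem;
    # a test sample is assigned to the class whose codeword is nearest to the binary outputs.
    ecoc = OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"),
                                code_size=1.5, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", round(ecoc.score(X_te, y_te), 3))
    ```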

  4. Using imputed genotype data in the joint score tests for genetic association and gene-environment interactions in case-control studies.

    PubMed

    Song, Minsun; Wheeler, William; Caporaso, Neil E; Landi, Maria Teresa; Chatterjee, Nilanjan

    2018-03-01

    Genome-wide association studies (GWAS) are now routinely imputed for untyped single nucleotide polymorphisms (SNPs) based on various powerful statistical algorithms for imputation trained on reference datasets. The use of predicted allele counts for imputed SNPs as the dosage variable is known to produce a valid score test for genetic association. In this paper, we investigate how to best handle imputed SNPs in various modern complex tests for genetic associations incorporating gene-environment interactions. We focus on case-control association studies where inference for an underlying logistic regression model can be performed using alternative methods that rely to varying degrees on an assumption of gene-environment independence in the underlying population. As increasingly large-scale GWAS are being performed through consortium efforts where it is preferable to share only summary-level information across studies, we also describe simple mechanisms for implementing score tests based on standard meta-analysis of "one-step" maximum-likelihood estimates across studies. Applications of the methods in simulation studies and a dataset from a GWAS of lung cancer illustrate the ability of the proposed methods to maintain type-I error rates for the underlying testing procedures. For analysis of imputed SNPs, similar to typed SNPs, the retrospective methods can lead to considerable efficiency gains for modeling of gene-environment interactions under the assumption of gene-environment independence. Methods are made available for public use through the CGEN R software package. © 2017 WILEY PERIODICALS, INC.

  5. Fine tuning breath-hold-based cerebrovascular reactivity analysis models.

    PubMed

    van Niftrik, Christiaan Hendrik Bas; Piccirelli, Marco; Bozinov, Oliver; Pangalu, Athina; Valavanis, Antonios; Regli, Luca; Fierstra, Jorn

    2016-02-01

    We elaborate on existing analysis methods for breath-hold (BH)-derived cerebrovascular reactivity (CVR) measurements and describe novel insights and models toward more exact CVR interpretation. Five blood-oxygen-level-dependent (BOLD) fMRI datasets of neurovascular patients with unilateral hemispheric hemodynamic impairment were used to test various BH CVR analysis methods. Temporal lag (phase), percent BOLD signal change (CVR), and explained variance (coherence) maps were calculated using three different sine models and two novel "Optimal Signal" model-free methods based on the unaffected hemisphere and the sagittal sinus fMRI signal time series, respectively. All models showed significant differences in CVR and coherence between the affected (hemodynamically impaired) and unaffected hemispheres. Voxel-wise phase determination significantly increases CVR (0.60 ± 0.18 vs. 0.82 ± 0.27; P < 0.05). Incorporating different durations of breath hold and resting period in one sine model (two-task) increased coherence in the unaffected hemisphere and eliminated the negative phase commonly obtained by one-task frequency models. The novel model-free "Optimal Signal" methods both explained the BOLD MR data similarly to the two-task sine model. Our CVR analysis demonstrates improved CVR and coherence after implementation of voxel-wise phase and frequency adjustment. The novel "Optimal Signal" methods provide a robust and feasible alternative to the sine models, as both are model-free and independent of compliance. Here, the sagittal sinus model may be advantageous, as it is independent of hemispheric CVR impairment.
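
    A one-task sine model of the kind described can be sketched generically: fit a sinusoid at the breath-hold paradigm frequency to a voxel's BOLD time series, then read CVR from the fitted amplitude and the temporal lag from the fitted phase. The repetition time, paradigm frequency, signal parameters, and sign conventions below are assumptions, not values from the study.

    ```python
    # Minimal sketch: one-task sine model for breath-hold CVR (synthetic voxel time series).
    import numpy as np
    from scipy.optimize import curve_fit

    tr, n_vol = 2.0, 150                     # repetition time [s] and number of volumes (assumed)
    t = np.arange(n_vol) * tr
    f_bh = 1.0 / 60.0                        # breath-hold paradigm repeats every 60 s (assumed)

    def sine_model(t, baseline, amp, phase):
        return baseline + amp * np.sin(2 * np.pi * f_bh * t + phase)

    rng = np.random.default_rng(0)
    bold = sine_model(t, 1000.0, 8.0, -0.6) + rng.normal(0, 2.0, t.size)

    p0 = [bold.mean(), bold.std(), 0.0]
    (baseline, amp, phase), _ = curve_fit(sine_model, t, bold, p0=p0)

    cvr = 100.0 * 2.0 * abs(amp) / baseline          # percent BOLD signal change per BH cycle
    lag = -phase / (2 * np.pi * f_bh)                # temporal lag in seconds (sign convention assumed)
    fit = sine_model(t, baseline, amp, phase)
    print(f"CVR ~ {cvr:.2f} %  lag ~ {lag:.1f} s  "
          f"coherence proxy: {np.corrcoef(fit, bold)[0, 1] ** 2:.2f}")
    ```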

  6. Registration of T2-weighted and diffusion-weighted MR images of the prostate: comparison between manual and landmark-based methods

    NASA Astrophysics Data System (ADS)

    Peng, Yahui; Jiang, Yulei; Soylu, Fatma N.; Tomek, Mark; Sensakovic, William; Oto, Aytekin

    2012-02-01

    Quantitative analysis of multi-parametric magnetic resonance (MR) images of the prostate, including T2-weighted (T2w) and diffusion-weighted (DW) images, requires accurate image registration. We compared two registration methods between T2w and DW images. We collected pre-operative MR images of 124 prostate cancer patients (68 patients scanned with a GE scanner and 56 with Philips scanners). A landmark-based rigid registration was done based on six prostate landmarks in both T2w and DW images identified by a radiologist. Independently, a researcher manually registered the same images. A radiologist visually evaluated the registration results by using a 5-point ordinal scale of 1 (worst) to 5 (best). The Wilcoxon signed-rank test was used to determine whether the radiologist's ratings of the results of the two registration methods were significantly different. Results demonstrated that both methods were accurate: the average ratings were 4.2, 3.3, and 3.8 for GE, Philips, and all images, respectively, for the landmark-based method; and 4.6, 3.7, and 4.2, respectively, for the manual method. The manual registration results were more accurate than the landmark-based registration results (p < 0.0001 for GE, Philips, and all images). Therefore, the manual method produces more accurate registration between T2w and DW images than the landmark-based method.
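
    A landmark-based rigid registration of the kind described can be computed in closed form from the six point pairs with the Kabsch/Procrustes solution; the sketch below uses synthetic landmark coordinates and is not the registration software used in the study.

    ```python
    # Minimal sketch: rigid (rotation + translation) registration from paired landmarks
    # via the Kabsch / Procrustes solution (synthetic 3-D landmark coordinates).
    import numpy as np

    rng = np.random.default_rng(0)
    landmarks_t2w = rng.uniform(0, 60, size=(6, 3))            # six landmarks in T2w space (mm)

    theta = np.deg2rad(8.0)                                    # ground-truth misalignment
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    t_true = np.array([3.0, -2.0, 1.5])
    landmarks_dw = landmarks_t2w @ R_true.T + t_true + rng.normal(0, 0.2, (6, 3))

    # Kabsch: centre both point sets, take the SVD of the covariance, correct for reflections.
    a = landmarks_t2w - landmarks_t2w.mean(axis=0)
    b = landmarks_dw - landmarks_dw.mean(axis=0)
    U, _, Vt = np.linalg.svd(b.T @ a)
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1, 1, d]) @ Vt
    t = landmarks_dw.mean(axis=0) - landmarks_t2w.mean(axis=0) @ R.T

    residual = landmarks_t2w @ R.T + t - landmarks_dw
    print("RMS landmark error [mm]:", np.sqrt((residual ** 2).sum(axis=1).mean()).round(2))
    ```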

  7. Evaluating Feynman integrals by the hypergeometry

    NASA Astrophysics Data System (ADS)

    Feng, Tai-Fu; Chang, Chao-Hsi; Chen, Jian-Bin; Gu, Zhi-Hua; Zhang, Hai-Bin

    2018-02-01

    The hypergeometric function method naturally provides the analytic expressions of scalar integrals from the Feynman diagrams concerned in some connected regions of the independent kinematic variables, and also yields the systems of homogeneous linear partial differential equations satisfied by the corresponding scalar integrals. Taking the one-loop B0 and massless C0 functions, as well as the scalar integrals of the two-loop vacuum and sunset diagrams, as examples, we verify that our expressions coincide with the well-known results in the literature. Based on the multiple hypergeometric functions of the independent kinematic variables, the systems of homogeneous linear partial differential equations satisfied by the mentioned scalar integrals are established. Using the calculus of variations, one recognizes the system of linear partial differential equations as the stationary conditions of a functional under some given restrictions, which is the cornerstone for performing the numerical continuation of the scalar integrals to the whole kinematic domain with finite element methods. In principle this method can be used to evaluate the scalar integrals of any Feynman diagram.

  8. LEA Detection and Tracking Method for Color-Independent Visual-MIMO

    PubMed Central

    Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo

    2016-01-01

    Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is deteriorated by light emitting array (LEA) detection and tracking errors in the received image because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement. PMID:27384563
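
    The tracking stage described above can be sketched with a constant-velocity Kalman filter that predicts the most probable LEA location and corrects it with each new detection; the Harris-corner detection and perspective-correction steps are not reproduced, and the frame rate, noise levels, and simulated detections below are assumptions.

    ```python
    # Minimal sketch: constant-velocity Kalman tracking of an LEA centroid
    # (the Harris-corner detection step is not reproduced; detections are simulated here).
    import numpy as np

    dt = 1.0 / 30.0                                     # frame interval (assumed 30 fps)
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]], float)   # state: [x, y, vx, vy]
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # we only measure the centroid (x, y)
    Q = np.eye(4) * 1e-2                                # process noise (tuning assumption)
    R = np.eye(2) * 4.0                                 # measurement noise (pixels^2)

    x = np.zeros(4)
    P = np.eye(4) * 100.0

    rng = np.random.default_rng(0)
    true_pos = np.cumsum(np.full((60, 2), [2.0, 1.0]), axis=0)   # LEA drifting across the frame
    detections = true_pos + rng.normal(0, 2.0, true_pos.shape)   # noisy detected centroids

    for z in detections:
        # Predict the most probable LEA location, then correct it with the new detection.
        x = F @ x
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P

    print("final tracked position:", x[:2].round(1), "true:", true_pos[-1])
    ```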

  9. LEA Detection and Tracking Method for Color-Independent Visual-MIMO.

    PubMed

    Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo

    2016-07-02

    Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is deteriorated by light emitting array (LEA) detection and tracking errors in the received image because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement.

  10. Element Library for Three-Dimensional Stress Analysis by the Integrated Force Method

    NASA Technical Reports Server (NTRS)

    Kaljevic, Igor; Patnaik, Surya N.; Hopkins, Dale A.

    1996-01-01

    The Integrated Force Method, a recently developed method for analyzing structures, is extended in this paper to three-dimensional structural analysis. First, a general formulation is developed to generate the stress interpolation matrix in terms of complete polynomials of the required order. The formulation is based on definitions of the stress tensor components in terms of stress functions. The stress functions are written as complete polynomials and substituted into the expressions for the stress components. Elimination of the dependent coefficients then leaves the stress components expressed as complete polynomials whose coefficients are defined as generalized independent forces. The stress tensor components derived in this way identically satisfy the homogeneous Navier equations of equilibrium. The resulting element matrices are invariant with respect to coordinate transformation and are free of spurious zero-energy modes. The formulation provides a rational way to calculate the exact number of independent forces necessary to arrive at an approximation of the required order for complete polynomials. The influence of reducing the number of independent forces on the accuracy of the response is also analyzed. The stress fields derived are used to develop a comprehensive finite element library for three-dimensional structural analysis by the Integrated Force Method. Both tetrahedral- and hexahedral-shaped elements capable of modeling arbitrary geometric configurations are developed. A number of examples with known analytical solutions are solved using the developments presented herein. The results are in good agreement with the analytical solutions. The responses obtained with the Integrated Force Method are also compared with those generated by the standard displacement method; in most cases, the overall performance of the Integrated Force Method is better.

  11. GEOMETRY-INDEPENDENT DETERMINATION OF RADIAL DENSITY DISTRIBUTIONS IN MOLECULAR CLOUD CORES AND OTHER ASTRONOMICAL OBJECTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krčo, Marko; Goldsmith, Paul F., E-mail: marko@astro.cornell.edu

    2016-05-01

    We present a geometry-independent method for determining the shapes of radial volume density profiles of astronomical objects whose geometries are unknown, based on a single column density map. Such profiles are often critical to understanding the physics and chemistry of molecular cloud cores, in which star formation takes place. The method presented here does not assume any geometry for the object being studied, thus removing a significant source of bias. Instead, it exploits contour self-similarity in column density maps, which appears to be common in data for astronomical objects. Our method may be applied to many types of astronomical objects and observable quantities so long as they satisfy a limited set of conditions, which we describe in detail. We derive the method analytically, test it numerically, and illustrate its utility using 2MASS-derived dust extinction in molecular cloud cores. While we have not made an extensive comparison of different density profiles, we find that the overall radial density distribution within molecular cloud cores is adequately described by an attenuated power law.

  12. Prediction of wastewater quality indicators at the inflow to the wastewater treatment plant using data mining methods

    NASA Astrophysics Data System (ADS)

    Szeląg, Bartosz; Barbusiński, Krzysztof; Studziński, Jan; Bartkiewicz, Lidia

    2017-11-01

    In this study, models developed using data mining methods are proposed for predicting wastewater quality indicators: biochemical and chemical oxygen demand, total suspended solids, total nitrogen and total phosphorus at the inflow to a wastewater treatment plant (WWTP). The models are based on values measured in previous time steps and daily wastewater inflows. Independent prediction systems that can be used in case of monitoring-device malfunction are also provided. Models of the wastewater quality indicators were developed using the MARS (multivariate adaptive regression spline) method, artificial neural networks (ANN) of the multilayer perceptron type combined with a self-organizing map (SOM) classification model, and cascade neural networks (CNN). The lowest values of absolute and relative errors were obtained using ANN+SOM, whereas the MARS method produced the highest error values. It was shown that for the analysed WWTP it is possible to obtain continuous prediction of selected wastewater quality indicators using the two developed independent prediction systems. Such models can ensure reliable WWTP operation when wastewater quality monitoring systems become inoperable or are under maintenance.
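
    As a generic stand-in for the neural-network branch of such a study (not the authors' MARS, ANN+SOM, or cascade models), the sketch below predicts a synthetic quality indicator from its two previous values and the daily inflow with a small multilayer perceptron; the data and network size are arbitrary.

    ```python
    # Minimal sketch: predicting a wastewater quality indicator from lagged values and
    # daily inflow with a multilayer perceptron (synthetic data; not the authors' models).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    days = 500
    inflow = 10 + 2 * np.sin(np.arange(days) / 7.0) + rng.normal(0, 0.3, days)
    cod = 400 + 30 * np.sin(np.arange(days) / 7.0 + 0.5) + rng.normal(0, 10, days)

    # Features: indicator at t-1 and t-2 plus inflow at t; target: indicator at t.
    X = np.column_stack([cod[1:-1], cod[:-2], inflow[2:]])
    y = cod[2:]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, shuffle=False)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                       random_state=0)).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    mre = np.mean(np.abs(pred - y_te) / y_te) * 100
    print(f"mean relative error on held-out days: {mre:.1f} %")
    ```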

  13. Solution of the equations for one-dimensional, two-phase, immiscible flow by geometric methods

    NASA Astrophysics Data System (ADS)

    Boronin, Ivan; Shevlyakov, Andrey

    2018-03-01

    Buckley-Leverett equations describe nonviscous, immiscible, two-phase filtration, which is often of interest in the modelling of oil production. For many parameters and initial conditions, the solutions of these equations exhibit non-smooth behaviour, namely discontinuities in the form of shock waves. In this paper we obtain a novel method for the solution of Buckley-Leverett equations, which is based on the geometry of differential equations. This method is fast, accurate, stable, and describes non-smooth phenomena. The main idea of the method is that classic discontinuous solutions correspond to continuous surfaces in the space of jets, the so-called multi-valued solutions (Bocharov et al., Symmetries and conservation laws for differential equations of mathematical physics. American Mathematical Society, Providence, 1998). A mapping of multi-valued solutions from the jet space onto the plane of the independent variables is constructed. This mapping is not one-to-one, and its singular points form a curve on the plane of the independent variables, which is called the caustic. The real shock occurs at points close to the caustic and is determined by the Rankine-Hugoniot conditions.

  14. A novel method for flow pattern identification in unstable operational conditions using gamma ray and radial basis function.

    PubMed

    Roshani, G H; Nazemi, E; Roshani, M M

    2017-05-01

    Changes of fluid properties (especially density) strongly affect the performance of radiation-based multiphase flow meters and could cause errors in recognizing the flow pattern and determining the void fraction. In this work, we propose a methodology based on a combination of multi-beam gamma-ray attenuation and dual-modality densitometry techniques using RBF neural networks in order to recognize the flow regime and determine the void fraction in gas-liquid two-phase flows independent of liquid-phase changes. The proposed system consists of one ¹³⁷Cs source, two transmission detectors and one scattering detector. The registered counts in the two transmission detectors were used as the inputs of one primary Radial Basis Function (RBF) neural network for recognizing the flow regime independent of liquid-phase density. Then, after flow regime identification, three RBF neural networks were utilized for determining the void fraction independent of liquid-phase density. The registered counts in the scattering detector and the first transmission detector were used as the inputs of these three RBF neural networks. Using this simple methodology, all the flow patterns were correctly recognized and the void fraction was predicted independent of liquid-phase density with a mean relative error (MRE) of less than 3.28%. Copyright © 2017 Elsevier Ltd. All rights reserved.
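
    A small radial basis function network of the kind described can be written directly with NumPy: Gaussian basis functions centred on training points and output weights fitted by linear least squares. The detector-response model, centres, and widths below are hypothetical stand-ins for the registered counts, not the authors' trained networks.

    ```python
    # Minimal sketch: a small Gaussian RBF network mapping two detector counts to a void
    # fraction (synthetic data and centres; not the authors' trained networks).
    import numpy as np

    rng = np.random.default_rng(0)

    def toy_counts(void_fraction, density):
        # Stand-in for normalised counts in the two transmission detectors.
        c1 = np.exp(-2.0 * density * (1 - void_fraction)) + rng.normal(0, 0.005, void_fraction.shape)
        c2 = np.exp(-3.5 * density * (1 - void_fraction)) + rng.normal(0, 0.005, void_fraction.shape)
        return np.column_stack([c1, c2])

    alpha_train = rng.uniform(0.05, 0.95, 300)
    rho_train = rng.uniform(0.7, 1.0, 300)            # liquid density varies between samples
    X_train = toy_counts(alpha_train, rho_train)

    def rbf_design(X, centres, width):
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))

    centres = X_train[rng.choice(len(X_train), 20, replace=False)]
    width = 0.1
    Phi = rbf_design(X_train, centres, width)
    w, *_ = np.linalg.lstsq(Phi, alpha_train, rcond=None)   # output weights by least squares

    alpha_test = rng.uniform(0.05, 0.95, 50)
    X_test = toy_counts(alpha_test, rng.uniform(0.7, 1.0, 50))
    pred = rbf_design(X_test, centres, width) @ w
    print(f"mean relative error: {np.mean(np.abs(pred - alpha_test) / alpha_test) * 100:.1f} %")
    ```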

  15. Knowledge acquisition from natural language for expert systems based on classification problem-solving methods

    NASA Technical Reports Server (NTRS)

    Gomez, Fernando

    1989-01-01

    It is shown how certain kinds of domain independent expert systems based on classification problem-solving methods can be constructed directly from natural language descriptions by a human expert. The expert knowledge is not translated into production rules. Rather, it is mapped into conceptual structures which are integrated into long-term memory (LTM). The resulting system is one in which problem-solving, retrieval and memory organization are integrated processes. In other words, the same algorithm and knowledge representation structures are shared by these processes. As a result of this, the system can answer questions, solve problems or reorganize LTM.

  16. Ferrocene-Boronic Acid-Fructose Binding Based on Dual-Plate Generator-Collector Voltammetry and Square-Wave Voltammetry.

    PubMed

    Li, Meng; Xu, Su-Ying; Gross, Andrew J; Hammond, Jules L; Estrela, Pedro; Weber, James; Lacina, Karel; James, Tony D; Marken, Frank

    2015-06-10

    The interaction of ferrocene-boronic acid with fructose is investigated in aqueous 0.1 m phosphate buffer at pH 7, 8 and 9. Two voltammetric methods, based on 1) a dual-plate generator-collector micro-trench electrode (steady state) and 2) a square-wave voltammetry (transient) method, are applied and compared in terms of mechanistic resolution. A combination of experimental data is employed to obtain new insights into the binding rates and the cumulative binding constants for both the reduced ferrocene-boronic acid (pH dependent and weakly binding) and for the oxidised ferrocene-boronic acid (pH independent and strongly binding).

  17. A Systematic Evaluation of Field-Based Screening Methods for the Assessment of Anterior Cruciate Ligament (ACL) Injury Risk.

    PubMed

    Fox, Aaron S; Bonacci, Jason; McLean, Scott G; Spittle, Michael; Saunders, Natalie

    2016-05-01

    Laboratory-based measures provide an accurate method to identify risk factors for anterior cruciate ligament (ACL) injury; however, these methods are generally prohibitive to the wider community. Screening methods that can be completed in a field or clinical setting may be more applicable for wider community use. Examination of field-based screening methods for ACL injury risk can aid in identifying the most applicable method(s) for use in these settings. The objective of this systematic review was to evaluate and compare field-based screening methods for ACL injury risk to determine their efficacy of use in wider community settings. An electronic database search was conducted on the SPORTDiscus™, MEDLINE, AMED and CINAHL databases (January 1990-July 2015) using a combination of relevant keywords. A secondary search of the same databases, using relevant keywords from identified screening methods, was also undertaken. Studies identified as potentially relevant were independently examined by two reviewers for inclusion. Where consensus could not be reached, a third reviewer was consulted. Original research articles that examined screening methods for ACL injury risk that could be undertaken outside of a laboratory setting were included for review. Two reviewers independently assessed the quality of included studies. Included studies were categorized according to the screening method they examined. A description of each screening method, and data pertaining to the ability to prospectively identify ACL injuries, validity and reliability, recommendations for identifying 'at-risk' athletes, equipment and training required to complete screening, time taken to screen athletes, and applicability of the screening method across sports and athletes were extracted from relevant studies. Of 1077 citations from the initial search, a total of 25 articles were identified as potentially relevant, with 12 meeting all inclusion/exclusion criteria. From the secondary search, eight further studies met all criteria, resulting in 20 studies being included for review. Five ACL-screening methods-the Landing Error Scoring System (LESS), Clinic-Based Algorithm, Observational Screening of Dynamic Knee Valgus (OSDKV), 2D-Cam Method, and Tuck Jump Assessment-were identified. There was limited evidence supporting the use of field-based screening methods in predicting ACL injuries across a range of populations. Differences relating to the equipment and time required to complete screening methods were identified. Only screening methods for ACL injury risk were included for review. Field-based screening methods developed for lower-limb injury risk in general may also incorporate, and be useful in, screening for ACL injury risk. Limited studies were available relating to the OSDKV and 2D-Cam Method. The LESS showed predictive validity in identifying ACL injuries, however only in a youth athlete population. The LESS also appears practical for community-wide use due to the minimal equipment and set-up/analysis time required. The Clinic-Based Algorithm may have predictive value for ACL injury risk as it identifies athletes who exhibit high frontal plane knee loads during a landing task, but requires extensive additional equipment and time, which may limit its application to wider community settings.

  18. The web-rhetoric of companies offering home-based personal health monitoring.

    PubMed

    Nordgren, Anders

    2012-06-01

    In this paper I investigate the web-rhetoric of companies offering home-based personal health monitoring to patients and elderly people. Two main rhetorical methods are found, namely a reference to practical benefits and a use of prestige words like "quality of life" and "independence". I interpret the practical benefits in terms of instrumental values and the prestige words in terms of final values. I also reconstruct the arguments on the websites in terms of six different types of argument. Finally, I articulate a general critique of the arguments, namely that the websites neglect the context of use of personal health monitoring technologies. Whether or not a technology is good depends on the use of the technology by a particular individual in a particular context. The technology is not good-or bad-in itself. I support this critique with a number of more specific arguments such as the risk for reduced personal contact. For some elderly people social contact with care providers is more valuable than the independent living made possible by remote monitoring, for others independence is more important.

  19. Magnetic resonance elastography is as accurate as liver biopsy for liver fibrosis staging.

    PubMed

    Morisaka, Hiroyuki; Motosugi, Utaroh; Ichikawa, Shintaro; Nakazawa, Tadao; Kondo, Tetsuo; Funayama, Satoshi; Matsuda, Masanori; Ichikawa, Tomoaki; Onishi, Hiroshi

    2018-05-01

    Liver MR elastography (MRE) is available for the noninvasive assessment of liver fibrosis; however, no previous studies have compared the diagnostic ability of MRE with that of liver biopsy. To compare the diagnostic accuracy of liver fibrosis staging between MRE-based methods and liver biopsy using the resected liver specimens as the reference standard. A retrospective study at a single institution. In all, 200 patients who underwent preoperative MRE and subsequent surgical liver resection were included in this study. Data from 80 patients were used to estimate cutoffs and distributions of liver stiffness values measured by MRE for each liver fibrosis stage (F0-F4, METAVIR system). In the remaining 120 patients, liver biopsy specimens were obtained from the resected liver tissues using a standard biopsy needle. 2D liver MRE with a gradient-echo-based sequence on a 1.5 or 3T scanner was used. Two radiologists independently measured the liver stiffness value on MRE, and two types of MRE-based methods (threshold and Bayesian prediction method) were applied. Two pathologists evaluated all biopsy samples independently to stage liver fibrosis. Surgically resected whole tissue specimens were used as the reference standard. The accuracy for liver fibrosis staging was compared between liver biopsy and MRE-based methods with a modified McNemar's test. Accurate fibrosis staging was achieved in 53.3% (64/120) and 59.1% (71/120) of patients using MRE with threshold and Bayesian methods, respectively, and in 51.6% (62/120) with liver biopsy. Accuracies of MRE-based methods for diagnoses of ≥F2 (90-91% [108-109/120]), ≥F3 (79-81% [95-97/120]), and F4 (82-85% [98-102/120]) were statistically equivalent to those of liver biopsy (≥F2, 79% [95/120], P ≤ 0.01; ≥F3, 88% [105/120], P ≤ 0.006; and F4, 82% [99/120], P ≤ 0.017). MRE can be an alternative to liver biopsy for fibrosis staging. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:1268-1275. © 2017 International Society for Magnetic Resonance in Medicine.

  20. Dissociable Electroencephalograph Correlates of Visual Awareness and Feature-Based Attention

    PubMed Central

    Chen, Yifan; Wang, Xiaochun; Yu, Yanglan; Liu, Ying

    2017-01-01

    Background: The relationship between awareness and attention is complex and controversial. A growing body of literature has shown that the neural bases of consciousness and endogenous attention (voluntary attention) are independent. The important role of exogenous attention (reflexive attention) in conscious experience has been noted in several studies. However, exogenous attention can also modulate subliminal processing, suggesting independence between the two processes. The question of whether visual awareness and exogenous attention rely on independent mechanisms under certain circumstances remains unanswered. Methods: In the current study, electroencephalograph recordings were conducted using 64 channels from 16 subjects while they attempted to detect faint speed changes of colored rotating dots. Awareness and attention were manipulated throughout trials in order to test whether exogenous attention and visual awareness rely on independent mechanisms. Results: Neural activity related to consciousness was recorded in the following cue-locked time-windows (event-related potential, cluster-based permutation test): 0–50, 150–200, and 750–800 ms. With a more liberal threshold, the inferior occipital lobe was found to be the source of awareness-related activity in the 0–50 ms range. In the later 150–200 ms range, activity in the fusiform and post-central gyrus was related to awareness. Awareness-related activation in the later 750–800 ms range was more widely distributed. This awareness-related activation pattern was quite different from that of attention. Attention-related neural activity was emphasized in the 750–800 ms time window and the main source of attention-related activity was localized to the right angular gyrus. These results suggest that exogenous attention and visual consciousness correspond to different and relatively independent neural mechanisms and are distinct processes under certain conditions. PMID:29180950

  1. Model specification in oral health-related quality of life research.

    PubMed

    Kieffer, Jacobien M; Verrips, Erik; Hoogstraten, Johan

    2009-10-01

    The aim of this study was to analyze conventional wisdom regarding the construction and analysis of oral health-related quality of life (OHRQoL) questionnaires and to outline statistical complications. Most methods used for developing and analyzing questionnaires, such as factor analysis and Cronbach's alpha, presume psychological constructs to be latent, inferring a reflective measurement model with the underlying assumption of local independence. Local independence implies that the latent variable explains why the variables observed are related. Many OHRQoL questionnaires are analyzed as if they were based on a reflective measurement model; local independence is thus assumed. This assumption requires these questionnaires to consist solely of items that reflect, instead of determine, OHRQoL. The tenability of this assumption is the main topic of the present study. It is argued that OHRQoL questionnaires are a mix of both a formative measurement model and a reflective measurement model, thus violating the assumption of local independence. The implications are discussed.

  2. Independent Component Analysis-motivated Approach to Classificatory Decomposition of Cortical Evoked Potentials

    PubMed Central

    Smolinski, Tomasz G; Buchanan, Roger; Boratyn, Grzegorz M; Milanova, Mariofanna; Prinz, Astrid A

    2006-01-01

    Background Independent Component Analysis (ICA) proves to be useful in the analysis of neural activity, as it allows for identification of distinct sources of activity. Applied to measurements registered in a controlled setting and under exposure to an external stimulus, it can facilitate analysis of the impact of the stimulus on those sources. The link between the stimulus and a given source can be verified by a classifier that is able to "predict" the condition a given signal was registered under, solely based on the components. However, the ICA's assumption about statistical independence of sources is often unrealistic and turns out to be insufficient to build an accurate classifier. Therefore, we propose to utilize a novel method, based on hybridization of ICA, multi-objective evolutionary algorithms (MOEA), and rough sets (RS), that attempts to improve the effectiveness of signal decomposition techniques by providing them with "classification-awareness." Results The preliminary results described here are very promising and further investigation of other MOEAs and/or RS-based classification accuracy measures should be pursued. Even a quick visual analysis of those results can provide an interesting insight into the problem of neural activity analysis. Conclusion We present a methodology of classificatory decomposition of signals. One of the main advantages of our approach is the fact that rather than solely relying on often unrealistic assumptions about statistical independence of sources, components are generated in the light of the underlying classification problem itself. PMID:17118151

  3. Uncertainty quantification for accident management using ACE surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varuttamaseni, A.; Lee, J. C.; Youngblood, R. W.

    The alternating conditional expectation (ACE) regression method is used to generate RELAP5 surrogates which are then used to determine the distribution of the peak clad temperature (PCT) during the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed (F and B) operation in the Zion-1 nuclear power plant. The construction of the surrogates assumes conditional independence relations among key reactor parameters. The choice of parameters to model is based on the macroscopic balance statements governing the behavior of the reactor. The peak clad temperature is calculated based on the independent variables that are known to be important in determining the success of the F and B operation. The relationship between these independent variables and the plant parameters such as coolant pressure and temperature is represented by surrogates that are constructed based on 45 RELAP5 cases. The time-dependent PCT for different values of F and B parameters is calculated by sampling the independent variables from their probability distributions and propagating the information through two layers of surrogates. The results of our analysis show that the ACE surrogates are able to satisfactorily reproduce the behavior of the plant parameters even though a quasi-static assumption is primarily used in their construction. The PCT is found to be lower in cases where the F and B operation is initiated, compared to the case without F and B, regardless of the F and B parameters used. (authors)
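
    As an illustration of the propagation step described above, the sketch below samples hypothetical feed-and-bleed parameters from assumed distributions and pushes them through a stand-in surrogate to build a peak clad temperature distribution. The surrogate form, parameter names and ranges are invented for illustration and are not the ACE/RELAP5 surrogates of the study.

        import numpy as np

        rng = np.random.default_rng(42)

        def surrogate_pct(bleed_delay_min, hpi_flow_kg_s):
            """Hypothetical stand-in for a regression surrogate of RELAP5:
            maps feed-and-bleed parameters to a peak clad temperature (K)."""
            return (650.0 + 8.0 * bleed_delay_min - 12.0 * hpi_flow_kg_s
                    + rng.normal(0.0, 5.0))  # residual scatter of the surrogate fit

        # Sample the independent variables from assumed probability distributions
        n = 10_000
        delay = rng.uniform(5.0, 30.0, n)   # minutes until F&B initiation (assumed range)
        hpi = rng.normal(10.0, 1.5, n)      # high-pressure injection flow (assumed)

        pct = np.array([surrogate_pct(d, q) for d, q in zip(delay, hpi)])
        print("median PCT %.1f K, 95th percentile %.1f K"
              % (np.median(pct), np.percentile(pct, 95)))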

  4. Effect of surface treatment methods on the shear bond strength of auto-polymerized resin to thermoplastic denture base polymer.

    PubMed

    Koodaryan, Roodabeh; Hafezeqoran, Ali

    2016-12-01

    Polyamide polymers do not provide sufficient bond strength to auto-polymerized resins for repairing fractured dentures or replacing dislodged denture teeth. Limited treatment methods have been developed to improve the bond strength between auto-polymerized reline resins and polyamide denture base materials. The objective of the present study was to evaluate the effect of surface modification by acetic acid on surface characteristics and bond strength of reline resin to polyamide denture base. A total of 84 polyamide specimens were divided into three surface treatment groups (n=28): control (N), silica-coated (S), and acid-treated (A). Two different auto-polymerized reline resins, GC and Triplex, were bonded to the samples (subgroups T and G, respectively; n=14). The specimens were subjected to a shear bond strength test after they were stored in distilled water for 1 week and thermo-cycled for 5000 cycles. Data were analyzed with independent t-test, two-way analysis of variance (ANOVA), and Tukey's post hoc multiple comparison test (α=.05). The bond strength values of A and S were significantly higher than those of N (P<.001 for both). However, no statistically significant difference was observed between groups A and S. According to the independent Student's t-test, the shear bond strength values of AT were significantly higher than those of AG (P<.001). The surface treatment of polyamide denture base materials with acetic acid may be an efficient and cost-effective method for increasing the shear bond strength to auto-polymerized reline resin.

  5. Study on multiple-hops performance of MOOC sequences-based optical labels for OPS networks

    NASA Astrophysics Data System (ADS)

    Zhang, Chongfu; Qiu, Kun; Ma, Chunli

    2009-11-01

    In this paper, we use a new analysis method, under the assumption of independent multiple optical orthogonal codes, to derive the probability function of MOOCS-OPS networks, discuss their performance characteristics for a variety of parameters, and compare characteristics of systems employing optical labels based on a single optical orthogonal code or on multiple optical orthogonal code sequences. The performance of the system is also calculated, and our results verify that the method is effective. Additionally, we find that the performance of MOOCS-OPS networks is worse than that of optical packet switching based on a single optical orthogonal code label (SOOC-OPS); however, MOOCS-OPS can greatly enlarge the scalability of optical packet switching networks.

  6. A Keplerian-based Hamiltonian splitting for gravitational N-body simulations

    NASA Astrophysics Data System (ADS)

    Gonçalves Ferrari, G.; Boekholt, T.; Portegies Zwart, S. F.

    2014-05-01

    We developed a Keplerian-based Hamiltonian splitting for solving the gravitational N-body problem. This splitting allows us to approximate the solution of a general N-body problem by a composition of multiple, independently evolved two-body problems. While the Hamiltonian splitting is exact, we show that the composition of independent two-body problems results in a non-symplectic non-time-symmetric first-order map. A time-symmetric second-order map is then constructed by composing this basic first-order map with its self-adjoint. The resulting method is precise for each individual two-body solution and produces quick and accurate results for near-Keplerian N-body systems, like planetary systems or a cluster of stars that orbit a supermassive black hole. The method is also suitable for integration of N-body systems with intrinsic hierarchies, like a star cluster with primordial binaries. The superposition of Kepler solutions for each pair of particles makes the method excellently suited for parallel computing; we achieve ≳64 per cent efficiency for only eight particles per core, but close to perfect scaling for 16 384 particles on a 128 core distributed-memory computer. We present several implementations in SAKURA, one of which is publicly available via the AMUSE framework.

  7. Optimal sensor placement for deployable antenna module health monitoring in SSPS using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Chen; Zhang, Xuepan; Huang, Xiaoqi; Cheng, ZhengAi; Zhang, Xinghua; Hou, Xinbin

    2017-11-01

    The space solar power satellite (SSPS) is an advanced system concept for collecting solar energy in space and transmitting it wirelessly to earth. However, due to the long service life, in-orbit damage may occur in the structural system of the SSPS, so sensor placement layouts for structural health monitoring should be considered from the start. In this paper, an optimal sensor placement method based on a genetic algorithm is proposed for deployable antenna module health monitoring in the SSPS. According to the characteristics of the deployable antenna module, the candidate sensor placement designs are listed. Furthermore, based on the effective independence method and an effective interval index, a combined fitness function is defined to maximize linear independence in the targeted modes while simultaneously avoiding redundant information at nearby positions. In addition, another fitness function is constructed that accounts for the reliability of sensors located at deployable mechanisms. The solution process for optimal sensor placement using the genetic algorithm is then demonstrated. Finally, a numerical example of the sensor placement layout in a deployable antenna module of the SSPS, which synthetically considers all of the above-mentioned criteria, is presented. The results illustrate the effectiveness and feasibility of the proposed sensor placement method for the SSPS.

  8. BlastNeuron for Automated Comparison, Retrieval and Clustering of 3D Neuron Morphologies.

    PubMed

    Wan, Yinan; Long, Fuhui; Qu, Lei; Xiao, Hang; Hawrylycz, Michael; Myers, Eugene W; Peng, Hanchuan

    2015-10-01

    Characterizing the identity and types of neurons in the brain, as well as their associated function, requires a means of quantifying and comparing 3D neuron morphology. Presently, neuron comparison methods are based on statistics from neuronal morphology such as size and number of branches, which are not fully suitable for detecting local similarities and differences in the detailed structure. We developed BlastNeuron to compare neurons in terms of their global appearance, detailed arborization patterns, and topological similarity. BlastNeuron first compares and clusters 3D neuron reconstructions based on global morphology features and moment invariants, independent of their orientations, sizes, level of reconstruction and other variations. Subsequently, BlastNeuron performs local alignment between any pair of retrieved neurons via a tree-topology-driven dynamic programming method. A 3D correspondence map can thus be generated at the resolution of single reconstruction nodes. We applied BlastNeuron to three datasets: (1) 10,000+ neuron reconstructions from a public morphology database, (2) 681 newly and manually reconstructed neurons, and (3) neuron reconstructions produced using several independent reconstruction methods. Our approach was able to accurately and efficiently retrieve morphologically and functionally similar neuron structures from a large morphology database, identify the local common structures, and find clusters of neurons that share similarities in both morphology and molecular profiles.

  9. On the influence of high-pass filtering on ICA-based artifact reduction in EEG-ERP.

    PubMed

    Winkler, Irene; Debener, Stefan; Müller, Klaus-Robert; Tangermann, Michael

    2015-01-01

    Standard artifact removal methods for electroencephalographic (EEG) signals are either based on Independent Component Analysis (ICA) or they regress out ocular activity measured at electrooculogram (EOG) channels. Successful ICA-based artifact reduction relies on suitable pre-processing. Here we systematically evaluate the effects of high-pass filtering at different frequencies. Offline analyses were based on event-related potential data from 21 participants performing a standard auditory oddball task and an automatic artifactual component classifier method (MARA). As a pre-processing step for ICA, high-pass filtering between 1-2 Hz consistently produced good results in terms of signal-to-noise ratio (SNR), single-trial classification accuracy and the percentage of `near-dipolar' ICA components. Relative to no artifact reduction, ICA-based artifact removal significantly improved SNR and classification accuracy. This was not the case for a regression-based approach to remove EOG artifacts.
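
    A minimal sketch of the pre-processing pipeline discussed above, using generic tools (a SciPy Butterworth high-pass filter and scikit-learn FastICA) rather than the authors' exact toolchain; the sampling rate, synthetic data and artifact component indices are assumptions for illustration only.

        import numpy as np
        from scipy.signal import butter, filtfilt
        from sklearn.decomposition import FastICA

        fs = 250.0                              # sampling rate in Hz (assumed)
        eeg = np.random.randn(20_000, 32)       # stand-in for (samples x channels) EEG data

        # High-pass filter around 1 Hz before fitting ICA, as recommended above
        b, a = butter(4, 1.0 / (fs / 2.0), btype="highpass")
        eeg_hp = filtfilt(b, a, eeg, axis=0)

        # Decompose the filtered data into independent components
        ica = FastICA(n_components=32, random_state=0, max_iter=1000)
        sources = ica.fit_transform(eeg_hp)     # (samples x components)

        # Zero out components judged artifactual (indices would come from a
        # classifier such as MARA; here they are placeholders)
        artifact_idx = [0, 3]
        sources[:, artifact_idx] = 0.0
        eeg_clean = ica.inverse_transform(sources)   # back-project to channel space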

  10. Multi-approach assessment of the spatial distribution of the specific yield: application to the Crau plain aquifer, France

    NASA Astrophysics Data System (ADS)

    Seraphin, Pierre; Gonçalvès, Julio; Vallet-Coulomb, Christine; Champollion, Cédric

    2018-06-01

    Spatially distributed values of the specific yield, a fundamental parameter for transient groundwater mass balance calculations, were obtained by means of three independent methods for the Crau plain, France. In contrast to its traditional use to assess recharge based on a given specific yield, the water-table fluctuation (WTF) method, applied using major recharging events, gave a first set of reference values. Then, large infiltration processes recorded by monitored boreholes and caused by major precipitation events were interpreted in terms of specific yield by means of a one-dimensional vertical numerical model solving Richards' equations within the unsaturated zone. Finally, two gravity field campaigns, at low and high piezometric levels, were carried out to assess the groundwater mass variation and thus alternative specific yield values. The range obtained by the WTF method for this aquifer made of alluvial detrital material was 2.9-26%, in line with the scarce data available so far. The average spatial value of specific yield by the WTF method (9.1%) is consistent with the aquifer scale value from the hydro-gravimetric approach. In this investigation, an estimate of the hitherto unknown spatial distribution of the specific yield over the Crau plain was obtained using the most reliable method (the WTF method). A groundwater mass balance calculation over the domain using this distribution yielded similar results to an independent quantification based on a stable isotope-mixing model. This agreement reinforces the relevance of such estimates, which can be used to build a more accurate transient hydrogeological model.
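
    The water-table fluctuation relation underlying the first method is R = Sy × Δh; solving for Sy from event-wise recharge estimates and water-table rises can be sketched as below. The event values are invented for illustration and are not the Crau plain data.

        # Water-table fluctuation relation: recharge R = Sy * dH  =>  Sy = R / dH
        # Values below are purely illustrative, not data from the Crau plain study.
        events = [
            # (recharge estimate in mm, water-table rise in mm) per major event
            (55.0, 600.0),
            (40.0, 450.0),
            (70.0, 800.0),
        ]

        specific_yields = [r / dh for r, dh in events]
        mean_sy = sum(specific_yields) / len(specific_yields)
        print("event-wise Sy:", ["%.3f" % s for s in specific_yields])
        print("mean specific yield: %.1f%%" % (100 * mean_sy))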

  12. Kinase Identification with Supervised Laplacian Regularized Least Squares

    PubMed Central

    Zhang, He; Wang, Minghui

    2015-01-01

    Phosphorylation is catalyzed by protein kinases and is irreplaceable in regulating biological processes. Identification of phosphorylation sites with their corresponding kinases contributes to the understanding of molecular mechanisms. Mass spectrometry analysis of phosphor-proteomes generates a large number of phosphorylated sites. However, experimental methods are costly and time-consuming, and most phosphorylation sites determined by experimental methods lack kinase information. Therefore, computational methods are urgently needed to address the kinase identification problem. To this end, we propose a new kernel-based machine learning method called Supervised Laplacian Regularized Least Squares (SLapRLS), which adopts a new method to construct kernels based on the similarity matrix and minimizes both structure risk and overall inconsistency between labels and similarities. The results predicted using both Phospho.ELM and an additional independent test dataset indicate that SLapRLS can more effectively identify kinases compared to other existing algorithms. PMID:26448296

  13. Communication: Charge-population based dispersion interactions for molecules and materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stöhr, Martin; Department Chemie, Technische Universität München, Lichtenbergstr. 4, D-85748 Garching; Michelitsch, Georg S.

    2016-04-21

    We introduce a system-independent method to derive effective atomic C6 coefficients and polarizabilities in molecules and materials purely from charge population analysis. This enables the use of dispersion-correction schemes in electronic structure calculations without recourse to electron-density partitioning schemes and expands their applicability to semi-empirical methods and tight-binding Hamiltonians. We show that the accuracy of our method is on par with established electron-density partitioning based approaches in describing intermolecular C6 coefficients as well as dispersion energies of weakly bound molecular dimers, organic crystals, and supramolecular complexes. We showcase the utility of our approach by incorporation of the recently developed many-body dispersion method [Tkatchenko et al., Phys. Rev. Lett. 108, 236402 (2012)] into the semi-empirical density functional tight-binding method and propose the latter as a viable technique to study hybrid organic-inorganic interfaces.

  14. Kinase Identification with Supervised Laplacian Regularized Least Squares.

    PubMed

    Li, Ao; Xu, Xiaoyi; Zhang, He; Wang, Minghui

    2015-01-01

    Phosphorylation is catalyzed by protein kinases and is irreplaceable in regulating biological processes. Identification of phosphorylation sites with their corresponding kinases contributes to the understanding of molecular mechanisms. Mass spectrometry analysis of phosphor-proteomes generates a large number of phosphorylated sites. However, experimental methods are costly and time-consuming, and most phosphorylation sites determined by experimental methods lack kinase information. Therefore, computational methods are urgently needed to address the kinase identification problem. To this end, we propose a new kernel-based machine learning method called Supervised Laplacian Regularized Least Squares (SLapRLS), which adopts a new method to construct kernels based on the similarity matrix and minimizes both structure risk and overall inconsistency between labels and similarities. The results predicted using both Phospho.ELM and an additional independent test dataset indicate that SLapRLS can more effectively identify kinases compared to other existing algorithms.

  15. Retrieval of spheroid particle size distribution from spectral extinction data in the independent mode using PCA approach

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Lin, Jian-Zhong

    2013-01-01

    An improved anomalous diffraction approximation (ADA) method is first presented for calculating the extinction efficiency of spheroids. In this approach, the extinction efficiency of spheroid particles can be calculated with good accuracy and high efficiency in a wider size range by combining the Latimer method and the ADA theory, and this method can present a more general expression for calculating the extinction efficiency of spheroid particles with various complex refractive indices and aspect ratios. Meanwhile, the visible spectral extinction with varied spheroid particle size distributions and complex refractive indices is surveyed. Furthermore, a selection principle for the spectral extinction data is developed based on PCA (principal component analysis) of the first derivative of the spectral extinction. By calculating the contribution rate of the first derivative spectral extinction, the spectral extinction with more significant features can be selected as the input data, and those with fewer features are removed from the inversion data. In addition, we propose an improved Tikhonov iteration method to retrieve the spheroid particle size distributions in the independent mode. Simulation experiments indicate that the spheroid particle size distributions obtained with the proposed method coincide fairly well with the given distributions, and this inversion method provides a simple, reliable and efficient method to retrieve the spheroid particle size distributions from the spectral extinction data.
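
    The inversion step can be illustrated with a generic Tikhonov-regularised least-squares solve; the kernel matrix, noise level and regularisation parameter below are synthetic placeholders, and the authors' improved iterative scheme is not reproduced.

        import numpy as np

        # Generic Tikhonov-regularised inversion of K x = b, as a stand-in for
        # retrieving a particle size distribution from spectral extinction data.
        # K plays the role of the kernel of extinction efficiencies (wavelengths x size bins).
        rng = np.random.default_rng(0)
        K = rng.random((40, 25))                                       # hypothetical kernel matrix
        x_true = np.exp(-0.5 * ((np.arange(25) - 12) / 4.0) ** 2)      # synthetic size distribution
        b = K @ x_true + rng.normal(0, 1e-3, 40)                       # noisy "measured" extinction

        lam = 1e-2                                                     # regularisation parameter
        x_hat = np.linalg.solve(K.T @ K + lam * np.eye(25), K.T @ b)
        x_hat = np.clip(x_hat, 0.0, None)                              # crude non-negativity constraint
        print("relative error: %.3f"
              % (np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)))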

  16. The equations of motion of a secularly precessing elliptical orbit

    NASA Astrophysics Data System (ADS)

    Casotto, S.; Bardella, M.

    2013-01-01

    The equations of motion of a secularly precessing ellipse are developed using time as the independent variable. The equations are useful when integrating numerically the perturbations about a reference trajectory which is subject to secular perturbations in the node, the argument of pericentre and the mean motion. Usually this is done in connection with Encke's method to ensure minimal rectification frequency. Similar equations are already available in the literature, but they are either given based on the true anomaly as the independent variable or in mixed mode with respect to time through the use of a supporting equation to track the anomaly. The equations developed here form a complete and independent set of six equations in time. Reformulations both of Escobal's and Kyner and Bennett's equations are also provided which lead to a more concise form.

  17. Development of a Trypanosoma cruzi strain typing assay using MS2 peptide spectral libraries (Tc-STAMS2)

    PubMed Central

    de Oliveira, Gilberto Santos; Kawahara, Rebeca; Rosa-Fernandes, Livia; Avila, Carla Cristi; Teixeira, Marta M. G.; Larsen, Martin R.

    2018-01-01

    Background Chagas disease, also known as American trypanosomiasis, is caused by the protozoan Trypanosoma cruzi. Over the last 30 years, Chagas disease has expanded from a neglected parasitic infection of the rural population to an urbanized chronic disease, becoming a potentially emergent global health problem. T. cruzi strains were assigned to seven genetic groups (TcI-TcVI and TcBat), named discrete typing units (DTUs), which represent a set of isolates that differ in virulence, pathogenicity and immunological features. Indeed, attempts have been made to relate the diverse clinical manifestations (from asymptomatic to highly severe disease) to T. cruzi genetic variability. For this reason, several DTU typing methods have been introduced; each has its own advantages and drawbacks, such as high complexity and long analysis time, and all of them are based on genetic signatures. Recently, a novel method discriminated bacterial strains using a peptide identification-free, genome sequence-independent shotgun proteomics workflow. Here, we aimed to develop a Trypanosoma cruzi Strain Typing Assay using MS/MS peptide spectral libraries, named Tc-STAMS2. Methods/Principal findings The Tc-STAMS2 method uses shotgun proteomics combined with spectral library search to assign and discriminate T. cruzi strains independently of genome knowledge. The method is based on the construction of a library of MS/MS peptide spectra built using genotyped T. cruzi reference strains. For identification, the MS/MS peptide spectra of unknown T. cruzi cells are identified using the spectral matching algorithm SpectraST. The Tc-STAMS2 method allowed correct identification of all DTUs with high confidence. The method was robust towards different sample preparations, length of chromatographic gradients and fragmentation techniques. Moreover, a pilot inter-laboratory study showed the applicability to different MS platforms. Conclusions and significance This is the first study that develops an MS-based platform for T. cruzi strain typing. Indeed, the Tc-STAMS2 method allows T. cruzi strain typing using MS/MS spectra as discriminatory features and allows the differentiation of TcI-TcVI DTUs. Similar to genomic-based strategies, the Tc-STAMS2 method allows identification of strains within DTUs. Its robustness towards different experimental and biological variables makes it a valuable complementary strategy to the current T. cruzi genotyping assays. Moreover, this method can be used to identify DTU-specific features correlated with the strain phenotype. PMID:29608573
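
    The spectral-library matching idea can be illustrated with a toy cosine-similarity search over binned MS/MS spectra; this is a crude stand-in for SpectraST, and the library entries and spectra below are synthetic.

        import numpy as np

        def cosine_match(query, library):
            """Score a query MS/MS spectrum (as a binned intensity vector) against a
            library of reference spectra; a crude stand-in for SpectraST-style matching."""
            q = query / np.linalg.norm(query)
            scores = {}
            for name, ref in library.items():
                r = ref / np.linalg.norm(ref)
                scores[name] = float(q @ r)   # cosine similarity, in [0, 1] for non-negative spectra
            return max(scores, key=scores.get), scores

        rng = np.random.default_rng(1)
        library = {"TcI": rng.random(200), "TcII": rng.random(200)}   # hypothetical binned spectra
        query = library["TcI"] + rng.normal(0, 0.05, 200)             # noisy copy of a "TcI" spectrum
        best, scores = cosine_match(np.clip(query, 0, None), library)
        print(best, scores)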

  18. Community-Based Services for Independent Living: Topic Paper G.

    ERIC Educational Resources Information Center

    National Council on the Handicapped, Washington, DC.

    This paper assesses federal legislation and programs affecting community-based services for independent living for people with disabilities. Independent living entitlement programs are contained in Title VII of the Rehabilitation Act of 1973, and include comprehensive services, centers for independent living, and independent living services for…

  19. Application of least median of squared orthogonal distance (LMD) and LMD-based reweighted least squares (RLS) methods on the stock-recruitment relationship

    NASA Astrophysics Data System (ADS)

    Wang, Yan-Jun; Liu, Qun

    1999-03-01

    Analysis of stock-recruitment (SR) data is most often done by fitting various SR relationship curves to the data. Fish population dynamics data often have stochastic variations and measurement errors, which usually result in a biased regression analysis. This paper presents a robust regression method, least median of squared orthogonal distance (LMD), which is insensitive to abnormal values in the dependent and independent variables in a regression analysis. Outliers that have significantly different variance from the rest of the data can be identified in a residual analysis. Then, the least squares (LS) method is applied to the SR data with the defined outliers being down-weighted. The application of the LMD and LMD-based Reweighted Least Squares (RLS) methods to simulated and real fisheries SR data is explored.
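
    A brute-force sketch of a least-median-of-squares line fit using orthogonal distances, followed by an ordinary least-squares refit with flagged outliers removed; the random-sampling search and the outlier threshold are simplifications for illustration, not the authors' exact LMD/RLS procedure.

        import numpy as np

        def lmd_line(x, y, n_trials=2000, seed=0):
            """Least-median-of-squares fit of a line y = a + b*x using squared
            orthogonal distances, via random sampling of point pairs (a sketch)."""
            rng = np.random.default_rng(seed)
            best = None
            for _ in range(n_trials):
                i, j = rng.choice(len(x), size=2, replace=False)
                if x[i] == x[j]:
                    continue
                b = (y[j] - y[i]) / (x[j] - x[i])
                a = y[i] - b * x[i]
                d2 = (y - a - b * x) ** 2 / (1.0 + b ** 2)   # squared orthogonal distances
                med = np.median(d2)
                if best is None or med < best[0]:
                    best = (med, a, b, d2)
            return best

        x = np.linspace(0, 10, 50)
        y = 2.0 + 0.5 * x + np.random.default_rng(2).normal(0, 0.2, 50)
        y[::10] += 5.0                                        # inject a few outliers
        med, a, b, d2 = lmd_line(x, y)
        keep = d2 < 9 * med                                   # drop points flagged as outliers
        a_rls, b_rls = np.polyfit(x[keep], y[keep], 1)[::-1]  # ordinary LS on the cleaned data
        print("LMD: a=%.2f b=%.2f | RLS: a=%.2f b=%.2f" % (a, b, a_rls, b_rls))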

  20. Chemical and isotopic methods for quantifying ground-water recharge in a regional, semiarid environment

    USGS Publications Warehouse

    Wood, Warren W.; Sanford, Ward E.

    1995-01-01

    The High Plains aquifer underlying the semiarid Southern High Plains of Texas and New Mexico, USA was used to illustrate solute and isotopic methods for evaluating recharge fluxes, runoff, and spatial and temporal distribution of recharge. The chloride mass-balance method can provide, under certain conditions, a time-integrated technique for evaluation of recharge flux to regional aquifers that is independent of physical parameters. Applying this method to the High Plains aquifer of the Southern High Plains suggests that recharge flux is approximately 2% of precipitation, or approximately 11 ± 2 mm/y, consistent with previous estimates based on a variety of physically based measurements. The method is useful because long-term average precipitation and chloride concentrations in rain and ground water have less uncertainty and are generally less expensive to acquire than physically based parameters commonly used in analyzing recharge. Spatial and temporal distribution of recharge was evaluated by use of δ2H, δ18O, and tritium concentrations in both ground water and the unsaturated zone. Analyses suggest that nearly half of the recharge to the Southern High Plains occurs as piston flow through playa basin floors that occupy approximately 6% of the area, and that macropore recharge may be important in the remaining recharge. Tritium and chloride concentrations in the unsaturated zone were used in a new equation developed to quantify runoff. Using this equation and data from a representative basin, runoff was found to be 24 ± 3 mm/y; that is in close agreement with values obtained from water-balance measurements on experimental watersheds in the area. Such geochemical estimates are possible because tritium is used to calculate a recharge flux that is independent of precipitation and runoff, whereas recharge flux based on chloride concentration in the unsaturated zone is dependent upon the amount of runoff. The difference between these two estimates yields the amount of runoff to the basin.
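
    The chloride mass-balance estimate reduces to recharge = precipitation × (Cl in rain) / (Cl in ground water); the sketch below uses round illustrative numbers of the same order as those quoted above, not the study's data.

        # Chloride mass-balance: recharge_flux = precipitation * Cl_rain / Cl_groundwater
        # Numbers below are round illustrative values, not measurements from the study.
        precip_mm_per_yr = 500.0     # long-term average precipitation
        cl_rain_mg_per_l = 0.6       # chloride in precipitation (plus dry fallout)
        cl_gw_mg_per_l = 27.0        # chloride in ground water

        recharge_mm_per_yr = precip_mm_per_yr * cl_rain_mg_per_l / cl_gw_mg_per_l
        print("recharge ~ %.1f mm/yr (%.1f%% of precipitation)"
              % (recharge_mm_per_yr, 100 * recharge_mm_per_yr / precip_mm_per_yr))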

  1. Preformulation considerations for controlled release dosage forms. Part III. Candidate form selection using numerical weighting and scoring.

    PubMed

    Chrzanowski, Frank

    2008-01-01

    Two numerical methods, Decision Analysis (DA) and Potential Problem Analysis (PPA) are presented as alternative selection methods to the logical method presented in Part I. In DA properties are weighted and outcomes are scored. The weighted scores for each candidate are totaled and final selection is based on the totals. Higher scores indicate better candidates. In PPA potential problems are assigned a seriousness factor and test outcomes are used to define the probability of occurrence. The seriousness-probability products are totaled and forms with minimal scores are preferred. DA and PPA have never been compared to the logical-elimination method. Additional data were available for two forms of McN-5707 to provide complete preformulation data for five candidate forms. Weight and seriousness factors (independent variables) were obtained from a survey of experienced formulators. Scores and probabilities (dependent variables) were provided independently by Preformulation. The rankings of the five candidate forms, best to worst, were similar for all three methods. These results validate the applicability of DA and PPA for candidate form selection. DA and PPA are particularly applicable in cases where there are many candidate forms and where each form has some degree of unfavorable properties.
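
    A small sketch of the two scoring schemes: Decision Analysis sums weight × score per candidate (higher is better), while Potential Problem Analysis sums seriousness × probability (lower is better). All property names, weights, scores and probabilities below are invented for illustration.

        # Decision Analysis (DA): weighted scores summed per candidate (higher is better).
        # Potential Problem Analysis (PPA): seriousness x probability summed (lower is better).
        weights = {"stability": 5, "solubility": 4, "hygroscopicity": 2}
        scores = {                       # outcome scores per candidate form (hypothetical)
            "Form A": {"stability": 8, "solubility": 6, "hygroscopicity": 9},
            "Form B": {"stability": 9, "solubility": 4, "hygroscopicity": 7},
        }
        da_totals = {form: sum(weights[p] * s[p] for p in weights) for form, s in scores.items()}

        seriousness = {"polymorph conversion": 9, "poor flow": 4}
        probability = {                  # probability of each potential problem (hypothetical)
            "Form A": {"polymorph conversion": 0.1, "poor flow": 0.5},
            "Form B": {"polymorph conversion": 0.4, "poor flow": 0.2},
        }
        ppa_totals = {form: sum(seriousness[k] * p[k] for k in seriousness)
                      for form, p in probability.items()}

        print("DA totals (higher is better):", da_totals)
        print("PPA totals (lower is better):", ppa_totals)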

  2. An efficient 3-D eddy-current solver using an independent impedance method for transcranial magnetic stimulation.

    PubMed

    De Geeter, Nele; Crevecoeur, Guillaume; Dupre, Luc

    2011-02-01

    In many important bioelectromagnetic problem settings, eddy-current simulations are required. Examples are the reduction of eddy-current artifacts in magnetic resonance imaging and techniques in which the eddy currents interact with the biological system, such as the alteration of neurophysiology by transcranial magnetic stimulation (TMS). TMS has become an important tool for the diagnosis and treatment of neurological diseases and psychiatric disorders. A widely applied method for simulating the eddy currents is the impedance method (IM). However, this method has to contend with an ill-conditioned problem and consequently a long convergence time. When dealing with optimal design problems and sensitivity control, the convergence rate becomes even more crucial since the eddy-current solver needs to be evaluated in an iterative loop. Therefore, we introduce an independent IM (IIM), which improves the conditioning and speeds up the numerical convergence. This paper shows how IIM is based on IM and what its advantages are. Moreover, the method is applied to the efficient simulation of TMS. The proposed IIM achieves superior convergence properties with high time efficiency, compared to the traditional IM, and is therefore a useful tool for accurate and fast TMS simulations.

  3. Change detection for synthetic aperture radar images based on pattern and intensity distinctiveness analysis

    NASA Astrophysics Data System (ADS)

    Wang, Xiao; Gao, Feng; Dong, Junyu; Qi, Qiang

    2018-04-01

    Synthetic aperture radar (SAR) imagery is independent of atmospheric conditions, which makes it an ideal image source for change detection. Existing methods directly analyse all regions in the speckle-noise-contaminated difference image, so their performance is easily degraded by small noisy regions. In this paper, we propose a novel saliency-guided change detection framework based on pattern and intensity distinctiveness analysis. The saliency analysis step removes small noisy regions and therefore makes the proposed method more robust to speckle noise. In the proposed method, the log-ratio operator is first used to obtain a difference image (DI). Then, the saliency detection method based on pattern and intensity distinctiveness analysis is used to obtain changed-region candidates. Finally, principal component analysis and k-means clustering are employed to analyse the pixels in the changed-region candidates, and the final change map is obtained by classifying these pixels as changed or unchanged. Experimental results on two real SAR image datasets demonstrate the effectiveness of the proposed method.
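
    A simplified pipeline in the spirit of the description above (log-ratio difference image, local patch features, PCA, two-class k-means); it omits the paper's saliency analysis step and uses synthetic images, so it is only a sketch of the general workflow.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        img1 = rng.gamma(4.0, 25.0, (128, 128))   # stand-ins for two co-registered SAR images
        img2 = img1.copy()
        img2[40:60, 40:60] *= 3.0                 # simulated change region

        # 1) Log-ratio difference image
        di = np.abs(np.log((img1 + 1.0) / (img2 + 1.0)))

        # 2) Feature vectors from 3x3 neighbourhoods, then PCA
        pad = np.pad(di, 1, mode="reflect")
        patches = np.stack([pad[i:i + 128, j:j + 128].ravel()
                            for i in range(3) for j in range(3)], axis=1)
        feats = PCA(n_components=3).fit_transform(patches)

        # 3) Two-class k-means: the cluster with the larger mean DI is the "changed" class
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
        changed_cluster = np.argmax([di.ravel()[labels == k].mean() for k in (0, 1)])
        change_map = labels.reshape(128, 128) == changed_cluster
        print("changed pixels:", int(change_map.sum()))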

  4. New approach to CT pixel-based photon dose calculations in heterogeneous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, J.W.; Henkelman, R.M.

    The effects of small cavities on dose in water and the dose in a homogeneous nonunit density medium illustrate that inhomogeneities do not act independently in photon dose perturbation, and serve as two constraints which should be satisfied by approximate methods of computed tomography (CT) pixel-based dose calculations. Current methods at best satisfy only one of the two constraints and show inadequacies in some intermediate geometries. We have developed an approximate method that satisfies both these constraints and treats much of the synergistic effect of multiple inhomogeneities correctly. The method calculates primary and first-scatter doses by first-order ray tracing with the first-scatter contribution augmented by a component of second scatter that behaves like first scatter. Multiple-scatter dose perturbation values extracted from small cavity experiments are used in a function which approximates the small residual multiple-scatter dose. For a wide range of geometries tested, our method agrees very well with measurements. The average deviation is less than 2% with a maximum of 3%. In comparison, calculations based on existing methods can have errors larger than 10%.

  5. Accurate estimation of human body orientation from RGB-D sensors.

    PubMed

    Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao

    2013-10-01

    Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the wide variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on the RGB-D superpixels to reduce the noise of depth data. Since it is hard to discriminate the full 360° of orientation using static cues or motion cues independently, we propose to utilize a dynamic Bayesian network system (DBNS) to effectively employ the complementary nature of both static and motion cues. In order to verify our proposed method, we build an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.

  6. Size and Base Composition of RNA in Supercoiled Plasmid DNA

    PubMed Central

    Williams, Peter H.; Boyer, Herbert W.; Helinski, Donald R.

    1973-01-01

    The average size and base composition of the covalently integrated RNA segment in supercoiled ColE1 DNA synthesized in Escherichia coli in the presence of chloramphenicol (CM-ColE1 DNA) have been determined by two independent methods. The two approaches yielded similar results, indicating that the RNA segment in CM-ColE1 DNA contains GMP at the 5′ end and comprises on the average 25 to 26 ribonucleotides with a base composition of 10-11 G, 3 A, 5-6 C, and 6-7 U. PMID:4359488

  7. Thermally induced stresses in cross-ply composite tubes

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Cooper, D. E.; Tompkins, S. S.

    1986-01-01

    An approximate solution for determining stresses in cross-ply composite tubes subjected to a circumferential temperature gradient is presented. The solution is based on the principle of complementary virtual work (PCVW) in conjunction with a Ritz approximation on the stress field and accounts for the temperature dependence of material properties. The PCVW method is compared with a planar elasticity solution using temperature-independent material properties and a Navier approach. The net effect of including temperature-dependent material properties is that the peak absolute values of the stresses are reduced. The dependence of the stresses on the circumferential location is also reduced in comparison with the case of temperature-independent properties.

  8. A systematic review of validated methods to capture acute bronchospasm using administrative or claims data.

    PubMed

    Sharifi, Mona; Krishanswami, Shanthi; McPheeters, Melissa L

    2013-12-30

    To identify and assess billing, procedural, or diagnosis code, or pharmacy claim-based algorithms used to identify acute bronchospasm in administrative and claims databases. We searched the MEDLINE database from 1991 to September 2012 using controlled vocabulary and key terms related to bronchospasm, wheeze and acute asthma. We also searched the reference lists of included studies. Two investigators independently assessed the full text of studies against pre-determined inclusion criteria. Two reviewers independently extracted data regarding participant and algorithm characteristics. Our searches identified 677 citations, of which 38 met our inclusion criteria. In these 38 studies, the most commonly used ICD-9 code was 493.x. Only 3 studies reported any validation methods for the identification of bronchospasm, wheeze or acute asthma in administrative and claims databases; all were among pediatric populations and only 2 offered any validation statistics. Some of the outcome definitions utilized were heterogeneous and included other disease-based diagnoses, such as bronchiolitis and pneumonia, which are typically of an infectious etiology. One study offered the validation of algorithms utilizing Emergency Department triage chief complaint codes to diagnose acute asthma exacerbations, with ICD-9 786.07 (wheezing) showing the highest sensitivity (56%), specificity (97%), PPV (93.5%) and NPV (76%). There is a paucity of studies reporting rigorous methods to validate algorithms for the identification of bronchospasm in administrative data. The scant validated data available are limited in their generalizability to broad-based populations. Copyright © 2013 Elsevier Ltd. All rights reserved.
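
    The validation statistics quoted above follow directly from a 2x2 table of algorithm classification versus reference standard; the counts below are hypothetical and chosen only to show the computation.

        # Validation statistics for a code-based case definition against a reference standard.
        tp, fp, fn, tn = 56, 7, 44, 193   # hypothetical 2x2 table (algorithm vs. chart review)

        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        print("sens %.0f%%  spec %.0f%%  PPV %.1f%%  NPV %.0f%%"
              % (100 * sensitivity, 100 * specificity, 100 * ppv, 100 * npv))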

  9. A systematic review of pseudophakic monovision for presbyopia correction

    PubMed Central

    Labiris, Georgios; Toli, Aspa; Perente, Aslin; Ntonti, Panagiota; Kozobolis, Vassilios P.

    2017-01-01

    A systematic review of the recent literature regarding pseudophakic monovision as a reliable method for presbyopia correction was performed based on the PubMed, MEDLINE, Nature and American Academy of Ophthalmology databases in July 2015, and data from 18 descriptive and 12 comparative studies were included in this narrative review. Pseudophakic monovision seems to be an effective method for presbyopia correction, with high rates of spectacle independence and minimal dysphotopsia side effects, and should be considered by modern cataract surgeons. PMID:28730093

  10. The invariant of the stiffness filter function with the weight filter function of the power function form

    NASA Astrophysics Data System (ADS)

    Shang, Zhen; Sui, Yun-Kang

    2012-12-01

    Based on the independent continuous mapping (ICM) method and the homogenization method, a research model is constructed from which a theorem and corollary are deduced concerning the invariant relating the weight filter function to the corresponding stiffness filter function of power-function form. The efficiency of the search for an optimum solution can be raised through the choice of rational filter functions, so the above results are important for further study of structural topology optimization.

  11. Adaptive Implicit Non-Equilibrium Radiation Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philip, Bobby; Wang, Zhen; Berrill, Mark A

    2013-01-01

    We describe methods for accurate and efficient long-term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long-term time integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian-Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level-independent solver convergence.

  12. System and method to determine electric motor efficiency using an equivalent circuit

    DOEpatents

    Lu, Bin; Habetler, Thomas G.

    2015-10-27

    A system and method for determining electric motor efficiency includes a monitoring system having a processor programmed to determine efficiency of an electric motor under load while the electric motor is online. The determination of motor efficiency is independent of a rotor speed measurement. Further, the efficiency is based on a determination of stator winding resistance, an input voltage, and an input current. The determination of the stator winding resistance occurs while the electric motor under load is online.

  13. System and method to determine electric motor efficiency using an equivalent circuit

    DOEpatents

    Lu, Bin [Kenosha, WI; Habetler, Thomas G [Snellville, GA

    2011-06-07

    A system and method for determining electric motor efficiency includes a monitoring system having a processor programmed to determine efficiency of an electric motor under load while the electric motor is online. The determination of motor efficiency is independent of a rotor speed measurement. Further, the efficiency is based on a determination of stator winding resistance, an input voltage, and an input current. The determination of the stator winding resistance occurs while the electric motor under load is online.

  14. Simple algorithms for remote determination of mineral abundances and particle sizes from reflectance spectra

    NASA Technical Reports Server (NTRS)

    Johnson, Paul E.; Smith, Milton O.; Adams, John B.

    1992-01-01

    Algorithms were developed, based on Hapke's (1981) equations, for remote determinations of mineral abundances and particle sizes from reflectance spectra. In this method, spectra are modeled as a function of end-member abundances and illumination/viewing geometry. The method was tested on a laboratory data set. It is emphasized that, although there exist more sophisticated models, the present algorithms are particularly suited for remotely sensed data, where little opportunity exists to independently measure reflectance versus particle size and phase function.
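
    The abundance-estimation step can be sketched as a non-negative least-squares unmixing of an observed spectrum against end-member spectra; the end members below are synthetic, and a faithful implementation would first map reflectance to single-scattering albedo via Hapke's equations so that mixing is approximately linear.

        import numpy as np
        from scipy.optimize import nnls

        # Linear spectral mixture sketch: observed = end-member matrix @ abundances.
        wavelengths = np.linspace(0.4, 2.5, 50)
        E = np.column_stack([
            0.3 + 0.2 * np.sin(3 * wavelengths),    # synthetic "mineral 1" spectrum
            0.5 - 0.1 * wavelengths,                 # synthetic "mineral 2" spectrum
            0.2 + 0.05 * wavelengths ** 2,           # synthetic "mineral 3" spectrum
        ])
        true_abund = np.array([0.5, 0.3, 0.2])
        observed = E @ true_abund + np.random.default_rng(3).normal(0, 0.002, 50)

        abund, resid = nnls(E, observed)             # non-negative least squares
        abund /= abund.sum()                         # renormalise to sum to one
        print("estimated abundances:", np.round(abund, 3))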

  15. Correlation structures in short-term variabilities of stock indices and exchange rates

    NASA Astrophysics Data System (ADS)

    Nakamura, Tomomichi; Small, Michael

    2007-09-01

    Financial data usually show irregular fluctuations and some trends. We investigate whether there are correlation structures in short-term variabilities (irregular fluctuations) among financial data from the viewpoint of deterministic dynamical systems. Our method is based on the small-shuffle surrogate method. The data we use are the daily closing price and trading volume of the Standard & Poor's 500, and daily foreign exchange rates: Euro/US Dollar (USD), British Pound/USD and Japanese Yen/USD. We found that these data are not independent.
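
    A sketch of the small-shuffle surrogate idea: time indices are perturbed with Gaussian noise and the series is re-ordered accordingly, destroying short-term structure while preserving trends; a discriminating statistic on the data is then compared with its distribution over surrogates. The series and statistic below are illustrative, not the financial data used in the study.

        import numpy as np

        def small_shuffle(x, amplitude=1.0, seed=None):
            """Small-shuffle surrogate: perturb the time indices with Gaussian noise and
            re-order the series accordingly (sketch after the method of Nakamura & Small)."""
            rng = np.random.default_rng(seed)
            idx = np.arange(len(x)) + amplitude * rng.standard_normal(len(x))
            return x[np.argsort(idx)]

        def lag1_autocorr_of_increments(x):
            d = np.diff(x)
            return np.corrcoef(d[:-1], d[1:])[0, 1]

        rng = np.random.default_rng(4)
        trend = np.linspace(0, 5, 1000)
        series = trend + np.sin(np.arange(1000) * 0.3) + rng.normal(0, 0.1, 1000)

        stat_data = lag1_autocorr_of_increments(series)
        stat_surr = [lag1_autocorr_of_increments(small_shuffle(series, 1.0, s)) for s in range(99)]
        print("data statistic %.3f, surrogate range [%.3f, %.3f]"
              % (stat_data, min(stat_surr), max(stat_surr)))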

  16. Verification of road databases using multiple road models

    NASA Astrophysics Data System (ADS)

    Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian

    2017-08-01

    In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in the form of two probability distributions, the first one for the state of a database object (correct or incorrect), and a second one for the state of the underlying road model (applicable or not applicable). In accordance with the Dempster-Shafer Theory, both distributions are mapped to a new state space comprising the classes correct, incorrect and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with higher completeness. Additional experiments reveal that, based on the proposed method, a highly reliable semi-automatic approach for road database verification can be designed.

  17. Fluid identification based on P-wave anisotropy dispersion gradient inversion for fractured reservoirs

    NASA Astrophysics Data System (ADS)

    Zhang, J. W.; Huang, H. D.; Zhu, B. H.; Liao, W.

    2017-10-01

    Fluid identification in fractured reservoirs is a challenging issue and has drawn increasing attention. As aligned fractures in subsurface formations can induce anisotropy, we must choose parameters that are independent of azimuth, such as anisotropy parameters, to characterize fractures and fluid effects in fractured reservoirs. Anisotropy is often frequency dependent due to wave-induced fluid flow between pores and fractures. This property is conducive to identifying fluid type using azimuthal seismic data in fractured reservoirs. Through numerical simulation based on the Chapman model, we choose the P-wave anisotropy parameter dispersion gradient (PADG) as the new fluid factor. PADG depends both on average fracture radius and on fluid type, but is independent of azimuth. When the aligned fractures in the reservoir are meter-scaled, gas-bearing layers can be accurately identified using the PADG attribute. The reflection coefficient formula for horizontal transverse isotropy media by Rüger is reformulated and simplified according to frequency, and the target function for inverting PADG based on frequency-dependent amplitude versus azimuth is derived. A spectral decomposition method combining Orthogonal Matching Pursuit and the Wigner-Ville distribution is used to prepare the frequency-division data. Application to synthetic and real seismic data suggests that the method is useful for gas identification in reservoirs with meter-scaled fractures, provided high-quality seismic data are available.

  18. Kingfisher: a system for remote sensing image database management

    NASA Astrophysics Data System (ADS)

    Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.

    2003-04-01

    At present, retrieval methods in remote sensing image databases are mainly based on spatial-temporal information. The increasing number of images collected by the ground stations of earth observing systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content-based retrieval system for remote sensing image databases with an innovative search tool based on image similarity. This methodology is quite innovative for this application: many systems exist for photographic images, for example QBIC and IKONA, but they are not able to properly extract and describe the content of remote sensing images. The target database is an archive of images originating from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantee high retrieval performance and can be extracted without loss independently of image resolution. The latter property allows the DBMS (Database Management System) to process a small amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of image dimensions (width and height). Local and global content descriptors are compared with the query image during the retrieval phase, and the results seem very encouraging.

  19. Integrating evidence based medicine into undergraduate medical education: combining online instruction with clinical clerkships.

    PubMed

    Aronoff, Stephen C; Evans, Barry; Fleece, David; Lyons, Paul; Kaplan, Lawrence; Rojas, Roberto

    2010-07-01

    Incorporation of evidence based medicine into the undergraduate curriculum varies from school to school. The purpose of this study was to determine if an online course in evidence based medicine run concurrently with the clinical clerkships in the 3rd year of undergraduate medical education provided effective instruction in evidence based medicine (EBM). During the first 18 weeks of the 3rd year, students completed 6 online, didactic modules. Over the next 24 weeks, students developed questions independently from patients seen during clerkships and then retrieved and appraised relevant evidence. Online, faculty mentors reviewed student assignments submitted throughout the course to monitor progress. Mastery of the skills of EBM was assessed prior to and at the conclusion of the course using the Fresno test of competency. Paired data were available from 139 students. Postcourse test scores (M = 77.7; 95% CI = 59-96.4) were significantly higher than precourse scores (M = 66.6; 95% CI = 46.5-86.7), p < .001. Paired evaluations demonstrated an average improvement of 11.1 +/- 20.0 points. All of the students submitted 4 independently derived questions and successfully retrieved and appraised evidence. Medical students successfully acquired and independently applied EBM skills following extended, online, faculty mentored instruction. This method of instruction provided uniform instruction across geographic sites and medical specialties and permitted efficient use of faculty time.

  20. Comparison of Bobath based and movement science based treatment for stroke: a randomised controlled trial

    PubMed Central

    van Vliet, P M; Lincoln, N; Foxall, A

    2005-01-01

    Objectives: Bobath based (BB) and movement science based (MSB) physiotherapy interventions are widely used for patients after stroke. There is little evidence to suggest which is most effective. This single-blind randomised controlled trial evaluated the effect of these treatments on movement abilities and functional independence. Methods: A total of 120 patients admitted to a stroke rehabilitation ward were randomised into two treatment groups to receive either BB or MSB treatment. Primary outcome measures were the Rivermead Motor Assessment and the Motor Assessment Scale. Secondary measures assessed functional independence, walking speed, arm function, muscle tone, and sensation. Measures were performed by a blinded assessor at baseline, and then at 1, 3, and 6 months after baseline. Analysis of serial measurements was performed to compare outcomes between the groups by calculating the area under the curve (AUC) and inserting AUC values into Mann-Whitney U tests. Results: Comparison between groups showed no significant difference for any outcome measures. Significance values for the Rivermead Motor Assessment ranged from p = 0.23 to p = 0.97 and for the Motor Assessment Scale from p = 0.29 to p = 0.87. Conclusions: There were no significant differences in movement abilities or functional independence between patients receiving a BB or an MSB intervention. Therefore the study did not show that one approach was more effective than the other in the treatment of stroke patients. PMID:15774435
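
    The serial-measurement analysis described above (area under the curve of each patient's scores over time, compared between groups with a Mann-Whitney U test) can be sketched as follows; the time points and scores are illustrative placeholders, not trial data.

    import numpy as np
    from scipy.stats import mannwhitneyu

    # Assessment times in months (baseline, 1, 3, 6) and illustrative motor scores:
    # rows are patients, columns are time points, one array per treatment group.
    times = np.array([0.0, 1.0, 3.0, 6.0])
    bb_scores = np.array([[10, 14, 18, 22], [8, 12, 15, 19], [12, 13, 17, 20]])
    msb_scores = np.array([[9, 13, 19, 23], [11, 15, 18, 21], [7, 10, 14, 18]])

    # Area under the score-versus-time curve for each patient (trapezoidal rule),
    # then a Mann-Whitney U test between the two groups' AUC values.
    bb_auc = np.trapz(bb_scores, times, axis=1)
    msb_auc = np.trapz(msb_scores, times, axis=1)
    stat, p = mannwhitneyu(bb_auc, msb_auc, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p:.3f}")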

  1. Effective data validation of high-frequency data: time-point-, time-interval-, and trend-based methods.

    PubMed

    Horn, W; Miksch, S; Egghart, G; Popow, C; Paky, F

    1997-09-01

    Real-time systems for monitoring and therapy planning, which receive their data from on-line monitoring equipment and computer-based patient records, require reliable data. Data validation has to utilize and combine a set of fast methods to detect, eliminate, and repair faulty data, which may lead to life-threatening conclusions. The strength of data validation results from the combination of numerical and knowledge-based methods applied to both continuously-assessed high-frequency data and discontinuously-assessed data. Dealing with high-frequency data, examining single measurements is not sufficient. It is essential to take into account the behavior of parameters over time. We present time-point-, time-interval-, and trend-based methods for validation and repair. These are complemented by time-independent methods for determining an overall reliability of measurements. The data validation benefits from the temporal data-abstraction process, which provides automatically derived qualitative values and patterns. The temporal abstraction is oriented on a context-sensitive and expectation-guided principle. Additional knowledge derived from domain experts forms an essential part for all of these methods. The methods are applied in the field of artificial ventilation of newborn infants. Examples from the real-time monitoring and therapy-planning system VIE-VENT illustrate the usefulness and effectiveness of the methods.

  2. Identification and handling of artifactual gene expression profiles emerging in microarray hybridization experiments

    PubMed Central

    Brodsky, Leonid; Leontovich, Andrei; Shtutman, Michael; Feinstein, Elena

    2004-01-01

    Mathematical methods of analysis of microarray hybridizations deal with gene expression profiles as elementary units. However, some of these profiles do not reflect a biologically relevant transcriptional response, but rather stem from technical artifacts. Here, we describe two technically independent but rationally interconnected methods for identification of such artifactual profiles. Our diagnostics are based on detection of deviations from uniformity, which is assumed as the main underlying principle of microarray design. Method 1 is based on detection of non-uniformity of microarray distribution of printed genes that are clustered based on the similarity of their expression profiles. Method 2 is based on evaluation of the presence of gene-specific microarray spots within the slides’ areas characterized by an abnormal concentration of low/high differential expression values, which we define as ‘patterns of differentials’. Applying two novel algorithms, for nested clustering (method 1) and for pattern detection (method 2), we can make a dual estimation of the profile’s quality for almost every printed gene. Genes with artifactual profiles detected by method 1 may then be removed from further analysis. Suspicious differential expression values detected by method 2 may be either removed or weighted according to the probabilities of patterns that cover them, thus diminishing their input in any further data analysis. PMID:14999086

  3. Information filtering via a scaling-based function.

    PubMed

    Qiu, Tian; Zhang, Zi-Ke; Chen, Guang

    2013-01-01

    Finding a universal description of algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL), independent of recommendation list length, built on a hybrid algorithm of heat conduction and mass diffusion, by finding the scaling function relating the tunable parameter to the object average degree. The optimal value of the tunable parameter can be obtained from the scaling function and is heterogeneous across individual objects. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably improves personalized recommendation in three other respects: resolving the accuracy-diversity dilemma, presenting high novelty, and addressing the key challenge of the cold start problem.
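
    The heat-conduction/mass-diffusion hybrid underlying the algorithm can be sketched on a toy user-object matrix as below; the per-object scaling function for the tunable parameter is the paper's contribution and is not reproduced here, so the parameter is simply held constant for illustration.

    import numpy as np

    # Toy user-object adjacency matrix A (rows = users, columns = objects).
    A = np.array([[1, 1, 0, 0, 1],
                  [1, 0, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [1, 0, 0, 1, 1]], dtype=float)
    k_user = A.sum(axis=1)          # user degrees
    k_obj = A.sum(axis=0)           # object degrees
    lam = 0.5                       # tunable hybridization parameter (held fixed here)

    # W[b, a]: resource passed from object a to object b through their common users,
    # normalized by object degrees according to the heat/mass hybrid.
    n_obj = A.shape[1]
    W = np.zeros((n_obj, n_obj))
    for a in range(n_obj):
        for b in range(n_obj):
            common = (A[:, a] * A[:, b] / k_user).sum()
            W[b, a] = common / (k_obj[b] ** (1 - lam) * k_obj[a] ** lam)

    # Recommendation for user 0: spread the user's collected objects through W,
    # mask what is already collected, and rank the remaining objects.
    u = 0
    scores = W @ A[u]
    scores[A[u] > 0] = -np.inf
    print("ranking of uncollected objects for user 0:", np.argsort(-scores))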

  4. Kinematic fingerprint of core-collapsed globular clusters

    NASA Astrophysics Data System (ADS)

    Bianchini, P.; Webb, J. J.; Sills, A.; Vesperini, E.

    2018-03-01

    Dynamical evolution drives globular clusters towards core collapse, which strongly shapes their internal properties. Diagnostics of core collapse have so far been based on photometry only, namely on the study of the concentration of the density profiles. Here, we present a new method to robustly identify core-collapsed clusters based on the study of their stellar kinematics. We introduce the kinematic concentration parameter, ck, the ratio between the global and local degree of energy equipartition reached by a cluster, and show through extensive direct N-body simulations that clusters approaching core collapse and in the post-core collapse phase are strictly characterized by ck > 1. The kinematic concentration provides a suitable diagnostic to identify core-collapsed clusters, independent of any previous method based on photometry. We also explore the effects of incomplete radial and stellar mass coverage on the calculation of ck and find that our method can be applied to state-of-the-art kinematic data sets.

  5. Inverse metal-assisted chemical etching produces smooth high aspect ratio InP nanostructures.

    PubMed

    Kim, Seung Hyun; Mohseni, Parsian K; Song, Yi; Ishihara, Tatsumi; Li, Xiuling

    2015-01-14

    Creating high aspect ratio (AR) nanostructures by top-down fabrication without surface damage remains challenging for III-V semiconductors. Here, we demonstrate uniform, array-based InP nanostructures with lateral dimensions as small as sub-20 nm and AR > 35 using inverse metal-assisted chemical etching (I-MacEtch) in hydrogen peroxide (H2O2) and sulfuric acid (H2SO4), a purely solution-based yet anisotropic etching method. The mechanism of I-MacEtch, in contrast to regular MacEtch, is explored through surface characterization. Unique to I-MacEtch, the sidewall etching profile is remarkably smooth, independent of metal pattern edge roughness. The capability of this simple method to create various InP nanostructures, including high AR fins, can potentially enable the aggressive scaling of InP based transistors and optoelectronic devices with better performance and at lower cost than conventional etching methods.

  6. The Kitchen Is Your Laboratory: A Research-Based Term-Paper Assignment in a Science Writing Course

    ERIC Educational Resources Information Center

    Jones, Clinton D.

    2011-01-01

    A term-paper assignment that encompasses the full scientific method has been developed and implemented in an undergraduate science writing and communication course with no laboratory component. Students are required to develop their own hypotheses, design experiments to test their hypotheses, and collect empirical data as independent scientists in…

  7. Curriculum-Dependent and Curriculum-Independent Factors in Preservice Elementary Teachers' Adaptation of Science Curriculum Materials for Inquiry-Based Science

    ERIC Educational Resources Information Center

    Forbes, Cory T.

    2013-01-01

    In this nested mixed methods study I investigate factors influencing preservice elementary teachers' adaptation of science curriculum materials to better support students' engagement in science as inquiry. Analyses focus on two "reflective teaching assignments" completed by 46 preservice elementary teachers in an undergraduate elementary science…

  8. Elastoplastic properties of a low-modulus titanium-based β alloy

    NASA Astrophysics Data System (ADS)

    Betekhtin, V. I.; Kolobov, Yu. R.; Golosova, O. A.; Kardashev, B. K.; Kadomtsev, A. G.; Narykova, M. V.; Ivanov, M. B.; Vershinina, T. N.

    2013-10-01

    The elastoplastic properties (elastic modulus, amplitude-independent damping ratio, microplastic flow stress) of a Ti-26Nb-7Mo-12Zr titanium β alloy are determined using an acoustic resonance method. The effect of the strain during thermomechanical treatment on the structural features of the micro-crystalline alloy and, hence, its elastoplastic properties is analyzed.

  9. Analysis of the gut microbiome in beef cattle and its association with feed intake, growth, and efficiency

    USDA-ARS?s Scientific Manuscript database

    Next-generation sequencing has taken a central role in studies of microbial ecology, especially with regard to culture-independent methods based on molecular phylogenies of the small-subunit ribosomal RNA gene (16S rRNA gene). The ability to relate trends at the species or genus level to host/envir...

  10. Accelerating Literacy Program: The First Year 1993-94.

    ERIC Educational Resources Information Center

    Smith, Ralph J.

    The 1993-94 school year was the first year of the Accelerating Literacy Program (ALP) of the Austin (Texas) Independent School District. The ALP used a grant from the Texas Education Agency to train elementary educators in the methods of a short-term reading intervention program based on the Reading Recovery/Whole Language theory. A group of 367…

  11. Calculating CMMI-Based ROI: Why, When, What, and How?

    DTIC Science & Technology

    2007-03-01

    flows are discounted using either the company’s weighted average cost of capital (WACC) or a “hurdle rate” consisting of the company’s cost of... Provides a “one number” method of comparing projects that is independent of the company’s WACC or hurdle rate. Cons: no way to know the dollar magnitude of...

  12. Community detection for fluorescent lifetime microscopy image segmentation

    NASA Astrophysics Data System (ADS)

    Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Achilefu, Samuel; Nussinov, Zohar

    2014-03-01

    A multiresolution community detection (CD) method has been suggested in a recent work as an efficient method for performing unsupervised segmentation of fluorescence lifetime (FLT) images of live cells containing fluorescent molecular probes [1]. In the current paper, we further explore this method in FLT images of ex vivo tissue slices. The image processing problem is framed as identifying clusters with respective average FLTs against a background or "solvent" in FLT imaging microscopy (FLIM) images derived using NIR fluorescent dyes. We have identified significant multiresolution structures using replica correlations in these images, where such correlations are manifested by information theoretic overlaps of the independent solutions ("replicas") attained using the multiresolution CD method from different starting points. In this paper, our method is found to be more efficient than a current state-of-the-art image segmentation method based on a mixture of Gaussian distributions. It offers more than 1.25 times the diversity, as measured by the Shannon index, of the latter method in selecting clusters with distinct average FLTs in NIR FLIM images.

  13. Solvency supervision based on a total balance sheet approach

    NASA Astrophysics Data System (ADS)

    Pitselis, Georgios

    2009-11-01

    In this paper we investigate the adequacy of the own funds a company requires in order to remain healthy and avoid insolvency. Two methods are applied here: the quantile regression method and the method of mixed effects models. Quantile regression is capable of providing a more complete statistical analysis of the stochastic relationship among random variables than least squares estimation. The estimated mixed effects line can be considered an internal industry equation (norm), which explains a systematic relation between a dependent variable (such as own funds) and independent variables (e.g. financial characteristics, such as assets, provisions, etc.). The above two methods are implemented with two data sets.
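
    A minimal sketch of the quantile-regression side of such an analysis is given below using statsmodels; the variable names (own_funds, assets, provisions) and the generated data are placeholders, not the paper's data sets.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Placeholder data: own funds loosely driven by assets and provisions, with
    # heteroscedastic noise so that different quantiles tell different stories.
    rng = np.random.default_rng(0)
    n = 200
    assets = rng.lognormal(mean=3.0, sigma=0.5, size=n)
    provisions = 0.4 * assets + rng.normal(0.0, 1.0, size=n)
    own_funds = 0.2 * assets + 0.1 * provisions + rng.normal(0.0, 0.5 + 0.05 * assets)
    df = pd.DataFrame({"own_funds": own_funds, "assets": assets, "provisions": provisions})

    # Fit several conditional quantiles; the upper quantiles describe the own-funds
    # level needed to stay above most of the industry for given characteristics.
    model = smf.quantreg("own_funds ~ assets + provisions", df)
    for q in (0.5, 0.75, 0.95):
        res = model.fit(q=q)
        print(f"q = {q}:", res.params.round(3).to_dict())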

  14. Development of a Trypanosoma cruzi strain typing assay using MS2 peptide spectral libraries (Tc-STAMS2).

    PubMed

    de Oliveira, Gilberto Santos; Kawahara, Rebeca; Rosa-Fernandes, Livia; Mule, Simon Ngao; Avila, Carla Cristi; Teixeira, Marta M G; Larsen, Martin R; Palmisano, Giuseppe

    2018-04-01

    Chagas disease, also known as American trypanosomiasis, is caused by the protozoan Trypanosoma cruzi. Over the last 30 years, Chagas disease has expanded from a neglected parasitic infection of the rural population to an urbanized chronic disease, becoming a potentially emergent global health problem. T. cruzi strains were assigned to seven genetic groups (TcI-TcVI and TcBat), named discrete typing units (DTUs), which represent sets of isolates that differ in virulence, pathogenicity and immunological features. Indeed, attempts have been made to relate the diverse clinical manifestations (from asymptomatic to highly severe disease) to T. cruzi genetic variability. For this reason, several DTU typing methods have been introduced. Each method has its own advantages and drawbacks, such as high complexity and analysis time, and all of them are based on genetic signatures. Recently, a novel method discriminated bacterial strains using a peptide identification-free, genome sequence-independent shotgun proteomics workflow. Here, we aimed to develop a Trypanosoma cruzi Strain Typing Assay using MS/MS peptide spectral libraries, named Tc-STAMS2. The Tc-STAMS2 method uses shotgun proteomics combined with spectral library search to assign and discriminate T. cruzi strains independently of genome knowledge. The method is based on the construction of a library of MS/MS peptide spectra built using genotyped T. cruzi reference strains. For identification, the MS/MS peptide spectra of unknown T. cruzi cells are identified using the spectral matching algorithm SpectraST. The Tc-STAMS2 method allowed correct identification of all DTUs with high confidence. The method was robust towards different sample preparations, lengths of chromatographic gradients and fragmentation techniques. Moreover, a pilot inter-laboratory study showed its applicability to different MS platforms. This is the first study to develop a MS-based platform for T. cruzi strain typing. Indeed, the Tc-STAMS2 method allows T. cruzi strain typing using MS/MS spectra as discriminatory features and allows the differentiation of TcI-TcVI DTUs. Similar to genomic-based strategies, the Tc-STAMS2 method allows identification of strains within DTUs. Its robustness towards different experimental and biological variables makes it a valuable complementary strategy to the current T. cruzi genotyping assays. Moreover, this method can be used to identify DTU-specific features correlated with the strain phenotype.

  15. Retrieval of volcanic ash height from satellite-based infrared measurements

    NASA Astrophysics Data System (ADS)

    Zhu, Lin; Li, Jun; Zhao, Yingying; Gong, He; Li, Wenjie

    2017-05-01

    A new algorithm for retrieving volcanic ash cloud height from satellite-based measurements is presented. This algorithm, which was developed in preparation for China's next-generation meteorological satellite (FY-4), is based on volcanic ash microphysical property simulation and statistical optimal estimation theory. The MSG satellite's main payload, the 12-channel Spinning Enhanced Visible and Infrared Imager, was used as proxy data to test this new algorithm. A series of eruptions of Iceland's Eyjafjallajökull volcano during April to May 2010 and the Puyehue-Cordón Caulle volcanic complex eruption in the Chilean Andes on 16 June 2011 were selected as two typical cases for evaluating the algorithm under various meteorological backgrounds. Independent volcanic ash simulation training samples and satellite-based Cloud-Aerosol Lidar with Orthogonal Polarization data were used as validation data. It is demonstrated that the statistically based volcanic ash height algorithm is able to rapidly retrieve volcanic ash heights, globally. The retrieved ash heights show comparable accuracy with both the independent training data and the lidar measurements, which is consistent with previous studies. However, under complicated backgrounds with multiple vertical layers, underlying stratus clouds tend to have detrimental effects on the final retrieval accuracy. This problem remains unresolved, as it does for many other previously published methods using passive satellite sensors. Compared with previous studies, the FY-4 ash height algorithm is independent of simultaneous atmospheric profiles, providing a flexible way to estimate volcanic ash height using passive satellite infrared measurements.

  16. Independent component analysis for cochlear implant artifacts attenuation from electrically evoked auditory steady-state response measurements

    NASA Astrophysics Data System (ADS)

    Deprez, Hanne; Gransier, Robin; Hofmann, Michael; van Wieringen, Astrid; Wouters, Jan; Moonen, Marc

    2018-02-01

    Objective. Electrically evoked auditory steady-state responses (EASSRs) are potentially useful for objective cochlear implant (CI) fitting and follow-up of the auditory maturation in infants and children with a CI. EASSRs are recorded in the electro-encephalogram (EEG) in response to electrical stimulation with continuous pulse trains, and are distorted by significant CI artifacts related to this electrical stimulation. The aim of this study is to evaluate a CI artifact attenuation method based on independent component analysis (ICA) for three EASSR datasets. Approach. ICA has often been used to remove CI artifacts from the EEG to record transient auditory responses, such as cortical evoked auditory potentials. Independent components (ICs) corresponding to CI artifacts are then often manually identified. In this study, an ICA based CI artifact attenuation method was developed and evaluated for EASSR measurements with varying CI artifact and EASSR characteristics. Artifactual ICs were automatically identified based on their spectrum. Main results. For 40 Hz amplitude modulation (AM) stimulation at comfort level, in high SNR recordings, ICA succeeded in removing CI artifacts from all recording channels without distorting the EASSR. For lower SNR recordings, with 40 Hz AM stimulation at lower levels, or 90 Hz AM stimulation, ICA either distorted the EASSR or could not remove all CI artifacts in most subjects, except for two of the seven subjects tested with low level 40 Hz AM stimulation. Noise levels were reduced after ICA was applied, and up to 29 ICs were rejected, suggesting poor ICA separation quality. Significance. We hypothesize that ICA is capable of separating CI artifacts and the EASSR when the contralateral hemisphere is EASSR dominated. For small EASSRs or large CI artifact amplitudes, ICA separation quality is insufficient to ensure complete CI artifact attenuation without EASSR distortion.
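
    The spectrum-based identification of artifactual components can be sketched on toy data as below, using scikit-learn's FastICA; the simulated channels, the 300 Hz artifact surrogate and the simple high-frequency power criterion are illustrative assumptions, not the recording setup or criterion used in the study.

    import numpy as np
    from sklearn.decomposition import FastICA

    # Toy multichannel recording: a small 40 Hz response plus a large high-frequency
    # artifact (a crude stand-in for CI stimulation artifacts), mixed over 8 channels.
    fs = 1000
    t = np.arange(0, 4, 1 / fs)
    response = 0.5 * np.sin(2 * np.pi * 40 * t)
    artifact = 20.0 * np.sin(2 * np.pi * 300 * t)
    rng = np.random.default_rng(1)
    mixing = rng.normal(size=(8, 2))
    X = (mixing @ np.vstack([response, artifact]) + rng.normal(0, 0.2, (8, t.size))).T

    # Unmix, flag components whose spectral power is dominated by frequencies above
    # 100 Hz, zero them, and rebuild the channels without the flagged components.
    ica = FastICA(n_components=4, random_state=0, max_iter=1000)
    S = ica.fit_transform(X)                       # samples x components
    freqs = np.fft.rfftfreq(S.shape[0], 1 / fs)
    power = np.abs(np.fft.rfft(S, axis=0)) ** 2
    high_ratio = power[freqs > 100].sum(axis=0) / power.sum(axis=0)
    artifactual = high_ratio > 0.5

    S_clean = S.copy()
    S_clean[:, artifactual] = 0.0
    X_clean = ica.inverse_transform(S_clean)       # artifact-attenuated channels
    print("components flagged as artifactual:", np.where(artifactual)[0])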

  17. Hematocrit-Independent Quantitation of Stimulants in Dried Blood Spots: Pipet versus Microfluidic-Based Volumetric Sampling Coupled with Automated Flow-Through Desorption and Online Solid Phase Extraction-LC-MS/MS Bioanalysis.

    PubMed

    Verplaetse, Ruth; Henion, Jack

    2016-07-05

    A workflow overcoming microsample collection issues and hematocrit (HCT)-related bias would facilitate more widespread use of dried blood spots (DBS). This report describes comparative results between the use of a pipet and a microfluidic-based sampling device for the creation of volumetric DBS. Both approaches were successfully coupled to HCT-independent, fully automated sample preparation and online liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) analysis allowing detection of five stimulants in finger prick blood. Reproducible, selective, accurate, and precise responses meeting generally accepted regulated bioanalysis guidelines were observed over the range of 5-1000 ng/mL whole blood. The applied heated flow-through solvent desorption of the entire spot and online solid phase extraction (SPE) procedure were unaffected by the blood's HCT value within the tested range of 28.0-61.5% HCT. Enhanced stability for mephedrone on DBS compared to liquid whole blood was observed. Finger prick blood samples were collected using both volumetric sampling approaches over a time course of 25 h after intake of a single oral dose of phentermine. A pharmacokinetic curve for the incurred phentermine was successfully produced using the described validated method. These results suggest that either volumetric sample collection method may be amenable to field-use followed by fully automated, HCT-independent DBS-SPE-LC-MS/MS bioanalysis for the quantitation of these representative controlled substances. Analytical data from DBS prepared with a pipet and microfluidic-based sampling devices were comparable, but the latter is easier to operate, making this approach more suitable for sample collection by unskilled persons.

  18. Efficient Variable Selection Method for Exposure Variables on Binary Data

    NASA Astrophysics Data System (ADS)

    Ohno, Manabu; Tarumi, Tomoyuki

    In this paper, we propose a new variable selection method for "robust" exposure variables. We define "robust" as the property that the same variable is selected from both the original data and perturbed data. There are few studies of effective methods for this selection problem. The problem of selecting exposure variables is almost the same as the problem of extracting correlation rules without robustness. [Brin 97] suggested that correlation rules can be extracted efficiently using the chi-squared statistic of a contingency table, which has a monotone property on binary data. However, the chi-squared value itself does not have the monotone property, so a variable set is easily judged to be not independent as the dimension increases even though it is completely independent, and the method is therefore not usable for selecting robust exposure variables. We instead assume an anti-monotone property for independent variables in order to select robust independent variables, and use the apriori algorithm for this purpose. The apriori algorithm is one of the algorithms that find association rules from market basket data; it uses the anti-monotone property of the support defined for association rules. Independence does not completely satisfy the anti-monotone property on the AIC of the independence probability model, but the tendency to satisfy it is strong. Therefore, variables selected under the anti-monotone property on the AIC are robust. Our method judges whether a certain variable is an exposure variable for an independent variable by comparison of AIC values. Our numerical experiments show that our method can select robust exposure variables efficiently and precisely.

  19. Development of a “Fission-proxy” Method for the Measurement of 14-MeV Neutron Fission Yields at CAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gharibyan, Narek

    2016-10-25

    Relative fission yield measurements were made for 50 fission products from 25.6±0.5 MeV alpha-induced fission of Th-232. Quantitative comparison of these experimentally measured fission yields with the evaluated fission yields from 14-MeV neutron-induced fission of U-235 demonstrates the feasibility of the proposed fission-proxy method. This new technique, based on the Bohr-independence hypothesis, permits the measurement of fission yields from an alternate reaction pathway (Th-232 + 25.6 MeV α → U-236* vs. U-235 + 14-MeV n → U-236*) given that the fission process associated with the same compound nucleus is independent of its formation. Other suitable systems that can potentially be investigated in this manner include (but are not limited to) Pu-239 and U-237.

  20. Study on the supply chain of an enterprise based on axiomatic design

    NASA Astrophysics Data System (ADS)

    Fan, Shu-hai; Lin, Chao-qun; Ji, Chun; Zhou, Ce; Chen, Peng

    2018-06-01

    This paper first expounds the basic theoretical knowledge of axiomatic design, and then designs and improves an enterprise supply chain through the two design axioms (the independence axiom and the information axiom). In the design step based on the independence axiom, the designer determines the needs and problems to be solved, defines the top-level goals, decomposes them, and derives the corresponding design equations. In the application of the information axiom, the cloud model concept is used to quantify the amount of information, and the two schemes are evaluated and compared. Finally, through axiomatic design, the best solution for the improvement of the supply chain design is obtained. Axiomatic design is a generic, systematic and sophisticated approach to design that addresses the needs of different customers. Using this method to improve the level of supply chain management is a novel application. As a mature method, it makes the process efficient and convenient.

  1. Application of Nuclear Magnetic Resonance and Hybrid Methods to Structure Determination of Complex Systems.

    PubMed

    Prischi, Filippo; Pastore, Annalisa

    2016-01-01

    The current main challenge of Structural Biology is to undertake the structure determination of increasingly complex systems in the attempt to better understand their biological function. As systems become more challenging, however, there is an increasing demand for the parallel use of more than one independent technique to allow pushing the frontiers of structure determination and, at the same time, obtaining independent structural validation. The combination of different Structural Biology methods has been named hybrid approaches. The aim of this review is to critically discuss the most recent examples and new developments that have allowed structure determination or experimentally-based modelling of various molecular complexes selecting them among those that combine the use of nuclear magnetic resonance and small angle scattering techniques. We provide a selective but focused account of some of the most exciting recent approaches and discuss their possible further developments.

  2. Measuring cosmological parameters

    PubMed Central

    Freedman, Wendy L.

    1998-01-01

    In this review, the status of measurements of the matter density (Ωm), the vacuum energy density or cosmological constant (ΩΛ), the Hubble constant (H0), and the ages of the oldest measured objects (t0) are summarized. Three independent types of methods for measuring the Hubble constant are considered: the measurement of time delays in multiply imaged quasars, the Sunyaev–Zel’dovich effect in clusters, and Cepheid-based extragalactic distances. Many recent independent dynamical measurements are yielding a low value for the matter density (Ωm ≈ 0.2–0.3). A wide range of Hubble constant measurements appear to be converging in the range of 60–80 km/sec per megaparsec. Areas where future improvements are likely to be made soon are highlighted—in particular, measurements of anisotropies in the cosmic microwave background. Particular attention is paid to sources of systematic error and the assumptions that underlie many of the measurement methods. PMID:9419315

  3. A cosmology-independent calibration of type Ia supernovae data

    NASA Astrophysics Data System (ADS)

    Hauret, C.; Magain, P.; Biernaux, J.

    2018-06-01

    Recently, the common methodology used to transform type Ia supernovae (SNe Ia) into genuine standard candles has been suffering criticism. Indeed, it assumes a particular cosmological model (namely the flat ΛCDM) to calibrate the standardisation correction parameters, i.e. the dependency of the supernova peak absolute magnitude on its colour, post-maximum decline rate and host galaxy mass. As a result, this assumption could make the data compliant with the assumed cosmology and thus nullify all works previously conducted on model comparison. In this work, we verify the viability of these hypotheses by developing a cosmology-independent approach to standardise SNe Ia data from the recent JLA compilation. Our resulting corrections turn out to be very close to the ΛCDM-based corrections. Therefore, even if a ΛCDM-based calibration is questionable from a theoretical point of view, the potential compliance of SNe Ia data does not happen in practice for the JLA compilation. Previous works of model comparison based on these data do not have to be called into question. However, as this cosmology-independent standardisation method has the same degree of complexity as the model-dependent one, it is worth using it in future works, especially if smaller samples are considered, such as the superluminous type Ic supernovae.

  4. A comparison between Gauss-Newton and Markov chain Monte Carlo based methods for inverting spectral induced polarization data for Cole-Cole parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.

    2008-05-15

    We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values, and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on the unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values, and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
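
    A toy illustration of the stochastic side of such a comparison is sketched below: a random-walk Metropolis sampler for the four Cole-Cole parameters fitted to synthetic complex-resistivity data. The Cole-Cole form, flat bounded priors, Gaussian noise model and step sizes are illustrative assumptions, not the paper's Bayesian model.

    import numpy as np

    def cole_cole(omega, rho0, m, tau, c):
        # Complex resistivity of the Cole-Cole relaxation model.
        return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

    # Synthetic SIP data with known parameters and additive Gaussian noise.
    rng = np.random.default_rng(0)
    omega = 2 * np.pi * np.logspace(-2, 3, 30)
    data = cole_cole(omega, 100.0, 0.3, 0.01, 0.5)
    sigma = 0.5
    data = data + rng.normal(0, sigma, omega.size) + 1j * rng.normal(0, sigma, omega.size)

    def log_like(theta):
        rho0, m, tau, c = theta
        if rho0 <= 0 or not 0 < m < 1 or tau <= 0 or not 0 < c < 1:
            return -np.inf                         # flat priors with hard bounds
        resid = data - cole_cole(omega, rho0, m, tau, c)
        return -0.5 * np.sum((resid.real ** 2 + resid.imag ** 2) / sigma ** 2)

    # Random-walk Metropolis sampling of (rho0, m, tau, c).
    theta = np.array([80.0, 0.5, 0.005, 0.6])      # arbitrary starting point
    step = np.array([1.0, 0.02, 0.001, 0.02])
    ll = log_like(theta)
    samples = []
    for _ in range(20000):
        proposal = theta + rng.normal(0, step)
        ll_prop = log_like(proposal)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = proposal, ll_prop
        samples.append(theta)

    posterior = np.array(samples)[5000:]           # discard burn-in
    print("posterior means:", posterior.mean(axis=0).round(4))
    print("posterior stds: ", posterior.std(axis=0).round(4))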

  5. A two-step super-Gaussian independent component analysis approach for fMRI data.

    PubMed

    Ge, Ruiyang; Yao, Li; Zhang, Hang; Long, Zhiying

    2015-09-01

    Independent component analysis (ICA) has been widely applied to functional magnetic resonance imaging (fMRI) data analysis. Although ICA assumes that the sources underlying the data are statistically independent, it usually ignores the sources' additional properties, such as sparsity. In this study, we propose a two-step super-Gaussian ICA (2SGICA) method that incorporates the sparse prior of the sources into the ICA model. 2SGICA uses the super-Gaussian ICA (SGICA) algorithm, which is based on a simplified Lewicki-Sejnowski model, to obtain the initial source estimate in the first step. Using a kernel estimator technique, the source density is acquired and fitted to a Laplacian function based on the initial source estimates. The fitted Laplacian prior is used for each source at the second SGICA step. Moreover, the automatic target generation process for initial value generation is used in 2SGICA to guarantee the stability of the algorithm. An adaptive step size selection criterion is also implemented in the proposed algorithm. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of 2SGICA and made a performance comparison between Infomax ICA, FastICA, mean field ICA (MFICA) with a Laplacian prior, sparse online dictionary learning (ODL), SGICA and 2SGICA. Both simulated and real fMRI experiments showed that 2SGICA was the most robust to noise and had the best spatial detection power and time course estimation among the six methods. Copyright © 2015. Published by Elsevier Inc.

  6. Denoising of chaotic signal using independent component analysis and empirical mode decomposition with circulate translating

    NASA Astrophysics Data System (ADS)

    Wen-Bo, Wang; Xiao-Dong, Zhang; Yuchan, Chang; Xiang-Li, Wang; Zhao, Wang; Xi, Chen; Lei, Zheng

    2016-01-01

    In this paper, a new method to reduce noise in chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is, first, to decompose the chaotic signal and construct multidimensional input vectors based on EMD and its translation invariance. Second, independent component analysis is performed on the input vectors, which amounts to a self-adapting denoising of the intrinsic mode functions (IMFs) of the chaotic signal. Finally, the denoised IMFs are recombined into the new denoised chaotic signal. Experiments were carried out on a Lorenz chaotic signal contaminated with different levels of Gaussian noise and on the monthly observed chaotic sunspot sequence. The results show that the proposed method is effective in denoising chaotic signals. Moreover, it can effectively correct the center point in the phase space, which brings the reconstruction closer to the real track of the chaotic attractor. Project supported by the National Science and Technology, China (Grant No. 2012BAJ15B04), the National Natural Science Foundation of China (Grant Nos. 41071270 and 61473213), the Natural Science Foundation of Hubei Province, China (Grant No. 2015CFB424), the State Key Laboratory Foundation of Satellite Ocean Environment Dynamics, China (Grant No. SOED1405), the Hubei Provincial Key Laboratory Foundation of Metallurgical Industry Process System Science, China (Grant No. Z201303), and the Hubei Key Laboratory Foundation of Transportation Internet of Things, Wuhan University of Technology, China (Grant No. 2015III015-B02).

  7. Automatic treatment of the variance estimation bias in TRIPOLI-4 criticality calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumonteil, E.; Malvagi, F.

    2012-07-01

    The central limit theorem (CLT) states conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed. The use of Monte Carlo transport codes, such as Tripoli4, relies on those conditions. While these are verified in protection applications (the cycles provide independent measurements of fluxes and related quantities), the hypothesis of independent estimates/cycles is broken in criticality mode. Indeed, the power iteration technique used in this mode couples a generation to its progeny. Often, after what is called 'source convergence', this coupling almost disappears (the solution is close to equilibrium), but for loosely coupled systems, such as PWR or large nuclear cores, the equilibrium is never found, or at least may take time to reach, and the variance estimate allowed by the CLT is under-evaluated. In this paper we first propose, by means of two different methods, to evaluate the typical correlation length, measured in number of cycles, and then use this information to diagnose correlation problems and to provide an improved variance estimation. Those two methods are based on Fourier spectral decomposition and on the lag k autocorrelation calculation. A theoretical modeling of the autocorrelation function, based on Gauss-Markov stochastic processes, will also be presented. Tests will be performed with Tripoli4 on a PWR pin cell. (authors)
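
    The lag-k autocorrelation route to a corrected variance can be sketched as follows; the AR(1) series stands in for correlated per-cycle estimates, and the simple truncation rule for the integrated autocorrelation time is an illustrative choice, not the diagnostic actually implemented in Tripoli4.

    import numpy as np

    # An AR(1) series stands in for correlated per-cycle estimates (e.g. k_eff per cycle).
    rng = np.random.default_rng(0)
    n, phi = 5000, 0.8
    x = np.empty(n)
    x[0] = rng.normal()
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()

    def autocorr(y, k):
        # Lag-k autocorrelation coefficient of a 1-D series.
        y = y - y.mean()
        return float((y[:-k] * y[k:]).sum() / (y * y).sum())

    # Integrated autocorrelation time: accumulate lag-k coefficients until they fade.
    tau = 1.0
    for k in range(1, n // 10):
        rho = autocorr(x, k)
        if rho < 0.05:
            break
        tau += 2.0 * rho

    naive_var = x.var(ddof=1) / n          # variance of the mean assuming independent cycles
    corrected_var = naive_var * tau        # inflated by the estimated correlation length
    print(f"tau = {tau:.1f}, naive std = {naive_var ** 0.5:.4f}, "
          f"corrected std = {corrected_var ** 0.5:.4f}")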

  8. FT-IR imaging for quantitative determination of liver fat content in non-alcoholic fatty liver.

    PubMed

    Kochan, K; Maslak, E; Chlopicki, S; Baranska, M

    2015-08-07

    In this work we apply FT-IR imaging of large areas of liver tissue cross-section samples (∼5 cm × 5 cm) for quantitative assessment of steatosis in murine model of Non-Alcoholic Fatty Liver (NAFLD). We quantified the area of liver tissue occupied by lipid droplets (LDs) by FT-IR imaging and Oil Red O (ORO) staining for comparison. Two alternative FT-IR based approaches are presented. The first, straightforward method, was based on average spectra from tissues and provided values of the fat content by using a PLS regression model and the reference method. The second one – the chemometric-based method – enabled us to determine the values of the fat content, independently of the reference method by means of k-means cluster (KMC) analysis. In summary, FT-IR images of large size liver sections may prove to be useful for quantifying liver steatosis without the need of tissue staining.
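
    The PLS-regression step of the first approach can be sketched as below with scikit-learn; the synthetic spectra, the single lipid-like band and the reference fat fractions are placeholders for the FT-IR images and ORO-derived reference values.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # Synthetic "average tissue spectra": the intensity of one lipid-like band tracks
    # the reference fat-area fraction; everything else is noise.
    rng = np.random.default_rng(0)
    n_samples, n_points = 40, 300
    fat_fraction = rng.uniform(0, 30, n_samples)                 # reference values (% area)
    band = np.exp(-0.5 * ((np.arange(n_points) - 120) / 8.0) ** 2)
    spectra = fat_fraction[:, None] * band[None, :] + rng.normal(0, 0.5, (n_samples, n_points))

    # PLS regression of the reference fat fraction on the spectra, with cross-validated
    # predictions to gauge how well the spectra alone recover the reference values.
    pls = PLSRegression(n_components=3)
    predicted = cross_val_predict(pls, spectra, fat_fraction, cv=5).ravel()
    rmse = float(np.sqrt(np.mean((predicted - fat_fraction) ** 2)))
    print(f"cross-validated RMSE: {rmse:.2f} % fat area")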

  9. Enriching regulatory networks by bootstrap learning using optimised GO-based gene similarity and gene links mined from PubMed abstracts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Ronald C.; Sanfilippo, Antonio P.; McDermott, Jason E.

    2011-02-18

    Transcriptional regulatory networks are being determined using “reverse engineering” methods that infer connections based on correlations in gene state. Corroboration of such networks through independent means such as evidence from the biomedical literature is desirable. Here, we explore a novel approach, a bootstrapping version of our previous Cross-Ontological Analytic method (XOA) that can be used for semi-automated annotation and verification of inferred regulatory connections, as well as for discovery of additional functional relationships between the genes. First, we use our annotation and network expansion method on a biological network learned entirely from the literature. We show how new relevant links between genes can be iteratively derived using a gene similarity measure based on the Gene Ontology that is optimized on the input network at each iteration. Second, we apply our method to annotation, verification, and expansion of a set of regulatory connections found by the Context Likelihood of Relatedness algorithm.

  10. Optical encryption of multiple three-dimensional objects based on multiple interferences and single-pixel digital holography

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Liu, Qi; Wang, Jun; Wang, Qiong-Hua

    2018-03-01

    We present an optical encryption method for multiple three-dimensional objects based on multiple interferences and single-pixel digital holography. By modifying the Mach–Zehnder interferometer, the interference of the multiple object beams and the single reference beam is used to simultaneously encrypt multiple objects into one ciphertext. During decryption, each three-dimensional object can be decrypted independently without having to decrypt the other objects. Since single-pixel digital holography based on compressive sensing theory is introduced, the amount of encrypted data in this method is effectively reduced. In addition, recording fewer encrypted data can greatly reduce the bandwidth required for network transmission. Moreover, the compressive sensing essentially serves as a secret key that makes an intruder attack invalid, which means that the system is more secure than conventional encryption methods. Simulation results demonstrate the feasibility of the proposed method and show that the system has good security performance. Project supported by the National Natural Science Foundation of China (Grant Nos. 61405130 and 61320106015).

  11. Change Detection of Remote Sensing Images by DT-CWT and MRF

    NASA Astrophysics Data System (ADS)

    Ouyang, S.; Fan, K.; Wang, H.; Wang, Z.

    2017-05-01

    To address the significant loss of high-frequency information during noise reduction and the assumption of pixel independence in change detection of multi-scale remote sensing images, an unsupervised algorithm is proposed based on the combination of the Dual-Tree Complex Wavelet Transform (DT-CWT) and a Markov Random Field (MRF) model. The method first performs a multi-scale decomposition of the difference image by the DT-CWT and extracts the change characteristics in the high-frequency regions using an MRF-based segmentation algorithm. After reconstructing the high-frequency and low-frequency sub-bands of each layer, the method then estimates the final maximum a posteriori (MAP) solution with an iterated conditional modes (ICM) segmentation algorithm based on fuzzy c-means (FCM). Finally, the method fuses the segmentation results of each layer using the proposed fusion rule to obtain the mask of the final change detection result. Experimental results show that the proposed method achieves higher precision and strong robustness.

  12. Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2011-01-01

    A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain gage outputs. Iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation. It simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example is discussed in the paper that illustrates the application of the new method to a realistic simulation of temperature dependent calibration data set of a six component balance.

  13. Determination of the pure silicon monocarbide content of silicon carbide and products based on silicon carbide

    NASA Technical Reports Server (NTRS)

    Prost, L.; Pauillac, A.

    1978-01-01

    Experience has shown that different methods of analysis of SiC products give different results. Methods identified as AFNOR, FEPA, and manufacturer P, currently used to detect SiC, free C, free Si, free Fe, and SiO2 are reviewed. The AFNOR method gives lower SiC content, attributed to destruction of SiC by grinding. Two products sent to independent labs for analysis by the AFNOR and FEPA methods showed somewhat different results, especially for SiC, SiO2, and Al2O3 content, whereas an X-ray analysis showed a SiC content approximately 10 points lower than by chemical methods.

  14. Method for the substantial reduction of quenching effects in luminescence spectrometry

    DOEpatents

    Demas, J.N.; Jones, W.M.; Keller, R.A.

    1987-06-26

    Method for reducing quenching effects in analytical luminescence measurements. Two embodiments of the present invention are described which relate to a form of time resolution based on the amplitudes and phase shifts of modulated emission signals. In the first embodiment, the measured modulated emission signal is substantially independent of sample quenching at sufficiently high frequencies. In the second embodiment, the modulated amplitude and the phase shift between the emission signal and the excitation source are simultaneously measured. Using either method, the observed modulated amplitude may be reduced to its unquenched value. 3 figs.

  15. A new multigroup method for cross-sections that vary rapidly in energy

    DOE PAGES

    Haut, Terry Scot; Ahrens, Cory D.; Jonko, Alexandra; ...

    2016-11-04

    Here, we present a numerical method for solving the time-independent thermal radiative transfer (TRT) equation or the neutron transport (NT) equation when the opacity (cross-section) varies rapidly in frequency (energy) on the microscale ε; ε corresponds to the characteristic spacing between absorption lines or resonances, and is much smaller than the macroscopic frequency (energy) variation of interest. The approach is based on a rigorous homogenization of the TRT/NT equation in the frequency (energy) variable. Discretization of the homogenized TRT/NT equation results in a multigroup-type system, and can therefore be solved by standard methods.

  16. A Group Recommender System for Tourist Activities

    NASA Astrophysics Data System (ADS)

    Garcia, Inma; Sebastia, Laura; Onaindia, Eva; Guzman, Cesar

    This paper introduces a method for giving recommendations of tourist activities to a group of users. This method makes recommendations based on the group tastes, their demographic classification and the places visited by the users in former trips. The group recommendation is computed from individual personal recommendations through the use of techniques such as aggregation, intersection or incremental intersection. This method is implemented as an extension of the e-Tourism tool, which is a user-adapted tourism and leisure application, whose main component is the Generalist Recommender System Kernel (GRSK), a domain-independent taxonomy-driven search engine that manages the group recommendation.
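
    The three combination strategies named above (aggregation, intersection and incremental intersection) can be sketched on placeholder individual scores as follows; the activities, scores and thresholds are made up for illustration and are unrelated to the GRSK implementation.

    # Placeholder individual scores for candidate activities (higher = better match).
    individual = {
        "user_a": {"museum": 0.9, "park": 0.7, "opera": 0.2, "zoo": 0.6},
        "user_b": {"museum": 0.4, "park": 0.8, "opera": 0.9, "zoo": 0.5},
        "user_c": {"museum": 0.8, "park": 0.6, "opera": 0.1, "zoo": 0.7},
    }

    def top(scores, k):
        return [item for item, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

    items = set().union(*(s.keys() for s in individual.values()))

    # Aggregation: rank activities by the sum of individual scores.
    aggregation = sorted(items, key=lambda i: -sum(s[i] for s in individual.values()))

    # Intersection: keep only activities present in everyone's top-k list.
    k = 2
    intersection = set.intersection(*(set(top(s, k)) for s in individual.values()))

    # Incremental intersection: enlarge k until enough shared activities are found.
    needed, k_inc = 2, 1
    while True:
        shared = set.intersection(*(set(top(s, k_inc)) for s in individual.values()))
        if len(shared) >= needed or k_inc >= len(items):
            break
        k_inc += 1

    print("aggregation:", aggregation)
    print("intersection (k=2):", intersection)
    print(f"incremental intersection (k={k_inc}):", shared)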

  17. Optimizing placements of ground-based snow sensors for areal snow cover estimation using a machine-learning algorithm and melt-season snow-LiDAR data

    NASA Astrophysics Data System (ADS)

    Oroza, C.; Zheng, Z.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.

    2016-12-01

    We present a structured, analytical approach to optimize ground-sensor placements based on time-series remotely sensed (LiDAR) data and machine-learning algorithms. We focused on catchments within the Merced and Tuolumne river basins, covered by the JPL Airborne Snow Observatory LiDAR program. First, we used a Gaussian mixture model to identify representative sensor locations in the space of independent variables for each catchment. Multiple independent variables that govern the distribution of snow depth were used, including elevation, slope, and aspect. Second, we used a Gaussian process to estimate the areal distribution of snow depth from the initial set of measurements. This is a covariance-based model that also estimates the areal distribution of model uncertainty based on the independent variable weights and autocorrelation. The uncertainty raster was used to strategically add sensors to minimize model uncertainty. We assessed the temporal accuracy of the method using LiDAR-derived snow-depth rasters collected in water-year 2014. In each area, optimal sensor placements were determined using the first available snow raster for the year. The accuracy in the remaining LiDAR surveys was compared to 100 configurations of sensors selected at random. We found the accuracy of the model from the proposed placements to be higher and more consistent in each remaining survey than the average random configuration. We found that a relatively small number of sensors can be used to accurately reproduce the spatial patterns of snow depth across the basins, when placed using spatial snow data. Our approach also simplifies sensor placement. At present, field surveys are required to identify representative locations for such networks, a process that is labor intensive and provides limited guarantees on the networks' representation of catchment independent variables.
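
    The two-step placement idea can be sketched on synthetic terrain with scikit-learn as below; the independent variables, kernel, length scales and toy snow-depth model are assumptions for illustration, not the configuration used with the Airborne Snow Observatory data.

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Synthetic catchment grid: independent variables and a toy snow-depth field.
    rng = np.random.default_rng(0)
    n = 2000
    elevation = rng.uniform(1500, 3500, n)
    slope = rng.uniform(0, 45, n)
    aspect = rng.uniform(0, 360, n)
    X = np.column_stack([elevation, slope, aspect])
    snow_depth = 0.002 * (elevation - 1500) - 0.01 * slope + rng.normal(0, 0.1, n)

    # Step 1: a Gaussian mixture picks representative sites in feature space;
    # the initial sensors are the grid cells closest to the mixture means.
    gmm = GaussianMixture(n_components=8, random_state=0).fit(X)
    initial = np.array([int(np.argmin(np.linalg.norm(X - mu, axis=1))) for mu in gmm.means_])

    # Step 2: a Gaussian process fitted to the sensed cells estimates snow depth and
    # its uncertainty everywhere; the next sensor goes where the predictive std peaks.
    kernel = RBF(length_scale=[500.0, 10.0, 90.0]) + WhiteKernel(noise_level=0.01)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X[initial], snow_depth[initial])
    mean, std = gp.predict(X, return_std=True)
    next_sensor = int(np.argmax(std))
    print("initial sensor cells:", initial.tolist())
    print("next sensor at cell:", next_sensor, "predicted std:", round(float(std[next_sensor]), 3))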

  18. [Electromagnetic field of the mobile phone base station: case study].

    PubMed

    Bieńkowski, Paweł; Zubrzak, Bartłomiej; Surma, Robert

    2011-01-01

    The paper presents changes in the electromagnetic field intensity in a school building and its surroundings after a mobile phone base station was installed on the roof of the school. The comparison of the EMF intensity measured before the base station was launched (electromagnetic background measurement) and after the start of its operation (two independent control measurements) is discussed. Analyses of the measurements are presented, and the authors also propose a method of adjusting the electromagnetic field distribution in the area of the radiating antennas' side lobes to reduce the EMF level in the proximity of the base station. The presented method involves adjusting the antenna inclination. On the basis of the measurements, it was found that the EMF intensity increased in the building and its surroundings, but the measured values meet the requirements of the Polish law on environmental protection with a wide margin.

  19. A semiparametric graphical modelling approach for large-scale equity selection.

    PubMed

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.

  20. Linear reduction method for predictive and informative tag SNP selection.

    PubMed

    He, Jingwu; Westbrooks, Kelly; Zelikovsky, Alexander

    2005-01-01

    Constructing a complete human haplotype map is helpful when associating complex diseases with their related SNPs. Unfortunately, the number of SNPs is very large and it is costly to sequence many individuals. Therefore, it is desirable to reduce the number of SNPs that must be sequenced to a small number of informative representatives called tag SNPs. In this paper, we propose a new linear algebra-based method for selecting and using tag SNPs. We measure the quality of our tag SNP selection algorithm by comparing actual SNPs with SNPs predicted from the selected linearly independent tag SNPs. Our experiments show that, for sufficiently long haplotypes, knowing only 0.4% of all SNPs, the proposed linear reduction method predicts an unknown haplotype with an error rate below 2%, based on 10% of the population.
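
    The linear-reduction idea can be sketched on a toy haplotype matrix as below; the pivoted-QR choice of tag columns, the simulated linkage structure and the rounding rule are illustrative assumptions rather than the algorithm of the paper.

    import numpy as np
    from scipy.linalg import qr

    # Toy haplotype matrix: 8 underlying SNPs plus 22 near-copies of them (strong
    # linkage disequilibrium), so most columns are nearly linearly dependent.
    rng = np.random.default_rng(0)
    base = rng.integers(0, 2, size=(60, 8)).astype(float)
    copies = base[:, rng.integers(0, 8, size=22)]
    flips = rng.random(copies.shape) < 0.05
    haplotypes = np.hstack([base, np.abs(copies - flips)])

    # Choose tag SNPs with a column-pivoted QR (greedy, picks the most independent columns).
    n_tags = 8
    _, _, pivots = qr(haplotypes, pivoting=True)
    tag_idx = np.sort(pivots[:n_tags])
    rest_idx = np.setdiff1d(np.arange(haplotypes.shape[1]), tag_idx)

    # Fit a linear predictor of the non-tag SNPs from the tag SNPs on known haplotypes,
    # then reconstruct the non-tag SNPs of "new" individuals by rounding the prediction.
    train, test = haplotypes[:40], haplotypes[40:]
    coef, *_ = np.linalg.lstsq(train[:, tag_idx], train[:, rest_idx], rcond=None)
    predicted = np.clip(np.rint(test[:, tag_idx] @ coef), 0, 1)
    error = float(np.mean(predicted != test[:, rest_idx]))
    print(f"{n_tags} tag SNPs out of {haplotypes.shape[1]}; prediction error = {error:.3f}")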
