47 CFR 80.225 - Requirements for selective calling equipment.
Code of Federal Regulations, 2011 CFR
2011-10-01
... selective calling (DSC) equipment and selective calling equipment installed in ship and coast stations, and...-STD, “RTCM Recommended Minimum Standards for Digital Selective Calling (DSC) Equipment Providing... Class ‘D’ Digital Selective Calling (DSC)—Methods of testing and required test results,” March 2003. ITU...
Harris, Scott H.; Johnson, Joel A.; Neiswanger, Jeffery R.; Twitchell, Kevin E.
2004-03-09
The present invention includes systems configured to distribute a telephone call, communication systems, communication methods and methods of routing a telephone call to a customer service representative. In one embodiment of the invention, a system configured to distribute a telephone call within a network includes a distributor adapted to connect with a telephone system, the distributor being configured to connect a telephone call using the telephone system and output the telephone call and associated data of the telephone call; and a plurality of customer service representative terminals connected with the distributor, a selected customer service representative terminal being configured to receive the telephone call and the associated data, the distributor and the selected customer service representative terminal being configured to synchronize application of the telephone call and associated data from the distributor to the selected customer service representative terminal.
Elyasigomari, V; Lee, D A; Screen, H R C; Shaheed, M H
2017-03-01
For each cancer type, only a few genes are informative. Due to the so-called 'curse of dimensionality' problem, the gene selection task remains a challenge. To overcome this problem, we propose a two-stage gene selection method called MRMR-COA-HS. In the first stage, minimum redundancy maximum relevance (MRMR) feature selection is used to select a subset of relevant genes. The selected genes are then fed into a wrapper setup that applies a new evolutionary algorithm, COA-HS, with a support vector machine as the classifier. The method was applied to four microarray datasets, and the performance was assessed by the leave-one-out cross-validation method. Comparative performance assessment of the proposed method with other evolutionary algorithms suggested that the proposed algorithm significantly outperforms other methods, selecting fewer genes while maintaining the highest classification accuracy. The functions of the selected genes were further investigated, and it was confirmed that the selected genes are biologically relevant to each cancer type. Copyright © 2017. Published by Elsevier Inc.
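The two-stage idea can be sketched in miniature; this is an illustration on synthetic data, using a greedy, correlation-penalized MRMR and a simple forward search in place of the paper's COA-HS optimizer:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=40, n_features=50, n_informative=5,
                           random_state=0)

# Stage 1: greedy MRMR -- maximize relevance (MI with the class label)
# minus redundancy (mean |correlation| with already-selected genes).
relevance = mutual_info_classif(X, y, random_state=0)
selected = [int(np.argmax(relevance))]
while len(selected) < 10:
    scores = []
    for j in range(X.shape[1]):
        if j in selected:
            scores.append(-np.inf)
            continue
        redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                              for s in selected])
        scores.append(relevance[j] - redundancy)
    selected.append(int(np.argmax(scores)))

# Stage 2: wrapper -- score candidate subsets by LOOCV accuracy of an SVM.
# (The paper evolves subsets with COA-HS; a forward search stands in here.)
def loocv_accuracy(cols):
    return cross_val_score(SVC(), X[:, cols], y, cv=LeaveOneOut()).mean()

best = [selected[0]]
for j in selected[1:]:
    if loocv_accuracy(best + [j]) >= loocv_accuracy(best):
        best = best + [j]

print(len(best), loocv_accuracy(best))
```

The MRMR scoring variant (mutual information for relevance, absolute correlation for redundancy) is one common formulation, not necessarily the exact one used in the paper.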
The effect of call libraries and acoustic filters on the identification of bat echolocation.
Clement, Matthew J; Murray, Kevin L; Solick, Donald I; Gruver, Jeffrey C
2014-09-01
Quantitative methods for species identification are commonly used in acoustic surveys for animals. While various identification models have been studied extensively, there has been little study of methods for selecting calls prior to modeling or methods for validating results after modeling. We obtained two call libraries with a combined 1556 pulse sequences from 11 North American bat species. We used four acoustic filters to automatically select and quantify bat calls from the combined library. For each filter, we trained a species identification model (a quadratic discriminant function analysis) and compared the classification ability of the models. In a separate analysis, we trained a classification model using just one call library. We then compared a conventional model assessment that used the training library against an alternative approach that used the second library. We found that filters differed in the share of known pulse sequences that were selected (68 to 96%), the share of non-bat noises that were excluded (37 to 100%), their measurement of various pulse parameters, and their overall correct classification rate (41% to 85%). Although the top two filters did not differ significantly in overall correct classification rate (85% and 83%), rates differed significantly for some bat species. In our assessment of call libraries, overall correct classification rates were significantly lower (15% to 23% lower) when tested on the second call library instead of the training library. Well-designed filters obviated the need for subjective and time-consuming manual selection of pulses. Accordingly, researchers should carefully design and test filters and include adequate descriptions in publications. Our results also indicate that it may not be possible to extend inferences about model accuracy beyond the training library. 
If so, the accuracy of acoustic-only surveys may be lower than commonly reported, which could affect ecological understanding or management decisions based on acoustic surveys. PMID:25535563
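A quadratic discriminant function analysis of pulse parameters, as used in this study, can be sketched as follows; the species means and pulse parameters below are hypothetical, not values drawn from the call libraries:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical pulse parameters for three bat "species": each row is one
# pulse, [minimum frequency (kHz), maximum frequency (kHz), duration (ms)].
def simulate(mean, n=100):
    return rng.normal(mean, [2.0, 4.0, 0.8], size=(n, 3))

X = np.vstack([simulate([22, 60, 9]),    # species A: low, long pulses
               simulate([40, 80, 4]),    # species B
               simulate([45, 95, 3])])   # species C
y = np.repeat([0, 1, 2], 100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

qda = QuadraticDiscriminantAnalysis().fit(X_train, y_train)
rate = qda.score(X_test, y_test)  # overall correct classification rate
print(f"correct classification rate: {rate:.2f}")
```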
Selective object encryption for privacy protection
NASA Astrophysics Data System (ADS)
Zhou, Yicong; Panetta, Karen; Cherukuri, Ravindranath; Agaian, Sos
2009-05-01
This paper introduces a new recursive sequence called the truncated P-Fibonacci sequence, its corresponding binary code called the truncated Fibonacci p-code and a new bit-plane decomposition method using the truncated Fibonacci p-code. In addition, a new lossless image encryption algorithm is presented that can encrypt a selected object using this new decomposition method for privacy protection. The user has the flexibility (1) to define the object to be protected as an object in an image or in a specific part of the image, a selected region of an image, or an entire image, (2) to utilize any new or existing method for edge detection or segmentation to extract the selected object from an image or a specific part/region of the image, (3) to select any new or existing method for the shuffling process. The algorithm can be used in many different areas such as wireless networking, mobile phone services and applications in homeland security and medical imaging. Simulation results and analysis verify that the algorithm shows good performance in object/image encryption and can withstand plaintext attacks.
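The Fibonacci p-numbers underlying such p-codes, and a greedy bit-plane decomposition over them, can be sketched as follows. This shows only the standard recurrence F(n) = F(n-1) + F(n-p-1) and greedy coding; the paper's truncated variant is not reproduced:

```python
def fib_p_sequence(p, limit):
    """Fibonacci p-numbers: F(n) = F(n-1) + F(n-p-1), seeded with ones."""
    seq = [1] * (p + 1)
    while seq[-1] + seq[-(p + 1)] <= limit:
        seq.append(seq[-1] + seq[-(p + 1)])
    return sorted(set(seq))          # distinct bit-plane weights

def fib_p_decompose(value, p, limit=255):
    """Greedy binary (bit-plane) code of `value` over the p-sequence."""
    basis = fib_p_sequence(p, limit)
    bits = []
    for w in reversed(basis):        # largest weight first
        if w <= value:
            bits.append(1)
            value -= w
        else:
            bits.append(0)
    return basis, bits[::-1]         # bits aligned with ascending basis

basis, bits = fib_p_decompose(200, p=2)
recovered = sum(w for w, b in zip(basis, bits) if b)
print(basis, bits, recovered)
```

Greedy coding terminates because each remainder is strictly smaller than the weight just used, so every pixel value in range decomposes into distinct weights; for p = 1 the weights reduce to the classical Fibonacci numbers.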
ERIC Educational Resources Information Center
Elwood, Bryan C.
This report provides a procedure by which the educational administrator can select from available alternatives the best method for transporting students and can evaluate at intervals the success or failure of the method selected. The report outlines the methodology used to analyze the problem, defines the range of alternatives called the…
ERIC Educational Resources Information Center
Forsblom, Lara; Negrini, Lucio; Gurtner, Jean-Luc; Schumann, Stephan
2016-01-01
In the Swiss vocational education system, which is often called a "Dual System", trainees enter into an apprenticeship contract with a training company. On average, 25% of those contracts are terminated prematurely (PCT). This article examines the relationship between training companies' selection methods and PCTs. The investigation is…
Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology.
Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander
2015-01-01
Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to tune this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with patient outcome as represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and for a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses. PMID:26693250
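A much-simplified sketch of the windowing idea, using plain Pearson correlation in sliding windows instead of the Fourier-based coherence of the actual method, on synthetic ABP/ICP signals (all signal parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1.0                      # one sample per second (assumed)
t = np.arange(3600)

# Synthetic ABP/ICP: positively correlated slow waves appear in the
# second half, mimicking impaired autoregulation (purely illustrative).
slow = np.sin(2 * np.pi * t / 120)
abp = 80 + 5 * slow + rng.normal(0, 1, t.size)
icp = 12 + rng.normal(0, 1, t.size)
icp[1800:] += 3 * slow[1800:]

def windowed_correlation(x, y, win=300, step=60):
    """Pearson correlation of x and y in sliding windows."""
    out = []
    for start in range(0, x.size - win + 1, step):
        seg = slice(start, start + win)
        out.append(np.corrcoef(x[seg], y[seg])[0, 1])
    return np.array(out)

r = windowed_correlation(abp, icp)
# An SCP-like index: fraction of windows flagged as positively correlated.
scp = np.mean(r > 0.5)
print(f"positively correlated windows: {scp:.2f}")
```

Window length, step, and the 0.5 flagging threshold are exactly the kind of parameters the paper proposes to optimize against outcome data.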
Discriminative Projection Selection Based Face Image Hashing
NASA Astrophysics Data System (ADS)
Karabat, Cagatay; Erdogan, Hakan
Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
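The row-selection step can be illustrated with the classical two-class Fisher criterion; the feature dimensions, sample counts, and the exact score below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 64, 16                      # feature length, candidate rows
genuine  = rng.normal(1.0, 0.5, size=(20, d))   # user's face features
impostor = rng.normal(0.0, 0.5, size=(200, d))  # other users (hypothetical)

P = rng.normal(size=(k, d))        # candidate random projection rows

# Fisher criterion per row: separation of genuine vs impostor projections,
# J = (mu_g - mu_i)^2 / (var_g + var_i).
g, i = genuine @ P.T, impostor @ P.T             # projected values
fisher = (g.mean(0) - i.mean(0)) ** 2 / (g.var(0) + i.var(0))

rows = np.argsort(fisher)[::-1][:8]              # keep most discriminative
P_user = P[rows]                                 # user-dependent projection
print(sorted(fisher[rows].round(2), reverse=True))
```

Because the kept rows depend on the enrolled user's own data, the resulting projection is user-dependent, which is the point of the discriminative selection.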
Nondestructive equipment study
NASA Technical Reports Server (NTRS)
1985-01-01
Identification of existing Nondestructive Evaluation (NDE) methods that could be used in a low Earth orbit environment; evaluation of each method with respect to the set of criteria called out in the statement of work; selection of the most promising NDE methods for further evaluation; use of selected NDE methods to test samples of pressure vessel materials in a vacuum; pressure testing of a complex monolithic pressure vessel with known flaws using acoustic emissions in a vacuum; and recommendations for further studies based on analysis and testing are covered.
47 CFR 80.359 - Frequencies for digital selective calling (DSC).
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Frequencies for digital selective calling (DSC... for digital selective calling (DSC). (a) General purpose calling. The following table describes the... Digital Selective-Calling Equipment in the Maritime Mobile Service,” with Annexes 1 through 5, 2004, and...
Hurtado-Chong, Anahí; Joeris, Alexander; Hess, Denise; Blauth, Michael
2017-07-12
A considerable number of clinical studies experience delays, which result in increased duration and costs. In multicentre studies, patient recruitment is among the leading causes of delays. Poor site selection can result in low recruitment and bad data quality. Site selection is therefore crucial for study quality and completion, but currently no specific guidelines are available. Selection of sites adequate to participate in a prospective multicentre cohort study was performed through an open call using a newly developed objective multistep approach. The method is based on use of a network, definition of objective criteria and a systematic screening process. Out of 266 interested sites, 24 were shortlisted and finally 12 sites were selected to participate in the study. The steps in the process included an open call through a network, use of selection questionnaires tailored to the study, evaluation of responses using objective criteria and scripted telephone interviews. At each step, the number of candidate sites was quickly reduced leaving only the most promising candidates. Recruitment and quality of data went according to expectations in spite of the contracting problems faced with some sites. The results of our first experience with a standardised and objective method of site selection are encouraging. The site selection method described here can serve as a guideline for other researchers performing multicentre studies. ClinicalTrials.gov: NCT02297581. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
PROFIL: A Method for the Development of Multimedia.
ERIC Educational Resources Information Center
Koper, Rob
1995-01-01
Describes a dedicated method for the design of multimedia courseware, called PROFIL, which integrates instructional design with software engineering techniques and incorporates media selection in the design methodology. The phases of development are outlined: preliminary investigation, definition, script, technical realization, implementation, and…
Wolde, Mistire; Tarekegn, Getahun; Kebede, Tedla
2018-05-01
Point-of-care glucometer (PoCG) devices play a significant role in self-monitoring of the blood sugar level, particularly in the follow-up of high blood sugar therapeutic response. The aim of this study was to evaluate blood glucose test results obtained with four randomly selected glucometers on diabetes and control subjects versus a standard wet chemistry (hexokinase) method in Addis Ababa, Ethiopia. A prospective cross-sectional study was conducted on 200 randomly selected study participants (100 participants with diabetes and 100 healthy controls). Four randomly selected PoCG devices (CareSens N, DIAVUE Prudential, On Call Extra, i-QARE DS-W) were evaluated against the hexokinase method and the ISO 15197:2003 and ISO 15197:2013 standards. The minimum and maximum blood sugar values were recorded by CareSens N (21 mg/dl) and the hexokinase method (498.8 mg/dl), respectively. The mean sugar values of all PoCG devices except On Call Extra showed significant differences compared with the reference hexokinase method. Meanwhile, all four PoCG devices had a strong positive relationship (>80%) with the reference method (hexokinase). On the other hand, none of the four PoCG devices fulfilled the minimum accuracy requirements set by the ISO 15197:2003 and ISO 15197:2013 standards. In addition, the linear regression analysis revealed that all four selected PoCG devices overestimated the glucose concentrations. Overall, measurements from the four selected PoCG devices correlated poorly with the standard reference method. Therefore, before introducing PoCG devices to the market, there should be a standardized evaluation platform for validation. Further similar large-scale studies on other PoCG devices also need to be undertaken.
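The ISO 15197:2013 accuracy criterion referred to above (roughly: at least 95% of meter readings within ±15 mg/dl of the reference below 100 mg/dl, and within ±15% at or above) can be checked as follows; the readings are hypothetical:

```python
import numpy as np

def iso15197_2013_pass(reference, meter):
    """Share of readings within ISO 15197:2013 limits: +/-15 mg/dl for
    reference values below 100 mg/dl, +/-15 % at or above; at least
    95 % of readings must comply for the device to pass."""
    reference, meter = np.asarray(reference), np.asarray(meter)
    limit = np.where(reference < 100, 15.0, 0.15 * reference)
    within = np.abs(meter - reference) <= limit
    return within.mean(), within.mean() >= 0.95

ref   = [70,  90, 120, 180, 250, 400]
meter = [80, 110, 130, 200, 260, 410]   # hypothetical glucometer readings
share, ok = iso15197_2013_pass(ref, meter)
print(f"{share:.0%} within limits -> {'pass' if ok else 'fail'}")
```

Here the 90 mg/dl reference reading deviates by 20 mg/dl, so only five of six readings comply and the hypothetical device fails.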
47 CFR 80.359 - Frequencies for digital selective calling (DSC).
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Frequencies for digital selective calling (DSC... for digital selective calling (DSC). (a) General purpose calling. The following table describes the calling frequencies for use by authorized ship and coast stations for general purpose DSC. There are three...
Analysis and selection of optimal function implementations in massively parallel computer
Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN
2011-05-31
An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
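The approach (profile each implementation under varying inputs, then generate a dispatcher that calls the fastest one) can be sketched in miniature; the function names and profiling scheme here are illustrative, not the patent's:

```python
import timeit
from bisect import insort

def sort_builtin(xs):
    return sorted(xs)

def sort_insertion(xs):
    out = []
    for x in xs:
        insort(out, x)          # binary-search insertion sort
    return out

IMPLS = {"builtin": sort_builtin, "insertion": sort_insertion}

def build_selector(impls, sizes):
    """Time every implementation at several input sizes and return a
    dispatcher that routes each call to the fastest one for its size."""
    table = {}
    for n in sizes:
        data = list(range(n, 0, -1))
        timings = {name: timeit.timeit(lambda f=f: f(data), number=20)
                   for name, f in impls.items()}
        table[n] = min(timings, key=timings.get)
    def call(xs):
        n = min(table, key=lambda k: abs(k - len(xs)))  # nearest profiled size
        return impls[table[n]](xs)
    return call

fast_sort = build_selector(IMPLS, sizes=[10, 1000])
print(fast_sort([3, 1, 2]))
```

The patent's setting is a massively parallel machine with many input dimensions; this sketch keeps only the core idea of a performance table driving a generated selector.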
Code of Federal Regulations, 2011 CFR
2011-04-01
... 17 Commodity and Securities Exchanges 1 2011-04-01 2011-04-01 false Selected special calls-duties... FUTURES TRADING COMMISSION SPECIAL CALLS § 21.03 Selected special calls-duties of foreign brokers... market, the Commission may issue a call for information from a futures commission merchant, clearing...
Relevant, irredundant feature selection and noisy example elimination.
Lashkia, George V; Anthony, Laurence
2004-04-01
In many real-world situations, the method for computing the desired output from a set of inputs is unknown. One strategy for solving these types of problems is to learn the input-output functionality from examples in a training set. However, in many situations it is difficult to know what information is relevant to the task at hand. Subsequently, researchers have investigated ways to deal with the so-called problem of consistency of attributes, i.e., attributes that can distinguish examples from different classes. In this paper, we first prove that the notion of relevance of attributes is directly related to the consistency of attributes, and show how relevant, irredundant attributes can be selected. We then compare different relevant attribute selection algorithms, and show the superiority of algorithms that select irredundant attributes over those that select relevant attributes. We also show that searching for an "optimal" subset of attributes, which is considered to be the main purpose of attribute selection, is not the best way to improve the accuracy of classifiers. Employing sets of relevant, irredundant attributes improves classification accuracy in many more cases. Finally, we propose a new method for selecting relevant examples, which is based on filtering the so-called pattern frequency domain. By identifying examples that are nontypical in the determination of relevant, irredundant attributes, irrelevant examples can be eliminated prior to the learning process. Empirical results using artificial and real databases show the effectiveness of the proposed method in selecting relevant examples leading to improved performance even on greatly reduced training sets.
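The consistency notion used here (no two examples may agree on the selected attributes yet disagree in class) supports a simple greedy sketch for removing redundant attributes; this is an illustration only, not the authors' algorithm:

```python
def is_consistent(examples, labels, attrs):
    """True if no two examples agree on `attrs` but disagree in class."""
    seen = {}
    for ex, lab in zip(examples, labels):
        key = tuple(ex[a] for a in attrs)
        if seen.setdefault(key, lab) != lab:
            return False
    return True

def irredundant_attrs(examples, labels):
    """Greedily drop every attribute whose removal keeps consistency."""
    attrs = list(range(len(examples[0])))
    for a in sorted(attrs, reverse=True):
        trial = [b for b in attrs if b != a]
        if trial and is_consistent(examples, labels, trial):
            attrs = trial
    return attrs

# Attribute 2 copies attribute 0; attribute 3 is pure noise.
X = [(0, 0, 0, 1), (0, 1, 0, 0), (1, 0, 1, 1), (1, 1, 1, 0)]
y = [0, 1, 1, 0]          # XOR of attributes 0 and 1
print(irredundant_attrs(X, y))   # -> [0, 1]
```

Both relevant attributes survive (neither can be dropped without a class conflict), while the duplicate and the noise attribute are discarded as redundant.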
Dynamic variable selection in SNP genotype autocalling from APEX microarray data.
Podder, Mohua; Welch, William J; Zamar, Ruben H; Tebbutt, Scott J
2006-11-30
Single nucleotide polymorphisms (SNPs) are DNA sequence variations, occurring when a single nucleotide--adenine (A), thymine (T), cytosine (C) or guanine (G)--is altered. Arguably, SNPs account for more than 90% of human genetic variation. Our laboratory has developed a highly redundant SNP genotyping assay consisting of multiple probes with signals from multiple channels for a single SNP, based on arrayed primer extension (APEX). This mini-sequencing method is a powerful combination of a highly parallel microarray with distinctive Sanger-based dideoxy terminator sequencing chemistry. Using this microarray platform, our current genotype calling system (known as SNP Chart) is capable of calling single SNP genotypes by manual inspection of the APEX data, which is time-consuming and exposed to user subjectivity bias. Using a set of 32 Coriell DNA samples plus three negative PCR controls as a training data set, we have developed a fully-automated genotyping algorithm based on simple linear discriminant analysis (LDA) using dynamic variable selection. The algorithm combines separate analyses based on the multiple probe sets to give a final posterior probability for each candidate genotype. We have tested our algorithm on a completely independent data set of 270 DNA samples, with validated genotypes, from patients admitted to the intensive care unit (ICU) of St. Paul's Hospital (plus one negative PCR control sample). Our method achieves a concordance rate of 98.9% with a 99.6% call rate for a set of 96 SNPs. By adjusting the threshold value for the final posterior probability of the called genotype, the call rate reduces to 94.9% with a higher concordance rate of 99.6%. We also reversed the two independent data sets in their training and testing roles, achieving a concordance rate up to 99.8%. The strength of this APEX chemistry-based platform is its unique redundancy having multiple probes for a single SNP. 
Our model-based genotype calling algorithm captures the redundancy in the system by considering all the underlying probe features of a particular SNP, automatically down-weighting any 'bad data' corresponding to image artifacts on the microarray slide or failure of a specific chemistry. In this regard, our method is able to automatically select the probes that work well and reduce the effect of poorly performing probes in a sample-specific manner, for any number of SNPs.
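The thresholded-posterior idea (trade call rate for concordance by refusing low-confidence calls) can be sketched with an ordinary LDA on synthetic two-channel intensities; all data and cluster positions below are hypothetical:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical probe intensities for three genotype clusters (AA, AB, BB).
centers = np.array([[3.0, 0.2], [1.6, 1.6], [0.2, 3.0]])
X = np.vstack([rng.normal(c, 0.45, size=(60, 2)) for c in centers])
y = np.repeat(["AA", "AB", "BB"], 60)

lda = LinearDiscriminantAnalysis().fit(X, y)
post = lda.predict_proba(X).max(axis=1)   # confidence of the best genotype
calls = lda.predict(X)

for threshold in (0.0, 0.9):
    called = post >= threshold            # below threshold -> "no call"
    call_rate = called.mean()
    concordance = (calls[called] == y[called]).mean()
    print(f"t={threshold}: call rate {call_rate:.3f}, "
          f"concordance {concordance:.3f}")
```

Raising the posterior threshold drops ambiguous samples near cluster boundaries, lowering the call rate while raising concordance, the same trade-off the abstract reports (99.6% call rate at 98.9% concordance versus 94.9% at 99.6%).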
Measuring the effectiveness of patient-chosen reminder methods in a private orthodontic practice.
Wegrzyniak, Lauren M; Hedderly, Deborah; Chaudry, Kishore; Bollu, Prashanti
2018-05-01
To evaluate the effectiveness of patient-chosen appointment reminder methods (phone call, e-mail, or SMS text) in reducing no-show rates. This was a retrospective case study that determined the correlation between patient-chosen appointment reminder methods and no-show rates in a private orthodontic practice. This study was conducted in a single office location of a multioffice private orthodontic practice using data gathered in 2015. The subjects were patients who self-selected the appointment reminder method (phone call, e-mail, or SMS text). Patient appointment data were collected over a 6-month period. Patient attendance was analyzed with descriptive statistics to determine any significant differences among patient-chosen reminder methods. There was a total of 1193 appointments with an average no-show rate of 2.43% across the three reminder methods. No statistically significant differences (P = .569) were observed in the no-show rates between the three methods: phone call (3.49%), e-mail (2.68%), and SMS text (1.90%). The electronic appointment reminder methods (SMS text and e-mail) had lower no-show rates compared with the phone call method, with SMS text having the lowest no-show rate of 1.90%. However, since no significant differences were observed between the three patient-chosen reminder methods, providers may want to allow patients to choose their reminder method to decrease no-shows.
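A comparison of no-show proportions across the three methods can be sketched with a chi-squared test of independence; the counts below are hypothetical, chosen only to approximate the reported totals and rates, and do not reproduce the study's exact data or P value:

```python
from scipy.stats import chi2_contingency

# Hypothetical [no-shows, attended] counts per reminder method,
# summing to the reported 1193 appointments.
table = [[14, 387],   # phone call  (~3.5 % no-show)
         [11, 399],   # e-mail      (~2.7 %)
         [ 7, 375]]   # SMS text    (~1.8 %)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```

With differences this small relative to the group sizes, the test does not reject independence, matching the study's conclusion of no significant difference between reminder methods.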
Determination of the optimal number of components in independent components analysis.
Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N
2018-03-01
Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
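The block-splitting idea behind ICA_by_blocks and Random_ICA (components that reappear across independently analysed blocks are probably real) can be sketched as follows. This is a loose reading, not the authors' implementation: here blocks are random halves of the samples and components are paired by correlating mixing-matrix columns:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 2000
t = np.linspace(0, 8, n)
S = np.c_[np.sin(7 * t), np.sign(np.sin(3 * t)), t % 1]  # 3 true sources
X = S @ rng.normal(size=(3, 10))        # mixed into 10 channels
X += 0.01 * rng.normal(size=X.shape)    # small noise so X has full rank

def matched_correlations(X, k, rng):
    """Fit ICA with k ICs on two random halves of the samples and pair
    the mixing-matrix columns by maximum absolute correlation."""
    idx = rng.permutation(len(X))
    halves = [X[idx[: len(X) // 2]], X[idx[len(X) // 2:]]]
    A1, A2 = (FastICA(k, random_state=0, max_iter=1000).fit(h).mixing_
              for h in halves)
    c = np.abs(np.corrcoef(A1.T, A2.T)[:k, k:])
    return np.sort(c.max(axis=1))[::-1]

for k in (2, 3, 5):
    print(k, matched_correlations(X, k, rng).round(2))
```

At the true number of ICs the matched correlations stay high; asking for too many ICs produces extra components that fail to reappear across blocks, which is the signal these validation methods exploit.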
Welch, Allison M; Smith, Michael J; Gerhardt, H Carl
2014-06-01
Genetic variation in sexual displays is crucial for an evolutionary response to sexual selection, but can be eroded by strong selection. Identifying the magnitude and sources of additive genetic variance underlying sexually selected traits is thus an important issue in evolutionary biology. We conducted a quantitative genetics experiment with gray treefrogs (Hyla versicolor) to investigate genetic variances and covariances among features of the male advertisement call. Two energetically expensive traits showed significant genetic variation: call duration, expressed as number of pulses per call, and call rate, represented by its inverse, call period. These two properties also showed significant genetic covariance, consistent with an energetic constraint to call production. Combining the genetic variance-covariance matrix with previous estimates of directional sexual selection imposed by female preferences predicts a limited increase in call duration but no change in call rate despite significant selection on both traits. In addition to constraints imposed by the genetic covariance structure, an evolutionary response to sexual selection may also be limited by high energetic costs of long-duration calls and by preferences that act most strongly against very short-duration calls. Meanwhile, the persistence of these preferences could be explained by costs of mating with males with especially unattractive calls. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.
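The prediction step, combining the genetic variance-covariance (G) matrix with selection gradients, is the multivariate breeder's equation Δz̄ = Gβ. The numbers below are hypothetical, chosen only to reproduce the qualitative pattern described (a limited gain in call duration, no change in call rate):

```python
import numpy as np

# Hypothetical G matrix for the two call traits (additive genetic
# variances on the diagonal, a positive covariance consistent with an
# energetic constraint on calling).
G = np.array([[4.0, 1.8],      # pulses per call (call duration)
              [1.8, 1.0]])     # call period (inverse of call rate)

beta = np.array([0.5, -0.9])   # selection gradients (illustrative only)

# Multivariate breeder's equation: per-generation response = G @ beta.
response = G @ beta
print(response)
```

Despite selection on both traits (both gradients nonzero), the covariance term cancels the response in call period, illustrating how genetic covariance can constrain the evolutionary response.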
Selected Aspects of the eCall Emergency Notification System
NASA Astrophysics Data System (ADS)
Kaminski, Tomasz; Nowacki, Gabriel; Mitraszewska, Izabella; Niezgoda, Michał; Kruszewski, Mikołaj; Kaminska, Ewa; Filipek, Przemysław
2012-02-01
The article describes problems associated with road collision detection for the purpose of the automatic emergency call. At the moment a collision is detected, the eCall device installed in the vehicle automatically contacts the Emergency Notification Centre and sends a set of essential information on the vehicle and the place of the accident. To activate the alarm, information about the deployment of the airbags is not used, because connecting the eCall device to the airbag circuits might interfere with the vehicle’s safety systems. It is therefore necessary to develop a method for detecting a road collision, similar to the one used in airbag systems and based on the signals available from the acceleration sensors.
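A threshold-plus-debounce detector of the kind alluded to (similar in spirit to airbag triggering) might look like this; the sampling rate, threshold, and debounce length are assumptions for illustration, not values from the eCall specification:

```python
import numpy as np

def detect_collision(accel_g, fs, threshold_g=8.0, min_samples=3):
    """Flag a collision when |acceleration| stays above threshold_g for
    at least min_samples consecutive samples (debouncing road bumps).
    Returns the detection time in seconds, or None."""
    above = np.abs(accel_g) > threshold_g
    run = 0
    for i, flag in enumerate(above):
        run = run + 1 if flag else 0
        if run >= min_samples:
            return i / fs
    return None

fs = 100.0                                        # 100 Hz accelerometer
t = np.arange(0, 2, 1 / fs)
signal = 0.3 * np.sin(2 * np.pi * 1.5 * t)        # normal driving
signal[120:130] = 15.0                            # crash pulse at t = 1.2 s
print(detect_collision(signal, fs))
```

Requiring several consecutive over-threshold samples keeps short shocks (potholes, door slams) from triggering a false emergency call.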
Linear reduction method for predictive and informative tag SNP selection.
He, Jingwu; Westbrooks, Kelly; Zelikovsky, Alexander
2005-01-01
Constructing a complete human haplotype map is helpful for associating complex diseases with their related SNPs. Unfortunately, the number of SNPs is very large, and it is costly to sequence many individuals. It is therefore desirable to reduce the number of SNPs that must be sequenced to a small set of informative representatives called tag SNPs. In this paper, we propose a new linear algebra-based method for selecting and using tag SNPs. We measure the quality of our tag SNP selection algorithm by comparing actual SNPs with SNPs predicted from selected linearly independent tag SNPs. Our experiments show that for sufficiently long haplotypes, knowing only 0.4% of all SNPs, the proposed linear reduction method predicts an unknown haplotype with an error rate below 2%, based on a 10% sample of the population.
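The linear-reduction idea can be sketched as greedily collecting linearly independent SNP columns of the haplotype matrix; everything below (the toy matrix and the function name) is illustrative, not the authors' implementation:

```python
import numpy as np

def select_tag_snps(H, tol=1e-8):
    """Greedily pick a maximal set of linearly independent SNP columns.

    H is an (individuals x SNPs) 0/1 haplotype matrix; the returned
    column indices are 'tag' SNPs from which the remaining columns can
    be linearly reconstructed. Illustrative sketch only.
    """
    tags = []
    for j in range(H.shape[1]):
        candidate = H[:, tags + [j]]
        # keep column j only if it increases the rank of the tag set
        if np.linalg.matrix_rank(candidate, tol=tol) > len(tags):
            tags.append(j)
    return tags

# Toy haplotype matrix: the third SNP column duplicates the first,
# so it carries no new information and is not selected as a tag.
H = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 1, 1]])
tags = select_tag_snps(H)
```

Repeated rank computations make this quadratic-plus in the number of SNPs; a production version would maintain an incremental elimination instead.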
SLAMMER: Seismic LAndslide Movement Modeled using Earthquake Records
Jibson, Randall W.; Rathje, Ellen M.; Jibson, Matthew W.; Lee, Yong W.
2013-01-01
This program is designed to facilitate conducting sliding-block analysis (also called permanent-deformation analysis) of slopes in order to estimate slope behavior during earthquakes. The program allows selection from among more than 2,100 strong-motion records from 28 earthquakes and allows users to add their own records to the collection. Any number of earthquake records can be selected using a search interface that selects records based on desired properties. Sliding-block analyses, using any combination of rigid-block (Newmark), decoupled, and fully coupled methods, are then conducted on the selected group of records, and results are compiled in both graphical and tabular form. Simplified methods for conducting each type of analysis are also included.
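The rigid-block (Newmark) analysis the program performs amounts to doubly integrating the ground acceleration in excess of a critical (yield) acceleration. A simplified one-directional sketch, with an invented rectangular pulse rather than a real strong-motion record:

```python
import numpy as np

def newmark_displacement(accel, dt, ac):
    """Rigid-block (Newmark) sliding displacement.

    The block accelerates only while ground acceleration exceeds the
    critical acceleration `ac`, and decelerates at `ac` while it is
    still sliding. Simplified sketch; SLAMMER also offers decoupled
    and fully coupled analyses.
    """
    v = 0.0   # sliding velocity
    d = 0.0   # accumulated permanent displacement
    for a in accel:
        if a > ac or v > 0.0:
            v = max(v + (a - ac) * dt, 0.0)
            d += v * dt
    return d

# Exaggerated toy pulse: 1 s at 3 m/s^2 above a 1 m/s^2 yield
# acceleration, then quiescence while the block decelerates.
record = np.concatenate([np.full(100, 3.0), np.zeros(200)])
perm = newmark_displacement(record, dt=0.01, ac=1.0)
```

For this toy pulse the block slides about 3 m; realistic yield accelerations and records give centimeter-scale displacements.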
Czechoslovak Journal of Physics (Selected Articles),
1983-01-27
Academy of Sciences - Introduction: One of the most widely used methods of electroerosion treatment is the so-called anode-mechanical method, which was... Electroerosion treatment methods are significantly more productive at lower voltages than at high voltages. 3. With pulsed discharges it is easier to achieve... systematic investigation of the physical processes which occur during electroerosion. A divergence of opinion among various authors concerning the
The mechanism of sound production in túngara frogs and its role in sexual selection and speciation.
Ryan, Michael J; Guerra, Mónica A
2014-10-01
Sexual communication can evolve in response to sexual selection, and it can also cause behavioral reproductive isolation between populations and thus drive speciation. Anurans are an excellent system to investigate these links between behavior and evolution because we have detailed knowledge of how neural mechanisms generate behavioral preferences for calls and how these preferences then generate selection on call variation. But we know far less about the physical mechanisms of call production, especially how different laryngeal morphologies generate call variation. Here we review studies of a group of species that differ in the presence of a secondary call component that evolved under sexual selection. We discuss how the larynx produces this call component, and how laryngeal morphology generates sexual selection and can contribute to speciation. Copyright © 2014. Published by Elsevier Ltd.
Xu, Jiucheng; Mu, Huiyu; Wang, Yun; Huang, Fangzhou
2018-01-01
The selection of feature genes with high recognition ability from gene expression profiles has gained great significance in biology. However, most of the existing methods have a high time complexity and poor classification performance. Motivated by this, an effective feature selection method, called supervised locally linear embedding and Spearman's rank correlation coefficient (SLLE-SC2), is proposed, which is based on the concepts of locally linear embedding and correlation coefficient algorithms. Supervised locally linear embedding takes class label information into account and improves the classification performance. Furthermore, Spearman's rank correlation coefficient is used to remove the coexpressed genes. The experimental results obtained on four public tumor microarray datasets illustrate that our method is valid and feasible.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-10
... Trading of Shares of the Horizons S&P 500 Covered Call ETF, Horizons S&P Financial Select Sector Covered Call ETF, and Horizons S&P Energy Select Sector Covered Call ETF Under NYSE Arca Equities Rule 5.2(j)(3... and trade shares (``Shares'') of the Horizons S&P 500 Covered Call ETF, Horizons S&P Financial Select...
Piecewise SALT sampling for estimating suspended sediment yields
Robert B. Thomas
1989-01-01
A probability sampling method called SALT (Selection At List Time) has been developed for collecting and summarizing data on delivery of suspended sediment in rivers. It is based on sampling and estimating yield using a suspended-sediment rating curve for high discharges and simple random sampling for low flows. The method gives unbiased estimates of total yield and...
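Selection At List Time is a probability-proportional-to-size (PPS) scheme. A minimal sketch of PPS selection with a Hansen-Hurwitz estimator (all values invented, and SALT's sequential list-time bookkeeping replaced by ordinary with-replacement sampling):

```python
import random

def pps_total(aux, true_vals, n, seed=0):
    """Hansen-Hurwitz estimate of the total of `true_vals`.

    Units are sampled with probability proportional to the auxiliary
    sizes `aux` (e.g. a rating-curve prediction of sediment flux for
    each period). Toy stand-in for SALT's list-time selection.
    """
    rng = random.Random(seed)
    total_aux = sum(aux)
    p = [a / total_aux for a in aux]              # selection probabilities
    picks = rng.choices(range(len(aux)), weights=p, k=n)
    return sum(true_vals[i] / p[i] for i in picks) / n

# If the auxiliary variable is exactly proportional to the truth, every
# possible sample returns the exact total -- the intuition behind the
# low variance of PPS sampling with a good rating curve.
flux = [5.0, 1.0, 3.0, 7.0]
estimate = pps_total(flux, flux, n=3)
```

With an imperfect rating curve the estimator stays unbiased; only its variance grows with the mismatch between auxiliary and true values.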
Recursive heuristic classification
NASA Technical Reports Server (NTRS)
Wilkins, David C.
1994-01-01
The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.
Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J
2015-03-01
We consider model selection and estimation in a context where there are competing ordinary differential equation (ODE) models, and all the models are special cases of a "full" model. We propose a computationally inexpensive approach that employs statistical estimation of the full model, followed by a combination of a least squares approximation (LSA) and the adaptive Lasso. We show the resulting method, here called the LSA method, to be an (asymptotically) oracle model selection method. The finite sample performance of the proposed LSA method is investigated with Monte Carlo simulations, in which we examine the percentage of times the true ODE model is selected, the efficiency of the parameter estimation compared to simply using the full or true models, and the coverage probabilities of the estimated confidence intervals for the ODE parameters, all of which show satisfactory performance. Our method is also demonstrated by selecting the best predator-prey ODE to model a lynx and hare population dynamical system among some well-known and biologically interpretable ODE models. © 2014, The International Biometric Society.
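Under a simplifying diagonal-covariance assumption (the paper works with the full covariance of the full-model estimator), the LSA + adaptive-Lasso step reduces to soft-thresholding the full-model estimates, with weights that shrink small coefficients exactly to zero:

```python
import numpy as np

def lsa_adaptive_lasso(beta_hat, var_diag, lam):
    """One-step LSA + adaptive Lasso under a diagonal-covariance
    assumption (a simplification of the paper's setup).

    Adaptive weights w_j = 1/|beta_hat_j| make the penalty harsh on
    small full-model coefficients and mild on large ones, so weak
    (likely spurious) terms are set exactly to zero.
    """
    w = 1.0 / np.abs(beta_hat)
    thresh = lam * w * var_diag
    return np.sign(beta_hat) * np.maximum(np.abs(beta_hat) - thresh, 0.0)

# Full-model estimates: two strong effects and one tiny one.
beta_hat = np.array([2.0, -1.5, 0.05])
sel = lsa_adaptive_lasso(beta_hat, var_diag=np.ones(3), lam=0.01)
# sel keeps the two strong coefficients (barely shrunk) and zeroes the
# third, which corresponds to dropping that term from the ODE model.
```

The zero/nonzero pattern of `sel` is the selected model; the paper's oracle property says this pattern recovers the true model asymptotically.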
Mutual information estimation reveals global associations between stimuli and biological processes
Suzuki, Taiji; Sugiyama, Masashi; Kanamori, Takafumi; Sese, Jun
2009-01-01
Background: Although microarray gene expression analysis has become popular, it remains difficult to interpret the biological changes caused by stimuli or variation of conditions. Clustering genes and associating each group with biological functions is an often-used approach, but such methods only detect partial changes within cell processes. Herein, we propose a method for discovering global changes within a cell by associating observed conditions of gene expression with gene functions. Results: To elucidate the association, we introduce a novel feature selection method called Least-Squares Mutual Information (LSMI), which computes mutual information without density estimation and can therefore detect nonlinear associations within a cell. We demonstrate the effectiveness of LSMI through comparison with existing methods. The results of the application to yeast microarray datasets reveal that non-natural stimuli affect various biological processes, whereas others show no significant relation to specific cell processes. Furthermore, we discover that biological processes can be categorized into four types according to their responses to various stimuli: DNA/RNA metabolism, gene expression, protein metabolism, and protein localization. Conclusion: We proposed a novel feature selection method called LSMI and applied it to mining the association between conditions of yeast and biological processes through microarray datasets. LSMI allows us to elucidate the global organization of cellular process control. PMID:19208155
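A plug-in histogram estimator is a simple stand-in for illustrating MI-based feature scoring; note that LSMI itself fits the density ratio directly and avoids density estimation entirely, so this is only a conceptual sketch:

```python
import numpy as np

def hist_mi(x, y, bins=8):
    """Plug-in (histogram) estimate of mutual information, in nats.

    Simplified stand-in for MI-based feature scoring; LSMI instead fits
    the density ratio p(x,y)/(p(x)p(y)) by least squares, which handles
    continuous data without binning.
    """
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
target = rng.normal(size=2000)
informative = target + 0.1 * rng.normal(size=2000)  # strongly dependent
noise = rng.normal(size=2000)                        # independent
mi_inf = hist_mi(informative, target)
mi_noise = hist_mi(noise, target)
```

Ranking features by such scores (higher MI = more informative about the target) is the basic mechanism behind MI-based feature selection; LSMI additionally captures nonlinear dependence reliably in higher dimensions.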
Behavioral Consequences of Kainic Acid Lesions and Fetal Transplants of the Striatum
1984-06-12
Selected sections were also stained with cresyl violet in order to facilitate the visualization of neuronal cytology and morphology. All sections...tendency to mutism and depression with frequent suicidal ideation (Bruyn, 1973). The Westphal variant of HD, also called the rigid-hypokinetic...1978). In situ injections of kainic acid: A new method for selectively lesioning neuronal cell bodies while sparing axons of passage. Journal of
The assessment of biases in the acoustic discrimination of individuals
Šálek, Martin
2017-01-01
Animal vocalizations contain information about individual identity that could potentially be used for the monitoring of individuals. However, the performance of individual discrimination is subject to many biases depending on factors such as the amount of identity information or the methods used. These factors need to be taken into account when comparing results of different studies or selecting the most cost-effective solution for a particular species. In this study, we evaluate several biases associated with the discrimination of individuals. On a large sample of little owl males, we assess how discrimination performance changes with the method of call description, an increasing number of individuals, and the number of calls per male. We also test whether discrimination performance within the whole population can be reliably estimated from a subsample of individuals in a pre-screening study. Assessment of discrimination performance at the level of the individual and at the level of the call led to different conclusions; hence, studies interested in individual discrimination should optimize methods at the level of individuals. Describing calls by their frequency modulation leads to the best discrimination performance. In agreement with our expectations, discrimination performance decreased with population size. Increasing the number of calls per individual linearly increased the discrimination of individuals (but not the discrimination of calls), likely because it allows distinction between individuals with very similar calls. The available pre-screening index does not allow precise estimation of the population size that could be reliably monitored. Overall, projects applying acoustic monitoring at the individual level in a population need to consider limitations regarding the population size that can be reliably monitored and fine-tune their methods according to their needs and limitations. PMID:28486488
A novel feature ranking method for prediction of cancer stages using proteomics data
Saghapour, Ehsan; Sehhati, Mohammadreza
2017-01-01
Proteomic analysis of cancer stages has provided new opportunities for the development of novel, highly sensitive diagnostic tools that help in the early detection of cancer. This paper introduces a new feature ranking approach called FRMT. FRMT is based on the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), which selects the most discriminative proteins from proteomics data for cancer staging. In this approach, the outcomes of 10 feature selection techniques were combined by the TOPSIS method to select the final discriminative proteins from seven different proteomic databases of protein expression profiles. In the proposed workflow, the feature selection methods and the protein expressions are treated as the criteria and alternatives in TOPSIS, respectively. The proposed method was tested on seven different classifier models in a 10-fold cross-validation procedure repeated 30 times on the seven cancer datasets. The results demonstrated the higher stability and superior classification performance of the method in comparison with other methods, and showed that it is less sensitive to the applied classifier. Moreover, the finally selected proteins are informative and have the potential for application in real medical practice. PMID:28934234
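TOPSIS itself is mechanical enough to sketch in a few lines. Below, rows are alternatives (proteins, in FRMT's usage) and columns are criteria (scores from different feature-selection techniques); the numbers are invented toy data, not from the paper:

```python
import numpy as np

def topsis(scores, weights=None, benefit=None):
    """Rank alternatives (rows) by TOPSIS closeness to the ideal.

    Each column (criterion) is vector-normalized and weighted; the
    closeness index is d- / (d+ + d-), where d+ and d- are Euclidean
    distances to the ideal and anti-ideal solutions. Generic sketch of
    the standard method, not the FRMT codebase.
    """
    X = np.asarray(scores, dtype=float)
    m, n = X.shape
    w = np.ones(n) / n if weights is None else np.asarray(weights, float)
    V = w * X / np.linalg.norm(X, axis=0)
    if benefit is None:
        benefit = [True] * n        # all criteria: larger is better
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)   # higher = closer to the ideal

# Three proteins scored by two feature-selection criteria.
closeness = topsis([[0.9, 0.8],
                    [0.1, 0.2],
                    [0.5, 0.5]])
```

Sorting proteins by descending closeness gives the final FRMT-style ranking from which the top discriminative proteins are taken.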
Tárano, Zaida; Carballo, Luisana
2016-05-01
Communal signaling increases the likelihood of acoustic interference and impairs mate choice; consequently, mechanisms of interference avoidance are expected. Adjustment of the timing of the calls between signalers, specifically call alternation, is probably the most efficient strategy. For this reason, in the present study we analyzed call timing in dyads of males of E. johnstonei in six natural assemblages. We addressed whether males entrain their calls with those of other males at the assemblage and if they show selective attention in relation to perceived amplitude of the other males' calls, inter-male distance, or intrinsic call features (call duration, period or dominant frequency). We expected males to selectively attend to closer or louder males and/or to those of higher or similar attractiveness for females than themselves, because those would be their strongest competitors. We found that most males intercalated their calls with those of at least one male. In assemblages of 3 individuals, males seemed to attend to a fixed number of males regardless of their characteristics. In assemblages of more than 3 individuals, the perceived amplitude of the call of the neighboring male was higher, and the call periods of the males were more similar in alternating dyads than in the non-alternating ones. At the proximate level, selective attention based on perceived amplitude may relate to behavioral hearing thresholds. Selective attention based on the similarity of call periods may relate to the properties of the call oscillators controlling calling rhythms. At the ultimate level, selective attention may be related to the likelihood of acoustic competition for females. Copyright © 2016 Elsevier B.V. All rights reserved.
Czechoslovak Journal of Physics (selected articles)
NASA Astrophysics Data System (ADS)
Hermoch, V.; Zitka, B. H.
1983-01-01
One of the most widely used methods of electroerosion treatment is the so-called anode-mechanical method, which uses an electrolyte rather than a dielectric medium. The effect of the short-term pulsed discharge, the effect of the surrounding electrolyte on the behavior of the discharge, and the effect of electromechanical changes on the surface of the electrode on the discharge mechanism were studied.
ERIC Educational Resources Information Center
Boon, Belinda
The method called CREW (Continuous Review, Evaluation, and Weeding) integrates material selection and acquisition, cataloging and processing, and circulation and reference into one ongoing routine that assures that all the necessary indirect services are accomplished in an effective way. This revision of the original guide, published in 1976,…
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Adelman, H. M.
1984-01-01
Orbiting spacecraft such as large space antennas have to maintain a highly accurate shape to operate satisfactorily. Such structures require active and passive controls to maintain an accurate shape under a variety of disturbances. Methods for the optimum placement of control actuators for correcting static deformations are described. In particular, attention is focused on the case where control locations must be selected from a large set of available sites, so that integer programming methods are called for. The effectiveness of three heuristic techniques for obtaining a near-optimal site selection is compared. In addition, efficient reanalysis techniques for the rapid assessment of control effectiveness are presented. Two examples are used to demonstrate the methods: a simple beam structure and a 55-m space-truss parabolic antenna.
A method for selective excitation of Ince-Gaussian modes in an end-pumped solid-state laser
NASA Astrophysics Data System (ADS)
Lei, J.; Hu, A.; Wang, Y.; Chen, P.
2014-12-01
A method for selective excitation of Ince-Gaussian modes is presented. The method is based on the spatial distributions of Ince-Gaussian modes as well as transverse mode selection theory. Significant diffraction loss is introduced into a resonator by placing opaque lines at zero-intensity positions, and this loss allows a specific mode to be excited; we call this method "loss control." We study the method by means of numerical simulation of a half-symmetric laser resonator. The simulated field is represented by its angular spectrum of plane waves, and its changes are calculated with a two-dimensional fast Fourier transform algorithm as it passes through the optical elements and propagates back and forth in the resonator. The output lasing modes of our method have an overlap of over 90% with the target Ince-Gaussian modes. The method will be beneficial to further study of the properties and potential applications of Ince-Gaussian modes.
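The free-space step of such a simulation is the angular-spectrum method: decompose the field into plane waves with a 2-D FFT, advance each component's phase, and transform back. A minimal sketch (the paper's resonator model additionally applies mirrors, apertures, and the opaque-line loss element each round trip):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z via the angular spectrum
    of plane waves, using 2-D FFTs.

    Propagating components acquire a phase exp(i*kz*z); evanescent
    components (negative argument under the square root) decay
    exponentially. Minimal free-space step only.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(arg.astype(complex))
    H = np.exp(1j * kz * z)               # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Sanity check: a uniform plane wave propagates with unchanged amplitude.
out = angular_spectrum_propagate(np.ones((64, 64), dtype=complex),
                                 wavelength=1e-6, dx=10e-6, z=1e-3)
```

In a resonator simulation this step alternates with element masks (mirror phase, aperture, absorbing lines) until the field converges to the lowest-loss mode.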
Humble, Emily; Thorne, Michael A S; Forcada, Jaume; Hoffman, Joseph I
2016-08-26
Single nucleotide polymorphism (SNP) discovery is an important goal of many studies. However, the number of 'putative' SNPs discovered from a sequence resource may not provide a reliable indication of the number that will successfully validate with a given genotyping technology. For this it may be necessary to account for factors such as the method used for SNP discovery and the type of sequence data from which it originates, suitability of the SNP flanking sequences for probe design, and genomic context. To explore the relative importance of these and other factors, we used Illumina sequencing to augment an existing Roche 454 transcriptome assembly for the Antarctic fur seal (Arctocephalus gazella). We then mapped the raw Illumina reads to the new hybrid transcriptome using BWA and BOWTIE2 before calling SNPs with GATK. The resulting markers were pooled with two existing sets of SNPs called from the original 454 assembly using NEWBLER and SWAP454. Finally, we explored the extent to which SNPs discovered using these four methods overlapped and predicted the corresponding validation outcomes for both Illumina Infinium iSelect HD and Affymetrix Axiom arrays. Collating markers across all discovery methods resulted in a global list of 34,718 SNPs. However, concordance between the methods was surprisingly poor, with only 51.0% of SNPs being discovered by more than one method and 13.5% being called from both the 454 and Illumina datasets. Using a predictive modeling approach, we could also show that SNPs called from the Illumina data were on average more likely to successfully validate, as were SNPs called by more than one method. Above and beyond this pattern, predicted validation outcomes were also consistently better for Affymetrix Axiom arrays. Our results suggest that focusing on SNPs called by more than one method could potentially improve validation outcomes.
They also highlight possible differences between alternative genotyping technologies that could be explored in future studies of non-model organisms.
Integrating CALL into the Classroom: The Role of Podcasting in an ESL Listening Strategies Course
ERIC Educational Resources Information Center
O'Brien, Anne; Hegelheimer, Volker
2007-01-01
Despite the increase of teacher preparation programs that emphasize the importance of training teachers to select and develop appropriate computer-assisted language learning (CALL) materials, integration of CALL into classroom settings is still frequently relegated to the use of selected CALL activities to supplement instruction or to provide…
Solder Joint Health Monitoring Testbed
NASA Technical Reports Server (NTRS)
Delaney, Michael M.; Flynn, James; Browder, Mark
2009-01-01
A method of monitoring the health of selected solder joints, called SJ-BIST, has been developed by Ridgetop Group Inc. under a Small Business Innovative Research (SBIR) contract. The primary goal of this research program is to test and validate this method in a flight environment using realistically seeded faults in selected solder joints. An additional objective is to gather environmental data for future development of physics-based and data-driven prognostics algorithms. A test board is being designed using a Xilinx FPGA. These boards will be tested both in flight and on the ground using a shaker table and an altitude chamber.
Evaluation of copy number variation detection for a SNP array platform
2014-01-01
Background: Copy number variations (CNVs) are usually inferred from single nucleotide polymorphism (SNP) arrays by software packages implementing particular algorithms. However, there is no clear understanding of the performance of these packages, so it is difficult to select one or several of them for CNV detection on a SNP array platform. We selected four publicly available software packages designed for CNV calling from Affymetrix SNP arrays: Birdsuite, dChip, Genotyping Console (GTC) and PennCNV. A publicly available dataset generated by array-based comparative genomic hybridization (CGH), with a resolution of 24 million probes per sample, was taken as the "gold standard". Against this standard, the success rate, average stability rate, sensitivity, consistency and reproducibility of the four software packages were assessed. Specifically, we also compared the efficiency of detecting CNVs simultaneously by two, three or all of the software packages with that of a single package. Results: Simply in terms of the quantity of detected CNVs, Birdsuite detected the most while GTC detected the least. We found that Birdsuite and dChip had obvious detection biases, and GTC seemed inferior because of the small number of CNVs it detected. We then investigated the consistency between the calls of each software package and those of the other three: the consistency of dChip was the lowest, while that of GTC was the highest. Compared with the CGH-based CNV results, GTC called the most matching CNVs, with PennCNV-Affy ranking second; in the non-overlapping group, GTC called the fewest CNVs. With regard to the reproducibility of CNV calling, larger CNVs were usually replicated better; PennCNV-Affy showed the best consistency, while Birdsuite showed the poorest.
Conclusion: We found that PennCNV outperformed the other three packages in the sensitivity and specificity of CNV calling. Each calling method has its own limitations and advantages for different data analyses; therefore, optimized calling might be achieved by using multiple algorithms and evaluating the concordance and discordance of SNP array-based CNV calls. PMID:24555668
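A common way to count two callers' CNVs as the same event is reciprocal overlap. A small sketch with invented coordinates (the study's exact matching criterion may differ, and thresholds vary between papers):

```python
def reciprocal_overlap(a, b, frac=0.5):
    """True if intervals a=(start, end) and b=(start, end) overlap by at
    least `frac` of each interval's length -- a common criterion for
    treating two CNV calls as the same event."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    ov = max(0, hi - lo)
    return ov >= frac * (a[1] - a[0]) and ov >= frac * (b[1] - b[0])

def concordant(calls_x, calls_y, frac=0.5):
    """CNV calls from caller X that are matched by some call from Y."""
    return [c for c in calls_x
            if any(reciprocal_overlap(c, d, frac) for d in calls_y)]

# Hypothetical call sets (coordinates invented for illustration):
caller_a = [(100, 200), (500, 700), (900, 950)]
caller_b = [(110, 210), (1000, 1200)]
matched = concordant(caller_a, caller_b)
# Only (100, 200) has a 50% reciprocal-overlap match in caller_b.
```

Consistency between callers, as compared in the study, is then simply the fraction of one caller's events that find a match in another's.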
Sexual selection drives speciation in an Amazonian frog
Boul, K.E.; Funk, W.C.; Darst, C.R.; Cannatella, D.C.; Ryan, M.J.
2007-01-01
One proposed mechanism of speciation is divergent sexual selection, whereby divergence in female preferences and male signals results in behavioural isolation. Despite the appeal of this hypothesis, evidence for it remains inconclusive. Here, we present several lines of evidence that sexual selection is driving behavioural isolation and speciation among populations of an Amazonian frog (Physalaemus petersi). First, sexual selection has promoted divergence in male mating calls and female preferences for calls between neighbouring populations, resulting in strong behavioural isolation. Second, phylogenetic analysis indicates that populations have become fixed for alternative call types several times throughout the species' range, and coalescent analysis rejects genetic drift as a cause for this pattern, suggesting that this divergence is due to selection. Finally, gene flow estimated with microsatellite loci is an average of 30 times lower between populations with different call types than between populations separated by a similar geographical distance with the same call type, demonstrating genetic divergence and incipient speciation. Taken together, these data provide strong evidence that sexual selection is driving behavioural isolation and speciation, supporting sexual selection as a cause for speciation in the wild. © 2006 The Royal Society.
OTG-snpcaller: an optimized pipeline based on TMAP and GATK for SNP calling from ion torrent data.
Zhu, Pengyuan; He, Lingyu; Li, Yaqiao; Huang, Wenpan; Xi, Feng; Lin, Lin; Zhi, Qihuan; Zhang, Wenwei; Tang, Y Tom; Geng, Chunyu; Lu, Zhiyuan; Xu, Xun
2014-01-01
Because the new Proton platform from Life Technologies produced markedly different data from those of the Illumina platform, the conventional Illumina data analysis pipeline could not be used directly. We developed an optimized SNP calling method using TMAP and GATK (OTG-snpcaller). This method combined our own optimized processes, Remove Duplicates According to AS Tag (RDAST) and Alignment Optimize Structure (AOS), together with TMAP and GATK, to call SNPs from Proton data. We sequenced four sets of exomes captured by Agilent SureSelect and NimbleGen SeqCap EZ Kit, using Life Technology's Ion Proton sequencer. Then we applied OTG-snpcaller and compared our results with the results from Torrent Variants Caller. The results indicated that OTG-snpcaller can reduce both false positive and false negative rates. Moreover, we compared our results with Illumina results generated by GATK best practices, and we found that the results of these two platforms were comparable. The good performance in variant calling using GATK best practices can be primarily attributed to the high quality of the Illumina sequences.
Task Based Language Teaching: Development of CALL
ERIC Educational Resources Information Center
Anwar, Khoirul; Arifani, Yudhi
2016-01-01
The dominant complexities of English teaching in Indonesia are about limited development of teaching methods and materials which still cannot optimally reflect students' needs (in particular of how to acquire knowledge and select the most effective learning models). This research is to develop materials with complete task-based activities by using…
Confidence Wagering during Mathematics and Science Testing
ERIC Educational Resources Information Center
Jack, Brady Michael; Liu, Chia-Ju; Chiu, Hoan-Lin; Shymansky, James A.
2009-01-01
This proposal presents the results of a case study involving five 8th grade Taiwanese classes, two mathematics and three science classes. These classes used a new method of testing called confidence wagering. This paper advocates the position that confidence wagering can predict the accuracy of a student's test answer selection during…
Digital transcriptome profiling using selective hexamer priming for cDNA synthesis.
Armour, Christopher D; Castle, John C; Chen, Ronghua; Babak, Tomas; Loerch, Patrick; Jackson, Stuart; Shah, Jyoti K; Dey, John; Rohl, Carol A; Johnson, Jason M; Raymond, Christopher K
2009-09-01
We developed a procedure for the preparation of whole transcriptome cDNA libraries depleted of ribosomal RNA from only 1 µg of total RNA. The method relies on a collection of short, computationally selected oligonucleotides, called 'not-so-random' (NSR) primers, to obtain full-length, strand-specific representation of nonribosomal RNA transcripts. In this study we validated the technique by profiling human whole brain and universal human reference RNA using ultra-high-throughput sequencing.
Gerhardt, H Carl; Brooks, Robert
2009-10-01
Even simple biological signals vary in several measurable dimensions. Understanding their evolution requires, therefore, a multivariate understanding of selection, including how different properties interact to determine the effectiveness of the signal. We combined experimental manipulation with multivariate selection analysis to assess female mate choice on the simple trilled calls of male gray treefrogs. We independently and randomly varied five behaviorally relevant acoustic properties in 154 synthetic calls. We compared the response time of each of 154 females to one of these calls with its response to a standard call that had mean values of the five properties. We found directional and quadratic selection on two properties indicative of the amount of signaling: pulse number and call rate. Canonical rotation of the fitness surface showed that these properties, along with pulse rate, contributed heavily to a major axis of stabilizing selection, a result consistent with univariate studies showing diminishing effects of increasing pulse number well beyond the mean. Spectral properties contributed to a second major axis of stabilizing selection. The single major axis of disruptive selection suggested that a combination of two temporal and two spectral properties with values differing from the mean should be especially attractive.
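Directional and quadratic selection gradients of this kind are typically estimated by Lande-Arnold regression of relative fitness on standardized traits and their squares. A simplified sketch on simulated data (cross-product terms are omitted for brevity; the study's canonical analysis rotates the full quadratic matrix):

```python
import numpy as np

def selection_gradients(z, w):
    """Lande-Arnold-style estimates of directional (beta) and quadratic
    (gamma, diagonal only) selection gradients.

    Regresses relative fitness w/mean(w) on standardized traits and
    their squares; quadratic gradients are twice the squared-term
    coefficients. Simplified: cross-products are not fitted.
    """
    z = (z - z.mean(axis=0)) / z.std(axis=0)
    w_rel = w / w.mean()
    X = np.column_stack([np.ones(len(w)), z, z**2])
    coef, *_ = np.linalg.lstsq(X, w_rel, rcond=None)
    k = z.shape[1]
    return coef[1:1 + k], 2 * coef[1 + k:]

# Simulated example: fitness rises with trait 0 (directional selection)
# and is concave in trait 1 (stabilizing selection); values invented.
rng = np.random.default_rng(1)
z = rng.normal(size=(500, 2))
w = 1.0 + 0.4 * z[:, 0] - 0.3 * z[:, 1]**2 + 0.05 * rng.normal(size=500)
beta, gamma_diag = selection_gradients(z, w)
```

Positive beta on a trait indicates directional selection; a negative diagonal gamma indicates stabilizing (convexity-reducing) selection on that trait.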
Linear reduction methods for tag SNP selection.
He, Jingwu; Zelikovsky, Alex
2004-01-01
It is widely hoped that constructing a complete human haplotype map will help to associate complex diseases with certain SNPs. Unfortunately, the number of SNPs is huge, and it is very costly to sequence many individuals. Therefore, it is desirable to reduce the number of SNPs that must be sequenced to a considerably smaller number of informative representatives, so-called tag SNPs. In this paper, we propose a new linear-algebra-based method for selecting and using tag SNPs. Our method is purely combinatorial and can be combined with linkage disequilibrium (LD) and block-based methods. We measure the quality of our tag SNP selection algorithm by comparing actual SNPs with SNPs linearly predicted from linearly chosen tag SNPs. We obtain extremely good compression and prediction rates. For example, for long haplotypes (>25,000 SNPs), knowing only 0.4% of all SNPs we predict the entire unknown haplotype within 2% error, with the prediction method based on a 10% sample of the population.
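The linear-reduction idea can be illustrated with a toy sketch: greedily keep the SNP columns that are linearly independent, so every other SNP is a linear combination of the tags. The paper works over the reals; this illustration uses GF(2) (XOR) arithmetic on binary haplotypes to keep the code tiny, and the data are invented:

```python
def select_tag_snps(haplotypes):
    """Greedily keep SNP columns that are linearly independent over
    GF(2); every remaining SNP is then an XOR-combination of the tags
    for all haplotypes in the sample."""
    basis = {}   # pivot bit -> reduced column vector (bitmask over individuals)
    tags = []
    for j in range(len(haplotypes[0])):
        v = 0
        for i, h in enumerate(haplotypes):
            if h[j]:
                v |= 1 << i
        while v:
            p = v.bit_length() - 1
            if p not in basis:
                basis[p] = v
                tags.append(j)
                break
            v ^= basis[p]
    return tags

# toy data: SNP 2 = SNP 0 XOR SNP 1, and SNP 4 duplicates SNP 3
H = [[0, 1, 1, 0, 0],
     [1, 0, 1, 1, 1],
     [1, 1, 0, 0, 0],
     [0, 0, 0, 1, 1]]
print(select_tag_snps(H))  # -> [0, 1, 3]
```

Only the three tag SNPs need to be sequenced; SNPs 2 and 4 are fully predictable from them in this sample.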
Application of machine learning on brain cancer multiclass classification
NASA Astrophysics Data System (ADS)
Panca, V.; Rustam, Z.
2017-07-01
Classification of brain cancer is a multiclass classification problem. One approach to solving it is to first transform it into several binary problems. Microarray gene expression datasets have the two main characteristics of medical data: extremely many features (genes) and only a small number of samples. The application of machine learning to microarray gene expression data mainly consists of two steps: feature selection and classification. In this paper, the features are selected using a method based on the support vector machine recursive feature elimination (SVM-RFE) principle, extended to multiclass classification and called multiple multiclass SVM-RFE. Instead of using only the selected features in a single classifier, this method combines the results of multiple classifiers. The features are divided into subsets and SVM-RFE is applied to each subset. The features selected from each subset are then fed into separate classifiers. This enhances the feature selection ability of each single SVM-RFE. The twin support vector machine (TWSVM) is used as the classifier to reduce computational complexity: whereas an ordinary SVM finds a single optimal hyperplane, TWSVM finds two non-parallel optimal hyperplanes. The experiment on the brain cancer microarray gene expression dataset shows that this method correctly classifies 71.4% of the overall test data, using 100 and 1000 genes selected by the multiple multiclass SVM-RFE feature selection method. Furthermore, the per-class results show that this method classifies the normal and MD classes with 100% accuracy.
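The RFE loop at the heart of SVM-RFE is easy to sketch: train a linear classifier, drop the feature with the smallest absolute weight, and repeat. The sketch below substitutes a toy perceptron for the SVM/TWSVM and uses invented data, so it illustrates the elimination logic only, not the authors' full multiple multiclass pipeline:

```python
def train_linear(X, y, epochs=50, lr=0.1):
    """Toy perceptron standing in for the linear SVM used in SVM-RFE."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * sum(wj * xj for wj, xj in zip(w, xi)) <= 0:
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
    return w

def rfe(X, y, n_keep):
    """Recursive feature elimination: retrain, drop the feature with
    the smallest absolute weight, repeat until n_keep features remain."""
    active = list(range(len(X[0])))
    while len(active) > n_keep:
        Xa = [[row[j] for j in active] for row in X]
        w = train_linear(Xa, y)
        worst = min(range(len(active)), key=lambda k: abs(w[k]))
        active.pop(worst)
    return active

# feature 0 carries the class label; features 1 and 2 are weak noise
X = [[1, 0.1, -0.2], [1, -0.1, 0.2], [-1, 0.1, 0.2], [-1, -0.1, -0.2]]
y = [1, 1, -1, -1]
print(rfe(X, y, 1))  # -> [0]
```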
47 CFR 80.121 - Public coast stations using telegraphy.
Code of Federal Regulations, 2014 CFR
2014-10-01
... use the ship station selective calling number (5 digits) and its assigned coast station identification number (4 digits). Calls to ship stations must employ the following format: Ship station selective call number, repeated twice; “DE”, sent once; and coast station identification number, repeated twice. When...
Surface Modification of Plastic Substrates Using Atomic Hydrogen
NASA Astrophysics Data System (ADS)
Heya, Akira; Matsuo, Naoto
The surface properties of a plastic substrate were changed by a novel surface treatment called atomic hydrogen annealing (AHA). In this method, a plastic substrate is exposed to atomic hydrogen generated by cracking hydrogen molecules on a heated tungsten wire. AHA increased the surface roughness and selectively etched halogen elements (F and Cl); in addition, the plastic surface was chemically reduced. The surface can thus be modified through the recombination reaction of atomic hydrogen, the reduction reaction, and the selective etching of halogen atoms. It is concluded that this method is a promising technique for improving adhesion between inorganic films and plastic substrates at low temperatures.
Inventing and improving ribozyme function: rational design versus iterative selection methods
NASA Technical Reports Server (NTRS)
Breaker, R. R.; Joyce, G. F.
1994-01-01
Two major strategies for generating novel biological catalysts exist. One relies on our knowledge of biopolymer structure and function to aid in the 'rational design' of new enzymes. The other, often called 'irrational design', aims to generate new catalysts, in the absence of detailed physicochemical knowledge, by using selection methods to search a library of molecules for functional variants. Both strategies have been applied, with considerable success, to the remodeling of existing ribozymes and the development of ribozymes with novel catalytic function. The two strategies are by no means mutually exclusive, and are best applied in a complementary fashion to obtain ribozymes with the desired catalytic properties.
Approximate Genealogies Under Genetic Hitchhiking
Pfaffelhuber, P.; Haubold, B.; Wakolbinger, A.
2006-01-01
The rapid fixation of an advantageous allele leads to a reduction in linked neutral variation around the target of selection. The genealogy at a neutral locus in such a selective sweep can be simulated by first generating a random path of the advantageous allele's frequency and then a structured coalescent in this background. Usually the frequency path is approximated by a logistic growth curve. We discuss an alternative method that approximates the genealogy by a random binary splitting tree, a so-called Yule tree that does not require first constructing a frequency path. Compared to the coalescent in a logistic background, this method gives a slightly better approximation for identity by descent during the selective phase and a much better approximation for the number of lineages that stem from the founder of the selective sweep. In applications such as the approximation of the distribution of Tajima's D, the two approximation methods perform equally well. For relevant parameter ranges, the Yule approximation is faster. PMID:17182733
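The random-binary-splitting process behind the Yule approximation is simple to simulate: while k lineages exist, wait an exponential time with rate proportional to k, then split one lineage. This is a generic sketch of that process (parameter names invented), not the authors' sweep-specific implementation:

```python
import random

def yule_split_times(n_leaves, rate=1.0, rng=None):
    """Split times of a Yule (random binary splitting) tree grown from
    one lineage to n_leaves; while k lineages exist, the waiting time
    to the next split is Exp(k * rate)."""
    rng = rng or random.Random(0)
    t, times = 0.0, []
    for k in range(1, n_leaves):
        t += rng.expovariate(k * rate)
        times.append(t)
    return times

# grow a 5-leaf tree: four splits, at strictly increasing times
print(yule_split_times(5))
```

In the sweep application, the leaves play the role of lineages that escape or trace back to the founder of the selective sweep.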
Artificial Intelligence in Cardiology.
Johnson, Kipp W; Torres Soto, Jessica; Glicksberg, Benjamin S; Shameer, Khader; Miotto, Riccardo; Ali, Mohsin; Ashley, Euan; Dudley, Joel T
2018-06-12
Artificial intelligence and machine learning are poised to influence nearly every aspect of the human condition, and cardiology is not an exception to this trend. This paper provides a guide for clinicians on relevant aspects of artificial intelligence and machine learning, reviews selected applications of these methods in cardiology to date, and identifies how cardiovascular medicine could incorporate artificial intelligence in the future. In particular, the paper first reviews predictive modeling concepts relevant to cardiology such as feature selection and frequent pitfalls such as improper dichotomization. Second, it discusses common algorithms used in supervised learning and reviews selected applications in cardiology and related disciplines. Third, it describes the advent of deep learning and related methods collectively called unsupervised learning, provides contextual examples both in general medicine and in cardiovascular medicine, and then explains how these methods could be applied to enable precision cardiology and improve patient outcomes. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
The High Citadel: The Influence of Harvard Law School.
ERIC Educational Resources Information Center
Seligman, Joel
The history of Harvard Law School, a modern critique, and a proposed new model for American legal education are covered in this book by a Harvard Law graduate. Harvard Law School is called the "high citadel" of American legal education. Its admissions procedures, faculty selection, curriculum, teaching methods, and placement practices…
Traditionally, gas chromatography – mass spectrometry (GC-MS) analysis has used a targeted approach called selected ion monitoring (SIM) to quantify specific compounds that may have adverse health effects. Due to method limitations and the constraints of preparing duplicat...
Fantin, Yuri S.; Neverov, Alexey D.; Favorov, Alexander V.; Alvarez-Figueroa, Maria V.; Braslavskaya, Svetlana I.; Gordukova, Maria A.; Karandashova, Inga V.; Kuleshov, Konstantin V.; Myznikova, Anna I.; Polishchuk, Maya S.; Reshetov, Denis A.; Voiciehovskaya, Yana A.; Mironov, Andrei A.; Chulanov, Vladimir P.
2013-01-01
Sanger sequencing is a common method of reading DNA sequences. It is less expensive than high-throughput methods, and it is appropriate for numerous applications including molecular diagnostics. However, sequencing mixtures of similar DNA of pathogens with this method is challenging. This is important because most clinical samples contain such mixtures, rather than pure single strains. The traditional solution is to sequence selected clones of PCR products, a complicated, time-consuming, and expensive procedure. Here, we propose the base-calling with vocabulary (BCV) method that computationally deciphers Sanger chromatograms obtained from mixed DNA samples. The inputs to the BCV algorithm are a chromatogram and a dictionary of sequences that are similar to those we expect to obtain. We apply the base-calling function on a test dataset of chromatograms without ambiguous positions, as well as one with 3–14% sequence degeneracy. Furthermore, we use BCV to assemble a consensus sequence for an HIV genome fragment in a sample containing a mixture of viral DNA variants and to determine the positions of the indels. Finally, we detect drug-resistant Mycobacterium tuberculosis strains carrying frameshift mutations mixed with wild-type bacteria in the pncA gene, and roughly characterize bacterial communities in clinical samples by direct 16S rRNA sequencing. PMID:23382983
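The core of a vocabulary-guided base caller can be sketched as follows: treat the chromatogram as per-position base probabilities and pick the dictionary sequence with the highest total log-probability. This is an illustrative reconstruction with invented toy data, not the BCV implementation itself:

```python
import math

def best_vocabulary_match(chromatogram, vocabulary):
    """Score each candidate sequence by the summed log-probability its
    bases receive in the per-position signal, and return the best one.
    'chromatogram' is a list of dicts mapping base -> normalized peak
    intensity; a small floor avoids log(0) for absent peaks."""
    def score(seq):
        return sum(math.log(pos.get(b, 1e-9))
                   for pos, b in zip(chromatogram, seq))
    return max(vocabulary, key=score)

# toy mixed-template chromatogram: position 1 shows both C and T peaks
chrom = [{"A": 0.97, "C": 0.01, "G": 0.01, "T": 0.01},
         {"A": 0.02, "C": 0.55, "G": 0.03, "T": 0.40},
         {"A": 0.01, "C": 0.01, "G": 0.97, "T": 0.01}]
vocab = ["ACG", "ATG", "AGG"]
print(best_vocabulary_match(chrom, vocab))  # prints ACG
```

The dictionary resolves the ambiguous middle position in favor of the stronger peak among the candidates actually expected in the sample.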
Novel method for in vitro depletion of T cells by monoclonal antibody-targeted photosensitization.
Berki, T; Németh, P
1998-02-01
An immunotargeting method (called photo-immunotargeting) has been developed for selective in vitro cell destruction. The procedure combines the photosensitizing (toxic) effect of light-induced dye-molecules, e.g., hematoporphyrin (HP) and the selective binding ability of monoclonal antibodies (mAb) to cell surface molecules. The photosensitizer HP molecules were covalently attached to monoclonal antibodies (a-Thy-1) recognizing an antigen on the surface of T lymphocytes, and used for T cell destruction. To increase the selectivity of the conventional targeting methods, a physical activation step (local light irradiation) as a second degree of specificity was employed. The HP in conjugated form was sufficient to induce T cell (thymocytes, EL-4 cell line) death after irradiation at 400 nm, at tenfold lower concentration compared to the photosensitizing effect of unbound HP. The selective killing of T lymphocytes (bearing the Thy-1 antigen) in a mixed cell population was demonstrated after a treatment with the phototoxic conjugate and light irradiation. This method can be useful for selective destruction of one population (target cell) in an in vitro heterogeneous cell mixture, e.g., in bone marrow transplants for T cell depletion to avoid graft vs. host reaction.
47 CFR 80.459 - Digital selective calling.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Digital selective calling. 80.459 Section 80.459 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Public Coast Stations Use of Telegraphy § 80.459 Digital selective...
CSM Testbed Development and Large-Scale Structural Applications
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Gillian, R. E.; Mccleary, Susan L.; Lotts, C. G.; Poole, E. L.; Overman, A. L.; Macy, S. C.
1989-01-01
A research activity called Computational Structural Mechanics (CSM) conducted at the NASA Langley Research Center is described. This activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM Testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM Testbed methods development environment is presented and some new numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.
Collective feature selection to identify crucial epistatic variants.
Verma, Shefali S; Lucas, Anastasia; Zhang, Xinyuan; Veturi, Yogasudha; Dudek, Scott; Li, Binglan; Li, Ruowang; Urbanowicz, Ryan; Moore, Jason H; Kim, Dokyoon; Ritchie, Marylyn D
2018-01-01
Machine learning methods have gained popularity and practicality in identifying linear and non-linear effects of variants associated with complex diseases/traits. Detection of epistatic interactions still remains a challenge due to the large number of features and relatively small sample size as input, leading to the so-called "short fat data" problem. The efficiency of machine learning methods can be increased by limiting the number of input features, so it is very important to perform variable selection before searching for epistasis. Many methods have been proposed and evaluated for feature selection, but no single method works best in all scenarios. We demonstrate this by conducting two separate simulation analyses to evaluate the proposed collective feature selection approach, which selects the features in the "union" of the best-performing methods. We explored various parametric, non-parametric, and data mining approaches to perform feature selection, then took the union of the variables selected by the top-performing methods, based on a user-defined percentage of variants chosen from each method, forward to downstream analysis. Our simulation analysis shows that non-parametric data mining approaches, such as MDR, may work best under one simulation criterion for the high-effect-size (penetrance) datasets, while non-parametric methods designed for feature selection, such as Ranger and gradient boosting, work best under other simulation criteria. Thus, a collective approach proves more beneficial for selecting variables with epistatic effects, including in low-effect-size datasets and across different genetic architectures.
Following this, we applied our proposed collective feature selection approach to select the top 1% of variables to identify potential interacting variables associated with Body Mass Index (BMI) in ~ 44,000 samples obtained from Geisinger's MyCode Community Health Initiative (on behalf of DiscovEHR collaboration). In this study, we were able to show that selecting variables using a collective feature selection approach could help in selecting true positive epistatic variables more frequently than applying any single method for feature selection via simulation studies. We were able to demonstrate the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
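The union step described above reduces to a few lines. The method names and scores below are hypothetical placeholders; the real study ranked genome-wide variants with MDR, Ranger, gradient boosting, and other methods before taking the top 1%:

```python
def collective_select(rankings, top_frac=0.01):
    """Union of the top-scoring fraction of variables from each
    feature selection method; 'rankings' maps method name to a dict
    of variable -> importance score."""
    selected = set()
    for scores in rankings.values():
        k = max(1, int(len(scores) * top_frac))
        top = sorted(scores, key=scores.get, reverse=True)[:k]
        selected.update(top)
    return selected

# hypothetical importance scores from two methods over four variables
rankings = {"MDR":    {"a": 3, "b": 2, "c": 1, "d": 0},
            "Ranger": {"a": 0, "b": 1, "c": 3, "d": 2}}
print(collective_select(rankings, top_frac=0.25))  # union of per-method top 25%
```

Because each method ranks a different variable first, the union keeps both, which is exactly why the collective approach recovers more true epistatic variables than any single ranker.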
Realization of guitar audio effects using methods of digital signal processing
NASA Astrophysics Data System (ADS)
Buś, Szymon; Jedrzejewski, Konrad
2015-09-01
The paper is devoted to the possibilities of realizing guitar audio effects by means of digital signal processing. As a result of this research, selected audio effects suited to the specifics of guitar sound were implemented in a real-time system called the Digital Guitar Multi-effect. Before implementation in the system, the selected effects were investigated using a dedicated application with a graphical user interface created in the Matlab environment. In the second stage, the real-time system, based on a microcontroller and an audio codec, was designed and built. The system performs the audio effects on the output signal of an electric guitar.
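Two classic building blocks of such guitar effects, soft-clipping distortion and a feedback delay line, can be sketched in a few lines. These are generic textbook effects for illustration; the paper does not specify which algorithms the Digital Guitar Multi-effect uses:

```python
import math

def soft_clip(samples, drive=5.0):
    """Overdrive-style waveshaper: tanh soft clipping, normalized so a
    full-scale input maps to full scale."""
    return [math.tanh(drive * s) / math.tanh(drive) for s in samples]

def delay(samples, delay_samples, feedback=0.4, mix=0.5):
    """Feedback delay line, the building block of echo effects: each
    output blends the dry sample with the delayed, fed-back signal."""
    buf = [0.0] * delay_samples
    out = []
    for i, s in enumerate(samples):
        d = buf[i % delay_samples]
        buf[i % delay_samples] = s + feedback * d
        out.append((1 - mix) * s + mix * d)
    return out

# a unit impulse through a 2-sample delay shows the echo two samples later
print(delay([1.0, 0.0, 0.0, 0.0], 2))  # [0.5, 0.0, 0.5, 0.0]
```

On a microcontroller the same delay line would be a fixed-point circular buffer sized by the sample rate and the desired echo time.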
Evolution of advertisement calls in African clawed frogs
Tobias, Martha L.; Evans, Ben J.; Kelley, Darcy B.
2014-01-01
Summary For most frogs, advertisement calls are essential for reproductive success, conveying information on species identity, male quality, sexual state and location. While the evolutionary divergence of call characters has been examined in a number of species, the relative impacts of genetic drift or natural and sexual selection remain unclear. Insights into the evolutionary trajectory of vocal signals can be gained by examining how advertisement calls vary in a phylogenetic context. Evolution by genetic drift would be supported if more closely related species express more similar songs. Conversely, a poor correlation between evolutionary history and song expression would suggest evolution shaped by natural or sexual selection. Here, we measure seven song characters in 20 described and two undescribed species of African clawed frogs (genera Xenopus and Silurana) and four populations of X. laevis. We identify three call types — click, burst and trill — that can be distinguished by click number, call rate and intensity modulation. A fourth type is biphasic, consisting of two of the above. Call types vary in complexity from the simplest, a click, to the most complex, a biphasic call. Maximum parsimony analysis of variation in call type suggests that the ancestral type was of intermediate complexity. Each call type evolved independently more than once and call type is typically not shared by closely related species. These results indicate that call type is homoplasious and has low phylogenetic signal. We conclude that the evolution of call type is not due to genetic drift, but is under selective pressure. PMID:24723737
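The parsimony reasoning behind the homoplasy conclusion can be illustrated with Fitch's small-parsimony algorithm, which counts the minimum number of call-type changes a tree requires. The topology and character states below are hypothetical, not the published Xenopus/Silurana phylogeny:

```python
def fitch(tree, states):
    """Fitch small-parsimony pass: return (possible ancestral state set,
    minimum number of changes) for a nested-tuple binary tree whose
    leaves are keys into 'states'."""
    if isinstance(tree, str):          # leaf: observed call type
        return {states[tree]}, 0
    left, right = tree
    ls, lc = fitch(left, states)
    rs, rc = fitch(right, states)
    inter = ls & rs
    if inter:
        return inter, lc + rc
    return ls | rs, lc + rc + 1        # a state change is forced here

# hypothetical topology and call types for four species
calls = {"sp1": "click", "sp2": "trill", "sp3": "trill", "sp4": "burst"}
tree = (("sp1", "sp2"), ("sp3", "sp4"))
root_states, changes = fitch(tree, calls)
print(root_states, changes)  # {'trill'} 2
```

When sister species carry different call types, each such pair forces a change, and a high change count relative to the number of types is the signature of homoplasy and low phylogenetic signal.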
Fringe-period selection for a multifrequency fringe-projection phase unwrapping method
NASA Astrophysics Data System (ADS)
Zhang, Chunwei; Zhao, Hong; Jiang, Kejian
2016-08-01
The multi-frequency fringe-projection phase unwrapping method (MFPPUM) is a typical phase unwrapping algorithm for fringe projection profilometry. It has the advantage of accomplishing phase unwrapping correctly even in the presence of surface discontinuities. If the fringe frequency ratio of the MFPPUM is too large, however, fringe order error (FOE) may be triggered, and FOE results in phase unwrapping error. Ideally, phase unwrapping should remain correct while the fewest possible sets of lower-frequency fringe patterns are used. To achieve this goal, this paper defines a parameter called fringe order inaccuracy (FOI), theoretically analyzes the dominant factors that may induce FOE, proposes a method to optimally select the fringe periods for the MFPPUM with the aid of FOI, and presents experiments that investigate the impact of the dominant factors on phase unwrapping and demonstrate the validity of the proposed method. Some novel phenomena are revealed by these experiments. The proposed method helps to optimally select the fringe periods and to detect phase unwrapping error for the MFPPUM.
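The two-frequency step that the fringe-period selection protects can be sketched as follows: the already-unwrapped low-frequency phase predicts the fringe order of the wrapped high-frequency phase. This is the standard temporal-unwrapping relation, shown with invented numbers, not the paper's FOI-based selection procedure:

```python
import math

def unwrap_with_reference(phi_high, phi_low_unwrapped, ratio):
    """Two-frequency temporal phase unwrapping: use the unwrapped
    lower-frequency phase to compute the fringe order k of the wrapped
    high-frequency phase ('ratio' = f_high / f_low). Noise that pushes
    the term inside round() past +/-0.5 is exactly a fringe order error
    (FOE), which is why large ratios are risky."""
    k = round((ratio * phi_low_unwrapped - phi_high) / (2 * math.pi))
    return phi_high + 2 * math.pi * k

# wrapped high-frequency phase of 1.0 rad whose true phase is 1.0 + 8*pi,
# observed with a frequency ratio of 8 between the two fringe sets
phi_true = 1.0 + 8 * math.pi
phi_low = phi_true / 8
print(unwrap_with_reference(1.0, phi_low, 8))  # recovers phi_true
```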
Sińczuk-Walczak, H
1995-01-01
A clinical picture of selected cases diagnosed with, or suspected of, chronic poisoning by organic solvents such as trichloroethylene (TRI), tetrachloroethylene (PER), and carbon disulfide (CS2) is presented. Based on examples of diagnosed neurological syndromes, some diagnostic and certification issues concerning occupational diseases of the nervous system are analysed. The essential problems discussed include the objective assessment of patients' complaints, the differentiation between occupational diseases and so-called idiopathic diseases of the nervous system, and the selection of appropriate diagnostic methods to confirm or exclude these diseases.
Distillation of Greenberger-Horne-Zeilinger states by selective information manipulation.
Cohen, O; Brun, T A
2000-06-19
Methods for distilling Greenberger-Horne-Zeilinger (GHZ) states from arbitrary entangled tripartite pure states are described. These techniques work for virtually any input state. Each technique has two stages which we call primary and secondary distillations. Primary distillation produces a GHZ state with some probability, so that when applied to an ensemble of systems a certain percentage is discarded. Secondary distillation produces further GHZs from the discarded systems. These protocols are developed with the help of an approach to quantum information theory based on absolutely selective information, which has other potential applications.
Islam, Md Rabiul; Tanaka, Toshihisa; Molla, Md Khademul Islam
2018-05-08
When designing a multiclass motor imagery-based brain-computer interface (MI-BCI), the so-called tangent space mapping (TSM) method, which utilizes the geometric structure of covariance matrices, is an effective technique. This paper introduces a TSM-based method for finding accurate operational frequency bands related to the brain activities associated with MI tasks. A multichannel electroencephalogram (EEG) signal is decomposed into multiple subbands, and tangent features are then estimated on each subband. An effective algorithm based on mutual information analysis is implemented to select subbands containing features capable of improving motor imagery classification accuracy. The features obtained from the selected subbands are combined to form the feature space. A principal component analysis-based approach is employed to reduce the feature dimension, and classification is then accomplished by a support vector machine (SVM). Offline analysis demonstrates that the proposed multiband tangent space mapping with subband selection (MTSMS) approach outperforms state-of-the-art methods, achieving the highest average classification accuracy on all datasets (BCI competition datasets 2a, IIIa, and IIIb, and dataset JK-HH1). The increased classification accuracy of MI tasks with the proposed MTSMS approach can enable effective implementation of BCI. The mutual information-based subband selection method is implemented to tune the operational frequency bands to represent actual motor imagery tasks.
Genetic benefits of a female mating preference in gray tree frogs are context-dependent.
Welch, Allison M
2003-04-01
"Good genes" models of sexual selection predict that male courtship displays can advertise genetic quality and that, by mating with males with extreme displays, females can obtain genetic benefits for their offspring. However, because the relative performance of different genotypes can vary across environments, these genetic benefits may depend on the environmental context; in which case, static mating preferences may not be adaptive. To better understand how selection acts on the preference that female gray tree frogs (Hyla versicolor) express for long advertisement calls, I tested for genetic benefits in two realistic natural environments, by comparing the performance of half-sibling offspring sired by males with long versus short calls. Tadpoles from twelve such maternal half-sibships were raised in enclosures in their natal pond at two densities. In the low-density treatment, offspring of long-call males were larger at metamorphosis than were offspring of short-call males, whereas in the high-density treatment, offspring of males with long calls tended to metamorphose later than offspring of males with short calls. Thus, although the genes indicated by long calls were advantageous under low-density conditions, they were not beneficial under all conditions, suggesting that a static preference for long calls may not be adaptive in all environments. Such a genotype-by-environment interaction in the genetic consequences of mate choice predicts that when the environment is variable, selection may favor plasticity in female preferences or female selectivity among environments to control the conditions experienced by the offspring.
Li, Ziyi; Safo, Sandra E; Long, Qi
2017-07-11
Sparse principal component analysis (PCA) is a popular tool for dimensionality reduction, pattern recognition, and visualization of high dimensional data. It has been recognized that complex biological mechanisms occur through concerted relationships of multiple genes working in networks that are often represented by graphs. Recent work has shown that incorporating such biological information improves feature selection and prediction performance in regression analysis, but there has been limited work on extending this approach to PCA. In this article, we propose two new sparse PCA methods called Fused and Grouped sparse PCA that enable incorporation of prior biological information in variable selection. Our simulation studies suggest that, compared to existing sparse PCA methods, the proposed methods achieve higher sensitivity and specificity when the graph structure is correctly specified, and are fairly robust to misspecified graph structures. Application to a glioblastoma gene expression dataset identified pathways that are suggested in the literature to be related with glioblastoma. The proposed sparse PCA methods Fused and Grouped sparse PCA can effectively incorporate prior biological information in variable selection, leading to improved feature selection and more interpretable principal component loadings and potentially providing insights on molecular underpinnings of complex diseases.
Automatic MeSH term assignment and quality assessment.
Kim, W.; Aronson, A. R.; Wilbur, W. J.
2001-01-01
For computational purposes documents or other objects are most often represented by a collection of individual attributes that may be strings or numbers. Such attributes are often called features and success in solving a given problem can depend critically on the nature of the features selected to represent documents. Feature selection has received considerable attention in the machine learning literature. In the area of document retrieval we refer to feature selection as indexing. Indexing has not traditionally been evaluated by the same methods used in machine learning feature selection. Here we show how indexing quality may be evaluated in a machine learning setting and apply this methodology to results of the Indexing Initiative at the National Library of Medicine. PMID:11825203
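Evaluating indexing "in a machine learning setting" amounts to scoring automatically assigned terms against human-assigned gold terms, as one scores selected features. The sketch below uses hypothetical MeSH-style terms, not actual Indexing Initiative output:

```python
def indexing_quality(assigned, gold):
    """Precision, recall and F1 of automatically assigned index terms
    against the human-assigned gold-standard terms."""
    tp = len(assigned & gold)
    precision = tp / len(assigned) if assigned else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

# hypothetical automatic vs. human term sets for one document
auto = {"Humans", "Neoplasms", "Algorithms"}
gold = {"Humans", "Neoplasms", "Gene Expression Profiling"}
print(indexing_quality(auto, gold))  # precision, recall and F1 all 2/3 here
```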
Castaño-Vinyals, Gemma; Nieuwenhuijsen, Mark J; Moreno, Víctor; Carrasco, Estela; Guinó, Elisabet; Kogevinas, Manolis; Villanueva, Cristina M
2011-01-01
Low participation rates in the selection of population controls are an increasing concern for the validity of case-control studies worldwide. We conducted a pilot study to assess two approaches to recruiting population controls in a study of colorectal cancer, including a face-to-face interview and blood sample collection. In the first approach, persons identified through a population roster were invited to participate through a telephone call by an interviewer telephoning on behalf of our research center. In the second approach, individuals were identified from the lists of selected family practitioners and were telephoned on behalf of the family practitioner. When the second method was used, participation rates increased from 42% to 57% and the percentage of refusals decreased from 47% to 13%. The reasons for refusing to participate did not differ significantly between the two methods. Contact through the family practitioner yielded higher response rates in population controls in the study area. 2010 SESPAS. Published by Elsevier Espana. All rights reserved.
The stochastic control of the F-8C aircraft using the Multiple Model Adaptive Control (MMAC) method
NASA Technical Reports Server (NTRS)
Athans, M.; Dunn, K. P.; Greene, E. S.; Lee, W. H.; Sandel, N. R., Jr.
1975-01-01
The purpose of this paper is to summarize results obtained for the adaptive control of the F-8C aircraft using the so-called Multiple Model Adaptive Control method. The discussion includes the selection of the performance criteria for both the lateral and the longitudinal dynamics, the design of the Kalman filters for different flight conditions, the 'identification' aspects of the design using hypothesis testing ideas, and the performance of the closed loop adaptive system.
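The hypothesis-testing core of MMAC can be sketched as one Bayesian blending step: each model's Kalman filter residual is turned into a likelihood, the model posteriors are updated, and the per-model controls are probability-weighted. This is an illustrative scalar sketch with invented numbers, not the F-8C design:

```python
import math

def mmac_step(residuals, variances, priors, controls):
    """One MMAC blending step: Gaussian likelihood of each model's
    filter residual, posterior update over models, and a
    posterior-weighted blend of the per-model control commands."""
    likes = [math.exp(-r * r / (2 * v)) / math.sqrt(2 * math.pi * v)
             for r, v in zip(residuals, variances)]
    post = [l * p for l, p in zip(likes, priors)]
    z = sum(post)
    post = [p / z for p in post]
    u = sum(p * c for p, c in zip(post, controls))
    return post, u

# model 0 explains the data (small residual), so it dominates the blend
post, u = mmac_step([0.1, 2.0], [1.0, 1.0], [0.5, 0.5], [1.0, -1.0])
print(post, u)
```

Iterating this step over successive measurements is what lets the adaptive system "identify" the active flight condition while it controls.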
Jang, Yikweon; Hahm, Eun Hye; Lee, Hyun-Jung; Park, Soyeon; Won, Yong-Jin; Choe, Jae C.
2011-01-01
Background In a species with a large distribution relative to its dispersal capacity, geographic variation in traits may be explained by gene flow, selection, or the combined effects of both. Studies of genetic diversity using neutral molecular markers show that patterns of isolation by distance (IBD) or barrier effects may be evident for geographic variation at the molecular level in amphibian species. However, selective factors such as habitat, predators, or interspecific interactions may be critical for geographic variation in sexual traits. We studied geographic variation in advertisement calls in the tree frog Hyla japonica to understand patterns of variation in these traits across Korea and provide clues about the underlying forces for variation. Methodology We recorded calls of H. japonica in three breeding seasons from 17 localities, including localities on remote Jeju Island. Call characters analyzed were note repetition rate (NRR), note duration (ND), and dominant frequency (DF), along with snout-to-vent length. Results The findings of a barrier effect on DF and a longitudinal variation in NRR suggest that the open sea between the mainland and Jeju Island and the mountain ranges dominated by the north-south Taebaek Mountains were related to geographic variation in call characters. Furthermore, there was a pattern of IBD in mitochondrial DNA sequences. However, no comparable pattern of IBD was found between geographic distance and call characters. We also failed to detect any effects of habitat or interspecific interaction on call characters. Conclusions Geographic variation in call characters, as well as in mitochondrial DNA sequences, was largely stratified by geographic factors such as distance and barriers in Korean populations of H. japonica.
Although we did not detect effects of habitat or interspecific interaction, some other selective factors such as sexual selection might still be operating on call characters in conjunction with restricted gene flow. PMID:21858061
Micro Computer Feedback Report for the Strategic Leader Development Inventory
1993-05-01
POS or NEG variables CALL CREATE_MEM_DIR ;make a memory directory JC SEL5 ;exit if error CALL SELECT_SCREEN ;display select screen JC SEL4 ;no files in...get keyboard input CMP AL,1Bh ;is it an Esc key? JNZ SEL2 ;if not goto next test G-95 JMP SEL4 ;exit SEL2: CMP AL,0Dh ;is it a pick? JZ SEL3 ;if yes exit loop...position CALL READ_DATE ;get DOS date... CALL FIND_ZERO ;locate 0 in data... JC SEL5 SEL4: CALL RELEASE_MEM_DIR ;release memory block CLC ;clear carry flag
Zhang, Xiaoshuai; Xue, Fuzhong; Liu, Hong; Zhu, Dianwen; Peng, Bin; Wiemels, Joseph L; Yang, Xiaowei
2014-12-10
Genome-wide Association Studies (GWAS) are typically designed to identify phenotype-associated single nucleotide polymorphisms (SNPs) individually using univariate analysis methods. Though providing valuable insights into genetic risks of common diseases, the genetic variants identified by GWAS generally account for only a small proportion of the total heritability for complex diseases. To solve this "missing heritability" problem, we implemented a strategy called integrative Bayesian Variable Selection (iBVS), which is based on a hierarchical model that incorporates an informative prior by considering the gene interrelationship as a network. It was applied here to both simulated and real data sets. Simulation studies indicated that the iBVS method was advantageous in its performance with highest AUC in both variable selection and outcome prediction, when compared to Stepwise and LASSO based strategies. In an analysis of a leprosy case-control study, iBVS selected 94 SNPs as predictors, while LASSO selected 100 SNPs. The Stepwise regression yielded a more parsimonious model with only 3 SNPs. The prediction results demonstrated that the iBVS method had comparable performance with that of LASSO, but better than Stepwise strategies. The proposed iBVS strategy is a novel and valid method for Genome-wide Association Studies, with the additional advantage in that it produces more interpretable posterior probabilities for each variable unlike LASSO and other penalized regression methods.
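The core idea of a network-informed prior can be illustrated with a toy sketch, not the authors' hierarchical model: a SNP's prior inclusion probability is raised when its neighbours in a gene network already carry association evidence. All names, the scoring rule, and the numbers below are invented for the example.

```python
# Toy network-informed variable-selection prior, loosely in the spirit of
# iBVS. Not the paper's model: the rule here is a simple illustrative bonus.

def network_prior(assoc, edges, base=0.1, boost=0.3):
    """assoc: {snp: univariate association score in [0, 1]}
    edges: {snp: set of neighbouring snps in the gene network}."""
    prior = {}
    for snp in assoc:
        nbrs = edges.get(snp, set())
        # average neighbour evidence; 0 if the SNP is isolated in the network
        nbr_evidence = (sum(assoc[n] for n in nbrs) / len(nbrs)) if nbrs else 0.0
        prior[snp] = min(1.0, base + boost * nbr_evidence)
    return prior

def select(assoc, prior, threshold=0.15):
    """Keep SNPs whose evidence-weighted prior passes a cutoff."""
    return sorted(s for s in assoc if assoc[s] * prior[s] > threshold)

assoc = {"rs1": 0.9, "rs2": 0.8, "rs3": 0.2, "rs4": 0.1}
edges = {"rs1": {"rs2"}, "rs2": {"rs1"}, "rs3": set(), "rs4": set()}
prior = network_prior(assoc, edges)
selected = select(assoc, prior)
```

Here the two interconnected, individually associated SNPs reinforce each other's priors and survive selection, while the isolated weak SNPs do not.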
The Role of Calling in Military Engagement
2017-06-01
best people will always be good for mission and organizational success, selecting for calling may be the necessary outlet to satisfy retention goals...the long run, leading to greater life satisfaction and commitment. By selecting for and encouraging calling, we can raise standards in all levels of...R., & Tracey, J. B. (2000). The cost of turnover: Putting a price on the learning curve. The Cornell Hotel and Restaurant Administration Quarterly
BAPJ69-4A: a yeast two-hybrid strain for both positive and negative genetic selection.
Shaffer, Hally Anne; Rood, Michael Kenneth; Kashlan, Badar; Chang, Eileen I-ling; Doyle, Donald Francis; Azizi, Bahareh
2012-10-01
Genetic selection systems, such as the yeast two-hybrid system, are efficient methods to detect protein-protein and protein-ligand interactions. These systems have been further developed to assess negative interactions, such as inhibition, using the URA3 genetic selection marker. Previously, chemical complementation was used to assess positive selection in Saccharomyces cerevisiae. In this work, a new S. cerevisiae strain, called BAPJ69-4A, containing three selective markers ADE2, HIS3, and URA3 as well as the lacZ gene controlled by Gal4 response elements, was developed and characterized using the retinoid X receptor (RXR) and its ligand 9-cis retinoic acid (9cRA). Further characterization was performed using RXR variants and the synthetic ligand LG335. To assess the functionality of the strain, RXR was compared to the parent strain PJ69-4A in adenine, histidine, and uracil selective media. In positive selection, associating partners that lead to cell growth were observed in all media in the presence of ligand, whereas partners that did not associate due to the absence of ligand displayed no growth. Conversely, in negative selection, partners that did not associate in 5-FOA medium did not display cell death due to the lack of expression of the URA3 gene. The creation of the BAPJ69-4A yeast strain provides a high-throughput selection system, called negative chemical complementation, which can be used for both positive and negative selection, providing a fast, powerful tool for discovering novel ligand receptor pairs for applications in drug discovery and protein engineering. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Shan, Jiajia; Wang, Xue; Zhou, Hao; Han, Shuqing; Riza, Dimas Firmanda Al; Kondo, Naoshi
2018-04-01
Synchronous fluorescence spectra combined with multivariate analysis were used to predict flavonoid content in green tea rapidly and nondestructively. This paper presented a new and efficient spectral interval selection method called clustering-based partial least squares (CL-PLS), which selected informative wavelengths by combining the clustering concept with partial least squares (PLS) methods to improve model performance on synchronous fluorescence spectra. The fluorescence spectra of tea samples were obtained, k-means and Kohonen self-organizing map clustering algorithms were used to cluster the full spectra into several clusters, and a sub-PLS regression model was developed on each cluster. Finally, CL-PLS models consisting of gradually selected clusters were built. The correlation coefficient (R) was used to evaluate the prediction performance of the PLS models. In addition, variable influence on projection PLS (VIP-PLS), selectivity ratio PLS (SR-PLS), interval PLS (iPLS), and full-spectrum PLS models were investigated and the results were compared. The results showed that CL-PLS gave the best result for flavonoid prediction using synchronous fluorescence spectra.
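The cluster-then-select idea can be sketched with a toy stand-in: group wavelength channels by a simple 1-D k-means on their correlation with the response, then rank clusters by mean absolute correlation. This is an illustration of the workflow, not the paper's CL-PLS code; the SOM step and the sub-PLS models are omitted, and all data are synthetic.

```python
# Toy sketch of the CL-PLS workflow: cluster wavelength channels, then rank
# clusters by informativeness. Synthetic data; not the authors' implementation.
import random

def kmeans1d(values, k, iters=20, seed=0):
    """Tiny 1-D k-means used to group channels by a summary statistic."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(v - centers[c])) for v in values]
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy) if sx and sy else 0.0

# spectra: rows = samples, columns = wavelength channels
spectra = [[1.0, 0.9, 0.1, 0.2],
           [2.0, 2.1, 0.1, 0.3],
           [3.0, 2.9, 0.2, 0.1]]
y = [1.0, 2.0, 3.0]

channel_corr = [corr([row[j] for row in spectra], y) for j in range(4)]
labels = kmeans1d(channel_corr, k=2)
# rank clusters by mean absolute correlation; informative clusters come first
order = sorted(set(labels),
               key=lambda c: -sum(abs(channel_corr[j]) for j in range(4)
                                  if labels[j] == c) / labels.count(c))
```

In CL-PLS proper, a sub-PLS model would then be fitted per cluster and clusters added in the ranked order.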
Marker-aided genetic divergence analysis in Brassica.
Arunachalam, V; Verma, Shefali; Sujata, V; Prabhu, K V
2005-08-01
Genetic divergence was evaluated in 31 breeding lines from four Brassica species using Mahalanobis' D2. A new method of grouping using D2 values was used to group the 31 lines, based on diagnostic morphological traits (called morphoqts). Isozyme variation of the individual enzymes esterase and glutamate oxaloacetate was quantified by five parameters (called isoqts) developed earlier. Grouping by the same method was also done based on the isoqts, and the grouping by isozymes was compared with that by morphoqts. Overall, there was an agreement of 73% suggesting that isoqts can be used in the choice of parents and also first stage selection of segregants in the laboratory. It was suggested that such an exercise would help to take care of season-bound and field-related problems of breeding. The new isozyme QTs, within lane variance of relative mobility and relative absorption, accounted for about 50% of the total divergence. The utility of the new method and isoqts in cost-effective breeding were highlighted.
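The divergence measure underlying this analysis, Mahalanobis' D2 between two group means using a pooled covariance, can be computed directly for a two-trait toy case. The sketch below is a generic illustration of the statistic, not the authors' grouping method, and the data are invented.

```python
# Mahalanobis D2 between the mean vectors of two groups of 2-trait
# observations, with a pooled covariance matrix inverted by hand (2x2 case).

def mean(v):
    return sum(v) / len(v)

def pooled_cov2(g1, g2):
    """Pooled 2x2 covariance of two groups (lists of (trait1, trait2))."""
    def dev(g):
        m0, m1 = mean([r[0] for r in g]), mean([r[1] for r in g])
        return [(r[0] - m0, r[1] - m1) for r in g]
    d = dev(g1) + dev(g2)
    n = len(g1) + len(g2) - 2  # pooled degrees of freedom
    s00 = sum(a * a for a, _ in d) / n
    s11 = sum(b * b for _, b in d) / n
    s01 = sum(a * b for a, b in d) / n
    return [[s00, s01], [s01, s11]]

def mahalanobis_d2(g1, g2):
    S = pooled_cov2(g1, g2)
    det = S[0][0] * S[1][1] - S[0][1] ** 2
    inv = [[S[1][1] / det, -S[0][1] / det],
           [-S[0][1] / det, S[0][0] / det]]
    d0 = mean([r[0] for r in g1]) - mean([r[0] for r in g2])
    d1 = mean([r[1] for r in g1]) - mean([r[1] for r in g2])
    return d0 * (inv[0][0] * d0 + inv[0][1] * d1) + \
           d1 * (inv[1][0] * d0 + inv[1][1] * d1)

g1 = [(1, 2), (2, 2), (3, 4)]   # e.g. three lines scored on two morphoqts
g2 = [(6, 5), (7, 7), (8, 6)]
d2 = mahalanobis_d2(g1, g2)
```

Larger D2 values indicate greater divergence between the two groups after accounting for trait correlations.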
Bush, Sarah L.; Schul, Johannes
2010-01-01
Background Significance Communication signals that function to bring together the sexes are important for maintaining reproductive isolation in many taxa. Changes in male calls are often attributed to sexual selection, in which female preferences initiate signal divergence. Natural selection can also influence signal traits if calls attract predators or parasitoids, or if calling is energetically costly. Neutral evolution is often neglected in the context of acoustic communication. Methodology/Principal Findings We describe a signal trait that appears to have evolved in the absence of either sexual or natural selection. In the katydid genus Neoconocephalus, calls with a derived pattern in which pulses are grouped into pairs have evolved five times independently. We have previously shown that in three of these species, females require the double pulse pattern for call recognition, and hence the recognition system of the females is also in a derived state. Here we describe the remaining two species and find that although males produce the derived call pattern, females use the ancestral recognition mechanism in which no pulse pattern is required. Females respond equally well to the single and double pulse calls, indicating that the derived trait is selectively neutral in the context of mate recognition. Conclusions/Significance These results suggest that 1) neutral changes in signal traits could be important in the diversification of communication systems, and 2) males rather than females may be responsible for initiating signal divergence. PMID:20805980
A Method for Analyzing Commonalities in Clinical Trial Target Populations
He, Zhe; Carini, Simona; Hao, Tianyong; Sim, Ida; Weng, Chunhua
2014-01-01
ClinicalTrials.gov presents great opportunities for analyzing commonalities in clinical trial target populations to facilitate knowledge reuse when designing eligibility criteria of future trials or to reveal potential systematic biases in selecting population subgroups for clinical research. Towards this goal, this paper presents a novel data resource for enabling such analyses. Our method includes two parts: (1) parsing and indexing eligibility criteria text; and (2) mining common eligibility features and attributes of common numeric features (e.g., A1c). We designed and built a database called “Commonalities in Target Populations of Clinical Trials” (COMPACT), which stores structured eligibility criteria and trial metadata in a readily computable format. We illustrate its use in an example analytic module called CONECT using COMPACT as the backend. Type 2 diabetes is used as an example to analyze commonalities in the target populations of 4,493 clinical trials on this disease. PMID:25954450
Cheng, Chia-Ying; Tsai, Chia-Feng; Chen, Yu-Ju; Sung, Ting-Yi; Hsu, Wen-Lian
2013-05-03
As spectral library searching has received increasing attention for peptide identification, constructing good decoy spectra from the target spectra is the key to correctly estimating the false discovery rate in searching against the concatenated target-decoy spectral library. Several methods have been proposed to construct decoy spectral libraries. Most of them construct decoy peptide sequences and then generate theoretical spectra accordingly. In this paper, we propose a method, called precursor-swap, which constructs decoy spectral libraries directly at the "spectrum level" without generating decoy peptide sequences, by swapping the precursors of two spectra selected according to a very simple rule. Our spectrum-based method does not require additional effort to deal with ion types (e.g., a, b or c ions), fragmentation mechanisms (e.g., CID or ETD), or unannotated peaks, but preserves many spectral properties. The precursor-swap method is evaluated on different spectral libraries, and the obtained decoy ratios show that it is comparable to other methods. Notably, it is efficient in time and memory usage for constructing decoy libraries. A software tool called Precursor-Swap-Decoy-Generation (PSDG) is publicly available for download at http://ms.iis.sinica.edu.tw/PSDG/.
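The spectrum-level swap can be sketched in a few lines. The pairing rule below (swap precursors that differ by more than a tolerance) and the spectrum fields are simplified stand-ins for illustration; the paper's actual selection rule may differ.

```python
# Sketch of spectrum-level decoy generation by precursor swapping.
# Peak lists are kept intact; only precursor m/z values are exchanged.

def precursor_swap(library, min_delta=5.0):
    """library: list of dicts with 'precursor_mz' and 'peaks'.
    Returns decoy spectra with pairwise-exchanged precursors."""
    decoys = [dict(s) for s in library]  # shallow copies; peaks unchanged
    used = set()
    for i in range(len(decoys)):
        if i in used:
            continue
        for j in range(i + 1, len(decoys)):
            if j in used:
                continue
            # illustrative rule: only swap sufficiently different precursors
            if abs(decoys[i]["precursor_mz"] - decoys[j]["precursor_mz"]) >= min_delta:
                decoys[i]["precursor_mz"], decoys[j]["precursor_mz"] = (
                    decoys[j]["precursor_mz"], decoys[i]["precursor_mz"])
                used.update({i, j})
                break
    return decoys

library = [
    {"precursor_mz": 500.2, "peaks": [(100.1, 10.0), (200.3, 5.0)]},
    {"precursor_mz": 807.9, "peaks": [(150.2, 8.0), (300.4, 3.0)]},
]
decoys = precursor_swap(library)
```

Because the fragment peaks are untouched, ion types, fragmentation mechanism, and unannotated peaks carry over to the decoys for free, which is the appeal of the approach.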
2015-01-01
Retinal fundus images are widely used in diagnosing and treating several eye diseases. Prior works using retinal fundus images detected the presence of exudation with the aid of a publicly available dataset and an extensive segmentation process. Though computationally efficient, these works failed to create a diabetic retinopathy feature selection system for transparently diagnosing the disease state. They also did not employ machine learning methods to categorize candidate fundus images by true positive and true negative ratio, nor did they include more detailed feature selection techniques for diabetic retinopathy. To apply machine learning methods and classify candidate fundus images on the basis of a sliding window, a method called Diabetic Fundus Image Recuperation (DFIR) is designed in this paper. The initial phase of the DFIR method selects optic cup features in digital retinal fundus images using a sliding window approach, with which the disease state for diabetic retinopathy is assessed. Feature selection in the DFIR method uses a collection of sliding windows to obtain features based on histogram values; this histogram-based feature selection, aided by a group sparsity non-overlapping function, provides more detailed feature information. Using a support vector model in the second phase, the DFIR method effectively ranks diabetic retinopathy disease levels based on a spiral basis function. The ranking of disease level for each candidate set provides a promising basis for a practical automated diabetic retinopathy diagnosis system. Experimental work on digital fundus images using the DFIR method examines factors such as sensitivity, specificity rate, ranking efficiency, and feature selection time. PMID:25974230
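The sliding-window, histogram-based feature step can be sketched generically. Window size, bin count, and pixel values below are illustrative, and the group-sparsity function and SVM ranking stages are not reproduced; this only shows how per-window normalised histograms are formed.

```python
# Minimal sliding-window histogram feature extraction over a grey-level image
# (2-D list). Illustrative of the DFIR feature step, not its implementation.

def window_histograms(image, win=2, bins=4, max_val=255):
    """Return one normalised histogram per stride-1 window position."""
    h, w = len(image), len(image[0])
    feats = []
    for r in range(h - win + 1):
        for c in range(w - win + 1):
            hist = [0] * bins
            for dr in range(win):
                for dc in range(win):
                    # map grey level to a histogram bin
                    b = min(bins - 1,
                            image[r + dr][c + dc] * bins // (max_val + 1))
                    hist[b] += 1
            total = win * win
            feats.append([v / total for v in hist])
    return feats

image = [[0, 64, 128],
         [64, 128, 192],
         [128, 192, 255]]
feats = window_histograms(image)
```

Each window yields one feature vector; a classifier can then rank candidates from these vectors.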
Drayton, J M; Hunt, J; Brooks, R; Jennions, M D
2007-05-01
If male sexual signalling is honest because it captures genetic variation in condition then traits that are important mate choice cues should be disproportionately affected by inbreeding relative to other traits. To test this, we investigated the effect of brother-sister mating on advertisement calling by male field crickets Teleogryllus commodus. We quantified the effect of one generation of inbreeding on nightly calling effort and five finer-scale aspects of call structure that have been shown to influence attractiveness. We also quantified inbreeding depression on six life history traits and one morphological trait. Inbreeding significantly reduced hatching success, nymph survival and adult lifespan but had no detectable effect on hatching rate, developmental rate or adult body mass. The effect of inbreeding on sexually selected traits was equivocal. There was no decline in calling effort (seconds of sound production/night) by inbred males, but there were highly significant changes in three of five finer-scale call parameters. Sexually selected traits clearly vary in their susceptibility to inbreeding depression.
47 CFR 2.303 - Other forms of identification of stations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... whose signals are being relayed, or by network identification. Broadcasting (television booster.... (b) Digital selective calls will be authorized by the Commission and will be formed by groups of... identification number: 4 digits. (2) Ship station selective call number: 5 digits. (3) Predetermined group of...
Call selection for the Helicopter Emergency Medical Service: implications for ambulance control.
Coats, T J; Newton, A
1994-01-01
The increasing sophistication of pre-hospital care, with paramedics and many types of 'rapid response' units, requires the use of advanced systems of ambulance control. The introduction of call selection by a paramedic in the ambulance control room significantly improved the tasking of the Helicopter Emergency Medical Service. This paper illustrates the need for a system to grade 999 calls, so that the appropriate pre-hospital response can be directed to each patient. PMID:8182675
Context-sensitive trace inlining for Java.
Häubl, Christian; Wimmer, Christian; Mössenböck, Hanspeter
2013-12-01
Method inlining is one of the most important optimizations in method-based just-in-time (JIT) compilers. It widens the compilation scope and therefore allows optimizing multiple methods as a whole, which increases performance. However, if method inlining is used too frequently, the compilation time increases and too much machine code is generated, which has negative effects on performance. Trace-based JIT compilers only compile frequently executed paths, so-called traces, instead of whole methods. This may result in faster compilation, less generated machine code, and better optimized machine code. In previous work, we implemented a trace recording infrastructure and a trace-based compiler for [Formula: see text] by modifying the Java HotSpot VM. Based on this work, we evaluate the effect of trace inlining on performance and the amount of generated machine code. Trace inlining has several major advantages when compared to method inlining. First, trace inlining is more selective than method inlining, because only frequently executed paths are inlined. Second, the recorded traces may capture information about virtual calls, which simplifies inlining. A third advantage is that trace information is context sensitive, so that different method parts can be inlined depending on the specific call site. These advantages allow more aggressive inlining while the amount of generated machine code is still reasonable. We evaluate several inlining heuristics on the benchmark suites DaCapo 9.12 Bach, SPECjbb2005, and SPECjvm2008 and show that our trace-based compiler achieves an up to 51% higher peak performance than the method-based Java HotSpot client compiler. Furthermore, we show that the large compilation scope of our trace-based compiler has a positive effect on other compiler optimizations such as constant folding or null check elimination.
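The selection step at the heart of trace-based compilation, recording executed paths and only treating hot ones as compilation candidates, can be illustrated with a toy recorder. This is a conceptual sketch, not the HotSpot-based infrastructure from the paper; the threshold and trace encoding are invented.

```python
# Toy trace recorder: count executed paths; only paths that become "hot"
# are candidates for compilation/inlining, mirroring the selectivity of
# trace-based JIT compilation.
from collections import Counter

class TraceRecorder:
    def __init__(self, hot_threshold=3):
        self.counts = Counter()
        self.hot_threshold = hot_threshold

    def record(self, trace):
        """trace: tuple of (method, path) steps taken in one execution."""
        self.counts[trace] += 1

    def hot_traces(self):
        """Traces executed often enough to be worth compiling."""
        return {t for t, n in self.counts.items() if n >= self.hot_threshold}

rec = TraceRecorder()
common = (("parse", "fast-path"), ("eval", "int-branch"))
rare = (("parse", "error-path"),)
for _ in range(5):
    rec.record(common)
rec.record(rare)
hot = rec.hot_traces()
```

Only the frequently executed path qualifies, so the rarely taken error path never costs compilation time or machine code.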
[Effect of leader-member exchange on nurses' sense of calling in the workplace].
Zhang, L G; Ma, H L; Wang, Z J; Zhou, Y Y; Jin, T T
2017-12-20
Objective: To investigate the effect of leader-member exchange on nurses' sense of calling in the workplace based on self-determination theory. Methods: A total of 381 nurses were randomly selected from five tertiary general hospitals in Zhejiang Province, China, from October to December 2016. They were surveyed using the Leader-Member Exchange Scale, Job Autonomy Scale, Core Self-Evaluation Scale, and Calling Scale. The mediating effect was tested following standard procedures, and the data were subjected to hierarchical regression analysis. Results: Leader-member exchange was positively correlated with job autonomy, core self-evaluation, and sense of calling (r = 0.471, P < 0.001; r = 0.373, P < 0.001; r = 0.475, P < 0.001); leader-member exchange had a positive predictive effect on job autonomy and sense of calling (β = 0.47, P < 0.001; β = 0.48, P < 0.001); job autonomy had a partial mediating effect on the relationship between leader-member exchange and sense of calling (F = 66.50, P < 0.001); core self-evaluation negatively moderated the positive relationship between leader-member exchange and job autonomy (F = 27.81, P < 0.001). Conclusion: High-quality leader-member exchange enhances the sense of calling by improving staff job autonomy, and core self-evaluation reduces the positive relationship between leader-member exchange and job autonomy.
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-07
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
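The weighted binary matrix sampling step can be illustrated with a rough sketch: each variable carries a sampling weight, random sub-models are drawn from those weights, and the weights are re-estimated from the best-performing sub-models so the variable space shrinks toward informative variables. The objective function below is synthetic and all parameters are invented; this is not the authors' VISSA code.

```python
# Rough sketch of weighted binary matrix sampling (WBMS) with a synthetic
# fitness function standing in for cross-validated model performance.
import random

def wbms_select(n_vars, informative, n_models=200, top_frac=0.1,
                n_rounds=10, seed=1):
    rng = random.Random(seed)
    weights = [0.5] * n_vars  # start with every variable in half the models

    def fitness(mask):
        # synthetic score: reward informative variables, penalise noise
        hit = sum(mask[i] for i in informative)
        noise = sum(mask) - hit
        return hit - 0.25 * noise

    for _ in range(n_rounds):
        models = [[1 if rng.random() < w else 0 for w in weights]
                  for _ in range(n_models)]
        models.sort(key=fitness, reverse=True)
        top = models[:max(1, int(top_frac * n_models))]
        # new weight = frequency of the variable in the best sub-models
        weights = [sum(m[i] for m in top) / len(top) for i in range(n_vars)]
    return weights

weights = wbms_select(n_vars=8, informative=[0, 3])
```

Over the rounds the weights of the informative variables climb toward 1 and the noise variables fade out, which is the "space shrinks and outperforms the previous one" behaviour the abstract describes.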
Integrative Analysis of High-throughput Cancer Studies with Contrasted Penalization
Shi, Xingjie; Liu, Jin; Huang, Jian; Zhou, Yong; Shia, BenChang; Ma, Shuangge
2015-01-01
In cancer studies with high-throughput genetic and genomic measurements, integrative analysis provides a way to effectively pool and analyze heterogeneous raw data from multiple independent studies and outperforms “classic” meta-analysis and single-dataset analysis. When marker selection is of interest, the genetic basis of multiple datasets can be described using the homogeneity model or the heterogeneity model. In this study, we consider marker selection under the heterogeneity model, which includes the homogeneity model as a special case and can be more flexible. Penalization methods have been developed in the literature for marker selection. This study advances from the published ones by introducing the contrast penalties, which can accommodate the within- and across-dataset structures of covariates/regression coefficients and, by doing so, further improve marker selection performance. Specifically, we develop a penalization method that accommodates the across-dataset structures by smoothing over regression coefficients. An effective iterative algorithm, which calls an inner coordinate descent iteration, is developed. Simulation shows that the proposed method outperforms the benchmark with more accurate marker identification. The analysis of breast cancer and lung cancer prognosis studies with gene expression measurements shows that the proposed method identifies genes different from those using the benchmark and has better prediction performance. PMID:24395534
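The contrasted-penalty idea, an ordinary sparsity penalty per dataset plus a term that smooths a marker's coefficients across datasets, can be shown as a direct computation. The functional form below is an illustrative stand-in; the paper's penalty may differ in detail.

```python
# Illustrative "contrasted" penalty: lasso-style sparsity on each dataset's
# coefficients plus a pairwise across-dataset smoothing (contrast) term.

def contrasted_penalty(betas, lam=1.0, gamma=0.5):
    """betas: list over datasets of coefficient lists (same markers)."""
    sparsity = lam * sum(abs(b) for ds in betas for b in ds)
    contrast = 0.0
    n_markers = len(betas[0])
    for j in range(n_markers):
        vals = [ds[j] for ds in betas]
        # squared differences between every pair of datasets for marker j
        contrast += sum((vals[a] - vals[b]) ** 2
                        for a in range(len(vals))
                        for b in range(a + 1, len(vals)))
    return sparsity + gamma * contrast

# two datasets, three markers; marker 0 agrees across datasets, marker 2 differs
betas = [[1.0, 0.0, 2.0],
         [1.0, 0.0, -2.0]]
penalty = contrasted_penalty(betas)
```

A marker whose effects agree across datasets contributes nothing to the contrast term, while a marker with conflicting effects is penalised, which pushes the fitted coefficients toward across-dataset smoothness.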
Shen, Chung-Wei; Chen, Yi-Hau
2018-03-13
We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.
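The within-cluster resampling idea underlying RCIC can be sketched in miniature: draw one observation per cluster so that informative cluster sizes cannot bias an estimate, and average over resamples. The "criterion" below is just a mean standing in for a full model-selection score, and the data are invented.

```python
# Sketch of within-cluster resampling: one observation per cluster per
# resample, averaged over many resamples.
import random

def wcr_estimate(clusters, n_resamples=1000, seed=42):
    """clusters: list of lists of outcomes (one list per subject/cluster)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_resamples):
        sample = [rng.choice(c) for c in clusters]  # one outcome per cluster
        estimates.append(sum(sample) / len(sample))
    return sum(estimates) / len(estimates)

# the big cluster has systematically higher outcomes: an informative
# cluster size inflates the naive pooled mean but not the resampled one
clusters = [[1.0], [1.0], [5.0, 5.0, 5.0, 5.0]]
naive_mean = sum(x for c in clusters for x in c) / sum(len(c) for c in clusters)
wcr_mean = wcr_estimate(clusters)
```

The naive pooled mean over-weights the large cluster, while the within-cluster-resampled mean weights every subject equally, which is why the resampling view is robust to informative cluster size.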
Viallon, Vivian; Banerjee, Onureena; Jougla, Eric; Rey, Grégoire; Coste, Joel
2014-03-01
Looking for associations among multiple variables is a topical issue in statistics due to the increasing amount of data encountered in biology, medicine, and many other domains involving statistical applications. Graphical models have recently gained popularity for this purpose in the statistical literature. In the binary case, however, exact inference is generally very slow or even intractable because of the form of the so-called log-partition function. In this paper, we review various approximate methods for structure selection in binary graphical models that have recently been proposed in the literature and compare them through an extensive simulation study. We also propose a modification of one existing method, that is shown to achieve good performance and to be generally very fast. We conclude with an application in which we search for associations among causes of death recorded on French death certificates. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
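To make the problem setting concrete, here is a deliberately crude sketch of approximate edge selection for binary data: screen candidate edges by the pairwise log-odds ratio and keep the strong ones. This is a stand-in illustrating structure selection, not one of the pseudo-likelihood methods the paper compares.

```python
# Crude edge screening for a binary graphical model via pairwise log-odds
# ratios with a continuity correction. Illustrative only.
import math

def log_odds_ratio(data, i, j, eps=0.5):
    """data: list of binary tuples. eps is a Haldane-style continuity
    correction so empty contingency cells do not blow up."""
    n11 = sum(1 for row in data if row[i] and row[j]) + eps
    n00 = sum(1 for row in data if not row[i] and not row[j]) + eps
    n10 = sum(1 for row in data if row[i] and not row[j]) + eps
    n01 = sum(1 for row in data if not row[i] and row[j]) + eps
    return math.log(n11 * n00 / (n10 * n01))

def screen_edges(data, threshold=1.0):
    p = len(data[0])
    return {(i, j) for i in range(p) for j in range(i + 1, p)
            if abs(log_odds_ratio(data, i, j)) > threshold}

# variables 0 and 1 move together; variable 2 is independent noise
data = [(1, 1, 0), (1, 1, 1), (0, 0, 0), (0, 0, 1),
        (1, 1, 0), (0, 0, 1), (1, 1, 1), (0, 0, 0)]
edges = screen_edges(data)
```

Marginal screening like this ignores conditional independence, which is exactly the gap the graphical-model methods reviewed in the paper are designed to close.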
Ortiz, Marcos G.
1992-01-01
A method for modeling a conducting material sample or structure (herein called a system) as at least two regions which comprise an electrical network of resistances, for measuring electric resistance between at least two selected pairs of external leads attached to the surface of the system, wherein at least one external lead is attached to the surface of each of the regions, and, using basic circuit theory, for translating measured resistances into temperatures or thermophysical properties in corresponding regions of the system.
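The translation step the patent abstract describes can be illustrated numerically: treat the sample as regions forming a resistance network, recover each region's resistance from lead-pair measurements, then convert resistance to temperature with a linear R(T) model. The series topology and material constants below are invented for the example.

```python
# Numeric sketch: two regions in series, three surface leads A, B, C.
# R_AB = R1, R_BC = R2, and R_AC = R1 + R2 serves as a consistency check.

def region_resistances_series(r_ab, r_bc, r_ac):
    assert abs((r_ab + r_bc) - r_ac) < 1e-6, "measurements inconsistent with model"
    return r_ab, r_bc

def resistance_to_temperature(r, r0, alpha, t0=20.0):
    """Invert the linear model R = R0 * (1 + alpha * (T - T0)) for T."""
    return t0 + (r / r0 - 1.0) / alpha

r1, r2 = region_resistances_series(r_ab=1.2, r_bc=1.5, r_ac=2.7)
t1 = resistance_to_temperature(r1, r0=1.0, alpha=0.004)
t2 = resistance_to_temperature(r2, r0=1.0, alpha=0.004)
```

With more regions and leads, the same idea generalises to solving the network equations from basic circuit theory for the per-region resistances before the temperature conversion.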
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-19
...: Section 80.103, Digital Selective Calling (DSC) Operating Procedures--Maritime Mobile Identity (MMSI...: Individuals or households; business or other for- profit entities and Federal Government. Number of... Marine VHF radios with Digital Selective Calling (DSC) capability in this collection. The licensee...
Benkert, Pascal; Schwede, Torsten; Tosatto, Silvio Ce
2009-05-20
The selection of the most accurate protein model from a set of alternatives is a crucial step in protein structure prediction both in template-based and ab initio approaches. Scoring functions have been developed which can either return a quality estimate for a single model or derive a score from the information contained in the ensemble of models for a given sequence. Local structural features occurring more frequently in the ensemble have a greater probability of being correct. Within the context of the CASP experiment, these so-called consensus methods have been shown to perform considerably better in selecting good candidate models, but tend to fail if the best models are far from the dominant structural cluster. In this paper we show that model selection can be improved if both approaches are combined by pre-filtering the models used during the calculation of the structural consensus. Our recently published QMEAN composite scoring function has been improved by including an all-atom interaction potential term. The preliminary model ranking based on the new QMEAN score is used to select a subset of reliable models against which the structural consensus score is calculated. This scoring function, called QMEANclust, achieves a correlation coefficient of predicted quality score and GDT_TS of 0.9 averaged over the 98 CASP7 targets and performs significantly better in selecting good models from the ensemble of server models than any other group participating in the quality estimation category of CASP7. Both scoring functions are also benchmarked on the MOULDER test set consisting of 20 target proteins each with 300 alternative models generated by MODELLER. QMEAN outperforms all other tested scoring functions operating on individual models, while the consensus method QMEANclust only works properly on decoy sets containing a certain fraction of near-native conformations.
We also present a local version of QMEAN for the per-residue estimation of model quality (QMEANlocal) and compare it to a new local consensus-based approach. Improved model selection is obtained by using a composite scoring function operating on single models in order to enrich higher quality models which are subsequently used to calculate the structural consensus. The performance of consensus-based methods such as QMEANclust highly depends on the composition and quality of the model ensemble to be analysed. Therefore, performance estimates for consensus methods based on large meta-datasets (e.g. CASP) might overrate their applicability in more realistic modelling situations with smaller sets of models based on individual methods.
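The pre-filtered consensus scheme can be sketched abstractly: score each model individually, keep the better fraction as a reference set, and rank all models by mean similarity to that set. Models below are toy feature vectors and "similarity" is a simple inverse distance; the real method operates on 3-D structures and uses QMEAN scores, so every name and number here is illustrative.

```python
# Sketch of consensus model selection with single-score pre-filtering,
# in the spirit of QMEANclust. Toy vectors, not protein structures.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def consensus_rank(models, single_scores, keep_frac=0.5):
    # pre-filter: keep the models with the best individual scores
    order = sorted(range(len(models)), key=lambda i: -single_scores[i])
    reference = [models[i] for i in order[:max(1, int(len(models) * keep_frac))]]

    def consensus(i):
        # higher (less negative) = closer on average to the reference set
        return -sum(distance(models[i], r) for r in reference) / len(reference)

    return sorted(range(len(models)), key=consensus, reverse=True)

# four toy "models"; the first three form a cluster, the last is an outlier
models = [[0.0, 0.1], [0.1, 0.0], [0.05, 0.05], [5.0, 5.0]]
single_scores = [0.7, 0.6, 0.9, 0.2]  # per-model quality estimates
ranking = consensus_rank(models, single_scores)
```

Pre-filtering keeps a low-quality outlier out of the reference set, so it cannot drag the consensus toward itself, which is the failure mode of naive consensus scoring that the combined approach addresses.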
Browning, Brian L.; Yu, Zhaoxia
2009-01-01
We present a novel method for simultaneous genotype calling and haplotype-phase inference. Our method employs the computationally efficient BEAGLE haplotype-frequency model, which can be applied to large-scale studies with millions of markers and thousands of samples. We compare genotype calls made with our method to genotype calls made with the BIRDSEED, CHIAMO, GenCall, and ILLUMINUS genotype-calling methods, using genotype data from the Illumina 550K and Affymetrix 500K arrays. We show that our method has higher genotype-call accuracy and yields fewer uncalled genotypes than competing methods. We perform single-marker analysis of data from the Wellcome Trust Case Control Consortium bipolar disorder and type 2 diabetes studies. For bipolar disorder, the genotype calls in the original study yield 25 markers with apparent false-positive association with bipolar disorder at a p < 10−7 significance level, whereas genotype calls made with our method yield no associated markers at this significance threshold. Conversely, for markers with replicated association with type 2 diabetes, there is good concordance between genotype calls used in the original study and calls made by our method. Results from single-marker and haplotypic analysis of our method's genotype calls for the bipolar disorder study indicate that our method is highly effective at eliminating genotyping artifacts that cause false-positive associations in genome-wide association studies. Our new genotype-calling methods are implemented in the BEAGLE and BEAGLECALL software packages. PMID:19931040
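The calling step can be illustrated with a toy posterior computation: combine a per-sample likelihood from array intensities with a population-level genotype prior (standing in for the haplotype-frequency model), and leave the genotype uncalled when the best posterior is not confident enough. Numbers and thresholds are invented; this is not the BEAGLECALL algorithm itself.

```python
# Toy posterior genotype calling: likelihood x population prior, normalised,
# with a no-call when confidence is low.

GENOTYPES = ("AA", "AB", "BB")

def call_genotype(likelihood, prior, min_posterior=0.9):
    """likelihood, prior: dicts over GENOTYPES. Returns (call, posterior);
    call is None when no genotype is confident enough."""
    unnorm = {g: likelihood[g] * prior[g] for g in GENOTYPES}
    z = sum(unnorm.values())
    post = {g: v / z for g, v in unnorm.items()}
    best = max(GENOTYPES, key=lambda g: post[g])
    return (best if post[best] >= min_posterior else None), post

prior = {"AA": 0.49, "AB": 0.42, "BB": 0.09}  # e.g. allele freq 0.7 under HWE
clear = {"AA": 0.90, "AB": 0.08, "BB": 0.02}      # clean intensity cluster
ambiguous = {"AA": 0.40, "AB": 0.45, "BB": 0.15}  # borderline intensities
call1, post1 = call_genotype(clear, prior)
call2, post2 = call_genotype(ambiguous, prior)
```

A model-based prior sharpens calls for clean samples while ambiguous samples are flagged as no-calls rather than guessed, which is how such methods reduce genotyping artifacts that cause false-positive associations.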
Development of echolocation calls and neural selectivity for echolocation calls in the pallid bat.
Razak, Khaleel A; Fuzessery, Zoltan M
2015-10-01
Studies of birdsongs and neural selectivity for songs have provided important insights into principles of concurrent behavioral and auditory system development. Relatively little is known about mammalian auditory system development in terms of vocalizations or other behaviorally relevant sounds. This review suggests echolocating bats are suitable mammalian model systems to understand development of auditory behaviors. The simplicity of echolocation calls with known behavioral relevance and strong neural selectivity provides a platform to address how natural experience shapes cortical receptive field (RF) mechanisms. We summarize recent studies in the pallid bat that followed development of echolocation calls and cortical processing of such calls. We also discuss similar studies in the mustached bat for comparison. These studies suggest: (1) there are different developmental sensitive periods for different acoustic features of the same vocalization. The underlying basis is the capacity for some components of the RF to be modified independently of others. Some RF computations and maps involved in call processing are present even before the cochlea is mature and well before use of echolocation in flight. Others develop over a much longer time course. (2) Normal experience is required not just for refinement, but also for maintenance, of response properties that develop in an experience-independent manner. (3) Experience utilizes millisecond-range changes in timing of inhibitory and excitatory RF components as substrates to shape vocalization selectivity. We suggest that bat species and call diversity provide a unique opportunity to address developmental constraints in the evolution of neural mechanisms of vocalization processing. © 2014 Wiley Periodicals, Inc.
Tian, Xin; Xin, Mingyuan; Luo, Jian; Liu, Mingyao; Jiang, Zhenran
2017-02-01
The selection of relevant genes for breast cancer metastasis is critical for the treatment and prognosis of cancer patients. Although much effort has been devoted to gene selection procedures using different statistical analysis methods or computational techniques, the interpretation of the variables in the resulting survival models has been limited so far. This article proposes a new Random Forest (RF)-based algorithm to identify important variables highly related to breast cancer metastasis, based on the importance scores of two variable selection algorithms: the mean decrease Gini (MDG) criterion of Random Forest and the GeneRank algorithm with protein-protein interaction (PPI) information. The new gene selection algorithm is called PPIRF. The improved prediction accuracy fully illustrates the reliability and high interpretability of the gene list selected by the PPIRF approach.
History and development of the Schmidt-Hunter meta-analysis methods.
Schmidt, Frank L
2015-09-01
In this article, I provide answers to the questions posed by Will Shadish about the history and development of the Schmidt-Hunter methods of meta-analysis. In the 1970s, I headed a research program on personnel selection at the US Office of Personnel Management (OPM). After our research showed that validity studies have low statistical power, OPM felt a need for a better way to demonstrate test validity, especially in light of court cases challenging selection methods. In response, we created our method of meta-analysis (initially called validity generalization). Results showed that most of the variability of validity estimates from study to study was because of sampling error and other research artifacts such as variations in range restriction and measurement error. Corrections for these artifacts in our research and in replications by others showed that the predictive validity of most tests was high and generalizable. This conclusion challenged long-standing beliefs and so provoked resistance, which over time was overcome. The 1982 book that we published extending these methods to research areas beyond personnel selection was positively received and was followed by expanded books in 1990, 2004, and 2014. Today, these methods are being applied in a wide variety of areas. Copyright © 2015 John Wiley & Sons, Ltd.
Puechmaille, Sébastien J.; Borissov, Ivailo M.; Zsebok, Sándor; Allegrini, Benjamin; Hizem, Mohammed; Kuenzel, Sven; Schuchmann, Maike; Teeling, Emma C.; Siemers, Björn M.
2014-01-01
Animals employ an array of signals (i.e. visual, acoustic, olfactory) for communication. Natural selection favours signals, receptors, and signalling behaviour that optimise the received signal relative to background noise. When the signal is used for more than one function, antagonisms amongst the different signalling functions may constrain the optimisation of the signal for any one function. Sexual selection through mate choice can strongly modify the effects of natural selection on signalling systems ultimately causing maladaptive signals to evolve. Echolocating bats represent a fascinating group in which to study the evolution of signalling systems as unlike bird songs or frog calls, echolocation has a dual role in foraging and communication. The function of bat echolocation is to generate echoes that the calling bat uses for orientation and food detection with call characteristics being directly related to the exploitation of particular ecological niches. Therefore, it is commonly assumed that echolocation has been shaped by ecology via natural selection. Here we demonstrate for the first time using a novel combined behavioural, ecological and genetic approach that in a bat species, Rhinolophus mehelyi: (1) echolocation peak frequency is an honest signal of body size; (2) females preferentially select males with high frequency calls during the mating season; (3) high frequency males sire more off-spring, providing evidence that echolocation calls may play a role in female mate choice. Our data refute the sole role of ecology in the evolution of echolocation and highlight the antagonistic interplay between natural and sexual selection in shaping acoustic signals. PMID:25075972
A systematic review of stakeholder views of selection methods for medical school admission.
Kelly, M E; Patterson, F; O'Flynn, S; Mulligan, J; Murphy, A W
2018-06-15
The purpose of this paper is to systematically review the literature with respect to stakeholder views of selection methods for medical school admissions. An electronic search of nine databases was conducted between January 2000-July 2014. Two reviewers independently assessed all titles (n = 1017) and retained abstracts (n = 233) for relevance. Methodological quality of quantitative papers was assessed using the MERSQI instrument. The overall quality of evidence in this field was low. Evidence was synthesised in a narrative review. Applicants support interviews and multiple mini interviews (MMIs). There is emerging evidence that situational judgement tests (SJTs) and selection centres (SCs) are also well regarded, but aptitude tests less so. Selectors endorse the use of interviews in general, and MMIs in particular, judging them to be fair, relevant and appropriate, with emerging evidence of similarly positive reactions to SCs. Aptitude tests and academic records were valued in decisions of whom to call to interview. Medical students prefer interview-based selection to cognitive aptitude tests. They are unconvinced about the transparency and veracity of written applications. Perceptions of organisational justice, which describe views of fairness in organisational processes, appear to be highly influential on stakeholders' views of the acceptability of selection methods. In particular, procedural justice (perceived fairness of selection tools in terms of job relevance and characteristics of the test) and distributive justice (perceived fairness of selection outcomes in terms of equal opportunity and equity) appear to be important considerations when deciding on the acceptability of selection methods. There were significant gaps with respect to both key stakeholder groups and the range of selection tools assessed.
Notwithstanding the observed limitations in the quality of research in this field, there appears to be broad concordance of views on the various selection methods across the diverse stakeholder groups. This review highlights the need for better standards, more appropriate methodologies and for broadening the scope of stakeholder research.
Intra-Operative Dosimetry in Prostate Brachytherapy
2007-11-01
of the focal spot. 2.1. Model for Reconstruction Space Transformation As illustrated in Figure 8, let A & B (with reference frames FA & FB) be the two...simplex optimization method in MATLAB 7.0 with the search space being defined by the distortion modes from PCA. A linear combination of the modes would...arm is tracked with an X-ray fiducial system called FTRAC that is composed of optimally selected polynomial
Automating the Transformational Development of Software. Volume 1.
1983-03-01
DRACO system [Neighbors 80] uses meta-rules to derive information about which new transformations will be applicable after a particular transformation has...transformation over another. The new model, as incorporated in a system called Glitter, explicitly represents transformation goals, methods, and selection...done anew for each new problem (compare this with Neighbors' Draco system [Neighbors 80] which attempts to reuse domain analysis). o Is the user
EDMC: An enhanced distributed multi-channel anti-collision algorithm for RFID reader system
NASA Astrophysics Data System (ADS)
Zhang, YuJing; Cui, Yinghua
2017-05-01
In this paper, we propose an enhanced distributed multi-channel reader anti-collision algorithm for RFID environments, which builds on the distributed multi-channel reader anti-collision algorithm for RFID environments (called DiMCA). We propose a monitoring method to decide whether a reader has received the latest control news after it selects the data channel. The simulation results show that the algorithm reduces interrogation delay.
Feature weight estimation for gene selection: a local hyperlinear learning approach
2014-01-01
Background Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes buried in high-dimensional irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than the global measurement that is typically used in existing methods. The weights obtained by our method are very robust to degradation from noisy features, even in vast dimensions. To demonstrate the performance of our method, extensive experiments involving classification tests have been carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability across various classification algorithms. PMID:24625071
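For orientation, the classic RELIEF scheme that LHR builds on can be sketched in a few lines. This is the textbook Kira-Rendell version with L1 distances, not the LHR local-hyperplane variant described in the abstract; the function name is illustrative only.

```python
import numpy as np

def relief_weights(X, y):
    """Classic RELIEF feature weighting: for each instance, a feature's
    weight grows when it separates the instance from its nearest miss
    (different class) and shrinks when it differs from its nearest hit
    (same class)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        dists = np.abs(X - X[i]).sum(axis=1)  # L1 distance to every sample
        dists[i] = np.inf                     # exclude the instance itself
        hit = np.argmin(np.where(y == y[i], dists, np.inf))
        miss = np.argmin(np.where(y != y[i], dists, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n
```

On a toy dataset where only the first feature separates the classes, the first weight comes out clearly larger than the second, which is the behavior LHR refines by replacing the global nearest-neighbor search with local approximation.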
Wang, Zhi-Guo; Chen, Zeng-Ping; Gong, Fan; Wu, Hai-Long; Yu, Ru-Qin
2002-05-01
The chromatographic peak located inside another peak in the time direction is called an embedded or inner peak in contradistinction with the embedding peak, which is called an outer peak. The chemical components corresponding to inner and outer peaks are called inner and outer components, respectively. This special case of co-eluting chromatograms was investigated using chemometric approaches taking GC-MS as an example. A novel method, named inner chromatogram projection (ICP), for resolution of GC-MS data with embedded chromatographic peaks is derived. Orthogonal projection resolution is first utilized to obtain the chromatographic profile of the inner component. Projection of the two-way data matrix columnwise-normalized along the time direction to the normalized profile of the inner component found is subsequently performed to find the selective m/z points, if they exist, which represent the chromatogram of the outer component by itself. With the profiles obtained, the mass spectra can easily be found by means of a least-squares procedure. The results for both simulated data and real samples demonstrate that the proposed method is capable of achieving satisfactory resolution performance not affected by the shapes of chromatograms and the relative positions of the components involved.
Application of a fast skyline computation algorithm for serendipitous searching problems
NASA Astrophysics Data System (ADS)
Koizumi, Kenichi; Hiraki, Kei; Inaba, Mary
2018-02-01
Skyline computation is a method of extracting interesting entries from a large population with multiple attributes. These entries, called skyline or Pareto-optimal entries, are known to have extreme characteristics that cannot be found by outlier detection methods. Skyline computation is an important task for characterizing large amounts of data and selecting interesting entries with extreme features. When the population changes dynamically, the task of calculating a sequence of skyline sets is called continuous skyline computation. This task is known to be difficult to perform for the following reasons: (1) information on non-skyline entries must be stored, since they may join the skyline in the future; (2) the appearance or disappearance of even a single entry can change the skyline drastically; (3) it is difficult to adopt a geometric acceleration algorithm for skyline computation tasks with high-dimensional datasets. Our new algorithm, called jointed rooted-tree (JR-tree), manages entries using a rooted tree structure. JR-tree delays extending the tree to deep levels to accelerate tree construction and traversal. In this study, we present the difficulties in extracting entries tagged with a rare label in high-dimensional space and the potential of fast skyline computation in low-latency cell identification technology.
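To make the definition concrete, a naive O(n²) skyline over tuples (maximising every attribute) can be written as below. This is only the brute-force baseline that tree-based algorithms like JR-tree are designed to beat, not the JR-tree algorithm itself.

```python
def skyline(points):
    """Return the skyline (Pareto-optimal set) of a list of tuples.
    A point is dropped if some other point dominates it, i.e. is >=
    in every attribute and > in at least one."""
    def dominates(p, q):
        return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Note that two incomparable points (each better on a different attribute) both survive, which is exactly why skyline entries cannot be found by a single-score outlier test.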
NASA Astrophysics Data System (ADS)
Rachmatia, H.; Kusuma, W. A.; Hasibuan, L. S.
2017-05-01
Selection in plant breeding could be more effective and more efficient if it were based on genomic data. Genomic selection (GS) is a new approach for plant-breeding selection that exploits genomic data through a mechanism called genomic prediction (GP). Most GP models use linear methods that ignore the effects of interactions among genes and of higher-order nonlinearities. The deep belief network (DBN), one of the architectures used in deep learning, is able to model data at a high level of abstraction that captures nonlinear effects in the data. This study implemented a DBN for developing a GP model utilizing whole-genome Single Nucleotide Polymorphisms (SNPs) as data for training and testing. The case study was a set of traits in maize. The maize dataset was obtained from CIMMYT's (International Maize and Wheat Improvement Center) Global Maize program. Based on Pearson correlation, the DBN outperforms the other methods, reproducing kernel Hilbert space (RKHS) regression, Bayesian LASSO (BL) and the best linear unbiased predictor (BLUP), in the case of allegedly non-additive traits. The DBN achieves a correlation of 0.579 on the -1 to 1 scale.
Zhou, Hongyi; Skolnick, Jeffrey
2010-01-01
In this work, we develop a method called FTCOM for assessing the global quality of protein structural models for targets of medium and hard difficulty (remote homology) produced by structure prediction approaches such as threading or ab initio structure prediction. FTCOM requires the Cα coordinates of full-length models and assesses model quality based on fragment comparison and a score derived from comparison of the model to top threading templates. On a set of 361 medium/hard targets, FTCOM was applied to and assessed for its ability to improve upon the results from the SP3, SPARKS, PROSPECTOR_3, and PRO-SP3-TASSER threading algorithms. The average TM-score improves by 5%-10% for the first selected model by the new method over models obtained by the original selection procedure in the respective threading methods. Moreover, the number of foldable targets (TM-score ≥ 0.4) increases by at least 7.6% (SP3) and by as much as 54% (SPARKS). Thus, FTCOM is a promising approach to template selection. PMID:20455261
How the environment shapes animal signals: a test of the acoustic adaptation hypothesis in frogs.
Goutte, S; Dubois, A; Howard, S D; Márquez, R; Rowley, J J L; Dehling, J M; Grandcolas, P; Xiong, R C; Legendre, F
2018-01-01
Long-distance acoustic signals are widely used in animal communication systems and, in many cases, are essential for reproduction. The acoustic adaptation hypothesis (AAH) implies that acoustic signals should be selected for further transmission and better content integrity under the acoustic constraints of the habitat in which they are produced. In this study, we test predictions derived from the AAH in frogs. Specifically, we focus on the difference between torrent frogs and frogs calling in less noisy habitats. Torrents produce sounds that can mask frog vocalizations and constitute a major acoustic constraint on call evolution. We combine data collected in the field, material from scientific collections and the literature for a total of 79 primarily Asian species, of the families Ranidae, Rhacophoridae, Dicroglossidae and Microhylidae. Using phylogenetic comparative methods and including morphological and environmental potential confounding factors, we investigate putatively adaptive call features in torrent frogs. We use broad habitat categories as well as fine-scale habitat measurements and test their correlation with six call characteristics. We find mixed support for the AAH. Spectral features of torrent frog calls are different from those of frogs calling in other habitats and are related to ambient noise levels, as predicted by the AAH. However, temporal call features do not seem to be shaped by the frogs' calling habitats. Our results underline both the complexity of call evolution and the need to consider multiple factors when investigating this issue. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2017 European Society For Evolutionary Biology.
What to Start: Selecting a First HIV Regimen
... to treat HIV infection is called antiretroviral therapy (ART). People on ART take a combination of HIV medicines (called an ... infection. HIV treatment (also called antiretroviral therapy or ART) begins with choosing an HIV regimen. People on ...
Zhang, Xiaohua Douglas; Yang, Xiting Cindy; Chung, Namjin; Gates, Adam; Stec, Erica; Kunapuli, Priya; Holder, Dan J; Ferrer, Marc; Espeseth, Amy S
2006-04-01
RNA interference (RNAi) high-throughput screening (HTS) experiments carried out using large (>5000 short interfering [si]RNA) libraries generate a huge amount of data. In order to use these data to identify the most effective siRNAs tested, it is critical to adopt and develop appropriate statistical methods. To address the questions in hit selection of RNAi HTS, we proposed a quartile-based method that is robust to outliers, true hits and nonsymmetrical data. We compared it with the more traditional tests, mean +/- k standard deviations (SD) and median +/- 3 median absolute deviations (MAD). The results suggested that the quartile-based method selected more hits than mean +/- k SD under the same preset error rate. The number of hits selected by median +/- k MAD was close to that by the quartile-based method. Further analysis suggested that the quartile-based method had the greatest power in detecting true hits, especially weak or moderate true hits. Our investigation also suggested that platewise analysis (determining effective siRNAs on a plate-by-plate basis) can adjust for systematic errors in different plates, while an experimentwise analysis, in which effective siRNAs are identified in an analysis of the entire experiment, cannot. However, experimentwise analysis may detect a cluster of true positive hits placed together in one or several plates, while platewise analysis may not. To display hit selection results, we designed a specific figure called a plate-well series plot. We thus suggest the following strategy for hit selection in RNAi HTS experiments. First, choose the quartile-based method, or median +/- k MAD, for identifying effective siRNAs. Second, perform the chosen method experimentwise on transformed/normalized data, such as percentage inhibition, to check the possibility of hit clusters. If a cluster of selected hits is observed, repeat the analysis based on untransformed data to determine whether the cluster is due to an artifact in the data.
If no clusters of hits are observed, select hits by performing platewise analysis on transformed data. Third, adopt the plate-well series plot to visualize both the data and the hit selection results, as well as to check for artifacts.
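The robust median +/- k*MAD rule compared in this study is simple to state in code. The sketch below implements the generic rule only (not the authors' quartile-based method), and the function name is illustrative.

```python
import statistics

def mad_hits(values, k=3.0):
    """Flag indices whose values lie more than k median-absolute-deviations
    from the median: a robust analogue of the mean +/- k*SD cutoff that is
    not inflated by outliers or strong true hits."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return [i for i, v in enumerate(values) if abs(v - med) > k * mad]
```

Because both the centre and the spread are medians, one extreme well shifts neither estimate, which is why MAD-based cutoffs retain power for weak and moderate hits where mean +/- k*SD does not.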
Modeling Sexual Selection in Túngara Frog and Rationality of Mate Choice.
Vargas Bernal, Esteban; Sanabria Malagon, Camilo
2017-12-01
The males of the frog species Engystomops pustulosus produce simple and complex calls to lure females, as a form of intersexual selection. Complex calls lead males to greater reproductive success than simple calls do. However, the complex calls are also more attractive to their main predator, the bat Trachops cirrhosus. Therefore, as M. Ryan suggests in (The túngara frog: a study in sexual selection and communication. University of Chicago Press, Chicago, 1985), the complexity of the calls lets the frogs keep a trade-off between reproductive success and predation. In this paper, we verify this trade-off from the perspective of game theory. We first model the proportion of simple calls as a symmetric game of two strategies. We also model the effect of adding a third strategy, males that keep quiet and intercept females, which would play a role in intrasexual selection. Under the assumption that the decision of the males takes into account this trade-off between reproductive success and predation, our model reproduces the observed behavior reported in the literature with minimal assumptions on the parameters. From the model with three strategies, we verify that the quiet strategy can only coexist with the simple and complex strategies if the rate at which quiet males intercept females is high, which explains the rarity of the quiet strategy. We conclude that the reproductive strategy of the male frog E. pustulosus is rational.
Persistent Females and Compliant Males Coordinate Alarm Calling in Diana Monkeys.
Stephan, Claudia; Zuberbühler, Klaus
2016-11-07
Sexual dimorphisms in animal vocal behavior have been successfully explained by sexual selection theory (e.g., mammals [1-5]; birds [6, 7]; anurans [8, 9]), but this does not usually include alarm calls, which are thought to be the product of kin or individual selection (e.g., [10, 11]). Here, we present the results of playback experiments with wild Diana monkeys, a species with highly dimorphic predator-specific alarms, to investigate the communication strategies of males and females during predator encounters. First, we simulated predator presence by broadcasting vocalizations of their main predators, leopards or eagles. We found that males only produced predator-specific alarms after the females had produced theirs, in response to which the females ceased alarm calling. In a second experiment, we created congruent and incongruent situations, so that the calls of a predator were followed by playbacks of male or female alarms with a matching or mismatching referent. For congruent conditions, results were the same as in the first experiment. For incongruent conditions, however, the males always gave predator-specific alarms that referentially matched the females' calls, regardless of the previously displayed predator. In contrast, females always gave predator-specific alarms that matched the predator type, regardless of their own male's subsequent calls. Moreover, the females persistently continued to alarm call until their own male produced calls with the matching referent. Results show that males and females attend to the informational content of each other's alarm calls but prioritize them differently relative to an experienced external event, a likely reflection of different underlying selection pressures. Copyright © 2016 Elsevier Ltd. All rights reserved.
Comparison and evaluation of two exome capture kits and sequencing platforms for variant calling.
Zhang, Guoqiang; Wang, Jianfeng; Yang, Jin; Li, Wenjie; Deng, Yutian; Li, Jing; Huang, Jun; Hu, Songnian; Zhang, Bing
2015-08-05
To promote the clinical application of next-generation sequencing, it is important to obtain accurate and consistent variants of target genomic regions at low cost. Ion Proton, the latest updated semiconductor-based sequencing instrument from Life Technologies, is designed to provide investigators with an inexpensive platform for human whole exome sequencing that achieves a rapid turnaround time. However, few studies have comprehensively compared and evaluated the accuracy of variant calling between the Ion Proton and Illumina sequencing platforms such as HiSeq 2000, which is the most popular sequencing platform for the human genome. The Ion Proton sequencer combined with the Ion TargetSeq Exome Enrichment Kit together make up TargetSeq-Proton, whereas SureSelect-HiSeq is based on the Agilent SureSelect Human All Exon v4 Kit and the HiSeq 2000 sequencer. Here, we sequenced exonic DNA from four human blood samples using both TargetSeq-Proton and SureSelect-HiSeq. We then called variants in the exonic regions that overlapped between the two exome capture kits (33.6 Mb). The rates of shared variant loci called by the two sequencing platforms ranged from 68.0 to 75.3% in the four samples, whereas the concordance of co-detected variant loci reached 99%. Sanger sequencing validation revealed that the validation rate of concordant single nucleotide polymorphisms (SNPs) (91.5%) was higher than that of SNPs specific to TargetSeq-Proton (60.0%) or specific to SureSelect-HiSeq (88.3%). With regard to 1-bp small insertions and deletions (InDels), the Sanger sequencing validation rates of concordant variants (100.0%) and SureSelect-HiSeq-specific variants (89.6%) were higher than that of TargetSeq-Proton-specific variants (15.8%). In the sequencing of exonic regions, using a combination of the two sequencing strategies (SureSelect-HiSeq and TargetSeq-Proton) increased the variant calling specificity for concordant variant loci and the sensitivity for variant loci called by either platform.
However, for platform-specific variants, the accuracy of variant calling by HiSeq 2000 was higher than that of Ion Proton, particularly for InDel detection. Moreover, the variant calling software also influences the detection of SNPs and, especially, InDels in Ion Proton exome sequencing.
The use of Lanczos's method to solve the large generalized symmetric definite eigenvalue problem
NASA Technical Reports Server (NTRS)
Jones, Mark T.; Patrick, Merrell L.
1989-01-01
The generalized eigenvalue problem, Kx = Lambda Mx, is of significant practical importance, especially in structural engineering where it arises as the vibration and buckling problem. A new algorithm, LANZ, based on Lanczos's method is developed. LANZ uses a technique called dynamic shifting to improve the efficiency and reliability of the Lanczos algorithm. A new algorithm for solving the tridiagonal matrices that arise when using Lanczos's method is described. A modification of Parlett and Scott's selective orthogonalization algorithm is proposed. Results from an implementation of LANZ on a Convex C-220 show it to be superior to a subspace iteration code.
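The reduction underlying such solvers can be illustrated compactly: with M symmetric positive definite, a Cholesky factor M = L L^T folds K x = lambda M x into a standard symmetric eigenproblem. This dense sketch shows the transformation only; it is not the Lanczos iteration or dynamic shifting used by LANZ, and the function name is illustrative.

```python
import numpy as np

def gen_symmetric_eig(K, M):
    """Solve K x = lam M x for symmetric K and symmetric positive definite M
    by reducing to the standard problem (L^-1 K L^-T) y = lam y with M = L L^T,
    then back-transforming the eigenvectors via x = L^-T y."""
    L = np.linalg.cholesky(M)
    # L^-1 K L^-T, using the symmetry of K to avoid forming an explicit inverse
    A = np.linalg.solve(L, np.linalg.solve(L, K).T).T
    lam, Y = np.linalg.eigh(A)          # eigenvalues in ascending order
    X = np.linalg.solve(L.T, Y)         # back-transform eigenvectors
    return lam, X
```

A shifted variant, as in LANZ's dynamic shifting, would instead factor K - sigma*M to converge quickly on eigenvalues near the shift sigma.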
Capturing Accurate and Useful Information on Medication-Related Telenursing Triage Calls.
Lake, R; Li, L; Baysari, M; Byrne, M; Robinson, M; Westbrook, J I
2016-01-01
Registered nurses providing telenursing triage and advice services record information on the medication-related calls they handle. However, the quality and consistency of these data have rarely been examined. Our aim was to examine medication-related calls made to the healthdirect advice service in November 2014, to assess their basic characteristics and how the data entry format influenced the information collected and data consistency. Registered nurses selected the patient question type from a range of categories and entered the medications involved in a free-text field. Medication names were manually extracted from the free-text fields. We also compared the selected patient question type with the free-text description of the call, in order to gauge data consistency. Results showed that nurses provided patients with advice on medication-related queries in a timely manner (median call duration: 9 minutes). From 1835 calls, we were able to identify and classify 2156 medications into 384 generic names. However, in 204 cases (11.2% of calls) no medication name was entered. A further 308 (15.0%) of the medication names entered were not identifiable. When we compared the selected patient question with the free-text description of calls, we found that these were consistent in 63.27% of cases. Telenursing triage and advice services provide a valuable resource to the public, with quick and easily accessible advice. To support nurses in providing quality services and recording accurate information about the queries, appropriate data entry formats and design would be beneficial.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, G.
1984-09-01
Two classifications of fishing jobs are discussed: open hole and cased hole. When there is no casing in the area of the fish, it is called open hole fishing. When the fish is inside the casing, it is called cased hole fishing. The article lists various things that can become a fish, including: stuck drill pipe, broken drill pipe, drill collars, bits, bit cones, hand tools dropped in the well, sanded-up or mud-stuck tubing, stuck packers, and much more. It is suggested that on a fishing job, all parties involved should cooperate with each other, and that fishing tool people obtain all the information concerning the well. That way they can select the right tools and methods to clean out the well as quickly as possible.
Q-nexus: a comprehensive and efficient analysis pipeline designed for ChIP-nexus.
Hansen, Peter; Hecht, Jochen; Ibn-Salem, Jonas; Menkuec, Benjamin S; Roskosch, Sebastian; Truss, Matthias; Robinson, Peter N
2016-11-04
ChIP-nexus, an extension of the ChIP-exo protocol, can be used to map the borders of protein-bound DNA sequences at nucleotide resolution; it requires less input DNA and enables selective PCR duplicate removal using random barcodes. However, the use of random barcodes requires additional preprocessing of the mapping data, which complicates the computational analysis. To date, only a very limited number of software packages are available for the analysis of ChIP-exo data, and these have not yet been systematically tested and compared on ChIP-nexus data. Here, we present a comprehensive software package for ChIP-nexus data that exploits the random barcodes for selective removal of PCR duplicates and for quality control. Furthermore, we developed bespoke methods to estimate the width of the protected region resulting from protein-DNA binding and to infer binding positions from ChIP-nexus data. Finally, we applied our peak calling method as well as two other methods, MACE and MACS2, to the available ChIP-nexus data. The Q-nexus software is efficient and easy to use. Novel statistics on duplication rates that take random barcodes into account are calculated. Our method for estimating the width of the protected region yields unbiased signatures that are highly reproducible for biological replicates and at the same time very specific for the respective factors analyzed. As judged by the irreproducible discovery rate (IDR), our peak calling algorithm shows substantially better reproducibility. An implementation of Q-nexus is available at http://charite.github.io/Q/ .
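The idea behind selective duplicate removal can be sketched in a few lines. This is not the Q-nexus implementation; the read-tuple layout is an invented simplification: reads mapping to the same position and strand are treated as PCR duplicates only if they also carry the same random barcode.

```python
# Selective PCR-duplicate removal keyed on position AND random barcode:
# identical positions with distinct barcodes are independent molecules.
def dedup_with_barcodes(reads):
    """reads: iterable of (chrom, pos, strand, barcode, read_id) tuples.
    Keeps the first read seen for each (chrom, pos, strand, barcode)."""
    seen = set()
    kept = []
    for chrom, pos, strand, barcode, read_id in reads:
        key = (chrom, pos, strand, barcode)
        if key not in seen:
            seen.add(key)
            kept.append(read_id)
    return kept

reads = [
    ("chr1", 100, "+", "ACGT", "r1"),
    ("chr1", 100, "+", "ACGT", "r2"),  # PCR duplicate of r1 (same barcode): dropped
    ("chr1", 100, "+", "TTAG", "r3"),  # same position, different barcode: kept
    ("chr2", 500, "-", "ACGT", "r4"),
]
print(dedup_with_barcodes(reads))  # ['r1', 'r3', 'r4']
```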
Selective corneal optical aberration (SCOA) for customized ablation
NASA Astrophysics Data System (ADS)
Jean, Benedikt J.; Bende, Thomas
2001-06-01
Wavefront analysis still has some technical problems, which may be solved within the next few years. There are limitations to using wavefront analysis alone as a diagnostic tool for customized ablation; an ideal combination would be wavefront and topography. Meanwhile, Selective Corneal Aberration is a method to visualize the optical quality of a measured corneal surface. It is based on true measured 3D elevation information from a video topometer. The values can be interpreted either using Zernike polynomials or visualized as a so-called color-coded surface quality map. This map gives a quality factor (corneal aberration) for each measured point of the cornea.
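Interpreting a measured elevation surface through Zernike polynomials can be sketched as a least-squares fit. This is a hedged illustration, not the SCOA software: only six standard low-order Zernike terms are used, and the sample surface is synthetic.

```python
import numpy as np

# Fit low-order Zernike terms to elevation data on the unit disk by
# least squares; the residual plays the role of a per-point surface-quality value.
def zernike_basis(rho, theta):
    return np.column_stack([
        np.ones_like(rho),             # piston
        rho * np.cos(theta),           # tilt x
        rho * np.sin(theta),           # tilt y
        2 * rho**2 - 1,                # defocus
        rho**2 * np.sin(2 * theta),    # oblique astigmatism
        rho**2 * np.cos(2 * theta),    # vertical astigmatism
    ])

def fit_zernike(rho, theta, elevation):
    A = zernike_basis(rho, theta)
    coeffs, *_ = np.linalg.lstsq(A, elevation, rcond=None)
    residual = elevation - A @ coeffs
    return coeffs, residual

rng = np.random.default_rng(0)
rho = rng.uniform(0, 1, 500)
theta = rng.uniform(0, 2 * np.pi, 500)
true = np.array([0.1, 0.0, 0.0, 0.5, 0.0, 0.2])   # synthetic "measured" surface
elev = zernike_basis(rho, theta) @ true
coeffs, residual = fit_zernike(rho, theta, elev)
print(np.round(coeffs, 3))   # recovers the generating coefficients
```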
NASA Astrophysics Data System (ADS)
Feng, Ke; Wang, KeSheng; Zhang, Mian; Ni, Qing; Zuo, Ming J.
2017-03-01
The planetary gearbox, due to its unique mechanical structure, is an important rotating machine for transmission systems. Its engineering applications are often in non-stationary operational conditions, such as helicopters, wind energy systems, etc. The unique physical structure and working conditions make the vibrations measured from planetary gearboxes exhibit complex time-varying modulation and therefore yield complicated spectral structures. As a result, traditional signal processing methods, such as Fourier analysis, and the selection of characteristic fault frequencies for diagnosis face serious challenges. To overcome this drawback, this paper proposes a signal selection scheme for fault-emphasized diagnostics based upon two order tracking techniques. The basic procedures of the proposed scheme are as follows. (1) Computed order tracking is applied to reveal the order contents and identify the order(s) of interest. (2) Vold-Kalman filter order tracking is used to extract the order(s) of interest; these filtered order(s) constitute the so-called selected vibrations. (3) Time-domain statistical indicators are applied to the selected vibrations for fault-emphasized diagnostics. The proposed scheme is explained and demonstrated with a simulated signal model and experimental studies, and the method proves to be effective for planetary gearbox fault diagnosis.
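Step (1), computed order tracking, can be sketched as resampling from the time domain to the uniform shaft-angle domain, where order content sits at fixed spectral lines despite a varying speed. This is a hedged illustration under an assumed linear speed ramp, not the paper's processing chain (and it omits the Vold-Kalman filtering of step 2).

```python
import numpy as np

# Computed order tracking sketch: a pure 3rd-order component under a
# 10 -> 30 Hz speed ramp smears in the frequency domain, but after
# resampling to a uniform shaft-angle grid its order is a single line.
fs = 5000.0
t = np.arange(0, 4.0, 1 / fs)
speed_hz = 10.0 + 5.0 * t                   # assumed shaft-speed profile
revs = np.cumsum(speed_hz) / fs             # shaft angle in revolutions
signal = np.cos(2 * np.pi * 3.0 * revs)     # 3rd-order vibration component

# Resample onto a uniform angle grid (constant increment in revolutions).
n = 4096
rev_grid = np.linspace(revs[0], revs[-1], n)
resampled = np.interp(rev_grid, revs, signal)

# In the angle domain, FFT "frequency" is cycles per revolution, i.e. order.
d_rev = rev_grid[1] - rev_grid[0]
orders = np.fft.rfftfreq(n, d=d_rev)
spectrum = np.abs(np.fft.rfft(resampled))
print(orders[np.argmax(spectrum)])          # peak near order 3
```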
A Ranking Approach to Genomic Selection.
Blondel, Mathieu; Onogi, Akio; Iwata, Hiroyoshi; Ueda, Naonori
2015-01-01
Genomic selection (GS) is a recent selective breeding method which uses predictive models based on whole-genome molecular markers. Until now, existing studies formulated GS as the problem of modeling an individual's breeding value for a particular trait of interest, i.e., as a regression problem. To assess predictive accuracy of the model, the Pearson correlation between observed and predicted trait values was used. In this paper, we propose to formulate GS as the problem of ranking individuals according to their breeding value. Our proposed framework allows us to employ machine learning methods for ranking which had previously not been considered in the GS literature. To assess ranking accuracy of a model, we introduce a new measure originating from the information retrieval literature called normalized discounted cumulative gain (NDCG). NDCG rewards more strongly models which assign a high rank to individuals with high breeding value. Therefore, NDCG reflects a prerequisite objective in selective breeding: accurate selection of individuals with high breeding value. We conducted a comparison of 10 existing regression methods and 3 new ranking methods on 6 datasets, consisting of 4 plant species and 25 traits. Our experimental results suggest that tree-based ensemble methods including McRank, Random Forests and Gradient Boosting Regression Trees achieve excellent ranking accuracy. RKHS regression and RankSVM also achieve good accuracy when used with an RBF kernel. Traditional regression methods such as Bayesian lasso, wBSR and BayesC were found less suitable for ranking. Pearson correlation was found to correlate poorly with NDCG. Our study suggests two important messages. First, ranking methods are a promising research direction in GS. Second, NDCG can be a useful evaluation measure for GS.
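The NDCG measure described above is straightforward to compute. The sketch below is a standard NDCG implementation with exponential gain, hedged in that the "relevance" values here are toy breeding values rather than any dataset from the paper.

```python
import numpy as np

# NDCG for genomic selection: gain from each individual's true breeding
# value, discounted by rank position, so models that rank high-value
# individuals near the top score close to 1.
def dcg(relevances, k):
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))
    return np.sum((2.0 ** rel - 1.0) / discounts)

def ndcg(true_values, predicted_scores, k):
    order = np.argsort(predicted_scores)[::-1]    # ranking induced by the model
    ideal = np.sort(true_values)[::-1]            # best achievable ranking
    return dcg(np.asarray(true_values)[order], k) / dcg(ideal, k)

true_bv = [3.0, 1.0, 2.0, 0.0]      # toy true breeding values
perfect = [0.9, 0.1, 0.5, 0.0]      # scores that reproduce the ideal order
print(ndcg(true_bv, perfect, k=4))  # 1.0
```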
Land cover classification of Landsat 8 satellite data based on Fuzzy Logic approach
NASA Astrophysics Data System (ADS)
Taufik, Afirah; Sakinah Syed Ahmad, Sharifah
2016-06-01
The aim of this paper is to propose a method to classify the land cover of a satellite image based on a fuzzy rule-based system approach. The study uses bands of Landsat 8 and other indices, such as the Normalized Difference Water Index (NDWI), Normalized Difference Built-up Index (NDBI) and Normalized Difference Vegetation Index (NDVI), as input for the fuzzy inference system. The three selected indices represent our three main classes, called water, built-up land, and vegetation. The combination of the original multispectral bands and the selected indices provides more information about the image. The parameter selection of the fuzzy membership functions is performed by using a supervised method known as ANFIS (adaptive neuro-fuzzy inference system) training. The fuzzy system is tested on the classification of a land cover image that covers the Klang Valley area. The results showed that the fuzzy system approach is effective and can be explored and implemented for other areas of Landsat data.
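The three indices are standard normalized differences of Landsat 8 OLI bands (band 3 green, band 4 red, band 5 NIR, band 6 SWIR1). The sketch below computes them; the reflectance values and function names are illustrative, not from the paper.

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red), NDWI = (Green - NIR) / (Green + NIR),
# NDBI = (SWIR1 - NIR) / (SWIR1 + NIR); each highlights one target class.
def norm_diff(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return (a - b) / (a + b)

def landsat8_indices(green, red, nir, swir1):
    return {
        "NDVI": norm_diff(nir, red),     # vegetation
        "NDWI": norm_diff(green, nir),   # open water
        "NDBI": norm_diff(swir1, nir),   # built-up land
    }

# A vegetated pixel: strong NIR, weaker red/green/SWIR1 reflectance.
idx = landsat8_indices(green=0.08, red=0.05, nir=0.45, swir1=0.20)
print({k: round(float(v), 3) for k, v in idx.items()})  # high NDVI, negative NDWI/NDBI
```

In the paper's setup, maps like these (plus the raw bands) become the inputs whose fuzzy memberships ANFIS tunes.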
Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S
2014-12-09
Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
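The core idea of ranking degrees of freedom by how well they separate labeled conformational states can be illustrated with a simple supervised score. This is emphatically not the CB-FS implementation: a Fisher-style ratio stands in for the paper's combination of Markov state models and supervised machine learning, and the data are synthetic.

```python
import numpy as np

# Rank candidate order parameters (features) by a Fisher-style score:
# squared between-state mean separation over within-state variance.
def fisher_scores(X, y):
    """X: (samples, features); y: binary state labels (0/1)."""
    X0, X1 = X[y == 0], X[y == 1]
    between = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    within = X0.var(axis=0) + X1.var(axis=0)
    return between / (within + 1e-12)

rng = np.random.default_rng(1)
n = 200
y = np.repeat([0, 1], n // 2)        # two "conformational states"
X = rng.normal(size=(n, 4))          # four candidate degrees of freedom
X[:, 2] += 3.0 * y                   # only feature 2 truly separates the states
ranking = np.argsort(fisher_scores(X, y))[::-1]
print(ranking[0])                    # 2
```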
Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.
Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao
2015-04-01
Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnosis and classification. However, these data are usually redundant and noisy, and only a subset of them present distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography based optimization is proposed to select a good subset of informative genes relevant to the classification. In the proposed algorithm, first, the Fisher-Markov selector is used to choose a fixed number of genes. Second, to make biogeography based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance the exploration and exploitation ability. Then, discrete biogeography based optimization, which we call DBBO, is constructed by integrating the discrete migration model and the discrete mutation model. Finally, the DBBO method is used for feature selection, with three classifiers evaluated under the 10-fold cross-validation method. In order to show the effectiveness and efficiency of the algorithm, the proposed algorithm is tested on four breast cancer benchmark datasets. In comparison with the genetic algorithm, particle swarm optimization, the differential evolution algorithm and hybrid biogeography based optimization, experimental results demonstrate that the proposed method is better than, or at least comparable with, previous methods from the literature when considering the quality of the solutions obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
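A discrete migration step in the spirit of the method can be sketched as follows. This is a hedged toy, not the paper's DBBO operators: the rank-based immigration/emigration rates and the binary gene-mask representation are illustrative assumptions.

```python
import random

# One discrete-migration step over binary gene-selection masks ("habitats"),
# ordered best -> worst. Worse habitats immigrate features more often, and
# emigrating habitats are picked with probability proportional to their rank.
def migrate(population, rng):
    """population: list of binary lists, ordered best -> worst by fitness."""
    n = len(population)
    new_pop = [sol[:] for sol in population]
    for i, sol in enumerate(population):
        lam = (i + 1) / n                  # immigration rate: worse rank -> higher
        for d in range(len(sol)):
            if rng.random() < lam:
                weights = [n - j for j in range(n)]       # better rank -> emigrates more
                j = rng.choices(range(n), weights=weights)[0]
                new_pop[i][d] = population[j][d]
    new_pop[0] = population[0][:]          # elitism: the best mask survives unchanged
    return new_pop

rng = random.Random(42)
pop = [[1, 0, 1, 0], [0, 0, 1, 1], [1, 1, 0, 0]]  # best first
print(migrate(pop, rng))
```

In a full wrapper, each mask would be scored by classifier cross-validation accuracy, the population re-sorted, and a discrete mutation step applied before the next migration.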
Advertisement call and genetic structure conservatism: good news for an endangered Neotropical frog
Costa, William P.; Martins, Lucas B.; Nunes-de-Almeida, Carlos H. L.; Toledo, Luís Felipe
2016-01-01
Background: Many amphibian species are negatively affected by habitat change due to anthropogenic activities. Populations distributed over modified landscapes may be subject to local extinction or may be relegated to the remaining—likely isolated and possibly degraded—patches of available habitat. Isolation without gene flow could lead to variability in phenotypic traits owing to differences in local selective pressures such as environmental structure, microclimate, or site-specific species assemblages. Methods: Here, we tested the microevolution hypothesis by evaluating the acoustic parameters of 349 advertisement calls from 15 males from six populations of the endangered amphibian species Proceratophrys moratoi. In addition, we analyzed the genetic distances among populations and the genetic diversity with a haplotype network analysis. We performed cluster analysis on acoustic data based on the Bray-Curtis index of similarity, using the UPGMA method. We correlated acoustic dissimilarities (calculated by Euclidean distance) with geographical and genetic distances among populations. Results: Spectral traits of the advertisement call of P. moratoi presented lower coefficients of variation than did temporal traits, both within and among males. Cluster analyses placed individuals without congruence in population or geographical distance, but recovered the species topology in relation to sister species. The genetic distance among populations was low; it did not exceed 0.4% for the most distant populations, and was not correlated with acoustic distance. Discussion: Both acoustic features and genetic sequences are highly conserved, suggesting that populations could be connected by recent migrations, and that they are subject to stabilizing selective forces. 
Although further studies are required, these findings add to a growing body of literature suggesting that this species would be a good candidate for a reintroduction program without negative effects on communication or genetic impact. PMID:27190717
Methods For Self-Organizing Software
Bouchard, Ann M.; Osbourn, Gordon C.
2005-10-18
A method for dynamically self-assembling and executing software is provided, containing machines that self-assemble execution sequences and data structures. In addition to ordered function calls (commonly found in other software methods), mutual selective bonding between bonding sites of machines actuates one or more of the bonding machines. Two or more machines can be virtually isolated by a construct, called an encapsulant, containing a population of machines, and potentially other encapsulants, that can only bond with each other. A hierarchical software structure can be created using nested encapsulants. Multi-threading is implemented by populations of machines in different encapsulants that interact concurrently. Machines and encapsulants can move in and out of other encapsulants, thereby changing the functionality. Bonding between machines' sites can be deterministic or stochastic, with bonding triggering a sequence of actions that can be implemented by each machine. A self-assembled execution sequence occurs as a sequence of stochastic binding between machines followed by their deterministic actuation. It is the sequence of bonding of machines that determines the execution sequence, so the sequence of instructions need not be contiguous in memory.
Pitchers, W. R.; Brooks, R.; Jennions, M. D.; Tregenza, T.; Dworkin, I.; Hunt, J.
2013-01-01
Phenotypic integration and plasticity are central to our understanding of how complex phenotypic traits evolve. Evolutionary change in complex quantitative traits can be predicted using the multivariate breeders’ equation, but such predictions are only accurate if the matrices involved are stable over evolutionary time. Recent work, however, suggests that these matrices are temporally plastic, spatially variable and themselves evolvable. The data available on phenotypic variance-covariance matrix (P) stability is sparse, and largely focused on morphological traits. Here we compared P for the structure of the complex sexual advertisement call of six divergent allopatric populations of the Australian black field cricket, Teleogryllus commodus. We measured a subset of calls from wild-caught crickets from each of the populations and then a second subset after rearing crickets under common-garden conditions for three generations. In a second experiment, crickets from each population were reared in the laboratory on high- and low-nutrient diets and their calls recorded. In both experiments, we estimated P for call traits and used multiple methods to compare them statistically (Flury hierarchy, geometric subspace comparisons and random skewers). Despite considerable variation in means and variances of individual call traits, the structure of P was largely conserved among populations, across generations and between our rearing diets. Our finding that P remains largely stable, among populations and between environmental conditions, suggests that selection has preserved the structure of call traits in order that they can function as an integrated unit. PMID:23530814
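Of the matrix-comparison methods named above, random skewers is the easiest to sketch: apply the same random selection vectors ("skewers") to both P matrices and correlate the predicted responses. This is a hedged minimal version with invented 2x2 matrices, not the authors' analysis code.

```python
import numpy as np

# Random skewers: similar covariance matrices produce highly correlated
# responses (P @ z) to the same random selection vectors z.
def random_skewers(P1, P2, n_skewers=1000, seed=0):
    rng = np.random.default_rng(seed)
    k = P1.shape[0]
    z = rng.normal(size=(n_skewers, k))
    z /= np.linalg.norm(z, axis=1, keepdims=True)   # unit-length skewers
    r1, r2 = z @ P1, z @ P2                         # predicted responses
    cos = np.sum(r1 * r2, axis=1) / (
        np.linalg.norm(r1, axis=1) * np.linalg.norm(r2, axis=1))
    return cos.mean()

A = np.array([[2.0, 0.5], [0.5, 1.0]])
print(random_skewers(A, A))    # identical matrices: mean correlation 1.0
B = np.array([[1.0, -0.5], [-0.5, 2.0]])
print(random_skewers(A, B))    # divergent matrices: correlation below 1
```

Significance in practice is judged against a null distribution of skewer correlations, which this sketch omits.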
Greene, Charles R; McLennan, Miles Wm; Norman, Robert G; McDonald, Trent L; Jakubczak, Ray S; Richardson, W John
2004-08-01
Bowhead whales, Balaena mysticetus, migrate west during fall approximately 10-75 km off the north coast of Alaska, passing the petroleum developments around Prudhoe Bay. Oil production operations on an artificial island 5 km offshore create sounds heard by some whales. As part of an effort to assess whether migrating whales deflect farther offshore at times with high industrial noise, an acoustical approach was selected for localizing calling whales. The technique incorporated DIFAR (directional frequency and recording) sonobuoy techniques. An array of 11 DASARs (directional autonomous seafloor acoustic recorders) was built and installed with unit-to-unit separation of 5 km. When two or more DASARs detected the same call, the whale location was determined from the bearing intersections. This article describes the acoustic methods used to determine the locations of the calling bowhead whales and shows the types and precision of the data acquired. Calibration transmissions at GPS-measured times and locations provided measures of the individual DASAR clock drift and directional orientation. The standard error of the bearing measurements at distances of 3-4 km was approximately 1.35 degrees after corrections for gain imbalance in the two directional sensors. During 23 days in 2002, 10,587 bowhead calls were detected and 8383 were localized.
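Localization from bearing intersections can be sketched as a small plane-geometry problem. This is a hedged illustration, not the DASAR processing chain: two sensors at invented positions each measure a bearing (clockwise from north) to the call, and the crossing point of the two bearing lines is the estimated location.

```python
import math

# Intersect two bearing lines p_i + t_i * d_i, where each direction is
# d = (sin b, cos b) in (east, north) coordinates for bearing b from north.
def intersect_bearings(p1, b1, p2, b2):
    """p1, p2: (east, north) sensor positions in km; b1, b2: bearings in degrees."""
    d1 = (math.sin(math.radians(b1)), math.cos(math.radians(b1)))
    d2 = (math.sin(math.radians(b2)), math.cos(math.radians(b2)))
    # Solve p1 + t1*d1 = p2 + t2*d2 as a 2x2 linear system (Cramer's rule).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rhs = (p2[0] - p1[0], p2[1] - p1[1])
    t1 = (rhs[0] * (-d2[1]) - (-d2[0]) * rhs[1]) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two sensors 5 km apart; a call 2 km east and 3 km north of sensor 1.
whale = (2.0, 3.0)
s1, s2 = (0.0, 0.0), (5.0, 0.0)
b1 = math.degrees(math.atan2(whale[0] - s1[0], whale[1] - s1[1]))
b2 = math.degrees(math.atan2(whale[0] - s2[0], whale[1] - s2[1]))
print(intersect_bearings(s1, b1, s2, b2))   # recovers the call position
```

With more than two detecting sensors, a least-squares combination of all bearing lines would replace the single intersection.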
A Prototype Model for Automating Nursing Diagnosis, Nurse Care Planning and Patient Classification.
1986-03-01
Each diagnosis has an assessment level. Assessment levels are defining characteristics observed by the nurse or subjectively stated by the patient... characteristics of this order line. Select IV Order (Figure 4.1.1.1e) is the first screen of a series of three. Select IV Order has up to 10 selections... for inpatient orders. Input Files Used: IVC.Scr and Procfile.Prg. Output Files Used: None. Calling Routine: IUB.Prg. Routine Called: None.
Hole filling and library optimization: application to commercially available fragment libraries.
An, Yuling; Sherman, Woody; Dixon, Steven L
2012-09-15
Compound libraries comprise an integral component of drug discovery in the pharmaceutical and biotechnology industries. While in-house libraries often contain millions of molecules, this number pales in comparison to the accessible space of drug-like molecules. Therefore, care must be taken when adding new compounds to an existing library in order to ensure that unexplored regions in the chemical space are filled efficiently while not needlessly increasing the library size. In this work, we present an automated method to fill holes in an existing library using compounds from an external source and apply it to commercially available fragment libraries. The method, called Canvas HF, uses distances computed from 2D chemical fingerprints and selects compounds that fill vacuous regions while not suffering from the problem of selecting only compounds at the edge of the chemical space. We show that the method is robust with respect to different databases and the number of requested compounds to retrieve. We also present an extension of the method where chemical properties can be considered simultaneously with the selection process to bias the compounds toward a desired property space without imposing hard property cutoffs. We compare the results of Canvas HF to those obtained with a standard sphere exclusion method and with random compound selection and find that Canvas HF performs favorably. Overall, the method presented here offers an efficient and effective hole-filling strategy to augment compound libraries with compounds from external sources. The method does not have any fit parameters and therefore it should be applicable in most hole-filling applications. Copyright © 2012 Elsevier Ltd. All rights reserved.
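The hole-filling idea can be sketched as a greedy max-min selection over fingerprint distances. This is not the Canvas HF algorithm itself (which also avoids over-selecting edge-of-space compounds and can bias by properties); the fingerprints below are toy sets of "on" bit indices and the Tanimoto distance is the standard one.

```python
# Greedily pick external compounds whose minimum Tanimoto distance to the
# growing library is largest, so the most vacuous regions are filled first.
def tanimoto_distance(a, b):
    inter = len(a & b)
    return 1.0 - inter / (len(a) + len(b) - inter)

def fill_holes(library, candidates, n_pick):
    library = [set(f) for f in library]
    pool = {name: set(f) for name, f in candidates.items()}
    picked = []
    for _ in range(n_pick):
        # Score each candidate by its distance to the nearest library member.
        name = max(pool, key=lambda c: min(tanimoto_distance(pool[c], f)
                                           for f in library))
        picked.append(name)
        library.append(pool.pop(name))   # selected compounds join the library
    return picked

library = [{1, 2, 3}, {2, 3, 4}]
candidates = {"near": {1, 2, 3, 5}, "far": {7, 8, 9}}
print(fill_holes(library, candidates, n_pick=1))   # ['far']
```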
π-Clamp-mediated cysteine conjugation
NASA Astrophysics Data System (ADS)
Zhang, Chi; Welborn, Matthew; Zhu, Tianyu; Yang, Nicole J.; Santos, Michael S.; van Voorhis, Troy; Pentelute, Bradley L.
2016-02-01
Site-selective functionalization of complex molecules is one of the most significant challenges in chemistry. Typically, protecting groups or catalysts must be used to enable the selective modification of one site among many that are similarly reactive, and general strategies that selectively tune the local chemical environment around a target site are rare. Here, we show a four-amino-acid sequence (Phe-Cys-Pro-Phe), which we call the ‘π-clamp’, that tunes the reactivity of its cysteine thiol for site-selective conjugation with perfluoroaromatic reagents. We use the π-clamp to selectively modify one cysteine site in proteins containing multiple endogenous cysteine residues. These examples include antibodies and cysteine-based enzymes that would be difficult to modify selectively using standard cysteine-based methods. Antibodies modified using the π-clamp retained binding affinity to their targets, enabling the synthesis of site-specific antibody-drug conjugates for selective killing of HER2-positive breast cancer cells. The π-clamp is an unexpected approach to mediate site-selective chemistry and provides new avenues to modify biomolecules for research and therapeutics.
Lack of phonotactic preferences of female frogs and its consequences for signal evolution.
Velásquez, Nelson A; Valdés, Jose Luis; Vásquez, Rodrigo A; Penna, Mario
2015-09-01
Sexual selection is one of the main evolutionary forces that drive signal evolution. In previous studies, we found that males of Pleurodema thaul, a frog with an extensive latitudinal distribution in Chile, emit advertisement calls that show remarkable variation among populations. In addition, this variation is related to intense inter-male acoustic competition (intra-sexual selection) occurring within each population. However, the extent to which female preferences contribute to the observed signal divergence is unclear. To study the responsiveness of females in each population, we stimulated females with synthetic calls designed with the acoustic structure of their own population, and responsive females were subsequently subjected to a two-choice experiment in which they were stimulated with synthetic calls of their own population versus a call of a foreign population. Females did not show phonotactic preferences for calls of their own or foreign populations, as measured with both linear and circular variables. The lack of phonotactic preferences suggests an absence of participation of inter-sexual selection processes in the divergence of the acoustic signals of P. thaul, highlighting the importance of intra-sexual selection for the evolution of these signals. These results concur with studies in other vertebrates emphasizing the relevance of interactions among males for the evolution of acoustic communication systems. Copyright © 2015. Published by Elsevier B.V.
RGSS-ID: an approach to new radiologic reporting system.
Ikeda, M; Sakuma, S; Maruyama, K
1990-01-01
RGSS-ID is a developmental computer system that applies artificial intelligence (AI) methods to a reporting system. A representation scheme called Generalized Finding Representation (GFR) is proposed to bridge the gap between natural language expressions in the radiology report and AI methods. Entry in RGSS-ID is performed mainly by selecting items; the system allows a radiologist to compose a sentence that can be completely parsed by the computer. Further, RGSS-ID encodes findings into the expression corresponding to GFR and stores this expression in the knowledge base. The final printed report is produced in natural language.
Maltarollo, Vinícius G; Homem-de-Mello, Paula; Honorio, Káthia M
2011-10-01
Current research on treatments for metabolic diseases involves a class of biological receptors called peroxisome proliferator-activated receptors (PPARs), which control the metabolism of carbohydrates and lipids. A subclass of these receptors, PPARδ, regulates several metabolic processes, and the substances that activate them are being studied as new drug candidates for the treatment of diabetes mellitus and metabolic syndrome. In this study, several PPARδ agonists with experimental biological activity were selected for a structural and chemical study. Electronic, stereochemical, lipophilic and topological descriptors were calculated for the selected compounds using various theoretical methods, such as density functional theory (DFT). Fisher's weight and principal components analysis (PCA) methods were employed to select the most relevant variables for this study. The partial least squares (PLS) method was used to construct the multivariate statistical model, and the best model obtained had 4 PCs, q² = 0.80 and r² = 0.90, indicating good internal consistency. The prediction residuals calculated for the compounds in the test set had low values, indicating the good predictive capability of our PLS model. The model obtained in this study is reliable and can be used to predict the biological activity of new untested compounds. Docking studies have also confirmed the importance of the molecular descriptors selected for this system.
Brooks, Benjamin
2008-01-01
Small to Medium Sized Enterprises (SMEs) form the majority of Australian businesses. This study uses ethnographic research methods to describe the organizational culture of a small furniture-manufacturing business in southern Australia. Results show a range of cultural assumptions variously 'embedded' within the enterprise. In line with memetics, Richard Dawkins's cultural application of Charles Darwin's theory of Evolution by Natural Selection, the author suggests that these assumptions compete to be replicated and retained within the organization. The author suggests that dominant assumptions are naturally selected, and that this selection can be better understood by considering the cultural assumptions with reference to Darwin's original principles and Frederik Barth's anthropological framework of knowledge. The results are discussed with reference to safety systems, negative cultural elements called Cultural Safety Viruses, and how our understanding of this particular organizational culture might be used to build resistance to these viruses.
LS Bound based gene selection for DNA microarray data.
Zhou, Xin; Mao, K Z
2005-04-15
One problem with discriminant analysis of DNA microarray data is that each sample is represented by quite a large number of genes, and many of them are irrelevant, insignificant or redundant to the discriminant problem at hand. Methods for selecting important genes are, therefore, of much significance in microarray data analysis. In the present study, a new criterion, called LS Bound measure, is proposed to address the gene selection problem. The LS Bound measure is derived from leave-one-out procedure of LS-SVMs (least squares support vector machines), and as the upper bound for leave-one-out classification results it reflects to some extent the generalization performance of gene subsets. We applied this LS Bound measure for gene selection on two benchmark microarray datasets: colon cancer and leukemia. We also compared the LS Bound measure with other evaluation criteria, including the well-known Fisher's ratio and Mahalanobis class separability measure, and other published gene selection algorithms, including Weighting factor and SVM Recursive Feature Elimination. The strength of the LS Bound measure is that it provides gene subsets leading to more accurate classification results than the filter method while its computational complexity is at the level of the filter method. A companion website can be accessed at http://www.ntu.edu.sg/home5/pg02776030/lsbound/. The website contains: (1) the source code of the gene selection algorithm; (2) the complete set of tables and figures regarding the experimental study; (3) proof of the inequality (9). ekzmao@ntu.edu.sg.
2015-01-01
Background Investigations into novel biomarkers using omics techniques generate large amounts of data. Due to their size and numbers of attributes, these data are suitable for analysis with machine learning methods. A key component of typical machine learning pipelines for omics data is feature selection, which is used to reduce the raw high-dimensional data to a tractable number of features. Feature selection needs to balance the objective of using as few features as possible against maintaining high predictive power. This balance is crucial when the goal of data analysis is the identification of highly accurate but small panels of biomarkers with potential clinical utility. In this paper we propose a heuristic for the selection of very small feature subsets, via an iterative feature elimination process that is guided by rule-based machine learning, called RGIFE (Rule-guided Iterative Feature Elimination). We use this heuristic to identify putative biomarkers of osteoarthritis (OA), articular cartilage degradation and synovial inflammation, using both proteomic and transcriptomic datasets. Results and discussion Our RGIFE heuristic increased the classification accuracies achieved on all datasets relative to using no feature selection, and performed well in a comparison with other feature selection methods. Using this method, the datasets were reduced to a smaller number of genes or proteins, including those known to be relevant to OA, cartilage degradation and joint inflammation. The results show the RGIFE feature reduction method to be suitable for analysing both proteomic and transcriptomic data. Methods that generate large 'omics' datasets are increasingly being used in the area of rheumatology. Conclusions Feature reduction methods are advantageous for the analysis of omics data in the field of rheumatology, as the application of such techniques is likely to result in improvements in diagnosis, treatment and drug discovery. PMID:25923811
Mirmohseni, A; Abdollahi, H; Rostamizadeh, K
2007-02-28
A net analyte signal (NAS)-based method called HLA/GO was applied for the selective determination of a binary mixture of ethanol and water with a quartz crystal nanobalance (QCN) sensor. A full factorial design was applied for the formation of the calibration and prediction sets in the concentration ranges 5.5-22.2 microg mL(-1) for ethanol and 7.01-28.07 microg mL(-1) for water. An optimal time range was selected by a procedure based on the calculation of the net analyte signal regression plot in each considered time window for every test sample. A moving window strategy was used to search for the region with maximum linearity of the NAS regression plot (minimum error indicator) and minimum PRESS value. On the basis of the obtained results, the differences in the adsorption profiles in the time range between 1 and 600 s were used to determine mixtures of both compounds by the HLA/GO method. The calculation of the net analyte signal using the HLA/GO method allows the determination of several figures of merit, such as selectivity, sensitivity, analytical sensitivity and limit of detection, for each component. To check the ability of the proposed method to select linear regions of the adsorption profile, a test for detecting non-linear regions of adsorption profile data in the presence of methanol was also described. The results showed that the method was successfully applied to the determination of ethanol and water.
2009-01-01
Background Marginal posterior genotype probabilities need to be computed for genetic analyses such as genetic counseling in humans and selective breeding in animal and plant species. Methods In this paper, we describe a peeling-based, deterministic, exact algorithm to efficiently compute genotype probabilities for every member of a pedigree with loops, without recourse to junction-tree methods from graph theory. The efficiency in computing the likelihood by peeling comes from storing intermediate results in multidimensional tables called cutsets. Computing marginal genotype probabilities for individual i requires recomputing the likelihood for each of the possible genotypes of individual i. This can be done efficiently by storing intermediate results in two types of cutsets, called anterior and posterior cutsets, and reusing these intermediate results to compute the likelihood. Examples A small example is used to illustrate the theoretical concepts discussed in this paper, and marginal genotype probabilities are computed at a monogenic disease locus for every member in a real cattle pedigree. PMID:19958551
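What peeling computes can be shown by brute force on a tiny pedigree (father, mother, child) at a biallelic locus; the penetrance values and allele frequency below are illustrative assumptions, and the point of anterior/posterior cutsets is precisely to avoid this exponential enumeration over all genotype configurations:

```python
from itertools import product

# Genotypes are coded as the count of the disease allele (0, 1, 2).
# Penetrance values are illustrative, not taken from the paper.

def transmit(parent_g):
    """P(transmitted allele = disease allele | parent genotype)."""
    return parent_g / 2.0

def joint(gf, gm, gc, founder_freq, penetrance, phenotypes):
    """Unnormalized joint probability of one genotype configuration."""
    p = founder_freq[gf] * founder_freq[gm]
    # Mendelian transmission: child's allele count from each parent
    tf, tm = transmit(gf), transmit(gm)
    child = {0: (1 - tf) * (1 - tm),
             1: tf * (1 - tm) + (1 - tf) * tm,
             2: tf * tm}
    p *= child[gc]
    for g, affected in zip((gf, gm, gc), phenotypes):
        p *= penetrance[g] if affected else (1 - penetrance[g])
    return p

def marginal(member, founder_freq, penetrance, phenotypes):
    """Marginal genotype probabilities for one member (0=father, 1=mother, 2=child)."""
    totals = [0.0, 0.0, 0.0]
    for gf, gm, gc in product(range(3), repeat=3):
        totals[(gf, gm, gc)[member]] += joint(gf, gm, gc, founder_freq,
                                              penetrance, phenotypes)
    z = sum(totals)
    return [t / z for t in totals]

# Hardy-Weinberg founder priors for allele frequency q = 0.2 and an
# (assumed) dominant penetrance; the child is affected, parents are not.
q = 0.2
priors = {0: (1 - q) ** 2, 1: 2 * q * (1 - q), 2: q ** 2}
penetrance = {0: 0.01, 1: 0.9, 2: 0.9}
probs = marginal(2, priors, penetrance, phenotypes=(False, False, True))
```

Peeling obtains the same marginals by summing over one individual at a time and caching the partial sums in cutsets, instead of enumerating all configurations.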
ERIC Educational Resources Information Center
Tai, Shu-Ju
2013-01-01
As researchers in the CALL teacher education field noted, teachers play the pivotal role in the language learning classrooms because they are the gate keepers who decide whether technology or CALL has a place in their teaching, and they select technology to support their teaching, which determines what CALL activities language learners are exposed…
Taylor, Ryan C.; Buchanan, Bryant W.; Doherty, Jessie L.
2007-01-01
Anuran amphibians have provided an excellent system for the study of animal communication and sexual selection. Studies of female mate choice in anurans, however, have focused almost exclusively on the role of auditory signals. In this study, we examined the effect of both auditory and visual cues on female choice in the squirrel treefrog. Our experiments used a two-choice protocol in which we varied male vocalization properties, visual cues, or both, to assess female preferences for the different cues. Females discriminated against high-frequency calls and expressed a strong preference for calls that contained more energy per unit time (faster call rate). Females expressed a preference for the visual stimulus of a model of a calling male when call properties at the two speakers were held the same. They also showed a significant attraction to a model possessing a relatively large lateral body stripe. These data indicate that visual cues do play a role in mate attraction in this nocturnal frog species. Furthermore, this study adds to a growing body of evidence that suggests that multimodal signals play an important role in sexual selection.
Kavakiotis, Ioannis; Samaras, Patroklos; Triantafyllidis, Alexandros; Vlahavas, Ioannis
2017-11-01
Single Nucleotide Polymorphisms (SNPs) are nowadays becoming the marker of choice for biological analyses involving a wide range of applications with great medical, biological, economic and environmental interest. Classification tasks, i.e. the assignment of individuals to groups of origin based on their (multi-locus) genotypes, are performed in many fields such as forensic investigations and discrimination between wild and/or farmed populations, among others. These tasks should be performed with a small number of loci, for computational as well as biological reasons. Thus, feature selection should precede classification tasks, especially for Single Nucleotide Polymorphism (SNP) datasets, where the number of features can amount to hundreds of thousands or millions. In this paper, we present a novel data mining approach, called FIFS (Frequent Item Feature Selection), based on the use of frequent items for the selection of the most informative markers from population genomic data. It is a modular method consisting of two main components. The first identifies the most frequent and unique genotypes for each sampled population. The second selects the most appropriate among them, in order to create the informative SNP subsets to be returned. The proposed method (FIFS) was tested on a real dataset comprising comprehensive coverage of the pig breed types present in Britain. This dataset consisted of 446 individuals divided into 14 sub-populations, genotyped at 59,436 SNPs. Our method outperforms the state-of-the-art and baseline methods in every case. More specifically, our method surpassed the assignment accuracy threshold of 95% needing only half the number of SNPs selected by other methods (FIFS: 28 SNPs; Delta: 70 SNPs; pairwise FST: 70 SNPs; In: 100 SNPs). CONCLUSION: Our approach successfully deals with the problem of informative marker selection in high-dimensional genomic datasets.
It offers better results compared to existing approaches and can aid biologists in selecting the most informative markers with maximum discrimination power for optimization of cost-effective panels with applications related to e.g. species identification, wildlife management, and forensics. Copyright © 2017 Elsevier Ltd. All rights reserved.
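The first FIFS component, identifying genotypes that are both frequent within a population and absent elsewhere, can be sketched with `collections.Counter`; the function name and the frequency threshold are illustrative assumptions, not the paper's exact rules:

```python
from collections import Counter

# Sketch of FIFS's first component at a single SNP: for each population,
# find genotypes that are frequent within the population and absent in the
# others. The 50% threshold is an assumption for illustration.

def frequent_unique_genotypes(populations, min_freq=0.5):
    """populations: {name: list of genotype strings, one per individual},
    all at one SNP. Returns {name: set of informative genotypes}."""
    counts = {p: Counter(g) for p, g in populations.items()}
    result = {}
    for p, cnt in counts.items():
        n = len(populations[p])
        informative = set()
        for geno, c in cnt.items():
            if c / n < min_freq:
                continue  # not frequent enough within population p
            # "unique": genotype not observed in any other population
            if all(geno not in counts[q] for q in counts if q != p):
                informative.add(geno)
        result[p] = informative
    return result

pops = {"breed_A": ["AA", "AA", "AG", "AA"],
        "breed_B": ["GG", "GG", "AG", "GG"]}
sel = frequent_unique_genotypes(pops)
```

The second FIFS component would then rank such genotypes across SNPs to assemble a small informative panel.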
Effect of using different cover image quality to obtain robust selective embedding in steganography
NASA Astrophysics Data System (ADS)
Abdullah, Karwan Asaad; Al-Jawad, Naseer; Abdulla, Alan Anwer
2014-05-01
One of the common types of steganography is to conceal an image as a secret message in another image, normally called a cover image; the resulting image is called a stego image. The aim of this paper is to investigate the effect of using different cover image qualities, and also to analyse the use of different bit-planes in terms of robustness against well-known active attacks such as gamma, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e. other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps: first, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting conditions; second, it nominates the most useful blocks for embedding based on their entropy and average; third, it selects the right bit-plane for embedding. This kind of block selection makes the embedding process scatter the secret message(s) randomly around the cover image. Different tests have been performed for selecting a proper block size, which is related to the nature of the cover image used. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for the embedding. Experimental results demonstrate that the quality of the cover image has an effect when the stego image is attacked by different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognised visually within the stego image.
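Embedding in a higher bit-plane rather than the LSB can be illustrated with simple bit operations; the plane index below is an arbitrary example, not the plane the paper selects:

```python
# Sketch of embedding a secret bit in a higher bit-plane of an 8-bit pixel
# (plane 0 is the LSB). Embedding above the LSB trades some visual quality
# for robustness against attacks that perturb the low-order bits.

def embed_bit(pixel, bit, plane):
    """Set the given bit-plane of an 8-bit pixel value to `bit` (0 or 1)."""
    mask = 1 << plane
    return (pixel & ~mask) | (bit << plane)

def extract_bit(pixel, plane):
    return (pixel >> plane) & 1

p = embed_bit(0b10110100, 1, plane=3)
assert extract_bit(p, 3) == 1
# A small perturbation of the low-order bits (e.g. mild noise or filtering)
# leaves a plane-3 payload intact, unlike an LSB payload:
assert extract_bit(p + 2, 3) == extract_bit(p, 3)
```

The block-selection steps would decide *which* pixels receive such bits, based on lighting, entropy, and average.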
NASA Astrophysics Data System (ADS)
Sun, Pei; Fang, Z. Zak; Zhang, Ying; Xia, Yang
2017-12-01
Commercial spherical Ti powders for additive manufacturing applications are produced today by melt-atomization methods at relatively high costs. A meltless production method, called granulation-sintering-deoxygenation (GSD), was developed recently to produce spherical Ti alloy powder at a significantly reduced cost. In this new process, fine hydrogenated Ti particles are agglomerated to form spherical granules, which are then sintered to dense spherical particles. After sintering, the solid fully dense spherical Ti alloy particles are deoxygenated using novel low-temperature deoxygenation processes with either Mg or Ca. This technical communication presents results of 3D printing using GSD powder and the selective laser melting (SLM) technique. The results showed that tensile properties of parts fabricated from spherical GSD Ti-6Al-4V powder by SLM are comparable with typical mill-annealed Ti-6Al-4V. The characteristics of 3D printed Ti-6Al-4V from GSD powder are also compared with that of commercial materials.
Autonomous learning in gesture recognition by using lobe component analysis
NASA Astrophysics Data System (ADS)
Lu, Jian; Weng, Juyang
2007-02-01
Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). In order to assure robot safety when gestures are used in robot control, the interface must be implemented reliably and accurately. As with other PR applications, 1) feature selection (or model establishment) and 2) training from samples largely affect the performance of gesture recognition. For 1), a simple model with 6 feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures, and the movement of the arms is not considered. These restrictions reduce misrecognition but are not unreasonable. For 2), a new biological network method, called lobe component analysis (LCA), is used in unsupervised learning. Lobe components, corresponding to high concentrations in the probability density of the neuronal input, are orientation-selective cells that follow the Hebbian rule and lateral inhibition. Due to the LCA method's balanced learning between global and local features, a large number of samples can be used in learning efficiently.
Application of Methods of Numerical Analysis to Physical and Engineering Data.
1980-10-15
directed algorithm would seem to be called for. However, 1(0) is itself a random process, making its gradient too unreliable for such a sensitive algorithm...radiation energy on the detector. Active laser systems, on the other hand, have now created the possibility for extremely narrow pass-band systems...emitted by the earth and its atmosphere. The broad spectral range was selected so that the field of view of the detector could be narrowed to obtain
Derkach, Andriy; Chiang, Theodore; Gong, Jiafen; Addis, Laura; Dobbins, Sara; Tomlinson, Ian; Houlston, Richard; Pal, Deb K.; Strug, Lisa J.
2014-01-01
Motivation: Sufficiently powered case–control studies with next-generation sequence (NGS) data remain prohibitively expensive for many investigators. If feasible, a more efficient strategy would be to include publicly available sequenced controls. However, these studies can be confounded by differences in sequencing platform; alignment, single nucleotide polymorphism and variant calling algorithms; read depth; and selection thresholds. Assuming one can match cases and controls on the basis of ethnicity and other potential confounding factors, and one has access to the aligned reads in both groups, we investigate the effect of systematic differences in read depth and selection threshold when comparing allele frequencies between cases and controls. We propose a novel likelihood-based method, the robust variance score (RVS), that substitutes genotype calls by their expected values given observed sequence data. Results: We show theoretically that the RVS eliminates read depth bias in the estimation of minor allele frequency. We also demonstrate that, using simulated and real NGS data, the RVS method controls Type I error and has comparable power to the ‘gold standard’ analysis with the true underlying genotypes for both common and rare variants. Availability and implementation: An RVS R script and instructions can be found at strug.research.sickkids.ca, and at https://github.com/strug-lab/RVS. Contact: lisa.strug@utoronto.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24733292
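The substitution at the heart of the RVS, replacing a hard genotype call with its expected value given the observed reads, can be sketched under a simple biallelic model; the prior and the per-read error rate below are illustrative assumptions, not the paper's exact formulation:

```python
# Sketch: instead of a hard genotype call, use the expected genotype
# E[G | reads] under a toy biallelic model. This is what removes the
# read-depth bias: shallow sites keep their uncertainty.

def expected_genotype(reads, alt_freq=0.1, err=0.01):
    """reads: list of 'ref'/'alt' base observations at one site.
    Returns E[G | reads], where G counts alt alleles (0, 1, 2)."""
    q = alt_freq
    priors = {0: (1 - q) ** 2, 1: 2 * q * (1 - q), 2: q ** 2}  # Hardy-Weinberg
    # P(read = alt | G): a sequencing error flips a base with probability err
    p_alt_given_g = {0: err, 1: 0.5, 2: 1 - err}
    post = {}
    for g, pg in priors.items():
        like = 1.0
        for r in reads:
            pa = p_alt_given_g[g]
            like *= pa if r == "alt" else (1 - pa)
        post[g] = pg * like
    z = sum(post.values())
    return sum(g * p / z for g, p in post.items())

# With deep coverage the expectation approaches the hard call; with shallow
# coverage it stays between genotypes, reflecting the uncertainty.
deep = expected_genotype(["alt"] * 20)
shallow = expected_genotype(["alt"])
```

Allele-frequency estimates built from these expectations are then comparable across cases and controls sequenced at different depths.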
Rough sets and Laplacian score based cost-sensitive feature selection
Yu, Shenglong
2018-01-01
Cost-sensitive feature selection learning is an important preprocessing step in machine learning and data mining. Most existing cost-sensitive feature selection algorithms are heuristic algorithms, which evaluate the importance of each feature individually and select features one by one; these algorithms therefore do not consider the relationships among features. In this paper, we propose a new algorithm for minimal-cost feature selection, called rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and the Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes the relationships among features into consideration through the locality preservation of the Laplacian score. We select a feature subset with maximal feature importance and minimal cost, where the cost is given by three different distributions to simulate different applications. Unlike existing cost-sensitive feature selection algorithms, our algorithm simultaneously selects a predetermined number of “good” features. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum-cost subset. In addition, the results of our method are more promising than those of other cost-sensitive feature selection algorithms. PMID:29912884
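The Laplacian score component, which rewards features that preserve local structure (lower is better), can be sketched in plain Python; the heat-kernel similarity and toy data are assumptions, and the rough-set importance and cost terms of the full method are omitted:

```python
import math

# Sketch of the Laplacian score (lower = feature better preserves locality).
# S_ij is a heat kernel over all sample pairs; the paper combines this score
# with rough-set importance and per-feature costs, which are omitted here.

def laplacian_score(feature, X, t=1.0):
    """feature: column index; X: list of sample vectors."""
    n = len(X)
    S = [[math.exp(-sum((a - b) ** 2 for a, b in zip(X[i], X[j])) / t)
          for j in range(n)] for i in range(n)]
    f = [x[feature] for x in X]
    d = [sum(row) for row in S]                            # sample degrees
    mean = sum(fi * di for fi, di in zip(f, d)) / sum(d)   # weighted mean
    fc = [fi - mean for fi in f]                           # centered feature
    # Numerator: local variation weighted by similarity (2 * f'Lf)
    num = sum(S[i][j] * (fc[i] - fc[j]) ** 2
              for i in range(n) for j in range(n))
    den = sum(di * fi ** 2 for di, fi in zip(d, fc))       # f'Df
    return num / den

# Feature 0 defines two tight clusters; feature 1 varies arbitrarily.
X = [[0.0, 0.9], [0.1, 0.1], [0.2, 0.5],
     [5.0, 0.2], [5.1, 0.8], [5.2, 0.4]]
assert laplacian_score(0, X) < laplacian_score(1, X)
```

A cost-sensitive selector would trade this score off against each feature's acquisition cost rather than using it alone.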
Discriminative least squares regression for multiclass classification and feature selection.
Xiang, Shiming; Nie, Feiping; Meng, Gaofeng; Pan, Chunhong; Zhang, Changshui
2012-11-01
This paper presents a framework of discriminative least squares regression (LSR) for multiclass classification and feature selection. The core idea is to enlarge the distance between different classes under the conceptual framework of LSR. First, a technique called ε-dragging is introduced to force the regression targets of different classes to move along opposite directions, such that the distances between classes are enlarged. Then, the ε-draggings are integrated into the LSR model for multiclass classification. Our learning framework, referred to as discriminative LSR, has a compact model form, with no need to train two-class machines that are independent of each other. With its compact form, this model can be naturally extended for feature selection. This goal is achieved via the L2,1 norm of the transformation matrix, generating a sparse learning model for feature selection. The model for multiclass classification and its extension for feature selection are finally solved elegantly and efficiently. Experimental evaluation over a range of benchmark datasets indicates the validity of our method.
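The ε-dragging construction can be illustrated by building dragged targets from 0/1 class indicators; in the paper the drag magnitudes are learned jointly with the regression, so the fixed ε here only illustrates the direction matrix B (+1 for the true class, -1 otherwise):

```python
# Sketch of ε-dragging: starting from 0/1 class-indicator targets, the
# true-class target is pushed above 1 and wrong-class targets below 0,
# enlarging the margin between classes. A fixed eps stands in for the
# per-entry drag magnitudes M that the paper learns jointly.

def dragged_targets(labels, n_classes, eps=0.2):
    targets = []
    for y in labels:
        row = []
        for c in range(n_classes):
            base = 1.0 if c == y else 0.0
            direction = 1.0 if c == y else -1.0   # the direction matrix B
            row.append(base + direction * eps)
        targets.append(row)
    return targets

T = dragged_targets([0, 2, 1], n_classes=3)
# e.g. the sample with label 0 gets targets near [1.2, -0.2, -0.2]
```

The least squares regression is then fitted to these relaxed targets instead of the raw indicators.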
Defects diagnosis in laser brazing using near-infrared signals based on empirical mode decomposition
NASA Astrophysics Data System (ADS)
Cheng, Liyong; Mi, Gaoyang; Li, Shuo; Wang, Chunming; Hu, Xiyuan
2018-03-01
Real-time monitoring of laser welding plays an important role in modern automated production, and online defect diagnosis needs to be implemented. In this study, the status of laser brazing was monitored in real time using an infrared photoelectric sensor. Four kinds of braze seams (healthy weld, unfilled weld, hole weld and rough-surface weld) along with the corresponding near-infrared signals were obtained. Further, a method called Empirical Mode Decomposition (EMD) was applied to analyze the near-infrared signals. The results showed that the EMD method performed well in eliminating noise in the near-infrared signals. The correlation coefficient was then used to select the Intrinsic Mode Function (IMF) components most sensitive to the weld defects, and a more accurate signal was reconstructed with the selected IMF components. Simultaneously, the spectrum of the selected IMF components was computed using the fast Fourier transform, and the frequency characteristics were clearly revealed. The frequency energy of different frequency bands was computed to diagnose the defects, and there was a significant difference among the four types of weld defects. This approach proved to be an effective and efficient method for monitoring laser brazing defects.
GAPIT: genome association and prediction integrated tool.
Lipka, Alexander E; Tian, Feng; Wang, Qishan; Peiffer, Jason; Li, Meng; Bradbury, Peter J; Gore, Michael A; Buckler, Edward S; Zhang, Zhiwu
2012-09-15
Software programs that conduct genome-wide association studies and genomic prediction and selection need to use methodologies that maximize statistical power, provide high prediction accuracy and run in a computationally efficient manner. We developed an R package called Genome Association and Prediction Integrated Tool (GAPIT) that implements advanced statistical methods including the compressed mixed linear model (CMLM) and CMLM-based genomic prediction and selection. The GAPIT package can handle large datasets in excess of 10 000 individuals and 1 million single-nucleotide polymorphisms with minimal computational time, while providing user-friendly access and concise tables and graphs to interpret results. http://www.maizegenetics.net/GAPIT. zhiwu.zhang@cornell.edu Supplementary data are available at Bioinformatics online.
Learning to improve iterative repair scheduling
NASA Technical Reports Server (NTRS)
Zweben, Monte; Davis, Eugene
1992-01-01
This paper presents a general learning method for dynamically selecting between repair heuristics in an iterative repair scheduling system. The system employs a version of explanation-based learning called Plausible Explanation-Based Learning (PEBL) that uses multiple examples to confirm conjectured explanations. The basic approach is to conjecture contradictions between a heuristic and statistics that measure the quality of the heuristic. When these contradictions are confirmed, a different heuristic is selected. To motivate the utility of this approach we present an empirical evaluation of the performance of a scheduling system with respect to two different repair strategies. We show that the scheduler that learns to choose between the heuristics outperforms the same scheduler with any one of two heuristics alone.
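The switch-on-contradiction idea can be sketched without the PEBL machinery: track a quality statistic for the heuristic in use and switch when the statistic contradicts the conjecture that the heuristic helps. The window, threshold, and toy heuristics below are assumptions for illustration:

```python
# Sketch of dynamic selection between repair heuristics. A running success
# rate stands in for PEBL's confirmation of conjectured contradictions over
# multiple examples; window and threshold are illustrative assumptions.

class HeuristicSelector:
    def __init__(self, heuristics, window=20, threshold=0.4):
        self.heuristics = heuristics
        self.current = 0          # index of the heuristic in use
        self.recent = []          # 1 = repair improved the schedule, 0 = not
        self.window = window
        self.threshold = threshold

    def record(self, improved):
        self.recent.append(1 if improved else 0)
        if len(self.recent) >= self.window:
            rate = sum(self.recent) / len(self.recent)
            if rate < self.threshold:   # contradiction confirmed: switch
                self.current = (self.current + 1) % len(self.heuristics)
            self.recent = []

    def repair(self, state):
        return self.heuristics[self.current](state)

# Toy "repairs" on a numeric conflict count: one heuristic never helps,
# the other always removes one conflict.
stuck = lambda conflicts: conflicts
greedy = lambda conflicts: conflicts - 1

sel = HeuristicSelector([stuck, greedy])
state = 100
for _ in range(60):
    new = sel.repair(state)
    sel.record(new < state)
    state = new
```

After the first window the statistics contradict the usefulness of `stuck`, so the selector switches to `greedy` and keeps it.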
Berger, Vance W.
2014-01-01
Recently a great deal of attention has been paid to conflicts of interest in medical research, and the Institute of Medicine has called for more research into this important area. One research question that has not received sufficient attention concerns the mechanisms of action by which conflicts of interest can result in biased and/or flawed research. What discretion do conflicted researchers have to sway the results one way or the other? We address this issue from the perspective of selective inertia, or an unnatural selection of research methods based on which are most likely to establish the preferred conclusions, rather than on which are most valid. In many cases it is abundantly clear that a method that is not being used in practice is superior to the one that is being used in practice, at least from the perspective of validity, and that it is only inertia, as opposed to any serious suggestion that the incumbent method is superior (or even comparable), that keeps the inferior procedure in use, to the exclusion of the superior one. By focusing on these flawed research methods we can go beyond statements of potential harm from real conflicts of interest, and can more directly assess actual (not potential) harm. PMID:25150846
Brunovskis, Anette; Surtees, Rebecca
2010-01-01
Recent discussions of trafficking research have included calls for more innovative studies and new methodologies in order to move beyond the current trafficking narrative, which is often based on unrepresentative samples and overly simplified images. While new methods can potentially play a role in expanding the knowledge base on trafficking, this article argues that the solution is not entirely about applying new methods, but as much about using current methods to greater effect and with careful attention to their limitations and ethical constraints. Drawing on the authors' experience in researching trafficking issues in a number of projects over the past decade, the article outlines and exemplifies some of the methodological and ethical issues to be considered and accommodated when conducting research with trafficked persons -- including unrepresentative samples; access to respondents; selection biases by "gatekeepers" and self selection by potential respondents. Such considerations should inform not only how research is undertaken but also how this information is read and understood. Moreover, many of these considerations equally apply when considering the application of new methods within this field. The article maintains that a better understanding of how these issues come into play and inform trafficking research will translate into tools for conducting improved research in this field and, by implication, new perspectives on human trafficking.
An evolutionary algorithm for large traveling salesman problems.
Tsai, Huai-Kuang; Yang, Jinn-Moon; Tsai, Yuan-Fang; Kao, Cheng-Yan
2004-08-01
This work proposes an evolutionary algorithm, called the heterogeneous selection evolutionary algorithm (HeSEA), for solving large traveling salesman problems (TSPs). The strengths and limitations of numerous well-known genetic operators, along with local search methods for TSPs, are first analyzed in terms of their solution quality and their mechanisms for preserving and adding edges. Based on this analysis, a new approach, HeSEA, is proposed which integrates edge assembly crossover (EAX) and Lin-Kernighan (LK) local search through family competition and heterogeneous pairing selection. This study demonstrates experimentally that EAX and LK can compensate for each other's disadvantages. Family competition and heterogeneous pairing selection are used to maintain the diversity of the population, which is especially useful for evolutionary algorithms in solving large TSPs. The proposed method was evaluated on 16 well-known TSPs in which the number of cities ranges from 318 to 13,509. Experimental results indicate that HeSEA performs well and is very competitive with other approaches. The proposed method can determine the optimum path when the number of cities is under 10,000, and the mean solution quality is within 0.0074% above the optimum for each test problem. These findings imply that the proposed method can robustly find tours with a fixed small population and a limited family competition length in reasonable time when used to solve large TSPs.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-03
...The Law Enforcement Advisory Panel (LEAP) will meet via conference call on October 20, 2011 to select nominees for a Law Enforcement Office of the Year (LEOY) Award. The Council will meet via conference call on November 2, 2011 to review the LEAP's recommendations and select a deserving law enforcement officer to receive the award. See SUPPLEMENTARY INFORMATION.
Pedagogy and Related Criteria: The Selection of Software for Computer Assisted Language Learning
ERIC Educational Resources Information Center
Samuels, Jeffrey D.
2013-01-01
Computer-Assisted Language Learning (CALL) is an established field of academic inquiry with distinct applications for second language teaching and learning. Many CALL professionals direct language labs or language resource centers (LRCs) in which CALL software applications and generic software applications support language learning programs and…
47 CFR 80.359 - Frequencies for digital selective calling (DSC).
Code of Federal Regulations, 2013 CFR
2013-10-01
... calling frequencies for use by authorized ship and coast stations for general purpose DSC. There are three.... The “Series A” designation includes coast stations along, and ship stations in, the Atlantic Ocean... location of the called station and propagation conditions. Acknowledgement is made on the paired frequency...
47 CFR 80.359 - Frequencies for digital selective calling (DSC).
Code of Federal Regulations, 2012 CFR
2012-10-01
... calling frequencies for use by authorized ship and coast stations for general purpose DSC. There are three.... The “Series A” designation includes coast stations along, and ship stations in, the Atlantic Ocean... location of the called station and propagation conditions. Acknowledgement is made on the paired frequency...
47 CFR 80.359 - Frequencies for digital selective calling (DSC).
Code of Federal Regulations, 2014 CFR
2014-10-01
... calling frequencies for use by authorized ship and coast stations for general purpose DSC. There are three.... The “Series A” designation includes coast stations along, and ship stations in, the Atlantic Ocean... location of the called station and propagation conditions. Acknowledgement is made on the paired frequency...
Fuel management optimization using genetic algorithms and code independence
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1994-12-31
Fuel management optimization is a hard problem for traditional optimization techniques. Loading pattern optimization is a large combinatorial problem without analytical derivative information. Therefore, methods designed for continuous functions, such as linear programming, do not always work well. Genetic algorithms (GAs) address these problems and, therefore, appear ideal for fuel management optimization. They do not require derivative information and work well with combinatorial functions. GAs are stochastic methods based on concepts from biological genetics. They take a group of candidate solutions, called the population, and use selection, crossover, and mutation operators to create the next generation of better solutions. The selection operator is a “survival-of-the-fittest” operation and chooses the solutions for the next generation. The crossover operator is analogous to biological mating, where children inherit a mixture of traits from their parents, and the mutation operator makes small random changes to the solutions.
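The operators described above can be sketched as a minimal GA; the bit-string fitness is a toy stand-in for a loading-pattern evaluation, which in practice would call a core physics code:

```python
import random

# Minimal GA mirroring the operators described above: tournament
# ("survival of the fittest") selection, one-point crossover, and bitwise
# mutation. Counting 1-bits is a toy stand-in for evaluating a loading
# pattern with a core physics code.

random.seed(0)

def fitness(ind):
    return sum(ind)  # toy objective: maximize the number of 1-bits

def tournament(pop, k=2):
    """Selection: the fitter of k randomly drawn candidates survives."""
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    """One-point crossover: the child inherits a prefix of a, suffix of b."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.02):
    """Mutation: each gene flips with a small probability."""
    return [1 - g if random.random() < rate else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(30)] for _ in range(40)]
for _ in range(60):  # generations
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in pop]

best = max(pop, key=fitness)
```

For loading patterns, the bit string would be replaced by a permutation encoding with operators that preserve validity, but the generational loop is the same.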
Long-Term Variations of the EOP and ICRF2
NASA Technical Reports Server (NTRS)
Zharov, Vladimir; Sazhin, Mikhail; Sementsov, Valerian; Sazhina, Olga
2010-01-01
We analyzed the time series of the coordinates of the ICRF radio sources. We show that some of the radio sources, including the defining sources, exhibit significant apparent motion. The stability of the celestial reference frame is provided by a no-net-rotation condition applied to the defining sources. In our case this condition leads to a rotation of the frame axes with time. We calculated the effect of this rotation on the Earth orientation parameters (EOP). In order to improve the stability of the celestial reference frame we suggest a new method for the selection of the defining sources. The method consists of two criteria: the first we call cosmological and the second kinematical. It is shown that a subset of the ICRF sources selected according to cosmological criteria provides the most stable reference frame for the next decade.
Efficient experimental design of high-fidelity three-qubit quantum gates via genetic programming
NASA Astrophysics Data System (ADS)
Devra, Amit; Prabhu, Prithviraj; Singh, Harpreet; Arvind; Dorai, Kavita
2018-03-01
We have designed efficient quantum circuits for the three-qubit Toffoli (controlled-controlled-NOT) and the Fredkin (controlled-SWAP) gate, optimized via genetic programming methods. The gates thus obtained were experimentally implemented on a three-qubit NMR quantum information processor, with a high fidelity. Toffoli and Fredkin gates in conjunction with the single-qubit Hadamard gates form a universal gate set for quantum computing and are an essential component of several quantum algorithms. Genetic algorithms are stochastic search algorithms based on the logic of natural selection and biological genetics and have been widely used for quantum information processing applications. We devised a new selection mechanism within the genetic algorithm framework to select individuals from a population. We call this mechanism the "Luck-Choose" mechanism and were able to achieve faster convergence to a solution using this mechanism, as compared to existing selection mechanisms. The optimization was performed under the constraint that the experimentally implemented pulses are of short duration and can be implemented with high fidelity. We demonstrate the advantage of our pulse sequences by comparing our results with existing experimental schemes and other numerical optimization methods.
Zhang, Dongqing; Zhao, Yiyuan; Noble, Jack H; Dawant, Benoit M
2018-04-01
Cochlear implants (CIs) are neural prostheses that restore hearing using an electrode array implanted in the cochlea. After implantation, the CI processor is programmed by an audiologist. One factor that negatively impacts outcomes and can be addressed by programming is cross-electrode neural stimulation overlap (NSO). We have proposed a system to assist the audiologist in programming the CI that we call image-guided CI programming (IGCIP). IGCIP permits using CT images to detect NSO and recommend deactivation of a subset of electrodes to avoid NSO. We have shown that IGCIP significantly improves hearing outcomes. Most of the IGCIP steps are robustly automated but electrode configuration selection still sometimes requires manual intervention. With expertise, distance-versus-frequency curves, which are a way to visualize the spatial relationship learned from CT between the electrodes and the nerves they stimulate, can be used to select the electrode configuration. We propose an automated technique for electrode configuration selection. A comparison between this approach and one we have previously proposed shows that our method produces results that are as good as those obtained with our previous method while being generic and requiring fewer parameters.
Entropy-Based Search Algorithm for Experimental Design
NASA Astrophysics Data System (ADS)
Malakar, N. K.; Knuth, K. H.
2011-03-01
The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data, whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space, and brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments but is also more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
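The relevance criterion described above can be sketched as: compute the Shannon entropy of each candidate experiment's predicted outcome distribution and choose the maximizer. The candidate distributions below are invented for illustration; the actual algorithm searches a continuous experiment space with a rising-threshold strategy rather than enumerating a handful of candidates.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete outcome distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Invented predicted-outcome distributions for three candidate experiments.
candidates = {
    "exp_A": [0.90, 0.05, 0.05],      # outcome nearly certain: uninformative
    "exp_B": [0.50, 0.30, 0.20],
    "exp_C": [1 / 3, 1 / 3, 1 / 3],   # maximally uncertain: most informative
}

# Select the experiment whose predicted outcomes are most uncertain.
best = max(candidates, key=lambda name: shannon_entropy(candidates[name]))
```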
Tamiya, Satoshi
2014-01-01
Multilingualism poses unique psychiatric problems, especially in the field of child psychiatry. The author discusses several linguistic and transcultural issues in relation to Language Disorder, Specific Learning Disorder and Selective Mutism. Linguistic characteristics of multiple language development, including so-called profile effects and code-switching, need to be understood for differential diagnosis. It is also emphasized that Language Disorder in a bilingual person is neither different from nor worse than that in a monolingual person. Second language proficiency, cultural background and transfer from the first language all need to be considered in an evaluation for Specific Learning Disorder. Selective Mutism has to be differentiated from the silent period observed in normal successive bilingual development. The author concludes the review by remarking on some caveats around methods of language evaluation in a multilingual person.
A hierarchical two-phase framework for selecting genes in cancer datasets with a neuro-fuzzy system.
Lim, Jongwoo; Wang, Bohyun; Lim, Joon S
2016-04-29
Finding the minimum number of appropriate biomarkers for specific targets such as lung cancer has been a challenging issue in bioinformatics. We propose a hierarchical two-phase framework for selecting appropriate biomarkers that extracts candidate biomarkers from cancer microarray datasets and then selects the minimum number of appropriate biomarkers from the extracted candidates with a specific neuro-fuzzy algorithm called a neural network with weighted fuzzy membership functions (NEWFM). In the first phase, the proposed framework extracts candidate biomarkers using a Bhattacharyya distance method, which measures the similarity of two discrete probability distributions. The proposed framework is thereby able to reduce the cost of finding biomarkers by not receiving medical supplements and improve the accuracy of the biomarkers in specific cancer target datasets.
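The Bhattacharyya distance used in the first phase has a standard closed form for discrete distributions, D = -ln(sum_i sqrt(p_i * q_i)). A minimal sketch follows; the gene-expression histograms are invented for illustration:

```python
import math

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two discrete probability
    distributions: D = -ln(sum_i sqrt(p_i * q_i)).
    Smaller values mean the distributions are more similar."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return -math.log(bc)

# Invented expression histograms of one gene in two patient groups.
same = bhattacharyya_distance([0.5, 0.5], [0.5, 0.5])   # identical
diff = bhattacharyya_distance([0.9, 0.1], [0.1, 0.9])   # well separated
```

A gene whose class-conditional histograms are far apart (large distance) would be a plausible candidate biomarker under this measure.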
Measuring food intake with digital photography
Martin, Corby K.; Nicklas, Theresa; Gunturk, Bahadir; Correa, John B.; Allen, H. Raymond; Champagne, Catherine
2014-01-01
The Digital Photography of Foods Method accurately estimates the food intake of adults and children in cafeterias. When using this method, images of food selection and leftovers are quickly captured in the cafeteria. These images are later compared to images of “standard” portions of food using a computer application. The amount of food selected and discarded is estimated based upon this comparison, and the application automatically calculates energy and nutrient intake. Herein, we describe this method, as well as a related method called the Remote Food Photography Method (RFPM), which relies on smartphones to estimate food intake in near real-time in free-living conditions. When using the RFPM, participants capture images of food selection and leftovers using a smartphone, and these images are wirelessly transmitted in near real-time to a server for analysis. Because data are transferred and analyzed in near real-time, the RFPM provides a platform for participants to quickly receive feedback about their food intake behavior and dietary recommendations to achieve weight loss and health promotion goals. The reliability and validity of measuring food intake with the RFPM in adults and children are also reviewed. The body of research reviewed herein demonstrates that digital imaging accurately estimates food intake in many environments and that it has many advantages over other methods, including reduced participant burden, elimination of the need for participants to estimate portion size, and incorporation of computer automation to improve the accuracy, efficiency, and cost-effectiveness of the method. PMID:23848588
NASA Astrophysics Data System (ADS)
Adeniyi, D. A.; Wei, Z.; Yang, Y.
2017-10-01
The recommendation problem has been extensively studied by researchers in the fields of data mining, databases and information retrieval. This study presents the design and realisation of an automated, personalised news recommendation system based on a Chi-square statistics-based K-nearest neighbour (χ2SB-KNN) model. The proposed χ2SB-KNN model has the potential to overcome computational-complexity and information-overload problems, reducing runtime and speeding up execution through the use of the critical value of the χ2 distribution. The proposed recommendation engine can alleviate scalability challenges through combined online pattern discovery and pattern matching for real-time recommendations. This work also showcases the development of a novel feature selection method, referred to as the Data Discretisation-Based feature selection method, which is used to select the best features for the proposed χ2SB-KNN algorithm at the preprocessing stage of the classification procedure. The implementation of the proposed χ2SB-KNN model is achieved through an in-house Java program on an experimental website called the OUC newsreaders' website. Finally, we compared the performance of our system with two baseline methods, traditional Euclidean-distance K-nearest neighbour and Naive Bayesian techniques. The results show a significant improvement of our method over the baseline methods studied.
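A hedged sketch of the χ2 pruning idea: score a feature's 2x2 contingency table against the class label and keep it only if the statistic exceeds the critical value of the χ2 distribution. The table counts and decision rule below are illustrative; the paper's full model couples this kind of scoring with a KNN classifier.

```python
def chi_square(observed):
    """Pearson chi-square statistic for a contingency table
    (here 2x2: feature present/absent vs. class label)."""
    row = [sum(r) for r in observed]
    col = [sum(c) for c in zip(*observed)]
    total = sum(row)
    stat = 0.0
    for i, r in enumerate(observed):
        for j, o in enumerate(r):
            e = row[i] * col[j] / total   # expected count under independence
            stat += (o - e) ** 2 / e
    return stat

CRITICAL_1DF_95 = 3.841  # chi-square critical value, 1 df, alpha = 0.05

# Illustrative counts: a feature strongly tied to the class vs. one that is not.
informative = chi_square([[30, 10], [5, 35]]) > CRITICAL_1DF_95
uninformative = chi_square([[20, 20], [20, 20]]) > CRITICAL_1DF_95
```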
Circling motion and screen edges as an alternative input method for on-screen target manipulation.
Ka, Hyun W; Simpson, Richard C
2017-04-01
To investigate a new alternative interaction method, called the circling interface, for manipulating on-screen objects. To specify a target, the user makes a circling motion around the target. To specify a desired pointing command with the circling interface, each edge of the screen is used: the user selects a command before circling the target. To evaluate the circling interface, we conducted an experiment with 16 participants, comparing performance on pointing tasks with different combinations of selection method (circling interface, physical mouse and dwelling interface) and input device (normal computer mouse, head pointer and joystick mouse emulator). A circling interface is compatible with many types of pointing devices, does not require physical activation of mouse buttons, and is more efficient than dwell-clicking. Across all common pointing operations, the circling interface tended to produce faster performance with a head-mounted mouse emulator than with a joystick mouse, and its accuracy outperformed the dwelling interface. It was demonstrated that the circling interface has potential as an alternative pointing method for selecting and manipulating objects in a graphical user interface. Implications for Rehabilitation: a circling interface will improve clinical practice by providing an alternative pointing method that does not require physically activating mouse buttons and is more efficient than dwell-clicking. The circling interface can also work with AAC devices.
Effective Feature Selection for Classification of Promoter Sequences.
K, Kouser; P G, Lavanya; Rangarajan, Lalitha; K, Acharya Kshitish
2016-01-01
Exploring novel computational methods for making sense of biological data has been not only a necessity but also productive. A part of this trend is the search for more efficient in silico methods/tools for the analysis of promoters, the parts of DNA sequences that are involved in regulating the expression of genes into other functional molecules. Promoter regions vary greatly in their function based on the sequence of nucleotides and the arrangement of protein-binding short regions called motifs. In fact, the regulatory nature of promoters seems to be largely driven by the selective presence and/or the arrangement of these motifs. Here, we explore computational classification of promoter sequences based on the pattern of motif distributions, as such classification can pave a new way for functional analysis of promoters and for discovering the functionally crucial motifs. We make use of Position Specific Motif Matrix (PSMM) features to explore the possibility of accurately classifying promoter sequences with some of the popular classification techniques. Classification results on the complete feature set are low, perhaps due to the huge number of features, so we propose two ways of reducing features. Our test results show improvement in the classification output after the reduction of features. The results also show that decision trees outperform SVM (Support Vector Machine), KNN (K Nearest Neighbor) and the ensemble classifier LibD3C, particularly with reduced features. The proposed feature selection methods outperform some popular feature transformation methods such as PCA and SVD, and are as accurate as MRMR (a feature selection method) but much faster. Such methods could be useful for categorizing new promoters and exploring the regulatory mechanisms of gene expression in complex eukaryotic species.
Receivers matter: the meaning of alarm calls and competition for nest sites in a bird community.
Parejo, Deseada; Avilés, Jesús M; Expósito-Granados, Mónica
2018-04-11
Animal communities may constitute information networks where individuals gain information on predation risk by eavesdropping on alarm calls of other species. However, communities include species in different trophic levels, and it is not yet known how the trophic level of the receiver influences the informative value of a call. Furthermore, no empirical study has yet tested how increased competition may influence the value of alarm calls for distinct receivers. Here, we identify the importance of alarm calls emitted by a small owl, the little owl (Athene noctua), on the structure of a cavity-nesting bird community including mesopredators and primary prey under variable levels of competition for nest holes. Competitors sharing top predators with the callers and prey of the callers interpreted alarm and non-alarm calls differently. Competitors chose preferentially alarm and non-alarm patches over control patches to breed, while prey selected alarm patches. In contrast, competition for nest sites affected habitat selection of prey species more than that of competitors of the callers. This study provides support for a changing value of alarm calls and competition for nest sites for distinct receivers related to niche overlapping among callers and eavesdroppers, therefore, calling attention to possible cascading effects by the use of information in natural communities.
47 CFR 97.19 - Application for a vanity call sign.
Code of Federal Regulations, 2014 CFR
2014-10-01
... SERVICES AMATEUR RADIO SERVICE General Provisions § 97.19 Application for a vanity call sign. (a) The person named in an operator/primary station license grant or in a club station license grant is eligible... sign selected by the vanity call sign system. Effective February 14, 2011, the person named in a club...
Selective habituation shapes acoustic predator recognition in harbour seals.
Deecke, Volker B; Slater, Peter J B; Ford, John K B
2002-11-14
Predation is a major force in shaping the behaviour of animals, so that precise identification of predators will confer substantial selective advantages on animals that serve as food to others. Because experience with a predator can be lethal, early researchers studying birds suggested that predator recognition does not require learning. However, a predator image that can be modified by learning and experience will be advantageous in situations where cues associated with the predator are highly variable or change over time. In this study, we investigated the response of harbour seals (Phoca vitulina) to the underwater calls of different populations of killer whales (Orcinus orca). We found that the seals responded strongly to the calls of mammal-eating killer whales and unfamiliar fish-eating killer whales but not to the familiar calls of the local fish-eating population. This demonstrates that wild harbour seals are capable of complex acoustic discrimination and that they modify their predator image by selectively habituating to the calls of harmless killer whales. Fear in these animals is therefore focused on local threats by learning and experience.
The Nimbus 6 data catalog. Volume 1: 12 June 1975 through 31 August 1975. Data orbits 1 through 1082
NASA Technical Reports Server (NTRS)
1975-01-01
Subsections 1.2 through 1.10 of this catalog summarize the operational highlights of the individual experiments, present preliminary experiment results, and call attention to known data anomalies. Section 2 lists the on-off times for each experiment and provides a method for determining the geographical coverage of each experiment. Section 3 shows selected HIRS, SCAMS and ESMR images, and Section 4 presents THIR montages. Section 5 presents corrections to The Nimbus 6 User's Guide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campione, Salvatore; Warne, Larry K.; Schiek, Richard
This report details the modeling results for the response of a finite-length dissipative conductor interacting with a conducting ground to the Bell Labs electromagnetic pulse excitation. We use both a frequency-domain and a time-domain method based on transmission line theory, through a code we call ATLOG (Analytic Transmission Line Over Ground). Results are compared to the circuit simulator Xyce for selected cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campione, Salvatore; Warne, Larry K.; Schiek, Richard
2017-09-01
This report details the modeling results for the response of a finite-length dissipative conductor interacting with a conducting ground to a hypothetical nuclear device with the same output energy spectrum as the Fat Man device. We use a frequency-domain method based on transmission line theory, implemented in a code we call ATLOG (Analytic Transmission Line Over Ground). Selected results are compared to ones computed using the circuit simulator Xyce.
Methods and analysis of realizing randomized grouping.
Hu, Liang-Ping; Bao, Xiao-Lei; Wang, Qi
2011-07-01
Randomization is one of the four basic principles of research design. The meaning of randomization includes two aspects: one is to randomly select samples from the population, which is known as random sampling; the other is to randomly group all the samples, which is called randomized grouping. Randomized grouping can be subdivided into three categories: complete, stratified and dynamic randomized grouping. This article mainly introduces the steps of complete randomization, the definition of dynamic randomization, and the realization of random sampling and grouping with SAS software.
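Complete randomization, the first of the three grouping schemes, can be sketched in a few lines (shown here in Python rather than the SAS used in the article): shuffle all subjects, then deal them round-robin into the study arms.

```python
import random

def complete_randomization(subjects, n_groups, seed=0):
    """Completely randomized grouping: shuffle the full subject list,
    then deal subjects round-robin into n_groups arms. A minimal
    sketch; real trials would use a vetted randomization service."""
    pool = list(subjects)
    random.Random(seed).shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

# 12 subject IDs dealt into 3 equal arms.
groups = complete_randomization(range(12), 3)
```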
Receptive fields selection for binary feature description.
Fan, Bin; Kong, Qingqun; Trzcinski, Tomasz; Wang, Zhiheng; Pan, Chunhong; Fua, Pascal
2014-06-01
Feature description for local image patches is widely used in computer vision. While the conventional way to design local descriptors is based on expert experience and knowledge, learning-based methods for designing local descriptors have become more and more popular because of their good performance and data-driven property. This paper proposes a novel data-driven method for designing binary feature descriptors, which we call the receptive fields descriptor (RFD). Technically, RFD is constructed by thresholding responses of a set of receptive fields, which are selected from a large number of candidates according to their distinctiveness and correlations in a greedy way. Using two different kinds of receptive fields (namely rectangular pooling areas and Gaussian pooling areas) for selection, we obtain two binary descriptors, RFDR and RFDG, respectively. Image matching experiments on the well-known patch data set and the Oxford data set demonstrate that RFD significantly outperforms the state-of-the-art binary descriptors and is comparable with the best float-valued descriptors at a fraction of the processing time. Finally, experiments on object recognition tasks confirm that both RFDR and RFDG successfully bridge the performance gap between binary descriptors and their floating-point competitors.
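The core thresholding step of a receptive-fields-style binary descriptor can be sketched as follows. The pooling rectangles, patch values and threshold are illustrative, not the fields actually learned by RFD, which are greedily selected from a large candidate set.

```python
def binary_descriptor(patch, fields, threshold=0.5):
    """Pool pixel intensities over each rectangular receptive field and
    threshold the mean response to one bit. A sketch of the rectangular
    (RFDR-like) pooling; RFD's Gaussian pooling weights pixels instead."""
    bits = []
    for (r0, r1, c0, c1) in fields:
        region = [patch[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        bits.append(1 if sum(region) / len(region) > threshold else 0)
    return bits

# Toy 4x4 patch with a bright top-left corner, and three illustrative fields.
patch = [[1.0, 1.0, 0.0, 0.0],
         [1.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0]]
desc = binary_descriptor(patch, [(0, 2, 0, 2), (0, 2, 2, 4), (2, 4, 0, 4)])
```

Matching two such descriptors then reduces to a fast Hamming distance between bit vectors, which is where binary descriptors gain their speed.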
NASA Astrophysics Data System (ADS)
Heya, Akira; Matsuo, Naoto
2007-06-01
The surface properties of a plastic substrate were changed by a novel surface treatment called atomic hydrogen annealing (AHA). In this method, a plastic substrate is exposed to atomic hydrogen generated by cracking hydrogen molecules on a heated tungsten wire. AHA increased the substrate's surface roughness and selectively etched halogen elements (F and Cl). AHA is useful as a pretreatment before film deposition on a plastic substrate because these changes in surface state improve adhesion. It is concluded that this method is a promising technique for preparing high-performance plastic substrates at low temperatures.
The effects of list-method directed forgetting on recognition memory.
Benjamin, Aaron S
2006-10-01
It is an almost universally accepted claim that the list-method procedure of inducing directed forgetting does not affect recognition. However, previous studies have omitted a critical comparison in reaching this conclusion. This article reports evidence that recognition of material learned after cue presentation is superior for conditions in which the material that preceded cue presentation was designated as to-be-forgotten. Because the absence of an effect of directed-forgetting instructions on recognition is the linchpin of the theoretical claim that retrieval inhibition and not selective rehearsal underlies that effect, the present results call into question the need to postulate a role for inhibition in directed forgetting.
Salient object detection based on discriminative boundary and multiple cues integration
NASA Astrophysics Data System (ADS)
Jiang, Qingzhu; Wu, Zemin; Tian, Chang; Liu, Tao; Zeng, Mingyong; Hu, Lei
2016-01-01
In recent years, many saliency models have achieved good performance by taking the image boundary as the background prior. However, if all boundaries of an image are equally and artificially selected as background, misjudgment may happen when the object touches the boundary. We propose an algorithm called weighted contrast optimization based on discriminative boundary (wCODB). First, a background estimation model is reliably constructed through discriminating each boundary via Hausdorff distance. Second, the background-only weighted contrast is improved by fore-background weighted contrast, which is optimized through weight-adjustable optimization framework. Then to objectively estimate the quality of a saliency map, a simple but effective metric called spatial distribution of saliency map and mean saliency in covered window ratio (MSR) is designed. Finally, in order to further promote the detection result using MSR as the weight, we propose a saliency fusion framework to integrate three other cues-uniqueness, distribution, and coherence from three representative methods into our wCODB model. Extensive experiments on six public datasets demonstrate that our wCODB performs favorably against most of the methods based on boundary, and the integrated result outperforms all state-of-the-art methods.
A probabilistic union model with automatic order selection for noisy speech recognition.
Jancovic, P; Ming, J
2001-09-01
A critical issue in exploiting the potential of the sub-band-based approach to robust speech recognition is the method of combining the sub-band observations so as to select the bands unaffected by noise. A new method for this purpose, the probabilistic union model, was recently introduced. This model has been shown to be capable of dealing with band-limited corruption, requiring no knowledge about the band position and statistical distribution of the noise. A parameter within the model, which we call its order, gives the best results when it equals the number of noisy bands. Since this information may not be available in practice, in this paper we introduce an automatic algorithm for selecting the order, based on the state duration pattern generated by the hidden Markov model (HMM). The algorithm has been tested on the TIDIGITS database corrupted by various types of additive band-limited noise with unknown noisy bands. The results show that the union model equipped with the new algorithm can achieve recognition performance similar to that achieved when the number of noisy bands is known, a very significant improvement over the traditional full-band model, without requiring prior information on either the position or the number of noisy bands. The principle of selecting the order based on state duration may also be applied to other sub-band combination methods.
Beluga whale, Delphinapterus leucas, vocalizations from the Churchill River, Manitoba, Canada.
Chmelnitsky, Elly G; Ferguson, Steven H
2012-06-01
Classification of animal vocalizations is often done by a human observer using aural and visual analysis but more efficient, automated methods have also been utilized to reduce bias and increase reproducibility. Beluga whale, Delphinapterus leucas, calls were described from recordings collected in the summers of 2006-2008, in the Churchill River, Manitoba. Calls (n=706) were classified based on aural and visual analysis, and call characteristics were measured; calls were separated into 453 whistles (64.2%; 22 types), 183 pulsed/noisy calls (25.9%; 15 types), and 70 combined calls (9.9%; seven types). Measured parameters varied within each call type but less variation existed in pulsed and noisy call types and some combined call types than in whistles. A more efficient and repeatable hierarchical clustering method was applied to 200 randomly chosen whistles using six call characteristics as variables; twelve groups were identified. Call characteristics varied less in cluster analysis groups than in whistle types described by visual and aural analysis and results were similar to the whistle contours described. This study provided the first description of beluga calls in Hudson Bay and using two methods provides more robust interpretations and an assessment of appropriate methods for future studies.
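The hierarchical clustering step can be sketched with a minimal single-linkage agglomerative pass. The two call characteristics per point below are invented stand-ins for the six measured parameters, and the abstract does not state which linkage the study used, so single linkage is an assumption.

```python
import math

def agglomerative(points, n_clusters):
    """Minimal single-linkage agglomerative clustering over call-feature
    vectors (e.g. duration, frequency); repeatedly merge the two closest
    clusters until n_clusters remain."""
    clusters = [[p] for p in points]

    def dist(a, b):
        # Single linkage: distance between the closest pair across clusters.
        return min(math.dist(x, y) for x in a for y in b)

    while len(clusters) > n_clusters:
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Invented (duration_s, peak_kHz) pairs: two tight call groups.
calls = [(0.10, 1.0), (0.12, 1.1), (0.90, 5.0), (0.95, 5.2)]
groups2 = agglomerative(calls, 2)
```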
A meta-heuristic method for solving scheduling problem: crow search algorithm
NASA Astrophysics Data System (ADS)
Adhi, Antono; Santosa, Budi; Siswanto, Nurhadi
2018-04-01
Scheduling is one of the most important processes in industry, in both manufacturing and services. Scheduling is the process of selecting resources to perform operations on tasks; resources can be machines, people, tasks, jobs or operations. Selecting the optimum sequence of jobs from a permutation is an essential issue in scheduling research, since the optimum sequence is the optimum solution of the scheduling problem. Scheduling becomes an NP-hard problem once the number of jobs in the sequence exceeds what exact algorithms can process. To obtain optimum results, a method is needed that can solve complex scheduling problems in an acceptable time. Meta-heuristics are the methods usually used to solve scheduling problems. The recently published Crow Search Algorithm (CSA) is adopted in this research to solve a scheduling problem. CSA is an evolutionary meta-heuristic method based on the behaviour of crows in flocks. The results of CSA on the scheduling problem are compared with other algorithms; the comparison shows that CSA has better performance in terms of solution quality and computation time.
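As a hedged illustration of CSA itself, here is a minimal sketch of the crow position/memory update on a toy continuous objective. The scheduling application in the paper encodes job permutations instead, which this sketch does not attempt; parameter values (flight length, awareness probability) are illustrative.

```python
import random

def crow_search(f, bounds, n_crows=10, iters=100, fl=2.0, ap=0.1, seed=1):
    """Minimal Crow Search Algorithm for minimizing f on an interval.
    Each crow remembers its best position; a crow follows a random
    other crow's memory unless that crow is 'aware' (probability ap),
    in which case the follower relocates randomly."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [rng.uniform(lo, hi) for _ in range(n_crows)]
    mem = pos[:]                              # remembered best positions
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.randrange(n_crows)        # crow i follows crow j
            if rng.random() >= ap:            # j unaware: move toward j's cache
                new = pos[i] + rng.random() * fl * (mem[j] - pos[i])
            else:                             # j aware: random relocation
                new = rng.uniform(lo, hi)
            if lo <= new <= hi:               # keep only feasible moves
                pos[i] = new
                if f(new) < f(mem[i]):
                    mem[i] = new
    return min(mem, key=f)

best_x = crow_search(lambda x: (x - 3.0) ** 2, (-10.0, 10.0))
```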
Ulsenheimer, K
2001-01-01
In the interest of our patients and advancement in medicine and with the guarantee by law of the freedom to select methods of treatment, we may and must try out new strategies even in the face of initially unknown risks, side effects, and consequences. Every innovation assumes, however, that the advantages and disadvantages, potential complications, and burden to the patient have been weighed against those of conventional methods. The risks that a pioneer takes must be justified and presented comprehensively and clearly to the patient. Otherwise the threat is posed of civil and criminal accusations of negligence in responsibility or in the obligation to fully inform patients. Patient protection and safety must always be the first priority.
47 CFR 80.103 - Digital selective calling (DSC) operating procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... DSC “Acknowledgment of distress calls” and “Distress relays.” (See subpart W of this part.) (d) Group calls to vessels under the common control of a single entity are authorized. A group call identity may... (ITU), Place des Nations, CH-1211 Geneva 20, Switzerland. [68 FR 46961, Aug. 7, 2003, as amended at 73...
Liu, Zhenqiu; Hsiao, William; Cantarel, Brandi L; Drábek, Elliott Franco; Fraser-Liggett, Claire
2011-12-01
Direct sequencing of microbes in human ecosystems (the human microbiome) has complemented single-genome cultivation and sequencing to understand and explore the impact of commensal microbes on human health. As sequencing technologies improve and costs decline, the sophistication of data has outgrown available computational methods. While several existing machine learning methods have recently been adapted for analyzing microbiome data, there is not yet an efficient and dedicated algorithm available for multiclass classification of human microbiota. By combining instance-based and model-based learning, we propose a novel sparse distance-based learning method for simultaneous class prediction and feature (variable or taxon; the terms are used interchangeably) selection from multiple treatment populations on the basis of 16S rRNA sequence count data. Our proposed method simultaneously minimizes the intraclass distance and maximizes the interclass distance with many fewer estimated parameters than other methods. It is very efficient for problems with small sample sizes and unbalanced classes, which are common in metagenomic studies. We implemented this method in a MATLAB toolbox called MetaDistance, in which we also propose several approaches for data normalization and variance-stabilizing transformation. We validate this method on several real and simulated 16S rRNA datasets to show that it outperforms existing methods for classifying metagenomic data. This article is the first to address simultaneous multifeature selection and class prediction with metagenomic count data. The MATLAB toolbox is freely available online at http://metadistance.igs.umaryland.edu/. zliu@umm.edu Supplementary data are available at Bioinformatics online.
An improved method to detect correct protein folds using partial clustering.
Zhou, Jianjun; Wishart, David S
2013-01-16
Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient "partial" clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either C(α) RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance.
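A much-simplified stand-in for the partial-clustering idea is a single greedy "leader" pass: each unassigned decoy becomes a representative and absorbs its near neighbours, so no full all-pairs clustering is performed. The 1-D scores and threshold below are illustrative; HS-Forest operates on Cα RMSD or GDT-TS between structures and layers a scoring scheme on top.

```python
def leader_representatives(decoys, threshold):
    """Greedy leader pass: walk the decoys in order, promote each
    unassigned one to a representative, and assign every later decoy
    within `threshold` of it. Not HS-Forest itself, just a sketch of
    extracting representatives without clustering every decoy."""
    reps, assigned = [], set()
    for i, d in enumerate(decoys):
        if i in assigned:
            continue
        reps.append(d)
        for j in range(i + 1, len(decoys)):
            if j not in assigned and abs(decoys[j] - d) <= threshold:
                assigned.add(j)
    return reps

# Illustrative 1-D decoy scores with three natural groups.
reps = leader_representatives([1.0, 1.1, 1.2, 5.0, 5.05, 9.0], threshold=0.5)
```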
Tang, Hua; Chen, Wei; Lin, Hao
2016-04-01
Immunoglobulins, also called antibodies, are a group of cell surface proteins which are produced by the immune system in response to the presence of a foreign substance (called an antigen). They play key roles in many medical, diagnostic and biotechnological applications. Correct identification of immunoglobulins is crucial to the comprehension of humoral immune function. With the avalanche of protein sequences identified in the postgenomic age, it is highly desirable to develop computational methods to identify immunoglobulins in a timely fashion. In view of this, we designed a predictor called "IGPred" by formulating protein sequences with the pseudo amino acid composition into which nine physicochemical properties of amino acids were incorporated. Jackknife cross-validated results showed that 96.3% of immunoglobulins and 97.5% of non-immunoglobulins can be correctly predicted, indicating that IGPred holds very high potential to become a useful tool for antibody analysis. For the convenience of most experimental scientists, a web-server for IGPred was established at http://lin.uestc.edu.cn/server/IGPred. We believe that the web-server will become a powerful tool to study immunoglobulins and to guide related experimental validations.
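As a rough illustration of the kind of sequence formulation involved, here is the plain amino acid composition vector, the base component of pseudo amino acid composition. IGPred's full encoding additionally folds in nine physicochemical properties, which are omitted in this sketch:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def aa_composition(seq):
    # Fraction of each standard residue in the sequence, in the fixed
    # order of AMINO_ACIDS. This is only the plain-composition part of
    # pseudo amino acid composition; the pseudo components that capture
    # physicochemical sequence-order correlations are not included.
    seq = seq.upper()
    n = sum(seq.count(a) for a in AMINO_ACIDS)  # ignore non-standard symbols
    return [seq.count(a) / n for a in AMINO_ACIDS]
```

The resulting 20-dimensional vector sums to one and can be fed to any standard classifier.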
76 FR 4896 - Call for Candidates
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-27
... designated to establish generally accepted accounting principles for federal government entities. Generally, non-federal Board members are selected from the general financial community, the accounting and... FEDERAL ACCOUNTING STANDARDS ADVISORY BOARD Call for Candidates AGENCY: Federal Accounting...
A System for Automatically Generating Scheduling Heuristics
NASA Technical Reports Server (NTRS)
Morris, Robert
1996-01-01
The goal of this research is to improve the performance of automated schedulers by designing and implementing an algorithm for automatically generating heuristics for selecting a schedule. The particular application selected for applying this method solves the problem of scheduling telescope observations and is called the Associate Principal Astronomer. The input to the APA scheduler is a set of observation requests submitted by one or more astronomers. Each observation request specifies an observation program as well as scheduling constraints and preferences associated with the program. The scheduler employs greedy heuristic search to synthesize a schedule that satisfies all hard constraints of the domain and achieves a good score with respect to soft constraints expressed as an objective function established by an astronomer-user.
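A greedy heuristic search of this general shape can be sketched as follows; the request format, scores, and the fit-in-horizon feasibility test below are simplifications standing in for the APA domain's real hard constraints and objective function:

```python
def greedy_schedule(requests, horizon):
    # At each step, pick the feasible request with the highest score
    # (a stand-in objective). Each request is (name, duration, score);
    # feasibility here is just fitting in the remaining time.
    schedule, t = [], 0
    pending = list(requests)
    while pending:
        feasible = [r for r in pending if t + r[1] <= horizon]
        if not feasible:
            break
        best = max(feasible, key=lambda r: r[2])
        schedule.append((t, best[0]))  # (start time, request name)
        t += best[1]
        pending.remove(best)
    return schedule
```

For requests [("m31", 3, 5), ("m42", 2, 9), ("m57", 4, 7)] and a horizon of 6, the greedy pass schedules "m42" first, then "m57", and drops "m31" as infeasible.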
Statistical auditing and randomness test of lotto k/N-type games
NASA Astrophysics Data System (ADS)
Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Rapallo, F.; Scalas, E.
2008-11-01
One of the most popular lottery games worldwide is the so-called “lotto k/N”. It considers N numbers 1,2,…,N from which k are drawn randomly, without replacement. A player selects k or more numbers and the first prize is shared amongst those players whose selected numbers match all of the k randomly drawn. Exact rules may vary in different countries. In this paper, mean values and covariances for the random variables representing the numbers drawn from this kind of game are presented, with the aim of using them to audit statistically the consistency of a given sample of historical results with theoretical values coming from a hypergeometric statistical model. The method can be adapted to test pseudorandom number generators.
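The theoretical values such an audit compares against follow from the finite-population model of drawing without replacement; a minimal sketch (single-draw mean, variance, and pairwise covariance, plus a small simulation to check the drawn-sum mean):

```python
import random

def draw_moments(N):
    # Mean and variance of a single drawn number, and covariance between
    # two distinct draws, for lotto k/N (draws without replacement from 1..N).
    mean = (N + 1) / 2
    var = (N * N - 1) / 12
    cov = -(N + 1) / 12
    return mean, var, cov

def sum_moments(N, k):
    # Moments of the sum of the k drawn numbers.
    mean, var, cov = draw_moments(N)
    return k * mean, k * var + k * (k - 1) * cov

def simulated_sum_mean(N, k, draws, seed=0):
    # Empirical mean of the drawn sum, for comparison with the theory.
    rng = random.Random(seed)
    return sum(sum(rng.sample(range(1, N + 1), k)) for _ in range(draws)) / draws
```

For a 6/49 game the sum of the six drawn numbers has theoretical mean 150, which a sample of historical results can be audited against.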
Micro-array isolation of circulating tumor cells (CTCs): the droplet biopsy chip
NASA Astrophysics Data System (ADS)
Panchapakesan, B.
2017-08-01
We present a new method for circulating tumor cell capture based on micro-array isolation from droplets. Called droplet biopsy, our technique uses a 76-element array of carbon nanotube devices functionalized with anti-EpCAM and anti-Her2 antibodies for immunocapture of spiked breast cancer cells in the blood. This droplet biopsy chip can enable capture of CTCs based on both positive and negative selection strategies. Negative selection is achieved through depletion of contaminating leukocytes through the differential settling of blood into layers. We report 55%-100% cancer cell capture yield in this first droplet biopsy chip study. The droplet biopsy is an enabling idea whereby one can capture CTCs based on multiple biomarkers in a single blood sample.
Enabling technologies and green processes in cyclodextrin chemistry.
Cravotto, Giancarlo; Caporaso, Marina; Jicsinszky, Laszlo; Martina, Katia
2016-01-01
The design of efficient synthetic green strategies for the selective modification of cyclodextrins (CDs) is still a challenging task. Outstanding results have been achieved in recent years by means of so-called enabling technologies, such as microwaves, ultrasound and ball mills, that have become irreplaceable tools in the synthesis of CD derivatives. Several examples of sonochemical selective modification of native α-, β- and γ-CDs have been reported including heterogeneous phase Pd- and Cu-catalysed hydrogenations and couplings. Microwave irradiation has emerged as the technique of choice for the production of highly substituted CD derivatives, CD grafted materials and polymers. Mechanochemical methods have successfully furnished greener, solvent-free syntheses and efficient complexation, while flow microreactors may well improve the repeatability and optimization of critical synthetic protocols.
Petruzzelli, D; De Florio, L; Dell'Erba, A; Liberti, L; Notarnicola, M; Sengupta, A K
2003-01-01
P-control technologies for municipal wastewater are essentially based on "destructive" methods, which lead to the formation of concentrated solid phases (sludge), usually disposed of in controlled landfills. Ion exchange, as a "non-destructive" technology, allows for selective removal and simultaneous recovery of pollutants, which can be recycled to the same and/or related production lines. In this context, the REM NUT process removes nutrient species (HPO4(2-), NH4(+), K(+)) present in biologically oxidised municipal effluents and recovers them in the form of struvites (MgNH4PO4; MgKPO4), premium quality slow-release fertilisers. The main limitation to the extensive application of this ion exchange based process is the non-availability of selective exchangers for specific removal of the nutrient species. This paper illustrates the laboratory investigation and pilot-scale development of a so-called "P-driven" modified REM NUT scheme based on a new phosphate-selective sorbent developed at Lehigh University, PA, USA.
Comparison of reminder methods in selected adolescents with records in an immunization registry.
Morris, Jessica; Wang, Wendy; Wang, Lawrence; Peddecord, K Michael; Sawyer, Mark H
2015-05-01
The aim of this study was to assess the effectiveness and cost efficiency of three reminder/recall methods for improving adolescent vaccination rates using the San Diego Immunization Registry. Parents of 5,050 adolescents whose records indicated they lacked one or more adolescent vaccines were identified from the San Diego Immunization Registry and contacted by telephone. Based on their preference, consenting participants were enrolled to receive either postal mail (n = 282), e-mail (n = 963), or text (n = 552) reminders for vaccination. The intervention groups were sent a series of up to three reminders. The vaccination completion rate was compared between the intervention groups and two control groups (the enrollment phone-call-only group, who declined to participate, and a no-contact group) using logistic regression. The participants who received any reminder were more likely (24.6% vs. 12.4%; p < .001) to become up-to-date (UTD) than those in the enrollment phone-call-only group. At the conclusion of the study observation, UTD status was reached by 32.1% of text message recipients, 23.0% of postcard recipients, and 20.8% of e-mail recipients, compared to 12.4% for the enrollment phone call recipients. Only 9.7% of nonintervention adolescents became UTD. All three reminder interventions were effective in improving adolescent vaccination rates. Although postal mail reminders were preferred by most participants, text messaging and e-mail were the more effective reminder methods. Text messaging and e-mail reminders should be considered for use to boost vaccination completion among adolescents. Copyright © 2015 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Social Communication and Vocal Recognition in Free-Ranging Rhesus Monkeys
NASA Astrophysics Data System (ADS)
Rendall, Christopher Andrew
Kinship and individual identity are key determinants of primate sociality, and the capacity for vocal recognition of individuals and kin is hypothesized to be an important adaptation facilitating intra-group social communication. Research was conducted on adult female rhesus monkeys on Cayo Santiago, Puerto Rico to test this hypothesis for three acoustically distinct calls characterized by varying selective pressures on communicating identity: coos (contact calls), grunts (close range social calls), and noisy screams (agonistic recruitment calls). Vocalization playback experiments confirmed a capacity for both individual and kin recognition of coos, but not screams (grunts were not tested). Acoustic analyses, using traditional spectrographic methods as well as linear predictive coding techniques, indicated that coos (but not grunts or screams) were highly distinctive, and that the effects of vocal tract filtering (formants) contributed more to statistical discriminations of both individuals and kin groups than did temporal or laryngeal source features. Formants were identified from very short (23 ms) segments of coos and were stable within calls, indicating that formant cues to individual and kin identity were available throughout a call. This aspect of formant cues is predicted to be an especially important design feature for signaling identity efficiently in complex acoustic environments. Results of playback experiments involving manipulated coo stimuli provided preliminary perceptual support for the statistical inference that formant cues take precedence in facilitating vocal recognition. The similarity of formants among female kin suggested a mechanism for the development of matrilineal vocal signatures from the genetic and environmental determinants of vocal tract morphology shared among relatives.
The fact that screams (calls strongly expected to communicate identity) were neither individually distinctive nor recognized suggested that their acoustic structure and role in signaling identity might be constrained by functional or morphological design requirements associated with their role in signaling submission.
A Comparative Study to Predict Student’s Performance Using Educational Data Mining Techniques
NASA Astrophysics Data System (ADS)
Uswatun Khasanah, Annisa; Harwati
2017-06-01
Predicting student performance is essential for a university seeking to prevent student failure. The number of student dropouts is one parameter that can be used to measure student performance, and it is one important point evaluated in Indonesian university accreditation. Data mining has been widely used to predict student performance, and data mining applied in this field is usually called Educational Data Mining. This study conducted feature selection to identify attributes with high influence on student performance in the Department of Industrial Engineering, Universitas Islam Indonesia. Then, two popular classification algorithms, Bayesian Network and Decision Tree, were implemented and compared to determine which gives the better prediction. The outcome showed that student attendance and first-semester GPA ranked highest across all feature selection methods, and that Bayesian Network outperformed Decision Tree with a higher accuracy rate.
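One common feature-selection criterion that produces rankings of this kind is information gain; the sketch below uses hypothetical student records (attendance vs. an irrelevant attribute) and is not necessarily one of the methods applied in the study:

```python
from math import log2
from collections import Counter

def entropy(labels):
    # Shannon entropy of a label list.
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(values, labels):
    # Reduction in label entropy after splitting on one attribute.
    n = len(labels)
    rem = 0.0
    for v in set(values):
        sub = [l for x, l in zip(values, labels) if x == v]
        rem += len(sub) / n * entropy(sub)
    return entropy(labels) - rem

# Hypothetical records: (high_attendance, owns_bicycle) -> graduated
rows = [(1, 0, 1), (1, 1, 1), (1, 0, 1), (0, 1, 0), (0, 0, 0), (0, 1, 0)]
attendance = [r[0] for r in rows]
bicycle = [r[1] for r in rows]
outcome = [r[2] for r in rows]
```

Here attendance perfectly separates the outcomes (gain 1.0 bit) while the irrelevant attribute scores near zero, so attendance tops the ranking.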
A privacy-preserving solution for compressed storage and selective retrieval of genomic data.
Huang, Zhicong; Ayday, Erman; Lin, Huang; Aiyar, Raeka S; Molyneaux, Adam; Xu, Zhenyu; Fellay, Jacques; Steinmetz, Lars M; Hubaux, Jean-Pierre
2016-12-01
In clinical genomics, the continuous evolution of bioinformatic algorithms and sequencing platforms makes it beneficial to store patients' complete aligned genomic data in addition to variant calls relative to a reference sequence. Due to the large size of human genome sequence data files (varying from 30 GB to 200 GB depending on coverage), two major challenges facing genomics laboratories are the costs of storage and the efficiency of the initial data processing. In addition, privacy of genomic data is becoming an increasingly serious concern, yet no standard data storage solutions exist that enable compression, encryption, and selective retrieval. Here we present a privacy-preserving solution named SECRAM (Selective retrieval on Encrypted and Compressed Reference-oriented Alignment Map) for the secure storage of compressed aligned genomic data. Our solution enables selective retrieval of encrypted data and improves the efficiency of downstream analysis (e.g., variant calling). Compared with BAM, the de facto standard for storing aligned genomic data, SECRAM uses 18% less storage. Compared with CRAM, one of the most compressed nonencrypted formats (using 34% less storage than BAM), SECRAM maintains efficient compression and downstream data processing, while allowing for unprecedented levels of security in genomic data storage. Compared with previous work, the distinguishing features of SECRAM are that (1) it is position-based instead of read-based, and (2) it allows random querying of a subregion from a BAM-like file in an encrypted form. Our method thus offers a space-saving, privacy-preserving, and effective solution for the storage of clinical genomic data. © 2016 Huang et al.; Published by Cold Spring Harbor Laboratory Press.
Derkach, Andriy; Chiang, Theodore; Gong, Jiafen; Addis, Laura; Dobbins, Sara; Tomlinson, Ian; Houlston, Richard; Pal, Deb K; Strug, Lisa J
2014-08-01
Sufficiently powered case-control studies with next-generation sequence (NGS) data remain prohibitively expensive for many investigators. If feasible, a more efficient strategy would be to include publicly available sequenced controls. However, these studies can be confounded by differences in sequencing platform; alignment, single nucleotide polymorphism and variant calling algorithms; read depth; and selection thresholds. Assuming one can match cases and controls on the basis of ethnicity and other potential confounding factors, and one has access to the aligned reads in both groups, we investigate the effect of systematic differences in read depth and selection threshold when comparing allele frequencies between cases and controls. We propose a novel likelihood-based method, the robust variance score (RVS), that substitutes genotype calls by their expected values given observed sequence data. We show theoretically that the RVS eliminates read depth bias in the estimation of minor allele frequency. We also demonstrate, using simulated and real NGS data, that the RVS method controls Type I error and has comparable power to the 'gold standard' analysis with the true underlying genotypes for both common and rare variants. An RVS R script and instructions can be found at strug.research.sickkids.ca and at https://github.com/strug-lab/RVS. lisa.strug@utoronto.ca. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
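The core substitution the RVS makes, replacing a hard genotype call with its conditional expectation given the reads, can be sketched generically. The Hardy-Weinberg prior and the likelihood inputs below are illustrative of the idea, not the paper's exact estimator:

```python
def expected_genotype(likelihoods, maf):
    # E[G | read data] for G in {0, 1, 2} copies of the minor allele.
    # likelihoods[g] = P(reads | G = g); the prior comes from
    # Hardy-Weinberg equilibrium with minor allele frequency maf.
    p = maf
    priors = [(1 - p) ** 2, 2 * p * (1 - p), p * p]
    post = [l * pr for l, pr in zip(likelihoods, priors)]
    z = sum(post)
    return sum(g * w for g, w in enumerate(post)) / z
```

With unambiguous reads the expectation collapses to the called genotype; with completely uninformative reads (flat likelihoods, as at very low depth) it falls back to the prior mean 2p, which is what removes the read-depth-dependent bias a hard caller would introduce.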
An imperialist competitive algorithm for virtual machine placement in cloud computing
NASA Astrophysics Data System (ADS)
Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza
2017-05-01
Cloud computing, the recently emerged revolution in IT industry, is empowered by virtualisation technology. In this paradigm, the user's applications run over some virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement. It plays an important role on resource utilisation and power efficiency of cloud computing environment. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem called ICA-VMPLC. The base optimisation algorithm is chosen to be ICA because of its ease in neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates search space in a unique manner to efficiently obtain optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods such as grouping genetic and ant colony-based algorithms as well as bin packing heuristic. The simulation results show that the proposed method is superior to other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.
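Among the comparison methods is a bin-packing heuristic; a minimal single-resource first-fit-decreasing sketch looks like this (real VM placement is multi-resource and also weighs power consumption, which this toy version ignores):

```python
def first_fit_decreasing(demands, capacity):
    # Place VMs (one CPU demand value each) onto identical hosts:
    # largest demand first, each into the first host with room.
    # Returns the hosts as lists of the demands placed on them.
    hosts = []
    for d in sorted(demands, reverse=True):
        for host in hosts:
            if sum(host) + d <= capacity:
                host.append(d)
                break
        else:
            hosts.append([d])  # no existing host fits: open a new one
    return hosts
```

Fewer opened hosts means less resource wastage and lower power draw, which is the axis on which ICA-VMPLC is compared against such heuristics.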
Virtual shelves in a digital library: a framework for access to networked information sources.
Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E
1995-01-01
Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. This framework uses the metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers are assigned to information sources. Call numbers are based on standard vocabulary codes. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. The framework has been implemented in two different systems. One system is based on the Open Software Foundation/Distributed Computing Environment and the other is based on the World Wide Web. This framework applies traditional methods of library classification and cataloging in new ways. It is compatible with two traditional styles of selecting information: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library and information science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources.
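The two-stage resolution can be sketched with plain lookup tables; all identifiers and URLs below are hypothetical stand-ins for the framework's vocabulary codes and directory:

```python
# Hypothetical identifiers throughout; the real framework derived call
# numbers from standard vocabulary codes.
call_number_to_shelf = {
    "QZ-200": "shelf:oncology",     # first mapping: call number -> virtual shelf
    "WG-100": "shelf:cardiology",
}
shelf_to_location = {
    "shelf:oncology": "http://host-a.example.org/",   # second mapping: shelf -> network location
    "shelf:cardiology": "http://host-b.example.org/",
}

def resolve(call_number):
    # Two-stage, location-independent resolution: relocating a server
    # only requires updating the location directory, never the call
    # numbers assigned to information sources.
    shelf = call_number_to_shelf[call_number]
    return shelf_to_location[shelf]
```

The split is the whole point of the design: the first table is stable cataloging data, while the second is operational and free to change.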
32 CFR 1624.2 - Issuance of induction orders.
Code of Federal Regulations, 2010 CFR
2010-07-01
... numbers and at such times as will assure that such call or requisition is filled. The names contained in... INDUCTIONS § 1624.2 Issuance of induction orders. The Director of Selective Service, upon receipt of a call...
NASA Astrophysics Data System (ADS)
Yavari, Somayeh; Valadan Zoej, Mohammad Javad; Salehi, Bahram
2018-05-01
The procedure of selecting an optimum number and best distribution of ground control information is important in order to reach accurate and robust registration results. This paper proposes a new general procedure based on the Genetic Algorithm (GA) which is applicable to all kinds of features (point, linear, and areal features); however, linear features, due to their unique characteristics, are of particular interest in this investigation. This method is called the Optimum number of Well-Distributed ground control Information Selection (OWDIS) procedure. Using this method, a population of binary chromosomes is randomly initialized. The ones indicate the presence of a pair of conjugate lines as a GCL and the zeros specify the absence. The chromosome length is considered equal to the number of all conjugate lines. For each chromosome, the unknown parameters of a proper mathematical model can be calculated using the selected GCLs (the ones in each chromosome). Then, a limited number of Check Points (CPs) are used to evaluate the Root Mean Square Error (RMSE) of each chromosome as its fitness value. The procedure continues until a stopping criterion is reached. The number and positions of the ones in the best chromosome indicate the selected GCLs among all conjugate lines. To evaluate the proposed method, a GeoEye image and an Ikonos image are used over different areas of Iran. Comparing the results obtained by the proposed method in a traditional RFM with conventional methods that use all conjugate lines as GCLs shows a five-fold accuracy improvement (pixel-level accuracy) as well as the strength of the proposed method. To prevent an over-parametrization error in a traditional RFM due to the selection of a high number of improper correlated terms, an optimized line-based RFM is also proposed.
The results show the superiority of the combination of the proposed OWDIS method with an optimized line-based RFM in terms of increasing the accuracy to better than 0.7 pixel, reliability, and reducing systematic errors. These results also demonstrate the high potential of linear features as reliable control features to reach sub-pixel accuracy in registration applications.
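A miniature version of the chromosome/fitness loop can be sketched on a toy problem: binary chromosomes select control points, a simple least-squares line stands in for the RFM, and check-point RMSE is the fitness. Everything beyond that structure (the operators, rates, and the line model itself) is illustrative, not the paper's configuration:

```python
import random
from math import inf, sqrt

def fit_line(pts):
    # Ordinary least squares for y = a*x + b (toy stand-in for the RFM).
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    den = n * sxx - sx * sx
    if den == 0:
        return None
    a = (n * sxy - sx * sy) / den
    return a, (sy - a * sx) / n

def rmse(model, pts):
    a, b = model
    return sqrt(sum((a * x + b - y) ** 2 for x, y in pts) / len(pts))

def fitness(chrom, candidates, checkpoints):
    # Ones in the chromosome select control points; fitness is the
    # check-point RMSE of the model fitted to the selected points.
    chosen = [p for bit, p in zip(chrom, candidates) if bit]
    if len(chosen) < 2:
        return inf
    model = fit_line(chosen)
    return inf if model is None else rmse(model, checkpoints)

def ga_select(candidates, checkpoints, pop_size=30, gens=40, seed=1):
    rng = random.Random(seed)
    n = len(candidates)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = min(pop, key=lambda c: fitness(c, candidates, checkpoints))
    for _ in range(gens):
        def tournament():
            a, b = rng.sample(pop, 2)
            fa = fitness(a, candidates, checkpoints)
            fb = fitness(b, candidates, checkpoints)
            return a if fa <= fb else b
        nxt = [best[:]]  # elitism: keep the best chromosome
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.3:          # occasional bit-flip mutation
                i = rng.randrange(n)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
        best = min(pop, key=lambda c: fitness(c, candidates, checkpoints))
    return best, fitness(best, candidates, checkpoints)
```

On data containing a few gross outliers among the candidate control points, the selected subset should beat naively using every candidate, which mirrors the paper's comparison against using all conjugate lines as GCLs.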
Social calls provide novel insights into the evolution of vocal learning
Sewall, Kendra B.; Young, Anna M.; Wright, Timothy F.
2016-01-01
Learned song is among the best-studied models of animal communication. In oscine songbirds, where learned song is most prevalent, it is used primarily for intrasexual selection and mate attraction. Learning of a different class of vocal signals, known as contact calls, is found in a diverse array of species, where they are used to mediate social interactions among individuals. We argue that call learning provides a taxonomically rich system for studying testable hypotheses for the evolutionary origins of vocal learning. We describe and critically evaluate four nonmutually exclusive hypotheses for the origin and current function of vocal learning of calls, which propose that call learning (1) improves auditory detection and recognition, (2) signals local knowledge, (3) signals group membership, or (4) allows for the encoding of more complex social information. We propose approaches to testing these four hypotheses but emphasize that all of them share the idea that social living, not sexual selection, is a central driver of vocal learning. Finally, we identify future areas for research on call learning that could provide new perspectives on the origins and mechanisms of vocal learning in both animals and humans. PMID:28163325
Measuring food intake with digital photography.
Martin, C K; Nicklas, T; Gunturk, B; Correa, J B; Allen, H R; Champagne, C
2014-01-01
The digital photography of foods method accurately estimates the food intake of adults and children in cafeterias. When using this method, images of food selection and leftovers are quickly captured in the cafeteria. These images are later compared with images of 'standard' portions of food using computer software. The amount of food selected and discarded is estimated based upon this comparison, and the application automatically calculates energy and nutrient intake. In the present review, we describe this method, as well as a related method called the Remote Food Photography Method (RFPM), which relies on smartphones to estimate food intake in near real-time in free-living conditions. When using the RFPM, participants capture images of food selection and leftovers using a smartphone and these images are wirelessly transmitted in near real-time to a server for analysis. Because data are transferred and analysed in near real-time, the RFPM provides a platform for participants to quickly receive feedback about their food intake behaviour and to receive dietary recommendations for achieving weight loss and health promotion goals. The reliability and validity of measuring food intake with the RFPM in adults and children is also reviewed. In sum, the body of research reviewed demonstrates that digital imaging accurately estimates food intake in many environments and it has many advantages over other methods, including reduced participant burden, elimination of the need for participants to estimate portion size, and the incorporation of computer automation to improve the accuracy, efficiency and cost-effectiveness of the method. © 2013 The British Dietetic Association Ltd.
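The intake arithmetic reduces to selection minus leftovers, each expressed as a fraction of the standard portion; a sketch with hypothetical numbers (the real method derives the fractions from image comparisons):

```python
def estimated_intake(standard_g, kcal_per_g, selected_frac, leftover_frac):
    # Grams eaten and energy intake for one food item. selected_frac and
    # leftover_frac are judgements of the selection and plate-waste
    # images relative to the standard-portion image.
    eaten_g = standard_g * (selected_frac - leftover_frac)
    return eaten_g, eaten_g * kcal_per_g

# Hypothetical item: 200 g standard portion at 1.5 kcal/g; the subject
# took 120% of standard and left 20% of standard behind.
g, kcal = estimated_intake(200, 1.5, 1.2, 0.2)
```

Summing such per-item estimates gives the automatically calculated energy and nutrient intake the method reports.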
Ferrarini, Alberto; Forcato, Claudio; Buson, Genny; Tononi, Paola; Del Monaco, Valentina; Terracciano, Mario; Bolognesi, Chiara; Fontana, Francesca; Medoro, Gianni; Neves, Rui; Möhlendick, Birte; Rihawi, Karim; Ardizzoni, Andrea; Sumanasuriya, Semini; Flohr, Penny; Lambros, Maryou; de Bono, Johann; Stoecklein, Nikolas H; Manaresi, Nicolò
2018-01-01
Chromosomal instability and associated chromosomal aberrations are hallmarks of cancer and play a critical role in disease progression and in the development of resistance to drugs. Single-cell genome analysis has gained interest in recent years as a source of biomarkers for targeted-therapy selection and drug resistance, and several methods have been developed to amplify the genomic DNA and to produce libraries suitable for Whole Genome Sequencing (WGS). However, most protocols require several enzymatic and cleanup steps, thus increasing the complexity and length of protocols, while robustness and speed are key factors for clinical applications. To tackle this issue, we developed a single-tube, single-step, streamlined protocol, exploiting the ligation mediated PCR (LM-PCR) Whole Genome Amplification (WGA) method, for low-pass genome sequencing with the Ion Torrent™ platform and copy number alterations (CNAs) calling from single cells. The method was evaluated on single cells isolated from 6 aberrant cell lines of the NCI-H series. In addition, to demonstrate the feasibility of the workflow on clinical samples, we analyzed single circulating tumor cells (CTCs) and white blood cells (WBCs) isolated from the blood of patients affected by prostate cancer or lung adenocarcinoma. The results obtained show that the developed workflow generates data accurately representing whole genome absolute copy number profiles of single cells and allows alteration calling at resolutions down to 100 Kbp with as few as 200,000 reads. The presented data demonstrate the feasibility of the Ampli1™ WGA-based low-pass workflow for the detection of CNAs in single tumor cells, which would be of particular interest for genome-driven targeted therapy selection and for monitoring of disease progression.
Roff, D A; Crnokrak, P; Fairbairn, D J
2003-07-01
Quantitative genetic theory assumes that trade-offs are best represented by bivariate normal distributions. This theory predicts that selection will shift the trade-off function itself and not just move the mean trait values along a fixed trade-off line, as is generally assumed in optimality models. As a consequence, quantitative genetic theory predicts that the trade-off function will vary among populations in which at least one of the component traits itself varies. This prediction is tested using the trade-off between call duration and flight capability, as indexed by the mass of the dorsolateral flight muscles, in the macropterous morph of the sand cricket. We use four different populations of crickets that vary in the proportion of macropterous males (Lab = 33%, Florida = 29%, Bermuda = 72%, South Carolina = 80%). We find, as predicted, that there is significant variation in the intercept of the trade-off function but not the slope, supporting the hypothesis that trade-off functions are better represented as bivariate normal distributions rather than single lines. We also test the prediction from a quantitative genetic model of the evolution of wing dimorphism that the mean call duration of macropterous males will increase with the percentage of macropterous males in the population. This prediction is also supported. Finally, we estimate the probability of a macropterous male attracting a female, P, as a function of the relative time spent calling (P = time spent calling by the macropterous male/(total time spent calling by both the micropterous and macropterous males)). We find that in the Lab and Florida populations the probability of a female selecting the macropterous male is equal to P, indicating that preference is due simply to relative call duration.
But in the Bermuda and South Carolina populations the probability of a female selecting a macropterous male is less than P, indicating a preference for the micropterous male even after differences in call duration are accounted for.
Muley, Vijaykumar Yogesh; Ranjan, Akash
2012-01-01
Recent progress in computational methods for predicting physical and functional protein-protein interactions has provided new insights into the complexity of biological processes. Most of these methods assume that functionally interacting proteins are likely to have a shared evolutionary history. This history can be traced out for the protein pairs of a query genome by correlating different evolutionary aspects of their homologs in multiple genomes known as the reference genomes. These methods include phylogenetic profiling, gene neighborhood and co-occurrence of the orthologous protein coding genes in the same cluster or operon. These are collectively known as genomic context methods. On the other hand, a method called mirrortree is based on the similarity of phylogenetic trees between two interacting proteins. Comprehensive performance analyses of these methods have been frequently reported in the literature. However, very few studies provide insight into the effect of reference genome selection on detection of meaningful protein interactions. We analyzed the performance of four methods and their variants to understand the effect of reference genome selection on prediction efficacy. We used six sets of reference genomes, sampled in accordance with phylogenetic diversity and relationship between organisms from 565 bacteria. We used Escherichia coli as a model organism and the gold standard datasets of interacting proteins reported in the DIP, EcoCyc and KEGG databases to compare the performance of the prediction methods. High performance in predicting protein-protein interactions was achievable even with 100-150 of the 565 bacterial genomes. Inclusion of archaeal genomes in the reference genome set improves performance. We find that, to obtain good performance, it is better to sample a few genomes from related genera of prokaryotes than to use the full set of available genomes. Moreover, such sampling allows 50-100 genomes to be selected with comparable prediction accuracy when computational resources are limited.
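The phylogenetic-profiling idea referred to above can be sketched in a few lines. The profiles and protein names below are toy data invented for illustration, not drawn from any real genome set:

```python
import numpy as np

# Toy phylogenetic profiles: rows = proteins of the query genome,
# columns = presence (1) / absence (0) of a homolog in each reference genome.
profiles = np.array([
    [1, 1, 0, 1, 0, 1, 1, 0],   # protein A
    [1, 1, 0, 1, 0, 1, 1, 0],   # protein B (co-evolves with A)
    [0, 0, 1, 0, 1, 0, 0, 1],   # protein C (complementary pattern)
])

def profile_similarity(p, q):
    """Jaccard similarity of two presence/absence profiles."""
    both = np.sum((p == 1) & (q == 1))
    either = np.sum((p == 1) | (q == 1))
    return both / either if either else 0.0

# Protein pairs with highly similar profiles are predicted to interact.
sim_ab = profile_similarity(profiles[0], profiles[1])   # identical profiles
sim_ac = profile_similarity(profiles[0], profiles[2])   # disjoint profiles
```

The choice of reference genomes enters through the columns: the study above shows that which genomes are included changes how informative such similarity scores are.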
Mocan, Lucian; Tabaran, Flaviu A; Mocan, Teodora; Bele, Constantin; Orza, Anamaria Ioana; Lucan, Ciprian; Stiufiuc, Rares; Manaila, Ioana; Iulia, Ferencz; Dana, Iancu; Zaharie, Florin; Osian, Gelu; Vlad, Liviu; Iancu, Cornel
2011-01-01
The process of laser-mediated ablation of cancer cells marked with biofunctionalized carbon nanotubes is frequently called "nanophotothermolysis". We herein present a method of selective nanophotothermolysis of pancreatic cancer (PC) using multiwalled carbon nanotubes (MWCNTs) functionalized with human serum albumin (HSA). To test the therapeutic value of these nanobioconjugates, we developed an ex-vivo experimental platform. Surgically resected specimens from patients with PC were preserved in a cold medium and kept alive via intra-arterial perfusion. The HSA-MWCNTs were then administered intra-arterially in the greater pancreatic artery under ultrasound guidance. Confocal and transmission electron microscopy combined with immunohistochemical staining confirmed the selective accumulation of HSA-MWCNTs inside the human PC tissue. External laser irradiation of the specimen after the intra-arterial administration of HSA-MWCNTs produced extensive necrosis of the malignant tissue, without any harmful effects on the surrounding healthy parenchyma. We thus obtained selective photothermal ablation of the malignant tissue, based on the selective internalization of MWCNTs with HSA cargo inside the pancreatic adenocarcinoma after ex-vivo intra-arterial perfusion.
Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.
Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin
We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.
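A minimal sketch of the FANS idea described above: estimate each feature's class-conditional marginal densities, transform the feature into an estimated log density ratio, then fit penalized logistic regression on the augmented features. This is an illustration on synthetic data, with scipy/scikit-learn standing in for the authors' implementation:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

def fans_transform(X, y, X_new):
    """Augment features with estimated log marginal density ratios.

    For each feature j, estimate the class-conditional marginal densities
    by kernel density estimation and map x_j -> log f_j(x_j) - log g_j(x_j),
    the most powerful univariate transform for that single feature.
    """
    eps = 1e-12
    cols = []
    for j in range(X.shape[1]):
        f = gaussian_kde(X[y == 1, j])     # class-1 marginal density
        g = gaussian_kde(X[y == 0, j])     # class-0 marginal density
        cols.append(np.log(f(X_new[:, j]) + eps) - np.log(g(X_new[:, j]) + eps))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
n, p = 200, 5
y = rng.integers(0, 2, n)
X = rng.normal(0, 1, (n, p)) + y[:, None]   # class-shifted synthetic features
Z = fans_transform(X, y, X)                 # augmented features
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(Z, y)
acc = clf.score(Z, y)
```

In the full method the density estimates are computed on a held-out split, and the L1 penalty performs the "Selection" part of FANS; both refinements are omitted here for brevity.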
A segmentation/clustering model for the analysis of array CGH data.
Picard, F; Robin, S; Lebarbier, E; Daudin, J-J
2007-09-01
Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.
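The dynamic-programming half of the DP-EM algorithm can be illustrated with a classic least-squares segmentation. This is a generic sketch, not the authors' code, and the mixture/EM clustering of the recovered segments is omitted:

```python
import numpy as np

def dp_segment(x, k):
    """Partition signal x into k contiguous segments minimising the total
    within-segment sum of squared deviations, by dynamic programming
    (the DP half of DP-EM; the EM clustering step is omitted)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.concatenate([[0.0], np.cumsum(x)])
    s2 = np.concatenate([[0.0], np.cumsum(x ** 2)])

    def sse(i, j):
        """Sum of squared deviations of x[i..j] (inclusive) via prefix sums."""
        m = j - i + 1
        return (s2[j + 1] - s2[i]) - (s[j + 1] - s[i]) ** 2 / m

    INF = float("inf")
    D = np.full((k + 1, n + 1), INF)    # D[seg][j]: best cost of x[:j] in seg parts
    D[0][0] = 0.0
    back = np.zeros((k + 1, n + 1), dtype=int)
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                c = D[seg - 1][i] + sse(i, j - 1)
                if c < D[seg][j]:
                    D[seg][j] = c
                    back[seg][j] = i
    segments, j = [], n                 # backtrack the breakpoints
    for seg in range(k, 0, -1):
        i = back[seg][j]
        segments.append((int(i), int(j)))
        j = i
    return segments[::-1], D[k][n]
```

On a CGH-like profile with two breakpoints, `dp_segment([0.0]*10 + [5.0]*10 + [0.0]*10, 3)` recovers the three homogeneous regions; the mixture model would then assign each segment a biological status (loss/normal/gain).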
The construction of support vector machine classifier using the firefly algorithm.
Chao, Chih-Feng; Horng, Ming-Huwi
2015-01-01
The setting of parameters in support vector machines (SVMs) is very important with regard to accuracy and efficiency. In this paper, we employ the firefly algorithm to train all parameters of the SVM simultaneously, including the penalty parameter, smoothness parameter, and Lagrangian multiplier. The proposed method is called the firefly-based SVM (firefly-SVM). Feature selection is not considered, because an SVM combined with feature selection is not well suited to multiclass classification, especially for the one-against-all multiclass SVM. In the experiments, binary and multiclass classification are explored. For binary classification, ten benchmark data sets from the University of California, Irvine (UCI) machine learning repository are used; additionally, firefly-SVM is applied to the multiclass diagnosis of ultrasonic supraspinatus images. The classification performance of firefly-SVM is also compared with the original LIBSVM method combined with grid search and with the particle swarm optimization based SVM (PSO-SVM). The experimental results advocate the use of firefly-SVM for pattern classification tasks requiring maximum accuracy.
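The firefly update rule underlying methods of this kind can be sketched as follows. This is a generic sketch minimizing a toy objective rather than tuning actual SVM parameters; all parameter values and the cooling schedule are illustrative choices, not the paper's:

```python
import numpy as np

def firefly_minimize(f, bounds, n_fireflies=15, n_iter=60,
                     beta0=1.0, gamma=0.1, alpha=0.2, seed=0):
    """Minimal firefly algorithm: every firefly moves toward each brighter
    (lower-objective) one, with attractiveness decaying with distance."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, (n_fireflies, len(lo)))
    F = np.array([f(x) for x in X])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if F[j] < F[i]:                              # j is brighter
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)       # attractiveness
                    step = alpha * (rng.random(len(lo)) - 0.5)
                    X[i] = np.clip(X[i] + beta * (X[j] - X[i]) + step, lo, hi)
                    F[i] = f(X[i])
        alpha *= 0.97                                        # cool the random walk
    best = int(np.argmin(F))
    return X[best], F[best]

# Toy usage: in firefly-SVM the coordinates would instead encode the SVM
# parameters and f would be a cross-validated classification error.
x_best, f_best = firefly_minimize(lambda x: np.sum(x ** 2), [(-5, 5), (-5, 5)])
```

Because the current best firefly never moves while it remains best, the incumbent objective value is non-increasing over iterations.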
NASA Astrophysics Data System (ADS)
Mrozek, T.; Perlicki, K.; Tajmajer, T.; Wasilewski, P.
2017-08-01
The article presents an image analysis method, based on the asynchronous delay tap sampling (ADTS) technique, for simultaneous monitoring of various impairments occurring in the physical layer of an optical network. The ADTS method enables visualization of the optical signal in the form of characteristics (so-called phase portraits) that change their shape under the influence of impairments such as chromatic dispersion, polarization mode dispersion and ASE noise. Using this method, a simulation model was built with OptSim 4.0. After the simulation study, data were obtained in the form of images that were further analyzed using a convolutional neural network. The main goal of the study was to train a convolutional neural network to recognize a selected impairment (distortion), then to test its accuracy and estimate the impairment for a selected set of test images. The input data consisted of processed binary images in the form of two-dimensional matrices indexed by pixel position. This article focuses only on the analysis of images containing chromatic dispersion.
A Study on Real-Time Scheduling Methods in Holonic Manufacturing Systems
NASA Astrophysics Data System (ADS)
Iwamura, Koji; Taimizu, Yoshitaka; Sugimura, Nobuhiro
Recently, new architectures of manufacturing systems have been proposed to realize flexible control structures that can cope with dynamic changes in the volume and variety of products, as well as unforeseen disruptions such as failures of manufacturing resources and interruptions by high-priority jobs. These are known as autonomous distributed manufacturing systems, biological manufacturing systems and holonic manufacturing systems. Rule-based scheduling methods were proposed and applied to the real-time production scheduling problems of the HMS (Holonic Manufacturing System) in a previous report. However, problems remain from the viewpoint of optimizing the whole production schedule. New procedures are proposed in the present paper to select production schedules, aimed at generating effective schedules in real time. The proposed methods enable the individual holons to select suitable machining operations to be carried out in the next time period. A coordination process among the holons, based on the effectiveness values of the individual holons, is also proposed.
Alpha-beta coordination method for collective search
Goldsmith, Steven Y.
2002-01-01
The present invention comprises a decentralized coordination strategy called alpha-beta coordination. The alpha-beta coordination strategy is a family of collective search methods that allow teams of communicating agents to implicitly coordinate their search activities through a division of labor based on self-selected roles and self-determined status. An agent can play one of two complementary roles. An agent in the alpha role is motivated to improve its status by exploring new regions of the search space. An agent in the beta role is also motivated to improve its status, but is conservative and tends to remain aggregated with other agents until alpha agents have clearly identified and communicated better regions of the search space. An agent can select its role dynamically based on its current status value relative to the status values of neighboring team members. Status can be determined by a function of the agent's sensor readings, and can generally be a measurement of source intensity at the agent's current location. An agent's decision cycle can comprise three sequential decision rules: (1) selection of a current role based on the evaluation of the current status data, (2) selection of a specific subset of the current data, and (3) determination of the next heading using the selected data. Variations of the decision rules produce different versions of alpha and beta behaviors that lead to different collective behavior properties.
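The decision cycle described in the patent abstract can be sketched schematically. The function names, the `margin` parameter, and the specific heading rules (alpha moves away from the team centroid, beta toward the best report) are illustrative assumptions consistent with, but not taken verbatim from, the text:

```python
import numpy as np

def choose_role(own_status, neighbor_statuses, margin=0.1):
    """Decision rule 1: self-select a role from status values (e.g. sensed
    source intensity). Agents near the best local status turn alpha and
    explore; the rest stay beta and remain aggregated."""
    best = max(neighbor_statuses, default=own_status)
    return "alpha" if own_status >= best - margin else "beta"

def next_heading(position, reports, role):
    """Decision rules 2-3: select a role-specific subset of the shared
    (position, status) reports and derive the next unit heading: here,
    alpha agents head away from the team centroid into unexplored space,
    beta agents head toward the best reported position."""
    positions = np.array([p for p, _ in reports], dtype=float)
    statuses = np.array([s for _, s in reports], dtype=float)
    if role == "alpha":
        v = position - positions.mean(axis=0)
    else:
        v = positions[int(np.argmax(statuses))] - position
    return v / (np.linalg.norm(v) + 1e-9)
```

Roles are re-evaluated every cycle, so an alpha agent that stops improving its status naturally falls back to beta, reproducing the division of labor the patent describes.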
Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies
Theis, Fabian J.
2017-01-01
Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
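The inverse-probability resampling idea behind the corrections above can be sketched as follows. This is the plain, noise-free variant (the paper's stochastic and parametric versions add noise or a fitted distribution on top), and the data are synthetic:

```python
import numpy as np

def ip_oversample(X, y, incl_prob, n_out, rng):
    """Inverse-probability resampling: draw with replacement, weighting
    each point of the stratified sample by 1 / its inclusion probability,
    so the resample mimics the unstratified source population."""
    w = 1.0 / np.asarray(incl_prob)
    p = w / w.sum()
    idx = rng.choice(len(y), size=n_out, replace=True, p=p)
    return X[idx], y[idx]

rng = np.random.default_rng(0)
# Cases (y=1) were fully sampled (prob 1.0), controls subsampled at 0.1,
# so controls are 10x under-represented in the case-control data.
X = np.arange(20).reshape(-1, 1).astype(float)
y = np.array([1] * 10 + [0] * 10)
incl = np.where(y == 1, 1.0, 0.1)
Xr, yr = ip_oversample(X, y, incl, n_out=5000, rng=rng)
control_frac = np.mean(yr == 0)   # ~100/110 = 0.909, as in the population
```

A classifier such as a random forest trained on `(Xr, yr)` then sees approximately the population class balance rather than the artificial 50/50 case-control balance.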
Gene Regulatory Network Inferences Using a Maximum-Relevance and Maximum-Significance Strategy
Liu, Wei; Zhu, Wen; Liao, Bo; Chen, Xiangtao
2016-01-01
Recovering gene regulatory networks from expression data is a challenging problem in systems biology that provides valuable information on the regulatory mechanisms of cells. A number of algorithms based on computational models are currently used to recover network topology. However, most of these algorithms have limitations. For example, many models tend to be complicated because of the “large p, small n” problem. In this paper, we propose a novel regulatory network inference method called the maximum-relevance and maximum-significance network (MRMSn) method, which converts the problem of recovering networks into a problem of how to select the regulator genes for each gene. To solve the latter problem, we present an algorithm that is based on information theory and selects the regulator genes for a specific gene by maximizing the relevance and significance. A first-order incremental search algorithm is used to search for regulator genes. Eventually, a strict constraint is adopted to adjust all of the regulatory relationships according to the obtained regulator genes and thus obtain the complete network structure. We performed our method on five different datasets and compared our method to five state-of-the-art methods for network inference based on information theory. The results confirm the effectiveness of our method. PMID:27829000
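A hedged sketch of the greedy, information-theoretic regulator search this family of methods uses. The relevance-minus-redundancy score below is an mRMR-flavored simplification (the paper's maximum-significance term differs), and the expression profiles are toy discrete data:

```python
import numpy as np
from collections import Counter

def mutual_info(a, b):
    """Mutual information (in nats) of two discrete sequences."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * np.log((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

def select_regulators(target, candidates, k):
    """First-order incremental search: greedily add the candidate gene with
    maximal relevance I(candidate; target), penalised by its mean
    redundancy with the regulators already chosen."""
    chosen = []
    while len(chosen) < k:
        best, best_score = None, -np.inf
        for name, expr in candidates.items():
            if name in chosen:
                continue
            rel = mutual_info(expr, target)
            red = (np.mean([mutual_info(expr, candidates[c]) for c in chosen])
                   if chosen else 0.0)
            if rel - red > best_score:
                best, best_score = name, rel - red
        chosen.append(best)
    return chosen

target = [0, 0, 1, 1, 0, 1, 0, 1]
candidates = {
    "g1": [0, 0, 1, 1, 0, 1, 0, 1],   # copies the target: maximal relevance
    "g2": [0, 0, 0, 0, 1, 1, 1, 1],   # independent of the target
    "g3": [0, 0, 0, 0, 0, 0, 0, 0],   # constant: zero information
}
regulators = select_regulators(target, candidates, k=2)
```

Running this per target gene, then pruning with a global constraint as the paper describes, yields the full network structure.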
Igne, Benoît; de Juan, Anna; Jaumot, Joaquim; Lallemand, Jordane; Preys, Sébastien; Drennen, James K; Anderson, Carl A
2014-10-01
The implementation of a blend monitoring and control method based on a process analytical technology such as near infrared spectroscopy requires the selection and optimization of numerous criteria that will affect the monitoring outputs and expected blend end-point. Using a five component formulation, the present article contrasts the modeling strategies and end-point determination of a traditional quantitative method based on the prediction of the blend parameters employing partial least-squares regression with a qualitative strategy based on principal component analysis and Hotelling's T(2) and residual distance to the model, called Prototype. The possibility to monitor and control blend homogeneity with multivariate curve resolution was also assessed. The implementation of the above methods in the presence of designed experiments (with variation of the amount of active ingredient and excipients) and with normal operating condition samples (nominal concentrations of the active ingredient and excipients) was tested. The impact of criteria used to stop the blends (related to precision and/or accuracy) was assessed. Results demonstrated that while all methods showed similarities in their outputs, some approaches were preferred for decision making. The selectivity of regression based methods was also contrasted with the capacity of qualitative methods to determine the homogeneity of the entire formulation. Copyright © 2014. Published by Elsevier B.V.
Efficient multi-atlas abdominal segmentation on clinically acquired CT with SIMPLE context learning.
Xu, Zhoubing; Burke, Ryan P; Lee, Christopher P; Baucom, Rebeccah B; Poulose, Benjamin K; Abramson, Richard G; Landman, Bennett A
2015-08-01
Abdominal segmentation on clinically acquired computed tomography (CT) has been a challenging problem given the inter-subject variance of human abdomens and complex 3-D relationships among organs. Multi-atlas segmentation (MAS) provides a potentially robust solution by leveraging label atlases via image registration and statistical fusion. We posit that the efficiency of atlas selection requires further exploration in the context of substantial registration errors. The selective and iterative method for performance level estimation (SIMPLE) method is a MAS technique integrating atlas selection and label fusion that has proven effective for prostate radiotherapy planning. Herein, we revisit atlas selection and fusion techniques for segmenting 12 abdominal structures using clinically acquired CT. Using a re-derived SIMPLE algorithm, we show that performance on multi-organ classification can be improved by accounting for exogenous information through Bayesian priors (so called context learning). These innovations are integrated with the joint label fusion (JLF) approach to reduce the impact of correlated errors among selected atlases for each organ, and a graph cut technique is used to regularize the combined segmentation. In a study of 100 subjects, the proposed method outperformed other comparable MAS approaches, including majority vote, SIMPLE, JLF, and the Wolz locally weighted vote technique. The proposed technique provides consistent improvement over state-of-the-art approaches (median improvement of 7.0% and 16.2% in DSC over JLF and Wolz, respectively) and moves toward efficient segmentation of large-scale clinically acquired CT data for biomarker screening, surgical navigation, and data mining. Copyright © 2015 Elsevier B.V. All rights reserved.
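The SIMPLE selection loop the abstract builds on can be re-implemented schematically on toy binary masks. The real method operates on registered atlas label volumes and is combined here neither with joint label fusion nor graph-cut regularization; the dropout threshold `mean - alpha * std` is one common choice:

```python
import numpy as np

def dice(a, b):
    """Dice overlap of two binary label masks."""
    denom = np.sum(a) + np.sum(b)
    return 2.0 * np.sum(a & b) / denom if denom else 1.0

def simple_fusion(atlas_labels, n_iter=5, alpha=1.0):
    """SIMPLE-style loop: fuse by majority vote, score each atlas against
    the consensus, drop atlases whose Dice falls below mean - alpha * std,
    and repeat until the selection stabilises."""
    selected = list(range(len(atlas_labels)))
    for _ in range(n_iter):
        stack = np.array([atlas_labels[i] for i in selected])
        consensus = (stack.mean(axis=0) >= 0.5).astype(int)
        scores = np.array([dice(atlas_labels[i], consensus) for i in selected])
        keep = scores >= scores.mean() - alpha * scores.std()
        new_sel = [i for i, k in zip(selected, keep) if k]
        if new_sel == selected or len(new_sel) < 2:
            break
        selected = new_sel
    return consensus, selected

# Toy data: four agreeing atlases and one badly misregistered one.
good = np.zeros((10, 10), dtype=int); good[2:6, 2:6] = 1
bad = np.zeros((10, 10), dtype=int); bad[7:9, 7:9] = 1
consensus, selected = simple_fusion([good, good, good, good, bad])
```

The misregistered atlas scores Dice 0 against the first consensus and is discarded; the loop then converges on the four consistent atlases.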
Protection of cooled blades of complex internal structure
NASA Technical Reports Server (NTRS)
Glamiche, P.
1977-01-01
The problem of general protection of cooled blades of complex internal structure was solved by a method called the SF technique, which makes possible the protection of both external and internal surfaces, as well as those of the cooling-air orifices, whatever their diameter. The SF method is most often applied as a pack process, at controlled or high activity; it can be used on previously uncoated parts, but also on pieces already coated by a thermochemical, chemical or PVD method. The respective thicknesses of external and internal coatings may be precisely predetermined, with no parasitic particles liable to remain inside the parts after application of the protective treatment. Results obtained to date by application of this method are illustrated by the presentation and examination of a varied selection of advanced turbo engines.
SD-MSAEs: Promoter recognition in human genome based on deep feature extraction.
Xu, Wenxuan; Zhang, Li; Lu, Yaping
2016-06-01
The prediction and recognition of promoters in the human genome play an important role in DNA sequence analysis. Entropy, in the Shannon sense, is a versatile tool of information theory for bioinformatic sequence analysis. Relative entropy estimator methods based on statistical divergence (SD) are used to extract meaningful features that distinguish different regions of DNA sequences. In this paper, we choose context features and use a set of SD methods to select the most effective n-mers distinguishing promoter regions from other DNA regions in the human genome. Extracted from the total possible combinations of n-mers, four sparse distributions are obtained from promoter and non-promoter training samples. The informative n-mers are selected by optimizing the differentiating extents of these distributions. Specifically, we combine the advantages of statistical divergence and multiple sparse auto-encoders (MSAEs) in deep learning to extract deep features for promoter recognition. We then apply multiple SVMs and a decision model to construct a human promoter recognition method called SD-MSAEs. The framework is flexible in that it can freely integrate new feature extraction or classification models. Experimental results show that our method has high sensitivity and specificity. Copyright © 2016 Elsevier Inc. All rights reserved.
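The divergence-based n-mer selection step can be sketched as follows. This is a simplified stand-in for the paper's SD estimators: each k-mer is scored by its pointwise contribution to the symmetric Kullback-Leibler divergence between promoter and background k-mer distributions, and the sequences are toy examples:

```python
import numpy as np
from itertools import product

def kmer_freqs(seqs, k):
    """Relative frequencies of all 4^k DNA k-mers, with +1 pseudocounts."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = dict.fromkeys(kmers, 1)
    for s in seqs:
        for i in range(len(s) - k + 1):
            if s[i:i + k] in counts:
                counts[s[i:i + k]] += 1
    total = sum(counts.values())
    return kmers, np.array([counts[m] / total for m in kmers])

def rank_kmers_by_divergence(promoters, background, k=2):
    """Score each k-mer by its term in the symmetric KL divergence
    (p - q) * log(p / q) between the two k-mer distributions."""
    kmers, p = kmer_freqs(promoters, k)
    _, q = kmer_freqs(background, k)
    contrib = (p - q) * np.log(p / q)      # every term is >= 0
    order = np.argsort(contrib)[::-1]
    return [kmers[i] for i in order], contrib[order]

promoters = ["TATATATATA"] * 5            # toy TA-rich "promoters"
background = ["GCGCGCGCGC"] * 5           # toy GC-rich background
ranked, scores = rank_kmers_by_divergence(promoters, background, k=2)
```

The top-ranked k-mers are exactly those whose frequencies differ most between the two classes; in the full method these selected n-mer features feed the sparse auto-encoders and SVMs.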
Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang
2012-10-21
A new heuristic algorithm based on a so-called geometric distance sorting technique is proposed for solving fluence map optimization with dose-volume constraints, one of the most essential tasks in inverse planning for IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linearly constrained quadratic optimization model that ignores the dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model, step by step, until all dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. To choose proper candidate voxels for the current round of constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of voxels. The new geometric distance sorting technique largely avoids the unexpected increase of the objective function value that is inevitably caused by constraint adding, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation of the proposed method is also given, and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases (head-and-neck, prostate, lung and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and is to some extent a more efficient optimization technique for choosing constraints. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving fluence map optimization with dose-volume constraints.
ERIC Educational Resources Information Center
Carter, Angela
This study involved observing a second-grade classroom to investigate how the teacher called on students, noting whether the teacher gave enough attention to students who raised their hands frequently by calling on them and examining students' responses when called on. Researchers implemented a new method of calling on students using name cards,…
Two sympatric species of passerine birds imitate the same raptor calls in alarm contexts
NASA Astrophysics Data System (ADS)
Ratnayake, Chaminda P.; Goodale, Eben; Kotagama, Sarath W.
2010-01-01
While some avian mimics appear to select sounds randomly, other species preferentially imitate sounds such as predator calls that are associated with danger. Previous work has shown that the Greater Racket-tailed Drongo ( Dicrurus paradiseus) incorporates predator calls and heterospecific alarm calls into its own species-typical alarm vocalizations. Here, we show that another passerine species, the Sri Lanka Magpie ( Urocissa ornata), which inhabits the same Sri Lankan rainforest, imitates three of the same predator calls that drongos do. For two of these call types, there is evidence that magpies also use them in alarm contexts. Our results support the hypothesis that imitated predator calls can serve as signals of alarm to multiple species.
Jović, Ozren
2016-12-15
A novel method for quantitative prediction and variable selection on spectroscopic data, called Durbin-Watson partial least-squares regression (dwPLS), is proposed in this paper. The idea is to exploit serial correlation in infrared data, which is known to consist of highly correlated neighbouring variables. The method selects only those variables whose intervals have a lower Durbin-Watson statistic (dw) than a certain optimal cutoff. For each interval, dw is calculated on a vector of regression coefficients. Adulteration of cold-pressed linseed oil (L), a well-known nutrient beneficial to health, is studied in this work by mixing it with cheaper oils: rapeseed oil (R), sesame oil (Se) and sunflower oil (Su). The samples for each botanical origin of oil vary with respect to producer, content and geographic origin. The results obtained indicate that MIR-ATR combined with dwPLS could be applied to the quantitative determination of edible-oil adulteration. Copyright © 2016 Elsevier Ltd. All rights reserved.
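The interval-selection step can be sketched as follows. The DW statistic is computed on slices of a regression-coefficient vector, and intervals with low dw (smooth, serially correlated coefficients) are kept; the interval width, cutoff, and toy coefficient vector are illustrative, not the paper's optimized values:

```python
import numpy as np

def durbin_watson(b):
    """Durbin-Watson statistic of a coefficient vector: near 0 for a
    strongly positively autocorrelated (smooth) sequence, near 2 for
    uncorrelated noise."""
    b = np.asarray(b, dtype=float)
    return np.sum(np.diff(b) ** 2) / np.sum(b ** 2)

def select_intervals(coefs, width, cutoff):
    """Split a regression-coefficient vector into contiguous spectral
    intervals and keep those whose dw statistic is below the cutoff."""
    kept = []
    for start in range(0, len(coefs) - width + 1, width):
        if durbin_watson(coefs[start:start + width]) < cutoff:
            kept.append((start, start + width))
    return kept

rng = np.random.default_rng(0)
# First 50 "wavenumbers": a smooth coefficient band; last 50: pure noise.
coefs = np.concatenate([np.sin(np.linspace(0, np.pi, 50)), rng.normal(size=50)])
kept = select_intervals(coefs, width=50, cutoff=0.5)
```

Only the smooth interval survives the cutoff; in dwPLS the PLS model is then refit on the retained variables.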
Signal evaluation environment: a new method for the design of peripheral in-vehicle warning signals.
Werneke, Julia; Vollrath, Mark
2011-06-01
An evaluation method called the Signal Evaluation Environment (SEE) was developed for use in the early stages of the design process of peripheral in-vehicle warning signals. Accident analyses have shown that with complex driving situations such as intersections, the visual scan strategies of the driver contribute to overlooking other road users who have the right of way. Salient peripheral warning signals could disrupt these strategies and direct drivers' attention towards these road users. To select effective warning signals, the SEE was developed as a laboratory task requiring visual-cognitive processes similar to those used at intersections. For validation of the SEE, four experiments were conducted using different stimulus characteristics (size, colour contrast, shape, flashing) that influence peripheral vision. The results confirm that the SEE is able to differentiate between the selected stimulus characteristics. The SEE is a useful initial tool for designing peripheral signals, allowing quick and efficient preselection of beneficial signals.
Burstiness and tie activation strategies in time-varying social networks.
Ubaldi, Enrico; Vezzani, Alessandro; Karsai, Márton; Perra, Nicola; Burioni, Raffaella
2017-04-13
The recent developments in the field of social networks shifted the focus from static to dynamical representations, calling for new methods for their analysis and modelling. Observations in real social systems identified two main mechanisms that play a primary role in networks' evolution and influence ongoing spreading processes: the strategies individuals adopt when selecting between new or old social ties, and the bursty nature of the social activity setting the pace of these choices. We introduce a time-varying network model accounting both for ties selection and burstiness and we analytically study its phase diagram. The interplay of the two effects is nontrivial and, interestingly, the effects of burstiness might be suppressed in regimes where individuals exhibit a strong preference towards previously activated ties. The results are tested against numerical simulations and compared with two empirical datasets with very good agreement. Consequently, the framework provides a principled method to classify the temporal features of real networks, and thus yields new insights to elucidate the effects of social dynamics on spreading processes.
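The two mechanisms the model combines can be sketched in a toy simulation: power-law inter-event times produce burstiness, and a reinforcement rule of the form c/(c+k) governs the choice between a new and an already-activated tie. Parameter values and the specific reinforcement form are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def simulate_memory_network(n_agents=100, n_events=5000, beta=1.5, c=1.0, seed=0):
    """Activity-driven toy model with memory: waiting times between an
    agent's activations are Pareto distributed (burstiness), and an agent
    that has already contacted k distinct partners picks a *new* partner
    with probability c / (c + k), otherwise reuses an old tie."""
    rng = np.random.default_rng(seed)
    next_time = rng.pareto(beta, n_agents)        # first activation times
    partners = [set() for _ in range(n_agents)]
    events = []
    for _ in range(n_events):
        i = int(np.argmin(next_time))             # next agent to activate
        t = next_time[i]
        k = len(partners[i])
        if k == 0 or rng.random() < c / (c + k):
            j = int(rng.integers(n_agents - 1))   # a uniformly random new contact
            j = j if j < i else j + 1             # any node but i itself
        else:
            j = int(rng.choice(sorted(partners[i])))   # reuse an old tie
        partners[i].add(j)
        events.append((t, i, j))
        next_time[i] = t + rng.pareto(beta)       # bursty inter-event time
    return events, partners

events, partners = simulate_memory_network()
```

With memory switched on, each agent's number of distinct partners grows much more slowly than its number of events, which is the regime where the paper finds burstiness effects suppressed.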
Support-vector-based emergent self-organising approach for emotional understanding
NASA Astrophysics Data System (ADS)
Nguwi, Yok-Yen; Cho, Siu-Yeung
2010-12-01
This study discusses the computational analysis of general emotion understanding from questionnaires methodology. The questionnaires method approaches the subject by investigating the real experience that accompanied the emotions, whereas the other laboratory approaches are generally associated with exaggerated elements. We adopted a connectionist model called support-vector-based emergent self-organising map (SVESOM) to analyse the emotion profiling from the questionnaires method. The SVESOM first identifies the important variables by giving discriminative features with high ranking. The classifier then performs the classification based on the selected features. Experimental results show that the top rank features are in line with the work of Scherer and Wallbott [(1994), 'Evidence for Universality and Cultural Variation of Differential Emotion Response Patterning', Journal of Personality and Social Psychology, 66, 310-328], which approached the emotions physiologically. While the performance measures show that using the full features for classifications can degrade the performance, the selected features provide superior results in terms of accuracy and generalisation.
Positive-negative corresponding normalized ghost imaging based on an adaptive threshold
NASA Astrophysics Data System (ADS)
Li, G. L.; Zhao, Y.; Yang, Z. H.; Liu, X.
2016-11-01
Ghost imaging (GI) technology has attracted increasing attention as a new imaging technique in recent years. However, the signal-to-noise ratio (SNR) of GI with pseudo-thermal light needs to be improved before it meets engineering application demands. We therefore propose a new scheme, called positive-negative correspondence normalized GI based on an adaptive threshold (PCNGI-AT), to achieve good performance with a smaller amount of data. In this work, we exploit the advantages of both normalized GI (NGI) and positive-negative correspondence GI (P-NCGI). The correctness and feasibility of the scheme were proved in theory before we designed an adaptive threshold selection method, in which the parameter of the object signal selection condition is replaced by the normalizing value. The simulation and experimental results reveal that the SNR of the proposed scheme is better than that of time-correspondence differential GI (TCDGI), while avoiding the calculation of the correlation matrix and reducing the amount of data used. The proposed method will make GI far more practical in engineering applications.
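The baseline these schemes refine is plain correlation ghost imaging, G(x) = ⟨B·I(x)⟩ − ⟨B⟩⟨I(x)⟩, where I are random illumination patterns and B the bucket (total transmitted) signal. A toy one-dimensional version is sketched below; the normalization and positive-negative correspondence refinements of PCNGI-AT are not reproduced, and the object and shot counts are invented.

```python
import random

# Toy 1-D ghost imaging by ensemble correlation:
#   G(x) = <B I(x)> - <B><I(x)>
# with uniform random illumination patterns and a "slit" object.

random.seed(3)
npix, nshots = 16, 4000
obj = [1.0 if 5 <= i <= 9 else 0.0 for i in range(npix)]  # the slit

sum_b = 0.0
sum_i = [0.0] * npix
sum_bi = [0.0] * npix
for _ in range(nshots):
    I = [random.random() for _ in range(npix)]            # speckle pattern
    B = sum(Ii * oi for Ii, oi in zip(I, obj))            # bucket detector
    sum_b += B
    for x in range(npix):
        sum_i[x] += I[x]
        sum_bi[x] += B * I[x]

# Covariance estimate per pixel: bright inside the slit, ~0 outside.
G = [sum_bi[x] / nshots - (sum_b / nshots) * (sum_i[x] / nshots)
     for x in range(npix)]
```

With enough shots the in-slit pixels stand clearly above the out-of-slit noise floor, which is exactly the SNR-versus-data trade-off the abstract's scheme targets.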
Burstiness and tie activation strategies in time-varying social networks
NASA Astrophysics Data System (ADS)
Ubaldi, Enrico; Vezzani, Alessandro; Karsai, Márton; Perra, Nicola; Burioni, Raffaella
2017-04-01
The recent developments in the field of social networks shifted the focus from static to dynamical representations, calling for new methods for their analysis and modelling. Observations in real social systems identified two main mechanisms that play a primary role in networks’ evolution and influence ongoing spreading processes: the strategies individuals adopt when selecting between new or old social ties, and the bursty nature of the social activity setting the pace of these choices. We introduce a time-varying network model accounting both for ties selection and burstiness and we analytically study its phase diagram. The interplay of the two effects is non trivial and, interestingly, the effects of burstiness might be suppressed in regimes where individuals exhibit a strong preference towards previously activated ties. The results are tested against numerical simulations and compared with two empirical datasets with very good agreement. Consequently, the framework provides a principled method to classify the temporal features of real networks, and thus yields new insights to elucidate the effects of social dynamics on spreading processes.
A comparison between computer-controlled and set work rate exercise based on target heart rate
NASA Technical Reports Server (NTRS)
Pratt, Wanda M.; Siconolfi, Steven F.; Webster, Laurie; Hayes, Judith C.; Mazzocca, Augustus D.; Harris, Bernard A., Jr.
1991-01-01
Two methods are compared for observing the heart rate (HR), metabolic equivalents, and time in the target HR zone (defined as the target HR ± 5 bpm) during 20 min of exercise at a prescribed intensity of the maximum working capacity. In one method, called set-work-rate exercise, the information from a graded exercise test is used to select a target HR and to calculate a corresponding constant work rate that should induce the desired HR. In the other method, the work rate is controlled by a computer algorithm to achieve and maintain a prescribed target HR. It is shown that computer-controlled exercise is an effective alternative to the traditional set-work-rate exercise, particularly when tight control of cardiovascular responses is necessary.
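The computer-controlled mode is a feedback loop: measure HR, compare to target, nudge the work rate. A minimal sketch is below; the first-order HR "plant", the gains, and the resting values are all invented for illustration and bear no relation to the NASA algorithm.

```python
# Toy closed-loop work-rate control (all dynamics invented): HR follows a
# first-order lag toward a level proportional to work rate, while a
# proportional controller adjusts the work rate to hold the target zone
# (target HR +/- 5 bpm).

def simulate(target_hr, steps=600, dt=1.0):
    hr, work = 70.0, 50.0           # resting HR (bpm), initial work rate (W)
    kp, tau, gain = 0.8, 30.0, 0.5  # controller gain, HR time constant, bpm/W
    in_zone = 0
    for _ in range(steps):
        hr += dt * ((70.0 + gain * work) - hr) / tau   # plant: lagged response
        work += kp * (target_hr - hr) * dt / 10.0      # controller: proportional
        work = max(0.0, work)
        if abs(hr - target_hr) <= 5.0:
            in_zone += 1                               # time-in-zone counter
    return hr, in_zone

hr_end, time_in_zone = simulate(target_hr=140.0)
```

The simulated HR settles on the target and accumulates most of the session inside the ±5 bpm zone, which is the behavior the study measures.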
Upweighting rare favourable alleles increases long-term genetic gain in genomic selection programs.
Liu, Huiming; Meuwissen, Theo H E; Sørensen, Anders C; Berg, Peer
2015-03-21
The short-term impact of using different genomic prediction (GP) models in genomic selection has been intensively studied, but their long-term impact is poorly understood. Furthermore, long-term genetic gain of genomic selection is expected to improve by using Jannink's weighting (JW) method, in which rare favourable marker alleles are upweighted in the selection criterion. In this paper, we extend the JW method by including an additional parameter to decrease the emphasis on rare favourable alleles over the time horizon, with the purpose of further improving the long-term genetic gain. We call this new method dynamic weighting (DW). The paper explores the long-term impact of different GP models with or without weighting methods. Different selection criteria were tested by simulating a population of 500 animals with truncation selection of five males and 50 females. Selection criteria included unweighted and weighted genomic estimated breeding values using the JW or DW methods, for which ridge regression (RR) and Bayesian lasso (BL) were used to estimate marker effects. The impacts of these selection criteria were compared under three genetic architectures, i.e. varying numbers of QTL for the trait and for two time horizons of 15 (TH15) or 40 (TH40) generations. For unweighted GP, BL resulted in up to 21.4% higher long-term genetic gain and 23.5% lower rate of inbreeding under TH40 than RR. For weighted GP, DW resulted in 1.3 to 5.5% higher long-term gain compared to unweighted GP. JW, however, showed a 6.8% lower long-term genetic gain relative to unweighted GP when BL was used to estimate the marker effects. Under TH40, both DW and JW obtained significantly higher genetic gain than unweighted GP. With DW, the long-term genetic gain was increased by up to 30.8% relative to unweighted GP, and also increased by 8% relative to JW, although at the expense of a lower short-term gain. 
Irrespective of the number of QTL simulated, BL is superior to RR in maintaining genetic variance and therefore results in higher long-term genetic gain. Moreover, DW is a promising method with which high long-term genetic gain can be expected within a fixed time frame.
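The weighting ideas above can be sketched numerically. A commonly cited form of Jannink's weight upweights the favourable allele of a marker by 1/√p, where p is its current frequency; the dynamic variant described here additionally shrinks that emphasis as the time horizon runs out. The exact functional forms used in the paper may differ; the linear decay and all numbers below are assumptions for illustration.

```python
import math

# Hedged sketch of weighted genomic selection criteria (JW vs. DW).

def jw_weight(p):
    """Jannink-style weight: upweight rare favourable alleles
    (p = current favourable-allele frequency)."""
    return 1.0 / math.sqrt(max(p, 1e-6))

def dw_weight(p, gen, horizon):
    """Dynamic weighting (assumed linear decay): full rare-allele
    emphasis at generation 0, none at the end of the horizon."""
    remaining = max(horizon - gen, 0) / horizon
    return 1.0 + remaining * (jw_weight(p) - 1.0)

def weighted_gebv(genotype, effects, freqs, weight_fn):
    """Selection criterion: allele count x effect x weight, summed."""
    return sum(g * a * weight_fn(p)
               for g, a, p in zip(genotype, effects, freqs))

# Two markers with equal effects; the second favourable allele is rare,
# so the weighted criterion promotes its carriers.
effects = [0.5, 0.5]
freqs = [0.5, 0.01]
carrier = [1, 1]
w_plain = weighted_gebv(carrier, effects, freqs, lambda p: 1.0)
w_jw = weighted_gebv(carrier, effects, freqs, jw_weight)
```

At generation 0 the DW weight equals the JW weight, and it relaxes to 1 (unweighted GEBV) by the end of the horizon, matching the stated intent of trading short-term gain for long-term gain within a fixed time frame.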
Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics
Chen, Wenan; Larrabee, Beth R.; Ovsyannikova, Inna G.; Kennedy, Richard B.; Haralambieva, Iana H.; Poland, Gregory A.; Schaid, Daniel J.
2015-01-01
Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. PMID:25948564
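The single-causal-variant special case of fine mapping from marginal statistics can be illustrated with a Wakefield-style approximate Bayes factor; this is a simplified stand-in for CAVIARBF, which additionally models multiple causal variants through the SNP correlation (LD) matrix. The prior variance W and the z-scores below are invented.

```python
import math

# Single-SNP illustration of Bayesian fine mapping from marginal stats.

def abf_alt(z, V, W=0.04):
    """Wakefield-style approximate Bayes factor, alternative vs. null.
    V: variance of the effect estimate; W: prior effect-size variance."""
    r = W / (V + W)
    return math.sqrt(1.0 - r) * math.exp(0.5 * z * z * r)

def posterior_probs(zs, V):
    """Posterior probability each SNP is the causal one, assuming
    exactly one causal SNP and a flat prior over SNPs."""
    bfs = [abf_alt(z, V) for z in zs]
    total = sum(bfs)
    return [b / total for b in bfs]

# Three SNPs: the first carries much stronger marginal evidence.
zs = [5.0, 1.0, 0.5]
probs = posterior_probs(zs, V=0.01)
```

The posterior mass concentrates on the SNP with the large z-score; ranking SNPs by this posterior is the fine-mapping output the abstract compares across methods.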
Rebollar, Eria A; Antwis, Rachael E; Becker, Matthew H; Belden, Lisa K; Bletz, Molly C; Brucker, Robert M; Harrison, Xavier A; Hughey, Myra C; Kueneman, Jordan G; Loudon, Andrew H; McKenzie, Valerie; Medina, Daniel; Minbiole, Kevin P C; Rollins-Smith, Louise A; Walke, Jenifer B; Weiss, Sophie; Woodhams, Douglas C; Harris, Reid N
2016-01-01
Emerging infectious diseases in wildlife are responsible for massive population declines. In amphibians, chytridiomycosis caused by Batrachochytrium dendrobatidis, Bd, has severely affected many amphibian populations and species around the world. One promising management strategy is probiotic bioaugmentation of antifungal bacteria on amphibian skin. In vivo experimental trials using bioaugmentation strategies have had mixed results, and therefore a more informed strategy is needed to select successful probiotic candidates. Metagenomic, transcriptomic, and metabolomic methods, colloquially called "omics," are approaches that can better inform probiotic selection and optimize selection protocols. The integration of multiple omic data using bioinformatic and statistical tools and in silico models that link bacterial community structure with bacterial defensive function can allow the identification of species involved in pathogen inhibition. We recommend using 16S rRNA gene amplicon sequencing and methods such as indicator species analysis, the Kolmogorov-Smirnov Measure, and co-occurrence networks to identify bacteria that are associated with pathogen resistance in field surveys and experimental trials. In addition to 16S amplicon sequencing, we recommend approaches that give insight into symbiont function such as shotgun metagenomics, metatranscriptomics, or metabolomics to maximize the probability of finding effective probiotic candidates, which can then be isolated in culture and tested in persistence and clinical trials. An effective mitigation strategy to ameliorate chytridiomycosis and other emerging infectious diseases is necessary; the advancement of omic methods and the integration of multiple omic data provide a promising avenue toward conservation of imperiled species.
Frith, Emily; Loprinzi, Paul D.
2018-01-01
Background: We evaluated the differential influence of preferred versus imposed media selections on distinct hedonic responses to an acute bout of treadmill walking. Methods: Twenty university students were recruited for this [160 person-visit] laboratory experiment, which employed a within-subject, counter-balanced design. Participants were exposed to 8 experimental conditions, including (1) Exercise Only, (2) Texting Only, (3) Preferred Phone Call, (4) Imposed Phone Call, (5) Preferred Music Playlist, (6) Imposed Music Playlist, (7) Preferred Video and (8) Imposed Video. During each visit (except Texting Only), participants completed a 10-minute bout of walking on the treadmill at a self-selected pace. Walking speed was identical for all experimental conditions. Before, at the midpoint of exercise, and post-exercise, participants completed the Feeling Scale (FS) and the Felt Arousal Scale (FAS) to measure acute hedonic response. The Affective Circumplex Scale was administered pre-exercise and post-exercise. Results: Significant pre-post change scores were observed for happy (Imposed Call: P=0.05; Preferred Music: P=0.02; Imposed Video: P=0.03), excited (Exercise Only: P=0.001; Preferred Video: P=0.01; Imposed Video: P=0.03), sad (Preferred Music: P=0.05), anxious (Exercise Only: P=0.05; Preferred Video: P=0.01), and fatigue (Exercise Only: P=0.03; Imposed Video: P=0.002). For the FS, all change scores were statistically significant from pre-to-mid and pre-to-post (P<0.05). Conclusion: This experiment provides strong evidence that entertaining media platforms substantively influence hedonic responses to exercise. Implications of these findings are discussed. PMID:29744306
Automatic computer subprogram selection from application program libraries
NASA Technical Reports Server (NTRS)
Drozdowski, J. M.
1972-01-01
The program ALTLIB (ALTernate LIBrary), which allows a user access to an alternate subprogram library with minimum effort, is discussed. The ALTLIB program selects subprograms from an alternate library file and merges them with the user's program load file. Only subprograms that are called for (directly or indirectly) by the user's programs and that are available on the alternate library file will be selected. ALTLIB eliminates the need for elaborate control-card manipulations to add subprograms from a subprogram file. ALTLIB returns to the user his binary file and the selected subprograms in correct order for a call to the loader. The user supplies the alternate library file. Subprogram requests that are not satisfied from the alternate library file will be satisfied at load time from the system library.
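The "directly or indirectly called" selection rule is transitive reachability over the library's call graph. A minimal sketch follows; the subprogram names and call graph are invented, and anything not found in the alternate library is simply left for the system library, as the description states.

```python
# Sketch of ALTLIB's selection rule: starting from the names the user's
# code calls, pull in library subprograms reachable through their own
# calls, and nothing else. Unresolved names fall through to the system
# library at load time.

def select_subprograms(user_calls, library):
    """library maps each subprogram name to the names it calls in turn."""
    selected, stack = set(), list(user_calls)
    while stack:
        name = stack.pop()
        if name in library and name not in selected:
            selected.add(name)
            stack.extend(library[name])   # follow indirect calls
    return selected

# Toy alternate library: SOLVE needs LU and PIVOT; PLOT is never reached.
library = {
    "SOLVE": ["LU", "PIVOT"],
    "LU": [],
    "PIVOT": [],
    "PLOT": ["AXES"],
    "AXES": [],
}
picked = select_subprograms(["SOLVE"], library)
```

Only the reachable members are merged into the load file, which is the whole point of the tool.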
Jiang, Xiong; Chevillet, Mark A; Rauschecker, Josef P; Riesenhuber, Maximilian
2018-04-18
Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain. Copyright © 2018 Elsevier Inc. All rights reserved.
Sound imaging of nocturnal animal calls in their natural habitat.
Mizumoto, Takeshi; Aihara, Ikkyu; Otsuka, Takuma; Takeda, Ryu; Aihara, Kazuyuki; Okuno, Hiroshi G
2011-09-01
We present a novel method for imaging acoustic communication between nocturnal animals. Investigating the spatio-temporal calling behavior of nocturnal animals, e.g., frogs and crickets, has been difficult because of the need to distinguish many animals' calls in noisy environments without being able to see them. Our method visualizes the spatial and temporal dynamics using dozens of sound-to-light conversion devices (called "Firefly") and an off-the-shelf video camera. The Firefly, which consists of a microphone and a light-emitting diode, emits light when it captures nearby sound. Deploying dozens of Fireflies in a target area, we record the calls of multiple individuals through the video camera. We conducted two experiments, one indoors and the other in the field, using Japanese tree frogs (Hyla japonica). The indoor experiment demonstrates that our method correctly visualizes the Japanese tree frogs' calling behavior, confirming the known behavior in which two frogs call synchronously or in anti-phase synchronization. The field experiment (in a rice paddy where Japanese tree frogs live) visualizes the same calling behavior, confirming anti-phase synchronization in the field. Experimental results confirm that our method can visualize the calling behavior of nocturnal animals in their natural habitat.
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. 
This study is hoped to provide new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data.
Visual Aggregate Analysis of Eligibility Features of Clinical Trials
He, Zhe; Carini, Simona; Sim, Ida; Weng, Chunhua
2015-01-01
Objective: To develop a method for profiling the collective populations targeted for recruitment by multiple clinical studies addressing the same medical condition, using one eligibility feature at a time. Methods: Using the previously published database COMPACT as the backend, we designed a scalable method for visual aggregate analysis of clinical trial eligibility features. This method consists of four modules for eligibility feature frequency analysis, query building, distribution analysis, and visualization, respectively. This method is capable of analyzing (1) frequently used qualitative and quantitative features for recruiting subjects for a selected medical condition, (2) distribution of study enrollment on consecutive value points or value intervals of each quantitative feature, and (3) distribution of studies on the boundary values, permissible value ranges, and value range widths of each feature. All analysis results were visualized using the Google Charts API. Five recruited potential users assessed the usefulness of this method for identifying common patterns in any selected eligibility feature for clinical trial participant selection. Results: We implemented this method as a Web-based analytical system called VITTA (Visual Analysis Tool of Clinical Study Target Populations). We illustrated the functionality of VITTA using two sample queries involving the quantitative features BMI and HbA1c for the conditions "hypertension" and "Type 2 diabetes", respectively. The recruited potential users rated the user-perceived usefulness of VITTA with an average score of 86.4/100. Conclusions: We contributed a novel aggregate analysis method to enable the interrogation of common patterns in quantitative eligibility criteria and the collective target populations of multiple related clinical studies. A larger-scale study is warranted to formally assess the usefulness of VITTA among clinical investigators and sponsors in various therapeutic areas. PMID:25615940
A spatial approach of magnitude-squared coherence applied to selective attention detection.
Bonato Felix, Leonardo; de Souza Ranaudo, Fernando; D'affonseca Netto, Aluizio; Ferreira Leite Miranda de Sá, Antonio Mauricio
2014-05-30
Auditory selective attention is the human ability to actively focus on a certain sound stimulus while ignoring all others. This ability can be used, for example, in behavioral studies and brain-machine interfaces. In this work we developed an objective method, called Spatial Coherence, to detect the side to which a subject is attending. This method takes into consideration the magnitude-squared coherence and the topographic distribution of responses among electroencephalogram electrodes. The individuals were stimulated binaurally with amplitude-modulated tones and were instructed to focus attention on only one of the stimuli. The results indicate a contralateral modulation of the ASSR in the attention condition and are in agreement with prior studies. Furthermore, the best combination of electrodes led to a hit rate of 82% at 5.03 commands per minute. Using a similar paradigm, a recent work achieved a maximum hit rate of 84.33%, but with a greater classification time (20 s, i.e., 3 commands per minute). Spatial Coherence thus appears to be a useful technique for detecting the focus of auditory selective attention. Copyright © 2014 Elsevier B.V. All rights reserved.
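The quantity underneath this detector, the magnitude-squared coherence, is MSC(f) = |⟨X·Y*⟩|² / (⟨|X|²⟩·⟨|Y|²⟩), estimated by averaging cross- and auto-spectra over segments. The sketch below computes it at a single DFT bin on toy signals sharing a tone; the electrode-topography weighting of the actual Spatial Coherence method is not reproduced.

```python
import cmath, math, random

# Segment-averaged magnitude-squared coherence at one frequency bin.

def dft_bin(seg, k):
    n = len(seg)
    return sum(v * cmath.exp(-2j * math.pi * k * i / n)
               for i, v in enumerate(seg))

def msc(x, y, seg_len, k):
    nseg = len(x) // seg_len
    sxy, sxx, syy = 0j, 0.0, 0.0
    for s in range(nseg):
        xs = x[s * seg_len:(s + 1) * seg_len]
        ys = y[s * seg_len:(s + 1) * seg_len]
        X, Y = dft_bin(xs, k), dft_bin(ys, k)
        sxy += X * Y.conjugate()              # cross-spectrum
        sxx += abs(X) ** 2                    # auto-spectra
        syy += abs(Y) ** 2
    return abs(sxy) ** 2 / (sxx * syy)

# Two noisy channels that share a tone at bin 8 of each 64-sample segment.
random.seed(0)
n, seg_len, k = 1024, 64, 8
tone = [math.sin(2 * math.pi * k * i / seg_len) for i in range(n)]
x = [t + 0.3 * random.gauss(0, 1) for t in tone]
y = [t + 0.3 * random.gauss(0, 1) for t in tone]
coh_tone = msc(x, y, seg_len, k)   # shared tone: coherence near 1
coh_off = msc(x, y, seg_len, 20)   # noise-only bin: coherence near 0
```

A steady-state response that both channels lock to produces coherence near 1 at the stimulation frequency, which is what the attention detector thresholds on.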
Clevenger, Josh; Chu, Ye; Chavarro, Carolina; Botton, Stephanie; Culbreath, Albert; Isleib, Thomas G; Holbrook, C C; Ozias-Akins, Peggy
2018-01-01
Late leaf spot (LLS; Cercosporidium personatum ) is a major fungal disease of cultivated peanut ( Arachis hypogaea ). A recombinant inbred line population segregating for quantitative field resistance was used to identify quantitative trait loci (QTL) using QTL-seq. High rates of false positive SNP calls using established methods in this allotetraploid crop obscured significant QTLs. To resolve this problem, robust parental SNPs were first identified using polyploid-specific SNP identification pipelines, leading to discovery of significant QTLs for LLS resistance. These QTLs were confirmed over 4 years of field data. Selection with markers linked to these QTLs resulted in a significant increase in resistance, showing that these markers can be immediately applied in breeding programs. This study demonstrates that QTL-seq can be used to rapidly identify QTLs controlling highly quantitative traits in polyploid crops with complex genomes. Markers identified can then be deployed in breeding programs, increasing the efficiency of selection using molecular tools. Key Message: Field resistance to late leaf spot is a quantitative trait controlled by many QTLs. Using polyploid-specific methods, QTL-seq is faster and more cost effective than QTL mapping.
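The statistic at the core of QTL-seq is the SNP-index (fraction of reads carrying the alternate allele) computed per bulk, and the ΔSNP-index contrast between the resistant and susceptible bulks. A hedged sketch with invented read counts follows; the polyploid-aware SNP filtering the abstract emphasizes is the upstream step this simple calculation depends on and does not show.

```python
# QTL-seq core statistic: per-bulk SNP-index and the delta between bulks.

def snp_index(alt_reads, total_reads):
    """Fraction of reads at a SNP carrying the alternate allele."""
    return alt_reads / total_reads

def delta_snp_index(res_alt, res_total, sus_alt, sus_total):
    """Resistant-bulk index minus susceptible-bulk index; values far
    from 0 suggest linkage to the trait."""
    return snp_index(res_alt, res_total) - snp_index(sus_alt, sus_total)

# A SNP linked to resistance: the resistant bulk is enriched for the
# alternate allele, the susceptible bulk depleted of it.
linked = delta_snp_index(45, 50, 5, 50)
# An unlinked SNP: both bulks near 0.5.
unlinked = delta_snp_index(26, 50, 24, 50)
```

False-positive SNP calls inflate these indices with noise, which is why robust parental SNP identification mattered so much in this allotetraploid.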
NASA Astrophysics Data System (ADS)
Wang, Jingtao; Li, Lixiang; Peng, Haipeng; Yang, Yixian
2017-02-01
In this study, we propose the concept of judgment space to investigate the quantum-secret-sharing scheme based on local distinguishability (called LOCC-QSS). With this concept, the properties of orthogonal multiqudit entangled states under restricted local operation and classical communication (LOCC) can be described more clearly. According to these properties, we reveal that, in the previous (k, n)-threshold LOCC-QSS scheme, there are two required conditions for the selected quantum states to resist the unambiguous attack: (i) their k-level judgment spaces are orthogonal, and (ii) their (k-1)-level judgment spaces are equal. Practically, if k
Real Time Optima Tracking Using Harvesting Models of the Genetic Algorithm
NASA Technical Reports Server (NTRS)
Baskaran, Subbiah; Noever, D.
1999-01-01
Tracking optima in real-time propulsion control, particularly for non-stationary optimization problems, is a challenging task. Several approaches have been put forward for such a study, including the numerical method called the genetic algorithm. In brief, this approach is built upon Darwinian-style competition between numerical alternatives displayed in the form of binary strings, or by analogy to 'pseudogenes'. Breeding of improved solutions is an often-cited parallel to natural selection in evolutionary or soft computing. In this report we present our results of applying a novel model of a genetic algorithm for tracking optima in propulsion engineering and in real-time control. We specialize the algorithm to mission profiling and planning optimizations, both to select reduced propulsion needs through trajectory planning and to explore time- or fuel-conservation strategies.
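The binary-string competition described above can be made concrete with a minimal genetic algorithm: truncation selection, one-point crossover, and bit-flip mutation. The "one-max" fitness (count of ones) is a toy stand-in for a propulsion or trajectory objective, and all parameters are invented.

```python
import random

# Minimal GA on binary strings: selection, crossover, mutation.

def fitness(bits):
    return sum(bits)                            # toy "one-max" objective

def evolve(pop, generations=40, mut=0.01, seed=1):
    rng = random.Random(seed)
    for _ in range(generations):
        pop = sorted(pop, key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]          # truncation selection
        children = []
        while len(children) < len(pop):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(a))      # one-point crossover
            child = [bit ^ (rng.random() < mut) # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

rng = random.Random(0)
pop = [[rng.randint(0, 1) for _ in range(32)] for _ in range(30)]
best = evolve(pop)
```

For the non-stationary problems the report targets, the same loop is run continuously while the fitness function drifts; the population then tracks the moving optimum rather than converging once.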
Enabling technologies and green processes in cyclodextrin chemistry
Caporaso, Marina; Jicsinszky, Laszlo; Martina, Katia
2016-01-01
The design of efficient synthetic green strategies for the selective modification of cyclodextrins (CDs) is still a challenging task. Outstanding results have been achieved in recent years by means of so-called enabling technologies, such as microwaves, ultrasound and ball mills, that have become irreplaceable tools in the synthesis of CD derivatives. Several examples of sonochemical selective modification of native α-, β- and γ-CDs have been reported including heterogeneous phase Pd- and Cu-catalysed hydrogenations and couplings. Microwave irradiation has emerged as the technique of choice for the production of highly substituted CD derivatives, CD grafted materials and polymers. Mechanochemical methods have successfully furnished greener, solvent-free syntheses and efficient complexation, while flow microreactors may well improve the repeatability and optimization of critical synthetic protocols. PMID:26977187
Music viewed by its entropy content: A novel window for comparative analysis
Febres, Gerardo; Jaffe, Klaus
2017-01-01
Polyphonic music files were analyzed using the set of symbols that produced the Minimal Entropy Description, which we call the Fundamental Scale. This allowed us to create a novel space to represent music pieces by developing: (a) a method to adjust a textual description from its original scale of observation to an arbitrarily selected scale, (b) a method to model the structure of any textual description based on the shape of the symbol frequency profiles, and (c) the concept of higher order entropy as the entropy associated with the deviations of a frequency-ranked symbol profile from a perfect Zipfian profile. We call this diversity index the ‘2nd Order Entropy’. Applying these methods to a variety of musical pieces showed how the space of ‘symbolic specific diversity-entropy’ and that of ‘2nd order entropy’ capture characteristics that are unique to each music type, style, composer and genre. Some clustering of these properties around each musical category is shown. These methods allow us to visualize a historic trajectory of academic music across this space, from medieval to contemporary academic music. We show that the description of musical structures using entropy, symbol frequency profiles and specific symbolic diversity allows us to characterize traditional and popular expressions of music. These classification techniques promise to be useful in other disciplines for pattern recognition and machine learning. PMID:29040288
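The two ingredients this analysis builds on, Shannon entropy of a symbol-frequency profile and the deviation of the rank-ordered profile from a perfect Zipfian profile, can be sketched directly. The paper's exact '2nd Order Entropy' definition is not reproduced; the mean-absolute-gap deviation below is only an illustrative stand-in.

```python
import math

# Entropy of a frequency profile, and its gap to an ideal Zipf profile.

def shannon_entropy(counts):
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total)
                for c in counts if c > 0)

def zipf_profile(n):
    """Ideal Zipf frequencies for n ranked symbols (f_r ~ 1/r)."""
    raw = [1.0 / r for r in range(1, n + 1)]
    s = sum(raw)
    return [v / s for v in raw]

def zipf_deviation(counts):
    """Mean absolute gap between the ranked empirical profile and Zipf."""
    total = sum(counts)
    ranked = sorted((c / total for c in counts), reverse=True)
    ideal = zipf_profile(len(counts))
    return sum(abs(a - b) for a, b in zip(ranked, ideal)) / len(counts)

uniform = [10, 10, 10, 10]   # maximal entropy, far from Zipf
zipfish = [48, 24, 16, 12]   # frequencies proportional to 1/r
```

A perfectly Zipfian symbol profile has zero deviation, while a flat profile maximizes entropy but sits far from the Zipf line; pieces of music land somewhere between, and that placement is what the space described above captures.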
Selecting a restoration technique to minimize OCR error.
Cannon, M; Fugate, M; Hush, D R; Scovel, C
2003-01-01
This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that, on average, the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with a general loss function, and we prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.
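The nearest-neighbor variant of the learned mapping can be sketched in a few lines: given simple document features, pick the restoration technique that worked best on the closest training document. The features, technique names, and training pairs below are all invented for illustration.

```python
# 1-NN sketch of mapping documents to restoration techniques.

def best_technique(train, query):
    """train: list of (feature_vector, best_restoration_label) pairs;
    returns the label of the nearest training document."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda t: d2(t[0], query))
    return label

# Hypothetical feature vector: (estimated noise level, stroke thickness).
train = [
    ((0.9, 0.2), "median_filter"),   # noisy, thin strokes
    ((0.8, 0.3), "median_filter"),
    ((0.1, 0.9), "thinning"),        # clean, thick strokes
    ((0.2, 0.8), "thinning"),
]
choice = best_technique(train, (0.85, 0.25))
```

The learning problem in the paper is choosing this mapping so the *downstream* OCR error is minimized on average, which is what distinguishes it from ordinary classification.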
Encoding the local connectivity patterns of fMRI for cognitive task and state classification.
Onal Ertugrul, Itir; Ozay, Mete; Yarman Vural, Fatos T
2018-06-15
In this work, we propose a novel framework to encode the local connectivity patterns of the brain using Fisher vectors (FV), vector of locally aggregated descriptors (VLAD) and bag-of-words (BoW) methods. We first obtain local descriptors, called mesh arc descriptors (MADs), from fMRI data by forming local meshes around anatomical regions and estimating their relationships within a neighborhood. Then, we extract a dictionary of relationships, called a brain connectivity dictionary, by fitting a generative Gaussian mixture model (GMM) to a set of MADs and selecting codewords at the mean of each component of the mixture. Codewords represent connectivity patterns among anatomical regions. We also encode MADs by VLAD and BoW methods using k-means clustering. We classify cognitive tasks using the Human Connectome Project (HCP) task fMRI dataset and cognitive states using the Emotional Memory Retrieval (EMR) dataset. We train support vector machines (SVMs) using the encoded MADs. Results demonstrate that FV encoding of MADs can be successfully employed for classification of cognitive tasks and outperforms the VLAD and BoW representations. Moreover, we identify the significant Gaussians in the mixture models by computing the energy of their corresponding FV parts and analyze their effect on classification accuracy. Finally, we suggest a new method to visualize the codewords of the learned brain connectivity dictionary.
Gu, Jianwei; Pitz, Mike; Breitner, Susanne; Birmili, Wolfram; von Klot, Stephanie; Schneider, Alexandra; Soentgen, Jens; Reller, Armin; Peters, Annette; Cyrys, Josef
2012-10-01
The success of epidemiological studies depends on the use of appropriate exposure variables. The purpose of this study is to extract a relatively small selection of variables characterizing ambient particulate matter from a large measurement data set. The original data set comprised a total of 96 particulate matter variables that have been continuously measured since 2004 at an urban background aerosol monitoring site in the city of Augsburg, Germany. Many of the original variables were derived from measured particle size distribution (PSD) across the particle diameter range 3 nm to 10 μm, including size-segregated particle number concentration, particle length concentration, particle surface concentration and particle mass concentration. The data set was complemented by integral aerosol variables. These variables were measured by independent instruments, including black carbon, sulfate, particle active surface concentration and particle length concentration. It is obvious that such a large number of measured variables cannot be used in health effect analyses simultaneously. The aim of this study is a pre-screening and a selection of the key variables that will be used as input in forthcoming epidemiological studies. In this study, we present two methods of parameter selection and apply them to data from a two-year period from 2007 to 2008. We used the agglomerative hierarchical cluster method to find groups of similar variables. In total, we selected 15 key variables from 9 clusters which are recommended for epidemiological analyses. We also applied a two-dimensional visualization technique called "heatmap" analysis to the Spearman correlation matrix. 12 key variables were selected using this method. Moreover, the positive matrix factorization (PMF) method was applied to the PSD data to characterize the possible particle sources. Correlations between the variables and PMF factors were used to interpret the meaning of the cluster and the heatmap analyses. 
Copyright © 2012 Elsevier B.V. All rights reserved.
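The variable-selection stage described above can be sketched in a few lines (not the authors' exact pipeline; the greedy average-linkage loop and function names are ours): group variables by the distance 1 − |Spearman ρ| and keep one representative per cluster.

```python
import numpy as np

def spearman_matrix(X):
    # Spearman rho = Pearson correlation of column-wise ranks
    # (ties not handled; fine for continuous measurements).
    ranks = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    return np.corrcoef(ranks, rowvar=False)

def cluster_variables(X, n_clusters):
    # Greedy average-linkage agglomeration on 1 - |rho| distance:
    # repeatedly merge the two closest clusters until n_clusters remain.
    rho = np.abs(spearman_matrix(X))
    clusters = [[i] for i in range(X.shape[1])]
    while len(clusters) > n_clusters:
        best, pair = None, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = 1.0 - np.mean([rho[a, b] for a in clusters[i] for b in clusters[j]])
                if best is None or d < best:
                    best, pair = d, (i, j)
        i, j = pair
        clusters[i] += clusters.pop(j)
    return clusters

def select_representatives(X, names, n_clusters):
    # One key variable per cluster: the member most correlated with its cluster.
    rho = np.abs(spearman_matrix(X))
    reps = []
    for members in cluster_variables(X, n_clusters):
        score = rho[np.ix_(members, members)].mean(axis=1)
        reps.append(names[members[int(np.argmax(score))]])
    return reps
```

Highly redundant particle metrics collapse into one cluster each, so the representatives form a small, weakly correlated set suitable as exposure variables.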
Females that experience threat are better teachers
Kleindorfer, Sonia; Evans, Christine; Colombelli-Négrel, Diane
2014-01-01
Superb fairy-wren (Malurus cyaneus) females use an incubation call to teach their embryos a vocal password to solicit parental feeding care after hatching. We previously showed that high call rate by the female was correlated with high call similarity in fairy-wren chicks, but not in cuckoo chicks, and that parent birds more often fed chicks with high call similarity. Hosts should be selected to increase their defence behaviour when the risk of brood parasitism is highest, such as when cuckoos are present in the area. Therefore, we experimentally test whether hosts increase call rate to embryos in the presence of a singing Horsfield's bronze-cuckoo (Chalcites basalis). Female fairy-wrens increased incubation call rate when we experimentally broadcast cuckoo song near the nest. Embryos had higher call similarity when females had higher incubation call rate. We interpret the findings of increased call rate as increased teaching effort in response to a signal of threat. PMID:24806422
Manifold Regularized Experimental Design for Active Learning.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-12-02
Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim to label the most informative samples in order to reduce the labeling effort of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches select the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate because the classification hyperplane is inaccurate when the training data are limited. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel active learning method called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation of the samples selected to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
Correcting for Sample Contamination in Genotype Calling of DNA Sequence Data
Flickinger, Matthew; Jun, Goo; Abecasis, Gonçalo R.; Boehnke, Michael; Kang, Hyun Min
2015-01-01
DNA sample contamination is a frequent problem in DNA sequencing studies and can result in genotyping errors and reduced power for association testing. We recently described methods to identify within-species DNA sample contamination based on sequencing read data, showed that our methods can reliably detect and estimate contamination levels as low as 1%, and suggested strategies to identify and remove contaminated samples from sequencing studies. Here we propose methods to model contamination during genotype calling as an alternative to removal of contaminated samples from further analyses. We compare our contamination-adjusted calls to calls that ignore contamination and to calls based on uncontaminated data. We demonstrate that, for moderate contamination levels (5%–20%), contamination-adjusted calls eliminate 48%–77% of the genotyping errors. For lower levels of contamination, our contamination correction methods produce genotypes nearly as accurate as those based on uncontaminated data. Our contamination correction methods are useful generally, but are particularly helpful for sample contamination levels from 2% to 20%. PMID:26235984
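A simplified sketch of contamination-aware calling (not the authors' full model; the function names, fixed error rate, and binomial read model are our assumptions) treats a fraction alpha of reads as coming from a contaminant with population alternate-allele frequency f, and scores each candidate genotype against the observed read counts:

```python
import math

def p_alt_read(g, alpha, f, eps=0.01):
    # Probability that a read shows the alternate allele when the true
    # genotype carries g alt alleles (g in {0, 1, 2}), a fraction alpha of
    # reads comes from a contaminant with population alt-allele frequency f,
    # and eps is the per-base sequencing error rate.
    p = (1.0 - alpha) * (g / 2.0) + alpha * f
    return p * (1.0 - eps) + (1.0 - p) * eps

def genotype_log_likelihoods(n_alt, n_ref, alpha, f):
    # Binomial log-likelihood of the observed alt/ref read counts under each
    # genotype, with contamination folded into the per-read alt probability.
    out = {}
    for g in (0, 1, 2):
        p = p_alt_read(g, alpha, f)
        out[g] = n_alt * math.log(p) + n_ref * math.log(1.0 - p)
    return out

def call_genotype(n_alt, n_ref, alpha, f):
    # Maximum-likelihood genotype call.
    ll = genotype_log_likelihoods(n_alt, n_ref, alpha, f)
    return max(ll, key=ll.get)
```

With alpha = 0 the model reduces to ordinary genotype calling; at moderate contamination levels the adjusted likelihood recovers calls that the naive model gets wrong (e.g. an alt-read fraction near 0.2 produced by a contaminated homozygous-reference sample).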
Leary, Christopher J; Garcia, Apryl M; Knapp, Rosemary
2006-10-01
The effects of androgens on male-typical traits suggest that variation among males in circulating levels can play a major role in sexual selection. We examined whether variation in vocal attractiveness is attributable to differences in androgen levels among Great Plains toads (Bufo cognatus). We found that noncalling "satellite" males practicing an alternative mating tactic were more likely to associate with males producing long calls. However, callers with satellites did not have higher androgen levels than callers without satellites. Rather, callers with satellites had significantly lower corticosterone (CORT) levels than callers without satellites. A CORT manipulation experiment suggested that differences in calls for males with and without satellites were related to differences in CORT levels. Furthermore, there was a negative correlation between CORT level and call duration within most nights of chorus activity. However, the correlation was weak for the pooled data (across all nights), suggesting that local environmental and/or social factors also affect call duration. Last, we show that females preferred broadcast calls of longer duration, characteristic of males with satellites and low CORT. These results imply that satellites optimize their reproductive success by associating with males producing long calls. However, this association should negatively affect the fitness of attractive callers.
Extended FDD-WT method based on correcting the errors due to non-synchronous sensing of sensors
NASA Astrophysics Data System (ADS)
Tarinejad, Reza; Damadipour, Majid
2016-05-01
In this research, a combinational non-parametric method called frequency domain decomposition-wavelet transform (FDD-WT), recently presented by the authors, is extended to correct the errors resulting from asynchronous sensing of sensors, in order to broaden the application of the algorithm to different kinds of structures, especially large ones. The analysis process is based on time-frequency domain decomposition and is performed with emphasis on correcting time delays between sensors. Time delay estimation (TDE) methods were investigated for their efficiency and accuracy on noisy environmental records, and the Phase Transform-β (PHAT-β) technique was selected as an appropriate method to modify the operation of traditional FDD-WT in order to achieve exact results. A theoretical example (a 3DOF system) is provided to illustrate the effects of non-synchronous sensing on the modal parameters; moreover, the Pacoima dam subjected to the 13 January 2001 earthquake excitation was selected as a case study. The modal parameters of the dam obtained from the extended FDD-WT method were compared with the output of the classical signal processing method referred to as the 4-Spectral method, as well as with other studies of the dynamic characteristics of Pacoima dam. The comparison indicates that the obtained values are accurate and reliable.
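The PHAT family of time delay estimators can be sketched as follows (standard GCC-PHAT, i.e. β = 1; the function name and the test signals are ours): whiten the cross-power spectrum so that only phase carries information, then locate the peak of the resulting cross-correlation.

```python
import numpy as np

def gcc_phat(sig, ref, fs=1.0):
    # Generalized cross-correlation with PHAT weighting: normalize the cross
    # spectrum to unit magnitude so only phase drives the delay estimate.
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.maximum(np.abs(R), 1e-12)          # PHAT: |R(f)| -> 1
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))  # lags -n/2..n/2
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs
```

The estimated delay can then be used to re-align the records before the decomposition stage.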
Micro Computer Feedback Report for the Strategic Leader Development Inventory; Source Code
1994-03-01
SEL5 ;exit if error
CALL SELECT_SCREEN ;display select screen
JC SEL4 ;no files in directory
;------- display the files
MOV BX,[BarPos] ;starting...
SEL2 ;if not goto next test
JMP SEL4 ;Exit
SEL2: CMP AL,0Dh ;is it a pick?
JZ SEL3 ;if YES exit loop
;------- see if an active control key was...file
CALL READCONFIG ;read file into memory
JC SEL5 ;exit to main menu
CALL OPEN_DATA_FILE ;is data available?
SEL4: CALL RELEASE_MDR ;release mem
Ancestrality and evolution of trait syndromes in finches (Fringillidae).
Ponge, Jean-François; Zuccon, Dario; Elias, Marianne; Pavoine, Sandrine; Henry, Pierre-Yves; Théry, Marc; Guilbert, Éric
2017-12-01
Species traits have been hypothesized by one of us (Ponge, 2013) to evolve in a correlated manner as species colonize stable, undisturbed habitats, shifting from "ancestral" to "derived" strategies. We predicted that generalism, r-selection, sexual monomorphism, and migration/gregariousness are the ancestral states (collectively called strategy A) and evolved correlatively toward specialism, K-selection, sexual dimorphism, and residence/territoriality as habitat stabilized (collectively called strategy B). We analyzed the correlated evolution of four syndromes, summarizing the covariation between 53 traits, respectively, involved in ecological specialization, r-K gradient, sexual selection, and dispersal/social behaviors in 81 species representative of Fringillidae, a bird family with available natural history information and that shows variability for all these traits. The ancestrality of strategy A was supported for three of the four syndromes, the ancestrality of generalism having a weaker support, except for the core group Carduelinae (69 species). It appeared that two different B-strategies evolved from the ancestral state A, both associated with highly predictable environments: one in poorly seasonal environments, called B1, with species living permanently in lowland tropics, with "slow pace of life" and weak sexual dimorphism, and one in highly seasonal environments, called B2, with species breeding out-of-the-tropics, migratory, with a "fast pace of life" and high sexual dimorphism.
Selecting materialized views using random algorithm
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi
2007-04-01
The data warehouse is a repository of information collected from multiple, possibly heterogeneous, autonomous distributed databases. The information stored at the data warehouse is in the form of views, referred to as materialized views. The selection of materialized views is one of the most important decisions in designing a data warehouse: they are stored for the purpose of efficiently implementing on-line analytical processing queries, and query response time is the first concern of the user. In this paper, we develop algorithms to select a set of views to materialize in a data warehouse in order to minimize the total view maintenance cost under the constraint of a given query response time; we call this the query-cost view-selection problem. First, the cost graph and cost model of the query-cost view-selection problem are presented. Second, methods for selecting materialized views using randomized algorithms are presented. The genetic algorithm is applied to the materialized view selection problem, but as the genetic process develops, legal solutions become more and more difficult to produce, so many solutions are eliminated and the time needed to generate solutions grows. Therefore, an improved algorithm is presented, which combines simulated annealing with the genetic algorithm to solve the query-cost view-selection problem. Finally, simulation experiments were conducted to test the effectiveness and efficiency of our algorithms. The experiments show that the given methods provide near-optimal solutions in limited time and work better in practical cases. Randomized algorithms promise to become invaluable tools for data warehouse evolution.
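The simulated-annealing component can be sketched as follows (a simplified stand-in for the paper's cost model: here we maximize total query benefit under a maintenance-cost budget; all names and the penalty scheme are ours):

```python
import math
import random

def view_selection_sa(benefit, cost, budget, n_steps=5000, seed=0):
    # Simulated annealing over subsets of candidate views: flip one view
    # in/out per step, always accept improvements, and accept worsening
    # moves with probability exp(delta / temperature).
    rng = random.Random(seed)
    n = len(benefit)
    state = [False] * n

    def score(s):
        b = sum(benefit[i] for i in range(n) if s[i])
        c = sum(cost[i] for i in range(n) if s[i])
        return b if c <= budget else b - 10.0 * (c - budget)  # penalize overruns

    best, best_score = state[:], score(state)
    cur_score = best_score
    for step in range(n_steps):
        temp = (1.0 - step / n_steps) + 1e-3   # linear cooling schedule
        i = rng.randrange(n)
        state[i] = not state[i]                # propose: toggle one view
        new_score = score(state)
        if new_score >= cur_score or rng.random() < math.exp((new_score - cur_score) / temp):
            cur_score = new_score
            if cur_score > best_score:
                best, best_score = state[:], cur_score
        else:
            state[i] = not state[i]            # reject: undo the toggle
    return [i for i in range(n) if best[i]]
```

A genetic-algorithm hybrid, as in the paper, would use such an annealing pass to repair or refine candidate chromosomes rather than as a standalone search.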
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Scope. 80.351 Section 80.351 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME..., call and reply. —Working. —Digital selective calling (DSC). —Narrow-band direct-printing (NB-DP...
Winslow, J T; Insel, T R
1991-01-01
A modulatory role for serotonin has been described for the development and expression of the ultrasonic call of infant rat pups during brief maternal separations. In previous studies, serotonin reuptake inhibitors selectively reduced the rate of calling following acute administration to 9-11-day-old pups and a serotonin neurotoxin (MDMA) systematically disrupted the development of ultrasonic vocalizations but not other measures of motor development. In the current studies, we extended our investigations to include drugs with purported receptor subtype selectivities. Consistent with previous reports, acute administration of the 5HT1A agonists buspirone and 8-OH-DPAT ((±)-8-hydroxy-2-(di-n-propylamino)tetralin) reduced the rate of calling at doses which did not affect motor activity or core body temperature. The rate reducing effects of buspirone persisted up to 1 but not 2 h after injection. Administration of purported 5HT1B receptor agonists, CGS12066B (7-trifluoromethyl-4-(4-methyl-1-piperazinyl)pyrrolo[1,2-a]quinoxaline) and TFMPP (1-[3-(trifluoromethyl)phenyl]piperazine) increased the rate of calling depending on the specificity of the drug for the 5HT1B receptor. d,l-Propranolol, a 5HT1 receptor antagonist, blocked the effects of both 8-OH-DPAT and TFMPP. m-CPP (1-(3-chlorophenyl)piperazine) and DOI ((±)-1-(2,5-dimethoxy-4-iodophenyl)-2-aminopropane), drugs with putative actions at 5HT1C and 5HT2 receptor sites, both decreased calling but differed according to their effects on motor activity. Ritanserin, a 5HT2 and 5HT1C antagonist, produced a dose-related increase in call rate. A dose of ritanserin with no apparent intrinsic effects effectively antagonized DOI rate reducing effects but potentiated the rate reducing effects of m-CPP.(ABSTRACT TRUNCATED AT 250 WORDS)
ALCF Data Science Program: Productive Data-centric Supercomputing
NASA Astrophysics Data System (ADS)
Romero, Nichols; Vishwanath, Venkatram
The ALCF Data Science Program (ADSP) is targeted at big data science problems that require leadership computing resources. The goal of the program is to explore and improve a variety of computational methods that will enable data-driven discoveries across all scientific disciplines. The projects will focus on data science techniques covering a wide area of discovery including but not limited to uncertainty quantification, statistics, machine learning, deep learning, databases, pattern recognition, image processing, graph analytics, data mining, real-time data analysis, and complex and interactive workflows. Project teams will be among the first to access Theta, ALCF's forthcoming 8.5 petaflops Intel/Cray system. The program will transition to the 200 petaflop/s Aurora supercomputing system when it becomes available. In 2016, four projects have been selected to kick off the ADSP. The selected projects span experimental and computational sciences and range from modeling the brain to discovering new materials for solar-powered windows to simulating collision events at the Large Hadron Collider (LHC). The program will have a regular call for proposals, with the next call expected in Spring 2017 (http://www.alcf.anl.gov/alcf-data-science-program). This research used resources of the ALCF, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.
Machine Learned Replacement of N-Labels for Basecalled Sequences in DNA Barcoding.
Ma, Eddie Y T; Ratnasingham, Sujeevan; Kremer, Stefan C
2018-01-01
This study presents a machine learning method that increases the number of identified bases in Sanger Sequencing. The system post-processes a KB basecalled chromatogram. It selects a recoverable subset of N-labels in the KB-called chromatogram to replace with basecalls (A, C, G, T). An N-label correction is defined given an additional read of the same sequence, and a human finished sequence. Corrections are added to the dataset when an alignment determines the additional read and human agree on the identity of the N-label. KB must also rate the replacement with quality value of in the additional read. Corrections are only available during system training. To develop the system, nearly 850,000 N-labels were obtained from Barcode of Life Datasystems, the premier database of genetic markers called DNA Barcodes. Increasing the number of correct bases improves reference sequence reliability, increases sequence identification accuracy, and assures analysis correctness. Keeping with barcoding standards, our system maintains an error rate of percent. Our system only applies corrections when it estimates a low rate of error. Tested on this data, our automation selects and recovers: 79 percent of N-labels from COI (animal barcode); 80 percent from matK and rbcL (plant barcodes); and 58 percent from non-protein-coding sequences (across eukaryotes).
Visual aggregate analysis of eligibility features of clinical trials.
He, Zhe; Carini, Simona; Sim, Ida; Weng, Chunhua
2015-04-01
To develop a method for profiling the collective populations targeted for recruitment by multiple clinical studies addressing the same medical condition, one eligibility feature at a time. Using a previously published database, COMPACT, as the backend, we designed a scalable method for visual aggregate analysis of clinical trial eligibility features. This method consists of four modules for eligibility feature frequency analysis, query builder, distribution analysis, and visualization, respectively. This method is capable of analyzing (1) frequently used qualitative and quantitative features for recruiting subjects for a selected medical condition, (2) distribution of study enrollment on consecutive value points or value intervals of each quantitative feature, and (3) distribution of studies on the boundary values, permissible value ranges, and value range widths of each feature. All analysis results were visualized using the Google Charts API. Five recruited potential users assessed the usefulness of this method for identifying common patterns in any selected eligibility feature for clinical trial participant selection. We implemented this method as a Web-based analytical system called VITTA (Visual Analysis Tool of Clinical Study Target Populations). We illustrated the functionality of VITTA using two sample queries involving the quantitative features BMI and HbA1c for the conditions "hypertension" and "Type 2 diabetes", respectively. The recruited potential users rated the user-perceived usefulness of VITTA with an average score of 86.4/100. We contributed a novel aggregate analysis method to enable the interrogation of common patterns in quantitative eligibility criteria and the collective target populations of multiple related clinical studies. A larger-scale study is warranted to formally assess the usefulness of VITTA among clinical investigators and sponsors in various therapeutic areas. Copyright © 2015 Elsevier Inc. All rights reserved.
Virtual shelves in a digital library: a framework for access to networked information sources.
Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E
1995-01-01
OBJECTIVE: Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. DESIGN: This framework uses a metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers are assigned to information sources. Call numbers are based on standard vocabulary codes. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. RESULTS: The framework has been implemented in two different systems. One system is based on the Open System Foundation/Distributed Computing Environment and the other is based on the World Wide Web. CONCLUSIONS: This framework applies traditional methods of library classification and cataloging in new ways. It is compatible with two traditional styles of selecting information: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library and information science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources. PMID:8581554
Projection methods for line radiative transfer in spherical media.
NASA Astrophysics Data System (ADS)
Anusha, L. S.; Nagendra, K. N.
An efficient numerical method called the Preconditioned Bi-Conjugate Gradient (Pre-BiCG) method is presented for the solution of the radiative transfer equation in spherical geometry. A variant of this method called Stabilized Preconditioned Bi-Conjugate Gradient (Pre-BiCG-STAB) is also presented. These methods are based on projections onto subspaces of the n-dimensional Euclidean space R^n called Krylov subspaces. The methods are shown to be faster in terms of convergence rate than contemporary iterative methods such as Jacobi, Gauss-Seidel and Successive Over-Relaxation (SOR).
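The stabilized variant can be sketched generically (this is the textbook Bi-CGSTAB recurrence with a simple diagonal Jacobi preconditioner, not the authors' radiative-transfer-specific operator; names are ours):

```python
import numpy as np

def bicgstab_jacobi(A, b, tol=1e-10, max_iter=500):
    # Bi-CGSTAB with a diagonal (Jacobi) preconditioner: a stabilized
    # Krylov-subspace solver for a general linear system A x = b.
    Minv = 1.0 / np.diag(A)                    # preconditioner: inverse diagonal
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    r0 = r.copy()                              # fixed shadow residual
    rho = alpha = omega = 1.0
    p = np.zeros_like(b, dtype=float)
    v = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        rho_new = r0 @ r
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        p_hat = Minv * p                       # apply preconditioner
        v = A @ p_hat
        alpha = rho_new / (r0 @ v)
        s = r - alpha * v
        if np.linalg.norm(s) < tol:
            return x + alpha * p_hat
        s_hat = Minv * s
        t = A @ s_hat
        omega = (t @ s) / (t @ t)              # stabilization step
        x = x + alpha * p_hat + omega * s_hat
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            return x
        rho = rho_new
    return x
```

Unlike Jacobi or Gauss-Seidel sweeps, each iteration here minimizes over a growing Krylov subspace, which is the source of the faster convergence the abstract reports.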
Recursive regularization for inferring gene networks from time-course gene expression profiles
Shimamura, Teppei; Imoto, Seiya; Yamaguchi, Rui; Fujita, André; Nagasaki, Masao; Miyano, Satoru
2009-01-01
Background Inferring gene networks from time-course microarray experiments with vector autoregressive (VAR) model is the process of identifying functional associations between genes through multivariate time series. This problem can be cast as a variable selection problem in Statistics. One of the promising methods for variable selection is the elastic net proposed by Zou and Hastie (2005). However, VAR modeling with the elastic net succeeds in increasing the number of true positives while it also results in increasing the number of false positives. Results By incorporating relative importance of the VAR coefficients into the elastic net, we propose a new class of regularization, called recursive elastic net, to increase the capability of the elastic net and estimate gene networks based on the VAR model. The recursive elastic net can reduce the number of false positives gradually by updating the importance. Numerical simulations and comparisons demonstrate that the proposed method succeeds in reducing the number of false positives drastically while keeping the high number of true positives in the network inference and achieves two or more times higher true discovery rate (the proportion of true positives among the selected edges) than the competing methods even when the number of time points is small. We also compared our method with various reverse-engineering algorithms on experimental data of MCF-7 breast cancer cells stimulated with two ErbB ligands, EGF and HRG. Conclusion The recursive elastic net is a powerful tool for inferring gene networks from time-course gene expression profiles. PMID:19386091
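The recursive reweighting idea can be sketched as follows (a plain coordinate-descent elastic net whose per-coefficient L1 weights are updated from the previous fit; parameter choices and names are ours, and this illustrates only the weighting scheme, not the authors' exact VAR estimator):

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator used by coordinate descent for the L1 term.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net(X, y, lam, alpha=0.5, weights=None, n_iter=200):
    # Coordinate-descent elastic net with per-coefficient L1 weights:
    # minimize (1/2n)||y - Xw||^2 + lam*alpha*sum(w_j * |w_j|-weights)
    # + lam*(1-alpha)/2 * ||w||^2.
    n, p = X.shape
    w = np.zeros(p)
    weights = np.ones(p) if weights is None else weights
    z = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            resid = y - X @ w + X[:, j] * w[j]          # partial residual
            rho = X[:, j] @ resid / n
            w[j] = soft_threshold(rho, lam * alpha * weights[j]) / (z[j] + lam * (1 - alpha))
    return w

def recursive_elastic_net(X, y, lam, alpha=0.5, n_rounds=3, delta=1e-3):
    # Re-fit with L1 weights inversely proportional to the previous
    # coefficient magnitudes: small (likely false-positive) coefficients
    # are shrunk harder on each round, large ones are kept.
    w = elastic_net(X, y, lam, alpha)
    for _ in range(n_rounds - 1):
        weights = 1.0 / (np.abs(w) / (np.abs(w).max() + 1e-12) + delta)
        weights /= weights.min()
        w = elastic_net(X, y, lam, alpha, weights)
    return w
```

On a sparse ground truth, the reweighting rounds drive spurious edges to exactly zero while leaving the true coefficients nearly untouched, which is the false-positive reduction the abstract describes.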
Improvement of kurtosis-guided-grams via Gini index for bearing fault feature identification
NASA Astrophysics Data System (ADS)
Miao, Yonghao; Zhao, Ming; Lin, Jing
2017-12-01
A group of kurtosis-guided-grams, such as Kurtogram, Protrugram and SKRgram, is designed to detect the resonance band excited by faults based on the sparsity index. However, a common issue associated with these methods is that they tend to choose the frequency band with individual impulses rather than the desired fault impulses. This may be attributed to the selection of the sparsity index, kurtosis, which is vulnerable to impulsive noise. In this paper, to solve the problem, a sparsity index, called the Gini index, is introduced as an alternative estimator for the selection of the resonance band. It has been found that the sparsity index is still able to provide guidelines for the selection of the fault band without prior information of the fault period. More importantly, the Gini index has unique performance in random-impulse resistance, which renders the improved methods using the index free from the random impulse caused by external knocks on the bearing housing, or electromagnetic interference. By virtue of these advantages, the improved methods using the Gini index not only overcome the shortcomings but are more effective under harsh working conditions, even in the complex structure. Finally, the comparison between the kurtosis-guided-grams and the improved methods using the Gini index is made using the simulated and experimental data. The results verify the effectiveness of the improvement by both the fixed-axis bearing and planetary bearing fault signals.
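The Gini index used here as a sparsity measure has a standard closed form (Hurley and Rickard's rank-weighted definition; the kurtosis comparison below is our illustration, not the paper's experiment): sort the absolute signal values and weight them by rank.

```python
import numpy as np

def gini_index(x):
    # Gini sparsity index: 0 for a constant-magnitude signal,
    # approaching 1 for a single isolated spike.
    x = np.sort(np.abs(np.asarray(x, dtype=float)))
    n = len(x)
    l1 = x.sum()
    if l1 == 0:
        return 0.0
    k = np.arange(1, n + 1)
    return 1.0 - 2.0 * np.sum((x / l1) * (n - k + 0.5) / n)

def kurtosis(x):
    # Plain (non-excess) kurtosis, for comparison.
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2
```

A single stray impulse inflates kurtosis by an order of magnitude relative to a repetitive fault-impulse train, while the bounded [0, 1] scale of the Gini index compresses that gap, which is consistent with the random-impulse resistance claimed above.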
Unbiased feature selection in learning random forests for high-dimensional data.
Nguyen, Thanh-Tung; Huang, Joshua Zhexue; Nguyen, Thuy Thi
2015-01-01
Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting. This makes RFs have poor accuracy when working with high-dimensional data. Besides that, RFs have bias in the feature selection process where multivalued features are favored. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features in learning RFs for high-dimensional data. We first remove the uninformative features using p-value assessment, and the subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into two subsets. A feature weighting sampling technique is used to sample features from these two subsets for building trees. This approach enables one to generate more accurate trees, while allowing one to reduce dimensionality and the amount of data needed for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets including image datasets. The experimental results have shown that RFs with the proposed approach outperformed the existing random forests in increasing the accuracy and the AUC measures.
Seeking inclusion in an exclusive process: discourses of medical school student selection.
Razack, Saleem; Hodges, Brian; Steinert, Yvonne; Maguire, Mary
2015-01-01
Calls to increase medical class representativeness to better reflect the diversity of society represent a growing international trend. There is an inherent tension between these calls and competitive student selection processes driven by academic achievement. How is this tension manifested? Our three-phase interdisciplinary research programme focused on the discourses of excellence, equity and diversity in the medical school selection process, as conveyed by key stakeholders: (i) institutions and regulatory bodies (the websites of 17 medical schools and 15 policy documents from national regulatory bodies); (ii) admissions committee members (ACMs) (according to semi-structured interviews [n = 9]), and (iii) successful applicants (according to semi-structured interviews [n = 14]). The work is theoretically situated within the works of Foucault, Bourdieu and Bakhtin. The conceptual framework is supplemented by critical hermeneutics and the performance theories of Goffman. Academic excellence discourses consistently predominate over discourses calling for greater representativeness in medical classes. Policy addressing demographic representativeness in medicine may unwittingly contribute to the reproduction of historical patterns of exclusion of under-represented groups. In ACM selection practices, another discursive tension is exposed as the inherent privilege in the process is marked, challenging the ideal of medicine as a meritocracy. Applicants' representations of self in the 'performance' of interviewing demonstrate implicit recognition of the power inherent in the act of selection and are manifested in the use of explicit strategies to 'fit in'. How can this critical discourse analysis inform improved inclusiveness in student selection? Policymakers addressing diversity and equity issues in medical school admissions should explicitly recognise the power dynamics at play between the profession and marginalised groups. 
For greater inclusion and to avoid one authoritative definition of excellence, we suggest a transformative model of faculty development aimed at promoting multiple kinds of excellence. Through this multi-pronged approach, we call for the profession to courageously confront the cherished notion of the medical meritocracy in order to avoid unwanted aspects of elitism. © 2014 John Wiley & Sons Ltd.
Paradigms, pragmatism and possibilities: mixed-methods research in speech and language therapy.
Glogowska, Margaret
2011-01-01
After the decades of the so-called 'paradigm wars' in social science research methodology and the controversy about the relative place and value of quantitative and qualitative research methodologies, 'paradigm peace' appears to have now been declared. This has come about as many researchers have begun to take a 'pragmatic' approach in the selection of research methodology, choosing the methodology best suited to answering the research question rather than conforming to a methodological orthodoxy. With the differences in the philosophical underpinnings of the two traditions set to one side, an increasing awareness, and valuing, of the 'mixed-methods' approach to research is now present in the fields of social, educational and health research. To explore what is meant by mixed-methods research and the ways in which quantitative and qualitative methodologies and methods can be combined and integrated, particularly in the broad field of health services research and the narrower one of speech and language therapy. The paper discusses the ways in which methodological approaches have already been combined and integrated in health services research and speech and language therapy, highlighting the suitability of mixed-methods research for answering the typically multifaceted questions arising from the provision of complex interventions. The challenges of combining and integrating quantitative and qualitative methods and the barriers to the adoption of mixed-methods approaches are also considered. The questions about healthcare, as it is being provided in the 21st century, call for a range of methodological approaches. This is particularly the case for human communication and its disorders, where mixed-methods research offers a wealth of possibilities. In turn, speech and language therapy research should be able to contribute substantively to the future development of mixed-methods research. © 2010 Royal College of Speech & Language Therapists.
3D sensor placement strategy using the full-range pheromone ant colony system
NASA Astrophysics Data System (ADS)
Shuo, Feng; Jingqing, Jia
2016-07-01
An optimized sensor placement strategy is extremely beneficial for ensuring the safety and reducing the cost of structural health monitoring (SHM) systems. The sensors must be placed such that important dynamic information is obtained while the number of sensors is minimized. The common practice is to select individual sensor directions with 1D sensor placement methods and then install triaxial sensors in those directions for monitoring. However, this may lead to non-optimal placement of many triaxial sensors. In this paper, a new method based on the ant colony system (ACS), called the full-range pheromone ant colony system (FRPACS), is proposed to solve the optimal placement of triaxial sensors. The triaxial sensors are placed as single units in an optimal fashion. The new method is then compared with other algorithms using the Dalian North Bridge as a case study. The computational precision and iteration efficiency of FRPACS are greatly improved compared with the original ACS and the EFI method.
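The pheromone mechanics behind an ACS-style placement search can be sketched as follows. This is a generic, minimal ant colony system over a toy "information per position" objective, not the paper's FRPACS: the full-range pheromone rule, triaxial grouping and mode-shape-based scoring are not reproduced, and all names and values here are illustrative.

```python
import random

def coverage_score(selected, info):
    # Toy objective: total "modal information" captured by the chosen positions.
    # A real SHM application would score candidate sets against mode shapes
    # (e.g. an effective-independence style criterion).
    return sum(info[i] for i in selected)

def acs_select(info, k, n_ants=10, n_iter=30, rho=0.1, seed=1):
    """Minimal ant colony system for picking k sensor positions."""
    rng = random.Random(seed)
    positions = list(info)
    tau = {p: 1.0 for p in positions}          # pheromone per position
    best, best_score = None, float("-inf")
    for _ in range(n_iter):
        for _ant in range(n_ants):
            # Each ant builds a k-subset, sampling positions by pheromone weight.
            chosen, pool = [], positions[:]
            for _ in range(k):
                weights = [tau[p] for p in pool]
                pick = rng.choices(pool, weights=weights)[0]
                chosen.append(pick)
                pool.remove(pick)
            score = coverage_score(chosen, info)
            if score > best_score:
                best, best_score = chosen, score
        # Evaporate pheromone everywhere, then reinforce the best-so-far subset.
        for p in positions:
            tau[p] *= (1.0 - rho)
        for p in best:
            tau[p] += rho * best_score
    return sorted(best), best_score
```

With a small dictionary of candidate positions, the search concentrates pheromone on the high-information positions over successive iterations.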
NASA Astrophysics Data System (ADS)
Botti, Lorenzo; Di Pietro, Daniele A.
2018-10-01
We propose and validate a novel extension of Hybrid High-Order (HHO) methods to meshes featuring curved elements. HHO methods are based on discrete unknowns that are broken polynomials on the mesh and its skeleton. We propose here the use of physical frame polynomials over mesh elements and reference frame polynomials over mesh faces. With this choice, the degree of face unknowns must be suitably selected in order to recover on curved meshes the same convergence rates as on straight meshes. We provide an estimate of the optimal face polynomial degree depending on the element polynomial degree and on the so-called effective mapping order. The estimate is numerically validated through specifically crafted numerical tests. All test cases are conducted considering two- and three-dimensional pure diffusion problems, and include comparisons with discontinuous Galerkin discretizations. The extension to agglomerated meshes with curved boundaries is also considered.
Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu
2017-01-01
This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, the chaotic particle initializes the fruit fly swarm location and replaces the expression of distance for the fruit fly to find the food source. In addition, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in an SVM to perform both parameter tuning for the SVM and feature selection to solve real-world classification problems. This method, called the chaotic improved fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem.
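The swarm mechanic underlying fruit fly optimization can be sketched with a few lines of code. This is the basic FOA loop (swarm location, random fly steps, swarm follows the best fly), not the paper's chaotic CIFOA variant: the chaotic initialization, modified distance expression and mutation strategy are not reproduced, and the objective here is a stand-in test function rather than SVM parameter tuning.

```python
import random

def foa_minimize(f, n_flies=20, n_iter=60, step=1.0, seed=0):
    """Minimal basic fruit fly optimization: the swarm sits at (x, y);
    each fly takes a random step, the candidate is evaluated, and the
    swarm moves to the best fly found so far."""
    rng = random.Random(seed)
    x, y = rng.uniform(-5, 5), rng.uniform(-5, 5)   # initial swarm location
    best_xy, best_val = (x, y), f(x, y)
    for _ in range(n_iter):
        for _ in range(n_flies):
            cx = x + rng.uniform(-step, step)       # fly's random step
            cy = y + rng.uniform(-step, step)
            val = f(cx, cy)
            if val < best_val:
                best_xy, best_val = (cx, cy), val
        x, y = best_xy                              # swarm follows the best fly

    return best_xy, best_val

def sphere(u, v):
    # Test objective: global minimum 0 at the origin.
    return u * u + v * v
```

In an SVM application the objective `f` would instead train and cross-validate a classifier for a candidate (C, gamma) pair.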
47 CFR 80.148 - Watch on 156.8 MHz (Channel 16).
Code of Federal Regulations, 2010 CFR
2010-10-01
... exchanging communications. For GMDSS ships, 156.525 MHz is the calling frequency for distress, safety, and general communications using digital selective calling and the watch on 156.800 MHz is provided so that... 80.148 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO...
Guimaraes, Rodrigo Soares; Delorme-Axford, Elizabeth; Klionsky, Daniel J; Reggiori, Fulvio
2015-03-01
Autophagy is a conserved intracellular catabolic pathway that degrades unnecessary or dysfunctional cellular components. Components destined for degradation are sequestered into double-membrane vesicles called autophagosomes, which subsequently fuse with the vacuole/lysosome delivering their cargo into the interior of this organelle for turnover. Autophagosomes are generated through the concerted action of the autophagy-related (Atg) proteins. The yeast Saccharomyces cerevisiae has been key in the identification of the corresponding genes and their characterization, and it remains one of the leading model systems for the investigation of the molecular mechanism and functions of autophagy. In particular, it is still pivotal for the study of selective types of autophagy. The objective of this review is to present detailed protocols of the methods available to monitor the progression of both nonselective and selective types of autophagy, and to discuss their advantages and disadvantages. The ultimate aim is to provide researchers with the information necessary to select the optimal approach to address their biological question. Copyright © 2014 Elsevier Inc. All rights reserved.
Chimpanzee quiet hoo variants differ according to context.
Crockford, Catherine; Gruber, Thibaud; Zuberbühler, Klaus
2018-05-01
In comparative studies of the evolution of communication, the function and use of animal quiet calls have typically been understudied, even though these signals are presumably under selection like other vocalizations, such as alarm calls. Here, we examine vocalization diversification of chimpanzee quiet 'hoos' produced in three contexts (travel, rest and alert) and potential pressures promoting diversification. Previous playback and observational studies have suggested that the overarching function of chimpanzee hoos is to stay in contact with others, particularly bond partners. We conducted an acoustic analysis of hoos using audio recordings from wild chimpanzees (Pan troglodytes schweinfurthii) of Budongo Forest, Uganda. We identified three acoustically distinguishable, context-specific hoo variants. Each call variant requires specific responses from receivers to avoid breaking up the social unit. We propose that callers may achieve coordination by using acoustically distinguishable calls, advertising their own behavioural intentions. We conclude that natural selection has acted towards acoustically diversifying an inconspicuous, quiet vocalization, the chimpanzee hoo. This evolutionary process may have been favoured by the fact that signallers and recipients share the same goal, to maintain social cohesion, particularly among those who regularly cooperate, suggesting that call diversification has been favoured by the demands of cooperative activities.
Condition-dependent reproductive effort in frogs infected by a widespread pathogen
Roznik, Elizabeth A.; Sapsford, Sarah J.; Pike, David A.; Schwarzkopf, Lin; Alford, Ross A.
2015-01-01
To minimize the negative effects of an infection on fitness, hosts can respond adaptively by altering their reproductive effort or by adjusting their timing of reproduction. We studied effects of the pathogenic fungus Batrachochytrium dendrobatidis on the probability of calling in a stream-breeding rainforest frog (Litoria rheocola). In uninfected frogs, calling probability was relatively constant across seasons and body conditions, but in infected frogs, calling probability differed among seasons (lowest in winter, highest in summer) and was strongly and positively related to body condition. Infected frogs in poor condition were up to 40% less likely to call than uninfected frogs, whereas infected frogs in good condition were up to 30% more likely to call than uninfected frogs. Our results suggest that frogs employed a pre-existing, plastic, life-history strategy in response to infection, which may have complex evolutionary implications. If infected males in good condition reproduce at rates equal to or greater than those of uninfected males, selection on factors affecting disease susceptibility may be minimal. However, because reproductive effort in infected males is positively related to body condition, there may be selection on mechanisms that limit the negative effects of infections on hosts. PMID:26063847
Cheng, Qiang; Zhou, Hongbo; Cheng, Jie
2011-06-01
Selecting features for multiclass classification is a critically important task for pattern recognition and machine learning applications. Especially challenging is selecting an optimal subset of features from high-dimensional data, which typically have many more variables than observations and contain significant noise, missing components, or outliers. Existing methods either cannot handle high-dimensional data efficiently or scalably, or can only obtain a local optimum instead of the global optimum. Toward selecting the globally optimal subset of features efficiently, we introduce a new selector, which we call the Fisher-Markov selector, to identify those features that are the most useful in describing essential differences among the possible groups. In particular, in this paper we present a way to represent essential discriminating characteristics together with sparsity as an optimization objective. With properly identified measures for the sparseness and discriminativeness in possibly high-dimensional settings, we take a systematic approach for optimizing the measures to choose the best feature subset. We use Markov random field optimization techniques to solve the formulated objective functions for simultaneous feature selection. Our results are noncombinatorial, and they can achieve the exact global optimum of the objective function for some special kernels. The method is fast; in particular, it can be linear in the number of features and quadratic in the number of observations. We apply our procedure to a variety of real-world data, including a mid-dimensional optical handwritten digit data set and high-dimensional microarray gene expression data sets. The effectiveness of our method is confirmed by experimental results. In pattern recognition and from a model selection viewpoint, our procedure shows that it is possible to select the most discriminating subset of variables by solving a very simple unconstrained objective function, which in fact can be obtained with an explicit expression.
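The discriminativeness measure at the heart of such selectors builds on the classical Fisher criterion. The sketch below scores each feature independently by its Fisher score (between-class scatter over within-class scatter); it is a simplified per-feature filter in the spirit of the abstract, not the authors' coupled Markov-field objective.

```python
def fisher_scores(X, y):
    """Per-feature Fisher score: between-class scatter divided by
    within-class scatter. Higher scores indicate features that better
    separate the classes. X is a list of rows, y a list of labels."""
    classes = sorted(set(y))
    n, d = len(X), len(X[0])
    scores = []
    for j in range(d):
        col = [row[j] for row in X]
        mean_all = sum(col) / n
        between = within = 0.0
        for c in classes:
            vals = [col[i] for i in range(n) if y[i] == c]
            mu = sum(vals) / len(vals)
            between += len(vals) * (mu - mean_all) ** 2   # class-mean spread
            within += sum((v - mu) ** 2 for v in vals)    # in-class spread
        scores.append(between / within if within > 0 else float("inf"))
    return scores
```

Ranking features by this score and keeping the top k gives a fast baseline against which joint-selection methods like the Fisher-Markov selector can be compared.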
Feature selection for examining behavior by pathology laboratories.
Hawkins, S; Williams, G; Baxter, R
2001-08-01
Australia has a universal health insurance scheme called Medicare, which is managed by Australia's Health Insurance Commission. Medicare payments for pathology services generate voluminous transaction data on patients, doctors and pathology laboratories. The Health Insurance Commission (HIC) currently uses predictive models to monitor compliance with regulatory requirements. The HIC commissioned a project to investigate the generation of new features from the data. Feature generation has not appeared as an important step in the knowledge discovery in databases (KDD) literature. New interesting features for use in predictive modeling are generated. These features were summarized, visualized and used as inputs for clustering and outlier detection methods. Data organization and data transformation methods are described for the efficient access and manipulation of these new features.
A test of the acoustic adaptation hypothesis in four species of marmots.
Daniel; Blumstein
1998-12-01
Acoustic signals must be transmitted from a signaller to a receiver, during which time they become modified. The acoustic adaptation hypothesis suggests that selection should shape the structure of long-distance signals to maximize transmission through different habitats. A specific prediction of the acoustic adaptation hypothesis is that long-distance signals of animals in their native habitat are expected to change less during transmission than non-native signals within that habitat. This prediction was tested using the alarm calls of four species of marmots that live in acoustically different habitats and produce species-specific, long-distance alarm vocalizations: yellow-bellied marmot, Marmota flaviventris; Olympic marmot, M. olympus; hoary marmot, M. caligata; and woodchuck, M. monax. By doing so, we evaluated the relative importance the acoustic environment plays in selecting for divergent marmot alarm calls. Representative alarm calls of the four species were broadcast and rerecorded in each species' habitat at four distances from a source. Rerecorded, and therefore degraded, alarm calls were compared to undegraded calls using spectrogram correlation. If each species' alarm call was transmitted with less overall degradation in its own environment, a significant interaction between species' habitat and species' call type would be expected. Transmission fidelity at each of four distances was treated as a multivariate response and differences among habitat and call type were tested in a two-way MANOVA. Although significant overall differences in the transmission properties of the habitats were found, and significant overall differences in the transmission properties of the call types were found, there was no significant interaction between habitat and call type. Thus, the evidence did not support the acoustic adaptation hypothesis for these marmot species.
Factors other than maximizing long-distance transmission through the environment may be important in the evolution of species-specific marmot alarm calls. (c) 1998 The Association for the Study of Animal Behaviour.
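The spectrogram-correlation measure used to compare degraded and undegraded calls can be illustrated with a small sketch. This computes a single Pearson correlation between two equal-size spectrogram matrices, cell by cell; full spectrogram cross-correlation, as used in bioacoustics tools, would additionally slide the matrices in time and take the peak value, which is not shown here.

```python
import math

def spectrogram_similarity(clean, degraded):
    """Pearson correlation between two equal-size spectrogram matrices
    (rows = time bins, columns = frequency bins), flattened cell-by-cell.
    Returns 1.0 for identical energy patterns, lower values as the
    degraded recording diverges from the clean one."""
    a = [v for row in clean for v in row]
    b = [v for row in degraded for v in row]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)
```

In a transmission study, the similarity of a rerecorded call to the broadcast original, at each distance, gives the "transmission fidelity" response analysed in the MANOVA.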
Lee, Ko-Huan; Shaner, Pei-Jen L; Lin, Yen-Po; Lin, Si-Min
2016-05-01
Acoustic signals for mating are important traits that could drive population differentiation and speciation. Ecology may play a role in acoustic divergence through direct selection (e.g., local adaptation to abiotic environment), constraint of correlated traits (e.g., acoustic traits linked to another trait under selection), and/or interspecific competition (e.g., character displacement). However, genetic drift alone can also drive acoustic divergence. It is not always easy to differentiate the role of ecology versus drift in acoustic divergence. In this study, we tested the role of ecology and drift in shaping geographic variation in the advertisement calls of Microhyla fissipes. We examined three predictions based on ecological processes: (1) the correlation between temperature and call properties across M. fissipes populations; (2) the correlation between call properties and body size across M. fissipes populations; and (3) reproductive character displacement (RCD) in call properties between M. fissipes populations that are sympatric with and allopatric to a congener M. heymonsi. To test genetic drift, we examined correlations among call divergence, geographic distance, and genetic distance across M. fissipes populations. We recorded the advertisement calls from 11 populations of M. fissipes in Taiwan, five of which are sympatrically distributed with M. heymonsi. We found geographic variation in both temporal and spectral properties of the advertisement calls of M. fissipes. However, the call properties were not correlated with local temperature or the callers' body size. Furthermore, we did not detect RCD. By contrast, call divergence, geographic distance, and genetic distance between M. fissipes populations were all positively correlated. The comparisons between phenotypic QST (PST) and FST values did not show significant differences, suggesting a role of drift.
We concluded that genetic drift, rather than ecological processes, is the more likely driver for the geographic variation in the advertisement calls of M. fissipes.
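Correlating call divergence with geographic or genetic distance, as described above, is commonly done with a Mantel test, since distance-matrix entries are not independent observations. The sketch below is a generic permutation Mantel test, not the authors' specific analysis; matrix contents and permutation count are illustrative.

```python
import random

def mantel(d1, d2, n_perm=999, seed=0):
    """Simple Mantel test: Pearson correlation between the upper
    triangles of two symmetric distance matrices, with a p-value from
    random permutations of one matrix's rows/columns."""
    n = len(d1)
    idx = [(i, j) for i in range(n) for j in range(i + 1, n)]

    def corr(order):
        # Correlate d1's upper triangle with d2's, under a relabelling of d2.
        x = [d1[i][j] for i, j in idx]
        y = [d2[order[i]][order[j]] for i, j in idx]
        mx, my = sum(x) / len(x), sum(y) / len(y)
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = (sum((a - mx) ** 2 for a in x) *
               sum((b - my) ** 2 for b in y)) ** 0.5
        return num / den

    rng = random.Random(seed)
    observed = corr(list(range(n)))
    hits = 0
    for _ in range(n_perm):
        perm = list(range(n))
        rng.shuffle(perm)
        if corr(perm) >= observed:
            hits += 1
    p = (hits + 1) / (n_perm + 1)   # permutation p-value
    return observed, p
```

A positive observed correlation with a small p-value is the pattern reported here between call divergence and geographic/genetic distance.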
Statistical technique for analysing functional connectivity of multiple spike trains.
Masud, Mohammad Shahed; Borisyuk, Roman
2011-03-15
A new statistical technique, the Cox method, used for analysing functional connectivity of simultaneously recorded multiple spike trains is presented. This method is based on the theory of modulated renewal processes and it estimates a vector of influence strengths from multiple spike trains (called reference trains) to the selected (target) spike train. Selecting another target spike train and repeating the calculation of the influence strengths from the reference spike trains enables researchers to find all functional connections among multiple spike trains. In order to study functional connectivity, an "influence function" is identified. This function recognises the specificity of neuronal interactions and reflects the dynamics of postsynaptic potential. In comparison to existing techniques, the Cox method has the following advantages: it does not use bins (it is a binless method); it is applicable to cases where the sample size is small; it is sensitive enough to estimate weak influences; it supports the simultaneous analysis of multiple influences; and it is able to identify a correct connectivity scheme in difficult cases of "common source" or "indirect" connectivity. The Cox method has been thoroughly tested using multiple sets of data generated by a neural network model of leaky integrate-and-fire neurons with a prescribed architecture of connections. The results suggest that this method is highly successful for analysing functional connectivity of simultaneously recorded multiple spike trains. Copyright © 2011 Elsevier B.V. All rights reserved.
Efficient visualization of urban spaces
NASA Astrophysics Data System (ADS)
Stamps, A. E.
2012-10-01
This chapter presents a new method for calculating efficiency and applies that method to the issues of selecting simulation media and evaluating the contextual fit of new buildings in urban spaces. The new method is called "meta-analysis". A meta-analytic review of 967 environments indicated that static color simulations are the most efficient media for visualizing urban spaces. For contextual fit, four original experiments are reported on how strongly five factors influence visual appeal of a street: architectural style, trees, height of a new building relative to the heights of existing buildings, setting back a third story, and distance. A meta-analysis of these four experiments and previous findings, covering 461 environments, indicated that architectural style, trees, and height had effects strong enough to warrant implementation, but the effects of setting back third stories and distance were too small to warrant implementation.
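The pooling step of a meta-analysis like the one described above can be sketched numerically. This is a generic inverse-variance fixed-effect pooling of study effect sizes, shown only to illustrate the technique; the chapter's actual weighting and effect-size models may differ, and the numbers in the test are invented.

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance fixed-effect meta-analysis: the pooled effect
    is the precision-weighted mean of the study effects, and its
    standard error shrinks as studies accumulate."""
    weights = [1.0 / v for v in variances]          # precision = 1/variance
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)   # ~95% confidence interval
    return pooled, ci
```

The pooled estimate weights precise studies more heavily, which is how a review spanning hundreds of environments can yield a single summary effect per factor.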
ERIC Educational Resources Information Center
Levy, Mike
2015-01-01
The article considers the role of qualitative research methods in CALL through describing a series of examples. These examples are used to highlight the importance and value of qualitative data in relation to a specific research objective in CALL. The use of qualitative methods in conjunction with other approaches as in mixed method research…
Identifying relevant group of miRNAs in cancer using fuzzy mutual information.
Pal, Jayanta Kumar; Ray, Shubhra Sankar; Pal, Sankar K
2016-04-01
MicroRNAs (miRNAs) act as a major biomarker of cancer. Not all miRNAs in the human body are equally important for cancer identification. We propose a methodology, called FMIMS, which automatically selects the most relevant miRNAs for a particular type of cancer. In FMIMS, miRNAs are initially grouped by using an SVM-based algorithm; then the group with the highest relevance is determined, and the miRNAs in that group are finally ranked for selection according to their redundancy. Fuzzy mutual information is used in computing the relevance of a group and the redundancy of miRNAs within it. Superiority of the most relevant group to all others, in distinguishing normal from cancer, is demonstrated on breast, renal, colorectal, lung, melanoma and prostate data. The merit of FMIMS as compared to several existing methods is established. While 12 out of 15 miRNAs selected by FMIMS corroborate with those of biological investigations, three of them, viz., "hsa-miR-519," "hsa-miR-431" and "hsa-miR-320c," are possible novel predictions for renal cancer, lung cancer and melanoma, respectively. The selected miRNAs are found to be involved in disease-specific pathways by targeting various genes. The method is also able to detect the responsible miRNAs even at the primary stage of cancer. The related code is available at http://www.jayanta.droppages.com/FMIMS.html.
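The relevance measure underlying such methods is mutual information between a feature and the class label. The sketch below computes ordinary (crisp) discrete mutual information; the paper uses a fuzzy generalization, which is not reproduced here, but the crisp version conveys how expression values can be scored for relevance to a cancer/normal label.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Discrete mutual information I(X; Y) in bits between a
    (discretized) feature xs and class labels ys. Zero for independent
    variables; 1 bit for a perfectly informative binary feature."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with counts kept integral
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi
```

Ranking features (or feature groups) by such a score, then pruning redundant ones, is the relevance/redundancy pattern common to FMIMS and MRMR-style selectors.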
Space-ecology set covering problem for modeling Daiyun Mountain Reserve, China
NASA Astrophysics Data System (ADS)
Lin, Chih-Wei; Liu, Jinfu; Huang, Jiahang; Zhang, Huiguang; Lan, Siren; Hong, Wei; Li, Wenzhou
2018-02-01
Site selection is an important issue in designing nature reserves and has been studied over the years. However, achieving a well-balanced relationship between preservation of biodiversity and site selection is still challenging. Unlike existing methods, we consider three critical components, spatial continuity, spatial compactness and ecological information, to address the problem of designing the reserve. In this paper, we propose a new mathematical model of the set covering problem, called the Space-ecology Set Covering Problem (SeSCP), for designing a reserve network. First, we generate the ecological information by forest resource investigation. Then, we split the landscape into elementary cells and calculate the ecological score of each cell. Next, we associate the ecological information with the spatial properties to select a set of cells to form a nature reserve, improving the ability to protect biodiversity. Two spatial constraints, continuity and compactness, are given in SeSCP. Continuity ensures that any selected site is connected with adjacent selected sites, and compactness minimizes the perimeter of the selected sites. In computational experiments, we take Daiyun Mountain as a study area to demonstrate the feasibility and effectiveness of the proposed model.
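The continuity constraint can be made concrete with a small sketch. The greedy routine below grows a connected reserve of k grid cells by repeatedly annexing the highest-scoring cell adjacent to the current reserve, so continuity holds by construction. This is only an illustration of the constraint, not SeSCP itself: the real model also enforces compactness (minimum perimeter) and solves the selection as a set covering optimization rather than greedily.

```python
def grow_reserve(scores, k, start):
    """Grow a connected set of k cells on a grid of ecological scores,
    starting from `start` and always annexing the best frontier cell."""
    rows, cols = len(scores), len(scores[0])
    selected = {start}
    while len(selected) < k:
        # Frontier: unselected cells 4-adjacent to the current reserve.
        frontier = set()
        for (r, c) in selected:
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = (r + dr, c + dc)
                if 0 <= nb[0] < rows and 0 <= nb[1] < cols and nb not in selected:
                    frontier.add(nb)
        # Annex the highest-scoring adjacent cell (keeps the reserve connected).
        selected.add(max(frontier, key=lambda cell: scores[cell[0]][cell[1]]))
    return selected
```

A set-covering formulation would instead require every species or habitat type to be covered by at least one selected cell, with continuity and perimeter expressed as constraints on the binary selection variables.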
Xu, Ziwei; Qiu, Lu; Ding, Feng
2018-03-21
Depending on its specific structure, or so-called chirality, a single-walled carbon nanotube (SWCNT) can be either a conductor or a semiconductor. This feature offers great potential for building ∼1 nm sized electronics if chirality-selected SWCNTs can be achieved. However, due to the limited understanding of the growth mechanism of SWCNTs, reliable methods for growing chirality-selected SWCNTs are still lacking. Here we present a theoretical model of the chirality assignment and control of SWCNTs during catalytic growth. This study reveals that the chirality of a SWCNT is determined by the kinetic incorporation of pentagons, especially the last (6th) one, during the nucleation stage. Our analysis showed that the chirality of a SWCNT is randomly assigned on a liquid or liquid-like catalyst surface, and two routes of synthesizing chirality-selected SWCNTs, which are verified by recent experimental achievements, are demonstrated. They are (i) using high melting point crystalline catalysts, such as Ta, W, Re, Os, or their alloys, and (ii) frequently changing the chirality of SWCNTs during their growth. This study paves the way for achieving chirality-selective SWCNT growth for high performance SWCNT-based electronics.
Mocan, Lucian; Tabaran, Flaviu A; Mocan, Teodora; Bele, Constantin; Orza, Anamaria Ioana; Lucan, Ciprian; Stiufiuc, Rares; Manaila, Ioana; Iulia, Ferencz; Dana, Iancu; Zaharie, Florin; Osian, Gelu; Vlad, Liviu; Iancu, Cornel
2011-01-01
The process of laser-mediated ablation of cancer cells marked with biofunctionalized carbon nanotubes is frequently called "nanophotothermolysis". We herein present a method of selective nanophotothermolysis of pancreatic cancer (PC) using multiwalled carbon nanotubes (MWCNTs) functionalized with human serum albumin (HSA). With the purpose of testing the therapeutic value of these nanobioconjugates, we have developed an ex-vivo experimental platform. Surgically resected specimens from patients with PC were preserved in a cold medium and kept alive via intra-arterial perfusion. Additionally, the HSA-MWCNTs were intra-arterially administered in the greater pancreatic artery under ultrasound guidance. Confocal and transmission electron microscopy combined with immunohistochemical staining confirmed the selective accumulation of HSA-MWCNTs inside the human PC tissue. External laser irradiation of the specimen produced extensive necrosis of the malignant tissue after the intra-arterial administration of HSA-MWCNTs, without any harmful effects on the surrounding healthy parenchyma. We have obtained a selective photothermal ablation of the malignant tissue based on the selective internalization of MWCNTs with HSA cargo inside the pancreatic adenocarcinoma after the ex-vivo intra-arterial perfusion. PMID:21720504
Development of a fuzzy logic expert system for pile selection. Master's thesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulshafer, M.L.
1989-01-01
This thesis documents the development of a prototype expert system for pile selection for use on microcomputers. It concerns the initial selection of a pile foundation taking into account parameters such as soil condition, pile length, loading scenario, material availability, contractor experience, and noise or vibration constraints. The prototype expert system, called Pile Selection, version 1 (PS1), was developed using the expert system shell FLOPS. FLOPS is a shell based on the AI language OPS5 with many unique features, all of which are utilized by PS1. Among the features used are approximate reasoning with fuzzy set theory, the blackboard architecture, and the emulated parallel processing of fuzzy production rules. A comprehensive review of the parameters used in selecting a pile was made, and the effects of the uncertainties associated with the vagueness of these parameters were examined in detail. Fuzzy set theory was utilized to deal with such uncertainties and provides the basis for developing a method for determining the best possible choice of piles for a given situation. Details of the development of PS1, including documenting and collating pile information for use in the expert knowledge data bases, are discussed.
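The approximate-reasoning style of such a system can be sketched with two toy fuzzy rules. The membership functions, rule set and pile types below are invented for illustration and are not those of PS1 or FLOPS; the sketch only shows the standard min/max fuzzy-rule mechanics.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a to peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pile_support(soil_softness, load):
    """Toy fuzzy inference (hypothetical rules): a rule's firing strength
    is the min of its antecedent memberships; support for each pile type
    is the max over the rules that conclude it. Inputs are on [0, 1]."""
    soft = tri(soil_softness, 0.0, 1.0, 2.0)    # degree the soil is "soft"
    heavy = tri(load, 0.0, 1.0, 2.0)            # degree the load is "heavy"
    # Rule 1: IF soil is soft AND load is heavy THEN favour a steel pile.
    steel = min(soft, heavy)
    # Rule 2: IF soil is firm OR load is light THEN favour a timber pile.
    timber = max(1.0 - soft, 1.0 - heavy)
    return {"steel": steel, "timber": timber}
```

A full system would defuzzify the aggregated supports across many such rules (and many more parameters) to rank candidate piles.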
Optically-Induced Cell Fusion on Cell Pairing Microstructures
NASA Astrophysics Data System (ADS)
Yang, Po-Fu; Wang, Chih-Hung; Lee, Gwo-Bin
2016-02-01
Cell fusion is a critical operation for numerous biomedical applications including cell reprogramming, hybridoma formation, cancer immunotherapy, and tissue regeneration. However, unstable cell contact and random cell pairings have limited the efficiency and yields of traditional methods. Furthermore, it is challenging to selectively perform cell fusion within a group of cells. This study reports a new approach called optically-induced cell fusion (OICF), which integrates cell-pairing microstructures with an optically-induced, localized electrical field. By projecting light patterns onto a photoconductive film (hydrogen-rich, amorphous silicon) coated on an indium-tin-oxide (ITO) glass while an alternating current electrical field was applied between two such ITO glass slides, "virtual" electrodes could be generated that could selectively fuse pairing cells. At 10 kHz, a 57% cell pairing rate and an 87% fusion efficiency were successfully achieved at a driving voltage of 20 Vpp, suggesting that this new technology could be promising for selective cell fusion within a group of cells.
NASA Technical Reports Server (NTRS)
Patterson, Michael D.; Derlaga, Joseph M.; Borer, Nicholas K.
2016-01-01
Although the primary function of propellers is typically to produce thrust, aircraft equipped with distributed electric propulsion (DEP) may utilize propellers whose main purpose is to act as a form of high-lift device. These "high-lift propellers" can be placed upstream of a wing such that, when the higher-velocity flow in the propellers' slipstreams interacts with the wing, the lift is increased. This technique is a main design feature of a new NASA advanced design project called Scalable Convergent Electric Propulsion Technology Operations Research (SCEPTOR). The goal of the SCEPTOR project is to design, build, and fly a DEP aircraft to demonstrate that such an aircraft can be much more efficient than conventional designs. This paper provides details of the high-lift propeller system configuration selection for the SCEPTOR flight demonstrator. The methods used in the high-lift propeller system conceptual design and the tradeoffs considered in selecting the number of propellers are discussed.
NASA Astrophysics Data System (ADS)
Ye, Fengying; Feng, Chenqi; Fu, Ning; Wu, Huihui; Jiang, Jibo; Han, Sheng
2015-12-01
A home-made carbon paste electrode (CPE) was modified with graphene oxide (GO)/lanthanum (La) complexes, and the resulting modified electrode, called GO-La/CPE, was fabricated for the selective determination of dopamine (DA) by cyclic voltammetry (CV) and differential pulse voltammetry (DPV). Several factors affecting the electrocatalytic performance of the modified sensor were investigated. Owing to the combination of GO and La ions, the GO-La/CPE sensor exhibited a large surface area, good selectivity, and good repeatability and stability in the oxidation reaction of DA. Under optimal conditions, the response of the GO-La/CPE electrode for determining DA was linear in the regions of 0.01-0.1 μM and 0.1-400.0 μM. The limit of detection was as low as 0.32 nM (S/N = 3). In addition, this modified electrode was successfully applied to the detection of DA in real urine and serum samples using the standard addition method, showing its promising application in the electroanalysis of real samples.
SAM-VI RNAs selectively bind S-adenosylmethionine and exhibit similarities to SAM-III riboswitches.
Mirihana Arachchilage, Gayan; Sherlock, Madeline E; Weinberg, Zasha; Breaker, Ronald R
2018-03-04
Five distinct riboswitch classes that regulate gene expression in response to the cofactor S-adenosylmethionine (SAM) or its metabolic breakdown product S-adenosylhomocysteine (SAH) have been reported previously. Collectively, these SAM- or SAH-sensing RNAs constitute the most abundant collection of riboswitches, and are found in nearly every major bacterial lineage. Here, we report a potential sixth member of this pervasive riboswitch family, called SAM-VI, which is predominantly found in Bifidobacterium species. SAM-VI aptamers selectively bind the cofactor SAM and strongly discriminate against SAH. The consensus sequence and structural model for SAM-VI share some features with the consensus model for the SAM-III riboswitch class, whose members are mainly found in lactic acid bacteria. However, there are sufficient differences between the two classes such that current bioinformatics methods separately cluster representatives of the two motifs. These findings highlight the abundance of RNA structures that can form to selectively recognize SAM, and showcase the ability of RNA to utilize diverse strategies to perform similar biological functions.
Optimal subinterval selection approach for power system transient stability simulation
Kim, Soobae; Overbye, Thomas J.
2015-10-21
Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.
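The core idea of tying the subinterval to the fastest identified mode can be sketched as follows. The heuristic below (eigenvalues of a linearized system, a fixed number of samples per period of the fastest mode) is an illustrative stand-in for the paper's SMIB-based procedure; `samples_per_period` is a hypothetical parameter.

```python
import numpy as np

def select_subinterval(A, main_step, samples_per_period=10):
    """Choose an integration subinterval that resolves the fastest mode
    of the linearized system x' = A x (a hypothetical heuristic, not
    the paper's SMIB-based procedure)."""
    eigvals = np.linalg.eigvals(A)
    fastest = max(abs(eigvals))          # magnitude of the fastest mode
    if fastest == 0:
        return main_step
    # Take a fixed number of samples per period of the fastest mode
    dt = 2 * np.pi / (samples_per_period * fastest)
    return min(dt, main_step)

# Example: a lightly damped oscillatory mode near 10 rad/s
A = np.array([[0.0, 1.0], [-100.0, -0.5]])
print(select_subinterval(A, main_step=0.1))
```

When the system has no dynamics faster than the main step, the function simply keeps the main step, matching the intent of using subintervals only where needed.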
Code of Federal Regulations, 2010 CFR
2010-10-01
... interconnected VoIP services. Further, the following requirements apply only to 911 calls placed by users whose..., includes a selective router). (b) E911 Service. As of November 28, 2005: (1) Interconnected VoIP service... as described in this section; (2) Interconnected VoIP service providers must transmit all 911 calls...
Code of Federal Regulations, 2011 CFR
2011-10-01
... interconnected VoIP services. Further, the following requirements apply only to 911 calls placed by users whose..., includes a selective router). (b) E911 Service. As of November 28, 2005: (1) Interconnected VoIP service... as described in this section; (2) Interconnected VoIP service providers must transmit all 911 calls...
Reichert, Michael S; Höbel, Gerlinde
2018-03-01
Animal signals are inherently complex phenotypes with many interacting parts combining to elicit responses from receivers. The pattern of interrelationships between signal components reflects the extent to which each component is expressed, and responds to selection, either in concert with or independently of others. Furthermore, many species have complex repertoires consisting of multiple signal types used in different contexts, and common morphological and physiological constraints may result in interrelationships extending across the multiple signals in species' repertoires. The evolutionary significance of interrelationships between signal traits can be explored within the framework of phenotypic integration, which offers a suite of quantitative techniques to characterize complex phenotypes. In particular, these techniques allow for the assessment of modularity and integration, which describe, respectively, the extent to which sets of traits covary either independently or jointly. Although signal and repertoire complexity are thought to be major drivers of diversification and social evolution, few studies have explicitly measured the phenotypic integration of signals to investigate the evolution of diverse communication systems. We applied methods from phenotypic integration studies to quantify integration in the two primary vocalization types (advertisement and aggressive calls) in the treefrogs Hyla versicolor, Hyla cinerea, and Dendropsophus ebraccatus. We recorded male calls and calculated standardized phenotypic variance-covariance (P) matrices for characteristics within and across call types. We found significant integration across call types, but the strength of integration varied by species and corresponded with the acoustic similarity of the call types within each species. H. versicolor had the most modular advertisement and aggressive calls and the least acoustically similar call types.
Additionally, P was robust to changing social competition levels in H. versicolor. Our findings suggest new directions in animal communication research in which the complex relationships among the traits of multiple signals are a key consideration for understanding signal evolution.
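As a rough illustration of a standardized phenotypic variance-covariance (P) matrix, the sketch below z-scores a set of toy call measurements so that the covariance matrix becomes a correlation matrix, and summarizes overall integration as the mean absolute off-diagonal entry. The trait values and the integration index are invented for illustration, not the study's data or statistics.

```python
import numpy as np

# Toy data: rows = males, columns = call traits (e.g. dominant
# frequency, call duration, pulse rate); values are illustrative only.
traits = np.array([
    [1100.0, 0.85, 20.1],
    [1050.0, 0.90, 19.5],
    [1200.0, 0.70, 22.3],
    [1150.0, 0.78, 21.0],
    [1020.0, 0.95, 18.9],
])

# Standardizing each trait (z-scores) makes the covariance matrix a
# correlation matrix, one common form of the standardized P matrix.
z = (traits - traits.mean(axis=0)) / traits.std(axis=0, ddof=1)
P = np.cov(z, rowvar=False)

# A simple overall integration index: mean absolute off-diagonal entry
off_diag = P[~np.eye(P.shape[0], dtype=bool)]
integration = np.mean(np.abs(off_diag))
print(P.round(2))
print(round(integration, 2))
```

Comparing such an index between trait sets within versus across call types is one simple way to contrast modularity and integration.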
A comparison of soil organic matter physical fractionation methods
NASA Astrophysics Data System (ADS)
Duddigan, Sarah; Alexander, Paul; Shaw, Liz; Collins, Chris
2017-04-01
Selecting a suitable physical fractionation method to investigate soil organic matter dynamics from the plethora that are available is a difficult task. An initial investigation of four different physical fractionation methods was conducted: (i) Six et al. (2002); (ii) Zimmermann et al. (2007); (iii) Sohi et al. (2001); and (iv) Plaza et al. (2013). Soils used for this were from a long-term organic matter field plot study in which a sandy loam soil was subjected to the following treatments: Peat (Pt), Horse Manure (H), Garden Compost (GCf), Garden Compost at half rate (GCh), and a bare plot control (BP). Although each of these methods involved the isolation of unique fractions, in the interest of comparison each fraction was categorised as either (i) physically protected (i.e. in aggregates); (ii) chemically protected (such as in organo-mineral complexes); or (iii) unprotected by either of these mechanisms (so-called 'free' organic matter). Regardless of the fractionation method used, a large amount of the variation in total C contents of the different treated soils was accounted for by the differences in unprotected particulate organic matter. When comparing the methods to one another, there were no consistent differences in carbon content in the physically protected, chemically protected, or unprotected fractions as operationally defined across all five organic matter treatments. Therefore, fractionation method selection for this research was primarily driven by the practicalities of conducting each method in the lab. All of the methods tested had limitations for use in this research. This is not a criticism of the methods themselves but largely a result of their lack of suitability for these particular samples. For example, samples that contain a lot of gravel can lead to problems for methods that use size distribution for fractionation.
Problems can also be encountered when free particulate organic matter contributes a large proportion of the sample, leaving insufficient sample for further fractionation. This highlights the need for an understanding of the nature of your sample prior to method selection.
Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics.
Chen, Wenan; Larrabee, Beth R; Ovsyannikova, Inna G; Kennedy, Richard B; Haralambieva, Iana H; Poland, Gregory A; Schaid, Daniel J
2015-07-01
Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. Copyright © 2015 by the Genetics Society of America.
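A flavor of fine mapping from marginal statistics alone can be seen in the single-variant case, where a Wakefield-style approximate Bayes factor is computable directly from the z-score. This is a simplified single-SNP analogue, not CAVIARBF itself, and the prior variance `W = 0.1` is an arbitrary illustration.

```python
import math

def approx_bayes_factor(z, W=0.1):
    """Wakefield-style approximate Bayes factor for one variant from
    its marginal z-score, comparing an effect with prior variance W
    against the null. A simplified single-SNP analogue of fine mapping
    from marginal statistics; W = 0.1 is an illustrative prior."""
    # BF10 = sqrt(1/(1+W)) * exp(z^2/2 * W/(1+W))
    r = W / (1.0 + W)
    return math.sqrt(1.0 - r) * math.exp(0.5 * z * z * r)

# Stronger marginal signals yield larger Bayes factors
print(approx_bayes_factor(1.0))
print(approx_bayes_factor(5.0))
```

Methods like CAVIARBF extend this idea to sets of variants by replacing the scalar normal densities with multivariate normals whose correlation structure comes from SNP linkage disequilibrium.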
Selective visual region of interest to enhance medical video conferencing
NASA Astrophysics Data System (ADS)
Bonneau, Walt, Jr.; Read, Christopher J.; Shirali, Girish
1998-06-01
The continued economic pressure placed upon the healthcare industry creates both challenge and opportunity to develop cost-effective healthcare tools. Tools that improve the quality of medical care while also improving the distribution of efficient care will create product demand. Video conferencing systems are one of the latest product technologies evolving into healthcare applications. The systems that provide quality bidirectional video and imaging at the lowest system and communication cost are creating many possible options for the healthcare industry. A method that uses only 128 kbits/s of ISDN bandwidth while providing quality video images in selected regions will be applied to echocardiograms using a low-cost video conferencing system operating within a basic rate ISDN line bandwidth. Within a given display area (frame), it has been observed that only selected informational areas of the frame are of value when viewing for detail and precision within an image, much in the same manner that a photograph is cropped. If a method to accomplish region of interest (ROI) coding were applied to video conferencing using the H.320 with H.263 (compression) and H.281 (camera control) international standards, medical image quality could be achieved in a cost-effective manner. For example, the cardiologist could be provided with a selectable three- to eight-end-point viewable ROI polygon that defines the ROI in the image. This is achieved by the video system calculating the selected regional end-points and creating an alpha mask to signify the importance of the ROI to the compression processor. This region is then applied to the compression algorithm in such a manner that the majority of the video conferencing processor cycles are focused on the ROI of the image. An occasional update of the non-ROI area is processed to maintain total image coherence. The user could control the non-ROI area updates.
Providing encoder-side ROI specification is of value. However, the power of this capability is improved if remote access and selection of the ROI is also provided. Using the H.281 camera standard and proposing an additional option to the standard to allow for remote ROI selection would make this possible. When ROI is applied, the equivalent of 384 kbits/s ISDN rates may be achieved or exceeded, depending upon the size of the selected ROI, while using 128 kbits/s. This opens additional opportunity to establish international calling at call rates reduced by up to sixty-six percent, making recurring communication costs attractive. Rates of twenty to thirty quality ROI updates could be achieved. It is, however, important to understand that this technique is still under development.
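The polygon-to-alpha-mask step described above can be sketched with a generic ray-casting point-in-polygon test. This is an illustrative routine, not the actual product implementation.

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) inside the polygon given as a list
    of (x, y) end-points? (A generic routine, not the product's actual
    implementation.)"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Edge crosses the horizontal line through y; toggle on crossing
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def roi_mask(width, height, poly):
    """Binary alpha mask marking the ROI for the compression stage."""
    return [[1 if point_in_polygon(x, y, poly) else 0
             for x in range(width)]
            for y in range(height)]

# A three end-point (triangular) ROI inside an 8x8 frame
mask = roi_mask(8, 8, [(1, 1), (6, 1), (3, 6)])
print(sum(map(sum, mask)))  # number of pixels inside the ROI
```

In the scheme described, such a mask would steer the encoder to spend most of its cycles on the marked pixels, with only occasional refreshes of the zero-valued region.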
Manavalan, Balachandran; Shin, Tae Hwan; Lee, Gwang
2018-01-05
DNase I hypersensitive sites (DHSs) are genomic regions that provide important information regarding the presence of transcriptional regulatory elements and the state of chromatin. Therefore, identifying DHSs in uncharacterized DNA sequences is crucial for understanding their biological functions and mechanisms. Although many experimental methods have been proposed to identify DHSs, they have proven to be expensive for genome-wide application. Therefore, it is necessary to develop computational methods for DHS prediction. In this study, we proposed a support vector machine (SVM)-based method for predicting DHSs, called DHSpred (DNase I Hypersensitive Site predictor in human DNA sequences), which was trained with 174 optimal features. The optimal combination of features was identified from a large set that included nucleotide composition and di- and trinucleotide physicochemical properties, using a random forest algorithm. DHSpred achieved a Matthews correlation coefficient and accuracy of 0.660 and 0.871, respectively, which were 3% higher than those of control SVM predictors trained with non-optimized features, indicating the efficiency of the feature selection method. Furthermore, the performance of DHSpred was superior to that of state-of-the-art predictors. An online prediction server has been developed to assist the scientific community, and is freely available at: http://www.thegleelab.org/DHSpred.html.
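A minimal sketch of the two-stage idea (random-forest feature ranking followed by an SVM) using scikit-learn and synthetic data. The real DHSpred works from 174 optimized sequence-derived features; the data, sizes, and hyperparameters below are arbitrary illustrations.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, p, k = 200, 30, 5        # samples, total features, features to keep
X = rng.normal(size=(n, p))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # only 2 features carry signal

# Stage 1: rank features by random-forest importance, keep the top k
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:k]

# Stage 2: train the final SVM on the selected features only
svm = SVC(kernel="rbf").fit(X[:, top], y)
print(sorted(int(i) for i in top))   # selected feature indices
print(svm.score(X[:, top], y))       # training accuracy
```

With a clean synthetic signal like this, the informative features land in the selected set and the downstream SVM trains on a much smaller input, which is the efficiency the abstract attributes to its feature selection step.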
Advertisement call and genetic structure conservatism: good news for an endangered Neotropical frog.
Forti, Lucas R; Costa, William P; Martins, Lucas B; Nunes-de-Almeida, Carlos H L; Toledo, Luís Felipe
2016-01-01
Many amphibian species are negatively affected by habitat change due to anthropogenic activities. Populations distributed over modified landscapes may be subject to local extinction or may be relegated to the remaining-likely isolated and possibly degraded-patches of available habitat. Isolation without gene flow could lead to variability in phenotypic traits owing to differences in local selective pressures such as environmental structure, microclimate, or site-specific species assemblages. Here, we tested the microevolution hypothesis by evaluating the acoustic parameters of 349 advertisement calls from 15 males from six populations of the endangered amphibian species Proceratophrys moratoi. In addition, we analyzed the genetic distances among populations and the genetic diversity with a haplotype network analysis. We performed cluster analysis on acoustic data based on the Bray-Curtis index of similarity, using the UPGMA method. We correlated acoustic dissimilarities (calculated by Euclidean distance) with geographical and genetic distances among populations. Spectral traits of the advertisement call of P. moratoi presented lower coefficients of variation than did temporal traits, both within and among males. Cluster analyses placed individuals without congruence in population or geographical distance, but recovered the species topology in relation to sister species. The genetic distance among populations was low; it did not exceed 0.4% for the most distant populations, and was not correlated with acoustic distance. Both acoustic features and genetic sequences are highly conserved, suggesting that populations could be connected by recent migrations, and that they are subject to stabilizing selective forces. Although further studies are required, these findings add to a growing body of literature suggesting that this species would be a good candidate for a reintroduction program without negative effects on communication or genetic impact.
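The clustering step (Bray-Curtis dissimilarity with UPGMA, i.e. average linkage) can be reproduced in outline with SciPy. The call parameter values below are invented for illustration, not the study's measurements.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Rows = individual males, columns = acoustic parameters
calls = np.array([
    [0.90, 12.0, 4.1],
    [0.92, 12.5, 4.0],
    [0.40, 25.0, 7.9],
    [0.42, 24.0, 8.1],
])

d = pdist(calls, metric="braycurtis")   # pairwise dissimilarity
tree = linkage(d, method="average")     # UPGMA dendrogram
groups = fcluster(tree, t=2, criterion="maxclust")
print(groups)
```

Correlating such acoustic distances with geographical and genetic distances (e.g. with a Mantel test) is the kind of follow-up comparison the abstract describes.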
NASA Astrophysics Data System (ADS)
Christensen-Dalsgaard, Jakob
Most of the roughly 3,500 species of anuran amphibians (frogs and toads) that exist today are highly vocal animals. In most frogs, males will spend considerable energy on calling and incur sizeable predation risks, and the females' detection and localization of the calls of conspecific males is often a prerequisite for successful mating. Acoustic communication is therefore evidently evolutionarily important in the anurans, and their auditory system is probably shaped by the selective pressures associated with production, detection and localization of the communication calls.
1998-04-01
Paternal kin recognition in the high frequency / ultrasonic range in a solitary foraging mammal
2012-01-01
Background: Kin selection is a driving force in the evolution of mammalian social complexity. Recognition of paternal kin using vocalizations occurs in taxa with cohesive, complex social groups. This is the first investigation of paternal kin recognition via vocalizations in a small-brained, solitary foraging mammal, the grey mouse lemur (Microcebus murinus), a frequent model for ancestral primates. We analyzed the high frequency/ultrasonic male advertisement (courtship) call and alarm call. Results: Multi-parametric analyses of the calls’ acoustic parameters and discriminant function analyses showed that advertisement calls, but not alarm calls, contain patrilineal signatures. Playback experiments controlling for familiarity showed that females paid more attention to advertisement calls from unrelated males than from their fathers. Reactions to alarm calls from unrelated males and fathers did not differ. Conclusions: 1) Findings provide the first evidence of paternal kin recognition via vocalizations in a small-brained, solitarily foraging mammal. 2) High predation, small body size, and dispersed social systems may select for acoustic paternal kin recognition in the high frequency/ultrasonic ranges, thus limiting risks of inbreeding and eavesdropping by predators or conspecific competitors. 3) Paternal kin recognition via vocalizations in mammals is not dependent upon a large brain and high social complexity, but may already have been an integral part of the dispersed social networks from which more complex, kin-based sociality emerged. PMID:23198727
NASA Astrophysics Data System (ADS)
Dijkstra, Yoeri M.; Brouwer, Ronald L.; Schuttelaars, Henk M.; Schramkowski, George P.
2017-07-01
The iFlow modelling framework is a width-averaged model for the systematic analysis of the water motion and sediment transport processes in estuaries and tidal rivers. The distinctive solution method, a mathematical perturbation method, used in the model allows for identification of the effect of individual physical processes on the water motion and sediment transport and study of the sensitivity of these processes to model parameters. This distinction between processes provides a unique tool for interpreting and explaining hydrodynamic interactions and sediment trapping. iFlow also includes a large number of options to configure the model geometry and multiple choices of turbulence and salinity models. Additionally, the model contains auxiliary components, including one that facilitates easy and fast sensitivity studies. iFlow has a modular structure, which makes it easy to include, exclude or change individual model components, called modules. Depending on the required functionality for the application at hand, modules can be selected to construct anything from very simple quasi-linear models to rather complex models involving multiple non-linear interactions. This way, the model complexity can be adjusted to the application. Once the modules containing the required functionality are selected, the underlying model structure automatically ensures modules are called in the correct order. The model inserts iteration loops over groups of modules that are mutually dependent. iFlow also ensures a smooth coupling of modules using analytical and numerical solution methods. This way the model combines the speed and accuracy of analytical solutions with the versatility of numerical solution methods. In this paper we present the modular structure, solution method and two examples of the use of iFlow. 
In the examples we present two case studies, of the Yangtze and Scheldt rivers, demonstrating how iFlow facilitates the analysis of model results, the understanding of the underlying physics and the testing of parameter sensitivity. A comparison of the model results to measurements shows a good qualitative agreement. iFlow is written in Python and is available as open source code under the LGPL license.
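The automatic ordering of mutually dependent modules can be illustrated with a topological sort over declared input/output variables. The module names and registry format below are schematic inventions, not iFlow's actual API.

```python
from graphlib import TopologicalSorter

# Each (hypothetical) module declares the variables it needs and produces
modules = {
    "hydro_lead":  {"in": ["geometry"],       "out": ["u0", "zeta0"]},
    "hydro_first": {"in": ["u0", "zeta0"],    "out": ["u1"]},
    "salinity":    {"in": ["u0"],             "out": ["s0"]},
    "sediment":    {"in": ["u0", "u1", "s0"], "out": ["c0"]},
}

# Module A depends on module B if A needs a variable that B produces
producers = {v: name for name, m in modules.items() for v in m["out"]}
deps = {name: {producers[v] for v in m["in"] if v in producers}
        for name, m in modules.items()}

order = list(TopologicalSorter(deps).static_order())
print(order)
```

A real framework additionally has to detect mutually dependent groups (cycles) and wrap them in iteration loops, as the abstract describes; `TopologicalSorter` would instead raise a `CycleError` there, which is one way such groups can be identified.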
Guss, David A; Gray, Siobhan; Castillo, Edward M
2014-04-01
Patient satisfaction is a common parameter tracked by health care systems; it likely influences patients' choice of provider and may affect insurer payment. Achieving high satisfaction in an academic emergency department (ED) can be a daunting task due to variable volumes, acuity, and overcrowding. The objective of this study was to assess the impact on patient satisfaction of a telephone call by a staff member after the patient's discharge from the ED. This was a prospective cohort study conducted in the two University of California San Diego Health System EDs. Press Ganey patient satisfaction surveys are mailed to a random sample of 50% of all discharged patients. In August 2010, a program of MD and RN telephone call-backs 1 to 5 days after the ED visit was initiated. In conjunction with this program, a custom question was added to the standard survey: "Called back after discharge, Yes/No?" All surveys returned between September 22, 2010 and December 7, 2010 were reviewed, and those that chose to self-identify were selected to allow for ED chart review. The key outcome variable, "likelihood to recommend score," was dichotomized into the highest category, 5 (very good), versus the remaining levels, 1-4 (very poor, poor, fair, good). ED records were abstracted for data on waiting time (WT), length of stay (LOS), and triage class (TC). These variables were selected because they have been shown to impact patient satisfaction in prior studies. Likelihood-to-recommend ratings for those reporting "Yes" to call back were compared to those reporting "No." Summary statistics were generated for patient characteristics in the "Yes" and "No" groups. Ninety-five percent confidence intervals (CIs) for all counts and proportions were calculated with the "exact" method. A logistic regression model was constructed to assess the odds ratio (OR) for a likelihood-to-recommend score of 5 while controlling for the variables of WT, LOS, and TC.
In the study period, about 5000 surveys were mailed, 507 were returned, and 368 self-identified. Of those that self-identified, 136 patients answered "Yes" to the callback question and 232 answered "No." The mean age for those indicating "Yes" was 55.8 years (CI 52.9-58.7), and for those indicating "No," 50.7 years (CI 47.9-53.5). Gender and triage code were similar between the two groups. Among those answering "Yes," 89.0% (CI 82.5-93.7) provided a "5" rating for "likelihood to recommend," compared to 55.6% (CI 49.0-62.1) who replied "No" for call back. The logistic regression model generated an OR of 6.35 (CI 3.4-11.7) for providing a level 5 rating for "likelihood to recommend" for patients reporting "Yes" for call back after controlling for WT, LOS, and TC. In the study institution, patients that are called back are much more likely to have a favorable impression of the visit as assessed by likelihood to recommend regardless of WT, LOS, or TC. These data support "call back" as an effective strategy to improve ED patient satisfaction. Copyright © 2014 Elsevier Inc. All rights reserved.
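As a sanity check on the reported adjusted odds ratio of 6.35 (which controls for WT, LOS, and TC), the unadjusted odds ratio implied by the two reported proportions (89.0% vs. 55.6%) can be computed directly:

```python
# Unadjusted odds ratio from the reported "likelihood to recommend = 5"
# proportions in the call-back ("Yes") and no-call-back ("No") groups
p_yes, p_no = 0.890, 0.556
odds_yes = p_yes / (1 - p_yes)
odds_no = p_no / (1 - p_no)
print(round(odds_yes / odds_no, 2))
```

The unadjusted value comes out close to the adjusted OR, consistent with the paper's finding that the call-back effect holds regardless of WT, LOS, or TC.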
Landscape correlates along mourning dove call-count routes in Mississippi
Elmore, R.D.; Vilella, F.J.; Gerard, P.D.
2007-01-01
Mourning dove (Zenaida macroura) call-count surveys in Mississippi, USA, suggest declining populations. We used available mourning dove call-count data to evaluate long-term mourning dove habitat relationships. Dove routes were located in the Mississippi Alluvial Valley, Deep Loess Province, Mid Coastal Plain, and Hilly Coastal Plain physiographic provinces of Mississippi. We also included routes in the Blackbelt Prairie region of Mississippi and Alabama, USA. We characterized landscape structure and composition within 1.64-km buffers around 10 selected mourning dove call-count routes during 3 time periods. Habitat classes included agriculture, forest, urban, regeneration stands, wetland, and woodlot. We used Akaike's Information Criterion to select the best candidate model. We selected a model containing percent agriculture and edge density that accounted for approximately 40% of the total variability in the data set. Percent agriculture was positively correlated with relative dove abundance. Interestingly, we found a negative relationship between edge density and dove abundance. Researchers should conduct future research on dove nesting patterns in Mississippi and threshold levels of edge necessary to maximize dove density. During the last 20 years, Mississippi lost more than 800,000 ha of cropland while forest cover, represented largely by pine (Pinus taeda) plantations, increased by more than 364,000 ha. Our results suggest observed localized declines in mourning dove abundance in Mississippi may be related to the documented conversion of agricultural lands to pine plantations.
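The model-selection step above uses Akaike's Information Criterion; a minimal sketch is below. The candidate models and log-likelihood values are hypothetical illustrations, not the study's fitted models.

```python
def aic(log_likelihood, k):
    """Akaike's Information Criterion: 2k - 2 ln(L); lower is better."""
    return 2 * k - 2 * log_likelihood

# Hypothetical candidate habitat models: (log-likelihood, n. parameters)
candidates = {
    "agriculture + edge_density": (-120.5, 3),
    "agriculture only":           (-126.0, 2),
    "full (6 classes)":           (-118.9, 7),
}
scores = {name: aic(ll, k) for name, (ll, k) in candidates.items()}
best = min(scores, key=scores.get)
print(best, round(scores[best], 1))
```

Note how the "full" model's higher likelihood is outweighed by its parameter penalty, which is the tradeoff AIC is designed to encode.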
Words, Words Everywhere, but Which Ones Do We Teach?
ERIC Educational Resources Information Center
Graves, Michael F.; Baumann, James F.; Blachowicz, Camille L. Z.; Manyak, Patrick; Bates, Ann; Cieply, Char; Davis, Jeni R.; Von Gunten, Heather
2014-01-01
This article highlights the challenging task teachers face in selecting vocabulary to teach. First, we briefly discuss three features of the English lexicon that are crucial to keep in mind when selecting vocabulary for instruction and three approaches that have been suggested. Then, we present a theoretically based approach called Selecting Words…
ParticleCall: A particle filter for base calling in next-generation sequencing systems
2012-01-01
Background: Next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. Accuracy and lengths of their reads, however, are yet to surpass those provided by the conventional Sanger sequencing method. This motivates the search for computationally efficient algorithms capable of reliable and accurate detection of the order of nucleotides in short DNA fragments from the acquired data. Results: In this paper, we consider Illumina’s sequencing-by-synthesis platform which relies on reversible terminator chemistry and describe the acquired signal by reformulating its mathematical model as a Hidden Markov Model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on a data set obtained by sequencing phiX174 bacteriophage using Illumina’s Genome Analyzer II. The results show that the developed base calling scheme is significantly more computationally efficient than the best performing unsupervised method currently available, while achieving the same accuracy. Conclusions: The proposed ParticleCall provides more accurate calls than the Illumina’s base calling algorithm, Bustard. At the same time, ParticleCall is significantly more computationally efficient than other recent schemes with similar performance, rendering it more feasible for high-throughput sequencing data analysis. Improvement of base calling accuracy will have immediate beneficial effects on the performance of downstream applications such as SNP and genotype calling. ParticleCall is freely available at https://sourceforge.net/projects/particlecall. PMID:22776067
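To illustrate the sequential Monte Carlo machinery that ParticleCall builds on, the sketch below runs a bootstrap particle filter on a toy one-dimensional state-space model. It is not ParticleCall's HMM of the sequencing signal, and all model constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 50, 500                      # time steps, particles

# Toy model: x_t = 0.9 x_{t-1} + noise, observed as y_t = x_t + noise
true_x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    true_x[t] = 0.9 * true_x[t - 1] + rng.normal(0, 1.0)
    y[t] = true_x[t] + rng.normal(0, 0.5)

particles = rng.normal(0, 1, size=N)
estimates = np.zeros(T)
for t in range(1, T):
    particles = 0.9 * particles + rng.normal(0, 1.0, size=N)  # propagate
    w = np.exp(-0.5 * ((y[t] - particles) / 0.5) ** 2)        # weight
    w /= w.sum()
    estimates[t] = np.sum(w * particles)                      # estimate
    idx = rng.choice(N, size=N, p=w)                          # resample
    particles = particles[idx]

rmse = np.sqrt(np.mean((estimates - true_x) ** 2))
print(round(rmse, 3))
```

In a base-calling setting, the hidden state would be the (discrete) nucleotide sequence context and the observations the fluorescence intensities, but the propagate-weight-resample loop is the same.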
McIntosh, Heather M; Calvert, Julie; Macpherson, Karen J; Thompson, Lorna
2016-06-01
Rapid review has become widely adopted by health technology assessment agencies in response to demand for evidence-based information to support imperative decisions. Concern about the credibility of rapid reviews and the reliability of their findings has prompted a call for wider publication of their methods. In publishing this overview of the accredited rapid review process developed by Healthcare Improvement Scotland, we aim to raise awareness of our methods and advance the discourse on best practice. Healthcare Improvement Scotland produces rapid reviews called evidence notes using a process that has achieved external accreditation through the National Institute for Health and Care Excellence. Key components include a structured approach to topic selection, initial scoping, considered stakeholder involvement, streamlined systematic review, internal quality assurance, external peer review and updating. The process was introduced in 2010 and continues to be refined over time in response to user feedback and operational experience. Decision-makers value the responsiveness of the process and perceive it as being a credible source of unbiased evidence-based information supporting advice for NHSScotland. Many agencies undertaking rapid reviews are striving to balance efficiency with methodological rigour. We agree that there is a need for methodological guidance and that it should be informed by better understanding of current approaches and the consequences of different approaches to streamlining systematic review methods. Greater transparency in the reporting of rapid review methods is essential to enable that to happen.
Image Analyzed by Mars Rover for Selection of Target
2010-03-23
NASA's Opportunity rover used newly developed and uploaded software called AEGIS to analyze images and identify features that best matched criteria for selecting an observation target; the criteria for this image were rocks larger and darker than the others.
DNA Cryptography and Deep Learning using Genetic Algorithm with NW algorithm for Key Generation.
Kalsi, Shruti; Kaur, Harleen; Chang, Victor
2017-12-05
Cryptography is the science not only of applying complex mathematics and logic to design strong methods of hiding data, called encryption, but also of retrieving the original data, called decryption. The purpose of cryptography is to transmit a message between a sender and a receiver such that an eavesdropper is unable to comprehend it. To accomplish this, we need not only a strong algorithm but also a strong key and a strong concept for the encryption and decryption process. We introduce the concept of DNA Deep Learning Cryptography, defined as a technique of concealing data in terms of DNA sequences and deep learning. In this cryptographic technique, each letter of the alphabet is converted into a different combination of the four bases, namely adenine (A), cytosine (C), guanine (G) and thymine (T), which make up human deoxyribonucleic acid (DNA). Actual implementations with DNA do not go beyond the laboratory level and are expensive. To bring DNA computing to a digital level, easy and effective algorithms are proposed in this paper. In the proposed work we introduce, first, a method and its implementation for key generation based on the theory of natural selection, using a genetic algorithm with the Needleman-Wunsch (NW) algorithm, and second, a method for implementing encryption and decryption based on DNA computing, using the biological operations transcription, translation, DNA sequencing and deep learning.
A Scoring Rubric for Students' Responses to Simple Evolution Questions: Darwinian Components
ERIC Educational Resources Information Center
Jensen, Murray; Moore, Randy; Hatch, Jay; Hsu, Leon
2007-01-01
The call to teach students Darwin's theory of evolution by natural selection has been made by a variety of professional organizations. In addition to these national organizations, almost every state has science education guidelines calling for the teaching of evolution. Many administrators and policymakers believe that evolution is being taught,…
"Heads Bowed, Eyes Closed": Analyzing the Discourse of Online Evangelical Altar Calls
ERIC Educational Resources Information Center
Bryan, Clint D.
2016-01-01
This discourse analysis study examines the final moments of selected online sermons delivered by America's leading evangelical pastors and speakers, paying particular attention to the language employed in the presentation of Christian gospel tenets, the public invitation for salvation, the altar call that identifies new followers, and the…
Developmental plasticity of mating calls enables acoustic communication in diverse environments
Beckers, Oliver M; Schul, Johannes
2008-01-01
Male calls of the katydid Neoconocephalus triops exhibit substantial developmental plasticity in two parameters: (i) calls of winter males are continuous and lack the verse structure of summer calls and (ii) at equal temperatures, summer males produce calls with a substantially higher pulse rate than winter males. We raised female N. triops under conditions that reliably induced either summer or winter phenotype and tested their preferences for the call parameters that differ between summer and winter males. Neither generation was selective for the presence of verses, but females had strong preferences for pulse rates: only a narrow range of pulse rates was attractive. The attractive ranges did not differ between summer and winter females. Both male pulse rate and female preference for pulse rate changed with ambient temperature, but female preference changed more than the male calls. As a result, the summer call was attractive only at 25°C, whereas the slower winter call was attractive only at 20°C. Thus, developmental plasticity of male calls compensates for differences in temperature dependency between calls and preferences and enables the communication system to function in heterogeneous environments. The potential role of call plasticity during the invasion of new habitats is discussed. PMID:18302998
Goodman, Angela; Sanguinito, Sean; Levine, Jonathan S.
2016-09-28
Carbon storage resource estimation in subsurface saline formations plays an important role in establishing the scale of carbon capture and storage activities for governmental policy and commercial project decision-making. Prospective CO2 resource estimation of large regions or subregions, such as a basin, occurs at the initial screening stages of a project using only limited publicly available geophysical data, i.e. prior to project-specific site selection data generation. As the scale of investigation is narrowed and selected areas and formations are identified, prospective CO2 resource estimation can be refined and uncertainty narrowed when site-specific geophysical data are available. Here, we refine the United States Department of Energy – National Energy Technology Laboratory (US-DOE-NETL) methodology as the scale of investigation is narrowed from very large regional assessments down to selected areas and formations that may be developed for commercial storage. In addition, we present a new notation that explicitly identifies differences between data availability and data sources used for geologic parameters and efficiency factors as the scale of investigation is narrowed. This CO2 resource estimation method is available for screening formations in a tool called CO2-SCREEN.
CombiROC: an interactive web tool for selecting accurate marker combinations of omics data.
Mazzara, Saveria; Rossi, Riccardo L; Grifantini, Renata; Donizetti, Simone; Abrignani, Sergio; Bombaci, Mauro
2017-03-30
Diagnostic accuracy can be improved considerably by combining multiple markers, whose performance in identifying diseased subjects is usually assessed via receiver operating characteristic (ROC) curves. The selection of multimarker signatures is a complicated process that requires integration of data signatures with sophisticated statistical methods. We developed a user-friendly tool, called CombiROC, to help researchers accurately determine optimal marker combinations from diverse omics methods. With CombiROC, data from different domains, such as proteomics and transcriptomics, can be analyzed using sensitivity/specificity filters: the number of candidate marker panels arising from combinatorial analysis is easily optimized, bypassing limitations imposed by the nature of different experimental approaches. Leaving the user full control over the initial selection stringency, CombiROC computes sensitivity and specificity for all marker combinations, the performance of the best combinations, and ROC curves for automatic comparisons, all visualized in a graphical interface. CombiROC was designed without hard-coded thresholds, allowing a custom fit to each specific dataset: this dramatically reduces the computational burden and lowers the false negative rates given by fixed thresholds. The application was validated with published data, confirming the marker combinations originally described or even finding new ones. CombiROC is a novel tool for the scientific community, freely available at http://CombiROC.eu.
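The combinatorial core of a CombiROC-style analysis, computing sensitivity and specificity for every marker combination, can be sketched as follows. The toy data, thresholds and the "at least `min_hits` markers above threshold" positivity rule are illustrative assumptions, not CombiROC's exact scoring rules.

```python
from itertools import combinations

# Toy cohort: (marker values, is_diseased). Values and thresholds are invented.
samples = [
    ({'m1': 5.0, 'm2': 1.0, 'm3': 4.0}, True),
    ({'m1': 4.5, 'm2': 3.5, 'm3': 0.5}, True),
    ({'m1': 0.5, 'm2': 0.8, 'm3': 0.4}, False),
    ({'m1': 1.0, 'm2': 3.0, 'm3': 0.2}, False),
]
thresholds = {'m1': 2.0, 'm2': 2.0, 'm3': 2.0}

def evaluate(combo, min_hits=1):
    """Sensitivity and specificity of a marker combination on the toy cohort."""
    tp = fn = tn = fp = 0
    for values, diseased in samples:
        # Positivity rule (an assumption): enough markers exceed threshold.
        positive = sum(values[m] > thresholds[m] for m in combo) >= min_hits
        if diseased:
            tp += positive
            fn += not positive
        else:
            fp += positive
            tn += not positive
    return tp / (tp + fn), tn / (tn + fp)

# Exhaustive scan over all non-empty combinations of the three markers.
results = {combo: evaluate(combo)
           for r in (1, 2, 3)
           for combo in combinations(sorted(thresholds), r)}
```

On this toy data the single marker m1 separates the groups perfectly, while m2 alone gives 50% sensitivity and specificity; a real analysis would then filter combinations by such performance thresholds.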
Lifshits, A M
1979-01-01
A general characterization of multivariate statistical analysis (MSA) is given. Methodological premises and criteria are presented for selecting an MSA method adequate for pathoanatomical investigations of the epidemiology of multicausal diseases. The experience of using MSA with computers and standard computing programs in studies of coronary artery atherosclerosis, on material from 2060 autopsies, is described. The combined use of four MSA methods (sequential, correlational, regressional, and discriminant) made it possible to quantify the contribution of each of the eight examined risk factors to the development of atherosclerosis. The most important factors were found to be age, arterial hypertension, and heredity. Occupational hypodynamia and increased fatness were more important in men, diabetes mellitus in women. Registration of this combination of risk factors by MSA methods provides a more reliable prognosis of the likelihood of coronary heart disease with a fatal outcome than prognosis of the degree of coronary atherosclerosis.
Efficient Computing Budget Allocation for Finding Simplest Good Designs
Jia, Qing-Shan; Zhou, Enlu; Chen, Chun-Hung
2012-01-01
In many applications some designs are easier to implement, require less training data and shorter training time, and consume less storage than others. Such designs are called simple designs and are usually preferred over complex ones when both perform well. Despite the abundant existing studies on how to find good designs in simulation-based optimization (SBO), few studies address finding the simplest good designs. We consider this important problem in this paper and make the following contributions. First, we provide lower bounds for the probabilities of correctly selecting the m simplest designs with top performance, and of selecting the best m such simplest good designs, respectively. Second, we develop two efficient computing budget allocation methods, one to find the m simplest good designs and one to find the best m such designs, and show their asymptotic optimality. Third, we compare the performance of the two methods with equal allocation over six academic examples and a smoke detection problem in wireless sensor networks. We hope that this work brings insight into finding the simplest good designs in general. PMID:23687404
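As a point of reference for this style of analysis, the classic OCBA rule allocates simulation budget to non-best designs in proportion to their noise-to-gap ratios squared. The sketch below implements that standard rule under the assumption of a unique best (largest) mean; the paper derives its own allocations for selecting simplest good designs, which are not reproduced here, and the means and standard deviations are invented numbers.

```python
import math

def ocba_fractions(means, stds):
    """Asymptotic OCBA budget fractions for selecting the largest-mean design.

    Assumes a unique best mean (no ties), so every gap delta is nonzero.
    """
    best = max(range(len(means)), key=lambda i: means[i])
    ratios = [0.0] * len(means)
    for i, (m, s) in enumerate(zip(means, stds)):
        if i != best:
            delta = means[best] - m           # performance gap to the best
            ratios[i] = (s / delta) ** 2      # noisier/closer designs get more
    # Budget for the best design balances against all competitors.
    ratios[best] = stds[best] * math.sqrt(
        sum((r / stds[i]) ** 2 for i, r in enumerate(ratios) if i != best))
    total = sum(ratios)
    return [r / total for r in ratios]

fracs = ocba_fractions([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])
```

With equal noise, the design closest to the best (mean 2.0) receives far more budget than the clearly inferior one (mean 1.0), which is the intuition behind all such allocation schemes.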
Rapid Hypothesis Testing with Candida albicans through Gene Disruption with Short Homology Regions
Wilson, R. Bryce; Davis, Dana; Mitchell, Aaron P.
1999-01-01
Disruption of newly identified genes in the pathogen Candida albicans is a vital step in determination of gene function. Several gene disruption methods described previously employ long regions of homology flanking a selectable marker. Here, we describe disruption of C. albicans genes with PCR products that have 50 to 60 bp of homology to a genomic sequence on each end of a selectable marker. We used the method to disrupt two known genes, ARG5 and ADE2, and two sequences newly identified through the Candida genome project, HRM101 and ENX3. HRM101 and ENX3 are homologous to genes in the conserved RIM101 (previously called RIM1) and PacC pathways of Saccharomyces cerevisiae and Aspergillus nidulans. We show that three independent hrm101/hrm101 mutants and two independent enx3/enx3 mutants are defective in filamentation on Spider medium. These observations argue that HRM101 and ENX3 sequences are indeed portions of genes and that the respective gene products have related functions. PMID:10074081
NASA Astrophysics Data System (ADS)
Yudhanto, F.; Jamasri; Rochardjo, Heru S. B.
2018-05-01
The agave cantala fiber characterized in this research, which came from Sumenep, Madura, Indonesia, was chemically processed using sodium hydroxide (NaOH) and hydrogen peroxide (H2O2) solutions; treatment with both solutions is called the bleaching process. Single-fiber tensile tests were used to determine the mechanical properties resulting from various process parameters (temperature, pH, and H2O2 concentration) arranged in an L9 orthogonal array by the Taguchi method. The results indicate that pH is the most significant parameter influencing tensile strength, followed by temperature and H2O2 concentration. The bleaching treatment increased the crystallinity index of the fiber by 21%, reflecting the loss of hemicellulose and lignin layers, as seen from changes in the FTIR bands at 1735 (C=O), 1627 (OH), 1319 (CH2) and 1250 (C-O) cm⁻¹. SEM photographs showed that bleaching leaves the fibers rougher and cleaner than untreated fibers.
A Novel Method to Identify Differential Pathways in Hippocampus Alzheimer's Disease.
Liu, Chun-Han; Liu, Lian
2017-05-08
BACKGROUND Alzheimer's disease (AD) is the most common type of dementia. The objective of this paper is to propose a novel method to identify differential pathways in hippocampus AD. MATERIAL AND METHODS We propose a combined method that merges existing methods. First, pathways were identified by four known methods (DAVID, the neaGUI package, the pathway-based co-expression method, and the pathway network approach), and differential pathways were evaluated by setting weight thresholds. Subsequently, we combined all pathways with a rank-based algorithm; we call this the combined method. Finally, common differential pathways across two or more of the five methods were selected. RESULTS Pathways obtained from the different methods differed. The combined method obtained 1639 pathways and 596 differential pathways, which included all pathways gained from the four existing methods; hence, the novel method solves the problem of inconsistent results. In addition, a total of 13 common pathways were identified, such as metabolism, immune system, and cell cycle. CONCLUSIONS We have proposed a novel method that combines four existing methods based on a rank product algorithm and identified 13 significant differential pathways with it. These differential pathways might provide insight into the treatment and diagnosis of hippocampus AD.
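The rank-based combination step can be illustrated with a simple rank-product sketch: each pathway's ranks across methods are multiplied and the geometric mean taken, so pathways that rank consistently high in every method float to the top. The method names and scores below are invented, and unscored pathways are assigned the worst rank, which is one simplifying assumption among several possible.

```python
def rank_product(score_lists):
    """Combine per-method pathway scores into one ranking.

    score_lists: {method_name: {pathway: score}} with higher score = stronger.
    Returns pathways sorted by ascending rank product (best first).
    """
    pathways = set().union(*[set(scores) for scores in score_lists.values()])
    products = {}
    for p in pathways:
        prod = 1.0
        for scores in score_lists.values():
            ordered = sorted(scores, key=scores.get, reverse=True)
            # Pathways missing from a method get the worst possible rank.
            rank = ordered.index(p) + 1 if p in scores else len(ordered) + 1
            prod *= rank
        # Geometric mean keeps the statistic comparable across method counts.
        products[p] = prod ** (1.0 / len(score_lists))
    return sorted(products, key=products.get)

ranked = rank_product({
    'methodA': {'cell_cycle': 0.9, 'immune': 0.7, 'metabolism': 0.2},
    'methodB': {'cell_cycle': 0.8, 'metabolism': 0.6, 'immune': 0.5},
})
```

Here cell_cycle is ranked first by both methods, so its rank product is 1 and it tops the combined list regardless of the other pathways' tie.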
Receiver discriminability drives the evolution of complex sexual signals by sexual selection.
Cui, Jianguo; Song, Xiaowei; Zhu, Bicheng; Fang, Guangzhan; Tang, Yezhong; Ryan, Michael J
2016-04-01
A hallmark of sexual selection by mate choice is the evolution of exaggerated traits, such as longer tails in birds and more acoustic components in the calls of birds and frogs. Trait elaboration can be opposed by costs such as increased metabolism and greater predation risk, but cognitive processes of the receiver can also put a brake on trait elaboration. For example, according to Weber's Law, traits of a fixed absolute difference will be more difficult to discriminate as the absolute magnitude increases. Here, we show that in the Emei music frog (Babina daunchina), increases in the fundamental frequency between successive notes in the male advertisement call, which increase the spectral complexity of the call, facilitate the female's ability to compare the number of notes between calls. These results suggest that females' discriminability provides the impetus to switch from enhancement of signaling magnitude (i.e., adding more notes to calls) to employing a new signal feature (i.e., increasing frequency among notes) to increase complexity. We suggest that increasing the spectral complexity of notes ameliorates some of the effects of Weber's Law, and highlight how perceptual and cognitive biases of choosers can have important influences on the evolution of courtship signals. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
Love, Elliot K; Bee, Mark A
2010-09-01
One strategy for coping with the constraints on acoustic signal reception posed by ambient noise is to signal louder as noise levels increase. Termed the 'Lombard effect', this reflexive behaviour is widespread among birds and mammals and occurs with a diversity of signal types, leading to the hypothesis that voice amplitude regulation represents a general vertebrate mechanism for coping with environmental noise. Support for this evolutionary hypothesis, however, remains limited due to a lack of studies in taxa other than birds and mammals. Here, we report the results of an experimental test of the hypothesis that male grey treefrogs increase the amplitude of their advertisement calls in response to increasing levels of chorus-shaped noise. We recorded spontaneously produced calls in quiet and in the presence of noise broadcast at sound pressure levels ranging between 40 dB and 70 dB. While increasing noise levels induced predictable changes in call duration and rate, males did not regulate call amplitude. These results do not support the hypothesis that voice amplitude regulation is a generic vertebrate mechanism for coping with noise. We discuss the possibility that intense sexual selection and high levels of competition for mates in choruses place some frogs under strong selection to call consistently as loudly as possible.
Progressive sample processing of band selection for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Liu, Keng-Hao; Chien, Hung-Chang; Chen, Shih-Yu
2017-10-01
Band selection (BS) is one of the most important topics in hyperspectral image (HSI) processing. The objective of BS is to find a set of representative bands that can represent the whole image with low inter-band redundancy. Many types of BS algorithms have been proposed in the past. However, most of them can only be carried out in an off-line manner, meaning they can only be applied to pre-collected data. Such off-line methods are sometimes unusable for time-critical applications, particularly disaster prevention and target detection. To tackle this issue, a new concept, called progressive sample processing (PSP), was proposed recently. PSP is an "on-line" framework in which a suitable algorithm can process the currently collected data during data transmission under the band-interleaved-by-sample/pixel (BIS/BIP) protocol. This paper proposes an online BS method that integrates sparse-based BS into the PSP framework, called PSP-BS. In PSP-BS, BS is carried out by updating the BS result recursively, pixel by pixel, in the same way that a Kalman filter updates data information in a recursive fashion. The sparse regression is solved by the orthogonal matching pursuit (OMP) algorithm, and the recursive equations of PSP-BS are derived using matrix decomposition. Experiments conducted on a real hyperspectral image show that PSP-BS can progressively output the BS status with very low computing time. Convergence of the BS results during transmission can be achieved quickly by using a rearranged pixel transmission sequence. This significant advantage allows BS to be implemented in real time as the HSI data are transmitted pixel by pixel.
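The progressive, pixel-by-pixel flavor of PSP can be illustrated with a much simpler stand-in criterion: per-band statistics are updated recursively as each pixel arrives (Welford's algorithm), so a band ranking is available at any point in the stream. This sketch deliberately replaces the paper's sparse-regression/OMP objective with a running-variance criterion for brevity; the class name, data and criterion are assumptions.

```python
class ProgressiveBandSelector:
    """Streaming per-band mean/variance via Welford's algorithm (a sketch)."""

    def __init__(self, n_bands):
        self.n = 0
        self.mean = [0.0] * n_bands
        self.m2 = [0.0] * n_bands   # running sum of squared deviations

    def update(self, pixel):
        # `pixel` is one spectral vector arriving under a BIS/BIP-style stream.
        self.n += 1
        for b, x in enumerate(pixel):
            d = x - self.mean[b]
            self.mean[b] += d / self.n
            self.m2[b] += d * (x - self.mean[b])

    def select(self, k):
        # Stand-in criterion: keep the k highest-variance bands seen so far.
        var = [m / self.n for m in self.m2]
        return sorted(range(len(var)), key=lambda b: var[b], reverse=True)[:k]

sel = ProgressiveBandSelector(3)
for px in [(1.0, 5.0, 2.0), (2.0, 1.0, 2.0), (3.0, 9.0, 2.0)]:
    sel.update(px)   # a band ranking can be queried after any pixel
```

Band 2 is constant (zero variance) and is dropped first, while band 1 varies most and is always selected; the point is that nothing is recomputed from scratch per pixel.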
Exome sequencing of a multigenerational human pedigree.
Hedges, Dale J; Hedges, Dale; Burges, Dan; Powell, Eric; Almonte, Cherylyn; Huang, Jia; Young, Stuart; Boese, Benjamin; Schmidt, Mike; Pericak-Vance, Margaret A; Martin, Eden; Zhang, Xinmin; Harkins, Timothy T; Züchner, Stephan
2009-12-14
Over the next few years, the efficient use of next-generation sequencing (NGS) in human genetics research will depend heavily upon the effective mechanisms for the selective enrichment of genomic regions of interest. Recently, comprehensive exome capture arrays have become available for targeting approximately 33 Mb or approximately 180,000 coding exons across the human genome. Selective genomic enrichment of the human exome offers an attractive option for new experimental designs aiming to quickly identify potential disease-associated genetic variants, especially in family-based studies. We have evaluated a 2.1 M feature human exome capture array on eight individuals from a three-generation family pedigree. We were able to cover up to 98% of the targeted bases at a long-read sequence read depth of ≥3, 86% at a read depth of ≥10, and over 50% of all targets were covered with ≥20 reads. We identified up to 14,284 SNPs and small indels per individual exome, with up to 1,679 of these representing putative novel polymorphisms. Applying the conservative genotype calling approach HCDiff, the average rate of detection of a variant allele based on Illumina 1 M BeadChips genotypes was 95.2% at ≥10x sequence. Further, we propose an advantageous genotype calling strategy for low covered targets that empirically determines cut-off thresholds at a given coverage depth based on existing genotype data. Application of this method was able to detect >99% of SNPs covered ≥8x. Our results offer guidance for "real-world" applications in human genetics and provide further evidence that microarray-based exome capture is an efficient and reliable method to enrich for chromosomal regions of interest in next-generation sequencing experiments.
Scherbaum, Stefan; Frisch, Simon; Dshemuchadse, Maja
2016-01-01
Selective attention and its adaptation by cognitive control processes are considered a core aspect of goal-directed action. Often, selective attention is studied behaviorally with conflict tasks, but an emerging neuroscientific method for the study of selective attention is EEG frequency tagging. It applies different flicker frequencies to the stimuli of interest eliciting steady state visual evoked potentials (SSVEPs) in the EEG. These oscillating SSVEPs in the EEG allow tracing the allocation of selective attention to each tagged stimulus continuously over time. The present behavioral investigation points to an important caveat of using tagging frequencies: The flicker of stimuli not only produces a useful neuroscientific marker of selective attention, but interacts with the adaptation of selective attention itself. Our results indicate that RT patterns of adaptation after response conflict (so-called conflict adaptation) are reversed when flicker frequencies switch at once. However, this effect of frequency switches is specific to the adaptation by conflict-driven control processes, since we find no effects of frequency switches on inhibitory control processes after no-go trials. We discuss the theoretical implications of this finding and propose precautions that should be taken into account when studying conflict adaptation using frequency tagging in order to control for the described confounds. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lu, Siqi; Wang, Xiaorong; Wu, Junyong
2018-01-01
The paper presents a method, based on a data-driven K-means clustering algorithm, to generate planning scenarios for the location and size planning of distributed photovoltaic (PV) units in the network. Taking the power losses of the network, the installation and maintenance costs of distributed PV, the profit of distributed PV, and the voltage offset as objectives, and the locations and sizes of distributed PV as decision variables, a Pareto optimal front is obtained through a self-adaptive genetic algorithm (GA), and solutions are ranked by the technique for order preference by similarity to an ideal solution (TOPSIS). Finally, the planning schemes at the top of the ranking list are selected, after detailed analysis, according to different planning emphases. The proposed method is applied to a 10-kV distribution network in Gansu Province, China, and the results are discussed.
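The TOPSIS ranking step can be sketched in a few lines: normalize the decision matrix, weight it, and score each scheme by its relative closeness to the ideal solution. The criteria, weights and benefit/cost flags below are illustrative assumptions, not the paper's actual planning objectives.

```python
import math

def topsis(matrix, weights, benefit):
    """Score each row (candidate scheme) by closeness to the ideal solution.

    matrix: rows = schemes, columns = criteria values.
    benefit[j]: True if criterion j is better when larger, False if smaller.
    """
    n_crit = len(weights)
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[row[j] / norms[j] * weights[j] for j in range(n_crit)] for row in matrix]
    # Ideal and anti-ideal points depend on whether a criterion is benefit or cost.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))  # relative closeness in [0, 1]
    return scores

# Toy criteria: [profit (benefit), power losses (cost)] for three schemes.
scores = topsis([[100.0, 5.0], [80.0, 2.0], [60.0, 8.0]],
                weights=[0.5, 0.5], benefit=[True, False])
```

The second scheme wins here: it trades a little profit for much lower losses, while the third scheme is worst on both criteria and scores zero.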
Hierarchical Feature Extraction With Local Neural Response for Image Recognition.
Li, Hong; Wei, Yantao; Li, Luoqing; Chen, C L P
2013-04-01
In this paper, a hierarchical feature extraction method is proposed for image recognition. The key idea of the proposed method is to extract an effective feature, called local neural response (LNR), of the input image with nontrivial discrimination and invariance properties by alternating between local coding and maximum pooling operation. The local coding, which is carried out on the locally linear manifold, can extract the salient feature of image patches and leads to a sparse measure matrix on which maximum pooling is carried out. The maximum pooling operation builds the translation invariance into the model. We also show that other invariant properties, such as rotation and scaling, can be induced by the proposed model. In addition, a template selection algorithm is presented to reduce computational complexity and to improve the discrimination ability of the LNR. Experimental results show that our method is robust to local distortion and clutter compared with state-of-the-art algorithms.
Evaluation of 3-D graphics software: A case study
NASA Technical Reports Server (NTRS)
Lores, M. E.; Chasen, S. H.; Garner, J. M.
1984-01-01
An efficient 3-D geometry graphics software package which is suitable for advanced design studies was developed. The advanced design system is called GRADE--Graphics for Advanced Design. Efficiency and ease of use are gained by sacrificing flexibility in surface representation. The immediate options were either to continue development of GRADE or to acquire a commercially available system which would replace or complement GRADE. Test cases which would reveal the ability of each system to satisfy the requirements were developed. A scoring method which adequately captured the relative capabilities of the three systems was presented. While more complex multi-attribute decision methods could be used, the selected method provides all the needed information without being so complex that it is difficult to understand. If the value factors are modestly perturbed, system Z is a clear winner based on its overall capabilities. System Z is superior in two vital areas: surfacing and ease of interface with application programs.
Clonal Selection Based Artificial Immune System for Generalized Pattern Recognition
NASA Technical Reports Server (NTRS)
Huntsberger, Terry
2011-01-01
The last two decades have seen a rapid increase in the application of AIS (Artificial Immune Systems), modeled after the human immune system, to a wide range of areas including network intrusion detection, job shop scheduling, classification, pattern recognition, and robot control. JPL (Jet Propulsion Laboratory) has developed an integrated pattern recognition/classification system called AISLE (Artificial Immune System for Learning and Exploration) based on biologically inspired models of B-cell dynamics in the immune system. When used for unsupervised or supervised classification, the method scales linearly with the number of dimensions, has performance that is relatively independent of the total size of the dataset, and has been shown to perform as well as traditional clustering methods. When used for pattern recognition, the method efficiently isolates the appropriate matches in the data set. The paper presents the underlying structure of AISLE and the results from a number of experimental studies.
Global quasi-linearization (GQL) versus QSSA for a hydrogen-air auto-ignition problem.
Yu, Chunkan; Bykov, Viatcheslav; Maas, Ulrich
2018-04-25
A recently developed automatic reduction method for chemical kinetics systems, the so-called Global Quasi-Linearization (GQL) method, has been implemented to study and reduce the dimension of a homogeneous combustion system. The results of applying the GQL and the Quasi-Steady State Assumption (QSSA) are compared. A number of drawbacks of the QSSA are discussed, e.g. the selection criteria for QSS species and their sensitivity to system parameters, initial conditions, etc. To overcome these drawbacks, the GQL approach has been developed as a robust, automatic and scaling-invariant method for a global analysis of the system's timescale hierarchy and subsequent model reduction. In this work the auto-ignition problem of the hydrogen-air system is considered over a wide range of system parameters and initial conditions. The potential of the suggested approach to overcome most of the drawbacks of the standard approaches is illustrated.
Balabin, Roman M; Smirnov, Sergey V
2011-04-29
During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in fields ranging from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm⁻¹) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of applying other spectroscopic techniques, such as Raman, ultraviolet-visible (UV-vis), or nuclear magnetic resonance (NMR) spectroscopy, can also be greatly improved by an appropriate choice of feature selection method. Copyright © 2011 Elsevier B.V. All rights reserved.
Best practices for evaluating single nucleotide variant calling methods for microbial genomics
Olson, Nathan D.; Lund, Steven P.; Colman, Rebecca E.; Foster, Jeffrey T.; Sahl, Jason W.; Schupp, James M.; Keim, Paul; Morrow, Jayne B.; Salit, Marc L.; Zook, Justin M.
2015-01-01
Innovations in sequencing technologies have allowed biologists to make incredible advances in understanding biological systems. As experience grows, researchers increasingly recognize that analyzing the wealth of data provided by these new sequencing platforms requires careful attention to detail for robust results. Thus far, much of the scientific community's focus in bacterial genomics has been on evaluating genome assembly algorithms and rigorously validating assembly program performance. Missing, however, is a focus on critical evaluation of variant callers for these genomes. Variant calling is essential for comparative genomics, as it yields insights into nucleotide-level organismal differences. Variant calling is a multistep process with a host of potential error sources that may lead to incorrect variant calls. Identifying and resolving these incorrect calls is critical for bacterial genomics to advance. The goal of this review is to provide guidance on validating algorithms and pipelines used in variant calling for bacterial genomics. First, we provide an overview of variant calling procedures and the potential sources of error associated with the methods. We then identify appropriate datasets for use in evaluating algorithms and describe statistical methods for evaluating algorithm performance. As variant calling moves from basic research to the applied setting, standardized methods for performance evaluation and reporting are required; it is our hope that this review provides the groundwork for the development of these standards. PMID:26217378
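The core bookkeeping of variant-caller evaluation, comparing a call set against a truth set and reporting summary statistics, can be sketched as follows. Variants are assumed here to be keyed as (position, alt allele) pairs, an illustrative simplification; real pipelines must also normalize variant representations before matching, a step omitted in this sketch.

```python
def evaluate_calls(truth, calls):
    """Compare a variant call set against a truth set of (pos, alt) keys."""
    tp = len(truth & calls)   # called and in truth
    fp = len(calls - truth)   # called but not in truth
    fn = len(truth - calls)   # in truth but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {'tp': tp, 'fp': fp, 'fn': fn,
            'precision': precision, 'recall': recall, 'f1': f1}

# Toy truth and call sets; positions and alleles are invented.
truth = {(1042, 'A'), (3301, 'T'), (7718, 'G')}
calls = {(1042, 'A'), (3301, 'T'), (9040, 'C')}
metrics = evaluate_calls(truth, calls)
```

With two true positives, one false positive and one false negative, precision, recall and F1 all come out to 2/3; stratifying such counts by genomic context is where the review's statistical discussion begins.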
Bao, Le; Gu, Hong; Dunn, Katherine A; Bielawski, Joseph P
2007-02-08
Models of codon evolution have proven useful for investigating the strength and direction of natural selection. In some cases, a priori biological knowledge has been used successfully to model heterogeneous evolutionary dynamics among codon sites. These are called fixed-effect models, and they require that all codon sites are assigned to one of several partitions which are permitted to have independent parameters for selection pressure, evolutionary rate, transition to transversion ratio or codon frequencies. For single gene analysis, partitions might be defined according to protein tertiary structure, and for multiple gene analysis partitions might be defined according to a gene's functional category. Given a set of related fixed-effect models, the task of selecting the model that best fits the data is not trivial. In this study, we implement a set of fixed-effect codon models which allow for different levels of heterogeneity among partitions in the substitution process. We describe strategies for selecting among these models by a backward elimination procedure, Akaike information criterion (AIC) or a corrected Akaike information criterion (AICc). We evaluate the performance of these model selection methods via a simulation study, and make several recommendations for real data analysis. Our simulation study indicates that the backward elimination procedure can provide a reliable method for model selection in this setting. We also demonstrate the utility of these models by application to a single-gene dataset partitioned according to tertiary structure (abalone sperm lysin), and a multi-gene dataset partitioned according to the functional category of the gene (flagellar-related proteins of Listeria). Fixed-effect models have advantages and disadvantages. Fixed-effect models are desirable when data partitions are known to exhibit significant heterogeneity or when a statistical test of such heterogeneity is desired. 
They have the disadvantage of requiring a priori knowledge for partitioning sites. We recommend: (i) selecting models by backward elimination rather than AIC or AICc, (ii) using a stringent cut-off, e.g., p = 0.0001, and (iii) conducting a sensitivity analysis of the results. With thoughtful application, fixed-effect codon models should provide a useful tool for large-scale multi-gene analyses.
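The AIC/AICc comparison discussed in this abstract can be illustrated with a short sketch; the log-likelihoods, parameter counts, and site count below are made-up numbers, not values from the paper:

```python
def aic(log_likelihood, k):
    """Akaike information criterion for a model with k free parameters."""
    return 2 * k - 2 * log_likelihood

def aicc(log_likelihood, k, n):
    """Small-sample corrected AIC (valid only when n > k + 1)."""
    return aic(log_likelihood, k) + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical comparison: a pooled model vs. a 2-partition fixed-effect model
# (log-likelihood, number of free parameters) -- illustrative values only.
models = {"pooled": (-1234.5, 4), "partitioned": (-1228.0, 8)}
n_sites = 500  # assumed alignment length
scores = {name: aicc(ll, k, n_sites) for name, (ll, k) in models.items()}
best = min(scores, key=scores.get)  # lower AICc is preferred
```

Note that the paper recommends backward elimination with a likelihood-ratio test over AIC/AICc; this sketch only shows the information-criterion alternative it compares against.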
Rodríguez-Gómez, R; Jiménez-Díaz, I; Zafra-Gómez, A; Ballesteros, O; Navalón, A
2014-12-01
In recent decades, in parallel with industrial development, a large number of new chemicals have emerged that are able to disrupt the human endocrine system. This group of substances, the so-called endocrine-disrupting chemicals (EDCs), includes many families of compounds, such as parabens, benzophenone-UV filters and bisphenols. Given the demonstrated biological activity of these compounds, it is necessary to develop new analytical procedures to evaluate exposure, with the final objective of accurately establishing relationships between EDC concentrations and the harmful health effects observed in the population. In the present work, a method based on a simplified sample treatment involving steps of precipitation, evaporation and clean-up of the extracts with C18, followed by ultrahigh performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) analysis, is proposed and validated for the determination of bisphenol A and its chlorinated derivatives (monochloro-, dichloro-, trichloro- and tetrachlorobisphenol A), parabens (methyl-, ethyl-, propyl- and butylparaben) and benzophenone-UV filters (benzophenones-1, -2, -3, -6 and -8 and 4-hydroxybenzophenone) in human breast milk samples. The limits of detection ranged from 0.02 to 0.05 ng mL⁻¹. The method was validated using matrix-matched standard calibration followed by a recovery assay with spiked samples. Recovery rates ranged from 91% to 110%, and the precision (evaluated as relative standard deviation) was lower than 15% for all compounds, within the acceptable limits of the selected bioanalytical method validation guide. The method was satisfactorily applied to the determination of these compounds in human breast milk samples collected from 10 randomly selected women. Copyright © 2014 Elsevier B.V. All rights reserved.
Moradi, Saleh; Nima, Ali A.; Rapp Ricciardi, Max; Archer, Trevor; Garcia, Danilo
2014-01-01
Background: Performance monitoring might have an adverse influence on call center agents' well-being. We investigate how performance, over a 6-month period, is related to agents' perceptions of their learning climate, character strengths, well-being (subjective and psychological), and physical activity. Method: Agents (N = 135) self-reported their perception of the learning climate (Learning Climate Questionnaire), character strengths (Values In Action Inventory Short Version), well-being (Positive Affect Negative Affect Schedule, Satisfaction With Life Scale, Psychological Well-Being Scales Short Version), and how often/intensively they engaged in physical activity. Performance, “time on the phone,” was monitored for 6 consecutive months by the same system handling the calls. Results: Performance was positively related to having opportunities to develop, the character strengths clusters of Wisdom and Knowledge (e.g., curiosity for learning, perspective) and Temperance (e.g., having self-control, being prudent, humble, and modest), and exercise frequency. Performance was negatively related to the sense of autonomy and responsibility, contentedness, the character strengths clusters of Humanity and Love (e.g., helping others, cooperation) and Justice (e.g., affiliation, fairness, leadership), positive affect, life satisfaction, and exercise intensity. Conclusion: Call centers may need to create opportunities to develop in order to increase agents' performance, and to focus on individual differences in the recruitment and selection of agents to prevent future shortcomings or worker dissatisfaction. Nevertheless, performance measurement in call centers may need to include other aspects that are more attuned to different character strengths. After all, allowing individuals to put their strengths to work should empower the individual and, in the end, the organization itself. Finally, physical activity enhancement programs might offer considerable positive work outcomes. PMID:25002853
Kessler, Sharon E; Radespiel, Ute; Hasiniaina, Alida I F; Leliveld, Lisette M C; Nash, Leanne T; Zimmermann, Elke
2014-02-20
Maternal kin selection is a driving force in the evolution of mammalian social complexity and it requires that kin are distinctive from nonkin. The transition from the ancestral state of asociality to the derived state of complex social groups is thought to have occurred via solitary foraging, in which individuals forage alone, but, unlike the asocial ancestors, maintain dispersed social networks via scent-marks and vocalizations. We hypothesize that matrilineal signatures in vocalizations were an important part of these networks. We used the solitary foraging gray mouse lemur (Microcebus murinus) as a model for ancestral solitary foragers and tested for matrilineal signatures in their calls, thus investigating whether such signatures are already present in solitary foragers and could have facilitated the kin selection thought to have driven the evolution of increased social complexity in mammals. Because agonism can be very costly, selection for matrilineal signatures in agonistic calls should help reduce agonism between unfamiliar matrilineal kin. We conducted this study on a well-studied population of wild mouse lemurs at Ankarafantsika National Park, Madagascar. We determined pairwise relatedness using seven microsatellite loci, matrilineal relatedness by sequencing the mitochondrial D-loop, and sleeping group associations using radio-telemetry. We recorded agonistic calls during controlled social encounters and conducted a multi-parametric acoustic analysis to determine the spectral and temporal structure of the agonistic calls. We measured 10 calls for each of 16 females from six different matrilineal kin groups. Calls were assigned to their matriline at a rate significantly higher than chance (pDFA: correct = 47.1%, chance = 26.7%, p = 0.03). There was a statistical trend for a negative correlation between acoustic distance and relatedness (Mantel Test: g = -1.61, Z = 4.61, r = -0.13, p = 0.058). 
Mouse lemur agonistic calls are moderately distinctive by matriline. Because sleeping groups consisted of close maternal kin, both genetics and social learning may have generated these acoustic signatures. As mouse lemurs are models for solitary foragers, we recommend further studies testing whether the lemurs use these calls to recognize kin. This would enable further modeling of how kin recognition in ancestral species could have shaped the evolution of complex sociality.
Random ensemble learning for EEG classification.
Hosseini, Mohammad-Parsa; Pompili, Dario; Elisevich, Kost; Soltanian-Zadeh, Hamid
2018-01-01
Real-time detection of seizure activity in epilepsy patients is critical in averting seizure activity and improving patients' quality of life. Accurate evaluation, presurgical assessment, seizure prevention, and emergency alerts all depend on the rapid detection of seizure onset. A new method of feature selection and classification for rapid and precise seizure detection is discussed wherein informative components of electroencephalogram (EEG)-derived data are extracted and an automatic method is presented using infinite independent component analysis (I-ICA) to select independent features. The feature space is divided into subspaces via random selection and multichannel support vector machines (SVMs) are used to classify these subspaces. The result of each classifier is then combined by majority voting to establish the final output. In addition, a random subspace ensemble using a combination of SVM, multilayer perceptron (MLP) neural network and an extended k-nearest neighbors (k-NN), called extended nearest neighbor (ENN), is developed for the EEG and electrocorticography (ECoG) big data problem. To evaluate the solution, a benchmark ECoG of eight patients with temporal and extratemporal epilepsy was implemented in a distributed computing framework as a multitier cloud-computing architecture. Using leave-one-out cross-validation, the accuracy, sensitivity, specificity, and both false positive and false negative ratios of the proposed method were found to be 0.97, 0.98, 0.96, 0.04, and 0.02, respectively. Application of the solution to cases under investigation with ECoG has also been effected to demonstrate its utility. Copyright © 2017 Elsevier B.V. All rights reserved.
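The random-subspace majority-voting idea described above can be sketched as follows. This is not the authors' pipeline: a nearest-centroid classifier stands in for the multichannel SVM base learners, and synthetic Gaussian data stands in for extracted EEG/ECoG features; all names and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y):
    """Fit a nearest-centroid classifier (stand-in for an SVM base learner)."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_centroids(model, X):
    classes, centroids = model
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

def random_subspace_ensemble(X, y, X_test, n_learners=11, subspace_frac=0.5):
    """Train base learners on random feature subspaces; combine by majority vote."""
    n_features = X.shape[1]
    k = max(1, int(subspace_frac * n_features))
    votes = []
    for _ in range(n_learners):
        idx = rng.choice(n_features, size=k, replace=False)  # random subspace
        model = fit_centroids(X[:, idx], y)
        votes.append(predict_centroids(model, X_test[:, idx]))
    votes = np.array(votes)  # shape: (n_learners, n_test)
    return np.array([np.bincount(col).argmax() for col in votes.T])  # majority vote

# Toy two-class "feature" data: class 1 is shifted in every feature dimension.
X0 = rng.normal(0.0, 1.0, size=(50, 8))
X1 = rng.normal(2.0, 1.0, size=(50, 8))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(50, int), np.ones(50, int)]
X_test = np.vstack([rng.normal(0.0, 1.0, (10, 8)), rng.normal(2.0, 1.0, (10, 8))])
pred = random_subspace_ensemble(X, y, X_test)
```

The subspace draw per learner gives each base classifier a different view of the feature space, which is what makes the majority vote more robust than any single learner.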
Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.
2016-02-02
This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
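The non-dominated sorting step at the heart of SOP can be sketched as below. This is a naive O(n²)-per-front illustration, not the authors' implementation; the example objective pairs (expensive function value, negated minimum distance, so both are minimized) are invented:

```python
def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly better in one."""
    return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

def pareto_fronts(points):
    """Partition points (tuples of objectives, all minimized) into sorted fronts."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # A point is in the current front if nothing remaining dominates it.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Objectives: (f(x), -min_distance). Distance is negated because SOP wants
# candidates that are both low-valued and far from evaluated points.
pts = [(1.0, -0.1), (2.0, -0.5), (1.5, -0.05), (3.0, -0.01)]
fronts = pareto_fronts(pts)
# Centers for the P processors would then be drawn from the earliest fronts.
```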
Tracking fin whales in the northeast Pacific Ocean with a seafloor seismic network.
Wilcock, William S D
2012-10-01
Ocean bottom seismometer (OBS) networks represent a tool of opportunity to study fin and blue whales. A small OBS network on the Juan de Fuca Ridge in the northeast Pacific Ocean in ~2.3 km of water recorded an extensive data set of 20-Hz fin whale calls. An automated method has been developed to identify arrival times based on instantaneous frequency and amplitude and to locate calls using a grid search even in the presence of a few bad arrival times. When only one whale is calling near the network, tracks can generally be obtained up to distances of ~15 km from the network. When the calls from multiple whales overlap, user supervision is required to identify tracks. The absolute and relative amplitudes of arrivals and their three-component particle motions provide additional constraints on call location but are not useful for extending the distance to which calls can be located. The double-difference method inverts for changes in relative call locations using differences in residuals for pairs of nearby calls recorded on a common station. The method significantly reduces the unsystematic component of the location error, especially when inconsistencies in arrival time observations are minimized by cross-correlation.
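The grid-search location step described above can be illustrated with a simplified two-dimensional sketch; the station geometry, nominal sound speed, and plain least-squares misfit below are assumptions for illustration, not the paper's actual processing (which also handles bad arrival times and three-component particle motions):

```python
import itertools
import math

SOUND_SPEED = 1.5  # km/s, assumed nominal speed of sound in seawater

def locate(stations, arrivals, grid, speed=SOUND_SPEED):
    """Grid search for the call position minimizing arrival-time residuals.

    stations: list of (x, y) in km; arrivals: observed times in s.
    The unknown call origin time is eliminated by demeaning the residuals.
    """
    best, best_cost = None, float("inf")
    for gx, gy in grid:
        travel = [math.hypot(gx - sx, gy - sy) / speed for sx, sy in stations]
        resid = [t - tt for t, tt in zip(arrivals, travel)]
        mean = sum(resid) / len(resid)  # implied origin time
        cost = sum((r - mean) ** 2 for r in resid)
        if cost < best_cost:
            best, best_cost = (gx, gy), cost
    return best, best_cost

# Synthetic test: 4 stations, a known source, noise-free arrivals.
stations = [(0, 0), (10, 0), (0, 10), (10, 10)]
true_pos, t0 = (3.0, 4.0), 5.0
arrivals = [t0 + math.hypot(true_pos[0] - sx, true_pos[1] - sy) / SOUND_SPEED
            for sx, sy in stations]
grid = list(itertools.product([i * 0.5 for i in range(21)], repeat=2))
est, cost = locate(stations, arrivals, grid)
```

With noise-free arrivals and the source on a grid node, the search recovers the true position exactly; real data would need the robust handling of outlier picks the paper describes.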
Sexual and social competition: broadening perspectives by defining female roles.
Rubenstein, Dustin R
2012-08-19
Males figured more prominently than females in Darwin's view of sexual selection. He considered female choice of secondary importance to male-male competition as a mechanism to explain the evolution of male ornaments and armaments. Fisher later demonstrated the importance of female choice in driving male trait evolution, but his ideas were largely ignored for decades. As sexual selection came to embrace the notions of parent-offspring and sexual conflict, and experimental tests of female choice showed promise, females began to feature more prominently in the framework of sexual selection theory. Recent debate over this theory has centred around the role of females, not only over the question of choice, but also over female-female competition. Whereas some have called for expanding the sexual selection framework to encompass all forms of female-female competition, others have called for subsuming sexual selection within a broader framework of social selection, or replacing it altogether. Still others have argued for linking sexual selection more clearly to other evolutionary theories such as kin selection. Rather than simply debating terminology, we must take a broader view of the general processes that lead to trait evolution in both sexes by clearly defining the roles that females play in the process, and by focusing on intra- and inter-sexual interactions in males and females.
HALT Selected Papers, 1993 with Language Teaching Ideas from Paradise.
ERIC Educational Resources Information Center
Chandler, Paul, Ed.; Hodnett, Edda, Ed.
In section I, papers presented at the Hawaii Association of Language Teachers (HALT) in 1993 are presented. Section II includes a number of projects received from a call for papers simultaneous to the call for the HALT papers. Section 1 contains: "This is Like a Foreign Language to Me: Keynote Address" (Bill VanPatten); "From Discussion Questions…
NASA Astrophysics Data System (ADS)
Kamiński, K.; Dobrowolski, A. P.
2017-04-01
The paper presents the architecture and the results of optimization of selected elements of an Automatic Speaker Recognition (ASR) system that uses Gaussian Mixture Models (GMM) in the classification process. Optimization was performed on the process of selecting individual features using a genetic algorithm, and on the parameters of the Gaussian distributions used to describe individual voices. The developed system was tested to evaluate the impact of different compression methods used, among others, in landline, mobile, and VoIP telephony systems on the effectiveness of speaker identification. Results are also presented on the effectiveness of speaker identification at specific levels of noise in the speech signal and in the presence of other disturbances that can appear during phone calls, which made it possible to specify the spectrum of applications of the presented ASR system.
Iterative non-sequential protein structural alignment.
Salem, Saeed; Zaki, Mohammed J; Bystroff, Christopher
2009-06-01
Structural similarity between proteins gives us insights into their evolutionary relationships when there is low sequence similarity. In this paper, we present a novel approach called SNAP for non-sequential pair-wise structural alignment. Starting from an initial alignment, our approach iterates over a two-step process consisting of a superposition step and an alignment step, until convergence. We propose a novel greedy algorithm to construct both sequential and non-sequential alignments. The quality of SNAP alignments were assessed by comparing against the manually curated reference alignments in the challenging SISY and RIPC datasets. Moreover, when applied to a dataset of 4410 protein pairs selected from the CATH database, SNAP produced longer alignments with lower rmsd than several state-of-the-art alignment methods. Classification of folds using SNAP alignments was both highly sensitive and highly selective. The SNAP software along with the datasets are available online at http://www.cs.rpi.edu/~zaki/software/SNAP.
NASA Astrophysics Data System (ADS)
Coronel-Brizio, H. F.; Hernández-Montoya, A. R.
2005-08-01
The so-called Pareto-Levy or power-law distribution has been successfully used as a model to describe probabilities associated to extreme variations of stock markets indexes worldwide. The selection of the threshold parameter from empirical data and consequently, the determination of the exponent of the distribution, is often done using a simple graphical method based on a log-log scale, where a power-law probability plot shows a straight line with slope equal to the exponent of the power-law distribution. This procedure can be considered subjective, particularly with regard to the choice of the threshold or cutoff parameter. In this work, a more objective procedure based on a statistical measure of discrepancy between the empirical and the Pareto-Levy distribution is presented. The technique is illustrated for data sets from the New York Stock Exchange (DJIA) and the Mexican Stock Market (IPC).
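A discrepancy-based threshold selection in this spirit can be sketched as follows, using a Hill-type tail-exponent estimate and a Kolmogorov-Smirnov-style distance on synthetic Pareto data. The candidate cutoffs and the one-sided empirical-CDF convention are simplifying assumptions, not the authors' exact statistic:

```python
import math
import random

def hill_alpha(tail, xmin):
    """Maximum-likelihood (Hill) estimate of the Pareto exponent for x >= xmin."""
    return len(tail) / sum(math.log(x / xmin) for x in tail)

def ks_distance(tail, xmin, alpha):
    """Max gap between the empirical CDF of the tail and the fitted Pareto CDF.

    Uses the (i+1)/n one-sided empirical-CDF convention for brevity.
    """
    tail = sorted(tail)
    n = len(tail)
    return max(abs((i + 1) / n - (1 - (xmin / x) ** alpha))
               for i, x in enumerate(tail))

def select_threshold(data, candidates):
    """Pick the cutoff whose fitted Pareto tail is closest to the empirical CDF."""
    def score(xmin):
        tail = [x for x in data if x >= xmin]
        return ks_distance(tail, xmin, hill_alpha(tail, xmin))
    return min(candidates, key=score)

# Synthetic Pareto(alpha=3, xmin=1) sample via inverse-transform sampling.
random.seed(1)
data = [1.0 / (1.0 - random.random()) ** (1.0 / 3.0) for _ in range(2000)]
xmin = select_threshold(data, [1.0, 1.5, 2.0, 3.0])
alpha = hill_alpha([x for x in data if x >= xmin], xmin)
```

On real index returns the candidate list would be a dense scan of observed values, and the discrepancy measure could be any goodness-of-fit statistic, as in the paper.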
Kennedy, Jeffrey R.; Paretti, Nicholas V.; Veilleux, Andrea G.
2014-01-01
Regression equations, which allow predictions of n-day flood-duration flows for selected annual exceedance probabilities at ungaged sites, were developed using generalized least-squares regression and flood-duration flow frequency estimates at 56 streamgaging stations within a single, relatively uniform physiographic region in the central part of Arizona, between the Colorado Plateau and Basin and Range Province, called the Transition Zone. Drainage area explained most of the variation in the n-day flood-duration annual exceedance probabilities, but mean annual precipitation and mean elevation were also significant variables in the regression models. Standard error of prediction for the regression equations varies from 28 to 53 percent and generally decreases with increasing n-day duration. Outside the Transition Zone there are insufficient streamgaging stations to develop regression equations, but flood-duration flow frequency estimates are presented at select streamgaging stations.
An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving these processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary search strategy employed by Chemical Reaction Optimization into the neighbourhood search strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method to choose a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data.
It is hoped that this study provides new insight into developing more accurate and reliable biological models from limited and low-quality experimental data. PMID:23593445
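The fit-model-outputs-to-data loop underlying this kind of parameter estimation can be sketched with a toy problem. The exponential-decay "model", the naive random search (standing in for the swarm-based optimizer), and all numbers are illustrative assumptions:

```python
import math
import random

random.seed(0)

def model(t, a, b):
    """Toy exponential-decay output standing in for a biological model."""
    return a * math.exp(-b * t)

def sse(params, times, observed):
    """Sum of squared errors between model outputs and (noisy) observations."""
    a, b = params
    return sum((model(t, a, b) - y) ** 2 for t, y in zip(times, observed))

def random_search(times, observed, iters=20000):
    """Naive global random search over the parameter box [0,5] x [0,2]."""
    best, best_cost = None, float("inf")
    for _ in range(iters):
        cand = (random.uniform(0, 5), random.uniform(0, 2))
        c = sse(cand, times, observed)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

# Generate noisy, simulated "experimental" data from known true parameters.
times = [0.1 * i for i in range(30)]
true_a, true_b = 2.0, 0.7
observed = [model(t, true_a, true_b) + random.gauss(0, 0.02) for t in times]
(a_hat, b_hat), cost = random_search(times, observed)
```

A swarm method replaces the blind sampling with guided moves, but the objective (misfit between model output and data) and the workflow are the same.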
Code of Federal Regulations, 2010 CFR
2010-01-01
... Secretary: (i) Meetings shall be held in each producer district for the purpose of selecting candidates for the member and alternate member nominations; (ii) Those candidates selected at the producer meetings... meetings called by the committee, the independent handlers shall nominate a qualified person for each...
Nonintrusive multibiometrics on a mobile device: a comparison of fusion techniques
NASA Astrophysics Data System (ADS)
Allano, Lorene; Morris, Andrew C.; Sellahewa, Harin; Garcia-Salicetti, Sonia; Koreman, Jacques; Jassim, Sabah; Ly-Van, Bao; Wu, Dalei; Dorizzi, Bernadette
2006-04-01
In this article we test a number of score fusion methods for the purpose of multimodal biometric authentication. These tests were made for the SecurePhone project, whose aim is to develop a prototype mobile communication system enabling biometrically authenticated users to conclude legally binding m-contracts during a mobile phone call on a PDA. The three biometrics of voice, face and signature were selected because they are all traditional, non-intrusive and easy-to-use means of authentication which can readily be captured on a PDA. By combining multiple biometrics of relatively low security, it may be possible to obtain a combined level of security that is at least as high as that provided by a PIN or handwritten signature, traditionally used for user authentication. As the relative success of different fusion methods depends on the database used and the tests made, the database we used was recorded on a suitable PDA (the Qtek2020) and the test protocol was designed to reflect the intended application scenario, which is expected to use short text prompts. Not all of the fusion methods tested are original. They were selected for their suitability for implementation within the constraints imposed by the application. All of the methods tested are based on fusion of the match scores output by each modality. Though computationally simple, the methods tested have shown very promising results. All four fusion methods tested obtain a significant performance increase.
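Match-score fusion of the kind tested here can be sketched as a weighted sum over min-max-normalized per-modality scores. The score ranges, weights, and acceptance threshold below are invented for illustration and are not the project's tuned values:

```python
def min_max_normalize(scores, lo, hi):
    """Map raw match scores into [0, 1] using bounds estimated on development data."""
    return [(s - lo) / (hi - lo) for s in scores]

def weighted_sum_fusion(modality_scores, weights):
    """Fuse per-modality scores trial by trial; weights should sum to 1."""
    return [sum(w * s for w, s in zip(weights, trial))
            for trial in zip(*modality_scores)]

# Three authentication trials scored by voice, face and signature
# (hypothetical raw score ranges per modality).
voice = min_max_normalize([12.0, 3.0, 8.0], lo=0.0, hi=15.0)
face = min_max_normalize([0.9, 0.2, 0.6], lo=0.0, hi=1.0)
sig = min_max_normalize([55.0, 20.0, 70.0], lo=0.0, hi=80.0)
fused = weighted_sum_fusion([voice, face, sig], weights=[0.4, 0.3, 0.3])
accept = [s >= 0.5 for s in fused]  # assumed decision threshold
```

Normalizing before fusing matters because the raw scores of different modalities live on incomparable scales; the weights would normally be tuned on held-out data.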
Closed-loop bird-computer interactions: a new method to study the role of bird calls.
Lerch, Alexandre; Roy, Pierre; Pachet, François; Nagle, Laurent
2011-03-01
In the field of songbird research, many studies have shown the role of male songs in territorial defense and courtship. Calling, another important acoustic communication signal, has received much less attention, however, because calls are assumed to contain less information about the emitter than songs do. Birdcall repertoire is diverse, and the role of calls has been found to be significant in the area of social interaction, for example, in pair, family, and group cohesion. However, standard methods for studying calls do not allow precise and systematic study of their role in communication. We propose herein a new method to study bird vocal interaction. A closed-loop computer system interacts with canaries, Serinus canaria, by (1) automatically classifying two basic types of canary vocalization, single versus repeated calls, as they are produced by the subject, and (2) responding with a preprogrammed call type recorded from another bird. This computerized animal-machine interaction requires no human interference. We show first that the birds do engage in sustained interactions with the system, by studying the rate of single and repeated calls for various programmed protocols. We then show that female canaries differentially use single and repeated calls. First, they produce significantly more single than repeated calls, and second, the rate of single calls is associated with the context in which they interact, whereas repeated calls are context independent. This experiment is the first illustration of how closed-loop bird-computer interaction can be used productively to study social relationships. © Springer-Verlag 2010
Penna, M; Lin, W Y; Feng, A S
2001-12-01
We investigated the response selectivities of single auditory neurons in the torus semicircularis (TS) of Batrachyla antartandica (a leptodactylid from southern Chile) to synthetic stimuli having diverse temporal structures. The advertisement call of this species is characterized by a long sequence of brief sound pulses having a dominant frequency of about 2000 Hz. We constructed five different series of synthetic stimuli in which the following acoustic parameters were systematically modified, one at a time: pulse rate, pulse duration, pulse rise time, pulse fall time, and train duration. The carrier frequency of these stimuli was fixed at the characteristic frequency of the units under study (n=44). Response patterns of TS units to these synthetic call variants revealed different degrees of selectivity for each of the temporal variables. A substantial number of neurons showed preference for pulse rates below 2 pulses s⁻¹, approximating the values found in natural advertisement calls. Tonic neurons generally showed preferences for long pulse durations, long rise and fall times, and long train durations. In contrast, phasic and phasic-burst neurons preferred stimuli with short durations, short rise and fall times and short train durations.
Earth Observing System Data Gateway
NASA Technical Reports Server (NTRS)
Pfister, Robin; McMahon, Joe; Amrhein, James; Sefert, Ed; Marsans, Lorena; Solomon, Mark; Nestler, Mark
2006-01-01
The Earth Observing System Data Gateway (EDG) software provides a "one-stop-shopping" standard interface for exploring and ordering Earth-science data stored at geographically distributed sites. EDG enables a user to do the following: 1) Search for data according to high-level criteria (e.g., geographic location, time, or satellite that acquired the data); 2) Browse the results of a search, viewing thumbnail sketches of data that satisfy the user's criteria; and 3) Order selected data for delivery to a specified address on a chosen medium (e.g., compact disk or magnetic tape). EDG consists of (1) a component that implements a high-level client/server protocol, and (2) a collection of C-language libraries that implement the passing of protocol messages between an EDG client and one or more EDG servers. EDG servers are located at sites usually called "Distributed Active Archive Centers" (DAACs). Each DAAC may allow access to many individual data items, called "granules" (e.g., single Landsat images). Related granules are grouped into collections called "data sets." EDG enables a user to send a search query to multiple DAACs simultaneously, inspect the resulting information, select browseable granules, and then order selected data from the different sites in a seamless fashion.
Pourabbasi, Ata; Farzami, Jalal; Shirvani, Mahbubeh-Sadat Ebrahimnegad; Shams, Amir Hossein; Larijani, Bagher
2017-01-01
One of the main uses of social networks in clinical studies is facilitating sampling and case finding for scientists. The main focus of this study is comparing two different methods of sampling for study purposes: phone calls and a social network. One of the researchers called 214 families of children with diabetes over 90 days. After this period, phone calls stopped, and the team communicated with families for 30 days through Telegram, a virtual social network. The number of children who participated in the study was evaluated. Although the Telegram period was 60 days shorter than the phone-call period, researchers found that the proportion of participants recruited through Telegram (17.6%) did not differ significantly from the proportion recruited by phone call (12.9%). Using social networks can be suggested as a beneficial method for local researchers seeking easier sampling, their samples' trust, easier follow-up, and an easily accessible database.
Agnihotri, Samira; Sundeep, P. V. D. S.; Seelamantula, Chandra Sekhar; Balakrishnan, Rohini
2014-01-01
Objective identification and description of mimicked calls is a primary component of any study on avian vocal mimicry, but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis, in combination with various distance metrics, to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus, and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods also misclassified different subsets of calls, and we achieved a maximum accuracy of ninety-five per cent only when we combined the results of both methods. This study is the first to use Mel-frequency Cepstral Coefficients and Relative Spectral Amplitude-filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential. PMID:24603717
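The matching of mimicked calls to putative model calls can be sketched as a nearest-neighbor search over per-call feature vectors under some distance metric. The four-dimensional vectors below stand in for real MFCC or RASTA-LPC features, and the labels are hypothetical:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_call(mimic, models, metric=euclidean):
    """Return the label of the model call closest to the mimicked call."""
    return min(models, key=lambda label: metric(mimic, models[label]))

# Hypothetical 4-dim feature vectors (e.g. MFCCs averaged over a call).
models = {
    "model_species_A": [1.0, 0.2, -0.5, 0.3],
    "model_species_B": [-0.8, 1.1, 0.4, -0.2],
}
mimic = [0.9, 0.1, -0.4, 0.2]
label = match_call(mimic, models)
```

Swapping in another metric (cosine, Mahalanobis) only requires passing a different `metric` function, which mirrors how the study compared several distance metrics on the same features.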
A method for assessing the intrinsic value and management potentials of geomorphosites
NASA Astrophysics Data System (ADS)
Reynard, Emmanuel; Amandine, Perret; Marco, Buchmann; Jonathan, Bussard; Lucien, Grangier; Simon, Martin
2014-05-01
In 2007, we proposed a method for assessing the scientific and additional values of geomorphosites (Reynard et al., 2007). The evaluation methodology was divided into two steps: the evaluation of the scientific value of pre-selected sites, based on several criteria (rareness, integrity, representativeness, interest for reconstructing the regional morphogenesis), and the assessment of a set of so-called additional values (aesthetic, economic, ecological, and cultural). The method has proved to be quite robust and easy to use. The tests carried out in several geomorphological contexts allowed us to improve the implementation process of the method by refining the criteria used to assess the various values of selected sites. Nevertheless, two main problems remained unsolved: (1) the selection of sites was not clear and not really systematic; (2) some additional values - in particular the economic value - were difficult to assess, while others not considered in the method could have been evaluated (e.g. the educational value of sites). These factors led us to launch a series of modifications of the method, which are presented in this poster. First of all, the assessment procedure was divided into two main steps: (1) the evaluation of the intrinsic value, in two parts (the scientific and additional values, the latter limited to three kinds of values - cultural, ecological, aesthetic); (2) the documentation of the present use and management of the site, also divided into two parts: the sensitivity of the site (allowing us to assess the need for protection), and a series of factors influencing the (tourist) use of the site (visit conditions, educational interest, economic value). Secondly, a procedure was developed to select the potential geomorphosites - that is, the sites worth assessing with the evaluation method. 
The method was then tested in four regions in the Swiss and French Alps: the Chablais area (Switzerland, France), the Hérens valley (Switzerland), the Moesano valley (Switzerland), where a national park project is in preparation, and the Gruyère - Pays-d'Enhaut Regional Nature Park (Switzerland). The main conclusion of the research is that even if full objectivity in the evaluation process is difficult to reach, transparency is essential at three stages: (1) the selection of potential geomorphosites: it is important to develop criteria and a method for establishing a list of potential geomorphosites; in this study, we propose to carry out the selection by crossing two dimensions: a spatial one (the selection should reflect the regional geo(morpho)diversity) and a temporal one (the selection should allow reconstruction of the regional geomorphological history); (2) the assessment of the intrinsic value of the selected geomorphosites, through the establishment of clear criteria for carrying out the evaluation; (3) the development of a clear management strategy oriented towards the protection and tourist promotion of the sites and based on precise documentation of management potentials and needs, according to the assessment objectives. Reference: Reynard E., Fontana G., Kozlik L., Scapozza C. (2007). A method for assessing the scientific and additional values of geomorphosites, Geogr. Helv. 62(3), 148-158.
NegGOA: negative GO annotations selection using ontology structure.
Fu, Guangyuan; Wang, Jun; Yang, Bo; Yu, Guoxian
2016-10-01
Predicting the biological functions of proteins is one of the key challenges in the post-genomic era. Computational models have demonstrated the utility of applying machine learning methods to predict protein function. Most prediction methods explicitly require a set of negative examples: proteins that are known not to carry out a particular function. However, the Gene Ontology (GO) almost always only provides the knowledge that proteins do carry out a particular function, and functional annotations of proteins are incomplete. GO structurally organizes tens of thousands of GO terms, and a protein is annotated with several (or dozens) of these terms. For these reasons, negative examples of a protein can greatly help to distinguish its true positive examples within such a large candidate GO space. In this paper, we present a novel approach (called NegGOA) to select negative examples. Specifically, NegGOA takes advantage of the ontology structure, the available annotations, and the potentiality of additional annotations of a protein to choose its negative examples. We compare NegGOA with other negative example selection algorithms and find that NegGOA produces far fewer false negatives. We incorporate the selected negative examples into an efficient function prediction model to predict the functions of proteins in Yeast, Human, Mouse and Fly. NegGOA also demonstrates improved accuracy over these competing algorithms across various evaluation metrics. In addition, NegGOA is less affected by incomplete annotations of proteins than these competing methods. The Matlab and R codes are available at https://sites.google.com/site/guoxian85/neggoa gxyu@swu.edu.cn Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
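The core idea, preferring negative terms that sit far in the ontology from a protein's known annotations, can be illustrated with a toy sketch. The ontology, term names, and the distance-based ranking below are hypothetical illustrations, not NegGOA's actual scoring:

```python
from collections import deque

# Toy GO-like ontology as a child -> parents DAG (hypothetical terms).
PARENTS = {
    "GO:B": ["GO:A"], "GO:C": ["GO:A"],
    "GO:D": ["GO:B"], "GO:E": ["GO:B", "GO:C"],
    "GO:F": ["GO:C"],
}
TERMS = ["GO:A", "GO:B", "GO:C", "GO:D", "GO:E", "GO:F"]

def undirected_distance(src, dst):
    """BFS distance treating parent links as undirected edges."""
    if src == dst:
        return 0
    neighbours = {t: set() for t in TERMS}
    for child, parents in PARENTS.items():
        for p in parents:
            neighbours[child].add(p)
            neighbours[p].add(child)
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        for nxt in neighbours[node]:
            if nxt == dst:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

def select_negative_terms(annotated, k=2):
    """Rank unannotated terms by distance from the annotation set;
    the most distant terms are the safest negative candidates."""
    candidates = [t for t in TERMS if t not in annotated]
    scored = [(min(undirected_distance(t, a) for a in annotated), t)
              for t in candidates]
    scored.sort(reverse=True)
    return [t for _, t in scored[:k]]

negatives = select_negative_terms({"GO:D", "GO:B"})
```

Terms close to existing annotations are plausible missing annotations, so this ranking avoids picking them as negatives.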
Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina
2016-01-01
The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein subGolgi localizations may assist in drug development and in understanding the mechanisms of the GA involved in various cellular processes. In this paper, a new computational method is proposed for distinguishing cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search for the optimal features among the CSP-based features and the g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthews Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method in identifying Golgi-resident protein types. Furthermore, the CSP-based feature extraction method may provide guidelines for protein function predictions. PMID:26861308
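The recursive feature elimination loop itself is simple to sketch. The paper ranks features with random forest importance; the stand-in below uses the absolute Pearson correlation with the labels as the importance score, and the data are hypothetical:

```python
def correlation_importance(xs, ys):
    """|Pearson correlation| between one feature column and the labels."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (sx * sy)) if sx and sy else 0.0

def recursive_feature_elimination(X, y, n_keep):
    """Repeatedly drop the least important feature until n_keep remain."""
    remaining = list(range(len(X[0])))
    while len(remaining) > n_keep:
        scores = {j: correlation_importance([row[j] for row in X], y)
                  for j in remaining}
        remaining.remove(min(remaining, key=scores.get))
    return remaining

# Features 0 and 2 track the binary label; feature 1 is uninformative noise.
X = [[1.0, 0.3, 9.0], [0.9, 0.8, 8.5], [0.1, 0.2, 1.0], [0.2, 0.9, 1.5]]
y = [1, 1, 0, 0]
selected = recursive_feature_elimination(X, y, n_keep=2)
```

With a real RF importance the loop is identical; only the scoring function changes.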
Seyyed Alizadeh Ganji, Seyyed Mohammad; Hayati, Mohammad
2018-06-05
The presence of cyanide ions in wastewater is dangerous to the health and life of living creatures, especially humans. Cyanide concentration should not exceed the acceptable limit in wastewaters, to avoid adverse effects on the environment. In this paper, in order to select the most appropriate method for removing cyanide from the wastewater of the Moteh gold mine, the use of calcium hypochlorite, sodium hypochlorite, and hydrogen peroxide was chosen, based on experts' opinions, as the leading alternatives in a multi-stage model. Seven criteria for assessing the candidate methods were then determined: the amount of material consumed, ease of implementation, safety, cyanide removal capability, pH, time, and process cost. Afterwards, seven experts conducted numerous experiments to examine the conditions for each of these criteria. Then, by employing a mathematical method called "numerical taxonomy," the use of sodium hypochlorite was suggested as the best method to remove cyanide from the wastewater of the Moteh gold mine. Finally, the TOPSIS model was used to validate the proposed model, and it led to the same result: both the taxonomic analysis and the TOPSIS method indicated sodium hypochlorite as the best option for cyanide removal from wastewater. In addition, according to the analysis of the various experiments, the conditions for complete removal of cyanide using sodium hypochlorite were a concentration of 8.64 g/L, a pH of 12.3, and a temperature of 12 °C.
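TOPSIS, used here for validation, ranks alternatives by their relative closeness to an ideal solution. A minimal sketch follows; the criterion scores and weights are hypothetical, not the study's measured values:

```python
def topsis(matrix, weights, benefit):
    """TOPSIS: rank alternatives by closeness to the ideal solution.
    matrix[i][j] is the score of alternative i on criterion j;
    benefit[j] is True when larger is better for criterion j."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalise each column, then apply the criterion weight.
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    ideal = [max(V[i][j] for i in range(m)) if benefit[j]
             else min(V[i][j] for i in range(m)) for j in range(n)]
    worst = [min(V[i][j] for i in range(m)) if benefit[j]
             else max(V[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = sum((V[i][j] - ideal[j]) ** 2 for j in range(n)) ** 0.5
        d_neg = sum((V[i][j] - worst[j]) ** 2 for j in range(n)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical scores for the three reagents on three criteria:
# removal ability (benefit), cost (cost), safety (benefit).
matrix = [[0.90, 3.0, 0.7],   # calcium hypochlorite
          [0.95, 2.0, 0.8],   # sodium hypochlorite
          [0.85, 4.0, 0.9]]   # hydrogen peroxide
scores = topsis(matrix, weights=[0.5, 0.3, 0.2], benefit=[True, False, True])
best = max(range(3), key=scores.__getitem__)
```

With these illustrative numbers the second alternative (sodium hypochlorite) comes out on top, mirroring the study's conclusion.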
Adaptation of warrant price with Black Scholes model and historical volatility
NASA Astrophysics Data System (ADS)
Aziz, Khairu Azlan Abd; Idris, Mohd Fazril Izhar Mohd; Saian, Rizauddin; Daud, Wan Suhana Wan
2015-05-01
This project discusses warrant pricing in Malaysia. The Black-Scholes model with a non-dividend approach and a linear interpolation technique was applied to price the call warrants. Three call warrants listed on Bursa Malaysia were selected randomly from UiTM's datastream. The findings show that the volatility of each call warrant differs from the others. We used historical volatility, which describes the extent to which the underlying share price is expected to fluctuate within a given period. The price obtained from the Black-Scholes model is compared with the actual market price; mispricing of the call warrants leads to under- or overvaluation. Other variables, such as the interest rate, time to maturity, exercise price and underlying stock price, are involved in pricing call warrants, as well as in measuring their moneyness.
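The pricing pipeline described, annualised historical volatility fed into the Black-Scholes call formula, can be sketched as follows (the price series and parameters are hypothetical, not the study's Bursa Malaysia data):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def historical_volatility(prices, periods_per_year=252):
    """Annualised sample standard deviation of daily log returns."""
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)
    return math.sqrt(var * periods_per_year)

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call, no dividends."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Hypothetical daily closes of the underlying share, then a warrant-style call.
closes = [2.00, 2.02, 1.99, 2.05, 2.04, 2.08]
sigma = historical_volatility(closes)
price = bs_call(S=2.05, K=2.00, r=0.03, sigma=sigma, T=0.5)
```

Comparing `price` against the quoted market price is exactly the mispricing check the abstract describes.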
NASA Astrophysics Data System (ADS)
Usowicz, Jerzy, B.; Marczewski, Wojciech; Usowicz, Boguslaw; Lipiec, Jerzy; Lukowski, Mateusz I.
2010-05-01
This paper presents the results of a time series analysis of soil moisture observed at two test sites, Podlasie and Polesie, during the Cal/Val AO 3275 campaigns in Poland over the interval 2006-2009. The test sites were selected on the basis of their contrasting hydrological conditions: the Podlasie region (Trzebieszow) is essentially drier than the wetland region of Polesie (Urszulin). It is worth noting that soil moisture variations can be represented as a non-stationary random process, and therefore appropriate analysis methods are required. The so-called Empirical Mode Decomposition (EMD) method was chosen, since it is one of the best methods for the analysis of non-stationary and nonlinear time series. To confirm the results obtained by the EMD, we also used wavelet methods. First, we used EMD (analysis step) to decompose the original time series into the so-called Intrinsic Mode Functions (IMFs), and then, by grouping and adding similar IMFs (synthesis step), obtained a few signal components with corresponding temporal scales. Such an adaptive procedure enables decomposition of the original time series into diurnal, seasonal and trend components. Revealing all the temporal scales operating in the original time series is our main objective, and this approach may prove useful in other studies. Second, we analysed the soil moisture time series from both sites using the cross-wavelet transform and wavelet coherency. These methods allow us to study the degree of spatial coherence, which may vary over different intervals of time. We hope the results obtained provide some hints and guidelines for the validation of ESA SMOS data. References: B. Usowicz, J.B.
Usowicz, Spatial and temporal variation of selected physical and chemical properties of soil, Institute of Agrophysics, Polish Academy of Sciences, Lublin 2004, ISBN 83-87385-96-4 Rao, A.R., Hsu, E.-C., Hilbert-Huang Transform Analysis of Hydrological and Environmental Time Series, Springer, 2008, ISBN: 978-1-4020-6453-1 Acknowledgements. This work was funded in part by the PECS - Programme for European Cooperating States, No. 98084 "SWEX/R - Soil Water and Energy Exchange/Research".
Hybrid feature selection algorithm using symmetrical uncertainty and a harmony search algorithm
NASA Astrophysics Data System (ADS)
Salameh Shreem, Salam; Abdullah, Salwani; Nazri, Mohd Zakree Ahmad
2016-04-01
Microarray technology can be used as an efficient diagnostic system to recognise diseases such as tumours or to discriminate between different types of cancers in normal tissues. This technology has received increasing attention from the bioinformatics community because of its potential in designing powerful decision-making tools for cancer diagnosis. However, the presence of thousands or tens of thousands of genes affects the predictive accuracy of this technology from the perspective of classification. Thus, a key issue in microarray data is identifying or selecting the smallest possible set of genes from the input data that can achieve good predictive accuracy for classification. In this work, we propose a two-stage selection algorithm for gene selection problems in microarray datasets, called the symmetrical uncertainty filter and harmony search algorithm wrapper (SU-HSA). Experimental results show that the SU-HSA is better than the HSA in isolation for all datasets in terms of accuracy, and selects fewer genes on 6 out of 10 instances. Furthermore, the comparison with state-of-the-art methods shows that our proposed approach is able to obtain 5 (out of 10) new best results in terms of the number of selected genes, and competitive results in terms of classification accuracy.
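The filter stage's score is simple to state: symmetrical uncertainty SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)), which normalises mutual information to [0, 1]. A minimal sketch for discretised expression values (the toy gene vectors are hypothetical):

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy (bits) of a discrete sequence."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def symmetrical_uncertainty(xs, ys):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), in [0, 1]."""
    hx, hy = entropy(xs), entropy(ys)
    hxy = entropy(list(zip(xs, ys)))        # joint entropy
    mi = hx + hy - hxy                      # mutual information
    return 2.0 * mi / (hx + hy) if hx + hy else 0.0

# A perfectly informative (discretised) gene and a perfectly useless one.
labels = [0, 0, 1, 1, 0, 0, 1, 1]
gene_a = [0, 0, 1, 1, 0, 0, 1, 1]   # mirrors the class labels -> SU = 1
gene_b = [0, 1, 0, 1, 0, 1, 0, 1]   # independent of the labels -> SU = 0
```

Genes whose SU against the class labels exceeds a threshold survive the filter and are passed to the wrapper stage.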
High-frequency gamma activity (80-150 Hz) is increased in human cortex during selective attention
Ray, Supratim; Niebur, Ernst; Hsiao, Steven S.; Sinai, Alon; Crone, Nathan E.
2008-01-01
Objective: To study the role of gamma oscillations (>30 Hz) in selective attention using subdural electrocorticography (ECoG) in humans. Methods: We recorded ECoG in human subjects implanted with subdural electrodes for epilepsy surgery. Sequences of auditory tones and tactile vibrations of 800 ms duration were presented asynchronously, and subjects were asked to selectively attend to one of the two stimulus modalities in order to detect an amplitude increase at 400 ms in some of the stimuli. Results: Event-related ECoG gamma activity was greater over auditory cortex when subjects attended auditory stimuli and was greater over somatosensory cortex when subjects attended vibrotactile stimuli. Furthermore, gamma activity was also observed over prefrontal cortex when stimuli appeared in either modality, but only when they were attended. Attentional modulation of gamma power began ∼400 ms after stimulus onset, consistent with the temporal demands on attention. The increase in gamma activity was greatest at frequencies between 80 and 150 Hz, in the so-called high gamma frequency range. Conclusions: There appears to be a strong link between activity in the high-gamma range (80-150 Hz) and selective attention. Significance: Selective attention is correlated with increased activity in a frequency range that is significantly higher than what has been reported previously using EEG recordings. PMID:18037343
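High-gamma activity of this kind is typically quantified as spectral power restricted to the 80-150 Hz band. A minimal sketch using a textbook DFT (the synthetic signal and sampling rate are illustrative, not the study's ECoG data):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Sum of DFT power over frequency bins lying in [f_lo, f_hi].
    A textbook O(N^2) DFT: slow, but dependency-free for illustration."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            real = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                       for t in range(n))
            imag = -sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                        for t in range(n))
            power += (real * real + imag * imag) / n ** 2
    return power

fs = 1000                       # assumed sampling rate in Hz
ts = [i / fs for i in range(1000)]
# 10 Hz "alpha-like" background plus a weaker 100 Hz "high-gamma" component.
ecog = [math.sin(2 * math.pi * 10 * t) + 0.5 * math.sin(2 * math.pi * 100 * t)
        for t in ts]
high_gamma = band_power(ecog, fs, 80, 150)
low_freq = band_power(ecog, fs, 5, 15)
```

Comparing `high_gamma` across attention conditions, per electrode and time window, is the kind of measurement the study reports.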
2010-01-01
Background: Physicians' mental health may be adversely affected by the number of days worked and time spent on-call, and improved by sleep and days off. The aim of this study was to determine the associations of depressive symptoms with days off duty, hours of sleep, and the number of days of on-call and overnight work among physicians working in Japanese hospitals. Methods: A cross-sectional study was conducted as a national survey by mail. The study population was 10,000 randomly selected physicians working in hospitals who were also members of the Japan Medical Association (response rate 40.5%). A self-reported anonymous questionnaire assessed the number of days off duty, of overnight work, and of on-call duty, and the average number of sleep hours on days without overnight work, over the previous month. Depressive state was determined by the Japanese version of the Quick Inventory of Depressive Symptomatology. Logistic regression analysis was used to explore the associations between depressive symptoms and the studied variables. Results: Among the respondents, 8.3% of men and 10.5% of women were determined to be depressed. For both men and women, depressive state was associated with having no off-duty days and with averaging less than 5 hours of sleep on days without overnight work. Depressive state was positively associated with being on-call more than 5 days per month for men and more than 8 days per month for women, and was negatively associated with being off duty more than 8 days per month for men. Conclusion: Some physicians need support to maintain their mental health. Physicians who do not take enough days off, who sleep fewer hours, and who spend many days on-call may develop depressive symptoms. PMID:20222990
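The logistic regression results above are reported as associations; the elementary building block, an odds ratio with its 95% confidence interval, can be sketched as follows (the 2x2 counts are hypothetical, not the survey's data):

```python
import math

def odds_ratio(exp_cases, exp_noncases, unexp_cases, unexp_noncases):
    """Odds ratio from a 2x2 table, with a Wald 95% CI on the log scale."""
    or_ = (exp_cases * unexp_noncases) / (exp_noncases * unexp_cases)
    se = math.sqrt(1 / exp_cases + 1 / exp_noncases +
                   1 / unexp_cases + 1 / unexp_noncases)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: depressed vs. not, split by ">5 on-call days/month".
or_, ci = odds_ratio(30, 170, 40, 760)
```

A CI excluding 1 would correspond to the kind of significant association the study reports; the multivariable model additionally adjusts for the other covariates.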
The Influence of Judgment Calls on Meta-Analytic Findings.
Tarrahi, Farid; Eisend, Martin
2016-01-01
Previous research has suggested that judgment calls (i.e., methodological choices made in the process of conducting a meta-analysis) have a strong influence on meta-analytic findings, calling their robustness into question. However, prior research applies case-study comparison or reanalysis of a few meta-analyses, with a focus on a few selected judgment calls. These studies neglect the fact that different judgment calls are related to each other and simultaneously influence the outcomes of a meta-analysis, and that meta-analytic findings can vary due to non-judgment-call differences between meta-analyses (e.g., variations of effects over time). The current study analyzes the influence of 13 judgment calls in 176 meta-analyses in marketing research by applying a multivariate, multilevel meta-meta-analysis. The analysis considers simultaneous influences from different judgment calls on meta-analytic effect sizes and controls for alternative explanations based on non-judgment-call differences between meta-analyses. The findings suggest that judgment calls have only a minor influence on meta-analytic findings, whereas non-judgment-call differences between meta-analyses are more likely to explain differences in meta-analytic findings. The findings support the robustness of meta-analytic results and conclusions.
ROCS: a Reproducibility Index and Confidence Score for Interaction Proteomics Studies
2012-01-01
Background: Affinity-Purification Mass-Spectrometry (AP-MS) provides a powerful means of identifying protein complexes and interactions. Several important challenges exist in interpreting the results of AP-MS experiments. First, the reproducibility of AP-MS experimental replicates can be low, due both to technical variability and to the dynamic nature of protein interactions in the cell. Second, the identification of true protein-protein interactions in AP-MS experiments is subject to inaccuracy due to high false negative and false positive rates. Several experimental approaches can be used to mitigate these drawbacks, including the use of replicated and control experiments and relative quantification to sensitively distinguish true interacting proteins from false ones. Methods: To address the issues of reproducibility and accuracy of protein-protein interactions, we introduce a two-step method, called ROCS, which makes use of Indicator Prey Proteins to select reproducible AP-MS experiments, and of Confidence Scores to select specific protein-protein interactions. The Indicator Prey Proteins account for measures of protein identifiability as well as protein reproducibility, effectively allowing removal of outlier experiments that contribute noise and affect downstream inferences. The filtered set of experiments is then used in the Protein-Protein Interaction (PPI) scoring step. Prey protein scoring is done by computing a Confidence Score, which accounts for the probability of occurrence of prey proteins in the bait experiments relative to the control experiment, where the significance cutoff parameter is estimated by simultaneously controlling false positives and false negatives against metrics of false discovery rate and biological coherence, respectively. In summary, the ROCS method relies on automatic, objective criteria for parameter estimation and error-controlled procedures.
Results We illustrate the performance of our method by applying it to five previously published AP-MS experiments, each containing well characterized protein interactions, allowing for systematic benchmarking of ROCS. We show that our method may be used on its own to make accurate identification of specific, biologically relevant protein-protein interactions, or in combination with other AP-MS scoring methods to significantly improve inferences. Conclusions Our method addresses important issues encountered in AP-MS datasets, making ROCS a very promising tool for this purpose, either on its own or in conjunction with other methods. We anticipate that our methodology may be used more generally in proteomics studies and databases, where experimental reproducibility issues arise. The method is implemented in the R language, and is available as an R package called “ROCS”, freely available from the CRAN repository http://cran.r-project.org/. PMID:22682516
ERIC Educational Resources Information Center
Regional Resource Center Program, 2014
2014-01-01
One component of the recently required State Systemic Improvement Plan (SSIP) for State Departments of Education calls for the selection and implementation of evidence-based practices (EBPs). This report provides six steps to guide the process of selecting evidence based practices (EBP): (1) Begin with the End in Mind--Determine Targeted Outcomes;…
An End to Selection at Eleven: The Long Battle to Make Labour Listen
ERIC Educational Resources Information Center
Hayton, Carol
2015-01-01
The author is a long-time advocate inside the Labour Party for ending selective education and the 11-plus. She outlines how Labour Party frontbenchers routinely ignore or deflect calls from Party members to stand up for comprehensive education in both word and deed. As UKIP, whose policy is to extend selective education more widely, rises in the…
NASA Technical Reports Server (NTRS)
Niccum, R. J.
1972-01-01
A series of candidate materials for use in large balloons was tested and their tensile and shear strength capabilities were compared. The tests were done in a cold box at -68 C (-90 F). Some of these materials were fabricated on a special machine called the flying thread loom. This machine laminates various patterns of polyester yarn to a thin polyester film. The results show that the shear strength of materials changes with the angle selected for the transverse yarns, and substantial increases in biaxial load carrying capabilities, compared to materials formerly used, are possible. The loom capabilities and the test methods are discussed.
NASA Astrophysics Data System (ADS)
Laura, J. R.; Miller, D.; Paul, M. V.
2012-03-01
An accuracy assessment of AMES Stereo Pipeline derived DEMs for lunar site selection using weighted spatial dependence simulation and a call for outside AMES derived DEMs to facilitate a statistical precision analysis.
Exactly energy conserving semi-implicit particle in cell formulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapenta, Giovanni, E-mail: giovanni.lapenta@kuleuven.be
We report a new particle in cell (PIC) method based on the semi-implicit approach. The novelty of the new method is that, unlike any of its semi-implicit predecessors, it at the same time retains the explicit computational cycle and conserves energy exactly. Recent research has presented fully implicit methods where energy conservation is obtained as part of a non-linear iteration procedure. The new method (referred to as the Energy Conserving Semi-Implicit Method, ECSIM), instead, does not require any non-linear iteration, and its computational cycle is similar to that of explicit PIC. The properties of the new method are: i) it conserves energy exactly, to round-off, for any time step or grid spacing; ii) it is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency and allowing the user to select any desired time step; iii) it eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length; iv) the particle mover has a computational complexity identical to that of explicit PIC; only the field solver has an increased computational cost. The new ECSIM is tested in a number of benchmarks where accuracy and computational performance are tested. - Highlights: • We present a new fully energy conserving semi-implicit particle in cell (PIC) method based on the implicit moment method (IMM). The new method is called the Energy Conserving Semi-Implicit Method (ECSIM). • The novelty of the new method is that, unlike any of its predecessors, it at the same time retains the explicit computational cycle and conserves energy exactly. • The new method is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency. • The new method eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length.
• These features are achieved at a reduced cost compared with either previous IMM or fully implicit implementation of PIC.
Motion synthesis and force distribution analysis for a biped robot.
Trojnacki, Maciej T; Zielińska, Teresa
2011-01-01
In this paper, a method of generating biped robot motion using recorded human gait is presented. The recorded data were modified to take into account the velocities available to the robot drives. The data include only selected joint angles, so the missing values were obtained by considering the dynamic postural stability of the robot, which means obtaining an adequate trajectory of the so-called Zero Moment Point (ZMP). Also described is the authors' method of determining the distribution of ground reaction forces during the biped robot's dynamically stable walk. Following the description of the equations characterizing the dynamics of the robot's motion, the values of the components of the ground reaction forces were determined symbolically, as well as the coordinates of the points of contact between the robot's feet and the ground. The theoretical considerations have been supported by computer simulation and animation of the robot's motion, carried out using the Matlab/Simulink package and the Simulink 3D Animation Toolbox, which has confirmed the proposed method.
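For a point-mass model of the robot, the ZMP x-coordinate follows from balancing the moments of gravity, inertial forces, and the ground reaction: x_zmp = sum_i m_i((a_z,i + g)·x_i - a_x,i·z_i) / sum_i m_i(a_z,i + g). A minimal sketch (simplified: the angular momentum of individual links is neglected, and the numbers are hypothetical):

```python
G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(masses, xs, zs, ax, az):
    """x-coordinate of the Zero Moment Point for a set of point masses.
    xs, zs: mass positions (m); ax, az: linear accelerations (m/s^2)."""
    num = sum(m * ((azi + G) * xi - axi * zi)
              for m, xi, zi, axi, azi in zip(masses, xs, zs, ax, az))
    den = sum(m * (azi + G) for m, azi in zip(masses, az))
    return num / den

# A static single mass: the ZMP sits directly under the centre of mass.
static = zmp_x([50.0], [0.1], [0.9], [0.0], [0.0])       # -> 0.1
# Forward acceleration shifts the ZMP backwards, towards the heel.
accelerating = zmp_x([50.0], [0.1], [0.9], [1.0], [0.0])
```

Dynamic stability then amounts to keeping the computed ZMP inside the support polygon of the feet throughout the gait.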
Geostatistics for spatial genetic structures: study of wild populations of perennial ryegrass.
Monestiez, P; Goulard, M; Charmet, G
1994-04-01
Methods based on geostatistics were applied to quantitative traits of agricultural interest measured on a collection of 547 wild populations of perennial ryegrass in France. The mathematical background of these methods, which resembles spatial autocorrelation analysis, is briefly described. When a single variable is studied, the spatial structure analysis is similar to spatial autocorrelation analysis, and a spatial prediction method, called "kriging", gives a filtered map of the spatial pattern over all the sampled area. When complex interactions of agronomic traits with different evaluation sites define a multivariate structure for the spatial analysis, geostatistical methods allow the spatial variations to be broken down into two main spatial structures with ranges of 120 km and 300 km, respectively. The predicted maps that corresponded to each range were interpreted as a result of the isolation-by-distance model and as a consequence of selection by environmental factors. Practical collecting methodology for breeders may be derived from such spatial structures.
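The structure analysis described rests on the empirical semivariogram, the mean squared half-difference of values as a function of separation distance; kriging then interpolates using a model fitted to it. A minimal sketch of the variogram step (the coordinates and trait values are hypothetical, not the ryegrass data):

```python
def semivariogram(points, values, bins, bin_width):
    """Empirical semivariance gamma(h) = mean of 0.5 * (z_i - z_j)^2
    over pairs whose separation distance falls in each bin."""
    sums = [0.0] * bins
    counts = [0] * bins
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            h = (dx * dx + dy * dy) ** 0.5
            b = int(h // bin_width)
            if b < bins:
                sums[b] += 0.5 * (values[i] - values[j]) ** 2
                counts[b] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

# Hypothetical transect: nearby sites are similar, distant sites dissimilar.
pts = [(0, 0), (1, 0), (2, 0), (10, 0), (11, 0), (20, 0)]
vals = [1.0, 1.1, 0.9, 3.0, 3.2, 5.0]
gamma = semivariogram(pts, vals, bins=5, bin_width=5)
```

A rising gamma that levels off at some range is the signature of spatial structure; the 120 km and 300 km ranges in the study are exactly such fitted parameters.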
Spectral Regression Discriminant Analysis for Hyperspectral Image Classification
NASA Astrophysics Data System (ADS)
Pan, Y.; Wu, J.; Huang, H.; Liu, J.
2012-08-01
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for hyperspectral image classification. Manifold learning methods, such as Locally Linear Embedding, Isomap, and the Laplacian Eigenmap, are popular for dimensionality reduction. However, a disadvantage of many manifold learning methods is that their computations usually involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we introduce a new dimensionality reduction method, called Spectral Regression Discriminant Analysis (SRDA). SRDA casts the problem of learning an embedding function into a regression framework, which avoids eigen-decomposition of dense matrices. Also, within the regression-based framework, different kinds of regularizers can be naturally incorporated into our algorithm, which makes it more flexible. It can make efficient use of data points to discover the intrinsic discriminant structure in the data. Experimental results on the Washington DC Mall and AVIRIS Indian Pines hyperspectral data sets demonstrate the effectiveness of the proposed method.
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison of the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for the problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms the other sampling methods, especially when high-order ODEs are employed and/or the oversampling ratio is low.
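The least squares PCE itself is compact: evaluate an orthogonal polynomial basis at the sample points and solve the resulting overdetermined system. A minimal 1D sketch with Legendre polynomials and a Monte Carlo design (illustrative only; the review's examples are higher-dimensional):

```python
import random

def legendre(k, x):
    """Legendre polynomials P_0..P_2, orthogonal for uniform inputs on [-1, 1]."""
    return [1.0, x, 1.5 * x * x - 0.5][k]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_pce(samples, outputs, order=3):
    """Least squares PCE: minimise ||Psi c - y||_2 via the normal equations."""
    Psi = [[legendre(k, x) for k in range(order)] for x in samples]
    AtA = [[sum(Psi[i][a] * Psi[i][b] for i in range(len(samples)))
            for b in range(order)] for a in range(order)]
    Aty = [sum(Psi[i][a] * outputs[i] for i in range(len(samples)))
           for a in range(order)]
    return solve(AtA, Aty)

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(200)]   # Monte Carlo design
ys = [x * x for x in xs]                           # model: f(x) = x^2
coeffs = fit_pce(xs, ys)
# Since x^2 = (1/3) P0 + (2/3) P2 exactly, the fit recovers those coefficients.
```

The sampling strategies surveyed in the paper differ only in how `xs` is chosen; the fitting step is unchanged.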
Color image segmentation with support vector machines: applications to road signs detection.
Cyganek, Bogusław
2008-08-01
In this paper we propose an efficient color segmentation method based on the Support Vector Machine classifier operating in one-class mode. The method has been developed especially for a road sign recognition system, although it can be used in other applications. The main advantage of the proposed method comes from the fact that segmentation of the characteristic colors is performed not in the original space but in a higher-dimensional feature space, where better encapsulation of the data by a hypersphere can usually be achieved. Moreover, the classifier does not try to capture the whole distribution of the input data, which is often difficult to achieve. Instead, characteristic data samples, called support vectors, are selected, which allow construction of the tightest hypersphere that encloses the majority of the input data. Classification of a test sample then simply consists of measuring its distance to the centre of the found hypersphere. The experimental results show the high accuracy and speed of the proposed method.
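The distance-to-centre decision rule is easy to sketch. Note that this stand-in uses a crude centroid-plus-quantile sphere in the raw RGB space, not the paper's one-class SVM in a higher-dimensional feature space, and the training pixels are hypothetical:

```python
def fit_hypersphere(points, quantile=0.9):
    """Crude one-class model: centre = mean of the training points,
    radius = a distance quantile (so a few outliers fall outside)."""
    n, dim = len(points), len(points[0])
    centre = [sum(p[d] for p in points) / n for d in range(dim)]
    dists = sorted(sum((p[d] - centre[d]) ** 2 for d in range(dim)) ** 0.5
                   for p in points)
    radius = dists[min(int(quantile * n), n - 1)]
    return centre, radius

def is_target_colour(pixel, centre, radius):
    """Accept a pixel iff it lies inside the fitted hypersphere."""
    d = sum((pixel[i] - centre[i]) ** 2 for i in range(len(centre))) ** 0.5
    return d <= radius

# Hypothetical "road-sign red" training pixels in RGB.
reds = [(200, 30, 35), (210, 25, 30), (190, 40, 45), (205, 35, 25),
        (198, 28, 38), (215, 30, 32), (195, 33, 40), (208, 27, 29),
        (202, 31, 36), (199, 29, 34)]
centre, radius = fit_hypersphere(reds)
inside = is_target_colour((203, 30, 33), centre, radius)    # reddish pixel
outside = is_target_colour((30, 200, 40), centre, radius)   # greenish pixel
```

The one-class SVM improves on this sketch precisely because its sphere lives in a feature space, so the enclosed region in RGB space need not be spherical.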
Effects of band selection on endmember extraction for forestry applications
NASA Astrophysics Data System (ADS)
Karathanassi, Vassilia; Andreou, Charoula; Andronis, Vassilis; Kolokoussis, Polychronis
2014-10-01
In spectral unmixing theory, data reduction techniques play an important role, as hyperspectral imagery contains an immense amount of data, posing many challenging problems such as data storage, computational efficiency, and the so-called "curse of dimensionality". Feature extraction and feature selection are the two main approaches to dimensionality reduction. Feature extraction techniques reduce the dimensionality of hyperspectral data by applying transforms to the data. Feature selection techniques retain the physical meaning of the data by selecting, from the input hyperspectral dataset, a set of bands that mainly contain the information needed for spectral unmixing. Although feature selection techniques are well known for their dimensionality reduction potential, they are rarely used in the unmixing process. The majority of the existing state-of-the-art dimensionality reduction methods set criteria on the spectral information derived from the whole wavelength range in order to define the optimum spectral subspace. These criteria are not associated with any particular application but with data statistics, such as correlation and entropy values. However, each application is associated with specific land cover materials, whose spectral characteristics present variations at specific wavelengths. In forestry, for example, many applications focus on tree leaves, in which specific pigments such as chlorophyll, xanthophyll, etc. determine the wavelengths where tree species, diseases, etc. can be detected. For such applications, when the unmixing process is applied, the tree species, diseases, etc. are considered the endmembers of interest. This paper focuses on investigating the effects of band selection on endmember extraction by exploiting the information of the vegetation absorbance spectral zones.
More precisely, it is explored whether endmember extraction can be optimized when specific sets of initial bands related to leaf spectral characteristics are selected. Experiments comprise application of well-known signal subspace estimation and endmember extraction methods on a hyperspectral imagery that presents a forest area. Evaluation of the extracted endmembers showed that more forest species can be extracted as endmembers using selected bands.
NASA Astrophysics Data System (ADS)
Viironen, K.; Marín-Franch, A.; López-Sanjuan, C.; Varela, J.; Chaves-Montero, J.; Cristóbal-Hornillos, D.; Molino, A.; Fernández-Soto, A.; Vilella-Rojo, G.; Ascaso, B.; Cenarro, A. J.; Cerviño, M.; Cepa, J.; Ederoclite, A.; Márquez, I.; Masegosa, J.; Moles, M.; Oteo, I.; Pović, M.; Aguerri, J. A. L.; Alfaro, E.; Aparicio-Villegas, T.; Benítez, N.; Broadhurst, T.; Cabrera-Caño, J.; Castander, J. F.; Del Olmo, A.; González Delgado, R. M.; Husillos, C.; Infante, L.; Martínez, V. J.; Perea, J.; Prada, F.; Quintana, J. M.
2015-04-01
Context. Most observational results on the high redshift restframe UV-bright galaxies are based on samples pinpointed using the so-called dropout technique or Ly-α selection. However, the availability of multifilter data now allows the dropout selections to be replaced by direct methods based on photometric redshifts. In this paper we present the methodology to select and study the population of high redshift galaxies in the ALHAMBRA survey data. Aims: Our aim is to develop a less biased methodology than the traditional dropout technique to study the high redshift galaxies in ALHAMBRA and other multifilter data. Thanks to the wide area ALHAMBRA covers, we especially aim at contributing to the study of the brightest, least frequent, high redshift galaxies. Methods: The methodology is based on redshift probability distribution functions (zPDFs). It is shown how a clean galaxy sample can be obtained by selecting the galaxies with high integrated probability of being within a given redshift interval. However, reaching both a complete and clean sample with this method is challenging. Hence, a method to derive statistical properties by summing the zPDFs of all the galaxies in the redshift bin of interest is introduced. Results: Using this methodology we derive the galaxy rest frame UV number counts in five redshift bins centred at z = 2.5,3.0,3.5,4.0, and 4.5, being complete up to the limiting magnitude at mUV(AB) = 24, where mUV refers to the first ALHAMBRA filter redwards of the Ly-α line. With the wide field ALHAMBRA data we especially contribute to the study of the brightest ends of these counts, accurately sampling the surface densities down to mUV(AB) = 21-22. Conclusions: We show that using the zPDFs it is easy to select a very clean sample of high redshift galaxies. 
We also show that it is better to do statistical analysis of the properties of galaxies using a probabilistic approach, which takes into account both the incompleteness and contamination issues in a natural way. Based on observations collected at the German-Spanish Astronomical Center, Calar Alto, jointly operated by the Max-Planck-Institut für Astronomie (MPIA) at Heidelberg and the Instituto de Astrofísica de Andalucía (CSIC).
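The zPDF-based selection and stacking described in this record can be sketched numerically. The following is a minimal illustration on a synthetic catalogue of Gaussian zPDFs, not the ALHAMBRA pipeline; all names, the grid, and the 90% probability cut are assumptions for the example.

```python
import numpy as np

def integrated_probability(z_grid, pdf, z_min, z_max):
    """Probability mass of a galaxy's zPDF inside [z_min, z_max] (rectangle rule)."""
    dz = z_grid[1] - z_grid[0]              # assumes a uniform grid
    mask = (z_grid >= z_min) & (z_grid <= z_max)
    return pdf[mask].sum() * dz

# Hypothetical catalogue: each row is one galaxy's normalized zPDF on a common grid.
rng = np.random.default_rng(0)
z_grid = np.linspace(0.0, 6.0, 601)
centers = rng.uniform(2.0, 5.0, size=100)
pdfs = np.exp(-0.5 * ((z_grid - centers[:, None]) / 0.2) ** 2)
pdfs /= pdfs.sum(axis=1, keepdims=True) * (z_grid[1] - z_grid[0])

# Clean-sample selection: keep galaxies with >90% probability of lying in the bin.
p_bin = np.array([integrated_probability(z_grid, p, 2.25, 2.75) for p in pdfs])
clean_sample = p_bin > 0.9

# Statistical counts: sum the zPDF mass of *all* galaxies inside the bin, which
# handles incompleteness and contamination probabilistically.
stacked_count = p_bin.sum()
```

The clean-sample cut favours purity over completeness; the stacked count uses every galaxy's fractional probability instead of a hard cut.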
3D-QSPR Method of Computational Technique Applied on Red Reactive Dyes by Using CoMFA Strategy
Mahmood, Uzma; Rashid, Sitara; Ali, S. Ishrat; Parveen, Rasheeda; Zaheer-ul-Haq; Ambreen, Nida; Khan, Khalid Mohammed; Perveen, Shahnaz; Voelter, Wolfgang
2011-01-01
Cellulose fiber is a tremendous natural resource that has broad application in various productions including the textile industry. The dyes commonly used for cellulose printing are “reactive dyes” because of their high wet fastness and brilliant colors. The interaction of various dyes with the cellulose fiber depends upon the physiochemical properties that are governed by specific features of the dye molecule. The binding pattern of the reactive dye with cellulose fiber is called the ligand-receptor concept. In the current study, the three-dimensional quantitative structure property relationship (3D-QSPR) technique was applied to understand the interactions of red reactive dyes with cellulose by the Comparative Molecular Field Analysis (CoMFA) method. This method was successfully utilized to predict a reliable model. The predicted model gives satisfactory statistical results, and in light of these it was further analyzed. Additionally, the graphical outcomes (contour maps) help us to understand the modification pattern and to correlate the structural changes with respect to the absorptivity. Furthermore, the final selected model has the potential to assist in understanding the characteristics of the external test set. The study could be helpful to design new reactive dyes with better affinity and selectivity for the cellulose fiber. PMID:22272108
Predicting bacteriophage proteins located in host cell with feature selection technique.
Ding, Hui; Liang, Zhi-Yong; Guo, Feng-Biao; Huang, Jian; Chen, Wei; Lin, Hao
2016-04-01
A bacteriophage is a virus that can infect a bacterium. The fate of an infected bacterium is determined by the bacteriophage proteins located in the host cell. Thus, reliably identifying bacteriophage proteins located in the host cell is extremely important to understand their functions and discover potential anti-bacterial drugs. Hence, in this paper, a computational method was developed to recognize bacteriophage proteins located in host cells based only on their amino acid sequences. The analysis of variance (ANOVA) combined with incremental feature selection (IFS) was proposed to optimize the feature set. Using jackknife cross-validation, our method can discriminate between bacteriophage proteins located in a host cell and those not located in a host cell with a maximum overall accuracy of 84.2%, and can further classify bacteriophage proteins located in host cell cytoplasm and in host cell membranes with a maximum overall accuracy of 92.4%. To enhance the practical value of the method, we built a web server called PHPred (〈http://lin.uestc.edu.cn/server/PHPred〉). We believe that PHPred will become a powerful tool to study bacteriophage proteins located in host cells and to guide related drug discovery. Copyright © 2016 Elsevier Ltd. All rights reserved.
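The ANOVA-plus-IFS scheme described in this abstract can be sketched generically: rank features by their one-way ANOVA F-score, then add them one at a time in that order, keeping the subset that maximizes jackknife (leave-one-out) accuracy. This is an illustrative sketch with a simple nearest-centroid classifier standing in for the paper's predictor; it is not the authors' implementation.

```python
import numpy as np

def anova_f(X, y):
    """One-way ANOVA F-score per feature (between- over within-group variance)."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    ss_between = sum((y == c).sum() * (X[y == c].mean(axis=0) - overall) ** 2
                     for c in classes)
    ss_within = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
                    for c in classes)
    df_b, df_w = len(classes) - 1, len(y) - len(classes)
    return (ss_between / df_b) / (ss_within / df_w + 1e-12)

def loo_accuracy(X, y):
    """Jackknife accuracy of a nearest-centroid classifier."""
    correct = 0
    for i in range(len(y)):
        tr = np.arange(len(y)) != i
        cents = {c: X[tr & (y == c)].mean(axis=0) for c in np.unique(y[tr])}
        pred = min(cents, key=lambda c: np.linalg.norm(X[i] - cents[c]))
        correct += pred == y[i]
    return correct / len(y)

def incremental_feature_selection(X, y):
    """IFS: grow the feature set in descending F-score order, keep the best subset."""
    order = np.argsort(anova_f(X, y))[::-1]
    best_k, best_acc = 1, 0.0
    for k in range(1, len(order) + 1):
        acc = loo_accuracy(X[:, order[:k]], y)
        if acc > best_acc:
            best_k, best_acc = k, acc
    return order[:best_k], best_acc
```

The F-score ranking is cheap and classifier-independent; the IFS loop then lets the classifier decide how many of the top-ranked features actually help.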
FTIR spectra and mechanical strength analysis of some selected rubber derivatives.
Gunasekaran, S; Natarajan, R K; Kala, A
2007-10-01
Rubber materials have a wide range of commercial applications such as infant diapers, feminine hygiene products, drug delivery devices and incontinence products such as rubber tubes, tyres, etc. In the present work, studies on the mechanical properties of some selected rubber materials, viz. natural rubber (NR), styrene butadiene rubber (SBR), nitrile butadiene rubber (NBR) and ethylene propylene diene monomer (EPDM), have been carried out in three states, viz. raw, vulcanized and reinforced. To enhance the quality of rubber elastomers, an attempt is made to prepare new elastomers called polyblends; in the present study NR is blended with NBR and with EPDM. We report here a novel approach for the evaluation of various physico-mechanical properties such as mechanical strength, tensile strength, elongation and hardness. The method is simple, direct and fast, and involves infrared spectral measurements for the evaluation of these properties. Using modern infrared spectroscopy, the mechanical strength of these rubber materials has been analyzed by calculating internal standards among the methyl and methylene group vibrational frequencies obtained from FTIR spectroscopy. Tensile strength measurements were also carried out with a universal testing machine. The results pertaining to the physico-mechanical properties of the rubber derivatives obtained by the IR-based method are in good agreement with data obtained from the standard methods.
PVP-SVM: Sequence-Based Prediction of Phage Virion Proteins Using a Support Vector Machine
Manavalan, Balachandran; Shin, Tae H.; Lee, Gwang
2018-01-01
Accurately identifying bacteriophage virion proteins from uncharacterized sequences is important to understand interactions between the phage and its host bacteria in order to develop new antibacterial drugs. However, identification of such proteins using experimental techniques is expensive and often time consuming; hence, development of an efficient computational algorithm for the prediction of phage virion proteins (PVPs) prior to in vitro experimentation is needed. Here, we describe a support vector machine (SVM)-based PVP predictor, called PVP-SVM, which was trained with 136 optimal features. A feature selection protocol was employed to identify the optimal features from a large set that included amino acid composition, dipeptide composition, atomic composition, physicochemical properties, and chain-transition-distribution. PVP-SVM achieved an accuracy of 0.870 during leave-one-out cross-validation, which was 6% higher than control SVM predictors trained with all features, indicating the efficiency of the feature selection method. Furthermore, PVP-SVM displayed superior performance compared to the currently available method, PVPred, and two other machine-learning methods developed in this study when objectively evaluated with an independent dataset. For the convenience of the scientific community, a user-friendly and publicly accessible web server has been established at www.thegleelab.org/PVP-SVM/PVP-SVM.html. PMID:29616000
NASA Technical Reports Server (NTRS)
Lee, Jonathan A.
2010-01-01
High pressure hydrogen (H) gas has been known to have a deleterious effect on the mechanical properties of certain metals, particularly the notched tensile strength, fracture toughness and ductility. The ratio of these properties in hydrogen as compared to helium or air is called the Hydrogen Environment Embrittlement (HEE) Index, a useful metric for classifying the severity of H embrittlement and for aiding material screening and selection for safe use in H gas environments. A comprehensive worldwide database compilation over the past 50 years has shown that the HEE index is mostly collected at two conveniently high H pressure points, 5 ksi and 10 ksi, near room temperature. Since H embrittlement is directly related to pressure, the lack of HEE index data at other pressure points has posed a technical problem for designers needing to select appropriate materials at a specific H pressure for various applications in the aerospace and alternative and renewable energy sectors for an emerging hydrogen economy. Based on a power-law mathematical relationship, an empirical method to accurately predict the HEE index as a function of H pressure at constant temperature is presented, with a brief review of Sievert's law for gas-metal absorption.
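A power-law relation of the kind the abstract describes can be fitted and used for interpolation as follows. The HEE-index values below are illustrative placeholders, not measured data from the report; the fit assumes the form HEE = a * P**b at constant temperature.

```python
import numpy as np

# Hypothetical HEE-index data for one alloy at the two tabulated pressures (ksi).
pressures = np.array([5.0, 10.0])
hee_index = np.array([0.80, 0.70])   # illustrative values, not measured data

# Fit HEE = a * P**b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(pressures), np.log(hee_index), 1)
a = np.exp(log_a)

def predict_hee(p_ksi):
    """Interpolate/extrapolate the HEE index at pressure p_ksi via the power law."""
    return a * p_ksi ** b

# e.g. estimate the index at an intermediate pressure of 7.5 ksi
mid = predict_hee(7.5)
```

With only the two standard pressure points available, the log-log fit passes exactly through both, and intermediate pressures fall monotonically between them.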
NASA Astrophysics Data System (ADS)
Zheng, Jinde; Pan, Haiyang; Yang, Shubao; Cheng, Junsheng
2018-01-01
Multiscale permutation entropy (MPE) is a recently proposed nonlinear dynamic method for measuring the randomness and detecting nonlinear dynamic changes of time series, and can be used effectively to extract nonlinear dynamic fault features from vibration signals of rolling bearings. To overcome the drawback of the coarse-graining process in MPE, an improved MPE method called generalized composite multiscale permutation entropy (GCMPE) is proposed in this paper. The influence of parameters on GCMPE and its comparison with MPE are also studied by analyzing simulation data. GCMPE was applied to fault feature extraction from vibration signals of rolling bearings; then, based on GCMPE, the Laplacian score for feature selection, and a particle swarm optimization-based support vector machine, a new fault diagnosis method for rolling bearings is put forward. Finally, the proposed method was applied to the experimental data of a rolling bearing. The analysis results show that the proposed method can effectively realize fault diagnosis of rolling bearings and has a higher fault recognition rate than existing methods.
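The building blocks of such methods can be sketched briefly: ordinal-pattern (permutation) entropy of a series, and composite coarse-graining that averages the entropy over all phase offsets at a given scale. This is a generic sketch of those two ideas, not the paper's GCMPE, whose generalized coarse-graining differs in detail.

```python
import numpy as np
from itertools import permutations
from math import log, factorial

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy of x with embedding dimension m, delay tau."""
    counts = {p: 0 for p in permutations(range(m))}
    n = len(x) - (m - 1) * tau
    for i in range(n):
        window = x[i:i + m * tau:tau]
        counts[tuple(np.argsort(window))] += 1   # ordinal pattern of the window
    probs = np.array([c for c in counts.values() if c > 0]) / n
    return -(probs * np.log(probs)).sum() / log(factorial(m))

def composite_mpe(x, scale, m=3):
    """Composite MPE at one scale: average PE over all coarse-graining offsets."""
    pes = []
    for k in range(scale):
        n = (len(x) - k) // scale
        coarse = np.asarray(x[k:k + n * scale]).reshape(n, scale).mean(axis=1)
        pes.append(permutation_entropy(coarse, m))
    return float(np.mean(pes))
```

A purely monotone signal yields entropy 0 (one ordinal pattern), while white noise approaches 1; bearing faults shift these values at characteristic scales.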
2013-01-01
Background Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike’s information criterion using h-likelihood to select the best fitting model. Methods We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike’s information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Results Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. 
Using Akaike’s information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. Conclusion The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring. PMID:23827014
ERIC Educational Resources Information Center
Stout State Univ., Menomonie, WI. Graduate School.
The Graduate College at Stout State University is considering an option to the present A-F grading system called "Mastery Grading," based on a concept called "teaching for mastery." This involves carefully defining each course in terms of the specific competencies which the student is expected to develop as a result of…
A coevolutionary arms race causes ecological speciation in crossbills.
Smith, Julie W; Benkman, Craig W
2007-04-01
We examined three ecological factors potentially causing premating reproductive isolation to determine whether divergent selection as a result of coevolution between South Hills crossbills (Loxia curvirostra complex) and Rocky Mountain lodgepole pine (Pinus contorta latifolia) promotes ecological speciation. One factor was habitat isolation arising because of enhanced seed defenses of lodgepole pine in the South Hills. This caused the crossbill call types (morphologically and vocally differentiated forms) adapted to alternative resources to be rare. Another occurred when crossbills of other call types moved into the South Hills late in the breeding season, when feeding conditions were deteriorating, so that relatively few non-South Hills crossbills bred ("immigrant infecundity"). Finally, among those crossbills that bred, pairing was strongly assortative by call type (behavioral isolation). Total reproductive isolation between South Hills crossbills and the two other crossbills most common in the South Hills (call types 2 and 5) summed to 0.9975 and 0.9998, respectively, on a scale of 0 (no reproductive isolation) to 1 (complete reproductive isolation). These extremely high levels of reproductive isolation indicate that the divergent selection resulting from the coevolutionary arms race between crossbills and lodgepole pine is causing the South Hills crossbill to speciate.
Pham, Trâm; Giraud, Sandrine; Schuliar, Gaëlle; Rougeron, Amandine; Bouchara, Jean-Philippe
2015-06-01
The Scedosporium apiospermum complex is responsible for a large variety of infections in human. Members of this complex have become emerging fungal pathogens with an increasing occurrence in patients with underlying conditions such as immunosuppression or cystic fibrosis. A better knowledge of these fungi and of the sources of contamination of the patients is required and more accurate detection methods from the environment are needed. In this context, a highly selective culture medium was developed in the present study. Thus, various aliphatic, cyclic, or aromatic compounds were tested as the sole carbon source, in combination with some inorganic nitrogen sources and fungicides. The best results were obtained with 4-hydroxy-benzoate combined with ammonium sulfate and the fungicides dichloran and benomyl. This new culture medium called Scedo-Select III was shown to support growth of all species of the S. apiospermum complex. Subsequently, this new culture medium was evaluated successfully on water and soil samples, exhibiting higher sensitivity and selectivity than the previously described SceSel+ culture medium. Therefore, this easy-to-prepare and synthetic semi-selective culture medium may be useful to clarify the ecology of these fungi and to identify their reservoirs in patients' environment. © The Author 2015. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Olivieri, Alejandro C
2005-08-01
Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.
A transposase strategy for creating libraries of circularly permuted proteins.
Mehta, Manan M; Liu, Shirley; Silberg, Jonathan J
2012-05-01
A simple approach for creating libraries of circularly permuted proteins is described that is called PERMutation Using Transposase Engineering (PERMUTE). In PERMUTE, the transposase MuA is used to randomly insert a minitransposon that can function as a protein expression vector into a plasmid that contains the open reading frame (ORF) being permuted. A library of vectors that express different permuted variants of the ORF-encoded protein is created by: (i) using bacteria to select for target vectors that acquire an integrated minitransposon; (ii) excising the ensemble of ORFs that contain an integrated minitransposon from the selected vectors; and (iii) circularizing the ensemble of ORFs containing integrated minitransposons using intramolecular ligation. Construction of a Thermotoga neapolitana adenylate kinase (AK) library using PERMUTE revealed that this approach produces vectors that express circularly permuted proteins with distinct sequence diversity from existing methods. In addition, selection of this library for variants that complement the growth of Escherichia coli with a temperature-sensitive AK identified functional proteins with novel architectures, suggesting that PERMUTE will be useful for the directed evolution of proteins with new functions.
Context-dependent vocal mimicry in a passerine bird.
Goodale, Eben; Kotagama, Sarath W
2006-04-07
How do birds select the sounds they mimic, and in what contexts do they use vocal mimicry? Some birds show a preference for mimicking other species' alarm notes, especially in situations when they appear to be alarmed. Yet no study has demonstrated that birds change the call types they mimic with changing contexts. We found that greater racket-tailed drongos (Dicrurus paradiseus) in the rainforest of Sri Lanka mimic the calls of predators and the alarm-associated calls of other species more often than would be expected from the frequency of these sounds in the acoustic environment. Drongos include this alarm-associated mimicry in their own alarm vocalizations, while incorporating other species' songs and contact calls in their own songs. Drongos show an additional level of context specificity by mimicking other species' ground predator-specific call types when mobbing. We suggest that drongos learn other species' calls and their contexts while interacting with these species in mixed flocks. The drongos' behaviour demonstrates that alarm-associated calls can have learned components, and that birds can learn the appropriate usage of calls that encode different types of information.
Zimmitti, S J
1999-01-01
In an eastern North American tree frog, the spring peeper (Pseudacris crucifer), calling rate has been correlated with reproductive success in the field. To determine the sources of individual variation in calling rate in this species, I analyzed males calling at rates greater than and less than the chorus average throughout one breeding season. Compared to low-rate callers, high-rate callers were relatively larger, heavier, older, and in better body condition, and their muscles used in calling had higher activities of the enzymes citrate synthase and beta-hydroxyacyl-CoA dehydrogenase. This muscle profile is functionally matched by cardiovascular correlates, as indicated by the larger ventricles and higher blood hemoglobin concentrations in high-calling-rate males. These cardiovascular features are much less developed in females and may result from the fact that females do not engage in vigorous calling behavior. In P. crucifer, a male's calling rate may function as an indicator of the presence of a suite of functionally interrelated traits responsible for the maintenance of this sexually selected display behavior.
Abdul Rashid, Rima Marhayu; Mohamed, Majdah; Hamid, Zaleha Abdul; Dahlui, Maznah
2013-01-01
To compare the effectiveness of different methods of recall for repeat Pap smear among women who had normal smears in the previous screening. Prospective randomized controlled study. All community clinics in Klang under the Ministry of Health Malaysia. Women of Klang who attended cervical screening, had a normal Pap smear in the previous year, and were due for a repeat smear were recruited and randomly assigned to four different recall methods for the repeat smear: postal letter, registered letter, short message by phone (SMS) or phone call. Outcomes were the number and percentage of women who responded to the recall within 8 weeks of receiving it, irrespective of whether a Pap test was conducted, and the number of women in each recall arm who came for a repeat Pap smear. The rates of recall messages reaching the women when using letter, registered letter, SMS and phone calls were 79%, 87%, 66% and 68%, respectively. However, the positive responses to recall by letter, registered letter, phone message and telephone call were 23.9%, 23.0%, 32.9% and 50.9%, respectively (p<0.05). Furthermore, more women who received recall by phone call were screened (p<0.05) compared to those who received recall by postal letter (OR=2.38, CI=1.56-3.62). Both ordinary and registered letters were more likely to reach patients than phone messages or calls. The response to the recall method and the uptake of repeat smears, however, were highest via phone call, indicating the importance of direct communication.
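The odds ratio quoted for phone call versus postal letter can be reproduced in form with a standard 2x2-table calculation (odds ratio with a Wald confidence interval). The counts below are illustrative, not the study's raw data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed & screened, b = exposed & not screened,
    c = unexposed & screened, d = unexposed & not screened."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: phone-call arm vs postal-letter arm, screened vs not.
or_, lo, hi = odds_ratio_ci(55, 45, 24, 76)
```

The Wald interval is computed on the log scale because log(OR) is approximately normal; the interval is then exponentiated back.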
ERIC Educational Resources Information Center
Hermanowicz, Joseph C.
2013-01-01
Select groups and organizations embrace practices that perpetuate their inferiority. The result is the phenomenon we call "mediocrity." This article examines the conditions under which mediocrity is selected and maintained by groups over time. Mediocrity is maintained by a key social process: the marginalization of the adept, which is a…
Pollution Police: How to Determine Spectroscopic Selection Rules
ERIC Educational Resources Information Center
Selco, Jodye I.; Beery, Janet
2004-01-01
Students employ mathematics and physical chemistry in a project called Pollution Police to establish spectroscopic selection rules, and apply them to detect environmental contaminants from infrared spectra. This interdisciplinary project enables students to gain multiple information on molecular symmetry, and its role in the development of…
Influence of atmospheric properties on detection of wood-warbler nocturnal flight calls
NASA Astrophysics Data System (ADS)
Horton, Kyle G.; Stepanian, Phillip M.; Wainwright, Charlotte E.; Tegeler, Amy K.
2015-10-01
Avian migration monitoring can take on many forms; however, monitoring active nocturnal migration of land birds is limited to a few techniques. Avian nocturnal flight calls are currently the only method for describing migrant composition at the species level. However, as this method develops, more information is needed to understand the sources of variation in call detection. Additionally, few studies examine how detection probabilities differ under varying atmospheric conditions. We use nocturnal flight call recordings from captive individuals to explore the dependence of flight call detection on atmospheric temperature and humidity. Height or distance from origin had the largest influence on call detection, while temperature and humidity also influenced detectability at higher altitudes. Because flight call detection varies with both atmospheric conditions and flight height, improved monitoring across time and space will require correction for these factors to generate standardized metrics of songbird migration.
Masking as an effective quality control method for next-generation sequencing data analysis.
Yun, Sajung; Yun, Sijung
2014-12-13
Next generation sequencing produces base calls with low quality scores that can affect the accuracy of identifying simple nucleotide variation calls, including single nucleotide polymorphisms and small insertions and deletions. Here we compare the effectiveness of two data preprocessing methods, masking and trimming, and the accuracy of simple nucleotide variation calls on whole-genome sequence data from Caenorhabditis elegans. Masking substitutes low quality base calls with 'N's (undetermined bases), whereas trimming removes low quality bases, resulting in shorter read lengths. We demonstrate that masking is more effective than trimming in reducing the false-positive rate in single nucleotide polymorphism (SNP) calling. However, neither preprocessing method affected the false-negative rate in SNP calling with statistical significance compared to analysis without preprocessing. False-positive and false-negative rates for small insertions and deletions did not differ between masking and trimming. We recommend masking over trimming as the more effective preprocessing method for next generation sequencing data analysis, since masking reduces the false-positive rate in SNP calling without sacrificing the false-negative rate, although trimming is currently more commonly used in the field. The perl script for masking is available at http://code.google.com/p/subn/. The sequencing data used in the study were deposited in the Sequence Read Archive (SRX450968 and SRX451773).
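The two preprocessing strategies contrasted in this abstract can be sketched in a few lines of Python (a minimal illustration, not the authors' Perl script; the Phred threshold of Q20 and the function names are assumptions for the example):

```python
def phred_scores(quality_string, offset=33):
    """Convert an ASCII-encoded FASTQ quality string to Phred scores."""
    return [ord(c) - offset for c in quality_string]

def mask_read(seq, qual, threshold=20):
    """Masking: replace low-quality base calls with 'N'; read length is preserved."""
    scores = phred_scores(qual)
    return "".join(b if q >= threshold else "N" for b, q in zip(seq, scores))

def trim_read(seq, qual, threshold=20):
    """Trimming: cut the read at the first low-quality base; read length shrinks."""
    scores = phred_scores(qual)
    for i, q in enumerate(scores):
        if q < threshold:
            return seq[:i]
    return seq

seq, qual = "ACGTACGT", "IIII####"   # 'I' = Q40 (high), '#' = Q2 (low)
print(mask_read(seq, qual))  # ACGTNNNN
print(trim_read(seq, qual))  # ACGT
```

The example makes the paper's key distinction concrete: masking keeps read length and positional information (only the bad bases become ambiguous), while trimming discards everything after the first bad base.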
rpiCOOL: A tool for In Silico RNA-protein interaction detection using random forest.
Akbaripour-Elahabad, Mohammad; Zahiri, Javad; Rafeh, Reza; Eslami, Morteza; Azari, Mahboobeh
2016-08-07
Understanding the principles of RNA-protein interactions (RPIs) is of critical importance for insight into post-transcriptional gene regulation and for guiding studies of many complex diseases. The limitations and difficulties associated with experimental determination of RPIs create an urgent need for computational methods for RPI prediction. In this paper, we propose a machine learning method to detect RNA-protein interactions based on sequence information. We used motif information and repetitive patterns, extracted from experimentally validated RNA-protein interactions, in combination with sequence composition as descriptors to build an RPI prediction model via a random forest classifier. About 20% of the "sequence motif" and "nucleotide composition" features were selected as informative by the feature selection methods, suggesting that these two feature types contribute effectively to RPI detection. Results of 10-fold cross-validation experiments on three non-redundant benchmark datasets show better performance of the proposed method than the current state-of-the-art methods in terms of various performance measures. In addition, the results revealed that the accuracy of RPI prediction methods can vary considerably across organisms. We have implemented the proposed method, named rpiCOOL, as a stand-alone tool with a user-friendly graphical user interface (GUI) that enables researchers to predict RNA-protein interactions. rpiCOOL is freely available at http://biocool.ir/rpicool.html for non-commercial use. Copyright © 2016 Elsevier Ltd. All rights reserved.
An overview of NSPCG: A nonsymmetric preconditioned conjugate gradient package
NASA Astrophysics Data System (ADS)
Oppe, Thomas C.; Joubert, Wayne D.; Kincaid, David R.
1989-05-01
The most recent research-oriented software package developed as part of the ITPACK Project is called "NSPCG" since it contains many nonsymmetric preconditioned conjugate gradient procedures. It is designed to solve large sparse systems of linear algebraic equations by a variety of different iterative methods. One of the main purposes for the development of the package is to provide a common modular structure for research on iterative methods for nonsymmetric matrices. Another purpose for the development of the package is to investigate the suitability of several iterative methods for vector computers. Since the vectorizability of an iterative method depends greatly on the matrix structure, NSPCG allows great flexibility in the operator representation. The coefficient matrix can be passed in one of several different matrix data storage schemes. These sparse data formats allow matrices with a wide range of structures from highly structured ones such as those with all nonzeros along a relatively small number of diagonals to completely unstructured sparse matrices. Alternatively, the package allows the user to call the accelerators directly with user-supplied routines for performing certain matrix operations. In this case, one can use the data format from an application program and not be required to copy the matrix into one of the package formats. This is particularly advantageous when memory space is limited. Some of the basic preconditioners that are available are point methods such as Jacobi, Incomplete LU Decomposition and Symmetric Successive Overrelaxation as well as block and multicolor preconditioners. The user can select from a large collection of accelerators such as Conjugate Gradient (CG), Chebyshev (SI, for semi-iterative), Generalized Minimal Residual (GMRES), Biconjugate Gradient Squared (BCGS) and many others. The package is modular so that almost any accelerator can be used with almost any preconditioner.
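The accelerator/preconditioner split that NSPCG modularizes can be illustrated with a minimal preconditioned conjugate gradient in Python (a sketch under stated assumptions, not the Fortran package itself; the point Jacobi preconditioner shown is one of the basic preconditioners the abstract names, and the function names are illustrative):

```python
import numpy as np

def pcg(A, b, precondition, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient accelerator. The accelerator only
    needs matrix-vector products (A @ p) and a preconditioner solve
    (precondition(r)), mirroring the modular split described above."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = precondition(r)           # preconditioned residual
    p = z.copy()                  # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)     # step length along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precondition(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p # conjugate direction update
        rz = rz_new
    return x

# Point Jacobi preconditioner: divide the residual by the diagonal of A.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
jacobi = lambda r: r / np.diag(A)
x = pcg(A, b, jacobi)
print(x)  # solution ≈ [0.0909, 0.6364], i.e. [1/11, 7/11]
```

Swapping in a different `precondition` callable (e.g. an incomplete-LU solve) changes the preconditioner without touching the accelerator, which is the design point the package makes.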
Variable Selection for Regression Models of Percentile Flows
NASA Astrophysics Data System (ADS)
Fouad, G.
2017-12-01
Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. 
Variables suffered from a high degree of multicollinearity, possibly illustrating the co-evolution of climatic and physiographic conditions. Given the ineffectiveness of many variables used here, future work should develop new variables that target specific processes associated with percentile flows.
NASA Institute for Advanced Concepts
NASA Technical Reports Server (NTRS)
Cassanova, Robert A.
1999-01-01
The purpose of the NASA Institute for Advanced Concepts (NIAC) is to provide an independent, open forum for the external analysis and definition of space and aeronautics advanced concepts to complement the advanced concepts activities conducted within the NASA Enterprises. The NIAC will issue Calls for Proposals during each year of operation and will select revolutionary advanced concepts for grant or contract awards through a peer review process. Final selection of awards will be made with the concurrence of NASA's Chief Technologist. The operation of the NIAC is reviewed biannually by the NIAC Science, Exploration and Technology Council (NSETC), whose members are drawn from the senior levels of industry and universities. The process of defining the technical scope of the initial Call for Proposals began with the NIAC "Grand Challenges" workshop conducted on May 21-22, 1998 in Columbia, Maryland. The "Grand Challenges" resulting from this workshop became the essence of the technical scope for the first Phase I Call for Proposals, which was released on June 19, 1998 with a due date of July 31, 1998. The first Phase I Call for Proposals attracted 119 proposals. After a thorough peer review, prioritization by NIAC and technical concurrence by NASA, sixteen subgrants were awarded. The second Phase I Call for Proposals was released on November 23, 1998 with a due date of January 31, 1999. Sixty-three (63) proposals were received in response to this Call. On December 2-3, 1998, the NSETC met to review the progress and future plans of the NIAC. The next NSETC meeting is scheduled for August 5-6, 1999. The first Phase II Call for Proposals was released to the current Phase I grantees on February 3, 1999 with a due date of May 31, 1999. Plans for the second year of the contract include a continuation of the sequence of Phase I and Phase II Calls for Proposals and hosting the first NIAC Annual Meeting and USRA/NIAC Technical Symposium at NASA HQ.
Penalized nonparametric scalar-on-function regression via principal coordinates
Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu
2016-01-01
A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963
Zhao, Zhongming; Liu, Zhandong; Chen, Ken; Guo, Yan; Allen, Genevera I; Zhang, Jiajie; Jim Zheng, W; Ruan, Jianhua
2017-10-03
In this editorial, we first summarize the 2016 International Conference on Intelligent Biology and Medicine (ICIBM 2016) that was held on December 8-10, 2016 in Houston, Texas, USA, and then briefly introduce the ten research articles included in this supplement issue. ICIBM 2016 included four workshops or tutorials, four keynote lectures, four conference invited talks, eight concurrent scientific sessions and a poster session for 53 accepted abstracts, covering current topics in bioinformatics, systems biology, intelligent computing, and biomedical informatics. Through our call for papers, a total of 77 original manuscripts were submitted to ICIBM 2016. After peer review, 11 articles were selected in this special issue, covering topics such as single cell RNA-seq analysis method, genome sequence and variation analysis, bioinformatics method for vaccine development, and cancer genomics.
Recognition method of construction conflict based on driver's eye movement.
Xu, Yi; Li, Shiwu; Gao, Song; Tan, Derong; Guo, Dong; Wang, Yuqiong
2018-04-01
Drivers' eye movement data in simulated construction conflicts at different speeds were collected and analyzed to find the relationship between the drivers' eye movement and the construction conflict. On the basis of this relationship, the peak point of the wavelet-processed pupil diameter, the first point on the left side of the peak point and the first blink point after the peak point are selected as key points for locating construction conflict periods. On the basis of these key points and the GSA, a construction conflict recognition method, called the CCFRM, is proposed, and its recognition speed and location accuracy are verified. The good performance of the CCFRM confirmed the feasibility of the proposed key points for construction conflict recognition. Copyright © 2018 Elsevier Ltd. All rights reserved.
Phenotype definition and development--contributions from Group 7.
Wilcox, Marsha A; Paterson, Andrew D
2009-01-01
The papers in Genetic Analysis Workshop 16 Group 7 covered a wide range of topics. The effects of confounder misclassification and selection bias on association results were examined by one group. Another focused on bias introduced by various methods of accounting for treatment effects. Two groups used related methods to derive phenotypic traits. They used different analytic strategies for genetic associations with non-overlapping results (but because they used different sets of single-nucleotide polymorphisms (SNPs) and significance criteria, this is not surprising). Another group relied on the well-characterized definition of type 2 diabetes to show benefits of a novel predictive test. Transmission-ratio distortion was the focus of another paper. The results were extended to show a potential secondary benefit of the test to identify potentially mis-called SNPs. (c) 2009 Wiley-Liss, Inc.
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak
1996-01-01
Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
Van Neste, Christophe; Vandewoestyne, Mado; Van Criekinge, Wim; Deforce, Dieter; Van Nieuwerburgh, Filip
2014-03-01
Forensic scientists are currently investigating how to transition from capillary electrophoresis (CE) to massive parallel sequencing (MPS) for analysis of forensic DNA profiles. MPS offers several advantages over CE, such as virtually unlimited multiplexing of loci, combining both short tandem repeat (STR) and single nucleotide polymorphism (SNP) loci, small amplicons without constraints of size separation, more discrimination power, deep mixture resolution and sample multiplexing. We present our bioinformatic framework My-Forensic-Loci-queries (MyFLq) for analysis of MPS forensic data. For allele calling, the framework uses a MySQL reference allele database with regions of interest (ROIs) determined automatically by a generic maximal flanking algorithm, which makes it possible to use any STR or SNP forensic locus. Python scripts were designed to automatically make allele calls starting from raw MPS data. We also present a method to assess the usefulness and overall performance of a forensic locus with respect to MPS, as well as methods to estimate whether an unknown allele, whose sequence is not present in the MySQL database, is in fact a new allele or a sequencing error. The MyFLq framework was applied to an Illumina MiSeq dataset of a forensic Illumina amplicon library, generated from multilocus STR polymerase chain reaction (PCR) on both single-contributor samples and multiple-person DNA mixtures. Although the multilocus PCR was not yet optimized for MPS in terms of amplicon length or locus selection, results were excellent for most loci, with a high signal-to-noise ratio, correct allele calls, and a low limit of detection for minor DNA contributors in mixed DNA samples. Technically, forensic MPS affords great promise for routine implementation in forensic genomics. The method is also applicable to adjacent disciplines such as molecular autopsy in legal medicine and mitochondrial DNA research. Copyright © 2013 The Authors.
Published by Elsevier Ireland Ltd. All rights reserved.
Evaluation of wetland implementation strategies on phosphorus reduction at a watershed scale
NASA Astrophysics Data System (ADS)
Abouali, Mohammad; Nejadhashemi, A. Pouyan; Daneshvar, Fariborz; Adhikari, Umesh; Herman, Matthew R.; Calappi, Timothy J.; Rohn, Bridget G.
2017-09-01
Excessive nutrient use in agricultural practices is a major cause of water quality degradation around the world, which results in eutrophication of the freshwater systems. Among the nutrients, phosphorus enrichment has recently drawn considerable attention due to major environmental issues such as Lake Erie and Chesapeake Bay eutrophication. One approach for mitigating the impacts of excessive nutrients on water resources is the implementation of wetlands. However, proper site selection for wetland implementation is the key for effective water quality management at the watershed scale, which is the goal of this study. In this regard, three conventional and two pseudo-random targeting methods were considered. A watershed model called the Soil and Water Assessment Tool (SWAT) was coupled with another model called System for Urban Stormwater Treatment and Analysis IntegratioN (SUSTAIN) to simulate the impacts of wetland implementation scenarios in the Saginaw River watershed, located in Michigan. The inter-group similarities of the targeting strategies were investigated and it was shown that the level of similarity increases as the target area increases (0.54-0.86). In general, the conventional targeting method based on phosphorus load generated per unit area at the subwatershed scale had the highest average reduction among all the scenarios (44.46 t/year). However, when considering the total area of implemented wetlands, the conventional method based on long-term impacts of wetland implementation showed the highest amount of phosphorus reduction (36.44 t/year).
Udeagu, Chi-Chi N; Shah, Sharmila; Toussaint, Magalieta M; Pickett, Leonard
2017-11-01
The New York City Department of Health Disease Intervention Specialists (DIS) routinely contact newly HIV-diagnosed persons via telephone calls and in-person meetings to conduct partner services (PS) interviews in order to elicit the names and contact information of the HIV-exposed partners for notification and HIV-testing, and to assist clients with linkage to care. From October 2013 to December 2015, we offered PS interviews conducted via video-call alongside voice-call and in-person modes in a selected geographic area of NYC. PS interviews were conducted according to the clients' preferred mode (in-person, voice- or video-call) and location (health care facility, clients' residences, or other NYC locations). At the conclusion of the PS interviews, DIS elicited responses from persons interviewed via video-call on their perception, satisfaction and personal experiences using video-call for public health and personal purposes. Acceptance and satisfaction with PS interviews via video-call were high among clients aged <30 years, men who have sex with men, or with education above high school; while PS yields were similar across modes. These results provide evidence of the potential effectiveness of video-call interviews for specific populations.
Bubble Entropy: An Entropy Almost Free of Parameters.
Manis, George; Aktaruzzaman, Md; Sassi, Roberto
2017-11-01
Objective: A critical point in any definition of entropy is the selection of the parameters employed to obtain an estimate in practice. We propose a new definition of entropy aiming to reduce the significance of this selection. Methods: We call the new definition Bubble Entropy. Bubble Entropy is based on permutation entropy, where the vectors in the embedding space are ranked. We use the bubble sort algorithm for the ordering procedure and count instead the number of swaps performed for each vector. Doing so, we create a more coarse-grained distribution and then compute the entropy of this distribution. Results: Experimental results with both real and synthetic HRV signals showed that bubble entropy presents remarkable stability and exhibits increased descriptive and discriminating power compared to all other definitions, including the most popular ones. Conclusion: The definition proposed is almost free of parameters. The most common ones are the scale factor r and the embedding dimension m. In our definition, the scale factor is totally eliminated and the importance of m is significantly reduced. The proposed method presents increased stability and discriminating power. Significance: After the extensive use of some entropy measures in physiological signals, typical values for their parameters have been suggested, or at least, widely used. However, the parameters are still there, application and dataset dependent, influencing the computed value and affecting the descriptive power. Reducing their significance or eliminating them alleviates the problem, decoupling the method from the data and the application, and eliminating subjective factors.
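The swap-counting idea behind Bubble Entropy can be sketched as follows (a simplified illustration using Shannon entropy of the swap-count distribution; the published definition differs in its normalization and in how dimensions m and m+1 are combined, and the function names here are illustrative):

```python
import math
from collections import Counter

def bubble_swaps(vector):
    """Number of swaps bubble sort performs to order the vector
    (equivalently, its number of inversions)."""
    v = list(vector)
    swaps = 0
    for i in range(len(v)):
        for j in range(len(v) - 1 - i):
            if v[j] > v[j + 1]:
                v[j], v[j + 1] = v[j + 1], v[j]
                swaps += 1
    return swaps

def bubble_entropy_sketch(series, m):
    """Embed the series in dimension m, count swaps per embedded vector,
    and return the Shannon entropy of the swap-count distribution."""
    counts = Counter(
        bubble_swaps(series[i:i + m]) for i in range(len(series) - m + 1)
    )
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

signal = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
print(bubble_entropy_sketch(signal, m=4))
```

Because only the swap count matters, no scale factor r appears anywhere in the computation, which is the parameter-reduction point the abstract emphasizes.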
Measuring the Interestingness of Articles in a Limited User Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pon, Raymond K.
Search engines, such as Google, assign scores to news articles based on their relevancy to a query. However, not all relevant articles for the query may be interesting to a user. For example, if the article is old or yields little new information, the article would be uninteresting. Relevancy scores do not take into account what makes an article interesting, which varies from user to user. Although methods such as collaborative filtering have been shown to be effective in recommendation systems, in a limited user environment, there are not enough users that would make collaborative filtering effective. A general framework, called iScore, is presented for defining and measuring the 'interestingness' of articles, incorporating user-feedback. iScore addresses various aspects of what makes an article interesting, such as topic relevancy, uniqueness, freshness, source reputation, and writing style. It employs various methods to measure these features and uses a classifier operating on these features to recommend articles. The basic iScore configuration is shown to improve recommendation results by as much as 20%. In addition to the basic iScore features, additional features are presented to address the deficiencies of existing feature extractors, such as one that tracks multiple topics, called MTT, and a version of the Rocchio algorithm that learns its parameters online as it processes documents, called eRocchio. The inclusion of both MTT and eRocchio into iScore is shown to improve iScore recommendation results by as much as 3.1% and 5.6%, respectively. Additionally, in the TREC11 Adaptive Filter Task, eRocchio is shown to be 10% better than the best filter in the last run of the task. In addition to these two major topic relevancy measures, other features are also introduced that employ language models, phrases, clustering, and changes in topics to improve recommendation results.
These additional features are shown to improve iScore's recommendation results by up to 14%. Because users vary in why they find an article interesting, an online feature selection method in naive Bayes is also introduced. Online feature selection can improve recommendation results in iScore by up to 18.9%. In summary, iScore in its best configuration can outperform traditional IR techniques by as much as 50.7%. iScore and its components are evaluated in the news recommendation task using three datasets from Yahoo! News, actual users, and Digg. iScore and its components are also evaluated in the TREC Adaptive Filter task using the Reuters RCV1 corpus.
Azizi, Ali; Malekmohammadi, Bahram; Jafari, Hamid Reza; Nasiri, Hossein; Amini Parsa, Vahid
2014-10-01
Wind energy is a renewable energy resource that has increased in usage in most countries. Site selection for the establishment of large wind turbines, called wind farms, like any other engineering project, requires basic information and careful planning. This study assessed the possibility of establishing wind farms in Ardabil province in northwestern Iran by using a combination of analytic network process (ANP) and decision making trial and evaluation laboratory (DEMATEL) methods in a geographical information system (GIS) environment. DEMATEL was used to determine the criteria relationships. The weights of the criteria were determined using ANP and the overlaying process was done on GIS. Using 13 information layers in three main criteria including environmental, technical and economical, the land suitability map was produced and reclassified into 5 equally scored divisions from least suitable to most suitable areas. The results showed that about 6.68% of the area of Ardabil province is most suitable for establishment of wind turbines. Sensitivity analysis shows that significant portions of these most suitable zones coincide with suitable divisions of the input layers. The efficiency and accuracy of the hybrid model (ANP-DEMATEL) was evaluated and the results were compared to the ANP model. The sensitivity analysis, map classification, and factor weights for the two methods showed satisfactory results for the ANP-DEMATEL model in wind power plant site selection.
The 2016 interferometric imaging beauty contest
NASA Astrophysics Data System (ADS)
Sanchez-Bermudez, J.; Thiébaut, E.; Hofmann, K.-H.; Heininger, M.; Schertl, D.; Weigelt, G.; Millour, F.; Schutz, A.; Ferrari, A.; Vannier, M.; Mary, D.; Young, J.
2016-08-01
Image reconstruction in optical interferometry has gained considerable importance for astrophysical studies during the last decade. This has been mainly due to improvements in the imaging capabilities of existing interferometers and the expectation of new facilities in the coming years. However, despite the advances made so far, image synthesis in optical interferometry is still an open field of research. Since 2004, the community has organized a biennial contest to formally test the different methods and algorithms for image reconstruction. In 2016, we celebrated the 7th edition of the "Interferometric Imaging Beauty Contest". This initiative represented an open call to participate in the reconstruction of a selected set of simulated targets with a wavelength-dependent morphology as they could be observed by the 2nd generation of VLTI instruments. This contest represents a unique opportunity to benchmark, in a systematic way, the current advances and limitations in the field, as well as to discuss possible future approaches. In this contribution, we summarize: (a) the rules of the 2016 contest; (b) the different data sets used and the selection procedure; (c) the methods and results obtained by each one of the participants; and (d) the metric used to select the best reconstructed images. Finally, we named Karl-Heinz Hofmann and the group of the Max-Planck-Institut für Radioastronomie as winners of this edition of the contest.
Efficacy of a sperm-selection chamber in terms of morphology, aneuploidy and DNA packaging.
Seiringer, M; Maurer, M; Shebl, O; Dreier, K; Tews, G; Ziehr, S; Schappacher-Tilp, G; Petek, E; Ebner, T
2013-07-01
Since most current techniques analysing spermatozoa will inevitably exclude these gametes from further use, attempts have been made to enrich semen samples with physiological spermatozoa with good prognosis using special sperm-processing methods. A particular sperm-selection chamber, called the Zech-selector, was found to be effective in completely eliminating spermatozoa with DNA strand breaks. The aim of this study was to further analyse the subgroup of spermatozoa accumulated using the Zech-selector. In detail, the potential of the chamber to select for proper sperm morphology, DNA status and chromatin condensation was tested. Two samples, native and processed semen, of 53 patients were analysed for sperm morphology (×1000, ×6300), DNA packaging (fragmentation, chromatin condensation) and chromosomal status (X, Y, 18). Migration time (the time needed for proper sperm accumulation) was significantly correlated to fast progressive motility (P=0.002). The present sperm-processing method was highly successful with respect to all parameters analysed (P<0.001). In particular, spermatozoa showing numeric (17.4% of patients without aneuploidy) or structural chromosomal abnormalities (90% of patients without strand-breaks) were separated most effectively. To summarize, further evidence is provided that separating spermatozoa without exposure to centrifugation stress results in a population of highly physiological spermatozoa. Copyright © 2013 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
Recruitment and Selection: Meeting the Leadership Shortage in One Large Canadian School District
ERIC Educational Resources Information Center
Normore, Anthony H.
2004-01-01
This article investigates the recruitment and selection strategies of one large Canadian school district in Ontario, called here the "Northwestern School District." Data collection included interviews, document analyses and observations, and were gathered in 2001. Findings indicated that designated structured teams, financial and…
Differentiation and Social Selectivity in German Higher Education
ERIC Educational Resources Information Center
Schindler, Steffen; Reimer, David
2011-01-01
In this paper we investigate social selectivity in access to higher education in Germany and, unlike most previous studies, explicitly devote attention to semi-tertiary institutions such as the so-called universities of cooperative education. Drawing on rational choice models of educational decisions we seek to understand which factors influence…
Career Construction with a Gay Client: A Case Study
ERIC Educational Resources Information Center
Maree, Jacobus Gideon
2014-01-01
This article reports on the value of career construction counselling (CCC) with a gay person. The participant was selected purposively, with the selection criteria calling for a mid-career woman who had sought career counselling. The intervention involved administration of the "Career Construction Interview" (CCI) and the creation of a…
Conservative, special-relativistic smoothed particle hydrodynamics
NASA Astrophysics Data System (ADS)
Rosswog, Stephan
2010-11-01
We present and test a new, special-relativistic formulation of smoothed particle hydrodynamics (SPH). Our approach benefits from several improvements with respect to earlier relativistic SPH formulations. It is self-consistently derived from the Lagrangian of an ideal fluid and accounts for the terms that stem from non-constant smoothing lengths, usually called "grad-h terms". In our approach, we evolve the canonical momentum and the canonical energy per baryon and thus circumvent some of the problems that have plagued earlier formulations of relativistic SPH. We further use a much improved artificial viscosity prescription which uses the extreme local eigenvalues of the Euler equations and triggers selectively on (a) shocks and (b) velocity noise. The shock trigger accurately monitors the relative density slope and uses it to fine-tune the amount of artificial viscosity that is applied. This procedure substantially sharpens shock fronts while still avoiding post-shock noise. If not triggered, the viscosity parameter of each particle decays to zero. Neither of these viscosity triggers is specific to special relativity; both could also be applied in Newtonian SPH. The performance of the new scheme is explored in a large variety of benchmark tests where it delivers excellent results. Generally, the grad-h terms deliver minor, though worthwhile, improvements. As expected for a Lagrangian method, it performs close to perfect in supersonic advection tests, but also in strong relativistic shocks, usually considered a particular challenge for SPH, the method yields convincing results. For example, due to its perfect conservation properties, it is able to handle Lorentz factors as large as γ = 50,000 in the so-called wall shock test. Moreover, we find convincing results in a rarely shown, but challenging test that involves so-called relativistic simple waves and also in multi-dimensional shock tube tests.
Garland, Ellen C; Castellote, Manuel; Berchok, Catherine L
2015-06-01
Beluga whales, Delphinapterus leucas, have a graded call system; call types exist on a continuum, making classification challenging. A description of vocalizations from the eastern Beaufort Sea beluga population during its spring migration is presented here, using both a non-parametric classification tree analysis (CART) and a Random Forest analysis. Twelve frequency and duration measurements were made on 1019 calls recorded over 14 days off Icy Cape, Alaska, resulting in 34 identifiable call types with 83% agreement in classification for both CART and Random Forest analyses. This high level of agreement in classification, with an initial subjective classification of calls into 36 categories, demonstrates that the methods applied here provide a quantitative analysis of a graded call dataset. Further, as calls cannot be attributed to individuals using single sensor passive acoustic monitoring efforts, these methods provide a comprehensive analysis of data where the influence of pseudo-replication of calls from individuals is unknown. This study is the first to describe the vocal repertoire of a beluga population using a robust and repeatable methodology. A baseline eastern Beaufort Sea beluga population repertoire is presented here, against which the call repertoire of other seasonally sympatric Alaskan beluga populations can be compared.
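The two classifiers named above (CART and Random Forest) can be sketched with scikit-learn. The feature values and call-type labels below are synthetic stand-ins for the twelve acoustic measurements, not the published beluga data; the agreement computed is simply the fraction of test calls on which the two classifiers concur.

```python
# Sketch: CART vs Random Forest agreement on simulated call features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
# 12 acoustic measurements per call (e.g. start/end frequency, duration)
X = rng.normal(size=(n, 12))
# 3 hypothetical call types, separated by thresholds on the first feature
y = (X[:, 0] > 0.5).astype(int) + (X[:, 0] > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
cart = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
agreement = (cart.predict(X_te) == forest.predict(X_te)).mean()
```

On graded, continuum-like data such as these calls, reporting cross-method agreement (rather than a single classifier's accuracy) is what makes the categorization defensible.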
Selective growth of MoS2 for proton exchange membranes with extremely high selectivity.
Feng, Kai; Tang, Beibei; Wu, Peiyi
2013-12-26
Proton conductivity and methanol permeability are the most important transport properties of proton exchange membranes (PEMs). The ratio of proton conductivity to methanol permeability is usually called selectivity. Herein, a novel strategy of in situ growth of MoS2 is employed to prepare MoS2/Nafion composite membranes for highly selective PEM. The strong interactions between the Mo precursor ((NH4)2MoS4) and Nafion's sulfonic groups in a suitable solvent environment (DMF) probably lead to a selective growth of MoS2 flakes mainly around the ionic clusters of the resultant MoS2/Nafion composite membrane. Therefore, it would significantly promote the aggregation and hence lead to a better connectivity of these ionic clusters, which favors the increase in proton conductivity. Meanwhile, the existence of MoS2 in the ionic channels effectively prevents methanol transporting through the PEM, contributing to the dramatic decrease in the methanol permeability. Consequently, the MoS2/Nafion composite membranes exhibit greatly increased selectivity. Under some severe conditions, such as 50 °C with 80 v/v% of methanol concentration, an increase in the membrane selectivity by nearly 2 orders of magnitude compared with that of the recast Nafion membrane could be achieved here, proving our method as a very promising way to prepare high-performance PEMs. All these conclusions are confirmed by various characterizations, such as (FE-) SEM, TEM, AFM, IR, Raman, TGA, XRD, etc.
Rating experiments in forestry: How much agreement is there in tree marking?
Pallarés Ramos, Carlos; Kędziora, Wojciech; Haufe, Jens; Stoyan, Dietrich
2018-01-01
The process of selecting individual trees by humans for forest management purposes is the result of a plethora of factors and processes that are hard to disentangle. And yet in the past many textbooks and other publications have maintained that this selection leads to somewhat unanimous results. In this study, we analysed the data of 36 so-called marteloscope experiments from all over Britain, which are managed by the Ae Training Centre (Scotland, UK). Our objective was (1) to establish how much agreement there actually was when asking test persons (raters) to apply two different thinning methods, low and crown thinning. In addition, we (2) were interested in understanding some of the processes leading to certain levels of agreement and in relationships between the agreement measures and characteristics of forest structure. Our analysis was based on multivariate statistics, particularly using Fleiss' kappa. This was the first time that an analysis of rater behaviour was performed at such a large scale, and it revealed that the general agreement in tree selection in Britain was only slight to fair, i.e. much lower than in medical experiments. The variability of selecting individual trees was considerable. We also found that agreement in tree selection was much stronger in low-thinning as opposed to crown-thinning experiments. As the latter is an important method of Continuous Cover Forestry and British forestry is increasingly adopting this forest management type, our results suggested that there is a need to provide more training. Interestingly, the different levels of agreement as identified by Fleiss' kappa could not be explained by measures of forest structure; however, the mean conformity number, a surrogate of Fleiss' kappa, showed correlations and indicated that conformity increased with increasing complexity of tree stem diameter structure. PMID:29566076
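Fleiss' kappa, the agreement statistic used above, can be computed directly from a subjects-by-categories count matrix. The sketch below uses invented rating counts (five "trees", four raters, two marking decisions), not the marteloscope data:

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for a subjects x categories count matrix.

    ratings[i, j] = number of raters assigning subject i to category j;
    every subject must be rated by the same number of raters n.
    """
    ratings = np.asarray(ratings, dtype=float)
    n = ratings.sum(axis=1)[0]           # raters per subject
    N = ratings.shape[0]                 # number of subjects
    # per-subject observed agreement P_i
    P_i = (np.sum(ratings**2, axis=1) - n) / (n * (n - 1))
    P_bar = P_i.mean()                   # mean observed agreement
    p_j = ratings.sum(axis=0) / (N * n)  # overall category proportions
    P_e = np.sum(p_j**2)                 # expected chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Example: 5 trees, 4 raters each, categories = (remove, keep)
counts = np.array([[4, 0], [3, 1], [2, 2], [1, 3], [0, 4]])
kappa = fleiss_kappa(counts)  # ~0.33, i.e. "fair" agreement
```

Values around 0.2 to 0.4 correspond to the "slight to fair" agreement band the study reports.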
2010-01-01
Background Plasmodium vivax malaria is a major public health challenge in Latin America, Asia and Oceania, with 130-435 million clinical cases per year worldwide. Invasion of host blood cells by P. vivax mainly depends on a type I membrane protein called Duffy binding protein (PvDBP). The erythrocyte-binding motif of PvDBP is a 170 amino-acid stretch located in its cysteine-rich region II (PvDBPII), which is the most variable segment of the protein. Methods To test whether diversifying natural selection has shaped the nucleotide diversity of PvDBPII in Brazilian populations, this region was sequenced in 122 isolates from six different geographic areas. A Bayesian method was applied to test for the action of natural selection under a population genetic model that incorporates recombination. The analysis was integrated with a structural model of PvDBPII, and T- and B-cell epitopes were localized on the 3-D structure. Results The results suggest that: (i) recombination plays an important role in determining the haplotype structure of PvDBPII, and (ii) PvDBPII appears to contain neutrally evolving codons as well as codons evolving under natural selection. Diversifying selection preferentially acts on sites identified as epitopes, particularly on amino acid residues 417, 419, and 424, which show strong linkage disequilibrium. Conclusions This study shows that some polymorphisms of PvDBPII are present near the erythrocyte-binding domain and might serve to elude antibodies that inhibit cell invasion. Therefore, these polymorphisms should be taken into account when designing vaccines aimed at eliciting antibodies to inhibit erythrocyte invasion. PMID:21092207
McKenzie, Anne; Hancock, Kirsten; Haines, Hayley; Christensen, Daniel; Zubrick, Stephen R.
2015-01-01
Objective The aims of this study were to assess participatory methods for obtaining community views on child health research. Background Community participation in research is recognised as an important part of the research process; however, there has been inconsistency in its implementation and application in Australia. The Western Australian Telethon Kids Institute Participation Program employs a range of methods for fostering active involvement of community members in its research. These include public discussion forums, called Community Conversations. While participation levels are good, the attendees represent only a sub-section of the Western Australian population. Therefore, we conducted a telephone survey of randomly selected households to evaluate its effectiveness in eliciting views from a broader cross-section of the community about our research agenda and community participation in research, and whether the participants would be representative of the general population. We also conducted two Conversations, comparing the survey as a recruitment tool and normal methods using the Participation Program. Results While the telephone survey was a good method for eliciting community views about research, there were marked differences in the profile of study participants compared to the general population (e.g. 78% vs 50% females). With a 26% response rate, the telephone survey was also more expensive than a Community Conversation. The cold calling approach proved an unsuccessful recruitment method, with only two out of a possible 816 telephone respondents attending a Conversation. Conclusion While the results showed that both of the methods produced useful input for our research program, we could not conclude that either method gained input that was representative of the entire community. 
The Conversations were relatively low-cost and provided more in-depth information about one subject, whereas the telephone survey provided information across a greater range of subjects, and allowed more quantitative analysis. PMID:25938240
Sorbolini, Silvia; Marras, Gabriele; Gaspa, Giustino; Dimauro, Corrado; Cellesi, Massimo; Valentini, Alessio; Macciotta, Nicolò Pp
2015-06-23
Domestication and selection are processes that alter the pattern of within- and between-population genetic variability. They can be investigated at the genomic level by tracing the so-called selection signatures. Recently, sequence polymorphisms at the genome-wide level have been investigated in a wide range of animals. A common approach to detect selection signatures is to compare breeds that have been selected for different breeding goals (i.e. dairy and beef cattle). However, genetic variations in different breeds with similar production aptitudes and similar phenotypes can be related to differences in their selection history. In this study, we investigated selection signatures between two Italian beef cattle breeds, Piemontese and Marchigiana, using genotyping data that was obtained with the Illumina BovineSNP50 BeadChip. The comparison was based on the fixation index (Fst), combined with a locally weighted scatterplot smoothing (LOWESS) regression and a control chart approach. In addition, analyses of Fst were carried out to confirm candidate genes. In particular, data were processed using the varLD method, which compares the regional variation of linkage disequilibrium between populations. Genome scans confirmed the presence of selective sweeps in the genomic regions that harbour candidate genes that are known to affect productive traits in cattle such as DGAT1, ABCG2, CAPN3, MSTN and FTO. In addition, several new putative candidate genes (for example ALAS1, ABCB8, ACADS and SOD1) were detected. This study provided evidence on the different selection histories of two cattle breeds and the usefulness of genomic scans to detect selective sweeps even in cattle breeds that are bred for similar production aptitudes.
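A per-SNP Fst scan of the kind described above can be sketched as follows. The allele frequencies are simulated, not Piemontese/Marchigiana genotypes; Hudson's estimator is used here as one common choice, and a simple moving average stands in for the LOWESS smoothing step:

```python
import numpy as np

def hudson_fst(p1, p2, n1, n2):
    """Hudson's Fst per SNP from allele frequencies and sample sizes."""
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num / den

rng = np.random.default_rng(1)
n_snps = 5000
p1 = rng.uniform(0.05, 0.95, n_snps)                        # breed A
p2 = np.clip(p1 + rng.normal(0, 0.05, n_snps), 0.01, 0.99)  # breed B
p1[2500], p2[2500] = 0.1, 0.8      # one planted "selective sweep"

fst = hudson_fst(p1, p2, 100, 100)

# crude smoothing along the chromosome (stand-in for LOWESS)
window = 51
smooth = np.convolve(fst, np.ones(window) / window, mode="same")
# flag SNPs far above the genome-wide background
outliers = np.where(fst > fst.mean() + 4 * fst.std())[0]
```

The planted SNP stands out sharply against the background differentiation, which is the signature the genome scans above look for near genes such as MSTN and DGAT1.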
Bal, Mert; Amasyali, M Fatih; Sever, Hayri; Kose, Guven; Demirhan, Ayse
2014-01-01
Decision support systems increasingly support the decision-making process in cases of uncertainty and lack of information, and they are widely used in fields such as engineering, finance and medicine. Medical decision support systems help healthcare personnel to select the optimal method during the treatment of patients. Decision support systems are intelligent software systems that support decision makers in their decisions. The design of decision support systems consists of four main components, called the inference mechanism, knowledge base, explanation module, and active memory. The inference mechanism constitutes the basis of decision support systems. Various methods can be used in these mechanisms, such as decision trees, artificial neural networks, statistical methods, and rule-based methods. In decision support systems, these methods can be used separately or combined into a hybrid system. In this study, synthetic data with 10, 100, 1000, and 2000 records have been produced to reflect the probabilities on the ALARM network. The accuracy of 11 machine learning methods for the inference mechanism of a medical decision support system is compared on these data sets.
A Novel Hybrid Intelligent Indoor Location Method for Mobile Devices by Zones Using Wi-Fi Signals
Castañón–Puga, Manuel; Salazar, Abby Stephanie; Aguilar, Leocundo; Gaxiola-Pacheco, Carelia; Licea, Guillermo
2015-01-01
The increasing use of mobile devices in indoor spaces brings challenges to location methods. This work presents a hybrid intelligent method based on data mining and Type-2 fuzzy logic to locate mobile devices in an indoor space by zones using Wi-Fi signals from selected access points (APs). This approach takes advantage of wireless local area networks (WLANs) over other types of architectures and implements the complete method in a mobile application using the developed tools. Besides, the proposed approach is validated by experimental data obtained from case studies and the cross-validation technique. For the purpose of generating the fuzzy rules that conform to the Takagi–Sugeno fuzzy system structure, a semi-supervised data mining technique called subtractive clustering is used. This algorithm finds centers of clusters from the radius map given by the collected signals from APs. Measurements of Wi-Fi signals can be noisy due to several factors mentioned in this work, so this method proposed the use of Type-2 fuzzy logic for modeling and dealing with such uncertain information. PMID:26633417
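The subtractive clustering step named above (Chiu's method) finds cluster centers by assigning each point a "potential" and iteratively suppressing potential around accepted centers. The sketch below uses synthetic two-zone Wi-Fi fingerprints and the commonly cited default radii, not necessarily the paper's settings:

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, rb=None, eps=0.15):
    """Chiu-style subtractive clustering: return cluster centers of X."""
    rb = rb if rb is not None else 1.5 * ra
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # potential of each point: density of neighbours within radius ra
    potential = np.exp(-4.0 * d2 / ra**2).sum(axis=1)
    centers = []
    first = potential.max()
    while True:
        c = int(np.argmax(potential))
        if potential[c] < eps * first:   # reject weak candidates, stop
            break
        centers.append(X[c])
        # suppress potential near the accepted center (radius rb > ra)
        potential = potential - potential[c] * np.exp(-4.0 * d2[c] / rb**2)
    return np.array(centers)

rng = np.random.default_rng(2)
# two tight groups of normalized RSSI fingerprints, one per indoor zone
X = np.vstack([rng.normal([0.2, 0.2], 0.03, (40, 2)),
               rng.normal([0.8, 0.8], 0.03, (40, 2))])
centers = subtractive_clustering(X, ra=0.5)   # one center per zone
```

Each recovered center then seeds one Takagi-Sugeno rule, with Type-2 membership functions layered on top to absorb the RSSI noise the abstract mentions.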
Chavez, P.S.
1988-01-01
Digital analysis of remotely sensed data has become an important component of many earth-science studies. These data are often processed through a set of preprocessing or "clean-up" routines that includes a correction for atmospheric scattering, often called haze. Various methods to correct or remove the additive haze component have been developed, including the widely used dark-object subtraction technique. A problem with most of these methods is that the haze values for each spectral band are selected independently. This can create problems because atmospheric scattering is highly wavelength-dependent in the visible part of the electromagnetic spectrum and the scattering values are correlated with each other. Therefore, multispectral data such as from the Landsat Thematic Mapper and Multispectral Scanner must be corrected with haze values that are spectral band dependent. An improved dark-object subtraction technique is demonstrated that allows the user to select a relative atmospheric scattering model to predict the haze values for all the spectral bands from a selected starting band haze value. The improved method normalizes the predicted haze values for the different gain and offset parameters used by the imaging system. Examples of haze value differences between the old and improved methods for Thematic Mapper Bands 1, 2, 3, 4, 5, and 7 are 40.0, 13.0, 12.0, 8.0, 5.0, and 2.0 vs. 40.0, 13.2, 8.9, 4.9, 16.7, and 3.3, respectively, using a relative scattering model of a clear atmosphere. In one Landsat multispectral scanner image the haze value differences for Bands 4, 5, 6, and 7 were 30.0, 50.0, 50.0, and 40.0 for the old method vs. 30.0, 34.4, 43.6, and 6.4 for the new method using a relative scattering model of a hazy atmosphere. © 1988.
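The band-dependent haze prediction above can be sketched as follows. The wavelengths, gains and offsets are illustrative placeholders, not calibrated TM values, and a simple lambda**(-n) power law stands in for the relative scattering models ("clear" atmospheres are conventionally modeled with larger n than "hazy" ones):

```python
import numpy as np

def predict_haze(start_band, start_haze_dn, wavelengths, gains, offsets, n=4.0):
    """Predict per-band haze DN from one selected starting-band haze value.

    A lambda**(-n) relative scattering model gives each band's haze radiance
    relative to the starting band; gains/offsets convert radiance back to
    the DN recorded by the sensor in each band.
    """
    # starting-band haze expressed as at-sensor radiance
    start_rad = (start_haze_dn - offsets[start_band]) / gains[start_band]
    rel = (wavelengths / wavelengths[start_band]) ** (-n)
    haze_rad = start_rad * rel
    return gains * haze_rad + offsets   # back to DN, band by band

wavelengths = np.array([0.485, 0.56, 0.66, 0.83])  # hypothetical band centers, um
gains = np.array([1.0, 0.9, 0.8, 0.9])             # hypothetical sensor gains
offsets = np.array([2.0, 2.0, 2.0, 1.0])           # hypothetical offsets
haze = predict_haze(0, 40.0, wavelengths, gains, offsets, n=4.0)
```

The point of the improvement is visible in the output: haze values decay with wavelength in a physically consistent way instead of being picked independently per band.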
Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K
2001-01-01
When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and often hampered by local minima problems. In this paper a new straightforward and automatic procedure, which is based on the response surface method (RSM) for selecting the best identifiable parameters, is proposed. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. However, in this paper RSM is used for selecting the dominant parameters, by evaluating parameters sensitivity in a predefined region. Good results obtained in calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch proved that the proposed procedure is successful and reliable.
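The RSM-based parameter ranking described above amounts to fitting a regression of model output against parameter levels over a designed region and ranking parameters by their effect sizes. The sketch below uses a toy function in place of the activated-sludge simulator; the parameter names are hypothetical stand-ins for ASM No. 1 parameters:

```python
import numpy as np
from itertools import product

def plant_model(theta):
    # toy stand-in for a simulator output (not ASM No. 1 itself)
    mu_max, K_s, b = theta
    return 3.0 * mu_max - 0.2 * K_s + 0.01 * b + 0.5 * mu_max * K_s

# two-level factorial design over the predefined parameter region
design = np.array(list(product(*[(-1, 1)] * 3)), dtype=float)
y = np.array([plant_model(row) for row in design])

# first-order response surface: y ~ b0 + sum(b_i * x_i)
A = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
sensitivity = np.abs(coef[1:])           # |b_i| as a sensitivity measure
ranking = np.argsort(sensitivity)[::-1]  # most identifiable parameters first
```

Only the top-ranked parameters would then be carried into the automatic calibration, which is what keeps the procedure away from the local-minima problems of calibrating everything at once.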
Veeranagouda, Yaligara; Debono-Lagneaux, Delphine; Fournet, Hamida; Thill, Gilbert; Didier, Michel
2018-01-16
The emergence of clustered regularly interspaced short palindromic repeats-Cas9 (CRISPR-Cas9) gene editing systems has enabled the creation of specific mutants at low cost, in a short time and with high efficiency, in eukaryotic cells. Since a CRISPR-Cas9 system typically creates an array of mutations in targeted sites, a successful gene editing project requires careful selection of edited clones. This process can be very challenging, especially when working with multiallelic genes and/or polyploid cells (such as cancer and plant cells). Here we describe a next-generation sequencing method called CRISPR-Cas9 Edited Site Sequencing (CRES-Seq) for the efficient and high-throughput screening of CRISPR-Cas9-edited clones. CRES-Seq facilitates the precise genotyping of up to 96 CRISPR-Cas9-edited sites (CRES) in a single MiniSeq (Illumina) run with an approximate sequencing cost of $6/clone. CRES-Seq is particularly useful when multiple genes are simultaneously targeted by CRISPR-Cas9, and also for screening of clones generated from multiallelic genes/polyploid cells. © 2018 by John Wiley & Sons, Inc.
Quantitative trait nucleotide analysis using Bayesian model selection.
Blangero, John; Goring, Harald H H; Kent, Jack W; Williams, Jeff T; Peterson, Charles P; Almasy, Laura; Dyer, Thomas D
2005-10-01
Although much attention has been given to statistical genetic methods for the initial localization and fine mapping of quantitative trait loci (QTLs), little methodological work has been done to date on the problem of statistically identifying the most likely functional polymorphisms using sequence data. In this paper we provide a general statistical genetic framework, called Bayesian quantitative trait nucleotide (BQTN) analysis, for assessing the likely functional status of genetic variants. The approach requires the initial enumeration of all genetic variants in a set of resequenced individuals. These polymorphisms are then typed in a large number of individuals (potentially in families), and marker variation is related to quantitative phenotypic variation using Bayesian model selection and averaging. For each sequence variant a posterior probability of effect is obtained and can be used to prioritize additional molecular functional experiments. An example of this quantitative nucleotide analysis is provided using the GAW12 simulated data. The results show that the BQTN method may be useful for choosing the most likely functional variants within a gene (or set of genes). We also include instructions on how to use our computer program, SOLAR, for association analysis and BQTN analysis.
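The core of the BQTN idea, relating marker variation to a quantitative phenotype by Bayesian model selection and averaging, can be sketched with a generic BIC approximation to posterior model probabilities. This is a simplified stand-in, not the SOLAR implementation, and the genotypes and phenotype below are simulated:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, n_var = 200, 3
G = rng.integers(0, 3, size=(n, n_var)).astype(float)  # genotypes 0/1/2
y = 0.8 * G[:, 1] + rng.normal(0, 1.0, n)              # variant 1 is causal

def bic(X, y):
    """BIC of an ordinary-least-squares model y ~ 1 + X."""
    X = np.column_stack([np.ones(len(y)), X]) if X.size else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + X.shape[1] * np.log(len(y))

# enumerate all subsets of variants as candidate models
models = [s for r in range(n_var + 1) for s in combinations(range(n_var), r)]
bics = np.array([bic(G[:, list(m)], y) for m in models])
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()                 # approximate posterior model probs

# posterior probability of effect per variant (model averaging)
post = np.array([sum(w for m, w in zip(models, weights) if j in m)
                 for j in range(n_var)])
```

As in the abstract, each variant ends up with a posterior probability of effect that can be used to prioritize functional follow-up experiments.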
Neuroplus biofeedback improves attention, resilience, and injury prevention in elite soccer players.
Rusciano, Aiace; Corradini, Giuliano; Stoianov, Ivilin
2017-06-01
Performance and injury prevention in elite soccer players are typically investigated from physical-tactical, biomechanical, and metabolic perspectives. However, executive functions, visuospatial abilities, and psychophysiological adaptability or resilience are also fundamental for efficiency and well-being in sports. Based on previous research associating autonomic flexibility with prefrontal cortical control, we designed a novel integrated autonomic biofeedback training method called Neuroplus to improve resilience, visual attention, and injury prevention. Herein, we introduce the method and provide an evaluation of 20 elite soccer players from the Italian Soccer High Division (Serie-A): 10 players trained with Neuroplus and 10 trained with a control treatment. The assessments included psychophysiological stress profiles, a visual search task, and indexes of injury prevention, which were measured pre- and posttreatment. The analysis showed a significant enhancement of physiological adaptability, recovery following stress, visual selective attention, and injury prevention that were specific to the Neuroplus group. Enhancing the interplay between autonomic and cognitive functions through biofeedback may become a key principle for obtaining excellence and well-being in sports. To our knowledge, this is the first evidence that shows improvement in visual selective attention following intense autonomic biofeedback. © 2017 Society for Psychophysiological Research.
A measurement of the mass of the top quark using the ideogram technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houben, Pieter Willem Huib
2009-06-03
This thesis describes a measurement of the mass of the top quark on data collected with the D0 detector at the Tevatron collider in the period from 2002 until 2006. The first chapter describes the Standard Model and the prominent role of the top quark mass. The second chapter gives a description of the D0 detector which is used for this measurement. After the p$\bar{p}$ collisions have been recorded, reconstruction of physics objects is required, which is described in Chapter 3. Chapter 4 describes how the interesting collisions in which top quarks are produced are separated from the 'uninteresting' ones with a set of selection criteria. The method to extract the top quark mass from the sample of selected collisions (also called events), which is based on the ideogram technique, is explained in Chapter 5, followed in Chapter 6 by the description of the calibration of the method using simulation of our most precise knowledge of nature. Chapter 7 shows the result of the measurement together with some cross checks and an estimation of the uncertainty on this measurement. This thesis concludes with a constraint on the Higgs boson mass.
Protein Loop Structure Prediction Using Conformational Space Annealing.
Heo, Seungryong; Lee, Juyong; Joo, Keehyoung; Shin, Hang-Cheol; Lee, Jooyoung
2017-05-22
We have developed a protein loop structure prediction method by combining a new energy function, which we call E_PLM (energy for protein loop modeling), with the conformational space annealing (CSA) global optimization algorithm. The energy function includes stereochemistry, dynamic fragment assembly, distance-scaled finite ideal gas reference (DFIRE), and generalized orientation- and distance-dependent terms. For the conformational search of loop structures, we used the CSA algorithm, which has been quite successful in dealing with various hard global optimization problems. We assessed the performance of E_PLM with two widely used loop-decoy sets, Jacobson and RAPPER, and compared the results against the DFIRE potential. The accuracy of model selection from a pool of loop decoys as well as de novo loop modeling starting from randomly generated structures was examined separately. For the selection of a nativelike structure from a decoy set, E_PLM was more accurate than DFIRE in the case of the Jacobson set and had similar accuracy in the case of the RAPPER set. In terms of sampling more nativelike loop structures, E_PLM outperformed E_DFIRE for both decoy sets. This new approach equipped with E_PLM and CSA can serve as the state-of-the-art de novo loop modeling method.
Adaptive dynamic programming approach to experience-based systems identification and control.
Lendaris, George G
2009-01-01
Humans have the ability to make use of experience while selecting their control actions for distinct and changing situations, and this process speeds up and becomes more effective as more experience is gained. In contrast, current technological implementations slow down as more knowledge is stored. A novel way of employing Approximate (or Adaptive) Dynamic Programming (ADP) is described that shifts the underlying Adaptive Critic type of Reinforcement Learning method "up a level", away from designing individual (optimal) controllers to that of developing on-line algorithms that efficiently and effectively select designs from a repository of existing controller solutions (perhaps previously developed via application of ADP methods). The resulting approach is called the Higher-Level Learning Algorithm. The approach and its rationale are described, and some examples of its application are given. The notions of context and context discernment are important to understanding the human abilities noted above. These are first defined, in a manner appropriate to controls and system identification, and, as a foundation relating to the application arena, a historical view of the various phases in the development of the controls field is given, organized by how the notion of 'context' was, or was not, involved in each phase.
Conduits to care: call lights and patients’ perceptions of communication
Montie, Mary; Shuman, Clayton; Galinato, Jose; Patak, Lance; Anderson, Christine A; Titler, Marita G
2017-01-01
Background Call light systems remain the primary means for hospitalized patients to initiate communication with their health care providers. Although there is a vast amount of literature discussing patients' communication with their health care providers, few studies have explored patients' perceptions concerning call light use and communication. The specific aim of this study was to solicit patients' perceptions regarding their call light use and communication with nursing staff. Methods Patients invited to this study met the following inclusion criteria: proficient in English, been hospitalized for at least 24 hours, aged ≥21 years, and able to communicate verbally (eg, not intubated). Thirty participants provided written informed consent, were enrolled in the study, and completed interviews. Results Using qualitative descriptive methods, five major themes emerged from patients' perceptions (namely, establishing connectivity, participant safety concerns, no separation: health care and the call light device, issues with the current call light, and participants' perceptions of "nurse work"). Multiple minor themes supported these major themes. Data analysis utilized the constant comparative methods of Glaser and Strauss. Discussion Findings from this study extend the knowledge of patients' understanding of not only why inconsistencies occur between the call light and their nurses, but also why the call light is more than merely a device to initiate communication; rather, it is a direct conduit to their health care and its delivery. PMID:29075125
An analysis of hypercritical states in elastic and inelastic systems
NASA Astrophysics Data System (ADS)
Kowalczk, Maciej
The author raises a wide range of problems whose common characteristic is an analysis of hypercritical states in elastic and inelastic systems. The article consists of two basic parts. The first part primarily discusses problems of modelling hypercritical states, while the second analyzes numerical methods (so-called continuation methods) used to solve non-linear problems. The original approaches for modelling hypercritical states found in this article include the combination of plasticity theory and an energy condition for cracking, accounting for the variability and cyclical nature of the forms of fracture of a brittle material under a die, and the combination of plasticity theory and a simplified description of the phenomenon of localization along a discontinuity line. The author presents analytical solutions of three non-linear problems for systems made of elastic/brittle/plastic and elastic/ideally plastic materials. The author proceeds to discuss the analytical basics of continuation methods and analyzes the significance of the parameterization of non-linear problems, provides a method for selecting control parameters based on an analysis of the rank of a rectangular matrix of a uniform system of increment equations, and also provides a new method for selecting an equilibrium path originating from a bifurcation point. The author provides a general outline of continuation methods based on an analysis of the rank of a matrix of a corrective system of equations. The author supplements his theoretical solutions with numerical solutions of non-linear problems for rod systems and problems of the plastic disintegration of a notched rectangular plastic plate.
Some statistical properties of an index of multiple traits.
Nordskog, A W
1978-03-01
Hazel (1943) defined a selection index that maximizes the correlation, R_IH, between the index, I = Σ b_i X_i, and its aggregate genetic value, H = Σ a_i g_i, where the b_i are the derived coefficients of the observed traits X_i, the a_i are their relative economic values and the g_i are their respective breeding values. This is called an optimum index. The expected value of the index is not H but rather K = Σ b_i g_i. The ratio R_HI/R_KI is always less than or equal to 1.0. With selection on I, the ratio of the change in K to the change in H is the true heritability of the index, that is, h²_I = ΔK/ΔH. This is not the same as R²_IH, which serves only as the predictor of the change in H with selection on I. If the index, I, itself is considered as a unit trait, studies can then be made of correlated response in I when selection is based only on a single trait in the index. I is then called a performance index. This approach provides additional insight into the question of the failure of selection to make gains in total performance over many generations.
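To make the index concrete: the weights of Hazel's optimum index are obtained as b = P⁻¹Ga, where P is the phenotypic (co)variance matrix of the observed traits and G the genetic (co)variance matrix. A minimal sketch with invented covariance values (the matrices and economic weights below are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical two-trait example (values invented for illustration).
# P: phenotypic (co)variance matrix of the observed traits X_i
# G: genetic (co)variance matrix of the traits
# a: relative economic values a_i
P = np.array([[1.0, 0.3],
              [0.3, 1.0]])
G = np.array([[0.4, 0.1],
              [0.1, 0.3]])
a = np.array([1.0, 2.0])

# Hazel's optimum index weights: solve P b = G a
b = np.linalg.solve(P, G @ a)

# Accuracy R_IH = cov(I, H) / (sd(I) * sd(H)); here G is used for the
# genetic (co)variance of H, an assumption of this sketch.
cov_IH = b @ G @ a
var_I = b @ P @ b
var_H = a @ G @ a
R_IH = cov_IH / np.sqrt(var_I * var_H)
print(b, round(R_IH, 3))
```

A convenient property of the optimum: since Pb = Ga, the index variance b'Pb equals cov(I, H), so the accuracy simplifies to sqrt(b'Ga / a'Ga).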
2017-01-01
ABSTRACT Male-male vocal competition in anuran species is critical for mating success; however, it is also energetically demanding and highly time-consuming. Thus, we hypothesized that males may change signal elaboration in response to competition in real time. Male serrate-legged small treefrogs (Kurixalus odontotarsus) produce compound calls that contain two kinds of notes, harmonic sounds called 'A notes' and short broadband sounds called 'B notes'. Using male evoked vocal response experiments, we found that competition influences the temporal structure and complexity of vocal signals produced by males. As contests escalate, males produce calls with more notes per call, and more compound calls that include more A notes but fewer B notes. In doing so, males minimize the energy costs and maximize the benefits of competition when the level of competition is high. This means that the evolution of sexual signal complexity in frogs may be susceptible to selection for plasticity related to adjusting performance to the pressures of competition, and supports the idea that more complex social contexts can lead to greater vocal complexity. PMID:29175862
Biological relevance of CNV calling methods using familial relatedness including monozygotic twins.
Castellani, Christina A; Melka, Melkaye G; Wishart, Andrea E; Locke, M Elizabeth O; Awamleh, Zain; O'Reilly, Richard L; Singh, Shiva M
2014-04-21
Studies involving the analysis of structural variation including Copy Number Variation (CNV) have recently exploded in the literature. Furthermore, CNVs have been associated with a number of complex diseases and neurodevelopmental disorders. Common methods for CNV detection use SNP, CNV, or CGH arrays, where the signal intensities of consecutive probes are used to define the number of copies associated with a given genomic region. These practices pose a number of challenges that interfere with the ability of available methods to accurately call CNVs. It has, therefore, become necessary to develop experimental protocols to test the reliability of CNV calling methods from microarray data so that researchers can properly discriminate biologically relevant data from noise. We have developed a workflow for the integration of data from multiple CNV calling algorithms using the same array results. It uses four CNV calling programs: PennCNV (PC), Affymetrix® Genotyping Console™ (AGC), Partek® Genomics Suite™ (PGS) and Golden Helix SVS™ (GH) to analyze CEL files from the Affymetrix® Human SNP 6.0 Array™. To assess the relative suitability of each program, we used individuals of known genetic relationships. We found significant differences in CNV calls obtained by different CNV calling programs. Although the programs showed variable patterns of CNVs in the same individuals, their distribution in individuals of different degrees of genetic relatedness has allowed us to offer two suggestions. The first involves the use of multiple algorithms for the detection of the largest possible number of CNVs, and the second suggests the use of PennCNV over all other methods when the use of only one software program is desirable.
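The multi-algorithm integration step can be illustrated with a toy consensus computation over CNV intervals: keep only regions supported by at least a chosen number of callers. The `consensus_regions` helper, the interval data, and the `min_support` threshold below are our own invention for illustration, not the authors' actual workflow:

```python
# Toy sweep-line consensus over CNV calls from several programs:
# a region is kept if at least `min_support` callers report it.

def consensus_regions(callsets, min_support=2):
    """callsets: list of lists of (start, end) intervals, one list per program."""
    events = []
    for calls in callsets:
        for start, end in calls:
            events.append((start, 1))   # interval opens
            events.append((end, -1))    # interval closes
    events.sort()
    regions, depth, region_start = [], 0, None
    for pos, delta in events:
        prev = depth
        depth += delta
        if prev < min_support <= depth:      # support rises past threshold
            region_start = pos
        elif prev >= min_support > depth:    # support falls below threshold
            regions.append((region_start, pos))
    return regions

# Hypothetical calls from three of the four programs named above
penncnv = [(100, 500), (900, 1200)]
agc     = [(150, 450), (2000, 2100)]
pgs     = [(120, 480)]
print(consensus_regions([penncnv, agc, pgs], min_support=2))
```

Only the region where at least two callers overlap (120-480) survives; singleton calls such as (900, 1200) are dropped, mirroring the paper's point that single-program calls should be treated with caution.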
Evidence of auditory insensitivity to vocalization frequencies in two frogs.
Goutte, Sandra; Mason, Matthew J; Christensen-Dalsgaard, Jakob; Montealegre-Z, Fernando; Chivers, Benedict D; Sarria-S, Fabio A; Antoniazzi, Marta M; Jared, Carlos; Almeida Sato, Luciana; Felipe Toledo, Luís
2017-09-21
The emergence and maintenance of animal communication systems requires the co-evolution of signal and receiver. Frogs and toads rely heavily on acoustic communication for coordinating reproduction and typically have ears tuned to the dominant frequency of their vocalizations, allowing discrimination from background noise and heterospecific calls. However, we present here evidence that two anurans, Brachycephalus ephippium and B. pitanga, are insensitive to the sound of their own calls. Both species produce advertisement calls outside their hearing sensitivity range and their inner ears are partly undeveloped, which accounts for their lack of high-frequency sensitivity. If unheard by the intended receivers, calls are not beneficial to the emitter and should be selected against because of the costs associated with signal production. We suggest that protection against predators conferred by their high toxicity might help to explain why calling has not yet disappeared, and that visual communication may have replaced auditory in these colourful, diurnal frogs.
Efficient selection of tagging single-nucleotide polymorphisms in multiple populations.
Howie, Bryan N; Carlson, Christopher S; Rieder, Mark J; Nickerson, Deborah A
2006-08-01
Common genetic polymorphism may explain a portion of the heritable risk for common diseases, so considerable effort has been devoted to finding and typing common single-nucleotide polymorphisms (SNPs) in the human genome. Many SNPs show correlated genotypes, or linkage disequilibrium (LD), suggesting that only a subset of all SNPs (known as tagging SNPs, or tagSNPs) need to be genotyped for disease association studies. Based on the genetic differences that exist among human populations, most tagSNP sets are defined in a single population and applied only in populations that are closely related. To improve the efficiency of multi-population analyses, we have developed an algorithm called MultiPop-TagSelect that finds a near-minimal union of population-specific tagSNP sets across an arbitrary number of populations. We present this approach as an extension of LD-select, a tagSNP selection method that uses a greedy algorithm to group SNPs into bins based on their pairwise association patterns, although the MultiPop-TagSelect algorithm could be used with any SNP tagging approach that allows choices between nearly equivalent SNPs. We evaluate the algorithm by considering tagSNP selection in candidate-gene resequencing data and lower density whole-chromosome data. Our analysis reveals that an exhaustive search is often intractable, while the developed algorithm can quickly and reliably find near-optimal solutions even for difficult tagSNP selection problems. Using populations of African, Asian, and European ancestry, we also show that an optimal multi-population set of tagSNPs can be substantially smaller (up to 44%) than a typical set obtained through independent or sequential selection.
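The union-minimization idea can be sketched as a greedy set cover over SNP "bins": each population contributes bins of interchangeable SNPs, and we repeatedly pick the SNP that covers the most still-uncovered bins. This is only a conceptual illustration (MultiPop-TagSelect itself uses a more sophisticated search), and the bin contents below are invented:

```python
# Greedy sketch of choosing a small union of tagSNPs across populations.
# Each bin is a set of interchangeable SNPs; every bin in every
# population must contain at least one chosen tag.

def greedy_multipop_tags(bins_by_pop):
    uncovered = [set(b) for pop in bins_by_pop for b in pop]
    chosen = set()
    while uncovered:
        # count how many uncovered bins each SNP would satisfy
        counts = {}
        for b in uncovered:
            for snp in b:
                counts[snp] = counts.get(snp, 0) + 1
        best = max(counts, key=counts.get)
        chosen.add(best)
        uncovered = [b for b in uncovered if best not in b]
    return chosen

# Hypothetical bins for two populations
pop_A = [{"rs1", "rs2"}, {"rs3"}]
pop_B = [{"rs2", "rs4"}, {"rs3", "rs5"}]
tags = greedy_multipop_tags([pop_A, pop_B])
print(sorted(tags))
```

Here rs2 and rs3 each cover bins in both populations, so two tags suffice where independent per-population selection could pick up to four, which is the kind of saving the paper quantifies.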
Reynolds, Conner D; Nolan, Suzanne O; Huebschman, Jessica L; Hodges, Samantha L; Lugo, Joaquin N
2017-07-01
Early-life seizures are known to cause long-term deficits in social behavior, learning, and memory; however, little is known regarding their acute impact. Ultrasonic vocalization (USV) recordings have been developed as a tool for investigating early communicative deficits in mice. A previous investigation from our lab found that postnatal day (PD) 10 seizures cause male-specific suppression of 50-kHz USVs on PD12 in 129 SvEvTac mouse pups. The present study extends these findings by spectrographic characterization of USVs following neonatal seizures. On PD10, male C57BL/6 pups were administered intraperitoneal injections of kainic acid or physiological saline. On PD12, isolation-induced recordings were captured using a broad-spectrum ultrasonic microphone. Status epilepticus significantly suppressed USV quantity (p=0.001) and total duration (p<0.05). Seizure pups also utilized fewer complex calls than controls (p<0.05). There were no changes in call latency or inter-call intervals. Spectrographic analysis revealed increased peak amplitude for complex, downward, short, two-syllable, and upward calls, as well as reduced mean duration for short and two-syllable calls in seizure mice. This investigation provides the first known spectrographic characterization of USVs following early-life seizures. These findings also strengthen the evidence for USVs as an indicator of select communicative impairment. Copyright © 2017 Elsevier Inc. All rights reserved.
Velásquez, Nelson A; Moreno-Gómez, Felipe N; Brunetti, Enzo; Penna, Mario
2018-05-03
Animal communication occurs in environments that affect the properties of signals as they propagate from senders to receivers. We studied the geographic variation of the advertisement calls of male Pleurodema thaul individuals from eight localities in Chile. Furthermore, by means of signal propagation experiments, we tested the hypothesis that local calls are better transmitted and less degraded than foreign calls (i.e. acoustic adaptation hypothesis). Overall, the advertisement calls varied greatly along the distribution of P. thaul in Chile, and it was possible to discriminate localities grouped into northern, central and southern stocks. Propagation distance affected signal amplitude and spectral degradation in all localities, but temporal degradation was only affected by propagation distance in one out of seven localities. Call origin affected signal amplitude in five out of seven localities and affected spectral and temporal degradation in six out of seven localities. In addition, in northern localities, local calls degraded more than foreign calls, and in southern localities the opposite was observed. The lack of a strict optimal relationship between signal characteristics and environment indicates partial concordance with the acoustic adaptation hypothesis. Inter-population differences in selectivity for call patterns may compensate for such environmental constraints on acoustic communication.
Effect of introduction of electronic patient reporting on the duration of ambulance calls.
Kuisma, Markku; Väyrynen, Taneli; Hiltunen, Tuomas; Porthan, Kari; Aaltonen, Janne
2009-10-01
We examined the effect of the change from paper records to electronic patient records (EPRs) on ambulance call duration. We retrieved call duration times 6 months before (group 1) and 6 months after (group 2) the introduction of EPR. Subgroup analysis of group 2 was performed according to whether the calls were made during the first or last 3 months after EPR introduction. We analyzed 37 599 ambulance calls (17 950 in group 1 and 19 649 in group 2). The median call duration was 48 minutes in group 1 and 49 minutes in group 2 (P = .008). In group 2, call duration was longer during the first 3 months after EPR introduction. In multiple linear regression analysis, urgency category (P < .0001), unit level (P < .0001), and transportation decision (P < .0001) influenced call duration. The documentation method was not a significant factor. An electronic patient record system can be implemented in an urban ambulance service in such a way that the documentation method does not become a significant factor in determining call duration in the long run. A temporary performance drop during the first 3 months after introduction was noticed, reflecting an adaptation process to a new way of working.
ERIC Educational Resources Information Center
Charman, Steve D.; Carlucci, Marianna; Vallano, Jon; Gregory, Amy Hyman
2010-01-01
The current manuscript proposes a theory of how witnesses assess their confidence following a lineup identification, called the selective cue integration framework (SCIF). Drawing from past research on the postidentification feedback effect, the SCIF details a three-stage process of confidence assessment that is based largely on a…
ERIC Educational Resources Information Center
Greenlaw, M. Jean; And Others
This study examined the effect of three different modes of presentation on elementary education majors' selection and rating of materials for reading instruction. Materials were chosen to represent each of the following propaganda techniques: glittering generalities, name calling, transfer, testimonial, bandwagon, and card stacking. Students in…
Cools, A R
1980-10-01
The purpose of this study was to detect the behavioural effect of drug-induced changes in the neostriatal dopaminergic activity upon the degree of intrinsic (self-generated) and extrinsic (externally produced) constraints on the selection of behavioural patterns in rats. Both systemic and neostriatal injections of extremely low doses of apomorphine and haloperidol were used to change the neostriatal dopaminergic activity. Behavioural changes were observed in (a) an open-field test, (b) a so-called 'swimming without escape' test, (c) a so-called 'swimming with escape' test, and (d) a test to detect deficiencies in sensory, motor and sensorimotor capacities required to perform both swimming tests. Evidence is found that the neostriatum, especially the neostriatal dopaminergic activity, determines the animal's ability to select the best strategy in a stressful situation by modifying the process of switching strategies under pressure of factors intrinsic to the organism: neither sensory neglect nor an inability to initiate voluntary movements underlay the observed phenomena. It is suggested that the neostriatum determines the individual flexibility to cope with available sensory information.
Maximizing the Spread of Influence via Generalized Degree Discount.
Wang, Xiaojie; Zhang, Xue; Zhao, Chengli; Yi, Dongyun
2016-01-01
It is a crucial and fundamental issue to identify a small subset of influential spreaders that can control the spreading process in networks. In previous studies, a degree-based heuristic called DegreeDiscount has been shown to effectively identify multiple influential spreaders and has served as a benchmark method. However, the basic assumption of DegreeDiscount is not adequate, because it treats all the nodes equally without any differences. To consider a general situation in real world networks, a novel heuristic method named GeneralizedDegreeDiscount is proposed in this paper as an effective extension of the original method. In our method, the status of a node is defined as the probability of its not being influenced by any of its neighbors, and an index, the generalized discounted degree of a node, is presented to measure the expected number of nodes it can influence. The spreaders are then selected sequentially according to their generalized discounted degree in the current network. Empirical experiments are conducted on four real networks, and the results show that the spreaders identified by our approach are more influential than those found by several benchmark methods. Finally, we analyze the relationship between our method and three common degree-based methods. PMID:27732681
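For context, the classic DegreeDiscount heuristic that the paper extends can be sketched as follows; the toy graph, the propagation probability `p`, and the helper name are our own choices. The generalized method replaces this discount formula with a probability-of-remaining-uninfluenced term:

```python
# Classic DegreeDiscount sketch: pick k seed spreaders, discounting a
# node's degree as its neighbours get selected:
#   dd_v = d_v - 2*t_v - (d_v - t_v) * t_v * p
# where t_v is the number of already-selected neighbours of v.

def degree_discount(adj, k, p=0.1):
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    t = {v: 0 for v in adj}        # selected neighbours so far
    dd = dict(degree)              # discounted degree, initially the degree
    seeds = []
    for _ in range(k):
        u = max((v for v in adj if v not in seeds), key=dd.get)
        seeds.append(u)
        for w in adj[u]:
            if w not in seeds:
                t[w] += 1
                dd[w] = degree[w] - 2 * t[w] - (degree[w] - t[w]) * t[w] * p
    return seeds

# Hypothetical undirected graph as an adjacency dict
adj = {
    "a": ["b", "c", "d"],
    "b": ["a", "c"],
    "c": ["a", "b"],
    "d": ["a", "e"],
    "e": ["d"],
}
print(degree_discount(adj, k=2))
```

After the hub "a" is chosen, its neighbours' discounted degrees drop, so the second seed comes from elsewhere in the graph rather than from a's cluster.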
Phenolics from Castanea sativa leaves and their effects on UVB-induced damage.
Cerulli, Antonietta; Masullo, Milena; Mari, Angela; Balato, Anna; Filosa, Rosanna; Lembo, Serena; Napolitano, Assunta; Piacente, Sonia
2018-05-01
The phytochemical investigation of the methanol extract of the leaves of Castanea sativa Mill., source of the Italian PGI (Protected Geographical Indication) product 'Marrone di Roccadaspide' (Campania region), afforded as main compounds crenatin (1), chestanin (2), gallic acid (3), cretanin (4), 5-O-p-coumaroylquinic acid (5), p-methylgallic acid (6) and quercetin-3-O-glucoside (7). To quantify the isolated compounds, an LC-ESI(QqQ)MS method working with a very sensitive and selective tandem mass spectrometry experiment called Multiple Reaction Monitoring (MRM) was developed. Moreover, the antioxidant capacity (TEAC assay) and the ability of compounds 1-7 to protect HaCaT human keratinocytes from UVB-induced damage were investigated.
Gorelik, Tatiana E; Schmidt, Martin U; Kolb, Ute; Billinge, Simon J L
2015-04-01
This paper shows that pair-distribution function (PDF) analyses can be carried out on organic and organometallic compounds from powder electron diffraction data. Different experimental setups are demonstrated, including selected area electron diffraction and nanodiffraction in transmission electron microscopy or nanodiffraction in scanning transmission electron microscopy modes. The methods were demonstrated on organometallic complexes (chlorinated and unchlorinated copper phthalocyanine) and on purely organic compounds (quinacridone). The PDF curves from powder electron diffraction data, called ePDF, are in good agreement with PDF curves determined from X-ray powder data, demonstrating that the problems of obtaining kinematical scattering data and avoiding beam damage to the sample can be resolved.
Evolutionary learning processes as the foundation for behaviour change.
Crutzen, Rik; Peters, Gjalt-Jorn Ygram
2018-03-01
We argue that the active ingredients of behaviour change interventions, often called behaviour change methods (BCMs) or techniques (BCTs), can usefully be placed on a dimension of psychological aggregation. We introduce evolutionary learning processes (ELPs) as fundamental building blocks that are on a lower level of psychological aggregation than BCMs/BCTs. A better understanding of ELPs is useful to select the appropriate BCMs/BCTs to target determinants of behaviour or, vice versa, to identify potential determinants targeted by a given BCM/BCT, and to optimally translate them into practical applications. Using these insights during intervention development may increase the likelihood of developing effective interventions, both in terms of behaviour change and maintenance of behaviour change.
Automatic Line Calling Badminton System
NASA Astrophysics Data System (ADS)
Affandi Saidi, Syahrul; Adawiyah Zulkiplee, Nurabeahtul; Muhammad, Nazmizan; Sarip, Mohd Sharizan Md
2018-05-01
A system and relevant method are described to detect whether a projectile impact occurs on one side of a boundary line or the other. The system employs the use of force sensing resistor-based sensors that may be designed in segments or assemblies and linked to a mechanism with a display. An impact classification system is provided for distinguishing between various events, including a footstep, ball impact and tennis racquet contact. A sensor monitoring system is provided for determining the condition of sensors and providing an error indication if sensor problems exist. A service detection system is provided when the system is used for tennis that permits activation of selected groups of sensors and deactivation of others.
Kouchri, Farrokh Mohammadzadeh
2012-11-06
A Voice over Internet Protocol (VoIP) communications system, a method of managing a communications network in such a system and a program product therefore. The system/network includes an ENERGY STAR (E-star) aware softswitch and E-star compliant communications devices at system endpoints. The E-star aware softswitch allows E-star compliant communications devices to enter and remain in power saving mode. The E-star aware softswitch spools messages and forwards only selected messages (e.g., calls) to the devices in power saving mode. When the E-star compliant communications devices exit power saving mode, the E-star aware softswitch forwards spooled messages.
NASA Astrophysics Data System (ADS)
Komachi, Mamoru; Kudo, Taku; Shimbo, Masashi; Matsumoto, Yuji
Bootstrapping has a tendency, called semantic drift, to select instances unrelated to the seed instances as the iteration proceeds. We demonstrate that the semantic drift of Espresso-style bootstrapping has the same root as the topic drift of Kleinberg's HITS, using a simplified graph-based reformulation of bootstrapping. We confirm that two graph-based algorithms, the von Neumann kernels and the regularized Laplacian, can reduce the effect of semantic drift in the task of word sense disambiguation (WSD) on the Senseval-3 English Lexical Sample Task. The proposed algorithms achieve superior performance to Espresso and previous graph-based WSD methods, even though they have fewer parameters and are easy to calibrate.
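One of the two kernels mentioned, the regularized Laplacian, can be computed in closed form as K = (I + βL)⁻¹, where L is the graph Laplacian. A minimal sketch on a toy graph (the adjacency matrix and β below are our choices, purely for illustration):

```python
import numpy as np

# Toy 4-node co-occurrence graph (invented for illustration).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                                  # combinatorial graph Laplacian
beta = 0.5                                 # regularization strength
K = np.linalg.inv(np.eye(4) + beta * L)    # regularized Laplacian kernel
print(np.round(K, 3))
```

K is a symmetric positive-definite similarity matrix; entry K[i, j] aggregates paths between nodes i and j with longer paths damped by β, which is what tempers the drift toward dominant hub nodes.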
Reinforcement learning for resource allocation in LEO satellite networks.
Usaha, Wipawee; Barria, Javier A
2007-06-01
In this paper, we develop and assess online decision-making algorithms for call admission and routing for low Earth orbit (LEO) satellite networks. It has been shown in a recent paper that, in a LEO satellite system, a semi-Markov decision process formulation of the call admission and routing problem can achieve better performance in terms of an average revenue function than existing routing methods. However, the conventional dynamic programming (DP) numerical solution becomes prohibitive as the problem size increases. In this paper, two solution methods based on reinforcement learning (RL) are proposed in order to circumvent the computational burden of DP. The first method is based on an actor-critic method with temporal-difference (TD) learning. The second method is based on a critic-only method, called optimistic TD learning. The algorithms enhance performance in terms of requirements in storage, computational complexity and computational time, and in terms of an overall long-term average revenue function that penalizes blocked calls. Numerical studies are carried out, and the results obtained show that the RL framework can achieve up to 56% higher average revenue over existing routing methods used in LEO satellite networks with reasonable storage and computational requirements.
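The TD learning at the core of both proposed methods can be illustrated with the textbook TD(0) value update on a toy two-state admission example; the states, rewards, and transition dynamics below are hypothetical and are not the paper's satellite model:

```python
import random

# Toy TD(0) sketch: estimate state values for a made-up two-state
# call-admission process (all dynamics invented for illustration).
random.seed(0)
V = {"low_load": 0.0, "high_load": 0.0}
alpha, gamma = 0.1, 0.9

def step(state):
    # admitting a call earns revenue at low load but is costly at high
    # load; load fluctuates stochastically (hypothetical dynamics)
    if state == "low_load":
        return 1.0, ("high_load" if random.random() < 0.3 else "low_load")
    return -1.0, ("low_load" if random.random() < 0.5 else "high_load")

state = "low_load"
for _ in range(5000):
    reward, nxt = step(state)
    # TD(0) update: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))
    V[state] += alpha * (reward + gamma * V[nxt] - V[state])
    state = nxt
print({s: round(v, 2) for s, v in V.items()})
```

The learned values rank low-load states above high-load ones, which is the signal an admission policy (actor-critic or critic-only) would use to decide whether accepting another call is worth the expected future revenue.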
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Select Committee on Aging.
This document contains testimony and prepared statements from a Congressional hearing called to examine the issue of elder abuse. Chairman Claude Pepper's opening statement discusses the horror of elder abuse and calls for federal legislation, similar to the child abuse legislation, for combating elder abuse. Elder abuse is defined as physical…
LinkImputeR: user-guided genotype calling and imputation for non-model organisms.
Money, Daniel; Migicovsky, Zoë; Gardner, Kyle; Myles, Sean
2017-07-10
Genomic studies such as genome-wide association and genomic selection require genome-wide genotype data. All existing technologies used to create these data result in missing genotypes, which are often then inferred using genotype imputation software. However, existing imputation methods most often make use only of genotypes that are successfully inferred after having passed a certain read depth threshold. Because of this, any read information for genotypes that did not pass the threshold, and were thus set to missing, is ignored. Most genomic studies also choose read depth thresholds and quality filters without investigating their effects on the size and quality of the resulting genotype data. Moreover, almost all genotype imputation methods require ordered markers and are therefore of limited utility in non-model organisms. Here we introduce LinkImputeR, a software program that exploits the read count information that is normally ignored, and makes use of all available DNA sequence information for the purposes of genotype calling and imputation. It is specifically designed for non-model organisms since it requires neither ordered markers nor a reference panel of genotypes. Using next-generation DNA sequence (NGS) data from apple, cannabis and grape, we quantify the effect of varying read count and missingness thresholds on the quantity and quality of genotypes generated from LinkImputeR. We demonstrate that LinkImputeR can increase the number of genotype calls by more than an order of magnitude, can improve genotyping accuracy by several percent and can thus improve the power of downstream analyses. Moreover, we show that the effects of quality and read depth filters can differ substantially between data sets and should therefore be investigated on a per-study basis. 
By exploiting DNA sequence data that is normally ignored during genotype calling and imputation, LinkImputeR can significantly improve both the quantity and quality of genotype data generated from NGS technologies. It enables the user to quickly and easily examine the effects of varying thresholds and filters on the number and quality of the resulting genotype calls. In this manner, users can decide on thresholds that are most suitable for their purposes. We show that LinkImputeR can significantly augment the value and utility of NGS data sets, especially in non-model organisms with poor genomic resources.
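The read-depth threshold effect described above can be illustrated with a toy genotype caller; the `call_genotype` helper, the allele-fraction cutoffs, and the depth thresholds are invented for illustration and are not LinkImputeR's actual model:

```python
# Toy illustration of the threshold trade-off: genotypes are called only
# when read depth passes `min_depth`, so stricter thresholds discard
# read information that imputation could otherwise exploit.

def call_genotype(ref_reads, alt_reads, min_depth):
    depth = ref_reads + alt_reads
    if depth < min_depth:
        return None                    # set to missing
    alt_frac = alt_reads / depth
    if alt_frac < 0.2:
        return "0/0"                   # homozygous reference
    if alt_frac > 0.8:
        return "1/1"                   # homozygous alternate
    return "0/1"                       # heterozygous

# (ref, alt) read counts for five hypothetical samples at one site
samples = [(10, 0), (3, 2), (1, 1), (0, 7), (2, 0)]
strict = [call_genotype(r, a, min_depth=8) for r, a in samples]
lenient = [call_genotype(r, a, min_depth=2) for r, a in samples]
print(sum(g is not None for g in strict), sum(g is not None for g in lenient))
```

With the strict threshold only one of five samples yields a call; the lenient threshold calls all five, at the cost of noisier low-depth genotypes, which is exactly the quantity-versus-quality trade-off the paper recommends examining per study.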
Sports Training Support Method by Self-Coaching with Humanoid Robot
NASA Astrophysics Data System (ADS)
Toyama, S.; Ikeda, F.; Yasaka, T.
2016-09-01
This paper proposes a new training support method called self-coaching with humanoid robots. In the proposed method, two small size inexpensive humanoid robots are used because of their availability. One robot called target robot reproduces motion of a target player and another robot called reference robot reproduces motion of an expert player. The target player can recognize a target technique from the reference robot and his/her inadequate skill from the target robot. Modifying the motion of the target robot as self-coaching, the target player could get advanced cognition. Some experimental results show some possibility as the new training method and some issues of the self-coaching interface program as a future work.
S. Loeb; E. Britzke
2010-01-01
Bats respond to the calls of conspecifics as well as to calls of other species; however, few studies have attempted to quantify these responses or understand the functions of these calls. We tested the response of Rafinesque's big-eared bats (Corynorhinus rafinesquii) to social calls as a possible method to increase capture success and to understand the function of...