LEARNING TO READ SCIENTIFIC RUSSIAN BY THE THREE QUESTION EXPERIMENTAL (3QX) METHOD.
ERIC Educational Resources Information Center
ALFORD, M.H.T.
A NEW METHOD FOR LEARNING TO READ TECHNICAL LITERATURE IN A FOREIGN LANGUAGE IS BEING DEVELOPED AND TESTED AT THE LANGUAGE CENTRE OF THE UNIVERSITY OF ESSEX, COLCHESTER, ENGLAND. THE METHOD IS CALLED "THREE QUESTION EXPERIMENTAL METHOD (3QX)," AND IT HAS BEEN USED IN THREE COURSES FOR TEACHING SCIENTIFIC RUSSIAN TO PHYSICISTS. THE THREE…
Sound imaging of nocturnal animal calls in their natural habitat.
Mizumoto, Takeshi; Aihara, Ikkyu; Otsuka, Takuma; Takeda, Ryu; Aihara, Kazuyuki; Okuno, Hiroshi G
2011-09-01
We present a novel method for imaging acoustic communication between nocturnal animals. Investigating the spatio-temporal calling behavior of nocturnal animals, e.g., frogs and crickets, has been difficult because of the need to distinguish many animals' calls in noisy environments without being able to see them. Our method visualizes the spatial and temporal dynamics of calling using dozens of sound-to-light conversion devices (called "Fireflies") and an off-the-shelf video camera. Each Firefly, which consists of a microphone and a light-emitting diode, emits light when it captures nearby sound. Deploying dozens of Fireflies in a target area, we record the calls of multiple individuals through the video camera. We conducted two experiments, one indoors and one in the field, using Japanese tree frogs (Hyla japonica). The indoor experiment demonstrates that our method correctly visualizes the frogs' calling behavior, confirming the known behavior that two frogs call either synchronously or in anti-phase. The field experiment, in a rice paddy where Japanese tree frogs live, likewise visualizes the same calling behavior and confirms anti-phase synchronization in the field. These results confirm that our method can visualize the calling behavior of nocturnal animals in their natural habitat.
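The Firefly's sound-to-light conversion can be sketched as a simple envelope threshold: the LED is lit whenever the short-window amplitude at the microphone exceeds a level. This is a minimal illustration, not the paper's hardware; the function name, window size, and threshold are invented.

```python
# Hypothetical sketch of a Firefly-style sound-to-light converter:
# the LED turns on whenever the short-window mean amplitude exceeds
# a threshold. Window size and threshold are illustrative values.

def led_states(samples, window=4, threshold=0.5):
    """Return one on/off LED state per non-overlapping window of samples."""
    states = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        # Mean absolute amplitude as a crude envelope follower.
        level = sum(abs(s) for s in chunk) / window
        states.append(level > threshold)
    return states

# A burst of calling (high amplitude) followed by near-silence.
signal = [0.9, -0.8, 0.95, -0.85] + [0.05, -0.04, 0.03, -0.02]
print(led_states(signal))  # [True, False]
```

Recording many such on/off traces with a video camera then reduces multi-animal call analysis to tracking blinking lights.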
Sports Training Support Method by Self-Coaching with Humanoid Robot
NASA Astrophysics Data System (ADS)
Toyama, S.; Ikeda, F.; Yasaka, T.
2016-09-01
This paper proposes a new training support method called self-coaching with humanoid robots. The proposed method uses two small, inexpensive humanoid robots because of their availability. One robot, called the target robot, reproduces the motion of a target player, and the other, called the reference robot, reproduces the motion of an expert player. The target player can recognize the target technique from the reference robot and his/her inadequate skill from the target robot. By modifying the motion of the target robot as a form of self-coaching, the target player can gain a deeper understanding of the technique. Experimental results show the promise of the new training method and identify issues with the self-coaching interface program as future work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loeb, Susan, C.; Britzke, Eric, R.
Bats respond to the calls of conspecifics as well as to calls of other species; however, few studies have attempted to quantify these responses or understand the functions of these calls. We tested the response of Rafinesque's big-eared bats (Corynorhinus rafinesquii) to social calls as a possible method to increase capture success and to understand the function of social calls. We also tested whether calls of bats within the ranges of the previously designated subspecies differed, whether the responses of Rafinesque's big-eared bats varied with the geographic origin of the calls, and whether other species responded to the calls of C. rafinesquii. We recorded calls of Rafinesque's big-eared bats at two colony roost sites in South Carolina, USA. Calls were recorded while bats were in the roosts and as they exited. Playback sequences for each site were created by copying typical pulses into the playback file. Two mist nets were placed approximately 50–500 m from known roost sites; the net with the playback equipment served as the Experimental net and the one without the equipment served as the Control net. Call structures differed significantly between the Mountain and Coastal Plains populations, with calls from the Mountains being of higher frequency and longer duration. Ten of 11 Rafinesque's big-eared bats were caught in the Control nets, and 13 of 19 bats of other species were captured at Experimental nets, even though overall bat activity did not differ significantly between Control and Experimental nets. Our results suggest that Rafinesque's big-eared bats are not attracted to conspecifics' calls and that these calls may act as an intraspecific spacing mechanism during foraging.
CALL, Prewriting Strategies, and EFL Writing Quantity
ERIC Educational Resources Information Center
Shafiee, Sajad; Koosha, Mansour; Afghar, Akbar
2015-01-01
This study sought to explore the effect of teaching prewriting strategies through different methods of input delivery (i.e. conventional, web-based, and hybrid) on EFL learners' writing quantity. In its quasi-experimental study, the researchers recruited 98 available sophomores, and assigned them to three experimental groups (conventional,…
Method Improving Reading Comprehension In Primary Education Program Students
NASA Astrophysics Data System (ADS)
Rohana
2018-01-01
This study aims to determine the influence of the SQ3R learning method on the English reading comprehension skills of PGSD students. The research is pre-experimental: it is not yet a true experiment because external variables can influence the dependent variable, there is no control group, and the sample was not chosen randomly. The design is a one-group pretest-posttest design involving a single experimental group, in which observation is done twice, before and after the treatment. The observation made before the experiment (O1) is called the pretest and the observation made after it (O2) is called the posttest; the difference O2 - O1 is taken as the effect of the treatment. The results showed an improvement in the reading comprehension skills of the PGSD students in class M.4.3 using the SQ3R method, indicating that SQ3R can improve English comprehension skills.
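The O2 - O1 computation in the one-group pretest-posttest design above can be sketched as a mean paired difference. The scores below are invented placeholders, not the study's data.

```python
# Minimal sketch of the one-group pretest-posttest analysis: the
# treatment effect is estimated as the mean of the paired
# differences O2 - O1. Scores are invented for illustration.

def treatment_effect(pretest, posttest):
    """Mean paired difference O2 - O1 for a one-group design."""
    assert len(pretest) == len(posttest)
    diffs = [post - pre for pre, post in zip(pretest, posttest)]
    return sum(diffs) / len(diffs)

pre = [55, 60, 48, 70]   # O1: comprehension scores before SQ3R
post = [65, 72, 60, 78]  # O2: scores after the SQ3R treatment
print(treatment_effect(pre, post))  # 10.5
```

As the abstract notes, without a control group this difference may also reflect external influences, not only the treatment.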
Contour detection improved by context-adaptive surround suppression.
Sang, Qiang; Cai, Biao; Chen, Hao
2017-01-01
Recently, many image processing applications have taken advantage of a psychophysical and neurophysiological mechanism called "surround suppression" to extract object contours from a natural scene. However, these traditional methods often adopt a single suppression model and a fixed input parameter called the "inhibition level," which must be manually specified. To overcome these drawbacks, we propose a novel model, called "context-adaptive surround suppression," which can automatically control the effect of surround suppression according to local image contextual features measured by a surface estimator based on a local linear kernel. Moreover, a dynamic suppression method and its stopping mechanism are introduced to avoid manual intervention. The proposed algorithm is demonstrated and validated by a broad range of experimental results.
Designing Free Energy Surfaces That Match Experimental Data with Metadynamics
White, Andrew D.; Dama, James F.; Voth, Gregory A.
2015-04-30
Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. Previously we introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. We also introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. Finally, the example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.
Designing free energy surfaces that match experimental data with metadynamics.
White, Andrew D; Dama, James F; Voth, Gregory A
2015-06-09
Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. We previously introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. In this work, we introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. The example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.
de Castro Lacaze, Denise Helena; Sacco, Isabel de C. N.; Rocha, Lys Esther; de Bragança Pereira, Carlos Alberto; Casarotto, Raquel Aparecida
2010-01-01
AIM: We sought to evaluate musculoskeletal discomfort and mental and physical fatigue in the call-center workers of an airline company before and after a supervised exercise program compared with rest breaks during the work shift. INTRODUCTION: This was a longitudinal pilot study conducted in a flight-booking call-center for an airline in São Paulo, Brazil. Occupational health activities are recommended to decrease the negative effects of the call-center working conditions. In practice, exercise programs are commonly recommended for computer workers, but their effects have not been studied in call-center operators. METHODS: Sixty-four call-center operators participated in this study. Thirty-two subjects were placed into the experimental group and attended a 10-min daily exercise session for 2 months. Conversely, 32 participants were placed into the control group and took a 10-min daily rest break during the same period. Each subject was evaluated once a week by means of the Corlett-Bishop body map with a visual analog discomfort scale and the Chalder fatigue questionnaire. RESULTS: Musculoskeletal discomfort decreased in both groups, but the reduction was only statistically significant for the spine and buttocks (p=0.04) and the sum of the segments (p=0.01) in the experimental group. In addition, the experimental group showed significant differences in the level of mental fatigue, especially in questions related to memory and tiredness (p=0.001). CONCLUSIONS: Our preliminary results demonstrate that appropriately designed and supervised exercise programs may be more efficient than rest breaks in decreasing discomfort and fatigue levels in call-center operators. PMID:20668622
Analysis of a virtual memory model for maintaining database views
NASA Technical Reports Server (NTRS)
Kinsley, Kathryn C.; Hughes, Charles E.
1992-01-01
This paper presents an analytical model for predicting the performance of a new support strategy for database views. This strategy, called the virtual method, is compared with traditional methods for supporting views. The analytical model's predictions of improved performance by the virtual method are then validated by comparing these results with those achieved in an experimental implementation.
Identification and validation of loss of function variants in clinical contexts.
Lescai, Francesco; Marasco, Elena; Bacchelli, Chiara; Stanier, Philip; Mantovani, Vilma; Beales, Philip
2014-01-01
The choice of an appropriate variant calling pipeline for exome sequencing data is becoming increasingly important in translational medicine projects and clinical contexts. Within GOSgene, which facilitates genetic analysis as part of a joint effort of University College London and Great Ormond Street Hospital, we aimed to optimize a variant calling pipeline suitable for our clinical context. We implemented the GATK/Queue framework and evaluated the performance of its two callers: the classical UnifiedGenotyper and the new variant discovery tool HaplotypeCaller. We performed an experimental validation of the loss-of-function (LoF) variants called by the two methods using Sequenom technology. UnifiedGenotyper showed a total validation rate of 97.6% for LoF single-nucleotide polymorphisms (SNPs) and 92.0% for insertions or deletions (INDELs), whereas HaplotypeCaller achieved 91.7% for SNPs and 55.9% for INDELs. We confirm that GATK/Queue is a reliable pipeline in translational medicine and clinical contexts. We conclude that in our working environment, UnifiedGenotyper is the caller of choice, being an accurate method with a high validation rate for error-prone calls like LoF variants. We finally highlight the importance of experimental validation, especially for INDELs, as part of a standard pipeline in clinical environments.
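The validation rates quoted above reduce to a simple computation: the fraction of a caller's calls confirmed by the orthogonal technology. A minimal sketch, with invented variant identifiers standing in for the study's actual calls:

```python
# Hedged sketch of a validation-rate computation: the fraction of a
# caller's loss-of-function calls confirmed by orthogonal genotyping
# (Sequenom in the study). Variant IDs below are invented.

def validation_rate(called, confirmed):
    """Fraction of called variants that were experimentally confirmed."""
    called = set(called)
    return len(called & set(confirmed)) / len(called)

ug_calls = ["chr1:1000A>T", "chr2:2000C>G", "chr3:3000del", "chrX:400G>A"]
confirmed = ["chr1:1000A>T", "chr2:2000C>G", "chrX:400G>A"]
print(validation_rate(ug_calls, confirmed))  # 0.75
```

Comparing this rate across callers, as done for UnifiedGenotyper and HaplotypeCaller, is what motivates the study's caller-of-choice conclusion.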
Small, J R
1993-01-01
This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and that, under all conditions studied, the fitting method outperformed the graph method, even under conditions where the assumptions underlying the fitted function do not hold. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
Training to Use the Scientific Method in a First-Year Physics Laboratory: A Case Study
ERIC Educational Resources Information Center
Sarasola, Ane; Rojas, Jose Félix; Okariz, Ana
2015-01-01
In this work, a specific implementation of a so-called experimental or open-ended laboratory is proposed and evaluated. Keeping in mind the scheduling limitations imposed by the context, first-year engineering physics laboratory practices have been revised in order to facilitate acquisition of the skills that are required in the experimental work.…
Phaser crystallographic software.
McCoy, Airlie J; Grosse-Kunstleve, Ralf W; Adams, Paul D; Winn, Martyn D; Storoni, Laurent C; Read, Randy J
2007-08-01
Phaser is a program for phasing macromolecular crystal structures by both molecular replacement and experimental phasing methods. The novel phasing algorithms implemented in Phaser have been developed using maximum likelihood and multivariate statistics. For molecular replacement, the new algorithms have proved to be significantly better than traditional methods in discriminating correct solutions from noise, and for single-wavelength anomalous dispersion experimental phasing, the new algorithms, which account for correlations between F(+) and F(-), give better phases (lower mean phase error with respect to the phases given by the refined structure) than those that use mean F and anomalous differences DeltaF. One of the design concepts of Phaser was that it be capable of a high degree of automation. To this end, Phaser (written in C++) can be called directly from Python, although it can also be called using traditional CCP4 keyword-style input. Phaser is a platform for future development of improved phasing methods and their release, including source code, to the crystallographic community.
A laboratory exercise in experimental bioimmuration
Mankiewicz, C.
1998-01-01
A paleobiologic laboratory exercise using lunch meat, cheeses, and condiments provides a means for studying a method of fossil preservation called "bioimmuration." The exercise also has students deal with problems associated with other aspects of taphonomy, taxonomy, and paleoecology.
A Shellcode Detection Method Based on Full Native API Sequence and Support Vector Machine
NASA Astrophysics Data System (ADS)
Cheng, Yixuan; Fan, Wenqing; Huang, Wei; An, Jing
2017-09-01
Dynamically monitoring the behavior of a program is widely used to discriminate between benign programs and malware. Such monitoring usually judges a program by dynamic characteristics such as its API call sequence or API call frequency. The key innovation of this paper is to consider the full Native API sequence and use a support vector machine to detect shellcode. We also use a Markov chain to extract and digitize Native API sequence features. Our experimental results show that the method proposed in this paper achieves high detection accuracy.
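The Markov-chain digitization step described above can be sketched as follows: an API call sequence becomes a flattened first-order transition-probability matrix, a fixed-length numeric vector that a classifier such as an SVM can consume. The API names and alphabet are illustrative, not the paper's feature set.

```python
# Sketch of Markov-chain feature extraction: count transitions
# between consecutive API calls, normalize each row, and flatten
# into one feature vector per program trace.

from itertools import product

def markov_features(sequence, alphabet):
    """Flattened first-order transition-probability matrix."""
    counts = {pair: 0 for pair in product(alphabet, repeat=2)}
    for a, b in zip(sequence, sequence[1:]):
        counts[(a, b)] += 1
    features = []
    for a in alphabet:
        row_total = sum(counts[(a, b)] for b in alphabet)
        for b in alphabet:
            features.append(counts[(a, b)] / row_total if row_total else 0.0)
    return features

apis = ["NtOpenFile", "NtWriteFile", "NtOpenFile", "NtWriteFile", "NtClose"]
alphabet = ["NtOpenFile", "NtWriteFile", "NtClose"]
print(markov_features(apis, alphabet))
# [0.0, 1.0, 0.0, 0.5, 0.0, 0.5, 0.0, 0.0, 0.0]
```

Vectors of this fixed length can then be fed to any SVM implementation for benign-versus-shellcode classification.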
Revisiting the Scale-Invariant, Two-Dimensional Linear Regression Method
ERIC Educational Resources Information Center
Patzer, A. Beate C.; Bauer, Hans; Chang, Christian; Bolte, Jan; Sülzle, Detlev
2018-01-01
The scale-invariant way to analyze two-dimensional experimental and theoretical data with statistical errors in both the independent and dependent variables is revisited by using what we call the triangular linear regression method. This is compared to the standard least-squares fit approach by applying it to typical simple sets of example data…
NASA Technical Reports Server (NTRS)
Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.
1989-01-01
In response to the tremendous growth in the development of advanced materials, such as fiber-reinforced plastic (FRP) composite materials, a new numerical method is developed to analyze and predict the time-dependent properties of these materials. Basic concepts in viscoelasticity, laminated composites, and previous viscoelastic numerical methods are presented. A stable numerical method, called the nonlinear differential equation method (NDEM), is developed to calculate the in-plane stresses and strains over any time period for a general laminate constructed from nonlinear viscoelastic orthotropic plies. The method is implemented in an in-plane stress analysis computer program, called VCAP, to demonstrate its usefulness and to verify its accuracy. A number of actual experimental test results performed on Kevlar/epoxy composite laminates are compared to predictions calculated from the numerical method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonior, Jason D; Hu, Zhen; Guo, Terry N.
This letter presents an experimental demonstration of software-defined-radio-based wireless tomography using computer-hosted radio devices called Universal Software Radio Peripheral (USRP). This experimental brief follows our vision and previous theoretical study of wireless tomography that combines wireless communication and RF tomography to provide a novel approach to remote sensing. Automatic data acquisition is performed inside an RF anechoic chamber. Semidefinite relaxation is used for phase retrieval, and the Born iterative method is utilized for imaging the target. Experimental results are presented, validating our vision of wireless tomography.
ERIC Educational Resources Information Center
Guerin, Stephen M.; Guerin, Clark L.
1979-01-01
Discusses a phenomenon called Extrasensory Perception (ESP), whereby information is gained directly by the mind without the use of the ordinary senses. Experiments in ESP and the basic equipment and methods are presented. Statistical evaluation of ESP experimental results is also included. (HM)
The fuel tax compliance unit : an evaluation and analysis of results.
DOT National Transportation Integrated Search
2004-01-01
Kentucky utilized TEA-21 federal funds to create an innovative pilot program to identify the best practices and methods for auditing taxpayers of transportation related taxes. This program involved a four-year experimental program called the Fuel Tax...
ON DEVELOPING TOOLS AND METHODS FOR ENVIRONMENTALLY BENIGN PROCESSES
Two types of tools are generally needed for designing processes and products that are cleaner from an environmental impact perspective. The first kind is called process tools. Process tools are based on information obtained from experimental investigations in chemistry, materials…
Improved Method for Prediction of Attainable Wing Leading-Edge Thrust
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; McElroy, Marcus O.; Lessard, Wendy B.; McCullers, L. Arnold
1996-01-01
Prediction of the loss of wing leading-edge thrust and the accompanying increase in drag due to lift, when flow is not completely attached, presents a difficult but commonly encountered problem. A method (called the previous method) for the prediction of attainable leading-edge thrust and the resultant effect on airplane aerodynamic performance has been in use for more than a decade. Recently, the method has been revised to enhance its applicability to current airplane design and evaluation problems. The improved method (called the present method) provides for a greater range of airfoil shapes from very sharp to very blunt leading edges. It is also based on a wider range of Reynolds numbers than was available for the previous method. The present method, when employed in computer codes for aerodynamic analysis, generally results in improved correlation with experimental wing-body axial-force data and provides reasonable estimates of the measured drag.
Biological relevance of CNV calling methods using familial relatedness including monozygotic twins.
Castellani, Christina A; Melka, Melkaye G; Wishart, Andrea E; Locke, M Elizabeth O; Awamleh, Zain; O'Reilly, Richard L; Singh, Shiva M
2014-04-21
Studies involving the analysis of structural variation, including Copy Number Variation (CNV), have recently exploded in the literature. Furthermore, CNVs have been associated with a number of complex diseases and neurodevelopmental disorders. Common methods for CNV detection use SNP, CNV, or CGH arrays, where the signal intensities of consecutive probes are used to define the number of copies associated with a given genomic region. These practices pose a number of challenges that interfere with the ability of available methods to accurately call CNVs. It has, therefore, become necessary to develop experimental protocols to test the reliability of CNV calling methods from microarray data so that researchers can properly discriminate biologically relevant data from noise. We have developed a workflow for the integration of data from multiple CNV calling algorithms using the same array results. It uses four CNV calling programs: PennCNV (PC), Affymetrix® Genotyping Console™ (AGC), Partek® Genomics Suite™ (PGS) and Golden Helix SVS™ (GH) to analyze CEL files from the Affymetrix® Human SNP 6.0 Array™. To assess the relative suitability of each program, we used individuals of known genetic relationships. We found significant differences in CNV calls obtained by different CNV calling programs. Although the programs showed variable patterns of CNVs in the same individuals, their distribution in individuals of different degrees of genetic relatedness has allowed us to offer two suggestions. The first involves the use of multiple algorithms for the detection of the largest possible number of CNVs, and the second suggests the use of PennCNV over all other methods when the use of only one software program is desirable.
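The multi-algorithm integration the workflow describes can be sketched as an overlap-based consensus: a CNV region is retained when at least a minimum number of programs report an overlapping call in the same individual. The program names echo the abstract, but the coordinates and the consensus rule itself are illustrative assumptions, not the published workflow.

```python
# Sketch of consensus CNV calling across programs: keep a call when
# calls from >= min_callers programs overlap it on the same chromosome.

def consensus_cnvs(calls_by_program, min_callers=2):
    """calls_by_program: {program: [(chrom, start, end), ...]}.
    Returns calls supported by >= min_callers programs (by overlap)."""
    def overlaps(a, b):
        return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

    all_calls = [(p, c) for p, cs in calls_by_program.items() for c in cs]
    supported = []
    for prog, call in all_calls:
        programs = {p for p, c in all_calls if overlaps(call, c)}
        if len(programs) >= min_callers and call not in supported:
            supported.append(call)
    return supported

calls = {
    "PennCNV": [("chr1", 100, 500), ("chr7", 10, 90)],
    "AGC":     [("chr1", 300, 700)],
    "PGS":     [("chr2", 50, 60)],
}
print(consensus_cnvs(calls))  # [('chr1', 100, 500), ('chr1', 300, 700)]
```

The chr7 and chr2 calls are dropped because only a single program supports each, which is the spirit of the paper's first suggestion.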
Abraham Trembley's strategy of generosity and the scope of celebrity in the mid-eighteenth century.
Ratcliff, Marc J
2004-12-01
Historians of science have long believed that Abraham Trembley's celebrity and impact were attributable chiefly to the incredible regenerative phenomena demonstrated by the polyp, which he discovered in 1744, and to the new experimental method he devised to investigate them. This essay shows that experimental method alone cannot account for Trembley's success and influence; nor are the marvels of the polyp sufficient to explain its scientific and cultural impact. Experimental method was but one element in a new conception of the laboratory that called for both experimental and para-experimental skills whose public availability depended on a new style of communication. The strategy of generosity that led Trembley to dispatch polyps everywhere enabled experimental naturalist laboratories to spread throughout Europe, and the free circulation of living objects for scientific research led practitioners to establish an experimental field distinct from mechanical physics. Scholars reacted to the marvels of the polyp by strengthening the boundaries between the public and academic spheres and, in consequence, opened a space for new standards in both scientific work and the production of celebrity.
Conceptualizing Effectiveness in Disability Research
ERIC Educational Resources Information Center
de Bruin, Catriona L.
2017-01-01
Policies promoting evidence-based practice in education typically endorse evaluations of the effectiveness of teaching strategies through specific experimental research designs and methods. A number of researchers have critiqued this approach to evaluation as narrow and called for greater methodological sophistication. This paper discusses the…
Vibration band gaps for elastic metamaterial rods using wave finite element method
NASA Astrophysics Data System (ADS)
Nobrega, E. D.; Gautier, F.; Pelat, A.; Dos Santos, J. M. C.
2016-10-01
Band gaps in elastic metamaterial rods with spatial periodic distribution and periodically attached local resonators are investigated. New techniques for analyzing metamaterial systems combine an analytical or numerical method with wave propagation. One of them, called here the wave spectral element method (WSEM), combines the spectral element method (SEM) with Floquet-Bloch's theorem. A modern methodology called the wave finite element method (WFEM), developed to calculate dynamic behavior in periodic acoustic and structural systems, uses a similar approach in which SEM is replaced by the conventional finite element method (FEM). In this paper, it is proposed to use WFEM to calculate band gaps in elastic metamaterial rods with spatial periodic distribution and periodically attached local resonators of multiple degrees of freedom (M-DOF). Simulated examples with band gaps generated by Bragg scattering and local resonators are calculated by WFEM and verified with WSEM, which is used as a reference method. Results are presented in the form of attenuation constant, vibration transmittance and frequency response function (FRF). For all cases, WFEM and WSEM results are in agreement, provided that the number of elements used in WFEM is sufficient for convergence. An experimental test was conducted with a real elastic metamaterial rod, manufactured from plastic in a 3D printer, without the local-resonance-type effect. The experimental results for the metamaterial rod with band gaps generated by Bragg scattering are compared with the simulated ones. Both numerical methods (WSEM and WFEM) localize the band gap position and width very close to the experimental results. A hybrid approach combining WFEM with the commercial finite element software ANSYS is proposed to model complex metamaterial systems. Two examples, modeling an elastic metamaterial rod unit cell with a simple 1D rod element and with a 3D solid element, illustrate its efficiency and accuracy; the results show good agreement with the experimental data.
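The Bragg-scattering band gaps discussed above can be illustrated with a plain transfer-matrix calculation for a bi-material rod: the Floquet-Bloch propagation constant mu satisfies cosh(mu) = trace(T)/2 for the unit-cell transfer matrix T, and a band gap opens wherever |trace(T)/2| > 1. This is a textbook sketch, not the WFEM/WSEM formulation, and the material values are invented.

```python
# Transfer-matrix sketch for longitudinal waves in a periodic
# two-segment rod. State vector: (displacement, axial force).

import math

def half_trace(omega, segments):
    """Half-trace of the unit-cell transfer matrix for a 1D rod.
    segments: list of (E, rho, A, L) tuples; |half_trace| > 1 => gap."""
    t11, t12, t21, t22 = 1.0, 0.0, 0.0, 1.0  # start from identity
    for E, rho, A, L in segments:
        c = math.sqrt(E / rho)   # longitudinal wave speed
        k = omega / c            # wavenumber
        z = E * A * k            # = A*sqrt(E*rho)*omega
        m11, m12 = math.cos(k * L), math.sin(k * L) / z
        m21, m22 = -z * math.sin(k * L), math.cos(k * L)
        t11, t12, t21, t22 = (m11 * t11 + m12 * t21, m11 * t12 + m12 * t22,
                              m21 * t11 + m22 * t21, m21 * t12 + m22 * t22)
    return (t11 + t22) / 2.0

# Strong impedance mismatch between the two segments of the cell.
cell = [(2.0e9, 1200.0, 1e-4, 0.05), (200.0e9, 7800.0, 1e-4, 0.05)]
gap = [w for w in range(1000, 200000, 1000) if abs(half_trace(w, cell)) > 1]
print(len(gap) > 0)  # True: band-gap frequencies exist
```

Frequencies where |half_trace| exceeds 1 carry a nonzero attenuation constant, which is the quantity the abstract reports for locating band gap position and width.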
Post-Fisherian Experimentation: From Physical to Virtual
Jeff Wu, C. F.
2014-04-24
Fisher's pioneering work in design of experiments has inspired further work with broader applications, especially in industrial experimentation. Three topics in physical experiments are discussed: the principles of effect hierarchy, sparsity, and heredity for factorial designs; a new method called CME for de-aliasing aliased effects; and robust parameter design. The recent emergence of virtual experiments on a computer is reviewed. Here, some major challenges in computer experiments, which must go beyond Fisherian principles, are outlined.
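A two-level full factorial design and its main-effect estimates, the setting in which the hierarchy/sparsity/heredity principles above are stated, can be sketched as follows. The response values are invented; effect sparsity would say to trust only the few large effects.

```python
# Sketch of a 2^n full factorial design (runs coded -1/+1) and
# main-effect estimation: mean response at +1 minus mean at -1.

from itertools import product

def full_factorial(n_factors):
    """All 2^n runs, coded -1/+1."""
    return list(product([-1, 1], repeat=n_factors))

def main_effects(design, response):
    """Main effect of each factor: mean(y at +1) - mean(y at -1)."""
    n = len(design[0])
    effects = []
    for j in range(n):
        hi = [y for run, y in zip(design, response) if run[j] == 1]
        lo = [y for run, y in zip(design, response) if run[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

design = full_factorial(2)    # [(-1,-1), (-1,1), (1,-1), (1,1)]
y = [10.0, 12.0, 20.0, 22.0]  # factor 1 dominates: effect sparsity
print(main_effects(design, y))  # [10.0, 2.0]
```

In fractional designs, some of these effect columns coincide (aliasing); de-aliasing them is the problem the CME method mentioned above addresses.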
NASA Astrophysics Data System (ADS)
Banabic, D.; Vos, M.; Paraianu, L.; Jurco, P.
2007-05-01
The experimental research on the formability of metal sheets has shown that there is a significant dispersion of the limit strains in an area delimited by two curves: a lower curve (LFLC) and an upper one (UFLC). The region between the two curves defines the so-called Forming Limit Band (FLB). So far, this forming band has only been determined experimentally. In this paper the authors suggest a method to predict the Forming Limit Band. The proposed method is illustrated on the AA6111-T43 aluminium alloy.
Parolini, Giuditta
2015-01-01
During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them.
Kim, Min Kyung; Lane, Anatoliy; Kelley, James J; Lun, Desmond S
2016-01-01
Several methods have been developed to predict system-wide and condition-specific intracellular metabolic fluxes by integrating transcriptomic data with genome-scale metabolic models. While powerful in many settings, existing methods have several shortcomings, and it is unclear which method has the best accuracy in general because of limited validation against experimentally measured intracellular fluxes. We present a general optimization strategy for inferring intracellular metabolic flux distributions from transcriptomic data coupled with genome-scale metabolic reconstructions. It consists of two different template models called DC (determined carbon source model) and AC (all possible carbon sources model) and two different new methods called E-Flux2 (E-Flux method combined with minimization of l2 norm) and SPOT (Simplified Pearson cOrrelation with Transcriptomic data), which can be chosen and combined depending on the availability of knowledge on carbon source or objective function. This enables us to simulate a broad range of experimental conditions. We examined E. coli and S. cerevisiae as representative prokaryotic and eukaryotic microorganisms respectively. The predictive accuracy of our algorithm was validated by calculating the uncentered Pearson correlation between predicted fluxes and measured fluxes. To this end, we compiled 20 experimental conditions (11 in E. coli and 9 in S. cerevisiae) of transcriptome measurements coupled with corresponding central carbon metabolism intracellular flux measurements determined by 13C metabolic flux analysis (13C-MFA), which is the largest dataset assembled to date for the purpose of validating inference methods for predicting intracellular fluxes. In both organisms, our method achieves an average correlation coefficient ranging from 0.59 to 0.87, outperforming a representative sample of competing methods.
Easy-to-use implementations of E-Flux2 and SPOT are available as part of the open-source package MOST (http://most.ccib.rutgers.edu/). Our method represents a significant advance over existing methods for inferring intracellular metabolic flux from transcriptomic data. It not only achieves higher accuracy, but it also combines into a single method a number of other desirable characteristics including applicability to a wide range of experimental conditions, production of a unique solution, fast running time, and the availability of a user-friendly implementation.
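The validation metric used above, the uncentered Pearson correlation between predicted and measured fluxes, can be sketched in a few lines; the flux vectors below are illustrative values, not data from the study.

```python
import numpy as np

def uncentered_pearson(predicted, measured):
    """Uncentered Pearson (cosine) correlation: no mean subtraction,
    so the score rewards agreement in both direction and magnitude."""
    p = np.asarray(predicted, dtype=float)
    m = np.asarray(measured, dtype=float)
    return float(np.dot(p, m) / (np.linalg.norm(p) * np.linalg.norm(m)))

# Illustrative flux vectors (e.g., mmol/gDW/h); not from the paper.
pred = [10.0, 4.2, 1.1, 0.3]
meas = [9.5, 5.0, 0.9, 0.4]
r = uncentered_pearson(pred, meas)
```

Unlike the centered form, proportional vectors score exactly 1, which suits flux comparisons where overall magnitude matters.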
NASA Astrophysics Data System (ADS)
Beck, Megan; Morse, Michael; Corolewski, Caleb; Fritchman, Koyuki; Stifter, Chris; Poole, Callum; Hurley, Michael; Frary, Megan
2017-08-01
Dynamic recrystallization (DRX) occurs during high-temperature deformation in metals and alloys with low to medium stacking fault energies. Previous simulations and experimental research have shown the effect of temperature and grain size on DRX behavior, but not the effect of the grain boundary character distribution. To investigate the effects of the distribution of grain boundary types, experimental testing was performed on stainless steel 316L specimens with different initial special boundary fractions (SBF). This work was completed in conjunction with computer simulations that used a modified Monte Carlo method which allowed for the addition of anisotropic grain boundary energies using orientation data from electron backscatter diffraction (EBSD). The correlation of the experimental and simulation work allows for a better understanding of how the input parameters in the simulations correspond to what occurs experimentally. Results from both simulations and experiments showed that a higher fraction of so-called "special" boundaries (e.g., Σ3 twin boundaries) delayed the onset of recrystallization to larger strains and that it is energetically favorable for nuclei to form on triple junctions without these so-called "special" boundaries.
An Adaptive Instability Suppression Controls Method for Aircraft Gas Turbine Engine Combustors
NASA Technical Reports Server (NTRS)
Kopasakis, George; DeLaat, John C.; Chang, Clarence T.
2008-01-01
An adaptive controls method for instability suppression in gas turbine engine combustors has been developed and successfully tested with a realistic aircraft engine combustor rig. This testing was part of a program that demonstrated, for the first time, successful active combustor instability control in an aircraft gas turbine engine-like environment. The controls method is called Adaptive Sliding Phasor Averaged Control. Testing of the control method has been conducted in an experimental rig with different configurations designed to simulate combustors with instabilities of about 530 and 315 Hz. Results demonstrate the effectiveness of this method in suppressing combustor instabilities. In addition, a dramatic improvement in suppression of the instability was achieved by focusing control on the second harmonic of the instability. This is believed to be due to a previously discovered and reported phenomenon, so-called Intra-Harmonic Coupling. These results may have implications for future research in combustor instability control.
Flight-Test Evaluation of Flutter-Prediction Methods
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty
2003-01-01
The flight-test community routinely spends considerable time and money to determine a range of flight conditions, called a flight envelope, within which an aircraft is safe to fly. The cost of determining a flight envelope could be greatly reduced if there were a method of safely and accurately predicting the speed associated with the onset of an instability called flutter. Several methods have been developed with the goal of predicting flutter speeds to improve the efficiency of flight testing. These methods include (1) data-based methods, in which one relies entirely on information obtained from the flight tests and (2) model-based approaches, in which one relies on a combination of flight data and theoretical models. The data-driven methods include one based on extrapolation of damping trends, one that involves an envelope function, one that involves the Zimmerman-Weissenburger flutter margin, and one that involves a discrete-time auto-regressive model. An example of a model-based approach is that of the flutterometer. These methods have all been shown to be theoretically valid and have been demonstrated on simple test cases; however, until now, they have not been thoroughly evaluated in flight tests. An experimental apparatus called the Aerostructures Test Wing (ATW) was developed to test these prediction methods.
Recent Research on Human Learning Challenges Conventional Instructional Strategies
ERIC Educational Resources Information Center
Rohrer, Doug; Pashler, Harold
2010-01-01
There has been a recent upsurge of interest in exploring how choices of methods and timing of instruction affect the rate and persistence of learning. The authors review three lines of experimentation--all conducted using educationally relevant materials and time intervals--that call into question important aspects of common instructional…
Machado, G D.C.; Paiva, L M.C.; Pinto, G F.; Oestreicher, E G.
2001-03-08
The enantiomeric ratio (E) of an enzyme acting as a specific catalyst in the resolution of enantiomers is an important parameter in the quantitative description of these chiral resolution processes. In the present work, two novel methods, hereby called Methods I and II, for estimating E and the kinetic parameters Km and Vm of the enantiomers were developed. These methods are based upon initial-rate (v) measurements using different concentrations of enantiomeric mixtures (C) with several molar fractions of the substrate (x). Both methods were tested using simulated "experimental" data and actual experimental data. Method I is easier to use than Method II but requires that one of the enantiomers be available in pure form. Method II, besides not requiring the enantiomers in pure form, showed better results, as indicated by the magnitude of the standard errors of the estimates. The theoretical predictions were confirmed experimentally using the oxidation of 2-butanol and 2-pentanol catalyzed by Thermoanaerobium brockii alcohol dehydrogenase as reaction models. The parameters E, Km and Vm were estimated by Methods I and II with good precision and were not significantly different from those obtained by direct estimation of E from the kinetic parameters of each enantiomer available in pure form.
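The setting described above can be made concrete with the standard competitive Michaelis-Menten model for a pair of enantiomers, from which E is the ratio of specificity constants. The sketch below is a generic illustration, not the authors' Method I or II: the parameter values are invented and the crude grid-search fit stands in for their regression procedures.

```python
import numpy as np

def initial_rate(C, x, Vr, Kr, Vs, Ks):
    """Initial rate for a competing enantiomer pair (R at molar
    fraction x, S at 1 - x) under competitive Michaelis-Menten
    kinetics at total mixture concentration C."""
    R, S = x * C, (1.0 - x) * C
    return (Vr * R / Kr + Vs * S / Ks) / (1.0 + R / Kr + S / Ks)

def enantiomeric_ratio(Vr, Kr, Vs, Ks):
    """E = (Vr/Kr) / (Vs/Ks): ratio of the specificity constants."""
    return (Vr / Kr) / (Vs / Ks)

# Synthetic "experimental" rates from known parameters (illustrative).
true = dict(Vr=1.0, Kr=0.5, Vs=0.4, Ks=1.2)
C = np.array([0.2, 0.5, 1.0, 2.0, 5.0])
x = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
Cg, xg = np.meshgrid(C, x)
v_obs = initial_rate(Cg, xg, **true)

# Crude least-squares grid search (stand-in for the paper's fits).
best, best_err = None, np.inf
for Vr in np.linspace(0.5, 1.5, 11):
    for Kr in np.linspace(0.1, 1.0, 10):
        for Vs in np.linspace(0.1, 0.8, 8):
            for Ks in np.linspace(0.6, 1.8, 7):
                err = np.sum((initial_rate(Cg, xg, Vr, Kr, Vs, Ks) - v_obs) ** 2)
                if err < best_err:
                    best, best_err = (Vr, Kr, Vs, Ks), err
E_fit = enantiomeric_ratio(*best)
```

Varying both C and x, as the abstract describes, is what makes all four kinetic parameters identifiable from mixture data alone.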
An experimental system for coiled tubing partial underbalanced drilling (CT-PUBD) technique
NASA Astrophysics Data System (ADS)
Shi, H. Z.; Ji, Z. S.; Zhao, H. Q.; Chen, Z. L.; Zhang, H. Z.
2018-05-01
To improve the rate of penetration (ROP) in hard formations, a new high-speed drilling technique called Coiled Tubing Partial Underbalanced Drilling (CT-PUBD) is proposed. This method uses a rotary packer to create an underbalanced condition near the bit, via a micro-annulus, and an overbalanced condition in the main part of the annulus. A new full-scale laboratory experimental system was designed and set up to study the hydraulic characteristics and drilling performance of this method. The system is composed of a drilling system, a circulation system and a monitoring system, including three key devices: a cuttings discharge device, a rotary packer and a backflow device. The experimental results showed that the pressure loss increased linearly with the flow rate of the drilling fluid. The higher drilling speed of CT-PUBD showed it to be a better method than conventional drilling. The experimental system may provide a fundamental basis for research on CT-PUBD, and the results showed that this new method is feasible for enhancing ROP while ensuring drilling safety.
Algorithms that Defy the Gravity of Learning Curve
2017-04-28
three nearest neighbour-based anomaly detectors, i.e., an ensemble of nearest neighbours, a recent nearest neighbour-based ensemble method called iNNE...streams. Note that the change in sample size does not alter the geometrical data characteristics discussed here. 3.1 Experimental Methodology ...need to be answered. 3.6 Comparison with conventional ensemble methods Given the theoretical results, the third aim of this project (i.e., identify the
Invention Versus Direct Instruction: For Some Content, It's a Tie
NASA Astrophysics Data System (ADS)
Chase, Catherine C.; Klahr, David
2017-12-01
An important, but as yet unresolved pedagogical question is whether discovery-oriented or direct instruction methods lead to greater learning and transfer. We address this issue in a study with 101 fourth and fifth grade students that contrasts two distinct instructional methods. One is a blend of discovery and direct instruction called Invent-then-Tell (IT), and the other is a version of direct instruction called Tell-then-Practice (TP). The relative effectiveness of these methods is compared in the context of learning a critical inquiry skill—the control-of-variables strategy. Previous research has demonstrated the success of IT over TP for teaching deep domain structures, while other research has demonstrated the superiority of direct instruction for teaching simple experimental design, a domain-general inquiry skill. In the present study, students in both conditions made equally large gains on an immediate assessment of their application and conceptual understanding of experimental design, and they also performed similarly on a test of far transfer. These results were fairly consistent across school populations with various levels of prior achievement and socioeconomic status. Findings suggest that broad claims about the relative effectiveness of these two distinct methods should be conditionalized by particular instructional contexts, such as the type of knowledge being taught.
Tang, Hua; Chen, Wei; Lin, Hao
2016-04-01
Immunoglobulins, also called antibodies, are a group of cell surface proteins which are produced by the immune system in response to the presence of a foreign substance (called antigen). They play key roles in many medical, diagnostic and biotechnological applications. Correct identification of immunoglobulins is crucial to the comprehension of humoral immune function. With the avalanche of protein sequences identified in postgenomic age, it is highly desirable to develop computational methods to timely identify immunoglobulins. In view of this, we designed a predictor called "IGPred" by formulating protein sequences with the pseudo amino acid composition into which nine physiochemical properties of amino acids were incorporated. Jackknife cross-validated results showed that 96.3% of immunoglobulins and 97.5% of non-immunoglobulins can be correctly predicted, indicating that IGPred holds very high potential to become a useful tool for antibody analysis. For the convenience of most experimental scientists, a web-server for IGPred was established at http://lin.uestc.edu.cn/server/IGPred. We believe that the web-server will become a powerful tool to study immunoglobulins and to guide related experimental validations.
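IGPred's actual encoding incorporates nine physiochemical properties of amino acids; the sketch below illustrates the general pseudo amino acid composition idea with a single property (the standard Kyte-Doolittle hydrophobicity scale), so the feature layout, weight, and correlation-rank choices are illustrative assumptions rather than IGPred's exact formulation.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
# Kyte-Doolittle hydrophobicity, one of many possible property scales.
HYDRO = dict(zip(AA, [1.8, 2.5, -3.5, -3.5, 2.8, -0.4, -3.2, 4.5, -3.9,
                      3.8, 1.9, -3.5, -1.6, -3.5, -4.5, -0.8, -0.7, 4.2,
                      -4.5, -1.3]))

def pseudo_aac(seq, lam=3, w=0.05):
    """Pseudo amino acid composition: 20 residue frequencies plus
    `lam` sequence-order correlation factors (single property here)."""
    seq = [a for a in seq.upper() if a in HYDRO]
    n = len(seq)
    comp = np.array([seq.count(a) for a in AA], dtype=float) / n
    h = np.array([HYDRO[a] for a in seq])
    h = (h - h.mean()) / h.std()  # normalize the property scale
    theta = np.array([np.mean((h[:-k] - h[k:]) ** 2)
                      for k in range(1, lam + 1)])
    denom = 1.0 + w * theta.sum()
    return np.concatenate([comp / denom, w * theta / denom])

feat = pseudo_aac("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
```

The correlation factors let a fixed-length vector retain some sequence-order information, which plain amino acid composition discards.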
Thermodynamics of quantum information scrambling
NASA Astrophysics Data System (ADS)
Campisi, Michele; Goold, John
2017-06-01
Scrambling of quantum information can conveniently be quantified by so-called out-of-time-order correlators (OTOCs), i.e., correlators of the type ⟨[W_τ, V]†[W_τ, V]⟩, whose measurements present a formidable experimental challenge. Here we report on a method for the measurement of OTOCs based on the so-called two-point measurement scheme developed in the field of nonequilibrium quantum thermodynamics. The scheme is of broader applicability than methods employed in current experiments and provides a clear-cut interpretation of quantum information scrambling in terms of nonequilibrium fluctuations of thermodynamic quantities, such as work and heat. Furthermore, we provide a numerical example on a spin chain which highlights the utility of our thermodynamic approach when understanding the differences between integrable and ergodic behaviors. We also discuss how the method can be used to extend the reach of current experiments.
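For unitary W and V this correlator reduces to the more familiar out-of-time-order form, by a standard operator identity stated here for reference (the thermal expectation value is implied):

```latex
% For unitary W and V, with W_\tau the Heisenberg-evolved operator:
\langle [W_\tau, V]^\dagger [W_\tau, V] \rangle
  = 2\left(1 - \operatorname{Re}\,
      \langle W_\tau^\dagger V^\dagger W_\tau V \rangle\right),
\qquad W_\tau = e^{iH\tau}\, W\, e^{-iH\tau}.
```

Expanding the commutator and using unitarity of W and V gives the identity directly; the decay of Re⟨W_τ†V†W_τV⟩ is the usual signature of scrambling.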
A BAC clone fingerprinting approach to the detection of human genome rearrangements
Krzywinski, Martin; Bosdet, Ian; Mathewson, Carrie; Wye, Natasja; Brebner, Jay; Chiu, Readman; Corbett, Richard; Field, Matthew; Lee, Darlene; Pugh, Trevor; Volik, Stas; Siddiqui, Asim; Jones, Steven; Schein, Jacquie; Collins, Collin; Marra, Marco
2007-01-01
We present a method, called fingerprint profiling (FPP), that uses restriction digest fingerprints of bacterial artificial chromosome clones to detect and classify rearrangements in the human genome. The approach uses alignment of experimental fingerprint patterns to in silico digests of the sequence assembly and is capable of detecting micro-deletions (1-5 kb) and balanced rearrangements. Our method has compelling potential for use as a whole-genome method for the identification and characterization of human genome rearrangements. PMID:17953769
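The core comparison in such fingerprinting, experimental fragment sizes against an in silico digest of the sequence assembly, can be sketched as follows. This is a toy illustration, not FPP's alignment algorithm: the cut model (cutting at the start of each recognition site), sequence, and tolerance are simplifying assumptions.

```python
def in_silico_digest(seq, site):
    """Fragment lengths from cutting `seq` at the start of every
    occurrence of the recognition `site` (simplified cut model)."""
    cuts = ([0]
            + [i for i in range(len(seq)) if seq.startswith(site, i)]
            + [len(seq)])
    return [b - a for a, b in zip(cuts, cuts[1:]) if b > a]

def fragments_match(observed, expected, tol=0.05):
    """True if every observed size matches some expected size within a
    relative tolerance (crude stand-in for fingerprint alignment)."""
    return all(any(abs(o - e) <= tol * e for e in expected)
               for o in observed)

# EcoRI-like site GAATTC in a toy sequence.
frags = in_silico_digest("AAGAATTCGGGAATTCTT", "GAATTC")
```

A rearrangement would show up as observed fragments that fail to match the expected pattern, which is the signal FPP classifies.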
Computer tomography of flows external to test models
NASA Technical Reports Server (NTRS)
Prikryl, I.; Vest, C. M.
1982-01-01
Computer tomographic techniques for the reconstruction of three-dimensional aerodynamic density fields from interferograms recorded from several different viewing directions were studied. Emphasis is on the case in which an opaque object, such as a test model in a wind tunnel, obscures significant regions of the interferograms (projection data). A method called the Iterative Convolution Method (ICM), existing methods in which the field is represented by series expansions, and the analysis of real experimental data in the form of aerodynamic interferograms are discussed.
ERIC Educational Resources Information Center
Kamarova, Sviatlana; Chatzisarantis, Nikos L. D.; Hagger, Martin S.; Lintunen, Taru; Hassandra, Mary; Papaioannou, Athanasios
2017-01-01
Background: Previous prospective studies have documented that mastery-approach goals are adaptive because they facilitate less negative psychological responses to unfavourable social comparisons than performance-approach goals. Aims: This study aimed to confirm this so-called "mastery goal advantage" effect experimentally. Methods: A…
Allele-specific copy-number discovery from whole-genome and whole-exome sequencing
Wang, WeiBo; Wang, Wei; Sun, Wei; Crowley, James J.; Szatkiewicz, Jin P.
2015-01-01
Copy-number variants (CNVs) are a major form of genetic variation and a risk factor for various human diseases, so it is crucial to accurately detect and characterize them. It is conceivable that allele-specific reads from high-throughput sequencing data could be leveraged to both enhance CNV detection and produce allele-specific copy number (ASCN) calls. Although statistical methods have been developed to detect CNVs using whole-genome sequence (WGS) and/or whole-exome sequence (WES) data, information from allele-specific read counts has not yet been adequately exploited. In this paper, we develop an integrated method, called AS-GENSENG, which incorporates allele-specific read counts in CNV detection and estimates ASCN using either WGS or WES data. To evaluate the performance of AS-GENSENG, we conducted extensive simulations, generated empirical data using existing WGS and WES data sets and validated predicted CNVs using an independent methodology. We conclude that AS-GENSENG not only predicts accurate ASCN calls but also improves the accuracy of total copy number calls, owing to its unique ability to exploit information from both total and allele-specific read counts while accounting for various experimental biases in sequence data. Our novel, user-friendly and computationally efficient method and a complete analytic protocol is freely available at https://sourceforge.net/projects/asgenseng/. PMID:25883151
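As a rough illustration of what an allele-specific copy-number call involves (not AS-GENSENG's actual statistical model, which accounts for read-count noise and experimental biases), the total copy number can be taken from normalized read depth and split between alleles by the B-allele read fraction:

```python
def allele_specific_cn(depth_ratio, b_allele_frac, ploidy=2):
    """Naive ASCN estimate: total copy number from the normalized
    depth ratio, split between alleles by the B-allele read fraction.
    Real callers model noise and bias instead of rounding."""
    total = round(ploidy * depth_ratio)
    b = round(total * b_allele_frac)
    return total - b, b  # (A-allele copies, B-allele copies)

# A duplication of the B allele: depth ratio 1.5x, BAF near 2/3.
call = allele_specific_cn(1.5, 0.667)
```

The point of using allele-specific counts is visible even here: a depth ratio of 1.5 alone says "three copies", but only the BAF reveals which allele was duplicated.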
Rossi, Michael R.; Tanaka, Daigo; Shimada, Kenji; Rabin, Yoed
2009-01-01
The current study focuses on experimentally validating a planning scheme based on the so-called bubble-packing method. This study is part of an ongoing effort to develop computerized planning tools for cryosurgery, where bubble packing was previously developed as a means to find an initial, uniform distribution of cryoprobes within a given domain; the so-called force-field analogy was then used to move cryoprobes to their optimum layout. However, because of the high quality of the cryoprobe distribution suggested by bubble packing and its low computational cost, it has been argued that a planning scheme based solely on bubble packing may be more clinically relevant. To test this argument, an experimental validation was performed on a simulated cross-section of the prostate, using gelatin solution as a phantom material, proprietary liquid-nitrogen-based cryoprobes, and a cryoheater to simulate urethral warming. Experimental results are compared with numerically simulated temperature histories resulting from planning. Results indicate an average disagreement of 0.8 mm in identifying the freezing-front location, which is an acceptable level of uncertainty in the context of prostate cryosurgery imaging. PMID:19885373
Experimental level densities of atomic nuclei
Guttormsen, M.; Aiche, M.; Bello Garrote, F. L.; ...
2015-12-23
It is almost 80 years since Hans Bethe described the level density as a non-interacting gas of protons and neutrons. In all these years, experimental data were interpreted within this picture of a fermionic gas. However, the renewed interest in measuring level density using various techniques calls for a revision of this description. In particular, the wealth of nuclear level densities measured with the Oslo method favors the constant-temperature level density over the Fermi-gas picture. Furthermore, from the basis of experimental data, we demonstrate that nuclei exhibit constant-temperature level density behavior for all mass regions and at least up to the neutron threshold.
Bates, Maxwell; Berliner, Aaron J; Lachoff, Joe; Jaschke, Paul R; Groban, Eli S
2017-01-20
Wet Lab Accelerator (WLA) is a cloud-based tool that allows a scientist to conduct biology via robotic control without the need for any programming knowledge. A drag and drop interface provides a convenient and user-friendly method of generating biological protocols. Graphically developed protocols are turned into programmatic instruction lists required to conduct experiments at the cloud laboratory Transcriptic. Prior to the development of WLA, biologists were required to write in a programming language called "Autoprotocol" in order to work with Transcriptic. WLA relies on a new abstraction layer we call "Omniprotocol" to convert the graphical experimental description into lower level Autoprotocol language, which then directs robots at Transcriptic. While WLA has only been tested at Transcriptic, the conversion of graphically laid out experimental steps into Autoprotocol is generic, allowing extension of WLA into other cloud laboratories in the future. WLA hopes to democratize biology by bringing automation to general biologists.
The effects of guided inquiry instruction on student achievement in high school biology
NASA Astrophysics Data System (ADS)
Vass, Laszlo
The purpose of this quantitative, quasi-experimental study was to measure the effect of a student-centered instructional method called guided inquiry on the achievement of students in a unit of study in high school biology. The study used a non-random sample of 109 students: the control group of 55 students, enrolled at high school one, received teacher-centered instruction, while the experimental group of 54 students, enrolled at high school two, received student-centered guided inquiry instruction. The pretest-posttest design of the study analyzed scores using an independent t-test, a dependent t-test (p < .001), an ANCOVA (p = .007), a mixed ANOVA (p = .024) and hierarchical linear regression (p < .001). The experimental group that received guided inquiry instruction had statistically significantly higher achievement than the control group.
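The independent t-test mentioned above can be sketched as follows. The score arrays are illustrative, not the study's data, and since the exact test variant is not specified here, Welch's unequal-variance form is used as an assumption.

```python
import numpy as np

def welch_t(a, b):
    """Welch's independent-samples t statistic (unequal variances)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    return float((a.mean() - b.mean()) / se)

# Illustrative posttest scores (invented, not the study's data).
control = [62, 58, 70, 65, 60, 64, 59, 61]
guided = [75, 72, 80, 68, 77, 74, 79, 70]
t = welch_t(guided, control)
```

A large positive t here would correspond to the experimental group outscoring the control group, as the study reports.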
Hovering Dual-Spin Vehicle Groundwork for Bias Momentum Sizing Validation Experiment
NASA Technical Reports Server (NTRS)
Rothhaar, Paul M.; Moerder, Daniel D.; Lim, Kyong B.
2008-01-01
Angular bias momentum offers significant stability augmentation for hovering flight vehicles. The reliance of the vehicle on thrust vectoring for agility and disturbance rejection is greatly reduced with significant levels of stored angular momentum in the system. A methodical procedure for bias momentum sizing has been developed in previous studies. This current study provides groundwork for experimental validation of that method using an experimental vehicle called the Dual-Spin Test Device, a thrust-levitated platform. Using measured data the vehicle's thrust vectoring units are modeled and a gust environment is designed and characterized. Control design is discussed. Preliminary experimental results of the vehicle constrained to three rotational degrees of freedom are compared to simulation for a case containing no bias momentum to validate the simulation. A simulation of a bias momentum dominant case is presented.
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. 
It is hoped that this study will provide new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data.
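To make the parameter-estimation setting concrete, here is a minimal global-best particle swarm fitting a toy decay model to noisy synthetic data. This is a generic PSO sketch under invented parameters and bounds, not the paper's hybrid Swarm-based Chemical Reaction Optimization.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(t, k, r):
    """Toy exponential-decay model standing in for a biological model."""
    return k * np.exp(-r * t)

# Noisy, sparse synthetic observations (true k = 2.0, r = 0.5).
t_obs = np.array([0.0, 0.5, 1.5, 3.0, 5.0])
y_obs = model(t_obs, 2.0, 0.5) + rng.normal(0.0, 0.02, t_obs.size)

def cost(p):
    """Sum-of-squares misfit between model output and observations."""
    return float(np.sum((model(t_obs, p[0], p[1]) - y_obs) ** 2))

# Minimal global-best PSO; bounds [0.1, 5.0] for k, [0.01, 2.0] for r.
lo, hi = np.array([0.1, 0.01]), np.array([5.0, 2.0])
n, iters = 30, 200
pos = rng.uniform(lo, hi, size=(n, 2))
vel = np.zeros_like(pos)
pbest, pbest_c = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_c.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    c = np.array([cost(p) for p in pos])
    improved = c < pbest_c
    pbest[improved], pbest_c[improved] = pos[improved], c[improved]
    gbest = pbest[pbest_c.argmin()].copy()
k_fit, r_fit = gbest
```

The hybrid method in the paper augments this kind of swarm search with Chemical Reaction Optimization moves to escape local minima, which matters more for stiff, multi-modal biological models than for this smooth toy problem.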
Kinase Identification with Supervised Laplacian Regularized Least Squares
Zhang, He; Wang, Minghui
2015-01-01
Phosphorylation is catalyzed by protein kinases and is irreplaceable in regulating biological processes. Identification of phosphorylation sites with their corresponding kinases contributes to the understanding of molecular mechanisms. Mass spectrometry analysis of phosphor-proteomes generates a large number of phosphorylated sites. However, experimental methods are costly and time-consuming, and most phosphorylation sites determined by experimental methods lack kinase information. Therefore, computational methods are urgently needed to address the kinase identification problem. To this end, we propose a new kernel-based machine learning method called Supervised Laplacian Regularized Least Squares (SLapRLS), which adopts a new method to construct kernels based on the similarity matrix and minimizes both structure risk and overall inconsistency between labels and similarities. The results predicted using both Phospho.ELM and an additional independent test dataset indicate that SLapRLS can more effectively identify kinases compared to other existing algorithms. PMID:26448296
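SLapRLS builds its kernel from a similarity matrix and adds a Laplacian-style consistency term; as a simplified illustration of the regularized least-squares machinery underneath, here is plain kernel RLS on toy data. The RBF kernel, toy labels, and hyperparameters are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian (RBF) similarity matrix between row vectors."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krls_fit(X, y, lam=1e-2, gamma=0.5):
    """Kernel regularized least squares: alpha = (K + lam*I)^-1 y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krls_predict(X_train, alpha, X_new, gamma=0.5):
    """Predict by weighting training similarities with alpha."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = np.sign(X[:, 0])  # toy binary labels driven by one feature
alpha = krls_fit(X, y)
pred = np.sign(krls_predict(X, alpha, X))
acc = float((pred == y).mean())
```

The regularizer lam trades fit against smoothness; SLapRLS's contribution is in how the kernel and the label-similarity consistency penalty are constructed on top of this solve.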
ERIC Educational Resources Information Center
Campbell, Bernadette; Mark, Melvin M.
2015-01-01
Evaluation theories can be tested in various ways. One approach, the experimental analogue study, is described and illustrated in this article. The approach is presented as a method worthy to use in the pursuit of what Alkin and others have called descriptive evaluation theory. Drawing on analogue studies conducted by the first author, we…
Metainference: A Bayesian inference method for heterogeneous systems.
Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele
2016-01-01
Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.
Berlin, Konstantin; O’Leary, Dianne P.; Fushman, David
2011-01-01
We present and evaluate a rigid-body, deterministic, molecular docking method, called ELMDOCK, that relies solely on the three-dimensional structure of the individual components and the overall rotational diffusion tensor of the complex, obtained from nuclear spin-relaxation measurements. We also introduce a docking method, called ELMPATIDOCK, derived from ELMDOCK and based on the new concept of combining the shape-related restraints from rotational diffusion with those from residual dipolar couplings, along with ambiguous contact/interface-related restraints obtained from chemical shift perturbations. ELMDOCK and ELMPATIDOCK use two novel approximations of the molecular rotational diffusion tensor that allow computationally efficient docking. We show that these approximations are accurate enough to properly dock the two components of a complex without the need to recompute the diffusion tensor at each iteration step. We analyze the accuracy, robustness, and efficiency of these methods using synthetic relaxation data for a large variety of protein-protein complexes. We also test our method on three protein systems for which the structure of the complex and experimental relaxation data are available, and analyze the effect of flexible unstructured tails on the outcome of docking. Additionally, we describe a method for integrating the new approximation methods into the existing docking approaches that use the rotational diffusion tensor as a restraint. The results show that the proposed docking method is robust against experimental errors in the relaxation data or structural rearrangements upon complex formation and is computationally more efficient than current methods. The developed approximations are accurate enough to be used in structure refinement protocols. PMID:21604302
Limitations of Lifting-Line Theory for Estimation of Aileron Hinge-Moment Characteristics
NASA Technical Reports Server (NTRS)
Swanson, Robert S.; Gillis, Clarence L.
1943-01-01
Hinge-moment parameters for several typical ailerons were calculated from section data with the aspect-ratio correction as usually determined from lifting-line theory. The calculations showed that the agreement between experimental and calculated results was unsatisfactory. An additional aspect-ratio correction, calculated by the method of lifting-surface theory, was applied to the slope of the curve of hinge-moment coefficient against angle of attack at small angles of attack. This so-called streamline-curvature correction brought the calculated and experimental results into satisfactory agreement.
Comparison of Calibration of Sensors Used for the Quantification of Nuclear Energy Rate Deposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brun, J.; Reynard-Carette, C.; Tarchalski, M.
The present work is part of a collaborative program called GAMMA-MAJOR ('Development and qualification of a deterministic scheme for the evaluation of gamma heating in MTR reactors, with the MARIA reactor and the Jules Horowitz Reactor as examples') between the National Centre for Nuclear Research of Poland, the French Atomic Energy and Alternative Energies Commission, and Aix-Marseille University. One of the main objectives of this program is to optimize the quantification of nuclear heating through calculations validated against experimental measurements of radiation energy deposition carried out in irradiation reactors. Nuclear heating is a key quantity, especially for the thermal and mechanical design and sizing of experimental irradiation devices under specific irradiation conditions and at specific locations. It is usually determined with differential calorimeters and gamma thermometers, such as those used in the experimental multi-sensor device called CARMEN ('Calorimetrie en Reacteur et Mesures des Emissions Nucleaires'). In the framework of the GAMMA-MAJOR program, a new calorimeter was designed for quantifying nuclear energy deposition. It is a single-cell calorimeter, called KAROLINA, and was recently tested during an irradiation campaign in the MARIA reactor in Poland. This new single-cell calorimeter differs from previous differential calorimeters of the CALMOS or CARMEN type on three main points: its geometry, its preliminary out-of-pile calibration, and its in-pile measurement method. The differential calorimeter, which is made of two identical cells containing heaters, is calibrated using the steady thermal states reached when the nuclear energy deposition in the calorimeter sample is simulated by the Joule effect, whereas the single-cell calorimeter, which has no heater, is calibrated from the transient thermal response of the sensor (heating and cooling steps).
The paper concerns these two kinds of calorimetric sensors and focuses in particular on their out-of-pile calibrations. Firstly, the characteristics of the sensor designs are detailed (geometry, dimensions, sample material, assembly, instrumentation). Then the out-of-pile calibration methods are described. Furthermore, numerical results obtained from 2D axisymmetric thermal simulations (Finite Element Method, CAST3M) and experimental results are presented for each sensor, and the behaviours of the two thermal sensors are compared. To conclude, the advantages and drawbacks of each sensor are discussed, especially regarding measurement methods. (authors)
Stability of Castering Wheels for Aircraft Landing Gears
NASA Technical Reports Server (NTRS)
Kantrowitz, Arthur
1940-01-01
A theoretical study was made of the shimmy of castering wheels. The theory is based on the discovery of a phenomenon called kinematic shimmy. Experimental checks, use being made of a model having low-pressure tires, are reported and the applicability of the results to full scale is discussed. Theoretical methods of estimating the spindle viscous damping and the spindle solid friction necessary to avoid shimmy are given. A new method of avoiding shimmy -- lateral freedom -- is introduced.
Females that experience threat are better teachers
Kleindorfer, Sonia; Evans, Christine; Colombelli-Négrel, Diane
2014-01-01
Superb fairy-wren (Malurus cyaneus) females use an incubation call to teach their embryos a vocal password to solicit parental feeding care after hatching. We previously showed that high call rate by the female was correlated with high call similarity in fairy-wren chicks, but not in cuckoo chicks, and that parent birds more often fed chicks with high call similarity. Hosts should be selected to increase their defence behaviour when the risk of brood parasitism is highest, such as when cuckoos are present in the area. Therefore, we experimentally test whether hosts increase call rate to embryos in the presence of a singing Horsfield's bronze-cuckoo (Chalcites basalis). Female fairy-wrens increased incubation call rate when we experimentally broadcast cuckoo song near the nest. Embryos had higher call similarity when females had higher incubation call rate. We interpret the findings of increased call rate as increased teaching effort in response to a signal of threat. PMID:24806422
Efficient experimental design for uncertainty reduction in gene regulatory networks.
Dehghannasiri, Roozbeh; Yoon, Byung-Jun; Dougherty, Edward R
2015-01-01
An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/.
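The MOCU-based selection loop described above can be sketched on a toy uncertainty class. Everything below (two binary uncertain parameters, the cost function, the uniform prior) is a hypothetical illustration, not the authors' gene-network model; it only shows the bookkeeping: compute MOCU for the current belief, then pick the experiment with the smallest expected remaining MOCU.

```python
import itertools

# Toy uncertainty class: each "network" is fixed by two unknown binary
# parameters; experiment e reveals the true value of parameter e.
networks = list(itertools.product([0, 1], repeat=2))   # 4 candidate models
prior = {th: 0.25 for th in networks}                  # uniform prior
actions = [0, 1, 2]                                    # candidate interventions

def cost(theta, a):
    """Hypothetical intervention cost for model theta under action a."""
    return (theta[0] + 1) * abs(a - theta[1]) + 0.1 * a

def mocu(p):
    """Mean objective cost of uncertainty for belief p over the class."""
    robust = min(actions, key=lambda a: sum(p[t] * cost(t, a) for t in p))
    return sum(p[t] * (cost(t, robust) - min(cost(t, a) for a in actions))
               for t in p)

def expected_remaining_mocu(p, experiment):
    """Average MOCU of the posterior, over the experiment's outcomes."""
    total = 0.0
    for outcome in (0, 1):
        consistent = {t: q for t, q in p.items() if t[experiment] == outcome}
        mass = sum(consistent.values())
        if mass == 0:
            continue
        posterior = {t: q / mass for t, q in consistent.items()}
        total += mass * mocu(posterior)
    return total

best = min((0, 1), key=lambda e: expected_remaining_mocu(prior, e))
print("prior MOCU:", round(mocu(prior), 3))
print("best experiment reveals parameter", best)
```

In this toy class only parameter 1 changes which intervention is optimal, so revealing it drives the expected remaining MOCU to zero while revealing parameter 0 leaves the MOCU unchanged.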
Efficient experimental design for uncertainty reduction in gene regulatory networks
2015-01-01
Background An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. Results The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Conclusions Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/. PMID:26423515
Parent-offspring communication in the western sandpiper
Johnson, M.; Aref, S.; Walters, J.R.
2008-01-01
Western sandpiper (Calidris mauri) chicks are precocial and leave the nest shortly after hatch to forage independently. Chicks require thermoregulatory assistance from parents (brooding) for 5-7 days posthatch, and parents facilitate chick survival for 2-3 weeks posthatch by leading and defending chicks. Parental vocal signals are likely involved in protecting chicks from predators, preventing them from wandering away and becoming lost and leading them to good foraging locations. Using observational and experimental methods in the field, we describe and demonstrate the form and function of parent-chick communication in the western sandpiper. We document 4 distinct calls produced by parents that are apparently directed toward their chicks (brood, gather, alarm, and freeze calls). Through experimental playback of parental and non-parental vocalizations to chicks in a small arena, we demonstrated the following: 1) chicks respond to the alarm call by vocalizing relatively less often and moving away from the signal source, 2) chicks respond to the gather call by vocalizing relatively more often and moving toward the signal source, and 3) chicks respond to the freeze call by vocalizing relatively less often and crouching motionless on the substrate for extended periods of time. Chicks exhibited consistent directional movement and space use to parental and non-parental signals. Although fewer vocalizations were given in response to non-parental signals, which may indicate a weaker response to unfamiliar individuals, the relative number of chick calls given to each type of call signal was consistent between parental and non-parental signals. We also discovered 2 distinct chick vocalizations (chick-contact and chick-alarm calls) during arena playback experiments. Results indicate that sandpiper parents are able to elicit antipredatory chick behaviors and direct chick movement and vocalizations through vocal signals. 
Future studies of parent-offspring communication should determine whether shorebird chicks exhibit parental recognition through vocalizations and the role of chick vocalizations in parental behavior. © The Author 2008. Published by Oxford University Press on behalf of the International Society for Behavioral Ecology. All rights reserved.
Multipulse technique exploiting the intermodulation of ultrasound waves in a nonlinear medium.
Biagi, Elena; Breschi, Luca; Vannacci, Enrico; Masotti, Leonardo
2009-03-01
In recent years, the nonlinear properties of materials have attracted much interest in nondestructive testing and in ultrasound diagnostic applications. Acoustic nonlinear parameters represent an opportunity to improve the information that can be extracted from a medium, such as the structural organization and pathologic status of tissue. In this paper, a method called pulse subtraction intermodulation (PSI), based on a multipulse technique, is presented and investigated both theoretically and experimentally. This method allows separation of the intermodulation products, which arise when 2 separate frequencies are transmitted in a nonlinear medium, from the fundamental and second harmonic components, making them available for improved imaging techniques or signal processing algorithms devoted to tissue characterization. The theory of intermodulation product generation was developed according to the Khokhlov-Zabolotskaya-Kuznetsov (KZK) nonlinear propagation equation and is consistent with the experimental results. The description of the proposed method, a characterization of the intermodulation spectral contents, and quantitative results from in vitro experiments are reported and discussed in this paper.
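The separation of intermodulation products by pulse subtraction can be illustrated numerically. The sketch below replaces KZK propagation with a toy memoryless quadratic echo (a simplifying assumption, not the paper's model): transmitting a two-tone pulse twice, with the lower tone phase-inverted the second time, and subtracting the echoes cancels the shared fundamental and its harmonics, leaving the intermodulation products at f1 ± f2 (plus the inverted fundamental).

```python
import numpy as np

fs = 1000.0                       # sampling rate (arbitrary units)
t = np.arange(0, 1, 1 / fs)
f1, f2 = 50.0, 8.0                # the two transmitted frequencies

def echo(x, beta=0.1):
    # Toy memoryless quadratic medium (NOT the KZK equation):
    # linear response plus a small second-order term.
    return x + beta * x**2

x_a = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)   # pulse A
x_b = np.sin(2*np.pi*f1*t) - np.sin(2*np.pi*f2*t)   # pulse B: f2 inverted

diff = echo(x_a) - echo(x_b)      # pulse subtraction

spec = np.abs(np.fft.rfft(diff)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
amp = dict(zip(freqs.round(1), spec))

# The shared fundamental f1 cancels; intermodulation at f1 +/- f2 remains.
print("f1:", round(amp[50.0], 3), " f1-f2:", round(amp[42.0], 3),
      " f1+f2:", round(amp[58.0], 3))
```

The remaining inverted fundamental at f2 would still need filtering in practice; the point of the sketch is only that the subtracted signal contains the f1 ± f2 products free of the f1 fundamental and its harmonic.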
Effect of two 12-minute culturally targeted films on intent to call 911 for stroke
Williams, Olajide; DeSorbo, Alexandra; Eimicke, Joseph; Abel-Bey, Amparo; Valdez, Lenfis; Noble, James; Gordillo, Madeleine; Ravenell, Joseph; Ramirez, Mildred; Teresi, Jeanne A.; Jean-Louis, Girardin; Ogedegbe, Gbenga
2016-01-01
Objective: We assessed the behavioral effect of two 12-minute culturally targeted stroke films on immediately calling 911 for suspected stroke among black and Hispanic participants using a quasi-experimental pretest-posttest design. Methods: We enrolled 102 adult churchgoers (60 black and 42 Hispanic) into a single viewing of one of the 2 stroke films—a Gospel musical (English) or Telenovela (Spanish). We measured intent to immediately call 911 using the validated 28-item Stroke Action Test in English and Spanish, along with related variables, before and immediately after the intervention. Data were analyzed using repeated-measures analysis of variance. Results: An increase in intent to call 911 was seen immediately following the single viewing. Higher self-efficacy for calling 911 was associated with intent to call 911 among Hispanic but not black participants. A composite measure of barriers to calling 911 was not associated with intent to call 911 in either group. A significant association was found between higher stroke symptom knowledge and intent to call 911 at baseline, but not immediately following the intervention. No sex associations were found; however, being older was associated with greater intent to call 911. The majority of participants would strongly recommend the films to others. One participant appropriately called 911 for a real-life stroke event. Conclusions: Narrative communication in the form of tailored short films may improve intent to call 911 for stroke among the black and Hispanic population. PMID:27164682
Finger vein recognition using local line binary pattern.
Rosdi, Bakhtiar Affendi; Shing, Chai Wuh; Suandi, Shahrel Azmin
2011-01-01
In this paper, a personal verification method using finger veins is presented. Finger vein recognition can be considered more secure than other hand-based biometric traits, such as fingerprints and palm prints, because the features are inside the human body. In the proposed method, a new texture descriptor called the local line binary pattern (LLBP) is utilized as the feature extraction technique. The neighbourhood shape in LLBP is a straight line, unlike in the local binary pattern (LBP), where it is a square. Experimental results show that the proposed method using LLBP performs better than previous methods using LBP and the local derivative pattern (LDP).
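A minimal sketch of the LLBP idea, under assumptions about details the abstract does not fix (bit weighting, border handling, and the combined magnitude sqrt(H² + V²) used in line-pattern variants):

```python
import numpy as np

def llbp_magnitude(img, n=7):
    """Local Line Binary Pattern (simplified sketch).

    For each pixel, threshold the n-1 neighbours lying on a horizontal
    line (and, separately, a vertical line) against the centre pixel and
    pack the resulting bits into an integer code. Border pixels where the
    full line does not fit are skipped. The bit weighting here is one
    plausible choice; the paper's exact weighting may differ.
    """
    h = n // 2
    H = np.zeros(img.shape, dtype=np.int64)
    V = np.zeros(img.shape, dtype=np.int64)
    rows, cols = img.shape
    for y in range(h, rows - h):
        for x in range(h, cols - h):
            c = img[y, x]
            hbits = [int(img[y, x + d] >= c) for d in range(-h, h + 1) if d != 0]
            vbits = [int(img[y + d, x] >= c) for d in range(-h, h + 1) if d != 0]
            H[y, x] = sum(b << i for i, b in enumerate(hbits))
            V[y, x] = sum(b << i for i, b in enumerate(vbits))
    # Combined magnitude of the horizontal and vertical codes.
    return np.sqrt(H.astype(float)**2 + V.astype(float)**2)

img = np.arange(81).reshape(9, 9) % 7   # toy stand-in for a vein image
mag = llbp_magnitude(img, n=7)
print(mag.shape)
```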
Allele-specific copy-number discovery from whole-genome and whole-exome sequencing.
Wang, WeiBo; Wang, Wei; Sun, Wei; Crowley, James J; Szatkiewicz, Jin P
2015-08-18
Copy-number variants (CNVs) are a major form of genetic variation and a risk factor for various human diseases, so it is crucial to accurately detect and characterize them. It is conceivable that allele-specific reads from high-throughput sequencing data could be leveraged to both enhance CNV detection and produce allele-specific copy number (ASCN) calls. Although statistical methods have been developed to detect CNVs using whole-genome sequence (WGS) and/or whole-exome sequence (WES) data, information from allele-specific read counts has not yet been adequately exploited. In this paper, we develop an integrated method, called AS-GENSENG, which incorporates allele-specific read counts in CNV detection and estimates ASCN using either WGS or WES data. To evaluate the performance of AS-GENSENG, we conducted extensive simulations, generated empirical data using existing WGS and WES data sets and validated predicted CNVs using an independent methodology. We conclude that AS-GENSENG not only predicts accurate ASCN calls but also improves the accuracy of total copy number calls, owing to its unique ability to exploit information from both total and allele-specific read counts while accounting for various experimental biases in sequence data. Our novel, user-friendly and computationally efficient method and a complete analytic protocol is freely available at https://sourceforge.net/projects/asgenseng/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Kalyanaraman, Ananth; Cannon, William R; Latt, Benjamin; Baxter, Douglas J
2011-11-01
A MapReduce-based implementation called MR-MSPolygraph for parallelizing peptide identification from mass spectrometry data is presented. The underlying serial method, MSPolygraph, uses a novel hybrid approach to match an experimental spectrum against a combination of a protein sequence database and a spectral library. Our MapReduce implementation can run on any Hadoop cluster environment. Experimental results demonstrate that, relative to the serial version, MR-MSPolygraph reduces the time to solution from weeks to hours for processing tens of thousands of experimental spectra. Speedup and other related performance studies are also reported on a 400-core Hadoop cluster using spectral datasets from environmental microbial communities as inputs. The source code, along with user documentation, is available at http://compbio.eecs.wsu.edu/MR-MSPolygraph. Contact: ananth@eecs.wsu.edu; william.cannon@pnnl.gov. Supplementary data are available at Bioinformatics online.
Theory of mind in dogs?: examining method and concept.
Horowitz, Alexandra
2011-12-01
In line with other research, Udell, Dorey, and Wynne's (in press) finding that dogs and wolves pass on some trials of a putative theory-of-mind test and fail on others is as informative about the methods and concepts of the research as about the subjects. This commentary expands on these points. The intertrial differences in the target article demonstrate how critical the choice of cues is in experimental design; the intersubject-group differences demonstrate how life histories can interact with experimental design. Even the best-designed theory-of-mind tests have intractable logical problems. Finally, these and previous research results call for the introduction of an intermediate stage of ability, a rudimentary theory of mind, to describe subjects' performance.
TURNS - A free-wake Euler/Navier-Stokes numerical method for helicopter rotors
NASA Technical Reports Server (NTRS)
Srinivasan, G. R.; Baeder, J. D.
1993-01-01
Computational capabilities of a numerical procedure, called TURNS (transonic unsteady rotor Navier-Stokes), to calculate the aerodynamics and acoustics (high-speed impulsive noise) out to several rotor diameters are summarized. The procedure makes it possible to obtain the aerodynamic and acoustic information in a single calculation. The vortical wake and its influence, as well as the acoustics, are captured as part of the overall flowfield solution. The accuracy and suitability of the TURNS method are demonstrated through comparisons with experimental data.
Identity method for particle number fluctuations and correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorenstein, M. I.
An incomplete particle identification distorts the observed event-by-event fluctuations of the hadron chemical composition in nucleus-nucleus collisions. A new experimental technique called the identity method was recently proposed. It eliminates the misidentification problem for one specific combination of the second moments in a system of two hadron species. In the present paper, this method is extended to calculate all the second moments in a system with an arbitrary number of hadron species. Special linear combinations of the second moments are introduced. These combinations are expressed in terms of single-particle variables and can be found experimentally from event-by-event averaging. The mathematical problem is then reduced to solving a system of linear equations. The effect of incomplete particle identification is fully eliminated from the final results.
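The observable side of the identity method can be sketched as follows. For synthetic two-species events with overlapping Gaussian mass responses (an illustrative choice), the code computes the per-particle identity variables w_j = ρ_j(m) / Σ_k ρ_k(m) and their event-by-event sums W_j; the method then expresses the true moments ⟨N_i N_j⟩ as linear combinations of such W-moments, a step omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hadron species with overlapping mass-response densities rho_j(m);
# means loosely inspired by pion/kaon masses (GeV), purely illustrative.
def rho(j, m):
    mu = (0.14, 0.49)[j]
    return np.exp(-0.5 * ((m - mu) / 0.05) ** 2)

n_events, lam = 20000, (3.0, 1.5)   # events; true Poisson multiplicities
W1 = np.zeros(n_events)
W2 = np.zeros(n_events)
for ev in range(n_events):
    masses = np.concatenate([
        rng.normal(0.14, 0.05, rng.poisson(lam[0])),   # true species 1
        rng.normal(0.49, 0.05, rng.poisson(lam[1])),   # true species 2
    ])
    if masses.size:
        r1, r2 = rho(0, masses), rho(1, masses)
        w1 = r1 / (r1 + r2)          # identity variable of each particle
        W1[ev], W2[ev] = w1.sum(), (1 - w1).sum()

# Event-by-event moments of the identity sums; with this small overlap the
# first W-moments already track the true mean multiplicities, and the full
# method removes the residual misidentification bias exactly by solving
# a linear system relating <W_i W_j> to <N_i N_j>.
print(round(W1.mean(), 2), round(W2.mean(), 2))
```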
NASA Astrophysics Data System (ADS)
Pelamatti, Alice; Goiffon, Vincent; Chabane, Aziouz; Magnan, Pierre; Virmontois, Cédric; Saint-Pé, Olivier; de Boisanger, Michel Breart
2016-11-01
The charge transfer time represents the bottleneck in terms of temporal resolution in Pinned Photodiode (PPD) CMOS image sensors. This work focuses on the modeling and estimation of this key parameter. A simple numerical model of charge transfer in PPDs is presented. The model is based on a Monte Carlo simulation and takes into account both charge diffusion in the PPD and the effect of potential obstacles along the charge transfer path. This work also presents a new experimental approach for the estimation of the charge transfer time, called the pulsed Storage Gate (SG) method. This method, which allows a 'worst-case' transfer condition to be reproduced, is based on dedicated SG pixel structures and is particularly suitable for comparing the transfer efficiency of different pixel geometries.
Work Stress Interventions in Hospital Care: Effectiveness of the DISCovery Method
Niks, Irene; Gevers, Josette
2018-01-01
Effective interventions to prevent work stress and to improve health, well-being, and performance of employees are of the utmost importance. This quasi-experimental intervention study presents a specific method for diagnosis of psychosocial risk factors at work and subsequent development and implementation of tailored work stress interventions, the so-called DISCovery method. This method aims at improving employee health, well-being, and performance by optimizing the balance between job demands, job resources, and recovery from work. The aim of the study is to quantitatively assess the effectiveness of the DISCovery method in hospital care. Specifically, we used a three-wave longitudinal, quasi-experimental multiple-case study approach with intervention and comparison groups in health care work. Positive changes were found for members of the intervention groups, relative to members of the corresponding comparison groups, with respect to targeted work-related characteristics and targeted health, well-being, and performance outcomes. Overall, results lend support for the effectiveness of the DISCovery method in hospital care. PMID:29438350
Work Stress Interventions in Hospital Care: Effectiveness of the DISCovery Method.
Niks, Irene; de Jonge, Jan; Gevers, Josette; Houtman, Irene
2018-02-13
Effective interventions to prevent work stress and to improve health, well-being, and performance of employees are of the utmost importance. This quasi-experimental intervention study presents a specific method for diagnosis of psychosocial risk factors at work and subsequent development and implementation of tailored work stress interventions, the so-called DISCovery method. This method aims at improving employee health, well-being, and performance by optimizing the balance between job demands, job resources, and recovery from work. The aim of the study is to quantitatively assess the effectiveness of the DISCovery method in hospital care. Specifically, we used a three-wave longitudinal, quasi-experimental multiple-case study approach with intervention and comparison groups in health care work. Positive changes were found for members of the intervention groups, relative to members of the corresponding comparison groups, with respect to targeted work-related characteristics and targeted health, well-being, and performance outcomes. Overall, results lend support for the effectiveness of the DISCovery method in hospital care.
NASA Astrophysics Data System (ADS)
Imai, Takashi; Ota, Kaiichiro; Aoyagi, Toshio
2017-02-01
Phase reduction has been extensively used to study rhythmic phenomena. As a result of phase reduction, the rhythm dynamics of a given system can be described using the phase response curve. Measuring this characteristic curve is an important step toward understanding a system's behavior. Recently, a basic idea for a new measurement method (called the multicycle weighted spike-triggered average method) was proposed. This paper confirms the validity of this method by providing an analytical proof and demonstrates its effectiveness in actual experimental systems by applying the method to an oscillating electric circuit. Some practical tips to use the method are also presented.
Leyde, Brian P; Klein, Sanford A; Nellis, Gregory F; Skye, Harrison
2017-03-01
This paper presents a new method called the Crossed Contour Method for determining the effective properties (borehole radius and ground thermal conductivity) of a vertical ground-coupled heat exchanger. The borehole radius is used as a proxy for the overall borehole thermal resistance. The method has been applied to both simulated and experimental borehole Thermal Response Test (TRT) data using the Duct Storage vertical ground heat exchanger model implemented in the TRansient SYstems Simulation software (TRNSYS). The Crossed Contour Method generates a parametric grid of simulated TRT data for different combinations of borehole radius and ground thermal conductivity in a series of time windows. The error between the average of the simulated and experimental bore field inlet and outlet temperatures is calculated for each set of borehole properties within each time window. Using these data, contours of the minimum error are constructed in the parameter space of borehole radius and ground thermal conductivity. When all of the minimum error contours for each time window are superimposed, the point where the contours cross (intersect) identifies the effective borehole properties for the model that most closely represents the experimental data in every time window and thus over the entire length of the experimental data set. The computed borehole properties are compared with results from existing model inversion methods including the Ground Property Measurement (GPM) software developed by Oak Ridge National Laboratory, and the Line Source Model.
Metainference: A Bayesian inference method for heterogeneous systems
Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele
2016-01-01
Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors. PMID:26844300
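A minimal metainference-flavoured sketch: a double-well prior (two interconverting states) combined with a Gaussian error model on the replica-averaged observable, sampled by Metropolis. The fixed error sigma, the specific potentials, and the step sizes are all simplifying assumptions; the full method also samples the error parameters and uses many more replicas.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy system: the prior is a double well (two states near +/-1); the
# "experiment" measures the ENSEMBLE-AVERAGED observable <x> = 0.3.
d_exp, R = 0.3, 8                   # measured average, number of replicas

def log_prior(x):
    return -np.sum((x**2 - 1.0)**2) / 0.2    # double well for every replica

def log_like(x, sigma=0.1):
    # Gaussian error on the replica-averaged forward model (sigma fixed
    # for brevity; metainference proper samples the error parameters too).
    return -0.5 * (x.mean() - d_exp)**2 / (sigma**2 / R)

x = rng.standard_normal(R)
lp = log_prior(x) + log_like(x)
samples = []
for step in range(20000):
    prop = x + 0.1 * rng.standard_normal(R)   # collective replica move
    lp_new = log_prior(prop) + log_like(prop)
    if np.log(rng.random()) < lp_new - lp:    # Metropolis accept/reject
        x, lp = prop, lp_new
    samples.append(x.mean())

post_mean = np.mean(samples[5000:])           # discard burn-in
print(round(post_mean, 2))   # replica average pulled toward d_exp
```

Individually each replica sits near one of the two states, yet the sampled ensemble reproduces the measured average, which is the replica-averaging behaviour the abstract describes.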
An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving these processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method to choose a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data.
It is hoped that this study will provide new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data. PMID:23593445
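The Akaike Information Criterion step mentioned above can be illustrated with its least-squares form, AIC = n ln(RSS/n) + 2k, on synthetic noisy data. All model choices below (a linear versus a saturating candidate, grid-search fitting) are illustrative, not the paper's biological models or optimizer.

```python
import numpy as np

rng = np.random.default_rng(42)

# Noisy "experimental" data from a saturating process.
t = np.linspace(0.5, 10, 25)
y = 2.0 * t / (1.0 + t) + rng.normal(0, 0.05, t.size)

def aic(rss, n, k):
    # Least-squares form of the Akaike Information Criterion.
    return n * np.log(rss / n) + 2 * k

# Model 1: y = a*t (one parameter, closed-form least squares).
a = (t @ y) / (t @ t)
rss1 = np.sum((y - a * t)**2)

# Model 2: y = Vmax*t/(Km+t), crude grid-search fit (two parameters).
grid = [(v, km) for v in np.linspace(0.5, 4, 71)
                for km in np.linspace(0.2, 5, 97)]
rss2, (vmax, km) = min((np.sum((y - v * t / (km + t))**2), (v, km))
                       for v, km in grid)

print("AIC linear:    ", round(aic(rss1, t.size, 1), 1))
print("AIC saturating:", round(aic(rss2, t.size, 2), 1))   # lower is better
```

The extra parameter of the saturating model is penalized by the +2k term, but its far smaller residual dominates, so AIC selects the model family that actually generated the data.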
Improved Method for Linear B-Cell Epitope Prediction Using Antigen’s Primary Sequence
Raghava, Gajendra P. S.
2013-01-01
One of the major challenges in designing a peptide-based vaccine is the identification of antigenic regions in an antigen that can stimulate a B-cell response, also called B-cell epitopes. In the past, several methods have been developed for the prediction of conformational and linear (or continuous) B-cell epitopes. However, the existing methods for predicting linear B-cell epitopes are far from perfect. In this study, an attempt has been made to develop an improved method for predicting linear B-cell epitopes. We retrieved experimentally validated B-cell epitopes as well as non-B-cell epitopes from the Immune Epitope Database and derived two types of datasets, called the Lbtope_Variable and Lbtope_Fixed length datasets. The Lbtope_Variable dataset contains 14876 B-cell epitopes and 23321 non-epitopes of variable length, whereas the Lbtope_Fixed length dataset contains 12063 B-cell epitopes and 20589 non-epitopes of fixed length. We also evaluated the performance of models on the above datasets after removing highly identical peptides. In addition, we derived a third dataset, Lbtope_Confirm, having 1042 epitopes and 1795 non-epitopes, where each epitope or non-epitope has been experimentally validated in at least two studies. A number of models have been developed to discriminate epitopes from non-epitopes using different machine-learning techniques such as Support Vector Machine and K-Nearest Neighbor. We achieved accuracies from ∼54% to 86% using diverse features such as binary profiles, dipeptide composition, and AAP (amino acid pair) profiles. In this study, for the first time, experimentally validated non-B-cell epitopes have been used for developing a method for predicting linear B-cell epitopes; in previous studies, random peptides were used as non-B-cell epitopes. In order to provide a service to the scientific community, a web server, LBtope, has been developed for predicting and designing B-cell epitopes (http://crdd.osdd.net/raghava/lbtope/). PMID:23667458
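One of the features named above, dipeptide composition, is straightforward to sketch: the fraction of each of the 400 ordered amino-acid pairs among a peptide's overlapping dipeptides (the normalization by total dipeptide count is a common convention, assumed here).

```python
from itertools import product
from collections import Counter

AMINO = "ACDEFGHIKLMNPQRSTVWY"
PAIRS = ["".join(p) for p in product(AMINO, repeat=2)]   # 400 dipeptides

def dipeptide_composition(seq):
    """400-dimensional dipeptide-composition feature vector (fractions)."""
    counts = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    total = max(len(seq) - 1, 1)
    return [counts[p] / total for p in PAIRS]

vec = dipeptide_composition("ACDEFAC")   # toy peptide
print(len(vec), round(sum(vec), 6))      # 400 features summing to 1
```

Such fixed-length vectors are what allow peptides of varying length to be fed to classifiers like SVM or K-Nearest Neighbor.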
Novel optical scanning cryptography using Fresnel telescope imaging.
Yan, Aimin; Sun, Jianfeng; Hu, Zhijuan; Zhang, Jingtao; Liu, Liren
2015-07-13
We propose a new method, called modified optical scanning cryptography, that uses a Fresnel telescope imaging technique for the encryption and decryption of remote objects. An image or object can be optically encrypted on the fly by the Fresnel telescope scanning system together with an encryption key. For image decryption, the encrypted signals are received and processed with an optical coherent heterodyne detection system. The proposed method achieves strong performance through the use of secure Fresnel telescope scanning with orthogonally polarized beams and efficient all-optical information processing. The validity of the proposed method is demonstrated by numerical simulations and experimental results.
An, Ji‐Yong; Meng, Fan‐Rong; Chen, Xing; Yan, Gui‐Ying; Hu, Ji‐Pu
2016-01-01
Predicting protein–protein interactions (PPIs) is a challenging task and essential to constructing protein interaction networks, which is important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to predict PPIs, they have unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and Bi-gram Probabilities (BiGP) for PPI detection from protein sequences. The major improvements include: (1) protein sequences are represented using the Bi-gram Probabilities (BiGP) feature representation on a Position Specific Scoring Matrix (PSSM), which contains the protein evolutionary information; (2) to reduce the influence of noise, Principal Component Analysis (PCA) is used to reduce the dimension of the BiGP vector; (3) the powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five-fold cross-validation experiments executed on yeast and Helicobacter pylori datasets achieved very high accuracies of 94.57% and 90.57%, respectively. Experimental results are significantly better than those of previous methods. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than on the balanced yeast dataset.
The promising experimental results show the efficiency and robustness of the proposed method, which can serve as an automatic decision support tool for future proteomics research. To facilitate extensive studies, we developed a freely available web server called RVM-BiGP-PPIs in Hypertext Preprocessor (PHP) for predicting PPIs. The web server, including source code and the datasets, is available at http://219.219.62.123:8888/BiGP/. PMID:27452983
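The bi-gram feature on a PSSM can be sketched as follows: given an L x 20 matrix of position-specific scores, the (m, n) entry of a 20 x 20 bi-gram matrix accumulates the products of scores at consecutive positions, and the flattened 400-dimensional vector is what PCA would then reduce. This is a hedged illustration of the general BiGP construction, not the authors' code:

```python
def bigram_probabilities(pssm):
    """Given an L x 20 PSSM (rows = sequence positions, columns = amino
    acids), return the flattened 20 x 20 bi-gram feature: entry (m, n)
    sums P[i][m] * P[i+1][n] over consecutive positions i."""
    L, A = len(pssm), 20
    B = [[0.0] * A for _ in range(A)]
    for i in range(L - 1):
        for m in range(A):
            for n in range(A):
                B[m][n] += pssm[i][m] * pssm[i + 1][n]
    return [x for row in B for x in row]
```

Because the output length is fixed at 400 regardless of L, proteins of different lengths become directly comparable.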
Finger Vein Recognition Using Local Line Binary Pattern
Rosdi, Bakhtiar Affendi; Shing, Chai Wuh; Suandi, Shahrel Azmin
2011-01-01
In this paper, a personal verification method using finger veins is presented. Finger veins can be considered more secure than other hand-based biometric traits, such as fingerprints and palm prints, because the features are inside the human body. In the proposed method, a new texture descriptor called local line binary pattern (LLBP) is utilized as the feature extraction technique. The neighbourhood shape in LLBP is a straight line, unlike in local binary pattern (LBP), where it is a square. Experimental results show that the proposed method using LLBP performs better than previous methods using LBP and local derivative pattern (LDP). PMID:22247670
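For intuition, the horizontal component of a line-shaped binary pattern might be sketched as below; the published descriptor combines horizontal and vertical line codes, so this is an illustrative simplification, and the bit ordering here is an assumption:

```python
def llbp_horizontal(row, center, length=7):
    """Sketch of a horizontal local line binary pattern: pixels on a line
    of `length` centred at index `center` are thresholded against the
    centre pixel, and the resulting bits are weighted by powers of two."""
    half = length // 2
    c = row[center]
    code = 0
    bit = 0
    for offset in range(-half, half + 1):
        if offset == 0:
            continue  # the centre pixel is the threshold, not a bit
        if row[center + offset] >= c:
            code |= 1 << bit
        bit += 1
    return code
```

A full descriptor would compute such a code at every pixel (plus a vertical counterpart) and compare images via histograms of the codes.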
Visual question answering using hierarchical dynamic memory networks
NASA Astrophysics Data System (ADS)
Shang, Jiayu; Li, Shiren; Duan, Zhikui; Huang, Junwei
2018-04-01
Visual Question Answering (VQA) is one of the most popular research fields in machine learning; it aims to teach computers to answer natural language questions about images. In this paper, we propose a new method called hierarchical dynamic memory networks (HDMN), which takes both question attention and visual attention into consideration, inspired by the Co-Attention method, one of the strongest existing approaches. Additionally, we use bi-directional LSTMs, which retain more information from the question and image, to replace the old unit, so that we can capture information from both past and future sentences. We then rebuild the hierarchical architecture for not only question attention but also visual attention. Moreover, we accelerate the algorithm via Batch Normalization, which helps the network converge more quickly. The experimental results show that our model improves on the state of the art on the large COCO-QA dataset, compared with other methods.
Experimental transition probabilities for Mn II spectral lines
NASA Astrophysics Data System (ADS)
Manrique, J.; Aguilera, J. A.; Aragón, C.
2018-06-01
Transition probabilities for 46 spectral lines of Mn II with wavelengths in the range 2000-3500 Å have been measured by CSigma laser-induced breakdown spectroscopy (Cσ-LIBS). For 28 of the lines, experimental data had not been reported previously. The Cσ-LIBS method, based on the construction of generalized curves of growth called Cσ graphs, avoids the error due to self-absorption. The samples used to generate the laser-induced plasmas are fused glass disks prepared from pure MnO. The Mn concentrations in the samples and the lines included in the study are selected to ensure the validity of the homogeneous-plasma model used. The results are compared to experimental and theoretical values available in the literature.
Determination of the optimal number of components in independent components analysis.
Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N
2018-03-01
Independent components analysis (ICA) may be considered one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. As with other similar methods, the determination of the optimal number of latent variables, in this case independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide on the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals, is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples (generally concentrations), or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
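The Random_ICA idea, repeatedly splitting the rows of the data matrix and checking whether ICs extracted from the two blocks still match, can be sketched with two helpers. This is a pure-Python illustration of the splitting and matching steps only; the ICA extraction itself is assumed to be supplied by the caller:

```python
import random

def split_rows(n_rows, seed=None):
    """Randomly split row indices into two blocks (Random_ICA style);
    repeated with different seeds to average over splits."""
    rng = random.Random(seed)
    idx = list(range(n_rows))
    rng.shuffle(idx)
    half = n_rows // 2
    return sorted(idx[:half]), sorted(idx[half:])

def matched_correlation(comps_a, comps_b):
    """For each component extracted from block A, the best absolute
    Pearson correlation with any component from block B; components
    that are stable across splits correlate highly."""
    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sx = sum((v - mx) ** 2 for v in x) ** 0.5
        sy = sum((v - my) ** 2 for v in y) ** 0.5
        return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
    return [max(abs(corr(a, b)) for b in comps_b) for a in comps_a]
```

The optimal number of ICs would then be the largest count for which the matched correlations remain high across many random splits.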
Toward a standard in structural genome annotation for prokaryotes
Tripp, H. James; Sutton, Granger; White, Owen; ...
2015-07-25
In an effort to identify the best practice for finding genes in prokaryotic genomes and propose it as a standard for automated annotation pipelines, we collected 1,004,576 peptides from various publicly available resources, and these were used as a basis to evaluate various gene-calling methods. The peptides came from 45 bacterial replicons with GC content ranging from 31% to 74%, biased toward higher-GC-content genomes. Automated, manual, and semi-manual methods were used to tally errors in three widely used gene calling methods, as evidenced by peptides mapped outside the boundaries of called genes. We found that the consensus set of identical genes predicted by the three methods constitutes only about 70% of the genes predicted by each individual method (with start and stop required to coincide). Peptide data was useful for evaluating some of the differences between gene callers, but not reliable enough to make the results conclusive, due to limitations inherent in any proteogenomic study. A single, unambiguous, unanimous best practice did not emerge from this analysis, since the available proteomics data were not adequate to provide an objective measurement of differences in the accuracy between these methods. However, as a result of this study, software, reference data, and procedures have been better matched among participants, representing a step toward a much-needed standard. In the absence of a sufficient amount of experimental data to achieve a universal standard, our recommendation is that any of these methods can be used by the community, as long as a single method is employed across all datasets to be compared.
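The consensus criterion used above, identical start and stop across all callers, is straightforward to compute. A minimal sketch, with gene calls represented as (start, stop) tuples:

```python
def consensus_genes(calls_by_method):
    """Genes predicted identically (same start AND stop) by every method."""
    sets = [set(calls) for calls in calls_by_method]
    return set.intersection(*sets)

def consensus_fraction(calls_by_method):
    """Fraction of each method's calls that lie in the consensus set."""
    common = consensus_genes(calls_by_method)
    return [len(common) / len(set(calls)) for calls in calls_by_method]
```

With three callers, a fraction of about 0.7 per method corresponds to the ~70% consensus figure reported.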
Preference Mining Using Neighborhood Rough Set Model on Two Universes.
Zeng, Kai
2016-01-01
Preference mining plays an important role in e-commerce and video websites for enhancing user satisfaction and loyalty. Classical methods are not applicable to the cold-start problem, when the user or the item is new. In this paper, we propose a new model, called parametric neighborhood rough set on two universes (NRSTU), to describe the user and item data structures. Furthermore, the neighborhood lower approximation operator is used for defining the preference rules. Then, we provide the means for recommending items to users by using these rules. Finally, we give an experimental example to show the details of NRSTU-based preference mining for the cold-start problem. The parameters of the model are also discussed. The experimental results show that the proposed method presents an effective solution for preference mining. In particular, NRSTU improves the recommendation accuracy by about 19% compared to the traditional method.
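The neighborhood lower approximation at the heart of such models can be sketched generically: an object belongs to the lower approximation of a target set if its entire delta-neighborhood lies inside that set. This is a textbook-style illustration on a single universe, not the paper's two-universe formulation:

```python
def neighborhood(x, universe, dist, delta):
    """The delta-neighborhood of x: all objects within distance delta."""
    return {y for y in universe if dist(x, y) <= delta}

def lower_approximation(target, universe, dist, delta):
    """Neighborhood rough-set lower approximation: objects whose whole
    delta-neighborhood is contained in the target set (certain members)."""
    return {x for x in universe
            if neighborhood(x, universe, dist, delta) <= target}
```

Rules derived from the lower approximation are "certain" in the rough-set sense, which is what makes them usable for recommendation.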
NASA Astrophysics Data System (ADS)
Wang, Hongcui; Kawahara, Tatsuya
CALL (Computer Assisted Language Learning) systems that use ASR (Automatic Speech Recognition) for second language learning have received increasing interest recently. However, it remains a challenge to achieve high speech recognition performance, including accurate detection of erroneous utterances by non-native speakers. Conventionally, possible error patterns, based on linguistic knowledge, are added to the lexicon and language model, or to the ASR grammar network. However, this approach quickly runs into a trade-off between error coverage and increased perplexity. To solve this problem, we propose a method based on a decision tree that learns to predict the errors made by non-native speakers. An experimental evaluation with a number of foreign students learning Japanese shows that the proposed method can effectively generate an ASR grammar network, given a target sentence, achieving both better error coverage and smaller perplexity, and resulting in a significant improvement in ASR accuracy.
Evaluation of ultrasonic cavitation of metallic and non-metallic surfaces
NASA Technical Reports Server (NTRS)
Mehta, Narinder K.
1992-01-01
1,1,2-Trichloro-1,2,2-trifluoroethane (CFC-113), commercially known as Freon-113, is the primary test solvent used for validating cleaned hardware at the Kennedy Space Center (KSC). Due to the ozone depletion problem, current United States policy calls for the phase-out of Freons by 1995. NASA's chlorofluorocarbon (CFC) replacement group at KSC has opted to use water as a replacement fluid for the validation process, since water is non-toxic, inexpensive, and environmentally friendly. The replacement validation method calls for ultrasonication of the small parts in water at 52 C for one or two 10-minute wash cycles using commercial ultrasonic baths. In this project, experimental data were obtained to assess the applicability of the proposed validation method with respect to damage to metallic and non-metallic surfaces resulting from ultrasonic cavitation.
Optimizing Associative Experimental Design for Protein Crystallization Screening
Dinç, Imren; Pusey, Marc L.; Aygün, Ramazan S.
2016-01-01
The goal of protein crystallization screening is to determine the main factors of importance to crystallizing the protein under investigation. One of the major issues in determining these factors is that screening is often expanded to many hundreds or thousands of conditions to maximize coverage of the combinatorial chemical space, and thereby the chances of a successful (crystalline) outcome. In this paper, we propose an experimental design method called "Associative Experimental Design (AED)" and an optimization method that includes eliminating prohibited combinations and prioritizing reagents based on AED analysis of results from protein crystallization experiments. AED generates candidate cocktails based on initial screening results, which are analyzed to determine those screening factors in chemical space that are most likely to lead to higher-scoring outcomes, i.e., crystals. We tested AED on three proteins derived from the hyperthermophile Thermococcus thioreducens and applied the optimization method to these proteins. Our AED method generated novel cocktails (count in parentheses) leading to crystals for three proteins as follows: nucleoside diphosphate kinase (4), HAD superfamily hydrolase (2), nucleoside kinase (1). After obtaining these promising results, we tested our optimization method on four different proteins. The AED method with optimization yielded 4, 3, and 20 crystalline conditions for holo human transferrin, an archaeal exosome protein, and nucleoside diphosphate kinase, respectively. PMID:26955046
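The core associative move, generating candidate cocktails by recombining reagents from pairs of promising screening conditions, might be sketched as follows. The two-reagent cocktail representation and the `threshold` score are assumptions for illustration; the actual AED analysis is richer than this:

```python
from itertools import combinations

def aed_candidates(scored_conditions, threshold=5):
    """Sketch of associative cocktail generation: from pairs of
    high-scoring screening conditions, propose new cocktails by swapping
    their reagents, then drop any cocktail already screened.
    `scored_conditions` is a list of ((reagent1, reagent2), score)."""
    hits = [cond for cond, score in scored_conditions if score >= threshold]
    candidates = set()
    for (a1, b1), (a2, b2) in combinations(hits, 2):
        candidates.update({(a1, b2), (a2, b1)})
    screened = {cond for cond, _ in scored_conditions}
    return sorted(candidates - screened)
```

The generated cocktails would then be prioritized and filtered for prohibited combinations before the next screening round.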
A direct-inverse method for transonic and separated flows about airfoils
NASA Technical Reports Server (NTRS)
Carlson, K. D.
1985-01-01
A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flowfield about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.
Distributed genetic algorithms for the floorplan design problem
NASA Technical Reports Server (NTRS)
Cohoon, James P.; Hegde, Shailesh U.; Martin, Worthy N.; Richards, Dana S.
1991-01-01
Designing a VLSI floorplan calls for arranging a given set of modules in the plane to minimize the weighted sum of area and wire-length measures. A method for solving the floorplan design problem using distributed genetic algorithms is presented. Distributed genetic algorithms, based on the paleontological theory of punctuated equilibria, offer a conceptual modification to traditional genetic algorithms. Experimental results on several problem instances demonstrate the efficacy of this method and indicate its advantages over other methods, such as simulated annealing. The method performed better than the simulated annealing approach, both in terms of the average cost of the solutions found and of the best solution found, in almost all the problem instances tried.
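An island-model (distributed) GA of the kind described can be sketched compactly: subpopulations evolve in isolation and periodically exchange their best individuals around a ring, one common reading of the punctuated-equilibria idea. This OneMax toy is illustrative only, not the paper's floorplanning algorithm:

```python
import random

def island_ga(fitness, n_bits, n_islands=4, pop=20, gens=50,
              migrate_every=10, seed=0):
    """Island-model GA sketch: isolated subpopulations evolve
    independently; every `migrate_every` generations each island
    receives its ring-neighbour's best individual."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.randint(0, 1) for _ in range(n_bits)]
    def mutate(ind):
        # flip each bit with probability 1/n_bits
        return [b ^ (rng.random() < 1.0 / n_bits) for b in ind]
    def crossover(a, b):
        cut = rng.randrange(1, n_bits)
        return a[:cut] + b[cut:]
    islands = [[rand_ind() for _ in range(pop)] for _ in range(n_islands)]
    for g in range(gens):
        for k, isl in enumerate(islands):
            isl.sort(key=fitness, reverse=True)
            elite = isl[:pop // 2]
            children = [mutate(crossover(*rng.sample(elite, 2)))
                        for _ in range(pop - len(elite))]
            islands[k] = elite + children
        if (g + 1) % migrate_every == 0:
            bests = [max(isl, key=fitness) for isl in islands]
            for k in range(n_islands):
                islands[k][-1] = bests[(k - 1) % n_islands]
    return max((ind for isl in islands for ind in isl), key=fitness)
```

For floorplanning, the bitstring would be replaced by a module-placement encoding and `fitness` by the negated weighted area/wire-length cost.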
A direct-inverse method for transonic and separated flows about airfoils
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1990-01-01
A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flow field about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.
Getting the most out of RNA-seq data analysis.
Khang, Tsung Fei; Lau, Ching Yee
2015-01-01
Background. A common research goal in transcriptome projects is to find genes that are differentially expressed in different phenotype classes. Biologists might wish to validate such gene candidates experimentally, or use them for downstream systems biology analysis. Producing a coherent differential gene expression analysis from RNA-seq count data requires an understanding of how numerous sources of variation such as the replicate size, the hypothesized biological effect size, and the specific method for making differential expression calls interact. We believe an explicit demonstration of such interactions in real RNA-seq data sets is of practical interest to biologists. Results. Using two large public RNA-seq data sets-one representing strong, and another mild, biological effect size-we simulated different replicate size scenarios, and tested the performance of several commonly-used methods for calling differentially expressed genes in each of them. We found that, when biological effect size was mild, RNA-seq experiments should focus on experimental validation of differentially expressed gene candidates. Importantly, at least triplicates must be used, and the differentially expressed genes should be called using methods with high positive predictive value (PPV), such as NOISeq or GFOLD. In contrast, when biological effect size was strong, differentially expressed genes mined from unreplicated experiments using NOISeq, ASC and GFOLD had between 30 to 50% mean PPV, an increase of more than 30-fold compared to the cases of mild biological effect size. Among methods with good PPV performance, having triplicates or more substantially improved mean PPV to over 90% for GFOLD, 60% for DESeq2, 50% for NOISeq, and 30% for edgeR. At a replicate size of six, we found DESeq2 and edgeR to be reasonable methods for calling differentially expressed genes at systems level analysis, as their PPV and sensitivity trade-off were superior to the other methods'. Conclusion. 
When biological effect size is weak, systems-level investigation is not possible using RNA-seq data, and no meaningful result can be obtained from unreplicated experiments. Nonetheless, NOISeq or GFOLD may yield limited numbers of gene candidates with good validation potential when triplicates or more are available. When biological effect size is strong, NOISeq and GFOLD are effective tools for detecting differentially expressed genes in unreplicated RNA-seq experiments for qPCR validation. When triplicates or more are available, GFOLD is a sharp tool for identifying high-confidence differentially expressed genes for targeted qPCR validation; for downstream systems-level analysis, combined results from DESeq2 and edgeR are useful.
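The PPV and sensitivity figures discussed above are computed by comparing a method's differential-expression calls against a truth set; a minimal sketch:

```python
def ppv_and_sensitivity(called, truly_de):
    """Positive predictive value (precision) and sensitivity (recall)
    of a set of differential-expression calls against a truth set."""
    called, truly_de = set(called), set(truly_de)
    tp = len(called & truly_de)
    ppv = tp / len(called) if called else 0.0
    sensitivity = tp / len(truly_de) if truly_de else 0.0
    return ppv, sensitivity
```

High PPV (few false positives among the calls) is what matters when candidates go straight to qPCR validation; sensitivity matters more for systems-level analysis.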
The cardiac muscle duplex as a method to study myocardial heterogeneity
Solovyova, O.; Katsnelson, L.B.; Konovalov, P.V.; Kursanov, A.G.; Vikulova, N.A.; Kohl, P.; Markhasin, V.S.
2014-01-01
This paper reviews the development and application of paired muscle preparations, called duplex, for the investigation of mechanisms and consequences of intra-myocardial electro-mechanical heterogeneity. We illustrate the utility of the underlying combined experimental and computational approach for conceptual development and integration of basic science insight with clinically relevant settings, using previously published and new data. Directions for further study are identified. PMID:25106702
Pattern recognition neural-net by spatial mapping of biology visual field
NASA Astrophysics Data System (ADS)
Lin, Xin; Mori, Masahiko
2000-05-01
The method of spatial mapping found in biological visual fields is applied to artificial neural networks for pattern recognition. Through a coordinate transform called complex-logarithmic mapping, followed by a Fourier transform, the input images are transformed into scale-, rotation-, and shift-invariant patterns, and then fed into a multilayer neural network for learning and recognition. Results from a computer simulation and an optical experimental system are described.
Leyde, Brian P.; Klein, Sanford A; Nellis, Gregory F.; Skye, Harrison
2017-01-01
This paper presents a new method called the Crossed Contour Method for determining the effective properties (borehole radius and ground thermal conductivity) of a vertical ground-coupled heat exchanger. The borehole radius is used as a proxy for the overall borehole thermal resistance. The method has been applied to both simulated and experimental borehole Thermal Response Test (TRT) data using the Duct Storage vertical ground heat exchanger model implemented in the TRansient SYstems Simulation software (TRNSYS). The Crossed Contour Method generates a parametric grid of simulated TRT data for different combinations of borehole radius and ground thermal conductivity in a series of time windows. The error between the average of the simulated and experimental bore field inlet and outlet temperatures is calculated for each set of borehole properties within each time window. Using these data, contours of the minimum error are constructed in the parameter space of borehole radius and ground thermal conductivity. When all of the minimum error contours for each time window are superimposed, the point where the contours cross (intersect) identifies the effective borehole properties for the model that most closely represents the experimental data in every time window and thus over the entire length of the experimental data set. The computed borehole properties are compared with results from existing model inversion methods including the Ground Property Measurement (GPM) software developed by Oak Ridge National Laboratory, and the Line Source Model. PMID:28785125
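The parameter search underlying the Crossed Contour Method can be sketched as a grid search in which a (radius, conductivity) pair is favoured only if its error is small in every time window simultaneously. Here the contour intersection is approximated by minimizing the worst-case window error, a simplifying assumption rather than the paper's contour construction, and `simulate`/`observed` are placeholder names:

```python
def crossed_contours(simulate, observed, radii, conductivities, windows):
    """Sketch of the Crossed Contour idea: over a (radius, conductivity)
    grid, score each pair by its worst RMS temperature error across all
    time windows; only a pair lying on (near) the minimum-error contour
    of every window scores well, approximating where contours cross."""
    def rms(a, b):
        return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
    best, best_score = None, float("inf")
    for r in radii:
        for k in conductivities:
            score = max(rms(simulate(r, k, w), observed[w]) for w in windows)
            if score < best_score:
                best, best_score = (r, k), score
    return best
```

In the method proper, `simulate` would be the TRNSYS Duct Storage model and the contours would be drawn and intersected explicitly in the parameter plane.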
Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà
2010-03-01
Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieve analyte quantitation in the presence of unexpected interferences. Both for simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates the achievement of the so-called second-order advantage for the analyte of interest, which is known to be present for more complex higher-order richer instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.
Active vibration control for flexible rotor by optimal direct-output feedback control
NASA Technical Reports Server (NTRS)
Nonami, Kenzou; Dirusso, Eliseo; Fleming, David P.
1989-01-01
Experimental research tests were performed to actively control the rotor vibrations of a flexible rotor mounted on flexible bearing supports. The active control method used in the tests is called optimal direct-output feedback control. This method uses four electrodynamic actuators to apply control forces directly to the bearing housings in order to achieve effective vibration control of the rotor. The force actuators are controlled by an analog controller that accepts rotor displacement as input. The controller is programmed with experimentally determined feedback coefficients; the output is a control signal to the force actuators. The tests showed that this active control method reduced the rotor resonance peaks due to unbalance from approximately 250 micrometers down to approximately 25 micrometers (essentially runout level). The tests were conducted over a speed range from 0 to 10,000 rpm; the rotor system had nine critical speeds within this speed range. The method was effective in significantly reducing the rotor vibration for all of the vibration modes and critical speeds.
Active vibration control for flexible rotor by optimal direct-output feedback control
NASA Technical Reports Server (NTRS)
Nonami, K.; Dirusso, E.; Fleming, D. P.
1989-01-01
Experimental research tests were performed to actively control the rotor vibrations of a flexible rotor mounted on flexible bearing supports. The active control method used in the tests is called optimal direct-output feedback control. This method uses four electrodynamic actuators to apply control forces directly to the bearing housings in order to achieve effective vibration control of the rotor. The force actuators are controlled by an analog controller that accepts rotor displacement as input. The controller is programmed with experimentally determined feedback coefficients; the output is a control signal to the force actuators. The tests showed that this active control method reduced the rotor resonance peaks due to unbalance from approximately 250 microns down to approximately 25 microns (essentially runout level). The tests were conducted over a speed range from 0 to 10,000 rpm; the rotor system had nine critical speeds within this speed range. The method was effective in significantly reducing the rotor vibration for all of the vibration modes and critical speeds.
A remark on copy number variation detection methods.
Li, Shuo; Dou, Xialiang; Gao, Ruiqi; Ge, Xinzhou; Qian, Minping; Wan, Lin
2018-01-01
Copy number variations (CNVs) are gains and losses of DNA sequence in a genome. High-throughput platforms such as microarrays and next generation sequencing (NGS) technologies have been applied to genome-wide detection of copy number losses. Although progress has been made with both approaches, the accuracy and consistency of CNV calling from the two platforms remain in dispute. In this study, we perform a deep analysis of copy number losses in 254 human DNA samples for which both SNP microarray data and NGS data are publicly available, from the HapMap Project and the 1000 Genomes Project, respectively. We show that the copy number losses reported by the HapMap Project and the 1000 Genomes Project have less than 30% overlap, even though these reports are required by their corresponding projects to have cross-platform (e.g., PCR, microarray, and high-throughput sequencing) experimental support, and even though state-of-the-art calling methods were employed. On the other hand, copy number losses found directly from HapMap microarray data by an accurate algorithm, CNVhac, almost all have lower read mapping depth in the NGS data; furthermore, 88% of them can be supported by sequences with breakpoints in the NGS data. Our results suggest that microarrays are capable of calling CNVs, and that the unessential requirement of additional cross-platform support may introduce false negatives. The inconsistency of the CNV reports from the HapMap Project and the 1000 Genomes Project might result from the limited information contained in microarray data, inconsistent detection criteria, or the filtering effect of cross-platform support. Statistical tests on the CNVs called by CNVhac show that microarray data can offer reliable CNV reports, and the majority of CNV candidates can be confirmed by raw sequences.
Therefore, the CNV candidates given by a good caller can be highly reliable without cross-platform support, and additional experimental validation should be applied as needed rather than required universally.
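Overlap statistics such as the <30% figure above depend on the matching rule; a common choice is reciprocal overlap between call intervals. A hedged sketch (the 50% threshold is a conventional default, not taken from this study):

```python
def overlap_fraction(calls_a, calls_b, min_reciprocal=0.5):
    """Fraction of CNV calls in A matched by some call in B with at
    least `min_reciprocal` reciprocal overlap. Calls are half-open
    (start, end) intervals on the same chromosome."""
    def reciprocal(a, b):
        inter = min(a[1], b[1]) - max(a[0], b[0])
        if inter <= 0:
            return 0.0
        # overlap as a fraction of BOTH intervals; take the smaller
        return min(inter / (a[1] - a[0]), inter / (b[1] - b[0]))
    matched = sum(1 for a in calls_a
                  if any(reciprocal(a, b) >= min_reciprocal for b in calls_b))
    return matched / len(calls_a)
```

Because the threshold choice directly changes the reported concordance, cross-platform comparisons should state it explicitly.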
Frith, Emily; Loprinzi, Paul D.
2018-01-01
Background: We evaluated the differential influence of preferred versus imposed media selections on distinct hedonic responses to an acute bout of treadmill walking. Methods: Twenty university students were recruited for this [160 person-visit] laboratory experiment, which employed a within-subject, counter-balanced design. Participants were exposed to 8 experimental conditions: (1) Exercise Only, (2) Texting Only, (3) Preferred Phone Call, (4) Imposed Phone Call, (5) Preferred Music Playlist, (6) Imposed Music Playlist, (7) Preferred Video, and (8) Imposed Video. During each visit (except Texting Only), participants completed a 10-minute bout of walking on the treadmill at a self-selected pace. Walking speed was identical for all experimental conditions. Before, at the midpoint of, and after exercise, participants completed the Feeling Scale (FS) and the Felt Arousal Scale (FAS) to measure acute hedonic response. The Affective Circumplex Scale was administered pre-exercise and post-exercise. Results: Significant pre-post change scores were observed for happy (Imposed Call: P=0.05; Preferred Music: P=0.02; Imposed Video: P=0.03), excited (Exercise Only: P=0.001; Preferred Video: P=0.01; Imposed Video: P=0.03), sad (Preferred Music: P=0.05), anxious (Exercise Only: P=0.05; Preferred Video: P=0.01), and fatigue (Exercise Only: P=0.03; Imposed Video: P=0.002). For the FS, all change scores were statistically significant from pre-to-mid and pre-to-post (P<0.05). Conclusion: This experiment provides strong evidence that entertaining media platforms substantively influence hedonic responses to exercise. Implications of these findings are discussed. PMID:29744306
Frith, Emily; Loprinzi, Paul D
2018-01-01
Background: We evaluated the differential influence of preferred versus imposed media selections on distinct hedonic responses to an acute bout of treadmill walking. Methods: Twenty university students were recruited for this [160 person-visit] laboratory experiment, which employed a within-subject, counter-balanced design. Participants were exposed to 8 experimental conditions: (1) Exercise Only, (2) Texting Only, (3) Preferred Phone Call, (4) Imposed Phone Call, (5) Preferred Music Playlist, (6) Imposed Music Playlist, (7) Preferred Video, and (8) Imposed Video. During each visit (except Texting Only), participants completed a 10-minute bout of walking on the treadmill at a self-selected pace. Walking speed was identical for all experimental conditions. Before, at the midpoint of, and after exercise, participants completed the Feeling Scale (FS) and the Felt Arousal Scale (FAS) to measure acute hedonic response. The Affective Circumplex Scale was administered pre-exercise and post-exercise. Results: Significant pre-post change scores were observed for happy (Imposed Call: P=0.05; Preferred Music: P=0.02; Imposed Video: P=0.03), excited (Exercise Only: P=0.001; Preferred Video: P=0.01; Imposed Video: P=0.03), sad (Preferred Music: P=0.05), anxious (Exercise Only: P=0.05; Preferred Video: P=0.01), and fatigue (Exercise Only: P=0.03; Imposed Video: P=0.002). For the FS, all change scores were statistically significant from pre-to-mid and pre-to-post (P<0.05). Conclusion: This experiment provides strong evidence that entertaining media platforms substantively influence hedonic responses to exercise. Implications of these findings are discussed.
Squeeze strengthening of magnetorheological fluids using mixed mode operation
NASA Astrophysics Data System (ADS)
Becnel, A. C.; Sherman, S. G.; Hu, W.; Wereley, N. M.
2015-05-01
This research details a novel method of increasing the shear yield stress of magnetorheological fluids by combining shear and squeeze modes of operation to manipulate particle chain structures, so-called squeeze strengthening. Using a custom built Searle cell magnetorheometer, which is a model device emulating a rotary magnetorheological energy absorber (MREA), the contribution of squeeze strengthening to the total controllable yield force is experimentally investigated. Using an eccentric rotating inner cylinder, characterization data from large (1 mm) and small (0.25 mm) nominal gap geometries are compared to investigate the squeeze strengthening effect. Details of the experimental setup and method are presented, and a hybrid model is used to explain experimental trends. This study demonstrates that it is feasible, utilizing squeeze strengthening to increase yield stress, to either (1) design a rotary MREA of a given volume to achieve higher energy absorption density (energy absorbed normalized by active fluid volume), or (2) reduce the volume of a given rotary MREA to achieve the same energy absorption density.
Three novel approaches to structural identifiability analysis in mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2016-05-06
Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the systems transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. 
Although mixed-effects models are widely used, method development of structural identifiability techniques for them has received very little attention; the methods presented in this paper therefore provide a previously unavailable way of handling structural identifiability in mixed-effects models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Ziegler, Lucía; Arim, Matías; Narins, Peter M
2011-05-01
The structure of the environment surrounding signal emission produces different patterns of degradation and attenuation. The expected adjustment of calls to ensure signal transmission in an environment was formalized in the acoustic adaptation hypothesis. Within this framework, most studies have considered anuran calls as fixed attributes determined by local adaptations. However, variability in vocalizations as a product of phenotypic expression has also been reported. Empirical evidence supporting the association between environment and call structure has been inconsistent, particularly in anurans. Here, we identify a plausible causal structure connecting environment, individual attributes, and temporal and spectral adjustments as direct or indirect determinants of the observed variation in call attributes of the frog Hypsiboas pulchellus. For that purpose, we recorded the calls of 40 males in the field, together with vegetation density and other environmental descriptors of the calling site. Path analysis revealed a strong effect of habitat structure on the temporal parameters of the call, and an effect of site temperature conditioning the size of organisms calling at each site and thus indirectly affecting the dominant frequency of the call. Experimental habitat modification with a styrofoam enclosure yielded results consistent with field observations, highlighting the potential role of call flexibility in the detected call patterns. Both experimental and correlative results indicate the need to incorporate the so far poorly considered role of phenotypic plasticity in the complex connection between environmental structure and individual call attributes. PMID:22479134
True temperature measurement on metallic surfaces using a two-color pyroreflectometer method.
Hernandez, D; Netchaieff, A; Stein, A
2009-09-01
In the most common case of optical pyrometry, the major obstacle in determining the true temperature is knowledge of the thermo-optical properties under in situ conditions. We present experimental results obtained with a method able to determine the true temperature of metallic surfaces above 500 °C when there is no parasitic effect from surrounding radiation. The method is called bicolor pyroreflectometry, and it is based on Planck's law, Kirchhoff's law, and the assumption of identical reflectivity indicatrices for the target surface at two different close wavelengths (here, 1.3 and 1.55 μm). The diffusion factor eta(d), the key parameter of the method, is introduced to determine the convergence temperature T(*), which is expected to be equal to the true temperature T. Our goal is to assess this method for different metallic surfaces. The method is validated by comparison with thermocouples. Measurements were made for tungsten, copper, and aluminum samples of different roughnesses, determined by a rugosimeter. After introducing a theoretical model for two-color pyroreflectometry, we give a description of the experimental setup and present experimental applications of the method. The quality of the results demonstrates the usefulness of two-color pyroreflectometry for determining the temperatures of hot metals when the emissivity is not known, and for the commercially important case of specular surfaces.
Cook, G M
1999-12-01
The 1890s and the first decades of the twentieth century saw a vigorous debate about the mechanisms of evolutionary change. On one side, August Weismann defended the selectionist hypothesis; on the other, Herbert Spencer defended neo-Lamarckian theory. Supporters of Spencer, notably the American paleontologist and evolutionary theorist Henry Fairfield Osborn, recognized that the questions raised by Weismann and Spencer could only be settled experimentally. They called for the application of experimental methods, and the establishment of a new institution for the purpose of confirming the inheritance of acquired characters. To a great extent, the experimental program championed by Osborn and others was implemented and, although it failed to reveal soft inheritance and was soon eclipsed by Mendelian and chromosomal genetics, it did make significant and lasting contributions to evolutionary biology. Thus the importance of methodological and institutional innovation and theoretical pluralism to the progress of science is illustrated and underscored.
Liening, Andreas; Strunk, Guido; Mittelstadt, Ewald
2013-10-01
Much has been written about the differences between single- and double-loop learning or, more generally, between lower-level and higher-level learning. Especially in times of a fundamental crisis, a transition from lower-level to higher-level learning would be an appropriate reaction to a challenge coming entirely out of the dark. However, so far there has been no quantitative method to monitor such a transition. We therefore introduce the theory and methods of synergetics and present results from an experimental study based on the simulation of a crisis within a business simulation game. Hypothesized critical fluctuations - a marker for so-called phase transitions - were assessed with permutation entropy. Results show evidence for a phase transition during the crisis, which can be interpreted as a transition between lower- and higher-level learning.
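Permutation entropy (Bandt-Pompe) is a standard, easily computed measure of this kind; a minimal sketch for illustration, not the study's own implementation or parameter choices:

```python
from collections import Counter
from math import log2, factorial

def permutation_entropy(series, order=3, normalize=True):
    """Permutation entropy (Bandt & Pompe): Shannon entropy of the
    distribution of ordinal patterns of length `order` in the series."""
    patterns = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # ordinal pattern: the argsort of the window's values
        patterns[tuple(sorted(range(order), key=lambda k: window[k]))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * log2(c / total) for c in patterns.values())
    return h / log2(factorial(order)) if normalize else h

# A monotone series has a single ordinal pattern, so its entropy is zero;
# hypothesized critical fluctuations would show up as a rise in entropy.
assert permutation_entropy([1, 2, 3, 4, 5, 6]) == 0.0
```

Higher values indicate a richer mix of ordinal patterns, which is what makes the measure useful as a marker of critical fluctuations.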
Calibration for single multi-mode fiber digital scanning microscopy imaging system
NASA Astrophysics Data System (ADS)
Yin, Zhe; Liu, Guodong; Liu, Bingguo; Gan, Yu; Zhuang, Zhitao; Chen, Fengdong
2015-11-01
Single multimode fiber (MMF) digital scanning imaging is a development trend in modern endoscopy. We concentrate on the calibration method for the imaging system. Calibration comprises two processes: forming scanning focused spots and calibrating the couple factors, which vary with position. The adaptive parallel coordinate (APC) algorithm is adopted to form the focused spots at the MMF output. Compared with other algorithms, APC has several merits: high speed, a small amount of calculation, and no iterations. The ratio of the optical power captured by the MMF to the intensity of the focused spot is called the couple factor. We set up the calibration experimental system to form the scanning focused spots and calculate the couple factors for different object positions. The experimental results show that the couple factor is higher in the center than at the edge.
Biclustering of gene expression data using reactive greedy randomized adaptive search procedure.
Dharan, Smitha; Nair, Achuthsankar S
2009-01-30
Biclustering algorithms belong to a distinct class of clustering algorithms that perform simultaneous clustering of both rows and columns of the gene expression matrix and can be a very useful analysis tool when some genes have multiple functions and experimental conditions are diverse. Cheng and Church introduced a measure called the mean squared residue score to evaluate the quality of a bicluster, which has become one of the most popular measures used to search for biclusters. In this paper, we review the basic concepts of the metaheuristic Greedy Randomized Adaptive Search Procedure (GRASP) - its construction and local search phases - and propose a new method, a variant of GRASP called the Reactive Greedy Randomized Adaptive Search Procedure (Reactive GRASP), to detect significant biclusters from large microarray datasets. The method has two major steps. First, high-quality bicluster seeds are generated by means of k-means clustering. In the second step, these seeds are grown using Reactive GRASP, in which the basic parameter that defines the restrictiveness of the candidate list is self-adjusted, depending on the quality of the solutions found previously. We performed statistical and biological validations of the biclusters obtained and evaluated the method against the results of basic GRASP as well as the classic work of Cheng and Church. The experimental results indicate that the Reactive GRASP approach outperforms both the basic GRASP algorithm and the Cheng and Church approach. The Reactive GRASP approach for the detection of significant biclusters is robust and does not require calibration efforts.
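The mean squared residue score mentioned above has a closed form in Cheng and Church's formulation; a minimal sketch of computing it for a candidate bicluster (illustrative, not the paper's code):

```python
def mean_squared_residue(matrix, rows, cols):
    """Cheng & Church mean squared residue H(I, J) of a bicluster:
    H = (1/(|I||J|)) * sum over (i,j) of (a_ij - a_iJ - a_Ij + a_IJ)^2,
    where a_iJ, a_Ij, a_IJ are row, column, and overall means."""
    sub = [[matrix[i][j] for j in cols] for i in rows]
    n_r, n_c = len(rows), len(cols)
    row_mean = [sum(r) / n_c for r in sub]
    col_mean = [sum(sub[i][j] for i in range(n_r)) / n_r for j in range(n_c)]
    all_mean = sum(map(sum, sub)) / (n_r * n_c)
    return sum((sub[i][j] - row_mean[i] - col_mean[j] + all_mean) ** 2
               for i in range(n_r) for j in range(n_c)) / (n_r * n_c)

# A purely additive pattern (row effect + column effect) is a perfect
# bicluster: its residue is zero up to floating-point error.
m = [[1, 2, 3],
     [2, 3, 4],
     [5, 6, 7]]
assert mean_squared_residue(m, rows=[0, 1, 2], cols=[0, 1, 2]) < 1e-12
```

Search procedures such as GRASP then look for large row/column subsets whose score stays below a threshold.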
Kim, Junho; Maeng, Ju Heon; Lim, Jae Seok; Son, Hyeonju; Lee, Junehawk; Lee, Jeong Ho; Kim, Sangwoo
2016-10-15
Advances in sequencing technologies have remarkably lowered the detection limit of somatic variants to low frequencies. However, calling mutations in this range is still confounded by many factors, including environmental contamination. Vector contamination is a continuously occurring issue and is especially problematic since vector inserts are hardly distinguishable from the sample sequences. Such inserts, which may harbor polymorphisms and engineered functional mutations, can result in calling false variants at the corresponding sites. Numerous vector-screening methods have been developed, but none could handle contamination from inserts because they focus on vector backbone sequences alone. We developed a novel method, Vecuum, that identifies vector-originated reads and the resultant false variants. Since vector inserts are generally constructed from intron-less cDNAs, Vecuum identifies vector-originated reads by inspecting the clipping patterns at exon junctions. False variant calls are further detected based on the biased distribution of mutant alleles to vector-originated reads. Tests on simulated and spike-in experimental data validated that Vecuum could detect 93% of vector contaminants and could remove up to 87% of variant-like false calls with 100% precision. Application to public sequence datasets demonstrated the utility of Vecuum in detecting false variants resulting from various types of external contamination. A Java-based implementation of the method is available at http://vecuum.sourceforge.net/. Contact: swkim@yuhs.ac. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Grow, Laura L; Kodak, Tiffany; Carr, James E
2014-01-01
Previous research has demonstrated that the conditional-only method (starting with a multiple-stimulus array) is more efficient than the simple-conditional method (progressive incorporation of more stimuli into the array) for teaching receptive labeling to children with autism spectrum disorders (Grow, Carr, Kodak, Jostad, & Kisamore). The current study systematically replicated the earlier study by comparing the 2 approaches using progressive prompting with 2 boys with autism. The results showed that the conditional-only method was a more efficient and reliable teaching procedure than the simple-conditional method. The results further call into question the practice of teaching simple discriminations to facilitate acquisition of conditional discriminations. © Society for the Experimental Analysis of Behavior.
Numerical and Experimental Studies on Impact Loaded Concrete Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saarenheimo, Arja; Hakola, Ilkka; Karna, Tuomo
2006-07-01
An experimental set-up has been constructed for medium-scale impact tests. The main objective of this effort is to provide data for the calibration and verification of numerical models of a loading scenario in which an aircraft impacts a nuclear power plant. One goal is to develop and put into use numerical methods for predicting the response of reinforced concrete structures to impacts of deformable projectiles that may contain combustible liquid ('fuel'). Loading and structural behaviour, such as the collapse mechanism and the damage grade, are predicted both by simple analytical methods and by the non-linear FE method. In the so-called Riera method, the behavior of the missile material is assumed to be rigid plastic or rigid visco-plastic. Calculations using elastic plastic and elastic visco-plastic material models are carried out with the ABAQUS/Explicit finite element code, assuming an axisymmetric deformation mode for the missile. With both methods, the impact force time history, the velocity of the missile rear end, and the missile shortening during the impact were typically recorded for comparison. (authors)
NASA Technical Reports Server (NTRS)
Agrawal, Gagan; Sussman, Alan; Saltz, Joel
1993-01-01
Scientific and engineering applications often involve structured meshes. These meshes may be nested (for multigrid codes) and/or irregularly coupled (called multiblock or irregularly coupled regular mesh problems). A combined runtime and compile-time approach for parallelizing these applications on distributed memory parallel machines in an efficient and machine-independent fashion is described. A runtime library which can be used to port these applications to distributed memory machines was designed and implemented. The library is currently implemented on several different systems. To further ease the task of application programmers, methods were developed for integrating this runtime library with compilers for HPF-like parallel programming languages. How this runtime library was integrated with the Fortran 90D compiler being developed at Syracuse University is discussed. Experimental results demonstrating the efficacy of our approach are presented for a multiblock Navier-Stokes solver template and a multigrid code. Our experimental results show that our primitives have low runtime communication overheads. Further, the compiler-parallelized codes perform within 20 percent of the code parallelized by manually inserting calls to the runtime library.
Perception and Haptic Rendering of Friction Moments.
Kawasaki, H; Ohtuka, Y; Koide, S; Mouri, T
2011-01-01
This paper considers moments due to friction forces on the human fingertip. A computational technique called the friction moment arc method is presented. The method computes the static and/or dynamic friction moment independent of a friction force calculation. In addition, a new finger holder to display friction moment is presented. This device incorporates a small brushless motor and disk, and connects the human's finger to an interface finger of the five-fingered haptic interface robot HIRO II. Subjects' perception of friction moment while wearing the finger holder, as well as perceptions during object manipulation in a virtual reality environment, were evaluated experimentally.
Poor methodological detail precludes experimental repeatability and hampers synthesis in ecology.
Haddaway, Neal R; Verhoeven, Jos T A
2015-10-01
Despite the scientific method's central tenets of reproducibility (the ability to obtain similar results when repeated) and repeatability (the ability to replicate an experiment based on the methods described), published ecological research continues to fail to provide sufficient methodological detail to allow either repeatability or verification. Recent systematic reviews highlight the problem, with one example demonstrating that an average of 13% of studies per year (±8.0 [SD]) failed to report sample sizes. The problem affects the ability to verify the accuracy of any analysis, to repeat the methods used, and to assimilate the study findings into powerful and useful meta-analyses. The problem is common across the variety of ecological topics examined to date, and despite previous calls for improved reporting and metadata archiving, which could indirectly alleviate the problem, there is no indication of an improvement in reporting standards over time. Here, we call on authors, editors, and peer reviewers to consider repeatability as a top priority when evaluating research manuscripts, bearing in mind that legacy and integration into the evidence base can drastically improve the impact of individual research reports.
An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Chen, Xing; Yan, Gui-Ying; Hu, Ji-Pu
2016-10-01
Predicting protein-protein interactions (PPIs) is a challenging task and essential to constructing protein interaction networks, which is important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to predict PPIs, they have unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and Bi-gram Probabilities (BiGP) for PPI detection from protein sequences. The major improvements include: (1) protein sequences are represented using the Bi-gram Probabilities (BiGP) feature representation on a Position Specific Scoring Matrix (PSSM), in which the protein evolutionary information is contained; (2) to reduce the influence of noise, the Principal Component Analysis (PCA) method is used to reduce the dimension of the BiGP vector; (3) the powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five-fold cross-validation experiments executed on yeast and Helicobacter pylori datasets achieved very high accuracies of 94.57 and 90.57%, respectively. The experimental results are significantly better than those of previous methods. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than that on the balanced yeast dataset.
The promising experimental results show the efficiency and robustness of the proposed method, which can serve as an automatic decision support tool for future proteomics research. To facilitate extensive studies, we developed a freely available web server called RVM-BiGP-PPIs in Hypertext Preprocessor (PHP) for predicting PPIs. The web server, including source code and the datasets, is available at http://219.219.62.123:8888/BiGP/. © 2016 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
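The bi-gram PSSM representation described in step (1) can be sketched as follows; the logistic squashing of PSSM scores and the toy input are assumptions for illustration, not details taken from the paper:

```python
import math

def bigram_pssm_features(pssm):
    """Bi-gram feature vector from an L x 20 PSSM.
    Scores are first squashed to probabilities with a logistic function,
    then B[u][v] = sum_k p[k][u] * p[k+1][v] over consecutive positions,
    flattened into a 20*20 = 400-dimensional vector."""
    prob = [[1.0 / (1.0 + math.exp(-s)) for s in row] for row in pssm]
    n_aa = len(pssm[0])  # 20 amino-acid columns
    feats = []
    for u in range(n_aa):
        for v in range(n_aa):
            feats.append(sum(prob[k][u] * prob[k + 1][v]
                             for k in range(len(prob) - 1)))
    return feats

# Toy PSSM: 3 positions, all scores 0 -> every probability is 0.5,
# so each bi-gram feature is (L-1) * 0.25 = 0.5.
f = bigram_pssm_features([[0] * 20 for _ in range(3)])
print(len(f), f[0])  # → 400 0.5
```

The fixed 400-dimensional output is what makes variable-length sequences comparable before PCA and RVM classification.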
Thick-target transmission method for excitation functions of interaction cross sections
NASA Astrophysics Data System (ADS)
Aikawa, M.; Ebata, S.; Imai, S.
2016-09-01
We propose a method, called the thick-target transmission (T3) method, to obtain the excitation function of interaction cross sections. In an ordinary experiment measuring the excitation function of interaction cross sections by the transmission method, the beam energy must be changed for each cross section. In the T3 method, the excitation function is derived from the beam attenuation measured in targets of different thicknesses, without changing the beam energy. The advantages of the T3 method are its simplicity and its applicability to radioactive beams. To confirm this applicability, we perform a simulation for the 12C + 27Al system with the PHITS code instead of actual experiments. Our results have large uncertainties but reproduce the tendency of the experimental data well.
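The transmission relation underlying such attenuation measurements can be sketched as follows; the target constants and count values below are illustrative assumptions, not data from the paper:

```python
import math

AVOGADRO = 6.022e23
# Illustrative 27Al target: number density of nuclei per cm^3
N_TARGET = AVOGADRO * 2.70 / 26.98  # rho [g/cm^3] * N_A / molar mass [g/mol]

def interaction_cross_section(counts_thin, counts_thick, dx_cm, n=N_TARGET):
    """Cross section from the attenuation of transmitted beam counts between
    two target thicknesses differing by dx_cm, via N(x) = N0*exp(-n*sigma*x):
        sigma = ln(N_thin / N_thick) / (n * dx)
    Returned in barns (1 b = 1e-24 cm^2)."""
    sigma_cm2 = math.log(counts_thin / counts_thick) / (n * dx_cm)
    return sigma_cm2 / 1e-24

# Round trip: counts generated from a known 1.7 b cross section are recovered.
sigma_true = 1.7
n0 = 1.0e6
thin = n0 * math.exp(-N_TARGET * sigma_true * 1e-24 * 0.1)
thick = n0 * math.exp(-N_TARGET * sigma_true * 1e-24 * 0.6)
print(round(interaction_cross_section(thin, thick, 0.5), 3))  # → 1.7
```

Repeating this for successive thickness pairs, at the energies the beam reaches at those depths, is what assembles the excitation function without retuning the beam.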
The coupling technique: A two-wave acoustic method for the study of dislocation dynamics
NASA Astrophysics Data System (ADS)
Gremaud, G.; Bujard, M.; Benoit, W.
1987-03-01
Progress in the study of dislocation dynamics has been achieved using a two-wave acoustic method called the coupling technique. In this method, the attenuation α and the velocity v of ultrasonic waves are measured in a sample subjected simultaneously to a harmonic stress σ of low frequency. Closed curves Δα(σ) and Δv/v(σ) are drawn during each cycle of the applied stress. The shapes of these curves and their evolution are characteristic of each dislocation motion mechanism activated by the low-frequency applied stress. For this reason, the closed curves Δα(σ) and Δv/v(σ) can be considered signatures of the interaction mechanism that controls the low-frequency dislocation motion. In this paper, the concept of the signature is presented and explained with experimental examples. It is also shown that theoretical models can be developed which explain the experimental results very well.
Interaction model between capsule robot and intestine based on nonlinear viscoelasticity.
Zhang, Cheng; Liu, Hao; Tan, Renjia; Li, Hongyi
2014-03-01
The active capsule endoscope, which could also be called a capsule robot, has been developed from laboratory research to clinical application. However, the system still has defects, such as poor controllability and the failure to realize automatic checks. The imperfection of the interaction model between the capsule robot and the intestine is one of the dominant reasons for these problems. This article aims to establish a model to support the control method of the capsule robot. The model is based on nonlinear viscoelasticity, and its interaction force consists of environmental resistance, viscous resistance, and Coulomb friction. The parameters of the model are identified by experimental investigation; different methods are used in the experiments to obtain values of the same parameter at different velocities. The model is shown to be valid by experimental verification. It is hoped that the model can optimize the control method of the capsule robot in the future.
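The three-term force structure described above can be sketched as follows; all parameter names and values are hypothetical placeholders, not the identified values from the paper:

```python
from math import copysign

def intestine_resistance(v, f_env=0.15, c=0.8, mu=0.2, normal_force=0.5):
    """Sketch of a three-term interaction force:
    total resistance = environmental resistance + viscous resistance
                       + Coulomb friction.
    All values are illustrative placeholders.
      v             capsule velocity [m/s]
      f_env         velocity-independent environmental resistance [N]
      c             viscous coefficient [N*s/m]
      mu, normal_force   Coulomb friction coefficient and normal load [N]"""
    if v == 0.0:
        return f_env  # no velocity-dependent terms at rest
    return f_env + c * v + mu * normal_force * copysign(1.0, v)

print(round(intestine_resistance(0.01), 4))  # → 0.258
```

A controller can invert such a model to estimate the drive force needed for a target velocity, which is the practical use the abstract points toward.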
Efficient experimental design of high-fidelity three-qubit quantum gates via genetic programming
NASA Astrophysics Data System (ADS)
Devra, Amit; Prabhu, Prithviraj; Singh, Harpreet; Arvind; Dorai, Kavita
2018-03-01
We have designed efficient quantum circuits for the three-qubit Toffoli (controlled-controlled-NOT) and the Fredkin (controlled-SWAP) gate, optimized via genetic programming methods. The gates thus obtained were experimentally implemented on a three-qubit NMR quantum information processor, with a high fidelity. Toffoli and Fredkin gates in conjunction with the single-qubit Hadamard gates form a universal gate set for quantum computing and are an essential component of several quantum algorithms. Genetic algorithms are stochastic search algorithms based on the logic of natural selection and biological genetics and have been widely used for quantum information processing applications. We devised a new selection mechanism within the genetic algorithm framework to select individuals from a population. We call this mechanism the "Luck-Choose" mechanism and were able to achieve faster convergence to a solution using this mechanism, as compared to existing selection mechanisms. The optimization was performed under the constraint that the experimentally implemented pulses are of short duration and can be implemented with high fidelity. We demonstrate the advantage of our pulse sequences by comparing our results with existing experimental schemes and other numerical optimization methods.
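The paper's "Luck-Choose" mechanism is not reproduced here; as a baseline, this is a minimal sketch of fitness-proportionate (roulette-wheel) selection, one of the standard genetic-algorithm selection mechanisms such schemes are compared against:

```python
import random

def roulette_select(population, fitness, rng=random):
    """Fitness-proportionate (roulette-wheel) selection: each individual is
    picked with probability proportional to its fitness."""
    total = sum(fitness)
    pick = rng.uniform(0.0, total)
    acc = 0.0
    for individual, f in zip(population, fitness):
        acc += f
        if pick <= acc:
            return individual
    return population[-1]  # guard against floating-point round-off

rng = random.Random(0)
pop, fit = ["a", "b", "c"], [0.1, 0.1, 9.8]
# With fitness 9.8 vs 0.1, "c" should be selected ~98% of the time.
picks = [roulette_select(pop, fit, rng) for _ in range(1000)]
print(picks.count("c") > 900)  # → True
```

Alternative selection schemes trade off such strong selection pressure against population diversity, which is the axis on which new mechanisms are typically evaluated.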
NASA Astrophysics Data System (ADS)
Sciazko, Anna; Komatsu, Yosuke; Brus, Grzegorz; Kimijima, Shinji; Szmyd, Janusz S.
2014-09-01
For a mathematical model based on the results of physical measurements, it becomes possible to determine the influence of those measurements on the final solution and its accuracy. However, in classical approaches, the influence of different model simplifications on the reliability of the obtained results is usually not comprehensively discussed. This paper presents a novel approach to the study of methane/steam reforming kinetics based on an advanced methodology called the Orthogonal Least Squares method. The kinetics of the reforming process published earlier are divergent among themselves. To obtain the most probable values of the kinetic parameters and enable direct and objective model verification, an appropriate calculation procedure needs to be proposed. The applied Generalized Least Squares (GLS) method includes all the experimental results in the mathematical model, which becomes internally contradictory, as the number of equations is greater than the number of unknown variables. The GLS method is adopted to select the most probable values of the results and simultaneously determine the uncertainty coupled with all the variables in the system. In this paper, the reaction rate was evaluated after a pre-determination made by preliminary calculation based on experimental results obtained over a nickel/yttria-stabilized zirconia catalyst.
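The weighted least-squares estimator at the core of such a procedure can be sketched as follows; this is a minimal diagonal-covariance version for illustration, not the paper's full GLS treatment:

```python
import numpy as np

def generalized_least_squares(X, y, sigma):
    """Weighted/generalized least squares with a diagonal covariance:
        beta = (X^T W X)^(-1) X^T W y,   W = diag(1 / sigma_i^2).
    Also returns the parameter covariance (X^T W X)^(-1), which propagates
    the per-measurement uncertainties into the fitted parameters."""
    W = np.diag(1.0 / np.asarray(sigma) ** 2)
    cov = np.linalg.inv(X.T @ W @ X)
    beta = cov @ X.T @ W @ y
    return beta, cov

# Fit y = a + b*x from unequally reliable measurements.
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = np.array([1.0, 3.0, 5.0, 7.0])      # exactly y = 1 + 2x
sigma = np.array([0.1, 0.2, 0.1, 0.3])  # per-point uncertainties
beta, cov = generalized_least_squares(X, y, sigma)
print(np.round(beta, 6))  # → [1. 2.]
```

The overdetermined system (more equations than unknowns) is exactly the "internally contradicted" situation the abstract describes; the weights decide how the contradiction is resolved.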
AMOBH: Adaptive Multiobjective Black Hole Algorithm.
Wu, Chong; Wu, Tao; Fu, Kaiyuan; Zhu, Yuan; Li, Yongbo; He, Wangyong; Tang, Shengwen
2017-01-01
This paper proposes a new multiobjective evolutionary algorithm based on the black hole algorithm with a new individual density assessment (cell density), called the "adaptive multiobjective black hole algorithm" (AMOBH). Cell density has low computational complexity and maintains a good balance between convergence and diversity of the Pareto front. The framework of AMOBH can be divided into three steps. First, the Pareto front is mapped to a new objective space called the parallel cell coordinate system. Then, to adjust the evolutionary strategies adaptively, Shannon entropy is employed to estimate the evolution status. Finally, cell density is combined with a dominance strength assessment called cell dominance to evaluate the fitness of solutions. Compared with the state-of-the-art methods SPEA-II, PESA-II, NSGA-II, and MOEA/D, experimental results show that AMOBH performs well in terms of convergence rate, population diversity, population convergence, and subpopulation obtention of different Pareto regions, and has comparable time complexity to the latter in most cases.
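The Shannon entropy used to estimate the evolution status can be sketched as follows; treating it as the entropy of cell occupancy counts is an assumption for illustration:

```python
from math import log2

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a discrete distribution given by
    occupancy counts, e.g. how many Pareto-front members fall in each cell
    of a parallel cell coordinate system; higher entropy means the front
    is spread more evenly across cells."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * log2(p) for p in probs)

# A front spread evenly over 4 cells is maximally diverse: entropy = 2 bits.
print(shannon_entropy([5, 5, 5, 5]))  # → 2.0
```

An adaptive algorithm can switch strategy when this entropy stagnates or drops, signaling loss of diversity.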
Jo, Sunhwan; Bahar, Ivet; Roux, Benoît
2014-01-01
Biomolecular conformational transitions are essential to biological functions. Most experimental methods report on the long-lived functional states of biomolecules, but information about the transition pathways between these stable states is generally scarce. Such transitions involve short-lived conformational states that are difficult to detect experimentally. For this reason, computational methods are needed to produce plausible hypothetical transition pathways that can then be probed experimentally. Here we propose a simple and computationally efficient method, called ANMPathway, for constructing a physically reasonable pathway between two endpoints of a conformational transition. We adopt a coarse-grained representation of the protein and construct a two-state potential by combining two elastic network models (ENMs) representative of the experimental structures resolved for the endpoints. The two-state potential has a cusp hypersurface in the configuration space where the energies from both the ENMs are equal. We first search for the minimum energy structure on the cusp hypersurface and then treat it as the transition state. The continuous pathway is subsequently constructed by following the steepest descent energy minimization trajectories starting from the transition state on each side of the cusp hypersurface. Application to several systems of broad biological interest such as adenylate kinase, ATP-driven calcium pump SERCA, leucine transporter and glutamate transporter shows that ANMPathway yields results in good agreement with those from other similar methods and with data obtained from all-atom molecular dynamics simulations, in support of the utility of this simple and efficient approach. Notably the method provides experimentally testable predictions, including the formation of non-native contacts during the transition which we were able to detect in two of the systems we studied. An open-access web server has been created to deliver ANMPathway results. 
PMID:24699246
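The two-state construction described above can be illustrated with a toy model. The sketch below is not ANMPathway itself: it replaces the coarse-grained elastic network energies with two 2-D harmonic basins around hypothetical endpoints `a` and `b`, locates the minimum-energy point on the cusp where the two energies are equal, and follows steepest descent on each side — the same three-step logic on a deliberately simple potential.

```python
import numpy as np

# Toy two-state potential in 2D: each "ENM" is a harmonic basin
# centred on one endpoint structure (a stand-in for the real
# elastic network energies).
a = np.array([0.0, 0.0])   # endpoint structure 1
b = np.array([3.0, 1.0])   # endpoint structure 2

def E1(x): return 0.5 * 1.0 * np.sum((x - a) ** 2)
def E2(x): return 0.5 * 2.0 * np.sum((x - b) ** 2) + 0.5

# Locate the minimum-energy point on the cusp E1 == E2 by grid search.
xs, ys = np.meshgrid(np.linspace(-1, 4, 400), np.linspace(-1, 3, 400))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
e1 = 0.5 * np.sum((pts - a) ** 2, axis=1)
e2 = np.sum((pts - b) ** 2, axis=1) + 0.5
on_cusp = np.abs(e1 - e2) < 0.05              # near the cusp hypersurface
ts = pts[on_cusp][np.argmin(e1[on_cusp])]     # approximate transition state

def descend(x0, grad, steps=2000, lr=0.01):
    """Steepest-descent minimisation starting from the transition state."""
    x = x0.copy()
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Descending on each basin's gradient yields the two halves of the pathway.
end1 = descend(ts, lambda x: (x - a))
end2 = descend(ts, lambda x: 2.0 * (x - b))
print(ts, end1, end2)
```

The two descent trajectories terminate at the endpoint structures, so the concatenated path connects the endpoints through the cusp minimum.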
Principles of Experimental Design for Big Data Analysis.
Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G
2017-08-01
Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis.
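The idea of retrospective designed sampling can be sketched with a deliberately simple example: greedily choosing rows of an existing design matrix so as to maximise det(XᵀX), the classical D-optimality criterion for a linear model. The paper does not prescribe this particular algorithm; the greedy rank-1 update below is just one standard way to realise the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "big" dataset: n candidate observations, p covariates.
n, p = 5000, 4
X = rng.normal(size=(n, p))

def d_optimal_greedy(X, k):
    """Greedily select k rows maximising det(X_S^T X_S) (D-optimality).
    Uses det(M + x x^T) = det(M) * (1 + x^T M^-1 x), so each step
    picks the row with the largest leverage-like score."""
    p = X.shape[1]
    M = 1e-6 * np.eye(p)          # small ridge so M is invertible at the start
    chosen = []
    available = np.ones(len(X), dtype=bool)
    for _ in range(k):
        Minv = np.linalg.inv(M)
        scores = np.einsum('ij,jk,ik->i', X, Minv, X)   # x_i^T M^-1 x_i
        scores[~available] = -np.inf
        i = int(np.argmax(scores))
        chosen.append(i)
        available[i] = False
        M = M + np.outer(X[i], X[i])
    return chosen

k = 50
idx_opt = d_optimal_greedy(X, k)
idx_rand = rng.choice(n, size=k, replace=False)

logdet = lambda S: np.linalg.slogdet(X[S].T @ X[S])[1]
print(logdet(idx_opt), logdet(np.array(idx_rand)))
```

The designed subsample concentrates on high-leverage observations, so its information matrix has a much larger determinant than a random subsample of the same size.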
Structure, Elastic Constants and XRD Spectra of Extended Solids under High Pressure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batyrev, I. G.; Coleman, S. P.; Ciezak-Jenkins, J. A.
We present results of evolutionary simulations based on density functional calculations of a potentially new type of energetic material called extended solids: P-N and N-H. High-density structures with covalent bonds generated using variable and fixed concentration methods were analysed in terms of thermodynamic stability and agreement with experimental X-ray diffraction (XRD) spectra. X-ray diffraction spectra were calculated using a virtual diffraction algorithm that computes kinematic diffraction intensity in three-dimensional reciprocal space before being reduced to a two-theta line profile. Calculated XRD patterns were used to search for the structure of extended solids present at experimental pressures by optimizing data according to experimental XRD peak position, peak intensity and theoretically calculated enthalpy. Elastic constants have been calculated for thermodynamically stable structures of the P-N system.
Beamspace fast fully adaptive brain source localization for limited data sequences
NASA Astrophysics Data System (ADS)
Ravan, Maryam
2017-05-01
In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization methods that rely on estimating second-order statistics often fail when the observations are taken over a short time interval, especially when the number of electrodes is large. To address this issue, in a previous study, we developed a multistage adaptive processing scheme called the fast fully adaptive (FFA) approach, which can significantly reduce the required sample support while still processing all available degrees of freedom (DOFs). This approach processes the observed data in stages through a decimation procedure. In this study, we introduce a new form of the FFA approach called beamspace FFA. We first divide the brain into smaller regions and transform the measured data from the source space to the beamspace in each region. The FFA approach is then applied to the beamspaced data of each region. The goal of this modification is to reduce the sensitivity to correlation between sources in different brain regions. To demonstrate the performance of the beamspace FFA approach in the limited-data scenario, simulation results with multiple deep and cortical sources, as well as experimental results, are compared with the regular FFA and the widely used FINE approaches. Both simulation and experimental results demonstrate that the beamspace FFA method can localize different types of multiple correlated brain sources more accurately at low signal-to-noise ratios with limited data.
Visual texture perception via graph-based semi-supervised learning
NASA Astrophysics Data System (ADS)
Zhang, Qin; Dong, Junyu; Zhong, Guoqiang
2018-04-01
Perceptual features, for example direction, contrast and repetitiveness, are important visual factors for humans perceiving a texture. However, quantifying the scales of these perceptual features requires psychophysical experiments, which demand a large amount of human labor and time. This paper focuses on obtaining the perceptual scales of textures from a small number of textures whose perceptual scales were assigned through a rating psychophysical experiment (what we call labeled textures) together with a large set of unlabeled textures. This is a scenario for which semi-supervised learning is naturally suited. It is meaningful for texture perception research, and really helpful for expanding perceptual texture databases. A graph-based semi-supervised learning method called random multi-graphs, RMG for short, is proposed to deal with this task. We evaluate different kinds of features, including LBP, Gabor, and a kind of unsupervised deep feature extracted by a PCA-based deep network. The experimental results show that our method achieves satisfactory results no matter what kind of texture features are used.
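RMG itself is not reproduced here, but the generic graph-based semi-supervised step that methods of this family build on — label spreading over an affinity graph — can be sketched as follows, with made-up two-dimensional "texture features" standing in for real descriptors such as LBP or Gabor.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "texture features": two clusters standing in for two perceptual
# scale levels (e.g. low vs high directionality).
X = np.vstack([rng.normal(0, 0.5, (30, 2)),
               rng.normal(3, 0.5, (30, 2))])
y_true = np.array([0] * 30 + [1] * 30)

# Only a handful of textures carry a psychophysically rated label.
labeled = [0, 1, 30, 31]
Y = np.zeros((60, 2))
for i in labeled:
    Y[i, y_true[i]] = 1.0

# Affinity graph with an RBF kernel, zero diagonal.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * 1.0 ** 2))
np.fill_diagonal(W, 0.0)

# Symmetric normalisation S = D^{-1/2} W D^{-1/2}, then iterate
# F <- alpha * S @ F + (1 - alpha) * Y  (label spreading).
Dm12 = np.diag(1.0 / np.sqrt(W.sum(1)))
S = Dm12 @ W @ Dm12
alpha, F = 0.9, Y.copy()
for _ in range(200):
    F = alpha * S @ F + (1 - alpha) * Y
pred = F.argmax(1)
print((pred == y_true).mean())
```

With four rated textures, labels propagate through the feature-space graph and the unlabeled textures inherit the scale level of their cluster.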
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
2018-02-01
Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was introduced to address the shortcomings of DAS, providing higher image quality, but its resolution improvement still falls short of eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer is combined with the DMAS algebra, a method called EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is evaluated experimentally, and the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method reduces the sidelobes by about 365%, 221% and 40% compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.
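The baseline DAS and DMAS beamformers mentioned above are simple enough to sketch (the EIBMV combination is not reproduced here). The synthetic example below uses integer-sample channel delays and an assumed 40 MHz sampling rate in place of real array geometry.

```python
import numpy as np

fs = 40e6                         # sampling rate (assumed), 40 MHz
t = np.arange(400) / fs
# Gaussian-windowed 5 MHz pulse standing in for a photoacoustic echo.
pulse = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 2e-6) ** 2) / (2 * (0.2e-6) ** 2))

# 8-element array: each channel sees the pulse with a known delay
# (integer-sample delays standing in for geometric delays).
n_elem = 8
delays = np.arange(n_elem) * 3
rf = np.stack([np.roll(pulse, d) for d in delays])

# Delay the channels back into alignment.
aligned = np.stack([np.roll(rf[i], -delays[i]) for i in range(n_elem)])

# DAS: plain sum across channels.
das = aligned.sum(axis=0)

# DMAS: signed square root of each channel, then the sum of all
# pairwise products (combinatorial multiply-and-sum), which keeps
# the result in the original signal dimensionality.
s = np.sign(aligned) * np.sqrt(np.abs(aligned))
dmas = np.zeros_like(das)
for i in range(n_elem):
    for j in range(i + 1, n_elem):
        dmas += s[i] * s[j]

print(das.max(), dmas.max())
```

The pairwise products in DMAS act like a spatial autocorrelation of the aperture, which is what suppresses the sidelobes relative to the plain sum.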
Electron–vibration coupling induced renormalization in the photoemission spectrum of diamondoids
Gali, Adam; Demján, Tamás; Vörös, Márton; ...
2016-04-22
The development of theories and methods devoted to the accurate calculation of the electronic quasi-particle states and levels of molecules, clusters and solids is of prime importance for interpreting experimental data. These quantum systems are often modelled using the Born–Oppenheimer approximation, in which the coupling between the electrons and vibrational modes is not fully taken into account and the electrons are treated as pure quasi-particles. Here, we show that in small diamond cages, called diamondoids, the electron–vibration coupling leads to the breakdown of the electron quasi-particle picture. More importantly, we demonstrate that the strong electron–vibration coupling is essential to properly describe the overall lineshape of the experimental photoemission spectrum. This cannot be obtained by methods within the Born–Oppenheimer approximation. Furthermore, we deduce a link between the vibronic states found by our many-body perturbation theory approach and the well-known Jahn–Teller effect.
Tissue-Informative Mechanism for Wearable Non-invasive Continuous Blood Pressure Monitoring
NASA Astrophysics Data System (ADS)
Woo, Sung Hun; Choi, Yun Young; Kim, Dae Jung; Bien, Franklin; Kim, Jae Joon
2014-10-01
Accurate continuous direct measurement of blood pressure is currently available only through invasive methods via intravascular needles, and is mostly limited to use during surgical procedures or in the intensive care unit (ICU). Non-invasive methods, mostly based on auscultation or cuff oscillometric principles, do provide relatively accurate measurements of blood pressure. However, they mostly involve physical inconveniences such as pressure or stress on the human body. Here, we introduce a new non-invasive mechanism of tissue-informative measurement, in which an experimental phenomenon called subcutaneous tissue pressure equilibrium is revealed and applied to the detection of absolute blood pressure. A prototype was experimentally verified to provide absolute blood pressure measurements from a watch-type measurement module that does not cause any discomfort. This work should contribute substantially to the advancement of continuous non-invasive mobile devices for 24-7 daily-life ambulatory blood-pressure monitoring.
Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian
2014-01-01
A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines achieve high torque density by introducing flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristics of PMV machines and provides a design method that not only meets the fault-tolerant requirements but also retains high torque density. The operation principle of the proposed machine is analyzed. The design process and optimization are presented in detail, covering the combination of slots and poles, the winding distribution, and the dimensions of the PMs and teeth. The machine's performance is evaluated using the time-stepping finite element method (TS-FEM). Finally, the FT-PMV machine is manufactured, and experimental results are presented to validate the theoretical analysis.
Browning, Brian L.; Yu, Zhaoxia
2009-01-01
We present a novel method for simultaneous genotype calling and haplotype-phase inference. Our method employs the computationally efficient BEAGLE haplotype-frequency model, which can be applied to large-scale studies with millions of markers and thousands of samples. We compare genotype calls made with our method to genotype calls made with the BIRDSEED, CHIAMO, GenCall, and ILLUMINUS genotype-calling methods, using genotype data from the Illumina 550K and Affymetrix 500K arrays. We show that our method has higher genotype-call accuracy and yields fewer uncalled genotypes than competing methods. We perform single-marker analysis of data from the Wellcome Trust Case Control Consortium bipolar disorder and type 2 diabetes studies. For bipolar disorder, the genotype calls in the original study yield 25 markers with apparent false-positive association with bipolar disorder at a p < 10−7 significance level, whereas genotype calls made with our method yield no associated markers at this significance threshold. Conversely, for markers with replicated association with type 2 diabetes, there is good concordance between genotype calls used in the original study and calls made by our method. Results from single-marker and haplotypic analysis of our method's genotype calls for the bipolar disorder study indicate that our method is highly effective at eliminating genotyping artifacts that cause false-positive associations in genome-wide association studies. Our new genotype-calling methods are implemented in the BEAGLE and BEAGLECALL software packages. PMID:19931040
HMMBinder: DNA-Binding Protein Prediction Using HMM Profile Based Features.
Zaman, Rianon; Chowdhury, Shahana Yasmin; Rashid, Mahmood A; Sharma, Alok; Dehzangi, Abdollah; Shatabda, Swakkhar
2017-01-01
DNA-binding proteins often play important roles in various processes within the cell. Over the last decade, a wide range of classification algorithms and feature extraction techniques have been used to solve this problem. In this paper, we propose a novel DNA-binding protein prediction method called HMMBinder. HMMBinder uses monogram and bigram features extracted from the HMM profiles of the protein sequences. To the best of our knowledge, this is the first application of HMM profile based features to the DNA-binding protein prediction problem. We applied Support Vector Machines (SVM) as the classification technique in HMMBinder. Our method was tested on standard benchmark datasets. We experimentally show that our method outperforms the state-of-the-art methods found in the literature.
Development of the mathematical model for design and verification of acoustic modal analysis methods
NASA Astrophysics Data System (ADS)
Siner, Alexander; Startseva, Maria
2016-10-01
To reduce turbofan noise it is necessary to develop methods, called modal analysis, for analysing the sound field generated by the blade machinery. Because modal analysis methods are complex, and testing them against full-scale measurements is expensive and tedious, mathematical models are needed that allow modal analysis algorithms to be tested quickly and cheaply. In this work we present a model that allows single modes to be set in the channel and the generated sound field to be analyzed. Modal analysis of the sound generated by a ring array of point sound sources is performed, and numerical modal analysis results are compared with experimental ones.
Off-lexicon online Arabic handwriting recognition using neural network
NASA Astrophysics Data System (ADS)
Yahia, Hamdi; Chaabouni, Aymen; Boubaker, Houcine; Alimi, Adel M.
2017-03-01
This paper presents a new method for online Arabic handwriting recognition based on grapheme segmentation. The main contribution of our work is to explore the utility of the Beta-elliptic model in segmentation and feature extraction for online handwriting recognition. Our method decomposes the input signal into continuous parts called graphemes based on the Beta-elliptic model, and classifies them according to their position in the pseudo-word. The segmented graphemes are then described by a combination of geometric features and trajectory shape modeling. The efficiency of the considered features has been evaluated using a feed-forward neural network classifier. Experimental results on the benchmark ADAB database show the performance of the proposed method.
Singer product apertures-A coded aperture system with a fast decoding algorithm
NASA Astrophysics Data System (ADS)
Byard, Kevin; Shutler, Paul M. E.
2017-06-01
A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from the Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding being significantly faster than induction decoding. For apertures of the same dimensions the increase in speed offered by direct vector decoding over induction decoding is better for lower throughput apertures.
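The direct vector decoding algorithm itself is not reproduced here, but the delta-correlation property that makes any cyclic difference-set aperture decodable can be sketched with the smallest Singer set, {1, 2, 4} mod 7: correlating the shadowgram with a balanced version of the aperture recovers the object exactly.

```python
import numpy as np

# (7, 3, 1) cyclic (Singer) difference set: {1, 2, 4} mod 7.
v, k, lam = 7, 3, 1
a = np.zeros(v)
a[[1, 2, 4]] = 1.0

# Balanced decoding array: g = a - lam/k gives a delta-function
# cyclic cross-correlation with a: (k - lam) at lag 0, zero elsewhere.
g = a - lam / k

def ccorr(x, y):
    """Cyclic cross-correlation c[s] = sum_i x[i] * y[(i + s) % v]."""
    return np.array([np.dot(x, np.roll(y, -s)) for s in range(len(x))])

c = ccorr(a, g)   # check the delta property: ~[k - lam, 0, 0, ...]

# Imaging model (1D sketch): the shadowgram is the cyclic correlation
# of the object with the aperture; decoding correlates with g and rescales.
obj = np.array([0.0, 0.0, 5.0, 0.0, 1.0, 0.0, 0.0])
shadow = ccorr(obj, a)
recon = ccorr(shadow, g) / (k - lam)
print(recon)
```

The fast methods in the paper exploit the algebraic structure of the Singer sets to avoid evaluating this correlation naively; the sketch only demonstrates why perfect reconstruction is possible at all.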
Applying the Multiple Signal Classification Method to Silent Object Detection Using Ambient Noise
NASA Astrophysics Data System (ADS)
Mori, Kazuyoshi; Yokoyama, Tomoki; Hasegawa, Akio; Matsuda, Minoru
2004-05-01
The revolutionary concept of using ocean ambient noise positively to detect objects, called acoustic daylight imaging, has attracted much attention. The authors attempted the detection of a silent target object using ambient noise and a wide-band beam former consisting of an array of receivers. In experimental results obtained in air, using the wide-band beam former, we successfully applied the delay-sum array (DSA) method to detect a silent target object in an acoustic noise field generated by a large number of transducers. This paper reports some experimental results obtained by applying the multiple signal classification (MUSIC) method to a wide-band beam former to detect silent targets. The ocean ambient noise was simulated by transducers decentralized to many points in air. Both MUSIC and DSA detected a spherical target object in the noise field. The relative power levels near the target obtained with MUSIC were compared with those obtained by DSA. Then the effectiveness of the MUSIC method was evaluated according to the rate of increase in the maximum and minimum relative power levels.
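A minimal narrowband MUSIC example, with a simulated uniform linear array rather than the paper's wide-band beam former and ambient-noise field, looks like this:

```python
import numpy as np

rng = np.random.default_rng(2)

# Uniform linear array, half-wavelength spacing, narrowband model.
M, d = 10, 0.5                      # elements, spacing in wavelengths
theta_true = 20.0                   # source direction (degrees)

def steering(theta_deg):
    k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(M))

# Simulate snapshots: one source plus white noise.
T = 200
s = rng.normal(size=T) + 1j * rng.normal(size=T)
noise = 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
X = np.outer(steering(theta_true), s) + noise

# Sample covariance and its eigendecomposition.
R = X @ X.conj().T / T
w, V = np.linalg.eigh(R)            # eigenvalues in ascending order
En = V[:, :-1]                      # noise subspace (all but the largest)

# MUSIC pseudospectrum: peaks where the steering vector is
# orthogonal to the noise subspace.
grid = np.linspace(-90, 90, 721)
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(th)) ** 2
              for th in grid])
theta_hat = grid[np.argmax(P)]
print(theta_hat)
```

The pseudospectrum peak recovers the source direction; extending this to wide-band signals, as in the paper, requires focusing or subband processing on top of the same subspace idea.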
Lin, Hongli; Yang, Xuedong; Wang, Weisheng
2014-08-01
Devising a method that can select cases based on the performance levels of trainees and the characteristics of cases is essential for developing a personalized training program in radiology education. In this paper, we propose a novel hybrid prediction algorithm called content-boosted collaborative filtering (CBCF) to predict the difficulty level of each case for each trainee. The CBCF utilizes a content-based filtering (CBF) method to enhance the existing trainee-case rating data and then provides final predictions through a collaborative filtering (CF) algorithm. The CBCF algorithm incorporates the advantages of both CBF and CF without inheriting the disadvantages of either. The CBCF method is compared with the pure CBF and pure CF approaches using three datasets, and the experimental results are evaluated in terms of the MAE metric. They show that the CBCF outperforms the pure CBF and CF methods by 13.33% and 12.17%, respectively, in prediction precision. This also suggests that the CBCF can be used in the development of personalized training systems in radiology education.
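The paper's exact CBCF formulation is not reproduced here, but the two-stage idea — content-based filling of the sparse rating matrix, then collaborative filtering on the densified matrix — can be sketched with made-up trainee-case ratings and case features:

```python
import numpy as np

# Trainee-by-case difficulty ratings (np.nan = unrated); rows are
# trainees, columns are teaching cases. All values are illustrative.
R = np.array([[3.0, np.nan, 4.0, np.nan],
              [2.0, 1.0, np.nan, 5.0],
              [np.nan, 2.0, 4.0, 4.0]])
# Content features per case (e.g. modality, lesion subtlety), made up
# for this sketch.
F = np.array([[1.0, 0.0], [1.0, 0.2], [0.0, 1.0], [0.1, 1.0]])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

# Step 1 (CBF): estimate each missing rating from the same trainee's
# ratings of content-similar cases.
V = R.copy()
for u in range(R.shape[0]):
    rated = np.where(~np.isnan(R[u]))[0]
    for i in np.where(np.isnan(R[u]))[0]:
        w = np.array([cosine(F[i], F[j]) for j in rated])
        V[u, i] = w @ R[u, rated] / w.sum()

# Step 2 (CF): user-based prediction on the densified matrix.
def predict(u, i):
    sims = np.array([cosine(V[u], V[v]) if v != u else 0.0
                     for v in range(V.shape[0])])
    return sims @ V[:, i] / sims.sum()

print(V)
print(predict(0, 1))
```

The densified matrix lets the CF step find trainee-trainee similarities even when two trainees rated disjoint sets of cases, which is the usual failure mode of pure CF on sparse data.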
NASA Astrophysics Data System (ADS)
Calderer, Antoni; Neal, Douglas; Prevost, Richard; Mayrhofer, Arno; Lawrenz, Alan; Foss, John; Sotiropoulos, Fotis
2015-11-01
Secondary flows in a rotating flow in a cylinder, resulting in the so-called ``tea leaf paradox'', are fundamental for understanding atmospheric pressure systems, developing techniques for separating red blood cells from plasma, and even separating coagulated trub in the beer brewing process. We seek to gain deeper insights into this phenomenon by integrating numerical simulations and experiments. We employ the curvilinear immersed boundary (CURVIB) method of Calderer et al. (J. Comp. Physics 2014), a two-phase flow solver based on the level set method, to simulate rotating free-surface flow in a cylinder partially filled with water, as in the tea leaf paradox flow. We first demonstrate the validity of the numerical model by simulating a cylinder with a rotating base filled with a single fluid, obtaining results in excellent agreement with available experimental data. We then present results for the cylinder case with a free surface, investigate the complex formation of secondary flow patterns, and show comparisons with new experimental data for this flow obtained by LaVision. Computational resources were provided by the Minnesota Supercomputing Institute.
Ji, Chengdong; Guo, Xuan; Li, Zhen; Qian, Shuwen; Zheng, Feng; Qin, Haiqing
2013-01-01
Many studies of colorectal anastomotic leakage have been conducted to reduce the incidence of anastomotic leakage. However, how to precisely determine the pressure that an anastomosed bowel can withstand, called the anastomotic bursting pressure, has not been established. A task force developed the experimental animal hollow organ mechanical testing system to provide precise measurement of the maximum pressure that an anastomosed colon can withstand, and to compare it with commonly used methods, such as the mercury and air bag pressure manometers, in a rat colon rupture pressure test. Forty-five male Sprague-Dawley rats were randomly divided into the manual ball manometry (H) group, the tracing machine manometry pressure gauge head (MP) group, and the experimental animal hollow organ mechanical testing system (ME) group. The rats in each group were subjected to a cut colon rupture pressure test after tail-vein injection of anesthesia. Colonic end-to-end anastomosis was performed, and the rats rested for 1 week before the anastomotic bursting pressure was determined by one of the three methods. No differences were observed between the normal colon rupture pressure and the colonic anastomotic bursting pressure determined using the three manometry methods. However, several advantages, such as a reduction in errors, were identified in the ME group. Different types of manometry methods can be applied to the normal rat colon, but the colonic anastomotic bursting pressure test using the experimental animal hollow organ mechanical testing system is superior to traditional methods. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Villeval, M; Carayol, M; Lamy, S; Lepage, B; Lang, T
2016-12-01
In the field of health, evidence-based medicine and associated methods like randomised controlled trials (RCTs) have become widely used. RCT has become the gold standard for evaluating causal links between interventions and health results. Originating in pharmacology, this method has been progressively expanded to medical devices, non-pharmacological individual interventions, as well as collective public health interventions. Its use in these domains has led to the formulation of several limits, and it has been called into question as an undisputed gold standard. Some of those limits (e.g. confounding biases and external validity) are common to these four different domains, while others are more specific. This paper describes the different limits, as well as several research avenues. Some are methodological reflections aiming at adapting RCT to the complexity of the tested interventions, and at overcoming some of its limits. Others are alternative methods. The objective is not to remove RCT from the range of evaluation methodologies, but to resituate it within this range. The aim is to encourage choosing between different methods according to the features and the level of the intervention to evaluate, thereby calling for methodological pluralism. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Validation of an Adaptive Combustion Instability Control Method for Gas-Turbine Engines
NASA Technical Reports Server (NTRS)
Kopasakis, George; DeLaat, John C.; Chang, Clarence T.
2004-01-01
This paper describes ongoing testing of an adaptive control method to suppress high-frequency thermo-acoustic instabilities like those found in lean-burning, low-emission combustors being developed for future aircraft gas turbine engines. The method, called Adaptive Sliding Phasor Averaged Control, was previously tested in an experimental rig designed to simulate a combustor with an instability of about 530 Hz. Results published earlier, and briefly presented here, demonstrated that this method was effective in suppressing the instability. Because this test rig did not exhibit a well-pronounced instability, a question remained regarding the effectiveness of the control methodology when applied to a more coherent instability. To answer this question, a modified combustor rig was assembled at the NASA Glenn Research Center in Cleveland, Ohio. The modified rig exhibited a more coherent, higher-amplitude instability, but at a lower frequency of about 315 Hz. Test results show that this control method successfully reduced the instability pressure of the lower-frequency test rig. In addition, owing to a phenomenon discovered and reported earlier, so-called intra-harmonic coupling, a dramatic suppression of the instability was achieved by focusing control on the second harmonic of the instability. These results and their implications are discussed, as well as a hypothesis describing the mechanism of intra-harmonic coupling.
Biclustering of gene expression data using reactive greedy randomized adaptive search procedure
Dharan, Smitha; Nair, Achuthsankar S
2009-01-01
Background: Biclustering algorithms belong to a distinct class of clustering algorithms that perform simultaneous clustering of both rows and columns of the gene expression matrix, and they can be a very useful analysis tool when some genes have multiple functions and experimental conditions are diverse. Cheng and Church introduced a measure called the mean squared residue score to evaluate the quality of a bicluster, and it has become one of the most popular measures used to search for biclusters. In this paper, we review the basic concepts of the metaheuristic Greedy Randomized Adaptive Search Procedure (GRASP), its construction and local search phases, and propose a new method, a variant of GRASP called the Reactive Greedy Randomized Adaptive Search Procedure (Reactive GRASP), to detect significant biclusters from large microarray datasets. The method has two major steps. First, high-quality bicluster seeds are generated by means of k-means clustering. In the second step, these seeds are grown using Reactive GRASP, in which the basic parameter that defines the restrictiveness of the candidate list is self-adjusted, depending on the quality of the solutions found previously. Results: We performed statistical and biological validations of the biclusters obtained and evaluated the method against the results of basic GRASP as well as the classic work of Cheng and Church. The experimental results indicate that the Reactive GRASP approach outperforms the basic GRASP algorithm and the Cheng and Church approach. Conclusion: The Reactive GRASP approach for the detection of significant biclusters is robust and does not require calibration efforts. PMID:19208127
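The Cheng-Church mean squared residue score referenced above has a compact definition that is easy to state in code. A sketch:

```python
import numpy as np

def mean_squared_residue(A):
    """Cheng-Church mean squared residue of a bicluster submatrix A:
    MSR = mean_ij (a_ij - a_iJ - a_Ij + a_IJ)^2, where a_iJ, a_Ij and
    a_IJ are the row, column and overall means of the submatrix."""
    row = A.mean(axis=1, keepdims=True)
    col = A.mean(axis=0, keepdims=True)
    overall = A.mean()
    return ((A - row - col + overall) ** 2).mean()

# A perfectly coherent (additive) bicluster, a_ij = r_i + c_j, has MSR 0...
coherent = np.array([[1.0, 3.0, 2.0],
                     [4.0, 6.0, 5.0],
                     [0.0, 2.0, 1.0]])
# ...while unstructured noise gives a clearly positive score.
rng = np.random.default_rng(3)
noisy = rng.normal(size=(3, 3))
print(mean_squared_residue(coherent), mean_squared_residue(noisy))
```

Search procedures such as GRASP grow or shrink candidate row/column sets while keeping this score below a threshold.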
The rid-redundant procedure in C-Prolog
NASA Technical Reports Server (NTRS)
Chen, Huo-Yan; Wah, Benjamin W.
1987-01-01
C-Prolog can conveniently be used for logical inference on knowledge bases. However, as with many search methods that use backward chaining, a large number of redundant computations may be produced in recursive calls. To overcome this problem, the 'rid-redundant' procedure was designed to eliminate all redundant computations when running multi-recursive procedures. Experimental results obtained for C-Prolog on the VAX 11/780 computer show an order-of-magnitude improvement in running time and solvable problem size.
Development of a Mobile Robot with Wavy Movement by Rotating Bars
NASA Astrophysics Data System (ADS)
Kitagawa, Ato; Zhang, Liang; Eguchi, Takashi; Tsukagoshi, Hideyuki
A mobile robot with a new type of movement, called wavy movement, is proposed in this paper. Wavy movement can be readily realized by many bars or crosses rotating at equal speeds, and the robot, which has a simple structure and an easy control method, is able to ascend and descend stairs by covering the corners of the stairs within separate wave shapes between touching points. The principle of wavy movement, the mechanism, and experimental results for the proposed robot are discussed.
Multiscale Modeling of Stiffness, Friction and Adhesion in Mechanical Contacts
2012-02-29
over a lateral length l scales as a power law: h ∝ l^H, where H is called the Hurst exponent. For typical experimental surfaces, H ranges from 0.5 to 0.8...surfaces with a wide range of Hurst exponents using fully atomistic calculations and the Green's function method. A simple relation like Eq. (2...described above to explore a full range of parameter space with different rms roughness h0, rms slope h'0, Hurst exponent H, adhesion energy
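The quoted scaling law h ∝ l^H can be checked numerically on a synthetic self-affine profile. A random-walk profile has H = 0.5, so the slope of log rms roughness against log window length should come out near 0.5 (a detrended-fluctuation-style estimate; the window sizes and profile length below are illustrative choices, not from the report):

```python
import numpy as np

rng = np.random.default_rng(4)

# A 1D random-walk profile is self-affine with Hurst exponent H = 0.5:
# the rms height fluctuation over a window of length l grows as l**H.
N = 2 ** 16
h = np.cumsum(rng.normal(size=N))

def rms_roughness(h, l):
    """Rms height deviation over non-overlapping windows of length l,
    measured about each window's own linear detrend."""
    n = len(h) // l
    w = h[:n * l].reshape(n, l)
    xc = np.arange(l) - (l - 1) / 2.0
    slope = (w * xc).sum(1) / (xc ** 2).sum()
    resid = w - w.mean(1, keepdims=True) - slope[:, None] * xc
    return np.sqrt((resid ** 2).mean())

ls = np.array([16, 32, 64, 128, 256, 512])
hs = np.array([rms_roughness(h, l) for l in ls])

# Fit h(l) ~ l**H on log-log axes; the slope estimates H.
H, _ = np.polyfit(np.log(ls), np.log(hs), 1)
print(H)
```

Profiles with other Hurst exponents (e.g. the 0.5-0.8 range cited for experimental surfaces) would need correlated noise, such as fractional Brownian motion, in place of the plain random walk.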
In-silico experiments of zebrafish behaviour: modeling swimming in three dimensions
NASA Astrophysics Data System (ADS)
Mwaffo, Violet; Butail, Sachit; Porfiri, Maurizio
2017-01-01
Zebrafish is fast becoming a species of choice in biomedical research for the investigation of functional and dysfunctional processes coupled with their genetic and pharmacological modulation. As with mammals, experimentation with zebrafish constitutes a complicated ethical issue that calls for the exploration of alternative testing methods to reduce the number of subjects, refine experimental designs, and replace live animals. Inspired by the demonstrated advantages of computational studies in other life science domains, we establish an authentic data-driven modelling framework to simulate zebrafish swimming in three dimensions. The model encapsulates burst-and-coast swimming style, speed modulation, and wall interaction, laying the foundations for in-silico experiments of zebrafish behaviour. Through computational studies, we demonstrate the ability of the model to replicate common ethological observables such as speed and spatial preference, and anticipate experimental observations on the correlation between tank dimensions and zebrafish behaviour. Reaching to other experimental paradigms, our framework is expected to contribute to a reduction in animal use and suffering.
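The modelling ingredients named in the abstract (burst-and-coast swimming, speed modulation, wall interaction) can be caricatured in a few lines. The sketch below is not the authors' calibrated model; every parameter value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Minimal burst-and-coast sketch in 3D: at each burst the fish picks a
# new speed and a small random change of heading, then the speed decays
# (coast) until the next burst. Walls are handled by reflection.
dt, steps = 0.05, 2000
tank = np.array([0.30, 0.30, 0.15])       # tank dimensions in metres (assumed)
pos = tank / 2
heading = np.array([1.0, 0.0, 0.0])
speed, decay, burst_p = 0.05, 0.95, 0.1   # m/s, per-step decay, burst prob.

traj = np.empty((steps, 3))
for n in range(steps):
    if rng.random() < burst_p:            # burst: new speed, random turn
        speed = abs(rng.normal(0.08, 0.03))
        heading = heading + rng.normal(0.0, 0.4, size=3)
        heading /= np.linalg.norm(heading)
    else:                                 # coast: speed decays
        speed *= decay
    pos = pos + speed * heading * dt
    for k in range(3):                    # reflect at the tank walls
        if pos[k] < 0 or pos[k] > tank[k]:
            pos[k] = np.clip(pos[k], 0, tank[k])
            heading[k] = -heading[k]
    traj[n] = pos

print(traj.shape, traj.min(0), traj.max(0))
```

From a trajectory like this one can read off the ethological observables mentioned above, e.g. speed distributions and spatial (wall-following) preference, and rerun with different `tank` values to probe tank-size effects.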
Xu, Lingyang; Hou, Yali; Bickhart, Derek M; Song, Jiuzhou; Liu, George E
2013-06-25
Copy number variations (CNVs) are gains and losses of genomic sequence between two individuals of a species when compared to a reference genome. The data from single nucleotide polymorphism (SNP) microarrays are now routinely used for genotyping, but they can also be utilized for copy number detection. Substantial progress has been made in array design and CNV calling algorithms, and at least 10 comparison studies in humans have been published to assess them. In this review, we first survey the literature on existing microarray platforms and CNV calling algorithms. We then examine a number of CNV calling tools to evaluate their impacts using bovine high-density SNP data. Large incongruities in the results from different CNV calling tools highlight the need for standardizing array data collection, quality assessment, and experimental validation. Only after careful experimental design and rigorous data filtering can the impacts of CNVs on both normal phenotypic variability and disease susceptibility be fully revealed.
O'Dell, Luke A; Schurko, Robert W
2009-05-20
A new approach for the acquisition of static, wideline (14)N NMR powder patterns is outlined. The method involves the use of frequency-swept pulses which serve two simultaneous functions: (1) broad-band excitation of magnetization and (2) signal enhancement via population transfer. The signal enhancement mechanism is described using numerical simulations and confirmed experimentally. This approach, which we call DEISM (Direct Enhancement of Integer Spin Magnetization), allows high-quality (14)N spectra to be acquired at intermediate field strengths in an uncomplicated way and in a fraction of the time required for previously reported methods.
Evaluation of an Integrated Curriculum in Physics, Mathematics, Engineering, and Chemistry
NASA Astrophysics Data System (ADS)
Beichner, Robert
1997-04-01
An experimental, student centered, introductory curriculum called IMPEC (for Integrated Mathematics, Physics, Engineering, and Chemistry curriculum) is in its third year of pilot-testing at NCSU. The curriculum is taught by a multidisciplinary team of professors using a combination of traditional lecturing and alternative instructional methods including cooperative learning, activity-based class sessions, and extensive use of computer modeling, simulations, and the world wide web. This talk will discuss the research basis for our design and implementation of the curriculum, the qualitative and quantitative methods we have been using to assess its effectiveness, and the educational outcomes we have noted so far.
Tunable properties of light propagation in photonic liquid crystal fibers
NASA Astrophysics Data System (ADS)
Szaniawska, K.; Nasilowski, T.; Woliński, T. R.; Thienpont, H.
2006-12-01
Tunable properties of light propagation in photonic crystal fibers filled with liquid crystals, called photonic liquid crystal fibers (PLCFs), are presented. The propagation properties of PLCFs strongly depend on the contrast between the refractive indices of the solid core (pure silica glass) and the liquid crystals (LCs) filling the holes of the fiber. Due to the relatively strong thermo-optical effect, the refractive index of the LC can be changed by changing its temperature. A numerical analysis of light propagation in PLCFs, based on two simulation methods, the finite difference (FD) method and the multipole method (MM), is presented. The numerical results obtained are in good agreement with our earlier experimental results presented elsewhere [1].
A Performance Weighted Collaborative Filtering algorithm for personalized radiology education.
Lin, Hongli; Yang, Xuedong; Wang, Weisheng; Luo, Jiawei
2014-10-01
Devising an accurate prediction algorithm that can predict the difficulty level of cases for individuals and then select suitable cases for them is essential to the development of a personalized training system. In this paper, we propose a novel approach, called Performance Weighted Collaborative Filtering (PWCF), to predict the difficulty level of each case for individuals. The main idea of PWCF is to assign an optimal weight to each rating used for predicting the difficulty level of a target case for a trainee, rather than using an equal weight for all ratings as in traditional collaborative filtering methods. The assigned weight is a function of the performance level of the trainee at which the rating was made. The PWCF method and the traditional method are compared using two datasets, and the experimental data are evaluated by means of the MAE metric. Our experimental results show that PWCF outperforms the traditional methods by 8.12% and 17.05%, respectively, over the two datasets, in terms of prediction precision. This suggests that PWCF is a viable method for the development of personalized training systems in radiology education. Copyright © 2014. Published by Elsevier Inc.
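The performance-weighting idea at the core of PWCF can be sketched as follows. This is a minimal illustration of weighting each rating by the rater's performance level in addition to user similarity; the function names, data layout, and weight form are hypothetical, not taken from the paper.

```python
def pwcf_predict(ratings, performance, target_case, trainee, sim):
    """Predict the difficulty of `target_case` for `trainee` as a
    weighted mean of other users' ratings of that case, where each
    weight combines user similarity (as in classic collaborative
    filtering) with the rater's performance level at rating time.

    `ratings` and `performance` are dicts keyed by (user, case);
    `sim(trainee, user)` returns a similarity score. Illustrative only.
    """
    num = den = 0.0
    for (user, case), r in ratings.items():
        if case != target_case or user == trainee:
            continue
        w = sim(trainee, user) * performance[(user, case)]  # performance weight
        num += w * r
        den += w
    return num / den if den else None
```

With equal similarities, a rating made by a higher-performing trainee simply counts more toward the predicted difficulty than one made at a lower performance level.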
Fracture toughness in Mode I (GIC) for ductile adhesives
NASA Astrophysics Data System (ADS)
Gálvez, P.; Carbas, RJC; Campilho, RDSG; Abenojar, J.; Martínez, MA; da Silva, LFM
2017-05-01
The work reported in this publication belongs to a project that seeks to replace welded joints with adhesive joints at stress-concentration nodes in bus structures. Fracture toughness in Mode I (GIC) has been measured for two different ductile adhesives, SikaTack Drive and SikaForce 7720. SikaTack Drive is a single-component polyurethane adhesive with high viscoelasticity (more than 100%), whose main use is car-glass bonding, and SikaForce 7720 is a two-component structural polyurethane adhesive. Experimental work was based on the Double Cantilever Beam (DCB) test, using two steel beams as adherends and adhesive thicknesses, chosen according to the problem posed in the project, of 2 and 3 mm for SikaForce 7720 and SikaTack Drive, respectively. Three different methods have been used to obtain the Mode I fracture toughness (GIC) from the values measured in the experimental DCB procedure for each adhesive: Corrected Beam Theory (CBT), the Compliance Calibration Method (CCM), and the Compliance Based Beam Method (CBBM). Four DCB specimens were tested for each adhesive. The dispersion of each GIC calculation method has been studied for each adhesive; likewise, the variations between the three methods have also been studied for each adhesive.
Investigation of heat transfer and material flow of P-FSSW: Experimental and numerical study
NASA Astrophysics Data System (ADS)
Rezazadeh, Niki; Mosavizadeh, Seyed Mostafa; Azizi, Hamed
2018-02-01
Friction stir spot welding (FSSW) is a joining process that utilizes a rotating tool consisting of a shoulder and/or a probe. In this study, a novel FSSW variant, called protrusion friction stir spot welding (P-FSSW), is presented, and the effect of the shoulder diameter on weld quality, including the temperature field, velocity contours, material flow, bonding length, and the depth of the stirred area, is studied numerically and experimentally. The results show that the numerical findings are in good agreement with the experimental measurements: the model predicts the temperature distribution, velocity contours, stirred-area depth, and bonding length well. As the shoulder diameter increases, the temperature rises, which in turn increases the stirred-area depth, the bonding length, and the material velocities. Therefore, a weld of higher quality is produced.
Wild Birds Use an Ordering Rule to Decode Novel Call Sequences.
Suzuki, Toshitaka N; Wheatcroft, David; Griesser, Michael
2017-08-07
The generative power of human language depends on grammatical rules, such as word ordering, that allow us to produce and comprehend even novel combinations of words [1-3]. Several species of birds and mammals produce sequences of calls [4-6], and, like words in human sentences, their order may influence receiver responses [7]. However, it is unknown whether animals use call ordering to extract meaning from truly novel sequences. Here, we use a novel experimental approach to test this in a wild bird species, the Japanese tit (Parus minor). Japanese tits are attracted to mobbing a predator when they hear conspecific alert and recruitment calls ordered as alert-recruitment sequences [7]. They also approach in response to recruitment calls of heterospecific individuals in mixed-species flocks [8, 9]. Using experimental playbacks, we assess their responses to artificial sequences in which their own alert calls are combined into different orderings with heterospecific recruitment calls. We find that Japanese tits respond similarly to mixed-species alert-recruitment call sequences and to their own alert-recruitment sequences. Importantly, however, tits rarely respond to mixed-species sequences in which the call order is reversed. Thus, Japanese tits extract a compound meaning from novel call sequences using an ordering rule. These results demonstrate a new parallel between animal communication systems and human language, opening new avenues for exploring the evolution of ordering rules and compositionality in animal vocal sequences. Copyright © 2017 Elsevier Ltd. All rights reserved.
Generation of dark hollow beam via coherent combination based on adaptive optics.
Zheng, Yi; Wang, Xiaohua; Shen, Feng; Li, Xinyang
2010-12-20
A novel method for generating a dark hollow beam (DHB) is proposed and studied both theoretically and experimentally. A coherent combination technique for laser arrays is implemented based on adaptive optics (AO). A beam-arraying structure and an active segmented mirror are designed and described. Piston errors are extracted by a zero-order interference detection system with the help of a custom-made photodetector array. An algorithm called the extremum approach is adopted to calculate feedback control signals. A dynamic piston error is introduced by LiNbO3 to test the capability of the AO servo. In closed loop, a stable and clear DHB is obtained. The experimental results confirm the feasibility of the concept.
Calligraphic Poling for WGM Resonators
NASA Technical Reports Server (NTRS)
Mohageg, Makan; Strekalov, Dmitry; Savchenkov, Anatoliy; Matsko, Andrey; Ilchenko, Vladimir; Maleki, Lute
2007-01-01
By engineering the geometry of a nonlinear optical crystal, the effective efficiency of all nonlinear optical oscillations can be increased dramatically. Specifically, sphere- and disk-shaped crystal resonators have been used to demonstrate nonlinear optical oscillations at sub-milliwatt input power when cw light propagates in a Whispering Gallery Mode (WGM) of such a resonant cavity. In terms of both device production and experimentation in quantum optics, some nonlinear optical effects with naturally high efficiency can occult the desired nonlinear scattering process; this can be addressed by adding a poling structure to the crystal resonator. In this paper, we discuss a new method for generating poling structures in ferroelectric crystal resonators called calligraphic poling. The details of the poling apparatus, experimental results, and speculation on future applications will be discussed.
Greased Lightning (GL-10) Performance Flight Research: Flight Data Report
NASA Technical Reports Server (NTRS)
McSwain, Robert G.; Glaab, Louis J.; Theodore, Colin R.; Rhew, Ray D. (Editor); North, David D. (Editor)
2017-01-01
Modern aircraft design methods produce acceptable performance predictions for large conventional aircraft. With revolutionary electric propulsion technologies fueled by growth in the small UAS (Unmanned Aerial Systems) industry, these same prediction models are being applied to new, smaller, experimental design concepts requiring a VTOL (Vertical Take-Off and Landing) capability for ODM (On-Demand Mobility). A 50% sub-scale GL-10 flight model was built and tested to demonstrate the transition from hover to forward flight utilizing DEP (Distributed Electric Propulsion) [1][2]. In 2016, plans were put in place to conduct performance flight testing of the 50% sub-scale GL-10 flight model in support of a NASA project called DELIVER (Design Environment for Novel Vertical Lift Vehicles). DELIVER was investigating the feasibility of including smaller and more experimental aircraft configurations in a NASA design tool called NDARC (NASA Design and Analysis of Rotorcraft) [3]. This report covers the performance flight data collected during flight testing of the GL-10 50% sub-scale flight model conducted at Beaver Dam Airpark, VA. Overall, the flight-test data provide great insight into how well existing conceptual design tools predict the performance of small-scale experimental DEP concepts. Low-fidelity conceptual design tools estimated the (L/D)max of the GL-10 50% sub-scale flight model to be 16; the experimentally measured (L/D)max was 7.2. The gap between predicted and measured aerodynamic performance highlights the complexity of wing and nacelle interactions, which is not currently accounted for in existing low-fidelity tools.
A Note on Improving Process Efficiency in Panel Surveys with Paradata
ERIC Educational Resources Information Center
Kreuter, Frauke; Müller, Gerrit
2015-01-01
Call scheduling is a challenge for surveys around the world. Unlike cross-sectional surveys, panel surveys can use information from prior waves to enhance call-scheduling algorithms. Past observational studies showed the benefit of calling panel cases at times that had been successful in the past. This article is the first to experimentally assign…
ROCS: a Reproducibility Index and Confidence Score for Interaction Proteomics Studies
2012-01-01
Background Affinity-Purification Mass-Spectrometry (AP-MS) provides a powerful means of identifying protein complexes and interactions. Several important challenges exist in interpreting the results of AP-MS experiments. First, the reproducibility of AP-MS experimental replicates can be low, due both to technical variability and to the dynamic nature of protein interactions in the cell. Second, the identification of true protein-protein interactions in AP-MS experiments is subject to inaccuracy due to high false negative and false positive rates. Several experimental approaches can be used to mitigate these drawbacks, including the use of replicated and control experiments and relative quantification to sensitively distinguish true interacting proteins from false ones. Methods To address the issues of reproducibility and accuracy of protein-protein interactions, we introduce a two-step method, called ROCS, which makes use of Indicator Prey Proteins to select reproducible AP-MS experiments, and of Confidence Scores to select specific protein-protein interactions. The Indicator Prey Proteins account for measures of protein identifiability as well as protein reproducibility, effectively allowing removal of outlier experiments that contribute noise and affect downstream inferences. The filtered set of experiments is then used in the Protein-Protein Interaction (PPI) scoring step. Prey protein scoring is done by computing a Confidence Score, which accounts for the probability of occurrence of prey proteins in the bait experiments relative to the control experiment; the significance cutoff parameter is estimated by simultaneously controlling false positives and false negatives against metrics of false discovery rate and biological coherence, respectively. In summary, the ROCS method relies on automatic, objective criteria for parameter estimation and on error-controlled procedures.
Results We illustrate the performance of our method by applying it to five previously published AP-MS experiments, each containing well characterized protein interactions, allowing for systematic benchmarking of ROCS. We show that our method may be used on its own to make accurate identifications of specific, biologically relevant protein-protein interactions, or in combination with other AP-MS scoring methods to significantly improve inferences. Conclusions Our method addresses important issues encountered in AP-MS datasets, making ROCS a very promising tool for this purpose, either on its own or in conjunction with other methods. We anticipate that our methodology may be used more generally in proteomics studies and databases, where experimental reproducibility issues arise. The method is implemented in the R language and is freely available as an R package called “ROCS” from the CRAN repository http://cran.r-project.org/. PMID:22682516
Using the Git Software Tool on the Peregrine System | High-Performance
branch workflow. Create a local branch called "experimental" based on the current master:

    git branch experimental

Use your branch (start working on that experimental branch):

    git checkout experimental
    git pull origin experimental
    # work, work, work, commit...

Send local branch to the repo:

    git push
NASA Astrophysics Data System (ADS)
Cai, Jiaxin; Chen, Tingting; Li, Yan; Zhu, Nenghui; Qiu, Xuan
2018-03-01
In order to analyze the fibrosis stage and inflammatory activity grade of chronic hepatitis C, a novel classification method based on collaborative representation (CR) with a smoothly clipped absolute deviation (SCAD) penalty term, called the CR-SCAD classifier, is proposed for pattern recognition. An auto-grading system based on the CR-SCAD classifier is then introduced for the prediction of the fibrosis stage and inflammatory activity grade of chronic hepatitis C. The proposed method has been tested on 123 clinical cases of chronic hepatitis C based on serological indexes. Experimental results show that the proposed method outperforms state-of-the-art baselines for the classification of the fibrosis stage and inflammatory activity grade of chronic hepatitis C.
NASA Astrophysics Data System (ADS)
Aucejo, M.; Totaro, N.; Guyader, J.-L.
2010-08-01
In noise control, identification of the source velocity field remains a major problem open to investigation. Consequently, methods such as nearfield acoustical holography (NAH), principal source projection, the inverse frequency response function, and hybrid NAH have been developed. However, these methods require free-field conditions that are often difficult to achieve in practice. This article presents an alternative method known as inverse patch transfer functions (iPTF), designed to identify source velocities and developed in the framework of the European SILENCE project. The method is based on the definition of a virtual cavity, the double measurement of the pressure and particle velocity fields on the aperture surfaces of this volume (divided into elementary areas called patches), and the inversion of impedance matrices numerically computed from a modal basis obtained by FEM. Theoretically, the method is applicable to sources with complex 3D geometries, and measurements can be carried out in a non-anechoic environment, even in the presence of other stationary sources outside the virtual cavity. In the present paper, the theoretical background of the iPTF method is described and the results (numerical and experimental) for a source with simple geometry (two baffled pistons driven in antiphase) are presented and discussed.
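The inversion step described in the abstract can be sketched as a least-squares solve of the patch relation p = Z v for the patch velocities v. This is only an illustration of the underlying algebra under that assumed relation, not the project's actual implementation; the function name and regularization choice are hypothetical.

```python
import numpy as np

def identify_velocities(Z, p, rcond=1e-10):
    """Recover patch velocities from measured patch pressures by
    (pseudo-)inverting the impedance matrix: given p = Z v, return the
    regularized least-squares solution v = Z^+ p. Small singular values
    below rcond * s_max are truncated, which stands in for whatever
    regularization an actual iPTF implementation would use."""
    v, *_ = np.linalg.lstsq(Z, p, rcond=rcond)
    return v
```

In practice the impedance matrix computed from an FEM modal basis is ill-conditioned, which is why some form of regularized inversion, rather than a plain matrix inverse, is the natural choice here.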
Diffusion modulation of DNA by toehold exchange
NASA Astrophysics Data System (ADS)
Rodjanapanyakul, Thanapop; Takabatake, Fumi; Abe, Keita; Kawamata, Ibuki; Nomura, Shinichiro M.; Murata, Satoshi
2018-05-01
We propose a method to control the diffusion speed of DNA molecules with a target sequence in a polymer solution. The interaction between solute DNA and diffusion-suppressing DNA anchored to a polymer matrix is modulated by the concentration of a third DNA species, called the competitor, through a mechanism called toehold exchange. Experimental results show that sequence-specific modulation of the diffusion coefficient is successfully achieved: the diffusion coefficient can be modulated up to sixfold by changing the concentration of the competitor. The specificity of the modulation is also verified under the coexistence of a set of DNA strands with noninteracting base sequences. With this mechanism, we are able to control the diffusion coefficient of individual DNA species through the concentration of another DNA species. This methodology introduces programmability to DNA-based reaction-diffusion systems.
Li, Desheng
2014-01-01
This paper proposes a novel variant of the cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm, called CQPSO-DVSA-LFD, with two mechanisms to reduce the search space and avoid stagnation. The first mechanism, called Dynamic Varying Search Area (DVSA), limits the range of the particles' activity to a reduced area. The second uses Lévy flights to generate stochastic disturbances in the movement of the particles, in order to escape local optima. To test the performance of CQPSO-DVSA-LFD, numerical experiments are conducted to compare the proposed algorithm with different variants of PSO. According to the experimental results, the proposed method performs better than other variants of PSO on both benchmark test functions and a combinatorial optimization problem, namely the job-shop scheduling problem.
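The Lévy-flight disturbance mentioned above is commonly generated with Mantegna's algorithm, which produces heavy-tailed step lengths from two Gaussians. The abstract does not specify the generator or the stability index, so the sketch below, including the typical choice beta = 1.5, is an assumption for illustration.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Lévy-flight step length via Mantegna's algorithm: the ratio
    u / |v|^(1/beta), with u ~ N(0, sigma_u^2) and v ~ N(0, 1), has a
    heavy-tailed distribution, so most steps are small but occasional
    steps are very large -- the property that helps particles escape
    local optima."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta *
                2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma_u)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)
```

A particle update would then add a scaled `levy_step()` to each coordinate, so the swarm mostly refines locally but occasionally jumps far.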
A method for the determination of the coefficient of rolling friction using cycloidal pendulum
NASA Astrophysics Data System (ADS)
Ciornei, M. C.; Alaci, S.; Ciornei, F. C.; Romanu, I. C.
2017-08-01
The paper presents a method for experimentally determining the coefficient of rolling friction, appropriate for biomedical applications, based on the theory of the cycloidal pendulum. When a mobile circle rolls over a fixed straight line, points on the circle describe trajectories called normal cycloids. To materialize this model, it is sufficient that a small region of the boundary surface of a moving rigid body is spherical. Assuming pure rolling motion, the equation of motion of the cycloidal pendulum is obtained as an ordinary nonlinear differential equation. The experimental device consists of two interconnected balls rolling over the material to be studied. The inertial characteristics of the pendulum can be adjusted via weights placed on a rod. A laser spot oscillates together with the pendulum and provides the amplitude of oscillation. After the experimental parameters needed in the differential equation of motion are found, the equation can be integrated using the fourth-order Runge-Kutta method. The equation was integrated for several materials, and values of the rolling friction coefficient were found. Two main conclusions are drawn: the coefficient of rolling friction significantly influences the amplitude of oscillation, but its effect on the period of oscillation is practically imperceptible. A methodology for finding the rolling friction coefficient is proposed, and the pure rolling condition is verified.
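The fourth-order Runge-Kutta integration named in the abstract can be sketched generically. The pendulum right-hand side below is a stand-in damped nonlinear pendulum, since the paper's cycloidal-pendulum equation is not reproduced in the abstract; the friction coefficient c, g, and L values are illustrative assumptions.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y),
    with y a list of state components."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,     [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def pendulum(c=0.05, g=9.81, L=0.1):
    """Right-hand side of a damped nonlinear pendulum,
    theta'' = -(g/L) sin(theta) - c * theta', where the damping term c
    plays the role of the rolling-friction loss. State y = [theta, omega]."""
    def f(t, y):
        theta, omega = y
        return [omega, -(g / L) * math.sin(theta) - c * omega]
    return f
```

Integrating from an initial angle and fitting the simulated amplitude decay to the measured laser-spot amplitude is then one way to back out the friction parameter, consistent with the paper's observation that friction affects amplitude far more than period.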
Application of Patterson-function direct methods to materials characterization.
Rius, Jordi
2014-09-01
The aim of this article is a general description of the so-called Patterson-function direct methods (PFDM), from their origin to their present state. It covers a 20-year period of methodological contributions to crystal structure solution, most of them published in Acta Crystallographica Section A. The common feature of these variants of direct methods is the introduction of the experimental intensities in the form of the Fourier coefficients of origin-free Patterson-type functions, which allows the active use of both strong and weak reflections. The different optimization algorithms are discussed and their performances compared. This review focuses not only on those PFDM applications related to powder diffraction data but also on some recent results obtained with electron diffraction tomography data.
Multitask visual learning using genetic programming.
Jaśkowski, Wojciech; Krawiec, Krzysztof; Wieloch, Bartosz
2008-01-01
We propose a multitask learning method of visual concepts within the genetic programming (GP) framework. Each GP individual is composed of several trees that process visual primitives derived from input images. Two trees solve two different visual tasks and are allowed to share knowledge with each other by commonly calling the remaining GP trees (subfunctions) included in the same individual. The performance of a particular tree is measured by its ability to reproduce the shapes contained in the training images. We apply this method to visual learning tasks of recognizing simple shapes and compare it to a reference method. The experimental verification demonstrates that such multitask learning often leads to performance improvements in one or both solved tasks, without extra computational effort.
Entropy-Based Search Algorithm for Experimental Design
NASA Astrophysics Data System (ADS)
Malakar, N. K.; Knuth, K. H.
2011-03-01
The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data, whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute-force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments but also is more efficient than brute-force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
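The objective being optimized here, the Shannon entropy of the predicted outcome distribution, is easy to state in code. The sketch below shows only the brute-force version that the paper's nested entropy sampling is designed to improve on; the `predict` interface is a hypothetical stand-in for a model-averaged outcome predictor.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in nats) of a discrete outcome distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def most_informative(experiments, predict):
    """Score every candidate experiment by the entropy of its predicted
    outcome distribution and return the maximizer. Nested entropy
    sampling replaces this exhaustive scan with a rising entropy
    threshold over a maintained sample of experiments; this function
    only exhibits the objective being optimized."""
    return max(experiments, key=lambda e: shannon_entropy(predict(e)))
```

An experiment whose outcome is already certain (entropy 0) teaches nothing, while one whose predicted outcomes are maximally uncertain is, on average, maximally informative, which is why the search targets the entropy maximum.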
TAP score: torsion angle propensity normalization applied to local protein structure evaluation
Tosatto, Silvio CE; Battistutta, Roberto
2007-01-01
Background Experimentally determined protein structures may contain errors and require validation. Conformational criteria based on the Ramachandran plot are mainly used to distinguish between distorted and adequately refined models. While the readily available criteria are sufficient to detect totally wrong structures, establishing the more subtle differences between plausible structures remains more challenging. Results A new criterion, called the TAP score, is introduced; it measures local sequence-to-structure fitness based on torsion angle propensities normalized against the global minimum and maximum. It is shown to be more accurate than previous methods at estimating the validity of a protein model in terms of commonly used experimental quality parameters on two test sets representing the full PDB database and a subset of obsolete PDB structures. Highly selective TAP thresholds are derived to recognize over 90% of the top experimental structures in the absence of experimental information. Both a web server and an executable version of the TAP score are available at . Conclusion A novel procedure for energy normalization (TAP) has significantly improved the possibility of recognizing the best experimental structures. It will allow the user to more reliably isolate problematic structures in the context of automated experimental structure determination. PMID:17504537
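The normalization idea behind the TAP score can be sketched as a min-max rescaling of per-residue torsion-angle propensities against the global extremes. The published formula is not reproduced in the abstract, so this is only an illustration of the normalization step; the function name and averaging are assumptions.

```python
def tap_score(propensities, pmin, pmax):
    """Rescale per-residue torsion-angle propensities against the global
    minimum (pmin) and maximum (pmax) attainable for the sequence, then
    average: 0 means every residue sits at the worst possible propensity,
    1 at the best. A sketch of the normalization idea only."""
    normalized = [(p - pmin) / (pmax - pmin) for p in propensities]
    return sum(normalized) / len(normalized)
```

Normalizing against sequence-specific extremes, rather than using raw propensities, is what makes scores comparable across proteins of different composition.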
Evolutionary neural networks for anomaly detection based on the behavior of a program.
Han, Sang-Jun; Cho, Sung-Bae
2006-06-01
The process of learning the behavior of a given program by using machine-learning techniques (based on system-call audit data) is effective for detecting intrusions. Rule learning, neural networks, statistics, and hidden Markov models (HMMs) are among the representative methods for intrusion detection. Among them, neural networks are known for good performance in learning system-call sequences. In order to apply this knowledge to real-world problems successfully, it is important to determine the structures and weights of the neural networks. However, finding the appropriate structures requires very long time periods because there are no suitable analytical solutions. In this paper, a novel intrusion-detection technique based on evolutionary neural networks (ENNs) is proposed. One advantage of using ENNs is that it takes less time to obtain superior neural networks than with conventional approaches, because they discover the structures and weights of the neural networks simultaneously. Experimental results with the 1999 Defense Advanced Research Projects Agency (DARPA) Intrusion Detection Evaluation (IDEVAL) data confirm that ENNs are promising tools for intrusion detection.
MAVTgsa: An R Package for Gene Set (Enrichment) Analysis
Chien, Chih-Yi; Chang, Ching-Wei; Tsai, Chen-An; ...
2014-01-01
Gene set analysis methods aim to determine whether an a priori defined set of genes shows a statistically significant difference in expression for either categorical or continuous outcomes. Although many methods for gene set analysis have been proposed, a systematic analysis tool for identification of different types of gene set significance modules has not been developed previously. This work presents an R package, called MAVTgsa, which includes three different methods for integrated gene set enrichment analysis. (1) The one-sided OLS (ordinary least squares) test detects coordinated changes of genes in a gene set in one direction, either up- or downregulation. (2) The two-sided MANOVA (multivariate analysis of variance) detects changes in both directions for studying two or more experimental conditions. (3) A random forests-based procedure identifies gene sets that can accurately predict samples from different experimental conditions or that are associated with continuous phenotypes. MAVTgsa computes the P values and FDR (false discovery rate) q-values for all gene sets in the study. Furthermore, MAVTgsa provides several visualization outputs to support and interpret the enrichment results. This package is available online.
Harris, Scott H.; Johnson, Joel A.; Neiswanger, Jeffery R.; Twitchell, Kevin E.
2004-03-09
The present invention includes systems configured to distribute a telephone call, communication systems, communication methods and methods of routing a telephone call to a customer service representative. In one embodiment of the invention, a system configured to distribute a telephone call within a network includes a distributor adapted to connect with a telephone system, the distributor being configured to connect a telephone call using the telephone system and output the telephone call and associated data of the telephone call; and a plurality of customer service representative terminals connected with the distributor, a selected customer service representative terminal being configured to receive the telephone call and the associated data, the distributor and the selected customer service representative terminal being configured to synchronize application of the telephone call and associated data from the distributor to the selected customer service representative terminal.
Guided SAR image despeckling with probabilistic non local weights
NASA Astrophysics Data System (ADS)
Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny
2017-12-01
SAR images are generally corrupted by granular disturbances called speckle, which make visual analysis and detail extraction difficult. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non-Local Weights) replaces heuristic parametric constants in the GGF-BNLM method with values derived dynamically from image statistics for weight computation. The proposed changes make the GGF-BNLM method adaptive and, as a result, achieve a significant improvement in performance. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.
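The non-local weighting underlying this family of filters can be sketched on a 1-D signal: each sample is averaged with samples whose surrounding patches look similar. This is generic non-local means with a Gaussian kernel, shown only for intuition; GGF-BNLM derives its probabilistic weights from speckle statistics instead:

```python
import math

def nlm_filter_1d(signal, patch_radius=1, search_radius=3, h=0.5):
    """Minimal non-local means on a 1-D signal: each sample is replaced by a
    weighted average of samples whose surrounding patches look similar.
    (Generic NLM weighting for illustration only.)"""
    n = len(signal)
    out = []
    for i in range(n):
        num, den = 0.0, 0.0
        for j in range(max(0, i - search_radius), min(n, i + search_radius + 1)):
            # Squared patch distance between the neighborhoods of i and j,
            # with edge samples clamped to the signal boundary.
            d = 0.0
            for k in range(-patch_radius, patch_radius + 1):
                a = signal[min(max(i + k, 0), n - 1)]
                b = signal[min(max(j + k, 0), n - 1)]
                d += (a - b) ** 2
            w = math.exp(-d / (h * h))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# Two flat regions with small noise; the filter smooths each region
# without blurring across the step edge.
noisy = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]
print(nlm_filter_1d(noisy))
```

Patches that straddle the step between the two regions get near-zero weight, which is why the edge survives while the within-region noise is averaged away.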
Hoang, Tuan; Tran, Dat; Huang, Xu
2013-01-01
Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for 2-class BCI classification problems. Current extensions of this method to multiple classes, based on subspace union and covariance matrix similarity, do not provide high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace assembled from the original subspaces; the proposed method is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a of BCI Competition IV, which was designed for 4-class motor imagery classification, to evaluate the proposed method. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with Support Vector Machines, outperforms CSP-based feature extraction methods on the experimental dataset.
Theoretical study on the photoabsorption in the Herzberg I band system of the O2 molecule
NASA Astrophysics Data System (ADS)
Takegami, Ryuta; Yabushita, Satoshi
2005-01-01
The Herzberg I band system of the oxygen molecule is electric-dipole forbidden, and its absorption strength has been explained by intensity-borrowing models that include the spin-orbit (SO) and L-uncoupling (RO) interactions as perturbations. We employed three theoretical models of different levels to evaluate these two interactions and obtained the rotational and vibronic absorption strengths by ab initio methods. The first model calculates the transition moments induced by the SO interaction variationally with the spin-orbit configuration interaction method, uses first-order perturbation theory for the RO interaction, and is called SOCI. The second is based on first-order perturbation theory for both the SO and RO interactions and is called Pert(Full). The last is a limited version of Pert(Full), in which the first-order perturbation wavefunctions for the initial and final states are each represented by only one dominant basis state, namely the 1³Πg and B³Σu⁻ states, respectively, as originally used by England et al. [Can. J. Phys. 74 (1996) 185]; it is called Pert(England). The vibronic oscillator strengths calculated by these three models were in good agreement with the experimental values. As for the integrated rotational linestrengths, the SOCI and Pert(Full) models reproduced the experimental results very well; however, the Pert(England) model did not give satisfactory results. Since the Pert(England) model takes only the 1³Πg and B³Σu⁻ states into consideration, it cannot capture the complicated configuration interactions with highly excited states induced by the SO and RO interactions, which play an important role in calculating the delicate integrated rotational linestrengths. This result suggests that configuration interaction with highly excited states due to perturbations cannot be neglected in the case of very weak absorption band systems.
Khalil, Hossam; Kim, Dongkyu; Jo, Youngjoon; Park, Kyihwan
2017-06-01
An optical component called a Dove prism is used to rotate the laser beam of a laser-scanning vibrometer (LSV). This is called a derotator and is used for measuring the vibration of rotating objects. The main advantage of a derotator is that it works independently from an LSV. However, this device requires very specific alignment, in which the axis of the Dove prism must coincide with the rotational axis of the object. If the derotator is misaligned with the rotating object, the results of the vibration measurement are imprecise, owing to the alteration of the laser beam on the surface of the rotating object. In this study, a method is proposed for aligning a derotator with a rotating object through an image-processing algorithm that obtains the trajectory of a landmark attached to the object. After the trajectory of the landmark is mathematically modeled, the amount of derotator misalignment with respect to the object is calculated. The accuracy of the proposed method for aligning the derotator with the rotating object is experimentally tested.
The unitary convolution approximation for heavy ions
NASA Astrophysics Data System (ADS)
Grande, P. L.; Schiwietz, G.
2002-10-01
The convolution approximation for the impact-parameter dependent energy loss is reviewed with emphasis on the determination of the stopping force for heavy projectiles. In this method, the energy loss in different impact-parameter regions is well determined and interpolated smoothly. The physical inputs of the model are the projectile-screening function (in the case of dressed ions), the electron density, and the oscillator strengths of the target atoms. Moreover, the convolution approximation, in the perturbative mode (called PCA), yields remarkable agreement with full semi-classical-approximation (SCA) results for bare as well as for screened ions at all impact parameters. In the unitary mode (called UCA), the method contains some higher-order effects (yielding in some cases rather good agreement with full coupled-channel calculations) and approaches the classical regime similarly to the Bohr model for large perturbations (Z/v ≫ 1). The results are then used to compare with experimental values of the non-equilibrium stopping force as a function of the projectile charge, as well as with the equilibrium energy loss under non-aligned and channeling conditions.
Guide to Using Onionskin Analysis Code (U)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fugate, Michael Lynn; Morzinski, Jerome Arthur
2016-09-15
This document is a guide to using R-code written for the purpose of analyzing onionskin experiments. We expect the user to be very familiar with statistical methods and the R programming language. For more details about onionskin experiments and the statistical methods mentioned in this document see Storlie, Fugate, et al. (2013). Engineers at LANL experiment with detonators and high explosives to assess performance. The experimental unit, called an onionskin, is a hemisphere consisting of a detonator and a booster pellet surrounded by explosive material. When the detonator explodes, a streak camera mounted above the pole of the hemisphere records when the shock wave arrives at the surface. The output from the camera is a two-dimensional image that is transformed into a curve that shows the arrival time as a function of polar angle. The statistical challenge is to characterize a baseline population of arrival time curves and to compare the baseline curves to curves from a new, so-called, test series. The hope is that the new test series of curves is statistically similar to the baseline population.
Qian, Jianjun; Yang, Jian; Xu, Yong
2013-09-01
This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
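The core IDLS step, expressing the central macro-pixel as a ridge-regression combination of its neighbors, can be sketched for two neighbor patches, where the normal equations reduce to a closed-form 2x2 solve. The patch values are hypothetical:

```python
def ridge_coeffs(neighbors, center, lam=0.1):
    """Solve (A^T A + lam*I) c = A^T b for two neighbor patches (the columns
    of A) via the closed-form 2x2 inverse. Illustrative sketch of the IDLS
    idea: the central patch is expressed as a ridge-regression combination of
    its neighboring patches, and the coefficients serve as local-structure
    features."""
    a1, a2 = neighbors
    # Entries of the 2x2 normal-equations (Gram) matrix and right-hand side.
    g11 = sum(x * x for x in a1) + lam
    g22 = sum(x * x for x in a2) + lam
    g12 = sum(x * y for x, y in zip(a1, a2))
    r1 = sum(x * b for x, b in zip(a1, center))
    r2 = sum(x * b for x, b in zip(a2, center))
    det = g11 * g22 - g12 * g12
    return ((g22 * r1 - g12 * r2) / det, (g11 * r2 - g12 * r1) / det)

# The center patch nearly matches the first neighbor, so c1 should dominate.
c1, c2 = ridge_coeffs(([1.0, 2.0, 1.0], [0.0, 0.0, 1.0]), [1.0, 2.0, 1.1])
print(c1, c2)
```

Here c1 ≈ 0.98 dominates c2 ≈ 0.11, reflecting the local structure; in IDLS the vector of such coefficients over all neighbors forms the structure images that are later concatenated and reduced by Fisher discriminant analysis.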
Bastos, Monique; Medeiros, Karolina; Jones, Gareth; Bezerra, Bruna
2018-03-01
Vocalizations are often used by animals to communicate and mediate social interactions. Animals may benefit from eavesdropping on calls from other species to avoid predation and thus increase their chances of survival. Here we use both observational and experimental evidence to investigate eavesdropping and how acoustic signals may mediate interactions between two sympatric and endemic primate species (common marmosets and blonde capuchin monkeys) in a fragment of Atlantic Rainforest in Northeastern Brazil. We observed 22 natural vocal encounters between the study species, but no evident visual or physical contact over the study period. These two species seem to use the same area throughout the day, but at different times. We broadcasted alarm and long distance calls to and from both species as well as two control stimuli (i.e., forest background noise and a loud call from an Amazonian primate) in our playback experiments. Common marmosets showed anti-predator behavior (i.e., vigilance and flight) when exposed to blonde capuchin calls both naturally and experimentally. However, blonde capuchin monkeys showed no anti-predator behavior in response to common marmoset calls. Blonde capuchins uttered long distance calls and looked in the direction of the speaker following exposure to their own long distance call, whereas they fled when exposed to their own alarm calls. Both blonde capuchin monkeys and common marmosets showed fear behaviors in response to the loud call from a primate species unknown to them, and showed no apparent response to the forest background noise. Common marmoset responses to blonde capuchin calls suggest that the latter is a potential predator. Furthermore, common marmosets appear to be eavesdropping on calls from blonde capuchin monkeys to avoid potentially costly encounters with them. © 2018 Wiley Periodicals, Inc.
Two blowing concepts for roll and lateral control of aircraft
NASA Technical Reports Server (NTRS)
Tavella, D. A.; Wood, N. J.; Lee, C. S.; Roberts, L.
1986-01-01
Two schemes to modulate aerodynamic forces for roll and lateral control of aircraft have been investigated. The first scheme, called the lateral blowing concept, consists of thin jets of air exiting spanwise, or at a small angle to the spanwise direction, from slots at the tips of straight wings. For this scheme, in addition to experimental measurements, a theory was developed showing the analytical relationship between aerodynamic forces and jet and wing parameters. Experimental results confirmed the theoretically derived scaling laws. The second scheme, which was studied experimentally, is called the jet spoiler concept and consists of thin jets exiting normally to the wing surface from slots aligned with the spanwise direction.
Point model equations for neutron correlation counting: Extension of Böhnel's equations to any order
Favalli, Andrea; Croft, Stephen; Santi, Peter
2015-06-15
Various methods of autocorrelation neutron analysis may be used to extract information about a measurement item containing spontaneously fissioning material. The two predominant approaches are the time-correlation-analysis methods (which make use of a coincidence gate) of multiplicity shift register logic and Feynman sampling. The common feature is that the correlated nature of the pulse train can be described by a vector of reduced factorial multiplet rates. We call these singlets, doublets, triplets, etc. Within the point reactor model, the multiplet rates may be related to the properties of the item, the parameters of the detector, and basic nuclear data constants by a series of coupled algebraic equations, the so-called point model equations. Solving, or inverting, the point model equations using experimental calibration model parameters is how assays of unknown items are performed. Currently only the first three multiplets are routinely used. In this work we develop the point model equations to higher-order multiplets using the probability generating function approach combined with the general derivative chain rule, the so-called Faà di Bruno formula. Explicit expressions up to 5th order are provided, as well as the general iterative formula to calculate any order. This study represents the first necessary step towards determining whether higher-order multiplets can add value to nondestructive measurement practice for nuclear materials control and accountancy.
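The reduced factorial multiplet rates have a compact definition that can be sketched directly: the k-th rate is the expectation of C(n, k) under the multiplicity distribution, i.e. the k-th derivative of the probability generating function at z = 1 divided by k!. The distribution below is hypothetical; real assays derive these quantities from measured pulse trains:

```python
from math import comb

def reduced_factorial_moments(pmf, max_order):
    """Reduced factorial moments m_k = E[C(n, k)] of a multiplicity
    distribution -- the singles (k=1), doubles (k=2), triples (k=3), ...
    of neutron correlation counting. Equivalent to the k-th derivative of
    the probability generating function at z = 1, divided by k!."""
    return [sum(p * comb(n, k) for n, p in pmf.items())
            for k in range(1, max_order + 1)]

pmf = {0: 0.5, 1: 0.3, 2: 0.15, 3: 0.05}  # hypothetical multiplicity pmf
print(reduced_factorial_moments(pmf, 3))
```

For this pmf the singles, doubles, and triples come out to 0.75, 0.30, and 0.05; the paper's contribution is relating such higher-order multiplets to item and detector properties via Faà di Bruno's formula.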
Design and Experimental Validation for Direct-Drive Fault-Tolerant Permanent-Magnet Vernier Machines
Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian
2014-01-01
A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines achieve high torque density by introducing flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristics of PMV machines and provides a design method that is able not only to meet the fault-tolerance requirements but also to retain high torque density. The operation principle of the proposed machine is analyzed. The design process and optimization are presented in detail, including the combination of slots and poles, the winding distribution, and the dimensions of the PMs and teeth. By using the time-stepping finite element method (TS-FEM), the machine performances are evaluated. Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis. PMID:25045729
Khang, Hyun Soo; Lee, Byung Il; Oh, Suk Hoon; Woo, Eung Je; Lee, Soo Yeol; Cho, Min Hyoung; Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun
2002-06-01
Recently, a new static resistivity image reconstruction algorithm was proposed that utilizes internal current density data obtained by the magnetic resonance current density imaging technique. This new imaging method is called magnetic resonance electrical impedance tomography (MREIT). The derivation and performance of the J-substitution algorithm in MREIT have been reported, via computer simulations, as a new accurate and high-resolution static impedance imaging technique. In this paper, we present experimental procedures, denoising techniques, and image reconstructions using a 0.3-tesla (T) experimental MREIT system and saline phantoms. MREIT using the J-substitution algorithm effectively utilizes the internal current density information, resolving the problem inherent in conventional EIT: the low sensitivity of boundary measurements to changes of internal tissue resistivity values. Resistivity images of saline phantoms show an accuracy of 6.8%-47.2% and a spatial resolution of 64 x 64. Both can be significantly improved by using an MRI system with a better signal-to-noise ratio.
Modeling, simulation, and estimation of optical turbulence
NASA Astrophysics Data System (ADS)
Formwalt, Byron Paul
This dissertation documents three new contributions to simulation and modeling of optical turbulence. The first contribution is the formalization, optimization, and validation of a modeling technique called successively conditioned rendering (SCR). The SCR technique is empirically validated by comparing the statistical error of random phase screens generated with the technique. The second contribution is the derivation of the covariance delineation theorem, which provides theoretical bounds on the error associated with SCR. It is shown empirically that the theoretical bound may be used to predict relative algorithm performance. Therefore, the covariance delineation theorem is a powerful tool for optimizing SCR algorithms. For the third contribution, we introduce a new method for passively estimating optical turbulence parameters, and demonstrate the method using experimental data. The technique was demonstrated experimentally, using a 100 m horizontal path at 1.25 m above sun-heated tarmac on a clear afternoon. For this experiment, we estimated C_n^2 ≈ 6.01 × 10^-9 m^(-2/3), l_0 ≈ 17.9 mm, and L_0 ≈ 15.5 m.
From Bacon to Banks: the vision and the realities of pursuing science for the common good.
Sargent, Rose-Mary
2012-03-01
Francis Bacon's call for philosophers to investigate nature and "join in consultation for the common good" is one example of a powerful vision that helped to shape modern science. His ideal clearly linked the experimental method with the production of beneficial effects that could be used both as "pledges of truth" and for "the comforts of life." When Bacon's program was implemented in the following generation, however, the tensions inherent in his vision became all too real. The history of the Royal Society of London, from its founding in 1660 to the 42-year presidency of Joseph Banks (1778-1820), shows how these tensions led to changes in the way in which both the experimental method and the ideal of the common good were understood. A more nuanced understanding of the problems involved in recent philosophical analyses of science in the public interest can be achieved by appreciating the complexity revealed from this historical perspective.
Investigating a holobiont: Microbiota perturbations and transkingdom networks.
Greer, Renee; Dong, Xiaoxi; Morgun, Andrey; Shulzhenko, Natalia
2016-01-01
The scientific community has recently come to appreciate that, rather than existing as independent organisms, multicellular hosts and their microbiota comprise a complex evolving superorganism or metaorganism, termed a holobiont. This point of view leads to a re-evaluation of our understanding of different physiological processes and diseases. In this paper we focus on experimental and computational approaches which, when combined in one study, allowed us to dissect mechanisms (traditionally named host-microbiota interactions) regulating holobiont physiology. Specifically, we discuss several approaches for microbiota perturbation, such as the use of antibiotics and germ-free animals, including advantages and potential caveats of their usage. We briefly review computational approaches to characterize the microbiota and, more importantly, methods to infer specific components of the microbiota (such as microbes or their genes) affecting host functions. One such approach, called transkingdom network analysis, was recently developed and applied in our study.(1) Finally, we also discuss common methods used to validate the computational predictions of host-microbiota interactions using in vitro and in vivo experimental systems.
NASA Astrophysics Data System (ADS)
Himr, D.
2013-04-01
This article describes the simulation of unsteady flow during water hammer with two programs that use different numerical approaches to solve the ordinary one-dimensional differential equations describing the dynamics of hydraulic elements and pipes. The first is Matlab-Simulink-SimHydraulics, commercial software developed to solve the dynamics of general hydraulic systems, which defines them with block elements. The other software, called HYDRA, is based on the Lax-Wendroff numerical method, which serves as a tool to solve the momentum and continuity equations; this program was developed in Matlab at Brno University of Technology. Experimental measurements were performed on a simple test rig consisting of an elastic pipe with strong damping connecting two reservoirs. Water hammer is induced by rapidly closing the valve. Physical properties of the liquid and pipe elasticity parameters were considered in both simulations, which agree very well with each other; differences from the experimental data are minimal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jamieson, Kevin; Davis, IV, Warren L.
Active learning methods automatically adapt data collection by selecting the most informative samples in order to accelerate machine learning. Because of this, testing and comparing active learning algorithms in the real world requires collecting new datasets (adaptively), rather than simply applying algorithms to benchmark datasets, as is the norm in (passive) machine learning research. To facilitate the development, testing, and deployment of active learning for real applications, we have built an open-source software system for large-scale active learning research and experimentation. The system, called NEXT, provides a unique platform for real-world, reproducible active learning research. This paper details the challenges of building the system and demonstrates its capabilities with several experiments. The results show how experimentation can help expose strengths and weaknesses of active learning algorithms, in sometimes unexpected and enlightening ways.
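The "most informative sample" selection at the heart of active learning can be sketched with the simplest rule, uncertainty sampling: query the point whose predicted probability is closest to 0.5. The toy model and pool below are assumptions for illustration and do not reflect NEXT's actual API:

```python
def uncertainty_sample(pool, predict_proba):
    """Return the index of the pool item whose predicted probability is
    closest to 0.5, i.e. the sample the current model is least certain
    about. This is the simplest informativeness rule; systems like NEXT
    support many such strategies."""
    return min(range(len(pool)),
               key=lambda i: abs(predict_proba(pool[i]) - 0.5))

# Toy model: predicted probability rises linearly with the feature value
# (hypothetical model, for illustration only).
proba = lambda x: min(max(x / 10.0, 0.0), 1.0)
pool = [0.5, 2.0, 4.9, 9.0]
print(uncertainty_sample(pool, proba))   # index of 4.9, the most ambiguous point
```

After labeling the selected point and retraining, the loop repeats; the point of a platform like NEXT is that this adaptive loop must run against live data collection, not a fixed benchmark.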
A Systematic Method for Verification and Validation of Gyrokinetic Microstability Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bravenec, Ronald
My original proposal for the period Feb. 15, 2014 through Feb. 14, 2017 called for an integrated validation and verification effort carried out by myself with collaborators. The validation component would require experimental profile and power-balance analysis. In addition, it would require running the gyrokinetic codes varying the input profiles within experimental uncertainties to seek agreement with experiment before discounting a code as invalidated. Therefore, validation would require a major increase of effort over my previous grant periods, which covered only code verification (code benchmarking). Consequently, I had requested full-time funding. Instead, I am being funded at somewhat less than half time (5 calendar months per year). As a consequence, I decided to forego the validation component and to only continue the verification efforts.
A B-TOF mass spectrometer for the analysis of ions with extreme high start-up energies.
Lezius, M
2002-03-01
Weak magnetic deflection is combined with two-acceleration-stage time-of-flight mass spectrometry and subsequent position-sensitive ion detection. The experimental method, called B-TOF mass spectrometry, is described with respect to its theoretical background and some experimental results. It is demonstrated that the technique has distinct advantages over other approaches, particularly for the identification and analysis of very highly energetic ions with an initially large energy broadening (up to 1 MeV) and with high charge states (up to 30+). Such energetic targets commonly arise in intense laser-matter interaction processes found during laser ablation, laser-cluster and laser-molecule interaction, and fast particle and x-ray generation from laser-heated plasma. Copyright 2002 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Hayati, Samad; Tso, Kam; Roston, Gerald
1988-01-01
Autonomous robot task execution requires that the end effector of the robot be positioned accurately relative to a reference world-coordinate frame. The authors present a complete formulation to identify the actual robot geometric parameters. The method applies to any serial link manipulator with an arbitrary order and combination of revolute and prismatic joints. A method is also presented to solve the inverse kinematics of the actual robot model, which usually is not a so-called simple robot. Experimental results obtained using a PUMA 560 with simple measurement hardware are presented. As a result of this calibration, a precision move command was designed, integrated into a robot language, RCCL, and used in the NASA Telerobot Testbed.
NASA Astrophysics Data System (ADS)
Maćkowiak-Pawłowska, Maja; Przybyła, Piotr
2018-05-01
Incomplete particle identification limits the experimentally available phase space region for identified particle analysis. This problem affects ongoing fluctuation and correlation studies, including the search for the critical point of strongly interacting matter performed at the SPS and RHIC accelerators. In this paper we provide a procedure to obtain nth order moments of the multiplicity distribution using the identity method, generalising previously published solutions for n=2 and n=3. Moreover, we present an open source software implementation of this computation, called Idhim, that allows one to obtain the true moments of identified particle multiplicity distributions from the measured ones, provided the response function of the detector is known.
Aguiar, Julio C; Galiano, Eduardo; Arenillas, Pablo
2005-08-01
The activity concentration of a (238)Pu solution was measured by the determined solid angle method employing a novel dual diaphragm-detector assembly, which has been previously described. Due to the special requirements of the detector, a new type of source holder was developed, which consisted of sandwiching the radioisotope between two organic films called VYNS. It was experimentally demonstrated that the VYNS films do not absorb alpha particles, but reduce their energy by an average of 22 keV. A mean activity concentration for (238)Pu of 359.10 ± 0.8 kBq/g was measured.
Allen, Marcus; Zhong, Qiang; Kirsch, Nicholas; Dani, Ashwin; Clark, William W; Sharma, Nitin
2017-12-01
Miniature inertial measurement units (IMUs) are wearable sensors that measure limb segment or joint angles during dynamic movements. However, IMUs are generally prone to drift, external magnetic interference, and measurement noise. This paper presents a new class of nonlinear state estimation technique called state-dependent coefficient (SDC) estimation to accurately predict joint angles from IMU measurements. The SDC estimation method uses limb dynamics, instead of limb kinematics, to estimate the limb state. Importantly, the nonlinear limb dynamic model is formulated into state-dependent matrices that facilitate the estimator design without performing a Jacobian linearization. The estimation method is experimentally demonstrated to predict knee joint angle measurements during functional electrical stimulation of the quadriceps muscle. The nonlinear knee musculoskeletal model was identified through a series of experiments. The SDC estimator was then compared with an extended Kalman filter (EKF), which uses a Jacobian linearization, and a rotation matrix method, which uses a kinematic model instead of the dynamic model. Each estimator's performance was evaluated against the true value of the joint angle, which was measured through a rotary encoder. The experimental results showed that the SDC estimator, the rotation matrix method, and the EKF had root mean square errors of 2.70°, 2.86°, and 4.42°, respectively. Our preliminary experimental results show the new estimator's clear advantage over the EKF method and a slight advantage over the rotation matrix method. However, the information from the dynamic model allows the SDC method to use only one IMU to measure the knee angle, compared with the rotation matrix method, which uses two IMUs to estimate the angle.
Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.
Skariah, Deepak G; Arigovindan, Muthuvel
2017-06-19
We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is constructed as a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs preconditioning for the outer CG iteration, is itself accelerated by another FFT-based non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.
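The inner iteration of NNCG is a plain linear CG solve, which can be sketched in a few lines given only a matrix-vector product. This is the generic CG loop for a symmetric positive definite system, not the full nested algorithm or its FFT preconditioner:

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=100):
    """Plain linear conjugate gradient for A x = b with A symmetric positive
    definite, given only the matrix-vector product. In NNCG a linear CG loop
    of this kind serves as the inner iteration that preconditions the outer
    non-linear CG (sketch of the inner solve only)."""
    x = [0.0] * len(b)
    r = list(b)                     # residual b - A x, with x = 0 initially
    p = list(r)
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Solve the SPD system [[4, 1], [1, 3]] x = [1, 2].
A = [[4.0, 1.0], [1.0, 3.0]]
x = conjugate_gradient(lambda v: [sum(a * vi for a, vi in zip(row, v))
                                  for row in A], [1.0, 2.0])
print(x)
```

For this 2x2 system CG converges in two iterations to x ≈ (1/11, 7/11); in the restoration setting the matrix-vector product would come from the quadratic data-fitting term and the current regularizer Hessian approximation.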
Resolving and quantifying overlapped chromatographic bands by transmutation
Malinowski
2000-09-15
A new chemometric technique called "transmutation" is developed for the purpose of sharpening overlapped chromatographic bands in order to quantify the components. The "transmutation function" is created from the chromatogram of the pure component of interest, obtained from the same instrument, operating under the same experimental conditions used to record the unresolved chromatogram of the sample mixture. The method is used to quantify mixtures containing toluene, ethylbenzene, m-xylene, naphthalene, and biphenyl from unresolved chromatograms previously reported. The results are compared to those obtained using window factor analysis, rank annihilation factor analysis, and matrix regression analysis. Unlike the latter methods, the transmutation method is not restricted to two-dimensional arrays of data, such as those obtained from HPLC/DAD, but is also applicable to chromatograms obtained from single detector experiments. Limitations of the method are discussed.
47 CFR 5.115 - Station identification.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL EXPERIMENTAL RADIO SERVICE (OTHER THAN BROADCAST... its assigned call sign at the end of each complete transmission: Provided, however, that the transmission of the call sign at the end of each transmission is not required for projects requiring continuous...
Li, Desheng
2014-01-01
This paper proposes a novel variant of the cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm, called CQPSO-DVSA-LFD, with two mechanisms to reduce the search space and avoid stagnation. The first mechanism, called Dynamic Varying Search Area (DVSA), limits the range of the particles' activity to a reduced area. The second uses Lévy flights to generate stochastic disturbances in the movement of particles, helping them escape local optima. To test the performance of CQPSO-DVSA-LFD, numerical experiments are conducted comparing the proposed algorithm with different variants of PSO. According to the experimental results, the proposed method performs better than the other PSO variants on both benchmark test functions and a combinatorial optimization problem, the job-shop scheduling problem. PMID:24851085
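The Lévy-flight disturbance can be sketched with Mantegna's algorithm for sampling approximately Lévy-stable step lengths. This is an illustrative sketch: the stability index β = 1.5 and the perturbation scale are assumptions, not values from the paper.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one approximately Levy-stable step via Mantegna's algorithm (1 < beta <= 2)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)     # scale of the Gaussian numerator
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)       # heavy-tailed: occasional long jumps

def perturb(position, scale=0.01, beta=1.5):
    """Add a Levy-flight disturbance to each coordinate of a particle."""
    return [x + scale * levy_step(beta) for x in position]
```

The heavy tail is the point of the design: most disturbances are small, but rare long jumps let a stagnating particle leave a local optimum, which a Gaussian perturbation of comparable scale would almost never do.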
Musclelike joint mechanism driven by dielectric elastomer actuator for robotic applications
NASA Astrophysics Data System (ADS)
Jung, Ho Sang; Cho, Kyeong Ho; Park, Jae Hyeong; Yang, Sang Yul; Kim, Youngeun; Kim, Kihyeon; Nguyen, Canh Toan; Phung, Hoa; Tien Hoang, Phi; Moon, Hyungpil; Koo, Ja Choon; Ryeol Choi, Hyouk
2018-07-01
The purpose of this study is to develop an artificial muscle actuator suitable for robotic applications, to demonstrate the feasibility of applying this actuator to an arm mechanism, and to control it as delicately and smoothly as a human arm. To accomplish this, we integrate a soft actuator, called the single-body dielectric elastomer actuator, which is very flexible and capable of high-speed operation, with a displacement amplification mechanism called the sliding filament joint mechanism, which mimics the sliding filament model of human muscle. In this paper, we describe the characteristics and control method of the actuation system, consisting of the actuator, the mechanism, and an embedded controller, and show experimental results for closed-loop position and static stiffness control of the robotic arm application. Finally, based on these results, we evaluate the performance of the application.
Measuring average angular velocity with a smartphone magnetic field sensor
NASA Astrophysics Data System (ADS)
Pili, Unofre; Violanda, Renante
2018-02-01
The angular velocity of a spinning object is conventionally measured using a device called a tachometer. However, used directly in a classroom setting, such a device makes the activity less instructive and less engaging, and several alternative classroom-suitable methods for measuring angular velocity have therefore been presented. In this paper, we present a further, smartphone-based alternative that uses the real-time magnetic field (hereafter simply called B-field) data-gathering capability of the smartphone's B-field sensor as the timer for measuring the average rotational period and average angular velocity. The built-in B-field sensor in smartphones has already found a number of uses in undergraduate experimental physics. For instance, in elementary electrodynamics it has been used to explore the well-known Biot-Savart law and to measure the permeability of air.
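One way to turn the B-field time series into an average angular velocity is to fix a small magnet to the rotating object so that each revolution produces one peak in the field magnitude at the sensor. The sketch below is illustrative (the threshold-crossing detector and the sample values are assumptions, not the paper's procedure): count N peaks and use ω = 2π(N−1)/(t_N − t_1).

```python
import math

def detect_peaks(times, b_mag, threshold):
    """Return timestamps where |B| first crosses above threshold (one magnet pass each)."""
    peaks = []
    above = False
    for t, b in zip(times, b_mag):
        if b > threshold and not above:
            peaks.append(t)   # rising edge = magnet passing the sensor
            above = True
        elif b <= threshold:
            above = False
    return peaks

def average_angular_velocity(peak_times):
    """Average omega = 2*pi * (number of revolutions) / (elapsed time)."""
    if len(peak_times) < 2:
        raise ValueError("need at least two peaks")
    revs = len(peak_times) - 1
    return 2 * math.pi * revs / (peak_times[-1] - peak_times[0])

# Toy trace: samples at t = 0..8 s, field spikes every 3 s
times = list(range(9))
b_mag = [60, 20, 20, 60, 20, 20, 60, 20, 20]
omega = average_angular_velocity(detect_peaks(times, b_mag, threshold=40))  # 2*pi/3 ≈ 2.094 rad/s
```

Averaging over the full first-to-last peak interval, rather than a single period, is what makes the measurement robust to timing jitter in individual samples.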
PISA: Federated Search in P2P Networks with Uncooperative Peers
NASA Astrophysics Data System (ADS)
Ren, Zujie; Shou, Lidan; Chen, Gang; Chen, Chun; Bei, Yijun
Recently, federated search in P2P networks has received much attention. Most of the previous work assumed a cooperative environment where each peer can actively participate in information publishing and distributed document indexing. However, little work has addressed the problem of incorporating uncooperative peers, which do not publish their own corpus statistics, into a network. This paper presents a P2P-based federated search framework called PISA which incorporates uncooperative peers as well as the normal ones. In order to address the indexing needs for uncooperative peers, we propose a novel heuristic query-based sampling approach which can obtain high-quality resource descriptions from uncooperative peers at relatively low communication cost. We also propose an effective method called RISE to merge the results returned by uncooperative peers. Our experimental results indicate that PISA can provide quality search results, while utilizing the uncooperative peers at a low cost.
Optimization of an exchange-correlation density functional for water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fritz, Michelle; Fernández-Serra, Marivi; Institute for Advanced Computational Science, Stony Brook University, Stony Brook, New York 11794-3800
2016-06-14
We describe a method, which we call data projection onto parameter space (DPPS), to optimize an energy functional of the electron density so that it reproduces a dataset of experimental magnitudes. Our scheme, based on Bayes' theorem, constrains the optimized functional not to depart unphysically from existing ab initio functionals. The resulting functional maximizes the probability of being the “correct” parameterization of a given functional form, in the sense of Bayesian theory. The application of DPPS to water sheds new light on why density functional theory has performed rather poorly for liquid water, on what improvements are needed, and on the intrinsic limitations of the generalized gradient approximation to electron exchange and correlation. Finally, we present tests of our water-optimized functional, which we call vdW-DF-w, showing that it performs very well for a variety of condensed water systems.
Mapping Base Modifications in DNA by Transverse-Current Sequencing
NASA Astrophysics Data System (ADS)
Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.
2018-02-01
Sequencing DNA modifications and lesions, such as methylation of cytosine and oxidation of guanine, is even more important and challenging than sequencing the genome itself. The traditional methods for detecting DNA modifications are either insensitive to these modifications or require additional processing steps to identify a particular type of modification. Transverse-current sequencing in nanopores can potentially identify the canonical bases and base modifications in the same run. In this work, we demonstrate that the most common DNA epigenetic modifications and lesions can be detected with any predefined accuracy based on their tunneling current signature. Our results are based on simulations of the nanopore tunneling current through DNA molecules, calculated using nonequilibrium electron-transport methodology within an effective multiorbital model derived from first-principles calculations, followed by a base-calling algorithm accounting for neighbor current-current correlations. This methodology can be integrated with existing experimental techniques to improve base-calling fidelity.
QuASAR: quantitative allele-specific analysis of reads
Harvey, Chris T.; Moyerbrailean, Gregory A.; Davis, Gordon O.; Wen, Xiaoquan; Luca, Francesca; Pique-Regi, Roger
2015-01-01
Motivation: Expression quantitative trait loci (eQTL) studies have discovered thousands of genetic variants that regulate gene expression, enabling a better understanding of the functional role of non-coding sequences. However, eQTL studies are costly, requiring large sample sizes and genome-wide genotyping of each sample. In contrast, analysis of allele-specific expression (ASE) is becoming a popular approach to detect the effect of genetic variation on gene expression, even within a single individual. This is typically achieved by counting the number of RNA-seq reads matching each allele at heterozygous sites and testing the null hypothesis of a 1:1 allelic ratio. In principle, when genotype information is not readily available, it could be inferred from the RNA-seq reads directly. However, there are currently no existing methods that jointly infer genotypes and conduct ASE inference, while considering uncertainty in the genotype calls. Results: We present QuASAR, quantitative allele-specific analysis of reads, a novel statistical learning method for jointly detecting heterozygous genotypes and inferring ASE. The proposed ASE inference step takes into consideration the uncertainty in the genotype calls, while including parameters that model base-call errors in sequencing and allelic over-dispersion. We validated our method with experimental data for which high-quality genotypes are available. Results for an additional dataset with multiple replicates at different sequencing depths demonstrate that QuASAR is a powerful tool for ASE analysis when genotypes are not available. Availability and implementation: http://github.com/piquelab/QuASAR. Contact: fluca@wayne.edu or rpique@wayne.edu Supplementary information: Supplementary Material is available at Bioinformatics online. PMID:25480375
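The null hypothesis of a 1:1 allelic ratio at a heterozygous site can be sketched as an exact two-sided binomial test on allele read counts. This is a generic illustration of that single testing step only, not QuASAR's model, which additionally handles genotype uncertainty, base-call errors and allelic over-dispersion.

```python
import math

def binom_pmf(k, n, p=0.5):
    """Binomial probability mass function."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def ase_pvalue(ref_count, alt_count):
    """Two-sided exact binomial test of a 1:1 allelic ratio at a het site.

    Sums the probabilities of all outcomes at least as extreme (i.e. no more
    probable) than the observed split of reads between the two alleles.
    """
    n = ref_count + alt_count
    observed = binom_pmf(ref_count, n)
    return min(1.0, sum(binom_pmf(k, n) for k in range(n + 1)
                        if binom_pmf(k, n) <= observed + 1e-12))
```

For example, a 10:10 split is perfectly consistent with the null (p = 1), while an 18:2 split at the same depth is strong evidence of allele-specific expression.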
Finger Vein Recognition Based on Local Directional Code
Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang
2012-01-01
Finger vein patterns are considered one of the most promising biometric authentication methods for their security and convenience. Most currently available finger vein recognition methods utilize features from a segmented blood vessel network. Because an improperly segmented network may degrade recognition accuracy, binary pattern based methods have been proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP. PMID:23202194
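The coding idea, quantizing each pixel's local gradient orientation into one of eight directions (an octonary digit), can be sketched as follows. This is an illustrative reconstruction of the general principle, not the exact LDC operator defined in the paper.

```python
import math

def directional_code(gx, gy):
    """Quantize a gradient vector's orientation into one of 8 codes (0..7)."""
    angle = math.atan2(gy, gx) % (2 * math.pi)
    # shift by half a sector so code 0 is centered on the +x direction
    return int((angle + math.pi / 8) // (math.pi / 4)) % 8

def local_directional_codes(img):
    """Code each interior pixel of a grayscale image (list of lists of numbers)
    by its central-difference gradient orientation; border pixels stay 0."""
    h, w = len(img), len(img[0])
    codes = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            codes[y][x] = directional_code(gx, gy)
    return codes
```

Unlike a binary pattern, each position carries three bits of orientation information, which is the "rich directional information" the abstract argues LBP-style descriptors discard.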
A novel method for intelligent fault diagnosis of rolling bearings using ensemble deep auto-encoders
NASA Astrophysics Data System (ADS)
Shao, Haidong; Jiang, Hongkai; Lin, Ying; Li, Xingqiu
2018-03-01
Automatic and accurate identification of rolling bearing fault categories, especially fault severities and fault orientations, is still a major challenge in rotating machinery fault diagnosis. In this paper, a novel method called ensemble deep auto-encoders (EDAEs) is proposed for intelligent fault diagnosis of rolling bearings. Firstly, different activation functions are employed as hidden functions to design a series of auto-encoders (AEs) with different characteristics. Secondly, EDAEs are constructed from these auto-encoders for unsupervised feature learning from the measured vibration signals. Finally, a combination strategy is designed to ensure accurate and stable diagnosis results. The proposed method is applied to analyze experimental bearing vibration signals. The results confirm that the proposed method removes the dependence on manual feature extraction, overcomes the limitations of individual deep learning models, and is more effective than existing intelligent diagnosis methods.
Sheet metals characterization using the virtual fields method
NASA Astrophysics Data System (ADS)
Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice
2018-05-01
In this work, a characterisation method involving a deep-notched specimen subjected to tensile loading is introduced. This specimen leads to heterogeneous states of stress and strain, the latter being measured using a stereo DIC system (MatchID). This heterogeneity enables the identification of multiple material parameters in a single test. In order to identify material parameters from the DIC data, an inverse method called the Virtual Fields Method is employed. Combined with recently developed sensitivity-based virtual fields, the method optimally locates the areas of the test where information about each material parameter is encoded, improving the accuracy of the identification over traditional user-defined virtual fields. It is shown that a single test performed at 45° to the rolling direction is sufficient to obtain all the anisotropic plastic parameters, thus reducing the experimental effort involved in characterisation. The paper presents the methodology and some numerical validation.
NASA Astrophysics Data System (ADS)
Rössler, Tomáš; Hrabovský, Miroslav; Pluháček, František
2005-08-01
The cotyle (acetabular cup) implant is abraded in the patient's body and its shape changes. Information about the magnitude of abrasion is contained in the resulting contour map of the implant, and the locations and dimensions of abraded areas can be computed from the deformation of the contours. A method called single-projector moiré topography was used to determine the contour lines. A theoretical description of the method is given first, followed by the design of the experimental set-up. A light-grating projector was developed to produce the periodic structure on the measured surface, and fringe shifting was used to increase the quantity of data. Finally, the digital processing applied to the moiré grating images is described, together with examples of processed images.
Li, Xue; Song, Zhengxiang
2015-04-09
Liquid pressure is a key parameter for detecting and judging faults in hydraulic mechanisms, but traditional measurement methods have many deficiencies. An effective non-intrusive method using an ultrasound-based technique to measure liquid pressure in small-diameter (less than 15 mm) pipelines is presented in this paper. The proposed method is based on the principle that the transmission speed of an ultrasonic wave in a Kneser liquid correlates with liquid pressure. Liquid pressure was calculated using the variation of ultrasonic propagation time in a liquid under different pressures: 0 Pa and X Pa. In this research the time difference was obtained by an electrical processing approach and was accurately measured to the nanosecond level through a high-resolution time measurement module. Because installation differences and liquid temperature can influence the measurement accuracy, an automatic gain control (AGC) circuit and a new back-propagation network (BPN) model accounting for liquid temperature were employed to improve the measurement results. The corresponding pressure values were finally obtained by utilizing the relationship between time difference, transient temperature and liquid pressure. An experimental pressure measurement platform was built, and the experimental results confirm that the proposed method has good measurement accuracy.
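The inversion from propagation-time difference to pressure can be sketched under a simple linearized speed-pressure model. All the constants below (path length, zero-pressure sound speed, pressure coefficient) are illustrative assumptions, not values from the paper, which instead calibrates the relationship with an AGC circuit and a temperature-aware BP network.

```python
def pressure_from_time_difference(dt, L=0.012, c0=1480.0, k=2.0e-6):
    """Invert dt = L/c0 - L/(c0 + k*P) for pressure P, assuming a linear
    speed-pressure relation c(P) = c0 + k*P (illustrative constants:
    L = path length [m], c0 = zero-pressure speed [m/s], k [m/s per Pa])."""
    if dt <= 0:
        return 0.0
    t0 = L / c0                 # transit time at zero gauge pressure
    if dt >= t0:
        raise ValueError("time difference exceeds zero-pressure transit time")
    cP = L / (t0 - dt)          # sound speed at the unknown pressure
    return (cP - c0) / k
```

The forward model and its inverse round-trip exactly, which is a useful sanity check before replacing the linear relation with a measured calibration curve.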
Manifold Regularized Experimental Design for Active Learning.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-12-02
Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim to label the most informative samples to reduce the labeling effort of the user. Many previous studies in active learning select samples one after another in a greedy manner. However, this is not very efficient, because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches select the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate, since the classification hyperplane is inaccurate when the training set is small. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel active learning method called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation of the samples selected to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
NASA Astrophysics Data System (ADS)
Kurata, Tomohiro; Oda, Shigeto; Kawahira, Hiroshi; Haneishi, Hideaki
2016-12-01
We have previously proposed a method for estimating intravascular oxygen saturation (SO_2) from images obtained by sidestream dark-field (SDF) imaging (we call it SDF oximetry) and investigated its fundamental characteristics by Monte Carlo simulation. In this paper, we propose a correction method for scattering by the tissue and perform experiments with turbid phantoms, as well as Monte Carlo simulation experiments, to investigate the influence of tissue scattering in SDF imaging. In the estimation method, we use modified extinction coefficients of hemoglobin, called average extinction coefficients (AECs), to correct for the influence of the bandwidth of the illumination sources, the characteristics of the imaging camera, and tissue scattering. We estimate the scattering coefficient of the tissue from the maximum slope of the pixel-value profile along a line perpendicular to the blood vessel's running direction in an SDF image, and correct the AECs using that scattering coefficient. To evaluate the proposed method, we developed a trial SDF probe that obtains three-band images by switching multicolor light-emitting diodes, and imaged turbid phantoms comprising agar powder, fat emulsion, and bovine-blood-filled glass tubes. We found that increased scattering by the phantom body decreased the AECs. The experimental results showed that using suitable values for the AECs led to more accurate SO_2 estimation, confirming the validity of the proposed correction method for improving the accuracy of SO_2 estimation.
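The core spectroscopic step behind any two-band oximetry scheme, solving for oxy- and deoxy-hemoglobin concentrations from absorbances at two wavelengths, can be sketched as a 2×2 Beer-Lambert inversion. The coefficients in the example are made-up numbers for illustration, not real extinction coefficients and not the paper's scattering-corrected AECs.

```python
def estimate_so2(A1, A2, e_hbo2, e_hb):
    """Solve the 2x2 Beer-Lambert system
        A1 = e_hbo2[0]*C_HbO2 + e_hb[0]*C_Hb
        A2 = e_hbo2[1]*C_HbO2 + e_hb[1]*C_Hb
    for the two hemoglobin concentrations and return
    SO2 = C_HbO2 / (C_HbO2 + C_Hb). Path length is assumed folded
    into the extinction coefficients."""
    (eo1, eo2), (ed1, ed2) = e_hbo2, e_hb
    det = eo1 * ed2 - eo2 * ed1          # must be nonzero: wavelengths must differ
    c_hbo2 = (A1 * ed2 - A2 * ed1) / det
    c_hb = (eo1 * A2 - eo2 * A1) / det
    return c_hbo2 / (c_hbo2 + c_hb)

# Synthetic check with invented coefficients: true SO2 = 0.8
so2 = estimate_so2(1.7, 1.4, e_hbo2=(2.0, 1.0), e_hb=(0.5, 3.0))  # ≈ 0.8
```

The paper's contribution sits upstream of this step: choosing effective (averaged, scattering-corrected) extinction coefficients so that this simple inversion remains accurate under broadband LEDs and tissue scattering.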
Biomedical discovery acceleration, with applications to craniofacial development.
Leach, Sonia M; Tipney, Hannah; Feng, Weiguo; Baumgartner, William A; Kasliwal, Priyanka; Schuyler, Ronald P; Williams, Trevor; Spritz, Richard A; Hunter, Lawrence
2009-03-01
The profusion of high-throughput instruments and the explosion of new results in the scientific literature, particularly in molecular biomedicine, is both a blessing and a curse to the bench researcher. Even knowledgeable and experienced scientists can benefit from computational tools that help navigate this vast and rapidly evolving terrain. In this paper, we describe a novel computational approach to this challenge, a knowledge-based system that combines reading, reasoning, and reporting methods to facilitate analysis of experimental data. Reading methods extract information from external resources, either by parsing structured data or using biomedical language processing to extract information from unstructured data, and track knowledge provenance. Reasoning methods enrich the knowledge that results from reading by, for example, noting two genes that are annotated to the same ontology term or database entry. Reasoning is also used to combine all sources into a knowledge network that represents the integration of all sorts of relationships between a pair of genes, and to calculate a combined reliability score. Reporting methods combine the knowledge network with a congruent network constructed from experimental data and visualize the combined network in a tool that facilitates the knowledge-based analysis of that data. An implementation of this approach, called the Hanalyzer, is demonstrated on a large-scale gene expression array dataset relevant to craniofacial development. The use of the tool was critical in the creation of hypotheses regarding the roles of four genes never previously characterized as involved in craniofacial development; each of these hypotheses was validated by further experimental work.
NASA Technical Reports Server (NTRS)
Lallman, Frederick J.; Davidson, John B.; Murphy, Patrick C.
1998-01-01
A method, called pseudo controls, of integrating several airplane controls to achieve cooperative operation is presented. The method eliminates conflicting control motions, minimizes the number of feedback control gains, and reduces the complication of feedback gain schedules. The method is applied to the lateral/directional controls of a modified high-performance airplane. The airplane has a conventional set of aerodynamic controls, an experimental set of thrust-vectoring controls, and an experimental set of actuated forebody strakes. The experimental controls give the airplane additional control power for enhanced stability and maneuvering capabilities while flying over an expanded envelope, especially at high angles of attack. The flight controls are scheduled to generate independent body-axis control moments. These control moments are coordinated to produce stability-axis angular accelerations. Inertial coupling moments are compensated. Thrust-vectoring controls are engaged according to their effectiveness relative to that of the aerodynamic controls. Vane-relief logic removes steady and slowly varying commands from the thrust-vectoring controls to alleviate heating of the thrust turning devices. The actuated forebody strakes are engaged at high angles of attack. This report presents the forward-loop elements of a flight control system that positions the flight controls according to the desired stability-axis accelerations. This report does not include the generation of the required angular acceleration commands by means of pilot controls or the feedback of sensed airplane motions.
Song, Jiangning; Li, Fuyi; Takemoto, Kazuhiro; Haffari, Gholamreza; Akutsu, Tatsuya; Chou, Kuo-Chen; Webb, Geoffrey I
2018-04-14
Determining the catalytic residues in an enzyme is critical to understanding the relationship between protein sequence, structure, and function, and to enhancing our ability to design novel enzymes and their inhibitors. Although many enzymes have been sequenced, and their primary and tertiary structures determined, experimental methods for enzyme functional characterization lag behind. Because experimental methods for identifying catalytic residues are resource- and labor-intensive, computational approaches have considerable value and are highly desirable for their ability to complement experimental studies, identify catalytic residues, and help bridge the sequence-structure-function gap. In this study, we describe a new computational method called PREvaIL for predicting enzyme catalytic residues. The method leverages a comprehensive set of informative features extracted from multiple levels, including sequence, structure, and residue-contact network, in a random forest machine-learning framework. Extensive benchmarking on eight datasets, based on 10-fold cross-validation and independent tests, with side-by-side comparisons against seven modern sequence- and structure-based methods, showed that PREvaIL achieved competitive predictive performance, with an area under the receiver operating characteristic curve ranging from 0.896 to 0.973 and an area under the precision-recall curve ranging from 0.294 to 0.523. We demonstrate that the method captures useful signals from different levels, leveraging these differential but complementary features to significantly improve the performance of catalytic residue prediction. We believe this new method can serve as a valuable tool both for understanding the complex sequence-structure-function relationships of proteins and for facilitating the characterization of novel enzymes lacking functional annotations.
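The headline metric here, the area under the ROC curve, can be computed directly from prediction scores via the Mann-Whitney statistic, independently of any particular classifier. A minimal sketch:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive is scored above a randomly chosen negative (ties count 1/2)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative label")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For a rare-positive task like catalytic-residue prediction, this rank-based view also explains why the reported AUROC (0.896-0.973) is much higher than the AUPRC (0.294-0.523): AUC is insensitive to class imbalance, while precision-recall is not.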
van Dijk, A M; van Weert, J C M; Dröes, R M
2012-12-01
Recently, a new communication method was introduced in nursing homes for people with dementia. This so-called Veder Method, developed by professional actors with a prior educational background in care, combines proven effective emotion-oriented care methods, like reminiscence, with theatrical stimuli like songs and poetry. The method is applied during theatre shows and living-room theatre activities. In this exploratory study, the added value of a living-room theatre activity according to the Veder Method, compared to a reminiscence group activity, was evaluated. Within a quasi-experimental design, three groups of nursing home residents with dementia were compared: experimental group 1 (E1; N=64) joined a 'living-room theatre activity' offered by trained caregivers; experimental group 2 (E2; N=31) joined a 'living-room theatre activity' offered by professional actors; the control group (N=52) received a reminiscence group activity. Behaviour, mood and quality of life were measured using standardized observation scales at three points in time: (T1) pretest; (T2) during the intervention; and (T3) posttest, two hours after the intervention. During and after the intervention, positive effects were found in favour of E2 on behaviour (i.e. laughing, recalled memories), mood (i.e. happy/content) and quality of life (i.e. social involvement, feeling at home). A living-room theatre activity according to the Veder Method has a more positive effect on nursing home residents than a normal reminiscence group activity, if offered by professional actors. This article is a slightly edited translation of 'Does theatre improve the quality of life of people with dementia?', International Psychogeriatrics 2012;24:36r381, by the same authors.
Beluga whale, Delphinapterus leucas, vocalizations from the Churchill River, Manitoba, Canada.
Chmelnitsky, Elly G; Ferguson, Steven H
2012-06-01
Classification of animal vocalizations is often done by a human observer using aural and visual analysis, but more efficient, automated methods have also been utilized to reduce bias and increase reproducibility. Beluga whale, Delphinapterus leucas, calls were described from recordings collected in the summers of 2006-2008 in the Churchill River, Manitoba. Calls (n=706) were classified based on aural and visual analysis, and call characteristics were measured; the calls were separated into 453 whistles (64.2%; 22 types), 183 pulsed/noisy calls (25.9%; 15 types), and 70 combined calls (9.9%; seven types). Measured parameters varied within each call type, but less variation existed in pulsed and noisy call types, and in some combined call types, than in whistles. A more efficient and repeatable hierarchical clustering method was then applied to 200 randomly chosen whistles using six call characteristics as variables; twelve groups were identified. Call characteristics varied less within the cluster-analysis groups than within the whistle types described by visual and aural analysis, and the results were similar to the whistle contours described. This study provides the first description of beluga calls in Hudson Bay; using two methods provides more robust interpretations and an assessment of appropriate methods for future studies.
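The hierarchical clustering step can be sketched as a naive single-linkage agglomeration over call-feature vectors. The linkage choice and the toy feature values below are illustrative assumptions; the study used six measured call characteristics per whistle.

```python
import math

def euclid(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(points, k):
    """Naive agglomerative clustering: repeatedly merge the two clusters
    whose closest members are nearest, until k clusters remain.
    O(n^3)-ish, fine for a couple hundred calls."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = (float("inf"), 0, 1)
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(euclid(a, b) for a in clusters[i] for b in clusters[j])
                if d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)   # merge the closest pair
    return clusters
```

In practice the six call characteristics would be standardized first, since features measured in hertz and in seconds differ by orders of magnitude and would otherwise dominate the distance.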
A dual tracer ratio method for comparative emission measurements in an experimental dairy housing
NASA Astrophysics Data System (ADS)
Mohn, Joachim; Zeyer, Kerstin; Keck, Margret; Keller, Markus; Zähner, Michael; Poteko, Jernej; Emmenegger, Lukas; Schrade, Sabine
2018-04-01
Agriculture, and in particular dairy farming, is an important source of ammonia (NH3) and non-carbon dioxide greenhouse gas (GHG) emissions. This calls for the development and quantification of effective mitigation strategies. Our study presents the implementation of a dual tracer ratio method in a novel experimental dairy housing with two identical, but spatially separated housing areas. Modular design and flexible floor elements allow the assessment of structural, process engineering and organisational abatement measures at practical scale. Thereby, the emission reduction potential of specific abatement measures can be quantified in relation to a reference system. Emissions in the naturally ventilated housing are determined by continuous dosing of two artificial tracers (sulphur hexafluoride SF6, trifluoromethylsulphur pentafluoride SF5CF3) and their real-time detection in the ppt range with an optimized GC-ECD method. The two tracers are dosed into different experimental sections, which enables the independent assessment of both housing areas. Mass flow emissions of NH3 and GHGs are quantified by areal dosing of tracer gases and multipoint sampling as well as real-time analysis of both tracer and target gases. Validation experiments demonstrate that the technique is suitable for both areal and point emission sources and achieves an uncertainty of less than 10% for the mass emissions of NH3, methane (CH4) and carbon dioxide (CO2), which is superior to other currently available methods. Comparative emission measurements in this experimental dairy housing will provide reliable, currently unavailable information on emissions for Swiss dairy farming and demonstrate the reduction potential of mitigation measures for NH3, GHGs and potentially other pollutants.
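The tracer-ratio principle behind the mass-flow quantification can be sketched as follows: the target gas emission equals the known tracer dosing rate scaled by the ratio of background-corrected mixing ratios and the ratio of molar masses. The molar masses are real, but the dosing rate and mixing ratios in the example are invented for illustration.

```python
def tracer_ratio_emission(q_tracer, dc_target, dc_tracer, m_target, m_tracer):
    """Tracer-ratio mass emission: Q_t = Q_tr * (dC_t / dC_tr) * (M_t / M_tr),
    where dC are background-corrected molar mixing ratios (same units for
    both gases), Q_tr is the tracer dosing rate (mass/time), and M are
    molar masses (g/mol)."""
    return q_tracer * (dc_target / dc_tracer) * (m_target / m_tracer)

# e.g. SF6 dosed at 5 mg/s; NH3 enhancement 50 ppb vs. SF6 enhancement 0.2 ppb
q_nh3 = tracer_ratio_emission(5.0, 50.0, 0.2, m_target=17.03, m_tracer=146.06)  # ≈ 145.7 mg/s
```

Because only ratios of concentrations enter, the unknown ventilation rate of the naturally ventilated housing cancels out, which is exactly why the method suits such buildings.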
Kutz, D F; Marzocchi, N; Fattori, P; Cavalcanti, S; Galletti, C
2005-06-01
A new method based on trinary logic is presented that checks the state of different control variables and synchronously records the physiological and behavioral data of behaving animals and humans. The basic information structure of the method is a time interval of defined maximum duration, called a time slice, during which the supervisor system periodically checks the status of a specific subset of input channels. An experimental condition is a sequence of time slices executed one after another according to the final status of the previous time slice. The proposed method implements in its data structure the possibility to branch like an if-else cascade and to repeat parts of itself recursively like a while-loop; its data structure therefore contains the most basic control structures of programming languages. The method was implemented using a real-time version of the LabVIEW programming environment to program and control our experimental setup. Using this supervision system, we synchronously record four analog data channels at 500 Hz (including eye movements) and the time stamps of up to six neurons at 100 kHz. The system reacts within 1 ms to changes of state of digital input channels, and is set to react to changes in eye position within 4 ms. The time slices, experimental conditions, and data are handled by relational databases, which facilitates the construction of new experimental conditions and data analysis. The proposed implementation allows continuous recording without an inter-trial gap for data storage or task management, and can be used to drive electrophysiological experiments with behaving animals and psychophysical studies with human subjects.
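The control structure described, time slices whose outcomes branch like an if-else and can loop back like a while, can be sketched as a small table-driven state machine. The slice names and channel names are invented for illustration; the real system runs under real-time LabVIEW with millisecond-resolution polling of hardware channels.

```python
def run_condition(slices, snapshots, start):
    """Run one experimental condition. `slices` maps a slice name to
    (channel, next_if_true, next_if_false): pointing a slice back at itself
    gives a while-loop, distinct targets give an if-else branch, and None
    terminates the condition. Each entry of `snapshots` is the channel
    status observed during one time slice."""
    name, trace = start, []
    for status in snapshots:
        if name is None:
            break
        channel, on_true, on_false = slices[name]
        trace.append(name)
        name = on_true if status.get(channel, False) else on_false
    return trace

# A fixation task: wait for fixation, wait for a saccade, then deliver reward.
slices = {
    "fixate":   ("fix",     "stimulus", "fixate"),    # loop until fixation
    "stimulus": ("saccade", "reward",   "stimulus"),  # loop until saccade
    "reward":   ("any",     None,       None),        # terminal slice
}
snaps = [{"fix": False}, {"fix": True}, {"saccade": False}, {"saccade": True}, {}]
trace = run_condition(slices, snaps, start="fixate")
```

Keeping the transition table as data rather than code mirrors the paper's design: new experimental conditions are assembled from stored time slices (here, dictionary rows) instead of being reprogrammed.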
Experimental investigation of complex circular Airy beam characteristics
NASA Astrophysics Data System (ADS)
Porfirev, A. P.; Fomchenkov, S. A.; Khonina, S. N.
2018-04-01
We demonstrate a new type of circular Airy beam, the so-called azimuthally modulated circular Airy beam, generated using a diffraction element whose transmission function is the sum of the transmission function of an element generating a "petal" pattern and that of an element generating a circular Airy beam. We experimentally investigate the propagation dynamics of such beams and demonstrate that their autofocusing and self-healing properties depend strongly on the number of generated petals. These beams are a combination of a conventional circular Airy beam and vortex laser beams (or their superpositions). Using a spatial light modulator, we demonstrate that these beams have unique properties such as autofocusing, "nondiffractive" propagation, and self-healing after passing through an obstacle. The experimental results are in good agreement with the simulation. We believe that these results can be very useful for lensless laser fabrication and laser manipulation techniques, as well as for the development of new filament plasma multi-channel formation methods.
Experimental violation of Bell inequalities for multi-dimensional systems
Lo, Hsin-Pin; Li, Che-Ming; Yabushita, Atsushi; Chen, Yueh-Nan; Luo, Chih-Wei; Kobayashi, Takayoshi
2016-01-01
Quantum correlations between spatially separated parts of a d-dimensional bipartite system (d ≥ 2) have no classical analog. Such correlations, also called entanglement, are not only conceptually important, but also have a profound impact on information science. In theory, the violation of Bell inequalities based on local realistic theories for d-dimensional systems provides evidence of quantum nonlocality. Experimental verification is required to confirm whether a quantum system of extremely large dimension can possess this feature; however, it had never been performed for large dimensions. Here, we report that Bell inequalities are experimentally violated for bipartite quantum systems of dimensionality d = 16 with the usual ensembles of polarization-entangled photon pairs. We also estimate that our entanglement source violates Bell inequalities for extremely high dimensionality of d > 4000. The designed scenario offers a possible new method to investigate the entanglement of multipartite systems of large dimensionality and their application in quantum information processing. PMID:26917246
Feasibility of quasi-random band model in evaluating atmospheric radiance
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Mirakhur, N.
1980-01-01
The use of the quasi-random band model in evaluating upwelling atmospheric radiation is investigated. The spectral transmittance and total band absorptance are evaluated for selected molecular bands by using the line-by-line model, the quasi-random band model, the exponential sum fit method, and empirical correlations, and these are compared with the available experimental results. The atmospheric transmittance and upwelling radiance were calculated by using the line-by-line and quasi-random band models and were compared with the results of an existing program called LOWTRAN. The results obtained by the exponential sum fit and empirical relations were not in good agreement with experimental results, and their use cannot be justified for atmospheric studies. The line-by-line model was found to be the best model for atmospheric applications, but it is not practical because of high computational costs. The results of the quasi-random band model compare well with the line-by-line and experimental results, and its use is recommended for the evaluation of atmospheric radiation.
A call for virtual experiments: accelerating the scientific process.
Cooper, Jonathan; Vik, Jon Olav; Waltemath, Dagmar
2015-01-01
Experimentation is fundamental to the scientific method, whether for exploration, description or explanation. We argue that promoting the reuse of virtual experiments (the in silico analogues of wet-lab or field experiments) would vastly improve the usefulness and relevance of computational models, encouraging critical scrutiny of models and serving as a common language between modellers and experimentalists. We review the benefits of reusable virtual experiments: in specifying, assaying, and comparing the behavioural repertoires of models; as prerequisites for reproducible research; to guide model reuse and composition; and for quality assurance in the translational application of models. A key step towards achieving this is that models and experimental protocols should be represented separately, but annotated so as to facilitate the linking of models to experiments and data. Lastly, we outline how the rigorous, streamlined confrontation between experimental datasets and candidate models would enable a "continuous integration" of biological knowledge, transforming our approach to systems biology. Copyright © 2014 Elsevier Ltd. All rights reserved.
Baygin, Mehmet; Karakose, Mehmet
2013-01-01
Nowadays, the increasing use of group elevator control systems owing to increasing building heights makes the development of high-performance algorithms necessary for saving time and energy. Although there are many studies on this topic in the literature, they are still not effective enough because they cannot evaluate all features of the system. In this paper, a new immune-system-based optimal estimation approach for the dynamic control of group elevator systems is studied. The method is mainly based on estimating the optimal route by optimizing all calls with genetic, immune system, and DNA computing algorithms, and it is evaluated with a fuzzy system. The system is dynamic with respect to the state of the calls and the choice of the most appropriate algorithm, and it also works adaptively with respect to parameters such as the number of floors and cabins. This new approach, which provides both time and energy savings, was carried out in real time. The experimental results comparatively demonstrate the effectiveness of the method. With the dynamic and adaptive control approach developed in this study, significant progress over traditional methods has been achieved in group elevator control in terms of time and energy efficiency. PMID:23935433
Selecting a restoration technique to minimize OCR error.
Cannon, M; Fugate, M; Hush, D R; Scovel, C
2003-01-01
This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that, on average, the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization, for which we prove a distribution-independent finite-sample bound on estimation error. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function, and we prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm, we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet, a modification of the Pocket algorithm, to this mapped data. Finally, we apply these methods to a collection of documents and report on the experimental results.
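The nearest-neighbor idea mentioned above can be sketched directly: label each training document with the technique that minimized its measured OCR error, then give a new document the label of its nearest neighbor in feature space. This is only the 1-NN selection step, not the paper's GMR algorithm, and the feature names and error values below are hypothetical.

```python
import numpy as np

# Minimal sketch of nearest-neighbour selection of a restoration technique.
# Illustrative only: features and error values are made up, and this shows
# the 1-NN idea from the abstract, not the GMR algorithm.

def best_technique(features, train_x, train_err):
    """Pick the technique that minimised OCR error on the nearest training doc.

    train_x:   (n_docs, n_features) document features (e.g. noise, contrast)
    train_err: (n_docs, n_techniques) measured OCR error per technique
    """
    nearest = np.argmin(np.linalg.norm(train_x - features, axis=1))
    return int(np.argmin(train_err[nearest]))

train_x = np.array([[0.1, 0.9],    # clean, high-contrast document
                    [0.8, 0.3]])   # noisy, low-contrast document
train_err = np.array([[0.02, 0.10],   # technique 0 best for clean docs
                      [0.30, 0.05]])  # technique 1 best for noisy docs

# A noisy query document is routed to technique 1:
assert best_technique(np.array([0.7, 0.2]), train_x, train_err) == 1
```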
Hidden Markov induced Dynamic Bayesian Network for recovering time evolving gene regulatory networks
NASA Astrophysics Data System (ADS)
Zhu, Shijia; Wang, Yadong
2015-12-01
Dynamic Bayesian Networks (DBN) have been widely used to recover gene regulatory relationships from time-series data in computational systems biology. Their standard assumption is 'stationarity', and several research efforts have therefore recently been proposed to relax this restriction. However, those methods suffer from three challenges: long running time, low accuracy, and reliance on parameter settings. To address these problems, we propose a novel non-stationary DBN model that extends each hidden node of a Hidden Markov Model into a DBN (called HMDBN), which properly handles the underlying time-evolving networks. Correspondingly, an improved structural EM algorithm is proposed to learn the HMDBN. It dramatically reduces the search space, thereby substantially improving computational efficiency. Additionally, we derive a novel generalized Bayesian Information Criterion under the non-stationary assumption (called BWBIC), which significantly improves reconstruction accuracy and largely reduces over-fitting. Moreover, the re-estimation formulas for all parameters of our model are derived, enabling us to avoid reliance on parameter settings. Compared to state-of-the-art methods, the experimental evaluation of our proposed method on both synthetic and real biological data demonstrates consistently high prediction accuracy and significantly improved computational efficiency, even with no prior knowledge or parameter settings.
Bonell, C
1999-07-01
In recent years, there have been calls within the United Kingdom's National Health Service (NHS) for evidence-based health care. These resonate with long-standing calls for nursing to become a research-based profession. Evidence-based practice could enable nurses to demonstrate their unique contribution to health care outcomes and support their seeking greater professionalization, in terms of enhanced authority and autonomy. Nursing's professionalization project, and, within this, various practices comprising the 'new nursing', whilst sometimes not delivering all that was hoped of them, have been important in developing certain conditions conducive to evidence-based practice, notably a critical perspective on practice and a reluctance merely to follow physicians' orders. However, nursing has often been hesitant in its adoption of quantitative and experimental research. This hesitancy, it is argued, has been influenced by some authors within the new nursing propounding a stereotyped view of quantitative/experimental methods which equates them with a number of methodological and philosophical positions deemed, by at least some of these authors, inimical to, or problematic within, nursing research. It is argued not only that the logic on which the various stereotyped views are based is flawed, but further that the wider influence of these viewpoints on nurses could lead to a greater marginalization of nurses in research and evidence-based practice initiatives, perhaps leading to evidence-based nursing being led by other groups. In the longer term, this might result in a form of evidence-based nursing emphasizing routinization, thus, ironically, working against strategies of professional authority and autonomy embedded in the new nursing. Nursing research should instead follow the example of nurse researchers who already embrace multiple methods.
While the paper describes United Kingdom experiences and debates, points raised about the importance of questioning stereotyped views of research should have international relevance.
Rudolph A. Marcus and His Theory of Electron Transfer Reactions
Rudolph A. Marcus began developing his theory of electron transfer reactions in the early 1950s. A strong experimental program on electron transfer at Brookhaven provided the first verification of several of the predictions of his theory, including experimental evidence for the so-called "inverted region", in which reaction rates decrease as the driving force increases.
Gerhardt, H Carl; Brooks, Robert
2009-10-01
Even simple biological signals vary in several measurable dimensions. Understanding their evolution requires, therefore, a multivariate understanding of selection, including how different properties interact to determine the effectiveness of the signal. We combined experimental manipulation with multivariate selection analysis to assess female mate choice on the simple trilled calls of male gray treefrogs. We independently and randomly varied five behaviorally relevant acoustic properties in 154 synthetic calls. We compared response times of each of 154 females to one of these calls with its response to a standard call that had mean values of the five properties. We found directional and quadratic selection on two properties indicative of the amount of signaling, pulse number, and call rate. Canonical rotation of the fitness surface showed that these properties, along with pulse rate, contributed heavily to a major axis of stabilizing selection, a result consistent with univariate studies showing diminishing effects of increasing pulse number well beyond the mean. Spectral properties contributed to a second major axis of stabilizing selection. The single major axis of disruptive selection suggested that a combination of two temporal and two spectral properties with values differing from the mean should be especially attractive.
Acoustic Blind Deconvolution and Frequency-Difference Beamforming in Shallow Ocean Environments
2012-01-01
This work draws on data from an acoustic field experiment (FAF06) conducted in July 2006 off the west coast of Italy with Dr. Heechun Song of the Scripps Institution of Oceanography, as well as signals from seismic surveying and whale calls recorded on a 12-element vertical array; the whale call frequencies range from 100 to 500 Hz. Ms. Abadi and Dr. Thode had considerable success simulating the experimental environment, deconvolving whale calls, and ranging the sources.
BAYESIAN BICLUSTERING FOR PATIENT STRATIFICATION.
Khakabimamaghani, Sahand; Ester, Martin
2016-01-01
The move from Empirical Medicine towards Personalized Medicine has attracted attention to Stratified Medicine (SM). Several methods for patient stratification, the central task of SM, have been proposed in the literature; however, significant open issues remain. First, it is still unclear whether integrating different datatypes will help in detecting disease subtypes more accurately and, if not, which datatype(s) are most useful for this task. Second, it is not clear how we can compare different methods of patient stratification. Third, as most of the proposed stratification methods are deterministic, there is a need to investigate the potential benefits of applying probabilistic methods. To address these issues, we introduce a novel integrative Bayesian biclustering method, called B2PS, for patient stratification and propose methods for evaluating the results. Our experimental results demonstrate the superiority of B2PS over a popular state-of-the-art method and the benefits of Bayesian approaches. Our results agree with the intuition that transcriptomic data forms a better basis for patient stratification than genomic data.
NASA Astrophysics Data System (ADS)
Kronsteiner, J.; Horwatitsch, D.; Zeman, K.
2017-10-01
Thermo-mechanical numerical modelling and simulation of extrusion processes faces several serious challenges. Large plastic deformations, combined with a strong coupling of thermal and mechanical effects, lead to a high numerical demand for the solution as well as for the handling of mesh distortions. The two numerical methods presented in this paper also reflect two different ways to deal with mesh distortions. Lagrangian Finite Element Methods (FEM) tackle distorted elements by building a new mesh (called re-meshing), whereas Arbitrary Lagrangian Eulerian (ALE) methods use an "advection" step to remap the solution from the distorted to the undistorted mesh. Another difference between conventional Lagrangian and ALE methods is the separate treatment of material and mesh in ALE, allowing the definition of individual velocity fields. In theory, an ALE formulation contains both the Eulerian formulation and the Lagrangian description of the material as special cases. The investigations presented in this paper dealt with the direct extrusion of a tube profile using EN-AW 6082 aluminium alloy and a comparison of experimental with Lagrangian and ALE results. The numerical simulations cover the billet upsetting and continue until one third of the billet length has been extruded. A good qualitative correlation of experimental and numerical results was found; however, major differences between the Lagrangian and ALE methods concerning thermo-mechanical coupling lead to deviations in the thermal results.
Regional regularization method for ECT based on spectral transformation of Laplacian
NASA Astrophysics Data System (ADS)
Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.
2016-10-01
Image reconstruction in electrical capacitance tomography is an ill-posed inverse problem, and regularization techniques are usually used to suppress noise in its solution. An anisotropic regional regularization algorithm for electrical capacitance tomography is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity Laplacian as a regularization term. With the optimum regional regularizer, a priori knowledge of the local nonlinearity degree of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify the capability of the new regularization algorithm to reconstruct images of superior quality compared with two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with the experimental data.
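The conventional baseline the paper compares against is standard Tikhonov regularization, which can be sketched in a few lines. This is illustrative only; the paper's regional regularizer additionally weights the penalty using the spectral transformation of the sensitivity Laplacian, which is not reproduced here.

```python
import numpy as np

# Standard (zeroth-order) Tikhonov regularization for an ill-posed linear
# inverse problem S g = c:
#   g = argmin ||S g - c||^2 + lam ||g||^2 = (S^T S + lam I)^{-1} S^T c.
# This is the conventional baseline, not the paper's regional regularizer.

def tikhonov(S, c, lam):
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ c)

# With S = I the solution shrinks toward zero as lam grows: g = c / (1 + lam).
g = tikhonov(np.eye(2), np.array([3.0, 4.0]), 1.0)
assert np.allclose(g, [1.5, 2.0])
```

Increasing `lam` trades data fidelity for noise suppression, which is exactly the balance the regional regularizer tunes locally.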
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, Romarly F. da; Centro de Ciências Naturais e Humanas, Universidade Federal do ABC, 09210-580 Santo André, São Paulo; Oliveira, Eliane M. de
2015-03-14
We report theoretical and experimental total cross sections for electron scattering by phenol (C6H5OH). The experimental data were obtained with an apparatus based in Madrid, and the cross sections were calculated with two different methodologies: the independent atom method with screening corrected additivity rule (IAM-SCAR) and the Schwinger multichannel method with pseudopotentials (SMCPP). The SMCPP method in the N_open-channel coupling scheme, at the static-exchange-plus-polarization approximation, is employed to calculate the scattering amplitudes at impact energies ranging from 5.0 eV to 50 eV. We discuss the multichannel coupling effects in the calculated cross sections, in particular how the number of excited states included in the open-channel space impacts upon the convergence of the elastic cross sections at higher collision energies. The IAM-SCAR approach was also used to obtain the elastic differential cross sections (DCSs) and to correct the experimental total cross sections for the so-called forward angle scattering effect. We found very good agreement between our SMCPP theoretical differential, integral, and momentum transfer cross sections and experimental data for benzene (a molecule differing from phenol by replacing a hydrogen atom in benzene with a hydroxyl group). Although some discrepancies were found at lower energies, the agreement between the SMCPP data and the DCSs obtained with the IAM-SCAR method improves, as expected, as the impact energy increases. We also have good agreement among the present SMCPP calculated total cross section (which includes elastic, 32 inelastic electronic excitation processes, and ionization contributions, the latter estimated with the binary-encounter-Bethe model), the IAM-SCAR total cross section, and the experimental data when the latter are corrected for the forward angle scattering effect [Fuss et al., Phys. Rev. A 88, 042702 (2013)].
NASA Astrophysics Data System (ADS)
Shao, Haidong; Jiang, Hongkai; Zhang, Haizhou; Duan, Wenjing; Liang, Tianchen; Wu, Shuaipeng
2018-02-01
The vibration signals collected from rolling bearings are usually complex and non-stationary, with heavy background noise. Therefore, it is a great challenge to efficiently learn the representative fault features of the collected vibration signals. In this paper, a novel method called improved convolutional deep belief network (CDBN) with compressed sensing (CS) is developed for feature learning and fault diagnosis of rolling bearings. Firstly, CS is adopted to reduce the amount of vibration data and improve analysis efficiency. Secondly, a new CDBN model is constructed with Gaussian visible units to enhance the feature learning ability for the compressed data. Finally, the exponential moving average (EMA) technique is employed to improve the generalization performance of the constructed deep model. The developed method is applied to analyze experimental rolling bearing vibration signals, and the results confirm that it is more effective than traditional methods.
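The EMA step mentioned above is a simple recursion worth making concrete. This is the generic exponential moving average, a sketch of the technique the abstract names; the smoothing factor and data are illustrative, not the paper's settings.

```python
import numpy as np

# Generic exponential moving average: ema_t = alpha * x_t + (1 - alpha) * ema_{t-1}.
# Commonly used to smooth a sequence of values (e.g. model parameters) so that
# the result tracks the data with reduced jitter. Alpha and data are illustrative.

def ema(values, alpha):
    out = np.empty(len(values), dtype=float)
    out[0] = values[0]
    for t in range(1, len(values)):
        out[t] = alpha * values[t] + (1.0 - alpha) * out[t - 1]
    return out

smoothed = ema([0.0, 1.0, 1.0, 1.0], alpha=0.5)
# 0.0 -> 0.5 -> 0.75 -> 0.875: the average approaches the data geometrically
assert np.allclose(smoothed, [0.0, 0.5, 0.75, 0.875])
```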
Video based object representation and classification using multiple covariance matrices.
Zhang, Yurong; Liu, Quan
2017-01-01
Video-based object recognition and classification has been widely studied in the computer vision and image processing areas. One main issue of this task is to develop an effective representation for video, a problem that can generally be formulated as image set representation. In this paper, we present a new method called Multiple Covariance Discriminative Learning (MCDL) for the image set representation and classification problem. The core idea of MCDL is to represent an image set using multiple covariance matrices, with each covariance matrix representing one cluster of images. Firstly, we use the Nonnegative Matrix Factorization (NMF) method to cluster the images within each image set, and then adopt Covariance Discriminative Learning on each cluster (subset) of images. Finally, we adopt KLDA and a nearest-neighbor classification method for image set classification. Promising experimental results on several datasets show the effectiveness of our MCDL method.
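The core representation step, one covariance matrix per cluster of images, can be sketched as follows. This is illustrative only: cluster labels are given directly here (the paper obtains them with NMF), and the discriminative learning and KLDA stages are omitted.

```python
import numpy as np

# Sketch of the per-cluster covariance representation: an image set is
# summarised by one covariance matrix per cluster of image feature vectors.
# Cluster labels are assumed given; the paper derives them via NMF.

def cluster_covariances(features, labels):
    """features: (n_images, d) feature vectors; labels: cluster id per image.
    Returns {cluster_id: (d, d) covariance matrix}."""
    return {k: np.cov(features[labels == k].T)
            for k in np.unique(labels)}

feats = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0],   # cluster 0
                  [5.0, 0.0], [7.0, 0.0]])              # cluster 1
labels = np.array([0, 0, 0, 1, 1])
covs = cluster_covariances(feats, labels)

assert covs[0].shape == (2, 2)
assert np.isclose(covs[1][0, 0], 2.0)   # sample variance of [5, 7] (ddof=1)
```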
Harmonic component detection: Optimized Spectral Kurtosis for operational modal analysis
NASA Astrophysics Data System (ADS)
Dion, J.-L.; Tawfiq, I.; Chevallier, G.
2012-01-01
This work is a contribution to the field of Operational Modal Analysis (OMA), which identifies the modal parameters of mechanical structures using only measured responses. The study deals with structural responses coupled with harmonic components that are amplitude- and frequency-modulated over a short range, a common combination for mechanical systems with engines and other rotating machines in operation. These harmonic components generate misleading data that the classical OMA methods interpret erroneously. The present work attempts to differentiate maxima in spectra stemming from harmonic components from those stemming from structural modes. The proposed detection method is based on the so-called Optimized Spectral Kurtosis and is compared with other definitions of Spectral Kurtosis described in the literature. After a parametric study of the method, a critical study is performed on numerical simulations and then on an experimental structure in operation in order to assess the method's performance.
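One common frame-based definition of Spectral Kurtosis (not necessarily the paper's optimized variant) makes the harmonic/mode distinction concrete: a constant-amplitude harmonic gives SK = -1 at its bin, whereas a randomly excited structural resonance gives SK near 0.

```python
import numpy as np

# A common estimator of Spectral Kurtosis over successive FFT frames:
#   SK(f) = E[|X(f)|^4] / (E[|X(f)|^2])^2 - 2.
# A deterministic constant-amplitude harmonic yields SK = -1, while a
# Gaussian (randomly excited) component yields SK ~ 0, which is the basis
# for separating harmonics from structural modes. Sketch only; this is not
# the paper's Optimized Spectral Kurtosis.

def spectral_kurtosis(x, frame_len):
    frames = x[: len(x) // frame_len * frame_len].reshape(-1, frame_len)
    X = np.fft.rfft(frames, axis=1)
    p2 = np.mean(np.abs(X) ** 2, axis=0)
    p4 = np.mean(np.abs(X) ** 4, axis=0)
    return p4 / (p2 ** 2 + 1e-30) - 2.0   # eps avoids 0/0 in empty bins

frame = 64
t = np.arange(frame * 50)
x = np.sin(2 * np.pi * 8 * t / frame)     # harmonic aligned with bin 8
sk = spectral_kurtosis(x, frame)
assert abs(sk[8] + 1.0) < 1e-6            # constant-amplitude harmonic -> SK = -1
```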
Spotting the difference in molecular dynamics simulations of biomolecules
NASA Astrophysics Data System (ADS)
Sakuraba, Shun; Kono, Hidetoshi
2016-08-01
Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.
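The classical Fisher LDA direction that LDA-ITER builds on can be sketched directly. This shows only the single projection w = Sw⁻¹(m₁ - m₀) on synthetic data; the paper's iterative procedure and its non-overlap criterion are not reproduced here.

```python
import numpy as np

# Classical Fisher LDA direction between two sets of simulation snapshots:
# w = Sw^{-1} (m1 - m0), with Sw the pooled within-class scatter matrix.
# Illustrative sketch with synthetic data; LDA-ITER iterates on this idea.

def lda_direction(a, b):
    ma, mb = a.mean(axis=0), b.mean(axis=0)
    sw = (a - ma).T @ (a - ma) + (b - mb).T @ (b - mb)   # within-class scatter
    return np.linalg.solve(sw, mb - ma)

a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # "trajectory" 1
b = a + np.array([4.0, 0.0])                                    # "trajectory" 2
w = lda_direction(a, b)

# Projections of the two trajectories do not overlap along w:
assert (a @ w).max() < (b @ w).min()
```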
NASA Technical Reports Server (NTRS)
Noever, David A.
2000-01-01
The effects of gravity in influencing the theoretical limit for bubble lattice coarsening and aging behavior, otherwise called von Neumann's law, are examined theoretically and experimentally. Preliminary microgravity results are discussed.
Experimental Drug Metarrestin Targets Metastatic Tumors
An experimental drug called metarrestin appears to selectively target tumors that have spread to other parts of the body. As this Cancer Currents blog post reports, the drug shrank metastatic tumors and extended survival in mouse models of pancreatic cancer.
Theory of the development of alternans in the heart during controlled diastolic interval pacing
NASA Astrophysics Data System (ADS)
Otani, Niels F.
2017-09-01
The beat-to-beat alternation in action potential durations (APDs) in the heart, called APD alternans, has been linked to the development of serious cardiac rhythm disorders, including ventricular tachycardia and fibrillation. The length of the period between action potentials, called the diastolic interval (DI), is a key dynamical variable in the standard theory of alternans development. Thus, methods that control the DI may be useful in preventing dangerous cardiac rhythms. In this study, we examine the dynamics of alternans during controlled-DI pacing using a series of single-cell and one-dimensional (1D) fiber models of alternans dynamics. We find that a model that combines a so-called memory model with a calcium cycling model can reasonably explain two key experimental results: the possibility of alternans during constant-DI pacing and the phase lag of APDs behind DIs during sinusoidal-DI pacing. We also find that these results can be replicated by incorporating the memory model into an amplitude equation description of a 1D fiber. The 1D fiber result is potentially concerning because it seems to suggest that constant-DI control of alternans can only be effective over only a limited region in space.
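The standard memoryless restitution map that the theory above builds on can be simulated in a few lines, and it shows why alternans during constant-DI pacing is informative: under constant cycle length the map alternates when the restitution slope exceeds 1, but under constant DI it produces a constant APD, so observed constant-DI alternans points to memory effects. The restitution curve and parameters below are illustrative, not the paper's model.

```python
import math

# Memoryless restitution map APD_{n+1} = f(DI_n). Under constant-cycle-length
# (BCL) pacing, DI_n = BCL - APD_n and alternans appears when |f'| > 1 at the
# fixed point; under constant-DI pacing this map gives a constant APD, which
# is why alternans seen during constant-DI pacing implies memory. Parameters
# are illustrative (ms).

def f(di):
    return 200.0 - 150.0 * math.exp(-di / 25.0)   # exponential restitution

def pace_constant_bcl(bcl, n=300, apd0=150.0):
    apds = [apd0]
    for _ in range(n):
        apds.append(f(bcl - apds[-1]))            # DI = BCL - APD
    return apds

def pace_constant_di(di, n=300, apd0=150.0):
    apds = [apd0]
    for _ in range(n):
        apds.append(f(di))                        # DI clamped by the pacer
    return apds

bcl_apds = pace_constant_bcl(200.0)   # steep restitution slope -> alternans
di_apds = pace_constant_di(50.0)      # memoryless map -> constant APD

assert abs(bcl_apds[-1] - bcl_apds[-2]) > 30.0    # sustained APD alternans
assert abs(di_apds[-1] - di_apds[-2]) < 1e-9      # no alternans at fixed DI
```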
Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.
2008-06-01
An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near-continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near-equilibrium flows (DREAM-I) or instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run times over single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges, and the results are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
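The statistical idea behind DREAM's ensemble averaging, that averaging M repeated runs reduces random scatter by roughly √M, can be illustrated with a toy example. This is synthetic noise on a known field, not a DSMC simulation.

```python
import numpy as np

# Toy illustration of ensemble averaging: M noisy "runs" of the same field,
# averaged together, have roughly sqrt(M) less scatter than a single run.
# Synthetic data only; not a DSMC computation.

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
true_field = np.sin(x)

runs = true_field + 0.3 * rng.standard_normal((10, x.size))  # 10 noisy runs
ensemble = runs.mean(axis=0)                                 # ensemble average

single_rms = np.sqrt(np.mean((runs[0] - true_field) ** 2))
ens_rms = np.sqrt(np.mean((ensemble - true_field) ** 2))
assert ens_rms < single_rms    # scatter reduced by roughly sqrt(10)
```

The 2.5-3.3x uncertainty reduction reported above for 10 ensembled runs is consistent with this √10 ≈ 3.2 scaling.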
Implementation of sobel method to detect the seed rubber plant leaves
NASA Astrophysics Data System (ADS)
Suyanto; Munte, J.
2018-03-01
This research was conducted to develop a system that can identify and recognize the type of rubber tree based on the pattern of the plant's leaves. The research steps are image data acquisition, image processing, edge detection, and identification by template matching. Edge detection uses the Sobel operator. Pattern recognition takes an image as input and compares it with other images in a database called templates. Experiments were carried out in one phase, identification of the leaf edge, using leaf images of 14 superior rubber clones and 5 test images for each type (clone) of the plant. The experiments achieved a recognition rate of 91.79%.
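The Sobel step described above can be sketched in plain NumPy: cross-correlate the image with the two 3x3 Sobel kernels and combine the responses into a gradient magnitude. The synthetic step-edge image below stands in for the leaf images, which are not reproduced here.

```python
import numpy as np

# Sobel edge detection: cross-correlate with the horizontal and vertical
# Sobel kernels and combine into a gradient magnitude ("valid" output).
# Demonstrated on a synthetic step edge rather than a leaf image.

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_magnitude(img):
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for r in range(h - 2):
        for c in range(w - 2):
            win = img[r:r + 3, c:c + 3]
            out[r, c] = np.hypot(np.sum(win * KX), np.sum(win * KY))
    return out

img = np.zeros((8, 8))
img[:, 4:] = 1.0                    # vertical step edge between columns 3 and 4
mag = sobel_magnitude(img)

assert np.all(mag[:, 2:4] == 4.0)   # strong response on the edge columns
assert np.all(mag[:, [0, 1, 4, 5]] == 0.0)   # flat regions give no response
```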
Identification of inactivity behavior in smart home.
Poujaud, J; Noury, N; Lundy, J-E
2008-01-01
To help elderly people live independently at home, the TIMC-IMAG laboratory developed health smart homes called 'HIS'. These smart homes comprise several sensors to monitor the patients' activities of daily living. Volunteers agreed to be monitored in their own flats for 2 years. For one year, we carried out our survey on one elderly patient. This experiment gave us access to relevant physiological, environmental, and activity information. This paper focuses on daily living activity. We introduce an original data-splitting method based on the relationship between the time frame and the location in the flat. We also present two different methods to determine a threshold of critical inactivity and discuss their possible uses.
Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping.
Jaakkola, Anttoni; Hyyppä, Juha; Hyyppä, Hannu; Kukko, Antero
2008-09-01
Automated processing of the data provided by a laser-based mobile mapping system will be a necessity due to the huge amount of data produced. In the future, vehicle-based laser scanning, here called mobile mapping, should see considerable use for road environment modelling. Since the geometry of the scanning and the point density differ from airborne laser scanning, new algorithms are needed for information extraction. In this paper, we propose automatic methods for classifying the road marking and kerbstone points and for modelling the road surface as a triangulated irregular network. On the basis of experimental tests, the mean classification accuracies obtained using the automatic method for lines, zebra crossings and kerbstones were 80.6%, 92.3% and 79.7%, respectively.
NASA Astrophysics Data System (ADS)
Trinh, N. D.; Fadil, M.; Lewitowicz, M.; Ledoux, X.; Laurent, B.; Thomas, J.-C.; Clerc, T.; Desmezières, V.; Dupuis, M.; Madeline, A.; Dessay, E.; Grinyer, G. F.; Grinyer, J.; Menard, N.; Porée, F.; Achouri, L.; Delaunay, F.; Parlog, M.
2018-07-01
Double differential neutron spectra (energy, angle) originating from a thick natCu target bombarded by a 12 MeV/nucleon 36S16+ beam were measured by the activation method and the Time-of-flight technique at the Grand Accélérateur National d'Ions Lourds (GANIL). A neutron spectrum unfolding algorithm combining the SAND-II iterative method and Monte-Carlo techniques was developed for the analysis of the activation results that cover a wide range of neutron energies. It was implemented into a graphical user interface program, called GanUnfold. The experimental neutron spectra are compared to Monte-Carlo simulations performed using the PHITS and FLUKA codes.
Interior noise reduction by alternate resonance tuning
NASA Technical Reports Server (NTRS)
Bliss, Donald B.; Gottwald, James A.; Bryce, Jeffrey W.
1987-01-01
Existing interior noise reduction techniques for aircraft fuselages perform reasonably well at higher frequencies, but are inadequate at low frequencies, particularly with respect to the low blade passage harmonics with high forcing levels found in propeller aircraft. A method is studied which considers aircraft fuselages lined with panels alternately tuned to frequencies above and below the frequency that must be attenuated. Adjacent panels would oscillate at equal amplitude, giving equal acoustic source strength, but with opposite phase. Provided these adjacent panels are acoustically compact, the resulting cancellation causes the interior acoustic modes to be cut off, and therefore to be nonpropagating and evanescent. This interior noise reduction method, called Alternate Resonance Tuning (ART), is being investigated theoretically and experimentally. Progress to date is discussed.
Bat echolocation calls facilitate social communication
Knörnschild, Mirjam; Jung, Kirsten; Nagy, Martina; Metz, Markus; Kalko, Elisabeth
2012-01-01
Bat echolocation is primarily used for orientation and foraging but also holds great potential for social communication. The communicative function of echolocation calls is still largely unstudied, especially in the wild. Eavesdropping on vocal signatures encoding social information in echolocation calls has not, to our knowledge, been studied in free-living bats so far. We analysed echolocation calls of the polygynous bat Saccopteryx bilineata and found pronounced vocal signatures encoding sex and individual identity. We showed experimentally that free-living males discriminate approaching male and female conspecifics solely based on their echolocation calls. Males always produced aggressive vocalizations when hearing male echolocation calls and courtship vocalizations when hearing female echolocation calls; hence, they responded with complex social vocalizations in the appropriate social context. Our study demonstrates that social information encoded in bat echolocation calls plays a crucial and hitherto underestimated role for eavesdropping conspecifics and thus facilitates social communication in a highly mobile nocturnal mammal. PMID:23034703
Local anaesthesia through the action of cocaine, the oral mucosa and the Vienna group.
López-Valverde, A; de Vicente, J; Martínez-Domínguez, L; de Diego, R Gómez
2014-07-11
Local anaesthesia through the action of cocaine was introduced in Europe by the Vienna group, which included Freud, Koller and Königstein. Before using the alkaloid in animal or human experimentation, all these scientists tested it on their own oral mucosa - so-called self-experimentation. Some of them, suffering from various pathologies (as in the case of Freud), eventually became addicted to the alkaloid. Here we attempt to describe the people forming the so-called 'Vienna group', their social milieu, their experiences, and the internal disputes within the setting of a revolutionary discovery of the times.
NASA Astrophysics Data System (ADS)
Avdeev, Maxim V.; Proshin, Yurii N.
2017-10-01
We theoretically study the proximity effect in thin-film layered ferromagnet (F)-superconductor (S) heterostructures with an F1F2S design. We consider the boundary value problem for the Usadel-like equations in the so-called 'dirty' limit. The 'latent' superconducting pairing interaction in the F layers is taken into account. The focus is on a recipe for experimentally preparing the state with so-called solitary superconductivity. We also propose and discuss a model of a superconducting spin valve based on F1F2S trilayers in the solitary superconductivity regime.
NASA Astrophysics Data System (ADS)
Whitcher, Carrie Lynn
2005-08-01
Adolescence is marked with many changes in the development of higher order thinking skills. As students enter high school they are expected to utilize these skills to solve problems, become abstract thinkers, and contribute to society. The goal of this study was to assess horticultural science knowledge achievement and attitude toward horticulture, science, and school in high school agriculture students. There were approximately 240 high school students in the sample including both experimental and control groups from California and Washington. Students in the experimental group participated in an educational program called "Hands-On Hortscience" which emphasized problem solving in investigation and experimentation activities with greenhouse plants, soilless media, and fertilizers. Students in the control group were taught by the subject matter method. The activities included in the Hands-On Hortscience curriculum were created to reinforce teaching the scientific method through the context of horticulture. The objectives included evaluating whether the students participating in the Hands-On Hortscience experimental group benefited in the areas of science literacy, data acquisition and analysis, and attitude toward horticulture, science, and school. Pre-tests were administered in both the experimental and control groups prior to the research activities and post-tests were administered after completion. The survey questionnaire included a biographical section and attitude survey. Significant increases in hortscience achievement were found from pre-test to post-test in both control and experimental study groups. The experimental treatment group had statistically higher achievement scores than the control group in the two areas tested: scientific method (p=0.0016) and horticulture plant nutrition (p=0.0004). In addition, the students participating in the Hands-On Hortscience activities had more positive attitudes toward horticulture, science, and school (p=0.0033). 
Students who were more actively involved in hands-on projects had higher attitude scores than students who were taught by traditional methods alone. In demographic comparisons, females had more positive attitudes toward horticultural science than males, and students from different ethnic backgrounds had statistically different achievement (p=0.0001), although the ethnicity comparison involved few students in each group (8 in one background and 10 in another). Membership in youth organizations such as FFA or 4-H had no significant bearing on achievement or attitude.
Hilario, Eric C; Stern, Alan; Wang, Charlie H; Vargas, Yenny W; Morgan, Charles J; Swartz, Trevor E; Patapoff, Thomas W
2017-01-01
Concentration determination is an important method of protein characterization required in the development of protein therapeutics. There are many known methods for determining the concentration of a protein solution, but the easiest to implement in a manufacturing setting is absorption spectroscopy in the ultraviolet region. For typical proteins composed of the standard amino acids, absorption at wavelengths near 280 nm is due to the three amino acid chromophores tryptophan, tyrosine, and phenylalanine in addition to a contribution from disulfide bonds. According to the Beer-Lambert law, absorbance is proportional to concentration and path length, with the proportionality constant being the extinction coefficient. Typically the extinction coefficient of proteins is experimentally determined by measuring a solution absorbance then experimentally determining the concentration, a measurement with some inherent variability depending on the method used. In this study, extinction coefficients were calculated based on the measured absorbance of model compounds of the four amino acid chromophores. These calculated values for an unfolded protein were then compared with an experimental concentration determination based on enzymatic digestion of proteins. The experimentally determined extinction coefficient for the native proteins was consistently found to be 1.05 times the calculated value for the unfolded proteins for a wide range of proteins with good accuracy and precision under well-controlled experimental conditions. The value of 1.05 times the calculated value was termed the predicted extinction coefficient. Statistical analysis shows that the differences between predicted and experimentally determined coefficients are scattered randomly, indicating no systematic bias between the values among the proteins measured. The predicted extinction coefficient was found to be accurate and not subject to the inherent variability of experimental methods. 
We propose the use of a predicted extinction coefficient for determining the protein concentration of therapeutic proteins starting from early development through the lifecycle of the product. LAY ABSTRACT: Knowing the concentration of a protein in a pharmaceutical solution is important to the drug's development and posology. There are many ways to determine the concentration, but the easiest one to use in a testing lab employs absorption spectroscopy. Absorbance of ultraviolet light by a protein solution is proportional to its concentration and path length; the proportionality constant is the extinction coefficient. The extinction coefficient of a protein therapeutic is usually determined experimentally during early product development and has some inherent method variability. In this study, extinction coefficients of several proteins were calculated based on the measured absorbance of model compounds. These calculated values for an unfolded protein were then compared with experimental concentration determinations based on enzymatic digestion of the proteins. The experimentally determined extinction coefficient for the native protein was 1.05 times the calculated value for the unfolded protein with good accuracy and precision under controlled experimental conditions, so the value of 1.05 times the calculated coefficient was called the predicted extinction coefficient. Comparison of predicted and measured extinction coefficients indicated that the predicted value was very close to the experimentally determined values for the proteins. The predicted extinction coefficient was accurate and removed the variability inherent in experimental methods. © PDA, Inc. 2017.
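The concentration workflow described above reduces to a short calculation. A minimal sketch, assuming the widely used per-residue extinction coefficients at 280 nm (Trp 5500, Tyr 1490, cystine 125 M⁻¹cm⁻¹) and a hypothetical example protein; this is an illustration, not the authors' code:

```python
# Sketch of the extinction-coefficient workflow (illustrative assumptions:
# standard per-residue 280 nm values and an invented example protein).

def calculated_extinction(n_trp, n_tyr, n_cystine):
    """Calculated extinction coefficient of the unfolded protein (M^-1 cm^-1)."""
    return 5500 * n_trp + 1490 * n_tyr + 125 * n_cystine

def predicted_extinction(n_trp, n_tyr, n_cystine):
    """Predicted native-protein coefficient: 1.05 x the calculated value,
    the empirical correction reported in the abstract."""
    return 1.05 * calculated_extinction(n_trp, n_tyr, n_cystine)

def concentration_mg_ml(a280, eps, mw, path_cm=1.0):
    """Beer-Lambert law: c = A / (eps * l), converted from molar to mg/mL."""
    return a280 / (eps * path_cm) * mw

# Hypothetical protein: 2 Trp, 8 Tyr, 1 cystine, MW 25 kDa, A280 = 0.50
eps = predicted_extinction(2, 8, 1)          # 24197.25 M^-1 cm^-1
c = concentration_mg_ml(0.50, eps, 25000.0)  # ~0.52 mg/mL
```

The 1.05 factor is the paper's empirical native/unfolded correction; the residue values and the example protein are assumptions for illustration only.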
Thermal measurement of brake pad lining surfaces during the braking process
NASA Astrophysics Data System (ADS)
Piątkowski, Tadeusz; Polakowski, Henryk; Kastek, Mariusz; Baranowski, Pawel; Damaziak, Krzysztof; Małachowski, Jerzy; Mazurkiewicz, Łukasz
2012-06-01
This paper presents the test campaign concept and definition and the analysis of the recorded measurements. Brakes are among the most important systems in cars and trucks. The temperature on a brake lining surface can rise above 500°C during braking, which shows how strict, and continuously rising, the requirements on linings are. Besides experimental tests, numerical analyses are a very supportive method for investigating the processes that occur on brake pad linings. Experimental tests were conducted on a test machine called IL-68. The main component of the IL-68 is the so-called frictional unit, which consists of a rotational head, which conveys the shaft torque and holds the counter-samples, and a translational head, where samples of the coatings are placed and pressed against the counter-samples. Because of the high rotational speeds, and thus the rapid changes in the temperature field, an infrared camera was used for testing. The paper presents an analysis of the thermograms registered during tests under different conditions. Furthermore, a numerical model of this testing machine was developed. To avoid resource-demanding analyses, only the frictional unit described above was taken into consideration. First, a geometrical model was created using CAD techniques, which then served as the basis for the finite element model. Material properties and boundary conditions correspond exactly to the experimental tests. Computations were performed with the dynamic LS-DYNA code, where heat generation was estimated assuming full (100%) conversion of the mechanical work done by friction forces. The paper also presents the results of the dynamic thermomechanical analysis, which were compared with the laboratory tests.
Breakup fusion theory of nuclear reactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mastroleo, R.C.
1987-01-01
Continuum spectra of particles emitted in incomplete fusion reactions are one of the major interests in current nuclear reaction studies. Based on the idea of the so-called breakup fusion (BF) reaction, several authors have derived closed formulas for the singles cross section of the emitted particles. Two conflicting cross section formulas have, however, been presented for the same BF reaction. For convenience, we call one of them the IAV (Ichimura, Austern and Vincent) and the other the UT (Udagawa and Tamura) cross section formula. In this work, the formulation of the UT cross section formula (prior form) is presented, and the post-form version of the IAV cross section formula is evaluated for a few {alpha}- and d-induced reactions using the exact finite range method. It is shown that the values thus calculated are larger by an order of magnitude than the experimental cross sections for the {alpha}-induced reactions, while they are comparable with the experimental cross sections for the d-induced reactions. A possible origin of the large cross sections obtained for the {alpha}-induced reactions is also discussed. The polarization of the residual compound nucleus produced in breakup fusion reactions is calculated and compared with experiments. It is shown that the polarization is rather sensitive to the deflection angles of the strongly absorptive partial waves, and that to obtain a good fit with the experimental data an l-dependent potential in the incident channel is needed in order to stress the lower partial waves.
Ren, Jun; Zhou, Wei; Wang, Jianxin
2014-01-01
Much evidence has demonstrated that protein complexes are overlapping and hierarchically organized in PPI networks. Meanwhile, the large size of PPI networks requires complex-detection methods to have low time complexity. Up to now, few methods can quickly identify overlapping and hierarchical protein complexes in a PPI network. In this paper, a novel method called MCSE is proposed based on λ-modules and "seed-expanding." First, it chooses as seeds essential PPIs or edges with high edge clustering values. Then, it identifies protein complexes by expanding each seed to a λ-module. MCSE is suitable for large PPI networks because of its low time complexity. It identifies overlapping protein complexes naturally because a protein can be visited by different seeds. MCSE uses the parameter λ_th to control the range of seed expansion and can detect a hierarchical organization of protein complexes by tuning the value of λ_th. Experimental results for S. cerevisiae show that this hierarchical organization is similar to that of the known complexes in the MIPS database. The experimental results also show that MCSE outperforms previous competing algorithms, such as CPM, CMC, Core-Attachment, Dpclus, HC-PIN, MCL, and NFC, in terms of functional enrichment and matching with known protein complexes. PMID:25143945
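The seed-selection step above ranks edges by an edge clustering value. As an illustration only (the exact definition used by MCSE is not reproduced here; this sketch assumes one common variant, triangles on an edge divided by the maximum possible):

```python
# Illustrative edge-clustering sketch (assumed definition, not MCSE code):
# ECC(u, v) = number of triangles containing edge (u, v)
#             / min(deg(u) - 1, deg(v) - 1)

from collections import defaultdict

def edge_clustering(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    ecc = {}
    for u, v in edges:
        triangles = len(adj[u] & adj[v])          # common neighbours
        denom = min(len(adj[u]) - 1, len(adj[v]) - 1)
        ecc[(u, v)] = triangles / denom if denom > 0 else 0.0
    return ecc

# Toy network: a triangle a-b-c with a pendant node d attached to c.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]
ecc = edge_clustering(edges)   # triangle edges score 1.0, pendant edge 0.0
```

High-scoring edges (here, the triangle edges) would be favoured as seeds for expansion.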
Exploring the Dynamics of Cell Processes through Simulations of Fluorescence Microscopy Experiments
Angiolini, Juan; Plachta, Nicolas; Mocskos, Esteban; Levi, Valeria
2015-01-01
Fluorescence correlation spectroscopy (FCS) methods are powerful tools for unveiling the dynamical organization of cells. For simple cases, such as molecules passively moving in a homogeneous media, FCS analysis yields analytical functions that can be fitted to the experimental data to recover the phenomenological rate parameters. Unfortunately, many dynamical processes in cells do not follow these simple models, and in many instances it is not possible to obtain an analytical function through a theoretical analysis of a more complex model. In such cases, experimental analysis can be combined with Monte Carlo simulations to aid in interpretation of the data. In response to this need, we developed a method called FERNET (Fluorescence Emission Recipes and Numerical routines Toolkit) based on Monte Carlo simulations and the MCell-Blender platform, which was designed to treat the reaction-diffusion problem under realistic scenarios. This method enables us to set complex geometries of the simulation space, distribute molecules among different compartments, and define interspecies reactions with selected kinetic constants, diffusion coefficients, and species brightness. We apply this method to simulate single- and multiple-point FCS, photon-counting histogram analysis, raster image correlation spectroscopy, and two-color fluorescence cross-correlation spectroscopy. We believe that this new program could be very useful for predicting and understanding the output of fluorescence microscopy experiments. PMID:26039162
An evolutionary algorithm for large traveling salesman problems.
Tsai, Huai-Kuang; Yang, Jinn-Moon; Tsai, Yuan-Fang; Kao, Cheng-Yan
2004-08-01
This work proposes an evolutionary algorithm, called the heterogeneous selection evolutionary algorithm (HeSEA), for solving large traveling salesman problems (TSP). The strengths and limitations of numerous well-known genetic operators are first analyzed, along with local search methods for TSPs from their solution qualities and mechanisms for preserving and adding edges. Based on this analysis, a new approach, HeSEA is proposed which integrates edge assembly crossover (EAX) and Lin-Kernighan (LK) local search, through family competition and heterogeneous pairing selection. This study demonstrates experimentally that EAX and LK can compensate for each other's disadvantages. Family competition and heterogeneous pairing selections are used to maintain the diversity of the population, which is especially useful for evolutionary algorithms in solving large TSPs. The proposed method was evaluated on 16 well-known TSPs in which the numbers of cities range from 318 to 13509. Experimental results indicate that HeSEA performs well and is very competitive with other approaches. The proposed method can determine the optimum path when the number of cities is under 10,000 and the mean solution quality is within 0.0074% above the optimum for each test problem. These findings imply that the proposed method can find tours robustly with a fixed small population and a limited family competition length in reasonable time, when used to solve large TSPs.
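HeSEA combines EAX crossover with Lin-Kernighan local search; as a much simpler stand-in for the local-search ingredient, here is a minimal 2-opt sketch on a toy instance (illustration only, not the authors' algorithm):

```python
# Minimal 2-opt local search for TSP: repeatedly reverse a tour segment
# whenever doing so shortens the tour. LK generalizes this to deeper moves.

import math

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

pts = [(0, 0), (0, 1), (1, 0), (1, 1)]   # unit square
best = two_opt([0, 1, 2, 3], pts)        # initial tour has a crossing
# 2-opt uncrosses it, reaching the optimal perimeter of length 4
```

Production solvers avoid the repeated full tour-length evaluation by computing the length delta of each reversal in O(1); the sketch favours clarity.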
NASA Astrophysics Data System (ADS)
Al-Rabadi, Anas N.
2009-10-01
This research introduces a new intelligent control method for the Buck converter, using a newly developed small-signal model of the pulse width modulation (PWM) switch. The new method uses a supervised neural network to estimate certain parameters of the transformed system matrix [Ã]. Then, a numerical technique used in robust control, called linear matrix inequality (LMI) optimization, is used to determine the permutation matrix [P] so that a complete system transformation {[B̃], [C̃], [Ẽ]} is possible. The transformed model is then reduced using the method of singular perturbation, and state feedback control is applied to enhance system performance. The experimental results show that the new control methodology simplifies the model of the Buck converter and thus uses a simpler controller that produces the desired system response for performance enhancement.
Active Solution Space and Search on Job-shop Scheduling Problem
NASA Astrophysics Data System (ADS)
Watanabe, Masato; Ida, Kenichi; Gen, Mitsuo
In this paper we propose a new search method within a genetic algorithm for the job-shop scheduling problem (JSP). The coding represents job numbers in order to decide the priority with which jobs are arranged on the Gantt chart (called the ordinal representation with a priority), and an active schedule is created by using left shifts. We first define an active solution: a solution from which an active schedule can be created without using left shifts. The set of such solutions is defined as the active solution space. We then propose an algorithm named Genetic Algorithm with active solution space search (GA-asol), which can create an active solution while the solution is being evaluated, in order to search the active solution space effectively. We applied it to some benchmark problems and compared it with other methods. The experimental results show good performance.
Leap-dynamics: efficient sampling of conformational space of proteins and peptides in solution.
Kleinjung, J; Bayley, P; Fraternali, F
2000-03-31
A molecular simulation scheme, called Leap-dynamics, that provides efficient sampling of protein conformational space in solution is presented. The scheme is a combined approach using a fast sampling method, imposing conformational 'leaps' to force the system over energy barriers, and molecular dynamics (MD) for refinement. The presence of solvent is approximated by a potential of mean force depending on the solvent accessible surface area. The method has been successfully applied to N-acetyl-L-alanine-N-methylamide (alanine dipeptide), sampling experimentally observed conformations inaccessible to MD alone under the chosen conditions. The method predicts correctly the increased partial flexibility of the mutant Y35G compared to native bovine pancreatic trypsin inhibitor. In particular, the improvement over MD consists of the detection of conformational flexibility that corresponds closely to slow motions identified by nuclear magnetic resonance techniques.
Nuthatches eavesdrop on variations in heterospecific chickadee mobbing alarm calls
Templeton, Christopher N.; Greene, Erick
2007-01-01
Many animals recognize the alarm calls produced by other species, but the amount of information they glean from these eavesdropped signals is unknown. We previously showed that black-capped chickadees (Poecile atricapillus) have a sophisticated alarm call system in which they encode complex information about the size and risk of potential predators in variations of a single type of mobbing alarm call. Here we show experimentally that red-breasted nuthatches (Sitta canadensis) respond appropriately to subtle variations of these heterospecific “chick-a-dee” alarm calls, thereby evidencing that they have gained important information about potential predators in their environment. This study demonstrates a previously unsuspected level of discrimination in intertaxon eavesdropping. PMID:17372225
Removing Fats, Oils and Greases from Grease Trap by Hybrid AOPs (Ozonation and Sonication)
NASA Astrophysics Data System (ADS)
Kwiatkowski, Michal Piotr; Satoh, Saburoh; Yamabe, Chobei; Ihara, Satoshi; Nieda, Masanori
The purpose of this study was to investigate the electrical energy used in environmental applications of AOPs (advanced oxidation processes) combining ozonation and sonication to remove FOG (fats, oils and greases) from wastewater in the sewage system. This study focused on FOG removal from a grease trap using the hybrid AOPs. Fatty acids (linoleic, oleic, stearic and palmitic acids) were used as representative standards of FOG. The studies were conducted experimentally in a glass reactor under various operational conditions. The oxidation efficiency of the combined ozonation and sonication was determined by the KI dosimetry method and the calorimetry method. Fatty acid concentrations were measured by GC/MS. A local reaction field of high temperature and high pressure, the so-called hot spot, is generated by the quasi-adiabatic collapse of bubbles produced in water under sonication, a process known as cavitation. When ozone bubbles are mixed into the water under acoustic cavitation, the formation of OH radicals increases. The mechanical effects of acoustic cavitation, such as microstreaming and shock waves, also influence the probability of reactions of ozone and radicals with the fatty acids.
Asymptotic formulae for likelihood-based tests of new physics
NASA Astrophysics Data System (ADS)
Cowan, Glen; Cranmer, Kyle; Gross, Eilam; Vitells, Ofer
2011-02-01
We describe likelihood-based statistical tests for use in high energy physics for the discovery of new phenomena and for construction of confidence intervals on model parameters. We focus on the properties of the test procedures that allow one to account for systematic uncertainties. Explicit formulae for the asymptotic distributions of test statistics are derived using results of Wilks and Wald. We motivate and justify the use of a representative data set, called the "Asimov data set", which provides a simple method to obtain the median experimental sensitivity of a search or measurement as well as fluctuations about this expectation.
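For the special case of a Poisson counting experiment, evaluating the discovery test statistic on the Asimov data set (n = s + b) gives the well-known closed form for the median discovery significance, Z_A = sqrt(2[(s + b) ln(1 + s/b) − s]). A short sketch:

```python
# Median discovery significance of a counting experiment with expected
# signal s and background b, from the Asimov-data-set asymptotic formula.

import math

def asimov_significance(s, b):
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

z = asimov_significance(10.0, 100.0)   # median significance in sigma
# For s << b this approaches the familiar s / sqrt(b) approximation:
approx = 10.0 / math.sqrt(100.0)
```

Expanding the logarithm for small s/b shows why s/sqrt(b) is the leading-order approximation, and why it overestimates the significance when s is not small compared with b.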
NASA Astrophysics Data System (ADS)
Su, Tengfei
2018-04-01
In this paper, an unsupervised evaluation scheme for remote sensing image segmentation is developed. Building on a method called under- and over-segmentation aware (UOA), the new approach overcomes a defect in the estimation of over-segmentation error. Two cases of this error-prone defect are listed, and edge strength is employed to devise a solution. Two subsets of high-resolution remote sensing images were used to test the proposed algorithm, and the experimental results indicate its superior performance, which is attributed to its improved over-segmentation error detection model.
Gianni, Stefano; Jemth, Per
2014-07-01
The only experimental strategy for addressing the structure of folding transition states, the so-called Φ-value analysis, relies on the synergy between site-directed mutagenesis and the measurement of reaction kinetics. Despite its importance, Φ-value analysis has often been criticized and its power to pinpoint structural information has been questioned. In this hypothesis article, we demonstrate that comparing Φ values between proteins not only highlights the robustness of folding pathways but also provides in itself a strong validation of the method. © 2014 International Union of Biochemistry and Molecular Biology.
NASA Astrophysics Data System (ADS)
Raksincharoensak, Pongsathorn; Khaisongkram, Wathanyoo; Nagai, Masao; Shimosaka, Masamichi; Mori, Taketoshi; Sato, Tomomasa
2010-12-01
This paper describes the modelling of naturalistic driving behaviour in real-world traffic scenarios, based on driving data collected via an experimental automobile equipped with a continuous sensing drive recorder. This paper focuses on longitudinal driving situations, which are classified into five categories - car following, braking, free following, decelerating and stopping - referred to as driving states. Here, the model is assumed to be represented by a state flow diagram. Statistical machine learning of the driver-vehicle-environment system model from the driving database is conducted by a discriminative modelling approach called the boosting sequential labelling method.
A new efficient mixture screening design for optimization of media.
Rispoli, Fred; Shah, Vishal
2009-01-01
Screening ingredients for the optimization of media is an important first step to reduce the many potential ingredients down to the vital few components. In this study, we propose a new method of screening for mixture experiments called the centroid screening design. Comparison of the proposed design with Plackett-Burman, fractional factorial, simplex lattice design, and modified mixture design shows that the centroid screening design is the most efficient of all the designs in terms of the small number of experimental runs needed and for detecting high-order interaction among ingredients. (c) 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009.
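The paper's centroid screening design is not reproduced here; as a related illustration, here is a sketch of the classical simplex-centroid design, whose points are the equal-proportion centroids of every non-empty subset of the mixture components (an assumption standing in for the authors' exact design):

```python
# Classical simplex-centroid design for q mixture components: one blend per
# non-empty subset S of components, with equal proportions 1/|S| within S.
# (Illustrative sketch; the centroid screening design in the paper may differ.)

from itertools import combinations

def simplex_centroid(q):
    points = []
    for k in range(1, q + 1):
        for subset in combinations(range(q), k):
            x = [0.0] * q
            for i in subset:
                x[i] = 1.0 / k
            points.append(tuple(x))
    return points

design = simplex_centroid(3)   # 2**3 - 1 = 7 blends for 3 ingredients
```

Each row sums to 1 (a valid mixture), and the design grows as 2^q − 1 runs, which is why screening variants aim to prune this to a vital few.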
NaOH-based high temperature heat-of-fusion thermal energy storage device
NASA Technical Reports Server (NTRS)
Cohen, B. M.; Rice, R. E.
1978-01-01
A material called Thermkeep, developed for low-cost storage of thermal energy for solar electric power generating systems, is discussed. The storage device consists of an insulated cylinder containing Thermkeep, in which coiled tubular heat exchangers are immersed. A one-tenth scale model of the design contains 25 heat-exchanger tubes and 1500 kg of Thermkeep. Its instrumentation includes thermocouples to measure the internal Thermkeep temperatures, the vessel surface and the heated shroud surface, and pressure gauges to indicate heat-exchanger pressure drops. The test-circuit design is presented and experimental results are discussed.
Calder, Stefan; O'Grady, Greg; Cheng, Leo K; Du, Peng
2018-04-27
Electrogastrography (EGG) is a non-invasive method for measuring gastric electrical activity. Recent simulation studies have attempted to extend the current clinical utility of the EGG, in particular by providing a theoretical framework for distinguishing specific gastric slow wave dysrhythmias. In this paper we implement an experimental setup called a 'torso-tank' with the aim of expanding and experimentally validating these previous simulations. The torso-tank was developed using an adult male torso phantom with 190 electrodes embedded throughout the torso. The gastric slow waves were reproduced using an artificial current source capable of producing 3D electrical fields. Multiple gastric dysrhythmias were reproduced based on high-resolution mapping data from cases of human gastric dysfunction (gastric re-entry, conduction blocks and ectopic pacemakers), in addition to normal test data. Each case was recorded and compared to the previously presented simulated results. Qualitative and quantitative analyses were performed to assess the accuracy, showing a ≤1.8% difference, ≥0.99 correlation, and ≤0.04 normalised RMS error between experimental and simulated findings. These results reaffirm the previous findings, and these methods in unison therefore present a promising morphology-based methodology for advancing the understanding and clinical applications of EGG.
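The comparison metrics named in the abstract are standard signal-agreement measures; a minimal sketch (illustrative, not the authors' analysis code) of Pearson correlation and range-normalised RMS error on toy signals:

```python
# Pearson correlation and range-normalised RMS error between a measured
# signal and a simulated one, as plain-stdlib sketches.

import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def nrmse(measured, simulated):
    n = len(measured)
    rms = math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)
    return rms / (max(measured) - min(measured))   # normalise by signal range

# Toy example: simulated signal tracks the measured one almost perfectly.
measured = [0.0, 1.0, 2.0, 1.0, 0.0]
simulated = [0.1, 1.0, 1.9, 1.0, 0.1]
r = pearson(measured, simulated)       # shape agreement
err = nrmse(measured, simulated)       # amplitude-scaled residual
```

Here the simulated signal is an exact linear rescaling of the measured one, so the correlation is 1 even though a small normalised RMS error remains; the two metrics capture different kinds of disagreement.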
Sabouni, Rana; Kazemian, Hossein; Rohani, Sohrab
2013-08-20
Capturing carbon dioxide from flue gas is essential because CO2 is considered one of the main causes of global warming. Several materials and methods have been reported for CO2 capture, including adsorption onto zeolites and porous membranes, as well as absorption in amine solutions; however, all such methods require high energy input and come at high cost. A new class of porous materials called Metal Organic Frameworks (MOFs) has exhibited excellent performance in extracting carbon dioxide from a gas mixture. In this study, the breakthrough curves for the adsorption of carbon dioxide on CPM-5 (crystalline porous materials) were obtained experimentally and theoretically using a laboratory-scale fixed-bed column at different experimental conditions, such as feed flow rate, adsorption temperature, and feed concentration. It was found that CPM-5 has a dynamic CO2 adsorption capacity of 11.9 wt % (2.7 mmol/g) (corresponding to 8 mL/min, 298 K, and 25% v/v CO2). The tested CPM-5 showed an outstanding adsorption equilibrium capacity (e.g., 2.3 mmol/g (10.2 wt %) at 298 K) compared to other adsorbents, making it an attractive adsorbent for the separation of CO2 from flue gas.
Alternate methodologies to experimentally investigate shock initiation properties of explosives
NASA Astrophysics Data System (ADS)
Svingala, Forrest R.; Lee, Richard J.; Sutherland, Gerrit T.; Benjamin, Richard; Boyle, Vincent; Sickels, William; Thompson, Ronnie; Samuels, Phillip J.; Wrobel, Erik; Cornell, Rodger
2017-01-01
Reactive flow models are desired for new explosive formulations early in the development stage. Traditionally, these models are parameterized by carefully-controlled 1-D shock experiments, including gas-gun testing with embedded gauges and wedge testing with explosive plane wave lenses (PWL). These experiments are easy to interpret due to their 1-D nature, but are expensive to perform and cannot be performed at all explosive test facilities. This work investigates alternative methods to probe shock-initiation behavior of new explosives using widely-available pentolite gap test donors and simple time-of-arrival type diagnostics. These experiments can be performed at a low cost at most explosives testing facilities. This allows experimental data to parameterize reactive flow models to be collected much earlier in the development of an explosive formulation. However, the fundamentally 2-D nature of these tests may increase the modeling burden in parameterizing these models and reduce general applicability. Several variations of the so-called modified gap test were investigated and evaluated for suitability as an alternative to established 1-D gas gun and PWL techniques. At least partial agreement with 1-D test methods was observed for the explosives tested, and future work is planned to scope the applicability and limitations of these experimental techniques.
QuASAR: quantitative allele-specific analysis of reads.
Harvey, Chris T; Moyerbrailean, Gregory A; Davis, Gordon O; Wen, Xiaoquan; Luca, Francesca; Pique-Regi, Roger
2015-04-15
Expression quantitative trait loci (eQTL) studies have discovered thousands of genetic variants that regulate gene expression, enabling a better understanding of the functional role of non-coding sequences. However, eQTL studies are costly, requiring large sample sizes and genome-wide genotyping of each sample. In contrast, analysis of allele-specific expression (ASE) is becoming a popular approach to detect the effect of genetic variation on gene expression, even within a single individual. This is typically achieved by counting the number of RNA-seq reads matching each allele at heterozygous sites and testing the null hypothesis of a 1:1 allelic ratio. In principle, when genotype information is not readily available, it could be inferred from the RNA-seq reads directly. However, no existing methods jointly infer genotypes and conduct ASE inference while considering uncertainty in the genotype calls. We present QuASAR, quantitative allele-specific analysis of reads, a novel statistical learning method for jointly detecting heterozygous genotypes and inferring ASE. The proposed ASE inference step takes into consideration the uncertainty in the genotype calls, while including parameters that model base-call errors in sequencing and allelic over-dispersion. We validated our method with experimental data for which high-quality genotypes are available. Results for an additional dataset with multiple replicates at different sequencing depths demonstrate that QuASAR is a powerful tool for ASE analysis when genotypes are not available. Availability: http://github.com/piquelab/QuASAR. Contact: fluca@wayne.edu or rpique@wayne.edu. Supplementary Material is available at Bioinformatics online.
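The core ASE test described above — counting reads per allele at a heterozygous site and testing the null 1:1 ratio — can be sketched as a plain exact binomial test. This is a simplified stand-in: QuASAR itself additionally models base-call errors and allelic over-dispersion, which this sketch omits.

```python
from math import comb

def ase_binomial_pvalue(ref_reads, alt_reads):
    """Exact two-sided binomial test of the null 1:1 allelic ratio at a
    heterozygous site. Simplified: ignores base-call error and allelic
    over-dispersion, which QuASAR models explicitly."""
    n = ref_reads + alt_reads
    pmf = [comb(n, k) * 0.5 ** n for k in range(n + 1)]
    p_obs = pmf[ref_reads]
    # Two-sided p-value: total probability of all outcomes no more
    # likely than the observed allele count.
    return min(1.0, sum(p for p in pmf if p <= p_obs + 1e-12))
```

Balanced counts (e.g. 5 vs 5) give a p-value near 1, while strongly skewed counts (e.g. 20 vs 2) reject the 1:1 null at conventional thresholds.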
Masseroli, Marco; Kaitoua, Abdulrahman; Pinoli, Pietro; Ceri, Stefano
2016-12-01
While a huge amount of (epi)genomic data of multiple types is becoming available through Next Generation Sequencing (NGS) technologies, the most important emerging problem is the so-called tertiary analysis, concerned with sense-making, e.g., discovering how different (epi)genomic regions and their products interact and cooperate with each other. We propose a paradigm shift in tertiary analysis, based on the use of the Genomic Data Model (GDM), a simple data model which links genomic feature data to their associated experimental, biological and clinical metadata. GDM encompasses all the data formats which have been produced for feature extraction from (epi)genomic datasets. We specifically describe the mapping to GDM of SAM (Sequence Alignment/Map), VCF (Variant Call Format), NARROWPEAK (for called peaks produced by NGS ChIP-seq or DNase-seq methods), and BED (Browser Extensible Data) formats, but GDM supports as well all the formats describing experimental datasets (e.g., including copy number variations, DNA somatic mutations, or gene expressions) and annotations (e.g., regarding transcription start sites, genes, enhancers or CpG islands). We downloaded and integrated samples of all the above-mentioned data types and formats from multiple sources. The GDM is able to homogeneously describe semantically heterogeneous data and lays the groundwork for data interoperability, e.g., achieved through the GenoMetric Query Language (GMQL), a high-level, declarative query language for genomic big data. The combined use of the data model and the query language allows comprehensive processing of multiple heterogeneous data, and supports the development of domain-specific data-driven computations and bio-molecular knowledge discovery.
NASA Astrophysics Data System (ADS)
Feng, Shou; Fu, Ping; Zheng, Wenbin
2018-03-01
Predicting gene function based on biological instrumental data is a complicated and challenging hierarchical multi-label classification (HMC) problem. When local-approach methods are used to solve this problem, a method for processing the preliminary results is usually needed. This paper proposes a novel preliminary-results processing method called the nodes interaction method. The nodes interaction method revises the preliminary results and guarantees that the predictions are consistent with the hierarchy constraint. In its first phase, the method exploits label dependency and considers the hierarchical interaction between nodes when making decisions based on a Bayesian network. In the second phase, it further adjusts the results according to the hierarchy constraint. Implementing the nodes interaction method in the HMC framework also enhances HMC performance for gene function prediction based on the Gene Ontology (GO), whose hierarchy is a directed acyclic graph and is therefore more difficult to tackle. The experimental results validate the promising performance of the proposed method compared to state-of-the-art methods on eight benchmark yeast data sets annotated by the GO.
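The second-phase adjustment — forcing predictions to respect the hierarchy constraint so that no node outscores its ancestors — might be sketched as below. The fixed-point capping rule and the toy GO fragment are illustrative assumptions, not the paper's exact procedure.

```python
def enforce_hierarchy(scores, parents):
    """Cap every node's confidence by its parents' scores until a fixed
    point, so no term outscores any of its ancestors in the DAG.
    `parents` maps each node to the list of its parent nodes."""
    out = dict(scores)
    changed = True
    while changed:          # fixed-point loop handles any DAG ordering
        changed = False
        for node, ps in parents.items():
            if ps:
                cap = min(out[p] for p in ps)
                if out[node] > cap:
                    out[node] = cap
                    changed = True
    return out

# Toy GO fragment: 'b' is scored above its ancestor 'a' and gets capped.
adjusted = enforce_hierarchy({'root': 0.9, 'a': 0.8, 'b': 0.95},
                             {'root': [], 'a': ['root'], 'b': ['a']})
```

After adjustment the chain root > a > b is monotone non-increasing, which is exactly the consistency the hierarchy constraint demands.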
Soft x-ray holographic tomography for biological specimens
NASA Astrophysics Data System (ADS)
Gao, Hongyi; Chen, Jianwen; Xie, Honglan; Li, Ruxin; Xu, Zhizhan; Jiang, Shiping; Zhang, Yuxuan
2003-10-01
In this paper, we present experimental results on X-ray holography and holographic tomography, and propose a new holographic tomography method called pre-amplified holographic tomography. Owing to their shorter wavelength and larger penetration depths, X-rays offer the potential of higher resolution in imaging techniques and can image intact, living, hydrated cells without slicing, dehydration, chemical fixation, or staining. Recently, using the X-ray source at the National Synchrotron Radiation Laboratory (NSRL) in Hefei, we successfully performed soft X-ray holography experiments on a biological specimen. The specimen was garlic clove epidermis; we recorded its X-ray hologram and reconstructed it with computer programs, clearly resolving the cell walls, the nuclei, and some cytoplasm. However, problems remain in realizing practical 3D microscopic imaging because of the near-unity refractive index of matter. No X-ray optics has a numerical aperture high enough to achieve a depth resolution comparable to the transverse resolution. On the other hand, computed tomography requires recording hundreds of views of the test object at different angles to reach high resolution, because the number of views required for a densely packed object equals the object radius divided by the desired depth resolution. Clearly, this is impractical for a radiation-sensitive biological specimen. Moreover, the X-ray diffraction effect blurs the projection data, which badly degrades the resolution of the reconstructed image. To observe the 3D structure of biological specimens, McNulty proposed a 3D imaging method called "holographic tomography" (HT), in which several holograms of the specimen are recorded from various illumination directions and combined in the reconstruction step.
This permits the specimen to be sampled over a wide range of spatial frequencies, improving the depth resolution. At NSRL, we performed soft X-ray holographic tomography experiments with spider filaments as the specimen and PMMA as the recording medium. By 3D CT reconstruction of the projection data, the three-dimensional density distribution of the specimen was obtained. We also developed a new X-ray holographic tomography method called pre-amplified holographic tomography, which permits digital real-time 3D reconstruction with high resolution as well as a simple and compact experimental setup.
Mission-based Scenario Research: Experimental Design And Analysis
2012-01-01
A class of neurotechnologies called Brain-Computer Interaction Technologies (BCIT) aims to improve task performance by incorporating measures of brain activity to optimize human-system interactions. Imagine a system that can identify operator fatigue during a long-term task. Subject terms: neuroimaging, EEG, task loading, neurotechnologies.
Global search in photoelectron diffraction structure determination using genetic algorithms
NASA Astrophysics Data System (ADS)
Viana, M. L.; Díez Muiño, R.; Soares, E. A.; Van Hove, M. A.; de Carvalho, V. E.
2007-11-01
Photoelectron diffraction (PED) is an experimental technique widely used to perform structural determinations of solid surfaces. Similarly to low-energy electron diffraction (LEED), structural determination by PED requires a fitting procedure between the experimental intensities and theoretical results obtained through simulations. Multiple scattering has been shown to be an effective approach for making such simulations. The quality of the fit can be quantified through the so-called R-factor. Therefore, the fitting procedure is, indeed, an R-factor minimization problem. However, the topography of the R-factor as a function of the structural and non-structural surface parameters to be determined is complex, and the task of finding the global minimum becomes tough, particularly for complex structures in which many parameters have to be adjusted. In this work we investigate the applicability of the genetic algorithm (GA) global optimization method to this problem. The GA is based on the evolution of species, and makes use of concepts such as crossover, elitism and mutation to perform the search. We show results of its application in the structural determination of three different systems: the Cu(111) surface through the use of energy-scanned experimental curves; the Ag(110)-c(2 × 2)-Sb system, in which a theory-theory fit was performed; and the Ag(111) surface for which angle-scanned experimental curves were used. We conclude that the GA is a highly efficient method to search for global minima in the optimization of the parameters that best fit the experimental photoelectron diffraction intensities to the theoretical ones.
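The GA search loop described above (fitness ranking, crossover, elitism, mutation) can be sketched as follows, with a smooth toy objective standing in for the multiple-scattering R-factor; all parameter values and names are illustrative assumptions, not those of the paper.

```python
import random

def genetic_minimize(f, bounds, pop_size=40, generations=200,
                     mutation_rate=0.2, elite=2, seed=0):
    """Minimal real-coded GA: selection from the fitter half, uniform
    crossover, clamped Gaussian mutation, and elitism."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)                    # lower "R-factor" is better
        nxt = pop[:elite]                  # elitism: keep the best as-is
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation_rate:
                    child[i] = min(hi, max(lo, child[i]
                                           + rng.gauss(0, 0.05 * (hi - lo))))
            nxt.append(child)
        pop = nxt
    return min(pop, key=f)

# Toy 2-D objective with its minimum at (1, -2), standing in for the
# R-factor as a function of two surface parameters.
best = genetic_minimize(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                        [(-5.0, 5.0), (-5.0, 5.0)])
```

In the real problem, `f` would evaluate the R-factor between experimental and multiple-scattering-simulated PED intensities for a candidate set of structural and non-structural parameters.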
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blume-Kohout, Robin J.; Gamble, John King; Nielsen, Erik
Quantum tomography is used to characterize quantum operations implemented in quantum information processing (QIP) hardware. Traditionally, state tomography has been used to characterize the quantum state prepared in an initialization procedure, while quantum process tomography is used to characterize dynamical operations on a QIP system. As such, tomography is critical to the development of QIP hardware (since it is necessary both for debugging and validating as-built devices, and its results are used to influence the next generation of devices). But tomography suffers from several critical drawbacks. In this report, we present new research that resolves several of these flaws. We describe a new form of tomography called gate set tomography (GST), which unifies state and process tomography, avoids prior methods' critical reliance on precalibrated operations that are not generally available, and can achieve unprecedented accuracies. We report on theory and experimental development of adaptive tomography protocols that achieve far higher fidelity in state reconstruction than non-adaptive methods. Finally, we present a new theoretical and experimental analysis of process tomography on multispin systems, and demonstrate how to more effectively detect and characterize quantum noise using carefully tailored ensembles of input states.
Fast myopic 2D-SIM super resolution microscopy with joint modulation pattern estimation
NASA Astrophysics Data System (ADS)
Orieux, François; Loriette, Vincent; Olivo-Marin, Jean-Christophe; Sepulveda, Eduardo; Fragola, Alexandra
2017-12-01
Super-resolution in structured illumination microscopy (SIM) is obtained through de-aliasing of modulated raw images, in which high frequencies are measured indirectly inside the optical transfer function. Usual approaches that use 9 or 15 images are often too slow for dynamic studies. Moreover, as experimental conditions change with time, modulation parameters must be estimated within the images. This paper tackles the problem of image reconstruction for fast super resolution in SIM, where the number of available raw images is reduced to four instead of nine or fifteen. Within an optimization framework, the solution is inferred via a joint myopic criterion for image and modulation (or acquisition) parameters, leading to what is frequently called a myopic or semi-blind inversion problem. The estimate is chosen as the minimizer of the nonlinear criterion, numerically calculated by means of a block coordinate optimization algorithm. The effectiveness of the proposed method is demonstrated for simulated and experimental examples. The results show precise estimation of the modulation parameters jointly with the reconstruction of the super resolution image. The method also shows its effectiveness for thick biological samples.
Bayesian Estimation of Thermonuclear Reaction Rates for Deuterium+Deuterium Reactions
NASA Astrophysics Data System (ADS)
Gómez Iñesta, Á.; Iliadis, C.; Coc, A.
2017-11-01
The study of d+d reactions is of major interest since their reaction rates affect the predicted abundances of D, 3He, and 7Li. In particular, recent measurements of primordial D/H ratios call for reduced uncertainties in the theoretical abundances predicted by Big Bang nucleosynthesis (BBN). Different authors have studied reactions involved in BBN by incorporating new experimental data and a careful treatment of systematic and probabilistic uncertainties. To analyze the experimental data, Coc et al. used results of ab initio models for the theoretical calculation of the energy dependence of S-factors in conjunction with traditional statistical methods based on χ² minimization. Bayesian methods have now spread to many scientific fields and provide numerous advantages in data analysis. Astrophysical S-factors and reaction rates using Bayesian statistics were calculated by Iliadis et al. Here we present a similar analysis for two d+d reactions, d(d,n)3He and d(d,p)3H, which translates into a total decrease of the predicted D/H value by 0.16%.
ERIC Educational Resources Information Center
Carter, Angela
This study involved observing a second-grade classroom to investigate how the teacher called on students, noting whether the teacher gave enough attention to students who raised their hands frequently by calling on them and examining students' responses when called on. Researchers implemented a new method of calling on students using name cards,…
NASA Astrophysics Data System (ADS)
Kloutse, A. F.; Zacharia, R.; Cossement, D.; Chahine, R.; Balderas-Xicohténcatl, R.; Oh, H.; Streppel, B.; Schlichtenmayer, M.; Hirscher, M.
2015-12-01
Isosteric heat of adsorption is an important parameter required to describe the thermal performance of adsorptive storage systems. It is most frequently calculated from adsorption isotherms measured over wide ranges of pressure and temperature, using the so-called adsorption isosteric method. Direct quantitative estimation of isosteric heats, on the other hand, is possible using the coupled calorimetric-volumetric method, which involves simultaneous measurement of heat and adsorption. In this work, we compare the isosteric heats of hydrogen adsorption on microporous materials measured by both methods. Furthermore, the experimental data are compared with the isosteric heats obtained using the modified Dubinin-Astakhov, Tóth, and Unilan adsorption analytical models to establish the reliability and limitations of the simpler methods and assumptions. To this end, we measure the hydrogen isosteric heats on five prototypical metal-organic frameworks: MOF-5, Cu-BTC, Fe-BTC, MIL-53, and MOF-177, using both experimental methods. For all MOFs, we find very good agreement between the isosteric heats measured using the calorimetric and isosteric methods throughout the range of loading studied. The models' predictions, on the other hand, deviate from both experiments depending on the MOF studied and the range of loading. At low loadings of less than 5 mol kg-1, the isosteric heat of hydrogen adsorption decreases in the order Cu-BTC > MIL-53 > MOF-5 > Fe-BTC > MOF-177. The order of isosteric heats is coherent with the strength of hydrogen interaction revealed by previous thermal desorption spectroscopy measurements.
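The adsorption isosteric method named above follows from the Clausius-Clapeyron relation: at fixed loading, two (pressure, temperature) points taken from different isotherms determine the isosteric heat. A minimal sketch, with hypothetical hydrogen-loading numbers that are not from the paper:

```python
from math import log

R = 8.314  # gas constant, J mol^-1 K^-1

def isosteric_heat(p1, t1, p2, t2):
    """Clausius-Clapeyron form of the adsorption isosteric method:
    q_st = R * ln(p2/p1) / (1/t1 - 1/t2), where (p1, t1) and (p2, t2)
    are the equilibrium pressures at the SAME loading on two isotherms."""
    return R * log(p2 / p1) / (1.0 / t1 - 1.0 / t2)

# Hypothetical equilibrium pressures at one fixed H2 loading:
# 1.0 bar at 77 K and 4.0 bar at 87 K (illustrative numbers only).
q_st = isosteric_heat(1.0, 77.0, 4.0, 87.0)   # J/mol
```

Repeating this at many loadings traces out the loading dependence of q_st that the paper compares against direct calorimetric measurements.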
Love, Elliot K; Bee, Mark A
2010-09-01
One strategy for coping with the constraints on acoustic signal reception posed by ambient noise is to signal louder as noise levels increase. Termed the 'Lombard effect', this reflexive behaviour is widespread among birds and mammals and occurs with a diversity of signal types, leading to the hypothesis that voice amplitude regulation represents a general vertebrate mechanism for coping with environmental noise. Support for this evolutionary hypothesis, however, remains limited due to a lack of studies in taxa other than birds and mammals. Here, we report the results of an experimental test of the hypothesis that male grey treefrogs increase the amplitude of their advertisement calls in response to increasing levels of chorus-shaped noise. We recorded spontaneously produced calls in quiet and in the presence of noise broadcast at sound pressure levels ranging between 40 dB and 70 dB. While increasing noise levels induced predictable changes in call duration and rate, males did not regulate call amplitude. These results do not support the hypothesis that voice amplitude regulation is a generic vertebrate mechanism for coping with noise. We discuss the possibility that intense sexual selection and high levels of competition for mates in choruses place some frogs under strong selection to call consistently as loudly as possible.
The Fernow Experimental Forest and Canaan Valley: A history of research
Mary Beth Adams; James N. Kochenderfer
2015-01-01
The Fernow Experimental Forest (herein called the Fernow) in Tucker County, WV, was set aside in 1934 for "experimental and demonstration purposes under the direction of the Appalachian Forest Experiment Station" of the US Forest Service. Named after a famous German forester, Bernhard Fernow, the Fernow was initially developed with considerable assistance from the...
Jet-mixing of initially-stratified liquid-liquid pipe flows: experiments and numerical simulations
NASA Astrophysics Data System (ADS)
Wright, Stuart; Ibarra-Hernandes, Roberto; Xie, Zhihua; Markides, Christos; Matar, Omar
2016-11-01
Low pipeline velocities lead to stratification and so-called 'phase slip' in horizontal liquid-liquid flows due to differences in liquid densities and viscosities. Stratified flows have no suitable single point for sampling from which average phase properties (e.g. fractions) can be established. Inline mixing, achieved by static mixers or jets in cross-flow (JICF), is often used to overcome liquid-liquid stratification by establishing unstable two-phase dispersions for sampling. Achieving dispersions in liquid-liquid pipeline flows using JICF is the subject of this experimental and modelling work. The experimental facility involves a refractive-index-matched liquid-liquid-solid system featuring an ETFE test section, with silicone oil and a 51-wt% glycerol solution as the experimental liquids. The index matching allows the dispersed-phase fractions and velocity fields to be established through advanced optical techniques, namely PLIF (for phase) and PTV or PIV (for velocity fields). CFD codes using the volume of fluid (VOF) method are then used to demonstrate JICF breakup and dispersion in stratified pipeline flows. A number of simple jet configurations are described and their dispersion effectiveness is compared with the experimental results. Funding from Cameron for a Ph.D. studentship (SW) is gratefully acknowledged.
Zhu, Lin; Guo, Wei-Li; Deng, Su-Ping; Huang, De-Shuang
2016-01-01
In recent years, thanks to the efforts of individual scientists and research consortiums, a huge amount of chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) experimental data has been accumulated. Instead of investigating these data independently, several recent studies have convincingly demonstrated that a wealth of scientific insights can be gained by integrative analysis of ChIP-seq data. However, when used for the purpose of integrative analysis, a serious drawback of the current ChIP-seq technique is that it is still expensive and time-consuming to generate ChIP-seq datasets of high standard. Most researchers are therefore unable to obtain complete ChIP-seq data for several TFs in a wide variety of cell lines, which considerably limits the understanding of transcriptional regulation patterns. In this paper, we propose a novel method called ChIP-PIT to overcome the aforementioned limitation. In ChIP-PIT, ChIP-seq data corresponding to a diverse collection of cell types, TFs and genes are fused together using the three-mode pair-wise interaction tensor (PIT) model, and the prediction of unperformed ChIP-seq experimental results is formulated as a tensor completion problem. Computationally, we propose an efficient first-order method, based on extensions of the coordinate descent method, to learn the optimal solution of ChIP-PIT, which makes it particularly suitable for the analysis of massive-scale ChIP-seq data. Experimental evaluation on the ENCODE data illustrates the usefulness of the proposed model.
Technical Parameters Modeling of a Gas Probe Foaming Using an Active Experimental Type Research
NASA Astrophysics Data System (ADS)
Tîtu, A. M.; Sandu, A. V.; Pop, A. B.; Ceocea, C.; Tîtu, S.
2018-06-01
The present paper deals with a current and complex topic: solving a technical problem concerning the modeling and subsequent optimization of technical parameters of the natural gas extraction process. The aim of the study is to optimize gas probe foaming using experimental research methods and data processing, through regular probe interventions with different foaming agents. This procedure reduces the hydrostatic pressure through foam formed from the deposit water and the scrubbing agent, which can then be removed from the surface by the produced gas flow. The probe production data were analyzed, and from them the so-called candidate for the research itself emerged. This complex study was carried out on field works, where it was found that, owing to severe depletion of the gas field, the wells' flow is decreasing and the wells have begun loading with deposit water. Regular foaming of the wells was therefore required to optimize the daily production flow and to dispose of the water accumulated in the wellbore. To analyze the natural gas production process, the factorial experiment was used among other methods, chosen because it can offer very good research results from a small number of experimental data. Finally, through this study, the extraction process problems were identified by analyzing and optimizing the technical parameters, leading to a quality improvement of the extraction process.
NASA Astrophysics Data System (ADS)
Olivieri, Ferdinando; Fazi, Filippo Maria; Nelson, Philip A.; Shin, Mincheol; Fontana, Simone; Yue, Lang
2016-07-01
Beamforming methods are available that provide the signals used to drive an array of sources in so-called personal audio systems. In this work, the performance of the delay-and-sum (DAS) method and of three widely used methods for optimal beamforming is compared by means of computer simulations and experiments in an anechoic environment, using a linear array of sources with given constraints on the quality of the reproduced field at the listener's position and a limit on the input energy to the array. Using the DAS method as a benchmark, the frequency-domain responses of the loudspeaker filters can be characterized in three regions. In the first region, at low frequencies, the input signals designed with the optimal methods are identical and provide higher directivity than the DAS. In the second region, the performance of the optimal methods is similar to that of the DAS method. The third region starts above the limit due to spatial aliasing. A method is presented to estimate the boundaries of these regions.
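The DAS benchmark rests on a simple rule: each element of a uniform linear array is delayed so that its contribution adds coherently in the steering direction. A minimal sketch of those steering delays, with illustrative array parameters rather than the paper's setup:

```python
from math import pi, sin

def das_delays(num_sources, spacing_m, steer_deg, c=343.0):
    """Delay-and-sum steering delays for a uniform linear array:
    element n fires n*d*sin(theta)/c later than element 0 (shifted so
    the smallest delay is zero), steering the beam to angle theta."""
    raw = [n * spacing_m * sin(steer_deg * pi / 180.0) / c
           for n in range(num_sources)]
    t0 = min(raw)
    return [t - t0 for t in raw]

# Illustrative 8-element array, 10 cm spacing, steered 30 degrees off axis.
delays = das_delays(8, 0.1, 30.0)
```

Optimal beamformers replace this fixed delay rule with filters solved under the field-quality and input-energy constraints described above, which is where their low-frequency directivity advantage comes from.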
Chambers, Nola
2009-01-01
There is extensive experimental evidence that altered auditory feedback (AAF) can have a clinically significant effect on the severity of speech symptoms in people who stutter. However, there is less evidence regarding whether these experimental effects can be observed in naturalistic everyday settings particularly when using the telephone. This study aimed to investigate the effectiveness of the Telephone Assistive Device (TAD), which is designed to provide AAF on the telephone to people who stutter, on reducing stuttering severity. Nine adults participated in a quasi-experimental study. Stuttering severity was measured first without and then with the device in participants' naturalistic settings while making and receiving telephone calls (immediate benefit). Participants were then allowed a week of repeated use of the device following which all measurements were repeated (delayed benefit). Overall, results revealed significant immediate benefits from the TAD in all call conditions. Delayed benefits in received and total calls were also significant. There was substantial individual variability in response to the TAD but none of the demographic or speech-related factors measured in the study were found to significantly impact the benefit (immediate or delayed) derived from the TAD. Results have implications for clinical decision making for adults who stutter.
NASA Astrophysics Data System (ADS)
Yamazaki, Kenji; Maehara, Yosuke; Gohara, Kazutoshi
2018-06-01
The number of layers affects the electronic properties of graphene owing to its unique band structure, called the Dirac cone. Raman spectroscopy is a key diagnostic tool for identifying the number of graphene layers and for determining their physical properties. Here, we observed moiré structures in transmission electron microscopy (TEM) observations; these are signature patterns of multilayer graphene, although the Raman spectra showed the 2D/G peak intensity typical of a monolayer. We also performed a multi-slice TEM image simulation to compare the 3D atomic structures of the two graphene membranes with the experimental TEM images. We found that the experimental moiré image corresponds to a 9-12 Å interlayer distance between the graphene membranes. This structure was produced by transferring CVD-grown graphene films that formed on both sides of the Cu substrate at once.
Experimental demonstration of photon upconversion via cooperative energy pooling
Weingarten, Daniel H.; LaCount, Michael D.; van de Lagemaat, Jao; ...
2017-03-15
Photon upconversion is a fundamental interaction of light and matter that has applications in fields ranging from bioimaging to microfabrication. However, all photon upconversion methods demonstrated thus far involve challenging aspects, including requirements of high excitation intensities, degradation in ambient air, requirements of exotic materials or phases, or involvement of inherent energy loss processes. Here we experimentally demonstrate a mechanism of photon upconversion in a thin-film, binary mixture of organic chromophores that provides a pathway to overcoming the aforementioned disadvantages. This singlet-based process, called Cooperative Energy Pooling (CEP), utilizes a sensitizer-acceptor design in which multiple photoexcited sensitizers resonantly and simultaneously transfer their energies to a higher-energy state on a single acceptor. Data from this proof-of-concept implementation are fit by a proposed model of the CEP process. Design guidelines are presented to facilitate further research and development of more optimized CEP systems.
Bioinformatics and molecular modeling in glycobiology
Schloissnig, Siegfried
2010-01-01
The field of glycobiology is concerned with the study of the structure, properties, and biological functions of the family of biomolecules called carbohydrates. Bioinformatics for glycobiology is a particularly challenging field, because carbohydrates exhibit a high structural diversity and their chains are often branched. Significant improvements in experimental analytical methods over recent years have led to a tremendous increase in the amount of carbohydrate structure data generated. Consequently, the availability of databases and tools to store, retrieve and analyze these data in an efficient way is of fundamental importance to progress in glycobiology. In this review, the various graphical representations and sequence formats of carbohydrates are introduced, and an overview of newly developed databases, the latest developments in sequence alignment and data mining, and tools to support experimental glycan analysis are presented. Finally, the field of structural glycoinformatics and molecular modeling of carbohydrates, glycoproteins, and protein–carbohydrate interaction are reviewed. PMID:20364395
An inexpensive frequency-modulated (FM) audio monitor of time-dependent analog parameters.
Langdon, R B; Jacobs, R S
1980-02-01
The standard method for quantification and presentation of an experimental variable in real time is the use of visual display on the ordinate of an oscilloscope screen or chart recorder. This paper describes a relatively simple electronic circuit, using commercially available and inexpensive integrated circuits (IC), which generates an audible tone, the pitch of which varies in proportion to a running variable of interest. This device, which we call an "Audioscope," can accept as input the monitor output from any instrument that expresses an experimental parameter as a dc voltage. The Audioscope is particularly useful in implanting microelectrodes intracellularly. It may also function to mediate the first step in data recording on magnetic tape, and/or data analysis and reduction by electronic circuitry. We estimate that this device can be built, with two-channel capability, for less than $50, and in less than 10 hr by an experienced electronics technician.
Systematic cloning of an ORFeome using the Gateway system.
Matsuyama, Akihisa; Yoshida, Minoru
2009-01-01
With the completion of the genome projects, there are increasing demands for experimental systems that make it possible to exploit the entire set of protein-coding open reading frames (ORFs), viz. the ORFeome, en masse. Systematic proteomic studies based on cloned ORFeomes are called "reverse proteomics" and have been launched in many organisms in recent years. Cloning an ORFeome is an attractive route to a comprehensive understanding of biological phenomena, but it is a challenging and daunting task. However, recent advances in DNA cloning techniques based on site-specific recombination, and in high-throughput experimental techniques, have made it feasible to clone an ORFeome with a minimum of exertion. The Gateway system is one such approach, employing the recombination reaction of bacteriophage lambda. By combining traditional DNA manipulation methods with this modern recombination-based cloning system, it is possible to clone the ORFeome of an organism at the level of individual ORFs.
Researches on Preliminary Chemical Reactions in Spark-Ignition Engines
NASA Technical Reports Server (NTRS)
Muehlner, E.
1943-01-01
Chemical reactions can demonstrably occur in a fuel-air mixture compressed in the working cylinder of an Otto-cycle (spark ignition) internal-combustion engine even before the charge is ignited by the flame proceeding from the sparking plug. These are the so-called "preliminary reactions" ("pre-flame" combustion or oxidation), and an exact knowledge of their characteristic development is of great importance for a correct appreciation of the phenomena of engine-knock (detonation), and consequently for its avoidance. Such reactions can be studied either in a working engine cylinder or in a combustion bomb. The first method necessitates a complicated experimental technique, while the second has the disadvantage of enabling only a single reaction to be studied at one time. Consequently, a new series of experiments was inaugurated, conducted in a motored (externally-driven) experimental engine of mixture-compression type, without ignition, the resulting preliminary reactions being detectable and measurable thermometrically.
Measuring the Edge Recombination Velocity of Monolayer Semiconductors.
Zhao, Peida; Amani, Matin; Lien, Der-Hsien; Ahn, Geun Ho; Kiriya, Daisuke; Mastandrea, James P; Ager, Joel W; Yablonovitch, Eli; Chrzan, Daryl C; Javey, Ali
2017-09-13
Understanding edge effects and quantifying their impact on the carrier properties of two-dimensional (2D) semiconductors is an essential step toward utilizing this material for high performance electronic and optoelectronic devices. WS2 monolayers patterned into disks of varying diameters are used to experimentally explore the influence of edges on the material's optical properties. Carrier lifetime measurements show a decrease in the effective lifetime, τ_effective, as a function of decreasing diameter, suggesting that the edges are active sites for carrier recombination. Accordingly, we introduce a metric called edge recombination velocity (ERV) to characterize the impact of 2D material edges on nonradiative carrier recombination. The unpassivated WS2 monolayer disks yield an ERV ~ 4 × 10^4 cm/s. This work quantifies the nonradiative recombination edge effects in monolayer semiconductors, while simultaneously establishing a practical characterization approach that can be used to experimentally explore edge passivation methods for 2D materials.
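The way a diameter-dependent lifetime yields an edge recombination velocity can be sketched with a standard geometric argument (our illustration; the paper's exact model may differ). For a disk of diameter $d$, the edge perimeter-to-area ratio is $4/d$, so edge recombination adds a term proportional to $1/d$ to the total recombination rate:

```latex
\frac{1}{\tau_{\mathrm{effective}}} = \frac{1}{\tau_{\mathrm{interior}}} + \frac{4\,\mathrm{ERV}}{d}
```

Plotting the measured $1/\tau_{\mathrm{effective}}$ against $1/d$ then gives the ERV from the slope of a linear fit.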
BioBlocks: Programming Protocols in Biology Made Easier.
Gupta, Vishal; Irimia, Jesús; Pau, Iván; Rodríguez-Patón, Alfonso
2017-07-21
The methods to execute biological experiments are evolving. Affordable fluid handling robots and on-demand biology enterprises are making automating entire experiments a reality. Automation offers the benefit of high-throughput experimentation, rapid prototyping, and improved reproducibility of results. However, learning to automate and codify experiments is a difficult task as it requires programming expertise. Here, we present a web-based visual development environment called BioBlocks for describing experimental protocols in biology. It is based on Google's Blockly and Scratch, and requires little or no experience in computer programming to automate the execution of experiments. The experiments can be specified, saved, modified, and shared between multiple users in an easy manner. BioBlocks is open-source and can be customized to execute protocols on local robotic platforms or remotely, that is, in the cloud. It aims to serve as a de facto open standard for programming protocols in Biology.
Experimental demonstration of photon upconversion via cooperative energy pooling
NASA Astrophysics Data System (ADS)
Weingarten, Daniel H.; Lacount, Michael D.; van de Lagemaat, Jao; Rumbles, Garry; Lusk, Mark T.; Shaheen, Sean E.
2017-03-01
Photon upconversion is a fundamental interaction of light and matter that has applications in fields ranging from bioimaging to microfabrication. However, all photon upconversion methods demonstrated thus far involve challenging aspects, including requirements of high excitation intensities, degradation in ambient air, requirements of exotic materials or phases, or involvement of inherent energy loss processes. Here we experimentally demonstrate a mechanism of photon upconversion in a thin film, binary mixture of organic chromophores that provides a pathway to overcoming the aforementioned disadvantages. This singlet-based process, called Cooperative Energy Pooling (CEP), utilizes a sensitizer-acceptor design in which multiple photoexcited sensitizers resonantly and simultaneously transfer their energies to a higher-energy state on a single acceptor. Data from this proof-of-concept implementation is fit by a proposed model of the CEP process. Design guidelines are presented to facilitate further research and development of more optimized CEP systems.
Neural-network quantum state tomography
NASA Astrophysics Data System (ADS)
Torlai, Giacomo; Mazzola, Guglielmo; Carrasquilla, Juan; Troyer, Matthias; Melko, Roger; Carleo, Giuseppe
2018-05-01
The experimental realization of increasingly complex synthetic quantum systems calls for the development of general theoretical methods to validate and fully exploit quantum resources. Quantum state tomography (QST) aims to reconstruct the full quantum state from simple measurements, and therefore provides a key tool to obtain reliable analytics [1-3]. However, exact brute-force approaches to QST place a high demand on computational resources, making them unfeasible for anything except small systems [4,5]. Here we show how machine learning techniques can be used to perform QST of highly entangled states with more than a hundred qubits, to a high degree of accuracy. We demonstrate that machine learning allows one to reconstruct traditionally challenging many-body quantities—such as the entanglement entropy—from simple, experimentally accessible measurements. This approach can benefit existing and future generations of devices ranging from quantum computers to ultracold-atom quantum simulators [6-8].
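For scale, the brute-force baseline that the abstract contrasts with can be written out for a single qubit (the standard linear-inversion formula; this toy is our illustration, and the paper's neural-network approach replaces exactly this kind of exponentially growing parameterization):

```python
def linear_inversion_qubit(x, y, z):
    """Reconstruct a single-qubit density matrix from estimated Pauli
    expectation values (<X>, <Y>, <Z>):  rho = (I + x*X + y*Y + z*Z) / 2."""
    return [[(1 + z) / 2, (x - 1j * y) / 2],
            [(x + 1j * y) / 2, (1 - z) / 2]]

# A state measured with <Z> = 1 reconstructs as |0><0|.
rho = linear_inversion_qubit(0.0, 0.0, 1.0)
```

For n qubits this parameterization needs 4^n - 1 expectation values, which is what makes brute-force QST unfeasible beyond small systems and motivates the generative-model approach.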
The Effect of the Laboratory Specimen on Fatigue Crack Growth Rate
NASA Technical Reports Server (NTRS)
Forth, S. C.; Johnston, W. M.; Seshadri, B. R.
2006-01-01
Over the past thirty years, laboratory experiments have been devised to develop fatigue crack growth rate data that are representative of the material response. The crack growth rate data generated in the laboratory are then used to predict the safe operating envelope of a structure. The ability to interrelate laboratory data and structural response is called similitude. In essence, a nondimensional term, called the stress intensity factor, was developed that includes the applied stresses, crack size, and geometric configuration. The stress intensity factor is then directly related to the rate at which cracks propagate in a material, resulting in the material property of fatigue crack growth response. Standardized specimen configurations and experimental procedures have been developed for laboratory testing to generate crack growth rate data that support similitude of the stress intensity factor solution. In this paper, the authors present laboratory fatigue crack growth rate test data and finite element analyses showing that similitude between standard specimen configurations tested using the constant stress ratio test method is unobtainable.
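The similitude argument above rests on two textbook fracture-mechanics relations (standard forms, not taken from this paper): the stress intensity factor combines the applied stress $\sigma$, crack length $a$, and a geometry factor $Y$, and a growth law such as the Paris relation ties the crack advance per cycle to the stress intensity range:

```latex
K = Y\,\sigma\,\sqrt{\pi a}, \qquad \frac{da}{dN} = C\,(\Delta K)^{m}
```

with $C$ and $m$ treated as material constants; similitude is precisely the assumption that $da/dN$ depends on specimen geometry only through $\Delta K$.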
Turnover of Lipidated LC3 and Autophagic Cargoes in Mammalian Cells.
Rodríguez-Arribas, M; Yakhine-Diop, S M S; González-Polo, R A; Niso-Santano, M; Fuentes, J M
2017-01-01
Macroautophagy (usually referred to simply as autophagy) is the most important degradation system in mammalian cells. It is responsible for the elimination of protein aggregates, organelles, and other cellular content. During autophagy, these materials (i.e., cargo) must be engulfed by a double-membrane structure called an autophagosome, which delivers the cargo to the lysosome to complete its degradation. Autophagy is a highly dynamic pathway, and the flow of material through it, from autophagosome formation to cargo degradation, is called autophagic flux. There are several techniques to monitor autophagic flux. Among them, the method most used experimentally to assess autophagy is the detection of LC3 protein processing and p62 degradation by Western blotting. In this chapter, we provide a detailed and straightforward protocol for this purpose in cultured mammalian cells, including a brief set of notes concerning problems associated with the Western-blotting detection of LC3 and p62. © 2017 Elsevier Inc. All rights reserved.
Improving ECG Classification Accuracy Using an Ensemble of Neural Network Modules
Javadi, Mehrdad; Ebrahimpour, Reza; Sajedin, Atena; Faridi, Soheil; Zakernejad, Shokoufeh
2011-01-01
This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the input space and, as a result, perform better on the same task. Experimental results support our claim that this additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization. PMID:22046232
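The core idea of Modified Stacked Generalization, feeding the combiner the base classifiers' outputs concatenated with the raw input pattern, can be sketched as follows (a minimal illustration with made-up toy rules, not the paper's neural networks):

```python
def stacked_features(x, base_classifiers):
    """Conventional stacking: the combiner sees only base outputs."""
    return [clf(x) for clf in base_classifiers]

def modified_stacked_features(x, base_classifiers):
    """Modified stacking: append the input pattern itself, giving the
    combiner direct knowledge of the input space."""
    return stacked_features(x, base_classifiers) + list(x)

# Toy base "classifiers": each maps a beat feature vector to a score.
base = [
    lambda x: 1.0 if x[0] > 0.5 else 0.0,   # e.g. an RR-interval rule
    lambda x: 1.0 if x[1] > 0.2 else 0.0,   # e.g. a QRS-width rule
]

beat = [0.7, 0.1]
print(stacked_features(beat, base))           # [1.0, 0.0]
print(modified_stacked_features(beat, base))  # [1.0, 0.0, 0.7, 0.1]
```

A combiner trained on `modified_stacked_features` vectors sees both what the base classifiers decided and where in the input space the beat lies.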
Non-homogeneous updates for the iterative coordinate descent algorithm
NASA Astrophysics Data System (ADS)
Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang
2007-02-01
Statistical reconstruction methods show great promise for improving resolution and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call non-homogeneous iterative coordinate descent (NH-ICD), uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
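The flavor of non-homogeneous updates can be illustrated with a Gauss-Southwell-style greedy coordinate descent on a toy quadratic objective (our stand-in, not the authors' NH-ICD algorithm or a CT cost function): effort is spent on whichever coordinate currently has the largest residual.

```python
def greedy_coordinate_descent(A, b, sweeps=100):
    """Minimize f(x) = 0.5 x^T A x - b^T x (A symmetric positive definite)
    by exact coordinate updates, always visiting the coordinate with the
    largest residual instead of sweeping all coordinates homogeneously."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps * n):
        # Residual g = A x - b; its largest component marks the coordinate
        # where an update is currently most needed.
        g = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
        i = max(range(n), key=lambda k: abs(g[k]))
        x[i] -= g[i] / A[i][i]   # exact 1-D minimization along coordinate i
    return x

# Toy system: the exact solution A^-1 b is [1/11, 7/11].
x = greedy_coordinate_descent([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

A production ICD implementation maintains the residual incrementally rather than recomputing it each step; the non-homogeneous schedule only decides which coordinate gets that cheap update next.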
Blind Linguistic Steganalysis against Translation Based Steganography
NASA Astrophysics Data System (ADS)
Chen, Zhili; Huang, Liusheng; Meng, Peng; Yang, Wei; Miao, Haibo
Translation based steganography (TBS) is a relatively new and secure kind of linguistic steganography. It takes advantage of the "noise" created by automatic translation of natural language text to encode the secret information. To date, there has been little research on steganalysis against this kind of linguistic steganography. In this paper, a blind steganalytic method named natural frequency zoned word distribution analysis (NFZ-WDA) is presented. This method improves on a previously proposed linguistic steganalysis method based on word distribution, which targeted the detection of linguistic steganography like nicetext and texto. The new method aims to detect the application of TBS and uses no information specific to TBS; the only resource it uses is a word frequency dictionary obtained from a large corpus, or a so-called natural frequency dictionary, so it is totally blind. To verify the effectiveness of NFZ-WDA, two experiments, with two-class and multi-class SVM classifiers respectively, are carried out. The experimental results show that the steganalytic method is promising.
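The word-distribution features behind such a detector can be sketched roughly like this (an illustrative reduction; NFZ-WDA's actual zoning and classifier setup are more elaborate, and all names here are ours):

```python
def nfz_features(words, natural_rank, zones=5):
    """Fraction of words falling in each frequency 'zone', where a zone is
    a band of ranks in a natural-frequency dictionary; the last bin counts
    words missing from the dictionary."""
    counts = [0] * (zones + 1)
    n = len(natural_rank)
    for w in words:
        r = natural_rank.get(w)
        if r is None:
            counts[zones] += 1                        # out-of-dictionary
        else:
            counts[min(r * zones // n, zones - 1)] += 1
    total = len(words) or 1
    return [c / total for c in counts]

# Tiny 5-word "natural frequency dictionary": rank 0 = most frequent.
natural_rank = {"the": 0, "of": 1, "cat": 2, "sat": 3, "mat": 4}
feats = nfz_features(["the", "cat", "dog"], natural_rank)
```

Feature vectors like `feats`, computed from known-clean and suspect texts, would then be fed to an SVM, as in the paper's two-class and multi-class experiments.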
Correcting for Sample Contamination in Genotype Calling of DNA Sequence Data
Flickinger, Matthew; Jun, Goo; Abecasis, Gonçalo R.; Boehnke, Michael; Kang, Hyun Min
2015-01-01
DNA sample contamination is a frequent problem in DNA sequencing studies and can result in genotyping errors and reduced power for association testing. We recently described methods to identify within-species DNA sample contamination based on sequencing read data, showed that our methods can reliably detect and estimate contamination levels as low as 1%, and suggested strategies to identify and remove contaminated samples from sequencing studies. Here we propose methods to model contamination during genotype calling as an alternative to removal of contaminated samples from further analyses. We compare our contamination-adjusted calls to calls that ignore contamination and to calls based on uncontaminated data. We demonstrate that, for moderate contamination levels (5%–20%), contamination-adjusted calls eliminate 48%–77% of the genotyping errors. For lower levels of contamination, our contamination correction methods produce genotypes nearly as accurate as those based on uncontaminated data. Our contamination correction methods are useful generally, but are particularly helpful for sample contamination levels from 2% to 20%. PMID:26235984
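The idea of folding a contamination fraction into genotype likelihoods can be reduced to a few lines (a deliberately simplified sketch; the authors' model is more sophisticated, and the allele coding, numbers, and function names here are illustrative):

```python
def genotype_likelihood(bases, g, alt_freq, alpha, err=0.01):
    """P(observed bases | genotype g in {0,1,2} ALT alleles), with a
    fraction alpha of reads drawn from a contaminant whose ALT allele
    frequency is alt_freq.  'R' = reference base, 'A' = alternate base."""
    p_alt = (1 - alpha) * (g / 2.0) + alpha * alt_freq
    p_obs_alt = p_alt * (1 - err) + (1 - p_alt) * err   # sequencing error
    lik = 1.0
    for b in bases:
        lik *= p_obs_alt if b == "A" else (1 - p_obs_alt)
    return lik

def call_genotype(bases, alt_freq, alpha):
    """Maximum-likelihood genotype call under the contamination model."""
    return max(range(3),
               key=lambda g: genotype_likelihood(bases, g, alt_freq, alpha))

# 3 of 20 reads show the ALT base, consistent with a homozygous-reference
# sample contaminated at ~20% by a carrier of the ALT allele.
reads = ["R"] * 17 + ["A"] * 3
corrected = call_genotype(reads, alt_freq=0.75, alpha=0.2)   # hom-ref
naive = call_genotype(reads, alt_freq=0.75, alpha=0.0)       # spurious het
```

With contamination ignored (alpha=0), the stray ALT reads are most easily explained as a heterozygote; modelling the 20% contamination recovers the homozygous-reference call, mirroring the error reduction reported above.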
An optimized resistor pattern for temperature gradient control in microfluidics
NASA Astrophysics Data System (ADS)
Selva, Bertrand; Marchalot, Julien; Jullien, Marie-Caroline
2009-06-01
In this paper, we demonstrate the possibility of generating high-temperature gradients with a linear temperature profile when heating is provided in situ. The shape of the resistors that constitute the heating source is optimized by applying the genetic algorithm NSGA-II (acronym for the non-dominated sorting genetic algorithm) (Deb et al 2002 IEEE Trans. Evol. Comput. 6 2). Experimental validation of the linear temperature profile within the cavity is carried out using a thermally sensitive fluorophore, Rhodamine B (Ross et al 2001 Anal. Chem. 73 4117-23, Erickson et al 2003 Lab Chip 3 141-9). The high level of agreement obtained between experimental and numerical results serves to validate the accuracy of this method for generating highly controlled temperature profiles. Such a device is of potential interest in the field of actuation, since it allows bubbles or droplets to be moved and controlled by means of thermocapillary effects (Baroud et al 2007 Phys. Rev. E 75 046302). Digital microfluidics is a critical area in the field of microfluidics (Dreyfus et al 2003 Phys. Rev. Lett. 90 14) as well as in the so-called lab-on-a-chip technology. Through an example, the large application potential of such a technique is demonstrated, which entails handling a single bubble driven along a cavity using simple and tunable embedded resistors.
NASA Astrophysics Data System (ADS)
van Es, Maarten H.; Mohtashami, Abbas; Piras, Daniele; Sadeghian, Hamed
2018-03-01
Nondestructive subsurface nanoimaging through optically opaque media is considered to be extremely challenging and is essential for several semiconductor metrology applications, including overlay and alignment and buried void and defect characterization. The current key challenge in overlay and alignment is the measurement of targets that are covered by optically opaque layers. Moreover, with device dimensions moving to smaller nodes and the so-called loading effect causing offsets between targets and product features, it is increasingly desirable to perform alignment and overlay on product features, or so-called on-cell overlay, which requires higher lateral resolution than optical methods can provide. Our recently developed technique, known as SubSurface Ultrasonic Resonance Force Microscopy (SSURFM), has shown the capability for high-resolution imaging of structures below a surface based on the (visco-)elasticity of the constituent materials, and as such is a promising technique for performing overlay and alignment with high resolution in upcoming production nodes. In this paper, we describe the developed SSURFM technique and the experimental results on imaging buried features through various layers and the ability to detect objects with resolution below 10 nm. In summary, the experimental results show that SSURFM is a potential solution for on-cell overlay and alignment, as well as for detecting buried defects or voids and, more generally, metrology through optically opaque layers.
Action of Molecular Switches in GPCRs - Theoretical and Experimental Studies
Trzaskowski, B; Latek, D; Yuan, S; Ghoshdastider, U; Debinski, A; Filipek, S
2012-01-01
G protein coupled receptors (GPCRs), also called 7TM receptors, form a huge superfamily of membrane proteins that, upon activation by extracellular agonists, pass the signal to the cell interior. Ligands can bind either to extracellular N-terminus and loops (e.g. glutamate receptors) or to the binding site within transmembrane helices (Rhodopsin-like family). They are all activated by agonists although a spontaneous auto-activation of an empty receptor can also be observed. Biochemical and crystallographic methods together with molecular dynamics simulations and other theoretical techniques provided models of the receptor activation based on the action of so-called “molecular switches” buried in the receptor structure. They are changed by agonists but also by inverse agonists evoking an ensemble of activation states leading toward different activation pathways. Switches discovered so far include the ionic lock switch, the 3-7 lock switch, the tyrosine toggle switch linked with the nPxxy motif in TM7, and the transmission switch. The latter one was proposed instead of the tryptophan rotamer toggle switch because no change of the rotamer was observed in structures of activated receptors. The global toggle switch suggested earlier consisting of a vertical rigid motion of TM6, seems also to be implausible based on the recent crystal structures of GPCRs with agonists. Theoretical and experimental methods (crystallography, NMR, specific spectroscopic methods like FRET/BRET but also single-molecule-force-spectroscopy) are currently used to study the effect of ligands on the receptor structure, location of stable structural segments/domains of GPCRs, and to answer the still open question on how ligands are binding: either via ensemble of conformational receptor states or rather via induced fit mechanisms. 
On the other hand, structural investigations of homo- and heterodimers and higher oligomers have revealed the mechanism of allosteric signal transmission and receptor activation, which could lead to the design of highly effective and selective allosteric or ago-allosteric drugs. PMID:22300046
Action of molecular switches in GPCRs--theoretical and experimental studies.
Trzaskowski, B; Latek, D; Yuan, S; Ghoshdastider, U; Debinski, A; Filipek, S
2012-01-01
G protein coupled receptors (GPCRs), also called 7TM receptors, form a huge superfamily of membrane proteins that, upon activation by extracellular agonists, pass the signal to the cell interior. Ligands can bind either to extracellular N-terminus and loops (e.g. glutamate receptors) or to the binding site within transmembrane helices (Rhodopsin-like family). They are all activated by agonists although a spontaneous auto-activation of an empty receptor can also be observed. Biochemical and crystallographic methods together with molecular dynamics simulations and other theoretical techniques provided models of the receptor activation based on the action of so-called "molecular switches" buried in the receptor structure. They are changed by agonists but also by inverse agonists evoking an ensemble of activation states leading toward different activation pathways. Switches discovered so far include the ionic lock switch, the 3-7 lock switch, the tyrosine toggle switch linked with the nPxxy motif in TM7, and the transmission switch. The latter one was proposed instead of the tryptophan rotamer toggle switch because no change of the rotamer was observed in structures of activated receptors. The global toggle switch suggested earlier consisting of a vertical rigid motion of TM6, seems also to be implausible based on the recent crystal structures of GPCRs with agonists. Theoretical and experimental methods (crystallography, NMR, specific spectroscopic methods like FRET/BRET but also single-molecule-force-spectroscopy) are currently used to study the effect of ligands on the receptor structure, location of stable structural segments/domains of GPCRs, and to answer the still open question on how ligands are binding: either via ensemble of conformational receptor states or rather via induced fit mechanisms. 
On the other hand, structural investigations of homo- and heterodimers and higher oligomers have revealed the mechanism of allosteric signal transmission and receptor activation, which could lead to the design of highly effective and selective allosteric or ago-allosteric drugs.
Normal mode-guided transition pathway generation in proteins
Lee, Byung Ho; Seo, Sangjae; Kim, Min Hyeok; Kim, Youngjin; Jo, Soojin; Choi, Moon-ki; Lee, Hoomin; Choi, Jae Boong
2017-01-01
The biological function of proteins is closely related to their structural motion. For instance, structurally misfolded proteins do not function properly. Although we are able to experimentally obtain structural information on proteins, it is still challenging to capture their dynamics, such as transition processes. Therefore, we need a simulation method to predict the transition pathways of a protein in order to understand and study large functional deformations. Here, we present a new simulation method called normal mode-guided elastic network interpolation (NGENI) that performs normal mode analysis iteratively to predict transition pathways of proteins. To be more specific, NGENI obtains displacement vectors that determine intermediate structures by interpolating the distance between two end-point conformations, similar to a morphing method called elastic network interpolation. However, the displacement vector is regarded as a linear combination of the normal mode vectors of each intermediate structure, in order to enhance the physical sense of the proposed pathways. As a result, we can generate transition pathways that are more reasonable both geometrically and thermodynamically. Even when using only a small number of the lowest normal modes rather than the full set, NGENI can still generate reasonable pathways for large deformations in proteins. This study shows that global protein transitions are dominated by collective motion, which means that a few of the lowest normal modes play an important role in this process. NGENI also has considerable merit in terms of computational cost, because it can generate transition pathways using only a subset of the degrees of freedom, which conventional methods cannot. PMID:29020017
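The guiding idea, interpolating toward the target conformation while moving only along a small set of normal modes, can be caricatured as follows (a toy with fixed, orthonormal mode vectors; the real NGENI recomputes elastic-network normal modes at every intermediate structure):

```python
def project(vec, modes):
    """Project a displacement onto the subspace spanned by orthonormal
    mode vectors (the 'allowed' collective motions)."""
    out = [0.0] * len(vec)
    for m in modes:
        c = sum(v * mi for v, mi in zip(vec, m))
        for i in range(len(vec)):
            out[i] += c * m[i]
    return out

def guided_pathway(start, end, modes, steps=10):
    """Generate intermediate structures between two conformations, each
    step moving toward the target only along the given modes."""
    x = list(start)
    path = [list(x)]
    for s in range(steps):
        step_to_target = [(e - xi) / (steps - s) for e, xi in zip(end, x)]
        dx = project(step_to_target, modes)
        x = [xi + di for xi, di in zip(x, dx)]
        path.append(list(x))
    return path

# With the full basis the path reaches the end conformation; restricting
# the basis to one mode confines the motion to that direction.
full = guided_pathway([0.0, 0.0], [1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]])
restricted = guided_pathway([0.0, 0.0], [1.0, 2.0], [[1.0, 0.0]])
```

The restricted pathway illustrates the paper's observation in miniature: when a few low modes capture the collective motion, guiding by those modes alone still produces a usable pathway.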
The pharmacology of lysergic acid diethylamide: a review.
Passie, Torsten; Halpern, John H; Stichtenoth, Dirk O; Emrich, Hinderk M; Hintzen, Annelie
2008-01-01
Lysergic acid diethylamide (LSD) was synthesized in 1938 and its psychoactive effects discovered in 1943. It was used during the 1950s and 1960s as an experimental drug in psychiatric research for producing so-called "experimental psychosis" by altering neurotransmitter systems, and in psychotherapeutic procedures ("psycholytic" and "psychedelic" therapy). From the mid 1960s, it became an illegal drug of abuse with widespread use that continues today. With the entry of new methods of research and better study oversight, scientific interest in LSD has resumed for brain research and experimental treatments. Due to the lack of any comprehensive review since the 1950s and the widely dispersed experimental literature, the present review focuses on all aspects of the pharmacology and psychopharmacology of LSD. A thorough search of the experimental literature regarding the pharmacology of LSD was performed and the extracted results are given in this review. (Psycho-)pharmacological research on LSD was extensive and produced nearly 10,000 scientific papers. The pharmacology of LSD is complex and its mechanisms of action are still not completely understood. LSD is physiologically well tolerated and psychological reactions can be controlled in a medically supervised setting, but complications may easily result from uncontrolled use by laymen. There is now renewed interest in LSD as an experimental tool for elucidating neural mechanisms of (states of) consciousness, and treatment options with LSD have recently been explored for cluster headache and for the terminally ill.
Identification of genomic indels and structural variations using split reads
2011-01-01
Background Recent studies have demonstrated the genetic significance of insertions, deletions, and other more complex structural variants (SVs) in the human population. With the development of the next-generation sequencing technologies, high-throughput surveys of SVs on the whole-genome level have become possible. Here we present split-read identification, calibrated (SRiC), a sequence-based method for SV detection. Results We start by mapping each read to the reference genome in standard fashion using gapped alignment. Then to identify SVs, we score each of the many initial mappings with an assessment strategy designed to take into account both sequencing and alignment errors (e.g. scoring more highly events gapped in the center of a read). All current SV calling methods have multilevel biases in their identifications due to both experimental and computational limitations (e.g. calling more deletions than insertions). A key aspect of our approach is that we calibrate all our calls against synthetic data sets generated from simulations of high-throughput sequencing (with realistic error models). This allows us to calculate sensitivity and the positive predictive value under different parameter-value scenarios and for different classes of events (e.g. long deletions vs. short insertions). We run our calculations on representative data from the 1000 Genomes Project. Coupling the observed numbers of events on chromosome 1 with the calibrations gleaned from the simulations (for different length events) allows us to construct a relatively unbiased estimate for the total number of SVs in the human genome across a wide range of length scales. We estimate in particular that an individual genome contains ~670,000 indels/SVs. 
Conclusions Compared with the existing read-depth and read-pair approaches for SV identification, our method can pinpoint the exact breakpoints of SV events, reveal the actual sequence content of insertions, and cover the whole size spectrum for deletions. Moreover, with the advent of the third-generation sequencing technologies that produce longer reads, we expect our method to be even more useful. PMID:21787423
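The essence of split-read SV detection, a single read whose two halves map to discontiguous reference positions, can be sketched naively (a toy exact-match version; the SRiC pipeline described above uses gapped alignment, mapping scores, and simulation-based calibration):

```python
def find_deletion(read, ref):
    """Toy split-read check: find a split of `read` whose left and right
    parts both occur in `ref` with a gap between them, implying that the
    skipped reference bases are deleted in the sample."""
    for i in range(1, len(read)):
        left, right = read[:i], read[i:]
        p = ref.find(left)
        if p < 0:
            continue
        q = ref.find(right, p + len(left))
        if q > p + len(left):
            return p + len(left), q    # deleted interval [start, end)
    return None

# The read spans a 6-bp deletion of "CCCGGG" relative to the reference.
breakpoints = find_deletion("AAATTT", "AAACCCGGGTTT")
```

Unlike read-depth or read-pair signals, the split read pins down the exact breakpoints, which is the advantage the Conclusions section emphasizes.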
Image encryption using random sequence generated from generalized information domain
NASA Astrophysics Data System (ADS)
Xia-Yan, Zhang; Guo-Ji, Zhang; Xuan, Li; Ya-Zhou, Ren; Jie-Hua, Wu
2016-05-01
A novel image encryption method based on the random sequence generated from the generalized information domain and permutation-diffusion architecture is proposed. The random sequence is generated by reconstruction from the generalized information file and discrete trajectory extraction from the data stream. The trajectory address sequence is used to generate a P-box to shuffle the plain image while random sequences are treated as keystreams. A new factor called drift factor is employed to accelerate and enhance the performance of the random sequence generator. An initial value is introduced to make the encryption method an approximately one-time pad. Experimental results show that the random sequences pass the NIST statistical test with a high ratio and extensive analysis demonstrates that the new encryption scheme has superior security.
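A minimal permutation-diffusion round looks as follows (the architecture named above, with Python's seeded PRNG standing in for the paper's generalized-information-domain sequence generator; key handling here is purely illustrative):

```python
import random

def encrypt(pixels, key):
    rng = random.Random(key)            # stand-in keystream source
    perm = list(range(len(pixels)))
    rng.shuffle(perm)                   # P-box: permute pixel positions
    shuffled = [pixels[p] for p in perm]
    keystream = [rng.randrange(256) for _ in pixels]
    return [s ^ k for s, k in zip(shuffled, keystream)]   # diffusion

def decrypt(cipher, key):
    rng = random.Random(key)            # regenerate the same sequences
    perm = list(range(len(cipher)))
    rng.shuffle(perm)
    keystream = [rng.randrange(256) for _ in cipher]
    shuffled = [c ^ k for c, k in zip(cipher, keystream)]
    plain = [0] * len(cipher)
    for i, p in enumerate(perm):        # invert the P-box
        plain[p] = shuffled[i]
    return plain

data = [7, 200, 13, 0, 255, 42]         # toy "image" as byte values
restored = decrypt(encrypt(data, 1234), 1234)
```

The drift factor and per-encryption initial value described in the abstract would enter where the keystream is generated, making each encryption an approximately one-time pad.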
Kharazian, B; Hadipour, N L; Ejtehadi, M R
2016-06-01
Nanoparticles (NPs) can adsorb proteins from biological fluids and form a protein layer, which is called the protein corona. Because the cell sees the corona-coated NP, the protein corona can dictate the biological response to NPs. The composition of the protein corona varies with the physicochemical properties of the NP, including size, shape, and surface chemistry. Protein adsorption is a dynamic phenomenon: a protein may desorb, leaving a surface vacancy that is rapidly filled by another protein, causing changes in the corona composition mainly through the Vroman effect. In this review, we discuss the interaction between NPs and proteins and the available techniques for identification of NP-bound proteins. We also review currently developed computational methods for understanding NP-protein complex interactions. Copyright © 2016. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Bagli, Enrico; Guidi, Vincenzo
2013-08-01
A toolkit for the simulation of coherent interactions between high-energy charged particles and complex crystal structures, called DYNECHARM++, has been developed. The code is written in C++, taking advantage of the object-oriented programming paradigm. It is capable of evaluating the electrical characteristics of complex atomic structures and of simulating and tracking particle trajectories within them. A calculation method for the electrical characteristics based on their expansion in Fourier series has been adopted. Two different approaches to simulating the interaction have been adopted, relying on the full integration of particle trajectories under the continuum potential approximation and on the definition of cross-sections of coherent processes. Finally, the code has been shown to reproduce experimental results and to simulate the interaction of charged particles with complex structures.
NASA Astrophysics Data System (ADS)
Mishra, A.; Vibhute, V.; Ninama, S.; Parsai, N.; Jha, S. N.; Sharma, P.
2016-10-01
X-ray absorption fine structure (XAFS) at the copper K-edge has been studied in some copper (II) complexes with substituted anilines (2Cl, 4Br, 2NO2, 4NO2) and pure aniline, with o-PDA (orthophenylenediamine) as ligand. The X-ray absorption measurements were performed at the recently developed BL-8 dispersive EXAFS beamline at the 2.5 GeV Indus-2 Synchrotron Source at RRCAT, Indore, India. The data obtained were processed using the EXAFS data analysis program Athena. The graphical method gives useful information about the bond length and the environment of the absorbing atom. The theoretical bond lengths of the complexes were calculated by interactive fitting of EXAFS using the fast Fourier inverse transformation (IFEFFIT) method, also called the Fourier transform method. The Lytle, Sayers and Stern method and Levy's method were used to determine the bond lengths of the studied complexes experimentally. The results of both methods have been compared with the theoretical IFEFFIT results.
NASA Astrophysics Data System (ADS)
Kostyuchenko, V. I.; Makarova, A. S.; Ryazantsev, O. B.; Samarin, S. I.; Uglov, A. S.
2014-06-01
A great breakthrough in proton therapy has happened in the new century: several tens of dedicated centers are now operated throughout the world, and their number increases every year. An important component of proton therapy is the treatment planning system. To make calculations faster, these systems usually use analytical methods whose reliability and accuracy do not allow the advantages of this treatment modality to be exploited to the full extent. Predictions by the Monte Carlo (MC) method are the "gold" standard for the verification of calculations with these systems. At the Institute of Experimental and Theoretical Physics (ITEP), one of the oldest proton therapy centers in the world, an MC code is an integral part of the treatment planning system. This code, called IThMC, was developed by scientists from RFNC-VNIITF (Snezhinsk) under ISTC Project 3563.
Hierarchical Feature Extraction With Local Neural Response for Image Recognition.
Li, Hong; Wei, Yantao; Li, Luoqing; Chen, C L P
2013-04-01
In this paper, a hierarchical feature extraction method is proposed for image recognition. The key idea of the proposed method is to extract an effective feature, called local neural response (LNR), of the input image with nontrivial discrimination and invariance properties by alternating between local coding and maximum pooling operation. The local coding, which is carried out on the locally linear manifold, can extract the salient feature of image patches and leads to a sparse measure matrix on which maximum pooling is carried out. The maximum pooling operation builds the translation invariance into the model. We also show that other invariant properties, such as rotation and scaling, can be induced by the proposed model. In addition, a template selection algorithm is presented to reduce computational complexity and to improve the discrimination ability of the LNR. Experimental results show that our method is robust to local distortion and clutter compared with state-of-the-art algorithms.
Convex formulation of multiple instance learning from positive and unlabeled bags.
Bao, Han; Sakai, Tomoya; Sato, Issei; Sugiyama, Masashi
2018-05-24
Multiple instance learning (MIL) is a variation of traditional supervised learning problems where data (referred to as bags) are composed of sub-elements (referred to as instances) and only bag labels are available. MIL has a variety of applications such as content-based image retrieval, text categorization, and medical diagnosis. Most previous work on MIL assumes that training bags are fully labeled. However, it is often difficult to obtain a sufficient number of labeled bags in practical situations, while many unlabeled bags are available. A learning framework called PU classification (positive and unlabeled classification) can address this problem. In this paper, we propose a convex PU classification method to solve an MIL problem. We experimentally show that the proposed method achieves better performance with significantly lower computation costs than an existing method for PU-MIL. Copyright © 2018 Elsevier Ltd. All rights reserved.
Clonal Selection Based Artificial Immune System for Generalized Pattern Recognition
NASA Technical Reports Server (NTRS)
Huntsberger, Terry
2011-01-01
The last two decades have seen a rapid increase in the application of AIS (Artificial Immune Systems), modeled after the human immune system, to a wide range of areas including network intrusion detection, job shop scheduling, classification, pattern recognition, and robot control. JPL (Jet Propulsion Laboratory) has developed an integrated pattern recognition/classification system called AISLE (Artificial Immune System for Learning and Exploration) based on biologically inspired models of B-cell dynamics in the immune system. When used for unsupervised or supervised classification, the method scales linearly with the number of dimensions, has performance that is relatively independent of the total size of the dataset, and has been shown to perform as well as traditional clustering methods. When used for pattern recognition, the method efficiently isolates the appropriate matches in the data set. The paper presents the underlying structure of AISLE and the results from a number of experimental studies.
Restoration of out-of-focus images based on circle of confusion estimate
NASA Astrophysics Data System (ADS)
Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto
2002-11-01
In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by a typical CCD/CMOS sensor. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique. This edge detection is carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through the application of a new inverse filtering technique. This algorithm yields sharp images, reducing ringing and crisping artifacts over a wider frequency range. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques found in the literature.
NASA Astrophysics Data System (ADS)
Luis, Josep M.; Duran, Miquel; Andrés, José L.
1997-08-01
An analytic method to evaluate nuclear contributions to the electrical properties of polyatomic molecules is presented. Such contributions control the changes induced by an electric field on the equilibrium geometry (nuclear relaxation contribution) and vibrational motion (vibrational contribution) of a molecular system. Expressions to compute the nuclear contributions have been derived from a power series expansion of the potential energy. These contributions to the electrical properties are given in terms of energy derivatives with respect to normal coordinates, electric field intensity, or both. Only one calculation of such derivatives at the field-free equilibrium geometry is required. To demonstrate the efficiency of the analytical evaluation of electrical properties (the so-called AEEP method), results for calculations on water and pyridine at the SCF/TZ2P and MP2/TZ2P levels of theory are reported. The results obtained are compared with previous theoretical calculations and with experimental values.
Decision Making Based on Fuzzy Aggregation Operators for Medical Diagnosis from Dental X-ray images.
Ngan, Tran Thi; Tuan, Tran Manh; Son, Le Hoang; Minh, Nguyen Hai; Dey, Nilanjan
2016-12-01
Medical diagnosis is considered an important step in dentistry treatment, assisting clinicians in making decisions about a patient's diseases. It has been affirmed that the accuracy of medical diagnosis, which is strongly influenced by the clinician's experience and knowledge, plays an important role in effective treatment therapies. In this paper, we propose a novel decision-making method based on fuzzy aggregation operators for medical diagnosis from dental X-ray images. It first divides a dental X-ray image into segments and identifies the corresponding diseases with a classification method called Affinity Propagation Clustering (APC+). Lastly, the most likely disease is found using fuzzy aggregation operators. Experimental validation on real dental datasets of Hanoi Medical University Hospital, Vietnam showed the superiority of the proposed method over related methods in terms of accuracy.
NASA Astrophysics Data System (ADS)
Lei, Dong; Bai, Pengxiang; Zhu, Feipeng
2018-01-01
Nowadays, acetabular prosthesis replacement is widely used in clinical medicine. However, there is no efficient way to evaluate the implantation effect of the prosthesis. Based on a modern photomechanics technique called digital image correlation (DIC), an evaluation method for the installation effect of the acetabulum during prosthetic hip replacement was established. The DIC method determines the strain field by comparing speckle images of the undeformed sample with its deformed counterpart. Three groups of experiments were carried out to verify the feasibility of the DIC method for the acetabulum installation deformation test. Experimental results indicate that the installation deformation of the acetabulum generally includes elastic deformation (corresponding to a principal strain of about 1.2%) and plastic deformation. When the installation angle is ideal, the plastic deformation can be effectively reduced, which could prolong the service life of acetabular prostheses.
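The subset-matching idea behind DIC can be illustrated with integer-pixel zero-normalized cross-correlation (ZNCC): a speckle subset from the reference image is located in the deformed image by maximizing the correlation over a search window. This is a generic sketch, not the authors' implementation; real DIC adds sub-pixel interpolation and shape functions.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equally sized subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def match_subset(ref, deformed, top, left, size, search):
    """Integer-pixel displacement of one speckle subset, found by
    exhaustively maximizing ZNCC over a small search window."""
    subset = ref[top:top + size, left:left + size]
    best, best_uv = -2.0, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            cand = deformed[top + dv:top + dv + size, left + du:left + du + size]
            score = zncc(subset, cand)
            if score > best:
                best, best_uv = score, (du, dv)
    return best_uv

# Synthetic speckle image rigidly shifted by (2, 1) pixels:
rng = np.random.default_rng(0)
ref = rng.random((40, 40))
deformed = np.roll(np.roll(ref, 1, axis=0), 2, axis=1)
print(match_subset(ref, deformed, 15, 15, 8, 4))   # (2, 1)
```

Repeating this over a grid of subsets yields the displacement field, whose gradients give the strain field reported in the abstract.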
Real Time Updating Genetic Network Programming for Adapting to the Change of Stock Prices
NASA Astrophysics Data System (ADS)
Chen, Yan; Mabu, Shingo; Shimada, Kaoru; Hirasawa, Kotaro
The key in a stock trading model is to take the right actions for trading at the right time, based primarily on accurate forecasts of future stock trends. Since effective trading with given stock price information needs an intelligent strategy for decision making, we applied Genetic Network Programming (GNP) to the creation of a stock trading model. In this paper, we propose a new method called Real Time Updating Genetic Network Programming (RTU-GNP) for adapting to changes in stock prices. There are three important points in this paper. First, the RTU-GNP method makes stock trading decisions considering both the recommendations of technical indices and the candlestick charts derived from real-time stock prices. Second, we combine RTU-GNP with a Sarsa learning algorithm to create the programs efficiently. Sub-nodes are also introduced in each judgment and processing node to determine appropriate actions (buying/selling) and to select appropriate stock price information depending on the situation. Third, a Real Time Updating system is introduced for the first time in this paper to track changes in stock price trends. The experimental results on the Japanese stock market show that the trading model with the proposed RTU-GNP method outperforms other models without real time updating. We also compared the experimental results of the proposed method with the Buy&Hold method to confirm its effectiveness, and it is shown that the proposed trading model can obtain much higher profits than Buy&Hold.
Investigation of hydroelastic ship responses of an ULOC in head seas
NASA Astrophysics Data System (ADS)
Wang, Xue-liang; Temarel, Pandeli; Hu, Jia-jun; Gu, Xue-kang
2016-10-01
Investigation of hydroelastic ship responses has attracted the attention of the scientific and engineering world for several decades. In the responses of a large ocean-going ship there are two kinds of high-frequency vibrations, so-called springing and whipping, which are important for the determination of design wave loads and fatigue damage. Because of the huge scale of an ultra large ore carrier (ULOC), it will seldom suffer slamming events in the ocean. The high-frequency resonance vibration is springing, which is caused by continuous wave excitation. In this paper, the wave-induced vibrations of the ULOC are addressed by experimental and numerical methods, according to 2D and 3D hydroelasticity theories and an elastic model, under full-load and ballast conditions. The influence of loading conditions on high-frequency vibration is studied using both numerical and experimental results. Wave-induced vibrations are higher under the ballast condition, including the wave-frequency part, the multiple-frequencies part, and the 2-node and 3-node vertical bending parts of the hydroelastic responses. The predicted results from the 2D method are less accurate than those from the 3D method, especially under the ballast condition, because of the slender-body assumption in the former. The applicability of the 2D method and the further development of nonlinear effects in the 3D method for the prediction of hydroelastic responses of the ULOC are discussed.
Projection methods for line radiative transfer in spherical media.
NASA Astrophysics Data System (ADS)
Anusha, L. S.; Nagendra, K. N.
An efficient numerical method called the Preconditioned Bi-Conjugate Gradient (Pre-BiCG) method is presented for the solution of the radiative transfer equation in spherical geometry. A variant of this method, called Stabilized Preconditioned Bi-Conjugate Gradient (Pre-BiCG-STAB), is also presented. These methods are based on projections onto subspaces of the n-dimensional Euclidean space ℝ^n called Krylov subspaces. The methods are shown to be faster in terms of convergence rate than contemporary iterative methods such as Jacobi, Gauss-Seidel and Successive Over-Relaxation (SOR).
Experimental Investigation of Spatially-Periodic Scalar Patterns in an Inline Mixer
NASA Astrophysics Data System (ADS)
Baskan, Ozge; Speetjens, Michel F. M.; Clercx, Herman J. H.
2015-11-01
Spatially persisting patterns with exponentially decaying intensities form during the downstream evolution of passive scalars in three-dimensional (3D) spatially periodic flows, due to the coupled effect of the chaotic nature of the flow and the diffusivity of the material. This has been investigated in many computational and theoretical studies on 3D spatially-periodic flow fields. However, in the limit of zero diffusivity, the evolution of the scalar fields produces finer structures that can only be captured by experiments, owing to limitations in computational tools. Our study employs state-of-the-art experimental methods to analyze the evolution of the 3D advective scalar field in a representative inline mixer, the so-called Quatro static mixer. The experimental setup consists of an optically accessible test section with transparent internal elements, accommodating a pressure-driven pipe flow and equipped with 3D Laser-Induced Fluorescence. The results reveal that the continuous process of stretching and folding of material creates finer structures as the flow progresses, which is an indicator of chaotic advection, and the experiments outperform the simulations by revealing a far greater level of detail.
Counter-Flow Cooling Tower Test Cell
NASA Astrophysics Data System (ADS)
Dvořák, Lukáš; Nožička, Jiří
2014-03-01
The article contains a design of a functional experimental model of a cross-flow mechanical draft cooling tower and the results and outcomes of measurements. This device is primarily used for measuring performance characteristics of cooling fills, but with a simple rebuild, it can be used for measuring other thermodynamic processes that take part in so-called wet cooling. The main advantages of the particular test cell lie in the accuracy, size, and the possibility of changing the water distribution level. This feature is very useful for measurements of fills of different heights without the influence of the spray and rain zone. The functionality of this test cell has been verified experimentally during assembly, and data from the measurement of common film cooling fills have been compared against the results taken from another experimental line. For the purpose of evaluating the data gathered, computational scripts were created in the MATLAB numerical computing environment. The first script is for exact calculation of the thermal balance of the model, and the second is for determining Merkel's number via Chebyshev's method.
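Merkel's number mentioned above is conventionally evaluated by a four-point Chebyshev quadrature of Merkel's integral, Me = ∫ c_pw dT / (h_sw − h_a), taken between the outlet and inlet water temperatures. A sketch under a hypothetical enthalpy-difference curve; the abscissae 0.1, 0.4, 0.6, 0.9 are the standard choice for this method.

```python
CPW = 4.186  # kJ/(kg*K), specific heat of water

def merkel_chebyshev(t_in, t_out, enthalpy_diff):
    """Four-point Chebyshev evaluation of Merkel's integral.
    enthalpy_diff(T) must return h_saturated_air(T) - h_air in kJ/kg."""
    xs = (0.1, 0.4, 0.6, 0.9)          # standard Chebyshev abscissae
    dT = t_in - t_out
    return CPW * dT / 4.0 * sum(1.0 / enthalpy_diff(t_out + x * dT) for x in xs)

# Hypothetical linear driving-potential model, for illustration only:
me = merkel_chebyshev(40.0, 30.0, lambda t: 0.9 * t - 10.0)
print(round(me, 3))
```

In practice `enthalpy_diff` comes from psychrometric data along the tower's operating line; the quadrature simply replaces the integral with the mean of the reciprocal driving potential at the four interior temperatures.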
2009-01-01
Background Insertional mutagenesis is an effective method for functional genomic studies in various organisms. It can rapidly generate easily tractable mutations. A large-scale insertional mutagenesis with the piggyBac (PB) transposon is currently performed in mice at the Institute of Developmental Biology and Molecular Medicine (IDM), Fudan University in Shanghai, China. This project is carried out via collaborations among multiple groups overseeing interconnected experimental steps and continuously generates a large volume of experimental data. Therefore, the project calls for an efficient database system for recording, management, statistical analysis, and information exchange. Results This paper presents a database application called MP-PBmice (insertional mutation mapping system of PB Mutagenesis Information Center), which is developed to serve the on-going large-scale PB insertional mutagenesis project. A lightweight enterprise-level development framework, Struts-Spring-Hibernate, is used here to ensure constructive and flexible support for the application. The MP-PBmice database system has three major features: strict access control, efficient workflow control, and good expandability. It supports the collaboration among different groups that enter data and exchange information on a daily basis, and is capable of providing real-time progress reports for the whole project. MP-PBmice can be easily adapted for other large-scale insertional mutation mapping projects, and the source code of this software is freely available at http://www.idmshanghai.cn/PBmice. Conclusion MP-PBmice is a web-based application for large-scale insertional mutation mapping onto the mouse genome, implemented with the widely used framework Struts-Spring-Hibernate. This system is already in use by the on-going genome-wide PB insertional mutation mapping project at IDM, Fudan University. PMID:19958505
Experimental investigation of false positive errors in auditory species occurrence surveys
Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.
2012-01-01
False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. 
Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.
Hamlet on the Macintosh: An Experimental Seminar That Worked.
ERIC Educational Resources Information Center
Strange, William C.
1987-01-01
Describes experimental college Shakespeare seminar that used Macintosh computers and software called ELIZA and ADVENTURE to develop character dialogs and adventure games based on Hamlet's characters and plots. Programming languages are examined, particularly their relationship to metaphor, and the use of computers in humanities is discussed. (LRW)
An experimental ward. Improving care and learning.
Ronan, L; Stoeckle, J D
1992-01-01
The rapidly changing health care system is still largely organized according to old and increasingly outdated models. The contemporary demands of patient care and residency training call for an experimental ward, which can develop and test new techniques in hospital organization and the delivery of care in a comprehensive way.
Multiresolution generalized N dimension PCA for ultrasound image denoising
2014-01-01
Background Ultrasound images are usually affected by speckle noise, a type of random multiplicative noise. Thus, reducing speckle and improving visual image quality are vital to obtaining a better diagnosis. Method In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N dimension PCA (MR-GND-PCA), is presented. In this method, the Gaussian pyramid and multiscale image stacks on each level are built first. GND-PCA, a multilinear subspace learning method, is used for denoising. The levels are then combined via Laplacian pyramids to obtain the final denoised image. Results The proposed method is tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, are used to evaluate its performance. Conclusion Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise and preserving structure. Our method is also robust for images with a much higher level of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917
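The pyramid scaffolding of such a method (decompose, process per level, recombine) can be sketched as follows. The GND-PCA denoising step is omitted here, so this only demonstrates the exact decompose/recombine round trip; the nearest-neighbor up/downsampling is a simplification of the usual Gaussian filtering.

```python
import numpy as np

def downsample(img):
    return img[::2, ::2]

def upsample(img, shape):
    """Nearest-neighbor upsampling, cropped to the target shape."""
    up = img.repeat(2, axis=0).repeat(2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Each level stores the detail lost by downsampling; the last level
    stores the coarsest image (denoising would act on these levels)."""
    pyr, cur = [], img
    for _ in range(levels - 1):
        down = downsample(cur)
        pyr.append(cur - upsample(down, cur.shape))
        cur = down
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = upsample(cur, lap.shape) + lap
    return cur

img = np.arange(64, dtype=float).reshape(8, 8)
print(np.allclose(reconstruct(laplacian_pyramid(img, 3)), img))   # True
```

Because the same upsampling is used in analysis and synthesis, reconstruction is exact; a denoiser applied to each pyramid level therefore only changes the image through the noise it removes.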
Methodical fitting for mathematical models of rubber-like materials
NASA Astrophysics Data System (ADS)
Destrade, Michel; Saccomandi, Giuseppe; Sgura, Ivonne
2017-02-01
A great variety of models can describe the nonlinear response of rubber to uniaxial tension. Yet an in-depth understanding of the successive stages of large extension is still lacking. We show that the response can be broken down into three stages, which we delineate by relying on a simple formatting of the data, the so-called Mooney plot transform. First, the small-to-moderate regime, where the polymeric chains unfold easily and the Mooney plot is almost linear. Second, the strain-hardening regime, where blobs of bundled chains unfold to stiffen the response, corresponding to the `upturn' of the Mooney plot. Third, the limiting-chain regime, with a sharp stiffening occurring as the chains extend towards their limit. We provide strain-energy functions with terms accounting for each stage that (i) give an accurate local and then global fitting of the data; (ii) are consistent with weak nonlinear elasticity theory; and (iii) can be interpreted in the framework of statistical mechanics. We apply our method to Treloar's classical experimental data and also to some more recent data. Our method not only provides models that describe the experimental data with a very low quantitative relative error, but also shows that the theory of nonlinear elasticity is much more robust than it seemed at first sight.
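The Mooney plot transform itself is simple: the reduced stress f* = T / (λ − λ⁻²), with T the nominal stress and λ the stretch, is plotted against 1/λ. For an incompressible Mooney-Rivlin solid, f* = 2C₁ + 2C₂/λ is a straight line, which is why departures from linearity delineate the regimes described above. A sketch on synthetic Mooney-Rivlin data; the coefficients are illustrative, not fitted values.

```python
def mooney_stress(nominal_stress, stretch):
    """Reduced (Mooney) stress f* = T / (lambda - lambda^-2)."""
    return nominal_stress / (stretch - stretch ** -2)

# Synthetic data generated from a Mooney-Rivlin law with 2C1 = 1.0, 2C2 = 0.5:
stretches = [1.5, 2.0, 3.0, 4.0]
nominal = [(1.0 + 0.5 / lam) * (lam - lam ** -2) for lam in stretches]

fstar = [mooney_stress(T, lam) for T, lam in zip(nominal, stretches)]
print([round(f, 3) for f in fstar])   # [1.333, 1.25, 1.167, 1.125]
```

Plotting `fstar` against `1/stretch` recovers the straight line 1 + 0.5/λ exactly, as it must for data generated from the model; real rubber data deviate from the line in the strain-hardening and limiting-chain regimes.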
Numerical simulation of liquid-layer breakup on a moving wall due to an impinging jet
NASA Astrophysics Data System (ADS)
Yu, Taejong; Moon, Hojoon; You, Donghyun; Kim, Dokyun; Ovsyannikov, Andrey
2014-11-01
Jet wiping, a hydrodynamic method for controlling the liquid film thickness in coating processes, is constrained by a rather violent film instability called splashing. The instability is characterized by the ejection of droplets from the runback flow and results in an explosion of the film. The splashing phenomenon degrades the final coating quality. In the present research, a volume-of-fluid (VOF)-based method developed at Cascade Technologies is employed to simulate the air-liquid multiphase flow dynamics. The present numerical method is based on an unstructured-grid unsplit geometric VOF scheme and guarantees strict conservation of mass of the two-phase flow. The simulation results are compared with experimental measurements such as the liquid-film thickness before and after the jet wiping, and the wall pressure and shear stress distributions. The trajectories of liquid droplets due to the fluid motion entrained by the gas-jet operation are also qualitatively compared with experimental visualization. Physical phenomena observed during the liquid-layer breakup due to an impinging jet are characterized in order to develop ideas for controlling the liquid-layer instability and the resulting splash generation and propagation. Supported by the Grant NRF-2012R1A1A2003699, the Brain Korea 21+ program, POSCO, and the 2014 CTR Summer Program.
Innovative hybrid pile oscillator technique in the Minerve reactor: open loop vs. closed loop
NASA Astrophysics Data System (ADS)
Geslot, Benoit; Gruel, Adrien; Bréaud, Stéphane; Leconte, Pierre; Blaise, Patrick
2018-01-01
Pile oscillator techniques are powerful methods to measure the small reactivity worth of isotopes of interest for nuclear data improvement. Such experiments have long been performed in the Minerve experimental reactor, operated by CEA Cadarache. A hybrid technique, mixing reactivity worth estimation and measurement of small flux changes around test samples, is presented here. It was made possible by the development of high-sensitivity miniature fission chambers introduced next to the irradiation channel. A test campaign, called MAESTRO-SL, took place in 2015. Its objective was to assess the feasibility of the hybrid method and investigate the possibility of separating mixed neutron effects, such as fission/capture or scattering/capture. Experimental results are presented and discussed in this paper, which focuses on comparing two measurement setups, one using a power control system (closed loop) and another where the power is free to drift (open loop). First, it is demonstrated that the open loop is equivalent to the closed loop. Uncertainty management and method reproducibility are discussed. Second, results show that measuring the flux depression around oscillated samples provides valuable information regarding partial neutron cross sections. The technique is found to be very sensitive to the capture cross section at the expense of scattering, making it very useful to measure small capture effects of highly scattering samples.
PVP-SVM: Sequence-Based Prediction of Phage Virion Proteins Using a Support Vector Machine
Manavalan, Balachandran; Shin, Tae H.; Lee, Gwang
2018-01-01
Accurately identifying bacteriophage virion proteins from uncharacterized sequences is important to understand interactions between the phage and its host bacteria in order to develop new antibacterial drugs. However, identification of such proteins using experimental techniques is expensive and often time consuming; hence, development of an efficient computational algorithm for the prediction of phage virion proteins (PVPs) prior to in vitro experimentation is needed. Here, we describe a support vector machine (SVM)-based PVP predictor, called PVP-SVM, which was trained with 136 optimal features. A feature selection protocol was employed to identify the optimal features from a large set that included amino acid composition, dipeptide composition, atomic composition, physicochemical properties, and chain-transition-distribution. PVP-SVM achieved an accuracy of 0.870 during leave-one-out cross-validation, which was 6% higher than control SVM predictors trained with all features, indicating the efficiency of the feature selection method. Furthermore, PVP-SVM displayed superior performance compared to the currently available method, PVPred, and two other machine-learning methods developed in this study when objectively evaluated with an independent dataset. For the convenience of the scientific community, a user-friendly and publicly accessible web server has been established at www.thegleelab.org/PVP-SVM/PVP-SVM.html. PMID:29616000
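One of the feature groups named above, amino acid composition, is straightforward to compute: the fraction of each of the 20 standard residues in a sequence. A minimal sketch; the residue ordering and the toy sequence are assumptions, not details from the paper.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # assumed ordering of the 20 standard residues

def aac(sequence):
    """Amino acid composition: 20 residue frequencies summing to 1
    for a sequence of standard residues."""
    seq = sequence.upper()
    return [seq.count(a) / len(seq) for a in AMINO_ACIDS]

features = aac("MKTAYIAKQR")           # toy 10-residue sequence
print(len(features), round(sum(features), 6))   # 20 1.0
```

Vectors like this one, concatenated with the other feature groups (dipeptide composition, physicochemical properties, and so on), form the input from which the feature selection protocol picks the 136 optimal features fed to the SVM.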
AUC-Maximized Deep Convolutional Neural Fields for Protein Sequence Labeling.
Wang, Sheng; Sun, Siqi; Xu, Jinbo
2016-09-01
Deep Convolutional Neural Networks (DCNNs) have shown excellent performance in a variety of machine learning tasks. This paper presents Deep Convolutional Neural Fields (DeepCNF), an integration of DCNN with Conditional Random Fields (CRF), for sequence labeling with an imbalanced label distribution. The widely used training methods, such as maximum-likelihood and maximum labelwise accuracy, do not work well on imbalanced data. To handle this, we present a new training algorithm called maximum-AUC for DeepCNF. That is, we train DeepCNF by directly maximizing the empirical Area Under the ROC Curve (AUC), which is an unbiased measurement for imbalanced data. To do so, we formulate AUC in a pairwise ranking framework, approximate it by a polynomial function, and then apply a gradient-based procedure to optimize it. Our experimental results confirm that maximum-AUC greatly outperforms the other two training methods on 8-state secondary structure prediction and disorder prediction, since their label distributions are highly imbalanced, and performs comparably to them on solvent accessibility prediction, which has three equally distributed labels. Furthermore, our AUC-trained DeepCNF models greatly outperform existing popular predictors on all three tasks. The data and software related to this paper are available at https://github.com/realbigws/DeepCNF_AUC. PMID:28884168
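The core idea of the maximum-AUC objective, counting correctly ranked positive/negative pairs and then smoothing the step function so it can be differentiated, can be sketched as follows. This is an illustrative NumPy sketch, not the DeepCNF implementation; it uses a sigmoid where the paper uses a polynomial approximation:

```python
import numpy as np

def empirical_auc(scores, labels):
    """Empirical AUC: fraction of (positive, negative) pairs ranked correctly,
    with ties counted as half. This is the quantity maximum-AUC targets."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diffs = pos[:, None] - neg[None, :]          # all pairwise score differences
    return ((diffs > 0).sum() + 0.5 * (diffs == 0).sum()) / diffs.size

def smooth_auc(scores, labels):
    """Differentiable surrogate: the 0/1 step over pairwise differences is
    replaced by a sigmoid, so a gradient-based optimizer can be applied."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diffs = pos[:, None] - neg[None, :]
    return float(np.mean(1.0 / (1.0 + np.exp(-diffs))))
```

Training then consists of taking gradients of the surrogate with respect to the model parameters that produce the scores.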
Vertical transmission of learned signatures in a wild parrot
Berg, Karl S.; Delgado, Soraya; Cortopassi, Kathryn A.; Beissinger, Steven R.; Bradbury, Jack W.
2012-01-01
Learned birdsong is a widely used animal model for understanding the acquisition of human speech. Male songbirds often learn songs from adult males during sensitive periods early in life, and sing to attract mates and defend territories. In presumably all of the 350+ parrot species, individuals of both sexes commonly learn vocal signals throughout life to satisfy a wide variety of social functions. Despite intriguing parallels with humans, there have been no experimental studies demonstrating learned vocal production in wild parrots. We studied contact call learning in video-rigged nests of a well-known marked population of green-rumped parrotlets (Forpus passerinus) in Venezuela. Both sexes of naive nestlings developed individually unique contact calls in the nest, and we demonstrate experimentally that signature attributes are learned from both primary care-givers. This represents the first experimental evidence for the mechanisms underlying the transmission of a socially acquired trait in a wild parrot population. PMID:21752824
Initial experiments in thrusterless locomotion control of a free-flying robot
NASA Technical Reports Server (NTRS)
Jasper, W. J.; Cannon, R. H., Jr.
1990-01-01
A two-arm free-flying robot has been constructed to study thrusterless locomotion in space. This is accomplished by pushing off or landing on a large structure in a coordinated two-arm maneuver. A new control method, called system momentum control, allows the robot to follow desired momentum trajectories and thus leap or crawl from one structure to another. The robot floats on an air-cushion, simulating in two dimensions the drag-free zero-g environment of space. The control paradigm has been verified experimentally by commanding the robot to push off a bar with both arms, rotate 180 degrees, and catch itself on another bar.
Experience of superheat of solutions: doubly metastable systems
NASA Astrophysics Data System (ADS)
Skripov, P. V.
2017-11-01
The phenomenon of attainable superheat of two-component mixtures has been studied experimentally by the method of pulse heating of a wire probe. Special attention was called to the appearance of double metastability in the course of heating. Besides the usual superheating with respect to the liquid-vapor equilibrium temperature, the objects under study turn out to be supersaturated with respect to the carbon dioxide content. Preliminary experiments were carried out in the region of instability located above the diffusion spinodal. The results obtained lead to the choice of the program of further research on doubly metastable and unstable systems with different degrees of component compatibility.
A new approach of watermarking technique by means of multichannel wavelet functions
NASA Astrophysics Data System (ADS)
Agreste, Santa; Puccio, Luigia
2012-12-01
Digital piracy involving images, music, movies, books, and so on is a legal problem for which no solution has yet been found. It is therefore crucial to create and develop methods and numerical algorithms to solve copyright problems. In this paper we focus on a new watermarking technique applied to digital color images. We describe a watermarking algorithm based on multichannel wavelet functions with multiplicity r = 3, called MCWM 1.0, and report extensive experiments and numerical results showing the robustness of the proposed algorithm to geometrical attacks.
Enhanced backscatter of optical beams reflected in turbulent air.
Nelson, W; Palastro, J P; Wu, C; Davis, C C
2015-07-01
Optical beams propagating through air acquire phase distortions from turbulent fluctuations in the refractive index. While these distortions are usually deleterious to propagation, beams reflected in a turbulent medium can undergo a local recovery of spatial coherence and intensity enhancement referred to as enhanced backscatter (EBS). Here we validate the commonly used phase screen simulation with experimental results obtained from lab-scale experiments. We also verify theoretical predictions of the dependence of EBS on turbulence strength. Finally, we present a novel algorithm called the "tilt-shift method" which allows detection of EBS in frozen turbulence, reducing the time required to detect the EBS signal.
An Eigensystem Realization Algorithm (ERA) for modal parameter identification and model reduction
NASA Technical Reports Server (NTRS)
Juang, J. N.; Pappa, R. S.
1985-01-01
A method, called the Eigensystem Realization Algorithm (ERA), is developed for modal parameter identification and model reduction of dynamic systems from test data. A new approach is introduced in conjunction with the singular value decomposition technique to derive the basic formulation of minimum order realization which is an extended version of the Ho-Kalman algorithm. The basic formulation is then transformed into modal space for modal parameter identification. Two accuracy indicators are developed to quantitatively identify the system modes and noise modes. For illustration of the algorithm, examples are shown using simulation data and experimental data for a rectangular grid structure.
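The basic ERA formulation described above (a Hankel matrix of Markov parameters, SVD truncation, minimum-order realization) can be sketched for single-input single-output data as follows. This is a minimal illustration, not the NASA implementation; the accuracy indicators are omitted:

```python
import numpy as np

def era(markov, r, m=None):
    """Minimal ERA sketch for SISO data.

    markov : impulse-response (Markov) parameters y_1, y_2, ...
    r      : desired model order
    Returns a reduced-order realization (A, B, C).
    """
    if m is None:
        m = (len(markov) - 1) // 2
    # Hankel matrix H0 and its one-step-shifted counterpart H1
    H0 = np.array([[markov[i + j] for j in range(m)] for i in range(m)])
    H1 = np.array([[markov[i + j + 1] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]            # truncate to order r
    S_sqrt = np.diag(np.sqrt(s))
    S_inv_sqrt = np.diag(1.0 / np.sqrt(s))
    A = S_inv_sqrt @ U.T @ H1 @ Vt.T @ S_inv_sqrt    # system matrix
    B = (S_sqrt @ Vt)[:, :1]                         # input matrix (first column)
    C = (U @ S_sqrt)[:1, :]                          # output matrix (first row)
    return A, B, C
```

For a first-order system with pole 0.9, for example, the identified A matrix recovers the pole and C @ B recovers the first Markov parameter.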
Electronic properties of a molecular system with Platinum
NASA Astrophysics Data System (ADS)
Ojeda, J. H.; Medina, F. G.; Becerra-Alonso, David
2017-10-01
The electronic properties of a finite homogeneous molecule called trans-platinum-linked oligo(tetraethenylethenes) are studied. The system is composed of units such as benzene rings, platinum, phosphorus, and sulfur. Electron transport through this system is studied by placing the molecule between metal contacts to control the current through the molecular system. We model the molecule with the tight-binding approach and calculate the transport properties using the Landauer-Büttiker formalism and the Fisher-Lee relationship, based on a semi-analytic Green's function method within a real-space renormalization approach. Our results show significant agreement with experimental measurements.
Efficient continuous-variable state tomography using Padua points
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.
Further development of quantum technologies calls for efficient characterization methods for quantum systems. While recent work has focused on discrete systems of qubits, much remains to be done for continuous-variable systems such as a microwave mode in a cavity. We introduce a novel technique to reconstruct the full Husimi Q or Wigner function from measurements done at the Padua points in phase space, the optimal sampling points for interpolation in 2D. Our technique not only reduces the number of experimental measurements, but remarkably, also allows for the direct estimation of any density matrix element in the Fock basis, including off-diagonal elements. OLC acknowledges financial support from NSERC.
Impulse Noise Cancellation of Medical Images Using Wavelet Networks and Median Filters
Sadri, Amir Reza; Zekri, Maryam; Sadri, Saeid; Gheissari, Niloofar
2012-01-01
This paper presents a new two-stage approach to impulse noise removal for medical images based on a wavelet network (WN). The first stage is noise detection, in which the so-called gray-level difference and average background difference are taken as the inputs of a WN; the WN thus serves as preprocessing for the second stage. The second stage removes the impulse noise with a median filter. The wavelet network presented here is fixed, without learning. Experimental results show that our method suppresses impulse noise effectively while preserving chromaticity and image details very well. PMID:23493998
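The two-stage idea, detect suspicious pixels first, then median-filter only those, can be sketched without the wavelet-network detector (replaced here by a simple local-difference test; the threshold value is illustrative, not from the paper):

```python
import numpy as np

def remove_impulse_noise(img, threshold=40):
    """Two-stage sketch: flag pixels far from their 3x3 local median (detection),
    then replace only the flagged pixels with that median (filtering).
    Unflagged pixels are left untouched, which preserves image detail."""
    padded = np.pad(img, 1, mode="edge")
    out = img.copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]     # 3x3 neighborhood of (i, j)
            med = np.median(window)
            if abs(float(img[i, j]) - float(med)) > threshold:  # impulse detected
                out[i, j] = med
    return out
```

A salt pixel in a flat region is replaced, while clean pixels pass through unchanged.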
Spotting effect in microarray experiments
Mary-Huard, Tristan; Daudin, Jean-Jacques; Robin, Stéphane; Bitton, Frédérique; Cabannes, Eric; Hilson, Pierre
2004-01-01
Background Microarray data must be normalized because they suffer from multiple biases. We have identified a source of spatial experimental variability that significantly affects data obtained with Cy3/Cy5 spotted glass arrays. It yields a periodic pattern altering both signal (Cy3/Cy5 ratio) and intensity across the array. Results Using the variogram, a geostatistical tool, we characterized the observed variability, called here the spotting effect because it most probably arises during steps in the array printing procedure. Conclusions The spotting effect is not appropriately corrected by current normalization methods, even by those addressing spatial variability. Importantly, the spotting effect may alter differential and clustering analysis. PMID:15151695
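The variogram used here is the standard geostatistical semivariogram: half the mean squared difference between values at a given separation distance. A 1-D sketch (illustrative, not the authors' code):

```python
import numpy as np

def empirical_variogram(values, positions, lags, tol=0.5):
    """Empirical semivariogram gamma(h) = mean of (z_i - z_j)^2 / 2 over all
    pairs whose separation falls within `tol` of each lag h (1-D sketch).
    A periodic spatial pattern shows up as oscillation of gamma with h."""
    values = np.asarray(values, dtype=float)
    positions = np.asarray(positions, dtype=float)
    gamma = []
    for h in lags:
        sq = []
        for i in range(len(values)):
            for j in range(i + 1, len(values)):
                if abs(abs(positions[i] - positions[j]) - h) <= tol:
                    sq.append((values[i] - values[j]) ** 2)
        gamma.append(0.5 * np.mean(sq) if sq else np.nan)
    return gamma
```

For a strictly alternating signal, the semivariogram is high at lag 1 and drops to zero at lag 2, exactly the kind of periodic signature described above.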
Breaking chaotic secure communication using a spectrogram
NASA Astrophysics Data System (ADS)
Yang, Tao; Yang, Lin-Bao; Yang, Chun-Mei
1998-10-01
We present the results of breaking a kind of chaotic secure communication system called the chaotic switching scheme, also known as chaotic shift keying, in which a binary message signal is scrambled by two chaotic attractors. The spectrogram, which reveals the energy evolution in the spectral-temporal plane, is used to distinguish the two chaotic attractors, which are qualitatively and statistically similar in phase space. Mathematical morphological filters are then used to decode the binary message signal without knowledge of the message or the transmitter. Computer experimental results show how our method works when either a chaotic or a hyper-chaotic transmitter is used.
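A spectrogram of the kind used here is just the magnitude of a short-time Fourier transform. A minimal NumPy sketch (window and hop sizes are illustrative):

```python
import numpy as np

def spectrogram(signal, win=64, hop=32):
    """Magnitude STFT: energy laid out in the spectral-temporal plane,
    which is what distinguishes two attractors with similar phase portraits."""
    window = np.hanning(win)
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win] * window   # windowed segment
        frames.append(np.abs(np.fft.rfft(seg)))    # magnitude spectrum
    return np.array(frames).T   # rows: frequency bins, cols: time frames
```

A pure tone at 8 cycles per window shows its energy concentrated in frequency bin 8 of every frame.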
Awareness Effects of a Youth Suicide Prevention Media Campaign in Louisiana
ERIC Educational Resources Information Center
Jenner, Eric; Jenner, Lynne Woodward; Matthews-Sterling, Maya; Butts, Jessica K.; Williams, Trina Evans
2010-01-01
Research on the efficacy of mediated suicide awareness campaigns is limited. The impacts of a state-wide media campaign on call volumes to a national hotline were analyzed to determine if the advertisements have raised awareness of the hotline. We use a quasi-experimental design to compare call volumes from ZIP codes where and when the campaign is…
ERIC Educational Resources Information Center
Marcano Lárez, Beatriz Elena
2014-01-01
War videogames raise a lot of controversy in the educational field and are by far the most played videogames worldwide. This study explores the factors that encouraged gamers to choose war videogames with a sample of 387 Call of Duty players. The motivational factors were pinpointed using a non-experimental descriptive exploratory study through an…
Predicting Drug-Target Interactions With Multi-Information Fusion.
Peng, Lihong; Liao, Bo; Zhu, Wen; Li, Zejun; Li, Keqin
2017-03-01
Identifying potential associations between drugs and targets is a critical prerequisite for modern drug discovery and repurposing. However, predicting these associations is difficult because of the limitations of existing computational methods. Most models consider only chemical structures and protein sequences, and other models are oversimplified. Moreover, datasets used for analysis contain only true-positive interactions, and experimentally validated negative samples are unavailable. To overcome these limitations, we developed a semi-supervised learning framework called NormMulInf, based on collaborative filtering theory, that uses labeled and unlabeled interaction information. The proposed method first determines similarity measures, such as similarities among samples and local correlations among the labels of the samples, by integrating biological information. The similarity information is then integrated into a robust principal component analysis model, which is solved using augmented Lagrange multipliers. Experimental results on four classes of drug-target interaction networks suggest that the proposed approach can accurately classify and predict drug-target interactions. Some of the predicted interactions have been reported in public databases. The proposed method can also predict possible targets for new drugs; it suggests that atropine may interact with alpha1B- and beta1-adrenergic receptors. Furthermore, the developed technique identifies potential drugs for new targets and suggests that olanzapine and propiomazine may target 5HT2B. Finally, the proposed method can potentially address limitations in studies of multitarget drugs and multidrug targets.
Impact sensitivity test of liquid energetic materials
NASA Astrophysics Data System (ADS)
Tiutiaev, A.; Dolzhikov, A.; Zvereva, I.
2017-10-01
This paper presents a new experimental method for evaluating impact sensitivity. A large number of studies have shown that the probability of initiating an explosion in a liquid explosive by impact depends on its chemical nature and on various external conditions. The sensitivity of a liquid explosive containing gas bubbles, however, is many times greater than that of the same liquid without bubbles, because local reaction foci form as the gas inside the bubbles is compressed and heated. Since gas bubbles are easily generated in a liquid by convection, wave motion, shock, and so on, it is necessary to develop methods for determining the impact sensitivity of liquid explosives and to study the ignition of explosives containing bubbles. For the experimental investigation, the well-known impact machine and the so-called appliance 1 were used. Instead of the metal cup of the standard method, a polyurethane foam cylindrical container holding the liquid explosive was used; the container deforms easily on impact. A large number of tests with different liquid explosives were performed. Testing liquid explosives in appliance 1 with polyurethane foam was found to reflect the real mechanical sensitivity more closely, owing to the small loss of impact energy to deformation of the metal cup, and to differentiate liquid explosive sensitivities better, owing to the method's higher resolution.
ERIC Educational Resources Information Center
Levy, Mike
2015-01-01
The article considers the role of qualitative research methods in CALL through describing a series of examples. These examples are used to highlight the importance and value of qualitative data in relation to a specific research objective in CALL. The use of qualitative methods in conjunction with other approaches as in mixed method research…
Acoustic signals of baby black caimans.
Vergne, Amélie L; Aubin, Thierry; Taylor, Peter; Mathevon, Nicolas
2011-12-01
Despite the importance of crocodilian vocalizations for understanding the evolution of sound communication in Archosauria, and owing to the small number of experimental investigations, information concerning the vocal world of crocodilians is limited. By studying black caimans Melanosuchus niger in their natural habitat, here we supply experimental evidence that juvenile crocodilians can use a graded sound system to elicit adapted behavioral responses from their mother and siblings. By analyzing the acoustic structure of calls emitted in two different situations (an 'undisturbed context', during which spontaneous calls of juvenile caimans were recorded without perturbing the group, and a simulated 'predator attack', during which calls were recorded while shaking juveniles) and by testing their biological relevance through playback experiments, we reveal the existence of two functionally different types of juvenile calls that produce different responses from the mother and other siblings. Young black caimans can thus modulate the structure of their vocalizations along an acoustic continuum as a function of the emission context. Playback experiments show that both mother and juveniles discriminate between these 'distress' and 'contact' calls. Acoustic communication is thus an important component mediating relationships within family groups in caimans, as it is in birds, their archosaurian relatives. Although probably limited, the vocal repertoire of young crocodilians is capable of transmitting the information necessary for allowing siblings and mother to modulate their behavior. Copyright © 2011 Elsevier GmbH. All rights reserved.
Vocal Learning via Social Reinforcement by Infant Marmoset Monkeys.
Takahashi, Daniel Y; Liao, Diana A; Ghazanfar, Asif A
2017-06-19
For over half a century now, primate vocalizations have been thought to undergo little or no experience-dependent acoustic changes during development [1]. If any changes are apparent, then they are routinely (and quite reasonably) attributed to the passive consequences of growth. Indeed, previous experiments on squirrel monkeys and macaque monkeys showed that social isolation [2, 3], deafness [2], cross-fostering [4] and parental absence [5] have little or no effect on vocal development. Here, we explicitly test in marmoset monkeys-a very vocal and cooperatively breeding species [6]-whether the transformation of immature into mature contact calls by infants is influenced by contingent parental vocal feedback. Using a closed-loop design, we experimentally provided more versus less contingent vocal feedback to twin infant marmoset monkeys over their first 2 months of life, the interval during which their contact calls transform from noisy, immature calls to tonal adult-like "phee" calls [7, 8]. Infants who received more contingent feedback had a faster rate of vocal development, producing mature-sounding contact calls earlier than the other twin. The differential rate of vocal development was not linked to genetics, perinatal experience, or body growth; nor did the amount of contingency influence the overall rate of spontaneous vocal production. Thus, we provide the first experimental evidence for production-related vocal learning during the development of a nonhuman primate. Copyright © 2017 Elsevier Ltd. All rights reserved.
MIANN models in medicinal, physical and organic chemistry.
González-Díaz, Humberto; Arrasate, Sonia; Sotomayor, Nuria; Lete, Esther; Munteanu, Cristian R; Pazos, Alejandro; Besada-Porto, Lina; Ruso, Juan M
2013-01-01
Reducing costs in terms of time, animal sacrifice, and material resources with computational methods has become a promising goal in Medicinal, Biological, Physical and Organic Chemistry. There are many computational techniques that can be used in this sense. In any case, almost all these methods focus on a few fundamental aspects, including: type (1) methods to quantify the molecular structure, type (2) methods to link the structure with the biological activity, and others. In particular, MARCH-INSIDE (MI), an acronym for Markov Chain Invariants for Networks Simulation and Design, is a well-known method for QSAR analysis useful for type (1). In addition, the bio-inspired Artificial Intelligence (AI) algorithms called Artificial Neural Networks (ANNs) are among the most powerful type (2) methods. We can combine MI with ANNs to seek QSAR models, a strategy called herein MIANN (MI & ANN models). One of the first applications of the MIANN strategy was in the development of new QSAR models for drug discovery. The MIANN strategy has since been expanded to the QSAR study of proteins, protein-drug interactions, and protein-protein interaction networks. In this paper, we review for the first time many interesting aspects of the MIANN strategy, including its theoretical basis, implementation in web servers, and examples of applications in Medicinal and Biological chemistry. We also report new applications of the MIANN strategy in Medicinal chemistry and the first examples in Physical and Organic Chemistry as well. In doing so, we developed new MIANN models for several self-assembly physicochemical properties of surfactants and for large reaction networks in organic synthesis. In some of the new examples we also present experimental results that have not been published to date.
The Use of Virtual Reality in the Study of People's Responses to Violent Incidents.
Rovira, Aitor; Swapp, David; Spanlang, Bernhard; Slater, Mel
2009-01-01
This paper reviews experimental methods for the study of the responses of people to violence in digital media, and in particular considers the issues of internal validity and ecological validity or generalisability of results to events in the real world. Experimental methods typically involve a significant level of abstraction from reality, with participants required to carry out tasks that are far removed from violence in real life, and hence their ecological validity is questionable. On the other hand studies based on field data, while having ecological validity, cannot control multiple confounding variables that may have an impact on observed results, so that their internal validity is questionable. It is argued that immersive virtual reality may provide a unification of these two approaches. Since people tend to respond realistically to situations and events that occur in virtual reality, and since virtual reality simulations can be completely controlled for experimental purposes, studies of responses to violence within virtual reality are likely to have both ecological and internal validity. This depends on a property that we call 'plausibility' - including the fidelity of the depicted situation with prior knowledge and expectations. We illustrate this with data from a previously published experiment, a virtual reprise of Stanley Milgram's 1960s obedience experiment, and also with pilot data from a new study being developed that looks at bystander responses to violent incidents. PMID:20076762
Boosting compound-protein interaction prediction by deep learning.
Tian, Kai; Shao, Mingyu; Wang, Yang; Guan, Jihong; Zhou, Shuigeng
2016-11-01
The identification of interactions between compounds and proteins plays an important role in network pharmacology and drug discovery. However, experimentally identifying compound-protein interactions (CPIs) is generally expensive and time-consuming, so computational approaches have been introduced. Among these, machine-learning-based methods have achieved considerable success. However, due to the nonlinear and imbalanced nature of biological data, many machine learning approaches have their own limitations. Recently, deep learning techniques have shown advantages over many state-of-the-art machine learning methods in some applications. In this study, we aim at improving the performance of CPI prediction based on deep learning, and propose a method called DL-CPI (Deep Learning for Compound-Protein Interaction prediction), which employs a deep neural network (DNN) to effectively learn representations of compound-protein pairs. Extensive experiments show that DL-CPI can learn useful features of compound-protein pairs by layerwise abstraction, and thus achieves better prediction performance than existing methods on both balanced and imbalanced datasets. Copyright © 2016 Elsevier Inc. All rights reserved.
Wide baseline stereo matching based on double topological relationship consistency
NASA Astrophysics Data System (ADS)
Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang
2009-07-01
Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo matching. A novel scheme is presented called double topological relationship consistency (DCTR), combining the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only establishes a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, overcoming many problems of traditional methods thanks to its strong invariance to changes in scale, rotation, or illumination across large view changes and even occlusions. Experimental examples are shown in which the two cameras are located in very different orientations. Epipolar geometry can also be recovered using RANSAC, by far the most widely adopted method. With this method, we obtain high-precision correspondences on wide-baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on the image pairs.
Multiple Active Contours Guided by Differential Evolution for Medical Image Segmentation
Cruz-Aceves, I.; Avina-Cervantes, J. G.; Lopez-Hernandez, J. M.; Rostro-Gonzalez, H.; Garcia-Capulin, C. H.; Torres-Cisneros, M.; Guzman-Cabrera, R.
2013-01-01
This paper presents a new image segmentation method based on multiple active contours guided by differential evolution, called MACDE. The segmentation method uses differential evolution over a polar coordinate system to increase the exploration and exploitation capabilities regarding the classical active contour model. To evaluate the performance of the proposed method, a set of synthetic images with complex objects, Gaussian noise, and deep concavities is introduced. Subsequently, MACDE is applied on datasets of sequential computed tomography and magnetic resonance images which contain the human heart and the human left ventricle, respectively. Finally, to obtain a quantitative and qualitative evaluation of the medical image segmentations compared to regions outlined by experts, a set of distance and similarity metrics has been adopted. According to the experimental results, MACDE outperforms the classical active contour model and the interactive Tseng method in terms of efficiency and robustness for obtaining the optimal control points and attains a high accuracy segmentation. PMID:23983809
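Differential evolution itself, the optimizer driving the contours, is compact enough to sketch. The following is a generic DE/rand/1/bin minimizer in Python, not the MACDE code, and it omits the polar-coordinate contour encoding:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=300, seed=1):
    """Minimal DE/rand/1/bin sketch: mutate with the scaled difference of two
    random members, binomial crossover, greedy selection."""
    random.seed(seed)
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = random.sample([k for k in range(pop_size) if k != i], 3)
            jrand = random.randrange(dim)          # force at least one mutated gene
            trial = [
                min(max(pop[a][j] + F * (pop[b][j] - pop[c][j]), bounds[j][0]), bounds[j][1])
                if (random.random() < CR or j == jrand) else pop[i][j]
                for j in range(dim)
            ]
            fc = f(trial)
            if fc <= cost[i]:                      # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]
```

In MACDE the decision variables would be the radial positions of contour control points and `f` an image-energy functional; here any cost function can be plugged in.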
Spectral Regression Discriminant Analysis for Hyperspectral Image Classification
NASA Astrophysics Data System (ADS)
Pan, Y.; Wu, J.; Huang, H.; Liu, J.
2012-08-01
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for hyperspectral image classification. Manifold learning methods such as Locally Linear Embedding, Isomap, and Laplacian Eigenmap are popular for dimensionality reduction. However, a disadvantage of many manifold learning methods is that their computations usually involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we introduce a new dimensionality reduction method, called Spectral Regression Discriminant Analysis (SRDA). SRDA casts the problem of learning an embedding function into a regression framework, which avoids eigen-decomposition of dense matrices. Also, with the regression-based framework, different kinds of regularizers can be naturally incorporated into our algorithm, which makes it more flexible. It can make efficient use of data points to discover the intrinsic discriminant structure in the data. Experimental results on the Washington DC Mall and AVIRIS Indian Pines hyperspectral data sets demonstrate the effectiveness of the proposed method.
Color image segmentation with support vector machines: applications to road signs detection.
Cyganek, Bogusław
2008-08-01
In this paper we propose an efficient color segmentation method based on a Support Vector Machine classifier operating in one-class mode. The method was developed especially for a road sign recognition system, although it can be used in other applications. The main advantage of the proposed method is that segmentation of characteristic colors is performed not in the original space but in a higher-dimensional feature space, where better encapsulation of the data by a linear hypersphere can usually be achieved. Moreover, the classifier does not try to capture the whole distribution of the input data, which is often difficult to achieve. Instead, characteristic data samples, called support vectors, are selected, which allow construction of the tightest hypersphere enclosing the majority of the input data. Classification of a test sample then simply consists of measuring its distance to the centre of the found hypersphere. The experimental results show high accuracy and speed of the proposed method.
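The classify-by-distance-to-a-hypersphere idea can be illustrated with a deliberately crude stand-in: a centroid centre with a quantile radius instead of the SVDD-style optimized tightest hypersphere the paper uses. This sketch omits the kernel feature space entirely and is only meant to show the decision rule.

```python
import numpy as np

def fit_hypersphere(samples, keep=0.95):
    """Crude one-class model: centroid centre, radius chosen so that
    `keep` of the training data falls inside (not the paper's SVDD)."""
    centre = samples.mean(axis=0)
    dists = np.linalg.norm(samples - centre, axis=1)
    return centre, np.quantile(dists, keep)

def is_characteristic(x, centre, radius):
    # classification is just a distance test against the sphere
    return np.linalg.norm(x - centre) <= radius
```

In the paper the sphere lives in a kernel-induced feature space and its centre is a weighted sum of support vectors; the test at prediction time is the same distance comparison.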
NASA Astrophysics Data System (ADS)
Nakamura, Yoshihiko; Nimura, Yukitaka; Kitasaka, Takayuki; Mizuno, Shinji; Furukawa, Kazuhiro; Goto, Hidemi; Fujiwara, Michitaka; Misawa, Kazunari; Ito, Masaaki; Nawano, Shigeru; Mori, Kensaku
2013-03-01
This paper presents an automated method of abdominal lymph node detection to aid the preoperative diagnosis of abdominal cancer surgery. In abdominal cancer surgery, surgeons must resect not only tumors and metastases but also lymph nodes that might have a metastasis. This procedure is called lymphadenectomy or lymph node dissection. Insufficient lymphadenectomy carries a high risk for relapse. However, excessive resection decreases a patient's quality of life. Therefore, it is important to identify the location and the structure of lymph nodes to make a suitable surgical plan. The proposed method consists of candidate lymph node detection and false positive reduction. Candidate lymph nodes are detected using a multi-scale blob-like enhancement filter based on local intensity structure analysis. To reduce false positives, the proposed method uses a classifier based on support vector machine with the texture and shape information. The experimental results reveal that it detects 70.5% of the lymph nodes with 13.0 false positives per case.
An Accurate Framework for Arbitrary View Pedestrian Detection in Images
NASA Astrophysics Data System (ADS)
Fan, Y.; Wen, G.; Qiu, S.
2018-01-01
We consider the problem of detecting pedestrians in images collected from various viewpoints. This paper utilizes a novel framework called locality-constrained affine subspace coding (LASC). First, the positive training samples are clustered into groups representing similar viewpoints. Then Principal Component Analysis (PCA) is used to obtain the shared features of each viewpoint. Finally, samples that can be reconstructed by linear approximation from their top-k nearest shared features with a small error are regarded as correct detections. No negative samples are required by our method. Histograms of oriented gradients (HOG) are used as the feature descriptors, and a sliding-window scheme is adopted to detect humans in images. The proposed method exploits the sparsity of the intrinsic information and the correlations among multi-view samples. Experimental results on the INRIA and SDL human datasets show that the proposed method outperforms state-of-the-art methods in both effectiveness and efficiency.
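The accept-if-reconstructible test at the heart of this pipeline can be sketched with per-cluster PCA: fit an affine subspace to one viewpoint cluster, then score a candidate window by its reconstruction error. A sketch under stated assumptions, not the authors' full LASC with locality constraints.

```python
import numpy as np

def shared_feature(X, k):
    """Per-viewpoint PCA: cluster mean plus top-k principal directions."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def reconstruction_error(x, mean, basis):
    """Distance from x to the affine subspace; a small error means the
    sample is well explained by this viewpoint's shared feature."""
    centred = x - mean
    resid = centred - (centred @ basis.T) @ basis
    return np.linalg.norm(resid)
```

A detection rule in this spirit would compare the error against a threshold for the top-k nearest viewpoint subspaces, needing no negative training samples.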
A new approach to characterize very-low-level radioactive waste produced at hadron accelerators.
Zaffora, Biagio; Magistris, Matteo; Chevalier, Jean-Pierre; Luccioni, Catherine; Saporta, Gilbert; Ulrici, Luisa
2017-04-01
Radioactive waste is produced as a consequence of preventive and corrective maintenance during the operation of high-energy particle accelerators, and of associated dismantling campaigns. This waste must be radiologically characterized to ensure appropriate disposal in disposal facilities. The radiological characterization of waste includes establishing the list of produced radionuclides, called the "radionuclide inventory", and estimating their activities. The present paper describes the process adopted at CERN to characterize very-low-level radioactive waste, with a focus on activated metals. The characterization method consists of measuring and estimating the activity of produced radionuclides either by experimental methods or by statistical and numerical approaches. We adapted the so-called Scaling Factor (SF) and Correlation Factor (CF) techniques to the needs of hadron accelerators and applied them to very-low-level metallic waste produced at CERN. For each type of metal we calculated the radionuclide inventory and identified the radionuclides that contribute most to hazard factors. The proposed methodology is of general validity, can be extended to other activated materials, and can be used for the characterization of waste produced in particle accelerators and research centres where the activation mechanisms are comparable to those occurring at CERN. Copyright © 2017 Elsevier Ltd. All rights reserved.
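The basic arithmetic of a scaling-factor technique is simple: from items where both an easy-to-measure key nuclide and a difficult-to-measure nuclide were determined, derive a ratio (often a geometric mean), then apply it to items where only the key nuclide is measured. A minimal sketch of that idea, not CERN's adapted SF/CF procedure; numbers and names are illustrative.

```python
import math

def scaling_factor(pairs):
    """Geometric-mean scaling factor from measured
    (key_activity, difficult_to_measure_activity) pairs."""
    logs = [math.log(dtm / key) for key, dtm in pairs]
    return math.exp(sum(logs) / len(logs))

def estimate_activity(sf, key_activity):
    # activity of the hard-to-measure nuclide inferred from the key nuclide
    return sf * key_activity
```

The geometric mean is conventional here because activity ratios tend to be log-normally distributed across waste items.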
Group-sparse representation with dictionary learning for medical image denoising and fusion.
Li, Shutao; Yin, Haitao; Fang, Leyuan
2012-12-01
Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure of signals, i.e., that the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation that considers the geometrical structure of the space spanned by the atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of the atoms is modeled as a graph regularization. Then, combining group sparsity and graph regularization, DL-GSGR is presented and solved by alternating group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be made small enough that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.
Estimating 3D topographic map of optic nerve head from a single fundus image
NASA Astrophysics Data System (ADS)
Wang, Peipei; Sun, Jiuai
2018-04-01
The optic nerve head, also called the optic disc, is the distal portion of the optic nerve, located on and clinically visible at the retinal surface. It is a three-dimensional elliptical structure with a central depression called the optic cup. The shape of the optic nerve head and the size of this depression can vary with different retinopathies and angiopathies, so estimating the topography of the optic nerve head is valuable for assisting the diagnosis of related retinal complications. This work describes a computer vision method, shape from shading (SFS), to recover and visualize a 3D topographic map of the optic nerve head from a single normal fundus image. The work is expected to help in assessing complications associated with deformation of the optic nerve head, such as glaucoma and diabetes. The illumination is modelled as uniform over the area around the optic nerve head, and its direction is estimated from the available image. The Tsai discrete method is employed to recover the 3D topographic map of the optic nerve head. Initial experimental results demonstrate that our approach works on most fundus images and provides a cheap but good alternative for rendering and visualizing the topographic information of the optic nerve head for potential clinical use.
rpiCOOL: A tool for In Silico RNA-protein interaction detection using random forest.
Akbaripour-Elahabad, Mohammad; Zahiri, Javad; Rafeh, Reza; Eslami, Morteza; Azari, Mahboobeh
2016-08-07
Understanding the principles of RNA-protein interactions (RPIs) is of critical importance for insight into post-transcriptional gene regulation and is useful in guiding studies of many complex diseases. The limitations and difficulties associated with experimental determination of RPIs create an urgent need for computational methods for RPI prediction. In this paper, we propose a machine learning method to detect RNA-protein interactions based on sequence information. We used motif information and repetitive patterns, extracted from experimentally validated RNA-protein interactions, in combination with sequence composition as descriptors to build a model for RPI prediction via a random forest classifier. About 20% of the "sequence motifs" and "nucleotide composition" features were selected as informative by the feature selection methods, suggesting that these two feature types contribute effectively to RPI detection. Results of 10-fold cross-validation experiments on three non-redundant benchmark datasets show better performance of the proposed method in comparison with the current state-of-the-art methods in terms of various performance measures. In addition, the results reveal that the accuracy of RPI prediction methods can vary considerably across different organisms. We have implemented the proposed method, named rpiCOOL, as a stand-alone tool with a user-friendly graphical user interface (GUI) that enables researchers to predict RNA-protein interactions. rpiCOOL is freely available at http://biocool.ir/rpicool.html for non-commercial use. Copyright © 2016 Elsevier Ltd. All rights reserved.
MRL and SuperFine+MRL: new supertree methods
2012-01-01
Background Supertree methods combine trees on subsets of the full taxon set to produce a tree on the entire set of taxa. Of the many supertree methods, the most popular is MRP (Matrix Representation with Parsimony), a method that operates by first encoding the input set of source trees as a large matrix (the "MRP matrix") over {0, 1, ?}, and then running maximum parsimony heuristics on the MRP matrix. Experimental studies evaluating MRP in comparison to other supertree methods have established that for large datasets, MRP generally produces trees of equal or greater accuracy than other methods, and can run on larger datasets. A recent development in supertree methods is SuperFine+MRP, a method that combines MRP with a divide-and-conquer approach and produces more accurate trees in less time than MRP. In this paper we consider a new approach for supertree estimation, called MRL (Matrix Representation with Likelihood). MRL begins with the same MRP matrix, but then analyzes it using heuristics (such as RAxML) for 2-state Maximum Likelihood. Results We compared MRP and SuperFine+MRP with MRL and SuperFine+MRL on simulated and biological datasets. We examined the MRP and MRL scores of each method on a wide range of datasets, as well as the resulting topological accuracy of the trees. Our experimental results show that MRL, coupled with a very good ML heuristic such as RAxML, produced more accurate trees than MRP, and MRL scores were more strongly correlated with topological accuracy than MRP scores. Conclusions SuperFine+MRP, when based upon a good MP heuristic such as TNT, produces among the best scores for both MRP and MRL, and is generally faster and more topologically accurate than other supertree methods we tested. PMID:22280525
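The MRP encoding itself is mechanical: each internal edge of each source tree contributes one matrix column, with taxa on one side of the edge coded 1, taxa on the other side 0, and taxa absent from that source tree coded ?. A minimal sketch of this encoding, with source trees pre-reduced to their bipartitions for brevity:

```python
def mrp_matrix(taxa, source_trees):
    """Encode source trees as an MRP matrix over {'0', '1', '?'}.
    Each source tree is given as (taxon_set, bipartitions), where each
    bipartition is the set of taxa on one side of an internal edge;
    taxa absent from a source tree are coded '?'. Illustrative sketch."""
    matrix = {t: [] for t in taxa}
    for taxon_set, bipartitions in source_trees:
        for side in bipartitions:
            for t in taxa:
                if t not in taxon_set:
                    matrix[t].append('?')   # taxon missing from this source tree
                else:
                    matrix[t].append('1' if t in side else '0')
    return matrix
```

MRP would hand this matrix to a parsimony heuristic; the MRL variant described here instead hands the same matrix to a 2-state maximum likelihood heuristic.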
Splines and polynomial tools for flatness-based constrained motion planning
NASA Astrophysics Data System (ADS)
Suryawan, Fajar; De Doná, José; Seron, María
2012-08-01
This article addresses the problem of trajectory planning for flat systems with constraints. Flat systems have the useful property that the input and the state can be completely characterised by the so-called flat output. We propose a spline parametrisation for the flat output, the performance output, the states and the inputs. Using this parametrisation, the problem of constrained trajectory planning can be cast as a simple quadratic programming problem. An important result is that the B-spline parametrisation used gives exact results for constrained linear continuous-time systems. The result is exact in the sense that the constrained signal can be made arbitrarily close to the boundary without intersampling issues (as one would have in sampled-data systems). Simulation examples are presented, involving the generation of rest-to-rest trajectories. In addition, an experimental application of the method is presented, in which two methods for generating trajectories for a magnetic-levitation (maglev) system in the presence of constraints are compared and each method's performance is discussed. The first method uses the nonlinear model of the plant, which turns out to belong to the class of flat systems. The second method uses a linearised version of the plant model around an operating point. In every case, a continuous-time description is used. The experimental results on a real maglev system reported here show that, in most scenarios, the nonlinear and linearised models produce almost indistinguishable trajectories.
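For the rest-to-rest case, the essential requirement on the flat output is smoothness with zero derivatives at both endpoints; a quintic polynomial is the classic closed-form way to get that. This is a polynomial stand-in for the article's constrained B-spline parametrisation, shown only to make the rest-to-rest boundary conditions concrete.

```python
def rest_to_rest(y0, yf, T):
    """Quintic rest-to-rest profile for a flat output: zero velocity and
    acceleration at both endpoints (illustrative, not the B-spline QP)."""
    def y(t):
        tau = t / T
        s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5  # smooth 0 -> 1 blend
        return y0 + (yf - y0) * s
    return y
```

In the article the free coefficients of the B-spline play the role the polynomial coefficients play here, and inequality constraints on states and inputs become linear constraints on those coefficients, hence the quadratic program.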
NASA Astrophysics Data System (ADS)
Zheng, Jinde; Pan, Haiyang; Yang, Shubao; Cheng, Junsheng
2018-01-01
Multiscale permutation entropy (MPE) is a recently proposed nonlinear dynamic method for measuring the randomness and detecting nonlinear dynamic changes of time series, and it can be used effectively to extract nonlinear dynamic fault features from vibration signals of rolling bearings. To overcome the drawback of the coarse-graining process in MPE, an improved method called generalized composite multiscale permutation entropy (GCMPE) is proposed in this paper. The influence of its parameters on GCMPE, and its comparison with MPE, are studied by analyzing simulated data. GCMPE is applied to fault feature extraction from rolling-bearing vibration signals, and then, based on GCMPE, the Laplacian score for feature selection, and a particle-swarm-optimized support vector machine, a new fault diagnosis method for rolling bearings is put forward. Finally, the proposed method is applied to analyze experimental rolling-bearing data. The analysis results show that the proposed method can effectively realize fault diagnosis of rolling bearings and has a higher fault recognition rate than existing methods.
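The two ingredients GCMPE builds on, ordinal-pattern entropy and coarse graining, are compact enough to sketch. This shows plain permutation entropy and a single offset series of composite coarse graining; GCMPE proper averages entropies over all offsets at each scale, which is not reproduced here.

```python
import math

def permutation_entropy(x, m=3, delay=1):
    """Normalised permutation entropy (0 = fully ordered, 1 = maximally random)."""
    counts = {}
    n = 0
    for i in range(len(x) - (m - 1) * delay):
        # ordinal pattern: ranking of m successive values
        pattern = tuple(sorted(range(m), key=lambda k: x[i + k * delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
        n += 1
    h = -sum(c / n * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(m))

def coarse_grain(x, scale, offset=0):
    """One offset series of composite coarse graining: non-overlapping
    window means starting at `offset` (GCMPE uses offsets 0..scale-1)."""
    return [sum(x[i:i + scale]) / scale
            for i in range(offset, len(x) - scale + 1, scale)]
```

A multiscale curve is then obtained by computing the (averaged) entropy of the coarse-grained series at each scale, which is the fault feature fed to the classifier.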
Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S
2015-02-09
A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.
Helping Students Make Sense of Graphs: An Experimental Trial of SmartGraphs Software
ERIC Educational Resources Information Center
Zucker, Andrew; Kay, Rachel; Staudt, Carolyn
2014-01-01
Graphs are commonly used in science, mathematics, and social sciences to convey important concepts; yet students at all ages demonstrate difficulties interpreting graphs. This paper reports on an experimental study of free, Web-based software called SmartGraphs that is specifically designed to help students overcome their misconceptions regarding…
Evidence-Based Practices in a Changing World: Reconsidering the Counterfactual in Education Research
ERIC Educational Resources Information Center
Lemons, Christopher J.; Fuchs, Douglas; Gilbert, Jennifer K.; Fuchs, Lynn S.
2014-01-01
Experimental and quasi-experimental designs are used in educational research to establish causality and develop effective practices. These research designs rely on a counterfactual model that, in simple form, calls for a comparison between a treatment group and a control group. Developers of educational practices often assume that the population…
Quantum-tomographic cryptography with a semiconductor single-photon source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaszlikowski, D.; Yang, L.J.; Yong, L.S.
2005-09-15
We analyze the security of so-called quantum-tomographic cryptography with the source producing entangled photons via an experimental scheme proposed by Fattal et al. [Phys. Rev. Lett. 92, 37903 (2004)]. We determine the range of the experimental parameters for which the protocol is secure against the most general incoherent attacks.
NASA Astrophysics Data System (ADS)
Syifahayu
2017-02-01
The study was motivated by problems in teaching and learning science caused by the conventional methods then in use, which gave students few opportunities to develop their competence and thinking skills; as a result, the process of learning science was neglected and students had little opportunity to improve their critical attitudes and creative thinking skills. To address this problem, the study used a Project-Based Learning model through inquiry-based science education about the environment. The study also used an approach called "Saling Temas" (Sains Lingkungan dan Teknologi Masyarakat: environmental science and technology in society), which promoted local content in Lampung as a theme for integrated science teaching and learning. The study was a quasi-experiment with a pretest-posttest control group design. Initially, the subjects were given a pre-test. The experimental group received the inquiry learning method while the control group received conventional instruction. After the learning process, the subjects of both groups were given a post-test. Quantitative analysis was performed using the Mann-Whitney U-test, together with qualitative description. The results showed significant differences in the environmental literacy skills of students taught with the inquiry learning strategy and the project-based learning model on the theme of soil washing: the experimental group outperformed the control group. Data analysis showed a p-value (sig., 2-tailed) of 0.000 < α = 0.05, with an average N-gain of 34.72 for the experimental group and 16.40 for the control group. In addition, the learning process became more meaningful.
Downhole microseismic signal-to-noise ratio enhancement via strip matching shearlet transform
NASA Astrophysics Data System (ADS)
Li, Juan; Ji, Shuo; Li, Yue; Qian, Zhihong; Lu, Weili
2018-04-01
The shearlet transform has proved effective in noise attenuation. However, because of the low magnitude and high frequency of downhole microseismic signals, the coefficient values of valid signals and noise are similar in the shearlet domain, making the noise hard to suppress. In this paper, we present a novel signal-to-noise ratio enhancement scheme called the strip matching shearlet transform. The method takes into account the directivity of microseismic events and shearlets: through strip matching, the directional match between them is improved, so that the coefficient values of valid signals become much larger than those of the noise. Consequently, we can separate them well with the help of thresholding. Experimental results on both synthetic records and field data illustrate that our proposed method preserves the useful components and attenuates the noise well.
CFD analysis of a twin scroll radial turbine
NASA Astrophysics Data System (ADS)
Fürst, Jiří; Žák, Zdenĕk
2018-06-01
The contribution deals with the application of a coupled implicit solver for compressible flows to the CFD analysis of a twin scroll radial turbine. The solver is based on the finite volume method: convective terms are approximated using the AUSM+up scheme, viscous terms use central approximation, and the time evolution is achieved with the lower-upper symmetric Gauss-Seidel (LU-SGS) method. The solver allows steady simulation with the so-called frozen rotor approach as well as fully unsteady solution. Both approaches are first validated on the case of the ERCOFTAC pump [1]. Then the CFD analysis of the flow through a twin scroll radial turbine is performed, the efficiency and turbine power are predicted, and the results are compared to experimental data obtained in the framework of the Josef Božek - Competence Centre for Automotive Industry.
Reliable prediction intervals with regression neural networks.
Papadopoulos, Harris; Haralambous, Haris
2011-10-01
This paper proposes an extension to conventional regression neural networks (NNs) for replacing the point predictions they produce with prediction intervals that satisfy a required level of confidence. Our approach follows a novel machine learning framework, called Conformal Prediction (CP), for assigning reliable confidence measures to predictions without assuming anything more than that the data are independent and identically distributed (i.i.d.). We evaluate the proposed method on four benchmark datasets and on the problem of predicting Total Electron Content (TEC), which is an important parameter in trans-ionospheric links; for the latter we use a dataset of more than 60000 TEC measurements collected over a period of 11 years. Our experimental results show that the prediction intervals produced by our method are both well calibrated and tight enough to be useful in practice. Copyright © 2011 Elsevier Ltd. All rights reserved.
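The mechanics of turning point predictions into calibrated intervals can be illustrated with the split (inductive) conformal recipe: compute absolute residuals on a held-out calibration set, take a finite-sample quantile, and use it as the interval half-width. This is a simplified sketch of the CP framework, not the authors' exact neural-network procedure.

```python
import math

def conformal_halfwidth(calibration_residuals, confidence=0.9):
    """Split-conformal half-width: finite-sample quantile of the absolute
    calibration residuals. Valid under i.i.d. data (illustrative sketch)."""
    rs = sorted(abs(r) for r in calibration_residuals)
    n = len(rs)
    # ceil((n + 1) * confidence)-th smallest residual, clipped to the sample
    k = min(n - 1, math.ceil((n + 1) * confidence) - 1)
    return rs[k]

def predict_interval(point_prediction, halfwidth):
    return point_prediction - halfwidth, point_prediction + halfwidth
```

The appeal, as in the paper, is that coverage holds for any underlying regressor as long as the data are i.i.d.; tightness then depends on how good the regressor is.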
An approach of point cloud denoising based on improved bilateral filtering
NASA Astrophysics Data System (ADS)
Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin
2018-04-01
An omnidirectional mobile platform is designed for building point clouds, based on an improved filtering algorithm employed to handle depth images. First, the mobile platform can move flexibly, and its control interface is convenient to operate. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed and applied to the depth images obtained by the Kinect sensor. The results show an improved denoising effect compared with standard bilateral filtering. In an off-line setting, the color images and processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method reduces the depth-image processing time and improves the quality of the resulting point clouds.
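The baseline the paper improves on is the standard bilateral filter: each output pixel is a weighted mean of its neighbours, with a spatial Gaussian weight and a range (depth-difference) Gaussian weight so that depth edges are preserved. A plain, unoptimized sketch on a small depth grid; the paper's LBF variant restructures this computation locally for speed.

```python
import math

def bilateral_filter(depth, radius=2, sigma_s=1.5, sigma_r=10.0):
    """Plain bilateral filter on a 2-D depth grid (list of lists)."""
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = norm = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        wr = math.exp(-((depth[ny][nx] - depth[y][x]) ** 2)
                                      / (2 * sigma_r ** 2))
                        acc += ws * wr * depth[ny][nx]
                        norm += ws * wr
            out[y][x] = acc / norm  # weights always include the centre pixel
    return out
```

With a small `sigma_r`, neighbours across a large depth discontinuity get near-zero weight, which is why the filter smooths noise without blurring object boundaries.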
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak
1996-01-01
Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
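The sequential core that the report parallelises can be sketched as simulated annealing over truth assignments, with the number of unsatisfied clauses as the energy. This is a generic single-threaded sketch, not the Generalized Speculative Computation scheme, and the instance encoding is illustrative.

```python
import math, random

def anneal_sat(num_vars, clauses, t0=2.0, cooling=0.995, t_min=0.2,
               steps=2000, seed=1):
    """Simulated annealing for SAT. Clauses are lists of signed ints:
    3 means x3, -3 means NOT x3. Returns best assignment found."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(num_vars + 1)]  # index 0 unused

    def unsat_count():
        return sum(1 for cl in clauses
                   if not any((lit > 0) == assign[abs(lit)] for lit in cl))

    cost = unsat_count()
    best_cost, best_assign = cost, assign[:]
    t = t0
    for _ in range(steps):
        v = rng.randrange(1, num_vars + 1)
        assign[v] = not assign[v]           # propose a single-variable flip
        new = unsat_count()
        if new <= cost or rng.random() < math.exp((cost - new) / t):
            cost = new                      # accept (Metropolis criterion)
            if cost < best_cost:
                best_cost, best_assign = cost, assign[:]
        else:
            assign[v] = not assign[v]       # reject: undo the flip
        t = max(t * cooling, t_min)
    return best_assign, best_cost
```

The speculative-computation idea in the report is precisely about running many such proposal steps in parallel while guaranteeing the same accept/reject sequence this sequential loop would produce.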
The Complete, Temperature Resolved Spectrum of Methyl Cyanide Between 200 and 277 GHZ
NASA Astrophysics Data System (ADS)
McMillan, James P.; Neese, Christopher F.; De Lucia, Frank C.
2016-06-01
We have studied methyl cyanide, one of the so-called 'astronomical weeds', in the 200-277 GHz band. We have experimentally gathered a set of intensity-calibrated, complete, and temperature-resolved spectra across the temperature range of 231-351 K. Using our previously reported method of analysis, the point-by-point method, we are capable of generating the complete spectrum at astronomically significant temperatures. Lines of nontrivial intensity that were not previously included in the available astrophysical catalogs have been found. Lower state energies and line strengths have been determined for a number of lines not currently present in the catalogs. The extent to which this may be useful in making assignments will be discussed. J. McMillan, S. Fortman, C. Neese, F. DeLucia, ApJ. 795, 56 (2014)
Dimethyl Ether Between 214.6 and 265.3 Ghz: the Complete, Temperature Resolved Spectrum
NASA Astrophysics Data System (ADS)
McMillan, James P.; Neese, Christopher F.; De Lucia, Frank C.
2017-06-01
We have studied dimethyl ether, one of the so-called 'astronomical weeds', in the 214.6-265.3 GHz band. We have experimentally gathered a set of intensity-calibrated, complete, and temperature-resolved spectra across the temperature range of 238-391 K. Using our previously reported method of analysis, the point-by-point method, we are capable of generating the complete spectrum at astronomically significant temperatures. Many lines of nontrivial intensity that were not previously included in the available astrophysical catalogs have been found. Lower state energies and line strengths have been determined for a number of lines not currently present in the catalogs. The extent to which this may be useful in making assignments will be discussed. J. McMillan, S. Fortman, C. Neese, F. DeLucia, ApJ. 795, 56 (2014)
ParticleCall: A particle filter for base calling in next-generation sequencing systems
2012-01-01
Background Next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. Accuracy and lengths of their reads, however, are yet to surpass those provided by the conventional Sanger sequencing method. This motivates the search for computationally efficient algorithms capable of reliable and accurate detection of the order of nucleotides in short DNA fragments from the acquired data. Results In this paper, we consider Illumina’s sequencing-by-synthesis platform which relies on reversible terminator chemistry and describe the acquired signal by reformulating its mathematical model as a Hidden Markov Model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on a data set obtained by sequencing phiX174 bacteriophage using Illumina’s Genome Analyzer II. The results show that the developed base calling scheme is significantly more computationally efficient than the best performing unsupervised method currently available, while achieving the same accuracy. Conclusions The proposed ParticleCall provides more accurate calls than the Illumina’s base calling algorithm, Bustard. At the same time, ParticleCall is significantly more computationally efficient than other recent schemes with similar performance, rendering it more feasible for high-throughput sequencing data analysis. Improvement of base calling accuracy will have immediate beneficial effects on the performance of downstream applications such as SNP and genotype calling. ParticleCall is freely available at https://sourceforge.net/projects/particlecall. PMID:22776067
New fluorescence techniques for high-throughput drug discovery.
Jäger, S; Brand, L; Eggeling, C
2003-12-01
The rapid increase of compound libraries, as well as new targets emerging from the Human Genome Project, requires constant progress in pharmaceutical research. An important tool is High-Throughput Screening (HTS), which has evolved into an indispensable instrument in the pre-clinical target-to-IND (Investigational New Drug) discovery process. HTS requires machinery able to test more than 100,000 potential drug candidates per day for a specific biological activity. This imposes demanding experimental requirements, especially with respect to sensitivity, speed, and statistical accuracy, which are met by fluorescence technology instrumentation. In particular, the recently developed family of fluorescence techniques FIDA (Fluorescence Intensity Distribution Analysis), which is based on confocal single-molecule detection, has opened up a new field of HTS applications. This report describes the application of these new techniques, as well as of common fluorescence techniques such as confocal fluorescence lifetime and anisotropy, to HTS. It gives experimental examples and presents the advantages and disadvantages of each method. In addition, the most common artifacts arising in fluorescence detection (auto-fluorescence or quenching by the drug candidates) are highlighted, and correction methods for confocal fluorescence read-outs that can circumvent these deficiencies are presented.
Clutter Mitigation in Echocardiography Using Sparse Signal Separation
Yavneh, Irad
2015-01-01
In ultrasound imaging, clutter artifacts degrade images and may cause inaccurate diagnosis. In this paper, we apply a method called Morphological Component Analysis (MCA) for sparse signal separation with the objective of reducing such clutter artifacts. The MCA approach assumes that the two signals in the additive mix each have a sparse representation under some dictionary of atoms (a matrix), and separation is achieved by finding these sparse representations. In our work, an adaptive approach is used for learning the dictionary from the echo data. MCA is compared to Singular Value Filtering (SVF), a Principal Component Analysis- (PCA-) based filtering technique, and to a high-pass Finite Impulse Response (FIR) filter. Each filter is applied to a simulated hypoechoic lesion sequence, as well as to experimental cardiac ultrasound data. MCA is demonstrated in both cases to outperform the FIR filter and to obtain results comparable to the SVF method in terms of contrast-to-noise ratio (CNR). Furthermore, MCA shows a lower impact on tissue sections while removing the clutter artifacts. In experimental heart data, MCA achieves clutter mitigation with an average CNR improvement of 1.33 dB. PMID:26199622
Dynamic connectivity regression: Determining state-related changes in brain connectivity
Cribben, Ivor; Haraldsdottir, Ragnheidur; Atlas, Lauren Y.; Wager, Tor D.; Lindquist, Martin A.
2014-01-01
Most statistical analyses of fMRI data assume that the nature, timing and duration of the psychological processes being studied are known. However, often it is hard to specify this information a priori. In this work we introduce a data-driven technique for partitioning the experimental time course into distinct temporal intervals with different multivariate functional connectivity patterns between a set of regions of interest (ROIs). The technique, called Dynamic Connectivity Regression (DCR), detects temporal change points in functional connectivity and estimates a graph, or set of relationships between ROIs, for data in the temporal partition that falls between pairs of change points. Hence, DCR allows for estimation of both the time of change in connectivity and the connectivity graph for each partition, without requiring prior knowledge of the nature of the experimental design. Permutation and bootstrapping methods are used to perform inference on the change points. The method is applied to various simulated data sets as well as to an fMRI data set from a study (N=26) of a state anxiety induction using a socially evaluative threat challenge. The results illustrate the method’s ability to observe how the networks between different brain regions changed with subjects’ emotional state. PMID:22484408
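A toy version of the change-point idea behind DCR, reduced to two time series and a single change point, is to scan candidate splits and pick the one maximizing the contrast in correlation between the two segments. DCR itself detects multiple change points, estimates full graphical models over many ROIs, and uses permutation and bootstrap inference; none of that is reproduced in this sketch.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def best_change_point(x, y, min_seg=20):
    """Pick the split maximising the connectivity (correlation) contrast
    between the two segments (single-change-point toy version of DCR)."""
    best_t, best_gap = None, -1.0
    for t in range(min_seg, len(x) - min_seg):
        gap = abs(pearson(x[:t], y[:t]) - pearson(x[t:], y[t:]))
        if gap > best_gap:
            best_t, best_gap = t, gap
    return best_t, best_gap
```

In a real analysis the candidate split would be accepted only if the contrast survives a permutation test, mirroring the inference step described in the abstract.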
Differing types of cellular phone conversations and dangerous driving.
Dula, Chris S; Martin, Benjamin A; Fox, Russell T; Leonard, Robin L
2011-01-01
This study sought to investigate the relationship between cell phone conversation type and dangerous driving behaviors. It was hypothesized that more emotional phone conversations engaged in while driving would produce greater frequencies of dangerous driving behaviors in a simulated environment than more mundane conversation or no phone conversation at all. Participants were semi-randomly assigned to one of three conditions: (1) no call, (2) mundane call, and (3) emotional call. While driving in a simulated environment, participants in the experimental groups received a phone call from a research confederate who either engaged them in innocuous conversation (mundane call) or argued the opposite position of a deeply held belief of the participant (emotional call). Participants in the no call and mundane call groups differed significantly only on percent time spent speeding and center line crossings, though the mundane call group consistently engaged in more of all dangerous driving behaviors than did the no call participants. Participants in the emotional call group engaged in significantly more dangerous driving behaviors than participants in both the no call and mundane call groups, with the exception of traffic light infractions, where there were no significant group differences. Though there is need for replication, the authors concluded that whereas talking on a cell phone while driving is risky to begin with, having emotionally intense conversations is considerably more dangerous. Copyright © 2010 Elsevier Ltd. All rights reserved.
A new UK fission yield evaluation UKFY3.7
NASA Astrophysics Data System (ADS)
Mills, Robert William
2017-09-01
The JEFF neutron induced and spontaneous fission product yield evaluation is currently unchanged from JEFF-3.1.1, also known by its UK designation UKFY3.6A. It is based upon experimental data combined with empirically fitted mass, charge and isomeric state models which are then adjusted within the experimental and model uncertainties to conform to the physical constraints of the fission process. A new evaluation has been prepared for JEFF, called UKFY3.7, that incorporates new experimental data and replaces the current empirical models (multi-Gaussian fits of mass distribution and Wahl Zp model for charge distribution combined with parameter extrapolation), with predictions from GEF. The GEF model has the advantage that one set of parameters allows the prediction of many different fissioning nuclides at different excitation energies, unlike previous models where each fissioning nuclide at a specific excitation energy had to be fitted individually to the relevant experimental data. The new UKFY3.7 evaluation, submitted for testing as part of JEFF-3.3, is described alongside initial results of testing. In addition, initial ideas for future developments allowing inclusion of new measurement types and changing from any neutron spectrum type to true neutron energy dependence are discussed. Also, a method is proposed to propagate uncertainties of fission product yields based upon the experimental data that underlies the fission yield evaluation, with the covariance terms determined from the evaluated cumulative and independent yields combined with the experimental uncertainties on the cumulative yield measurements.
Curriculum system for experimental teaching in optoelectronic information
NASA Astrophysics Data System (ADS)
Di, Hongwei; Chen, Zhenqiang; Zhang, Jun; Luo, Yunhan
2017-08-01
The experimental curriculum system is directly related to the quality of talent training. Based on a careful investigation of the development requirements for optoelectronic information talents in the new century, the experimental teaching goal was set to cultivate students' innovative consciousness, innovative thinking, creativity and problem-solving ability. By straightening out the correlations among the experimental teaching of the main courses, the overall structure was designed in phases, along with the hierarchical curriculum content. Following the ideas of "basic, comprehensive, applied and innovative", the construction of an experimental teaching system called "triple-three" was put forward for optoelectronic information experimental teaching practice.
NASA Astrophysics Data System (ADS)
Fruchart, Michel; Vitelli, Vincenzo
2018-03-01
A theoretical framework for the design of so-called perturbative metamaterials, based on weakly interacting unit cells, has led to the experimental demonstration of a quadrupole topological insulator.
Wakefield, Andrew; Stone, Emma L.; Jones, Gareth; Harris, Stephen
2015-01-01
The light-emitting diode (LED) street light market is expanding globally, and it is important to understand how LED lights affect wildlife populations. We compared evasive flight responses of moths to bat echolocation calls experimentally under LED-lit and -unlit conditions. Significantly fewer moths performed ‘powerdive’ flight manoeuvres in response to bat calls (feeding buzz sequences from Nyctalus spp.) under an LED street light than in the dark. LED street lights reduce the anti-predator behaviour of moths, shifting the balance in favour of their predators, aerial hawking bats. PMID:26361558
Accounting for GC-content bias reduces systematic errors and batch effects in ChIP-seq data.
Teng, Mingxiang; Irizarry, Rafael A
2017-11-01
The main application of ChIP-seq technology is the detection of genomic regions that bind to a protein of interest. A large part of functional genomics' public catalogs is based on ChIP-seq data. These catalogs rely on peak calling algorithms that infer protein-binding sites by detecting genomic regions associated with more mapped reads (coverage) than expected by chance, as a result of the experimental protocol's lack of perfect specificity. We find that GC-content bias accounts for substantial variability in the observed coverage for ChIP-seq experiments and that this variability leads to false-positive peak calls. More concerning is that the GC effect varies across experiments, with the effect strong enough to result in a substantial number of peaks called differently when different laboratories perform experiments on the same cell line. However, accounting for GC content bias in ChIP-seq is challenging because the binding sites of interest tend to be more common in high GC-content regions, which confounds real biological signals with unwanted variability. To account for this challenge, we introduce a statistical approach that accounts for GC effects on both nonspecific noise and signal induced by the binding site. The method can be used to account for this bias in binding quantification as well as to improve existing peak calling algorithms. We use this approach to show a reduction in false-positive peaks as well as improved consistency across laboratories. © 2017 Teng and Irizarry; Published by Cold Spring Harbor Laboratory Press.
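The flavor of a GC correction can be sketched with a simple stratified rescaling. This is a hedged toy, far cruder than the paper's model of GC effects on noise and signal: bin genomic windows by GC fraction and rescale coverage by each stratum's mean so the expected corrected coverage is flat in GC. The simulated bias curve and bin counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bins = 2000

# Simulated genomic bins: GC fraction and read coverage with a
# GC-dependent enrichment bias.
gc = rng.uniform(0.3, 0.7, n_bins)
bias = np.exp(2.0 * (gc - 0.5))          # high-GC bins over-covered
coverage = rng.poisson(10.0 * bias).astype(float)

# Stratify bins by GC content and rescale coverage by the stratum mean.
edges = np.linspace(0.3, 0.7, 9)
strata = np.digitize(gc, edges[1:-1])
corrected = coverage.copy()
for s in np.unique(strata):
    m = strata == s
    corrected[m] *= coverage.mean() / coverage[m].mean()

# The coverage-vs-GC correlation should shrink after correction.
r_before = np.corrcoef(gc, coverage)[0, 1]
r_after = np.corrcoef(gc, corrected)[0, 1]
```

A real ChIP-seq correction must additionally keep the GC-correlated binding signal while removing only the nonspecific component, which is exactly the confounding the abstract highlights.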
Krohs, Ulrich
2012-03-01
Systems biology aims at explaining life processes by means of detailed models of molecular networks, mainly on the whole-cell scale. The whole cell perspective distinguishes the new field of systems biology from earlier approaches within molecular cell biology. The shift was made possible by the high throughput methods that were developed for gathering 'omic' (genomic, proteomic, etc.) data. These new techniques are made commercially available as semi-automatic analytic equipment, ready-made analytic kits and probe arrays. There is a whole industry of supplies for what may be called convenience experimentation. My paper inquires into some epistemic consequences of strong reliance on convenience experimentation in systems biology. In times when experimentation was automated to a lesser degree, modeling and in part even experimentation could be understood fairly well as either being driven by hypotheses, and thus proceeding by the testing of hypotheses, or as being performed in an exploratory mode, intended to sharpen concepts or initially vague phenomena. In systems biology, the situation is dramatically different. Data collection became so easy (though not cheap) that experimentation is, to a high degree, driven by convenience equipment, and model building is driven by the vast amount of data that is produced by convenience experimentation. This results in a shift in the mode of science. The paper shows that convenience-driven science is not primarily hypothesis-testing, nor is it in an exploratory mode. It rather proceeds in a gathering mode. This shift demands another shift in the mode of evaluation, which now becomes an exploratory endeavor, in response to the superabundance of gathered data. Copyright © 2011 Elsevier Ltd. All rights reserved.
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Interpolating scattered data points is a problem of wide ranging interest. A number of approaches for interpolation have been proposed both from theoretical domains such as computational geometry and in applications' fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive. This is because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems.
We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
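A compact sketch of tapered ordinary kriging, with illustrative assumptions throughout (exponential covariance, a Wendland-type taper, random 2-D points, stand-in observations): multiplying the covariance by a compactly supported taper zeroes most entries of the kriging matrix, while the Lagrange-multiplier row still enforces that the weights sum to one (unbiasedness).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
pts = rng.uniform(0, 10, (n, 2))
vals = rng.standard_normal(n)          # stand-in observations

# Exponential covariance model and a Wendland-type taper that is exactly
# zero beyond `taper_r`, so the tapered kriging matrix becomes sparse.
def cov_exp(h):
    return np.exp(-h / 2.0)

def taper(h, taper_r=3.0):
    t = np.clip(1.0 - h / taper_r, 0.0, None)
    return t**4 * (1.0 + 4.0 * h / taper_r)

def ok_weights(C, c0):
    # Ordinary kriging system with a Lagrange multiplier enforcing that
    # the weights sum to one (the unbiasedness constraint).
    m = C.shape[0]
    A = np.zeros((m + 1, m + 1))
    A[:m, :m] = C + 1e-6 * np.eye(m)   # tiny nugget for conditioning
    A[:m, m] = 1.0
    A[m, :m] = 1.0
    b = np.append(c0, 1.0)
    return np.linalg.solve(A, b)[:m]

d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
q = np.array([5.0, 5.0])               # query point
d0 = np.linalg.norm(pts - q, axis=1)

w_dense = ok_weights(cov_exp(d), cov_exp(d0))
w_taper = ok_weights(cov_exp(d) * taper(d), cov_exp(d0) * taper(d0))
sparsity = float(np.mean(cov_exp(d) * taper(d) == 0.0))
est_dense, est_taper = w_dense @ vals, w_taper @ vals
```

In a real implementation the zero pattern would be stored in a sparse matrix and the system solved with an iterative method; that is where the space and time savings the paper measures come from.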
Voellmy, Irene K; Goncalves, Ines Braga; Barrette, Marie-France; Monfort, Steven L; Manser, Marta B
2014-11-01
Adrenal hormones likely affect anti-predator behavior in animals. With experimental field studies, we first investigated associations between mean fecal glucocorticoid metabolite (fGC) excretion and vigilance, and between fGC excretion and behavioral responses to alarm call playbacks, in free-ranging meerkats (Suricata suricatta). We then tested how vigilance and behavioral responses to alarm call playbacks were affected in individuals administered exogenous cortisol. We found a positive association between mean fGC concentrations and vigilance behavior, but no relationship with the intensity of behavioral responses to alarm calls. However, in response to alarm call playbacks, individuals administered cortisol took slightly longer to resume foraging than control individuals treated with saline solution. Vigilance behavior, which occurs in the presence and absence of dangerous stimuli, serves to detect and avoid potential dangers, whereas responses to alarm calls serve to avoid immediate predation. Our data show that mean fGC excretion in meerkats was associated with vigilance, as a re-occurring anti-predator behavior over long time periods, and experimentally induced elevations of plasma cortisol affected the response to immediate threats. Together, our results indicate an association between the two types of anti-predator behavior and glucocorticoids, but that the underlying mechanisms may differ. Our study emphasizes the need to consider appropriate measures of adrenal activity specific to different contexts when assessing links between stress physiology and different anti-predator behaviors. Copyright © 2014 Elsevier Inc. All rights reserved.
Best practices for evaluating single nucleotide variant calling methods for microbial genomics
Olson, Nathan D.; Lund, Steven P.; Colman, Rebecca E.; Foster, Jeffrey T.; Sahl, Jason W.; Schupp, James M.; Keim, Paul; Morrow, Jayne B.; Salit, Marc L.; Zook, Justin M.
2015-01-01
Innovations in sequencing technologies have allowed biologists to make incredible advances in understanding biological systems. As experience grows, researchers increasingly recognize that analyzing the wealth of data provided by these new sequencing platforms requires careful attention to detail for robust results. Thus far, much of the scientific community's focus in bacterial genomics has been on evaluating genome assembly algorithms and rigorously validating assembly program performance. Missing, however, is a focus on critical evaluation of variant callers for these genomes. Variant calling is essential for comparative genomics as it yields insights into nucleotide-level organismal differences. Variant calling is a multistep process with a host of potential error sources that may lead to incorrect variant calls. Identifying and resolving these incorrect calls is critical for bacterial genomics to advance. The goal of this review is to provide guidance on validating algorithms and pipelines used in variant calling for bacterial genomics. First, we will provide an overview of the variant calling procedures and the potential sources of error associated with the methods. We will then identify appropriate datasets for use in evaluating algorithms and describe statistical methods for evaluating algorithm performance. As variant calling moves from basic research to the applied setting, standardized methods for performance evaluation and reporting are required; it is our hope that this review provides the groundwork for the development of these standards. PMID:26217378
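The statistical evaluation the review calls for reduces, in its simplest form, to comparing a call set against a trusted truth set. A minimal sketch (the positions and alleles are hypothetical; real benchmarking must also handle representation differences between equivalent variants):

```python
def evaluate_calls(truth, calls):
    # truth / calls: sets of (position, alternate base) variant calls.
    tp = len(truth & calls)          # true positives
    fp = len(calls - truth)          # false positives
    fn = len(truth - calls)          # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical truth set and pipeline output.
truth = {(101, "A"), (250, "T"), (511, "G"), (740, "C")}
calls = {(101, "A"), (250, "T"), (600, "G")}
precision, recall, f1 = evaluate_calls(truth, calls)
```

Reporting precision and recall against a characterized benchmark dataset, rather than a single accuracy number, is the kind of standardized evaluation the review advocates.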
Malware analysis using visualized image matrices.
Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu
2014-01-01
This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are applicable to packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples, and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively.
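The visualization step can be sketched as follows (a hedged toy, not the paper's exact encoding): hash each opcode mnemonic to an RGB pixel, reshape the sequence into an image matrix, and compare images pixel-wise. The opcode lists, hash choice, and similarity measure are illustrative assumptions.

```python
import hashlib
import numpy as np

def opcode_image(opcodes, width=8):
    # Hash each opcode mnemonic to 3 bytes -> one RGB pixel, then reshape
    # the pixel sequence into an image matrix of the given width.
    pix = [tuple(hashlib.md5(op.encode()).digest()[:3]) for op in opcodes]
    rows = -(-len(pix) // width)                    # ceiling division
    pix += [(0, 0, 0)] * (rows * width - len(pix))  # pad last row with black
    return np.array(pix, dtype=np.uint8).reshape(rows, width, 3)

def similarity(a, b):
    # Fraction of exactly matching pixels over the overlapping rows.
    h = min(a.shape[0], b.shape[0])
    return float(np.mean(np.all(a[:h] == b[:h], axis=-1)))

img1 = opcode_image(["push", "mov", "call", "ret"] * 4)
img2 = opcode_image(["push", "mov", "call", "ret"] * 4)
img3 = opcode_image(["xor", "jmp", "add", "sub"] * 4)
s_same, s_diff = similarity(img1, img2), similarity(img1, img3)
```

Identical opcode sequences map to identical images, while unrelated sequences share almost no pixels, which is the property the family-classification step exploits.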
A Novel Hybrid Intelligent Indoor Location Method for Mobile Devices by Zones Using Wi-Fi Signals
Castañón–Puga, Manuel; Salazar, Abby Stephanie; Aguilar, Leocundo; Gaxiola-Pacheco, Carelia; Licea, Guillermo
2015-01-01
The increasing use of mobile devices in indoor spaces brings challenges to location methods. This work presents a hybrid intelligent method based on data mining and Type-2 fuzzy logic to locate mobile devices in an indoor space by zones using Wi-Fi signals from selected access points (APs). This approach takes advantage of wireless local area networks (WLANs) over other types of architectures and implements the complete method in a mobile application using the developed tools. Besides, the proposed approach is validated by experimental data obtained from case studies and the cross-validation technique. For the purpose of generating the fuzzy rules that conform to the Takagi–Sugeno fuzzy system structure, a semi-supervised data mining technique called subtractive clustering is used. This algorithm finds centers of clusters from the radius map given by the collected signals from APs. Measurements of Wi-Fi signals can be noisy due to several factors mentioned in this work, so this method proposed the use of Type-2 fuzzy logic for modeling and dealing with such uncertain information. PMID:26633417
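Subtractive clustering itself is easy to sketch, following Chiu's standard formulation; the toy 2-D "signal-strength" data below are an assumption. Each point's potential sums Gaussian kernels to all other points; the highest-potential point becomes a center, its influence is subtracted using a slightly larger revision radius, and the process repeats.

```python
import numpy as np

def subtractive_clustering(X, ra, rb, n_clusters=2):
    # Potential of each point = sum of Gaussian kernels to all points;
    # repeatedly pick the highest-potential point as a center, then
    # subtract its influence using the larger revision radius rb.
    d2 = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
    pot = np.exp(-4.0 * d2 / ra**2).sum(axis=1)
    centers = []
    for _ in range(n_clusters):
        i = int(np.argmax(pot))
        centers.append(X[i])
        pot = pot - pot[i] * np.exp(-4.0 * d2[i] / rb**2)
    return np.array(centers)

rng = np.random.default_rng(4)
# Two well-separated "zones" of toy 2-D signal-strength readings (dBm-like).
X = np.vstack([rng.normal((-60.0, -40.0), 1.0, (50, 2)),
               rng.normal((-30.0, -75.0), 1.0, (50, 2))])
centers = subtractive_clustering(X, ra=5.0, rb=7.5)
```

In the paper's pipeline the discovered centers seed the rules of a Takagi-Sugeno fuzzy system; here they simply recover one representative point per zone.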
Sparse subspace clustering for data with missing entries and high-rank matrix completion.
Fan, Jicong; Chow, Tommy W S
2017-09-01
Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
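A hedged toy of the alternation described (not the paper's algorithm, which minimizes representation coefficients, errors, and rank, and uses sparse rather than ridge coding): alternate a self-expressive coding step, in which each column is written as a combination of the other columns, with refilling the missing entries from the reconstruction. The data, regularization weight, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
# Union of two 1-D subspaces (lines) in R^5; each column lies on one line.
u1, u2 = rng.standard_normal(5), rng.standard_normal(5)
X = np.hstack([np.outer(u1, rng.standard_normal(10)),
               np.outer(u2, rng.standard_normal(10))])

mask = rng.random(X.shape) > 0.2          # ~80% of entries observed
Xobs = np.where(mask, X, 0.0)
init_err = np.linalg.norm(Xobs - X) / np.linalg.norm(X)

# Alternate: (1) ridge-regularized self-expressive coding (zero diagonal,
# so no column represents itself); (2) refill the missing entries from
# the reconstruction Xhat @ C.
Xhat = Xobs.copy()
m = X.shape[1]
for _ in range(20):
    G = Xhat.T @ Xhat
    C = np.linalg.solve(G + 0.1 * np.eye(m), G)   # ridge self-expression
    np.fill_diagonal(C, 0.0)
    Xhat = np.where(mask, Xobs, Xhat @ C)

rel_err = np.linalg.norm(Xhat - X) / np.linalg.norm(X)
```

Because columns from the same subspace can represent each other, the reconstruction recovers missing entries even though a union of subspaces need not be low-rank overall, which is the point of the paper's high-rank completion setting.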
Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo
2016-01-20
A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase of the computational speed of the proposed method, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames of Fresnel CGH patterns with 1920×1080 pixels per second for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays.
ERIC Educational Resources Information Center
von Arnim, Albrecht G.; Missra, Anamika
2017-01-01
Leading voices in the biological sciences have called for a transformation in graduate education leading to the PhD degree. One area commonly singled out for growth and innovation is cross-training in computational science. In 1998, the University of Tennessee (UT) founded an intercollegiate graduate program called the UT-ORNL Graduate School of…
ERIC Educational Resources Information Center
Leap, Evelyn M.
2013-01-01
This quasi-experimental study was conducted with two fifth grade classrooms to investigate the effect of scent on students' acquisition and retention of multiplication facts and math anxiety. Forty participants received daily instruction for nine weeks, using a strategy-rich multiplication program called Factivation. Students in the Double Smencil…
Impacts of Early Childhood Education on Medium- and Long-Term Educational Outcomes
ERIC Educational Resources Information Center
McCoy, Dana Charles; Yoshikawa, Hirokazu; Ziol-Guest, Kathleen M.; Duncan, Greg J.; Schindler, Holly S.; Magnuson, Katherine; Yang, Rui; Koepp, Andrew; Shonkoff, Jack P.
2017-01-01
Despite calls to expand early childhood education (ECE) in the United States, questions remain regarding its medium- and long-term impacts on educational outcomes. We use meta-analysis of 22 high-quality experimental and quasi-experimental studies conducted between 1960 and 2016 to find that on average, participation in ECE leads to statistically…
75 FR 6033 - Agency Information Collection Request; 60-Day Public Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-05
... Initial Telephone Screen: Active Control Group (ACG)/Experimental Group (EG), 2400 respondents x 1 response x 20 minutes = 800 hours; In-person interview: EG, 1200 x 1 x 1.25 hours = 1,500 hours; Jump start phone call: EG, 1200 x 1 x 30... care insurance who are age 75 and over, using a multi-tiered random experimental research design to...
75 FR 19976 - Agency Information Collection Request; 30-Day Public Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-16
...) hours Initial Telephone Screen: Experimental Group, 240 x 1 x 20/60 hours = 80 hours; In-person interview: 240 x 1 x 80/60...; Initial Telephone Screen: Active Control Group, 240 x 1 x 20/60 = 80; Quarterly phone calls: 240 x 4 x 10/60 = 160... private long-term care insurance who are age 75 and over using a multi-tiered random experimental...
Liu, Jian; Cheng, Yuhu; Wang, Xuesong; Zhang, Lin; Liu, Hui
2017-08-17
It is urgent to diagnose colorectal cancer in the early stage. Some feature genes which are important to colorectal cancer development have been identified. However, for the early stage of colorectal cancer, less is known about the identity of specific cancer genes that are associated with advanced clinical stage. In this paper, we conducted a feature extraction method named Optimal Mean based Block Robust Feature Extraction method (OMBRFE) to identify feature genes associated with advanced colorectal cancer in clinical stage by using the integrated colorectal cancer data. Firstly, based on the optimal mean and L2,1-norm, a novel feature extraction method called Optimal Mean based Robust Feature Extraction method (OMRFE) is proposed to identify feature genes. Then the OMBRFE method, which introduces the block idea into the OMRFE method, is put forward to process the colorectal cancer integrated data which includes multiple genomic data: copy number alterations, somatic mutations, methylation expression alteration, as well as gene expression changes. Experimental results demonstrate that the OMBRFE is more effective than previous methods in identifying the feature genes. Moreover, genes identified by OMBRFE are verified to be closely associated with advanced colorectal cancer in clinical stage.
Cross-modal individual recognition in wild African lions.
Gilfillan, Geoffrey; Vitale, Jessica; McNutt, John Weldon; McComb, Karen
2016-08-01
Individual recognition is considered to have been fundamental in the evolution of complex social systems and is thought to be a widespread ability throughout the animal kingdom. Although robust evidence for individual recognition remains limited, recent experimental paradigms that examine cross-modal processing have demonstrated individual recognition in a range of captive non-human animals. It is now highly relevant to test whether cross-modal individual recognition exists within wild populations and thus examine how it is employed during natural social interactions. We address this question by testing audio-visual cross-modal individual recognition in wild African lions (Panthera leo) using an expectancy-violation paradigm. When presented with a scenario where the playback of a loud-call (roaring) broadcast from behind a visual block is incongruent with the conspecific previously seen there, subjects responded more strongly than during the congruent scenario where the call and individual matched. These findings suggest that lions are capable of audio-visual cross-modal individual recognition and provide a useful method for studying this ability in wild populations. © 2016 The Author(s).
Shilling Attacks Detection in Recommender Systems Based on Target Item Analysis
Zhou, Wei; Wen, Junhao; Koh, Yun Sing; Xiong, Qingyu; Gao, Min; Dobbie, Gillian; Alam, Shafiq
2015-01-01
Recommender systems are highly vulnerable to shilling attacks, both by individuals and groups. Attackers who introduce biased ratings in order to affect recommendations have been shown to negatively affect collaborative filtering (CF) algorithms. Previous research focuses only on the differences between genuine profiles and attack profiles, ignoring the group characteristics in attack profiles. In this paper, we study the use of statistical metrics to detect rating patterns of attackers and group characteristics in attack profiles. A further issue is that most existing detection methods are model-specific. Two metrics, Rating Deviation from Mean Agreement (RDMA) and Degree of Similarity with Top Neighbors (DegSim), are used for analyzing rating patterns between malicious profiles and genuine profiles in attack models. Building upon this, we also propose and evaluate a detection structure called RD-TIA for detecting shilling attacks in recommender systems using a statistical approach. In order to detect more complicated attack models, we propose a novel metric called DegSim’ based on DegSim. The experimental results show that our detection model based on target item analysis is an effective approach for detecting shilling attacks. PMID:26222882
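The RDMA metric is straightforward to compute: for each user, average the absolute deviation of their ratings from the item means, down-weighted by how many ratings each item has. In the sketch below, the synthetic ratings and the single hypothetical push/nuke attack profile are assumptions; the attack profile receives the largest RDMA score.

```python
import numpy as np

def rdma(ratings, mask):
    # Rating Deviation from Mean Agreement: per-user mean of
    # |rating - item average| / (number of ratings the item received).
    item_counts = np.maximum(mask.sum(axis=0), 1)
    item_means = ratings.sum(axis=0) / item_counts
    dev = np.abs(ratings - item_means) / item_counts
    return (dev * mask).sum(axis=1) / np.maximum(mask.sum(axis=1), 1)

rng = np.random.default_rng(6)
n_users, n_items = 50, 20
ratings = rng.integers(3, 6, (n_users, n_items)).astype(float)  # genuine: 3-5
mask = rng.random((n_users, n_items)) < 0.7                     # ~70% observed

# One attack profile: push item 0 to the maximum, nuke everything else.
attack = np.full(n_items, 1.0)
attack[0] = 5.0
ratings = np.vstack([ratings, attack])
mask = np.vstack([mask, np.ones(n_items, dtype=bool)])
ratings = ratings * mask            # zero out unobserved cells

scores = rdma(ratings, mask)        # last row is the attacker
```

RD-TIA combines this kind of profile-level statistic with DegSim and target-item analysis; the sketch only shows why deviation-based scores flag biased profiles.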
NASA Technical Reports Server (NTRS)
Flores, J.; Gundy, K.
1986-01-01
A fast diagonalized Beam-Warming algorithm is coupled with a zonal approach to solve the three-dimensional Euler/Navier-Stokes equations. The computer code, called Transonic Navier-Stokes (TNS), uses a total of four zones for wing configurations (or can be extended to complete aircraft configurations by adding zones). In the inner blocks near the wing surface, the thin-layer Navier-Stokes equations are solved, while in the outer two blocks the Euler equations are solved. The diagonal algorithm yields a speedup of as much as a factor of 40 over the original algorithm/zonal method code. The TNS code, in addition, has the capability to model wind tunnel walls. Transonic viscous solutions are obtained on a 150,000-point mesh for a NACA 0012 wing. A three-order-of-magnitude drop in the L2-norm of the residual requires approximately 500 iterations, which takes about 45 min of CPU time on a Cray-XMP processor. Simulations are also conducted for a different geometrical wing called WING C. All cases show good agreement with experimental data.
2016-08-03
instance, quantum systems that are near-integrable usually fail to thermalize in an experimentally realistic time scale and, instead, relax to quasi... However, it is possible to observe quasi-stationary states, often called prethermal, that emerge within an experimentally accessible time scale. Previous... generalized Gibbs ensemble (GGE) [10–13]. Here we experimentally study the relaxation dynamics of a chain of up to 22 spins evolving under a long-range
Experimental verification of Pyragas-Schöll-Fiedler control.
von Loewenich, Clemens; Benner, Hartmut; Just, Wolfram
2010-09-01
We present an experimental realization of time-delayed feedback control proposed by Schöll and Fiedler. The scheme enables us to stabilize torsion-free periodic orbits in autonomous systems, and to overcome the so-called odd number limitation. The experimental control performance is in quantitative agreement with the bifurcation analysis of simple model systems. The results uncover some general features of the control scheme which are deemed to be relevant for a large class of setups.
Tracking fin whales in the northeast Pacific Ocean with a seafloor seismic network.
Wilcock, William S D
2012-10-01
Ocean bottom seismometer (OBS) networks represent a tool of opportunity to study fin and blue whales. A small OBS network on the Juan de Fuca Ridge in the northeast Pacific Ocean in ~2.3 km of water recorded an extensive data set of 20-Hz fin whale calls. An automated method has been developed to identify arrival times based on instantaneous frequency and amplitude and to locate calls using a grid search even in the presence of a few bad arrival times. When only one whale is calling near the network, tracks can generally be obtained up to distances of ~15 km from the network. When the calls from multiple whales overlap, user supervision is required to identify tracks. The absolute and relative amplitudes of arrivals and their three-component particle motions provide additional constraints on call location but are not useful for extending the distance to which calls can be located. The double-difference method inverts for changes in relative call locations using differences in residuals for pairs of nearby calls recorded on a common station. The method significantly reduces the unsystematic component of the location error, especially when inconsistencies in arrival time observations are minimized by cross-correlation.
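The grid-search location step can be sketched as follows (a hedged toy with made-up station geometry, a constant sound speed, and noiseless arrival times): since the call's emission time is unknown, each grid node is scored by the spread of the emission times implied by subtracting travel times from the arrivals, and the node minimizing that spread wins.

```python
import numpy as np

c = 1480.0                                   # assumed sound speed (m/s)
stations = np.array([[0.0, 0.0], [4000.0, 0.0], [0.0, 4000.0],
                     [4000.0, 4000.0], [2000.0, 1000.0]])
true_src = np.array([2500.0, 1800.0])        # hypothetical call location (m)
t0 = 3.0                                     # unknown emission time (s)
arrivals = t0 + np.linalg.norm(stations - true_src, axis=1) / c

# Grid search: at each node, residuals (arrival - travel time) should all
# equal the emission time, so score nodes by the spread of the residuals.
best, best_score = None, np.inf
for x in np.arange(0.0, 4001.0, 50.0):
    for y in np.arange(0.0, 4001.0, 50.0):
        tt = np.linalg.norm(stations - np.array([x, y]), axis=1) / c
        r = arrivals - tt
        score = float(np.sum((r - r.mean()) ** 2))
        if score < best_score:
            best_score, best = score, np.array([x, y])
```

The paper's method adds robustness to bad picks and a double-difference refinement, but the core scoring of candidate locations by arrival-time consistency is the same.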
Frequency standards requirements of the NASA deep space network to support outer planet missions
NASA Technical Reports Server (NTRS)
Fliegel, H. F.; Chao, C. C.
1974-01-01
Navigation of Mariner spacecraft to Jupiter and beyond will require greater accuracy of positional determination than heretofore obtained if the full experimental capabilities of this type of spacecraft are to be utilized. Advanced navigational techniques which will be available by 1977 include Very Long Baseline Interferometry (VLBI), three-way Doppler tracking (sometimes called quasi-VLBI), and two-way Doppler tracking. It is shown that VLBI and quasi-VLBI methods depend on the same basic concept, and that they impose nearly the same requirements on the stability of frequency standards at the tracking stations. It is also shown how a realistic modelling of spacecraft navigational errors prevents overspecifying the requirements on frequency stability.
Automatic movie skimming with general tempo analysis
NASA Astrophysics Data System (ADS)
Lee, Shih-Hung; Yeh, Chia-Hung; Kuo, C. C. J.
2003-11-01
Story units are extracted by general tempo analysis of audio and visual information in this research. Although many schemes have been proposed to successfully segment video data into shots using basic low-level features, how to group shots into meaningful units called story units is still a challenging problem. By focusing on a certain type of video, such as sports or news, we can exploit models with specific application domain knowledge. For movie content, many heuristic rules based on audiovisual clues have been proposed with limited success. We propose a method to extract story units using general tempo analysis. Experimental results are given to demonstrate the feasibility and efficiency of the proposed technique.
Gorelik, Tatiana E; Schmidt, Martin U; Kolb, Ute; Billinge, Simon J L
2015-04-01
This paper shows that pair-distribution function (PDF) analyses can be carried out on organic and organometallic compounds from powder electron diffraction data. Different experimental setups are demonstrated, including selected area electron diffraction and nanodiffraction in transmission electron microscopy or nanodiffraction in scanning transmission electron microscopy modes. The methods were demonstrated on organometallic complexes (chlorinated and unchlorinated copper phthalocyanine) and on purely organic compounds (quinacridone). The PDF curves from powder electron diffraction data, called ePDF, are in good agreement with PDF curves determined from X-ray powder data, demonstrating that the problems of obtaining kinematical scattering data and avoiding beam damage of the sample can be resolved.
The NASA Hydrogen Energy Systems Technology study - A summary
NASA Technical Reports Server (NTRS)
Laumann, E. A.
1976-01-01
This study is concerned with: hydrogen use, alternatives and comparisons, hydrogen production, factors affecting application, and technology requirements. Two scenarios for future use are explained. One is called the reference hydrogen use scenario and assumes continued historic uses of hydrogen along with additional use for coal gasification and liquefaction, consistent with the Ford technical fix baseline (1974) projection. The expanded scenario relies on the nuclear electric economy (1973) energy projection and assumes the addition of limited new uses such as experimental hydrogen-fueled aircraft, some mixing with natural gas, and energy storage by utilities. Current uses and supply of hydrogen are described, and the technological requirements for developing new methods of hydrogen production are discussed.
Hurka, Florian; Wenger, Thomas; Heininger, Sebastian; Lueth, Tim C
2011-01-01
This article describes a new interaction device for surgical navigation systems--the so-called navigation mouse system. The idea is to use a tracked instrument of a surgical navigation system, such as a pointer, to control the software. The new interaction system extends existing navigation systems with a microcontroller unit, which uses the existing communication line to extract the needed 3D information of an instrument and to calculate positions, analogous to the PC mouse cursor, as well as click events. These positions and events are used to manipulate the navigation system. An experimental setup demonstrates the accuracy achievable with the new mouse system.
Digital imaging biomarkers feed machine learning for melanoma screening.
Gareau, Daniel S; Correa da Rosa, Joel; Yagerman, Sarah; Carucci, John A; Gulati, Nicholas; Hueto, Ferran; DeFazio, Jennifer L; Suárez-Fariñas, Mayte; Marghoob, Ashfaq; Krueger, James G
2017-07-01
We developed an automated approach for generating quantitative image analysis metrics (imaging biomarkers) that are then analysed with a set of 13 machine learning algorithms to generate an overall risk score that is called a Q-score. These methods were applied to a set of 120 "difficult" dermoscopy images of dysplastic nevi and melanomas that were subsequently excised/classified. This approach yielded 98% sensitivity and 36% specificity for melanoma detection, approaching sensitivity/specificity of expert lesion evaluation. Importantly, we found strong spectral dependence of many imaging biomarkers in blue or red colour channels, suggesting the need to optimize spectral evaluation of pigmented lesions. © 2016 The Authors. Experimental Dermatology Published by John Wiley & Sons Ltd.
Gravity assisted recovery of liquid xenon at large mass flow rates
NASA Astrophysics Data System (ADS)
Virone, L.; Acounis, S.; Beaupère, N.; Beney, J.-L.; Bert, J.; Bouvier, S.; Briend, P.; Butterworth, J.; Carlier, T.; Chérel, M.; Crespi, P.; Cussonneau, J.-P.; Diglio, S.; Manzano, L. Gallego; Giovagnoli, D.; Gossiaux, P.-B.; Kraeber-Bodéré, F.; Ray, P. Le; Lefèvre, F.; Marty, P.; Masbou, J.; Morteau, E.; Picard, G.; Roy, D.; Staempflin, M.; Stutzmann, J.-S.; Visvikis, D.; Xing, Y.; Zhu, Y.; Thers, D.
2018-06-01
We report on a liquid xenon gravity assisted recovery method for nuclear medical imaging applications. The experimental setup consists of an elevated detector enclosed in a cryostat connected to a storage tank called ReStoX. Both elements are part of XEMIS2 (XEnon Medical Imaging System): an innovative medical imaging facility for pre-clinical research that uses pure liquid xenon as detection medium. Tests based on liquid xenon transfer from the detector to ReStoX have been successfully performed showing that an unprecedented mass flow rate close to 1 ton per hour can be reached. This promising achievement as well as future areas of improvement will be discussed in this paper.
Cross Section Measurements of the Radioactive 107Pd and Stable 105,108Pd Nuclei at J-PARC/MLF/ANNRI
NASA Astrophysics Data System (ADS)
Nakamura, S.; Kimura, A.; Kitatani, F.; Ohta, M.; Furutaka, K.; Goko, S.; Hara, K. Y.; Harada, H.; Hirose, K.; Kin, T.; Koizumi, M.; Oshima, M.; Toh, Y.; Kino, K.; Hiraga, F.; Kamiyama, T.; Kiyanagi, Y.; Katabuchi, T.; Mizumoto, M.; Igashira, M.; Hori, J.; Fujii, T.; Fukutani, S.; Takamiya, K.
2014-05-01
Measurements of neutron-capture cross sections were performed for the radioactive 107Pd and stable 105,108Pd nuclei by the time-of-flight method using an apparatus called the “Accurate Neutron-Nucleus Reaction measurement Instrument (ANNRI)”, installed at neutron Beam Line No. 4 of the Materials and Life Science Experimental Facility (MLF) at J-PARC. The neutron-capture cross sections of 107Pd and 105,108Pd were measured in the low-energy region from thermal energy to a few hundred eV. From the measurements, new information was obtained on some resonances of these Pd nuclei.
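The kinematics behind any such time-of-flight measurement is simple: the neutron energy follows from the flight path L and measured flight time t as E = ½·m·(L/t)². The 21.5 m path length below is an illustrative value, not the actual ANNRI geometry:

```python
import math

M_N = 1.674927e-27   # neutron mass, kg
EV = 1.602177e-19    # joules per eV

def neutron_energy_eV(flight_path_m, time_s):
    """Kinetic energy in eV of a neutron from its time of flight."""
    v = flight_path_m / time_s           # neutron speed, m/s
    return 0.5 * M_N * v * v / EV

# A thermal neutron (~0.025 eV) over an assumed 21.5 m path takes ~10 ms:
e = neutron_energy_eV(21.5, 9.8e-3)
print(f"{e:.3f} eV")
```

Shorter flight times map to higher energies, which is why a single pulsed beam line can cover the range from thermal energy up to hundreds of eV.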
Longitudinal phase-space coating of beam in a storage ring
NASA Astrophysics Data System (ADS)
Bhat, C. M.
2014-06-01
In this Letter, I report on a novel scheme for beam stacking without any beam emittance dilution using a barrier rf system in synchrotrons. The general principle of the scheme called longitudinal phase-space coating, validation of the concept via multi-particle beam dynamics simulations applied to the Fermilab Recycler, and its experimental demonstration are presented. In addition, it has been shown and illustrated that the rf gymnastics involved in this scheme can be used in measuring the incoherent synchrotron tune spectrum of the beam in barrier buckets and in producing a clean hollow beam in longitudinal phase space. The method of beam stacking in synchrotrons presented here is the first of its kind.
Paper simulation techniques in user requirements analysis for interactive computer systems
NASA Technical Reports Server (NTRS)
Ramsey, H. R.; Atwood, M. E.; Willoughby, J. K.
1979-01-01
This paper describes the use of a technique called 'paper simulation' in the analysis of user requirements for interactive computer systems. In a paper simulation, the user solves problems with the aid of a 'computer', as in normal man-in-the-loop simulation. In this procedure, though, the computer does not exist, but is simulated by the experimenters. This allows simulated problem solving early in the design effort, and allows the properties and degree of structure of the system and its dialogue to be varied. The technique, and a method of analyzing the results, are illustrated with examples from a recent paper simulation exercise involving a Space Shuttle flight design task.
47 CFR 80.225 - Requirements for selective calling equipment.
Code of Federal Regulations, 2011 CFR
2011-10-01
... selective calling (DSC) equipment and selective calling equipment installed in ship and coast stations, and...-STD, “RTCM Recommended Minimum Standards for Digital Selective Calling (DSC) Equipment Providing... Class ‘D’ Digital Selective Calling (DSC)—Methods of testing and required test results,” March 2003. ITU...
Hinterleitner, Gernot; Leopold-Wildburger, Ulrike; Mestel, Roland; Palan, Stefan
2015-01-01
This paper deals with the market structure at the opening of the trading day and its influence on subsequent trading. We compare a single continuous double auction and two complement markets with different call auction designs as opening mechanisms in a unified experimental framework. The call auctions differ with respect to their levels of transparency. We find that a call auction not only improves market efficiency and liquidity at the beginning of the trading day when compared to the stand-alone continuous double auction, but also causes positive spillover effects on subsequent trading. Concerning the design of the opening call auction, we find no significant differences between the transparent and nontransparent specification with respect to opening prices and liquidity. In the course of subsequent continuous trading, however, market quality is slightly higher after a nontransparent call auction. PMID:26351653
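The clearing rule of an opening call auction can be sketched generically: collect limit orders during the pre-opening phase, then execute all crossing orders at the single price that maximizes tradable volume. This is a minimal uniform-price sketch, not the exact auction design used in the experiment:

```python
def call_auction_price(bids, asks):
    """bids/asks: lists of (limit price, quantity).

    Returns (clearing_price, executable_volume) for the price that
    maximizes matched volume, the standard call-auction criterion.
    """
    prices = sorted({p for p, _ in bids} | {p for p, _ in asks})
    best = (None, 0)
    for p in prices:
        demand = sum(q for price, q in bids if price >= p)  # willing buyers
        supply = sum(q for price, q in asks if price <= p)  # willing sellers
        vol = min(demand, supply)
        if vol > best[1]:
            best = (p, vol)
    return best

bids = [(101, 5), (100, 10), (99, 8)]
asks = [(98, 6), (100, 7), (102, 4)]
print(call_auction_price(bids, asks))  # (100, 13)
```

Transparency in the experiment concerns how much of this order book traders see before clearing, not the clearing rule itself.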
EFFECTS OF RESPONDING TO A NAME AND GROUP CALL ON PRESCHOOLERS' COMPLIANCE
Beaulieu, Lauren; Hanley, Gregory P.; Roberson, Aleasha A.
2012-01-01
We assessed teacher–child relations with respect to children's name calls, instructions, and compliance in a preschool classroom. The most frequent consequence to a child's name being called was the provision of instructions. We also observed a higher probability of compliance when children attended to a name call. Next, we evaluated the effects of teaching preschoolers to attend to their names and a group call on their compliance with typical instructions. We used a multiple baseline design across subjects and a control-group design to evaluate whether gains in compliance were a function of treatment or routine experience in preschool. Results showed that compliance increased as a function of teaching precursors for all children in the experimental group, and the effects on compliance were maintained despite a reduction of the occurrence of precursors. Moreover, it appeared that precursor teaching, not routine preschool experience, was responsible for the changes in compliance. PMID:23322926
Vocalizations associated with anxiety and fear in the common marmoset (Callithrix jacchus).
Kato, Yoko; Gokan, Hayato; Oh-Nishi, Arata; Suhara, Tetsuya; Watanabe, Shigeru; Minamimoto, Takafumi
2014-12-15
Vocalizations of common marmosets (Callithrix jacchus) were examined under experimental situations related to fear or anxiety. When marmosets were isolated in an unfamiliar environment, they frequently vocalized "tsik-egg" calls, combination calls of a 'tsik' followed by several 'egg' calls. Tsik-egg calls were also observed after treatment with the anxiogenic drug FG-7142 (20 mg/kg, sc). In contrast, when marmosets were exposed to predatory stimuli as fear-evoking situations, they frequently vocalized tsik solo calls as well as tsik-egg calls. These results suggest that marmosets dissociate the vocalization of tsik-egg and tsik calls under conditions related to fear/anxiety: tsik-egg solo vocalizations were emitted under anxiety-related conditions (e.g., isolation and anxiogenic drug treatment), whereas a mixed vocalization of tsik-egg and tsik calls was emitted when the animals were confronted with fear-provoking stimuli (i.e., threatening predatory stimuli). The tsik-egg call, with or without tsik, can thus be used as a specific vocal index of fear/anxiety in marmosets, which allows us to study the neural mechanisms of negative emotions in primates. Copyright © 2014 Elsevier B.V. All rights reserved.
Microemulsion-based lycopene extraction: Effect of surfactants, co-surfactants and pretreatments.
Amiri-Rigi, Atefeh; Abbasi, Soleiman
2016-04-15
Lycopene is a potent antioxidant that has received extensive attention recently. Due to the challenges encountered with current methods of lycopene extraction using hazardous solvents, industry calls for a greener, safer and more efficient process. The main purpose of the present study was the application of a microemulsion technique to extract lycopene from tomato pomace. In this respect, the effects of eight different surfactants, four different co-surfactants, and ultrasound and enzyme pretreatments on lycopene extraction efficiency were examined. Experimental results revealed that the combination of ultrasound and enzyme pretreatments, saponin as a natural surfactant, and glycerol as a co-surfactant, in the bicontinuous region of the microemulsion, constituted the optimal experimental conditions, resulting in a microemulsion containing 409.68±0.68 μg/g lycopene. The high lycopene concentration achieved indicates that the microemulsion technique, using a low-cost natural surfactant, could be promising for a simple and safe separation of lycopene from tomato pomace and possibly from tomato industrial wastes. Copyright © 2015 Elsevier Ltd. All rights reserved.
The anomalous demagnetization behaviour of chondritic meteorites
NASA Astrophysics Data System (ADS)
Morden, S. J.
1992-06-01
Alternating field (AF) demagnetization of chondritic samples often shows anomalous results such as large directional and intensity changes; 'saw-tooth' intensity vs. demagnetizing field curves are also prevalent. An attempt to explain this behaviour is presented, using a computer model in which individual 'mineral grains' can be 'magnetized' in a variety of different ways. A simulated demagnetization can then be carried out to examine the results. It was found that the experimental behaviour of chondrites can be successfully mimicked by loading the computer model with a series of randomly orientated and sized vectors. The parameters of the model can be changed to reflect different trends seen in experimental data. Many published results can be modelled using this method. A known magnetic mineralogy can be modelled, and an unknown mineralogy deduced from AF demagnetization curves. Only by comparing data from mutually orientated samples can true stable regions for palaeointensity measurements be identified, calling into question some previous estimates of field strength from meteorites.
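The grain-vector model described can be sketched in a few lines: each 'grain' carries a randomly oriented, randomly sized moment and a coercivity, stepwise AF demagnetization at peak field H zeroes every grain with coercivity below H, and the sample remanence is the magnitude of the vector sum of the survivors. All distributions and parameters below are illustrative assumptions, not those of the original model:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200
# Random directions scaled by lognormal sizes: randomly orientated and
# sized vectors, as in the model described above.
moments = rng.normal(size=(N, 3)) * rng.lognormal(size=(N, 1))
coercivity = rng.uniform(0.0, 100.0, size=N)   # mT, assumed uniform

def remanence(af_field_mT):
    """Net moment magnitude after AF demagnetization to the given peak field."""
    survivors = moments[coercivity >= af_field_mT]
    return float(np.linalg.norm(survivors.sum(axis=0))) if len(survivors) else 0.0

steps = np.arange(0, 101, 5)
curve = [remanence(h) for h in steps]
# Intensity need not decay monotonically: removing a grain whose moment
# opposed the net vector *increases* the remanence at the next step,
# producing the 'saw-tooth' curves seen experimentally.
increases = sum(b > a for a, b in zip(curve, curve[1:]))
print(increases)
```

Large directional changes arise the same way: the direction of the surviving vector sum can swing sharply when a dominant grain is removed.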
Experimental Demonstration of Counterfactual Quantum Communication
NASA Astrophysics Data System (ADS)
Liu, Yang; Ju, Lei; Liang, Xiao-Lei; Tang, Shi-Biao; Tu, Guo-Liang Shen; Zhou, Lei; Peng, Cheng-Zhi; Chen, Kai; Chen, Teng-Yun; Chen, Zeng-Bing; Pan, Jian-Wei
2012-07-01
Quantum effects, besides offering substantial superiority in many tasks over classical methods, are also expected to provide interesting ways to establish secret keys between remote parties. A striking scheme called “counterfactual quantum cryptography” proposed by Noh [Phys. Rev. Lett. 103, 230501 (2009)] allows one to maintain secure key distributions, in which particles carrying secret information are seemingly not being transmitted through quantum channels. We have experimentally demonstrated, for the first time, a faithful implementation for such a scheme with an on-table realization operating at telecom wavelengths. To verify its feasibility for extension over a long distance, we have furthermore reported an illustration on a 1 km fiber. In both cases, high visibilities of more than 98% are achieved through active stabilization of interferometers. Our demonstration is crucial as a direct verification of such a remarkable application, and this procedure can become a key communication module for revealing fundamental physics through counterfactuals.
NASA Technical Reports Server (NTRS)
Wilkenfeld, J. M.; Judge, R. J. R.; Harlacher, B. L.
1982-01-01
A combined experimental and analytical program to develop system electrical test procedures for the qualification of spacecraft against damage produced by space-electron-induced discharges (EID) occurring on spacecraft dielectric outer surfaces is described. The data on the response of a simple satellite model, called CAN, to electron-induced discharges is presented. The experimental results were compared to predicted behavior and to the response of the CAN to electrical injection techniques simulating blowoff and arc discharges. Also included is a review of significant results from other ground tests and the P78-2 program to form a data base from which is specified those test procedures which optimally simulate the response of spacecraft to EID. The electrical and electron spraying test data were evaluated to provide a first-cut determination of the best methods for performance of electrical excitation qualification tests from the point of view of simulation fidelity.
Experimental research of heterogeneous nuclei in superheated steam
NASA Astrophysics Data System (ADS)
Bartoš, Ondřej; Kolovratník, Michal; Šmíd, Bohuslav; Hrubý, Jan
2016-03-01
A mobile steam expansion chamber has been developed to investigate experimentally homogeneous and heterogeneous nucleation processes in steam, both in the laboratory and at power plants using steam withdrawn from the steam turbine. The purpose of the device is to provide new insight into the physics of nonequilibrium wet steam formation, which is one of the factors limiting the efficiency and reliability of steam turbines. The steam, or a mixture of steam with a non-condensable gas, rapidly expands in the expansion chamber. Due to adiabatic cooling, the temperature drops below the dew point of the steam at a given pressure. When a sufficiently high supersaturation is reached, droplets are nucleated. By tuning the supersaturation in the so-called nucleation pulse, particles of various size ranges can be activated. This fact is used in the present study to measure the aerosol particles present in the air. Homogeneous nucleation was negligible in this case. The experiment demonstrates the functionality of the device, the data acquisition system and the data evaluation methods.
Experimental Protein Structure Verification by Scoring with a Single, Unassigned NMR Spectrum.
Courtney, Joseph M; Ye, Qing; Nesbitt, Anna E; Tang, Ming; Tuttle, Marcus D; Watt, Eric D; Nuzzio, Kristin M; Sperling, Lindsay J; Comellas, Gemma; Peterson, Joseph R; Morrissey, James H; Rienstra, Chad M
2015-10-06
Standard methods for de novo protein structure determination by nuclear magnetic resonance (NMR) require time-consuming data collection and interpretation efforts. Here we present a qualitatively distinct and novel approach, called Comparative, Objective Measurement of Protein Architectures by Scoring Shifts (COMPASS), which identifies the best structures from a set of structural models by numerical comparison with a single, unassigned 2D (13)C-(13)C NMR spectrum containing backbone and side-chain aliphatic signals. COMPASS does not require resonance assignments. It is particularly well suited for interpretation of magic-angle spinning solid-state NMR spectra, but also applicable to solution NMR spectra. We demonstrate COMPASS with experimental data from four proteins--GB1, ubiquitin, DsbA, and the extracellular domain of human tissue factor--and with reconstructed spectra from 11 additional proteins. For all these proteins, with molecular mass up to 25 kDa, COMPASS distinguished the correct fold, most often within 1.5 Å root-mean-square deviation of the reference structure. Copyright © 2015 Elsevier Ltd. All rights reserved.
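The core of an assignment-free scoring scheme like this can be illustrated with a toy peak-list comparison: rank candidate models by how closely their predicted 2D peak positions match an observed, unassigned peak list. In COMPASS the predicted peaks come from chemical-shift prediction on each structural model; here that step is mocked with given arrays, so this is only a sketch of the scoring idea:

```python
import numpy as np

def spectrum_score(predicted, observed):
    """Mean distance from each predicted peak to its nearest observed peak.

    Lower is better; no resonance assignment is needed because only
    peak positions, not identities, are compared.
    """
    d = np.linalg.norm(predicted[:, None, :] - observed[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

# A toy observed 2D 13C-13C peak list (ppm, ppm)
observed = np.array([[55.0, 30.0], [62.1, 69.8], [25.3, 17.1]])

# Two hypothetical candidate models: one near, one far from the data
model_good = observed + 0.2
model_bad = observed + 5.0

scores = {"good": spectrum_score(model_good, observed),
          "bad": spectrum_score(model_bad, observed)}
best = min(scores, key=scores.get)
print(best)  # the model closest to the observed spectrum wins
```

Applied over a set of structural models, the lowest-scoring model is selected as the best fold, which is the selection step the abstract describes.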
Simulation and experimental validation of the dynamical model of a dual-rotor vibrotactor
NASA Astrophysics Data System (ADS)
Miklós, Á.; Szabó, Z.
2015-01-01
In this work, a novel design for small vibrotactors called the Dual Excenter is presented, which makes it possible to produce vibrations with independently adjustable frequency and amplitude. This feature has been realized using two coaxially aligned eccentric rotors, which are driven by DC motors independently. The prototype of the device has been built, where mechanical components are integrated on a frame with two optical sensors for the measurement of angular velocity and phase angle. The system is equipped with a digital controller. Simulations confirm the results of analytical investigations and they allow us to model the sampling method of the signals of the angular velocity and the phase angle between the rotors. Furthermore, we model the discrete behavior of the controller, which is a PI controller for the angular velocities and a PID controller for the phase angle. Finally, simulation results are compared to experimental ones, which show that the Dual Excenter concept is feasible.
Fabrication and application of a non-contact double-tapered optical fiber tweezers.
Liu, Z L; Liu, Y X; Tang, Y; Zhang, N; Wu, F P; Zhang, B
2017-09-18
Double-tapered optical fiber tweezers (DOFTs) were fabricated by a chemical etching method called interfacial layer etching. In this method, the second taper angle (STA) of the DOFTs can be controlled easily by the interfacial layer etching time. Application of the DOFTs to the optical trapping of yeast cells is presented. Effects of the STA on the axial trapping efficiency and the trapping position were investigated experimentally and theoretically, and the experimental results are in good agreement with the theoretical ones. The results demonstrate that non-contact capture can be realized for a large STA (e.g., 90 deg) and that there is an optimal axial trapping efficiency as the STA increases. In order to obtain a more accurate measurement of the trapping force, a correction factor to the Stokes drag coefficient was introduced. This work provides a way of designing and fabricating optical fiber tweezers (OFTs) with high trapping efficiency or non-contact capture.
Bridgeman, Devon; Tsow, Francis; Xian, Xiaojun; Forzani, Erica
2016-01-01
The development and performance characterization of a new differential-pressure-based flow meter for human breath measurements is presented in this article. The device, called a “Confined Pitot Tube,” is comprised of a pipe with an elliptically shaped expansion cavity located at the pipe center and an elliptical disk inside the expansion cavity. The elliptical disk, named the Pitot Tube, is exchangeable and has different diameters, which are smaller than the diameter of the elliptical cavity. The gap between the disk and the cavity allows the flow of human breath to pass through. The disk causes an obstruction to the flow inside the pipe, but the elliptical cavity provides an expansion for the flow to circulate around the disk, decreasing the overall flow resistance. We characterize the new flow sensor experimentally and theoretically, using Comsol Multiphysics® software with laminar and turbulent models. We also validate the sensor using inhalation and exhalation tests and a reference method. PMID:27818521
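The principle behind any differential-pressure flow meter of this kind follows from Bernoulli's equation: the velocity in the gap is v = sqrt(2·ΔP/ρ), and multiplying by the effective gap area and a discharge coefficient gives the volumetric flow rate. The density, discharge coefficient, and gap area below are illustrative assumptions, not the calibration of the Confined Pitot Tube itself:

```python
import math

RHO_AIR = 1.2        # kg/m^3, approximate density of exhaled air (assumed)
C_D = 0.8            # dimensionless discharge coefficient (assumed)
GAP_AREA = 1.5e-4    # m^2, annular gap between disk and cavity (assumed)

def flow_rate(delta_p, rho=RHO_AIR, cd=C_D, area=GAP_AREA):
    """Volumetric flow rate (m^3/s) from a differential pressure (Pa)."""
    v = math.sqrt(2.0 * delta_p / rho)   # gap velocity from Bernoulli
    return cd * area * v

# A pressure drop of 60 Pa across the gap:
q = flow_rate(60.0)
print(f"{q * 1000:.2f} L/s")  # 1.20 L/s with these assumed parameters
```

The square-root relationship is why such meters lose resolution at low flows, and why the expansion cavity, by lowering flow resistance, matters for comfortable breath measurement.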
Experimental approaches to identify cellular G-quadruplex structures and functions.
Di Antonio, Marco; Rodriguez, Raphaël; Balasubramanian, Shankar
2012-05-01
Guanine-rich nucleic acids can fold into non-canonical DNA secondary structures called G-quadruplexes. The formation of these structures can interfere with biology that is crucial to sustaining cellular homeostasis and metabolism via mechanisms that include transcription, translation, splicing, telomere maintenance and DNA recombination. Thus, due to their implication in several biological processes and their possible role in promoting genomic instability, G-quadruplex-forming sequences have emerged as potential therapeutic targets. There has been a growing interest in the development of synthetic molecules and biomolecules for sensing G-quadruplex structures in cellular DNA. In this review, we summarise and discuss recent methods developed for cellular imaging of G-quadruplexes, and the application of experimental genomic approaches to detect G-quadruplexes throughout genomic DNA. In particular, we discuss the use of engineered small molecules and natural proteins to enable pull-down, ChIP-Seq, ChIP-chip and fluorescence imaging of G-quadruplex structures in cellular DNA. Copyright © 2012 Elsevier Inc. All rights reserved.
X-38 Experimental Controls Laws
NASA Technical Reports Server (NTRS)
Munday, Steve; Estes, Jay; Bordano, Aldo J.
2000-01-01
X-38 is a NASA JSC/DFRC experimental flight test program developing a series of prototypes for an International Space Station (ISS) Crew Return Vehicle, often called an ISS "lifeboat." X-38 Vehicle 132 Free Flight 3, currently scheduled for the end of this month, will be the first flight test of a modern FCS architecture called Multi-Application Control-Honeywell (MACH), originally developed by the Honeywell Technology Center. MACH wraps classical P&I outer attitude loops around a modern dynamic inversion attitude rate loop. The dynamic inversion process requires that the flight computer have an onboard aircraft model of expected vehicle dynamics based upon the aerodynamic database. Dynamic inversion is computationally intensive, so some timing modifications were made to implement MACH on the slower flight computers of the subsonic test vehicles. In addition to linear stability margin analyses and high-fidelity 6-DOF simulation, hardware-in-the-loop testing is used to verify the implementation of MACH and its robustness to aerodynamic and environmental uncertainties and disturbances.
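The dynamic-inversion idea underlying such an inner rate loop can be sketched in general form: given an onboard model x_dot = f(x) + B·u, command u = B⁻¹·(x_dot_desired − f(x)), so the modeled dynamics cancel and the loop behaves like a simple integrator. The model below is a made-up two-state example, not the X-38 aerodynamic database:

```python
import numpy as np

def f(x):
    """Modeled natural dynamics (in practice, from the aero database)."""
    return np.array([-0.5 * x[0] + 0.1 * x[1],
                      0.2 * x[0] - 0.8 * x[1]])

B = np.array([[1.0, 0.2],
              [0.0, 1.5]])   # modeled control effectiveness (assumed)

def dynamic_inversion(x, x_dot_desired):
    """Control input that makes the modeled plant track x_dot_desired."""
    return np.linalg.solve(B, x_dot_desired - f(x))

x = np.array([0.3, -0.1])
x_dot_des = np.array([0.0, 0.0])     # e.g., hold the current rates
u = dynamic_inversion(x, x_dot_des)

# With a perfect model, the achieved state derivative equals the command:
achieved = f(x) + B @ u
print(achieved)  # ≈ [0, 0]
```

The matrix solve at every control step is the computational cost the abstract alludes to, and model mismatch between f, B and the real vehicle is exactly what the robustness testing probes.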
Closed-loop bird-computer interactions: a new method to study the role of bird calls.
Lerch, Alexandre; Roy, Pierre; Pachet, François; Nagle, Laurent
2011-03-01
In the field of songbird research, many studies have shown the role of male songs in territorial defense and courtship. Calling, another important acoustic communication signal, has received much less attention, however, because calls are assumed to contain less information about the emitter than songs do. Birdcall repertoire is diverse, and the role of calls has been found to be significant in the area of social interaction, for example, in pair, family, and group cohesion. However, standard methods for studying calls do not allow precise and systematic study of their role in communication. We propose herein a new method to study bird vocal interaction. A closed-loop computer system interacts with canaries, Serinus canaria, by (1) automatically classifying two basic types of canary vocalization, single versus repeated calls, as they are produced by the subject, and (2) responding with a preprogrammed call type recorded from another bird. This computerized animal-machine interaction requires no human interference. We show first that the birds do engage in sustained interactions with the system, by studying the rate of single and repeated calls for various programmed protocols. We then show that female canaries differentially use single and repeated calls. First, they produce significantly more single than repeated calls, and second, the rate of single calls is associated with the context in which they interact, whereas repeated calls are context independent. This experiment is the first illustration of how closed-loop bird-computer interaction can be used productively to study social relationships. © Springer-Verlag 2010
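The online classification step such a closed-loop system depends on can be illustrated with a toy rule: group detected call onsets into bouts by inter-call gap, then label each bout 'single' or 'repeated' by its length. The 0.5 s gap threshold is an assumption for illustration, not the classifier used in the study:

```python
BOUT_GAP = 0.5  # seconds; onsets closer than this belong to one bout (assumed)

def classify_bouts(onsets, gap=BOUT_GAP):
    """Group sorted call onset times into bouts and label each bout."""
    bouts, current = [], [onsets[0]]
    for t in onsets[1:]:
        if t - current[-1] <= gap:
            current.append(t)          # same bout: gap is short
        else:
            bouts.append(current)      # close the bout, start a new one
            current = [t]
    bouts.append(current)
    return ["repeated" if len(b) > 1 else "single" for b in bouts]

onsets = [0.0, 0.3, 0.55, 3.0, 7.0, 7.2]
print(classify_bouts(onsets))  # ['repeated', 'single', 'repeated']
```

In the real system each classification immediately triggers playback of a preprogrammed call type, closing the loop without human interference.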
Precise control of flexible manipulators
NASA Technical Reports Server (NTRS)
Cannon, R. H., Jr.; Bindford, T. O.; Schmitz, E.
1984-01-01
The design and experimental testing of end point position controllers for a very flexible one link lightweight manipulator are summarized. The latest upgraded version of the experimental setup, and the basic differences between conventional joint angle feedback and end point position feedback, are described. A general procedure for application of modern control methods to the problem is outlined. The relationship between weighting parameters and the bandwidth and control stiffness of the resulting end point position closed loop system is shown. It is found that joint rate and angle feedback, in addition to the primary end point position sensor, is essential for adequate disturbance rejection capability of the closed loop system. The use of a low-order multivariable compensator design computer code, called Sandy, is documented. A solution to the problem of control mode switching between position sensor sets is outlined. The proof of concept for end point position feedback for a one link flexible manipulator was demonstrated. The bandwidth obtained with the experimental end point position controller is about twice as fast as the beam's first natural cantilevered frequency, and comes within a factor of four of the absolute physical speed limit imposed by the wave propagation time of the beam.
Zwetsloot, P P; Kouwenberg, L H J A; Sena, E S; Eding, J E; den Ruijter, H M; Sluijter, J P G; Pasterkamp, G; Doevendans, P A; Hoefer, I E; Chamuleau, S A J; van Hout, G P J; Jansen Of Lorkeers, S J
2017-10-27
Large animal models are essential for the development of novel therapeutics for myocardial infarction (MI). To optimize translation, we need to assess the effect of experimental design on disease outcome and to model experimental design to resemble the clinical course of MI. The aim of this study is therefore to systematically investigate how experimental decisions affect outcome measurements in large animal MI models. We used control-animal data from two independent meta-analyses of large animal MI models. All variables of interest were pre-defined. We performed univariable and multivariable meta-regression to analyze whether these variables influenced infarct size and ejection fraction. Our analyses incorporated 246 relevant studies. Multivariable meta-regression revealed that infarct size and cardiac function were influenced independently by the choice of species, sex, co-medication, occlusion type, occluded vessel, quantification method, ischemia duration and follow-up duration. We provide strong systematic evidence that commonly used endpoints depend significantly on study design and biological variation. This makes direct comparison of the results of different studies difficult and calls for standardized models. Researchers should take this into account when designing large animal studies to most closely mimic the clinical course of MI and enable translational success.
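The inverse-variance weighted meta-regression at the core of such an analysis can be sketched in a few lines. This is a generic illustration, not the authors' pipeline: the data are invented, and the covariate (ischemia duration) and the helper name `meta_regression` are assumptions for the sketch.

```python
import numpy as np

def meta_regression(y, v, X):
    """Fixed-effect meta-regression: weighted least squares with
    weights 1/v, the inverse of each study's sampling variance.
    y: (n,) study outcomes; v: (n,) variances; X: (n, p) covariates."""
    W = np.diag(1.0 / np.asarray(v, dtype=float))
    cov = np.linalg.inv(X.T @ W @ X)      # (X' W X)^-1
    beta = cov @ (X.T @ W @ y)            # coefficient estimates
    se = np.sqrt(np.diag(cov))            # standard errors
    return beta, se

# Invented data: outcome (e.g., infarct size) vs. ischemia duration (min).
X = np.array([[1, 30], [1, 45], [1, 60], [1, 90], [1, 120]], dtype=float)
y = np.array([10.0, 14.0, 19.0, 28.0, 38.0])
v = np.array([1.0, 1.2, 0.8, 1.5, 1.1])
beta, se = meta_regression(y, v, X)   # beta[1]: effect of duration
```

A positive, precisely estimated `beta[1]` would indicate that longer ischemia durations predict larger infarcts across studies; real analyses would add the remaining covariates as further columns of `X`.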
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Jian, E-mail: Jian.Zhou@tudelft.n; Ye Guang, E-mail: g.ye@tudelft.n; Magnel Laboratory for Concrete Research, Department of Structural Engineering, Ghent University, Technologiepark-Zwijnaarde 904 B-9052, Ghent
2010-07-15
Numerous mercury intrusion porosimetry (MIP) studies have been carried out to investigate the pore structure in cement-based materials. However, standard MIP often results in an underestimation of large pores and an overestimation of small pores because of its intrinsic limitations. In this paper, an innovative MIP method is developed to provide a more accurate estimation of pore size distribution. The new MIP measurements are conducted following a unique mercury intrusion procedure, in which the applied pressure is increased from the minimum to the maximum by repeating pressurization-depressurization cycles instead of a continuous pressurization followed by a continuous depressurization. Accordingly, this method is called pressurization-depressurization cycling MIP (PDC-MIP). By following the PDC-MIP testing sequence, the volumes of the throat pores and the corresponding ink-bottle pores can be determined at every pore size. These values are used to calculate the pore size distribution with the newly developed analysis method. This paper presents an application of PDC-MIP to the investigation of the pore size distribution in cement-based materials. The experimental results of PDC-MIP are compared with those measured by standard MIP. PDC-MIP is further validated against other experimental methods and a numerical tool, including nitrogen sorption, backscattered electron (BSE) image analysis, Wood's metal intrusion porosimetry (WMIP) and numerical simulation by the cement hydration model HYMOSTRUC3D.
Pourabbasi, Ata; Farzami, Jalal; Shirvani, Mahbubeh-Sadat Ebrahimnegad; Shams, Amir Hossein; Larijani, Bagher
2017-01-01
One of the main uses of social networks in clinical studies is facilitating sampling and case finding for scientists. The main focus of this study is comparing two sampling methods, phone calls and a social network, for study purposes. One of the researchers called 214 families of children with diabetes over 90 days. After this period, phone calls stopped, and the team communicated with the families through Telegram, a virtual social network, for 30 days. The number of children who participated in the study was evaluated. Although the Telegram period was 60 days shorter than the phone-call period, the researchers found that the proportion of participants recruited through Telegram (17.6%) did not differ significantly from that of families contacted by phone (12.9%). Using social networks can be suggested as a beneficial method for local researchers seeking easier sampling, winning their samples' trust, following up on procedures, and maintaining an easy-access database.
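Whether 17.6% and 12.9% differ significantly can be checked with a standard two-proportion z-test. A minimal sketch follows; the per-arm counts are illustrative assumptions (the abstract reports only percentages), not the study's raw data.

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions
    x1/n1 and x2/n2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # normal CDF
    return z, 2 * (1 - phi)                        # two-sided p-value

# Illustrative counts: ~17.6% via Telegram vs. ~13% by phone.
z, p = two_proportion_ztest(19, 108, 28, 214)
# p > 0.05 here: no significant difference, consistent with the abstract.
```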
Agnihotri, Samira; Sundeep, P. V. D. S.; Seelamantula, Chandra Sekhar; Balakrishnan, Rohini
2014-01-01
Objective identification and description of mimicked calls is a primary component of any study on avian vocal mimicry, but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis, in combination with various distance metrics, to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus, and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods misclassified different subsets of calls, and we achieved a maximum accuracy of 95% only when we combined the results of both methods. This study is the first to use Mel-frequency Cepstral Coefficients and Relative Spectral Amplitude-filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential. PMID:24603717
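Once spectral features (e.g., per-call mean MFCC vectors) are extracted, matching a mimicked call to candidate model calls reduces to nearest-neighbour search under a distance metric. The sketch below uses cosine distance; `match_call` and the toy vectors are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity between two feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def match_call(mimic, models):
    """Return the index of the model call whose feature vector is
    closest to the mimicked call, plus all pairwise distances."""
    dists = [cosine_distance(mimic, m) for m in models]
    return int(np.argmin(dists)), dists
```

Other metrics (Euclidean, Mahalanobis, or dynamic time warping over frame sequences) can be swapped in without changing the matching logic.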
The effect of predation on begging-call evolution in nestling wood warblers.
Haskell
1999-04-01
I combined a comparative study of begging in ground- and tree-nesting wood warblers (Parulidae) with experimental measures of the predation costs of warbler begging calls. Throughout their development, ground-nesting warbler nestlings had significantly higher-frequency begging calls than did tree-nesting warblers. There was also a trend for ground-nesting birds to have less rapidly modulated calls. There were no consistent associations between nesting site and the amplitude of the calls. Using miniature walkie-talkies hidden inside artificial nests, I reciprocally transplanted the begging calls of 5- and 8-day-old black-throated blue warblers, Dendroica caerulescens (tree-nesting) and ovenbirds, Seiurus aurocapillus (ground-nesting) and measured the corresponding changes in rates of nest predation. For the begging calls of 8-day-old nestlings, but not those of 5-day-olds, the calls of the tree-nesting species coming from ground nests incurred greater costs than did the calls of ground nesters. The reciprocal transplant had little effect on the rate of predation. Tooth imprints on clay eggs placed in artificial nests indicated that eastern chipmunks, Tamias striatus, were responsible for the increased cost of begging for black-throated blue calls coming from the ground. These data suggest that nest predation may be responsible for maintaining some of the interspecific differences in the acoustic structure of begging calls. Copyright 1999 The Association for the Study of Animal Behaviour.
Fenzl, Thomas; Schuller, Gerd
2005-01-01
Background Echolocating bats emit vocalizations that can be classified either as echolocation calls or communication calls. Neural control of both types of calls must govern the same pool of motoneurons responsible for vocalizations. Electrical microstimulation in the periaqueductal gray matter (PAG) elicits both communication and echolocation calls, whereas stimulation of the paralemniscal area (PLA) induces only echolocation calls. In both the PAG and the PLA, the current thresholds for triggering natural vocalizations do not habituate to stimuli and remain low even for long stimulation periods, indicating that these structures have relatively direct access to the final common pathway for vocalization. This study intended to clarify whether echolocation calls and communication calls are controlled differentially below the level of the PAG via separate vocal pathways before converging on the motoneurons used in vocalization. Results Both structures were probed simultaneously in a single experimental approach. Two stimulation electrodes were chronically implanted within the PAG in order to elicit either echolocation or communication calls. Blockade of the ipsilateral PLA site by iontophoretic application of the glutamate antagonist kynurenic acid did not impede either echolocation or communication calls elicited from the PAG. However, blockade of the contralateral PLA suppressed PAG-elicited echolocation calls but not communication calls. In both cases the blockade was reversible. Conclusion The neural control of echolocation and communication calls seems to be differentially organized below the level of the PAG. The PLA is an essential functional unit for echolocation call control before the descending pathways rejoin the final common pathway for vocalization. PMID:16053533
Vinther, Joachim M; Nielsen, Anders B; Bjerring, Morten; van Eck, Ernst R H; Kentgens, Arno P M; Khaneja, Navin; Nielsen, Niels Chr
2012-12-07
A novel strategy for heteronuclear dipolar decoupling in magic-angle spinning solid-state nuclear magnetic resonance (NMR) spectroscopy is presented, which eliminates residual static high-order terms in the effective Hamiltonian originating from interactions between oscillating dipolar and anisotropic shielding tensors. The method, called refocused continuous-wave (rCW) decoupling, is systematically established by interleaving continuous-wave decoupling with appropriately inserted rotor-synchronized high-power π refocusing pulses of alternating phases. The effect of the refocusing pulses in eliminating residual effects of dipolar coupling in heteronuclear spin systems is rationalized by effective Hamiltonian calculations to third order. In some variants, the π-pulse refocusing is supplemented by rotor-synchronized π/2 purging pulses to further reduce residual dipolar coupling effects. Five different rCW decoupling sequences are presented and their performance is compared to state-of-the-art decoupling methods. The rCW decoupling sequences benefit from extreme broadbandedness, tolerance of rf inhomogeneity, and improved potential for decoupling at relatively low average rf field strengths. In numerical simulations, the rCW schemes clearly reveal superior characteristics relative to the best decoupling schemes presented so far, which we are also able to demonstrate experimentally to some extent. A major advantage of the rCW decoupling methods is that they are easy to set up and optimize experimentally.
Saletti, Dominique
2017-01-01
Rapid progress in ultra-high-speed imaging has allowed material properties to be studied at high strain rates by applying full-field measurements and inverse identification methods. Nevertheless, the sensitivity of these techniques still requires a better understanding, since various extrinsic factors present during an actual experiment make it difficult to separate different sources of errors that can significantly affect the quality of the identified results. This study presents a methodology using simulated experiments to investigate the accuracy of the so-called spalling technique (used to study tensile properties of concrete subjected to high strain rates) by numerically simulating the entire identification process. The experimental technique uses the virtual fields method and the grid method. The methodology consists of reproducing the recording process of an ultra-high-speed camera by generating sequences of synthetically deformed images of a sample surface, which are then analysed using the standard tools. The investigation of the uncertainty of the identified parameters, such as Young's modulus along with the stress–strain constitutive response, is addressed by introducing the most significant user-dependent parameters (i.e. acquisition speed, camera dynamic range, grid sampling, blurring), proving that the used technique can be an effective tool for error investigation. This article is part of the themed issue ‘Experimental testing and modelling of brittle materials at high strain rates’. PMID:27956505
Identifying direct miRNA-mRNA causal regulatory relationships in heterogeneous data.
Zhang, Junpeng; Le, Thuc Duy; Liu, Lin; Liu, Bing; He, Jianfeng; Goodall, Gregory J; Li, Jiuyong
2014-12-01
Discovering the regulatory relationships between microRNAs (miRNAs) and mRNAs is an important problem that interests many biologists and medical researchers. A number of computational methods have been proposed to infer miRNA-mRNA regulatory relationships, and are mostly based on the statistical associations between miRNAs and mRNAs discovered in observational data. The miRNA-mRNA regulatory relationships identified by these methods can be both direct and indirect regulations. However, differentiating direct regulatory relationships from indirect ones is important for biologists in experimental designs. In this paper, we present a causal discovery based framework (called DirectTarget) to infer direct miRNA-mRNA causal regulatory relationships in heterogeneous data, including expression profiles of miRNAs and mRNAs, and miRNA target information. DirectTarget is applied to the Epithelial to Mesenchymal Transition (EMT) datasets. The validation by experimentally confirmed target databases suggests that the proposed method can effectively identify direct miRNA-mRNA regulatory relationships. To explore the upstream regulators of miRNA regulation, we further identify the causal feedforward patterns (CFFPs) of TF-miRNA-mRNA to provide insights into the miRNA regulation in EMT. DirectTarget has the potential to be applied to other datasets to elucidate the direct miRNA-mRNA causal regulatory relationships and to explore the regulatory patterns. Copyright © 2014 Elsevier Inc. All rights reserved.
Yildizoglu, Tugce; Weislogel, Jan-Marek; Mohammad, Farhan; Chan, Edwin S-Y; Assam, Pryseley N; Claridge-Chang, Adam
2015-12-01
Genetic studies in Drosophila reveal that olfactory memory relies on a brain structure called the mushroom body. The mainstream view is that each of the three lobes of the mushroom body plays a specialized role in short-term aversive olfactory memory, but a number of studies have drawn divergent conclusions from their varying experimental findings. Like many fields, neurogenetics uses null hypothesis significance testing for data analysis. Critics of significance testing claim that this method promotes discrepancies by using arbitrary thresholds (α) to apply reject/accept dichotomies to continuous data, which does not reflect the biological reality of quantitative phenotypes. We explored using estimation statistics, an alternative data analysis framework, to examine published fly short-term memory data. Systematic review was used to identify behavioral experiments examining the physiological basis of olfactory memory, and meta-analytic approaches were applied to assess the role of lobular specialization. Multivariate meta-regression models revealed that short-term memory lobular specialization is not supported by the data; they identified the cellular extent of a transgenic driver as the major predictor of its effect on short-term memory. These findings demonstrate that effect sizes, meta-analysis, meta-regression, hierarchical models and estimation methods in general can be successfully harnessed to identify knowledge gaps, synthesize divergent results, accommodate heterogeneous experimental designs and quantify genetic mechanisms.
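The estimation-statistics workflow the authors advocate rests on standardized effect sizes and their precision rather than p-value dichotomies. A minimal sketch of Hedges' g with inverse-variance pooling, as a generic illustration of that method family (not the authors' code; all numbers below are invented):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Bias-corrected standardized mean difference (Hedges' g)
    between two groups, with its approximate sampling variance."""
    df = n1 + n2 - 2
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / sp                 # Cohen's d
    g = (1 - 3 / (4 * df - 1)) * d     # small-sample correction
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * df)
    return g, var

def fixed_effect_pool(gs, vs):
    """Inverse-variance pooled effect with a 95% confidence interval."""
    w = [1 / v for v in vs]
    gbar = sum(wi * gi for wi, gi in zip(w, gs)) / sum(w)
    se = math.sqrt(1 / sum(w))
    return gbar, (gbar - 1.96 * se, gbar + 1.96 * se)
```

Reporting the pooled effect with its interval, rather than a reject/accept verdict, is the substantive shift the abstract describes.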
Evaluation of the New B-REX Fatigue Testing System for Multi-Megawatt Wind Turbine Blades: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, D.; Musial, W.; Engberg, S.
2004-12-01
The National Renewable Energy Laboratory (NREL) recently developed a new hybrid fatigue testing system called the Blade Resonance Excitation (B-REX) test system. The new system uses 65% less energy to test large wind turbine blades in half the time of NREL's dual-axis forced-displacement test method, with lower equipment and operating costs. The B-REX is a dual-axis test system that combines resonance excitation with forced hydraulic loading to reduce the total test time required while representing the operating strains on the critical inboard blade stations more accurately than a single-axis test system. The analysis and testing required to fully implement the B-REX were significant. To control unanticipated blade motion and vibrations caused by dynamic coupling between the flap, lead-lag, and torsional directions, we needed to incorporate additional test hardware and control software. We evaluated the B-REX test system under stable operating conditions using a combination of various sensors. We then compared our results with results from the same blade, tested previously using NREL's dual-axis forced-displacement test method. Experimental results indicate that strain levels produced by the B-REX system accurately replicated the forced-displacement method. This paper describes the challenges we encountered while developing the new blade fatigue test system and the experimental results that validate its accuracy.
Optimization of radial-type superconducting magnetic bearing using the Taguchi method
NASA Astrophysics Data System (ADS)
Ai, Liwang; Zhang, Guomin; Li, Wanjie; Liu, Guole; Liu, Qi
2018-07-01
Modeling and optimizing the levitation behavior of a superconducting magnetic bearing (SMB) is important and complicated. This is due to the nonlinear constitutive relationships of the superconducting and ferromagnetic materials, the relative movement between the superconducting stator and the PM rotor, and the multiple parameters (e.g., air gap, critical current density, and remanent flux density) affecting the levitation behavior. In this paper, we present a theoretical calculation and optimization method for the levitation behavior of a radial-type SMB. A simplified model for levitation force calculation is established using the 2D finite element method with the H-formulation. In the model, the boundary condition of the superconducting stator is imposed by harmonic series expressions to describe the traveling magnetic field generated by the moving PM rotor. Experimental measurements of the levitation force are performed and validate the model. A statistical technique called the Taguchi method is adopted to optimize the load capacity of the SMB. The effects of six optimization parameters on the target characteristics are discussed, and the optimum parameter combination is determined. The results show that the levitation behavior of the SMB is greatly improved and that the Taguchi method is suitable for optimizing the SMB.
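The Taguchi workflow reduces to two computations: a signal-to-noise (S/N) ratio per experimental run, and the mean S/N per factor level over an orthogonal array. A minimal sketch with an invented two-factor design, not the paper's six-parameter SMB study:

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio (dB) for a larger-is-better response,
    e.g., bearing load capacity: -10 log10(mean(1/y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def factor_effects(design, sn):
    """Mean S/N per level for each factor (column) of an orthogonal
    array; the best setting maximizes the mean S/N for its factor."""
    design = np.asarray(design)
    sn = np.asarray(sn, dtype=float)
    effects = []
    for col in design.T:
        levels = sorted(set(int(v) for v in col))
        effects.append({lv: float(np.mean(sn[col == lv])) for lv in levels})
    return effects

# Invented design: two factors at two levels, one S/N value per run.
design = [[1, 1], [1, 2], [2, 1], [2, 2]]
sn = [20.0, 22.0, 30.0, 32.0]
effects = factor_effects(design, sn)   # pick the argmax level per factor
```

The optimum parameter combination is read off by taking, for each factor, the level with the highest mean S/N.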
Federico, Alejandro; Kaufmann, Guillermo H
2006-03-20
We propose a novel approach to retrieving the phase map coded by a single closed-fringe pattern in digital speckle pattern interferometry, which is based on the estimation of the local sign of the quadrature component. We obtain the estimate by calculating the local orientation of the fringes that have previously been denoised by a weighted smoothing spline method. We carry out the procedure of sign estimation by determining the local abrupt jumps of size pi in the orientation field of the fringes and by segmenting the regions defined by these jumps. The segmentation method is based on the application of two-dimensional active contours (snakes), with which one can also estimate absent jumps, i.e., those that cannot be detected from the local orientation of the fringes. The performance of the proposed phase-retrieval technique is evaluated for synthetic and experimental fringes and compared with the results obtained with the spiral-phase- and Fourier-transform methods.
NASA Astrophysics Data System (ADS)
Xing, Y. F.; Wang, Y. S.; Shi, L.; Guo, H.; Chen, H.
2016-01-01
Based on human perceptual characteristics, a method combining the optimal wavelet-packet transform and an artificial neural network, the so-called OWPT-ANN model, is presented for psychoacoustic recognition. Comparisons of time-frequency analysis methods are performed, and an OWPT with 21 critical bands is designed for feature extraction of a sound, as is a three-layer back-propagation ANN for sound-quality (SQ) recognition. Focusing on loudness and sharpness, the OWPT-ANN model is applied to vehicle noises under different working conditions. Experimental verification shows that the OWPT can effectively transform a sound into a time-varying energy pattern like that in the human auditory system. The errors in loudness and sharpness of vehicle noise from the OWPT-ANN are all less than 5%, which suggests good accuracy of the OWPT-ANN model in SQ recognition. The proposed methodology might be regarded as a promising technique for signal processing in human-hearing-related fields in engineering.
Bi-model processing for early detection of breast tumor in CAD system
NASA Astrophysics Data System (ADS)
Mughal, Bushra; Sharif, Muhammad; Muhammad, Nazeer
2017-06-01
Early screening of suspicious masses in mammograms may reduce the mortality rate among women. This rate can be further reduced by developing computer-aided diagnosis systems that make fewer false assumptions in medical informatics. This method highlights early tumor detection in digitized mammograms. To improve the performance of this system, a novel bi-model processing algorithm is introduced. It divides the region of interest into two parts: the first is called the pre-segmented region (breast parenchyma) and the other the post-segmented region (suspicious region). The system follows a preprocessing scheme of contrast enhancement that can be utilized to segment and extract the desired features of the given mammogram. In the next phase, a hybrid feature block is presented to show the effective performance of computer-aided diagnosis. To assess the effectiveness of the proposed method, a database provided by the society of mammographic images is tested. Our experimental outcomes on this database exhibit the usefulness and robustness of the proposed method.
NASA Astrophysics Data System (ADS)
Buttgereit, R.; Roths, T.; Honerkamp, J.; Aberle, L. B.
2001-10-01
Dynamic light scattering experiments have become a powerful tool for investigating the dynamical properties of complex fluids. In many applications in both soft matter research and industry, so-called "real world" systems are of great interest. Here, the dilution of the investigated system often cannot be changed without introducing measurement artifacts, so one often has to deal with highly concentrated and turbid media. The investigation of such systems requires techniques that suppress the influence of multiple scattering, e.g., cross-correlation techniques. However, measurements of turbid as well as highly diluted media lead to data with a low signal-to-noise ratio, which complicates data analysis and leads to unreliable results. In this article a multiangle regularization method is discussed, which copes with the difficulties arising from such samples and enormously enhances the quality of the estimated solution. To demonstrate the efficiency of this multiangle regularization method, we applied it to cross-correlation functions measured on highly turbid samples.
Geant4 simulations of NIST beam neutron lifetime experiment
NASA Astrophysics Data System (ADS)
Valete, Daniel; Crawford, Bret; BL2 Collaboration
2017-09-01
A free neutron is unstable, and its decay is described by the Standard Model as the transformation of a down quark into an up quark through the weak interaction. Precise measurements of the neutron lifetime test the validity of the theory of the weak interaction and provide useful information for the predictions of the theory of Big Bang nucleosynthesis of the primordial helium abundance in the universe and the number of different types of light neutrinos Nν. The predominant experimental methods for determining the neutron lifetime are commonly called the 'beam' and 'bottle' methods, and the most recent results of each method do not agree with each other within their stated uncertainties. An improved experiment using the beam technique, which uses magnetic and electric fields to trap and guide the decay protons of a beam of cold neutrons to a detector, is in progress at the National Institute of Standards and Technology, Gaithersburg, MD, with a precision goal of 0.1. I acknowledge the support of the Cross-Disciplinary Institute at Gettysburg College.
Light Microscopy at Maximal Precision
NASA Astrophysics Data System (ADS)
Bierbaum, Matthew; Leahy, Brian D.; Alemi, Alexander A.; Cohen, Itai; Sethna, James P.
2017-10-01
Microscopy is the workhorse of the physical and life sciences, producing crisp images of everything from atoms to cells well beyond the capabilities of the human eye. However, the analysis of these images is frequently little more accurate than manual marking. Here, we revolutionize the analysis of microscopy images, extracting all the useful information theoretically contained in a complex microscope image. Using a generic, methodological approach, we extract the information by fitting experimental images with a detailed optical model of the microscope, a method we call parameter extraction from reconstructing images (PERI). As a proof of principle, we demonstrate this approach with a confocal image of colloidal spheres, improving measurements of particle positions and radii by 10-100 times over current methods and attaining the maximum possible accuracy. With this unprecedented accuracy, we measure nanometer-scale colloidal interactions in dense suspensions solely with light microscopy, a previously impossible feat. Our approach is generic and applicable to imaging methods from brightfield to electron microscopy, where we expect accuracies of 1 nm and 0.1 pm, respectively.
Study on probability distributions for evolution in modified extremal optimization
NASA Astrophysics Data System (ADS)
Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian
2010-05-01
The power law is widely believed to be a proper probability distribution for evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and in its applications to NP-hard problems, e.g., graph partitioning, graph coloring, and spin glasses. In this study, we discover that the exponential distributions, or hybrid ones (e.g., power laws with exponential cutoff) popularly used in network science, may replace the original power laws in a modified τ-EO method called the self-organized algorithm (SOA), and provide better performance than other statistical-physics-oriented methods, such as simulated annealing, τ-EO and SOA, in experiments on random Euclidean traveling salesman problems (TSP) and non-uniform instances. From the perspective of optimization, our results demonstrate that the power law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; exponential and hybrid distributions may be other choices.
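In τ-EO-style methods, solution components are ranked from worst (k = 1) to best (k = n) fitness, and the component to mutate is drawn from a distribution over ranks. Swapping the power law for an exponential changes only the rank weights. A minimal sampling sketch; the function names are ours, not from the paper:

```python
import math
import random

def _sample_from_weights(weights, rng):
    """Draw a 1-based index k with probability proportional to weights[k-1]."""
    r = rng.random() * sum(weights)
    for k, w in enumerate(weights, start=1):
        r -= w
        if r <= 0:
            return k
    return len(weights)

def sample_rank_powerlaw(n, tau, rng=random):
    """tau-EO rank choice: P(k) proportional to k^(-tau), so bad
    (low) ranks are mutated most often, but none is excluded."""
    return _sample_from_weights([k ** (-tau) for k in range(1, n + 1)], rng)

def sample_rank_exponential(n, mu, rng=random):
    """Modified-EO alternative: P(k) proportional to exp(-mu * k)."""
    return _sample_from_weights([math.exp(-mu * k) for k in range(1, n + 1)], rng)
```

Both samplers concentrate mutation on poorly fit components; the exponential decays faster in the tail, which is the distributional change the study evaluates on TSP instances.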
Discovering latent commercial networks from online financial news articles
NASA Astrophysics Data System (ADS)
Xia, Yunqing; Su, Weifeng; Lau, Raymond Y. K.; Liu, Yi
2013-08-01
Unlike most online social networks where explicit links among individual users are defined, the relations among commercial entities (e.g. firms) may not be explicitly declared in commercial Web sites. One main contribution of this article is the development of a novel computational model for the discovery of the latent relations among commercial entities from online financial news. More specifically, a CRF model which can exploit both structural and contextual features is applied to commercial entity recognition. In addition, a point-wise mutual information (PMI)-based unsupervised learning method is developed for commercial relation identification. To evaluate the effectiveness of the proposed computational methods, a prototype system called CoNet has been developed. Based on the financial news articles crawled from Google finance, the CoNet system achieves average F-scores of 0.681 and 0.754 in commercial entity recognition and commercial relation identification, respectively. Our experimental results confirm that the proposed shallow natural language processing methods are effective for the discovery of latent commercial networks from online financial news.
NASA Astrophysics Data System (ADS)
Wada, Sanehiro; Furuichi, Noriyuki; Shimada, Takashi
2017-11-01
This paper proposes the application of a novel ultrasonic pulse, called a partial inversion pulse (PIP), to the measurement of the velocity profile and flow rate in a pipe using the ultrasound time-domain correlation (UTDC) method. In general, the measured flow rate depends on the velocity profile in the pipe; thus, on-site calibration is the only method of checking the accuracy of on-site flow rate measurements. Flow rate calculation using UTDC is based on the integration of the measured velocity profile. The advantages of this method compared with the ultrasonic pulse Doppler method include an unrestricted velocity range and applicability to flow fields without a sufficient number of reflectors. However, it has been previously reported that the measurable velocity range of UTDC is limited by false detections. Considering the application of this method to on-site flow fields, the issue of velocity range is important. To reduce the effect of false detections, a PIP signal, an ultrasound signal that contains a partially inverted region, was developed in this study. The advantages of the PIP signal are that it requires little additional hardware cost and no additional software cost in comparison with conventional methods. The effects of the inversion on the characteristics of the ultrasound transmission were estimated through numerical calculation. Then, experimental measurements were performed at a national standard calibration facility for water flow rate in Japan. The experimental results demonstrate that measurements made using a PIP signal are more accurate and yield a higher detection ratio than measurements using a normal pulse signal.
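The core UTDC computation, finding the lag that maximizes the cross-correlation between two successive echo signals, can be sketched with plain NumPy; the lag, scaled by the sampling rate, sound speed and pulse repetition interval, gives the reflector velocity along the beam. This illustrates only the generic correlation step, not the PIP signal design itself, and the echo data below are synthetic:

```python
import numpy as np

def estimate_shift(echo1, echo2):
    """Estimate the sample shift of echo2 relative to echo1 from the
    peak of their cross-correlation (positive = echo2 arrives later)."""
    echo1 = np.asarray(echo1, dtype=float)
    echo2 = np.asarray(echo2, dtype=float)
    corr = np.correlate(echo2, echo1, mode="full")
    return int(np.argmax(corr)) - (len(echo1) - 1)

# A synthetic scatterer pattern and the same pattern delayed by 5 samples:
rng = np.random.default_rng(0)
base = rng.standard_normal(128)
delayed = np.roll(base, 5)
shift = estimate_shift(base, delayed)
```

A false detection in this scheme is a spurious correlation peak at the wrong lag; the PIP's partially inverted region is designed to suppress such peaks.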
Air-kerma strength determination of a new directional 103Pd source
Reed, Joshua L.; DeWerd, Larry A.; Culberson, Wesley S.
2015-01-01
Purpose: A new directional 103Pd planar source array called a CivaSheet™ has been developed by CivaTech Oncology, Inc., for potential use in low-dose-rate (LDR) brachytherapy treatments. The array consists of multiple individual polymer capsules called CivaDots, containing 103Pd and a gold shield that attenuates the radiation on one side, thus defining a hot and cold side. This novel source requires new methods to establish a source strength metric. The presence of gold material in such close proximity to the active 103Pd region causes the source spectrum to be significantly different than the energy spectra of seeds normally used in LDR brachytherapy treatments. In this investigation, the authors perform air-kerma strength (SK) measurements, develop new correction factors for these measurements based on an experimentally verified energy spectrum, and test the robustness of transferring SK to a well-type ionization chamber. Methods: SK measurements were performed with the variable-aperture free-air chamber (VAFAC) at the University of Wisconsin Medical Radiation Research Center. Subsequent measurements were then performed in a well-type ionization chamber. To realize the quantity SK from a directional source with gold material present, new methods and correction factors were considered. Updated correction factors were calculated using the mcnp 6 Monte Carlo code in order to determine SK with the presence of gold fluorescent energy lines. In addition to SK measurements, a low-energy high-purity germanium (HPGe) detector was used to experimentally verify the calculated spectrum, a sodium iodide (NaI) scintillating counter was used to verify the azimuthal and polar anisotropy, and a well-type ionization chamber was used to test the feasibility of disseminating SK values for a directional source within a cylindrically symmetric measurement volume. Results: The UW VAFAC was successfully used to measure the SK of four CivaDots with reproducibilities within 0.3%. 
Monte Carlo methods were used to calculate the UW VAFAC correction factors and the calculated spectrum emitted from a CivaDot was experimentally verified with HPGe detector measurements. The well-type ionization chamber showed minimal variation in response (<1.5%) as a function of source positioning angle, indicating that an American Association of Physicists in Medicine (AAPM) Accredited Dosimetry Calibration Laboratory calibrated well chamber would be a suitable device to transfer an SK-based calibration to a clinical user. SK per well-chamber ionization current ratios were consistent among the four dots measured. Additionally, the measurements and predictions of anisotropy show uniform emission within the solid angle of the VAFAC, which demonstrates the robustness of the SK measurement approach. Conclusions: This characterization of a new 103Pd directional brachytherapy source helps to establish calibration methods that could ultimately be used in the well-established AAPM Task Group 43 formalism. Monte Carlo methods accurately predict the changes in the energy spectrum caused by the fluorescent x-rays produced in the gold shield. PMID:26632069
Bubbles and denaturation in DNA
NASA Astrophysics Data System (ADS)
van Erp, T. S.; Cuesta-López, S.; Peyrard, M.
2006-08-01
The local opening of DNA is an intriguing phenomenon from a statistical-physics point of view, but it is also essential for DNA's biological function. For instance, the transcription and replication of our genetic code cannot take place without the unwinding of the DNA double helix. Although these biological processes are driven by proteins, there might well be a relation between these biological openings and the spontaneous bubble formation due to thermal fluctuations. Mesoscopic models, like the Peyrard-Bishop-Dauxois (PBD) model, have fairly accurately reproduced some experimental denaturation curves and the sharp phase transition in the thermodynamic limit. It is, hence, tempting to see whether these models could be used to predict the biological activity of DNA. In a previous study, we introduced a method that allows one to obtain very accurate results on this subject, which showed that some previous claims in this direction, based on molecular-dynamics studies, were premature. This could imply either that the present PBD model should be improved or that biological activity can only be predicted in a more complex framework that involves interactions with proteins and superhelical stresses. In this article, we give a detailed description of the statistical method introduced before. Moreover, for several DNA sequences, we give a thorough analysis of the bubble statistics as a function of position and bubble size, and of the so-called l-denaturation curves that can be measured experimentally. These show that some important experimental observations are missing in the present model. We discuss how the present model could be improved.
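The PBD picture above assigns each base pair a Morse on-site potential plus an anharmonic nearest-neighbour stacking term. A minimal sketch of that energy function follows; the parameter values are illustrative stand-ins, not the paper's fitted constants:

```python
import numpy as np

def pbd_potential(y, D=0.05, a=4.5, k=0.025, rho=2.0, alpha=0.35):
    """Total potential energy of a base-pair displacement profile y
    under a PBD-style model: Morse on-site terms plus anharmonic
    nearest-neighbour stacking. Parameter values are illustrative."""
    morse = D * (np.exp(-a * y) - 1.0) ** 2
    dy = y[1:] - y[:-1]
    stack = 0.5 * k * (1.0 + rho * np.exp(-alpha * (y[1:] + y[:-1]))) * dy ** 2
    return morse.sum() + stack.sum()

closed = np.zeros(10)                       # fully closed helix: zero energy
bubble = np.zeros(10); bubble[4:7] = 1.0    # a small local opening ("bubble")
```

A bubble of open base pairs raises the energy through both the Morse and the stacking terms; efficiently sampling such profiles at thermal equilibrium is what the statistical method described above addresses.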
Opto-electronic oscillator and its applications
NASA Astrophysics Data System (ADS)
Yao, X. S.; Maleki, Lute
1997-04-01
We review the properties of a new class of microwave oscillators called opto-electronic oscillators (OEO). We present theoretical and experimental results of a multi-loop technique for single mode selection. We then describe a new development called coupled OEO (COEO) in which the electrical oscillation is directly coupled with the optical oscillation, producing an OEO that generates stable optical pulses and single mode microwave oscillation simultaneously. Finally we discuss various applications of OEO.
Data-driven reverse engineering of signaling pathways using ensembles of dynamic models.
Henriques, David; Villaverde, Alejandro F; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R
2017-02-01
Despite significant efforts and remarkable progress, the inference of signaling networks from experimental data remains very challenging. The problem is particularly difficult when the objective is to obtain a dynamic model capable of predicting the effect of novel perturbations not considered during model training. The problem is ill-posed due to the nonlinear nature of these systems, the fact that only a fraction of the involved proteins and their post-translational modifications can be measured, and limitations on the technologies used for growing cells in vitro, perturbing them, and measuring their variations. As a consequence, there is a pervasive lack of identifiability. To overcome these issues, we present a methodology called SELDOM (enSEmbLe of Dynamic lOgic-based Models), which builds an ensemble of logic-based dynamic models, trains them to experimental data, and combines their individual simulations into an ensemble prediction. It also includes a model reduction step to prune spurious interactions and mitigate overfitting. SELDOM is a data-driven method, in the sense that it does not require any prior knowledge of the system: the interaction networks that act as scaffolds for the dynamic models are inferred from data using mutual information. We have tested SELDOM on a number of experimental and in silico signal transduction case-studies, including the recent HPN-DREAM breast cancer challenge. We found that its performance is highly competitive compared to state-of-the-art methods for the purpose of recovering network topology. More importantly, the utility of SELDOM goes beyond basic network inference (i.e. uncovering static interaction networks): it builds dynamic (based on ordinary differential equation) models, which can be used for mechanistic interpretations and reliable dynamic predictions in new experimental conditions (i.e. not used in the training). 
For this task, SELDOM's ensemble prediction is not only consistently better than predictions from individual models, but also often outperforms the state of the art represented by the methods used in the HPN-DREAM challenge.
NASA Astrophysics Data System (ADS)
Hristian, L.; Ostafe, M. M.; Manea, L. R.; Apostol, L. L.
2017-06-01
This work studied the grouping of combed wool fabrics intended for the manufacture of outerwear, in terms of their durability and physiological comfort indices, using Principal Component Analysis (PCA). PCA, as applied in this study, is a descriptive method of multivariate/multi-dimensional data analysis which aims to reduce, in a controlled way, the number of variables (columns) of the data matrix, as far as possible to two or three. Thus, based on the information about each group/assortment of fabrics, the nine inter-correlated variables are to be replaced by only two or three new variables called components. The goal of PCA is to extract the smallest number of components that recover most of the total information contained in the initial data.
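The reduction described above can be sketched with a plain SVD-based PCA; the nine-variable "fabric index" data below are synthetic stand-ins for the paper's durability and comfort measurements:

```python
import numpy as np

def pca_reduce(X, n_components=2):
    """Project the rows of X onto the first principal components.
    Returns (scores, explained_variance_ratio)."""
    Xc = X - X.mean(axis=0)                 # centre each variable (column)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / (len(X) - 1)             # variance along each component
    scores = Xc @ Vt[:n_components].T       # coordinates in the reduced space
    return scores, var[:n_components] / var.sum()

rng = np.random.default_rng(0)
# nine inter-correlated "indices" for 40 hypothetical fabric assortments,
# generated from two latent factors plus small noise
base = rng.normal(size=(40, 2))
X = base @ rng.normal(size=(2, 9)) + 0.05 * rng.normal(size=(40, 9))
scores, ratio = pca_reduce(X, 2)
```

Because the synthetic data live essentially in a two-dimensional subspace, the first two components recover almost all of the total variance, which is exactly the situation PCA exploits here.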
Finding Specification Pages from the Web
NASA Astrophysics Data System (ADS)
Yoshinaga, Naoki; Torisawa, Kentaro
This paper presents a method of finding a specification page on the Web for a given object (e.g., ``Ch. d'Yquem'') and its class label (e.g., ``wine''). A specification page for an object is a Web page which gives concise attribute-value information about the object (e.g., ``county''-``Sauternes'') in well-formatted structures. A simple unsupervised method using layout and symbolic decoration cues was applied to a large number of Web pages to acquire candidate attributes for each class (e.g., ``county'' for the class ``wine''). We then filter out irrelevant words from the putative attributes through an author-aware scoring function that we call site frequency. We used the acquired attributes to select a representative specification page for a given object from the Web pages retrieved by a normal search engine. Experimental results revealed that our system greatly outperformed the normal search engine in this specification-retrieval task.
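The site-frequency idea, scoring a candidate attribute by how many distinct sites (authors) use it rather than by raw occurrence counts, can be sketched as follows; the page data and site names are hypothetical:

```python
from collections import defaultdict

def site_frequency(pages):
    """Score each candidate attribute by the number of distinct sites
    that use it, so a single verbose site cannot dominate the ranking.
    `pages` is a list of (site, attribute_set) pairs."""
    sites_using = defaultdict(set)
    for site, attrs in pages:
        for attr in attrs:
            sites_using[attr].add(site)
    return {attr: len(s) for attr, s in sites_using.items()}

pages = [
    ("wine-shop.example", {"county", "vintage", "price"}),
    ("sommelier.example", {"county", "vintage", "grape"}),
    ("spam.example",      {"click here", "price"}),
]
scores = site_frequency(pages)
```

Attributes endorsed by several independent authors ("county", "vintage") outrank site-specific noise ("click here"), which is the filtering effect the abstract describes.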
NASA Astrophysics Data System (ADS)
Wu, Jiangning; Wang, Xiaohuan
The rapidly increasing number of mobile phone users and types of services leads to a great accumulation of complaint information. How to use this information to enhance the quality of customer service is a major issue at present. To handle this kind of problem, this paper presents an approach to constructing a domain knowledge map for navigating explicit and tacit knowledge in two ways: building a Topic Map-based explicit knowledge navigation model, which includes domain TM construction, a semantic topic expansion algorithm, and VSM-based similarity calculation; and building a Social Network Analysis-based tacit knowledge navigation model, which includes a multi-relational expert navigation algorithm and criteria to evaluate the performance of expert networks. In doing so, both the customer managers and operators in call centers can find the appropriate knowledge and experts quickly and accurately. The experimental results show that the above method is very effective for knowledge navigation.
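The VSM-based similarity step can be sketched with raw term counts; a production system would likely weight terms (e.g. with TF-IDF), and the complaint texts here are invented:

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Vector-space-model similarity between two tokenised texts,
    using raw term counts (a minimal sketch)."""
    va, vb = Counter(doc_a), Counter(doc_b)
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

q  = "billing error on data plan".split()
d1 = "customer reports billing error for the data plan".split()
d2 = "handset screen is cracked".split()
```

A complaint about a billing error retrieves the matching knowledge-base entry ahead of an unrelated one, which is the retrieval behaviour the navigation model relies on.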
Generation of a spiral wave using amplitude masks
NASA Astrophysics Data System (ADS)
Anguiano-Morales, Marcelino; Salas-Peimbert, Didia P.; Trujillo-Schiaffino, Gerardo
2011-09-01
Optical beams of Bessel type, whose transverse intensity profile remains unchanged under free-space propagation, are called nondiffracting beams. Experimentally, Durnin used an annular slit in the focal plane of a convergent lens to generate a Bessel beam. However, this configuration is only one of many that can be used to generate nondiffracting beams. The method can be modified to generate a required phase distribution in the beam. In this work, we propose a simple and effective method to generate spiral beams whose intensity remains invariant during propagation, using amplitude masks. Laser beams with a spiral phase, i.e., vortex beams, have attracted great interest because of their possible applications in areas ranging from laser technology, medicine, and microbiology to the production of light tweezers and optical traps. We present a study of spiral structures generated by the interference between two incomplete annular beams.
Representation of Cultural Role-Play for Training
NASA Technical Reports Server (NTRS)
Santarelli, Thomas; Pepe, Aaron; Rosenzweiz, Larry; Paulus, John; Yi, Ahn Na
2010-01-01
The Department of Defense (DoD) has successfully applied a number of methods for cultural familiarization training, ranging from stand-up classroom training, to face-to-face live role-play, to so-called smart cards. Recent interest has turned to the use of single- and multi-player gaming technologies to augment these traditional methods of cultural familiarization. One such system, termed CulturePad, has been designed as a game-based role-play environment suitable for use in training and experimentation involving cultural role-play scenarios. This paper describes the initial CulturePad effort, focused on a literature review regarding the use of role-play for cultural training and a feasibility assessment of using a game-mediated environment for role-play. A small-scale pilot involving cultural experts was conducted to collect qualitative behavioral data comparing live role-play to game-mediated role-play in a multiplayer gaming engine.
Diverse Region-Based CNN for Hyperspectral Image Classification.
Zhang, Mengmeng; Li, Wei; Du, Qian
2018-06-01
Convolutional neural networks (CNNs) are of great interest in machine learning and have demonstrated excellent performance in hyperspectral image classification. In this paper, we propose a classification framework, called diverse region-based CNN, which can encode a semantic context-aware representation to obtain promising features. By merging a diverse set of discriminative appearance factors, the resulting CNN-based representation exhibits the spatial-spectral context sensitivity that is essential for accurate pixel classification. By exploiting diverse region-based inputs to learn contextual interaction features, the proposed method is expected to have more discriminative power. The joint representation containing rich spectral and spatial information is then fed to a fully connected network, and the label of each pixel vector is predicted by a softmax layer. Experimental results with widely used hyperspectral image data sets demonstrate that the proposed method surpasses conventional deep learning-based classifiers and other state-of-the-art classifiers.
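A minimal sketch of the fusion-and-softmax stage described above: features from several regions (stand-ins for the CNN branches, which are omitted here) are concatenated into a joint representation and classified. All shapes and weights are illustrative:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def classify(region_features, W, b):
    """Concatenate per-region feature vectors into a joint
    spatial-spectral representation, apply a fully connected
    layer, and predict class probabilities with softmax."""
    joint = np.concatenate(region_features, axis=1)
    return softmax(joint @ W + b)

rng = np.random.default_rng(1)
regions = [rng.normal(size=(5, 8)) for _ in range(3)]  # 3 regions, 5 pixels, 8-dim features
W = rng.normal(size=(24, 4))                           # 24 = 3 x 8 joint dims, 4 classes
b = np.zeros(4)
probs = classify(regions, W, b)
```

Each pixel receives a proper probability distribution over the land-cover classes; in the full framework the region features would come from trained convolutional branches rather than random draws.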
A partially reflecting random walk on spheres algorithm for electrical impedance tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de
2015-12-15
In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
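Inside a subdomain with a constant coefficient, the underlying walk-on-spheres step is simple: jump to a uniform point on the largest circle contained in the domain, repeating until the walker is within eps of the boundary. A sketch for the Laplace equation on the unit disk, without the paper's reflection and interface handling:

```python
import math, random

def walk_on_spheres(x, y, boundary_value, eps=1e-3, rng=random.Random(0)):
    """One walk-on-spheres sample of the harmonic function u at (x, y)
    inside the unit disk. The shared seeded rng makes runs repeatable."""
    while True:
        d = 1.0 - math.hypot(x, y)          # distance to the unit circle
        if d < eps:
            return boundary_value(x, y)     # absorb at the (near-)boundary
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += d * math.cos(theta)            # jump uniformly on the largest
        y += d * math.sin(theta)            # circle centred at the walker

# u(x, y) = x is harmonic, so the Monte Carlo average should approach
# the value of x at the starting point
g = lambda x, y: x
est = sum(walk_on_spheres(0.3, 0.0, g) for _ in range(4000)) / 4000
```

Each sample is independent, so the estimator parallelizes trivially, which is the "embarrassingly parallel" property the abstract highlights; the paper's contribution is extending this step across coefficient interfaces and mixed boundary conditions.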
Functional feature embedded space mapping of fMRI data.
Hu, Jin; Tian, Jie; Yang, Lei
2006-01-01
We propose a new method for fMRI data analysis called Functional Feature Embedded Space Mapping (FFESM). Our work mainly focuses on experimental designs with periodic stimuli, which can be described by a number of Fourier coefficients in the frequency domain. A nonlinear dimension reduction technique, Isomap, is applied for the first time to the high-dimensional features obtained from the frequency domain of the fMRI data. Finally, the presence of activated time series is identified by a clustering method in which the information-theoretic criterion of minimum description length (MDL) is used to estimate the number of clusters. The feasibility of our algorithm is demonstrated on real human experiments. Although we focus on analyzing periodic fMRI data, the approach can be extended to analyze non-periodic fMRI data (event-related fMRI) by replacing the Fourier analysis with a wavelet analysis.
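The frequency-domain feature step can be sketched as follows: each voxel time series is summarized by the magnitudes of its first Fourier coefficients. The later stages (Isomap embedding and MDL-based clustering) are omitted, and the time series are synthetic:

```python
import numpy as np

def fourier_features(ts, n_coeffs=5):
    """Describe a voxel time series by the magnitudes of its first
    Fourier coefficients (DC excluded), a compact frequency-domain
    feature for periodic stimulation designs."""
    spec = np.fft.rfft(ts - ts.mean())
    return np.abs(spec[1:n_coeffs + 1])

t = np.arange(120)                                   # 120 scans
# "active" voxel: response locked to a 24-scan stimulus cycle, plus noise
active = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.default_rng(2).normal(size=120)
# "resting" voxel: noise only
rest = 0.1 * np.random.default_rng(3).normal(size=120)
fa, fr = fourier_features(active), fourier_features(rest)
```

The activated voxel concentrates its energy at the stimulus frequency (bin 5 = 120/24), giving it a sharply different feature vector from the resting voxel, the separation the subsequent embedding and clustering exploit.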
New fuzzy support vector machine for the class imbalance problem in medical datasets classification.
Gu, Xiaoqing; Ni, Tongguang; Wang, Hongyuan
2014-01-01
In medical dataset classification, the support vector machine (SVM) is considered to be one of the most successful methods. However, most real-world medical datasets contain some outliers/noise, and the data often have class imbalance problems. In this paper, a fuzzy support vector machine (FSVM) for the class imbalance problem (called FSVM-CIP) is presented, which can be seen as a modified class of FSVM obtained by extending manifold regularization and assigning two misclassification costs for the two classes. The proposed FSVM-CIP can be used to handle the class imbalance problem in the presence of outliers/noise, and enhances the locality maximum margin. Five real-world medical datasets from the UCI medical database, breast, heart, hepatitis, BUPA liver, and Pima diabetes, are employed to illustrate the method presented in this paper. Experimental results on these datasets show that FSVM-CIP outperforms, or is comparable to, the alternatives.
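One common FSVM ingredient is a fuzzy membership that down-weights samples far from their class centre, so outliers get less influence on the decision boundary; the paper's exact weighting, which additionally folds in the two class-imbalance costs, may differ. A sketch of the membership assignment:

```python
import numpy as np

def fuzzy_memberships(X, y, delta=1e-6):
    """Assign each training sample a fuzzy membership in (0, 1] based on
    its distance to its class centre: points near the centre get weight
    close to 1, the furthest point in each class gets weight near 0."""
    s = np.empty(len(X))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centre = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - centre, axis=1)
        s[idx] = 1.0 - d / (d.max() + delta)
    return s

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0],  # class 0, one outlier
              [3.0, 3.0], [3.1, 3.0]])                          # class 1
y = np.array([0, 0, 0, 0, 1, 1])
s = fuzzy_memberships(X, y)
```

These memberships would then scale the per-sample slack penalties in the SVM objective, so the lone outlier at (5, 5) contributes almost nothing to the fit.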
Infrared small target detection based on multiscale center-surround contrast measure
NASA Astrophysics Data System (ADS)
Fu, Hao; Long, Yunli; Zhu, Ran; An, Wei
2018-04-01
Infrared (IR) small target detection plays a critical role in Infrared Search And Track (IRST) systems. Although it has been studied for years, difficulties remain in cluttered environments. Motivated by the way humans discriminate small targets from a natural scene, namely by the signature of discontinuity between an object and its neighboring regions, we develop an efficient method for infrared small target detection called the multiscale center-surround contrast measure (MCSCM). First, to determine the maximum neighboring window size, an entropy-based window selection technique is used. Then, we construct a novel multiscale center-surround contrast measure to calculate the saliency map. Compared with the original image, the MCSCM map has less background clutter and residual noise. Subsequently, a simple threshold is used to segment the target. Experimental results show that our method achieves better performance.
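A single scale of a center-surround contrast measure can be sketched as the mean of a small centre window minus the mean of its surrounding ring; the window sizes here are illustrative, and the paper's entropy-based window selection and multiscale combination are omitted:

```python
import numpy as np

def center_surround_contrast(img, r_in=1, r_out=4):
    """Saliency map for small-target detection: for each interior pixel,
    mean of a (2*r_in+1)^2 centre window minus mean of the surrounding
    ring out to radius r_out. Border pixels are left at zero."""
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(r_out, H - r_out):
        for j in range(r_out, W - r_out):
            patch = img[i - r_out:i + r_out + 1, j - r_out:j + r_out + 1]
            centre = img[i - r_in:i + r_in + 1, j - r_in:j + r_in + 1]
            surround = (patch.sum() - centre.sum()) / (patch.size - centre.size)
            out[i, j] = centre.mean() - surround
    return out

img = np.full((32, 32), 10.0)        # flat background
img[15:17, 15:17] += 40.0            # a dim 2x2 point "target"
sal = center_surround_contrast(img)
```

The flat background maps to zero contrast while the point target produces a strong peak, so a simple threshold on the saliency map isolates it, as described above.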
Detection of shifted double JPEG compression by an adaptive DCT coefficient model
NASA Astrophysics Data System (ADS)
Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua
2014-12-01
In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. This phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches can help in detecting and locating the tampered region. However, current SDJPEG detection methods do not provide satisfactory results, especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches have been analyzed, and a discriminative feature has been proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach achieves much better results than some existing approaches in SDJPEG patch detection, especially when the patch size is small.
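The front end of such detectors is the 8x8 block DCT that JPEG itself applies; a sketch built from the orthonormal DCT-II matrix (the paper's adaptive coefficient modeling and mode selection are omitted):

```python
import numpy as np

def dct2_8x8(block):
    """2-D DCT-II of an 8x8 block via the orthonormal DCT matrix C,
    computed as C @ block @ C.T -- the per-block transform whose
    coefficient statistics SDJPEG detectors model."""
    n = 8
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)              # DC row gets the 1/sqrt(n) weight
    return C @ block @ C.T

flat = np.full((8, 8), 100.0)               # a uniform block
coeffs = dct2_8x8(flat)
```

For a uniform block all energy lands in the DC coefficient; double compression with shifted block grids perturbs how energy spreads over the AC modes, which is the statistical signature the adaptive coefficient model captures.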