Sample records for existing experimental methods

  1. Combining existing numerical models with data assimilation using weighted least-squares finite element methods.

    PubMed

    Rajaraman, Prathish K; Manteuffel, T A; Belohlavek, M; Heys, Jeffrey J

    2017-01-01

    A new approach has been developed for combining and enhancing the results from an existing computational fluid dynamics model with experimental data using the weighted least-squares finite element method (WLSFEM). Development of the approach was motivated by the existence of both limited experimental blood velocity data in the left ventricle and inexact numerical models of the same flow. Limitations of the experimental data include measurement noise and having data only along a two-dimensional plane. Most numerical modeling approaches do not provide the flexibility to assimilate noisy experimental data. We previously developed an approach that could assimilate experimental data into the process of numerically solving the Navier-Stokes equations, but the approach was limited because it required the use of specific finite element methods for solving all model equations and did not support alternative numerical approximation methods. The new approach presented here allows virtually any numerical method to be used for approximately solving the Navier-Stokes equations, and then the WLSFEM is used to combine the experimental data with the numerical solution of the model equations in a final step. The approach dynamically adjusts the influence of the experimental data on the numerical solution so that more accurate data are more closely matched by the final solution and less accurate data are not closely matched. The new approach is demonstrated on different test problems and provides significantly reduced computational costs compared with many previous methods for data assimilation. Copyright © 2016 John Wiley & Sons, Ltd.
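The core idea of weighting data by its accuracy can be illustrated with a minimal sketch (not the authors' WLSFEM implementation): at each point, a least-squares blend of the model solution and a noisy measurement, with a weight inversely proportional to the measurement variance, so accurate data pull the solution strongly while noisy data barely perturb it. The function name and the pointwise scalar blend are illustrative assumptions.

```python
import numpy as np

def wls_blend(u_model, data, sigma, alpha=1.0):
    """Blend a model solution with noisy point data by weighted least squares.

    At each point, minimises alpha*(u - u_model)^2 + w*(u - d)^2 with
    w = 1/sigma^2, so accurate data (small sigma) dominate and very noisy
    data leave the model solution almost unchanged.
    """
    w = 1.0 / np.asarray(sigma) ** 2
    return (alpha * u_model + w * data) / (alpha + w)

# Same model value and same measurement, but different measurement noise:
# the accurate measurement pulls the solution strongly, the noisy one does not.
u = np.array([1.0, 1.0])
d = np.array([2.0, 2.0])
print(wls_blend(u, d, sigma=np.array([0.1, 10.0])))
```

The per-point closed form stands in for the global least-squares functional that an actual finite element formulation would minimise.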

  2. Broadcasting a Lab Measurement over Existing Conductor Networks

    ERIC Educational Resources Information Center

    Knipp, Peter A.

    2009-01-01

    Students learn about physical laws and the scientific method when they analyze experimental data in a laboratory setting. Three common sources exist for the experimental data that they analyze: (1) "hands-on" measurements by the students themselves, (2) electronic transfer (by downloading a spreadsheet, video, or computer-aided data-acquisition…

  3. Quasi-experimental study designs series-paper 13: realizing the full potential of quasi-experiments for health research.

    PubMed

    Rockers, Peter C; Tugwell, Peter; Røttingen, John-Arne; Bärnighausen, Till

    2017-09-01

    Although the number of quasi-experiments conducted by health researchers has increased in recent years, there clearly remains unrealized potential for using these methods for causal evaluation of health policies and programs globally. This article proposes five prescriptions for capturing the full value of quasi-experiments for health research. First, new funding opportunities targeting proposals that use quasi-experimental methods should be made available to a broad pool of health researchers. Second, administrative data from health programs, often amenable to quasi-experimental analysis, should be made more accessible to researchers. Third, training in quasi-experimental methods should be integrated into existing health science graduate programs to increase global capacity to use these methods. Fourth, clear guidelines for primary research and synthesis of evidence from quasi-experiments should be developed. Fifth, strategic investments should be made to continue to develop new innovations in quasi-experimental methodologies. Tremendous opportunities exist to expand the use of quasi-experimental methods to increase our understanding of which health programs and policies work and which do not. Health researchers should continue to expand their commitment to rigorous causal evaluation with quasi-experimental methods, and international institutions should increase their support for these efforts. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Calibration of streamflow gauging stations at the Tenderfoot Creek Experimental Forest

    Treesearch

    Scott W. Woods

    2007-01-01

    We used tracer based methods to calibrate eleven streamflow gauging stations at the Tenderfoot Creek Experimental Forest in western Montana. At six of the stations the measured flows were consistent with the existing rating curves. At Lower and Upper Stringer Creek, Upper Sun Creek and Upper Tenderfoot Creek the published flows, based on the existing rating curves,...

  5. Experimental and CFD evidence of multiple solutions in a naturally ventilated building.

    PubMed

    Heiselberg, P; Li, Y; Andersen, A; Bjerre, M; Chen, Z

    2004-02-01

    This paper considers the existence of multiple solutions to natural ventilation of a simple one-zone building, driven by combined thermal and opposing wind forces. The present analysis is an extension of an earlier analytical study of natural ventilation in a fully mixed building, and includes the effect of thermal stratification. Both computational and experimental investigations were carried out in parallel with an analytical investigation. When flow is dominated by thermal buoyancy, it was found experimentally that there is thermal stratification. When the flow is wind-dominated, the room is fully mixed. Results from all three methods have shown that the hysteresis phenomena exist. Under certain conditions, two different stable steady-state solutions are found to exist by all three methods for the same set of parameters. As shown by both the computational fluid dynamics (CFD) and experimental results, one of the solutions can shift to another when there is a sufficient perturbation. These results have probably provided the strongest evidence so far for the conclusion that multiple states exist in natural ventilation of simple buildings. Different initial conditions in the CFD simulations led to different solutions, suggesting that caution must be taken when adopting the commonly used 'zero initialization'.

  6. Experimental Demonstration of In-Place Calibration for Time Domain Microwave Imaging System

    NASA Astrophysics Data System (ADS)

    Kwon, S.; Son, S.; Lee, K.

    2018-04-01

    In this study, an experimental demonstration of in-place calibration was conducted using the developed time domain measurement system. Experiments were conducted using three calibration methods: the proposed in-place calibration and two existing calibrations, namely array-rotation and differential calibration. The in-place calibration uses dual receivers located at an equal distance from the transmitter. The received signals at the dual receivers contain similar unwanted signals, that is, the directly received signal and antenna coupling. In contrast to the simulations, the antennas are not perfectly matched and there might be unexpected environmental errors. Thus, we used the developed experimental system to demonstrate the proposed method. The possible problems of low signal-to-noise ratio and clock jitter, which may exist in time domain systems, were rectified by averaging repeatedly measured signals. According to the experimental results, the tumor was successfully detected using all three calibration methods. The cross correlation was calculated against the reconstructed image of the ideal differential calibration for a quantitative comparison between the existing rotation calibration and the proposed in-place calibration. The mean value of cross correlation between the in-place calibration and the ideal differential calibration was 0.80, and the mean value of cross correlation for the rotation calibration was 0.55. Furthermore, the simulation results were compared with the experimental results to verify the in-place calibration method. A quantitative analysis was also performed, and the experimental results show a tendency similar to the simulation.
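The image comparison described here can be sketched as a zero-mean normalised cross correlation between two reconstructed images; the exact metric the authors used may differ, so this is an illustrative assumption.

```python
import numpy as np

def ncc(img_a, img_b):
    """Zero-mean normalised cross correlation between two same-size images.

    Returns a value in [-1, 1]; 1 means the two reconstructions are
    identical up to brightness and contrast.
    """
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

rng = np.random.default_rng(0)
ref = rng.random((32, 32))               # stand-in for the ideal-calibration image
noisy = ref + 0.1 * rng.random((32, 32)) # similar reconstruction -> ncc near 1
other = rng.random((32, 32))             # unrelated image -> ncc near 0
print(ncc(ref, noisy), ncc(ref, other))
```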

  7. Highly Efficient Design-of-Experiments Methods for Combining CFD Analysis and Experimental Data

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Haller, Harold S.

    2009-01-01

    It is the purpose of this study to examine the impact of "highly efficient" Design-of-Experiments (DOE) methods for combining sets of CFD-generated analysis data with smaller sets of experimental test data in order to accurately predict performance results where experimental test data were not obtained. The study examines the impact of micro-ramp flow control on the shock wave boundary layer (SWBL) interaction, where a complete paired set of data exists from both CFD analysis and experimental measurements. By combining the complete set of CFD analysis data, composed of fifteen (15) cases, with a smaller subset of experimental test data containing four or five (4/5) cases, compound data sets (CFD/EXP) were generated that allow the prediction of the complete set of experimental results. No statistical differences were found to exist between the combined (CFD/EXP) generated data sets and the complete experimental data set composed of fifteen (15) cases. The same optimal micro-ramp configuration was obtained using the (CFD/EXP) generated data as with the complete set of experimental data, and the DOE response surfaces generated by the two data sets were also not statistically different.
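The idea of fitting a DOE response surface to a combined CFD/experimental data set can be sketched with a one-factor quadratic least-squares fit; the 15/4 split mirrors the case counts in the abstract, but the synthetic data, the single factor, and the polynomial form are all assumptions.

```python
import numpy as np

def fit_response_surface(x, y):
    """Least-squares quadratic response surface y ~ c0 + c1*x + c2*x^2."""
    A = np.vander(x, 3, increasing=True)   # columns: 1, x, x^2
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Hypothetical one-factor study: 15 "CFD" cases plus 4 "experimental" cases.
x_cfd = np.linspace(0.0, 1.0, 15)
y_cfd = 1.0 - 4.0 * (x_cfd - 0.5) ** 2          # synthetic CFD trend
x_exp = np.array([0.1, 0.4, 0.6, 0.9])
y_exp = 1.0 - 4.0 * (x_exp - 0.5) ** 2 + 0.02   # small experimental offset
coef = fit_response_surface(np.concatenate([x_cfd, x_exp]),
                            np.concatenate([y_cfd, y_exp]))
optimum = -coef[1] / (2.0 * coef[2])            # vertex of the fitted quadratic
print(coef, optimum)                             # optimum stays near x = 0.5
```

The point of the combined fit is that the sparse experimental cases nudge the CFD-built surface without changing the predicted optimal configuration, echoing the abstract's finding.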

  8. Comparison of OpenFOAM and EllipSys3D actuator line methods with (NEW) MEXICO results

    NASA Astrophysics Data System (ADS)

    Nathan, J.; Meyer Forsting, A. R.; Troldborg, N.; Masson, C.

    2017-05-01

    The Actuator Line Method has existed for more than a decade and has become a well-established choice for simulating wind rotors in computational fluid dynamics. Numerous implementations exist and are used in the wind energy research community. These codes have been verified against experimental data such as the MEXICO experiment, but comparisons against other codes have often been made only on a very broad scale. Therefore, this study attempts first a validation by comparing two different implementations, namely an adapted version of SOWFA/OpenFOAM and EllipSys3D, and also a verification by comparing against experimental results from the MEXICO and NEW MEXICO experiments.

  9. Adaptive identification of vessel's added moments of inertia with program motion

    NASA Astrophysics Data System (ADS)

    Alyshev, A. S.; Melnikov, V. G.

    2018-05-01

    In this paper, we propose a new experimental method for determining the moments of inertia of the ship model. The paper gives a brief review of existing methods, a description of the proposed method and experimental stand, test procedures and calculation formulas and experimental results. The proposed method is based on the energy approach with special program motions. The ship model is fixed in a special rack consisting of a torsion element and a set of additional servo drives with flywheels (reactive wheels), which correct the motion. The servo drives with an adaptive controller provide the symmetry of the motion, which is necessary for the proposed identification procedure. The effectiveness of the proposed approach is confirmed by experimental results.

  10. Numerical simulation for the air entrainment of aerated flow with an improved multiphase SPH model

    NASA Astrophysics Data System (ADS)

    Wan, Hang; Li, Ran; Pu, Xunchi; Zhang, Hongwei; Feng, Jingjie

    2017-11-01

    Aerated flow is a complex hydraulic phenomenon that exists widely in the field of environmental hydraulics. It is generally characterised by large deformation and violent fragmentation of the free surface. Compared to Eulerian methods (the volume-of-fluid (VOF) method or the rigid-lid hypothesis method), the existing single-phase Smoothed Particle Hydrodynamics (SPH) method has performed well in solving particle motion. A lack of research on interphase interaction and air concentration, however, has limited the application of the SPH model. In our study, an improved multiphase SPH model is presented to simulate aerated flows. A drag force was included in the momentum equation to ensure the accuracy of the air-particle slip velocity. Furthermore, a calculation method for air concentration was developed to analyse the air entrainment characteristics. Two case studies were used to simulate the hydraulic and air entrainment characteristics, and the simulation results agree well with the experimental results.
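The role of a drag force in controlling the air-particle slip velocity can be illustrated with a minimal sketch; the linear drag law, the coefficient, and the explicit Euler update below are assumptions for illustration, not the paper's actual momentum-equation term.

```python
def drag_acceleration(v_air, v_water, coeff=1.0):
    """Hypothetical linear drag on an air particle toward the local water
    velocity; it acts to reduce the slip velocity (v_air - v_water)."""
    return coeff * (v_water - v_air)

# One-dimensional explicit Euler integration: the slip velocity decays,
# so the air particle relaxes toward the local water velocity.
v_air, v_water, dt = 2.0, 0.5, 0.1
for _ in range(50):
    v_air += dt * drag_acceleration(v_air, v_water)
print(v_air)  # relaxes toward v_water = 0.5
```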

  11. A unified framework for unraveling the functional interaction structure of a biomolecular network based on stimulus-response experimental data.

    PubMed

    Cho, Kwang-Hyun; Choo, Sang-Mok; Wellstead, Peter; Wolkenhauer, Olaf

    2005-08-15

    We propose a unified framework for the identification of functional interaction structures of biomolecular networks in a way that leads to a new experimental design procedure. In developing our approach, we have built upon previous work. Thus we begin by pointing out some of the restrictions associated with existing structure identification methods and point out how these restrictions may be eased. In particular, existing methods use specific forms of experimental algebraic equations with which to identify the functional interaction structure of a biomolecular network. In our work, we employ an extended form of these experimental algebraic equations which, while retaining their merits, also overcome some of their disadvantages. Experimental data are required in order to estimate the coefficients of the experimental algebraic equation set associated with the structure identification task. However, experimentalists are rarely provided with guidance on which parameters to perturb, and to what extent, to perturb them. When a model of network dynamics is required then there is also the vexed question of sample rate and sample time selection to be resolved. Supplying some answers to these questions is the main motivation of this paper. The approach is based on stationary and/or temporal data obtained from parameter perturbations, and unifies the previous approaches of Kholodenko et al. (PNAS 99 (2002) 12841-12846) and Sontag et al. (Bioinformatics 20 (2004) 1877-1886). By way of demonstration, we apply our unified approach to a network model which cannot be properly identified by existing methods. Finally, we propose an experiment design methodology, which is not limited by the amount of parameter perturbations, and illustrate its use with an in numero example.

  12. Evaluation of Traditional and Technology-Based Grocery Store Nutrition Education

    ERIC Educational Resources Information Center

    Schultz, Jennifer; Litchfield, Ruth

    2016-01-01

    Background: A literature gap exists for grocery interventions with realistic resource expectations; few technology-based publications exist, and none document traditional comparison. Purpose: Compare grocery store traditional aisle demonstrations (AD) and technology-based (TB) nutrition education treatments. Methods: A quasi-experimental 4-month…

  13. Laboratory investigations of earthquake dynamics

    NASA Astrophysics Data System (ADS)

    Xia, Kaiwen

    In this thesis, earthquake dynamics are investigated through controlled laboratory experiments designed to mimic natural earthquake scenarios. The earthquake dynamic rupturing process itself is a complicated phenomenon, involving dynamic friction, wave propagation, and heat production. Because controlled experiments can produce results without the assumptions needed in theoretical and numerical analysis, the experimental method is advantageous over theoretical and numerical methods. Our laboratory fault is composed of carefully cut photoelastic polymer plates (Homalite-100, polycarbonate) held together by uniaxial compression. As a unique element of the experimental design, a controlled exploding-wire technique provides the triggering mechanism of laboratory earthquakes. Three important components of real earthquakes (i.e., pre-existing fault, tectonic loading, and triggering mechanism) correspond to and are simulated by frictional contact, uniaxial compression, and the exploding-wire technique. Dynamic rupturing processes are visualized using the photoelastic method and are recorded via a high-speed camera. Our experimental methodology, which is full-field, in situ, and non-intrusive, has better control and diagnostic capacity than other existing experimental methods. Using this experimental approach, we have investigated several problems: the dynamics of earthquake faulting along homogeneous faults separating identical materials, earthquake faulting along inhomogeneous faults separating materials with different wave speeds, and earthquake faulting along faults with a finite low-wave-speed fault core. We have observed supershear ruptures, sub-Rayleigh to supershear rupture transition, crack-like to pulse-like rupture transition, the self-healing (Heaton) pulse, and rupture directionality.

  14. Theoretical and experimental investigation of supersonic aerodynamic characteristics of a twin-fuselage concept

    NASA Technical Reports Server (NTRS)

    Wood, R. M.; Miller, D. S.; Brentner, K. S.

    1983-01-01

    A theoretical and experimental investigation has been conducted to evaluate the fundamental supersonic aerodynamic characteristics of a generic twin-body model at a Mach number of 2.70. Results show that existing aerodynamic prediction methods are adequate for making preliminary aerodynamic estimates.

  15. Robust volcano plot: identification of differential metabolites in the presence of outliers.

    PubMed

    Kumar, Nishith; Hoque, Md Aminul; Sugimoto, Masahiro

    2018-04-11

    The identification of differential metabolites in metabolomics is still a big challenge and plays a prominent role in metabolomics data analyses. Metabolomics datasets often contain outliers because of analytical, experimental, and biological ambiguity, but the currently available differential metabolite identification techniques are sensitive to outliers. We propose a kernel-weight-based outlier-robust volcano plot for identifying differential metabolites from noisy metabolomics datasets. Two numerical experiments are used to evaluate the performance of the proposed technique against nine existing techniques, including the t-test and the Kruskal-Wallis test. Artificially generated data with outliers reveal that the proposed method results in a lower misclassification error rate and a greater area under the receiver operating characteristic curve compared with existing methods. An experimentally measured breast cancer dataset to which outliers were artificially added reveals that our proposed method produces only two non-overlapping differential metabolites, whereas the other nine methods produced between seven and 57 non-overlapping differential metabolites. Our data analyses show that the performance of the proposed differential metabolite identification technique is better than that of existing methods. Thus, the proposed method can contribute to analysis of metabolomics data with outliers. The R package and user manual of the proposed method are available at https://github.com/nishithkumarpaul/Rvolcano .
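A kernel-weighted robust statistic of the kind described can be sketched as follows: Gaussian kernel weights computed from each value's distance to its group median (scaled by the MAD) down-weight outliers before a weighted fold change is computed. The specific kernel, the MAD scaling, and the function names are assumptions, not the authors' exact formulation.

```python
import numpy as np

def kernel_weights(x, bandwidth=1.0):
    """Gaussian kernel weights centred on the median, scaled by the MAD:
    values far from the bulk of the data receive near-zero weight."""
    med = np.median(x)
    mad = np.median(np.abs(x - med)) + 1e-12
    z = (x - med) / (bandwidth * 1.4826 * mad)   # 1.4826: MAD -> sigma for normals
    return np.exp(-0.5 * z ** 2)

def robust_log_fold_change(group_a, group_b):
    """Weighted-mean difference of log abundances, robust to gross outliers."""
    wa, wb = kernel_weights(group_a), kernel_weights(group_b)
    return np.average(group_a, weights=wa) - np.average(group_b, weights=wb)

a = np.array([5.0, 5.1, 4.9, 5.0, 50.0])   # one gross outlier in group A
b = np.array([3.0, 3.1, 2.9, 3.0])
print(robust_log_fold_change(a, b))         # near 2: the outlier is suppressed
```

A plain mean difference for the same data would be badly inflated by the single outlier; the kernel weights keep the estimate near the bulk of each group.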

  16. A sampling framework for incorporating quantitative mass spectrometry data in protein interaction analysis.

    PubMed

    Tucker, George; Loh, Po-Ru; Berger, Bonnie

    2013-10-04

    Comprehensive protein-protein interaction (PPI) maps are a powerful resource for uncovering the molecular basis of genetic interactions and providing mechanistic insights. Over the past decade, high-throughput experimental techniques have been developed to generate PPI maps at proteome scale, first using yeast two-hybrid approaches and more recently via affinity purification combined with mass spectrometry (AP-MS). Unfortunately, data from both protocols are prone to both high false positive and false negative rates. To address these issues, many methods have been developed to post-process raw PPI data. However, with few exceptions, these methods only analyze binary experimental data (in which each potential interaction tested is deemed either observed or unobserved), neglecting quantitative information available from AP-MS such as spectral counts. We propose a novel method for incorporating quantitative information from AP-MS data into existing PPI inference methods that analyze binary interaction data. Our approach introduces a probabilistic framework that models the statistical noise inherent in observations of co-purifications. Using a sampling-based approach, we model the uncertainty of interactions with low spectral counts by generating an ensemble of possible alternative experimental outcomes. We then apply the existing method of choice to each alternative outcome and aggregate results over the ensemble. We validate our approach on three recent AP-MS data sets and demonstrate performance comparable to or better than state-of-the-art methods. Additionally, we provide an in-depth discussion comparing the theoretical bases of existing approaches and identify common aspects that may be key to their performance. Our sampling framework extends the existing body of work on PPI analysis using binary interaction data to apply to the richer quantitative data now commonly available through AP-MS assays. 
This framework is quite general, and many enhancements are likely possible. Fruitful future directions may include investigating more sophisticated schemes for converting spectral counts to probabilities and applying the framework to direct protein complex prediction methods.
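The sampling idea can be sketched as follows: map each spectral count to an interaction probability, then draw an ensemble of binary interaction outcomes so that low-count interactions flip between observed and unobserved across samples, while high-count interactions are almost always observed. The count-to-probability scheme here is a hypothetical placeholder; the abstract itself notes that more sophisticated conversions are a future direction.

```python
import random

def count_to_probability(count, saturation=10):
    """Hypothetical scheme: interaction probability grows with spectral
    count and saturates at 1."""
    return min(count / saturation, 1.0)

def ensemble_outcomes(counts, n_samples=1000, seed=0):
    """Sample an ensemble of binary interaction outcomes; each low-count
    interaction varies between observed/unobserved across the ensemble."""
    rng = random.Random(seed)
    return [{pair: rng.random() < count_to_probability(c)
             for pair, c in counts.items()}
            for _ in range(n_samples)]

counts = {("A", "B"): 9, ("A", "C"): 2}   # high vs low spectral counts
ens = ensemble_outcomes(counts)
freq_ab = sum(s[("A", "B")] for s in ens) / len(ens)
freq_ac = sum(s[("A", "C")] for s in ens) / len(ens)
print(freq_ab, freq_ac)
```

In the full framework, the existing binary-data PPI method would be run on each sampled outcome and its results aggregated over the ensemble.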

  17. Laboratory test methods for combustion stability properties of solid propellants

    NASA Technical Reports Server (NTRS)

    Strand, L. D.; Brown, R. S.

    1992-01-01

    An overview is presented of experimental methods for determining the combustion-stability properties of solid propellants. The methods are generally based either on the temporal response to an initial disturbance or on external means of generating the required oscillations. The size distributions of condensed-phase combustion products are characterized by means of the experimental approaches. The 'T-burner' approach is shown to assist in the derivation of pressure-coupled driving contributions and particle damping in solid-propellant rocket motors. Other techniques examined include the rotating-valve apparatus, the impedance tube, the modulated throat-acoustic damping burner, and the magnetic flowmeter. The paper shows that experimental methods do not exist for measuring the interactions between acoustic velocity oscillations and burning propellant.

  18. a Theoretical and Experimental Investigation of 1/F Noise in the Alpha Decay Rates of AMERICIUM-241.

    NASA Astrophysics Data System (ADS)

    Pepper, Gary T.

    New experimental methods and data analysis techniques were used to investigate the hypothesis of the existence of 1/f noise in alpha-particle emission rates for ^{241}Am. Experimental estimates of the flicker floor were found to be almost two orders of magnitude less than Handel's theoretical prediction and previous measurements. The existence of a flicker floor for ^{57}Co decay, a process in which no charged particles are emitted, indicates that instrumental instability is likely responsible for the values of the flicker floor obtained. The experimental results and the theoretical arguments presented indicate that a re-examination of Handel's theory of 1/f noise is appropriate. Methods of numerical simulation of noise processes with a 1/f^n power spectral density were developed. These were used to investigate various statistical aspects of 1/f^n noise. The probability density function for the Allan variance was investigated in order to establish confidence limits for the observations made. The effect of using grouped (correlated) data for evaluating the Allan variance was also investigated.
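The Allan variance used in this kind of stability analysis has a standard two-sample form, sketched below for a series of consecutive interval averages; for white noise it coincides with the ordinary variance, while drifts and 1/f-type noise inflate it.

```python
import numpy as np

def allan_variance(y):
    """Two-sample (Allan) variance of consecutive interval averages y_i:
    sigma_y^2 = <(y_{i+1} - y_i)^2> / 2."""
    d = np.diff(np.asarray(y, dtype=float))
    return 0.5 * np.mean(d ** 2)

# For unit-variance white noise the Allan variance is close to 1;
# adding a linear drift inflates it only slightly at this sample spacing
# but shows up strongly at longer averaging times.
rng = np.random.default_rng(1)
white = rng.standard_normal(10000)
print(allan_variance(white))
```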

  19. Development of the psychological impact of tinnitus interview: a clinician-administered measure of tinnitus-related distress.

    PubMed

    Henry, J L; Kangas, M; Wilson, P H

    2001-01-01

    The development of valid and reliable methods for assessing psychological aspects of tinnitus continues to be an important goal of research. Such assessment methods are potentially useful in clinical and research contexts. Existing self-report measures have a number of disadvantages, and so a need exists to develop a form of assessment that is less open to response bias and the effects of experimental demand. A new approach, the Psychological Impact of Tinnitus Interview (PITI), is described, and some preliminary data on its psychometric properties are reported. The results suggest that the PITI is capable of providing a measure of separate, relatively independent dimensions of tinnitus-related distress--namely, sleep difficulties, general distress, mood, suicidal aspects, and avoidance of or interference with normal activities. This method may lead to more refined measures of these dimensions of tinnitus-related psychological difficulties. The PITI should be regarded as a promising assessment tool for use in experimental settings, pending further work on its content, coding method, and administration.

  20. Boundary Electron and Beta Dosimetry-Quantification of the Effects of Dissimilar Media on Absorbed Dose

    NASA Astrophysics Data System (ADS)

    Nunes, Josane C.

    1991-02-01

    This work quantifies the changes effected in electron absorbed dose to a soft-tissue equivalent medium when part of this medium is replaced by a material that is not soft-tissue equivalent. That is, heterogeneous dosimetry is addressed. Radionuclides which emit beta particles are the electron sources of primary interest. They are used in brachytherapy and in nuclear medicine: for example, beta-ray applicators made with strontium-90 are employed in certain ophthalmic treatments and iodine-131 is used to test thyroid function. More recent medical procedures under development and which involve beta radionuclides include radioimmunotherapy and radiation synovectomy; the first is a cancer modality and the second deals with the treatment of rheumatoid arthritis. In addition, the possibility of skin surface contamination exists whenever there is handling of radioactive material. Determination of absorbed doses in the examples of the preceding paragraph requires considering boundaries of interfaces. Whilst the Monte Carlo method can be applied to boundary calculations, for routine work such as in clinical situations, or in other circumstances where doses need to be determined quickly, analytical dosimetry would be invaluable. Unfortunately, few analytical methods for boundary beta dosimetry exist. Furthermore, the accuracy of results from both Monte Carlo and analytical methods has to be assessed. Although restricted to one radionuclide, phosphorus-32, the experimental data obtained in this work serve several purposes, one of which is to provide standards against which calculated results can be tested. The experimental data also contribute to the relatively sparse set of published boundary dosimetry data. At the same time, they may be useful in developing analytical boundary dosimetry methodology. The first application of the experimental data is demonstrated.
Results from two Monte Carlo codes and two analytical methods, which were developed elsewhere, are compared with experimental data. Monte Carlo results compare satisfactorily with experimental results for the boundaries considered. The agreement with experimental results for air interfaces is of particular interest because of discrepancies reported previously by another investigator who used data obtained from a different experimental technique. Results from one of the analytical methods differ significantly from the experimental data obtained here. The second analytical method provided data which approximate experimental results to within 30%. This is encouraging, but it remains to be determined whether this method performs equally well for other source energies.

  1. Plans for Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Ballmann, Josef; Bhatia, Kumar; Blades, Eric; Boucke, Alexander; Chwalowski, Pawel; Dietz, Guido; Dowell, Earl; Florance, Jennifer P.; Hansen, Thorsten

    2011-01-01

    This paper summarizes the plans for the first Aeroelastic Prediction Workshop. The workshop is designed to assess the state of the art of computational methods for predicting unsteady flow fields and aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques, and to identify computational and experimental areas needing additional research and development. Three subject configurations have been chosen from existing wind tunnel data sets where there is pertinent experimental data available for comparison. For each case chosen, the wind tunnel testing was conducted using forced oscillation of the model at specified frequencies.

  2. Ocular Chromatic Aberrations and Their Effects on Polychromatic Retinal Image Quality

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoxiao

    Previous studies of ocular chromatic aberrations have concentrated on chromatic difference of focus (CDF). Less is known about the chromatic difference of image position (CDP) in the peripheral retina, and no experimental attempt has been made to measure the ocular chromatic difference of magnification (CDM). Consequently, theoretical modelling of human eyes is incomplete. The insufficient knowledge of ocular chromatic aberrations is partially responsible for two unsolved applied vision problems: (1) how to improve vision by correcting ocular chromatic aberration? (2) what is the impact of ocular chromatic aberration on the use of isoluminance gratings as a tool in spatial-color vision? Using optical ray tracing methods, MTF analysis methods of image quality, and psychophysical methods, I have developed a more complete model of ocular chromatic aberrations and their effects on vision. The ocular CDM was determined psychophysically by measuring the tilt in the apparent frontal parallel plane (AFPP) induced by interocular difference in image wavelength. This experimental result was then used to verify a theoretical relationship between the ocular CDM, the ocular CDF, and the entrance pupil of the eye. In the retinal image after correcting the ocular CDF with existing achromatizing methods, two forms of chromatic aberration (CDM and chromatic parallax) were examined. The CDM was predicted by theoretical ray tracing and measured with the same method used to determine ocular CDM. The chromatic parallax was predicted with a nodal ray model and measured with the two-color vernier alignment method. The influence of these two aberrations on the polychromatic MTF was calculated. Using this improved model of ocular chromatic aberration, luminance artifacts in the images of isoluminance gratings were calculated. The predicted luminance artifacts were then compared with experimental data from previous investigators. 
The results show that: (1) A simple relationship exists between two major chromatic aberrations and the location of the pupil; (2) The ocular CDM is measurable and varies among individuals; (3) All existing methods to correct ocular chromatic aberration face another aberration, chromatic parallax, which is inherent in the methodology; (4) Ocular chromatic aberrations have the potential to contaminate psychophysical experimental results on human spatial-color vision.

  3. Advanced Computational Techniques for Hypersonic Propulsion

    NASA Technical Reports Server (NTRS)

    Povinelli, Louis A.

    1996-01-01

    CFD has played a major role in the resurgence of hypersonic flight, on the premise that numerical methods will allow us to perform simulations at conditions for which no ground-test capability exists. Validation of CFD methods is being established using the available experimental database, which is limited to conditions below Mach 8. It is important, however, to recognize the limitations involved in the extrapolation process as well as the deficiencies that exist in numerical methods at the present time. Current features of CFD codes are examined for application to propulsion system components. The shortcomings in simulation and modeling are identified and discussed.

  4. Experimental design and quantitative analysis of microbial community multiomics.

    PubMed

    Mallick, Himel; Ma, Siyuan; Franzosa, Eric A; Vatanen, Tommi; Morgan, Xochitl C; Huttenhower, Curtis

    2017-11-30

    Studies of the microbiome have become increasingly sophisticated, and multiple sequence-based, molecular methods as well as culture-based methods exist for population-scale microbiome profiles. To link the resulting host and microbial data types to human health, several experimental design considerations, data analysis challenges, and statistical epidemiological approaches must be addressed. Here, we survey current best practices for experimental design in microbiome molecular epidemiology, including technologies for generating, analyzing, and integrating microbiome multiomics data. We highlight studies that have identified molecular bioactives that influence human health, and we suggest steps for scaling translational microbiome research to high-throughput target discovery across large populations.

  5. Wind-induced vibration of stay cables : brief

    DOT National Transportation Integrated Search

    2005-02-01

    The objectives of this project were to: identify gaps in the current knowledge base; conduct analytical and experimental research in critical areas; study the performance of existing cable-stayed bridges; and study current mitigation methods...

  6. The pointillism method for creating stimuli suitable for use in computer-based visual contrast sensitivity testing.

    PubMed

    Turner, Travis H

    2005-03-30

    An increasingly large corpus of clinical and experimental neuropsychological research has demonstrated the utility of measuring visual contrast sensitivity. Unfortunately, existing means of measuring contrast sensitivity can be prohibitively expensive, difficult to standardize, or unreliable. Additionally, most existing tests do not allow full control over important characteristics, such as off-angle rotations, waveform, contrast, and spatial frequency. Ideally, researchers could manipulate characteristics and display stimuli in a computerized task designed to meet experimental needs. Thus far, the 256-level (8-bit) color limitation of standard cathode ray tube (CRT) monitors has been preclusive. To this end, the pointillism method (PM) was developed. Using MATLAB software, stimuli are created based on both mathematical and stochastic components, such that differences in regional luminance values of the gradient field closely approximate the desired contrast. This paper describes the method and examines its performance in sine- and square-wave image sets over a range of contrast values. Results suggest the utility of the method for most experimental applications. Weaknesses in the current version, the need for validation and reliability studies, and considerations regarding applications are discussed. Syntax for the program is provided in an appendix, and a version of the program independent of MATLAB is available from the author.

  7. Evaluation of Sub Query Performance in SQL Server

    NASA Astrophysics Data System (ADS)

    Oktavia, Tanty; Sujarwo, Surya

    2014-03-01

    The paper explores several sub-query methods used in a query and their impact on query performance. The study uses an experimental approach to evaluate the performance of each sub-query method combined with an indexing strategy. The sub-query methods consist of IN, EXISTS, a relational operator, and a relational operator combined with the TOP operator. The experiments show that using a relational operator combined with an indexing strategy in a sub-query yields greater performance than the same method without an indexing strategy, and also outperforms the other methods. In summary, for applications that emphasize the performance of retrieving data from a database, it is better to use a relational operator combined with an indexing strategy. This study was done on Microsoft SQL Server 2012.
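
    As a hedged illustration of the sub-query styles compared here, the sketch below runs the IN, EXISTS, and join forms of the same question. The table names and data are hypothetical, and it uses SQLite rather than SQL Server, so relative timings will differ; the point is only that the three forms return the same result set.

```python
import sqlite3

# Small in-memory database with hypothetical tables: which customers
# have at least one order?
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
cur.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "Ana"), (2, "Ben"), (3, "Cai")])
cur.executemany("INSERT INTO orders VALUES (?, ?)", [(10, 1), (11, 3)])
# Index on the filtered column, mirroring the study's indexing strategy.
cur.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

q_in = "SELECT name FROM customers WHERE id IN (SELECT customer_id FROM orders)"
q_exists = ("SELECT name FROM customers c WHERE EXISTS "
            "(SELECT 1 FROM orders o WHERE o.customer_id = c.id)")
q_join = ("SELECT DISTINCT c.name FROM customers c "
          "JOIN orders o ON o.customer_id = c.id")

for q in (q_in, q_exists, q_join):
    print(sorted(r[0] for r in cur.execute(q)))  # same rows from each form
```

    Note that modern query planners often rewrite IN and EXISTS into the same plan, which is why the experimental comparison in the paper hinges on the indexing strategy rather than syntax alone.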

  8. Geochemical Data for Upper Mineral Creek, Colorado, Under Existing Ambient Conditions and During an Experimental pH Modification, August 2005

    USGS Publications Warehouse

    Runkel, Robert L.; Kimball, Briant A.; Steiger, Judy I.; Walton-Day, Katherine

    2009-01-01

    Mineral Creek, an acid mine drainage stream in south-western Colorado, was the subject of a water-quality study that employed a paired synoptic approach. Under the paired synoptic approach, two synoptic sampling campaigns were conducted on the same study reach. The initial synoptic campaign, conducted August 22, 2005, documented stream-water quality under existing ambient conditions. A second synoptic campaign, conducted August 24, 2005, documented stream-water quality during a pH-modification experiment that elevated the pH of Mineral Creek. The experimental pH modification was designed to determine the potential reductions in dissolved constituent concentrations that would result from the implementation of an active treatment system for acid mine drainage. During both synoptic sampling campaigns, a solution containing lithium bromide was injected continuously to allow for the calculation of streamflow using the tracer-dilution method. Synoptic water-quality samples were collected from 30 stream sites and 11 inflow locations along the 2-kilometer study reach. Data from the study provide spatial profiles of pH, concentration, and streamflow under both existing and experimentally-altered conditions. This report presents the data obtained August 21-24, 2005, as well as the methods used for sample collection and data analysis.
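
    The tracer-dilution streamflow calculation used in the study follows from a steady-state mass balance on the continuously injected tracer. A minimal sketch, with entirely hypothetical injection rate and concentrations (not values from this study):

```python
# Tracer-dilution streamflow: a tracer injected at a known rate and
# concentration is diluted in proportion to streamflow, so measuring the
# downstream plateau concentration yields the discharge.
def tracer_dilution_flow(q_inj, c_inj, c_stream, c_background=0.0):
    """Streamflow Q = q_inj * (C_inj - C_stream) / (C_stream - C_bg),
    from a steady-state tracer mass balance (units: q in L/s, C in mg/L)."""
    return q_inj * (c_inj - c_stream) / (c_stream - c_background)

# Hypothetical numbers: 0.05 L/s injection of 10,000 mg/L bromide,
# in-stream plateau of 2 mg/L, negligible background.
q = tracer_dilution_flow(q_inj=0.05, c_inj=10000.0, c_stream=2.0)
print(q)  # estimated streamflow in L/s
```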

  9. Full-Scale Experimental Verification of Soft-Story-Only Retrofits of Wood-Frame Buildings using Hybrid Testing

    Treesearch

    Elaina Jennings; John W. van de Lindt; Ershad Ziaei; Pouria Bahmani; Sangki Park; Xiaoyun Shao; Weichiang Pang; Douglas Rammer; Gary Mochizuki; Mikhail Gershfeld

    2015-01-01

    The FEMA P-807 Guidelines were developed for retrofitting soft-story wood-frame buildings based on existing data, and the method had not been verified through full-scale experimental testing. This article presents two different retrofit designs based directly on the FEMA P-807 Guidelines that were examined at several different seismic intensity levels. The...

  10. An application of artificial neural networks to experimental data approximation

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J., Jr.

    1993-01-01

    As an initial step in the evaluation of networks, a feedforward architecture is trained to approximate experimental data by the backpropagation algorithm. Several drawbacks were detected and an alternative learning algorithm was then developed to partially address the drawbacks. This noniterative algorithm has a number of advantages over the backpropagation method and is easily implemented on existing hardware.

  11. Efficient experimental design of high-fidelity three-qubit quantum gates via genetic programming

    NASA Astrophysics Data System (ADS)

    Devra, Amit; Prabhu, Prithviraj; Singh, Harpreet; Arvind; Dorai, Kavita

    2018-03-01

    We have designed efficient quantum circuits for the three-qubit Toffoli (controlled-controlled-NOT) and Fredkin (controlled-SWAP) gates, optimized via genetic programming methods. The gates thus obtained were experimentally implemented on a three-qubit NMR quantum information processor with high fidelity. Toffoli and Fredkin gates in conjunction with the single-qubit Hadamard gate form a universal gate set for quantum computing and are an essential component of several quantum algorithms. Genetic algorithms are stochastic search algorithms based on the logic of natural selection and biological genetics and have been widely used for quantum information processing applications. We devised a new selection mechanism within the genetic algorithm framework to select individuals from a population. We call this mechanism the "Luck-Choose" mechanism and were able to achieve faster convergence to a solution using this mechanism, as compared to existing selection mechanisms. The optimization was performed under the constraint that the experimentally implemented pulses are of short duration and can be implemented with high fidelity. We demonstrate the advantage of our pulse sequences by comparing our results with existing experimental schemes and other numerical optimization methods.
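
    The paper's "Luck-Choose" mechanism is its own contribution and is not reproduced here. For context, the sketch below shows fitness-proportionate ("roulette-wheel") selection, one of the standard existing mechanisms such schemes are compared against; the candidate names and fitness values are hypothetical.

```python
import random

# Roulette-wheel selection: each individual is chosen with probability
# proportional to its fitness (here, e.g., a gate fidelity).
def roulette_select(population, fitnesses, rng):
    total = sum(fitnesses)
    pick = rng.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off

rng = random.Random(0)
population = ["circuit_a", "circuit_b", "circuit_c"]
fitnesses = [0.95, 0.60, 0.10]  # hypothetical fidelities
picks = [roulette_select(population, fitnesses, rng) for _ in range(1000)]
print(picks.count("circuit_a"), picks.count("circuit_c"))  # fitter circuits drawn more often
```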

  12. Automatic sleep stage classification of single-channel EEG by using complex-valued convolutional neural network.

    PubMed

    Zhang, Junming; Wu, Yan

    2018-03-28

    Many systems have been developed for automatic sleep stage classification. However, nearly all models are based on handcrafted features, and because the feature space is so large, feature selection must be used. Meanwhile, designing handcrafted features is a difficult and time-consuming task that requires the domain knowledge of experienced experts, and results vary when different sets of features are chosen to identify sleep stages. Additionally, many features that we may be unaware of exist, and these features may be important for sleep stage classification. Therefore, a new sleep stage classification system, based on a complex-valued convolutional neural network (CCNN), is proposed in this study. Unlike existing sleep stage methods, our method can automatically extract features from raw electroencephalography data and then classify sleep stages based on the learned features. Additionally, we prove that the decision boundaries for the real and imaginary parts of a complex-valued convolutional neuron intersect orthogonally. The classification performance of handcrafted features is compared with that of features learned via the CCNN. Experimental results show that the proposed method is comparable to existing methods, and that the CCNN obtains better classification performance and considerably faster convergence than a real-valued convolutional neural network. Experimental results also show that the proposed method is a useful decision-support tool for automatic sleep stage classification.
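
    The core operation of a CCNN is convolution with complex-valued weights and activations, so a single "multiply" mixes real and imaginary parts. A minimal 1-D sketch follows; the signal and kernel values are made up, a real CCNN also includes nonlinearities and pooling, and (as in most deep-learning "convolution" layers) this computes a sliding dot product, i.e. cross-correlation.

```python
# Valid-mode 1-D complex-valued convolution (sliding dot product).
def complex_conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

signal = [1 + 1j, 2 + 0j, 0 - 1j, 1 + 2j]  # hypothetical complex samples
kernel = [1 + 0j, 0 + 1j]                  # hypothetical complex filter
out = complex_conv1d(signal, kernel)
print(out)  # each output mixes real and imaginary parts of the input
```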

  13. Two types of modes in finite size one-dimensional coaxial photonic crystals: General rules and experimental evidence

    NASA Astrophysics Data System (ADS)

    El Boudouti, E. H.; El Hassouani, Y.; Djafari-Rouhani, B.; Aynaou, H.

    2007-08-01

    We demonstrate analytically and experimentally the existence and behavior of two types of modes in finite-size one-dimensional coaxial photonic crystals made of N cells with vanishing magnetic field on both sides. We highlight the existence of N-1 confined modes in each band and one mode per gap, associated with either one or the other of the two surfaces surrounding the structure. The latter modes are independent of N. These results generalize our previous findings on the existence of surface modes in two semi-infinite superlattices obtained from the cleavage of an infinite superlattice between two cells. The analytical results are obtained by means of the Green’s function method, whereas the experiments are carried out using coaxial cables in the radio-frequency regime.

  14. An Overview and Empirical Comparison of Distance Metric Learning Methods.

    PubMed

    Moutafis, Panagiotis; Leng, Mengjun; Kakadiaris, Ioannis A

    2016-02-16

    In this paper, we first offer an overview of advances in the field of distance metric learning. Then, we empirically compare selected methods using a common experimental protocol. The number of distance metric learning algorithms proposed keeps growing due to their effectiveness and wide application. However, existing surveys are either outdated or they focus only on a few methods. As a result, there is an increasing need to summarize the obtained knowledge in a concise, yet informative manner. Moreover, existing surveys do not conduct comprehensive experimental comparisons. On the other hand, individual distance metric learning papers compare the performance of the proposed approach with only a few related methods and under different settings. This highlights the need for an experimental evaluation using a common and challenging protocol. To this end, we conduct face verification experiments, as this task poses significant challenges due to varying conditions during data acquisition. In addition, face verification is a natural application for distance metric learning because the encountered challenge is to define a distance function that: 1) accurately expresses the notion of similarity for verification; 2) is robust to noisy data; 3) generalizes well to unseen subjects; and 4) scales well with the dimensionality and number of training samples. In particular, we utilize well-tested features to assess the performance of selected methods following the experimental protocol of the state-of-the-art Labeled Faces in the Wild database. A summary of the results is presented along with a discussion of the insights obtained and lessons learned by employing the corresponding algorithms.

  15. Analysis of Test Case Computations and Experiments for the First Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Heeg, Jennifer; Wieseman, Carol D.; Chwalowski, Pawel

    2013-01-01

    This paper compares computational and experimental data from the Aeroelastic Prediction Workshop (AePW) held in April 2012. This workshop was designed as a series of technical interchange meetings to assess the state of the art of computational methods for predicting unsteady flowfields and static and dynamic aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques to simulate aeroelastic problems and to identify computational and experimental areas needing additional research and development. Three subject configurations were chosen from existing wind-tunnel data sets where there is pertinent experimental data available for comparison. Participant researchers analyzed one or more of the subject configurations, and results from all of these computations were compared at the workshop.

  16. Using iterative cluster merging with improved gap statistics to perform online phenotype discovery in the context of high-throughput RNAi screens

    PubMed Central

    Yin, Zheng; Zhou, Xiaobo; Bakal, Chris; Li, Fuhai; Sun, Youxian; Perrimon, Norbert; Wong, Stephen TC

    2008-01-01

    Background The recent emergence of high-throughput automated image acquisition technologies has forever changed how cell biologists collect and analyze data. Historically, the interpretation of cellular phenotypes in different experimental conditions has been dependent upon the expert opinions of well-trained biologists. Such qualitative analysis is particularly effective in detecting subtle, but important, deviations in phenotypes. However, while the rapid and continuing development of automated microscope-based technologies now facilitates the acquisition of trillions of cells in thousands of diverse experimental conditions, such as in the context of RNA interference (RNAi) or small-molecule screens, the massive size of these datasets precludes human analysis. Thus, the development of automated methods that aim to identify novel and biologically relevant phenotypes online is one of the major challenges in high-throughput image-based screening. Ideally, phenotype discovery methods should be designed to utilize prior/existing information and tackle three challenging tasks: restoring pre-defined, biologically meaningful phenotypes, differentiating novel phenotypes from known ones, and distinguishing novel phenotypes from each other. Arbitrarily extracted information causes biased analysis, while combining the complete existing datasets with each new image is intractable in high-throughput screens. Results Here we present the design and implementation of a novel and robust online phenotype discovery method with broad applicability that can be used in diverse experimental contexts, especially high-throughput RNAi screens. This method features phenotype modelling and iterative cluster merging using improved gap statistics. A Gaussian Mixture Model (GMM) is employed to estimate the distribution of each existing phenotype, and then used as the reference distribution in the gap statistics.
This method is broadly applicable to a number of different types of image-based datasets derived from a wide spectrum of experimental conditions and is suitable for adaptively processing new images that are continuously added to existing datasets. Validations were carried out on different datasets, including a published RNAi screen using Drosophila embryos [Additional files 1, 2], a dataset for cell cycle phase identification using HeLa cells [Additional files 1, 3, 4], and a synthetic dataset using polygons; our method tackled the three aforementioned tasks effectively, with an accuracy range of 85%–90%. When our method is implemented in the context of a Drosophila genome-scale RNAi image-based screen of cultured cells aimed at identifying the contribution of individual genes towards the regulation of cell shape, it efficiently discovers meaningful new phenotypes and provides novel biological insight. We also propose a two-step procedure to modify the novelty detection method based on a one-class SVM, so that it can be used for online phenotype discovery. Under different conditions, we compared the SVM-based method with our method using various datasets, and our method consistently outperformed the SVM-based method in at least two of the three tasks by 2% to 5%. These results demonstrate that our methods can be used to better identify novel phenotypes in image-based datasets from a wide range of conditions and organisms. Conclusion We demonstrate that our method can detect various novel phenotypes effectively in complex datasets. Experimental results also validate that our method performs consistently under different orders of image input, variations of starting conditions including the number and composition of existing phenotypes, and datasets from different screens. Based on our findings, the proposed method is suitable for online phenotype discovery in diverse high-throughput image-based genetic and chemical screens. PMID:18534020

  17. An Extraction Method of an Informative DOM Node from a Web Page by Using Layout Information

    NASA Astrophysics Data System (ADS)

    Tsuruta, Masanobu; Masuyama, Shigeru

    We propose an informative DOM node extraction method from a Web page for preprocessing in Web content mining. Our proposed method, LM, uses layout data of DOM nodes generated by a generic Web browser; the learning set consists of hundreds of Web pages and annotations of the informative DOM nodes of those pages. Our method does not require large-scale crawling of the whole Web site to which the target Web page belongs. We design LM so that it uses the information in the learning set more efficiently than the existing method that uses the same learning set. In experiments, we evaluate methods obtained by combining an informative-DOM-node extraction method (either the proposed method or an existing one) with the existing noise-elimination methods: Heur, which removes advertisements and link lists by heuristics, and CE, which removes DOM nodes that also occur in other Web pages of the Web site to which the target Web page belongs. Experimental results show that 1) LM outperforms the other methods for extracting the informative DOM node, and 2) the combination method (LM, {CE(10), Heur}) based on LM (precision: 0.755, recall: 0.826, F-measure: 0.746) outperforms the other combination methods.
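
    The proposed LM relies on rendered layout features that cannot be reproduced here. As a much simpler stand-in, the sketch below scores each DOM node by the amount of text it directly contains and returns the best one, which illustrates the "select one informative node" formulation. The page content is hypothetical, and using an XML parser assumes well-formed markup.

```python
import xml.etree.ElementTree as ET

# Hypothetical page: navigation, main content, footer.
PAGE = """<html><body>
  <div id="nav">Home About Contact</div>
  <div id="main">This long paragraph is the informative content of the
  page, the part a content-mining pipeline would want to keep.</div>
  <div id="footer">(c) 2024</div>
</body></html>"""

def most_informative(root):
    """Return the node with the most directly contained text
    (a crude text-density heuristic, not the paper's layout-based LM)."""
    best, best_len = None, -1
    for node in root.iter():
        direct_text = (node.text or "").strip()
        if len(direct_text) > best_len:
            best, best_len = node, len(direct_text)
    return best

node = most_informative(ET.fromstring(PAGE))
print(node.get("id"))  # the content div wins over nav and footer
```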

  18. Robust signal recovery using the prolate spherical wave functions and maximum correntropy criterion

    NASA Astrophysics Data System (ADS)

    Zou, Cuiming; Kou, Kit Ian

    2018-05-01

    Signal recovery is one of the most important problems in signal processing. This paper proposes a novel signal recovery method based on prolate spherical wave functions (PSWFs). PSWFs are a class of special functions that have been shown to perform well in signal recovery. However, existing PSWF-based recovery methods use the mean square error (MSE) criterion, which depends on a Gaussianity assumption about the noise distribution. For non-Gaussian noises, such as impulsive noise or outliers, the MSE criterion is sensitive, which may lead to large reconstruction error. Unlike existing PSWF-based recovery methods, our proposed method employs the maximum correntropy criterion (MCC), which is independent of the noise distribution. The proposed method can reduce the impact of large and non-Gaussian noises. Experimental results on synthetic signals with various types of noise show that the proposed MCC-based signal recovery method is more robust against noise than other existing methods.
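
    The difference between the MSE and MCC criteria can be seen on a toy location-estimation problem. The sketch below uses a Gaussian-kernel reweighting fixed point, a standard way to maximize correntropy, though not necessarily the paper's exact algorithm; the sample values are hypothetical.

```python
import math

# Estimate a constant signal level from samples containing one impulsive
# outlier. The MSE estimate is the plain mean; the MCC estimate iteratively
# downweights samples far from the current estimate (Gaussian kernel).
samples = [1.0, 1.1, 0.9, 1.05, 0.95, 50.0]  # last sample is an outlier

mse_estimate = sum(samples) / len(samples)  # dragged toward the outlier

def mcc_estimate(xs, sigma=1.0, iters=50):
    m = xs[0]
    for _ in range(iters):
        w = [math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) for x in xs]
        m = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return m

m = mcc_estimate(samples)
print(mse_estimate, m)  # MCC stays near the true level of ~1.0
```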

  19. Dynamics of a parametrically excited simple pendulum

    NASA Astrophysics Data System (ADS)

    Depetri, Gabriela I.; Pereira, Felipe A. C.; Marin, Boris; Baptista, Murilo S.; Sartorelli, J. C.

    2018-03-01

    The dynamics of a parametric simple pendulum submitted to an arbitrary angle of excitation ϕ was investigated experimentally, by simulations, and analytically. Analytical calculations for the loci of saddle-node bifurcations corresponding to the creation of resonant orbits were performed by applying Melnikov's method. However, this powerful perturbative method cannot be used to predict the existence of odd resonances for a vertical excitation within first-order corrections. Yet, we showed that period-3 resonances indeed exist in such a configuration. Two degenerate attractors of different phases, associated with the same loci of saddle-node bifurcations in parameter space, are reported. For tilted excitation, the degeneracy is broken due to an extra torque, which was confirmed by the calculation of two distinct loci of saddle-node bifurcations for each attractor. This behavior persists up to ϕ≈7π/180, and for inclinations larger than this, only one attractor is observed. Bifurcation diagrams were constructed experimentally for ϕ=π/8 to demonstrate the existence of self-excited resonances (periods smaller than three) and hidden oscillations (for periods greater than three).

  20. Dynamics of a parametrically excited simple pendulum.

    PubMed

    Depetri, Gabriela I; Pereira, Felipe A C; Marin, Boris; Baptista, Murilo S; Sartorelli, J C

    2018-03-01

    The dynamics of a parametric simple pendulum submitted to an arbitrary angle of excitation ϕ was investigated experimentally, by simulations, and analytically. Analytical calculations for the loci of saddle-node bifurcations corresponding to the creation of resonant orbits were performed by applying Melnikov's method. However, this powerful perturbative method cannot be used to predict the existence of odd resonances for a vertical excitation within first-order corrections. Yet, we showed that period-3 resonances indeed exist in such a configuration. Two degenerate attractors of different phases, associated with the same loci of saddle-node bifurcations in parameter space, are reported. For tilted excitation, the degeneracy is broken due to an extra torque, which was confirmed by the calculation of two distinct loci of saddle-node bifurcations for each attractor. This behavior persists up to ϕ≈7π/180, and for inclinations larger than this, only one attractor is observed. Bifurcation diagrams were constructed experimentally for ϕ=π/8 to demonstrate the existence of self-excited resonances (periods smaller than three) and hidden oscillations (for periods greater than three).

  1. Uncertain decision tree inductive inference

    NASA Astrophysics Data System (ADS)

    Zarban, L.; Jafari, S.; Fakhrahmad, S. M.

    2011-10-01

    Induction is the process of reasoning in which general rules are formulated based on limited observations of recurring phenomenal patterns. Decision tree learning is one of the most widely used and practical inductive methods, and it represents its results in a tree scheme. Various decision tree algorithms have been proposed, such as CLS, ID3, Assistant, C4.5, REPTree and Random Tree. These algorithms suffer from some major shortcomings. In this article, after discussing the main limitations of the existing methods, we introduce a new decision tree induction algorithm, which overcomes the problems existing in its counterparts. The new method uses bit strings and maintains important information on them; this use of bit strings and logical operations on them makes the induction process fast. The method has several further important features: it deals with inconsistencies in data, avoids overfitting and handles uncertainty. We also illustrate additional advantages and new features of the proposed method. The experimental results show the effectiveness of the method in comparison with other methods in the literature.
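
    A sketch of the general bit-string idea, not the paper's exact algorithm: each attribute test and each class label is stored as a bitmask over the training samples, so the counts needed to score a split reduce to a bitwise AND plus a popcount instead of a scan over the data. The tiny dataset is hypothetical.

```python
# Hypothetical samples: (outlook_sunny, windy, play) as 0/1 flags.
samples = [
    (1, 0, 1), (1, 1, 0), (0, 0, 1), (0, 1, 1), (1, 1, 0),
]

def column_mask(rows, index):
    """Bitmask whose i-th bit is set when rows[i][index] == 1."""
    mask = 0
    for i, row in enumerate(rows):
        if row[index]:
            mask |= 1 << i
    return mask

sunny = column_mask(samples, 0)  # which samples are sunny
play = column_mask(samples, 2)   # which samples have class "play"

# Count of samples that are sunny AND "play": one AND plus a popcount.
n_sunny_play = bin(sunny & play).count("1")
print(n_sunny_play)
```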

  2. Error-based Extraction of States and Energy Landscapes from Experimental Single-Molecule Time-Series

    NASA Astrophysics Data System (ADS)

    Taylor, J. Nicholas; Li, Chun-Biu; Cooper, David R.; Landes, Christy F.; Komatsuzaki, Tamiki

    2015-03-01

    Characterization of states, the essential components of the underlying energy landscapes, is one of the most intriguing subjects in single-molecule (SM) experiments due to the existence of noise inherent to the measurements. Here we present a method to extract the underlying state sequences from experimental SM time-series. Taking into account empirical error and the finite sampling of the time-series, the method extracts a steady-state network which provides an approximation of the underlying effective free energy landscape. The core of the method is the application of rate-distortion theory from information theory, allowing the individual data points to be assigned to multiple states simultaneously. We demonstrate the method's proficiency in its application to simulated trajectories as well as to experimental SM fluorescence resonance energy transfer (FRET) trajectories obtained from isolated agonist binding domains of the AMPA receptor, an ionotropic glutamate receptor that is prevalent in the central nervous system.

  3. Charge transfer complex between 2,3-diaminopyridine with chloranilic acid. Synthesis, characterization and DFT, TD-DFT computational studies

    NASA Astrophysics Data System (ADS)

    Al-Ahmary, Khairia M.; Habeeb, Moustafa M.; Al-Obidan, Areej H.

    2018-05-01

    A new charge transfer complex (CTC) between the electron donor 2,3-diaminopyridine (DAP) and the electron acceptor chloranilic acid (CLA) has been synthesized and characterized experimentally and theoretically using a variety of physicochemical techniques. The experimental work included elemental analysis and UV-vis, IR and 1H NMR studies to characterize the complex. Electronic spectra were recorded in different hydrogen-bonded solvents: methanol (MeOH), acetonitrile (AN) and a 1:1 AN-MeOH mixture. The molecular composition of the complex was identified to be 1:1 from Job's method and the molar ratio method. The stability constant was determined using the minimum-maximum absorbances method, where it recorded high values confirming the high stability of the formed complex. The solid complex was prepared and characterized by elemental analysis, which confirmed its formation in a 1:1 stoichiometric ratio. Both IR and NMR studies asserted the existence of proton and charge transfer in the formed complex. To support the experimental results, DFT computations were carried out using the B3LYP/6-31G(d,p) method to compute the optimized structures of the reactants and the complex, their geometrical parameters, reactivity parameters, molecular electrostatic potential maps and frontier molecular orbitals. The analysis of the DFT results strongly confirmed the high stability of the formed complex, based on charge transfer alongside proton-transfer hydrogen bonding, in agreement with the experimental results. The origin of the electronic spectra was analyzed using the TD-DFT method, where the observed λmax are strongly consistent with the computed ones. TD-DFT also identified the states contributing to the various electronic transitions.

  4. Toward Fully in Silico Melting Point Prediction Using Molecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y; Maginn, EJ

    2013-03-01

    Melting point is one of the most fundamental and practically important properties of a compound. Molecular simulation methods have been developed for the accurate computation of melting points. However, all of these methods need an experimental crystal structure as input, which means that such calculations are not really predictive, since the melting point can be measured easily in experiments once a crystal structure is known. On the other hand, crystal structure prediction (CSP) has become an active field and significant progress has been made, although challenges still exist. One of the main challenges is the existence of many crystal structures (polymorphs) that are very close in energy. Thermal effects and kinetic factors make the situation even more complicated, such that it is still not trivial to predict experimental crystal structures. In this work, we exploit the fact that free energy differences are often small between crystal structures. We show that accurate melting point predictions can be made by using a reasonable crystal structure from CSP as a starting point for a free energy-based melting point calculation. The key is that most crystal structures predicted by CSP have free energies that are close to that of the experimental structure. The proposed method was tested on two rigid molecules and the results suggest that a fully in silico melting point prediction method is possible.

  5. First-Principles Modeling of Hydrogen Storage in Metal Hydride Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Karl Johnson

    The objective of this project is to complement experimental efforts of MHoCE partners by using state-of-the-art theory and modeling to study the structure, thermodynamics, and kinetics of hydrogen storage materials. Specific goals include prediction of the heats of formation and other thermodynamic properties of alloys from first-principles methods, identification of new alloys that can be tested experimentally, calculation of surface and energetic properties of nanoparticles, and calculation of kinetics involved with hydrogenation and dehydrogenation processes. Discovery of new metal hydrides with enhanced properties compared with existing materials is a critical need for the Metal Hydride Center of Excellence. New materials discovery can be aided by the use of first-principles (ab initio) computational modeling in two ways: (1) The properties, including mechanisms, of existing materials can be better elucidated through a combined modeling/experimental approach. (2) The thermodynamic properties of novel materials that have not been made can, in many cases, be quickly screened with ab initio methods. We have used state-of-the-art computational techniques to explore millions of possible reaction conditions consisting of different element spaces, compositions, and temperatures. We have identified potentially promising single- and multi-step reactions that can be explored experimentally.

  6. Semiempirical Theories of the Affinities of Negative Atomic Ions

    NASA Technical Reports Server (NTRS)

    Edie, John W.

    1961-01-01

    The determination of the electron affinities of negative atomic ions by means of direct experimental investigation is limited. To supplement the meager experimental results, several semiempirical theories have been advanced. One commonly used technique involves extrapolating the electron affinities along the isoelectronic sequences. The most recent of these extrapolations is studied by extending the method to include one more member of the isoelectronic sequence. When the results show that this extension does not increase the accuracy of the calculations, several possible explanations for this situation are explored. A different approach to the problem is suggested by the regularities appearing in the electron affinities. Noting that the regular linear pattern that exists for the ionization potentials of the p electrons as a function of Z repeats itself for different degrees of ionization q, the slopes and intercepts of these curves are extrapolated to the case of the negative ion. The method is placed on a theoretical basis by calculating the Slater parameters as functions of q and n, the number of equivalent p-electrons. These functions are no more than quadratic in q and n. The electron affinities are calculated by extending the linear relations that exist for the neutral atoms and positive ions to the negative ions. The extrapolated slopes are apparently correct, but the intercepts must be slightly altered to agree with experiment. For this purpose, one or two experimental affinities (depending on the extrapolation method) are used in each of the two short periods. The two extrapolation methods used are: (A) an isoelectronic sequence extrapolation of the linear pattern as such; (B) the same extrapolation of a linearization of this pattern (configuration centers) combined with an extrapolation of the other terms of the ground configurations. The latter method is preferable, since it requires only one experimental point for each period. The results agree within experimental error with all data, except with the most recent value of C, which lies 10% lower.

  7. Prediction and analysis of protein solubility using a novel scoring card method with dipeptide composition

    PubMed Central

    2012-01-01

    Background Existing methods for predicting protein solubility on overexpression in Escherichia coli advance performance by using ensemble classifiers such as two-stage support vector machine (SVM) based classifiers and a number of feature types such as physicochemical properties, amino acid and dipeptide composition, accompanied with feature selection. It is desirable to develop a simple and easily interpretable method for predicting protein solubility, compared to existing complex SVM-based methods. Results This study proposes a novel scoring card method (SCM) that uses dipeptide composition only to estimate solubility scores of sequences for predicting protein solubility. SCM calculates the propensities of 400 individual dipeptides to be soluble using statistical discrimination between soluble and insoluble proteins of a training data set. The propensity scores of all dipeptides are then further optimized using an intelligent genetic algorithm. The solubility score of a sequence is determined by the weighted sum of all propensity scores and dipeptide composition. To evaluate SCM by performance comparisons, four data sets with different sizes and degrees of variation in experimental conditions were used. The results show that the simple SCM with interpretable propensities of dipeptides has promising performance compared with existing SVM-based ensemble methods that use a number of feature types. Furthermore, the propensities of dipeptides and solubility scores of sequences can provide insights into protein solubility. For example, the analysis of dipeptide scores shows a high propensity of α-helix structure and thermophilic proteins to be soluble. Conclusions The propensities of individual dipeptides to be soluble vary for proteins under altered experimental conditions. For accurate prediction of protein solubility using SCM, it is better to customize the score card of dipeptide propensities by using a training data set under the same specified experimental conditions. The proposed method SCM, with solubility scores and dipeptide propensities, can be easily applied to protein function prediction problems in which dipeptide composition features play an important role. Availability The used datasets, source codes of SCM, and supplementary files are available at http://iclab.life.nctu.edu.tw/SCM/. PMID:23282103
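    As a rough sketch of the scoring idea described above (not the authors' code; the dipeptide propensity table and sequence below are invented), the solubility score can be computed as the dipeptide-composition-weighted sum of per-dipeptide propensities:

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
DIPEPTIDES = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]

def dipeptide_composition(seq):
    """Fraction of each of the 400 dipeptides in an overlapping scan."""
    counts = {dp: 0 for dp in DIPEPTIDES}
    for i in range(len(seq) - 1):
        dp = seq[i:i + 2]
        if dp in counts:
            counts[dp] += 1
    total = max(len(seq) - 1, 1)
    return {dp: c / total for dp, c in counts.items()}

def scm_score(seq, propensity):
    """Weighted sum of dipeptide propensity scores and dipeptide composition."""
    comp = dipeptide_composition(seq)
    return sum(comp[dp] * propensity.get(dp, 0.0) for dp in DIPEPTIDES)

# Toy propensity table; real SCM propensities are estimated from soluble vs.
# insoluble training proteins and then optimized by a genetic algorithm.
propensity = {dp: 0.0 for dp in DIPEPTIDES}
propensity["AA"] = 1.0
print(scm_score("AAAG", propensity))  # 2 of 3 overlapping dipeptides are "AA"
```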

  8. E-Flux2 and SPOT: Validated Methods for Inferring Intracellular Metabolic Flux Distributions from Transcriptomic Data.

    PubMed

    Kim, Min Kyung; Lane, Anatoliy; Kelley, James J; Lun, Desmond S

    2016-01-01

    Several methods have been developed to predict system-wide and condition-specific intracellular metabolic fluxes by integrating transcriptomic data with genome-scale metabolic models. While powerful in many settings, existing methods have several shortcomings, and it is unclear which method has the best accuracy in general because of limited validation against experimentally measured intracellular fluxes. We present a general optimization strategy for inferring intracellular metabolic flux distributions from transcriptomic data coupled with genome-scale metabolic reconstructions. It consists of two different template models called DC (determined carbon source model) and AC (all possible carbon sources model) and two different new methods called E-Flux2 (E-Flux method combined with minimization of l2 norm) and SPOT (Simplified Pearson cOrrelation with Transcriptomic data), which can be chosen and combined depending on the availability of knowledge on carbon source or objective function. This enables us to simulate a broad range of experimental conditions. We examined E. coli and S. cerevisiae as representative prokaryotic and eukaryotic microorganisms, respectively. The predictive accuracy of our algorithm was validated by calculating the uncentered Pearson correlation between predicted fluxes and measured fluxes. To this end, we compiled 20 experimental conditions (11 in E. coli and 9 in S. cerevisiae) of transcriptome measurements coupled with corresponding central carbon metabolism intracellular flux measurements determined by 13C metabolic flux analysis (13C-MFA), which is the largest dataset assembled to date for the purpose of validating inference methods for predicting intracellular fluxes. In both organisms, our method achieves an average correlation coefficient ranging from 0.59 to 0.87, outperforming a representative sample of competing methods.
Easy-to-use implementations of E-Flux2 and SPOT are available as part of the open-source package MOST (http://most.ccib.rutgers.edu/). Our method represents a significant advance over existing methods for inferring intracellular metabolic flux from transcriptomic data. It not only achieves higher accuracy, but it also combines into a single method a number of other desirable characteristics including applicability to a wide range of experimental conditions, production of a unique solution, fast running time, and the availability of a user-friendly implementation.
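    The validation metric named above, the uncentered Pearson correlation, omits mean subtraction and so reduces to cosine similarity between the two flux vectors. A minimal sketch with invented flux values:

```python
import math

def uncentered_pearson(x, y):
    """Uncentered Pearson correlation: no mean subtraction (cosine similarity)."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return num / den

# Hypothetical predicted vs. 13C-MFA-measured fluxes for three reactions.
predicted = [1.0, 0.5, 0.2]
measured = [0.9, 0.6, 0.1]
print(round(uncentered_pearson(predicted, measured), 3))  # → 0.989
```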

  9. A study of prediction methods for the high angle-of-attack aerodynamics of straight wings and fighter aircraft

    NASA Technical Reports Server (NTRS)

    Mcmillan, O. J.; Mendenhall, M. R.; Perkins, S. C., Jr.

    1984-01-01

    Work is described dealing with two areas which are dominated by the nonlinear effects of vortex flows. The first area concerns the stall/spin characteristics of a general aviation wing with a modified leading edge. The second area concerns the high-angle-of-attack characteristics of high-performance military aircraft. For each area, the governing phenomena are described as identified with the aid of existing experimental data. Existing analytical methods are reviewed, and the most promising method for each area is used to perform some preliminary calculations. Based on these results, the strengths and weaknesses of the methods are defined, and research programs are recommended to improve the methods through a better understanding of the flow mechanisms involved.

  10. Online optimization of storage ring nonlinear beam dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Xiaobiao; Safranek, James

    2015-08-01

    We propose to optimize the nonlinear beam dynamics of existing and future storage rings with direct online optimization techniques. This approach may have crucial importance for the implementation of diffraction limited storage rings. In this paper considerations and algorithms for the online optimization approach are discussed. We have applied this approach to experimentally improve the dynamic aperture of the SPEAR3 storage ring with the robust conjugate direction search method and the particle swarm optimization method. The dynamic aperture was improved by more than 5 mm within a short period of time. Experimental setup and results are presented.
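    As a hedged illustration of the kind of derivative-free optimizer named above, the following is a minimal particle swarm on a toy quadratic objective; the actual SPEAR3 dynamic-aperture objective, the machine interface, and the robust conjugate direction search method are not reproduced here:

```python
import random

def pso(objective, dim=2, n_particles=20, iters=100, seed=0):
    """Minimize `objective` with a basic particle swarm (inertia + two pulls)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for a dynamic-aperture figure of merit; minimum at (1, -2).
best, val = pso(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2)
print(best, val)
```

In online use the objective would be a noisy machine measurement rather than an analytic function, which is why robust, derivative-free methods like these are attractive.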

  11. Pull-out fibers from composite materials at high rate of loading

    NASA Technical Reports Server (NTRS)

    Amijima, S.; Fujii, T.

    1981-01-01

    Numerical and experimental results are presented on the pullout phenomenon in composite materials at a high rate of loading. The finite element method was used, taking into account the existence of a virtual shear deformation layer as the interface between fiber and matrix. Experimental results agree well with those obtained by the finite element method. Numerical results show that the interlaminar shear stress is time dependent, in addition, it is shown to depend on the applied load time history. Under step pulse loading, the interlaminar shear stress fluctuates, finally decaying to its value under static loading.

  12. A Robust Adaptive Autonomous Approach to Optimal Experimental Design

    NASA Astrophysics Data System (ADS)

    Gu, Hairong

    Experimentation is the fundamental tool of scientific inquiries to understand the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly in the quest for prediction accuracy, often encounter difficulties in conducting experiments using an existing experimental procedure, for the following two reasons. First, the existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment. However, for those experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Moreover, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize the experimental length and cost. Facing the two challenges in those experimental scenarios, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and performs optimization of experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial.
    Directly addressing the challenges in those experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus removing the requirement for a parametric model at the beginning of an experiment; design optimization is performed to select experimental designs on the fly during an experiment, based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection is realized by a Bayesian spike-and-slab prior, reverse prediction is realized by grid search, and design optimization is realized by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without the assumption of a parametric model serving as the proxy of the latent data structure, while the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by taking fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.
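    The design-optimization step described above can be caricatured with a toy active-learning loop (all names and data here are invented; RAAS's Bayesian P-splines and spike-and-slab priors are not shown): fit a small bootstrap ensemble to the data collected so far, then pick the candidate design where the ensemble's predictions disagree most.

```python
import random
import statistics

def fit_line(xs, ys):
    """Closed-form least-squares line fit: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def next_design(xs, ys, candidates, n_boot=50, seed=1):
    """Pick the candidate design with the highest bootstrap-ensemble
    prediction variance (a crude stand-in for predictive uncertainty)."""
    rng = random.Random(seed)
    preds = {c: [] for c in candidates}
    idx = list(range(len(xs)))
    for _ in range(n_boot):
        sample = [rng.choice(idx) for _ in idx]
        bx = [xs[i] for i in sample]
        by = [ys[i] for i in sample]
        if len(set(bx)) < 2:     # degenerate resample, cannot fit a line
            continue
        a, b = fit_line(bx, by)
        for c in candidates:
            preds[c].append(a * c + b)
    return max(candidates, key=lambda c: statistics.pvariance(preds[c]))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 2.1, 3.9, 6.2]        # roughly y = 2x with noise
candidates = [1.5, 10.0]
print(next_design(xs, ys, candidates))  # the far extrapolation point wins
```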

  13. Comparison of Coupled Radiative Flow Solutions with Project Fire 2 Flight Data

    NASA Technical Reports Server (NTRS)

    Olynick, David R.; Henline, W. D.; Chambers, Lin Hartung; Candler, G. V.

    1995-01-01

    A nonequilibrium, axisymmetric, Navier-Stokes flow solver with coupled radiation has been developed for use in the design of thermal protection systems for vehicles where radiation effects are important. The present method has been compared with an existing flow and radiation solver and with the Project Fire 2 experimental data. Good agreement has been obtained over the entire Fire 2 trajectory with the experimentally determined values of the stagnation radiation intensity in the 0.2-6.2 eV range and with the total stagnation heating. The effects of a number of flow models are examined to determine which combination of physical models produces the best agreement with the experimental data. These models include radiation coupling, multitemperature thermal models, and finite rate chemistry. Finally, the computational efficiency of the present model is evaluated. The radiation properties model developed for this study is shown to offer significant computational savings compared to existing codes.

  14. Comparison of analytical and experimental subsonic steady and unsteady pressure distributions for a high-aspect-ratio-supercritical wing model with oscillating control surfaces

    NASA Technical Reports Server (NTRS)

    Mccain, W. E.

    1982-01-01

    The results of a comparative study using the unsteady aerodynamic lifting surface theory, known as the Doublet Lattice method, and experimental subsonic steady- and unsteady-pressure measurements, are presented for a high-aspect-ratio supercritical wing model. Comparisons of pressure distributions due to wing angle of attack and control-surface deflections were made. In general, good correlation existed between experimental and theoretical data over most of the wing planform. The more significant deviations found between experimental and theoretical data were in the vicinity of control surfaces for both static and oscillatory control-surface deflections.

  15. Prediction of physical protein protein interactions

    NASA Astrophysics Data System (ADS)

    Szilágyi, András; Grimm, Vera; Arakaki, Adrián K.; Skolnick, Jeffrey

    2005-06-01

    Many essential cellular processes such as signal transduction, transport, cellular motion and most regulatory mechanisms are mediated by protein-protein interactions. In recent years, new experimental techniques have been developed to discover the protein-protein interaction networks of several organisms. However, the accuracy and coverage of these techniques have proven to be limited, and computational approaches remain essential both to assist in the design and validation of experimental studies and for the prediction of interaction partners and detailed structures of protein complexes. Here, we provide a critical overview of existing structure-independent and structure-based computational methods. Although these techniques have significantly advanced in the past few years, we find that most of them are still in their infancy. We also provide an overview of experimental techniques for the detection of protein-protein interactions. Although the developments are promising, false positive and false negative results are common, and reliable detection is possible only by taking a consensus of different experimental approaches. The shortcomings of experimental techniques affect both the further development and the fair evaluation of computational prediction methods. For an adequate comparative evaluation of prediction and high-throughput experimental methods, an appropriately large benchmark set of biophysically characterized protein complexes would be needed, but is sorely lacking.

  16. Mental Models for Mechanical Comprehension. A Review of Literature.

    DTIC Science & Technology

    1986-06-01

    the mental models that people use to understand and solve problems involving mechanics and motion. Method The existing psychological literature on...have been used to investigate mental models. The constructionist school is concerned with how mental models are formed. The information-processing...school uses the experimental methods of modern cognitive psychology to investigate mental structures. The componential approach attempts to meld the

  17. An improved swarm optimization for parameter estimation and biological model selection.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. 
It is hoped that this study provides new insight into developing more accurate and reliable biological models from limited and low-quality experimental data.

  18. Spectroscopic criteria for identification of nuclear tetrahedral and octahedral symmetries: Illustration on a rare earth nucleus

    NASA Astrophysics Data System (ADS)

    Dudek, J.; Curien, D.; Dedes, I.; Mazurek, K.; Tagami, S.; Shimizu, Y. R.; Bhattacharjee, T.

    2018-02-01

    We formulate criteria for identification of the nuclear tetrahedral and octahedral symmetries and illustrate for the first time their possible realization in a rare earth nucleus 152Sm. We use realistic nuclear mean-field theory calculations with the phenomenological macroscopic-microscopic method, the Gogny-Hartree-Fock-Bogoliubov approach, and general point-group theory considerations to guide the experimental identification method as illustrated on published experimental data. Following group theory the examined symmetries imply the existence of exotic rotational bands on whose properties the spectroscopic identification criteria are based. These bands may contain simultaneously states of even and odd spins, of both parities and parity doublets at well-defined spins. In the exact-symmetry limit those bands involve no E 2 transitions. We show that coexistence of tetrahedral and octahedral deformations is essential when calculating the corresponding energy minima and surrounding barriers, and that it has a characteristic impact on the rotational bands. The symmetries in question imply the existence of long-lived shape isomers and, possibly, new waiting point nuclei—impacting the nucleosynthesis processes in astrophysics—and an existence of 16-fold degenerate particle-hole excitations. Specifically designed experiments which aim at strengthening the identification arguments are briefly discussed.

  19. $n$ -Dimensional Discrete Cat Map Generation Using Laplace Expansions.

    PubMed

    Wu, Yue; Hua, Zhongyun; Zhou, Yicong

    2016-11-01

    Different from existing methods that use matrix multiplications and have high computation complexity, this paper proposes an efficient generation method of n-dimensional ([Formula: see text]) Cat maps using Laplace expansions. New parameters are also introduced to control the spatial configurations of the [Formula: see text] Cat matrix. Thus, the proposed method provides an efficient way to mix dynamics of all dimensions at one time. To investigate its implementations and applications, we further introduce a fast implementation algorithm of the proposed method with time complexity O(n^4) and a pseudorandom number generator using the Cat map generated by the proposed method. The experimental results show that, compared with existing generation methods, the proposed method has a larger parameter space and simpler algorithm complexity, generates [Formula: see text] Cat matrices with a lower inner correlation, and thus yields more random and unpredictable outputs of [Formula: see text] Cat maps.
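    For orientation, the classical two-dimensional discrete Cat map, which the method above generalizes to n dimensions, can be sketched as follows (the paper's Laplace-expansion construction itself is not reproduced here):

```python
def cat_map_2d(x, y, N, a=1, b=1):
    """One iteration of the discrete Cat map on an N x N integer lattice.
    The map matrix [[1, a], [b, a*b + 1]] has determinant 1, so the map is
    area-preserving and invertible; on a finite lattice it is periodic."""
    return (x + a * y) % N, (b * x + (a * b + 1) * y) % N

# Iterating the map scrambles lattice points, the property exploited by
# Cat-map-based pseudorandom generators.
x, y = 7, 3
for _ in range(5):
    x, y = cat_map_2d(x, y, 101)
print(x, y)  # → 100 46
```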

  20. A fast field-cycling device for high-resolution NMR: Design and application to spin relaxation and hyperpolarization experiments

    NASA Astrophysics Data System (ADS)

    Kiryutin, Alexey S.; Pravdivtsev, Andrey N.; Ivanov, Konstantin L.; Grishin, Yuri A.; Vieth, Hans-Martin; Yurkovskaya, Alexandra V.

    2016-02-01

    A device for performing fast magnetic field-cycling NMR experiments is described. A key feature of this setup is that it combines fast switching of the external magnetic field and high-resolution NMR detection. The field-cycling method is based on precise mechanical positioning of the NMR probe with the mounted sample in the inhomogeneous fringe field of the spectrometer magnet. The device enables field variation over several decades (from 100 μT up to 7 T) within less than 0.3 s; progress in NMR probe design provides NMR linewidths of about 10-3 ppm. The experimental method is very versatile and enables site-specific studies of spin relaxation (NMRD, LLSs) and spin hyperpolarization (DNP, CIDNP, and SABRE) at variable magnetic field and at variable temperature. Experimental examples of such studies are demonstrated; advantages of the experimental method are described and existing challenges in the field are outlined.

  1. Flight effects on exhaust noise for turbojet and turbofan engines: Comparison of experimental data with prediction

    NASA Technical Reports Server (NTRS)

    Stone, J. R.

    1976-01-01

    It was demonstrated that static and in-flight jet engine exhaust noise can be predicted with reasonable accuracy when the multiple-source nature of the problem is taken into account. Jet mixing noise was predicted from the interim prediction method. Provisional methods of estimating internally generated noise and shock noise flight effects were used, based partly on existing prediction methods and partly on recently reported engine data.

  2. Application of Jacobian-free Newton–Krylov method in implicitly solving two-fluid six-equation two-phase flow problems: Implementation, validation and benchmark

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-03-09

    This work represents a first-of-its-kind successful application of advanced numerical methods to solving realistic two-phase flow problems with the two-fluid six-equation two-phase flow model. These advanced numerical methods include a high-resolution spatial discretization scheme with staggered grids, high-order fully implicit time integration schemes, and the Jacobian-free Newton–Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated with existing experimental flow boiling data in vertical pipes and rod bundles, which cover wide ranges of experimental conditions, such as pressure, inlet mass flux, wall heat flux and exit void fraction. An additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods employed in this work exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. This in turn provides the possibility of utilizing more sophisticated flow regime maps in the future to further improve simulation accuracy.

  3. Manifold Regularized Experimental Design for Active Learning.

    PubMed

    Zhang, Lining; Shum, Hubert P H; Shao, Ling

    2016-12-02

    Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to alleviate the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the selected samples to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.

  4. BPP: a sequence-based algorithm for branch point prediction.

    PubMed

    Zhang, Qing; Fan, Xiaodan; Wang, Yejun; Sun, Ming-An; Shao, Jianlin; Guo, Dianjing

    2017-10-15

    Although high-throughput sequencing methods have been proposed to identify splicing branch points in the human genome, these methods can only detect a small fraction of the branch points subject to the sequencing depth, experimental cost and the expression level of the mRNA. An accurate computational model for branch point prediction is therefore an ongoing objective in human genome research. We here propose a novel branch point prediction algorithm that utilizes information on the branch point sequence and the polypyrimidine tract. Using experimentally validated data, we demonstrate that our proposed method outperforms existing methods. Availability and implementation: https://github.com/zhqingit/BPP. djguo@cuhk.edu.hk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  5. Application of laser differential confocal technique in back vertex power measurement for phoropters

    NASA Astrophysics Data System (ADS)

    Li, Fei; Li, Lin; Ding, Xiang; Liu, Wenli

    2012-10-01

    A phoropter is one of the most popular ophthalmic instruments used in optometry, and the back vertex power (BVP) is one of the most important parameters for evaluating the refraction characteristics of a phoropter. In this paper, a new laser differential confocal vertex-power measurement method, which takes advantage of the outstanding focusing ability of a laser differential confocal (LDC) system, is proposed for measuring the BVP of phoropters. A vertex power measurement system is built, experimental results are presented, and some influence factors are analyzed. It is demonstrated that the method based on the LDC technique has higher measurement precision and stronger resistance to environmental interference compared to existing methods. Theoretical analysis and experimental results indicate that the measurement error of the method is about 0.02 m⁻¹.

  6. Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate

    NASA Technical Reports Server (NTRS)

    Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.

    2013-01-01

    There have been few discussions on using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis from computational data alone. The method uses current CFD uncertainty techniques coupled with the Student-t distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied within a specified tolerance or bias error, and the differences in the results are used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations, and conclusions are drawn about the feasibility of using CFD without experimental data. The results provide a tactic for analytically estimating the uncertainty in a CFD model when experimental data are unavailable.
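    A hedged sketch of the statistical step described above: treat the outputs of a few input-perturbed CFD runs as a small sample and form a Student-t confidence interval on the quantity of interest (the heat-transfer values below are invented):

```python
import math
import statistics

def t_confidence_interval(samples, t_value):
    """Mean +/- t * s / sqrt(n) for a small sample; t_value must match the
    chosen confidence level and n - 1 degrees of freedom."""
    n = len(samples)
    mean = statistics.mean(samples)
    half_width = t_value * statistics.stdev(samples) / math.sqrt(n)
    return mean - half_width, mean + half_width

# Five hypothetical heat-transfer coefficients from input-perturbed CFD runs.
h = [101.2, 98.7, 100.4, 99.1, 100.9]
# t = 2.776 for 95% confidence with 4 degrees of freedom.
lo, hi = t_confidence_interval(h, 2.776)
print(round(lo, 2), round(hi, 2))  # → 98.69 101.43
```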

  7. Experimental demonstrations in audible frequency range of band gap tunability and negative refraction in two-dimensional sonic crystal.

    PubMed

    Pichard, Hélène; Richoux, Olivier; Groby, Jean-Philippe

    2012-10-01

    The propagation of audible acoustic waves in two-dimensional square-lattice tunable sonic crystals (SC) made of square cross-section infinitely rigid rods embedded in air is investigated experimentally. The band structure is calculated with the plane wave expansion (PWE) method and compared with experimental measurements carried out on a structure of finite extent, 200 cm wide, 70 cm deep and 15 cm high. The structure is made of square inclusions of 5 cm side with a periodicity of L = 7.5 cm placed between two rigid plates. The existence of tunable complete band gaps in the audible frequency range is demonstrated experimentally by rotating the scatterers around their vertical axis. Negative refraction is then analyzed by use of the anisotropy of the equi-frequency surface (EFS) in the first band and of a finite difference time domain (FDTD) method. Experimental results finally show negative refraction in the audible frequency range.

  8. YamiPred: A Novel Evolutionary Method for Predicting Pre-miRNAs and Selecting Relevant Features.

    PubMed

    Kleftogiannis, Dimitrios; Theofilatos, Konstantinos; Likothanassis, Spiros; Mavroudi, Seferina

    2015-01-01

    MicroRNAs (miRNAs) are small non-coding RNAs, which play a significant role in gene regulation. Predicting miRNA genes is a challenging bioinformatics problem and existing experimental and computational methods fail to deal with it effectively. We developed YamiPred, an embedded classification method that combines the efficiency and robustness of support vector machines (SVM) with genetic algorithms (GA) for feature selection and parameter optimization. YamiPred was tested on a new and realistic human dataset and was compared with state-of-the-art computational intelligence approaches and the prevalent SVM-based tools for miRNA prediction. Experimental results indicate that YamiPred outperforms existing approaches in terms of accuracy and of the geometric mean of sensitivity and specificity. The embedded feature selection component selects a compact feature subset that contributes to the performance optimization. Further experimentation with this minimal feature subset has achieved very high classification performance and revealed the minimum number of samples required for developing a robust predictor. YamiPred also confirmed the important role of commonly used features such as entropy and enthalpy, and uncovered the significance of newly introduced features, such as %A-U aggregate nucleotide frequency and positional entropy. The best model trained on human data has successfully predicted pre-miRNAs in other organisms, including viruses.

  9. PubMed Central

    Serteyn, D; Pincemail, J; Mottart, E; Caudron, I; Deby, C; Deby-Dupont, G; Philippart, C; Lamy, M

    1994-01-01

    This preliminary study demonstrated free radical generation during experimental postischemic muscular reperfusion in a halothane-anesthetized horse. The authors used alpha-phenyl-N-tert-butylnitrone as a spin trap agent and the electron paramagnetic resonance method to observe free radical generation in vivo. PMID:7889465

  10. Mathematical modeling of the aerodynamics of high-angle-of-attack maneuvers

    NASA Technical Reports Server (NTRS)

    Schiff, L. B.; Tobak, M.; Malcolm, G. N.

    1980-01-01

    This paper is a review of the current state of aerodynamic mathematical modeling for aircraft motions at high angles of attack. The mathematical model serves to define a set of characteristic motions from whose known aerodynamic responses the aerodynamic response to an arbitrary high angle-of-attack flight maneuver can be predicted. Means are explored of obtaining stability parameter information in terms of the characteristic motions, whether by wind-tunnel experiments, computational methods, or by parameter-identification methods applied to flight-test data. A rationale is presented for selecting and verifying the aerodynamic mathematical model at the lowest necessary level of complexity. Experimental results describing the wing-rock phenomenon are shown to be accommodated within the most recent mathematical model by admitting the existence of aerodynamic hysteresis in the steady-state variation of the rolling moment with roll angle. Interpretation of the experimental results in terms of bifurcation theory reveals the general conditions under which aerodynamic hysteresis must exist.

  11. The effect of contact angles and capillary dimensions on the burst frequency of super hydrophilic and hydrophilic centrifugal microfluidic platforms, a CFD study.

    PubMed

    Kazemzadeh, Amin; Ganesan, Poo; Ibrahim, Fatimah; He, Shuisheng; Madou, Marc J

    2013-01-01

    This paper employs the volume of fluid (VOF) method to numerically investigate the effect of the width, height, and contact angles on burst frequencies of super hydrophilic and hydrophilic capillary valves in centrifugal microfluidic systems. Existing experimental results in the literature have been used to validate the implementation of the numerical method. The performance of capillary valves in the rectangular and the circular microfluidic structures on super hydrophilic centrifugal microfluidic platforms is studied. The numerical results are also compared with the existing theoretical models and the differences are discussed. Our experimental and computed results show a minimum burst frequency occurring at square capillaries and this result is useful for designing and developing more sophisticated networks of capillary valves. It also predicts that in super hydrophilic microfluidics, the fluid leaks consistently from the capillary valve at low pressures which can disrupt the biomedical procedures in centrifugal microfluidic platforms.

  12. The Effect of Contact Angles and Capillary Dimensions on the Burst Frequency of Super Hydrophilic and Hydrophilic Centrifugal Microfluidic Platforms, a CFD Study

    PubMed Central

    Kazemzadeh, Amin; Ganesan, Poo; Ibrahim, Fatimah; He, Shuisheng; Madou, Marc J.

    2013-01-01

    This paper employs the volume of fluid (VOF) method to numerically investigate the effect of the width, height, and contact angles on burst frequencies of super hydrophilic and hydrophilic capillary valves in centrifugal microfluidic systems. Existing experimental results in the literature have been used to validate the implementation of the numerical method. The performance of capillary valves in the rectangular and the circular microfluidic structures on super hydrophilic centrifugal microfluidic platforms is studied. The numerical results are also compared with the existing theoretical models and the differences are discussed. Our experimental and computed results show a minimum burst frequency occurring at square capillaries and this result is useful for designing and developing more sophisticated networks of capillary valves. It also predicts that in super hydrophilic microfluidics, the fluid leaks consistently from the capillary valve at low pressures which can disrupt the biomedical procedures in centrifugal microfluidic platforms. PMID:24069169

  13. A reconsideration of negative ratings for network-based recommendation

    NASA Astrophysics Data System (ADS)

    Hu, Liang; Ren, Liang; Lin, Wenbin

    2018-01-01

    Recommendation algorithms based on bipartite networks have become increasingly popular, thanks to their accuracy and flexibility. Currently, many of these methods ignore users' negative ratings. In this work, we propose a method to exploit negative ratings for the network-based inference algorithm. We find that negative ratings play a positive role regardless of sparsity of data sets. Furthermore, we improve the efficiency of our method and compare it with the state-of-the-art algorithms. Experimental results show that the present method outperforms the existing algorithms.
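    As an illustration of network-based inference with signed edges, the sketch below runs a two-step mass diffusion over a toy user-item bipartite graph. The sign-handling rule (a shared dislike counts as agreement) and the toy ratings are assumptions for illustration, not the authors' exact formulation:

```python
def network_inference(ratings, target_user):
    """Two-step mass diffusion on a user-item bipartite graph with
    signed edges: ratings[u][i] is +1 (liked) or -1 (disliked), so a
    shared dislike counts as agreement. Illustrative sign rule only."""
    users = list(ratings)
    items = sorted({i for r in ratings.values() for i in r})
    # Step 1: items rated by the target user send resource to users
    # that also rated them, weighted by the product of the two signs.
    user_res = {}
    for u in users:
        res = 0.0
        for i, sign in ratings[target_user].items():
            if i in ratings[u]:
                deg_i = sum(1 for v in users if i in ratings[v])
                res += sign * ratings[u][i] / deg_i
        user_res[u] = res
    # Step 2: users spread their resource evenly over items they rated.
    scores = {i: 0.0 for i in items}
    for u, res in user_res.items():
        for i in ratings[u]:
            scores[i] += res / len(ratings[u])
    return scores

ratings = {
    "alice": {"a": 1, "b": -1},
    "bob":   {"a": 1, "c": 1},
    "carol": {"b": -1, "c": 1},
}
scores = network_inference(ratings, "alice")  # rank unrated item "c"
```

    Items the target user has not yet rated are then recommended in decreasing order of score.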

  14. Argon thermochronology of mineral deposits; a review of analytical methods, formulations, and selected applications

    USGS Publications Warehouse

    Snee, Lawrence W.

    2002-01-01

    40Ar/39Ar geochronology is an experimentally robust and versatile method for constraining time and temperature in geologic processes. The argon method is the most broadly applied in mineral-deposit studies. Standard analytical methods and formulations exist, making the fundamentals of the method well defined. A variety of graphical representations exist for evaluating argon data. A broad range of minerals found in mineral deposits, alteration zones, and host rocks is commonly analyzed to provide the age, temporal duration, and thermal conditions of mineralization events and processes. All are discussed in this report. The usefulness and evolving applicability of the method are demonstrated in studies of the Panasqueira, Portugal, tin-tungsten deposit; the Cornubian batholith and associated mineral deposits, southwest England; the Red Mountain intrusive system and associated Urad-Henderson molybdenum deposits; and the Eastern Goldfields Province, Western Australia.

  15. Infrared Ship Target Segmentation Based on Spatial Information Improved FCM.

    PubMed

    Bai, Xiangzhi; Chen, Zhiguo; Zhang, Yu; Liu, Zhaoying; Lu, Yi

    2016-12-01

    Segmentation of infrared (IR) ship images is always a challenging task, because of the intensity inhomogeneity and noise. The fuzzy C-means (FCM) clustering is a classical method widely used in image segmentation. However, it has some shortcomings, such as not considering the spatial information and being sensitive to noise. In this paper, an improved FCM method based on spatial information is proposed for IR ship target segmentation. The improvements include two parts: 1) adding the nonlocal spatial information based on the ship target and 2) using the spatial shape information of the contour of the ship target to refine the local spatial constraint by a Markov random field. In addition, the results of K-means are used to initialize the improved FCM method. Experimental results show that the improved method is effective and performs better than the existing methods, including the existing FCM methods, for segmentation of IR ship images.
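    For reference, plain fuzzy C-means (the baseline the paper augments with spatial information) can be sketched on 1-D intensities. The initialization scheme and the toy pixel values below are illustrative choices, not the paper's settings:

```python
def fcm_1d(pixels, n_clusters=2, m=2.0, iters=50):
    """Plain fuzzy C-means on 1-D intensities (no spatial term).
    m is the fuzzifier; u[i][j] is the membership of pixel i in
    cluster j."""
    lo, hi = min(pixels), max(pixels)
    # Deterministic init: centers spread evenly over the intensity range.
    centers = [lo + (j + 0.5) * (hi - lo) / n_clusters
               for j in range(n_clusters)]
    for _ in range(iters):
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = []
        for x in pixels:
            d = [abs(x - c) + 1e-12 for c in centers]
            row = [1.0 / sum((d[j] / d[k]) ** (2 / (m - 1))
                             for k in range(n_clusters))
                   for j in range(n_clusters)]
            u.append(row)
        # Center update: mean of pixels weighted by u_ij^m.
        centers = [
            sum(u[i][j] ** m * pixels[i] for i in range(len(pixels)))
            / sum(u[i][j] ** m for i in range(len(pixels)))
            for j in range(n_clusters)
        ]
    return centers, u

# Bright ship pixels against a dark background (toy intensities):
pixels = [10, 12, 11, 9, 200, 205, 198, 202]
centers, memberships = fcm_1d(pixels)
```

    The paper's improvements add nonlocal spatial terms and a Markov random field refinement on top of this basic iteration.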

  16. Method for Smoke Spread Testing of Large Premises

    NASA Astrophysics Data System (ADS)

    Walmerdahl, P.; Werling, P.

    2001-11-01

    A method for performing non-destructive smoke spread tests has been developed, tested and applied to several existing buildings. Burning methanol in different size steel trays cooled by water generates the heat source. Several tray sizes are available to cover fire sources up to nearly 1MW. The smoke is supplied by means of a suitable number of smoke generators that produce a smoke, which can be described as a non-toxic aerosol. The advantage of the method is that it provides a means for performing non-destructive tests in already existing buildings and other installations for the purpose of evaluating the functionality and design of the active fire protection measures such as smoke extraction systems, etc. In the report, the method is described in detail and experimental data from the try-out of the method are also presented in addition to a discussion on applicability and flexibility of the method.

  17. Wavelet-based image compression using shuffling and bit plane correlation

    NASA Astrophysics Data System (ADS)

    Kim, Seungjong; Jeong, Jechang

    2000-12-01

    In this paper, we propose a wavelet-based image compression method using shuffling and bit plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on quantized coefficients, and (2) choosing the arithmetic coding context according to the maximum correlation direction. The experimental results are comparable to those of existing coders, and superior for some images with low correlation.

  18. Strain gage measurement errors in the transient heating of structural components

    NASA Technical Reports Server (NTRS)

    Richards, W. Lance

    1993-01-01

    Significant strain-gage errors may exist in measurements acquired in transient thermal environments if conventional correction methods are applied. Conventional correction theory was modified and a new experimental method was developed to correct indicated strain data for errors created in radiant heating environments ranging from 0.6 C/sec (1 F/sec) to over 56 C/sec (100 F/sec). In some cases the new and conventional methods differed by as much as 30 percent. Experimental and analytical results were compared to demonstrate the new technique. For heating conditions greater than 6 C/sec (10 F/sec), the indicated strain data corrected with the developed technique compared much better to analysis than the same data corrected with the conventional technique.

  19. Reconstituting protein interaction networks using parameter-dependent domain-domain interactions

    PubMed Central

    2013-01-01

    Background We can describe protein-protein interactions (PPIs) as sets of distinct domain-domain interactions (DDIs) that mediate the physical interactions between proteins. Experimental data confirm that DDIs are more consistent than their corresponding PPIs, lending support to the notion that analyses of DDIs may improve our understanding of PPIs and lead to further insights into cellular function, disease, and evolution. However, currently available experimental DDI data cover only a small fraction of all existing PPIs and, in the absence of structural data, determining which particular DDI mediates any given PPI is a challenge. Results We present two contributions to the field of domain interaction analysis. First, we introduce a novel computational strategy to merge domain annotation data from multiple databases. We show that when we merged yeast domain annotations from six annotation databases we increased the average number of domains per protein from 1.05 to 2.44, bringing it closer to the estimated average value of 3. Second, we introduce a novel computational method, parameter-dependent DDI selection (PADDS), which, given a set of PPIs, extracts a small set of domain pairs that can reconstruct the original set of protein interactions, while attempting to minimize false positives. Based on a set of PPIs from multiple organisms, our method extracted 27% more experimentally detected DDIs than existing computational approaches. Conclusions We have provided a method to merge domain annotation data from multiple sources, ensuring large and consistent domain annotation for any given organism. Moreover, we provided a method to extract a small set of DDIs from the underlying set of PPIs and we showed that, in contrast to existing approaches, our method was not biased towards DDIs with low or high occurrence counts. Finally, we used these two methods to highlight the influence of the underlying annotation density on the characteristics of extracted DDIs. 
Although increased annotations greatly expanded the possible DDIs, the lack of knowledge of the true biological false positive interactions still prevents an unambiguous assignment of domain interactions responsible for all protein network interactions. Executable files and examples are given at: http://www.bhsai.org/downloads/padds/ PMID:23651452

  20. Revisiting the blind tests in crystal structure prediction: accurate energy ranking of molecular crystals.

    PubMed

    Asmadi, Aldi; Neumann, Marcus A; Kendrick, John; Girard, Pascale; Perrin, Marc-Antoine; Leusen, Frank J J

    2009-12-24

    In the 2007 blind test of crystal structure prediction hosted by the Cambridge Crystallographic Data Centre (CCDC), a hybrid DFT/MM method correctly ranked each of the four experimental structures as having the lowest lattice energy of all the crystal structures predicted for each molecule. The work presented here further validates this hybrid method by optimizing the crystal structures (experimental and submitted) of the first three CCDC blind tests held in 1999, 2001, and 2004. Except for the crystal structures of compound IX, all structures were reminimized and ranked according to their lattice energies. The hybrid method computes the lattice energy of a crystal structure as the sum of the DFT total energy and a van der Waals (dispersion) energy correction. Considering all four blind tests, the crystal structure with the lowest lattice energy corresponds to the experimentally observed structure for 12 out of 14 molecules. Moreover, good geometrical agreement is observed between the structures determined by the hybrid method and those measured experimentally. In comparison with the correct submissions made by the blind test participants, all hybrid optimized crystal structures (apart from compound II) have the smallest calculated root mean squared deviations from the experimentally observed structures. It is predicted that a new polymorph of compound V exists under pressure.

  1. A processing centre for the CNES CE-GPS experimentation

    NASA Technical Reports Server (NTRS)

    Suard, Norbert; Durand, Jean-Claude

    1994-01-01

    CNES is involved in a GPS (Global Positioning System) geostationary overlay experimentation. The purpose of this experimentation is to test various new techniques in order to select the optimal station synchronization method, as well as the geostationary spacecraft orbitography method. These new techniques are needed to develop the Ranging GPS Integrity Channel services. The CNES experimentation includes three transmitting/receiving ground stations (manufactured by IN-SNEC), one INMARSAT 2 C/L band transponder and a processing center named STE (Station de Traitements de l'Experimentation). Not all the techniques to be tested are implemented, but the experimental system has to include several functions: some of the future system's simulation functions, such as a servo-loop function, and in particular a data collection function providing rapid monitoring of system operation, analysis of existing ground station processes, and several weeks of data coverage for other scientific studies. This paper discusses the system architecture and some criteria used in its design, as well as the monitoring function, the approach used to develop a low-cost and short-life processing center in collaboration with a CNES sub-contractor (ATTDATAID), and some results.

  2. Existing methods for improving the accuracy of digital-to-analog converters

    NASA Astrophysics Data System (ADS)

    Eielsen, Arnfinn A.; Fleming, Andrew J.

    2017-09-01

    The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
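    The effect of large periodic dithering can be illustrated numerically: a DAC with mismatched output levels is driven with a triangular dither and its output averaged, emulating the analog low-pass filter that follows the converter. The mismatch values, dither amplitude, and period below are hypothetical, chosen only to show the linearizing effect:

```python
def mismatched_dac(code, levels):
    """An ideal DAC would output the code itself; `levels[code]` holds
    the actual (mismatched) output voltage for each input code."""
    return levels[max(0, min(len(levels) - 1, code))]

def dithered_output(x, levels, period=16, amp=2.0):
    """Average the DAC output over one period of a large triangular
    dither (amplitude `amp` LSBs), emulating the analog low-pass
    filter that follows the converter."""
    total = 0.0
    for k in range(period):
        d = amp * (4 * abs(k / period - 0.5) - 1)  # triangle in [-amp, amp]
        total += mismatched_dac(round(x + d), levels)
    return total / period

# Hypothetical level errors for a 3-bit converter:
levels = [0.0, 1.1, 1.9, 3.2, 4.0, 5.1, 5.8, 7.0]
plain = mismatched_dac(3, levels)    # 0.2 LSB static error at code 3
smooth = dithered_output(3, levels)  # averaging over neighbors shrinks it
```

    Because the dither sweeps the output across several levels, roughly zero-mean level errors average toward zero, which is the mechanism behind the dithering methods compared in the article.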

  3. A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.

    PubMed

    Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang

    2017-01-01

    Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection for noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and considers the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements of fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.
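    For contrast with the proposed doubly stochastic approach, a classic point-wise detector of the kind it improves on is the one-sided CUSUM, which flags a change as soon as cumulative drift from a target mean exceeds a threshold. The slack and threshold values below are illustrative:

```python
def cusum(series, target_mean, slack=0.5, threshold=4.0):
    """One-sided CUSUM change point detector: returns the first index
    where the cumulative positive drift from `target_mean` exceeds
    `threshold`, or None. A classic point-wise baseline, not the
    paper's doubly stochastic method."""
    s = 0.0
    for t, x in enumerate(series):
        # Accumulate drift above the mean, less an allowed slack.
        s = max(0.0, s + (x - target_mean - slack))
        if s > threshold:
            return t
    return None

# Toy series with a mean shift starting at index 5:
data = [0.1, -0.2, 0.0, 0.3, -0.1, 2.1, 1.9, 2.2, 2.0, 1.8]
cp = cusum(data, target_mean=0.0)
```

    Such detectors examine one running statistic at a time; the paper's contribution is to reason jointly over all suspected anomalies and the distribution of inter-anomaly times.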

  4. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    PubMed

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-06-01

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Experimental and theoretical studies of 3-benzyloxy-2-nitropyridine

    NASA Astrophysics Data System (ADS)

    Sun, Wenting; Cui, Yu; Liu, Huimin; Zhao, Haitao; Zhang, Wenqin

    2012-10-01

    The structure of 3-benzyloxy-2-nitropyridine has been investigated both experimentally and theoretically. The X-ray crystallography results show that the nitro group is tilted out of the pyridine ring plane by 66.4(4)°, which is mainly attributed to electron-electron repulsion between the lone pairs on the O atom of the 3-benzyloxy moiety and the O atoms of the nitro group. An interesting centrosymmetric π-stacking molecular pair has been found in the crystalline state, which results in the approximate coplanarity of the pyridine ring with the benzene ring. The calculated results show that the dihedral angle between the nitro group and the pyridine ring from the X3LYP method is much closer to the experimental data than that from the M06-2X one. The existence of two conformational isomers of 3-benzyloxy-2-nitropyridine with equal energy explains well the disorder of the nitro group at room temperature. In addition, the vibrational frequencies are also calculated by the X3LYP and M06-2X methods and compared with the experimental results. The predictions from the X3LYP method coincide well with the locations of the experimental frequencies.

  6. Development plan for the External Hazards Experimental Group. Light Water Reactor Sustainability Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, Justin Leigh; Smith, Curtis Lee; Burns, Douglas Edward

    This report describes the development plan for a new multi-partner External Hazards Experimental Group (EHEG) coordinated by Idaho National Laboratory (INL) within the Risk-Informed Safety Margin Characterization (RISMC) technical pathway of the Light Water Reactor Sustainability Program. Currently, there is limited data available for development and validation of the tools and methods being developed in the RISMC Toolkit. The EHEG is being developed to obtain high-quality, small- and large-scale experimental data for validation of RISMC tools and methods in a timely and cost-effective way. The group of universities and national laboratories that will eventually form the EHEG (which is ultimately expected to include both the initial participants and other universities and national laboratories that have been identified) have the expertise and experimental capabilities needed to both obtain and compile existing data archives and perform additional seismic and flooding experiments. The data developed by EHEG will be stored in databases for use within RISMC. These databases will be used to validate the advanced external hazard tools and methods.

  7. Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy

    NASA Technical Reports Server (NTRS)

    Ford, G. E.

    1986-01-01

    To characterize and quantify the performance of the Landsat Thematic Mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated, and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons of existing methods for the design of linear transformations for dimensionality reduction are presented. These methods include the discrete Karhunen-Loève (KL) expansion, Multiple Discriminant Analysis (MDA), the Thematic Mapper (TM)-Tasseled Cap Linear Transformation, and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed. They are referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Versions of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three-dimensional feature space. It is shown experimentally as well that for the proposed methods, the classes with high weights have improvements in class conditional probability of error estimates as expected.

  8. EMUDRA: Ensemble of Multiple Drug Repositioning Approaches to Improve Prediction Accuracy.

    PubMed

    Zhou, Xianxiao; Wang, Minghui; Katsyv, Igor; Irie, Hanna; Zhang, Bin

    2018-04-24

    Availability of large-scale genomic, epigenetic and proteomic data in complex diseases makes it possible to objectively and comprehensively identify therapeutic targets that can lead to new therapies. The Connectivity Map has been widely used to explore novel indications of existing drugs. However, the prediction accuracy of existing methods, such as the Kolmogorov-Smirnov statistic, remains low. Here we present a novel high-performance drug repositioning approach that improves over the state-of-the-art methods. We first designed an expression weighted cosine method (EWCos) to minimize the influence of uninformative expression changes and then developed an ensemble approach termed EMUDRA (Ensemble of Multiple Drug Repositioning Approaches) to integrate EWCos and three existing state-of-the-art methods. EMUDRA significantly outperformed individual drug repositioning methods when applied to simulated and independent evaluation datasets. Using EMUDRA, we predicted and experimentally validated the antibiotic rifabutin as an inhibitor of cell growth in triple-negative breast cancer. EMUDRA can identify drugs that more effectively target disease gene signatures and will thus be a useful tool for identifying novel therapies for complex diseases and predicting new indications for existing drugs. The EMUDRA R package is available at doi:10.7303/syn11510888. bin.zhang@mssm.edu or zhangb@hotmail.com. Supplementary data are available at Bioinformatics online.
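    The idea behind an expression-weighted cosine can be sketched as below: genes with larger expression changes get more weight, and a drug whose signature is anti-correlated with the disease signature is a repositioning candidate. The weighting scheme and toy signatures are illustrative assumptions, not EMUDRA's exact EWCos definition:

```python
import math

def weighted_cosine(sig_a, sig_b, weights):
    """Weighted cosine similarity between two expression signatures;
    `weights` down-weight genes with uninformative (small) changes.
    Illustrative form only."""
    num = sum(w * a * b for w, a, b in zip(weights, sig_a, sig_b))
    na = math.sqrt(sum(w * a * a for w, a in zip(weights, sig_a)))
    nb = math.sqrt(sum(w * b * b for w, b in zip(weights, sig_b)))
    return num / (na * nb)

disease = [2.0, -1.5, 0.1, 1.0]   # log fold changes in disease
drug_a  = [-1.8, 1.4, 0.0, -0.9]  # reverses the disease signature
drug_b  = [1.9, -1.2, 0.2, 1.1]   # mimics it
w = [abs(x) for x in disease]     # weight genes by magnitude of change
```

    A strongly negative weighted cosine (drug_a here) indicates a signature-reversing drug, the usual repositioning criterion.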

  9. Joint histogram-based cost aggregation for stereo matching.

    PubMed

    Min, Dongbo; Lu, Jiangbo; Do, Minh N

    2013-10-01

    This paper presents a novel method for performing efficient cost aggregation in stereo matching. The cost aggregation problem is reformulated from the perspective of a histogram, giving us the potential to reduce the complexity of the cost aggregation in stereo matching significantly. Unlike previous methods, which have tried to reduce the complexity in terms of the size of an image and a matching window, our approach focuses on reducing the computational redundancy that exists across the search range, caused by repeated filtering for all the hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The tradeoff between accuracy and complexity is extensively investigated by varying the parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity and outperforms existing local methods. This paper also provides new insights into complexity-constrained stereo-matching algorithm design.

  10. A Novel Method for Block Size Forensics Based on Morphological Operations

    NASA Astrophysics Data System (ADS)

    Luo, Weiqi; Huang, Jiwu; Qiu, Guoping

    Passive forensics analysis aims to find out how multimedia data is acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensics analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. The experimental results, evaluated on over 1300 natural images, show the effectiveness of our proposed method. Compared with the existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.
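    A much-simplified version of the boundary-energy idea (without the cross-differential filter, morphological cleanup, or MLE step) can be sketched as follows: gradient energy concentrates on columns at block boundaries, so folding it over candidate periods reveals the block size. The candidate set and toy image are illustrative:

```python
def estimate_block_size(image, candidates=(2, 4, 8)):
    """Estimate the horizontal block size of a block-coded image by
    folding column-gradient energy over candidate periods. A simplified
    stand-in for the paper's cross-difference + morphology + MLE chain."""
    h, w = len(image), len(image[0])
    # Absolute horizontal differences accumulate at block boundaries.
    col_grad = [sum(abs(image[r][c + 1] - image[r][c]) for r in range(h))
                for c in range(w - 1)]
    overall = sum(col_grad) / len(col_grad)
    best, best_score = None, float("-inf")
    for p in candidates:
        # Mean gradient energy on columns just before each multiple of p.
        boundary = [col_grad[c] for c in range(w - 1) if (c + 1) % p == 0]
        score = sum(boundary) / len(boundary) - overall
        if score > best_score:
            best, best_score = p, score
    return best

# Toy 4x16 image with flat 8-pixel-wide blocks of different intensity:
img = [[(c // 8) * 50 for c in range(16)] for _ in range(4)]
block = estimate_block_size(img)
```

    Real block-coded images also contain content edges, which is why the paper needs morphological operations to suppress them before estimation.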

  11. Max-margin multiattribute learning with low-rank constraint.

    PubMed

    Zhang, Qiang; Chen, Lin; Li, Baoxin

    2014-07-01

    Attribute learning has attracted a lot of interests in recent years for its advantage of being able to model high-level concepts with a compact set of midlevel attributes. Real-world objects often demand multiple attributes for effective modeling. Most existing methods learn attributes independently without explicitly considering their intrinsic relatedness. In this paper, we propose max margin multiattribute learning with low-rank constraint, which learns a set of attributes simultaneously, using only relative ranking of the attributes for the data. By learning all the attributes simultaneously through low-rank constraint, the proposed method is able to capture their intrinsic correlation for improved learning; by requiring only relative ranking, the method avoids restrictive binary labels of attributes that are often assumed by many existing techniques. The proposed method is evaluated on both synthetic data and real visual data including a challenging video data set. Experimental results demonstrate the effectiveness of the proposed method.

  12. Robust digital image watermarking using distortion-compensated dither modulation

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Yuan, Xiaochen

    2018-04-01

    In this paper, we propose a robust feature extraction based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract the robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method by employing the DAISY descriptor and applying entropy-calculation-based filtering. The experimental results show that the proposed method achieves satisfactory robustness while ensuring watermark imperceptibility, compared to other existing methods.
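    The DC-DM embedding itself can be sketched as a dithered quantizer with partial distortion compensation: the feature value is quantized onto the lattice for the bit being embedded, but only moved a fraction alpha of the way there. The step size, dither, and alpha below are illustrative, not the paper's settings:

```python
def dcdm_embed(x, bit, step=8.0, dither=1.3, alpha=0.8):
    """Distortion-compensated dither modulation: quantize the feature
    onto the shifted lattice for `bit`, then keep (1 - alpha) of the
    host value to limit embedding distortion."""
    offset = dither + bit * step / 2.0
    q = round((x - offset) / step) * step + offset
    return x + alpha * (q - x)

def dm_extract(y, step=8.0, dither=1.3):
    """Decode by choosing whichever of the two shifted lattices lies
    nearer to the received value."""
    best_bit, best_err = 0, float("inf")
    for bit in (0, 1):
        offset = dither + bit * step / 2.0
        q = round((y - offset) / step) * step + offset
        if abs(y - q) < best_err:
            best_bit, best_err = bit, abs(y - q)
    return best_bit
```

    Larger alpha gives more robustness at the cost of more distortion; the decoder tolerates noise up to roughly alpha times a quarter of the step size.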

  13. A critical evaluation of various methods for the analysis of flow-solid interaction in a nest of thin cylinders subjected to cross flows

    NASA Technical Reports Server (NTRS)

    Kim, Sang-Wook

    1987-01-01

    Various experimental, analytical, and numerical analysis methods for the flow-solid interaction of a nest of cylinders subjected to cross flows are reviewed. Nests of cylinders subjected to cross flows are found in numerous engineering applications, including the Space Shuttle Main Engine-Main Injector Assembly (SSME-MIA) and nuclear reactor heat exchangers. Despite its extreme importance in engineering applications, understanding of the flow-solid interaction process is quite limited, and the design of tube banks depends mostly on experiments and/or experimental correlation equations. For the future development of numerical analysis methods for the flow-solid interaction of a nest of cylinders subjected to cross flow, various turbulence models, nonlinear structural dynamics, and existing laminar flow-solid interaction analysis methods are included.

  14. Kinase Identification with Supervised Laplacian Regularized Least Squares

    PubMed Central

    Zhang, He; Wang, Minghui

    2015-01-01

    Phosphorylation is catalyzed by protein kinases and is irreplaceable in regulating biological processes. Identification of phosphorylation sites and their corresponding kinases contributes to the understanding of molecular mechanisms. Mass spectrometry analysis of phosphoproteomes generates a large number of phosphorylated sites. However, experimental methods are costly and time-consuming, and most phosphorylation sites determined experimentally lack kinase information. Therefore, computational methods are urgently needed to address the kinase identification problem. To this end, we propose a new kernel-based machine learning method called Supervised Laplacian Regularized Least Squares (SLapRLS), which adopts a new way to construct kernels based on the similarity matrix and minimizes both the structural risk and the overall inconsistency between labels and similarities. The results predicted using both Phospho.ELM and an additional independent test dataset indicate that SLapRLS can identify kinases more effectively than other existing algorithms. PMID:26448296
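    The abstract describes SLapRLS only at a high level; its exact kernel construction and Laplacian regularizer are given in the paper, not here. As a hedged illustration of the general idea (a regularized least-squares classifier whose kernel is derived from a precomputed similarity matrix), a minimal kernel ridge regression sketch with hypothetical toy data:

    ```python
    import numpy as np

    # Hedged sketch: generic kernel ridge regression in which the kernel is taken
    # directly from a precomputed site-site similarity matrix S. SLapRLS's actual
    # kernel construction and Laplacian term are in the paper, not shown here.

    def kernel_rls_fit(S, y, lam=0.1):
        """Solve (K + lam*I) alpha = y with K = S (similarity-derived kernel)."""
        n = S.shape[0]
        return np.linalg.solve(S + lam * np.eye(n), y)

    def kernel_rls_predict(s_new, alpha):
        """Score a new site from its similarities to the training sites."""
        return s_new @ alpha

    # Toy data: two phosphorylation sites, label +1 = kinase of interest, -1 = not.
    S = np.array([[1.0, 0.2],
                  [0.2, 1.0]])
    y = np.array([1.0, -1.0])
    alpha = kernel_rls_fit(S, y)
    score = kernel_rls_predict(np.array([0.9, 0.1]), alpha)  # site resembling the positive example
    ```

    A new site most similar to the positive training example receives a positive score; the paper's Laplacian term additionally penalizes predictions that disagree with the similarity structure.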

  15. A new method for the prediction of chatter stability lobes based on dynamic cutting force simulation model and support vector machine

    NASA Astrophysics Data System (ADS)

    Peng, Chong; Wang, Lun; Liao, T. Warren

    2015-10-01

    Chatter has become a critical factor limiting machining quality and productivity. To avoid cutting chatter, a new method based on a dynamic cutting force simulation model and a support vector machine (SVM) is presented for predicting chatter stability lobes. The cutting force is selected as the monitoring signal, and wavelet energy entropy theory is used to extract the feature vectors. A support vector machine is constructed using the MATLAB LIBSVM toolbox for pattern classification based on feature vectors derived from experimental cutting data. Combined with the dynamic cutting force simulation model, the stability lobe diagram (SLD) can then be estimated. Finally, the predicted results are compared with existing methods, such as the zero-order analytical (ZOA) and semi-discretization (SD) methods, as well as with actual cutting experimental results, to confirm the validity of the new method.
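    The abstract names wavelet energy entropy as the monitoring feature but does not specify the wavelet family or decomposition depth. A minimal sketch of one plausible reading, using a Haar decomposition and the Shannon entropy of the band energies (the signals below are hypothetical stand-ins for cutting forces):

    ```python
    import math

    # Hedged sketch: multi-level Haar decomposition and Shannon entropy of the
    # relative band energies. The paper's wavelet family and depth are not given
    # in the abstract; the signals below are hypothetical.

    def haar_step(x):
        """One Haar level: pairwise (approximation, detail) coefficients."""
        a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
        d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
        return a, d

    def wavelet_energy_entropy(x, levels=3):
        """Shannon entropy of the energy split over detail bands + final approximation."""
        energies = []
        for _ in range(levels):
            x, d = haar_step(x)
            energies.append(sum(v * v for v in d))
        energies.append(sum(v * v for v in x))
        total = sum(energies)
        p = [e / total for e in energies if e > 0]
        return -sum(pi * math.log(pi) for pi in p)

    flat = [1.0] * 64                       # constant force: all energy in one band
    mixed = [math.sin(2 * math.pi * i / 8) + 0.6 * math.sin(2 * math.pi * i / 3.1)
             for i in range(64)]            # multi-frequency: energy spread over bands
    ```

    Energy spread across bands raises the entropy, so the feature tends to separate broadband chatter-like signals from narrowband stable cutting; such feature vectors then feed the SVM classifier.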

  16. Kinase Identification with Supervised Laplacian Regularized Least Squares.

    PubMed

    Li, Ao; Xu, Xiaoyi; Zhang, He; Wang, Minghui

    2015-01-01

    Phosphorylation is catalyzed by protein kinases and is irreplaceable in regulating biological processes. Identification of phosphorylation sites and their corresponding kinases contributes to the understanding of molecular mechanisms. Mass spectrometry analysis of phosphoproteomes generates a large number of phosphorylated sites. However, experimental methods are costly and time-consuming, and most phosphorylation sites determined experimentally lack kinase information. Therefore, computational methods are urgently needed to address the kinase identification problem. To this end, we propose a new kernel-based machine learning method called Supervised Laplacian Regularized Least Squares (SLapRLS), which adopts a new way to construct kernels based on the similarity matrix and minimizes both the structural risk and the overall inconsistency between labels and similarities. The results predicted using both Phospho.ELM and an additional independent test dataset indicate that SLapRLS can identify kinases more effectively than other existing algorithms.

  17. A salient region detection model combining background distribution measure for indoor robots.

    PubMed

    Li, Na; Xu, Hui; Wang, Zhenhua; Sun, Lining; Chen, Guodong

    2017-01-01

    The vision system plays an important role in indoor robotics. Saliency detection methods, which capture regions perceived as important, are used to improve the performance of the visual perception system. Most state-of-the-art saliency detection methods, although performing outstandingly on natural images, fail in complicated indoor environments. Therefore, we propose a new method comprising graph-based RGB-D segmentation, a primary saliency measure, a background distribution measure, and their combination. In addition, region roundness is proposed to describe the compactness of a region so that background distribution can be measured more robustly. To validate the proposed approach, eleven influential methods are compared on the DSD and ECSSD datasets. Moreover, we build a mobile robot platform for application in an actual environment and design three kinds of experimental conditions: different viewpoints, illumination variations, and partial occlusions. Experimental results demonstrate that our model outperforms existing methods and is useful for indoor mobile robots.

  18. A Laboratory Exercise with Related Rates.

    ERIC Educational Resources Information Center

    Sworder, Steven C.

    A laboratory experiment, based on a simple electric circuit that can be used to demonstrate the existence of real-world "related rates" problems, is outlined and an equation for voltage across the capacitor terminals during discharge is derived. The necessary materials, setup methods, and experimental problems are described. A student laboratory…

  19. Mix and match: how to regain your balance.

    PubMed

    Jupiter, Daniel C

    2013-01-01

    In retrospective studies, a demographic imbalance often exists between cases and controls. This imbalance may affect outcome, independent of experimental group. We discuss matching methods that allow us to overcome these imbalances. Copyright © 2013 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
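    As a hedged illustration of the simplest matching idea the note discusses (the authors' actual recommendations are in the paper), a greedy 1:1 nearest-neighbor match on a single covariate such as age:

    ```python
    # Hedged sketch: greedy 1:1 nearest-neighbor matching on a single covariate.
    # The ages below are hypothetical; real studies typically match on several
    # covariates or on a propensity score.

    def greedy_match(cases, controls):
        """Pair each case with the closest still-unused control (by age)."""
        available = list(controls)
        pairs = []
        for case in cases:
            best = min(available, key=lambda c: abs(c - case))
            available.remove(best)
            pairs.append((case, best))
        return pairs

    case_ages = [34, 58, 71]
    control_ages = [30, 36, 55, 60, 70, 80]
    pairs = greedy_match(case_ages, control_ages)
    # → [(34, 36), (58, 60), (71, 70)]
    ```

    Matching in this way removes the age imbalance before outcomes are compared, at the cost of discarding unmatched controls.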

  20. Offline signature verification using convolution Siamese network

    NASA Astrophysics Data System (ADS)

    Xing, Zi-Jian; Yin, Fei; Wu, Yi-Chao; Liu, Cheng-Lin

    2018-04-01

    This paper presents an offline signature verification approach using a convolutional Siamese neural network. Unlike existing methods, which treat feature extraction and metric learning as two independent stages, we adopt a deep-learning-based framework that combines the two stages and can be trained end-to-end. Experimental results on two public offline databases (GPDSsynthetic and CEDAR) demonstrate the superiority of our method on the offline signature verification problem.

  1. A practical model for pressure probe system response estimation (with review of existing models)

    NASA Astrophysics Data System (ADS)

    Hall, B. F.; Povey, T.

    2018-04-01

    The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.

  2. Bayesian Treed Calibration: An Application to Carbon Capture With AX Sorbent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konomi, Bledar A.; Karagiannis, Georgios; Lai, Kevin

    2017-01-02

    In cases where field or experimental measurements are not available, computer models can represent real physical or engineering systems and reproduce their outcomes. They are usually calibrated against experimental data to create a better representation of the real system. Statistical methods based on Gaussian processes for calibration and prediction have been especially important when the computer models are expensive and experimental data limited. In this paper, we develop Bayesian treed calibration (BTC) as an extension of standard Gaussian process calibration methods to deal with non-stationary computer models and/or their discrepancy from the field (or experimental) data. Our proposed method partitions both the calibration and observable input space, based on a binary tree partitioning, into sub-regions where existing model calibration methods can be applied to connect a computer model with the real system. The estimation of the parameters in the proposed model is carried out using Markov chain Monte Carlo (MCMC) computational techniques, and different strategies have been applied to improve mixing. We illustrate our method on two artificial examples and a real application concerning the capture of carbon dioxide with AX amine-based sorbents. The source code and the examples analyzed in this paper are available as part of the supplementary materials.

  3. 3D shape reconstruction of specular surfaces by using phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Zhou, Tian; Chen, Kun; Wei, Haoyun; Li, Yan

    2016-10-01

    Existing estimation methods for recovering height information from surface gradients are mainly divided into Modal and Zonal techniques. Since specular surfaces used in industry often have complex shapes and large areas, consideration must be given both to measurement accuracy and to on-line processing speed, which is beyond the capacity of existing estimation methods. Incorporating the Modal and Zonal approaches into a unifying scheme, we introduce an improved 3D shape reconstruction method for specular surfaces based on Phase Measuring Deflectometry. Modal estimation is first implemented to derive coarse height information of the measured surface as initial iteration values. The real shape is then recovered using a modified Zonal wave-front reconstruction algorithm. By combining the advantages of the Modal and Zonal estimations, the proposed method simultaneously achieves consistently high accuracy and rapid convergence. Moreover, the iterative process, based on an advanced successive over-relaxation technique, shows consistent rejection of measurement errors, guaranteeing stability and robustness in practical applications. Both simulations and experimental measurements demonstrate the validity and efficiency of the proposed method. According to the experimental results, the computation time decreases by approximately 74.92% compared with the Zonal estimation, and the surface error is about 6.68 μm for a reconstruction of 391×529 points of an experimentally measured spherical mirror. In general, the method converges rapidly with high accuracy, providing an efficient, stable and real-time approach for the shape reconstruction of specular surfaces in practical situations.
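    The paper's 2D Zonal algorithm and its successive over-relaxation (SOR) details are not reproduced in the abstract; the following is a minimal 1D sketch of the general idea, integrating measured slopes into heights by SOR on the least-squares normal equations, with the Modal-style initial guess replaced by zeros:

    ```python
    # Hedged 1D sketch of Zonal-style gradient integration by successive
    # over-relaxation (SOR): solve h[i-1] - 2h[i] + h[i+1] = dx*(s[i] - s[i-1])
    # for interior nodes, anchoring h[0] = 0. The paper's 2D algorithm, boundary
    # handling, and Modal initialization are not shown here.

    def sor_integrate(slopes, dx, omega=1.5, iters=3000):
        n = len(slopes) + 1          # heights at nodes, slopes between nodes
        h = [0.0] * n                # zero initial guess (the paper seeds with Modal estimates)
        for _ in range(iters):
            for i in range(1, n - 1):
                rhs = dx * (slopes[i] - slopes[i - 1])
                gs = 0.5 * (h[i - 1] + h[i + 1] - rhs)   # Gauss-Seidel value
                h[i] += omega * (gs - h[i])              # over-relaxed update
            h[n - 1] = h[n - 2] + dx * slopes[n - 2]     # slope condition at the far edge
        return h

    # Recover h(x) = x^2 from its exact finite-difference slopes on [0, 2].
    dx = 0.1
    xs = [i * dx for i in range(21)]
    slopes = [(xs[i + 1] ** 2 - xs[i] ** 2) / dx for i in range(20)]
    h = sor_integrate(slopes, dx)
    ```

    With omega between 1 and 2, SOR converges faster than plain Gauss-Seidel; the paper's contribution is pairing such a Zonal solver with a Modal first guess so that few iterations are needed on large grids.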

  4. Optimal chroma-like channel design for passive color image splicing detection

    NASA Astrophysics Data System (ADS)

    Zhao, Xudong; Li, Shenghong; Wang, Shilin; Li, Jianhua; Yang, Kongjin

    2012-12-01

    Image splicing is one of the most common image forgeries, and with powerful image manipulation tools it is becoming ever easier to perform. Several methods have been proposed for image splicing detection, all of which work on certain existing color channels. However, splicing artifacts vary across color channels, so the choice of color model matters for splicing detection. In this article, instead of selecting an existing color model, we propose a color channel design method that finds the most discriminative channel, referred to as the optimal chroma-like channel, for a given feature extraction method. Experimental results show that both spatial and frequency features extracted from the designed channel achieve a higher detection rate than those extracted from traditional color channels.
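    The authors' actual channel design procedure is not given in the abstract; as a hedged sketch of the general idea (searching for a weighted combination of R, G and B that maximizes class separation of some scalar feature), a tiny grid search over simplex weights using Fisher's discriminant ratio, with hypothetical feature values:

    ```python
    # Hedged sketch: grid search for channel weights (wr, wg, wb), summing to 1,
    # that maximize Fisher's discriminant ratio between spliced and authentic
    # feature values. The authors' actual design procedure and features are not
    # given in the abstract; the (R, G, B) feature triples below are hypothetical.

    def fisher_ratio(a, b):
        """Between-class separation over within-class scatter of two samples."""
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        va = sum((x - ma) ** 2 for x in a) / len(a)
        vb = sum((x - mb) ** 2 for x in b) / len(b)
        return (ma - mb) ** 2 / (va + vb + 1e-12)

    def best_channel(spliced, authentic, steps=10):
        """Return the weight triple whose projection best separates the classes."""
        best_w, best_j = None, -1.0
        for i in range(steps + 1):
            for j in range(steps + 1 - i):
                w = (i / steps, j / steps, (steps - i - j) / steps)
                pa = [w[0] * r + w[1] * g + w[2] * b for r, g, b in spliced]
                pb = [w[0] * r + w[1] * g + w[2] * b for r, g, b in authentic]
                jv = fisher_ratio(pa, pb)
                if jv > best_j:
                    best_j, best_w = jv, w
        return best_w, best_j

    # Toy data in which only the G component separates the classes.
    spliced = [(0.2, 0.9, 0.7), (0.8, 0.8, 0.2)]
    authentic = [(0.3, 0.1, 0.6), (0.7, 0.2, 0.3)]
    w_opt, j_opt = best_channel(spliced, authentic)
    ```

    On this toy data the search concentrates all weight on the discriminative component; the paper applies the same principle to design a chroma-like channel for its splicing features.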

  5. COSMOS: accurate detection of somatic structural variations through asymmetric comparison between tumor and normal samples

    PubMed Central

    Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun

    2016-01-01

    An important challenge in cancer genomics is the precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, compared with 70.4% for the next best existing method. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. PMID:26833260

  6. The Taguchi Method Application to Improve the Quality of a Sustainable Process

    NASA Astrophysics Data System (ADS)

    Titu, A. M.; Sandu, A. V.; Pop, A. B.; Titu, S.; Ciungu, T. C.

    2018-06-01

    The Taguchi method has long been used to improve the quality of analyzed processes and products. This research addresses an unusual situation: modeling the technical parameters of a process intended to be sustainable, improving process quality and ensuring quality through an experimental research method. Modern experimental techniques can be applied in any field, and this study reflects the benefits of combining agricultural sustainability principles with the application of the Taguchi method. The experimental method used in this practical study combines engineering techniques with statistical experimental modeling to achieve rapid improvement of quality costs, in effect seeking optimization of existing processes and their main technical parameters. The paper is a technical study that applies the Taguchi method, considered effective because it rapidly achieves 70 to 90% of the desired optimization of the technical parameters. The remaining 10 to 30 percent can be obtained with one or two complementary experiments, limited to the 2 to 4 technical parameters considered most influential. Applying the Taguchi method allowed the simultaneous study, in the same experiment, of the most important influence factors in different combinations, while also determining each factor's contribution.
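    The study's factors and responses are not given in the abstract; a minimal sketch of the standard Taguchi machinery it relies on, with a hypothetical L4 orthogonal array, larger-the-better signal-to-noise (S/N) ratios, and main-effect ranking:

    ```python
    import math

    # Hedged sketch: the paper's factors and responses are not given in the
    # abstract, so the L4 orthogonal array, factors A/B/C, and response values
    # below are hypothetical. Shown: larger-the-better S/N ratios and the
    # main-effect ranking Taguchi analysis uses to pick influential parameters.

    L4 = [           # rows = runs, columns = levels (1 or 2) of factors A, B, C
        (1, 1, 1),
        (1, 2, 2),
        (2, 1, 2),
        (2, 2, 1),
    ]
    responses = [[20.1, 19.8], [24.0, 24.4], [30.2, 29.6], [35.5, 36.1]]  # replicates per run

    def sn_larger_better(ys):
        """Larger-the-better S/N ratio: -10*log10(mean(1/y^2))."""
        return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

    sn = [sn_larger_better(ys) for ys in responses]

    def main_effect(col):
        """|mean S/N at level 2 - mean S/N at level 1| for one factor column."""
        lvl1 = [sn[r] for r in range(4) if L4[r][col] == 1]
        lvl2 = [sn[r] for r in range(4) if L4[r][col] == 2]
        return abs(sum(lvl2) / len(lvl2) - sum(lvl1) / len(lvl1))

    effects = {name: main_effect(i) for i, name in enumerate("ABC")}
    ```

    Ranking the factors by main effect identifies the few most influential parameters, consistent with the paper's suggestion of carrying 2 to 4 dominant factors into complementary experiments.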

  7. Propagation-based x-ray phase contrast imaging using an iterative phase diversity technique

    NASA Astrophysics Data System (ADS)

    Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.

    2018-03-01

    Through the use of a phase diversity technique, we demonstrate a near-field in-line x-ray phase contrast algorithm that provides improved object reconstruction when compared to our previous iterative methods for a homogeneous sample. Like our previous methods, the new technique uses the sample refractive index distribution during the reconstruction process. The technique complements existing monochromatic and polychromatic methods and is useful in situations where experimental phase contrast data is affected by noise.

  8. Computer tomography of flows external to test models

    NASA Technical Reports Server (NTRS)

    Prikryl, I.; Vest, C. M.

    1982-01-01

    Computer tomographic techniques for reconstructing three-dimensional aerodynamic density fields from interferograms recorded from several different viewing directions were studied. Emphasis is on the case in which an opaque object, such as a test model in a wind tunnel, obscures significant regions of the interferograms (projection data). A method called the Iterative Convolution Method (ICM), existing methods in which the field is represented by series expansions, and the analysis of real experimental data in the form of aerodynamic interferograms are discussed.

  9. Numerical-experimental investigation of resonance characteristics of a sounding board

    NASA Astrophysics Data System (ADS)

    Shlychkov, S. V.

    2007-05-01

    The paper presents results of numerical and experimental investigations into the vibrations of thin-walled structures, considering such features as complex geometry, laminated walls, material anisotropy, the presence of stiffeners, and initial stresses. The object of the study is the sounding board of an acoustic guitar, whose main structural material is a three-layer birch veneer. Based on the finite-element method, a corresponding calculation model is created, and the steady-state regimes of forced vibrations of the sounding board are investigated. Good correspondence between calculation results and experimental data is found.

  10. Experimental study on slow flexural waves around the defect modes in a phononic crystal beam using fiber Bragg gratings

    NASA Astrophysics Data System (ADS)

    Chuang, Kuo-Chih; Zhang, Zhi-Qiang; Wang, Hua-Xin

    2016-12-01

    This work experimentally studies the influence of point defect modes on the group velocity of flexural waves in a phononic crystal Timoshenko beam. Using the transfer matrix method with a supercell technique, the band structures and the group velocities around the defect modes are theoretically obtained. In particular, to demonstrate the existence of the localized defect modes inside the band gaps, a high-sensitivity fiber Bragg grating sensing system is set up and the displacement transmittance is measured. Slow propagation of flexural waves via defect coupling in the phononic crystal beam is then experimentally demonstrated with Hanning windowed tone burst excitations.

  11. Assessment of Laminar, Convective Aeroheating Prediction Uncertainties for Mars Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Prabhu, Dinesh K.

    2011-01-01

    An assessment of computational uncertainties is presented for numerical methods used by NASA to predict laminar, convective aeroheating environments for Mars entry vehicles. A survey was conducted of existing experimental heat-transfer and shock-shape data for high enthalpy, reacting-gas CO2 flows and five relevant test series were selected for comparison to predictions. Solutions were generated at the experimental test conditions using NASA state-of-the-art computational tools and compared to these data. The comparisons were evaluated to establish predictive uncertainties as a function of total enthalpy and to provide guidance for future experimental testing requirements to help lower these uncertainties.

  12. Assessment of Laminar, Convective Aeroheating Prediction Uncertainties for Mars-Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Prabhu, Dinesh K.

    2013-01-01

    An assessment of computational uncertainties is presented for numerical methods used by NASA to predict laminar, convective aeroheating environments for Mars-entry vehicles. A survey was conducted of existing experimental heat transfer and shock-shape data for high-enthalpy reacting-gas CO2 flows, and five relevant test series were selected for comparison with predictions. Solutions were generated at the experimental test conditions using NASA state-of-the-art computational tools and compared with these data. The comparisons were evaluated to establish predictive uncertainties as a function of total enthalpy and to provide guidance for future experimental testing requirements to help lower these uncertainties.

  13. Practical implementation of spectral-intensity dispersion-canceled optical coherence tomography with artifact suppression

    NASA Astrophysics Data System (ADS)

    Shirai, Tomohiro; Friberg, Ari T.

    2018-04-01

    Dispersion-canceled optical coherence tomography (OCT) based on spectral intensity interferometry was devised as a classical counterpart of quantum OCT to enhance the basic performance of conventional OCT. In this paper, we demonstrate experimentally that an alternative method of realizing this kind of OCT by means of two optical fiber couplers and a single spectrometer is a more practical and reliable option than the existing methods proposed previously. Furthermore, we develop a recipe for reducing multiple artifacts simultaneously on the basis of simple averaging and verify experimentally that it works successfully in the sense that all the artifacts are mitigated effectively and only the true signals carrying structural information about the sample survive.

  14. A New Adaptive Framework for Collaborative Filtering Prediction

    PubMed Central

    Almosallam, Ibrahim A.; Shang, Yi

    2010-01-01

    Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved 4.67% improvement over Netflix’s system. PMID:21572924

  15. A New Adaptive Framework for Collaborative Filtering Prediction.

    PubMed

    Almosallam, Ibrahim A; Shang, Yi

    2008-06-01

    Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved 4.67% improvement over Netflix's system.
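    The abstract's core z-score idea can be sketched as follows; the similarity weighting and the adaptive user-based/item-based/global switching of the actual framework are omitted, and the ratings are hypothetical:

    ```python
    import statistics

    # Hedged sketch of the z-score idea: normalize each user's ratings, average
    # neighbors' z-scores for the target item, then map back through the target
    # user's own mean and spread. The framework's similarity weighting and its
    # adaptive user/item/global switching are omitted; ratings are hypothetical.

    def zscores(ratings):
        """Return (item -> z-score) plus the user's mean and std deviation."""
        mu = statistics.mean(ratings.values())
        sd = statistics.pstdev(ratings.values()) or 1.0  # guard a constant rater
        return {item: (r - mu) / sd for item, r in ratings.items()}, mu, sd

    def predict(target, neighbors, item):
        """Predict `target`'s rating of `item` from neighbors who rated it."""
        _, mu, sd = zscores(target)
        zs = [zscores(nb)[0][item] for nb in neighbors if item in nb]
        return mu + sd * sum(zs) / len(zs)

    target = {"A": 4, "B": 2}                 # narrow-range rater
    neighbors = [{"A": 5, "B": 1, "C": 5}, {"A": 4, "B": 2, "C": 4}]
    pred = predict(target, neighbors, "C")    # ≈ 3.71 on the target's own scale
    ```

    Working in z-scores lets users with different rating scales and spreads be combined meaningfully, which is what helps most on sparse data where raw rating averages are unreliable.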

  16. Proposal on Calculation of Ventilation Threshold Using Non-contact Respiration Measurement with Pattern Light Projection

    NASA Astrophysics Data System (ADS)

    Aoki, Hirooki; Ichimura, Shiro; Fujiwara, Toyoki; Kiyooka, Satoru; Koshiji, Kohji; Tsuzuki, Keishi; Nakamura, Hidetoshi; Fujimoto, Hideo

    We propose a method for calculating the ventilation threshold using non-contact respiration measurement with dot-matrix pattern light projection during pedaling exercise. The validity and effectiveness of the proposed method are examined by simultaneous measurement with an expiration gas analyzer. The experimental results showed that the quasi-ventilation thresholds calculated by our proposed method correlated with the ventilation thresholds calculated by the expiration gas analyzer. This result indicates the possibility of non-contact measurement of the ventilation threshold by the proposed method.

  17. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving these processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators and extracellular protease production. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm, and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, suggesting that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed for model selection, highlighting the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper demonstrates the effectiveness of the proposed method for parameter estimation and model selection using noisy and incomplete experimental data. This study is expected to provide new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data. PMID:23593445

  18. Fuzzy method of recognition of high molecular substances in evidence-based biology

    NASA Astrophysics Data System (ADS)

    Olevskyi, V. I.; Smetanin, V. T.; Olevska, Yu. B.

    2017-10-01

    Modern requirements for reliable, high-quality research put mathematical methods for analyzing results at the forefront. Because of this, evidence-based methods of processing experimental data have become increasingly popular in the biological sciences and medicine. Their basis is meta-analysis, a method of quantitative generalization over a large number of randomized trials addressing the same problem, which are often contradictory and performed by different authors. It allows identifying the most important trends and quantitative indicators in the data, verifying advanced hypotheses, and discovering new effects in the population genotype. Existing methods for recognizing high-molecular substances by gel electrophoresis of proteins under denaturing conditions are based on approximate comparison of the contrast of electrophoregrams with a standard solution of known substances. We propose a fuzzy method for modeling experimental data to increase the accuracy and validity of findings in the detection of new proteins.

  19. A universal laboratory method for determining physical parameters of radon migration in dry granulated porous media.

    PubMed

    Ye, Yong-Jun; Zhang, Yun-Feng; Dai, Xin-Tao; Ding, De-Xin

    2017-10-01

    The particle size and heaping method of exhalation media have important effects on physical parameters such as the free radon production rate, porosity, permeability, and radon diffusion coefficient. However, existing methods for determining those parameters are complex and time-consuming. In this study, a novel, systematic determination method was proposed based on nuclide decay, radon diffusion and migration theory, and the mass conservation law, and an associated experimental device was designed and manufactured. The radon diffusion coefficient (D), free radon production rate (α), media permeability (k), and porosity (ε) of a uranium ore heap and of sandy soil were obtained. At the same time, the practicality of the method was improved over other methods, with results showing that accuracy was within the acceptable range of experimental error. This novel method will be of significance for the study of radon migration and exhalation in granulated porous media. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Determination of Vertical Borehole and Geological Formation Properties using the Crossed Contour Method.

    PubMed

    Leyde, Brian P; Klein, Sanford A; Nellis, Gregory F; Skye, Harrison

    2017-03-01

    This paper presents a new method called the Crossed Contour Method for determining the effective properties (borehole radius and ground thermal conductivity) of a vertical ground-coupled heat exchanger. The borehole radius is used as a proxy for the overall borehole thermal resistance. The method has been applied to both simulated and experimental borehole Thermal Response Test (TRT) data using the Duct Storage vertical ground heat exchanger model implemented in the TRansient SYstems Simulation software (TRNSYS). The Crossed Contour Method generates a parametric grid of simulated TRT data for different combinations of borehole radius and ground thermal conductivity in a series of time windows. The error between the average of the simulated and experimental bore field inlet and outlet temperatures is calculated for each set of borehole properties within each time window. Using these data, contours of the minimum error are constructed in the parameter space of borehole radius and ground thermal conductivity. When all of the minimum error contours for each time window are superimposed, the point where the contours cross (intersect) identifies the effective borehole properties for the model that most closely represents the experimental data in every time window and thus over the entire length of the experimental data set. The computed borehole properties are compared with results from existing model inversion methods including the Ground Property Measurement (GPM) software developed by Oak Ridge National Laboratory, and the Line Source Model.
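    A hedged sketch of the Crossed Contour idea as the abstract describes it: for each time window, errors are computed over a grid of (borehole radius, ground conductivity) pairs, and the pair where the per-window minimum-error contours cross is, in effect, the one whose worst-case error across all windows is smallest. The error model and parameter values below are hypothetical; the paper computes errors from TRNSYS simulations against TRT data.

    ```python
    import itertools

    # Hedged sketch: choose, over a parameter grid, the (radius, conductivity)
    # pair minimizing the maximum error across time windows — a stand-in for
    # the point where per-window minimum-error contours intersect. The error
    # function and grids below are hypothetical.

    radii = [0.05, 0.06, 0.07]          # m (proxy for borehole thermal resistance)
    conductivities = [1.5, 2.0, 2.5]    # W/(m*K)

    def window_error(r, k, window):
        """Hypothetical stand-in for the |simulated - measured| temperature error."""
        true_r, true_k = 0.06, 2.0      # hypothetical ground truth
        return abs(r - true_r) * (1 + window) + abs(k - true_k) * (3 - window) * 0.01

    windows = range(3)
    best = min(itertools.product(radii, conductivities),
               key=lambda p: max(window_error(p[0], p[1], w) for w in windows))
    ```

    Because different time windows weight the two parameters differently (early windows are more sensitive to borehole resistance, late windows to ground conductivity), only the correct pair stays on the minimum-error contour of every window, which is what makes the crossing point identifiable.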

  1. Base Pressure at Supersonic Speeds on Two-dimensional Airfoils and on Bodies of Revolution with and Without Fins Having Turbulent Boundary Layers

    NASA Technical Reports Server (NTRS)

    Love, Eugene S

    1957-01-01

    An analysis has been made of available experimental data to show the effects of the variables that are most predominant in determining base pressure at supersonic speeds. The analysis covers base pressures for two-dimensional airfoils and for bodies of revolution with and without stabilizing fins and is restricted to turbulent boundary layers. The present status of available experimental information is summarized, as are the existing methods for predicting base pressure. A simple semiempirical method is presented for estimating base pressure. For two-dimensional bases, this method stems from an analogy established between the base-pressure phenomena and the peak pressure rise associated with the separation of the boundary layer. An analysis made for axially symmetric flow indicates that the base pressure for bodies of revolution is subject to the same analogy. Based upon the methods presented, estimations are made of such effects as Mach number, angle of attack, boattailing, fineness ratio, and fins. These estimations give fair predictions of experimental results. (author)

  2. Computational modeling of RNA 3D structures, with the aid of experimental restraints

    PubMed Central

    Magnus, Marcin; Matelska, Dorota; Łach, Grzegorz; Chojnowski, Grzegorz; Boniecki, Michal J; Purta, Elzbieta; Dawson, Wayne; Dunin-Horkawicz, Stanislaw; Bujnicki, Janusz M

    2014-01-01

    In addition to mRNAs whose primary function is transmission of genetic information from DNA to proteins, numerous other classes of RNA molecules exist, which are involved in a variety of functions, such as catalyzing biochemical reactions or performing regulatory roles. In analogy to proteins, the function of RNAs depends on their structure and dynamics, which are largely determined by the ribonucleotide sequence. Experimental determination of high-resolution RNA structures is both laborious and difficult, and therefore, the majority of known RNAs remain structurally uncharacterized. To address this problem, computational structure prediction methods were developed that simulate either the physical process of RNA structure formation (“Greek science” approach) or utilize information derived from known structures of other RNA molecules (“Babylonian science” approach). All computational methods suffer from various limitations that make them generally unreliable for structure prediction of long RNA sequences. However, in many cases, the limitations of computational and experimental methods can be overcome by combining these two complementary approaches with each other. In this work, we review computational approaches for RNA structure prediction, with emphasis on implementations (particular programs) that can utilize restraints derived from experimental analyses. We also list experimental approaches, whose results can be relatively easily used by computational methods. Finally, we describe case studies where computational and experimental analyses were successfully combined to determine RNA structures that would remain out of reach for each of these approaches applied separately. PMID:24785264

  3. Fast reconstruction of off-axis digital holograms based on digital spatial multiplexing.

    PubMed

    Sha, Bei; Liu, Xuan; Ge, Xiao-Lu; Guo, Cheng-Shan

    2014-09-22

    A method for fast reconstruction of off-axis digital holograms based on a digital multiplexing algorithm is proposed. Instead of the existing angular multiplexing (AM), the new method utilizes a spatial multiplexing (SM) algorithm, in which four off-axis holograms recorded in sequence are synthesized into one SM function by multiplying each hologram with a tilted plane wave and then adding them up. In comparison with conventional methods, the SM algorithm reduces the two-dimensional (2-D) Fourier transforms (FTs) of four N*N arrays to a 1.25-D FT of one N*N array. Experimental results demonstrate that, using the SM algorithm, the computational efficiency can be improved while the reconstructed wavefronts keep the same quality as those retrieved with the existing AM method. This algorithm may be useful in the design of a fast preview system for dynamic wavefront imaging in digital holography.
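The identity that makes the SM algorithm work is the Fourier shift theorem: multiplying a hologram by a tilted plane wave (a linear phase ramp) shifts its spectrum, so a single FFT of the synthesized sum yields the four spectra side by side. The toy sketch below checks this equivalence on random arrays; it is not the paper's reconstruction pipeline, and the choice of four quadrant shifts is only an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
holograms = [rng.standard_normal((N, N)) for _ in range(4)]

# Tilted plane waves: linear phase ramps that move each hologram's
# spectrum to a different quadrant so the four spectra do not overlap.
x = np.arange(N)
X, Y = np.meshgrid(x, x, indexing="ij")
shifts = [(0, 0), (N // 2, 0), (0, N // 2), (N // 2, N // 2)]
ramps = [np.exp(2j * np.pi * (sx * X + sy * Y) / N) for sx, sy in shifts]

# Spatial multiplexing: one synthesized function, one Fourier transform.
sm = sum(h * r for h, r in zip(holograms, ramps))
F_sm = np.fft.fft2(sm)

# Equivalent reference: four separate FFTs, each circularly shifted
# according to the Fourier shift theorem.
F_ref = sum(np.roll(np.fft.fft2(h), (sx, sy), axis=(0, 1))
            for h, (sx, sy) in zip(holograms, shifts))
err = np.max(np.abs(F_sm - F_ref))
```

The residual `err` is at numerical precision, confirming that the single transform of the multiplexed function carries the same information as the four separate transforms.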

  4. Strengthening of Existing Bridge Structures for Shear and Bending with Carbon Textile-Reinforced Mortar.

    PubMed

    Herbrand, Martin; Adam, Viviane; Classen, Martin; Kueres, Dominik; Hegger, Josef

    2017-09-19

    Increasing traffic loads and changes in code provisions lead to deficits in shear and flexural capacity of many existing highway bridges. Therefore, a large number of structures are expected to require refurbishment and strengthening in the future. This projection is based on the current condition of many older road bridges. Different strengthening methods for bridges exist to extend their service life, all having specific advantages and disadvantages. By applying a thin layer of carbon textile-reinforced mortar (CTRM) to bridge deck slabs and the webs of pre-stressed concrete bridges, the fatigue and ultimate strength of these members can be increased significantly. The CTRM layer is a combination of a corrosion resistant carbon fiber reinforced polymer (CFRP) fabric and an efficient mortar. In this paper, the strengthening method and the experimental results obtained at RWTH Aachen University are presented.

  5. Strengthening of Existing Bridge Structures for Shear and Bending with Carbon Textile-Reinforced Mortar

    PubMed Central

    Herbrand, Martin; Classen, Martin; Kueres, Dominik; Hegger, Josef

    2017-01-01

    Increasing traffic loads and changes in code provisions lead to deficits in shear and flexural capacity of many existing highway bridges. Therefore, a large number of structures are expected to require refurbishment and strengthening in the future. This projection is based on the current condition of many older road bridges. Different strengthening methods for bridges exist to extend their service life, all having specific advantages and disadvantages. By applying a thin layer of carbon textile-reinforced mortar (CTRM) to bridge deck slabs and the webs of pre-stressed concrete bridges, the fatigue and ultimate strength of these members can be increased significantly. The CTRM layer is a combination of a corrosion resistant carbon fiber reinforced polymer (CFRP) fabric and an efficient mortar. In this paper, the strengthening method and the experimental results obtained at RWTH Aachen University are presented. PMID:28925962

  6. General Open Systems Theory and the Substrata-Factor Theory of Reading.

    ERIC Educational Resources Information Center

    Kling, Martin

    This study was designed to extend the generality of the Substrata-Factor Theory by two methods of investigation: (1) theoretically, to establish the validity of the hypothesis that an isomorphic relationship exists between the Substrata-Factor Theory and the General Open Systems Theory, and (2) experimentally, to discover through a series of…

  7. General Open Systems Theory and the Substrata-Factor Theory of Reading.

    ERIC Educational Resources Information Center

    Kling, Martin

    This study was designed to extend the generality of the Substrata-Factor Theory by two methods of investigation: (1) theoretically, to establish the validity of the hypothesis that an isomorphic relationship exists between the Substrata-Factor Theory and the General Open Systems Theory, and (2) experimentally, to discover through a…

  8. Remediating Misconception on Climate Change among Secondary School Students in Malaysia

    ERIC Educational Resources Information Center

    Karpudewan, Mageswary; Roth, Wolff-Michael; Chandrakesan, Kasturi

    2015-01-01

    Existing studies report on secondary school students' misconceptions related to climate change; they also report on the methods of teaching as reinforcing misconceptions. This quasi-experimental study was designed to test the null hypothesis that a curriculum based on constructivist principles does not lead to greater understanding and fewer…

  9. A System for English Vocabulary Acquisition Based on Code-Switching

    ERIC Educational Resources Information Center

    Mazur, Michal; Karolczak, Krzysztof; Rzepka, Rafal; Araki, Kenji

    2016-01-01

    Vocabulary plays an important part in second language learning and there are many existing techniques to facilitate word acquisition. One of these methods is code-switching, or mixing the vocabulary of two languages in one sentence. In this paper the authors propose an experimental system for computer-assisted English vocabulary learning in…

  10. A summary and evaluation of semi-empirical methods for the prediction of helicopter rotor noise

    NASA Technical Reports Server (NTRS)

    Pegg, R. J.

    1979-01-01

    Existing prediction techniques are compiled and described. The descriptions include input and output parameter lists, required equations and graphs, and the range of validity for each part of the prediction procedures. Examples are provided illustrating the analysis procedure and the degree of agreement with experimental results.

  11. Validation of the Quantitative Diagnostic Thinking Inventory for Athletic Training: A Pilot Study

    ERIC Educational Resources Information Center

    Kicklighter, Taz; Barnum, Mary; Geisler, Paul R.; Martin, Malissa

    2016-01-01

    Context: The cognitive process of making a clinical decision lies somewhere on a continuum between novices using hypothetico-deductive reasoning and experts relying more on case pattern recognition. Although several methods exist for measuring facets of clinical reasoning in specific situations, none have been experimentally applied, as of yet, to…

  12. Does Civic Education Matter?: The Power of Long-Term Observation and the Experimental Method

    ERIC Educational Resources Information Center

    Claassen, Ryan L.; Monson, J. Quin

    2015-01-01

    Despite consensus regarding the civic shortcomings of American citizens, no such scholarly consensus exists regarding the effectiveness of civic education addressing political apathy and ignorance. Accordingly, we report the results of a detailed study of students enrolled in introductory American politics courses on the campuses of two large…

  13. Identification of influential users by neighbors in online social networks

    NASA Astrophysics Data System (ADS)

    Sheikhahmadi, Amir; Nematbakhsh, Mohammad Ali; Zareie, Ahmad

    2017-11-01

    Identification and ranking of influential users in social networks for the sake of news spreading and advertising has recently become an attractive field of research. Given the large number of users in social networks and the various relations that exist among them, providing an effective method to identify influential users has gradually come to be considered essential. In most of the existing methods, users who are located in an appropriate structural position of the network are regarded as influential. These methods usually pay no attention to the interactions among users and treat relations as binary in nature. This paper, therefore, proposes a new method to identify influential users in a social network by considering the interactions that exist among the users. Since users tend to act within the frame of communities, the network is initially divided into different communities. Then the amount of interaction among users is used as a parameter to set the weight of the relations existing within the network. Afterward, by determining the neighbors' role for each user, a two-level method is proposed for both detecting users' influence and ranking them. Simulation and experimental results on Twitter data show that the users selected by the proposed method are distributed across the network at more appropriate distances than those selected by existing methods. Moreover, the proposed method outperforms the existing ones in terms of both the spreading speed and the influence capacity of the users it selects.
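A heavily simplified sketch of the two-level idea follows: interaction counts weight the relations, a first level scores each user's direct influence, and a second level folds in the weighted scores of each user's neighbors. The toy interaction data and the particular weighting are illustrative assumptions, not the paper's algorithm (which also includes community detection).

```python
# interactions[u][v] = how often v interacted with content from u
# (all numbers below are made up for illustration).
interactions = {
    "a": {"b": 5, "c": 2},
    "b": {"a": 1},
    "c": {"a": 1, "b": 3},
    "d": {"a": 4},
}

def influence_ranking(interactions):
    # Level 1: direct influence = total interaction weight a user attracts.
    direct = {u: sum(w.values()) for u, w in interactions.items()}
    # Level 2: add neighbors' direct influence, scaled by relation weight,
    # so a user whose audience is itself influential is ranked higher.
    total_w = {u: sum(w.values()) or 1 for u, w in interactions.items()}
    refined = {
        u: direct[u] + sum(wv * direct.get(v, 0) for v, wv in w.items()) / total_w[u]
        for u, w in interactions.items()
    }
    return sorted(refined, key=refined.get, reverse=True)

ranking = influence_ranking(interactions)
```

In this toy network user "d" outranks "a" despite attracting less raw interaction, because all of d's interaction comes from the highly influential user "a".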

  14. CCTOP: a Consensus Constrained TOPology prediction web server.

    PubMed

    Dobson, László; Reményi, István; Tusnády, Gábor E

    2015-07-01

    The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of hidden Markov model. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments or by adding user specified constraints. CCTOP showed superior performance to existing approaches. The reliability of each prediction is also calculated, which correlates with the accuracy of the per protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmable access of the CCTOP server is also available, and an example of client-side script is provided. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Multistable orientation in a nematic liquid crystal cell induced by external field and interfacial interaction

    NASA Astrophysics Data System (ADS)

    Ong, Hiap Liew; Meyer, Robert B.; Hurd, Alan J.

    1984-04-01

    The effects of a short-range, arbitrary strength interfacial potential on the magnetic field, electric field, and optical field induced Freedericksz transition in a nematic liquid crystal cell are examined and the exact solution is obtained. By generalizing the criterion for the existence of a first-order optical field induced Freedericksz transition that was obtained previously [H. L. Ong, Phys. Rev. A 28, 2393 (1983)], the general criterion for the transition to be first order is obtained. Based on the existing experimental results, the possibility of surface induced first-order transitions is discussed and three simple empirical approaches are suggested for observing multistable orientation. The early results on the magnetic and electric fields induced Freedericksz transition and the inadequacy of the usual experimental observation methods (phase shift and capacitance measurements) are also discussed.

  16. Political science. Reverse-engineering censorship in China: randomized experimentation and participant observation.

    PubMed

    King, Gary; Pan, Jennifer; Roberts, Margaret E

    2014-08-22

    Existing research on the extensive Chinese censorship organization uses observational methods with well-known limitations. We conducted the first large-scale experimental study of censorship by creating accounts on numerous social media sites, randomly submitting different texts, and observing from a worldwide network of computers which texts were censored and which were not. We also supplemented interviews with confidential sources by creating our own social media site, contracting with Chinese firms to install the same censoring technologies as existing sites, and--with their software, documentation, and even customer support--reverse-engineering how it all works. Our results offer rigorous support for the recent hypothesis that criticisms of the state, its leaders, and their policies are published, whereas posts about real-world events with collective action potential are censored. Copyright © 2014, American Association for the Advancement of Science.

  17. Summary: Experimental validation of real-time fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Iyer, R. K.; Choi, G. S.

    1992-01-01

    Testing and validation of real-time systems is always difficult to perform since neither the error generation process nor the fault propagation problem is easy to comprehend. There is no substitute for results based on actual measurements and experimentation. Such results are essential for developing a rational basis for evaluation and validation of real-time systems. However, with physical experimentation, controllability and observability are limited to the external instrumentation that can be hooked up to the system under test, and this process is quite difficult, if not impossible, for a complex system. Also, to set up such experiments for measurements, physical hardware must exist. On the other hand, a simulation approach allows flexibility that is unequaled by any other existing method for system evaluation. A simulation methodology for system evaluation was successfully developed and implemented, and the environment was demonstrated using existing real-time avionic systems. The research was oriented toward evaluating the impact of permanent and transient faults in aircraft control computers. Results were obtained for the Bendix BDX 930 system and the Hamilton Standard EEC131 jet engine controller. The studies showed that simulated fault injection is valuable, in the design stage, for evaluating the susceptibility of computing systems to different types of failures.

  18. PredPPCrys: accurate prediction of sequence cloning, protein production, purification and crystallization propensity from protein sequences using multi-step heterogeneous feature fusion and selection.

    PubMed

    Wang, Huilin; Wang, Mingjun; Tan, Hao; Li, Yuan; Zhang, Ziding; Song, Jiangning

    2014-01-01

    X-ray crystallography is the primary approach to solve the three-dimensional structure of a protein. However, a major bottleneck of this method is the failure of multi-step experimental procedures to yield diffraction-quality crystals, including sequence cloning, protein material production, purification, crystallization and ultimately, structural determination. Accordingly, prediction of the propensity of a protein to successfully undergo these experimental procedures based on the protein sequence may help narrow down laborious experimental efforts and facilitate target selection. A number of bioinformatics methods based on protein sequence information have been developed for this purpose. However, our knowledge on the important determinants of propensity for a protein sequence to produce high diffraction-quality crystals remains largely incomplete. In practice, most of the existing methods display poorer performance when evaluated on larger and updated datasets. To address this problem, we constructed an up-to-date dataset as the benchmark, and subsequently developed a new approach termed 'PredPPCrys' using the support vector machine (SVM). Using a comprehensive set of multifaceted sequence-derived features in combination with a novel multi-step feature selection strategy, we identified and characterized the relative importance and contribution of each feature type to the prediction performance of five individual experimental steps required for successful crystallization. The resulting optimal candidate features were used as inputs to build the first-level SVM predictor (PredPPCrys I). Next, prediction outputs of PredPPCrys I were used as the input to build second-level SVM classifiers (PredPPCrys II), which led to significantly enhanced prediction performance. Benchmarking experiments indicated that our PredPPCrys method outperforms most existing procedures on both up-to-date and previous datasets. In addition, the predicted crystallization targets of currently non-crystallizable proteins were provided as compendium data, which are anticipated to facilitate target selection and design for the worldwide structural genomics consortium. PredPPCrys is freely available at http://www.structbioinfor.org/PredPPCrys.
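The two-level architecture described above is an instance of stacked classification: first-level models score the individual experimental steps, and their outputs become the features of a second-level model. The sketch below illustrates that pattern with plain logistic regressions on synthetic data; PredPPCrys itself uses SVMs on real sequence-derived features, so everything here is an illustrative stand-in.

```python
import numpy as np

def train_logreg(X, y, lr=0.5, steps=2000):
    """Logistic regression by batch gradient descent (bias included)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 6))          # stand-in for sequence features
# Noisy labels for three intermediate "experimental steps".
y_steps = [(X[:, i] + 0.3 * rng.standard_normal(200) > 0).astype(float)
           for i in range(3)]
# Final outcome depends on the same underlying signals.
y_final = ((X[:, 0] + X[:, 1] + X[:, 2]) > 0).astype(float)

# Level 1: one predictor per experimental step.
level1 = [train_logreg(X, ys) for ys in y_steps]
meta_X = np.column_stack([predict(w, X) for w in level1])

# Level 2: stack the step scores into a final predictor.
w2 = train_logreg(meta_X, y_final)
acc = np.mean((predict(w2, meta_X) > 0.5) == (y_final > 0.5))
```

The design point the abstract makes carries over: the second-level model sees only the step scores, so it can exploit correlations between the steps that each first-level model misses on its own.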

  19. Cutting the Wires: Modularization of Cellular Networks for Experimental Design

    PubMed Central

    Lang, Moritz; Summers, Sean; Stelling, Jörg

    2014-01-01

    Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. PMID:24411264

  20. Evaluation of a Consistent LES/PDF Method Using a Series of Experimental Spray Flames

    NASA Astrophysics Data System (ADS)

    Heye, Colin; Raman, Venkat

    2012-11-01

    A consistent method for the evolution of the joint-scalar probability density function (PDF) transport equation is proposed for application to large eddy simulation (LES) of turbulent reacting flows containing evaporating spray droplets. PDF transport equations provide the benefit of including the chemical source term in closed form; however, additional terms describing LES subfilter mixing must be modeled. The recent availability of detailed experimental measurements provides model validation data for the wide range of evaporation rates and combustion regimes known to occur in spray flames. In this work, the experimental data will be used to investigate the impact of droplet mass loading and evaporation rate on the subfilter scalar PDF shape in comparison with conventional flamelet models. In addition, existing model term closures in the PDF transport equations are evaluated with a focus on their validity in the presence of regime changes.

  1. ENFIN--A European network for integrative systems biology.

    PubMed

    Kahlem, Pascal; Clegg, Andrew; Reisinger, Florian; Xenarios, Ioannis; Hermjakob, Henning; Orengo, Christine; Birney, Ewan

    2009-11-01

    Integration of biological data of various types and the development of adapted bioinformatics tools represent critical objectives to enable research at the systems level. The European Network of Excellence ENFIN is engaged in developing an adapted infrastructure to connect databases, and platforms to enable both the generation of new bioinformatics tools and the experimental validation of computational predictions. With the aim of bridging the gap existing between standard wet laboratories and bioinformatics, the ENFIN Network runs integrative research projects to bring the latest computational techniques to bear directly on questions dedicated to systems biology in the wet laboratory environment. The Network maintains internally close collaboration between experimental and computational research, enabling a permanent cycling of experimental validation and improvement of computational prediction methods. The computational work includes the development of a database infrastructure (EnCORE), bioinformatics analysis methods and a novel platform for protein function analysis FuncNet.

  2. Spatially Regularized Machine Learning for Task and Resting-state fMRI

    PubMed Central

    Song, Xiaomu; Panych, Lawrence P.; Chen, Nan-kuei

    2015-01-01

    Background Reliable mapping of brain function across sessions and/or subjects in task- and resting-state has been a critical challenge for quantitative fMRI studies although it has been intensively addressed in the past decades. New Method A spatially regularized support vector machine (SVM) technique was developed for the reliable brain mapping in task- and resting-state. Unlike most existing SVM-based brain mapping techniques, which implement supervised classifications of specific brain functional states or disorders, the proposed method performs a semi-supervised classification for the general brain function mapping where spatial correlation of fMRI is integrated into the SVM learning. The method can adapt to intra- and inter-subject variations induced by fMRI nonstationarity, and identify a true boundary between active and inactive voxels, or between functionally connected and unconnected voxels in a feature space. Results The method was evaluated using synthetic and experimental data at the individual and group level. Multiple features were evaluated in terms of their contributions to the spatially regularized SVM learning. Reliable mapping results in both task- and resting-state were obtained from individual subjects and at the group level. Comparison with Existing Methods A comparison study was performed with independent component analysis, general linear model, and correlation analysis methods. Experimental results indicate that the proposed method can provide a better or comparable mapping performance at the individual and group level. Conclusions The proposed method can provide accurate and reliable mapping of brain function in task- and resting-state, and is applicable to a variety of quantitative fMRI studies. PMID:26470627
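As a stand-in for the paper's spatially regularized SVM, the following minimal sketch shows the effect of spatial regularization alone: voxel-wise activation scores are averaged with their four neighbors before thresholding, so a spatially coherent cluster survives while an isolated noisy voxel is suppressed. The smoothing scheme, weights, and data are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def spatially_smooth(scores, alpha=0.5):
    """Blend each voxel's score with the mean of its 4-neighborhood."""
    padded = np.pad(scores, 1, mode="constant")   # zero outside the map
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return (1 - alpha) * scores + alpha * neigh

scores = np.zeros((8, 8))
scores[2:5, 2:5] = 1.0      # a coherent 3x3 "active" cluster
scores[7, 7] = 1.0          # an isolated noisy voxel
active = spatially_smooth(scores) > 0.5
```

After smoothing, every voxel of the 3x3 cluster remains above threshold, while the isolated voxel drops to the threshold and is discarded, which is the intuition behind integrating spatial correlation into the classifier.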

  3. Airplane detection based on fusion framework by combining saliency model with Deep Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Dou, Hao; Sun, Xiao; Li, Bin; Deng, Qianqian; Yang, Xubo; Liu, Di; Tian, Jinwen

    2018-03-01

    Aircraft detection from very high resolution remote sensing images has gained increasing interest in recent years due to successful civil and military applications. However, several problems still exist: 1) extracting high-level features of aircraft is difficult; 2) locating objects within such large images is difficult and time-consuming; 3) satellite images commonly come in multiple resolutions. In this paper, inspired by biological visual mechanisms, a fusion detection framework is proposed that fuses a top-down visual mechanism (a deep CNN model) with a bottom-up visual mechanism (GBVS) to detect aircraft. Besides, we use a multi-scale training method for the deep CNN model to address the problem of multiple resolutions. Experimental results demonstrate that our method achieves better detection results than the other methods.

  4. Data base for the prediction of inlet external drag

    NASA Technical Reports Server (NTRS)

    Mcmillan, O. J.; Perkins, E. W.; Perkins, S. C., Jr.

    1980-01-01

    Results are presented from a study to define and evaluate the data base for predicting an airframe/propulsion system interference effect shown to be of considerable importance, inlet external drag. The study is focused on supersonic tactical aircraft with highly integrated jet propulsion systems, although some information is included for supersonic strategic aircraft and for transport aircraft designed for high subsonic or low supersonic cruise. The data base for inlet external drag is considered to consist of the theoretical and empirical prediction methods as well as the experimental data identified in an extensive literature search. The state of the art in the subsonic and transonic speed regimes is evaluated. The experimental data base is organized and presented in a series of tables in which the test article, the quantities measured and the ranges of test conditions covered are described for each set of data; in this way, the breadth of coverage and gaps in the existing experimental data are evident. Prediction methods are categorized by method of solution, type of inlet and speed range to which they apply, major features are given, and their accuracy is assessed by means of comparison to experimental data.

  5. FIND: difFerential chromatin INteractions Detection using a spatial Poisson process

    PubMed Central

    Chen, Yang; Zhang, Michael Q.

    2018-01-01

    Polymer-based simulations and experimental studies indicate the existence of a spatial dependency between the adjacent DNA fibers involved in the formation of chromatin loops. However, the existing strategies for detecting differential chromatin interactions assume that the interacting segments are spatially independent from the other segments nearby. To resolve this issue, we developed a new computational method, FIND, which considers the local spatial dependency between interacting loci. FIND uses a spatial Poisson process to detect differential chromatin interactions that show a significant difference in their interaction frequency and the interaction frequency of their neighbors. Simulation and biological data analysis show that FIND outperforms the widely used count-based methods and has a better signal-to-noise ratio. PMID:29440282

  6. Measuring saliency in images: which experimental parameters for the assessment of image quality?

    NASA Astrophysics Data System (ADS)

    Fredembach, Clement; Woolfe, Geoff; Wang, Jue

    2012-01-01

    Predicting which areas of an image are perceptually salient or attended to has become an essential pre-requisite of many computer vision applications. Because observers are notoriously unreliable in remembering where they look a posteriori, and because asking where they look while observing the image necessarily influences the results, ground truth about saliency and visual attention has to be obtained by gaze tracking methods. From the early work of Buswell and Yarbus to the most recent forays in computer vision there has been, perhaps unfortunately, little agreement on standardisation of eye tracking protocols for measuring visual attention. As the number of parameters involved in experimental methodology can be large, their individual influence on the final results is not well understood. Consequently, the performance of saliency algorithms, when assessed by correlation techniques, varies greatly across the literature. In this paper, we concern ourselves with the problem of image quality. Specifically: where people look when judging images. We show that in this case, the performance gap between existing saliency prediction algorithms and experimental results is significantly larger than otherwise reported. To understand this discrepancy, we first devise an experimental protocol that is adapted to the task of measuring image quality. In a second step, we compare our experimental parameters with the ones of existing methods and show that a lot of the variability can directly be ascribed to these differences in experimental methodology and choice of variables. In particular, the choice of a task, e.g., judging image quality vs. free viewing, has a great impact on measured saliency maps, suggesting that even for a mildly cognitive task, ground truth obtained by free viewing does not adapt well. Careful analysis of the prior art also reveals that systematic bias can occur depending on instrumental calibration and the choice of test images. We conclude this work by proposing a set of parameters, tasks and images that can be used to compare the various saliency prediction methods in a manner that is meaningful for image quality assessment.

  7. A complex-lamellar description of boundary layer transition

    NASA Astrophysics Data System (ADS)

    Kolla, Maureen Louise

    Flow transition is important in both practical and phenomenological terms. However, there is currently no method for identifying the spatial locations associated with transition, such as the start and end of intermittency. The concepts of flow stability and experimental correlations have been used; however, flow stability only identifies the location where disturbances begin to grow in the laminar flow, and experimental correlations can give only approximations, as measuring the start and end of intermittency is difficult. Therefore, the focus of this work is to construct a method to identify the start and end of intermittency, for both natural boundary layer transition and separated flow transition. We obtain these locations by deriving a complex-lamellar description of the velocity field that exists between a fully laminar and a fully turbulent boundary condition. Mathematically, this complex-lamellar decomposition, which is constructed from the classical Darwin-Lighthill-Hawthorne drift function and the transport of enstrophy, describes the flow that exists between the fully laminar Pohlhausen equations and Prandtl's fully turbulent one-seventh power law. We approximate the difference in enstrophy density between the boundary conditions using a power series. The slope of the power series is scaled using the shape of the universal intermittency distribution within the intermittency region. We solve the complex-lamellar decomposition of the velocity field along with the slope of the difference in enstrophy density function to determine the locations of the laminar and turbulent boundary conditions. Then, from the difference in enstrophy density function, we calculate the start and end of intermittency. We perform this calculation on a natural boundary layer transition over a flat plate for zero-pressure-gradient flow and for separated shear flow over a separation bubble.
We compare these results to existing experimental results and verify the accuracy of our transition model.

  8. Superstatistical fluctuations in time series: Applications to share-price dynamics and turbulence

    NASA Astrophysics Data System (ADS)

    van der Straeten, Erik; Beck, Christian

    2009-09-01

    We report a general technique to study a given experimental time series with superstatistics. Crucial for the applicability of the superstatistics concept is the existence of a parameter β that fluctuates on a large time scale as compared to the other time scales of the complex system under consideration. The proposed method extracts the main superstatistical parameters out of a given data set and examines the validity of the superstatistical model assumptions. We test the method thoroughly with surrogate data sets. Then the applicability of the superstatistical approach is illustrated using real experimental data. We study two examples, velocity time series measured in turbulent Taylor-Couette flows and time series of log returns of the closing prices of some stock market indices.
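
    The core superstatistical assumption, that an inverse-temperature-like parameter β fluctuates on a time scale much longer than the local dynamics, can be illustrated with a toy sketch: estimate β as the inverse variance in long windows of the series. This is only a schematic of the windowing idea with synthetic data, not the paper's full extraction procedure.

```python
import numpy as np

def local_beta(series, window):
    """Estimate a slowly fluctuating beta = 1/variance in
    non-overlapping windows, assuming beta is roughly constant
    within each window (the superstatistical picture)."""
    n = len(series) // window
    chunks = np.asarray(series[: n * window]).reshape(n, window)
    return 1.0 / chunks.var(axis=1)

# Synthetic series: Gaussian noise whose variance switches on a
# time scale much longer than the window length.
rng = np.random.default_rng(0)
sigma = np.repeat([1.0, 2.0, 0.5], 500)
x = rng.normal(0.0, sigma)
betas = local_beta(x, window=100)   # 15 windowed beta estimates
```

    Windows drawn from the high-variance segment yield markedly smaller β estimates; a large spread of windowed β values on long time scales is exactly the signature the superstatistical model assumptions require.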

  9. An extended set of yeast-based functional assays accurately identifies human disease mutations

    PubMed Central

    Sun, Song; Yang, Fan; Tan, Guihong; Costanzo, Michael; Oughtred, Rose; Hirschman, Jodi; Theesfeld, Chandra L.; Bansal, Pritpal; Sahni, Nidhi; Yi, Song; Yu, Analyn; Tyagi, Tanya; Tie, Cathy; Hill, David E.; Vidal, Marc; Andrews, Brenda J.; Boone, Charles; Dolinski, Kara; Roth, Frederick P.

    2016-01-01

    We can now routinely identify coding variants within individual human genomes. A pressing challenge is to determine which variants disrupt the function of disease-associated genes. Both experimental and computational methods exist to predict pathogenicity of human genetic variation. However, a systematic performance comparison between them has been lacking. Therefore, we developed and exploited a panel of 26 yeast-based functional complementation assays to measure the impact of 179 variants (101 disease- and 78 non-disease-associated variants) from 22 human disease genes. Using the resulting reference standard, we show that experimental functional assays in a 1-billion-year diverged model organism can identify pathogenic alleles with significantly higher precision and specificity than current computational methods. PMID:26975778

  10. Chapter 15: Disease Gene Prioritization

    PubMed Central

    Bromberg, Yana

    2013-01-01

    Disease-causing aberrations in the normal function of a gene define that gene as a disease gene. Proving a causal link between a gene and a disease experimentally is expensive and time-consuming. Comprehensive prioritization of candidate genes prior to experimental testing drastically reduces the associated costs. Computational gene prioritization is based on various pieces of correlative evidence that associate each gene with the given disease and suggest possible causal links. A fair amount of this evidence comes from high-throughput experimentation. Thus, well-developed methods are necessary to reliably deal with the quantity of information at hand. Existing gene prioritization techniques already significantly improve the outcomes of targeted experimental studies. Faster and more reliable techniques that account for novel data types are necessary for the development of new diagnostics, treatments, and cures for many diseases. PMID:23633938

  11. Image superresolution by midfrequency sparse representation and total variation regularization

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-01-01

    Machine learning has provided many good tools for superresolution, but existing methods still need improvement in several respects. On one hand, the memory and time costs should be reduced. On the other hand, the step edges of the results obtained by existing methods are not sufficiently sharp. Our work is as follows. First, we propose a method to extract midfrequency features for dictionary learning. This method reduces memory and time complexity without sacrificing performance. Second, we propose a detailed wiping-off total variation (DWO-TV) regularization model to reconstruct sharp step edges. This model adds a novel constraint on the downsampled version of the high-resolution image to wipe off the details and artifacts and sharpen the step edges. Finally, the step edges produced by the DWO-TV regularization and the details provided by learning are fused. Experimental results show that the proposed method offers a desirable compromise between low time and memory cost and reconstruction quality.

  12. Pavement crack detection combining non-negative feature with fast LoG in complex scene

    NASA Astrophysics Data System (ADS)

    Wang, Wanli; Zhang, Xiuhua; Hong, Hanyu

    2015-12-01

    Pavement crack detection is affected by much interference in realistic situations, such as shadows, road signs, oil stains, and salt-and-pepper noise. Due to these unfavorable factors, existing crack detection methods have difficulty distinguishing cracks from the background correctly. How to extract crack information effectively is the key problem for a road crack detection system. To solve this problem, a novel method for pavement crack detection based on combining a non-negative feature with a fast LoG is proposed. The two key novelties and benefits of this new approach are that 1) image pixel gray-value compensation is used to acquire a uniform image, and 2) a non-negative feature is combined with a fast LoG to extract crack information. The image preprocessing results demonstrate that the method is able to homogenize the crack image more accurately than existing methods. A large number of experimental results demonstrate that the proposed approach can detect crack regions more correctly than traditional methods.
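
    The LoG step can be sketched with `scipy.ndimage.gaussian_laplace`: dark linear cracks on a brighter pavement background produce strong positive responses at a suitable scale. This is a generic LoG illustration on a synthetic image, not the paper's fast LoG variant or its non-negative feature.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic pavement patch: bright background with a dark crack line.
img = np.full((64, 64), 200.0)
img[30:33, :] = 50.0                      # horizontal "crack"

# Laplacian-of-Gaussian response: dark ridges give large positive
# values at an appropriate smoothing scale sigma.
resp = gaussian_laplace(img, sigma=2.0)
crack_mask = resp > resp.mean() + 2.0 * resp.std()
```

    On this toy image the thresholded LoG response marks the crack rows while leaving the uniform background untouched; real pavement imagery needs the illumination compensation step first.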

  13. Dissecting Reactor Antineutrino Flux Calculations

    NASA Astrophysics Data System (ADS)

    Sonzogni, A. A.; McCutchan, E. A.; Hayes, A. C.

    2017-09-01

    Current predictions for the antineutrino yield and spectra from a nuclear reactor rely on the experimental electron spectra from 235U, 239Pu, 241Pu and a numerical method to convert these aggregate electron spectra into their corresponding antineutrino ones. In the present work we investigate quantitatively some of the basic assumptions and approximations used in the conversion method, studying first the compatibility between two recent approaches for calculating electron and antineutrino spectra. We then explore different possibilities for the disagreement between the measured Daya Bay and the Huber-Mueller antineutrino spectra, including the 238U contribution as well as the effective charge and the allowed shape assumption used in the conversion method. We observe that including a shape correction of about +6% MeV⁻¹ in conversion calculations can better describe the Daya Bay spectrum. Because of a lack of experimental data, this correction cannot be ruled out, concluding that in order to confirm the existence of the reactor neutrino anomaly, or even quantify it, precisely measured electron spectra for about 50 relevant fission products are needed. With the advent of new rare ion facilities, the measurement of shape factors for these nuclides, for many of which precise beta intensity data from TAGS experiments already exist, would be highly desirable.

  14. Dissecting Reactor Antineutrino Flux Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonzogni, A. A.; McCutchan, E. A.; Hayes, A. C.

    2017-09-15

    Current predictions for the antineutrino yield and spectra from a nuclear reactor rely on the experimental electron spectra from 235U, 239Pu, 241Pu and a numerical method to convert these aggregate electron spectra into their corresponding antineutrino ones. In our present work we investigate quantitatively some of the basic assumptions and approximations used in the conversion method, studying first the compatibility between two recent approaches for calculating electron and antineutrino spectra. We then explore different possibilities for the disagreement between the measured Daya Bay and the Huber-Mueller antineutrino spectra, including the 238U contribution as well as the effective charge and the allowed shape assumption used in the conversion method. Here, we observe that including a shape correction of about +6% MeV⁻¹ in conversion calculations can better describe the Daya Bay spectrum. Because of a lack of experimental data, this correction cannot be ruled out, concluding that in order to confirm the existence of the reactor neutrino anomaly, or even quantify it, precisely measured electron spectra for about 50 relevant fission products are needed. With the advent of new rare ion facilities, the measurement of shape factors for these nuclides, for many of which precise beta intensity data from TAGS experiments already exist, would be highly desirable.

  15. A novel infrared small moving target detection method based on tracking interest points under complicated background

    NASA Astrophysics Data System (ADS)

    Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Bai, Shengjian; Xu, Wanying

    2014-07-01

    Infrared moving target detection is an important part of infrared technology. We introduce a novel infrared small moving target detection method based on tracking interest points under complicated backgrounds. First, Difference of Gaussians (DoG) filters are used to detect a group of interest points (including the moving targets). Second, a small-target tracking method inspired by the Human Visual System (HVS) is used to track these interest points for several frames, yielding the correlations between interest points in the first frame and the last frame. Finally, a new clustering method named R-means is proposed to divide these interest points into two groups according to the correlations: target points and background points. The target-to-clutter ratio (TCR) and receiver operating characteristic (ROC) curves are computed experimentally to compare the performance of the proposed method with five sophisticated methods. The results show that the proposed method discriminates targets from clutter better and has a lower false alarm rate than existing moving target detection methods.
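
    The first step, a Difference of Gaussians filter, is simply the difference of two Gaussian blurs at different scales; it acts as a band-pass filter that highlights small blob-like targets. A minimal sketch on a synthetic frame (the two scales are arbitrary, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(frame, sigma1=1.0, sigma2=2.0):
    """Difference-of-Gaussians: band-pass filter that highlights
    small bright blobs such as point-like infrared targets."""
    return gaussian_filter(frame, sigma1) - gaussian_filter(frame, sigma2)

# Synthetic frame with a single dim point target.
frame = np.zeros((32, 32))
frame[16, 16] = 10.0
resp = dog_response(frame)
peak = np.unravel_index(np.argmax(resp), resp.shape)  # (16, 16)
```

    The DoG response peaks at the target location; in the paper this per-frame detection is followed by the HVS-inspired tracking and R-means clustering stages.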

  16. Attaining a steady air stream in wind tunnels

    NASA Technical Reports Server (NTRS)

    Prandtl, L

    1933-01-01

    Many experimental arrangements of varying kinds involve the problem of assuring a large air stream that is steady both in volume and in time. For this reason a separate discussion of the methods by which this is achieved should prove of particular interest. Motors and blades receive special attention, and a review of existing wind tunnels is also provided.

  17. Early crop-tree release in even-aged stands of Appalachian hardwoods

    Treesearch

    George R. Trimble, Jr.

    1971-01-01

    Now that even-aged silviculture is well established as a successful method of growing Appalachian hardwoods, a pressing need exists for guidelines for precommercial operations. We started research several years ago on the Fernow Experimental Forest near Parsons, West Virginia, to learn more about the cost and methodology of early crop-tree release in mountain hardwood...

  18. Considerations for the Systematic Analysis and Use of Single-Case Research

    ERIC Educational Resources Information Center

    Horner, Robert H.; Swaminathan, Hariharan; Sugai, George; Smolkowski, Keith

    2012-01-01

    Single-case research designs provide a rigorous research methodology for documenting experimental control. If single-case methods are to gain wider application, however, a need exists to define more clearly (a) the logic of single-case designs, (b) the process and decision rules for visual analysis, and (c) an accepted process for integrating…

  19. Application of Influence Diagrams in Identifying Soviet Satellite Missions

    DTIC Science & Technology

    1990-12-01

    …diagramming is a method which allows the simple construction of a model to illustrate the interrelationships which exist among variables by capturing an… environmental monitoring systems. The module also contained an array of instruments for geophysical and astrophysical experimentation. … The Soyuz

  20. Synthesis and Mechanical Characterization of Binary and Ternary Intermetallic Alloys Based on Fe-Ti-Al by Resonant Ultrasound Vibrational Methods.

    PubMed

    Chanbi, Daoud; Ogam, Erick; Amara, Sif Eddine; Fellah, Z E A

    2018-05-07

    Precise but simple experimental and inverse methods allowing the recovery of mechanical material parameters are necessary for the exploration of materials with novel crystallographic structures and elastic properties, particularly for new materials and those existing only in theory. The alloys studied herein are of new atomic compositions. This paper reports an experimental study involving the synthesis and development of methods for the determination of the elastic properties of binary (Fe-Al, Fe-Ti and Ti-Al) and ternary (Fe-Ti-Al) intermetallic alloys with different concentrations of their individual constituents. The alloys studied were synthesized from high purity metals using an arc furnace with argon flow to ensure their uniformity and homogeneity. Precise but simple methods for the recovery of the elastic constants of the isotropic metals from resonant ultrasound vibration data were developed. These methods allowed the fine analysis of the relationships between the atomic concentration of a given constituent and the Young’s modulus or alloy density.
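
    For a slender rod vibrating in its fundamental longitudinal mode, the resonance frequency, density, and Young's modulus are linked by f = (1/2L)·sqrt(E/ρ), so E can be recovered from a measured resonance. A back-of-the-envelope sketch with entirely hypothetical numbers (the paper's inverse method handles more general specimen geometries and vibration modes):

```python
# Fundamental longitudinal resonance of a free-free rod:
#   f = (1 / (2 L)) * sqrt(E / rho)   =>   E = rho * (2 L f)**2
rho = 3800.0   # kg/m^3, hypothetical alloy density
L = 0.05       # m, specimen length (hypothetical)
f = 50.0e3     # Hz, measured fundamental resonance (hypothetical)

E = rho * (2.0 * L * f) ** 2    # Young's modulus in Pa
# E = 9.5e10 Pa, i.e. 95 GPa
```

    Repeating this recovery across alloys of different atomic concentrations is what allows the composition-modulus relationships discussed in the abstract to be mapped out.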

  1. Robust and Efficient Biomolecular Clustering of Tumor Based on ${p}$ -Norm Singular Value Decomposition.

    PubMed

    Kong, Xiang-Zhen; Liu, Jin-Xing; Zheng, Chun-Hou; Hou, Mi-Xiao; Wang, Juan

    2017-07-01

    High dimensionality has become a typical feature of biomolecular data. In this paper, a novel dimension reduction method named p-norm singular value decomposition (PSVD) is proposed to seek a low-rank approximation matrix to the biomolecular data. To enhance robustness to outliers, the Lp-norm is taken as the error function and the Schatten p-norm is used as the regularization function in the optimization model. To evaluate the performance of PSVD, the K-means clustering method is then employed for tumor clustering based on the low-rank approximation matrix. Extensive experiments are carried out on five gene expression data sets, including two benchmark data sets and three higher-dimensional data sets from The Cancer Genome Atlas. The experimental results demonstrate that the PSVD-based method outperforms many existing methods. In particular, the experiments show that the proposed method is more efficient for processing higher-dimensional data, with good robustness, stability, and superior time performance.
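
    The evaluation pipeline, low-rank embedding followed by k-means clustering, can be sketched as below. An ordinary truncated SVD stands in for the robust Schatten-p-norm PSVD, and the data are synthetic; this only illustrates the two-stage structure, not the paper's optimization model.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "expression" matrix: 40 samples from two groups with
# different mean profiles across 500 features.
X = np.vstack([rng.normal(0.0, 1.0, (20, 500)),
               rng.normal(1.5, 1.0, (20, 500))])

# Stage 1: low-rank approximation (plain truncated SVD as a
# stand-in for the robust PSVD of the paper).
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
Z = U[:, :2] * s[:2]            # rank-2 embedding of the samples

# Stage 2: k-means with k = 2 on the embedding, deterministically
# initialized at the two extremes of the first component.
def two_means(Z, iters=20):
    centers = Z[[Z[:, 0].argmin(), Z[:, 0].argmax()]].copy()
    for _ in range(iters):
        d = ((Z[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(2):
            if (labels == j).any():
                centers[j] = Z[labels == j].mean(axis=0)
    return labels

labels = two_means(Z)
```

    On this well-separated toy data the two recovered clusters coincide with the two sample groups; the point of PSVD is to keep that low-rank structure recoverable when outliers would corrupt a plain SVD.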

  2. Synthesis and Mechanical Characterization of Binary and Ternary Intermetallic Alloys Based on Fe-Ti-Al by Resonant Ultrasound Vibrational Methods

    PubMed Central

    Chanbi, Daoud; Amara, Sif Eddine; Fellah, Z. E. A.

    2018-01-01

    Precise but simple experimental and inverse methods allowing the recovery of mechanical material parameters are necessary for the exploration of materials with novel crystallographic structures and elastic properties, particularly for new materials and those existing only in theory. The alloys studied herein are of new atomic compositions. This paper reports an experimental study involving the synthesis and development of methods for the determination of the elastic properties of binary (Fe-Al, Fe-Ti and Ti-Al) and ternary (Fe-Ti-Al) intermetallic alloys with different concentrations of their individual constituents. The alloys studied were synthesized from high purity metals using an arc furnace with argon flow to ensure their uniformity and homogeneity. Precise but simple methods for the recovery of the elastic constants of the isotropic metals from resonant ultrasound vibration data were developed. These methods allowed the fine analysis of the relationships between the atomic concentration of a given constituent and the Young’s modulus or alloy density. PMID:29735946

  3. Knowledge-guided fuzzy logic modeling to infer cellular signaling networks from proteomic data

    PubMed Central

    Liu, Hui; Zhang, Fan; Mishra, Shital Kumar; Zhou, Shuigeng; Zheng, Jie

    2016-01-01

    Modeling of signaling pathways is crucial for understanding and predicting cellular responses to drug treatments. However, canonical signaling pathways curated from literature are seldom context-specific and thus can hardly predict cell type-specific response to external perturbations; purely data-driven methods also have drawbacks such as limited biological interpretability. Therefore, hybrid methods that can integrate prior knowledge and real data for network inference are highly desirable. In this paper, we propose a knowledge-guided fuzzy logic network model to infer signaling pathways by exploiting both prior knowledge and time-series data. In particular, the dynamic time warping algorithm is employed to measure the goodness of fit between experimental and predicted data, so that our method can model temporally-ordered experimental observations. We evaluated the proposed method on a synthetic dataset and two real phosphoproteomic datasets. The experimental results demonstrate that our model can uncover drug-induced alterations in signaling pathways in cancer cells. Compared with existing hybrid models, our method can model feedback loops so that the dynamical mechanisms of signaling networks can be uncovered from time-series data. By calibrating generic models of signaling pathways against real data, our method supports precise predictions of context-specific anticancer drug effects, which is an important step towards precision medicine. PMID:27774993
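
    The goodness-of-fit measure mentioned above, dynamic time warping, aligns two series by allowing local stretching before accumulating point-wise costs, which is what lets the model respect temporally ordered observations. A textbook DTW implementation (not the authors' code) looks like this:

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1-D series:
    cumulative cost of the best monotone alignment."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A series compared with a locally stretched copy of itself still
# matches perfectly, which a plain point-wise distance would not allow.
d = dtw([0.0, 1.0, 2.0, 2.0], [0.0, 1.0, 2.0])   # 0.0
```

    In the paper this distance scores how well the fuzzy logic network's predicted trajectories track the measured phosphoproteomic time series, tolerating modest timing differences.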

  4. Development of a cost-effectiveness analysis of leafy green marketing agreement irrigation water provisions.

    PubMed

    Jensen, Helen H; Pouliot, Sébastien; Wang, Tong; Jay-Russell, Michele T

    2014-06-01

    An analysis of the effectiveness of meeting the irrigation water provisions of the Leafy Green Marketing Agreement (LGMA) relative to its costs provides an approach to evaluating the cost-effectiveness of good agricultural practices that uses available data. A case example for lettuce is used to evaluate data requirements and provide a methodological example to determine the cost-effectiveness of the LGMA water quality provision. Both cost and field data on pathogen or indicator bacterial levels are difficult and expensive to obtain prospectively. Therefore, methods to use existing field and experimental data are required. Based on data from current literature and experimental studies, we calculate a cost-efficiency ratio that expresses the reduction in E. coli concentration per dollar expenditure on testing of irrigation water. With appropriate data, the same type of analysis can be extended to soil amendments and other practices and to evaluation of public benefits of practices used in production. Careful use of existing and experimental data can lead to evaluation of an expanded set of practices.

  5. Correction factors for the NMi free-air ionization chamber for medium-energy x-rays calculated with the Monte Carlo method.

    PubMed

    Grimbergen, T W; van Dijk, E; de Vries, W

    1998-11-01

    A new method is described for the determination of x-ray quality dependent correction factors for free-air ionization chambers. The method is based on weighting correction factors for mono-energetic photons, which are calculated using the Monte Carlo method, with measured air kerma spectra. With this method, correction factors for electron loss, scatter inside the chamber and transmission through the diaphragm and front wall have been calculated for the NMi free-air chamber for medium-energy x-rays for a wide range of x-ray qualities in use at NMi. The newly obtained correction factors were compared with the values in use at present, which are based on interpolation of experimental data for a specific set of x-ray qualities. For x-ray qualities which are similar to this specific set, the agreement between the correction factors determined with the new method and those based on the experimental data is better than 0.1%, except for heavily filtered x-rays generated at 250 kV. For x-ray qualities dissimilar to the specific set, differences up to 0.4% exist, which can be explained by uncertainties in the interpolation procedure of the experimental data. Since the new method does not depend on experimental data for a specific set of x-ray qualities, the new method allows for a more flexible use of the free-air chamber as a primary standard for air kerma for any x-ray quality in the medium-energy x-ray range.
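
    The weighting step described above reduces to a spectrum-weighted average of the mono-energetic correction factors: k_quality = Σ w(E)·k(E) / Σ w(E), where w(E) is the measured air-kerma spectrum for the x-ray quality of interest. A numerical sketch with entirely hypothetical values:

```python
import numpy as np

# Hypothetical mono-energetic correction factors k(E) (as would come
# from Monte Carlo) and a measured relative air-kerma spectrum w(E).
E = np.array([40.0, 60.0, 80.0, 100.0, 120.0])          # keV
k_mono = np.array([1.002, 1.004, 1.007, 1.010, 1.014])
w = np.array([0.05, 0.25, 0.40, 0.25, 0.05])

# Quality-dependent correction factor: spectrum-weighted average.
k_quality = np.sum(w * k_mono) / np.sum(w)               # ~1.0071
```

    Because the weighting uses whatever spectrum is measured, the same mono-energetic Monte Carlo results serve any x-ray quality, which is the flexibility the abstract claims over interpolating experimental factors from a fixed set of qualities.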

  6. Determination of Vertical Borehole and Geological Formation Properties using the Crossed Contour Method

    PubMed Central

    Leyde, Brian P.; Klein, Sanford A; Nellis, Gregory F.; Skye, Harrison

    2017-01-01

    This paper presents a new method called the Crossed Contour Method for determining the effective properties (borehole radius and ground thermal conductivity) of a vertical ground-coupled heat exchanger. The borehole radius is used as a proxy for the overall borehole thermal resistance. The method has been applied to both simulated and experimental borehole Thermal Response Test (TRT) data using the Duct Storage vertical ground heat exchanger model implemented in the TRansient SYstems Simulation software (TRNSYS). The Crossed Contour Method generates a parametric grid of simulated TRT data for different combinations of borehole radius and ground thermal conductivity in a series of time windows. The error between the average of the simulated and experimental bore field inlet and outlet temperatures is calculated for each set of borehole properties within each time window. Using these data, contours of the minimum error are constructed in the parameter space of borehole radius and ground thermal conductivity. When all of the minimum error contours for each time window are superimposed, the point where the contours cross (intersect) identifies the effective borehole properties for the model that most closely represents the experimental data in every time window and thus over the entire length of the experimental data set. The computed borehole properties are compared with results from existing model inversion methods including the Ground Property Measurement (GPM) software developed by Oak Ridge National Laboratory, and the Line Source Model. PMID:28785125
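
    The crossed-contour search can be mimicked with a toy forward model: build an error surface over a (borehole radius, ground conductivity) grid for each time window, then pick the parameter pair where the minimum-error regions of every window coincide. Everything below (the model function, the grids, the "TRT data") is a hypothetical stand-in for the TRNSYS simulation and the experimental measurements.

```python
import numpy as np

# Toy stand-in for the borehole model: mean fluid temperature as a
# simple function of radius r, ground conductivity kg, and time t.
def model_temp(r, kg, t):
    return 20.0 + t / (10.0 * kg) - 5.0 * r

true_r, true_kg = 0.06, 2.5
windows = [np.arange(0, 10), np.arange(10, 20), np.arange(20, 30)]
observed = [model_temp(true_r, true_kg, w) for w in windows]  # "TRT data"

rs = np.linspace(0.04, 0.08, 21)
kgs = np.linspace(2.0, 3.0, 21)

# Per-window error surfaces over the (r, kg) grid. The crossed-contour
# idea is that the minimum-error contours of all time windows intersect
# at the effective properties; here we locate that point as the pair
# whose worst window error is smallest.
worst = np.zeros((rs.size, kgs.size))
for w, obs in zip(windows, observed):
    err = np.array([[np.abs(model_temp(r, kg, w) - obs).mean()
                     for kg in kgs] for r in rs])
    worst = np.maximum(worst, err)

i, j = np.unravel_index(worst.argmin(), worst.shape)
best_r, best_kg = rs[i], kgs[j]
```

    With a consistent model, the recovered pair matches the true properties; with real data the crossing point identifies the parameters that best reproduce every time window, and hence the whole test, at once.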

  7. Determining Semantically Related Significant Genes.

    PubMed

    Taha, Kamal

    2014-01-01

    The GO relation embodies some aspects of existence dependency. If GO term x is existence-dependent on GO term y, the presence of y implies the presence of x. Therefore, the genes annotated with the function of GO term y are usually functionally and semantically related to the genes annotated with the function of GO term x. A large number of gene set enrichment analysis methods have been developed in recent years for analyzing gene set enrichment. However, most of these methods overlook the structural dependencies between GO terms in the GO graph by not considering the concept of existence dependency. We propose in this paper a biological search engine called RSGSearch that identifies enriched sets of genes annotated with different functions using the concept of existence dependency. We observe that GO term x cannot be existence-dependent on GO term y if x and y have the same specificity (biological characteristics). After encoding into a numeric format the contributions of GO terms annotating target genes to the semantics of their lowest common ancestors (LCAs), RSGSearch uses microarray experiments to identify the most significant LCA that annotates the result genes. We evaluated RSGSearch experimentally and compared it with five gene set enrichment systems. Results showed marked improvement.

  8. A Summary of Data and Findings from the First Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Chwalowski, Pawel.; Heeg, Jennifer; Wieseman, Carol D.

    2012-01-01

    This paper summarizes data and findings from the first Aeroelastic Prediction Workshop (AePW) held in April, 2012. The workshop has been designed as a series of technical interchange meetings to assess the state of the art of computational methods for predicting unsteady flowfields and static and dynamic aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques to simulate aeroelastic problems, and to identify computational and experimental areas needing additional research and development. For this initial workshop, three subject configurations have been chosen from existing wind tunnel data sets where there is pertinent experimental data available for comparison. Participant researchers analyzed one or more of the subject configurations and results from all of these computations were compared at the workshop. Keywords: Unsteady Aerodynamics, Aeroelasticity, Computational Fluid Dynamics, Transonic Flow, Separated Flow.

  9. COSMOS: accurate detection of somatic structural variations through asymmetric comparison between tumor and normal samples.

    PubMed

    Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun

    2016-05-05

    An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. Webly-Supervised Fine-Grained Visual Categorization via Deep Domain Adaptation.

    PubMed

    Xu, Zhe; Huang, Shaoli; Zhang, Ya; Tao, Dacheng

    2018-05-01

    Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.

  11. A new evaluation method research for fusion quality of infrared and visible images

    NASA Astrophysics Data System (ADS)

    Ge, Xingguo; Ji, Yiguo; Tao, Zhongxiang; Tian, Chunyan; Ning, Chengda

    2017-03-01

    To objectively evaluate the fusion effect of infrared and visible images, a fusion evaluation method based on energy-weighted average structural similarity and an edge information retention value is proposed to address the drawbacks of existing evaluation methods. The evaluation index of this method is given, and evaluation experiments on infrared and visible image fusion results under different algorithms and environments are conducted on the basis of this index. The experimental results show that the objective evaluation index is consistent with the subjective evaluation results, which indicates that the method is a practical and effective fusion image quality evaluation method.

  12. Accurate thermoelastic tensor and acoustic velocities of NaCl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcondes, Michel L., E-mail: michel@if.usp.br; Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455; Shukla, Gaurav, E-mail: shukla@physics.umn.edu

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  13. Passive Magnetic Bearing With Ferrofluid Stabilization

    NASA Technical Reports Server (NTRS)

    Jansen, Ralph; DiRusso, Eliseo

    1996-01-01

    A new class of magnetic bearings is shown to exist analytically and is demonstrated experimentally. This class of magnetic bearings uses a ferrofluid/solid-magnet interaction to stabilize the axial degree of freedom of a permanent magnet radial bearing. Twenty-six permanent magnet bearing designs and twenty-two ferrofluid stabilizer designs were evaluated. Two types of radial bearing designs were tested to determine their force and stiffness using two methods. The first method determines stiffness from frequency measurements via an analytical model. The second method consists of loading the system and measuring displacement. Two ferrofluid stabilizers were tested, and force-displacement curves were measured. Two experimental test fixtures were designed and constructed to conduct the stiffness testing. Polynomial models of the data were generated and used to design the bearing prototype. The prototype was constructed, tested, and shown to be stable. Further testing shows the possibility of using this technology for vibration isolation. The project successfully demonstrated the viability of the passive magnetic bearing with ferrofluid stabilization, both experimentally and analytically.

  14. The Association between Point-of-Sale Advertising Bans and Youth Experimental Smoking: Findings from the Global Youth Tobacco Survey (GYTS)

    PubMed Central

    Shang, Ce; Huang, Jidong; Li, Qing; Chaloupka, Frank J.

    2015-01-01

    Background and Objectives: While existing research has demonstrated a positive association between exposure to point-of-sale (POS) tobacco advertising and youth smoking, there is limited evidence on the relationship between POS advertising restrictions and experimental smoking among youth. This study aims to fill this research gap by analyzing the association between POS advertising bans and youths' experimental smoking. Methods: Global Youth Tobacco Surveys from 130 countries during 2007–2011 were linked to the WHO “MPOWER” tobacco control policy measures to analyze the association between POS advertising bans (a dichotomous measure of the existence of such bans) and experimental smoking using weighted logistic regressions. All analyses were clustered at the country level and controlled for age, parents' smoking status, GDP per capita, and country-level tobacco control scores in monitoring tobacco use, protecting people from smoke, offering help to quit, warning about the dangers of tobacco, enforcing promotion/advertising bans, and raising taxes on tobacco. Results: The results suggest that a POS advertising ban is significantly associated with reduced experimental smoking among youth (OR = 0.63, p < 0.01), and that this association holds for both genders (boys OR = 0.74, p < 0.1; girls OR = 0.52, p < 0.001). Conclusions: POS advertising bans are significantly associated with reduced experimental smoking among youth. Adopting POS advertising bans has the potential to reduce tobacco use among youth in countries currently without such bans. PMID:27294172

  15. Simulation assisted characterization of kaolinite-methanol intercalation complexes synthesized using cost-efficient homogenization method

    NASA Astrophysics Data System (ADS)

    Makó, Éva; Kovács, András; Ható, Zoltán; Kristóf, Tamás

    2015-12-01

    Recent experimental and simulation findings on kaolinite-methanol intercalation complexes raised the question of whether more stable structures exist in the wet and dry states, a question that has not yet been fully resolved. Experimental and molecular simulation analyses were used to investigate different types of kaolinite-methanol complexes and to reveal their actual structures. Cost-efficient homogenization methods were applied to synthesize the kaolinite-dimethyl sulfoxide and kaolinite-urea pre-intercalation complexes of the kaolinite-methanol ones. The tested homogenization method required an order of magnitude less reagent than the generally applied solution method. The influence of the type of pre-intercalated molecule and of the wetting or drying procedure (at room temperature and at 150 °C) on the intercalation was characterized experimentally by X-ray diffraction and thermal analysis. Consistent with the suggestion from the present simulations, stable 1.12-nm and 0.83-nm kaolinite-methanol complexes were identified. For these complexes, our molecular simulations predict either single-layered structures of mobile methanol/water molecules or non-intercalated structures of methoxy-functionalized kaolinite. We found that the methoxy-modified kaolinite can easily be intercalated by liquid methanol.

  16. Toward cost-efficient sampling methods

    NASA Astrophysics Data System (ADS)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

    Sampling methods have received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small number of high-degree vertices can carry most of the structural information of a complex network. The two proposed methods sample high-degree nodes efficiently, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method builds on the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods on three commonly used simulated networks (scale-free, random, and small-world) and on two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than existing sampling methods at recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality, and average path length, especially when the sampling rate is low.
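As a sketch of the core idea only (preferentially retaining high-degree nodes within a fixed sampling budget), one plausible stratified variant is shown below; the `hub_fraction` parameter and the uniform fill-in are assumptions, not the authors' exact SRS- or SBS-based algorithms:

```python
import random

def high_degree_stratified_sample(adj, rate, hub_fraction=0.2, seed=0):
    """Sample a fraction `rate` of nodes from an adjacency dict,
    always keeping the top `hub_fraction` of nodes by degree and
    filling the rest of the quota uniformly at random."""
    rng = random.Random(seed)
    nodes = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    quota = max(1, int(len(nodes) * rate))
    n_hubs = min(quota, max(1, int(len(nodes) * hub_fraction)))
    sample = nodes[:n_hubs]                      # high-degree stratum
    sample += rng.sample(nodes[n_hubs:], quota - n_hubs)
    return sample
```

Because hubs are always retained, degree-sensitive statistics such as the clustering coefficient degrade more slowly as the rate drops than under uniform sampling.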

  17. Approximate convective heating equations for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Zoby, E. V.; Moss, J. N.; Sutton, K.

    1979-01-01

    Laminar and turbulent heating-rate equations appropriate for engineering predictions of the convective heating rates about blunt reentry spacecraft at hypersonic conditions are developed. The approximate methods are applicable to both nonreacting and reacting gas mixtures for either constant or variable-entropy edge conditions. A procedure which accounts for variable-entropy effects and is not based on mass balancing is presented. Results of the approximate heating methods are in good agreement with existing experimental results as well as boundary-layer and viscous-shock-layer solutions.

  18. Transport Test Problems for Hybrid Methods Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.

    2011-12-28

    This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.

  19. A ranking algorithm for spacelab crew and experiment scheduling

    NASA Technical Reports Server (NTRS)

    Grone, R. D.; Mathis, F. H.

    1980-01-01

    The problem of obtaining an optimal or near-optimal schedule for scientific experiments to be performed on Spacelab missions is addressed. Current capabilities in this regard are examined, and a method of ranking experiments in order of difficulty is developed to support the existing software. Experimental data are obtained by applying this method to the sets of experiments corresponding to Spacelab missions 1, 2, and 3. Finally, suggestions are made concerning desirable modifications and features of the second-generation software being developed for this problem.

  20. Multi-Label Learning via Random Label Selection for Protein Subcellular Multi-Locations Prediction.

    PubMed

    Wang, Xiao; Li, Guo-Zheng

    2013-03-12

    Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most existing protein subcellular localization methods deal only with single-location proteins. In the past few years, only a few methods have been proposed to handle proteins with multiple locations. However, they adopt a simple strategy of transforming multi-location proteins into multiple single-location proteins, which does not take correlations among different subcellular locations into account. In this paper, a novel method named RALS (multi-label learning via RAndom Label Selection) is proposed to learn from multi-location proteins effectively and efficiently. Through a five-fold cross-validation test on a benchmark dataset, we demonstrate that our proposed method, which considers label correlations, clearly outperforms the baseline BR method, which does not, indicating that correlations among different subcellular locations do exist and contribute to improved prediction performance. Experimental results on two benchmark datasets also show that our proposed methods achieve significantly higher performance than other state-of-the-art methods in predicting subcellular multi-locations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public use.
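The abstract does not give RALS's exact procedure; a minimal sketch of the general random-label-selection idea (each random subset of labels defines one multi-class subtask, as in RAKEL-style methods) might look like the following, where the label names and helper functions are hypothetical:

```python
import random
from itertools import combinations

def random_label_subsets(labels, subset_size, n_subsets, seed=0):
    """Draw distinct random label subsets; each subset defines one
    multi-class subtask over the combinations of its labels."""
    rng = random.Random(seed)
    pool = list(combinations(sorted(labels), subset_size))
    return rng.sample(pool, min(n_subsets, len(pool)))

def project(label_set, subset):
    """Encode one protein's label set as a class tuple for a subtask,
    preserving co-occurrence (correlation) within the subset."""
    return tuple(int(lab in label_set) for lab in subset)
```

A base classifier trained per subtask then predicts joint label combinations rather than each location independently, which is how label correlations enter the model.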

  1. Experimental Design for Hanford Low-Activity Waste Glasses with High Waste Loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.; Cooley, Scott K.; Vienna, John D.

    This report discusses the development of an experimental design for the initial phase of the Hanford low-activity waste (LAW) enhanced glass study. This report is based on a manuscript written for an applied statistics journal. Appendices A, B, and E include additional information relevant to the LAW enhanced glass experimental design that is not included in the journal manuscript. The glass composition experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC involving 15 LAW glass components. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this report. One of the glass components, SO3, has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture model expressed in the relative proportions of the 14 other components. The partial quadratic mixture model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This report describes how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study. A layered design consists of points on an outer layer, an inner layer, and a center point. There were 18 outer-layer glasses chosen using optimal experimental design software to augment 147 existing glass compositions that were within the LAW glass composition experimental region. Then 13 inner-layer glasses were chosen with the software to augment the existing and outer-layer glasses. The experimental design was completed by a center-point glass, a Vitreous State Laboratory glass, and replicates of the center-point and Vitreous State Laboratory glasses.

  2. Automatically generated acceptance test: A software reliability experiment

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.

    1988-01-01

    This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.

  3. A novel method for multifactorial bio-chemical experiments design based on combinational design theory.

    PubMed

    Wang, Xun; Sun, Beibei; Liu, Boyang; Fu, Yaping; Zheng, Pan

    2017-01-01

    Experimental design focuses on describing or explaining the multifactorial interactions that are hypothesized to reflect the variation. The design introduces conditions that may directly affect the variation, where particular conditions are purposely selected for observation. Combinatorial design theory deals with the existence, construction, and properties of systems of finite sets whose arrangements satisfy generalized concepts of balance and/or symmetry. In this work, borrowing the concept of "balance" from combinatorial design theory, a novel method for designing multifactorial biochemical experiments is proposed, in which balanced templates from combinatorial design are used to select the conditions for observation. Balanced experimental data covering all influencing factors can thus be obtained for further processing, for example as a training set for machine learning models. Finally, software based on the proposed method is developed for designing experiments that cover the influencing factors a specified number of times.
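The authors' balanced templates are not specified in the abstract; as an assumed illustration of the "balance" property, a simple cyclic template guarantees that every level of every factor is observed equally often across the selected runs:

```python
def cyclic_balanced_design(n_factors, n_levels, n_runs):
    """Cyclic template: the level of factor f in run r is (r + f) mod
    n_levels, so each level of each factor appears exactly
    n_runs / n_levels times (n_runs must be a multiple of n_levels)."""
    assert n_runs % n_levels == 0, "runs must be a multiple of levels"
    return [[(r + f) % n_levels for f in range(n_factors)]
            for r in range(n_runs)]
```

For example, 8 runs over 3 factors at 4 levels each give every factor level exactly twice, which is the kind of even coverage a balanced template provides, at far fewer runs than a full factorial (4^3 = 64).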

  4. Modeling the Hydration Layer around Proteins: Applications to Small- and Wide-Angle X-Ray Scattering

    PubMed Central

    Virtanen, Jouko Juhani; Makowski, Lee; Sosnick, Tobin R.; Freed, Karl F.

    2011-01-01

    Small-/wide-angle x-ray scattering (SWAXS) experiments can aid in determining the structures of proteins and protein complexes, but success requires accurate computational treatment of solvation. We compare two methods by which to calculate SWAXS patterns. The first approach uses all-atom explicit-solvent molecular dynamics (MD) simulations. The second, far less computationally expensive method involves prediction of the hydration density around a protein using our new HyPred solvation model, which is applied without the need for additional MD simulations. The SWAXS patterns obtained from the HyPred model compare well to both experimental data and the patterns predicted by the MD simulations. Both approaches exhibit advantages over existing methods for analyzing SWAXS data. The close correspondence between calculated and observed SWAXS patterns provides strong experimental support for the description of hydration implicit in the HyPred model. PMID:22004761

  5. Incompressible Navier-Stokes Computations with Heat Transfer

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan; Rogers, Stuart; Kutler, Paul (Technical Monitor)

    1994-01-01

    The existing pseudocompressibility method for the incompressible Navier-Stokes equations is extended to heat transfer problems by including the energy equation. The solution method is based on the pseudocompressibility approach and uses an implicit upwind differencing scheme together with the Gauss-Seidel line relaxation method. Current computations use the one-equation Baldwin-Barth turbulence model, which is derived from a simplified form of the standard k-epsilon model equations. Both forced and natural convection problems are examined. For the forced convection case, numerical results for turbulent reattaching flow behind a backward-facing step are compared against experimental measurements. The validity of the Boussinesq approximation for simplifying the buoyancy force term is investigated. The natural convective flow structure generated by heat transfer in a vertical rectangular cavity is studied, and the numerical results are compared with experimental measurements by Morrison and Tran.

  6. Theory and preliminary experimental verification of quantitative edge illumination x-ray phase contrast tomography.

    PubMed

    Hagen, C K; Diemoz, P C; Endrizzi, M; Rigon, L; Dreossi, D; Arfelli, F; Lopez, F C M; Longo, R; Olivo, A

    2014-04-07

    X-ray phase contrast imaging (XPCi) methods are sensitive to phase in addition to attenuation effects and, therefore, can achieve improved image contrast for weakly attenuating materials, such as often encountered in biomedical applications. Several XPCi methods exist, most of which have already been implemented in computed tomographic (CT) modality, thus allowing volumetric imaging. The Edge Illumination (EI) XPCi method had, until now, not been implemented as a CT modality. This article provides indications that quantitative 3D maps of an object's phase and attenuation can be reconstructed from EI XPCi measurements. Moreover, a theory for the reconstruction of combined phase and attenuation maps is presented. Both reconstruction strategies find applications in tissue characterisation and the identification of faint, weakly attenuating details. Experimental results for wires of known materials and for a biological object validate the theory and confirm the superiority of the phase over conventional, attenuation-based image contrast.

  7. Is ``No-Threshold'' a ``Non-Concept''?

    NASA Astrophysics Data System (ADS)

    Schaeffer, David J.

    1981-11-01

    A controversy prominent in scientific literature that has carried over to newspapers, magazines, and popular books is having serious social and political expressions today: “Is there, or is there not, a threshold below which exposure to a carcinogen will not induce cancer?” The distinction between establishing the existence of this threshold (which is a theoretical question) and its value (which is an experimental one) gets lost in the scientific arguments. Establishing the existence of this threshold has now become a philosophical question (and an emotional one). In this paper I qualitatively outline theoretical reasons why a threshold must exist, discuss experiments which measure thresholds on two chemicals, and describe and apply a statistical method for estimating the threshold value from exposure-response data.
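The paper's statistical method is not reproduced in the abstract; one common way to estimate a threshold from exposure-response data is a grid-searched "hockey-stick" fit (background response below the threshold, linear rise above it), sketched below under that assumption:

```python
def fit_threshold(doses, responses):
    """Grid-search a hockey-stick breakpoint over the observed doses:
    flat background below the candidate threshold, a least-squares
    line through (threshold, background) above it; return the
    candidate minimizing the total squared error."""
    best_sse, best_t = float("inf"), None
    for t in doses:
        below = [r for d, r in zip(doses, responses) if d <= t]
        above = [(d, r) for d, r in zip(doses, responses) if d > t]
        bg = sum(below) / len(below)           # background response
        if above:
            num = sum((d - t) * (r - bg) for d, r in above)
            den = sum((d - t) ** 2 for d, _ in above)
            slope = num / den if den else 0.0
        else:
            slope = 0.0
        sse = sum((r - bg) ** 2 for r in below)
        sse += sum((r - (bg + slope * (d - t))) ** 2 for d, r in above)
        if sse < best_sse:
            best_sse, best_t = sse, t
    return best_t
```

This treats the threshold's value as the experimental question the paper describes; whether a nonzero threshold exists at all is the separate theoretical question the paper argues.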

  8. Studies on Experimental Ontology and Knowledge Service Development in Bio-Environmental Engineering

    NASA Astrophysics Data System (ADS)

    Zhang, Yunliang

    2018-01-01

    Existing domain-related ontologies and information service patterns are analyzed, and the main problems faced by experimental-scheme knowledge services are clarified. An ontology framework model for knowledge services in bio-environmental engineering is proposed, covering experimental materials, experimental conditions, and experimental instruments; this ontology is combined with existing knowledge organization systems to organize scientific and technical literature, data, and experimental schemes. With similarity and priority calculations, it can improve research in related domains.

  9. FIND: difFerential chromatin INteractions Detection using a spatial Poisson process.

    PubMed

    Djekidel, Mohamed Nadhir; Chen, Yang; Zhang, Michael Q

    2018-02-12

    Polymer-based simulations and experimental studies indicate the existence of a spatial dependency between the adjacent DNA fibers involved in the formation of chromatin loops. However, the existing strategies for detecting differential chromatin interactions assume that the interacting segments are spatially independent from the other segments nearby. To resolve this issue, we developed a new computational method, FIND, which considers the local spatial dependency between interacting loci. FIND uses a spatial Poisson process to detect differential chromatin interactions that show a significant difference in their interaction frequency and the interaction frequency of their neighbors. Simulation and biological data analysis show that FIND outperforms the widely used count-based methods and has a better signal-to-noise ratio. © 2018 Djekidel et al.; Published by Cold Spring Harbor Laboratory Press.

  10. Research of ceramic matrix for a safe immobilization of radioactive sludge waste

    NASA Astrophysics Data System (ADS)

    Dorofeeva, Ludmila; Orekhov, Dmitry

    2018-03-01

    Research to improve an existing method for hardening radioactive waste by fixation in a ceramic matrix was carried out. For samples coated with sodium silicate and tested after storage in air, the radionuclide leaching rate was determined. The properties of clay ceramics and the optimum sintering conditions were defined. Experimental data were obtained on how the sintering temperature regime and the amounts of water, sludge, and additives in the samples influence their mechanical durability and water resistance. The comparative analysis of the research is aimed at improving the existing method of hardening radioactive waste by inclusion in a ceramic matrix and shows the advantages of the obtained results over analogs.

  11. Generation of wideband chaos with suppressed time-delay signature by delayed self-interference.

    PubMed

    Wang, Anbang; Yang, Yibiao; Wang, Bingjie; Zhang, Beibei; Li, Lei; Wang, Yuncai

    2013-04-08

    We demonstrate experimentally and numerically a method that uses the incoherent delayed self-interference (DSI) of chaotic light from a semiconductor laser with optical feedback to generate a wideband chaotic signal. The results show that DSI can eliminate the domination of the laser relaxation oscillation present in chaotic laser light and therefore flatten and widen the power spectrum. Furthermore, DSI suppresses the time-delay signature induced by external-cavity modes and improves the symmetry of the probability distribution by more than one order of magnitude. We also show experimentally that this DSI signal is beneficial for random number generation.
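A time-delay signature is conventionally located as a peak in the autocorrelation of the intensity time series; the sketch below illustrates that detection step only (the synthetic delayed signal is an assumption standing in for experimental data, not the authors' measurements):

```python
import random

def autocorr(x, max_lag):
    """Normalized autocorrelation of series x at lags 1..max_lag."""
    n, mean = len(x), sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x)
    return [sum((x[i] - mean) * (x[i + k] - mean)
                for i in range(n - k)) / var
            for k in range(1, max_lag + 1)]

def delay_signature(x, max_lag):
    """Lag of the strongest autocorrelation peak: the TDS location."""
    ac = autocorr(x, max_lag)
    return 1 + max(range(max_lag), key=lambda i: abs(ac[i]))
```

A successful DSI scheme would drive this peak down toward the noise floor, so the same function can quantify how well the signature has been suppressed.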

  12. n-body simulations using message passing parallel computers.

    NASA Astrophysics Data System (ADS)

    Grama, A. Y.; Kumar, V.; Sameh, A.

    The authors present new parallel formulations of the Barnes-Hut method for n-body simulations on message passing computers. These parallel formulations partition the domain efficiently incurring minimal communication overhead. This is in contrast to existing schemes that are based on sorting a large number of keys or on the use of global data structures. The new formulations are augmented by alternate communication strategies which serve to minimize communication overhead. The impact of these communication strategies is experimentally studied. The authors report on experimental results obtained from an astrophysical simulation on an nCUBE2 parallel computer.

  13. AGARD standard aeroelastic configurations for dynamic response. Candidate configuration I.-wing 445.6

    NASA Technical Reports Server (NTRS)

    Yates, E. Carson, Jr.

    1987-01-01

    To promote the evaluation of existing and emerging unsteady aerodynamic codes and methods for applying them to aeroelastic problems, especially for the transonic range, a limited number of aerodynamic configurations and experimental dynamic response data sets are to be designated by the AGARD Structures and Materials Panel as standards for comparison. This set is a sequel to that established several years ago for comparisons of calculated and measured aerodynamic pressures and forces. This report presents the information needed to perform flutter calculations for the first candidate standard configuration for dynamic response along with the related experimental flutter data.

  14. The BAPE 2 balloon-borne CO2

    NASA Technical Reports Server (NTRS)

    Degnan, J. J.; Walker, H. E.; Peruso, C. J.; Johnson, E. H.; Klein, B. J.; Mcelroy, J. H.

    1972-01-01

    The systems and techniques which were utilized in the experiment to establish an air-to-ground CO2 laser heterodyne link are described along with the successes and problems encountered when the heterodyne receiver and laser transmitter package were removed from the controlled environment of the laboratory. Major topics discussed include: existing systems and the underlying principles involved in their operation; experimental techniques and optical alignment methods which were found to be useful; theoretical calculations of signal strengths expected under a variety of test conditions and in actual flight; and the experimental results including problems encountered and their possible solutions.

  15. Sparse Coding for N-Gram Feature Extraction and Training for File Fragment Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Felix; Quach, Tu-Thach; Wheeler, Jason

    File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods, which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers such as support vector machines (SVMs) over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used to supplement existing hand-engineered features.
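The dictionary learning itself requires an optimizer, but the n-gram front end the abstract describes can be sketched in a few lines; the function names below are assumptions for illustration:

```python
from collections import Counter

def ngram_counts(fragment: bytes, n: int) -> Counter:
    """Count overlapping byte n-grams in a file fragment."""
    return Counter(fragment[i:i + n]
                   for i in range(len(fragment) - n + 1))

def ngram_frequencies(fragment: bytes, n: int) -> dict:
    """Normalize counts to frequencies, the raw features that a
    hand-engineered histogram (n = 1) or a learned sparse dictionary
    (larger n) would summarize."""
    counts = ngram_counts(fragment, n)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}
```

The combinatorial explosion the abstract mentions is visible here: the frequency dictionary can have up to 256^n keys, which is why larger n-grams call for a learned, sparse representation rather than an explicit histogram.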

  16. Sparse Coding for N-Gram Feature Extraction and Training for File Fragment Classification

    DOE PAGES

    Wang, Felix; Quach, Tu-Thach; Wheeler, Jason; ...

    2018-04-05

    File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods, which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers such as support vector machines (SVMs) over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used to supplement existing hand-engineered features.

  17. Cutting the wires: modularization of cellular networks for experimental design.

    PubMed

    Lang, Moritz; Summers, Sean; Stelling, Jörg

    2014-01-07

    Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  18. Bayesian Normalization Model for Label-Free Quantitative Analysis by LC-MS

    PubMed Central

    Nezami Ranjbar, Mohammad R.; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.

    2016-01-01

    We introduce a new method for normalization of data acquired by liquid chromatography coupled with mass spectrometry (LC-MS) in label-free differential expression analysis. Normalization of LC-MS data is desired prior to subsequent statistical analysis to adjust variabilities in ion intensities that are not caused by biological differences but experimental bias. There are different sources of bias including variabilities during sample collection and sample storage, poor experimental design, noise, etc. In addition, instrument variability in experiments involving a large number of LC-MS runs leads to a significant drift in intensity measurements. Although various methods have been proposed for normalization of LC-MS data, there is no universally applicable approach. In this paper, we propose a Bayesian normalization model (BNM) that utilizes scan-level information from LC-MS data. Specifically, the proposed method uses peak shapes to model the scan-level data acquired from extracted ion chromatograms (EIC) with parameters considered as a linear mixed effects model. We extended the model into BNM with drift (BNMD) to compensate for the variability in intensity measurements due to long LC-MS runs. We evaluated the performance of our method using synthetic and experimental data. In comparison with several existing methods, the proposed BNM and BNMD yielded significant improvement. PMID:26357332
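    For context, a minimal sketch of the kind of global normalization baseline that BNM improves on; per-run median scaling is a common simple approach, not the paper's scan-level Bayesian model, and the intensities below are made up:

```python
import statistics

def median_normalize(runs):
    """Scale each LC-MS run so its median intensity matches the global median.
    A crude baseline: it removes a single multiplicative bias per run but,
    unlike BNM/BNMD, cannot model peak shapes or intensity drift."""
    grand = statistics.median(x for run in runs for x in run)
    out = []
    for run in runs:
        scale = grand / statistics.median(run)
        out.append([x * scale for x in run])
    return out

runs = [[100.0, 200.0, 300.0], [10.0, 20.0, 30.0]]
normed = median_normalize(runs)
print([round(x, 2) for x in normed[1]])  # [32.5, 65.0, 97.5]
```

After scaling, both runs share the global median, so run-level multiplicative bias is removed.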

  19. Finite-difference computations of rotor loads

    NASA Technical Reports Server (NTRS)

    Caradonna, F. X.; Tung, C.

    1985-01-01

    This paper demonstrates the current and future potential of finite-difference methods for solving real rotor problems which now rely largely on empiricism. The demonstration consists of a simple means of combining existing finite-difference, integral, and comprehensive loads codes to predict real transonic rotor flows. These computations are performed for hover and high-advance-ratio flight. Comparisons are made with experimental pressure data.

  20. Finite-difference computations of rotor loads

    NASA Technical Reports Server (NTRS)

    Caradonna, F. X.; Tung, C.

    1985-01-01

    The current and future potential of finite difference methods for solving real rotor problems which now rely largely on empiricism are demonstrated. The demonstration consists of a simple means of combining existing finite-difference, integral, and comprehensive loads codes to predict real transonic rotor flows. These computations are performed for hover and high-advance-ratio flight. Comparisons are made with experimental pressure data.

  1. Teacher Self-Efficacy: A Link to Student Achievement in English Language and Mathematics in Belizean Primary Schools

    ERIC Educational Resources Information Center

    Alvarez-Nunez, Tanya Mae

    2012-01-01

    Scope and Method of Study: This quantitative, non-experimental study sought to determine if a statistically significant difference existed in student achievement on the PSE exam in Belizean primary schools for students who have teachers with varying levels of self-efficacy (high, medium and low). The Teacher Efficacy Scale (TES), which captures…

  2. Extensions of Existing Methods for Use with a New Class of Experimental Designs Useful when There Is Treatment Effect Contamination

    ERIC Educational Resources Information Center

    Rhoads, Christopher

    2011-01-01

    Researchers planning a randomized field trial to evaluate the effectiveness of an educational intervention often face the following dilemma. They plan to recruit schools to participate in their study. The question is, "Should the researchers randomly assign individuals (either students or teachers, depending on the intervention) within schools to…

  3. Quantitative Structure-Activity Relationships for Organophosphate Enzyme Inhibition (Briefing Charts)

    DTIC Science & Technology

    2011-09-22

    Organophosphates (OPs) are a group of pesticides that inhibit enzymes such as acetylcholinesterase. Numerous OP structural variants exist and toxicity data can be... and human toxicity studies, especially for OPs lacking experimental data. 15. SUBJECT TERMS: QSAR, Organophosphates... structure and mechanism of toxicity; c) Linking QSAR and OP PBPK/PD; 2. Methods: a) Physiochemical Descriptors, b) Regression Techniques; 3. Results: a

  4. Relative loading on biplane wings

    NASA Technical Reports Server (NTRS)

    Diehl, Walter S

    1934-01-01

    Recent improvements in stress analysis methods have made it necessary to revise and to extend the loading curves to cover all conditions of flight. This report is concerned with a study of existing biplane data by combining the experimental and theoretical data to derive a series of curves from which the lift curves of the individual wings of a biplane may be obtained.

  5. Experimental Study Comparing a Traditional Approach to Performance Appraisal Training to a Whole-Brain Training Method at C.B. Fleet Laboratories

    ERIC Educational Resources Information Center

    Selden, Sally; Sherrier, Tom; Wooters, Robert

    2012-01-01

    The purpose of this study is to examine the effects of a new approach to performance appraisal training. Motivated by split-brain theory and existing studies of cognitive information processing and performance appraisals, this exploratory study examined the effects of a whole-brain approach to training managers for implementing performance…

  6. A Quantitative Experimental Study of the Effectiveness of Systems to Identify Network Attackers

    ERIC Educational Resources Information Center

    Handorf, C. Russell

    2016-01-01

    This study analyzed the meta-data collected from a honeypot that was run by the Federal Bureau of Investigation for a period of 5 years. This analysis compared the use of existing industry methods and tools, such as Intrusion Detection System alerts, network traffic flow and system log traffic, within the Open Source Security Information Manager…

  7. Poly(ethylene oxide) Chains Are Not "Hydrophilic" When They Exist As Polymer Brush Chains

    NASA Astrophysics Data System (ADS)

    Lee, Hoyoung; Kim, Dae Hwan; Witte, Kevin N.; Ohn, Kimberly; Choi, Je; Kim, Kyungil; Meron, Mati; Lin, Binhua; Akgun, Bulent; Satija, Sushil; Won, You-Yeon

    2012-02-01

    By using a combined experimental and theoretical approach, a model poly(ethylene oxide) (PEO) brush system, prepared by spreading a poly(ethylene oxide)-poly(n-butyl acrylate) (PEO-PnBA) amphiphilic diblock copolymer onto an air-water interface, was investigated. The polymer segment density profiles of the PEO brush in the direction normal to the air-water interface under various grafting density conditions were determined from combined X-ray and neutron reflectivity data. In order to achieve a theoretically sound analysis of the reflectivity data, we developed a new data analysis method that uses the self-consistent field theoretical modeling as a tool for predicting expected reflectivity results for comparison with the experimental data. Using this new data analysis method, we discovered that the effective Flory-Huggins interaction parameter of the PEO brush chains is significantly greater than that corresponding to the theta condition, suggesting that contrary to what is more commonly observed for PEO in normal situations, the PEO chains are actually not "hydrophilic" when they exist as polymer brush chains, because of the many-body interactions forced to be effective in the brush situation.

  8. Ionization potential for the 1s{sup 2}2s{sup 2} of berylliumlike systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, K.T.; Zhu, X.W.; Wang, Z.W.

    1993-05-01

    The 1s{sup 2}2s{sup 2} ground-state energies of berylliumlike systems are calculated with a full-core plus correlation method. A partial saturation of basis functions method is used to extrapolate a better nonrelativistic energy. The 1s{sup 2}2s{sup 2} ionization potentials are calculated by including the relativistic corrections, mass polarization and QED effects. These results are compared with the existing theoretical and experimental data in the literature. The predicted BeI, CIII, NIV, and OV ionization potentials are within the quoted experimental error. Our result for FVI, 1267606.7 cm{sup -1}, supports the recent experiment of Engstrom, 1267606(2) cm{sup -1}, over the datum in the existing data tables. The predicted specific mass polarization contribution to the ionization potential for BeI, 0.00688 a.u., agrees with the 0.00674(100) a.u. from the experiment of Wen. Using the calculated results of Z=4-10, 15, and 20, we extrapolated the results for other Z systems up to Z=25 for which the ionization potentials are not explicitly computed.
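    The extrapolation in Z can be illustrated with a simple polynomial fit through computed points; the quadratic trend below is fabricated for illustration and is not the paper's data:

```python
def quad_extrapolate(xs, ys, x):
    """Quadratic (Lagrange) extrapolation through three points - the kind of
    smooth-trend extrapolation in Z used to predict values for systems that
    were not explicitly computed."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
    l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
    l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
    return y0 * l0 + y1 * l1 + y2 * l2

# Fabricated trend f(Z) = 0.3*Z^2 - 0.5*Z + 1, evaluated at Z = 4, 5, 6
ys = [0.3 * z * z - 0.5 * z + 1 for z in (4, 5, 6)]
print(round(quad_extrapolate((4, 5, 6), ys, 10), 6))  # 26.0
```

Because the trend is exactly quadratic, the extrapolation recovers it exactly; real IP(Z) trends are only approximately polynomial, which is why the paper anchors on explicitly computed Z values.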

  9. [Disinfecting contact tonometers - a systematic review].

    PubMed

    Neubauer, A S; Heeg, P; Kampik, A; Hirneiss, C

    2009-05-01

    The aim of this study is to provide the best available evidence on how to disinfect Goldmann contact tonometers. A systematic review of all articles on disinfection of contact tonometers was conducted. Articles published up to July 2008 were identified in Medline, Embase and references from included articles. Two observers participated in the data retrieval and assessment of the identified studies. A total of 89 articles was retrieved, of which 58 could be included. Of those, 18 were clinical studies, 17 experimental microbiological studies, 8 expert assessments or guidelines, and 15 reviews, surveys, or descriptions of new methods. The clinical studies illustrate the importance of the problem and possible side effects of some disinfection methods, but yield inconclusive results regarding efficacy. Experimental studies investigated a variety of bacterial and virological questions as well as material damage caused by disinfection. Both chlorine-based and hydrogen peroxide-based liquid disinfection were shown to be effective if applied for 5 min. Inconsistent results exist for alcohol wipes and UV disinfection, and material damage has been described for both. The US guidelines and most expert recommendations are supported by the existing data. Chlorine-based and hydrogen peroxide-based liquid disinfection for 5 minutes is effective and relatively safe for disinfecting contact tonometers.

  10. Copper interstitial recombination centers in Cu3N

    NASA Astrophysics Data System (ADS)

    Yee, Ye Sheng; Inoue, Hisashi; Hultqvist, Adam; Hanifi, David; Salleo, Alberto; Magyari-Köpe, Blanka; Nishi, Yoshio; Bent, Stacey F.; Clemens, Bruce M.

    2018-06-01

    We present a comprehensive study of the earth-abundant semiconductor Cu3N as a potential solar energy conversion material, using density functional theory and experimental methods. Density functional theory indicates that among the dominant intrinsic point defects, copper vacancies VCu have shallow defect levels while copper interstitials Cui behave as deep potential wells in the conduction band, which mediate Shockley-Read-Hall recombination. The existence of Cui defects has been experimentally verified using photothermal deflection spectroscopy. A Cu3N/ZnS heterojunction diode with good current-voltage rectification behavior has been demonstrated experimentally, but no photocurrent is generated under illumination. The absence of photocurrent can be explained by a large concentration of Cui recombination centers capturing electrons in p-type Cu3N.

  11. EEG Sleep Stages Classification Based on Time Domain Features and Structural Graph Similarity.

    PubMed

    Diykh, Mohammed; Li, Yan; Wen, Peng

    2016-11-01

    The electroencephalogram (EEG) signals are commonly used in diagnosing and treating sleep disorders. Many existing methods for sleep stages classification mainly depend on the analysis of EEG signals in time or frequency domain to obtain a high classification accuracy. In this paper, the statistical features in time domain, the structural graph similarity and the K-means (SGSKM) are combined to identify six sleep stages using single channel EEG signals. Firstly, each EEG segment is partitioned into sub-segments. The size of a sub-segment is determined empirically. Secondly, statistical features are extracted, sorted into different sets of features and forwarded to the SGSKM to classify EEG sleep stages. We have also investigated the relationships between sleep stages and the time domain features of the EEG data used in this paper. The experimental results show that the proposed method yields better classification results than other four existing methods and the support vector machine (SVM) classifier. A 95.93% average classification accuracy is achieved by using the proposed method.
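    A sketch of the kind of time-domain statistical features extracted per sub-segment (a minimal illustration on made-up samples; the paper's exact feature set and the SGSKM classifier are not reproduced):

```python
import statistics

def time_domain_features(segment):
    """Simple per-sub-segment statistics of the kind fed to a sleep-stage
    classifier; segment is a list of EEG samples."""
    return {
        "mean": statistics.fmean(segment),
        "stdev": statistics.pstdev(segment),
        "min": min(segment),
        "max": max(segment),
        "range": max(segment) - min(segment),
    }

feats = time_domain_features([1.0, 2.0, 3.0, 4.0])
print(feats["mean"])  # 2.5
```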

  12. Joint detection and tracking of size-varying infrared targets based on block-wise sparse decomposition

    NASA Astrophysics Data System (ADS)

    Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu

    2016-05-01

    The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapped blocks, and each block is weighted on the local image complexity and target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, label multi-Bernoulli (LMB) tracker tracks multiple targets taking the result of single-frame detection as input, and provides corresponding target existence probabilities for detection. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, due to no special assumption for the size and shape of small targets. Because of exact decomposition, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutters, detect and track size-varying targets in infrared images.
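    The block-partitioning first step can be sketched as follows (a toy illustration on a tiny array; the complexity weighting and the sparse target-background decomposition itself are omitted):

```python
def overlapped_blocks(img, size, step):
    """Divide an image (a list of rows) into overlapped square blocks,
    the first stage before per-block weighting and decomposition."""
    h, w = len(img), len(img[0])
    return [
        [row[c:c + size] for row in img[r:r + size]]
        for r in range(0, h - size + 1, step)
        for c in range(0, w - size + 1, step)
    ]

img = [[r * 4 + c for c in range(4)] for r in range(4)]
blocks = overlapped_blocks(img, size=2, step=1)
print(len(blocks))   # 9 overlapped 2x2 blocks from a 4x4 image
print(blocks[0])     # [[0, 1], [4, 5]]
```

With step smaller than size, neighboring blocks share pixels, which is what lets a small target near a block boundary still fall wholly inside some block.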

  13. Covariance analysis for evaluating head trackers

    NASA Astrophysics Data System (ADS)

    Kang, Donghoon

    2017-10-01

    Existing methods for evaluating the performance of head trackers usually rely on publicly available face databases, which contain facial images and the ground truths of their corresponding head orientations. However, most of the existing publicly available face databases are constructed by assuming that a frontal head orientation can be determined by compelling the person under examination to look straight ahead at the camera on the first video frame. Since nobody can accurately direct one's head toward the camera, this assumption may be unrealistic. Rather than obtaining estimation errors, we present a method for computing the covariance of estimation error rotations to evaluate the reliability of head trackers. As an uncertainty measure of estimators, the Schatten 2-norm of a square root of error covariance (or the algebraic average of relative error angles) can be used. The merit of the proposed method is that it does not disturb the person under examination by asking him to direct his head toward certain directions. Experimental results using real data validate the usefulness of our method.
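    The proposed uncertainty measure can be computed without an explicit matrix square root: the Schatten 2-norm equals the Frobenius norm, and for a symmetric positive semidefinite covariance C, ||C^{1/2}||_F^2 = tr(C). A sketch with made-up error rotations, assuming zero-mean errors:

```python
def error_uncertainty(error_rotations):
    """Schatten 2-norm of the square root of the error covariance of small
    rotation errors (given as axis-angle vectors). Uses the identity
    ||C^{1/2}||_F = sqrt(tr(C)) with C = E'E/n, assuming zero-mean errors."""
    n = len(error_rotations)
    trace = sum(sum(x * x for x in e) for e in error_rotations) / n
    return trace ** 0.5

# Made-up small-angle error rotations (radians), not real tracker output
errs = [(0.01, 0.0, 0.0), (-0.01, 0.0, 0.0), (0.0, 0.02, 0.0), (0.0, -0.02, 0.0)]
print(round(error_uncertainty(errs), 4))  # 0.0158
```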

  14. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744

  15. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures.

  16. All-Versus-Nothing Proof of Einstein-Podolsky-Rosen Steering

    PubMed Central

    Chen, Jing-Ling; Ye, Xiang-Jun; Wu, Chunfeng; Su, Hong-Yi; Cabello, Adán; Kwek, L. C.; Oh, C. H.

    2013-01-01

    Einstein-Podolsky-Rosen steering is a form of quantum nonlocality intermediate between entanglement and Bell nonlocality. Although Schrödinger already mooted the idea in 1935, steering still defies a complete understanding. In analogy to “all-versus-nothing” proofs of Bell nonlocality, here we present a proof of steering without inequalities, rendering the detection of correlations leading to a violation of steering inequalities unnecessary. We show that, given any two-qubit entangled state, the existence of a projective measurement by Alice such that Bob's normalized conditional states can be regarded as two different pure states provides a criterion for Alice-to-Bob steerability. A steering inequality equivalent to the all-versus-nothing proof is also obtained. Our result clearly demonstrates that there exist many quantum states which do not violate any previously known steering inequality but are indeed steerable. Our method offers advantages over the existing methods for experimentally testing steerability, and sheds new light on the asymmetric steering problem. PMID:23828242

  17. Novel and efficient tag SNPs selection algorithms.

    PubMed

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant forms of genetic variations amongst species; the association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection was presented, which was applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm can achieve better performance than the existing tag SNP selection algorithms; in most cases, this proposed algorithm is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands times faster than the previously known methods. Tools and web services for haplotype block analysis integrated by hadoop MapReduce framework are also developed using the proposed algorithm as computation kernels.
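    A toy greedy selection conveys the flavor of tag SNP selection (a simplified sketch, not the paper's algorithm: here two SNPs with identical genotype patterns across samples count as tagging each other):

```python
def greedy_tag_snps(snps):
    """Greedily pick tag SNPs: repeatedly choose the SNP whose genotype
    pattern covers the most not-yet-tagged SNPs, then remove that group.
    snps maps SNP name -> genotype tuple across samples."""
    remaining = dict(snps)
    tags = []
    while remaining:
        best = max(remaining,
                   key=lambda s: sum(remaining[s] == p for p in remaining.values()))
        pattern = remaining[best]
        tags.append(best)
        remaining = {s: p for s, p in remaining.items() if p != pattern}
    return tags

snps = {"rs1": (0, 1, 1), "rs2": (0, 1, 1), "rs3": (1, 0, 0)}
print(greedy_tag_snps(snps))  # ['rs1', 'rs3'] -- rs1 tags rs2 as well
```

Real blocks use linkage-disequilibrium thresholds rather than exact pattern identity; the high-redundancy case (many identical patterns) is exactly where the paper reports the largest speedups.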

  18. The cost of quality: Implementing generalization and suppression for anonymizing biomedical data with minimal information loss.

    PubMed

    Kohlmayer, Florian; Prasser, Fabian; Kuhn, Klaus A

    2015-12-01

    With the ARX data anonymization tool structured biomedical data can be de-identified using syntactic privacy models, such as k-anonymity. Data is transformed with two methods: (a) generalization of attribute values, followed by (b) suppression of data records. The former method results in data that is well suited for analyses by epidemiologists, while the latter method significantly reduces loss of information. Our tool uses an optimal anonymization algorithm that maximizes output utility according to a given measure. To achieve scalability, existing optimal anonymization algorithms exclude parts of the search space by predicting the outcome of data transformations regarding privacy and utility without explicitly applying them to the input dataset. These optimizations cannot be used if data is transformed with generalization and suppression. As optimal data utility and scalability are important for anonymizing biomedical data, we had to develop a novel method. In this article, we first confirm experimentally that combining generalization with suppression significantly increases data utility. Next, we prove that, within this coding model, the outcome of data transformations regarding privacy and utility cannot be predicted. As a consequence, existing algorithms fail to deliver optimal data utility. We confirm this finding experimentally. The limitation of previous work can be overcome at the cost of increased computational complexity. However, scalability is important for anonymizing data with user feedback. Consequently, we identify properties of datasets that may be predicted in our context and propose a novel and efficient algorithm. Finally, we evaluate our solution with multiple datasets and privacy models. This work presents the first thorough investigation of which properties of datasets can be predicted when data is anonymized with generalization and suppression. Our novel approach adapts existing optimization strategies to our context and combines different search methods. The experiments show that our method is able to efficiently solve a broad spectrum of anonymization problems. Our work shows that implementing syntactic privacy models is challenging and that existing algorithms are not well suited for anonymizing data with transformation models which are more complex than generalization alone. As such models have been recommended for use in the biomedical domain, our results are of general relevance for de-identifying structured biomedical data. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
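    The generalization coding model and the k-anonymity check it serves can be illustrated with a toy sketch (a minimal illustration with made-up ages; ARX's search over transformation lattices and record suppression are not reproduced):

```python
from collections import Counter

def generalize_age(age, level):
    """Generalize an age value: level 0 = exact, 1 = decade bin, 2 = suppressed."""
    if level == 0:
        return str(age)
    if level == 1:
        lo = age // 10 * 10
        return f"{lo}-{lo + 9}"
    return "*"

def is_k_anonymous(values, k):
    """True if every generalized value occurs at least k times,
    i.e. each record is indistinguishable from at least k-1 others."""
    return all(c >= k for c in Counter(values).values())

ages = [23, 27, 31, 36]
gen = [generalize_age(a, 1) for a in ages]
print(gen)                     # ['20-29', '20-29', '30-39', '30-39']
print(is_k_anonymous(gen, 2))  # True
```

Suppression would remove outlier records that force coarser generalization levels, which is why combining the two methods preserves more utility than generalization alone.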

  19. Experimental Treatment for Duchenne Muscular Dystrophy Gets Boost from Existing Medication

    MedlinePlus

    Spotlight on Research: Experimental Treatment for Duchenne Muscular Dystrophy Gets Boost from Existing Medication. By Colleen Labbe, M.S. | March 1, 2013. A mouse hanging on a wire during a test of muscle strength. Mice with a mutant dystrophin gene, which ...

  20. Compression Testing of Textile Composite Materials

    NASA Technical Reports Server (NTRS)

    Masters, John E.

    1996-01-01

    The applicability of existing test methods, which were developed primarily for laminates made of unidirectional prepreg tape, to textile composites is an area of concern. The issue is whether the values measured for the 2-D and 3-D braided, woven, stitched, and knit materials are accurate representations of the true material response. This report provides a review of efforts to establish a compression test method for textile reinforced composite materials. Experimental data have been gathered from several sources and evaluated to assess the effectiveness of a variety of test methods. The effectiveness of the individual test methods to measure the material's modulus and strength is determined. Data are presented for 2-D triaxial braided, 3-D woven, and stitched graphite/epoxy material. However, the determination of a recommended test method and specimen dimensions is based, primarily, on experimental results obtained by the Boeing Defense and Space Group for 2-D triaxially braided materials. They evaluated seven test methods: NASA Short Block, Modified IITRI, Boeing Open Hole Compression, Zabora Compression, Boeing Compression after Impact, NASA ST-4, and a Sandwich Column Test.

  1. MLACP: machine-learning-based prediction of anticancer peptides

    PubMed Central

    Manavalan, Balachandran; Basith, Shaherin; Shin, Tae Hwan; Choi, Sun; Kim, Myeong Ok; Lee, Gwang

    2017-01-01

    Cancer is the second leading cause of death globally, and use of therapeutic peptides to target and kill cancer cells has received considerable attention in recent years. Identification of anticancer peptides (ACPs) through wet-lab experimentation is expensive and often time consuming; therefore, development of an efficient computational method is essential to identify potential ACP candidates prior to in vitro experimentation. In this study, we developed support vector machine- and random forest-based machine-learning methods for the prediction of ACPs using the features calculated from the amino acid sequence, including amino acid composition, dipeptide composition, atomic composition, and physicochemical properties. We trained our methods using the Tyagi-B dataset and determined the machine parameters by 10-fold cross-validation. Furthermore, we evaluated the performance of our methods on two benchmarking datasets, with our results showing that the random forest-based method outperformed the existing methods with an average accuracy and Matthews correlation coefficient value of 88.7% and 0.78, respectively. To assist the scientific community, we also developed a publicly accessible web server at www.thegleelab.org/MLACP.html. PMID:29100375
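    One of the feature families listed above, amino acid composition, is straightforward to sketch (illustrative only; the trained SVM/random-forest models and the web server are the paper's):

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq: str) -> dict:
    """Amino acid composition: the fraction of each of the 20 standard
    residues in a peptide sequence, one feature family used for ACP prediction."""
    counts = Counter(seq)
    n = len(seq)
    return {aa: counts.get(aa, 0) / n for aa in AMINO_ACIDS}

comp = aa_composition("ACAC")
print(comp["A"], comp["C"])  # 0.5 0.5
```

Dipeptide composition is the analogous 400-dimensional vector over residue pairs; both feed directly into standard classifiers.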

  2. Compound image segmentation of published biomedical figures.

    PubMed

    Li, Pengyuan; Jiang, Xiangying; Kambhamettu, Chandra; Shatkay, Hagit

    2018-04-01

    Images convey essential information in biomedical publications. As such, there is a growing interest within the bio-curation and the bio-databases communities, to store images within publications as evidence for biomedical processes and for experimental results. However, many of the images in biomedical publications are compound images consisting of multiple panels, where each individual panel potentially conveys a different type of information. Segmenting such images into constituent panels is an essential first step toward utilizing images. In this article, we develop a new compound image segmentation system, FigSplit, which is based on Connected Component Analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality assessment step for evaluating and modifying segmentations. Two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experimental results show the effectiveness of our method compared with other methods. The system is publicly available for use at: https://www.eecis.udel.edu/~compbio/FigSplit. The code is available upon request. shatkay@udel.edu. Supplementary data are available online at Bioinformatics.
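    The Connected Component Analysis at the core of panel splitting can be sketched on a binary mask (a toy 4-connected labelling, not the FigSplit implementation; real figures first need binarization and the quality-assessment step described above):

```python
from collections import deque

def connected_components(grid):
    """4-connected component labelling on a binary mask (list of 0/1 rows).
    Each component is returned as a list of (row, col) pixels."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for r in range(h):
        for c in range(w):
            if grid[r][c] and not seen[r][c]:
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                comps.append(comp)
    return comps

mask = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 1, 1]]
print(len(connected_components(mask)))  # 2 separated "panels"
```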

  3. Verification of Internal Dose Calculations.

    NASA Astrophysics Data System (ADS)

    Aissi, Abdelmadjid

    The MIRD internal dose calculations have been in use for more than 15 years, but their accuracy has always been questionable. There have been attempts to verify these calculations; however, these attempts had various shortcomings which kept the question of verification of the MIRD data still unanswered. The purpose of this research was to develop techniques and methods to verify the MIRD calculations in a more systematic and scientific manner. The research consisted of improving a volumetric dosimeter, developing molding techniques, and adapting the Monte Carlo computer code ALGAM to the experimental conditions and vice versa. The organic dosimetric system contained TLD-100 powder and could be shaped to represent human organs. The dosimeter possessed excellent characteristics for the measurement of internal absorbed doses, even in the case of the lungs. The molding techniques are inexpensive and were used in the fabrication of dosimetric and radioactive source organs. The adaptation of the computer program provided useful theoretical data with which the experimental measurements were compared. The experimental data and the theoretical calculations were compared for 6 source organ-7 target organ configurations. The results of the comparison indicated the existence of an agreement between measured and calculated absorbed doses, when taking into consideration the average uncertainty (16%) of the measurements, and the average coefficient of variation (10%) of the Monte Carlo calculations. However, analysis of the data gave also an indication that the Monte Carlo method might overestimate the internal absorbed doses. Even if the overestimate exists, at least it could be said that the use of the MIRD method in internal dosimetry was shown to lead to no unnecessary exposure to radiation that could be caused by underestimating the absorbed dose. 
The experimental and the theoretical data were also used to test the validity of the Reciprocity Theorem for heterogeneous phantoms, such as the MIRD phantom and its physical representation, Mr. ADAM. The results indicated that the Reciprocity Theorem is valid within an average range of uncertainty of 8%.
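    The flavor of the Monte Carlo side of such a comparison can be conveyed with a toy absorbed-fraction estimate; the per-photon absorption probability below is an assumed stand-in for the organ geometry and attenuation that codes like ALGAM actually model:

```python
import random

def mc_absorbed_fraction(n_photons, hit_prob, seed=0):
    """Toy Monte Carlo: estimate the fraction of emitted photons absorbed in a
    target organ, with hit_prob an assumed per-photon absorption probability
    (illustrative only, not MIRD data). Statistical noise scales as 1/sqrt(n)."""
    rng = random.Random(seed)
    hits = sum(rng.random() < hit_prob for _ in range(n_photons))
    return hits / n_photons

est = mc_absorbed_fraction(10_000, 0.3)
print(round(est, 2))  # close to 0.3; ~0.5% statistical noise at n = 10,000
```

The coefficient of variation of such estimates is the Monte Carlo counterpart of the 10% figure quoted above.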

  4. Thermal Management Using Pulsating Jet Cooling Technology

    NASA Astrophysics Data System (ADS)

    Alimohammadi, S.; Dinneen, P.; Persoons, T.; Murray, D. B.

    2014-07-01

    The existing methods of heat removal from compact electronic devices are known to be deficient as the evolving technology demands more power density and, accordingly, better cooling techniques. Impinging jets can be used as a satisfactory method for thermal management of electronic devices with limited space and volume. Pulsating flows can produce an additional enhancement in heat transfer rate compared to steady flows. This article is part of a comprehensive experimental and numerical study performed on pulsating jet cooling technology. The experimental approach explores the heat transfer performance of a pulsating air jet impinging onto a flat surface for nozzle-to-surface distances 1 <= H/D <= 6, Reynolds numbers 1,300 <= Re <= 2,800, pulsation frequencies 2 Hz <= f <= 65 Hz, and Strouhal numbers 0.0012 <= Sr = fD/Um <= 0.084. The time-resolved velocity at the nozzle exit is measured to quantify the turbulence intensity profile. The numerical methodology is first validated using the experimental local Nusselt number distribution for the steady jet with the same geometry and boundary conditions. For a time-averaged Reynolds number of 6,000, the heat transfer enhancement using the pulsating jet is calculated for 9 Hz <= f <= 55 Hz, 0.017 <= Sr <= 0.102, and 1 <= H/D <= 6. For the same range of Sr, the numerical and experimental methods show consistent results.
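    The Strouhal number used to characterize the pulsation is a one-line computation (the nozzle diameter and mean exit velocity below are illustrative assumptions, not the paper's rig):

```python
def strouhal(freq_hz, nozzle_d_m, mean_velocity_m_s):
    """Strouhal number Sr = f*D/Um: dimensionless pulsation frequency relating
    the forcing frequency f to the nozzle diameter D and mean exit velocity Um."""
    return freq_hz * nozzle_d_m / mean_velocity_m_s

# Assumed values: f = 10 Hz, D = 13 mm, Um = 5 m/s
print(round(strouhal(10.0, 0.013, 5.0), 3))  # 0.026
```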

  5. Improved accuracy of supervised CRM discovery with interpolated Markov models and cross-species comparison

    PubMed Central

    Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S.; Sinha, Saurabh

    2011-01-01

    Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, ‘enhancers’), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for ‘motif-blind’ CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to ‘supervise’ the search. We propose a new statistical method, based on ‘Interpolated Markov Models’, for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers. PMID:21821659
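The interpolated Markov model idea described above can be sketched compactly: word probabilities conditioned on a context are blended across context lengths, from the longest available order down to order zero. The interpolation weight, pseudocounts, and training sequences below are invented simplifications, not the paper's actual scheme.

```python
import math
from collections import defaultdict

# Count all words (substrings) of length 1..max_order+1 in the training CRMs.
def train_counts(seqs, max_order=2):
    counts = defaultdict(int)
    for s in seqs:
        for k in range(max_order + 1):
            for i in range(len(s) - k):
                counts[s[i:i + k + 1]] += 1  # context+next-base word counts
    return counts

# P(base | context), interpolating from the longest context down to order 0,
# with add-one pseudocounts and a fixed blend weight lam (both assumptions).
def imm_prob(counts, context, base, alphabet="ACGT", lam=0.6):
    if not context:
        total = sum(counts[b] for b in alphabet)
        return (counts[base] + 1) / (total + len(alphabet))
    total = sum(counts[context + b] for b in alphabet)
    longer = (counts[context + base] + 1) / (total + len(alphabet))
    return lam * longer + (1 - lam) * imm_prob(counts, context[1:], base, alphabet, lam)

# Log-likelihood of a candidate window under the IMM trained on known CRMs.
def score(counts, seq, order=2):
    return sum(math.log(imm_prob(counts, seq[max(0, i - order):i], seq[i]))
               for i in range(len(seq)))

crms = ["ACGTACGTAC", "ACGTTTACGT"]   # toy "known CRMs"
print(score(train_counts(crms), "ACGTAC"))
```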

  6. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    PubMed

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computer tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of volumetric overlap metric, by comparing with the ground-truth segmentation performed by a radiologist.

  7. Saturation-inversion-recovery: A method for T1 measurement

    NASA Astrophysics Data System (ADS)

    Wang, Hongzhi; Zhao, Ming; Ackerman, Jerome L.; Song, Yiqiao

    2017-01-01

    Spin-lattice relaxation (T1) has traditionally been measured by inversion-recovery (IR), saturation-recovery (SR), or related methods. These existing methods share a common behavior in that the function describing the T1 sensitivity is exponential, e.g., exp(-τ/T1), where τ is the recovery time. In this paper, we describe a saturation-inversion-recovery (SIR) sequence for T1 measurement with considerably sharper T1 dependence than that of the IR and SR sequences, and demonstrate it experimentally. The SIR method could be useful for improving the contrast between regions of differing T1 in T1-weighted MRI.
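The shared exponential T1 sensitivity mentioned above is easy to illustrate with the textbook recovery curves for the two classical methods; the SIR sequence itself is not reproduced here, only the exp(-τ/T1) behavior it sharpens. The T1 value is an example.

```python
import math

def sr_signal(tau, t1, m0=1.0):
    """Saturation-recovery: Mz(tau) = M0 * (1 - exp(-tau/T1))."""
    return m0 * (1.0 - math.exp(-tau / t1))

def ir_signal(tau, t1, m0=1.0):
    """Inversion-recovery: Mz(tau) = M0 * (1 - 2*exp(-tau/T1))."""
    return m0 * (1.0 - 2.0 * math.exp(-tau / t1))

t1 = 1.0  # seconds (example value)
# The IR signal passes through zero at tau = T1 * ln 2.
for tau in (0.1, t1 * math.log(2), 3.0):
    print(f"tau={tau:.3f}s  SR={sr_signal(tau, t1):+.3f}  IR={ir_signal(tau, t1):+.3f}")
```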

  8. Research and Implementation of Tibetan Word Segmentation Based on Syllable Methods

    NASA Astrophysics Data System (ADS)

    Jiang, Jing; Li, Yachao; Jiang, Tao; Yu, Hongzhi

    2018-03-01

    Tibetan word segmentation (TWS) is an important problem in Tibetan information processing, and abbreviated word recognition is one of its key and most difficult subproblems. Most existing methods for Tibetan abbreviated word recognition are rule-based approaches, which require vocabulary support. In this paper, we propose a method based on a sequence tagging model for abbreviated word recognition, and then implement it in TWS systems with sequence labeling models. The experimental results show that our abbreviated word recognition method is fast and effective and can be combined easily with the segmentation model, significantly improving the performance of Tibetan word segmentation.

  9. DFT and experimental studies of the structure and vibrational spectra of curcumin

    NASA Astrophysics Data System (ADS)

    Kolev, Tsonko M.; Velcheva, Evelina A.; Stamboliyska, Bistra A.; Spiteller, Michael

    The potential energy surface of curcumin [1,7-bis(4-hydroxy-3-methoxyphenyl)-1,6-heptadiene-3,5-dione] was explored with the DFT method using the B3LYP functional and the 6-311G* basis set. Single-point calculations were performed at levels up to B3LYP/6-311++G**//B3LYP/6-311G*. All isomers were located and their relative energies determined. According to the calculations, the planar enol form is more stable than the nonplanar diketo form. The results of the optimized molecular structure are presented and compared with experimental X-ray diffraction data. In addition, harmonic vibrational frequencies of the molecule were evaluated theoretically using the B3LYP density functional method. The computed vibrational frequencies were used to determine the types of molecular motions associated with each of the experimental bands observed. Our vibrational data show that both in the solid state and in all studied solutions curcumin exists in the enol form.

  10. Analytical and experimental investigation on transmission loss of clamped double panels: implication of boundary effects.

    PubMed

    Xin, F X; Lu, T J

    2009-03-01

    The air-borne sound insulation performance of a rectangular double-panel partition clamp-mounted on an infinite acoustic rigid baffle is investigated both analytically and experimentally and compared with that of a simply supported one. With the clamped (or simply supported) boundary accounted for by using the method of modal function, a double series solution for the sound transmission loss (STL) of the structure is obtained by employing the weighted residual (Galerkin) method. Experimental measurements with Al double-panel partitions having an air cavity are subsequently carried out to validate the theoretical model for both types of boundary condition, and good overall agreement is achieved. A consistency check of the two different models (based separately on clamped and simply supported modal functions) is performed by extending the panel dimensions to infinity, where no boundaries exist. The significant discrepancies between the two boundary conditions are demonstrated in terms of the STL versus frequency plots as well as the panel deflection mode shapes.

  11. The Cl + O3 reaction: a detailed QCT simulation of molecular beam experiments.

    PubMed

    Menéndez, M; Castillo, J F; Martínez-Haya, B; Aoiz, F J

    2015-10-14

    We have studied in detail the dynamics of the Cl + O3 reaction in the 1-56 kcal mol(-1) collision energy range using quasi-classical trajectory (QCT) calculations on a recent potential energy surface (PES) [J. F. Castillo et al., Phys. Chem. Chem. Phys., 2011, 13, 8537]. The main goal of this work has been to assess the accuracy of the PES and the reliability of the QCT method by comparison with the existing crossed molecular beam results [J. Zhang and Y. T. Lee, J. Phys. Chem. A, 1997, 101, 6485]. For this purpose, we have developed a methodology that allows us to determine the experimental observables in crossed molecular beam experiments (integral and differential cross sections, recoil velocity distributions, scattering angle-recoil velocity polar maps, etc.) as continuous functions of the collision energy. Using these distributions, raw experimental data in the laboratory frame (angular distributions and time-of-flight spectra) have been simulated from first principles with the sole information on the instrumental parameters and taking into account the energy spread. A general good agreement with the experimental data has been found, thereby demonstrating the adequacy of the QCT method and the quality of the PES to describe the dynamics of this reaction at the level of resolution of the existing crossed beam experiments. Some features that are apparent in the differential cross sections have also been analysed in terms of the dynamics of the reaction and its evolution with the collision energy.

  12. PredPPCrys: Accurate Prediction of Sequence Cloning, Protein Production, Purification and Crystallization Propensity from Protein Sequences Using Multi-Step Heterogeneous Feature Fusion and Selection

    PubMed Central

    Wang, Huilin; Wang, Mingjun; Tan, Hao; Li, Yuan; Zhang, Ziding; Song, Jiangning

    2014-01-01

    X-ray crystallography is the primary approach to solve the three-dimensional structure of a protein. However, a major bottleneck of this method is the failure of multi-step experimental procedures to yield diffraction-quality crystals, including sequence cloning, protein material production, purification, crystallization and ultimately, structural determination. Accordingly, prediction of the propensity of a protein to successfully undergo these experimental procedures based on the protein sequence may help narrow down laborious experimental efforts and facilitate target selection. A number of bioinformatics methods based on protein sequence information have been developed for this purpose. However, our knowledge on the important determinants of propensity for a protein sequence to produce high diffraction-quality crystals remains largely incomplete. In practice, most of the existing methods display poorer performance when evaluated on larger and updated datasets. To address this problem, we constructed an up-to-date dataset as the benchmark, and subsequently developed a new approach termed ‘PredPPCrys’ using the support vector machine (SVM). Using a comprehensive set of multifaceted sequence-derived features in combination with a novel multi-step feature selection strategy, we identified and characterized the relative importance and contribution of each feature type to the prediction performance of five individual experimental steps required for successful crystallization. The resulting optimal candidate features were used as inputs to build the first-level SVM predictor (PredPPCrys I). Next, prediction outputs of PredPPCrys I were used as the input to build second-level SVM classifiers (PredPPCrys II), which led to significantly enhanced prediction performance. Benchmarking experiments indicated that our PredPPCrys method outperforms most existing procedures on both up-to-date and previous datasets. 
In addition, the predicted crystallization targets of currently non-crystallizable proteins were provided as compendium data, which are anticipated to facilitate target selection and design for the worldwide structural genomics consortium. PredPPCrys is freely available at http://www.structbioinfor.org/PredPPCrys. PMID:25148528
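The two-level architecture described in this record can be sketched structurally: five first-level predictors (one per experimental step, from cloning through crystallization) emit scores that a second-level classifier combines. Plain weighted-sum "classifiers" stand in for the SVMs here, and all weights and features are invented illustrations, not the PredPPCrys models.

```python
# Toy stand-in for an SVM decision function: a weighted sum plus bias.
def linear_score(weights, features, bias=0.0):
    return sum(w * f for w, f in zip(weights, features)) + bias

def predict_crystallization(features, level1_models, level2_weights):
    # Level 1: one score per experimental step (cloning ... crystallization).
    level1_scores = [linear_score(w, features, b) for w, b in level1_models]
    # Level 2: combine the five step scores into a final propensity.
    return linear_score(level2_weights, level1_scores)

features = [0.3, 0.8, 0.5]              # sequence-derived features (made up)
level1 = [([0.2, 0.1, -0.3], 0.1)] * 5  # five identical toy step models
final = predict_crystallization(features, level1, [0.2] * 5)
print(f"propensity score = {final:.3f}")
```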

  13. A Generalized Weizsacker-Williams Method Applied to Pion Production in Proton-Proton Collisions

    NASA Technical Reports Server (NTRS)

    Ahern, Sean C.; Poyser, William J.; Norbury, John W.; Tripathi, R. K.

    2002-01-01

    A new "Generalized" Weizsacker-Williams method (GWWM) is used to calculate approximate cross sections for relativistic peripheral proton-proton collisions. Instead of a massless photon mediator, the method allows the mediator to have mass for short-range interactions. This generalizes the Weizsacker-Williams method (WWM) from Coulomb interactions to GWWM for strong interactions. An elastic proton-proton cross section is calculated using GWWM with experimental data for the elastic p+p interaction, where the mass p+ is now the mediator. The resulting calculated cross section is compared to existing data for the elastic proton-proton interaction, and a good approximate fit is found between the data and the calculation.

  14. Objectification of perceptual image quality for mobile video

    NASA Astrophysics Data System (ADS)

    Lee, Seon-Oh; Sim, Dong-Gyu

    2011-06-01

    This paper presents an objective video quality evaluation method for quantifying the subjective quality of digital mobile video. The proposed method aims to objectify subjective quality by extracting edgeness and blockiness parameters. To evaluate the performance of the proposed algorithms, we carried out subjective video quality tests with the double-stimulus continuous quality scale method and obtained differential mean opinion score (DMOS) values for 120 mobile video clips. We then compared the performance of the proposed methods with that of existing methods in terms of DMOS over the same 120 clips. Experimental results showed that the proposed methods were approximately 10% better than the edge peak signal-to-noise ratio of the J.247 method in terms of Pearson correlation.
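The evaluation criterion used above, Pearson linear correlation between an objective metric and subjective DMOS values, can be computed directly; the score lists below are made-up examples (a good objective metric correlates strongly and negatively with DMOS, since higher DMOS means worse perceived quality).

```python
import math

def pearson(x, y):
    """Pearson linear correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

objective = [0.82, 0.64, 0.91, 0.40, 0.55]  # hypothetical metric outputs
dmos      = [15.0, 33.0, 9.0, 58.0, 41.0]   # hypothetical subjective scores

print(f"Pearson r = {pearson(objective, dmos):.3f}")
```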

  15. Iterative methods for dose reduction and image enhancement in tomography

    DOEpatents

    Miao, Jianwei; Fahimian, Benjamin Pooya

    2012-09-18

    A system and method for creating a three-dimensional cross-sectional image of an object by the reconstruction of its projections that have been iteratively refined through modification in object space and Fourier space is disclosed. The invention provides systems and methods for use with any tomographic imaging system that reconstructs an object from its projections. In one embodiment, the invention presents a method to eliminate interpolations present in conventional tomography. The method has been experimentally shown to provide higher resolution and improved image quality parameters over existing approaches. A primary benefit of the method is radiation dose reduction, since the invention can produce an image of a desired quality with fewer projections than conventional methods.

  16. Selecting Models for Measuring Change When True Experimental Conditions Do Not Exist.

    ERIC Educational Resources Information Center

    Fortune, Jim C.; Hutson, Barbara A.

    1984-01-01

    Measuring change when true experimental conditions do not exist is a difficult process. This article reviews the artifacts of change measurement in evaluations and quasi-experimental designs, delineates considerations in choosing a model to measure change under nonideal conditions, and suggests ways to organize models to facilitate selection.…

  17. Coupled CFD and Particle Vortex Transport Method: Wing Performance and Wake Validations

    DTIC Science & Technology

    2008-06-26

    …the PVTM analysis. The results obtained using the coupled RANS/PVTM analysis compare well with experimental data, in particular the pressure… …is validated against wind tunnel test data. Comparisons with measured pressure distribution, loadings, and vortex parameters, and the corresponding…

  18. Thermal discharges and their role in pending power plant regulatory decisions

    NASA Technical Reports Server (NTRS)

    Miller, M. H.

    1978-01-01

    Federal and state laws require the imminent retrofit of offstream condenser cooling to the newer steam electric stations. Waiver can be granted based on sound experimental data, demonstrating that existing once-through cooling will not adversely affect aquatic ecosystems. Conventional methods for monitoring thermal plumes, and some remote sensing alternatives, are reviewed, using on going work at one Maryland power plant for illustration.

  19. Robust High Data Rate MIMO Underwater Acoustic Communications

    DTIC Science & Technology

    2011-09-30

    We solved it via exploiting FFTs. The extended CAN algorithm is referred to as periodic CAN (PeCAN). Unlike most existing sequence construction… …methods which are algebraic and deterministic in nature, we start the iteration of PeCAN from random phase initializations and then proceed to… …covert UAC applications. We will use PeCAN sequences for more in-water experimentation to demonstrate their effectiveness. Temporal Resampling: In…

  20. Adaptive Array for Weak Interfering Signals: Geostationary Satellite Experiments. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Steadman, Karl

    1989-01-01

    The performance of an experimental adaptive array is evaluated using signals from an existing geostationary satellite interference environment. To do this, an earth station antenna was built to receive signals from various geostationary satellites. In these experiments the received signals have a frequency of approximately 4 GHz (C-band) and a bandwidth of over 35 MHz. These signals are downconverted to a 69 MHz intermediate frequency in the experimental system. Using the downconverted signals, the performance of the experimental system for various signal scenarios is evaluated. Due to the inherent thermal noise, qualitative rather than quantitative test results are presented. It is shown that the experimental system can null up to two interfering signals well below the noise level. However, to avoid cancellation of the desired signal, the use of a steering vector is needed. Various methods to obtain an estimate of the steering vector are proposed.

  1. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity by the light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of the proposed algorithm. PMID:27253083
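The threshold step described above can be illustrated with a toy split of specular pixels by intensity. The threshold value and pixel tuples below are assumptions for illustration only; the paper's actual strategy operates on light-field depth and multi-view color data.

```python
# Split candidate specular pixels into 'saturated' (near sensor clipping) and
# 'unsaturated' groups by their maximum RGB channel. Threshold is assumed.
def split_specular(pixels, sat_threshold=250):
    saturated = [p for p in pixels if max(p) >= sat_threshold]
    unsaturated = [p for p in pixels if max(p) < sat_threshold]
    return saturated, unsaturated

pixels = [(255, 255, 250), (200, 190, 180), (251, 249, 247), (120, 110, 100)]
sat, unsat = split_specular(pixels)
print(f"{len(sat)} saturated, {len(unsat)} unsaturated")
```

Each group would then receive its own recovery treatment, mirroring the two-branch processing in the record.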

  2. Simultaneous Local Binary Feature Learning and Encoding for Homogeneous and Heterogeneous Face Recognition.

    PubMed

    Lu, Jiwen; Erin Liong, Venice; Zhou, Jie

    2017-08-09

    In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) approach for both homogeneous and heterogeneous face recognition. Unlike existing hand-crafted face descriptors such as local binary pattern (LBP) and Gabor features which usually require strong prior knowledge, our SLBFLE is an unsupervised feature learning approach which automatically learns face representation from raw pixels. Unlike existing binary face descriptors such as the LBP, discriminant face descriptor (DFD), and compact binary face descriptor (CBFD) which use a two-stage feature extraction procedure, our SLBFLE jointly learns binary codes and the codebook for local face patches so that discriminative information from raw pixels from face images of different identities can be obtained by using a one-stage feature learning and encoding procedure. Moreover, we propose a coupled simultaneous local binary feature learning and encoding (C-SLBFLE) method to make the proposed approach suitable for heterogeneous face matching. Unlike most existing coupled feature learning methods which learn a pair of transformation matrices for each modality, we exploit both the common and specific information from heterogeneous face samples to characterize their underlying correlations. Experimental results on six widely used face datasets are presented to demonstrate the effectiveness of the proposed method.

  3. Experimental Design for Parameter Estimation of Gene Regulatory Networks

    PubMed Central

    Timmer, Jens

    2012-01-01

    Systems biology aims for building quantitative models to address unresolved issues in molecular biology. In order to describe the behavior of biological cells adequately, gene regulatory networks (GRNs) are intensively investigated. As the validity of models built for GRNs depends crucially on the kinetic rates, various methods have been developed to estimate these parameters from experimental data. For this purpose, it is favorable to choose the experimental conditions yielding maximal information. However, existing experimental design principles often rely on unfulfilled mathematical assumptions or become computationally demanding with growing model complexity. To solve this problem, we combined advanced methods for parameter and uncertainty estimation with experimental design considerations. As a showcase, we optimized three simulated GRNs in one of the challenges from the Dialogue for Reverse Engineering Assessment and Methods (DREAM). This article presents our approach, which was awarded the best performing procedure at the DREAM6 Estimation of Model Parameters challenge. For fast and reliable parameter estimation, local deterministic optimization of the likelihood was applied. We analyzed identifiability and precision of the estimates by calculating the profile likelihood. Furthermore, the profiles provided a way to uncover a selection of most informative experiments, from which the optimal one was chosen using additional criteria at every step of the design process. In conclusion, we provide a strategy for optimal experimental design and show its successful application on three highly nonlinear dynamic models. Although presented in the context of the GRNs to be inferred for the DREAM6 challenge, the approach is generic and applicable to most types of quantitative models in systems biology and other disciplines. PMID:22815723

  4. Determination of the top quark mass circa 2013: methods, subtleties, perspectives

    NASA Astrophysics Data System (ADS)

    Juste, Aurelio; Mantry, Sonny; Mitov, Alexander; Penin, Alexander; Skands, Peter; Varnes, Erich; Vos, Marcel; Wimpenny, Stephen

    2014-10-01

    We present an up-to-date overview of the problem of top quark mass determination. We assess the need for precision in the top mass extraction in the LHC era together with the main theoretical and experimental issues arising in precision top mass determination. We collect and document existing results on top mass determination at hadron colliders and map the prospects for future precision top mass determination at e+e- colliders. We present a collection of estimates for the ultimate precision of various methods for top quark mass extraction at the LHC.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demos, S G; Gandour-Edwards, R; Ramsamooj, R

    The feasibility of developing bladder cancer detection methods using intrinsic tissue optical properties is the focus of this investigation. In vitro experiments have been performed using polarized elastic light scattering in combination with tissue autofluorescence in the NIR spectral region under laser excitation in the green and red spectral regions. The experimental results obtained from a set of tissue specimens from 25 patients reveal the presence of optical fingerprint characteristics suitable for cancer detection with high contrast and accuracy. These photonic methods are compatible with existing endoscopic imaging modalities, which makes them suitable for in vivo application.

  6. An analysis method for two-dimensional transonic viscous flow

    NASA Technical Reports Server (NTRS)

    Bavitz, P. C.

    1975-01-01

    A method for the approximate calculation of transonic flow over airfoils, including shock waves and viscous effects, is described. Numerical solutions are obtained by use of a computer program which is discussed in the appendix. The importance of including the boundary layer in the analysis is clearly demonstrated, as well as the need to improve on existing procedures near the trailing edge. Comparisons between calculations and experimental data are presented for both conventional and supercritical airfoils, emphasis being on the surface pressure distribution, and good agreement is indicated.

  7. A High Power Density Single-Phase PWM Rectifier with Active Ripple Energy Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ning, Puqi; Wang, Ruxi; Wang, Fei

    It is well known that a second-order harmonic current, and a corresponding ripple voltage, exist on the dc bus of single-phase PWM rectifiers. The low-frequency harmonic current is normally filtered using a bulk capacitor on the bus, which results in low power density. This paper proposes an active ripple energy storage method that can effectively reduce the required energy storage capacitance. The feed-forward control method and design considerations are provided. Simulation and 15 kW experimental results are provided for verification.
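The bulk-capacitor penalty motivating the record above can be estimated with the standard single-phase ripple-energy relation: the second-harmonic ripple energy is dE = P/ω, and for a small ripple dE ≈ C·Vdc·ΔV, giving C = P/(ω·Vdc·ΔV). Apart from the 15 kW rating taken from the text, the line frequency, bus voltage, and allowed ripple below are assumed example values.

```python
import math

def bulk_capacitance(p_watts, f_line_hz, v_dc, dv_ripple):
    """Passive dc-bus capacitance from C = P / (omega * Vdc * dV)."""
    omega = 2 * math.pi * f_line_hz  # line angular frequency [rad/s]
    return p_watts / (omega * v_dc * dv_ripple)

# 15 kW rectifier (from the text); 60 Hz line, 400 V bus, 20 V ripple (assumed).
C = bulk_capacitance(15e3, 60.0, 400.0, 20.0)
print(f"C = {C * 1e3:.1f} mF")
```

The resulting millifarad-scale capacitance illustrates why actively diverting the ripple energy, as the paper proposes, pays off in power density.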

  8. Assessment of nonequilibrium radiation computation methods for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Sharma, Surendra

    1993-01-01

    The present understanding of shock-layer radiation in the low density regime, as appropriate to hypersonic vehicles, is surveyed. Based on the relative importance of electron excitation and radiation transport, the hypersonic flows are divided into three groups: weakly ionized, moderately ionized, and highly ionized flows. In the light of this division, the existing laboratory and flight data are scrutinized. Finally, an assessment of the nonequilibrium radiation computation methods for the three regimes in hypersonic flows is presented. The assessment is conducted by comparing experimental data against the values predicted by the physical model.

  9. Convergence characteristics of nonlinear vortex-lattice methods for configuration aerodynamics

    NASA Technical Reports Server (NTRS)

    Seginer, A.; Rusak, Z.; Wasserstrom, E.

    1983-01-01

    Nonlinear panel methods lack a proof of the existence and uniqueness of their solutions. The convergence characteristics of an iterative, nonlinear vortex-lattice method are therefore carefully investigated. The effects of several parameters on the computed aerodynamic coefficients and on the flow-field details are presented, including (1) the surface-paneling method, (2) the integration method for the trajectories of the wake vortices, (3) vortex-grid refinement, and (4) the initial conditions for the first iteration. The convergence of the iterative-solution procedure is usually rapid. The solution converges with grid refinement to a constant value, but the final value is not unique and varies with the wing surface-paneling and wake-discretization methods within some range in the vicinity of the experimental result.

  10. Experimental Verification of a Dynamic Voltage Restorer Capable of Significantly Reducing an Energy-Storage Element

    NASA Astrophysics Data System (ADS)

    Jimichi, Takushi; Fujita, Hideaki; Akagi, Hirofumi

    This paper deals with a dynamic voltage restorer (DVR) characterized by installing the shunt converter at the load side. The DVR can compensate for the load voltage when a voltage sag appears in the supply voltage. An existing DVR requires a large capacitor bank or other energy-storage elements such as double-layer capacitors or batteries. The DVR presented in this paper requires only a small dc capacitor intended for smoothing the dc-link voltage. Moreover, three control methods for the series converter are compared and discussed to reduce the series-converter rating, paying attention to the zero-sequence voltages included in the supply voltage and the compensating voltage. Experimental results obtained from a 200-V, 5-kW laboratory system are shown to verify the viability of the system configuration and the control methods.

  11. Classification of bladder cancer cell lines using Raman spectroscopy: a comparison of excitation wavelength, sample substrate and statistical algorithms

    NASA Astrophysics Data System (ADS)

    Kerr, Laura T.; Adams, Aine; O'Dea, Shirley; Domijan, Katarina; Cullen, Ivor; Hennelly, Bryan M.

    2014-05-01

    Raman microspectroscopy can be applied to the urinary bladder for highly accurate classification and diagnosis of bladder cancer. This technique can be applied in vitro to bladder epithelial cells obtained from urine cytology, or in vivo as an "optical biopsy" to provide results in real time with higher sensitivity and specificity than current clinical methods. However, there exists a high degree of variability across experimental parameters, which needs to be standardised before this technique can be utilized in an everyday clinical environment. In this study, we investigate different laser wavelengths (473 nm and 532 nm), sample substrates (glass, fused silica and calcium fluoride) and multivariate statistical methods in order to gain insight into how these various experimental parameters impact the sensitivity and specificity of Raman cytology.

  12. PDB_REDO: automated re-refinement of X-ray structure models in the PDB.

    PubMed

    Joosten, Robbie P; Salzemann, Jean; Bloch, Vincent; Stockinger, Heinz; Berglund, Ann-Charlott; Blanchet, Christophe; Bongcam-Rudloff, Erik; Combet, Christophe; Da Costa, Ana L; Deleage, Gilbert; Diarena, Matteo; Fabbretti, Roberto; Fettahi, Géraldine; Flegel, Volker; Gisel, Andreas; Kasam, Vinod; Kervinen, Timo; Korpelainen, Eija; Mattila, Kimmo; Pagni, Marco; Reichstadt, Matthieu; Breton, Vincent; Tickle, Ian J; Vriend, Gert

    2009-06-01

    Structural biology, homology modelling and rational drug design require accurate three-dimensional macromolecular coordinates. However, the coordinates in the Protein Data Bank (PDB) have not all been obtained using the latest experimental and computational methods. In this study a method is presented for automated re-refinement of existing structure models in the PDB. A large-scale benchmark with 16 807 PDB entries showed that they can be improved in terms of fit to the deposited experimental X-ray data as well as in terms of geometric quality. The re-refinement protocol uses TLS models to describe concerted atom movement. The resulting structure models are made available through the PDB_REDO databank (http://www.cmbi.ru.nl/pdb_redo/). Grid computing techniques were used to overcome the computational requirements of this endeavour.

  13. Calculated Low-Speed Steady and Time-Dependent Aerodynamic Derivatives for Several Different Wings Using a Discrete Vortex Method

    NASA Technical Reports Server (NTRS)

    Riley, Donald R.

    2016-01-01

    Numerical values for several aerodynamic terms and stability derivatives for several different wings in unseparated, inviscid, incompressible flow were calculated using a discrete vortex method involving a limited number of horseshoe vortices. Both longitudinal and lateral-directional derivatives were calculated for steady conditions as well as for sinusoidal oscillatory motions. Variables included the number of vortices used and the chordwise location of the rotation axis/moment center. Frequencies considered were limited to the range of interest for vehicle dynamic stability (kb < 0.24). Comparisons of calculated results with experimental wind-tunnel measurements showed reasonable agreement in the low angle-of-attack range, considering the differences between the mathematical representation and the experimental wind-tunnel models tested. Of particular interest was the presence of induced drag under oscillatory conditions.

  14. Protein-protein interaction predictions using text mining methods.

    PubMed

    Papanikolaou, Nikolas; Pavlopoulos, Georgios A; Theodosiou, Theodosios; Iliopoulos, Ioannis

    2015-03-01

    It is beyond doubt that proteins and their interactions play an essential role in most complex biological processes. Understanding their function, both individually and in the form of protein complexes, is of great importance. Nowadays, despite the plethora of high-throughput experimental approaches for detecting protein-protein interactions, many computational methods aiming to predict new interactions have appeared and gained interest. In this review, we focus on text-mining-based computational methodologies that aim to extract information about proteins and their interactions from public repositories such as the literature and various biological databases. We discuss their strengths and weaknesses and how they complement existing experimental techniques, while also commenting on the biological databases that hold such information and the benchmark datasets that can be used for evaluating new tools. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Copper interstitial recombination centers in Cu3N

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yee, Ye Sheng; Inoue, Hisashi; Hultqvist, Adam

    We present a comprehensive study of the earth-abundant semiconductor Cu3N as a potential solar energy conversion material, using density functional theory and experimental methods. Density functional theory indicates that among the dominant intrinsic point defects, copper vacancies (V_Cu) have shallow defect levels while copper interstitials (Cu_i) behave as deep potential wells in the conduction band which mediate Shockley-Read-Hall recombination. The existence of Cu_i defects has been experimentally verified using photothermal deflection spectroscopy. A Cu3N/ZnS heterojunction diode with good current-voltage rectification behavior has been demonstrated experimentally, but no photocurrent is generated under illumination. Finally, the absence of photocurrent can be explained by a large concentration of Cu_i recombination centers capturing electrons in p-type Cu3N.
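
    The Shockley-Read-Hall picture invoked here has a standard closed form for the recombination rate through a single trap level. A toy sketch (all material parameters are illustrative placeholders, not measured values for Cu3N):

```python
import math

KT = 0.0259  # eV, thermal energy at room temperature

def srh_rate(n, p, ni, et_minus_ei, tau_n, tau_p):
    """Shockley-Read-Hall recombination rate [cm^-3 s^-1] through a trap
    displaced et_minus_ei [eV] from the intrinsic level Ei."""
    n1 = ni * math.exp(et_minus_ei / KT)
    p1 = ni * math.exp(-et_minus_ei / KT)
    return (n * p - ni ** 2) / (tau_p * (n + n1) + tau_n * (p + p1))

# illustrative excess-carrier condition: a deep trap (Et = Ei) recombines
# far more efficiently than a shallow one (Et - Ei = 0.3 eV)
r_deep = srh_rate(1e14, 1e14, 1e10, 0.0, 1e-6, 1e-6)
r_shallow = srh_rate(1e14, 1e14, 1e10, 0.3, 1e-6, 1e-6)
```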

  16. INSTRUMENTS AND METHODS OF INVESTIGATION: Experimental investigation of the thermal properties of carbon at high temperatures and moderate pressures

    NASA Astrophysics Data System (ADS)

    Asinovskii, Erik I.; Kirillin, Alexander V.; Kostanovskii, Alexander V.

    2002-08-01

    A consistent procedure for plotting the carbon melting and boiling coexistence curves, based on published data and the authors' experimental results, is proposed. The parameters of the triple point are predicted to differ markedly from the currently accepted values: p_t ≈ 1 bar and T_t ≈ 4000 K. Two types of experimental facilities were used, with laser heating of samples in one and direct ohmic heating in the other. The existence of a carbyne region (a stable linear polymer consisting of carbon atoms) in the carbon phase diagram is discussed. Results on the direct solid-phase graphite-carbyne transition are presented, and this transition is shown to occur under certain conditions in the form of a thermal explosion.

  17. Copper interstitial recombination centers in Cu3N

    DOE PAGES

    Yee, Ye Sheng; Inoue, Hisashi; Hultqvist, Adam; ...

    2018-06-04

    We present a comprehensive study of the earth-abundant semiconductor Cu3N as a potential solar energy conversion material, using density functional theory and experimental methods. Density functional theory indicates that among the dominant intrinsic point defects, copper vacancies (V_Cu) have shallow defect levels while copper interstitials (Cu_i) behave as deep potential wells in the conduction band which mediate Shockley-Read-Hall recombination. The existence of Cu_i defects has been experimentally verified using photothermal deflection spectroscopy. A Cu3N/ZnS heterojunction diode with good current-voltage rectification behavior has been demonstrated experimentally, but no photocurrent is generated under illumination. Finally, the absence of photocurrent can be explained by a large concentration of Cu_i recombination centers capturing electrons in p-type Cu3N.

  18. BGFit: management and automated fitting of biological growth curves.

    PubMed

    Veríssimo, André; Paixão, Laura; Neves, Ana Rute; Vinga, Susana

    2013-09-25

    Existing tools for modelling cell growth curves do not offer a flexible, integrative approach to managing large datasets and automatically estimating parameters. With the increase of experimental time-series from microbiology and oncology, software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters in an efficient way is crucial. BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, and the results can be managed efficiently in a structured and hierarchical way. The data management system allows users to organize projects, experiments and measurement data, and to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic and Live Cell Fraction models, and the user can easily add new models, thus expanding the current set. BGFit allows users to manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software with existing tools and methods, allowing different data modeling techniques to be compared and evaluated. The application is described in the context of fitting bacterial and tumor cell growth data, but it is applicable to any type of two-dimensional data, e.g. physical chemistry and macroeconomic time series, and is fully scalable to a large number of projects, data and model complexity.
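
    Of the growth models listed, the Gompertz curve is among the most widely used; in Zwietering's parameterization it reads y(t) = A·exp(−exp(μe/A·(λ − t) + 1)). A minimal least-squares sketch (synthetic data and a toy objective, not BGFit's actual fitting code):

```python
import math

def gompertz(t, A, mu, lam):
    """Zwietering-parameterized Gompertz growth curve:
    A = asymptote, mu = maximum growth rate, lam = lag time."""
    return A * math.exp(-math.exp(mu * math.e / A * (lam - t) + 1.0))

def sse(params, data):
    """Sum of squared residuals of the model against (t, y) pairs."""
    A, mu, lam = params
    return sum((y - gompertz(t, A, mu, lam)) ** 2 for t, y in data)

true = (2.0, 0.5, 3.0)
data = [(t, gompertz(t, *true)) for t in range(20)]

print(sse(true, data))                  # 0.0 at the generating parameters
worse = sse((1.8, 0.5, 3.0), data)      # any perturbation raises the objective
```

A real fitter would minimize `sse` over the three parameters (BGFit automates this across many curves at once).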

  19. Development of p-y curves of laterally loaded piles in cohesionless soil.

    PubMed

    Khari, Mahdy; Kassim, Khairul Anuar; Adnan, Azlan

    2014-01-01

    Research on damage to structures supported by deep foundations has been quite intensive in the past decade. Kinematic soil-pile interaction is evaluated based on the p-y curve approach. Existing p-y curves consider the effects of relative density on soil-pile interaction in sandy soil, but the influence of pile wall roughness on p-y curves has not been sufficiently emphasized. The present study was performed to develop a series of p-y curves for single piles through comprehensive experimental investigations. Modification factors were studied, namely the effects of relative density and of the roughness of the pile wall surface. The model tests were subjected to lateral load in Johor Bahru sand. The new p-y curves were evaluated against the experimental data and compared to the existing p-y curves. The soil-pile reaction for various relative densities (from 30% to 75%) increased in the range of 40-95% for a smooth pile at small displacement and 90% at large displacement. For a rough pile, the ratio of dense to loose soil-pile reaction ranged from 2.0 to 3.0 from small to large displacement. Direct comparison of the developed p-y curves shows significant differences in magnitude and shape from the existing load-transfer curves. Good agreement with experimental and design studies demonstrates the multidisciplinary applications of the present method.
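
    Existing p-y curves for sand, such as the widely used API RP 2A hyperbolic-tangent form that studies like this one are compared against, can be sketched as follows (the parameter values are illustrative only, not from the tests described):

```python
import math

def py_api_sand(y, pu, k, z, A=0.9):
    """API RP 2A-style p-y curve for sand: p = A*pu*tanh(k*z*y / (A*pu)).
    y: lateral deflection [m], pu: ultimate resistance [kN/m],
    k: initial modulus of subgrade reaction [kN/m^3], z: depth [m]."""
    return A * pu * math.tanh(k * z * y / (A * pu))

# illustrative pile section at 2 m depth
pu, k, z = 150.0, 20000.0, 2.0
ys = [0.001 * i for i in range(51)]
ps = [py_api_sand(y, pu, k, z) for y in ys]
# resistance rises monotonically and saturates below A*pu
```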

  20. Development of p-y Curves of Laterally Loaded Piles in Cohesionless Soil

    PubMed Central

    Khari, Mahdy; Kassim, Khairul Anuar; Adnan, Azlan

    2014-01-01

    Research on damage to structures supported by deep foundations has been quite intensive in the past decade. Kinematic soil-pile interaction is evaluated based on the p-y curve approach. Existing p-y curves consider the effects of relative density on soil-pile interaction in sandy soil, but the influence of pile wall roughness on p-y curves has not been sufficiently emphasized. The present study was performed to develop a series of p-y curves for single piles through comprehensive experimental investigations. Modification factors were studied, namely the effects of relative density and of the roughness of the pile wall surface. The model tests were subjected to lateral load in Johor Bahru sand. The new p-y curves were evaluated against the experimental data and compared to the existing p-y curves. The soil-pile reaction for various relative densities (from 30% to 75%) increased in the range of 40–95% for a smooth pile at small displacement and 90% at large displacement. For a rough pile, the ratio of dense to loose soil-pile reaction ranged from 2.0 to 3.0 from small to large displacement. Direct comparison of the developed p-y curves shows significant differences in magnitude and shape from the existing load-transfer curves. Good agreement with experimental and design studies demonstrates the multidisciplinary applications of the present method. PMID:24574932

  1. Towards Large-Scale, Non-Destructive Inspection of Concrete Bridges

    NASA Astrophysics Data System (ADS)

    Mahmoud, A.; Shah, A. H.; Popplewell, N.

    2005-04-01

    It is estimated that rehabilitating deteriorating engineering infrastructure in the harsh North American environment could cost billions of dollars. Bridges are key infrastructure components for surface transportation. Steel-free and fibre-reinforced concrete is increasingly used to circumvent the vulnerability of steel rebar to corrosion. Existing steel-free and fibre-reinforced bridges may develop extensive surface-breaking cracks that need to be characterized without incurring further damage. In the present study, a method that uses Lamb elastic wave propagation to non-destructively characterize cracks in plain as well as fibre-reinforced concrete is investigated both numerically and experimentally. Numerical and experimental data are corroborated with good agreement.

  2. FOR LOVE OR REWARD? CHARACTERISING PREFERENCES FOR GIVING TO PARENTS IN AN EXPERIMENTAL SETTING*

    PubMed Central

    Porter, Maria; Adams, Abi

    2017-01-01

    Understanding the motivations behind intergenerational transfers is an important and active research area in economics. The existence and responsiveness of familial transfers have consequences for the design of intra- and intergenerational redistributive programmes, particularly as such programmes may crowd out private transfers amongst altruistic family members. Yet, despite theoretical and empirical advances in this area, significant gaps in our knowledge remain. In this article, we advance the current literature by shedding light on both the motivation for providing intergenerational transfers and the nature of preferences for such giving behaviour, using experimental techniques and revealed preference methods. PMID:29151611

  3. Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: an SPSS method to analyze univariate data.

    PubMed

    Maric, Marija; de Haan, Else; Hogendoorn, Sanne M; Wolters, Lidewij H; Huizenga, Hilde M

    2015-03-01

    Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty of applying existing statistical procedures. In this article, we describe a data-analytic method to analyze univariate (i.e., one symptom) single-case data using the common package SPSS. This method can help the clinical researcher to investigate whether an intervention works as compared with a baseline period or another intervention type, and to determine whether symptom improvement is clinically significant. First, we describe the statistical method in a conceptual way and show how it can be implemented in SPSS. Simulation studies were performed to determine the number of observation points required per intervention phase. Second, to illustrate this method and its implications, we present a case study of an adolescent with anxiety disorders treated with cognitive-behavioral therapy techniques in an outpatient psychotherapy clinic, whose symptoms were regularly assessed before each session. We provide a description of the data analyses and results of this case study. Finally, we discuss the advantages and shortcomings of the proposed method. Copyright © 2014. Published by Elsevier Ltd.

  4. Retrospective estimation of the electric and magnetic field exposure conditions in in vitro experimental reports reveal considerable potential for uncertainty.

    PubMed

    Portelli, Lucas A; Falldorf, Karsten; Thuróczy, György; Cuppen, Jan

    2018-04-01

    Experiments on cell cultures exposed to extremely low frequency (ELF, 3-300 Hz) magnetic fields are often subject to multiple sources of uncertainty associated with the specific electric and magnetic field exposure conditions. Here we systematically quantify these uncertainties based on exposure conditions described in a group of bioelectromagnetic experimental reports for a representative sampling of the existing literature. The resulting uncertainties, stemming from insufficient, ambiguous, or erroneous description, design, implementation, or validation of the experimental methods and systems, were often substantial enough to potentially make any successful reproduction of the original experimental conditions difficult or impossible. Without making any assumption about the true biological relevance of ELF electric and magnetic fields, these findings suggest another contributing factor which may add to the overall variability and irreproducibility traditionally associated with experimental results of in vitro exposures to low-level ELF magnetic fields. Bioelectromagnetics. 39:231-243, 2018. © 2017 Wiley Periodicals, Inc.

  5. Determination of wind tunnel constraint effects by a unified pressure signature method. Part 2: Application to jet-in-crossflow

    NASA Technical Reports Server (NTRS)

    Hackett, J. E.; Sampath, S.; Phillips, C. G.

    1981-01-01

    The development of an improved jet-in-crossflow model for estimating wind tunnel blockage and angle-of-attack interference is described. Experiments showed that the simpler existing models fall seriously short of representing far-field flows properly. A new, vortex-source-doublet (VSD) model was therefore developed which employs curved trajectories and experimentally-based singularity strengths. The new model is consistent with existing and new experimental data and it predicts tunnel wall (i.e. far-field) pressures properly. It is implemented as a preprocessor to the wall-pressure-signature-based tunnel interference predictor. The supporting experiments and theoretical studies revealed some new results. Comparative flow field measurements with 1-inch "free-air" and 3-inch impinging jets showed that vortex penetration into the flow, in diameters, was almost unaltered until 'hard' impingement occurred. In modeling impinging cases, a 'plume redirection' term was introduced which is apparently absent in previous models. The effects of this term were found to be very significant.

  6. Comparison of public peak detection algorithms for MALDI mass spectrometry data analysis.

    PubMed

    Yang, Chao; He, Zengyou; Yu, Weichuan

    2009-01-06

    In mass spectrometry (MS) based proteomic data analysis, peak detection is an essential step for subsequent analysis. Recently, there has been significant progress in the development of various peak detection algorithms. However, neither a comprehensive survey nor an experimental comparison of these algorithms is yet available. The main objective of this paper is to provide such a survey and to compare the performance of single-spectrum-based peak detection methods. In general, a peak detection procedure can be decomposed into three consecutive steps: smoothing, baseline correction and peak finding. We first categorize existing peak detection algorithms according to the techniques used in the different phases. Such a categorization reveals the differences and similarities among existing peak detection algorithms. Then, we choose five typical peak detection algorithms to conduct a comprehensive experimental study using both simulated data and real MALDI MS data. The comparison shows that the continuous wavelet-based algorithm provides the best average performance.
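
    The three-step decomposition (smoothing, baseline correction, peak finding) can be illustrated with a deliberately naive pipeline: moving-average smoothing, minimum-subtraction baseline, and local-maximum peak picking. Real algorithms such as the wavelet-based one are far more sophisticated; this is only a sketch on made-up intensities:

```python
def smooth(signal, w=3):
    """Moving-average smoothing with an odd window width w."""
    h = w // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - h):i + h + 1]
        out.append(sum(window) / len(window))
    return out

def correct_baseline(signal):
    """Crude baseline correction: subtract the global minimum."""
    b = min(signal)
    return [x - b for x in signal]

def find_peaks(signal, threshold):
    """Indices of local maxima strictly above threshold."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]]

spectrum = [1, 1, 2, 6, 2, 1, 1, 5, 9, 5, 1, 1]   # toy intensities
peaks = find_peaks(correct_baseline(smooth(spectrum)), 2.0)
print(peaks)  # → [3, 8]
```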

  7. Grain boundary phase transformations in PtAu and relevance to thermal stabilization of bulk nanocrystalline metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Brien, C. J.; Barr, C. M.; Price, P. M.

    There has recently been a great deal of interest in employing immiscible solutes to stabilize nanocrystalline microstructures. Existing modeling efforts largely rely on mesoscale Monte Carlo approaches that employ a simplified model of the microstructure and result in highly homogeneous segregation to grain boundaries. However, there is ample evidence from experimental and modeling studies that segregation to grain boundaries is highly non-uniform and sensitive to boundary character. This work employs a realistic nanocrystalline microstructure with experimentally relevant global solute concentrations to illustrate inhomogeneous boundary segregation. Furthermore, experiments quantifying segregation in thin films are reported that corroborate the prediction that grain boundary segregation is highly inhomogeneous. In addition to grain boundary structure modifying the degree of segregation, the existence of a phase transformation between low and high solute content grain boundaries is predicted. In order to conduct this study, new embedded atom method interatomic potentials are developed for Pt, Au, and the PtAu binary alloy.

  8. Experimental evidence for simultaneous relaxation processes in super spin glass γ-Fe2O3 nanoparticle system

    NASA Astrophysics Data System (ADS)

    Nikolic, V.; Perovic, M.; Kusigerski, V.; Boskovic, M.; Mrakovic, A.; Blanusa, J.; Spasojevic, V.

    2015-03-01

    Spherical γ-Fe2O3 nanoparticles with a narrow size distribution of (5 ± 1) nm were synthesized by thermal decomposition of an iron acetylacetonate precursor. The existence of a super-spin-glass state at low temperatures and in low applied magnetic fields was confirmed by DC magnetization measurements on a SQUID magnetometer. The magnetic relaxation dynamics in the low-temperature region were investigated comprehensively through measurements of single-stop and multiple-stop ZFC memory effects, ZFC magnetization relaxation, and AC susceptibility. The experimental findings revealed a peculiar change of the magnetic relaxation dynamics at T ≈ 10 K, which arose as a consequence of the simultaneous existence of different relaxation processes in the γ-Fe2O3 nanoparticle system. The complementarity of the applied measurements was utilized to single out distinct relaxation processes and to elucidate the complex relaxation mechanisms in the investigated interacting nanoparticle system.

  9. A critical survey of methods to detect plasma membrane rafts

    PubMed Central

    Klotzsch, Enrico; Schütz, Gerhard J.

    2013-01-01

    The plasma membrane is still one of the most enigmatic cellular structures. Although the microscopic structure is getting clearer, not much is known about its organization at the nanometre level. Experimental difficulties have precluded unambiguous approaches, making the current picture rather fuzzy. In consequence, a variety of different membrane models has been proposed over the years, on the basis of different experimental strategies. Recent data obtained via high-resolution single-molecule microscopy shed new light on the existing hypotheses. We thus think it is a good time to review the consistency of the existing models with the new data. In this paper, we summarize the available models in ten propositions, each of which is discussed critically with respect to the applied technologies and the strengths and weaknesses of the approaches. Our aim is to provide the reader with a sound basis for his or her own assessment. We close by presenting our picture of membrane organization at the nanoscale. PMID:23267184

  10. Grain boundary phase transformations in PtAu and relevance to thermal stabilization of bulk nanocrystalline metals

    DOE PAGES

    O’Brien, C. J.; Barr, C. M.; Price, P. M.; ...

    2017-10-31

    There has recently been a great deal of interest in employing immiscible solutes to stabilize nanocrystalline microstructures. Existing modeling efforts largely rely on mesoscale Monte Carlo approaches that employ a simplified model of the microstructure and result in highly homogeneous segregation to grain boundaries. However, there is ample evidence from experimental and modeling studies that segregation to grain boundaries is highly non-uniform and sensitive to boundary character. This work employs a realistic nanocrystalline microstructure with experimentally relevant global solute concentrations to illustrate inhomogeneous boundary segregation. Furthermore, experiments quantifying segregation in thin films are reported that corroborate the prediction that grain boundary segregation is highly inhomogeneous. In addition to grain boundary structure modifying the degree of segregation, the existence of a phase transformation between low and high solute content grain boundaries is predicted. In order to conduct this study, new embedded atom method interatomic potentials are developed for Pt, Au, and the PtAu binary alloy.

  11. An Investigation of G-Quadruplex Structural Polymorphism in the Human Telomere Using a Combined Approach of Hydrodynamic Bead Modeling and Molecular Dynamics Simulation

    PubMed Central

    2015-01-01

    Guanine-rich oligonucleotides can adopt noncanonical tertiary structures known as G-quadruplexes, which can exist in different forms depending on experimental conditions. High-resolution structural methods, such as X-ray crystallography and NMR spectroscopy, have been of limited usefulness in resolving the inherent structural polymorphism associated with G-quadruplex formation. The lack of, or the ambiguous nature of, currently available high-resolution structural data, in turn, has severely hindered investigations into the nature of these structures and their interactions with small-molecule inhibitors. We have used molecular dynamics in conjunction with hydrodynamic bead modeling to study the structures of the human telomeric G-quadruplex-forming sequences at the atomic level. We demonstrated that molecular dynamics can reproduce experimental hydrodynamic measurements and thus can be a powerful tool in the structural study of existing G-quadruplex sequences or in the prediction of new G-quadruplex structures. PMID:24779348

  12. Non-imaged based method for matching brains in a common anatomical space for cellular imagery.

    PubMed

    Midroit, Maëllie; Thevenet, Marc; Fournel, Arnaud; Sacquet, Joelle; Bensafi, Moustafa; Breton, Marine; Chalençon, Laura; Cavelius, Matthias; Didier, Anne; Mandairon, Nathalie

    2018-04-22

    Cellular imagery using histology sections is one of the most common techniques in neuroscience. However, this indispensable technique has severe limitations due to the need to delineate regions of interest on each brain, which is time-consuming and variable across experimenters. We developed algorithms based on vector-field elastic registration that allow fast, automatic realignment of experimental brain sections, and of their associated labeling, into a brain atlas with high accuracy and in a streamlined way. Thereby, brain areas of interest can be precisely identified without outlining them, and different experimental groups can be easily analyzed using conventional tools. This method directly readjusts labeling in the brain atlas without any intermediate manipulation of images. We mapped the expression of cFos in the mouse brain (C57Bl/6J) after olfactory stimulation or a non-stimulated control condition and found an increased density of cFos-positive cells in the primary olfactory cortex, but not in non-olfactory areas, of the odor-stimulated animals compared to the controls. Existing matching methods are based on image registration, which often requires expensive equipment (two-photon tomography mapping or imaging with iDISCO) or is less accurate since it relies on the mutual information contained in the images. Our new method is not image-based and relies only on the positions of detected labeling and the external contours of the sections. We thus provide a new method that permits automated matching of histology sections of experimental brains with a brain reference atlas. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Beyond existence and aiming outside the laboratory: estimating frequency-dependent and pay-off-biased social learning strategies.

    PubMed

    McElreath, Richard; Bell, Adrian V; Efferson, Charles; Lubell, Mark; Richerson, Peter J; Waring, Timothy

    2008-11-12

    The existence of social learning has been confirmed in diverse taxa, from apes to guppies. In order to advance our understanding of the consequences of social transmission and evolution of behaviour, however, we require statistical tools that can distinguish among diverse social learning strategies. In this paper, we advance two main ideas. First, social learning is diverse, in the sense that individuals can take advantage of different kinds of information and combine them in different ways. Examining learning strategies for different information conditions illuminates the more detailed design of social learning. We construct and analyse an evolutionary model of diverse social learning heuristics, in order to generate predictions and illustrate the impact of design differences on an organism's fitness. Second, in order to eventually escape the laboratory and apply social learning models to natural behaviour, we require statistical methods that do not depend upon tight experimental control. Therefore, we examine strategic social learning in an experimental setting in which the social information itself is endogenous to the experimental group, as it is in natural settings. We develop statistical models for distinguishing among different strategic uses of social information. The experimental data strongly suggest that most participants employ a hierarchical strategy that uses both average observed pay-offs of options as well as frequency information, the same model predicted by our evolutionary analysis to dominate a wide range of conditions.
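
    A common way to formalize the frequency-dependent (conformist) strategies estimated in such studies is the choice rule p_i = n_i^θ / Σ_j n_j^θ, where θ > 1 exaggerates the majority option. A small sketch (the demonstrator counts and θ values are illustrative):

```python
def conformist_probs(counts, theta):
    """Probability of adopting each option when copying demonstrators
    with frequency bias theta (theta = 1: unbiased; theta > 1: conformist)."""
    weights = [c ** theta for c in counts]
    total = sum(weights)
    return [w / total for w in weights]

counts = [6, 3, 1]                           # observed demonstrators per option
unbiased = conformist_probs(counts, 1.0)     # copies frequencies: [0.6, 0.3, 0.1]
conformist = conformist_probs(counts, 3.0)   # majority amplified above 0.6
```

Fitting θ (together with a pay-off-bias term) to choice data is, in essence, what the statistical models in the paper do.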

  14. Predicting protein complexes from weighted protein-protein interaction graphs with a novel unsupervised methodology: Evolutionary enhanced Markov clustering.

    PubMed

    Theofilatos, Konstantinos; Pavlopoulou, Niki; Papasavvas, Christoforos; Likothanassis, Spiros; Dimitrakopoulos, Christos; Georgopoulos, Efstratios; Moschopoulos, Charalampos; Mavroudi, Seferina

    2015-03-01

    Proteins are considered to be the most important individual components of biological systems, and they combine to form physical protein complexes which are responsible for certain molecular functions. Despite the wide availability of protein-protein interaction (PPI) information, not much is known about protein complexes. Experimental methods are limited in terms of time, efficiency, cost and performance constraints. Existing computational methods have provided encouraging preliminary results, but they face certain disadvantages: they require parameter tuning, some of them cannot handle weighted PPI data, and others do not allow a protein to participate in more than one protein complex. In the present paper, we propose a new, fully unsupervised methodology for predicting protein complexes from weighted PPI graphs. The proposed methodology is called evolutionary enhanced Markov clustering (EE-MC) and is a hybrid combination of an adaptive evolutionary algorithm and a state-of-the-art clustering algorithm named enhanced Markov clustering. EE-MC was compared with state-of-the-art methodologies when applied to datasets from human and the yeast Saccharomyces cerevisiae. Using publicly available datasets, EE-MC outperformed existing methodologies (in some datasets the separation metric was increased by 10-20%). Moreover, when applied to new human datasets, its performance was encouraging in the prediction of protein complexes consisting of proteins with high functional similarity. Specifically, 5737 protein complexes were predicted, and 72.58% of them are enriched for at least one gene ontology (GO) function term. EE-MC is by design able to overcome intrinsic limitations of existing methodologies, such as their inability to handle weighted PPI networks, their constraint of assigning every protein to exactly one cluster, and the difficulties they face concerning parameter tuning. This was experimentally validated, and moreover, new potentially true human protein complexes were suggested as candidates for further validation using experimental techniques. Copyright © 2015 Elsevier B.V. All rights reserved.
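
    The enhanced Markov clustering at the heart of EE-MC builds on plain Markov clustering (MCL), which alternates matrix expansion with entrywise inflation until the flow matrix settles into clusters. A bare-bones MCL sketch on a toy weighted graph (this omits EE-MC's evolutionary parameter tuning entirely):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def normalize_cols(M):
    n = len(M)
    for j in range(n):
        s = sum(M[i][j] for i in range(n))
        for i in range(n):
            M[i][j] /= s
    return M

def mcl(adj, inflation=2.0, iters=25):
    """Plain Markov clustering on a weighted adjacency matrix."""
    n = len(adj)
    M = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]                                # add self-loops
    normalize_cols(M)
    for _ in range(iters):
        M = matmul(M, M)                                   # expansion
        M = [[x ** inflation for x in row] for row in M]   # inflation
        normalize_cols(M)
    # each surviving (attractor) row spans one cluster of columns
    clusters = {frozenset(j for j in range(n) if M[i][j] > 1e-3)
                for i in range(n)}
    clusters.discard(frozenset())
    return sorted(sorted(c) for c in clusters)

# two weight-1 triangles joined by a weak (0.1) edge between nodes 2 and 3
w = [[0, 1, 1, 0,   0, 0],
     [1, 0, 1, 0,   0, 0],
     [1, 1, 0, 0.1, 0, 0],
     [0, 0, 0.1, 0, 1, 1],
     [0, 0, 0,   1, 0, 1],
     [0, 0, 0,   1, 1, 0]]
clusters = mcl(w)
```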

  15. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    PubMed

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
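
    The core idea, component densities whose means are pinned to an ODE solution rather than being free parameters, can be sketched in a few lines (a single linear activation ODE and Gaussian noise are assumed here purely for illustration):

```python
import math

def euler(k, A, t_end, dt=0.001, x0=0.0):
    """Forward-Euler integration of the activation ODE dx/dt = k * (A - x)."""
    x = x0
    for _ in range(int(round(t_end / dt))):
        x += dt * k * (A - x)
    return x

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def mixture_pdf(x, t, weights, ks, A=1.0, sigma=0.05):
    """ODE-constrained mixture: each component mean is the ODE solution
    for that subpopulation's rate constant, not a free parameter."""
    return sum(w * gauss(x, euler(k, A, t), sigma) for w, k in zip(weights, ks))

# two subpopulations, fast (k = 2.0) and slow (k = 0.2) responders, 60/40 split
density_near_fast = mixture_pdf(0.86, 1.0, [0.6, 0.4], [2.0, 0.2])
density_between = mixture_pdf(0.50, 1.0, [0.6, 0.4], [2.0, 0.2])
```

In the actual method the weights, rate constants and noise are estimated jointly by maximizing this kind of likelihood across experimental conditions.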

  16. Comparison of competing segmentation standards for X-ray computed tomographic imaging using Lattice Boltzmann techniques

    NASA Astrophysics Data System (ADS)

    Larsen, J. D.; Schaap, M. G.

    2013-12-01

    Recent advances in computing technology and experimental techniques have made it possible to observe and characterize fluid dynamics at the micro-scale. Many computational methods exist that can adequately simulate fluid flow in porous media. Lattice Boltzmann methods provide the distinct advantage of tracking particles at the microscopic level and returning macroscopic observations. While experimental methods can accurately measure macroscopic fluid dynamics, computational efforts can be used to predict and gain insight into fluid dynamics by utilizing thin sections or computed micro-tomography (CMT) images of core sections. Although substantial efforts have been made to advance non-invasive imaging methods such as CMT, fluid dynamics simulations, and microscale analysis, a true three-dimensional image segmentation technique has not been developed until recently. Many competing segmentation techniques are utilized in industry and research settings with varying results. In this study, the lattice Boltzmann method is used to simulate Stokes flow in a macroporous soil column. Two-dimensional CMT images were used to reconstruct a three-dimensional representation of the original sample. Six competing segmentation standards were used to binarize the CMT volumes, providing the distinction between solid phase and pore space. The permeability of the reconstructed samples was calculated, with Darcy's law, from lattice Boltzmann simulations of fluid flow in the samples. We compare simulated permeability from differing segmentation algorithms to experimental findings.
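    The permeability step in this workflow is a direct application of Darcy's law, k = QμL/(AΔP), with the volumetric flow rate Q taken from the flow simulation. A minimal sketch with illustrative SI values (not data from the study):

```python
def darcy_permeability(flow_rate, viscosity, length, area, pressure_drop):
    """Darcy's law: k = Q * mu * L / (A * dP), in consistent SI units."""
    return flow_rate * viscosity * length / (area * pressure_drop)

# Illustrative numbers for a small water-saturated soil core.
k = darcy_permeability(flow_rate=1e-8,      # m^3/s
                       viscosity=1e-3,      # Pa.s (water)
                       length=0.05,         # m
                       area=1e-4,           # m^2
                       pressure_drop=5e3)   # Pa
```

    Consistent SI units give k in m²; any unit system works as long as it is used consistently throughout.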

  17. Panel cutting method: new approach to generate panels on a hull in Rankine source potential approximation

    NASA Astrophysics Data System (ADS)

    Choi, Hee-Jong; Chun, Ho-Hwan; Park, Il-Ryong; Kim, Jin

    2011-12-01

    In the present study, a new hull panel generation algorithm, namely panel cutting method, was developed to predict flow phenomena around a ship using the Rankine source potential based panel method, where the iterative method was used to satisfy the nonlinear free surface condition and the trim and sinkage of the ship was taken into account. Numerical computations were performed to investigate the validity of the proposed hull panel generation algorithm for Series 60 (CB=0.60) hull and KRISO container ship (KCS), a container ship designed by Maritime and Ocean Engineering Research Institute (MOERI). The computational results were validated by comparing with the existing experimental data.

  18. Real-Space Analysis of Scanning Tunneling Microscopy Topography Datasets Using Sparse Modeling Approach

    NASA Astrophysics Data System (ADS)

    Miyama, Masamichi J.; Hukushima, Koji

    2018-04-01

    A sparse modeling approach is proposed for analyzing scanning tunneling microscopy topography data, which contain numerous peaks originating from the electron density of surface atoms and/or impurities. The method, based on the relevance vector machine with L1 regularization and k-means clustering, enables separation of the peaks and peak center positioning with accuracy beyond the resolution of the measurement grid. The validity and efficiency of the proposed method are demonstrated using synthetic data in comparison with the conventional least-squares method. An application of the proposed method to experimental data of a metallic oxide thin-film clearly indicates the existence of defects and corresponding local lattice distortions.

  19. Image steganalysis using Artificial Bee Colony algorithm

    NASA Astrophysics Data System (ADS)

    Sajedi, Hedieh

    2017-09-01

    Steganography is the science of secure communication where the presence of the communication cannot be detected, while steganalysis is the art of discovering the existence of the secret communication. Processing a huge amount of information takes extensive execution time and computational resources. As a result, a preprocessing phase is needed that can moderate the execution time and computational resources. In this paper, we propose a new feature-based blind steganalysis method for detecting stego images from cover (clean) images in JPEG format. In this regard, we present a feature selection technique based on an improved Artificial Bee Colony (ABC) algorithm. The ABC algorithm is inspired by honeybees' social behaviour in their search for good food sources. In the proposed method, classifier performance and the dimension of the selected feature vector are assessed using wrapper-based methods. The experiments are performed using two large data-sets of JPEG images. Experimental results demonstrate the effectiveness of the proposed steganalysis technique compared to other existing techniques.

  20. Image inpainting and super-resolution using non-local recursive deep convolutional network with skip connections

    NASA Astrophysics Data System (ADS)

    Liu, Miaofeng

    2017-07-01

    In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. Unlike most former methods, which require prior knowledge of the locations of corrupted pixels, we propose a 20-layer fully convolutional network that learns an end-to-end mapping from a dataset of damaged/ground-truth subimage pairs, realizing non-local blind inpainting and super-resolution. Because images with large corruptions, and inpainting on low-resolution images, are cases in which existing approaches perform poorly, we also share parameters in local areas of layers to achieve spatial recursion and enlarge the receptive field. To ease the training of this deep network, skip connections between symmetric convolutional layers are designed. Experimental results show that the proposed method outperforms state-of-the-art methods under diverse corruption and low-resolution conditions, and it performs well when realizing super-resolution and image inpainting simultaneously.

  1. Stability basin estimates fall risk from observed kinematics, demonstrated on the Sit-to-Stand task.

    PubMed

    Shia, Victor; Moore, Talia Yuki; Holmes, Patrick; Bajcsy, Ruzena; Vasudevan, Ram

    2018-04-27

    The ability to quantitatively measure stability is essential to ensuring the safety of locomoting systems. While the response to perturbation directly reflects the stability of a motion, this experimental method puts human subjects at risk. Unfortunately, existing indirect methods for estimating stability from unperturbed motion have been shown to have limited predictive power. This paper leverages recent advances in dynamical systems theory to accurately estimate the stability of human motion without requiring perturbation. This approach relies on kinematic observations of a nominal Sit-to-Stand motion to construct an individual-specific dynamic model, input bounds, and feedback control that are then used to compute the set of perturbations from which the model can recover. This set, referred to as the stability basin, was computed for 14 individuals, and was able to successfully differentiate between less and more stable Sit-to-Stand strategies for each individual with greater accuracy than existing methods. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Minimum maximum temperature gradient coil design.

    PubMed

    While, Peter T; Poole, Michael S; Forbes, Larry K; Crozier, Stuart

    2013-08-01

    Ohmic heating is a serious problem in gradient coil operation. A method is presented for redesigning cylindrical gradient coils to operate at minimum peak temperature, while maintaining field homogeneity and coil performance. To generate these minimaxT coil windings, an existing analytic method for simulating the spatial temperature distribution of single layer gradient coils is combined with a minimax optimization routine based on sequential quadratic programming. Simulations are provided for symmetric and asymmetric gradient coils that show considerable improvements in reducing maximum temperature over existing methods. The winding patterns of the minimaxT coils were found to be heavily dependent on the assumed thermal material properties and generally display an interesting "fish-eye" spreading of windings in the dense regions of the coil. Small prototype coils were constructed and tested for experimental validation and these demonstrate that with a reasonable estimate of material properties, thermal performance can be improved considerably with negligible change to the field error or standard figures of merit. © 2012 Wiley Periodicals, Inc.

  3. Highly sensitive distributed birefringence measurements based on a two-pulse interrogation of a dynamic Brillouin grating

    NASA Astrophysics Data System (ADS)

    Soto, Marcelo A.; Denisov, Andrey; Angulo-Vinuesa, Xabier; Martin-Lopez, Sonia; Thévenaz, Luc; Gonzalez-Herraez, Miguel

    2017-04-01

    A method for distributed birefringence measurements is proposed based on the interference pattern generated by the interrogation of a dynamic Brillouin grating (DBG) using two short consecutive optical pulses. Compared to existing DBG interrogation techniques, the method offers an improved sensitivity to birefringence changes thanks to the interferometric effect generated by the reflections of the two pulses. Experimental results demonstrate the possibility to obtain the longitudinal birefringence profile of a 20 m-long Panda fibre with an accuracy of 10⁻⁸ using 16 averages and 30 cm spatial resolution. The method enables sub-metric and highly-accurate distributed temperature and strain sensing.

  4. Compressive Sensing via Nonlocal Smoothed Rank Function

    PubMed Central

    Fan, Ya-Ru; Liu, Jun; Zhao, Xi-Le

    2016-01-01

    Compressive sensing (CS) theory asserts that we can reconstruct signals and images with only a small number of samples or measurements. Recent works exploiting the nonlocal similarity have led to better results in various CS studies. To better exploit the nonlocal similarity, in this paper, we propose a non-convex smoothed rank function based model for CS image reconstruction. We also propose an efficient alternating minimization method to solve the proposed model, which reduces a difficult and coupled problem to two tractable subproblems. Experimental results have shown that the proposed method performs better than several existing state-of-the-art CS methods for image reconstruction. PMID:27583683

  5. Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force.

    PubMed

    Reed Johnson, F; Lancsar, Emily; Marshall, Deborah; Kilambi, Vikram; Mühlbacher, Axel; Regier, Dean A; Bresnahan, Brian W; Kanninen, Barbara; Bridges, John F P

    2013-01-01

    Stated-preference methods are a class of evaluation techniques for studying the preferences of patients and other stakeholders. While these methods span a variety of techniques, conjoint-analysis methods, and particularly discrete-choice experiments (DCEs), have become the most frequently applied approach in health care in recent years. Experimental design is an important stage in the development of such methods, but establishing a consensus on standards is hampered by lack of understanding of available techniques and software. This report builds on the previous ISPOR Conjoint Analysis Task Force Report: Conjoint Analysis Applications in Health-A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. This report aims to assist researchers specifically in evaluating alternative approaches to experimental design, a difficult and important element of successful DCEs. While this report does not endorse any specific approach, it does provide a guide for choosing an approach that is appropriate for a particular study. In particular, it provides an overview of the role of experimental designs for the successful implementation of the DCE approach in health care studies, and it provides researchers with an introduction to constructing experimental designs on the basis of study objectives and the statistical model researchers have selected for the study. The report outlines the theoretical requirements for designs that identify choice-model preference parameters and summarizes and compares a number of available approaches for constructing experimental designs. The task-force leadership group met via bimonthly teleconferences and in person at ISPOR meetings in the United States and Europe. An international group of experimental-design experts was consulted during this process to discuss existing approaches for experimental design and to review the task force's draft reports. 
In addition, ISPOR members contributed to developing a consensus report by submitting written comments during the review process and oral comments during two forum presentations at the ISPOR 16th and 17th Annual International Meetings held in Baltimore (2011) and Washington, DC (2012). Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  6. Effective evaluation of privacy protection techniques in visible and thermal imagery

    NASA Astrophysics Data System (ADS)

    Nawaz, Tahir; Berg, Amanda; Ferryman, James; Ahlberg, Jörgen; Felsberg, Michael

    2017-09-01

    Privacy protection may be defined as replacing the original content in an image region with (less intrusive) content whose target appearance information has been modified to make the target less recognizable. The development of privacy protection techniques needs to be complemented with an established objective evaluation method to facilitate their assessment and comparison. Generally, existing evaluation methods rely on the use of subjective judgments or assume a specific target type in image data and use target detection and recognition accuracies to assess privacy protection. An annotation-free evaluation method that is neither subjective nor assumes a specific target type is proposed. It assesses two key aspects of privacy protection: "protection" and "utility." Protection is quantified as an appearance similarity, and utility is measured as a structural similarity between original and privacy-protected image regions. We performed extensive experimentation using six challenging datasets (comprising 12 video sequences), including a new dataset (with six sequences) that contains visible and thermal imagery. The new dataset is made available online for the community. We demonstrate the effectiveness of the proposed method by evaluating six image-based privacy protection techniques and also compare the proposed method against existing methods.

  7. Matrix completion by deep matrix factorization.

    PubMed

    Fan, Jicong; Cheng, Jieyu

    2018-02-01

    Conventional methods of matrix completion are linear methods that are not effective in handling data with nonlinear structures. Recently, a few researchers have attempted to incorporate nonlinear techniques into matrix completion, but there still exist considerable limitations. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods, which are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods and is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
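    For orientation, the linear latent variable baseline that DMF generalizes can be sketched in a few lines: complete a matrix by fitting a low-rank factorization to the observed entries only. DMF replaces the linear map U @ V with a multilayer network whose latent inputs are optimized together with the weights; the rank, learning rate, and synthetic data below are illustrative:

```python
import numpy as np

def masked_mf(X, mask, rank=2, lr=0.01, steps=5000, seed=0):
    """Linear matrix completion baseline: fit X ~ U @ V using only the
    observed entries (mask == True), by full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((rank, n))
    for _ in range(steps):
        R = mask * (U @ V - X)      # residual on observed entries only
        U -= lr * R @ V.T
        V -= lr * U.T @ R
    return U @ V

# Synthetic rank-2 matrix with ~60% of entries observed.
rng = np.random.default_rng(1)
true = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
mask = rng.random(true.shape) < 0.6
recon = masked_mf(true * mask, mask, rank=2)
err = np.abs(recon - true)[~mask].mean()   # error on *missing* entries
```

    Because the synthetic matrix is exactly rank 2, the missing entries are recovered accurately; DMF targets the harder case where the underlying structure is nonlinear.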

  8. A new gradient shimming method based on undistorted field map of B0 inhomogeneity.

    PubMed

    Bao, Qingjia; Chen, Fang; Chen, Li; Song, Kan; Liu, Zao; Liu, Chaoyang

    2016-04-01

    Most existing gradient shimming methods for NMR spectrometers estimate field maps that resolve B0 inhomogeneity spatially from dual gradient-echo (GRE) images acquired at different echo times. However, the distortions induced by B0 inhomogeneity that always exists in the GRE images can result in estimated field maps that are distorted in both geometry and intensity, leading to inaccurate shimming. This work proposes a new gradient shimming method based on undistorted field map of B0 inhomogeneity obtained by a more accurate field map estimation technique. Compared to the traditional field map estimation method, this new method exploits both the positive and negative polarities of the frequency encoded gradients to eliminate the distortions caused by B0 inhomogeneity in the field map. Next, the corresponding automatic post-data procedure is introduced to obtain undistorted B0 field map based on knowledge of the invariant characteristics of the B0 inhomogeneity and the variant polarity of the encoded gradient. The experimental results on both simulated and real gradient shimming tests demonstrate the high performance of this new method. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Predicting Human Protein Subcellular Locations by the Ensemble of Multiple Predictors via Protein-Protein Interaction Network with Edge Clustering Coefficients

    PubMed Central

    Du, Pufeng; Wang, Lusheng

    2014-01-01

    One of the fundamental tasks in biology is to identify the functions of all proteins to reveal the primary machinery of a cell. Knowledge of the subcellular locations of proteins will provide key hints to reveal their functions and to understand the intricate pathways that regulate biological processes at the cellular level. Protein subcellular location prediction has been extensively studied in the past two decades, and many methods have been developed based on protein primary sequences as well as protein-protein interaction networks. In this paper, we propose to use the protein-protein interaction network as an infrastructure to integrate existing sequence-based predictors. When predicting the subcellular locations of a given protein, not only the protein itself but also all its interacting partners are considered. Unlike existing methods, our method requires neither comprehensive knowledge of the protein-protein interaction network nor the experimentally annotated subcellular locations of most proteins in the network. Besides, our method can be used as a framework to integrate multiple predictors. Our method achieved an absolute-true rate of 56% on the human proteome, which is higher than the state-of-the-art methods. PMID:24466278

  10. A global parallel model based design of experiments method to minimize model output uncertainty.

    PubMed

    Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E

    2012-03-01

    Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.

  11. Vibrational cross sections for positron scattering by nitrogen molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazon, K. T.; Tenfen, W.; Michelin, S. E.

    2010-09-15

    We present a systematic study of low-energy positron collisions with nitrogen molecules. Vibrational elastic and excitation cross sections are calculated using the multichannel version of the continued fractions method in the close-coupling scheme for positron incident energies up to 20 eV. The interaction potential is treated within the static-correlation-polarization approximation. The comparison of our calculated data with existing theoretical and experimental results is encouraging.

  12. SOAR: An Architecture for General Intelligence

    DTIC Science & Technology

    1987-12-01

    these tasks, and (3) learn about all aspects of the tasks and its performance on them. Soar has existed since mid-1982 as an experimental software system...intelligence. Soar's behavior has already been studied over a range of tasks and methods (Figure 1), which sample its intended range, though...in multiple small tasks: Generate and test, AND/OR search, hill climbing (simple and steepest-ascent), means-ends analysis, operator subgoaling

  13. Experimental determination of self-heating and self-ignition risks associated with the dusts of agricultural materials commonly stored in silos.

    PubMed

    Ramírez, Alvaro; García-Torrent, Javier; Tascón, Alberto

    2010-03-15

    Agricultural products stored in silos, and their dusts, can undergo oxidation and self-heating, increasing the risk of self-ignition and therefore of fires and explosions. The aim of the present work was to determine the thermal susceptibility (as reflected by the Maciejasz index, the temperature of the emission of flammable volatile substances and the combined information provided by the apparent activation energy and the oxidation temperature) of icing sugar, bread-making flour, maize, wheat, barley, alfalfa, and soybean dusts, using experimental methods for the characterisation of different types of coal (no standardised procedure exists for characterising the thermal susceptibility of either coal or agricultural products). In addition, the thermal stability of wheat, i.e., the risk of self-ignition determined as a function of sample volume, ignition temperature and storage time, was determined using the methods outlined in standard EN 15188:2007. The advantages and drawbacks of the different methods used are discussed. (c) 2009 Elsevier B.V. All rights reserved.

  14. An approach to improving transporting velocity in the long-range ultrasonic transportation of micro-particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Jianxin; Mei, Deqing, E-mail: meidq-127@zju.edu.cn; Yang, Keji

    2014-08-14

    In existing ultrasonic transportation methods, the long-range transportation of micro-particles is always realized in a step-by-step way. Due to the substantial decrease of the driving force in each step, the transportation is low-speed and stair-stepping. To improve the transporting velocity, a non-stepping ultrasonic transportation approach is proposed. By quantitatively analyzing the acoustic potential well, an optimal region is defined as the position where the largest driving force is provided, under the condition that the driving force is simultaneously the major component of the acoustic radiation force. To keep the micro-particle trapped in the optimal region during the whole transportation process, an approach of optimizing the phase-shifting velocity and phase-shifting step is adopted. Due to the stable and large driving force, the displacement of the micro-particle is an approximately linear function of time, instead of a stair-stepping function of time as in the existing step-by-step methods. An experimental setup was also developed to validate this approach. Long-range ultrasonic transportations of zirconium beads with high transporting velocity were realized. The experimental results demonstrated that this approach is an effective way to improve transporting velocity in the long-range ultrasonic transportation of micro-particles.

  15. Multilabel learning via random label selection for protein subcellular multilocations prediction.

    PubMed

    Wang, Xiao; Li, Guo-Zheng

    2013-01-01

    Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most of the existing protein subcellular localization methods deal only with single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they adopt only a simple strategy, that is, transforming the multilocation proteins to multiple proteins with a single location, which does not take correlations among different subcellular locations into account. In this paper, a novel multilabel learning method named random label selection (RALS), which extends the simple binary relevance (BR) method, is proposed to learn from multilocation proteins in an effective and efficient way. RALS does not explicitly find the correlations among labels, but rather implicitly attempts to learn the label correlations from data by augmenting the original feature space with randomly selected labels as additional input features. Through a fivefold cross-validation test on a benchmark data set, we demonstrate that our proposed method, which considers label correlations, clearly outperforms the baseline BR method, which does not, indicating that correlations among different subcellular locations really exist and contribute to the improvement of prediction performance. Experimental results on two benchmark data sets also show that our proposed methods achieve significantly higher performance than some other state-of-the-art methods in predicting subcellular multilocations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public use.
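    The feature-augmentation step that distinguishes RALS from plain BR can be sketched as follows; the feature and label dimensions are illustrative, and at prediction time the appended label columns would come from base-classifier predictions rather than ground truth:

```python
import numpy as np

def rals_augment(X, Y, n_aug, rng):
    """Build the RALS input space for one ensemble member: the original
    features plus a random subset of the label columns as extra inputs."""
    picked = rng.choice(Y.shape[1], size=n_aug, replace=False)
    return np.hstack([X, Y[:, picked]]), picked

rng = np.random.default_rng(0)
X = rng.random((100, 8))           # 100 proteins, 8 sequence features
Y = rng.integers(0, 2, (100, 4))   # 4 candidate subcellular locations
X_aug, picked = rals_augment(X, Y, n_aug=2, rng=rng)
```

    Training one classifier per random label subset on such augmented inputs and combining their outputs yields the ensemble behaviour described in the abstract.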

  16. Material identification based on electrostatic sensing technology

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Chen, Xi; Li, Jingnan

    2018-04-01

    When a robot travels on the surface of different media, uncertainty about the medium seriously affects its autonomous action. In this paper, the distribution characteristics of electrostatic charges at multiple positions on the surface of materials are detected in order to improve the accuracy of existing electrostatic-signal material identification methods, which helps the robot optimize its control algorithm. Building on the electrostatic-signal material identification method proposed by previous researchers, a multi-channel detection circuit is used to obtain the electrostatic charge distribution at different positions on the material surface; weights are introduced into the eigenvalue matrix, and the weight distribution is optimized with an evolutionary algorithm, so that the eigenvalue matrix more accurately reflects the surface charge distribution characteristics of the material. This matrix is used as the input to a k-Nearest Neighbor (kNN) classification algorithm to classify the dielectric materials. The experimental results show that the proposed method can significantly improve the recognition rate of existing electrostatic-signal material recognition methods.
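    The final classification step, kNN over weighted features, can be sketched as follows; the toy features, weights, and class names are illustrative stand-ins for the eigenvalue matrix and the evolutionarily tuned weights:

```python
import numpy as np

def weighted_knn(train_X, train_y, w, query, k=3):
    """kNN with per-feature weights: each feature difference is scaled
    by its weight before the Euclidean distance is taken."""
    d = np.sqrt((((train_X - query) * w) ** 2).sum(axis=1))
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy data: two 'materials' separated along feature 0; feature 1 is noisy.
X = np.array([[0.1, 5.0], [0.2, -4.0], [0.9, 3.0], [1.0, -5.0]])
y = np.array(['wood', 'wood', 'metal', 'metal'])
w = np.array([1.0, 0.01])          # down-weight the noisy feature
label = weighted_knn(X, y, w, np.array([0.15, 4.0]), k=1)
```

    Down-weighting uninformative charge-distribution features is what lets the evolutionary optimization improve the recognition rate over unweighted kNN.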

  17. An improved parameter estimation scheme for image modification detection based on DCT coefficient analysis.

    PubMed

    Yu, Liyang; Han, Qi; Niu, Xiamu; Yiu, S M; Fang, Junbin; Zhang, Ye

    2016-02-01

    Most of the existing image modification detection methods that are based on DCT coefficient analysis model the distribution of DCT coefficients as a mixture of a modified and an unchanged component. To separate the two components, two parameters, the primary quantization step, Q1, and the portion of the modified region, α, have to be estimated, and more accurate estimations of α and Q1 lead to better detection and localization results. Existing methods estimate α and Q1 in a completely blind manner, without considering the characteristics of the mixture model and the constraints to which α should conform. In this paper, we propose a more effective scheme for estimating α and Q1, based on the observations that the curves on the surface of the likelihood function corresponding to the mixture model are largely smooth, and that α can take values only in a discrete set. We conduct extensive experiments to evaluate the proposed method, and the experimental results confirm the efficacy of our method. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
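    The constraint that α lies in a discrete set suggests scoring each candidate value by the mixture likelihood rather than optimizing α continuously. A hedged sketch of that grid search, with Gaussian stand-ins for the modified/unchanged component densities (the sample sizes and grid spacing are illustrative, not the paper's DCT-coefficient models):

```python
import numpy as np

def best_alpha(data, alphas, pdf_mod, pdf_unch):
    """Grid search over a discrete candidate set of alpha values,
    scoring each by the two-component mixture log-likelihood."""
    lls = []
    for a in alphas:
        p = a * pdf_mod(data) + (1 - a) * pdf_unch(data)
        lls.append(np.log(p).sum())
    return alphas[int(np.argmax(lls))]

def gaussian(m, s):
    return lambda x: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Synthetic mixture: 30% 'modified' component, 70% 'unchanged'.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 700), rng.normal(3, 1, 300)])
alphas = np.arange(0.0, 1.01, 0.05)   # alpha restricted to a discrete set
a_hat = best_alpha(data, alphas, gaussian(3, 1), gaussian(0, 1))
```

    In the actual scheme the component densities are themselves functions of Q1, so the search runs jointly over the Q1 candidates and the discrete α grid.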

  18. A Deep Convolutional Coupling Network for Change Detection Based on Heterogeneous Optical and Radar Images.

    PubMed

    Liu, Jia; Gong, Maoguo; Qin, Kai; Zhang, Puzhao

    2018-03-01

    We propose an unsupervised deep convolutional coupling network for change detection based on two heterogeneous images acquired by optical sensors and radars on different dates. Most existing change detection methods are based on homogeneous images. Due to the complementary properties of optical and radar sensors, there is an increasing interest in change detection based on heterogeneous images. The proposed network is symmetric with each side consisting of one convolutional layer and several coupling layers. The two input images connected with the two sides of the network, respectively, are transformed into a feature space where their feature representations become more consistent. In this feature space, the difference map is calculated, which then leads to the ultimate detection map by applying a thresholding algorithm. The network parameters are learned by optimizing a coupling function. The learning process is unsupervised, which is different from most existing change detection methods based on heterogeneous images. Experimental results on both homogeneous and heterogeneous images demonstrate the promising performance of the proposed network compared with several existing approaches.

  19. A novel method linking neural connectivity to behavioral fluctuations: Behavior-regressed connectivity.

    PubMed

    Passaro, Antony D; Vettel, Jean M; McDaniel, Jonathan; Lawhern, Vernon; Franaszczuk, Piotr J; Gordon, Stephen M

    2017-03-01

    During an experimental session, behavioral performance fluctuates, yet most neuroimaging analyses of functional connectivity derive a single connectivity pattern. These conventional connectivity approaches assume that since the underlying behavior of the task remains constant, the connectivity pattern is also constant. We introduce a novel method, behavior-regressed connectivity (BRC), to directly examine behavioral fluctuations within an experimental session and capture their relationship to changes in functional connectivity. This method employs the weighted phase lag index (WPLI) applied to a window of trials with a weighting function. Using two datasets, the BRC results are compared to conventional connectivity results during two time windows: the one second before stimulus onset to identify predictive relationships, and the one second after onset to capture task-dependent relationships. In both tasks, we replicate the expected results for the conventional connectivity analysis, and extend our understanding of the brain-behavior relationship using the BRC analysis, demonstrating subject-specific BRC maps that correspond to both positive and negative relationships with behavior. Comparison with Existing Method(s): Conventional connectivity analyses assume a consistent relationship between behaviors and functional connectivity, but the BRC method examines performance variability within an experimental session to understand dynamic connectivity and transient behavior. The BRC approach examines connectivity as it covaries with behavior to complement the knowledge of underlying neural activity derived from conventional connectivity analyses. Within this framework, BRC may be implemented for the purpose of understanding performance variability both within and between participants. Published by Elsevier B.V.
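    The connectivity measure at the heart of BRC, the weighted phase lag index, can be sketched per frequency bin as |E[Im(Sxy)]| / E[|Im(Sxy)|] over trials, with Sxy the trial cross-spectrum; the sampling rate, frequency, and synthetic signals below are illustrative, and the behavior-derived trial weighting of BRC is omitted:

```python
import numpy as np

def wpli(x_trials, y_trials, fs, freq):
    """Weighted phase lag index at one frequency across trials:
    |E[Im(Sxy)]| / E[|Im(Sxy)|], Sxy being the trial cross-spectrum."""
    n = x_trials.shape[1]
    k = int(round(freq * n / fs))                 # FFT bin of interest
    Fx = np.fft.rfft(x_trials, axis=1)[:, k]
    Fy = np.fft.rfft(y_trials, axis=1)[:, k]
    im = np.imag(Fx * np.conj(Fy))
    return np.abs(im.mean()) / np.abs(im).mean()

# Two channels with a consistent 90-degree lag at 10 Hz -> WPLI near 1.
fs, t = 250.0, np.arange(250) / 250.0
rng = np.random.default_rng(0)
trials_x, trials_y = [], []
for _ in range(50):
    phi = rng.uniform(0, 2 * np.pi)               # random phase per trial
    trials_x.append(np.sin(2 * np.pi * 10 * t + phi)
                    + 0.1 * rng.standard_normal(250))
    trials_y.append(np.sin(2 * np.pi * 10 * t + phi + np.pi / 2)
                    + 0.1 * rng.standard_normal(250))
value = wpli(np.array(trials_x), np.array(trials_y), fs, 10.0)
```

    BRC evaluates this quantity over a sliding window of trials, weighting each trial by the regressed behavioral measure, so the resulting connectivity estimate covaries with performance.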

  20. Mode Identification of High-Amplitude Pressure Waves in Liquid Rocket Engines

    NASA Astrophysics Data System (ADS)

    EBRAHIMI, R.; MAZAHERI, K.; GHAFOURIAN, A.

    2000-01-01

    Identification of existing instability modes from experimental pressure measurements in rocket engines is difficult, especially when steep waves are present. Actual pressure waves are often non-linear, comprising steep shocks followed by gradual expansions, and the interaction of such non-linear waves is generally considered difficult to analyze. A method of mode identification is introduced. After the constituent modes are presumed, they are superposed using a standard finite difference scheme for the solution of the classical wave equation. Waves are numerically produced at each end of the combustion tube with different wavelengths, amplitudes, and relative phases. Pressure amplitude histories and phase diagrams along the tube are computed. To establish the validity of the method for steep non-linear waves, the Euler equations are solved numerically for non-linear waves, and negligible interactions between these waves are observed. To demonstrate applicability, others' experimental results in which modes were identified are used. Results indicate that this simple method can be used to analyze complicated pressure signal measurements.
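    The superposition step can be illustrated without a finite difference solver: two counter-propagating sinusoidal waves, each an exact solution of the classical wave equation, are added and the pressure-amplitude envelope along the tube is read off. The tube length, sound speed, and equal amplitudes below are illustrative assumptions, not the paper's conditions:

```python
import numpy as np

L, c, A = 1.0, 340.0, 1.0          # tube length [m], sound speed [m/s], amplitude
f = c / (2 * L)                    # frequency of the first longitudinal mode
k, w = 2*np.pi*f/c, 2*np.pi*f      # wavenumber and angular frequency

x = np.linspace(0.0, L, 201)
t = np.linspace(0.0, 4/f, 400)
# one wave launched from each end, equal amplitude, zero relative phase
p = A*np.cos(k*x[None, :] - w*t[:, None]) + A*np.cos(k*x[None, :] + w*t[:, None])
envelope = np.abs(p).max(axis=0)   # pressure-amplitude envelope along the tube
# envelope[0] is ~2A (antinode at the tube end); mid-tube it is ~0 (node)
```

    The computed envelope reproduces the node/antinode pattern of the presumed mode; comparing such envelopes and phase diagrams against measured pressure histories is the essence of the identification procedure.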

  1. A Buoyancy-based Method of Determining Fat Levels in Drosophila.

    PubMed

    Hazegh, Kelsey E; Reis, Tânia

    2016-11-01

    Drosophila melanogaster is a key experimental system in the study of fat regulation. Numerous techniques currently exist to measure levels of stored fat in Drosophila, but most are expensive and/or laborious and have clear limitations. Here, we present a method to quickly and cheaply determine organismal fat levels in L3 Drosophila larvae. The technique relies on the differences in density between fat and lean tissues and allows for rapid detection of fat and lean phenotypes. We have verified the accuracy of this method by comparison to body fat percentage as determined by neutral lipid extraction and gas chromatography coupled with mass spectrometry (GCMS). We furthermore outline detailed protocols for the collection and synchronization of larvae as well as relevant experimental recipes. The technique presented below overcomes the major shortcomings in the most widely used lipid quantitation methods and provides a powerful way to quickly and sensitively screen L3 larvae for fat regulation phenotypes while maintaining the integrity of the larvae. This assay has wide applications for the study of metabolism and fat regulation using Drosophila.

  2. Laser Raman detection for oral cancer based on an adaptive Gaussian process classification method with posterior probabilities

    NASA Astrophysics Data System (ADS)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming

    2013-03-01

    Existing methods for the early and differential diagnosis of oral cancer are limited by inconspicuous early symptoms and imperfect imaging examination methods. In this paper, classification models for oral adenocarcinoma, carcinoma tissues, and a control group are established from just four features by utilizing the hybrid Gaussian process (HGP) classification algorithm, which introduces noise-reduction and posterior-probability mechanisms. During the experimental process, oral tissues were divided into three groups, adenocarcinoma (n = 87), carcinoma (n = 100) and the control group (n = 134), and spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. These results indicate that utilizing HGP in laser Raman spectroscopy (LRS) detection analysis gives accurate results for the diagnosis of oral cancer, and the prospects for application are satisfactory.
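    The reported figures of merit can be computed from a binary confusion matrix. A minimal sketch of the metrics (the counts below are hypothetical, chosen only to illustrate the formulas, not the paper's data):

```python
import math

def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

def matthews_cc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# hypothetical counts for a cancer-vs-control split
tp, tn, fp, fn = 71, 94, 40, 55
print(sensitivity(tp, fn), specificity(tn, fp), matthews_cc(tp, tn, fp, fn))
```

    Unlike sensitivity and specificity alone, the MCC uses all four cells of the confusion matrix, which makes it a more balanced single-number summary when class sizes differ, as they do here.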

  3. Absolute Paleointensity Estimates using Combined Shaw and Pseudo-Thellier Experimental Protocols

    NASA Astrophysics Data System (ADS)

    Foucher, M. S.; Smirnov, A. V.

    2016-12-01

    Data on the long-term evolution of Earth's magnetic field intensity have great potential to advance our understanding of many aspects of the Earth's evolution. However, paleointensity determination is one of the most challenging aspects of paleomagnetic research, so the quantity and quality of existing paleointensity data remain limited, especially for older epochs. While the Thellier double-heating method remains the most commonly used paleointensity technique, its applicability is limited for many rocks that undergo magneto-mineralogical alteration during the successive heating steps the method requires. To reduce the probability of alteration, several alternative methods involving few or no heating steps have been proposed. However, continued efforts are needed to better understand the physical foundations and relative efficiency of reduced- and non-heating methods in recovering the true paleofield strength, and to better constrain their calibration factors. We will present the results of our investigation of synthetic and natural magnetite-bearing samples using a combination of the LTD-DHT Shaw and pseudo-Thellier experimental protocols for absolute paleointensity estimation.

  4. A content-boosted collaborative filtering algorithm for personalized training in interpretation of radiological imaging.

    PubMed

    Lin, Hongli; Yang, Xuedong; Wang, Weisheng

    2014-08-01

    Devising a method that can select cases based on the performance levels of trainees and the characteristics of cases is essential for developing a personalized training program in radiology education. In this paper, we propose a novel hybrid prediction algorithm called content-boosted collaborative filtering (CBCF) to predict the difficulty level of each case for each trainee. The CBCF utilizes a content-based filtering (CBF) method to enhance existing trainee-case ratings data and then provides final predictions through a collaborative filtering (CF) algorithm. The CBCF algorithm incorporates the advantages of both CBF and CF, while not inheriting the disadvantages of either. The CBCF method is compared with the pure CBF and pure CF approaches using three datasets, and predictions are evaluated using the mean absolute error (MAE) metric. Our experimental results show that the CBCF outperforms the pure CBF and CF methods by 13.33% and 12.17%, respectively, in terms of prediction precision. This also suggests that the CBCF can be used in the development of personalized training systems in radiology education.
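    The two-stage idea, densify the sparse trainee-case rating matrix with a content-based estimate and then run collaborative filtering on the result, can be sketched as follows. This is a simplified illustration (cosine similarity on hypothetical case features, Pearson-weighted user CF), not the paper's exact CBCF formulation:

```python
import numpy as np

def cbcf_predict(R, case_features, target_user, target_item):
    """Content-boosted CF sketch. R is a trainee-by-case rating matrix
    with np.nan marking unrated cases; case_features holds one feature
    vector per case."""
    filled = R.astype(float).copy()
    # 1) content-based step: fill each missing rating from the user's
    #    ratings of feature-similar cases (cosine similarity)
    for u in range(R.shape[0]):
        rated = ~np.isnan(R[u])
        for i in np.where(~rated)[0]:
            sims = case_features[rated] @ case_features[i]
            norms = (np.linalg.norm(case_features[rated], axis=1)
                     * np.linalg.norm(case_features[i]) + 1e-12)
            w = np.maximum(sims / norms, 1e-6)
            filled[u, i] = np.average(R[u, rated], weights=w)
    # 2) collaborative step: Pearson-weighted average over the other users
    num = den = 0.0
    for v in range(filled.shape[0]):
        if v == target_user:
            continue
        sim = max(np.corrcoef(filled[target_user], filled[v])[0, 1], 0.0)
        num += sim * filled[v, target_item]
        den += sim
    return num / den if den else float(np.mean(filled[:, target_item]))

R = np.array([[5.0, 4.0, np.nan],
              [5.0, 4.0, 2.0],
              [1.0, 2.0, 5.0]])
features = np.eye(3)   # hypothetical per-case feature vectors
pred = cbcf_predict(R, features, target_user=0, target_item=2)
```

    In this toy example the content step fills the missing entry from the trainee's own ratings, and the collaborative step then leans on the like-minded trainee (row 1); densifying first is what lets CF work even when the raw rating matrix is very sparse.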

  5. A strategy to load balancing for non-connectivity MapReduce job

    NASA Astrophysics Data System (ADS)

    Zhou, Huaping; Liu, Guangzong; Gui, Haixia

    2017-09-01

    MapReduce, a distributed programming model, has been widely used for large-scale and complex datasets. The original hash partitioning function in MapReduce often results in data skew when the data distribution is uneven. To address this partitioning imbalance, we propose a strategy that changes the partitioning index of the remaining data when skew arises. In the Map phase, we count the amount of data destined for each reducer; the JobTracker then monitors the global partitioning information and dynamically modifies the original partitioning function according to the data skew model, so that the Partitioner redirects partitions that would cause skew to less-loaded reducers in the next partitioning round, eventually balancing the load across nodes. Finally, we compare our method experimentally with existing methods on both synthetic and real datasets. The results show that our strategy solves the data skew problem with better stability and efficiency than the hash and sampling methods for non-connectivity MapReduce tasks.
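    The repartitioning idea, monitor per-key counts during the Map phase and steer heavy partitions toward lightly loaded reducers, can be sketched as a greedy assignment. This is a stand-alone illustration with hypothetical key counts, not Hadoop code:

```python
import heapq
from collections import Counter

def balanced_partition(key_counts, n_reducers):
    """Greedy sketch: assign key groups, heaviest first, to the currently
    least-loaded reducer instead of using plain hash(key) % n_reducers."""
    heap = [(0, r) for r in range(n_reducers)]      # (load, reducer id)
    heapq.heapify(heap)
    assignment = {}
    for key, count in sorted(key_counts.items(), key=lambda kv: -kv[1]):
        load, r = heapq.heappop(heap)
        assignment[key] = r
        heapq.heappush(heap, (load + count, r))
    return assignment

# hypothetical per-key counts gathered during the Map phase
counts = Counter({"a": 1000, "b": 300, "c": 290, "d": 280, "e": 10})
assign = balanced_partition(counts, 2)
loads = [sum(c for k, c in counts.items() if assign[k] == r) for r in range(2)]
# loads come out [1000, 880]; plain hash partitioning could have put
# several heavy keys on the same reducer
```

    The heaviest-first ordering is what bounds the imbalance: once the dominant key is placed, every later key goes to whichever reducer is behind.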

  6. Stress and Damage in Polymer Matrix Composite Materials Due to Material Degradation at High Temperatures

    NASA Technical Reports Server (NTRS)

    McManus, Hugh L.; Chamis, Christos C.

    1996-01-01

    This report describes analytical methods for calculating stresses and damage caused by degradation of the matrix constituent in polymer matrix composite materials. Laminate geometry, material properties, and matrix degradation states are specified as functions of position and time. Matrix shrinkage and property changes are modeled as functions of the degradation states. The model is incorporated into an existing composite mechanics computer code. Stresses, strains, and deformations at the laminate, ply, and micro levels are calculated, and from these calculations it is determined if there is failure of any kind. The rationale for the model (based on published experimental work) is presented, its integration into the laminate analysis code is outlined, and example results are given, with comparisons to existing material and structural data. The mechanisms behind the changes in properties and in surface cracking during long-term aging of polyimide matrix composites are clarified. High-temperature-material test methods are also evaluated.

  7. Prediction of novel pre-microRNAs with high accuracy through boosting and SVM.

    PubMed

    Zhang, Yuanwei; Yang, Yifan; Zhang, Huan; Jiang, Xiaohua; Xu, Bo; Xue, Yu; Cao, Yunxia; Zhai, Qian; Zhai, Yong; Xu, Mingqing; Cooke, Howard J; Shi, Qinghua

    2011-05-15

    High-throughput deep-sequencing technology has generated an unprecedented number of expressed short sequence reads, presenting not only an opportunity but also a challenge for the prediction of novel microRNAs. To verify the existence of candidate microRNAs, we have to show that these short sequences can be processed from candidate pre-microRNAs. However, this is laborious and time consuming to verify using existing experimental techniques. Here, we describe a new method, miRD, which is constructed using two feature selection strategies based on support vector machines (SVMs) and a boosting method. It is a high-efficiency tool for novel pre-microRNA prediction, with accuracy up to 94.0% across different species. miRD is implemented in PHP/PERL+MySQL+R and can be freely accessed at http://mcg.ustc.edu.cn/rpg/mird/mird.php.

  8. Aircraft interior noise reduction by alternate resonance tuning

    NASA Technical Reports Server (NTRS)

    Bliss, Donald B.; Gottwald, James A.; Srinivasan, Ramakrishna; Gustaveson, Mark B.

    1990-01-01

    Existing interior noise reduction techniques for aircraft fuselages perform reasonably well at higher frequencies but are inadequate at lower frequencies, particularly for the low blade-passage harmonics with high forcing levels found in propeller aircraft. A method is being studied which considers an aircraft fuselage lined with panels alternately tuned to frequencies above and below the frequency that must be attenuated. Adjacent panels would oscillate at equal amplitude, to give equal source strength, but with opposite phase. Provided these adjacent panels are acoustically compact, the resulting cancellation causes the interior acoustic modes to become cut off, and therefore non-propagating and evanescent. This interior noise reduction method, called Alternate Resonance Tuning (ART), is currently being investigated both theoretically and experimentally. The new concept has potential application to reducing interior noise due to the propellers in advanced turboprop aircraft, as well as in existing aircraft configurations.
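    The phase-opposition mechanism can be illustrated with two single-degree-of-freedom panel models, one tuned above and one below the drive frequency; the numbers below are illustrative assumptions, not design values from the study:

```python
import cmath

def panel_response(f_drive, f_tuned, damping=0.05):
    """Steady-state complex response of a panel modelled as a damped
    single-degree-of-freedom oscillator driven at f_drive."""
    w, w0 = 2*cmath.pi*f_drive, 2*cmath.pi*f_tuned
    return 1.0 / (w0**2 - w**2 + 1j*damping*w0*w)

f = 100.0                       # blade-passage harmonic to be attenuated
hi = panel_response(f, 120.0)   # panel tuned above the drive frequency
lo = panel_response(f, 80.0)    # panel tuned below the drive frequency
phase_diff = abs(cmath.phase(hi) - cmath.phase(lo))
# phase_diff is close to pi: the two panels oscillate in antiphase, so
# compact adjacent panels act as mutually cancelling sources
```

    Below its resonance a panel moves in phase with the forcing and above it in antiphase, which is why tuning adjacent panels to either side of the target frequency yields the opposite-phase, equal-source-strength pairs the method relies on.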

  9. RNA-sequence data normalization through in silico prediction of reference genes: the bacterial response to DNA damage as case study.

    PubMed

    Berghoff, Bork A; Karlsson, Torgny; Källman, Thomas; Wagner, E Gerhart H; Grabherr, Manfred G

    2017-01-01

    Measuring how gene expression changes in the course of an experiment assesses how an organism responds on a molecular level. Sequencing of RNA molecules, and their subsequent quantification, aims to assess global gene expression changes on the RNA level (transcriptome). While advances in high-throughput RNA-sequencing (RNA-seq) technologies allow for inexpensive data generation, accurate post-processing and normalization across samples is required to eliminate any systematic noise introduced by the biochemical and/or technical processes. Existing methods thus either normalize on selected known reference genes that are invariant in expression across the experiment, assume that the majority of genes are invariant, or assume that the effects of up- and down-regulated genes cancel each other out during the normalization. Here, we present a novel method, moose2, which predicts invariant genes in silico through a dynamic programming (DP) scheme and applies a quadratic normalization based on this subset. The method allows for specifying a set of known or experimentally validated invariant genes, which guides the DP. We experimentally verified the predictions of this method in the bacterium Escherichia coli, and show how moose2 is able to (i) estimate the expression value distances between RNA-seq samples, (ii) reduce the variation of expression values across all samples, and (iii) subsequently reveal new functional groups of genes during the late stages of DNA damage. We further applied the method to three eukaryotic data sets, on which its performance compares favourably to that of other methods. The software is implemented in C++ and is publicly available from http://grabherr.github.io/moose2/.
    The proposed RNA-seq normalization method, moose2, is a valuable alternative to existing methods, with two major advantages: (i) in silico prediction of invariant genes provides a list of potential reference genes for downstream analyses, and (ii) non-linear artefacts in RNA-seq data are handled adequately to minimize variations between replicates.
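    The invariant-gene idea can be sketched in a few lines: pick the genes whose expression ratio to a reference sample is most stable, fit a low-order (here quadratic) correction on them, and apply it genome-wide. This is a toy illustration on synthetic counts, not the moose2 algorithm or its DP-based gene selection:

```python
import numpy as np

def normalize_to_reference(sample, reference, n_invariant=50):
    """Pick the genes whose log-ratio to the reference is most stable,
    fit a quadratic log-log correction on them, apply it genome-wide."""
    s, r = np.log1p(sample), np.log1p(reference)
    ratio = s - r
    idx = np.argsort(np.abs(ratio - np.median(ratio)))[:n_invariant]
    coeffs = np.polyfit(s[idx], r[idx], deg=2)   # quadratic correction
    return np.expm1(np.polyval(coeffs, s))

rng = np.random.default_rng(1)
reference = rng.gamma(2.0, 50.0, size=500)
sample = reference * 2.0                  # global 2x scaling artefact
sample[:25] *= 8.0                        # 25 truly up-regulated genes
corrected = normalize_to_reference(sample, reference)
# after correction, the unchanged genes land back near the reference
err = np.median(np.abs(np.log1p(corrected[25:]) - np.log1p(reference[25:])))
raw = np.median(np.abs(np.log1p(sample[25:]) - np.log1p(reference[25:])))
```

    Because the truly regulated genes sit far from the median log-ratio, they are excluded from the invariant set and therefore do not distort the fitted correction, which is the property that makes reference-gene normalization robust to asymmetric regulation.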

  10. Effects of a System Thinking-Based Simulation Program for Congestive Heart Failure.

    PubMed

    Kim, Hyeon-Young; Yun, Eun Kyoung

    2018-03-01

    This study evaluated a system thinking-based simulation program for the care of patients with congestive heart failure. Participants were 67 undergraduate nursing students from a nursing college in Seoul, South Korea. The experimental group was given a 4-hour system-thinking program and a 2-hour simulation program, whereas the control group had a 4-hour case study and a 2-hour simulation program. There were significant improvements in critical thinking in both groups, but no significant group differences between educational methods (F = 3.26, P = .076). Problem-solving ability in the experimental group was significantly higher than in the control group (F = 5.04, P = .028). Clinical competency skills in the experimental group were higher than in the control group (t = 2.12, P = .038). A system thinking-based simulation program is a more effective learning method in terms of problem-solving ability and clinical competency skills compared to the existing simulation program. Further research using a longitudinal study is needed to test the long-term effect of the intervention and apply it to the nursing curriculum.

  11. A collaborative environment for developing and validating predictive tools for protein biophysical characteristics

    NASA Astrophysics Data System (ADS)

    Johnston, Michael A.; Farrell, Damien; Nielsen, Jens Erik

    2012-04-01

    The exchange of information between experimentalists and theoreticians is crucial to improving the predictive ability of theoretical methods and hence our understanding of the related biology. However, many barriers exist which prevent the flow of information between the two disciplines. Enabling effective collaboration requires that experimentalists can easily apply computational tools to their data and share their data with theoreticians, and that both the experimental data and computational results are accessible to the wider community. We present a prototype collaborative environment for developing and validating predictive tools for protein biophysical characteristics. The environment is built on two central components: a new Python-based integration module, which allows theoreticians to provide and manage remote access to their programs, and PEATDB, a program for storing and sharing experimental data from protein biophysical characterisation studies. We demonstrate our approach by integrating PEATSA, a web-based service for predicting changes in protein biophysical characteristics, into PEATDB. Furthermore, we illustrate how the resulting environment aids method development using the Potapov dataset of experimentally measured ΔΔGfold values, previously employed to validate and train protein stability prediction algorithms.

  12. Overview of the Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Chwalowski, Pawel; Florance, Jennifer P.; Wieseman, Carol D.; Schuster, David M.; Perry, Raleigh B.

    2013-01-01

    The Aeroelastic Prediction Workshop brought together an international community of computational fluid dynamicists as a step in defining the state of the art in computational aeroelasticity. The workshop's technical focus was the prediction of unsteady pressure distributions resulting from forced motion, benchmarking the results first using unforced-system data. The most challenging aspects of the physics were identified as capturing oscillatory shock behavior, dynamic shock-induced separated flow, and tunnel wall boundary layer influences. The majority of the participants used unsteady Reynolds-averaged Navier-Stokes codes. These codes were exercised at transonic Mach numbers for three configurations, and comparisons were made with existing experimental data. Substantial variations were observed among the computational solutions, as well as differences relative to the experimental data. Contributing issues include wall effects and wall modeling, non-standardized convergence criteria, inclusion of static aeroelastic deflection, methodology for oscillatory solutions, and post-processing methods. Contributing issues pertaining principally to the experimental data sets include the position of the model relative to the tunnel wall, splitter plate size, wind tunnel expansion slot configuration, spacing and location of pressure instrumentation, and data processing methods.

  13. Menthol-induced bleaching rapidly and effectively provides experimental aposymbiotic sea anemones (Aiptasia sp.) for symbiosis investigations.

    PubMed

    Matthews, Jennifer L; Sproles, Ashley E; Oakley, Clinton A; Grossman, Arthur R; Weis, Virginia M; Davy, Simon K

    2016-02-01

    Experimental manipulation of the symbiosis between cnidarians and photosynthetic dinoflagellates (Symbiodinium spp.) is crucial to advancing the understanding of the cellular mechanisms involved in host-symbiont interactions, and overall coral reef ecology. The anemone Aiptasia sp. is a model for cnidarian-dinoflagellate symbiosis, and notably it can be rendered aposymbiotic (i.e. dinoflagellate-free) and re-infected with a range of Symbiodinium types. Various methods exist for generating aposymbiotic hosts; however, they can be hugely time consuming and not wholly effective. Here, we optimise a method using menthol for production of aposymbiotic Aiptasia. The menthol treatment produced aposymbiotic hosts within just 4 weeks (97-100% symbiont loss), and the condition was maintained long after treatment when anemones were held under a standard light:dark cycle. The ability of Aiptasia to form a stable symbiosis appeared to be unaffected by menthol exposure, as demonstrated by successful re-establishment of the symbiosis when anemones were experimentally re-infected. Furthermore, there was no significant impact on photosynthetic or respiratory performance of re-infected anemones. © 2016. Published by The Company of Biologists Ltd.

  14. Consensus Prediction of Charged Single Alpha-Helices with CSAHserver.

    PubMed

    Dudola, Dániel; Tóth, Gábor; Nyitray, László; Gáspári, Zoltán

    2017-01-01

    Charged single alpha-helices (CSAHs) constitute a rare structural motif characterized by a high density of regularly alternating residues with positively and negatively charged side chains. Such segments exhibit unique structural properties; however, there are only a handful of proteins in which their existence has been experimentally verified. Therefore, establishing a pipeline capable of predicting the presence of CSAH segments with a low false positive rate is of considerable importance. Here we describe a consensus-based approach that relies on two conceptually different CSAH detection methods and a final filter based on the estimated helix-forming capabilities of the segments. This pipeline was shown to be capable of identifying previously uncharacterized CSAH segments that could be verified experimentally. The method is available as a web server at http://csahserver.itk.ppke.hu and also as a downloadable standalone program suitable for scanning larger sequence collections.
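    A toy version of the detection idea, scanning for windows dense in both positively and negatively charged residues, shows the flavor of the approach. This is not the CSAHserver algorithm, and the sequence below is hypothetical:

```python
def charged_window_score(seq, window=20):
    """Toy scan for CSAH-like segments: fraction of charged residues
    (D, E, K, R) in each window, requiring both signs to be present.
    Returns the best score and the window start position."""
    charged = set("DEKR")
    best, best_i = 0.0, 0
    for i in range(max(1, len(seq) - window + 1)):
        seg = seq[i:i + window]
        frac = sum(c in charged for c in seg) / len(seg)
        has_both = any(c in "DE" for c in seg) and any(c in "KR" for c in seg)
        if has_both and frac > best:
            best, best_i = frac, i
    return best, best_i

# a hypothetical sequence with an E/K-rich stretch in the middle
seq = "MGSSHHHHLVPRGS" + "EKEKEKEKEKEKEKEKEKEK" + "GASTGATPLV"
score, start = charged_window_score(seq)
```

    A real predictor additionally scores the regular alternation of the charge pattern and the helix-forming propensity of the segment, which is exactly the role of the consensus filter described above.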

  15. Contaminated water delivery as a simple and effective method of experimental Salmonella infection

    PubMed Central

    O’Donnell, Hope; Pham, Oanh H.; Benoun, Joseph M.; Ravesloot-Chávez, Marietta M.; McSorley, Stephen J.

    2016-01-01

    Aims: In most infectious disease models, it is assumed that gavage needle infection is the most reliable means of pathogen delivery to the gastrointestinal tract. However, this methodology can cause esophageal tearing and induces stress in experimental animals, both of which have the potential to impact early infection and the subsequent immune response. Materials and Methods: C57BL/6 mice were orally infected with virulent Salmonella Typhimurium SL1344 either by intragastric gavage preceded by sodium bicarbonate, or by contamination of drinking water. Results: We demonstrate that water contamination delivery of Salmonella is equivalent to gavage inoculation in providing a consistent model of infection. Furthermore, exposure of mice to contaminated drinking water for as little as 4 hours allowed maximal mucosal and systemic infection, suggesting that an abbreviated window exists for natural intestinal entry. Conclusions: Together, these data question the need for gavage delivery for infection with oral pathogens. PMID:26439708

  16. Three-Dimensional Dynamic Deformation Measurements Using Stereoscopic Imaging and Digital Speckle Photography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prentice, H. J.; Proud, W. G.

    2006-07-28

    A technique has been developed to determine experimentally the three-dimensional displacement field on the rear surface of a dynamically deforming plate. The technique combines speckle analysis with stereoscopy, using a modified angular-lens method: this incorporates split-frame photography and a simple method by which the effective lens separation can be adjusted and calibrated in situ. Whilst several analytical models exist to predict deformation in extended or semi-infinite targets, the non-trivial nature of the wave interactions complicates the generation and development of analytical models for targets of finite depth. By interrogating specimens experimentally to acquire three-dimensional strain data points, both analytical and numerical model predictions can be verified more rigorously. The technique is applied to the quasi-static deformation of a rubber sheet and dynamically to mild steel sheets of various thicknesses.

  17. An experimental system for spectral line ratio measurements in the TJ-II stellarator.

    PubMed

    Zurro, B; Baciero, A; Fontdecaba, J M; Peláez, R; Jiménez-Rey, D

    2008-10-01

    The chord-integrated emissions of spectral lines have been monitored in the TJ-II stellarator using a spectral system with time and space scanning capabilities and relative calibration over the entire UV-visible spectral range. This system has been used to study the ratio of lines from different ionization stages of carbon (C(5+) 5290 A and C(4+) 2271 A) for plasma diagnostic purposes. The local emissivity of these ions has been reconstructed, for quasistationary profiles, by means of the Fisher inversion method described previously. The experimental line ratio is being studied empirically, and in parallel a simple spectroscopic model has been developed to account for that ratio. We are investigating whether the role played by charge-exchange processes with neutrals and the existence of non-Maxwellian electrons, intrinsic to electron cyclotron resonance heating (ECRH), leave any distinguishable mark on this diagnostic method.

  18. Dirac R-matrix calculations for the electron-impact excitation of neutral tungsten providing noninvasive diagnostics for magnetic confinement fusion

    NASA Astrophysics Data System (ADS)

    Smyth, R. T.; Ballance, C. P.; Ramsbottom, C. A.; Johnson, C. A.; Ennis, D. A.; Loch, S. D.

    2018-05-01

    Neutral tungsten is the primary candidate wall material for the divertor region of the International Thermonuclear Experimental Reactor (ITER). The efficient operation of ITER depends heavily on precise atomic physics calculations for the determination of reliable erosion diagnostics, helping to characterize the influx of tungsten impurities into the core plasma. The following paper presents detailed calculations of the atomic structure of neutral tungsten using the multiconfigurational Dirac-Fock method, drawing comparisons with experimental measurements where available, and includes a critical assessment of existing atomic structure data. We investigate the electron-impact excitation of neutral tungsten using the Dirac R-matrix method, and by employing collisional-radiative models we benchmark our results against recent Compact Toroidal Hybrid measurements. The resulting comparisons highlight alternative diagnostic lines to the widely used 400.88-nm line.

  19. A QM/MM-MD study on protein electronic properties: Circular dichroism spectra of oxytocin and insulin

    NASA Astrophysics Data System (ADS)

    Kitagawa, Yuya; Akinaga, Yoshinobu; Kawashima, Yukio; Jung, Jaewoon; Ten-no, Seiichiro

    2012-06-01

    A QM/MM (quantum-mechanical/molecular-mechanical) molecular-dynamics approach based on the generalized hybrid-orbital (GHO) method, in conjunction with second-order perturbation (MP2) theory and the second-order approximate coupled-cluster (CC2) model, is employed to calculate electronic properties while accounting for the protein environment. Circular dichroism (CD) spectra originating from the chiral disulfide bridges of oxytocin and insulin at room temperature are computed. It is shown that the sampling of thermal fluctuations of molecular geometries facilitated by the GHO-MD method plays an important role in the obtained spectra. While the protein environment in an oxytocin molecule has a significant electrostatic influence on its chiral center, this influence is compensated by solvent-induced charges, which gives a reasonable explanation of experimental observations. GHO-MD simulations starting from different experimental structures of insulin indicate that the existence of disulfide bridges with negative dihedral angles is crucial.

  20. Identification of Extracellular Segments by Mass Spectrometry Improves Topology Prediction of Transmembrane Proteins.

    PubMed

    Langó, Tamás; Róna, Gergely; Hunyadi-Gulyás, Éva; Turiák, Lilla; Varga, Julia; Dobson, László; Várady, György; Drahos, László; Vértessy, Beáta G; Medzihradszky, Katalin F; Szakács, Gergely; Tusnády, Gábor E

    2017-02-13

    Transmembrane proteins play a crucial role in signaling, ion transport, and nutrient uptake, as well as in maintaining the dynamic equilibrium between the internal and external environment of cells. Despite their important biological functions and abundance, less than 2% of all determined structures are transmembrane proteins. Given the persisting technical difficulties associated with high-resolution structure determination of transmembrane proteins, additional methods, including computational and experimental techniques, remain vital in promoting our understanding of their topologies, 3D structures, functions and interactions. Here we report a method for the high-throughput determination of extracellular segments of transmembrane proteins based on the identification of surface-labeled and biotin-captured peptide fragments by LC/MS/MS. We show that reliable identification of extracellular protein segments increases the accuracy and reliability of existing topology prediction algorithms. Using the experimental topology data as constraints, our improved prediction tool provides accurate and reliable topology models for hundreds of human transmembrane proteins.

  1. Inkjet printing-based volumetric display projecting multiple full-colour 2D patterns

    NASA Astrophysics Data System (ADS)

    Hirayama, Ryuji; Suzuki, Tomotaka; Shimobaba, Tomoyoshi; Shiraki, Atsushi; Naruse, Makoto; Nakayama, Hirotaka; Kakue, Takashi; Ito, Tomoyoshi

    2017-04-01

    In this study, a method to construct a full-colour volumetric display is presented using a commercially available inkjet printer. Photoreactive luminescence materials are minutely and automatically printed as the volume elements, and volumetric displays are constructed with high resolution using easy-to-fabricate means that exploit inkjet printing technologies. The results experimentally demonstrate the first prototype of an inkjet printing-based volumetric display composed of multiple layers of transparent films that yield a full-colour three-dimensional (3D) image. Moreover, we propose a design algorithm with 3D structures that provide multiple different 2D full-colour patterns when viewed from different directions and experimentally demonstrate prototypes. It is considered that these types of 3D volumetric structures and their fabrication methods based on widely deployed existing printing technologies can be utilised as novel information display devices and systems, including digital signage, media art, entertainment and security.

  2. Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy.

    PubMed

    Tian, Yuling; Zhang, Hongxian

    2016-01-01

    For the purposes of information retrieval, users must find highly relevant documents from within a system (often a quite large one comprising many individual documents) based on an input query. Ranking the documents according to their relevance within the system to meet user needs is a challenging endeavor and a hot research topic: there already exist several rank-learning methods based on machine learning techniques which can generate ranking functions automatically. This paper proposes a parallel B cell algorithm, RankBCA, for rank learning which utilizes a clonal selection mechanism based on biological immunity. The novel algorithm is compared with traditional rank-learning algorithms through experimentation and shown to outperform the others with respect to accuracy, learning time, and convergence rate; taken together, the experimental results show that the proposed algorithm effectively and rapidly identifies optimal ranking functions.
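    The clonal-selection loop at the heart of such a learner can be sketched on a linear ranking function and a hypothetical toy dataset. This is an illustration in the spirit of the approach, not the paper's parallel RankBCA implementation:

```python
import random

def pairwise_accuracy(w, data):
    """Fraction of (relevant, irrelevant) document pairs ranked correctly
    by the linear scoring function w . x."""
    score = lambda x: sum(wi * xi for wi, xi in zip(w, x))
    pairs = [(p, n) for p in data["pos"] for n in data["neg"]]
    return sum(score(p) > score(n) for p, n in pairs) / len(pairs)

def clonal_select(data, dim, generations=60, pop=12, clones=4, seed=0):
    """Clonal-selection sketch: keep a population of weight vectors,
    clone the fittest, mutate the clones (worse parents mutate more),
    and reselect the best each generation."""
    rng = random.Random(seed)
    popn = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(generations):
        elite = sorted(popn, key=lambda w: -pairwise_accuracy(w, data))[:pop // 2]
        offspring = []
        for rank, w in enumerate(elite):
            step = 0.3 * (rank + 1) / len(elite)   # lower-ranked parents mutate more
            for _ in range(clones):
                offspring.append([wi + rng.gauss(0, step) for wi in w])
        popn = sorted(elite + offspring,
                      key=lambda w: -pairwise_accuracy(w, data))[:pop]
    return popn[0]

# hypothetical feature vectors for relevant ("pos") and irrelevant ("neg") docs
data = {"pos": [(0.9, 0.8), (0.7, 0.9), (0.8, 0.6)],
        "neg": [(0.2, 0.1), (0.3, 0.2), (0.1, 0.4)]}
best = clonal_select(data, dim=2)
```

    Keeping the elite in the population makes the best fitness non-decreasing across generations, while fitness-scaled mutation balances exploitation of good ranking functions against exploration.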

  4. Numerical analysis and experimental studies on solenoid common rail diesel injector with worn control valve

    NASA Astrophysics Data System (ADS)

    Krivtsov, S. N.; Yakimov, I. V.; Ozornin, S. P.

    2018-03-01

    A mathematical model of a solenoid common rail fuel injector was developed. Unlike existing models, it simulates control valve wear. A common rail injector of the 0445110376 series (Cummins ISF 2.8 diesel engine), produced by Bosch, was used as the research object. Injector parameters (fuel delivery and back leakage) were determined by computational and experimental methods. The GT-Suite model's average R2 is 0.93, meaning that it predicts the injection-rate shape very accurately for both nominal and marginal technical conditions of an injector. Numerical analysis and experimental studies showed that control valve wear increases back leakage and fuel delivery (especially at 160 MPa). Regression models relating fuel delivery and back leakage to fuel pressure and energizing time were developed (for nominal and marginal technical conditions).

  5. Solitary waves and double layers in a dusty electronegative plasma.

    PubMed

    Mamun, A A; Shukla, P K; Eliasson, B

    2009-10-01

    A dusty electronegative plasma containing Boltzmann electrons, Boltzmann negative ions, cold mobile positive ions, and negatively charged stationary dust has been considered. The basic features of arbitrary amplitude solitary waves (SWs) and double layers (DLs), which have been found to exist in such a dusty electronegative plasma, have been investigated by the pseudopotential method. The small amplitude limit has also been considered in order to study the small amplitude SWs and DLs analytically. It has been shown that under certain conditions, DLs do not exist, which is in good agreement with the experimental observations of Ghim and Hershkowitz [Y. Ghim (Kim) and N. Hershkowitz, Appl. Phys. Lett. 94, 151503 (2009)].

  6. Method, accuracy and limitation of computer interaction in the operating room by a navigated surgical instrument.

    PubMed

    Hurka, Florian; Wenger, Thomas; Heininger, Sebastian; Lueth, Tim C

    2011-01-01

    This article describes a new interaction device for surgical navigation systems, the so-called navigation mouse system. The idea is to use a tracked instrument of a surgical navigation system, such as a pointer, to control the software. The new interaction system extends existing navigation systems with a microcontroller unit, which uses the existing communication line to extract the 3D information of an instrument needed to compute cursor positions and click events analogous to those of a PC mouse. These positions and events are used to operate the navigation system. An experimental setup demonstrates the achievable accuracy of the new mouse system.

  7. Water Mapping Using Multispectral Airborne LIDAR Data

    NASA Astrophysics Data System (ADS)

    Yan, W. Y.; Shaker, A.; LaRocque, P. E.

    2018-04-01

    This study investigates the use of the world's first multispectral airborne LiDAR sensor, the Optech Titan manufactured by Teledyne Optech, for automatic land-water classification, with a particular focus on nearshore regions and river environments. Although recent studies have utilized airborne LiDAR data for shoreline detection and water surface mapping, most of them either perform experimental testing only on clipped data subsets or rely on fusion with aerial/satellite imagery. In addition, most existing approaches require manual intervention or existing tidal/datum data to collect training samples. To tackle these drawbacks, we propose and develop an automatic data processing workflow for land-water classification using multispectral airborne LiDAR data. Depending on the nature of the study scene, two methods are proposed for automatic training data selection. The first utilizes the elevation/intensity histogram fitted with a Gaussian mixture model (GMM) to preliminarily split the land and water bodies. The second relies on a newly developed scan line elevation intensity ratio (SLIER) to estimate the water surface data points. Regardless of the training method used, feature spaces can be constructed from the multispectral LiDAR intensity, elevation, and other features derived from these parameters. The comprehensive workflow was tested with two datasets covering different nearshore and river environments, where the overall accuracy was better than 96%.
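The first training-data selection idea above (splitting the elevation/intensity histogram with a two-component GMM) can be sketched with a minimal 1-D EM fit. The synthetic intensity values and the "lower mean = water" labeling rule below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def fit_two_gaussians(v, iters=200):
    """Minimal 1-D two-component Gaussian mixture fitted by EM."""
    v = np.asarray(v, dtype=float)
    mu = np.array([v.min(), v.max()])            # spread-out initial means
    sd = np.full(2, v.std()) + 1e-9
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        # (the common 1/sqrt(2*pi) factor cancels in the normalization).
        pdf = w * np.exp(-0.5 * ((v[:, None] - mu) / sd) ** 2) / sd
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: refit weights, means, and standard deviations.
        n = r.sum(axis=0)
        w = n / len(v)
        mu = (r * v[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (v[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-9
    return mu, sd, w

# Synthetic bimodal "intensity" returns: water assumed dimmer than land.
rng = np.random.default_rng(1)
vals = np.concatenate([rng.normal(10, 1, 500),   # water-like returns
                       rng.normal(40, 3, 500)])  # land-like returns
mu, sd, w = fit_two_gaussians(vals)
water_mask = np.abs(vals[:, None] - mu).argmin(axis=1) == np.argmin(mu)
```

With well-separated modes, the nearest-mean split recovers the two classes almost perfectly; in practice the components would be fitted to the real LiDAR histogram instead.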

  8. Integration of system identification and robust controller designs for flexible structures in space

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Lew, Jiann-Shiun

    1990-01-01

    An approach is developed for using experimental data to identify a reduced-order model and its model error for robust controller design. The approach involves three steps. First, an approximately balanced model is identified using the Eigensystem Realization Algorithm. Second, the model error is calculated and described in the frequency domain in terms of the H(infinity) norm. Third, a pole placement technique combined with an H(infinity) control method is applied to design a controller for the considered system. A set of experimental data from an existing setup, namely the Mini-Mast system, is used to illustrate and verify the approach.

  9. Aerodynamic analysis of the Darrieus wind turbines including dynamic-stall effects

    NASA Astrophysics Data System (ADS)

    Paraschivoiu, Ion; Allet, Azeddine

    Experimental data for a 17-m wind turbine are compared with aerodynamic performance predictions obtained with two dynamic stall methods which are based on numerical correlations of the dynamic stall delay with the pitch rate parameter. Unlike the Gormont (1973) model, the MIT model predicts that dynamic stall does not occur in the downwind part of the turbine, although it does exist in the upwind zone. The Gormont model is shown to overestimate the aerodynamic coefficients relative to the MIT model. The MIT model is found to accurately predict the dynamic-stall regime, which is characterized by a plateau oscillating near values of the experimental data for the rotor power vs wind speed at the equator.

  10. Elevated temperature biaxial fatigue

    NASA Technical Reports Server (NTRS)

    Jordan, E. H.

    1984-01-01

    A three-year experimental program studying elevated-temperature biaxial fatigue of the nickel-based alloy Hastelloy-X has been completed. A new high-temperature fatigue test facility with unique capabilities was developed. Effort was directed toward understanding multiaxial fatigue and correlating the experimental data with existing theories of fatigue failure. The difficult task of predicting fatigue lives under non-proportional loading was used as an ultimate test for the various life prediction methods being considered. The primary means of reaching improved understanding was a series of critical non-proportional loading experiments. It was discovered that the cracking mode switched from cracking primarily on the maximum shear planes at room temperature to cracking on the maximum normal strain planes at 649 C.

  11. Global Network Alignment in the Context of Aging.

    PubMed

    Faisal, Fazle Elahi; Zhao, Han; Milenkovic, Tijana

    2015-01-01

    Analogous to sequence alignment, network alignment (NA) can be used to transfer biological knowledge across species between conserved network regions. NA faces two algorithmic challenges: 1) Which cost function to use to capture "similarities" between nodes in different networks? 2) Which alignment strategy to use to rapidly identify "high-scoring" alignments from all possible alignments? We "break down" existing state-of-the-art methods that use both different cost functions and different alignment strategies to evaluate each combination of their cost functions and alignment strategies. We find that a combination of the cost function of one method and the alignment strategy of another method beats the existing methods. Hence, we propose this combination as a novel superior NA method. Then, since human aging is hard to study experimentally due to long lifespan, we use NA to transfer aging-related knowledge from well annotated model species to poorly annotated human. By doing so, we produce novel human aging-related knowledge, which complements currently available knowledge about aging that has been obtained mainly by sequence alignment. We demonstrate significant similarity between topological and functional properties of our novel predictions and those of known aging-related genes. We are the first to use NA to learn more about aging.

  12. Improving detection of copy-number variation by simultaneous bias correction and read-depth segmentation.

    PubMed

    Szatkiewicz, Jin P; Wang, WeiBo; Sullivan, Patrick F; Wang, Wei; Sun, Wei

    2013-02-01

    Structural variation is an important class of genetic variation in mammals. High-throughput sequencing (HTS) technologies promise to revolutionize copy-number variation (CNV) detection but present substantial analytic challenges. Converging evidence suggests that multiple types of CNV-informative data (e.g. read-depth, read-pair, split-read) need be considered, and that sophisticated methods are needed for more accurate CNV detection. We observed that various sources of experimental biases in HTS confound read-depth estimation, and note that bias correction has not been adequately addressed by existing methods. We present a novel read-depth-based method, GENSENG, which uses a hidden Markov model and negative binomial regression framework to identify regions of discrete copy-number changes while simultaneously accounting for the effects of multiple confounders. Based on extensive calibration using multiple HTS data sets, we conclude that our method outperforms existing read-depth-based CNV detection algorithms. The concept of simultaneous bias correction and CNV detection can serve as a basis for combining read-depth with other types of information such as read-pair or split-read in a single analysis. A user-friendly and computationally efficient implementation of our method is freely available.

  13. A reference estimator based on composite sensor pattern noise for source device identification

    NASA Astrophysics Data System (ADS)

    Li, Ruizhe; Li, Chang-Tsun; Guan, Yu

    2014-02-01

    It has been shown that Sensor Pattern Noise (SPN) can serve as an imaging device fingerprint for source camera identification. Reference SPN estimation is an important procedure within this framework. Most previous works build the reference SPN by averaging the SPNs extracted from 50 blue-sky images. However, this method can be problematic. Firstly, in practice we may face source camera identification in the absence of the imaging cameras and reference SPNs, meaning that only natural images with scene details, rather than blue-sky images, are available for reference SPN estimation. This is challenging because the reference SPN can be severely contaminated by image content. Secondly, the number of available reference images is sometimes too small for existing methods to estimate a reliable reference SPN. In fact, existing methods give little consideration to the number of available reference images, as they were designed for datasets with abundant images. To deal with these problems, this work proposes a novel reference estimator. Experimental results show that the proposed method outperforms methods based on the averaged reference SPN, especially when few reference images are used.

  14. Ultra-fast photon counting with a passive quenching silicon photomultiplier in the charge integration regime

    NASA Astrophysics Data System (ADS)

    Zhang, Guoqing; Lina, Liu

    2018-02-01

    An ultra-fast photon counting method is proposed based on charge integration of the output electrical pulses of passive quenching silicon photomultipliers (SiPMs). Numerical analysis with actual SiPM parameters shows that the maximum photon counting rate of a state-of-the-art passive quenching SiPM can reach ~THz levels, much higher than that of existing photon counting devices. An experimental procedure based on this method is proposed. This photon counting regime of SiPMs is promising in many fields, such as the detection of light power over a large dynamic range.
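In its simplest reading, the charge-integration counting principle amounts to dividing the integrated output charge by the charge of a single fired microcell, n = Q / (G * e). The gain value and names below are illustrative assumptions, not parameters from the paper:

```python
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs

def photons_from_charge(q_total, gain):
    """Mean number of fired SiPM microcells (approximately the number of
    detected photons when crosstalk and afterpulsing are neglected),
    inferred from the integrated output charge via n = Q / (G * e)."""
    return q_total / (gain * E_CHARGE)

# Illustrative numbers: ~1.6 pC integrated at a gain of 1e6.
n = photons_from_charge(1.602176634e-12, 1.0e6)
```

Because the estimate needs only an integrated charge rather than resolved pulses, the count rate is limited by the integrator bandwidth rather than by pulse pile-up.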

  15. A Simple and Robust Method for Partially Matched Samples Using the P-Values Pooling Approach

    PubMed Central

    Kuan, Pei Fen; Huang, Bo

    2013-01-01

    This paper focuses on statistical analyses in scenarios where some samples from the matched pairs design are missing, resulting in partially matched samples. Motivated by the idea of meta-analysis, we recast the partially matched samples as coming from two experimental designs, and propose a simple yet robust approach based on the weighted Z-test to integrate the p-values computed from these two designs. We show that the proposed approach achieves better operating characteristics in simulations and a case study, compared to existing methods for partially matched samples. PMID:23417968
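The weighted Z-test underlying the proposed pooling can be sketched directly. The sqrt(sample-size) weights and one-sided p-value convention below are conventional choices for this test, not details taken from the paper:

```python
from math import sqrt
from scipy.stats import norm

def weighted_z_pool(p1, n1, p2, n2):
    """Pool two one-sided p-values (e.g. from the matched and unmatched
    sub-designs) via a weighted Z-test: z_i = Phi^{-1}(1 - p_i), combined
    as (w1*z1 + w2*z2) / sqrt(w1^2 + w2^2) with weights w_i = sqrt(n_i)."""
    w1, w2 = sqrt(n1), sqrt(n2)
    z1, z2 = norm.isf(p1), norm.isf(p2)
    z = (w1 * z1 + w2 * z2) / sqrt(w1 * w1 + w2 * w2)
    return float(norm.sf(z))

# Two marginally significant sub-designs combine into stronger evidence.
pooled = weighted_z_pool(0.05, 30, 0.05, 30)
```

With equal weights and p1 = p2 = 0.05, the combined statistic is sqrt(2) times the individual z, so the pooled p-value drops to about 0.01.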

  16. The theoretical and experimental study of a material structure evolution in gigacyclic fatigue regime

    NASA Astrophysics Data System (ADS)

    Plekhov, Oleg; Naimark, Oleg; Narykova, Maria; Kadomtsev, Andrey; Betekhtin, Vladimir

    2015-10-01

    The work is devoted to the study of metal structure evolution under the gigacyclic (very high cycle fatigue, VHCF) regime. The mechanical properties of Armco iron samples at different stages of their fatigue life were studied using the acoustic resonance method. Damage accumulation (porosity of the samples) was studied by the hydrostatic weighing method. A statistical model of damage accumulation was proposed to describe the damage accumulation process. The model describes the influence of the sample surface on the location of fatigue crack initiation.

  17. Polarization holograms allow highly efficient generation of complex light beams.

    PubMed

    Ruiz, U; Pagliusi, P; Provenzano, C; Volke-Sepúlveda, K; Cipparrone, Gabriella

    2013-03-25

    We report a viable method to generate complex beams, such as the non-diffracting Bessel and Weber beams, which relies on the encoding of amplitude information, in addition to phase and polarization, using polarization holography. The holograms are recorded in polarization sensitive films by the interference of a reference plane wave with a tailored complex beam, having orthogonal circular polarizations. The high efficiency, the intrinsic achromaticity and the simplicity of use of the polarization holograms make them competitive with respect to existing methods and attractive for several applications. Theoretical analysis, based on the Jones formalism, and experimental results are shown.

  18. Photon Strength Function at Low Energies in 95Mo

    DOE PAGES

    Wiedeking, M.; Bernstein, L. A.; Allmond, J. M.; ...

    2014-05-01

    A new, model-independent experimental method has been developed to determine the energy dependence of the photon strength function. It is designed to study statistical feeding from the quasi-continuum to individual low-lying discrete levels. This new technique is presented, and results for 95Mo are compared to data from the University of Oslo. In particular, questions regarding the existence of the low-energy enhancement in the photon strength function are addressed.

  19. Experimental Methods in Phonosemantics: Preliminary Testing of the Antonymic Hypothesis as a Way of Mediating between the Arbitrary Nature of Linguistic Representation and Aspects of Iconism

    ERIC Educational Resources Information Center

    Freeman, Geremy Richard

    2009-01-01

    The question of whether or not linguistic sounds might convey inherent meaning has never conclusively been resolved. This is an empirical study weighing evidence for and against the existence of phonosemantics, also known as sound symbolism or iconism. Contrary to well established principles such as the arbitrary nature of the sign and the double…

  20. The Environmental Assessment and Management (TEAM) Guide: Montana Supplement. Revision

    DTIC Science & Technology

    2010-01-01

    pollution control equipment are operating as designed. AE.37.3.MT. Non-exempt existing small municipal combustion units must meet...species. NON-ESSENTIAL EXPERIMENTAL POPULATION (XN) - A population of a listed species reintroduced into a specific area that receives more flexible...been triple rinsed or processed by methods approved by the Department. 2. Group III wastes include wood wastes and non-water-soluble solids. These

  1. Influence of boundary conditions on the existence and stability of minimal surfaces of revolution made of soap films

    NASA Astrophysics Data System (ADS)

    Salkin, Louis; Schmit, Alexandre; Panizza, Pascal; Courbin, Laurent

    2014-09-01

    Because of surface tension, soap films seek the shape that minimizes their surface energy and thus their surface area. This mathematical postulate allows one to predict the existence and stability of simple minimal surfaces. After briefly recalling classical results obtained in the case of symmetric catenoids that span two circular rings with the same radius, we discuss the role of boundary conditions on such shapes, working with two rings having different radii. We then investigate the conditions of existence and stability of other shapes that include two portions of catenoids connected by a planar soap film, and half-symmetric catenoids, for which we introduce a method of observation. We report a variety of experimental results, including metastability (a hysteretic evolution of the shape taken by a soap film) explained using simple physical arguments. Working by analogy with the theory of phase transitions, we conclude by discussing universal behaviors of the studied minimal surfaces in the vicinity of their existence thresholds.
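The classical symmetric-catenoid result recalled above can be checked numerically: a catenary y(x) = c cosh(x/c) spanning two coaxial rings of radius R at separation h must satisfy c cosh(h/(2c)) = R, and a solution exists only while h/R stays below a critical value near 1.3255. A minimal numpy sketch (the search-grid bounds are ad hoc assumptions):

```python
import numpy as np

def symmetric_catenoid_exists(R, h):
    """A symmetric catenoid spans two coaxial rings of radius R separated
    by h iff c*cosh(h/(2c)) = R has a root in c, i.e. iff the minimum
    over c of c*cosh(h/(2c)) does not exceed R."""
    c = np.linspace(0.05 * h, 2.0 * max(R, h), 20000)  # ad hoc grid
    return float(np.min(c * np.cosh(h / (2.0 * c)))) <= R

# The film disappears once h/R crosses a critical ratio near 1.3255.
exists_close = symmetric_catenoid_exists(1.0, 1.0)  # below threshold
exists_far = symmetric_catenoid_exists(1.0, 1.5)    # beyond threshold
```

Scanning h for fixed R with this check reproduces the well-known collapse threshold h/R of about 1.3255.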

  2. Determining the semantic similarities among Gene Ontology terms.

    PubMed

    Taha, Kamal

    2013-05-01

    We present in this paper novel techniques that determine the semantic relationships among Gene Ontology (GO) terms. We implemented these techniques in a prototype system called GoSE, which resides between the user application and the GO database. Given a set S of GO terms, GoSE returns another set S' of GO terms, where each term in S' is semantically related to each term in S. Most current research focuses on determining the semantic similarities among GO terms based solely on their IDs and proximity to one another in the GO graph structure, while overlooking the contexts of the terms, which may lead to erroneous results. The context of a GO term T is the set of other terms whose existence in the GO graph structure depends on T. We propose novel techniques that determine the contexts of terms based on the concept of existence dependency, and present a stack-based sort-merge algorithm employing these techniques for determining the semantic similarities among GO terms. We evaluated GoSE experimentally and compared it with three existing methods. Results of measuring the semantic similarities among genes in KEGG and Pfam pathways, retrieved from the DBGET and Sanger Pfam databases respectively, show that our method outperforms the other three in recall and precision.

  3. A simple method for comparing immunogold distributions in two or more experimental groups illustrated using GLUT1 labelling of isolated trophoblast cells.

    PubMed

    Mayhew, T M; Desoye, G

    2004-07-01

    Colloidal gold labelling, combined with transmission electron microscopy, is a valuable technique for high-resolution immunolocalization of identified antigens in different subcellular compartments. Whilst the technique has been applied to placental tissues, few quantitative studies have been made. Subcellular compartments exist in three main categories (viz. organelles, membranes, filaments/tubules) and this affects the possibilities for quantification. Generally, gold particles are counted in order to compare either (a) compartments within an experimental group or (b) compartmental labelling distributions between groups. For the former, recent developments make it possible to test whether or not there is differential (non-random) labelling of compartments. The methods (relative labelling index and labelling density) are ideally suited to analysing label in one category of compartment (organelle or membrane or filament) but may be adapted to deal with a mixture of categories. They also require information about compartment size (e.g. profile area or trace length). Here, a simple and efficient method for drawing between-group comparisons of labelling distributions is presented. The method does not require information about compartment size or specimen magnification. It relies on multistage random sampling of specimens and unbiased counting of gold particles associated with different compartments. Distributions of observed gold counts in different experimental groups are compared by contingency table analysis, with degrees of freedom for chi-squared (chi(2)) values determined by the numbers of compartments and experimental groups. Compartmental values of chi(2) which contribute substantially to the total chi(2) identify the principal subcellular sites of between-group differences. The method is illustrated using datasets from immunolabelling studies on the localization of GLUT1 glucose transporters in cultured human trophoblast cells exposed to different treatments.
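The contingency-table comparison described above can be sketched with entirely hypothetical gold-particle counts (the compartment names and numbers are invented for illustration):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical gold-particle counts over three subcellular compartments
# (rows: two experimental groups; columns: e.g. membrane, cytoplasm, nucleus).
counts = np.array([[120,  60, 20],   # group A
                   [ 70, 110, 20]])  # group B

chi2, p, dof, expected = chi2_contingency(counts)

# Per-cell contributions to the total chi-squared: large values flag the
# compartments that drive the between-group difference.
cell_chi2 = (counts - expected) ** 2 / expected
```

Here dof = (groups - 1) * (compartments - 1) = 2, matching the rule in the abstract, and inspecting `cell_chi2` localizes the difference to specific compartments.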

  4. An Automated System for Skeletal Maturity Assessment by Extreme Learning Machines

    PubMed Central

    Mansourvar, Marjan; Shamshirband, Shahaboddin; Raj, Ram Gopal; Gunalan, Roshan; Mazinani, Iman

    2015-01-01

    Assessing skeletal age is a subjective and tedious examination process. Hence, automated assessment methods have been developed to replace manual evaluation in medical applications. In this study, a new fully automated method based on content-based image retrieval and extreme learning machines (ELM) is designed and adapted to assess skeletal maturity. The main novelty of this approach is that it overcomes the segmentation problem suffered by existing systems. The estimation results of ELM models are compared with those of genetic programming (GP) and artificial neural network (ANN) models. The experimental results show improved assessment accuracy over GP and ANN, while generalization capability is retained with the ELM approach. Moreover, the results indicate that the developed ELM model can be used confidently in further work on formulating novel models of skeletal age assessment strategies. According to the experimental results, the new method can learn many hundreds of times faster than traditional learning methods and has sufficient overall performance in many respects. Applying ELM is thus particularly promising as an alternative method for evaluating skeletal age. PMID:26402795

  5. Theory and Simulation of A Novel Viscosity Measurement Method for High Temperature Semiconductor

    NASA Technical Reports Server (NTRS)

    Lin, Bochuan; Li, Chao; Ban, Heng; Scripa, Rose; Zhu, Shen; Su, Ching-Hua; Lehoczky, S. L.; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    The properties of molten semiconductors are good indicators of material structure transformation and hysteresis under temperature variations. Viscosity, one of the most important properties, is difficult to measure because of the high temperature, high pressure, and vapor toxicity of the melts. Recently, a novel method was developed that applies a rotating magnetic field to a melt sealed in a suspended quartz ampoule and measures the transient torque exerted by the rotating melt flow on the ampoule wall. The method was designed to measure viscosity over a short time period, which is essential for evaluating temperature hysteresis. This paper compares the theoretical prediction of melt flow and ampoule oscillation with the experimental data. A theoretical model was established, and the coupled fluid flow and ampoule torsional vibration equations were solved numerically. The simulation results showed good agreement with the experimental data. The results also showed that both electrical conductivity and viscosity can be calculated by fitting the theoretical results to the experimental data. The transient velocity of the melt caused by the rotating magnetic field was found to reach equilibrium in about half a minute, and the viscosity of the melt could be calculated from the amplitude of oscillation. This would allow the measurement of viscosity in a minute or so, in contrast to the existing oscillation cup method, which requires about an hour for one measurement.

  6. Single-Image Distance Measurement by a Smart Mobile Device.

    PubMed

    Chen, Shangwen; Fang, Xianyong; Shen, Jianbing; Wang, Linbo; Shao, Ling

    2017-12-01

    Existing distance measurement methods either require multiple images and special photographing poses or only measure height with a special view configuration. We propose a novel image-based method that can measure various types of distance from a single image captured by a smart mobile device. The embedded accelerometer is used to determine the view orientation of the device, so that pixels can be back-projected to the ground, thanks to an efficient calibration method using two known distances. The distance in pixels is then transformed to a real distance in centimeters with a linear model parameterized by the magnification ratio. Various types of distance specified in the image can be computed accordingly. Experimental results demonstrate the effectiveness of the proposed method.
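The pixel-to-real-distance step can be illustrated with a minimal two-point linear calibration. The function name and the specific linear form d_cm = m * d_px + b are illustrative assumptions, not the paper's exact model:

```python
def calibrate_two_points(px_a, cm_a, px_b, cm_b):
    """Fit the linear pixel-to-centimeter map d_cm = m * d_px + b from
    two reference distances measured on the ground plane, and return
    a converter function for new pixel measurements."""
    m = (cm_b - cm_a) / (px_b - px_a)  # magnification-ratio-like slope
    b = cm_a - m * px_a
    return lambda px: m * px + b

# Two hypothetical references: 100 px <-> 50 cm and 300 px <-> 150 cm.
to_cm = calibrate_two_points(100, 50.0, 300, 150.0)
```

Any further pixel length measured in the same calibrated view can then be converted with `to_cm`.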

  7. The Matrix Element Method: Past, Present, and Future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.

    2013-07-12

    The increasing use of multivariate methods, and in particular the Matrix Element Method (MEM), represents a revolution in experimental particle physics. With continued exponential growth in computing capabilities, the use of sophisticated multivariate methods, already common, will soon become ubiquitous and ultimately almost compulsory. While the existence of sophisticated algorithms for disentangling signal and background might naively suggest a diminished role for theorists, the use of the MEM, with its inherent connection to the calculation of differential cross sections, will benefit from collaboration between theorists and experimentalists. In this white paper, we briefly describe the MEM and some of its recent uses, note some current issues and potential resolutions, and speculate about exciting future opportunities.

  8. Improved accuracy of supervised CRM discovery with interpolated Markov models and cross-species comparison.

    PubMed

    Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S; Sinha, Saurabh

    2011-12-01

    Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, 'enhancers'), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for 'motif-blind' CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to 'supervise' the search. We propose a new statistical method, based on 'Interpolated Markov Models', for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers. © The Author(s) 2011. Published by Oxford University Press.
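The enrichment assessment above rests on the Hypergeometric test; a plain (uncorrected) version of that test can be sketched as follows. The paper's extension additionally accounts for intergenic-length variability, which this sketch omits, and the gene counts are hypothetical:

```python
from scipy.stats import hypergeom

def enrichment_p(N, K, n, k):
    """One-sided Hypergeometric enrichment p-value: the probability of
    observing >= k annotated genes among n sampled genes, when K of the
    N genes in the universe carry the annotation."""
    return hypergeom.sf(k - 1, N, K, n)

# Hypothetical numbers: 8 of 50 CRM-neighboring genes carry an expression
# annotation held by 200 of the 10000 genes overall (expected count ~1).
p = enrichment_p(N=10000, K=200, n=50, k=8)
```

A small p-value indicates that the predicted CRMs' neighboring genes are enriched for the expected expression pattern well beyond chance.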

  9. Predicting New Indications for Approved Drugs Using a Proteo-Chemometric Method

    PubMed Central

    Dakshanamurthy, Sivanesan; Issa, Naiem T; Assefnia, Shahin; Seshasayee, Ashwini; Peters, Oakland J; Madhavan, Subha; Uren, Aykut; Brown, Milton L; Byers, Stephen W

    2012-01-01

    The most effective way to move from target identification to the clinic is to identify already approved drugs with the potential for activating or inhibiting unintended targets (repurposing or repositioning). This is usually achieved by high throughput chemical screening, transcriptome matching or simple in silico ligand docking. We now describe a novel rapid computational proteo-chemometric method called “Train, Match, Fit, Streamline” (TMFS) to map new drug-target interaction space and predict new uses. The TMFS method combines shape, topology and chemical signatures, including docking score and functional contact points of the ligand, to predict potential drug-target interactions with remarkable accuracy. Using the TMFS method, we performed extensive molecular fit computations on 3,671 FDA approved drugs across 2,335 human protein crystal structures. The TMFS method predicts drug-target associations with 91% accuracy for the majority of drugs. Over 58% of the known best ligands for each target were correctly predicted as top ranked, followed by 66%, 76%, 84% and 91% for agents ranked in the top 10, 20, 30 and 40, respectively, out of all 3,671 drugs. Drugs ranked in the top 1–40, that have not been experimentally validated for a particular target now become candidates for repositioning. Furthermore, we used the TMFS method to discover that mebendazole, an anti-parasitic with recently discovered and unexpected anti-cancer properties, has the structural potential to inhibit VEGFR2. We confirmed experimentally that mebendazole inhibits VEGFR2 kinase activity as well as angiogenesis at doses comparable with its known effects on hookworm. TMFS also predicted, and was confirmed with surface plasmon resonance, that dimethyl celecoxib and the anti-inflammatory agent celecoxib can bind cadherin-11, an adhesion molecule important in rheumatoid arthritis and poor prognosis malignancies for which no targeted therapies exist. 
We anticipate that expanding our TMFS method to the >27,000 clinically active agents available worldwide across all targets will be most useful in the repositioning of existing drugs for new therapeutic targets. PMID:22780961

  10. Influence analysis in quantitative trait loci detection.

    PubMed

    Dou, Xiaoling; Kuriki, Satoshi; Maeno, Akiteru; Takada, Toyoyuki; Shiroishi, Toshihiko

    2014-07-01

    This paper presents systematic methods for the detection of influential individuals that affect the log odds (LOD) score curve. We derive general formulas of influence functions for profile likelihoods and introduce them into two standard quantitative trait locus detection methods: the interval mapping method and single marker analysis. Besides influence analysis on specific LOD scores, we also develop influence analysis methods on the shape of the LOD score curves. A simulation-based method is proposed to assess the significance of the influence of the individuals. These methods are shown to be useful in the influence analysis of a real dataset of an experimental population from an F2 mouse cross. By receiver operating characteristic analysis, we confirm that the proposed methods show better performance than existing diagnostics. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
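
    The case-deletion idea behind influence diagnostics can be sketched with a toy leave-one-out computation. This is only an illustration of the general principle; the paper derives analytic influence functions for profile likelihoods and LOD score curves, and the statistic (a simple sample mean) and data below are assumptions made for the example.

```python
# Toy leave-one-out influence sketch: how much does each observation
# shift a summary statistic (here simply the sample mean) when it is
# deleted?  The paper's analytic influence functions for profile
# likelihoods are not reproduced here.

def loo_influence(values):
    """Return |stat(all) - stat(all minus i)| for each observation i."""
    n = len(values)
    full = sum(values) / n
    influences = []
    for i in range(n):
        rest = values[:i] + values[i + 1:]
        influences.append(abs(full - sum(rest) / (n - 1)))
    return influences

phenotypes = [1.0, 1.1, 0.9, 1.05, 5.0]  # last individual is an outlier
infl = loo_influence(phenotypes)
most_influential = max(range(len(infl)), key=infl.__getitem__)
```

    As expected, the outlying individual dominates the influence measure; a simulation-based test, as proposed in the paper, would then ask whether such an influence value is larger than expected by chance.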

  11. A Practical, Robust Methodology for Acquiring New Observation Data Using Computationally Expensive Groundwater Models

    NASA Astrophysics Data System (ADS)

    Siade, Adam J.; Hall, Joel; Karelse, Robert N.

    2017-11-01

    Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. A number of data-worth and experimental design strategies have been developed for this purpose, but these studies often ignore issues related to real-world groundwater models, such as computational expense, existing observation data, and high parameter dimensionality. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established D-optimality criterion and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification, and a heuristic methodology, based on the concept of the greedy algorithm, is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
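
    The greedy, minimax flavour of such a design heuristic can be sketched in a few lines. This toy scores each candidate observation through a precomputed uncertainty table, a deliberate simplification: the actual methodology couples the choices through Null-Space Monte Carlo posterior samples and a groundwater model, none of which is reproduced here, and all names and numbers below are illustrative assumptions.

```python
# Greedy minimax design sketch (illustrative only).
# remaining[s][c] = uncertainty left for posterior parameter sample s
# if candidate observation c were added.  At each step we greedily pick
# the candidate whose worst case over the samples (minimax) is smallest.

def greedy_minimax(remaining, k):
    n_candidates = len(remaining[0])
    chosen = []
    for _ in range(k):
        best, best_worst = None, float("inf")
        for c in range(n_candidates):
            if c in chosen:
                continue
            worst = max(row[c] for row in remaining)  # robust criterion
            if worst < best_worst:
                best, best_worst = c, worst
        chosen.append(best)
    return chosen

# Two posterior samples, three candidate observation locations.
remaining = [[3.0, 1.0, 2.0],
             [2.0, 4.0, 1.0]]
design = greedy_minimax(remaining, k=2)
```

    Candidate 1 looks best for the first sample but worst for the second, so the minimax rule avoids it; a plain average-case criterion would not.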

  12. Prediction of enhancer-promoter interactions via natural language processing.

    PubMed

    Zeng, Wanwen; Wu, Mengmeng; Jiang, Rui

    2018-05-09

    Precise identification of three-dimensional genome organization, especially enhancer-promoter interactions (EPIs), is important to deciphering gene regulation, cell differentiation and disease mechanisms. Currently, it is a challenging task to distinguish true interactions from other nearby non-interacting ones, since the power of traditional experimental methods is limited by low resolution or low throughput. We propose a novel computational framework, EP2vec, to assay three-dimensional genomic interactions. We first extract sequence embedding features, defined as fixed-length vector representations learned from variable-length sequences using an unsupervised deep learning method from natural language processing. Then, we train a classifier to predict EPIs using the learned representations in a supervised way. Experimental results demonstrate that EP2vec obtains F1 scores ranging from 0.841 to 0.933 on different datasets, outperforming existing methods. We prove the robustness of sequence embedding features by carrying out sensitivity analysis. In addition, we identify motifs that represent cell line-specific information through analysis of the learned sequence embedding features using an attention mechanism. Finally, we show that even better performance, with F1 scores of 0.889 to 0.940, can be achieved by combining sequence embedding features and experimental features. EP2vec sheds light on feature extraction for DNA sequences of arbitrary lengths and provides a powerful approach for EPI identification.
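
    The tokenization step, turning a variable-length DNA sequence into a fixed-length vector over k-mer "words", can be conveyed with a simpler bag-of-k-mers featurization. Note this count vector is an assumption for illustration: EP2vec itself learns dense paragraph-vector embeddings of the k-mer sentences rather than raw counts.

```python
# Bag-of-k-mers featurization sketch.  EP2vec learns unsupervised
# paragraph-vector embeddings of k-mer "words"; this simpler count
# vector only illustrates how a variable-length DNA sequence becomes a
# fixed-length feature vector.
from itertools import product

def kmer_vector(seq, k=2):
    """Fixed-length count vector over all 4**k possible k-mers."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    vec = [0] * len(kmers)
    for i in range(len(seq) - k + 1):
        window = seq[i:i + k]
        if window in index:          # skip windows containing N, etc.
            vec[index[window]] += 1
    return vec

v = kmer_vector("ACGTAC", k=2)
```

    The vector length is 4**k regardless of sequence length, which is what lets a downstream classifier consume sequences of arbitrary size.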

  13. Abnormality detection of mammograms by discriminative dictionary learning on DSIFT descriptors.

    PubMed

    Tavakoli, Nasrin; Karimi, Maryam; Nejati, Mansour; Karimi, Nader; Reza Soroushmehr, S M; Samavi, Shadrokh; Najarian, Kayvan

    2017-07-01

    Detection and classification of breast lesions using mammographic images is one of the most difficult tasks in medical image processing. A number of learning and non-learning methods have been proposed for detecting and classifying these lesions. However, the accuracy of the detection/classification still needs improvement. In this paper we propose a powerful classification method based on sparse learning to diagnose breast cancer in mammograms. For this purpose, a supervised discriminative dictionary learning approach is applied to dense scale invariant feature transform (DSIFT) features. A linear classifier is also simultaneously learned with the dictionary, which can effectively classify the sparse representations. Our experimental results show the superior performance of our method compared to existing approaches.

  14. Direct 2-D reconstructions of conductivity and permittivity from EIT data on a human chest.

    PubMed

    Herrera, Claudia N L; Vallejo, Miguel F M; Mueller, Jennifer L; Lima, Raul G

    2015-01-01

    A novel direct D-bar reconstruction algorithm is presented for reconstructing a complex conductivity distribution from 2-D EIT data. The method is applied to simulated data and archival human chest data. Permittivity reconstructions with the aforementioned method and conductivity reconstructions with the previously existing nonlinear D-bar method for real-valued conductivities depicting ventilation and perfusion in the human chest are presented. This constitutes the first fully nonlinear D-bar reconstructions of human chest data and the first D-bar permittivity reconstructions of experimental data. The results of the human chest data reconstructions are compared on a circular domain versus a chest-shaped domain.

  15. Iris recognition based on robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
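
    The sparsity concentration index (SCI) used for validation admits a compact sketch. The formula below is the standard SCI from sparse-representation classification; whether the paper applies exactly this normalization is an assumption, and the coefficient blocks are made up for illustration.

```python
# Sparsity concentration index (SCI) sketch.  coeffs_per_class holds
# the block of sparse coefficients associated with each enrolled class;
# SCI is near 1 when the energy concentrates in a single class (a
# trustworthy match) and near 0 when it is spread evenly (reject).

def sci(coeffs_per_class):
    k = len(coeffs_per_class)
    l1_per_class = [sum(abs(c) for c in block) for block in coeffs_per_class]
    total = sum(l1_per_class)
    if total == 0 or k < 2:
        return 0.0
    return (k * max(l1_per_class) / total - 1) / (k - 1)

concentrated = sci([[0.9, 0.8], [0.0, 0.05], [0.0, 0.0]])  # clear match
diffuse = sci([[0.3, 0.3], [0.3, 0.3], [0.3, 0.3]])        # ambiguous
```

    A recognition result would be accepted only when the SCI exceeds some threshold, rejecting ambiguous decompositions.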

  16. Spectral algorithm for non-destructive damage localisation: Application to an ancient masonry arch model

    NASA Astrophysics Data System (ADS)

    Masciotta, Maria-Giovanna; Ramos, Luís F.; Lourenço, Paulo B.; Vasta, Marcello

    2017-02-01

    Structural monitoring and vibration-based damage identification methods are fundamental tools for condition assessment and early-stage damage identification, especially when dealing with the conservation of historical constructions and the maintenance of strategic civil structures. However, despite substantial advances in the field, several issues must still be addressed to broaden the application range of such tools and to establish their reliability. This study deals with the experimental validation of a novel method for non-destructive damage identification. The method is based on the use of spectral output signals and was recently validated by the authors through numerical simulation. After a brief insight into the basic principles of the proposed approach, the spectral-based technique is applied to identify the experimental damage induced on a masonry arch through statically increasing loading. Once the direct and cross spectral density functions of the nodal response processes are estimated, the system's output power spectrum matrix is built and decomposed into eigenvalues and eigenvectors. The present study points out how the extracted spectral eigenparameters contribute to the damage analysis, allowing detection of the occurrence of damage and location of the target points where cracks appear during the experimental tests. The sensitivity of the spectral formulation to the level of noise in the modal data is investigated and discussed. As a final evaluation criterion, the results from the spectrum-driven method are compared with those obtained from existing non-model-based damage identification methods.

  17. ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics

    PubMed Central

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.

    2014-01-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity. PMID:24992156
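
    The basic construction, subpopulations sharing one mechanistic ODE but differing in kinetic parameters, with the population readout formed as a weighted mixture, can be sketched minimally. The one-state model dx/dt = k(1 - x), the Euler integrator, and all rates and weights below are assumptions for illustration, not the NGF/Erk1/2 pathway model of the paper.

```python
# ODE-constrained mixture sketch (illustrative).  Two subpopulations
# share the same ODE  dx/dt = k * (1 - x)  but differ in the rate k;
# the population-level readout is a weighted mixture of the
# subpopulation means.

def euler_response(k, t_end, dt=0.001):
    """Forward-Euler integration of dx/dt = k * (1 - x), x(0) = 0."""
    x, t = 0.0, 0.0
    while t < t_end:
        x += dt * k * (1.0 - x)
        t += dt
    return x

def mixture_response(weights, rates, t_end):
    return sum(w * euler_response(k, t_end) for w, k in zip(weights, rates))

# 30% slow responders (k=0.5), 70% fast responders (k=2.0).
resp = mixture_response([0.3, 0.7], [0.5, 2.0], t_end=1.0)
```

    Fitting such a model means adjusting the weights and kinetic rates so the mixture matches measured single-cell distributions across experimental conditions simultaneously.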

  18. A low delay transmission method of multi-channel video based on FPGA

    NASA Astrophysics Data System (ADS)

    Fu, Weijian; Wei, Baozhi; Li, Xiaobin; Wang, Quan; Hu, Xiaofei

    2018-03-01

    In order to guarantee the fluency of multi-channel video transmission in video monitoring scenarios, we designed a video format conversion method based on FPGA, together with DMA scheduling for the video data, which reduces the overall video transmission delay. To save time in the conversion process, the parallel capability of the FPGA is exploited for video format conversion. To improve the direct memory access (DMA) write transmission rate of the PCIe bus, a DMA scheduling method based on an asynchronous command buffer is proposed. The experimental results show that the proposed low delay transmission method based on FPGA increases the DMA write transmission rate by 34% compared with the existing method, reducing the overall video delay to 23.6 ms.

  19. Maximum likelihood estimation of protein kinetic parameters under weak assumptions from unfolding force spectroscopy experiments

    NASA Astrophysics Data System (ADS)

    Aioanei, Daniel; Samorì, Bruno; Brucale, Marco

    2009-12-01

    Single molecule force spectroscopy (SMFS) is extensively used to characterize the mechanical unfolding behavior of individual protein domains under applied force by pulling chimeric polyproteins consisting of identical tandem repeats. Constant velocity unfolding SMFS data can be employed to reconstruct the protein unfolding energy landscape and kinetics. The methods applied so far require the specification of a single stretching force increase function, either theoretically derived or experimentally inferred, which must then be assumed to accurately describe the entirety of the experimental data. The very existence of a suitable optimal force model, even in the context of a single experimental data set, is still questioned. Herein, we propose a maximum likelihood (ML) framework for the estimation of protein kinetic parameters which can accommodate all the established theoretical force increase models. Our framework does not presuppose the existence of a single force characteristic function. Rather, it can be used with a heterogeneous set of functions, each describing the protein behavior in the stretching time range leading to one rupture event. We propose a simple way of constructing such a set of functions via piecewise linear approximation of the SMFS force vs time data and we prove the suitability of the approach both with synthetic data and experimentally. Additionally, when the spontaneous unfolding rate is the only unknown parameter, we find a correction factor that eliminates the bias of the ML estimator while also reducing its variance. Finally, we investigate which of several time-constrained experiment designs leads to better estimators.
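
    In the special case of a force-independent (constant) unfolding rate, the ML estimator has a closed form, which gives a feel for the simplest corner of such a framework: rupture times are then exponentially distributed and the ML estimate is the event count over the total observed time. The paper's general machinery for force-dependent rates with piecewise-linear force-vs-time approximations is not reproduced; the data below are illustrative.

```python
# Special-case ML sketch: for a constant unfolding rate k0, rupture
# times t_i are i.i.d. exponential with rate k0, and maximising the
# log-likelihood  sum(log k0 - k0 * t_i)  gives  k0_hat = N / sum(t_i).

def ml_constant_rate(rupture_times):
    """Closed-form ML estimate of a constant unfolding rate (1/s)."""
    return len(rupture_times) / sum(rupture_times)

k0_hat = ml_constant_rate([0.5, 1.0, 1.5, 2.0])  # seconds; mean 1.25 s
```

    With a force-dependent rate the likelihood no longer has a closed form, which is where the piecewise-linear force description and numerical maximisation come in.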

  20. Assessment of radiation shield integrity of DD/DT fusion neutron generator facilities by Monte Carlo and experimental methods

    NASA Astrophysics Data System (ADS)

    Srinivasan, P.; Priya, S.; Patel, Tarun; Gopalakrishnan, R. K.; Sharma, D. N.

    2015-01-01

    DD/DT fusion neutron generators are used as sources of 2.5 MeV/14.1 MeV neutrons in experimental laboratories for various applications. Detailed knowledge of the radiation dose rates around the neutron generators are essential for ensuring radiological protection of the personnel involved with the operation. This work describes the experimental and Monte Carlo studies carried out in the Purnima Neutron Generator facility of the Bhabha Atomic Research Center (BARC), Mumbai. Verification and validation of the shielding adequacy was carried out by measuring the neutron and gamma dose-rates at various locations inside and outside the neutron generator hall during different operational conditions both for 2.5-MeV and 14.1-MeV neutrons and comparing with theoretical simulations. The calculated and experimental dose rates were found to agree with a maximum deviation of 20% at certain locations. This study has served in benchmarking the Monte Carlo simulation methods adopted for shield design of such facilities. This has also helped in augmenting the existing shield thickness to reduce the neutron and associated gamma dose rates for radiological protection of personnel during operation of the generators at higher source neutron yields up to 1 × 1010 n/s.

  1. Tele-existence and/or cybernetic interface studies in Japan

    NASA Technical Reports Server (NTRS)

    Tachi, Susumu

    1991-01-01

    Tele-existence aims at a natural and efficient remote control of robots by providing the operator with a real time sensation of presence. It is an advanced type of teleoperation system which enables a human operator at the controls to perform remote manipulation tasks dexterously with the feeling that he or she exists in one of the remote anthropomorphic robots in the remote environment, e.g., in a hostile environment such as those of nuclear radiation, high temperature, and deep space. In order to study the use of the tele-existence system in an artificially constructed environment, a visual tele-existence simulator has been designed, a pseudo-real-time binocular solid model robot simulator has been made, and its feasibility has been experimentally evaluated. An anthropomorphic robot mechanism with an arm having seven degrees of freedom has been designed and developed as a slave robot for feasibility experiments of teleoperation using the tele-existence method. An impedance controlled active display mechanism and a head mounted display have also been designed and developed as the display subsystem for the master. The robot's structural dimensions are set very close to those of humans.

  2. Photodecomposition Profile of Curcumin in the Existence of Tungsten Trioxide Particles

    NASA Astrophysics Data System (ADS)

    Nandiyanto, A. B. D.; Zaen, R.; Oktiani, R.; Abdullah, A. G.

    2018-02-01

    The purpose of this study was to investigate the stability of curcumin solution in the presence of tungsten trioxide (WO3) particles under light illumination. In the experimental method, curcumin extracted from Indonesian local turmeric was mixed with WO3 microparticles and put into the photoreactor system. The photostability test of curcumin was conducted for 22 hours using a 100 W neon lamp. The results showed that the curcumin solution was relatively stable. When curcumin without WO3 was irradiated, no change in the curcumin concentration was found. However, when the curcumin solution was mixed with WO3 particles, a decrease in the concentration of curcumin was found; after light irradiation the concentration was about 73.58% of its initial value. Based on the results, we concluded that curcumin is relatively stable against light, although its light-irradiation stability decreases with the addition of inorganic material.

  3. Free energy of formation of a crystal nucleus in incongruent solidification: Implication for modeling the crystallization of aqueous nitric acid droplets in polar stratospheric clouds

    NASA Astrophysics Data System (ADS)

    Djikaev, Yuri S.; Ruckenstein, Eli

    2017-04-01

    Using the formalism of classical thermodynamics in the framework of the classical nucleation theory, we derive an expression for the reversible work W* of formation of a binary crystal nucleus in a liquid binary solution of non-stoichiometric composition (incongruent crystallization). Applied to the crystallization of aqueous nitric acid droplets, the new expression more adequately takes account of the effects of nitric acid vapor compared to the conventional expression of MacKenzie, Kulmala, Laaksonen, and Vesala (MKLV) [J. Geophys. Res.: Atmos. 102, 19729 (1997)]. The predictions of both MKLV and modified expressions for the average liquid-solid interfacial tension σls of nitric acid dihydrate (NAD) crystals are compared by using existing experimental data on the incongruent crystallization of aqueous nitric acid droplets of composition relevant to polar stratospheric clouds (PSCs). The predictions for σls based on the MKLV expression are higher by about 5% compared to predictions based on our modified expression. This results in similar differences between the predictions of both expressions for the solid-vapor interfacial tension σsv of NAD crystal nuclei. The latter can be obtained by using the method based on the analysis of experimental data on crystal nucleation rates in aqueous nitric acid droplets; it exploits the dominance of the surface-stimulated mode of crystal nucleation in small droplets and its negligibility in large ones. Applying that method to existing experimental data, our expression for the free energy of formation provides an estimate for σsv of NAD in the range ≈92 dyn/cm to ≈100 dyn/cm, while the MKLV expression predicts it in the range ≈95 dyn/cm to ≈105 dyn/cm. The predictions of both expressions for W* become identical for the case of congruent crystallization; this was also demonstrated by applying our method for determining σsv to the nucleation of nitric acid trihydrate crystals in PSC droplets of stoichiometric composition.

  4. A Systematic Prediction of Drug-Target Interactions Using Molecular Fingerprints and Protein Sequences.

    PubMed

    Huang, Yu-An; You, Zhu-Hong; Chen, Xing

    2018-01-01

    Drug-Target Interactions (DTI) play a crucial role in discovering new drug candidates and finding new proteins to target for drug development. Although the number of detected DTI obtained by high-throughput techniques has been increasing, the number of known DTI is still limited. On the other hand, the experimental methods for detecting the interactions among drugs and proteins are costly and inefficient. Therefore, computational approaches for predicting DTI have drawn increasing attention in recent years. In this paper, we report a novel computational model for predicting DTI using an extremely randomized trees model and protein amino acid information. More specifically, the protein sequence is represented as a Pseudo Substitution Matrix Representation (Pseudo-SMR) descriptor in which the influence of biological evolutionary information is retained. For the representation of drug molecules, a novel fingerprint feature vector is utilized to describe the substructure information. The DTI pair is then characterized by concatenating the two vector spaces of protein sequence and drug substructure. Finally, the proposed method is evaluated for predicting DTI on four benchmark datasets: Enzyme, Ion Channel, GPCRs and Nuclear Receptor. The experimental results demonstrate that this method achieves promising prediction accuracies of 89.85%, 87.87%, 82.99% and 81.67%, respectively. For further evaluation, we compared the performance of the extremely randomized trees model with that of the state-of-the-art Support Vector Machine classifier, compared the proposed model with existing computational models, and confirmed 15 potential drug-target interactions by searching existing databases. The experimental results show that the proposed method is feasible and promising for predicting drug-target interactions for new drug candidate screening based on sizeable feature sets.
Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
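
    The core randomization of extremely randomized trees, choosing both the split feature and the split threshold at random rather than optimizing them, can be shown with a minimal pure-Python stump ensemble. This is a didactic toy on made-up one-dimensional data, not the classifier, descriptors, or datasets used in the paper.

```python
# Minimal "extremely randomized" ensemble sketch: each stump draws a
# random feature and a random threshold (the defining idea of extremely
# randomized trees), labels each side by majority vote, and the
# ensemble predicts by voting over stumps.
import random

def fit_stump(X, y, rng):
    f = rng.randrange(len(X[0]))                 # random feature
    lo = min(row[f] for row in X)
    hi = max(row[f] for row in X)
    thr = rng.uniform(lo, hi)                    # random threshold
    left = [yi for row, yi in zip(X, y) if row[f] < thr]
    right = [yi for row, yi in zip(X, y) if row[f] >= thr]
    vote = lambda labels: max(set(labels), key=labels.count) if labels else y[0]
    return f, thr, vote(left), vote(right)

def predict(ensemble, row):
    votes = [(l if row[f] < thr else r) for f, thr, l, r in ensemble]
    return max(set(votes), key=votes.count)

rng = random.Random(0)
X = [[0.0], [0.1], [0.9], [1.0]]   # toy 1-D features
y = [0, 0, 1, 1]
ensemble = [fit_stump(X, y, rng) for _ in range(25)]
preds = [predict(ensemble, row) for row in X]
```

    Because thresholds are not optimized, individual stumps are weak, but averaging many of them yields low-variance predictions at very low training cost.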

  5. Global Design Optimization for Aerodynamics and Rocket Propulsion Components

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)

    2000-01-01

    Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. 
Both the usefulness of the existing knowledge to aid current design practices and the need for future research are identified.

  6. Improving material removal determinacy based on the compensation of tool influence function

    NASA Astrophysics Data System (ADS)

    Zhong, Bo; Chen, Xian-hua; Deng, Wen-hui; Zhao, Shi-jie; Zheng, Nan

    2018-03-01

    In the process of computer-controlled optical surfacing (CCOS), the key to correcting the surface error of optical components is to ensure consistency between the simulated tool influence function and the actual tool influence function (TIF). The existing removal model usually adopts a fixed-point TIF to remove material along the planned path and velocity, and it assumes that the polishing process is linear and time-invariant. However, in the actual polishing process, the TIF is a function of the feed speed. In this paper, the relationship between the actual TIF and the feed speed (i.e. the compensation relationship between static removal and dynamic removal) is determined experimentally. The existing removal model is then modified based on this compensation relationship, to improve the conformity between simulated and actual processing. Finally, surface error correction tests were carried out. The results show that the fitting degree of the simulated surface to the experimental surface is better than 88%, and the surface correction accuracy can be better than λ/10 (λ = 632.8 nm).
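
    The removal model being corrected can be sketched as a superposition of the TIF along the dwell path, scaled by dwell time (inversely by feed speed). The paper's point is that the actual TIF also changes with feed speed; here an assumed speed-dependent efficiency factor stands in for the measured compensation relationship, purely for illustration.

```python
# Dwell-time removal sketch.  Removal along a 1-D path is modelled as
# the tool influence function (TIF) superposed at each dwell point,
# scaled by 1/feed_speed (dwell time) times a speed-dependent
# efficiency factor.  The linear efficiency droop below is an assumed
# stand-in for the experimentally measured compensation, not the
# paper's calibration.

def removal_profile(tif, dwell_points, feed_speed, efficiency):
    n = len(dwell_points)
    profile = [0.0] * (n + len(tif) - 1)
    scale = efficiency(feed_speed) / feed_speed
    for i, active in enumerate(dwell_points):
        if not active:
            continue
        for j, t in enumerate(tif):
            profile[i + j] += scale * t
    return profile

efficiency = lambda v: max(0.0, 1.0 - 0.05 * v)  # assumed droop with speed
slow = removal_profile([1.0, 2.0, 1.0], [1, 1, 1], feed_speed=1.0,
                       efficiency=efficiency)
fast = removal_profile([1.0, 2.0, 1.0], [1, 1, 1], feed_speed=4.0,
                       efficiency=efficiency)
```

    Without the efficiency factor the fast pass would remove exactly a quarter of the slow pass's material; the compensation captures the additional shape and depth changes of the real TIF at speed.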

  7. A new approach to enhance the performance of decision tree for classifying gene expression data.

    PubMed

    Hassan, Md; Kotagiri, Ramamohanarao

    2013-12-20

    Gene expression data classification is a challenging task due to the large dimensionality and very small number of samples. The decision tree is one of the popular machine learning approaches to address such classification problems. However, existing decision tree algorithms use a single gene feature at each node to split the data into child nodes and hence might suffer from poor performance, especially when classifying gene expression datasets. By using a new decision tree algorithm in which each node of the tree consists of more than one gene, we enhance the classification performance of traditional decision tree classifiers. Our method selects suitable genes that are combined using a linear function to form a derived composite feature. To determine the structure of the tree we use the area under the Receiver Operating Characteristic curve (AUC). Experimental analysis demonstrates higher classification accuracy using the new decision tree compared to other existing decision trees in the literature. We experimentally compare the effect of our scheme against other well-known decision tree techniques. Experiments show that our algorithm can substantially boost the classification performance of the decision tree.
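
    The AUC criterion used to choose the tree structure reduces to a pairwise ranking probability: the chance that a randomly chosen positive sample scores above a randomly chosen negative one (ties counting half). A direct sketch of that computation is below; the composite-feature construction itself is not reproduced, and the scores are illustrative.

```python
# AUC sketch via direct pairwise comparison: AUC is the probability
# that a random positive is ranked above a random negative, with ties
# counted as half a win.

def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

perfect = auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])   # perfect ranking
random_ish = auc([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0])  # uninformative
```

    A candidate composite feature whose scores rank the classes well gets an AUC near 1 and is preferred when growing the tree.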

  8. Simulation of unsteady state performance of a secondary air system by the 1D-3D-Structure coupled method

    NASA Astrophysics Data System (ADS)

    Wu, Hong; Li, Peng; Li, Yulong

    2016-02-01

    This paper describes the calculation method for unsteady state conditions in the secondary air systems in gas turbines. The 1D-3D-Structure coupled method was applied. A 1D code was used to model the standard components that have typical geometric characteristics. Their flow and heat transfer were described by empirical correlations based on experimental data or CFD calculations. A 3D code was used to model the non-standard components that cannot be described by typical geometric languages, while a finite element analysis was carried out to compute the structural deformation and heat conduction at certain important positions. These codes were coupled through their interfaces. Thus, the changes in heat transfer and structure and their interactions caused by exterior disturbances can be reflected. The results of the coupling method in an unsteady state showed an apparent deviation from the existing data, while the results in the steady state were highly consistent with the existing data. The difference in the results in the unsteady state was caused primarily by structural deformation that cannot be predicted by the 1D method. Thus, in order to obtain the unsteady state performance of a secondary air system more accurately and efficiently, the 1D-3D-Structure coupled method should be used.

  9. An embedded formula of the Chebyshev collocation method for stiff problems

    NASA Astrophysics Data System (ADS)

    Piao, Xiangfan; Bu, Sunyoung; Kim, Dojin; Kim, Philsu

    2017-12-01

    In this study, we have developed an embedded formula of the Chebyshev collocation method for stiff problems, based on the zeros of the generalized Chebyshev polynomials. A new strategy for the embedded formula, using a pair of methods to estimate the local truncation error, as in traditional embedded Runge-Kutta schemes, is proposed. The method is constructed in such a way that not only is the stability region of the embedded formula widened, but, by allowing larger time step sizes, the total computational cost is also reduced. A convergence and stability analysis shows that the constructed algorithm has 8th-order convergence and is A-stable. Through several numerical experiments, we demonstrate that the proposed method is numerically more efficient than several existing implicit methods.
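
    The mechanism of an embedded pair, two methods of different order sharing the same function evaluations so that their difference estimates the local truncation error, can be shown with a simple first/second-order Euler/Heun pair. This only illustrates the error-estimation idea; the paper's actual pair is built from 8th-order Chebyshev collocation, which this sketch does not reproduce.

```python
# Embedded-pair sketch: Euler (1st order) and Heun (2nd order) share
# the evaluations k1, k2; the gap between their results is a cheap
# local-error estimate usable for step-size control.

def embedded_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_low = y + h * k1                  # 1st-order (Euler) result
    y_high = y + h * (k1 + k2) / 2.0    # 2nd-order (Heun) result
    return y_high, abs(y_high - y_low)  # advance + error estimate

f = lambda t, y: -y                     # stiff-model toy: y' = -y
y, t, h = 1.0, 0.0, 0.1
y_next, err_est = embedded_step(f, t, y, h)
```

    A step-size controller would compare err_est against a tolerance and grow or shrink h accordingly, which is exactly what the embedded collocation formula enables at 8th order.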

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vonach, H.; Tagesen, S.

    Starting with a discussion of the requirements and goals for high quality general-purpose evaluations, the paper describes the procedures chosen in our evaluation work for JEFF for producing new general evaluations with complete covariance information for all cross sections (file 3 data). Key problems essential for the goal of making the best possible use of the existing theoretical and experimental knowledge on neutron interactions with the respective nuclide are addressed, especially the problem of assigning covariances to calculated cross sections, necessary checking procedures for all experimental data, and various possibilities to amend the experimental database beyond the obvious use of EXFOR data for the respective cross sections. In this respect, both the use of elemental cross sections in isotopic evaluations and the use of implicit cross-section data (that is, data which can be converted into cross sections by simple methods) are discussed in some detail.

  11. Molecular dynamics simulation of premelting and melting phase transitions in stoichiometric uranium dioxide

    NASA Astrophysics Data System (ADS)

    Yakub, Eugene; Ronchi, Claudio; Staicu, Dragos

    2007-09-01

    Results of molecular dynamics (MD) simulations of UO2 over a wide temperature range are presented and discussed. A new approach to the calibration of a partly ionic Busing-Ida-type model is proposed, and a potential parameter set is obtained that reproduces the experimental density of solid UO2 over a wide range of temperatures. A conventional simulation of high-temperature stoichiometric UO2 on large MD cells, based on a novel fast method for computing Coulomb forces, reveals characteristic features of a premelting λ transition at a temperature near that observed experimentally (Tλ = 2670 K). A strong deviation from Arrhenius behavior of the oxygen self-diffusion coefficient was found in the vicinity of the transition point. Predictions for liquid UO2, based on the same potential parameter set, are in good agreement with existing experimental data and theoretical calculations.

  12. First-Principles Prediction of Liquid/Liquid Interfacial Tension.

    PubMed

    Andersson, M P; Bennetzen, M V; Klamt, A; Stipp, S L S

    2014-08-12

    The interfacial tension between two liquids is the free energy per unit surface area required to create that interface. Interfacial tension is a determining factor for two-phase liquid behavior in a wide variety of systems, ranging from water flooding in oil recovery processes and remediation of groundwater aquifers contaminated by chlorinated solvents to drug delivery and a host of industrial processes. Here, we present a model for predicting interfacial tension from first principles using density functional theory calculations. Our model requires no experimental input and is applicable to liquid/liquid systems of arbitrary composition. The predictions are consistent with experimental data for binary, ternary, and multicomponent water/organic compound systems, which offers confidence in using the model to predict behavior where no data exist. The method is fast and can be used as a screening technique, as well as to extend experimental data into conditions where measurements are technically too difficult, time consuming, or impossible.

  13. An integrated approach to model strain localization bands in magnesium alloys

    NASA Astrophysics Data System (ADS)

    Baxevanakis, K. P.; Mo, C.; Cabal, M.; Kontsos, A.

    2018-02-01

    Strain localization bands (SLBs) that appear at early stages of deformation of magnesium alloys have been recently associated with heterogeneous activation of deformation twinning. Experimental evidence has demonstrated that such "Lüders-type" band formations dominate the overall mechanical behavior of these alloys resulting in sigmoidal type stress-strain curves with a distinct plateau followed by pronounced anisotropic hardening. To evaluate the role of SLB formation on the local and global mechanical behavior of magnesium alloys, an integrated experimental/computational approach is presented. The computational part is developed based on custom subroutines implemented in a finite element method that combine a plasticity model with a stiffness degradation approach. Specific inputs from the characterization and testing measurements to the computational approach are discussed while the numerical results are validated against such available experimental information, confirming the existence of load drops and the intensification of strain accumulation at the time of SLB initiation.

  14. New approaches to increase intestinal length: Methods used for intestinal regeneration and bioengineering

    PubMed Central

    Shirafkan, Ali; Montalbano, Mauro; McGuire, Joshua; Rastellini, Cristiana; Cicalese, Luca

    2016-01-01

    Inadequate absorptive surface area poses a great challenge to patients suffering from a variety of intestinal diseases that cause short bowel syndrome. To date, these patients are managed with total parenteral nutrition or intestinal transplantation; however, both carry significant morbidity and mortality. With the emergence of tissue engineering, expectations of an alternative method to increase the intestinal absorptive surface area are rising. In this paper, we review the improvements made over time in attempts to elongate the intestine with surgical techniques as well as with intestinal bioengineering. Sequential intestinal lengthening was the first method applied in humans; however, it did not reach widespread use and has had limited outcomes. Subsequent experimental methods were developed that utilize scaffolds to regenerate intestinal tissue and organoid units from the intestinal epithelium. Stem cells have also been studied and applied in all types of tissue engineering. Biomaterials have been utilized as structural support for naive cells to produce bioengineered tissue that achieves a near-normal anatomical structure. A promising novel approach is the elongation of the intestine with an acellular biologic scaffold to generate neo-formed intestinal tissue, which showed, for the first time, evidence of absorption in vivo. In the large intestine, studies are more focused on the regeneration and engineering of sphincters and are briefly reviewed. From a review of the existing literature, it can be concluded that significant progress has been achieved with these experimental methods, but that they must now be fully translated into pre-clinical and clinical experimentation to become a viable future therapeutic option. PMID:27011901

  15. A New Moving Object Detection Method Based on Frame-difference and Background Subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong

    2017-09-01

    Although many moving object detection methods have been proposed, moving object extraction remains the core task in video surveillance. In the complex scenes of the real world, however, false detections, missed detections, and cavities inside the detected body still occur. To address incomplete detection of moving objects, this paper proposes a new detection method that combines an improved frame difference with Gaussian mixture background subtraction. To make detection more complete and accurate, image repair and morphological processing techniques, which act as spatial compensation, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill cavities in the moving object. Compared with four other moving object detection methods (GMM, ViBe, frame difference, and a method from the literature), the proposed method improves the efficiency and accuracy of detection.
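
    As a rough illustration of the combination idea (not the paper's algorithm, which uses an improved frame difference, a Gaussian mixture background model, image repair, and morphology), the sketch below fuses a simple frame difference with a running-average background model; `alpha` and both thresholds are hypothetical:

```python
import numpy as np

def detect_moving(frames, alpha=0.05, diff_thr=25.0, bg_thr=25.0):
    """Return one boolean motion mask per consecutive frame pair by
    OR-fusing a frame-difference mask with a background-subtraction
    mask (running average standing in for a Gaussian mixture)."""
    bg = frames[0].astype(float)
    masks = []
    for prev, cur in zip(frames, frames[1:]):
        cur = cur.astype(float)
        fd = np.abs(cur - prev.astype(float)) > diff_thr   # frame difference
        bs = np.abs(cur - bg) > bg_thr                     # background subtraction
        bg = (1 - alpha) * bg + alpha * cur                # update background
        masks.append(fd | bs)   # union recovers pixels either cue misses
    return masks

# demo: a bright 2x2 block appears in the second frame
f0 = np.zeros((8, 8))
f1 = np.zeros((8, 8)); f1[2:4, 2:4] = 200.0
mask = detect_moving([f0, f1])[0]
```

    A real pipeline would follow the fused mask with morphological opening/closing to fill remaining cavities, as the abstract describes.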

  16. Development Of A Numerical Tow Tank With Wave Generation To Supplement Experimental Efforts

    DTIC Science & Technology

    2017-12-01

    [Fragmented DTIC abstract snippet. Recoverable content: an abbreviations list (...vehicles; CAD: computer aided design; CFD: computational fluid dynamics; FVM: finite volume method; IO: information operations; ISR: intelligence, surveillance, and...) and the opening of Chapter 1, "Introduction," Section 1.1, "Importance of Tow Tank Testing," which notes that in 2016 NPS student Ensign Ryan Tran adapted an existing vertical plunging wedge wave maker design used at the U.S. Naval...]

  17. Kinetic Monte Carlo Simulation of the Growth of Various Nanostructures through Atomic and Cluster Deposition: Application to Gold Nanostructure Growth on Graphite

    NASA Astrophysics Data System (ADS)

    Claassens, C. H.; Hoffman, M. J. H.; Terblans, J. J.; Swart, H. C.

    2006-01-01

    A Kinetic Monte Carlo (KMC) method is presented to describe the growth of metallic nanostructures through atomic and cluster deposition in the mono- and multilayer regimes. The model makes provision for homo- and heteroepitaxial systems with small lattice mismatch. The accuracy of the model is tested with simulations of the growth of gold nanostructures on HOPG, and comparisons are made with existing experimental data.

  18. The collective and quantum nature of proton transfer in the cyclic water tetramer on NaCl(001)

    NASA Astrophysics Data System (ADS)

    Feng, Yexin; Wang, Zhichang; Guo, Jing; Chen, Ji; Wang, En-Ge; Jiang, Ying; Li, Xin-Zheng

    2018-03-01

    Proton tunneling is an elementary process in the dynamics of hydrogen-bonded systems. Collective tunneling has long been known to exist, but atomistic investigations of this mechanism in realistic systems are scarce. Using a combination of ab initio theoretical and high-resolution experimental methods, we investigate the role played by the protons in the chirality switching of a water tetramer on NaCl(001). Our scanning tunneling spectroscopies show that partial deuteration of the H2O tetramer with only one D2O leads to a significant suppression of the chirality switching rate at a cryogenic temperature (T), indicating that the chirality switches by tunneling in a concerted manner. Theoretical simulations, meanwhile, support this picture by presenting a much smaller free-energy barrier for the translational collective proton tunneling mode than for other chirality switching modes at low T. During this analysis, the virial energy provides a reasonable estimator of the nuclear quantum effects when the traditional thermodynamic integration method cannot be used, which could be employed in future studies of similar problems. Given the high-dimensional nature of realistic systems and the topology of the hydrogen-bonded network, collective proton tunneling may exist more ubiquitously than expected. Systems of this kind can serve as ideal platforms for studies of this mechanism, easily accessible to high-resolution experimental measurements.

  19. Numerical models for afterburning of TNT detonation products in air

    NASA Astrophysics Data System (ADS)

    Donahue, L.; Zhang, F.; Ripley, R. C.

    2013-11-01

    Afterburning occurs when fuel-rich explosive detonation products react with oxygen in the surrounding atmosphere. This energy release can further contribute to the air blast, resulting in a more severe explosion hazard, particularly in confined scenarios. The primary objective of this study was to investigate the influence of the products equation of state (EOS) on the predicted efficiency of trinitrotoluene (TNT) afterburning and on the arrival times of reverberating shock waves in a closed chamber. A new EOS is proposed, denoted the Afterburning (AB) EOS; it employs the JWL EOS in the high-pressure regime and transitions to a Variable-Gamma (VG) EOS at lower pressures. Simulations of three TNT charges suspended in an explosion chamber were performed. Compared with numerical results using existing methods, the Afterburning EOS delays the shock arrival times, giving better agreement with the experimental measurements at early to mid times. At late times, the Afterburning EOS roughly halves the error between the experimental measurements and results obtained using existing methods. Using the Afterburning EOS for the products with the Variable-Gamma EOS for the surrounding air further improves results significantly, in both the transient solution and the quasi-static pressure. This final combination of EOS and mixture model is recommended for future studies involving afterburning explosives, particularly those in partial or full confinement.
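
    For reference, the JWL products EOS mentioned above has the standard form p(V, E) = A(1 − ω/(R1·V))·e^(−R1·V) + B(1 − ω/(R2·V))·e^(−R2·V) + ωE/V, where V is the relative volume and E the energy per unit initial volume. The sketch below evaluates it with commonly quoted TNT-like parameters and switches to a simple gamma-law pressure below a hypothetical threshold, to illustrate (not reproduce) the paper's JWL-to-Variable-Gamma transition:

```python
import math

# Illustrative TNT-like JWL parameters (pressures in GPa, V = v/v0);
# the paper's calibrated values may differ.
A, B, R1, R2, W = 371.2, 3.231, 4.15, 0.95, 0.30

def p_jwl(V, E):
    """Standard JWL products pressure."""
    return (A * (1 - W / (R1 * V)) * math.exp(-R1 * V)
            + B * (1 - W / (R2 * V)) * math.exp(-R2 * V)
            + W * E / V)

def p_gamma(V, E, gamma=1.3):
    """Gamma-law (ideal-gas-like) products pressure at large expansion."""
    return (gamma - 1) * E / V

def p_blended(V, E, p_switch=0.1):
    """Use JWL at high pressure and a gamma-law stand-in for the
    Variable-Gamma EOS once pressure drops below a hypothetical
    threshold (GPa), mimicking the transition idea."""
    p = p_jwl(V, E)
    return p if p > p_switch else p_gamma(V, E)
```

    A production implementation would blend smoothly and use a genuine Variable-Gamma fit; the hard switch here is only to show where each regime applies.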

  20. Mechanics of fiber reinforced materials

    NASA Astrophysics Data System (ADS)

    Sun, Huiyu

    This dissertation is dedicated to the mechanics of fiber-reinforced materials and woven reinforcements and is composed of four parts: analytical characterization of the interfaces in laminated composites; micromechanics of braided composites; shear deformation of woven fabric reinforcements; and Poisson's ratios of woven fabric reinforcements. A new approach to evaluating the mechanical characteristics of interfaces between composite laminae, based on a modified laminate theory, is proposed. By including an interface in the analysis as a special lamina termed the "bonding layer," the mechanical properties of the interfaces are obtained; a numerical illustration is given. For the micromechanical properties of three-dimensionally braided composite materials, a new method via homogenization theory and an incompatible multivariable FEM is developed. Results from the hybrid stress element approach compare more favorably with the experimental data than other widely used numerical methods. To evaluate the shearing properties of woven fabrics, a new mechanical model is proposed for the initial slip region. Analytical results show that this model agrees better with experiments, for both the initial shear modulus and the slipping angle, than existing models. Finally, another mechanical model, for a woven fabric made of extensible yarns, is employed to calculate the fabric's Poisson's ratios. Theoretical results are compared with the available experimental data. A thorough examination of the influence of the yarns' mechanical properties and the fabrics' structural parameters on the Poisson's ratios of a woven fabric is given at the end.

  1. How to integrate biological research into society and exclude errors in biomedical publications? Progress in theoretical and systems biology releases pressure on experimental research.

    PubMed

    Volkov, Vadim

    2014-01-01

    This brief opinion proposes measures to increase efficiency and exclude errors in biomedical research under the existing dynamic situation. Rapid changes in biology began with the description of the three-dimensional structure of DNA 60 years ago; today, biology has progressed through interaction with computer science and nanoscience, together with the introduction of robotic stations for acquiring large-scale arrays of data. These changes have had an increasing influence on the entire research and scientific community. Future advances demand short-term measures to ensure error-proof and efficient development. These can include the fast publishing of negative results, publishing detailed methodological papers, and decoupling career progression from publication activity, especially for younger researchers. Further development of theoretical and systems biology, together with the use of multiple experimental methods for biological experiments, could also be helpful over years and decades. With regard to the links between science and society, it is reasonable to compare both systems, to identify and describe features specific to biology, and to integrate it into the existing stream of social life and financial fluxes. This will raise the level of scientific research and have mutually positive effects for both biology and society. Several examples are given for further discussion.

  2. Efficient Feature Selection and Classification of Protein Sequence Data in Bioinformatics

    PubMed Central

    Faye, Ibrahima; Samir, Brahim Belhaouari; Md Said, Abas

    2014-01-01

    Bioinformatics has been an emerging area of research for the last three decades. The ultimate aims of bioinformatics are to store and manage biological data and to develop and analyze computational tools that enhance their understanding. The size of the data accumulated under various sequencing projects is increasing exponentially, which presents difficulties for experimental methods. To reduce the gap between newly sequenced proteins and proteins with known functions, many computational techniques involving classification and clustering algorithms have been proposed. The classification of protein sequences into existing superfamilies helps predict the structure and function of the large number of newly discovered proteins. Existing classification results are unsatisfactory due to the huge number of features obtained through various feature encoding methods. In this work, a statistical metric-based feature selection technique is proposed to reduce the size of the extracted feature vector. The proposed method of protein classification shows significant improvement in performance measures: accuracy, sensitivity, specificity, recall, F-measure, and so forth. PMID:25045727
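
    One common "statistical metric" for filter-based feature selection of this kind is a Fisher-style score, the ratio of between-class to within-class variance computed per feature; the sketch below illustrates the general approach, not necessarily the metric used in the paper:

```python
import numpy as np

def f_score(X, y):
    """Per-feature Fisher-style score: between-class variance of the
    class means over summed within-class variance. Higher = more
    discriminative."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = sum((X[y == c].mean(axis=0) - overall) ** 2 for c in classes)
    within = sum(X[y == c].var(axis=0) for c in classes) + 1e-12
    return between / within

def select_top_k(X, y, k):
    """Indices of the k highest-scoring features."""
    return np.argsort(f_score(X, y))[::-1][:k]

# demo: feature 0 separates the two classes, the others are noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.array([0] * 50 + [1] * 50)
X[y == 1, 0] += 3.0
top = select_top_k(X, y, 1)
```

    Reducing a huge encoded feature vector this way is cheap (one pass over the data) and independent of the downstream classifier.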

  3. Directional Histogram Ratio at Random Probes: A Local Thresholding Criterion for Capillary Images

    PubMed Central

    Lu, Na; Silva, Jharon; Gu, Yu; Gerber, Scott; Wu, Hulin; Gelbard, Harris; Dewhurst, Stephen; Miao, Hongyu

    2013-01-01

    With the development of micron-scale imaging techniques, capillaries can be conveniently visualized using methods such as two-photon and whole mount microscopy. However, the presence of background staining, leaky vessels and the diffusion of small fluorescent molecules can lead to significant complexity in image analysis and loss of information necessary to accurately quantify vascular metrics. One solution to this problem is the development of accurate thresholding algorithms that reliably distinguish blood vessels from surrounding tissue. Although various thresholding algorithms have been proposed, our results suggest that without appropriate pre- or post-processing, the existing approaches may fail to obtain satisfactory results for capillary images that include areas of contamination. In this study, we propose a novel local thresholding algorithm, called directional histogram ratio at random probes (DHR-RP). This method explicitly considers the geometric features of tube-like objects in conducting image binarization, and has reliable performance in distinguishing small vessels from either clean or contaminated background. Experimental and simulation studies suggest that our DHR-RP algorithm is superior to existing thresholding methods. PMID:23525856

  4. Micro Dot Patterning on the Light Guide Panel Using Powder Blasting.

    PubMed

    Jang, Ho Su; Cho, Myeong Woo; Park, Dong Sam

    2008-02-08

    This study is to develop a micromachining technology for a light guide panel (LGP) mold, whereby micro dot patterns are formed on a LGP surface by a single injection process instead of existing screen printing processes. The micro powder blasting technique is applied to form micro dot patterns on the LGP mold surface. The optimal conditions for masking, laminating, exposure, and developing processes to form the micro dot patterns are first experimentally investigated. A LGP mold with masked micro patterns is then machined using the micro powder blasting method and the machinability of the micro dot patterns is verified. A prototype LGP is test-injected using the developed LGP mold and a shape analysis of the patterns and performance testing of the injected LGP are carried out. As an additional approach, matte finishing, a special surface treatment method, is applied to the mold surface to improve the light diffusion characteristics, uniformity and brightness of the LGP. The results of this study show that the applied powder blasting method can be successfully used to manufacture LGPs with micro patterns by just a single injection using the developed mold and thereby replace existing screen printing methods.

  5. Observed physical processes in mechanical tests of PBX9501 and recommendations for experiments to explore a possible plasticity/damage threshold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buechler, Miles A.

    2012-05-02

    This memo discusses observations made regarding a series of monotonic and cyclic uniaxial experiments performed on PBX9501 by Darla Thompson under Enhanced Surveillance Campaign support. The observations discussed in the section "Cyclic compression observations" strongly suggest the presence of viscoelastic, plastic, and damage phenomena in the mechanical response of the material. In the section "Uniaxial data analysis and observations," methods are discussed for separating out the viscoelastic effects. A crude application of those methods suggests the possibility of a critical stress below which plasticity and damage may be negligible. The threshold should be explored because, if it exists, it will be an important feature of any constitutive model. Additionally, if the threshold exists, then modifications of the experimental methods may be feasible that could simplify future experiments or provide higher-quality data from them. A set of experiments to explore the threshold stress is proposed in the section "Exploratory tests program for identifying threshold stress."

  6. Gaussian mixture model based identification of arterial wall movement for computation of distension waveform.

    PubMed

    Patil, Ravindra B; Krishnamoorthy, P; Sethuraman, Shriram

    2015-01-01

    This work proposes a novel Gaussian Mixture Model (GMM) based approach for accurately tracking the arterial wall and subsequently computing the distension waveform using the radio frequency (RF) ultrasound signal. The approach was evaluated on ultrasound RF data acquired from an artery-mimicking flow phantom using a prototype ultrasound system. The effectiveness of the proposed algorithm is demonstrated by comparison with existing wall tracking algorithms. Experimental results show that the proposed method reduces the error margin by 20% compared with existing approaches to tracking arterial wall movement. This approach, coupled with an ultrasound system, can be used to estimate the arterial compliance parameters required for screening cardiovascular disorders.

  7. Analysis of gene expression as relevant to cancer cells and circulating tumour cells.

    PubMed

    Friel, Anne M; Crown, John; O'Driscoll, Lorraine

    2011-01-01

    Current literature provides significant evidence to support the concept that there are limited subpopulations of cells within a solid tumour that have increased tumour-initiating potential relative to the total tumour population. Such tumour-initiating cells have been identified in leukaemia and in a variety of solid tumours using different combinations of cell surface markers, suggesting that a tumour-initiating cell heterogeneity exists for each specific tumour. These studies have been extended to endometrial cancer; and herein we present several experimental approaches, both in vitro and in vivo, that can be used to determine whether such populations exist, and if so, to characterize them. These methods are adaptable to the investigation of tumour-initiating cells from other tumour types.

  8. A Survey of Challenges in Aerodynamic Exhaust Nozzle Technology for Aerospace Propulsion Applications

    NASA Technical Reports Server (NTRS)

    Shyne, Rickey J.

    2002-01-01

    The current paper discusses aerodynamic exhaust nozzle technology challenges for aircraft and space propulsion systems. Technology advances in computational and experimental methods have led to more accurate design and analysis tools, but many major challenges continue to exist in nozzle performance, jet noise and weight reduction. New generations of aircraft and space vehicle concepts dictate that exhaust nozzles have optimum performance, low weight and acceptable noise signatures. Numerous innovative nozzle concepts have been proposed for advanced subsonic, supersonic and hypersonic vehicle configurations such as ejector, mixer-ejector, plug, single expansion ramp, altitude compensating, lobed and chevron nozzles. This paper will discuss the technology barriers that exist for exhaust nozzles as well as current research efforts in place to address the barriers.

  9. Micro Dot Patterning on the Light Guide Panel Using Powder Blasting

    PubMed Central

    Jang, Ho Su; Cho, Myeong Woo; Park, Dong Sam

    2008-01-01

    This study is to develop a micromachining technology for a light guide panel(LGP) mold, whereby micro dot patterns are formed on a LGP surface by a single injection process instead of existing screen printing processes. The micro powder blasting technique is applied to form micro dot patterns on the LGP mold surface. The optimal conditions for masking, laminating, exposure, and developing processes to form the micro dot patterns are first experimentally investigated. A LGP mold with masked micro patterns is then machined using the micro powder blasting method and the machinability of the micro dot patterns is verified. A prototype LGP is test- injected using the developed LGP mold and a shape analysis of the patterns and performance testing of the injected LGP are carried out. As an additional approach, matte finishing, a special surface treatment method, is applied to the mold surface to improve the light diffusion characteristics, uniformity and brightness of the LGP. The results of this study show that the applied powder blasting method can be successfully used to manufacture LGPs with micro patterns by just single injection using the developed mold and thereby replace existing screen printing methods. PMID:27879740

  10. An optical method for characterizing carbon content in ceramic pot filters.

    PubMed

    Goodwin, J Y; Elmore, A C; Salvinelli, C; Reidmeyer, Mary R

    2017-08-01

    Ceramic pot filter (CPF) technology is a relatively common means of household water treatment in developing areas, and the performance characteristics of CPFs have been characterized using production CPFs, experimental CPFs fabricated in research laboratories, and ceramic disks intended as CPF surrogates. There is evidence that CPF manufacturers do not always fire their products according to best practices; the result is incomplete combustion of the pore-forming material and the creation of a carbon core in the final CPFs. Researchers seldom acknowledge the potential existence of carbon cores, and at least one CPF producer has postulated that the carbon may be beneficial to final water quality, given the presence of activated carbon in consumer filters marketed in the Western world. An initial step in assessing the presence and impact of carbon cores is their characterization. An optical method, which may be more practical for producers than off-site laboratory analysis of carbon content, has been developed and verified. Its use is demonstrated via preliminary disinfection and flow rate studies, and the results indicate that the method may be useful in studying production kiln operation.

  11. A reconstruction method for cone-beam differential x-ray phase-contrast computed tomography.

    PubMed

    Fu, Jian; Velroyen, Astrid; Tan, Renbo; Zhang, Junwei; Chen, Liyuan; Tapfer, Arne; Bech, Martin; Pfeiffer, Franz

    2012-09-10

    Most existing differential phase-contrast computed tomography (DPC-CT) approaches are based on three scanning geometries: parallel-beam, fan-beam, and cone-beam. Due to the potential of compact imaging systems with magnified spatial resolution, cone-beam DPC-CT has attracted significant interest. In this paper, we report a reconstruction method based on a back-projection filtration (BPF) algorithm for cone-beam DPC-CT. Due to the differential nature of phase-contrast projections, the algorithm refrains from differentiating the projection data prior to back-projection, unlike the BPF algorithms commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a micro-focus x-ray tube source. Moreover, the numerical simulations and experimental results demonstrate that the proposed method can deal with several classes of truncated cone-beam datasets. We believe that this feature is of particular interest for future medical cone-beam phase-contrast CT imaging applications.

  12. Calculation of recoil implantation profiles using known range statistics

    NASA Technical Reports Server (NTRS)

    Fung, C. D.; Avila, R. E.

    1985-01-01

    A method has been developed to calculate the depth distribution of recoil atoms that result from ion implantation onto a substrate covered with a thin surface layer. The calculation includes first order recoils considering projected range straggles, and lateral straggles of recoils but neglecting lateral straggles of projectiles. Projectile range distributions at intermediate energies in the surface layer are deduced from look-up tables of known range statistics. A great saving of computing time and human effort is thus attained in comparison with existing procedures. The method is used to calculate recoil profiles of oxygen from implantation of arsenic through SiO2 and of nitrogen from implantation of phosphorus through Si3N4 films on silicon. The calculated recoil profiles are in good agreement with results obtained by other investigators using the Boltzmann transport equation and they also compare very well with available experimental results in the literature. The deviation between calculated and experimental results is discussed in relation to lateral straggles. From this discussion, a range of surface layer thickness for which the method applies is recommended.
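
    The first-order use of tabulated range statistics can be illustrated with the textbook Gaussian approximation of an implanted depth profile, parameterized by projected range Rp and straggle ΔRp (a generic model, not the paper's recoil calculation, which further convolves such distributions for first-order recoils):

```python
import math

def implant_profile(depth, dose, rp, drp):
    """Gaussian approximation of an implanted-ion concentration at a
    given depth, built from tabulated range statistics: projected
    range rp and projected straggle drp (same length units as depth;
    dose in ions per unit area)."""
    return (dose / (math.sqrt(2 * math.pi) * drp)
            * math.exp(-((depth - rp) ** 2) / (2 * drp ** 2)))

# peak concentration occurs at the projected range
peak = implant_profile(100.0, 1e15, 100.0, 20.0)
```

    Looking up (rp, drp) from range tables at the projectile's energy, as the paper does for intermediate energies in the surface layer, avoids re-solving a transport equation for every case.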

  13. Intensity-hue-saturation-based image fusion using iterative linear regression

    NASA Astrophysics Data System (ADS)

    Cetin, Mufit; Tepecik, Abdulkadir

    2016-10-01

    The image fusion process produces a high-resolution image by combining the superior features of a low-spatial-resolution multispectral image and a high-resolution panchromatic image. Despite its common usage, owing to its fast computation and high sharpening ability, the intensity-hue-saturation (IHS) fusion method may cause color distortions, especially when large gray-value differences exist among the images to be combined. This paper proposes a spatially adaptive IHS (SA-IHS) technique that avoids these distortions by automatically adjusting the spatial information injected into the multispectral image during the fusion process. The SA-IHS method suppresses the effects of pixels that cause spectral distortions by assigning them weaker weights, avoiding a large amount of redundancy in the fused image. The experimental database consists of IKONOS images, and the experimental results, both visual and statistical, demonstrate the improvement of the proposed algorithm over several other IHS-like methods: IHS, generalized IHS, fast IHS, and generalized adaptive IHS.
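
    For context, the classic (generalized) IHS fusion that SA-IHS refines can be sketched in a few lines: the intensity is taken as the band mean and the panchromatic detail is injected equally into every band, which is exactly what causes spectral distortion when pan and intensity differ strongly. This is plain IHS, not the paper's spatially adaptive variant:

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Generalized IHS pansharpening sketch. `ms` is an H x W x 3
    multispectral float array, `pan` an H x W panchromatic float array
    on the same grid. Returns the fused H x W x 3 image."""
    intensity = ms.mean(axis=2)       # intensity = band mean
    detail = pan - intensity          # spatial detail to inject
    return ms + detail[..., None]     # same injection for every band

# demo on random data
rng = np.random.default_rng(1)
ms = rng.random((4, 4, 3))
pan = rng.random((4, 4))
fused = ihs_fuse(ms, pan)
```

    By construction the fused image's band mean equals the pan image; a spatially adaptive scheme would instead weight `detail` per pixel to suppress the distorting ones.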

  14. An Improved BLE Indoor Localization with Kalman-Based Fusion: An Experimental Study

    PubMed Central

    Röbesaat, Jenny; Zhang, Peilin; Abdelaal, Mohamed; Theel, Oliver

    2017-01-01

    Indoor positioning has attracted great attention in recent years, and a number of efforts have been made to achieve high positioning accuracy. However, no existing technology has proven effective in all situations. In this paper, we propose a novel positioning method based on fusing trilateration and dead reckoning, employing Kalman filtering as the position fusion algorithm. We adopt an Android device with Bluetooth Low Energy modules as the communication platform, to avoid excessive energy consumption and to improve the stability of the received signal strength. To further improve positioning accuracy, we take environmental context information into account when generating position fixes. Extensive experiments in a testbed examine the performance of three approaches: trilateration, dead reckoning, and the fusion method; the influence of knowledge of the environmental context is also examined. The proposed fusion method outperforms both trilateration and dead reckoning in accuracy: experimental results show that the Kalman-based fusion, for our settings, achieves a positioning accuracy of less than one meter. PMID:28445421
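
    The fusion idea can be sketched in one dimension: dead-reckoning displacements drive the Kalman prediction step and trilateration fixes drive the update. The noise variances below are hypothetical and the real system is multi-dimensional:

```python
def kalman_fuse(dr_steps, trilat_fixes, q=0.05, r=1.0):
    """1-D Kalman fusion sketch: dead-reckoning steps as process input,
    trilateration fixes as measurements. q and r are hypothetical
    process/measurement noise variances."""
    x, p = trilat_fixes[0], r              # initialize from the first fix
    track = [x]
    for step, z in zip(dr_steps, trilat_fixes[1:]):
        x, p = x + step, p + q             # predict with dead reckoning
        k = p / (p + r)                    # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p  # update with trilateration fix
        track.append(x)
    return track

# demo: with noise-free inputs the fused track follows the truth exactly
track = kalman_fuse([1.0] * 5, [float(i) for i in range(6)])
```

    With noisy inputs, the gain k automatically balances the drift-prone dead reckoning against the jittery trilateration fixes, which is why the fused estimate beats either source alone.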

  15. An Experimental Study on the Fabrication of Glass-based Acceleration Sensor Body Using Micro Powder Blasting Method

    PubMed Central

    Park, Dong-Sam; Yun, Dae-Jin; Cho, Myeong-Woo; Shin, Bong-Cheol

    2007-01-01

    This study investigated the feasibility of the micro powder blasting technique for the micro fabrication of sensor structures in Pyrex glass, to replace existing silicon-based acceleration sensor fabrication processes. As preliminary experiments, the effects of the blasting pressure, the mass flow rate of the abrasive and the number of nozzle scans on the erosion depth of Pyrex and soda-lime glasses were examined. From the experimental results, optimal blasting conditions were selected for machining the Pyrex glass. The dimensions of the designed glass sensor were 1.7×1.7×0.6 mm for the vibrating mass and 2.9×0.7×0.2 mm for the cantilever beam. The machining results showed that the dimensional errors of the machined glass sensor ranged from 3 μm at minimum to 20 μm at maximum. These results imply that the micro powder blasting method can be applied to the micromachining of glass-based acceleration sensors to replace the existing method.

  16. Data-driven information retrieval in heterogeneous collections of transcriptomics data links SIM2s to malignant pleural mesothelioma.

    PubMed

    Caldas, José; Gehlenborg, Nils; Kettunen, Eeva; Faisal, Ali; Rönty, Mikko; Nicholson, Andrew G; Knuutila, Sakari; Brazma, Alvis; Kaski, Samuel

    2012-01-15

    Genome-wide measurement of transcript levels is a ubiquitous tool in biomedical research. As experimental data continue to be deposited in public databases, it is becoming important to develop search engines that can retrieve relevant studies given a query study. While retrieval systems based on meta-data already exist, data-driven approaches that retrieve studies based on similarities in the expression data itself have greater potential to uncover novel biological insights. We propose an information retrieval method based on differential expression. Our method handles arbitrary experimental designs and performs competitively with alternative approaches, while making the search results interpretable in terms of differential expression patterns. We show that our model yields meaningful connections between biological conditions from different studies. Finally, we validate a previously unknown connection between malignant pleural mesothelioma and SIM2s suggested by our method, via real-time polymerase chain reaction in an independent set of mesothelioma samples. Supplementary data and source code are available from http://www.ebi.ac.uk/fg/research/rex.

  17. Design and Fabrication of an Experimental Microheater Array Powder Sintering Printer

    NASA Astrophysics Data System (ADS)

    Holt, Nicholas; Zhou, Wenchao

    2018-03-01

    Microheater array powder sintering (MAPS) is a novel additive manufacturing process that uses an array of microheaters to selectively sinter powder particles. MAPS shows great promise as a new method of printing flexible electronics by enabling digital curing of conductive inks on a variety of substrates. For MAPS to work effectively, a microscale air gap needs to be maintained between the heater array and the conductive ink. In this article, we present an experimental MAPS printer with air gap control for printing conductive circuits. First, we discuss design aspects necessary to implement MAPS. An analysis is performed to validate that the design can maintain the desired air gap between the microheaters and the sintering layer, which consists of a silver nanoparticle ink. The printer is tested by printing conductive lines on a flexible plastic substrate with silver nanoparticle ink. Results show MAPS performs on par with or better than the existing fabrication methods for printed electronics in terms of both the print quality (conductivity of the printed line) and print speed, which shows MAPS' great promise as a competitive new method for digital production of printed electronics.

  18. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    NASA Astrophysics Data System (ADS)

    Yuan, L.; Zhu, G.

    2018-04-01

    Remote sensing image classification, an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and omission errors, which keep the final classification accuracy low. In this paper, we select Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compare three feature-level fusion algorithms (Gram-Schmidt spectral sharpening, Principal Component Analysis transform and Brovey transform) and select the best fused image for the classification experiments. In the classification process, we apply four classification algorithms (Minimum distance, Mahalanobis distance, Support Vector Machine and ISODATA) in a contrast experiment. Overall classification precision and the Kappa coefficient are used as the accuracy evaluation criteria, and the four classification results of the fused image are analysed. The experimental results show that Gram-Schmidt spectral sharpening gives a better fusion than the other methods. Among the four classification algorithms, the fused image is best suited to Support Vector Machine classification, with an overall classification precision of 94.01 % and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only contains more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method helps improve the accuracy and stability of remote sensing image classification.
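The two accuracy criteria used above can be computed directly from a confusion matrix; a small sketch with toy numbers, not the paper's data:

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's Kappa from a confusion matrix.

    confusion[i][j] = number of samples of true class i labelled as class j.
    """
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    po = np.trace(c) / n                                 # observed agreement
    pe = (c.sum(axis=0) * c.sum(axis=1)).sum() / n ** 2  # chance agreement
    return po, (po - pe) / (1.0 - pe)

# toy two-class result (not the paper's data)
acc, kappa = accuracy_and_kappa([[45, 5], [5, 45]])
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside overall precision.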

  19. Molecular structure, electronic properties, NLO, NBO analysis and spectroscopic characterization of Gabapentin with experimental (FT-IR and FT-Raman) techniques and quantum chemical calculations

    NASA Astrophysics Data System (ADS)

    Sinha, Leena; Karabacak, Mehmet; Narayan, V.; Cinar, Mehmet; Prasad, Onkar

    2013-05-01

    Gabapentin (GP), structurally related to the neurotransmitter GABA (gamma-aminobutyric acid), mimics the activity of GABA and is also widely used in neurology for the treatment of peripheral neuropathic pain. It exists in zwitterionic form in solid state. The present communication deals with the quantum chemical calculations of energies, geometrical structure and vibrational wavenumbers of GP using density functional (DFT/B3LYP) method with 6-311++G(d,p) basis set. In view of the fact that amino acids exist as zwitterions as well as in the neutral form depending on the environment (solvent, pH, etc.), molecular properties of both the zwitterionic and neutral form of GP have been analyzed. The fundamental vibrational wavenumbers as well as their intensities were calculated and compared with experimental FT-IR and FT-Raman spectra. The fundamental assignments were done on the basis of the total energy distribution (TED) of the vibrational modes, calculated with scaled quantum mechanical (SQM) method. The electric dipole moment, polarizability and the first hyperpolarizability values of the GP have been calculated at the same level of theory and basis set. The nonlinear optical (NLO) behavior of zwitterionic and neutral form has been compared. Stability of the molecule arising from hyper-conjugative interactions and charge delocalization has been analyzed using natural bond orbital analysis. Ultraviolet-visible (UV-Vis) spectrum of the title molecule has also been calculated using TD-DFT method. The thermodynamic properties of both the zwitterionic and neutral form of GP at different temperatures have been calculated.

  20. MBMC: An Effective Markov Chain Approach for Binning Metagenomic Reads from Environmental Shotgun Sequencing Projects.

    PubMed

    Wang, Ying; Hu, Haiyan; Li, Xiaoman

    2016-08-01

    Metagenomics is a next-generation omics field currently impacting postgenomic life sciences and medicine. Binning metagenomic reads is essential for the understanding of microbial function, compositions, and interactions in given environments. Despite the existence of dozens of computational methods for metagenomic read binning, it is still very challenging to bin reads. This is especially true for reads from unknown species, from species with similar abundance, and/or from low-abundance species in environmental samples. In this study, we developed a novel taxonomy-dependent and alignment-free approach called MBMC (Metagenomic Binning by Markov Chains). Different from all existing methods, MBMC bins reads by measuring the similarity of reads to the trained Markov chains for different taxa instead of directly comparing reads with known genomic sequences. By testing on more than 24 simulated and experimental datasets with species of similar abundance, species of low abundance, and/or unknown species, we report here that MBMC reliably grouped reads from different species into separate bins. Compared with four existing approaches, we demonstrated that the performance of MBMC was comparable with existing approaches when binning reads from sequenced species, and superior to existing approaches when binning reads from unknown species. MBMC is a pivotal tool for binning metagenomic reads in the current era of Big Data and postgenomic integrative biology. The MBMC software can be freely downloaded at http://hulab.ucf.edu/research/projects/metagenomics/MBMC.html .
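The core idea of scoring reads against per-taxon Markov chains can be sketched as follows. This is an illustrative order-2 toy, not the MBMC implementation; the tool's actual chain order, smoothing, and training procedure are described in the paper and software:

```python
from collections import defaultdict
from math import log

def train_chain(genome, k=2):
    """Train a k-th order Markov chain: P(next base | preceding k bases)."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(genome) - k):
        counts[genome[i:i + k]][genome[i + k]] += 1
    return {ctx: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for ctx, nxt in counts.items()}

def log_likelihood(read, chain, k=2, floor=1e-6):
    """Log-probability of a read under a chain (floored for unseen contexts)."""
    return sum(log(chain.get(read[i:i + k], {}).get(read[i + k], floor))
               for i in range(len(read) - k))

def bin_read(read, chains):
    """Assign the read to the taxon whose chain scores it highest."""
    return max(chains, key=lambda taxon: log_likelihood(read, chains[taxon]))

# toy "genomes" with very different composition
chains = {"taxonA": train_chain("ATATATATATATATAT"),
          "taxonB": train_chain("GGGCGGGCGGGCGGGC")}
assigned = bin_read("ATATATAT", chains)
```

Because reads are compared with trained chains rather than aligned to reference genomes, the approach can score reads from species with no sequenced relatives.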

  1. Predicting drug loading in PLA-PEG nanoparticles.

    PubMed

    Meunier, M; Goupil, A; Lienard, P

    2017-06-30

    Polymer nanoparticles present advantageous physical and biopharmaceutical properties as drug delivery systems compared to conventional liquid formulations. Active pharmaceutical ingredients (APIs) are often hydrophobic, and thus not soluble in conventional liquid delivery. Encapsulating drugs in polymer nanoparticles can improve their pharmacological and bio-distribution properties, preventing rapid clearance from the bloodstream. Such nanoparticles are commonly made of non-toxic amphiphilic self-assembling block copolymers, where the core (poly-[d,l-lactic acid], or PLA) serves as a reservoir for the API and the external part (poly-(ethylene-glycol), or PEG) serves as a stealth corona to avoid capture by macrophages. The present study aims to predict the drug affinity for PLA-PEG nanoparticles and their effective drug loading using in silico tools, in order to virtually screen potential drugs for non-covalent encapsulation applications. To that end, different simulation methods, such as molecular dynamics and Monte-Carlo, have been used to estimate the binding of actives on model polymer surfaces. Initially, the methods and models are validated against a series of pigment molecules for which experimental data exist. The drug affinity for the core of the nanoparticles is estimated using a Monte-Carlo "docking" method. Drug miscibility in the polymer matrix is then estimated using the Hildebrand solubility parameter (δ), together with the solvation free energy of the drug in the PLA polymer model. Finally, existing published ALogP quantitative structure-property relationships (QSPR) are compared to this method. Our results demonstrate that adsorption energies modelled by docking atomistic simulations on PLA surfaces correlate well with experimental drug loadings, whereas simpler approaches based on Hildebrand solubility parameters and Flory-Huggins interaction parameters do not. More complex molecular dynamics techniques, which estimate the solvation free energies both in PLA and in water, led to satisfactory predictive models. In addition, experimental drug loadings and Log P are found to correlate well. This work can be used to improve the understanding of drug-polymer interactions, a key component in designing better delivery systems. Copyright © 2017 Elsevier B.V. All rights reserved.
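For reference, the Hildebrand-based Flory-Huggins interaction parameter mentioned above (the approach the authors found to correlate poorly with loading) is commonly written as χ = V(δ_drug − δ_polymer)²/(RT); a sketch with an assumed reference molar volume, and illustrative solubility-parameter values that are not taken from the paper:

```python
R = 8.314  # gas constant, J/(mol K)

def flory_huggins_chi(delta_drug, delta_poly, v_ref_cm3=100.0, T=298.0):
    """Flory-Huggins chi from Hildebrand solubility parameters.

    delta_* are in MPa^0.5; v_ref_cm3 is an assumed reference molar volume
    in cm^3/mol (a typical placeholder value, not taken from the paper).
    """
    diff = (delta_drug - delta_poly) * 1.0e3   # MPa^0.5 -> Pa^0.5
    v_ref = v_ref_cm3 * 1.0e-6                 # cm^3/mol -> m^3/mol
    return v_ref * diff ** 2 / (R * T)         # (J/m^3)*(m^3/mol)/(J/mol)

# e.g. a drug at 22 MPa^0.5 in a polymer near 20 MPa^0.5 (illustrative)
chi = flory_huggins_chi(22.0, 20.0)
```

Small χ suggests miscibility; the study's point is that this one-number estimate misses the surface-adsorption effects captured by the docking simulations.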

  2. OneG: A Computational Tool for Predicting Cryptic Intermediates in the Unfolding Kinetics of Proteins under Native Conditions

    PubMed Central

    Richa, Tambi; Sivaraman, Thirunavukkarasu

    2012-01-01

    Understanding the relationships between the conformations of proteins and their stabilities is one key to addressing the protein folding paradigm. The free energy change (ΔG) of unfolding reactions of proteins is measured by traditional denaturation methods and native hydrogen-deuterium (H/D) exchange methods. However, the free energy of unfolding (ΔGU) and the free energy of exchange (ΔGHX) of proteins are not in good agreement, even though the experimental conditions of the two methods match each other well. The anomaly is due to any one, or a combination, of the following reasons: (i) effects of cis-trans proline isomerisation in equilibrium unfolding reactions of proteins, (ii) inappropriate accounting of the baselines of melting curves, (iii) presence of cryptic intermediates, which may elude the melting curve analysis, and (iv) existence of higher-energy metastable states in the H/D exchange reactions of proteins. Herein, we have developed a novel computational tool, OneG, which accounts for the discrepancy between ΔGU and ΔGHX of proteins by systematically considering all four factors mentioned above. The program is fully automated and requires four inputs: three-dimensional structures of proteins, ΔGU, ΔGU* and residue-specific ΔGHX determined under EX2-exchange conditions in the absence of denaturants. The robustness of the program has been validated using experimental data available for proteins such as cytochrome c and apocytochrome b562, and the data analyses revealed that the cryptic intermediates of the proteins detected by the experimental methods and the cryptic intermediates predicted by OneG for those proteins were in good agreement. Furthermore, using OneG, we have shown the possible existence of cryptic intermediates and metastable states in the unfolding pathways of cardiotoxin III and cobrotoxin, respectively, which are homologous proteins. 
    The unique application of the program to mapping the unfolding pathways of proteins under native conditions has been brought to the fore, and the program is publicly available at http://sblab.sastra.edu/oneg.html PMID:22412877

  3. Directional virtual backbone based data aggregation scheme for Wireless Visual Sensor Networks.

    PubMed

    Zhang, Jing; Liu, Shi-Jian; Tsai, Pei-Wei; Zou, Fu-Min; Ji, Xiao-Rong

    2018-01-01

    Data gathering is a fundamental task in Wireless Visual Sensor Networks (WVSNs). The features of directional antennas and of visual data make WVSNs more complex than conventional Wireless Sensor Networks (WSNs). The virtual backbone is a technique capable of constructing clusters; the version associated with the aggregation operation is also referred to as the virtual backbone tree. Most of the existing literature focuses on the efficiency brought by the construction of clusters, while neglecting local-balance problems. To fill this gap, a Directional Virtual Backbone based Data Aggregation Scheme (DVBDAS) for WVSNs is proposed in this paper. In addition, a measurement called the energy consumption density is proposed for evaluating the adequacy of results in cluster-based construction problems. Moreover, a directional virtual backbone construction scheme is proposed that takes the local-balance factor into account, and the associated network coding mechanism is utilized to construct DVBDAS. Finally, both a theoretical analysis of the proposed DVBDAS and simulations are given to evaluate its performance. The experimental results show that the proposed DVBDAS achieves higher performance, in terms of both energy preservation and network lifetime extension, than the existing methods.

  4. On the Epistemological Crisis in Genomics

    PubMed Central

    Dougherty, Edward R

    2008-01-01

    There is an epistemological crisis in genomics. At issue is what constitutes scientific knowledge in genomic science, or systems biology in general. Does this crisis require a new perspective on knowledge heretofore absent from science or is it merely a matter of interpreting new scientific developments in an existing epistemological framework? This paper discusses the manner in which the experimental method, as developed and understood over recent centuries, leads naturally to a scientific epistemology grounded in an experimental-mathematical duality. It places genomics into this epistemological framework and examines the current situation in genomics. Meaning and the constitution of scientific knowledge are key concerns for genomics, and the nature of the epistemological crisis in genomics depends on how these are understood. PMID:19440447

  5. Experimental study of near-field air entrainment by subsonic volcanic jets

    USGS Publications Warehouse

    Solovitz, Stephen A.; Mastin, Larry G.

    2009-01-01

    The flow structure in the developing region of a turbulent jet has been examined using particle image velocimetry methods, considering the flow at steady state conditions. The velocity fields were integrated to determine the ratio of the entrained air speed to the jet speed, which was approximately 0.03 for a range of Mach numbers up to 0.89 and Reynolds numbers up to 217,000. This range of experimental Mach and Reynolds numbers is higher than previously considered for high-accuracy entrainment measures, particularly in the near-vent region. The entrainment values are below those commonly used for geophysical analyses of volcanic plumes, suggesting that existing 1-D models are likely to understate the tendency for column collapse.

  6. Sliceable transponders for metro-access transmission links

    NASA Astrophysics Data System (ADS)

    Wagner, C.; Madsen, P.; Spolitis, S.; Vegas Olmos, J. J.; Tafur Monroy, I.

    2015-01-01

    This paper presents a solution for upgrading optical access networks by reusing existing electronics or optical equipment: sliceable transponders using signal spectrum slicing and stitching back method after direct detection. This technique allows transmission of wide bandwidth signals from the service provider (OLT - optical line terminal) to the end user (ONU - optical network unit) over an optical distribution network (ODN) via low bandwidth equipment. We show simulation and experimental results for duobinary signaling of 1 Gbit/s and 10 Gbit/s waveforms. The number of slices is adjusted to match the lowest analog bandwidth of used electrical devices and scale from 2 slices to 10 slices. Results of experimental transmission show error free signal recovery by using post forward error correction with 7% overhead.

  7. Numerical Investigation of the Performance of a Supersonic Combustion Chamber and Comparison with Experiments

    NASA Astrophysics Data System (ADS)

    Banica, M. C.; Chun, J.; Scheuermann, T.; Weigand, B.; Wolfersdorf, J. v.

    2009-01-01

    Scramjet-powered vehicles can decrease the cost of access to space, but substantial obstacles to their realization still exist. For example, experiments in the relevant Mach number regime are difficult to perform and flight testing is expensive. Therefore, numerical methods are often employed for system layout, but they require validation against experimental data. Here, we validate the commercial code CFD++ against experimental results for hydrogen combustion in the supersonic combustion facility of the Institute of Aerospace Thermodynamics (ITLR) at the Universität Stuttgart. Fuel is injected through a lobed strut injector, which provides rapid mixing. Our numerical data show reasonable agreement with experiments. We further investigate the effects of varying equivalence ratios on several important performance parameters.

  8. Dynamo Enhancement and Mode Selection Triggered by High Magnetic Permeability.

    PubMed

    Kreuzahler, S; Ponty, Y; Plihon, N; Homann, H; Grauer, R

    2017-12-08

    We present results from consistent dynamo simulations, where the electrically conducting and incompressible flow inside a cylinder vessel is forced by moving impellers numerically implemented by a penalization method. The numerical scheme models jumps of magnetic permeability for the solid impellers, resembling various configurations tested experimentally in the von Kármán sodium experiment. The most striking experimental observations are reproduced in our set of simulations. In particular, we report on the existence of a time-averaged axisymmetric dynamo mode, self-consistently generated when the magnetic permeability of the impellers exceeds a threshold. We describe a possible scenario involving both the turbulent flow in the vicinity of the impellers and the high magnetic permeability of the impellers.

  9. Joint Concept Correlation and Feature-Concept Relevance Learning for Multilabel Classification.

    PubMed

    Zhao, Xiaowei; Ma, Zhigang; Li, Zhi; Li, Zhihui

    2018-02-01

    In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most of the multilabel classification methods focus only on the inherent correlations existing among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method aiming to capture the correlations among multiple concepts by leveraging hypergraph that is proved to be beneficial for relational learning. Moreover, we consider mining feature-concept relevance, which is often overlooked by many multilabel learning algorithms. To better show the feature-concept relevance, we impose a sparsity constraint on the proposed method. We compare the proposed method with several other multilabel classification methods and evaluate the classification performance by mean average precision on several data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.

  10. Cantilever spring constant calibration using laser Doppler vibrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohler, Benjamin

    2007-06-15

    Uncertainty in cantilever spring constants is a critical issue in atomic force microscopy (AFM) force measurements. Though numerous methods exist for calibrating cantilever spring constants, the accuracy of these methods can be limited both by the physical models themselves and by uncertainties in their experimental implementation. Here we report the results from two of the most common calibration methods, the thermal tune method and the Sader method. These were implemented on a standard AFM system as well as using laser Doppler vibrometry (LDV). Using LDV eliminates some uncertainties associated with optical lever detection on an AFM. It also offers considerably higher signal-to-noise deflection measurements. We find that AFM and LDV result in similar uncertainty in the calibrated spring constants, about 5%, using either the thermal tune or Sader methods, provided that certain limitations of the methods and instrumentation are observed.
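In its simplest equipartition form, the thermal tune method estimates the spring constant from the variance of the thermal deflection, k = kBT/⟨x²⟩. The sketch below omits the mode-shape and detection-sensitivity corrections used in practice and checks itself on synthetic data:

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def thermal_tune_k(deflection_m, T=295.0):
    """Equipartition estimate of a cantilever spring constant: k = kB*T / <x^2>.

    deflection_m holds thermal deflection samples in metres.  Mode-shape and
    optical-lever sensitivity corrections used in practice are omitted here.
    """
    x = np.asarray(deflection_m, dtype=float)
    return KB * T / x.var()

# self-check on synthetic thermal motion of a k = 0.1 N/m lever
rng = np.random.default_rng(0)
true_k = 0.1
rms = np.sqrt(KB * 295.0 / true_k)   # ~0.2 nm rms at room temperature
k_est = thermal_tune_k(rng.normal(0.0, rms, 200_000))
```

The sub-nanometre rms amplitude in this example is why detection noise matters so much for soft levers, and why the higher signal-to-noise of LDV helps.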

  11. Spline based least squares integration for two-dimensional shape or wavefront reconstruction

    DOE PAGES

    Huang, Lei; Xue, Junpeng; Gao, Bo; ...

    2016-12-21

    In this paper, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes in a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as a final result. Numerical simulations verify that the proposed method has smaller algorithm errors than two other existing methods used for comparison; especially at the boundaries, the proposed method has better performance. The noise influence is studied by adding white Gaussian noise to the slope data. Finally, experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.
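A bare-bones finite-difference variant of least-squares integration from slopes (not the spline-based fitting of the paper) can be written as one overdetermined linear system:

```python
import numpy as np

def integrate_slopes(sx, sy, dx=1.0, dy=1.0):
    """Least-squares height reconstruction from x- and y-slope maps.

    Finite-difference sketch (the paper fits splines instead): every pair of
    neighbouring pixels contributes one equation z2 - z1 = mean slope * spacing,
    and the stacked system is solved in the least-squares sense.
    """
    h, w = sx.shape
    n = h * w
    idx = lambda i, j: i * w + j
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    for i in range(h):                      # equations from x-slopes
        for j in range(w - 1):
            rows += [eq, eq]
            cols += [idx(i, j + 1), idx(i, j)]
            vals += [1.0, -1.0]
            rhs.append(0.5 * (sx[i, j] + sx[i, j + 1]) * dx)
            eq += 1
    for i in range(h - 1):                  # equations from y-slopes
        for j in range(w):
            rows += [eq, eq]
            cols += [idx(i + 1, j), idx(i, j)]
            vals += [1.0, -1.0]
            rhs.append(0.5 * (sy[i, j] + sy[i + 1, j]) * dy)
            eq += 1
    a = np.zeros((eq, n))
    a[rows, cols] = vals
    z, *_ = np.linalg.lstsq(a, np.array(rhs), rcond=None)
    z = z.reshape(h, w)
    return z - z.mean()                     # height is defined up to an offset

# exact check on a tilted plane z = 2x + 3y
h, w = 4, 5
ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
z_true = 2.0 * jj + 3.0 * ii
z_rec = integrate_slopes(np.full((h, w), 2.0), np.full((h, w), 3.0))
```

The spline formulation replaces the two-point mean-slope approximation with analytical polynomial height changes, which is where its boundary advantage comes from.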

  12. [Review of research design and statistical methods in Chinese Journal of Cardiology].

    PubMed

    Zhang, Li-jun; Yu, Jin-ming

    2009-07-01

    To evaluate the research design and the use of statistical methods in Chinese Journal of Cardiology, we reviewed the research designs and statistical methods in all original papers published in the journal from December 2007 to November 2008. The most frequently used research designs were cross-sectional (34%), prospective (21%) and experimental (25%). Of all the articles, 49 (25%) used wrong statistical methods, 29 (15%) lacked some form of statistical analysis, and 23 (12%) had inconsistencies in the description of methods. There were significant differences between different statistical methods (P < 0.001). The rates of correct use of multifactor analysis were low, and repeated-measures data were not analysed with repeated-measures methods. Many problems exist in Chinese Journal of Cardiology; better research design and correct use of statistical methods are still needed. Stricter review by statisticians and epidemiologists is also required to improve the quality of the literature.

  13. Spline based least squares integration for two-dimensional shape or wavefront reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Xue, Junpeng; Gao, Bo

    In this paper, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes in a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as a final result. Numerical simulations verify that the proposed method has smaller algorithm errors than two other existing methods used for comparison; especially at the boundaries, the proposed method has better performance. The noise influence is studied by adding white Gaussian noise to the slope data. Finally, experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.

  14. Mapping the ecological networks of microbial communities.

    PubMed

    Xiao, Yandong; Angulo, Marco Tulio; Friedman, Jonathan; Waldor, Matthew K; Weiss, Scott T; Liu, Yang-Yu

    2017-12-11

    Mapping the ecological networks of microbial communities is a necessary step toward understanding their assembly rules and predicting their temporal behavior. However, existing methods require assuming a particular population dynamics model, which is not known a priori. Moreover, those methods require fitting longitudinal abundance data, which are often not informative enough for reliable inference. To overcome these limitations, here we develop a new method based on steady-state abundance data. Our method can infer the network topology and inter-taxa interaction types without assuming any particular population dynamics model. Additionally, when the population dynamics is assumed to follow the classic Generalized Lotka-Volterra model, our method can infer the inter-taxa interaction strengths and intrinsic growth rates. We systematically validate our method using simulated data, and then apply it to four experimental data sets. Our method represents a key step towards reliable modeling of complex, real-world microbial communities, such as the human gut microbiota.
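When the Generalized Lotka-Volterra assumption is adopted, each steady state x* with taxon i present satisfies r_i + Σ_j A_ij x*_j = 0, so rows of A and the growth rates follow from linear least squares across steady states. A toy sketch, with the normalization A_ii = −1 assumed for identifiability (an illustrative choice, not necessarily the paper's):

```python
import numpy as np

def infer_glv_row(i, steady_states):
    """Infer row i of the GLV interaction matrix A and the growth rate r_i.

    steady_states : (m, n) array, each row a steady-state abundance profile
                    in which taxon i is present.  At steady state,
                    r_i + sum_j A[i, j] * x_j = 0.
    """
    x = np.asarray(steady_states, dtype=float)
    m, n = x.shape
    others = [j for j in range(n) if j != i]
    # with the self-interaction fixed at A[i, i] = -1 (assumed normalization),
    # each steady state gives: sum_{j != i} A[i, j] * x_j + r_i = x_i
    design = np.hstack([x[:, others], np.ones((m, 1))])
    sol, *_ = np.linalg.lstsq(design, x[:, i], rcond=None)
    a_row = np.full(n, -1.0)
    a_row[others] = sol[:-1]
    return a_row, sol[-1]

# toy community: taxon 0 with r = 1.0 and interaction row A[0] = [-1.0, 0.5]
steady_states = np.array([[1.0, 0.0], [2.0, 2.0], [3.0, 4.0]])
a_row, r = infer_glv_row(0, steady_states)
```

The paper's model-free step infers only the network topology and interaction signs; the parametric least-squares step above applies only once the GLV form is assumed.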

  15. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.

    PubMed

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-07-19

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane, whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes, with transparent or opaque roofs, can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting, and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, using comparatively fewer empirically set parameters. The performance of the proposed GBE method is evaluated on two benchmark data sets using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. 
When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics.

  16. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    PubMed Central

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-01-01

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane, whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes, with transparent or opaque roofs, can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting, and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, using comparatively fewer empirically set parameters. The performance of the proposed GBE method is evaluated on two benchmark data sets using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. 
When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics. PMID:27447631
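The gradient cue at the heart of GBE, a near-constant height change along a planar roof versus a random height change in a tree canopy, can be illustrated with a toy 1-D sketch (the `classify_surface` helper and its variance threshold are hypothetical; the paper operates on 2-D LiDAR intensity images):

```python
import statistics

def classify_surface(heights, var_threshold=0.05):
    """Toy 1-D illustration of the GBE gradient idea: along a planar roof
    the height changes at a near-constant rate, so the gradient variance
    is small; a tree canopy changes height randomly, so it is large."""
    gradients = [b - a for a, b in zip(heights, heights[1:])]
    return "roof" if statistics.pvariance(gradients) < var_threshold else "tree"

roof_profile = [10.0 + 0.1 * i for i in range(20)]          # constant slope
tree_profile = [10.0, 11.3, 9.8, 12.1, 10.4, 11.9, 9.5]     # random canopy
```

Calling `classify_surface(roof_profile)` and `classify_surface(tree_profile)` separates the two profiles by gradient variance alone, which is the property the paper exploits on whole intensity images.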

  17. High-resolution stress measurements for microsystem and semiconductor applications

    NASA Astrophysics Data System (ADS)

    Vogel, Dietmar; Keller, Juergen; Michel, Bernd

    2006-04-01

    Research results obtained for local stress determination on micro- and nanotechnology components are summarized. The work addresses the need to control stresses introduced into sensors, MEMS and electronic devices during different micromachining processes. The method is based on deformation measurements made inside focused ion beam (FIB) equipment. When material is removed locally by ion beam milling, existing residual stresses lead to deformation fields around the milled feature. Digital image correlation techniques are used to extract deformation values from micrographs captured before and after milling. In the paper, two main milling features are analyzed - through-hole and through-slit milling. Analytical solutions for the stress release fields of in-plane stresses are derived and compared with the respective experimental findings. Their good agreement allows a method for determining residual stress values to be established, which is demonstrated for thin membranes manufactured by silicon micromachining technology. Some emphasis is placed on eliminating the main error sources for stress determination, such as rigid-body displacements and rotations caused by drift of the experimental conditions under FIB imaging. To illustrate potential application areas, the suppression of residual stress by ion implantation is evaluated with the method and reported here.

  18. FIB-based measurement of local residual stresses on microsystems

    NASA Astrophysics Data System (ADS)

    Vogel, Dietmar; Sabate, Neus; Gollhardt, Astrid; Keller, Juergen; Auersperg, Juergen; Michel, Bernd

    2006-03-01

    The paper comprises research results obtained for stress determination on micro- and nanotechnology components. It addresses the need to control stresses introduced into sensors, MEMS and electronic devices during different micromachining processes. The method is based on deformation measurements made inside focused ion beam (FIB) equipment. When material is removed locally by ion beam milling, existing residual stresses lead to deformation fields around the milled feature. Digital image correlation techniques are used to extract deformation values from micrographs captured before and after milling. In the paper, two main milling features are analyzed - through-hole and through-slit milling. Analytical solutions for the stress release fields of in-plane stresses are derived and compared with the respective experimental findings. Their good agreement allows a method for determining residual stress values to be established, which is demonstrated for thin membranes manufactured by silicon micromachining technology. Some emphasis is placed on eliminating the main error sources for stress determination, such as rigid-body displacements and rotations caused by drift of the experimental conditions under FIB imaging. To illustrate potential application areas, the suppression of residual stress by ion implantation is evaluated with the method and reported here.

  19. Blurred image recognition by Legendre moment invariants

    PubMed Central

    Zhang, Hui; Shu, Huazhong; Han, Guo-Niu; Coatrieux, Gouenou; Luo, Limin; Coatrieux, Jean-Louis

    2010-01-01

    Processing blurred images is a key problem in many image applications. Existing methods to obtain blur invariants which are invariant with respect to centrally symmetric blur are based on geometric moments or complex moments. In this paper, we propose a new method to construct a set of blur invariants using the orthogonal Legendre moments. Some important properties of Legendre moments for the blurred image are presented and proved. The performance of the proposed descriptors is evaluated with various point-spread functions and different image noises. The comparison of the present approach with previous methods in terms of pattern recognition accuracy is also provided. The experimental results show that the proposed descriptors are more robust to noise and have better discriminative power than the methods based on geometric or complex moments. PMID:19933003
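The orthogonal Legendre moments from which the paper's blur invariants are assembled can be computed with the standard three-term recurrence. The sketch below is a minimal illustration with one common choice of discrete normalisation, mapping pixel coordinates onto [-1, 1]:

```python
def legendre(n, x):
    """Legendre polynomial P_n(x) via the three-term recurrence
    (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def legendre_moment(img, p, q):
    """Legendre moment lambda_pq of a grayscale image (list of rows),
    with pixel coordinates mapped onto [-1, 1]; the blur invariants in
    the paper are built from combinations of such moments."""
    rows, cols = len(img), len(img[0])
    norm = (2 * p + 1) * (2 * q + 1) / (rows * cols)  # discrete normalisation
    total = 0.0
    for i in range(rows):
        y = 2 * i / (rows - 1) - 1
        for j in range(cols):
            x = 2 * j / (cols - 1) - 1
            total += legendre(p, x) * legendre(q, y) * img[i][j]
    return norm * total
```

For a constant image, lambda_00 reduces to the mean intensity and all odd-order moments vanish by symmetry, which is a quick sanity check on the implementation.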

  20. Quantitative phase imaging method based on an analytical nonparaxial partially coherent phase optical transfer function.

    PubMed

    Bao, Yijun; Gaylord, Thomas K

    2016-11-01

    Multifilter phase imaging with partially coherent light (MFPI-PC) is a promising new quantitative phase imaging method. However, the existing MFPI-PC method is based on the paraxial approximation. In the present work, an analytical nonparaxial partially coherent phase optical transfer function is derived. This enables the MFPI-PC to be extended to the realistic nonparaxial case. Simulations over a wide range of test phase objects as well as experimental measurements on a microlens array verify higher levels of imaging accuracy compared to the paraxial method. Unlike the paraxial version, the nonparaxial MFPI-PC with obliquity factor correction exhibits no systematic error. In addition, due to its analytical expression, the increase in computation time compared to the paraxial version is negligible.

  1. Color Image Enhancement Using Multiscale Retinex Based on Particle Swarm Optimization Method

    NASA Astrophysics Data System (ADS)

    Matin, F.; Jeong, Y.; Kim, K.; Park, K.

    2018-01-01

    This paper introduces a novel method for image enhancement using multiscale retinex and particle swarm optimization (PSO). Multiscale retinex is a widely used image enhancement technique that depends heavily on parameters such as the Gaussian scales, gain, and offset. To achieve the desired effect, these parameters normally need to be tuned manually for each image. To address this, a retinex algorithm based on PSO is developed. The PSO method adjusts the parameters of multiscale retinex with chromaticity preservation (MSRCP) and attains better results than other existing methods. Experimental results indicate that the proposed algorithm is efficient and not only preserves color fidelity in low-light conditions but also avoids color distortion.
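A minimal 1-D sketch of the multiscale retinex computation itself (fixed illustrative scales; in the paper, PSO tunes the Gaussian scales, gain, and offset automatically):

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalised 1-D Gaussian kernel."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """1-D convolution with edge clamping."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def multiscale_retinex(signal, sigmas=(1.0, 4.0, 16.0)):
    """1-D multiscale retinex sketch: for each Gaussian scale, subtract
    the log of the smoothed signal from the log of the signal, then
    average over scales. The sigmas here are illustrative values."""
    out = [0.0] * len(signal)
    for sigma in sigmas:
        blurred = convolve(signal, gaussian_kernel(sigma, int(3 * sigma)))
        for i, (v, b) in enumerate(zip(signal, blurred)):
            out[i] += (math.log(v + 1.0) - math.log(b + 1.0)) / len(sigmas)
    return out
```

A flat signal yields a zero retinex output, since each blurred version equals the signal itself; the enhancement comes entirely from local contrast around edges and gradients.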

  2. Efficient method of image edge detection based on FSVM

    NASA Astrophysics Data System (ADS)

    Cai, Aiping; Xiong, Xiaomei

    2013-07-01

    For efficient object contour edge detection in digital images, this paper studies traditional methods and an algorithm based on the support vector machine (SVM). Analysis shows that the Canny edge detection algorithm produces some pseudo-edges and has poor noise immunity. To provide a reliable edge extraction method, a new detection algorithm based on the fuzzy support vector machine (FSVM) is proposed. It consists of several steps: first, the classification samples are trained and different membership functions are assigned to different samples. Then, a new training set is formed by increasing the penalty on misclassified sub-samples, and the new FSVM classification model is trained and tested on it. Finally, the edges of the object image are extracted using the model. Experimental results show that good edge detection images are obtained, and experiments with added noise show that the method is robust to noise.

  3. Quantification of Peptides from Immunoglobulin Constant and Variable Regions by Liquid Chromatography-Multiple Reaction Monitoring Mass Spectrometry for Assessment of Multiple Myeloma Patients

    PubMed Central

    Remily-Wood, Elizabeth R.; Benson, Kaaron; Baz, Rachid C.; Chen, Y. Ann; Hussein, Mohamad; Hartley-Brown, Monique A.; Sprung, Robert W.; Perez, Brianna; Liu, Richard Z.; Yoder, Sean; Teer, Jamie; Eschrich, Steven A.; Koomen, John M.

    2014-01-01

    Purpose Quantitative mass spectrometry assays for immunoglobulins (Igs) are compared with existing clinical methods in samples from patients with plasma cell dyscrasias, e.g. multiple myeloma. Experimental design Using LC-MS/MS data, Ig constant region peptides and transitions were selected for liquid chromatography-multiple reaction monitoring mass spectrometry (LC-MRM). Quantitative assays were used to assess Igs in serum from 83 patients. Results LC-MRM assays quantify serum levels of Igs and their isoforms (IgG1–4, IgA1–2, IgM, IgD, and IgE, as well as kappa(κ) and lambda(λ) light chains). LC-MRM quantification has been applied to single samples from a patient cohort and a longitudinal study of an IgE patient undergoing treatment, to enable comparison with existing clinical methods. Proof-of-concept data for defining and monitoring variable region peptides are provided using the H929 multiple myeloma cell line and two MM patients. Conclusions and Clinical Relevance LC-MRM assays targeting constant region peptides determine the type and isoform of the involved immunoglobulin and quantify its expression; the LC-MRM approach has improved sensitivity compared with the current clinical method, but slightly higher interassay variability. Detection of variable region peptides is a promising way to improve Ig quantification, which could produce a dramatic increase in sensitivity over existing methods, and could further complement current clinical techniques. PMID:24723328

  4. Determination of ferroelectric contributions to electromechanical response by frequency dependent piezoresponse force microscopy.

    PubMed

    Seol, Daehee; Park, Seongjae; Varenyk, Olexandr V; Lee, Shinbuhm; Lee, Ho Nyung; Morozovska, Anna N; Kim, Yunseok

    2016-07-28

    Hysteresis loop analysis via piezoresponse force microscopy (PFM) is typically performed to probe the existence of ferroelectricity at the nanoscale. However, such an approach is rather complex for accurately determining the pure contribution of ferroelectricity to the PFM signal. Here, we suggest a facile method to discriminate the ferroelectric effect from the electromechanical (EM) response through the use of a frequency-dependent ac amplitude sweep in combination with hysteresis loops in PFM. Our combined experimental and theoretical study verifies that this method can be used as a new tool to differentiate the ferroelectric effect from the other factors that contribute to the EM response.

  5. Multiplicative noise removal via a learned dictionary.

    PubMed

    Huang, Yu-Mei; Moisan, Lionel; Ng, Michael K; Zeng, Tieyong

    2012-11-01

    Multiplicative noise removal is a challenging image processing problem, and most existing methods are based on the maximum a posteriori formulation and the logarithmic transformation of multiplicative denoising problems into additive denoising problems. Sparse representations of images have been shown to be efficient approaches for image recovery. Following this idea, in this paper we propose to learn a dictionary from the logarithmically transformed image, and then to use it in a variational model built for noise removal. Extensive experimental results suggest that, in terms of visual quality, peak signal-to-noise ratio, and mean absolute deviation error, the proposed algorithm outperforms state-of-the-art methods.
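The logarithmic transformation the abstract relies on can be shown with a scalar toy example (the learned dictionary itself is beyond a sketch; a simple mean stands in for the additive-domain denoiser):

```python
import math
import random

random.seed(0)
clean = 10.0
# Multiplicative (speckle-like) noise: each sample is the clean value
# times a log-normal factor.
noisy = [clean * random.lognormvariate(0.0, 0.3) for _ in range(1000)]

# After a log transform the noise becomes additive with zero mean, so an
# additive-domain estimator applies (here the sample mean, standing in
# for the dictionary-based variational model of the paper); exponentiate
# to return to the intensity domain.
log_mean = sum(math.log(v) for v in noisy) / len(noisy)
estimate = math.exp(log_mean)
```

The recovered `estimate` lands close to the clean value 10.0, whereas averaging the noisy samples directly would be biased upward by the skew of the multiplicative noise.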

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Suvam; Naghma, Rahla; Kaur, Jaspreet

    The total and ionization cross sections for electron scattering by benzene, halobenzenes, toluene, aniline, and phenol are reported over a wide energy domain. The multi-scattering-centre spherical complex optical potential method has been employed to find the total elastic and inelastic cross sections. The total ionization cross section is estimated from the total inelastic cross section using the complex scattering potential-ionization contribution method. In the present article, the first theoretical calculations of electron-impact total and ionization cross sections have been performed for most of the targets, which have numerous practical applications. A reasonable agreement with existing experimental observations is obtained for all the targets reported here, especially for the total cross section.

  7. Local lubrication model for spherical particles within incompressible Navier-Stokes flows.

    PubMed

    Lambert, B; Weynans, L; Bergmann, M

    2018-03-01

    Lubrication forces are short-range hydrodynamic interactions essential for describing particle suspensions. They are usually underestimated in direct numerical simulations of particle-laden flows. In this paper, we propose a lubrication model for a coupled volume penalization method and discrete element method solver that estimates the unresolved hydrodynamic forces and torques in an incompressible Navier-Stokes flow. Corrections are made locally on the surface of the interacting particles without any assumption on the global particle shape. The numerical model has been validated against experimental data and performs as well as existing numerical models that are limited to spherical particles.

  8. Determination of ferroelectric contributions to electromechanical response by frequency dependent piezoresponse force microscopy

    PubMed Central

    Seol, Daehee; Park, Seongjae; Varenyk, Olexandr V.; Lee, Shinbuhm; Lee, Ho Nyung; Morozovska, Anna N.; Kim, Yunseok

    2016-01-01

    Hysteresis loop analysis via piezoresponse force microscopy (PFM) is typically performed to probe the existence of ferroelectricity at the nanoscale. However, such an approach is rather complex for accurately determining the pure contribution of ferroelectricity to the PFM signal. Here, we suggest a facile method to discriminate the ferroelectric effect from the electromechanical (EM) response through the use of a frequency-dependent ac amplitude sweep in combination with hysteresis loops in PFM. Our combined experimental and theoretical study verifies that this method can be used as a new tool to differentiate the ferroelectric effect from the other factors that contribute to the EM response. PMID:27466086

  9. Rational-operator-based depth-from-defocus approach to scene reconstruction.

    PubMed

    Li, Ang; Staunton, Richard; Tjahjadi, Tardi

    2013-09-01

    This paper presents a rational-operator-based approach to depth from defocus (DfD) for the reconstruction of three-dimensional scenes from two-dimensional images, which enables fast DfD computation that is independent of scene textures. Two variants of the approach, one using the Gaussian rational operators (ROs) that are based on the Gaussian point spread function (PSF) and the second based on the generalized Gaussian PSF, are considered. A novel DfD correction method is also presented to further improve the performance of the approach. Experimental results are considered for real scenes and show that both approaches outperform existing RO-based methods.

  10. Hot deformation behavior of AA5383 alloy

    NASA Astrophysics Data System (ADS)

    Du, Rou; Giraud, Eliane; Mareau, Charles; Ayed, Yessine; Santo, Philippe Dal

    2018-05-01

    Hot forming processes are widely used in deep drawing applications due to the ability of metallic materials to sustain large deformations at elevated temperature. The optimization of such forming processes often requires the mechanical behavior to be accurately described. In this study, the high-temperature behavior of a 5383 aluminum alloy is investigated. Different uniaxial tension tests have been carried out on dog-bone shaped specimens using a specific experimental device. The temperature and strain rate ranges of interest are 623-723 K and 0.0001-0.1 s-1, respectively. An inverse method has been used to determine the flow curves from the experimental force-displacement data. The material exhibits a slight flow stress increase beyond the yield point for most configurations, and a softening phenomenon exists at high strain rates and high temperatures. A new model, based on a modification of the Zerilli-Armstrong model, is proposed to describe the stress-strain responses, and a genetic algorithm optimization method is used to identify its parameters. The new model shows good predictive capability under the experimental conditions, and its application is validated by shear and notched tension tests.

  11. Fuzzy forecasting based on fuzzy-trend logical relationship groups.

    PubMed

    Chen, Shyi-Ming; Wang, Nai-Yi

    2010-10-01

    In this paper, we present a new method to predict the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) based on fuzzy-trend logical relationship groups (FTLRGs). The proposed method divides fuzzy logical relationships into FTLRGs based on the trend of the adjacent fuzzy sets appearing in the antecedents of the fuzzy logical relationships. First, we apply an automatic clustering algorithm to cluster the historical data into intervals of different lengths. Next, we define fuzzy sets based on these intervals, and the historical data are fuzzified into fuzzy sets to derive fuzzy logical relationships. Finally, we divide the fuzzy logical relationships into FTLRGs for forecasting the TAIEX. Moreover, we also apply the proposed method to forecast enrollments and inventory demand. The experimental results show that the proposed method achieves higher average forecasting accuracy rates than the existing methods.
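The fuzzification and fuzzy-logical-relationship steps can be sketched as follows (the intervals and helper names are illustrative; the FTLRG step of the paper would further split each group by the trend of the antecedent fuzzy sets):

```python
def fuzzify(series, intervals):
    """Map each observation to the index of the fuzzy set (interval)
    containing it; values outside all intervals are skipped."""
    labels = []
    for v in series:
        for k, (lo, hi) in enumerate(intervals):
            if lo <= v < hi:
                labels.append(k)
                break
    return labels

def fuzzy_logical_relationships(labels):
    """Derive first-order fuzzy logical relationships A_i -> A_j from
    consecutive fuzzified observations, grouped by antecedent A_i."""
    groups = {}
    for a, b in zip(labels, labels[1:]):
        groups.setdefault(a, []).append(b)
    return groups
```

For example, the series [12, 15, 11, 18, 16] with intervals [10,13), [13,16), [16,19) fuzzifies to [0, 1, 0, 2, 2], giving the relationship groups 0 -> {1, 2}, 1 -> {0}, 2 -> {2}, which a forecaster then consults based on the current fuzzy set.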

  12. A hybrid method based on Band Pass Filter and Correlation Algorithm to improve debris sensor capacity

    NASA Astrophysics Data System (ADS)

    Hong, Wei; Wang, Shaoping; Liu, Haokuo; Tomovic, Mileta M.; Chao, Zhang

    2017-01-01

    Inductive debris detection is an effective method for monitoring mechanical wear and could be used to prevent serious accidents. However, debris detection during the early phase of mechanical wear, when small debris (<100 μm) is generated, requires a sensor with high sensitivity relative to the background noise. In order to detect smaller debris with existing sensors, this paper presents a hybrid method that combines a band-pass filter with a correlation algorithm to improve the sensor signal-to-noise ratio (SNR). Simulation results indicate that the SNR is improved by a factor of at least 2.67 after signal processing; in other words, the method ensures debris identification when the sensor's SNR is greater than -3 dB, so smaller debris can be detected at the same SNR. Finally, the effectiveness of the proposed method is experimentally validated.
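The correlation half of such a hybrid method can be sketched as a normalised matched filter: sliding a known debris signature over the sensor trace so that a burst buried in background noise still produces a clear correlation peak (the signal shapes here are illustrative, not the paper's sensor model):

```python
import math

def matched_correlation(signal, template):
    """Normalised cross-correlation of a known signature `template`
    against every window of `signal`; a peak near 1 flags a debris
    passage even when the burst is weak relative to the background."""
    t_mean = sum(template) / len(template)
    t = [v - t_mean for v in template]
    t_norm = math.sqrt(sum(v * v for v in t))
    scores = []
    for i in range(len(signal) - len(template) + 1):
        w = signal[i:i + len(template)]
        w_mean = sum(w) / len(w)
        w0 = [v - w_mean for v in w]
        w_norm = math.sqrt(sum(v * v for v in w0)) or 1.0
        scores.append(sum(a * b for a, b in zip(w0, t)) / (w_norm * t_norm))
    return scores

# Illustrative trace: slow background ripple plus a burst at offset 40.
template = [math.sin(2 * math.pi * i / 8) for i in range(16)]
signal = [0.05 * math.sin(0.2 * i) for i in range(120)]
for i, v in enumerate(template):
    signal[40 + i] += v
scores = matched_correlation(signal, template)
peak = max(range(len(scores)), key=scores.__getitem__)
```

The correlation score peaks at the true burst offset, which is the mechanism by which correlation processing lifts a weak debris signature above the noise floor.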

  13. A general method for the inclusion of radiation chemistry in astrochemical models.

    PubMed

    Shingledecker, Christopher N; Herbst, Eric

    2018-02-21

    In this paper, we propose a general formalism that allows for the estimation of radiolysis decomposition pathways and rate coefficients suitable for use in astrochemical models, with a focus on solid phase chemistry. Such a theory can help increase the connection between laboratory astrophysics experiments and astrochemical models by providing a means for modelers to incorporate radiation chemistry into chemical networks. The general method proposed here is targeted particularly at the majority of species now included in chemical networks for which little radiochemical data exist; however, the method can also be used as a starting point for considering better studied species. We here apply our theory to the irradiation of H 2 O ice and compare the results with previous experimental data.

  14. Ultra-High Density Holographic Memory Module with Solid-State Architecture

    NASA Technical Reports Server (NTRS)

    Markov, Vladimir B.

    2000-01-01

    NASA's terrestrial, space, and deep-space missions require technology that allows storing, retrieving, and processing a large volume of information. Holographic memory offers high-density data storage with parallel access and high throughput. Several methods exist for data multiplexing based on the fundamental principles of volume hologram selectivity. We recently demonstrated that spatial (amplitude-phase) encoding of the reference wave (SERW) looks promising as a way to increase the storage density. The SERW hologram offers a selectivity mechanism other than the traditional ones, namely spatial de-correlation between the recorded and reconstruction fields. In this report we present the experimental results of the SERW-hologram memory module with a solid-state architecture, which is of particular interest for space operations.

  15. A Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree

    NASA Astrophysics Data System (ADS)

    Kang, Q.; Huang, G.; Yang, S.

    2018-04-01

    Point cloud data is one of the most widely used data sources in the field of remote sensing. Key steps in the pre-processing of point cloud data are gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods require massive memory and computation time. This paper employs a new method that constructs a Kd-tree over the points, searches it with a k-nearest-neighbour algorithm, and applies an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps to remove gross errors from point cloud data while decreasing memory consumption and improving efficiency.
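The outlier test described here, thresholding a k-nearest-neighbour distance statistic, can be sketched as follows (a brute-force neighbour search for clarity; the Kd-tree in the paper serves to accelerate exactly this query, and the threshold rule below is an illustrative assumption):

```python
import math

def mean_knn_distance(points, idx, k):
    """Mean distance from points[idx] to its k nearest neighbours.
    Brute force here; a Kd-tree makes this query O(log n) per point."""
    p = points[idx]
    dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != idx)
    return sum(dists[:k]) / k

def remove_gross_errors(points, k=4, factor=3.0):
    """Flag a point as a gross error when its mean k-NN distance exceeds
    `factor` times the global average of that statistic (an illustrative
    thresholding rule)."""
    stats = [mean_knn_distance(points, i, k) for i in range(len(points))]
    mean_stat = sum(stats) / len(stats)
    return [p for p, s in zip(points, stats) if s <= factor * mean_stat]
```

On a regular grid of points with one far-away stray point, the stray's neighbour distances dwarf the grid spacing, so it is the only point removed.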

  16. A novel image registration approach via combining local features and geometric invariants

    PubMed Central

    Lu, Yan; Gao, Kun; Zhang, Tinghua; Xu, Tingfa

    2018-01-01

    Image registration is widely used in many fields, but the adaptability of the existing methods is limited. This work proposes a novel image registration method with high precision for various complex applications. In this framework, the registration problem is divided into two stages. First, we detect and describe scale-invariant feature points using a modified oriented FAST and rotated BRIEF (ORB) algorithm, and a simple method to increase the performance of feature point matching is proposed. Second, we develop a new local constraint of rough selection according to the feature distances. Evidence shows that the existing matching techniques based on image features are insufficient for images with sparse detail. We therefore propose a novel matching algorithm via geometric constraints and establish local feature descriptions based on geometric invariances for the selected feature points. Subsequently, a new cost function is constructed to evaluate the similarities between points and obtain exact matching pairs. Finally, we employ the progressive sample consensus method to remove wrong matches and calculate the space transform parameters. Experimental results on various complex image datasets verify that the proposed method is more robust and significantly reduces the rate of false matches while retaining more high-quality feature points. PMID:29293595
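One simple geometric invariant of the kind invoked here is the pairwise-distance ratio, which is constant under rotation plus uniform scaling. Below is a sketch of a consistency check over candidate matches (the `ratio_consistent` helper and its tolerance are illustrative assumptions, not the paper's exact constraint):

```python
import math

def ratio_consistent(matches, tol=0.05):
    """Accept a set of candidate matches [(point_a, point_b), ...] only
    if pairwise-distance ratios agree between the two images: under a
    similarity transform, d(a_i, a_j) / d(b_i, b_j) is the same constant
    for every pair of matches."""
    ratios = []
    for i in range(len(matches)):
        for j in range(i + 1, len(matches)):
            (a1, b1), (a2, b2) = matches[i], matches[j]
            da, db = math.dist(a1, a2), math.dist(b1, b2)
            if db > 0:
                ratios.append(da / db)
    if not ratios:
        return False
    med = sorted(ratios)[len(ratios) // 2]
    return all(abs(r - med) <= tol * med for r in ratios)
```

A set of matches related by one rotation-plus-scale passes, while injecting a single wrong correspondence breaks the shared ratio and is rejected, which is the kind of geometric filtering that precedes the final sample-consensus step.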

  17. Impact, Fire, and Fluid Spread Code Coupling for Complex Transportation Accident Environment Simulation.

    PubMed

    Brown, Alexander L; Wagner, Gregory J; Metzinger, Kurt E

    2012-06-01

    Transportation accidents frequently involve liquids dispersing in the atmosphere. An example is that of aircraft impacts, which often result in spreading fuel and a subsequent fire. Predicting the resulting environment is of interest for design, safety, and forensic applications. This environment is challenging for many reasons, one among them being the disparate time and length scales that are necessary to resolve for an accurate physical representation of the problem. A recent computational method appropriate for this class of problems has been described for modeling the impact and subsequent liquid spread. Because the environment is difficult to instrument and costly to test, the existing validation data are of limited scope and quality. A comparatively well instrumented test involving a rocket propelled cylindrical tank of water was performed, the results of which are helpful to understand the adequacy of the modeling methods. Existing data include estimates of drop sizes at several locations, final liquid surface deposition mass integrated over surface area regions, and video evidence of liquid cloud spread distances. Comparisons are drawn between the experimental observations and the predicted results of the modeling methods to provide evidence regarding the accuracy of the methods, and to provide guidance on the application and use of these methods.

  18. Image quality assessment using deep convolutional networks

    NASA Astrophysics Data System (ADS)

    Li, Yezhou; Ye, Xiang; Li, Yong

    2017-12-01

    This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training-based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are intimately related to those used in human subjective assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of arbitrary size as input, spatial pyramid pooling (SPP) is introduced between the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method takes an image as input, carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the quality of images taken by different sensors at varying sizes.
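The spatial pyramid pooling step can be sketched directly: whatever the input size, pooling over a fixed grid hierarchy yields a fixed-length vector for the fully-connected layer (a plain-Python illustration with max pooling; real implementations operate per channel on the convolutional feature maps):

```python
def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool an H x W feature map into a fixed-length vector: for each
    pyramid level n, split the map into an n x n grid and take the max of
    each cell, so any input size yields sum(n * n for n in levels) values."""
    h, w = len(feature_map), len(feature_map[0])
    out = []
    for n in levels:
        for gi in range(n):
            for gj in range(n):
                r0, r1 = gi * h // n, max((gi + 1) * h // n, gi * h // n + 1)
                c0, c1 = gj * w // n, max((gj + 1) * w // n, gj * w // n + 1)
                out.append(max(feature_map[r][c]
                               for r in range(r0, r1) for c in range(c0, c1)))
    return out
```

With levels (1, 2, 4) the output always has 1 + 4 + 16 = 21 entries, whether the map is 8 x 8 or 5 x 7, which is precisely what lets the fully-connected layer accept images of arbitrary size.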

  19. A Weighted Multipath Measurement Based on Gene Ontology for Estimating Gene Products Similarity

    PubMed Central

    Liu, Lizhen; Dai, Xuemin; Song, Wei; Lu, Jingli

    2014-01-01

    Many different methods have been proposed for calculating the semantic similarity of term pairs based on the Gene Ontology (GO). Most existing methods are based on information content (IC), and IC-based methods are used more commonly than those based on the structure of the GO. However, most IC-based methods not only fail to handle identical annotations but also show a strong bias toward well-annotated proteins. We propose a new method called weighted multipath measurement (WMM) for estimating the semantic similarity of gene products based on the structure of the GO. We consider not only the contribution of every path between two GO terms but also the depth of the lowest common ancestors, and we assign different weights to different kinds of edges in the GO graph. The similarity values calculated by WMM can be reused because they depend only on the characteristics of the GO terms. Experimental results showed that the similarity values obtained by WMM have higher accuracy. We compared the performance of WMM with that of other methods using GO data and gene annotation datasets for yeast and humans downloaded from the GO database. We found that WMM is better suited for the prediction of gene function than most existing IC-based methods and that it can distinguish proteins with identical annotations (two proteins annotated with the same terms) from each other. PMID:25229994
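The lowest-common-ancestor depth that WMM takes into account can be sketched on a toy is-a DAG (illustrative helpers only; the paper's full measure additionally weights edge types and sums contributions over every path between the two terms):

```python
def ancestors(graph, term):
    """All ancestors of a term in an is-a DAG; `graph` maps child -> parents."""
    seen, stack = set(), [term]
    while stack:
        for parent in graph.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def depth(graph, term, root):
    """Longest is-a path from `term` up to `root` (longest-path is one
    common depth convention for GO)."""
    if term == root:
        return 0
    return 1 + max(depth(graph, p, root) for p in graph[term])

def deepest_common_ancestor_depth(graph, t1, t2, root):
    """Depth of the deepest common ancestor of two terms: the deeper the
    terms meet, the more specific (and more similar) they are."""
    common = (ancestors(graph, t1) | {t1}) & (ancestors(graph, t2) | {t2})
    return max(depth(graph, c, root) for c in common)
```

On a small DAG where terms D and E share parent C two levels below the root, the deepest common ancestor depth is 2, reflecting that the pair meets at a fairly specific term rather than at the root.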

  20. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    PubMed Central

    Smith, Justin D.

    2013-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874

  1. Transitions between corona, glow, and spark regimes of nanosecond repetitively pulsed discharges in air at atmospheric pressure

    NASA Astrophysics Data System (ADS)

    Pai, David Z.; Lacoste, Deanna A.; Laux, Christophe O.

    2010-05-01

    In atmospheric pressure air preheated from 300 to 1000 K, the nanosecond repetitively pulsed (NRP) method has been used to generate corona, glow, and spark discharges. Experiments have been performed to determine the parameter space (applied voltage, pulse repetition frequency, ambient gas temperature, and interelectrode gap distance) of each discharge regime. In particular, the experimental conditions necessary for the glow regime of NRP discharges have been determined, with the notable result that there exists a minimum and maximum gap distance for its existence at a given ambient gas temperature. The minimum gap distance increases with decreasing gas temperature, whereas the maximum does not vary appreciably. To explain the experimental results, an analytical model is developed to explain the corona-to-glow (C-G) and glow-to-spark (G-S) transitions. The C-G transition is analyzed in terms of the avalanche-to-streamer transition and the breakdown field during the conduction phase following the establishment of a conducting channel across the discharge gap. The G-S transition is determined by the thermal ionization instability, and we show analytically that this transition occurs at a certain reduced electric field for the NRP discharges studied here. This model shows that the electrode geometry plays an important role in the existence of the NRP glow regime at a given gas temperature. We derive a criterion for the existence of the NRP glow regime as a function of the ambient gas temperature, pulse repetition frequency, electrode radius of curvature, and interelectrode gap distance.

  2. Microwave imaging of spinning object using orbital angular momentum

    NASA Astrophysics Data System (ADS)

    Liu, Kang; Li, Xiang; Gao, Yue; Wang, Hongqiang; Cheng, Yongqiang

    2017-09-01

    The linear Doppler shift used for the detection of a spinning object becomes significantly weakened when the line of sight (LOS) is perpendicular to the object, which will result in the failure of detection. In this paper, a new detection and imaging technique for spinning objects is developed. The rotational Doppler phenomenon is observed by using the microwave carrying orbital angular momentum (OAM). To converge the radiation energy on the area where objects might exist, the generation method of OAM beams is proposed based on the frequency diversity principle, and the imaging model is derived accordingly. The detection method of the rotational Doppler shift and the imaging approach of the azimuthal profiles are proposed, which are verified by proof-of-concept experiments. Simulation and experimental results demonstrate that OAM beams can still be used to obtain the azimuthal profiles of spinning objects even when the LOS is perpendicular to the object. This work remedies the insufficiency in existing microwave sensing technology and offers a new solution to the object identification problem.

  3. An online database for plant image analysis software tools.

    PubMed

    Lobet, Guillaume; Draye, Xavier; Périlleux, Claire

    2013-10-09

    Recent years have seen an increase in methods for plant phenotyping using image analyses. These methods require new software solutions for data extraction and treatment. These solutions are instrumental in supporting various research pipelines, ranging from the localisation of cellular compounds to the quantification of tree canopies. However, due to the variety of existing tools and the lack of a central repository, it is challenging for researchers to identify the software best suited to their research. We present an online, manually curated database referencing more than 90 plant image analysis software solutions. The website, plant-image-analysis.org, presents each software tool in a uniform and concise manner, enabling users to identify the available solutions for their experimental needs. The website also enables user feedback, evaluations, and new software submissions. The plant-image-analysis.org database provides an overview of existing plant image analysis software. The aim of such a toolbox is to help users find solutions and to give developers a way to exchange and communicate about their work.

  4. Classification of high dimensional multispectral image data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1993-01-01

    A method for classifying high dimensional remote sensing data is described. The technique uses a radiometric adjustment to allow a human operator to identify and label training pixels by visually comparing the remotely sensed spectra to laboratory reflectance spectra. Training pixels for materials without obvious spectral features are identified by traditional means. Features which are effective for discriminating between the classes are then derived from the original radiance data and used to classify the scene. This technique is applied to Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data taken over Cuprite, Nevada, in 1992, and the results are compared to an existing geologic map. The technique performed well even though the data were noisy and some of the materials in the scene lack absorption features. No adjustment for the atmosphere or other scene variables was made to the data before classification. While the experimental results compare favorably with an existing geologic map, the primary purpose of this research was to demonstrate the classification method rather than to map the geology of the Cuprite scene.

  5. Sketch Matching on Topology Product Graph.

    PubMed

    Liang, Shuang; Luo, Jun; Liu, Wenyin; Wei, Yichen

    2015-08-01

    Sketch matching is a fundamental problem in sketch-based interfaces. After years of study, it remains challenging when there are large irregularities and variations in hand-drawn sketch shapes. While most existing works exploit topology relations and graph representations for this problem, they are usually limited by coarse topology exploration and heuristic (thus suboptimal) similarity metrics between graphs. We present a new sketch matching method with two novel contributions. We introduce a comprehensive definition of topology relations, which results in a rich and informative graph representation of sketches. For graph matching, we propose the topology product graph, which retains the full correspondence for matching two graphs. Based on it, we derive an intuitive sketch similarity metric whose exact solution is easy to compute. In addition, the graph representation and new metric naturally support partial matching, an important practical problem that has received less attention in the literature. Extensive experimental results on a challenging real-world dataset show that our method outperforms the state of the art.

  6. RootGraph: a graphic optimization tool for automated image analysis of plant roots

    PubMed Central

    Cai, Jinhai; Zeng, Zhanghui; Connor, Jason N.; Huang, Chun Yuan; Melino, Vanessa; Kumar, Pankaj; Miklavcic, Stanley J.

    2015-01-01

    This paper outlines a numerical scheme for accurate, detailed, and high-throughput image analysis of plant roots. In contrast to existing root image analysis tools that focus on root system-average traits, a novel, fully automated and robust approach for the detailed characterization of root traits, based on a graph optimization process, is presented. The scheme, firstly, distinguishes primary roots from lateral roots and, secondly, quantifies a broad spectrum of root traits for each identified primary and lateral root. Thirdly, it associates lateral roots and their properties with the specific primary root from which the laterals emerge. The performance of this approach was evaluated through comparisons with other automated and semi-automated software solutions as well as against results based on manual measurements. The comparisons and subsequent application of the algorithm to an array of experimental data demonstrate that this method outperforms existing methods in terms of accuracy, robustness, and the ability to process root images under high-throughput conditions. PMID:26224880

  7. Multiple ³H-oxytocin binding sites in rat myometrial plasma membranes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crankshaw, D.; Gaspar, V.; Pliska, V.

    1990-01-01

    The affinity spectrum method has been used to analyse binding isotherms of ³H-oxytocin to rat myometrial plasma membranes. Three populations of binding sites with dissociation constants (Kd) of 0.6–1.5 × 10⁻⁹, 0.4–1.0 × 10⁻⁷, and 7 × 10⁻⁶ mol/l were identified, and their existence was verified by cluster analysis based on similarities between Kd, binding capacity, and Hill coefficient. When experimental values were compared to theoretical curves constructed using the estimated binding parameters, good fits were obtained. Binding parameters obtained by this method were not influenced by the presence of GTPγS (guanosine-5'-O-3-thiotriphosphate) in the incubation medium. The binding parameters agree reasonably well with those found in uterine cells; they support the existence of a medium-affinity site and may allow for an explanation of some of the discrepancies between binding and response in this system.
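
    The three-population picture in this abstract can be illustrated with a simple sum-of-Langmuir-isotherms model. This is a generic sketch, not the affinity spectrum method itself; the Kd values fall within the reported ranges, while the Bmax capacities are hypothetical placeholders.

```python
import numpy as np

# Total bound ligand modeled as a sum of independent one-site (Langmuir)
# isotherms, one term per binding-site population.
def bound(free, kds, bmaxs):
    free = np.asarray(free, dtype=float)
    return sum(b * free / (kd + free) for kd, b in zip(kds, bmaxs))

kds = [1.0e-9, 7.0e-8, 7.0e-6]   # mol/l, within the reported Kd ranges
bmaxs = [1.0, 5.0, 50.0]         # capacities in arbitrary units (hypothetical)

free = np.logspace(-11, -4, 200)  # free-ligand concentrations, mol/l
isotherm = bound(free, kds, bmaxs)
```

    Plotted on a log axis, such a curve shows one inflection per site class, which is the kind of structure the cluster analysis above confirms.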

  8. Aircraft interior noise reduction by alternate resonance tuning

    NASA Technical Reports Server (NTRS)

    Bliss, Donald B.; Gottwald, James A.; Gustaveson, Mark B.; Burton, James R., III; Castellino, Craig

    1989-01-01

    Existing interior noise reduction techniques for aircraft fuselages perform reasonably well at higher frequencies but are inadequate at lower frequencies, particularly for the strongly forced low blade-passage harmonics found in propeller aircraft. A method is being studied which considers aircraft fuselages lined with panels alternately tuned to frequencies above and below the frequency to be attenuated. Adjacent panels would oscillate at equal amplitude, to give equal source strength, but with opposite phase. Provided these adjacent panels are acoustically compact, the resulting cancellation causes the interior acoustic modes to become cut off and therefore non-propagating and evanescent. This interior noise reduction method, called Alternate Resonance Tuning (ART), is currently being investigated both theoretically and experimentally. The concept has potential application to reducing interior noise due to the propellers in advanced turboprop aircraft as well as in existing aircraft configurations. This report summarizes the work carried out at Duke University during the third semester of a contract supported by the Structural Acoustics Branch at NASA Langley Research Center.

  9. A general strategy to solve the phase problem in RNA crystallography

    PubMed Central

    Keel, Amanda Y.; Rambo, Robert P.; Batey, Robert T.; Kieft, Jeffrey S.

    2007-01-01

    X-ray crystallography of biologically important RNA molecules has been hampered by technical challenges, including finding a heavy-atom derivative to obtain high-quality experimental phase information. Existing techniques have drawbacks, severely limiting the rate at which important new structures are solved. To address this need, we have developed a reliable means to localize heavy atoms specifically to virtually any RNA. By solving the crystal structures of thirteen variants of the G·U wobble pair cation binding motif, we have identified an optimal version that, when inserted into an RNA helix, introduces a high-occupancy cation binding site suitable for phasing. This “directed soaking” strategy can be integrated fully into existing RNA and crystallography methods, potentially increasing the rate at which important structures are solved and facilitating routine solving of structures using Cu-Kα radiation. The method has already been used to solve several novel crystal structures. PMID:17637337

  10. Stress and Damage in Polymer Matrix Composite Materials Due to Material Degradation at High Temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mcmanus, H.L.; Chamis, C.C.

    1996-01-01

    This report describes analytical methods for calculating stresses and damage caused by degradation of the matrix constituent in polymer matrix composite materials. Laminate geometry, material properties, and matrix degradation states are specified as functions of position and time. Matrix shrinkage and property changes are modeled as functions of the degradation states. The model is incorporated into an existing composite mechanics computer code. Stresses, strains, and deformations at the laminate, ply, and micro levels are calculated, and from these calculations it is determined whether failure of any kind has occurred. The rationale for the model (based on published experimental work) is presented, its integration into the laminate analysis code is outlined, and example results are given, with comparisons to existing material and structural data. The mechanisms behind the changes in properties and in surface cracking during long-term aging of polyimide matrix composites are clarified. High-temperature material test methods are also evaluated.

  11. Magnetoacoustic Tomography with Magnetic Induction (MAT-MI) for Imaging Electrical Conductivity of Biological Tissue: A Tutorial Review

    PubMed Central

    Li, Xu; Yu, Kai; He, Bin

    2016-01-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is a noninvasive imaging method developed to map the electrical conductivity of biological tissue with millimeter-level spatial resolution. In MAT-MI, a time-varying magnetic stimulation is applied to induce an eddy current inside the conductive tissue sample. In the presence of a static magnetic field, the Lorentz force acting on the induced eddy current drives mechanical vibrations producing detectable ultrasound signals. These ultrasound signals can then be acquired to reconstruct a map related to the sample’s electrical conductivity contrast. This work reviews the fundamental ideas of MAT-MI and the major techniques developed in recent years. First, the physical mechanisms underlying MAT-MI imaging are described, including magnetic induction and Lorentz-force-induced acoustic wave propagation. Second, experimental setups and various imaging strategies for MAT-MI are reviewed and compared, together with the corresponding experimental results. In addition, as a recently developed reverse mode of MAT-MI, magneto-acousto-electrical tomography with magnetic induction (MAET-MI) is briefly reviewed in terms of its theory and experimental studies. Finally, we give our opinions on existing challenges and future directions for MAT-MI research. With all the reported and future technical advancements, MAT-MI has the potential to become an important noninvasive modality for electrical conductivity imaging of biological tissue. PMID:27542088
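
    The driving term in MAT-MI is the Lorentz body force on the induced eddy current, F = J × B. A minimal numerical sketch of this single step (the current density and static field values below are made up for illustration):

```python
import numpy as np

# Lorentz force density on an eddy current in a static field: F = J x B.
# J and B0 below are illustrative values only, not from any experiment.
J = np.array([10.0, 0.0, 0.0])   # eddy current density, A/m^2 (hypothetical)
B0 = np.array([0.0, 0.0, 1.0])   # static magnetic flux density, T (hypothetical)

F = np.cross(J, B0)              # force density, N/m^3
```

    The resulting force is perpendicular to both J and B0, which is why the induced vibration (and hence the ultrasound source) carries conductivity information.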

  12. Two color holographic interferometry for microgravity application

    NASA Technical Reports Server (NTRS)

    Trolinger, James D.; Weber, David C.

    1995-01-01

    Holographic interferometry is a primary candidate for determining temperature and concentration in crystal growth experiments designed for space. The method measures refractive index changes within the fluid of an experimental test cell resulting from temperature and/or concentration changes. When the refractive index changes are caused by simultaneous temperature and concentration changes, the contributions of the two effects cannot be separated by single wavelength interferometry. By using two wavelengths, however, two independent interferograms can provide the additional independent equation required to determine the two unknowns. There is no other technique available that provides this type of information. The primary objectives of this effort were to experimentally verify the mathematical theory of two color holographic interferometry (TCHI) and to determine the practical value of this technique for space application. In the foregoing study, the theory of TCHI has been tested experimentally over a range of interest for materials processing in space where measurements of temperature and concentration in a solution are required. New techniques were developed and applied to stretch the limits beyond what could be done with existing procedures. The study resulted in the production of one of the most advanced, enhanced sensitivity holographic interferometers in existence. The interferometric measurements made at MSFC represent what is believed to be the most accurate holographic interferometric measurements made in a fluid to date. The tests have provided an understanding of the limitations of the technique in practical use.
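
    The two-wavelength idea above reduces to a 2×2 linear system: at each wavelength the measured refractive-index change is Δn(λᵢ) = (∂n/∂T)ᵢ·ΔT + (∂n/∂C)ᵢ·ΔC, and two wavelengths give two independent equations for the two unknowns. A sketch with hypothetical sensitivity coefficients (real values depend on the fluid and wavelengths used):

```python
import numpy as np

# Rows are wavelengths; columns are [dn/dT, dn/dC]. All coefficients are
# hypothetical placeholders for illustration.
A = np.array([[-1.0e-4, 2.0e-3],    # sensitivities at wavelength 1
              [-1.3e-4, 1.5e-3]])   # sensitivities at wavelength 2

dT_true, dC_true = 2.5, 0.01        # K, mass fraction (for the check below)
dn = A @ np.array([dT_true, dC_true])  # the two "measured" index changes

dT, dC = np.linalg.solve(A, dn)     # recover temperature and concentration
```

    The system is solvable only if the sensitivity ratios differ between the two wavelengths, which is the physical requirement behind choosing the two colors.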

  13. Resistance fail strain gage technology as applied to composite materials

    NASA Technical Reports Server (NTRS)

    Tuttle, M. E.; Brinson, H. F.

    1985-01-01

    Existing strain gage technologies as applied to orthotropic composite materials are reviewed. The bonding procedures, transverse sensitivity effects, errors due to gage misalignment, and temperature compensation methods are addressed. Numerical examples are included where appropriate. It is shown that the orthotropic behavior of composites can result in experimental error which would not be expected based on practical experience with isotropic materials. In certain cases, the transverse sensitivity of strain gages and/or slight gage misalignment can result in strain measurement errors.
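
    The transverse-sensitivity error mentioned above has a standard correction when two orthogonal gages are used; one widely published form (see strain-gage vendor technical notes) corrects the indicated strains with the transverse sensitivity Kt and the Poisson ratio ν₀ of the calibration material. A sketch with illustrative numbers:

```python
# Correction of indicated strains from two orthogonal gages for transverse
# sensitivity Kt. nu0 = 0.285 is the Poisson ratio of the usual steel
# calibration beam; the strain values below are illustrative.
def correct(eps_x_hat, eps_y_hat, kt, nu0=0.285):
    c = (1.0 - nu0 * kt) / (1.0 - kt * kt)
    eps_x = c * (eps_x_hat - kt * eps_y_hat)
    eps_y = c * (eps_y_hat - kt * eps_x_hat)
    return eps_x, eps_y

# Example: 1% transverse sensitivity, strongly orthotropic strain state
ex, ey = correct(1000e-6, -3000e-6, kt=0.01)
```

    For orthotropic composites the two strain components can differ greatly in magnitude and sign, so even a small Kt produces the non-negligible errors the abstract warns about.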

  14. Determination of the mechanical characteristics of nanomaterials under tension and compression

    NASA Astrophysics Data System (ADS)

    Filippov, A. A.; Fomin, V. M.

    2018-04-01

    In this paper, a new method for determining the mechanical characteristics of nanoparticles in a heterogeneous mixture is proposed. The heterogeneous mixture consists of a thermosetting epoxy resin and silicon dioxide powder of different dispersity. The mechanical characteristics of such a material at a constant nanopowder concentration are experimentally determined. Using existing formulas for obtaining effective characteristics, the Lamé coefficients for nanoparticles of various sizes are calculated. The dependence of the elastic characteristics on the particle size is obtained.
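
    For isotropic constituents, Lamé coefficients follow directly from Young's modulus E and Poisson's ratio ν via the standard isotropic relations; the sketch below is those textbook formulas, not the paper's effective-medium procedure, and the input values are illustrative.

```python
# Standard isotropic elasticity relations: Lame coefficients from (E, nu),
# plus the round-trip back to E as a consistency check.
def lame(E, nu):
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    return lam, mu

def young(lam, mu):
    return mu * (3.0 * lam + 2.0 * mu) / (lam + mu)

lam, mu = lame(73.0e9, 0.17)   # fused-silica-like values, for illustration
```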

  15. Improving nuclear data accuracy of 241Am and 237Np capture cross sections

    NASA Astrophysics Data System (ADS)

    Žerovnik, Gašper; Schillebeeckx, Peter; Cano-Ott, Daniel; Jandel, Marian; Hori, Jun-ichi; Kimura, Atsushi; Rossbach, Matthias; Letourneau, Alain; Noguere, Gilles; Leconte, Pierre; Sano, Tadafumi; Kellett, Mark A.; Iwamoto, Osamu; Ignatyuk, Anatoly V.; Cabellos, Oscar; Genreith, Christoph; Harada, Hideo

    2017-09-01

    In the framework of the OECD/NEA WPEC subgroup 41, ways to improve neutron induced capture cross sections for 241Am and 237Np are being sought. Decay data, energy dependent cross section data and neutron spectrum averaged data are important for that purpose and were investigated. New time-of-flight measurements were performed and analyzed, and considerable effort was put into development of methods for analysis of spectrum averaged data and re-analysis of existing experimental data.

  16. International Conference on Mathematical Methods in Electromagnetic Theory (MMET 2000), Volume 2 Held in Kharkov, Ukraine on September 12-15, 2000

    DTIC Science & Technology

    2000-09-01

    frequencies of the WG-modes of the resonator are determined as the points on the complex frequency plane for which nontrivial solutions of (3) exist...allow one to determine the surface impedance of end-walls using the experimentally measured frequencies and basic Q-factors of resonance oscillations of a...Y0 = Y. The complex eigenfrequencies K' = Re K' + i Im K' (v is the number of the resonance in the zones

  17. Electron capture in collisions of N^+ with H and H^+ with N

    NASA Astrophysics Data System (ADS)

    Lin, C. Y.; Stancil, P. C.; Gu, J. P.; Buenker, R. J.; Kimura, M.

    2004-05-01

    Charge transfer processes due to collisions of N^+ with atomic hydrogen and H^+ with atomic nitrogen are investigated using the quantum-mechanical molecular-orbital close-coupling (MOCC) method. The MOCC calculations utilize ab initio adiabatic potential curves and nonadiabatic radial and rotational coupling matrix elements obtained with the multireference single- and double-excitation configuration interaction approach. Total and state-selective cross sections for the energy range 0.1-500 eV/u will be presented and compared with existing experimental and theoretical data.

  18. Thermal acoustic oscillations, volume 2. [cryogenic fluid storage

    NASA Technical Reports Server (NTRS)

    Spradley, L. W.; Sims, W. H.; Fan, C.

    1975-01-01

    A number of thermal acoustic oscillation phenomena and their effects on cryogenic systems were studied. The conditions which cause or suppress oscillations, the frequency, amplitude and intensity of oscillations when they exist, and the heat loss they induce are discussed. Methods of numerical analysis utilizing the digital computer were developed for use in cryogenic systems design. In addition, an experimental verification program was conducted to study oscillation wave characteristics and boiloff rate. The data were then reduced and compared with the analytical predictions.

  19. Identification and management of filament-wound case stiffness parameters

    NASA Technical Reports Server (NTRS)

    Verderaime, V.; Rheinfurth, M.

    1983-01-01

    The high specific strength and high specific modulus of graphite epoxy laminate made it an expedient material substitute for the Shuttle Solid Rocket Motor steel case, substantially increasing payload performance without increasing the axial growth of the composite case during thrust build-up; this growth was constrained to minimize liftoff excitation effects on existing structural elements and interfaces. Parameters associated with axial growth were identified for quality and manufacturing controls. Included is an innovative method for experimentally verifying extensional elastic properties on a laminate pressurized test bottle.

  20. Flexwall Hydraulic Hose Replacement in the NASA Glenn 10- by 10-Foot Supersonic Propulsion Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Smith, Larry E.; Roeder, James W.; Linne, Alan A.; Klann, Gary A.

    2003-01-01

    The space-time conservation-element and solution-element method is employed to numerically study the near-field screech-tone noise of a typical underexpanded circular jet issuing from a sonic nozzle. Both axisymmetric and fully three-dimensional computations are carried out. The self-sustained feedback loop is properly simulated. The computed shock-cell structure, acoustic wave length, screech-tone frequency, and sound-pressure levels are in good agreement with existing experimental results.

  1. Adsorption of ions onto nanosolids dispersed in liquid crystals: Towards understanding the ion trapping effect in nanocolloids

    NASA Astrophysics Data System (ADS)

    Garbovskiy, Yuriy

    2016-05-01

    The ion capturing effect in liquid crystal nanocolloids was quantified by means of the ion trapping coefficient. The dependence of the ion trapping coefficient on the concentration of nano-dopants and their ionic purity was calculated for a variety of nanosolids dispersed in liquid crystals: carbon nanotubes, graphene nano-flakes, diamond nanoparticles, anatase nanoparticles, and ferroelectric nanoparticles. The proposed method perfectly fits existing experimental data and can be useful in the design of highly efficient ion capturing nanomaterials.

  2. Ratios of Vector and Pseudoscalar B Meson Decay Constants in the Light-Cone Quark Model

    NASA Astrophysics Data System (ADS)

    Dhiman, Nisha; Dahiya, Harleen

    2018-05-01

    We study the decay constants of pseudoscalar and vector B mesons in the framework of the light-cone quark model. We apply the variational method to the relativistic Hamiltonian with a Gaussian-type trial wave function to obtain the values of the scale parameter β. Then, with the help of the known values of the constituent quark masses, we obtain numerical results for the decay constants f_P and f_V, respectively. We compare our numerical results with the existing experimental data.

  3. The Efficacy of Group Decision Support Systems: A Field Experiment to Evaluate Impacts on Air Force Decision Makers

    DTIC Science & Technology

    1992-12-01

    made several interesting observations as well. Gray, Vogel, and Beauclair developed an alternate method for determining which experiments were similar...organization" (Beauclair, 1989), (1:329, 331). 2.7 Summary of Existing Research In the book "Group Support Systems: New Perspectives," Alan Dennis and Brent...Computer TDY Temporary Duty USAF United States Air Force VIF Variance Inflation Factor P-2 Bibliography 1. Beauclair, Renee A. "An Experimental Study of

  4. Optical properties of two types of sex hormones of the cyclopentenephenanthrene series

    NASA Astrophysics Data System (ADS)

    Meshalkin, Yu. P.; Artyukhov, V. Ya.; Pomogaev, V. A.

    2003-09-01

    The spectral and luminescent characteristics of estradiol and testosterone, the two basic sex hormones of the cyclopentenephenanthrene series, are calculated by employing quantum-chemical methods. The results of the calculations are in good agreement with experimental data. It is shown that the fluorescence observed in estrogens is associated with the occurrence of a lowest ππ* state, while the absence of fluorescence in androgens is attributed to the existence of a lowest nπ* state, from which fluorescence is forbidden.

  5. Study of advanced techniques for determining the long-term performance of components

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A study was conducted of techniques having the capability of determining the performance and reliability of components for spacecraft liquid propulsion applications for long term missions. The study utilized two major approaches; improvement in the existing technology, and the evolution of new technology. The criteria established and methods evolved are applicable to valve components. Primary emphasis was placed on the propellants oxygen difluoride and diborane combination. The investigation included analysis, fabrication, and tests of experimental equipment to provide data and performance criteria.

  6. A deblocking algorithm based on color psychology for display quality enhancement

    NASA Astrophysics Data System (ADS)

    Yeh, Chia-Hung; Tseng, Wen-Yu; Huang, Kai-Lin

    2012-12-01

    This article proposes a post-processing deblocking filter to reduce blocking effects. The proposed algorithm detects blocking effects by fusing the results of Sobel edge detector and wavelet-based edge detector. The filtering stage provides four filter modes to eliminate blocking effects at different color regions according to human color vision and color psychology analysis. Experimental results show that the proposed algorithm has better subjective and objective qualities for H.264/AVC reconstructed videos when compared to several existing methods.
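
    The detection stage described above, fusing a Sobel detector with a wavelet-based detail detector, can be sketched as follows. This is a generic illustration, not the paper's exact filters: the wavelet detector is approximated by first differences (one-level Haar-style details), and the thresholds are arbitrary.

```python
import numpy as np
from scipy import ndimage

def fused_edge_map(img, t_sobel=0.5, t_detail=0.5):
    img = np.asarray(img, dtype=float)
    # Sobel gradient magnitude
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    sob = np.hypot(gx, gy)
    # Haar-style detail magnitude: horizontal and vertical first differences
    dh = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    dv = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    detail = np.maximum(dh, dv)
    # Fuse: mark a pixel as an edge if either detector fires
    return (sob > t_sobel * sob.max()) | (detail > t_detail * detail.max())

# Synthetic 16x16 step image: left half 0, right half 1
step = np.zeros((16, 16)); step[:, 8:] = 1.0
edges = fused_edge_map(step)
```

    In a deblocking filter, such a map would separate true image edges (to be preserved) from smooth regions containing block-boundary artifacts (to be filtered).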

  7. Measuring and Advancing Experimental Design Ability in an Introductory Course without Altering Existing Lab Curriculum.

    PubMed

    Shanks, Ryan A; Robertson, Chuck L; Haygood, Christian S; Herdliksa, Anna M; Herdliska, Heather R; Lloyd, Steven A

    2017-01-01

    Introductory biology courses provide an important opportunity to prepare students for future courses, yet existing cookbook labs, although important in their own way, fail to provide many of the advantages of semester-long research experiences. Engaging, authentic research experiences aid biology students in meeting many learning goals. Therefore, overlaying a research experience onto the existing lab structure allows faculty to overcome barriers involving curricular change. Here we propose a working model for this overlay design in an introductory biology course and detail a means to conduct this lab with minimal increases in student and faculty workloads. Furthermore, we conducted exploratory factor analysis of the Experimental Design Ability Test (EDAT) and uncovered two latent factors which provide valid means to assess this overlay model's ability to increase advanced experimental design abilities. In a pre-test/post-test design, we demonstrate significant increases in both basic and advanced experimental design abilities in an experimental and comparison group. We measured significantly higher gains in advanced experimental design understanding in students in the experimental group. We believe this overlay model and EDAT factor analysis contribute a novel means to conduct and assess the effectiveness of authentic research experiences in an introductory course without major changes to the course curriculum and with minimal increases in faculty and student workloads.

  8. Simplified paraboloid phase model-based phase tracker for demodulation of a single complex fringe.

    PubMed

    He, A; Deepan, B; Quan, C

    2017-09-01

    A regularized phase tracker (RPT) is an effective method for demodulation of single closed-fringe patterns. However, lengthy calculation time, a specially designed scanning strategy, and sign-ambiguity problems caused by noise and saddle points reduce its effectiveness, especially for demodulating large and complex fringe patterns. In this paper, a simplified paraboloid phase model-based regularized phase tracker (SPRPT) is proposed. In SPRPT, the first and second phase derivatives are pre-determined by the density-direction-combined method and a discrete higher-order demodulation algorithm, respectively. Hence, the cost function is effectively simplified to reduce the computation time significantly. Moreover, the pre-determined phase derivatives improve the robustness of the demodulation of closed, complex fringe patterns. Thus, no specially designed scanning strategy is needed; nevertheless, the method is robust against the sign-ambiguity problem. The paraboloid phase model also assures better accuracy and robustness against noise. Both simulated and experimental fringe patterns (obtained using electronic speckle pattern interferometry) are used to validate the proposed method, and a comparison with existing RPT methods is carried out. The simulation results show that the proposed method achieves the highest accuracy in less computation time. The experimental results prove the robustness and accuracy of the proposed method for demodulation of noisy fringe patterns and its feasibility for static and dynamic applications.
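
    The paraboloid phase model underlying SPRPT amounts to a local second-order Taylor expansion of the phase. The fringe-synthesis sketch below shows that model driving a cosine fringe pattern; all coefficients are illustrative, not taken from the paper.

```python
import numpy as np

# Local paraboloid phase model: second-order expansion of the phase about
# a point (x0, y0). Coefficient values are illustrative only.
def paraboloid_phase(x, y, x0, y0, p0, px, py, pxx, pxy, pyy):
    dx, dy = x - x0, y - y0
    return (p0 + px * dx + py * dy
            + 0.5 * (pxx * dx**2 + 2.0 * pxy * dx * dy + pyy * dy**2))

y, x = np.mgrid[0:64, 0:64]
phi = paraboloid_phase(x, y, 32, 32, 1.0, 0.2, 0.1, 0.01, 0.0, 0.02)
fringe = 128.0 + 100.0 * np.cos(phi)   # closed-fringe pattern I = b + m*cos(phi)
```

    Fitting these few coefficients in a local window, rather than a full free-form phase, is what simplifies the tracker's cost function.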

  9. Study on model current predictive control method of PV grid- connected inverters systems with voltage sag

    NASA Astrophysics Data System (ADS)

    Jin, N.; Yang, F.; Shang, S. Y.; Tao, T.; Liu, J. S.

    2016-08-01

    To address the limitations of the low voltage ride through (LVRT) technology of traditional photovoltaic inverters, this paper proposes an LVRT control method based on model current predictive control (MCPC). This method can effectively improve the output characteristics and response speed of the photovoltaic inverter. In the MCPC design for the photovoltaic grid-connected inverter, the sum of the absolute values of the errors between the predicted and the reference currents is adopted as the cost function. At each control step, the optimal space voltage vector is selected according to this cost. The photovoltaic inverter automatically switches between two control modes, giving priority to active or reactive power according to the operating state, which effectively improves the LVRT capability of the inverter. The simulation and experimental results prove that the proposed method is correct and effective.
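
    The vector-selection step of such a predictive controller can be sketched for a two-level inverter: predict the next-step current for each of the eight switching vectors with a forward-Euler RL model, then keep the vector whose predicted current is closest (in summed absolute error) to the reference. All plant parameters below are illustrative, not the paper's.

```python
import numpy as np

# Eight switching states of a two-level inverter mapped to alpha-beta
# voltages via the Clarke transform. Plant values are illustrative.
Vdc, R, L, Ts = 400.0, 0.5, 10e-3, 1e-4
states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def v_alpha_beta(s):
    a, b, c = s
    va = Vdc * (2 * a - b - c) / 3.0
    vb = Vdc * (b - c) / np.sqrt(3.0)
    return np.array([va, vb])

def best_vector(i_now, i_ref):
    # Forward-Euler prediction on an RL load: i(k+1) = i + Ts/L * (v - R*i)
    costs = []
    for s in states:
        i_pred = i_now + (Ts / L) * (v_alpha_beta(s) - R * i_now)
        costs.append(float(np.sum(np.abs(i_ref - i_pred))))
    return int(np.argmin(costs)), costs

idx, costs = best_vector(np.array([1.0, 0.0]), np.array([5.0, 0.0]))
```

    With the reference current pointing along +alpha, the controller picks a switching state whose alpha-axis voltage is positive, as expected.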

  10. An efficient algorithm for measurement of retinal vessel diameter from fundus images based on directional filtering

    NASA Astrophysics Data System (ADS)

    Wang, Xuchu; Niu, Yanmin

    2011-02-01

    Automatic measurement of vessels from fundus images is a crucial step in assessing vessel anomalies in the ophthalmological community, where changes in retinal vessel diameters are believed to be indicative of the risk level of diabetic retinopathy. In this paper, a new retinal vessel diameter measurement method combining vessel orientation estimation and filter response is proposed. Its interesting characteristics include: (1) different from methods that only fit the vessel profiles, the proposed method extracts a more stable and accurate vessel diameter by casting this problem as a maximal-response problem for a variation of the Gabor filter; (2) the proposed method can directly and efficiently estimate the vessel's orientation, which is usually captured by time-consuming multi-orientation fitting techniques in many existing methods. Experimental results show that the proposed method both retains computational simplicity and achieves stable and accurate estimation results.
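
    The orientation-by-maximal-response idea can be sketched with a small bank of oriented Gabor-like kernels: the vessel direction is taken as the orientation whose filter response on the local patch is largest. This is a generic illustration with arbitrary kernel parameters, not the paper's Gabor variant.

```python
import numpy as np

def gabor_kernel(theta, sigma=3.0, lam=8.0, half=10):
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)   # oscillation axis
    g = np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * xr / lam)
    return g - g.mean()                            # zero-mean kernel

def estimate_orientation(patch, n_angles=18):
    # Maximal-response orientation over a coarse angle bank
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    responses = [abs(float(np.sum(gabor_kernel(t) * patch))) for t in angles]
    return angles[int(np.argmax(responses))]

# Synthetic vertical vessel: bright column through a 21x21 patch
patch = np.zeros((21, 21)); patch[:, 10] = 1.0
theta = estimate_orientation(patch)
```

    A vertical ridge varies only along x, so the filter oscillating along x (theta near 0) responds most strongly.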

  11. Guided SAR image despeckling with probabilistic non local weights

    NASA Astrophysics Data System (ADS)

    Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny

    2017-12-01

    SAR images are generally corrupted by granular disturbances called speckle, which makes visual analysis and detail extraction a difficult task. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non-Local Weights) replaces parametric constants based on heuristics in the GGF-BNLM method with dynamically derived values based on the image statistics for weight computation. The proposed changes make the GGF-BNLM method adaptive and, as a result, achieve significant improvement in performance. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.

  12. Greased Lightning (GL-10) Performance Flight Research: Flight Data Report

    NASA Technical Reports Server (NTRS)

    McSwain, Robert G.; Glaab, Louis J.; Theodore, Colin R.; Rhew, Ray D. (Editor); North, David D. (Editor)

    2017-01-01

    Modern aircraft design methods have produced acceptable designs for large conventional aircraft. With revolutionary electric propulsion technologies fueled by the growth of the small UAS (Unmanned Aerial Systems) industry, these same prediction models are being applied to new, smaller, experimental design concepts requiring a VTOL (Vertical Take Off and Landing) capability for ODM (On Demand Mobility). A 50% sub-scale GL-10 flight model was built and tested to demonstrate the transition from hover to forward flight utilizing DEP (Distributed Electric Propulsion)[1][2]. In 2016, plans were put in place to conduct performance flight testing on the 50% sub-scale GL-10 flight model to support a NASA project called DELIVER (Design Environment for Novel Vertical Lift Vehicles). DELIVER was investigating the feasibility of including smaller and more experimental aircraft configurations in a NASA design tool called NDARC (NASA Design and Analysis of Rotorcraft)[3]. This report covers the performance flight data collected during flight testing of the GL-10 50% sub-scale flight model conducted at Beaver Dam Airpark, VA. Overall, the flight test data provide great insight into how well our existing conceptual design tools predict the performance of small-scale experimental DEP concepts. Low-fidelity conceptual design tools estimated the (L/D)max of the GL-10 50% sub-scale flight model to be 16. The experimentally measured (L/D)max for the GL-10 50% scale flight model was 7.2. The gap between predicted and measured aerodynamic performance highlights the complexity of wing and nacelle interactions, which is not currently accounted for in existing low-fidelity tools.
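
    As context for numbers like these, the best lift-to-drag ratio of a conceptual design with a parabolic drag polar CD = CD0 + k·CL² has the closed form (L/D)max = 1/(2·√(CD0·k)). The sketch below verifies that identity against a numeric sweep; the polar coefficients are illustrative, not GL-10 values.

```python
import numpy as np

# Parabolic drag polar: CD = CD0 + k * CL^2. The maximum of CL/CD occurs at
# CL* = sqrt(CD0 / k) and equals 1 / (2 * sqrt(CD0 * k)).
CD0, k = 0.035, 0.06     # illustrative coefficients, not the GL-10's

ld_max_closed = 1.0 / (2.0 * np.sqrt(CD0 * k))

CL = np.linspace(0.05, 3.0, 5000)
ld_max_numeric = float(np.max(CL / (CD0 + k * CL**2)))
```

    Nacelle and wing interference effects of the kind noted above show up as a larger effective CD0 (and hence a lower (L/D)max) than low-fidelity tools assume.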

  13. Experimental evolution of multicellularity using microbial pseudo-organisms.

    PubMed

    Queller, David C; Strassmann, Joan E

    2013-02-23

    In a major evolutionary transition to a new level of organization, internal conflicts must be controlled before the transition can truly be successful. One such transition is that from single cells to multicellularity. Conflicts among cells in multicellular organisms can be greatly reduced if they consist of genetically identical clones. However, mutations to cheaters that experience one round of within-individual selection could still be a problem, particularly for certain life cycles. We propose an experimental evolution method to investigate this issue, using micro-organisms to construct multicellular pseudo-organisms, which can be evolved under different artificial life cycles. These experiments can be used to test the importance of various life cycle features in maintaining cooperation. They include structured reproduction, in which small propagule size reduces within-individual genetic variation. They also include structured growth, which increases local relatedness within individual bodies. Our method provides a novel way to test how different life cycles favour cooperation, even for life cycles that do not exist.

  14. Experimental and modal verification of an integral equation solution for a thin-walled dichroic plate with cross-shaped holes

    NASA Technical Reports Server (NTRS)

    Epp, L. W.; Stanton, P. H.

    1993-01-01

    In order to add the capability of an X-band uplink onto the 70-m antenna, a new dichroic plate is needed to replace the Pyle-guide-shaped dichroic plate currently in use. The replacement dichroic plate must exhibit an additional passband at the new uplink frequency of 7.165 GHz, while still maintaining a passband at the existing downlink frequency of 8.425 GHz. Because of the wide frequency separation of these two passbands, conventional methods of designing air-filled dichroic plates exhibit grating lobe problems. A new method of solving this problem by using a dichroic plate with cross-shaped holes is presented and verified experimentally. Two checks of the integral equation solution are described. One is the comparison to a modal analysis for the limiting cross shape of a square hole. As a final check, a prototype dichroic plate with cross-shaped holes was built and measured.

  15. A method for feature selection of APT samples based on entropy

    NASA Astrophysics Data System (ADS)

    Du, Zhenyu; Li, Yihong; Hu, Jinsong

    2018-05-01

    By studying known APT attack events in depth, this paper proposes a feature selection method for APT samples and a logic-expression generation algorithm, IOCG (Indicator of Compromise Generate). The algorithm automatically generates machine-readable IOCs (Indicators of Compromise), addressing the limitations of existing IOCs: fixed logical relationships, an unchanging number of logical items, large size, and the inability to generate expressions for new samples. At the same time, it reduces the time spent processing redundant and useless APT samples, improves the sharing rate of analysis intelligence, and supports an active response to a complex and volatile APT attack landscape. The samples were divided into an experimental set and a training set, and the algorithm was used to generate logical expressions for the training set with the IOC_Aware plug-in; the generated expressions and the corresponding detection results were then compared. The experimental results show that the algorithm is effective and can improve detection performance.
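
    The entropy-driven selection step can be sketched as an information-gain ranking over binary indicator features. The feature names and sample encoding below are hypothetical illustrations, and this is a generic entropy-based ranking, not the paper's IOCG algorithm itself.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(samples, labels, feature):
    """Reduction in label entropy after splitting on presence of one feature.
    Each sample is a set of indicator strings."""
    base = entropy(labels)
    for value in (True, False):
        subset = [l for s, l in zip(samples, labels) if (feature in s) == value]
        if subset:
            base -= len(subset) / len(labels) * entropy(subset)
    return base

def select_features(samples, labels, k=2):
    """Rank all observed indicators by information gain and keep the top k."""
    feats = set().union(*samples)
    return sorted(feats, key=lambda f: information_gain(samples, labels, f),
                  reverse=True)[:k]
```

    A feature that perfectly separates APT from benign samples attains the maximum gain (the full label entropy), while an uninformative indicator scores near zero and can be dropped, reducing redundant sample-processing work.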

  16. Investigating a holobiont: Microbiota perturbations and transkingdom networks.

    PubMed

    Greer, Renee; Dong, Xiaoxi; Morgun, Andrey; Shulzhenko, Natalia

    2016-01-01

    The scientific community has recently come to appreciate that, rather than existing as independent organisms, multicellular hosts and their microbiota comprise a complex evolving superorganism or metaorganism, termed a holobiont. This point of view leads to a re-evaluation of our understanding of different physiological processes and diseases. In this paper we focus on experimental and computational approaches which, when combined in one study, allowed us to dissect mechanisms (traditionally named host-microbiota interactions) regulating holobiont physiology. Specifically, we discuss several approaches for microbiota perturbation, such as use of antibiotics and germ-free animals, including advantages and potential caveats of their usage. We briefly review computational approaches to characterize the microbiota and, more importantly, methods to infer specific components of microbiota (such as microbes or their genes) affecting host functions. One such approach called transkingdom network analysis has been recently developed and applied in our study. (1) Finally, we also discuss common methods used to validate the computational predictions of host-microbiota interactions using in vitro and in vivo experimental systems.

  17. Quadrature Moments Method for the Simulation of Turbulent Reactive Flows

    NASA Technical Reports Server (NTRS)

    Raman, Venkatramanan; Pitsch, Heinz; Fox, Rodney O.

    2003-01-01

    A sub-filter model for reactive flows, namely the DQMOM model, was formulated for Large Eddy Simulation (LES) using the filtered mass density function. Transport equations required to determine the location and size of the delta-peaks were then formulated for a 2-peak decomposition of the FDF. The DQMOM scheme was implemented in an existing structured-grid LES solver. Simulations of a scalar shear layer based on an experimental configuration showed that the first and second moments of both reactive and inert scalars are in good agreement with a conventional Lagrangian scheme that evolves the same FDF. Comparisons with LES simulations performed using a laminar-chemistry assumption for the reactive scalar show that the new method provides vast improvements at minimal computational cost. Currently, the DQMOM model is being implemented for use with the progress variable/mixture fraction model of Pierce. Comparisons with experimental results and LES simulations using a single environment for the progress variable are planned. Future studies will aim at understanding the effect of increasing the number of environments on predictions.

  18. TargetMiner: microRNA target prediction with systematic identification of tissue-specific negative examples.

    PubMed

    Bandyopadhyay, Sanghamitra; Mitra, Ramkrishna

    2009-10-15

    Prediction of microRNA (miRNA) target mRNAs using machine learning approaches is an important area of research. However, most of the methods suffer from either high false positive or high false negative rates. One reason for this is the marked deficiency of negative examples, or miRNA non-target pairs. Systematic identification of non-target mRNAs is still not addressed properly, and therefore current machine learning approaches are compelled to rely on artificially generated negative examples for training. In this article, we have identified approximately 300 tissue-specific negative examples using a novel approach that involves expression profiling of both miRNAs and mRNAs, miRNA-mRNA structural interactions and seed-site conservation. The newly generated negative examples are validated with the pSILAC dataset, which confirms that the identified non-targets are indeed non-targets. These high-throughput tissue-specific negative examples and a set of experimentally verified positive examples are then used to build a system called TargetMiner, a support vector machine (SVM)-based classifier. In addition to assessing the prediction accuracy in cross-validation experiments, TargetMiner has been validated with a completely independent experimental test dataset. Our method outperforms 10 existing target prediction algorithms and provides a good balance between sensitivity and specificity that is not reflected in the existing methods. We achieve a significantly higher sensitivity and specificity of 69% and 67.8% based on a pool of 90 features, and 76.5% and 66.1% using a set of 30 selected features, on the completely independent test dataset. In order to establish the effectiveness of the systematically generated negative examples, the SVM is trained using a different set of negative data generated using the method of Yousef et al. A significantly higher false positive rate (70.6%) is observed when tested on the independent set, while all other factors are kept the same. Again, when an existing method (NBmiRTar) is executed with our proposed negative data, we observe an improvement in its performance. These results clearly establish the effectiveness of the proposed approach of selecting the negative examples systematically. TargetMiner is now available as an online tool at www.isical.ac.in/~bioinfo_miu

  19. Investigation on the effect of diaphragm on the combustion characteristics of solid-fuel ramjet

    NASA Astrophysics Data System (ADS)

    Gong, Lunkun; Chen, Xiong; Yang, Haitao; Li, Weixuan; Zhou, Changsheng

    2017-10-01

    The flow field characteristics and the regression rate distribution of a solid-fuel ramjet with a three-hole diaphragm were investigated by numerical and experimental methods. The experimental data were obtained by burning high-density polyethylene in a connected-pipe facility to validate the numerical model and analyze the combustion efficiency of the solid-fuel ramjet. The three-dimensional code developed in the present study adopted third-order MUSCL and central difference schemes, the AUSMPW+ flux vector splitting method, and a second-order moment turbulence-chemistry model, together with the k-ω shear stress transport (SST) turbulence model. The solid fuel surface temperature was calculated with a fluid-solid heat coupling method. The numerical results show that strong circumferential flow exists in the region upstream of the diaphragm. The diaphragm can significantly enhance the regression rate of the solid fuel in the region downstream of the diaphragm, which mainly results from the increase in turbulent viscosity. As the diaphragm port area decreases, the regression rate of the solid fuel downstream of the diaphragm increases. The diaphragm can result in more thorough mixing between the incoming air and fuel pyrolysis gases, while inevitably producing some pressure loss. The experimental results indicate that the effect of the diaphragm on the combustion efficiency of hydrocarbon fuels is slightly negative. It is conjectured that the diaphragm may have some positive effects on the combustion efficiency of solid fuels with metal particles.

  20. Estimating non-isothermal bacterial growth in foods from isothermal experimental data.

    PubMed

    Corradini, M G; Peleg, M

    2005-01-01

    To develop a mathematical method to estimate non-isothermal microbial growth curves in foods from experiments performed under isothermal conditions, and to demonstrate the method's applicability with published growth data. Published isothermal growth curves of Pseudomonas spp. in refrigerated fish at 0-8°C and Escherichia coli 1952 in a nutritional broth at 27.6-36°C were fitted with two different three-parameter 'primary models', and the temperature dependence of their parameters was fitted by ad hoc empirical 'secondary models'. These were used to generate non-isothermal growth curves by solving, numerically, a differential equation derived on the premise that the momentary non-isothermal growth rate is the isothermal rate at the momentary temperature, at a time that corresponds to the momentary growth level of the population. The predicted non-isothermal growth curves were in agreement with the reported experimental ones and, as expected, the quality of the predictions did not depend on the 'primary model' chosen for the calculation. A common type of sigmoid growth curve can be adequately described by three-parameter 'primary models'. At least in the two systems examined, these could be used to predict growth patterns under a variety of continuous and discontinuous non-isothermal temperature profiles. The described mathematical method, whenever validated experimentally, will enable the simulation of the microbial quality of stored and transported foods under a large variety of existing or contemplated commercial temperature histories.
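
    The premise of the differential equation can be sketched directly: with a three-parameter logistic 'primary model' and illustrative 'secondary models', the non-isothermal curve is built by evaluating, at each step, the isothermal rate at the momentary temperature and at the time corresponding to the momentary growth level. The parameter functions a, k, and tc below are invented for demonstration and are not fitted to the paper's Pseudomonas or E. coli data.

```python
import math

# Hypothetical three-parameter logistic "primary model":
# y(t) = a / (1 + exp(k*(tc - t))), with illustrative "secondary models"
# a(T), k(T), tc(T) describing the temperature dependence of its parameters.
def a(T):  return 6.0                      # asymptotic growth level (log CFU)
def k(T):  return 0.05 + 0.01 * T          # steepness, rising with temperature
def tc(T): return 80.0 / (1.0 + 0.1 * T)   # inflection time, falling with T

def iso_rate(y, T):
    """Isothermal growth rate at temperature T, evaluated at the time t* where
    the isothermal curve at T passes through the current growth level y."""
    y = min(max(y, 1e-9), a(T) - 1e-9)
    t_star = tc(T) - math.log(a(T) / y - 1.0) / k(T)   # invert y(t) for t*
    e = math.exp(k(T) * (tc(T) - t_star))
    return a(T) * k(T) * e / (1.0 + e) ** 2            # dy/dt at t*

def nonisothermal_curve(T_profile, dt=0.1, t_end=60.0, y0=0.01):
    """Euler integration of dy/dt = iso_rate(y, T(t)) along a temperature
    history T_profile (a function of time)."""
    y, t, out = y0, 0.0, []
    while t <= t_end:
        out.append((t, y))
        y += iso_rate(y, T_profile(t)) * dt
        t += dt
    return out
```

    For a constant temperature profile the construction reduces to the isothermal logistic itself, which is a useful sanity check; any continuous or discontinuous profile can be passed as `T_profile`.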

  1. Early reactions to Harvey's circulation theory: the impact on medicine.

    PubMed

    Lubitz, Steven A

    2004-09-01

    In early 17th century Europe, scientific concepts were still based largely on ancient philosophical and theological explanations. During this same era, however, experimentation began to take hold as a legitimate component of scientific investigation. In 1628, the English physician William Harvey announced a revolutionary theory stating that blood circulates repeatedly throughout the body. He relied on experimentation, comparative anatomy and calculation to arrive at his conclusions. His theory contrasted sharply with the accepted beliefs of the time, which were based on the 1400-year-old teachings of Galen and denied the presence of circulation. As with many new ideas, Harvey's circulation theory was received with a great deal of controversy among his colleagues. An examination of their motives reveals that many proponents agreed with his theory largely because of the logic of his argument and his use of experimentation and quantitative methods. However, some proponents agreed for religious, mystical and philosophical reasons, while some were convinced only because of the change in public opinion with time. Many opposed the circulation theory because of their rigid commitment to ancient doctrines, the questionable utility of experimentation, the lack of proof that capillaries exist, and a failure to recognize the clinical applications of his theory. Other opponents were motivated by personal resentments and professional "territorialism." Beyond the immediate issues and arguments, however, the controversy is important because it helped establish use of the scientific method.

  2. CNV-TV: a robust method to discover copy number variation from short sequencing reads.

    PubMed

    Duan, Junbo; Zhang, Ji-Gang; Deng, Hong-Wen; Wang, Yu-Ping

    2013-05-02

    Copy number variation (CNV) is an important structural variation (SV) in the human genome. Various studies have shown that CNVs are associated with complex diseases. Traditional CNV detection methods such as fluorescence in situ hybridization (FISH) and array comparative genomic hybridization (aCGH) suffer from low resolution. The next generation sequencing (NGS) technique promises higher-resolution detection of CNVs, and several methods were recently proposed for realizing such a promise. However, the performance of these methods is not robust under some conditions; e.g., some of them may fail to detect CNVs of short sizes. There has been a strong demand for reliable detection of CNVs from high-resolution NGS data. A novel and robust method to detect CNVs from short sequencing reads is proposed in this study. The detection of CNVs is modeled as change-point detection on the read depth (RD) signal derived from the NGS data, which is fitted with a total variation (TV) penalized least squares model. The performance (e.g., sensitivity and specificity) of the proposed approach is evaluated by comparison with several recently published methods on both simulated and real data from the 1000 Genomes Project. The experimental results showed that both the true positive rate and the false positive rate of the proposed detection method do not change significantly for CNVs with different copy numbers and lengths, when compared with several existing methods. Therefore, our proposed approach results in a more reliable detection of CNVs than the existing methods.
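
    The TV-penalized least-squares idea can be sketched for a 1-D read-depth signal. The solver below is a generic projected-gradient method on the dual of the TV problem, not necessarily the optimizer used by the authors, and the jump threshold in `change_points` is an illustrative choice.

```python
import numpy as np

def tv_denoise(y, lam, n_iter=5000):
    """1-D total-variation denoising: argmin_x 0.5*||y-x||^2 + lam*||diff(x)||_1,
    solved by projected gradient ascent on the dual variable p (|p_i| <= lam).
    The piecewise-constant minimizer's jump points mark candidate CNV
    change-points in a read-depth signal."""
    y = np.asarray(y, dtype=float)
    p = np.zeros(len(y) - 1)
    for _ in range(n_iter):
        x = y.copy()
        x[:-1] += p          # x = y - D^T p, with D the first-difference
        x[1:] -= p           # operator (Dx)_i = x_{i+1} - x_i
        p = np.clip(p + 0.25 * np.diff(x), -lam, lam)   # step 0.25 <= 1/||D||^2
    x = y.copy()
    x[:-1] += p
    x[1:] -= p
    return x

def change_points(x, tol=0.5):
    """Indices where the denoised signal jumps by more than tol."""
    return np.flatnonzero(np.abs(np.diff(x)) > tol) + 1
```

    On a noise-free two-segment signal the TV solution is piecewise constant with the segment means shrunk toward each other by lam divided by the segment length, so a single large jump survives while small fluctuations are flattened.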

  3. Thermal analysis of fused deposition modeling process using infrared thermography imaging and finite element modeling

    NASA Astrophysics Data System (ADS)

    Zhou, Xunfei; Hsieh, Sheng-Jen

    2017-05-01

    After years of development, Fused Deposition Modeling (FDM) has become the most popular technique in commercial 3D printing due to its cost effectiveness and easy-to-operate fabrication process. Mechanical strength and dimensional accuracy are two of the most important factors for reliability of FDM products. However, the solid-liquid-solid state changes of material in the FDM process make it difficult to monitor and model. In this paper, an experimental model was developed to apply cost-effective infrared thermography imaging method to acquire temperature history of filaments at the interface and their corresponding cooling mechanism. A three-dimensional finite element model was constructed to simulate the same process using element "birth and death" feature and validated with the thermal response from the experimental model. In 6 of 9 experimental conditions, a maximum of 13% difference existed between the experimental and numerical models. This work suggests that numerical modeling of FDM process is reliable and can facilitate better understanding of bead spreading and road-to-road bonding mechanics during fabrication.

  4. A comparative study of the constitutive models for silicon carbide

    NASA Astrophysics Data System (ADS)

    Ding, Jow-Lian; Dwivedi, Sunil; Gupta, Yogendra

    2001-06-01

    Most of the constitutive models for polycrystalline silicon carbide were developed and evaluated using data from either normal plate impact or Hopkinson bar experiments. At ISP, extensive efforts have been made to gain detailed insight into the shocked state of silicon carbide (SiC) using innovative experimental methods, viz., lateral stress measurements, in-material unloading measurements, and combined compression-shear experiments. The data obtained from these experiments provide some unique information for both developing and evaluating material models. In this study, these data for SiC were first used to evaluate some of the existing models to identify their strengths and possible deficiencies. Motivated by both the results of this comparative study and the experimental observations, an improved phenomenological model was developed. The model incorporates pressure dependence of strength, rate sensitivity, damage evolution under both tension and compression, the pressure confinement effect on damage evolution, stiffness degradation due to damage, and pressure dependence of stiffness. The developed model captures most of the material features observed experimentally, but more work is needed to better match the experimental data quantitatively.

  5. Temperature-strain discrimination in distributed optical fiber sensing using phase-sensitive optical time-domain reflectometry.

    PubMed

    Lu, Xin; Soto, Marcelo A; Thévenaz, Luc

    2017-07-10

    A method based on coherent Rayleigh scattering that distinctly evaluates temperature and strain is proposed and experimentally demonstrated for distributed optical fiber sensing. Combining conventional phase-sensitive optical time-domain reflectometry (ϕOTDR) and ϕOTDR-based birefringence measurements, independent distributed temperature and strain profiles are obtained along a polarization-maintaining fiber. A theoretical analysis, supported by experimental data, indicates that the proposed system for temperature-strain discrimination is intrinsically better conditioned than an equivalent existing approach that combines classical Brillouin sensing with Brillouin dynamic gratings. This is due to the higher sensitivity of coherent Rayleigh scattering compared to Brillouin scattering, thus offering better performance and lower temperature-strain uncertainties in the discrimination. Compared to the Brillouin-based approach, the ϕOTDR-based system proposed here requires access to only one fiber end and a much simpler experimental layout. Experimental results validate the full discrimination of temperature and strain along a 100 m-long elliptical-core polarization-maintaining fiber with measurement uncertainties of ~40 mK and ~0.5 με, respectively. These values agree very well with the theoretically expected measurand resolutions.
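
    The discrimination step amounts to inverting a 2x2 linear system: each measured shift (the ϕOTDR spectral shift and the birefringence-based shift) depends linearly on the temperature and strain changes, and the conditioning of the coefficient matrix governs how measurement noise is amplified into the recovered quantities. The sensitivity coefficients below are hypothetical placeholders, not the fiber's actual values.

```python
import numpy as np

# Hypothetical sensitivity matrix (shift per K and per microstrain); the real
# coefficients depend on the fiber and must be calibrated experimentally.
K = np.array([[-1.25, 0.15],    # phi-OTDR spectral shift:     c_T*dT + c_e*de
              [ 0.05, 0.90]])   # birefringence-based shift:   d_T*dT + d_e*de

def discriminate(shift_rayleigh, shift_biref):
    """Recover (dT, d_strain) from the two measured shifts by inverting K."""
    return np.linalg.solve(K, [shift_rayleigh, shift_biref])

# The condition number of K bounds how measurement uncertainty propagates into
# temperature/strain uncertainty; a better-conditioned K (the paper's central
# argument for the Rayleigh-based pair) means lower discrimination uncertainty.
amplification = np.linalg.cond(K)
```

    With a nearly diagonal, well-scaled K the condition number stays close to 1, whereas two nearly parallel sensitivity rows would make the inversion blow up small measurement errors.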

  6. Lα and Mαβ X-ray production cross-sections of Bi by 6-30 keV electron impact

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Xu, M. X.; Yuan, Y.; Wu, Y.; Qian, Z. C.; Chang, C. H.; Mei, C. S.; Zhu, J. J.; Moharram, K.

    2017-12-01

    In this paper, the Lα and Mαβ X-ray production cross-sections of Bi under 6-30 keV electron impact have been measured. The experiments were performed in a scanning electron microscope equipped with a silicon drift detector. Both a thin film on a thick C substrate and a thin film deposited on a self-supporting thin C film were used as targets for comparison. For the thick carbon substrate target, the Monte Carlo method was used to eliminate the contribution of backscattered particles. The measured data are compared with the DWBA theoretical model and with experimental results in the literature. The experimental data for the two targets agree within reasonable limits. The DWBA theoretical model gives a good fit to the experimental data for both the L- and M-shells. We also analyze the reasons for the discrepancies between our measurements and the experimental results in the literature.

  7. Experimental study of the energy dependence of the total cross section for the 6He + natSi and 9Li + natSi reactions

    NASA Astrophysics Data System (ADS)

    Sobolev, Yu. G.; Penionzhkevich, Yu. E.; Aznabaev, D.; Zemlyanaya, E. V.; Ivanov, M. P.; Kabdrakhimova, G. D.; Kabyshev, A. M.; Knyazev, A. G.; Kugler, A.; Lashmanov, N. A.; Lukyanov, K. V.; Maj, A.; Maslov, V. A.; Mendibayev, K.; Skobelev, N. K.; Slepnev, R. S.; Smirnov, V. V.; Testov, D.

    2017-11-01

    New experimental measurements of the total reaction cross sections for the 6He + natSi and 9Li + natSi processes in the energy range of 5 to 40 A MeV are presented. A modified transmission method based on high-efficiency detection of prompt n-γ radiation was used in the experiment. A bump is observed for the first time in the energy dependence σR(E) at E ˜ 10-30 A MeV for the 9Li + natSi reaction, and the existence of the bump in σR(E) at E ˜ 10-20 A MeV, first observed in standard transmission experiments, is experimentally confirmed for the 6He + natSi reaction. Theoretical analysis of the measured 6He + natSi and 9Li + natSi reaction cross sections is performed within the microscopic double folding model. Disagreement is observed between the experimental and theoretical cross sections in the region of the bump at energies of 10 to 20 A MeV, which requires further study.

  8. Minimize Solvent Oxidation with NOx Pre-Scrubbing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sexton, Andrew; Sachde, Darshan; Vance, Austyn

    A novel method to remove nitrogen dioxide (NO2) from the flue gas of coal-fired power plants with CO2 capture was further developed for commercial implementation. The technology leverages the equipment and chemistry in an existing sulfur dioxide (SO2) polishing scrubber upstream of the main CO2 capture unit to remove the NO2, preventing degradation of the CO2 capture solvent and formation of nitrosamines (environmental hazards). The research in this report focuses on further evaluation of the chemical additives and operating conditions associated with the NO2 removal process to define conditions for commercial-scale testing and deployment. Experimental work systematically evaluated a series of potential additives to minimize the oxidation of sulfite in a representative SO2 pre-scrubber solution (sulfite, in turn, absorbs NO2). The additive combinations and concentrations were varied alongside important process conditions such as temperature, oxygen concentration, and metals present in solution to mimic the conditions expected in a commercial system. Important results of the parametric experimental work include identifying a new, potent sulfite oxidation inhibitor, revealing the importance of combining inhibitors with metal chelating agents, validation of a low-cost additive process, and development of a new semi-empirical model to represent mechanisms associated with sulfite oxidation. In addition, the experimental work revealed the impact of operating at higher temperatures (representative of a field test unit), which will guide the selection and concentration of additives as well. Engineering analysis found that waste solutions from the pre-scrubber with NO2 additives may potentially be integrated with existing processes on site (e.g., the flue gas desulfurization unit). In addition, techno-economic analysis identified potential net savings as large as $1.30/tonne CO2 captured and quantified the potential benefit of low-cost additive options actively being pursued by the development team. Finally, the experimental results and engineering analysis supported the development of a detailed field testing plan and protocol to evaluate the technology at near-commercial scale. The field test preparation included development of procedures to introduce chemical additives to an existing SO2 polishing unit and identification of representative flue gas conditions based on a review of existing plants. These activities will have direct bearing on the operation and design of commercial units.

  9. Modeling Aromatic Liquids: Toluene, Phenol, and Pyridine.

    PubMed

    Baker, Christopher M; Grant, Guy H

    2007-03-01

    Aromatic groups are now acknowledged to play an important role in many systems of interest. However, existing molecular mechanics methods provide a poor representation of these groups. In a previous paper, we have shown that the molecular mechanics treatment of benzene can be improved by the incorporation of an explicit representation of the aromatic π electrons. Here, we develop this concept further, developing charge-separation models for toluene, phenol, and pyridine. Monte Carlo simulations are used to parametrize the models, via the reproduction of experimental thermodynamic data, and our models are shown to outperform an existing atom-centered model. The models are then used to make predictions about the structures of the liquids at the molecular level and are tested further through their application to the modeling of gas-phase dimers and cation-π interactions.

  10. Walsh-Hadamard transform kernel-based feature vector for shot boundary detection.

    PubMed

    Lakshmi, Priya G G; Domnic, S

    2014-12-01

    Video shot boundary detection (SBD) is the first step of video analysis, summarization, indexing, and retrieval. In the SBD process, videos are segmented into basic units called shots. In this paper, a new SBD method is proposed using color, edge, texture, and motion strength as a vector of features (feature vector). Features are extracted by projecting the frames on selected basis vectors of the Walsh-Hadamard transform (WHT) kernel and the WHT matrix. After extracting the features, weights are calculated based on the significance of the features. The weighted features are combined to form a single continuity signal, used as input for the Procedure-Based shot transition Identification (PBI) process. Using this procedure, shot transitions are classified into abrupt and gradual transitions. Experimental results are examined using the large-scale test sets provided by TRECVID 2007, which evaluated hard cut and gradual transition detection. To evaluate the robustness of the proposed method, a systematic evaluation is performed. The proposed method yields an F1-score of 97.4% for cut, 78% for gradual, and 96.1% for overall transitions. We have also evaluated the proposed feature vector with a support vector machine classifier. The results show that WHT-based features perform better than the other existing methods. In addition, a few more video sequences were taken from the Open Video Project, and the performance of the proposed method is compared with a recent existing SBD method.
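
    The WHT-projection and continuity-signal pipeline can be sketched for intensity-only frames. This omits the paper's color, edge, texture, and motion features, the significance-based weighting, and the PBI classification; the frame size, coefficient count, and threshold below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import hadamard

def wht_features(frame, n_coeffs=16):
    """Project an NxN frame (N a power of two) onto the Walsh-Hadamard kernel
    and keep the first few transform coefficients as the frame's feature."""
    n = frame.shape[0]
    H = hadamard(n)
    coeffs = H @ frame.astype(float) @ H.T / (n * n)
    return coeffs.ravel()[:n_coeffs]

def continuity_signal(frames):
    """Distance between consecutive frames' WHT features; transitions appear
    as peaks in this signal."""
    feats = [wht_features(f) for f in frames]
    return np.array([np.linalg.norm(feats[i + 1] - feats[i])
                     for i in range(len(feats) - 1)])

def detect_cuts(frames, thresh):
    """Abrupt transitions: indices of the first frame after each large peak."""
    return np.flatnonzero(continuity_signal(frames) > thresh) + 1
```

    Gradual transitions produce a broad plateau rather than a sharp peak in the continuity signal, which is why the full method classifies transition types with a dedicated procedure instead of a single threshold.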

  11. Cross-Domain Multi-View Object Retrieval via Multi-Scale Topic Models.

    PubMed

    Hong, Richang; Hu, Zhenzhen; Wang, Ruxin; Wang, Meng; Tao, Dacheng

    2016-09-27

    The increasing number of 3D objects in various applications has increased the demand for effective and efficient 3D object retrieval methods, which attracted extensive research efforts in recent years. Existing works mainly focus on how to extract features and conduct object matching. As applications grow, 3D objects increasingly come from different domains, and in such circumstances how to conduct cross-domain object retrieval becomes more important. To address this issue, we propose a multi-view object retrieval method using multi-scale topic models in this paper. In our method, multiple views are first extracted from each object, and then dense visual features are extracted to represent each view. To represent the 3D object, multi-scale topic models are employed to extract the hidden relationships among these features with respect to varied topic numbers in the topic model. In this way, each object can be represented by a set of bags of topics. To compare objects, we first conduct topic clustering for the basic topics from the two datasets, and then generate a common topic dictionary for the new representation. The two objects can then be aligned to the same common feature space for comparison. To evaluate the performance of the proposed method, experiments are conducted on two datasets. The 3D object retrieval experimental results and comparison with existing methods demonstrate the effectiveness of the proposed method.

  12. Review: Quantifying animal feeding behaviour with a focus on pigs.

    PubMed

    Maselyne, Jarissa; Saeys, Wouter; Van Nuffel, Annelies

    2015-01-01

    The study of animal feeding behaviour is of interest to understand feeding, to investigate the effect of treatments and conditions or to predict illness. This paper reviews the different steps to undertake when studying animal feeding behaviour, with illustrations for group-housed pigs. First, one must be aware of the mechanisms that control feeding and the various influences that can change feeding behaviour. Satiety is shown to largely influence free feeding (ad libitum and without an operant condition) in animals, but 'free' feeding seems a very fragile process, given the many factors that can influence feeding behaviour. Second, a measurement method must be chosen that is compatible with the goal of the research. Several measurement methods exist, which lead to different experimental set-ups and measurement data. Sensors are available for lab conditions, for research on group-housed pigs and also for on-farm use. Most of these methods result in a record of feeding visits. However, these feeding visits are often found to be clustered into meals. Thus, the third step is to choose which unit of feeding behaviour to use for analysis. Depending on the situation, either meals, feeding visits, other raw data, or a combination thereof can be suitable. Meals are more appropriate for analysing short-term feeding behaviour, but this may not be true for disease detection. Further research is therefore needed. To cluster visits into meals, an appropriate analysis method has to be selected. The last part of this paper provides a review and discussion of the existing methods for meal determination. A variety of methods exist, with the most recent methods based on the influence of satiety on feeding. More thorough validation of the recent methods, including validation from a behavioural point of view, and uniformity in the applied methods, are therefore necessary. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Chemical vapor deposition growth

    NASA Technical Reports Server (NTRS)

    Ruth, R. P.; Manasevit, H. M.; Kenty, J. L.; Moudy, L. A.; Simpson, W. I.; Yang, J. J.

    1976-01-01

    The chemical vapor deposition (CVD) method for the growth of Si sheet on inexpensive substrate materials is investigated. The objective is to develop CVD techniques for producing large areas of Si sheet on inexpensive substrate materials, with sheet properties suitable for fabricating solar cells meeting the technical goals of the Low Cost Silicon Solar Array Project. Specific areas covered include: (1) modification and test of existing CVD reactor system; (2) identification and/or development of suitable inexpensive substrate materials; (3) experimental investigation of CVD process parameters using various candidate substrate materials; (4) preparation of Si sheet samples for various special studies, including solar cell fabrication; (5) evaluation of the properties of the Si sheet material produced by the CVD process; and (6) fabrication and evaluation of experimental solar cell structures, using standard and near-standard processing techniques.

  14. Estimation of whole lemon mass transfer parameters during hot air drying using different modelling methods

    NASA Astrophysics Data System (ADS)

    Torki-Harchegani, Mehdi; Ghanbarian, Davoud; Sadeghi, Morteza

    2015-08-01

    To design new dryers or improve existing drying equipment, accurate values of mass transfer parameters are of great importance. In this study, an experimental and theoretical investigation of drying whole lemons was carried out. The whole lemons were dried in a convective hot air dryer at different air temperatures (50, 60 and 75 °C) and a constant air velocity (1 m s-1). In the theoretical analysis, three moisture transfer models, the Dincer and Dost model, the Bi-G correlation approach and the conventional solution of Fick's second law of diffusion, were used to determine moisture transfer parameters and predict dimensionless moisture content curves. The predicted results were then compared with the experimental data, and the highest prediction accuracy was achieved by the Dincer and Dost model.
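
    For a spherical product such as a whole lemon, the conventional Fick's-law approach named in this record reduces to the classical series solution for the dimensionless moisture ratio. A minimal sketch of that solution (the diffusivity and radius values are illustrative assumptions, not the paper's estimates):

```python
import math

def moisture_ratio_sphere(t_s, D_m2s, r_m, n_terms=50):
    """Dimensionless moisture ratio MR(t) for a sphere from the series
    solution of Fick's second law (uniform initial moisture, constant
    surface conditions): MR = (6/pi^2) * sum_n (1/n^2) exp(-n^2 pi^2 D t / r^2)."""
    total = 0.0
    for n in range(1, n_terms + 1):
        total += (1.0 / n**2) * math.exp(-n**2 * math.pi**2 * D_m2s * t_s / r_m**2)
    return (6.0 / math.pi**2) * total

# Illustrative (not measured) values: D = 5e-10 m^2/s, lemon radius 30 mm.
D, r = 5e-10, 0.030
mr_start = moisture_ratio_sphere(0.0, D, r)       # close to 1 (truncated series)
mr_later = moisture_ratio_sphere(6 * 3600, D, r)  # after 6 h of drying
print(mr_start, mr_later)
```

    In practice D would be fitted so that the predicted MR curve matches the measured drying data, which is how the effective diffusivity is estimated.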

  15. Elastic scattering of low-energy electrons by C{sub 3}H{sub 4} isomers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopes, A.R.; Bettega, M.H.F.

    2003-03-01

    We report integral, differential, and momentum-transfer cross sections for elastic scattering of low-energy electrons by the C{sub 3}H{sub 4} isomers allene, propyne, and cyclopropene, which belong to the D{sub 2d}, C{sub 3v}, and C{sub 2v} groups, respectively. We use the Schwinger multichannel method with pseudopotentials [Bettega et al., Phys. Rev. A 47, 1111 (1993)] at the static-exchange approximation to compute the cross sections for energies up to 40 eV. We compare our results with available experimental results and find very good agreement. Our results confirm the existence of the shape resonances in the cross sections of allene and propyne, and the isomer effect, both reported by the experimental studies.

  16. Plans and Example Results for the 2nd AIAA Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Chwalowski, Pawel; Schuster, David M.; Raveh, Daniella; Jirasek, Adam; Dalenbring, Mats

    2015-01-01

    This paper summarizes the plans for the second AIAA Aeroelastic Prediction Workshop. The workshop is designed to assess the state of the art of computational methods for predicting unsteady flow fields and aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques, and to identify computational and experimental areas needing additional research and development. This paper provides guidelines and instructions for participants, including the computational aerodynamic model, the structural dynamic properties, the experimental comparison data, and the expected output data from simulations. The Benchmark Supercritical Wing (BSCW) has been chosen as the configuration for this workshop. The analyses to be performed will include aeroelastic flutter solutions for the wing mounted on a pitch-and-plunge apparatus.

  17. Second-order optical effects in several pyrazolo-quinoline derivatives

    NASA Astrophysics Data System (ADS)

    Makowska-Janusik, M.; Gondek, E.; Kityk, I. V.; Wisła, J.; Sanetra, J.; Danel, A.

    2004-11-01

    Using optical poling of several pyrazolo-quinoline (PAQ) derivatives, we have found the existence of a sufficiently high second-order optical susceptibility at a wavelength of 1.76 μm, varying in the range 0.9-2.8 pm/V. Quantum chemical simulations of the UV absorption of the molecules, performed for isolated and solvated forms and for forms incorporated into polymethacrylate (PMMA) polymer films, have shown that the PM3 method is the best among the semi-empirical ones for simulating the optical properties. The calculations of the hyperpolarizabilities have shown a good correlation with the experimentally measured susceptibilities obtained from the optical poling. We have found that the experimental susceptibility depends on the linear molecular polarizability and on photoinduced changes of the molecular dipole moment. This is clearly seen for the PAQ4-PAQ6 molecules, which possess halogen atoms with relatively large polarizabilities.

  18. Determination of performance of non-ideal aluminized explosives.

    PubMed

    Keshavarz, Mohammad Hossein; Mofrad, Reza Teimuri; Poor, Karim Esmail; Shokrollahi, Arash; Zali, Abbas; Yousefi, Mohammad Hassan

    2006-09-01

    Non-ideal explosives can have Chapman-Jouguet (C-J) detonation pressures significantly different from those expected from existing thermodynamic computer codes, which usually allow finding the parameters of ideal detonation of individual high explosives with good accuracy. A simple method is introduced by which the detonation pressure of non-ideal aluminized explosives with the general formula C(a)H(b)N(c)O(d)Al(e) can be predicted only from a, b, c, d and e at any loading density, without using any assumed detonation products or experimental data. Calculated detonation pressures show good agreement with experimental values, comparable to the computed results obtained by a complicated computer code. It is shown here how loading density and atomic composition can be integrated into an empirical formula for predicting the detonation pressure of proposed aluminized explosives.

  19. Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor

    PubMed Central

    Cheong, Hejin; Chae, Eunjung; Lee, Eunsung; Jo, Gwanghyun; Paik, Joonki

    2015-01-01

    This paper presents a fast adaptive image restoration method for removing the spatially varying out-of-focus blur of a general imaging sensor. After estimating the parameters of the space-variant point spread function (PSF) using the derivative in each uniformly blurred region, the proposed method performs spatially adaptive image restoration by selecting the optimal restoration filter according to the estimated blur parameters. Each restoration filter is implemented as a combination of multiple FIR filters, which guarantees fast image restoration without the need for iterative or recursive processing. Experimental results show that the proposed method outperforms existing space-invariant restoration methods in terms of both objective and subjective performance measures. The proposed algorithm can be employed in a wide range of image restoration applications, such as mobile imaging devices, robot vision, and satellite image processing. PMID:25569760
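
    The filter-selection idea in this record can be sketched as a bank of restoration filters indexed by each region's estimated blur parameter. The sketch below uses a Gaussian defocus model and a frequency-domain Wiener filter in place of the paper's derivative-based PSF estimation and FIR implementation; all parameter values are illustrative assumptions:

```python
import numpy as np

def gaussian_psf(sigma, size=15):
    """Gaussian approximation of an out-of-focus PSF (a modelling assumption)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def wiener_restore(blurred, sigma, nsr=1e-2):
    """Restore one uniformly blurred region with the Wiener filter that
    matches its estimated blur parameter sigma."""
    H = np.fft.fft2(gaussian_psf(sigma), s=blurred.shape)
    W = np.conj(H) / (np.abs(H)**2 + nsr)   # Wiener filter, NSR-regularised
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Demo on one uniformly blurred region; a spatially adaptive scheme would
# call wiener_restore per region with that region's own sigma estimate.
n = 64
x = np.arange(n)
image = np.sin(2 * np.pi * x / n)[None, :] * np.cos(2 * np.pi * x / n)[:, None]
sigma = 2.0
H = np.fft.fft2(gaussian_psf(sigma), s=image.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * H))  # simulated defocus
restored = wiener_restore(blurred, sigma)                # matched filter undoes it

mse_blurred = np.mean((blurred - image) ** 2)
mse_restored = np.mean((restored - image) ** 2)
```

    Precomputing one such filter per candidate blur level, then selecting by the locally estimated parameter, is what makes the per-region restoration non-iterative.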

  20. A review on the solution of Grad-Shafranov equation in the cylindrical coordinates based on the Chebyshev collocation technique

    NASA Astrophysics Data System (ADS)

    Amerian, Z.; Salem, M. K.; Salar Elahi, A.; Ghoranneviss, M.

    2017-03-01

    Equilibrium reconstruction consists of identifying, from experimental measurements, a distribution of the plasma current density that satisfies the pressure balance constraint. Numerous methods exist to solve the Grad-Shafranov equation, which describes the equilibrium of a plasma confined by an axisymmetric magnetic field. In this paper, we propose a new numerical solution of the Grad-Shafranov equation, transformed into cylindrical coordinates and solved with the Chebyshev collocation method, for the case where the source term (current density function) on the right-hand side is linear. The Chebyshev collocation method computes highly accurate numerical solutions of differential equations. We describe a circular cross-section of the tokamak, present numerical results for the magnetic surfaces of the IR-T1 tokamak, and then compare the results with an analytical solution.
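
    The Chebyshev collocation technique itself can be illustrated on a one-dimensional model problem: build the Chebyshev differentiation matrix (Trefethen's classic construction), square it for the second derivative, impose Dirichlet boundary conditions, and solve the resulting linear system. This is only a minimal sketch of the underlying discretisation, not the paper's Grad-Shafranov solver:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Chebyshev grid x on [-1, 1]
    (Trefethen's `cheb` construction)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal by negative row sums
    return D, x

# Model BVP: u''(x) = -pi^2 sin(pi x), u(-1) = u(1) = 0,
# with exact solution u(x) = sin(pi x).
N = 24
D, x = cheb(N)
D2 = D @ D
A = D2[1:-1, 1:-1]                       # drop boundary rows/cols (Dirichlet BCs)
f = -np.pi**2 * np.sin(np.pi * x[1:-1])
u_inner = np.linalg.solve(A, f)
u = np.concatenate([[0.0], u_inner, [0.0]])
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

    The spectral accuracy of this discretisation (error decaying faster than any power of 1/N for smooth solutions) is what motivates its use for equilibrium problems; the 2-D cylindrical Grad-Shafranov case applies the same collocation idea in each coordinate.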
