Sample records for experimental technique called

  1. An experimental ward. Improving care and learning.

    PubMed

    Ronan, L; Stoeckle, J D

    1992-01-01

    The rapidly changing health care system is still largely organized according to old and increasingly outdated models. The contemporary demands of patient care and residency training call for an experimental ward, which can develop and test new techniques in hospital organization and the delivery of care in a comprehensive way.

  2. Experimental Studies on role of pH, potential and concentration of buffer solution for chemical bath deposition technique

    NASA Astrophysics Data System (ADS)

    Suresha, B. L.; Sumantha, H. S.; Salman, K. Mohammed; Pramod, N. G.; Abhiram, J.

    2018-04-01

    The ionization potential is usually found to be lower in acid and higher in base. The experiment shows that the ionization potential increases on dilution from acid to base and decreases from base to acid, so the potential can be tailored to the desired properties through the choice of acid or base. The experimental study establishes a direct relationship between pH and electric potential. This work provides theoretical insight into the need for a basic medium of pH 10 in the chemical thin-film growth technique called chemical bath deposition.

  3. Opto-Electronic Oscillator and its Applications

    NASA Technical Reports Server (NTRS)

    Yao, X. S.; Maleki, L.

    1996-01-01

    We present the theoretical and experimental results of a new class of microwave oscillators called opto-electronic oscillators (OEO). We discuss techniques of achieving high stability single mode operation and demonstrate the applications of OEO in photonic communication systems.

  4. Dynamics of the brain: Mathematical models and non-invasive experimental studies

    NASA Astrophysics Data System (ADS)

    Toronov, V.; Myllylä, T.; Kiviniemi, V.; Tuchin, V. V.

    2013-10-01

    Dynamics is an essential aspect of brain function. In this article we review theoretical models of neural and haemodynamic processes in the human brain, together with non-invasive experimental techniques developed to study brain functions and to measure dynamic characteristics such as neurodynamics, neurovascular coupling, haemodynamic changes due to brain activity and autoregulation, and the cerebral metabolic rate of oxygen. We focus on emerging theoretical biophysical models and experimental functional neuroimaging results, obtained mostly by functional magnetic resonance imaging (fMRI) and near-infrared spectroscopy (NIRS). We also include our current results on the effects of blood pressure variations on cerebral haemodynamics, and on simultaneous measurements of fast processes in the brain by near-infrared spectroscopy and a novel functional MRI technique called magnetic resonance encephalography. Given the rapid progress in theoretical and experimental techniques, growing computational capacity, and the combined use of rapidly improving and emerging neuroimaging techniques, we anticipate great advances in the overall knowledge of the human brain during the next decade.

  5. Opto-electronic oscillator and its applications

    NASA Astrophysics Data System (ADS)

    Yao, X. S.; Maleki, Lute

    1997-04-01

    We review the properties of a new class of microwave oscillators called opto-electronic oscillators (OEO). We present theoretical and experimental results of a multi-loop technique for single mode selection. We then describe a new development called coupled OEO (COEO) in which the electrical oscillation is directly coupled with the optical oscillation, producing an OEO that generates stable optical pulses and single mode microwave oscillation simultaneously. Finally we discuss various applications of OEO.

  6. EVALUATION OF CHEMICALLY BONDED PHOSPHATE CERAMICS FOR MERCURY STABILIZATION OF A MIXED SYNTHETIC WASTE

    EPA Science Inventory

    This experimental study was conducted to evaluate the stabilization and encapsulation technique developed by Argonne National Laboratory, called the Chemically Bonded Phosphate Ceramics technology for Hg- and HgCl2-contaminated synthetic waste materials. Leachability ...

  7. A Qualitative Experiment: Research on Mediated Meaning Construction Using a Hybrid Approach

    ERIC Educational Resources Information Center

    Robinson, Sue; Mendelson, Andrew L.

    2012-01-01

    This article presents a hybrid methodological technique that fuses elements of experimental design with qualitative strategies to explore mediated communication. Called the "qualitative experiment," this strategy uses focus groups and in-depth interviews "within" randomized stimulus conditions typically associated with…

  8. Photovoltaics module interface: General purpose primers

    NASA Technical Reports Server (NTRS)

    Boerio, J.

    1985-01-01

    The interfacial chemistry established between ethylene vinyl acetate (EVA) and the aluminized back surface of commercial solar cells was observed experimentally. The technique employed is called Fourier Transform Infrared (FTIR) spectroscopy, with the infrared signal being reflected back from the aluminum surface through the EVA film. Reflection infrared (IR) spectra are given, and attention is drawn to the specific IR peak at 1080 cm^-1 which forms on hydrolytic aging of the EVA/aluminum system. With this fundamental finding, and the workable experimental techniques, candidate silane coupling agents are employed at the interface, and their effects on eliminating or slowing hydrolytic aging of the EVA/aluminum interface are monitored.

  9. Glass transition temperatures of liquid prepolymers obtained by thermal penetrometry

    NASA Technical Reports Server (NTRS)

    Potts, J. E., Jr.; Ashcraft, A. C.

    1973-01-01

    Thermal penetrometry is an experimental technique for detecting the temperature at which a frozen prepolymer becomes soft enough to be pierced by a weighted penetrometer needle; the temperature at which this occurs is called the penetration temperature. The apparatus used to obtain penetration temperatures can be set up largely from standard parts.

  10. Overview Snapshot Observational Technique (OSOT): Administration Manual Experimental Research Form.

    ERIC Educational Resources Information Center

    Coller, Alan R.

    Overview Snapshot Observational Technique (OSOT) is specifically designed to allow users to obtain both pictorial and categorical data related to the transactions in context component of early childhood (prekindergarten and kindergarten) educational programs. Such information is especially useful in operations calling for descriptive evaluation.…

  11. Range-Depth Tracking of Sounds from a Single-Point Deployment by Exploiting the Deep-Water Sound Speed Minimum

    DTIC Science & Technology

    2014-09-30

    beaked whales, and shallow-diving mysticetes, with a focus on humpback whales. ...obtained via large-aperture vertical array techniques (for humpback whales). APPROACH: The experimental approach used by this project uses data... m depth. The motivation behind these multiple deployments is that multiple techniques can be used to estimate humpback whale call position, and

  12. Object Recognition and Random Image Structure Evolution

    ERIC Educational Resources Information Center

    Sadr, Javid; Sinha, Pawan

    2004-01-01

    We present a technique called Random Image Structure Evolution (RISE) for use in experimental investigations of high-level visual perception. Potential applications of RISE include the quantitative measurement of perceptual hysteresis and priming, the study of the neural substrates of object perception, and the assessment and detection of subtle…

  13. Sports Training Support Method by Self-Coaching with Humanoid Robot

    NASA Astrophysics Data System (ADS)

    Toyama, S.; Ikeda, F.; Yasaka, T.

    2016-09-01

    This paper proposes a new training support method called self-coaching with humanoid robots. In the proposed method, two small, inexpensive humanoid robots are used because of their availability. One robot, called the target robot, reproduces the motion of a target player, and the other, called the reference robot, reproduces the motion of an expert player. The target player can recognize the target technique from the reference robot and his or her inadequate skill from the target robot. By modifying the motion of the target robot as self-coaching, the target player can gain advanced cognition of the skill. Experimental results show the promise of the new training method and reveal some issues with the self-coaching interface program to be addressed in future work.

  14. Cognitive Regulation and Skills Training in the Management of Anger: A Stress Inoculation Approach.

    ERIC Educational Resources Information Center

    Novaco, Raymond W.

    Experimental interest in anger arousal has typically been incidental or secondary to the study of aggression. Novaco developed a cognitive behavior therapy approach to chronic anger problems. Clinical techniques have followed the work of Meichenbaum (1974, 1975) in the development of an approach called "stress inoculation" that has been…

  15. Electrochemically active biofilms: facts and fiction. A review

    PubMed Central

    Babauta, Jerome; Renslow, Ryan; Lewandowski, Zbigniew; Beyenal, Haluk

    2014-01-01

    This review examines the electrochemical techniques used to study extracellular electron transfer in the electrochemically active biofilms that are used in microbial fuel cells and other bioelectrochemical systems. Electrochemically active biofilms are defined as biofilms that exchange electrons with conductive surfaces: electrodes. Following electrochemical conventions, and recognizing that electrodes can be considered reactants in these bioelectrochemical processes, biofilms that deliver electrons to the biofilm electrode are called anodic, i.e., electrode-reducing, biofilms, while biofilms that accept electrons from the biofilm electrode are called cathodic, i.e., electrode-oxidizing, biofilms. How to grow these electrochemically active biofilms in bioelectrochemical systems is discussed, as are the critical choices made in the experimental setup that affect the experimental results. The reactor configurations used in bioelectrochemical systems research are also described, and the authors demonstrate how to use selected voltammetric techniques to study extracellular electron transfer in bioelectrochemical systems. Finally, some critical concerns with the proposed electron transfer mechanisms in bioelectrochemical systems are addressed, together with the prospects of bioelectrochemical systems as energy-converting and energy-harvesting devices. PMID:22856464

  16. Multipulse technique exploiting the intermodulation of ultrasound waves in a nonlinear medium.

    PubMed

    Biagi, Elena; Breschi, Luca; Vannacci, Enrico; Masotti, Leonardo

    2009-03-01

    In recent years, the nonlinear properties of materials have attracted much interest in nondestructive testing and in ultrasound diagnostic applications. Acoustic nonlinear parameters represent an opportunity to improve the information that can be extracted from a medium, such as the structural organization and pathologic status of tissue. In this paper, a method called pulse subtraction intermodulation (PSI), based on a multipulse technique, is presented and investigated both theoretically and experimentally. This method allows separation of the intermodulation products, which arise when 2 separate frequencies are transmitted in a nonlinear medium, from fundamental and second harmonic components, making them available for improved imaging techniques or signal processing algorithms devoted to tissue characterization. The theory of intermodulation product generation was developed according to the Khokhlov-Zabolotskaya-Kuznetsov (KZK) nonlinear propagation equation, and is consistent with experimental results. The description of the proposed method, characterization of the intermodulation spectral contents, and quantitative results from in vitro experimentation are reported and discussed in this paper.
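
    The separation step can be illustrated with a toy memoryless quadratic nonlinearity standing in for full KZK propagation (an assumption for brevity; the frequencies, sampling rate, and nonlinearity coefficient below are invented for the sketch). Firing each tone alone and then both together, and subtracting, cancels the fundamentals and self-harmonics, leaving only the intermodulation products at f2 − f1 and f1 + f2:

```python
import numpy as np

def nonlinear_medium(x, eps=0.1):
    """Toy memoryless quadratic nonlinearity standing in for tissue response."""
    return x + eps * x**2

fs = 40e6                        # sampling rate, Hz
t = np.arange(0, 10e-6, 1/fs)    # 10 us observation window
f1, f2 = 2.0e6, 3.0e6            # the two transmitted frequencies

x1 = np.sin(2*np.pi*f1*t)        # pulse 1 alone
x2 = np.sin(2*np.pi*f2*t)        # pulse 2 alone

# Three firings: each tone alone, then both together.
r1 = nonlinear_medium(x1)
r2 = nonlinear_medium(x2)
r12 = nonlinear_medium(x1 + x2)

# Pulse subtraction: fundamentals and self-harmonics cancel exactly,
# leaving only the intermodulation term 2*eps*x1*x2, i.e. spectral
# lines at f2 - f1 = 1 MHz and f1 + f2 = 5 MHz.
intermod = r12 - r1 - r2

spec = np.abs(np.fft.rfft(intermod))
freqs = np.fft.rfftfreq(len(t), 1/fs)
peaks = freqs[spec > 0.5 * spec.max()]
print(np.round(peaks / 1e6, 1))  # dominant lines, in MHz
```

    With a quadratic nonlinearity the residual is exactly the cross term, which is why the fundamentals vanish identically rather than merely being attenuated.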

  17. Flux control coefficients determined by inhibitor titration: the design and analysis of experiments to minimize errors.

    PubMed Central

    Small, J R

    1993-01-01

    This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
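
    The robustness difference between the two analysis techniques can be illustrated on synthetic data. The hyperbolic titration model, parameter values, and noise level below are invented for illustration, not taken from the paper; a two-point initial-slope extrapolation stands in for the graph method, against a nonlinear least-squares fit of the whole curve:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Illustrative (invented) titration model: flux falls hyperbolically with
# inhibitor concentration I; C plays the role of the flux control
# coefficient, K is an apparent inhibition constant.
def flux(I, J0, C, K):
    return J0 * (1.0 - C * I / (I + K))

J0_true, C_true, K_true = 1.0, 0.6, 2.0
I = np.linspace(0.0, 10.0, 11)
J = flux(I, J0_true, C_true, K_true) + rng.normal(0.0, 0.02, I.size)

# "Graph method" stand-in: estimate the initial slope from the first two
# noisy points and extrapolate (K assumed known here). The secant
# systematically underestimates the tangent slope J0*C/K at I = 0.
slope = (J[1] - J[0]) / (I[1] - I[0])
C_graph = -slope * K_true / J[0]

# Nonlinear fit of the full curve uses every point at once.
(J0_fit, C_fit, K_fit), _ = curve_fit(flux, I, J, p0=[1.0, 0.5, 1.0])

print(f"graph: {C_graph:.3f}  fit: {C_fit:.3f}  true: {C_true}")
```

    The two-point estimate inherits both the secant bias and the full noise of two individual measurements, whereas the fit averages the noise over the whole titration curve, mirroring the paper's conclusion.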

  18. Systematic cloning of an ORFeome using the Gateway system.

    PubMed

    Matsuyama, Akihisa; Yoshida, Minoru

    2009-01-01

    With the completion of the genome projects, there are increasing demands on experimental systems that make it possible to exploit the entire set of protein-coding open reading frames (ORFs), viz. the ORFeome, en masse. Systematic proteomic studies based on cloned ORFeomes are called "reverse proteomics" and have been launched in many organisms in recent years. Cloning an ORFeome is an attractive route to comprehensive understanding of biological phenomena, but it is a challenging and daunting task. However, recent advances in DNA cloning techniques using site-specific recombination and in high-throughput experimental techniques have made it feasible to clone an ORFeome with a minimum of exertion. The Gateway system is one such approach, employing the recombination reaction of bacteriophage lambda. By combining traditional DNA manipulation methods with this modern recombination-based cloning system, it is possible to clone the ORFeome of an organism at the level of individual ORFs.

  19. DROP: Detecting Return-Oriented Programming Malicious Code

    NASA Astrophysics Data System (ADS)

    Chen, Ping; Xiao, Hai; Shen, Xiaobin; Yin, Xinchun; Mao, Bing; Xie, Li

    Return-Oriented Programming (ROP) is a new technique that helps an attacker construct malicious code mounted on x86/SPARC executables without any function call at all. This means the ROP malicious code contains no injected instructions, which differs from existing attacks. Moreover, it hides the malicious code in benign code. Thus, it circumvents approaches that prevent control flow diversion outside legitimate regions (such as W ⊕ X) and most malicious code scanning techniques (such as anti-virus scanners). However, ROP has intrinsic features that differ from normal program design: (1) it uses short instruction sequences ending in "ret", called gadgets, and (2) it executes the gadgets contiguously in specific memory space, such as the standard GNU libc. Based on these features of ROP malicious code, in this paper we present a tool, DROP, focused on dynamically detecting ROP malicious code. Preliminary experimental results show that DROP can efficiently detect ROP malicious code with no false positives or negatives.
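
    The gadget heuristic in features (1) and (2) can be sketched in a few lines (the thresholds and toy traces below are invented for illustration; DROP itself operates on real dynamic instruction traces):

```python
# Simplified sketch of the gadget-chain heuristic: flag execution traces
# in which many short instruction sequences ending in "ret" run back to
# back. max_gadget_len and min_chain are invented illustrative thresholds.

def looks_like_rop(trace, max_gadget_len=5, min_chain=3):
    """trace: list of instruction mnemonics in execution order."""
    chain = 0          # consecutive gadget-like sequences seen so far
    since_ret = 0      # instructions executed since the last "ret"
    for insn in trace:
        since_ret += 1
        if insn == "ret":
            if since_ret <= max_gadget_len:
                chain += 1
                if chain >= min_chain:
                    return True
            else:
                chain = 0  # long sequence: looks like a normal function body
            since_ret = 0
    return False

# A normal program returns after long function bodies; a ROP chain
# strings together many tiny ret-terminated snippets.
normal = ["push", "mov", "mov", "add", "cmp", "jne", "mov", "pop", "ret"] * 4
rop = ["pop", "ret", "mov", "add", "ret", "xor", "pop", "ret"]

print(looks_like_rop(normal), looks_like_rop(rop))
```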

  20. Modeling, simulation, and estimation of optical turbulence

    NASA Astrophysics Data System (ADS)

    Formwalt, Byron Paul

    This dissertation documents three new contributions to the simulation and modeling of optical turbulence. The first contribution is the formalization, optimization, and validation of a modeling technique called successively conditioned rendering (SCR). The SCR technique is empirically validated by comparing the statistical error of random phase screens generated with the technique. The second contribution is the derivation of the covariance delineation theorem, which provides theoretical bounds on the error associated with SCR. It is shown empirically that the theoretical bound may be used to predict relative algorithm performance; the covariance delineation theorem is therefore a powerful tool for optimizing SCR algorithms. For the third contribution, we introduce a new method for passively estimating optical turbulence parameters and demonstrate it using experimental data from a 100 m horizontal path at 1.25 m above sun-heated tarmac on a clear afternoon. For this experiment, we estimated Cn^2 ≈ 6.01 × 10^-9 m^(-2/3), l0 ≈ 17.9 mm, and L0 ≈ 15.5 m.
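
    For context, the baseline such phase-screen techniques build on is the standard FFT method for generating a random screen with a Kolmogorov k^(-11/3) spectrum. The sketch below is that baseline, not the dissertation's SCR technique (which conditions each new screen on previously rendered ones); absolute amplitude scaling conventions vary, so the screen is simply normalized to unit variance here:

```python
import numpy as np

def phase_screen(n=256, dx=0.01, seed=0):
    """Random screen with a Kolmogorov k^(-11/3) power-law spectrum."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n, dx)          # spatial frequencies, cycles/m
    kx, ky = np.meshgrid(f, f)
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0                      # placeholder; DC is zeroed below
    psd = k**(-11.0 / 3.0)             # Kolmogorov power-law shape
    psd[0, 0] = 0.0                    # remove the piston (mean) term
    # Complex white noise filtered by the square root of the PSD.
    c = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    raw = np.real(np.fft.ifft2(c * np.sqrt(psd)))
    return raw / raw.std()             # unit-variance screen

screen = phase_screen()
print(screen.shape, round(float(screen.std()), 3))
```

    In practice the absolute scaling would be set from Cn^2 (or r0) over the propagation path; only the spectral shape is shown here.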

  1. Experimental level densities of atomic nuclei

    DOE PAGES

    Guttormsen, M.; Aiche, M.; Bello Garrote, F. L.; ...

    2015-12-23

    It is almost 80 years since Hans Bethe described the level density as a non-interacting gas of protons and neutrons. In all these years, experimental data were interpreted within this picture of a fermionic gas. However, the renewed interest in measuring level density using various techniques calls for a revision of this description. In particular, the wealth of nuclear level densities measured with the Oslo method favors the constant-temperature level density over the Fermi-gas picture. Furthermore, from the basis of experimental data, we demonstrate that nuclei exhibit a constant-temperature level density behavior for all mass regions and at least up to the neutron threshold.
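
    For reference, the two parameterizations being contrasted have the following standard textbook forms (the temperature T, energy shift E0, level-density parameter a, back-shift E1, and spin-cutoff parameter σ are fitted per nucleus):

```latex
% Constant-temperature level density
\rho_{\mathrm{CT}}(E) = \frac{1}{T}\,\exp\!\left(\frac{E - E_0}{T}\right)

% Back-shifted Fermi-gas level density
\rho_{\mathrm{FG}}(E) =
  \frac{\exp\!\left(2\sqrt{a\,(E - E_1)}\right)}
       {12\sqrt{2}\,\sigma\, a^{1/4}\,(E - E_1)^{5/4}}
```

    A constant-temperature level density means d ln ρ/dE = 1/T is independent of excitation energy, which is the behavior the Oslo-method data favor, whereas the Fermi-gas form implies an effective temperature that rises with excitation energy.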

  2. A Laboratory Course for Teaching Laboratory Techniques, Experimental Design, Statistical Analysis, and Peer Review Process to Undergraduate Science Students

    ERIC Educational Resources Information Center

    Gliddon, C. M.; Rosengren, R. J.

    2012-01-01

    This article describes a 13-week laboratory course called Human Toxicology taught at the University of Otago, New Zealand. This course used a guided inquiry based laboratory coupled with formative assessment and collaborative learning to develop in undergraduate students the skills of problem solving/critical thinking, data interpretation and…

  3. Foucault, Counselling and the Aesthetics of Existence

    ERIC Educational Resources Information Center

    Peters, Michael A.

    2005-01-01

    Michel Foucault was drawn late in life to study the "arts of the self" in Greco-Roman culture as a basis, following Nietzsche, for what he called an "aesthetics of existence." By this, he meant a set of creative and experimental processes and techniques by which an individual turns him- or herself into a work of art. For Nietzsche, it was above…

  4. Image-based overlay and alignment metrology through optically opaque media with sub-surface probe microscopy

    NASA Astrophysics Data System (ADS)

    van Es, Maarten H.; Mohtashami, Abbas; Piras, Daniele; Sadeghian, Hamed

    2018-03-01

    Nondestructive subsurface nanoimaging through optically opaque media is considered to be extremely challenging and is essential for several semiconductor metrology applications, including overlay and alignment and buried void and defect characterization. The current key challenge in overlay and alignment is the measurement of targets that are covered by optically opaque layers. Moreover, with device dimensions moving to smaller nodes and the so-called loading effect causing offsets between targets and product features, it is increasingly desirable to perform alignment and overlay on product features, or so-called on-cell overlay, which requires higher lateral resolution than optical methods can provide. Our recently developed technique, known as SubSurface Ultrasonic Resonance Force Microscopy (SSURFM), has shown the capability for high-resolution imaging of structures below a surface based on the (visco-)elasticity of the constituent materials, and as such is a promising technique for performing overlay and alignment with high resolution in upcoming production nodes. In this paper, we describe the developed SSURFM technique and experimental results on imaging buried features through various layers, with the ability to detect objects with resolution below 10 nm. In summary, the experimental results show that SSURFM is a potential solution for on-cell overlay and alignment, as well as for detecting buried defects or voids and, more generally, metrology through optically opaque layers.

  5. Identification of immunoglobulins using Chou's pseudo amino acid composition with feature selection technique.

    PubMed

    Tang, Hua; Chen, Wei; Lin, Hao

    2016-04-01

    Immunoglobulins, also called antibodies, are a group of cell surface proteins which are produced by the immune system in response to the presence of a foreign substance (called an antigen). They play key roles in many medical, diagnostic and biotechnological applications. Correct identification of immunoglobulins is crucial to the comprehension of humoral immune function. With the avalanche of protein sequences identified in the postgenomic age, it is highly desirable to develop computational methods to identify immunoglobulins in a timely manner. In view of this, we designed a predictor called "IGPred" by formulating protein sequences with the pseudo amino acid composition, into which nine physicochemical properties of amino acids were incorporated. Jackknife cross-validated results showed that 96.3% of immunoglobulins and 97.5% of non-immunoglobulins can be correctly predicted, indicating that IGPred holds very high potential to become a useful tool for antibody analysis. For the convenience of most experimental scientists, a web-server for IGPred was established at http://lin.uestc.edu.cn/server/IGPred. We believe that the web-server will become a powerful tool to study immunoglobulins and to guide related experimental validations.
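
    The pseudo amino acid composition feature vector itself is straightforward to compute. The sketch below implements Chou's type-1 formulation with a single placeholder property scale (the property values are invented illustrative numbers; IGPred incorporates nine physicochemical properties and adds feature selection, neither of which is reproduced here):

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
# Placeholder normalized property values, one per amino acid (invented).
prop = dict(zip(AA, np.linspace(-1.0, 1.0, 20)))

def pseaac(seq, lam=3, w=0.05):
    """Type-1 pseudo amino acid composition: 20 + lam components."""
    seq = [a for a in seq if a in prop]
    L = len(seq)
    freq = np.array([seq.count(a) / L for a in AA])  # 20 composition terms
    # Sequence-order correlation factors tau_1 .. tau_lam: mean squared
    # property difference between residues k positions apart.
    tau = np.array([
        np.mean([(prop[seq[i]] - prop[seq[i + k]])**2 for i in range(L - k)])
        for k in range(1, lam + 1)
    ])
    denom = freq.sum() + w * tau.sum()   # freq.sum() == 1
    return np.concatenate([freq, w * tau]) / denom

v = pseaac("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", lam=3)
print(len(v), round(float(v.sum()), 6))
```

    By construction the 20 + λ components sum to one, so the vector remains a composition-like descriptor while still encoding sequence order through the τ terms.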

  6. Evolutionary neural networks for anomaly detection based on the behavior of a program.

    PubMed

    Han, Sang-Jun; Cho, Sung-Bae

    2006-06-01

    The process of learning the behavior of a given program by using machine-learning techniques (based on system-call audit data) is effective to detect intrusions. Rule learning, neural networks, statistics, and hidden Markov models (HMMs) are some of the representative methods for intrusion detection. Among them, neural networks are known for good performance in learning system-call sequences. In order to apply this knowledge to real-world problems successfully, it is important to determine the structures and weights of the neural networks. However, finding the appropriate structures requires very long time periods because there are no suitable analytical solutions. In this paper, a novel intrusion-detection technique based on evolutionary neural networks (ENNs) is proposed. One advantage of using ENNs is that it takes less time to obtain superior neural networks than when using conventional approaches, because the structures and weights of the neural networks are discovered simultaneously. Experimental results with the 1999 Defense Advanced Research Projects Agency (DARPA) Intrusion Detection Evaluation (IDEVAL) data confirm that ENNs are promising tools for intrusion detection.
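
    As a much simpler stand-in for the evolved networks, the underlying idea of modeling normal system-call behavior can be shown with a sliding-window (n-gram) detector: windows never seen during training on normal traces count as anomalous. This is not the paper's ENN method, and the call traces below are invented examples:

```python
from collections import Counter

def ngrams(calls, n=3):
    """All contiguous length-n windows of a system-call trace."""
    return [tuple(calls[i:i + n]) for i in range(len(calls) - n + 1)]

def train(normal_traces, n=3):
    """Record every window observed in normal behavior."""
    model = Counter()
    for trace in normal_traces:
        model.update(ngrams(trace, n))
    return model

def anomaly_score(model, trace, n=3):
    """Fraction of windows in the trace never seen during training."""
    grams = ngrams(trace, n)
    unseen = sum(1 for g in grams if g not in model)
    return unseen / len(grams)

normal = [["open", "read", "write", "close"],
          ["open", "read", "read", "write", "close"]]
model = train(normal)

benign = ["open", "read", "write", "close"]
attack = ["open", "mmap", "mprotect", "execve"]
print(anomaly_score(model, benign), anomaly_score(model, attack))
```

    A neural network (evolved or otherwise) plays the same role as this lookup table but generalizes across windows instead of memorizing them.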

  7. The Buffer Diagnostic Prototype: A fault isolation application using CLIPS

    NASA Technical Reports Server (NTRS)

    Porter, Ken

    1994-01-01

    This paper describes problem domain characteristics and development experiences from using CLIPS 6.0 in a proof-of-concept troubleshooting application called the Buffer Diagnostic Prototype. The problem domain is a large digital communications subsystem called the real-time network (RTN), which was designed to upgrade the launch processing system used for shuttle support at KSC. The RTN enables up to 255 computers to share 50,000 data points with millisecond response times. The RTN's extensive built-in test capability, coupled with its lack of any automatic fault isolation capability, presents a unique opportunity for a diagnostic expert system application. The Buffer Diagnostic Prototype addresses RTN diagnosis with a multiple-strategy approach. A novel technique called 'faulty causality' employs inexact qualitative models to process test results. Experimental knowledge provides a capability to recognize symptom-fault associations. The implementation utilizes rule-based and procedural programming techniques, including a goal-directed control structure and a simple text-based generic user interface that may be reusable for other rapid prototyping applications. Although limited in scope, this project demonstrates a diagnostic approach that may be adapted to troubleshoot a broad range of equipment.

  8. Homogenization of CZ Si wafers by Tabula Rasa annealing

    NASA Astrophysics Data System (ADS)

    Meduňa, M.; Caha, O.; Kuběna, J.; Kuběna, A.; Buršík, J.

    2009-12-01

    The precipitation of interstitial oxygen in Czochralski-grown silicon has been investigated by infrared absorption spectroscopy, chemical etching, transmission electron microscopy and X-ray diffraction after application of a homogenization annealing process called Tabula Rasa. The influence of this homogenization step, consisting of short-time annealing at high temperature, has been observed for various temperatures and times. The experimental results, involving the interstitial oxygen decay in Si wafers and the absorption spectra of SiOx precipitates during precipitation annealing at 1000 °C, were compared across techniques for various Tabula Rasa temperatures. The differences in oxygen precipitation, precipitate morphology and evolution of point defects in samples with and without Tabula Rasa applied are evident from all the experimental techniques used. The results qualitatively correlate with the prediction of the homogenization annealing process based on classical nucleation theory.

  9. Preclinical Evaluation to Specifically Target Ovarian Cancer with Folic Acid-Conjugated Nanoceria

    DTIC Science & Technology

    2014-08-01

    cancer. Our experimental nanoparticle is Nanoceria (NCe), a cerium oxide nanoparticle. Nanotechnology-based tools and techniques are rapidly... cancer we proposed the present work, where we are integrating the field of nanotechnology with the ovarian cancer cell's unique property of... overexpressing folic acid receptor alpha (FR-a) to specifically target ovarian cancer. A cerium oxide nanoparticle, called Nanoceria (NCe), that has the ability

  10. An experimental system for coiled tubing partial underbalanced drilling (CT-PUBD) technique

    NASA Astrophysics Data System (ADS)

    Shi, H. Z.; Ji, Z. S.; Zhao, H. Q.; Chen, Z. L.; Zhang, H. Z.

    2018-05-01

    To improve the rate of penetration (ROP) in hard formations, a new high-speed drilling technique called Coiled Tubing Partial Underbalanced Drilling (CT-PUBD) is proposed. This method uses a rotary packer to realize an underbalanced condition near the bit by creating a micro-annulus, and an overbalanced condition at the main part of the annulus. A new full-scale laboratory experimental system was designed and set up to study the hydraulic characteristics and drilling performance of this method. The system is composed of a drilling system, a circulation system, and a monitor system, and includes three key devices: a cuttings discharge device, a rotary packer, and a backflow device. The experimental results showed that the pressure loss increased linearly with the flow rate of the drilling fluid, and the high drilling speed of CT-PUBD proved it a better drilling method than conventional drilling. The experimental system may provide a fundamental basis for research on CT-PUBD, and the results showed that this new method is feasible for enhancing ROP while guaranteeing drilling safety.

  11. The physics of a popsicle stick bomb

    NASA Astrophysics Data System (ADS)

    Sautel, Jérémy; Bourges, Andréane; Caussarieu, Aude; Plihon, Nicolas; Taberlet, Nicolas

    2017-10-01

    Popsicle sticks can be interlocked in the so-called "cobra weave" to form a chain under tension. When one end of the chain is released, the sticks rapidly disentangle, forming a traveling wave that propagates down the chain. In this paper, the properties of the traveling front are studied experimentally, and classical results from the theory of elasticity allow for a dimensional analysis of the height and speed of the traveling wave. The study presented here can help undergraduate students familiarize themselves with experimental techniques of image processing, and it also demonstrates the power of dimensional analysis and scaling laws.

  12. Paper simulation techniques in user requirements analysis for interactive computer systems

    NASA Technical Reports Server (NTRS)

    Ramsey, H. R.; Atwood, M. E.; Willoughby, J. K.

    1979-01-01

    This paper describes the use of a technique called 'paper simulation' in the analysis of user requirements for interactive computer systems. In a paper simulation, the user solves problems with the aid of a 'computer', as in normal man-in-the-loop simulation. In this procedure, though, the computer does not exist, but is simulated by the experimenters. This allows simulated problem solving early in the design effort, and allows the properties and degree of structure of the system and its dialogue to be varied. The technique, and a method of analyzing the results, are illustrated with examples from a recent paper simulation exercise involving a Space Shuttle flight design task.

  13. Compton imaging tomography technique for NDE of large nonuniform structures

    NASA Astrophysics Data System (ADS)

    Grubsky, Victor; Romanov, Volodymyr; Patton, Ned; Jannson, Tomasz

    2011-09-01

    In this paper we describe a new nondestructive evaluation (NDE) technique called Compton Imaging Tomography (CIT) for reconstructing the complete three-dimensional internal structure of an object, based on the registration of multiple two-dimensional Compton-scattered x-ray images of the object. CIT provides high resolution and sensitivity with virtually any material, including lightweight structures and organics, which normally pose problems in conventional x-ray computed tomography because of low contrast. The CIT technique requires only one-sided access to the object, has no limitation on the object's size, and can be applied to high-resolution real-time in situ NDE of large aircraft/spacecraft structures and components. Theoretical and experimental results will be presented.

  14. A critical review on tablet disintegration.

    PubMed

    Quodbach, Julian; Kleinebudde, Peter

    2016-09-01

    Tablet disintegration is an important factor for drug release and can be modified with excipients called tablet disintegrants. Tablet disintegrants act via different mechanisms, and the efficacy of these excipients is influenced by various factors. In this review, the existing literature on tablet disintegration is critically reviewed. Potential disintegration mechanisms, as well as factors impacting the disintegration process, are discussed based on experimental evidence. Search terms for Scopus and Web of Science included "tablet disintegration", "mechanism tablet disintegration", "superdisintegrants", "disintegrants", "swelling force", "disintegration force", "disintegration mechanisms", as well as brand names of commonly applied superdisintegrants. References of identified papers were screened as well. Experimental data support swelling and shape recovery as the main mechanisms of action of disintegrants. Other tablet excipients and different manufacturing techniques greatly influence the disintegration process. The use of different excipients, experimental setups and manufacturing techniques, as well as the demand for original research, has led to a distinct patchwork of knowledge. Broader, more systematic approaches are necessary to structure not only past but also future findings.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perl, M.L.

    This paper is based upon lectures in which I have described and explored the ways in which experimenters can try to find answers, or at least clues toward answers, to some of the fundamental questions of elementary particle physics. All of these experimental techniques and directions have been discussed fully in other papers, for example: searches for heavy charged leptons, tests of quantum chromodynamics, searches for Higgs particles, searches for particles predicted by supersymmetric theories, searches for particles predicted by technicolor theories, searches for proton decay, searches for neutrino oscillations, monopole searches, studies of low transfer momentum hadron physics at very high energies, and elementary particle studies using cosmic rays. Each of these subjects requires several lectures by itself to do justice to the large amount of experimental work and theoretical thought which has been devoted to these subjects. My approach in these tutorial lectures is to describe general ways to experiment beyond the standard model. I will use some of the topics listed to illustrate these general ways. Also, in these lectures I present some dreams and challenges about new techniques in experimental particle physics and accelerator technology; I call these Experimental Needs. 92 references.

  16. Laser induced photoluminescence studies of primary photochemical production processes of cometary radicals

    NASA Technical Reports Server (NTRS)

    Jackson, W. M.

    1977-01-01

    A tunable vacuum ultraviolet flash lamp was constructed. This unique flash lamp was coupled with a tunable dye laser detector and permits the experimenter to measure the production rates of ground state radicals as a function of wavelength. A new technique for producing fluorescent radicals was discovered. This technique called multiphoton ultraviolet photodissociation is currently being applied to several problems of both cometary and stratospheric interest. It was demonstrated that NO2 will dissociate to produce an excited fragment and the radiation can possibly be used for remote detection of this species.

  17. J-substitution algorithm in magnetic resonance electrical impedance tomography (MREIT): phantom experiments for static resistivity images.

    PubMed

    Khang, Hyun Soo; Lee, Byung Il; Oh, Suk Hoon; Woo, Eung Je; Lee, Soo Yeol; Cho, Min Hyoung; Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun

    2002-06-01

    Recently, a new static resistivity image reconstruction algorithm was proposed that utilizes internal current density data obtained by the magnetic resonance current density imaging technique. This new imaging method is called magnetic resonance electrical impedance tomography (MREIT). The derivation and performance of the J-substitution algorithm in MREIT have been reported, via computer simulation, as a new accurate and high-resolution static impedance imaging technique. In this paper, we present experimental procedures, denoising techniques, and image reconstructions using a 0.3-tesla (T) experimental MREIT system and saline phantoms. MREIT using the J-substitution algorithm effectively utilizes the internal current density information, resolving the problem inherent in conventional EIT, that is, the low sensitivity of boundary measurements to changes of internal tissue resistivity values. Resistivity images of saline phantoms show an accuracy of 6.8%-47.2% and a spatial resolution of 64 x 64. Both can be significantly improved by using an MRI system with a better signal-to-noise ratio.

  18. Failure of tetracycline as a biomarker in batch-marking juvenile frogs

    USGS Publications Warehouse

    Hatfield, Jeffrey S.; Henry, Paula F.P.; Olsen, Glenn H.; Paul, M.M.; Hammerschlag, Richard S.

    2001-01-01

    Recent widespread amphibian declines call for better techniques to assess population dynamics. Tetracycline as a biomarker in capture-recapture studies is one technique used successfully in fish, reptiles, and mammals. A two-phase experimental study was conducted to evaluate tetracycline as a biomarker in green frogs (Rana clamitans) and pickerel frogs (Rana palustris). In the first experimental phase tadpoles were exposed to water containing either 250 mg/l or 500 mg/l tetracycline for a period of 24 hr. During the second phase, juvenile frogs were exposed to tetracycline in water at 500 mg/l or given injections of tetracycline at the dose rate of 100 mg/kg body weight. At selected times several weeks later, under tricaine methanesulfonate anesthesia, a toe was surgically excised from each animal, sectioned and viewed under an ultraviolet microscope. No significant differences were found between the various treatments and control animals (untreated). Therefore, the use of tetracycline as a biomarker in anurans using these techniques is not recommended.

  19. A High Performance SOAP Engine for Grid Computing

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Welzl, Michael; Zhang, Liang

    Web Service technology still has many defects that make its usage for Grid computing problematic, most notably the low performance of the SOAP engine. In this paper, we develop a novel SOAP engine called SOAPExpress, which adopts two key techniques for improving processing performance: SCTP data transport and dynamic early binding based data mapping. Experimental results show a significant and consistent performance improvement of SOAPExpress over Apache Axis.

  20. New fluorescence techniques for high-throughput drug discovery.

    PubMed

    Jäger, S; Brand, L; Eggeling, C

    2003-12-01

    The rapid increase of compound libraries as well as new targets emerging from the Human Genome Project require constant progress in pharmaceutical research. An important tool is High-Throughput Screening (HTS), which has evolved as an indispensable instrument in the pre-clinical target-to-IND (Investigational New Drug) discovery process. HTS requires machinery, which is able to test more than 100,000 potential drug candidates per day with respect to a specific biological activity. This calls for certain experimental demands especially with respect to sensitivity, speed, and statistical accuracy, which are fulfilled by using fluorescence technology instrumentation. In particular the recently developed family of fluorescence techniques, FIDA (Fluorescence Intensity Distribution Analysis), which is based on confocal single-molecule detection, has opened up a new field of HTS applications. This report describes the application of these new techniques as well as of common fluorescence techniques--such as confocal fluorescence lifetime and anisotropy--to HTS. It gives experimental examples and presents advantages and disadvantages of each method. In addition the most common artifacts (auto-fluorescence or quenching by the drug candidates) emerging from the fluorescence detection techniques are highlighted and correction methods for confocal fluorescence read-outs are presented, which are able to circumvent this deficiency.

  1. Jet measurements in heavy ion physics

    NASA Astrophysics Data System (ADS)

    Connors, Megan; Nattrass, Christine; Reed, Rosi; Salur, Sevil

    2018-04-01

    A hot, dense medium called a quark gluon plasma (QGP) is created in ultrarelativistic heavy ion collisions. Early in the collision, hard parton scatterings generate high momentum partons that traverse the medium, which then fragment into sprays of particles called jets. Understanding how these partons interact with the QGP and fragment into final state particles provides critical insight into quantum chromodynamics. Experimental measurements from high momentum hadrons, two particle correlations, and full jet reconstruction at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) continue to improve our understanding of energy loss in the QGP. Run 2 at the LHC recently began and there is a jet detector at RHIC under development. Now is the perfect time to reflect on what the experimental measurements have taught us so far, the limitations of the techniques used for studying jets, how the techniques can be improved, and how to move forward with the wealth of experimental data such that a complete description of energy loss in the QGP can be achieved. Measurements of jets to date clearly indicate that hard partons lose energy. Detailed comparisons of the nuclear modification factor between data and model calculations led to quantitative constraints on the opacity of the medium to hard probes. However, while there is substantial evidence for softening and broadening jets through medium interactions, the difficulties comparing measurements to theoretical calculations limit further quantitative constraints on energy loss mechanisms. Since jets are algorithmic descriptions of the initial parton, the same jet definitions must be used, including the treatment of the underlying heavy ion background, when making data and theory comparisons. 
An agreement is called for between theorists and experimentalists on the appropriate treatment of the background, Monte Carlo generators that enable experimental algorithms to be applied to theoretical calculations, and a clear understanding of which observables are most sensitive to the properties of the medium, even in the presence of background. This will enable us to determine the best strategy for the field to improve quantitative constraints on properties of the medium in the face of these challenges.

  2. Fluctuations in protein synthesis from a single RNA template: stochastic kinetics of ribosomes.

    PubMed

    Garai, Ashok; Chowdhury, Debashish; Ramakrishnan, T V

    2009-01-01

    Proteins are polymerized by cyclic machines called ribosomes, which use their messenger RNA (mRNA) track also as the corresponding template, and the process is called translation. We explore, in depth and detail, the stochastic nature of the translation. We compute various distributions associated with the translation process; one of them--namely, the dwell time distribution--has been measured in recent single-ribosome experiments. The form of the distribution, which fits best with our simulation data, is consistent with that extracted from the experimental data. For our computations, we use a model that captures both the mechanochemistry of each individual ribosome and their steric interactions. We also demonstrate the effects of the sequence inhomogeneities of real genes on the fluctuations and noise in translation. Finally, inspired by recent advances in the experimental techniques of manipulating single ribosomes, we make theoretical predictions on the force-velocity relation for individual ribosomes. In principle, all our predictions can be tested by carrying out in vitro experiments.
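    In the simplest kinetic picture behind such dwell-time measurements, each elongation cycle is a sequence of exponential sub-steps, so the dwell time is a convolution of exponentials rather than a single exponential. The following is only a minimal illustrative sketch of that idea (a hypothetical two-step model; it is not the authors' model, which also includes sequence inhomogeneity and ribosome-ribosome interactions):

```python
import random

def dwell_times(k1, k2, n, seed=0):
    """Hypothetical two-step kinetic model: each elongation cycle is the
    sum of two sequential exponential waiting times with rates k1 and k2.
    The resulting dwell-time distribution is peaked (a convolution of two
    exponentials), unlike the monotonic single-exponential case."""
    rng = random.Random(seed)
    return [rng.expovariate(k1) + rng.expovariate(k2) for _ in range(n)]
```

    Even this two-step toy already produces the peaked dwell-time histogram characteristic of multi-step kinetics, with mean 1/k1 + 1/k2.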

  3. Visual mining geo-related data using pixel bar charts

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Keim, Daniel A.; Dayal, Umeshwar; Wright, Peter; Schneidewind, Joern

    2005-03-01

    A common approach to analyze geo-related data is using bar charts or x-y plots. They are intuitive and easy to use. But important information often gets lost. In this paper, we introduce a new interactive visualization technique called Geo Pixel Bar Charts, which combines the advantages of Pixel Bar Charts and interactive maps. This technique allows analysts to visualize large amounts of spatial data without aggregation and shows the geographical regions corresponding to the spatial data attribute at the same time. In this paper, we apply Geo Pixel Bar Charts to visually mining sales transactions and Internet usage from different locations. Our experimental results show the effectiveness of this technique for providing data distribution and exceptions from the map.

  4. Experimental validation of predicted cancer genes using FRET

    NASA Astrophysics Data System (ADS)

    Guala, Dimitri; Bernhem, Kristoffer; Ait Blal, Hammou; Jans, Daniel; Lundberg, Emma; Brismar, Hjalmar; Sonnhammer, Erik L. L.

    2018-07-01

    Huge amounts of data are generated in genome wide experiments, designed to investigate diseases with complex genetic causes. Follow up of all potential leads produced by such experiments is currently cost prohibitive and time consuming. Gene prioritization tools alleviate these constraints by directing further experimental efforts towards the most promising candidate targets. Recently a gene prioritization tool called MaxLink was shown to outperform other widely used state-of-the-art prioritization tools in a large scale in silico benchmark. An experimental validation of predictions made by MaxLink has however been lacking. In this study we used Fluorescence Resonance Energy Transfer, an established experimental technique for detection of protein-protein interactions, to validate potential cancer genes predicted by MaxLink. Our results provide confidence in the use of MaxLink for selection of new targets in the battle with polygenic diseases.

  5. Double synchronized switch harvesting (DSSH): a new energy harvesting scheme for efficient energy extraction.

    PubMed

    Lallart, Mickaël; Garbuio, Lauric; Petit, Lionel; Richard, Claude; Guyomar, Daniel

    2008-10-01

    This paper presents a new technique for optimized energy harvesting using piezoelectric microgenerators called double synchronized switch harvesting (DSSH). This technique consists of a nonlinear treatment of the output voltage of the piezoelectric element. It also integrates an intermediate switching stage that ensures an optimal harvested power whatever the load connected to the microgenerator. Theoretical developments are presented considering either constant vibration magnitude, constant driving force, or independent extraction. Then experimental measurements are carried out to validate the theoretical predictions. This technique exhibits a constant output power for a wide range of load connected to the microgenerator. In addition, the extracted power obtained using such a technique allows a gain up to 500% in terms of maximal power output compared with the standard energy harvesting method. It is also shown that such a technique allows a fine-tuning of the trade-off between vibration damping and energy harvesting.

  6. Atomistic determination of flexoelectric properties of crystalline dielectrics

    NASA Astrophysics Data System (ADS)

    Maranganti, R.; Sharma, P.

    2009-08-01

    Upon application of a uniform strain, internal sublattice shifts within the unit cell of a noncentrosymmetric dielectric crystal result in the appearance of a net dipole moment: a phenomenon well known as piezoelectricity. A macroscopic strain gradient on the other hand can induce polarization in dielectrics of any crystal structure, even those which possess a centrosymmetric lattice. This phenomenon, called flexoelectricity, has both bulk and surface contributions: the strength of the bulk contribution can be characterized by means of a material property tensor called the bulk flexoelectric tensor. Several recent studies suggest that strain-gradient induced polarization may be responsible for a variety of interesting and anomalous electromechanical phenomena in materials including electromechanical coupling effects in nonuniformly strained nanostructures, “dead layer” effects in nanocapacitor systems, and “giant” piezoelectricity in perovskite nanostructures among others. In this work, adopting a lattice dynamics based microscopic approach we provide estimates of the flexoelectric tensor for certain cubic crystalline ionic salts, perovskite dielectrics, III-V and II-VI semiconductors. We compare our estimates with experimental/theoretical values wherever available and also revisit the validity of an existing empirical scaling relationship for the magnitude of flexoelectric coefficients in terms of material parameters. It is interesting to note that two independent groups report values of flexoelectric properties for perovskite dielectrics that are orders of magnitude apart: Cross and co-workers from Penn State have carried out experimental studies on a variety of materials including barium titanate while Catalan and co-workers from Cambridge used theoretical ab initio techniques as well as experimental techniques to study paraelectric strontium titanate as well as ferroelectric barium titanate and lead titanate. 
We find that, in the case of perovskite dielectrics, our estimates agree to an order of magnitude with the experimental and theoretical estimates for strontium titanate. For barium titanate however, while our estimates agree to an order of magnitude with existing ab initio calculations, there exists a large discrepancy with experimental estimates. The possible reasons for the observed deviations are discussed.

  7. A new approach of watermarking technique by means multichannel wavelet functions

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Puccio, Luigia

    2012-12-01

    Digital piracy involving images, music, movies, books, and so on is a legal problem for which no solution has been found. It therefore becomes crucial to create and develop methods and numerical algorithms to solve copyright problems. In this paper we focus attention on a new approach to watermarking applied to digital color images. Our aim is to describe the realized watermarking algorithm, based on multichannel wavelet functions with multiplicity r = 3, called MCWM 1.0. We report extensive experiments and some important numerical results showing the robustness of the proposed algorithm to geometrical attacks.

  8. Efficient continuous-variable state tomography using Padua points

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    Further development of quantum technologies calls for efficient characterization methods for quantum systems. While recent work has focused on discrete systems of qubits, much remains to be done for continuous-variable systems such as a microwave mode in a cavity. We introduce a novel technique to reconstruct the full Husimi Q or Wigner function from measurements done at the Padua points in phase space, the optimal sampling points for interpolation in 2D. Our technique not only reduces the number of experimental measurements, but remarkably, also allows for the direct estimation of any density matrix element in the Fock basis, including off-diagonal elements. OLC acknowledges financial support from NSERC.
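    For reference, the Padua points themselves are easy to generate: one standard construction samples the generating Lissajous-type curve gamma(t) = (cos(n t), cos((n+1) t)) at equally spaced parameters and drops the duplicated self-intersections, leaving (n+1)(n+2)/2 nodes in [-1, 1]^2. A small sketch along those lines (illustrative only; the state-reconstruction machinery of the paper is not reproduced here):

```python
import math

def padua_points(n):
    """Padua points of degree n, sampled from the generating curve
    gamma(t) = (cos(n t), cos((n+1) t)) at t = k*pi/(n*(n+1)),
    k = 0..n*(n+1). Duplicates (self-intersections of the curve) are
    removed, leaving (n+1)(n+2)/2 distinct interpolation nodes.
    Coordinates are rounded so duplicate samples merge reliably."""
    seen = set()
    pts = []
    for k in range(n * (n + 1) + 1):
        t = k * math.pi / (n * (n + 1))
        p = (round(math.cos(n * t), 9), round(math.cos((n + 1) * t), 9))
        if p not in seen:
            seen.add(p)
            pts.append(p)
    return pts
```

    These nodes are what makes the sampling optimal for degree-n polynomial interpolation in 2D, which is why measuring at them suffices for the reconstruction described above.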

  9. Fringe pattern demodulation with a two-frame digital phase-locked loop algorithm.

    PubMed

    Gdeisat, Munther A; Burton, David R; Lalor, Michael J

    2002-09-10

    A novel technique called a two-frame digital phase-locked loop for fringe pattern demodulation is presented. In this scheme, two fringe patterns with different spatial carrier frequencies are grabbed for an object. A digital phase-locked loop algorithm tracks and demodulates the phase difference between the two fringe patterns by employing the wrapped phase components of one of the fringe patterns as a reference to demodulate the second fringe pattern. The desired phase information can be extracted from the demodulated phase difference. We tested the algorithm experimentally using real fringe patterns. The technique is shown to be suitable for noncontact measurement of objects with rapid surface variations, and it outperforms the Fourier fringe analysis technique in this respect. Phase maps produced with this algorithm are noisy in comparison with phase maps generated with the Fourier fringe analysis technique.
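    The core idea of using one wrapped phase as a reference for the other can be illustrated without the full phase-locked loop: as long as the true phase difference stays within (-pi, pi], a pointwise wrapped subtraction recovers it exactly. A toy sketch of that single step (hypothetical helper names; this is not the authors' tracking algorithm):

```python
import math

def wrap(p):
    """Wrap a phase value into (-pi, pi]."""
    return math.atan2(math.sin(p), math.cos(p))

def demodulate_difference(wrapped_ref, wrapped_sig):
    """Pointwise wrapped difference of two wrapped-phase sequences.
    If the underlying (unwrapped) phase difference lies in (-pi, pi],
    it is recovered exactly, up to floating-point error."""
    return [wrap(b - a) for a, b in zip(wrapped_ref, wrapped_sig)]
```

    The phase-locked loop in the paper adds tracking on top of this, which is what copes with noise and rapid surface variations.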

  10. NeuroPhysics: Studying how neurons create the perception of space-time using Physics' tools and techniques

    NASA Astrophysics Data System (ADS)

    Dhingra, Shonali; Sandler, Roman; Rios, Rodrigo; Vuong, Cliff; Mehta, Mayank

    All animals naturally perceive the abstract concept of space-time. A brain region called the hippocampus is known to be important in creating these perceptions, but the underlying mechanisms are unknown. In our lab we employ several experimental and computational techniques from physics to tackle this fundamental puzzle. Experimentally, we use ideas from nanoscience and materials science to develop techniques to measure the activity of hippocampal neurons in freely behaving animals. Computationally, we develop models to study neuronal activity patterns, which are point processes that are highly stochastic and multidimensional. We then apply these techniques to collect and analyze neuronal signals from rodents while they explore space in the real world or in virtual reality with various stimuli. Our findings show that under these conditions neuronal activity depends on various parameters, such as sensory cues, including visual and auditory, and behavioral cues, including linear and angular position and velocity. Further, neuronal networks create internally generated rhythms, which influence the perception of space and time. In total, these results further our understanding of how the brain develops a cognitive map of our surroundings and keeps track of time.

  11. Space charge distributions in insulating polymers: A new non-contacting way of measurement.

    PubMed

    Marty-Dessus, D; Ziani, A C; Petre, A; Berquez, L

    2015-04-01

    A new technique for the determination of space charge profiles in insulating polymers is proposed. Based on the evolution of an existing thermal wave technique called the Focused Laser Intensity Modulation Method ((F)LIMM), it allows thin films exhibiting an internal charge to be studied with non-contact measurements. An electrostatic model taking into account the proposed sample-cell geometry was first developed. It was shown, in particular, that it is theoretically possible to calculate the internal charge from experimental measurements while also evaluating the air layer appearing between the sample and the electrode when non-contact measurements are performed. These predictions were confirmed by an experimental implementation on two thin polymer samples (25 μm polyvinylidene fluoride and 50 μm polytetrafluoroethylene (PTFE)) used as tests. In these cases, the minimum air-layer thickness was determined with an accuracy of 3% and 20%, respectively, depending on the signal-to-noise ratio during the experimental procedure. To illustrate the capabilities of this technique, 2D and 3D cartographies of a negative space charge implanted by electron beam within the PTFE test sample are depicted: as in conventional (F)LIMM, a multidimensional representation of a selectively implanted charge remains possible at a depth of a few microns, but using a non-contact measurement.

  12. The Shock and Vibration Digest. Volume 12, Number 12,

    DTIC Science & Technology

    1980-12-01

    R.G. Schwarz, Fortschritt-Berichte der VDI-Z., Series 8, No. 30, 188 pp, 22 figs, 7 tables (1980). Summary in VDI-Z. ...accelerations is presented. It is shown that while the technique is theoretically correct, it is subject to experimental limitations due to inaccuracies in current accelerometer technology. ...relationship of the so-called K-value of the proposed standard VDI 2057 to the ... better understanding of the fatigue life of wind turbine blades.

  13. An Experimental Study of Turbulent Skin Friction Reduction in Supersonic Flow Using a Microblowing Technique

    NASA Technical Reports Server (NTRS)

    Hwang, Danny P.

    1999-01-01

    A new turbulent skin friction reduction technology, called the microblowing technique, has been tested in supersonic flow (Mach number of 1.9) on specially designed porous plates with microholes. The skin friction was measured directly by a force balance and the boundary layer development was measured by a total pressure rake at the trailing edge of a test plate. The free stream Reynolds number was 1.0 x 10(exp 6) per meter. The turbulent skin friction coefficient ratios (C(sub f)/C(sub f0)) of seven porous plates are given in this report. Test results showed that the microblowing technique could reduce the turbulent skin friction in supersonic flow (up to 90 percent below the solid flat plate value, which was even greater than in subsonic flow).

  14. Space-filling designs for computer experiments: A review

    DOE PAGES

    Joseph, V. Roshan

    2016-01-29

    Improving the quality of a product/process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time consuming, and therefore directly doing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used to overcome this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, special emphasis is given to a recently developed space-filling design called the maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.
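    The maximum projection (MaxPro) design highlighted above optimizes a projection-based space-filling criterion; the Latin hypercube structure it starts from, the usual building block of space-filling designs, can be sketched in a few lines (a plain random Latin hypercube, not the MaxPro optimization itself):

```python
import random

def latin_hypercube(n, d, seed=0):
    """Random Latin hypercube design with n runs in d dimensions on [0, 1)^d:
    each column is a random permutation of the n equal strata, with each
    point jittered uniformly inside its stratum, so every one-dimensional
    projection covers all n strata exactly once."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(s + rng.random()) / n for s in perm])
    return [tuple(col[i] for col in cols) for i in range(n)]
```

    MaxPro-type designs then optimize the point positions so that all lower-dimensional projections, not just the one-dimensional ones, remain space-filling.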

  16. Chaotic coordinates for the Large Helical Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hudson, S. R., E-mail: shudson@pppl.gov; Suzuki, Y.

    The theory of quadratic-flux-minimizing (QFM) surfaces is reviewed, and numerical techniques that allow high-order QFM surfaces to be efficiently constructed for experimentally relevant, non-integrable magnetic fields are described. As a practical example, the chaotic edge of the magnetic field in the Large Helical Device (LHD) is examined. A precise technique for finding the boundary surface is implemented, the hierarchy of partial barriers associated with the near-critical cantori is constructed, and a coordinate system, which we call chaotic coordinates, that is based on a selection of QFM surfaces is constructed that simplifies the description of the magnetic field, so that flux surfaces become "straight" and islands become "square".

  17. A direct-inverse method for transonic and separated flows about airfoils

    NASA Technical Reports Server (NTRS)

    Carlson, K. D.

    1985-01-01

    A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flowfield about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.

  18. A direct-inverse method for transonic and separated flows about airfoils

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1990-01-01

    A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flow field about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.

  19. Computer tomography of flows external to test models

    NASA Technical Reports Server (NTRS)

    Prikryl, I.; Vest, C. M.

    1982-01-01

    Computer tomographic techniques for the reconstruction of three-dimensional aerodynamic density fields from interferograms recorded from several different viewing directions were studied. Emphasis is on the case in which an opaque object, such as a test model in a wind tunnel, obscures significant regions of the interferograms (projection data). A method called the Iterative Convolution Method (ICM), existing methods in which the field is represented by series expansions, and analysis of real experimental data in the form of aerodynamic interferograms are discussed.

  20. Holographic nondestructive tests performed on composite samples of ceramic-epoxy-fiberglass sandwich structure

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L.; Liu, H. K.

    1974-01-01

    When a hologram storing more than one wave is illuminated with coherent light, the reconstructed wave fronts interfere with each other or with any other phase-related wave front derived from the illuminating source. This multiple wave front comparison is called holographic interferometry, and its application is called holographic nondestructive testing (HNDT). The theoretical aspects of HNDT techniques and the sensitivity of the holographic system to the geometrical placement of the optical components are briefly discussed. A unique HNDT system which is mobile and possesses variable sensitivity to stress amplitude is described, and experimental evidence of the application of this system to the testing of hidden debonds in a ceramic-epoxy-fiberglass structure used for sample testing of the radome of the Pershing missile system is presented.

  1. The coupling technique: A two-wave acoustic method for the study of dislocation dynamics

    NASA Astrophysics Data System (ADS)

    Gremaud, G.; Bujard, M.; Benoit, W.

    1987-03-01

    Progress in the study of dislocation dynamics has been achieved using a two-wave acoustic method, which has been called the coupling technique. In this method, the attenuation α and the velocity v of ultrasonic waves are measured in a sample submitted simultaneously to a harmonic stress σ of low frequency. Closed curves Δα(σ) and Δv/v(σ) are drawn during each cycle of the applied stress. The shapes of these curves and their evolution are characteristic of each dislocation motion mechanism which is activated by the low-frequency applied stress. For this reason, the closed curves Δα(σ) and Δv/v(σ) can be considered as signatures of the interaction mechanism which controls the low-frequency dislocation motion. In this paper, the concept of signature is presented and explained with some experimental examples. It will also be shown that theoretical models can be developed which explain very well the experimental results.

  2. Thermal Characterization of Edible Oils by Using Photopyroelectric Technique

    NASA Astrophysics Data System (ADS)

    Lara-Hernández, G.; Suaste-Gómez, E.; Cruz-Orea, A.; Mendoza-Alvarez, J. G.; Sánchez-Sinéncio, F.; Valcárcel, J. P.; García-Quiroz, A.

    2013-05-01

    Thermal properties of several edible oils such as olive, sesame, and grape seed oils were obtained by using the photopyroelectric technique. The inverse photopyroelectric configuration was used in order to obtain the thermal effusivity of the oil samples. The theoretical equation for the photopyroelectric signal in this configuration, as a function of the incident light modulation frequency, was fitted to the experimental data in order to obtain the thermal effusivity of these samples. Also, the back photopyroelectric configuration was used to obtain the thermal diffusivity of these oils; this thermal parameter was obtained by fitting the theoretical equation for this configuration, as a function of the sample thickness (called the thermal wave resonator cavity), to the experimental data. All measurements were done at room temperature. A complete thermal characterization of these edible oils was achieved by the relationship between the obtained thermal diffusivities and thermal effusivities with their thermal conductivities and volumetric heat capacities. The obtained results are in agreement with the thermal properties reported for olive oil.
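The fitting step can be sketched in stdlib Python, assuming a generic one-dimensional thermal-wave attenuation model S(f) = S₀·exp(−L·√(πf/α)) rather than the paper's exact photopyroelectric expression; the diffusivity and thickness values below are illustrative, not the measured ones. Because ln S is linear in √f, a simple least-squares line recovers the diffusivity α.

```python
import math

def thermal_wave_signal(f, alpha, L, s0=1.0):
    """Amplitude of a thermal wave after crossing a thermally thick layer of
    thickness L (m) with diffusivity alpha (m^2/s), modulated at frequency f (Hz)."""
    return s0 * math.exp(-L * math.sqrt(math.pi * f / alpha))

# Synthetic "measurement" (values are assumptions for the sketch)
alpha_true = 1.0e-7          # m^2/s, order of magnitude typical of oils
L = 100e-6                   # 100 um layer
freqs = [1.0 + 0.5 * k for k in range(40)]
data = [thermal_wave_signal(f, alpha_true, L) for f in freqs]

# Linearize: ln S = ln s0 - L*sqrt(pi/alpha) * sqrt(f), fit a line in sqrt(f)
xs = [math.sqrt(f) for f in freqs]
ys = [math.log(s) for s in data]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
alpha_fit = math.pi * (L / slope) ** 2   # invert slope = -L*sqrt(pi/alpha)
```

With noisy data the same linearized fit (or a nonlinear least-squares fit of the full expression) yields the thermal parameter and its uncertainty.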

  3. Microemulsion-based lycopene extraction: Effect of surfactants, co-surfactants and pretreatments.

    PubMed

    Amiri-Rigi, Atefeh; Abbasi, Soleiman

    2016-04-15

    Lycopene is a potent antioxidant that has received extensive attention recently. Due to the challenges encountered with current methods of lycopene extraction using hazardous solvents, industry calls for a greener, safer and more efficient process. The main purpose of the present study was the application of the microemulsion technique to extract lycopene from tomato pomace. In this respect, the effects of eight different surfactants, four different co-surfactants, and ultrasound and enzyme pretreatments on lycopene extraction efficiency were examined. Experimental results revealed that the combination of ultrasound and enzyme pretreatments, saponin as a natural surfactant, and glycerol as a co-surfactant, in the bicontinuous region of the microemulsion, constituted the optimal experimental conditions, resulting in a microemulsion containing 409.68±0.68 μg/g lycopene. The high lycopene concentration achieved indicates that the microemulsion technique, using a low-cost natural surfactant, could be promising for a simple and safe separation of lycopene from tomato pomace and possibly from tomato industrial wastes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Reduction of parasitic lasing

    NASA Technical Reports Server (NTRS)

    Storm, Mark E. (Inventor)

    1994-01-01

    A technique was developed which carefully retro-reflects precisely controlled amounts of light back into a laser system thereby intentionally forcing the laser system components to oscillate in a new resonator called the parasitic oscillator. The parasitic oscillator uses the laser system to provide the gain and an external mirror is used to provide the output coupling of the new resonator. Any change of gain or loss inside the new resonator will directly change the lasing threshold of the parasitic oscillator. This change in threshold can be experimentally measured as a change in the absolute value of reflectivity, provided by the external mirror, necessary to achieve lasing in the parasitic oscillator. Discrepancies between experimental data and a parasitic oscillator model are direct evidence of optical misalignment or component performance problems. Any changes in the optical system can instantly be measured as a change in threshold for the parasitic oscillator. This technique also enables aligning the system for maximum parasitic suppression with the system fully operational.

  5. Experimental demonstration of quantum digital signatures using phase-encoded coherent states of light

    PubMed Central

    Clarke, Patrick J.; Collins, Robert J.; Dunjko, Vedran; Andersson, Erika; Jeffers, John; Buller, Gerald S.

    2012-01-01

    Digital signatures are frequently used in data transfer to prevent impersonation, repudiation and message tampering. Currently used classical digital signature schemes rely on public key encryption techniques, where the complexity of so-called ‘one-way' mathematical functions is used to provide security over sufficiently long timescales. No mathematical proofs are known for the long-term security of such techniques. Quantum digital signatures offer a means of sending a message, which cannot be forged or repudiated, with security verified by information-theoretical limits and quantum mechanics. Here we demonstrate an experimental system, which distributes quantum signatures from one sender to two receivers and enables message sending ensured against forging and repudiation. Additionally, we analyse the security of the system in some typical scenarios. Our system is based on the interference of phase-encoded coherent states of light and our implementation utilizes polarization-maintaining optical fibre and photons with a wavelength of 850 nm. PMID:23132024

  6. Innovative hybrid pile oscillator technique in the Minerve reactor: open loop vs. closed loop

    NASA Astrophysics Data System (ADS)

    Geslot, Benoit; Gruel, Adrien; Bréaud, Stéphane; Leconte, Pierre; Blaise, Patrick

    2018-01-01

    Pile oscillator techniques are powerful methods to measure the small reactivity worth of isotopes of interest for nuclear data improvement. This kind of experiment has long been implemented in the Minerve experimental reactor, operated by CEA Cadarache. A hybrid technique, mixing reactivity worth estimation and measurement of small flux changes around test samples, is presented here. It was made possible by the development of high-sensitivity miniature fission chambers introduced next to the irradiation channel. A test campaign, called MAESTRO-SL, took place in 2015. Its objective was to assess the feasibility of the hybrid method and investigate the possibility of separating mixed neutron effects, such as fission/capture or scattering/capture. Experimental results are presented and discussed in this paper, which focuses on comparing two measurement setups, one using a power control system (closed loop) and another where the power is free to drift (open loop). First, it is demonstrated that open loop is equivalent to closed loop. Uncertainty management and method reproducibility are discussed. Second, results show that measuring the flux depression around oscillated samples provides valuable information regarding partial neutron cross sections. The technique is found to be very sensitive to the capture cross section at the expense of scattering, making it very useful for measuring small capture effects of highly scattering samples.

  7. An experimental model for the study of cognitive disorders: the hippocampus and associative learning in mice.

    PubMed

    Delgado-García, José M; Gruart, Agnès

    2008-12-01

    The availability of transgenic mice mimicking selective human neurodegenerative and psychiatric disorders calls for new electrophysiological and microstimulation techniques capable of being applied in vivo in this species. In this article, we will concentrate on experiments and techniques developed in our laboratory during the past few years. Thus we have developed different techniques for the study of learning and memory capabilities of wild-type and transgenic mice with deficits in cognitive functions, using classical conditioning procedures. These techniques include different trace (tone/SHOCK and shock/SHOCK) conditioning procedures, that is, a classical conditioning task involving the cerebral cortex, including the hippocampus. We have also developed implantation and recording techniques for evoking long-term potentiation (LTP) in behaving mice and for recording the evolution of field excitatory postsynaptic potentials (fEPSP) evoked in the hippocampal CA1 area by the electrical stimulation of the commissural/Schaffer collateral pathway across conditioning sessions. Computer programs have also been developed to quantify the appearance and evolution of eyelid conditioned responses and the slope of evoked fEPSPs. According to the present results, the in vivo recording of the electrical activity of selected hippocampal sites during classical conditioning of eyelid responses appears to be a suitable experimental procedure for studying learning capabilities in genetically modified mice, and an excellent model for the study of selected neuropsychiatric disorders compromising cerebral cortex functioning.

  8. Digression and Value Concatenation to Enable Privacy-Preserving Regression.

    PubMed

    Li, Xiao-Bai; Sarkar, Sumit

    2014-09-01

    Regression techniques can be used not only for legitimate data analysis, but also to infer private information about individuals. In this paper, we demonstrate that regression trees, a popular data-analysis and data-mining technique, can be used to effectively reveal individuals' sensitive data. This problem, which we call a "regression attack," has not been addressed in the data privacy literature, and existing privacy-preserving techniques are not appropriate in coping with this problem. We propose a new approach to counter regression attacks. To protect against privacy disclosure, our approach introduces a novel measure, called digression, which assesses the sensitive value disclosure risk in the process of building a regression tree model. Specifically, we develop an algorithm that uses the measure for pruning the tree to limit disclosure of sensitive data. We also propose a dynamic value-concatenation method for anonymizing data, which better preserves data utility than a user-defined generalization scheme commonly used in existing approaches. Our approach can be used for anonymizing both numeric and categorical data. An experimental study is conducted using real-world financial, economic and healthcare data. The results of the experiments demonstrate that the proposed approach is very effective in protecting data privacy while preserving data quality for research and analysis.
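The mechanics of a regression attack can be sketched with a one-split regression stump. This is not the paper's digression measure: as a simplified stand-in, the spread of sensitive values in a leaf serves as the disclosure-risk proxy, and a privacy-aware builder would refuse ("prune") splits whose leaves pin the sensitive value down too tightly. All data below are toy values.

```python
def best_split(xs, ys):
    """Find the threshold on the public attribute x minimizing total squared
    error of a two-leaf regression stump predicting the sensitive value y."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)
    best = (None, sse(ys))          # baseline: no split at all
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        err = sse(left) + sse(right)
        if err < best[1]:
            best = (t, err)
    return best

def leaf_spread(vals):
    """Range of sensitive values in a leaf; a tiny spread means the leaf
    effectively discloses each member's sensitive value."""
    return max(vals) - min(vals) if vals else 0.0

# Toy data: x = public attribute (e.g. zip-code band), y = sensitive income
xs = [1, 1, 2, 2, 8, 8, 9, 9]
ys = [30, 31, 30, 32, 90, 91, 92, 90]
t, _ = best_split(xs, ys)
left = [y for x, y in zip(xs, ys) if x <= t]
right = [y for x, y in zip(xs, ys) if x > t]

RISK_THRESHOLD = 5.0   # illustrative pruning threshold
disclosive = (leaf_spread(left) < RISK_THRESHOLD
              or leaf_spread(right) < RISK_THRESHOLD)
```

Here the best split cleanly separates the two income groups, and both leaves have a spread of only 2, so an attacker knowing someone's x learns their income to within ±1; the pruning rule would flag exactly this situation.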

  9. Experimental and numerical investigation of a scaled-up passive micromixer using fluorescence technique

    NASA Astrophysics Data System (ADS)

    Fan, Yanfeng; Hassan, Ibrahim

    2010-09-01

    The present paper investigates experimentally and numerically a scaled-up micromixer that combines the mixing principles of focusing/diverging and flow split-and-recombine. The micromixer consists of two units called “cross” and “omega”, which are similar to a zigzag structure. The total length is 199.5 mm with a depth of 3 mm. The fluorescence technique is used in the present study for local quantitative measurements of concentration. Two syringe pumps are used to supply the working fluids at the two inlets. The tested range of Reynolds number is 1 ≤ Re ≤ 50. The results of the experiment, obtained by the fluorescence technique, are supported by mixing visualization. The experimental results show that the mixing efficiency decreases for Re ≤ 10 and increases for Re ≥ 10. This is caused by the change in mixing mechanism from mass-diffusion domination to mass-convection domination. After five cells, the mixing efficiency reaches 70% at Re = 50. Computational fluid dynamics is applied to assist in understanding the fluid characteristics in the channels. The simulation shows good agreement with the experiment. Based on the simulation results, vortices are observed in the channels at high Re, which stretch and fold the fluids to enhance the effect of mass-convection on mixing. This design has the potential to be developed into a micromixer for high flow rates.
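Mixing efficiency from fluorescence concentration maps is usually quantified with an intensity-of-segregation index; the paper does not spell out its formula, so the sketch below uses a common definition (1 for perfectly mixed, 0 for fully segregated binary streams) on toy concentration samples.

```python
import math

def mixing_efficiency(conc, c_mean=0.5):
    """Intensity-of-segregation mixing index for normalized concentrations:
    1.0 for a perfectly mixed field, 0.0 for fully segregated 0/1 streams."""
    n = len(conc)
    var = sum((c - c_mean) ** 2 for c in conc) / n
    var_max = c_mean * (1.0 - c_mean)   # variance of the fully unmixed limit
    return 1.0 - math.sqrt(var / var_max)

segregated = [0.0] * 50 + [1.0] * 50   # two untouched inlet streams
mixed = [0.5] * 100                    # perfectly homogenized field
partial = [0.4, 0.6] * 50              # partially mixed field

eff_seg = mixing_efficiency(segregated)   # 0.0
eff_mix = mixing_efficiency(mixed)        # 1.0
eff_par = mixing_efficiency(partial)      # 0.8
```

Applying this index per cross-section along the channel gives the efficiency-versus-distance curves used to compare mixing regimes at different Re.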

  10. Space charge distributions in insulating polymers: A new non-contacting way of measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marty-Dessus, D., E-mail: marty@laplace.univ-tlse.fr; Ziani, A. C.; Berquez, L.

    2015-04-15

    A new technique for the determination of space charge profiles in insulating polymers is proposed. Based on the evolution of an existing thermal wave technique called the Focused Laser Intensity Modulation Method ((F)LIMM), it allows non-contact measurements to be made on thin films containing an internal charge. An electrostatic model taking into account the proposed new sample-cell geometry was first developed. It was shown, in particular, that it is theoretically possible to calculate the internal charge from experimental measurements while allowing an evaluation of the air layer appearing between the sample and the electrode when non-contact measurements are performed. These predictions were confirmed by an experimental implementation for two thin polymer samples (25 μm polyvinylidenefluoride and 50 μm polytetrafluoroethylene (PTFE)) used as tests. In these cases, the minimum air-layer thickness was determined with an accuracy of 3% and 20%, respectively, depending on the signal-to-noise ratio during the experimental procedure. To illustrate the possibilities of this technique, 2D and 3D cartographies of a negative space charge implanted by electron beam within the PTFE test sample were depicted: as in conventional (F)LIMM, a multidimensional representation of a selectively implanted charge remains possible at a depth of a few microns, but using a non-contacting way of measurement.

  11. Preliminary experimental investigation of in vivo magnetic manipulation: results and potential application in hyperthermia.

    PubMed

    Grady, M S; Howard, M A; Molloy, J A; Ritter, R C; Quate, E G; Gillies, G T

    1989-01-01

    The first in vivo experiments in support of a new technique for delivering stereotaxic hyperthermia have been conducted at the Experimental Surgery Facility of the University of Virginia's Medical Center. We call this technique the "Video Tumor Fighter." In each of twelve trials a single, small permanent magnet or train of small permanent magnets was implanted on the brain surface of adult canine models. In three of the trials, this "seed" (typically 6-mm diameter × 6-mm long) was moved by magnetic manipulation to different locations within the brain. In two other trials, the seed moved along the interface between the brain and the inner vault of the skull. The noncontact magnetic manipulation was accomplished by coupling the permanently magnetized seed to the large dc magnetic field gradient created by a water-cooled coil surrounding the animal's head. The seed's motions were monitored with x-ray fluoroscopy; its rate of movement was found to be approximately 0.8 mm s⁻¹. The forces required to produce these motions were on the order of 0.07 N. We document here the instrumentation used in these trials, describe the experimental procedures employed, and discuss the technical aspects of the results.

  12. Generation of dark hollow beam via coherent combination based on adaptive optics.

    PubMed

    Zheng, Yi; Wang, Xiaohua; Shen, Feng; Li, Xinyang

    2010-12-20

    A novel method for generating a dark hollow beam (DHB) is proposed and studied both theoretically and experimentally. A coherent combination technique for laser arrays is implemented based on adaptive optics (AO). A beam arraying structure and an active segmented mirror are designed and described. Piston errors are extracted by a zero-order interference detection system with the help of a custom-made photo-detector array. An algorithm called the extremum approach is adopted to calculate feedback control signals. A dynamic piston error is introduced by LiNbO3 to test the capability of the AO servo. In closed loop, a stable and clear DHB is obtained. The experimental results confirm the feasibility of the concept.
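The abstract names the extremum approach without detailing it; as a hedged reading, one common extremum-seeking control is a perturb-and-keep search on each piston phase, accepting a perturbation only when the detected combining metric improves. The sketch below maximizes the on-axis intensity of N ideal unit-amplitude beams (step size and starting phases are arbitrary).

```python
import cmath

def combined_intensity(phases):
    """On-axis intensity of N coherently combined unit-amplitude beams."""
    field = sum(cmath.exp(1j * p) for p in phases)
    return abs(field) ** 2

def extremum_lock(phases, step=0.1, iters=200):
    """Coordinate-wise extremum search: perturb each piston by +/-step and
    keep the perturbation only if the detected intensity increases."""
    phases = list(phases)
    for _ in range(iters):
        for k in range(len(phases)):
            base = combined_intensity(phases)
            for delta in (step, -step):
                phases[k] += delta
                if combined_intensity(phases) > base:
                    break               # keep the improving move
                phases[k] -= delta      # otherwise revert and try the other
    return phases

start = [0.0, 1.3, 2.9, 4.1, 0.7]       # arbitrary initial piston errors
locked = extremum_lock(start)
final_I = combined_intensity(locked)     # approaches N**2 = 25 when phased
```

In the real system the metric would come from the zero-order interference detector and the moves would drive the segmented mirror; generating a DHB rather than a bright spot changes the target phase pattern, not the search principle.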

  13. Quantitative image analysis for investigating cell-matrix interactions

    NASA Astrophysics Data System (ADS)

    Burkel, Brian; Notbohm, Jacob

    2017-07-01

    The extracellular matrix provides both chemical and physical cues that control cellular processes such as migration, division, differentiation, and cancer progression. Cells can mechanically alter the matrix by applying forces that result in matrix displacements, which in turn may localize to form dense bands along which cells may migrate. To quantify the displacements, we use confocal microscopy and fluorescent labeling to acquire high-contrast images of the fibrous material. Using a technique for quantitative image analysis called digital volume correlation, we then compute the matrix displacements. Our experimental technology offers a means to quantify matrix mechanics and cell-matrix interactions. We are now using these experimental tools to modulate mechanical properties of the matrix to study cell contraction and migration.
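The core step of digital volume (or image) correlation is finding, for each subset of the reference image, the displacement that best matches it in the deformed image. A minimal 2D integer-pixel sketch (the real method works in 3D with subpixel interpolation) using a deterministic stand-in pattern:

```python
def make_pattern(w, h):
    """Deterministic speckle-like pattern (stand-in for fluorescent fibres)."""
    return [[(i * 37 + j * 91 + (i * j) % 13) % 256 for j in range(w)]
            for i in range(h)]

def ssd(ref, cur, top, left, size, du, dv):
    """Sum of squared differences between a reference subset and the current
    image subset displaced by (du, dv)."""
    s = 0
    for i in range(size):
        for j in range(size):
            s += (ref[top + i][left + j] - cur[top + i + du][left + j + dv]) ** 2
    return s

def track_subset(ref, cur, top, left, size, search=5):
    """Brute-force integer-pixel correlation search over a +/-search window."""
    best = None
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            err = ssd(ref, cur, top, left, size, du, dv)
            if best is None or err < best[0]:
                best = (err, du, dv)
    return best[1], best[2]

ref = make_pattern(40, 40)
# "Deformed" image: the whole pattern rigidly shifted by (2, 3) pixels
cur = [[ref[(i - 2) % 40][(j - 3) % 40] for j in range(40)] for i in range(40)]
disp = track_subset(ref, cur, 15, 15, 8)
```

Repeating the search on a grid of subsets yields the full displacement field from which matrix strains and cell tractions are derived.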

  14. A variable resolution x-ray detector for computed tomography: I. Theoretical basis and experimental verification.

    PubMed

    DiBianca, F A; Gupta, V; Zeman, H D

    2000-08-01

    A computed tomography imaging technique called variable resolution x-ray (VRX) detection provides detector resolution ranging from that of clinical body scanning to that of microscopy (1 cy/mm to 100 cy/mm). The VRX detection technique is based on a new principle denoted as "projective compression" that allows the detector resolution element to scale proportionally to the image field size. Two classes of VRX detector geometry are considered. Theoretical aspects related to x-ray physics and data sampling are presented. Measured resolution parameters (line-spread function and modulation-transfer function) are presented and discussed. A VRX image that resolves a pair of 50 micron tungsten hairs spaced 30 microns apart is shown.
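The "projective compression" principle can be illustrated with one line of trigonometry: tilting the detector array by an angle θ to the beam shrinks the apparent cell pitch to cell·sin(θ), so resolution improves by 1/sin(θ) as the field shrinks. The cell pitch and angles below are illustrative values, not the VRX detector's specifications.

```python
import math

def effective_cell_size(cell_mm, angle_deg):
    """Projective compression: the apparent width of a detector cell tilted
    at angle theta to the incoming rays is cell * sin(theta)."""
    return cell_mm * math.sin(math.radians(angle_deg))

cell = 0.5  # physical cell pitch in mm (illustrative)
wide_field = effective_cell_size(cell, 90.0)   # face-on: full 0.5 mm pitch
microscopy = effective_cell_size(cell, 0.6)    # grazing: ~5 um apparent pitch
ratio = wide_field / microscopy                # resolution gain ~ 1/sin(theta)
```

With these numbers the gain is roughly two orders of magnitude, consistent with the abstract's 1 cy/mm to 100 cy/mm range.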

  15. Double differential neutron spectra generated by the interaction of a 12 MeV/nucleon 36S beam on a thick natCu target

    NASA Astrophysics Data System (ADS)

    Trinh, N. D.; Fadil, M.; Lewitowicz, M.; Ledoux, X.; Laurent, B.; Thomas, J.-C.; Clerc, T.; Desmezières, V.; Dupuis, M.; Madeline, A.; Dessay, E.; Grinyer, G. F.; Grinyer, J.; Menard, N.; Porée, F.; Achouri, L.; Delaunay, F.; Parlog, M.

    2018-07-01

    Double differential neutron spectra (energy, angle) originating from a thick natCu target bombarded by a 12 MeV/nucleon 36S16+ beam were measured by the activation method and the Time-of-flight technique at the Grand Accélérateur National d'Ions Lourds (GANIL). A neutron spectrum unfolding algorithm combining the SAND-II iterative method and Monte-Carlo techniques was developed for the analysis of the activation results that cover a wide range of neutron energies. It was implemented into a graphical user interface program, called GanUnfold. The experimental neutron spectra are compared to Monte-Carlo simulations performed using the PHITS and FLUKA codes.
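The SAND-II part of the unfolding can be sketched with its commonly quoted multiplicative update: each group flux is scaled by a weighted log-ratio of measured to calculated activities. The 3×3 response matrix and fluxes below are made-up toy values, not GANIL data, and GanUnfold's exact weighting may differ.

```python
import math

def sand_ii(response, a_meas, phi0, iters=300):
    """SAND-II-style iterative unfolding: a_i = sum_j R_ij * phi_j, with each
    phi_j updated by the activity-weighted geometric mean of a_meas/a_calc."""
    phi = list(phi0)
    n_det, n_grp = len(response), len(phi0)
    for _ in range(iters):
        a_calc = [sum(response[i][j] * phi[j] for j in range(n_grp))
                  for i in range(n_det)]
        for j in range(n_grp):
            num = den = 0.0
            for i in range(n_det):
                w = response[i][j] * phi[j] / a_calc[i]   # contribution weight
                num += w * math.log(a_meas[i] / a_calc[i])
                den += w
            if den > 0:
                phi[j] *= math.exp(num / den)
    return phi

# Toy problem: 3 activation reactions x 3 energy groups (responses made up)
R = [[1.0, 0.2, 0.0],
     [0.1, 1.0, 0.3],
     [0.0, 0.2, 1.0]]
phi_true = [2.0, 5.0, 1.0]
A = [sum(R[i][j] * phi_true[j] for j in range(3)) for i in range(3)]

phi = sand_ii(R, A, [1.0, 1.0, 1.0])
A_rec = [sum(R[i][j] * phi[j] for j in range(3)) for i in range(3)]
```

The Monte-Carlo aspect mentioned in the abstract would enter through sampling of the response functions and measured activities to propagate uncertainties through this iteration.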

  16. Heike Kamerlingh Onnes: Master of Experimental Technique and Quantitative Research

    NASA Astrophysics Data System (ADS)

    Reif-Acherman, Simón

    Heike Kamerlingh Onnes (1853-1926), born a century and a half ago, was a major protagonist in the so-called Second Golden Age of Dutch Science. He devoted his career to the emerging field of low-temperature physics. His particular concern was to test the theories of his older compatriot Johannes Diderik van der Waals (1837-1923) by creating a style of research that was characterized by meticulous planning, precise measurement, and constant improvement of techniques and instruments. He made numerous contributions to low-temperature physics, but I focus on his liquefaction of helium, for which he received the Nobel Prize in Physics for 1913, and on his discovery of superconductivity. He became known internationally as le gentleman du zéro absolu.

  17. HMMBinder: DNA-Binding Protein Prediction Using HMM Profile Based Features.

    PubMed

    Zaman, Rianon; Chowdhury, Shahana Yasmin; Rashid, Mahmood A; Sharma, Alok; Dehzangi, Abdollah; Shatabda, Swakkhar

    2017-01-01

    DNA-binding proteins often play an important role in various processes within the cell. Over the last decade, a wide range of classification algorithms and feature extraction techniques have been used to solve this problem. In this paper, we propose a novel DNA-binding protein prediction method called HMMBinder. HMMBinder uses monogram and bigram features extracted from the HMM profiles of the protein sequences. To the best of our knowledge, this is the first application of HMM profile based features to the DNA-binding protein prediction problem. We applied Support Vector Machines (SVM) as the classification technique in HMMBinder. Our method was tested on standard benchmark datasets. We experimentally show that our method outperforms the state-of-the-art methods found in the literature.
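Monogram and bigram profile features are standard constructions: per-state averages of the profile columns, and averaged products of profile values at consecutive positions. HMMBinder's exact normalization is not given in the abstract, so the sketch below uses the usual definitions on a tiny toy profile (4 states instead of the 20 of a real HMM profile).

```python
def monogram(profile):
    """Monogram features: per-state average of the profile rows
    (one feature per state)."""
    L, n = len(profile), len(profile[0])
    return [sum(row[a] for row in profile) / L for a in range(n)]

def bigram(profile):
    """Bigram features: averaged products of profile values at consecutive
    positions, one feature per ordered pair of states (n*n features)."""
    L, n = len(profile), len(profile[0])
    feats = []
    for a in range(n):
        for b in range(n):
            s = sum(profile[i][a] * profile[i + 1][b] for i in range(L - 1))
            feats.append(s / (L - 1))
    return feats

# Toy profile: 5 residues x 4 states, each row a probability distribution
P = [[0.70, 0.10, 0.10, 0.10],
     [0.10, 0.70, 0.10, 0.10],
     [0.25, 0.25, 0.25, 0.25],
     [0.10, 0.10, 0.70, 0.10],
     [0.10, 0.10, 0.10, 0.70]]
mono = monogram(P)   # 4 features; 20 for a real profile
bi = bigram(P)       # 16 features; 400 for a real profile
```

The concatenated fixed-length vector (here 4 + 16 values, 20 + 400 in the real case) is what gets fed to the SVM regardless of sequence length.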

  18. Cycles in metabolism and heat loss

    NASA Technical Reports Server (NTRS)

    Annis, J. F.; Troutman, S. J.; Webb, P.

    1974-01-01

    Using calorimetric techniques, subjects' metabolism, thermoregulation, and body temperatures were monitored continuously for 24-hour days, using three types of experimental routines. A water cooling garment (WCG) was used for direct calorimetry, while partitional calorimetry was used to establish a non-suited comparison for one of the routines. In this replicated routine, called the quiet day, the subjects were sedentary throughout the daytime hours and slept normally at night. Results indicate that the WCG may act to reduce 24-hour total oxygen consumption (VO2) or heat production, possibly due to the lowered energy cost of thermoregulation.

  19. Restoration of out-of-focus images based on circle of confusion estimate

    NASA Astrophysics Data System (ADS)

    Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto

    2002-11-01

    In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by a typical CCD/CMOS sensor. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique. This edge detection is carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through the application of a new inverse filtering technique. This algorithm gives sharp images while reducing ringing and crisping artifacts over a wider range of frequencies. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques found in the literature.
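Working directly on the green samples of the mosaic can be sketched as follows; the blur metric here (largest gradient between horizontally adjacent green samples) is a simplified stand-in for the paper's block-wise edge analysis, and the GRBG layout is one common Bayer arrangement.

```python
def green_pixels(bayer, pattern="GRBG"):
    """Collect the green samples of a Bayer (CFA) mosaic. In the GRBG layout
    green sits wherever (row + col) is even."""
    h, w = len(bayer), len(bayer[0])
    return [(i, j, bayer[i][j]) for i in range(h) for j in range(w)
            if (i + j) % 2 == 0]

def max_horizontal_gradient(bayer):
    """Toy sharpness metric on the green channel: compare horizontally
    adjacent green samples, which sit 2 columns apart in the mosaic."""
    h, w = len(bayer), len(bayer[0])
    g = 0
    for i in range(h):
        for j in range(w - 2):
            if (i + j) % 2 == 0:
                g = max(g, abs(bayer[i][j + 2] - bayer[i][j]))
    return g

# A sharp vertical edge vs a defocus-blurred one (toy 4x8 mosaics)
sharp = [[0, 0, 0, 0, 255, 255, 255, 255] for _ in range(4)]
soft = [[0, 0, 60, 120, 180, 240, 255, 255] for _ in range(4)]
edge_sharp = max_horizontal_gradient(sharp)
edge_soft = max_horizontal_gradient(soft)
```

A lower maximum gradient across the sharpest available edge indicates a wider circle of confusion, which is the quantity the restoration filter is then tuned to.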

  20. Railway crossing risk area detection using linear regression and terrain drop compensation techniques.

    PubMed

    Chen, Wen-Yuan; Wang, Mei; Fu, Zhou-Xing

    2014-06-16

    Most railway accidents happen at railway crossings. Therefore, detecting humans or objects present in the risk area of a railway crossing, and thus preventing accidents, is an important task. In this paper, three strategies are used to detect the risk area of a railway crossing: (1) we use a terrain drop compensation (TDC) technique to solve the problem of the concavity of railway crossings; (2) we use a linear regression technique to predict the position and length of an object from image processing; (3) we have developed a novel strategy called calculating local maximum Y-coordinate object points (CLMYOP) to obtain the ground points of the object. In addition, image preprocessing is also applied to filter out the noise and successfully improve the object detection. From the experimental results, it is demonstrated that our scheme is an effective and corrective method for the detection of railway crossing risk areas.
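The linear regression step can be sketched with the closed-form ordinary least squares fit; the pixel-row-to-distance calibration values below are illustrative, not taken from the paper.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Toy calibration: image row of an object's base vs its distance along the
# crossing (values illustrative only)
rows = [120, 160, 200, 240, 280]
metres = [20.0, 15.0, 10.0, 5.0, 0.0]
a, b = fit_line(rows, metres)
pred = a * 220 + b   # estimated position of an object detected at row 220
```

The same fit, applied to the ground points extracted by CLMYOP, turns pixel coordinates into physical positions and lengths inside the risk area.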

  2. Single-molecule fluorescence microscopy review: shedding new light on old problems

    PubMed Central

    Shashkova, Sviatlana

    2017-01-01

    Fluorescence microscopy is an invaluable tool in the biosciences, a genuine workhorse technique offering exceptional contrast in conjunction with high specificity of labelling with relatively minimal perturbation to biological samples compared with many competing biophysical techniques. Improvements in detector and dye technologies coupled to advances in image analysis methods have fuelled recent development towards single-molecule fluorescence microscopy, which can utilize light microscopy tools to enable the faithful detection and analysis of single fluorescent molecules used as reporter tags in biological samples. For example, the discovery of GFP, initiating the so-called ‘green revolution’, has pushed experimental tools in the biosciences to a completely new level of functional imaging of living samples, culminating in single fluorescent protein molecule detection. Today, fluorescence microscopy is an indispensable tool in single-molecule investigations, providing a high signal-to-noise ratio for visualization while still retaining the key features in the physiological context of native biological systems. In this review, we discuss some of the recent discoveries in the life sciences which have been enabled using single-molecule fluorescence microscopy, paying particular attention to the so-called ‘super-resolution’ fluorescence microscopy techniques in live cells, which are at the cutting-edge of these methods. In particular, how these tools can reveal new insights into long-standing puzzles in biology: old problems, which have been impossible to tackle using other more traditional tools until the emergence of new single-molecule fluorescence microscopy techniques. PMID:28694303

  3. Assessment of analytical and experimental techniques utilized in conducting plume technology tests 575 and 593. [exhaust flow simulation (wind tunnel tests) of scale model Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Baker, L. R.; Sulyma, P. R.; Tevepaugh, J. A.; Penny, M. M.

    1976-01-01

    Since exhaust plumes affect vehicle base environment (pressure and heat loads) and the orbiter vehicle aerodynamic control surface effectiveness, an intensive program involving detailed analytical and experimental investigations of the exhaust plume/vehicle interaction was undertaken as a pertinent part of the overall space shuttle development program. The program, called the Plume Technology program, has as its objective the determination of the criteria for simulating rocket engine (in particular, space shuttle propulsion system) plume-induced aerodynamic effects in a wind tunnel environment. The comprehensive experimental program was conducted using test facilities at NASA's Marshall Space Flight Center and Ames Research Center. A post-test examination of some of the experimental results obtained from NASA-MSFC's 14 x 14-inch trisonic wind tunnel is presented. A description is given of the test facility, simulant gas supply system, nozzle hardware, test procedure and test matrix. Analysis of exhaust plume flow fields and comparison of analytical and experimental exhaust plume data are presented.

  4. Ab Initio Studies of Stratospheric Ozone Depletion Chemistry

    NASA Technical Reports Server (NTRS)

    Lee, Timothy J.; Head-Gordon, Martin; Langhoff, Stephen R. (Technical Monitor)

    1995-01-01

    An overview of the current understanding of ozone depletion chemistry, particularly as regards the formation of the so-called Antarctic ozone hole, will be presented, together with an outline of how ab initio quantum chemistry can be used to further our understanding of stratospheric chemistry. The ability of modern state-of-the-art ab initio quantum chemical techniques to characterize reliably the gas-phase molecular structure, vibrational spectrum, electronic spectrum, and thermal stability of fluorine, chlorine, bromine and nitrogen oxide species will be demonstrated by the presentation of some example studies. The ab initio results will be shown to be in excellent agreement with the available experimental data, and where the experimental data are either not known or are inconclusive, the theoretical results are shown to fill in the gaps and to resolve experimental controversies. In addition, ab initio studies characterizing the electronic spectra and excited electronic states of halogen oxide species will also be presented. Again, where available, the ab initio results are compared to experimental observations and are used to aid in the interpretation of experimental studies.

  5. A data-hiding technique with authentication, integration, and confidentiality for electronic patient records.

    PubMed

    Chao, Hui-Mei; Hsu, Chin-Ming; Miaou, Shaou-Gang

    2002-03-01

    A data-hiding technique called the "bipolar multiple-number base" was developed to provide capabilities of authentication, integration, and confidentiality for an electronic patient record (EPR) transmitted among hospitals through the Internet. The proposed technique is capable of hiding those EPR related data such as diagnostic reports, electrocardiogram, and digital signatures from doctors or a hospital into a mark image. The mark image could be the mark of a hospital used to identify the origin of an EPR. Those digital signatures from doctors and a hospital could be applied for the EPR authentication. Thus, different types of medical data can be integrated into the same mark image. The confidentiality is ultimately achieved by decrypting the EPR related data and digital signatures with an exact copy of the original mark image. The experimental results validate the integrity and the invisibility of the hidden EPR related data. This newly developed technique allows all of the hidden data to be separated and restored perfectly by authorized users.
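The bipolar multiple-number base encoding itself is not specified in the abstract, so as a generic illustration of the same idea, here is the simplest analogue: hiding record bytes in the least-significant bits of a mark image, which keeps the hidden data invisible (each pixel changes by at most 1) and perfectly recoverable by an authorized reader. All names and values are illustrative.

```python
def embed(pixels, message):
    """Hide message bytes in the least-significant bits of pixel values,
    LSB-first within each byte; needs 8 cover pixels per hidden byte."""
    bits = [(byte >> k) & 1 for byte in message for k in range(8)]
    assert len(bits) <= len(pixels), "cover image too small"
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite only the LSB
    return out

def extract(pixels, n_bytes):
    """Recover n_bytes hidden bytes from the pixel LSBs."""
    msg = []
    for b in range(n_bytes):
        byte = 0
        for k in range(8):
            byte |= (pixels[8 * b + k] & 1) << k
        msg.append(byte)
    return bytes(msg)

cover = [(i * 7) % 256 for i in range(256)]   # toy 16x16 "mark image"
record = b"HR:72 BP:118/76"                   # toy EPR fragment
stego = embed(cover, record)
recovered = extract(stego, len(record))
```

In the paper's scheme the hidden payload would also carry the digital signatures, and confidentiality comes from needing the original mark image to decrypt, which this plain-LSB sketch does not attempt.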

  6. Real-time Electrophysiology: Using Closed-loop Protocols to Probe Neuronal Dynamics and Beyond

    PubMed Central

    Linaro, Daniele; Couto, João; Giugliano, Michele

    2015-01-01

    Experimental neuroscience is witnessing an increased interest in the development and application of novel and often complex, closed-loop protocols, where the stimulus applied depends in real-time on the response of the system. Recent applications range from the implementation of virtual reality systems for studying motor responses both in mice [1] and in zebrafish [2], to control of seizures following cortical stroke using optogenetics [3]. A key advantage of closed-loop techniques resides in the capability of probing higher dimensional properties that are not directly accessible or that depend on multiple variables, such as neuronal excitability [4] and reliability, while at the same time maximizing the experimental throughput. In this contribution and in the context of cellular electrophysiology, we describe how to apply a variety of closed-loop protocols to the study of the response properties of pyramidal cortical neurons, recorded intracellularly with the patch clamp technique in acute brain slices from the somatosensory cortex of juvenile rats. As no commercially available or open source software provides all the features required for efficiently performing the experiments described here, a new software toolbox called LCG [5] was developed, whose modular structure maximizes reuse of computer code and facilitates the implementation of novel experimental paradigms. Stimulation waveforms are specified using a compact meta-description and full experimental protocols are described in text-based configuration files. Additionally, LCG has a command-line interface that is suited for repetition of trials and automation of experimental protocols. PMID:26132434

  7. Time-frequency and advanced frequency estimation techniques for the investigation of bat echolocation calls.

    PubMed

    Kopsinis, Yannis; Aboutanios, Elias; Waters, Dean A; McLaughlin, Steve

    2010-02-01

    In this paper, techniques for the time-frequency analysis and investigation of bat echolocation calls are studied. In particular, enhanced-resolution techniques are developed and/or used in this specific context for the first time. Compared to traditional time-frequency representation methods, the proposed techniques are better able to reveal previously unseen features in the structure of bat echolocation calls. It should be emphasized that although the study focuses on bat echolocation recordings, the results are more general and applicable to many other types of signals.
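    As a point of reference, the traditional time-frequency representation such papers improve upon is the short-time Fourier transform (spectrogram), sketched minimally here; the signal, window length, and chirp parameters are invented, and real echolocation calls are sampled far faster:

```python
import cmath
import math

# Minimal short-time Fourier transform (STFT) sketch: slide a window along
# the signal and take a DFT magnitude in each window. Parameters invented.
def stft(signal, win=32, hop=16):
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        spec = [abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                        for n in range(win)))
                for k in range(win // 2)]  # magnitudes up to Nyquist
        frames.append(spec)
    return frames  # frames[time][frequency bin]

# Toy downward frequency sweep, loosely mimicking an FM echolocation call
sig = [math.sin(2 * math.pi * (0.4 * n - 0.15 * n * n / 256))
       for n in range(256)]
frames = stft(sig)
peaks = [max(range(len(f)), key=f.__getitem__) for f in frames]
print(peaks[0] > peaks[-1])  # dominant frequency falls over time
```

    The limited joint time-frequency resolution of this fixed-window approach is exactly what the enhanced-resolution techniques in the paper are designed to overcome.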

  8. Use of simulated experiments for material characterization of brittle materials subjected to high strain rate dynamic tension

    PubMed Central

    Saletti, Dominique

    2017-01-01

    Rapid progress in ultra-high-speed imaging has allowed material properties to be studied at high strain rates by applying full-field measurements and inverse identification methods. Nevertheless, the sensitivity of these techniques still requires a better understanding, since various extrinsic factors present during an actual experiment make it difficult to separate different sources of errors that can significantly affect the quality of the identified results. This study presents a methodology using simulated experiments to investigate the accuracy of the so-called spalling technique (used to study tensile properties of concrete subjected to high strain rates) by numerically simulating the entire identification process. The experimental technique uses the virtual fields method and the grid method. The methodology consists of reproducing the recording process of an ultra-high-speed camera by generating sequences of synthetically deformed images of a sample surface, which are then analysed using the standard tools. The investigation of the uncertainty of the identified parameters, such as Young's modulus along with the stress–strain constitutive response, is addressed by introducing the most significant user-dependent parameters (i.e. acquisition speed, camera dynamic range, grid sampling, blurring), proving that the used technique can be an effective tool for error investigation. This article is part of the themed issue ‘Experimental testing and modelling of brittle materials at high strain rates’. PMID:27956505

  9. Experimental study on behaviors of dielectric elastomer based on acrylonitrile butadiene rubber

    NASA Astrophysics Data System (ADS)

    An, Kuangjun; Chuc, Nguyen Huu; Kwon, Hyeok Yong; Phuc, Vuong Hong; Koo, Jachoon; Lee, Youngkwan; Nam, Jaedo; Choi, Hyouk Ryeol

    2010-04-01

    Previously, a dielectric elastomer based on acrylonitrile butadiene rubber (NBR), called synthetic elastomer, was reported by our group. Its advantage is that its characteristics can be tailored to the required performance, making it applicable to a wide variety of applications. In this paper, we address the effects of additives and vulcanization conditions on the overall performance of the synthetic elastomer. In the present work, factors that affect performance are identified, e.g., additives such as dioctyl phthalate (DOP) and barium titanate (BaTiO3), and vulcanization conditions such as dicumyl peroxide (DCP) content and cross-linking times. We also describe how the performance can be optimized using the design of experiments (DOE) technique, and the experimental results are analyzed by analysis of variance (ANOVA).
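    As a minimal illustration of the kind of DOE main-effect analysis mentioned above, a two-level full-factorial calculation can be sketched as follows; the factor levels and response values are invented for the example:

```python
# Two-level full-factorial DOE sketch: estimate each factor's main effect as
# the mean response at its high level minus the mean at its low level.
# Factor names (DOP, DCP) follow the abstract; the numbers are invented.
runs = [  # (DOP level, DCP level, measured response)
    (-1, -1, 10.0),
    (+1, -1, 14.0),
    (-1, +1, 11.0),
    (+1, +1, 15.0),
]

def main_effect(runs, idx):
    hi = [r[2] for r in runs if r[idx] == +1]
    lo = [r[2] for r in runs if r[idx] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print(main_effect(runs, 0), main_effect(runs, 1))  # → 4.0 1.0
```

    In a real study, ANOVA would then partition the response variance to judge which of these effects are statistically significant.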

  10. A B-TOF mass spectrometer for the analysis of ions with extreme high start-up energies.

    PubMed

    Lezius, M

    2002-03-01

    Weak magnetic deflection is combined with two-acceleration-stage time-of-flight mass spectrometry and subsequent position-sensitive ion detection. The experimental method, called B-TOF mass spectrometry, is described with respect to its theoretical background and some experimental results. It is demonstrated that the technique has distinct advantages over other approaches, particularly for the identification and analysis of highly energetic ions with an initially large energy spread (up to 1 MeV) and high charge states (up to 30+). Such energetic ions are common in intense laser-matter interaction processes, including laser ablation, laser-cluster and laser-molecule interactions, and fast-particle and x-ray generation from laser-heated plasma. Copyright 2002 John Wiley & Sons, Ltd.

  11. Fluid mechanics of slurry flow through the grinding media in ball mills

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Songfack, P.K.; Rajamani, R.K.

    1995-12-31

    The slurry transport within the ball mill greatly influences the mill holdup, residence time, breakage rate, and hence the power draw and the particle size distribution of the mill product. However, residence-time distribution and holdup in industrial mills could not be predicted a priori. Indeed, it is impossible to determine the slurry loading in continuously operating mills by direct measurement, especially in industrial mills. In this paper, the slurry transport problem is solved using the principles of fluid mechanics. First, the motion of the ball charge and its expansion are predicted by a technique called the discrete element method. Then the slurry flow through the porous ball charge is tackled with a fluid-flow technique called the marker and cell method. This may be the only numerical technique capable of tracking the slurry free surface as it fluctuates with the motion of the ball charge. The result is a prediction of the slurry profile in both the radial and axial directions. Hence, it leads to the detailed description of slurry mass and ball charge within the mill. The model predictions are verified with pilot-scale experimental work. This novel approach based on the physics of fluid flow is devoid of any empiricism. It is shown that the holdup of industrial mills at a given feed percent solids can be predicted successfully.

  12. Tomographic Aperture-Encoded Particle Tracking Velocimetry: A New Approach to Volumetric PIV

    NASA Astrophysics Data System (ADS)

    Troolin, Dan; Boomsma, Aaron; Lai, Wing; Pothos, Stamatios; Fluid Mechanics Research Instruments Team

    2016-11-01

    Volumetric velocity fields are useful in a wide variety of fluid mechanics applications. Several types of three-dimensional imaging methods have been used in the past with varying degrees of success, for example, 3D PTV (Maas et al., 1993), DDPIV (Pereira et al., 2006), Tomographic PIV (Elsinga, 2006), and V3V (Troolin and Longmire, 2009), among others. Each of these techniques has shown advantages and disadvantages in different areas. With the advent of higher-resolution, lower-noise cameras with higher stability levels, new techniques are emerging that combine the advantages of the existing techniques. This talk describes a new technique called Tomographic Aperture-Encoded Particle Tracking Velocimetry (TAPTV), in which segmented triangulation and diameter tolerance are used to achieve three-dimensional particle tracking with extremely high particle densities (on the order of ppp = 0.2 or higher) without the drawbacks normally associated with ghost particles (for example in TomoPIV). The results are highly spatially resolved data with very fast processing times. A detailed explanation of the technique as well as plots, movies, and experimental considerations will be discussed.

  13. Thermal Characterization, Using the Photopyroelectric Technique, of Liquids Used in the Automobile Industry

    NASA Astrophysics Data System (ADS)

    Cervantes-Espinosa, L. M.; Castillo-Alvarado, F. de L.; Lara-Hernández, G.; Cruz-Orea, A.; Mendoza-Alvarez, J. G.; Valcárcel, J. P.; García-Quiroz, A.

    2012-11-01

    Thermal properties of liquids used in the automobile industry such as engine oil, antifreeze, and a liquid for windshield wipers were obtained using the photopyroelectric (PPE) technique. The inverse PPE configuration was used in order to obtain the thermal effusivity of the liquid samples. The theoretical equation for the PPE signal in this configuration, as a function of the incident light modulation frequency, was fitted to the experimental data in order to obtain the thermal effusivity of these samples. Also, the back PPE configuration was used to obtain the thermal diffusivity of these liquids; this thermal parameter was obtained by fitting the theoretical equation for this configuration, as a function of the sample thickness (called the thermal wave resonator cavity), to the experimental data. All measurements were done at room temperature. A complete thermal characterization of these liquids used in the automobile industry was achieved by the relationship between the obtained thermal diffusivities and thermal effusivities with their thermal conductivities and volumetric heat capacities. The obtained results are compared with the thermal properties of similar liquids.

  14. Linear increases in carbon nanotube density through multiple transfer technique.

    PubMed

    Shulaker, Max M; Wei, Hai; Patil, Nishant; Provine, J; Chen, Hong-Yu; Wong, H-S P; Mitra, Subhasish

    2011-05-11

    We present a technique to increase carbon nanotube (CNT) density beyond the as-grown CNT density. We perform multiple transfers, whereby we transfer CNTs from several growth wafers onto the same target surface, thereby linearly increasing CNT density on the target substrate. This process, called transfer of nanotubes through multiple sacrificial layers, is highly scalable, and we demonstrate linear CNT density scaling up to 5 transfers. We also demonstrate that this linear CNT density increase results in an ideal linear increase in drain-source currents of carbon nanotube field effect transistors (CNFETs). Experimental results demonstrate that CNT density can be improved from 2 to 8 CNTs/μm, accompanied by an increase in drain-source CNFET current from 4.3 to 17.4 μA/μm.

  15. Taming the Wild: A Unified Analysis of Hogwild!-Style Algorithms.

    PubMed

    De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher

    2015-12-01

    Stochastic gradient descent (SGD) is a ubiquitous algorithm for a variety of machine learning problems. Researchers and industry have developed several techniques to optimize SGD's runtime performance, including asynchronous execution and reduced precision. Our main result is a martingale-based analysis that enables us to capture the rich noise models that may arise from such techniques. Specifically, we use our new analysis in three ways: (1) we derive convergence rates for the convex case (Hogwild!) with relaxed assumptions on the sparsity of the problem; (2) we analyze asynchronous SGD algorithms for non-convex matrix problems including matrix completion; and (3) we design and analyze an asynchronous SGD algorithm, called Buckwild!, that uses lower-precision arithmetic. We show experimentally that our algorithms run efficiently for a variety of problems on modern hardware.
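    The asynchronous, lock-free update pattern analyzed in this paper can be sketched in a minimal single-machine example; the toy regression problem, step size, and thread count below are invented, and Python threads merely illustrate the idea (a real Hogwild! implementation runs truly parallel updates on sparse problems):

```python
import random
import threading

# Hogwild!-style sketch: several workers update shared weights with SGD
# steps and no locks. Toy problem: fit y = 2*x with a single weight.
data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w = [0.0]  # shared model state, mutated without synchronization

def worker(steps, lr=0.05):
    rng = random.Random()
    for _ in range(steps):
        x, y = rng.choice(data)               # pick a random sample
        grad = 2.0 * (w[0] * x - y) * x       # gradient of (w*x - y)^2
        w[0] -= lr * grad                     # lock-free update

threads = [threading.Thread(target=worker, args=(2000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(round(w[0], 2))  # converges near 2.0 despite unsynchronized updates
```

    Occasional stale reads (lost updates) do not prevent convergence here because every update still contracts toward the same fixed point, which is the intuition the paper's martingale analysis makes rigorous under much weaker assumptions.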

  16. Alternate methodologies to experimentally investigate shock initiation properties of explosives

    NASA Astrophysics Data System (ADS)

    Svingala, Forrest R.; Lee, Richard J.; Sutherland, Gerrit T.; Benjamin, Richard; Boyle, Vincent; Sickels, William; Thompson, Ronnie; Samuels, Phillip J.; Wrobel, Erik; Cornell, Rodger

    2017-01-01

    Reactive flow models are desired for new explosive formulations early in the development stage. Traditionally, these models are parameterized by carefully-controlled 1-D shock experiments, including gas-gun testing with embedded gauges and wedge testing with explosive plane wave lenses (PWL). These experiments are easy to interpret due to their 1-D nature, but are expensive to perform and cannot be performed at all explosive test facilities. This work investigates alternative methods to probe shock-initiation behavior of new explosives using widely-available pentolite gap test donors and simple time-of-arrival type diagnostics. These experiments can be performed at a low cost at most explosives testing facilities. This allows experimental data to parameterize reactive flow models to be collected much earlier in the development of an explosive formulation. However, the fundamentally 2-D nature of these tests may increase the modeling burden in parameterizing these models and reduce general applicability. Several variations of the so-called modified gap test were investigated and evaluated for suitability as an alternative to established 1-D gas gun and PWL techniques. At least partial agreement with 1-D test methods was observed for the explosives tested, and future work is planned to scope the applicability and limitations of these experimental techniques.

  17. Mapping Base Modifications in DNA by Transverse-Current Sequencing

    NASA Astrophysics Data System (ADS)

    Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.

    2018-02-01

    Sequencing DNA modifications and lesions, such as methylation of cytosine and oxidation of guanine, is even more important and challenging than sequencing the genome itself. The traditional methods for detecting DNA modifications are either insensitive to these modifications or require additional processing steps to identify a particular type of modification. Transverse-current sequencing in nanopores can potentially identify the canonical bases and base modifications in the same run. In this work, we demonstrate that the most common DNA epigenetic modifications and lesions can be detected with any predefined accuracy based on their tunneling current signature. Our results are based on simulations of the nanopore tunneling current through DNA molecules, calculated using nonequilibrium electron-transport methodology within an effective multiorbital model derived from first-principles calculations, followed by a base-calling algorithm accounting for neighbor current-current correlations. This methodology can be integrated with existing experimental techniques to improve base-calling fidelity.

  18. How nonlinear optics can merge interferometry for high resolution imaging

    NASA Astrophysics Data System (ADS)

    Ceus, D.; Reynaud, F.; Tonello, A.; Delage, L.; Grossard, L.

    2017-11-01

    High-resolution stellar interferometers are powerful and efficient instruments for improving our knowledge of the Universe through the spatial coherence analysis of light. For this purpose, the optical fields collected by each telescope Ti are mixed together. From the interferometric pattern, two quantities, the contrast Cij and the phase φij, are extracted. These lead to the so-called complex visibility Vij = Cij exp(jφij). For each telescope doublet TiTj, it is possible to obtain a complex visibility Vij. The Van Cittert–Zernike theorem gives a relationship between the intensity distribution of the observed object and the complex visibility. The combination of the acquired complex visibilities and a reconstruction algorithm allows image reconstruction. To avoid many technical difficulties related to infrared optics (component transmission, thermal noise, thermal cooling…), our team proposes to explore the possibility of using nonlinear optical techniques. This is a promising alternative for detecting infrared optical signals. We experimentally demonstrate that frequency conversion does not introduce additional bias into the interferometric data supplied by a stellar interferometer. In this presentation, we report on wavelength conversion of the light collected by each telescope from the infrared domain to the visible. The interferometric pattern is observed in the visible domain with our so-called upconversion interferometer. Thereby, one can benefit from mature optical components mainly used in optical telecommunications (waveguides, couplers, multiplexers…) and efficient low-noise detection schemes up to the single-photon counting level.
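    The extraction of the complex visibility Vij = Cij exp(jφij) from a fringe pattern can be illustrated with a minimal demodulation sketch; the fringe frequency, contrast, and phase values are invented for the example:

```python
import cmath
import math

# Simulate a fringe pattern I(n) = 1 + C*cos(2*pi*f*n/N + phi), then recover
# the complex visibility V = C*exp(j*phi) by demodulating at the fringe
# frequency (a single DFT bin). All parameter values are invented.
N, f = 256, 8                    # samples and whole fringe cycles
C_true, phi_true = 0.6, 0.8      # contrast and phase to be recovered
I = [1.0 + C_true * math.cos(2 * math.pi * f * n / N + phi_true)
     for n in range(N)]

# mean of I(n) * e^{-j 2 pi f n / N}; the DC term and the -f sideband
# average to zero over whole cycles, leaving (C/2) * e^{j phi}
z = sum(I[n] * cmath.exp(-2j * math.pi * f * n / N) for n in range(N)) / N
V = 2 * z                        # factor 2 undoes the cosine's 1/2

print(round(abs(V), 3), round(cmath.phase(V), 3))  # contrast and phase
```

    The magnitude of V returns the contrast 0.6 and its argument the phase 0.8, mirroring how a visibility point is obtained for each telescope pair.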

  19. Inverse dynamic substructuring using the direct hybrid assembly in the frequency domain

    NASA Astrophysics Data System (ADS)

    D'Ambrogio, Walter; Fregolent, Annalisa

    2014-04-01

    The paper deals with the identification of the dynamic behaviour of a structural subsystem, starting from the known dynamic behaviour of both the coupled system and the remaining part of the structural system (residual subsystem). This topic is also known as decoupling problem, subsystem subtraction or inverse dynamic substructuring. Whenever it is necessary to combine numerical models (e.g. FEM) and test models (e.g. FRFs), one speaks of experimental dynamic substructuring. Substructure decoupling techniques can be classified as inverse coupling or direct decoupling techniques. In inverse coupling, the equations describing the coupling problem are rearranged to isolate the unknown substructure instead of the coupled structure. On the contrary, direct decoupling consists in adding to the coupled system a fictitious subsystem that is the negative of the residual subsystem. Starting from a reduced version of the 3-field formulation (dynamic equilibrium using FRFs, compatibility and equilibrium of interface forces), a direct hybrid assembly is developed by requiring that both compatibility and equilibrium conditions are satisfied exactly, either at coupling DoFs only, or at additional internal DoFs of the residual subsystem. Equilibrium and compatibility DoFs might not be the same: this generates the so-called non-collocated approach. The technique is applied using experimental data from an assembled system made by a plate and a rigid mass.

  20. Females that experience threat are better teachers

    PubMed Central

    Kleindorfer, Sonia; Evans, Christine; Colombelli-Négrel, Diane

    2014-01-01

    Superb fairy-wren (Malurus cyaneus) females use an incubation call to teach their embryos a vocal password to solicit parental feeding care after hatching. We previously showed that high call rate by the female was correlated with high call similarity in fairy-wren chicks, but not in cuckoo chicks, and that parent birds more often fed chicks with high call similarity. Hosts should be selected to increase their defence behaviour when the risk of brood parasitism is highest, such as when cuckoos are present in the area. Therefore, we experimentally test whether hosts increase call rate to embryos in the presence of a singing Horsfield's bronze-cuckoo (Chalcites basalis). Female fairy-wrens increased incubation call rate when we experimentally broadcast cuckoo song near the nest. Embryos had higher call similarity when females had higher incubation call rate. We interpret the findings of increased call rate as increased teaching effort in response to a signal of threat. PMID:24806422

  1. Intra- and interspecific responses to Rafinesque’s big-eared bat (Corynorhinus rafinesquii) social calls.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loeb, Susan, C.; Britzke, Eric, R.

    Bats respond to the calls of conspecifics as well as to calls of other species; however, few studies have attempted to quantify these responses or understand the functions of these calls. We tested the response of Rafinesque’s big-eared bats (Corynorhinus rafinesquii) to social calls as a possible method to increase capture success and to understand the function of social calls. We also tested if calls of bats within the range of the previously designated subspecies differed, if the responses of Rafinesque’s big-eared bats varied with geographic origin of the calls, and if other species responded to the calls of C. rafinesquii. We recorded calls of Rafinesque’s big-eared bats at two colony roost sites in South Carolina, USA. Calls were recorded while bats were in the roosts and as they exited. Playback sequences for each site were created by copying typical pulses into the playback file. Two mist nets were placed approximately 50–500 m from known roost sites; the net with the playback equipment served as the Experimental net and the one without the equipment served as the Control net. Call structures differed significantly between the Mountain and Coastal Plains populations, with calls from the Mountains being of higher frequency and longer duration. Ten of 11 Rafinesque’s big-eared bats were caught in the Control nets, and 13 of 19 bats of other species were captured at Experimental nets even though overall bat activity did not differ significantly between Control and Experimental nets. Our results suggest that Rafinesque’s big-eared bats are not attracted to conspecifics’ calls and that these calls may act as an intraspecific spacing mechanism during foraging.

  2. Finger vein recognition using local line binary pattern.

    PubMed

    Rosdi, Bakhtiar Affendi; Shing, Chai Wuh; Suandi, Shahrel Azmin

    2011-01-01

    In this paper, a personal verification method using finger veins is presented. Finger veins can be considered more secure than other hand-based biometric traits, such as fingerprints and palm prints, because the features are inside the human body. In the proposed method, a new texture descriptor called the local line binary pattern (LLBP) is utilized as the feature extraction technique. The neighbourhood shape in LLBP is a straight line, unlike in the local binary pattern (LBP), where it is a square. Experimental results show that the proposed method using LLBP performs better than previous methods using LBP and the local derivative pattern (LDP).
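    A minimal sketch of an LLBP-style comparison along one horizontal pixel line follows; the line length, bit ordering, and grey levels are illustrative choices, not the paper's exact descriptor definition:

```python
# LLBP-style sketch: threshold the pixels on a straight line against the
# line's centre pixel and pack the results into a binary code. The line
# length, bit order, and grey-level values are illustrative assumptions.
def llbp_horizontal(row, center, length=7):
    half = length // 2
    c = row[center]
    bits = []
    for offset in range(-half, half + 1):
        if offset == 0:
            continue  # skip the centre pixel itself
        bits.append(1 if row[center + offset] >= c else 0)
    code = 0
    for b in bits:          # pack bits, most significant first
        code = (code << 1) | b
    return code

row = [10, 12, 9, 30, 40, 35, 50]   # toy grey-level profile of one line
print(llbp_horizontal(row, center=3))  # → 7 (binary 000111)
```

    A full LLBP descriptor would compute such codes along both horizontal and vertical lines at every pixel; the straight-line neighbourhood is what distinguishes it from the square neighbourhood of LBP.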

  3. Novel optical scanning cryptography using Fresnel telescope imaging.

    PubMed

    Yan, Aimin; Sun, Jianfeng; Hu, Zhijuan; Zhang, Jingtao; Liu, Liren

    2015-07-13

    We propose a new method, called modified optical scanning cryptography, that uses a Fresnel telescope imaging technique for the encryption and decryption of remote objects. An image or object can be optically encrypted on the fly by the Fresnel telescope scanning system together with an encryption key. For image decryption, the encrypted signals are received and processed with an optical coherent heterodyne detection system. The proposed method achieves strong performance through secure Fresnel telescope scanning with orthogonally polarized beams and efficient all-optical information processing. The validity of the proposed method is demonstrated by numerical simulations and experimental results.

  4. Perception and Haptic Rendering of Friction Moments.

    PubMed

    Kawasaki, H; Ohtuka, Y; Koide, S; Mouri, T

    2011-01-01

    This paper considers moments due to friction forces on the human fingertip. A computational technique called the friction moment arc method is presented. The method computes the static and/or dynamic friction moment independently of a friction force calculation. In addition, a new finger holder to display friction moment is presented. This device incorporates a small brushless motor and disk, and connects the user's finger to an interface finger of the five-fingered haptic interface robot HIRO II. Subjects' perception of friction moment while wearing the finger holder, as well as during object manipulation in a virtual reality environment, was evaluated experimentally.

  5. Experimental technique for measuring the isentrope of hydrogen to several megabars

    NASA Astrophysics Data System (ADS)

    Barker, L. M.; Truncano, T. G.; Wise, J. I.; Asay, J. R.

    The experimental measurement of the equation of state (EOS) of hydrogen has been of interest for some time because of the theoretical expectation of a transition to the metallic state in the multi-megabar pressure regime. Previous experiments have reported results consistent with a metallic transition, but experimental uncertainties have precluded positive identification of the metallic phase. In this paper we describe a new experimental approach to the measurement of the high-pressure EOS of hydrogen. A cryogenic hydrogen specimen, either liquid or solid, is located in the muzzle of a gun barrel between a tungsten anvil and another tungsten disk called a shim. Helium gas in the gun barrel cushions the impact and allows nearly isentropic compression of the hydrogen. The time-resolved pressure in the specimen is calculated from a laser interferometer (VISAR) measurement of the acceleration history of the anvil's free surface, and volume measurements at specific times are made by combining VISAR data, which define the position of the anvil, with flash X-ray photographs, which define the shim position.

  6. Experimental investigation of complex circular Airy beam characteristics

    NASA Astrophysics Data System (ADS)

    Porfirev, A. P.; Fomchenkov, S. A.; Khonina, S. N.

    2018-04-01

    We demonstrate a new type of circular Airy beams, the so-called azimuthally modulated circular Airy beams, generated by utilizing a diffraction element, whose transmission function is the sum of the transmission function of the element generating a "petal" pattern and the transmission function of the element generating a circular Airy beam. We experimentally investigate the propagation dynamics of such beams and demonstrate that their autofocusing and self-healing properties are strongly dependent on the number of generated petals. These beams are a combination of a conventional circular Airy beam and vortex laser beams (or their superpositions). Using a spatial light modulator, we demonstrate that these beams have unique properties such as autofocusing, "nondiffractive" propagation and self-healing after passing through an obstacle. The experimental results are in good agreement with the simulation. We believe that these results can be very useful for lensless laser fabrication and laser manipulation techniques, as well as for development of new filament plasma multi-channel formation methods.

  7. Identity method for particle number fluctuations and correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorenstein, M. I.

    An incomplete particle identification distorts the observed event-by-event fluctuations of the hadron chemical composition in nucleus-nucleus collisions. A new experimental technique called the identity method was recently proposed. It eliminated the misidentification problem for one specific combination of the second moments in a system of two hadron species. In the present paper, this method is extended to calculate all the second moments in a system with an arbitrary number of hadron species. Special linear combinations of the second moments are introduced. These combinations are presented in terms of single-particle variables and can be found experimentally from the event-by-event averaging. The mathematical problem is then reduced to solving a system of linear equations. The effect of incomplete particle identification is fully eliminated from the final results.

  8. Regional regularization method for ECT based on spectral transformation of Laplacian

    NASA Astrophysics Data System (ADS)

    Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.

    2016-10-01

    Image reconstruction in electrical capacitance tomography is an ill-posed inverse problem, and regularization techniques are usually used to suppress noise when solving it. An anisotropic regional regularization algorithm for electrical capacitance tomography is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian as a regularization term. With the optimum regional regularizer, a priori knowledge of the local nonlinearity degree of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify that the new regularization algorithm reconstructs higher-quality images than two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with experimental data.
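    The Tikhonov baseline against which such algorithms are compared can be sketched on a toy 2x2 ill-posed problem; the matrix, data, and regularization parameter below are invented, and in ECT the role of A is played by a large sensitivity matrix:

```python
# Tikhonov regularisation sketch: solve the normal equations
# (A^T A + lam*I) x = A^T b in closed form for a 2x2 system. The nearly
# singular matrix, data, and lam value are invented for illustration.
def tikhonov_2x2(A, b, lam):
    a11 = A[0][0] ** 2 + A[1][0] ** 2 + lam        # column 0 norm + lam
    a12 = A[0][0] * A[0][1] + A[1][0] * A[1][1]    # column 0 . column 1
    a22 = A[0][1] ** 2 + A[1][1] ** 2 + lam        # column 1 norm + lam
    r1 = A[0][0] * b[0] + A[1][0] * b[1]           # (A^T b) components
    r2 = A[0][1] * b[0] + A[1][1] * b[1]
    det = a11 * a22 - a12 * a12
    return [(a22 * r1 - a12 * r2) / det, (a11 * r2 - a12 * r1) / det]

A = [[1.0, 1.0], [1.0, 1.0001]]   # nearly singular "sensitivity" matrix
b = [2.0, 2.0001]                  # noise-free data for x = [1, 1]
x = tikhonov_2x2(A, b, lam=1e-6)
print([round(v, 2) for v in x])    # both components close to 1.0
```

    The small lam term damps the nearly singular direction of A, which is what keeps the inversion stable once noise is present; the paper's contribution is to make that damping spatially anisotropic rather than uniform.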

  9. Effect of the amyloid β hairpin's structure on the handedness of helices formed by its aggregates

    DOE PAGES

    GhattyVenkataKrishna, Pavan K.; Uberbacher, Edward C.; Cheng, Xiaolin

    2013-07-08

    Various structural models for amyloid β fibrils have been derived from a variety of experimental techniques. However, these models cannot differentiate between the relative positions of the two arms of the β hairpin, called the stagger. Amyloid fibrils of various hierarchical levels form left-handed helices composed of β sheets. However, it is unclear whether positive, negative, and zero staggers all form the macroscopic left-handed helices. To address this issue we have conducted extensive molecular dynamics simulations of amyloid β sheets of various staggers and shown that only negative staggers lead to the experimentally observed left-handed helices, while positive staggers generate the incorrect right-handed helices. In conclusion, this result suggests that negative staggers are the physiologically relevant structure of the amyloid β fibrils.

  10. Boostream: a dynamic fluid flow process to assemble nanoparticles at liquid interface

    NASA Astrophysics Data System (ADS)

    Delléa, Olivier; Lebaigue, Olivier

    2017-12-01

    CEA-LITEN develops an original process called Boostream® to manipulate, assemble and connect micro- or nanoparticles of various materials, sizes, shapes and functions to obtain monolayer colloidal crystals (MCCs). This process uses the upper surface of a liquid film flowing down a ramp to assemble particles in a manner that is close to the horizontal situation of a Langmuir-Blodgett film construction. In the presence of particles at the liquid interface, the film down-flow configuration exhibits an unusual hydraulic jump which results from the fluid flow accommodating the particle monolayer. In order to master our process, the fluid flow has been modeled and experimentally characterized by optical means, such as with the moiré technique, which consists in observing the reflection of a succession of periodic black-and-red fringes on the liquid surface mirror. The fringe images are deformed when reflected by the curved liquid surface associated with the hydraulic jump, the fringe deformation being proportional to the local slope of the surface. This original experimental setup allowed us to obtain the surface profile in the jump region and to measure it along with the main process parameters (liquid flow rate, slope angle, temperature-sensitive fluid properties such as dynamic viscosity or surface tension, particle sizes). This work presents the experimental setup and its simple model and the different experimental characterization techniques used, and focuses on how the hydraulic jump depends on the process parameters.

  11. Role of hydrogen in volatile behaviour of defects in SiO2-based electronic devices

    NASA Astrophysics Data System (ADS)

    Wimmer, Yannick; El-Sayed, Al-Moatasem; Gös, Wolfgang; Grasser, Tibor; Shluger, Alexander L.

    2016-06-01

    Charge capture and emission by point defects in gate oxides of metal-oxide-semiconductor field-effect transistors (MOSFETs) strongly affect reliability and performance of electronic devices. Recent advances in experimental techniques used for probing defect properties have led to new insights into their characteristics. In particular, these experimental data show a repeated dis- and reappearance (the so-called volatility) of the defect-related signals. We use multiscale modelling to explain the charge capture and emission as well as defect volatility in amorphous SiO2 gate dielectrics. We first briefly discuss the recent experimental results and use a multiphonon charge capture model to describe the charge-trapping behaviour of defects in silicon-based MOSFETs. We then link this model to ab initio calculations that investigate the three most promising defect candidates. Statistical distributions of defect characteristics obtained from ab initio calculations in amorphous SiO2 are compared with the experimentally measured statistical properties of charge traps. This allows us to suggest an atomistic mechanism to explain the experimentally observed volatile behaviour of defects. We conclude that the hydroxyl-E' centre is a promising candidate to explain all the observed features, including defect volatility.

  12. In situ characterization of natural pyrite bioleaching using electrochemical noise technique

    NASA Astrophysics Data System (ADS)

    Chen, Guo-bao; Yang, Hong-ying; Li, Hai-jun

    2016-02-01

    An in situ characterization technique called electrochemical noise (ECN) was used to investigate the bioleaching of natural pyrite. ECN experiments were conducted in four active systems (sulfuric acid, ferric-ion, 9K culture medium, and bioleaching solutions). The ECN data were analyzed in both the time and frequency domains. Spectral noise impedance spectra obtained from power spectral density (PSD) plots for the different systems were compared. A reaction mechanism was also proposed on the basis of the experimental data analysis. The bioleaching system exhibits the lowest noise resistance, 0.101 MΩ. The bioleaching of natural pyrite is considered to be a bio-battery reaction, which distinguishes it from the chemical oxidation reactions in ferric-ion and culture-medium (9K) solutions. The corrosion of pyrite becomes increasingly severe over time during long-term bioleaching tests.
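    The frequency-domain step mentioned above can be sketched in a few lines: the spectral noise impedance is the square root of the ratio of the potential-noise PSD to the current-noise PSD, and its low-frequency plateau approximates the noise resistance. The synthetic records and the 0.1 MΩ resistance below are assumed values for illustration, not the paper's measurements.

```python
import numpy as np
from scipy.signal import welch

# Synthetic potential/current noise records (assumed values, 1 Hz sampling).
rng = np.random.default_rng(1)
fs, n = 1.0, 4096
i_noise = rng.normal(0.0, 1e-8, n)                # current noise, A
R = 1e5                                           # hypothetical noise resistance, ohm
v_noise = R * i_noise + rng.normal(0, 1e-5, n)    # potential noise, V

# Welch PSDs of the two records; the spectral noise impedance is their ratio.
f, psd_v = welch(v_noise, fs=fs, nperseg=512)
_, psd_i = welch(i_noise, fs=fs, nperseg=512)
Rsn = np.sqrt(psd_v / psd_i)                      # spectral noise impedance, ohm

# The low-frequency plateau approximates the noise resistance.
print(f"Rsn at low frequency ~ {Rsn[1]:.3g} ohm")
```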

  13. The effects of ionic strength and organic matter on virus inactivation at low temperatures: generalized likelihood uncertainty estimation (GLUE) as an alternative to least-squares parameter optimization for the fitting of virus inactivation models

    NASA Astrophysics Data System (ADS)

    Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin

    2017-06-01

    This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
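    The GLUE procedure compared above against least squares can be sketched as follows: sample parameter sets from broad uniform priors, score each against the data with an informal likelihood, keep the "behavioural" sets above a threshold, and summarize each parameter with likelihood-weighted statistics. The inactivation data below are synthetic and the prior ranges are assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed" log10 virus concentrations over 60 days, generated
# from a first-order (constant-rate) inactivation model with noise.
t_obs = np.arange(0, 61, 5, dtype=float)           # days
k_true, c0_true = 0.05, 6.0                        # hypothetical values
c_obs = c0_true - k_true * t_obs / np.log(10)      # log-linear decay
c_obs += rng.normal(0.0, 0.05, size=t_obs.size)    # measurement noise

def model(t, k, c0):
    """Constant-rate inactivation: log-linear decline in concentration."""
    return c0 - k * t / np.log(10)

# GLUE: sample parameter sets from broad uniform priors, score each with
# an informal likelihood (here inverse sum of squared errors), and keep
# the behavioural sets whose score exceeds a threshold.
n = 20000
k_s = rng.uniform(0.0, 0.2, n)
c0_s = rng.uniform(4.0, 8.0, n)
sse = np.array([np.sum((c_obs - model(t_obs, k, c0)) ** 2)
                for k, c0 in zip(k_s, c0_s)])
lik = 1.0 / sse                                    # informal likelihood
behavioural = lik > np.quantile(lik, 0.95)         # top 5% retained

w = lik[behavioural] / lik[behavioural].sum()      # likelihood weights
k_est = np.sum(w * k_s[behavioural])               # weighted estimate
lo, hi = np.quantile(k_s[behavioural], [0.05, 0.95])
print(f"k ~ {k_est:.3f} (90% GLUE band: {lo:.3f}-{hi:.3f})")
```

    Unlike a least-squares point estimate, the retained behavioural sets give an uncertainty band directly.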

  14. Control designs for low-loss active magnetic bearings: Theory and implementation

    NASA Astrophysics Data System (ADS)

    Wilson, Brian Christopher David

    Active Magnetic Bearings (AMBs) have been proposed for use in electromechanical flywheel batteries. In these devices, kinetic energy is stored in a magnetically levitated flywheel which spins in a vacuum. The AMB eliminates all mechanical losses; however, electrical loss, which is proportional to the square of the magnetic flux, is still significant. For efficient operation, the flux bias, which is typically introduced into the electromagnets to improve the AMB stiffness, must be reduced, preferably to zero. This zero-bias (ZB) mode of operation cripples the classical control techniques that are customarily used, and nonlinear control is required. As a compromise between AMB stiffness and efficiency, a new flux-bias scheme, called the generalized complementary flux condition (gcfc), is proposed. A flux-bias-dependent trade-off exists between AMB stiffness, power consumption, and power loss. This work theoretically develops and experimentally verifies new low-loss AMB control designs which employ the gcfc condition. Particular attention is paid to the removal of the singularity present in the standard nonlinear control techniques when operating in ZB. Experimental verification is conducted on a 6-DOF AMB reaction wheel. Practical aspects of the gcfc implementation, such as flux measurement and flux-bias implementation with voltage-mode amplifiers using IR compensation, are investigated. Comparisons are made between the gcfc bias technique and the standard constant-flux-sum (cfs) bias method. Under typical operating circumstances, theoretical analysis and experimental data show that the new gcfc bias scheme is more efficient in producing the control flux required for rotor stabilization than the ordinary cfs bias strategy.

  15. Detection and Length Estimation of Linear Scratch on Solid Surfaces Using an Angle Constrained Ant Colony Technique

    NASA Astrophysics Data System (ADS)

    Pal, Siddharth; Basak, Aniruddha; Das, Swagatam

    In many manufacturing areas, the detection of surface defects is one of the most important processes in quality control. Currently, most material-manufacturing industries rely primarily on visual inspection to detect small scratches on solid surfaces. In this article we propose a hybrid computational intelligence technique to automatically detect a linear scratch on a solid surface and simultaneously estimate its length (in pixel units). The approach is based on a swarm intelligence algorithm called Ant Colony Optimization (ACO) and on image preprocessing with Wiener and Sobel filters as well as the Canny edge detector. The ACO algorithm is mainly used to compensate for the broken parts of the scratch. Our experimental results confirm that the proposed technique can detect scratches in noisy and degraded images, even when it is very difficult for conventional image processing to distinguish the scratch area from its background.

  16. Measuring Young’s modulus the easy way, and tracing the effects of measurement uncertainties

    NASA Astrophysics Data System (ADS)

    Nunn, John

    2015-09-01

    The speed of sound in a solid is determined by the density and elasticity of the material. Young's modulus can therefore be calculated once the density and the speed of sound in the solid are measured. The density can be measured relatively easily, and the speed of sound through a rod can be measured very inexpensively by setting up a longitudinal standing wave and using a microphone to record its frequency. This is a simplified version of a technique called 'impulse excitation', and it is a good educational technique for school pupils. This paper includes the description and free provision of custom software that calculates the frequency spectrum of a recorded sound so that the resonant peaks can be readily identified. A discussion of the effect of measurement uncertainties is included to help the more thorough experimental student improve the accuracy of their method. The technique is sensitive enough to detect changes in the elastic modulus with a temperature change of just a few degrees.
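    The calculation itself is short: for a free-free rod the fundamental longitudinal resonance satisfies f1 = v/(2L), so E = ρ(2Lf1)². The rod dimensions, mass, and resonance frequency below are illustrative values (roughly an aluminium rod), not measurements from the paper.

```python
import math

# Illustrative numbers (not from the paper): an aluminium rod.
length = 0.50          # m
mass = 0.270           # kg
diameter = 0.016       # m
f1 = 5060.0            # Hz, fundamental longitudinal resonance (assumed)

area = math.pi * (diameter / 2) ** 2
density = mass / (area * length)            # kg/m^3
v = 2 * length * f1                         # free-free rod: f1 = v / (2L)
E = density * v ** 2                        # Young's modulus, Pa

print(f"density = {density:.0f} kg/m^3")
print(f"sound speed = {v:.0f} m/s")
print(f"E = {E / 1e9:.1f} GPa")

# Propagating the uncertainties: with E = rho * (2 L f1)^2, the relative
# uncertainty combines as dE/E = sqrt((drho/rho)^2 + (2 dL/L)^2 + (2 df/f)^2).
rel = math.sqrt(0.01**2 + (2 * 0.002)**2 + (2 * 0.001)**2)
print(f"relative uncertainty in E ~ {100 * rel:.1f}%")
```

    The quadratic dependence on length and frequency is why those two measurements dominate the error budget.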

  17. Multiple signal classification algorithm for super-resolution fluorescence microscopy

    PubMed Central

    Agarwal, Krishna; Macháň, Radek

    2016-01-01

    Single-molecule localization techniques are restricted by long acquisition and computational times, or by the need for special fluorophores or biologically toxic photochemical environments. Here we propose a statistical super-resolution technique for wide-field fluorescence microscopy, which we call the multiple signal classification algorithm, that has several advantages. It provides resolution down to at least 50 nm, requires fewer frames and lower excitation power, and works even at high fluorophore concentrations. Further, it works with any fluorophore that exhibits blinking on the timescale of the recording. The multiple signal classification algorithm shows comparable or better performance than single-molecule localization techniques and four contemporary statistical super-resolution methods in experiments on in vitro actin filaments and other independently acquired experimental data sets. We also demonstrate super-resolution at timescales of 245 ms (using 49 frames acquired at 200 frames per second) in samples of live-cell microtubules and live-cell actin filaments imaged without imaging buffers. PMID:27934858

  18. Peptide Conformation and Supramolecular Organization in Amylin Fibrils: Constraints from Solid State NMR

    PubMed Central

    Luca, Sorin; Yau, Wai-Ming; Leapman, Richard; Tycko, Robert

    2008-01-01

    The 37-residue amylin peptide, also known as islet amyloid polypeptide, forms fibrils that are the main peptide or protein component of amyloid that develops in the pancreas of type 2 diabetes patients. Amylin also readily forms amyloid fibrils in vitro that are highly polymorphic under typical experimental conditions. We describe a protocol for the preparation of synthetic amylin fibrils that exhibit a single predominant morphology, which we call a striated ribbon, in electron microscope and atomic force microscope images. Solid state nuclear magnetic resonance (NMR) measurements on a series of isotopically labeled samples indicate a single molecular structure within the striated ribbons. We use scanning transmission electron microscopy and several types of one-dimensional and two-dimensional solid state NMR techniques to obtain constraints on the peptide conformation and supramolecular structure in these amylin fibrils, and derive molecular structural models that are consistent with the experimental data. The basic structural unit in amylin striated ribbons, which we call the protofilament, contains four layers of parallel β-sheets, formed by two symmetric layers of amylin molecules. The molecular structure of amylin protofilaments in striated ribbons closely resembles the protofilament in amyloid fibrils with similar morphology formed by the 40-residue β-amyloid peptide that is associated with Alzheimer's disease. PMID:17979302

  19. Characterization of quantum vortex dynamics in superfluid helium

    NASA Astrophysics Data System (ADS)

    Meichle, David P.

    Liquid helium acquires superfluid properties when cooled below the lambda transition temperature of 2.17 K. A superfluid, which is a partial Bose-Einstein condensate, has many exotic properties, including frictionless flow and ballistic rather than diffusive heat transport. A superfluid is also uniquely characterized by the presence of quantized vortices: dynamical, line-like topological phase defects around which all circulation in the flow is constrained. Two vortices can undergo a violent process called reconnection when they approach, cross, and retract, having exchanged tails. Through a numerical examination of a local, linearized solution near reconnection, we discovered a dynamically unstable stationary solution to the Gross-Pitaevskii equation, which was relaxed to a fully non-linear solution using imaginary-time propagation. This investigation explored vortex reconnection in the context of the changing topology of the order parameter, a complex field governing the superfluid dynamics at zero temperature. The dynamics of the vortices can be studied experimentally by dispersing tracer particles into a superfluid flow and recording their motions with movie cameras. The pioneering work of Bewley et al. provided the first visualization technique, using frozen gases to create tracer particles. Using this technique, we experimentally observed for the first time the excitation of helical traveling waves on a vortex core, called Kelvin waves. Kelvin waves are thought to be a central mechanism for dissipation in this inviscid fluid, as they provide an efficient cascade mechanism for transferring energy from large to microscopic length scales. We examined the Kelvin waves in detail and compared their dynamics, in fully self-similar non-dimensional coordinates, to theoretical predictions. Additionally, two experimental advances are presented.
A newly invented technique for reliably dispersing robust, nanometer-scale fluorescent tracer particles directly into the superfluid is described. A detailed numerical investigation of the particle-vortex interactions provides novel calculations of the force trapping particles on vortices, and a scaling was found suggesting that smaller particles may remain bound to the vortices at much higher speeds than larger particles. Lastly, a new stereographic imaging system has been developed, allowing the first three-dimensional reconstruction of individual particle and vortex filament trajectories. Preliminary data, including the first three-dimensional observation of a vortex reconnection, are presented.

  20. Selecting a restoration technique to minimize OCR error.

    PubMed

    Cannon, M; Fugate, M; Hush, D R; Scovel, C

    2003-01-01

    This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.

  1. Ultrahigh-resolution CT and DR scanner

    NASA Astrophysics Data System (ADS)

    DiBianca, Frank A.; Gupta, Vivek; Zou, Ping; Jordan, Lawrence M.; Laughter, Joseph S.; Zeman, Herbert D.; Sebes, Jeno I.

    1999-05-01

    A new technique called Variable-Resolution X-ray (VRX) detection that dramatically increases the spatial resolution in computed tomography (CT) and digital radiography (DR) is presented. The technique is based on a principle called 'projective compression' that allows the resolution element of a CT detector to scale with the subject or field size. For very large (40-50 cm) field sizes, resolution exceeding 2 cy/mm is possible and for very small fields, microscopy is attainable with resolution exceeding 100 cy/mm. Several effects that could limit the performance of VRX detectors are considered. Experimental measurements on a 16-channel CdWO4 scintillator + photodiode test array yield a limiting MTF of 64 cy/mm (8 μm) in the highest-resolution configuration reported. Preliminary CT images have been made of small anatomical specimens and small animals using a storage phosphor screen in the VRX mode. Measured detector resolution of the CT projection data exceeds 20 cy/mm (less than 25 μm); however, the final reconstructed CT images produced thus far exhibit 10 cy/mm (50 μm) resolution because of non-flatness of the storage phosphor plates, focal spot effects and the use of a rudimentary CT reconstruction algorithm. A 576-channel solid-state detector is being fabricated that is expected to achieve CT image resolution in excess of that of the 26-channel test array.

  2. Effects of High-Temperature-Pressure Polymerized Resin-Infiltrated Ceramic Networks on Oral Stem Cells

    PubMed Central

    Nassif, Ali; Berbar, Tsouria; Le Goff, Stéphane; Berdal, Ariane; Sadoun, Michael; Fournier, Benjamin P. J.

    2016-01-01

    Objectives The development of CAD-CAM techniques called for new materials suited to this technique and offering safe and sustainable clinical implementation. The infiltration of resin into a ceramic network under high pressure and high temperature defines a new class of hybrid materials for this purpose, namely polymer-infiltrated ceramic network (PICN) materials, which require biological evaluation. We used oral stem cells (gingival and pulpal) as an in vitro experimental model. Methods Four biomaterials were ground, immersed in a culture medium and deposited on stem cells from dental pulp (DPSC) and gingiva (GSC): Enamic (VITA®), Experimental Hybrid Material (EHM), EHM with initiator (EHMi) and polymerized Z100™ composite material (3M®). After 7 days of incubation, viability, apoptosis, proliferation, cytoskeleton, inflammatory response and morphology were evaluated in vitro. Results Proliferation was insignificantly delayed by all the tested materials. Significant cytotoxicity was observed in the presence of the resin-based composites (MTT assay); however, no apoptosis was detectable, and some dead cells were detected, as with the PICN materials. Cell morphology, major cytoskeleton and extracellular matrix components were not altered. An intimate contact appeared between the materials and the cells. Clinical Significance The three new tested biomaterials did not exhibit adverse effects on oral stem cells under our experimental conditions and may be an interesting alternative to ceramic- or composite-based CAD-CAM blocks. PMID:27196425

  3. Probing Quark-Gluon-Plasma properties with a Bayesian model-to-data comparison

    NASA Astrophysics Data System (ADS)

    Cai, Tianji; Bernhard, Jonah; Ke, Weiyao; Bass, Steffen; Duke QCD Group Team

    2016-09-01

    Experiments at RHIC and the LHC study a special state of matter called the Quark-Gluon Plasma (QGP), in which quarks and gluons roam freely, by colliding relativistic heavy ions. Given the transitory nature of the QGP, its properties can only be explored by comparing computational models of its formation and evolution to experimental data. The models fall, roughly speaking, into two categories: those solely using relativistic viscous hydrodynamics (pure hydro models) and those that in addition couple to a microscopic Boltzmann transport for the later evolution of the hadronic decay products (hybrid models). Each of these models has multiple parameters that encode the physical properties we want to probe and that need to be calibrated to experimental data, a task which is computationally expensive but necessary for knowledge extraction and for determining the models' quality. Our group has developed an analysis technique based on Bayesian statistics to perform the model calibration and to extract probability distributions for each model parameter. Following previous work that applies the technique to the hybrid model, we now perform a similar analysis on a pure-hydro model and display the posterior distributions for the same set of model parameters. We also develop a set of criteria to assess the quality of the two models with respect to their ability to describe current experimental data. Funded by a Duke University Goldman Sachs Research Fellowship.

  4. Dynamic connectivity regression: Determining state-related changes in brain connectivity

    PubMed Central

    Cribben, Ivor; Haraldsdottir, Ragnheidur; Atlas, Lauren Y.; Wager, Tor D.; Lindquist, Martin A.

    2014-01-01

    Most statistical analyses of fMRI data assume that the nature, timing and duration of the psychological processes being studied are known. However, often it is hard to specify this information a priori. In this work we introduce a data-driven technique for partitioning the experimental time course into distinct temporal intervals with different multivariate functional connectivity patterns between a set of regions of interest (ROIs). The technique, called Dynamic Connectivity Regression (DCR), detects temporal change points in functional connectivity and estimates a graph, or set of relationships between ROIs, for data in the temporal partition that falls between pairs of change points. Hence, DCR allows for estimation of both the time of change in connectivity and the connectivity graph for each partition, without requiring prior knowledge of the nature of the experimental design. Permutation and bootstrapping methods are used to perform inference on the change points. The method is applied to various simulated data sets as well as to an fMRI data set from a study (N=26) of a state anxiety induction using a socially evaluative threat challenge. The results illustrate the method’s ability to observe how the networks between different brain regions changed with subjects’ emotional state. PMID:22484408
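    A drastically simplified version of the change-point idea (a single pair of regions and a scan over candidate split points, rather than DCR's sparse graph estimation with permutation and bootstrap inference) can be sketched as follows; the time series and change point below are synthetic.

```python
import numpy as np

# Toy change-point detection in the pairwise correlation between two "ROIs".
rng = np.random.default_rng(4)
T = 300
cp_true = 150

# The correlation between x and y switches from ~0 to ~0.8 at the change point.
x = rng.normal(size=T)
e = rng.normal(size=T)
y = np.where(np.arange(T) < cp_true, e, 0.8 * x + 0.6 * e)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Scan candidate split points; the score is how different the two segment
# correlations are (minimum segment length 30 to stabilise the estimates).
best, cp_hat = -np.inf, None
for t in range(30, T - 30):
    score = abs(corr(x[:t], y[:t]) - corr(x[t:], y[t:]))
    if score > best:
        best, cp_hat = score, t
print(f"estimated change point: {cp_hat} (true: {cp_true})")
```

    DCR generalises this single-pair scan to whole connectivity graphs and adds resampling-based inference on the detected change points.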

  5. Transferable Calibration Standard Developed for Quantitative Raman Scattering Diagnostics in High-Pressure Flames

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang-Viet; Kojima, Jun

    2005-01-01

    Researchers from NASA Glenn Research Center's Combustion Branch and the Ohio Aerospace Institute (OAI) have developed a transferable calibration standard for an optical technique called spontaneous Raman scattering (SRS) in high-pressure flames. SRS is perhaps the only technique that provides spatially and temporally resolved, simultaneous multiscalar measurements in turbulent flames. Such measurements are critical for the validation of numerical models of combustion. This study has been a combined experimental and theoretical effort to develop a spectral calibration database for multiscalar diagnostics using SRS in high-pressure flames. In the past, however, such measurements have used one-of-a-kind experimental setups and setup-dependent calibration procedures to empirically account for spectral interferences, or crosstalk, among the major species of interest. Such calibration procedures, being non-transferable, are prohibitively expensive to duplicate. A goal of this effort is to provide an SRS calibration database using transferable standards that can be implemented widely by other researchers for both atmospheric-pressure and high-pressure (up to 30 atm) SRS studies. A secondary goal of this effort is to provide quantitative multiscalar diagnostics in high-pressure environments to validate computational combustion codes.

  6. High frequency source localization in a shallow ocean sound channel using frequency difference matched field processing.

    PubMed

    Worthmann, Brian M; Song, H C; Dowling, David R

    2015-12-01

    Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, particularly those involving high frequency signals, imperfect knowledge of the actual propagation environment prevents accurate propagation modeling and source localization via MFP fails. For beamforming applications, this actual-to-model mismatch problem was mitigated through a frequency downshift, made possible by a nonlinear array-signal-processing technique called frequency difference beamforming [Abadi, Song, and Dowling (2012). J. Acoust. Soc. Am. 132, 3018-3029]. Here, this technique is extended to conventional (Bartlett) MFP using simulations and measurements from the 2011 Kauai Acoustic Communications MURI experiment (KAM11) to produce ambiguity surfaces at frequencies well below the signal bandwidth where the detrimental effects of mismatch are reduced. Both the simulation and experimental results suggest that frequency difference MFP can be more robust against environmental mismatch than conventional MFP. In particular, signals of frequency 11.2 kHz-32.8 kHz were broadcast 3 km through a 106-m-deep shallow ocean sound channel to a sparse 16-element vertical receiving array. Frequency difference MFP unambiguously localized the source in several experimental data sets with average peak-to-side-lobe ratio of 0.9 dB, average absolute-value range error of 170 m, and average absolute-value depth error of 10 m.

  7. Researches on Preliminary Chemical Reactions in Spark-Ignition Engines

    NASA Technical Reports Server (NTRS)

    Muehlner, E.

    1943-01-01

    Chemical reactions can demonstrably occur in a fuel-air mixture compressed in the working cylinder of an Otto-cycle (spark-ignition) internal-combustion engine even before the charge is ignited by the flame proceeding from the sparking plug. These are the so-called "preliminary reactions" ("pre-flame" combustion or oxidation), and an exact knowledge of their characteristic development is of great importance for a correct appreciation of the phenomena of engine knock (detonation), and consequently for its avoidance. Such reactions can be studied either in a working engine cylinder or in a combustion bomb. The first method necessitates a complicated experimental technique, while the second has the disadvantage of enabling only a single reaction to be studied at one time. Consequently, a new series of experiments was inaugurated, conducted in a motored (externally driven) experimental engine of mixture-compression type, without ignition, the resulting preliminary reactions being detectable and measurable thermometrically.

  8. Influence of rotation on the near-wake development behind an impulsively started circular cylinder

    NASA Astrophysics Data System (ADS)

    Coutanceau, M.; Menard, C.

    1985-09-01

    A rotating body, travelling through a fluid in such a way that the rotation axis is at right angles to the translational path, experiences a transverse force, called the Magnus force. The present study is concerned with a rotating cylinder which is in a state of translational motion. In the considered case, the existence of a lift force may be explained easily on the basis of the theory of inviscid fluids. An experimental investigation provides new information regarding the mechanism of the near-wake development of the classical unsteady flow and the influence of the rotational effects. Attention is given to the experimental technique, aspects of flow topology and notation, the time development of the wake flow pattern, the time evolution of certain flow properties, the flow structure in the neighborhood of the front stagnation point, and the influence of the Reynolds number on flow establishment.

  9. Neural-network quantum state tomography

    NASA Astrophysics Data System (ADS)

    Torlai, Giacomo; Mazzola, Guglielmo; Carrasquilla, Juan; Troyer, Matthias; Melko, Roger; Carleo, Giuseppe

    2018-05-01

    The experimental realization of increasingly complex synthetic quantum systems calls for the development of general theoretical methods to validate and fully exploit quantum resources. Quantum state tomography (QST) aims to reconstruct the full quantum state from simple measurements, and therefore provides a key tool to obtain reliable analytics [1-3]. However, exact brute-force approaches to QST place a high demand on computational resources, making them unfeasible for anything except small systems [4,5]. Here we show how machine learning techniques can be used to perform QST of highly entangled states with more than a hundred qubits, to a high degree of accuracy. We demonstrate that machine learning allows one to reconstruct traditionally challenging many-body quantities, such as the entanglement entropy, from simple, experimentally accessible measurements. This approach can benefit existing and future generations of devices ranging from quantum computers to ultracold-atom quantum simulators [6-8].

  10. An Eigensystem Realization Algorithm (ERA) for modal parameter identification and model reduction

    NASA Technical Reports Server (NTRS)

    Juang, J. N.; Pappa, R. S.

    1985-01-01

    A method, called the Eigensystem Realization Algorithm (ERA), is developed for modal parameter identification and model reduction of dynamic systems from test data. A new approach is introduced in conjunction with the singular value decomposition technique to derive the basic formulation of minimum order realization which is an extended version of the Ho-Kalman algorithm. The basic formulation is then transformed into modal space for modal parameter identification. Two accuracy indicators are developed to quantitatively identify the system modes and noise modes. For illustration of the algorithm, examples are shown using simulation data and experimental data for a rectangular grid structure.
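    The core steps of ERA (block-Hankel matrices of Markov parameters, SVD truncation to minimum order, and eigenanalysis of the realized state matrix) can be sketched for a single simulated mode; the 5 Hz frequency and 2% damping below are arbitrary test values, not data from the paper.

```python
import numpy as np

# Hypothetical test data: the discrete-time impulse response of a known
# one-mode (2-state) system, y[k] = lam^k + conj(lam)^k.
dt = 0.01
omega, zeta = 2 * np.pi * 5.0, 0.02           # 5 Hz mode, 2% damping
lam = np.exp((-zeta * omega + 1j * omega * np.sqrt(1 - zeta**2)) * dt)
k = np.arange(200)
y = (lam**k).real * 2.0                        # Markov parameters

# ERA step 1: form the shifted block-Hankel matrices.
r, s = 80, 80
H0 = np.array([[y[i + j] for j in range(s)] for i in range(r)])
H1 = np.array([[y[i + j + 1] for j in range(s)] for i in range(r)])

# ERA step 2: SVD of H0; retain the n dominant singular values (order n = 2).
U, S, Vt = np.linalg.svd(H0)
n = 2
Un, Sn, Vn = U[:, :n], np.diag(S[:n]), Vt[:n, :].T
Sq = np.sqrt(Sn)
A = np.linalg.inv(Sq) @ Un.T @ H1 @ Vn @ np.linalg.inv(Sq)

# ERA step 3: modal parameters from the eigenvalues of the realized A.
mu = np.linalg.eigvals(A)
s_cont = np.log(mu[np.argmax(mu.imag)]) / dt   # continuous-time pole
f_id = abs(s_cont) / (2 * np.pi)               # identified frequency, Hz
z_id = -s_cont.real / abs(s_cont)              # identified damping ratio
print(f"identified: f = {f_id:.3f} Hz, zeta = {z_id:.4f}")
```

    With noise-free data the two dominant singular values account for everything; the accuracy indicators mentioned in the abstract address the noisy case, where the singular-value spectrum no longer truncates cleanly.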

  11. Finger Vein Recognition Using Local Line Binary Pattern

    PubMed Central

    Rosdi, Bakhtiar Affendi; Shing, Chai Wuh; Suandi, Shahrel Azmin

    2011-01-01

    In this paper, a personal verification method using finger veins is presented. Finger veins can be considered more secure than other hand-based biometric traits, such as fingerprints and palm prints, because the features are inside the human body. In the proposed method, a new texture descriptor called the local line binary pattern (LLBP) is utilized as the feature extraction technique. The neighbourhood shape in LLBP is a straight line, unlike in the local binary pattern (LBP), where it is a square. Experimental results show that the proposed method using LLBP performs better than previous methods using LBP and the local derivative pattern (LDP). PMID:22247670
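    A minimal sketch of the horizontal component of the line-shaped neighbourhood is shown below; the published LLBP descriptor additionally computes a vertical line code and combines the two into a magnitude, which is omitted here for brevity. The line length N = 7 and the toy image are arbitrary choices.

```python
import numpy as np

def llbp_horizontal(img, N=7):
    """Horizontal local line binary pattern: threshold the N-1 pixels on a
    horizontal line against the centre pixel and pack the bits into a code."""
    h = N // 2
    rows, cols = img.shape
    out = np.zeros((rows, cols), dtype=np.uint32)
    for r in range(rows):
        for c in range(h, cols - h):
            center = img[r, c]
            code, bit = 0, 0
            # Neighbours on the line, excluding the centre pixel itself.
            for dc in list(range(-h, 0)) + list(range(1, h + 1)):
                if img[r, c + dc] >= center:
                    code |= 1 << bit
                bit += 1
            out[r, c] = code
    return out

# Toy 1-row "image": a bright centre pixel yields code 0, since no
# neighbour on the line is >= the centre value.
img = np.array([[10, 20, 30, 90, 30, 20, 10]], dtype=np.uint8)
codes = llbp_horizontal(img, N=7)
print(codes[0, 3])
```

    As with LBP, the resulting codes are typically histogrammed over the image to form the feature vector used for matching.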

  12. Preliminary Analysis of Photoreading

    NASA Technical Reports Server (NTRS)

    McNamara, Danielle S.

    2000-01-01

    The purpose of this project was to provide a preliminary analysis of a reading strategy called PhotoReading. PhotoReading is a technique developed by Paul Scheele that claims to increase reading rate to 25,000 words per minute (Scheele, 1993). PhotoReading itself involves entering a "relaxed state" and looking at, but not reading, each page of a text for a brief moment (about 1 to 2 seconds). While this technique has received attention in the popular press, there had been no objective examinations of its validity. To examine the effectiveness of PhotoReading, the principal investigator (i.e., the trainee) participated in a PhotoReading workshop to learn the technique. Parallel versions of two standardized and three experimenter-created reading comprehension tests were administered to the trainee and to an expert user of the PhotoReading technique to compare the use of normal reading strategies and the PhotoReading technique by both readers. The results for all measures yielded no benefits of using the PhotoReading technique. The extremely rapid reading rates claimed by PhotoReaders were not observed; indeed, the reading rates were generally comparable to those for normal reading. Moreover, the PhotoReading expert generally showed an increase in reading time when using the PhotoReading technique in comparison to when using normal reading strategies to process text. This increase in reading time when using PhotoReading was accompanied by a decrease in text comprehension.

  13. Turnover of Lipidated LC3 and Autophagic Cargoes in Mammalian Cells.

    PubMed

    Rodríguez-Arribas, M; Yakhine-Diop, S M S; González-Polo, R A; Niso-Santano, M; Fuentes, J M

    2017-01-01

    Macroautophagy (usually referred to simply as autophagy) is the most important degradation system in mammalian cells. It is responsible for the elimination of protein aggregates, organelles, and other cellular content. During autophagy, these materials (i.e., the cargo) must be engulfed by a double-membrane structure called an autophagosome, which delivers the cargo to the lysosome to complete its degradation. Autophagy is a very dynamic pathway, and its overall activity is referred to as autophagic flux, which encompasses all the steps implicated in cargo degradation, from autophagosome formation onward. There are several techniques to monitor autophagic flux. Among them, the method most used experimentally to assess autophagy is the detection of LC3 protein processing and p62 degradation by Western blotting. In this chapter, we provide a detailed and straightforward protocol for this purpose in cultured mammalian cells, including a brief set of notes concerning problems associated with the Western-blotting detection of LC3 and p62. © 2017 Elsevier Inc. All rights reserved.

  14. Tuning support vector machines for minimax and Neyman-Pearson classification.

    PubMed

    Davenport, Mark A; Baraniuk, Richard G; Scott, Clayton D

    2010-10-01

    This paper studies the training of support vector machine (SVM) classifiers with respect to the minimax and Neyman-Pearson criteria. In principle, these criteria can be optimized in a straightforward way using a cost-sensitive SVM. In practice, however, because these criteria require especially accurate error estimation, standard techniques for tuning SVM parameters, such as cross-validation, can lead to poor classifier performance. To address this issue, we first prove that the usual cost-sensitive SVM, here called the 2C-SVM, is equivalent to another formulation called the 2nu-SVM. We then exploit a characterization of the 2nu-SVM parameter space to develop a simple yet powerful approach to error estimation based on smoothing. In an extensive experimental study, we demonstrate that smoothing significantly improves the accuracy of cross-validation error estimates, leading to dramatic performance gains. Furthermore, we propose coordinate descent strategies that offer significant gains in computational efficiency, with little to no loss in performance.
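The cost-sensitive idea behind the 2C-SVM can be illustrated with a toy sketch: class-dependent weights on the hinge loss shift the decision boundary to penalize errors on one class more heavily. This is only the basic two-cost mechanism, trained by plain subgradient descent; the 2nu reparameterization and the smoothing-based error estimation of the paper are not reproduced.

```python
import numpy as np

def train_cost_sensitive_svm(X, y, c_pos=1.0, c_neg=1.0, lam=0.01,
                             lr=0.1, epochs=200, seed=0):
    """Linear SVM with class-dependent hinge-loss costs (subgradient descent).

    y must be in {-1, +1}; c_pos / c_neg weight hinge violations on each
    class, which is the basic idea behind cost-sensitive (2C-style) SVMs.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    costs = np.where(y > 0, c_pos, c_neg)
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:  # hinge-loss subgradient, scaled by the class cost
                w += lr * (costs[i] * y[i] * X[i] - lam * w)
                b += lr * costs[i] * y[i]
            else:
                w -= lr * lam * w
    return w, b

# Toy separable data: a heavier cost on the positive class pushes the
# boundary to reduce misses on that class.
rng = np.random.default_rng(1)
Xp = rng.normal(loc=+2.0, scale=1.0, size=(50, 2))
Xn = rng.normal(loc=-2.0, scale=1.0, size=(50, 2))
X = np.vstack([Xp, Xn])
y = np.array([1] * 50 + [-1] * 50)
w, b = train_cost_sensitive_svm(X, y, c_pos=5.0, c_neg=1.0)
pred = np.sign(X @ w + b)
print((pred == y).mean())
```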

  15. Towards simultaneous measurements of electronic and structural properties in ultra-fast x-ray free electron laser absorption spectroscopy experiments

    NASA Astrophysics Data System (ADS)

    Gaudin, J.; Fourment, C.; Cho, B. I.; Engelhorn, K.; Galtier, E.; Harmand, M.; Leguay, P. M.; Lee, H. J.; Nagler, B.; Nakatsutsumi, M.; Ozkan, C.; Störmer, M.; Toleikis, S.; Tschentscher, Th; Heimann, P. A.; Dorchies, F.

    2014-04-01

    The rapidly growing ultrafast science with X-ray lasers unveils atomic scale processes with unprecedented time resolution, bringing the so-called ``molecular movie'' within reach. X-ray absorption spectroscopy is one of the most powerful x-ray techniques, providing both local atomic order and electronic structure when coupled with ad-hoc theory. Collecting absorption spectra within a few x-ray pulses is possible only in a dispersive setup. We demonstrate ultrafast time-resolved measurements of the LIII-edge x-ray absorption near-edge spectra of irreversibly laser-excited molybdenum using an average of only a few x-ray pulses, with a signal-to-noise ratio limited only by the saturation level of the detector. The simplicity of the experimental set-up makes this technique versatile and applicable for a wide range of pump-probe experiments, particularly in the case of non-reversible processes.

  16. Towards simultaneous measurements of electronic and structural properties in ultra-fast x-ray free electron laser absorption spectroscopy experiments

    PubMed Central

    Gaudin, J.; Fourment, C.; Cho, B. I.; Engelhorn, K.; Galtier, E.; Harmand, M.; Leguay, P. M.; Lee, H. J.; Nagler, B.; Nakatsutsumi, M.; Ozkan, C.; Störmer, M.; Toleikis, S.; Tschentscher, Th; Heimann, P. A.; Dorchies, F.

    2014-01-01

    The rapidly growing ultrafast science with X-ray lasers unveils atomic scale processes with unprecedented time resolution, bringing the so-called “molecular movie” within reach. X-ray absorption spectroscopy is one of the most powerful x-ray techniques, providing both local atomic order and electronic structure when coupled with ad-hoc theory. Collecting absorption spectra within a few x-ray pulses is possible only in a dispersive setup. We demonstrate ultrafast time-resolved measurements of the LIII-edge x-ray absorption near-edge spectra of irreversibly laser-excited molybdenum using an average of only a few x-ray pulses, with a signal-to-noise ratio limited only by the saturation level of the detector. The simplicity of the experimental set-up makes this technique versatile and applicable for a wide range of pump-probe experiments, particularly in the case of non-reversible processes. PMID:24740172

  17. The Three-D Flow Structures of Gas and Liquid Generated by a Spreading Flame Over Liquid Fuel

    NASA Technical Reports Server (NTRS)

    Tashtoush, G.; Ito, A.; Konishi, T.; Narumi, A.; Saito, K.; Cremers, C. J.

    1999-01-01

    We developed a new experimental technique, called combined laser sheet particle tracking (LSPT) and laser holographic interferometry (HI), which is capable of measuring the transient behavior of three-dimensional structures of temperature and flow in both the liquid and gas phases. We applied this technique to a pulsating flame spreading over n-butanol. We found a twin vortex flow both on the liquid surface and deep in the liquid a few mm below the surface, as well as a twin vortex flow in the gas phase. The first twin vortex flow at the liquid surface was observed previously by NASA Lewis researchers, while the last two observations are new. These observations revealed that the convective flow structure ahead of the flame leading edge is three dimensional in nature and that the pulsating spread is controlled by the convective flow of both liquid and gas.

  18. Decorrelation correction for nanoparticle tracking analysis of dilute polydisperse suspensions in bulk flow

    NASA Astrophysics Data System (ADS)

    Hartman, John; Kirby, Brian

    2017-03-01

    Nanoparticle tracking analysis, a multiprobe single particle tracking technique, is a widely used method to quickly determine the concentration and size distribution of colloidal particle suspensions. Many popular tools remove non-Brownian components of particle motion by subtracting the ensemble-average displacement at each time step, which is termed dedrifting. Though critical for accurate size measurements, dedrifting is shown here to introduce significant biasing error and can fundamentally limit the dynamic range of particle size that can be measured for dilute heterogeneous suspensions such as biological extracellular vesicles. We report a more accurate estimate of particle mean-square displacement, which we call decorrelation analysis, that accounts for correlations between individual and ensemble particle motion, which are spuriously introduced by dedrifting. Particle tracking simulation and experimental results show that this approach more accurately determines particle diameters for low-concentration polydisperse suspensions when compared with standard dedrifting techniques.
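The dedrifting baseline that this record improves upon can be sketched as follows (the decorrelation correction itself is not reproduced here): subtract the ensemble-average displacement at each time step, then compute the mean-square displacement (MSD) of the corrected tracks. All parameter values below are illustrative.

```python
import numpy as np

def msd_with_dedrifting(tracks, max_lag):
    """Mean-square displacement after subtracting, at each time step, the
    ensemble-average displacement (standard 'dedrifting').

    tracks: array of shape (n_particles, n_frames, n_dims).
    """
    steps = np.diff(tracks, axis=1)                 # per-frame displacements
    drift = steps.mean(axis=0, keepdims=True)       # ensemble-average step
    corrected = np.cumsum(steps - drift, axis=1)    # dedrifted trajectories
    corrected = np.concatenate(
        [np.zeros_like(tracks[:, :1, :]), corrected], axis=1)
    msd = []
    for lag in range(1, max_lag + 1):
        disp = corrected[:, lag:, :] - corrected[:, :-lag, :]
        msd.append((disp ** 2).sum(axis=-1).mean())
    return np.array(msd)

# Synthetic 2-D Brownian tracks with a common drift (bulk flow).
rng = np.random.default_rng(0)
n, frames, D, dt = 200, 100, 0.5, 0.05
steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(n, frames - 1, 2))
drift = np.array([0.3, 0.1]) * dt                   # non-Brownian component
tracks = np.concatenate(
    [np.zeros((n, 1, 2)), np.cumsum(steps + drift, axis=1)], axis=1)
msd = msd_with_dedrifting(tracks, max_lag=10)
# For 2-D Brownian motion, MSD(lag) ~ 4 * D * lag * dt.
print(msd[0], 4 * D * dt)
```

With many particles the ensemble-average step estimates the drift well; the paper's point is that for dilute, polydisperse samples this subtraction correlates individual and ensemble motion and biases the MSD.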

  19. A partially reflecting random walk on spheres algorithm for electrical impedance tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de

    2015-12-15

    In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
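The underlying walk-on-spheres idea can be sketched for the simplest case, Laplace's equation in the unit disk; the partial reflection, interface-transmission replacements, and control variates of the paper are omitted in this minimal sketch.

```python
import math
import random

def walk_on_spheres(x, y, boundary_value, eps=1e-3, n_walks=4000, seed=0):
    """Estimate the solution of Laplace's equation in the unit disk at (x, y)
    by the classical walk-on-spheres algorithm: from the current point, jump
    uniformly to the largest circle contained in the domain, repeat until
    within eps of the boundary, then score the boundary value there.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        px, py = x, y
        while True:
            r = 1.0 - math.hypot(px, py)   # distance to the unit circle
            if r < eps:
                break
            theta = rng.uniform(0.0, 2.0 * math.pi)
            px += r * math.cos(theta)
            py += r * math.sin(theta)
        norm = math.hypot(px, py)          # project onto the boundary
        total += boundary_value(px / norm, py / norm)
    return total / n_walks

# g(x, y) = x on the boundary has the harmonic extension u(x, y) = x,
# so the estimate at (0.3, 0.2) should be close to 0.3 (Monte Carlo error).
estimate = walk_on_spheres(0.3, 0.2, lambda bx, by: bx)
print(estimate)
```

Each walk is independent, which is exactly the "embarrassingly parallel" property the abstract highlights.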

  20. Guided SAR image despeckling with probabilistic non local weights

    NASA Astrophysics Data System (ADS)

    Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny

    2017-12-01

    SAR images are generally corrupted by granular disturbances called speckle, which make visual analysis and detail extraction difficult. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non Local Weights) replaces the heuristic parametric constants of the GGF-BNLM method with values derived dynamically from the image statistics for weight computation. The proposed changes make the GGF-BNLM method adaptive and, as a result, significantly improve its performance. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.
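As a rough illustration of the non-local weighting that such methods build on, here is a minimal generic non-local means filter applied to a synthetic multiplicative-noise image; the guided-filter structure and the Bayesian speckle-specific weights of GGF-BNLM are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def nonlocal_means(img, patch=3, search=7, h=1.0):
    """Minimal non-local means: each pixel becomes a weighted average of
    search-window pixels, weighted by patch similarity,
    w = exp(-mean((patch_i - patch_j)**2) / h**2)."""
    p, s = patch // 2, search // 2
    padded = np.pad(img, p + s, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + p + s, j + p + s
            ref = padded[ci - p:ci + p + 1, cj - p:cj + p + 1]
            num = den = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    qi, qj = ci + di, cj + dj
                    cand = padded[qi - p:qi + p + 1, qj - p:qj + p + 1]
                    w = np.exp(-((ref - cand) ** 2).mean() / h ** 2)
                    num += w * padded[qi, qj]
                    den += w
            out[i, j] = num / den
    return out

# Speckle is commonly modeled as multiplicative noise.
rng = np.random.default_rng(0)
clean = np.ones((16, 16))
clean[:, 8:] = 2.0                        # a simple two-level scene
noisy = clean * rng.gamma(shape=16.0, scale=1.0 / 16.0, size=clean.shape)
denoised = nonlocal_means(noisy)
print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())
```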

  1. Robust stabilization of underactuated nonlinear systems: A fast terminal sliding mode approach.

    PubMed

    Khan, Qudrat; Akmeliawati, Rini; Bhatti, Aamer Iqbal; Khan, Mahmood Ashraf

    2017-01-01

    This paper presents a fast terminal sliding mode based control design strategy for a class of uncertain underactuated nonlinear systems. Strategically, this development encompasses those electro-mechanical underactuated systems which can be transformed into the so-called regular form. The novelty of the proposed technique lies in the hierarchical development of a fast terminal sliding attractor design for the considered class. Having established sliding mode along the designed manifold, the closed-loop dynamics become finite-time stable, which consequently results in high precision. In addition, the adverse effects of the chattering phenomenon are reduced via a strong reachability condition, and the robustness of the system against uncertainties is confirmed theoretically. A simulation as well as an experimental study of an inverted pendulum is presented to demonstrate the applicability of the proposed technique. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
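A minimal sketch of the idea on a double integrator (not the underactuated class or the hierarchical design of the paper): a terminal-type sliding surface with a fractional-power term yields finite-time convergence on the surface, and a saturated switching law (standing in for the paper's strong reachability condition) limits chattering. All gains are illustrative.

```python
import math

def sat(z):
    """Boundary-layer saturation of the switching term to reduce chattering."""
    return max(-1.0, min(1.0, z))

def simulate_ftsm(x0=1.0, v0=0.0, alpha=1.0, beta=1.0, gamma=0.5,
                  K=10.0, phi=0.05, dt=1e-3, t_end=10.0):
    """Double-integrator plant x'' = u under a fast-terminal-sliding-mode-style
    law: s = v + alpha*x + beta*|x|**gamma*sign(x);  u = -K*sat(s/phi).
    On the surface s = 0, the fractional-power term drives x to zero in
    finite time (the 'terminal' property)."""
    x, v = x0, v0
    for _ in range(int(t_end / dt)):
        s = v + alpha * x + beta * math.copysign(abs(x) ** gamma, x)
        u = -K * sat(s / phi)
        x += v * dt          # explicit Euler integration of the plant
        v += u * dt
    return x, v

x_end, v_end = simulate_ftsm()
print(x_end, v_end)  # both near zero after the transient
```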

  2. The total occlusal convergence of the abutment of a partial fixed dental prosthesis: A definition and a clinical technique for its assessment

    PubMed Central

    Mamoun, John S.

    2013-01-01

    The abutment(s) of a partial fixed dental prosthesis (PFDP) should have a minimal total occlusal convergence (TOC), also called a taper, in order to ensure adequate retention of a PFDP that will be made for the abutment(s), given the height of the abutment(s). This article reviews the concept of PFDP abutment TOC and presents an alternative definition of TOC: the extent to which the shape of an abutment differs from an ideal cylindrical abutment shape. This article also reviews experimental results concerning the ideal TOC in degrees and explores clinical techniques for estimating the TOC of a crown abutment. The author suggests that dentists use high-magnification loupes (×6-8 magnification or greater) or a surgical operating microscope when preparing crown abutments, to facilitate creating a minimal abutment TOC. PMID:24932130
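For an idealized frustum-shaped preparation (an assumption of this sketch, consistent with the usual definition of TOC as the sum of the two opposing axial-wall convergence angles, rather than the article's cylinder-deviation definition), the TOC follows from simple geometry:

```python
import math

def total_occlusal_convergence(r_base, r_occlusal, height):
    """TOC (degrees) for an idealized frustum-shaped abutment: each axial
    wall converges at atan((r_base - r_occlusal) / height), and the TOC is
    the sum of the two opposing wall angles."""
    wall_angle = math.degrees(math.atan((r_base - r_occlusal) / height))
    return 2.0 * wall_angle

# E.g., a preparation 4 mm high whose radius shrinks from 3.0 mm at the base
# to 2.79 mm occlusally has a TOC of about 6 degrees.
print(total_occlusal_convergence(3.0, 2.79, 4.0))
```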

  3. In-line monitoring of Li-ion battery electrode porosity and areal loading using active thermal scanning - modeling and initial experiment

    DOE PAGES

    Rupnowski, Przemyslaw; Ulsh, Michael J.; Sopori, Bhushan; ...

    2017-08-18

    This work focuses on a new technique called active thermal scanning for in-line monitoring of porosity and areal loading of Li-ion battery electrodes. In this technique a moving battery electrode is subjected to thermal excitation and the induced temperature rise is monitored using an infra-red camera. Static and dynamic experiments with speeds up to 1.5 m min-1 are performed on both cathodes and anodes, and a combined micro- and macro-scale finite element thermal model of the system is developed. It is shown experimentally and through simulations that during thermal scanning the temperature profile generated in an electrode depends on both coating porosity (or areal loading) and thickness. Here, it is concluded that by inverting this relation the porosity (or areal loading) can be determined, if thermal response and thickness are simultaneously measured.

  4. In-line monitoring of Li-ion battery electrode porosity and areal loading using active thermal scanning - modeling and initial experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupnowski, Przemyslaw; Ulsh, Michael J.; Sopori, Bhushan

    This work focuses on a new technique called active thermal scanning for in-line monitoring of porosity and areal loading of Li-ion battery electrodes. In this technique a moving battery electrode is subjected to thermal excitation and the induced temperature rise is monitored using an infra-red camera. Static and dynamic experiments with speeds up to 1.5 m min-1 are performed on both cathodes and anodes, and a combined micro- and macro-scale finite element thermal model of the system is developed. It is shown experimentally and through simulations that during thermal scanning the temperature profile generated in an electrode depends on both coating porosity (or areal loading) and thickness. It is concluded that by inverting this relation the porosity (or areal loading) can be determined, if thermal response and thickness are simultaneously measured.

  5. Towards simultaneous measurements of electronic and structural properties in ultra-fast x-ray free electron laser absorption spectroscopy experiments

    DOE PAGES

    Gaudin, J.; Fourment, C.; Cho, B. I.; ...

    2014-04-17

    The rapidly growing ultrafast science with X-ray lasers unveils atomic scale processes with unprecedented time resolution, bringing the so-called “molecular movie” within reach. X-ray absorption spectroscopy is one of the most powerful x-ray techniques, providing both local atomic order and electronic structure when coupled with ad-hoc theory. Collecting absorption spectra within a few x-ray pulses is possible only in a dispersive setup. We demonstrate ultrafast time-resolved measurements of the LIII-edge x-ray absorption near-edge spectra of irreversibly laser-excited molybdenum using an average of only a few x-ray pulses, with a signal-to-noise ratio limited only by the saturation level of the detector. The simplicity of the experimental set-up makes this technique versatile and applicable for a wide range of pump-probe experiments, particularly in the case of non-reversible processes.

  6. In-line monitoring of Li-ion battery electrode porosity and areal loading using active thermal scanning - modeling and initial experiment

    NASA Astrophysics Data System (ADS)

    Rupnowski, Przemyslaw; Ulsh, Michael; Sopori, Bhushan; Green, Brian G.; Wood, David L.; Li, Jianlin; Sheng, Yangping

    2018-01-01

    This work focuses on a new technique called active thermal scanning for in-line monitoring of porosity and areal loading of Li-ion battery electrodes. In this technique a moving battery electrode is subjected to thermal excitation and the induced temperature rise is monitored using an infra-red camera. Static and dynamic experiments with speeds up to 1.5 m min-1 are performed on both cathodes and anodes, and a combined micro- and macro-scale finite element thermal model of the system is developed. It is shown experimentally and through simulations that during thermal scanning the temperature profile generated in an electrode depends on both coating porosity (or areal loading) and thickness. It is concluded that by inverting this relation the porosity (or areal loading) can be determined, if thermal response and thickness are simultaneously measured.
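The inversion step described above can be sketched schematically. The linear forward model below is purely hypothetical (the paper uses a finite-element thermal model), but it shows how a measured temperature rise, together with a simultaneously measured thickness, pins down porosity.

```python
import numpy as np

def forward_temperature_rise(porosity, thickness_um):
    """Hypothetical forward model (illustration only): the induced temperature
    rise is assumed to grow linearly with porosity and thickness. The real
    relation comes from the paper's finite-element thermal model."""
    return 5.0 + 20.0 * porosity + 0.02 * thickness_um

def invert_porosity(delta_t, thickness_um, grid=np.linspace(0.0, 0.6, 601)):
    """Invert the forward model for porosity at a known, simultaneously
    measured thickness, by nearest match on a precomputed porosity grid."""
    predictions = forward_temperature_rise(grid, thickness_um)
    return grid[np.argmin(np.abs(predictions - delta_t))]

# Synthetic measurement: true porosity 0.35 at 80 um thickness.
measured = forward_temperature_rise(0.35, 80.0)
print(invert_porosity(measured, 80.0))
```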

  7. Tomographic phase microscopy and its biological applications

    NASA Astrophysics Data System (ADS)

    Choi, Wonshik

    2012-12-01

    Conventional interferometric microscopy techniques such as digital holographic microscopy and quantitative phase microscopy are often classified as 3D imaging techniques because a recorded complex field image can be numerically propagated to a different depth. In a strict sense, however, a single complex field image contains only 2D information on a specimen. The measured 2D image is only a subset of the 3D structure. For the 3D mapping of an object, multiple independent 2D images are to be taken, for example at multiple incident angles or wavelengths, and then combined by so-called optical diffraction tomography (ODT). In this Letter, tomographic phase microscopy (TPM), which experimentally realizes the concept of ODT for the 3D mapping of biological cells in their native state, is reviewed, and some of its interesting biological and biomedical applications are introduced.

  8. Three Reading Comprehension Strategies: TELLS, Story Mapping, and QARs.

    ERIC Educational Resources Information Center

    Sorrell, Adrian L.

    1990-01-01

    Three reading comprehension strategies are presented to assist learning-disabled students: an advance organizer technique called "TELLS Fact or Fiction" used before reading a passage, a schema-based technique called "Story Mapping" used while reading, and a postreading method of categorizing questions called…

  9. Thermal measurement of brake pad lining surfaces during the braking process

    NASA Astrophysics Data System (ADS)

    Piątkowski, Tadeusz; Polakowski, Henryk; Kastek, Mariusz; Baranowski, Pawel; Damaziak, Krzysztof; Małachowski, Jerzy; Mazurkiewicz, Łukasz

    2012-06-01

    This paper presents the test campaign concept and definition and the analysis of the recorded measurements. Brakes are among the most important systems in cars and trucks. The braking temperature on a lining surface can rise above 500°C, which illustrates how strict, and continuously rising, the requirements on linings are. Besides experimental tests, numerical analyses are a very supportive method for investigating the processes that occur on brake pad linings. Experimental tests were conducted on a test machine called IL-68. The main component of the IL-68 is the so-called frictional unit, which consists of a rotational head, which conveys the shaft torque and on which the counter-samples are placed, and a translational head, on which the coating samples are placed and pressed against the counter-samples. Due to the high rotational speeds, and thus the rapid changes in the temperature field, an infrared camera was used for testing. The paper presents the analysis of thermograms registered during tests under different conditions. Furthermore, a numerical model of this testing machine was developed. To avoid resource-demanding analyses, only the frictional unit (described above) was taken into consideration. The geometrical model was first created using CAD techniques and then used as the basis for the finite element model. Material properties and boundary conditions correspond exactly to the experimental tests. Computations were performed using the dynamic LS-Dyna code, where heat generation was estimated assuming full (100%) conversion of the mechanical work done by friction forces. The paper also presents the results of the dynamic thermomechanical analysis, which were compared with the laboratory tests.
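The heat-generation assumption used in the simulations (full conversion of friction work into heat) amounts to the standard interface heat flux q'' = mu * p * v. The numerical values below are illustrative, not taken from the paper.

```python
def frictional_heat_flux(mu, pressure_pa, velocity_m_s):
    """Heat flux (W/m^2) at the pad-disc interface assuming 100% conversion
    of friction work into heat: q'' = mu * p * v."""
    return mu * pressure_pa * velocity_m_s

# Illustrative values: mu = 0.4, contact pressure 1 MPa, sliding speed 10 m/s.
q = frictional_heat_flux(0.4, 1.0e6, 10.0)
print(q)  # 4000000.0 W/m^2, i.e. 4 MW/m^2
```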

  10. Close contacts at the interface: Experimental-computational synergies for solving complexity problems

    NASA Astrophysics Data System (ADS)

    Torras, Juan; Zanuy, David; Bertran, Oscar; Alemán, Carlos; Puiggalí, Jordi; Turón, Pau; Revilla-López, Guillem

    2018-02-01

    The study of materials science has long been devoted to the disentanglement of bulk structures, which mainly entails finding the inner structure of materials. That structure accounts for a major portion of materials' properties. Yet, as our knowledge of these "backbones" grew, so did the interest in the properties of materials' boundaries, that is, the properties at the frontier with the surrounding environment, called the interface. The interface is thus to be understood as the sum of the material's surface plus the surrounding environment, be it in the solid, liquid or gas phase. The study of phenomena at this interface requires the use of both experimental and theoretical techniques and, above all, a wise combination of them in order to shed light on the most intimate details at the atomic, molecular and mesostructural levels. Here, we report several cases as proof of concept of the results achieved when studying interface phenomena by combining a myriad of experimental and theoretical tools to overcome the usual limitations regarding atomic detail, size and time scales, and systems of complex composition. Real-world examples of the combined experimental-theoretical work, and new tools (software), are offered to the readers.

  11. Jet-mixing of initially-stratified liquid-liquid pipe flows: experiments and numerical simulations

    NASA Astrophysics Data System (ADS)

    Wright, Stuart; Ibarra-Hernandes, Roberto; Xie, Zhihua; Markides, Christos; Matar, Omar

    2016-11-01

    Low pipeline velocities lead to stratification and so-called 'phase slip' in horizontal liquid-liquid flows due to differences in liquid densities and viscosities. Stratified flows have no suitable single point for sampling from which average phase properties (e.g. fractions) can be established. Inline mixing, achieved by static mixers or jets in cross-flow (JICF), is often used to overcome liquid-liquid stratification by establishing unstable two-phase dispersions for sampling. Achieving dispersions in liquid-liquid pipeline flows using JICF is the subject of this experimental and modelling work. The experimental facility involves a refractive-index-matched liquid-liquid-solid system, featuring an ETFE test section, with silicone oil and a 51-wt% glycerol solution as the experimental liquids. The matching then allows the dispersed-phase fractions and velocity fields to be established through advanced optical techniques, namely PLIF (for phase) and PTV or PIV (for velocity fields). CFD codes using the volume-of-fluid (VOF) method are then used to demonstrate JICF breakup and dispersion in stratified pipeline flows. A number of simple jet configurations are described and their dispersion effectiveness is compared with the experimental results. Funding from Cameron for Ph.D. studentship (SW) gratefully acknowledged.

  12. Negative refraction angular characterization in one-dimensional photonic crystals.

    PubMed

    Lugo, Jesus Eduardo; Doti, Rafael; Faubert, Jocelyn

    2011-04-06

    Photonic crystals are artificial structures that have periodic dielectric components with different refractive indices. Under certain conditions, they abnormally refract the light, a phenomenon called negative refraction. Here we experimentally characterize negative refraction in a one-dimensional photonic crystal structure, near the low-frequency edge of the fourth photonic bandgap. We compare the experimental results with current theory and a theory based on the group velocity developed here. We also analytically derived the negative refraction correctness condition that gives the angular region where negative refraction occurs. By using standard photonic techniques we experimentally determined the relationship between incidence and negative refraction angles and found the negative refraction range by applying the correctness condition. In order to compare both theories with experimental results an output refraction correction was utilized. The correction uses Snell's law and an effective refractive index based on two effective dielectric constants. We found good agreement between experiment and both theories in the negative refraction zone. Since both theories and the experimental observations agreed well in the negative refraction region, we can use both negative refraction theories plus the output correction to predict negative refraction angles. This can be very useful from a practical point of view for space filtering applications such as a photonic demultiplexer or for sensing applications.
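The output refraction correction can be sketched with Snell's law and an effective index. The combination rule for the two effective dielectric constants below (a simple fill-fraction-weighted average) and all numerical values are assumptions for illustration, not necessarily what the paper uses.

```python
import math

def effective_index(eps1, eps2, f1):
    """Effective refractive index from two effective dielectric constants,
    combined here by a fill-fraction-weighted average (an assumed rule)."""
    return math.sqrt(f1 * eps1 + (1.0 - f1) * eps2)

def output_angle(theta_inside_deg, n_eff, n_out=1.0):
    """Snell's-law output correction: the angle in the outer medium for a ray
    travelling at theta_inside inside a medium of index n_eff."""
    s = n_eff * math.sin(math.radians(theta_inside_deg)) / n_out
    if abs(s) > 1.0:
        raise ValueError("total internal reflection")
    return math.degrees(math.asin(s))

# Hypothetical layer dielectric constants 4.0 and 2.25, equal fill fractions.
n_eff = effective_index(4.0, 2.25, 0.5)
print(n_eff, output_angle(15.0, n_eff))
```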

  13. Negative Refraction Angular Characterization in One-Dimensional Photonic Crystals

    PubMed Central

    Lugo, Jesus Eduardo; Doti, Rafael; Faubert, Jocelyn

    2011-01-01

    Background Photonic crystals are artificial structures that have periodic dielectric components with different refractive indices. Under certain conditions, they abnormally refract the light, a phenomenon called negative refraction. Here we experimentally characterize negative refraction in a one-dimensional photonic crystal structure, near the low-frequency edge of the fourth photonic bandgap. We compare the experimental results with current theory and a theory based on the group velocity developed here. We also analytically derived the negative refraction correctness condition that gives the angular region where negative refraction occurs. Methodology/Principal Findings By using standard photonic techniques we experimentally determined the relationship between incidence and negative refraction angles and found the negative refraction range by applying the correctness condition. In order to compare both theories with experimental results an output refraction correction was utilized. The correction uses Snell's law and an effective refractive index based on two effective dielectric constants. We found good agreement between experiment and both theories in the negative refraction zone. Conclusions/Significance Since both theories and the experimental observations agreed well in the negative refraction region, we can use both negative refraction theories plus the output correction to predict negative refraction angles. This can be very useful from a practical point of view for space filtering applications such as a photonic demultiplexer or for sensing applications. PMID:21494332

  14. Adsorption-induced deformation of nanoporous materials—A review

    NASA Astrophysics Data System (ADS)

    Gor, Gennady Y.; Huber, Patrick; Bernstein, Noam

    2017-03-01

    When a solid surface accommodates guest molecules, they induce noticeable stresses to the surface and cause its strain. Nanoporous materials have high surface area and, therefore, are very sensitive to this effect called adsorption-induced deformation. In recent years, there has been significant progress in both experimental and theoretical studies of this phenomenon, driven by the development of new materials as well as advanced experimental and modeling techniques. Also, adsorption-induced deformation has been found to manifest in numerous natural and engineering processes, e.g., drying of concrete, water-actuated movement of non-living plant tissues, change of permeation of zeolite membranes, swelling of coal and shale, etc. In this review, we summarize the most recent experimental and theoretical findings on adsorption-induced deformation and present the state-of-the-art picture of thermodynamic and mechanical aspects of this phenomenon. We also reflect on the existing challenges related both to the fundamental understanding of this phenomenon and to selected applications, e.g., in sensing and actuation, and in natural gas recovery and geological CO2 sequestration.

  15. Numerical and Experimental study of secondary flows in a rotating two-phase flow: the tea leaf paradox

    NASA Astrophysics Data System (ADS)

    Calderer, Antoni; Neal, Douglas; Prevost, Richard; Mayrhofer, Arno; Lawrenz, Alan; Foss, John; Sotiropoulos, Fotis

    2015-11-01

    Secondary flows in a rotating flow in a cylinder, resulting in the so-called ``tea leaf paradox'', are fundamental for understanding atmospheric pressure systems, developing techniques for separating red blood cells from plasma, and even separating coagulated trub in the beer brewing process. We seek to gain deeper insights into this phenomenon by integrating numerical simulations and experiments. We employ the curvilinear immersed boundary (CURVIB) method of Calderer et al. (J. Comp. Physics 2014), a two-phase flow solver based on the level set method, to simulate rotating free-surface flow in a cylinder partially filled with water, as in the tea leaf paradox flow. We first demonstrate the validity of the numerical model by simulating a cylinder with a rotating base filled with a single fluid, obtaining results in excellent agreement with available experimental data. Then, we present results for the cylinder case with a free surface, investigate the complex formation of secondary flow patterns, and show comparisons with new experimental data for this flow obtained by LaVision. Computational resources were provided by the Minnesota Supercomputing Institute.

  16. Experimental Study of Characteristics of Micro-Hole Porous Skins for Turbulent Skin Friction Reduction

    NASA Technical Reports Server (NTRS)

    Hwang, Danny P.

    2002-01-01

    Characteristics of micro-hole porous skins for the turbulent skin friction reduction technology called the micro-blowing technique (MBT) were assessed experimentally at Mach 0.4 and blowing fractions from zero to 0.005. The objective of this study was to provide guidelines for the selection of porous plates for MBT. The hole angle, pattern, diameter, aspect ratio, and porosity were the parameters considered for this study. The additional effort to angle and stagger the holes was experimentally determined to be unwarranted in terms of skin friction benefit; therefore, these parameters were systematically eliminated from the parametric study. The impact of the remaining three parameters was evaluated by fixing two parameters at the reference values while varying the third parameter. The best hole-diameter Reynolds number was found to be around 400, with an optimum aspect ratio of about 6. The optimum porosity was not conclusively discerned because the range of porosities in the test plates considered was not great enough. However, the porosity was estimated to be about 15 percent or less.
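The reported optima translate into simple sizing arithmetic: Re_d = rho * V * d / mu and aspect ratio = plate thickness / hole diameter. The air properties and throughflow velocity below are assumptions for illustration, since the abstract does not state which velocity defines Re_d.

```python
def hole_diameter_reynolds(rho, velocity, diameter, mu):
    """Hole-diameter Reynolds number Re_d = rho * V * d / mu."""
    return rho * velocity * diameter / mu

def aspect_ratio(plate_thickness, diameter):
    """Hole aspect ratio, taken here as plate thickness over hole diameter."""
    return plate_thickness / diameter

# Assumed sea-level air (rho = 1.225 kg/m^3, mu = 1.8e-5 Pa*s) and an assumed
# throughflow velocity of 3 m/s: the diameter giving Re_d ~= 400 is roughly 2 mm.
rho, mu_air, v = 1.225, 1.8e-5, 3.0
d = 400.0 * mu_air / (rho * v)
print(d * 1e3, "mm")
print(aspect_ratio(6 * d, d))  # a plate 6 diameters thick hits the optimum ratio
```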

  17. Linking amphibian call structure to the environment: the interplay between phenotypic flexibility and individual attributes.

    PubMed

    Ziegler, Lucía; Arim, Matías; Narins, Peter M

    2011-05-01

    The structure of the environment surrounding signal emission produces different patterns of degradation and attenuation. The expected adjustment of calls to ensure signal transmission in an environment was formalized in the acoustic adaptation hypothesis. Within this framework, most studies considered anuran calls as fixed attributes determined by local adaptations. However, variability in vocalizations as a product of phenotypic expression has also been reported. Empirical evidence supporting the association between environment and call structure has been inconsistent, particularly in anurans. Here, we identify a plausible causal structure connecting environment, individual attributes, and temporal and spectral adjustments as direct or indirect determinants of the observed variation in call attributes of the frog Hypsiboas pulchellus. For that purpose, we recorded the calls of 40 males in the field, together with vegetation density and other environmental descriptors of the calling site. Path analysis revealed a strong effect of habitat structure on the temporal parameters of the call, and an effect of site temperature conditioning the size of organisms calling at each site and thus indirectly affecting the dominant frequency of the call. Experimental habitat modification with a styrofoam enclosure yielded results consistent with field observations, highlighting the potential role of call flexibility in the detected call patterns. Both experimental and correlative results indicate the need to incorporate the so far poorly considered role of phenotypic plasticity in the complex connection between environmental structure and individual call attributes.

  18. Multirobot autonomous landmine detection using distributed multisensor information aggregation

    NASA Astrophysics Data System (ADS)

    Jumadinova, Janyl; Dasgupta, Prithviraj

    2012-06-01

    We consider the problem of distributed sensor information fusion by multiple autonomous robots within the context of landmine detection. We assume that different landmines can be composed of different types of material and that robots are equipped with different types of sensors, each robot carrying only one type of landmine detection sensor. We introduce a novel technique that uses a market-based information aggregation mechanism called a prediction market. Each robot is provided with a software agent that uses the sensory input of the robot and performs the calculations of the prediction market technique. The result of the agent's calculations is a 'belief' representing the confidence of the agent in identifying the object as a landmine. The beliefs from different robots are aggregated by the market mechanism and passed on to a decision maker agent. The decision maker agent uses this aggregate belief information about a potential landmine and makes decisions about which other robots should be deployed to its location, so that the landmine can be confirmed rapidly and accurately. Our experimental results show that, for identical data distributions and settings, our prediction market-based information aggregation technique improves the accuracy of object classification compared with two other commonly used techniques.
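    The paper's prediction-market mechanism is not reproduced here; as a hedged sketch of multi-robot belief aggregation in this spirit, the following uses a weighted log-odds (logarithmic opinion) pool, with the weights standing in for sensor reliability. All numbers are illustrative.

```python
import math

def aggregate_beliefs(beliefs, weights):
    """Combine per-robot landmine beliefs (probabilities strictly in (0, 1))
    into one aggregate estimate via a weighted log-odds pool."""
    assert len(beliefs) == len(weights)
    total = sum(weights)
    log_odds = sum(w / total * math.log(p / (1.0 - p))
                   for p, w in zip(beliefs, weights))
    return 1.0 / (1.0 + math.exp(-log_odds))

# Three robots with different sensor types report beliefs about one object;
# the more reliable sensor (weight 2.0) pulls the aggregate toward its view.
belief = aggregate_beliefs([0.8, 0.6, 0.9], [1.0, 1.0, 2.0])
print(round(belief, 3))
```

A decision-maker agent could then compare this aggregate against a threshold to decide whether to deploy further robots to the object's location.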

  19. Improving promoter prediction for the NNPP2.2 algorithm: a case study using Escherichia coli DNA sequences.

    PubMed

    Burden, S; Lin, Y-X; Zhang, R

    2005-03-01

    Although a great deal of research has been undertaken in the area of promoter prediction, prediction techniques are still not fully developed. Many algorithms tend to exhibit poor specificity, generating many false positives, or poor sensitivity. The neural network prediction program NNPP2.2 is one such example. To improve the NNPP2.2 prediction technique, the distance between the transcription start site (TSS) associated with the promoter and the translation start site (TLS) of the subsequent gene coding region has been studied for Escherichia coli K12 bacteria. An empirical probability distribution that is consistent for all E. coli promoters has been established. This information is combined with the results from NNPP2.2 to create a new technique called TLS-NNPP, which improves the specificity of promoter prediction. The technique is shown to be effective using E. coli DNA sequences; however, it is applicable to any organism for which a set of promoters has been experimentally defined. The data used in this project and the prediction results for the tested sequences can be obtained from http://www.uow.edu.au/~yanxia/E_Coli_paper/SBurden_Results.xls alh98@uow.edu.au.
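    A minimal sketch of the combination idea: weight a raw NNPP-style promoter score by an empirical probability of the TSS-TLS distance. The binned prior, bin size, and function name below are invented for illustration, not the paper's empirical distribution.

```python
# Hypothetical empirical probability that a true promoter lies at a given
# TSS-TLS distance, binned in 50-bp windows (made-up numbers).
distance_prior = {0: 0.05, 50: 0.30, 100: 0.35, 150: 0.20, 200: 0.10}

def tls_nnpp_score(nnpp_score, tss_tls_distance, bin_size=50):
    """Combine a raw NNPP score with the distance prior."""
    bin_start = (tss_tls_distance // bin_size) * bin_size
    prior = distance_prior.get(bin_start, 0.0)  # unseen distances get zero weight
    return nnpp_score * prior

# A candidate 120 bp upstream of the TLS is up-weighted relative to one
# 400 bp away, which suppresses distant false positives.
near = tls_nnpp_score(0.9, 120)
far = tls_nnpp_score(0.9, 400)
print(near > far)
```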

  20. An optimized resistor pattern for temperature gradient control in microfluidics

    NASA Astrophysics Data System (ADS)

    Selva, Bertrand; Marchalot, Julien; Jullien, Marie-Caroline

    2009-06-01

    In this paper, we demonstrate the possibility of generating high-temperature gradients with a linear temperature profile when heating is provided in situ. Thanks to improved optimization algorithms, the shape of resistors, which constitute the heating source, is optimized by applying the genetic algorithm NSGA-II (acronym for the non-dominated sorting genetic algorithm) (Deb et al 2002 IEEE Trans. Evol. Comput. 6 2). Experimental validation of the linear temperature profile within the cavity is carried out using a thermally sensitive fluorophore, called Rhodamine B (Ross et al 2001 Anal. Chem. 73 4117-23, Erickson et al 2003 Lab Chip 3 141-9). The high level of agreement obtained between experimental and numerical results serves to validate the accuracy of this method for generating highly controlled temperature profiles. In the field of actuation, such a device is of potential interest since it allows for controlling bubbles or droplets moving by means of thermocapillary effects (Baroud et al 2007 Phys. Rev. E 75 046302). Digital microfluidics is a critical area in the field of microfluidics (Dreyfus et al 2003 Phys. Rev. Lett. 90 14) as well as in the so-called lab-on-a-chip technology. Through an example, the large application potential of such a technique is demonstrated, which entails handling a single bubble driven along a cavity using simple and tunable embedded resistors.
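    NSGA-II ranks candidate designs by non-dominated sorting; a minimal sketch of that sorting step (for minimization of all objectives) is shown below. The objective vectors are illustrative stand-ins, e.g. deviation from a linear temperature profile versus dissipated power, not the paper's resistor model.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    """Partition candidate indices into successive non-dominated fronts."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Two objectives per candidate resistor shape (illustrative values).
pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(non_dominated_fronts(pts))  # → [[0, 1, 3], [2], [4]]
```

NSGA-II proper adds crowding-distance selection and genetic operators on top of this ranking; the sort alone already shows which designs are Pareto-optimal trade-offs.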

  1. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has received very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a previously unavailable way of handling structural identifiability in such models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Pulse-shape discrimination techniques for the COBRA double beta-decay experiment at LNGS

    NASA Astrophysics Data System (ADS)

    Zatschler, S.; COBRA Collaboration

    2017-09-01

    In modern elementary particle physics several questions arise from the fact that neutrino oscillation experiments have found neutrinos to be massive. Among them is the so far unknown nature of neutrinos: either they act as so-called Majorana particles, where one cannot distinguish between particle and antiparticle, or they are Dirac particles like all the other fermions in the Standard Model. The study of neutrinoless double beta-decay (0νββ-decay), where the lepton number conservation is violated by two units, could answer the question regarding the underlying nature of neutrinos and might also shed light on the mechanism responsible for the mass generation. So far there is no experimental evidence for the existence of 0νββ-decay, hence, existing experiments have to be improved and novel techniques should be explored. One of the next-generation experiments dedicated to the search for this ultra-rare decay is the COBRA experiment. This article gives an overview of techniques to identify and reject background based on pulse-shape discrimination.

  3. Parallel halftoning technique using dot diffusion optimization

    NASA Astrophysics Data System (ADS)

    Molina-Garcia, Javier; Ponomaryov, Volodymyr I.; Reyes-Reyes, Rogelio; Cruz-Ramos, Clara

    2017-05-01

    In this paper, a novel approach for halftone images is proposed and implemented for images obtained by the dot diffusion (DD) method. The designed technique is based on optimizing the so-called class matrix used in the DD algorithm; it consists of generating new versions of the class matrix containing no baron or near-baron entries, in order to minimize inconsistencies during the distribution of the error. Each proposed class matrix has different properties and is designed for one of two applications: applications where inverse halftoning is necessary, and applications where it is not required. The proposed method has been implemented on a GPU (NVIDIA GeForce GTX 750 Ti) and on multicore processors (AMD FX(tm)-6300 Six-Core Processor and Intel Core i5-4200U), using CUDA and OpenCV on a PC running Linux. Experimental results have shown that the novel framework generates good-quality halftone images and inverse-halftone images. The simulation results using parallel architectures have demonstrated the efficiency of the novel technique when implemented in real-time processing.
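    A minimal sketch of the underlying dot diffusion procedure, assuming a toy 2x2 class matrix rather than the optimized larger matrices the paper works with: pixels are thresholded in the order given by the class matrix, and the quantization error is pushed only to neighbours whose class value is processed later.

```python
CLASS_MATRIX = [[0, 2],
                [3, 1]]  # toy 2x2 class matrix, tiled over the image

def dot_diffuse(image):
    """Halftone an 8-bit grayscale image (list of lists) by dot diffusion."""
    h, w = len(image), len(image[0])
    img = [list(row) for row in image]   # working copy accumulating error
    out = [[0] * w for _ in range(h)]

    def order(y, x):
        return CLASS_MATRIX[y % 2][x % 2]

    for cls in range(4):                 # process class values in order
        for y in range(h):
            for x in range(w):
                if order(y, x) != cls:
                    continue
                out[y][x] = 255 if img[y][x] >= 128 else 0
                err = img[y][x] - out[y][x]
                # only neighbours with a later class absorb the error
                later = [(ny, nx)
                         for ny in (y - 1, y, y + 1)
                         for nx in (x - 1, x, x + 1)
                         if 0 <= ny < h and 0 <= nx < w
                         and order(ny, nx) > cls]
                for ny, nx in later:
                    img[ny][nx] += err / len(later)
    return out

halftone = dot_diffuse([[100, 200], [50, 180]])
print(halftone)  # → [[0, 255], [0, 255]]
```

A baron is a class-matrix position with no later-class neighbour to absorb its error; the paper's optimization removes such entries, whereas this toy matrix simply drops the last pixel's residual error.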

  4. Impact of corpus domain for sentiment classification: An evaluation study using supervised machine learning techniques

    NASA Astrophysics Data System (ADS)

    Karsi, Redouane; Zaim, Mounia; El Alami, Jamila

    2017-07-01

    Thanks to the development of the internet, a large community now has the possibility to communicate and express its opinions and preferences through multiple media such as blogs, forums, social networks and e-commerce sites. Today, it becomes clearer that opinions published on the web are a very valuable source for decision-making, so a rapidly growing field of research called “sentiment analysis” has emerged to address the problem of automatically determining the polarity (positive, negative, neutral, …) of textual opinions. People expressing themselves in a particular domain often use specific domain language expressions; thus, building a classifier which performs well in different domains is a challenging problem. The purpose of this paper is to evaluate the impact of domain for sentiment classification when using machine learning techniques. In our study, three popular machine learning techniques, Support Vector Machines (SVM), Naive Bayes and K-nearest neighbors (KNN), were applied on datasets collected from different domains. Experimental results show that Support Vector Machines outperforms the other classifiers in all domains, since it achieved at least 74.75% accuracy with a standard deviation of 4.08.
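    As a sketch of one of the three evaluated classifier families, the following implements a from-scratch multinomial Naive Bayes on an invented two-domain toy corpus. The reviews, labels, and smoothing choice are illustrative only, not the paper's datasets or experimental setup.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Count class and per-class word frequencies from labelled documents."""
    class_counts = Counter(labels)
    word_counts = defaultdict(Counter)
    vocab = set()
    for doc, label in zip(docs, labels):
        for word in doc.lower().split():
            word_counts[label][word] += 1
            vocab.add(word)
    return class_counts, word_counts, vocab

def predict_nb(model, doc):
    """Return the label with the highest smoothed log-posterior."""
    class_counts, word_counts, vocab = model
    n = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label, count in class_counts.items():
        lp = math.log(count / n)
        total = sum(word_counts[label].values())
        for word in doc.lower().split():
            # Laplace smoothing over the shared vocabulary
            lp += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Toy reviews from two domains (electronics, movies) sharing opinion words.
docs = ["great camera excellent battery", "terrible screen poor battery",
        "excellent plot great acting", "poor script terrible pacing"]
labels = ["pos", "neg", "pos", "neg"]
model = train_nb(docs, labels)
print(predict_nb(model, "great battery"))  # → pos
```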

  5. Phase noise suppression through parametric filtering

    NASA Astrophysics Data System (ADS)

    Cassella, Cristian; Strachan, Scott; Shaw, Steven W.; Piazza, Gianluca

    2017-02-01

    In this work, we introduce and experimentally demonstrate a parametric phase noise suppression technique, which we call "parametric phase noise filtering." This technique is based on the use of a solid-state parametric amplifier operating in its instability region and included in a non-autonomous feedback loop connected at the output of a noisy oscillator. We demonstrate that such a system behaves as a parametrically driven Duffing resonator and can operate at special points where it becomes largely immune to the phase fluctuations that affect the oscillator output signal. A prototype of a parametric phase noise filter (PFIL) was designed and fabricated to operate in the very-high-frequency range. The PFIL prototype allowed us to significantly reduce the phase noise at the output of a commercial signal generator operating around 220 MHz. Noise reductions of 16 dB (40×) and 13 dB (20×) were obtained, respectively, at 1 and 10 kHz offsets from the carrier frequency. The demonstration of this phase noise suppression technique opens up scenarios in the development of passive and low-cost phase noise cancellation circuits for any application demanding high-quality frequency generation.

  6. Design and implementation of a robust and cost-effective double-scattering system at a horizontal proton beamline

    NASA Astrophysics Data System (ADS)

    Helmbrecht, S.; Baumann, M.; Enghardt, W.; Fiedler, F.; Krause, M.; Lühr, A.

    2016-11-01

    Purpose: particle therapy has the potential to improve radiooncology. With more and more facilities coming into operation, the interest in research at proton beams also increases. Though many centers provide beam in an experimental room, some of them do not feature a device for radiation field shaping, a so-called nozzle. Therefore, a robust and cost-effective double-scattering system for horizontal proton beamlines has been designed and implemented. Materials and methods: the nozzle is based on the double-scattering technique. Two lead scatterers, an aluminum ridge filter and two brass collimators were optimized in a simulation study to form a laterally homogeneous 10 cm × 10 cm field with a spread-out Bragg peak (SOBP). The parts were mainly manufactured using 3D printing techniques and the system was set up at OncoRay's experimental beamline. Measurements of the radiation field were carried out using a water phantom. Results: high levels of dose homogeneity were found laterally (dose variation ΔD/D < ±2%) as well as in the beam direction (ΔD/D < ±3% in the SOBP). The system has already been used for radiobiological and physical experiments. Conclusion: the presented setup allows for creating clinically realistic extended radiation fields at fixed horizontal proton beamlines and is ready to use for internal and external users. The excellent performance combined with the simple design makes it a valuable option for proton therapy centers intending to foster their experimental portfolio.

  7. Micro- and nano-scale optical devices for high density photonic integrated circuits at near-infrared wavelengths

    NASA Astrophysics Data System (ADS)

    Chatterjee, Rohit

    In this research work, we explore fundamental silicon-based active and passive photonic devices that can be integrated together to form functional photonic integrated circuits. The devices which include power splitters, switches and lenses are studied starting from their physics, their design and fabrication techniques and finally from an experimental standpoint. The experimental results reveal high performance devices that are compatible with standard CMOS fabrication processes and can be easily integrated with other devices for near infrared telecom applications. In Chapter 2, a novel method for optical switching using nanomechanical proximity perturbation technique is described and demonstrated. The method which is experimentally demonstrated employs relatively low powers, small chip footprint and is compatible with standard CMOS fabrication processes. Further, in Chapter 3, this method is applied to develop a hitless bypass switch aimed at solving an important issue in current wavelength division multiplexing systems namely hitless switching of reconfigurable optical add drop multiplexers. Experimental results are presented to demonstrate the application of the nanomechanical proximity perturbation technique to practical situations. In Chapter 4, a fundamental photonic component namely the power splitter is described. Power splitters are important components for any photonic integrated circuits because they help split the power from a single light source to multiple devices on the same chip so that different operations can be performed simultaneously. The power splitters demonstrated in this chapter are based on multimode interference principles resulting in highly compact low loss and highly uniform power splitting to split the power of the light from a single channel to two and four channels. These devices can further be scaled to achieve higher order splitting such as 1x16 and 1x32 power splits. 
Finally in Chapter 5 we overcome challenges in device fabrication and measurement techniques to demonstrate for the first time a "superlens" for the technologically important near-infrared wavelength range, with the opportunity to scale down further to visible wavelengths. The observed resolution is 0.47lambda, clearly smaller than the diffraction limit of 0.61lambda, and is supported by detailed theoretical analyses and comprehensive numerical simulations. Importantly, we clearly show for the first time that this subdiffraction-limit imaging is due to the resonant excitation of surface slab modes, permitting amplification of evanescent waves. The demonstrated "superlens" has the largest figure of merit reported to date, both theoretically and experimentally. The techniques and devices described in this thesis can be further applied to develop new devices with different functionalities. In Chapter 6 we describe two examples using these ideas. First, we experimentally demonstrate the use of the nanomechanical proximity perturbation technique to develop a phase retarder for on-chip all-state polarization control. Next, we use the negative refraction photonic crystals described in Chapter 5 to achieve a special kind of bandgap called the zero-n¯ bandgap having unique properties.

  8. CMB EB and TB cross-spectrum estimation via pseudospectrum techniques

    NASA Astrophysics Data System (ADS)

    Grain, J.; Tristram, M.; Stompor, R.

    2012-10-01

    We discuss methods for estimating EB and TB spectra of cosmic microwave background anisotropy maps covering a limited sky area. Such odd-parity correlations are expected to vanish whenever parity is not broken. As this is indeed the case in the standard cosmologies, any evidence to the contrary would have a profound impact on our theories of the early Universe. Such correlations could also become a sensitive diagnostic of some particularly insidious instrumental systematics. In this work we introduce three different unbiased estimators based on the so-called standard and pure pseudo-spectrum techniques and later assess their performance by means of extensive Monte Carlo simulations performed for different experimental configurations. We find that a hybrid approach combining a pure estimate of B-mode multipoles with a standard one for E-mode (or T) multipoles leads to the smallest error bars for the EB (or TB, respectively) spectra as well as for the three other polarization-related angular power spectra (i.e., EE, BB, and TE). However, if both E and B multipoles are estimated using the pure technique, the loss of precision for the EB spectrum is not larger than ˜30%. Moreover, for the experimental configurations considered here, the statistical uncertainties (due to sampling variance and instrumental noise) of the pseudo-spectrum estimates are at most a factor ˜1.4 for the TT, EE, and TE spectra and a factor ˜2 for the BB, TB, and EB spectra, higher than the most optimistic Fisher estimate of the variance.

  9. Frequency standards requirements of the NASA deep space network to support outer planet missions

    NASA Technical Reports Server (NTRS)

    Fliegel, H. F.; Chao, C. C.

    1974-01-01

    Navigation of Mariner spacecraft to Jupiter and beyond will require greater accuracy of positional determination than heretofore obtained if the full experimental capabilities of this type of spacecraft are to be utilized. Advanced navigational techniques which will be available by 1977 include Very Long Baseline Interferometry (VLBI), three-way Doppler tracking (sometimes called quasi-VLBI), and two-way Doppler tracking. It is shown that VLBI and quasi-VLBI methods depend on the same basic concept, and that they impose nearly the same requirements on the stability of frequency standards at the tracking stations. It is also shown how a realistic modelling of spacecraft navigational errors prevents overspecifying the requirements to frequency stability.

  10. Progress in development of neutron energy spectrometer for deuterium plasma operation in KSTAR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomita, H., E-mail: tomita@nagoya-u.jp; Yamashita, F.; Nakayama, Y.

    2014-11-15

    Two types of DD neutron energy spectrometer (NES) are under development for deuterium plasma operation in KSTAR to understand behavior of beam ions in the plasma. One is based on the state-of-the-art nuclear emulsion technique. The other is based on a coincidence detection of a recoiled proton and a scattered neutron caused by an elastic scattering of an incident DD neutron, which is called an associated particle coincidence counting-NES. The prototype NES systems were installed at J-port in KSTAR in 2012. During the 2012 and 2013 experimental campaigns, multiple shots-integrated neutron spectra were preliminarily obtained by the nuclear emulsion-based NES system.

  11. Automatic movie skimming with general tempo analysis

    NASA Astrophysics Data System (ADS)

    Lee, Shih-Hung; Yeh, Chia-Hung; Kuo, C. C. J.

    2003-11-01

    Story units are extracted by general tempo analysis, including tempos of audio and visual information, in this research. Although many schemes have been proposed to successfully segment video data into shots using basic low-level features, how to group shots into meaningful units called story units is still a challenging problem. By focusing on a certain type of video such as sports or news, we can explore models with specific application domain knowledge. For movie contents, many heuristic rules based on audiovisual clues have been proposed with limited success. We propose a method to extract story units using general tempo analysis. Experimental results are given to demonstrate the feasibility and efficiency of the proposed technique.

  12. The Jeremiah Metzger lecture of The American Clinical and Climatological Association 1975. New genetic insight into old diseases.

    PubMed Central

    McKusick, V. A.

    1976-01-01

    With these three examples, examined in some detail, I have attempted to indicate new directions in medical genetics. Some of the recognizable generalities are the following: 1. With the application of cell culture techniques in the area called human somatic cell genetics, human genetics has become essentially an experimental science. Somatic cell studies have provided insight into genetic disorders that would not have been possible from studies in the whole organism. Even therapeutic possibilities can be explored. Somatic cell hybridization has substituted for controlled matings in permitting linkage studies for mapping of the human chromosomes... PMID:960419

  13. Progress in development of neutron energy spectrometer for deuterium plasma operation in KSTAR

    NASA Astrophysics Data System (ADS)

    Tomita, H.; Yamashita, F.; Nakayama, Y.; Morishima, K.; Yamamoto, Y.; Sakai, Y.; Cheon, M. S.; Isobe, M.; Ogawa, K.; Hayashi, S.; Kawarabayashi, J.; Iguchi, T.

    2014-11-01

    Two types of DD neutron energy spectrometer (NES) are under development for deuterium plasma operation in KSTAR to understand behavior of beam ions in the plasma. One is based on the state-of-the-art nuclear emulsion technique. The other is based on a coincidence detection of a recoiled proton and a scattered neutron caused by an elastic scattering of an incident DD neutron, which is called an associated particle coincidence counting-NES. The prototype NES systems were installed at J-port in KSTAR in 2012. During the 2012 and 2013 experimental campaigns, multiple shots-integrated neutron spectra were preliminarily obtained by the nuclear emulsion-based NES system.

  14. Mechanisms of Virus Assembly

    PubMed Central

    Perlmutter, Jason D.; Hagan, Michael F.

    2015-01-01

    Viruses are nanoscale entities containing a nucleic acid genome encased in a protein shell called a capsid, and in some cases surrounded by a lipid bilayer membrane. This review summarizes the physics that governs the processes by which capsids assemble within their host cells and in vitro. We describe the thermodynamics and kinetics for assembly of protein subunits into icosahedral capsid shells, and how these are modified in cases where the capsid assembles around a nucleic acid or on a lipid bilayer. We present experimental and theoretical techniques that have been used to characterize capsid assembly, and we highlight aspects of virus assembly which are likely to receive significant attention in the near future. PMID:25532951

  15. Causal Entropies – a measure for determining changes in the temporal organization of neural systems

    PubMed Central

    Waddell, Jack; Dzakpasu, Rhonda; Booth, Victoria; Riley, Brett; Reasor, Jonathan; Poe, Gina; Zochowski, Michal

    2009-01-01

    We propose a novel measure to detect temporal ordering in the activity of individual neurons in a local network, which is thought to be a hallmark of activity-dependent synaptic modifications during learning. The measure, called Causal Entropy, is based on the time-adaptive detection of asymmetries in the relative temporal patterning between neuronal pairs. We characterize properties of the measure on both simulated data and experimental multiunit recordings of hippocampal neurons from the awake, behaving rat, and show that the metric can more readily detect those asymmetries than standard cross correlation-based techniques, especially since the temporal sensitivity of causal entropy can detect such changes rapidly and dynamically. PMID:17275095
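    The causal entropy formula itself is not reproduced here; the following hedged sketch only illustrates the underlying idea of quantifying lead/lag asymmetry in the relative timing of two spike trains within a coincidence window. The function name and window value are invented for illustration.

```python
def lead_lag_asymmetry(spikes_a, spikes_b, window=5.0):
    """Return a value in [-1, 1]: +1 if neuron A always fires shortly before
    neuron B within the window, -1 if B always leads, 0 if symmetric."""
    a_leads = b_leads = 0
    for ta in spikes_a:
        for tb in spikes_b:
            dt = tb - ta
            if 0 < dt <= window:
                a_leads += 1
            elif -window <= dt < 0:
                b_leads += 1
    total = a_leads + b_leads
    return 0.0 if total == 0 else (a_leads - b_leads) / total

# Neuron A consistently fires 2 ms before neuron B (spike times in ms).
a = [0, 10, 20, 30]
b = [2, 12, 22, 32]
print(lead_lag_asymmetry(a, b))  # → 1.0
```

Unlike a full cross-correlogram, such a pairwise asymmetry statistic can be updated incrementally, which is in the spirit of the time-adaptive detection the abstract describes.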

  16. Lipid membranes and single ion channel recording for the advanced physics laboratory

    NASA Astrophysics Data System (ADS)

    Klapper, Yvonne; Nienhaus, Karin; Röcker, Carlheinz; Ulrich Nienhaus, G.

    2014-05-01

    We present an easy-to-handle, low-cost, and reliable setup to study various physical phenomena on a nanometer-thin lipid bilayer using the so-called black lipid membrane technique. The apparatus allows us to precisely measure optical and electrical properties of free-standing lipid membranes, to study the formation of single ion channels, and to gain detailed information on the ion conduction properties of these channels using statistical physics and autocorrelation analysis. The experiments are well suited as part of an advanced physics or biophysics laboratory course; they interconnect physics, chemistry, and biology and will be appealing to students of the natural sciences who are interested in quantitative experimentation.

  17. Experimental Semiautonomous Vehicle

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H.; Mishkin, Andrew H.; Litwin, Todd E.; Matthies, Larry H.; Cooper, Brian K.; Nguyen, Tam T.; Gat, Erann; Gennery, Donald B.; Firby, Robert J.; Miller, David P.; hide

    1993-01-01

    Semiautonomous rover vehicle serves as testbed for evaluation of navigation and obstacle-avoidance techniques. Designed to traverse variety of terrains. Concepts developed applicable to robots for service in dangerous environments as well as to robots for exploration of remote planets. Called Robby, vehicle 4 m long and 2 m wide, with six 1-m-diameter wheels. Mass of 1,200 kg and surmounts obstacles as large as 1 1/2 m. Optimized for development of machine-vision-based strategies and equipped with complement of vision and direction sensors and image-processing computers. Front and rear cabs steer and roll with respect to centerline of vehicle. Vehicle also pivots about central axle, so wheels comply with almost any terrain.

  18. Linking amphibian call structure to the environment: the interplay between phenotypic flexibility and individual attributes

    PubMed Central

    Arim, Matías; Narins, Peter M.

    2011-01-01

    The structure of the environment surrounding signal emission produces different patterns of degradation and attenuation. The expected adjustment of calls to ensure signal transmission in an environment was formalized in the acoustic adaptation hypothesis. Within this framework, most studies considered anuran calls as fixed attributes determined by local adaptations. However, variability in vocalizations as a product of phenotypic expression has also been reported. Empirical evidence supporting the association between environment and call structure has been inconsistent, particularly in anurans. Here, we identify a plausible causal structure connecting environment, individual attributes, and temporal and spectral adjustments as direct or indirect determinants of the observed variation in call attributes of the frog Hypsiboas pulchellus. For that purpose, we recorded the calls of 40 males in the field, together with vegetation density and other environmental descriptors of the calling site. Path analysis revealed a strong effect of habitat structure on the temporal parameters of the call, and an effect of site temperature conditioning the size of organisms calling at each site and thus indirectly affecting the dominant frequency of the call. Experimental habitat modification with a styrofoam enclosure yielded results consistent with field observations, highlighting the potential role of call flexibility in detected call patterns. Both experimental and correlative results indicate the need to incorporate the so far poorly considered role of phenotypic plasticity in the complex connection between environmental structure and individual call attributes. PMID:22479134

  19. Impact of JPEG2000 compression on endmember extraction and unmixing of remotely sensed hyperspectral data

    NASA Astrophysics Data System (ADS)

    Martin, Gabriel; Gonzalez-Ruiz, Vicente; Plaza, Antonio; Ortiz, Juan P.; Garcia, Inmaculada

    2010-07-01

    Lossy hyperspectral image compression has received considerable interest in recent years due to the extremely high dimensionality of the data. However, the impact of lossy compression on spectral unmixing techniques has not been widely studied. These techniques characterize mixed pixels (resulting from insufficient spatial resolution) in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. This paper focuses on the impact of JPEG2000-based lossy compression of hyperspectral images on the quality of the endmembers extracted by different algorithms. The three considered algorithms are the orthogonal subspace projection (OSP), which uses only spectral information, and the automatic morphological endmember extraction (AMEE) and spatial spectral endmember extraction (SSEE), which integrate both spatial and spectral information in the search for endmembers. The impact of compression on the resulting abundance estimation based on the endmembers derived by the different methods is also evaluated. Experiments are conducted using a hyperspectral data set collected by the NASA Jet Propulsion Laboratory over the Cuprite mining district in Nevada. The experimental results are quantitatively analyzed using reference information available from the U.S. Geological Survey, resulting in recommendations to specialists interested in applying endmember extraction and unmixing algorithms to compressed hyperspectral data.
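    Once endmembers are extracted, per-pixel fractional abundances are commonly estimated by least squares; the following generic, unconstrained sketch (with a made-up two-endmember, three-band example, not the Cuprite data) illustrates that estimation step.

```python
import numpy as np

def estimate_abundances(endmembers, pixel):
    """Solve pixel ≈ endmembers @ a in the least-squares sense.

    endmembers: (bands, k) matrix, one spectral signature per column.
    pixel: (bands,) observed spectrum of a mixed pixel.
    """
    a, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return a

E = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])           # two endmember signatures over three bands
mix = 0.7 * E[:, 0] + 0.3 * E[:, 1]  # a synthetic 70/30 mixed pixel
print(np.round(estimate_abundances(E, mix), 2))  # → [0.7 0.3]
```

Practical unmixing usually adds non-negativity and sum-to-one constraints on the abundances; comparing such estimates before and after compression is how the degradation studied in the paper would be quantified.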

  20. Interferometry-based free space communication and information processing

    NASA Astrophysics Data System (ADS)

    Arain, Muzammil Arshad

    This dissertation studies, analyzes, and experimentally demonstrates the innovative use of the interference phenomenon in the field of opto-electronic information processing and optical communications. A number of optical systems using interferometric techniques, both in the optical and the electronic domains, have been demonstrated in the fields of signal transmission and processing, optical metrology, defense, and physical sensors. Specifically, it has been shown that the interference of waves in the form of holography can be exploited to realize a novel optical scanner called the Code Multiplexed Optical Scanner (C-MOS). The C-MOS features a large aperture, wide scan angles, 3-D beam control, no moving parts, and high beam-scanning resolution. A C-MOS-based free-space optical transceiver for bi-directional communication has also been experimentally demonstrated. For high-speed, large-bandwidth, and high-frequency operation, an optically implemented reconfigurable RF transversal filter design is presented that implements a wide range of filtering algorithms. A number of techniques using heterodyne interferometry via an acousto-optic device for optical path length measurements are described. Finally, a whole new class of interferometric sensors for optical metrology and sensing applications is presented. A non-traditional interferometric output signal processing scheme has been developed. Applications include, for example, temperature sensors for harsh environments over a wide temperature range from room temperature to 1000 °C.

  1. High-speed reference-beam-angle control technique for holographic memory drive

    NASA Astrophysics Data System (ADS)

    Yamada, Ken-ichiro; Ogata, Takeshi; Hosaka, Makoto; Fujita, Koji; Okuyama, Atsushi

    2016-09-01

    We developed a holographic memory drive for next-generation optical memory. In this study, we present the key technology for achieving a high-speed transfer rate for reproduction, that is, a high-speed control technique for the reference beam angle. In a holographic memory drive, the optimum reference beam angle during reproduction varies owing to distortion of the medium. The distortion is caused by, for example, temperature variation, beam irradiation, and moisture absorption. Therefore, a reference-beam-angle control technique to position the reference beam at the optimum angle is crucial. We developed a new optical system that generates an angle-error signal to detect the optimum reference beam angle. To achieve high-speed control using the new optical system, we developed a new control technique called adaptive final-state control (AFSC) that adds a second control input to the first one derived from conventional final-state control (FSC) at the time of angle-error-signal detection. We established an actual experimental system employing AFSC to achieve moving control between each page (Page Seek) within 300 µs. In sequential multiple Page Seeks, we were able to realize positioning to the optimum angles of the reference beam that maximize the diffracted beam intensity. We expect that applying the new control technique to the holographic memory drive will enable a giga-bit/s-class transfer rate.
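
    Final-state control, which AFSC extends, computes an input sequence that drives a plant to a prescribed final state. Below is a hedged sketch on a hypothetical discrete double integrator (position and velocity), not the actual reference-beam-angle dynamics: the minimum-energy input sequence is u = Gᵀ(GGᵀ)⁻¹e, where G is the reachability matrix and e is the final-state error to cancel.

```python
def final_state_control(x0, target, N=8, dt=0.05):
    """Minimum-energy final-state control (FSC) for a discrete double
    integrator x_{k+1} = A x_k + B u_k: returns inputs u_0..u_{N-1} that
    drive x0 = (position, velocity) exactly to `target` in N steps."""
    B = [dt * dt / 2.0, dt]

    def amul(v):  # multiply by A = [[1, dt], [0, 1]]
        return [v[0] + dt * v[1], v[1]]

    cols, col = [], B[:]                  # columns of the reachability matrix
    for _ in range(N):
        cols.append(col)
        col = amul(col)
    cols.reverse()                        # cols[i] = A^(N-1-i) B

    xN = x0[:]                            # free response A^N x0
    for _ in range(N):
        xN = amul(xN)
    e = [target[0] - xN[0], target[1] - xN[1]]

    # Solve the 2x2 Gram system (G G^T) y = e, then u = G^T y.
    g00 = sum(c[0] * c[0] for c in cols)
    g01 = sum(c[0] * c[1] for c in cols)
    g11 = sum(c[1] * c[1] for c in cols)
    det = g00 * g11 - g01 * g01
    y = [(g11 * e[0] - g01 * e[1]) / det, (g00 * e[1] - g01 * e[0]) / det]
    return [c[0] * y[0] + c[1] * y[1] for c in cols]

u = final_state_control([0.0, 0.0], [1.0, 0.0])  # move by 1 unit, end at rest
```

    AFSC, as described in the record, would add a second corrective input on top of such a precomputed sequence once the angle-error signal is detected.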

  2. Optimization technique for problems with an inequality constraint

    NASA Technical Reports Server (NTRS)

    Russell, K. J.

    1972-01-01

    The general technique uses a modified version of an existing technique termed the pattern search technique. A new procedure called the parallel move strategy permits the pattern search technique to be used with problems involving a constraint.
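
    As a point of reference, the basic (unconstrained) pattern search that the report modifies can be sketched in a few lines; the parallel move strategy itself, which handles the constraint, is not reproduced here.

```python
def pattern_search(f, x, step=1.0, tol=1e-6):
    """Minimize f by exploratory moves of +/- step along each coordinate,
    halving the step whenever no move improves the objective."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                trial = list(x)
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step /= 2.0
    return x, fx

x, fx = pattern_search(lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2,
                       [0.0, 0.0])  # converges to (3, -1)
```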

  3. Physiotherapists use a small number of behaviour change techniques when promoting physical activity: A systematic review comparing experimental and observational studies.

    PubMed

    Kunstler, Breanne E; Cook, Jill L; Freene, Nicole; Finch, Caroline F; Kemp, Joanne L; O'Halloran, Paul D; Gaida, James E

    2018-06-01

    Physiotherapists promote physical activity as part of their practice. This study reviewed the behaviour change techniques physiotherapists use when promoting physical activity in experimental and observational studies. Systematic review of experimental and observational studies. Twelve databases were searched using terms related to physiotherapy and physical activity. We included experimental studies evaluating the efficacy of physiotherapist-led physical activity interventions delivered to adults in clinic-based private practice and outpatient settings to individuals with, or at risk of, non-communicable diseases. Observational studies reporting the techniques physiotherapists use when promoting physical activity were also included. The behaviour change techniques used in all studies were identified using the Behaviour Change Technique Taxonomy. The behaviour change techniques appearing in efficacious and inefficacious experimental interventions were compared using a narrative approach. Twelve studies (nine experimental and three observational) were retained from the initial search yield of 4141. Risk of bias ranged from low to high. Physiotherapists used seven behaviour change techniques in the observational studies, compared to 30 behaviour change techniques in the experimental studies. Social support (unspecified) was the most frequently identified behaviour change technique across both settings. Efficacious experimental interventions used more behaviour change techniques (n=29) and functioned in more ways (n=6) than did inefficacious experimental interventions (behaviour change techniques=10 and functions=1). Physiotherapists use a small number of behaviour change techniques. Fewer behaviour change techniques were identified in observational studies than in experimental studies, suggesting physiotherapists use fewer behaviour change techniques clinically than experimentally. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  4. Control system design for flexible structures using data models

    NASA Technical Reports Server (NTRS)

    Irwin, R. Dennis; Frazier, W. Garth; Mitchell, Jerrel R.; Medina, Enrique A.; Bukley, Angelia P.

    1993-01-01

    The dynamics and control of flexible aerospace structures exercise many of the engineering disciplines. In recent years there has been considerable research into developing and tailoring control system design techniques for these structures. This problem involves designing a control system for a multi-input, multi-output (MIMO) system that satisfies various performance criteria, such as vibration suppression, disturbance and noise rejection, attitude control, and slewing control. Considerable progress has been made and demonstrated in control system design techniques for these structures. The key to designing control systems for these structures that meet stringent performance requirements is an accurate model. It has become apparent that theoretically derived and finite-element-generated models do not provide the needed accuracy; almost all successful demonstrations of control system design techniques have involved using test results for fine-tuning a model or for extracting a model using system identification techniques. This paper describes past and ongoing efforts at Ohio University and NASA MSFC to design controllers using 'data models.' The basic philosophy of this approach is to start with a stabilizing controller and frequency response data that describe the plant; then, iteratively vary the free parameters of the controller so that performance measures come closer to satisfying design specifications. The frequency response data can be either experimentally or analytically derived. One 'design-with-data' algorithm presented in this paper is called the Compensator Improvement Program (CIP). The current CIP designs controllers for MIMO systems so that classical gain, phase, and attenuation margins are achieved. The centerpiece of the CIP algorithm is the constraint improvement technique, which is used to calculate a parameter change vector that guarantees an improvement in all unsatisfied, feasible performance metrics from iteration to iteration. 
The paper also presents a recently demonstrated CIP-type algorithm, called the Model and Data Oriented Computer-Aided Design System (MADCADS), developed for achieving H(sub infinity) type design specifications using data models. Control system designs for the NASA/MSFC Single Structure Control Facility are demonstrated for both CIP and MADCADS. Advantages of design-with-data algorithms over techniques that require analytical plant models are also presented.
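
    The flavor of a design-with-data step can be illustrated by checking a classical gain margin directly on frequency response samples, with no analytical plant model. The FRF below is a hypothetical first-order lag with a transport delay, not data from the MSFC facility.

```python
import cmath
import math

# Hypothetical measured plant FRF: first-order lag with a 0.5 s transport delay
freqs = [0.05 * n for n in range(1, 400)]
P = [cmath.exp(-0.5j * w) / complex(1.0, w) for w in freqs]

def max_gain_with_margin(margin_db=6.0):
    """Scan the FRF grid for the -180 deg (unwrapped) phase crossover and
    return the largest loop gain keeping |kP| margin_db below unity there."""
    phase = cmath.phase(P[0])
    for prev, cur in zip(P, P[1:]):
        phase += cmath.phase(cur / prev)   # small inter-sample phase step
        if phase <= -math.pi:
            return 10.0 ** (-margin_db / 20.0) / abs(cur)
    return None

k = max_gain_with_margin()  # a data-derived gain meeting the 6 dB margin
```

    CIP iterates over a vector of such margin-type metrics, adjusting all free compensator parameters at once rather than a single gain.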

  5. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    PubMed

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization with dose-volume constraints, one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is an iterative process which begins with a simple linearly constrained quadratic optimization model without considering any dose-volume constraints, and then gradually adds dose constraints for the voxels violating the dose-volume constraints into the quadratic optimization model, step by step, until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. To choose proper candidate voxels for the current round of constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of the voxels. The new geometric distance sorting technique can mostly reduce the unexpected increase of the objective function value caused inevitably by the constraint adding, and it can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation for the proposed method is given and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases (head-neck, prostate, lung and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes. 
To some extent it is also a more efficient technique for choosing constraints than the dose sorting method. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization with dose-volume constraints.
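
    The traditional dose sorting step that the geometric distance technique replaces is easy to illustrate. The sketch below (with hypothetical doses in Gy) selects which violating voxels to constrain for a dose-volume constraint of the form "at most a given fraction of voxels may exceed the limit", capping the smallest overshoots first.

```python
def select_voxels_to_constrain(doses, limit, max_fraction):
    """Dose-volume constraint: at most max_fraction of voxels may exceed
    `limit`. Returns indices of voxels to cap, chosen by dose sorting
    (smallest violation first, perturbing the objective least)."""
    over = sorted((d - limit, i) for i, d in enumerate(doses) if d > limit)
    allowed = int(max_fraction * len(doses))
    n_to_cap = max(0, len(over) - allowed)
    return [i for _, i in over[:n_to_cap]]

doses = [25.0, 18.0, 22.0, 30.0, 19.0]          # hypothetical voxel doses, Gy
print(select_voxels_to_constrain(doses, limit=20.0, max_fraction=0.4))  # [2]
```

    The paper's geometric distance sorting replaces the sort key `d - limit` with a distance measured in the transformed standard quadratic form of the optimization model.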

  6. Thermal diffusivity measurement in thin metallic filaments using the mirage method with multiple probe beams and a digital camera

    NASA Astrophysics Data System (ADS)

    Vargas, E.; Cifuentes, A.; Alvarado, S.; Cabrera, H.; Delgado, O.; Calderón, A.; Marín, E.

    2018-02-01

    Photothermal beam deflection is a well-established technique for measuring thermal diffusivity. In this technique, a pump laser beam generates temperature variations on the surface of the sample to be studied. These variations transfer heat to the surrounding medium, which may be air or any other fluid. The medium in turn experiences a change in the refractive index, which will be proportional to the temperature field on the sample surface when the distance to this surface is small. A probe laser beam suffers a deflection due to the periodic changes in the refractive index, which is usually monitored by means of a quadrant photodetector or a similar device aided by lock-in amplification. A linear relationship that arises in this technique is that given by the phase lag of the thermal wave as a function of the distance to a punctual heat source when unidimensional heat diffusion can be guaranteed. This relationship is useful in the calculation of the sample's thermal diffusivity, which can be obtained straightforwardly by the so-called slope method if the pump beam modulation frequency is known. The measurement procedure requires the experimenter to displace the probe beam at a given distance from the heat source, measure the phase lag at that offset, and repeat this for as many points as desired. This process can be quite lengthy, depending on the number of points. In this paper, we propose a detection scheme that overcomes this limitation and simplifies the experimental setup: a digital camera replaces all detection hardware, using motion detection techniques and software lock-in post-processing of the digital signal. In this work, the method is demonstrated using thin metallic filaments as samples.
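
    The slope method mentioned above follows from the thermal-wave phase lag φ(x) = -x·sqrt(πf/α): fitting the slope m of phase versus distance gives α = πf/m². A minimal sketch with synthetic, noise-free data (the numerical values are illustrative):

```python
import math

def diffusivity_from_slope(xs, phases, f):
    """Slope method: the thermal-wave phase lag is phi(x) = -x*sqrt(pi*f/alpha),
    so a least-squares slope of phase vs distance yields alpha = pi*f/slope**2."""
    n = len(xs)
    mx = sum(xs) / n
    mp = sum(phases) / n
    slope = (sum((x - mx) * (p - mp) for x, p in zip(xs, phases))
             / sum((x - mx) ** 2 for x in xs))
    return math.pi * f / slope ** 2

alpha_true, f = 1.0e-4, 10.0             # m^2/s (illustrative) and Hz
k = math.sqrt(math.pi * f / alpha_true)
xs = [i * 1.0e-4 for i in range(1, 11)]  # probe offsets from the source, m
phases = [-k * x for x in xs]            # synthetic, noise-free phase data
print(diffusivity_from_slope(xs, phases, f))  # recovers ~1e-4
```

    The camera-based scheme of the paper changes how the phase-versus-offset samples are acquired, not this final fitting step.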

  7. Thermal diffusivity measurement in thin metallic filaments using the mirage method with multiple probe beams and a digital camera.

    PubMed

    Vargas, E; Cifuentes, A; Alvarado, S; Cabrera, H; Delgado, O; Calderón, A; Marín, E

    2018-02-01

    Photothermal beam deflection is a well-established technique for measuring thermal diffusivity. In this technique, a pump laser beam generates temperature variations on the surface of the sample to be studied. These variations transfer heat to the surrounding medium, which may be air or any other fluid. The medium in turn experiences a change in the refractive index, which will be proportional to the temperature field on the sample surface when the distance to this surface is small. A probe laser beam suffers a deflection due to the periodic changes in the refractive index, which is usually monitored by means of a quadrant photodetector or a similar device aided by lock-in amplification. A linear relationship that arises in this technique is that given by the phase lag of the thermal wave as a function of the distance to a punctual heat source when unidimensional heat diffusion can be guaranteed. This relationship is useful in the calculation of the sample's thermal diffusivity, which can be obtained straightforwardly by the so-called slope method if the pump beam modulation frequency is known. The measurement procedure requires the experimenter to displace the probe beam at a given distance from the heat source, measure the phase lag at that offset, and repeat this for as many points as desired. This process can be quite lengthy, depending on the number of points. In this paper, we propose a detection scheme that overcomes this limitation and simplifies the experimental setup: a digital camera replaces all detection hardware, using motion detection techniques and software lock-in post-processing of the digital signal. In this work, the method is demonstrated using thin metallic filaments as samples.

  8. Vibration band gaps for elastic metamaterial rods using wave finite element method

    NASA Astrophysics Data System (ADS)

    Nobrega, E. D.; Gautier, F.; Pelat, A.; Dos Santos, J. M. C.

    2016-10-01

    Band gaps in elastic metamaterial rods with spatial periodic distribution and periodically attached local resonators are investigated. New techniques to analyze metamaterial systems combine analytical or numerical methods with wave propagation. One of them, called here the wave spectral element method (WSEM), consists of combining the spectral element method (SEM) with Floquet-Bloch's theorem. A modern methodology called the wave finite element method (WFEM), developed to calculate the dynamic behavior of periodic acoustic and structural systems, uses a similar approach in which SEM is substituted by the conventional finite element method (FEM). In this paper, it is proposed to use WFEM to calculate band gaps in elastic metamaterial rods with spatial periodic distribution and periodically attached local resonators of multiple degrees of freedom (M-DOF). Simulated examples with band gaps generated by Bragg scattering and local resonators are calculated by WFEM and verified with WSEM, which is used as a reference method. Results are presented in the form of attenuation constant, vibration transmittance and frequency response function (FRF). For all cases, WFEM and WSEM results are in agreement, provided that the number of elements used in WFEM is sufficient for convergence. An experimental test was conducted with a real elastic metamaterial rod, manufactured in plastic with a 3D printer, without a local-resonance effect. The experimental results for the metamaterial rod with band gaps generated by Bragg scattering are compared with the simulated ones. Both numerical methods (WSEM and WFEM) can locate the band gap position and width very close to the experimental results. A hybrid approach combining WFEM with the commercial finite element software ANSYS is proposed to model complex metamaterial systems. 
Two examples illustrating its efficiency and accuracy in modeling an elastic metamaterial rod unit cell, using a 1D simple rod element and a 3D solid element, are demonstrated, and the results show good agreement with the experimental data.
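
    The periodic-structure idea behind these methods can be illustrated with the simplest possible model: a 2x2 transfer matrix for longitudinal waves across one bi-material unit cell. By Floquet-Bloch theory, a Bragg band gap opens wherever the half-trace of the cell matrix exceeds 1 in magnitude (the attenuation constant is the acosh of that value). The material values below are illustrative, not those of the 3D-printed rod.

```python
import math

def half_trace(omega, segs):
    """Half-trace of the unit-cell transfer matrix for longitudinal waves in a
    rod of serial segments (E, rho, A, L); |half_trace| > 1 marks a Bloch
    band gap at angular frequency omega."""
    T = [[1.0, 0.0], [0.0, 1.0]]
    for E, rho, A, L in segs:
        k = omega / math.sqrt(E / rho)          # wavenumber in this segment
        t = [[math.cos(k * L), math.sin(k * L) / (k * E * A)],
             [-k * E * A * math.sin(k * L), math.cos(k * L)]]
        T = [[t[0][0] * T[0][0] + t[0][1] * T[1][0],
              t[0][0] * T[0][1] + t[0][1] * T[1][1]],
             [t[1][0] * T[0][0] + t[1][1] * T[1][0],
              t[1][0] * T[0][1] + t[1][1] * T[1][1]]]
    return 0.5 * (T[0][0] + T[1][1])

# Illustrative bi-material cell: a stiff and a compliant 5 cm segment
segs = [(200e9, 7800.0, 1e-4, 0.05), (3e9, 1200.0, 1e-4, 0.05)]
gap = [w for w in range(1000, 200000, 1000) if abs(half_trace(w, segs)) > 1.0]
```

    WFEM generalizes exactly this trace/eigenvalue analysis to unit cells discretized with many FEM degrees of freedom.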

  9. Observation of Phase-Filling Singularities in the Optical Dielectric Function of Highly Doped n-Type Ge.

    PubMed

    Xu, Chi; Fernando, Nalin S; Zollner, Stefan; Kouvetakis, John; Menéndez, José

    2017-06-30

    Phase-filling singularities in the optical response function of highly doped (>10^{19}  cm^{-3}) germanium are theoretically predicted and experimentally confirmed using spectroscopic ellipsometry. Contrary to direct-gap semiconductors, which display the well-known Burstein-Moss phenomenology upon doping, the critical point in the joint density of electronic states associated with the partially filled conduction band in n-Ge corresponds to the so-called E_{1} and E_{1}+Δ_{1} transitions, which are two-dimensional in character. As a result of this reduced dimensionality, there is no edge shift induced by Pauli blocking. Instead, one observes the "original" critical point (shifted only by band gap renormalization) and an additional feature associated with the level occupation discontinuity at the Fermi level. The experimental observation of this feature is made possible by the recent development of low-temperature, in situ doping techniques that allow the fabrication of highly doped films with exceptionally flat doping profiles.

  10. Experimental observation of acoustic sub-harmonic diffraction by a grating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jingfei, E-mail: benjamin.jf.liu@gatech.edu; Declercq, Nico F., E-mail: declercqdepatin@gatech.edu

    2014-06-28

    A diffraction grating is a spatial filter causing sound waves or optical waves to reflect in directions determined by the frequency of the waves and the period of the grating. The classical grating equation is the governing principle that has successfully described the diffraction phenomena caused by gratings. However, in this work, we show experimental observation of the so-called sub-harmonic diffraction in acoustics that cannot be explained by the classical grating equation. Experiments indicate two physical phenomena causing the effect: internal scattering effects within the corrugation causing a phase shift and nonlinear acoustic effects generating new frequencies. This discovery expands our current understanding of the diffraction phenomenon, and it also makes it possible to better design spatial diffraction spectra, such as a rainbow effect in optics with a more complicated color spectrum than a traditional rainbow. The discovery also reveals a possibly new technique to study nonlinear acoustics by exploitation of the natural spatial filtering effect inherent to an acoustic diffraction grating.
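
    For contrast, the classical grating equation that sub-harmonic diffraction violates is simple to evaluate. A hedged sketch for airborne ultrasound (the frequency, period, and sound speed are illustrative choices):

```python
import math

def propagating_orders(f, theta_i_deg, d, c=343.0):
    """Classical grating equation sin(theta_m) = sin(theta_i) + m*(c/f)/d.
    Returns {order m: diffraction angle in degrees} for propagating orders."""
    s0 = math.sin(math.radians(theta_i_deg))
    out = {}
    for m in range(-10, 11):
        s = s0 + m * (c / f) / d
        if abs(s) <= 1.0:                 # evanescent orders are discarded
            out[m] = math.degrees(math.asin(s))
    return out

# 40 kHz sound at normal incidence on a 20 mm period grating
orders = propagating_orders(40e3, 0.0, 0.02)
print(sorted(orders))  # orders -2..2 propagate
```

    Sub-harmonic orders appear at angles this equation cannot produce for the incident frequency, which is what makes the reported observation noteworthy.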

  11. On-the-fly machine-learning for high-throughput experiments: search for rare-earth-free permanent magnets

    PubMed Central

    Kusne, Aaron Gilad; Gao, Tieren; Mehta, Apurva; Ke, Liqin; Nguyen, Manh Cuong; Ho, Kai-Ming; Antropov, Vladimir; Wang, Cai-Zhuang; Kramer, Matthew J.; Long, Christian; Takeuchi, Ichiro

    2014-01-01

    Advanced materials characterization techniques with ever-growing data acquisition speed and storage capabilities represent a challenge in modern materials science, and new procedures to quickly assess and analyze the data are needed. Machine learning approaches are effective in reducing the complexity of data and rapidly homing in on the underlying trend in multi-dimensional data. Here, we show that by applying an algorithm called mean shift to a large amount of diffraction data in high-throughput experimentation, one can streamline the process of delineating the structural evolution across compositional variations mapped on combinatorial libraries with minimal computational cost. Data collected at a synchrotron beamline are analyzed on the fly, and by integrating experimental data with the inorganic crystal structure database (ICSD), we can substantially enhance the accuracy in classifying the structural phases across ternary phase spaces. We have used this approach to identify a novel magnetic phase with enhanced magnetic anisotropy which is a candidate for a rare-earth-free permanent magnet. PMID:25220062
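
    Mean shift itself is compact: repeatedly move a point to the average of its neighbors within a bandwidth until it settles on a local density mode. A one-dimensional, flat-kernel sketch with made-up data (the diffraction application clusters in a much higher-dimensional space):

```python
def mean_shift_1d(points, x, bandwidth, iters=100):
    """Flat-kernel mean shift: shift x to the mean of the points within
    `bandwidth` until it converges to a local density mode."""
    for _ in range(iters):
        window = [p for p in points if abs(p - x) <= bandwidth]
        new_x = sum(window) / len(window)
        if abs(new_x - x) < 1e-12:
            break
        x = new_x
    return x

data = [1.0, 1.1, 0.9, 1.05, 5.0, 5.2, 4.9]     # two clusters of measurements
mode = mean_shift_1d(data, 1.3, 1.0)             # settles on the left cluster
```

    Running the same loop from a start near 5 settles on the right cluster; points that converge to the same mode are assigned to the same cluster.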

  12. Experimental Design for Estimating Unknown Hydraulic Conductivity in a Confined Aquifer using a Genetic Algorithm and a Reduced Order Model

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Yeh, W.

    2013-12-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), it may be difficult, if not impossible, to solve for a realistically-scaled model through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search out the global optimum; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
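
    A toy version of the combinatorial search conveys the idea: a genetic algorithm evolves subsets of k candidate well locations to maximize a stand-in information score (here simply a sum of squared per-site sensitivities; in the paper the criterion is evaluated through the POD-reduced model). All numbers and operators below are illustrative.

```python
import random

def ga_select(scores, k, pop=30, gens=60, seed=1):
    """Toy genetic algorithm choosing k observation sites (indices into
    `scores`) that maximize the summed squared sensitivities."""
    rng = random.Random(seed)
    n = len(scores)

    def fitness(ind):
        return sum(scores[i] ** 2 for i in ind)

    population = [rng.sample(range(n), k) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            child = list(set(a) | set(b))           # crossover: merge parents
            rng.shuffle(child)
            child = child[:k]
            if rng.random() < 0.3:                  # mutation: swap one site
                out = rng.randrange(k)
                child[out] = rng.choice([i for i in range(n) if i not in child])
            children.append(child)
        population = survivors + children
    return sorted(max(population, key=fitness))

scores = [0.1, 0.9, 0.2, 0.8, 0.05, 0.7, 0.3, 0.6]  # per-site sensitivities
best_sites = ga_select(scores, 3)
```

    The expense the paper addresses sits inside `fitness`: each evaluation would otherwise require runs of the full groundwater model, which POD avoids.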

  13. Development of electrical test procedures for qualification of spacecraft against EID. Volume 1: The CAN test and other relevant data

    NASA Technical Reports Server (NTRS)

    Wilkenfeld, J. M.; Judge, R. J. R.; Harlacher, B. L.

    1982-01-01

    A combined experimental and analytical program to develop system electrical test procedures for the qualification of spacecraft against damage produced by space-electron-induced discharges (EID) occurring on spacecraft dielectric outer surfaces is described. Data on the response of a simple satellite model, called CAN, to electron-induced discharges are presented. The experimental results were compared to predicted behavior and to the response of the CAN to electrical injection techniques simulating blowoff and arc discharges. Also included is a review of significant results from other ground tests and the P78-2 program, forming a data base from which test procedures that optimally simulate the response of spacecraft to EID are specified. The electrical and electron spraying test data were evaluated to provide a first-cut determination of the best methods for performing electrical excitation qualification tests from the point of view of simulation fidelity.

  14. Matter-wave diffraction approaching limits predicted by Feynman path integrals for multipath interference

    NASA Astrophysics Data System (ADS)

    Barnea, A. Ronny; Cheshnovsky, Ori; Even, Uzi

    2018-02-01

    Interference experiments have been paramount in our understanding of quantum mechanics and are frequently the basis of testing the superposition principle in the framework of quantum theory. In recent years, several studies have challenged the nature of wave-function interference from the perspective of Born's rule—namely, the manifestation of so-called high-order interference terms in a superposition generated by diffraction of the wave functions. Here we present an experimental test of multipath interference in the diffraction of metastable helium atoms, with large-number counting statistics, comparable to photon-based experiments. We use a variation of the original triple-slit experiment and accurate single-event counting techniques to provide a new experimental bound of 2.9 ×10-5 on the statistical deviation from the commonly approximated null third-order interference term in Born's rule for matter waves. Our value is on the order of the maximal contribution predicted for multipath trajectories by Feynman path integrals.
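
    The quantity bounded by such experiments (often called the Sorkin parameter) can be written down directly from Born's rule: for three paths it combines the seven opening probabilities so that all pairwise interference cancels. A small numerical sketch with arbitrary complex amplitudes shows it vanishing identically in quantum mechanics:

```python
import cmath

def prob(*amps):
    """Detection probability for a coherent sum of path amplitudes (Born rule)."""
    return abs(sum(amps)) ** 2

# Arbitrary complex amplitudes for the three slits (illustrative values)
a, b, c = 0.6, 0.5 * cmath.exp(0.7j), 0.4 * cmath.exp(-1.2j)

# Sorkin combination: zero iff interference is at most pairwise
epsilon = (prob(a, b, c) - prob(a, b) - prob(a, c) - prob(b, c)
           + prob(a) + prob(b) + prob(c))
print(abs(epsilon))  # ~0: Born's rule has no genuine three-path term
```

    The experiment's bound of 2.9e-5 is on the measured, suitably normalized version of this combination, which any higher-order interference would push away from zero.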

  15. Coded excitation with spectrum inversion (CEXSI) for ultrasound array imaging.

    PubMed

    Wang, Yao; Metzger, Kurt; Stephens, Douglas N; Williams, Gregory; Brownlie, Scott; O'Donnell, Matthew

    2003-07-01

    In this paper, a scheme called coded excitation with spectrum inversion (CEXSI) is presented. An established optimal binary code whose spectrum has no nulls and possesses the least variation is encoded as a burst for transmission. Using this optimal code, the decoding filter can be derived directly from its inverse spectrum. Various transmission techniques can be used to improve energy coupling within the system pass-band. We demonstrate its potential to achieve excellent decoding with very low (< 80 dB) side-lobes. For a 2.6 µs code, an array element with a center frequency of 10 MHz and fractional bandwidth of 38%, range side-lobes of about 40 dB have been achieved experimentally with little compromise in range resolution. The signal-to-noise ratio (SNR) improvement also has been characterized at about 14 dB. Along with simulations and experimental data, we present a formulation of the scheme, according to which CEXSI can be extended to improve SNR in sparse array imaging in general.
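
    The decoding idea behind CEXSI can be sketched as a toy discrete spectrum inversion: because the code's spectrum has no nulls, the received spectrum can be divided bin-by-bin by the code spectrum and inverted back to an impulse. The four-chip code and noiseless echo below are illustrative, not the optimal code of the paper.

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (sufficient for a short code)."""
    n, s = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def decode(received, code):
    """Spectrum inversion: divide the received spectrum by the code spectrum
    (well-defined only because the code spectrum has no nulls)."""
    R, C = dft(received), dft(code)
    return [v.real for v in dft([r / c for r, c in zip(R, C)], inverse=True)]

code = [1, 1, -1, 1]             # a short code whose 4-point DFT has no nulls
echo = [0.5 * v for v in code]   # hypothetical echo: scaled copy of the burst
pulse = decode(echo, code)       # compresses to an impulse of height 0.5
```

    In the real scheme the inverse filter is shaped and band-limited to the transducer pass-band, which is where the reported side-lobe and SNR trade-offs arise.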

  16. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data, whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
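
    The selection criterion reduces to a few lines once models and experiments are enumerable: pick the experiment whose predicted-outcome distribution over the surviving models has the highest Shannon entropy. The sketch below uses a brute-force argmax rather than nested entropy sampling, and the threshold models are hypothetical.

```python
import math

def shannon_entropy(ps):
    return -sum(p * math.log(p) for p in ps if p > 0)

def most_informative(experiments, models):
    """Return the experiment whose predicted outcomes, over equally weighted
    candidate models, have maximal Shannon entropy."""
    def outcome_entropy(x):
        counts = {}
        for m in models:
            y = m(x)
            counts[y] = counts.get(y, 0) + 1
        return shannon_entropy([c / len(models) for c in counts.values()])
    return max(experiments, key=outcome_entropy)

# Hypothetical competing threshold models of the same phenomenon
models = [lambda x, t=t: x > t for t in (1.0, 2.0, 3.0)]
print(most_informative([0.5, 2.5, 9.0], models))  # 2.5: it splits the models
```

    Measurements at 0.5 or 9.0 are predicted identically by all three models (entropy zero), so they teach nothing; nested entropy sampling makes the same argmax affordable when the experiment space is high-dimensional.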

  17. Experimental Investigation of Unsteady Thrust Augmentation Using a Speaker-Driven Jet

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Wernet, Mark P.; John, Wentworth T.

    2007-01-01

    An experimental investigation is described in which a simple speaker-driven jet was used as a pulsed thrust source (driver) for an ejector configuration. The objectives of the investigation were twofold. The first was to expand the experimental body of evidence showing that an unsteady thrust source, combined with a properly sized ejector, generally yields higher thrust augmentation values than a similarly sized, steady driver of equivalent thrust. The second objective was to identify characteristics of the unsteady driver that may be useful for sizing ejectors, and for predicting the thrust augmentation levels that may be achieved. The speaker-driven jet provided a convenient source for the investigation because it is entirely unsteady (i.e., it has no mean velocity component) and because relevant parameters such as frequency, time-averaged thrust, and diameter are easily variable. The experimental setup will be described, as will the two main measurement techniques employed. These are thrust measurement and digital particle imaging velocimetry of the driver. It will be shown that thrust augmentation values as high as 1.8 were obtained, that the diameter of the best ejector scaled with the dimensions of the emitted vortex, and that the so-called formation time serves as a useful dimensionless parameter by which to characterize the jet and predict performance.

  18. Understanding bistability in yeast glycolysis using general properties of metabolic pathways.

    PubMed

    Planqué, Robert; Bruggeman, Frank J; Teusink, Bas; Hulshof, Josephus

    2014-09-01

    Glycolysis is the central pathway in energy metabolism in the majority of organisms. In a recent paper, van Heerden et al. showed experimentally and computationally that glycolysis can exist in two states, a global steady state and a so-called imbalanced state. In the imbalanced state, intermediary metabolites accumulate at low levels of ATP and inorganic phosphate. It was shown that Baker's yeast uses a peculiar regulatory mechanism--via trehalose metabolism--to ensure that most yeast cells reach the steady state and not the imbalanced state. Here we explore the apparent bistable behaviour in a core model of glycolysis that is based on a well-established detailed model, and study in great detail the bifurcation behaviour of solutions, without using any numerical information on parameter values. We uncover a rich suite of solutions, including so-called imbalanced states, bistability, and oscillatory behaviour. The techniques employed are generic, directly suitable for a wide class of biochemical pathways, and could lead to better analytical treatments of more detailed models. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Stretching and Joint Mobilization Exercises Reduce Call-Center Operators’ Musculoskeletal Discomfort and Fatigue

    PubMed Central

    de Castro Lacaze, Denise Helena; Sacco, Isabel de C. N.; Rocha, Lys Esther; de Bragança Pereira, Carlos Alberto; Casarotto, Raquel Aparecida

    2010-01-01

    AIM: We sought to evaluate musculoskeletal discomfort and mental and physical fatigue in the call-center workers of an airline company before and after a supervised exercise program compared with rest breaks during the work shift. INTRODUCTION: This was a longitudinal pilot study conducted in a flight-booking call-center for an airline in São Paulo, Brazil. Occupational health activities are recommended to decrease the negative effects of the call-center working conditions. In practice, exercise programs are commonly recommended for computer workers, but their effects have not been studied in call-center operators. METHODS: Sixty-four call-center operators participated in this study. Thirty-two subjects were placed into the experimental group and attended a 10-min daily exercise session for 2 months. Conversely, 32 participants were placed into the control group and took a 10-min daily rest break during the same period. Each subject was evaluated once a week by means of the Corlett-Bishop body map with a visual analog discomfort scale and the Chalder fatigue questionnaire. RESULTS: Musculoskeletal discomfort decreased in both groups, but the reduction was only statistically significant for the spine and buttocks (p=0.04) and the sum of the segments (p=0.01) in the experimental group. In addition, the experimental group showed significant differences in the level of mental fatigue, especially in questions related to memory and tiredness (p=0.001). CONCLUSIONS: Our preliminary results demonstrate that appropriately designed and supervised exercise programs may be more efficient than rest breaks in decreasing discomfort and fatigue levels in call-center operators. PMID:20668622

  20. Particle swarm optimization with recombination and dynamic linkage discovery.

    PubMed

    Chen, Ying-Ping; Peng, Wen-Chih; Jian, Ming-Chung

    2007-12-01

    In this paper, we try to improve the performance of the particle swarm optimizer by incorporating the linkage concept, which is an essential mechanism in genetic algorithms, and design a new linkage identification technique called dynamic linkage discovery to address the linkage problem in real-parameter optimization problems. Dynamic linkage discovery is a costless and effective linkage recognition technique that adapts the linkage configuration by employing only the selection operator without extra judging criteria irrelevant to the objective function. Moreover, a recombination operator that utilizes the discovered linkage configuration to promote the cooperation of particle swarm optimizer and dynamic linkage discovery is accordingly developed. By integrating the particle swarm optimizer, dynamic linkage discovery, and recombination operator, we propose a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL). In order to study the capability of PSO-RDL, numerical experiments were conducted on a set of benchmark functions as well as on an important real-world application. The benchmark functions used in this paper were proposed in the 2005 Institute of Electrical and Electronics Engineers Congress on Evolutionary Computation. The experimental results on the benchmark functions indicate that PSO-RDL can provide a level of performance comparable to that given by other advanced optimization techniques. In addition to the benchmark, PSO-RDL was also used to solve the economic dispatch (ED) problem for power systems, which is a real-world problem and highly constrained. The results indicate that PSO-RDL can successfully solve the ED problem for the three-unit power system and obtain the currently known best solution for the 40-unit system.
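
    For readers unfamiliar with the baseline, a minimal global-best particle swarm optimizer is sketched below; PSO-RDL layers recombination and dynamic linkage discovery on top of a loop of this shape. The parameter values are common textbook choices, not those of the paper.

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=200, seed=3):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]                 # personal best positions
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]   # global best
    w, c1, c2 = 0.72, 1.49, 1.49                  # common textbook parameters
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - x[d])
                            + c2 * rng.random() * (gbest[d] - x[d]))
                x[d] = min(hi, max(lo, x[d] + vs[i][d]))
            fx = f(x)
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = list(x), fx
                if fx < gbest_f:
                    gbest, gbest_f = list(x), fx
    return gbest, gbest_f

best, val = pso(lambda v: sum((u - 1.0) ** 2 for u in v), 3, (-5.0, 5.0))
```

    Dynamic linkage discovery would additionally group the coordinates that must move together, and the recombination operator would mix particles group-by-group rather than coordinate-by-coordinate.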

  1. Covert Channels in SIP for VoIP Signalling

    NASA Astrophysics Data System (ADS)

    Mazurczyk, Wojciech; Szczypiorski, Krzysztof

    In this paper, we evaluate available steganographic techniques for SIP (Session Initiation Protocol) that can be used for creating covert channels during the signalling phase of a VoIP (Voice over IP) call. Apart from characterizing existing steganographic methods, we provide new insights by introducing new techniques. We also estimate the amount of data that can be transferred in signalling messages for a typical IP telephony call.

  2. Accelerating Smith-Waterman Alignment for Protein Database Search Using Frequency Distance Filtration Scheme Based on CPU-GPU Collaborative System.

    PubMed

    Liu, Yu; Hong, Yang; Lin, Chun-Yuan; Hung, Che-Lun

    2015-01-01

    The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted graphics cards with Graphics Processing Units (GPUs) and the associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on the protein database search by using the intertask parallelization technique, using the GPU only to perform the SW computations one by one. Hence, in this paper, we propose an efficient SW alignment method, called CUDA-SWfr, for the protein database search that uses the intratask parallelization technique based on a CPU-GPU collaborative system. Before the SW computations are performed on the GPU, a procedure is applied on the CPU using the frequency distance filtration scheme (FDFS) to eliminate unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
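
    The filtration idea can be shown in miniature: a cheap count-based distance screens database sequences before the quadratic-time SW alignment is run. The scoring values, cutoff, and toy sequences below are illustrative assumptions, not CUDA-SWfr's actual parameters, and everything runs on the CPU.

```python
from collections import Counter

def freq_distance(a, b):
    """Cheap distance from residue counts only: sequences that differ a
    lot in composition cannot align well."""
    ca, cb = Counter(a), Counter(b)
    return sum(abs(ca[k] - cb[k]) for k in set(ca) | set(cb))

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Classic local-alignment score with a linear gap penalty."""
    rows, cols = len(a) + 1, len(b) + 1
    prev = [0] * cols
    best = 0
    for i in range(1, rows):
        cur = [0] * cols
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            cur[j] = max(0, prev[j - 1] + s, prev[j] + gap, cur[j - 1] + gap)
            best = max(best, cur[j])
        prev = cur
    return best

def search(query, database, fd_cutoff):
    """Align only the database entries that pass the frequency filter."""
    hits = []
    for seq in database:
        if freq_distance(query, seq) <= fd_cutoff:
            hits.append((seq, smith_waterman(query, seq)))
    return hits

db = ["ACDEFGHIK", "ACDEFGHIR", "WWWWYYYYPP"]
print(search("ACDEFGHIK", db, fd_cutoff=4))
```

    In the toy run, the third sequence is rejected by the filter alone, so only two full alignments are computed.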

  3. Stochastic-field cavitation model

    NASA Astrophysics Data System (ADS)

    Dumond, J.; Magagnato, F.; Class, A.

    2013-07-01

    Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.
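
    To give a flavor of the underlying method (not the cavitation model itself), here is a zero-dimensional caricature: the pdf of a scalar is represented by N Eulerian field values, each advanced with a deterministic relaxation toward the ensemble mean plus a Wiener increment. The drift term and all coefficients are illustrative assumptions.

```python
import random, math

def stochastic_fields(n_fields=500, steps=200, dt=1e-3,
                      tau=0.05, diffusivity=0.5, seed=0):
    """Evolve N 'fields' whose ensemble represents the pdf: the drift
    relaxes each field toward the mean, the noise spreads the pdf."""
    rng = random.Random(seed)
    phi = [1.0] * n_fields            # all fields start at phi = 1
    for _ in range(steps):
        mean = sum(phi) / n_fields
        for n in range(n_fields):
            drift = -(phi[n] - mean) / tau
            dW = rng.gauss(0.0, math.sqrt(dt))
            phi[n] += drift * dt + math.sqrt(2.0 * diffusivity) * dW
    return phi

fields = stochastic_fields()
mean = sum(fields) / len(fields)
var = sum((p - mean) ** 2 for p in fields) / len(fields)
print(mean, var)  # mean stays near 1; variance balances drift and noise
```

    No Lagrangian particles appear: the pdf shape at any point emerges from the spread of the Eulerian field values, which is the appeal of the stochastic-field formulation.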

  4. A cavitation model based on Eulerian stochastic fields

    NASA Astrophysics Data System (ADS)

    Magagnato, F.; Dumond, J.

    2013-12-01

    Non-linear phenomena can often be described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and in particular to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. Firstly, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  5. An Indoor Positioning Method for Smartphones Using Landmarks and PDR.

    PubMed

    Wang, Xi; Jiang, Mingxing; Guo, Zhongwen; Hu, Naijun; Sun, Zhongwei; Liu, Jing

    2016-12-15

    Recently, location-based services (LBS) have become increasingly popular in indoor environments. Among the indoor positioning techniques providing LBS, a fusion approach combining WiFi-based and pedestrian dead reckoning (PDR) techniques is drawing increasing attention from researchers. Although this fusion method performs well in some cases, it still has some limitations, such as heavy computation and inconvenience for real-time use. In this work, we study map information of a given indoor environment, analyze variations of WiFi received signal strength (RSS), define several kinds of indoor landmarks, and then utilize these landmarks to correct accumulated errors derived from PDR. This fusion scheme, called Landmark-aided PDR (LaP), proves to be lightweight and suitable for real-time implementation, as demonstrated by an Android application designed for the experiment. We compared LaP with other PDR-based fusion approaches. Experimental results show that the proposed scheme can achieve a significant improvement, with an average accuracy of 2.17 m.
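
    The landmark correction can be caricatured in one dimension: dead reckoning integrates biased step estimates, and reaching a landmark of known position resets the accumulated drift. The bias, noise level, landmark spacing, and snap rule below are illustrative assumptions, not LaP's actual detection logic.

```python
import random

def pdr_track(n_steps, stride, landmarks, scale_err=1.03,
              step_noise=0.01, snap_radius=1.0, seed=0):
    """1-D dead reckoning with and without landmark resets."""
    rng = random.Random(seed)
    truth = est_plain = est_lap = 0.0
    for _ in range(n_steps):
        truth += stride
        est = stride * scale_err + rng.gauss(0.0, step_noise)
        est_plain += est                  # plain PDR: drift accumulates
        est_lap += est
        for lm in landmarks:
            # Landmark "observed" when the walker is actually at it.
            if abs(truth - lm) < 1e-9 and abs(est_lap - lm) < snap_radius:
                est_lap = lm              # reset the accumulated error
    return truth, est_plain, est_lap

landmarks = [14.0, 35.0, 56.0]   # e.g. doors/corners at known positions
truth, plain, lap = pdr_track(100, 0.7, landmarks)
print(abs(plain - truth), abs(lap - truth))  # landmark-aided error is smaller
```

    Because the step-length bias is multiplicative, the plain estimate drifts without bound, while the landmark-aided estimate is bounded by the drift accumulated since the last landmark.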

  6. A 3D Model for Eddy Current Inspection in Aeronautics: Application to Riveted Structures

    NASA Astrophysics Data System (ADS)

    Paillard, S.; Pichenot, G.; Lambert, M.; Voillaume, H.; Dominguez, N.

    2007-03-01

    The eddy current technique is currently an operational tool for fastener inspection, an important issue in the maintenance of aircraft structures. The industry calls for faster, more sensitive and more reliable NDT techniques for the detection and characterization of potential flaws near rivets. In order to reduce development time and to optimize the design and performance assessment of an inspection procedure, the CEA and EADS have started a collaborative work aiming at extending the modeling features of the CIVA nondestructive simulation platform to handle the configuration of a layered planar structure with a rivet and an embedded flaw nearby. Therefore, an approach based on the Volume Integral Method using the Green dyadic formalism, which greatly increases computation efficiency, has been developed. The first step, modeling the rivet without a flaw as a hole in a multi-stratified structure, has been reached and validated in several configurations against experimental data.

  7. An Indoor Positioning Method for Smartphones Using Landmarks and PDR †

    PubMed Central

    Wang, Xi; Jiang, Mingxing; Guo, Zhongwen; Hu, Naijun; Sun, Zhongwei; Liu, Jing

    2016-01-01

    Recently, location-based services (LBS) have become increasingly popular in indoor environments. Among the indoor positioning techniques providing LBS, a fusion approach combining WiFi-based and pedestrian dead reckoning (PDR) techniques is drawing increasing attention from researchers. Although this fusion method performs well in some cases, it still has some limitations, such as heavy computation and inconvenience for real-time use. In this work, we study map information of a given indoor environment, analyze variations of WiFi received signal strength (RSS), define several kinds of indoor landmarks, and then utilize these landmarks to correct accumulated errors derived from PDR. This fusion scheme, called Landmark-aided PDR (LaP), proves to be lightweight and suitable for real-time implementation, as demonstrated by an Android application designed for the experiment. We compared LaP with other PDR-based fusion approaches. Experimental results show that the proposed scheme can achieve a significant improvement, with an average accuracy of 2.17 m. PMID:27983670

  8. Simultaneous regularization method for the determination of radius distributions from experimental multiangle correlation functions

    NASA Astrophysics Data System (ADS)

    Buttgereit, R.; Roths, T.; Honerkamp, J.; Aberle, L. B.

    2001-10-01

    Dynamic light scattering experiments have become a powerful tool for investigating the dynamical properties of complex fluids. In many applications in both soft matter research and industry, so-called ``real world'' systems are of great interest. Here, the dilution of the investigated system often cannot be changed without introducing measurement artifacts, so one often has to deal with highly concentrated and turbid media. The investigation of such systems requires techniques that suppress the influence of multiple scattering, e.g., cross correlation techniques. However, measurements on turbid as well as highly dilute media lead to data with a low signal-to-noise ratio, which complicates data analysis and leads to unreliable results. In this article a multiangle regularization method is discussed, which copes with the difficulties arising from such samples and enormously enhances the quality of the estimated solution. In order to demonstrate the efficiency of this multiangle regularization method, we applied it to cross correlation functions measured on highly turbid samples.
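
    The gist of simultaneous (multiangle) regularization can be shown with a tiny synthetic inversion: the model matrices from all scattering angles are stacked into one least-squares system, and a Tikhonov penalty stabilizes the solution. The 3-bin kernels below are invented for illustration; the paper's actual kernels relate correlation functions to radius distributions.

```python
def solve(A, b):
    """Tiny Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def tikhonov_multiangle(kernels, data, lam):
    """Solve min sum_angles ||K_a x - g_a||^2 + lam ||x||^2 by stacking
    all angles into one normal-equation system."""
    n = len(kernels[0][0])
    AtA = [[lam if i == j else 0.0 for j in range(n)] for i in range(n)]
    Atb = [0.0] * n
    for K, g in zip(kernels, data):
        for row, gi in zip(K, g):
            for i in range(n):
                Atb[i] += row[i] * gi
                for j in range(n):
                    AtA[i][j] += row[i] * row[j]
    return solve(AtA, Atb)

# Synthetic example: true 3-bin radius distribution seen at two angles.
x_true = [0.2, 0.5, 0.3]
K1 = [[1.0, 0.5, 0.2], [0.3, 1.0, 0.4]]
K2 = [[0.6, 0.2, 1.0], [0.1, 0.7, 0.9]]
g1 = [sum(k * x for k, x in zip(row, x_true)) for row in K1]
g2 = [sum(k * x for k, x in zip(row, x_true)) for row in K2]
x_est = tikhonov_multiangle([K1, K2], [g1, g2], lam=1e-6)
print([round(v, 3) for v in x_est])  # recovers x_true up to the tiny penalty
```

    The point of the simultaneous fit is visible even here: neither angle alone constrains all three bins as well as the stacked system does.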

  9. Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem.

    PubMed

    Rajeswari, M; Amudhavel, J; Pothula, Sujatha; Dhavachelvan, P

    2017-01-01

    The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization scheduling problem that assigns a set of nurses to shifts per day while considering both hard and soft constraints. A novel metaheuristic technique is required for solving the NRP. This work proposes a metaheuristic technique called the Directed Bee Colony Optimization Algorithm, using the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is successfully used to solve the multiobjective scheduling optimization problem. MODBCO is an integration of deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria.
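
    The Modified Nelder-Mead Method itself is not specified in the abstract; for orientation, here is the classic (unmodified) Nelder-Mead simplex iteration that such local searches build on, minimizing the Rosenbrock function. The always-inside contraction is a common textbook simplification.

```python
def nelder_mead(f, start, step=0.5, iters=500,
                alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Classic Nelder-Mead simplex minimization (simplified contraction)."""
    n = len(start)
    simplex = [list(start)]
    for i in range(n):
        v = list(start)
        v[i] += step
        simplex.append(v)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(v[i] for v in simplex[:-1]) / n for i in range(n)]
        refl = [c + alpha * (c - w) for c, w in zip(centroid, worst)]
        if f(refl) < f(best):
            expd = [c + gamma * (r - c) for c, r in zip(centroid, refl)]
            simplex[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [c + rho * (w - c) for c, w in zip(centroid, worst)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink all vertices toward the best one
                simplex = [best] + [
                    [b + sigma * (vi - b) for b, vi in zip(best, v)]
                    for v in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0]

rosen = lambda p: (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2
print(nelder_mead(rosen, [-1.2, 1.0]))  # approaches the minimum at (1, 1)
```

    MODBCO embeds a modified version of this simplex step inside the bees' decision-making loop; the sketch only shows the derivative-free local search itself.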

  10. Semiconductor photoelectrochemistry

    NASA Technical Reports Server (NTRS)

    Buoncristiani, A. M.; Byvik, C. E.

    1983-01-01

    Semiconductor photoelectrochemical reactions are investigated. A model of the charge transport processes in the semiconductor, based on semiconductor device theory, is presented. It incorporates the nonlinear processes characterizing the diffusion and reaction of charge carriers in the semiconductor. The model is used to study conditions limiting useful energy conversion, specifically the saturation of current flow due to high light intensity. Numerical results describing charge distributions in the semiconductor and its effects on the electrolyte are obtained. Experimental results include: an estimate of the rate at which a semiconductor photoelectrode is capable of converting electromagnetic energy into chemical energy; the effect of cell temperature on the efficiency; a method for determining the point of zero zeta potential for macroscopic semiconductor samples; a technique using platinized titanium dioxide powders and ultraviolet radiation to produce chlorine, bromine, and iodine from solutions containing their respective ions; the photoelectrochemical properties of a class of layered compounds called transition metal thiophosphates; and a technique used to produce high conversion efficiency from laser radiation to chemical energy.

  11. Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem

    PubMed Central

    Amudhavel, J.; Pothula, Sujatha; Dhavachelvan, P.

    2017-01-01

    The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization scheduling problem that assigns a set of nurses to shifts per day while considering both hard and soft constraints. A novel metaheuristic technique is required for solving the NRP. This work proposes a metaheuristic technique called the Directed Bee Colony Optimization Algorithm, using the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is successfully used to solve the multiobjective scheduling optimization problem. MODBCO is an integration of deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria. PMID:28473849

  12. Stochastic-field cavitation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumond, J., E-mail: julien.dumond@areva.com; AREVA GmbH, Erlangen, Paul-Gossen-Strasse 100, D-91052 Erlangen; Magagnato, F.

    2013-07-15

    Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian “particles” or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  13. 77 FR 56710 - Proposed Information Collection (Call Center Satisfaction Survey): Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-13

    ... DEPARTMENT OF VETERANS AFFAIRS [OMB Control No. 2900-0744] Proposed Information Collection (Call Center Satisfaction Survey): Comment Request AGENCY: Veterans Benefits Administration, Department of... techniques or the use of other forms of information technology. Title: VBA Call Center Satisfaction Survey...

  14. Comparative Analysis of Sequential Proximal Optimizing Technique Versus Kissing Balloon Inflation Technique in Provisional Bifurcation Stenting: Fractal Coronary Bifurcation Bench Test.

    PubMed

    Finet, Gérard; Derimay, François; Motreff, Pascal; Guerin, Patrice; Pilet, Paul; Ohayon, Jacques; Darremont, Olivier; Rioufol, Gilles

    2015-08-24

    This study used a fractal bifurcation bench model to compare 6 optimization sequences for coronary bifurcation provisional stenting, including 1 novel sequence without kissing balloon inflation (KBI), comprising initial proximal optimizing technique (POT) + side-branch inflation (SBI) + final POT, called "re-POT." In provisional bifurcation stenting, KBI fails to improve the rate of major adverse cardiac events. Proximal geometric deformation increases the rate of in-stent restenosis and target lesion revascularization. A bifurcation bench model was used to compare KBI alone, KBI after POT, KBI with asymmetric inflation pressure after POT, and 2 sequences without KBI: initial POT plus SBI, and initial POT plus SBI with final POT (called "re-POT"). For each protocol, 5 stents were tested using 2 different drug-eluting stent designs: that is, a total of 60 tests. Compared with the classic KBI-only sequence and those associating POT with modified KBI, the re-POT sequence gave significantly (p < 0.05) better geometric results: it reduced SB ostium stent-strut obstruction from 23.2 ± 6.0% to 5.6 ± 8.3%, provided perfect proximal stent apposition with almost perfect circularity (ellipticity index reduced from 1.23 ± 0.02 to 1.04 ± 0.01), reduced proximal area overstretch from 24.2 ± 7.6% to 8.0 ± 0.4%, and reduced global strut malapposition from 40 ± 6.2% to 2.6 ± 1.4%. In comparison with 5 other techniques, the re-POT sequence significantly optimized the final result of provisional coronary bifurcation stenting, maintaining circular geometry while significantly reducing SB ostium strut obstruction and global strut malapposition. These experimental findings confirm that provisional stenting may be optimized more effectively without KBI using re-POT. Copyright © 2015 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  15. Computational comparison of quantum-mechanical models for multistep direct reactions

    NASA Astrophysics Data System (ADS)

    Koning, A. J.; Akkermans, J. M.

    1993-02-01

    We have carried out a computational comparison of all existing quantum-mechanical models for multistep direct (MSD) reactions. The various MSD models, including the so-called Feshbach-Kerman-Koonin, Tamura-Udagawa-Lenske and Nishioka-Yoshida-Weidenmüller models, have been implemented in a single computer system. All model calculations thus use the same set of parameters and the same numerical techniques; only one adjustable parameter is employed. The computational results have been compared with experimental energy spectra and angular distributions for several nuclear reactions, namely, 90Zr(p,p') at 80 MeV, 209Bi(p,p') at 62 MeV, and 93Nb(n,n') at 25.7 MeV. In addition, the results have been compared with the Kalbach systematics and with semiclassical exciton model calculations. All quantum MSD models provide a good fit to the experimental data. In addition, they reproduce the systematics very well and are clearly better than semiclassical model calculations. We furthermore show that the calculated predictions do not differ very strongly between the various quantum MSD models, leading to the conclusion that the simplest MSD model (the Feshbach-Kerman-Koonin model) is adequate for the analysis of experimental data.

  16. Determining the semantic similarities among Gene Ontology terms.

    PubMed

    Taha, Kamal

    2013-05-01

    We present in this paper novel techniques that determine the semantic relationships among Gene Ontology (GO) terms. We implemented these techniques in a prototype system called GoSE, which resides between the user application and the GO database. Given a set S of GO terms, GoSE returns another set S' of GO terms, where each term in S' is semantically related to each term in S. Most current research focuses on determining the semantic similarities among GO terms based solely on their IDs and proximity to one another in the GO graph structure, while overlooking the contexts of the terms, which may lead to erroneous results. The context of a GO term T is the set of other terms whose existence in the GO graph structure is dependent on T. We propose novel techniques that determine the contexts of terms based on the concept of existence dependency, and present a stack-based sort-merge algorithm employing these techniques for determining the semantic similarities among GO terms. We evaluated GoSE experimentally and compared it with three existing methods. The results of measuring the semantic similarities among genes in KEGG and Pfam pathways, retrieved from the DBGET and Sanger Pfam databases, respectively, have shown that our method outperforms the other three methods in recall and precision.
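
    As a toy illustration of context-based comparison (not GoSE's existence-dependency computation or its sort-merge algorithm), suppose each term's context is the set of terms that depend on it; terms can then be compared by the overlap of those sets rather than by graph proximity alone. The GO IDs and context sets below are made up.

```python
def context_similarity(contexts, t1, t2):
    """Jaccard overlap of the terms whose existence depends on t1 / t2."""
    c1, c2 = contexts[t1], contexts[t2]
    if not c1 and not c2:
        return 0.0
    return len(c1 & c2) / len(c1 | c2)

# Hypothetical contexts: term -> set of terms existence-dependent on it.
contexts = {
    "GO:0008150": {"GO:0009987", "GO:0065007", "GO:0008152"},
    "GO:0009987": {"GO:0008152", "GO:0065007", "GO:0050789"},
    "GO:0003674": {"GO:0005488"},
}
print(context_similarity(contexts, "GO:0008150", "GO:0009987"))  # 0.5
print(context_similarity(contexts, "GO:0008150", "GO:0003674"))  # 0.0
```

    Two terms far apart in the GO graph can still score high if their dependent-term sets overlap, which is the behavior proximity-only measures miss.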

  17. Dynamic frame resizing with convolutional neural network for efficient video compression

    NASA Astrophysics Data System (ADS)

    Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon

    2017-09-01

    In the past, video codecs such as VC-1 and H.263 used a technique that encodes reduced-resolution video and restores the original resolution at the decoder to improve coding efficiency. The techniques of VC-1 and H.263 Annex Q are called dynamic frame resizing and reduced-resolution update mode, respectively. However, these techniques have not been widely used due to limited performance improvements that appear only under specific conditions. In this paper, a video frame resizing (reduce/restore) technique based on machine learning is proposed to improve coding efficiency. The proposed method produces low-resolution video with a convolutional neural network (CNN) at the encoder and reconstructs the original resolution with a CNN at the decoder. The proposed method shows improved subjective performance on the high-resolution videos that dominate current consumption. In order to assess the subjective quality of the proposed method, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among many subjective measurement tools, was used as the subjective metric. Moreover, to assess general performance, diverse bitrates were tested. Experimental results showed that the BD-rate based on VMAF was improved by about 51% compared to conventional HEVC; VMAF values were significantly improved especially at low bitrates. When the method was subjectively tested, it also showed better subjective visual quality at similar bit rates.

  18. Comparative Analysis of CNV Calling Algorithms: Literature Survey and a Case Study Using Bovine High-Density SNP Data.

    PubMed

    Xu, Lingyang; Hou, Yali; Bickhart, Derek M; Song, Jiuzhou; Liu, George E

    2013-06-25

    Copy number variations (CNVs) are gains and losses of genomic sequence between two individuals of a species when compared to a reference genome. The data from single nucleotide polymorphism (SNP) microarrays are now routinely used for genotyping, but they also can be utilized for copy number detection. Substantial progress has been made in array design and CNV calling algorithms and at least 10 comparison studies in humans have been published to assess them. In this review, we first survey the literature on existing microarray platforms and CNV calling algorithms. We then examine a number of CNV calling tools to evaluate their impacts using bovine high-density SNP data. Large incongruities in the results from different CNV calling tools highlight the need for standardizing array data collection, quality assessment and experimental validation. Only after careful experimental design and rigorous data filtering can the impacts of CNVs on both normal phenotypic variability and disease susceptibility be fully revealed.

  19. Comparison of fresh fuel experimental measurements to MCNPX calculations using self-interrogation neutron resonance densitometry

    NASA Astrophysics Data System (ADS)

    LaFleur, Adrienne M.; Charlton, William S.; Menlove, Howard O.; Swinhoe, Martyn T.

    2012-07-01

    A new non-destructive assay technique called Self-Interrogation Neutron Resonance Densitometry (SINRD) is currently being developed at Los Alamos National Laboratory (LANL) to improve existing nuclear safeguards measurements for Light Water Reactor (LWR) fuel assemblies. SINRD consists of four 235U fission chambers (FCs): bare FC, boron carbide shielded FC, Gd covered FC, and Cd covered FC. Ratios of different FCs are used to determine the amount of resonance absorption from 235U in the fuel assembly. The sensitivity of this technique is based on using the same fissile materials in the FCs as are present in the fuel because the effect of resonance absorption lines in the transmitted flux is amplified by the corresponding (n,f) reaction peaks in the fission chamber. In this work, experimental measurements were performed in air with SINRD using a reference Pressurized Water Reactor (PWR) 15×15 low enriched uranium (LEU) fresh fuel assembly at LANL. The purpose of this experiment was to assess the following capabilities of SINRD: (1) ability to measure the effective 235U enrichment of the PWR fresh LEU fuel assembly and (2) sensitivity and penetrability to the removal of fuel pins from an assembly. These measurements were compared to Monte Carlo N-Particle eXtended transport code (MCNPX) simulations to verify the accuracy of the MCNPX model of SINRD. The reproducibility of experimental measurements via MCNPX simulations is essential to validating the results and conclusions obtained from the simulations of SINRD for LWR spent fuel assemblies.

  20. A proposed technique for vehicle tracking, direction, and speed determination

    NASA Astrophysics Data System (ADS)

    Fisher, Paul S.; Angaye, Cleopas O.; Fisher, Howard P.

    2004-12-01

    A technique for recognition of vehicles in terms of direction, distance, and rate of change is presented. This represents very early work on this problem, with significant hurdles still to be addressed; these are discussed in the paper. However, preliminary results also show promise for this technique in security and defense environments where the penetration of a perimeter is of concern. The material described herein indicates a process whereby the protection of a barrier could be augmented by computers and installed cameras assisting the individuals charged with this responsibility. The technique we employ is called Finite Inductive Sequences (FI) and is proposed as a means for eliminating data requiring storage and recognition where conventional mathematical models don't eliminate enough and statistical models eliminate too much. FI is a simple idea based upon a symbol push-out technique that allows the order (inductive base) of the model to be set to an a priori value for all derived rules. The rules are obtained from exemplar data sets by a technique called Factoring, yielding a table of rules called a Ruling. These rules can then be used in pattern recognition applications such as the one described in this paper.
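
    The abstract only outlines FI; the toy below shows how a fixed inductive base k could turn an exemplar into a table of rules (a "ruling") and how those rules let predictable symbols be eliminated rather than stored. The conflict-handling rule and function names are our own illustrative assumptions, not the authors' Factoring procedure.

```python
def factor(sequence, k=2):
    """Build an order-k rule table: context -> next symbol, keeping only
    contexts that are unambiguous in the exemplar."""
    seen = {}
    for i in range(len(sequence) - k):
        ctx, nxt = sequence[i:i + k], sequence[i + k]
        if ctx in seen and seen[ctx] != nxt:
            seen[ctx] = None          # conflicting evidence: drop the rule
        elif ctx not in seen:
            seen[ctx] = nxt
    return {c: n for c, n in seen.items() if n is not None}

def eliminate(sequence, ruling, k=2):
    """Return the symbols the ruling fails to predict (must be stored)."""
    residue = list(sequence[:k])      # the first k symbols are always kept
    for i in range(k, len(sequence)):
        if ruling.get(sequence[i - k:i]) != sequence[i]:
            residue.append(sequence[i])
    return "".join(residue)

ruling = factor("abcabcabd")
print(ruling)                          # unambiguous order-2 rules
print(eliminate("abcabcabd", ruling))  # only the unpredicted symbols remain
```

    Symbols that the rules predict need not be stored, which is the data-elimination behavior the abstract describes between "not enough" (exact models) and "too much" (statistical models).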

  1. Development of lightweight aluminum compression panels reinforced by boron-epoxy infiltrated extrusions

    NASA Technical Reports Server (NTRS)

    Roy, P. A.; Mcelman, J. A.; Henshaw, J.

    1973-01-01

    Analytical and experimental studies were performed to evaluate the structural efficiencies afforded by the selective reinforcement of conventional aluminum compression panels with unidirectional boron-epoxy composite materials. A unique approach to selective reinforcement, called boron/epoxy infiltration, was utilized. This technique uses extruded metal sections with preformed hollow voids into which unidirectional boron filaments are drawn and subsequently infiltrated with resin to form an integral part. Simplified analytical models were developed to investigate the behavior of stiffener webs with reinforced flanges. Theoretical results are presented demonstrating the effects of transverse shear, reinforcement, flange eccentricity, and torsional stiffness in such construction. A series of 55 tests was conducted on boron-infiltrated rods and extruded structural sections.

  2. Intelligent Foreign Particle Inspection Machine for Injection Liquid Examination Based on Modified Pulse-Coupled Neural Networks

    PubMed Central

    Ge, Ji; Wang, YaoNan; Zhou, BoWen; Zhang, Hui

    2009-01-01

    A biologically inspired spiking neural network model, called pulse-coupled neural networks (PCNN), has been applied in an automatic inspection machine to detect visible foreign particles intermingled in glucose or sodium chloride injection liquids. Proper mechanisms and improved spin/stop techniques are proposed to avoid the appearance of air bubbles, which would increase the algorithms' complexity. A modified PCNN is adopted to segment the difference images, judging the existence of foreign particles according to the continuity and smoothness properties of their moving traces. Preliminary experimental results indicate that the inspection machine can detect visible foreign particles effectively, with detection speed, accuracy, and correct detection rate also satisfying the needs of medicine preparation. PMID:22412318

  3. Dual arm coordination and control

    NASA Technical Reports Server (NTRS)

    Hayati, Samad; Tso, Kam; Lee, Thomas

    1989-01-01

    A generalized master/slave technique and experimental results for coordinated control of two arms rigidly grasping an object are described. An interactive program has been developed to allow a user the flexibility to select appropriate control modes for a given experiment; this interface also allows control gain adjustments. Results are presented from several experiments performed on this system to demonstrate its capabilities, such as transporting an object with or without induced internal forces and moving a constrained object. The system is further developed to achieve a so-called shared control mode, in which an operator specifies the free-motion trajectory for a point on the object of manipulation via a joystick while the autonomous control system is used for coordination and control of the arms.

  4. Development of a special-purpose test surface guided by uncertainty analysis - Introduction of a new uncertainty analysis step

    NASA Technical Reports Server (NTRS)

    Wang, T.; Simon, T. W.

    1988-01-01

    Development of a recent experimental program to investigate the effects of streamwise curvature on boundary layer transition required making a bendable, heated, and instrumented test wall, a rather nonconventional surface. The present paper describes this surface, the design choices made in its development, and how uncertainty analysis was used, beginning early in the test program, to make such design choices. Published uncertainty analysis techniques were found to be of great value, but it became clear that another step, herein called the pre-test analysis, would aid the program development. Finally, it is shown how the uncertainty analysis was used to determine whether the test surface was qualified for service.

  5. Interior noise reduction by alternate resonance tuning

    NASA Technical Reports Server (NTRS)

    Bliss, Donald B.; Gottwald, James A.; Bryce, Jeffrey W.

    1987-01-01

    Existing interior noise reduction techniques for aircraft fuselages perform reasonably well at higher frequencies, but are inadequate at low frequencies, particularly with respect to the low blade-passage harmonics with high forcing levels found in propeller aircraft. A method is studied which considers aircraft fuselages lined with panels alternately tuned to frequencies above and below the frequency that must be attenuated. Adjacent panels would oscillate at equal amplitude, giving equal acoustic source strength, but with opposite phase. Provided these adjacent panels are acoustically compact, the resulting cancellation causes the interior acoustic modes to be cut off and therefore nonpropagating and evanescent. This interior noise reduction method, called Alternate Resonance Tuning (ART), is being investigated theoretically and experimentally. Progress to date is discussed.

  6. CleAir Monitoring System for Particulate Matter: A Case in the Napoleonic Museum in Rome

    PubMed Central

    Bonacquisti, Valerio; Di Michele, Marta; Frasca, Francesca; Chianese, Angelo; Siani, Anna Maria

    2017-01-01

Monitoring the air particulate concentration both outdoors and indoors has become an increasingly relevant issue over the past few decades. An innovative, fully automatic monitoring system called CleAir is presented. The system goes beyond the traditional technique (gravimetric analysis) by allowing a double monitoring approach: traditional gravimetric analysis as well as optical spectroscopic analysis of the scattering on the same filters in steady-state conditions. The experimental data are interpreted in terms of light percolation through highly scattering matter by means of the stretched exponential evolution. CleAir has been applied to investigate the daily distribution of particulate matter within the Napoleonic Museum in Rome as a test case. PMID:28892016
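    The stretched exponential evolution mentioned above, I(t) = I0·exp(-(t/τ)^β), can be fitted by linearizing it: with I0 known, ln(-ln(I/I0)) = β·ln t − β·ln τ is a straight line. The sketch below uses invented parameters purely for illustration.

    ```python
    import numpy as np

    # Kohlrausch stretched exponential: I(t) = I0 * exp(-(t/tau)^beta).
    # A straight-line fit of ln(-ln(I/I0)) against ln(t) recovers beta
    # (slope) and tau (from the intercept).
    i0, tau, beta = 1.0, 2.5, 0.7          # assumed "true" parameters
    t = np.linspace(0.1, 10.0, 200)
    intensity = i0 * np.exp(-(t / tau) ** beta)

    x = np.log(t)
    y = np.log(-np.log(intensity / i0))
    slope, intercept = np.polyfit(x, y, 1)
    beta_fit = slope
    tau_fit = np.exp(-intercept / slope)
    ```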

  7. Wave-variable framework for networked robotic systems with time delays and packet losses

    NASA Astrophysics Data System (ADS)

    Puah, Seng-Ming; Liu, Yen-Chen

    2017-05-01

This paper investigates the problem of networked control of nonlinear robotic manipulators under time delays and packet losses by using a passivity technique. With the utilisation of wave variables and a passive remote controller, the networked robotic system is demonstrated to be stable with guaranteed position regulation. For the input/output signals of robotic systems, a discretisation block is exploited to convert continuous-time signals to discrete-time signals, and vice versa. Subsequently, we propose a packet management scheme, called wave-variable modulation, to cope with the proposed networked robotic system under time delays and packet losses. Numerical examples and experimental results are presented to demonstrate the performance of the proposed wave-variable-based networked robotic systems.
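    The wave-variable transformation at the heart of such schemes is compact: with wave impedance b, u = (b·ẋ + F)/√(2b) and v = (b·ẋ − F)/√(2b), and the power satisfies F·ẋ = (u² − v²)/2, which is what makes the delayed channel passive. A sketch verifying the round trip and the power identity (the numerical values are arbitrary):

    ```python
    import numpy as np

    def to_wave(velocity, force, b=1.0):
        # Standard wave-variable transformation; b is the wave impedance.
        # u carries the forward wave, v the returning wave.
        u = (b * velocity + force) / np.sqrt(2 * b)
        v = (b * velocity - force) / np.sqrt(2 * b)
        return u, v

    def from_wave(u, v, b=1.0):
        # Inverse transformation back to power variables.
        velocity = (u + v) / np.sqrt(2 * b)
        force = np.sqrt(b / 2) * (u - v)
        return velocity, force

    # Round-trip check and the passivity identity P = F*xdot = (u^2 - v^2)/2
    xdot, F, b = 0.3, 1.2, 2.0
    u, v = to_wave(xdot, F, b)
    xdot2, F2 = from_wave(u, v, b)
    power_direct = F * xdot
    power_wave = 0.5 * (u ** 2 - v ** 2)
    ```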

  8. Adaptive compressive ghost imaging based on wavelet trees and sparse representation.

    PubMed

    Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie

    2014-03-24

    Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.
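    Compressed-sensing recovery can be illustrated with a simple greedy solver. The sketch below uses orthogonal matching pursuit on a random Gaussian sensing matrix; this stands in for, and is far simpler than, the adaptive wavelet-tree scheme of the paper, and all dimensions and values are illustrative.

    ```python
    import numpy as np

    def omp(A, y, sparsity):
        # Orthogonal Matching Pursuit: greedily pick the column of A most
        # correlated with the residual, then re-fit by least squares.
        residual, support = y.copy(), []
        for _ in range(sparsity):
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(1)
    m, n, k = 50, 100, 3                       # 50 measurements, 3 nonzeros
    A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
    x_true = np.zeros(n)
    x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]     # sparse "image"
    y = A @ x_true                             # noiseless measurements
    x_rec = omp(A, y, k)
    err = np.linalg.norm(x_rec - x_true)
    ```

    With noiseless measurements and a well-conditioned random matrix, the sparse signal is recovered essentially exactly from half as many measurements as unknowns.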

  9. Digital super-resolution holographic data storage based on Hermitian symmetry for achieving high areal density.

    PubMed

    Nobukawa, Teruyoshi; Nomura, Takanori

    2017-01-23

Digital super-resolution holographic data storage based on Hermitian symmetry is proposed to store digital data in a tiny area of a medium. In general, reducing the recording area with an aperture leads to an improvement in the storage capacity of holographic data storage. Conventional holographic data storage systems, however, have a limitation in reducing the recording area. This limitation is called the Nyquist size. Unlike the conventional systems, our proposed system can overcome this limitation with the help of a digital holographic technique and digital signal processing. Experimental results show that the proposed system can record and retrieve a hologram in a smaller area than the Nyquist size on the basis of Hermitian symmetry.
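    The Hermitian-symmetry property underlying the proposal is easy to demonstrate numerically: the Fourier spectrum of real-valued data satisfies X[−k] = conj(X[k]), so half the complex spectrum fully determines the signal. A sketch with a random real "data page" (sizes are arbitrary):

    ```python
    import numpy as np

    # For real-valued data the Fourier spectrum is Hermitian-symmetric,
    # so only the non-redundant half need be stored/recorded.
    rng = np.random.default_rng(0)
    n = 64
    data = rng.random(n)                      # stand-in for a real data page
    spectrum = np.fft.fft(data)

    half = spectrum[: n // 2 + 1]             # keep the non-redundant half
    reconstructed = np.fft.irfft(half, n=n)   # rebuild via Hermitian symmetry
    err = np.max(np.abs(reconstructed - data))
    ```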

  10. The Influence of Bearing-Down Technique on the Fetal Heart Rate during the Second Stage of Labor.

    NASA Astrophysics Data System (ADS)

    Perlis, Deborah Woolley

This experimental study contrasted the effects of sustained bearing-down efforts with short bearing-down efforts during the first twelve contractions of the second stage of labor. A single-subject design with intrasubject replication was used to compare the incidence, duration, and amplitude of fetal heart rate decelerations, as well as the beat-to-beat variability of those decelerations. Neonatal outcome was evaluated with umbilical arterial cord blood pH values and the one- and five-minute APGAR scores. Thirty-two nulliparous women alternated the use of vigorous, sustained Valsalva-style bearing-down efforts with shorter efforts called minipushes every three contractions during the second stage of labor. Sixteen women began the second stage using the Valsalva-style bearing-down technique; sixteen began the second stage using the minipush. The fetal heart rate was recorded by an internal fetal scalp electrode. Uterine contractility was measured by an internal uterine pressure catheter. A repeated-measures MANOVA showed a significant interaction between the order of implementation of the bearing-down techniques and the amplitude of the fetal heart rate decelerations. A similar comparison of the duration of the decelerations showed no significant differences between the two bearing-down techniques. Likewise, analysis of the incidence of fetal heart rate decelerations and the magnitude of the beat-to-beat variability revealed no significant differences between the two techniques.

  11. Wild Birds Use an Ordering Rule to Decode Novel Call Sequences.

    PubMed

    Suzuki, Toshitaka N; Wheatcroft, David; Griesser, Michael

    2017-08-07

    The generative power of human language depends on grammatical rules, such as word ordering, that allow us to produce and comprehend even novel combinations of words [1-3]. Several species of birds and mammals produce sequences of calls [4-6], and, like words in human sentences, their order may influence receiver responses [7]. However, it is unknown whether animals use call ordering to extract meaning from truly novel sequences. Here, we use a novel experimental approach to test this in a wild bird species, the Japanese tit (Parus minor). Japanese tits are attracted to mobbing a predator when they hear conspecific alert and recruitment calls ordered as alert-recruitment sequences [7]. They also approach in response to recruitment calls of heterospecific individuals in mixed-species flocks [8, 9]. Using experimental playbacks, we assess their responses to artificial sequences in which their own alert calls are combined into different orderings with heterospecific recruitment calls. We find that Japanese tits respond similarly to mixed-species alert-recruitment call sequences and to their own alert-recruitment sequences. Importantly, however, tits rarely respond to mixed-species sequences in which the call order is reversed. Thus, Japanese tits extract a compound meaning from novel call sequences using an ordering rule. These results demonstrate a new parallel between animal communication systems and human language, opening new avenues for exploring the evolution of ordering rules and compositionality in animal vocal sequences. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Successive spectrophotometric resolution as a novel technique for the analysis of ternary mixtures of pharmaceuticals

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Tawakkol, Shereen M.; Fahmy, Nesma M.; Shehata, Mostafa A.

    2014-03-01

A novel spectrophotometric technique was developed for the simultaneous determination of ternary mixtures, without prior separation steps. This technique was called the successive spectrophotometric resolution technique. The technique was based on either successive ratio subtraction or successive derivative subtraction. The mathematical explanation of the procedure was illustrated. In order to evaluate the applicability of the methods, both model data and experimental data were tested. The results from experimental data related to the simultaneous spectrophotometric determination of lidocaine hydrochloride (LH), calcium dobesilate (CD) and dexamethasone acetate (DA), in the presence of hydroquinone (HQ), the degradation product of calcium dobesilate, were discussed. The proposed drugs were determined at their maxima 202 nm, 305 nm, 239 nm and 225 nm for LH, CD, DA and HQ respectively, by successive ratio subtraction coupled with the constant multiplication method to obtain the zero order absorption spectra, while by applying successive derivative subtraction they were determined at their first derivative spectra at 210 nm for LH, 320 nm or P292-320 for CD, 256 nm or P225-252 for DA and P220-233 for HQ respectively. The calibration curves were linear over the concentration range of 2-20 μg/mL for both LH and DA, 6-50 μg/mL for CD, and 3-40 μg/mL for HQ. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of a pharmaceutical formulation containing the cited drugs with no interference from other dosage form additives. The proposed methods were validated according to the ICH guidelines. The obtained results were statistically compared with those of the official BP methods for LH, DA, and CD, and with the official USP method for HQ, using Student's t-test, F-test, and one-way ANOVA, showing no significant difference with respect to accuracy and precision.
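    The core ratio-subtraction step can be illustrated numerically: divide the mixture spectrum by the divisor component's spectrum, read off the constant plateau in a region where only the divisor absorbs, subtract it, and multiply back. The sketch below uses synthetic Gaussian bands and invented concentrations in place of the real drug spectra.

    ```python
    import numpy as np

    def gauss(wl, center, width):
        # Idealized absorption band (illustrative stand-in for a spectrum)
        return np.exp(-((wl - center) / width) ** 2)

    wl = np.linspace(200, 400, 2001)           # wavelength grid, nm
    eX = gauss(wl, 240, 12)                    # unit spectrum of component X
    eY = gauss(wl, 320, 15)                    # unit spectrum of component Y
    mixture = 0.8 * eX + 0.5 * eY              # mixture, "unknown" a=0.8, b=0.5

    # Ratio subtraction: mixture/eY = a*eX/eY + b. Where X does not absorb
    # (near 320 nm here) the ratio plateaus at the constant b; subtract it
    # and multiply back by eY to isolate X's contribution.
    ratio = mixture / eY
    plateau = ratio[np.argmin(np.abs(wl - 320))]
    isolated_X = (ratio - plateau) * eY
    idx = np.argmin(np.abs(wl - 240))
    a_est = isolated_X[idx] / eX[idx]          # recovered concentration of X
    ```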

  13. Convenience experimentation.

    PubMed

    Krohs, Ulrich

    2012-03-01

Systems biology aims at explaining life processes by means of detailed models of molecular networks, mainly on the whole-cell scale. The whole-cell perspective distinguishes the new field of systems biology from earlier approaches within molecular cell biology. The shift was made possible by the high-throughput methods that were developed for gathering 'omic' (genomic, proteomic, etc.) data. These new techniques are made commercially available as semi-automatic analytic equipment, ready-made analytic kits and probe arrays. There is a whole industry of supplies for what may be called convenience experimentation. My paper inquires into some epistemic consequences of strong reliance on convenience experimentation in systems biology. In times when experimentation was automated to a lesser degree, modeling and in part even experimentation could be understood fairly well as either being driven by hypotheses, thus proceeding by the testing of hypotheses, or as being performed in an exploratory mode, intended to sharpen concepts or initially vague phenomena. In systems biology, the situation is dramatically different. Data collection became so easy (though not cheap) that experimentation is, to a high degree, driven by convenience equipment, and model building is driven by the vast amount of data that is produced by convenience experimentation. This results in a shift in the mode of science. The paper shows that convenience-driven science is not primarily hypothesis-testing, nor is it in an exploratory mode. It rather proceeds in a gathering mode. This shift demands another shift in the mode of evaluation, which now becomes an exploratory endeavor, in response to the superabundance of gathered data. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. A Note on Improving Process Efficiency in Panel Surveys with Paradata

    ERIC Educational Resources Information Center

    Kreuter, Frauke; Müller, Gerrit

    2015-01-01

    Call scheduling is a challenge for surveys around the world. Unlike cross-sectional surveys, panel surveys can use information from prior waves to enhance call-scheduling algorithms. Past observational studies showed the benefit of calling panel cases at times that had been successful in the past. This article is the first to experimentally assign…

  15. AI in CALL--Artificially Inflated or Almost Imminent?

    ERIC Educational Resources Information Center

    Schulze, Mathias

    2008-01-01

    The application of techniques from artificial intelligence (AI) to CALL has commonly been referred to as intelligent CALL (ICALL). ICALL is only slightly older than the "CALICO Journal", and this paper looks back at a quarter century of published research mainly in North America and by North American scholars. This "inventory…

  16. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy

    DOE PAGES

    Tremsin, Anton S.; Gao, Yan; Dial, Laura C.; ...

    2016-07-08

Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. Additionally, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.

  17. Interband coding extension of the new lossless JPEG standard

    NASA Astrophysics Data System (ADS)

    Memon, Nasir D.; Wu, Xiaolin; Sippy, V.; Miller, G.

    1997-01-01

Due to the perceived inadequacy of current standards for lossless image compression, the JPEG committee of the International Standards Organization (ISO) has been developing a new standard. A baseline algorithm, called JPEG-LS, has already been completed and is awaiting approval by national bodies. The JPEG-LS baseline algorithm, despite being simple, is surprisingly efficient, and provides compression performance that is within a few percent of the best and more sophisticated techniques reported in the literature. Extensive experimentation performed by the authors seems to indicate that an overall improvement of more than 10 percent in compression performance will be difficult to obtain even at the cost of great complexity; at least not with traditional approaches to lossless image compression. However, if we allow inter-band decorrelation and modeling in the baseline algorithm, nearly 30 percent improvement in compression gains for specific images in the test set becomes possible with a modest computational cost. In this paper we propose and investigate a few techniques for exploiting inter-band correlations in multi-band images. These techniques have been designed within the framework of the baseline algorithm, and require minimal changes to the basic architecture of the baseline, retaining its essential simplicity.
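    The intra-band baseline that these inter-band techniques extend predicts each pixel from its causal neighbors with the LOCO-I median edge detector (MED) before entropy coding the residuals. A minimal sketch of that predictor on a toy image (boundary neighbors assumed zero for simplicity; real JPEG-LS boundary handling differs):

    ```python
    import numpy as np

    def med_predict(a, b, c):
        # JPEG-LS (LOCO-I) median edge detector:
        # a = left, b = above, c = upper-left neighbor.
        if c >= max(a, b):
            return min(a, b)
        if c <= min(a, b):
            return max(a, b)
        return a + b - c

    def residuals(img):
        # Prediction residuals for one band; their low entropy is what
        # the subsequent entropy coder exploits.
        h, w = img.shape
        res = np.zeros_like(img, dtype=int)
        for y in range(h):
            for x in range(w):
                a = int(img[y, x - 1]) if x > 0 else 0
                b = int(img[y - 1, x]) if y > 0 else 0
                c = int(img[y - 1, x - 1]) if (x > 0 and y > 0) else 0
                res[y, x] = int(img[y, x]) - med_predict(a, b, c)
        return res

    img = np.array([[10, 10, 12],
                    [10, 11, 13],
                    [11, 12, 14]])
    res = residuals(img)   # small residuals in this smooth region
    ```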

  18. A New Heuristic Anonymization Technique for Privacy Preserved Datasets Publication on Cloud Computing

    NASA Astrophysics Data System (ADS)

    Aldeen Yousra, S.; Mazleena, Salleh

    2018-05-01

Recent advancements in Information and Communication Technologies (ICT) have placed heavy demands on cloud services for sharing users' private data. Data from various organizations are a vital information source for analysis and research. Generally, this sensitive or private information involves medical, census, voter registration, social network, and customer services data. A primary concern of cloud service providers in data publishing is to hide the sensitive information of individuals. One of the cloud services that fulfills these confidentiality concerns is Privacy Preserving Data Mining (PPDM). The PPDM service in Cloud Computing (CC) enables data publishing with minimized distortion and absolute privacy. In this method, datasets are anonymized via generalization to accomplish the privacy requirements. However, the well-known privacy preserving data mining technique called K-anonymity suffers from several limitations. To surmount those shortcomings, we propose a new heuristic anonymization framework for preserving the privacy of sensitive datasets when publishing on the cloud. The advantages of the K-anonymity, L-diversity and (α, k)-anonymity methods for efficient information utilization and privacy protection are emphasized. Experimental results revealed that the developed technique outperforms the K-anonymity, L-diversity, and (α, k)-anonymity measures.
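    The k-anonymity property that generalization aims for is simple to check: every combination of quasi-identifier values must be shared by at least k records. A sketch with hypothetical, already-generalized records (bucketed ages, truncated ZIP codes — all invented for illustration):

    ```python
    from collections import Counter

    def is_k_anonymous(records, quasi_identifiers, k):
        # Count how many records share each quasi-identifier combination;
        # the table is k-anonymous if every group has at least k members.
        groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
        return all(count >= k for count in groups.values())

    # Hypothetical generalized records prior to publication
    records = [
        {"age": "20-30", "zip": "441**", "disease": "flu"},
        {"age": "20-30", "zip": "441**", "disease": "cold"},
        {"age": "30-40", "zip": "442**", "disease": "flu"},
        {"age": "30-40", "zip": "442**", "disease": "asthma"},
    ]
    ok2 = is_k_anonymous(records, ["age", "zip"], 2)   # holds
    ok3 = is_k_anonymous(records, ["age", "zip"], 3)   # fails
    ```

    L-diversity and (α, k)-anonymity add further conditions on the sensitive attribute within each group, which is why they are compared against plain K-anonymity here.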

  19. Sentiment analysis: a comparison of deep learning neural network algorithm with SVM and naïve Bayes for Indonesian text

    NASA Astrophysics Data System (ADS)

    Calvin Frans Mariel, Wahyu; Mariyah, Siti; Pramana, Setia

    2018-03-01

Deep learning is a new era of machine learning techniques that essentially imitate the structure and function of the human brain. It is a development of deeper Artificial Neural Networks (ANN) that use more than one hidden layer. A Deep Learning Neural Network has a great ability to recognize patterns in various data types such as pictures, audio, text, and many more. In this paper, the authors try to measure this algorithm's ability by applying it to text classification. The classification task herein is done by considering the content of sentiment in a text, which is also called sentiment analysis. By using several combinations of text preprocessing and feature extraction techniques, we aim to compare the modelling results of the Deep Learning Neural Network with those of two other commonly used algorithms, naïve Bayes and the Support Vector Machine (SVM). This algorithm comparison uses Indonesian text data with balanced and unbalanced sentiment composition. Based on the experimental simulation, the Deep Learning Neural Network clearly outperforms naïve Bayes and the SVM and offers a better F-1 score, while the feature extraction technique that most improves the modelling results is the bigram.
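    One of the baselines compared here, multinomial naïve Bayes over bigram features, can be sketched in a few lines of plain Python. The tiny corpus below is invented for illustration and is not the paper's Indonesian dataset.

    ```python
    import math
    from collections import Counter, defaultdict

    def bigrams(text):
        # Bigram feature extraction: pairs of adjacent tokens
        toks = text.lower().split()
        return list(zip(toks, toks[1:]))

    class NaiveBayes:
        # Multinomial naive Bayes over bigrams with Laplace smoothing.
        def fit(self, docs, labels):
            self.counts = defaultdict(Counter)
            self.doc_counts = Counter(labels)
            for doc, lab in zip(docs, labels):
                self.counts[lab].update(bigrams(doc))
            self.vocab = {b for c in self.counts.values() for b in c}
            return self

        def predict(self, doc):
            def score(lab):
                total = sum(self.counts[lab].values())
                prior = math.log(self.doc_counts[lab] / sum(self.doc_counts.values()))
                lik = sum(
                    math.log((self.counts[lab][b] + 1) / (total + len(self.vocab)))
                    for b in bigrams(doc))
                return prior + lik
            return max(self.doc_counts, key=score)

    # Illustrative toy corpus (Indonesian-flavoured, made up)
    docs = ["sangat bagus sekali", "produk bagus sekali",
            "sangat buruk sekali", "produk buruk sekali"]
    labels = ["pos", "pos", "neg", "neg"]
    clf = NaiveBayes().fit(docs, labels)
    pred = clf.predict("bagus sekali")
    ```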

  20. The Inception of OMA in the Development of Modal Testing Technology for Wind Turbines

    NASA Technical Reports Server (NTRS)

James, George H., III; Carne, Thomas G.

    2008-01-01

Wind turbines are immense, flexible structures with aerodynamic forces acting on the rotating blades at harmonics of the turbine rotational frequency, which are comparable to the modal frequencies of the structure. Predicting and experimentally measuring the modal frequencies of wind turbines has been important to their successful design and operation. Performing modal tests on wind turbine structures over 100 meters tall is a substantial challenge, which has inspired innovative developments in modal test technology. For wind turbines, a further complication is that the modal frequencies are dependent on the turbine rotation speed. The history and development of a new technique for acquiring the modal parameters using output-only response data, called the Natural Excitation Technique (NExT), will be reviewed, showing historical tests and techniques. The initial attempts at output-only modal testing began in the late 1980's with the development of NExT in the 1990's. NExT was a predecessor to OMA, developed to overcome the challenges of testing immense structures excited by environmental inputs. We will trace the difficulties and successes of wind turbine modal testing from 1982 to the present. Keywords: OMA, Modal Analysis, NExT, Wind Turbines, Wind Excitation
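    The key NExT observation is that correlation functions of responses to broadband ambient excitation have the same form as free-decay responses, so standard modal identification applies without measuring the input. This can be illustrated on a simulated single-mode structure; the mode, damping, and integration scheme below are illustrative choices, not from a turbine test.

    ```python
    import numpy as np

    # Simulate a 2 Hz, 2%-damped oscillator driven by white noise
    # (a stand-in for wind excitation), then locate the modal frequency
    # from the response power spectrum, i.e. the Fourier transform of
    # the response autocorrelation (Wiener-Khinchin).
    fn, zeta, fs, n = 2.0, 0.02, 100.0, 2 ** 15
    wn = 2 * np.pi * fn
    dt = 1.0 / fs
    rng = np.random.default_rng(0)
    x = v = 0.0
    resp = np.empty(n)
    for i in range(n):                         # semi-implicit Euler steps
        a = rng.normal() - 2 * zeta * wn * v - wn ** 2 * x
        v += a * dt
        x += v * dt
        resp[i] = x

    spec = np.abs(np.fft.rfft(resp - resp.mean())) ** 2
    freqs = np.fft.rfftfreq(n, dt)
    f_est = freqs[np.argmax(spec)]             # identified modal frequency
    ```

    In NExT proper, the cross-correlation time histories themselves are fed to a time-domain modal identification algorithm; the spectral peak here is only the simplest output-only estimate.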

  1. Recognition of human activity characteristics based on state transitions modeling technique

    NASA Astrophysics Data System (ADS)

    Elangovan, Vinayak; Shirkhodaie, Amir

    2012-06-01

Human Activity Discovery & Recognition (HADR) is a complex, diverse and challenging task, yet an active area of ongoing research in the Department of Defense. By detecting, tracking, and characterizing cohesive human interactional activity patterns, potential threats can be identified, which can significantly improve situation awareness, particularly in Persistent Surveillance Systems (PSS). Understanding the nature of such dynamic activities inevitably involves interpretation of a collection of spatiotemporally correlated activities with respect to a known context. In this paper, we present a State Transition model for recognizing the characteristics of human activities with a link to a prior context-based ontology. Modeling the state transitions between successive evidential events determines the activities' temperament. The proposed state transition model poses six categories of state transitions: Object handling, Visibility, Entity-entity relation, Human Postures, Human Kinematics and Distance to Target. The proposed state transition model generates semantic annotations describing the human interactional activities via a technique called Casual Event State Inference (CESI). The proposed approach uses a low-cost Kinect depth camera for indoor monitoring and a conventional optical camera for outdoor monitoring. Experimental results are presented here to demonstrate the effectiveness and efficiency of the proposed technique.
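    A state-transition annotator of this general shape can be sketched as follows; the states, rules, and labels are invented for illustration and are not the paper's ontology or CESI itself.

    ```python
    # Map transitions between successive evidential states to semantic
    # annotations. Each observation is a (distance_state, object_state)
    # pair; rules key on (previous distance, current distance, object state).
    RULES = {
        ("far", "near", "handling"): "person approached target and handled an object",
        ("near", "far", None): "person departed from target",
    }

    def annotate(observations):
        annotations = []
        for prev, curr in zip(observations, observations[1:]):
            key = (prev[0], curr[0], curr[1])
            if key in RULES:
                annotations.append(RULES[key])
        return annotations

    events = [("far", None), ("near", "handling"), ("far", None)]
    notes = annotate(events)   # two transitions matched, two annotations
    ```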

  2. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy.

    PubMed

    Tremsin, Anton S; Gao, Yan; Dial, Laura C; Grazzi, Francesco; Shinohara, Takenao

    2016-01-01

    Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. In addition, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.

  3. Plant training for induced defense against insect pests: a promising tool for integrated pest management in cotton.

    PubMed

    Llandres, Ana L; Almohamad, Raki; Brévault, Thierry; Renou, Alain; Téréta, Idrissa; Jean, Janine; Goebel, François-Regis

    2018-04-17

Enhancing cotton pest management using plants' natural defenses has been described as a promising way to improve the management of crop pests. Here we review different studies on cotton growing systems to illustrate how an ancient technique called plant training, which includes plant topping and pruning, may contribute to this goal. Based on examples from cotton crops, we show how trained plants could be promoted to a state of enhanced defense that causes faster and more robust activation of their defense responses. We revisit the agricultural benefits associated with this technique in cotton crops, with a focus on its potential as a supplementary tool for Integrated Pest Management (IPM). In particular, we examine its role in mediating plant interactions with conspecific neighboring plants, pests and associated natural enemies. We propose a new IPM tool, plant training for induced defense, which involves inducing plant defenses by artificial injuries. Experimental evidence from various studies shows that cotton training is a promising technique, particularly for smallholders, which can be used as part of an IPM program to significantly reduce insecticide use and to improve productivity in cotton farming. This article is protected by copyright. All rights reserved.

  4. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy

    NASA Astrophysics Data System (ADS)

    Tremsin, Anton S.; Gao, Yan; Dial, Laura C.; Grazzi, Francesco; Shinohara, Takenao

    2016-01-01

    Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with 100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. In addition, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.

  5. Density-cluster NMA: A new protein decomposition technique for coarse-grained normal mode analysis.

    PubMed

    Demerdash, Omar N A; Mitchell, Julie C

    2012-07-01

    Normal mode analysis has emerged as a useful technique for investigating protein motions on long time scales. This is largely due to the advent of coarse-graining techniques, particularly Hooke's Law-based potentials and the rotational-translational blocking (RTB) method for reducing the size of the force-constant matrix, the Hessian. Here we present a new method for domain decomposition for use in RTB that is based on hierarchical clustering of atomic density gradients, which we call Density-Cluster RTB (DCRTB). The method reduces the number of degrees of freedom by 85-90% compared with the standard blocking approaches. We compared the normal modes from DCRTB against standard RTB using 1-4 residues in sequence in a single block, with good agreement between the two methods. We also show that Density-Cluster RTB and standard RTB perform well in capturing the experimentally determined direction of conformational change. Significantly, we report superior correlation of DCRTB with B-factors compared with 1-4 residue per block RTB. Finally, we show significant reduction in computational cost for Density-Cluster RTB that is nearly 100-fold for many examples. Copyright © 2012 Wiley Periodicals, Inc.

  6. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tremsin, Anton S.; Gao, Yan; Dial, Laura C.

Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. Additionally, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.

  7. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy

    PubMed Central

    Tremsin, Anton S.; Gao, Yan; Dial, Laura C.; Grazzi, Francesco; Shinohara, Takenao

    2016-01-01

    Abstract Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. In addition, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components. PMID:27877885

  8. Detection of Spoofed MAC Addresses in 802.11 Wireless Networks

    NASA Astrophysics Data System (ADS)

    Tao, Kai; Li, Jing; Sampalli, Srinivas

Medium Access Control (MAC) address spoofing is considered an important first step in a hacker's attempt to launch a variety of attacks on 802.11 wireless networks. Unfortunately, MAC address spoofing is hard to detect. Most current spoofing detection systems mainly use the sequence number (SN) tracking technique, which has drawbacks. Firstly, it may lead to an increase in the number of false positives. Secondly, such techniques cannot be used in systems with wireless cards that do not follow standard 802.11 sequence number patterns. Thirdly, attackers can forge sequence numbers, thereby causing the attacks to go undetected. We present a new architecture called WISE GUARD (Wireless Security Guard) for detection of MAC address spoofing on 802.11 wireless LANs. It integrates three detection techniques - SN tracking, Operating System (OS) fingerprinting & tracking and Received Signal Strength (RSS) fingerprinting & tracking. It also includes the fingerprinting of Access Point (AP) parameters as an extension to the OS fingerprinting for detection of AP address spoofing. We have implemented WISE GUARD on a test bed using off-the-shelf wireless devices and open source drivers. Experimental results show that the new design enhances the detection effectiveness and reduces the number of false positives in comparison with current approaches.
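    The SN tracking component can be sketched as follows. 802.11 sequence numbers increment by one modulo 4096 per frame from a given station, so duplicates or large jumps for the same source MAC suggest a second (spoofing) station. The threshold and frame trace below are illustrative; as the abstract notes, retransmissions and nonstandard cards make such a rule prone to false positives, which is why WISE GUARD combines it with OS and RSS fingerprinting.

    ```python
    def sn_anomalies(frames, max_gap=64):
        # Flag duplicate sequence numbers or jumps larger than max_gap
        # (modulo 4096) for the same source MAC. Threshold is illustrative.
        last_sn = {}
        alerts = []
        for mac, sn in frames:
            if mac in last_sn:
                gap = (sn - last_sn[mac]) % 4096
                if gap == 0 or gap > max_gap:
                    alerts.append((mac, last_sn[mac], sn))
            last_sn[mac] = sn
        return alerts

    # Legitimate traffic interleaved with a spoofer reusing the same MAC
    frames = [("aa:bb", 100), ("aa:bb", 101),
              ("aa:bb", 3000),   # spoofer's frame: large SN jump
              ("aa:bb", 102)]    # victim resumes: another large jump
    alerts = sn_anomalies(frames)
    ```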

  9. Using the Git Software Tool on the Peregrine System | High-Performance

    Science.gov Websites

    branch workflow. Create a local branch called "experimental" based on the current master:

        git branch experimental

    Use your branch (start working on that experimental branch):

        git checkout experimental
        git pull origin experimental
        # work, work, work, commit...

    Send the local branch to the repo:

        git push

  10. Exploratory Studies in Generalized Predictive Control for Active Aeroelastic Control of Tiltrotor Aircraft

    NASA Technical Reports Server (NTRS)

    Kvaternik, Raymond G.; Juang, Jer-Nan; Bennett, Richard L.

    2000-01-01

    The Aeroelasticity Branch at NASA Langley Research Center has a long and substantive history of tiltrotor aeroelastic research. That research has included a broad range of experimental investigations in the Langley Transonic Dynamics Tunnel (TDT) using a variety of scale models and the development of essential analyses. Since 1994, the tiltrotor research program has been using a 1/5-scale, semispan aeroelastic model of the V-22 designed and built by Bell Helicopter Textron Inc. (BHTI) in 1981. That model has been refurbished to form a tiltrotor research testbed called the Wing and Rotor Aeroelastic Test System (WRATS) for use in the TDT. In collaboration with BHTI, studies under the current tiltrotor research program are focused on aeroelastic technology areas having the potential for enhancing the commercial and military viability of tiltrotor aircraft. Among the areas being addressed, considerable emphasis is being directed to the evaluation of modern adaptive multi-input multi-output (MIMO) control techniques for active stability augmentation and vibration control of tiltrotor aircraft. As part of this investigation, a predictive control technique known as Generalized Predictive Control (GPC) is being studied to assess its potential for actively controlling the swashplate of tiltrotor aircraft to enhance aeroelastic stability in both helicopter and airplane modes of flight. This paper summarizes the exploratory numerical and experimental studies that were conducted as part of that investigation.
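The core idea of predictive control of the kind named above can be sketched in a few lines: predict the output over a horizon from a linear model, then pick the control move that minimizes the predicted tracking error plus a move penalty. The first-order plant, horizons, and weights below are invented for illustration and are unrelated to the tiltrotor application.

```python
# A minimal receding-horizon predictive controller in the spirit of GPC,
# illustrated on a known first-order plant y[k+1] = a*y[k] + b*u[k].
# All numbers are illustrative assumptions, not values from the WRATS tests.
import numpy as np

a, b = 0.9, 0.1          # assumed plant/model coefficients
N, M, lam = 5, 2, 0.01   # prediction horizon, control horizon, move penalty

# Step-response coefficients: s[i-1] = output at step i after a unit step in u
s = np.array([b * (1 - a**i) / (1 - a) for i in range(1, N + 1)])

# Dynamic matrix G: effect of future control moves du on predicted outputs
G = np.zeros((N, M))
for j in range(N):
    for i in range(M):
        if j >= i:
            G[j, i] = s[j - i]

def gpc_step(y, u_prev, r):
    """One controller update: returns the new control input."""
    # Free response: hold u at u_prev over the whole horizon
    f = np.array([a**j * y + s[j - 1] * u_prev for j in range(1, N + 1)])
    # Least-squares optimal move sequence; apply only the first move
    du = np.linalg.solve(G.T @ G + lam * np.eye(M), G.T @ (r - f))
    return u_prev + du[0]

y, u, r = 0.0, 0.0, 1.0
for _ in range(60):
    u = gpc_step(y, u, r)   # controller
    y = a * y + b * u       # plant update
```

Because the controller acts on control *moves*, it has built-in integral action and drives the output to the setpoint with zero offset when the model matches the plant.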

  11. A comparison in Colorado of three methods to monitor breeding amphibians

    USGS Publications Warehouse

    Corn, P.S.; Muths, E.; Iko, W.M.

    2000-01-01

    We surveyed amphibians at 4 montane and 2 plains lentic sites in northern Colorado using 3 techniques: standardized call surveys, automated recording devices (frog-loggers), and intensive surveys including capture-recapture techniques. Amphibians were observed at 5 sites. Species richness varied from 0 to 4 species at each site. Richness scores, the sums of species richness among sites, were similar among methods: 8 for call surveys, 10 for frog-loggers, and 11 for intensive surveys (9 if the non-vocal salamander Ambystoma tigrinum is excluded). The frog-logger at 1 site recorded Spea bombifrons which was not active during the times when call and intensive surveys were conducted. Relative abundance scores from call surveys failed to reflect a relatively large population of Bufo woodhousii at 1 site and only weakly differentiated among different-sized populations of Pseudacris maculata at 3 other sites. For extensive applications, call surveys have the lowest costs and fewest requirements for highly trained personnel. However, for a variety of reasons, call surveys cannot be used with equal effectiveness in all parts of North America.

  12. A robust calibration technique for acoustic emission systems based on momentum transfer from a ball drop

    USGS Publications Warehouse

    McLaskey, Gregory C.; Lockner, David A.; Kilgore, Brian D.; Beeler, Nicholas M.

    2015-01-01

    We describe a technique to estimate the seismic moment of acoustic emissions and other extremely small seismic events. Unlike previous calibration techniques, it does not require modeling of the wave propagation, sensor response, or signal conditioning. Rather, this technique calibrates the recording system as a whole and uses a ball impact as a reference source or empirical Green's function. To correctly apply this technique, we develop mathematical expressions that link the seismic moment $M_{0}$ of internal seismic sources (i.e., earthquakes and acoustic emissions) to the impulse, or change in momentum $\Delta p$, of externally applied seismic sources (i.e., meteor impacts or, in this case, ball impact). We find that, at low frequencies, moment and impulse are linked by a constant, which we call the force-moment-rate scale factor $C_{F\dot{M}} = M_{0}/\Delta p$. This constant is equal to twice the speed of sound in the material from which the seismic sources were generated. Next, we demonstrate the calibration technique on two different experimental rock mechanics facilities. The first example is a saw-cut cylindrical granite sample that is loaded in a triaxial apparatus at 40 MPa confining pressure. The second example is a 2 m long fault cut in a granite sample and deformed in a large biaxial apparatus at lower stress levels. Using the empirical calibration technique, we are able to determine absolute source parameters including the seismic moment, corner frequency, stress drop, and radiated energy of these magnitude −2.5 to −7 seismic events.
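The scale factor $C_{F\dot{M}} = M_{0}/\Delta p = 2c$ lends itself to a quick worked example. The ball mass, drop height, restitution coefficient, and sound speed below are hypothetical stand-ins, not values from the paper; only the $M_{0} = 2c\,\Delta p$ relation and the standard moment-magnitude formula are taken as given.

```python
# Worked example of the force-moment-rate scale factor described above:
# M0 = 2*c*dp, i.e. twice the speed of sound times the ball's impulse.
# The ball parameters and restitution coefficient are hypothetical.
import math

c = 5900.0     # sound speed in a granite-like medium, m/s (assumed)
m = 0.066      # ball mass, kg (hypothetical)
h = 0.30       # drop height, m (hypothetical)
e = 0.6        # coefficient of restitution (hypothetical)

v = math.sqrt(2 * 9.81 * h)    # impact velocity from free fall
dp = m * v * (1 + e)           # momentum change of the ball (impulse)
M0 = 2 * c * dp                # equivalent seismic moment, N*m

# Moment magnitude Mw = (2/3)*(log10(M0) - 9.1); here about -3.7, inside
# the magnitude -2.5 to -7 range quoted for these sources.
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)
```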

  13. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches for interpolation have been proposed both from theoretical domains such as computational geometry and in applications' fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive. This is because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
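The covariance-tapering idea can be sketched concretely: multiply the covariance by a compactly supported taper so the kriging matrix becomes sparse, then solve the sparse ordinary-kriging system with its unbiasedness constraint. The exponential kernel, Wendland taper, and taper range below are illustrative assumptions, not the paper's choices; scipy is assumed to be available.

```python
# Illustrative sketch of ordinary kriging with covariance tapering: the
# covariance is multiplied by a compactly supported Wendland taper, so the
# kriging system becomes sparse. Kernel and taper-range choices are assumed.
import numpy as np
from scipy.sparse import csr_matrix, bmat
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(400, 2))          # scattered sample locations
vals = np.sin(pts[:, 0]) + 0.1 * rng.standard_normal(400)

def cov(h, range_=2.0):                          # exponential covariance
    return np.exp(-h / range_)

def wendland(h, theta=1.5):                      # compact-support taper
    t = np.clip(1 - h / theta, 0, None)
    return t**4 * (1 + 4 * h / theta)

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
K = csr_matrix(cov(d) * wendland(d))             # tapered => mostly zeros

# Ordinary kriging adds an unbiasedness constraint (weights sum to 1)
ones = np.ones((len(pts), 1))
A = bmat([[K, ones], [ones.T, None]], format="csc")

def predict(x):
    h = np.linalg.norm(pts - x, axis=1)
    rhs = np.append(cov(h) * wendland(h), 1.0)
    w = spsolve(A, rhs)[:-1]                     # kriging weights
    return w @ vals

est = predict(np.array([5.0, 5.0]))              # should track sin(5)
```

In the paper's setting the sparse system would be handed to an iterative solver rather than a direct one; the sketch uses a direct sparse solve only for brevity.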

  14. Accurate identification of motor unit discharge patterns from high-density surface EMG and validation with a novel signal-based performance metric

    NASA Astrophysics Data System (ADS)

    Holobar, A.; Minetto, M. A.; Farina, D.

    2014-02-01

    Objective. A signal-based metric for assessing the accuracy of motor unit (MU) identification from high-density surface electromyograms (EMG) is introduced. This metric, called the pulse-to-noise ratio (PNR), is computationally efficient, does not require any additional experimental costs, and can be applied to every MU that is identified by the previously developed convolution kernel compensation technique. Approach. The analytical derivation of the newly introduced metric is provided, along with its extensive validation on both synthetic and experimental surface EMG signals with signal-to-noise ratios ranging from 0 to 20 dB and muscle contraction forces from 5% to 70% of the maximum voluntary contraction. Main results. In all the experimental and simulated signals, the newly introduced metric correlated significantly with both sensitivity and false alarm rate in the identification of MU discharges. Practically all the MUs with PNR > 30 dB exhibited sensitivity >90% and false alarm rates <2%. Therefore, a threshold of 30 dB in PNR can be used as a simple method for selecting only reliably decomposed units. Significance. The newly introduced metric is considered a robust and reliable indicator of the accuracy of MU identification. The study also shows that high-density surface EMG can be reliably decomposed at contraction forces as high as 70% of the maximum.
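A minimal sketch of the PNR idea: compare, in dB, the mean squared decomposition output at the detected discharge instants with the mean squared output everywhere else. The synthetic pulse train below is an illustrative stand-in for a real decomposition result, and the noise level is chosen only so the value lands near the 30 dB threshold discussed in the abstract.

```python
# A minimal sketch of the pulse-to-noise ratio (PNR): the ratio, in dB, of
# the mean squared decomposition output at the detected discharge times to
# the mean squared output elsewhere. Synthetic data stands in for a real
# motor-unit innervation pulse train.
import numpy as np

rng = np.random.default_rng(1)
n = 10000
train = np.zeros(n)
discharges = np.arange(50, n, 100)          # detected MU discharge times
train[discharges] = 1.0                     # unit pulses at discharges
train += 0.03 * rng.standard_normal(n)      # baseline noise

def pnr(t, pulse_idx):
    mask = np.zeros(len(t), dtype=bool)
    mask[pulse_idx] = True
    signal = np.mean(t[mask] ** 2)          # energy at discharge instants
    noise = np.mean(t[~mask] ** 2)          # energy everywhere else
    return 10 * np.log10(signal / noise)

value = pnr(train, discharges)              # ~30 dB for this pulse/noise mix
```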

  15. The multi-scattering model for calculations of positron spatial distribution in the multilayer stacks, useful for conventional positron measurements

    NASA Astrophysics Data System (ADS)

    Dryzek, Jerzy; Siemek, Krzysztof

    2013-08-01

    The spatial distribution of positrons emitted from radioactive isotopes into stacks or layered samples is the subject of the present report. It was found that Monte Carlo (MC) simulations using the GEANT4 code are not able to describe correctly the experimental data for the positron fractions in stacks. A mathematical model was proposed for calculating the implantation profile, or the positron fractions in the separate layers or foils that make up a stack. The model takes into account only two processes, i.e., positron absorption and backscattering at interfaces. The mathematical formulas were implemented in a computer program called LYS-1 (layers profile analysis). The theoretical predictions of the model were in good agreement with the results of the MC simulations for a semi-infinite sample. Experimental verification of the model was performed on symmetrical and non-symmetrical stacks of different foils. Good agreement between the experimental and calculated fractions of positrons in the components of a stack was achieved. The experimental implantation profile obtained by depth scanning of positron implantation is also very well described by the theoretical profile obtained within the proposed model. The LYS-1 program additionally allows us to calculate the fraction of positrons which annihilate in the source, which can be useful in positron spectroscopy.
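The absorption part of such a layer model can be sketched with the commonly quoted exponential implantation profile. The empirical absorption coefficient below (alpha ≈ 16·ρ/E_max^1.43 per cm) is a widely used approximation, not the full LYS-1 model, which also treats backscattering at the interfaces; the stack geometry is hypothetical.

```python
# Sketch of the absorption-only part of a layered implantation model:
# positrons are attenuated exponentially with depth, so the fraction
# stopping in each foil follows from the cumulative attenuation. The
# empirical coefficient alpha ~ 16*rho/E_max^1.43 (1/cm) is a commonly
# quoted approximation; backscattering at interfaces is ignored here.
import math

E_MAX = 0.544          # 22Na positron endpoint energy, MeV

def alpha(rho):
    """Empirical linear absorption coefficient, 1/cm."""
    return 16.0 * rho / E_MAX**1.43

# (density g/cm^3, thickness cm) for each foil -- hypothetical Al/Cu/Al stack
stack = [(2.70, 0.010), (8.96, 0.010), (2.70, 0.010)]

fractions = []
transmitted = 1.0
for rho, d in stack:
    t_next = transmitted * math.exp(-alpha(rho) * d)
    fractions.append(transmitted - t_next)   # fraction stopped in this foil
    transmitted = t_next                     # fraction passing deeper
```

The dense Cu foil stops a much larger share than the rear Al foil, which is the qualitative behavior a stack measurement probes.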

  16. Small but wise: Common marmosets (Callithrix jacchus) use acoustic signals as cues to avoid interactions with blonde capuchin monkeys (Sapajus flavius).

    PubMed

    Bastos, Monique; Medeiros, Karolina; Jones, Gareth; Bezerra, Bruna

    2018-03-01

    Vocalizations are often used by animals to communicate and mediate social interactions. Animals may benefit from eavesdropping on calls from other species to avoid predation and thus increase their chances of survival. Here we use both observational and experimental evidence to investigate eavesdropping and how acoustic signals may mediate interactions between two sympatric and endemic primate species (common marmosets and blonde capuchin monkeys) in a fragment of Atlantic Rainforest in Northeastern Brazil. We observed 22 natural vocal encounters between the study species, but no evident visual or physical contact over the study period. These two species seem to use the same area throughout the day, but at different times. We broadcast alarm and long distance calls to and from both species as well as two control stimuli (i.e., forest background noise and a loud call from an Amazonian primate) in our playback experiments. Common marmosets showed anti-predator behavior (i.e., vigilance and flight) when exposed to blonde capuchin calls both naturally and experimentally. However, blonde capuchin monkeys showed no anti-predator behavior in response to common marmoset calls. Blonde capuchins uttered long distance calls and looked in the direction of the speaker following exposure to their own long distance call, whereas they fled when exposed to their own alarm calls. Both blonde capuchin monkeys and common marmosets showed fear behaviors in response to the loud call from a primate species unknown to them, and showed no apparent response to the forest background noise. Common marmoset responses to blonde capuchin calls suggest that the latter is a potential predator. Furthermore, common marmosets appear to be eavesdropping on calls from blonde capuchin monkeys to avoid potentially costly encounters with them. © 2018 Wiley Periodicals, Inc.

  17. Two blowing concepts for roll and lateral control of aircraft

    NASA Technical Reports Server (NTRS)

    Tavella, D. A.; Wood, N. J.; Lee, C. S.; Roberts, L.

    1986-01-01

    Two schemes to modulate aerodynamic forces for roll and lateral control of aircraft have been investigated. The first scheme, called the lateral blowing concept, consists of thin jets of air exiting spanwise, or at small angle with the spanwise direction, from slots at the tips of straight wings. For this scheme, in addition to experimental measurements, a theory was developed showing the analytical relationship between aerodynamic forces and jet and wing parameters. Experimental results confirmed the theoretically derived scaling laws. The second scheme, which was studied experimentally, is called the jet spoiler concept and consists of thin jets exiting normally to the wing surface from slots aligned with the spanwise direction.

  18. 'Enzyme Test Bench': A biochemical application of the multi-rate modeling

    NASA Astrophysics Data System (ADS)

    Rachinskiy, K.; Schultze, H.; Boy, M.; Büchs, J.

    2008-11-01

    In the expanding field of 'white biotechnology' enzymes are frequently applied to catalyze the biochemical reaction from a resource material to a valuable product. Evolutionarily designed to catalyze the metabolism in any life form, they selectively accelerate complex reactions under physiological conditions. Modern techniques, such as directed evolution, have been developed to satisfy the increasing demand for enzymes. Applying these techniques together with rational protein design, we aim at improving enzyme activity, selectivity and stability. To tap the full potential of these techniques, it is essential to combine them with adequate screening methods. Nowadays a great number of high throughput colorimetric and fluorescent enzyme assays are applied to measure the initial enzyme activity. However, the prediction of enzyme long term stability within short experiments is still a challenge. A new high throughput technique for enzyme characterization with specific attention to long term stability, called the 'Enzyme Test Bench', is presented. The concept of the Enzyme Test Bench consists of short term enzyme tests conducted under partly extreme conditions to predict the enzyme's long term stability under moderate conditions. The technique is based on the mathematical modeling of temperature dependent enzyme activation and deactivation. By adapting the temperature profiles in sequential experiments by optimum non-linear experimental design, the long term deactivation effects can be purposefully accelerated and detected within hours. During the experiment the enzyme activity is measured online to estimate the model parameters from the obtained data. Thus, the enzyme activity and long term stability can be calculated as a function of temperature. The results of the characterization, based on microliter-scale experiments lasting hours, are in good agreement with the results of long term experiments in 1 L format. Thus, the new technique allows for both enzyme screening with regard to long term stability and the choice of the optimal process temperature. This article gives a successful example of the application of multi-rate modeling, experimental design and parameter estimation within biochemical engineering. At the same time, it shows the limitations of the methods at the state of the art and addresses the current problems to the applied mathematics community.
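The modeling idea behind accelerated stability testing of this kind can be sketched with a first-order deactivation whose rate follows an Arrhenius law: fit the rate at high temperature in a short experiment, then extrapolate the half-life at a milder process temperature. All parameter values below are invented for illustration and are not those of the Enzyme Test Bench.

```python
# Sketch of temperature-dependent enzyme deactivation: the active fraction
# decays as exp(-k_d(T)*t), with k_d following an Arrhenius law. Fitting
# such a model to short, hot experiments lets one extrapolate long term
# stability at a milder temperature. All parameters are invented.
import math

R = 8.314                      # gas constant, J/(mol K)

def k_deact(T, k_ref=1e-7, T_ref=313.15, Ea_d=3.0e5):
    """First-order deactivation rate, 1/s (Arrhenius around T_ref)."""
    return k_ref * math.exp(-Ea_d / R * (1.0 / T - 1.0 / T_ref))

def active_fraction(T, t):
    """Fraction of enzyme still active after time t (s) at temperature T (K)."""
    return math.exp(-k_deact(T) * t)

# One hour at 60 C already shows measurable decay ...
fast = active_fraction(333.15, 3600.0)
# ... while the extrapolated half-life at 30 C runs to years.
half_life_30C = math.log(2) / k_deact(303.15)   # seconds
```

The strong Arrhenius dependence is exactly what makes short hot experiments informative: a few degrees change the deactivation rate by orders of magnitude.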

  19. Online and offline experimental techniques for polycyclic aromatic hydrocarbons recovery and measurement.

    PubMed

    Comandini, A; Malewicki, T; Brezinsky, K

    2012-03-01

    The implementation of techniques aimed at improving engine performance and reducing particulate matter (PM) pollutant emissions is strongly hampered by the limited understanding of the polycyclic aromatic hydrocarbon (PAH) formation chemistry that, in combustion devices, produces the PM emissions. New experimental results which examine the formation of multi-ring compounds are required. The present investigation focuses on two techniques for such an experimental examination by recovery of PAH compounds from a typical combustion-oriented experimental apparatus. The online technique discussed constitutes an optimal, but not always feasible, approach. A detailed description of a new online sampling system is provided which can serve as a reference for future applications to different experimental set-ups. In comparison, an offline technique, which is sometimes more experimentally feasible but not necessarily optimal, has been studied in detail for the recovery of a variety of compounds with different properties, including naphthalene, biphenyl, and iodobenzene. The recovery results from both techniques were excellent, with an error in the total carbon balance of around 10% for the online technique and an uncertainty in the measurement of single species of around 7% for the offline technique. Although both techniques proved suitable for measurement of large PAH compounds, the online technique represents the optimal solution in view of the simplicity of the corresponding experimental procedure. On the other hand, the offline technique represents a valuable solution in those cases where the online technique cannot be implemented.

  20. Multidisciplinary Responses to the Sexual Victimization of Children: Use of Control Phone Calls.

    PubMed

    Canavan, J William; Borowski, Christine; Essex, Stacy; Perkowski, Stefan

    2017-10-01

    This descriptive study addresses the value of one-party-consent phone calls in investigations of the sexual victimization of children. The authors reviewed 4 years of experience with children between the ages of 3 and 18 years selected for control phone calls after a forensic interview by the New York State Police forensic interviewer. The forensic interviewer identified appropriate cases for control phone calls considering New York State law, the child's capacity to make the call, the presence of another person to make the call, and a supportive residence. The control phone call process has been extremely effective forensically: offenders choose to avoid trial by taking a plea bargain, thereby dramatically speeding up the criminal judicial and family court processes. An additional outcome is that the alleged offender's own words spare the child the trauma of testifying in court. The control phone call reduced the need for children to repeat their stories to various interviewers, and a successful call gives the child a sense of vindication. This is the only technique that preserves the actual communication pattern between the alleged victim and the alleged offender, which can be of great value to the mental health professionals working with both the child and the alleged offender. Cautions must be considered regarding potential serious adverse effects on the child, and the multidisciplinary team members must work together in the control phone call. The descriptive nature of this study did not provide the authors adequate demographic data, a subject that should be addressed in a future prospective study.

  1. Calculation and measurement of a neutral air flow velocity impacting a high voltage capacitor with asymmetrical electrodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malík, M., E-mail: michal.malik@tul.cz; Primas, J.; Kopecký, V.

    2014-01-15

    This paper deals with the effects surrounding the phenomenon of a mechanical force generated on a high voltage asymmetrical capacitor (the so-called Biefeld-Brown effect). A method to measure this force is described and a formula to calculate its value is also given. Based on this, the authors derive a formula characterising the neutral air flow velocity impacting an asymmetrical capacitor connected to high voltage. Under normal circumstances this air flow lessens the generated force. This velocity is then measured using the Particle Image Velocimetry technique, and the theoretically calculated and experimentally measured values are compared. The authors found good agreement between the results of both approaches.
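A back-of-the-envelope version of the kind of momentum balance used in such estimates: if the measured force is carried by a jet of air of known cross-section, then F = ρAv² and the mean velocity follows directly. The force and geometry values below are hypothetical, not those of the paper, and the uniform-jet assumption is a simplification.

```python
# Momentum-balance estimate of a neutral air flow velocity from a measured
# force: F = rho * A * v**2  =>  v = sqrt(F / (rho * A)). The force and
# jet cross-section are hypothetical illustrative values.
import math

rho = 1.204          # air density at 20 C, kg/m^3
F = 2.0e-3           # measured force, N (hypothetical)
A = 0.05 * 0.002     # effective jet cross-section, m^2 (hypothetical)

v = math.sqrt(F / (rho * A))   # mean air-flow velocity, m/s (~4 m/s here)
```

Velocities of a few metres per second are the order of magnitude typically reported for ionic-wind flows, which is why such a simple balance is a useful sanity check on a PIV measurement.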

  2. Automation of disbond detection in aircraft fuselage through thermal image processing

    NASA Technical Reports Server (NTRS)

    Prabhu, D. R.; Winfree, W. P.

    1992-01-01

    A procedure for interpreting thermal images obtained during the nondestructive evaluation of aircraft bonded joints is presented. The procedure operates on time-derivative thermal images and results in a disbond image with disbonds highlighted. The size of the 'black clusters' in the output disbond image is a quantitative measure of disbond size. The procedure is illustrated using simulation data as well as data obtained through experimental testing of fabricated samples and aircraft panels. Good results are obtained, and, except in pathological cases, 'false calls' in the cases studied appeared only as noise in the output disbond image, which was easily filtered out. The thermal detection technique coupled with an automated image interpretation capability will be a very fast and effective method for inspecting bonded joints in an aircraft structure.
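The interpretation step can be illustrated with a short sketch: over a disbond the surface temperature decays more slowly, so pixels whose time derivative deviates from the background are flagged, and isolated "false call" pixels are removed by filtering. The synthetic image, threshold, and filter size are assumptions, not the paper's values; scipy is assumed to be available.

```python
# Illustrative sketch of thresholding a time-derivative thermal image:
# disbonded regions cool more slowly, so their derivative stands out, and
# a median filter suppresses single-pixel false calls. All numbers are
# invented for illustration.
import numpy as np
from scipy.ndimage import median_filter

# Synthetic time-derivative image: background decays at -1.0 K/s, a 10x10
# disbond region decays more slowly at -0.4 K/s, plus sensor noise.
rng = np.random.default_rng(2)
dTdt = -1.0 + 0.05 * rng.standard_normal((64, 64))
dTdt[20:30, 20:30] = -0.4 + 0.05 * rng.standard_normal((10, 10))

flagged = dTdt > -0.7                   # slower decay => candidate disbond
clean = median_filter(flagged, size=3)  # suppress isolated false calls

disbond_area = int(clean.sum())         # pixel count ~ disbond size
```

The cluster size in the cleaned image plays the same role as the 'black clusters' described in the abstract: it is a direct, quantitative proxy for disbond size.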

  3. FALSTAFF: A new tool for fission studies

    NASA Astrophysics Data System (ADS)

    Dore, D.; Farget, F.; Lecolley, F.-R.; Lehaut, G.; Materna, T.; Pancin, J.; Panebianco, S.; Papaevangelou, Th.

    2013-12-01

    The future NFS installation will produce high intensity neutron beams from hundreds of keV up to 40 MeV. Taking advantage of this facility, data of particular interest for the nuclear community in view of the development of fast reactor technology will be measured. The development of an experimental setup called FALSTAFF for a full characterization of actinide fission fragments has been undertaken. Fission fragment isotopic yields and associated neutron multiplicities will be measured as a function of the neutron energy. Based on the time-of-flight and residual-energy technique, the setup will allow the simultaneous measurement of the complementary fragments' velocity and energy. The performance of the FALSTAFF TOF detectors will be presented, and the expected resolutions for fragment masses and neutron multiplicities, based on realistic simulations, will be shown.

  4. Optical pathlengths in dental caries lesions

    NASA Astrophysics Data System (ADS)

    Mujat, Claudia; ten Bosch, Jaap J.; Dogariu, Aristide C.

    2001-04-01

    The average pathlength of light inside dental enamel and incipient lesions is measured and compared, in order to quantitatively confirm the prediction that incipient lesions have higher scattering coefficients than sound enamel. The technique used, called optical pathlength spectroscopy, provides experimental access to the pathlength distribution of light inside highly scattering samples. This is desirable for complex biological materials, where current theoretical models are very difficult to apply. To minimize the effects of surface reflections the average pathlength is measured in wet sound enamel and white spots. We obtain average pathlengths of 367 micrometers for sound enamel and 272 micrometers for white spots. We also investigate the differences between open and subsurface lesions by measuring the change in the pathlength distribution of light as they go from dry to wet.

  5. A computer model for the 30S ribosome subunit.

    PubMed Central

    Kuntz, I D; Crippen, G M

    1980-01-01

    We describe a computer-generated model for the locations of the 21 proteins of the 30S subunit of the E. coli ribosome. The model uses a new method of incorporating experimental measurements based on a mathematical technique called distance geometry. In this paper, we use data from two sources: immunoelectron microscopy and neutron-scattering studies. The data are generally self-consistent and lead to a set of relatively well-defined structures in which individual protein coordinates differ by approximately 20 A from one structure to another. Two important features of this calculation are the use of extended proteins rather than just the centers of mass, and the ability to confine the protein locations within an arbitrary boundary surface so that only solutions with an approximate 30S "shape" are permitted. PMID:7020786
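A minimal flavor of recovering coordinates from pairwise distances can be given with classical multidimensional scaling, a simple relative of the distance geometry technique named above (the paper's method additionally handles distance bounds and boundary constraints): double-center the squared distance matrix and take the top eigenvectors as coordinates.

```python
# Classical multidimensional scaling: a minimal cousin of distance
# geometry. Given exact pairwise Euclidean distances, it recovers point
# coordinates up to rotation, reflection, and translation.
import numpy as np

def embed(D, dim=3):
    """Recover coordinates (up to rigid motion) from a distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # Gram matrix of centered coords
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

# Round-trip check on random 3-D "protein centers"
rng = np.random.default_rng(3)
X = rng.standard_normal((21, 3))             # 21 proteins, as in the 30S model
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
X2 = embed(D)
D2 = np.linalg.norm(X2[:, None] - X2[None, :], axis=-1)
```

With exact distances the reconstructed configuration reproduces the original distance matrix to numerical precision; with noisy or incomplete data, as in the immunoelectron microscopy and neutron-scattering measurements, distance geometry instead searches for structures consistent with distance bounds.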

  6. Understanding the nanoparticle-protein corona complexes using computational and experimental methods.

    PubMed

    Kharazian, B; Hadipour, N L; Ejtehadi, M R

    2016-06-01

    Nanoparticles (NPs) can adsorb proteins from biological fluids and form a protein layer, which is called the protein corona. Because the cell sees the corona-coated NP, the protein corona can dictate the biological response to NPs. The composition of the protein corona varies with the physicochemical properties of the NPs, including size, shape and surface chemistry. Protein adsorption is a dynamic phenomenon: a protein may desorb, leaving a surface vacancy that is rapidly filled by another protein, causing changes in the corona composition, mainly through the Vroman effect. In this review, we discuss the interaction between NPs and proteins and the available techniques for identification of NP-bound proteins. We also review currently developed computational methods for understanding NP-protein complex interactions. Copyright © 2016. Published by Elsevier Ltd.

  7. Expression transmission using exaggerated animation for Elfoid

    PubMed Central

    Hori, Maiya; Tsuruda, Yu; Yoshimura, Hiroki; Iwai, Yoshio

    2015-01-01

    We propose an expression transmission system using a cellular-phone-type teleoperated robot called Elfoid. Elfoid has a soft exterior that provides the look and feel of human skin, and is designed to transmit the speaker's presence to their communication partner using a camera and microphone. To transmit the speaker's presence, Elfoid sends not only the voice of the speaker but also the facial expression captured by the camera. In this research, facial expressions are recognized using a machine learning technique. Elfoid cannot, however, display facial expressions because of its compactness and a lack of sufficiently small actuator motors. To overcome this problem, facial expressions are displayed using Elfoid's head-mounted mobile projector. In an experiment, we built a prototype system and experimentally evaluated its subjective usability. PMID:26347686

  8. Experimental analysis on semi-finishing machining of Ti6Al4V additively manufactured by direct metal laser sintering

    NASA Astrophysics Data System (ADS)

    Imbrogno, Stano; Bordin, Alberto; Bruschi, Stefania; Umbrello, Domenico

    2016-10-01

    Additive Manufacturing (AM) techniques are particularly appealing for titanium aerospace and biomedical components because they permit a strong reduction of the buy-to-fly ratio. However, finishing machining operations are often necessary to reduce the uneven surface roughness and to correct local geometric inaccuracies. This work shows the influence of the cutting parameters (cutting speed and feed rate) on the cutting forces as well as on the thermal field observed in the cutting zone during a turning operation carried out on bars of Ti6Al4V obtained by the AM process called Direct Metal Laser Sintering (DMLS). Moreover, the sub-surface microstructure alterations due to the process are also shown and discussed.

  9. Feasibility study of imaging spectroscopy to monitor the quality of online welding.

    PubMed

    Mirapeix, Jesús; García-Allende, P Beatriz; Cobo, Adolfo; Conde, Olga M; López-Higuera, José M

    2009-08-20

    An online welding quality system based on the use of imaging spectroscopy is proposed and discussed. Plasma optical spectroscopy has already been successfully applied in this context by establishing a direct correlation between some spectroscopic parameters, e.g., the plasma electronic temperature, and the resulting seam quality. Given that so-called hyperspectral devices provide both spatial and spectral information, we propose their use for the particular case of arc welding quality monitoring, in an attempt to determine whether this technique is suitable for this industrial situation. Experimental welding tests are presented, and the ability of the proposed solution to identify simulated defects is demonstrated. Detailed spatial analyses suggest that this additional dimension can be used to improve the performance of the entire system.

  10. Guitars, Keyboards, Strobes, and Motors -- From Vibrational Motion to Active Research

    NASA Astrophysics Data System (ADS)

    Tagg, Randall; Carlson, John; Asadi-Zeydabadi, Masoud; Busley, Brad; Law-Balding, Katie; Juengel, Mattea

    2013-01-01

    Physics First is offered to ninth graders at high schools in Aurora, CO. A unique new asset of this school system is an embedded research lab called the "Innovation Hyperlab." The goal of the lab is to connect secondary school teaching to ongoing university scientific research, supporting the school district's aim to create opportunities to integrate P-20 (preschool to graduate school) learning. This paper is an example of how we create research connections in the context of introductory physics lessons on vibrations and waves. Key to the process is the use of several different types of technical resources, hence the name "hyperlab." Students learn many practical experimental techniques, reinforcing their knowledge of fundamentals and preparing them to work effectively on open-ended research or engineering projects.

  11. [Mechanism of pain sensation].

    PubMed

    Gyulaházi, Judit

    2009-11-15

    Pain, as a subjective content of consciousness, is an essential attention-calling sign that helps us survive. Pain relief is obligatory for every physician, but its individual presentation can make analgesia difficult to carry out. Improving neuroimaging techniques allow understanding of the development of pain sensation. Based on the 24 articles found on PubMed with the keywords 'pain' and 'neuroimaging', we review here the parts of the pain neuron matrix, their tasks and the assumed mechanism of acute pain sensation. The mechanism of individual pain sensation is illustrated through the modular function of the medial part of the pain matrix. Experimental results on empathic pain suggest that pain sensation may also occur without real tissue damage. The pain network plays a main role in chronic pain.

  12. Dynamic Tensile Experimental Techniques for Geomaterials: A Comprehensive Review

    NASA Astrophysics Data System (ADS)

    Heard, W.; Song, B.; Williams, B.; Martin, B.; Sparks, P.; Nie, X.

    2018-01-01

    This review article is dedicated to the Dynamic Behavior of Materials Technical Division in celebration of the 75th anniversary of the Society for Experimental Mechanics (SEM). Understanding the dynamic behavior of geomaterials is critical for analyzing and solving engineering problems in applications related to underground explosions, seismic, airblast, and penetration events. Determining the dynamic tensile response of geomaterials has been a great experimental challenge due to their relatively low tensile strength and high brittleness. Various experimental approaches have been developed over the past century, especially in the most recent half century, to understand the dynamic behavior of geomaterials in tension. In this review paper, we summarize the dynamic tensile experimental techniques that have been developed for geomaterials. The major dynamic tensile experimental techniques include dynamic direct tension, dynamic split tension, and spall tension. All three experimental techniques are based on Hopkinson or split Hopkinson (also known as Kolsky) bar techniques and principles. The uniqueness and limitations of each experimental technique are also discussed.

  13. Dynamic Tensile Experimental Techniques for Geomaterials: A Comprehensive Review

    DOE PAGES

    Heard, W.; Song, B.; Williams, B.; ...

    2018-01-03

    Here, this review article is dedicated to the Dynamic Behavior of Materials Technical Division in celebration of the 75th anniversary of the Society for Experimental Mechanics (SEM). Understanding the dynamic behavior of geomaterials is critical for analyzing and solving engineering problems in applications related to underground explosions, seismic, airblast, and penetration events. Determining the dynamic tensile response of geomaterials has been a great experimental challenge due to their relatively low tensile strength and high brittleness. Various experimental approaches have been developed over the past century, especially in the most recent half century, to understand the dynamic behavior of geomaterials in tension. In this review paper, we summarize the dynamic tensile experimental techniques that have been developed for geomaterials. The major dynamic tensile experimental techniques include dynamic direct tension, dynamic split tension, and spall tension. All three experimental techniques are based on Hopkinson or split Hopkinson (also known as Kolsky) bar techniques and principles. Finally, the uniqueness and limitations of each experimental technique are also discussed.

  14. Dynamic Tensile Experimental Techniques for Geomaterials: A Comprehensive Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heard, W.; Song, B.; Williams, B.

    Here, this review article is dedicated to the Dynamic Behavior of Materials Technical Division in celebration of the 75th anniversary of the Society for Experimental Mechanics (SEM). Understanding the dynamic behavior of geomaterials is critical for analyzing and solving engineering problems in applications related to underground explosions, seismic, airblast, and penetration events. Determining the dynamic tensile response of geomaterials has been a great experimental challenge due to their relatively low tensile strength and high brittleness. Various experimental approaches have been developed over the past century, especially in the most recent half century, to understand the dynamic behavior of geomaterials in tension. In this review paper, we summarize the dynamic tensile experimental techniques that have been developed for geomaterials. The major dynamic tensile experimental techniques include dynamic direct tension, dynamic split tension, and spall tension. All three experimental techniques are based on Hopkinson or split Hopkinson (also known as Kolsky) bar techniques and principles. Finally, the uniqueness and limitations of each experimental technique are also discussed.

  15. Style-independent document labeling: design and performance evaluation

    NASA Astrophysics Data System (ADS)

    Mao, Song; Kim, Jong Woo; Thoma, George R.

    2003-12-01

    The Medical Article Records System or MARS has been developed at the U.S. National Library of Medicine (NLM) for automated data entry of bibliographical information from medical journals into MEDLINE, the premier bibliographic citation database at NLM. Currently, a rule-based algorithm (called ZoneCzar) is used for labeling important bibliographical fields (title, author, affiliation, and abstract) on medical journal article page images. While rules have been created for medical journals with regular layout types, new rules have to be created manually for any input journals with arbitrary or new layout types. Therefore, it is of interest to label journal articles independently of their layout styles. In this paper, we first describe a system (called ZoneMatch) for automated generation of crucial geometric and non-geometric features of important bibliographical fields based on string-matching and clustering techniques. The rule-based algorithm is then modified to use these features to perform style-independent labeling. We then describe a performance evaluation method for quantitatively evaluating our algorithm and characterizing its error distributions. Experimental results show that the labeling performance of the rule-based algorithm is significantly improved when the generated features are used.

  16. 47 CFR 5.115 - Station identification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL EXPERIMENTAL RADIO SERVICE (OTHER THAN BROADCAST... its assigned call sign at the end of each complete transmission: Provided, however, that the transmission of the call sign at the end of each transmission is not required for projects requiring continuous...

  17. A new surface fractal dimension for displacement mode shape-based damage identification of plate-type structures

    NASA Astrophysics Data System (ADS)

    Shi, Binkai; Qiao, Pizhong

    2018-03-01

    Vibration-based nondestructive testing is an area of growing interest that is worth exploring with new and innovative approaches. The displacement mode shape is often chosen to identify damage because of its locally detailed character and its lower sensitivity to surrounding noise. The requirement for a baseline mode shape in most vibration-based damage identification methods limits the application of such a strategy. In this study, a new surface fractal dimension called the edge perimeter dimension (EPD) is formulated, from which an EPD-based window dimension locus (EPD-WDL) algorithm for irregularity or damage identification of plate-type structures is established. An analytical notch-type damage model of simply-supported plates is proposed to evaluate the notch effect on plate vibration performance, and a sub-domain of notch cases with less effect is selected to investigate the robustness of the proposed damage identification algorithm. Fundamental aspects of the EPD-WDL algorithm in terms of notch localization, notch quantification, and noise immunity are then assessed. A mathematical solution called isomorphism is implemented to remove false peaks caused by inflexions of mode shapes when applying the EPD-WDL algorithm to higher mode shapes. The effectiveness and practicability of the EPD-WDL algorithm are demonstrated by an experimental procedure on damage identification of an artificially notched aluminum cantilever plate using a measurement system of a piezoelectric lead zirconate titanate (PZT) actuator and a scanning laser Doppler vibrometer (SLDV). As demonstrated in both the analytical and experimental evaluations, the new surface fractal dimension technique is capable of effectively identifying damage in plate-type structures.
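    The window-dimension locus idea above can be sketched generically. The EPD formula itself is not reproduced in the abstract, so this sketch substitutes the classical Katz fractal dimension as the window statistic; the mode shape, notch position, and window size are made-up demo values, not data from the paper.

```python
import numpy as np

def katz_fd(y):
    """Katz fractal dimension of a 1-D waveform (a generic stand-in
    for the paper's edge perimeter dimension)."""
    x = np.linspace(0.0, 1.0, len(y))
    steps = np.hypot(np.diff(x), np.diff(y))
    L = steps.sum()                               # total curve length
    d = np.hypot(x - x[0], y - y[0]).max()        # max extent from first point
    n = len(y) - 1
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def window_dimension_locus(mode_shape, win=20):
    """Slide a window along the mode shape; local irregularity (damage)
    raises the fractal dimension inside the window."""
    locus = np.full(len(mode_shape), np.nan)
    for i in range(len(mode_shape) - win):
        locus[i + win // 2] = katz_fd(mode_shape[i:i + win])
    return locus

# Toy mode shape (first bending mode) with a 1% local distortion near x = 0.6
x = np.linspace(0.0, 1.0, 400)
shape = np.sin(np.pi * x) + 1e-2 * np.exp(-((x - 0.6) / 0.01) ** 2)
locus = window_dimension_locus(shape)
peak = np.nanargmax(locus) / len(x)
print(f"irregularity localized near x = {peak:.2f}")  # peak sits near 0.6
```

    No baseline (undamaged) mode shape is needed: the locus is compared against itself along the span, which is the practical appeal of this family of methods.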

  18. Evaluating the Liquid-Liquid Phase Transition Hypothesis of Supercooled Water

    NASA Astrophysics Data System (ADS)

    Limmer, David; Chandler, David

    2011-03-01

    To explain the anomalous behavior of supercooled water it has been conjectured that buried within an experimentally inaccessible region of liquid water's phase diagram there exists a second critical point, which is the terminus of a first order transition line between two distinct liquid phases. The so-called liquid-liquid phase transition (LLPT) has since generated much study, though to date there is no consensus on its existence. In this talk, we will discuss our efforts to systematically study the metastable phase diagram of supercooled water through computer simulation. By employing importance-sampling techniques, we have calculated free energies as a function of the density and long-range order to determine unambiguously if two distinct liquid phases exist. We will argue that, contrary to the LLPT hypothesis, the observed phenomenology can be understood as a consequence of the limit of stability of the liquid far away from coexistence. Our results suggest that homogeneous nucleation is the cause of the increased fluctuations present upon supercooling. Further we will show how this understanding can be extended to explain experimental observations of hysteresis in confined supercooled water systems.

  19. Reconstruction of Tissue-Specific Metabolic Networks Using CORDA

    PubMed Central

    Schultz, André; Qutub, Amina A.

    2016-01-01

    Human metabolism involves thousands of reactions and metabolites. To interpret this complexity, computational modeling becomes an essential experimental tool. One of the most popular techniques to study human metabolism as a whole is genome scale modeling. A key challenge to applying genome scale modeling is identifying critical metabolic reactions across diverse human tissues. Here we introduce a novel algorithm called Cost Optimization Reaction Dependency Assessment (CORDA) to build genome scale models in a tissue-specific manner. CORDA performs more efficiently computationally, shows better agreement to experimental data, and displays better model functionality and capacity when compared to previous algorithms. CORDA also returns reaction associations that can greatly assist in any manual curation to be performed following the automated reconstruction process. Using CORDA, we developed a library of 76 healthy and 20 cancer tissue-specific reconstructions. These reconstructions identified which metabolic pathways are shared across diverse human tissues. Moreover, we identified changes in reactions and pathways that are differentially included and present different capacity profiles in cancer compared to healthy tissues, including up-regulation of folate metabolism, the down-regulation of thiamine metabolism, and tight regulation of oxidative phosphorylation. PMID:26942765

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skladnik-Sadowska, E.; Malinowski, K.; Sadowski, M. J.

    The paper concerns the monitoring of high-current pulse discharges and the determination of the plasma concentration within dense magnetized plasma by means of optical spectroscopy methods. In experiments with the large PF-1000 facility operated at IPPLM in Warsaw, Poland, attention was paid to determining the operational mode and electron concentration under different experimental conditions. To measure the visible radiation (VR), use was made of a MECHELLE® 900 spectrometer equipped with a CCD readout. The VR emission, observed at 65° to the z-axis, originated from a part of the electrode surfaces, the collapsing current-sheath layer and the dense plasma pinch-region (40-50 mm from the electrode ends). Considerable differences were found in the optical spectra recorded for so-called 'good shots' and for cases of some failures. Estimates of the electron concentration, performed with different spectroscopic techniques, showed that it ranged from 5.56×10^18 cm^-3 to 4.8×10^19 cm^-3, depending on experimental conditions. The correlation of the fusion-neutron yield and the plasma density was proved.

  1. Encoding techniques for complex information structures in connectionist systems

    NASA Technical Reports Server (NTRS)

    Barnden, John; Srinivas, Kankanahalli

    1990-01-01

    Two general information encoding techniques, called relative position encoding and pattern similarity association, are presented. They are claimed to be a convenient basis for the connectionist implementation of complex, short-term information processing of the sort needed in common-sense reasoning, semantic/pragmatic interpretation of natural language utterances, and other types of high-level cognitive processing. The relationships of the techniques to other connectionist information-structuring methods, and also to methods used in computers, are discussed in detail. The rich inter-relationships of these other connectionist and computer methods are also clarified. The particular, simple forms that the relative position encoding and pattern similarity association techniques take in the authors' own connectionist system, called Conposit, are discussed in order to clarify some issues and to provide evidence that the techniques are indeed useful in practice.

  2. Wavelength resolved neutron transmission analysis to identify single crystal particles in historical metallurgy

    NASA Astrophysics Data System (ADS)

    Barzagli, E.; Grazzi, F.; Salvemini, F.; Scherillo, A.; Sato, H.; Shinohara, T.; Kamiyama, T.; Kiyanagi, Y.; Tremsin, A.; Zoppi, Marco

    2014-07-01

    The phase composition and the microstructure of four ferrous Japanese arrows of the Edo period (17th-19th century) have been determined through two complementary neutron techniques: position-sensitive wavelength-resolved neutron transmission analysis (PS-WRNTA) and time-of-flight neutron diffraction (ToF-ND). The standard ToF-ND technique was applied using the INES diffractometer at the ISIS pulsed neutron source in the UK, while the innovative PS-WRNTA was performed at the J-PARC neutron source on the BL-10 NOBORU beam line using a high-spatial-, high-time-resolution neutron imaging detector. With ToF-ND we were able to obtain information on the quantitative distribution of the metal and non-metal phases, the texture level, the strain level and the domain size of each of the samples, which are important parameters for assessing the technological level of Japanese weapons. Starting from this base of data, the more complex PS-WRNTA was applied to the same samples. This experimental technique exploits the presence of the so-called Bragg edges in the time-of-flight spectrum of neutrons transmitted through crystalline materials to map the microstructural properties of samples. The two techniques are non-invasive and can be easily applied in archaeometry for an accurate microstructure mapping of metal and ceramic artifacts.

  3. Importance of integrated results of different non-destructive techniques in order to evaluate defects in panel paintings: the contribution of infrared, optical and ultrasonic techniques

    NASA Astrophysics Data System (ADS)

    Sfarra, S.; Theodorakeas, P.; Ibarra-Castanedo, C.; Avdelidis, N. P.; Paoletti, A.; Paoletti, D.; Hrissagis, K.; Bendada, A.; Koui, M.; Maldague, X.

    2011-06-01

    The increasing deterioration of panel paintings can be due to physical processes that take place during exhibition or transit, or as a result of temperature and humidity fluctuations within a building, church or museum. In response to environmental alterations, a panel painting can expand or contract until a new equilibrium state is eventually reached. These adjustments, though, are usually accompanied by a change in shape to accommodate the new conditions. In this work, a holographic method for detecting detached regions and micro-cracks is described. Some of these defects are confirmed by the Thermographic Signal Reconstruction (TSR) technique. In addition, Pulsed Phase Thermography (PPT) and Principal Component Thermography (PCT) allow the identification, with greater contrast, of two artificial defects in Mylar which are crucial to understanding the topic of interest: the discrimination between defect materials. Finally, traditional contact ultrasound applications are widely used for the evaluation of wood quality in several characterization procedures. Inspecting the specimen from the front side, the natural and artificial defects of the specimen are confirmed. Experimental results derived from the application of the integrated methods on an Italian panel painting reproduction, called The Angel specimen, are presented. The main advantages that these techniques can offer to the conservation and restoration of artworks are emphasized.

  4. New molten salt systems for high-temperature molten salt batteries: LiF-LiCl-LiBr-based quaternary systems

    NASA Astrophysics Data System (ADS)

    Fujiwara, Syozo; Inaba, Minoru; Tasaka, Akimasa

    To develop novel multi-component molten salt systems more effectively, we developed a simulation technique using the CALPHAD (Calculation of Phase Diagrams and Thermodynamics) method to estimate the ionic conductivity and the melting point. The validity of this new technique was confirmed by comparing the simulated ionic conductivities and melting points of typical high-temperature molten salts, such as LiF-LiCl-LiBr, LiF-LiBr-KBr, LiCl-LiBr-KBr, and LiCl-LiBr-LiI, with data reported in the literature or obtained experimentally. The technique was then used to develop new quaternary molten salt systems for use as electrolytes in high-temperature molten salt batteries (called thermal batteries). The targets for the ionic conductivity and the melting point were set at 2.0 S cm-1 or higher at 500 °C and in the range of 350-430 °C, respectively, to replace the LiCl-KCl system (1.85 S cm-1 at 500 °C) within the conventional design of the heat-generation system for thermal batteries. Using the simulation method, six novel quaternary systems, LiF-LiCl-LiBr-MX (M = Na and K; X = F, Cl, and Br), which contain neither environmentally unstable anions such as iodides nor expensive cations such as Rb+ and Cs+, were proposed. Experimental results showed that the LiF-LiCl-LiBr-0.10NaX (X = Cl and Br) and LiF-LiCl-LiBr-0.10KX (X = F, Cl, and Br) systems meet both the ionic conductivity and melting point targets.

  5. Why did we elaborate an entangled photons experiment in our engineering school?

    NASA Astrophysics Data System (ADS)

    Jacubowiez, Lionel; Avignon, Thierry

    2005-10-01

    We will describe a simple experimental setup that allows students to create polarization-entangled photon pairs. These photon pairs are in an entangled state first described in the famous 1935 Physical Review article by Einstein, Podolsky and Rosen, often called the E.P.R. state. Photon pairs at 810 nm are produced in two nonlinear crystals by spontaneous parametric downconversion of photons at 405 nm emitted by a violet laser diode. The polarization state of the photon pairs is easily tunable with a half-wave plate and a Babinet compensator on the laser diode beam. After having adjusted the polarization-entangled state of the photon pairs, our students can perform a test of Bell's inequalities. They will find the amazing value of the Bell parameter between 2.3 and 2.6, depending on the quality of the adjustment of the state of polarization. The experiments described can be done in 4 or 5 hours. What is the importance of creating an entangled-photons experiment for our engineering students? First of all, entanglement is clearly one of the most strikingly nonclassical features of quantum theory, and it is playing an increasing role in present-day physics. But in this paper we will emphasise the experimental point of view. We will try to explain why we believe that for our students this lab experiment is a unique opportunity to deal with established concepts and experimental techniques: polarization, nonlinear effects, phase matching, photon-counting avalanche photodiodes, counting statistics, and coincidence detectors. Let us recall that the first convincing experimental violations of Bell's inequalities were performed by Alain Aspect and Philippe Grangier with pairs of entangled photons at the Institut d'Optique between 1976 and 1982. Twenty-five years later, thanks to recent advances in laser diode technology, new techniques for the generation of photon pairs, and avalanche photodiodes, this experiment is now part of the experimental lab courses for our students.

  6. A novel generation of 3D SAR-based passive micromixer: efficient mixing and low pressure drop at a low Reynolds number

    NASA Astrophysics Data System (ADS)

    Viktorov, Vladimir; Nimafar, Mohammad

    2013-05-01

    This study introduces a novel generation of 3D splitting-and-recombination (SAR) passive micromixer with microstructures placed on the top and bottom floors of microchannels, called a 'chain mixer'. Both experimental verification and numerical analysis of the flow structure of this type of passive micromixer have been performed to evaluate the mixing performance and the pressure drop of the microchannel. We propose here two types of chain mixer, chain 1 and chain 2, and compare their mixing performance and pressure drop with those of other micromixers: T-, o- and tear-drop micromixers. Experimental tests were carried out in the laminar flow regime over a low Reynolds number range, 0.083 ≤ Re ≤ 4.166, and image-based techniques were used to evaluate the mixing efficiency. The computational fluid dynamics code ANSYS FLUENT 13.0 was used to analyze the flow and pressure drop in the microchannel. Experimental results show that the chain and tear-drop mixers' efficiency is very high because of the SAR process: specifically, an efficiency of up to 98% can be achieved at the tested Reynolds numbers. The results also show that chain mixers require a lower pressure drop in comparison with a tear-drop micromixer.
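    The image-based mixing evaluation mentioned above commonly reduces to an intensity-variance mixing index. The following is a minimal sketch of that idea, not the authors' exact procedure; the synthetic "images" and the fully unmixed reference standard deviation of 0.5 are demo assumptions.

```python
import numpy as np

def mixing_index(gray, unmixed_std=0.5):
    """Grayscale mixing index: 1 - sigma/sigma0.

    gray: 2-D array of normalized pixel intensities in [0, 1] over the
    channel cross-section. sigma0 (unmixed_std) is the intensity std of
    the fully segregated state: 0.5 for equal streams of pure dye (1)
    and pure water (0). Returns 0 for unmixed, 1 for perfectly mixed.
    """
    return 1.0 - gray.std() / unmixed_std

# Synthetic cross-sections: segregated side-by-side streams vs. uniform blend
unmixed = np.concatenate([np.zeros((32, 16)), np.ones((32, 16))], axis=1)
mixed = np.full((32, 32), 0.5)
print(mixing_index(unmixed))  # 0.0
print(mixing_index(mixed))    # 1.0
```

    In practice the intensities come from calibrated grayscale photographs of the dye stream at successive downstream cross-sections, so the index traces mixing progress along the channel.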

  7. P-HS-SFM: a parallel harmony search algorithm for the reproduction of experimental data in the continuous microscopic crowd dynamic models

    NASA Astrophysics Data System (ADS)

    Jaber, Khalid Mohammad; Alia, Osama Moh'd.; Shuaib, Mohammed Mahmod

    2018-03-01

    Finding the optimal parameters that can reproduce experimental data (such as the velocity-density relation and the specific flow rate) is a very important component of the validation and calibration of microscopic crowd dynamic models. Heavy computational demand during parameter search is a known limitation of a previously developed model known as the Harmony Search-Based Social Force Model (HS-SFM). In this paper, a parallel-based mechanism is proposed to reduce the computational time and memory resource utilisation required to find these parameters. More specifically, two MATLAB-based multicore techniques (parfor and create independent jobs) using shared memory are developed by taking advantage of the multithreading capabilities of parallel computing, resulting in a new framework called the Parallel Harmony Search-Based Social Force Model (P-HS-SFM). The experimental results show that the parfor-based P-HS-SFM achieved a better computational time of about 26 h, an efficiency improvement of ≈54% and a speedup factor of 2.196 times in comparison with the sequential HS-SFM. The performance of the P-HS-SFM using the create-independent-jobs approach is comparable to parfor, with a computational time of 26.8 h, an efficiency improvement of about 30% and a speedup of 2.137 times.
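    For readers unfamiliar with the metaheuristic behind HS-SFM, a bare-bones harmony search looks like the following. This is a generic sketch with a toy least-squares objective standing in for the crowd-model calibration error; the parameter names (hms, hmcr, par, bw) are the standard ones from the harmony search literature, but all values here are illustrative. The paper's parallel variants farm out the objective evaluations (MATLAB parfor); a Python port could do the same with multiprocessing.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=0):
    """Minimal harmony search: hms = harmony memory size, hmcr = memory
    consideration rate, par = pitch adjustment rate, bw = pitch bandwidth
    (as a fraction of each variable's range)."""
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    costs = [objective(h) for h in mem]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                   # pick from memory...
                v = mem[rng.randrange(hms)][d]
                if rng.random() < par:                # ...then pitch-adjust
                    v += rng.uniform(-bw, bw) * (hi - lo)
            else:                                     # random exploration
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        c = objective(new)
        worst = max(range(hms), key=costs.__getitem__)
        if c < costs[worst]:                          # replace worst harmony
            mem[worst], costs[worst] = new, c
    best = min(range(hms), key=costs.__getitem__)
    return mem[best], costs[best]

# Toy "calibration": recover two parameters minimizing squared error
target = [1.2, 0.34]
obj = lambda p: (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2
best, cost = harmony_search(obj, bounds=[(0.0, 2.0), (0.0, 1.0)])
print(best, cost)  # cost shrinks toward 0 as the memory converges
```

    In the real calibration the objective is expensive (a full pedestrian simulation per candidate), which is why parallelizing the evaluations pays off so strongly.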

  8. Effect of quadrupole focusing-field fluctuation on the transverse stability of intense hadron beams in storage rings

    NASA Astrophysics Data System (ADS)

    Ito, Kiyokazu; Matsuba, Masanori; Okamoto, Hiromi

    2018-02-01

    A systematic experimental study is performed to clarify the parameter dependence of the noise-induced beam instability previously demonstrated by a Princeton group [M. Chung et al., Phys. Rev. Lett. 102, 145003 (2009)]. Because of the weakness of the driving force, the instability develops very slowly, which substantially limits the application of conventional experimental and numerical techniques. In the present study, a novel tabletop apparatus called "S-POD" (Simulator of Particle Orbit Dynamics) is employed to explore the long-term collective behavior of intense hadron beams. S-POD provides a many-body Coulomb system physically equivalent to a relativistic charged-particle beam and thus enables us to conduct various beam-dynamics experiments without the use of large-scale machines. It is reconfirmed that random noise on the linear beam-focusing potential can be a source of slow beam quality degradation. Experimental observations are explained well by a simple perturbation theory that predicts the existence of a series of dangerous noise frequency bands overlooked in the previous study. Those additional instability bands newly identified with S-POD are more important practically because the driving noise frequencies can be very low. The dependence of the instability on the noise level, operating tune, and beam intensity is examined and found consistent with theoretical predictions.

  9. Nonlinear acoustics in cicada mating calls enhance sound propagation.

    PubMed

    Hughes, Derke R; Nuttall, Albert H; Katz, Richard A; Carter, G Clifford

    2009-02-01

    An analysis of cicada mating calls, measured in field experiments, indicates that the very high levels of acoustic energy radiated by this relatively small insect are mainly attributed to the nonlinear characteristics of the signal. The cicada emits one of the loudest sounds in all of the insect population with a sound production system occupying a physical space typically less than 3 cc. The sounds made by tymbals are amplified by the hollow abdomen, functioning as a tuned resonator, but models of the signal based solely on linear techniques do not fully account for a sound radiation capability that is so disproportionate to the insect's size. The nonlinear behavior of the cicada signal is demonstrated by combining the mutual information and surrogate data techniques; the results obtained indicate decorrelation when the phase-randomized and non-phase-randomized data separate. The Volterra expansion technique is used to fit the nonlinearity in the insect's call. The second-order Volterra estimate provides further evidence that the cicada mating calls are dominated by nonlinear characteristics and also suggests that the medium contributes to the cicada's efficient sound propagation. Application of the same principles has the potential to improve radiated sound levels for sonar applications.
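    The surrogate-data test mentioned above can be sketched as follows: phase-randomized (FT) surrogates keep the signal's amplitude spectrum but destroy nonlinear structure, so a statistic that separates the original from its surrogates flags nonlinearity. The noisy sawtooth test signal and the time-asymmetry statistic below are illustrative stand-ins for the cicada recordings and the mutual-information statistic used in the paper.

```python
import numpy as np

def ft_surrogate(x, rng):
    """Phase-randomized surrogate: same amplitude spectrum, scrambled phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = 0.0                   # keep the DC bin real
    if len(x) % 2 == 0:
        phases[-1] = 0.0              # Nyquist bin must stay real too
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def time_asymmetry(x):
    """Skewness of first differences; ~0 for any linear Gaussian process."""
    d = np.diff(x)
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

rng = np.random.default_rng(1)
t = np.arange(4096)
signal = (0.02 * t) % 1.0 + 0.05 * rng.standard_normal(t.size)  # noisy sawtooth
s0 = time_asymmetry(signal)
surr = np.array([time_asymmetry(ft_surrogate(signal, rng)) for _ in range(19)])
# The original's statistic lies far outside the surrogate distribution,
# flagging nonlinearity (rank test with 19 surrogates)
print(abs(s0) > np.abs(surr).max())  # True
```

    The decorrelation reported in the abstract corresponds to exactly this separation between the phase-randomized and non-phase-randomized data.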

  10. Methods and materials for locating and studying spotted owls.

    Treesearch

    Eric D. Forsman

    1983-01-01

    Nocturnal calling surveys are the most effective and most frequently used technique for locating spotted owls. Roosts and general nest locations may be located during the day by calling in suspected roost or nest areas. Specific nest trees are located by: (1) baiting with a live mouse to induce owls to visit the nest, (2) calling in suspected nest areas to stimulate...

  11. What Does CALL Have to Offer Computer Science and What Does Computer Science Have to Offer CALL?

    ERIC Educational Resources Information Center

    Cushion, Steve

    2006-01-01

    We will argue that CALL can usefully be viewed as a subset of computer software engineering and can profit from adopting some of the recent progress in software development theory. The unified modelling language has become the industry standard modelling technique and the accompanying unified process is rapidly gaining acceptance. The manner in…

  12. Status of the GERDA experiment

    NASA Astrophysics Data System (ADS)

    Medinaceli, E.; Gerda Collaboration

    2017-07-01

    The GERDA experiment is designed to search for neutrinoless double beta decay (0νββ) of 76Ge and thereby assess the nature of neutrinos (Dirac or Majorana). In the so-called Phase I, with an exposure of 21.6 kg yr, GERDA reached a background index (BI) of 10^-2 cts/(keV kg yr). No signal was found during this phase, and a lower limit on the process half-life of 2.1×10^25 yr was derived (90% CL). GERDA is currently being upgraded to Phase II, in which the 76Ge mass will be doubled and the BI is expected to be reduced by an order of magnitude. In the absence of a signal, 0νββ half-life limits of the order of 10^26 yr will be derived. The experimental techniques used by GERDA are depicted and the most relevant results from Phase I are shown, as well as details of the Phase II upgrades, including the status of the recently deployed additional detectors and the new background-reduction techniques using the active liquid argon veto.

  13. ExoMol molecular line lists - XXVII: spectra of C2H4

    NASA Astrophysics Data System (ADS)

    Mant, Barry P.; Yachmenev, Andrey; Tennyson, Jonathan; Yurchenko, Sergei N.

    2018-05-01

    A new line list for ethylene, 12C2 1H4, is presented. The line list is based on high-level ab initio potential energy and dipole moment surfaces. The potential energy surface is refined by fitting to experimental energies. The line list covers the range up to 7000 cm-1 (1.43 μm), with all ro-vibrational transitions (50 billion) with the lower state below 5000 cm-1 included, and thus should be applicable for temperatures up to 700 K. A technique for computing molecular opacities from vibrational band intensities is proposed and used to provide temperature-dependent cross sections of ethylene for shorter wavelengths and higher temperatures. When combined with realistic band profiles (such as the proposed three-band model), the vibrational intensity technique offers a cheap but reasonably accurate alternative to full ro-vibrational calculations at high temperatures and should be reliable for representing molecular opacities. The C2H4 line list, which is called MaYTY, is made available in electronic form from the CDS (http://cdsarc.u-strasbg.fr) and ExoMol (www.exomol.com) databases.
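    At its core, turning stick intensities into cross sections amounts to convolving line strengths with a profile. A minimal sketch with area-normalized Gaussian (Doppler-like) profiles follows; the line positions, strengths, and width are invented demo numbers, not MaYTY data, and real opacity calculations use temperature-dependent strengths and Voigt profiles.

```python
import numpy as np

def cross_section(grid, line_pos, line_strength, hwhm):
    """sigma(nu) = sum_i S_i * g(nu - nu_i), with g an area-normalized
    Gaussian of the given half-width at half-maximum (cm^-1)."""
    alpha = hwhm / np.sqrt(np.log(2.0))              # Gaussian 1/e half-width
    g = np.exp(-((grid[:, None] - line_pos[None, :]) / alpha) ** 2)
    g /= alpha * np.sqrt(np.pi)                      # unit area per line
    return g @ line_strength

# Three hypothetical lines (cm^-1, cm/molecule); not real C2H4 data
nu0 = np.array([949.0, 950.5, 952.2])
S = np.array([1.0e-20, 2.5e-20, 0.8e-20])
grid = np.linspace(945.0, 956.0, 2201)
xs = cross_section(grid, nu0, S, hwhm=0.1)
# Integrating the cross section recovers the summed line strength
print(xs.sum() * (grid[1] - grid[0]))  # ~ 4.3e-20
```

    The band-intensity shortcut proposed in the paper replaces the billions of individual lines with a handful of vibrational bands broadened this way, which is why it is so much cheaper at high temperature.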

  14. Measurement of Interfacial Profiles of Wavy Film Flow on Inclined Wall

    NASA Astrophysics Data System (ADS)

    Rosli, N.; Amagai, K.

    2016-02-01

    Falling liquid films on inclined walls are present in many industrial processes, such as food processing, seawater desalination and electronic device manufacturing. To ensure optimal efficiency of operations in these industries, a fundamental study of the interfacial flow profiles of the liquid film is of great importance. However, it is generally difficult to experimentally predict the interfacial profiles of liquid film flow on an inclined wall, due to the unstable wavy flow that usually forms on the liquid film surface. In this paper, the liquid film surface velocity was measured using a non-intrusive technique called the photochromic dye marking method. This technique utilizes the color change of a liquid containing a photochromic dye when exposed to a UV light source. The movement of the liquid film surface marked by the UV light was analyzed together with the wave passing over the liquid. As a result, the liquid film surface was found to slow its gradual movement slightly when approached by the wave, before gradually moving again after the intersection with the wave.

  15. Computing sensitivity and selectivity in parallel factor analysis and related multiway techniques: the need for further developments in net analyte signal theory.

    PubMed

    Olivieri, Alejandro C

    2005-08-01

    Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte signal theory. The results have strong implications for the planning of multiway analytical experiments.
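    The Monte Carlo route to figures of merit can be illustrated in the simpler first-order (single data vector) setting: perturb the signal with noise, regress, and read sensitivity off the noise amplification. The overlapped Gaussian profiles and noise level below are made up for the demo; the multiway (PARAFAC) case in the paper requires the trilinear model instead of a plain pseudoinverse.

```python
import numpy as np

def mc_sensitivity(S, analyte=0, sigma=1e-3, trials=2000, seed=0):
    """Monte Carlo sensitivity: add iid noise to a pure-analyte signal,
    regress via the pseudoinverse, and take
    SEN = sigma_signal / sigma_predicted_concentration."""
    rng = np.random.default_rng(seed)
    pinv = np.linalg.pinv(S)          # rows map signal -> concentration
    c = np.array([
        pinv[analyte] @ (S[:, analyte] + sigma * rng.standard_normal(S.shape[0]))
        for _ in range(trials)
    ])
    return sigma / c.std(ddof=1)

# Two overlapped Gaussian profiles as columns of S; the overlap is what
# degrades sensitivity and selectivity for each analyte
x = np.linspace(0.0, 1.0, 200)
S = np.column_stack([np.exp(-((x - 0.45) / 0.1) ** 2),
                     np.exp(-((x - 0.55) / 0.1) ** 2)])
sen_mc = mc_sensitivity(S)
sen_exact = 1.0 / np.linalg.norm(np.linalg.pinv(S)[0])  # closed-form check
print(sen_mc, sen_exact)  # the two agree to within sampling error
```

    The appeal of the Monte Carlo estimate is exactly what the abstract exploits: it still works in multiway settings where no closed-form expression like the one-line check above is available.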

  16. The Deflection Plate Analyzer: A Technique for Space Plasma Measurements Under Highly Disturbed Conditions

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth H., Jr.; Dutton, Ken; Martinez, Nelson; Smith, Dennis; Stone, Nobie H.

    2004-01-01

    A technique has been developed to measure the characteristics of space plasmas under highly disturbed conditions; e.g., non-Maxwellian plasmas with strong drifting populations and plasmas contaminated by spacecraft outgassing. The present method is an extension of the capabilities of the Differential Ion Flux Probe (DIFP) to include a mass measurement that does not include either high voltage or contamination sensitive devices such as channeltron electron multipliers or microchannel plates. This reduces the complexity and expense of instrument fabrication, testing, and integration of flight hardware as compared to classical mass analyzers. The new instrument design is called the Deflection Plate Analyzer (DPA) and can deconvolve multiple ion streams and analyze each stream for ion flux intensity (density), velocity (including direction of motion), mass, and temperature (or energy distribution). The basic functionality of the DPA is discussed. The performance characteristics of a flight instrument as built for an electrodynamic tether mission, the Propulsive Small Expendable Deployer System (ProSEDS), and the instrument's role in measuring key experimental conditions are also discussed.

  17. Query-based learning for aerospace applications.

    PubMed

    Saad, E W; Choi, J J; Vian, J L; Wunsch, D C Ii

    2003-01-01

    Models of real-world applications often include a large number of parameters with a wide dynamic range, which contributes to the difficulties of neural network training. Creating the training data set for such applications becomes costly, if not impossible. To overcome this challenge, one can employ an active learning technique known as query-based learning (QBL) to add performance-critical data to the training set during the learning phase, thereby efficiently improving overall learning and generalization. The performance-critical data can be obtained using an inverse mapping called network inversion (discrete network inversion and continuous network inversion) followed by an oracle query. This paper investigates the use of both inversion techniques for QBL, and introduces an original heuristic to select the inversion target values for the continuous network inversion method. Efficiency and generalization were further enhanced by employing node-decoupled extended Kalman filter (NDEKF) training and a causality index (CI) as a means to reduce the input search dimensionality. The benefits of the overall QBL approach are experimentally demonstrated in two aerospace applications: a classification problem with a large input space and a control distribution problem.
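Continuous network inversion, as named here, can be sketched as gradient descent on the *input* of a fixed network: hold the trained weights constant and move the input until the output approaches a chosen target, which is then submitted to the oracle. The tiny random-weight network below is a hypothetical stand-in, not the paper's model.

```python
import numpy as np

# Continuous network inversion sketch: descend on x so that f(x) -> y_target.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)   # hidden-layer weights (fixed)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)   # output-layer weights (fixed)

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))  # sigmoid output in (0, 1)

def invert(y_target, x0, steps=1000, lr=0.1):
    x, best_x, best_err = x0.copy(), x0.copy(), np.inf
    for _ in range(steps):
        h = np.tanh(W1 @ x + b1)
        y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))
        err = float(np.abs(y - y_target))
        if err < best_err:                       # remember the best input seen
            best_x, best_err = x.copy(), err
        dy = (y - y_target) * y * (1.0 - y)      # backprop through the sigmoid
        dh = (W2.T @ dy) * (1.0 - h**2)          # backprop through the tanh
        x = x - lr * (W1.T @ dh)                 # gradient step on the input
    return best_x

x_star = invert(np.array([0.9]), x0=np.zeros(2))
print(forward(x_star))   # output pulled toward the 0.9 target
```

Discrete network inversion would instead search a discrete candidate set of inputs; the gradient form above is the continuous variant the heuristic in the abstract selects targets for.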

  18. Theoretical Sum Frequency Generation Spectroscopy of Peptides

    PubMed Central

    2015-01-01

    Vibrational sum frequency generation (SFG) has become a very promising technique for the study of proteins at interfaces, and it has been applied to important systems such as anti-microbial peptides, ion channel proteins, and human islet amyloid polypeptide. Moreover, so-called “chiral” SFG techniques, which rely on polarization combinations that generate strong signals primarily for chiral molecules, have proven to be particularly discriminatory of protein secondary structure. In this work, we present a theoretical strategy for calculating protein amide I SFG spectra by combining line-shape theory with molecular dynamics simulations. We then apply this method to three model peptides, demonstrating the existence of a significant chiral SFG signal for peptides with chiral centers, and providing a framework for interpreting the results on the basis of the dependence of the SFG signal on the peptide orientation. We also examine the importance of dynamical and coupling effects. Finally, we suggest a simple method for determining a chromophore’s orientation relative to the surface using ratios of experimental heterodyne-detected signals with different polarizations, and test this method using theoretical spectra. PMID:25203677

  19. Electromagnetic Test-Facility characterization: an identification approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zicker, J.E.; Candy, J.V.

    The response of an object subjected to high energy, transient electromagnetic (EM) fields, sometimes called electromagnetic pulses (EMP), is an important issue in the survivability of electronic systems (e.g., aircraft), especially when the field has been generated by a high altitude nuclear burst. The characterization of transient response information is a matter of national concern. In this report we discuss techniques to: (1) improve signal processing at a test facility; and (2) parameterize a particular object response. First, we discuss the application of identification-based signal processing techniques to improve signal levels at the Lawrence Livermore National Laboratory (LLNL) EM Transient Test Facility. We identify models of test equipment and then use these models to deconvolve the input/output sequences for the object under test. A parametric model of the object is identified from this data. The model can be used to extrapolate the response to threat-level EMP. Also discussed is the development of a facility simulator (EMSIM) useful for experimental design and calibration and a deconvolution algorithm (DECONV) useful for removing probe effects from the measured data.

  20. Probing interferometric parallax with interplanetary spacecraft

    NASA Astrophysics Data System (ADS)

    Rodeghiero, G.; Gini, F.; Marchili, N.; Jain, P.; Ralston, J. P.; Dallacasa, D.; Naletto, G.; Possenti, A.; Barbieri, C.; Franceschini, A.; Zampieri, L.

    2017-07-01

    We describe an experimental scenario for testing a novel method to measure distance and proper motion of astronomical sources. The method is based on multi-epoch observations of amplitude or intensity correlations between separate receiving systems. This technique is called Interferometric Parallax, and efficiently exploits phase information that has traditionally been overlooked. The test case we discuss combines amplitude correlations of signals from deep space interplanetary spacecraft with those from distant galactic and extragalactic radio sources with the goal of estimating the interplanetary spacecraft distance. Interferometric parallax relies on the detection of wavefront curvature effects in signals collected by pairs of separate receiving systems. The method shows promising potential over current techniques when the target is unresolved from the background reference sources. Developments in this field might lead to the construction of an independent, geometrical cosmic distance ladder using a dedicated project and future generation instruments. We present a conceptual overview supported by numerical estimates of its performance applied to a spacecraft orbiting the Solar System. Simulations support the feasibility of measurements with a simple and time-saving observational scheme using current facilities.
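The wavefront-curvature effect the method relies on can be sized with a back-of-the-envelope calculation: for a source at finite distance D observed over a baseline b, the spherical wavefront differs from the plane-wave geometry by a delay of roughly b²/(2Dc). The baseline and distances below are illustrative assumptions, not values from the paper.

```python
# Leading-order curvature-induced differential delay for a two-element
# interferometer of baseline b observing a source at distance D.
C = 299_792_458.0                     # speed of light, m/s
AU = 1.495978707e11                   # astronomical unit, m

def curvature_delay(b, D):
    # Spherical-vs-plane-wave delay difference, b^2 / (2 D c)
    return b * b / (2.0 * D * C)

b = 6.0e6                             # roughly Earth-sized baseline, m
for d_au in (1.0, 5.0, 30.0):         # spacecraft at 1, 5 and 30 au
    dt = curvature_delay(b, d_au * AU)
    print(f"{d_au:5.1f} au -> {dt * 1e9:9.2f} ns")
```

The delay falls off as 1/D, which is why detecting curvature for ever more distant sources demands longer baselines or higher timing precision.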

  1. Thumb-loops up for catalysis: a structure/function investigation of a functional loop movement in a GH11 xylanase

    PubMed Central

    Paës, Gabriel; Cortés, Juan; Siméon, Thierry; O'Donohue, Michael J.; Tran, Vinh

    2012-01-01

    Dynamics is a key feature of enzyme catalysis. Unfortunately, current experimental and computational techniques do not yet provide a comprehensive understanding and description of functional macromolecular motions. In this work, we have extended a novel computational technique, which combines molecular modeling methods and robotics algorithms, to investigate functional motions of protein loops. This new approach has been applied to study the functional importance of the so-called thumb-loop in the glycoside hydrolase family 11 xylanase from Thermobacillus xylanilyticus (Tx-xyl). The results obtained provide new insight into the role of the loop in the glycosylation/deglycosylation catalytic cycle, and underline the key importance of the nature of the residue located at the tip of the thumb-loop. The effect of mutations predicted in silico has been validated by in vitro site-directed mutagenesis experiments. Overall, we propose a comprehensive model of Tx-xyl catalysis in terms of substrate and product dynamics by identifying the action of the thumb-loop motion during catalysis. PMID:24688637

  2. Generation of Escher Arts with Dual Perception.

    PubMed

    Lin, Shih-Syun; Morace, Charles C; Lin, Chao-Hung; Hsu, Li-Fong; Lee, Tong-Yee

    2018-02-01

    Escher transmutation is a graphic art that smoothly transforms one tile pattern into another tile pattern with dual perception. A classic example is the artwork called Sky and Water, in which a compelling figure-ground arrangement is applied to portray the transmutation of a bird in sky and a fish in water. The shape of a bird is progressively deformed and dissolves into the background while the background gradually reveals the shape of a fish. This paper introduces a system to create a variety of Escher-like transmutations, which includes the algorithms for initializing a tile pattern with dual figure-ground arrangement, for searching for the best matched shape of a user-specified motif from a database, and for transforming the content and shapes of tile patterns using a content-aware warping technique. The proposed system, integrating the graphic techniques of tile initialization, shape matching, and shape warping, allows users to create various Escher-like transmutations with minimal user interaction. Experimental results and conducted user studies demonstrate the feasibility and flexibility of the proposed system in Escher art generation.

  3. Interaction of Vortex Ring with Cutting Plate

    NASA Astrophysics Data System (ADS)

    Musta, Mustafa

    2015-11-01

    The interaction of a vortex ring impinging on a thin cutting plate was studied experimentally using the Volumetric 3-component Velocimetry (V3V) technique. The vortex rings were generated with a piston-cylinder vortex ring generator at piston stroke-to-diameter ratios of 2-3 and Reynolds numbers of 1500-3000. Cutting a vortex ring below its center line leads to the formation of secondary vortices on each side of the plate, which look like two vortex rings, while a third vortex ring propagates further downstream in the direction of the initial vortex ring; this was previously shown in the flow visualization study of Weigand (1993) and called ``trifurcation''. Trifurcation is very sensitive to the initial Reynolds number and to the position of the plate with respect to the vortex ring generator pipe. The present work undertakes a more detailed investigation of the trifurcation using the V3V technique. Conditions for the formation of trifurcation are analyzed and compared with Weigand (1993). The secondary vortex rings that form and the propagation of the initial vortex ring downstream of the plate are analyzed by calculating their circulation, energy, and trajectories.
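The circulation calculation mentioned at the end can be sketched as an area integral of vorticity over a cross-sectional plane of the measured velocity field. Here a Lamb-Oseen vortex on a regular grid stands in for V3V data; the parameters are illustrative assumptions.

```python
import numpy as np

# Circulation from a 2D velocity slice: Gamma = integral of omega_z dA.
gamma_true, rc = 1.0, 0.1                       # circulation and core radius
x = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(x, x, indexing="xy")
r2 = X**2 + Y**2 + 1e-12                        # avoid division by zero at origin
utheta = gamma_true / (2 * np.pi * np.sqrt(r2)) * (1 - np.exp(-r2 / rc**2))
u = -utheta * Y / np.sqrt(r2)                   # Cartesian velocity components
v = utheta * X / np.sqrt(r2)

# Out-of-plane vorticity omega_z = dv/dx - du/dy via central differences
dx = x[1] - x[0]
omega = np.gradient(v, dx, axis=1) - np.gradient(u, dx, axis=0)
circulation = omega.sum() * dx * dx             # area integral of vorticity
print(circulation)                              # should approach gamma_true
```

In practice the same integral applied separately to each secondary ring (over a window enclosing its core) tracks how the initial circulation is partitioned during trifurcation.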

  4. A Novel Hybrid Intelligent Indoor Location Method for Mobile Devices by Zones Using Wi-Fi Signals

    PubMed Central

    Castañón–Puga, Manuel; Salazar, Abby Stephanie; Aguilar, Leocundo; Gaxiola-Pacheco, Carelia; Licea, Guillermo

    2015-01-01

    The increasing use of mobile devices in indoor spaces brings challenges to location methods. This work presents a hybrid intelligent method based on data mining and Type-2 fuzzy logic to locate mobile devices in an indoor space by zones using Wi-Fi signals from selected access points (APs). This approach takes advantage of wireless local area networks (WLANs) over other types of architectures and implements the complete method in a mobile application using the developed tools. In addition, the proposed approach is validated with experimental data obtained from case studies and the cross-validation technique. For the purpose of generating the fuzzy rules that conform to the Takagi–Sugeno fuzzy system structure, a semi-supervised data mining technique called subtractive clustering is used. This algorithm finds the centers of clusters from the radius map given by the signals collected from the APs. Measurements of Wi-Fi signals can be noisy due to several factors mentioned in this work, so the method proposes the use of Type-2 fuzzy logic for modeling and dealing with such uncertain information. PMID:26633417
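Subtractive clustering, as named in the abstract, can be sketched as follows: each data point receives a "potential" based on how many neighbors it has, the point with the highest potential becomes a cluster center, and potentials near that center are then suppressed before the next center is picked. This is a minimal Chiu-style outline; the radius `ra` and stopping ratio `eps` are illustrative choices, not the paper's values.

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, eps=0.3):
    alpha = 4.0 / ra**2
    beta = 4.0 / (1.5 * ra)**2                         # wider radius for subtraction
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)  # pairwise squared distances
    P = np.exp(-alpha * d2).sum(axis=1)                # potential of each point
    P1, centers = P.max(), []
    for _ in range(len(X)):                            # at most one center per point
        c = int(P.argmax())
        if P[c] < eps * P1:                            # stop when potentials are low
            break
        centers.append(X[c])
        P = P - P[c] * np.exp(-beta * d2[:, c])        # suppress nearby potentials
    return np.array(centers)

# Two well-separated blobs should yield two cluster centers
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.05, size=(30, 2)),
               rng.normal(3.0, 0.05, size=(30, 2))])
centers = subtractive_clustering(X)
print(centers.shape)
```

In the indoor-location setting each center found from the Wi-Fi signal map would seed one Takagi–Sugeno fuzzy rule.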

  5. The Deflection Plate Analyzer: A Technique for Space Plasma Measurements Under Highly Disturbed Conditions

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth H., Jr.; Dutton, Ken; Martinez, Nelson; Smith, Dennis; Stone, Nobie H.

    2003-01-01

    A technique has been developed to measure the characteristics of space plasmas under highly disturbed conditions; e.g., non-Maxwellian plasmas with strong drifting populations and plasmas contaminated by spacecraft outgassing. The present method is an extension of the capabilities of the Differential Ion Flux Probe (DIFP) to include a mass measurement that does not include either high voltage or contamination sensitive devices such as channeltron electron multipliers or microchannel plates. This reduces the complexity and expense of instrument fabrication, testing, and integration of flight hardware as compared to classical mass analyzers. The new instrument design is called the Deflection Plate Analyzer (DPA) and can deconvolve multiple ion streams and analyze each stream for ion flux intensity (density), velocity (including direction of motion), mass, and temperature (or energy distribution). The basic functionality of the DPA is discussed. The performance characteristics of a flight instrument as built for an electrodynamic tether mission, the Propulsive Small Expendable Deployer System (ProSEDS), and the instrument's role in measuring key experimental conditions are also discussed.

  6. Determination by Small-angle X-ray Scattering of Pore Size Distribution in Nanoporous Track-etched Polycarbonate Membranes

    NASA Astrophysics Data System (ADS)

    Jonas, A. M.; Legras, R.; Ferain, E.

    1998-03-01

    Nanoporous track-etched membranes with narrow pore size distributions and average pore diameters tunable from 100 to 1000 Å are produced by the chemical etching of latent tracks in polymer films after irradiation by a beam of accelerated heavy ions. Nanoporous membranes are used for highly demanding filtration purposes, or as templates to obtain metallic or polymeric nanowires (L. Piraux et al., Nucl. Instr. Meth. Phys. Res. 1997, B131, 357). Such applications call for developments in nanopore size characterization techniques. In this respect, we report on the characterization by small-angle X-ray scattering (SAXS) of the nanopore size distribution (nPSD) in polycarbonate track-etched membranes. Obtaining the nPSD requires inverting an ill-conditioned inhomogeneous equation. We present different numerical routes to overcome the amplification of experimental errors in the resulting solutions, including a regularization technique that allows the nPSD to be obtained without a priori knowledge of its shape. The effect of deviations from cylindrical pore shape on the resulting distributions is analyzed. Finally, SAXS results are compared to results obtained by electron microscopy and conductometry.
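The kind of regularized inversion described here can be illustrated with Tikhonov regularization of an ill-conditioned linear system A f = g: a small penalty on the solution norm tames the amplification of measurement noise at the cost of some bias. The smooth kernel below is a generic stand-in, not the actual SAXS kernel.

```python
import numpy as np

# Tikhonov-regularized inversion of an ill-conditioned system (toy kernel).
n = 50
r = np.linspace(0.1, 1.0, n)              # "pore radius" grid (arbitrary units)
q = np.linspace(0.1, 5.0, n)              # "scattering vector" grid
A = np.exp(-np.outer(q, r)**2)            # smooth kernel -> severely ill-conditioned

f_true = np.exp(-((r - 0.5)**2) / 0.01)   # narrow size distribution
g = A @ f_true + 1e-4 * np.random.default_rng(0).normal(size=n)  # noisy data

def tikhonov(A, g, lam):
    # Solve (A^T A + lam I) f = A^T g; lam trades noise amplification for bias
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ g)

f_naive = np.linalg.lstsq(A, g, rcond=None)[0]   # unregularized, noise-dominated
f_reg = tikhonov(A, g, lam=1e-6)
print(np.linalg.norm(f_naive - f_true), np.linalg.norm(f_reg - f_true))
```

The abstract's "regularization technique allowing the nPSD to be obtained without a priori knowledge of its shape" operates in this spirit, though with a problem-specific penalty rather than the plain identity used in this sketch.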

  7. A Novel Hybrid Intelligent Indoor Location Method for Mobile Devices by Zones Using Wi-Fi Signals.

    PubMed

    Castañón-Puga, Manuel; Salazar, Abby Stephanie; Aguilar, Leocundo; Gaxiola-Pacheco, Carelia; Licea, Guillermo

    2015-12-02

    The increasing use of mobile devices in indoor spaces brings challenges to location methods. This work presents a hybrid intelligent method based on data mining and Type-2 fuzzy logic to locate mobile devices in an indoor space by zones using Wi-Fi signals from selected access points (APs). This approach takes advantage of wireless local area networks (WLANs) over other types of architectures and implements the complete method in a mobile application using the developed tools. In addition, the proposed approach is validated with experimental data obtained from case studies and the cross-validation technique. For the purpose of generating the fuzzy rules that conform to the Takagi-Sugeno fuzzy system structure, a semi-supervised data mining technique called subtractive clustering is used. This algorithm finds the centers of clusters from the radius map given by the signals collected from the APs. Measurements of Wi-Fi signals can be noisy due to several factors mentioned in this work, so the method proposes the use of Type-2 fuzzy logic for modeling and dealing with such uncertain information.

  8. Magnetic resonance spectroscopic imaging at superresolution: Overview and perspectives

    NASA Astrophysics Data System (ADS)

    Kasten, Jeffrey; Klauser, Antoine; Lazeyras, François; Van De Ville, Dimitri

    2016-02-01

    The notion of non-invasive, high-resolution spatial mapping of metabolite concentrations has long enticed the medical community. While magnetic resonance spectroscopic imaging (MRSI) is capable of achieving the requisite spatio-spectral localization, it has traditionally been encumbered by significant resolution constraints that have thus far undermined its clinical utility. To surpass these obstacles, research efforts have primarily focused on hardware enhancements or the development of accelerated acquisition strategies to improve the experimental sensitivity per unit time. Concomitantly, a number of innovative reconstruction techniques have emerged as alternatives to the standard inverse discrete Fourier transform (DFT). While perhaps lesser known, these latter methods strive to effect commensurate resolution gains by exploiting known properties of the underlying MRSI signal in concert with advanced image and signal processing techniques. This review article aims to aggregate and provide an overview of the past few decades of so-called "superresolution" MRSI reconstruction methodologies, and to introduce readers to current state-of-the-art approaches. A number of perspectives are then offered as to the future of high-resolution MRSI, with a particular focus on translation into clinical settings.

  9. HPSLPred: An Ensemble Multi-Label Classifier for Human Protein Subcellular Location Prediction with Imbalanced Source.

    PubMed

    Wan, Shixiang; Duan, Yucong; Zou, Quan

    2017-09-01

    Predicting the subcellular localization of proteins is an important and challenging problem. Traditional experimental approaches are often expensive and time-consuming. Consequently, a growing number of research efforts employ a series of machine learning approaches to predict the subcellular location of proteins. There are two main challenges among the state-of-the-art prediction methods. First, most of the existing techniques are designed to deal with multi-class rather than multi-label classification, which ignores connections between multiple labels. In reality, multiple locations of particular proteins imply that there are vital and unique biological significances that deserve special focus and cannot be ignored. Second, techniques for handling imbalanced data in multi-label classification problems are necessary, but never employed. For solving these two issues, we have developed an ensemble multi-label classifier called HPSLPred, which can be applied for multi-label classification with an imbalanced protein source. For convenience, a user-friendly webserver has been established at http://server.malab.cn/HPSLPred. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Improved Method for Linear B-Cell Epitope Prediction Using Antigen’s Primary Sequence

    PubMed Central

    Raghava, Gajendra P. S.

    2013-01-01

    One of the major challenges in designing a peptide-based vaccine is the identification of antigenic regions in an antigen that can stimulate the B-cell response, also called B-cell epitopes. In the past, several methods have been developed for the prediction of conformational and linear (or continuous) B-cell epitopes. However, the existing methods for predicting linear B-cell epitopes are far from perfect. In this study, an attempt has been made to develop an improved method for predicting linear B-cell epitopes. We have retrieved experimentally validated B-cell epitopes as well as non-B-cell epitopes from the Immune Epitope Database and derived two types of datasets, called the Lbtope_Variable and Lbtope_Fixed length datasets. The Lbtope_Variable dataset contains 14876 B-cell epitopes and 23321 non-epitopes of variable length, whereas the Lbtope_Fixed length dataset contains 12063 B-cell epitopes and 20589 non-epitopes of fixed length. We also evaluated the performance of models on the above datasets after removing highly identical peptides from the datasets. In addition, we have derived a third dataset, Lbtope_Confirm, having 1042 epitopes and 1795 non-epitopes, where each epitope or non-epitope has been experimentally validated in at least two studies. A number of models have been developed to discriminate epitopes and non-epitopes using different machine-learning techniques, such as Support Vector Machine and K-Nearest Neighbor. We achieved accuracies from ∼54% to 86% using diverse features like binary profile, dipeptide composition, and AAP (amino acid pair) profile. In this study, for the first time, experimentally validated non-B-cell epitopes have been used for developing a method for predicting linear B-cell epitopes; in previous studies, random peptides were used as non-B-cell epitopes. In order to provide a service to the scientific community, a web server, LBtope, has been developed for predicting and designing B-cell epitopes (http://crdd.osdd.net/raghava/lbtope/). PMID:23667458
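Two of the sequence encodings named in the abstract, the binary profile and the dipeptide composition, can be sketched directly from a peptide string. The exact feature definitions used by LBtope may differ; the padding character and fixed length below are assumptions for illustration.

```python
# Peptide feature encodings: 400-d dipeptide composition and per-residue one-hot profile.
AA = "ACDEFGHIKLMNPQRSTVWY"          # the 20 standard amino acids

def dipeptide_composition(seq):
    # Frequency of each ordered amino-acid pair -> 20 * 20 = 400 features
    idx = {a: i for i, a in enumerate(AA)}
    v = [0.0] * 400
    for a, b in zip(seq, seq[1:]):
        v[idx[a] * 20 + idx[b]] += 1.0
    n = max(len(seq) - 1, 1)
    return [x / n for x in v]        # normalize by the number of pairs

def binary_profile(seq, length=20):
    # One-hot profile of a fixed-length peptide: 20 bits per residue position
    idx = {a: i for i, a in enumerate(AA)}
    v = []
    for a in seq[:length].ljust(length, "X"):   # "X" pads short peptides (assumption)
        row = [0.0] * 20
        if a in idx:
            row[idx[a]] = 1.0
        v.extend(row)
    return v

feats = dipeptide_composition("ACDKLM")
print(sum(feats))                    # pair frequencies sum to 1
```

Vectors like these are what a Support Vector Machine or K-Nearest Neighbor classifier would consume to separate epitopes from non-epitopes.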

  11. Calling behavior of blue and fin whales off California

    NASA Astrophysics Data System (ADS)

    Oleson, Erin Marie

    Passive acoustic monitoring is an effective means for evaluating cetacean presence in remote regions and over long time periods, and may become an important component of cetacean abundance surveys. To use passive acoustic recordings for abundance estimation, an understanding of the behavioral ecology of cetacean calling is crucial. In this dissertation, I develop a better understanding of how blue (Balaenoptera musculus) and fin (B. physalus ) whales use sound with the goal of evaluating passive acoustic techniques for studying their populations. Both blue and fin whales produce several different call types, though the behavioral and environmental context of these calls have not been widely investigated. To better understand how calling is used by these whales off California I have employed both new technologies and traditional techniques, including acoustic recording tags, continuous long-term autonomous acoustic recordings, and simultaneous shipboard acoustic and visual surveys. The outcome of these investigations has led to several conclusions. The production of blue whale calls varies with sex, behavior, season, location, and time of day. Each blue whale call type has a distinct behavioral context, including a male-only bias in the production of song, a call type thought to function in reproduction, and the production of some calls by both sexes. Long-term acoustic records, when interpreted using all call types, provide a more accurate measure of the local seasonal presence of whales, and how they use the region annually, seasonally and daily. The relative occurrence of different call types may indicate prime foraging habitat and the presence of different segments of the population. 
The proportion of animals heard calling changes seasonally and geographically relative to the number seen, indicating the calibration of acoustic and visual surveys is complex and requires further study on the motivations behind call production and the behavior of calling whales. These findings will play a role in the future development of acoustic census methods and habitat studies for these species, and will provide baseline information for the determination of anthropogenic impacts on these populations.

  12. Design of a 3D Navigation Technique Supporting VR Interaction

    NASA Astrophysics Data System (ADS)

    Boudoin, Pierre; Otmane, Samir; Mallem, Malik

    2008-06-01

    Multimodality is a powerful paradigm to increase the realism and ease of interaction in Virtual Environments (VEs). In particular, the search for new metaphors and techniques for 3D interaction adapted to the navigation task is an important stage in the realization of future 3D interaction systems that support multimodality, in order to increase efficiency and usability. In this paper we propose a new multimodal 3D interaction model called Fly Over. This model is especially devoted to the navigation task. We present a qualitative comparison between Fly Over and a classical navigation technique called gaze-directed steering. The results from a preliminary evaluation on the IBISC semi-immersive Virtual Reality/Augmented Reality EVR@ platform show that Fly Over is a user-friendly and efficient navigation technique.

  13. Relaxed fault-tolerant hardware implementation of neural networks in the presence of multiple transient errors.

    PubMed

    Mahdiani, Hamid Reza; Fakhraie, Sied Mehdi; Lucas, Caro

    2012-08-01

    Reliability should be identified as the most important challenge in future nano-scale very large scale integration (VLSI) implementation technologies for the development of complex integrated systems. Normally, fault tolerance (FT) in a conventional system is achieved by increasing its redundancy, which also implies higher implementation costs and lower performance that sometimes makes it even infeasible. In contrast to custom approaches, a new class of applications is categorized in this paper, which is inherently capable of absorbing some degrees of vulnerability and providing FT based on their natural properties. Neural networks are good indicators of imprecision-tolerant applications. We have also proposed a new class of FT techniques called relaxed fault-tolerant (RFT) techniques which are developed for VLSI implementation of imprecision-tolerant applications. The main advantage of RFT techniques with respect to traditional FT solutions is that they exploit inherent FT of different applications to reduce their implementation costs while improving their performance. To show the applicability as well as the efficiency of the RFT method, the experimental results for implementation of a face-recognition computationally intensive neural network and its corresponding RFT realization are presented in this paper. The results demonstrate promising higher performance of artificial neural network VLSI solutions for complex applications in faulty nano-scale implementation environments.

  14. High-performance Negative Database for Massive Data Management System of The Mingantu Spectral Radioheliograph

    NASA Astrophysics Data System (ADS)

    Shi, Congming; Wang, Feng; Deng, Hui; Liu, Yingbo; Liu, Cuiyin; Wei, Shoulin

    2017-08-01

    As a dedicated synthetic aperture radio interferometer in China, the MingantU SpEctral Radioheliograph (MUSER), initially known as the Chinese Spectral RadioHeliograph (CSRH), has entered the stage of routine observation. More than 23 million data records per day need to be effectively managed to provide high-performance data query and retrieval for scientific data reduction. In light of these massive amounts of data generated by the MUSER, in this paper, a novel data management technique called the negative database (ND) is proposed and used to implement a data management system for the MUSER. Based on the key-value database, the ND technique makes complete utilization of the complement set of observational data to derive the requisite information. Experimental results showed that the proposed ND can significantly reduce storage volume in comparison with a relational database management system (RDBMS). Even when considering the time needed to derive records that were absent, its overall performance, including querying and deriving the data of the ND, is comparable with that of an RDBMS. The ND technique effectively solves the problem of massive data storage for the MUSER and is a valuable reference for the massive data management required in next-generation telescopes.
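The complement-set idea can be illustrated with a toy key-value layer: when almost every expected record exists, storing only the *absent* keys is far cheaper, and membership is derived by complement. The MUSER system's actual schema is more involved than this sketch; the class and key layout below are hypothetical.

```python
# Toy "negative database": store only absences over an enumerable key space.
class NegativeDB:
    def __init__(self, expected_keys):
        self.expected = expected_keys        # full key space (a range: O(1) membership)
        self.missing = set()                 # only absent records are stored

    def mark_missing(self, key):
        self.missing.add(key)

    def exists(self, key):
        # A record "exists" iff it is expected and not in the negative set
        return key in self.expected and key not in self.missing

    def stored_size(self):
        return len(self.missing)             # storage scales with absences only

db = NegativeDB(expected_keys=range(1_000_000))  # e.g. one key per observation slot
db.mark_missing(42)
print(db.exists(41), db.exists(42), db.stored_size())
```

The trade-off the abstract measures is visible even here: a query for an absent record costs a derivation (the complement check) instead of a direct lookup, but the stored volume tracks the number of gaps rather than the number of records.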

  15. Magnetic tweezers for the measurement of twist and torque.

    PubMed

    Lipfert, Jan; Lee, Mina; Ordu, Orkide; Kerssemakers, Jacob W J; Dekker, Nynke H

    2014-05-19

    Single-molecule techniques make it possible to investigate the behavior of individual biological molecules in solution in real time. These techniques include so-called force spectroscopy approaches such as atomic force microscopy, optical tweezers, flow stretching, and magnetic tweezers. Amongst these approaches, magnetic tweezers have distinguished themselves by their ability to apply torque while maintaining a constant stretching force. Here, it is illustrated how such a "conventional" magnetic tweezers experimental configuration can, through a straightforward modification of its field configuration to minimize the magnitude of the transverse field, be adapted to measure the degree of twist in a biological molecule. The resulting configuration is termed the freely-orbiting magnetic tweezers. Additionally, it is shown how further modification of the field configuration can yield a transverse field with a magnitude intermediate between that of the "conventional" magnetic tweezers and the freely-orbiting magnetic tweezers, which makes it possible to directly measure the torque stored in a biological molecule. This configuration is termed the magnetic torque tweezers. The accompanying video explains in detail how the conversion of conventional magnetic tweezers into freely-orbiting magnetic tweezers and magnetic torque tweezers can be accomplished, and demonstrates the use of these techniques. These adaptations maintain all the strengths of conventional magnetic tweezers while greatly expanding the versatility of this powerful instrument.

  16. Rudolph A. Marcus and His Theory of Electron Transfer Reactions

    Science.gov Websites

    … early 1950s and soon discovered … a strong experimental program at Brookhaven on electron transfer … experimental work provided the first verification of several of the predictions of his theory. This, in turn … Marcus theory, namely, experimental evidence for the so-called "inverted region" where rates …

  17. Experimental analysis of multivariate female choice in gray treefrogs (Hyla versicolor): evidence for directional and stabilizing selection.

    PubMed

    Gerhardt, H Carl; Brooks, Robert

    2009-10-01

    Even simple biological signals vary in several measurable dimensions. Understanding their evolution requires, therefore, a multivariate understanding of selection, including how different properties interact to determine the effectiveness of the signal. We combined experimental manipulation with multivariate selection analysis to assess female mate choice on the simple trilled calls of male gray treefrogs. We independently and randomly varied five behaviorally relevant acoustic properties in 154 synthetic calls. We compared response times of each of 154 females to one of these calls with its response to a standard call that had mean values of the five properties. We found directional and quadratic selection on two properties indicative of the amount of signaling: pulse number and call rate. Canonical rotation of the fitness surface showed that these properties, along with pulse rate, contributed heavily to a major axis of stabilizing selection, a result consistent with univariate studies showing diminishing effects of increasing pulse number well beyond the mean. Spectral properties contributed to a second major axis of stabilizing selection. The single major axis of disruptive selection suggested that a combination of two temporal and two spectral properties with values differing from the mean should be especially attractive.
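The multivariate selection analysis and canonical rotation described here follow a Lande-Arnold-style recipe: regress relative fitness on standardized traits plus their quadratic and cross-product terms, then eigen-decompose the quadratic coefficient matrix so each eigenvector is a canonical axis and each eigenvalue its curvature (negative implies stabilizing, positive disruptive). The data below are synthetic, not the treefrog measurements.

```python
import numpy as np

# Lande-Arnold-style selection analysis on 2 traits with simulated fitness.
rng = np.random.default_rng(2)
n, p = 154, 2
Z = rng.normal(size=(n, p))                 # standardized trait values
# Simulated fitness: directional on trait 0, stabilizing on trait 1
w = 1.0 + 0.4 * Z[:, 0] - 0.3 * Z[:, 1]**2 + 0.1 * rng.normal(size=n)

# Design matrix: [1, z1, z2, z1^2, z2^2, z1*z2]
X = np.column_stack([np.ones(n), Z, Z**2, Z[:, 0] * Z[:, 1]])
coef, *_ = np.linalg.lstsq(X, w, rcond=None)

beta = coef[1:3]                            # directional selection gradients
gamma = np.array([[2 * coef[3], coef[5]],   # quadratic gradient matrix; diagonal
                  [coef[5], 2 * coef[4]]])  # regression coefs are doubled

eigvals, eigvecs = np.linalg.eigh(gamma)    # canonical rotation of the surface
print(beta, eigvals)                        # negative eigenvalue => stabilizing axis
```

The doubling of the diagonal quadratic coefficients when forming gamma is the standard correction for this parameterization; with five traits the same construction yields a 5x5 gamma and five canonical axes.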

  18. The Effectiveness of Asulam for Bracken ( Pteridium aquilinum) Control in the United Kingdom: A Meta-Analysis

    NASA Astrophysics Data System (ADS)

    Stewart, Gavin B.; Pullin, Andrew S.; Tyler, Claire

    2007-11-01

Bracken (Pteridium aquilinum) is a major problem for livestock-based extensive agriculture, conservation, recreation, and game management globally. It is an invasive species often achieving dominance to the detriment of other species. Control is essential to maintain plant communities such as grassland and lowland heath, or if extensive grazing by domestic stock, particularly sheep, is to be viable on upland margins. Bracken is managed primarily by herbicide application or cutting, but other techniques including rolling, burning, and grazing are also utilized. Here we evaluate the evidence regarding the effectiveness of asulam for the control of bracken. Thirteen studies provided data for meta-analyses which demonstrate that application of the herbicide asulam reduces bracken abundance. Subgroup analyses indicate that the number of treatments had an important impact, with multiple follow-up treatments more effective than one or two treatments. Management practices should reflect the requirement for repeated follow-up. There is insufficient available experimental evidence for quantitative analysis of the effectiveness of other management interventions; for cutting, and for comparisons of cutting with asulam application, this reflects a lack of reporting in the underlying papers. Systematic searching and meta-analytical synthesis have effectively demonstrated the limits of current knowledge, based on recorded empirical evidence, and have increased the call for more rigorous monitoring of bracken control techniques. Lack of experimental evidence on the effectiveness of management such as rolling or grazing with hardy cattle breeds contrasts with the widespread acceptance of their use through dissemination of experience.

  19. Biophotonics for imaging and cell manipulation: quo vadis?

    NASA Astrophysics Data System (ADS)

    Serafetinides, Alexandros A.; Makropoulou, Mirsini; Kotsifaki, Domna G.; Tsigaridas, Giorgos

    2016-01-01

Since cancer is one of the major health problems facing mankind, any development toward its early detection and effective treatment is crucial to saving lives. Worldwide, the long-standing goal of anti-cancer efforts is the development of a safe and efficient early diagnosis technique, the so-called "optical biopsy". As early diagnosis of cancer is associated with improved prognosis, several laser-based optical diagnostic methods have been developed to enable earlier, non-invasive detection of human cancer, such as laser-induced fluorescence spectroscopy (LIFS), diffuse reflectance spectroscopy (DRS), confocal microscopy, and optical coherence tomography (OCT). Among them, OCT imaging is considered a useful tool to differentiate healthy from malignant (e.g. basal cell carcinoma, squamous cell carcinoma) skin tissue. If the demand is to perform imaging at the sub-tissular or even sub-cellular level, optical tweezers and atomic force microscopy have enabled the visualization of molecular events underlying cellular processes in live cells, as well as the manipulation and characterization of microscale or even nanoscale biostructures. In this work, we present the latest advances in the field of laser imaging and manipulation techniques, discussing some representative experimental data focusing on the 21st-century biophotonics roadmap of novel diagnostic and therapeutic approaches. As an example of a recently discussed health and environmental problem, we studied both experimentally and theoretically the optical trapping forces exerted on yeast cells, both unmodified and modified with estrogen-like-acting compounds, suspended in various buffer media.

  20. Stabilization of Joule Heating in the Electropyroelectric Method

    NASA Astrophysics Data System (ADS)

    Ivanov, R.; Hernández, M.; Marín, E.; Araujo, C.; Alaniz, D.; Araiza, M.; Martínez-Ordoñez, E. I.

    2012-11-01

Recently the so-called electropyroelectric technique for thermal characterization of liquids has been proposed (Ivanov et al., J. Phys. D: Appl. Phys. 43, 225501 (2010)). In this method a pyroelectric sensor, in good thermal contact with the investigated sample, is heated by passing an amplitude-modulated electrical current through the electrical contacts. As a result of the heat dissipated to the sample, the pyroelectric signal, measured as a voltage drop across the electrical contacts, changes periodically. The amplitude and phase of this signal can be measured by lock-in detection as a function of the electrical current modulation frequency. Because the signal amplitude and phase depend on the thermal properties of the sample, these can be determined straightforwardly by fitting the experimental data to a theoretical model based on the solution of the heat diffusion equation with proper boundary conditions. In general, the experimental conditions are selected so that the thermal effusivity becomes the measured magnitude. The technique has one handicap: as a result of heating and wear, the electrical resistance of the metal coating layers (previously etched into a serpentine form) changes over time, so the heat power dissipated by the Joule effect can vary and the thermal effusivity measurement can become inaccurate. To avoid this problem, in this study a method is proposed that keeps the Joule-dissipated power stable. An electronic circuit is designed whose stability and characteristics are investigated and discussed.
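The lock-in detection step mentioned above — recovering the amplitude and phase of the modulated pyroelectric voltage at the modulation frequency — can be sketched numerically by multiplying the signal with quadrature references and averaging over an integer number of periods. The signal parameters below are invented for illustration:

```python
import numpy as np

fs = 10_000.0          # sampling rate, Hz (hypothetical)
f_mod = 7.0            # modulation frequency, Hz (hypothetical)
t = np.arange(0, 2.0, 1 / fs)   # 2 s = exactly 14 modulation periods

# Simulated pyroelectric voltage: 2.5 mV amplitude, -35 deg phase, plus noise.
A_true, phi_true = 2.5e-3, np.deg2rad(-35)
signal = A_true * np.cos(2 * np.pi * f_mod * t + phi_true)
signal += 1e-4 * np.random.default_rng(1).standard_normal(t.size)

# Lock-in: mix with in-phase and quadrature references, then low-pass
# (here a plain mean over whole periods) to isolate the f_mod component.
X = 2 * np.mean(signal * np.cos(2 * np.pi * f_mod * t))    # in-phase
Y = 2 * np.mean(signal * -np.sin(2 * np.pi * f_mod * t))   # quadrature
amplitude = np.hypot(X, Y)
phase = np.arctan2(Y, X)
print(f"amplitude = {amplitude * 1e3:.2f} mV, phase = {np.degrees(phase):.1f} deg")
```

Sweeping `f_mod` and repeating the measurement yields the amplitude and phase curves that are then fitted to the heat-diffusion model.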

  1. Eddy current characterization of small cracks using least square support vector machine

    NASA Astrophysics Data System (ADS)

    Chelabi, M.; Hacib, T.; Le Bihan, Y.; Ikhlef, N.; Boughedda, H.; Mekideche, M. R.

    2016-04-01

Eddy current (EC) sensors are used for non-destructive testing since they are able to probe conductive materials. Although EC testing is a conventional technique for defect detection and localization, its main weakness is defect characterization: the exact determination of a defect's shape and dimensions is still an open question. In this work, we demonstrate the capability of sizing small cracks using signals acquired from an EC sensor. We report our effort to develop a systematic approach to estimate the size (length and depth) of thin rectangular defects in a conductive plate. The approach combines a finite element method (FEM) with a statistical learning method called least squares support vector machines (LS-SVM). First, we use the FEM to model the forward problem. Next, an algorithm is used to build an adaptive database. Finally, the LS-SVM is used to solve the inverse problem, creating polynomial functions able to approximate the correlation between the crack dimensions and the signal picked up from the EC sensor. Several methods are used to find the parameters of the LS-SVM; in this study, particle swarm optimization (PSO) and a genetic algorithm (GA) are proposed for tuning the LS-SVM. The results of the design and the inversions were compared to both simulated and experimental data, with accuracy experimentally verified. These results prove the applicability of the presented approach.
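The LS-SVM regression at the heart of the inversion reduces to solving a single linear system for the bias and dual weights. A minimal sketch — the RBF kernel, hyperparameters, and the scalar toy forward model standing in for the FEM are all assumptions, not the paper's setup:

```python
import numpy as np

def rbf_kernel(A, B, sigma=0.5):
    """Gaussian RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def lssvm_fit(X, y, gamma=100.0, sigma=0.5):
    """LS-SVM regression: the KKT conditions reduce to one linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                    # bias b, dual weights alpha

def lssvm_predict(Xtr, b, alpha, Xte, sigma=0.5):
    return rbf_kernel(Xte, Xtr, sigma) @ alpha + b

# Hypothetical inverse problem: recover crack depth from a scalar sensor
# response; this monotone toy function stands in for the FEM forward model.
depth = np.linspace(0.1, 1.5, 40)[:, None]    # mm (training database)
signal = np.sin(depth) + 0.3 * depth          # stand-in forward model
b, alpha = lssvm_fit(signal, depth.ravel())
est = lssvm_predict(signal, b, alpha, signal)
print("max training error (mm):", float(np.abs(est - depth.ravel()).max()))
```

In the paper's workflow the hyperparameters (here fixed at `gamma=100`, `sigma=0.5`) are the quantities tuned by PSO or the GA.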

  2. Acoustic Blind Deconvolution and Frequency-Difference Beamforming in Shallow Ocean Environments

    DTIC Science & Technology

    2012-01-01

    acoustic field experiment (FAF06) conducted in July 2006 off the west coast of Italy. Dr. Heechun Song of the Scripps Institution of Oceanography...from seismic surveying and whale calls recorded on a vertical array with 12 elements. The whale call frequencies range from 100 to 500 Hz and the water...underway. Together Ms. Abadi and Dr. Thode had considerable success simulating the experimental environment, deconvolving whale calls, ranging the

  3. Global search in photoelectron diffraction structure determination using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Viana, M. L.; Díez Muiño, R.; Soares, E. A.; Van Hove, M. A.; de Carvalho, V. E.

    2007-11-01

    Photoelectron diffraction (PED) is an experimental technique widely used to perform structural determinations of solid surfaces. Similarly to low-energy electron diffraction (LEED), structural determination by PED requires a fitting procedure between the experimental intensities and theoretical results obtained through simulations. Multiple scattering has been shown to be an effective approach for making such simulations. The quality of the fit can be quantified through the so-called R-factor. Therefore, the fitting procedure is, indeed, an R-factor minimization problem. However, the topography of the R-factor as a function of the structural and non-structural surface parameters to be determined is complex, and the task of finding the global minimum becomes tough, particularly for complex structures in which many parameters have to be adjusted. In this work we investigate the applicability of the genetic algorithm (GA) global optimization method to this problem. The GA is based on the evolution of species, and makes use of concepts such as crossover, elitism and mutation to perform the search. We show results of its application in the structural determination of three different systems: the Cu(111) surface through the use of energy-scanned experimental curves; the Ag(110)-c(2 × 2)-Sb system, in which a theory-theory fit was performed; and the Ag(111) surface for which angle-scanned experimental curves were used. We conclude that the GA is a highly efficient method to search for global minima in the optimization of the parameters that best fit the experimental photoelectron diffraction intensities to the theoretical ones.
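The GA search described above — populations evolved by crossover, elitism, and mutation toward the global R-factor minimum — can be sketched as follows. The synthetic R-factor surface and all GA settings below are invented for illustration; a real application would evaluate the misfit between experimental and multiple-scattering-simulated intensities:

```python
import numpy as np

rng = np.random.default_rng(3)

def r_factor(params):
    """Stand-in R-factor: a synthetic misfit surface with its global
    minimum at (1.2, 0.4), playing the role of the true structure."""
    return np.sum((params - np.array([1.2, 0.4])) ** 2, axis=-1)

def genetic_search(lo, hi, pop_size=40, generations=60, elite=4, mut_rate=0.2):
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(generations):
        pop = pop[np.argsort(r_factor(pop))]           # rank by R-factor
        children = [pop[:elite]]                       # elitism: keep the best
        while sum(len(c) for c in children) < pop_size:
            i, j = rng.integers(0, pop_size // 2, size=2)  # parents: best half
            mask = rng.random(pop.shape[1]) < 0.5          # uniform crossover
            child = np.where(mask, pop[i], pop[j])
            if rng.random() < mut_rate:                    # Gaussian mutation
                child = child + rng.normal(0.0, 0.05, size=child.shape)
            children.append(child[None, :])
        pop = np.vstack(children)[:pop_size]
    best = pop[np.argmin(r_factor(pop))]
    return best, float(r_factor(best))

best, best_r = genetic_search(np.array([0.0, 0.0]), np.array([2.0, 1.0]))
print("best parameters:", np.round(best, 3), " R-factor:", round(best_r, 4))
```

The appeal for PED is that nothing here requires gradients of the R-factor, only repeated forward simulations, which is why the GA copes with the rugged multi-minimum topography described in the abstract.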

  4. Terrestrial Radiodetermination Performance and Cost

    DOT National Transportation Integrated Search

    1977-09-01

    The report summarizes information gathered during a study of the application of electronic techniques to geographical position determination on land and on inland waterways. Systems incorporating such techniques have been called terrestrial radiodete...

  5. von Neumann's Law: Theoretical and Microgravity Experimental Comparison for Coarsening Diffusion in Bubble Lattices

    NASA Technical Reports Server (NTRS)

    Noever, David A.

    2000-01-01

The effects of gravity on the theoretical limit for bubble lattice coarsening and aging behavior, otherwise called von Neumann's law, are examined theoretically and experimentally. Preliminary microgravity results will be discussed.
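von Neumann's law states that in a two-dimensional bubble lattice a cell's area changes at a rate proportional to its number of sides minus six, dA/dt = κ(n − 6), independent of the cell's size or shape. A minimal numerical sketch (the constant κ, time step, and toy lattice are arbitrary choices):

```python
import numpy as np

kappa = 0.01           # diffusive coarsening constant (arbitrary units)
dt, steps = 0.1, 200

# Three bubbles with 5, 6, and 7 sides; von Neumann: dA/dt = kappa * (n - 6).
areas = np.array([1.0, 1.0, 1.0])
sides = np.array([5, 6, 7])

for _ in range(steps):
    areas = areas + dt * kappa * (sides - 6)
    areas = np.clip(areas, 0.0, None)   # cells with fewer than 6 sides vanish

print(areas)   # 5-sided bubble shrinks, 6-sided is stationary, 7-sided grows
```

This is the behavior whose gravity-dependence the abstract addresses: buoyancy-driven drainage perturbs the purely diffusive rate, which microgravity experiments remove.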

  6. Experimental Drug Metarrestin Targets Metastatic Tumors

    Cancer.gov

An experimental drug called metarrestin appears to selectively target tumors that have spread to other parts of the body. As this Cancer Currents blog post reports, the drug shrank metastatic tumors and extended survival in mouse models of pancreatic cancer.

  7. Bat echolocation calls facilitate social communication

    PubMed Central

    Knörnschild, Mirjam; Jung, Kirsten; Nagy, Martina; Metz, Markus; Kalko, Elisabeth

    2012-01-01

    Bat echolocation is primarily used for orientation and foraging but also holds great potential for social communication. The communicative function of echolocation calls is still largely unstudied, especially in the wild. Eavesdropping on vocal signatures encoding social information in echolocation calls has not, to our knowledge, been studied in free-living bats so far. We analysed echolocation calls of the polygynous bat Saccopteryx bilineata and found pronounced vocal signatures encoding sex and individual identity. We showed experimentally that free-living males discriminate approaching male and female conspecifics solely based on their echolocation calls. Males always produced aggressive vocalizations when hearing male echolocation calls and courtship vocalizations when hearing female echolocation calls; hence, they responded with complex social vocalizations in the appropriate social context. Our study demonstrates that social information encoded in bat echolocation calls plays a crucial and hitherto underestimated role for eavesdropping conspecifics and thus facilitates social communication in a highly mobile nocturnal mammal. PMID:23034703

  8. Bat echolocation calls facilitate social communication.

    PubMed

    Knörnschild, Mirjam; Jung, Kirsten; Nagy, Martina; Metz, Markus; Kalko, Elisabeth

    2012-12-07

    Bat echolocation is primarily used for orientation and foraging but also holds great potential for social communication. The communicative function of echolocation calls is still largely unstudied, especially in the wild. Eavesdropping on vocal signatures encoding social information in echolocation calls has not, to our knowledge, been studied in free-living bats so far. We analysed echolocation calls of the polygynous bat Saccopteryx bilineata and found pronounced vocal signatures encoding sex and individual identity. We showed experimentally that free-living males discriminate approaching male and female conspecifics solely based on their echolocation calls. Males always produced aggressive vocalizations when hearing male echolocation calls and courtship vocalizations when hearing female echolocation calls; hence, they responded with complex social vocalizations in the appropriate social context. Our study demonstrates that social information encoded in bat echolocation calls plays a crucial and hitherto underestimated role for eavesdropping conspecifics and thus facilitates social communication in a highly mobile nocturnal mammal.

  9. Local anaesthesia through the action of cocaine, the oral mucosa and the Vienna group.

    PubMed

    López-Valverde, A; de Vicente, J; Martínez-Domínguez, L; de Diego, R Gómez

    2014-07-11

Local anaesthesia through the action of cocaine was introduced in Europe by the Vienna group, which included Freud, Koller and Königstein. Before using the alkaloid in animal or human experimentation, all of these scientists tested it on their own oral mucosa - so-called self-experimentation. Some of them (Freud, most notably) eventually became addicted to the alkaloid. Here we attempt to describe the people forming the so-called 'Vienna group', their social milieu, their experiences and the internal disputes within the setting of a revolutionary discovery of the times.

  10. How experimentally to detect a solitary superconductivity in dirty ferromagnet-superconductor trilayers?

    NASA Astrophysics Data System (ADS)

    Avdeev, Maxim V.; Proshin, Yurii N.

    2017-10-01

We theoretically study the proximity effect in thin-film layered ferromagnet (F) - superconductor (S) heterostructures in the F1F2S design. We consider the boundary value problem for the Usadel-like equations in the so-called "dirty" limit. The "latent" superconducting pairing interaction in the F layers is taken into account. The focus is on a recipe for experimentally preparing the state with so-called solitary superconductivity. We also propose and discuss a model of the superconducting spin valve based on F1F2S trilayers in the solitary superconductivity regime.

  11. Single-molecule study of the DNA denaturation phase transition in the force-torsion space.

    PubMed

    Salerno, D; Tempestini, A; Mai, I; Brogioli, D; Ziano, R; Cassina, V; Mantegazza, F

    2012-09-14

    We use the "magnetic tweezers" technique to show the structural transitions that the DNA undergoes in the force-torsion space. In particular, we focus on the regions corresponding to negative supercoiling. These regions are characterized by the formation of the so-called denaturation bubbles, which play an essential role in the replication and transcription of DNA. We experimentally map the region of the force-torsion space where the denaturation takes place. We observe that large fluctuations in DNA extension occur at one of the boundaries of this region, i.e., when the formation of denaturation bubbles and of plectonemes compete. To describe the experiments, we introduce a suitable extension of the classical model. The model correctly describes the position of the denaturation regions, the transition boundaries, and the measured values of the DNA extension fluctuations.

  12. Artificial Neural Identification and LMI Transformation for Model Reduction-Based Control of the Buck Switch-Mode Regulator

    NASA Astrophysics Data System (ADS)

    Al-Rabadi, Anas N.

    2009-10-01

This research introduces a new method of intelligent control for the Buck converter using a newly developed small-signal model of the pulse width modulation (PWM) switch. The new method uses a supervised neural network to estimate certain parameters of the transformed system matrix [Ã]. Then, a numerical technique used in robust control, called linear matrix inequality (LMI) optimization, is used to determine the permutation matrix [P] so that a complete system transformation {[B˜], [C˜], [Ẽ]} is possible. The transformed model is then reduced using the method of singular perturbation, and state feedback control is applied to enhance system performance. The experimental results show that the new control methodology simplifies the model of the Buck converter and thus uses a simpler controller that produces the desired system response for performance enhancement.

  13. Membership-degree preserving discriminant analysis with applications to face recognition.

    PubMed

    Yang, Zhangjing; Liu, Chuancai; Huang, Pu; Qian, Jianjun

    2013-01-01

    In pattern recognition, feature extraction techniques have been widely employed to reduce the dimensionality of high-dimensional data. In this paper, we propose a novel feature extraction algorithm called membership-degree preserving discriminant analysis (MPDA) based on the fisher criterion and fuzzy set theory for face recognition. In the proposed algorithm, the membership degree of each sample to particular classes is firstly calculated by the fuzzy k-nearest neighbor (FKNN) algorithm to characterize the similarity between each sample and class centers, and then the membership degree is incorporated into the definition of the between-class scatter and the within-class scatter. The feature extraction criterion via maximizing the ratio of the between-class scatter to the within-class scatter is applied. Experimental results on the ORL, Yale, and FERET face databases demonstrate the effectiveness of the proposed algorithm.
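The first step of MPDA — assigning each sample a membership degree to every class with the fuzzy k-nearest-neighbor rule — can be sketched as follows. A Keller-style 0.51/0.49 weighting scheme and a toy two-class dataset are assumed here for illustration; the paper's exact weighting may differ:

```python
import numpy as np

def fuzzy_memberships(X, y, k=3, n_classes=None):
    """Fuzzy k-nearest-neighbor membership degrees (Keller-style scheme):
    each sample gets a soft membership to every class based on the class
    composition of its k nearest neighbors (excluding itself), biased
    toward its own labeled class."""
    n = len(X)
    if n_classes is None:
        n_classes = int(y.max()) + 1
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)            # exclude each sample itself
    U = np.zeros((n, n_classes))
    for i in range(n):
        nn = np.argsort(D[i])[:k]
        counts = np.bincount(y[nn], minlength=n_classes) / k
        U[i] = 0.49 * counts
        U[i, y[i]] += 0.51                 # bias toward the labeled class
    return U

# Tiny 2-class toy set (stands in for face-image feature vectors).
X = np.array([[0.0, 0], [0.1, 0], [0.2, 0], [5.0, 0], [5.1, 0], [5.2, 0]])
y = np.array([0, 0, 0, 1, 1, 1])
U = fuzzy_memberships(X, y, k=3)
print(np.round(U, 2))
```

The resulting matrix U then weights each sample's contribution when the between-class and within-class scatter matrices are formed, so samples near class boundaries count less toward their nominal class.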

  14. Atomic Detail Visualization of Photosynthetic Membranes with GPU-Accelerated Ray Tracing

    PubMed Central

    Vandivort, Kirby L.; Barragan, Angela; Singharoy, Abhishek; Teo, Ivan; Ribeiro, João V.; Isralewitz, Barry; Liu, Bo; Goh, Boon Chong; Phillips, James C.; MacGregor-Chatwin, Craig; Johnson, Matthew P.; Kourkoutis, Lena F.; Hunter, C. Neil

    2016-01-01

    The cellular process responsible for providing energy for most life on Earth, namely photosynthetic light-harvesting, requires the cooperation of hundreds of proteins across an organelle, involving length and time scales spanning several orders of magnitude over quantum and classical regimes. Simulation and visualization of this fundamental energy conversion process pose many unique methodological and computational challenges. We present, in two accompanying movies, light-harvesting in the photosynthetic apparatus found in purple bacteria, the so-called chromatophore. The movies are the culmination of three decades of modeling efforts, featuring the collaboration of theoretical, experimental, and computational scientists. We describe the techniques that were used to build, simulate, analyze, and visualize the structures shown in the movies, and we highlight cases where scientific needs spurred the development of new parallel algorithms that efficiently harness GPU accelerators and petascale computers. PMID:27274603

  15. Gold rush - A swarm dynamics in games

    NASA Astrophysics Data System (ADS)

    Zelinka, Ivan; Bukacek, Michal

    2017-07-01

This paper is focused on swarm intelligence techniques and their practical use in computer games. The aim is to show how swarm dynamics can be generated by a multiplayer game, then recorded, analyzed and eventually controlled. In this paper we also discuss the possibility of using swarm intelligence in place of game players. Based on our previous experiments, two games using swarm algorithms are mentioned briefly here: the strategy game StarCraft: Brood War, and TicTacToe, in which the SOMA algorithm has also taken the role of a player against a human player. The open research reported here has shown the potential benefit of swarm computation in the field of strategy games and of player strategies based on recording and analyzing swarm behavior. We propose a new game called Gold Rush as an experimental environment for human or artificial swarm behavior and consequent analysis.

  16. A variable resolution x-ray detector for computed tomography: II. Imaging theory and performance.

    PubMed

    DiBianca, F A; Zou, P; Jordan, L M; Laughter, J S; Zeman, H D; Sebes, J

    2000-08-01

A computed tomography (CT) imaging technique called variable resolution x-ray (VRX) detection provides variable image resolution ranging from that of clinical body scanning (1 cy/mm) to that of microscopy (100 cy/mm). In this paper, an experimental VRX CT scanner based on a rotating subject table and an angulated storage phosphor screen detector is described and tested. The measured projection resolution of the scanner is ≥ 20 lp/mm. Using this scanner, 4.8-s CT scans are made of specimens of human extremities and of in vivo hamsters. In addition, the system's projected spatial resolution is calculated to exceed 100 cy/mm for a future on-line CT scanner incorporating smaller focal spots (0.1 mm) than those currently used and a 1008-channel VRX detector with 0.6-mm cell spacing.

  17. Leap-dynamics: efficient sampling of conformational space of proteins and peptides in solution.

    PubMed

    Kleinjung, J; Bayley, P; Fraternali, F

    2000-03-31

    A molecular simulation scheme, called Leap-dynamics, that provides efficient sampling of protein conformational space in solution is presented. The scheme is a combined approach using a fast sampling method, imposing conformational 'leaps' to force the system over energy barriers, and molecular dynamics (MD) for refinement. The presence of solvent is approximated by a potential of mean force depending on the solvent accessible surface area. The method has been successfully applied to N-acetyl-L-alanine-N-methylamide (alanine dipeptide), sampling experimentally observed conformations inaccessible to MD alone under the chosen conditions. The method predicts correctly the increased partial flexibility of the mutant Y35G compared to native bovine pancreatic trypsin inhibitor. In particular, the improvement over MD consists of the detection of conformational flexibility that corresponds closely to slow motions identified by nuclear magnetic resonance techniques.

  18. Identification of PAH Isomeric Structure in Cosmic Dust Analogs: The AROMA Setup

    NASA Astrophysics Data System (ADS)

    Sabbah, Hassan; Bonnamy, Anthony; Papanastasiou, Dimitris; Cernicharo, Jose; Martín-Gago, Jose-Angel; Joblin, Christine

    2017-07-01

We developed a new analytical experimental setup called AROMA (Astrochemistry Research of Organics with Molecular Analyzer) that combines laser desorption/ionization techniques with ion trap mass spectrometry. We report here on the ability of the apparatus to detect aromatic species in complex materials of astrophysical interest and characterize their structures. A limit of detection of 100 femtograms has been achieved using pure polycyclic aromatic hydrocarbon (PAH) samples, which corresponds to 2 × 10^8 molecules in the case of coronene (C24H12). We detected the PAH distribution in the Murchison meteorite, which is made of a complex mixture of extraterrestrial organic compounds. In addition, collision induced dissociation experiments were performed on selected species detected in Murchison, which led to the first firm identification of pyrene and its methylated derivatives in this sample.

  19. Resolving and quantifying overlapped chromatographic bands by transmutation

    PubMed

    Malinowski

    2000-09-15

    A new chemometric technique called "transmutation" is developed for the purpose of sharpening overlapped chromatographic bands in order to quantify the components. The "transmutation function" is created from the chromatogram of the pure component of interest, obtained from the same instrument, operating under the same experimental conditions used to record the unresolved chromatogram of the sample mixture. The method is used to quantify mixtures containing toluene, ethylbenzene, m-xylene, naphthalene, and biphenyl from unresolved chromatograms previously reported. The results are compared to those obtained using window factor analysis, rank annihilation factor analysis, and matrix regression analysis. Unlike the latter methods, the transmutation method is not restricted to two-dimensional arrays of data, such as those obtained from HPLC/DAD, but is also applicable to chromatograms obtained from single detector experiments. Limitations of the method are discussed.

  20. Single-Molecule Study of the DNA Denaturation Phase Transition in the Force-Torsion Space

    NASA Astrophysics Data System (ADS)

    Salerno, D.; Tempestini, A.; Mai, I.; Brogioli, D.; Ziano, R.; Cassina, V.; Mantegazza, F.

    2012-09-01

    We use the “magnetic tweezers” technique to show the structural transitions that the DNA undergoes in the force-torsion space. In particular, we focus on the regions corresponding to negative supercoiling. These regions are characterized by the formation of the so-called denaturation bubbles, which play an essential role in the replication and transcription of DNA. We experimentally map the region of the force-torsion space where the denaturation takes place. We observe that large fluctuations in DNA extension occur at one of the boundaries of this region, i.e., when the formation of denaturation bubbles and of plectonemes compete. To describe the experiments, we introduce a suitable extension of the classical model. The model correctly describes the position of the denaturation regions, the transition boundaries, and the measured values of the DNA extension fluctuations.

  1. Nuthatches eavesdrop on variations in heterospecific chickadee mobbing alarm calls

    PubMed Central

    Templeton, Christopher N.; Greene, Erick

    2007-01-01

    Many animals recognize the alarm calls produced by other species, but the amount of information they glean from these eavesdropped signals is unknown. We previously showed that black-capped chickadees (Poecile atricapillus) have a sophisticated alarm call system in which they encode complex information about the size and risk of potential predators in variations of a single type of mobbing alarm call. Here we show experimentally that red-breasted nuthatches (Sitta canadensis) respond appropriately to subtle variations of these heterospecific “chick-a-dee” alarm calls, thereby evidencing that they have gained important information about potential predators in their environment. This study demonstrates a previously unsuspected level of discrimination in intertaxon eavesdropping. PMID:17372225

  2. Nuthatches eavesdrop on variations in heterospecific chickadee mobbing alarm calls.

    PubMed

    Templeton, Christopher N; Greene, Erick

    2007-03-27

    Many animals recognize the alarm calls produced by other species, but the amount of information they glean from these eavesdropped signals is unknown. We previously showed that black-capped chickadees (Poecile atricapillus) have a sophisticated alarm call system in which they encode complex information about the size and risk of potential predators in variations of a single type of mobbing alarm call. Here we show experimentally that red-breasted nuthatches (Sitta canadensis) respond appropriately to subtle variations of these heterospecific "chick-a-dee" alarm calls, thereby evidencing that they have gained important information about potential predators in their environment. This study demonstrates a previously unsuspected level of discrimination in intertaxon eavesdropping.

  3. Control algorithms for aerobraking in the Martian atmosphere

    NASA Technical Reports Server (NTRS)

    Ward, Donald T.; Shipley, Buford W., Jr.

    1991-01-01

    The Analytic Predictor Corrector (APC) and Energy Controller (EC) atmospheric guidance concepts were adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. Changes are made to the APC to improve its robustness to density variations. These changes include adaptation of a new exit phase algorithm, an adaptive transition velocity to initiate the exit phase, refinement of the reference dynamic pressure calculation and two improved density estimation techniques. The modified controller with the hybrid density estimation technique is called the Mars Hybrid Predictor Corrector (MHPC), while the modified controller with a polynomial density estimator is called the Mars Predictor Corrector (MPC). A Lyapunov Steepest Descent Controller (LSDC) is adapted to control the vehicle. The LSDC lacked robustness, so a Lyapunov tracking exit phase algorithm is developed to guide the vehicle along a reference trajectory. This algorithm, when using the hybrid density estimation technique to define the reference path, is called the Lyapunov Hybrid Tracking Controller (LHTC). With the polynomial density estimator used to define the reference trajectory, the algorithm is called the Lyapunov Tracking Controller (LTC). These four new controllers are tested using a six degree of freedom computer simulation to evaluate their robustness. The MHPC, MPC, LHTC, and LTC show dramatic improvements in robustness over the APC and EC.

  4. Blind source computer device identification from recorded VoIP calls for forensic investigation.

    PubMed

    Jahanirad, Mehdi; Anuar, Nor Badrul; Wahab, Ainuddin Wahid Abdul

    2017-03-01

VoIP services provide fertile ground for criminal activity, thus identifying the transmitting computer device from a recorded VoIP call may help the forensic investigator reveal useful information. It also proves the authenticity of a call recording submitted to the court as evidence. This paper extended the previous study on the use of recorded VoIP calls for blind source computer device identification. Although initial results were promising, the theoretical reasoning behind them is yet to be found. The study suggested computing the entropy of mel-frequency cepstrum coefficients (entropy-MFCC) from near-silent segments as an intrinsic feature set that captures the device response function due to the tolerances in the electronic components of individual computer devices. By applying the supervised learning techniques of naïve Bayesian, linear logistic regression, neural networks and support vector machines to the entropy-MFCC features, state-of-the-art identification accuracy of near 99.9% has been achieved on different sets of computer devices for both call recording and microphone recording scenarios. Furthermore, unsupervised learning techniques, including simple k-means, expectation-maximization and density-based spatial clustering of applications with noise (DBSCAN), provided promising results for the call recording dataset by assigning the majority of instances to their correct clusters. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
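The entropy-MFCC feature can be sketched as follows, assuming the MFCC matrix has already been extracted from the near-silent segments by an audio front end. The histogram-based entropy estimator and the toy arrays are illustrative choices, not necessarily the paper's exact estimator:

```python
import numpy as np

def entropy_mfcc(mfcc, bins=30):
    """Shannon entropy (bits) of each MFCC coefficient across time frames.
    `mfcc` is an (n_coeffs, n_frames) array, assumed to have been computed
    beforehand from near-silent call segments by an external audio library."""
    feats = np.empty(mfcc.shape[0])
    for i, row in enumerate(mfcc):
        hist, _ = np.histogram(row, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        feats[i] = -(p * np.log2(p)).sum()
    return feats

# Toy check: a coefficient that hugs one value has low entropy; one spread
# uniformly over its range has entropy near log2(bins).
rng = np.random.default_rng(4)
concentrated = np.tile(np.r_[np.zeros(490), np.ones(10)], (13, 1))
uniform = rng.uniform(size=(13, 500))
print(entropy_mfcc(concentrated).mean(), entropy_mfcc(uniform).mean())
```

The resulting per-coefficient entropy vector is the fixed-length feature that is then fed to the supervised classifiers (naïve Bayes, logistic regression, SVM) or clustered by the unsupervised methods.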

  5. Terrestrial Radiodetermination Potential Users and Their Requirements

    DOT National Transportation Integrated Search

    1976-07-01

    The report summarizes information gathered during a preliminary study of the application of electronic techniques to geographical position determination on land and on inland waterways. Systems incorporating such techniques have been called terrestri...

  6. Interactive algebraic grid-generation technique

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Wiese, M. R.

    1986-01-01

    An algebraic grid generation technique and use of an associated interactive computer program are described. The technique, called the two boundary technique, is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are referred to as the bottom and top, and they are defined by two ordered sets of points. Left and right side boundaries which intersect the bottom and top boundaries may also be specified by two ordered sets of points. When side boundaries are specified, linear blending functions are used to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth-cubic-spline functions is presented. The technique works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. An interactive computer program based on the technique and called TBGG (two boundary grid generation) is also described.
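
    The core of the two boundary technique — Hermite cubic interpolation between a bottom and a top curve — can be sketched as follows. The choice of end tangents here (the straight chord scaled by a tension factor) is an assumption for illustration; the actual TBGG program adds side boundaries, blending functions, and spline-based control functions.

```python
import numpy as np

def two_boundary_grid(bottom, top, n_eta, tension=1.0):
    """Hermite cubic interpolation between two non-intersecting boundary
    curves (a sketch of the two boundary technique, not TBGG itself).
    bottom, top: (n_xi, 2) arrays of ordered boundary points."""
    t = np.linspace(0.0, 1.0, n_eta)[:, None, None]  # eta coordinate
    h00 = 2*t**3 - 3*t**2 + 1                        # Hermite basis
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    # simple tangent choice: the straight chord, scaled by `tension`
    tan0 = tension * (top - bottom)
    tan1 = tension * (top - bottom)
    return h00*bottom + h10*tan0 + h01*top + h11*tan1  # (n_eta, n_xi, 2)

xi = np.linspace(0, 1, 11)
bottom = np.stack([xi, 0.1*np.sin(np.pi*xi)], axis=1)  # curved lower wall
top = np.stack([xi, np.ones_like(xi)], axis=1)         # flat upper wall
grid = two_boundary_grid(bottom, top, 21)
print(grid.shape)   # (21, 11, 2): eta levels conform to both boundaries
```

By construction the first and last eta levels of the grid coincide exactly with the bottom and top boundary point sets.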

  7. Calling depths of baleen whales from single sensor data: development of an autocorrelation method using multipath localization.

    PubMed

    Valtierra, Robert D; Glynn Holt, R; Cholewiak, Danielle; Van Parijs, Sofie M

    2013-09-01

    Multipath localization techniques have not previously been applied to baleen whale vocalizations due to difficulties in application to tonal vocalizations. Here it is shown that an autocorrelation method coupled with the direct reflected time difference of arrival localization technique can successfully resolve location information. A derivation was made to model the autocorrelation of a direct signal and its overlapping reflections to illustrate that an autocorrelation may be used to extract reflection information from longer duration signals containing a frequency sweep, such as some calls produced by baleen whales. An analysis was performed to characterize the difference in behavior of the autocorrelation when applied to call types with varying parameters (sweep rate, call duration). The method's feasibility was tested using data from playback transmissions to localize an acoustic transducer at a known depth and location. The method was then used to estimate the depth and range of a single North Atlantic right whale (Eubalaena glacialis) and humpback whale (Megaptera novaeangliae) from two separate experiments.
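
    The key observation — that the autocorrelation of a frequency-swept call containing an overlapping reflection shows a secondary peak at the direct-reflected time difference — can be sketched with a synthetic upsweep. The sweep parameters and echo strength below are invented for illustration; the paper's full pipeline then converts this delay into depth and range.

```python
import numpy as np

def reflection_delay(x, min_lag=10):
    """Estimate the direct-reflected arrival-time difference (in samples)
    from the secondary peak of the autocorrelation; a sketch of the idea,
    not the full multipath-localization method."""
    r = np.correlate(x, x, mode="full")[len(x)-1:]  # non-negative lags
    return min_lag + int(np.argmax(r[min_lag:]))    # skip the zero-lag lobe

fs = 1000.0
t = np.arange(0, 0.5, 1/fs)
sweep = np.sin(2*np.pi*(50*t + 200*t**2))  # 50->250 Hz upsweep (toy call)
delay = 80                                 # reflected-path delay, samples
x = sweep.copy()
x[delay:] += 0.5 * sweep[:-delay]          # add an attenuated reflection
print(reflection_delay(x))                 # recovers a delay near 80
```

Because the sweep's autocorrelation main lobe is narrow (width roughly the sampling rate divided by the sweep bandwidth), excluding the first few lags is enough to expose the reflection peak.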

  8. Small Molecules-Big Data.

    PubMed

    Császár, Attila G; Furtenbacher, Tibor; Árendás, Péter

    2016-11-17

    Quantum mechanics builds large-scale graphs (networks): the vertices are the discrete energy levels the quantum system possesses, and the edges are the (quantum-mechanically allowed) transitions. Parts of the complete quantum mechanical networks can be probed experimentally via high-resolution, energy-resolved spectroscopic techniques. The complete rovibronic line list information for a given molecule can only be obtained through sophisticated quantum-chemical computations. Experiments as well as computations yield what we call spectroscopic networks (SN). First-principles SNs of even small, three- to five-atom molecules can be huge, qualifying for the big data description. Besides helping to interpret high-resolution spectra, the network-theoretical view offers several ideas for improving the accuracy and robustness of the increasingly important information systems containing line-by-line spectroscopic data. For example, the smallest number of measurements necessary to obtain the complete list of energy levels is given by the minimum-weight spanning tree of the SN, and network clustering studies may call attention to "weakest links" of a spectroscopic database. A present-day application of spectroscopic networks is within the MARVEL (Measured Active Rotational-Vibrational Energy Levels) approach, whereby the transition information of a measured SN is turned into experimental energy levels via a weighted linear least-squares refinement. MARVEL has been applied successfully to 15 molecules, validating most of the measured transitions and yielding energy levels with well-defined and realistic uncertainties. Accurate knowledge of the energy levels with computed transition intensities allows the realistic prediction of spectra under many different circumstances, e.g., for widely different temperatures. Detailed knowledge of the energy level structure of a molecule coming from a MARVEL analysis is important for a considerable number of modeling efforts in chemistry, physics, and engineering.
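
    The weighted linear least-squares step at the heart of the MARVEL idea can be sketched directly: each measured transition contributes one row linking its upper and lower levels, and one level is pinned to zero to make the system solvable. This toy version ignores the component analysis, outlier handling, and uncertainty propagation of the real MARVEL procedure.

```python
import numpy as np

def marvel_levels(transitions, n_levels):
    """Turn measured transition wavenumbers into energy levels by weighted
    linear least squares (a toy sketch of the MARVEL refinement).
    transitions: list of (upper, lower, wavenumber, uncertainty)."""
    A, b, w = [], [], []
    for up, lo, nu, sigma in transitions:
        row = np.zeros(n_levels)
        row[up], row[lo] = 1.0, -1.0    # E_up - E_lo = nu
        A.append(row); b.append(nu); w.append(1.0 / sigma)
    # pin the ground level so the linear system has a unique solution
    row = np.zeros(n_levels); row[0] = 1.0
    A.append(row); b.append(0.0); w.append(1e6)
    A, b, w = np.array(A), np.array(b), np.array(w)
    levels, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return levels

# three levels probed by three consistent transitions (made-up numbers)
obs = [(1, 0, 10.0, 0.01), (2, 1, 15.0, 0.01), (2, 0, 25.0, 0.02)]
print(np.round(marvel_levels(obs, 3), 3))  # approximately [0, 10, 25]
```

When the transitions over-determine the levels, the inverse-uncertainty weights decide how conflicts are resolved, which is exactly why a spanning tree of the SN gives the minimal sufficient measurement set.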

  9. Mission-based Scenario Research: Experimental Design And Analysis

    DTIC Science & Technology

    2012-01-01

    neurotechnologies called Brain-Computer Interaction Technologies. 15. SUBJECT TERMS neuroimaging, EEG, task loading, neurotechnologies, ground... neurotechnologies called Brain-Computer Interaction Technologies. INTRODUCTION Imagine a system that can identify operator fatigue during a long-term... (BCIT), a class of neurotechnologies that aim to improve task performance by incorporating measures of brain activity to optimize the interactions

  10. Intimate Debate Technique: Medicinal Use of Marijuana

    ERIC Educational Resources Information Center

    Herreid, Clyde Freeman; DeRei, Kristie

    2007-01-01

    Classroom debates used to be familiar exercises to students schooled in past generations. In this article, the authors describe the technique called "intimate debate". To cooperative learning specialists, the technique is known as "structured debate" or "constructive debate". It is a powerful method for dealing with case topics that involve…

  11. Digital computer technique for setup and checkout of an analog computer

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.

    1968-01-01

    Computer program technique, called Analog Computer Check-Out Routine Digitally /ACCORD/, generates complete setup and checkout data for an analog computer. In addition, the correctness of the analog program implementation is validated.

  12. Essentials of Suggestopedia: A Primer for Practitioners.

    ERIC Educational Resources Information Center

    Caskey, Owen L.; Flake, Muriel H.

    Suggestology is the scientific study of the psychology of suggestion, and Suggestopedia is the application of relaxation and suggestion techniques to learning. The approach applied to learning processes (called Suggestopedic) developed by Dr. Georgi Lozanov (called the Lozanov Method) utilizes mental and physical relaxation, deep breathing,…

  13. Post-coronagraphic tip-tilt sensing for vortex phase masks: The QACITS technique

    NASA Astrophysics Data System (ADS)

    Huby, E.; Baudoz, P.; Mawet, D.; Absil, O.

    2015-12-01

    Context. Small inner working angle coronagraphs, such as the vortex phase mask, are essential to exploit the full potential of ground-based telescopes in the context of exoplanet detection and characterization. However, the drawback of this attractive feature is a high sensitivity to pointing errors, which degrades the performance of the coronagraph. Aims: We propose a tip-tilt retrieval technique based on the analysis of the final coronagraphic image, hereafter called Quadrant Analysis of Coronagraphic Images for Tip-tilt Sensing (QACITS). Methods: Under the assumption of small phase aberrations, we show that the behavior of the vortex phase mask can be simply described from the entrance pupil to the Lyot stop plane with Zernike polynomials. This convenient formalism is used to establish the theoretical basis of the QACITS technique. We performed simulations to demonstrate the validity and limits of the technique, including the case of a centrally obstructed pupil. Results: The QACITS technique principle is validated with experimental results in the case of an unobstructed circular aperture, as well as simulations in the presence of a central obstruction. The typical configuration of the Keck telescope (24% central obstruction) has been simulated with additional high order aberrations. In these conditions, our simulations show that the QACITS technique is still adapted to centrally obstructed pupils and performs tip-tilt retrieval with a precision of 5 × 10⁻² λ/D when wavefront errors amount to λ/14 rms and 10⁻² λ/D for λ/70 rms errors (with λ the wavelength and D the pupil diameter). Conclusions: We have developed and demonstrated a tip-tilt sensing technique for vortex coronagraphs. The implementation of the QACITS technique is based on the analysis of the scientific image and does not require any modification of the original setup. Current facilities equipped with a vortex phase mask can thus directly benefit from this technique to improve the contrast performance close to the axis.
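
    The quadrant-analysis principle — a differential flux measurement between opposite halves of the image that is, for small offsets, proportional to the pointing error — can be sketched on a toy point-spread function. This illustrates only the generic estimator; the actual QACITS model is derived specifically for the vortex coronagraph response.

```python
import numpy as np

def quadrant_tiptilt(img):
    """Differential-intensity estimator in the spirit of QACITS: compare
    flux in opposite image halves (illustrative sketch only)."""
    total = img.sum()
    ny, nx = img.shape
    dx = (img[:, nx//2:].sum() - img[:, :nx//2].sum()) / total
    dy = (img[ny//2:, :].sum() - img[:ny//2, :].sum()) / total
    return dx, dy

# toy PSF: a Gaussian spot displaced by (+3, -2) pixels from center
y, x = np.mgrid[-32:32, -32:32]
spot = np.exp(-((x - 3.0)**2 + (y + 2.0)**2) / (2 * 4.0**2))
dx, dy = quadrant_tiptilt(spot)
print(round(dx, 3), round(dy, 3))   # signs follow the (+x, -y) offset
```

In a closed loop, these two signals would drive the tip-tilt mirror back toward the null of the coronagraph.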

  14. Techniques in Experimental Mechanics Applicable to Forest Products Research

    Treesearch

    Leslie H. Groom; Audrey G. Zink

    1994-01-01

    The title of this publication-Techniques in Experimental Mechanics Applicable to Forest Products Research-is the theme of this plenary session from the 1994 Annual Meeting of the Forest Products Society (FPS). Although this session focused on experimental techniques that can be of assistance to researchers in the field of forest products, it is hoped that the...

  15. An experimental test of noise-dependent voice amplitude regulation in Cope's grey treefrog (Hyla chrysoscelis).

    PubMed

    Love, Elliot K; Bee, Mark A

    2010-09-01

    One strategy for coping with the constraints on acoustic signal reception posed by ambient noise is to signal louder as noise levels increase. Termed the 'Lombard effect', this reflexive behaviour is widespread among birds and mammals and occurs with a diversity of signal types, leading to the hypothesis that voice amplitude regulation represents a general vertebrate mechanism for coping with environmental noise. Support for this evolutionary hypothesis, however, remains limited due to a lack of studies in taxa other than birds and mammals. Here, we report the results of an experimental test of the hypothesis that male grey treefrogs increase the amplitude of their advertisement calls in response to increasing levels of chorus-shaped noise. We recorded spontaneously produced calls in quiet and in the presence of noise broadcast at sound pressure levels ranging between 40 dB and 70 dB. While increasing noise levels induced predictable changes in call duration and rate, males did not regulate call amplitude. These results do not support the hypothesis that voice amplitude regulation is a generic vertebrate mechanism for coping with noise. We discuss the possibility that intense sexual selection and high levels of competition for mates in choruses place some frogs under strong selection to call consistently as loudly as possible.

  16. Dynamic optimization of distributed biological systems using robust and efficient numerical techniques.

    PubMed

    Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A

    2012-07-02

    Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples would include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems, etc. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large scale nature of the mathematical models related to this class of systems and the presence of constraints on the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced order model methodology is proposed. The capabilities of this strategy are illustrated considering the solution of two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the process of chemotaxis the objective was to efficiently compute the time-varying optimal concentration of chemoattractant at one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of generic distributed biological systems.
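
    Control vector parameterization (CVP) turns the infinite-dimensional control problem into a finite one by representing the control as, e.g., piecewise-constant levels and optimizing those levels against a simulated cost. The sketch below applies the idea to a deliberately simple scalar ODE with a crude finite-difference descent, instead of the paper's PDE models and hybrid global optimizers.

```python
import numpy as np

def simulate(u_params, x0=0.0, T=1.0, steps=100):
    """Euler-integrate dx/dt = -x + u(t) with u piecewise constant
    (the control vector parameterization). Toy dynamics only."""
    dt = T / steps
    x = x0
    for k in range(steps):
        u = u_params[int(k * len(u_params) / steps)]  # active segment
        x += dt * (-x + u)
    return x

def cost(u_params, target=1.0):
    # terminal tracking error plus a small control-effort penalty
    return (simulate(u_params) - target)**2 + 0.01 * np.mean(u_params**2)

u = np.zeros(5)                       # initial guess for 5 control levels
for _ in range(200):                  # crude finite-difference descent
    g = np.array([(cost(u + 1e-4*e) - cost(u)) / 1e-4
                  for e in np.eye(len(u))])
    u -= 0.5 * g
print(round(simulate(u), 3))          # final state, close to the target 1.0
```

Real CVP implementations replace the finite-difference loop with gradient-based NLP solvers or, as in the paper, hybrid global-local methods to escape suboptimal solutions.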

  17. Considerations in detecting CDC select agents under field conditions

    NASA Astrophysics Data System (ADS)

    Spinelli, Charles; Soelberg, Scott; Swanson, Nathaneal; Furlong, Clement; Baker, Paul

    2008-04-01

    Surface Plasmon Resonance (SPR) has become a widely accepted technique for real-time detection of interactions between receptor molecules and ligands. Antibody may serve as receptor and can be attached to the gold surface of the SPR device, while candidate analyte fluids contact the detecting antibody. Minute, but detectable, changes in refractive indices (RI) indicate that analyte has bound to the antibody. A decade ago, an inexpensive, robust, miniature and fully integrated SPR chip, called SPREETA, was developed. University of Washington (UW) researchers subsequently developed a portable, temperature-regulated instrument, called SPIRIT, to simultaneously use eight of these three-channel SPREETA chips. A SPIRIT prototype instrument was tested in the field, coupled to a remote reporting system on a surrogate unmanned aerial vehicle (UAV). Two target protein analytes were released sequentially as aerosols with low analyte concentration during each of three flights and were successfully detected and verified. Laboratory experimentation with a more advanced SPIRIT instrument demonstrated detection of very low levels of several select biological agents that might be employed by bioterrorists. Agent detection under field-like conditions is more challenging, especially as analyte concentrations are reduced and complex matrices are introduced. Two different sample preconditioning protocols have been developed for select agents in complex matrices. Use of these preconditioning techniques has allowed laboratory detection in spiked heavy mud of Francisella tularensis at 10³ CFU/ml, Bacillus anthracis spores at 10³ CFU/ml, Staphylococcal enterotoxin B (SEB) at 1 ng/ml, and Vaccinia virus (a smallpox simulant) at 10⁵ PFU/ml. Ongoing experiments are aimed at simultaneous detection of multiple agents in spiked heavy mud, using a multiplex preconditioning protocol.

  18. The Fernow Experimental Forest and Canaan Valley: A history of research

    Treesearch

    Mary Beth Adams; James N. Kochenderfer

    2015-01-01

    The Fernow Experimental Forest (herein called the Fernow) in Tucker County, WV, was set aside in 1934 for “experimental and demonstration purposes under the direction of the Appalachian Forest Experiment Station” of the US Forest Service. Named after a famous German forester, Bernhard Fernow, the Fernow was initially developed with considerable assistance from the...

  19. ALCF Data Science Program: Productive Data-centric Supercomputing

    NASA Astrophysics Data System (ADS)

    Romero, Nichols; Vishwanath, Venkatram

    The ALCF Data Science Program (ADSP) is targeted at big data science problems that require leadership computing resources. The goal of the program is to explore and improve a variety of computational methods that will enable data-driven discoveries across all scientific disciplines. The projects will focus on data science techniques covering a wide area of discovery including but not limited to uncertainty quantification, statistics, machine learning, deep learning, databases, pattern recognition, image processing, graph analytics, data mining, real-time data analysis, and complex and interactive workflows. Project teams will be among the first to access Theta, ALCF's forthcoming 8.5-petaflops Intel/Cray system. The program will transition to the 200-petaflops Aurora supercomputing system when it becomes available. In 2016, four projects were selected to kick off the ADSP. The selected projects span experimental and computational sciences and range from modeling the brain to discovering new materials for solar-powered windows to simulating collision events at the Large Hadron Collider (LHC). The program will have a regular call for proposals, with the next call expected in Spring 2017. http://www.alcf.anl.gov/alcf-data-science-program This research used resources of the ALCF, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.

  20. A new approach to characterize very-low-level radioactive waste produced at hadron accelerators.

    PubMed

    Zaffora, Biagio; Magistris, Matteo; Chevalier, Jean-Pierre; Luccioni, Catherine; Saporta, Gilbert; Ulrici, Luisa

    2017-04-01

    Radioactive waste is produced as a consequence of preventive and corrective maintenance during the operation of high-energy particle accelerators or associated dismantling campaigns. Its radiological characterization must be performed to ensure appropriate disposal in dedicated facilities. The radiological characterization of waste includes the establishment of the list of produced radionuclides, called the "radionuclide inventory", and the estimation of their activity. The present paper describes the process adopted at CERN to characterize very-low-level radioactive waste, with a focus on activated metals. The characterization method consists of measuring and estimating the activity of produced radionuclides either by experimental methods or by statistical and numerical approaches. We adapted the so-called Scaling Factor (SF) and Correlation Factor (CF) techniques to the needs of hadron accelerators, and applied them to very-low-level metallic waste produced at CERN. For each type of metal we calculated the radionuclide inventory and identified the radionuclides that contribute most to hazard factors. The methodology proposed is of general validity, can be extended to other activated materials and can be used for the characterization of waste produced in particle accelerators and research centres, where the activation mechanisms are comparable to the ones occurring at CERN. Copyright © 2017 Elsevier Ltd. All rights reserved.
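
    One common way a Scaling Factor is implemented is as the geometric mean of measured activity ratios between a difficult-to-measure (DTM) nuclide and an easy-to-measure gamma-emitting key nuclide; the DTM activity of a new item is then estimated from its gamma measurement alone. The nuclides and numbers below are invented for illustration, and CERN's actual procedure is more elaborate.

```python
import numpy as np

def scaling_factor(dtm_activities, key_activities):
    """Geometric-mean scaling factor between a difficult-to-measure
    nuclide and an easy-to-measure key nuclide (illustrative sketch)."""
    ratios = np.asarray(dtm_activities) / np.asarray(key_activities)
    return np.exp(np.mean(np.log(ratios)))

# sampled items where both nuclides could be measured (made-up data)
ni63 = np.array([8.0, 12.0, 9.5, 11.0])   # Bq/g, difficult to measure
co60 = np.array([4.1, 5.8, 4.9, 5.4])     # Bq/g, gamma-measurable
sf = scaling_factor(ni63, co60)
# estimate the DTM activity of a new item from its gamma measurement alone
print(round(sf * 5.0, 2))
```

The geometric mean is preferred over the arithmetic mean here because activity ratios tend to be log-normally distributed.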

  1. The Next Frontier: Quantitative Biochemistry in Living Cells.

    PubMed

    Honigmann, Alf; Nadler, André

    2018-01-09

    Researchers striving to convert biology into an exact science foremost rely on structural biology and biochemical reconstitution approaches to obtain quantitative data. However, cell biological research is moving at an ever-accelerating speed into areas where these approaches lose much of their edge. Intrinsically unstructured proteins and biochemical interaction networks composed of interchangeable, multivalent, and unspecific interactions pose unique challenges to quantitative biology, as do processes that occur in discrete cellular microenvironments. Here we argue that a conceptual change in our way of conducting biochemical experiments is required to take on these new challenges. We propose that reconstitution of cellular processes in vitro should be much more focused on mimicking the cellular environment in vivo, an approach that requires detailed knowledge of the material properties of cellular compartments, essentially requiring a material science of the cell. In a similar vein, we suggest that quantitative biochemical experiments in vitro should be accompanied by corresponding experiments in vivo, as many newly relevant cellular processes are highly context-dependent. In essence, this constitutes a call for chemical biologists to convert their discipline from a proof-of-principle science to an area that could rightfully be called quantitative biochemistry in living cells. In this essay, we discuss novel techniques and experimental strategies with regard to their potential to fulfill such ambitious aims.

  2. Determination of the optimal number of components in independent components analysis.

    PubMed

    Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N

    2018-03-01

    Independent components analysis (ICA) may be considered one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like similar methods, the determination of the optimal number of latent variables, in this case independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide on the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals, is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expectations, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
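
    The correlation step underlying ICA_corr_y — correlate each extracted component with a signal known to be present in the mixture, and keep the number of ICs for which that correlation is recovered — can be sketched with a bare-bones symmetric FastICA. The minimal FastICA below (tanh contrast, symmetric decorrelation) is only a stand-in for a full ICA package; the paper's validation methods sit on top of whatever ICA implementation is used.

```python
import numpy as np

def fastica(X, n, iters=200, seed=0):
    """Minimal symmetric FastICA on (samples, features) data (sketch)."""
    X = X - X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(d)[::-1][:n]
    K = E[:, order] / np.sqrt(d[order])      # whitening matrix
    Z = X @ K
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, n))
    for _ in range(iters):
        G = np.tanh(Z @ W.T)
        W_new = (G.T @ Z) / len(Z) - np.diag((1 - G**2).mean(axis=0)) @ W
        U, _, Vt = np.linalg.svd(W_new)      # symmetric decorrelation
        W = U @ Vt
    return Z @ W.T                           # extracted components

rng = np.random.default_rng(1)
t = np.linspace(0, 8*np.pi, 4000)
s1, s2 = np.sin(t), np.sign(np.sin(1.7*t))       # two independent sources
X = np.c_[s1, s2] @ rng.standard_normal((2, 5))  # five observed mixtures
S = fastica(X, n=2)
# the ICA_corr_y step: correlate extracted components with a known signal
corr = [abs(np.corrcoef(s, s1)[0, 1]) for s in S.T]
print(round(max(corr), 2))   # one component should track the known source
```

Repeating this for increasing n and watching where the correlation (or, in Random_ICA, the between-block component agreement) stops improving gives the optimal number of ICs.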

  3. Neurovascular Network Explorer 2.0: A Simple Tool for Exploring and Sharing a Database of Optogenetically-evoked Vasomotion in Mouse Cortex In Vivo.

    PubMed

    Uhlirova, Hana; Tian, Peifang; Kılıç, Kıvılcım; Thunemann, Martin; Sridhar, Vishnu B; Chmelik, Radim; Bartsch, Hauke; Dale, Anders M; Devor, Anna; Saisan, Payam A

    2018-05-04

    The importance of sharing experimental data in neuroscience grows with the amount and complexity of data acquired and the various techniques used to obtain and process these data. However, the majority of experimental data, especially from individual studies in regular-sized laboratories, never reaches the wider research community. A graphical user interface (GUI) engine called Neurovascular Network Explorer 2.0 (NNE 2.0) has been created as a tool for simple and low-cost sharing and exploring of vascular imaging data. NNE 2.0 interacts with a database containing optogenetically-evoked dilation/constriction time-courses of individual vessels measured in mouse somatosensory cortex in vivo by 2-photon microscopy. NNE 2.0 enables selection and display of the time-courses based on different criteria (subject, branching order, cortical depth, vessel diameter, arteriolar tree) as well as simple mathematical manipulation (e.g. averaging, peak-normalization) and data export. It supports visualization of the vascular network in 3D and enables localization of the individual functional vessel diameter measurements within vascular trees. NNE 2.0, its source code, and the corresponding database are freely downloadable from the UCSD Neurovascular Imaging Laboratory website. The source code can be utilized by users to explore the associated database or as a template for databasing and sharing their own experimental results, provided the appropriate format.

  4. Experimental design and Bayesian networks for enhancement of delta-endotoxin production by Bacillus thuringiensis.

    PubMed

    Ennouri, Karim; Ayed, Rayda Ben; Hassen, Hanen Ben; Mazzarello, Maura; Ottaviani, Ennio

    2015-12-01

    Bacillus thuringiensis (Bt) is a Gram-positive bacterium. The entomopathogenic activity of Bt is related to the existence of the crystal consisting of protoxins, also called delta-endotoxins. In order to optimize and explain the production of delta-endotoxins of Bacillus thuringiensis kurstaki, we studied seven medium components: soybean meal, starch, KH₂PO₄, K₂HPO₄, FeSO₄, MnSO₄, and MgSO₄, and their relationships with the concentration of delta-endotoxins using an experimental design (Plackett-Burman design) and Bayesian networks modelling. The effects of the ingredients of the culture medium on delta-endotoxins production were estimated. The developed model showed that different medium components are important for the Bacillus thuringiensis fermentation. The most important factors influencing the production of delta-endotoxins are FeSO₄, K₂HPO₄, starch and soybean meal. Indeed, it was found that soybean meal, K₂HPO₄, KH₂PO₄ and starch showed a positive effect on delta-endotoxins production, whereas FeSO₄ and MnSO₄ had the opposite effect. The developed model, based on Bayesian techniques, can automatically learn emerging models in data to serve in the prediction of delta-endotoxins concentrations. The constructed model in the present study implies that experimental design (Plackett-Burman design) joined with the Bayesian networks method could be used for identification of the effects of variables on delta-endotoxins variation.
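
    A Plackett-Burman screening design for up to seven two-level factors needs only eight runs; the design matrix is built by cyclically shifting a standard generator row and appending an all-minus run. The sketch below constructs that matrix and computes main-effect contrasts on fake yields; it illustrates the design construction only, not the paper's data or its Bayesian network model.

```python
import numpy as np

def plackett_burman_8():
    """8-run Plackett-Burman design for up to 7 two-level factors,
    built from the standard generator row by cyclic shifts plus an
    all-minus run (textbook construction)."""
    gen = np.array([1, 1, 1, -1, 1, -1, -1])
    rows = [np.roll(gen, k) for k in range(7)]
    rows.append(-np.ones(7, dtype=int))
    return np.array(rows)

X = plackett_burman_8()
# a main effect is the contrast between the + and - runs of that factor;
# the fake responses below stand in for measured delta-endotoxin yields
y = np.array([30., 42., 35., 28., 44., 27., 25., 20.])
effects = X.T @ y / 4.0
print(np.round(effects, 2))
```

Because the columns are mutually orthogonal (XᵀX = 8·I), each main effect is estimated independently of the other six factors.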

  5. Identification of microRNAs with regulatory potential using a matched microRNA-mRNA time-course data.

    PubMed

    Jayaswal, Vivek; Lutherborrow, Mark; Ma, David D F; Hwa Yang, Yee

    2009-05-01

    Over the past decade, a class of small RNA molecules called microRNAs (miRNAs) has been shown to regulate gene expression at the post-transcriptional stage. While early work focused on the identification of miRNAs using a combination of experimental and computational techniques, subsequent studies have focused on the identification of miRNA-target mRNA pairs, as each miRNA can have hundreds of mRNA targets. The experimental validation of some miRNAs as oncogenic has provided further motivation for research in this area. In this article we propose an odds-ratio (OR) statistic for the identification of regulatory miRNAs. It is based on integrative analysis of matched miRNA and mRNA time-course microarray data. The OR-statistic was used for (i) identification of miRNAs with regulatory potential, (ii) identification of miRNA-target mRNA pairs and (iii) identification of time lags between changes in miRNA expression and those of its target mRNAs. We applied the OR-statistic to a cancer data set and identified a small set of miRNAs that were negatively correlated with mRNAs. A literature survey revealed that some of the miRNAs that were predicted to be regulatory were indeed oncogenic or tumor suppressors. Finally, some of the predicted miRNA targets have been shown to be experimentally valid.
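
    One ingredient of such an analysis — finding the time lag at which a miRNA profile is most negatively correlated with a candidate target mRNA profile — can be sketched directly on matched time courses. This is only the lag-search idea; the OR-statistic itself is defined in the paper, and the data below are synthetic.

```python
import numpy as np

def best_negative_lag(mirna, mrna, max_lag=5):
    """Return (correlation, lag) for the lag at which the miRNA series is
    most negatively correlated with the mRNA series (illustrative sketch)."""
    best = (0.0, None)
    for lag in range(max_lag + 1):
        a = mirna[:len(mirna) - lag]
        b = mrna[lag:]
        r = np.corrcoef(a, b)[0, 1]
        if best[1] is None or r < best[0]:
            best = (r, lag)
    return best

rng = np.random.default_rng(0)
mi = np.cumsum(rng.standard_normal(50))     # synthetic miRNA time course
mr = np.empty(50)
mr[2:] = -mi[:-2]                           # repression two steps later
mr[:2] = rng.standard_normal(2)
mr += 0.05 * rng.standard_normal(50)        # measurement noise
r, lag = best_negative_lag(mi, mr)
print(lag, round(r, 2))                     # recovers the 2-step lag
```

A genome-wide screen would apply this to every candidate miRNA-mRNA pair and rank the pairs by the strength of the negative association.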

  6. Non-song social call bouts of migrating humpback whales

    PubMed Central

    Rekdahl, Melinda L.; Dunlop, Rebecca A.; Goldizen, Anne W.; Garland, Ellen C.; Biassoni, Nicoletta; Miller, Patrick; Noad, Michael J.

    2015-01-01

    The use of stereotyped calls within structured bouts has been described for a number of species and may increase the information potential of call repertoires. Humpback whales produce a repertoire of social calls, although little is known about the complexity or function of these calls. In this study, digital acoustic tag recordings were used to investigate social call use within bouts, the use of bouts across different social contexts, and whether particular call type combinations were favored. Call order within bouts was investigated using call transition frequencies and information theory techniques. Call bouts were defined through analysis of inter-call intervals, as any calls within 3.9 s of each other. Bouts were produced significantly more when new whales joined a group compared to groups that did not change membership, and in groups containing multiple adults escorting a female and calf compared to adult only groups. Although social calls tended to be produced in bouts, there were few repeated bout types. However, the order in which most call types were produced within bouts was non-random and dependent on the preceding call type. These bouts appear to be at least partially governed by rules for how individual components are combined. PMID:26093396

  7. Non-song social call bouts of migrating humpback whales.

    PubMed

    Rekdahl, Melinda L; Dunlop, Rebecca A; Goldizen, Anne W; Garland, Ellen C; Biassoni, Nicoletta; Miller, Patrick; Noad, Michael J

    2015-06-01

    The use of stereotyped calls within structured bouts has been described for a number of species and may increase the information potential of call repertoires. Humpback whales produce a repertoire of social calls, although little is known about the complexity or function of these calls. In this study, digital acoustic tag recordings were used to investigate social call use within bouts, the use of bouts across different social contexts, and whether particular call type combinations were favored. Call order within bouts was investigated using call transition frequencies and information theory techniques. Call bouts were defined through analysis of inter-call intervals, as any calls within 3.9 s of each other. Bouts were produced significantly more when new whales joined a group compared to groups that did not change membership, and in groups containing multiple adults escorting a female and calf compared to adult only groups. Although social calls tended to be produced in bouts, there were few repeated bout types. However, the order in which most call types were produced within bouts was non-random and dependent on the preceding call type. These bouts appear to be at least partially governed by rules for how individual components are combined.
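
    The bout definition and the first-order transition analysis described above can be sketched compactly: calls separated by at most 3.9 s (the study's threshold) belong to the same bout, and transitions between consecutive call types within a bout are tallied into a transition-probability matrix. The call-type labels and timings below are invented for illustration.

```python
import numpy as np

def bouts_and_transitions(times, types, gap=3.9):
    """Group calls into bouts (inter-call interval <= `gap` seconds) and
    tally first-order call-type transitions within bouts (sketch)."""
    labels = sorted(set(types))
    idx = {c: i for i, c in enumerate(labels)}
    counts = np.zeros((len(labels), len(labels)))
    bouts, current = [], [0]
    for i in range(1, len(times)):
        if times[i] - times[i-1] <= gap:
            current.append(i)
            counts[idx[types[i-1]], idx[types[i]]] += 1
        else:
            bouts.append(current)
            current = [i]
    bouts.append(current)
    # row-normalize to transition probabilities (empty rows stay zero)
    probs = counts / counts.sum(axis=1, keepdims=True).clip(min=1)
    return bouts, labels, probs

times = [0.0, 1.2, 2.0, 10.0, 11.5, 12.0, 30.0]
types = ['wop', 'grumble', 'wop', 'wop', 'grumble', 'wop', 'thwop']
bouts, labels, P = bouts_and_transitions(times, types)
print(len(bouts), labels)
print(np.round(P, 2))
```

Non-random structure shows up as rows of the matrix that are far from uniform, i.e., the next call type depends strongly on the preceding one.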

  8. Impact of the telephone assistive device (TAD) on stuttering severity while speaking on the telephone.

    PubMed

    Chambers, Nola

    2009-01-01

    There is extensive experimental evidence that altered auditory feedback (AAF) can have a clinically significant effect on the severity of speech symptoms in people who stutter. However, there is less evidence regarding whether these experimental effects can be observed in naturalistic everyday settings, particularly when using the telephone. This study aimed to investigate the effectiveness of the Telephone Assistive Device (TAD), which is designed to provide AAF on the telephone to people who stutter, in reducing stuttering severity. Nine adults participated in a quasi-experimental study. Stuttering severity was measured first without and then with the device in participants' naturalistic settings while making and receiving telephone calls (immediate benefit). Participants were then allowed a week of repeated use of the device, following which all measurements were repeated (delayed benefit). Overall, results revealed significant immediate benefits from the TAD in all call conditions. Delayed benefits in received and total calls were also significant. There was substantial individual variability in response to the TAD, but none of the demographic or speech-related factors measured in the study were found to significantly impact the benefit (immediate or delayed) derived from the TAD. Results have implications for clinical decision making for adults who stutter.

  9. Inverse Function: Pre-Service Teachers' Techniques and Meanings

    ERIC Educational Resources Information Center

    Paoletti, Teo; Stevens, Irma E.; Hobson, Natalie L. F.; Moore, Kevin C.; LaForest, Kevin R.

    2018-01-01

    Researchers have argued teachers and students are not developing connected meanings for function inverse, thus calling for a closer examination of teachers' and students' inverse function meanings. Responding to this call, we characterize 25 pre-service teachers' inverse function meanings as inferred from our analysis of clinical interviews. After…

  10. Perspectives: A Challenging Patriotism

    ERIC Educational Resources Information Center

    Boyte, Harry C.

    2012-01-01

    In a time of alarm about the poisoning of electoral politics, public passions inflamed by sophisticated techniques of mass polarization, and fears that the country is losing control of its collective future, higher education is called upon to take leadership in "reinventing citizenship." It needs to respond to that call on a scale unprecedented in…

  11. Teaching Free Expression in Word and Example (Commentary).

    ERIC Educational Resources Information Center

    Merrill, John

    1991-01-01

    Suggests that the teaching of free expression may be the highest calling of a communications or journalism professor. Argues that freedom must be tempered by a sense of ethics. Calls upon teachers to encourage students to analyze the questions surrounding free expression. Describes techniques for scrutinizing journalistic myths. (SG)

  12. Hands-free human-machine interaction with voice

    NASA Astrophysics Data System (ADS)

    Juang, B. H.

    2004-05-01

    Voice is a natural communication interface between a human and a machine. The machine, when placed in today's communication networks, may be configured to provide automation to save substantial operating cost, as demonstrated in AT&T's VRCP (Voice Recognition Call Processing), or to facilitate intelligent services, such as virtual personal assistants, to enhance individual productivity. These intelligent services often need to be accessible anytime, anywhere (e.g., in cars when the user is in a hands-busy-eyes-busy situation or during meetings where constantly talking to a microphone is either undesirable or impossible), and thus call for advanced signal processing and automatic speech recognition techniques which support what we call ``hands-free'' human-machine communication. These techniques entail a broad spectrum of technical ideas, ranging from the use of directional microphones and acoustic echo cancellation to robust speech recognition. In this talk, we highlight a number of key techniques that were developed for hands-free human-machine communication in the mid-1990s after Bell Labs became a unit of Lucent Technologies. A video clip will be played to demonstrate the accomplishment.

  13. Clutter Mitigation in Echocardiography Using Sparse Signal Separation

    PubMed Central

    Yavneh, Irad

    2015-01-01

    In ultrasound imaging, clutter artifacts degrade images and may cause inaccurate diagnosis. In this paper, we apply a method called Morphological Component Analysis (MCA) for sparse signal separation with the objective of reducing such clutter artifacts. The MCA approach assumes that the two signals in the additive mix each have a sparse representation under some dictionary of atoms (a matrix), and separation is achieved by finding these sparse representations. In our work, an adaptive approach is used for learning the dictionary from the echo data. MCA is compared to Singular Value Filtering (SVF), a Principal Component Analysis- (PCA-) based filtering technique, and to a high-pass Finite Impulse Response (FIR) filter. Each filter is applied to a simulated hypoechoic lesion sequence, as well as experimental cardiac ultrasound data. MCA is demonstrated in both cases to outperform the FIR filter and obtain results comparable to the SVF method in terms of contrast-to-noise ratio (CNR). Furthermore, MCA shows a lower impact on tissue sections while removing the clutter artifacts. In experimental heart data, MCA achieved clutter mitigation with an average CNR improvement of 1.33 dB. PMID:26199622
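The sparse-separation idea behind MCA can be illustrated on a synthetic 1-D signal: a component assumed sparse in the Fourier domain ("tissue-like") and one sparse in the sample domain ("clutter-like") are split by alternating thresholded projections. This is a minimal sketch under those assumptions, not the paper's adaptive-dictionary implementation, and all signals are synthetic:

```python
# Separate an additive mix x = smooth + spikes by exploiting that each
# component is sparse in a different dictionary (Fourier vs. sample basis).
import numpy as np

n = 128
t = np.arange(n)
smooth = 5.0 * np.cos(2 * np.pi * 3 * t / n)   # "tissue-like": sparse in Fourier
spikes = np.zeros(n)
spikes[20], spikes[77] = 10.0, -8.0            # "clutter-like": sparse in time
x = smooth + spikes                            # observed additive mix

s_est = np.zeros(n)                            # estimate of the spiky component
for _ in range(10):
    # keep only the largest Fourier coefficients of the residual
    F = np.fft.rfft(x - s_est)
    keep = np.argsort(np.abs(F))[-2:]
    F_sparse = np.zeros_like(F)
    F_sparse[keep] = F[keep]
    smooth_est = np.fft.irfft(F_sparse, n)
    # hard-threshold the remainder to recover the spikes
    r = x - smooth_est
    s_est = np.where(np.abs(r) > 3.0, r, 0.0)

print(round(float(s_est[20]), 1), round(float(s_est[77]), 1))  # recovered spikes
```

Real MCA replaces the fixed Fourier/sample dictionaries with learned ones and uses principled sparse-coding solvers, but the alternating structure is the same.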

  14. Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.

    PubMed

    Hack, C Eric

    2006-04-17

    Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate the toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated in order for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate, and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods, such as maximum likelihood estimation (MLE) or least-squares error approaches. The Bayesian approach called Markov chain Monte Carlo (MCMC) analysis can be used to successfully calibrate these complex models. Prior knowledge about the biological system and associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, and the distributions are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
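A minimal random-walk Metropolis sampler illustrates the MCMC idea the abstract describes: a prior distribution on a parameter is updated by a likelihood from measured data to yield posterior samples. The model, data, and prior below are hypothetical stand-ins, not a real PBTK calibration:

```python
# Random-walk Metropolis: propose a nearby parameter value, accept it with
# probability min(1, posterior ratio), and collect the visited values.
import math, random

random.seed(1)
data = [9.8, 10.4, 10.1, 9.7, 10.2]   # hypothetical internal-dose measurements
sigma = 0.3                            # assumed measurement error

def log_post(theta):
    lp = -0.5 * ((theta - 10.0) / 5.0) ** 2                    # N(10, 5^2) prior
    ll = sum(-0.5 * ((d - theta) / sigma) ** 2 for d in data)  # Gaussian likelihood
    return lp + ll

theta, chain = 8.0, []
for _ in range(5000):
    prop = theta + random.gauss(0, 0.5)        # symmetric random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop                           # accept; otherwise keep current
    chain.append(theta)

burned = chain[1000:]                          # discard burn-in
print(sum(burned) / len(burned))               # posterior mean, near the data mean
```

The posterior samples in `burned` directly give uncertainty estimates (quantiles, credible intervals) that frequentist point estimates do not.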

  15. Double Transfer Voltammetry in Two-Polarizable Interface Systems: Effects of the Lipophilicity and Charge of the Target and Compensating Ions.

    PubMed

    Molina, Ángela; Laborda, Eduardo; Olmos, José Manuel; Millán-Barrios, Enrique

    2018-03-06

    Analytical expressions are obtained for the study of the net current and individual fluxes across macro- and micro-liquid/liquid interfaces in series, such as those found in ion sensing with solvent polymeric membranes and in ion-transfer batteries. The mathematical solutions deduced are applicable to any voltammetric technique, independently of the lipophilicity and charge number of the target and compensating ions. When supporting electrolytes of semihydrophilic ions are employed, the so-called double transfer voltammograms have a tendency to merge into a single signal, which notably complicates the modeling and analysis of the electrochemical response. The present theoretical results point out that the appearance of one or two voltammetric waves is highly dependent on the size of the interfaces and on the viscosity of the organic solution. Hence, the latter two can be adjusted experimentally in order to "split" the voltammograms and extract information about the ions involved. This has been illustrated in this work with the experimental study in water | 1,2-dichloroethane | water cells of the transfer of the monovalent tetraethylammonium cation compensated by anions of different lipophilicity, and also of the divalent hexachloroplatinate anion.

  16. The Laser Communications Relay Demonstration Experiment Program

    NASA Technical Reports Server (NTRS)

    Israel, Dave

    2017-01-01

    This paper elaborates on the Laser Communications Relay Demonstration (LCRD) Experiment Program, which will engage in a number of pre-determined experiments and also call upon a wide variety of experimenters to test new laser communications technology and techniques, and to gather valuable data. LCRD is a joint project between NASA's Goddard Space Flight Center (GSFC), the Jet Propulsion Laboratory (JPL), and the Massachusetts Institute of Technology Lincoln Laboratory (MIT LL). LCRD will test the functionality in various settings and scenarios of optical communications links from a GEO payload to ground stations in Southern California and Hawaii over a two-year period following launch in 2019. The LCRD investigator team will execute numerous experiments to test critical aspects of laser communications activities over real links and systems, collecting data on the effects of atmospheric turbulence and weather on performance and communications availability. LCRD will also incorporate emulations of target scenarios, including direct-to-Earth (DTE) links from user spacecraft and optical relay providers supporting user spacecraft. To supplement and expand upon the results of these experiments, the project also includes a Guest Experimenters Program, which encourages individuals and groups from government agencies, academia and industry to propose diverse experiment ideas.

  17. The Laser Communications Relay Demonstration Experiment Program

    NASA Technical Reports Server (NTRS)

    Israel, David J.; Edwards, Bernard L.; Moores, John D.; Piazzolla, Sabino; Merritt, Scott

    2017-01-01

    This paper elaborates on the Laser Communications Relay Demonstration (LCRD) Experiment Program, which will engage in a number of pre-determined experiments and also call upon a wide variety of experimenters to test new laser communications technology and techniques, and to gather valuable data. LCRD is a joint project between NASA's Goddard Space Flight Center (GSFC), the Jet Propulsion Laboratory (JPL), and the Massachusetts Institute of Technology Lincoln Laboratory (MIT LL). LCRD will test the functionality in various settings and scenarios of optical communications links from a GEO (Geosynchronous Earth Orbit) payload to ground stations in Southern California and Hawaii over a two-year period following launch in 2019. The LCRD investigator team will execute numerous experiments to test critical aspects of laser communications activities over real links and systems, collecting data on the effects of atmospheric turbulence and weather on performance and communications availability. LCRD will also incorporate emulations of target scenarios, including direct-to-Earth (DTE) links from user spacecraft and optical relay providers supporting user spacecraft. To supplement and expand upon the results of these experiments, the project also includes a Guest Experimenters Program, which encourages individuals and groups from government agencies, academia and industry to propose diverse experiment ideas.

  18. Robust Optical Recognition of Cursive Pashto Script Using Scale, Rotation and Location Invariant Approach

    PubMed Central

    Ahmad, Riaz; Naz, Saeeda; Afzal, Muhammad Zeshan; Amin, Sayed Hassan; Breuel, Thomas

    2015-01-01

    The presence of a large number of unique shapes called ligatures in cursive languages, along with variations due to scaling, orientation and location, provides one of the most challenging pattern recognition problems. Recognition of the large number of ligatures is often a complicated task in oriental languages such as Pashto, Urdu, Persian and Arabic. Research on cursive script recognition often ignores the fact that scaling, orientation, location and font variations are common in printed cursive text. Therefore, these variations are not included in image databases and in experimental evaluations. This research uncovers challenges faced by Arabic cursive script recognition in a holistic framework by considering Pashto as a test case, because the Pashto language has a larger alphabet set than Arabic, Persian and Urdu. A database containing 8000 images of 1000 unique ligatures having scaling, orientation and location variations is introduced. In this article, a feature space based on the scale invariant feature transform (SIFT) along with a segmentation framework has been proposed for overcoming the above mentioned challenges. The experimental results show a significantly improved performance of the proposed scheme over traditional feature extraction techniques such as principal component analysis (PCA). PMID:26368566

  19. Development, current applications and future roles of biorelevant two-stage in vitro testing in drug development.

    PubMed

    Fiolka, Tom; Dressman, Jennifer

    2018-03-01

    Various types of two-stage in vitro testing have been used in a number of experimental settings. In addition to its application in quality control and for regulatory purposes, two-stage in vitro testing has also been shown to be a valuable technique to evaluate the supersaturation and precipitation behavior of poorly soluble drugs during drug development. The so-called 'transfer model', which is an example of two-stage testing, has provided valuable information about the in vivo performance of poorly soluble, weakly basic drugs by simulating the gastrointestinal drug transit from the stomach into the small intestine with a peristaltic pump. The evolution of the transfer model has resulted in various modifications of the experimental model set-up. Concomitantly, various research groups have developed simplified approaches to two-stage testing to investigate the supersaturation and precipitation behavior of weakly basic drugs without the necessity of using a transfer pump. Given the diversity among the various two-stage test methods available today, a more harmonized approach needs to be taken to optimize the use of two-stage testing at different stages of drug development. © 2018 Royal Pharmaceutical Society.

  20. Flexible structure control experiments using a real-time workstation for computer-aided control engineering

    NASA Technical Reports Server (NTRS)

    Stieber, Michael E.

    1989-01-01

    A Real-Time Workstation for Computer-Aided Control Engineering has been developed jointly by the Communications Research Centre (CRC) and Ruhr-Universitaet Bochum (RUB), West Germany. The system is presently used for the development and experimental verification of control techniques for large space systems with significant structural flexibility. The Real-Time Workstation essentially is an implementation of RUB's extensive Computer-Aided Control Engineering package KEDDC on an INTEL micro-computer running under the RMS real-time operating system. The portable system supports system identification, analysis, control design and simulation, as well as the immediate implementation and test of control systems. The Real-Time Workstation is currently being used by CRC to study control/structure interaction on a ground-based structure called DAISY, whose design was inspired by a reflector antenna. DAISY emulates the dynamics of a large flexible spacecraft with the following characteristics: rigid body modes, many clustered vibration modes with low frequencies and extremely low damping. The Real-Time Workstation was found to be a very powerful tool for experimental studies, supporting control design and simulation, and conducting and evaluating tests within one integrated environment.

  1. Computer Simulation of Fracture in Aerogels

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2006-01-01

    Aerogels are of interest to the aerospace community primarily for their thermal properties, notably their low thermal conductivities. While the gels are typically fragile, recent advances in the application of conformal polymer layers to these gels have made them potentially useful as lightweight structural materials as well. In this work, we investigate the strength and fracture behavior of silica aerogels using a molecular statics-based computer simulation technique. The gels' structure is simulated via a Diffusion Limited Cluster Aggregation (DLCA) algorithm, which produces fractal structures representing experimentally observed aggregates of so-called secondary particles, themselves composed of amorphous silica primary particles an order of magnitude smaller. We have performed multi-length-scale simulations of fracture in silica aerogels, in which the interaction between two secondary particles is assumed to be described by a Morse pair potential parameterized such that the potential range is much smaller than the secondary particle size. These Morse parameters are obtained by atomistic simulation of models of the experimentally observed amorphous silica "bridges," with the fracture behavior of these bridges modeled via molecular statics using a Morse/Coulomb potential for silica. We consider the energetics of the fracture, and compare qualitative features of low- and high-density gel fracture.
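The Morse pair potential mentioned above has the standard form V(r) = D(1 - e^(-a(r - r0)))^2 - D, with well depth D at equilibrium separation r0. The parameter values below are arbitrary illustrative choices, not the paper's fitted values:

```python
# Morse pair potential: a short-ranged bonded interaction whose well depth D,
# stiffness a, and equilibrium distance r0 are illustrative, not fitted.
import math

def morse(r, D=1.0, a=2.0, r0=1.0):
    """Morse pair potential, shifted so the minimum value is -D at r = r0."""
    return D * (1.0 - math.exp(-a * (r - r0))) ** 2 - D

print(morse(1.0))   # → -1.0  (minimum at the equilibrium separation)
print(morse(2.5))   # approaches 0 from below as the bond is stretched and breaks
```

The short range relative to particle size, as assumed in the paper, is controlled by choosing a large stiffness `a`.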

  2. Comparing Noun Phrasing Techniques for Use with Medical Digital Library Tools.

    ERIC Educational Resources Information Center

    Tolle, Kristin M.; Chen, Hsinchun

    2000-01-01

    Describes a study that investigated the use of a natural language processing technique called noun phrasing to determine whether it is a viable technique for medical information retrieval. Evaluates four noun phrase generation tools for their ability to isolate noun phrases from medical journal abstracts, focusing on precision and recall.…

  3. Underwater Photo-Elicitation: A New Experiential Marine Education Technique

    ERIC Educational Resources Information Center

    Andrews, Steve; Stocker, Laura; Oechel, Walter

    2018-01-01

    Underwater photo-elicitation is a novel experiential marine education technique that combines direct experience in the marine environment with the use of digital underwater cameras. A program called Show Us Your Ocean! (SUYO!) was created, utilising a mixed methodology (qualitative and quantitative methods) to test the efficacy of this technique.…

  4. Q-Technique and Graphics Research.

    ERIC Educational Resources Information Center

    Kahle, Roger R.

    Because Q-technique is as appropriate for use with visual and design items as for use with words, it is not stymied by the topics one is likely to encounter in graphics research. In particular Q-technique is suitable for studying the so-called "congeniality" of typography, for various copytesting usages, and for multivariate graphics research. The…

  5. Writing with Basals: A Sentence Combining Approach to Comprehension.

    ERIC Educational Resources Information Center

    Reutzel, D. Ray; Merrill, Jimmie D.

    Sentence combining techniques can be used with basal readers to help students develop writing skills. The first technique is addition, characterized by using the connecting word "and" to join two or more base sentences together. The second technique is called "embedding," and is characterized by putting parts of two or more base sentences together…

  6. An implementation and performance measurement of the progressive retry technique

    NASA Technical Reports Server (NTRS)

    Suri, Gaurav; Huang, Yennun; Wang, Yi-Min; Fuchs, W. Kent; Kintala, Chandra

    1995-01-01

    This paper describes a recovery technique called progressive retry for bypassing software faults in message-passing applications. The technique is implemented as reusable modules to provide application-level software fault tolerance. The paper describes the implementation of the technique and presents results from the application of progressive retry to two telecommunications systems. The results presented show that the technique is helpful in reducing the total recovery time for message-passing applications.
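The escalating-scope idea behind progressive retry can be sketched as follows: each failed attempt triggers a wider (and more expensive) recovery step before retrying, so the cheapest level that clears the fault wins. The recovery levels and simulated fault below are hypothetical, not the paper's implementation:

```python
# Try an operation after each escalating recovery step; return at the first
# level that succeeds, or give up after exhausting all levels.

def progressive_retry(operation, steps):
    """Retry `operation` after each recovery step in `steps`, in order."""
    for level, step in enumerate(steps, start=1):
        step()                       # e.g. replay, reorder messages, rollback
        try:
            return level, operation()
        except RuntimeError:
            continue                 # escalate to the next recovery level
    raise RuntimeError("all retry levels exhausted")

# Simulated transient fault that clears only after message reordering (level 2).
state = {"reordered": False}
def op():
    if not state["reordered"]:
        raise RuntimeError("message race")
    return "ok"

steps = [lambda: None,                              # 1: local replay
         lambda: state.update(reordered=True),      # 2: reorder messages
         lambda: None]                              # 3: full rollback
print(progressive_retry(op, steps))   # → (2, 'ok')
```

The appeal of the scheme is exactly this ordering: most transient faults are bypassed by the cheap early levels, keeping total recovery time low.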

  7. Perspective on Kraken Mare Shores

    NASA Image and Video Library

    2015-02-12

    This Cassini Synthetic Aperture Radar (SAR) image is presented as a perspective view and shows a landscape near the eastern shoreline of Kraken Mare, a hydrocarbon sea in Titan's north polar region. This image was processed using a technique for handling noise that results in clearer views that can be easier for researchers to interpret. The technique, called despeckling, also is useful for producing altimetry data and 3-D views called digital elevation maps. Scientists have used a technique called radargrammetry to determine the altitude of surface features in this view at a resolution of approximately half a mile, or 1 kilometer. The altimetry reveals that the area is smooth overall, with a maximum amplitude of 0.75 mile (1.2 kilometers) in height. The topography also shows that all observed channels flow downhill. The presence of what scientists call "knickpoints" -- locations on a river where a sharp change in slope occurs -- might indicate stratification in the bedrock, erosion mechanisms at work or a particular way the surface responds to runoff events, such as floods following large storms. One such knickpoint is visible just above the lower left corner, where an area of bright slopes is seen. The image was obtained during a flyby of Titan on April 10, 2007. A more traditional radar image of this area on Titan is seen in PIA19046. http://photojournal.jpl.nasa.gov/catalog/PIA19051

  8. Synchronization in spread spectrum laser radar systems based on PMD-DLL

    NASA Astrophysics Data System (ADS)

    Buxbaum, Bernd; Schwarte, Rudolf; Ringbeck, Thorsten; Luan, Xuming; Zhang, Zhigang; Xu, Zhanping; Hess, H.

    2000-09-01

    This paper proposes a new optoelectronic delay locked loop (OE-DLL) and its use in optical ranging systems. The so-called PMD-DLL receiver module is based on a novel electro-optical modulator (EOM), called the Photonic Mixer Device (PMD). This sensor element is a semiconductor device which combines fast optical sensing and mixing of incoherent light signals in one component by its unique and powerful principle of operation. Integration of some simple additional on-chip components yields a highly integrated electro-optical correlation unit. Simulations and experimental results have already impressively verified the operation principle of PMD structures, all realized in CMOS technology so far. Although other technologies are also promising candidates for the PMD realization, they are not further discussed in this contribution. The principle of the new DLL approach is discussed in depth in this paper. Theoretical analysis as well as experimental results of a realized PMD-DLL system are demonstrated and assessed. Due to the operation principle of sophisticated PMD devices and their unique features, a correlation process may be realized in order to synchronize a reflected incoherent light wave with an electronic reference signal. The phase shift between both signals represents the distance to an obstacle and may be determined by means of the synchronization process. This new approach, avoiding previously needed critical components such as broadband amplifiers and mixers for the detection of small photocurrents in optical distance measurement, offers extremely fast and precise phase determination in ranging applications based on the time-of-flight (TOF) principle. However, the optical measurement signal may be incoherent -- therefore a laser source is not strictly needed. The kind of waveform used for the modulation of the light signal is variable and depends on the demands of each specific application. Even though there are plenty of other alternatives (e.g., heterodyne techniques), in this contribution only so-called quasi-heterodyne techniques -- also known as phase shifting methods -- are discussed and used for the implementation. The light modulation schemes described in this contribution are square-wave as well as pseudo-noise modulation. The latter approach, inspired by its widespread use in communication as well as in position detection (e.g., IS-95 and GPS), offers essential advantages and is the most promising modulation method for the ranging approach. So-called CDMA (code division multiple access) systems form a major task in communication technology investigations, since the third-generation mobile phone standard is also partly based on this principle. Fast and reliable synchronization in direct sequence spread spectrum (DSSS) communication systems hardly differs from the already mentioned ranging approach and will also be discussed. The possibility to integrate all components in a monolithic PMD-based DLL design is also presented and discussed. This method might offer the option to integrate complete lines or matrices of PMD-based DLLs for highly parallel, multidimensional ranging. Finally, an outlook is given with regard to further optimized PMD front ends. An estimation of the expected characteristics concerning accuracy and speed of the distance measurement is given in conclusion.
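The core of pseudo-noise synchronization described above, finding the code phase that maximizes the correlation between the received signal and a local replica of the code, can be sketched in a few lines. This toy model ignores optics, noise, and hardware, and uses a seeded random ±1 code rather than a true m-sequence:

```python
# Estimate a cyclic delay by correlating the received sequence against all
# shifted replicas of the local PN code and taking the best-matching lag.
import random

random.seed(7)
n = 127
pn = [random.choice((-1, 1)) for _ in range(n)]   # stand-in for an m-sequence
true_delay = 23
rx = pn[-true_delay:] + pn[:-true_delay]          # cyclically delayed "echo"

def correlate(lag):
    """Circular correlation of rx with the PN code shifted by `lag`."""
    return sum(rx[i] * pn[(i - lag) % n] for i in range(n))

est = max(range(n), key=correlate)                # peak of the correlation
print(est)   # → 23
```

A DLL refines this coarse acquisition by tracking the correlation peak with early/late correlators; the lag (times the chip period and the speed of light, halved) gives the TOF range.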

  9. Gene Profiling Technique to Accelerate Stem Cell Therapies for Eye Diseases

    MedlinePlus

    ... like RPE. They also use a technique called quantitative RT-PCR to measure the expression of genes ... higher in iPS cells than mature RPE. But quantitative RT-PCR only permits the simultaneous measurement of ...

  10. A burnout prediction model based around char morphology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao Wu; Edward Lester; Michael Cloke

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between the ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  11. White blood cell segmentation by circle detection using electromagnetism-like optimization.

    PubMed

    Cuevas, Erik; Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo

    2013-01-01

    Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability.
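The objective function described above, which scores how well a candidate circle matches the edge map, might be sketched as the fraction of sampled perimeter points that land on edge pixels. The EMO optimizer itself is omitted here, and the edge map is synthetic rather than a real blood-cell image:

```python
# Score a candidate circle (cx, cy, r) against a set of edge-pixel
# coordinates: sample points along its perimeter and count the hits.
import math

def make_edge_map(cx, cy, r):
    """Rounded pixel coordinates of a circle outline (synthetic edge map)."""
    return {(round(cx + r * math.cos(math.radians(k))),
             round(cy + r * math.sin(math.radians(k)))) for k in range(360)}

def objective(candidate, edges, samples=60):
    """Fraction of sampled perimeter points that coincide with edge pixels."""
    cx, cy, r = candidate
    hits = sum((round(cx + r * math.cos(math.radians(k * 360 / samples))),
                round(cy + r * math.sin(math.radians(k * 360 / samples)))) in edges
               for k in range(samples))
    return hits / samples

edges = make_edge_map(30, 30, 10)
print(objective((30, 30, 10), edges))   # → 1.0  (perfect fit)
print(objective((15, 15, 5), edges))    # → 0.0  (misplaced candidate)
```

In the paper, EMO evolves a population of such (cx, cy, r) candidates toward high objective values on the edge map of the smear image.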

  12. Development and verification of local/global analysis techniques for laminated composites

    NASA Technical Reports Server (NTRS)

    Griffin, O. Hayden, Jr.

    1989-01-01

    Analysis and design methods for laminated composite materials have been the subject of considerable research over the past 20 years, and are currently well developed. In performing the detailed three-dimensional analyses which are often required in proximity to discontinuities, however, analysts often encounter difficulties due to large models. Even with the current availability of powerful computers, models which are too large to run, either from a resource or time standpoint, are often required. There are several approaches which can permit such analyses, including substructuring, use of superelements or transition elements, and the global/local approach. This effort is based on the so-called zoom technique for global/local analysis, where a global analysis is run, with the results of that analysis applied to a smaller region as boundary conditions, in as many iterations as are required to attain an analysis of the desired region. Before beginning the global/local analyses, it was necessary to evaluate the accuracy of the three-dimensional elements currently implemented in the Computational Structural Mechanics (CSM) Testbed. It was also desired to install, using the Experimental Element Capability, a number of displacement formulation elements which have well known behavior when used for analysis of laminated composites.
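The zoom technique's nesting of boundary conditions can be illustrated with a deliberately simple 1-D Laplace problem (the actual work used 3-D finite elements for laminates): solve globally on a coarse grid, then re-solve a small subregion on a finer grid using the global results as boundary conditions.

```python
# "Zoom" sketch: a coarse global solve supplies boundary values for a
# refined local solve. For u'' = 0 the exact solution is linear, which
# keeps the arithmetic transparent.

def solve_laplace_1d(u_left, u_right, n):
    """u'' = 0 with n interior points: returns the n+2 nodal values."""
    return [u_left + (u_right - u_left) * i / (n + 1) for i in range(n + 2)]

# Global problem: u(0) = 0, u(1) = 100 on a coarse 11-node grid (h = 0.1)
coarse = solve_laplace_1d(0.0, 100.0, 9)

# Local zoom into [0.3, 0.5]: coarse nodes 3 and 5 supply the boundary
# conditions for a 21-node fine grid (h = 0.01)
local = solve_laplace_1d(coarse[3], coarse[5], 19)

print(local[10])   # midpoint of the zoom region, x = 0.4 → 40.0
```

In practice the global results are interpolated onto the local boundary and the zoom is repeated until the region of interest is resolved; the principle of passing global results down as local boundary conditions is the same.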

  13. Recognizing human activities using appearance metric feature and kinematics feature

    NASA Astrophysics Data System (ADS)

    Qian, Huimin; Zhou, Jun; Lu, Xinbiao; Wu, Xinye

    2017-05-01

    The problem of automatically recognizing human activities from videos through the fusion of the two most important cues, the appearance metric feature and the kinematics feature, is considered. A system of two-dimensional (2-D) Poisson equations is introduced to extract a more discriminative appearance metric feature. Specifically, the moving human blobs are first detected in the video by a background subtraction technique to form a binary image sequence, from which the appearance feature, designated the motion accumulation image, and the kinematics feature, termed the centroid instantaneous velocity, are extracted. Second, 2-D discrete Poisson equations are employed to reinterpret the motion accumulation image to produce a more differentiated Poisson silhouette image, from which the appearance feature vector is created through a dimension reduction technique called bidirectional 2-D principal component analysis, considering the balance between classification accuracy and time consumption. Finally, a cascaded classifier based on the nearest neighbor classifier and two directed acyclic graph support vector machine classifiers, integrated with the fusion of the appearance feature vector and the centroid instantaneous velocity vector, is applied to recognize the human activities. Experimental results on the open databases and a homemade one confirm the recognition performance of the proposed algorithm.
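The motion accumulation image can be sketched as a per-pixel sum of the binary foreground masks, so that frequently moving regions accumulate large values. The toy 4x4 frames below are illustrative, and details such as temporal weighting are omitted:

```python
# Accumulate binary foreground masks (from background subtraction) per pixel.

def motion_accumulation(frames):
    """Element-wise sum of a sequence of equal-sized binary masks."""
    rows, cols = len(frames[0]), len(frames[0][0])
    acc = [[0] * cols for _ in range(rows)]
    for f in frames:
        for r in range(rows):
            for c in range(cols):
                acc[r][c] += f[r][c]
    return acc

frames = [
    [[0, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
    [[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
    [[0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
]
acc = motion_accumulation(frames)
print(acc[0][1], acc[1][1])   # → 2 3
```

In the paper this accumulation image is then reinterpreted through 2-D Poisson equations before dimension reduction; the sketch covers only the accumulation step.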

  14. Influence and measurement of mass ablation in ICF implosions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spears, B K; Hicks, D; Velsko, C

    2007-09-05

    Point design ignition capsules designed for the National Ignition Facility (NIF) currently use an x-ray-driven Be(Cu) ablator to compress the DT fuel. Ignition specifications require that the mass of unablated Be(Cu), called residual mass, be known to within 1% of the initial ablator mass when the fuel reaches peak velocity. The specifications also require that the implosion bang time, a surrogate measurement for implosion velocity, be known to ±50 ps RMS. These specifications guard against several capsule failure modes associated with low implosion velocity or low residual mass. Experiments designed to measure and to experimentally tune the amount of residual mass are being developed as part of the National Ignition Campaign (NIC). Tuning adjustments of the residual mass and peak velocity can be achieved using capsule and laser parameters. We currently plan to measure the residual mass using streaked radiographic imaging of surrogate tuning capsules. Alternative techniques to measure residual mass using activated Cu debris collection and proton spectrometry have also been developed. These developing techniques, together with bang time measurements, will allow us to tune ignition capsules to meet NIC specs.

  15. White Blood Cell Segmentation by Circle Detection Using Electromagnetism-Like Optimization

    PubMed Central

    Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo

    2013-01-01

    Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability. PMID:23476713
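
    The objective function that scores a candidate circle against the edge map can be sketched as below. This is an illustrative stand-in (perimeter sampling with edge-hit counting), not the paper's exact fitness, and the EMO search itself (charge-like attraction/repulsion between candidate solutions) is omitted:

```python
import numpy as np

def circle_objective(edge_map, cx, cy, r, n_test=60):
    """Fraction of perimeter test points that land on edge pixels --
    a simple measure of how well a candidate circle fits a cell contour."""
    theta = np.linspace(0, 2 * np.pi, n_test, endpoint=False)
    xs = np.round(cx + r * np.cos(theta)).astype(int)
    ys = np.round(cy + r * np.sin(theta)).astype(int)
    h, w = edge_map.shape
    valid = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    hits = edge_map[ys[valid], xs[valid]].sum()
    return hits / n_test

# toy edge map containing one circle of radius 8 centred at (16, 16)
edge = np.zeros((32, 32))
t = np.linspace(0, 2 * np.pi, 200)
edge[np.round(16 + 8 * np.sin(t)).astype(int),
     np.round(16 + 8 * np.cos(t)).astype(int)] = 1
```

    An EMO-style optimizer would evolve a population of (cx, cy, r) triples under this objective until the best candidates lock onto the cell contours.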

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Bo; Nelson, Kevin; Lipinski, Ronald J.

    Iridium alloys have superior strength and ductility at elevated temperatures, making them useful as structural materials for certain high-temperature applications. However, experimental data on their high-temperature, high-strain-rate performance are needed for understanding high-speed impacts in severe elevated-temperature environments. Kolsky bars (also called split Hopkinson bars) have been extensively employed for high-strain-rate characterization of materials at room temperature, but it has been challenging to adapt them for the measurement of dynamic properties at high temperatures. Current high-temperature Kolsky compression bar techniques are not capable of obtaining satisfactory high-temperature, high-strain-rate stress-strain response of the thin iridium specimens investigated in this study. We analyzed the difficulties encountered in high-temperature Kolsky compression bar testing of thin iridium alloy specimens. Appropriate modifications were made to the current high-temperature Kolsky compression bar technique to obtain reliable compressive stress-strain response of an iridium alloy at high strain rates (300–10,000 s⁻¹) and temperatures (750°C and 1030°C). Uncertainties in such high-temperature, high-strain-rate experiments on thin iridium specimens were also analyzed. The compressive stress-strain response of the iridium alloy showed significant sensitivity to strain rate and temperature.

  17. Air-coupled ultrasound: a novel technique for monitoring the curing of thermosetting matrices.

    PubMed

    Lionetto, Francesca; Tarzia, Antonella; Maffezzoli, Alfonso

    2007-07-01

    A custom-made, air-coupled ultrasonic device was applied to cure monitoring of thick samples (7-10 mm) of unsaturated polyester resin at room temperature. A key point was the optimization of the experimental setup in order to propagate compression waves during the overall curing reaction by suitably placing the noncontact transducers on the same side of the test material, in the so-called pitch-catch configuration. The progress of polymerization was monitored through the variation of the time of flight of the propagating longitudinal waves. The exothermic character of the polymerization was taken into account by correcting the measured time of flight with that in air, obtained by sampling the air velocity during the experiment. The air-coupled ultrasonic results were compared with those obtained from conventional contact ultrasonic measurements. The good agreement between the air-coupled ultrasonic results and those obtained by rheological analysis demonstrated the reliability of air-coupled ultrasound in monitoring the changes of viscoelastic properties at gelation and vitrification. The position of the transducers on the same side of the sample makes this technique suitable for on-line cure monitoring during several composite manufacturing technologies.

  18. Estimator banks: a new tool for direction-of-arrival estimation

    NASA Astrophysics Data System (ADS)

    Gershman, Alex B.; Boehme, Johann F.

    1997-10-01

    A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using a so-called estimator bank containing multiple 'parallel' underlying DOA estimators, which are based on pseudorandom resampling of the MUSIC spatial spectrum for a given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failure in the preliminarily estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique, which enables application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as a basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold, exhibiting threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.
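
    As an illustration of the underlying estimator, a minimal MUSIC pseudospectrum for a half-wavelength uniform linear array can be sketched as follows. The estimator-bank machinery (pseudorandom resampling of this spectrum and pruning of failed estimators) is omitted, and the array geometry and noise model are assumptions:

```python
import numpy as np

def music_spectrum(R, n_sources, grid):
    """MUSIC spatial pseudospectrum from a (sample) covariance matrix R
    for a uniform linear array with half-wavelength element spacing."""
    m = R.shape[0]
    _, vecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = vecs[:, : m - n_sources]        # noise-subspace eigenvectors
    k = np.arange(m)
    P = np.empty(len(grid))
    for i, th in enumerate(grid):
        a = np.exp(1j * np.pi * k * np.sin(th))       # steering vector
        P[i] = 1.0 / np.real(a.conj() @ (En @ En.conj().T) @ a)
    return P

# one source at 0.3 rad on an 8-element array, low noise
m, theta0 = 8, 0.3
a0 = np.exp(1j * np.pi * np.arange(m) * np.sin(theta0))
R = np.outer(a0, a0.conj()) + 0.01 * np.eye(m)
grid = np.linspace(-np.pi / 2, np.pi / 2, 721)
P = music_spectrum(R, 1, grid)
```

    An estimator bank would form many such spectra from pseudorandomly perturbed versions of the covariance matrix and keep only the estimators whose peaks fall inside the preliminarily estimated localization sectors.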

  19. Digital audio watermarking using moment-preserving thresholding

    NASA Astrophysics Data System (ADS)

    Choi, DooSeop; Jung, Hae Kyung; Choi, Hyuk; Kim, Taejeong

    2007-09-01

    The Moment-Preserving Thresholding (MPT) technique for digital images has been used in digital image processing for decades, especially in image binarization and image compression. Its main strength lies in the fact that the binary values MPT produces, called representative values, are usually unaffected when the signal being thresholded goes through a signal processing operation. The two representative values in MPT, together with the threshold value, are obtained by solving the system of preservation equations for the first, second, and third moments. Relying on this robustness of the representative values to the various signal processing attacks considered in the watermarking context, this paper proposes a new watermarking scheme for audio signals. The watermark is embedded in the root-sum-square (RSS) of the two representative values of each signal block using the quantization technique. As a result, the RSS values are modified by scaling the signal according to the watermark bit sequence under the constraint of inaudibility relative to the human psycho-acoustic model. We also address and suggest solutions to the problems of synchronization and power scaling attacks. Experimental results show that the proposed scheme maintains high audio quality and robustness to various attacks including MP3 compression, re-sampling, jittering, and DA/AD conversion.
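
    The moment-preservation equations have a well-known closed-form solution (Tsai's construction): the two representative values are the roots of a quadratic whose coefficients follow from the first three sample moments. A minimal sketch for one signal block, assuming the standard two-level MPT:

```python
import numpy as np

def mpt(block):
    """Two-level moment-preserving thresholding: find representative
    values z0 < z1 and a fraction p0 such that p0*z0^k + (1-p0)*z1^k
    reproduces the block's first three sample moments (k = 1, 2, 3)."""
    x = np.asarray(block, dtype=float).ravel()
    m1, m2, m3 = x.mean(), (x**2).mean(), (x**3).mean()
    d = m1 * m1 - m2                  # determinant of the 2x2 moment system
    c1 = (m3 - m1 * m2) / d
    c0 = (m2 * m2 - m1 * m3) / d
    disc = np.sqrt(c1 * c1 - 4.0 * c0)
    z0 = 0.5 * (-c1 - disc)           # roots of z^2 + c1*z + c0 = 0
    z1 = 0.5 * (-c1 + disc)
    p0 = (z1 - m1) / (z1 - z0)        # fraction of samples assigned to z0
    return z0, z1, p0
```

    In the proposed scheme, the watermark bit would then be embedded by quantizing the root-sum-square sqrt(z0² + z1²) of the two representative values of each block.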

  20. Estimating background-subtracted fluorescence transients in calcium imaging experiments: a quantitative approach.

    PubMed

    Joucla, Sébastien; Franconville, Romain; Pippow, Andreas; Kloppenburg, Peter; Pouzat, Christophe

    2013-08-01

    Calcium imaging has become a routine technique in neuroscience for subcellular to network-level investigations. Rapid progress in the development of new indicators and imaging techniques calls for dedicated, reliable analysis methods. In particular, efficient and quantitative background fluorescence subtraction routines would benefit most of the calcium imaging research field. A method for estimating background-subtracted fluorescence transients that does not require any independent background measurement is therefore developed. This method is based on a fluorescence model fitted to single-trial data using a classical nonlinear regression approach. The model includes an appropriate probabilistic description of the acquisition system's noise, leading to accurate confidence intervals on all quantities of interest (background fluorescence, normalized background-subtracted fluorescence time course) when the background fluorescence is homogeneous. An automatic procedure detecting background inhomogeneities inside the region of interest is also developed and is shown to be efficient on simulated data. The implementation and performance of the proposed method on experimental recordings from the mouse hypothalamus are presented in detail. This method, which applies to both single-cell and bulk-stained tissue recordings, should help improve the statistical comparison of fluorescence calcium signals between experiments and studies. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Combination of confocal principle and aperture stop separation improves suppression of crystalline lens fluorescence in an eye model.

    PubMed

    Klemm, Matthias; Blum, Johannes; Link, Dietmar; Hammer, Martin; Haueisen, Jens; Schweitzer, Dietrich

    2016-09-01

    Fluorescence lifetime imaging ophthalmoscopy (FLIO) is a new technique to detect changes in the human retina. The autofluorescence decay over time, generated by endogenous fluorophores, is measured in vivo. The strong autofluorescence of the crystalline lens, however, is superimposed on the intensity decay of the retinal fluorescence, as the confocal principle is not able to suppress it sufficiently. Thus, the crystalline lens autofluorescence causes artifacts in the retinal fluorescence lifetimes determined from the intensity decays. Here, we present a new technique to suppress the autofluorescence of the crystalline lens by introducing an annular stop into the detection light path, which we call Schweitzer's principle. The efficacy of annular stops with an outer diameter of 7 mm and inner diameters of 1 to 5 mm is analyzed in an experimental setup using a model eye based on fluorescent dyes. Compared to the confocal principle, Schweitzer's principle with an inner diameter of 3 mm is able to reduce the simulated crystalline lens fluorescence to 4%, while 42% of the simulated retina fluorescence is preserved. Thus, we recommend the implementation of Schweitzer's principle in scanning laser ophthalmoscopes used for fundus autofluorescence measurements, especially the FLIO device, for improved image quality.

  2. Extracorporeal CO2 removal: Technical and physiological fundaments and principal indications.

    PubMed

    Romay, E; Ferrer, R

    2016-01-01

    In recent years, technological improvements have reduced the complexity of extracorporeal membrane oxygenation devices. This has enabled the development of specific devices for the extracorporeal removal of CO2. These devices have a simpler configuration than extracorporeal membrane oxygenation devices and use lower blood flows, which could reduce the potential complications. Experimental studies have demonstrated the feasibility, efficacy and safety of extracorporeal removal of CO2 and some of its effects in humans. This technique was initially conceived as an adjunct therapy in patients with severe acute respiratory distress syndrome, as a tool to optimize protective ventilation. More recently, the use of this technique has allowed the emergence of a relatively new concept called "ultra-protective ventilation", whose effects are still to be determined. In addition, the extracorporeal removal of CO2 has been used in patients with exacerbated hypercapnic respiratory failure with promising results. In this review we describe the physiological and technical fundamentals of this therapy and its variants and provide an overview of the available clinical evidence, focusing on its current potential. Copyright © 2015 Elsevier España, S.L.U. and SEMICYUC. All rights reserved.

  3. Vibration analysis of angle-ply laminated composite plates with an embedded piezoceramic layer.

    PubMed

    Lin, Hsien-Yang; Huang, Jin-Hung; Ma, Chien-Ching

    2003-09-01

    An optical full-field technique, called amplitude-fluctuation electronic speckle pattern interferometry (AF-ESPI), is used in this study to investigate the force-induced transverse vibration of an angle-ply laminated composite embedded with a piezoceramic layer (piezolaminated plates). The piezolaminated plates are excited by applying time-harmonic voltages to the embedded piezoceramic layer. Because clear fringe patterns will appear only at resonant frequencies, both the resonant frequencies and mode shapes of the vibrating piezolaminated plates with five different fiber orientation angles are obtained by the proposed AF-ESPI method. A laser Doppler vibrometer (LDV) system, which has the advantages of high resolution and broad dynamic range, is also applied to measure the frequency response of the piezolaminated plates. In addition to the two proposed optical techniques, numerical computations based on a commercial finite element package are presented for comparison with the experimental results. Three different numerical formulations are used to evaluate the vibration characteristics of piezolaminated plates. The good agreement between the data measured by the optical method and the numerical results predicted by the finite element method (FEM) demonstrates that the proposed methodology is a powerful tool for the vibration analysis of piezolaminated plates.

  4. Magnetic resonance spectroscopic imaging at superresolution: Overview and perspectives.

    PubMed

    Kasten, Jeffrey; Klauser, Antoine; Lazeyras, François; Van De Ville, Dimitri

    2016-02-01

    The notion of non-invasive, high-resolution spatial mapping of metabolite concentrations has long enticed the medical community. While magnetic resonance spectroscopic imaging (MRSI) is capable of achieving the requisite spatio-spectral localization, it has traditionally been encumbered by significant resolution constraints that have thus far undermined its clinical utility. To surpass these obstacles, research efforts have primarily focused on hardware enhancements or the development of accelerated acquisition strategies to improve the experimental sensitivity per unit time. Concomitantly, a number of innovative reconstruction techniques have emerged as alternatives to the standard inverse discrete Fourier transform (DFT). While perhaps lesser known, these latter methods strive to effect commensurate resolution gains by exploiting known properties of the underlying MRSI signal in concert with advanced image and signal processing techniques. This review article aims to aggregate and provide an overview of the past few decades of so-called "superresolution" MRSI reconstruction methodologies, and to introduce readers to current state-of-the-art approaches. A number of perspectives are then offered as to the future of high-resolution MRSI, with a particular focus on translation into clinical settings. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. New impressive capabilities of SE-workbench for EO/IR real-time rendering of animated scenarios including flares

    NASA Astrophysics Data System (ADS)

    Le Goff, Alain; Cathala, Thierry; Latger, Jean

    2015-10-01

    To provide technical assessments of EO/IR flares and self-protection systems for aircraft, DGA Information Superiority resorts to synthetic image generation to model the operational battlefield of an aircraft, as viewed by EO/IR threats. For this purpose, it completed the SE-Workbench suite from OKTAL-SE with functionalities to predict a realistic aircraft IR signature and is currently integrating the real-time EO/IR rendering engine of SE-Workbench called SE-FAST-IR. This engine is a set of physics-based software and libraries that allows preparing and visualizing a 3D scene for the EO/IR domain. It takes advantage of recent advances in GPU computing techniques. Recent evolutions mainly concern the realistic and physical rendering of reflections, the rendering of both radiative and thermal shadows, the use of procedural techniques for managing and rendering very large terrains, the implementation of Image-Based Rendering for dynamic interpolation of plume static signatures and, lastly for aircraft, the dynamic interpolation of thermal states. The next step is the representation of the spectral, directional, spatial and temporal signature of flares by Lacroix Defense using OKTAL-SE technology. This representation is prepared from experimental data acquired during windblast tests and high-speed track tests. It is based on particle system mechanisms to model the different components of a flare. The validation of a flare model will comprise a simulation of real trials and a comparison of simulation outputs to experimental results concerning the flare signature and, above all, the behavior of the stimulated threat.

  6. Solving Energy-Aware Real-Time Tasks Scheduling Problem with Shuffled Frog Leaping Algorithm on Heterogeneous Platforms

    PubMed Central

    Zhang, Weizhe; Bai, Enci; He, Hui; Cheng, Albert M.K.

    2015-01-01

    Reducing energy consumption is becoming very important for preserving battery life and lowering overall operational costs in heterogeneous real-time multiprocessor systems. In this paper, we first formulate this as a combinatorial optimization problem. Then, a successful meta-heuristic, called the Shuffled Frog Leaping Algorithm (SFLA), is proposed to reduce the energy consumption. Precocity remission and local-optimum avoidance techniques are proposed to prevent premature convergence and improve solution quality. Convergence acceleration significantly reduces the search time. Experimental results show that the SFLA-based energy-aware meta-heuristic uses 30% less energy than the Ant Colony Optimization (ACO) algorithm, and 60% less energy than the Genetic Algorithm (GA). Remarkably, the running time of the SFLA-based meta-heuristic is 20 and 200 times shorter than that of ACO and GA, respectively, for finding the optimal solution. PMID:26110406
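
    The core SFLA loop (rank the population, deal frogs into memeplexes, and repeatedly leap the worst frog toward the local or global best) can be sketched on a toy continuous objective. This is a minimal sketch on a sphere function, not the paper's discrete task-scheduling encoding, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sfla(f, dim=2, n_frogs=30, n_memeplexes=3, n_shuffles=20, n_local=10):
    """Minimal Shuffled Frog Leaping Algorithm (minimization)."""
    frogs = rng.uniform(-5, 5, (n_frogs, dim))
    for _ in range(n_shuffles):
        # rank frogs by fitness, then deal them into memeplexes round-robin
        frogs = frogs[np.argsort([f(x) for x in frogs])]
        best_global = frogs[0].copy()
        for m in range(n_memeplexes):
            idx = np.arange(m, n_frogs, n_memeplexes)
            memeplex = frogs[idx]                 # fancy indexing -> copy
            for _ in range(n_local):
                order = np.argsort([f(x) for x in memeplex])
                best, worst = memeplex[order[0]], memeplex[order[-1]]
                cand = worst + rng.random(dim) * (best - worst)
                if f(cand) >= f(worst):           # leap toward the global best
                    cand = worst + rng.random(dim) * (best_global - worst)
                if f(cand) >= f(worst):           # censor: random new frog
                    cand = rng.uniform(-5, 5, dim)
                memeplex[order[-1]] = cand
            frogs[idx] = memeplex
    return frogs[np.argmin([f(x) for x in frogs])]

sphere = lambda x: float(np.sum(x ** 2))
best = sfla(sphere)
```

    For the scheduling problem, the continuous frog positions would be replaced by task-to-processor assignments and f by the energy model.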

  7. STATISTICS OF THE VELOCITY GRADIENT TENSOR IN SPACE PLASMA TURBULENT FLOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Consolini, Giuseppe; Marcucci, Maria Federica; Pallocchia, Giuseppe

    2015-10-10

    In the last decade, significant advances have been presented for the theoretical characterization and experimental techniques used to measure and model all of the components of the velocity gradient tensor in the framework of fluid turbulence. Here, we attempt the evaluation of the small-scale velocity gradient tensor for a case study of space plasma turbulence, observed in the Earth's magnetosheath region by the CLUSTER mission. In detail, we investigate the joint statistics P(R, Q) of the velocity gradient geometric invariants R and Q, and find that this P(R, Q) is similar to that of the low end of the inertial range for fluid turbulence, with a pronounced increase in the statistics along the so-called Vieillefosse tail. In the context of hydrodynamics, this result is referred to as the dissipation/dissipation-production due to vortex stretching.
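
    For a trace-free velocity gradient tensor A, the geometric invariants are Q = -tr(A²)/2 and R = -det(A), and the Vieillefosse tail is the zero-discriminant branch (27/4)R² + Q³ = 0 with R > 0. A minimal sketch (illustrative only, not the CLUSTER data pipeline):

```python
import numpy as np

def qr_invariants(A):
    """Second and third geometric invariants of a trace-free velocity
    gradient tensor: Q = -tr(A^2)/2 and R = -det(A).  In the (R, Q)
    plane, the curve (27/4) R^2 + Q^3 = 0 with R > 0 is the
    Vieillefosse tail."""
    Q = -0.5 * np.trace(A @ A)
    R = -np.linalg.det(A)
    return Q, R

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
A -= np.trace(A) / 3.0 * np.eye(3)   # remove the trace (incompressibility)
Q, R = qr_invariants(A)
```

    By Cayley-Hamilton, a traceless A satisfies A³ + QA + RI = 0, which provides a direct consistency check on the computed invariants.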

  8. Atomic detail visualization of photosynthetic membranes with GPU-accelerated ray tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, John E.; Sener, Melih; Vandivort, Kirby L.

    The cellular process responsible for providing energy for most life on Earth, namely, photosynthetic light-harvesting, requires the cooperation of hundreds of proteins across an organelle, involving length and time scales spanning several orders of magnitude over quantum and classical regimes. Simulation and visualization of this fundamental energy conversion process pose many unique methodological and computational challenges. In this paper, we present, in two accompanying movies, light-harvesting in the photosynthetic apparatus found in purple bacteria, the so-called chromatophore. The movies are the culmination of three decades of modeling efforts, featuring the collaboration of theoretical, experimental, and computational scientists. Finally, we describe the techniques that were used to build, simulate, analyze, and visualize the structures shown in the movies, and we highlight cases where scientific needs spurred the development of new parallel algorithms that efficiently harness GPU accelerators and petascale computers.

  9. Rolling bearing fault feature learning using improved convolutional deep belief network with compressed sensing

    NASA Astrophysics Data System (ADS)

    Shao, Haidong; Jiang, Hongkai; Zhang, Haizhou; Duan, Wenjing; Liang, Tianchen; Wu, Shuaipeng

    2018-02-01

    The vibration signals collected from rolling bearings are usually complex and non-stationary, with heavy background noise. Therefore, it is a great challenge to efficiently learn representative fault features from the collected vibration signals. In this paper, a novel method called improved convolutional deep belief network (CDBN) with compressed sensing (CS) is developed for feature learning and fault diagnosis of rolling bearings. First, CS is adopted to reduce the amount of vibration data and improve analysis efficiency. Second, a new CDBN model is constructed with Gaussian visible units to enhance the feature learning ability for the compressed data. Finally, the exponential moving average (EMA) technique is employed to improve the generalization performance of the constructed deep model. The developed method is applied to analyze experimental rolling bearing vibration signals. The results confirm that the developed method is more effective than traditional methods.

  10. In-situ High-energy X-ray Diffraction Study of the Local Structure of Supercooled Liquid Si

    NASA Technical Reports Server (NTRS)

    Lee, G. W.; Kim, T. H.; Sieve, B.; Gangopadhyay, A. K.; Hyers, R. W.; Rathz, T. J.; Rogers, J. R.; Robinson, D. S.; Kelton, K. F.; Goldman, A. I.

    2005-01-01

    While changes in the coordination number for liquid silicon upon supercooling, signaling an underlying liquid-liquid phase transition, have been predicted, x-ray and neutron measurements have produced conflicting reports: some studies have found a decrease in the first-shell coordination number as temperature decreases in the supercooled regime, while others have reported an increase. Employing the technique of electrostatic levitation coupled with high-energy x-ray diffraction (125 keV) and rapid data acquisition (100 ms collection times) using an area detector, we have obtained high-quality structural data more deeply into the supercooled regime than has been possible before. No change in coordination number is observed in this temperature region, calling into question previous experimental claims of structural evidence for the existence of a liquid-liquid phase transition.

  11. Modelling proteins' hidden conformations to predict antibiotic resistance

    NASA Astrophysics Data System (ADS)

    Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.

    2016-10-01

    TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM's specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models' prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design.

  12. FALSTAFF: A New Tool for Fission Fragment Characterization

    NASA Astrophysics Data System (ADS)

    Doré, D.; Farget, F.; Lecolley, F.-R.; Lehaut, G.; Materna, T.; Pancin, J.; Panebianco, S.; Papaevangelou, Th.

    2014-05-01

    The future Neutrons For Science (NFS) facility to be installed at SPIRAL2 (Caen, France) will produce high-intensity neutron beams from hundreds of keV up to 40 MeV. Taking advantage of this facility, data of particular interest to the nuclear community, in view of the development of fast reactor technology, will be measured. The development of an experimental setup called FALSTAFF for a full characterization of actinide fission fragments has been undertaken. Fission fragment isotopic yields and associated neutron multiplicities will be measured as a function of the neutron energy. Based on the time-of-flight and residual-energy technique, the setup will allow for the simultaneous measurement of the velocity and energy of the complementary fragments. The performance of the time-of-flight detectors of FALSTAFF will be presented, and expected resolutions for fragment masses and neutron multiplicities, based on realistic simulations, will be shown.

  13. Relative distance between tracers as a measure of diffusivity within moving aggregates

    NASA Astrophysics Data System (ADS)

    Pönisch, Wolfram; Zaburdaev, Vasily

    2018-02-01

    Tracking of particles, be it a passive tracer or an actively moving bacterium in a growing bacterial colony, is a powerful technique to probe the physical properties of the environment of the particles. One of the most common measures of particle motion driven by fluctuations and random forces is its diffusivity, which is routinely obtained by measuring the mean squared displacement of the particles. However, the tracer particles may often be moving in a domain or an aggregate which itself experiences some regular or random motion and thus masks the diffusivity of the tracers. Here we provide a method for assessing the diffusivity of tracer particles within mobile aggregates by measuring the so-called mean squared relative distance (MSRD) between two tracers. We provide analytical expressions for both the ensemble-averaged and time-averaged MSRD, allowing for direct identification of diffusivities from experimental data.
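
    The time-averaged MSRD can be sketched as follows: because the separation vector between two tracers is insensitive to any common motion of the aggregate, its mean squared change over a lag recovers the tracer diffusivity even under a large drift. A minimal illustration with simulated random walks (the specific drift and diffusivity values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def msrd(r1, r2, lag):
    """Time-averaged mean squared relative distance of two tracers.
    The separation d(t) = r1(t) - r2(t) cancels any common aggregate
    motion; for independent tracers with diffusivity D it grows as
    MSRD(lag) ~ 4 * d * D * lag in d dimensions."""
    d = r1 - r2
    return np.mean(np.sum((d[lag:] - d[:-lag]) ** 2, axis=1))

# two independent 2-D walks (D = 0.5) riding on a large common drift
n, D = 20000, 0.5
drift = np.cumsum(5.0 * rng.standard_normal((n, 2)), axis=0)  # aggregate motion
r1 = drift + np.cumsum(np.sqrt(2 * D) * rng.standard_normal((n, 2)), axis=0)
r2 = drift + np.cumsum(np.sqrt(2 * D) * rng.standard_normal((n, 2)), axis=0)
```

    With d = 2 and D = 0.5, the MSRD grows as 4·lag per unit lag, whereas the ordinary mean squared displacement of either tracer alone is dominated by the drift.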

  14. Identification of PAH Isomeric Structure in Cosmic Dust Analogs: The AROMA Setup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabbah, Hassan; Bonnamy, Anthony; Joblin, Christine

    We developed a new analytical experimental setup called AROMA (Astrochemistry Research of Organics with Molecular Analyzer) that combines laser desorption/ionization techniques with ion trap mass spectrometry. We report here on the ability of the apparatus to detect aromatic species in complex materials of astrophysical interest and characterize their structures. A limit of detection of 100 femtograms has been achieved using pure polycyclic aromatic hydrocarbon (PAH) samples, which corresponds to 2 × 10⁸ molecules in the case of coronene (C₂₄H₁₂). We detected the PAH distribution in the Murchison meteorite, which is made of a complex mixture of extraterrestrial organic compounds. In addition, collision-induced dissociation experiments were performed on selected species detected in Murchison, which led to the first firm identification of pyrene and its methylated derivatives in this sample.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, B.; Nelson, K.; Lipinski, R.

    Iridium alloys have superior strength and ductility at elevated temperatures, making them useful as structural materials for certain high-temperature applications. However, experimental data on their high-strain-rate performance are needed for understanding high-speed impacts in severe environments. Kolsky bars (also called split Hopkinson bars) have been extensively employed for high-strain-rate characterization of materials at room temperature, but it has been challenging to adapt them for the measurement of dynamic properties at high temperatures. In our study, we analyzed the difficulties encountered in high-temperature Kolsky bar testing of thin iridium alloy specimens in compression. We made appropriate modifications to the current high-temperature Kolsky bar technique in order to obtain reliable compressive stress–strain response of an iridium alloy at high strain rates (300–10,000 s⁻¹) and temperatures (750 and 1030°C). The compressive stress–strain response of the iridium alloy showed significant sensitivity to both strain rate and temperature.

  16. DART: a practical reconstruction algorithm for discrete tomography.

    PubMed

    Batenburg, Kees Joost; Sijbers, Jan

    2011-09-01

    In this paper, we present an iterative reconstruction algorithm for discrete tomography, called discrete algebraic reconstruction technique (DART). DART can be applied if the scanned object is known to consist of only a few different compositions, each corresponding to a constant gray value in the reconstruction. Prior knowledge of the gray values for each of the compositions is exploited to steer the current reconstruction towards a reconstruction that contains only these gray values. Based on experiments with both simulated CT data and experimental μCT data, it is shown that DART is capable of computing more accurate reconstructions from a small number of projection images, or from a small angular range, than alternative methods. It is also shown that DART can deal effectively with noisy projection data and that the algorithm is robust with respect to errors in the estimation of the gray values.
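
    The two DART-specific steps, segmentation onto the known gray values and identification of the free boundary pixels, can be sketched as below. This is a minimal illustration; the interleaved algebraic updates (e.g. ART/SIRT restricted to the boundary pixels) and the smoothing step are omitted, and the gray values are assumptions:

```python
import numpy as np

GRAY_VALUES = np.array([0.0, 1.0])   # assumed known compositions

def segment(x):
    """Threshold a continuous reconstruction onto the nearest gray value."""
    return GRAY_VALUES[np.argmin(np.abs(x[..., None] - GRAY_VALUES), axis=-1)]

def boundary_mask(s):
    """Mark pixels that touch a different gray value in the 4-neighbourhood;
    only these pixels remain free in the next algebraic update, while all
    other pixels are fixed at their segmented values."""
    b = np.zeros(s.shape, dtype=bool)
    d = s[1:, :] != s[:-1, :]
    b[1:, :] |= d
    b[:-1, :] |= d
    d = s[:, 1:] != s[:, :-1]
    b[:, 1:] |= d
    b[:, :-1] |= d
    return b
```

    A full DART iteration would alternate: algebraic reconstruction, segmentation, fixing of non-boundary pixels, and a new algebraic update over the boundary pixels only.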

  17. Atomic detail visualization of photosynthetic membranes with GPU-accelerated ray tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, John E.; Sener, Melih; Vandivort, Kirby L.

    The cellular process responsible for providing energy for most life on Earth, namely, photosynthetic light-harvesting, requires the cooperation of hundreds of proteins across an organelle, involving length and time scales spanning several orders of magnitude over quantum and classical regimes. Simulation and visualization of this fundamental energy conversion process pose many unique methodological and computational challenges. We present, in two accompanying movies, light-harvesting in the photosynthetic apparatus found in purple bacteria, the so-called chromatophore. The movies are the culmination of three decades of modeling efforts, featuring the collaboration of theoretical, experimental, and computational scientists. We describe the techniques that were used to build, simulate, analyze, and visualize the structures shown in the movies, and we highlight cases where scientific needs spurred the development of new parallel algorithms that efficiently harness GPU accelerators and petascale computers.

  18. Atomic detail visualization of photosynthetic membranes with GPU-accelerated ray tracing

    DOE PAGES

    Stone, John E.; Sener, Melih; Vandivort, Kirby L.; ...

    2015-12-12

    The cellular process responsible for providing energy for most life on Earth, namely, photosynthetic light-harvesting, requires the cooperation of hundreds of proteins across an organelle, involving length and time scales spanning several orders of magnitude over quantum and classical regimes. Simulation and visualization of this fundamental energy conversion process pose many unique methodological and computational challenges. In this paper, we present, in two accompanying movies, light-harvesting in the photosynthetic apparatus found in purple bacteria, the so-called chromatophore. The movies are the culmination of three decades of modeling efforts, featuring the collaboration of theoretical, experimental, and computational scientists. Finally, we describe the techniques that were used to build, simulate, analyze, and visualize the structures shown in the movies, and we highlight cases where scientific needs spurred the development of new parallel algorithms that efficiently harness GPU accelerators and petascale computers.

  19. Functional feature embedded space mapping of fMRI data.

    PubMed

    Hu, Jin; Tian, Jie; Yang, Lei

    2006-01-01

    We have proposed a new method for fMRI data analysis called Functional Feature Embedded Space Mapping (FFESM). Our work mainly focuses on experimental designs with periodic stimuli, which can be described by a number of Fourier coefficients in the frequency domain. A nonlinear dimension reduction technique, Isomap, is applied for the first time to the high-dimensional features obtained from the frequency domain of the fMRI data. Finally, the presence of activated time series is identified by a clustering method in which the information-theoretic criterion of minimum description length (MDL) is used to estimate the number of clusters. The feasibility of our algorithm is demonstrated on real human experiments. Although we focus on analyzing periodic fMRI data, the approach can be extended to non-periodic (event-related) fMRI data by replacing the Fourier analysis with a wavelet analysis.
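
    Before any dimension reduction, each voxel's periodic response is summarized by frequency-domain features. A toy sketch of such Fourier features (the synthetic series and parameter choices are ours; the method then applies Isomap and MDL-based clustering to features of this kind):

```python
import numpy as np

def fourier_features(ts, n_coeffs=3):
    """Leading Fourier magnitudes of a (mean-removed) voxel time series:
    frequency-domain features for a periodic stimulus design."""
    spec = np.abs(np.fft.rfft(ts - ts.mean()))
    return spec[1:1 + n_coeffs]        # skip the DC bin

t = np.arange(64)
active = np.sin(2 * np.pi * 4 * t / 64)   # responds at the stimulus frequency (bin 4)
inactive = np.zeros(64)
f_active = fourier_features(active, 5)
f_inactive = fourier_features(inactive, 5)
```

    An activated voxel concentrates its energy at the stimulus frequency, so its feature vector is easily separated from an inactive one by clustering.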

  20. Numerical Analysis of the Cavity Flow subjected to Passive Controls Techniques

    NASA Astrophysics Data System (ADS)

    Melih Guleren, Kursad; Turk, Seyfettin; Mirza Demircan, Osman; Demir, Oguzhan

    2018-03-01

    Open-source flow solvers are getting more and more popular for the analysis of challenging flow problems in aeronautical and mechanical engineering applications. They are offered under the GNU General Public License and can be run, examined, shared and modified according to user’s requirements. SU2 and OpenFOAM are the two most popular open-source solvers in Computational Fluid Dynamics (CFD) community. In the present study, some passive control methods on the high-speed cavity flows are numerically simulated using these open-source flow solvers along with one commercial flow solver called ANSYS/Fluent. The results are compared with the available experimental data. The solver SU2 are seen to predict satisfactory the mean streamline velocity but not turbulent kinetic energy and overall averaged sound pressure level (OASPL). Whereas OpenFOAM predicts all these parameters nearly as the same levels of ANSYS/Fluent.

  1. Infrared small target detection based on multiscale center-surround contrast measure

    NASA Astrophysics Data System (ADS)

    Fu, Hao; Long, Yunli; Zhu, Ran; An, Wei

    2018-04-01

    Infrared (IR) small target detection plays a critical role in Infrared Search And Track (IRST) systems. Although it has been studied for years, difficulties remain in cluttered environments. Following the principle by which humans discriminate small targets from a natural scene, namely that there is a signature of discontinuity between the object and its neighboring regions, we develop an efficient method for infrared small target detection called the multiscale center-surround contrast measure (MCSCM). First, to determine the maximum neighboring window size, an entropy-based window selection technique is used. Then, we construct a novel multiscale center-surround contrast measure to calculate the saliency map. Compared with the original image, the MCSCM map has less background clutter and noise residual. Subsequently, a simple threshold is used to segment the target. Experimental results show our method achieves better performance.
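
    The center-surround idea can be illustrated at a single scale: compare the mean of a small center window with the mean of its surrounding ring. A minimal sketch (the window radii and evaluation at the image center are our simplifications; the full measure evaluates this over multiple scales at every pixel):

```python
import numpy as np

def center_surround_contrast(img, r_c, r_s):
    """Contrast between a center window of radius r_c and its surround
    ring of radius r_s, evaluated at the image center (single scale)."""
    cy, cx = np.array(img.shape) // 2
    center = img[cy - r_c:cy + r_c + 1, cx - r_c:cx + r_c + 1]
    surround = img[cy - r_s:cy + r_s + 1, cx - r_s:cx + r_s + 1].copy()
    # mask out the center window so only the ring contributes
    surround[r_s - r_c:r_s + r_c + 1, r_s - r_c:r_s + r_c + 1] = np.nan
    return center.mean() - np.nanmean(surround)

img = np.zeros((11, 11))
img[5, 5] = 1.0          # a one-pixel "small target" on a flat background
contrast = center_surround_contrast(img, 0, 3)
flat = center_surround_contrast(np.ones((11, 11)), 1, 3)
```

    A small bright target yields a large positive contrast, while uniform background (or large-area clutter that fills both windows) yields a value near zero.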

  2. Gyroscope-driven mouse pointer with an EMOTIV® EEG headset and data analysis based on Empirical Mode Decomposition.

    PubMed

    Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos

    2013-08-14

    This paper presents a project on the development of a cursor control emulating the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals which are obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple, quick, yet effective computational tool aimed at reducing artifacts from head movements as well as detecting blinking signals for mouse control. A Kalman filter is used as the state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. The experimental setup and some obtained results are presented.

  3. Gyroscope-Driven Mouse Pointer with an EMOTIV® EEG Headset and Data Analysis Based on Empirical Mode Decomposition

    PubMed Central

    Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos

    2013-01-01

    This paper presents a project on the development of a cursor control emulating the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals which are obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple, quick, yet effective computational tool aimed at reducing artifacts from head movements as well as detecting blinking signals for mouse control. A Kalman filter is used as the state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. The experimental setup and some obtained results are presented. PMID:23948873
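
    The jitter-removal step can be illustrated with a one-dimensional constant-position Kalman filter applied to a noisy cursor coordinate (a schematic sketch with made-up noise parameters, not the authors' implementation):

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=1e-1):
    """1-D constant-position Kalman filter: q is process noise variance,
    r is measurement noise variance; returns the filtered estimates."""
    x, p = z[0], 1.0
    out = []
    for meas in z:
        p = p + q                    # predict step
        k = p / (p + r)              # Kalman gain
        x = x + k * (meas - x)       # measurement update
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
noisy = 5.0 + 0.5 * rng.standard_normal(200)   # jittery cursor coordinate
smooth = kalman_smooth(noisy)
```

    The filtered coordinate fluctuates much less than the raw measurement while still tracking its level, which is exactly the jitter-removal behaviour wanted for a cursor.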

  4. A FEniCS-based programming framework for modeling turbulent flow by the Reynolds-averaged Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Mortensen, Mikael; Langtangen, Hans Petter; Wells, Garth N.

    2011-09-01

    Finding an appropriate turbulence model for a given flow case usually calls for extensive experimentation with both models and numerical solution methods. This work presents the design and implementation of a flexible, programmable software framework for assisting with numerical experiments in computational turbulence. The framework targets Reynolds-averaged Navier-Stokes models, discretized by finite element methods. The novel implementation makes use of Python and the FEniCS package, the combination of which leads to compact and reusable code, where model- and solver-specific code resemble closely the mathematical formulation of equations and algorithms. The presented ideas and programming techniques are also applicable to other fields that involve systems of nonlinear partial differential equations. We demonstrate the framework in two applications and investigate the impact of various linearizations on the convergence properties of nonlinear solvers for a Reynolds-averaged Navier-Stokes model.
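
    The kind of linearization whose convergence such a framework lets one compare can be illustrated by Picard (successive-substitution) iteration on a scalar fixed-point problem (a toy analogue; the framework itself applies this to discretized RANS systems):

```python
import math

def picard(f, u0, tol=1e-10, max_iter=200):
    """Picard (successive-substitution) iteration for u = f(u):
    repeat u_{k+1} = f(u_k) until the update is below tol."""
    u = u0
    for k in range(max_iter):
        u_new = f(u)
        if abs(u_new - u) < tol:
            return u_new, k + 1
        u = u_new
    return u, max_iter

# fixed point of cos(u), the classic scalar test case
root, iters = picard(math.cos, 1.0)
```

    Picard converges only linearly; comparing it against Newton-type linearizations on the same problem is the sort of numerical experiment the framework is built to make easy.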

  5. Apple Snail: a Bio Cleaner of the Water Free Surface.

    NASA Astrophysics Data System (ADS)

    Bassiri, Golnaz

    2005-11-01

    Oil spills from tankers represent a threat to shorelines and marine life. Despite continuing research, there has been little change in the fundamental technology for dealing with oil spills. Motivated by the need to develop new techniques to recover oil spills, we experimentally investigate the feeding strategy of apple snails at the water free surface, called surface film feeding. To feed on floating food (usually a thin layer of microorganisms), the apple snail forms a funnel with its foot and pulls the free surface toward the funnel. High-speed imaging and particle image velocimetry were used in the present investigation to measure the free surface motion and to investigate the mechanism used by the apple snails to pull the free surface. The results suggest that the snail pulls the free surface via the wavy motion of the muscles in its funnel.

  6. Aircraft interior noise reduction by alternate resonance tuning

    NASA Technical Reports Server (NTRS)

    Bliss, Donald B.; Gottwald, James A.; Srinivasan, Ramakrishna; Gustaveson, Mark B.

    1990-01-01

    Existing interior noise reduction techniques for aircraft fuselages perform reasonably well at higher frequencies, but are inadequate at lower frequencies, particularly with respect to the low blade passage harmonics with high forcing levels found in propeller aircraft. A method is being studied which considers an aircraft fuselage lined with panels alternately tuned to frequencies above and below the frequency that must be attenuated. Adjacent panels would oscillate at equal amplitude, to give equal source strength, but with opposite phase. Provided these adjacent panels are acoustically compact, the resulting cancellation causes the interior acoustic modes to become cutoff, and therefore be non-propagating and evanescent. This interior noise reduction method, called Alternate Resonance Tuning (ART), is currently being investigated both theoretically and experimentally. This new concept has potential application to reducing interior noise due to the propellers in advanced turboprop aircraft as well as for existing aircraft configurations.
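
    The cancellation ART relies on can be sketched numerically: two adjacent panels respond with equal amplitude but opposite phase, so their net source strength vanishes (an idealized illustration with an arbitrary frequency; real panels are detuned resonators with finite damping):

```python
import numpy as np

# Two adjacent panels tuned above and below the target frequency respond
# with equal amplitude but opposite phase; their combined source strength
# cancels, which is the mechanism ART exploits (idealized, lossless).
t = np.linspace(0.0, 1.0, 1000)
f = 100.0                                      # harmonic to attenuate, Hz
panel_a = np.sin(2 * np.pi * f * t)            # tuned above: phase 0
panel_b = np.sin(2 * np.pi * f * t + np.pi)    # tuned below: phase pi
net_source = panel_a + panel_b
```

    With zero net source strength over each compact panel pair, the corresponding interior acoustic mode is cut off rather than driven.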

  7. MSL: A Measure to Evaluate Three-dimensional Patterns in Gene Expression Data

    PubMed Central

    Gutiérrez-Avilés, David; Rubio-Escudero, Cristina

    2015-01-01

    Microarray technology is highly used in biological research environments due to its ability to monitor the RNA concentration levels. The analysis of the data generated represents a computational challenge due to the characteristics of these data. Clustering techniques are widely applied to create groups of genes that exhibit a similar behavior. Biclustering relaxes the constraints for grouping, allowing genes to be evaluated only under a subset of the conditions. Triclustering appears for the analysis of longitudinal experiments in which the genes are evaluated under certain conditions at several time points. These triclusters provide hidden information in the form of behavior patterns from temporal experiments with microarrays relating subsets of genes, experimental conditions, and time points. We present an evaluation measure for triclusters called Multi Slope Measure, based on the similarity among the angles of the slopes of each profile formed by the genes, conditions, and times of the tricluster. PMID:26124630
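
    The slope-angle idea behind such a measure can be sketched for two gene profiles: identical behavior patterns give zero angle difference regardless of expression level (the function name and data are ours; MSL aggregates this kind of comparison over the genes, conditions, and times of a tricluster):

```python
import numpy as np

def slope_angle_similarity(profile_a, profile_b):
    """Mean absolute difference between the slope angles of two profiles:
    0 means identical behavior patterns (an MSL-style comparison)."""
    ang_a = np.arctan(np.diff(profile_a))
    ang_b = np.arctan(np.diff(profile_b))
    return np.abs(ang_a - ang_b).mean()

g1 = np.array([1.0, 2.0, 3.0, 2.0])
g2 = np.array([5.0, 6.0, 7.0, 6.0])   # same shape, shifted expression level
g3 = np.array([3.0, 1.0, 4.0, 0.0])   # different behavior pattern
same = slope_angle_similarity(g1, g2)
diff = slope_angle_similarity(g1, g3)
```

    Because the measure depends on slopes rather than absolute values, co-regulated genes at different expression levels still score as similar.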

  8. Measurement of installation deformation of the acetabulum during prosthetic replacement of a hip joint using digital image correlation

    NASA Astrophysics Data System (ADS)

    Lei, Dong; Bai, Pengxiang; Zhu, Feipeng

    2018-01-01

    Nowadays, acetabulum prosthesis replacement is widely used in clinical medicine. However, there is no efficient way to evaluate the implantation effect of the prosthesis. Based on a modern photomechanics technique called digital image correlation (DIC), the evaluation method of the installation effect of the acetabulum was established during a prosthetic replacement of a hip joint. The DIC method determines strain field by comparing the speckle images between the undeformed sample and the deformed counterpart. Three groups of experiments were carried out to verify the feasibility of the DIC method on the acetabulum installation deformation test. Experimental results indicate that the installation deformation of acetabulum generally includes elastic deformation (corresponding to the principal strain of about 1.2%) and plastic deformation. When the installation angle is ideal, the plastic deformation can be effectively reduced, which could prolong the service life of acetabulum prostheses.
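
    The matching step at the heart of DIC can be sketched as a brute-force search for the subset position that maximizes zero-mean normalized cross-correlation (a rigid-shift toy example; practical DIC adds subpixel interpolation and deformation shape functions):

```python
import numpy as np

def match_subset(ref_subset, search_img):
    """Locate a reference speckle subset inside a search image by
    maximizing zero-mean normalized cross-correlation (core DIC step)."""
    h, w = ref_subset.shape
    r = ref_subset - ref_subset.mean()
    best, best_pos = -2.0, None
    for i in range(search_img.shape[0] - h + 1):
        for j in range(search_img.shape[1] - w + 1):
            s = search_img[i:i + h, j:j + w]
            s = s - s.mean()
            denom = np.sqrt((r * r).sum() * (s * s).sum())
            if denom == 0:
                continue
            c = (r * s).sum() / denom
            if c > best:
                best, best_pos = c, (i, j)
    return best_pos, best

rng = np.random.default_rng(1)
img = rng.random((20, 20))                    # synthetic speckle pattern
ref = img[4:9, 7:12].copy()                   # subset taken at (4, 7)
deformed = np.roll(np.roll(img, 2, axis=0), 3, axis=1)   # rigid shift (2, 3)
pos, score = match_subset(ref, deformed)
```

    Tracking every subset's displacement between the undeformed and deformed speckle images yields the displacement field from which strain is computed.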

  9. Enhancing PC Cluster-Based Parallel Branch-and-Bound Algorithms for the Graph Coloring Problem

    NASA Astrophysics Data System (ADS)

    Taoka, Satoshi; Takafuji, Daisuke; Watanabe, Toshimasa

    A branch-and-bound algorithm (BB for short) is the most general technique for dealing with various combinatorial optimization problems. Even so, its computation time is likely to increase exponentially, so we consider parallelization to reduce it. It has been reported that the computation time of a parallel BB depends heavily upon node-variable selection strategies. In a parallel BB, it is also necessary to prevent an increase in communication time, so it is important to pay attention to how many and what kind of nodes are to be transferred (called a sending-node selection strategy). In this paper, for the graph coloring problem, we propose some sending-node selection strategies for a parallel BB algorithm, adopting MPI for parallelization, and experimentally evaluate how these strategies affect the computation time of a parallel BB on a PC cluster network.
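
    The serial core that such a parallel BB distributes can be sketched as a backtracking k-colorability test, where returning from an infeasible subtree is the bounding step (a minimal sketch; the paper's contribution concerns which open nodes to ship between MPI processes):

```python
def color_bb(adj, k):
    """Decide whether the graph given as adjacency lists is k-colorable
    by branching on vertex colors and pruning infeasible subtrees."""
    n = len(adj)
    colors = [-1] * n

    def feasible(v, c):
        return all(colors[u] != c for u in adj[v])

    def branch(v):
        if v == n:
            return True
        for c in range(k):
            if feasible(v, c):
                colors[v] = c
                if branch(v + 1):
                    return True
                colors[v] = -1      # prune: undo and try the next color
        return False

    return branch(0)

# odd cycle C5: chromatic number 3
c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
two_colorable = color_bb(c5, 2)
three_colorable = color_bb(c5, 3)
```

    In the parallel setting, unexplored branches of this search tree are the "nodes" whose selection and transfer between workers the sending-node strategies govern.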

  10. Modelling proteins’ hidden conformations to predict antibiotic resistance

    PubMed Central

    Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.

    2016-01-01

    TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM’s specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models’ prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design. PMID:27708258
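
    The weighting idea behind "Boltzmann docking" can be sketched as a population-weighted average of per-state docking scores, so hidden conformations contribute in proportion to their equilibrium population (the scores and populations below are invented for illustration):

```python
import numpy as np

def boltzmann_weighted_score(scores, populations):
    """Population-weighted average docking score over MSM states:
    each conformational state contributes according to its equilibrium
    population (schematic form of the Boltzmann-docking idea)."""
    p = np.asarray(populations, dtype=float)
    p = p / p.sum()                      # normalize the state populations
    return float(np.dot(p, scores))

# hypothetical docking scores (kcal/mol) for three conformational states
scores = [-7.0, -9.5, -5.0]
populations = [0.6, 0.3, 0.1]
avg = boltzmann_weighted_score(scores, populations)
```

    A rarely populated state with a very favorable score is thereby discounted, which is how accounting for conformational heterogeneity changes specificity predictions.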

  11. The specific purpose Monte Carlo code McENL for simulating the response of epithermal neutron lifetime well logging tools

    NASA Astrophysics Data System (ADS)

    Prettyman, T. H.; Gardner, R. P.; Verghese, K.

    1993-08-01

    A new specific purpose Monte Carlo code called McENL for modeling the time response of epithermal neutron lifetime tools is described. The weight windows technique, employing splitting and Russian roulette, is used with an automated importance function based on the solution of an adjoint diffusion model to improve the code efficiency. Complete composition and density correlated sampling is also included in the code, and can be used to study the effect on tool response of small variations in the formation, borehole, or logging tool composition and density. An illustration of the latter application is given for the density of a thermal neutron filter. McENL was benchmarked against test-pit data for the Mobil pulsed neutron porosity tool and was found to be very accurate. Results of the experimental validation and details of code performance are presented.
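
    The weight-windows technique the code uses can be sketched as follows: particles below the window play Russian roulette, particles above it are split (the window bounds and function name are ours; McENL drives the window with an adjoint-diffusion importance function):

```python
import numpy as np

def roulette_and_split(weights, w_low=0.25, w_high=2.0, rng=None):
    """Weight-window variance reduction: particles below the window play
    Russian roulette (survivors are restored to w_low), particles above
    it are split into copies of reduced weight."""
    rng = rng or np.random.default_rng()
    survivors = []
    for w in weights:
        if w < w_low:
            # survive roulette with probability w / w_low, at weight w_low,
            # so the expected weight is preserved
            if rng.random() < w / w_low:
                survivors.append(w_low)
        elif w > w_high:
            n = int(w // w_high) + 1       # split into n equal copies
            survivors.extend([w / n] * n)
        else:
            survivors.append(w)
    return survivors

rng = np.random.default_rng(42)
out = roulette_and_split([0.05, 1.0, 5.0], rng=rng)
```

    Roulette culls low-weight histories cheaply while splitting devotes more samples to important ones, both without biasing the expected tally.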

  12. Reaction Buildup of PBX Explosives JOB-9003 under Different Initiation Pressures

    NASA Astrophysics Data System (ADS)

    Zhang, Xu; Wang, Yan-fei; Hung, Wen-bin; Gu, Yan; Zhao, Feng; Wu, Qiang; Yu, Xin; Yu, Heng

    2017-04-01

    An aluminum-based embedded multiple electromagnetic particle velocity gauge technique has been developed in order to measure the shock initiation behavior of JOB-9003 explosives. In addition, another gauge element called a shock tracker has been used to monitor the progress of the shock front as a function of time, thus providing a position-time trajectory of the wave front as it moves through the explosive sample. The data are used to determine the position and time of the shock-to-detonation transition. The experimental results show that the rise time of the Al-based electromagnetic particle velocity gauge is very fast, less than 20 ns, and that the reaction buildup velocity profiles and the shock-to-detonation transition position-time of the HMX-based PBX explosive JOB-9003, at 1-8 mm depth from the impact plane, are obtained with high accuracy under different initiation pressures.

  13. Semiclassical description of photoionization microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bordas, Ch.; Lepine, F.; Nicole, C.

    2003-07-01

    Recently, experiments have been reported where a geometrical interference pattern was observed when photoelectrons ejected in the threshold photoionization of xenon were detected in a velocity-map imaging apparatus [C. Nicole et al., Phys. Rev. Lett. 88, 133001 (2002)]. This technique, called photoionization microscopy, relies on the existence of interferences between various trajectories by which the electron moves from the atom to the plane of observation. Unlike previous predictions relevant to the hydrogenic case, the structure of the interference pattern evolves smoothly with the excess energy above the saddle point and is only weakly affected by the presence of continuum Stark resonances. In this paper, we describe a semiclassical analysis of this process and present numerical simulations in excellent agreement with the experimental results. It is shown that the background contribution dominates in the observations, as opposed to the behavior expected for hydrogenic systems where the interference pattern is qualitatively different on quasidiscrete Stark resonances.

  14. Listening to sounds from an exploding meteor and oceanic waves

    NASA Astrophysics Data System (ADS)

    Evers, L. G.; Haak, H. W.

    Low-frequency sound (infrasound) measurement has been selected within the Comprehensive Nuclear-Test-Ban Treaty (CTBT) as a technique to detect and identify possible nuclear explosions. The Seismology Division of the Royal Netherlands Meteorological Institute (KNMI) has operated an experimental infrasound array of 16 micro-barometers since 1999. Here we show the rare detection and identification of an exploding meteor above Northern Germany on November 8th, 1999 with data from the Deelen Infrasound Array (DIA). At the same time, sound was radiated from the Atlantic Ocean, south of Iceland, due to the atmospheric coupling of standing ocean waves, called microbaroms. Although the two sources differed by only 0.04 Hz in dominant frequency, DIA proved able to discriminate between these physically different sources of infrasound through its unique layout and instruments. The explosive power of the meteor, 1.5 kT TNT equivalent, is in the range of nuclear explosions and therefore relevant to the CTBT.

  15. Joint FACET: the Canada-Netherlands initiative to study multisensor data fusion systems

    NASA Astrophysics Data System (ADS)

    Bosse, Eloi; Theil, Arne; Roy, Jean; Huizing, Albert G.; van Aartsen, Simon

    1998-09-01

    This paper presents the progress of a collaborative effort between Canada and The Netherlands in analyzing multi-sensor data fusion systems, e.g. for potential application to their respective frigates. In view of their overlapping interest in studying and comparing the applicability and performance of advanced state-of-the-art Multi-Sensor Data Fusion (MSDF) techniques, the two research establishments involved have decided to join their efforts in the development of MSDF testbeds. This resulted in the so-called Joint-FACET, a highly modular and flexible series of applications that is capable of processing both real and synthetic input data. Joint-FACET allows the user to create and edit test scenarios with multiple ships, sensors and targets, to generate realistic sensor outputs, and to process these outputs with a variety of MSDF algorithms. These MSDF algorithms can also be tested using typical experimental data collected during live military exercises.

  16. Generation of uniform large-area very high frequency plasmas by launching two specific standing waves simultaneously

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsin-Liang, E-mail: hlchen@iner.gov.tw; Tu, Yen-Cheng; Hsieh, Cheng-Chang

    2014-09-14

    With the characteristics of higher electron density and lower ion bombardment energy, large-area VHF (very high frequency) plasma enhanced chemical vapor deposition has become essential manufacturing equipment for improving the production throughput and efficiency of thin film silicon solar cells. However, the combination of high frequency and large electrodes leads to the so-called standing wave effect, causing a serious problem for the deposition uniformity of silicon thin films. In order to address this issue, a technique based on the idea of simultaneously launching two standing waves that possess similar amplitudes and are out of phase by 90° in time and space is proposed in this study. A linear plasma reactor with a discharge length of 54 cm is tested with two different frequencies, 60 and 80 MHz. The experimental results show that the proposed technique can effectively reduce the non-uniformity of VHF plasmas from >±60% when only one standing wave is applied to <±10% once two specific standing waves are launched at the same time. Moreover, in terms of the reactor configuration adopted in this study, in which the standing wave effect along the much shorter dimension can be ignored, the proposed technique is applicable to different frequencies without the need to alter the number and arrangement of power feeding points.
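
    The uniformity argument can be sketched with the identity sin(kx)sin(wt) + cos(kx)cos(wt) = cos(kx - wt): two standing waves 90° out of phase in both time and space superpose into a travelling wave whose time-averaged amplitude no longer depends on position (an idealized lossless sketch; the grid dimensions below are ours):

```python
import numpy as np

x = np.linspace(0.0, 0.54, 200)          # 54 cm discharge length, in m
k, w = 2 * np.pi / 0.27, 2 * np.pi * 60e6   # illustrative wavenumber, 60 MHz
t = np.linspace(0.0, 1.0 / 60e6, 400)    # one RF period
X, T = np.meshgrid(x, t)
single = np.sin(k * X) * np.sin(w * T)               # one standing wave
combined = single + np.cos(k * X) * np.cos(w * T)    # add the 90-deg-shifted wave
rms_single = np.sqrt((single ** 2).mean(axis=0))     # RMS vs position x
rms_combined = np.sqrt((combined ** 2).mean(axis=0))
```

    The single standing wave has nodes and antinodes (strongly position-dependent RMS), while the combined field is a travelling wave with nearly uniform RMS along the electrode.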

  17. A large-scale measurement of dielectric properties of normal and malignant colorectal tissues obtained from cancer surgeries at Larmor frequencies.

    PubMed

    Li, Zhou; Deng, Guanhua; Li, Zhe; Xin, Sherman Xuegang; Duan, Song; Lan, Maoying; Zhang, Sa; Gao, Yixin; He, Jun; Zhang, Songtao; Tang, Hongming; Wang, Weiwei; Han, Shuai; Yang, Qing X; Zhuang, Ling; Hu, Jiani; Liu, Feng

    2016-11-01

    Knowledge of the dielectric properties of malignant human tissues is necessary for the recently developed magnetic resonance (MR) technique called MR electrical property tomography. This technique may be used in early tumor detection, based on the clear differentiation of the dielectric properties between normal and malignant tissues. However, the dielectric properties of malignant human tissues at Larmor frequencies are not completely available in the literature. In this study, the authors focused only on the dielectric properties of colorectal tumor tissue. The dielectric properties of 504 colorectal malignant samples excised from 85 patients were measured at Larmor frequencies using the precision open-ended coaxial probe method. The obtained complex-permittivity data were fitted to the single-pole Cole-Cole model. The median permittivity and conductivity for the malignant tissue samples were 79.3 and 0.881 S/m at 128 MHz, which were 14.6% and 17.0% higher, respectively, than those of normal tissue samples. Significant differences between normal and malignant tissues were found for the dielectric properties (p < 0.05). The experimental results indicated that the dielectric properties of colorectal tissue differ significantly between normal and malignant samples. This large-scale clinical measurement provides more subtle base data to validate the technique of MR electrical property tomography.
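
    The fitted model is the single-pole Cole-Cole complex relative permittivity. A sketch of its evaluation at 128 MHz (the parameter values below are illustrative round numbers, not the paper's fitted values):

```python
import numpy as np

def cole_cole(freq_hz, eps_inf, d_eps, tau, alpha, sigma_s):
    """Single-pole Cole-Cole complex relative permittivity:
    eps(w) = eps_inf + d_eps / (1 + (j*w*tau)^(1-alpha)) - j*sigma_s/(w*eps0)."""
    w = 2 * np.pi * freq_hz
    eps0 = 8.854e-12
    return (eps_inf
            + d_eps / (1 + (1j * w * tau) ** (1 - alpha))
            - 1j * sigma_s / (w * eps0))

eps = cole_cole(128e6, eps_inf=4.0, d_eps=70.0, tau=8e-12, alpha=0.1,
                sigma_s=0.7)
permittivity = eps.real
conductivity = -eps.imag * 2 * np.pi * 128e6 * 8.854e-12
```

    The real part gives the relative permittivity and the imaginary part the total conductivity (static plus dielectric loss), which is how measured spectra are reduced to the two quantities reported at 128 MHz.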

  18. Multi-Scale Stochastic Resonance Spectrogram for fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    He, Qingbo; Wu, Enhao; Pan, Yuanyuan

    2018-04-01

    It is not easy to identify an incipient defect of a rolling element bearing by analyzing the vibration data because of the disturbance of background noise. The weak and unrecognizable transient fault signal of a mechanical system can be enhanced by the stochastic resonance (SR) technique, which utilizes the noise in the system. However, it is challenging for the SR technique to identify sensitive fault information in non-stationary signals. This paper proposes a new method called the multi-scale SR spectrogram (MSSRS) for bearing defect diagnosis. The new method considers the non-stationary property of the defective bearing vibration signals and treats every scale of the time-frequency distribution (TFD) as a modulation system. The SR technique is then utilized on each modulation system for each frequency in the TFD. The SR results are sensitive to the defect information because the energy of the transient vibration is distributed in a limited frequency band in the TFD. Collecting the spectra of the SR outputs at all frequency scales then generates the MSSRS. The proposed MSSRS deals well with non-stationary transient signals and can highlight the defect-induced frequency component corresponding to the impulse information. Experimental results with practical defective bearing vibration data have shown that the proposed method outperforms former SR methods and exhibits a good application prospect in rolling element bearing fault diagnosis.

  19. Comprehensive approach to breast cancer detection using light: photon localization by ultrasound modulation and tissue characterization by spectral discrimination

    NASA Astrophysics Data System (ADS)

    Marks, Fay A.; Tomlinson, Harold W.; Brooksby, Glen W.

    1993-09-01

    A new technique called Ultrasound Tagging of Light (UTL) for imaging breast tissue is described. In this approach, photon localization in turbid tissue is achieved by cross-modulating a laser beam with focussed, pulsed ultrasound. Light which passes through the ultrasound focal spot is `tagged' with the frequency of the ultrasound pulse. The experimental system uses an Argon-Ion laser, a single PIN photodetector, and a 1 MHz fixed-focus pulsed ultrasound transducer. The utility of UTL as a photon localization technique in scattering media is examined using tissue phantoms consisting of gelatin and intralipid. In a separate study, in vivo optical reflectance spectrophotometry was performed on human breast tumors implanted intramuscularly and subcutaneously in nineteen nude mice. The validity of applying a quadruple wavelength breast cancer discrimination metric (developed using breast biopsy specimens) to the in vivo condition was tested. A scatter diagram for the in vivo model tumors based on this metric is presented using as the `normal' controls the hands and fingers of volunteers. Tumors at different growth stages were studied; these tumors ranged in size from a few millimeters to two centimeters. It is expected that when coupled with a suitable photon localization technique like UTL, spectral discrimination methods like this one will prove useful in the detection of breast cancer by non-ionizing means.
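
    The tagging principle can be sketched as lock-in demodulation: light crossing the ultrasound focus carries a modulation at the ultrasound frequency, which can be recovered from a much larger untagged background (the signal sizes and sampling below are ours for illustration):

```python
import numpy as np

# Sketch of detecting "tagged" light: a weak component modulated at the
# ultrasound frequency rides on a large untagged background; multiplying
# by a reference at that frequency and averaging (lock-in style) pulls
# out the tagged amplitude.
fs, f_us = 50e6, 1e6                 # sampling rate, ultrasound frequency (Hz)
t = np.arange(50000) / fs            # 1 ms record: exactly 1000 US periods
reference = np.sin(2 * np.pi * f_us * t)
detector = 1.0 + 0.01 * reference    # strong untagged light + weak tagged light
tagged_amplitude = 2.0 * np.mean(detector * reference)
```

    Because only photons that traversed the ultrasound focal spot carry this modulation, the recovered amplitude localizes where in the tissue the detected light has been.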

  20. Principle of the electrically induced Transient Current Technique

    NASA Astrophysics Data System (ADS)

    Bronuzzi, J.; Moll, M.; Bouvet, D.; Mapelli, A.; Sallese, J. M.

    2018-05-01

    In the field of detector development for High Energy Physics, the so-called Transient Current Technique (TCT) is used to characterize the electric field profile and the charge trapping inside silicon radiation detectors, where particles or photons create electron-hole pairs in the bulk of a semiconductor device such as a PiN diode. In the standard approach, the TCT signal originates from the free carriers generated close to the surface of a silicon detector by short pulses of light or by alpha particles. This work proposes a new principle of charge injection by means of lateral PN junctions implemented in one of the detector electrodes, called the electrical TCT (el-TCT). This technique is fully compatible with CMOS technology and therefore opens new perspectives for assessing the performance of radiation detectors.
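
    The signal a TCT measurement records can be sketched with the Shockley-Ramo theorem for a planar detector, where the weighting field is 1/d and the charge induced over the full transit equals the carrier charge (a constant-velocity toy model; the material parameters are textbook silicon values, not from this work):

```python
import numpy as np

def tct_current(q, mobility, e_field, d, n_steps=1000):
    """Transient current induced by one carrier drifting across a planar
    detector of thickness d: Shockley-Ramo gives i(t) = q * v * Ew with
    weighting field Ew = 1/d (constant drift velocity assumed)."""
    v = mobility * e_field            # drift velocity, m/s
    t_transit = d / v
    t = np.linspace(0.0, t_transit, n_steps)
    i = np.full_like(t, q * v / d)    # rectangular current pulse
    return t, i

q_e = 1.602e-19
# electrons in silicon: mobility ~0.135 m^2/Vs, 300 um thick sensor
t, i = tct_current(q_e, mobility=0.135, e_field=1e6, d=300e-6)
collected = float(i[0] * t[-1])       # induced charge over the transit
```

    A non-uniform electric field would make the velocity, and hence the pulse shape, depth-dependent, which is exactly why the pulse shape encodes the field profile that TCT is used to characterize.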

  1. Calculating phase equilibrium properties of plasma pseudopotential model using hybrid Gibbs statistical ensemble Monte-Carlo technique

    NASA Astrophysics Data System (ADS)

    Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.

    2015-11-01

    Earlier, a two-component pseudopotential plasma model, which we call the "shelf Coulomb" model, was developed. A Monte-Carlo study of the canonical NVT ensemble with periodic boundary conditions was undertaken to calculate equations of state, pair distribution functions, internal energies and other thermodynamic properties of the model. In the present work, an attempt is made to apply the so-called hybrid Gibbs statistical ensemble Monte-Carlo technique to this model. The first simulation results show qualitatively similar behavior in the critical point region for both methods. The Gibbs ensemble technique lets us estimate the position of the melting curve and the triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10-4.

  2. Designing a more efficient, effective and safe Medical Emergency Team (MET) service using data analysis

    PubMed Central

    Bilgrami, Irma; Bain, Christopher; Webb, Geoffrey I.; Orosz, Judit; Pilcher, David

    2017-01-01

    Introduction Hospitals have seen a rise in Medical Emergency Team (MET) reviews. We hypothesised that the commonest MET calls result in similar treatments. Our aim was to design a pre-emptive management algorithm that allowed direct institution of treatment to patients without having to wait for attendance of the MET team and to model its potential impact on MET call incidence and patient outcomes. Methods Data was extracted for all MET calls from the hospital database. Association rule data mining techniques were used to identify the most common combinations of MET call causes, outcomes and therapies. Results There were 13,656 MET calls during the 34-month study period in 7936 patients. The most common MET call was for hypotension [31%, (2459/7936)]. These MET calls were strongly associated with the immediate administration of intra-venous fluid (70% [1714/2459] v 13% [739/5477] p<0.001), unless the patient was located on a respiratory ward (adjusted OR 0.41 [95%CI 0.25–0.67] p<0.001), had a cardiac cause for admission (adjusted OR 0.61 [95%CI 0.50–0.75] p<0.001) or was under the care of the heart failure team (adjusted OR 0.29 [95%CI 0.19–0.42] p<0.001). Modelling the effect of a pre-emptive management algorithm for immediate fluid administration without MET activation on data from a test period of 24 months following the study period, suggested it would lead to a 68.7% (2541/3697) reduction in MET calls for hypotension and a 19.6% (2541/12938) reduction in total METs without adverse effects on patients. Conclusion Routinely collected data and analytic techniques can be used to develop a pre-emptive management algorithm to administer intravenous fluid therapy to a specific group of hypotensive patients without the need to initiate a MET call. This could lead both to earlier treatment for the patient and to fewer total MET calls. PMID:29281665
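
    The association-rule measures used in such mining can be sketched as the support and confidence of a rule over a set of MET-call "transactions" (the transaction data below are invented for illustration):

```python
def rule_stats(transactions, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent:
    support = fraction of transactions containing both item sets,
    confidence = fraction of antecedent transactions that also
    contain the consequent."""
    a = frozenset(antecedent)
    both = a | frozenset(consequent)
    n_a = sum(1 for t in transactions if a <= t)
    n_both = sum(1 for t in transactions if both <= t)
    support = n_both / len(transactions)
    confidence = n_both / n_a if n_a else 0.0
    return support, confidence

# invented MET-call records: each is the set of cause and therapy items
calls = [frozenset(t) for t in [
    {"hypotension", "iv_fluid"}, {"hypotension", "iv_fluid"},
    {"hypotension"}, {"tachycardia"}, {"hypoxia", "oxygen"},
]]
support, confidence = rule_stats(calls, {"hypotension"}, {"iv_fluid"})
```

    High-confidence rules of this form (e.g. hypotension implies intravenous fluid) are the candidates a pre-emptive management algorithm can act on before the full team attends.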

  3. Teaching Business Management to Engineers: The Impact of Interactive Lectures

    ERIC Educational Resources Information Center

    Rambocas, Meena; Sastry, Musti K. S.

    2017-01-01

    Some education specialists are challenging the use of traditional strategies in classrooms and are calling for the use of contemporary teaching and learning techniques. In response to these calls, many field experiments that compare different teaching and learning strategies have been conducted. However, to date, little is known on the outcomes of…

  4. 78 FR 69705 - 60-Day Notice of Proposed Information Collection: Mortgagee's Application for Partial Settlement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-20

    ... calling the toll-free Federal Relay Service at (800) 877-8339. FOR FURTHER INFORMATION CONTACT: Steve... through TTY by calling the toll-free Federal Relay Service at (800) 877-8339. Copies of available... techniques or other forms of information technology, e.g., permitting electronic submission of responses. HUD...

  5. Presentation and Impact of Experimental Techniques in Chemistry

    ERIC Educational Resources Information Center

    Sojka, Zbigniew; Che, Michel

    2008-01-01

    Laboratory and practical courses, where students become familiar with experimental techniques and learn to interpret data and relate them to appropriate theory, play a vital role in chemical education. In the large panoply of currently available techniques, it is difficult to find a rational and easy way to classify the techniques in relation to…

  6. Hamlet on the Macintosh: An Experimental Seminar That Worked.

    ERIC Educational Resources Information Center

    Strange, William C.

    1987-01-01

    Describes experimental college Shakespeare seminar that used Macintosh computers and software called ELIZA and ADVENTURE to develop character dialogs and adventure games based on Hamlet's characters and plots. Programming languages are examined, particularly their relationship to metaphor, and the use of computers in humanities is discussed. (LRW)

  7. CALL, Prewriting Strategies, and EFL Writing Quantity

    ERIC Educational Resources Information Center

    Shafiee, Sajad; Koosha, Mansour; Afghar, Akbar

    2015-01-01

    This study sought to explore the effect of teaching prewriting strategies through different methods of input delivery (i.e. conventional, web-based, and hybrid) on EFL learners' writing quantity. In its quasi-experimental study, the researchers recruited 98 available sophomores, and assigned them to three experimental groups (conventional,…

  8. Electroporation of DC-3F cells is a dual process.

    PubMed

    Wegner, Lars H; Frey, Wolfgang; Silve, Aude

    2015-04-07

    Treatment of biological material by pulsed electric fields is a versatile technique in biotechnology and biomedicine used, for example, in delivering DNA into cells (transfection), ablation of tumors, and food processing. Field exposure is associated with a membrane permeability increase usually ascribed to electroporation, i.e., formation of aqueous membrane pores. Knowledge of the underlying processes at the membrane level is predominantly built on theoretical considerations and molecular dynamics (MD) simulations. However, experimental data needed to monitor these processes with sufficient temporal resolution are scarce. The whole-cell patch-clamp technique was employed to investigate the effect of millisecond pulsed electric fields on DC-3F cells. Cellular membrane permeabilization was monitored by a conductance increase. For the first time, to our knowledge, it could be established experimentally that electroporation consists of two clearly separate processes: a rapid membrane poration (transient electroporation) that occurs while the membrane is depolarized or hyperpolarized to voltages beyond so-called threshold potentials (here, +201 mV and -231 mV, respectively) and is reversible within ∼100 ms after the pulse, and a long-term, or persistent, permeabilization covering the whole voltage range. The latter prevailed after the pulse for at least 40 min, the postpulse time span tested experimentally. With mildly depolarizing or hyperpolarizing pulses just above threshold potentials, the two processes could be separated, since persistent (but not transient) permeabilization required repetitive pulse exposure. Conductance increased stepwise and gradually with depolarizing and hyperpolarizing pulses, respectively. Persistent permeabilization could also be elicited by single depolarizing/hyperpolarizing pulses of very high field strength. 
Experimental measurements of propidium iodide uptake provided evidence of a real membrane phenomenon, rather than a mere patch-clamp artifact. In short, the response of DC-3F cells to strong pulsed electric fields was separated into a transient electroporation and a persistent permeabilization. The latter dominates postpulse membrane properties but to date has not been addressed by electroporation theory or MD simulations. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  9. Vibration isolation of automotive vehicle engine using periodic mounting systems

    NASA Astrophysics Data System (ADS)

    Asiri, S.

    2005-05-01

    Customer awareness and sensitivity to noise and vibration levels have been raised through increasing television advertising, in which the vehicle noise and vibration performance is used as the main market differentiator. This awareness has caused the transportation industry to regard noise and vibration as important criteria for improving market shares. One industry that tends to be in the forefront of the technology to reduce the levels of noise and vibration is the automobile industry. Hence, it is of practical interest to reduce vibration-induced structural responses. The automotive vehicle engine is the main source of mechanical vibrations of automobiles. The engine is vulnerable to the dynamic action caused by engine disturbance forces in various speed ranges. The vibrations of automotive vehicle engines may cause structural failure, malfunction of other parts, or discomfort to passengers because of high levels of noise and vibration. The mounts of the engines act as the transmission paths of the vibrations transmitted from the excitation sources to the body of the vehicle and passengers. Therefore, proper design and control of these mounts are essential to the attenuation of the vibration of platform structures. To improve the vibration-resistant capacity of engine mounting systems, vibration control techniques may be used. For instance, some passive and semi-active dissipation devices may be installed at mounts to enhance vibration energy absorbing capacity. In the proposed study, a radically different concept is presented whereby periodic mounts are considered because these mounts exhibit unique dynamic characteristics that make them act as mechanical filters for wave propagation. As a result, waves can propagate along the periodic mounts only within specific frequency bands called the "Pass Bands" and wave propagation is completely blocked within other frequency bands called the "Stop Bands". 
The experimental arrangements, including the design of mounting systems with plain and periodic mounts will be studied first. The dynamic characteristics of such systems will be obtained experimentally in both cases. The tests will be then carried out to study the performance characteristics of periodic mounts with geometrical and/or material periodicity. The effectiveness of the periodicity on the vibration levels of mounting systems will be demonstrated theoretically and experimentally. Finally, the experimental results will be compared with the theoretical predictions.

  10. Influence of atmospheric properties on detection of wood-warbler nocturnal flight calls

    NASA Astrophysics Data System (ADS)

    Horton, Kyle G.; Stepanian, Phillip M.; Wainwright, Charlotte E.; Tegeler, Amy K.

    2015-10-01

    Avian migration monitoring can take on many forms; however, monitoring active nocturnal migration of land birds is limited to a few techniques. Avian nocturnal flight calls are currently the only method for describing migrant composition at the species level. However, as this method develops, more information is needed to understand the sources of variation in call detection. Additionally, few studies examine how detection probabilities differ under varying atmospheric conditions. We use nocturnal flight call recordings from captive individuals to explore the dependence of flight call detection on atmospheric temperature and humidity. Height or distance from origin had the largest influence on call detection, while temperature and humidity also influenced detectability at higher altitudes. Because flight call detection varies with both atmospheric conditions and flight height, improved monitoring across time and space will require correction for these factors to generate standardized metrics of songbird migration.

  11. Vertical transmission of learned signatures in a wild parrot

    PubMed Central

    Berg, Karl S.; Delgado, Soraya; Cortopassi, Kathryn A.; Beissinger, Steven R.; Bradbury, Jack W.

    2012-01-01

    Learned birdsong is a widely used animal model for understanding the acquisition of human speech. Male songbirds often learn songs from adult males during sensitive periods early in life, and sing to attract mates and defend territories. In presumably all of the 350+ parrot species, individuals of both sexes commonly learn vocal signals throughout life to satisfy a wide variety of social functions. Despite intriguing parallels with humans, there have been no experimental studies demonstrating learned vocal production in wild parrots. We studied contact call learning in video-rigged nests of a well-known marked population of green-rumped parrotlets (Forpus passerinus) in Venezuela. Both sexes of naive nestlings developed individually unique contact calls in the nest, and we demonstrate experimentally that signature attributes are learned from both primary care-givers. This represents the first experimental evidence for the mechanisms underlying the transmission of a socially acquired trait in a wild parrot population. PMID:21752824

  12. Awareness Effects of a Youth Suicide Prevention Media Campaign in Louisiana

    ERIC Educational Resources Information Center

    Jenner, Eric; Jenner, Lynne Woodward; Matthews-Sterling, Maya; Butts, Jessica K.; Williams, Trina Evans

    2010-01-01

    Research on the efficacy of mediated suicide awareness campaigns is limited. The impacts of a state-wide media campaign on call volumes to a national hotline were analyzed to determine if the advertisements have raised awareness of the hotline. We use a quasi-experimental design to compare call volumes from ZIP codes where and when the campaign is…

  13. Graphics, Playability and Social Interaction, the Greatest Motivations for Playing Call of Duty. Educational Reflections

    ERIC Educational Resources Information Center

    Marcano Lárez, Beatriz Elena

    2014-01-01

    War videogames raise a lot of controversy in the educational field and are by far the most played videogames worldwide. This study explores the factors that encouraged gamers to choose war videogames with a sample of 387 Call of Duty players. The motivational factors were pinpointed using a non-experimental descriptive exploratory study through an…

  14. A Multiscale Virtual Fabrication and Lattice Modeling Approach for the Fatigue Performance Prediction of Asphalt Concrete

    NASA Astrophysics Data System (ADS)

    Dehghan Banadaki, Arash

    Predicting the ultimate performance of asphalt concrete under realistic loading conditions is the main key to developing better-performing materials, designing long-lasting pavements, and performing reliable lifecycle analysis for pavements. The fatigue performance of asphalt concrete depends on the mechanical properties of the constituent materials, namely asphalt binder and aggregate. This dependent link between performance and mechanical properties is extremely complex, and experimental techniques often are used to try to characterize the performance of hot mix asphalt. However, given the seemingly uncountable number of mixture designs and loading conditions, it is simply not economical to try to understand and characterize the material behavior solely by experimentation. It is well known that analytical and computational modeling methods can be combined with experimental techniques to reduce the costs associated with understanding and characterizing the mechanical behavior of the constituent materials. This study aims to develop a multiscale micromechanical lattice-based model to predict cracking in asphalt concrete using component material properties. The proposed algorithm, while capturing different phenomena for different scales, also minimizes the need for laboratory experiments. The developed methodology builds on a previously developed lattice model and the viscoelastic continuum damage model to link the component material properties to the mixture fatigue performance. The resulting lattice model is applied to predict the dynamic modulus mastercurves for different scales. A framework for capturing the so-called structuralization effects is introduced that significantly improves the accuracy of the modulus prediction. Furthermore, air voids are added to the model to help capture this important micromechanical feature that affects the fatigue performance of asphalt concrete as well as the modulus value. 
The effects of rate dependency are captured by implementing the viscoelastic fracture criterion. In the end, an efficient cyclic loading framework is developed to evaluate the damage accumulation in the material that is caused by long-sustained cyclic loads.

  15. Directional frequency and recording (DIFAR) sensors in seafloor recorders to locate calling bowhead whales during their fall migration.

    PubMed

    Greene, Charles R; McLennan, Miles Wm; Norman, Robert G; McDonald, Trent L; Jakubczak, Ray S; Richardson, W John

    2004-08-01

    Bowhead whales, Balaena mysticetus, migrate west during fall approximately 10-75 km off the north coast of Alaska, passing the petroleum developments around Prudhoe Bay. Oil production operations on an artificial island 5 km offshore create sounds heard by some whales. As part of an effort to assess whether migrating whales deflect farther offshore at times with high industrial noise, an acoustical approach was selected for localizing calling whales. The technique incorporated DIFAR (directional frequency and recording) sonobuoy techniques. An array of 11 DASARs (directional autonomous seafloor acoustic recorders) was built and installed with unit-to-unit separation of 5 km. When two or more DASARs detected the same call, the whale location was determined from the bearing intersections. This article describes the acoustic methods used to determine the locations of the calling bowhead whales and shows the types and precision of the data acquired. Calibration transmissions at GPS-measured times and locations provided measures of the individual DASAR clock drift and directional orientation. The standard error of the bearing measurements at distances of 3-4 km was approximately 1.35 degrees after corrections for gain imbalance in the two directional sensors. During 23 days in 2002, 10,587 bowhead calls were detected and 8383 were localized.
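The paper's localization solver is not reproduced in the abstract; the bearing-intersection idea it describes (a whale position from two compass bearings measured at known sensor positions) can be sketched in the plane as follows. Sensor coordinates and bearings below are hypothetical, not from the study:

```python
import math

def locate(p1, brg1, p2, brg2):
    """Intersect two bearing lines (compass degrees, clockwise from north).
    Positions are (east, north) in km; returns the estimated source position."""
    d1 = (math.sin(math.radians(brg1)), math.cos(math.radians(brg1)))
    d2 = (math.sin(math.radians(brg2)), math.cos(math.radians(brg2)))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 via a 2x2 linear system (Cramer's rule).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (bx * (-d2[1]) - (-d2[0]) * by) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two sensors 5 km apart; bearings of 45° and 315° meet due north of the midpoint.
print(locate((0.0, 0.0), 45.0, (5.0, 0.0), 315.0))  # → (2.5, 2.5) approximately
```

With more than two detecting sensors, a real system would combine all bearing pairs (or do a least-squares fit) and propagate the ~1.35° bearing error into a position uncertainty; this sketch covers only the two-sensor geometry.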

  16. Variable horizon in a peridynamic medium

    DOE PAGES

    Silling, Stewart A.; Littlewood, David J.; Seleson, Pablo

    2015-12-10

    Here, a notion of material homogeneity is proposed for peridynamic bodies with variable horizon but constant bulk properties. A relation is derived that scales the force state according to the position-dependent horizon while keeping the bulk properties unchanged. Using this scaling relation, if the horizon depends on position, artifacts called ghost forces may arise in a body under a homogeneous deformation. These artifacts depend on the second derivative of the horizon and can be reduced by employing a modified equilibrium equation using a new quantity called the partial stress. Bodies with piecewise constant horizon can be modeled without ghost forces by using a simpler technique called a splice. As a limiting case of zero horizon, both the partial stress and splice techniques can be used to achieve local-nonlocal coupling. Computational examples, including dynamic fracture in a one-dimensional model with local-nonlocal coupling, illustrate the methods.

  17. Acoustic signals of baby black caimans.

    PubMed

    Vergne, Amélie L; Aubin, Thierry; Taylor, Peter; Mathevon, Nicolas

    2011-12-01

    In spite of the importance of crocodilian vocalizations for the understanding of the evolution of sound communication in Archosauria and due to the small number of experimental investigations, information concerning the vocal world of crocodilians is limited. By studying black caimans Melanosuchus niger in their natural habitat, here we supply the experimental evidence that juvenile crocodilians can use a graded sound system in order to elicit adapted behavioral responses from their mother and siblings. By analyzing the acoustic structure of calls emitted in two different situations ('undisturbed context', during which spontaneous calls of juvenile caimans were recorded without perturbing the group, and a simulated 'predator attack', during which calls were recorded while shaking juveniles) and by testing their biological relevance through playback experiments, we reveal the existence of two functionally different types of juvenile calls that produce a different response from the mother and other siblings. Young black caimans can thus modulate the structure of their vocalizations along an acoustic continuum as a function of the emission context. Playback experiments show that both mother and juveniles discriminate between these 'distress' and 'contact' calls. Acoustic communication is thus an important component mediating relationships within family groups in caimans as it is in birds, their archosaurian relatives. Although probably limited, the vocal repertoire of young crocodilians is capable of transmitting the information necessary for allowing siblings and mother to modulate their behavior. Copyright © 2011 Elsevier GmbH. All rights reserved.

  18. Vocal Learning via Social Reinforcement by Infant Marmoset Monkeys.

    PubMed

    Takahashi, Daniel Y; Liao, Diana A; Ghazanfar, Asif A

    2017-06-19

    For over half a century now, primate vocalizations have been thought to undergo little or no experience-dependent acoustic changes during development [1]. If any changes are apparent, then they are routinely (and quite reasonably) attributed to the passive consequences of growth. Indeed, previous experiments on squirrel monkeys and macaque monkeys showed that social isolation [2, 3], deafness [2], cross-fostering [4] and parental absence [5] have little or no effect on vocal development. Here, we explicitly test in marmoset monkeys-a very vocal and cooperatively breeding species [6]-whether the transformation of immature into mature contact calls by infants is influenced by contingent parental vocal feedback. Using a closed-loop design, we experimentally provided more versus less contingent vocal feedback to twin infant marmoset monkeys over their first 2 months of life, the interval during which their contact calls transform from noisy, immature calls to tonal adult-like "phee" calls [7, 8]. Infants who received more contingent feedback had a faster rate of vocal development, producing mature-sounding contact calls earlier than the other twin. The differential rate of vocal development was not linked to genetics, perinatal experience, or body growth; nor did the amount of contingency influence the overall rate of spontaneous vocal production. Thus, we provide the first experimental evidence for production-related vocal learning during the development of a nonhuman primate. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Reverse engineering highlights potential principles of large gene regulatory network design and learning.

    PubMed

    Carré, Clément; Mas, André; Krouk, Gabriel

    2017-01-01

    Inferring transcriptional gene regulatory networks from transcriptomic datasets is a key challenge of systems biology, with potential impacts ranging from medicine to agronomy. There are several techniques presently used to experimentally assay transcription factor-to-target relationships, defining important information about real gene regulatory network connections. These techniques include classical ChIP-seq, yeast one-hybrid, or more recently, DAP-seq or target technologies. These techniques are usually used to validate algorithm predictions. Here, we developed a reverse engineering approach based on mathematical and computer simulation to evaluate the impact that this prior knowledge on gene regulatory networks may have on training machine learning algorithms. First, we developed a gene regulatory network-simulating engine called FRANK (Fast Randomizing Algorithm for Network Knowledge) that is able to simulate large gene regulatory networks (containing 10^4 genes) with characteristics of gene regulatory networks observed in vivo. FRANK also generates stable or oscillatory gene expression directly produced by the simulated gene regulatory networks. The development of FRANK leads to important general conclusions concerning the design of large and stable gene regulatory networks harboring scale-free properties (built ex nihilo). In combination with a supervised (prior-knowledge-accepting) support vector machine algorithm, we (i) address biologically oriented questions concerning our capacity to accurately reconstruct gene regulatory networks, and in particular demonstrate that prior-knowledge structure is crucial for accurate learning, and (ii) draw conclusions to inform experimental design to perform learning able to solve gene regulatory networks in the future. 
By demonstrating that our predictions concerning the influence of the prior-knowledge structure on support vector machine learning capacity hold true on real data (Escherichia coli K14 network reconstruction using network and transcriptomic data), we show that the formalism used to build FRANK can to some extent be a reasonable model for gene regulatory networks in real cells.
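FRANK's own algorithm is not given in the abstract; the scale-free property it targets is most commonly produced by preferential attachment, which a toy directed-network generator (parameters and structure entirely hypothetical, not FRANK's actual method) can illustrate:

```python
import random

random.seed(0)  # reproducible toy example

def preferential_attachment(n_genes, m=1):
    """Toy scale-free regulatory network: each new gene is wired to an existing
    regulator chosen roughly in proportion to its current degree ('rich get richer')."""
    edges = []
    weighted = [0]  # multiset of gene ids, repeated once per attachment unit
    for new in range(1, n_genes):
        for _ in range(m):
            regulator = random.choice(weighted)
            edges.append((regulator, new))
            weighted.append(regulator)
        weighted.append(new)
    return edges

net = preferential_attachment(1000)
degrees = [0] * 1000
for a, b in net:
    degrees[a] += 1
    degrees[b] += 1
# A few hub regulators accumulate most connections, the hallmark of scale-free graphs.
print(len(net), max(degrees))
```

A simulator like FRANK would additionally attach dynamics (activation/repression kinetics) to each edge to produce the stable or oscillatory expression patterns the abstract mentions; this sketch covers only the topology.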

  20. Application of optical spectroscopic techniques for disease diagnosis

    NASA Astrophysics Data System (ADS)

    Saha, Anushree

    Optical spectroscopy, a truly non-invasive tool for remote diagnostics, is capable of providing valuable information on the structure and function of molecules. However, most spectroscopic techniques suffer from drawbacks, which limit their application. As a part of my dissertation work, I have developed theoretical and experimental methods to address the above-mentioned issues. I have successfully applied these methods for monitoring the physical, chemical and biochemical parameters of biomolecules involved in some specific life-threatening diseases like lead poisoning and age-related macular degeneration (AMD). I presented optical studies of melanosomes, which are one of the vital organelles in the human eye, also known to be responsible for a disease called age-related macular degeneration (AMD), a condition of advanced degeneration which causes progressive blindness. I used Raman spectroscopy to first chemically identify the composition of melanosome, and then monitor the changes in its functional and chemical behavior due to long-term exposure to visible light. The above study, apart from explaining the role of melanosomes in AMD, also sets the threshold power for lasers used in surgeries and other clinical applications. In the second part of my dissertation, a battery of spectroscopic techniques was successfully applied to explore the different binding sites of lead ions with the most abundant carrier protein molecule in our circulatory system, human serum albumin. I applied optical spectroscopic tools for ultrasensitive detection of heavy metal ions in solution, which can also be used for lead detection at a very early stage of lead poisoning. Apart from this, I used Raman microspectroscopy to study the chemical alteration occurring inside a prostate cancer cell as a result of a treatment with a low-concentration aqueous extract of a prospective drug, Nerium Oleander. 
The experimental methods used in this study have tremendous potential for clinical application and should gain widespread acceptance within the next few years, moving from bench to bedside as an inexpensive and non-invasive tool compared with other technologies.

  1. Uncovering Pompeii: Examining Evidence.

    ERIC Educational Resources Information Center

    Yell, Michael M.

    2001-01-01

    Presents a lesson plan on Pompeii (Italy) for middle school students that utilizes a teaching technique called interactive presentation. Describes the technique's five phases: (1) discrepant event inquiry; (2) discussion/presentation; (3) cooperative learning activity; (4) writing for understanding activity; and (5) whole-class discussion and…

  2. Alarm Fatigue vs User Expectations Regarding Context-Aware Alarm Handling in Hospital Environments Using CallMeSmart.

    PubMed

    Solvoll, Terje; Arntsen, Harald; Hartvigsen, Gunnar

    2017-01-01

    Surveys and research show that mobile communication systems in hospital settings are old and cause frequent interruptions. In the quest to remedy this, an Android-based communication system called CallMeSmart tries to encapsulate most of the frequent communication into one handheld device, focusing on reducing interruptions and at the same time making the workday easier for healthcare workers. The objective of CallMeSmart is to use context-awareness techniques to automatically monitor the availability of physicians and nurses, and use this information to prevent or route phone calls, text messages, pages and alarms that would otherwise compromise patient care. In this paper, we present the results from interviewing nurses on alarm fatigue and their expectations regarding context-aware alarm handling using CallMeSmart.

  3. Experimental scrambling and noise reduction applied to the optical encryption of QR codes.

    PubMed

    Barrera, John Fredy; Vélez, Alejandro; Torroba, Roberto

    2014-08-25

    In this contribution, we implement two techniques to reinforce optical encryption, which we restrict in particular to QR codes, but which could be applied in a general encoding situation. To our knowledge, we present the first experimental positional optical scrambling merged with an optical encryption procedure. The inclusion of an experimental scrambling technique in an optical encryption protocol, in particular dealing with a QR code "container", adds more protection to the encoding proposal. Additionally, a nonlinear normalization technique is applied to reduce the noise over the recovered images besides increasing the security against attacks. The opto-digital techniques employ an interferometric arrangement and a joint transform correlator encrypting architecture. The experimental results demonstrate the capability of the methods to accomplish the task.

  4. 78 FR 70957 - 60-Day Notice of Proposed Information Collection: HUD-Owned Real Estate Good Neighbor Next Door...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-27

    ... calling the toll-free Federal Relay Service at (800) 877-8339. FOR FURTHER INFORMATION CONTACT: Ivery W... number through TTY by calling the toll-free Federal Relay Service at (800) 877-8339. Copies of available... automated collection techniques or other forms of information technology, e.g., permitting electronic...

  5. 78 FR 67384 - 60-Day Notice of Proposed Information Collection: FHA-Insured Mortgage Loan Servicing Involving...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-12

    ... hearing or speech impairments may access this number through TTY by calling the toll-free Federal Relay... calling the toll-free Federal Relay Service at (800) 877-8339. Copies of available documents submitted to... techniques or other forms of information technology, e.g., permitting electronic submission of responses. HUD...

  6. 78 FR 75364 - 60-Day Notice of Proposed Information Collection: Application for FHA Insured Mortgages

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-11

    ... through TTY by calling the toll-free Federal Relay Service at (800) 877-8339. FOR FURTHER INFORMATION... through TTY by calling the toll-free Federal Relay Service at (800) 877-8339. Copies of available... techniques or other forms of information technology, e.g., permitting electronic submission of responses. HUD...

  7. Encourage Students to Read through the Use of Data Visualization

    ERIC Educational Resources Information Center

    Bandeen, Heather M.; Sawin, Jason E.

    2012-01-01

    Instructors are always looking for new ways to engage students in reading assignments. The authors present a few techniques that rely on a web-based data visualization tool called Wordle (wordle.net). Wordle creates word frequency representations called word clouds. The larger a word appears within a cloud, the more frequently it occurs within a…
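Wordle's renderer itself is proprietary to its site, but the word-frequency weighting it visualizes (larger word = higher count) is straightforward to sketch; the sample text below is hypothetical:

```python
import re
from collections import Counter

def word_weights(text, top=5):
    """Word-frequency weights like those a word cloud uses to size words."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top)

sample = "to be or not to be that is the question"
print(word_weights(sample, top=2))  # → [('to', 2), ('be', 2)]
```

A classroom variant would feed in an assigned reading and drop stop words ("the", "of", ...) before counting, so the cloud highlights content-bearing vocabulary.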

  8. Fluorescence exclusion: A simple versatile technique to calculate cell volumes and local heights (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Thouvenin, Olivier; Fink, Mathias; Boccara, A. Claude

    2017-02-01

    Understanding volume regulation during mitosis is technically challenging. Indeed, very sensitive, non-invasive imaging over time scales ranging from seconds to hours and over large fields is required. Therefore, Quantitative Phase Imaging (QPI) would be a perfect tool for such a project. However, because of asymmetric protein segregation during mitosis, an efficient separation of the refractive index and the height in the phase signal is required. Even though many strategies to make such a separation have been developed, they are usually difficult to implement, have poor sensitivity, or cannot be performed in living cells or in a single shot. In this paper, we will discuss the use of a new technique called fluorescence exclusion to perform volume measurements. By coupling this technique with a simultaneous phase measurement, we were also able to recover the refractive index inside the cells. Fluorescence exclusion is a versatile and powerful technique that allows the volume measurement of many types of cells. A fluorescent dye, which cannot penetrate inside the cells, is mixed with the external medium in a confined environment. Therefore, the fluorescent signal depends on the inverse of the object's height. We could demonstrate both experimentally and theoretically that fluorescence exclusion can accurately measure cell volumes, even for cells much higher than the depth of focus of the objective. An accurate local height and RI measurement can also be obtained for smaller cells. We will also discuss the way to optimize the confinement of the observation chamber, either mechanically or optically.
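The abstract states the principle (fluorescence drops where the cell excludes dye) but not the arithmetic; under that relation, local height follows from the fluorescence deficit, and volume from integrating it over the cell footprint. All numbers below are hypothetical:

```python
# Minimal sketch of the fluorescence-exclusion height calculation: in a chamber
# of known height filled with a membrane-impermeant dye, the cell excludes dye,
# so local fluorescence drops in proportion to the local cell height.

CHAMBER_HEIGHT_UM = 10.0  # chamber height, assumed known from calibration

def cell_height(f_local, f_background):
    """Local cell height from the fluorescence deficit: h = H * (1 - F/F0)."""
    return CHAMBER_HEIGHT_UM * (1.0 - f_local / f_background)

def cell_volume(f_map, f_background, pixel_area_um2):
    """Integrate the height map over the pixel area to get volume (um^3)."""
    return sum(cell_height(f, f_background) for f in f_map) * pixel_area_um2

# A pixel whose signal is 40% of background implies a 6 um tall column of cell.
print(cell_height(0.4, 1.0))  # → 6.0
```

In practice the background level F0 is taken from cell-free regions of the same image, which makes the measurement insensitive to illumination drift.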

  9. Prediction of physical protein-protein interactions

    NASA Astrophysics Data System (ADS)

    Szilágyi, András; Grimm, Vera; Arakaki, Adrián K.; Skolnick, Jeffrey

    2005-06-01

    Many essential cellular processes such as signal transduction, transport, cellular motion and most regulatory mechanisms are mediated by protein-protein interactions. In recent years, new experimental techniques have been developed to discover the protein-protein interaction networks of several organisms. However, the accuracy and coverage of these techniques have proven to be limited, and computational approaches remain essential both to assist in the design and validation of experimental studies and for the prediction of interaction partners and detailed structures of protein complexes. Here, we provide a critical overview of existing structure-independent and structure-based computational methods. Although these techniques have significantly advanced in the past few years, we find that most of them are still in their infancy. We also provide an overview of experimental techniques for the detection of protein-protein interactions. Although the developments are promising, false positive and false negative results are common, and reliable detection is possible only by taking a consensus of different experimental approaches. The shortcomings of experimental techniques affect both the further development and the fair evaluation of computational prediction methods. For an adequate comparative evaluation of prediction and high-throughput experimental methods, an appropriately large benchmark set of biophysically characterized protein complexes would be needed, but is sorely lacking.

  10. Helping Students Make Sense of Graphs: An Experimental Trial of SmartGraphs Software

    ERIC Educational Resources Information Center

    Zucker, Andrew; Kay, Rachel; Staudt, Carolyn

    2014-01-01

    Graphs are commonly used in science, mathematics, and social sciences to convey important concepts; yet students at all ages demonstrate difficulties interpreting graphs. This paper reports on an experimental study of free, Web-based software called SmartGraphs that is specifically designed to help students overcome their misconceptions regarding…

  11. Evidence-Based Practices in a Changing World: Reconsidering the Counterfactual in Education Research

    ERIC Educational Resources Information Center

    Lemons, Christopher J.; Fuchs, Douglas; Gilbert, Jennifer K.; Fuchs, Lynn S.

    2014-01-01

    Experimental and quasi-experimental designs are used in educational research to establish causality and develop effective practices. These research designs rely on a counterfactual model that, in simple form, calls for a comparison between a treatment group and a control group. Developers of educational practices often assume that the population…

  12. Quantum-tomographic cryptography with a semiconductor single-photon source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaszlikowski, D.; Yang, L.J.; Yong, L.S.

    2005-09-15

    We analyze the security of so-called quantum-tomographic cryptography with the source producing entangled photons via an experimental scheme proposed by Fattal et al. [Phys. Rev. Lett. 92, 37903 (2004)]. We determine the range of the experimental parameters for which the protocol is secure against the most general incoherent attacks.

  13. Comparison of VRX CT scanners geometries

    NASA Astrophysics Data System (ADS)

    DiBianca, Frank A.; Melnyk, Roman; Duckworth, Christopher N.; Russ, Stephan; Jordan, Lawrence M.; Laughter, Joseph S.

    2001-06-01

    A technique called Variable-Resolution X-ray (VRX) detection greatly increases the spatial resolution in computed tomography (CT) and digital radiography (DR) as the field size decreases. The technique is based on a principle called `projective compression' that allows both the resolution element and the sampling distance of a CT detector to scale with the subject or field size. For very large (40 - 50 cm) field sizes, resolution exceeding 2 cy/mm is possible and for very small fields, microscopy is attainable with resolution exceeding 100 cy/mm. This paper compares the benefits obtainable with two different VRX detector geometries: the single-arm geometry and the dual-arm geometry. The analysis is based on Monte Carlo simulations and direct calculations. The results of this study indicate that the dual-arm system appears to have more advantages than the single-arm technique.

  14. Understanding the Effect of Grain Boundary Character on Dynamic Recrystallization in Stainless Steel 316L

    NASA Astrophysics Data System (ADS)

    Beck, Megan; Morse, Michael; Corolewski, Caleb; Fritchman, Koyuki; Stifter, Chris; Poole, Callum; Hurley, Michael; Frary, Megan

    2017-08-01

    Dynamic recrystallization (DRX) occurs during high-temperature deformation in metals and alloys with low to medium stacking fault energies. Previous simulations and experimental research have shown the effect of temperature and grain size on DRX behavior, but not the effect of the grain boundary character distribution. To investigate the effects of the distribution of grain boundary types, experimental testing was performed on stainless steel 316L specimens with different initial special boundary fractions (SBF). This work was completed in conjunction with computer simulations that used a modified Monte Carlo method which allowed for the addition of anisotropic grain boundary energies using orientation data from electron backscatter diffraction (EBSD). The correlation of the experimental and simulation work allows for a better understanding of how the input parameters in the simulations correspond to what occurs experimentally. Results from both simulations and experiments showed that a higher fraction of so-called "special" boundaries ( e.g., Σ3 twin boundaries) delayed the onset of recrystallization to larger strains and that it is energetically favorable for nuclei to form on triple junctions without these so-called "special" boundaries.

  15. Parent-offspring communication in the western sandpiper

    USGS Publications Warehouse

    Johnson, M.; Aref, S.; Walters, J.R.

    2008-01-01

    Western sandpiper (Calidris mauri) chicks are precocial and leave the nest shortly after hatch to forage independently. Chicks require thermoregulatory assistance from parents (brooding) for 5-7 days posthatch, and parents facilitate chick survival for 2-3 weeks posthatch by leading and defending chicks. Parental vocal signals are likely involved in protecting chicks from predators, preventing them from wandering away and becoming lost and leading them to good foraging locations. Using observational and experimental methods in the field, we describe and demonstrate the form and function of parent-chick communication in the western sandpiper. We document 4 distinct calls produced by parents that are apparently directed toward their chicks (brood, gather, alarm, and freeze calls). Through experimental playback of parental and non-parental vocalizations to chicks in a small arena, we demonstrated the following: 1) chicks respond to the alarm call by vocalizing relatively less often and moving away from the signal source, 2) chicks respond to the gather call by vocalizing relatively more often and moving toward the signal source, and 3) chicks respond to the freeze call by vocalizing relatively less often and crouching motionless on the substrate for extended periods of time. Chicks exhibited consistent directional movement and space use to parental and non-parental signals. Although fewer vocalizations were given in response to non-parental signals, which may indicate a weaker response to unfamiliar individuals, the relative number of chick calls given to each type of call signal was consistent between parental and non-parental signals. We also discovered 2 distinct chick vocalizations (chick-contact and chick-alarm calls) during arena playback experiments. Results indicate that sandpiper parents are able to elicit antipredatory chick behaviors and direct chick movement and vocalizations through vocal signals. 
Future study of parent-offspring communication should determine whether shorebird chicks exhibit parental recognition through vocalizations and the role of chick vocalizations in parental behavior. © The Author 2008. Published by Oxford University Press on behalf of the International Society for Behavioral Ecology. All rights reserved.

  16. Spectrum transformation for divergent iterations

    NASA Technical Reports Server (NTRS)

    Gupta, Murli M.

    1991-01-01

    Certain spectrum transformation techniques are described that can be used to transform a diverging iteration into a converging one. Two techniques, called spectrum scaling and spectrum enveloping, are considered, and methods for obtaining the optimum values of the transformation parameters are discussed. Numerical examples show how these techniques transform diverging iterations into converging ones; the same approach can also be used to accelerate the convergence of otherwise convergent iterations.
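    The general idea can be illustrated with a toy fixed-point iteration. The sketch below uses the standard relaxation form, which maps each eigenvalue λ of the iteration matrix to 1 + ω(λ − 1), i.e., it scales the spectrum about the point 1; this is offered as a hedged illustration of spectrum scaling, not necessarily the paper's exact transformation, and the matrix and parameter values are hypothetical.

```python
import numpy as np

def scaled_iteration(G, c, omega, x0, n_iter=200):
    """Relaxed iteration x_{k+1} = (1-omega)*x_k + omega*(G @ x_k + c).
    The iteration matrix becomes (1-omega)*I + omega*G, whose eigenvalues
    are 1 + omega*(lam - 1); a suitable omega pulls them into the unit disk."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = (1 - omega) * x + omega * (G @ x + c)
    return x

# G has eigenvalues -2 and 0.5, so the plain iteration x <- G x + c diverges
G = np.diag([-2.0, 0.5])
c = np.array([3.0, 1.0])
x_star = np.linalg.solve(np.eye(2) - G, c)   # the fixed point, here [1, 2]
# omega = 0.4 maps the eigenvalues to -0.2 and 0.8, both inside the unit disk
x = scaled_iteration(G, c, omega=0.4, x0=np.zeros(2))
```

    The scaled iteration keeps the same fixed point (x = Gx + c is unchanged) while shrinking the spectral radius below one, so it converges to the solution the original diverging iteration was aiming for.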

  17. Progress in the development of the neutron flux monitoring system of the French GEN-IV SFR: simulations and experimental validations [ANIMMA--2015-IO-392

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jammes, C.; Filliatre, P.; Izarra, G. de

    France has about 50 years of experience in designing, building and operating sodium-cooled fast reactors (SFR) such as RAPSODIE, PHENIX and SUPER PHENIX. Fast reactors feature the double capability of reducing nuclear waste and saving nuclear energy resources by burning actinides. Since this reactor type is one of those selected by the Generation IV International Forum, the French government asked, in 2006, CEA, the French Alternative Energies and Atomic Energy Commission, to lead the development of an innovative GEN-IV nuclear-fission power demonstrator. The major objective is to improve the safety and availability of an SFR. The neutron flux monitoring (NFM) system of any reactor must, in any situation, permit both reactivity control and power level monitoring from startup to full power. It also has to monitor possible changes in neutron flux distribution within the core region in order to prevent any local melting accident. The neutron detectors will have to be installed inside the reactor vessel, because locations outside the vessel suffer from severe disadvantages: radially, the neutron shield that is also contained in the reactor vessel would cause unacceptable losses in neutron flux; below the core, the presence of a core-catcher prevents the insertion of neutron guides; and above the core, the distance is too large to obtain decent neutron signals outside the vessel. Another important point is to limit the number of detectors placed in the vessel in order to ease their installation. In this paper, we show that the architecture of the NFM system will rely on high-temperature fission chambers (HTFC) featuring wide-range flux monitoring capability. The definition of such a system is presented, and the technological options are justified using simulation and experimental results.
Firstly, neutron-transport calculations allow us to propose two in-vessel regions, namely the above-core and under-core structures. We verify that they comply with the main objective, that is, neutron power and flux distribution monitoring. HTFCs placed in these two regions can detect an inadvertent control rod withdrawal, which is a postulated initiating event for the safety demonstration. Secondly, we show that HTFC reliability is enhanced thanks to a more robust physical design and to the fact that the mineral insulation has been shown to be insensitive to any increase in temperature. Indeed, the HTFC insulation is subject to partial discharges at high temperature when the electric field between the electrodes exceeds about 200 V/mm. These discharges give rise to signals similar to the neutron pulses generated by a fission chamber itself, which may bias the HTFC count rate at start-up only. However, as displayed in Figure 1, we have experimentally verified that one can discriminate neutron pulses from partial discharges using online estimation of the pulse width. Thirdly, we propose to estimate the count rate of a HTFC using the third-order cumulant of its signal, which is described by a filtered Poisson process. For such a statistical process, it is known that any cumulant, also called a cumulative moment, is proportional to the process intensity, which is here the count rate of the fission chamber. One recalls that the so-called Campbelling mode of such a detector is actually based on the signal variance, which is the second-order cumulant. The use of this extended Campbelling mode based on the third-order cumulant will make it possible to ensure the linearity of the HTFC response over the entire neutron flux range, using a signal processing technique that is simple enough to satisfy design constraints on electric devices important for nuclear safety.
We also show that this technique, named the high order Campbelling method (HOC), is significantly more robust than another technique based on changing the HTFC filling gas, which consists of adding a few percent of nitrogen. Finally, we present an experimental campaign devoted to the required calibration of the HOC method. The Campbelling results show good agreement with simple pulse-counting estimation at low count rates. It is also shown that the HOC technique provides a linear estimation of the count rates at higher power levels as well.
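    The cumulant proportionality that underlies the higher-order Campbelling mode is easy to check numerically. The sketch below simulates a filtered Poisson (shot-noise) signal and verifies that its third central moment, which for any distribution equals the third cumulant, scales linearly with the event rate; the exponential pulse shape, rates and sample counts are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def detector_signal(rate, n_samples, pulse):
    """Filtered Poisson process: Poisson pulse arrivals per time bin,
    convolved with a fixed pulse shape (shot noise)."""
    arrivals = rng.poisson(rate, n_samples).astype(float)
    return np.convolve(arrivals, pulse, mode="same")

def third_cumulant(x):
    """The 3rd cumulant of any distribution equals its 3rd central moment."""
    return float(np.mean((x - x.mean()) ** 3))

pulse = np.exp(-np.arange(20) / 4.0)   # assumed exponential pulse shape
n = 2_000_000
k3_lo = third_cumulant(detector_signal(0.05, n, pulse))
k3_hi = third_cumulant(detector_signal(0.10, n, pulse))
ratio = k3_hi / k3_lo   # Campbell's theorem: cumulants scale linearly with rate
```

    Doubling the rate should double the third cumulant (ratio ≈ 2), which is exactly why the HOC estimate stays linear in the count rate even when individual pulses pile up.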

  18. Thermal neutron detector based on COTS CMOS imagers and a conversion layer containing Gadolinium

    NASA Astrophysics Data System (ADS)

    Pérez, Martín; Blostein, Juan Jerónimo; Bessia, Fabricio Alcalde; Tartaglione, Aureliano; Sidelnik, Iván; Haro, Miguel Sofo; Suárez, Sergio; Gimenez, Melisa Lucía; Berisso, Mariano Gómez; Lipovetzky, Jose

    2018-06-01

    In this work we introduce a novel, low-cost, position-sensitive thermal neutron detection technique based on a Commercial Off The Shelf (COTS) CMOS image sensor covered with a Gadolinium-containing conversion layer. The feasibility of the technique has been experimentally demonstrated: a thermal neutron detection efficiency of 11.3% was obtained with a conversion layer of 11.6 μm. It was experimentally verified that the thermal neutron detection efficiency is independent of the intensity of the incident thermal neutron flux, which was confirmed for conversion layers of different thicknesses. Based on the experimental results, a spatial resolution better than 25 μm is expected. This spatial resolution makes the proposed technique especially useful for neutron beam characterization, neutron beam dosimetry, high-resolution neutron imaging, and several neutron scattering techniques.

  19. Optimization of Online Searching by Pre-Recording the Search Statements: A Technique for the HP-2645A Terminal.

    ERIC Educational Resources Information Center

    Oberhauser, O. C.; Stebegg, K.

    1982-01-01

    Describes the terminal's capabilities, ways to store and call up lines of statements, cassette tapes needed during searches, and master tape's use for login storage. Advantages of the technique and two sources are listed. (RBF)

  20. Development of a mix design process for cold-in-place rehabilitation using foamed asphalt.

    DOT National Transportation Integrated Search

    2003-12-01

    This study evaluates one of the recycling techniques used to rehabilitate pavement, called Cold In-Place Recycling (CIR). CIR is one of the fastest growing road rehabilitation techniques because it is quick and cost-effective. The document reports on...

  1. Classification by diagnosing all absorption features (CDAF) for the most abundant minerals in airborne hyperspectral images

    NASA Astrophysics Data System (ADS)

    Mobasheri, Mohammad Reza; Ghamary-Asl, Mohsen

    2011-12-01

    Imaging through hyperspectral technology is a powerful tool that can be used to spectrally identify and spatially map materials based on their specific absorption characteristics in the electromagnetic spectrum. A robust method called Tetracorder has shown its effectiveness at material identification and mapping, using a set of algorithms within an expert-system decision-making framework. In this study, using some stages of Tetracorder, a technique called classification by diagnosing all absorption features (CDAF) is introduced. This technique makes it possible to assign each pixel to its most abundant mineral with high accuracy. The technique is based on the derivation of information from the reflectance spectra of the image: the spectral absorption features of each mineral are extracted from its respective laboratory-measured reflectance spectrum and compared with those extracted from the pixels in the image. The CDAF technique was applied to an AVIRIS image, where the results show an overall accuracy of better than 96%.
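    The feature-comparison step can be caricatured as continuum removal followed by a least-squares match against library spectra. The sketch below is a deliberate simplification (a straight-line continuum over a single band and two synthetic Gaussian absorption features), not the CDAF algorithm itself; the band positions are typical of kaolinite (~2.2 μm) and calcite (~2.34 μm), but all spectra here are fabricated for illustration.

```python
import numpy as np

def continuum_removed(spectrum):
    """Divide out a straight-line continuum between the band endpoints,
    isolating the absorption-feature shape (a simplification of the
    convex-hull continuum used in Tetracorder-style methods)."""
    line = np.linspace(spectrum[0], spectrum[-1], len(spectrum))
    return spectrum / line

def best_match(pixel, library):
    """Score each library mineral by mean squared error between its
    continuum-removed feature and the pixel's; return the best name."""
    pix_cr = continuum_removed(pixel)
    scores = {name: float(np.mean((continuum_removed(ref) - pix_cr) ** 2))
              for name, ref in library.items()}
    return min(scores, key=scores.get)

# hypothetical two-mineral library over one absorption band (2.1-2.4 um)
wl = np.linspace(2.1, 2.4, 50)
kaolinite = 1.0 - 0.3 * np.exp(-((wl - 2.20) / 0.02) ** 2)
calcite = 1.0 - 0.3 * np.exp(-((wl - 2.34) / 0.03) ** 2)
library = {"kaolinite": kaolinite, "calcite": calcite}

# a darker, noisy pixel dominated by the kaolinite feature
pixel = 0.8 * kaolinite + np.random.default_rng(1).normal(0.0, 0.01, 50)
label = best_match(pixel, library)
```

    Continuum removal makes the comparison insensitive to overall brightness (the 0.8 albedo factor above), so the match is driven by absorption-feature shape and position rather than illumination.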

  2. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
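    The dynamic-programming scheme outlined above can be sketched directly. The version below uses set intersection as one possible measure function and the summed symmetric difference as the segment difference; it is an illustrative O(n²k) sketch with a made-up item-set series, not the authors' optimized algorithms.

```python
def seg_cost(points, i, j):
    """Cost of segment points[i:j]: the segment's item set is the
    intersection of its time points (one possible measure function),
    and the segment difference sums the symmetric differences between
    that set and each time point's item set."""
    seg_set = set.intersection(*points[i:j])
    return sum(len(seg_set ^ p) for p in points[i:j])

def optimal_segmentation(points, k):
    """Dynamic program: best[j][m] = min cost of splitting the first j
    time points into m segments; O(n^2 * k) with O(n) cost evaluations."""
    n = len(points)
    INF = float("inf")
    best = [[INF] * (k + 1) for _ in range(n + 1)]
    cut = [[0] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0
    for j in range(1, n + 1):
        for m in range(1, min(k, j) + 1):
            for i in range(m - 1, j):
                c = best[i][m - 1] + seg_cost(points, i, j)
                if c < best[j][m]:
                    best[j][m], cut[j][m] = c, i
    # recover the segment boundaries by walking the cut table backwards
    bounds, j = [], n
    for m in range(k, 0, -1):
        bounds.append((cut[j][m], j))
        j = cut[j][m]
    return best[n][k], bounds[::-1]

# two regimes of item sets; k = 2 should place the cut between them
series = [{"a", "b"}, {"a", "b", "c"}, {"a", "b"},
          {"x", "y"}, {"x"}, {"x", "y", "z"}]
cost, bounds = optimal_segmentation(series, 2)
```

    On this toy series the optimal two-way split falls at the regime change (segments [0, 3) and [3, 6)), illustrating how the segment-difference objective captures temporal content that a fixed-length split would miss.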

  3. Effects of wind on the dynamics of the central jet during drop impact onto a deep-water surface

    NASA Astrophysics Data System (ADS)

    Liu, Xinan; Wang, An; Wang, Shuang; Dai, Dejun

    2018-05-01

    The cavity and central jet generated by the impact of a single water drop on a deep-water surface in a wind field are experimentally studied. Different experiments are performed by varying the impacting drop diameter and wind speed. The contour profile histories of the cavity (also called crater) and central jet (also called stalk) are measured in detail with a backlit cinematic shadowgraph technique. The results show that shortly after the drop hits the water surface an asymmetrical cavity appears along the wind direction, with a train of capillary waves on the cavity wall. This is followed by the formation of an inclined central jet at the location of the drop impact. It is found that the wind has little effect on the penetration depth of the cavity at the early stage of the cavity expansion, but markedly changes the capillary waves during the retraction of the cavity. The capillary waves in turn shift the position of the central jet formation leeward. The dynamics of the central jet are dominated by two mechanisms: (i) the oblique drop impact produced by the wind and (ii) the wind drag force directly acting on the jet. The maximum height of the central jet, called the stalk height, is drastically affected by the wind, and the nondimensional stalk height H/D decreases with increasing θ Re⁻¹, where D is the drop diameter, θ is the impingement angle of drop impact, and Re = ρ_a U_w D / μ_a is the Reynolds number with air density ρ_a, wind speed U_w, and air viscosity μ_a.

  4. CHROMATOGRAPHIC TECHNIQUES IN PHARMACEUTICAL ANALYSIS IN POLAND: HISTORY AND THE PRESENCE ON THE BASIS OF PAPERS PUBLISHED IN SELECTED POLISH PHARMACEUTICAL JOURNALS IN XX CENTURY.

    PubMed

    Bilek, Maciej; Namieśnik, Jacek

    2016-01-01

    For a long time, chromatographic techniques and techniques related to them have stimulated the development of new procedures in the field of pharmaceutical analysis. The newly developed methods, characterized by improved metrological parameters, allow for more accurate testing of, among others, the composition of raw materials, intermediates and final products. The chromatographic techniques also enable studies on waste generated in research laboratories and factories producing pharmaceuticals and parapharmaceuticals. Based on a review of reports published in Polish pharmaceutical journals, we assessed the impact of chromatographic techniques on the development of pharmaceutical analysis. The first chromatographic technique used in pharmaceutical analysis was the so-called capillary analysis. It was applied in the 1930s to control the identity of pharmaceutical formulations. In the 1940s and 1950s, chromatographic techniques were mostly the subject of review publications, while their use in experimental work was rare. Paper chromatography and thin layer chromatography were introduced in the 1960s and 1970s, respectively. These new analytical tools contributed to the intensive development of research in the field of phytochemistry and the analysis of herbal medicines. The development of column chromatography-based techniques, i.e., gas chromatography and high performance liquid chromatography, took place at the end of the 20th century. Both aforementioned techniques were widely applied in pharmaceutical analysis, for example, to assess the stability of drugs, to test for impurities and degradation products, and in pharmacokinetic studies. The first decade of the 21st century brought new detection methods in gas and liquid chromatography.
The information sources used to write this article were Polish pharmaceutical journals, both professional and scientific, originating from the interwar and post-war period, i.e., "Kronika Farmaceutyczna", "Farmacja Współczesna", "Wiadomości Farmaceutyczne", "Acta Poloniae Pharmaceutica", "Farmacja Polska", "Dissertationes Pharmaceuticae", "Annales UMCS sectio DDD Pharmacia". The number of published works using various chromatographic techniques was assessed based on the content description of individual issues of the journal "Acta Poloniae Pharmaceutica".

  5. Experimental and analytical determination of stability parameters for a balloon tethered in a wind

    NASA Technical Reports Server (NTRS)

    Redd, L. T.; Bennett, R. M.; Bland, S. R.

    1973-01-01

    Experimental and analytical techniques for determining stability parameters for a balloon tethered in a steady wind are described. These techniques are applied to a particular 7.64-meter-long balloon, and the results are presented. The stability parameters of interest appear as coefficients in linearized stability equations and are derived from the various forces and moments acting on the balloon. In several cases the results from the experimental and analytical techniques are compared and suggestions are given as to which techniques are the most practical means of determining values for the stability parameters.

  6. In-situ identification of anti-personnel mines using acoustic resonant spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perry, R L; Roberts, R S

    1999-02-01

    A new technique for identifying buried Anti-Personnel Mines is described, and a set of preliminary experiments designed to assess the feasibility of this technique is presented. Analysis of the experimental results indicates that the technique has potential, but additional work is required to bring the technique to fruition. In addition to the experimental results presented here, a technique used to characterize the sensor employed in the experiments is detailed.

  7. Differing types of cellular phone conversations and dangerous driving.

    PubMed

    Dula, Chris S; Martin, Benjamin A; Fox, Russell T; Leonard, Robin L

    2011-01-01

    This study sought to investigate the relationship between cell phone conversation type and dangerous driving behaviors. It was hypothesized that more emotional phone conversations engaged in while driving would produce greater frequencies of dangerous driving behaviors in a simulated environment than more mundane conversation or no phone conversation at all. Participants were semi-randomly assigned to one of three conditions: (1) no call, (2) mundane call, and, (3) emotional call. While driving in a simulated environment, participants in the experimental groups received a phone call from a research confederate who either engaged them in innocuous conversation (mundane call) or arguing the opposite position of a deeply held belief of the participant (emotional call). Participants in the no call and mundane call groups differed significantly only on percent time spent speeding and center line crossings, though the mundane call group consistently engaged in more of all dangerous driving behaviors than did the no call participants. Participants in the emotional call group engaged in significantly more dangerous driving behaviors than participants in both the no call and mundane call groups, with the exception of traffic light infractions, where there were no significant group differences. Though there is need for replication, the authors concluded that whereas talking on a cell phone while driving is risky to begin with, having emotionally intense conversations is considerably more dangerous. Copyright © 2010 Elsevier Ltd. All rights reserved.

  8. Charged-particle emission tomography

    NASA Astrophysics Data System (ADS)

    Ding, Yijun

    Conventional charged-particle imaging techniques, such as autoradiography, provide only two-dimensional (2D) images of thin tissue slices. To get volumetric information, images of multiple thin slices are stacked. This process is time consuming and prone to distortions, as registration of 2D images is required. We propose a direct three-dimensional (3D) autoradiography technique, which we call charged-particle emission tomography (CPET). This 3D imaging technique enables imaging of thick sections, thus increasing laboratory throughput and eliminating distortions due to registration. In CPET, molecules or cells of interest are labeled so that they emit charged particles without significant alteration of their biological function. Therefore, by imaging the source of the charged particles, one can gain information about the distribution of the molecules or cells of interest. Two special cases of CPET are beta emission tomography (BET) and alpha emission tomography (alphaET), where the charged particles employed are fast electrons and alpha particles, respectively. A crucial component of CPET is the charged-particle detector. Conventional charged-particle detectors are sensitive only to the 2D positions of the detected particles. We propose a new detector concept, which we call the particle-processing detector (PPD). A PPD measures attributes of each detected particle, including location, direction of propagation, and/or the energy deposited in the detector. Reconstruction algorithms for CPET are developed, and reconstruction results from simulated data are presented for both BET and alphaET. The results show that, in addition to position, direction and energy provide valuable information for 3D reconstruction in CPET. Several designs of particle-processing detectors are described. Experimental results for one detector are discussed.
With appropriate detector design and careful data analysis, it is possible to measure the direction and energy, as well as the position, of each detected particle. The null functions of CPET with PPDs that measure different combinations of attributes are calculated through singular-value decomposition. In general, the more particle attributes are measured for each detection event, the smaller the null space of CPET. In other words, the higher the dimension of the data space, the more information about an object can be recovered by CPET.
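    The null-function computation via singular-value decomposition can be illustrated on a toy forward model. The matrix below is a random stand-in for a system operator mapping a 10-voxel object to 6 measurements, not a model of an actual CPET detector; right singular vectors with (numerically) zero singular values span the null space, i.e., the object components the detector cannot see.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical toy system matrix: 6 measurements of a 10-voxel object
H = rng.normal(size=(6, 10))

U, s, Vt = np.linalg.svd(H)                 # full SVD: Vt is 10 x 10
tol = max(H.shape) * np.finfo(float).eps * s[0]
rank = int(np.sum(s > tol))
null_basis = Vt[rank:]                      # rows spanning the null space

# any object built from the null-space basis produces (numerically) zero data
f_null = null_basis.T @ rng.normal(size=null_basis.shape[0])
residual = float(np.linalg.norm(H @ f_null))
```

    Adding rows to H (i.e., measuring more particle attributes per event) shrinks the null space, which is the SVD picture behind the statement that higher-dimensional data recover more of the object.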

  9. Experimental Validation Techniques for the Heleeos Off-Axis Laser Propagation Model

    DTIC Science & Technology

    2010-03-01

    Experimental Validation Techniques for the HELEEOS Off-Axis Laser Propagation Model. Thesis, AFIT/GAP/ENP/10-M07, Air Force Institute of Technology. John Haiducek, 1st Lt, USAF (BS, Physics), March 2010. Approved for public release; distribution unlimited.

  10. Curriculum system for experimental teaching in optoelectronic information

    NASA Astrophysics Data System (ADS)

    Di, Hongwei; Chen, Zhenqiang; Zhang, Jun; Luo, Yunhan

    2017-08-01

    The experimental curriculum system is directly related to the quality of talent training. Based on a careful investigation of the development requirements for optoelectronic information talents in the new century, the experimental teaching goal was set to cultivate students' innovative consciousness, innovative thinking, creativity and problem-solving ability. By straightening out the correlations among the experimental teaching components of the main courses, the overall structure was designed in phases, along with the hierarchical curriculum content. Following the ideas of "basic, comprehensive, applied and innovative", an experimental teaching system called "triple-three" was constructed for optoelectronic information experimental teaching practice.

  11. The effective way

    NASA Astrophysics Data System (ADS)

    Fruchart, Michel; Vitelli, Vincenzo

    2018-03-01

    A theoretical framework for the design of so-called perturbative metamaterials, based on weakly interacting unit cells, has led to the experimental demonstration of a quadrupole topological insulator.

  12. Sound imaging of nocturnal animal calls in their natural habitat.

    PubMed

    Mizumoto, Takeshi; Aihara, Ikkyu; Otsuka, Takuma; Takeda, Ryu; Aihara, Kazuyuki; Okuno, Hiroshi G

    2011-09-01

    We present a novel method for imaging acoustic communication between nocturnal animals. Investigating the spatio-temporal calling behavior of nocturnal animals, e.g., frogs and crickets, has been difficult because of the need to distinguish many animals' calls in noisy environments without being able to see them. Our method visualizes the spatial and temporal dynamics using dozens of sound-to-light conversion devices (called "Firefly") and an off-the-shelf video camera. The Firefly, which consists of a microphone and a light emitting diode, emits light when it captures nearby sound. Deploying dozens of Fireflies in a target area, we record calls of multiple individuals through the video camera. We conduct two experiments, one indoors and the other in the field, using Japanese tree frogs (Hyla japonica). The indoor experiment demonstrates that our method correctly visualizes Japanese tree frogs' calling behavior. It has confirmed the known behavior; two frogs call synchronously or in anti-phase synchronization. The field experiment (in a rice paddy where Japanese tree frogs live) also visualizes the same calling behavior to confirm anti-phase synchronization in the field. Experimental results confirm that our method can visualize the calling behavior of nocturnal animals in their natural habitat.

  13. ChIP-PIT: Enhancing the Analysis of ChIP-Seq Data Using Convex-Relaxed Pair-Wise Interaction Tensor Decomposition.

    PubMed

    Zhu, Lin; Guo, Wei-Li; Deng, Su-Ping; Huang, De-Shuang

    2016-01-01

    In recent years, thanks to the efforts of individual scientists and research consortiums, a huge amount of chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) experimental data has been accumulated. Instead of investigating these data independently, several recent studies have convincingly demonstrated that a wealth of scientific insights can be gained by their integrative analysis. However, when used for the purpose of integrative analysis, a serious drawback of the current ChIP-seq technique is that it is still expensive and time-consuming to generate ChIP-seq datasets of high standard. Most researchers are therefore unable to obtain complete ChIP-seq data for several TFs in a wide variety of cell lines, which considerably limits the understanding of transcriptional regulation patterns. In this paper, we propose a novel method called ChIP-PIT to overcome this limitation. In ChIP-PIT, ChIP-seq data corresponding to a diverse collection of cell types, TFs and genes are fused together using the three-mode pair-wise interaction tensor (PIT) model, and the prediction of unperformed ChIP-seq experimental results is formulated as a tensor completion problem. Computationally, we propose an efficient first-order method based on extensions of the coordinate descent method to learn the optimal solution of ChIP-PIT, which makes it particularly suitable for the analysis of massive-scale ChIP-seq data. Experimental evaluation on the ENCODE data illustrates the usefulness of the proposed model.

  14. A Robust Approach For Acoustic Noise Suppression In Speech Using ANFIS

    NASA Astrophysics Data System (ADS)

    Martinek, Radek; Kelnar, Michal; Vanus, Jan; Bilik, Petr; Zidek, Jan

    2015-11-01

    This article deals with the implementation of a combination of fuzzy-system and artificial-intelligence techniques in the application area of non-linear noise and interference suppression. The structure used is called an Adaptive Neuro-Fuzzy Inference System (ANFIS). This system finds practical use mainly in audio telephone (mobile) communication in noisy environments (transport, production halls, sports matches, etc.). Experimental methods based on the two-input adaptive noise cancellation concept were clearly outlined. Within the experiments carried out, the authors created, based on the ANFIS structure, a comprehensive system for adaptive suppression of the unwanted background interference that occurs in audio communication and degrades the audio signal. The system was tested on real voice signals. This article presents an investigation and comparison of three distinct approaches to noise cancellation in speech: LMS (least mean squares) adaptive filtering, RLS (recursive least squares) adaptive filtering, and ANFIS. A careful review of the literature indicated the importance of non-linear adaptive algorithms over linear ones in noise cancellation. It was concluded that the ANFIS approach had the best overall performance, as it efficiently cancelled noise even in highly noise-degraded speech. Results were drawn from successful experimentation: subjective tests were used to analyse comparative performance, while objective tests were used to validate it. Implementation of the algorithms was carried out experimentally in Matlab to justify the claims and determine their relative performances.
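Of the three compared approaches, the linear LMS baseline is the simplest to illustrate. Below is a minimal sketch of the two-input adaptive noise cancellation concept with an LMS filter (this is the baseline the paper compares against, not its ANFIS system; signal names, filter length and step size are illustrative).

```python
import numpy as np

def lms_cancel(primary, reference, taps=8, mu=0.002):
    """Two-input adaptive noise cancellation with the LMS algorithm.

    primary   -- speech corrupted by filtered noise
    reference -- noise-only signal correlated with the corruption
    Returns the error signal e, which converges toward clean speech
    as the filter learns the noise path.
    """
    w = np.zeros(taps)
    e = np.zeros(len(primary))
    for n in range(taps - 1, len(primary)):
        x = reference[n - taps + 1:n + 1][::-1]  # most recent sample first
        y = w @ x                                # estimate of noise in primary
        e[n] = primary[n] - y                    # cleaned output
        w += 2 * mu * e[n] * x                   # LMS weight update
    return e
```

Because the speech component is uncorrelated with the reference noise, the filter converges to the noise path and the error signal retains the speech.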

  15. MSDD: a manually curated database of experimentally supported associations among miRNAs, SNPs and human diseases

    PubMed Central

    Yue, Ming; Zhou, Dianshuang; Zhi, Hui; Wang, Peng; Zhang, Yan; Gao, Yue; Guo, Maoni; Li, Xin; Wang, Yanxia

    2018-01-01

    Abstract The MiRNA SNP Disease Database (MSDD, http://www.bio-bigdata.com/msdd/) is a manually curated database that provides comprehensive experimentally supported associations among microRNAs (miRNAs), single nucleotide polymorphisms (SNPs) and human diseases. SNPs in miRNA-related functional regions such as mature miRNAs, promoter regions, pri-miRNAs, pre-miRNAs and target gene 3′-UTRs, collectively called ‘miRSNPs’, represent a novel category of functional molecules. miRSNPs can lead to dysregulation of miRNAs and their target genes, resulting in susceptibility to, or onset of, human diseases. A curated collection and summary of miRSNP-associated diseases is essential for a thorough understanding of the mechanisms and functions of miRSNPs. Here, we describe MSDD, which currently documents 525 associations among 182 human miRNAs, 197 SNPs, 153 genes and 164 human diseases through a review of more than 2000 published papers. Each association incorporates information on the miRNAs, SNPs, miRNA target genes and disease names, SNP locations and alleles, the miRNA dysfunctional pattern, experimental techniques, a brief functional description, the original reference and additional annotation. MSDD provides a user-friendly interface to conveniently browse, retrieve, download and submit novel data. MSDD will significantly improve our understanding of miRNA dysfunction in disease, and thus, MSDD has the potential to serve as a timely and valuable resource. PMID:29106642

  16. MSDD: a manually curated database of experimentally supported associations among miRNAs, SNPs and human diseases.

    PubMed

    Yue, Ming; Zhou, Dianshuang; Zhi, Hui; Wang, Peng; Zhang, Yan; Gao, Yue; Guo, Maoni; Li, Xin; Wang, Yanxia; Zhang, Yunpeng; Ning, Shangwei; Li, Xia

    2018-01-04

    The MiRNA SNP Disease Database (MSDD, http://www.bio-bigdata.com/msdd/) is a manually curated database that provides comprehensive experimentally supported associations among microRNAs (miRNAs), single nucleotide polymorphisms (SNPs) and human diseases. SNPs in miRNA-related functional regions such as mature miRNAs, promoter regions, pri-miRNAs, pre-miRNAs and target gene 3'-UTRs, collectively called 'miRSNPs', represent a novel category of functional molecules. miRSNPs can lead to dysregulation of miRNAs and their target genes, resulting in susceptibility to, or onset of, human diseases. A curated collection and summary of miRSNP-associated diseases is essential for a thorough understanding of the mechanisms and functions of miRSNPs. Here, we describe MSDD, which currently documents 525 associations among 182 human miRNAs, 197 SNPs, 153 genes and 164 human diseases through a review of more than 2000 published papers. Each association incorporates information on the miRNAs, SNPs, miRNA target genes and disease names, SNP locations and alleles, the miRNA dysfunctional pattern, experimental techniques, a brief functional description, the original reference and additional annotation. MSDD provides a user-friendly interface to conveniently browse, retrieve, download and submit novel data. MSDD will significantly improve our understanding of miRNA dysfunction in disease, and thus, MSDD has the potential to serve as a timely and valuable resource. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. Investigation of pulmonary acoustic simulation: comparing airway model generation techniques

    NASA Astrophysics Data System (ADS)

    Henry, Brian; Dai, Zoujun; Peng, Ying; Mansy, Hansen A.; Sandler, Richard H.; Royston, Thomas

    2014-03-01

    Alterations in the structure and function of the pulmonary system that occur in disease or injury often give rise to measurable spectral, spatial and/or temporal changes in lung sound production and transmission. These changes, if properly quantified, might provide additional information about the etiology, severity and location of trauma, injury, or pathology. With this in mind, the authors are developing a comprehensive computer simulation model of pulmonary acoustics, known as The Audible Human Project™. Its purpose is to improve our understanding of pulmonary acoustics and to aid in interpreting measurements of sound and vibration in the lungs generated by airway insonification, natural breath sounds, and external stimuli on the chest surface, such as those used in elastography. As a part of this development process, finite element (FE) models were constructed of an excised pig lung that also underwent experimental studies. Within these models, the complex airway structure was created via two methods: x-ray CT image segmentation and an algorithmic means called Constrained Constructive Optimization (CCO). CCO was implemented to expedite the segmentation process, as airway segments can be grown digitally. These two approaches were used in FE simulations of the surface motion of the lung resulting from sound input into the trachea, and the simulation results were compared to experimental measurements. By testing how closely these models match experimental measurements, we are evaluating whether CCO can be used to efficiently construct physiologically relevant airway trees.
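Real CCO grows a vascular or airway tree by optimizing a global target function (e.g. total tree volume) under physiological constraints at every bifurcation. The toy sketch below only conveys the "segments grown digitally" idea, attaching each new terminal point to the midpoint of the nearest existing segment to form a bifurcation; it is a deliberate simplification, not the authors' implementation.

```python
import numpy as np

def grow_tree_cco_like(n_terminals=50, seed=0):
    """Grow a bifurcating tree in the unit square, CCO-style in spirit:
    sample a terminal location, find the nearest existing segment,
    and split that segment at its midpoint to attach a new branch.
    (Real CCO re-optimizes bifurcation positions and radii; this
    sketch skips the optimization entirely.)"""
    rng = np.random.default_rng(seed)
    # segments stored as (start, end) point pairs; root segment first
    segments = [(np.array([0.5, 0.0]), np.array([0.5, 0.3]))]
    for _ in range(n_terminals):
        p = rng.random(2)                              # new terminal point
        mids = np.array([(a + b) / 2 for a, b in segments])
        i = int(np.argmin(np.linalg.norm(mids - p, axis=1)))
        a, b = segments[i]
        m = (a + b) / 2
        segments[i] = (a, m)       # proximal half of the split segment
        segments.append((m, b))    # distal half
        segments.append((m, p))    # new branch to the sampled terminal
    return segments
```

Each added terminal turns one segment into three, so the tree has 1 + 2n segments after n insertions, mirroring how a digitally grown airway tree can reach arbitrary generation counts far beyond what CT segmentation resolves.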

  18. Light-emitting diode street lights reduce last-ditch evasive manoeuvres by moths to bat echolocation calls

    PubMed Central

    Wakefield, Andrew; Stone, Emma L.; Jones, Gareth; Harris, Stephen

    2015-01-01

    The light-emitting diode (LED) street light market is expanding globally, and it is important to understand how LED lights affect wildlife populations. We compared evasive flight responses of moths to bat echolocation calls experimentally under LED-lit and -unlit conditions. Significantly fewer moths performed ‘powerdive’ flight manoeuvres in response to bat calls (feeding buzz sequences from Nyctalus spp.) under an LED street light than in the dark. LED street lights reduce the anti-predator behaviour of moths, shifting the balance in favour of their predators, aerial hawking bats. PMID:26361558

  19. Addressee Errors in ATC Communications: The Call Sign Problem

    NASA Technical Reports Server (NTRS)

    Monan, W. P.

    1983-01-01

    Communication errors involving aircraft call signs were portrayed in reports of 462 hazardous incidents voluntarily submitted to the ASRS during an approximately four-year period. These errors resulted in confusion, disorder, and uncoordinated traffic conditions and produced the following types of operational anomalies: altitude deviations, wrong-way headings, aborted takeoffs, go-arounds, runway incursions, missed crossing altitude restrictions, descents toward high terrain, and traffic conflicts in flight and on the ground. Analysis of the report set resulted in identification of five categories of errors involving call signs: (1) faulty radio usage techniques, (2) call sign loss or smearing due to frequency congestion, (3) confusion resulting from similar-sounding call signs, (4) airmen missing call signs, leading to failures to acknowledge or read back, and (5) controller failures regarding confirmation of acknowledgements or readbacks. These error categories are described in detail, and several associated hazard-mitigating measures that might be taken are considered.

  20. Adaptation of warrant price with Black Scholes model and historical volatility

    NASA Astrophysics Data System (ADS)

    Aziz, Khairu Azlan Abd; Idris, Mohd Fazril Izhar Mohd; Saian, Rizauddin; Daud, Wan Suhana Wan

    2015-05-01

    This project discusses the pricing of warrants in Malaysia. The Black-Scholes model with a non-dividend approach and a linear interpolation technique was applied to price the call warrants. Three call warrants listed on Bursa Malaysia were selected randomly from UiTM's Datastream. The findings show that the volatilities of the call warrants differ from one another. We used historical volatility, which describes the extent to which an underlying share's price is expected to fluctuate within a period. The price obtained from the Black-Scholes model was compared with the actual market price; mispricing the call warrants leads to under- or over-valuation. Other variables, such as the interest rate, time to maturity, exercise price and underlying stock price, are involved in pricing call warrants as well as in measuring their moneyness.
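The two standard ingredients named in the abstract, the non-dividend Black-Scholes call price and annualized historical volatility estimated from log returns, can be sketched as follows. The parameter values in the test are illustrative, not taken from the study's warrant data.

```python
import math
import statistics

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call with no dividends.
    S: spot price, K: strike, T: time to maturity in years,
    r: risk-free rate, sigma: annualized volatility."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def historical_volatility(prices, periods_per_year=252):
    """Annualized historical volatility: sample standard deviation of
    log returns, scaled by the square root of periods per year."""
    rets = [math.log(prices[i] / prices[i - 1]) for i in range(1, len(prices))]
    return statistics.stdev(rets) * math.sqrt(periods_per_year)
```

Moneyness can then be read off as the ratio of the underlying price to the exercise price, S/K.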
