Sample records for experimental techniques commonly

  1. The role of chemometrics in single and sequential extraction assays: a review. Part II. Cluster analysis, multiple linear regression, mixture resolution, experimental design and other techniques.

    PubMed

    Giacomino, Agnese; Abollino, Ornella; Malandrino, Mery; Mentasti, Edoardo

    2011-03-04

    Single and sequential extraction procedures are used for studying element mobility and availability in solid matrices, like soils, sediments, sludge, and airborne particulate matter. In the first part of this review we reported an overview on these procedures and described the applications of chemometric uni- and bivariate techniques and of multivariate pattern recognition techniques based on variable reduction to the experimental results obtained. The second part of the review deals with the use of chemometrics not only for the visualization and interpretation of data, but also for the investigation of the effects of experimental conditions on the response, the optimization of their values and the calculation of element fractionation. We will describe the principles of the multivariate chemometric techniques considered, the aims for which they were applied and the key findings obtained. The following topics will be critically addressed: pattern recognition by cluster analysis (CA), linear discriminant analysis (LDA) and other less common techniques; modelling by multiple linear regression (MLR); investigation of spatial distribution of variables by geostatistics; calculation of fractionation patterns by a mixture resolution method (Chemometric Identification of Substrates and Element Distributions, CISED); optimization and characterization of extraction procedures by experimental design; other multivariate techniques less commonly applied. Copyright © 2010 Elsevier B.V. All rights reserved.
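The cluster analysis (CA) step reviewed above can be sketched numerically. A minimal example using SciPy's agglomerative (Ward) clustering, assuming hypothetical extractable-metal concentrations; the metals, values, and two-cluster cut are invented for illustration, not taken from the review:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical extractable-metal concentrations (mg/kg) for six
# sediment samples: columns = [Cd, Pb, Zn] from one extraction step.
X = np.array([
    [0.8, 12.0, 40.0],
    [0.9, 13.5, 42.0],
    [0.7, 11.0, 38.0],
    [3.1, 55.0, 160.0],
    [2.9, 52.0, 150.0],
    [3.3, 58.0, 170.0],
])

# Standardize each variable so no single metal dominates the distances.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# Agglomerative (Ward) cluster analysis, cut into two groups.
Z = linkage(Xs, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

With such clearly separated samples, the first three and last three records fall into distinct clusters, mirroring how CA is used to group samples of similar element mobility.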

  2. Technique of spinal cord compression induced by inflation of epidural balloon catheter in rabbits (Oryctolagus cuniculus): efficient and easy to use model.

    PubMed

    Fonseca, Antonio F B DA; Scheffer, Jussara P; Coelho, Barbara P; Aiello, Graciane; Guimarães, Arthur G; Gama, Carlos R B; Vescovini, Victor; Cabral, Paula G A; Oliveira, André L A

    2016-09-01

    The most common cause of spinal cord injury is high-impact trauma, which often results in some degree of motor, sensory, or autonomic impairment in the areas distal to the level of the trauma. In terms of survival and complications due to sequelae, veterinary patients have an unfavorable prognosis. This justifies the study of experimental models of spinal cord injury production that could better support research into potential treatments for spinal cord injuries in human and veterinary medicine. Preclinical studies of acute spinal cord injury require an easily reproducible experimental animal model. The most common experimental animal is the rat, and several techniques exist for producing a spinal cord injury. The objective of this study was to describe and evaluate the effectiveness of a technique for producing acute spinal cord injury through inflation of a Fogarty(r) catheter, using rabbits as the experimental model because the species has fewer conclusive publications and meets the main requirements of a model: low cost, handling convenience, reproducibility, and uniformity. The technique was adequate for performing preclinical studies in the neuro-traumatology area, effectively leading to degeneration and necrosis of the nervous tissue and fostering the emergence of acute paraplegia.

  3. An experimental study on the effect of temperature on piezoelectric sensors for impedance-based structural health monitoring.

    PubMed

    Baptista, Fabricio G; Budoya, Danilo E; de Almeida, Vinicius A D; Ulson, Jose Alfredo C

    2014-01-10

    The electromechanical impedance (EMI) technique is considered to be one of the most promising methods for developing structural health monitoring (SHM) systems. This technique is simple to implement and uses small and inexpensive piezoelectric sensors. However, practical problems have hindered its application to real-world structures, and temperature effects have been cited in the literature as critical problems. In this paper, we present an experimental study of the effect of temperature on the electrical impedance of the piezoelectric sensors used in the EMI technique. We used 5H PZT (lead zirconate titanate) ceramic sensors, which are commonly used in the EMI technique. The experimental results showed that the temperature effects were strongly frequency-dependent, which may motivate future research in the SHM field.
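A common way to quantify signature changes in EMI-based SHM is a root-mean-square deviation (RMSD) index between a baseline and a later impedance signature. A sketch with synthetic data; the signatures and frequency band are invented, not measurements from the paper:

```python
import numpy as np

# Hypothetical real-part impedance signatures of a PZT patch measured
# over the same frequency band at a baseline and a later state.
freq = np.linspace(20e3, 40e3, 200)             # Hz
baseline = 100 + 5 * np.sin(freq / 1e3)         # synthetic signature
shifted = 100 + 5 * np.sin(freq / 1e3 + 0.05)   # e.g. a temperature shift

def rmsd(z_ref, z_new):
    """RMSD index, a common damage metric in EMI-based SHM."""
    return np.sqrt(np.sum((z_new - z_ref) ** 2) / np.sum(z_ref ** 2))

print(rmsd(baseline, shifted))
```

A nonzero RMSD flags any signature change, which is exactly why uncompensated temperature effects can masquerade as damage.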

  4. Prediction of physical protein-protein interactions

    NASA Astrophysics Data System (ADS)

    Szilágyi, András; Grimm, Vera; Arakaki, Adrián K.; Skolnick, Jeffrey

    2005-06-01

    Many essential cellular processes such as signal transduction, transport, cellular motion and most regulatory mechanisms are mediated by protein-protein interactions. In recent years, new experimental techniques have been developed to discover the protein-protein interaction networks of several organisms. However, the accuracy and coverage of these techniques have proven to be limited, and computational approaches remain essential both to assist in the design and validation of experimental studies and for the prediction of interaction partners and detailed structures of protein complexes. Here, we provide a critical overview of existing structure-independent and structure-based computational methods. Although these techniques have significantly advanced in the past few years, we find that most of them are still in their infancy. We also provide an overview of experimental techniques for the detection of protein-protein interactions. Although the developments are promising, false positive and false negative results are common, and reliable detection is possible only by taking a consensus of different experimental approaches. The shortcomings of experimental techniques affect both the further development and the fair evaluation of computational prediction methods. For an adequate comparative evaluation of prediction and high-throughput experimental methods, an appropriately large benchmark set of biophysically characterized protein complexes would be needed, but is sorely lacking.

  5. On the Experimental Determination of the One-Way Speed of Light

    ERIC Educational Resources Information Center

    Perez, Israel

    2011-01-01

    In this paper the question of the isotropy of the one-way speed of light is addressed from an experimental perspective. In particular, we analyse two experimental methods commonly used in its determination. The analysis is aimed at clarifying the view that the one-way speed of light cannot be determined by techniques in which physical entities…

  6. Teaching Writing and Critical Thinking in Large Political Science Classes

    ERIC Educational Resources Information Center

    Franklin, Daniel; Weinberg, Joseph; Reifler, Jason

    2014-01-01

    In the interest of developing a combination of teaching techniques designed to maximize efficiency "and" quality of instruction, we have experimentally tested three separate and relatively common teaching techniques in three large introductory political science classes at a large urban public university. Our results indicate that the…

  7. A comparison of solute-transport solution techniques based on inverse modelling results

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2000-01-01

    Five common numerical techniques (finite difference, predictor-corrector, total-variation-diminishing, method-of-characteristics, and modified-method-of-characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using randomly distributed homogeneous blocks of five sand types. This experimental model provides an outstanding opportunity to compare the solution techniques because of the heterogeneous hydraulic conductivity distribution of known structure, and the availability of detailed measurements with which to compare simulated concentrations. The present work uses this opportunity to investigate how three common types of results (simulated breakthrough curves, sensitivity analysis, and calibrated parameter values) change in this heterogeneous situation, given the different methods of simulating solute transport. The results show that simulated peak concentrations, even at very fine grid spacings, varied because of different amounts of numerical dispersion. Sensitivity analysis results were robust in that they were independent of the solution technique. They revealed extreme correlation between hydraulic conductivity and porosity, and that the breakthrough curve data did not provide enough information about the dispersivities to estimate individual values for the five sands. However, estimated hydraulic conductivity values are significantly influenced by both the large possible variations in model dispersion and the amount of numerical dispersion present in the solution technique.
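The numerical dispersion at issue can be demonstrated with the simplest of these schemes. A sketch of a first-order upwind finite-difference advection step applied to a square concentration pulse; the grid, velocity, and time step are illustrative, not the sand-tank values:

```python
import numpy as np

# 1-D advection of a concentration pulse with a first-order upwind
# finite-difference scheme (parameters are illustrative only).
nx, dx, dt, v = 200, 1.0, 0.4, 1.0   # cells, m, s, m/s
C = np.zeros(nx)
C[10:20] = 1.0                        # initial square pulse
cr = v * dt / dx                      # Courant number (0.4 < 1: stable)

for _ in range(300):
    C[1:] = C[1:] - cr * (C[1:] - C[:-1])  # upwind update
    C[0] = 0.0                             # zero-concentration inflow

# Numerical dispersion smears the pulse as it advects downstream,
# so the simulated peak falls well below the true value of 1.0.
print(C.max(), int(np.argmax(C)))
```

The scheme's truncation error acts like an artificial dispersion coefficient of roughly v·dx·(1 − cr)/2, which is why the study found peak concentrations varying with solution technique even at fine grid spacings.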

  8. New fluorescence techniques for high-throughput drug discovery.

    PubMed

    Jäger, S; Brand, L; Eggeling, C

    2003-12-01

    The rapid increase of compound libraries as well as new targets emerging from the Human Genome Project require constant progress in pharmaceutical research. An important tool is High-Throughput Screening (HTS), which has evolved as an indispensable instrument in the pre-clinical target-to-IND (Investigational New Drug) discovery process. HTS requires machinery, which is able to test more than 100,000 potential drug candidates per day with respect to a specific biological activity. This calls for certain experimental demands especially with respect to sensitivity, speed, and statistical accuracy, which are fulfilled by using fluorescence technology instrumentation. In particular the recently developed family of fluorescence techniques, FIDA (Fluorescence Intensity Distribution Analysis), which is based on confocal single-molecule detection, has opened up a new field of HTS applications. This report describes the application of these new techniques as well as of common fluorescence techniques--such as confocal fluorescence lifetime and anisotropy--to HTS. It gives experimental examples and presents advantages and disadvantages of each method. In addition the most common artifacts (auto-fluorescence or quenching by the drug candidates) emerging from the fluorescence detection techniques are highlighted and correction methods for confocal fluorescence read-outs are presented, which are able to circumvent this deficiency.
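Among the common read-outs mentioned, steady-state fluorescence anisotropy has a simple closed form, r = (I∥ − G·I⊥)/(I∥ + 2G·I⊥). A sketch with invented intensities; the G factor corrects for polarization bias in the detection path:

```python
# Steady-state fluorescence anisotropy from polarized intensities.
# Intensity values are invented for illustration.
i_par, i_perp, g = 1200.0, 800.0, 1.0   # parallel, perpendicular, G factor

r = (i_par - g * i_perp) / (i_par + 2 * g * i_perp)
print(round(r, 4))  # 0.1429
```

In an HTS assay, binding of a small fluorescent ligand to a large target slows its rotation and raises r, which is what the plate reader screens for.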

  9. Opto-electronic characterization of third-generation solar cells.

    PubMed

    Neukom, Martin; Züfle, Simon; Jenatsch, Sandra; Ruhstaller, Beat

    2018-01-01

    We present an overview of opto-electronic characterization techniques for solar cells including light-induced charge extraction by linearly increasing voltage, impedance spectroscopy, transient photovoltage, charge extraction and more. Guidelines for the interpretation of experimental results are derived based on charge drift-diffusion simulations of solar cells with common performance limitations. It is investigated how nonidealities like charge injection barriers, traps and low mobilities among others manifest themselves in each of the studied cell characterization techniques. Moreover, comprehensive parameter extraction for an organic bulk-heterojunction solar cell comprising PCDTBT:PC70BM is demonstrated. The simulations reproduce measured results of 9 different experimental techniques. Parameter correlation is minimized due to the combination of various techniques. Thereby a route to comprehensive and accurate parameter extraction is identified.

  10. Genes and Gene Therapy

    MedlinePlus

    ... a child can have a genetic disorder. Gene therapy is an experimental technique that uses genes to ... prevent disease. The most common form of gene therapy involves inserting a normal gene to replace an ...

  11. Cryptographic salting for security enhancement of double random phase encryption schemes

    NASA Astrophysics Data System (ADS)

    Velez Zea, Alejandro; Fredy Barrera, John; Torroba, Roberto

    2017-10-01

    Security in optical encryption techniques is a subject of great importance, especially in light of recent reports of successful attacks. We propose a new procedure to reinforce the ciphertexts generated in double random phase encrypting experimental setups. This ciphertext is protected by multiplexing with a ‘salt’ ciphertext coded with the same setup. We present an experimental implementation of the ‘salting’ technique. Thereafter, we analyze the resistance of the ‘salted’ ciphertext under some of the commonly known attacks reported in the literature, demonstrating the validity of our proposal.
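A simplified numerical analogue of double random phase encoding and the 'salting' idea can be written with two FFTs. Here the salt is removed by simple subtraction, whereas the authors' scheme multiplexes ciphertexts optically; the image, mask seeds, and sizes are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
img = np.zeros((N, N))
img[24:40, 24:40] = 1.0               # toy "plaintext" image

# Two statistically independent random phase masks (the DRPE keys).
m1 = np.exp(2j * np.pi * rng.random((N, N)))
m2 = np.exp(2j * np.pi * rng.random((N, N)))

def drpe_encrypt(f):
    """Double random phase encoding: mask, transform, mask, transform."""
    return np.fft.ifft2(np.fft.fft2(f * m1) * m2)

def drpe_decrypt(c):
    """Invert both transforms and both phase masks."""
    return np.abs(np.fft.ifft2(np.fft.fft2(c) / m2) / m1)

salt = drpe_encrypt(rng.random((N, N)))   # 'salt' ciphertext
protected = drpe_encrypt(img) + salt      # 'salted' (multiplexed) ciphertext

# An authorized user removes the salt before ordinary DRPE decryption.
recovered = drpe_decrypt(protected - salt)
print(np.allclose(recovered, img, atol=1e-8))  # True
```

An attacker who recovers the phase keys but not the salt still decrypts the noise-contaminated sum, which is the extra protection the salting provides.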

  12. Opto-electronic characterization of third-generation solar cells

    PubMed Central

    Jenatsch, Sandra

    2018-01-01

    Abstract We present an overview of opto-electronic characterization techniques for solar cells including light-induced charge extraction by linearly increasing voltage, impedance spectroscopy, transient photovoltage, charge extraction and more. Guidelines for the interpretation of experimental results are derived based on charge drift-diffusion simulations of solar cells with common performance limitations. It is investigated how nonidealities like charge injection barriers, traps and low mobilities among others manifest themselves in each of the studied cell characterization techniques. Moreover, comprehensive parameter extraction for an organic bulk-heterojunction solar cell comprising PCDTBT:PC70BM is demonstrated. The simulations reproduce measured results of 9 different experimental techniques. Parameter correlation is minimized due to the combination of various techniques. Thereby a route to comprehensive and accurate parameter extraction is identified. PMID:29707069

  13. Conditional clustering of temporal expression profiles

    PubMed Central

    Wang, Ling; Montano, Monty; Rarick, Matt; Sebastiani, Paola

    2008-01-01

    Background Many microarray experiments produce temporal profiles in different biological conditions but common cluster techniques are not able to analyze the data conditional on the biological conditions. Results This article presents a novel technique to cluster data from time course microarray experiments performed across several experimental conditions. Our algorithm uses polynomial models to describe the gene expression patterns over time, a full Bayesian approach with proper conjugate priors to make the algorithm invariant to linear transformations, and an iterative procedure to identify genes that have a common temporal expression profile across two or more experimental conditions, and genes that have a unique temporal profile in a specific condition. Conclusion We use simulated data to evaluate the effectiveness of this new algorithm in finding the correct number of clusters and in identifying genes with common and unique profiles. We also use the algorithm to characterize the response of human T cells to stimulation of antigen-receptor signaling, based on gene expression temporal profiles measured in six different biological conditions, and we identify common and unique genes. These studies suggest that the methodology proposed here is useful in identifying and distinguishing uniquely stimulated genes from commonly stimulated genes in response to variable stimuli. Software for using this clustering method is available from the project home page. PMID:18334028
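The polynomial-model step can be sketched in a few lines. The example below fits degree-1 polynomials to toy temporal profiles and groups genes by fitted slope; the genes, time points, and grouping rule are invented, and the article's actual method layers a full Bayesian clustering on top of such curve descriptions:

```python
import numpy as np

# Toy time-course data: expression of four genes at five time points
# in one experimental condition (values are illustrative).
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
genes = {
    "geneA": 1.0 + 0.50 * t,   # rising profile
    "geneB": 1.1 + 0.48 * t,   # similar rising profile
    "geneC": 5.0 - 0.50 * t,   # falling profile
    "geneD": 5.2 - 0.52 * t,   # similar falling profile
}

# Describe each temporal profile with a polynomial in time.
coeffs = {g: np.polyfit(t, y, deg=1) for g, y in genes.items()}

# Crude grouping by the sign of the fitted slope (coeffs[g][0]).
rising = sorted(g for g, c in coeffs.items() if c[0] > 0)
print(rising)  # ['geneA', 'geneB']
```

Comparing fitted coefficients across conditions, rather than raw profiles, is what lets the method decide whether a gene's behaviour is common to several conditions or unique to one.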

  14. V/STOL and STOL ground effects and testing techniques

    NASA Technical Reports Server (NTRS)

    Kuhn, R. E.

    1987-01-01

    The ground effects associated with V/STOL operation were examined and an effort was made to develop the equipment and testing techniques needed for that understanding. Primary emphasis was on future experimental programs in the 40 x 80 and the 80 x 120 foot test sections and in the outdoor static test stand associated with these facilities. The commonly used experimental techniques are reviewed and data obtained by various techniques are compared with each other and with available estimating methods. These reviews and comparisons provide insight into the limitations of past studies and the testing techniques used and identify areas where additional work is needed. The understanding of the flow mechanics involved in hovering and in transition in and out of ground effect is discussed. The basic flow fields associated with hovering, transition and STOL operation of jet powered V/STOL aircraft are depicted.

  15. Models for randomly distributed nanoscopic domains on spherical vesicles

    NASA Astrophysics Data System (ADS)

    Anghel, Vinicius N. P.; Bolmatov, Dima; Katsaras, John

    2018-06-01

    The existence of lipid domains in the plasma membrane of biological systems has proven controversial, primarily due to their nanoscopic size—a length scale difficult to interrogate with most commonly used experimental techniques. Scattering techniques have recently proven capable of studying nanoscopic lipid domains populating spherical vesicles. However, the development of analytical methods capable of predicting and analyzing domain pair correlations from such experiments has not kept pace. Here, we developed models for the random distribution of monodisperse, circular nanoscopic domains averaged on the surface of a spherical vesicle. Specifically, the models take into account (i) intradomain correlations corresponding to form factors and interdomain correlations corresponding to pair distribution functions, and (ii) the analytical computation of interdomain correlations for cases of two and three domains on a spherical vesicle. In the case of more than three domains, these correlations are treated either by Monte Carlo simulations or by spherical analogs of the Ornstein-Zernike and Percus-Yevick (PY) equations. Importantly, the spherical analog of the PY equation works best in the case of nanoscopic size domains, a length scale that is mostly inaccessible by experimental approaches such as, for example, fluorescent techniques and optical microscopies. The analytical form factors and structure factors of nanoscopic domains populating a spherical vesicle provide a new and important framework for the quantitative analysis of experimental data from commonly studied phase-separated vesicles used in a wide range of biophysical studies.
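The Monte Carlo branch of such models reduces to placing non-overlapping circular caps at random on a sphere and collecting the pairwise centre-separation angles. A minimal sketch, with domain count and angular radius chosen arbitrarily; the paper's actual averaging and structure-factor machinery is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unit_vectors(n):
    """Uniformly distributed points on the unit sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def place_domains(n_domains, angular_radius, max_tries=10000):
    """Sequentially place non-overlapping circular domains (spherical
    caps): two caps overlap when their centre-separation angle is less
    than twice the cap's angular radius."""
    centres = []
    for _ in range(max_tries):
        c = random_unit_vectors(1)[0]
        if all(np.arccos(np.clip(c @ d, -1.0, 1.0)) >= 2 * angular_radius
               for d in centres):
            centres.append(c)
            if len(centres) == n_domains:
                return np.array(centres)
    raise RuntimeError("could not place all domains")

centres = place_domains(n_domains=5, angular_radius=0.15)

# Pairwise centre-separation angles feed the pair distribution function.
angles = [np.arccos(np.clip(a @ b, -1.0, 1.0))
          for i, a in enumerate(centres) for b in centres[i + 1:]]
print(len(angles))  # 10 pairs for 5 domains
```

Histogramming such angles over many realizations is one way to estimate the interdomain pair correlations that the paper treats analytically for two and three domains.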

  16. An Approach for Practical Use of Common-Mode Noise Reduction Technique for In-Vehicle Electronic Equipment

    NASA Astrophysics Data System (ADS)

    Uno, Takanori; Ichikawa, Kouji; Mabuchi, Yuichi; Nakamura, Atsushi; Okazaki, Yuji; Asai, Hideki

    In this paper, we studied the use of a common-mode noise reduction technique for in-vehicle electronic equipment in an actual instrument design. We improved the circuit model of the common-mode noise that flows to the wire harness by adding the effect of a bypass capacitor located near the LSI. We analyzed the improved circuit model using a circuit simulator and verified the effectiveness of the noise reduction condition derived from the model. It was also confirmed that offsetting the impedance mismatch in the PCB section requires a circuit constant larger than that needed to offset the mismatch in the LSI section. An evaluation circuit board comprising an automotive microcomputer was prototyped to test the common-mode noise reduction effect. The experimental results confirmed the noise reduction effect of the board, and also revealed that the degree of impedance mismatch in the LSI section can be estimated by using a PCB with a known impedance. We further inquired into the optimization of impedance parameters, which is difficult for actual products at present. To satisfy the noise reduction condition, which comprises numerous parameters, we proposed a design method using an optimization algorithm and an electromagnetic field simulator, and confirmed its effectiveness.

  17. Does Angling Technique Selectively Target Fishes Based on Their Behavioural Type?

    PubMed Central

    Wilson, Alexander D. M.; Brownscombe, Jacob W.; Sullivan, Brittany; Jain-Schlaepfer, Sofia; Cooke, Steven J.

    2015-01-01

    Recently, there has been growing recognition that fish harvesting practices can have important impacts on the phenotypic distributions and diversity of natural populations through a phenomenon known as fisheries-induced evolution. Here we experimentally show that two common recreational angling techniques (active crank baits versus passive soft plastics) differentially target wild largemouth bass (Micropterus salmoides) and rock bass (Ambloplites rupestris) based on variation in their behavioural tendencies. Fish were first angled in the wild using both techniques and then brought back to the laboratory and tested for individual-level differences in common estimates of personality (refuge emergence, flight-initiation-distance, latency-to-recapture with a net, and general activity) in an in-lake experimental arena. We found that different angling techniques appear to selectively target these species based on their boldness (as characterized by refuge emergence, a standard measure of boldness in fishes) but not other assays of personality. We also observed that body size was independently a significant predictor of personality in both species, though this varied between traits and species. Our results suggest a context-dependency for vulnerability to capture relative to behaviour in these fish species. Ascertaining the selective pressures angling practices exert on natural populations is an important area of fisheries research with significant implications for ecology, evolution, and resource management. PMID:26284779

  18. Quantitative Analysis by Isotopic Dilution Using Mass Spectroscopy: The Determination of Caffeine by GC-MS.

    ERIC Educational Resources Information Center

    Hill, Devon W.; And Others

    1988-01-01

    Describes a laboratory technique for quantitative analysis of caffeine by an isotopic dilution method for coupled gas chromatography-mass spectroscopy. Discusses caffeine analysis and experimental methodology. Lists sample caffeine concentrations found in common products. (MVL)
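The isotope-dilution calculation behind such an experiment is short. A minimal sketch, assuming the labelled internal standard and the analyte give fully resolved ions of equal response; real GC-MS work must also correct for isotopic overlap and response factors, and all numbers here are invented for illustration:

```python
# Minimal isotope-dilution quantitation: a known mass of labelled
# caffeine is spiked into the sample, and the analyte mass follows
# from the measured ion-current ratio.
spike_ug = 50.0          # mass of labelled caffeine added (micrograms)
area_analyte = 1.84e6    # integrated ion current, unlabelled caffeine ion
area_spike = 1.15e6      # integrated ion current, labelled caffeine ion

caffeine_ug = spike_ug * area_analyte / area_spike
print(round(caffeine_ug, 1))  # 80.0
```

Because analyte and spike co-elute and ionize nearly identically, losses during workup cancel out of the ratio, which is the chief pedagogical point of the method.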

  19. Single-molecule experiments in biological physics: methods and applications.

    PubMed

    Ritort, F

    2006-08-16

    I review single-molecule experiments (SMEs) in biological physics. Recent technological developments have provided the tools to design and build scientific instruments of high enough sensitivity and precision to manipulate and visualize individual molecules and measure microscopic forces. Using SMEs it is possible to manipulate molecules one at a time and measure distributions describing molecular properties, characterize the kinetics of biomolecular reactions and detect molecular intermediates. SMEs provide additional information about thermodynamics and kinetics of biomolecular processes. This complements information obtained in traditional bulk assays. In SMEs it is also possible to measure small energies and detect large Brownian deviations in biomolecular reactions, thereby offering new methods and systems to scrutinize the basic foundations of statistical mechanics. This review is written at a very introductory level, emphasizing the importance of SMEs to scientists interested in knowing the common playground of ideas and the interdisciplinary topics accessible by these techniques. The review discusses SMEs from an experimental perspective, first exposing the most common experimental methodologies and later presenting various molecular systems where such techniques have been applied. I briefly discuss experimental techniques such as atomic-force microscopy (AFM), laser optical tweezers (LOTs), magnetic tweezers (MTs), biomembrane force probes (BFPs) and single-molecule fluorescence (SMF). I then present several applications of SME to the study of nucleic acids (DNA, RNA and DNA condensation) and proteins (protein-protein interactions, protein folding and molecular motors). Finally, I discuss applications of SMEs to the study of the nonequilibrium thermodynamics of small systems and the experimental verification of fluctuation theorems. I conclude with a discussion of open questions and future perspectives.

  20. TOPICAL REVIEW: Single-molecule experiments in biological physics: methods and applications

    NASA Astrophysics Data System (ADS)

    Ritort, F.

    2006-08-01

    I review single-molecule experiments (SMEs) in biological physics. Recent technological developments have provided the tools to design and build scientific instruments of high enough sensitivity and precision to manipulate and visualize individual molecules and measure microscopic forces. Using SMEs it is possible to manipulate molecules one at a time and measure distributions describing molecular properties, characterize the kinetics of biomolecular reactions and detect molecular intermediates. SMEs provide additional information about thermodynamics and kinetics of biomolecular processes. This complements information obtained in traditional bulk assays. In SMEs it is also possible to measure small energies and detect large Brownian deviations in biomolecular reactions, thereby offering new methods and systems to scrutinize the basic foundations of statistical mechanics. This review is written at a very introductory level, emphasizing the importance of SMEs to scientists interested in knowing the common playground of ideas and the interdisciplinary topics accessible by these techniques. The review discusses SMEs from an experimental perspective, first exposing the most common experimental methodologies and later presenting various molecular systems where such techniques have been applied. I briefly discuss experimental techniques such as atomic-force microscopy (AFM), laser optical tweezers (LOTs), magnetic tweezers (MTs), biomembrane force probes (BFPs) and single-molecule fluorescence (SMF). I then present several applications of SME to the study of nucleic acids (DNA, RNA and DNA condensation) and proteins (protein-protein interactions, protein folding and molecular motors). Finally, I discuss applications of SMEs to the study of the nonequilibrium thermodynamics of small systems and the experimental verification of fluctuation theorems. I conclude with a discussion of open questions and future perspectives.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Labby, Z.

    Physicists are often expected to have a solid grounding in experimental design and statistical analysis, sometimes filling in when biostatisticians or other experts are not available for consultation. Unfortunately, graduate education on these topics is seldom emphasized and few opportunities for continuing education exist. Clinical physicists incorporate new technology and methods into their practice based on published literature. A poor understanding of experimental design and analysis could result in inappropriate use of new techniques. Clinical physicists also improve current practice through quality initiatives that require sound experimental design and analysis. Academic physicists with a poor understanding of design and analysis may produce ambiguous (or misleading) results. This can result in unnecessary rewrites, publication rejection, and experimental redesign (wasting time, money, and effort). This symposium will provide a practical review of error and uncertainty, common study designs, and statistical tests. Instruction will primarily focus on practical implementation through examples and will answer questions such as: where would you typically apply the test/design, and where is the test/design typically misapplied (i.e., common pitfalls)? An analysis of error and uncertainty will also be explored using biological studies and associated modeling as a specific use case. Learning Objectives: Understand common experimental testing and clinical trial designs, what questions they can answer, and how to interpret the results. Determine where specific statistical tests are appropriate and identify common pitfalls. Understand how uncertainty and error are addressed in biological testing and associated biological modeling.
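One of the pitfalls alluded to above is applying the equal-variance two-sample t-test where Welch's version is safer. A sketch with synthetic measurements; the scenario and numbers are invented, not from the symposium:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical QA output readings from two machine setups; the
# question is whether their mean outputs differ.
setup_a = rng.normal(loc=100.0, scale=0.5, size=20)
setup_b = rng.normal(loc=103.0, scale=0.5, size=20)

# Welch's two-sample t-test (equal_var=False): does not assume the two
# groups share a variance, a common source of misapplied tests.
t_stat, p_value = stats.ttest_ind(setup_a, setup_b, equal_var=False)
print(p_value < 0.05)  # a 3-unit shift at this noise level is detected
```

Choosing the test to match the design (paired vs. independent, equal vs. unequal variance) is exactly the kind of decision the symposium targets.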

  2. Investigating true and false confessions within a novel experimental paradigm.

    PubMed

    Russano, Melissa B; Meissner, Christian A; Narchet, Fadia M; Kassin, Saul M

    2005-06-01

    The primary goal of the current study was to develop a novel experimental paradigm with which to study the influence of psychologically based interrogation techniques on the likelihood of true and false confessions. The paradigm involves guilty and innocent participants being accused of intentionally breaking an experimental rule, or "cheating." In the first demonstration of this paradigm, we explored the influence of two common police interrogation tactics: minimization and an explicit offer of leniency, or a "deal." Results indicated that guilty persons were more likely to confess than innocent persons, and that the use of minimization and the offer of a deal increased the rate of both true and false confessions. Police investigators are encouraged to avoid interrogation techniques that imply or directly promise leniency, as they appear to reduce the diagnostic value of any confession that is elicited.

  3. Investigation of composite materials property requirements for sonic fatigue research

    NASA Technical Reports Server (NTRS)

    Patrick, H. V. L.

    1985-01-01

    Experimental techniques for determining the extensional and bending stiffness characteristics for symmetric laminates are presented. Vibrational test techniques for determining the dynamic modulus and material damping are also discussed. Partial extensional stiffness results initially indicate that the laminate theory used for predicting stiffness is accurate. It is clearly shown that the laminate theory can only be as accurate as the physical characteristics describing the lamina, which may vary significantly. It is recommended that all of the stiffness characteristics in both extension and bending be experimentally determined to fully verify the laminate theory. Dynamic modulus should be experimentally evaluated to determine if static data adequately predicts dynamic behavior. Material damping should also be ascertained because laminate damping is an order of magnitude greater than found in common metals and can significantly affect the displacement response of composite panels.

  4. Dynamic Load Measurement of Ballistic Gelatin Impact Using an Instrumented Tube

    NASA Technical Reports Server (NTRS)

    Seidt, J. D.; Periira, J. M.; Hammer, J. T.; Gilat, A.; Ruggeri, C. R.

    2012-01-01

    Bird strikes are a common problem for the aerospace industry and can cause serious damage to an aircraft. Ballistic gelatin is frequently used as a surrogate for actual bird carcasses in bird strike tests. Numerical simulations of these tests are used to supplement experimental data; it is therefore necessary to use numerical modeling techniques that can accurately capture the dynamic response of ballistic gelatin. An experimental technique is introduced to validate these modeling techniques. A ballistic gelatin projectile is fired into a strike plate attached to a 36 in. long sensor tube. Dynamic load is measured at two locations relative to the strike plate using strain gages configured in a full Wheatstone bridge. Data from these experiments are used to validate a gelatin constitutive model. Simulations of the apparatus are analyzed to investigate its performance.
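    The core data-reduction step in an instrumented-tube measurement like this is converting bridge output voltage to load. The sketch below is an illustrative sketch only: the idealized bridge relation v_out/v_exc = GF·ε, and every numerical value (gage factor, excitation, tube geometry, modulus), are assumptions for demonstration, not parameters from the paper; a real bridge's sensitivity factor depends on the gage arrangement.

```python
# Hedged sketch: convert full Wheatstone-bridge output to axial load on a
# tube. All numerical values are illustrative assumptions, and the bridge
# relation v_out/v_exc = GF * strain is an idealization.

import math

def bridge_to_load(v_out, v_exc=10.0, gage_factor=2.0,
                   outer_d=0.0762, inner_d=0.0700, modulus=200e9):
    """Return axial load (N) from bridge output voltage (V),
    assuming the idealized relation v_out/v_exc = GF * strain."""
    strain = v_out / (v_exc * gage_factor)            # axial strain
    area = math.pi / 4.0 * (outer_d**2 - inner_d**2)  # tube cross-section, m^2
    stress = modulus * strain                          # Hooke's law, Pa
    return stress * area                               # load, N

# Example: 1 mV output on a 10 V bridge -> 50 microstrain on a steel tube.
load = bridge_to_load(1e-3)
```

    In practice the two measurement locations along the tube would each have such a bridge, and the time histories would be compared against the simulation.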

  5. Efficient optical injection locking of electronic oscillators

    NASA Astrophysics Data System (ADS)

    Cochran, S. R.; Wang, S. Y.

    1989-05-01

    The paper presents techniques for direct optical injection locking of electronic oscillators and analyzes the problem of direct optical injection locking of a common-source FET oscillator using a high impedance optoelectronic transducer. A figure-of-merit for optically injection locked oscillators is defined, and an experimental oscillator based on the design criteria was fabricated. The oscillator achieved efficient, high power operation and moderate locking bandwidth with small locking signal magnitude. The experimental results are consistent with the theoretical model.

  6. A comparison of solute-transport solution techniques and their effect on sensitivity analysis and inverse modeling results

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2001-01-01

    Five common numerical techniques for solving the advection-dispersion equation (finite difference, predictor corrector, total variation diminishing, method of characteristics, and modified method of characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using discrete, randomly distributed, homogeneous blocks of five sand types. This experimental model provides an opportunity to compare the solution techniques: the heterogeneous hydraulic-conductivity distribution of known structure can be accurately represented by a numerical model, and detailed measurements can be compared with simulated concentrations and total flow through the tank. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation given the different methods of simulating solute transport. The breakthrough curves show that simulated peak concentrations, even at very fine grid spacings, varied between the techniques because of different amounts of numerical dispersion. Sensitivity-analysis results revealed: (1) a high correlation between hydraulic conductivity and porosity given the concentration and flow observations used, so that the two could not be estimated independently; and (2) that the breakthrough curve data did not provide enough information to estimate individual values of dispersivity for the five sands. This study demonstrates that the choice of assigned dispersivity and the amount of numerical dispersion present in the solution technique influence estimated hydraulic conductivity values to a surprising degree.
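    The numerical dispersion the abstract refers to can be seen in the simplest of the five schemes. The following is a minimal sketch, not the USGS model: a first-order explicit upwind discretization of pure advection on an assumed uniform grid, showing how a sharp concentration front is smeared even though the exact solution stays sharp.

```python
# Minimal sketch: first-order upwind finite differences for pure advection
# c_t + v c_x = 0 (v > 0), illustrating numerical dispersion smearing a
# sharp front. Grid spacing, velocity, and time step are illustrative.

import numpy as np

def upwind_advect(c0, velocity, dx, dt, steps):
    """Advance the concentration field with the explicit upwind scheme."""
    cfl = velocity * dt / dx
    assert cfl <= 1.0, "CFL condition violated"
    c = c0.copy()
    for _ in range(steps):
        c[1:] = c[1:] - cfl * (c[1:] - c[:-1])  # upwind difference
    return c

nx = 200
c0 = np.where(np.arange(nx) < 20, 1.0, 0.0)  # sharp front at cell 20
c = upwind_advect(c0, velocity=1.0, dx=1.0, dt=0.5, steps=200)
# The exact solution is still a step, now at cell 120; the numerical one
# shows intermediate concentrations spread over many cells around it.
```

    The effective numerical diffusion of this scheme is v·dx·(1-CFL)/2, which is why peak concentrations in the paper's comparison depend on both the grid spacing and the chosen technique.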

  7. Isolation and Analysis of Essential Oils from Spices

    ERIC Educational Resources Information Center

    O'Shea, Stephen K.; Von Riesen, Daniel D.; Rossi, Lauren L.

    2012-01-01

    Natural product isolation and analysis provide an opportunity to present a variety of experimental techniques to undergraduate students in introductory organic chemistry. Eugenol, anethole, and carvone were extracted from six common spices using steam-distillation and diethyl ether as the extraction solvent. Students assessed the purity of their…

  8. Investigations of ultrafast charge dynamics in laser-irradiated targets by a self probing technique employing laser driven protons

    NASA Astrophysics Data System (ADS)

    Ahmed, H.; Kar, S.; Cantono, G.; Nersisyan, G.; Brauckmann, S.; Doria, D.; Gwynne, D.; Macchi, A.; Naughton, K.; Willi, O.; Lewis, C. L. S.; Borghesi, M.

    2016-09-01

    The divergent and broadband proton beams produced by the target normal sheath acceleration mechanism provide the unique opportunity to probe, in a point-projection imaging scheme, the dynamics of the transient electric and magnetic fields produced during laser-plasma interactions. Commonly, such an experimental setup entails two intense laser beams, where the interaction produced by one beam is probed with the protons produced by the second. We present here experimental studies of the ultra-fast charge dynamics along a wire connected to a laser-irradiated target, carried out by employing a 'self' proton-probing arrangement - i.e. by connecting the wire to the target generating the probe protons. The experimental data show that an electromagnetic pulse carrying a significant amount of charge is launched along the wire and travels as a unified pulse of tens of picoseconds duration with a velocity close to the speed of light. The experimental capabilities and the analysis procedure of this specific type of proton probing technique are discussed.

  9. An overview of HyFIE Technical Research Project: cross-testing in main European hypersonic wind tunnels on EXPERT body

    NASA Astrophysics Data System (ADS)

    Brazier, Jean-Philippe; Martinez Schramm, Jan; Paris, Sébastien; Gawehn, Thomas; Reimann, Bodo

    2016-09-01

    HyFIE project aimed at improving the measurement techniques in hypersonic wind tunnels and comparing the experimental data provided by four major European facilities: DLR HEG and H2K, ONERA F4 and VKI Longshot. A common geometry of EXPERT body was chosen and four different models were used. A large amount of experimental data was collected and compared with the results of numerical simulations. Collapsing all the measured values showed a good agreement between the different facilities, as well as between experimental and computed data.

  10. Protein Modelling: What Happened to the “Protein Structure Gap”?

    PubMed Central

    Schwede, Torsten

    2013-01-01

    Computational modeling and prediction of three-dimensional macromolecular structures and complexes from their sequence has been a long standing vision in structural biology as it holds the promise to bypass part of the laborious process of experimental structure solution. Over the last two decades, a paradigm shift has occurred: starting from a situation where the “structure knowledge gap” between the huge number of protein sequences and small number of known structures has hampered the widespread use of structure-based approaches in life science research, today some form of structural information – either experimental or computational – is available for the majority of amino acids encoded by common model organism genomes. Template based homology modeling techniques have matured to a point where they are now routinely used to complement experimental techniques. With the scientific focus of interest moving towards larger macromolecular complexes and dynamic networks of interactions, the integration of computational modeling methods with low-resolution experimental techniques allows studying large and complex molecular machines. Computational modeling and prediction techniques are still facing a number of challenges which hamper the more widespread use by the non-expert scientist. For example, it is often difficult to convey the underlying assumptions of a computational technique, as well as the expected accuracy and structural variability of a specific model. However, these aspects are crucial to understand the limitations of a model, and to decide which interpretations and conclusions can be supported. PMID:24010712

  11. Condensation enhancement by means of electrohydrodynamic techniques

    NASA Astrophysics Data System (ADS)

    Butrymowicz, Dariusz; Karwacki, Jarosław; Trela, Marian

    2014-12-01

    A short state-of-the-art review of condensation heat transfer enhancement techniques based on condensate drainage is presented in this paper. The electrohydrodynamic (EHD) technique is suitable for dielectric media used in refrigeration, organic Rankine cycles and heat pump devices. In the case of horizontal tubes, the electric field is commonly generated by means of a rod-type electrode or mesh electrodes. In the experimental investigations presented here, the authors propose two geometries. The first was an electrode placed just beneath the tube bottom; the second consisted of a horizontal finned tube with a double electrode placed beneath the tube. Experimental investigations of these two configurations for condensation of refrigerant R-123 have been accomplished. The obtained results confirmed that the application of the EHD technique for the investigated tube and electrode arrangement caused a significant increase in the heat transfer coefficient. The condensation enhancement depends both on the geometry of the electrode system and on the applied voltage.

  12. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation.

    PubMed

    Gray, Alan; Harlen, Oliver G; Harris, Sarah A; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J; Pearson, Arwen R; Read, Daniel J; Richardson, Robin A

    2015-01-01

    Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  13. Partial Least Squares with Structured Output for Modelling the Metabolomics Data Obtained from Complex Experimental Designs: A Study into the Y-Block Coding.

    PubMed

    Xu, Yun; Muhamadali, Howbeer; Sayqal, Ali; Dixon, Neil; Goodacre, Royston

    2016-10-28

    Partial least squares (PLS) is one of the most commonly used supervised modelling approaches for analysing multivariate metabolomics data. PLS is typically employed as either a regression model (PLS-R) or a classification model (PLS-DA). However, in metabolomics studies it is common to investigate multiple, potentially interacting, factors simultaneously following a specific experimental design. Such data often cannot be considered a "pure" regression or classification problem. Nevertheless, these data have often still been treated as one or the other, which can lead to ambiguous results. In this study, we investigated the feasibility of designing a hybrid target matrix Y that better reflects the experimental design than the simple regression or binary class-membership coding commonly used in PLS modelling. The new design of Y coding was based on the same principle used by structured output modelling in machine learning. Two real metabolomics datasets are used as examples to illustrate how the new Y coding can improve the interpretability of the PLS model compared to classic regression/classification coding.
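    The contrast between binary class coding and a structured Y block can be made concrete. The sketch below is an illustration under invented assumptions (a two-level categorical "strain" factor plus a continuous "dose" factor); the paper's actual factors and coding scheme may differ.

```python
# Sketch: building a structured Y block for PLS from a two-factor design,
# compared with plain PLS-DA binary coding. Factor names, levels, and the
# hybrid scheme shown are illustrative assumptions, not the paper's coding.

import numpy as np

strains = np.array(["wt", "wt", "mut", "mut", "wt", "mut"])  # categorical
doses = np.array([0.0, 1.0, 0.0, 1.0, 2.0, 2.0])             # continuous

# Classic PLS-DA coding: one binary column per class, dose ignored.
classes = np.unique(strains)
Y_da = (strains[:, None] == classes[None, :]).astype(float)

# Hybrid coding: one-hot columns for the categorical factor plus a
# standardized continuous column for dose, so a single PLS model is asked
# to explain both design factors simultaneously.
dose_z = (doses - doses.mean()) / doses.std()
Y_hybrid = np.column_stack([Y_da, dose_z])
```

    Either matrix can be passed as the multi-column response to a standard PLS2 fit; the point of the hybrid coding is that the latent variables are then steered by the full design rather than by class membership alone.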

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ongari, Daniele; Boyd, Peter G.; Barthel, Senja

    Pore volume is one of the main properties for the characterization of microporous crystals. It is experimentally measurable, and it can also be obtained from the refined unit cell by a number of computational techniques. In this work, we assess the accuracy and the discrepancies between the different computational methods which are commonly used for this purpose, i.e., geometric, helium, and probe center pore volumes, by studying a database of more than 5000 frameworks. We developed a new technique to fully characterize the internal void of a microporous material and to compute the probe-accessible and -occupiable pore volume. Lastly, we show that, unlike the other definitions of pore volume, the occupiable pore volume can be directly related to the experimentally measured pore volumes from nitrogen isotherms.

  15. The development of laser speckle velocimetry for the study of vortical flows

    NASA Technical Reports Server (NTRS)

    Krothapalli, A.

    1991-01-01

    A new experimental technique, commonly known as PIDV (particle image displacement velocimetry), was developed to measure an instantaneous two-dimensional velocity field in a selected plane of the flow field. This technique was successfully applied to the study of several problems: (1) unsteady flows with large scale vortical structures; (2) the instantaneous two-dimensional flow in the transition region of a rectangular air jet; and (3) the instantaneous flow over a circular bump in a transonic flow. In several other experiments, PIDV is routinely used as a non-intrusive measurement technique to obtain instantaneous two-dimensional velocity fields.
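    The displacement estimation at the heart of PIDV-style techniques is usually done by cross-correlating interrogation windows from two exposures. The sketch below is a generic FFT cross-correlation on synthetic images, an assumption-laden stand-in for the study's actual speckle/particle processing.

```python
# Sketch of the core displacement-estimation step in PIV/PIDV: locate the
# cross-correlation peak between two interrogation windows via FFTs.
# The synthetic "images" are illustrative, not data from the study.

import numpy as np

def displacement(win_a, win_b):
    """Integer-pixel displacement of win_b relative to win_a, found as the
    peak of their circular cross-correlation (wrap-around assumed)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices to signed shifts.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
moved = np.roll(img, shift=(3, -2), axis=(0, 1))  # known displacement
dy, dx = displacement(img, moved)
# → (3, -2)
```

    Dividing the recovered displacement by the inter-exposure time gives the local velocity vector; repeating this over a grid of windows yields the instantaneous two-dimensional velocity field.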

  16. Study the effect of elevated dies temperature on aluminium and steel round deep drawing

    NASA Astrophysics Data System (ADS)

    Lean, Yeong Wei; Azuddin, M.

    2016-02-01

    Round deep drawing can otherwise only be realized by expensive multi-step production processes. To reduce the cost of these processes while still obtaining an acceptable result, round deep drawing can be done at elevated temperature. There are three common problems in deep drawing a round cup: fracture, wrinkling and earing. The main objective is to investigate the effect of die temperature on aluminium and steel round deep drawing, with the sub-objective of eliminating fracture and reducing the wrinkling effect. Experiments were conducted with three different die-heating techniques: heating both the upper and lower dies, heating only the upper die, and heating only the lower die. Four different temperatures were chosen throughout the experiment. The experimental results were then compared with finite element analysis. Heating both the upper and lower dies gave a positive result for the steel material, with the simulation results comparable to the experimental results, making it the best of the three heating techniques.

  17. Multispectral Photogrammetric Data Acquisition and Processing for Wall Paintings Studies

    NASA Astrophysics Data System (ADS)

    Pamart, A.; Guillon, O.; Faraci, S.; Gattet, E.; Genevois, M.; Vallet, J. M.; De Luca, L.

    2017-02-01

    In the field of wall paintings studies, different imaging techniques are commonly used for documentation and for decision making in terms of conservation and restoration. There are nowadays challenging issues in merging scientific imaging techniques in a multimodal context (i.e. multi-sensor, multi-dimensional, multi-spectral and multi-temporal approaches). For decades these cultural heritage objects have been widely documented with Technical Photography (TP), which gives precious information for understanding or retrieving the painting layouts and history. More recently, there is an increasing demand for digital photogrammetry in order to provide, as one of the possible outputs, an orthophotomosaic, which allows metrical quantification of conservators'/restorers' observations and action planning. This paper presents some ongoing experimentation of the LabCom MAP-CICRP relying on the assumption that those techniques can be merged through a common pipeline to share their respective benefits and create a more complete documentation.

  18. [Parallel virtual reality visualization of extremely large medical datasets].

    PubMed

    Tang, Min

    2010-04-01

    On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with the Intranet and common-configuration computers of hospitals. This paper introduces several kernel techniques, including the hardware structure, software framework, load balancing and virtual reality visualization. The Maximum Intensity Projection algorithm is realized in parallel using a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated and cut interactively and conveniently through the control panel built on the Virtual Reality Modeling Language (VRML). Experimental results demonstrate that this method provides promising, real-time results and can play the role of a good assistant in making clinical diagnoses.
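    Maximum Intensity Projection itself is a one-line reduction, which is exactly why it parallelizes so cleanly across a cluster: the maximum of slab-wise maxima equals the global maximum. The sketch below shows the serial kernel and the slab decomposition on toy data; the paper's actual cluster partitioning and communication details are not described here.

```python
# Minimal sketch of the Maximum Intensity Projection (MIP) kernel and of
# why it parallelizes: each node projects a slab of the volume, and the
# partial projections merge with an element-wise max. Toy data only.

import numpy as np

def mip(volume, axis=0):
    """Project a 3-D volume to 2-D by taking the max along one axis."""
    return volume.max(axis=axis)

rng = np.random.default_rng(1)
vol = rng.random((8, 16, 16))  # stand-in for a CT/MR volume

# Slab decomposition, as cluster nodes would see it:
partials = [mip(vol[i:i + 2]) for i in range(0, 8, 2)]
combined = np.maximum.reduce(partials)  # merge step on the master node
assert np.allclose(combined, mip(vol))  # slab merge equals full projection
```

    Load balancing, as mentioned in the abstract, then amounts to choosing slab sizes so each node finishes its partial projection at roughly the same time.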

  19. "Open-Box" Approach to Measuring Fluorescence Quenching Using an iPad Screen and Digital SLR Camera

    ERIC Educational Resources Information Center

    Koenig, Michael H.; Yi, Eun P.; Sandridge, Matthew J.; Mathew, Alexander S.; Demas, James N.

    2015-01-01

    Fluorescence quenching is an analytical technique and a common undergraduate laboratory exercise. Unfortunately, a typical quenching experiment requires the use of an expensive fluorometer that measures the relative fluorescence intensity of a single sample in a closed compartment unseen by the experimenter. To overcome these shortcomings, we…

  20. Isothermal Titration Calorimetry in the Student Laboratory

    ERIC Educational Resources Information Center

    Wadso, Lars; Li, Yujing; Li, Xi

    2011-01-01

    Isothermal titration calorimetry (ITC) is the measurement of the heat produced by the stepwise addition of one substance to another. It is a common experimental technique, for example, in pharmaceutical science, to measure equilibrium constants and reaction enthalpies. We describe a stirring device and an injection pump that can be used with a…

  1. Identification and evaluation of reliable reference genes for quantitative real-time PCR analysis in tea plant (Camellia sinensis (L.) O. Kuntze)

    USDA-ARS?s Scientific Manuscript database

    Quantitative real-time polymerase chain reaction (qRT-PCR) is a commonly used technique for measuring gene expression levels due to its simplicity, specificity, and sensitivity. Reliable reference selection for the accurate quantification of gene expression under various experimental conditions is a...

  2. Application of the laser induced deflection (LID) technique for low absorption measurements in bulk materials and coatings

    NASA Astrophysics Data System (ADS)

    Triebel, W.; Mühlig, C.; Kufert, S.

    2005-10-01

    Precise absorption measurements of bulk materials and coatings upon pulsed ArF laser irradiation are presented using a compact experimental setup based on the laser induced deflection (LID) technique. For absorption measurements of bulk materials, the influence of pure bulk and pure surface absorption on the temperature and refractive index profile, and thus on the probe beam deflection, is analyzed in detail. The separation of bulk and surface absorption via the commonly used variation of the sample thickness is carried out for fused silica and calcium fluoride. The experimental results show that, for the given surface polishing quality, the bulk absorption coefficient of fused silica can be obtained by investigating only one sample. To avoid the drawback of different bulk and surface properties amongst a thickness series, we propose a strategy based on the LID technique to obtain surface and bulk absorption separately by investigating only one sample. Apart from measuring bulk absorption coefficients, the LID technique is applied to determine the absorption of highly reflecting (HR) coatings on CaF2 substrates. Besides the measuring strategy, the experimental results for an AlF3/LaF3 based HR coating are presented. In order to investigate a larger variety of coatings, including highly transmitting coatings, a general measuring strategy based on the LID technique is proposed.

  3. Accurate Characterization of the Pore Volume in Microporous Crystalline Materials

    PubMed Central

    2017-01-01

    Pore volume is one of the main properties for the characterization of microporous crystals. It is experimentally measurable, and it can also be obtained from the refined unit cell by a number of computational techniques. In this work, we assess the accuracy and the discrepancies between the different computational methods which are commonly used for this purpose, i.e., geometric, helium, and probe center pore volumes, by studying a database of more than 5000 frameworks. We developed a new technique to fully characterize the internal void of a microporous material and to compute the probe-accessible and -occupiable pore volume. We show that, unlike the other definitions of pore volume, the occupiable pore volume can be directly related to the experimentally measured pore volumes from nitrogen isotherms. PMID:28636815
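    A probe-center pore volume of the kind compared in this record can be sketched with simple grid sampling: count the fraction of cell points whose distance to every framework atom exceeds the atom radius plus the probe radius. The single-atom "framework", all radii, and the grid resolution below are toy assumptions, not the paper's database or its actual algorithm.

```python
# Illustrative sketch of a probe-center pore-volume estimate by grid
# sampling a cubic cell. The one-atom framework and all radii are toy
# assumptions for demonstration only.

import numpy as np

def probe_center_fraction(atom_xyz, atom_radius, probe_radius,
                          cell=1.0, n=40):
    """Fraction of the cell where a probe center can sit without
    overlapping any atom (distance > atom_radius + probe_radius)."""
    g = (np.arange(n) + 0.5) * cell / n            # cell-centered grid
    X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
    pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    dists = np.linalg.norm(pts[:, None, :] - atom_xyz[None, :, :], axis=-1)
    return np.mean(dists.min(axis=1) > atom_radius + probe_radius)

atoms = np.array([[0.5, 0.5, 0.5]])  # one atom at the cell center
frac = probe_center_fraction(atoms, atom_radius=0.2, probe_radius=0.1)
# Blocked region is a sphere of radius 0.3, volume 4/3*pi*0.027 ≈ 0.113,
# so the probe-center fraction should come out near 0.887.
```

    The distinction the paper draws is that this probe-center count differs from the probe-occupiable volume (which also credits space the probe body sweeps), and only the latter maps cleanly onto nitrogen-isotherm measurements.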

  4. Flow optimization study of a batch microfluidics PET tracer synthesizing device

    PubMed Central

    Elizarov, Arkadij M.; Meinhart, Carl; van Dam, R. Michael; Huang, Jiang; Daridon, Antoine; Heath, James R.; Kolb, Hartmuth C.

    2010-01-01

    We present numerical modeling and experimental studies of flow optimization inside a batch microfluidic micro-reactor used for synthesis of human-scale doses of Positron Emission Tomography (PET) tracers. Novel techniques are used for mixing within, and eluting liquid out of, the coin-shaped reaction chamber. Numerical solutions of the general incompressible Navier-Stokes equations, along with the time-dependent elution scalar field equation for the three-dimensional coin-shaped geometry, were obtained and validated using fluorescence imaging analysis techniques. Utilizing the approach presented in this work, we were able to identify optimized geometrical and operational conditions for the micro-reactor in the absence of the radioactive material commonly used in PET tracer production platforms, as well as to evaluate the designed and fabricated micro-reactor using numerical and experimental validations. PMID:21072595

  5. Accurate Characterization of the Pore Volume in Microporous Crystalline Materials

    DOE PAGES

    Ongari, Daniele; Boyd, Peter G.; Barthel, Senja; ...

    2017-06-21

    Pore volume is one of the main properties for the characterization of microporous crystals. It is experimentally measurable, and it can also be obtained from the refined unit cell by a number of computational techniques. In this work, we assess the accuracy and the discrepancies between the different computational methods which are commonly used for this purpose, i.e., geometric, helium, and probe center pore volumes, by studying a database of more than 5000 frameworks. We developed a new technique to fully characterize the internal void of a microporous material and to compute the probe-accessible and -occupiable pore volume. Lastly, we show that, unlike the other definitions of pore volume, the occupiable pore volume can be directly related to the experimentally measured pore volumes from nitrogen isotherms.

  6. Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; McCrea, Andrew C.

    2009-01-01

    The Virtual Diagnostics Interface (ViDI) methodology combines two-dimensional image processing and three-dimensional computer modeling to provide comprehensive in-situ visualizations commonly utilized for in-depth planning of wind tunnel and flight testing, real time data visualization of experimental data, and unique merging of experimental and computational data sets in both real-time and post-test analysis. The preparation of such visualizations encompasses the realm of interactive three-dimensional environments, traditional and state of the art image processing techniques, database management and development of toolsets with user friendly graphical user interfaces. ViDI has been under development at the NASA Langley Research Center for over 15 years, and has a long track record of providing unique and insightful solutions to a wide variety of experimental testing techniques and validation of computational simulations. This report will address the various aspects of ViDI and how it has been applied to test programs as varied as NASCAR race car testing in NASA wind tunnels to real-time operations concerning Space Shuttle aerodynamic flight testing. In addition, future trends and applications will be outlined in the paper.

  7. Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; McCrea, Andrew C.

    2010-01-01

    The Virtual Diagnostics Interface (ViDI) methodology combines two-dimensional image processing and three-dimensional computer modeling to provide comprehensive in-situ visualizations commonly utilized for in-depth planning of wind tunnel and flight testing, real time data visualization of experimental data, and unique merging of experimental and computational data sets in both real-time and post-test analysis. The preparation of such visualizations encompasses the realm of interactive three-dimensional environments, traditional and state of the art image processing techniques, database management and development of toolsets with user friendly graphical user interfaces. ViDI has been under development at the NASA Langley Research Center for over 15 years, and has a long track record of providing unique and insightful solutions to a wide variety of experimental testing techniques and validation of computational simulations. This report will address the various aspects of ViDI and how it has been applied to test programs as varied as NASCAR race car testing in NASA wind tunnels to real-time operations concerning Space Shuttle aerodynamic flight testing. In addition, future trends and applications will be outlined in the paper.

  8. Shock wave facilities at the Poulter Laboratory of SRI International

    NASA Astrophysics Data System (ADS)

    Murri, W. J.

    1982-04-01

    Shock wave research in the Poulter Laboratory covers two broad areas: dynamic material response and dynamic structural response. Workers in both areas use common facilities. The Laboratory has several guns and the facilities to perform various types of high explosive loading experiments. The use of these facilities and experimental techniques is illustrated with examples from research projects.

  9. Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick; Klein, Vladislav

    2011-01-01

    Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components for this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both time and frequency domains. Steps in identification including model structure determination, parameter estimation, and model validation, are addressed in this paper with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results and model adequacy is inferred by corroborating results. An extension is offered to this conventional approach where more general model parameter estimates and their standard errors are compared.
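    The "parameter estimates and their standard errors" step the abstract closes with can be sketched with ordinary least squares. The snippet below is a generic regression sketch on synthetic data; the variable names (angle of attack, pitching moment) and all numbers are invented for illustration, and the paper's actual models include nonlinear and frequency-domain estimators not shown here.

```python
# Sketch of linear-regression parameter estimation with standard errors,
# the kind of output used above to compare models rather than point
# estimates alone. Synthetic "aerodynamic" data, illustrative only.

import numpy as np

def ols_with_stderr(X, y):
    """Return OLS estimates and their standard errors for y = X @ beta."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)          # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)     # parameter covariance
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(2)
alpha = rng.uniform(-5, 5, size=200)          # angle of attack, deg
X = np.column_stack([np.ones_like(alpha), alpha])
cm = 0.02 - 0.01 * alpha + rng.normal(0, 0.005, size=200)  # pitching moment
beta, se = ols_with_stderr(X, cm)
# beta should recover roughly [0.02, -0.01]; se quantifies the confidence
# in each estimate, enabling the comparisons described above.
```

    Reporting se alongside beta is what permits the paper's extension of conventional time-history comparisons: two models can agree in their fits yet differ sharply in how well-determined their parameters are.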

  10. Assessment of traffic noise levels in urban areas using different soft computing techniques.

    PubMed

    Tomić, J; Bogojević, N; Pljakić, M; Šumarac-Pavlović, D

    2016-10-01

    Available traffic noise prediction models are usually based on regression analysis of experimental data, and this paper presents the application of soft computing techniques to traffic noise prediction. Two mathematical models are proposed and their predictions are compared to data collected by traffic noise monitoring in urban areas, as well as to the predictions of commonly used traffic noise models. The results show that the application of evolutionary algorithms and neural networks may improve both the development process and the accuracy of traffic noise prediction.

  11. A critical review on tablet disintegration.

    PubMed

    Quodbach, Julian; Kleinebudde, Peter

    2016-09-01

    Tablet disintegration is an important factor for drug release and can be modified with excipients called tablet disintegrants. Tablet disintegrants act via different mechanisms and the efficacy of these excipients is influenced by various factors. In this review, the existing literature on tablet disintegration is critically reviewed. Potential disintegration mechanisms, as well as impact factors on the disintegration process will be discussed based on experimental evidence. Search terms for Scopus and Web of Science included "tablet disintegration", "mechanism tablet disintegration", "superdisintegrants", "disintegrants", "swelling force", "disintegration force", "disintegration mechanisms", as well as brand names of commonly applied superdisintegrants. References of identified papers were screened as well. Experimental data supports swelling and shape recovery as main mechanisms of action of disintegrants. Other tablet excipients and different manufacturing techniques greatly influence the disintegration process. The use of different excipients, experimental setups and manufacturing techniques, as well as the demand for original research led to a distinct patchwork of knowledge. Broader, more systematic approaches are necessary not only to structure the past but also future findings.

  12. Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses

    PubMed Central

    Stephen, Emily P.; Lepage, Kyle Q.; Eden, Uri T.; Brunner, Peter; Schalk, Gerwin; Brumberg, Jonathan S.; Guenther, Frank H.; Kramer, Mark A.

    2014-01-01

    The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty—both in the functional network edges and the corresponding aggregate measures of network topology—are lacking. To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here—appropriate for static and dynamic network inference and different statistical measures of coupling—permits the evaluation of confidence in network measures in a variety of settings common to neuroscience. PMID:24678295
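    The trial-resampling idea at the core of this approach can be illustrated in a few lines: recompute a network edge on bootstrap resamples of trials to obtain a confidence interval for it. The coupling measure below (trial-averaged Pearson correlation between two sensors) and the synthetic data are stand-ins for whatever measure and recordings a given study uses.

```python
# Sketch of trial-resampling uncertainty for one functional-network edge.
# Synthetic two-sensor data with shared signal; the coupling measure is a
# generic stand-in, not the paper's specific statistic.

import numpy as np

rng = np.random.default_rng(3)
n_trials, n_samples = 60, 100
common_sig = rng.normal(size=(n_trials, n_samples))
x = common_sig + 0.8 * rng.normal(size=(n_trials, n_samples))  # sensor 1
y = common_sig + 0.8 * rng.normal(size=(n_trials, n_samples))  # sensor 2

def edge(xs, ys):
    """Trial-averaged correlation between two sensors."""
    return np.mean([np.corrcoef(a, b)[0, 1] for a, b in zip(xs, ys)])

boot = []
for _ in range(500):
    idx = rng.integers(0, n_trials, size=n_trials)  # resample trials
    boot.append(edge(x[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% confidence interval
```

    Because trials, not samples, are resampled, the within-trial temporal structure of the data is preserved, which is what makes the procedure statistically principled for task-related recordings; the same loop extends to whole networks and to aggregate topology measures.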

  13. Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses.

    PubMed

    Stephen, Emily P; Lepage, Kyle Q; Eden, Uri T; Brunner, Peter; Schalk, Gerwin; Brumberg, Jonathan S; Guenther, Frank H; Kramer, Mark A

    2014-01-01

    The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty—both in the functional network edges and the corresponding aggregate measures of network topology—are lacking. To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here—appropriate for static and dynamic network inference and different statistical measures of coupling—permits the evaluation of confidence in network measures in a variety of settings common to neuroscience.
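
    The trial-resampling strategy described in this abstract can be sketched compactly: resample trials with replacement, recompute the functional network for each resample, and read a confidence interval for every edge off the bootstrap distribution. The simulated data, shapes, and correlation-based coupling measure below are illustrative assumptions, not the authors' exact pipeline.

```python
# Bootstrap over trials to put confidence intervals on functional-network edges.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 50, 4, 200

# Simulated multi-trial recordings: channels 0 and 1 share a common signal.
common = rng.standard_normal((n_trials, n_samples))
data = rng.standard_normal((n_trials, n_channels, n_samples))
data[:, 0] += common
data[:, 1] += common

def edge_correlation(trials):
    """Per-trial correlation matrices, averaged across trials."""
    return np.mean([np.corrcoef(t) for t in trials], axis=0)

# Resample trials with replacement and recompute the network each time.
boot = np.array([
    edge_correlation(data[rng.integers(0, n_trials, n_trials)])
    for _ in range(200)
])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)

# An edge is flagged if its 95% bootstrap interval excludes zero.
significant = (lo > 0) | (hi < 0)
```

    The same resampling loop works unchanged for any coupling statistic (coherence, canonical correlation over regions of interest) and for aggregate topology measures computed per resample.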

  14. 3D Modeling of Ultrasonic Wave Interaction with Disbonds and Weak Bonds

    NASA Technical Reports Server (NTRS)

    Leckey, C.; Hinders, M.

    2011-01-01

    Ultrasonic techniques, such as the use of guided waves, can be ideal for finding damage in the plate and pipe-like structures used in aerospace applications. However, the interaction of waves with real flaw types and geometries can lead to experimental signals that are difficult to interpret. 3-dimensional (3D) elastic wave simulations can be a powerful tool in understanding the complicated wave scattering involved in flaw detection and for optimizing experimental techniques. We have developed and implemented parallel 3D elastodynamic finite integration technique (3D EFIT) code to investigate Lamb wave scattering from realistic flaws. This paper discusses simulation results for an aluminum-aluminum diffusion disbond and an aluminum-epoxy disbond and compares results from the disbond case to the common artificial flaw type of a flat-bottom hole. The paper also discusses the potential for extending the 3D EFIT equations to incorporate physics-based weak bond models for simulating wave scattering from weak adhesive bonds.

  15. Screening and Identification of Peptides Specifically Targeted to Gastric Cancer Cells from a Phage Display Peptide Library

    PubMed

    Sahin, Deniz; Taflan, Sevket Onur; Yartas, Gizem; Ashktorab, Hassan; Smoot, Duane T

    2018-04-25

    Background: Gastric cancer is the second most common of the malignant cancers. The inefficiency of traditional techniques in both diagnosis and therapy of the disease makes the development of alternative and novel techniques indispensable. As an alternative to traditional methods, tumor-specific targeting small peptides can be used to increase the efficiency of treatment and reduce the side effects associated with traditional techniques. The aim of this study is the screening and identification of individual peptides specifically targeted to human gastric cancer cells using a phage-displayed peptide library, and the design of specific peptide sequences from the experimentally eluted peptide sequences. Methods: Here, MKN-45 human gastric cancer cells and HFE-145 human normal gastric epithelial cells were used as the target and control cells, respectively. Five rounds of biopanning with a phage display 12-peptide library were applied following subtraction biopanning with HFE-145 control cells. The selected phage clones were validated by enzyme-linked immunosorbent assay and immunofluorescence detection. We first obtained random phage clones after five biopanning rounds and determined the binding level of each individual clone. We then analyzed the frequency of each amino acid in the best-binding clones to determine overrepresented amino acids for designing novel peptide sequences. Results: The DE532 (VETSQYFRGTLS) phage clone screened positive, showing specific binding to MKN-45 gastric cancer cells. The DE-Obs (HNDLFPSWYHNY) peptide, which was designed using the amino acid frequencies of the experimentally selected peptides in the 5th round of biopanning, showed specific binding in MKN-45 cells. 
Conclusion: Selection and characterization of individual clones may give us specifically binding peptides, but more importantly, data extracted from eluted phage clones may be used to design theoretical peptides with better binding properties than even experimentally selected ones. Both peptides, experimental and designed, may be potential candidates to be developed as useful diagnostic or therapeutic ligand molecules in gastric cancer research. Creative Commons Attribution License

  16. Non-imaged based method for matching brains in a common anatomical space for cellular imagery.

    PubMed

    Midroit, Maëllie; Thevenet, Marc; Fournel, Arnaud; Sacquet, Joelle; Bensafi, Moustafa; Breton, Marine; Chalençon, Laura; Cavelius, Matthias; Didier, Anne; Mandairon, Nathalie

    2018-04-22

    Cellular imagery using histology sections is one of the most common techniques used in neuroscience. However, this otherwise indispensable technique has severe limitations due to the need to delineate regions of interest on each brain, which is time consuming and variable across experimenters. We developed algorithms based on a vector-field elastic registration allowing fast, automatic realignment of experimental brain sections and associated labeling in a brain atlas with high accuracy and in a streamlined way. Thereby, brain areas of interest can be finely identified without outlining them, and different experimental groups can be easily analyzed using conventional tools. This method directly readjusts labeling in the brain atlas without any intermediate manipulation of images. We mapped the expression of cFos in the mouse brain (C57Bl/6J) after olfactory stimulation or a non-stimulated control condition and found an increased density of cFos-positive cells in the primary olfactory cortex, but not in non-olfactory areas, of the odor-stimulated animals compared to the controls. Existing methods of matching are based on image registration, which often requires expensive material (two-photon tomography mapping or imaging with iDISCO) or is less accurate since it relies on the mutual information contained in the images. Our new method is non-image-based and relies only on the positions of detected labeling and the external contours of sections. We thus provide a new method that permits automated matching of histology sections of experimental brains with a brain reference atlas. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Particle Substructure. A Common Theme of Discovery in this Century

    DOE R&D Accomplishments Database

    Panofsky, W. K. H.

    1984-02-01

    Some examples of modern developments in particle physics are given which demonstrate that the fundamental rules of quantum mechanics, applied to all forces in nature as they became understood, have retained their validity. The well-established laws of electricity and magnetism, reformulated in terms of quantum mechanics, have exhibited a truly remarkable numerical agreement between theory and experiment over an enormous range of observation. As experimental techniques have grown from the top of a laboratory bench to the large accelerators of today, the basic components of experimentation have changed vastly in scale but only little in basic function. More important, the motivation of those engaged in this type of experimentation has hardly changed at all.

  18. Observations of the Kaiser effect under multiaxial stress states: Implications for its use in determining in situ stress

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holcomb, D.J.

    1993-10-08

    Experimental tests of the Kaiser effect, the stress-history dependence of acoustic emission production, show that interactions between principal stresses cannot be ignored as is commonly done when trying to use the Kaiser effect to determine in situ stress. Experimental results obtained under multiaxial stress states are explained in terms of a qualitative model. The results show that the commonly used technique of loading uniaxially along various directions to determine stress history must be reevaluated, as it cannot be justified in terms of the laboratory experiments. One possible resolution of the conflict between laboratory and field results is that the Kaiser effect phenomenon observed in cores retrieved from the earth is not the same phenomenon as is observed in rock loaded under laboratory conditions.

  19. Prenotification, Incentives, and Survey Modality: An Experimental Test of Methods to Increase Survey Response Rates of School Principals

    ERIC Educational Resources Information Center

    Jacob, Robin Tepper; Jacob, Brian

    2012-01-01

    Teacher and principal surveys are among the most common data collection techniques employed in education research. Yet there is remarkably little research on survey methods in education, or about the most cost-effective way to raise response rates among teachers and principals. In an effort to explore various methods for increasing survey response…

  20. The Leadership Evaluation and Analysis Program (LEAP). Economic Feasibility Report.

    DTIC Science & Technology

    1979-07-01

    data input and retrieval system that generates common solutions to Marine Corps concerns and produces leadership/management training material while...experimental measures to assess the effects of Human Resource Management Cycle intervention aboard Navy ships (Mumford, 1976). Planned future evaluation...some management process or technique. Generally, the entire intervention procedure represents an expenditure toward the primary goal of improving

  1. Fiber-MZI-based FBG sensor interrogation: comparative study with a CCD spectrometer.

    PubMed

    Das, Bhargab; Chandra, Vikash

    2016-10-10

    We present an experimental comparative study of the two most commonly used fiber Bragg grating (FBG) sensor interrogation techniques: a charge-coupled device (CCD) spectrometer and a fiber Mach-Zehnder interferometer (F-MZI). Although the interferometric interrogation technique is historically known to offer the highest sensitivity measurements, very little information exists regarding how it compares with the current commercially available spectral-characteristics-based interrogation systems. It is experimentally established here that the performance of a modern-day CCD spectrometer interrogator is very close to that of an F-MZI interrogator, with the capability of measuring Bragg wavelength shifts with sub-picometer-level accuracy. The results presented in this study can further be used as a guideline for choosing between the two FBG sensor interrogator types for small-amplitude dynamic perturbation measurements down to nano-level strain.
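
    To make the sub-picometer figure concrete, the standard strain-optic relation delta_lambda/lambda = (1 - p_e) * epsilon converts a Bragg wavelength shift into strain. The sketch below uses a typical silica photo-elastic coefficient (p_e of about 0.22) and a 1550 nm grating as assumptions; neither value is taken from the paper.

```python
def strain_from_shift(delta_lambda_pm, lambda_bragg_nm=1550.0, p_e=0.22):
    """Return strain (dimensionless) for a Bragg wavelength shift in picometers."""
    delta_lambda_nm = delta_lambda_pm * 1e-3
    return delta_lambda_nm / (lambda_bragg_nm * (1.0 - p_e))

# A 1 pm shift at 1550 nm corresponds to roughly 0.8 microstrain, so
# sub-picometer wavelength resolution implies nano-level strain sensitivity.
eps = strain_from_shift(1.0)
```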

  2. Advantages and disadvantages of the animal models v. in vitro studies in iron metabolism: a review.

    PubMed

    García, Y; Díaz-Castro, J

    2013-10-01

    Iron deficiency is the most common nutritional deficiency in the world. Special molecules have evolved for iron acquisition, transport and storage in soluble, nontoxic forms. Studies about the effects of iron on health focus on iron metabolism or nutrition to prevent or treat iron deficiency and anemia. These studies address two main aspects: (1) basic studies to elucidate iron metabolism and (2) nutritional studies to evaluate the efficacy of iron supplementation to prevent or treat iron deficiency and anemia. This paper reviews the advantages and disadvantages of the experimental models commonly used, as well as the methods most often used in studies related to iron. In vitro studies have used different parts of the gut. In vivo studies are done in humans and animals such as mice, rats, pigs and monkeys. Iron metabolism is a complex process that includes interactions at the systemic level. In vitro studies, despite physiological differences from humans, are useful to increase knowledge related to this essential micronutrient. Isotopic techniques are the most recommended in studies related to iron, but their high cost and logistic requirements make them difficult to use. Depletion-repletion of hemoglobin is a method commonly used in animal studies. Three depletion-repletion techniques are most commonly used: hemoglobin regeneration efficiency, relative biological value (RBV) and metabolic balance, which are official methods of the Association of Official Analytical Chemists. These techniques are well validated for studies related to iron, and their results can be extrapolated to humans. Knowledge about the main advantages and disadvantages of the in vitro and animal models, and of the methods used in these studies, can increase researchers' confidence in experimental results at lower cost.

  3. Will Somebody do the Dishes? Weathering Analogies, Geologic Processes and Geologic Time

    NASA Astrophysics Data System (ADS)

    Stelling, P.; Wuotila, S.; Giuliani, M.

    2006-12-01

    A good analogy is one of the most powerful tools in any instructor's arsenal, and encouraging students to explore the links between an analogy and a scientific concept can cement both ideas in a student's mind. A common analogy for weathering and erosion processes is doing the dishes. Oxidation, hydration, and solution reactions can be intimidating on the chalkboard but easily understood in the context of cleaning up after dinner. Rather than present this analogy as a lecture demonstration, students are encouraged to experimentally determine which type of weathering works best on their dirty dishes. The experiment must use at least four identically dirty dishes: three experimental dishes and one control dish. The experimental dishes are subjected to simulated weathering and erosion processes of the student's design. Common techniques developed by students are cold or warm water baths, baths with and without acid (lemon juice or soda), and freeze-thaw cycles. Occasionally creative experiments result in unexpected discoveries, such as the inefficiency of abrasion from wind-blown sand, especially when compared to soaking dishes in Canadian Whiskey. The effectiveness of each experimental run is determined by comparison to the control plate after loose debris is removed from each. The dish with the smallest areal extent of remaining food is declared the most effective. Discussion sections of the experimental write-up include a description of which geologic processes were being simulated in each experiment, comparisons of the effectiveness of each technique, and statements of how these experiments differ from reality. In order to advance this project, a second stage of the assignment, a direct comparison of weathering and erosion techniques on food and on geologic materials, will be added this fall. 
Ideally, students will empirically derive erosion rates and calculate the time required to remove the volume of material represented by a geologically important feature, such as Mt. Rainier or the Grand Canyon. In the end, students completing this project gain an understanding of how geologic processes work, the time scales required, the differences between analogies and the real thing, and arguably the most important aspect, a best-practices approach to doing the dishes.

  4. Minimum envelope roughness pulse design for reduced amplifier distortion in parallel excitation.

    PubMed

    Grissom, William A; Kerr, Adam B; Stang, Pascal; Scott, Greig C; Pauly, John M

    2010-11-01

    Parallel excitation uses multiple transmit channels and coils, each driven by independent waveforms, to afford the pulse designer an additional spatial encoding mechanism that complements gradient encoding. In contrast to parallel reception, parallel excitation requires individual power amplifiers for each transmit channel, which can be cost prohibitive. Several groups have explored the use of low-cost power amplifiers for parallel excitation; however, such amplifiers commonly exhibit nonlinear memory effects that distort radio frequency pulses. This is especially true for pulses with rapidly varying envelopes, which are common in parallel excitation. To overcome this problem, we introduce a technique for parallel excitation pulse design that yields pulses with smoother envelopes. We demonstrate experimentally that pulses designed with the new technique suffer less amplifier distortion than unregularized pulses and pulses designed with conventional regularization.
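
    The smoothness idea in this abstract generalizes a familiar trick: add a penalty on the first difference of the pulse samples to a least-squares design, trading a little excitation accuracy for a smoother envelope. The toy below uses a random stand-in system matrix rather than a real small-tip-angle excitation model, and "roughness" here is simply the summed squared sample-to-sample difference; both are assumptions for illustration.

```python
# Roughness-penalized least-squares pulse design: min ||A b - d||^2 + lam ||D b||^2.
import numpy as np

rng = np.random.default_rng(1)
n_space, n_time = 40, 30
A = rng.standard_normal((n_space, n_time))   # stand-in excitation model
d = rng.standard_normal(n_space)             # target excitation pattern

D = np.diff(np.eye(n_time), axis=0)          # first-difference (roughness) operator

def design(lam):
    """Solve the penalized normal equations for the pulse samples b."""
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ d)

b_plain = design(0.0)     # unregularized pulse
b_smooth = design(10.0)   # envelope-roughness-penalized pulse

def roughness(b):
    return np.sum(np.diff(b) ** 2)
# The penalized solution always has roughness no larger than the plain one.
```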

  5. Polarization-based material classification technique using passive millimeter-wave polarimetric imagery.

    PubMed

    Hu, Fei; Cheng, Yayun; Gui, Liangqi; Wu, Liang; Zhang, Xinyi; Peng, Xiaohui; Su, Jinlong

    2016-11-01

    The polarization properties of thermal millimeter-wave emission capture inherent information about objects, e.g., material composition, shape, and surface features. In this paper, a polarization-based material-classification technique using passive millimeter-wave polarimetric imagery is presented. The linear polarization ratio (LPR) is introduced as a new feature discriminator that is sensitive to material type and removes the effect of reflected ambient radiation. The LPR characteristics of several common natural and artificial materials are investigated by theoretical and experimental analysis. Based on a priori information about LPR characteristics, the optimal range of incident angle and the classification criterion are discussed. Simulation and measurement results indicate that the presented classification technique is effective for distinguishing between metals and dielectrics. This technique suggests possible applications for outdoor metal target detection in open scenes.
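
    The physical intuition behind LPR can be sketched with Fresnel emissivities: at oblique incidence a dielectric emits noticeably more vertically than horizontally polarized radiation, while a near-perfect reflector (metal) emits little of either, so the ratio of polarized brightness temperatures separates the two classes. The refractive indices, incidence angle, and temperatures below are illustrative assumptions, not values from the paper.

```python
# Toy LPR discriminator: vertical/horizontal polarized brightness temperature ratio.
import numpy as np

def fresnel_power(n, theta):
    """Power reflectivities (Rh, Rv) for air -> lossless medium of index n."""
    cos_i = np.cos(theta)
    cos_t = np.sqrt(1 - (np.sin(theta) / n) ** 2)
    rh = (cos_i - n * cos_t) / (cos_i + n * cos_t)   # s / horizontal
    rv = (n * cos_i - cos_t) / (n * cos_i + cos_t)   # p / vertical
    return rh ** 2, rv ** 2

def lpr(n, theta, t_obj=290.0, t_ambient=100.0):
    """LPR of total (emitted + reflected) polarized brightness temperature."""
    Rh, Rv = fresnel_power(n, theta)
    tb_h = (1 - Rh) * t_obj + Rh * t_ambient
    tb_v = (1 - Rv) * t_obj + Rv * t_ambient
    return tb_v / tb_h

theta = np.deg2rad(50)
lpr_dielectric = lpr(2.0, theta)   # soil-like index (assumed)
lpr_metal = lpr(200.0, theta)      # huge index approximates a conductor
# Dielectric LPR sits well above 1; metal LPR stays close to 1.
```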

  6. Neutron flux characterization of californium-252 Neutron Research Facility at the University of Texas - Pan American by nuclear analytical technique

    NASA Astrophysics Data System (ADS)

    Wahid, Kareem; Sanchez, Patrick; Hannan, Mohammad

    2014-03-01

    In the field of nuclear science, neutron flux is an intrinsic property of nuclear reaction facilities that is the basis for experimental irradiation calculations and analysis. In the Rio Grande Valley (Texas), the UTPA Neutron Research Facility (NRF) is currently the only neutron facility available for experimental research purposes. The facility comprises a 20-microgram californium-252 neutron source surrounded by a shielding cascade containing different irradiation cavities. Thermal and fast neutron flux values for the UTPA NRF have yet to be fully investigated and may be of particular interest to biomedical studies in low neutron dose applications. Though a variety of techniques exist for the characterization of neutron flux, neutron activation analysis (NAA) of metal and nonmetal foils is a commonly utilized experimental method because of its detection sensitivity and availability. The aim of our current investigation is to employ foil activation in the determination of neutron flux values for the UTPA NRF for further research purposes. Neutron spectrum unfolding of the acquired experimental data via specialized software and subsequent comparison for consistency with computational models lends confidence to the results.
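
    The foil-activation arithmetic is compact enough to sketch: for a thin foil, the induced activity is A = phi * N * sigma * (1 - exp(-lambda * t_irr)), so the flux follows by inversion. The gold-foil cross section and half-life below are standard reference values, but the measured activity and foil mass are invented for illustration only.

```python
# Thermal flux estimate from the activity of an irradiated thin foil.
import math

N_A = 6.022e23     # Avogadro's number, atoms/mol
BARN = 1e-24       # cm^2

def flux_from_foil(activity_bq, mass_g, molar_mass, sigma_barn,
                   half_life_s, t_irr_s):
    """Invert the activation equation A = phi*N*sigma*(1 - e^(-lambda*t))."""
    n_atoms = mass_g / molar_mass * N_A
    lam = math.log(2) / half_life_s
    saturation = 1.0 - math.exp(-lam * t_irr_s)
    return activity_bq / (n_atoms * sigma_barn * BARN * saturation)

# Au-197(n,gamma)Au-198: sigma_th = 98.65 b, T_1/2 = 2.695 d (reference values).
phi = flux_from_foil(activity_bq=50.0, mass_g=0.010, molar_mass=196.97,
                     sigma_barn=98.65, half_life_s=2.695 * 86400,
                     t_irr_s=24 * 3600)
# phi is the thermal neutron flux in n/(cm^2 s) under these assumptions.
```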

  7. In Vitro, Ex Vivo and In Vivo Techniques to Study Neuronal Migration in the Developing Cerebral Cortex

    PubMed Central

    Azzarelli, Roberta; Oleari, Roberto; Lettieri, Antonella; Andre', Valentina; Cariboni, Anna

    2017-01-01

    Neuronal migration is a fundamental biological process that underlies proper brain development and neuronal circuit formation. In the developing cerebral cortex, distinct neuronal populations, producing excitatory, inhibitory and modulatory neurotransmitters, are generated in different germinative areas and migrate along various routes to reach their final positions within the cortex. Different technical approaches and experimental models have been adopted to study the mechanisms regulating neuronal migration in the cortex. In this review, we will discuss the most common in vitro, ex vivo and in vivo techniques to visualize and study cortical neuronal migration. PMID:28448448

  8. Investigation of laser holographic interferometric techniques for structure inspection

    NASA Technical Reports Server (NTRS)

    Chu, W. P.

    1973-01-01

    The application of laser holographic interferometric techniques for nondestructive inspection of material structures commonly used in aerospace work is investigated. Two types of structures, a composite plate and a solid fuel rocket engine motor casing, were examined. In conducting the experiments, both CW HeNe gas lasers and Q-switched ruby lasers were used as light sources for the holographic recording setups. Different stressing schemes were investigated as to their effectiveness in generating maximum deformation at regions of structural weakness such as flaws and disbonds. Experimental results on stressing schemes such as thermal stressing, pressurized stressing, transducer excitation, and mechanical impact are presented and evaluated.

  9. Radiofrequency ablation for benign thyroid nodules.

    PubMed

    Bernardi, S; Stacul, F; Zecchin, M; Dobrinja, C; Zanconati, F; Fabris, B

    2016-09-01

    Benign thyroid nodules are an extremely common occurrence. Radiofrequency ablation (RFA) is gaining ground as an effective technique for their treatment, in case they become symptomatic. Here we review the current indications for RFA; its outcomes in terms of efficacy, tolerability, and cost; and how it compares to the other conventional and experimental treatment modalities for benign thyroid nodules. Moreover, we also address the issue of treating with this technique patients with cardiac pacemakers (PM) or implantable cardioverter-defibrillators (ICD), a rather frequent occurrence that has never been addressed in detail in the literature.

  10. Study to design and develop remote manipulator system

    NASA Technical Reports Server (NTRS)

    Hill, J. W.; Sword, A. J.

    1973-01-01

    Human performance measurement techniques for remote manipulation tasks and remote sensing techniques for manipulators are described. For common manipulation tasks, performance is monitored by means of an on-line computer capable of measuring the joint angles of both master and slave arms as a function of time. The computer programs allow measurement of the operator's strategy and of physical quantities such as task time and power consumed. The results are printed out after a test run to compare different experimental conditions. For tracking tasks, we describe a method of displaying errors in three dimensions and measuring the end-effector position in three dimensions.

  11. Extracting joint weak values with local, single-particle measurements.

    PubMed

    Resch, K J; Steinberg, A M

    2004-04-02

    Weak measurement is a new technique which allows one to describe the evolution of postselected quantum systems. It appears to be useful for resolving a variety of thorny quantum paradoxes, particularly when used to study properties of pairs of particles. Unfortunately, such nonlocal or joint observables often prove difficult to measure directly in practice (for instance, in optics, a common testing ground for this technique, strong photon-photon interactions would be needed to implement an appropriate von Neumann interaction). Here we derive a general, experimentally feasible method for extracting these joint weak values from correlations between single-particle observables.

  12. Laboratory Production of Lemon Liqueur (Limoncello) by Conventional Maceration and a Two-Syringe System to Illustrate Rapid Solid-Liquid Dynamic Extraction

    ERIC Educational Resources Information Center

    Naviglio, Daniele; Montesano, Domenico; Gallo, Monica

    2015-01-01

    Two experimental techniques of solid-liquid extraction are compared relating to the lab-scale production of lemon liqueur, most commonly named "limoncello"; the first is the official method of maceration for the solid-liquid extraction of analytes and is widely used to extract active ingredients from a great variety of natural products;…

  13. Determining Kinetic Parameters for Isothermal Crystallization of Glasses

    NASA Technical Reports Server (NTRS)

    Ray, C. S.; Zhang, T.; Reis, S. T.; Brow, R. K.

    2006-01-01

    Non-isothermal crystallization techniques are frequently used to determine the kinetic parameters for crystallization in glasses. These techniques are experimentally simple and quick compared to the isothermal techniques. However, the analytical models used for non-isothermal data analysis, originally developed for describing isothermal transformation kinetics, are fundamentally flawed. The present paper describes a technique for determining the kinetic parameters for isothermal crystallization in glasses, which eliminates most of the common problems that generally make the studies of isothermal crystallization laborious and time consuming. In this technique, the volume fraction of glass that is crystallized as a function of time during an isothermal hold was determined using differential thermal analysis (DTA). The crystallization parameters for the lithium-disilicate (Li2O.2SiO2) model glass were first determined and compared to the same parameters determined by other techniques to establish the accuracy and usefulness of the present technique. This technique was then used to describe the crystallization kinetics of a complex Ca-Sr-Zn-silicate glass developed for sealing solid oxide fuel cells.
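
    A common way to extract isothermal kinetic parameters from the crystallized fraction x(t) is the JMAK relation x = 1 - exp(-(k*t)^n): plotting ln(-ln(1 - x)) against ln(t) gives the Avrami exponent n as the slope and the rate constant k from the intercept. The sketch below fits synthetic data, not the paper's DTA measurements, and the parameter values are illustrative.

```python
# JMAK (Avrami) fit for isothermal crystallization kinetics.
import numpy as np

n_true, k_true = 3.0, 0.02          # illustrative exponent and rate (1/s)
t = np.linspace(10, 120, 12)        # hold times, s
x = 1 - np.exp(-(k_true * t) ** n_true)   # crystallized fraction

# Linearize: ln(-ln(1 - x)) = n*ln(k) + n*ln(t).
y = np.log(-np.log(1 - x))
slope, intercept = np.polyfit(np.log(t), y, 1)

n_fit = slope                        # Avrami exponent
k_fit = np.exp(intercept / slope)    # rate constant, since intercept = n*ln(k)
```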

  14. Diagnostic methodology is critical for accurately determining the prevalence of ichthyophonus infections in wild fish populations

    USGS Publications Warehouse

    Kocan, R.; Dolan, H.; Hershberger, P.

    2011-01-01

    Several different techniques have been employed to detect and identify Ichthyophonus spp. in infected fish hosts; these include macroscopic observation, microscopic examination of tissue squashes, histological evaluation, in vitro culture, and molecular techniques. Examination of the peer-reviewed literature revealed that when more than 1 diagnostic method is used, they often result in significantly different results; for example, when in vitro culture was used to identify infected trout in an experimentally exposed population, 98.7% of infected trout were detected, but when standard histology was used to confirm known infected tissues from wild salmon, it detected ~50% of low-intensity infections and ~85% of high-intensity infections. Other studies on different species reported similar differences. When we examined a possible mechanism to explain the disparity between different diagnostic techniques, we observed non-random distribution of the parasite in 3-dimensionally visualized tissue sections from infected hosts, thus providing a possible explanation for the different sensitivities of commonly used diagnostic techniques. Based on experimental evidence and a review of the peer-reviewed literature, we have concluded that in vitro culture is currently the most accurate diagnostic technique for determining infection prevalence of Ichthyophonus, particularly when the exposure history of the population is not known.

  15. Fatigue crack localization with near-field acoustic emission signals

    NASA Astrophysics Data System (ADS)

    Zhou, Changjiang; Zhang, Yunfeng

    2013-04-01

    This paper presents an acoustic emission (AE) source localization technique using near-field AE signals induced by crack growth and propagation. The proposed technique is based on the phase difference between the AE signals measured by two identical AE sensing elements spaced apart at a pre-specified distance. This phase difference cancels out certain frequency content of the signals, which can be related to the AE source direction. Experimental data from a simulated AE source, such as pencil breaks, was used along with analytical results from moment tensor analysis. The theoretical predictions, numerical simulations, and experimental test results are in good agreement. Real data from field monitoring of an existing fatigue crack on a bridge was also used to test the system. Results show that the proposed method is fairly effective in determining the AE source direction in the thick plates commonly encountered in civil engineering structures.
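
    The geometry behind the two-sensor idea can be sketched directly: a wave arriving from angle theta reaches the second sensor tau = d*cos(theta)/c later, and recovering that delay gives the source direction. The sketch below recovers the delay from the cross-correlation peak rather than the authors' spectral-cancellation analysis, and the spacing, wave speed, and arrival angle are illustrative assumptions.

```python
# Two-sensor direction estimate from an inter-sensor time delay.
import numpy as np

fs = 1.0e6                 # sample rate, Hz
c = 3000.0                 # assumed plate wave speed, m/s
d = 0.06                   # sensor spacing, m
theta_true = np.deg2rad(40)

delay_s = d * np.cos(theta_true) / c
delay_n = int(round(delay_s * fs))

rng = np.random.default_rng(2)
s = rng.standard_normal(4096)       # broadband AE-like burst
x1 = s
x2 = np.roll(s, delay_n)            # delayed copy at the second sensor

# Cross-correlation peak location gives the delay in samples.
corr = np.correlate(x2, x1, mode="full")
lag = np.argmax(corr) - (len(s) - 1)
tau_hat = lag / fs
theta_hat = np.degrees(np.arccos(np.clip(tau_hat * c / d, -1, 1)))
```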

  16. An overview of clinical and experimental treatment modalities for port wine stains

    PubMed Central

    Chen, Jennifer K.; Ghasri, Pedram; Aguilar, Guillermo; van Drooge, Anne Margreet; Wolkerstorfer, Albert; Kelly, Kristen M.; Heger, Michal

    2014-01-01

    Port wine stains (PWS) are the most common vascular malformation of the skin, occurring in 0.3% to 0.5% of the population. Noninvasive laser irradiation with flashlamp-pumped pulsed dye lasers (selective photothermolysis) currently comprises the gold standard treatment of PWS; however, the majority of PWS fail to clear completely after selective photothermolysis. In this review, the clinically used PWS treatment modalities (pulsed dye lasers, alexandrite lasers, neodymium:yttrium-aluminum-garnet lasers, and intense pulsed light) and techniques (combination approaches, multiple passes, and epidermal cooling) are discussed. Retrospective analysis of clinical studies published between 1990 and 2011 was performed to determine therapeutic efficacies for each clinically used modality/technique. In addition, factors that have resulted in the high degree of therapeutic recalcitrance are identified, and emerging experimental treatment strategies are addressed, including the use of photodynamic therapy, immunomodulators, angiogenesis inhibitors, hypobaric pressure, and site-specific pharmaco-laser therapy. PMID:22305042

  17. Mechanical Characterization of Bone: State of the Art in Experimental Approaches—What Types of Experiments Do People Do and How Does One Interpret the Results?

    PubMed

    Bailey, Stacyann; Vashishth, Deepak

    2018-06-18

    The mechanical integrity of bone is determined by the direct measurement of bone mechanical properties. This article presents an overview of the current, most common, and new and upcoming experimental approaches for the mechanical characterization of bone. The key outcome variables of mechanical testing, as well as interpretations of the results in the context of bone structure and biology are also discussed. Quasi-static tests are the most commonly used for determining the resistance to structural failure by a single load at the organ (whole bone) level. The resistance to crack initiation or growth by fracture toughness testing and fatigue loading offers additional and more direct characterization of tissue material properties. Non-traditional indentation techniques and in situ testing are being increasingly used to probe the material properties of bone ultrastructure. Destructive ex vivo testing or clinical surrogate measures are considered to be the gold standard for estimating fracture risk. The type of mechanical test used for a particular investigation depends on the length scale of interest, where the outcome variables are influenced by the interrelationship between bone structure and composition. Advancement in the sensitivity of mechanical characterization techniques to detect changes in bone at the levels subjected to modifications by aging, disease, and/or pharmaceutical treatment is required. As such, a number of techniques are now available to aid our understanding of the factors that contribute to fracture risk.

  18. Cognitive-behavioral treatment of adult rumination behavior in the setting of disordered eating: A single case experimental design.

    PubMed

    Thomas, Jennifer J; Murray, Helen B

    2016-10-01

    The integration of feeding and eating disorders into a single DSM-5 chapter introduces an opportunity to explore common mechanisms and transdiagnostic treatment approaches. In contrast to a robust literature on the evidence-based treatment of eating disorders, very few data guide the treatment of rumination disorder (RD). In a single case experimental design, we describe the treatment of a 27-year-old woman who presented to an eating-disorder clinic with a 15-year history of untreated rumination and intermittent binge eating. According to time series analysis, she reduced rumination frequency at trend-level during the initial baseline phase (self-monitoring only), and exhibited significant reductions during the active intervention phase (self-monitoring + cognitive-behavioral techniques including diaphragmatic breathing and behavioral experimentation). She maintained these gains at 23 weeks post-intervention. Although more rigorous systematic investigation is needed, these data suggest that selected cognitive and behavioral techniques already familiar to eating-disorder clinicians may have heuristic value for RD treatment. © 2016 Wiley Periodicals, Inc. (Int J Eat Disord 2016; 49:967-972). © 2016 Wiley Periodicals, Inc.

  19. First spin-resolved electron distributions in crystals from combined polarized neutron and X-ray diffraction experiments.

    PubMed

    Deutsch, Maxime; Gillon, Béatrice; Claiser, Nicolas; Gillet, Jean-Michel; Lecomte, Claude; Souhassou, Mohamed

    2014-05-01

Since the 1980s it has been possible to probe crystallized matter, thanks to X-ray or neutron scattering techniques, to obtain an accurate charge density or spin distribution at the atomic scale. Despite describing the same physical quantity (electron density), and despite tremendous development of sources, detectors, data-treatment software, etc., these different techniques evolved separately, with one model per experiment. However, a breakthrough was recently made by the development of a common model that combines information coming from all these different experiments. Here we report the first experimental determination of spin-resolved electron density obtained by a combined treatment of X-ray, neutron and polarized neutron diffraction data. These experimental spin-up and spin-down densities compare very well with density functional theory (DFT) calculations and also confirm a theoretical prediction made in 1985, which claims that majority-spin electrons should have a more contracted distribution around the nucleus than minority-spin electrons. Topological analysis of the resulting experimental spin-resolved electron density is also briefly discussed.

  20. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
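As a concrete illustration of the interpolation step, the sketch below (pure Python on synthetic data; not the study's code) upsamples a coarsely sampled first-dimension peak by linear interpolation and estimates the retention-time apex with a three-point Gaussian fit, i.e. a log-parabolic fit, one of the five method families compared:

```python
import math

def linear_interp(xs, ys, x):
    """Piecewise-linear interpolation of (xs, ys) at x (xs ascending)."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    raise ValueError("x outside sampled range")

def gaussian_apex(xs, ys):
    """Estimate the peak apex by fitting a parabola to log-intensity at
    the three points bracketing the sampled maximum (exact for a
    noise-free Gaussian on a uniform grid)."""
    i = max(range(1, len(ys) - 1), key=lambda k: ys[k])
    la, lb, lc = math.log(ys[i - 1]), math.log(ys[i]), math.log(ys[i + 1])
    h = xs[i + 1] - xs[i]                       # assumes uniform sampling
    offset = 0.5 * h * (la - lc) / (la - 2 * lb + lc)
    return xs[i] + offset

# Coarsely sampled Gaussian peak: apex at t = 2.30 min, sigma = 0.5 min,
# sampled only once per minute (mimicking the slow first dimension)
true_apex = 2.30
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [math.exp(-0.5 * ((x - true_apex) / 0.5) ** 2) for x in xs]

fine = [linear_interp(xs, ys, 0.1 * k) for k in range(41)]  # 10x upsampling
apex = gaussian_apex(xs, ys)
print(round(apex, 3))  # recovers the 2.3-min apex despite 1-min sampling
```

The Gaussian fit recovers the apex exactly here because the simulated peak really is Gaussian, which is consistent with the paper's finding that Gaussian fitting performed best on simulated data while differences shrank on real chromatograms.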

  1. Galerkin v. discrete-optimal projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir

Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
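The distinction between time-continuous (Galerkin) and time-discrete (discrete-optimal, least-squares) projection can be sketched on a toy linear ODE. The matrix, basis, and time steps below are illustrative assumptions, not values from the report; the point is only that the two ROMs differ at finite time step and coincide as the step shrinks:

```python
# Toy 2-state linear ODE dx/dt = A x, reduced onto a single basis vector v.
# Galerkin: project the ODE, then discretize with backward Euler.
# Discrete-optimal (least-squares Petrov-Galerkin): discretize first, then
# solve the backward-Euler residual (I - dt*A) v q_{n+1} = v q_n in the
# least-squares sense.

A = [[-1.0, 0.0],
     [5.0, -10.0]]          # illustrative non-normal test matrix
v = [1.0, 0.0]              # unit-norm one-dimensional reduced basis

def rom_solutions(dt, t_end=1.0):
    Av = [A[0][0]*v[0] + A[0][1]*v[1],
          A[1][0]*v[0] + A[1][1]*v[1]]
    Ar = v[0]*Av[0] + v[1]*Av[1]          # Galerkin reduced operator v^T A v
    w = [v[0] - dt*Av[0], v[1] - dt*Av[1]]   # (I - dt*A) v
    r_gal = 1.0 / (1.0 - dt*Ar)           # backward Euler on reduced ODE
    r_lspg = (w[0]*v[0] + w[1]*v[1]) / (w[0]*w[0] + w[1]*w[1])
    n = round(t_end / dt)
    return r_gal**n, r_lspg**n            # reduced coordinate at t_end

for dt in (0.01, 0.001):
    qg, ql = rom_solutions(dt)
    print(dt, round(qg, 4), round(ql, 4), round(abs(qg - ql), 4))
# The two ROMs disagree at the larger time step and converge toward each
# other as dt -> 0, consistent with the equivalence-in-the-limit result.
```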

  2. Visual mining geo-related data using pixel bar charts

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Keim, Daniel A.; Dayal, Umeshwar; Wright, Peter; Schneidewind, Joern

    2005-03-01

A common approach to analyzing geo-related data is to use bar charts or x-y plots. They are intuitive and easy to use, but important information often gets lost. In this paper, we introduce a new interactive visualization technique called Geo Pixel Bar Charts, which combines the advantages of Pixel Bar Charts and interactive maps. This technique allows analysts to visualize large amounts of spatial data without aggregation while simultaneously showing the geographical regions corresponding to the spatial data attribute. We apply Geo Pixel Bar Charts to visually mine sales transactions and Internet usage from different locations. Our experimental results show the effectiveness of this technique at revealing data distributions and exceptions on the map.

  3. System equivalent model mixing

    NASA Astrophysics Data System (ADS)

    Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis

    2018-05-01

This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM), frequency-based models, of either numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, using a notation common in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques, namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it emphasizes the practicality of the method.

  4. An Intelligent Harmonic Synthesis Technique for Air-Gap Eccentricity Fault Diagnosis in Induction Motors

    NASA Astrophysics Data System (ADS)

    Li, De Z.; Wang, Wilson; Ismail, Fathy

    2017-11-01

Induction motors (IMs) are commonly used in various industrial applications. To improve energy consumption efficiency, a reliable IM health condition monitoring system is very useful for detecting IM faults at the earliest stage, to prevent operational degradation and malfunction of IMs. An intelligent harmonic synthesis technique is proposed in this work to conduct incipient air-gap eccentricity fault detection in IMs. The fault harmonic series are synthesized to enhance fault features. Fault-related local spectra are processed to derive fault indicators for IM air-gap eccentricity diagnosis. The effectiveness of the proposed harmonic synthesis technique is examined experimentally on IMs with static and dynamic air-gap eccentricity states under different load conditions. Test results show that the developed harmonic synthesis technique can extract fault features effectively for initial IM air-gap eccentricity fault detection.
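A minimal sketch of the harmonic-synthesis idea follows. The signal, sideband amplitudes, and the simple sum-of-sidebands indicator are illustrative assumptions; the paper's actual fault-frequency model and indicator derivation are more involved. Eccentricity typically produces sidebands around the supply frequency spaced by the rotor frequency, and accumulating the whole series into one number enhances a weak fault signature:

```python
import cmath
import math

def dft_mag(x, f, fs):
    """Single-frequency DFT magnitude of x at frequency f (fs = sample rate)."""
    n = len(x)
    s = sum(x[k] * cmath.exp(-2j * math.pi * f * k / fs) for k in range(n))
    return abs(s) * 2.0 / n

fs, dur = 1000.0, 1.0
t = [k / fs for k in range(int(fs * dur))]
f_supply, f_rot = 50.0, 25.0

# Synthetic stator-current signal: supply component plus small eccentricity
# sidebands at f_supply +/- k*f_rot (amplitudes made up for the sketch)
x = [math.sin(2 * math.pi * f_supply * tk)
     + 0.02 * math.sin(2 * math.pi * (f_supply - f_rot) * tk)
     + 0.03 * math.sin(2 * math.pi * (f_supply + f_rot) * tk)
     for tk in t]

# Harmonic synthesis: accumulate the sideband series into one fault indicator
indicator = sum(dft_mag(x, f_supply + s * k * f_rot, fs)
                for k in (1, 2) for s in (-1, 1))
print(round(indicator, 3))  # sum of the two injected sideband amplitudes
```

A healthy motor (no sidebands) would yield an indicator near zero, so the same calculation doubles as a pass/fail feature for incipient-fault detection.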

  5. Note: A non-invasive electronic measurement technique to measure the embedded four resistive elements in a Wheatstone bridge sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ravelo Arias, S. I.; Ramírez Muñoz, D.; Cardoso, S.

    2015-06-15

The work shows a measurement technique to obtain the correct value of the four elements in a resistive Wheatstone bridge without the need to separate the physical connections existing between them. Two electronic solutions are presented, one based on a source-and-measure unit and one using discrete electronic components. The proposed technique makes it possible to determine the mismatch or tolerance between the bridge resistive elements and then to pass or reject the bridge in terms of its related common-mode rejection. Experimental results were taken on various Wheatstone resistive bridges (discrete and magnetoresistive integrated bridges), validating the proposed measurement technique, especially when the bridge is micro-fabricated and there is no physical way to separate one resistive element from the others.
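For intuition about why element mismatch matters for acceptance testing, the sketch below (hypothetical component values; not the authors' circuit or procedure) computes the differential offset and common-mode level of a bridge with one mismatched arm:

```python
def bridge_outputs(vexc, r1, r2, r3, r4):
    """Node voltages of a Wheatstone bridge driven by vexc: r1/r3 form the
    left divider (r1 on top), r2/r4 the right one. Returns the
    (differential, common-mode) output pair."""
    v_left = vexc * r3 / (r1 + r3)
    v_right = vexc * r4 / (r2 + r4)
    return v_left - v_right, 0.5 * (v_left + v_right)

# Nominal 1 kOhm bridge with a 0.5% mismatch on one arm (illustrative values)
vd, vc = bridge_outputs(5.0, 1000.0, 1000.0, 1000.0, 1005.0)
print(round(vd * 1000, 3), "mV differential")   # offset caused by mismatch
print(round(vc, 3), "V common-mode")
```

A perfectly matched bridge gives zero differential output, so the measured offset relative to the common-mode level is one simple screen for rejecting bridges with excessive element tolerance.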

  6. Ventricular-subcutaneous shunt for the treatment of experimental hydrocephalus in young rats: technical note.

    PubMed

    Santos, Marcelo Volpon; Garcia, Camila Araujo Bernardino; Jardini, Evelise Oliveira; Romeiro, Thais Helena; da Silva Lopes, Luiza; Machado, Hélio Rubens; de Oliveira, Ricardo Santos

    2016-08-01

Hydrocephalus is a complex disease that affects cerebrospinal fluid (CSF) dynamics and is very common in children. To date, CSF shunting is still the standard treatment for childhood hydrocephalus, but the effects of such an operation on the developing brain are widely unknown. To help overcome this, experimental models of CSF shunts are very useful tools. The objective of this study was to describe a feasible and reliable technique of an adapted ventricular-subcutaneous shunt for the treatment of kaolin-induced hydrocephalus in young rats. We developed a ventricular-subcutaneous shunt (VSCS) technique which was used in 31 young Wistar rats with kaolin-induced hydrocephalus. Hydrocephalus was induced at 7 days of age, and shunt implantation was performed 7 days later. Our technique used a 0.7-mm gauge polypropylene catheter tunneled to a subcutaneous pocket created over the animal's back and inserted into the right lateral ventricle. All animals were sacrificed 14 days after shunt insertion. Twenty-four rats survived and remained well until the study ended. No major complications were seen. Their weight gain returned to normal. They all underwent ambulatory behavioral testing before and after VSCS, which showed improvement in their motor skills. We also obtained magnetic resonance (MR) scans of 16 pups, confirming reduction of ventricular size after shunting and indicating effective treatment. Histopathological analysis of brain samples before and after shunting showed reversion of ependymal and corpus callosum disruption, as well as fewer reactive astrocytes in shunted animals. An experimental CSF shunt technique was devised. Excessive CSF of hydrocephalic rats is diverted into the subcutaneous space, where it can be resorbed. This technique has a low complication rate and is effective. It might be applied to various types of experimental studies involving induction and treatment of hydrocephalus.

  7. Numerical determination of personal aerosol sampler aspiration efficiency.

    PubMed

    Lo Savio, Simone; Paradisi, Paolo; Tampieri, Francesco; Belosi, Franco; Morigi, Maria Pia; Agostini, Sergio

    2003-04-01

In this work the determination of the aspiration efficiency of personal aerosol samplers, commonly used in occupational exposure assessment, is investigated by means of CFD techniques. Specifically, we describe a code to calculate particle trajectories in a given flow field. In its present state the code considers only the effects of the mean flow field on particle motion, whereas turbulent diffusion effects are neglected. Comparisons with experimental measurements are also given, obtained in the framework of a research contract supported by the European Community, with experimental contributions from several participants. The main objective of the European research is to develop a new approach to experimentation with airborne particle flows, working at a reduced scale. This methodology has the advantage of allowing real-time aerosol determination and the use of small wind tunnels, with better experimental control. In this article we describe how the methodology has been verified using computational fluid dynamics. Experimental and numerical aspiration efficiencies have been compared, and the influence of gravity and turbulence intensity at full and reduced scale has been investigated. The numerical techniques described here agree with previous similar research and allow at least qualitative predictions of aspiration efficiency for real samplers, accounting for sampler orientation relative to the incoming air flow. The major discrepancies between predicted and experimental results may be a consequence of bounce effects, which are very difficult to eliminate even by greasing the sampler surface.
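The core of such a trajectory code can be sketched in a few lines. The version below uses illustrative parameters and, like the code described above, neglects turbulent diffusion: explicit Euler integration of a spherical particle with Stokes drag and gravity in a uniform flow, checked against the analytical settling velocity:

```python
# Mean-flow particle-trajectory sketch (illustrative parameters only)
rho_p = 1000.0        # particle density, kg/m^3
d = 10e-6             # particle diameter, m
mu = 1.8e-5           # air dynamic viscosity, Pa*s
g = 9.81              # gravity, m/s^2
tau = rho_p * d * d / (18.0 * mu)   # Stokes relaxation time, s

u_flow = (1.0, 0.0)   # uniform horizontal air velocity, m/s
x, y = 0.0, 0.0
vx, vy = 0.0, 0.0     # particle released at rest
dt = 1e-4

for _ in range(20000):              # 2 s of flight, explicit Euler
    ax = (u_flow[0] - vx) / tau     # Stokes drag toward the flow velocity
    ay = (u_flow[1] - vy) / tau - g
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt

settling = tau * g                  # analytical terminal settling velocity
# After many relaxation times the particle moves at the flow speed
# horizontally and at the settling velocity vertically.
print(round(vx, 3), round(vy, 4), round(-settling, 4))
```

Replacing the uniform flow with a CFD velocity field sampled at the particle position, and counting which release positions end inside the inlet, yields the aspiration efficiency that the paper compares against wind-tunnel data.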

  8. Genes2Networks: connecting lists of gene symbols using mammalian protein interactions databases.

    PubMed

    Berger, Seth I; Posner, Jeremy M; Ma'ayan, Avi

    2007-10-04

In recent years, mammalian protein-protein interaction network databases have been developed. The interactions in these databases are either extracted manually from low-throughput experimental biomedical research literature, extracted automatically from the literature using techniques such as natural language processing (NLP), generated experimentally using high-throughput methods such as yeast two-hybrid screens, or predicted using an assortment of computational approaches. Genes or proteins identified as significantly changing in proteomic experiments, or identified as susceptibility disease genes in genomic studies, can be placed in the context of protein interaction networks in order to assign these genes and proteins to pathways and protein complexes. Genes2Networks is a software system that integrates the content of ten mammalian interaction network datasets. Filtering techniques to prune low-confidence interactions were implemented. Genes2Networks is delivered as a web-based service using AJAX. The system can be used to extract relevant subnetworks created from "seed" lists of human Entrez gene symbols. The output includes a dynamic, linkable, three-color web-based network map, with a statistical analysis report that identifies significant intermediate nodes used to connect the seed list. Genes2Networks is powerful web-based software that can help experimental biologists interpret lists of genes and proteins, such as those commonly produced through genomic and proteomic experiments, as well as lists of genes and proteins associated with disease processes. This system can be used to find relationships between genes and proteins from seed lists, and to predict additional genes or proteins that may play key roles in common pathways or protein complexes.
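The seed-to-subnetwork idea can be sketched as follows. The interaction list is made up for illustration, and the rule used here (keep non-seed nodes adjacent to at least two seeds, i.e. path-length-2 connections) is a simplification; Genes2Networks additionally scores intermediate nodes statistically:

```python
from collections import defaultdict

# Toy protein-protein interaction edge list (hypothetical, for illustration)
edges = [("TP53", "MDM2"), ("MDM2", "AKT1"), ("AKT1", "EGFR"),
         ("TP53", "EP300"), ("EP300", "EGFR"), ("MDM2", "UBC")]
seeds = {"TP53", "EGFR"}

# Build an undirected adjacency map
nbrs = defaultdict(set)
for a, b in edges:
    nbrs[a].add(b)
    nbrs[b].add(a)

# Intermediates: non-seed nodes that interact with two or more seeds,
# i.e. nodes that connect seeds via paths of length two
intermediates = {n for n in nbrs
                 if n not in seeds and len(nbrs[n] & seeds) >= 2}

# Keep only edges whose endpoints are seeds or selected intermediates
subnetwork = sorted((a, b) for a, b in edges
                    if {a, b} <= seeds | intermediates)
print(sorted(intermediates))   # ['EP300'] connects TP53 and EGFR
print(subnetwork)
```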

  9. Coherent Transition Radiation Generated from Transverse Electron Density Modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halavanau, A.; Piot, P.; Tyukhtin, A. V.

Coherent transition radiation (CTR) of a given frequency is commonly generated with longitudinal electron bunch trains. In this paper, we present a study of CTR properties produced from simultaneous transverse and longitudinal electron density modulation. We demonstrate via numerical simulations a simple technique to generate THz-scale frequencies from mm-scale transversely separated electron beamlets formed into a ps-scale bunch train. The results and a potential experimental setup are discussed.

  10. An experimental and theoretical study to relate uncommon rock/fluid properties to oil recovery. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, R.

Waterflooding is the most commonly used secondary oil recovery technique. One of the requirements for understanding waterflood performance is a good knowledge of the basic properties of the reservoir rocks. This study is aimed at correlating rock-pore characteristics to oil recovery from various reservoir rock types and incorporating these properties into empirical models for predicting oil recovery. For that reason, this report deals with the analysis and interpretation of experimental data collected from core floods and correlated against measurements of absolute permeability, porosity, wettability index, mercury porosimetry properties and irreducible water saturation. The results of the radial-core and linear-core flow investigations and the other associated experimental analyses are presented and incorporated into empirical models to improve the predictions of oil recovery resulting from waterflooding, for sandstone and limestone reservoirs. For the radial-core case, the standardized regression model selected, based on a subset of the variables, predicted oil recovery by waterflooding with a standard deviation of 7%. For the linear-core case, separate models are developed using common, uncommon and combined rock properties. It was observed that residual oil saturation and oil recovery are better predicted with the inclusion of both common and uncommon rock/fluid properties in the predictive models.

  11. What the Pendulum Can Tell Educators about Children's Scientific Reasoning

    NASA Astrophysics Data System (ADS)

    Stafford, Erin

    2004-11-01

Inhelder and Piaget (1958) studied schoolchildren's understanding of a simple pendulum as a means of investigating the development of the control-of-variables scheme and the ceteris paribus principle central to scientific experimentation. The time-consuming nature of the individual interview technique used by Inhelder has led to the development of a whole range of group test techniques aimed at testing the empirical validity and increasing the practical utility of Piaget's work. The Rasch measurement techniques utilized in this study reveal that the Piagetian Reasoning Task III (Pendulum) and the méthode clinique interview reveal the same underlying ability. Of particular interest to classroom teachers is the evidence that some individuals produced rather disparate performances across the two testing situations. The implications of the commonalities and individual differences in performance for interpreting children's scientific understanding are discussed.

  12. Experimental verification of the shape of the excitation depth distribution function for AES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tougaard, S.; Jablonski, A.; Institute of Physical Chemistry, Polish Academy of Sciences, ul. Kasprzaka 44/52, 01-224 Warsaw

    2011-09-15

In the common formalism of AES, it is assumed that the in-depth distribution of ionizations is uniform. There are experimental indications that this assumption may not be true for certain primary electron energies and solids. The term "excitation depth distribution function" (EXDDF) has been introduced to describe the distribution of ionizations at energies used in AES. This function is conceptually equivalent to the Phi-rho-z function of electron probe microanalysis (EPMA). There are, however, experimental difficulties in determining this function, in particular for energies below ~10 keV. In the present paper, we investigate the possibility of determining the shape of the EXDDF from the background of inelastically scattered electrons on the low-energy side of the Auger electron features in the electron energy spectra. The experimentally determined EXDDFs are compared with EXDDFs determined from Monte Carlo simulations of electron trajectories in solids. It is found that this technique is useful for the experimental determination of the EXDDF function.

  13. Observation of Biological Tissues Using Common Path Optical Coherence Tomography with Gold Coated Conical Tip Lens Fiber

    NASA Astrophysics Data System (ADS)

    Taguchi, K.; Sugiyama, J.; Totsuka, M.; Imanaka, S.

    2012-03-01

In this paper, we propose a high lateral resolution common-path Fourier domain optical coherence tomography (OCT) system that uses a chemically etched single-mode fiber. In our experiments, single-mode optical fiber for 1310 nm was used to prepare the tapered tips. Our system used a conical microlens that was chemically etched by a selective chemical etching technique using an etching solution of buffered hydrofluoric acid (BHF). From the experimental results, we verified that our proposed system could operate as a common-path Fourier domain OCT system and that the conical-tip lens fiber was very useful for achieving high lateral resolution in such a system. Furthermore, we could observe the surface of Paramecium bursaria and its symbiotic Chlorella in water using the gold-coated conical-tip fiber.

  14. ANALYSIS OF CLINICAL AND DERMOSCOPIC FEATURES FOR BASAL CELL CARCINOMA NEURAL NETWORK CLASSIFICATION

    PubMed Central

    Cheng, Beibei; Stanley, R. Joe; Stoecker, William V; Stricklin, Sherea M.; Hinton, Kristen A.; Nguyen, Thanh K.; Rader, Ryan K.; Rabinovitz, Harold S.; Oliviero, Margaret; Moss, Randy H.

    2012-01-01

    Background Basal cell carcinoma (BCC) is the most commonly diagnosed cancer in the United States. In this research, we examine four different feature categories used for diagnostic decisions, including patient personal profile (patient age, gender, etc.), general exam (lesion size and location), common dermoscopic (blue-gray ovoids, leaf-structure dirt trails, etc.), and specific dermoscopic lesion (white/pink areas, semitranslucency, etc.). Specific dermoscopic features are more restricted versions of the common dermoscopic features. Methods Combinations of the four feature categories are analyzed over a data set of 700 lesions, with 350 BCCs and 350 benign lesions, for lesion discrimination using neural network-based techniques, including Evolving Artificial Neural Networks and Evolving Artificial Neural Network Ensembles. Results Experiment results based on ten-fold cross validation for training and testing the different neural network-based techniques yielded an area under the receiver operating characteristic curve as high as 0.981 when all features were combined. The common dermoscopic lesion features generally yielded higher discrimination results than other individual feature categories. Conclusions Experimental results show that combining clinical and image information provides enhanced lesion discrimination capability over either information source separately. This research highlights the potential of data fusion as a model for the diagnostic process. PMID:22724561
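The reported area under the receiver operating characteristic curve can be computed directly from raw classifier scores via the Mann-Whitney formulation: the probability that a randomly chosen positive case outscores a randomly chosen negative one. The sketch below uses made-up scores, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs in which the positive
    case scores higher (ties count one half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy lesion-classifier outputs (illustrative, not the study's data)
bcc = [0.95, 0.88, 0.80, 0.62, 0.55]      # scores for BCC lesions
benign = [0.40, 0.35, 0.60, 0.20, 0.15]   # scores for benign lesions
print(round(auc(bcc, benign), 3))  # -> 0.96
```

An AUC near 0.981, as reported for the combined feature set, means almost every BCC/benign pair is ranked correctly by the ensemble.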

  15. Physiotherapists use a small number of behaviour change techniques when promoting physical activity: A systematic review comparing experimental and observational studies.

    PubMed

    Kunstler, Breanne E; Cook, Jill L; Freene, Nicole; Finch, Caroline F; Kemp, Joanne L; O'Halloran, Paul D; Gaida, James E

    2018-06-01

Physiotherapists promote physical activity as part of their practice. This study reviewed the behaviour change techniques physiotherapists use when promoting physical activity in experimental and observational studies. Systematic review of experimental and observational studies. Twelve databases were searched using terms related to physiotherapy and physical activity. We included experimental studies evaluating the efficacy of physiotherapist-led physical activity interventions delivered to adults in clinic-based private practice and outpatient settings to individuals with, or at risk of, non-communicable diseases. Observational studies reporting the techniques physiotherapists use when promoting physical activity were also included. The behaviour change techniques used in all studies were identified using the Behaviour Change Technique Taxonomy. The behaviour change techniques appearing in efficacious and inefficacious experimental interventions were compared using a narrative approach. Twelve studies (nine experimental and three observational) were retained from the initial search yield of 4141. Risk of bias ranged from low to high. Physiotherapists used seven behaviour change techniques in the observational studies, compared to 30 in the experimental studies. Social support (unspecified) was the most frequently identified behaviour change technique across both settings. Efficacious experimental interventions used more behaviour change techniques (n=29) and functioned in more ways (n=6) than inefficacious experimental interventions (behaviour change techniques=10 and functions=1). Physiotherapists use a small number of behaviour change techniques. Fewer behaviour change techniques were identified in the observational studies than in the experimental studies, suggesting physiotherapists use fewer techniques clinically than experimentally. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  16. Comparison of Quadrapolar™ radiofrequency lesions produced by standard versus modified technique: an experimental model.

    PubMed

    Safakish, Ramin

    2017-01-01

Lower back pain (LBP) is a global public health issue and is associated with substantial financial costs and loss of quality of life. Over the years, the literature has provided varying statistics regarding the causes of back pain; the following estimate is the closest match for our patient population. Sacroiliac (SI) joint pain is responsible for LBP in 18%-30% of individuals with LBP. Quadrapolar™ radiofrequency ablation, which involves ablating the nerves of the SI joint using heat, is a commonly used treatment for SI joint pain. However, the standard Quadrapolar radiofrequency procedure is not always effective at ablating all the sensory nerves that cause the pain in the SI joint. One of the major limitations of the standard Quadrapolar radiofrequency procedure is that it produces small lesions of ~4 mm in diameter. Smaller lesions increase the likelihood of failure to ablate all nociceptive input. In this study, we compare the standard Quadrapolar radiofrequency ablation technique to a modified Quadrapolar ablation technique that has produced improved patient outcomes in our clinic. The methodologies of the two techniques are compared. In addition, we compare results from an experimental model comparing the lesion sizes produced by the two techniques. Taken together, the findings from this study suggest that the modified Quadrapolar technique provides longer-lasting relief for back pain caused by SI joint dysfunction. A randomized controlled clinical trial is the next step required to quantify the difference in symptom relief and quality of life produced by the two techniques.

  17. Conformal fractal antenna and FSS for low-RCS applications

    NASA Astrophysics Data System (ADS)

    Varadan, Vijay K.; Vinoy, K. J.; Jose, K. A.; Varadan, Vasundara V.

    2000-06-01

In many situations the reduction of radar cross section (RCS) is of continued strategic interest, especially for aircraft and missiles. Once the overall RCS of the vehicle is reduced, the reflections from the antennas can dominate. The commonly known approaches to RCS reduction may not be applicable to antennas, and hence special techniques are followed. These include making the antennas completely conformal and using band-pass frequency selective surfaces (FSS). The use of fractal patterns has been shown to result in such band-pass characteristics. The overall RCS of a typical target body is experimentally found to be reduced when these screens are used. The paper presents experimental results on the transmission and backscatter characteristics of a fractal FSS screen.

  18. Laboratory Studies of DIB Carriers

    NASA Technical Reports Server (NTRS)

    Allamandola, L. J.

    1995-01-01

Spectroscopic studies of the following potential diffuse interstellar band (DIB) carriers are reviewed: unspecified organics, carbon chains, polycyclic aromatic hydrocarbons (PAHs), fullerenes and derivatives, as well as porphyrins and related materials. An assessment of each is given, along with suggestions for further experimental studies needed to fully test each candidate. Of the experimental techniques in common use, matrix isolation spectroscopy with neon matrices is the most appropriate for the DIBs. The low vapor pressure and high reactivity of these materials preclude gas phase studies of many of these species. At this point, given the type and quality of published data available, carbon chains and PAHs are the most promising candidates for a number of the DIBs.

  19. Masonry arches retrofitted with steel reinforced grout materials: In-situ experimental tests and advanced FE simulations

    NASA Astrophysics Data System (ADS)

    Bertolesi, Elisa; Carozzi, Francesca Giulia; Milani, Gabriele; Poggi, Carlo

    2017-11-01

The paper presents the results of a series of in-situ tests carried out on two masonry arches, one unreinforced and the other reinforced with SRG (Steel Reinforced Grout). The arches are built using a particular construction technique with common Italian bricks of dimensions 250 × 120 × 55 mm³ and 10 mm thick mortar joints. One of the two arches has been reinforced with an SRG material constituted by a stainless steel (inox) grid embedded in a layer of lime mortar, whereas the second is kept unreinforced for comparison purposes. The experimental set-up is designed to apply an eccentric vertical load placed at ¼ of the span in a series of loading and unloading cycles up to failure. The numerical analyses have been performed using a sophisticated heterogeneous micro-modeling technique, where bricks, mortar joints and the strengthening have been modeled separately. Finally, the numerical outcomes have been comparatively assessed against the experimental results and the crack patterns obtained at the end of the tests, showing a satisfactory agreement in terms of the global behavior of the arches and their collapse mechanisms.

  20. Experimenter's laboratory for visualized interactive science

    NASA Technical Reports Server (NTRS)

    Hansen, Elaine R.; Klemp, Marjorie K.; Lasater, Sally W.; Szczur, Marti R.; Klemp, Joseph B.

    1992-01-01

    The science activities of the 1990's will require the analysis of complex phenomena and large, diverse sets of data. In order to meet these needs, we must take advantage of advanced user interaction techniques: modern user interface tools; visualization capabilities; affordable, high-performance graphics workstations; and interoperable data standards and translators. To this end, we propose to adopt and upgrade several existing tools and systems to create an experimenter's laboratory for visualized interactive science. Intuitive human-computer interaction techniques have already been developed and demonstrated at the University of Colorado. The Transportable Applications Executive (TAE+), developed at GSFC, is a powerful user interface tool for general purpose applications. A 3D visualization package developed by NCAR provides both color-shaded surface displays and volumetric rendering in either index or true color. The Network Common Data Form (NetCDF) data access library developed by Unidata supports creation, access and sharing of scientific data in a form that is self-describing and network transparent. The combination and enhancement of these packages constitutes a powerful experimenter's laboratory capable of meeting key science needs of the 1990's. This proposal encompasses the work required to build and demonstrate this capability.

  1. Experimenter's laboratory for visualized interactive science

    NASA Technical Reports Server (NTRS)

    Hansen, Elaine R.; Klemp, Marjorie K.; Lasater, Sally W.; Szczur, Marti R.; Klemp, Joseph B.

    1993-01-01

    The science activities of the 1990's will require the analysis of complex phenomena and large, diverse sets of data. In order to meet these needs, we must take advantage of advanced user interaction techniques: modern user interface tools; visualization capabilities; affordable, high-performance graphics workstations; and interoperable data standards and translators. To this end, we propose to adopt and upgrade several existing tools and systems to create an experimenter's laboratory for visualized interactive science. Intuitive human-computer interaction techniques have already been developed and demonstrated at the University of Colorado. The Transportable Applications Executive (TAE+), developed at GSFC, is a powerful user interface tool for general purpose applications. A 3D visualization package developed by NCAR provides both color-shaded surface displays and volumetric rendering in either index or true color. The Network Common Data Form (NetCDF) data access library developed by Unidata supports creation, access and sharing of scientific data in a form that is self-describing and network transparent. The combination and enhancement of these packages constitutes a powerful experimenter's laboratory capable of meeting key science needs of the 1990's. This proposal encompasses the work required to build and demonstrate this capability.

  2. Experimental investigation of analog and digital dimming techniques on photometric performance of an indoor Visible Light Communication (VLC) system

    NASA Astrophysics Data System (ADS)

    Zafar, Fahad; Kalavally, Vineetha; Bakaul, Masuduzzaman; Parthiban, R.

    2015-09-01

    To make commercial implementation of light emitting diode (LED) based visible light communication (VLC) systems feasible, it is necessary to combine them with dimming schemes, which provide energy savings, enable mood setting and increase the aesthetic value of the places using this technology. There are two general methods used to dim LEDs, commonly categorized as analog and digital dimming. Incorporating fast data transmission with these techniques is a key challenge in VLC. In this paper, digital and analog dimming for a 10 Mb/s non-return-to-zero on-off keying (NRZ-OOK) based VLC system is experimentally investigated, considering both photometric and communication parameters. A spectrophotometer was used for the photometric analysis, and a line-of-sight (LOS) configuration in the presence of ambient light was used for analyzing the communication parameters. Based on the experimental results, it was determined that the digital dimming scheme is preferable for indoor VLC systems requiring high dimming precision and data transmission at lower brightness levels. On the other hand, the analog dimming scheme is a cost-effective solution for high-speed systems where dimming precision is not critical.

  3. Performance improvement of a binary quantized all-digital phase-locked loop with a new aided-acquisition technique

    NASA Astrophysics Data System (ADS)

    Sandoz, J.-P.; Steenaart, W.

    1984-12-01

    The nonuniform sampling digital phase-locked loop (DPLL) with sequential loop filter, in which the correction sizes are controlled by the accumulated differences of two additional phase comparators, is graphically analyzed. In the absence of noise and frequency drift, the analysis gives some physical insight into the acquisition and tracking behavior. Taking noise into account, a mathematical model is derived and a random walk technique is applied to evaluate the rms phase error and the mean acquisition time. Experimental results confirm the appropriate simplifying hypotheses used in the numerical analysis. Two related performance measures defined in terms of the rms phase error and the acquisition time for a given SNR are used. These measures provide a common basis for comparing different digital loops and, to a limited extent, also with a first-order linear loop. Finally, the behavior of a modified DPLL under frequency deviation in the presence of Gaussian noise is tested experimentally and by computer simulation.

  4. The study of frequency-scan photothermal reflectance technique for thermal diffusivity measurement

    DOE PAGES

    Hua, Zilong; Ban, Heng; Hurley, David H.

    2015-05-05

    A frequency-scan photothermal reflectance technique to measure the thermal diffusivity of bulk samples is studied in this manuscript. As in general photothermal reflectance methods, an intensity-modulated heating laser and a constant-intensity probe laser are used to determine the surface temperature response under sinusoidal heating. The approach involves fixing the distance between the heating and probe laser spots, recording the phase lag of the reflected probe laser intensity with respect to the heating laser modulation, and extracting the thermal diffusivity from the phase lag versus (frequency)^(1/2) relation. The experimental validation is performed on three samples (SiO2, CaF2 and Ge), which span a wide range of thermal diffusivities. The measured thermal diffusivity values agree closely with literature values. Lastly, compared to the commonly used spatial-scan method, the experimental setup and operation of the frequency-scan method are simplified, and the uncertainty level is equal to or smaller than that of the spatial-scan method.
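
    The extraction step described above can be sketched numerically: for a one-dimensional thermal wave, the phase lag at a fixed spot separation x grows as x·sqrt(πf/α), so the slope of a linear fit of phase against the square root of frequency yields the diffusivity. The spot separation, frequency range and diffusivity in this sketch are illustrative values, not the paper's data.

```python
import numpy as np

def diffusivity_from_phase_scan(freqs_hz, phase_lag_rad, spot_sep_m):
    """Estimate thermal diffusivity from a frequency-scan phase-lag dataset.

    Assumes the 1-D thermal-wave relation  phase = x*sqrt(pi*f/alpha) + c,
    so the slope of phase vs sqrt(f) is m = x*sqrt(pi/alpha).
    """
    s = np.sqrt(freqs_hz)
    m, c = np.polyfit(s, phase_lag_rad, 1)   # linear fit: phase = m*sqrt(f) + c
    return np.pi * spot_sep_m**2 / m**2      # alpha = pi*x^2/m^2

# Synthetic check: generate data for a known diffusivity and recover it.
alpha_true = 9.0e-7        # m^2/s, roughly a glass
x = 50e-6                  # 50 um spot separation (hypothetical)
f = np.linspace(1e3, 1e5, 50)
phase = x * np.sqrt(np.pi * f / alpha_true) + 0.1
print(diffusivity_from_phase_scan(f, phase, x))  # recovers ~9.0e-7
```

    With noisy measured phases the same fit applies; only the slope of the linear region should be used.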

  5. The study of frequency-scan photothermal reflectance technique for thermal diffusivity measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hua, Zilong; Ban, Heng; Hurley, David H.

    A frequency-scan photothermal reflectance technique to measure the thermal diffusivity of bulk samples is studied in this manuscript. As in general photothermal reflectance methods, an intensity-modulated heating laser and a constant-intensity probe laser are used to determine the surface temperature response under sinusoidal heating. The approach involves fixing the distance between the heating and probe laser spots, recording the phase lag of the reflected probe laser intensity with respect to the heating laser modulation, and extracting the thermal diffusivity from the phase lag versus (frequency)^(1/2) relation. The experimental validation is performed on three samples (SiO2, CaF2 and Ge), which span a wide range of thermal diffusivities. The measured thermal diffusivity values agree closely with literature values. Lastly, compared to the commonly used spatial-scan method, the experimental setup and operation of the frequency-scan method are simplified, and the uncertainty level is equal to or smaller than that of the spatial-scan method.

  6. Resolving small signal measurements in experimental plasma environments using calibrated subtraction of noise signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fimognari, P. J., E-mail: PJFimognari@XanthoTechnologies.com; Demers, D. R.; Chen, X.

    2014-11-15

    The performance of many diagnostic and control systems within fusion and other fields of research is often detrimentally affected by spurious noise signals. This is particularly true for those (such as radiation or particle detectors) working with very small signals. Common sources of radiated and conducted noise in experimental fusion environments include the plasma itself and instrumentation. The noise complicates data analysis, as illustrated by noise on signals measured with the heavy ion beam probe (HIBP) installed on the Madison Symmetric Torus. The noise is time-varying and often exceeds the secondary ion beam current (in contrast with previous applications). Analysis of the noise identifies the dominant source as photoelectric emission from the detectors induced by ultraviolet light from the plasma. This has led to the development of a calibrated subtraction technique, which largely removes the undesired temporal noise signals from the data. The advantages of the technique for small-signal measurement applications are demonstrated through improvements realized on HIBP fluctuation measurements.
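
    A minimal sketch of how such a calibrated subtraction might work, assuming a noise-only reference waveform and a calibration interval known to contain no beam signal; the function name, scale-factor estimate and signal shapes are hypothetical, not the HIBP implementation.

```python
import numpy as np

def calibrated_subtract(measured, noise_ref, cal_slice):
    """Remove a common-mode noise waveform from a small-signal channel.

    The scale factor between the measurement channel and a noise-only
    reference channel is calibrated by least squares on an interval
    (cal_slice) assumed free of signal, then the scaled reference is
    subtracted over the full record.
    """
    m, n = measured[cal_slice], noise_ref[cal_slice]
    k = np.dot(m, n) / np.dot(n, n)      # least-squares scale factor
    return measured - k * noise_ref

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
noise = np.sin(2 * np.pi * 60 * t) + 0.3 * rng.standard_normal(t.size)
signal = np.where(t > 0.5, 0.05, 0.0)          # small step "beam" signal
measured = signal + 0.8 * noise                # noise dwarfs the signal
cleaned = calibrated_subtract(measured, noise, slice(0, 1000))
```

    In practice the reference channel contains its own measurement noise, so the subtraction reduces rather than eliminates the contamination.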

  7. An overview of clinical and experimental treatment modalities for port wine stains.

    PubMed

    Chen, Jennifer K; Ghasri, Pedram; Aguilar, Guillermo; van Drooge, Anne Margreet; Wolkerstorfer, Albert; Kelly, Kristen M; Heger, Michal

    2012-08-01

    Port wine stains (PWS) are the most common vascular malformation of the skin, occurring in 0.3% to 0.5% of the population. Noninvasive laser irradiation with flashlamp-pumped pulsed dye lasers (selective photothermolysis) currently comprises the gold standard treatment of PWS; however, the majority of PWS fail to clear completely after selective photothermolysis. In this review, the clinically used PWS treatment modalities (pulsed dye lasers, alexandrite lasers, neodymium:yttrium-aluminum-garnet lasers, and intense pulsed light) and techniques (combination approaches, multiple passes, and epidermal cooling) are discussed. Retrospective analysis of clinical studies published between 1990 and 2011 was performed to determine therapeutic efficacies for each clinically used modality/technique. In addition, factors that have resulted in the high degree of therapeutic recalcitrance are identified, and emerging experimental treatment strategies are addressed, including the use of photodynamic therapy, immunomodulators, angiogenesis inhibitors, hypobaric pressure, and site-specific pharmaco-laser therapy. Copyright © 2011 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.

  8. An investigation of matched index of refraction technique and its application in optical measurements of fluid flow

    NASA Astrophysics Data System (ADS)

    Amini, Noushin; Hassan, Yassin A.

    2012-12-01

    Optical distortions caused by non-uniformities of the refractive index within the measurement volume are a major impediment for all laser diagnostic imaging techniques applied in experimental fluid dynamics studies. Matching the refractive indices of the working fluid and the test section walls and interfaces provides an effective solution to this problem. The experimental set-ups designed to be used along with laser imaging techniques are typically constructed of transparent solid materials. In this investigation, different types of aqueous salt solutions and various organic fluids are studied for refractive index matching with acrylic and fused quartz, which are commonly used in the construction of test sections. One aqueous CaCl2·2H2O solution (63% by weight) and two organic fluids, dibutyl phthalate and p-cymene, are suggested for refractive index matching with fused quartz and acrylic, respectively. Moreover, the temperature dependence of the refractive indices of these fluids is investigated, and the thermo-optic constant is calculated for each fluid. Finally, the fluid viscosity at different shear rates is measured as a function of temperature and is used to characterize the physical behavior of the proposed fluids.

  9. Development of a Theoretical Model to Assess the Hepatocarcinogenic Potential of Chemicals Using Structure-Activity Relationships and the Rat Hepatocyte Assay

    DTIC Science & Technology

    1985-11-01

    Kappus, 1985; Tyson and Green, in press). When ethane evolution was quantitated, the experimental conditions were modified to maximize sensitivity, as...commonly used and convenient technique for that purpose (Kappus, 1985). Since MDA, the lipid breakdown product that the TBA reaction primarily...cytochrome P-450(c) reductase. Mol. Pharmacol. 20, 669-673 (1981). Kappus, H. (1985). Lipid peroxidation: mechanisms, analysis, enzymology and

  10. Concept Development and Experimentation Policy and Process: How Analysis Provides Rigour

    DTIC Science & Technology

    2010-04-01

    modelling and simulation techniques, but in reality the main tool in use is common sense and logic. The main goal of the OA analyst is to bring forward those...doing so she should distinguish between the ideal and the intended or desired models to approach reality as closely as possible. Subsequently, the...and collection of measurements to be conducted. In doing so the analyst must distinguish between the actual and the perceived reality. From

  11. Nitrogen fluorescence in air for observing extensive air showers

    NASA Astrophysics Data System (ADS)

    Keilhauer, B.; Bohacova, M.; Fraga, M.; Matthews, J.; Sakaki, N.; Tameda, Y.; Tsunesada, Y.; Ulrich, A.

    2013-06-01

    Extensive air showers initiate fluorescence emission from nitrogen molecules in air. The UV light is emitted isotropically and can be used to observe the longitudinal development of extensive air showers in the atmosphere over tens of kilometers. This measurement technique is well established, having been exploited for decades by several cosmic ray experiments. However, a fundamental aspect of air shower analyses is the description of the fluorescence emission as a function of varying atmospheric conditions. Different fluorescence yields directly affect the energy scaling of air shower reconstruction. In order to explore the various details of nitrogen fluorescence emission in air, a few experimental groups have been performing dedicated measurements over the last decade. Most of these measurements are now finished. These experimental groups have been discussing their techniques and results in a series of Air Fluorescence Workshops commenced in 2002. At the 8th Air Fluorescence Workshop in 2011, it was suggested to develop a common way of describing nitrogen fluorescence for application to air shower observations. Here, first analyses toward a common treatment of the major dependences of the emission process are presented. Aspects such as the contributions at different wavelengths, the dependence on pressure as it decreases with increasing altitude in the atmosphere, the temperature dependence, in particular that of the collisional cross sections between the molecules involved, and collisional de-excitation by water vapor are discussed.
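
    The pressure dependence of the yield is commonly parametrized with a Stern-Volmer-type collisional quenching law. A minimal sketch, with an illustrative (not measured) quenching reference pressure p′ for a single band:

```python
def relative_yield(p_hpa, p_prime_hpa):
    """Relative fluorescence yield of one band under collisional quenching.

    Commonly used parametrization: Y(p) = Y0 / (1 + p / p'), where p' is
    the band's quenching reference pressure at the given temperature.
    The value p' = 15 hPa used below is illustrative only.
    """
    return 1.0 / (1.0 + p_hpa / p_prime_hpa)

# Yield at ground-level pressure vs a high-altitude pressure:
# quenching weakens as pressure drops, so the relative yield rises.
print(relative_yield(1013.0, 15.0), relative_yield(75.0, 15.0))
```

    Temperature and humidity dependences enter through p′ itself, which is one of the quantities the dedicated measurements mentioned above aim to pin down.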

  12. Quantitative analysis of the mixtures of illicit drugs using terahertz time-domain spectroscopy

    NASA Astrophysics Data System (ADS)

    Jiang, Dejun; Zhao, Shusen; Shen, Jingling

    2008-03-01

    A method is proposed for quantitatively inspecting mixtures of illicit drugs with the terahertz time-domain spectroscopy technique. The mass percentages of all components in a mixture can be obtained by linear regression analysis, on the assumption that all components in the mixture and their absorption features are known. Because illicit drugs are scarce and expensive, we first used common chemicals, benzophenone, anthraquinone, pyridoxine hydrochloride and L-ascorbic acid, in the experiment. Then an illicit drug and a common adulterant, methamphetamine and flour, were selected for our experiment. The experimental results were in significant agreement with the actual content, which suggests that this could be an effective method for the quantitative identification of illicit drugs.
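
    The linear-regression step can be sketched as follows, assuming the mixture spectrum is a linear combination of known pure-component spectra (Beer-Lambert additivity). The spectra here are synthetic stand-ins, not measured THz data.

```python
import numpy as np

def mixture_fractions(mixture_spectrum, component_spectra):
    """Estimate component mass fractions from an absorption spectrum.

    Solves an ordinary least-squares problem over the known
    pure-component spectra, clips unphysical negative coefficients,
    and normalizes the result to percentages.
    """
    A = np.column_stack(component_spectra)       # one column per component
    coeffs, *_ = np.linalg.lstsq(A, mixture_spectrum, rcond=None)
    coeffs = np.clip(coeffs, 0, None)            # no negative content
    return 100 * coeffs / coeffs.sum()

# Hypothetical two-component check: 70% "drug" + 30% "adulterant".
freq = np.linspace(0.2, 2.6, 200)                # THz axis
drug = np.exp(-((freq - 1.2) / 0.1) ** 2)        # fake absorption feature
flour = 0.3 * freq                               # featureless rising baseline
mix = 0.7 * drug + 0.3 * flour
print(mixture_fractions(mix, [drug, flour]))     # ~[70, 30]
```

    With real spectra, baseline drift and scattering losses are usually handled by adding a constant or sloped background column to the design matrix.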

  13. Accurate lithography simulation model based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model is commonly used; the model is formulated for fast calculation. To make a compact resist model accurate, it is necessary to fit a complicated non-linear model function. However, it is difficult to choose an appropriate function manually because there are many options. This paper proposes a new compact resist model using convolutional neural networks (CNNs), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  14. Separation of phytochemicals from Helichrysum italicum: An analysis of different isolation techniques and biological activity of prepared extracts.

    PubMed

    Maksimovic, Svetolik; Tadic, Vanja; Skala, Dejan; Zizovic, Irena

    2017-06-01

    Helichrysum italicum presents a valuable source of natural bioactive compounds. In this work, a literature review of terpenes, phenolic compounds, and other less common phytochemicals from H. italicum with regard to application of different separation methods is presented. Data including extraction/separation methods and experimental conditions applied, obtained yields, number of identified compounds, content of different compound groups, and analytical techniques applied are shown as corresponding tables. Numerous biological activities of both isolates and individual compounds are emphasized. In addition, the data reported are discussed, and the directions for further investigations are proposed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Use of LigaSure™ on bile duct in rats: an experimental study.

    PubMed

    Marte, Antonio; Pintozzi, Lucia

    2017-08-01

    The closure of the cystic duct during cholecystectomy by means of radiofrequency is still controversial. We report our preliminary experimental results with the use of LigaSure™ on the common bile duct in rats. Thirty Wistar rats weighing 70 to 120 g were used for this study. The animals were all anesthetized with intraperitoneal ketamine and then divided into three groups. The first group (10 rats, group C) underwent only laparotomy and isolation of the common bile duct. The second (10 rats, group M) underwent laparotomy and closure of the common bile duct (CBD) with monopolar coagulation. The third group (10 rats, group L) underwent laparotomy and sealing of the common bile duct with two applications of LigaSure™. Afterwards, all rats were kept in comfortable cages and were administered dibenzamine for five days. They were all sacrificed on day 20. Through a laparotomy, the liver and bile duct were removed for histological examination. Blood samples were obtained to measure bilirubin, amylase and transaminase levels. The mortality rate was 0 in the control group (C), 3/10 in group M and 0 in group L. In group L, macroscopic examination showed a large choledochocele (3-3.5 × 1.5 cm) with few adhesions. Histological examination showed optimal sealing of the common bile duct in 9/10 rats. In group M, 2/10 rats had liver abscesses, 3/10 had choledochocele and 5/10 had biliary peritonitis; there was intense tissue inflammation and the dissection was difficult. Analyses of blood samples showed an increase in total bilirubin, aspartate aminotransferase (AST) and alanine aminotransferase (ALT) in groups M and L. The preliminary results of our study confirm that radiofrequency can be safely used for the closure of the common bile duct. The choledochocele obtained with this technique could represent a good experimental model. These results could be a further step toward using LigaSure™ in clipless cholecystectomy.

  16. Effects on Diagnostic Parameters After Removing Additional Synchronous Gear Meshes

    NASA Technical Reports Server (NTRS)

    Decker, Harry J.

    2003-01-01

    Gear cracks are typically difficult to diagnose sufficiently far in advance of catastrophic damage. Significant damage must be present before current algorithms are able to detect it. Frequently there are multiple gear meshes on a single shaft; since they are all synchronous with the shaft frequency, the commonly used synchronous averaging technique is ineffective at removing the other gear mesh effects. Carefully applying a filter to these extraneous gear mesh frequencies can reduce the overall vibration signal and increase the accuracy of commonly used vibration metrics. The vibration signals from three seeded-fault tests were analyzed using this filtering procedure. Both the filtered and unfiltered vibration signals were then analyzed using commonly used fault detection metrics and compared. The tests were conducted on aerospace-quality spur gears in a test rig, at speeds ranging from 2500 to 5000 revolutions per minute and torques from 184 to 228 percent of design load. The inability to detect these cracks with high confidence results from the high loading, which causes fast fracture as opposed to stable crack growth. The results indicate that these techniques do not currently produce an indication of damage that significantly exceeds experimental scatter.
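
    A minimal sketch of the filtering idea, under the assumption that the synchronous average spans exactly one shaft revolution so that FFT bins coincide with shaft orders; the tooth counts and signals below are illustrative, not the test-rig data.

```python
import numpy as np

def remove_mesh_orders(signal_avg, orders_to_remove, bandwidth=1):
    """Notch selected shaft orders out of a synchronous-averaged signal.

    For an average covering one revolution, FFT bin k corresponds to
    shaft order k, so an extraneous gear mesh (order = tooth count) and
    its immediate neighbors can be zeroed directly in the frequency
    domain before computing vibration metrics.
    """
    spec = np.fft.rfft(signal_avg)
    for order in orders_to_remove:
        lo = max(order - bandwidth, 0)
        hi = min(order + bandwidth, spec.size - 1)
        spec[lo:hi + 1] = 0.0
    return np.fft.irfft(spec, n=signal_avg.size)

# Synthetic average: mesh of interest at order 25, extraneous mesh at 40.
theta = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
avg = np.sin(25 * theta) + 0.8 * np.sin(40 * theta)
filtered = remove_mesh_orders(avg, [40])   # only the order-25 mesh remains
```

    The bandwidth parameter controls how many sidebands around each extraneous mesh order are also removed; too wide a notch can discard crack-related sideband energy.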

  17. Plume radiation

    NASA Astrophysics Data System (ADS)

    Dirscherl, R.

    1993-06-01

    The electromagnetic radiation originating from the exhaust plumes of tactical missile motors is of outstanding importance for military system designers. Both missile and countermeasure engineers rely on knowledge of plume radiation properties, whether for guidance/interference control or for the passive detection of adversary missiles. To provide access to plume radiation properties, these are characterized with respect to the radiation-producing mechanisms, such as afterburning, the plume's chemical constituents and reactions, and particle radiation. A classification of plume spectral emissivity regions is given according to the constraints imposed by available sensor technology and atmospheric propagation windows. Additionally, assessment methods are presented that allow a common and general grouping of rocket motor properties into various categories. These methods cover state-of-the-art experimental evaluation techniques as well as the calculation codes most commonly used by developers in NATO countries. Dominant aspects influencing plume radiation are discussed, and a standardized test technique, including prediction procedures, is proposed for the assessment of plume radiation properties. These recommendations on terminology and assessment methods should be common to all users of plume radiation data. Special emphasis is placed on the omnipresent need for self-protection through the passive detection of plume radiation in the ultraviolet (UV) and infrared (IR) spectral bands.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Honorio, J.; Goldstein, R.; Honorio, J.

    We propose a simple, well-grounded classification technique suited for group classification on brain fMRI data sets that have high dimensionality, a small number of subjects, high noise levels, high subject variability and imperfect registration, and that capture subtle cognitive effects. We propose threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averaging across sessions, only one feature per experimental condition, a feature-independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results on two block-design data sets that capture brain function under distinct monetary rewards for cocaine-addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.
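
    A toy sketch of the voting idea (not the authors' code): each selected feature compares the subject's value to a per-feature threshold and casts a vote, and the sign of the vote total gives the group label. The thresholds, polarities and synthetic data below are hypothetical.

```python
import numpy as np

def fit_thresholds(X, y):
    """Per-feature threshold = midpoint of the class means;
    polarity = sign of the mean difference (which side votes +1)."""
    mu_pos = X[y == 1].mean(axis=0)
    mu_neg = X[y == -1].mean(axis=0)
    return (mu_pos + mu_neg) / 2, np.sign(mu_pos - mu_neg)

def majority_vote_predict(X, thresholds, polarities):
    """Each feature votes +1/-1 by its threshold test; majority decides."""
    votes = polarities * np.sign(X - thresholds)     # one vote per feature
    return np.sign(votes.sum(axis=1))

# fMRI-like setting: few subjects, many features, a minority informative.
rng = np.random.default_rng(1)
n, d = 40, 200
y = np.repeat([1, -1], n // 2)
informative = rng.random(d) > 0.7                    # ~30% of features
X = rng.standard_normal((n, d)) + 1.0 * y[:, None] * informative
thr, pol = fit_thresholds(X, y)
acc = (majority_vote_predict(X, thr, pol) == y).mean()
```

    Because each feature contributes only a single binary vote, no single noisy feature can dominate the decision, which is one reason such simple designs generalize well at small sample sizes.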

  19. A quantitative image cytometry technique for time series or population analyses of signaling networks.

    PubMed

    Ozaki, Yu-ichi; Uda, Shinsuke; Saito, Takeshi H; Chung, Jaehoon; Kubota, Hiroyuki; Kuroda, Shinya

    2010-04-01

    Modeling of cellular functions on the basis of experimental observation is increasingly common in the field of cellular signaling. However, such modeling requires a large amount of quantitative data of signaling events with high spatio-temporal resolution. A novel technique which allows us to obtain such data is needed for systems biology of cellular signaling. We developed a fully automatable assay technique, termed quantitative image cytometry (QIC), which integrates a quantitative immunostaining technique and a high precision image-processing algorithm for cell identification. With the aid of an automated sample preparation system, this device can quantify protein expression, phosphorylation and localization with subcellular resolution at one-minute intervals. The signaling activities quantified by the assay system showed good correlation with, as well as comparable reproducibility to, western blot analysis. Taking advantage of the high spatio-temporal resolution, we investigated the signaling dynamics of the ERK pathway in PC12 cells. The QIC technique appears as a highly quantitative and versatile technique, which can be a convenient replacement for the most conventional techniques including western blot, flow cytometry and live cell imaging. Thus, the QIC technique can be a powerful tool for investigating the systems biology of cellular signaling.

  20. Methodological flaws introduce strong bias into molecular analysis of microbial populations.

    PubMed

    Krakat, N; Anjum, R; Demirel, B; Schröder, P

    2017-02-01

    In this study, we report how different cell disruption methods, PCR primers and in silico analyses can seriously bias results from microbial population studies, with consequences for the credibility and reproducibility of the findings. Our results emphasize the pitfalls of commonly used experimental methods that can seriously weaken the interpretation of results. Four different cell lysis methods, three commonly used primer pairs and various computer-based analyses were applied to investigate the microbial diversity of a fermentation sample composed of chicken dung. The fault-prone, but still frequently used, amplified rRNA gene restriction analysis was chosen to identify common weaknesses. In contrast to other studies, we focused on the complete analytical process, from cell disruption to in silico analysis, and identified potential error rates. This identified a wide disagreement of results between applied experimental approaches leading to very different community structures depending on the chosen approach. The interpretation of microbial diversity data remains a challenge. In order to accurately investigate the taxonomic diversity and structure of prokaryotic communities, we suggest a multi-level approach combining DNA-based and DNA-independent techniques. The identified weaknesses of commonly used methods to study microbial diversity can be overcome by a multi-level approach, which produces more reliable data about the fate and behaviour of microbial communities of engineered habitats such as biogas plants, so that the best performance can be ensured. © 2016 The Society for Applied Microbiology.

  1. Experimental demonstration of passive coherent combining of fiber lasers by phase contrast filtering.

    PubMed

    Jeux, François; Desfarges-Berthelemot, Agnès; Kermène, Vincent; Barthelemy, Alain

    2012-12-17

    We report experiments on a new laser architecture involving phase contrast filtering to coherently combine an array of fiber lasers. We demonstrate that the new technique yields a more stable phase-locking than standard methods using only amplitude filtering. A spectral analysis of the output beams shows that the new scheme generates more resonant frequencies common to the coupled lasers. This property can enhance the combining efficiency when the number of lasers to be coupled is large.

  2. Instrumental biosensors: new perspectives for the analysis of biomolecular interactions.

    PubMed

    Nice, E C; Catimel, B

    1999-04-01

    The use of instrumental biosensors in basic research to measure biomolecular interactions in real time is increasing exponentially. Applications include protein-protein, protein-peptide, DNA-protein, DNA-DNA, and lipid-protein interactions. Such techniques have been applied to, for example, antibody-antigen, receptor-ligand, signal transduction, and nuclear receptor studies. This review outlines the principles of two of the most commonly used instruments and highlights specific operating parameters that will assist in optimising experimental design, data generation, and analysis.

  3. An experimentally based analytical model for the shear capacity of FRP-strengthened reinforced concrete beams

    NASA Astrophysics Data System (ADS)

    Pellegrino, C.; Modena, C.

    2008-05-01

    This paper deals with the shear strengthening of reinforced concrete (RC) flexural members with externally bonded fiber-reinforced polymers (FRPs). The interaction between the external FRP and the internal transverse steel reinforcement is not considered in current code recommendations, but it strongly influences the efficiency of this shear strengthening technique and, as a consequence, the computation of the interacting contributions to the nominal shear strength of beams. This circumstance is also discussed on the basis of the results of an experimental investigation of rectangular RC beams strengthened in shear with "U-jacketed" carbon FRP sheets. Based on experimental results from the present and other investigations, a new analytical model for the shear capacity of RC beams strengthened according to the most common schemes (side-bonded and "U-jacketed"), taking into account the interaction between the steel and FRP shear strength contributions, is proposed.

  4. Probing the geometry of copper and silver adatoms on magnetite: quantitative experiment versus theory† †Electronic supplementary information (ESI) available: Experimental and computational details, as well as further details on the results and analyses. See DOI: 10.1039/c7nr07319d

    PubMed Central

    Meier, Matthias; Jakub, Zdeněk; Balajka, Jan; Hulva, Jan; Bliem, Roland; Thakur, Pardeep K.; Lee, Tien-Lin; Franchini, Cesare; Schmid, Michael; Diebold, Ulrike; Allegretti, Francesco; Parkinson, Gareth S.

    2018-01-01

    Accurately modelling the structure of a catalyst is a fundamental prerequisite for correctly predicting reaction pathways, but a lack of clear experimental benchmarks makes it difficult to determine the optimal theoretical approach. Here, we utilize the normal incidence X-ray standing wave (NIXSW) technique to precisely determine the three dimensional geometry of Ag1 and Cu1 adatoms on Fe3O4(001). Both adatoms occupy bulk-continuation cation sites, but with a markedly different height above the surface (0.43 ± 0.03 Å (Cu1) and 0.96 ± 0.03 Å (Ag1)). HSE-based calculations accurately predict the experimental geometry, but the more common PBE + U and PBEsol + U approaches perform poorly. PMID:29334395

  5. The Dynamics of the Stapedial Acoustic Reflex.

    NASA Astrophysics Data System (ADS)

    Moss, Sherrin Mary

    Available from UMI in association with The British Library. This thesis aims to separate the neural and muscular components of the stapedial acoustic reflex, both anatomically and physiologically. It aims to present a hypothesis to account for the differences between ipsilateral and contralateral reflex characteristics, which have so far been unexplained, and to achieve a greater understanding of the mechanisms underlying the reflex dynamics. A technique enabling faithful reproduction of the time course of the reflex is used throughout the experimental work. The technique measures tympanic membrane displacement as a result of reflex stapedius muscle contraction. The recorded response can be directly related to the mechanics of the middle ear and stapedius muscle contraction. Some development of the technique is undertaken by the author. A model of the reflex neural arc and stapedius muscle dynamics is evolved that is based upon a second order system. The model is unique in that it includes a latency in the ipsilateral negative feedback loop. Oscillations commonly observed on reflex responses are seen to be produced because of the inclusion of a latency in the feedback loop. The model demonstrates and explains the complex relationships between neural and muscle dynamic parameters observed in the experimental work. This more comprehensive understanding of the interaction between the stapedius dynamics and the neural arc of the reflex would not otherwise have been possible using human subjects coupled with a non-invasive measurement technique. Evidence from the experimental work revealed the ipsilateral reflex to have, on average, a 5 dB lower threshold than the contralateral reflex. The oscillatory characteristics, and the steady state response, of the contralateral reflex are also seen to be significantly different from those of the ipsilateral reflex. A hypothesis to account for the experimental observations is proposed.
It is propounded that chemical neurotransmitters, and their effect upon the contralateral reflex arc from the site of the superior olivary complex to the motoneurones innervating the stapedius, account for the difference between the contralateral and ipsilateral reflex thresholds and dynamic characteristics. In the past two years the measurement technique used for the experimental work has developed from an audiological to a neurological diagnostic tool. This has enabled the results from the study to be applied in the field for valuable biomechanical and neurological explanations of the reflex response. (Abstract shortened by UMI.).

  6. Recycling Titanium and Its Alloys by Utilizing Molten Salt

    NASA Astrophysics Data System (ADS)

    Okabe, Toru H.; Taninouchi, Yu-ki

    It is commonly believed that the deoxidation of titanium (Ti), or the direct removal of oxygen (O) dissolved in metallic Ti, is practically impossible when magnesium (Mg) is used as the deoxidizing agent. In recent years, it has been experimentally demonstrated that O dissolved in Ti can be directly removed using MgCl2 molten salt electrolysis. By the electrochemical deoxidation technique, Ti wires containing 0.12 mass% O were deoxidized to less than 0.02 mass% O. In some cases, the concentration of O in the Ti wires was reduced to the level of 0.01 mass% O, which cannot be attained using the current Kroll process. The possible application of this deoxidation technique to practical industrial recycling processes is also discussed.

  7. Evolution and enabling capabilities of spatially resolved techniques for the characterization of heterogeneously catalyzed reactions

    DOE PAGES

    Morgan, Kevin; Touitou, Jamal; Choi, Jae -Soon; ...

    2016-01-15

    The development and optimization of catalysts and catalytic processes requires knowledge of reaction kinetics and mechanisms. In traditional catalyst kinetic characterization, the gas composition is known at the inlet, and the exit flow is measured to determine changes in concentration. As such, the progression of the chemistry within the catalyst is not known. Technological advances in electromagnetic and physical probes have made visualizing the evolution of the chemistry within catalyst samples a reality, as part of a methodology commonly known as spatial resolution. Herein, we discuss and evaluate the development of spatially resolved techniques, including the evolution and achievements of this growing area of catalytic research. The impact of such techniques is discussed in terms of the invasiveness of physical probes on catalytic systems, as well as how experimentally obtained spatial profiles can be used in conjunction with kinetic modeling. Moreover, some aims and aspirations for further evolution of spatially resolved techniques are considered.

  8. Practical technique to quantify small, dense low-density lipoprotein cholesterol using dynamic light scattering

    NASA Astrophysics Data System (ADS)

    Trirongjitmoah, Suchin; Iinaga, Kazuya; Sakurai, Toshihiro; Chiba, Hitoshi; Sriyudthsak, Mana; Shimizu, Koichi

    2016-04-01

    Quantification of small, dense low-density lipoprotein (sdLDL) cholesterol is clinically significant. We propose a practical technique to estimate the amount of sdLDL cholesterol using dynamic light scattering (DLS). A closed-form analytical solution has been newly derived to estimate the weight fraction of one species of scatterers in a DLS measurement of two species of scatterers. Using this solution, we can quantify the sdLDL cholesterol amount from the amounts of low-density lipoprotein cholesterol and high-density lipoprotein (HDL) cholesterol, which are commonly obtained through clinical tests. The accuracy of the proposed technique was confirmed experimentally using latex spheres with known size distributions. The applicability of the proposed technique was examined using samples of human blood serum. The possibility of estimating the sdLDL amount using the HDL data was demonstrated. These results suggest that quantitative estimation of sdLDL amounts using DLS is feasible for point-of-care testing in clinical practice.
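
    The closed-form solution itself is not reproduced in the abstract. As a hedged illustration, a simple two-species mixing model treats the measured mean diffusion coefficient as an intensity-weighted average of the two species' coefficients, which inverts directly to a weight fraction (the model, names, and values below are assumptions, not the paper's derivation):

```python
# Hypothetical two-species DLS mixing model: the apparent (mean) diffusion
# coefficient is assumed to be a linear intensity-weighted average:
#   d_mean = f * d_small + (1 - f) * d_large
def weight_fraction(d_mean, d_small, d_large):
    """Estimate the weight fraction f of the 'small' species."""
    if d_small == d_large:
        raise ValueError("species must have distinct diffusion coefficients")
    f = (d_mean - d_large) / (d_small - d_large)
    return min(max(f, 0.0), 1.0)  # clamp to the physical range [0, 1]

# An apparent coefficient halfway between the two species gives f = 0.5
f = weight_fraction(d_mean=2.5e-12, d_small=4.0e-12, d_large=1.0e-12)
```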

  9. Mobile robot self-localization system using single webcam distance measurement technology in indoor environments.

    PubMed

    Li, I-Hsum; Chen, Ming-Chang; Wang, Wei-Yen; Su, Shun-Feng; Lai, To-Wen

    2014-01-27

    A single-webcam distance measurement technique for indoor robot localization is proposed in this paper. The proposed localization technique uses webcams that are available in an existing surveillance environment. The developed image-based distance measurement system (IBDMS) and parallel lines distance measurement system (PLDMS) have two merits. Firstly, only one webcam is required for estimating the distance. Secondly, the set-up of IBDMS and PLDMS is easy: only one rectangular pattern of known dimensions, e.g., a ground tile, is needed. Common and simple image processing techniques, e.g., background subtraction, are used to capture the robot in real time. Thus, for the purposes of indoor robot localization, the proposed method does not need expensive high-resolution webcams or complicated pattern recognition methods, but just a few simple estimation formulas. The experimental results show that the proposed robot localization method is reliable and effective in an indoor environment.
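
    The background subtraction mentioned above can be sketched in a few lines. This is a pure-Python toy on small grayscale arrays, not the paper's pipeline; a real system would typically use OpenCV with morphological cleanup:

```python
# Toy background subtraction: flag pixels that differ from a static
# background frame by more than a threshold (grayscale values 0-255).
def subtract_background(frame, background, threshold=30):
    """Return a binary mask (1 = foreground) per pixel."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[100, 100, 100],
              [100, 100, 100]]
frame      = [[100, 180, 100],   # bright object in the middle column
              [100, 175, 100]]
mask = subtract_background(frame, background)
```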

  10. The development of laser speckle or particle image displacement velocimetry. Part 1: The role of photographic parameters

    NASA Technical Reports Server (NTRS)

    Lourenco, L. M. M.; Krothapalli, A.

    1987-01-01

    One of the difficult problems in experimental fluid dynamics remains the determination of the vorticity field in fluid flows. Recently, a novel velocity measurement technique, commonly known as Laser Speckle or Particle Image Displacement Velocimetry, became available. This technique permits the simultaneous visualization of the two-dimensional streamline pattern in unsteady flows and the quantification of the velocity field. The main advantage of this new technique is that the whole two-dimensional velocity field can be recorded with great accuracy and spatial resolution, from which the instantaneous vorticity field can be easily obtained. An apparatus used for taking particle displacement images is described. Local coherent illumination by the probe laser beam yielded Young's fringes of good quality at almost every location of the flow field. These fringes were analyzed and the velocity and vorticity fields were derived. The conclusions drawn are discussed.
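
    The displacement extraction underlying such techniques can be illustrated with a one-dimensional cross-correlation toy: find the shift that best aligns the particle pattern between two exposures. The actual method analyzes Young's fringes from two-dimensional double-exposure records; the data below are made up:

```python
# Toy 1-D correlation: the displacement is the integer shift of the second
# window that maximizes the correlation sum with the first window.
def best_shift(window1, window2, max_shift):
    """Return the shift (in samples) that best matches the two windows."""
    def score(shift):
        return sum(window1[i] * window2[i + shift]
                   for i in range(len(window1))
                   if 0 <= i + shift < len(window2))
    return max(range(-max_shift, max_shift + 1), key=score)

w1 = [0, 0, 5, 9, 5, 0, 0, 0]
w2 = [0, 0, 0, 0, 5, 9, 5, 0]   # same particle pattern, displaced by +2
shift = best_shift(w1, w2, max_shift=3)
```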

  11. Mobile Robot Self-Localization System Using Single Webcam Distance Measurement Technology in Indoor Environments

    PubMed Central

    Li, I-Hsum; Chen, Ming-Chang; Wang, Wei-Yen; Su, Shun-Feng; Lai, To-Wen

    2014-01-01

    A single-webcam distance measurement technique for indoor robot localization is proposed in this paper. The proposed localization technique uses webcams that are available in an existing surveillance environment. The developed image-based distance measurement system (IBDMS) and parallel lines distance measurement system (PLDMS) have two merits. Firstly, only one webcam is required for estimating the distance. Secondly, the set-up of IBDMS and PLDMS is easy: only one rectangular pattern of known dimensions, e.g., a ground tile, is needed. Common and simple image processing techniques, e.g., background subtraction, are used to capture the robot in real time. Thus, for the purposes of indoor robot localization, the proposed method does not need expensive high-resolution webcams or complicated pattern recognition methods, but just a few simple estimation formulas. The experimental results show that the proposed robot localization method is reliable and effective in an indoor environment. PMID:24473282

  12. Predicting the activity and toxicity of new psychoactive substances: a pharmaceutical industry perspective.

    PubMed

    Leach, Andrew G

    2014-01-01

    Predicting the effect that new compounds might have when administered to human beings is a common desire shared by researchers in the pharmaceutical industry and those interested in psychoactive compounds (illicit or otherwise). The experience of the pharmaceutical industry is that making such predictions at a usefully accurate level is not only difficult but that even when billions of dollars are spent to ensure that only compounds likely to have a desired effect without unacceptable side-effects are dosed to humans in clinical trials, they fail in more than 90% of cases. A range of experimental and computational techniques is used, and these are placed in context in this paper. The particular roles played by computational techniques and their limitations are highlighted; these techniques are used primarily to reduce the number of experiments that must be performed but cannot replace those experiments. Copyright © 2013 John Wiley & Sons, Ltd.

  13. Simple laser vision sensor calibration for surface profiling applications

    NASA Astrophysics Data System (ADS)

    Abu-Nabah, Bassam A.; ElSoussi, Adnane O.; Al Alami, Abed ElRahman K.

    2016-09-01

    Due to the relatively large structures in the Oil and Gas industry, original equipment manufacturers (OEMs) have been implementing custom-designed laser vision sensor (LVS) surface profiling systems as part of quality control in their manufacturing processes. The rough manufacturing environment and the continuous movement and misalignment of these custom-designed tools adversely affect the accuracy of laser-based vision surface profiling applications. Accordingly, Oil and Gas businesses have been pressing OEMs to implement practical and robust LVS calibration techniques prior to running any visual inspections. This effort introduces an LVS calibration technique that is a simplified version of two known calibration techniques commonly implemented to obtain a calibrated LVS system for surface profiling applications. Both calibration techniques are implemented virtually and experimentally to scan simulated and three-dimensional (3D) printed features of known profiles, respectively. Scanned data are transformed from the camera frame to points in the world coordinate system and compared with the input profiles to validate the capability of the introduced calibration technique against the more complex approach and to preliminarily assess the measurement technique for weld profiling applications. Moreover, the sensitivity to stand-off distances is analyzed to illustrate the practicality of the presented technique.
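
    The camera-to-world transformation mentioned above is, at its core, a rigid-body transform p_world = R p_cam + t. A minimal sketch with an illustrative rotation and translation (not the paper's calibration values) follows:

```python
import math

# Apply a rigid-body transform to a 3-D point expressed in the camera frame.
def camera_to_world(p_cam, R, t):
    """p_world = R @ p_cam + t, for 3x3 R and 3-vectors (no numpy needed)."""
    return [sum(R[i][j] * p_cam[j] for j in range(3)) + t[i] for i in range(3)]

theta = math.pi / 2                    # illustrative: 90° about the z-axis
R = [[math.cos(theta), -math.sin(theta), 0.0],
     [math.sin(theta),  math.cos(theta), 0.0],
     [0.0,              0.0,             1.0]]
t = [10.0, 0.0, 5.0]                   # camera origin in world coordinates
p_world = camera_to_world([1.0, 0.0, 0.0], R, t)
```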

  14. [Clinical and radiographic evaluation of a new percutaneous technique for moderate to severe hallux valgus deformity].

    PubMed

    Vélez-de Lachica, J C; Valdez-Jiménez, L A; Inzunza-Sánchez, J M

    2017-01-01

    Hallux valgus is considered the most common musculoskeletal deformity, with a prevalence of 88%. There are more than 130 surgical techniques for its treatment; currently, percutaneous ones are popular; however, they do not take into account the metatarsal-phalangeal correction angle. The aim of this study is to propose a modified technique for the percutaneous correction of the metatarsal-phalangeal and inter-metatarsal angles and to evaluate its clinical and radiological results. An experimental, prospective and longitudinal study was conducted in 10 patients with moderate to severe hallux valgus according to the classification of Coughlin and Mann; the results were evaluated with the AOFAS scale at 15, 30, 60 and 90 days. The McBride technique and the percutaneous anchor technique with the proposed modification were performed. The AOFAS scale was applied as described, showing a progressive increase in the rating; the average correction of the inter-metatarsal angle was 8.8 degrees and of the metatarsal-phalangeal angle, 9.12 degrees. The modified percutaneous anchor technique showed clear clinical and radiographic improvement in the short term. Our modified technique is proposed for future projects, including a large sample with long-term follow-up.

  15. An evaluation of the lap-shear test for Sn-rich solder/Cu couples: Experiments and simulation

    NASA Astrophysics Data System (ADS)

    Chawla, N.; Shen, Y.-L.; Deng, X.; Ege, E. S.

    2004-12-01

    The lap-shear technique is commonly used to evaluate the shear, creep, and thermal fatigue behavior of solder joints. We have conducted a parametric experimental and modeling study on the effect of testing and geometrical parameters on solder/copper joint response in lap-shear. It was shown that the far-field applied strain is quite different from the actual solder strain (measured optically). Subtraction of the deformation of the Cu substrate provides a reasonable approximation of the solder strain in the elastic regime, but not in the plastic regime. Solder joint thickness has a profound effect on joint response. The solder response moves progressively closer to “true” shear response with increasing joint thickness. Numerical modeling using finite-element analysis was performed to rationalize the experimental findings. The same lap-shear configuration was used in the simulation. The input response for solder was based on the experimental tensile test result on bulk specimens. The calculated shear response, using both the commonly adopted far-field measure and the actual shear strain in solder, was found to be consistent with the trends observed in the lap-shear experiments. The geometric features were further explored to provide physical insight into the problem. Deformation of the substrate was found to greatly influence the shear behavior of the solder.
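
    The substrate-subtraction approximation described above can be written out directly: remove the Cu contribution from the far-field displacement and divide by the joint thickness. All numbers below are hypothetical, purely to illustrate the correction (valid, per the abstract, only in the elastic regime):

```python
# Approximate elastic solder shear strain in a lap-shear joint:
#   gamma ≈ (total displacement - Cu substrate displacement) / joint thickness
def solder_shear_strain(total_disp, cu_disp, joint_thickness):
    """All inputs in consistent length units; returns dimensionless strain."""
    return (total_disp - cu_disp) / joint_thickness

gamma = solder_shear_strain(total_disp=0.010,      # mm, far-field measure
                            cu_disp=0.004,         # mm, substrate contribution
                            joint_thickness=0.1)   # mm
```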

  16. A new method to quantitatively compare focal ratio degradation due to different end termination techniques

    NASA Astrophysics Data System (ADS)

    Poppett, Claire; Allington-Smith, Jeremy

    2010-07-01

    We investigate the FRD performance of a 150 μm core fibre for its suitability to the SIDE project.1 This work builds on our previous work2 (Paper 1), where we examined the dependence of FRD on length in fibres with a core size of 100 μm and proposed a new multi-component model to explain the results. The most commonly used model for predicting the FRD characteristics of a fibre is an adaptation of the Gloge model8 by Carrasco and Parry,3 which quantifies the number of scattering defects within an optical fibre using a single parameter, d0. The model predicts many trends which are seen experimentally, for example, a decrease in FRD as core diameter increases, and also as wavelength increases. However, the model also predicts a strong dependence of FRD on length that is not seen experimentally. By adapting the single-fibre model to include a second fibre, we can quantify the amount of FRD due to stress caused by the method of termination. By fitting the model to experimental data we find that polishing the fibre induces a small increase in stress at the end of the fibre compared to a simple cleave technique.

  17. Transcranial Magnetic Stimulation: Basic Principles and Clinical Applications in Migraine.

    PubMed

    Barker, Anthony T; Shields, Kevin

    2017-03-01

    Transcranial magnetic stimulation (TMS) is a neurophysiological technique with a long established pedigree of safety, tolerability, and efficacy. Initially TMS was used to study the function of the cerebral cortex, but it has now become a treatment for migraine, one of the most common and debilitating neurological conditions. In this review we discuss the scientific background and development of the technique. We explore its application for the treatment of migraine and ponder the possible mechanisms of action in this most common neurological condition. The generation of brief magnetic pulses by a suitable coil can induce electrical fields in the body. When applied to the cerebral cortex, currents are painlessly induced in cortical neurons. These currents can lead to neuronal depolarization and may influence cortical excitability by means that are as yet not fully understood. This ability to modulate cortical excitability has been exploited as a treatment for migraine with aura. Aura is implicated in the pathophysiology of migraine. Experimental studies have shown that transcranial magnetic pulses can block waves of cortical spreading depression - the experimental equivalent of migrainous aura. Migraine is a debilitating condition characterized by headache, nausea, and sensory hypersensitivity. It may affect up to 15% of the population, yet current drug treatments are often poorly tolerated. Clinical studies have shown that TMS is an effective treatment for migraine. In addition, it has the added advantages of being safe and well tolerated by patients. © 2016 American Headache Society.

  18. Slit-scanning differential phase-contrast mammography: first experimental results

    NASA Astrophysics Data System (ADS)

    Roessl, Ewald; Daerr, Heiner; Koehler, Thomas; Martens, Gerhard; van Stevendaal, Udo

    2014-03-01

    The demands for a large field-of-view (FOV) and the stringent requirements for a stable acquisition geometry rank among the major obstacles for the translation of grating-based, differential phase-contrast techniques from the laboratory to clinical applications. While for state-of-the-art Full-Field Digital Mammography (FFDM) FOVs of 24 cm x 30 cm are common practice, the specifications for mechanical stability are naturally derived from the detector pixel size, which ranges between 50 and 100 μm. However, in grating-based, phase-contrast imaging, the relative placement of the gratings in the interferometer must be guaranteed to within micrometer precision. In this work we report on first experimental results on a phase-contrast x-ray imaging system based on the Philips MicroDose L30 mammography unit. With the proposed approach we achieve a FOV of about 65 mm x 175 mm by the use of the slit-scanning technique. The demand for mechanical stability on a micrometer scale was relaxed by the specific interferometer design, i.e., a rigid, actuator-free mount of the phase-grating G1 with respect to the analyzer-grating G2 onto a common steel frame. The image acquisition and formation processes are described and first phase-contrast images of a test object are presented. A brief discussion of the shortcomings of the current approach is given, including the level of remaining image artifacts and the relatively inefficient usage of the total available x-ray source output.

  19. Target Highlights in CASP9: Experimental Target Structures for the Critical Assessment of Techniques for Protein Structure Prediction

    PubMed Central

    Kryshtafovych, Andriy; Moult, John; Bartual, Sergio G.; Bazan, J. Fernando; Berman, Helen; Casteel, Darren E.; Christodoulou, Evangelos; Everett, John K.; Hausmann, Jens; Heidebrecht, Tatjana; Hills, Tanya; Hui, Raymond; Hunt, John F.; Jayaraman, Seetharaman; Joachimiak, Andrzej; Kennedy, Michael A.; Kim, Choel; Lingel, Andreas; Michalska, Karolina; Montelione, Gaetano T.; Otero, José M.; Perrakis, Anastassis; Pizarro, Juan C.; van Raaij, Mark J.; Ramelot, Theresa A.; Rousseau, Francois; Tong, Liang; Wernimont, Amy K.; Young, Jasmine; Schwede, Torsten

    2011-01-01

    One goal of the CASP Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction is to identify the current state of the art in protein structure prediction and modeling. A fundamental principle of CASP is blind prediction on a set of relevant protein targets, i.e. the participating computational methods are tested on a common set of experimental target proteins, for which the experimental structures are not known at the time of modeling. Therefore, the CASP experiment would not have been possible without broad support of the experimental protein structural biology community. In this manuscript, several experimental groups discuss the structures of the proteins which they provided as prediction targets for CASP9, highlighting structural and functional peculiarities of these structures: the long tail fibre protein gp37 from bacteriophage T4, the cyclic GMP-dependent protein kinase Iβ (PKGIβ) dimerization/docking domain, the ectodomain of the JTB (Jumping Translocation Breakpoint) transmembrane receptor, Autotaxin (ATX) in complex with an inhibitor, the DNA-Binding J-Binding Protein 1 (JBP1) domain essential for biosynthesis and maintenance of DNA base-J (β-D-glucosyl-hydroxymethyluracil) in Trypanosoma and Leishmania, a so far uncharacterized 73-residue domain from Ruminococcus gnavus with a fold typical for PDZ-like domains, a domain from the Phycobilisome (PBS) core-membrane linker (LCM) phycobiliprotein ApcE from Synechocystis, the Heat shock protein 90 (Hsp90) activators PFC0360w and PFC0270w from Plasmodium falciparum, and 2-oxo-3-deoxygalactonate kinase from Klebsiella pneumoniae. PMID:22020785

  20. Experimental model with bilioenteric anastomosis in rats--technique and significance.

    PubMed

    Nagai, T; Yamakawa, T

    1992-08-01

    A simple technique of hepaticojejunostomy in rats is introduced in this paper and its suitability for use as an experimental model was evaluated histologically. Hepaticojejunostomy was performed as follows: the stump of the supra-pancreatic common bile duct (CBD), detached from adjacent tissue, was introduced into the jejunal lumen using the outer catheter previously inserted into the jejunum, and the jejunal wall close to the implantation site of the CBD was fixed to the porta hepatis with a suture. Among 40 rats in which hepaticojejunostomy was performed, the postoperative mortality rate was 17.5%. The remaining experimental animals (33 rats, 82.5%) survived for the duration of this study. The rats were sacrificed at 3, 5, 8, and 12 months after surgery, and liver function tests and macroscopic and histological studies of the biliary tract were carried out. No signs of cholangitis or liver abscess were noted in any experimental animals during this period. The median values of liver function tests were within normal limits in almost all of the experimental rats. The anastomotic stoma was also patent, and free drainage of bile was noted, but the bile duct proximal to the site of anastomosis was generally macroscopically dilated. Histologically, epithelial hyperplasia and fibrous thickening of the wall accompanied by inflammatory cell infiltration were noted in the rats sacrificed at 3 and 5 months postoperatively. Marked hyperplasia of mucous glands, goblet cell metaplasia and atypical epithelium were usually seen in the rats killed at 8 months and 12 months after surgery. (ABSTRACT TRUNCATED AT 250 WORDS)

  1. Determination of dynamic fracture toughness using a new experimental technique

    NASA Astrophysics Data System (ADS)

    Cady, Carl M.; Liu, Cheng; Lovato, Manuel L.

    2015-09-01

    In other studies, dynamic fracture toughness has been measured using Charpy impact and modified Hopkinson bar techniques. In this paper, results will be shown for the measurement of fracture toughness using a new test geometry. The crack propagation velocities range from ~0.15 mm/s to 2.5 m/s. Digital image correlation (DIC) will be the technique used to measure both the strain and the crack growth rates. The boundary of the crack is determined using the correlation coefficient generated during image analysis, and with interframe timing the crack growth rate and crack opening can be determined. A comparison of static and dynamic loading experiments will be made for brittle polymeric materials. The analysis technique presented by Sammis et al. [1] is a semi-empirical solution; however, additional Linear Elastic Fracture Mechanics analysis of the strain fields generated as part of the DIC analysis allows for the more commonly used method resembling the crack tip opening displacement (CTOD) experiment. It should be noted that this technique was developed because limited amounts of material were available and crack growth rates were too fast for a standard CTOD method.
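
    The crack-boundary idea above can be illustrated with a toy: DIC subsets that track intact material correlate well between frames, while subsets straddling the crack decorrelate, so thresholding the correlation-coefficient map localizes the crack. The map values below are invented; real maps come from image-registration software:

```python
# Toy crack localization: flag DIC subsets whose correlation coefficient
# falls below a threshold (decorrelated subsets mark the crack path).
def crack_pixels(corr_map, threshold=0.5):
    """Return (row, col) indices of decorrelated subsets."""
    return [(i, j)
            for i, row in enumerate(corr_map)
            for j, c in enumerate(row)
            if c < threshold]

corr_map = [[0.95, 0.93, 0.96],
            [0.20, 0.15, 0.94],   # decorrelated band: crack crossing row 1
            [0.92, 0.91, 0.95]]
crack = crack_pixels(corr_map)
```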

  2. Remote inspection with multi-copters, radiological sensors and SLAM techniques

    NASA Astrophysics Data System (ADS)

    Carvalho, Henrique; Vale, Alberto; Marques, Rúben; Ventura, Rodrigo; Brouwer, Yoeri; Gonçalves, Bruno

    2018-01-01

    Activated material can be found in different scenarios, such as nuclear reactor facilities or medical facilities (e.g. in positron emission tomography, commonly known as PET scanning). In addition, there are unexpected scenarios resulting from possible accidents, or where dangerous material is hidden for terrorist attacks using nuclear weapons. Thus, a technological solution is important for fast and reliable remote inspection. The multi-copter is a common type of Unmanned Aerial Vehicle (UAV) that provides the ability to perform a first radiological inspection in the described scenarios. The paper proposes a solution with a multi-copter equipped with on-board sensors to perform a 3D reconstruction and a radiological mapping of the scenario. A depth camera and a Geiger-Müller counter are the sensors used. The inspection is performed in two steps: i) a 3D reconstruction of the environment and ii) radiation activity inference to localise and quantify sources of radiation. Experimental results were achieved with real 3D data and simulated radiation activity. Experimental tests with real sources of radiation are planned in the next iteration of the work.
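
    As a hedged illustration of the activity-inference step, one can fit a single point source with an inverse-square count-rate model, rate = A / r², by least squares over measurements at known distances. This simplified model and all values are assumptions for illustration, not the paper's algorithm:

```python
# Least-squares estimate of source strength A from count rates measured at
# known distances, assuming rate = A / r**2. With x = 1/r**2 this is a
# one-parameter linear fit through the origin: A = sum(y*x) / sum(x*x).
def estimate_strength(distances, rates):
    xs = [1.0 / r**2 for r in distances]
    return sum(y * x for x, y in zip(xs, rates)) / sum(x * x for x in xs)

# Noise-free synthetic data consistent with A = 400 (counts·m²/s)
A = estimate_strength(distances=[1.0, 2.0, 4.0],
                      rates=[400.0, 100.0, 25.0])
```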

  3. Sparse dictionary for synthetic transmit aperture medical ultrasound imaging.

    PubMed

    Wang, Ping; Jiang, Jin-Yang; Li, Na; Luo, Han-Wu; Li, Fang; Cui, Shi-Gang

    2017-07-01

    It is possible to recover a signal below the Nyquist sampling limit using a compressive sensing technique in ultrasound imaging. However, the reconstruction enabled by common sparse transform approaches does not achieve satisfactory results. Considering the ultrasound echo signal's features of attenuation, repetition, and superposition, a sparse dictionary based on the emission pulse signal is proposed. Sparse coefficients in the proposed dictionary have high sparsity. Images reconstructed with this dictionary were compared with those obtained with three other common transforms, namely, the discrete Fourier transform, discrete cosine transform, and discrete wavelet transform. The performance of the proposed dictionary was analyzed via simulation and experimental data. The mean absolute error (MAE) was used to quantify the quality of the reconstructions. Experimental results indicate that the MAE associated with the proposed dictionary was always the smallest, the reconstruction time required was the shortest, and the lateral resolution and contrast of the reconstructed images were also the closest to the original images. The proposed sparse dictionary performed better than the other three sparse transforms. With the same sampling rate, the proposed dictionary achieved excellent reconstruction quality.
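
    The MAE figure of merit used above has a standard definition; a minimal version for signals flattened to equal-length lists (illustrative values, not the paper's data):

```python
# Mean absolute error between an original and a reconstructed signal.
def mae(original, reconstructed):
    """MAE = average of |original - reconstructed|, element-wise."""
    assert len(original) == len(reconstructed)
    return sum(abs(a - b) for a, b in zip(original, reconstructed)) / len(original)

err = mae([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.0, 4.2])
```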

  4. Neuropharmacological Manipulation of Restrained and Free-flying Honey Bees, Apis mellifera.

    PubMed

    Søvik, Eirik; Plath, Jenny A; Devaud, Jean-Marc; Barron, Andrew B

    2016-11-26

    Honey bees demonstrate astonishing learning abilities and advanced social behavior and communication. In addition, their brain is small, easy to visualize and to study. Therefore, bees have long been a favored model amongst neurobiologists and neuroethologists for studying the neural basis of social and natural behavior. It is important, however, that the experimental techniques used to study bees do not interfere with the behaviors being studied. Because of this, it has been necessary to develop a range of techniques for pharmacological manipulation of honey bees. In this paper we demonstrate methods for treating restrained or free-flying honey bees with a wide range of pharmacological agents. These include both noninvasive methods such as oral and topical treatments, as well as more invasive methods that allow for precise drug delivery in either systemic or localized fashion. Finally, we discuss the advantages and disadvantages of each method and describe common hurdles and how to best overcome them. We conclude with a discussion on the importance of adapting the experimental method to the biological questions rather than the other way around.

  5. New types of experimental data shape the use of enzyme kinetics for dynamic network modeling.

    PubMed

    Tummler, Katja; Lubitz, Timo; Schelker, Max; Klipp, Edda

    2014-01-01

    Since the publication of Leonor Michaelis and Maud Menten's paper on the reaction kinetics of the enzyme invertase in 1913, molecular biology has evolved tremendously. New measurement techniques allow in vivo characterization of the whole genome, proteome or transcriptome of cells, whereas the classical enzyme assay only allows determination of the two Michaelis-Menten parameters V and Km. Nevertheless, Michaelis-Menten kinetics are still commonly used, not only in the in vitro context of enzyme characterization but also as a rate law for enzymatic reactions in larger biochemical reaction networks. In this review, we give an overview of the historical development of kinetic rate laws originating from Michaelis-Menten kinetics over the past 100 years. Furthermore, we briefly summarize the experimental techniques used for the characterization of enzymes, and discuss web resources that systematically store kinetic parameters and related information. Finally, we describe the novel opportunities that arise from using these data in dynamic mathematical modeling. In this framework, traditional in vitro approaches may be combined with modern genome-scale measurements to foster thorough understanding of the underlying complex mechanisms. © 2013 FEBS.
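
    The Michaelis-Menten rate law referred to throughout is compact enough to state directly: v = V·S / (Km + S), so at S = Km the rate is half the limiting rate V (the numeric values below are illustrative):

```python
# Michaelis-Menten rate law: v = V * S / (Km + S)
# V  = limiting (maximal) rate, Km = substrate concentration at half of V.
def michaelis_menten(S, V, Km):
    """Reaction rate v for substrate concentration S."""
    return V * S / (Km + S)

v_half = michaelis_menten(S=2.0, V=10.0, Km=2.0)   # S == Km -> v = V / 2
```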

  6. Online and offline experimental techniques for polycyclic aromatic hydrocarbons recovery and measurement.

    PubMed

    Comandini, A; Malewicki, T; Brezinsky, K

    2012-03-01

    The implementation of techniques aimed at improving engine performance and reducing particulate matter (PM) pollutant emissions is strongly hampered by the limited understanding of the chemistry by which polycyclic aromatic hydrocarbons (PAH) form in combustion devices and produce PM emissions. New experimental results which examine the formation of multi-ring compounds are required. The present investigation focuses on two techniques for such an experimental examination by recovery of PAH compounds from a typical combustion-oriented experimental apparatus. The online technique discussed is the optimal but not always feasible approach; nevertheless, a detailed description of a new online sampling system is provided which can serve as a reference for future applications to different experimental set-ups. In comparison, an offline technique, which is sometimes more experimentally feasible but not necessarily optimal, has been studied in detail for the recovery of a variety of compounds with different properties, including naphthalene, biphenyl, and iodobenzene. The recovery results from both techniques were excellent, with an error in the total carbon balance of around 10% for the online technique and an uncertainty in the measurement of single species of around 7% for the offline technique. Although both techniques proved suitable for measurement of large PAH compounds, the online technique represents the optimal solution in view of the simplicity of the corresponding experimental procedure. On the other hand, the offline technique is a valuable alternative in those cases where the online technique cannot be implemented.

  7. Hook, Line and Infection: A Guide to Culturing Parasites, Establishing Infections and Assessing Immune Responses in the Three-Spined Stickleback.

    PubMed

    Stewart, Alexander; Jackson, Joseph; Barber, Iain; Eizaguirre, Christophe; Paterson, Rachel; van West, Pieter; Williams, Chris; Cable, Joanne

    2017-01-01

    The three-spined stickleback (Gasterosteus aculeatus) is a model organism with an extremely well-characterized ecology, evolutionary history, behavioural repertoire and parasitology that is coupled with published genomic data. These small temperate zone fish therefore provide an ideal experimental system to study common diseases of coldwater fish, including those of aquacultural importance. However, detailed information on the culture of stickleback parasites, the establishment and maintenance of infections and the quantification of host responses is scattered between primary and grey literature resources, some of which is not readily accessible. Our aim is to lay out a framework of techniques based on our experience to inform new and established laboratories about culture techniques and recent advances in the field. Here, essential knowledge on the biology, capture and laboratory maintenance of sticklebacks, and their commonly studied parasites is drawn together, highlighting recent advances in our understanding of the associated immune responses. In compiling this guide on the maintenance of sticklebacks and a range of common, taxonomically diverse parasites in the laboratory, we aim to engage a broader interdisciplinary community to consider this highly tractable model when addressing pressing questions in evolution, infection and aquaculture. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Hepatitis A

    PubMed Central

    Maynard, James E.

    1976-01-01

    Hepatitis A is a disease of worldwide distribution which occurs in endemic and epidemic form and is transmitted primarily by person-to-person contact through the fecal-oral route. Common source epidemics due to contamination of food are relatively common, and water-borne epidemics have been described less frequently. The presumed etiologic agent of hepatitis A has now been visualized by immune electron microscopic (IEM) techniques in early acute-illness-phase stools of humans with hepatitis A as well as in chimpanzees experimentally infected with material known to contain hepatitis A virus. In addition, several new serologic tests for the detection of antibody against hepatitis A virus have been described. These include complement fixation and immune adherence techniques. Current data suggest that hepatitis A is caused by a single viral agent lacking the morphologic heterogeneity of hepatitis B viral components and that there may be relative antigenic homogeneity between strains of virus recovered from various parts of the world. Serologic studies to date also indicate that hepatitis A virus is not a major contributing cause in post-transfusion hepatitis. PMID:183390

  9. Laser tracker orientation in confined space using on-board targets

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Kyle, Stephen; Lin, Jiarui; Yang, Linghui; Ren, Yu; Zhu, Jigui

    2016-08-01

    This paper presents a novel orientation method for two laser trackers using on-board targets attached to the tracker head and rotating with it. The technique extends an existing method developed for theodolite intersection systems which are now rarely used. This method requires only a very narrow space along the baseline between the instrument heads, in order to establish the orientation relationship. This has potential application in environments where space is restricted. The orientation parameters can be calculated by means of two-face reciprocal measurements to the on-board targets, and measurements to a common point close to the baseline. An accurate model is then applied which can be solved through nonlinear optimization. Experimental comparison has been made with the conventional orientation method, which is based on measurements to common intersection points located off the baseline. This requires more space and the comparison has demonstrated the feasibility of the more compact technique presented here. Physical setup and testing suggest that the method is practical. Uncertainties estimated by simulation indicate good performance in terms of measurement quality.

  10. Dynamic Tensile Experimental Techniques for Geomaterials: A Comprehensive Review

    NASA Astrophysics Data System (ADS)

    Heard, W.; Song, B.; Williams, B.; Martin, B.; Sparks, P.; Nie, X.

    2018-01-01

    This review article is dedicated to the Dynamic Behavior of Materials Technical Division for celebrating the 75th anniversary of the Society for Experimental Mechanics (SEM). Understanding dynamic behavior of geomaterials is critical for analyzing and solving engineering problems of various applications related to underground explosions, seismic, airblast, and penetration events. Determining the dynamic tensile response of geomaterials has been a great challenge in experiments due to the nature of relatively low tensile strength and high brittleness. Various experimental approaches have been made in the past century, especially in the most recent half century, to understand the dynamic behavior of geomaterials in tension. In this review paper, we summarized the dynamic tensile experimental techniques for geomaterials that have been developed. The major dynamic tensile experimental techniques include dynamic direct tension, dynamic split tension, and spall tension. All three of the experimental techniques are based on Hopkinson or split Hopkinson (also known as Kolsky) bar techniques and principles. Uniqueness and limitations for each experimental technique are also discussed.
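    The Hopkinson/Kolsky bar techniques surveyed above reduce measured bar strain signals to specimen quantities through standard one-wave relations: stress follows the transmitted pulse and strain rate follows the reflected pulse. A minimal numpy sketch of this data reduction; the bar properties and the constant synthetic strain signals are illustrative assumptions, not data from the review:

```python
import numpy as np

# One-wave Kolsky (split Hopkinson) bar data reduction: a minimal sketch.
# All bar properties and strain signals below are synthetic examples.

E_bar = 200e9                      # bar elastic modulus, Pa
c0 = 5000.0                        # bar wave speed, m/s
A_bar, A_spec = 1.27e-4, 0.5e-4    # bar / specimen cross-sections, m^2
L_spec = 5e-3                      # specimen gauge length, m

t = np.linspace(0.0, 100e-6, 500)        # time base, s
eps_r = -0.001 * np.ones_like(t)         # reflected-wave strain (synthetic)
eps_t = 0.0008 * np.ones_like(t)         # transmitted-wave strain (synthetic)

stress = E_bar * (A_bar / A_spec) * eps_t        # specimen stress, Pa
strain_rate = -2.0 * c0 / L_spec * eps_r         # specimen strain rate, 1/s
strain = np.cumsum(strain_rate) * (t[1] - t[0])  # strain by time integration

print(strain_rate[0], stress[0] / 1e6)   # ~2000 1/s, stress in MPa
```

    With these example numbers the sketch yields a strain rate of 2000 s⁻¹, in the regime where such dynamic characterization is typically performed.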

  11. Dynamic Tensile Experimental Techniques for Geomaterials: A Comprehensive Review

    DOE PAGES

    Heard, W.; Song, B.; Williams, B.; ...

    2018-01-03

    Here, this review article is dedicated to the Dynamic Behavior of Materials Technical Division for celebrating the 75th anniversary of the Society for Experimental Mechanics (SEM). Understanding dynamic behavior of geomaterials is critical for analyzing and solving engineering problems of various applications related to underground explosions, seismic, airblast, and penetration events. Determining the dynamic tensile response of geomaterials has been a great challenge in experiments due to the nature of relatively low tensile strength and high brittleness. Various experimental approaches have been made in the past century, especially in the most recent half century, to understand the dynamic behavior of geomaterials in tension. In this review paper, we summarized the dynamic tensile experimental techniques for geomaterials that have been developed. The major dynamic tensile experimental techniques include dynamic direct tension, dynamic split tension, and spall tension. All three of the experimental techniques are based on Hopkinson or split Hopkinson (also known as Kolsky) bar techniques and principles. Finally, uniqueness and limitations for each experimental technique are also discussed.

  12. Dynamic Tensile Experimental Techniques for Geomaterials: A Comprehensive Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heard, W.; Song, B.; Williams, B.

    Here, this review article is dedicated to the Dynamic Behavior of Materials Technical Division for celebrating the 75th anniversary of the Society for Experimental Mechanics (SEM). Understanding dynamic behavior of geomaterials is critical for analyzing and solving engineering problems of various applications related to underground explosions, seismic, airblast, and penetration events. Determining the dynamic tensile response of geomaterials has been a great challenge in experiments due to the nature of relatively low tensile strength and high brittleness. Various experimental approaches have been made in the past century, especially in the most recent half century, to understand the dynamic behavior of geomaterials in tension. In this review paper, we summarized the dynamic tensile experimental techniques for geomaterials that have been developed. The major dynamic tensile experimental techniques include dynamic direct tension, dynamic split tension, and spall tension. All three of the experimental techniques are based on Hopkinson or split Hopkinson (also known as Kolsky) bar techniques and principles. Finally, uniqueness and limitations for each experimental technique are also discussed.

  13. Introducing a New Experimental Islet Transplantation Model using Biomimetic Hydrogel and a Simple High Yield Islet Isolation Technique.

    PubMed

    Mohammadi Ayenehdeh, Jamal; Niknam, Bahareh; Hashemi, Seyed Mahmoud; Rahavi, Hossein; Rezaei, Nima; Soleimani, Masoud; Tajik, Nader

    2017-07-01

    Islet transplantation could be an ideal alternative treatment to insulin therapy for type 1 diabetes mellitus (T1DM). This clinical and experimental field requires a model that covers problems such as requiring a large number of functional and viable islets, the optimal transplantation site, and the prevention of islet dispersion. Hence, the methods of choice for isolation of functional islets and transplantation are crucial. The present study introduces an experimental model that overcomes some critical issues in islet transplantation, including in situ pancreas perfusion by digestive enzymes through the common bile duct. In comparison with conventional methods, we inflated the pancreas in Petri dishes with only 1 ml of collagenase type XI solution, followed by hand-picking isolation or Ficoll gradient separation to purify the islets. We then used a hydrogel composite in which the islets were embedded and transplanted into the peritoneal cavity of streptozotocin-induced diabetic C57BL/6 mice. Compared to the yield of classical methods, our modified technique gave a mean isolation yield of about 130-200 viable islets per mouse pancreas. In vitro glucose-mediated insulin secretion assay indicated an appropriate response in isolated islets. In addition, data from in vivo experiments revealed that the allograft remarkably maintained blood glucose levels under 400 mg/dl and that the hydrogel composite prevented the passage of immune cells. In the model presented here, the rapid islet isolation technique and the application of biomimetic hydrogel wrapping of islets could facilitate islet transplantation procedures.

  14. Dynamic tensile characterization of a 4330 steel with kolsky bar techniques.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Bo; Antoun, Bonnie R.; Connelly, Kevin

    2010-08-01

    There has been increasing demand to understand the stress-strain response as well as damage and failure mechanisms of materials under impact loading conditions. Dynamic tensile characterization has been an efficient approach to acquire satisfactory information on mechanical properties, including damage and failure, of the materials under investigation. However, in order to obtain valid experimental data, reliable tensile experimental techniques at high strain rates are required. This includes not only precise experimental apparatus but also reliable experimental procedures and comprehensive data interpretation. The Kolsky bar, originally developed by Kolsky in 1949 [1] for high-rate compressive characterization of materials, has been extended for dynamic tensile testing since 1960 [2]. In comparison to the Kolsky compression bar, the experimental design of the Kolsky tension bar has been much more diversified, particularly in producing high speed tensile pulses in the bars. Moreover, instead of directly sandwiching the cylindrical specimen between the bars as in Kolsky compression bar experiments, the specimen must be firmly attached to the bar ends in Kolsky tension bar experiments. A common method is to thread a dumbbell specimen into the ends of the incident and transmission bars. The relatively complicated striking and specimen gripping systems in Kolsky tension bar techniques often lead to disturbance in stress wave propagation in the bars, requiring appropriate interpretation of experimental data. In this study, we employed a modified Kolsky tension bar, newly developed at Sandia National Laboratories, Livermore, CA, to explore the dynamic tensile response of a 4330-V steel. The design of the new Kolsky tension bar was presented at the 2010 SEM Annual Conference [3]. Figures 1 and 2 show the actual photograph and schematic of the Kolsky tension bar, respectively. As shown in Fig. 2, the gun barrel is directly connected to the incident bar with a coupler. The cylindrical striker set inside the gun barrel is launched to impact the end cap that is threaded into the open end of the gun barrel, producing a tensile pulse in the gun barrel and the incident bar.

  15. Dynamic tensile characterization of a 4330-V steel with kolsky bar techniques.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Bo; Antoun, Bonnie R.; Connelly, Kevin

    2010-09-01

    There has been increasing demand to understand the stress-strain response as well as damage and failure mechanisms of materials under impact loading conditions. Dynamic tensile characterization has been an efficient approach to acquire satisfactory information on mechanical properties, including damage and failure, of the materials under investigation. However, in order to obtain valid experimental data, reliable tensile experimental techniques at high strain rates are required. This includes not only precise experimental apparatus but also reliable experimental procedures and comprehensive data interpretation. The Kolsky bar, originally developed by Kolsky in 1949 [1] for high-rate compressive characterization of materials, has been extended for dynamic tensile testing since 1960 [2]. In comparison to the Kolsky compression bar, the experimental design of the Kolsky tension bar has been much more diversified, particularly in producing high speed tensile pulses in the bars. Moreover, instead of directly sandwiching the cylindrical specimen between the bars as in Kolsky compression bar experiments, the specimen must be firmly attached to the bar ends in Kolsky tension bar experiments. A common method is to thread a dumbbell specimen into the ends of the incident and transmission bars. The relatively complicated striking and specimen gripping systems in Kolsky tension bar techniques often lead to disturbance in stress wave propagation in the bars, requiring appropriate interpretation of experimental data. In this study, we employed a modified Kolsky tension bar, newly developed at Sandia National Laboratories, Livermore, CA, to explore the dynamic tensile response of a 4330-V steel. The design of the new Kolsky tension bar was presented at the 2010 SEM Annual Conference [3]. Figures 1 and 2 show the actual photograph and schematic of the Kolsky tension bar, respectively. As shown in Fig. 2, the gun barrel is directly connected to the incident bar with a coupler. The cylindrical striker set inside the gun barrel is launched to impact the end cap that is threaded into the open end of the gun barrel, producing a tensile pulse in the gun barrel and the incident bar.

  16. A B-TOF mass spectrometer for the analysis of ions with extreme high start-up energies.

    PubMed

    Lezius, M

    2002-03-01

    Weak magnetic deflection is combined with two-acceleration-stage time-of-flight mass spectrometry and subsequent position-sensitive ion detection. The experimental method, called B-TOF mass spectrometry, is described with respect to its theoretical background and some experimental results. It is demonstrated that the technique has distinct advantages over other approaches, especially for the identification and analysis of very highly energetic ions with an initially large energy broadening (up to 1 MeV) and with high charge states (up to 30+). Such energetic ions are common in intense laser-matter interaction processes, as found during laser ablation, laser-cluster and laser-molecule interaction, and fast particle and x-ray generation from laser-heated plasma. Copyright 2002 John Wiley & Sons, Ltd.
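    For orientation, the idealized single-stage time-of-flight relation behind any TOF instrument is t = L·sqrt(m/(2qU)): an ion of mass m and charge q accelerated through potential U crosses a drift length L in a time proportional to sqrt(m/q). A generic sketch only; it does not model the two acceleration stages or the magnetic deflection of the B-TOF instrument, and all numbers are illustrative:

```python
import math

# Idealized single-stage TOF flight time: t = L * sqrt(m / (2*q*U)).
# Generic textbook relation; parameters below are illustrative examples.

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def flight_time(m_amu, charge_state, U_volts, L_m):
    m = m_amu * AMU
    q = charge_state * E_CHARGE
    return L_m * math.sqrt(m / (2.0 * q * U_volts))

# Flight time scales with sqrt(m/q): a 4x heavier ion arrives 2x later,
# while a higher charge state shortens the flight time.
t1 = flight_time(100, 1, 2000, 1.0)
t4 = flight_time(400, 1, 2000, 1.0)
print(t1, t4 / t1)   # ratio is exactly 2.0
```

    The sqrt(m/q) scaling is also why high charge states (up to 30+ in the abstract) compress the flight-time axis and complicate identification without the additional magnetic deflection.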

  17. Accurate simulations of helium pick-up experiments using a rejection-free Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Dutra, Matthew; Hinde, Robert

    2018-04-01

    In this paper, we present Monte Carlo simulations of helium droplet pick-up experiments with the intention of developing a robust and accurate theoretical approach for interpreting experimental helium droplet calorimetry data. Our approach is capable of capturing the evaporative behavior of helium droplets following dopant acquisition, allowing for a more realistic description of the pick-up process. Furthermore, we circumvent the traditional assumption of bulk helium behavior by utilizing density functional calculations of the size-dependent helium droplet chemical potential. The results of this new Monte Carlo technique are compared to commonly used Poisson pick-up statistics for simulations that reflect a broad range of experimental parameters. We conclude by offering an assessment of both of these theoretical approaches in the context of our observed results.
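    The Poisson pick-up statistics used as the baseline above assign probability λ^k·e^(−λ)/k! to a droplet acquiring exactly k dopants when the mean number of pick-up events is λ. A minimal sketch of that baseline (not of the paper's rejection-free Monte Carlo method), cross-checked against a direct simulation; all parameter values are illustrative:

```python
import math
import random

# Poisson pick-up statistics: P(k dopants) = lam**k * exp(-lam) / k!
# This sketches only the "commonly used" baseline the abstract compares
# against, not the rejection-free Monte Carlo method itself.

def poisson_pickup(k, lam):
    return lam**k * math.exp(-lam) / math.factorial(k)

def simulate(lam, k, n=100_000, seed=1):
    """Monte Carlo estimate: count exponential inter-event times up to lam."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        count, total = 0, rng.expovariate(1.0)
        while total < lam:
            count += 1
            total += rng.expovariate(1.0)
        hits += (count == k)
    return hits / n

lam = 1.5   # illustrative mean number of pick-up events
print(poisson_pickup(2, lam), simulate(lam, 2))
```

    The closed form and the simulation agree; the paper's point is that real droplets shrink by evaporation after each pick-up, so λ itself changes along the beam and the plain Poisson picture breaks down.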

  18. [Research progress on the technique and materials for three-dimensional bio-printing].

    PubMed

    Yang, Runhuai; Chen, Yueming; Ma, Changwang; Wang, Huiqin; Wang, Shuyue

    2017-04-01

    Three-dimensional (3D) bio-printing is a novel engineering technique by which cells and support materials can be manufactured into a complex 3D structure. Compared with other 3D printing methods, 3D bio-printing must pay more attention to the biocompatibility of the printing methods and materials. To survey these features, this paper focuses on the current state of 3D bio-printing research, with particular emphasis on bio-printing techniques and materials. Among current printing methods, the inkjet method, extrusion method, stereolithography and laser-assisted techniques are described, and the printing precision, process, requirements and influence of each technique on cell status are compared. For the printing materials, the cross-linking, biocompatibility and applications of common bio-printing materials are reviewed and compared. Most 3D bio-printing studies remain at the experimental stage, so this review could help advance the technique toward practical use and contribute to the further development of 3D bio-printing.

  19. True and Quasi-Experimental Designs. ERIC/AE Digest.

    ERIC Educational Resources Information Center

    Gribbons, Barry; Herman, Joan

    Among the different types of experimental design are two general categories: true experimental designs and quasi- experimental designs. True experimental designs include more than one purposively created group, common measured outcomes, and random assignment. Quasi-experimental designs are commonly used when random assignment is not practical or…

  20. Randomized trial of supplementary interviewing techniques to enhance recall of sexual partners in contact interviews.

    PubMed

    Brewer, Devon D; Potterat, John J; Muth, Stephen Q; Malone, Patricia Z; Montoya, Pamela; Green, David L; Rogers, Helen L; Cox, Patricia A

    2005-03-01

    People with multiple sex partners tend to forget a significant proportion of them when recalling them. We conducted a randomized trial of supplementary interviewing techniques during routine partner notification contact interviews for chlamydia, gonorrhea, and syphilis in Colorado Springs, CO; cases with multiple sex partners in the last 3 months (n = 123) participated. Interviewers prompted nonspecifically and read back the list of elicited partners after cases recalled partners on their own. We then randomly assigned cases to receive 1 of 3 sets of recall cues: (1) an experimental set of cues consisting of locations where people meet partners, role relationships, network ties, and first letters of names; (2) another experimental set including common first names; and (3) control cues referring to individual characteristics (e.g., physical appearance). Nonspecific prompting and reading back the list each increased the number of additional partners elicited and located by 3% to 5% on average. On average, the combined location/role/letter/network cues elicited more additional partners (0.57) than did the first-name (0.29) and individual characteristics (0.28) cues. The location and first-name cues were the most effective in eliciting located partners. The supplementary techniques increased the number of new cases found by 12% and, importantly, identified branches of the sexual network that would not otherwise have been discovered. Elicitation of sex partners can be enhanced in contact interviews with simple interviewing techniques, resulting in improved network ascertainment and sexually transmitted disease case finding.

  1. The medical ethics of the 'father of gynaecology', Dr J Marion Sims.

    PubMed Central

    Ojanuga, D

    1993-01-01

    Vesico-vaginal fistula (VVF) was a common ailment among American women in the 19th century. Prior to that time, no successful surgery had been developed for the cure of this condition until Dr J Marion Sims perfected a successful surgical technique in 1849. Dr Sims used female slaves as research subjects over a four-year period of experimentation (1845-1849). This paper discusses the controversy surrounding his use of powerless women and whether his actions were acceptable during that historical period. PMID:8459435

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gómez, A. M., E-mail: amgomezl-1@uqvirtual.edu.co; Torres, D. A., E-mail: datorresg@unal.edu.co

    The experimental study of nuclear magnetic moments using the Transient Field technique exploits spin-orbit hyperfine interactions to generate strong magnetic fields, above the kilo-Tesla regime, capable of inducing a precession of the nuclear spin. A theoretical description of such magnetic fields is still an open research problem, and the use of parametrizations remains a common way to address the lack of theoretical information. In this contribution, a review of the main parametrizations utilized in measurements of nuclear magnetic moments is presented, and the challenges of building a theoretical description from first principles are discussed.

  3. [Calcitonin as an alternative treatment for root resorption].

    PubMed

    Pierce, A; Berg, J O; Lindskog, S

    1989-01-01

    Inflammatory root resorption is a common finding following trauma and will cause eventual destruction of the tooth root if left untreated. This study examined the effects of intrapulpal application of calcitonin, a hormone known to inhibit osteoclastic bone resorption, on experimental inflammatory root resorption induced in monkeys. Results were histologically evaluated using a morphometric technique and revealed that calcitonin was an effective medicament for the treatment of inflammatory root resorption. It was concluded that this hormone could be a useful therapeutic adjunct in difficult cases of external root resorption.

  4. NASA/ESA CV-990 spacelab simulation

    NASA Technical Reports Server (NTRS)

    Reller, J. O., Jr.

    1976-01-01

    Simplified techniques were applied to conduct an extensive spacelab simulation using the airborne laboratory. The scientific payload was selected to perform studies in upper atmospheric physics and infrared astronomy. The mission was successful and provided extensive data relevant to spacelab objectives on overall management of a complex international payload; experiment preparation, testing, and integration; training for proxy operation in space; data handling; multiexperimenter use of common experimenter facilities (telescopes); multiexperiment operation by experiment operators; selection criteria for spacelab experiment operators; and schedule requirements to prepare for such a spacelab mission.

  5. Thermotropic Liquid Crystal-Assisted Chemical and Biological Sensors

    PubMed Central

    Honaker, Lawrence W.; Usol’tseva, Nadezhda; Mann, Elizabeth K.

    2017-01-01

    In this review article, we analyze recent progress in the application of liquid crystal-assisted advanced functional materials for sensing biological and chemical analytes. Multiple research groups demonstrate substantial interest in liquid crystal (LC) sensing platforms, generating an increasing number of scientific articles. We review trends in implementing LC sensing techniques and identify common problems related to the stability and reliability of the sensing materials as well as to experimental set-ups. Finally, we suggest possible means of bridging scientific findings to viable and attractive LC sensor platforms. PMID:29295530

  6. ASSESSMENT OF VENOUS THROMBOSIS IN ANIMAL MODELS

    PubMed Central

    SP, Grover; CE, Evans; AS, Patel; B, Modarai; P, Saha; A, Smith

    2016-01-01

    Deep vein thrombosis and common complications, including pulmonary embolism and post thrombotic syndrome, represent a major source of morbidity and mortality worldwide. Experimental models of venous thrombosis have provided considerable insight into the cellular and molecular mechanisms that regulate thrombus formation and subsequent resolution. Here we critically appraise the ex vivo and in vivo techniques used to assess venous thrombosis in these models. Particular attention is paid to imaging modalities, including magnetic resonance imaging, micro computed tomography and high frequency ultrasound that facilitate longitudinal assessment of thrombus size and composition. PMID:26681755

  7. Assessment of Venous Thrombosis in Animal Models.

    PubMed

    Grover, Steven P; Evans, Colin E; Patel, Ashish S; Modarai, Bijan; Saha, Prakash; Smith, Alberto

    2016-02-01

    Deep vein thrombosis and common complications, including pulmonary embolism and post-thrombotic syndrome, represent a major source of morbidity and mortality worldwide. Experimental models of venous thrombosis have provided considerable insight into the cellular and molecular mechanisms that regulate thrombus formation and subsequent resolution. Here, we critically appraise the ex vivo and in vivo techniques used to assess venous thrombosis in these models. Particular attention is paid to imaging modalities, including magnetic resonance imaging, micro-computed tomography, and high-frequency ultrasound that facilitate longitudinal assessment of thrombus size and composition. © 2015 American Heart Association, Inc.

  8. Development and characterization of an IPMC hair-like transducer

    NASA Astrophysics Data System (ADS)

    Akle, Barbar J.; Challita, Elio; Khairalah, Nady

    2015-04-01

    Hair-like sensors are very common in natural and biological systems. Such sensors are used to measure acoustic pressures, fluid flows, and chemical concentrations, among others. Hair-like actuators are also used to control fluid flows and perform temperature management. This study presents a manufacturing technique for a hair-like IPMC transducer. A thorough study of the building process of the sensor is presented, and a method to control the diameter and electrode thickness of the transducer is developed. The sensing behavior of the manufactured transducers is experimentally characterized.

  9. [Improvement of magnetic resonance phase unwrapping method based on Goldstein Branch-cut algorithm].

    PubMed

    Guo, Lin; Kang, Lili; Wang, Dandan

    2013-02-01

    The phase information in magnetic resonance (MR) phase images can be used in many MR imaging techniques, but phase wrapping often renders that information inaccurate, so phase unwrapping is essential for these techniques. In this paper we analyze the causes of errors in phase unwrapping with the commonly used Goldstein branch-cut algorithm and propose an improved algorithm. During the unwrapping process, masking, filtering, a dipole-remover preprocessor, and the Prim minimum-spanning-tree algorithm were introduced to optimize the residues essential to the Goldstein branch-cut algorithm. Experimental results showed that residues and branch cuts were efficiently reduced, a continuous unwrapped phase surface was obtained, and the quality of MR phase images was markedly improved with the proposed method.
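    The residues that the Goldstein branch-cut algorithm must connect can be located by summing wrapped phase differences around every 2x2 pixel loop; a loop sum of ±2π marks a residue. A minimal numpy sketch of this first step only, on synthetic phase fields; the masking, filtering, dipole-remover and Prim minimum-spanning-tree refinements described above are not reproduced:

```python
import numpy as np

# Residue detection, the first step of Goldstein branch-cut unwrapping:
# sum wrapped phase differences around each 2x2 pixel loop; a +-2*pi sum
# marks a residue. Synthetic test fields only; not the paper's MR data.

def wrap(d):
    """Wrap a phase difference into [-pi, pi)."""
    return (d + np.pi) % (2 * np.pi) - np.pi

def residues(phase):
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])    # top edge, left -> right
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])      # right edge, downward
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])      # bottom edge, right -> left
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])    # left edge, upward
    loop = d1 + d2 + d3 + d4
    return np.rint(loop / (2 * np.pi)).astype(int)  # +1, -1, or 0 per loop

# A smooth wrapped ramp contains no residues ...
x = np.linspace(0, 4 * np.pi, 32)
smooth = wrap(np.tile(x, (32, 1)))
assert not residues(smooth).any()

# ... while a phase vortex carries exactly one residue at its core.
Y, X = np.mgrid[0:32, 0:32]
vortex = np.arctan2(Y - 15.5, X - 15.5)
print(np.abs(residues(vortex)).sum())
```

    Branch cuts are then drawn between residues of opposite sign so that any integration path avoiding the cuts yields a consistent unwrapped surface.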

  10. Evolutionary Based Techniques for Fault Tolerant Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Larchev, Gregory V.; Lohn, Jason D.

    2006-01-01

    The use of SRAM-based Field Programmable Gate Arrays (FPGAs) is becoming more and more prevalent in space applications. Commercial-grade FPGAs are potentially susceptible to permanently debilitating Single-Event Latchups (SELs). Repair methods based on Evolutionary Algorithms may be applied to FPGA circuits to enable successful fault recovery. This paper presents the experimental results of applying such methods to repair four commonly used circuits (quadrature decoder, 3-by-3-bit multiplier, 3-by-3-bit adder, 440-7 decoder) into which a number of simulated faults have been introduced. The results suggest that evolutionary repair techniques can improve the process of fault recovery when used instead of or as a supplement to Triple Modular Redundancy (TMR), which is currently the predominant method for mitigating FPGA faults.

  11. Discovering semantic features in the literature: a foundation for building functional associations

    PubMed Central

    Chagoyen, Monica; Carmona-Saez, Pedro; Shatkay, Hagit; Carazo, Jose M; Pascual-Montano, Alberto

    2006-01-01

    Background: Experimental techniques such as DNA microarray, serial analysis of gene expression (SAGE) and mass spectrometry proteomics, among others, are generating large amounts of data related to genes and proteins at different levels. As in any other experimental approach, it is necessary to analyze these data in the context of previously known information about the biological entities under study. The literature is a particularly valuable source of information for experiment validation and interpretation. Therefore, the development of automated text mining tools to assist in such interpretation is one of the main challenges in current bioinformatics research. Results: We present a method to create literature profiles for large sets of genes or proteins based on common semantic features extracted from a corpus of relevant documents. These profiles can be used to establish pair-wise similarities among genes, utilized in gene/protein classification, or even combined with experimental measurements. Semantic features can be used by researchers to facilitate the understanding of the commonalities indicated by experimental results. Our approach is based on non-negative matrix factorization (NMF), a machine-learning algorithm for data analysis capable of identifying local patterns that characterize a subset of the data. The literature is thus used to establish putative relationships among subsets of genes or proteins and to provide coherent justification for this clustering into subsets. We demonstrate the utility of the method by applying it to two independent and vastly different sets of genes. Conclusion: The presented method can create literature profiles from documents relevant to sets of genes. The representation of genes as additive linear combinations of semantic features allows for the exploration of functional associations as well as for clustering, suggesting a valuable methodology for the validation and interpretation of high-throughput experimental data. PMID:16438716
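The NMF decomposition at the core of this approach can be sketched with plain multiplicative updates. A minimal sketch follows; the toy gene-by-term count matrix and the choice of two factors are invented for illustration and are not data or code from the paper.

```python
import numpy as np

# Hypothetical gene-by-term document-frequency matrix (4 "genes" x 6 "terms").
V = np.array([
    [5, 4, 0, 0, 1, 0],
    [4, 5, 1, 0, 0, 0],
    [0, 0, 4, 5, 0, 1],
    [0, 1, 5, 4, 0, 0],
], dtype=float)

def nmf(V, k, n_iter=500, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: V ~ W @ H."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)  # update term loadings
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)  # update gene coefficients
    return W, H

W, H = nmf(V, k=2)
# Rows of H play the role of "semantic features" (term weightings); rows of
# W express each gene as an additive combination of those features, which is
# what makes the profiles usable for pair-wise similarity and clustering.
print(np.round(W, 2))
```

The nonnegativity of W and H is what gives the additive, parts-based interpretation the abstract refers to.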

  12. Vibrationally resolved photoelectron spectroscopy of electronic excited states of DNA bases: application to the Ã state of thymine cation.

    PubMed

    Hochlaf, Majdi; Pan, Yi; Lau, Kai-Chung; Majdi, Youssef; Poisson, Lionel; Garcia, Gustavo A; Nahon, Laurent; Al Mogren, Muneerah Mogren; Schwell, Martin

    2015-02-19

    To fully understand light-molecule interaction dynamics at short time scales, recent theoretical and experimental studies have proved the importance of accurately characterizing not only the ground state (D0) but also the electronic excited states (e.g., D1) of molecules. While ground state investigations are currently straightforward, those of electronic excited states are not. Here, we characterized the Ã electronic state of the ionic thymine (T(+)) DNA base using explicitly correlated coupled cluster ab initio methods and state-of-the-art synchrotron-based electron/ion coincidence techniques. The experimental spectrum is composed of rich and long vibrational progressions corresponding to the population of the low-frequency modes of T(+)(Ã). This work challenges numerous previous works carried out on DNA bases using common synchrotron and VUV-based photoelectron spectroscopies. We hence provide a powerful theoretical and experimental framework to study the electronic structure of ionized DNA bases that could be generalized to other medium-sized biologically relevant systems.

  13. The development of laser speckle velocimetry for the study of vortical flows

    NASA Technical Reports Server (NTRS)

    Krothapalli, A.

    1991-01-01

    A research program was undertaken to develop a new experimental technique, commonly known as particle image displacement velocimetry (PIDV), to measure an instantaneous two-dimensional velocity field in a selected plane of a flow field. This technique was successfully developed and applied to the study of several aerodynamic problems. A detailed description of the technique and a broad review of all the research activity carried out in this field are reported. A list of technical publications is also provided. The application of PIDV to unsteady flows with large-scale structures is demonstrated in a study of the temporal evolution of the flow past an impulsively started circular cylinder. The instantaneous two-dimensional flow in the transition region of a rectangular air jet was measured using PIDV and the details are presented. This experiment clearly demonstrates the capability of PIDV in the measurement of turbulent flows. Preliminary experiments were also conducted to measure the instantaneous flow over a circular bump in a transonic flow. Several other experiments now routinely use PIDV as a non-intrusive measurement technique to obtain instantaneous two-dimensional velocity fields.
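The correlation step at the heart of PIDV can be illustrated on synthetic data: the displacement of a speckle pattern between two exposures appears as the peak of their cross-correlation, computed here via FFT. This is a sketch under the simplifying assumption of a pure cyclic shift, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
frame1 = rng.random((64, 64))                    # first "exposure" (speckle)
dy, dx = 5, 3                                    # known particle displacement
frame2 = np.roll(frame1, (dy, dx), axis=(0, 1))  # second exposure

# Cross-correlation via the Fourier correlation theorem.
corr = np.fft.ifft2(np.fft.fft2(frame1).conj() * np.fft.fft2(frame2)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
est_dy, est_dx = peak   # for a cyclic shift the peak sits exactly at (dy, dx)
print(est_dy, est_dx)
```

In a real PIV/PIDV analysis this correlation is evaluated per interrogation window, with sub-pixel peak fitting, rather than over the whole frame.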

  14. Differentiation of Cariogenic Streptococci by Fluorescent Antibody1

    PubMed Central

    Jablon, James M.; Zinner, Doran D.

    1966-01-01

    Jablon, J. M. (University of Miami, Miami, Fla.), and D. D. Zinner. Differentiation of cariogenic streptococci by fluorescent antibody. J. Bacteriol. 92:1590–1596. 1966.—Eight strains of streptococci were isolated from human carious lesions by the fluorescent-antibody (FA) technique. Seven of these strains produced experimental caries in hamsters or rats maintained on a high sucrose diet. The eighth strain was noncariogenic in animals but possessed some antigenic components in common with the cariogenic strains. On the basis of antigen-antibody reactions by microprecipitin and agar-gel diffusion patterns, the strains were divided into four groups; these groups differed with regard to their cariogenic activity in hamsters. Fluorescein-conjugated antisera, prepared against the human strains, showed some cross-reactions which interfered with the efficacy of the FA technique in differentiating between the related streptococcal groups. To eliminate these cross-reactions, a small amount of related-strain antisera was added to the fluorescein-conjugated antisera to the cariogenic strains. This technique is effective in blocking cross-reactions and should be tried wherever cross-reactions are encountered in the FA technique. PMID:5334765

  15. Frequency-independent radiation modes of interior sound radiation: Experimental study and global active control

    NASA Astrophysics Data System (ADS)

    Hesse, C.; Papantoni, V.; Algermissen, S.; Monner, H. P.

    2017-08-01

    Active control of structural sound radiation is a promising technique to overcome the poor passive acoustic isolation performance of lightweight structures in the low-frequency region. Active structural acoustic control commonly aims at the suppression of the far-field radiated sound power. This paper is concerned with the active control of sound radiation into acoustic enclosures. Experimental results of a coupled rectangular plate-fluid system under stochastic excitation are presented. The amplitudes of the frequency-independent interior radiation modes are determined in real-time using a set of structural vibration sensors, for the purpose of estimating their contribution to the acoustic potential energy in the enclosure. This approach is validated by acoustic measurements inside the cavity. Utilizing a feedback control approach, a broadband reduction of the global acoustic response inside the enclosure is achieved.

  16. Respect the technique: Status-based respect increases minority group social cohesion with majority groups, while also increasing minority collective action tendencies.

    PubMed

    Glasford, Demis E; Johnston, Brian

    2018-01-01

    The present work explores the implications of respect for social change. Social change can be achieved via improved attitudes between minority and majority groups (i.e., social cohesion) or via action taken by minority groups (i.e., collective action). Recent work suggests that the social cohesion route to social change, in particular an emphasis on commonality, may be incompatible with the collective action route to social change. We suggest that social-cohesion strategies rooted in status-based respect may allow for social cohesion and collective action. We experimentally investigated the relative effects of a majority group communicating status-based respect and commonality, as compared to a control, on minority group members' social cohesion with the majority group and willingness to engage in collective action. Status-based respect increased positive attitudes toward a majority group, relative to commonality and control, but was also associated with increased collective action tendencies. Implications for social change are discussed.

  17. Determination of high temperature strains using a PC based vision system

    NASA Astrophysics Data System (ADS)

    McNeill, Stephen R.; Sutton, Michael A.; Russell, Samuel S.

    1992-09-01

    With the widespread availability of video digitizers and cheap personal computers, the use of computer vision as an experimental tool is becoming commonplace. These systems are being used to make a wide variety of measurements that range from simple surface characterization to velocity profiles. The Sub-Pixel Digital Image Correlation technique has been developed to measure full-field displacements and gradients of the surface of an object subjected to a driving force. The technique has shown its utility by measuring the deformation and movement of objects in applications ranging from simple translation to fluid velocity profiles to crack tip deformation of solid rocket fuel. This technique has recently been improved and used to measure the surface displacement field of an object at high temperature. The development of a PC based Sub-Pixel Digital Image Correlation system has yielded an accurate and easy to use system for measuring surface displacements and gradients. Experiments have been performed to show the system is viable for measuring thermal strain.

  18. Applications of Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.

    2004-01-01

    Initial steps in the application of a low-order panel method computational fluid dynamics (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions of some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.

  19. A Lightweight I/O Scheme to Facilitate Spatial and Temporal Queries of Scientific Data Analytics

    NASA Technical Reports Server (NTRS)

    Tian, Yuan; Liu, Zhuo; Klasky, Scott; Wang, Bin; Abbasi, Hasan; Zhou, Shujia; Podhorszki, Norbert; Clune, Tom; Logan, Jeremy; Yu, Weikuan

    2013-01-01

    In the era of petascale computing, more scientific applications are being deployed on leadership scale computing platforms to enhance scientific productivity. Many I/O techniques have been designed to address the growing I/O bottleneck on large-scale systems by handling massive scientific data in a holistic manner. While such techniques have been leveraged in a wide range of applications, they have not proved adequate for many mission-critical applications, particularly in the data post-processing stage. For example, some scientific applications generate datasets composed of a vast amount of small data elements that are organized along many spatial and temporal dimensions but require sophisticated data analytics on one or more dimensions. Including such dimensional knowledge in the data organization can benefit the efficiency of data post-processing, a consideration often missing from existing I/O techniques. In this study, we propose a novel I/O scheme named STAR (Spatial and Temporal AggRegation) to enable high performance data queries for scientific analytics. STAR is able to dive into the massive data, identify the spatial and temporal relationships among data variables, and accordingly organize them into an optimized multi-dimensional data structure before writing to storage. This technique not only facilitates the common access patterns of data analytics, but also further reduces the application turnaround time. In particular, STAR enables efficient data queries along the time dimension, a practice common in scientific analytics but not yet supported by existing I/O techniques. In our case study with a critical climate modeling application, GEOS-5, the experimental results on the Jaguar supercomputer demonstrate an improvement of up to 73 times in read performance compared to the original I/O method.

  20. In-depth analysis of internal control genes for quantitative real-time PCR in Brassica oleracea var. botrytis.

    PubMed

    Sheng, X G; Zhao, Z Q; Yu, H F; Wang, J S; Zheng, C F; Gu, H H

    2016-07-15

    Quantitative reverse-transcription PCR (qRT-PCR) is a versatile technique for the analysis of gene expression. The selection of stable reference genes is essential for the application of this technique. Cauliflower (Brassica oleracea L. var. botrytis) is a commonly consumed vegetable that is rich in vitamins, calcium, and iron. Thus far, to our knowledge, there have been no reports on the validation of suitable reference genes for the data normalization of qRT-PCR in cauliflower. In the present study, we analyzed 12 candidate housekeeping genes in cauliflower subjected to different abiotic stresses, hormone treatment conditions, and accessions. geNorm and NormFinder algorithms were used to assess the expression stability of these genes. ACT2 and TIP41 were selected as suitable reference genes across all experimental samples in this study. When different accessions were compared, ACT2 and UNK3 were found to be the most suitable reference genes. In the hormone and abiotic stress treatments, ACT2, TIP41, and UNK2 were the most stably expressed. Our study also provided guidelines for selecting the best reference genes under various experimental conditions.
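The geNorm stability measure M used in such screens can be sketched in a few lines: for each candidate reference gene, M is the mean standard deviation of its log2 expression ratio against every other candidate across samples, and lower M means more stable expression. This is our own minimal reimplementation on invented data, not the published tool; the deliberately unstable gene (index 3) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
samples, genes = 10, 4
# Hypothetical relative-quantity matrix; gene 3 is made deliberately unstable.
expr = rng.lognormal(mean=0.0, sigma=0.05, size=(samples, genes))
expr[:, 3] *= rng.lognormal(mean=0.0, sigma=1.0, size=samples)

log2e = np.log2(expr)
M = np.zeros(genes)
for j in range(genes):
    ratios = log2e[:, [j]] - log2e           # log2 ratios vs every other gene
    sds = ratios.std(axis=0, ddof=1)         # variation of each pairwise ratio
    M[j] = np.delete(sds, j).mean()          # average over the other genes

print(np.argmax(M))   # the unstable gene receives the highest M
```

Genes would then be ranked by M and the most stable pair retained as references.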

  1. Comprehensive investigations of kinetics of alkaline hydrolysis of TNT (2,4,6-trinitrotoluene), DNT (2,4-dinitrotoluene), and DNAN (2,4-dinitroanisole).

    PubMed

    Sviatenko, Liudmyla; Kinney, Chad; Gorb, Leonid; Hill, Frances C; Bednar, Anthony J; Okovytyy, Sergiy; Leszczynski, Jerzy

    2014-09-02

    Combined experimental and computational techniques were used to analyze multistep chemical reactions in the alkaline hydrolysis of three nitroaromatic compounds: 2,4,6-trinitrotoluene (TNT), 2,4-dinitrotoluene (DNT), and 2,4-dinitroanisole (DNAN). The study reveals common features and differences in the kinetic behavior of these compounds. The analysis of the predicted pathways includes modeling of the reactions, along with simulation of UV-vis spectra, experimental monitoring of reactions using LC/MS techniques, development of the kinetic model by designing and solving the system of differential equations, and obtaining computationally predicted kinetics for decay and accumulation of reactants and products. Obtained results suggest that DNT and DNAN are more resistant to alkaline hydrolysis than TNT. The direct substitution of a nitro group by a hydroxide represents the most favorable pathway for all considered compounds. The formation of Meisenheimer complexes leads to the kinetic first-step intermediates in the hydrolysis of TNT. Janovsky complexes can also be formed during hydrolysis of TNT and DNT but in small quantities. Methyl group abstraction is one of the suggested pathways of DNAN transformation during alkaline hydrolysis.

  2. A further component analysis for illicit drugs mixtures with THz-TDS

    NASA Astrophysics Data System (ADS)

    Xiong, Wei; Shen, Jingling; He, Ting; Pan, Rui

    2009-07-01

    A new method for the quantitative analysis of mixtures of illicit drugs with THz time domain spectroscopy was proposed and verified experimentally. The traditional method requires fingerprints of all the pure chemical components. In practice, as only the objective components in a mixture and their absorption features are known, it is necessary and important to present a more practical technique for detection and identification. Our new method for the quantitative inspection of mixtures of illicit drugs was developed using the derivative spectrum. In this method, the ratio of objective components in a mixture can be obtained on the assumption that all objective components in the mixture and their absorption features are known, while knowledge of the unknown components is not needed. Methamphetamine and flour, an illicit drug and a common adulterant, were selected for our experiment. The experimental result verified the effectiveness of the method, suggesting that it could be an effective method for the quantitative identification of illicit drugs. This THz spectroscopy technique is of great significance for real-world applications of quantitative illicit drug analysis. It could be an effective method in the field of security and pharmaceuticals inspection.

  3. Linking microbial community structure and microbial processes: An empirical and conceptual overview

    USGS Publications Warehouse

    Bier, R.L.; Bernhardt, Emily S.; Boot, Claudia M.; Graham, Emily B.; Hall, Edward K.; Lennon, Jay T.; Nemergut, Diana R.; Osborne, Brooke B.; Ruiz-Gonzalez, Clara; Schimel, Joshua P.; Waldrop, Mark P.; Wallenstein, Matthew D.

    2015-01-01

    A major goal of microbial ecology is to identify links between microbial community structure and microbial processes. Although this objective seems straightforward, there are conceptual and methodological challenges to designing studies that explicitly evaluate this link. Here, we analyzed literature documenting structure and process responses to manipulations to determine the frequency of structure-process links and whether experimental approaches and techniques influence link detection. We examined nine journals (published 2009–13) and retained 148 experimental studies measuring microbial community structure and processes. Many qualifying papers (112 of 148) documented structure and process responses, but few (38 of 112 papers) reported statistically testing for a link. Of these tested links, 75% were significant and typically used Spearman or Pearson's correlation analysis (68%). No particular approach for characterizing structure or processes was more likely to produce significant links. Process responses were detected earlier on average than responses in structure or both structure and process. Together, our findings suggest that few publications report statistically testing structure-process links. However, when links are tested for, they often occur but share few commonalities in the processes or structures that were linked and the techniques used for measuring them.
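The correlation test most often reported in the surveyed papers, a Spearman rank correlation between a community-structure summary and a process rate, takes only a few lines. The diversity and respiration values below are invented for illustration.

```python
import numpy as np

def rankdata(x):
    """1-based ranks, averaging over ties."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):          # average ranks within tied groups
        mask = x == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    rx -= rx.mean(); ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

diversity   = [2.1, 2.5, 2.4, 3.0, 3.2, 3.8, 4.0, 4.4]   # structure summary
respiration = [0.9, 1.1, 1.0, 1.6, 1.5, 2.0, 2.4, 2.3]   # process rate
rho = spearman(diversity, respiration)
print(round(rho, 3))
```

A significant rho on data like this is the kind of "structure-process link" the review counts; in practice a p-value would accompany it.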

  4. SPONGY (SPam ONtoloGY): Email Classification Using Two-Level Dynamic Ontology

    PubMed Central

    2014-01-01

    Email is one of the common communication methods between people on the Internet. However, the increase of email misuse/abuse has resulted in an increasing volume of spam emails over recent years. An experimental system was designed and implemented with the hypothesis that an ontology-based method would outperform existing techniques, and the experimental results showed that the proposed approach does indeed improve spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first level global ontology filter and a second level user-customized ontology filter. The use of the global ontology filter showed about 91% of spam filtered, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of ontology, which is user-customized, scalable, and modularized, so that it can be embedded in many other systems for better performance. PMID:25254240

  5. Experimental cross-correlation nitrogen Q-branch CARS thermometry in a spark ignition engine

    NASA Astrophysics Data System (ADS)

    Lockett, R. D.; Ball, D.; Robertson, G. N.

    2013-07-01

    A purely experimental technique was employed to derive temperatures from nitrogen Q-branch Coherent Anti-Stokes Raman Scattering (CARS) spectra, obtained in a high pressure, high temperature environment (a spark ignition Otto engine). This was done to obviate any errors arising from deficiencies in the spectral scaling laws commonly used to represent nitrogen Q-branch CARS spectra at high pressure. The spectra obtained in the engine were compared with spectra obtained in a calibrated high pressure, high temperature cell, using direct cross-correlation in place of the minimisation of sums of squares of residuals. The technique is demonstrated through the measurement of air temperature as a function of crankshaft angle inside the cylinder of a motored single-cylinder Ricardo E6 research engine, followed by the measurement of fuel-air mixture temperatures obtained during the compression stroke in a knocking Ricardo E6 engine. A standard CARS programme (SANDIA's CARSFIT) was employed to calibrate the altered non-resonant background contribution to the CARS spectra caused by the change in the mole fraction of nitrogen in the unburned fuel-air mixture. The compression temperature profiles were extrapolated in order to predict the auto-ignition temperatures.
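The matching step of such a purely experimental approach can be sketched as follows: assign a temperature by finding the calibration-cell spectrum whose normalized cross-correlation with the measured spectrum is largest. The Gaussian "spectra", their temperature dependence, and the calibration grid below are invented stand-ins for real CARS spectra, not the authors' code.

```python
import numpy as np

x = np.linspace(0, 10, 200)          # hypothetical spectral axis

def fake_spectrum(T):
    # Hypothetical: peak position and width drift smoothly with temperature.
    return np.exp(-((x - 3 - T / 400) ** 2) / (0.5 + T / 2000))

# Calibration-cell library: one reference spectrum per known temperature (K).
library = {T: fake_spectrum(T) for T in range(300, 1101, 50)}

def ncc(a, b):
    """Normalized cross-correlation at zero lag."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(a @ b) / len(a)

# "Engine" spectrum: the 700 K reference plus a little measurement noise.
measured = fake_spectrum(700) + np.random.default_rng(2).normal(0, 0.01, x.size)
best_T = max(library, key=lambda T: ncc(measured, library[T]))
print(best_T)
```

Interpolating the correlation score between neighbouring library temperatures would refine the estimate beyond the grid spacing.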

  6. SPONGY (SPam ONtoloGY): email classification using two-level dynamic ontology.

    PubMed

    Youn, Seongwook

    2014-01-01

    Email is one of the common communication methods between people on the Internet. However, the increase of email misuse/abuse has resulted in an increasing volume of spam emails over recent years. An experimental system was designed and implemented with the hypothesis that an ontology-based method would outperform existing techniques, and the experimental results showed that the proposed approach does indeed improve spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first level global ontology filter and a second level user-customized ontology filter. The use of the global ontology filter showed about 91% of spam filtered, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of ontology, which is user-customized, scalable, and modularized, so that it can be embedded in many other systems for better performance.
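The two-level idea can be caricatured in a few lines: a global filter scores a message against shared spam concepts, then a user-level filter adjusts the score with terms specific to that user. The keywords, weights, and threshold below are invented stand-ins for the paper's global and user ontologies.

```python
# Level-1 "global ontology": shared spam-concept weights (hypothetical).
GLOBAL_ONTOLOGY = {"winner": 2.0, "free": 1.5, "credit": 1.5, "urgent": 1.0}

def classify(text, user_ontology=None, threshold=2.0):
    words = text.lower().split()
    score = sum(GLOBAL_ONTOLOGY.get(w, 0.0) for w in words)   # level 1
    if user_ontology:                                         # level 2
        score += sum(user_ontology.get(w, 0.0) for w in words)
    return "spam" if score >= threshold else "ham"

# A user who legitimately receives "credit" reports customizes level 2:
user = {"credit": -1.5}
print(classify("urgent free credit offer"))           # global filter alone
print(classify("your credit report is ready", user))  # user filter rescues it
```

The real system builds both levels as ontologies rather than flat keyword weights, which is what makes the filter modular and embeddable.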

  7. Aspergillosis in waterfowl

    USGS Publications Warehouse

    Herman, Carlton M.; Sladen, William J. L.

    1958-01-01

    Aspergillosis, a respiratory disease most commonly caused by the fungus Aspergillus fumigatus, although frequently the cause of losses in captive birds, has been little studied in wild waterfowl and other avian species. Evidence indicates this to be of importance in the wild, and studies were conducted to determine factors relating to its epizoology. Field collections from corn and other plants have yielded infective spores of Aspergillus which were inoculated into experimental chickens and ducklings and then re-isolated from characteristic lesions. A technique was developed for inoculating suspensions of known numbers of spores directly into one of the posterior thoracic airsacs. It was demonstrated that less than one million spores of A. fumigatus killed less than one-half of the experimental chickens, 10 million spores killed over 80 per cent and 50 million killed all inoculated chickens as well as ducklings. Older birds were able to survive as many as 500 million spores except when in a weakened condition. Chickens usually started dying within two days after inoculation while those that survived as long as 11 days usually fully recovered by three weeks. Pathological involvement usually was confined to lungs and airsacs. The procedures and techniques involved in these studies were illustrated on a color motion picture.

  8. Fundamental approaches in molecular biology for communication sciences and disorders.

    PubMed

    Bartlett, Rebecca S; Jetté, Marie E; King, Suzanne N; Schaser, Allison; Thibeault, Susan L

    2012-08-01

    This contemporary tutorial will introduce general principles of molecular biology, common deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and protein assays and their relevance in the field of communication sciences and disorders. Over the past 2 decades, knowledge of the molecular pathophysiology of human disease has increased at a remarkable pace. Most of this progress can be attributed to concomitant advances in basic molecular biology and, specifically, the development of an ever-expanding armamentarium of technologies for analysis of DNA, RNA, and protein structure and function. Details of these methodologies, their limitations, and examples from the communication sciences and disorders literature are presented. Results/Conclusions: The use of molecular biology techniques in the fields of speech, language, and hearing sciences is increasing, creating a need for an understanding of molecular biology fundamentals and common experimental assays.

  9. Dynamic response of RC beams strengthened with near surface mounted Carbon-FRP rods subjected to damage

    NASA Astrophysics Data System (ADS)

    Capozucca, R.; Blasi, M. G.; Corina, V.

    2015-07-01

    The near surface mounted (NSM) technique with fiber reinforced polymer (FRP) is becoming a common method for strengthening concrete beams. The effectiveness of the NSM FRP technique depends on many factors linked to materials and geometry - the dimensions of the rods used, the type of FRP material employed, the rods' surface configuration, the groove size - and to the adhesion between concrete and FRP rods. In this paper, damage detection is investigated by measuring the natural frequency values of beams with free-free ends. Damage was due both to the reduction of adhesion between concrete and carbon-FRP rectangular and circular rods and to the cracking of concrete under static bending tests on beams. A comparison between experimental and theoretical frequency values, evaluating the frequency changes due to damage, makes it possible to monitor the actual behaviour of RC beams strengthened by NSM CFRP rods.

  10. Simulation of keratoconus observation in photorefraction

    NASA Astrophysics Data System (ADS)

    Chen, Ying-Ling; Tan, B.; Baker, K.; Lewis, J. W. L.; Swartz, T.; Jiang, Y.; Wang, M.

    2006-11-01

    In recent years, keratoconus (KC) has gained increasing attention due to its treatment options and the popularity of keratorefractive surgery. This paper investigates the potential for identification of KC using photorefraction (PR), an optical technique that is similar to objective retinoscopy and is commonly used for large-scale ocular screening. Using personalized eye models of both KC and pre-LASIK patients, computer simulations were performed to achieve visualization of this ophthalmic measurement. The simulations are validated by comparing results to two sets of experimental measurements. These PR images show distinguishable differences between KC eyes and eyes that are either normal or ametropic. The simulation technique with personalized modeling can be extended to the development of other ophthalmic instruments, making investigation possible with a minimal number of real human subjects. The application is also of great interest in medical training.

  11. Detecting Spatial Patterns in Biological Array Experiments

    PubMed Central

    ROOT, DAVID E.; KELLEY, BRIAN P.; STOCKWELL, BRENT R.

    2005-01-01

    Chemical genetic screening and DNA and protein microarrays are among a number of increasingly important and widely used biological research tools that involve large numbers of parallel experiments arranged in a spatial array. It is often difficult to ensure that uniform experimental conditions are present throughout the entire array, and as a result, one often observes systematic spatially correlated errors, especially when array experiments are performed using robots. Here, the authors apply techniques based on the discrete Fourier transform to identify and quantify spatially correlated errors superimposed on a spatially random background. They demonstrate that these techniques are effective in identifying common spatially systematic errors in high-throughput 384-well microplate assay data. In addition, the authors employ a statistical test to allow for automatic detection of such errors. Software tools for using this approach are provided. PMID:14567791
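The detection idea can be illustrated on a toy 384-well plate (16 x 24): a spatially random signal plus an every-other-row systematic error (e.g. an alternating dispenser tip). The 2-D discrete Fourier transform concentrates the periodic error into one strong coefficient, making it stand out against the random background. This is our own minimal example, not the authors' software.

```python
import numpy as np

rng = np.random.default_rng(0)
plate = rng.normal(0.0, 1.0, (16, 24))            # spatially random assay signal
plate += 2.0 * (np.arange(16) % 2)[:, None]       # alternating-row systematic bias

F = np.abs(np.fft.fft2(plate - plate.mean()))     # 2-D DFT magnitude
F[0, 0] = 0.0                                     # discard the mean term
peak = np.unravel_index(np.argmax(F), F.shape)
print(peak)   # the period-2 row pattern concentrates at row frequency 8
```

A statistical test, as in the paper, would then compare the peak magnitude against the distribution expected for a purely random plate.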

  12. A novel functional electrical stimulation-control system for restoring motor function of post-stroke hemiplegic patients

    PubMed Central

    Huang, Zonghao; Wang, Zhigong; Lv, Xiaoying; Zhou, Yuxuan; Wang, Haipeng; Zong, Sihao

    2014-01-01

    Hemiparesis is one of the most common consequences of stroke. Advanced rehabilitation techniques are essential for restoring motor function in hemiplegic patients. Functional electrical stimulation applied to the affected limb based on the myoelectric signal from the unaffected limb is a promising therapy for hemiplegia. In this study, we developed a prototype system for evaluating this novel functional electrical stimulation-control strategy. Based on surface electromyography and a support vector machine model, a self-administered, multi-movement, force-modulated functional electrical stimulation prototype system for hemiplegia was implemented. This paper discusses the hardware design, the algorithm of the system, and key points of the self-oscillation-prone system. The experimental results demonstrate the feasibility of the prototype system for further clinical trials, which are being conducted to evaluate the efficacy of the proposed rehabilitation technique. PMID:25657728

  13. An algebraic iterative reconstruction technique for differential X-ray phase-contrast computed tomography.

    PubMed

    Fu, Jian; Schleede, Simone; Tan, Renbo; Chen, Liyuan; Bech, Martin; Achterhold, Klaus; Gifford, Martin; Loewen, Rod; Ruth, Ronald; Pfeiffer, Franz

    2013-09-01

    Iterative reconstruction has a wide spectrum of proven advantages in the field of conventional X-ray absorption-based computed tomography (CT). In this paper, we report on an algebraic iterative reconstruction technique for grating-based differential phase-contrast CT (DPC-CT). Due to the differential nature of DPC-CT projections, a differential operator and a smoothing operator are added to the iterative reconstruction, compared to the one commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured at a two-grating interferometer setup. Since the algorithm is easy to implement and allows for the extension to various regularization possibilities, we expect a significant impact of the method for improving future medical and industrial DPC-CT applications. Copyright © 2012. Published by Elsevier GmbH.
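The algebraic iteration underlying such techniques can be sketched on a one-dimensional analogue of differential data: the measurements are finite differences of the object, which fix it only up to an offset, so one extra mean-value row is appended. That row is a stand-in for the regularization the paper adds, and the whole example is a generic Kaczmarz/ART sketch, not the authors' algorithm.

```python
import numpy as np

n = 8
x_true = np.array([0., 0., 1., 3., 3., 2., 1., 0.])   # toy object

D = np.zeros((n - 1, n))
for i in range(n - 1):                 # first-difference ("differential") operator
    D[i, i], D[i, i + 1] = -1.0, 1.0
A = np.vstack([D, np.ones((1, n)) / n])   # append a mean constraint (hypothetical)
b = A @ x_true                            # simulated differential measurements

x = np.zeros(n)
for sweep in range(1000):                 # cyclic Kaczmarz (ART) sweeps
    for a_i, b_i in zip(A, b):
        x += (b_i - a_i @ x) / (a_i @ a_i) * a_i   # project onto each hyperplane

print(np.round(x, 3))                     # converges to x_true
```

In the 2-D DPC-CT case the rows of A combine the tomographic projection with the differential operator, and the smoothing operator tempers noise amplification.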

  14. Dielectric fluid directional spreading under the action of corona discharge

    NASA Astrophysics Data System (ADS)

    Zhou, Shangru; Liu, Jie; Hu, Qun; Jiang, Teng; Yang, Jinchu; Liu, Sheng; Zheng, Huai

    2018-01-01

    Liquid spreading is a very common natural phenomenon of significant importance for a broad range of applications. In this study, a dielectric fluid directional spreading phenomenon is presented. Under the action of corona discharge, a dielectric fluid, here a typical silicone, spreads directionally along conductive patterns on conductive/nonconductive substrates. The directional spreading behavior of silicone was experimentally observed in detail on different conductive patterns. Spreading speeds were analyzed at different driving voltages, which induced the corona discharge. The presented phenomenon may be useful for inspiring techniques of manipulating liquid transportation and fabricating micropatterns.

  15. Earthquake Building Damage Mapping Based on Feature Analyzing Method from Synthetic Aperture Radar Data

    NASA Astrophysics Data System (ADS)

    An, L.; Zhang, J.; Gong, L.

    2018-04-01

    Playing an important role in gathering information on social infrastructure damage, Synthetic Aperture Radar (SAR) remote sensing is a useful tool for monitoring earthquake disasters. With the wide application of this technique, a standard method, comparing post-seismic to pre-seismic data, has become common. However, multi-temporal SAR processing is not always achievable. Developing a method for building damage detection that uses post-seismic data only is therefore of great importance. In this paper, the authors initiate an experimental investigation to establish an object-based, feature-analyzing classification method for building damage recognition.

  16. Analysis of dynamic hydrogen (H2) generation

    NASA Astrophysics Data System (ADS)

    Buford, Marcelle C.

    2003-03-01

The focus of this research is on-demand hydrogen generation for applications such as electric vehicles and electric appliances. Hydrogen can be generated by steam reformation of alcohols, hydrocarbons and other hydrogen-containing compounds. Steam reformation can be represented as a simple chemical reaction between an alcohol, commonly methanol, and water vapor to produce hydrogen and carbon dioxide. A fuel cell can then be employed to produce electrical power from hydrogen and air. Numerical and experimental techniques are employed to determine the most appropriate reforming fuel to maximize H2 yield and minimize by-products, of which carbon monoxide is the most harmful.
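The ideal hydrogen yield of methanol steam reforming follows directly from the stoichiometry CH3OH + H2O -> CO2 + 3 H2. A minimal sketch of that calculation (the function name is illustrative; the molar masses are standard values):

```python
# Ideal stoichiometry of methanol steam reforming:
#   CH3OH + H2O -> CO2 + 3 H2
# Molar masses in g/mol (standard values).
M_CH3OH = 32.042
M_H2 = 2.016

def h2_yield_per_kg_methanol():
    """Mass of H2 (kg) produced per kg of methanol at full conversion."""
    moles_methanol = 1000.0 / M_CH3OH   # mol of CH3OH in 1 kg
    moles_h2 = 3.0 * moles_methanol     # 3 mol H2 per mol CH3OH
    return moles_h2 * M_H2 / 1000.0     # convert g back to kg
```

At ideal, complete conversion this comes to roughly 0.19 kg of H2 per kg of methanol; real reformers fall short of this because of incomplete conversion and side reactions such as CO formation.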

  17. Fetal-maternal interface: a chronicle of allogeneic coexistence.

    PubMed

    Pujal, Josep-Maria; Roura, Santiago; Muñoz-Marmol, Ana M; Mate, Jose-Luis; Bayes-Genis, Antoni

    2012-01-01

The existence of allogeneic cells within an individual has been demonstrated in multiple settings, such as hematopoietic stem cell or solid organ transplantation, non-depleted blood transfusions and, in its most common form, bidirectional maternal-fetal cell trafficking, whereby cells from the fetus pass through the placental barrier. In order to graphically illustrate this early natural phenomenon, which initiates the journey of a child's cells within the mother's blood and other tissues, we used a new microscopy imaging procedure that generates Large Scale Panoramic Pictures (LSPP). This technique can also be extended to explore a broad diversity of experimental models.

  18. X-Ray based Lung Function measurement-a sensitive technique to quantify lung function in allergic airway inflammation mouse models

    NASA Astrophysics Data System (ADS)

    Dullin, C.; Markus, M. A.; Larsson, E.; Tromba, G.; Hülsmann, S.; Alves, F.

    2016-11-01

In mice, along with the assessment of eosinophils, lung function measurements, most commonly carried out by plethysmography, are essential to monitor the course of allergic airway inflammation, to examine therapy efficacy and to correlate animal with patient data. To date, plethysmography techniques either use intubation and/or restraining of the mice and are thus invasive, or are limited in their sensitivity. We present a novel unrestrained lung function method based on low-dose planar cinematic x-ray imaging (X-Ray Lung Function, XLF) and demonstrate its performance in monitoring OVA-induced experimental allergic airway inflammation in mice and an improved assessment of the efficacy of the common treatment dexamethasone. We further show that XLF is more sensitive than unrestrained whole body plethysmography (UWBP) and that conventional broncho-alveolar lavage and histology provide only limited information on the efficacy of a treatment when compared to XLF. Our results highlight the fact that a multi-parametric imaging approach as delivered by XLF is needed to address the combined cellular, anatomical and functional effects that occur during the course of asthma and in response to therapy.

  19. Techniques in Experimental Mechanics Applicable to Forest Products Research

    Treesearch

    Leslie H. Groom; Audrey G. Zink

    1994-01-01

    The title of this publication-Techniques in Experimental Mechanics Applicable to Forest Products Research-is the theme of this plenary session from the 1994 Annual Meeting of the Forest Products Society (FPS). Although this session focused on experimental techniques that can be of assistance to researchers in the field of forest products, it is hoped that the...

  20. Fluctuations in alliance and use of techniques over time: A bidirectional relation between use of "common factors" techniques and the development of the working alliance.

    PubMed

    Solomonov, Nili; McCarthy, Kevin S; Keefe, John R; Gorman, Bernard S; Blanchard, Mark; Barber, Jacques P

    2018-01-01

The aim of this study was twofold: (a) to investigate whether therapists are consistent in their use of therapeutic techniques throughout supportive-expressive therapy (SET) and (b) to examine the bidirectional relation between therapists' use of therapeutic techniques and the working alliance over the course of SET. Thirty-seven depressed patients were assigned to 16 weeks of SET as part of a larger randomized clinical trial (Barber, Barrett, Gallop, Rynn, & Rickels, ). The Working Alliance Inventory-Short Form (WAI-SF) was collected at Weeks 2, 4, and 8. Use of therapeutic interventions was rated by independent observers using the Multitheoretical List of Therapeutic Interventions (MULTI). Intraclass correlation coefficients assessed therapists' consistency in use of techniques. A cross-lagged path analysis estimated the bidirectional WAI-SF-MULTI relation across time. Therapists were moderately consistent in their use of prescribed techniques (psychodynamic, process-experiential, and person-centred). However, they were inconsistent, or more flexible, in their use of "common factors" techniques (e.g., empathy, active listening, hope, and encouragement). A positive bidirectional relation was found between use of common factors techniques and the working alliance, such that initial high levels of common factors (but not prescribed) techniques predicted higher alliance later on, and vice versa. Therapists tend to modulate their use of common factors techniques across treatment. Additionally, when a strong working alliance is developed early in treatment, therapists tend to use more common factors techniques later on. Moreover, high use of common factors techniques is predictive of later improvement in the alliance. Copyright © 2017 John Wiley & Sons, Ltd.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

CorAL is a software library designed to aid in the analysis of femtoscopic data. Femtoscopic data are a class of measured quantities used in heavy-ion collisions to characterize the sizes of particle-emitting sources. The most common type of such data is two-particle correlations induced by the Hanbury Brown/Twiss (HBT) effect, but it can also include correlations induced by final-state interactions between pairs of particles emitted in a heavy-ion collision. Because heavy-ion collisions are complex many-particle systems, modeling them requires hydrodynamical models or hybrid techniques. Using the CRAB module, CorAL can turn the output from these models into something that can be directly compared to experimental data. CorAL can also take the raw experimentally measured correlation functions and image them by inverting the Koonin-Pratt equation to extract the space-time emission profile of the particle-emitting source. This source function can be further analyzed or directly compared to theoretical calculations.

  2. How to Catch a Smurf? - Ageing and Beyond… In vivo Assessment of Intestinal Permeability in Multiple Model Organisms.

    PubMed

    Martins, Raquel R; McCracken, Andrew W; Simons, Mirre J P; Henriques, Catarina M; Rera, Michael

    2018-02-05

The Smurf Assay (SA) was initially developed in the model organism Drosophila melanogaster, where a dramatic increase of intestinal permeability has been shown to occur during aging (Rera et al., 2011). We have since validated the protocol in multiple other model organisms (Dambroise et al., 2016) and have utilized the assay to further our understanding of aging (Tricoire and Rera, 2015; Rera et al., 2018). The SA has now also been used by other labs to assess intestinal barrier permeability (Clark et al., 2015; Katzenberger et al., 2015; Barekat et al., 2016; Chakrabarti et al., 2016; Gelino et al., 2016). The SA in itself is simple; however, numerous small details can have a considerable impact on its experimental validity and subsequent interpretation. Here, we provide a detailed update on the SA technique and explain how to catch a Smurf while avoiding the most common experimental fallacies.

  3. The Portevin–Le Chatelier effect: a review of experimental findings

    PubMed Central

    Yilmaz, Ahmet

    2011-01-01

    The Portevin–Le Chatelier (PLC) effect manifests itself as an unstable plastic flow during tensile tests of some dilute alloys under certain regimes of strain rate and temperature. The plastic strain becomes localized in the form of bands which move along a specimen gauge in various ways as the PLC effect occurs. Because the localization of strain causes degradation of the inherent structural properties and surface quality of materials, understanding the effect is crucial for the effective use of alloys. The characteristic behaviors of localized strain bands and techniques commonly used to study the PLC effect are summarized in this review. A brief overview of experimental findings, the effect of material properties and test parameters on the PLC effect, and some discussion on the mechanisms of the effect are included. Tests for predicting the early failure of structural materials due to embrittlement induced by the PLC effect are also discussed. PMID:27877450

  4. An experimental comparison of several current viscoplastic constitutive models at elevated temperature

    NASA Technical Reports Server (NTRS)

    James, G. H.; Imbrie, P. K.; Hill, P. S.; Allen, D. H.; Haisler, W. E.

    1988-01-01

    Four current viscoplastic models are compared experimentally for Inconel 718 at 593 C. This material system responds with apparent negative strain rate sensitivity, undergoes cyclic work softening, and is susceptible to low cycle fatigue. A series of tests were performed to create a data base from which to evaluate material constants. A method to evaluate the constants is developed which draws on common assumptions for this type of material, recent advances by other researchers, and iterative techniques. A complex history test, not used in calculating the constants, is then used to compare the predictive capabilities of the models. The combination of exponentially based inelastic strain rate equations and dynamic recovery is shown to model this material system with the greatest success. The method of constant calculation developed was successfully applied to the complex material response encountered. Backstress measuring tests were found to be invaluable and to warrant further development.

  5. Low cost ellipsometer using a standard commercial polarimeter

    NASA Astrophysics Data System (ADS)

    Velosa, F.; Abreu, M.

    2017-08-01

Ellipsometry is an optical technique for characterizing materials or phenomena that occur at an interface or thin film between two different media. In this paper, we present an experimental low-cost version of a photometric ellipsometer, assembled from materials commonly found in any optics laboratory. The polarization parameters were measured using a Thorlabs PAX5710 polarimeter, and the uncertainty was computed following the procedures of the Guide to the Expression of Uncertainty in Measurement (GUM). With the assembled ellipsometer we were able to measure the thickness of a 10 nm SiO2 thin film deposited on Si, and the complex refractive indices of gold and tantalum samples. The SiO2 thickness we obtained had an experimental deviation of 4.5% with an uncertainty of 2.00 nm. The measured complex refractive indices of gold and tantalum agree with the values found in several references. The uncertainty was found to be limited mostly by the polarimeter's uncertainty.

  6. Evaluation of the effects of laparotomy and laparoscopy on the immune system in intra-abdominal sepsis--a review.

    PubMed

    Karantonis, Fotios-Filippos; Nikiteas, Nikolaos; Perrea, Despina; Vlachou, Antonia; Giamarellos-Bourboulis, Evangelos J; Tsigris, Christos; Kostakis, Alkiviadis

    2008-01-01

This review portrays the most common experimental models of intra-abdominal sepsis. Additionally, it outlines the factors that distinguish laparotomy from laparoscopy with respect to the immune response when comparing these two techniques in experimental models of intra-abdominal sepsis. It describes the consequences of the pneumoperitoneum and trauma produced by laparoscopy or laparotomy, respectively, on bacterial translocation and immunity. Furthermore, we report the few efforts that have been made in clinical settings, where surgeons have attempted to utilize laparoscopy as a therapeutic module when treating peritonitis or sepsis of abdominal origin. Certainly there is a need for more research in order to establish the role of pneumoperitoneum in sepsis of abdominal origin. It seems that minimally invasive surgery will inevitably gain acceptance by surgeons, as evidence indicates that by inflicting less trauma the healing response is expected to be more efficient, especially in septic patients.

  7. The materials processing research base of the Materials Processing Center. Report for FY 1982

    NASA Technical Reports Server (NTRS)

    Flemings, M. C.

    1983-01-01

The work described, while involving research in the broad field of materials processing, has two common features: the problems are closely related to space processing of materials and have both practical and fundamental significance. An interesting and important feature of many of the projects is that the interdisciplinary nature of the problems mandates complementary analytical modeling/experimental approaches. Another important aspect of many of the projects is the increasing use of mathematical modeling techniques as a research tool. The predictive capability of these models, when tested against measurements, plays a very important role both in the planning of experimental programs and in the rational interpretation of the results. Many of the projects described have a space experiment as their ultimate objective. Mathematical models are proving to be extremely valuable in projecting the findings of ground-based experiments to microgravity conditions.

  8. Correlation Equations for Condensing Heat Exchangers Based on an Algorithmic Performance-Data Classification

    NASA Astrophysics Data System (ADS)

    Pacheco-Vega, Arturo

    2016-09-01

In this work a new set of correlation equations is developed and introduced to accurately describe the thermal performance of compact heat exchangers with possible condensation. The feasible operating conditions for the thermal system correspond to dry-surface, dropwise condensation, and film condensation. Using a prescribed form for each condition, a global regression analysis for the best-fit correlation to experimental data is carried out with a simulated annealing optimization technique. The experimental data were taken from the literature and algorithmically classified into three groups, related to the possible operating conditions, with a previously introduced Gaussian-mixture-based methodology. Prior to their use in the analysis, the correct data classification was assessed and confirmed via artificial neural networks. Predictions from the correlations obtained for the different conditions are within the uncertainty of the experiments and substantially more accurate than those commonly used.
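A global best-fit search of this kind can be sketched with a generic simulated-annealing loop that minimizes the sum of squared errors of a prescribed correlation form. The two-parameter power-law form, the cooling schedule, and the synthetic data below are illustrative assumptions, not the paper's correlations:

```python
import math
import random

# Illustrative sketch: fit q = a * x**b to synthetic "experimental" data
# by simulated annealing on the sum of squared errors (SSE).

def sse(params, data):
    a, b = params
    return sum((a * x ** b - y) ** 2 for x, y in data)

def anneal(data, start=(1.0, 1.0), t0=1.0, cooling=0.999, n_iter=20000, seed=1):
    rng = random.Random(seed)
    best = cur = start
    best_e = cur_e = sse(cur, data)
    t = t0
    for _ in range(n_iter):
        # Propose a small random perturbation of the current parameters.
        cand = (cur[0] + rng.gauss(0, 0.05), cur[1] + rng.gauss(0, 0.05))
        e = sse(cand, data)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e < cur_e or rng.random() < math.exp((cur_e - e) / max(t, 1e-12)):
            cur, cur_e = cand, e
            if e < best_e:
                best, best_e = cand, e
        t *= cooling   # geometric cooling schedule
    return best, best_e

# Synthetic noiseless data generated from q = 2.0 * x**0.8.
data = [(x, 2.0 * x ** 0.8) for x in [1.0, 2.0, 4.0, 8.0, 16.0]]
params, err = anneal(data)
```

The stochastic uphill acceptance is what distinguishes this from plain least squares: it lets the search escape local minima of the SSE surface when fitting the separate dry-surface, dropwise, and film-condensation forms.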

  9. Predicting gene regulatory networks of soybean nodulation from RNA-Seq transcriptome data.

    PubMed

    Zhu, Mingzhu; Dahmen, Jeremy L; Stacey, Gary; Cheng, Jianlin

    2013-09-22

High-throughput RNA sequencing (RNA-Seq) is a revolutionary technique to study the transcriptome of a cell under various conditions at a systems level. Despite the wide application of RNA-Seq techniques to generate experimental data in the last few years, few computational methods are available to analyze this huge amount of transcription data. The computational methods for constructing gene regulatory networks from RNA-Seq expression data of hundreds or even thousands of genes are particularly lacking and urgently needed. We developed an automated bioinformatics method to predict gene regulatory networks from the quantitative expression values of differentially expressed genes based on RNA-Seq transcriptome data of a cell in different stages and conditions, integrating transcriptional, genomic and gene function data. We applied the method to the RNA-Seq transcriptome data generated for soybean root hair cells in three different development stages of nodulation after rhizobium infection. The method predicted a soybean nodulation-related gene regulatory network consisting of 10 regulatory modules common to all three stages, and 24, 49 and 70 modules specific to the first, second and third stage, respectively, each containing both a group of co-expressed genes and several transcription factors collaboratively controlling their expression under different conditions. Eight of the 10 common regulatory modules were validated by at least two kinds of validation, such as independent DNA-binding motif analysis, gene function enrichment tests, and previous experimental data in the literature. We developed a computational method to reliably reconstruct gene regulatory networks from RNA-Seq transcriptome data. The method can generate valuable hypotheses for interpreting biological data and designing biological experiments such as ChIP-Seq, RNA interference, and yeast two-hybrid experiments.

  10. Recursive formulae and performance comparisons for first mode dynamics of periodic structures

    NASA Astrophysics Data System (ADS)

    Hobeck, Jared D.; Inman, Daniel J.

    2017-05-01

Periodic structures are growing in popularity, especially in the energy harvesting and metastructures communities. Common types of these unique structures are referred to in the literature as zigzag, orthogonal spiral, fan-folded, and longitudinal zigzag structures. Many of the studies on periodic structures have two competing goals in common: (a) minimizing natural frequency, and (b) minimizing mass or volume. These goals suggest that no single design is best for all applications; therefore, there is a need for design optimization and comparison tools, which first require efficient, easy-to-implement models. The available structural dynamics models for these types of structures do provide exact analytical solutions; however, they are complex, require tedious implementation, and provide more information than necessary for practical applications, making them computationally inefficient. This paper presents experimentally validated recursive models that very accurately and efficiently predict the dynamics of the four most common types of periodic structures. The proposed modeling technique employs a combination of static deflection formulae and Rayleigh's Quotient to estimate the first mode shape and natural frequency of periodic structures having any number of beams. Also included in this paper are the results of an extensive experimental validation study, which show excellent agreement between model predictions and measurements. Lastly, the proposed models are used to evaluate the performance of each type of structure. Results of this performance evaluation reveal key advantages and disadvantages associated with each type of structure.
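The core of such an estimate, a static deflection shape fed into Rayleigh's Quotient, can be sketched on a generic discrete spring-mass chain. The chain model and trial-shape choice below are illustrative assumptions, not the authors' recursive formulae:

```python
import numpy as np

# Sketch: estimate the first natural frequency of a fixed-free chain of n
# unit masses and unit springs. The trial shape is the static deflection
# under a uniform load, and Rayleigh's Quotient
#   omega^2 ~= (phi^T K phi) / (phi^T M phi)
# gives an upper bound on the true first eigenfrequency.

def chain_stiffness(n):
    """Tridiagonal stiffness matrix of a fixed-free spring chain."""
    K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    K[-1, -1] = 1.0   # free end
    return K

def rayleigh_first_frequency(n):
    K = chain_stiffness(n)
    M = np.eye(n)                              # unit masses
    phi = np.linalg.solve(K, M @ np.ones(n))   # static deflection trial shape
    omega2 = (phi @ K @ phi) / (phi @ M @ phi)
    return np.sqrt(omega2)

est = rayleigh_first_frequency(10)
exact = np.sqrt(np.linalg.eigvalsh(chain_stiffness(10))[0])
```

Because the static deflection under a distributed load closely resembles the first mode shape, the Rayleigh estimate is an upper bound that typically lands within a fraction of a percent of the exact first eigenfrequency, which is what makes this class of model so cheap compared with a full modal solution.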

  11. Adaptive suppression of power line interference in ultra-low field magnetic resonance imaging in an unshielded environment

    NASA Astrophysics Data System (ADS)

    Huang, Xiaolei; Dong, Hui; Qiu, Yang; Li, Bo; Tao, Quan; Zhang, Yi; Krause, Hans-Joachim; Offenhäusser, Andreas; Xie, Xiaoming

    2018-01-01

    Power-line harmonic interference and fixed-frequency noise peaks may cause stripe-artifacts in ultra-low field (ULF) magnetic resonance imaging (MRI) in an unshielded environment and in a conductively shielded room. In this paper we describe an adaptive suppression method to eliminate these artifacts in MRI images. This technique utilizes spatial correlation of the interference from different positions, and is realized by subtracting the outputs of the reference channel(s) from those of the signal channel(s) using wavelet analysis and the least squares method. The adaptive suppression method is first implemented to remove the image artifacts in simulation. We then experimentally demonstrate the feasibility of this technique by adding three orthogonal superconducting quantum interference device (SQUID) magnetometers as reference channels to compensate the output of one 2nd-order gradiometer. The experimental results show great improvement in the imaging quality in both 1D and 2D MRI images at two common imaging frequencies, 1.3 kHz and 4.8 kHz. At both frequencies, the effective compensation bandwidth is as high as 2 kHz. Furthermore, we examine the longitudinal relaxation times of the same sample before and after compensation, and show that the MRI properties of the sample did not change after applying adaptive suppression. This technique can effectively increase the imaging bandwidth and be applied to ULF MRI detected by either SQUIDs or Faraday coil in both an unshielded environment and a conductively shielded room.
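The least-squares half of such a compensation scheme can be sketched as a regression of the signal channel onto the reference channels. This is a minimal sketch under assumed conditions: the channel counts, noise model, and variable names are illustrative, and the wavelet-analysis stage of the published method is omitted:

```python
import numpy as np

# Sketch: remove interference from a signal channel by regressing it onto
# reference channels that see the same interference but (ideally) none of
# the MRI signal, then subtracting the fitted contribution.

def adaptive_suppress(signal_ch, refs):
    """Least-squares fit of refs to the signal channel; returns the residual.

    signal_ch: shape (n,); refs: shape (n, k) reference-channel outputs.
    """
    w, *_ = np.linalg.lstsq(refs, signal_ch, rcond=None)
    return signal_ch - refs @ w

# Demonstration: a tone buried in interference that is a linear mixture of
# three reference channels (standing in for the three SQUID magnetometers).
rng = np.random.default_rng(42)
n = 4000
t = np.arange(n) / n
true_signal = np.sin(2 * np.pi * 50 * t)
refs = rng.normal(size=(n, 3))
interference = refs @ np.array([1.5, -0.7, 2.0])
measured = true_signal + interference
cleaned = adaptive_suppress(measured, refs)
```

The scheme works because the desired signal is (nearly) orthogonal to the reference channels, so the regression removes only the spatially correlated interference, mirroring how the reference magnetometers compensate the gradiometer output.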

  12. Hydroforming Of Patchwork Blanks — Numerical Modeling And Experimental Validation

    NASA Astrophysics Data System (ADS)

    Lamprecht, Klaus; Merklein, Marion; Geiger, Manfred

    2005-08-01

    In comparison to the commonly applied technology of tailored blanks the concept of patchwork blanks offers a number of additional advantages. Potential application areas for patchwork blanks in automotive industry are e.g. local reinforcements of automotive closures, structural reinforcements of rails and pillars as well as shock towers. But even if there is a significant application potential for patchwork blanks in automobile production, industrial realization of this innovative technique is decelerated due to a lack of knowledge regarding the forming behavior and the numerical modeling of patchwork blanks. Especially for the numerical simulation of hydroforming processes, where one part of the forming tool is replaced by a fluid under pressure, advanced modeling techniques are required to ensure an accurate prediction of the blanks' forming behavior. The objective of this contribution is to provide an appropriate model for the numerical simulation of patchwork blanks' forming processes. Therefore, different finite element modeling techniques for patchwork blanks are presented. In addition to basic shell element models a combined finite element model consisting of shell and solid elements is defined. Special emphasis is placed on the modeling of the weld seam. For this purpose the local mechanical properties of the weld metal, which have been determined by means of Martens-hardness measurements and uniaxial tensile tests, are integrated in the finite element models. The results obtained from the numerical simulations are compared to experimental data from a hydraulic bulge test. In this context the focus is laid on laser- and spot-welded patchwork blanks.

  13. Cross-spectrum measurement of thermal-noise limited oscillators.

    PubMed

    Hati, A; Nelson, C W; Howe, D A

    2016-03-01

    Cross-spectrum analysis is a commonly used technique for the detection of phase and amplitude noise of a signal in the presence of interfering uncorrelated noise. Recently, we demonstrated that the phase-inversion (anti-correlation) effect due to amplitude noise leakage can cause complete or partial collapse of the cross-spectral function. In this paper, we discuss the newly discovered effect of anti-correlated thermal noise that originates from the common-mode power divider (splitter), an essential component in a cross-spectrum noise measurement system. We studied this effect for different power splitters and discuss its influence on the measurement of thermal-noise limited oscillators. We provide theory, simulation and experimental results. In addition, we expand this study to reveal how the presence of ferrite-isolators and amplifiers at the output ports of the power splitters can affect the oscillator noise measurements. Finally, we discuss a possible solution to overcome this problem.
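The core idea of cross-spectral averaging, that uncorrelated channel noise averages away while the common signal survives, can be sketched numerically. The segment count, tone amplitude, and noise levels below are illustrative assumptions, not the paper's measurement setup:

```python
import numpy as np

# Sketch: two instruments both observe a weak common tone but each adds its
# own independent noise. Averaging the cross-spectrum conj(FFT(a)) * FFT(b)
# over many segments suppresses the uncorrelated noise terms, leaving the
# spectrum of the common signal.

rng = np.random.default_rng(7)
n, n_seg = 256, 400
t = np.arange(n)
tone = 0.2 * np.sin(2 * np.pi * 16 * t / n)   # common signal at FFT bin 16

acc = np.zeros(n, dtype=complex)
for _ in range(n_seg):
    a = tone + rng.normal(scale=1.0, size=n)  # channel 1: signal + own noise
    b = tone + rng.normal(scale=1.0, size=n)  # channel 2: signal + own noise
    acc += np.conj(np.fft.fft(a)) * np.fft.fft(b)
cross = np.abs(acc.real) / n_seg

peak_bin = int(np.argmax(cross[1 : n // 2])) + 1   # skip the DC bin
```

The anti-correlation effects discussed in the abstract break exactly this picture: when amplitude-noise leakage or common-mode thermal noise from the power splitter is anti-correlated between the two channels, the averaged cross terms no longer vanish and the cross-spectral function can partially or fully collapse.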

  14. Pre-processing Tasks in Indonesian Twitter Messages

    NASA Astrophysics Data System (ADS)

    Hidayatullah, A. F.; Ma'arif, M. R.

    2017-01-01

Twitter text messages are very noisy; tweet data are unstructured and complicated. The focus of this work is to investigate pre-processing techniques for Twitter messages in Bahasa Indonesia. The main goal of this experiment is to clean the tweet data for further analysis, so the objective of the pre-processing task is simply to remove all meaningless characters and leave valuable words. In this research, we divide our proposed pre-processing experiments into two parts. The first part consists of common pre-processing tasks; the second is a pre-processing task specific to tweet data. From the experimental results we conclude that employing a pre-processing task tailored to the characteristics of tweet data yields a more valuable result, with significantly fewer meaningless words remaining than when only the common pre-processing tasks are run.
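A minimal version of such a two-stage pipeline might look like the following. The specific cleaning rules, function names, and example tweet are illustrative assumptions, not the authors' exact procedure (which would also handle Indonesian stopwords and slang normalization):

```python
import re

def common_preprocess(text):
    """Generic text cleaning: lowercase and strip non-letter characters."""
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def tweet_preprocess(text):
    """Tweet-specific cleaning applied before the common steps:
    drop URLs, @mentions, the RT marker, and hashtag symbols."""
    text = re.sub(r"https?://\S+", " ", text)   # URLs
    text = re.sub(r"@\w+", " ", text)           # @mentions
    text = re.sub(r"\bRT\b", " ", text)         # retweet marker
    text = text.replace("#", " ")               # keep hashtag word, drop symbol
    return common_preprocess(text)

tweet = "RT @user: Macet parah di jalan Sudirman!! #macet http://t.co/abc123"
```

Here `tweet_preprocess(tweet)` yields "macet parah di jalan sudirman macet", whereas the common steps alone would leave fragments of the URL and mention behind, which is the kind of difference the experiment measures.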

  15. Presentation and Impact of Experimental Techniques in Chemistry

    ERIC Educational Resources Information Center

    Sojka, Zbigniew; Che, Michel

    2008-01-01

    Laboratory and practical courses, where students become familiar with experimental techniques and learn to interpret data and relate them to appropriate theory, play a vital role in chemical education. In the large panoply of currently available techniques, it is difficult to find a rational and easy way to classify the techniques in relation to…

  16. Exploring the Interplay between Rescue Drugs, Data Imputation, and Study Outcomes: Conceptual Review and Qualitative Analysis of an Acute Pain Data Set.

    PubMed

    Singla, Neil K; Meske, Diana S; Desjardins, Paul J

    2017-12-01

    In placebo-controlled acute surgical pain studies, provisions must be made for study subjects to receive adequate analgesic therapy. As such, most protocols allow study subjects to receive a pre-specified regimen of open-label analgesic drugs (rescue drugs) as needed. The selection of an appropriate rescue regimen is a critical experimental design choice. We hypothesized that a rescue regimen that is too liberal could lead to all study arms receiving similar levels of pain relief (thereby confounding experimental results), while a regimen that is too stringent could lead to a high subject dropout rate (giving rise to a preponderance of missing data). Despite the importance of rescue regimen as a study design feature, there exist no published review articles or meta-analysis focusing on the impact of rescue therapy on experimental outcomes. Therefore, when selecting a rescue regimen, researchers must rely on clinical factors (what analgesics do patients usually receive in similar surgical scenarios) and/or anecdotal evidence. In the following article, we attempt to bridge this gap by reviewing and discussing the experimental impacts of rescue therapy on a common acute surgical pain population: first metatarsal bunionectomy. The function of this analysis is to (1) create a framework for discussion and future exploration of rescue as a methodological study design feature, (2) discuss the interplay between data imputation techniques and rescue drugs, and (3) inform the readership regarding the impact of data imputation techniques on the validity of study conclusions. Our findings indicate that liberal rescue may degrade assay sensitivity, while stringent rescue may lead to unacceptably high dropout rates.

  17. Electrochemical Analysis of Neurotransmitters

    NASA Astrophysics Data System (ADS)

    Bucher, Elizabeth S.; Wightman, R. Mark

    2015-07-01

    Chemical signaling through the release of neurotransmitters into the extracellular space is the primary means of communication between neurons. More than four decades ago, Ralph Adams and his colleagues realized the utility of electrochemical methods for the study of easily oxidizable neurotransmitters, such as dopamine, norepinephrine, and serotonin and their metabolites. Today, electrochemical techniques are frequently coupled to microelectrodes to enable spatially resolved recordings of rapid neurotransmitter dynamics in a variety of biological preparations spanning from single cells to the intact brain of behaving animals. In this review, we provide a basic overview of the principles underlying constant-potential amperometry and fast-scan cyclic voltammetry, the most commonly employed electrochemical techniques, and the general application of these methods to the study of neurotransmission. We thereafter discuss several recent developments in sensor design and experimental methodology that are challenging the current limitations defining the application of electrochemical methods to neurotransmitter measurements.

  18. Integrative change model in psychotherapy: Perspectives from Indian thought.

    PubMed

    Manickam, L S S

    2013-01-01

Different psychotherapeutic approaches claim positive changes in patients as a result of therapy. Explanations related to the change process have led to different change models. Some of the change models are experimentally oriented whereas others are theoretical. Apart from the core behavioral, psychodynamic, humanistic, cognitive and spiritually oriented models, there are specific models within psychotherapy that explain the change process. The integrative theory of the person depicted in Indian thought provides a common ground for the integration of various therapies. An integrative model of change based on Indian thought, with specific reference to psychological concepts in the Upanishads, Ayurveda, the Bhagavad Gita and Yoga, is presented. Appropriate psychological tools may be developed in order to help clinicians choose the techniques that match the problem and the origin of the dimension. Explorations have to be conducted to develop more techniques that are culturally appropriate and clinically useful. Research has to be initiated to validate the identified concepts.

  19. Integrative change model in psychotherapy: Perspectives from Indian thought

    PubMed Central

    Manickam, L. S. S

    2013-01-01

Different psychotherapeutic approaches claim positive changes in patients as a result of therapy. Explanations related to the change process have led to different change models. Some of the change models are experimentally oriented whereas others are theoretical. Apart from the core behavioral, psychodynamic, humanistic, cognitive and spiritually oriented models, there are specific models within psychotherapy that explain the change process. The integrative theory of the person depicted in Indian thought provides a common ground for the integration of various therapies. An integrative model of change based on Indian thought, with specific reference to psychological concepts in the Upanishads, Ayurveda, the Bhagavad Gita and Yoga, is presented. Appropriate psychological tools may be developed in order to help clinicians choose the techniques that match the problem and the origin of the dimension. Explorations have to be conducted to develop more techniques that are culturally appropriate and clinically useful. Research has to be initiated to validate the identified concepts. PMID:23858275

  20. Detonation Properties Measurements for Inorganic Explosives

    NASA Astrophysics Data System (ADS)

    Morgan, Brent A.; Lopez, Angel

    2005-03-01

    Many commonly available explosive materials have never been quantitatively or theoretically characterized in a manner suitable for use in analytical models. This includes inorganic explosive materials used in spacecraft ordnance, such as zirconium potassium perchlorate (ZPP). Lack of empirical information about these materials impedes the development of computational techniques. We have applied high fidelity measurement techniques to experimentally determine the pressure and velocity characteristics of ZPP, a previously uncharacterized explosive material. Advances in measurement technology now permit the use of very small quantities of material, thus yielding a significant reduction in the cost of conducting these experiments. An empirical determination of the explosive behavior of ZPP derived a Hugoniot for ZPP with an approximate particle velocity (uo) of 1.0 km/s. This result compares favorably with the numerical calculations from the CHEETAH thermochemical code, which predicts uo of approximately 1.2 km/s under ideal conditions.

  1. Enhanced Impurity-Free Intermixing Bandgap Engineering for InP-Based Photonic Integrated Circuits

    NASA Astrophysics Data System (ADS)

    Cui, Xiao; Zhang, Can; Liang, Song; Zhu, Hong-Liang; Hou, Lian-Ping

    2014-04-01

    Impurity-free intermixing of InGaAsP multiple quantum wells (MQW) using sputtered Cu/SiO2 layers followed by rapid thermal processing (RTP) is demonstrated. The bandgap energy can be modulated by varying the Cu sputtering power and time and the RTP temperature and time, satisfying the demands of lasers, modulators, photodetectors, and passive waveguides for photonic integrated circuits with a simple procedure. The blueshift of the MQW bandgap wavelength is experimentally investigated under different sputtering and annealing conditions. The introduction of the Cu layer increases the blueshift considerably more than the common impurity-free vacancy disordering technique. A maximum bandgap blueshift of 172 nm is realized with annealing at 750°C for 200 s. The improved technique is promising for the fabrication of active/passive optoelectronic components on a single wafer with a simple process and low cost.

  2. A PVS Prover Strategy Package for Common Manipulations

    NASA Technical Reports Server (NTRS)

    DiVito, Ben L.

    2002-01-01

    Sequent manipulations for an interactive prover such as PVS can often be labor intensive. We describe an approach to tactic-based proving for improved interactive deduction in specialized domains. An experimental package of strategies (tactics) and support functions has been developed for PVS to reduce the tedium of arithmetic manipulation. Included are strategies aimed at algebraic simplification of real-valued expressions as well as term-access techniques applicable in arbitrary settings. The approach is general enough to serve in other mathematical domains and for provers other than PVS. This report presents the full set of arithmetic strategies and discusses how they are invoked within the prover. Included is a description of the extended expression notation for accessing terms as well as a substitution technique provided for higher-order strategies. Several sample proofs are displayed in full to show how the strategies might be used in practice.

  3. Cartilage magnetic resonance imaging techniques at 3 T: current status and future directions.

    PubMed

    Thakkar, Rashmi S; Subhawong, Ty; Carrino, John A; Chhabra, Avneesh

    2011-04-01

    Magnetic resonance imaging (MRI) remains the imaging modality of choice for morphological and compositional evaluation of the articular cartilage. Accurate detection and characterization of cartilage lesions are necessary to guide the medical and surgical therapy and are also critical for longitudinal studies of the cartilage. Recent work using 3.0-T MRI systems shows promise in improving detection and characterization of the cartilage lesions, particularly with increasing use of high-resolution and high-contrast 3-dimensional sequences, which allow detailed morphological assessment of cartilage in arbitrary imaging planes. In addition, implementation of biochemical sequences in clinically feasible scan times has a potential in the early detection of cartilage lesions before they become morphologically apparent. This article discusses relative advantages and disadvantages of various commonly used as well as experimental MRI techniques to directly assess the morphology and indirectly evaluate the biochemical composition of the articular cartilage.

  4. Macroindentation hardness measurement-Modernization and applications.

    PubMed

    Patel, Sarsvat; Sun, Changquan Calvin

    2016-06-15

    In this study, we first developed a modernized indentation technique for measuring tablet hardness (H). This technique features rapid digital image capture, using a calibrated light microscope, and precise area determination. We then systematically studied the effects of key experimental parameters, including indentation force, speed, and holding time, on the measured hardness of a very soft material, hydroxypropyl cellulose, and a very hard material, dibasic calcium phosphate, to cover a wide range of material properties. Based on the results, a holding period of 3 min at the peak indentation load is recommended to minimize the effect of testing speed on H. Using this method, we show that an exponential decay function describes well the relationship between tablet hardness and porosity for the seven commonly used pharmaceutical powders investigated in this work. We propose that H and its extrapolated value at zero porosity may be used to quantify tablet deformability and powder plasticity, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
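
    The exponential hardness-porosity relationship described above (a Ryshkewitch-Duckworth-type decay) can be fitted as a minimal sketch; the data values below are hypothetical, not those reported in the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def hardness(porosity, h0, k):
    # Exponential decay: H = H0 * exp(-k * porosity); H0 is hardness at zero porosity
    return h0 * np.exp(-k * porosity)

# Hypothetical tablet data: porosity fractions and indentation hardness (MPa)
porosity = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
h_measured = 120.0 * np.exp(-9.0 * porosity)  # noise-free, for illustration only

(h0_fit, k_fit), _ = curve_fit(hardness, porosity, h_measured, p0=(100.0, 5.0))
print(h0_fit, k_fit)  # recovers H0 ≈ 120 and k ≈ 9
```

    The fitted H0 then serves as the zero-porosity hardness used to compare powder plasticity across materials.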

  5. Automatic limb identification and sleeping parameters assessment for pressure ulcer prevention.

    PubMed

    Baran Pouyan, Maziyar; Birjandtalab, Javad; Nourani, Mehrdad; Matthew Pompeo, M D

    2016-08-01

    Pressure ulcers (PUs) are common among vulnerable patients such as the elderly, the bedridden and diabetics. PUs are very painful for patients and costly for hospitals and nursing homes. Assessment of sleeping parameters on at-risk limbs is critical for ulcer prevention. An effective assessment depends on automatic identification and tracking of at-risk limbs. An accurate limb identification can be used to analyze the pressure distribution and assess risk for each limb. In this paper, we propose a graph-based clustering approach to extract the body limbs from the pressure data collected by a commercial pressure map system. A robust signature-based technique is employed to automatically label each limb. Finally, an assessment technique is applied to evaluate the stress experienced by each limb over time. The experimental results indicate high performance and more than 94% average accuracy of the proposed approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
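
    The limb-extraction step can be illustrated with a much-simplified stand-in for the paper's graph-based clustering: thresholding the pressure map and taking connected components (the map values and region positions below are hypothetical):

```python
import numpy as np
from scipy.ndimage import label

# Hypothetical 2D pressure map (arbitrary units); nonzero blobs stand in for limbs
pmap = np.zeros((10, 10))
pmap[1:3, 1:3] = 5.0   # e.g. an arm region
pmap[6:9, 5:8] = 8.0   # e.g. a torso region

# Threshold out low readings, then extract connected high-pressure regions
mask = pmap > 1.0
labels, n_regions = label(mask)

# Per-region mean pressure approximates the stress each limb experiences
means = [pmap[labels == r].mean() for r in range(1, n_regions + 1)]
print(n_regions, means)  # 2 regions with means 5.0 and 8.0
```

    Tracking these regions over time would then yield the per-limb stress history the authors assess.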

  6. Diagnosing pure-electron plasmas with internal particle flux probes.

    PubMed

    Kremer, J P; Pedersen, T Sunn; Marksteiner, Q; Lefrancois, R G; Hahn, M

    2007-01-01

    Techniques for measuring the local plasma potential, density, and temperature of pure-electron plasmas using emissive and Langmuir probes are described. The plasma potential is measured as the least negative potential at which a hot tungsten filament emits electrons. Temperature is measured, as is commonly done in quasineutral plasmas, through the interpretation of a Langmuir probe current-voltage characteristic. Due to the lack of ion-saturation current, the density must also be measured through the interpretation of this characteristic, thereby greatly complicating the measurement. Measurements are further complicated by the low densities, low cross-field transport rates, and large flows typical of pure-electron plasmas. This article describes the use of these techniques on pure-electron plasmas in the Columbia Non-neutral Torus (CNT) stellarator. Measured values for the present baseline experimental parameters in CNT are φ_p = -200 ± 2 V, T_e = 4 ± 1 eV, and n_e on the order of 10^12 m^-3 in the interior.
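
    The temperature interpretation mentioned above exploits the exponential electron-retardation region of the probe's current-voltage characteristic, I(V) = I0 exp((V - Vp)/Te) with Te in eV. A minimal sketch on synthetic data (values chosen to mimic the reported CNT parameters, not actual measurements):

```python
import numpy as np

# Synthetic retardation-region characteristic: I(V) = I0 * exp((V - V_p) / T_e)
t_e_true, v_p, i0 = 4.0, -200.0, 1e-6    # eV, V, A (illustrative values)
v = np.linspace(-220.0, -205.0, 30)      # probe bias below the plasma potential
i = i0 * np.exp((v - v_p) / t_e_true)

# ln(I) is linear in V with slope 1/T_e
slope, intercept = np.polyfit(v, np.log(i), 1)
t_e_fit = 1.0 / slope
print(t_e_fit)  # ≈ 4.0 eV
```

    In a pure-electron plasma the density must come from the same curve, which is what makes the measurement delicate in practice.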

  7. BFPTool: a software tool for analysis of Biomembrane Force Probe experiments.

    PubMed

    Šmít, Daniel; Fouquet, Coralie; Doulazmi, Mohamed; Pincet, Frédéric; Trembleau, Alain; Zapotocky, Martin

    2017-01-01

    The Biomembrane Force Probe is an approachable experimental technique commonly used for single-molecule force spectroscopy and experiments on biological interfaces. The technique operates in the range of forces from 0.1 pN to 1000 pN. Experiments are typically repeated many times, conditions are often not optimal, the captured video can be unstable and lose focus; this makes efficient analysis challenging, while out-of-the-box non-proprietary solutions are not freely available. This dedicated tool was developed to integrate and simplify the image processing and analysis of videomicroscopy recordings from BFP experiments. A novel processing feature, allowing the tracking of the pipette, was incorporated to address a limitation of preceding methods. Emphasis was placed on versatility and comprehensible user interface implemented in a graphical form. An integrated analytical tool was implemented to provide a faster, simpler and more convenient way to process and analyse BFP experiments.

  8. Property Differencing for Incremental Checking

    NASA Technical Reports Server (NTRS)

    Yang, Guowei; Khurshid, Sarfraz; Person, Suzette; Rungta, Neha

    2014-01-01

    This paper introduces iProperty, a novel approach that facilitates incremental checking of programs based on a property differencing technique. Specifically, iProperty aims to reduce the cost of checking properties as they are initially developed and as they co-evolve with the program. The key novelty of iProperty is to compute the differences between the new and old versions of expected properties to reduce the number and size of the properties that need to be checked during the initial development of the properties. Furthermore, property differencing is used in synergy with program behavior differencing techniques to optimize common regression scenarios, such as detecting regression errors or checking feature additions for conformance to new expected properties. Experimental results in the context of symbolic execution of Java programs annotated with properties written as assertions show the effectiveness of iProperty in utilizing change information to enable more efficient checking.
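
    The core idea of property differencing can be sketched in a few lines: only properties that are new or changed between versions need to be re-checked (the property names and assertion texts below are invented for illustration):

```python
# Old and new versions of the expected properties (name -> assertion text)
old_props = {"p1": "x >= 0", "p2": "len(a) > 0"}
new_props = {"p1": "x >= 0", "p2": "len(a) > 1", "p3": "y != 0"}

# Re-check only properties that were added or whose text changed;
# unchanged properties carry over their previous checking results.
to_check = {k: v for k, v in new_props.items()
            if k not in old_props or old_props[k] != v}
print(sorted(to_check))  # ['p2', 'p3'] -- 'p1' is unchanged and is skipped
```

    In iProperty this diff is combined with program-behavior differencing so that only affected program paths are re-explored.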

  9. Comparative study of two approaches to model the offshore fish cages

    NASA Astrophysics Data System (ADS)

    Zhao, Yun-peng; Wang, Xin-xin; Decew, Jud; Tsukrov, Igor; Bai, Xiao-dong; Bi, Chun-wei

    2015-06-01

    The goal of this paper is to provide a comparative analysis of two commonly used approaches to discretize offshore fish cages: the lumped-mass approach and the finite element technique. Two case studies are chosen to compare the predictions of LMA (lumped-mass approach) and FEA (finite element analysis) based numerical modeling techniques. In both case studies, we consider several loading conditions consisting of different uniform currents and monochromatic waves. We investigate the motion of the cage, its deformation, and the resultant tension in the mooring lines. Both models' predictions are sufficiently close to the experimental data, but for the first experiment the DUT-FlexSim predictions are slightly more accurate than those provided by Aqua-FE™. According to the comparisons, both models can be successfully applied to the design and analysis of offshore fish cages provided that an appropriate safety factor is chosen.

  10. Recent advances in micromechanical characterization of polymer, biomaterial, and cell surfaces with atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Chyasnavichyus, Marius; Young, Seth L.; Tsukruk, Vladimir V.

    2015-08-01

    Probing of micro- and nanoscale mechanical properties of soft materials with atomic force microscopy (AFM) gives essential information about the performance of nanostructured polymer systems, natural nanocomposites, ultrathin coatings, and cell functioning. AFM provides an efficient, and in some cases the exclusive, way to study these properties nondestructively in a controlled environment. Precise force control in AFM methods allows application to a variety of soft materials and can be used to go beyond elastic properties and examine temperature- and rate-dependent material responses. In this review, we discuss experimental AFM methods currently used in the field of soft nanostructured composites and biomaterials. We discuss the advantages and disadvantages of common AFM probing techniques, which allow both qualitative and quantitative mapping of the elastic modulus of soft materials with nanoscale resolution. We also discuss several advanced techniques for more elaborate measurements of the viscoelastic properties of soft materials and experiments on single cells.

  11. 3D FISH to analyse gene domain-specific chromatin re-modeling in human cancer cell lines.

    PubMed

    Kocanova, Silvia; Goiffon, Isabelle; Bystricky, Kerstin

    2018-06-01

    Fluorescence in situ hybridization (FISH) is a common technique used to label DNA and/or RNA for detection of a genomic region of interest. However, the technique can be challenging, in particular when applied to single genes in human cancer cells. Here, we provide a step-by-step protocol for analysis of short (35 kb-300 kb) genomic regions in three dimensions (3D). We discuss the experimental design and provide practical considerations for 3D imaging and data analysis to determine chromatin folding. We demonstrate that 3D FISH using BACs (Bacterial Artificial Chromosomes) or fosmids can provide detailed information of the architecture of gene domains. More specifically, we show that mapping of specific chromatin landscapes informs on changes associated with estrogen stimulated gene activity in human breast cancer cell lines. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Mindfulness meditation-related pain relief: Evidence for unique brain mechanisms in the regulation of pain

    PubMed Central

    Zeidan, F.; Grant, J.A.; Brown, C.A.; McHaffie, J.G.; Coghill, R.C.

    2013-01-01

    The cognitive modulation of pain is influenced by a number of factors ranging from attention, beliefs, conditioning, expectations, and mood to the regulation of emotional responses to noxious sensory events. Recently, mindfulness meditation has been found to attenuate pain through some of these mechanisms, including enhanced cognitive and emotional control, as well as by altering the contextual evaluation of sensory events. This review discusses the brain mechanisms involved in mindfulness meditation-related pain relief across different meditative techniques, expertise and training levels, experimental procedures, and neuroimaging methodologies. Converging lines of neuroimaging evidence reveal that mindfulness meditation-related pain relief is associated with unique cognitive appraisal processes depending on expertise level and meditation tradition. Moreover, it is postulated that mindfulness meditation-related pain relief may share a common final pathway with other cognitive techniques in the modulation of pain. PMID:22487846

  13. Electrochemical Analysis of Neurotransmitters

    PubMed Central

    Bucher, Elizabeth S.; Wightman, R. Mark

    2016-01-01

    Chemical signaling through the release of neurotransmitters into the extracellular space is the primary means of communication between neurons. More than four decades ago, Ralph Adams and his colleagues realized the utility of electrochemical methods for the study of easily oxidizable neurotransmitters, such as dopamine, norepinephrine, and serotonin and their metabolites. Today, electrochemical techniques are frequently coupled to microelectrodes to enable spatially resolved recordings of rapid neurotransmitter dynamics in a variety of biological preparations spanning from single cells to the intact brain of behaving animals. In this review, we provide a basic overview of the principles underlying constant-potential amperometry and fast-scan cyclic voltammetry, the most commonly employed electrochemical techniques, and the general application of these methods to the study of neurotransmission. We thereafter discuss several recent developments in sensor design and experimental methodology that are challenging the current limitations defining the application of electrochemical methods to neurotransmitter measurements. PMID:25939038
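
    As a concrete illustration of fast-scan cyclic voltammetry, the applied potential is a rapid triangular sweep. The sketch below uses waveform parameters commonly cited for dopamine detection (-0.4 V to +1.3 V at 400 V/s); these are assumptions for illustration, not values taken from this review:

```python
import numpy as np

v_hold, v_switch = -0.4, 1.3   # volts: holding and switching potentials
scan_rate = 400.0              # V/s
ramp_time = (v_switch - v_hold) / scan_rate  # one-way sweep duration (s)

# Triangular waveform: up-ramp followed by the mirror-image down-ramp
n = 200
up = np.linspace(v_hold, v_switch, n)
down = np.linspace(v_switch, v_hold, n)
waveform = np.concatenate([up, down])
print(ramp_time * 1e3, waveform.max(), waveform.min())  # 4.25 ms one-way sweep
```

    Repeating this sweep at ~10 Hz is what gives the technique its sub-second time resolution for neurotransmitter dynamics.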

  14. Experimental analysis and modeling of melt growth processes

    NASA Astrophysics Data System (ADS)

    Müller, Georg

    2002-04-01

    Melt growth processes provide the basic crystalline materials for many applications. The research and development of crystal growth processes is therefore driven by the demands which arise from these specific applications; however, common goals include an increased uniformity of the relevant crystal properties at the micro- and macro-scale, a decrease of deleterious crystal defects, and an increase of crystal dimensions. As melt growth equipment and experimentation becomes more and more expensive, little room remains for improvements by trial and error procedures. A more successful strategy is to optimize the crystal growth process by a combined use of experimental process analysis and computer modeling. This will be demonstrated in this paper by several examples from the bulk growth of silicon, gallium arsenide, indium phosphide, and calcium fluoride. These examples also involve the most important melt growth techniques, crystal pulling (Czochralski methods) and vertical gradient freeze (Bridgman-type methods). The power and success of the above optimization strategy, however, is not limited only to the given examples but can be generalized and applied to many types of bulk crystal growth.

  15. Effectiveness of Indian Turmeric Powder with Honey as Complementary Therapy on Oral Mucositis : A Nursing Perspective among Cancer Patients in Mysore.

    PubMed

    Francis, Manjusha; Williams, Sheela

    2014-01-01

    Oral mucositis is a common, debilitating complication in cancer patients undergoing chemotherapy and radiotherapy, occurring in about 40 percent of cases. Mucositis may limit the patient's ability to tolerate chemotherapy or radiation therapy, and nutrition status is compromised. The aim of the study was to assess the effect of Indian turmeric powder with honey as a complementary therapy on treatment-induced oral mucositis. A quasi-experimental non-equivalent control group pre-test post-test design was used, and a non-probability purposive sampling technique was adopted to select 60 cancer patients with treatment-induced oral mucositis, 30 each in the experimental and control groups. The independent 't' values for post-tests 2 and 3 (post-test 2: 2.86 for WHO OMAS and 4.58 for MPJ OMAS; post-test 3: 5.42 for WHO OMAS and 7.2 for MPJ OMAS; p < 0.05) were significant between the experimental and control groups. It is inferred that the application of Indian turmeric and honey to treatment-induced oral mucositis is effective.

  16. Model-Based Estimation of Knee Stiffness

    PubMed Central

    Pfeifer, Serge; Vallery, Heike; Hardegger, Michael; Riener, Robert; Perreault, Eric J.

    2013-01-01

    During natural locomotion, the stiffness of the human knee is modulated continuously and subconsciously according to the demands of activity and terrain. Given modern actuator technology, powered transfemoral prostheses could theoretically provide a similar degree of sophistication and function. However, experimentally quantifying knee stiffness modulation during natural gait is challenging. Alternatively, joint stiffness could be estimated in a less disruptive manner using electromyography (EMG) combined with kinetic and kinematic measurements to estimate muscle force, together with models that relate muscle force to stiffness. Here we present the first step in that process, where we develop such an approach and evaluate it in isometric conditions, where experimental measurements are more feasible. Our EMG-guided modeling approach allows us to consider conditions with antagonistic muscle activation, a phenomenon commonly observed in physiological gait. Our validation shows that model-based estimates of knee joint stiffness coincide well with experimental data obtained using conventional perturbation techniques. We conclude that knee stiffness can be accurately estimated in isometric conditions without applying perturbations, which presents an important step towards our ultimate goal of quantifying knee stiffness during gait. PMID:22801482

  17. Transition-metal-free catalysts for the sustainable epoxidation of alkenes: from discovery to optimisation by means of high throughput experimentation.

    PubMed

    Lueangchaichaweng, Warunee; Geukens, Inge; Peeters, Annelies; Jarry, Benjamin; Launay, Franck; Bonardet, Jean-Luc; Jacobs, Pierre A; Pescarmona, Paolo P

    2012-02-01

    Transition-metal-free oxides were studied as heterogeneous catalysts for the sustainable epoxidation of alkenes with aqueous H₂O₂ by means of high throughput experimentation (HTE) techniques. A full-factorial HTE approach was applied in the various stages of the development of the catalysts: the synthesis of the materials, their screening as heterogeneous catalysts in liquid-phase epoxidation, and the optimisation of the reaction conditions. Initially, the chemical composition of transition-metal-free oxides was screened, leading to the discovery of gallium oxide as a novel, active and selective epoxidation catalyst. On the basis of these results, the research line was continued with the study of structured porous aluminosilicates, gallosilicates and silica-gallia composites. In general, the gallium-based materials showed the best catalytic performance. This family of materials represents a promising class of heterogeneous catalysts for the sustainable epoxidation of alkenes and offers a valid alternative to the transition-metal heterogeneous catalysts commonly used in epoxidation. High throughput experimentation played an important role in promoting the development of these catalytic systems.
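
    A full-factorial HTE screen of the kind described enumerates every combination of the chosen factor levels; a minimal sketch with hypothetical factors and levels (not the study's actual design):

```python
from itertools import product

# Hypothetical screening factors for a liquid-phase epoxidation study
catalysts = ["Ga2O3", "Al2O3", "SiO2-Ga2O3"]
temperatures_c = [60, 80]
h2o2_equivalents = [1.0, 2.0]

# Full-factorial design: every combination of every factor level is one run
runs = list(product(catalysts, temperatures_c, h2o2_equivalents))
print(len(runs))  # 3 * 2 * 2 = 12 experiments
```

    The same enumeration applies at each stage (synthesis, screening, condition optimisation), which is why parallelized HTE hardware pays off.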

  18. Multirobot autonomous landmine detection using distributed multisensor information aggregation

    NASA Astrophysics Data System (ADS)

    Jumadinova, Janyl; Dasgupta, Prithviraj

    2012-06-01

    We consider the problem of distributed sensor information fusion by multiple autonomous robots within the context of landmine detection. We assume that different landmines can be composed of different types of material and robots are equipped with different types of sensors, while each robot has only one type of landmine detection sensor on it. We introduce a novel technique that uses a market-based information aggregation mechanism called a prediction market. Each robot is provided with a software agent that uses the sensory input of the robot and performs the calculations of the prediction market technique. The result of the agent's calculations is a 'belief' representing the confidence of the agent in identifying the object as a landmine. The beliefs from different robots are aggregated by the market mechanism and passed on to a decision-maker agent. The decision-maker agent uses this aggregate belief information about a potential landmine and decides which other robots should be deployed to its location, so that the landmine can be confirmed rapidly and accurately. Our experimental results show that, for identical data distributions and settings, using our prediction market-based information aggregation technique improves the accuracy of object classification compared with two other commonly used techniques.
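
    The aggregation step can be sketched with a simplified stand-in for the prediction-market mechanism (a stake-weighted average rather than a full market; the sensor names, beliefs, and stakes below are invented):

```python
# Each robot's agent reports a belief (probability the object is a landmine)
# and a stake reflecting its sensor's reliability for this target material.
beliefs = {"metal_detector": 0.9, "ground_radar": 0.7, "ir_camera": 0.4}
stakes  = {"metal_detector": 2.0, "ground_radar": 1.5, "ir_camera": 0.5}

# The aggregate "market price" is the stake-weighted mean belief; the
# decision-maker agent thresholds it to decide which robots to deploy next.
total_stake = sum(stakes.values())
aggregate = sum(beliefs[r] * stakes[r] for r in beliefs) / total_stake
print(aggregate)  # weighted toward the more reliable sensors
```

    A real prediction market would also update each agent's stake based on past accuracy, so reliable sensors gain influence over time.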

  19. Quantitative image fusion in infrared radiometry

    NASA Astrophysics Data System (ADS)

    Romm, Iliya; Cukurel, Beni

    2018-05-01

    Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed to achieve quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.
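
    The 'subtract-then-fuse' ordering described above can be sketched on synthetic frames with a linear detector model (all numbers are invented; the actual method uses per-pixel nonlinear weighted least squares):

```python
import numpy as np

# Linear model: each frame i has counts x_i = q * t_i + bias, exposure time t_i
t = np.array([1.0, 2.0, 4.0])      # exposure times (s)
q_true = np.array([10.0, 30.0])    # per-pixel photoquantity (2-pixel "scene")
bias = 5.0
frames = q_true[None, :] * t[:, None] + bias  # shape: (3 exposures, 2 pixels)
dark = np.full_like(frames, bias)             # dark frames capture the bias

# Subtract-then-fuse: remove the bias in the camera-output domain, then combine
# per-exposure radiance estimates with weights favoring longer exposures
signal = frames - dark
w = t
q_est = (w[:, None] * signal / t[:, None]).sum(axis=0) / w.sum()
print(q_est)  # recovers [10., 30.]
```

    'Fuse-then-subtract' instead reconstructs the bias frame explicitly and fuses dark and scene stacks independently before subtracting in the photoquantity domain.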

  20. Effectiveness of touch and feel (TAF) technique on first aid measures for visually challenged.

    PubMed

    Mary, Helen; Sasikalaz, D; Venkatesan, Latha

    2013-01-01

    There is a common perception that a blind person cannot even help his own self. In order to challenge that view, a workshop for visually-impaired people to develop the skills to be independent and productive members of society was conceived. An experimental study was conducted at the National Institute of Visually Handicapped, Chennai, with the objective of assessing the effectiveness of the Touch and Feel (TAF) technique on first aid measures for the visually challenged. A total of 25 visually challenged people were selected by a non-probability purposive sampling technique, and data were collected using demographic variables and a structured knowledge questionnaire. The scores obtained were categorised into three levels: inadequate (0-8), moderately adequate (8-17), adequate (17-25). The study revealed that in the pre-test 40 percent of the visually challenged had inadequate knowledge, 56 percent had moderately adequate knowledge, and only a few (4%) had adequate knowledge, whereas most (68%) had adequate knowledge in the post-test, a difference that is statistically significant (p < 0.001, t-value 6.779). This shows that the TAF technique was effective for the visually challenged. There was no association between the demographic variables and the level of knowledge regarding first aid.

  1. IVF: exploiting intensity variation function for high-performance pedestrian tracking in forward-looking infrared imagery

    NASA Astrophysics Data System (ADS)

    Lamberti, Fabrizio; Sanna, Andrea; Paravati, Gianluca; Belluccini, Luca

    2014-02-01

    Tracking pedestrian targets in forward-looking infrared video sequences is a crucial component of a growing number of applications. At the same time, it is particularly challenging, since image resolution and signal-to-noise ratio are generally very low, while the nonrigidity of the human body produces highly variable target shapes. Moreover, motion can be quite chaotic, with frequent target-to-target and target-to-scene occlusions. Hence, the trend is to design ever more sophisticated techniques, able to ensure rather accurate tracking results at the cost of generally higher complexity. However, many such techniques may not be suitable for real-time tracking in limited-resource environments. This work presents a technique that extends an extremely computationally efficient tracking method, based on target intensity variation and template matching and originally designed for targets with a marked and stable hot spot, by adapting it to deal with much more complex thermal signatures and by removing the native dependency on configuration choices. Experimental tests demonstrated that, by working on multiple hot spots, the designed technique achieves the robustness of other common approaches, limiting drift while preserving the low computational footprint of the reference method.

  2. Albumen Glue, New Material for Conjunctival Graft Surgery, an Animal Experiment

    NASA Astrophysics Data System (ADS)

    Kartiwa, A.; Miraprahesti, R.; Sovani, I.; Enus, S.; Boediono, A.

    2017-02-01

    Techniques commonly used to attach conjunctival grafts are suturing and fibrin glue. This study investigated albumen glue as an alternative to the suture technique for attaching conjunctival grafts in rabbits. The aim was to compare conjunctival wound healing between albumen glue and the suture technique in a rabbit eye model. This experimental animal study included 32 eyes (16 rabbits) at PT. Bio Farma (Persero) and the Histology Laboratory, Faculty of Medicine, Padjadjaran University, from March 2014 to July 2014. The study consisted of an albumen glue group and a suture technique group. The examination included the comparison of conjunctival graft attachment, and histologic examination under the microscope was performed to measure the wound gap, which was then analyzed by the Mann-Whitney test. The results indicated that graft attachment was significantly better using albumen glue (grade 4) compared with suture (grade 2-3) on day 1 after surgery (p = 0.000). The wound gap was smaller using albumen glue (0-0.33 μm versus 5.33-14 μm; p = 0.0005) at 10 minutes after surgery and (0 μm versus 0.33-4 μm; p = 0.0005) on day 7 after surgery. In conclusion, graft attachment was better and the wound gap smaller using albumen glue than the suture technique.

  3. Determining the refractive index of shocked [100] lithium fluoride to the limit of transmissibility

    NASA Astrophysics Data System (ADS)

    Rigg, P. A.; Knudson, M. D.; Scharff, R. J.; Hixson, R. S.

    2014-07-01

    Lithium fluoride (LiF) is a common window material used in shock- and ramp-compression experiments because it displays a host of positive attributes in these applications. Most commonly, it is used to maintain stress at an interface and velocimetry techniques are used to record the particle velocity at that interface. In this application, LiF remains transparent to stresses up to 200 GPa. In this stress range, LiF has an elastic-plastic response with a very low (<0.5 GPa) elastic precursor and exhibits no known solid-solid phase transformations. However, because the density dependence of the refractive index of LiF does not follow the Gladstone-Dale relation, the measured particle velocity at this interface is not the true particle velocity and must be corrected. For that reason, the measured velocity is often referred to as the apparent velocity in these types of experiments. In this article, we describe a series of shock-compression experiments that have been performed to determine the refractive index of LiF at the two most commonly used wavelengths (532 nm and 1550 nm) between 35 and 200 GPa to high precision. A modified form of the Gladstone-Dale relation was found to work best to fit the determined values of refractive index. In addition, we provide a direct relationship between the apparent and true particle velocity to correct experimentally obtained wave profiles by others using these velocimetry techniques.
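
    For reference, the Gladstone-Dale relation that LiF is found to violate links the refractive index linearly to density:

\[
n - 1 = K\rho,
\]

    where \(K\) is a material constant; because the measured \(n(\rho)\) of LiF departs from this linear form, the article instead fits a modified version of the relation over the 35-200 GPa range, from which the apparent-to-true particle velocity correction follows.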

  4. Determining the refractive index of shocked [100] lithium fluoride to the limit of transmissibility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rigg, P. A., E-mail: prigg@lanl.gov; Scharff, R. J.; Hixson, R. S.

    2014-07-21

    Lithium fluoride (LiF) is a common window material used in shock- and ramp-compression experiments because it displays a host of positive attributes in these applications. Most commonly, it is used to maintain stress at an interface and velocimetry techniques are used to record the particle velocity at that interface. In this application, LiF remains transparent to stresses up to 200 GPa. In this stress range, LiF has an elastic-plastic response with a very low (<0.5 GPa) elastic precursor and exhibits no known solid-solid phase transformations. However, because the density dependence of the refractive index of LiF does not follow the Gladstone-Dale relation, the measured particle velocity at this interface is not the true particle velocity and must be corrected. For that reason, the measured velocity is often referred to as the apparent velocity in these types of experiments. In this article, we describe a series of shock-compression experiments that have been performed to determine the refractive index of LiF at the two most commonly used wavelengths (532 nm and 1550 nm) between 35 and 200 GPa to high precision. A modified form of the Gladstone-Dale relation was found to work best to fit the determined values of refractive index. In addition, we provide a direct relationship between the apparent and true particle velocity to correct experimentally obtained wave profiles by others using these velocimetry techniques.

  5. Modified technique for common carotid artery transposition in standing horses.

    PubMed

    Tapio, Heidi; Argüelles, David; Gracia-Calvo, Luis A; Raekallio, Marja

    2017-01-01

    To describe a modified technique for permanent translocation of the common carotid artery (CCA) to a subcutaneous position in standing horses. Experimental study. Healthy adult Standardbred and Warmblood horses (n = 8). Surgery was performed with the horses standing under sedation and with local anesthesia. A combination of previously described techniques was used, with modifications to the approach and closure of the incision. The right CCA was approached through a linear skin incision dorsal and parallel to the jugular vein and through the brachiocephalicus and omohyoideus muscles. The artery was dissected free of its sheath and elevated to the skin incision with Penrose drains. The brachiocephalicus muscle was sutured in two layers underneath the artery, leaving it in a subcutaneous position. The horses were allowed to heal for 3 weeks prior to catheterization of the artery. The transposed CCA was successfully used for repeated catheterization in six of eight horses for a period of 10 weeks. None of the horses had intraoperative complications. Two horses developed mild peri-incisional edema that resolved spontaneously. Right-sided laryngeal hemiplegia was observed endoscopically in two horses postoperatively. Two horses developed complications (surgical site infection and excessive periarterial fibrosis) that compromised the patency of the CCA and precluded catheterization. Permanent translocation of the CCA in standing horses was successful in six of eight horses. Postoperative upper airway endoscopy may be warranted, as laryngeal hemiplegia may ensue. © 2016 The American College of Veterinary Surgeons.

  6. A dynamic model-based approach to motion and deformation tracking of prosthetic valves from biplane x-ray images.

    PubMed

    Wagner, Martin G; Hatt, Charles R; Dunkerley, David A P; Bodart, Lindsay E; Raval, Amish N; Speidel, Michael A

    2018-04-16

    Transcatheter aortic valve replacement (TAVR) is a minimally invasive procedure in which a prosthetic heart valve is placed and expanded within a defective aortic valve. The device placement is commonly performed using two-dimensional (2D) fluoroscopic imaging. Within this work, we propose a novel technique to track the motion and deformation of the prosthetic valve in three dimensions based on biplane fluoroscopic image sequences. The tracking approach uses a parameterized point cloud model of the valve stent which can undergo rigid three-dimensional (3D) transformation and different modes of expansion. Rigid elements of the model are individually rotated and translated in three dimensions to approximate the motions of the stent. Tracking is performed using an iterative 2D-3D registration procedure which estimates the model parameters by minimizing the mean-squared image values at the positions of the forward-projected model points. Additionally, an initialization technique is proposed, which locates clusters of salient features to determine the initial position and orientation of the model. The proposed algorithms were evaluated based on simulations using a digital 4D CT phantom as well as experimentally acquired images of a prosthetic valve inside a chest phantom with anatomical background features. The target registration error was 0.12 ± 0.04 mm in the simulations and 0.64 ± 0.09 mm in the experimental data. The proposed algorithm could be used to generate 3D visualization of the prosthetic valve from two projections. In combination with soft-tissue sensitive-imaging techniques like transesophageal echocardiography, this technique could enable 3D image guidance during TAVR procedures. © 2018 American Association of Physicists in Medicine.
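    The core of the registration step above, minimizing the mean image value at the forward-projected model points, can be illustrated with a toy 2D version. Everything here is invented for illustration (a tiny image, one model point, a grid search standing in for the paper's iterative optimizer); a dark, low-valued stent on a bright background makes the correct pose the minimizer.

    ```python
    # Toy 2D-image registration sketch: pick the translation (dx, dy) that
    # minimizes the mean image value at the shifted model points.
    def registration_cost(image, points, dx, dy):
        return sum(image[y + dy][x + dx] for x, y in points) / len(points)

    def register(image, points, shifts):
        return min(shifts, key=lambda s: registration_cost(image, points, *s))

    # 5x5 bright image with a dark pixel at column 3, row 2; the model is a
    # single point at (1, 1), so the best shift should be (2, 1).
    img = [[10] * 5 for _ in range(5)]
    img[2][3] = 0
    best = register(img, [(1, 1)],
                    [(dx, dy) for dx in range(3) for dy in range(3)])
    ```

    The paper's method additionally parameterizes rigid 3D motion and stent expansion modes, and optimizes over biplane projections rather than a single image.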

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delcamp, E.; Lagarde, B.; Polack, F.

    Though optimization software is commonly used in visible optical design, none seems to exist for soft X-ray optics. It is shown here that optimization techniques can be applied with some advantages to X-UV monochromator design. A merit function suitable for minimizing the aberrations is proposed, and the general method of computation is described. Samples of the software inputs and outputs are presented and compared to reference data. As an example of application to soft X-ray monochromator design, the optimization of the soft X-ray monochromator of the ESRF microscopy beamline is presented. Good agreement between the predicted resolution of a modified PGM monochromator and experimental measurements is reported.

  8. Islanding detection technique using wavelet energy in grid-connected PV system

    NASA Astrophysics Data System (ADS)

    Kim, Il Song

    2016-08-01

    This paper proposes a new islanding detection method using wavelet energy in a grid-connected photovoltaic system. The method detects spectral changes in the higher-frequency components of the point-of-common-coupling voltage and obtains wavelet coefficients by multilevel wavelet analysis. The autocorrelation of the wavelet coefficients can clearly identify islanding, even under variations of the grid voltage harmonics during normal operating conditions. The advantage of the proposed method is that it can detect islanding conditions that the conventional under-voltage/over-voltage/under-frequency/over-frequency methods fail to detect. The theoretical method for obtaining wavelet energies is developed and verified by experimental results.
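    The idea of flagging islanding from the energy of high-frequency wavelet detail coefficients can be sketched with a single-level Haar transform. This is an assumed toy illustration, not the paper's algorithm: the signals and the factor-of-ten threshold are invented.

    ```python
    # One level of a Haar wavelet transform: the detail coefficients capture
    # high-frequency content; their energy jumps when islanding introduces
    # transients at the point of common coupling.
    def haar_detail_energy(signal):
        details = [(signal[i] - signal[i + 1]) / 2
                   for i in range(0, len(signal) - 1, 2)]
        return sum(d * d for d in details)

    steady = [1.0, 1.0, 1.0, 1.0]          # smooth grid voltage samples
    transient = [1.0, -1.0, 1.0, -1.0]     # high-frequency content after islanding
    islanded = haar_detail_energy(transient) > 10 * haar_detail_energy(steady)
    ```

    A multilevel analysis, as in the paper, would repeat this decomposition on the approximation coefficients and track the energy in several frequency bands.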

  9. A fast and flexible method for manufacturing 3D molded interconnect devices by the use of a rapid prototyping technology

    NASA Astrophysics Data System (ADS)

    Amend, P.; Pscherer, C.; Rechtenwald, T.; Frick, T.; Schmidt, M.

    This paper presents experimental results of manufacturing MID prototypes by means of SLS, laser structuring, and metallization. For this purpose, common SLS powder (PA12) doped with laser-structuring additives is used. First, the influence of the additives on the characteristic melting and crystallization temperatures is analyzed by means of DSC. Afterwards, the sintering process is carried out and optimized experimentally. Finally, the generated components are qualified with regard to their density, mechanical properties, and surface roughness. The surface quality in particular is important for the metallization process; therefore, surface-finishing techniques are investigated.

  10. A study of numerical methods of solution of the equations of motion of a controlled satellite under the influence of gravity gradient torque

    NASA Technical Reports Server (NTRS)

    Thompson, J. F.; Mcwhorter, J. C.; Siddiqi, S. A.; Shanks, S. P.

    1973-01-01

    Numerical methods of integration of the equations of motion of a controlled satellite under the influence of gravity-gradient torque are considered. The results of computer experimentation using a number of Runge-Kutta, multi-step, and extrapolation methods for the numerical integration of this differential system are presented, and particularly efficient methods are noted. A large bibliography of numerical methods for initial value problems for ordinary differential equations is presented, and a compilation of Runge-Kutta and multistep formulas is given. Less common numerical integration techniques from the literature are noted for further consideration.
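    As a concrete example of the Runge-Kutta family compared in the study, the classic fourth-order step is shown below. This is a generic textbook sketch, not the authors' code; the test problem y' = y is chosen so the exact answer (e) is known.

    ```python
    # Classic fourth-order Runge-Kutta (RK4) step for y' = f(t, y).
    def rk4_step(f, t, y, h):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Integrate y' = y, y(0) = 1 to t = 1; the exact solution is e = 2.71828...
    y, t, h = 1.0, 0.0, 0.1
    for _ in range(10):
        y = rk4_step(lambda t, y: y, t, y, h)
        t += h
    ```

    Multistep and extrapolation methods, also surveyed in the report, trade extra bookkeeping for fewer function evaluations per step.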

  11. Data mining in bioinformatics using Weka.

    PubMed

    Frank, Eibe; Hall, Mark; Trigg, Len; Holmes, Geoffrey; Witten, Ian H

    2004-10-12

    The Weka machine learning workbench provides a general-purpose environment for automatic classification, regression, clustering, and feature selection: common data mining problems in bioinformatics research. It contains an extensive collection of machine learning algorithms and data pre-processing methods complemented by graphical user interfaces for data exploration and the experimental comparison of different machine learning techniques on the same problem. Weka can process data given in the form of a single relational table. Its main objectives are to (a) assist users in extracting useful information from data and (b) enable them to easily identify a suitable algorithm for generating an accurate predictive model from it. http://www.cs.waikato.ac.nz/ml/weka.

  12. Second harmonic generation: Effects of the multiple reflections of the fundamental and the second harmonic waves on the Maker fringes

    NASA Astrophysics Data System (ADS)

    Tellier, Gildas; Boisrobert, Christian

    2007-11-01

    The Maker fringes technique is commonly used for the determination of nonlinear optical coefficients. In this article, we present a new formulation of Maker fringes in parallel-surface samples, using boundary conditions taking into account the anisotropy of the crystal, the refractive-index dispersion, and the reflections of the fundamental and the second harmonic waves inside the material. Complete expressions for the generated second harmonic intensity are given for birefringent crystals for the case of no pump depletion. A comparison between theory and experimental results is made, showing the accuracy of our theoretical expressions.
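    For context, the textbook Maker-fringe envelope (without the multiple-reflection corrections this paper adds) is a sinc-squared function of the phase mismatch. The sketch below is illustrative only; the Delta-k and thickness values are invented, and the paper's full expressions additionally account for anisotropy, dispersion, and internal reflections.

    ```python
    import math

    # Simplified Maker-fringe envelope: I_2w ~ sinc^2(Delta_k * L / 2),
    # valid in the no-pump-depletion, no-reflection limit.
    def maker_fringe_intensity(delta_k, thickness):
        x = delta_k * thickness / 2.0
        return (math.sin(x) / x) ** 2 if x != 0 else 1.0

    peak = maker_fringe_intensity(0.0, 1.0)          # phase-matched maximum
    null = maker_fringe_intensity(2 * math.pi, 1.0)  # sin(pi) = 0, fringe null
    ```

    Rotating the sample sweeps the effective thickness and hence Delta-k*L, producing the characteristic fringes from which nonlinear coefficients are extracted.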

  13. Modelling nonlinear viscoelastic behaviours of loudspeaker suspensions-like structures

    NASA Astrophysics Data System (ADS)

    Maillou, Balbine; Lotton, Pierrick; Novak, Antonin; Simon, Laurent

    2018-03-01

    Mechanical properties of an electrodynamic loudspeaker are mainly determined by its suspensions (surround and spider) that behave nonlinearly and typically exhibit frequency dependent viscoelastic properties such as creep effect. The paper aims at characterizing the mechanical behaviour of electrodynamic loudspeaker suspensions at low frequencies using nonlinear identification techniques developed in recent years. A Generalized Hammerstein based model can take into account both frequency dependency and nonlinear properties. As shown in the paper, the model generalizes existing nonlinear or viscoelastic models commonly used for loudspeaker modelling. It is further experimentally shown that a possible input-dependent law may play a key role in suspension characterization.

  14. Modeling air concentration over macro roughness conditions by Artificial Intelligence techniques

    NASA Astrophysics Data System (ADS)

    Roshni, T.; Pagliara, S.

    2018-05-01

    Aeration is improved in rivers by the turbulence created in the flow over macro and intermediate roughness conditions. Macro and intermediate roughness flow conditions are generated by flows over block ramps or rock chutes. The measurements are taken in uniform flow region. Efficacy of soft computing methods in modeling hydraulic parameters are not common so far. In this study, modeling efficiencies of MPMR model and FFNN model are found for estimating the air concentration over block ramps under macro roughness conditions. The experimental data are used for training and testing phases. Potential capability of MPMR and FFNN model in estimating air concentration are proved through this study.

  15. The Missing Link: Rotational Spectrum and Geometrical Structure of Disilicon Carbide, Si_2C

    NASA Astrophysics Data System (ADS)

    McCarthy, Michael C.; Baraban, Joshua H.; Changala, Bryan; Stanton, John F.; Martin-Drumel, Marie-Aline; Thorwirth, Sven; Reilly, Neil J.; Gottlieb, Carl A.

    2015-06-01

    Disilicon carbide Si_2C is one of the most fascinating small molecules for both fundamental and applied reasons. Like C_3, it has a shallow bending angle, and may therefore also serve as a classic example of a quasilinear species. Si_2C is also thought to be quite stable. Mass spectrometric studies conclude that it is one of the most common gas-phase fragments in the evaporation of silicon carbide at high temperature. For these same reasons, it may be abundant in certain evolved carbon stars such as IRC+10216. Its electronic spectrum was recently studied by several of us, but its ground-state geometry and rotational spectrum remained unknown until now. Using sensitive microwave techniques and high-level coupled-cluster calculations, Si_2C has been detected in the radio band and is found to be highly abundant. Its more common rare isotopic species have also been observed, either in natural abundance or using isotopically enriched samples, from which a highly precise semi-experimental structure has been derived. This talk will summarize recent work and discuss the prospects for astronomical detection. Now that all four of the Si_mC_n clusters with m+n=3 have been detected experimentally, a rigorous comparison of their structure and chemical bonding can be made.

  16. Computational Fluid Dynamics Based Extraction of Heat Transfer Coefficient in Cryogenic Propellant Tanks

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; West, Jeff

    2015-01-01

    The current reduced-order thermal model for cryogenic propellant tanks is based on flat-plate correlations collected in the 1950s. The use of these correlations suffers from inaccurate geometry representation, inaccurate gravity orientation, an ambiguous length scale, and a lack of detailed validation. The work presented under this task uses the first-principles-based Computational Fluid Dynamics (CFD) technique to compute heat transfer from the tank wall to the cryogenic fluids, and extracts and correlates the equivalent heat transfer coefficient to support a reduced-order thermal model. The CFD tool was first validated against available experimental data and commonly used correlations for natural convection along a vertically heated wall. Good agreement between the present predictions and experimental data has been found for flows in the laminar as well as turbulent regimes. The convective heat transfer between the tank wall and cryogenic propellant, and that between the tank wall and ullage gas, were then simulated. The results showed that commonly used heat transfer correlations for either a vertical or a horizontal plate overpredict the heat transfer rate for the cryogenic tank, in some cases by as much as one order of magnitude. A characteristic length scale has been defined that can correlate all heat transfer coefficients for different fill levels into a single curve. This curve can be used for reduced-order heat transfer model analysis.
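    An example of the flat-plate correlations the CFD study compares against is the standard laminar vertical-plate form Nu = 0.59 Ra^(1/4). The sketch below computes a heat transfer coefficient from it; the Rayleigh number, fluid conductivity, and length are illustrative values, not the study's tank conditions.

    ```python
    # Textbook laminar natural-convection correlation for a vertical flat
    # plate: Nu = 0.59 * Ra**0.25, h = Nu * k / L.
    def h_vertical_plate(rayleigh, k_fluid, length):
        nusselt = 0.59 * rayleigh ** 0.25
        return nusselt * k_fluid / length

    # Illustrative inputs: Ra = 1e8 (laminar regime), k and L invented.
    h = h_vertical_plate(rayleigh=1.0e8, k_fluid=0.026, length=1.0)
    ```

    The study's point is that such correlations, applied to curved tank walls with an ambiguous length scale, can overpredict the true heat transfer rate by up to an order of magnitude.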

  17. An in fiber experimental approach to photonic quantum digital signatures that does not require quantum memory

    NASA Astrophysics Data System (ADS)

    Collins, Robert J.; Donaldson, Ross J.; Dunjko, Vedran; Wallden, Petros; Clarke, Patrick J.; Andersson, Erika; Jeffers, John; Buller, Gerald S.

    2014-10-01

    Classical digital signatures are commonly used in e-mail, electronic financial transactions and other forms of electronic communications to ensure that messages have not been tampered with in transit, and that messages are transferable. The security of commonly used classical digital signature schemes relies on the computational difficulty of inverting certain mathematical functions. However, at present, there are no such one-way functions which have been proven to be hard to invert. With enough computational resources certain implementations of classical public key cryptosystems can be, and have been, broken with current technology. It is nevertheless possible to construct information-theoretically secure signature schemes, including quantum digital signature schemes. Quantum signature schemes can be made information-theoretically secure based on the laws of quantum mechanics, while comparable classical protocols require additional resources such as secret communication and a trusted authority. Early demonstrations of quantum digital signatures required quantum memory, rendering them impractical at present. Our present implementation is based on a protocol that does not require quantum memory. It also uses the new technique of unambiguous quantum state elimination. Here we report experimental results for a test-bed system, recorded with a variety of different operating parameters, along with a discussion of aspects of the system security.

  18. A fresh approach to forecasting in astroparticle physics and dark matter searches

    NASA Astrophysics Data System (ADS)

    Edwards, Thomas D. P.; Weniger, Christoph

    2018-02-01

    We present a toolbox of new techniques and concepts for the efficient forecasting of experimental sensitivities. These are applicable to a large range of scenarios in (astro-)particle physics, and based on the Fisher information formalism. Fisher information provides an answer to the question 'what is the maximum extractable information from a given observation?'. It is a common tool for the forecasting of experimental sensitivities in many branches of science, but rarely used in astroparticle physics or searches for particle dark matter. After briefly reviewing the Fisher information matrix of general Poisson likelihoods, we propose very compact expressions for estimating expected exclusion and discovery limits ('equivalent counts method'). We demonstrate by comparison with Monte Carlo results that they remain surprisingly accurate even deep in the Poisson regime. We show how correlated background systematics can be efficiently accounted for by a treatment based on Gaussian random fields. Finally, we introduce the novel concept of Fisher information flux. It can be thought of as a generalization of the commonly used signal-to-noise ratio, while accounting for the non-local properties and saturation effects of background and instrumental uncertainties. It is a powerful and flexible tool ready to be used as core concept for informed strategy development in astroparticle physics and searches for particle dark matter.
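    The Fisher-information forecasting described above reduces, for a simple Poisson counting experiment, to a few lines. This is a generic textbook sketch in the spirit of the paper, with invented bin counts: for bin-wise expectations mu_i(theta) = theta*s_i + b_i, the Fisher information on theta at theta = 0 is I = sum_i s_i^2/b_i, and the projected 1-sigma sensitivity is 1/sqrt(I).

    ```python
    import math

    # Fisher-information forecast for a Poisson counting experiment with
    # signal templates s_i on top of known backgrounds b_i.
    def fisher_sensitivity(signal, background):
        info = sum(s * s / b for s, b in zip(signal, background))
        return 1.0 / math.sqrt(info)

    # Two illustrative bins: each contributes s^2/b = 1, so I = 2.
    sigma = fisher_sensitivity(signal=[4.0, 3.0], background=[16.0, 9.0])
    ```

    The paper's "equivalent counts method" and Fisher information flux build on exactly this quantity, extending it to correlated systematics and spatially varying backgrounds.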

  19. Experimental scrambling and noise reduction applied to the optical encryption of QR codes.

    PubMed

    Barrera, John Fredy; Vélez, Alejandro; Torroba, Roberto

    2014-08-25

    In this contribution, we implement two techniques to reinforce optical encryption, which we restrict in particular to QR codes but which could be applied in a general encoding situation. To our knowledge, we present the first experimental positional optical scrambling merged with an optical encryption procedure. The inclusion of an experimental scrambling technique in an optical encryption protocol, in particular one dealing with a QR code "container", adds more protection to the encoding proposal. Additionally, a nonlinear normalization technique is applied to reduce the noise over the recovered images, besides increasing the security against attacks. The opto-digital techniques employ an interferometric arrangement and a joint transform correlator encrypting architecture. The experimental results demonstrate the capability of the methods to accomplish the task.

  20. Testing and design life analysis of polyurea liner materials

    NASA Astrophysics Data System (ADS)

    Ghasemi Motlagh, Siavash

    Water pipes, as part of an underground infrastructure system, play a key role in maintaining the quality of life, health, and well-being of humankind. As these potable water pipes reach the end of their useful life, they create high maintenance costs, loss of flow capacity, decreased water quality, and increased dissatisfaction. Several pipeline renewal techniques are available for different applications, among which linings are most commonly used for the renewal of water pipes. Polyurea is a lining material applied to the interior surface of the deteriorated host pipe using a spray-on technique. It is applied to structurally enhance the host pipe and to provide a barrier coating against further corrosion or deterioration. The purpose of this study was to establish a relationship between stress, strain, and time. The results obtained from these tests were used to predict the strength of the polyurea material over its planned 50-year design life. In addition, based on 10,000 hours of experimental data, curve-fitting and Findley power-law models were employed to predict the long-term behavior of the material. Experimental results indicated that the tested polyurea material offers a good balance of strength and stiffness and can be utilized in structural enhancement applications for potable water pipes.
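    The Findley power law mentioned above models total creep strain as e(t) = e0 + m*t^n, which makes 50-year extrapolation from 10,000-hour data a one-line evaluation. The coefficients below are invented placeholders, not the fitted polyurea values.

    ```python
    # Findley power-law creep model: total strain = instantaneous elastic
    # strain e0 plus a time-dependent term m * t**n (t in hours).
    def findley_strain(t_hours, e0, m, n):
        return e0 + m * t_hours ** n

    # Extrapolate to a 50-year design life with hypothetical constants.
    fifty_years = 50 * 365 * 24  # hours
    strain = findley_strain(fifty_years, e0=0.01, m=0.001, n=0.2)
    ```

    In a real design-life analysis, e0, m, and n would be fitted to the measured creep curves before extrapolating, and the predicted strain compared against an allowable limit.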

  1. Osteosarcoma Overview.

    PubMed

    Lindsey, Brock A; Markel, Justin E; Kleinerman, Eugenie S

    2017-06-01

    Osteosarcoma (OS) is the most common primary malignancy of bone, and patients with metastatic disease or recurrences continue to have very poor outcomes. Unfortunately, little prognostic improvement has been generated from the last 20 years of research, and a new perspective is warranted. OS is extremely heterogeneous in both its origins and manifestations. Although multiple associations have been made between the development of osteosarcoma and race, gender, age, various genomic alterations, and exposure situations, among others, the etiology remains unclear and controversial. Noninvasive diagnostic methods include serum markers like alkaline phosphatase and a growing variety of imaging techniques including X-ray, computed tomography, magnetic resonance imaging, and positron emission tomography, as well as combinations thereof. Still, biopsy and microscopic examination are required to confirm the diagnosis and carry additional prognostic implications such as subtype classification and histological response to neoadjuvant chemotherapy. The current standard of care combines surgical and chemotherapeutic techniques, with a multitude of experimental biologics and small molecules currently in development and some in clinical trial phases. In this review, in addition to summarizing the current understanding of OS etiology, diagnostic methods, and the current standard of care, our group describes various experimental therapeutics and provides evidence to encourage a potential paradigm shift toward the introduction of immunomodulation, which may offer a more comprehensive approach to battling cancer pleomorphism.

  2. Generalized Procedure for Improved Accuracy of Thermal Contact Resistance Measurements for Materials With Arbitrary Temperature-Dependent Thermal Conductivity

    DOE PAGES

    Sayer, Robert A.

    2014-06-26

    Thermal contact resistance (TCR) is most commonly measured using one-dimensional steady-state calorimetric techniques. In the experimental methods utilized, a temperature gradient is applied across two contacting beams and the temperature drop at the interface is inferred from the temperature profiles of the rods, measured at discrete points. During data analysis, the thermal conductivity of the beams is typically taken to be an average value over the temperature range imposed during the experiment. A generalized theory is presented that accounts for temperature-dependent changes in thermal conductivity. The procedure presented enables accurate measurement of TCR for contacting materials whose thermal conductivity is any arbitrary function of temperature. For example, it is shown that the standard technique yields TCR values that are about 15% below the actual value for two specific examples of copper and silicon contacts. Conversely, the generalized technique predicts TCR values that are within 1% of the actual value. The method is exact when thermal conductivity is known exactly and no other errors are introduced to the system.
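    The baseline one-dimensional steady-state extraction can be sketched as follows: fit a linear temperature profile in each rod, extrapolate both to the interface, and take TCR = Delta-T / q. Constant conductivity is assumed in this toy version (all temperatures, positions, and the heat flux are invented); the paper's contribution is precisely the correction needed when k depends on temperature.

    ```python
    # 1D steady-state TCR extraction under constant-conductivity assumptions.
    def linear_fit(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) /
                 sum((xi - mx) ** 2 for xi in x))
        return slope, my - slope * mx

    def tcr(x_hot, t_hot, x_cold, t_cold, x_iface, q):
        sh, ih = linear_fit(x_hot, t_hot)    # hot-side rod profile
        sc, ic = linear_fit(x_cold, t_cold)  # cold-side rod profile
        dT = (sh * x_iface + ih) - (sc * x_iface + ic)
        return dT / q                        # K per (W/m^2)

    # Toy data: both rods drop 10 K/m, with a 2 K jump at x = 0.1 m.
    r = tcr([0.0, 0.05], [350.0, 349.5], [0.15, 0.2], [346.5, 346.0],
            x_iface=0.1, q=1000.0)
    ```

    In the generalized procedure, the linear extrapolation is replaced by profiles consistent with k(T), which shifts the inferred interface temperatures and hence the TCR.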

  3. Experimental study of the stability and flow characteristics of floating liquid columns confined between rotating disks

    NASA Technical Reports Server (NTRS)

    Fowle, A. A.; Soto, L.; Strong, P. F.; Wang, C. A.

    1980-01-01

    A low Bond number simulation technique was used to establish the stability limits of cylindrical and conical floating liquid columns under conditions of isorotation, equal counter-rotation, rotation of one end only, and parallel axis offset. The conditions for resonance in cylindrical liquid columns perturbed by axial, sinusoidal vibration of one end face are also reported. All tests were carried out under isothermal conditions with water and silicone fluids of various viscosities. A technique for the quantitative measurement of stream velocity within a floating, isothermal liquid column confined between rotatable disks was developed. In the measurement, small light-scattering particles were used as streamline markers in the common arrangement, but the capability of the measurement was extended by use of a stereo-pair photography system to provide quantitative data. Results of velocity measurements made under a few selected conditions, which established the precision and accuracy of the technique, are given. The general qualitative features of the isothermal flow patterns under various conditions of end face rotation, obtained from both still photography and motion pictures, are presented.

  4. Detection, Localization, and Tracking of Unauthorized UAS and Jammers

    NASA Technical Reports Server (NTRS)

    Guvenc, Ismail; Ozdemir, Ozgur; Yapici, Yavuz; Mehrpouyan, Hani; Matolak, David

    2017-01-01

    Small unmanned aircraft systems (UASs) are expected to take major roles in future smart cities, for example, by delivering goods and merchandise, potentially serving as mobile hot spots for broadband wireless access, and maintaining surveillance and security. Although they can be used for the betterment of society, they can also be used by malicious entities to conduct physical and cyber attacks on infrastructure, private/public property, and people. Even for legitimate use cases of small UASs, air traffic management (ATM) for UASs becomes critically important for maintaining safe and collision-free operation. Therefore, various ways to detect, track, and interdict potentially unauthorized drones carry critical importance for surveillance and ATM applications. In this paper, we review techniques that rely on ambient radio frequency signals (emitted from UASs), radars, acoustic sensors, and computer vision techniques for detection of malicious UASs. We present some early experimental and simulation results on radar-based range estimation of UASs and receding-horizon tracking of UASs. Subsequently, we give an overview of common techniques that are considered for interdiction of UASs.

  5. Comparison of different estimation techniques for biomass concentration in large scale yeast fermentation.

    PubMed

    Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U

    2011-04-01

    In this study, five previously developed state estimation methods are examined and compared for estimation of biomass concentrations in a production-scale fed-batch bioprocess. These methods are: (i) estimation based on a kinetic model of overflow metabolism; (ii) estimation based on a metabolic black-box model; (iii) observer-based estimation; (iv) estimation based on an artificial neural network; and (v) estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, the number of primary measurements required, and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages, although it requires more measurements than the other methods. However, the required extra measurements are based on instruments commonly employed in an industrial environment. This method is used for developing model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
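    A crude black-box-style estimate of the kind compared above can be sketched as follows. This is an assumed illustration, not the paper's model: with a hypothetical constant yield of biomass on oxygen, biomass formed is the yield times the cumulative oxygen uptake (trapezoid-integrated oxygen uptake rate), added to the initial biomass.

    ```python
    # Black-box biomass estimate from the oxygen uptake rate (OUR), assuming
    # a constant yield coefficient y_xo (g biomass per g O2). All numbers
    # below are invented toy data.
    def biomass_from_our(times_h, our_g_per_h, y_xo, x0):
        uptake = sum((our_g_per_h[i] + our_g_per_h[i + 1]) / 2 *
                     (times_h[i + 1] - times_h[i])
                     for i in range(len(times_h) - 1))
        return x0 + y_xo * uptake

    # Constant OUR of 10 g/h over 2 h, starting from 5 g biomass.
    x = biomass_from_our([0, 1, 2], [10.0, 10.0, 10.0], y_xo=1.0, x0=5.0)
    ```

    The paper's metabolic black-box method uses several such elemental and energy balances simultaneously, which is why it needs more primary measurements than the alternatives.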

  6. Virtual environment assessment for laser-based vision surface profiling

    NASA Astrophysics Data System (ADS)

    ElSoussi, Adnane; Al Alami, Abed ElRahman; Abu-Nabah, Bassam A.

    2015-03-01

    Oil and gas businesses have been raising the demand from original equipment manufacturers (OEMs) to implement a reliable metrology method for assessing surface profiles of welds before and after grinding. This mandates a deviation from the commonly used surface measurement gauges, which are not only operator dependent but also limited to discrete measurements along the weld. Owing to their potential accuracy and speed, laser-based vision surface profiling systems have been progressively gaining adoption as part of manufacturing quality control. This effort presents a virtual environment that lends itself to developing and evaluating existing laser vision sensor (LVS) calibration and measurement techniques. A combination of two known calibration techniques is implemented to deliver a calibrated LVS system. System calibration is implemented virtually and experimentally to scan simulated and 3D-printed features of known profiles, respectively. Scanned data are inverted and compared with the input profiles to validate the virtual environment's capability for LVS surface profiling and to preliminarily assess the measurement technique for weld-profiling applications. Moreover, this effort brings 3D scanning capability a step closer to robust quality control applications in a manufacturing environment.

  7. The human motor neuron pools receive a dominant slow‐varying common synaptic input

    PubMed Central

    Negro, Francesco; Yavuz, Utku Şükrü

    2016-01-01

    Key points: Motor neurons in a pool receive both common and independent synaptic inputs, although the proportion and role of their common synaptic input is debated. Classic correlation techniques between motor unit spike trains do not measure the absolute proportion of common input and have limitations as a result of the non-linearity of motor neurons. We propose a method that for the first time allows an accurate quantification of the absolute proportion of low-frequency common synaptic input (<5 Hz) to motor neurons in humans. We applied the proposed method to three human muscles and determined experimentally that they receive a similar large amount (>60%) of common input, irrespective of their different functional and control properties. These results increase our knowledge about the role of common and independent input to motor neurons in force control. Abstract: Motor neurons receive both common and independent synaptic inputs. This observation is classically based on the presence of a significant correlation between pairs of motor unit spike trains. The functional significance of different relative proportions of common input across muscles, individuals and conditions is still debated. One of the limitations in our understanding of correlated input to motor neurons is that it has not been possible so far to quantify the absolute proportion of common input with respect to the total synaptic input received by the motor neurons. Indeed, correlation measures of pairs of output spike trains only allow for relative comparisons. In the present study, we report for the first time an approach for measuring the proportion of common input in the low-frequency bandwidth (<5 Hz) to a motor neuron pool in humans. This estimate is based on a phenomenological model and the theoretical fitting of the experimental values of coherence between the permutations of groups of motor unit spike trains. We demonstrate the validity of this theoretical estimate with several simulations.
Moreover, we applied this method to three human muscles: the abductor digiti minimi, tibialis anterior and vastus medialis. Despite these muscles having different functional roles and control properties, as confirmed by the results of the present study, we estimate that their motor pools receive a similar and large (>60%) proportion of common low frequency oscillations with respect to their total synaptic input. These results suggest that the central nervous system provides a large amount of common input to motor neuron pools, in a similar way to that for muscles with different functional and control properties. PMID:27151459

  8. Thermal neutron detector based on COTS CMOS imagers and a conversion layer containing Gadolinium

    NASA Astrophysics Data System (ADS)

    Pérez, Martín; Blostein, Juan Jerónimo; Bessia, Fabricio Alcalde; Tartaglione, Aureliano; Sidelnik, Iván; Haro, Miguel Sofo; Suárez, Sergio; Gimenez, Melisa Lucía; Berisso, Mariano Gómez; Lipovetzky, Jose

    2018-06-01

    In this work we introduce a novel low cost position sensitive thermal neutron detection technique, based on a Commercial Off The Shelf (COTS) CMOS image sensor covered with a Gadolinium-containing conversion layer. The feasibility of the neutron detection technique implemented in this work has been experimentally demonstrated. A thermal neutron detection efficiency of 11.3% has been experimentally obtained with a conversion layer of 11.6 μm. It was experimentally verified that the thermal neutron detection efficiency of this technique is independent of the intensity of the incident thermal neutron flux, which was confirmed for conversion layers of different thicknesses. Based on the experimental results, a spatial resolution better than 25 μm is expected. This spatial resolution makes the proposed technique especially useful for neutron beam characterization, neutron beam dosimetry, high resolution neutron imaging, and several neutron scattering techniques.
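A rough back-of-the-envelope sketch of why a thin Gd layer captures a useful fraction of thermal neutrons: the capture probability follows p = 1 - exp(-Σt). The material constants below are approximate literature values for metallic natural Gd, not taken from the paper, and the actual detection efficiency is lower than the capture probability because not every capture yields a conversion electron that reaches the sensor.

```python
import math

N_A = 6.022e23       # Avogadro's number, 1/mol
rho = 7.9            # g/cm^3, metallic Gd (assumed)
A = 157.25           # g/mol, natural Gd
sigma = 49_000e-24   # cm^2, thermal absorption cross section of natural Gd (assumed)

# macroscopic absorption cross section, 1/cm
Sigma = rho / A * N_A * sigma

def capture_probability(thickness_um):
    """Probability that a thermal neutron is captured in a Gd layer."""
    return 1.0 - math.exp(-Sigma * thickness_um * 1e-4)   # um -> cm
```

For an 11.6 μm layer this gives a capture probability far above the reported 11.3% detection efficiency, consistent with most captures not producing a detected conversion electron.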

  9. Experimental and analytical determination of stability parameters for a balloon tethered in a wind

    NASA Technical Reports Server (NTRS)

    Redd, L. T.; Bennett, R. M.; Bland, S. R.

    1973-01-01

    Experimental and analytical techniques for determining stability parameters for a balloon tethered in a steady wind are described. These techniques are applied to a particular 7.64-meter-long balloon, and the results are presented. The stability parameters of interest appear as coefficients in linearized stability equations and are derived from the various forces and moments acting on the balloon. In several cases the results from the experimental and analytical techniques are compared and suggestions are given as to which techniques are the most practical means of determining values for the stability parameters.
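Once stability parameters of this kind are assembled into linearized equations of motion, a standard check is whether every eigenvalue of the state matrix has a negative real part. The matrix below is purely hypothetical, not the balloon model from this record; it only illustrates the eigenvalue criterion.

```python
import numpy as np

# Hypothetical linearized state-space model x' = A x; the system is
# dynamically stable when all eigenvalues of A have negative real parts.
A = np.array([
    [-0.5,  1.0,  0.0],
    [-2.0, -0.3,  0.4],
    [ 0.0, -0.1, -0.8],
])

eigvals = np.linalg.eigvals(A)
stable = bool(np.all(eigvals.real < 0))
```

The same check applies whichever experimental or analytical technique supplied the coefficients.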

  10. A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations

    PubMed Central

    Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel

    2015-01-01

    Image registration for sensor fusion is a valuable technique to acquire 3D and colour information for a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on the extrinsic parameter computation of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for sensor registration with non-common features that avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed within sets of ground control points of 104 images. Since the distance from the control points to the ToF camera is known, the working distance of each element on the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method. PMID:26404315
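A minimal sketch of how a depth-dependent homography lookup table could be used at runtime, assuming the homographies and their working distances were calibrated beforehand; the table entries and interpolation scheme here are hypothetical, not the authors' calibration.

```python
import numpy as np

# Hypothetical Hlut: working distance (m) -> calibrated 3x3 homography
hlut = {
    1.0: np.eye(3),                        # identity at 1.0 m (illustrative)
    2.0: np.array([[1.0, 0.0, 10.0],
                   [0.0, 1.0,  4.0],
                   [0.0, 0.0,  1.0]]),     # small shift at 2.0 m (illustrative)
}

def warp(point_xy, depth_m):
    """Map a ToF pixel into the RGB image using a homography interpolated
    entry-wise between the two nearest calibrated working distances."""
    depths = sorted(hlut)
    d0 = max([d for d in depths if d <= depth_m], default=depths[0])
    d1 = min([d for d in depths if d >= depth_m], default=depths[-1])
    w = 0.0 if d0 == d1 else (depth_m - d0) / (d1 - d0)
    H = (1 - w) * hlut[d0] + w * hlut[d1]
    p = H @ np.array([point_xy[0], point_xy[1], 1.0])
    return p[:2] / p[2]
```

For a pixel at an intermediate depth, the mapping blends the two neighbouring homographies, which is one simple way to realize a depth-indexed table.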

  11. Investigation of a novel common subexpression elimination method for low power and area efficient DCT architecture.

    PubMed

    Siddiqui, M F; Reza, A W; Kanesan, J; Ramiah, H

    2014-01-01

    A wide interest has been observed to find a low power and area efficient hardware design of the discrete cosine transform (DCT) algorithm. This research work proposes a novel Common Subexpression Elimination (CSE) based pipelined architecture for DCT, aimed at reducing the cost metrics of power and area while maintaining high speed and accuracy in DCT applications. The proposed design combines the techniques of Canonical Signed Digit (CSD) representation and CSE to implement a multiplier-less method for fixed constant multiplication of DCT coefficients. Furthermore, symmetry in the DCT coefficient matrix is used with CSE to further decrease the number of arithmetic operations. This architecture needs a single-port memory to feed the inputs instead of multiport memory, which leads to reduction of the hardware cost and area. From the analysis of experimental results and performance comparisons, it is observed that the proposed scheme uses minimum logic, utilizing a mere 340 slices and 22 adders. Moreover, this design meets the real-time constraints of different video/image coders and peak-signal-to-noise-ratio (PSNR) requirements. Furthermore, while maintaining accuracy, the proposed technique improves on recent well-known methods in power reduction, silicon area usage, and maximum operating frequency by 41%, 15%, and 15%, respectively.
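The CSD idea can be sketched independently of the paper: a constant is re-encoded with digits {-1, 0, +1} so that no two nonzero digits are adjacent, minimizing the number of add/subtract operations needed for a multiplier-less constant multiplication. The encoder below is the standard non-adjacent-form algorithm, not the authors' hardware design, and the CSE step across multiple coefficients is not shown.

```python
def csd(n):
    """Canonical signed-digit (non-adjacent form) digits of a positive
    integer, least-significant digit first; each digit is -1, 0 or +1."""
    digits = []
    while n:
        if n & 1:
            d = 2 - (n & 3)   # n % 4 == 1 -> +1, n % 4 == 3 -> -1
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

def value(digits):
    """Reconstruct the integer from its signed-digit representation."""
    return sum(d << i for i, d in enumerate(digits))
```

For example, 7 = 8 - 1 needs only two nonzero digits in CSD, versus three ones in plain binary, so a multiplication by 7 costs one subtraction instead of two additions.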

  12. Investigation of a Novel Common Subexpression Elimination Method for Low Power and Area Efficient DCT Architecture

    PubMed Central

    Siddiqui, M. F.; Reza, A. W.; Kanesan, J.; Ramiah, H.

    2014-01-01

    A wide interest has been observed to find a low power and area efficient hardware design of the discrete cosine transform (DCT) algorithm. This research work proposes a novel Common Subexpression Elimination (CSE) based pipelined architecture for DCT, aimed at reducing the cost metrics of power and area while maintaining high speed and accuracy in DCT applications. The proposed design combines the techniques of Canonical Signed Digit (CSD) representation and CSE to implement a multiplier-less method for fixed constant multiplication of DCT coefficients. Furthermore, symmetry in the DCT coefficient matrix is used with CSE to further decrease the number of arithmetic operations. This architecture needs a single-port memory to feed the inputs instead of multiport memory, which leads to reduction of the hardware cost and area. From the analysis of experimental results and performance comparisons, it is observed that the proposed scheme uses minimum logic, utilizing a mere 340 slices and 22 adders. Moreover, this design meets the real-time constraints of different video/image coders and peak-signal-to-noise-ratio (PSNR) requirements. Furthermore, while maintaining accuracy, the proposed technique improves on recent well-known methods in power reduction, silicon area usage, and maximum operating frequency by 41%, 15%, and 15%, respectively. PMID:25133249

  13. In-situ identification of anti-personnel mines using acoustic resonant spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perry, R L; Roberts, R S

    1999-02-01

    A new technique for identifying buried Anti-Personnel Mines is described, and a set of preliminary experiments designed to assess the feasibility of this technique is presented. Analysis of the experimental results indicates that the technique has potential, but additional work is required to bring the technique to fruition. In addition to the experimental results presented here, a technique used to characterize the sensor employed in the experiments is detailed.

  14. Reducing surgical levels by paraspinal mapping and diffusion tensor imaging techniques in lumbar spinal stenosis.

    PubMed

    Chen, Hua-Biao; Wan, Qi; Xu, Qi-Feng; Chen, Yi; Bai, Bo

    2016-04-25

    Correlating symptoms and physical examination findings with surgical levels based on common imaging results is not reliable. In patients who have no concordance between radiological findings and clinical symptoms, the surgical levels determined by conventional magnetic resonance imaging (MRI) and neurogenic examination (NE) may lead to a more extensive surgery and significant complications. We aimed to determine whether the use of diffusion tensor imaging (DTI) and paraspinal mapping (PM) techniques can further reduce the false positives of conventional MRI, distinguish which of the cauda equina and/or nerve root lesions seen on MRI are clinically relevant, and reduce the number of decompression levels in lumbar spinal stenosis compared with MRI + NE, while maintaining or improving surgical outcomes. We compared the data between patients who underwent MRI + (PM or DTI) and patients who underwent conventional MRI + NE to determine levels of decompression for the treatment of lumbar spinal stenosis. Outcome measures were assessed at 2 weeks, 3 months, 6 months, and 12 months postoperatively. One hundred fourteen patients (59 in the control group, 54 in the experimental group) underwent decompression. The levels of decompression determined by MRI + (PM or DTI) in the experimental group were significantly fewer than those determined by MRI + NE in the control group (p = 0.000). The surgical time, blood loss, and surgical transfusion were significantly less in the experimental group (p = 0.001, p = 0.011, p = 0.001, respectively). There were no differences in improvement of the visual analog scale back and leg pain (VAS-BP, VAS-LP) scores and Oswestry Disability Index (ODI) scores at 2 weeks, 3 months, 6 months, and 12 months after operation between the experimental and control groups. MRI + (PM or DTI) showed clear benefits in determining the decompression levels of lumbar spinal stenosis compared with MRI + NE. In patients with lumbar spinal stenosis, the use of PM and DTI techniques reduces decompression levels and increases the safety and benefits of surgery.

  15. Experimental, theoretical, and device application development of nanoscale focused electron-beam-induced deposition

    NASA Astrophysics Data System (ADS)

    Randolph, Steven Jeffrey

    Electron-beam-induced deposition (EBID) is a highly versatile nanofabrication technique that allows for growth of a variety of materials with nanoscale precision and resolution. While several applications and studies of EBID have been reported and published, there is still a significant lack of understanding of the complex mechanisms involved in the process. Consequently, EBID process control is, in general, limited and certain common experimental results regarding nanofiber growth have yet to be fully explained. Such anomalous results have been addressed in this work both experimentally and by computer simulation. Specifically, a correlation between SiOx nanofiber deposition observations and the phenomenon of electron beam heating (EBH) was shown by comparison of thermal computer models and experimental results. Depending on the beam energy, beam current, and nanostructure geometry, the heat generated can be substantial and may influence the deposition rate. Temperature dependent EBID growth experiments qualitatively verified the results of the EBH model. Additionally, EBID was used to produce surface image layers for maskless, direct-write lithography (MDL). A single layer process used directly written SiOx features as a masking layer for amorphous silicon thin films. A bilayer process implemented a secondary masking layer consisting of standard photoresist into which a pattern, directly written by EBID tungsten, was transferred. The single layer process was found to be extremely sensitive to the etch selectivity of the plasma etch. In the bilayer process, EBID tungsten was written onto photoresist and the pattern transferred by means of oxygen plasma dry development following a brief refractory descum. Conditions were developed to reduce the spatial spread of electrons in the photoresist layer and obtain ~35 nm lines.
Finally, an EBID-based technique for field emitter repair was applied to the Digital Electrostatically focused e-beam Array Lithography (DEAL) parallel electron beam lithography configuration to repair damaged or missing carbon nanofiber cathodes. The I-V response and lithography results from EBID tungsten-based devices were comparable to CNF-based DEAL devices indicating a successful repair technique.

  16. Experimental Validation Techniques for the Heleeos Off-Axis Laser Propagation Model

    DTIC Science & Technology

    2010-03-01

    EXPERIMENTAL VALIDATION TECHNIQUES FOR THE HELEEOS OFF-AXIS LASER PROPAGATION MODEL. Thesis, John Haiducek, 1st Lt, USAF, AFIT/GAP/ENP/10-M07, March 2010. Approved for public release; distribution unlimited. Abstract (truncated): The High Energy Laser End-to-End

  17. Evaluating abdominal oedema during experimental sepsis using an isotope technique.

    PubMed

    Lattuada, Marco; Maripuu, Enn; Segerstad, Carl Hard af; Lundqvist, Hans; Hedenstierna, Göran

    2012-05-01

    Abdominal oedema is common in sepsis. A technique for the study of such oedema may guide the fluid regime of these patients. We modified a double-isotope technique to evaluate abdominal organ oedema and fluid extravasation in 24 healthy or endotoxin-exposed ('septic') piglets. Two different markers were used: red blood cells (RBC) labelled with Technetium-99m ((99m)Tc) and Transferrin labelled with Indium-111 ((111)In). Images were acquired on a dual-head gamma camera. Microscopic evaluation of tissue biopsies was performed to compare data with the isotope technique. No (99m)Tc activity was measured in the plasma fraction in blood sampled after labelling. Similarly, after molecular size gel chromatography, (111)In activity was exclusively found in the high molecular fraction of the plasma. Extravasation of transferrin, indicating the degree of abdominal oedema, was 4.06 times higher in the LPS group compared to the healthy controls (P<0.0001). Abdominal free fluid, studied in 3 animals, had as high (111)In activity as in plasma, but no (99m)Tc activity. Intestinal lymphatic vessel size was higher in LPS (3.7 ± 1.1 μm) compared to control animals (0.6 ± 0.2 μm; P<0.001) and oedema correlated to villus diameter (R² = 0.918) and lymphatic diameter (R² = 0.758). A correlation between a normalized index of oedema formation (NI) and intra-abdominal pressure (IAP) was also found: NI = 0.46*IAP - 3.3 (R² = 0.56). The technique enables almost continuous recording of abdominal oedema formation and may be a valuable tool in experimental research, with the potential to be applied in the clinic. © 2011 The Authors. Clinical Physiology and Functional Imaging © 2011 Scandinavian Society of Clinical Physiology and Nuclear Medicine.
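The reported linear relation between the oedema index and intra-abdominal pressure, NI = 0.46*IAP - 3.3 with its R², is the kind of result that ordinary least squares delivers directly. The sketch below uses entirely synthetic data generated to roughly follow that relation; the values are not from the study.

```python
import numpy as np

# Hypothetical data: intra-abdominal pressure (mmHg) vs. a normalized
# oedema index, invented to loosely follow NI = 0.46*IAP - 3.3.
iap = np.array([8.0, 10.0, 12.0, 14.0, 16.0, 18.0])
ni  = np.array([0.6,  1.0,  2.6,  3.2,  4.3,  4.6])

slope, intercept = np.polyfit(iap, ni, 1)   # ordinary least squares line
pred = slope * iap + intercept
ss_res = np.sum((ni - pred) ** 2)           # residual sum of squares
ss_tot = np.sum((ni - ni.mean()) ** 2)      # total sum of squares
r2 = 1 - ss_res / ss_tot                    # coefficient of determination
```

The coefficient of determination quantifies how much of the index's variance the pressure explains, which is what the study's R² values report.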

  18. Techniques for detection and localization of weak hippocampal and medial frontal sources using beamformers in MEG.

    PubMed

    Mills, Travis; Lalancette, Marc; Moses, Sandra N; Taylor, Margot J; Quraan, Maher A

    2012-07-01

    Magnetoencephalography provides precise information about the temporal dynamics of brain activation and is an ideal tool for investigating rapid cognitive processing. However, in many cognitive paradigms visual stimuli are used, which evoke strong brain responses (typically 40-100 nAm in V1) that may impede the detection of weaker activations of interest. This is particularly a concern when beamformer algorithms are used for source analysis, due to artefacts such as "leakage" of activation from the primary visual sources into other regions. We have previously shown (Quraan et al. 2011) that we can effectively reduce leakage patterns and detect weak hippocampal sources by subtracting the functional images derived from the experimental task and a control task with similar stimulus parameters. In this study we assess the performance of three different subtraction techniques. In the first technique we follow the same post-localization subtraction procedures as in our previous work. In the second and third techniques, we subtract the sensor data obtained from the experimental and control paradigms prior to source localization. Using simulated signals embedded in real data, we show that when beamformers are used, subtraction prior to source localization allows for the detection of weaker sources and higher localization accuracy. The improvement in localization accuracy exceeded 10 mm at low signal-to-noise ratios, and sources down to below 5 nAm were detected. We applied our techniques to empirical data acquired with two different paradigms designed to evoke hippocampal and frontal activations, and demonstrated our ability to detect robust activations in both regions with substantial improvements over image subtraction. 
We conclude that removal of the common-mode dominant sources through data subtraction prior to localization further improves the beamformer's ability to project the n-channel sensor-space data to reveal weak sources of interest and allows more accurate localization.
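The core idea of subtracting control-condition sensor data before beamforming can be shown on a toy example: a dominant "visual" source is phase-locked in both conditions and cancels in the sensor-space difference, leaving the weak source recoverable by a generic LCMV spatial filter, w = C⁻¹l / (lᵀC⁻¹l). Everything below (leadfields, waveforms, noise levels) is synthetic, and this is a textbook beamformer rather than the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_t = 32, 2000
t = np.arange(n_t) / 1000.0

# random unit-norm leadfields for the dominant and the weak source (synthetic)
l_strong = rng.standard_normal(n_ch); l_strong /= np.linalg.norm(l_strong)
l_weak = rng.standard_normal(n_ch);   l_weak /= np.linalg.norm(l_weak)

s_strong = 50.0 * np.sin(2 * np.pi * 10 * t)   # dominant evoked response
s_weak = 2.0 * np.sin(2 * np.pi * 6 * t)       # weak source of interest

task = (np.outer(l_strong, s_strong) + np.outer(l_weak, s_weak)
        + 0.5 * rng.standard_normal((n_ch, n_t)))
ctrl = (np.outer(l_strong, s_strong)           # same evoked dominant source
        + 0.5 * rng.standard_normal((n_ch, n_t)))

diff = task - ctrl                             # common-mode source cancels

def lcmv(data, leadfield):
    # unit-gain minimum-variance spatial filter for one source location
    c = np.cov(data) + 1e-6 * np.eye(n_ch)     # regularized sensor covariance
    ci_l = np.linalg.solve(c, leadfield)
    w = ci_l / (leadfield @ ci_l)
    return w @ data

recovered = lcmv(diff, l_weak)
corr = np.corrcoef(recovered, s_weak)[0, 1]
```

Beamforming the subtracted data recovers the weak waveform with high fidelity, whereas in the raw task data the dominant source would bias the covariance and leak into nearby locations.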

  19. Knowledge discovery and system biology in molecular medicine: an application on neurodegenerative diseases.

    PubMed

    Fattore, Matteo; Arrigo, Patrizio

    2005-01-01

    The possibility of studying an organism in terms of system theory has been proposed in the past, but only the advancement of molecular biology techniques allows us to investigate the dynamical properties of a biological system in a more quantitative and rational way than before. These new techniques can give only a basic-level view of an organism's functionality. The comprehension of its dynamical behaviour depends on the possibility to perform a multiple-level analysis. Functional genomics has stimulated interest in investigating the dynamical behaviour of an organism as a whole. These activities are commonly known as Systems Biology, and their interests range from molecules to organs. One of the more promising applications is 'disease modeling'. The use of experimental models is a common procedure in pharmacological and clinical research; today this approach is supported by 'in silico' predictive methods. This investigation can be improved by a combination of experimental and computational tools. Machine Learning (ML) tools are able to process different heterogeneous data sources; taking into account this peculiarity, they could be fruitfully applied to support a multilevel data processing (molecular, cellular and morphological) that is the prerequisite for the formal model design; these techniques can allow us to extract the knowledge needed for mathematical model development. The aim of our work is the development and implementation of a system that combines ML and dynamical model simulations. The program is addressed to the virtual analysis of the pathways involved in neurodegenerative diseases. These pathologies are multifactorial diseases and the relevance of the different factors has not yet been well elucidated.
This is a very complex task; in order to test the integrative approach, our program has been limited to the analysis of the effects of a specific protein, the Cyclin-dependent kinase 5 (CDK5), which is involved in the induction of neuronal apoptosis. The system has a modular structure centred on a textual knowledge discovery approach. Text mining is the only way to enhance the capability to extract, from multiple data sources, the information required for the dynamical simulator. The user may access the publicly available modules through the following site: http://biocomp.ge.ismac.cnr.it.

  20. A sense of life: computational and experimental investigations with models of biochemical and evolutionary processes.

    PubMed

    Mishra, Bud; Daruwala, Raoul-Sam; Zhou, Yi; Ugel, Nadia; Policriti, Alberto; Antoniotti, Marco; Paxia, Salvatore; Rejali, Marc; Rudra, Archisman; Cherepinsky, Vera; Silver, Naomi; Casey, William; Piazza, Carla; Simeoni, Marta; Barbano, Paolo; Spivak, Marina; Feng, Jiawu; Gill, Ofer; Venkatesh, Mysore; Cheng, Fang; Sun, Bing; Ioniata, Iuliana; Anantharaman, Thomas; Hubbard, E Jane Albert; Pnueli, Amir; Harel, David; Chandru, Vijay; Hariharan, Ramesh; Wigler, Michael; Park, Frank; Lin, Shih-Chieh; Lazebnik, Yuri; Winkler, Franz; Cantor, Charles R; Carbone, Alessandra; Gromov, Mikhael

    2003-01-01

    We collaborate in a research program aimed at creating a rigorous framework, experimental infrastructure, and computational environment for understanding, experimenting with, manipulating, and modifying a diverse set of fundamental biological processes at multiple scales and spatio-temporal modes. The novelty of our research is based on an approach that (i) requires coevolution of experimental science and theoretical techniques and (ii) exploits a certain universality in biology guided by a parsimonious model of evolutionary mechanisms operating at the genomic level and manifesting at the proteomic, transcriptomic, phylogenic, and other higher levels. Our current program in "systems biology" endeavors to marry large-scale biological experiments with the tools to ponder and reason about large, complex, and subtle natural systems. To achieve this ambitious goal, ideas and concepts are combined from many different fields: biological experimentation, applied mathematical modeling, computational reasoning schemes, and large-scale numerical and symbolic simulations. From a biological viewpoint, the basic issues are many: (i) understanding common and shared structural motifs among biological processes; (ii) modeling biological noise due to interactions among a small number of key molecules or loss of synchrony; (iii) explaining the robustness of these systems in spite of such noise; and (iv) cataloging multistatic behavior and adaptation exhibited by many biological processes.

  1. eSBMTools 1.0: enhanced native structure-based modeling tools.

    PubMed

    Lutz, Benjamin; Sinner, Claude; Heuermann, Geertje; Verma, Abhinav; Schug, Alexander

    2013-11-01

    Molecular dynamics simulations provide detailed insights into the structure and function of biomolecular systems. Thus, they complement experimental measurements by giving access to experimentally inaccessible regimes. Among the different molecular dynamics techniques, native structure-based models (SBMs) are based on energy landscape theory and the principle of minimal frustration. Typically used in protein and RNA folding simulations, they coarse-grain the biomolecular system and/or simplify the Hamiltonian resulting in modest computational requirements while achieving high agreement with experimental data. eSBMTools streamlines running and evaluating SBM in a comprehensive package and offers high flexibility in adding experimental- or bioinformatics-derived restraints. We present a software package that allows setting up, modifying and evaluating SBM for both RNA and proteins. The implemented workflows include predicting protein complexes based on bioinformatics-derived inter-protein contact information, a standardized setup of protein folding simulations based on the common PDB format, calculating reaction coordinates and evaluating the simulation by free-energy calculations with weighted histogram analysis method or by phi-values. The modules interface with the molecular dynamics simulation program GROMACS. The package is open source and written in architecture-independent Python2. http://sourceforge.net/projects/esbmtools/. alexander.schug@kit.edu. Supplementary data are available at Bioinformatics online.
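The first step of any native structure-based model is deriving the native contact map that defines the funneled energy landscape. The sketch below computes contacts from coarse-grained (Cα-like) coordinates with a simple distance cutoff; the coordinates are a synthetic random chain, and eSBMTools itself reads real PDB files and supports more elaborate contact definitions.

```python
import numpy as np

rng = np.random.default_rng(2)
# fake 20-residue chain: a 3D random walk standing in for C-alpha coordinates
coords = np.cumsum(rng.normal(scale=2.0, size=(20, 3)), axis=0)

def native_contacts(xyz, cutoff=8.0, min_separation=3):
    """Residue pairs closer than `cutoff` (Angstrom-like units) and at least
    `min_separation` apart along the chain count as native contacts."""
    d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
    i, j = np.triu_indices(len(xyz), k=min_separation)
    return [(a, b) for a, b in zip(i, j) if d[a, b] < cutoff]

contacts = native_contacts(coords)
```

In an SBM Hamiltonian each such pair would then receive an attractive well at its native distance, which is what makes the native state the global energy minimum.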

  2. Experimental investigation on the characteristics of supersonic fuel spray and configurations of induced shock waves.

    PubMed

    Wang, Yong; Yu, Yu-Song; Li, Guo-Xiu; Jia, Tao-Ming

    2017-01-05

    The macro characteristics and configurations of induced shock waves of supersonic sprays are investigated by experimental methods. Visualization study of the spray shape is carried out with a high-speed camera. The macro characteristics, including spray tip penetration, velocity of the spray tip and spray angle, are analyzed. The configurations of shock waves are investigated by the Schlieren technique. For supersonic sprays, the concept of the spray front angle is presented, and the effect of the spray Mach number on the spray front angle is investigated. The results show that the shape of the spray tip is similar to a blunt body when the fuel spray is in the transonic region. If the spray enters the supersonic region, oblique shock waves are induced instead of a normal shock wave. As the spray velocity increases, the spray front angle and the shock wave angle increase. The tip region of the supersonic fuel spray commonly forms a cone. Mean droplet diameter of the fuel spray is measured using Malvern's Spraytec. The mean droplet diameter results are then compared with three popular empirical models (Hiroyasu's, Varde's and Merrigton's models). It is found that Merrigton's model shows a relatively good correlation with the experimental results. Finally, the exponent of injection velocity in Merrigton's model is fitted with the experimental results.

  3. Experimental investigation on the characteristics of supersonic fuel spray and configurations of induced shock waves

    PubMed Central

    Wang, Yong; Yu, Yu-song; Li, Guo-xiu; Jia, Tao-ming

    2017-01-01

    The macro characteristics and configurations of induced shock waves of supersonic sprays are investigated by experimental methods. Visualization study of the spray shape is carried out with a high-speed camera. The macro characteristics, including spray tip penetration, velocity of the spray tip and spray angle, are analyzed. The configurations of shock waves are investigated by the Schlieren technique. For supersonic sprays, the concept of the spray front angle is presented, and the effect of the spray Mach number on the spray front angle is investigated. The results show that the shape of the spray tip is similar to a blunt body when the fuel spray is in the transonic region. If the spray enters the supersonic region, oblique shock waves are induced instead of a normal shock wave. As the spray velocity increases, the spray front angle and the shock wave angle increase. The tip region of the supersonic fuel spray commonly forms a cone. Mean droplet diameter of the fuel spray is measured using Malvern's Spraytec. The mean droplet diameter results are then compared with three popular empirical models (Hiroyasu's, Varde's and Merrigton's models). It is found that Merrigton's model shows a relatively good correlation with the experimental results. Finally, the exponent of injection velocity in Merrigton's model is fitted with the experimental results. PMID:28054555
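Fitting the injection-velocity exponent of a Merrigton-type empirical model, SMD = C·Uⁿ, reduces to a straight-line fit in log-log space. The data below are synthetic (generated with n = -1 plus noise) purely to demonstrate the procedure; they are not the paper's measurements, and the real Merrigton correlation involves further parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
u = np.array([200.0, 300.0, 400.0, 500.0, 600.0])   # injection velocity, m/s
# synthetic mean droplet diameters following a power law with noise
smd = 5000.0 * u ** -1.0 * np.exp(0.02 * rng.standard_normal(u.size))

# least squares in log space: log(SMD) = n*log(U) + log(C)
n_fit, log_c = np.polyfit(np.log(u), np.log(smd), 1)
```

The slope of the log-log fit is the velocity exponent; with clean data it recovers the generating value closely, which mirrors how the abstract's final fitting step works.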

  4. In Quest of the Alanine R3 Radical: Multivariate EPR Spectral Analyses of X-Irradiated Alanine in the Solid State.

    PubMed

    Jåstad, Eirik O; Torheim, Turid; Villeneuve, Kathleen M; Kvaal, Knut; Hole, Eli O; Sagstuen, Einar; Malinen, Eirik; Futsaether, Cecilia M

    2017-09-28

    The amino acid l-α-alanine is the most commonly used material for solid-state electron paramagnetic resonance (EPR) dosimetry, due to the formation of highly stable radicals upon irradiation, with yields proportional to the radiation dose. Two major alanine radical components designated R1 and R2 have previously been uniquely characterized from EPR and electron-nuclear double resonance (ENDOR) studies as well as from quantum chemical calculations. There is also convincing experimental evidence of a third minor radical component R3, and a tentative radical structure has been suggested, even though no well-defined spectral signature has been observed experimentally. In the present study, temperature dependent EPR spectra of X-ray irradiated polycrystalline alanine were analyzed using five multivariate methods in further attempts to understand the composite nature of the alanine dosimeter EPR spectrum. Principal component analysis (PCA), maximum likelihood common factor analysis (MLCFA), independent component analysis (ICA), self-modeling mixture analysis (SMA), and multivariate curve resolution (MCR) were used to extract pure radical spectra and their fractional contributions from the experimental EPR spectra. All methods yielded spectral estimates resembling the established R1 spectrum. Furthermore, SMA and MCR consistently predicted both the established R2 spectrum and the shape of the R3 spectrum. The predicted shape of the R3 spectrum corresponded well with the proposed tentative spectrum derived from spectrum simulations. Thus, results from two independent multivariate data analysis techniques strongly support the previous evidence that three radicals are indeed present in irradiated alanine samples.
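The multivariate premise of this record, that spectra recorded under varying conditions are mixtures of a few pure component spectra, can be demonstrated with PCA/SVD on synthetic data: a noiseless mixture of three components has numerical rank three. The component line shapes below are simple Gaussian derivatives standing in for the R1/R2/R3 spectra, and the fractions are invented; none of it reproduces the real alanine data.

```python
import numpy as np

field = np.linspace(-1, 1, 400)          # magnetic-field-like axis

def deriv_gauss(center, width):
    # first-derivative Gaussian, a crude stand-in for an EPR line shape
    g = np.exp(-((field - center) / width) ** 2)
    return np.gradient(g, field)

components = np.stack([deriv_gauss(-0.3, 0.15),
                       deriv_gauss(0.0, 0.25),
                       deriv_gauss(0.4, 0.10)])      # "R1", "R2", "R3"

# invented fractional contributions at five "temperatures" (rows sum to 1)
fractions = np.array([[0.70, 0.25, 0.05],
                      [0.55, 0.35, 0.10],
                      [0.45, 0.35, 0.20],
                      [0.30, 0.48, 0.22],
                      [0.20, 0.50, 0.30]])

spectra = fractions @ components                     # noiseless mixed spectra
s = np.linalg.svd(spectra, compute_uv=False)         # singular values
```

Exactly three singular values stand above numerical noise, which is the rank signature that methods such as PCA, SMA and MCR exploit before resolving the pure spectra and fractions.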

  5. Synthesis of a mixed-valent tin nitride and considerations of its possible crystal structures

    DOE PAGES

    Caskey, Christopher M.; Holder, Aaron; Shulda, Sarah; ...

    2016-04-12

    Recent advances in theoretical structure prediction methods and high-throughput computational techniques are revolutionizing experimental discovery of the thermodynamically stable inorganic materials. Metastable materials represent a new frontier for these studies, since even simple binary non-ground state compounds of common elements may be awaiting discovery. However, there are significant research challenges related to non-equilibrium thin film synthesis and crystal structure predictions, such as small strained crystals in the experimental samples and energy minimization based theoretical algorithms. Here, we report on experimental synthesis and characterization, as well as theoretical first-principles calculations of a previously unreported mixed-valent binary tin nitride. Thin film experiments indicate that this novel material is N-deficient SnN with tin in the mixed ii/iv valence state and a small low-symmetry unit cell. Theoretical calculations suggest that the most likely crystal structure has the space group 2 (SG2) related to the distorted delafossite (SG166), which is nearly 0.1 eV/atom above the ground state SnN polymorph. Furthermore, this observation is rationalized by the structural similarity of the SnN distorted delafossite to the chemically related Sn3N4 spinel compound, which provides a fresh scientific insight into the reasons for growth of polymorphs of metastable materials. In addition to reporting on the discovery of the simple binary SnN compound, this paper illustrates a possible way of combining a wide range of advanced characterization techniques with the first-principle property calculation methods, to elucidate the most likely crystal structure of the previously unreported metastable materials.

  6. Parameterization and prediction of nanoparticle transport in porous media: A reanalysis using artificial neural network

    NASA Astrophysics Data System (ADS)

    Babakhani, Peyman; Bridge, Jonathan; Doong, Ruey-an; Phenrat, Tanapon

    2017-06-01

    The continuing rapid expansion of industrial and consumer processes based on nanoparticles (NP) necessitates a robust model for delineating their fate and transport in groundwater. An ability to reliably specify the full parameter set for prediction of NP transport using continuum models is crucial. In this paper we report the reanalysis of a data set of 493 published column experiment outcomes together with their continuum modeling results. Experimental properties were parameterized into 20 factors which are commonly available. They were then used to predict five key continuum model parameters as well as the effluent concentration via artificial neural network (ANN)-based correlations. The Partial Derivatives (PaD) technique and Monte Carlo method were used for the analysis of sensitivities and model-produced uncertainties, respectively. The outcomes shed light on several controversial relationships between the parameters, e.g., it was revealed that the trend of K_att with average pore water velocity was positive. The resulting correlations, despite being developed based on a "black-box" technique (ANN), were able to explain the effects of theoretical parameters such as critical deposition concentration (CDC), even though these parameters were not explicitly considered in the model. Porous media heterogeneity was considered as a parameter for the first time and showed sensitivities higher than those of dispersivity. The model performance was validated well against subsets of the experimental data and was compared with current models. The robustness of the correlation matrices was not completely satisfactory, since they failed to predict the experimental breakthrough curves (BTCs) at extreme values of ionic strengths.
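The Partial Derivatives (PaD) sensitivity technique mentioned here computes the analytic derivative of a trained network's output with respect to each input. The sketch below applies it to a tiny one-hidden-layer tanh network with fixed, hypothetical weights (no training shown) and cross-checks the analytic gradient against finite differences; it illustrates the chain-rule computation, not the study's actual 20-factor model.

```python
import numpy as np

# Hypothetical weights of a 3-input, 2-hidden-unit, 1-output tanh network
w1 = np.array([[0.5, -1.0, 0.3],
               [0.8,  0.2, -0.6]])       # (hidden, inputs)
b1 = np.array([0.1, -0.2])
w2 = np.array([1.5, -0.7])
b2 = 0.05

def forward(x):
    h = np.tanh(w1 @ x + b1)
    return w2 @ h + b2

def pad_sensitivity(x):
    # PaD: dy/dx_i = sum_j w2_j * (1 - h_j^2) * w1_ji  (tanh' = 1 - tanh^2)
    h = np.tanh(w1 @ x + b1)
    return (w2 * (1 - h ** 2)) @ w1

x0 = np.array([0.2, -0.1, 0.4])
grad = pad_sensitivity(x0)

# numerical check by central finite differences
eps = 1e-6
num = np.array([(forward(x0 + eps * e) - forward(x0 - eps * e)) / (2 * eps)
                for e in np.eye(3)])
```

Ranking inputs by the magnitude of these derivatives (averaged over the data set) is how PaD attributes sensitivity to each of the parameterized experimental factors.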

  7. Synthesis of a mixed-valent tin nitride and considerations of its possible crystal structures

    NASA Astrophysics Data System (ADS)

    Caskey, Christopher M.; Holder, Aaron; Shulda, Sarah; Christensen, Steven T.; Diercks, David; Schwartz, Craig P.; Biagioni, David; Nordlund, Dennis; Kukliansky, Alon; Natan, Amir; Prendergast, David; Orvananos, Bernardo; Sun, Wenhao; Zhang, Xiuwen; Ceder, Gerbrand; Ginley, David S.; Tumas, William; Perkins, John D.; Stevanovic, Vladan; Pylypenko, Svitlana; Lany, Stephan; Richards, Ryan M.; Zakutayev, Andriy

    2016-04-01

    Recent advances in theoretical structure prediction methods and high-throughput computational techniques are revolutionizing experimental discovery of the thermodynamically stable inorganic materials. Metastable materials represent a new frontier for these studies, since even simple binary non-ground state compounds of common elements may be awaiting discovery. However, there are significant research challenges related to non-equilibrium thin film synthesis and crystal structure predictions, such as small strained crystals in the experimental samples and energy minimization based theoretical algorithms. Here, we report on experimental synthesis and characterization, as well as theoretical first-principles calculations of a previously unreported mixed-valent binary tin nitride. Thin film experiments indicate that this novel material is N-deficient SnN with tin in the mixed ii/iv valence state and a small low-symmetry unit cell. Theoretical calculations suggest that the most likely crystal structure has the space group 2 (SG2) related to the distorted delafossite (SG166), which is nearly 0.1 eV/atom above the ground state SnN polymorph. This observation is rationalized by the structural similarity of the SnN distorted delafossite to the chemically related Sn3N4 spinel compound, which provides a fresh scientific insight into the reasons for growth of polymorphs of metastable materials. In addition to reporting on the discovery of the simple binary SnN compound, this paper illustrates a possible way of combining a wide range of advanced characterization techniques with the first-principle property calculation methods, to elucidate the most likely crystal structure of the previously unreported metastable materials.

  8. Synthesis of a mixed-valent tin nitride and considerations of its possible crystal structures.

    PubMed

    Caskey, Christopher M; Holder, Aaron; Shulda, Sarah; Christensen, Steven T; Diercks, David; Schwartz, Craig P; Biagioni, David; Nordlund, Dennis; Kukliansky, Alon; Natan, Amir; Prendergast, David; Orvananos, Bernardo; Sun, Wenhao; Zhang, Xiuwen; Ceder, Gerbrand; Ginley, David S; Tumas, William; Perkins, John D; Stevanovic, Vladan; Pylypenko, Svitlana; Lany, Stephan; Richards, Ryan M; Zakutayev, Andriy

    2016-04-14

    Recent advances in theoretical structure prediction methods and high-throughput computational techniques are revolutionizing experimental discovery of the thermodynamically stable inorganic materials. Metastable materials represent a new frontier for these studies, since even simple binary non-ground state compounds of common elements may be awaiting discovery. However, there are significant research challenges related to non-equilibrium thin film synthesis and crystal structure predictions, such as small strained crystals in the experimental samples and energy minimization based theoretical algorithms. Here, we report on experimental synthesis and characterization, as well as theoretical first-principles calculations of a previously unreported mixed-valent binary tin nitride. Thin film experiments indicate that this novel material is N-deficient SnN with tin in the mixed ii/iv valence state and a small low-symmetry unit cell. Theoretical calculations suggest that the most likely crystal structure has the space group 2 (SG2) related to the distorted delafossite (SG166), which is nearly 0.1 eV/atom above the ground state SnN polymorph. This observation is rationalized by the structural similarity of the SnN distorted delafossite to the chemically related Sn3N4 spinel compound, which provides a fresh scientific insight into the reasons for growth of polymorphs of metastable materials. In addition to reporting on the discovery of the simple binary SnN compound, this paper illustrates a possible way of combining a wide range of advanced characterization techniques with the first-principle property calculation methods, to elucidate the most likely crystal structure of the previously unreported metastable materials.

  9. Synthesis of a mixed-valent tin nitride and considerations of its possible crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caskey, Christopher M.; Colorado School of Mines, Golden, Colorado 80401; Larix Chemical Science, Golden, Colorado 80401

    2016-04-14

Recent advances in theoretical structure prediction methods and high-throughput computational techniques are revolutionizing experimental discovery of the thermodynamically stable inorganic materials. Metastable materials represent a new frontier for these studies, since even simple binary non-ground state compounds of common elements may be awaiting discovery. However, there are significant research challenges related to non-equilibrium thin film synthesis and crystal structure predictions, such as small strained crystals in the experimental samples and energy minimization based theoretical algorithms. Here, we report on experimental synthesis and characterization, as well as theoretical first-principles calculations of a previously unreported mixed-valent binary tin nitride. Thin film experiments indicate that this novel material is N-deficient SnN with tin in the mixed II/IV valence state and a small low-symmetry unit cell. Theoretical calculations suggest that the most likely crystal structure has the space group 2 (SG2) related to the distorted delafossite (SG166), which is nearly 0.1 eV/atom above the ground state SnN polymorph. This observation is rationalized by the structural similarity of the SnN distorted delafossite to the chemically related Sn3N4 spinel compound, which provides a fresh scientific insight into the reasons for growth of polymorphs of metastable materials. In addition to reporting on the discovery of the simple binary SnN compound, this paper illustrates a possible way of combining a wide range of advanced characterization techniques with the first-principle property calculation methods, to elucidate the most likely crystal structure of the previously unreported metastable materials.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Rearden, Bradley T

The validation of neutron transport methods used in nuclear criticality safety analyses is required by consensus American National Standards Institute/American Nuclear Society (ANSI/ANS) standards. In the last decade, there has been an increased interest in correlations among critical experiments used in validation that have shared physical attributes and which impact the independence of each measurement. The statistical methods included in many of the frequently cited guidance documents on performing validation calculations incorporate the assumption that all individual measurements are independent, so little guidance is available to practitioners on the topic. Typical guidance includes recommendations to select experiments from multiple facilities and experiment series in an attempt to minimize the impact of correlations or common-cause errors in experiments. Recent efforts have been made both to determine the magnitude of such correlations between experiments and to develop and apply methods for adjusting the bias and bias uncertainty to account for the correlations. This paper describes recent work performed at Oak Ridge National Laboratory using the Sampler sequence from the SCALE code system to develop experimental correlations using a Monte Carlo sampling technique. Sampler will be available for the first time with the release of SCALE 6.2, and a brief introduction to the methods used to calculate experiment correlations within this new sequence is presented in this paper. Techniques to utilize these correlations in the establishment of upper subcritical limits are the subject of a companion paper and will not be discussed here. Example experimental uncertainties and correlation coefficients are presented for a variety of low-enriched uranium water-moderated lattice experiments selected for use in a benchmark exercise by the Working Party on Nuclear Criticality Safety Subgroup on Uncertainty Analysis in Criticality Safety Analyses. The results include studies on the effect of fuel rod pitch on the correlations, and some observations are also made regarding difficulties in determining experimental correlations using the Monte Carlo sampling technique.
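The mechanism behind Monte Carlo sampling of experiment correlations can be sketched as follows. This is an invented illustration, not the SCALE/Sampler workflow itself: Sampler perturbs the inputs to actual transport calculations, and the uncertainty magnitudes below are placeholders. The correlation nonetheless arises the same way, from uncertain inputs shared between experiments.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000

# A shared uncertain input (e.g. a fuel composition common to two experiments
# from the same series) plus experiment-specific uncertainties. The standard
# deviations (200 pcm shared, 100 pcm independent) are hypothetical.
shared = rng.normal(0.0, 1.0, n_samples)
e1_only = rng.normal(0.0, 1.0, n_samples)
e2_only = rng.normal(0.0, 1.0, n_samples)

keff_1 = 1.000 + 200e-5 * shared + 100e-5 * e1_only
keff_2 = 0.998 + 200e-5 * shared + 100e-5 * e2_only

# Correlation coefficient between the two experiments across the sampled inputs.
# Analytically: 200^2 / (200^2 + 100^2) = 0.8.
rho = np.corrcoef(keff_1, keff_2)[0, 1]
```

Assembling such pairwise coefficients over a suite of benchmarks gives the correlation matrix used when adjusting bias and bias uncertainty.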

  11. Novel Experimental Techniques to Investigate Wellbore Damage Mechanisms

    NASA Astrophysics Data System (ADS)

    Choens, R. C., II; Ingraham, M. D.; Lee, M.; Dewers, T. A.

    2017-12-01

A new experimental technique with unique geometry is presented investigating deformation of simulated boreholes using standard axisymmetric triaxial deformation equipment. The Sandia WEllbore SImulation (SWESI) geometry uses right cylinders of rock 50 mm in diameter and 75 mm in length. An 11.3 mm hole is drilled perpendicular to the axis of the cylinder in the center of the sample to simulate a borehole. The hole is covered with a solid metal cover and sealed with polyurethane. The metal cover can be machined with a high-pressure port to introduce different fluid chemistries into the borehole at controlled pressures. Samples are deformed in a standard load frame under confinement, allowing for a broad range of possible stresses, load paths, and temperatures. Experiments in this study are loaded to the desired confining pressure, then deformed at a constant axial strain rate of 10^-5 s^-1. Two different suites of experiments are conducted in this study on sedimentary and crystalline rock types. The first series of experiments is conducted on Mancos Shale, a finely laminated transversely isotropic rock. Samples are cored at three different orientations to the laminations. A second series of experiments is conducted on Sierra White granite with different fluid chemistries inside the borehole. Numerical modelling and experimental observations including CT-microtomography demonstrate that stresses are concentrated around the simulated wellbore and recreate wellbore deformation mechanisms. Borehole strength and damage development are dependent on anisotropy orientation and fluid chemistry. Observed failure geometries, particularly for Mancos Shale, can be highly asymmetric. These results demonstrate uncertainties in in situ stress measurements using commonly applied borehole breakout techniques in complicated borehole physico-chemical environments.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. SAND2017-8259 A

  12. Physical and optical properties of DCJTB dye for OLED display applications: Experimental and theoretical investigation

    NASA Astrophysics Data System (ADS)

    Kurban, Mustafa; Gündüz, Bayram

    2017-06-01

In this study, 4-(dicyanomethylene)-2-tert-butyl-6-(1,1,7,7-tetramethyljulolidin-4-yl-vinyl)-4H-pyran (DCJTB) was investigated through combined experimental and theoretical studies. The electronic, optical and spectroscopic properties of the DCJTB molecule were first examined experimentally using both solution and thin-film techniques and then by theoretical calculations. Theoretical results showed one intense electronic transition at 505.26 nm, in quite reasonable agreement with the measured experimental values of 505.00 and 503 nm obtained with the solution and film techniques, respectively. Experimental and simple models were also used to calculate the optical refractive index (n) of the DCJTB molecule. The structural and electronic properties were next calculated using density functional theory (DFT) with the B3LYP/6-311G(d,p) basis set. UV and FT-IR spectral characteristics and electronic properties, such as frontier orbitals and band gap energy (Eg), of DCJTB were also obtained with the time-dependent (TD) DFT approach. The theoretical Eg value was found to be 2.269 eV, which is consistent with the experimental result obtained from the solution technique in THF solvent (2.155 eV) and with the literature (2.16 eV). The results reveal that the solution technique is simple, cost-efficient and safe for optoelectronic applications when compared with the film technique.

  13. Parametric studies and characterization measurements of x-ray lithography mask membranes

    NASA Astrophysics Data System (ADS)

    Wells, Gregory M.; Chen, Hector T. H.; Engelstad, Roxann L.; Palmer, Shane R.

    1991-08-01

The techniques used in the experimental characterization of thin membranes considered for potential use as mask blanks for x-ray lithography are reviewed. Among the parameters of interest for this evaluation are the film's stress, fracture strength, uniformity of thickness, absorption in the x-ray and visible spectral regions, and the modulus and grain structure of the material. The experimental techniques used for measuring these properties are described. The accuracy and applicability of the assumptions used to derive the formulas that relate the experimental measurements to the parameters of interest are considered. Experimental results for silicon carbide and diamond films are provided. Another characteristic needed for an x-ray mask carrier is radiation stability. The number of x-ray exposures expected to be performed in the lifetime of an x-ray mask on a production line is on the order of 10^7. The dimensional stability requirements placed on the membranes during this period are discussed. Interferometric techniques that provide sufficient sensitivity for these stability measurements are described. A comparison is made between the different techniques that have been developed in terms of the information that each technique provides, the accuracy of the various techniques, and the implementation issues that are involved with each technique.

  14. Adaptive suppression of power line interference in ultra-low field magnetic resonance imaging in an unshielded environment.

    PubMed

    Huang, Xiaolei; Dong, Hui; Qiu, Yang; Li, Bo; Tao, Quan; Zhang, Yi; Krause, Hans-Joachim; Offenhäusser, Andreas; Xie, Xiaoming

    2018-01-01

    Power-line harmonic interference and fixed-frequency noise peaks may cause stripe-artifacts in ultra-low field (ULF) magnetic resonance imaging (MRI) in an unshielded environment and in a conductively shielded room. In this paper we describe an adaptive suppression method to eliminate these artifacts in MRI images. This technique utilizes spatial correlation of the interference from different positions, and is realized by subtracting the outputs of the reference channel(s) from those of the signal channel(s) using wavelet analysis and the least squares method. The adaptive suppression method is first implemented to remove the image artifacts in simulation. We then experimentally demonstrate the feasibility of this technique by adding three orthogonal superconducting quantum interference device (SQUID) magnetometers as reference channels to compensate the output of one 2nd-order gradiometer. The experimental results show great improvement in the imaging quality in both 1D and 2D MRI images at two common imaging frequencies, 1.3 kHz and 4.8 kHz. At both frequencies, the effective compensation bandwidth is as high as 2 kHz. Furthermore, we examine the longitudinal relaxation times of the same sample before and after compensation, and show that the MRI properties of the sample did not change after applying adaptive suppression. This technique can effectively increase the imaging bandwidth and be applied to ULF MRI detected by either SQUIDs or Faraday coil in both an unshielded environment and a conductively shielded room. Copyright © 2017 Elsevier Inc. All rights reserved.
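The least-squares compensation step described above can be sketched in isolation (the wavelet analysis used in the paper is omitted here). The sampling rate, harmonic frequencies, and coupling coefficients below are invented for illustration; the point is regressing the signal channel on the reference channels and subtracting the fitted interference.

```python
import numpy as np

rng = np.random.default_rng(7)
fs = 20_000                          # Hz, hypothetical sampling rate
t = np.arange(0, 1.0, 1.0 / fs)

# Three reference magnetometers mainly pick up power-line harmonics.
refs = np.column_stack([
    np.sin(2 * np.pi * 50 * t),
    np.sin(2 * np.pi * 150 * t),
    np.sin(2 * np.pi * 250 * t),
])

# Toy NMR signal at 1.3 kHz plus interference coupled into the gradiometer
# with unknown coefficients (here 0.8, 0.5, 0.3) and white noise.
mri = np.sin(2 * np.pi * 1300 * t) * np.exp(-t / 0.3)
signal = mri + refs @ np.array([0.8, 0.5, 0.3]) + 0.01 * rng.normal(size=t.size)

# Least-squares estimate of the coupling, then subtract the fitted interference.
coeffs, *_ = np.linalg.lstsq(refs, signal, rcond=None)
cleaned = signal - refs @ coeffs
```

Because the desired signal is nearly orthogonal to the low-frequency references, the fit recovers the coupling coefficients and the subtraction leaves the NMR signal essentially intact.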

  15. Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2017-05-01

The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data was evaluated in a bluff body wake. To allow for the ground truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
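The Eulerian material-acceleration estimate at the core of these techniques can be sketched on a synthetic flow with a known answer. The steady 2D stagnation-point field below is an illustration under inviscid assumptions, not the cylinder-wake DNS used in the study.

```python
import numpy as np

# Synthetic steady 2D stagnation-point flow u = (a x, -a y), for which the
# material acceleration and pressure gradient are known analytically.
a, rho = 2.0, 1.0
x = np.linspace(-1.0, 1.0, 101)
y = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
u, v = a * X, -a * Y
dx, dy = x[1] - x[0], y[1] - y[0]

# Eulerian material acceleration Du/Dt = du/dt + (u . grad)u; du/dt = 0 here.
# np.gradient uses central differences in the interior, matching the
# central-difference estimators discussed in the abstract.
du_dx, du_dy = np.gradient(u, dx, axis=0), np.gradient(u, dy, axis=1)
dv_dx, dv_dy = np.gradient(v, dx, axis=0), np.gradient(v, dy, axis=1)
acc_x = u * du_dx + v * du_dy
acc_y = u * dv_dx + v * dv_dy

# Inviscid momentum balance gives the pressure gradient to integrate:
dp_dx = -rho * acc_x      # analytic: -rho * a^2 * X
dp_dy = -rho * acc_y      # analytic: -rho * a^2 * Y
```

With noisy experimental velocities, the spacing dx and the time step between PIV frames control the random/truncation error trade-off that the paper's propagation model optimizes.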

  16. NASA IMAGESEER: NASA IMAGEs for Science, Education, Experimentation and Research

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline; Grubb, Thomas G.; Milner, Barbara C.

    2012-01-01

    A number of web-accessible databases, including medical, military or other image data, offer universities and other users the ability to teach or research new Image Processing techniques on relevant and well-documented data. However, NASA images have traditionally been difficult for researchers to find, are often only available in hard-to-use formats, and do not always provide sufficient context and background for a non-NASA Scientist user to understand their content. The new IMAGESEER (IMAGEs for Science, Education, Experimentation and Research) database seeks to address these issues. Through a graphically-rich web site for browsing and downloading all of the selected datasets, benchmarks, and tutorials, IMAGESEER provides a widely accessible database of NASA-centric, easy to read, image data for teaching or validating new Image Processing algorithms. As such, IMAGESEER fosters collaboration between NASA and research organizations while simultaneously encouraging development of new and enhanced Image Processing algorithms. The first prototype includes a representative sampling of NASA multispectral and hyperspectral images from several Earth Science instruments, along with a few small tutorials. Image processing techniques are currently represented with cloud detection, image registration, and map cover/classification. For each technique, corresponding data are selected from four different geographic regions, i.e., mountains, urban, water coastal, and agriculture areas. Satellite images have been collected from several instruments - Landsat-5 and -7 Thematic Mappers, Earth Observing-1 (EO-1) Advanced Land Imager (ALI) and Hyperion, and the Moderate Resolution Imaging Spectroradiometer (MODIS). After geo-registration, these images are available in simple common formats such as GeoTIFF and raw formats, along with associated benchmark data.

  17. Novel Augmentation Technique for Patellar Tendon Repair Improves Strength and Decreases Gap Formation: A Cadaveric Study.

    PubMed

    Black, James C; Ricci, William M; Gardner, Michael J; McAndrew, Christopher M; Agarwalla, Avinesh; Wojahn, Robert D; Abar, Orchid; Tang, Simon Y

    2016-12-01

    Patellar tendon ruptures commonly are repaired using transosseous patellar drill tunnels with modified-Krackow sutures in the patellar tendon. This simple suture technique has been associated with failure rates and poor clinical outcomes in a modest proportion of patients. Failure of this repair technique can result from gap formation during loading or a single catastrophic event. Several augmentation techniques have been described to improve the integrity of the repair, but standardized biomechanical evaluation of repair strength among different techniques is lacking. The purpose of this study was to describe a novel figure-of-eight suture technique to augment traditional fixation and evaluate its biomechanical performance. We hypothesized that the augmentation technique would (1) reduce gap formation during cyclic loading and (2) increase the maximum load to failure. Ten pairs (two male, eight female) of fresh-frozen cadaveric knees free of overt disorders or patellar tendon damage were used (average donor age, 76 years; range, 65-87 years). For each pair, one specimen underwent the standard transosseous tunnel suture repair with a modified-Krackow suture technique and the second underwent the standard repair with our experimental augmentation method. Nine pairs were suitable for testing. Each specimen underwent cyclic loading while continuously measuring gap formation across the repair. At the completion of cyclic loading, load to failure testing was performed. A difference in gap formation and mean load to failure was seen in favor of the augmentation technique. At 250 cycles, a 68% increase in gap formation was seen for the control group (control: 5.96 ± 0.86 mm [95% CI, 5.30-6.62 mm]; augmentation: 3.55 ± 0.56 mm [95% CI, 3.12-3.98 mm]; p = 0.02). The mean load to failure was 13% greater in the augmentation group (control: 899.57 ± 96.94 N [95% CI, 825.06-974.09 N]; augmentation: 1030.70 ± 122.41 N [95% CI, 936.61-1124.79 N]; p = 0.01). 
This biomechanical study showed improved performance of a novel augmentation technique compared with the standard repair, in terms of reduced gap formation during cyclic loading and increased maximum load to failure. Decreased gap formation and higher load to failure may improve healing potential and minimize failure risk. This study shows a potential biomechanical advantage of the augmentation technique, providing support for future clinical investigations comparing this technique with other repair methods that are in common use such as transosseous suture repair.

  18. Electrodeposition of titania and barium titanate thin films for high dielectric applications

    NASA Astrophysics Data System (ADS)

    Roy, Biplab Kumar

In order to address the requirement of low-temperature, low-cost processing for depositing high-dielectric-constant ceramic films for applications in embedded capacitor and flexible electronics technology, two different chemical bath processes, namely thermohydrolytic deposition (TD) and cathodic electrodeposition (ED), have been exploited to generate titania thin films. In the thermohydrolytic deposition technique, titania films were generated from acidic aqueous solution of titanium chloride on F:SnO2 coated glass and Si substrates by a temperature-assisted hydrolysis mechanism. On the other hand, in cathodic electrodeposition, in-situ electro-generation of hydroxyl ions triggered a fast deposition of titania on conductive substrates such as copper and F:SnO2 coated glass from peroxotitanium solution at low temperatures (˜0°C). In both techniques, solution compositions affected the morphology and crystallinity of the films. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) techniques have been employed to perform such characterization. As both processes utilized water as solvent, the as-deposited films contained hydroxyl ligands or physically adsorbed water molecules in the titania layer. Besides that, electrodeposited films contained peroxotitanium bonds, which were characterized by FTIR studies. Although as-electrodeposited titania films were X-ray amorphous, considerable crystallinity could be generated by heat treatment. The films obtained from both processes showed moderately high dielectric constants (ranging from 9-30 at 100 kHz) and high breakdown voltages (0.09-0.15 MV/cm) in electrical measurements. To further improve the dielectric constant, electrodeposited titania films were converted to barium titanate films in a high-pH barium ion containing solution at 80-90°C. The resultant film contained cubic crystalline barium titanate, verified by XRD analysis.
The simple low-temperature hydrothermal conversion technique worked perfectly for F:SnO2 coated glass substrates, but the high-pH precursor caused corrosion of copper substrates and deposition of copper oxide in the final films. To overcome this, an innovative technique, which supplements the common hydrothermal conversion with electrochemical protection of the substrate by application of a cathodic potential, has been adopted. Films generated by the common hydrothermal technique on F:SnO2/glass substrates and via the electrochemical-hydrothermal technique on Cu substrates showed promising dielectric behavior. Apart from the experimental studies, this report also includes various thermodynamic studies related to hydrolysis and precipitation of titanium ions, protection of copper during titania deposition, and barium titanate conversion. A Gibbs free energy based model and speciation studies were used to understand supersaturation, which is a controlling factor in thermohydrolytic deposition. Similar approaches were utilized to understand the possibilities of barium titanate formation at different Ba2+ concentrations and pH conditions. The possibility of atmospheric carbon dioxide incorporation generating barium carbonate instead of barium titanate was also assessed by mathematical calculations. Whenever relevant, results of such theoretical analysis were utilized to design the experiments or to explain the experimental observations.

  19. Application of the ultrasonic technique and high-speed filming for the study of the structure of air-water bubbly flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carvalho, R.D.M.; Venturini, O.J.; Tanahashi, E.I.

    2009-10-15

Multiphase flows are very common in industry, oftentimes involving very harsh environments and fluids. Accordingly, there is a need to determine the dispersed phase holdup using noninvasive fast responding techniques; besides, knowledge of the flow structure is essential for the assessment of the transport processes involved. The ultrasonic technique fulfills these requirements and could have the capability to provide the information required. In this paper, the potential of the ultrasonic technique for application to two-phase flows was investigated by checking acoustic attenuation data against experimental data on the void fraction and flow topology of vertical, upward, air-water bubbly flows in the zero to 15% void fraction range. The ultrasonic apparatus consisted of one emitter/receiver transducer and three other receivers at different positions along the pipe circumference; simultaneous high-speed motion pictures of the flow patterns were made at 250 and 1000 fps. The attenuation data for all sensors exhibited a systematic interrelated behavior with void fraction, thereby testifying to the capability of the ultrasonic technique to measure the dispersed phase holdup. From the motion pictures, basic gas phase structures and different flow patterns were identified that corroborated several features of the acoustic attenuation data. Finally, the acoustic wave transit time was also investigated as a function of void fraction. (author)

  20. Planar quadrature RF transceiver design using common-mode differential-mode (CMDM) transmission line method for 7T MR imaging.

    PubMed

    Li, Ye; Yu, Baiying; Pang, Yong; Vigneron, Daniel B; Zhang, Xiaoliang

    2013-01-01

The use of quadrature RF magnetic fields has been demonstrated to be an efficient method to reduce transmit power and to increase the signal-to-noise ratio (SNR) in magnetic resonance (MR) imaging. The goal of this project was to develop a new method using the common-mode and differential-mode (CMDM) technique for compact, planar, distributed-element quadrature transmit/receive resonators for MR signal excitation and detection and to investigate its performance for MR imaging, particularly at ultrahigh magnetic fields. A prototype resonator based on the CMDM method, implemented using microstrip transmission line, was designed and fabricated for 7T imaging. Both the common mode (CM) and the differential mode (DM) of the resonator were tuned and matched at 298 MHz independently. Numerical electromagnetic simulation was performed to verify the orthogonal B1 field directions of the two modes of the CMDM resonator. Both workbench tests and MR imaging experiments were carried out to evaluate the performance. The intrinsic decoupling between the two modes of the CMDM resonator was demonstrated by the bench test, showing a better than -36 dB transmission coefficient between the two modes at the resonance frequency. The MR images acquired using each mode and the images combined in quadrature showed that the CM and DM of the proposed resonator provided similar B1 coverage and achieved SNR improvement in the entire region of interest. The simulation and experimental results demonstrate that the proposed CMDM method with the distributed-element transmission line technique is a feasible and efficient technique for planar quadrature RF coil design at ultrahigh fields, providing intrinsic decoupling between two quadrature channels and high frequency capability. Due to its simple and compact geometry and easy implementation of decoupling methods, the CMDM quadrature resonator can be a good candidate as a design block in multichannel RF coil arrays.

  1. Current uses of ground penetrating radar in groundwater-dependent ecosystems research.

    PubMed

    Paz, Catarina; Alcalá, Francisco J; Carvalho, Jorge M; Ribeiro, Luís

    2017-10-01

Ground penetrating radar (GPR) is a high-resolution technique widely used in shallow groundwater prospecting. This makes GPR ideal for characterizing the hydrogeological functioning of groundwater-dependent ecosystems (GDE). This paper reviews current uses of GPR in GDE research through the construction of a database comprising 91 worldwide GPR case studies selected from the literature and classified according to (1) geological environments favouring GDE; (2) hydrogeological research interests; and (3) field technical and (4) hydrogeological conditions of the survey. The database analysis showed that inland alluvial, colluvial, and glacial formations were the most widely covered geological environments. Water-table depth was the most repeated research interest. By contrast, weathered-marl and crystalline-rock environments as well as the delineation of salinity interfaces in coastal and inland areas were less studied. Although shallow groundwater propitiated GDE in almost all the GPR case studies compiled, only one case expressly addressed GDE research. Common ranges of prospecting depth, water-table depth, and volumetric water content deduced by GPR and other techniques were identified. An antenna frequency of 100 MHz and the common offset acquisition technique predominated in the database. Most GPR case studies were in 30-50° N temperate latitudes, mainly in Europe and North America. Eight original radargrams were selected from several GPR profiles performed in 2014 and 2015 to document database classes and identified gaps, as well as to define experimental ranges of operability in GDE environments. The results contribute to the design of proper GPR surveys in GDE research. Copyright © 2017 Elsevier B.V. All rights reserved.
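For context, a minimal sketch of the standard common-offset relations that underlie such surveys: wave velocity from relative permittivity, reflector depth from two-way travel time, and volumetric water content from the empirical Topp et al. (1980) equation. The equations are standard in the GPR literature; the soil values below are hypothetical.

```python
# Back-of-envelope common-offset GPR relations (illustrative values).
C = 0.3  # speed of light in m/ns

def velocity(eps_r):
    """Radar wave velocity in m/ns from relative permittivity."""
    return C / eps_r ** 0.5

def reflector_depth(twt_ns, eps_r):
    """Reflector depth from two-way travel time (ns), assuming a uniform medium."""
    return velocity(eps_r) * twt_ns / 2.0

def topp_water_content(eps_r):
    """Volumetric water content via the empirical Topp et al. (1980) equation."""
    return -5.3e-2 + 2.92e-2 * eps_r - 5.5e-4 * eps_r**2 + 4.3e-6 * eps_r**3

# e.g. a moist sandy soil with eps_r ~ 9 and a water-table reflection at 60 ns
eps_r = 9.0
v = velocity(eps_r)                    # 0.1 m/ns
depth = reflector_depth(60.0, eps_r)   # 3.0 m
theta = topp_water_content(eps_r)      # ~0.17 m^3/m^3
```

In practice the permittivity is itself estimated (e.g. from hyperbola fitting or borehole ties), which is why the review compares GPR-derived water contents against other techniques.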

  2. A common base method for analysis of qPCR data and the application of simple blocking in qPCR experiments.

    PubMed

    Ganger, Michael T; Dietz, Geoffrey D; Ewing, Sarah J

    2017-12-01

    qPCR has established itself as the technique of choice for the quantification of gene expression. Procedures for conducting qPCR have received significant attention; however, more rigorous approaches to the statistical analysis of qPCR data are needed. Here we develop a mathematical model, termed the Common Base Method, for analysis of qPCR data based on threshold cycle values (Cq) and efficiencies of reactions (E). The Common Base Method keeps all calculations in the log scale as long as possible by working with log10(E)·Cq, which we call the efficiency-weighted Cq value; subsequent statistical analyses are then applied in the log scale. We show how efficiency-weighted Cq values may be analyzed using a simple paired or unpaired experimental design and develop blocking methods to help reduce unexplained variation. The Common Base Method has several advantages. It allows for the incorporation of well-specific efficiencies and multiple reference genes. The method does not necessitate the pairing of samples that must be performed using traditional analysis methods in order to calculate relative expression ratios. Our method is also simple enough to be implemented in any spreadsheet or statistical software without additional scripts or proprietary components.
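
    The core bookkeeping of the Common Base Method can be sketched in a few lines of Python. This is a minimal illustration only: the Cq values and efficiencies below are hypothetical, and the paper's blocking and multi-reference-gene machinery is omitted.

```python
import math

def eff_weighted_cq(E, Cq):
    """Efficiency-weighted Cq: log10(E) * Cq, the log10 of total amplification."""
    return math.log10(E) * Cq

# Hypothetical threshold cycles and amplification efficiencies
# (E = 2 would be perfect doubling each cycle).
goi = eff_weighted_cq(1.95, 24.1)   # gene of interest
ref = eff_weighted_cq(1.98, 18.7)   # reference gene

# Initial template amount scales as E**(-Cq), so in the log scale the
# expression of the target relative to the reference is simply:
log10_rel = ref - goi
print(round(log10_rel, 3))  # → -1.442
```

    Because everything stays in log10 units until the last step, well-specific efficiencies drop in naturally, and a fold-change is recovered only at the end as 10**log10_rel if needed.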

  3. Learning Compositional Simulation Models

    DTIC Science & Technology

    2010-01-01

    techniques developed by social scientists, economists, and medical researchers over the past four decades. Quasi-experimental designs (QEDs) are...statistical techniques from the social sciences known as quasi-experimental design (QED). QEDs allow a researcher to exploit unique characteristics...can be grouped under the rubric “quasi-experimental design” (QED), and they attempt to exploit inherent characteristics of observational data sets

  4. Determination of cellular strains by combined atomic force microscopy and finite element modeling.

    PubMed Central

    Charras, Guillaume T; Horton, Mike A

    2002-01-01

    Many organs adapt to their mechanical environment as a result of physiological change or disease. Cells are both the detectors and effectors of this process. Though many studies have been performed in vitro to investigate the mechanisms of detection and adaptation to mechanical strains, the cellular strains remain unknown and results from different stimulation techniques cannot be compared. By combining experimental determination of cell profiles and elasticities by atomic force microscopy with finite element modeling and computational fluid dynamics, we report the cellular strain distributions exerted by common whole-cell straining techniques and from micromanipulation techniques, hence enabling their comparison. Using data from our own analyses and experiments performed by others, we examine the threshold of activation for different signal transduction processes and the strain components that they may detect. We show that modulating cell elasticity, by increasing the F-actin content of the cytoskeleton, or cellular Poisson ratio are good strategies to resist fluid shear or hydrostatic pressure. We report that stray fluid flow in some substrate-stretch systems elicits significant cellular strains. In conclusion, this technique shows promise in furthering our understanding of the interplay among mechanical forces, strain detection, gene expression, and cellular adaptation in physiology and disease. PMID:12124270

  5. Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2015-11-01

    The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning the laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from the numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reduction in spatial and temporal scales at higher Reynolds numbers leads to notable changes in the optimal pressure evaluation parameters. The effect of smaller-scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex-dominated laminar and turbulent wake flows.
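
    As a toy illustration of how random velocity error feeds into a pressure estimate, the sketch below perturbs a synthetic 1-D velocity profile and evaluates the pressure gradient from the steady inviscid momentum equation. The profile, noise levels, and grid are hypothetical, and this is far simpler than the planar, multi-technique comparison in the paper.

```python
import math, random

random.seed(2)
rho, N, dx = 1.0, 200, 0.01
x = [i * dx for i in range(N)]
u_true = [1.0 + 0.2 * math.sin(2 * math.pi * xi) for xi in x]

def dpdx(u):
    """Pressure gradient from the steady 1-D Euler equation,
    dp/dx = -rho * u * du/dx, using central differences."""
    return [-rho * u[i] * (u[i + 1] - u[i - 1]) / (2 * dx)
            for i in range(1, len(u) - 1)]

ref = dpdx(u_true)
rms_vals = []
for sigma in (0.001, 0.01):            # synthetic "PIV" noise levels
    u_noisy = [ui + random.gauss(0, sigma) for ui in u_true]
    err = [a - b for a, b in zip(dpdx(u_noisy), ref)]
    rms_vals.append(math.sqrt(sum(e * e for e in err) / len(err)))
    print(sigma, round(rms_vals[-1], 3))
```

    The RMS error in dp/dx grows roughly linearly with the velocity noise level, and shrinking dx amplifies the differentiation noise, which is the resolution trade-off the study optimizes.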

  6. Interdisciplinary Common Ground: Techniques and Attentional Processes

    ERIC Educational Resources Information Center

    Arvidson, P. Sven

    2014-01-01

    Common ground in the interdisciplinary research process is the pivot from disciplinary to interdisciplinary perspective. As thinking moves from disciplinary to interdisciplinary, what is the shape or structure of attention, how does intellectual content transform in the attending process? Four common ground techniques--extension, redefinition,…

  7. Polarization-difference imaging: a biologically inspired technique for observation through scattering media

    NASA Astrophysics Data System (ADS)

    Rowe, M. P.; Pugh, E. N., Jr.; Tyo, J. S.; Engheta, N.

    1995-03-01

    Many animals have visual systems that exploit the polarization of light, and some of these systems are thought to compute difference signals in parallel from arrays of photoreceptors optimally tuned to orthogonal polarizations. We hypothesize that such polarization-difference systems can improve the visibility of objects in scattering media by serving as common-mode rejection amplifiers that reduce the effects of background scattering and amplify the signal from targets whose polarization-difference magnitude is distinct from the background. We present experimental results obtained with a target in a highly scattering medium, demonstrating that a manmade polarization-difference system can render readily visible surface features that are invisible to conventional imaging.
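
    The common-mode rejection idea is easy to demonstrate numerically. The sketch below is an illustrative toy with made-up intensities and target position: two orthogonally polarized images share the same scattering background, and subtracting them cancels the background while doubling the target signal.

```python
import random

# Synthetic 8x8 scene: background scatter is common to both polarization
# channels; a small target adds a polarization-dependent signal of +/-6.
random.seed(0)
W, target = 8, {(3, 4), (3, 5)}
scatter = [[100 + random.randint(-15, 15) for _ in range(W)] for _ in range(W)]

i_par  = [[scatter[r][c] + (6 if (r, c) in target else 0) for c in range(W)]
          for r in range(W)]
i_perp = [[scatter[r][c] - (6 if (r, c) in target else 0) for c in range(W)]
          for r in range(W)]

# Polarization-difference image: the common-mode background cancels
# (like a common-mode rejection amplifier) and the target signal doubles.
pd = [[i_par[r][c] - i_perp[r][c] for c in range(W)] for r in range(W)]
print(pd[3][4], pd[0][0])  # → 12 0
```

    A polarization-sum image (i_par + i_perp) would instead behave like a conventional intensity image, leaving the target buried in the background fluctuations.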

  8. Improved near-field characteristics of phased arrays for assessing concrete and cementitious materials

    NASA Astrophysics Data System (ADS)

    Wooh, Shi-Chang; Azar, Lawrence

    1999-01-01

    The degradation of civil infrastructure has placed a focus on effective nondestructive evaluation techniques to correctly assess the condition of existing concrete structures. Conventional high-frequency ultrasonic responses are severely affected by scattering and material attenuation, resulting in weak and confusing signal returns. Therefore, low-frequency ultrasonic transducers, which avoid this problem of wave attenuation, are commonly used for concrete, albeit with limited capabilities. The focus of this research is to ascertain some benefits and limitations of a low-frequency ultrasonic phased array transducer. In this paper, we investigate a novel low-frequency ultrasonic phased array and report the results of experimental feasibility tests for practical condition assessment of concrete structures.

  9. Virtual reality systems for rodents

    PubMed Central

    Ayaz, Aslı

    2017-01-01

    Abstract Over the last decade virtual reality (VR) setups for rodents have been developed and utilized to investigate the neural foundations of behavior. Such VR systems became very popular since they allow the use of state-of-the-art techniques to measure neural activity in behaving rodents that cannot be easily used with classical behavior setups. Here, we provide an overview of rodent VR technologies and review recent results from related research. We discuss commonalities and differences as well as merits and issues of different approaches. A special focus is given to experimental (behavioral) paradigms in use. Finally we comment on possible use cases that may further exploit the potential of VR in rodent research and hence inspire future studies. PMID:29491968

  10. A Selective-Echo Method for Chemical-Shift Imaging of Two-Component Systems

    NASA Astrophysics Data System (ADS)

    Gerald, Rex E., II; Krasavin, Anatoly O.; Botto, Robert E.

    A simple and effective method for selectively imaging either one of two chemical species in a two-component system is presented and demonstrated experimentally. The pulse sequence employed, selective-echo chemical-shift imaging (SECSI), is a hybrid (frequency-selective/T1-contrast) technique that is executed in a short period of time, utilizes the full Boltzmann magnetization of each chemical species to form the corresponding image, and requires only hard pulses of quadrature phase. This approach provides a direct and unambiguous representation of the spatial distribution of the two chemical species. In addition, the performance characteristics and the advantages of the SECSI sequence are compared on a common basis to those of other pulse sequences.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugmire, David; Kress, James; Choi, Jong

    Data driven science is becoming increasingly more common and complex, and is placing tremendous stresses on visualization and analysis frameworks. Data sources producing 10GB per second (and more) are becoming increasingly commonplace in simulation, sensor, and experimental sciences. These data sources, which are often distributed around the world, must be analyzed by teams of scientists that are also distributed. Enabling scientists to view, query and interact with such large volumes of data in near-real-time requires a rich fusion of visualization and analysis techniques, middleware and workflow systems. This paper discusses initial research into visualization and analysis of distributed data workflows that enables scientists to make near-real-time decisions on large volumes of time-varying data.

  12. Experience in Evaluating AAL Solutions in Living Labs

    PubMed Central

    Colomer, Juan Bautista Montalvá; Salvi, Dario; Cabrera-Umpierrez, Maria Fernanda; Arredondo, Maria Teresa; Abril, Patricia; Jimenez-Mixco, Viveca; García-Betances, Rebeca; Fioravanti, Alessio; Pastorino, Matteo; Cancela, Jorge; Medrano, Alejandro

    2014-01-01

    Ambient assisted living (AAL) is a complex field, where different technologies are integrated to offer solutions for the benefit of different stakeholders. Several evaluation techniques are commonly applied that tackle specific aspects of AAL; however, holistic evaluation approaches are lacking when addressing the needs of both developers and end-users. Living labs have been often used as real-life test and experimentation environments for co-designing AAL technologies and validating them with relevant stakeholders. During the last five years, we have been evaluating AAL systems and services in the framework of various research projects. This paper presents the lessons learned in this experience and proposes a set of harmonized guidelines to conduct evaluations in living labs. PMID:24763209

  13. Packing properties of starch-based powders under mild mechanical stress.

    PubMed

    Zanardi, I; Gabbrielli, A; Travagli, V

    2009-07-01

    This study reports the settling ability of commercial pharmaceutical-grade starch samples, both native and pregelatinized. The experiments were carried out under different relative humidity (RH%) conditions, and the packing properties were evaluated using both official pharmacopoeial monograph conditions and modified conditions, in order to give deeper insight into tapping under mild mechanical stress. The technique adopted, simulating common pharmaceutical operating practices, appears to be useful for estimating some technologically relevant features of diluent powder materials. Moreover, a general mathematical function has been applied to the experimental data; this could be appropriate for adequately describing material settling patterns and offers practical parameters for characterizing starch powders within the context of a pharmaceutical quality system.

  14. Macromolecular assemblies in reduced gravity environments

    NASA Technical Reports Server (NTRS)

    Moos, Philip J.; Hayes, James W.; Stodieck, Louis S.; Luttges, Marvin W.

    1990-01-01

    The assembly of protein macromolecules into structures commonly produced within biological systems was achieved using in vitro techniques carried out in nominal as well as reduced gravity environments. Appropriate hardware was designed and fabricated to support such studies. Experimental protocols were matched to the available reduced gravity test opportunities. In evaluations of tubulin, fibrin, and collagen assembly products, the influence of differing gravity test conditions is apparent. Product homogeneity and organization were characteristic enhancements documented in reduced gravity samples. These differences can be related to the fluid flow conditions that exist during in vitro product formation. Reduced gravity environments may provide a robust opportunity for directing the products formed in a variety of bioprocessing applications.

  15. Picometer Level Modeling of a Shared Vertex Double Corner Cube in the Space Interferometry Mission Kite Testbed

    NASA Technical Reports Server (NTRS)

    Kuan, Gary M.; Dekens, Frank G.

    2006-01-01

    The Space Interferometry Mission (SIM) is a microarcsecond interferometric space telescope that requires picometer level precision measurements of its truss and interferometer baselines. Single-gauge metrology errors due to non-ideal physical characteristics of corner cubes reduce the angular measurement capability of the science instrument. Specifically, the non-common vertex error (NCVE) of a shared vertex, double corner cube introduces micrometer level single-gauge errors in addition to errors due to dihedral angles and reflection phase shifts. A modified SIM Kite Testbed containing an articulating double corner cube is modeled and the results are compared to the experimental testbed data. The results confirm modeling capability and viability of calibration techniques.

  16. Limitations Of The Current State Space Modelling Approach In Multistage Machining Processes Due To Operation Variations

    NASA Astrophysics Data System (ADS)

    Abellán-Nebot, J. V.; Liu, J.; Romero, F.

    2009-11-01

    The State Space modelling approach has recently been proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, and cutting-tool deflections. This paper shows the limitations of the current State Space model through an experimental case study in which the effects of spindle thermal expansion, cutting-tool flank wear, and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.

  17. Impact-damaged graphite-thermoplastic trapezoidal-corrugation sandwich and semi-sandwich panels

    NASA Technical Reports Server (NTRS)

    Jegley, D.

    1993-01-01

    The results of a study of the effects of impact damage on compression-loaded trapezoidal-corrugation sandwich and semi-sandwich graphite-thermoplastic panels are presented. Sandwich panels with two identical face sheets and a trapezoidal corrugated core between them, and semi-sandwich panels with a corrugation attached to a single skin are considered in this study. Panels were designed, fabricated and tested. The panels were made using the manufacturing process of thermoforming, a less-commonly used technique for fabricating composite parts. Experimental results for unimpacted control panels and panels subjected to impact damage prior to loading are presented. Little work can be found in the literature about these configurations of thermoformed panels.

  18. The Oxford Probe: an open access five-hole probe for aerodynamic measurements

    NASA Astrophysics Data System (ADS)

    Hall, B. F.; Povey, T.

    2017-03-01

    The Oxford Probe is an open access five-hole probe designed for experimental aerodynamic measurements. The open access probe can be manufactured by the end user via additive manufacturing (metal or plastic). The probe geometry, drawings, calibration maps, and software are available under a creative commons license. The purpose is to widen access to aerodynamic measurement techniques in education and research environments. There are many situations in which the open access probe will allow results of comparable accuracy to a well-calibrated commercial probe. We discuss the applications and limitations of the probe, and compare the calibration maps for 16 probes manufactured in different materials and at different scales, but with the same geometrical design.

  19. Experimental evidence of phase coherence of magnetohydrodynamic turbulence in the solar wind: GEOTAIL satellite data.

    PubMed

    Koga, D; Chian, A C-L; Hada, T; Rempel, E L

    2008-02-13

    Magnetohydrodynamic (MHD) turbulence is commonly observed in the solar wind. Nonlinear interactions among MHD waves are likely to produce finite correlation of the wave phases. For discussions of various transport processes of energetic particles, it is fundamentally important to determine whether the wave phases are randomly distributed (as assumed in the quasi-linear theory) or have a finite coherence. Using a method based on the surrogate data technique, we analysed the GEOTAIL magnetic field data to evaluate the phase coherence in MHD turbulence in the Earth's foreshock region. The results demonstrate the existence of finite phase correlation, indicating that nonlinear wave-wave interactions are in progress.
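
    The surrogate-data idea referenced above can be sketched as follows: build a surrogate series that keeps the original amplitude (power) spectrum but randomizes the Fourier phases, then compare phase-sensitive statistics between the original and the surrogate. This toy version uses a naive O(N²) DFT on a made-up signal; it is not the paper's GEOTAIL analysis or its specific phase-coherence index, just the surrogate construction itself.

```python
import cmath, math, random

random.seed(1)
N = 64
# Toy signal: two waves with a fixed relative phase plus a little noise.
x = [math.sin(2 * math.pi * 3 * n / N)
     + 0.5 * math.sin(2 * math.pi * 6 * n / N + 0.7)
     + 0.1 * random.gauss(0, 1) for n in range(N)]

def dft(seq):
    M = len(seq)
    return [sum(seq[n] * cmath.exp(-2j * math.pi * k * n / M) for n in range(M))
            for k in range(M)]

def idft(spec):
    M = len(spec)
    return [sum(spec[k] * cmath.exp(2j * math.pi * k * n / M)
                for k in range(M)).real / M for n in range(M)]

X = dft(x)
# Surrogate: same amplitude spectrum, random phases (conjugate-paired
# so the surrogate series stays real-valued).
S = list(X)
for k in range(1, N // 2):
    phi = random.uniform(-math.pi, math.pi)
    S[k] = abs(X[k]) * cmath.exp(1j * phi)
    S[N - k] = S[k].conjugate()
surrogate = idft(S)

# Power spectra match, so any phase-sensitive statistic that differs
# between x and the surrogate indicates genuine phase coherence.
print(round(abs(X[3]), 6) == round(abs(dft(surrogate)[3]), 6))  # → True
```

    In the actual analysis one compares a nonlinearity measure of the observed series against an ensemble of such surrogates; a significant difference rejects the random-phase hypothesis assumed in quasi-linear theory.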

  20. Computer simulation of schlieren images of rotationally symmetric plasma systems: a simple method.

    PubMed

    Noll, R; Haas, C R; Weikl, B; Herziger, G

    1986-03-01

    Schlieren techniques are commonly used for the quantitative analysis of cylindrical or spherical index-of-refraction profiles. Many schlieren objects, however, are characterized by more complex geometries, so we have investigated the more general case of noncylindrical, rotationally symmetric distributions of the index of refraction n(r,z). Assuming straight ray paths in the schlieren object, we have calculated 2-D beam deviation profiles. It is shown that experimental schlieren images of the noncylindrical plasma generated by a plasma focus device can be simulated with these deviation profiles. The computer simulation allows a quantitative analysis of these schlieren images, which yields, for example, the plasma parameters, electron density, and electron density gradients.
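
    Under the straight-ray assumption, the beam deviation reduces to integrating the transverse gradient of ln n along the propagation direction. A toy version of that calculation, assuming a hypothetical Gaussian index dip whose width varies with z (not the paper's plasma-focus profile):

```python
import math

# Hypothetical rotationally symmetric index profile n(r, z): a Gaussian
# dip (e.g. a plasma column) whose radius varies along the optical axis z.
def n(x, y, z):
    r2 = x * x + y * y
    w = 1.0 + 0.3 * z * z          # column radius grows away from z = 0
    return 1.0 - 0.01 * math.exp(-r2 / w ** 2)

def deflection_x(x, y, zmax=3.0, nz=600):
    """Straight-ray deflection angle in x: integral of d(ln n)/dx along z."""
    dz, h = 2 * zmax / nz, 1e-5
    total = 0.0
    for k in range(nz):
        z = -zmax + (k + 0.5) * dz
        total += (math.log(n(x + h, y, z))
                  - math.log(n(x - h, y, z))) / (2 * h) * dz
    return total

# Deflection vanishes on the axis by symmetry and points toward
# higher index (outward from the dip) off-axis.
print(deflection_x(0.0, 0.0), deflection_x(0.5, 0.0) > 0)  # → 0.0 True
```

    Evaluating such deviation maps over a 2-D grid of entry points is what produces the simulated schlieren images that can then be matched against experiment.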

  1. Investigating Methodological Differences in the Assessment of Dendritic Morphology of Basolateral Amygdala Principal Neurons-A Comparison of Golgi-Cox and Neurobiotin Electroporation Techniques.

    PubMed

    Klenowski, Paul M; Wright, Sophie E; Mu, Erica W H; Noakes, Peter G; Lavidis, Nickolas A; Bartlett, Selena E; Bellingham, Mark C; Fogarty, Matthew J

    2017-12-19

    Quantitative assessments of neuronal subtypes in numerous brain regions show large variations in dendritic arbor size. A critical experimental factor is the method used to visualize neurons. We chose to investigate quantitative differences in basolateral amygdala (BLA) principal neuron morphology using two of the most common visualization methods: Golgi-Cox staining and neurobiotin (NB) filling. We show in 8-week-old Wistar rats that NB-filling reveals significantly larger dendritic arbors and different spine densities, compared to Golgi-Cox-stained BLA neurons. Our results demonstrate important differences and provide methodological insights into quantitative disparities of BLA principal neuron morphology reported in the literature.

  2. Body size and shape misperception and visual adaptation: An overview of an emerging research paradigm.

    PubMed

    Challinor, Kirsten L; Mond, Jonathan; Stephen, Ian D; Mitchison, Deborah; Stevenson, Richard J; Hay, Phillipa; Brooks, Kevin R

    2017-12-01

    Although body size and shape misperception (BSSM) is a common feature of anorexia nervosa, bulimia nervosa and muscle dysmorphia, little is known about its underlying neural mechanisms. Recently, a new approach has emerged, based on the long-established non-invasive technique of perceptual adaptation, which allows for inferences about the structure of the neural apparatus responsible for alterations in visual appearance. Here, we describe several recent experimental examples of BSSM, wherein exposure to "extreme" body stimuli causes visual aftereffects of biased perception. The implications of these studies for our understanding of the neural and cognitive representation of human bodies, along with their implications for clinical practice are discussed.

  3. Analysis of chemical warfare agents. II. Use of thiols and statistical experimental design for the trace level determination of vesicant compounds in air samples.

    PubMed

    Muir, Bob; Quick, Suzanne; Slater, Ben J; Cooper, David B; Moran, Mary C; Timperley, Christopher M; Carrick, Wendy A; Burnell, Christopher K

    2005-03-18

    Thermal desorption with gas chromatography-mass spectrometry (TD-GC-MS) remains the technique of choice for the analysis of trace concentrations of analytes in air samples. This paper describes the development and application of a method for analysing the vesicant compounds sulfur mustard and Lewisites I-III. 3,4-Dimercaptotoluene and butanethiol were used to spike sorbent tubes before vesicant vapours were sampled; Lewisites I and II reacted with the thiols, while sulfur mustard and Lewisite III did not. Statistical experimental design was used to optimise the thermal desorption parameters, and the optimised method was used to determine vesicant compounds in headspace samples taken from a decontamination trial. 3,4-Dimercaptotoluene reacted with Lewisites I and II to give a common derivative with a limit of detection (LOD) of 260 μg m(-3), while butanethiol gave distinct derivatives with limits of detection around 30 μg m(-3).

  4. Multiscale Analysis of a Collapsible Respiratory Airway

    NASA Astrophysics Data System (ADS)

    Ghadiali, Samir; Bell, E. David; Swarts, J. Douglas

    2006-11-01

    The Eustachian tube (ET) is a collapsible respiratory airway that connects the nasopharynx with the middle ear (ME). The ET normally exists in a collapsed state and must be periodically opened to maintain a healthy and sterile ME. Although the inability to open the ET (i.e. ET dysfunction) is the primary etiology responsible for several common ME diseases (i.e. Otitis Media), the mechanisms responsible for ET dysfunction are not well established. To investigate these mechanisms, we developed a multi-scale model of airflow in the ET and correlated model results with experimental data obtained in healthy and diseased subjects. The computational models utilized finite-element methods to simulate fluid-structure interactions and molecular dynamics techniques to quantify the adhesive properties of mucus glycoproteins. Results indicate that airflow in the ET is highly sensitive to both the dynamics of muscle contraction and molecular adhesion forces within the ET lumen. In addition, correlation of model results with experimental data obtained in diseased subjects was used to identify the biomechanical mechanisms responsible for ET dysfunction.

  5. Matter-wave diffraction approaching limits predicted by Feynman path integrals for multipath interference

    NASA Astrophysics Data System (ADS)

    Barnea, A. Ronny; Cheshnovsky, Ori; Even, Uzi

    2018-02-01

    Interference experiments have been paramount in our understanding of quantum mechanics and are frequently the basis of testing the superposition principle in the framework of quantum theory. In recent years, several studies have challenged the nature of wave-function interference from the perspective of Born's rule, namely the manifestation of so-called high-order interference terms in a superposition generated by diffraction of the wave functions. Here we present an experimental test of multipath interference in the diffraction of metastable helium atoms, with large-number counting statistics comparable to photon-based experiments. We use a variation of the original triple-slit experiment and accurate single-event counting techniques to provide a new experimental bound of 2.9 × 10^-5 on the statistical deviation from the commonly approximated null third-order interference term in Born's rule for matter waves. Our value is on the order of the maximal contribution predicted for multipath trajectories by Feynman path integrals.
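
    The quantity bounded in such experiments is the Sorkin combination of one-, two-, and three-slit probabilities, which vanishes identically under Born's rule. A small numerical check with hypothetical path amplitudes (in the experiment these arise from matter-wave diffraction):

```python
import cmath

# Hypothetical single-path amplitudes at one detector position.
a = cmath.exp(0.3j)
b = 0.8 * cmath.exp(1.1j)
c = 0.6 * cmath.exp(2.0j)

def P(*amps):
    """Born-rule probability for a coherent superposition of open paths."""
    return abs(sum(amps)) ** 2

# Sorkin combination: any genuine three-path interference would survive
# here, because all pairwise cross terms cancel by construction.
eps = P(a, b, c) - P(a, b) - P(a, c) - P(b, c) + P(a) + P(b) + P(c)
print(abs(eps) < 1e-12)  # → True: Born's rule has no third-order term
```

    An experimental bound on eps (suitably normalized) therefore directly constrains higher-order modifications of Born's rule.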

  6. Specific formation of negative ions from leucine and isoleucine molecules

    NASA Astrophysics Data System (ADS)

    Papp, Peter; Shchukin, Pavel; Matejčík, Štefan

    2010-01-01

    Dissociative electron attachment (DEA) to gas-phase leucine (Leu) and isoleucine (Ile) molecules was studied using experimental and quantum-chemical methods. The relative partial cross sections for DEA were measured using the crossed electron/molecular beam technique. Supporting ab initio calculations of the structures and energies of the neutral molecules, fragments, and negative ions were carried out at the G3MP2 and B3LYP levels in order to interpret the experimental data. Leu and Ile exhibit several common features. The negative ionic fragments from both molecules are formed in the electron energy range from 0 to approximately 14 eV via three resonances (1.2, 5.5, and 8 eV). The relative partial cross sections for DEA to Leu and Ile are very similar. The dominant negative ions formed were the closed-shell ions (M-H)- (m/z=130), formed preferentially via the low-energy resonance at 1.23 eV. Additional negative ions with m/z=115, 114, 113, 112, 84, 82, 74, 45, 26, and 17 have been detected.

  7. Ecoinformatics (Big Data) for Agricultural Entomology: Pitfalls, Progress, and Promise.

    PubMed

    Rosenheim, Jay A; Gratton, Claudio

    2017-01-31

    Ecoinformatics, as defined in this review, is the use of preexisting data sets to address questions in ecology. We provide the first review of ecoinformatics methods in agricultural entomology. Ecoinformatics methods have been used to address the full range of questions studied by agricultural entomologists, enabled by the special opportunities associated with data sets, nearly all of which have been observational, that are larger and more diverse and that embrace larger spatial and temporal scales than most experimental studies do. We argue that ecoinformatics research methods and traditional, experimental research methods have strengths and weaknesses that are largely complementary. We address the important interpretational challenges associated with observational data sets, highlight common pitfalls, and propose some best practices for researchers using these methods. Ecoinformatics methods hold great promise as a vehicle for capitalizing on the explosion of data emanating from farmers, researchers, and the public, as novel sampling and sensing techniques are developed and digital data sharing becomes more widespread.

  8. One-shot phase-shifting phase-grating interferometry with modulation of polarization: case of four interferograms.

    PubMed

    Rodriguez-Zurita, Gustavo; Meneses-Fabian, Cruz; Toto-Arellano, Noel-Ivan; Vázquez-Castillo, José F; Robledo-Sánchez, Carlos

    2008-05-26

    An experimental setup for optical phase extraction from 2-D interferograms using a one-shot phase-shifting technique able to produce four interferograms with 90-degree phase shifts between them is presented. The system uses a common-path interferometer consisting of two windows in the input plane and a phase grating in the Fourier plane as its pupil. Each window has a birefringent wave plate attached in order to achieve nearly circular polarization of opposite rotation, one with respect to the other, after illumination with a 45-degree linearly polarized beam. At the output, interference of the fields associated with the replicated windows (diffraction orders) is achieved by a proper choice of the window spacing with respect to the grating period. The phase shifts needed to obtain four simultaneous interferograms for phase-shifting interferometry are produced by placing linear polarizers at appropriate angles on each diffraction order before detection. Some experimental results are shown.

  9. Guided-waves technique for inspecting the health of wall-covered building risers

    NASA Astrophysics Data System (ADS)

    Tse, Peter W.; Chen, J. M.; Wan, X.

    2015-03-01

    The inspection technique using guided ultrasonic waves (GW) has been proven effective in detecting pipe defects. However, as of today, the technique has not attracted much market attention because of insufficient field tests and a lack of traceable records with proven results in commercial applications. This paper presents the results obtained by using GW to inspect defects occurring in real gas risers that are commonly installed in tall buildings. The purpose of risers is to deliver gas from a building's external piping system to each household unit. The risers extend from the external wall of the building, penetrate through the concrete wall, and enter the kitchen or bathroom of each household unit. Like other in-service pipes, risers are prone to corrosion when water leaks into the concrete wall. However, corrosion occurring in the section of a riser covered by the concrete wall is difficult to inspect with conventional techniques. Hence, the GW technique was employed. Its effectiveness was tested in laboratory and on-site experiments using real risers gathered from tall buildings. The experimental results show that GW can partially penetrate through the section of the riser that is covered by the wall. The integrity of the wall-covered section of a riser can be determined from the wave signals reflected by any corroded area that may exist inside that section. Based on the reflected wave signal, one can determine the health of the wall-covered riser.

  10. Robust dynamical decoupling for quantum computing and quantum memory.

    PubMed

    Souza, Alexandre M; Alvarez, Gonzalo A; Suter, Dieter

    2011-06-17

    Dynamical decoupling (DD) is a popular technique for protecting qubits from the environment. However, unless special care is taken, experimental errors in the control pulses used in this technique can destroy the quantum information instead of preserving it. Here, we investigate techniques for making DD sequences robust against different types of experimental errors while retaining good decoupling efficiency in a fluctuating environment. We present experimental data from solid-state nuclear spin qubits and introduce a new DD sequence that is suitable for quantum computing and quantum memory.

  11. Characterization of inclusion complexes of organic ions with hydrophilic hosts by ion transfer voltammetry with solvent polymeric membranes.

    PubMed

    Olmos, José Manuel; Laborda, Eduardo; Ortuño, Joaquín Ángel; Molina, Ángela

    2017-03-01

    The quantitative characterization of inclusion complexes formed in the aqueous phase between organic ions and hydrophilic hosts by ion-transfer voltammetry with solvent polymeric membrane ion sensors is studied, both theoretically and experimentally. Simple analytical solutions are presented for determining the binding constant of the complex from the variation of the electrochemical signal with the host concentration. These solutions are valid for any voltammetric technique and for solvent polymeric membrane ion sensors comprising one polarisable interface (1PI) and also, for the first time, two polarisable interfaces (2PIs). Suitable experimental conditions and data analysis procedures are discussed and applied to the study of the interactions of a common ionic liquid cation (1-octyl-3-methyl-imidazolium) and an ionisable drug (clomipramine) with two hydrophilic cyclodextrins: α-cyclodextrin and 2-hydroxypropyl-β-cyclodextrin. The experimental study is performed via square wave voltammetry with 2PI and 1PI solvent polymeric membranes, and in both cases the electrochemical experiments enable the detection of inclusion complexes and the determination of the corresponding binding constant. Copyright © 2016 Elsevier B.V. All rights reserved.
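
The fitting step can be sketched as follows. The model below is an assumption for illustration (a common form for 1:1 complexation of a monovalent ion, where the half-wave transfer potential shifts as (RT/zF)·ln(1 + K·C) with host concentration C); the binding constant and data are synthetic, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

R, T, F, z = 8.314, 298.15, 96485.0, 1

def shift(C, K):
    """Half-wave potential shift [V] vs host concentration C [M], 1:1 binding."""
    return (R * T / (z * F)) * np.log(1 + K * C)

# Synthetic "measurements" generated with a known binding constant
K_true = 500.0                      # M^-1 (hypothetical)
C = np.linspace(0, 0.01, 10)        # host (cyclodextrin) concentration, M
dE = shift(C, K_true) + np.random.default_rng(1).normal(0, 1e-4, C.size)

K_fit, _ = curve_fit(shift, C, dE, p0=[100.0])
print(K_fit[0])  # close to the true value of 500 M^-1
```

The same nonlinear least-squares pattern applies whichever voltammetric signal (peak or half-wave potential) is tracked against host concentration.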

  12. Design and Analysis of a Static Aeroelastic Experiment

    NASA Astrophysics Data System (ADS)

    Hou, Ying-Yu; Yuan, Kai-Hua; Lv, Ji-Nan; Liu, Zi-Qiang

    2016-06-01

    Static aeroelastic experiments are very common in the United States and Russia. Their objective is to investigate the deformation and loads of an elastic structure in a flow field. Generally, a prerequisite for such an experiment is that the stiffness distribution of the structure is known. This paper describes a method for designing experimental models in the case where both the stiffness distribution and the boundary conditions of a real aircraft are uncertain. The stiffness distribution of the structure is calculated via finite element modelling and simulation, and F141 steel and rigid foam are used to build the elastic model. The design and manufacturing process of static aeroelastic models is presented: a model was designed to simulate the stiffness of the designed wings, and a set of experiments was designed to check the results. The test results show that the experimental method can effectively complete the design of the elastic model. The paper introduces the whole process of the static aeroelastic experiment and analyzes the experimental results. A static aeroelasticity experiment technique was thus developed, and an experiment model was established targeting the swept wing of a large-aspect-ratio aircraft.

  13. Effects of experimental sleep deprivation on anxiety-like behavior in animal research: Systematic review and meta-analysis.

    PubMed

    Pires, Gabriel Natan; Bezerra, Andréia Gomes; Tufik, Sergio; Andersen, Monica Levy

    2016-09-01

    Increased acute anxiety is a commonly reported behavioral consequence of sleep deprivation in humans. However, rodent studies conducted so far have produced inconsistent results, failing to reproduce the sleep-deprivation-induced anxiety observed in clinical experiments: while some reported anxiogenesis as a result of sleep deprivation, others reported anxiolysis. In the face of such inconsistencies, this article explores the effects of experimental sleep deprivation on anxiety-like behavior in animal research through a systematic review and a series of meta-analyses. A total of 50 articles met our inclusion criteria: 30 on mice, 19 on rats and one on zebrafish. Our review shows that sleep deprivation induces a decrease in anxiety-like behavior in preclinical models, the opposite of the results observed in human settings. These results were corroborated in stratified analyses according to species, sleep deprivation method and anxiety measurement technique. In conclusion, the use of animal models for evaluating the relationship between sleep deprivation and anxiety lacks translational applicability, and new experimental tools are needed to properly evaluate sleep deprivation-induced anxiogenesis in rodents. Copyright © 2016 Elsevier Ltd. All rights reserved.
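
The pooling step of such a meta-analysis can be sketched with the DerSimonian-Laird random-effects estimator, a common choice for standardized mean differences. The effect sizes below are illustrative, not the paper's data.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate, its SE, and between-study variance."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                              # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)
    Q = np.sum(w * (effects - theta_fe) ** 2)        # heterogeneity statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                    # between-study variance
    w_re = 1.0 / (variances + tau2)                  # random-effects weights
    theta_re = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return theta_re, se, tau2

effects = [-0.8, -0.3, 0.2, -0.6, -0.4]   # hypothetical SMDs (anxiolysis < 0)
variances = [0.05, 0.04, 0.06, 0.05, 0.04]
theta, se, tau2 = dersimonian_laird(effects, variances)
print(theta, se, tau2)
```

Stratified analyses (by species, deprivation method, anxiety test) simply repeat this pooling within each subgroup.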

  14. Robust hypothesis tests for detecting statistical evidence of two-dimensional and three-dimensional interactions in single-molecule measurements

    NASA Astrophysics Data System (ADS)

    Calderon, Christopher P.; Weiss, Lucien E.; Moerner, W. E.

    2014-05-01

    Experimental advances have improved the two- (2D) and three-dimensional (3D) spatial resolution that can be extracted from in vivo single-molecule measurements. This enables researchers to quantitatively infer the magnitude and directionality of forces experienced by biomolecules in their native environment. Situations where such force information is relevant range from mitosis to directed transport of protein cargo along cytoskeletal structures. Models commonly applied to quantify single-molecule dynamics assume that effective forces and velocity in the x ,y (or x ,y,z) directions are statistically independent, but this assumption is physically unrealistic in many situations. We present a hypothesis testing approach capable of determining if there is evidence of statistical dependence between positional coordinates in experimentally measured trajectories; if the hypothesis of independence between spatial coordinates is rejected, then a new model accounting for 2D (3D) interactions can and should be considered. Our hypothesis testing technique is robust, meaning it can detect interactions, even if the noise statistics are not well captured by the model. The approach is demonstrated on control simulations and on experimental data (directed transport of intraflagellar transport protein 88 homolog in the primary cilium).
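
One simple version of such an independence test can be sketched as a likelihood-ratio test on the correlation of displacement increments; the paper's actual procedure is more general and robust to model misspecification, so treat this as an illustrative baseline only.

```python
import numpy as np
from scipy.stats import chi2

def lrt_independence(track):
    """track: (n, 2) positions; LRT of corr(dx, dy) = 0 on the increments."""
    d = np.diff(track, axis=0)
    r = np.corrcoef(d[:, 0], d[:, 1])[0, 1]
    n = d.shape[0]
    stat = -n * np.log(1.0 - r ** 2)     # ~ chi2(1) under independence
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(0)
n = 2000
# Independent x/y diffusion vs. diffusion with correlated increments
ind = np.cumsum(rng.normal(size=(n, 2)), axis=0)
cov = [[1.0, 0.6], [0.6, 1.0]]
dep = np.cumsum(rng.multivariate_normal([0, 0], cov, size=n), axis=0)

stat_i, p_i = lrt_independence(ind)
stat_d, p_d = lrt_independence(dep)
print(p_i, p_d)  # p_d is tiny; the dependent track is flagged
```

If the null of independent coordinates is rejected, a model with a full 2D (3D) coupling, as advocated in the abstract, should be fitted instead.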

  15. A metadata approach for clinical data management in translational genomics studies in breast cancer.

    PubMed

    Papatheodorou, Irene; Crichton, Charles; Morris, Lorna; Maccallum, Peter; Davies, Jim; Brenton, James D; Caldas, Carlos

    2009-11-30

    In molecular profiling studies of cancer patients, experimental and clinical data are combined in order to understand the clinical heterogeneity of the disease: clinical information for each subject needs to be linked to tumour samples, macromolecules extracted, and experimental results. This may involve the integration of clinical data sets from several different sources: these data sets may employ different data definitions and some may be incomplete. In this work we employ semantic web techniques developed within the CancerGrid project, in particular the use of metadata elements and logic-based inference to annotate heterogeneous clinical information, integrate and query it. We show how this integration can be achieved automatically, following the declaration of appropriate metadata elements for each clinical data set; we demonstrate the practicality of this approach through application to experimental results and clinical data from five hospitals in the UK and Canada, undertaken as part of the METABRIC project (Molecular Taxonomy of Breast Cancer International Consortium). We describe a metadata approach for managing similarities and differences in clinical datasets in a standardized way that uses Common Data Elements (CDEs). We apply and evaluate the approach by integrating the five different clinical datasets of METABRIC.

  16. Graph Matching: Relax at Your Own Risk.

    PubMed

    Lyzinski, Vince; Fishkind, Donniell E; Fiori, Marcelo; Vogelstein, Joshua T; Priebe, Carey E; Sapiro, Guillermo

    2016-01-01

    Graph matching-aligning a pair of graphs to minimize their edge disagreements-has received widespread attention from both theoretical and applied communities over the past several decades, including combinatorics, computer vision, and connectomics. This attention can be partially attributed to the problem's computational difficulty. Although many heuristics have previously been proposed in the literature to approximately solve graph matching, very few have any theoretical support for their performance. A common technique is to relax the discrete problem to a continuous problem, thereby enabling practitioners to bring gradient-descent-type algorithms to bear. We prove that an indefinite relaxation (when solved exactly) almost always discovers the optimal permutation, while a common convex relaxation almost always fails to discover the optimal permutation. These theoretical results suggest that initializing the indefinite algorithm with the convex optimum might yield improved practical performance. Indeed, experimental results illuminate and corroborate these theoretical findings, demonstrating that excellent results are achieved in both benchmark and real data problems by amalgamating the two approaches.
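
A relaxation-based matcher of this kind is available off the shelf: SciPy's `quadratic_assignment` with `method="faq"` implements an indefinite-relaxation heuristic for the quadratic assignment problem. The sketch below (illustrative sizes and seeds) matches a random graph against a secretly permuted copy of itself and counts edge disagreements.

```python
import numpy as np
from scipy.optimize import quadratic_assignment

rng = np.random.default_rng(42)
n = 15
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1) + np.triu(A, 1).T                 # simple undirected graph

perm = rng.permutation(n)
B = A[perm][:, perm]                                # isomorphic shuffled copy

res = quadratic_assignment(A, B, method="faq",
                           options={"maximize": True, "rng": 0})
match = res.col_ind                                 # vertex i of A -> match[i] of B

def disagreements(A, B, p):
    """Number of mismatched edges under the alignment p."""
    return int(np.sum(A != B[p][:, p]) // 2)

d_match = disagreements(A, B, match)
d_naive = disagreements(A, B, np.arange(n))
print(d_match, d_naive)  # the optimized alignment is no worse than the naive one
```

The convex-initialized variant the abstract suggests would pass the convex optimum in via the `P0` option instead of the default barycenter start.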

  17. An Experimental Study for Effectiveness of Super-Learning Technique at Elementary Level in Pakistan

    ERIC Educational Resources Information Center

    Shafqat, Hussain; Muhammad, Sarwar; Imran, Yousaf; Naemullah; Inamullah

    2010-01-01

    The objective of the study was to examine the effectiveness of the super-learning technique of teaching at the elementary level. The study was conducted with 8th grade students at a public sector school. Pre-test and post-test control group designs were used. Experimental and control groups were formed randomly, the experimental group (N = 62),…

  18. Numerical model updating technique for structures using firefly algorithm

    NASA Astrophysics Data System (ADS)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique for bringing the numerical model of a structure (in civil, mechanical, automotive, marine, aerospace and related fields) into close agreement with experimental data obtained from real or prototype test structures. The present work develops the numerical model in MATLAB, using mathematical equations that define the experimental model, and employs the firefly algorithm as the optimization tool. In this updating process, a response parameter of the structure is chosen to correlate the numerical model with the experimental results. The updating variables can be material properties, geometrical properties of the model, or both. To verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame for its natural frequencies; both models are updated with the respective response values obtained from experiments. The numerical results after updating show a close match between the experimental and the numerical models.
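
A minimal one-parameter sketch of this updating loop (illustrative numbers, not the paper's models): a firefly algorithm searches for the bending stiffness EI of a cantilever so that the Euler-Bernoulli tip deflection matches a "measured" value.

```python
import numpy as np

P, L = 100.0, 2.0                         # tip load [N], beam length [m]
EI_true = 2.0e4                           # true stiffness [N m^2] (hypothetical)
delta_meas = P * L**3 / (3 * EI_true)     # simulated "experimental" deflection

def cost(EI):
    delta = P * L**3 / (3 * EI)           # Euler-Bernoulli tip deflection
    return (delta - delta_meas) ** 2

def firefly(cost, lo, hi, n=25, iters=100, beta0=1.0, gamma=1.0,
            alpha=0.05, seed=3):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n)            # firefly positions (candidate EI)
    f = np.array([cost(v) for v in x])
    best_x, best_f = x[f.argmin()], f.min()
    scale = hi - lo
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:           # move i toward the brighter firefly j
                    r2 = ((x[i] - x[j]) / scale) ** 2
                    step = (beta0 * np.exp(-gamma * r2) * (x[j] - x[i])
                            + alpha * scale * (rng.random() - 0.5))
                    x[i] = np.clip(x[i] + step, lo, hi)
                    f[i] = cost(x[i])
        alpha *= 0.95                     # shrink the random walk over time
        if f.min() < best_f:
            best_x, best_f = x[f.argmin()], f.min()
    return best_x, best_f

EI_est, _ = firefly(cost, 1e3, 1e5)
print(EI_est)  # converges toward the true stiffness
```

In the paper's multi-variable setting the same loop runs over a vector of material and geometric parameters, with the cost built from tip deflection or natural-frequency residuals.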

  19. Determination of the W W polarization fractions in p p → W ± W ± j j using a deep machine learning technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Searcy, Jacob; Huang, Lillian; Pleier, Marc -Andre

    The unitarization of the longitudinal vector boson scattering (VBS) cross section by the Higgs boson is a fundamental prediction of the Standard Model which has not been experimentally verified. One of the most promising ways to measure VBS uses events containing two leptonically decaying same-electric-charge W bosons produced in association with two jets. However, the angular distributions of the leptons in the W boson rest frame, which are commonly used to fit polarization fractions, are not readily available in this process due to the presence of two neutrinos in the final state. In this paper we present a method to alleviate this problem by using a deep machine learning technique to recover these angular distributions from measurable event kinematics and demonstrate how the longitudinal-longitudinal scattering fraction could be studied. Furthermore, we show that this method doubles the expected sensitivity when compared to previous proposals.

  20. Determination of the W W polarization fractions in p p → W ± W ± j j using a deep machine learning technique

    DOE PAGES

    Searcy, Jacob; Huang, Lillian; Pleier, Marc -Andre; ...

    2016-05-27

    The unitarization of the longitudinal vector boson scattering (VBS) cross section by the Higgs boson is a fundamental prediction of the Standard Model which has not been experimentally verified. One of the most promising ways to measure VBS uses events containing two leptonically decaying same-electric-charge W bosons produced in association with two jets. However, the angular distributions of the leptons in the W boson rest frame, which are commonly used to fit polarization fractions, are not readily available in this process due to the presence of two neutrinos in the final state. In this paper we present a method to alleviate this problem by using a deep machine learning technique to recover these angular distributions from measurable event kinematics and demonstrate how the longitudinal-longitudinal scattering fraction could be studied. Furthermore, we show that this method doubles the expected sensitivity when compared to previous proposals.

  1. Generation of phase edge singularities by coplanar three-beam interference and their detection.

    PubMed

    Patorski, Krzysztof; Sluzewski, Lukasz; Trusiak, Maciej; Pokorski, Krzysztof

    2017-02-06

    In recent years singular optics has gained considerable attention in science and technology. Up to now optical vortices (phase point dislocations) have been of main interest. This paper presents the first general analysis of formation of phase edge singularities by coplanar three-beam interference. They can be generated, for example, by three-slit interference or self-imaging in the Fresnel diffraction field of a sinusoidal grating. We derive a general condition for the ratio of amplitudes of interfering beams resulting in phase edge dislocations, lateral separation of dislocations depends on this ratio as well. Analytically derived properties are corroborated by numerical and experimental studies. We develop a simple, robust, common path optical self-imaging configuration aided by a coherent tilted reference wave and spatial filtering. Finally, we propose an automatic fringe pattern analysis technique for detecting phase edge dislocations, based on the continuous wavelet transform. Presented studies open new possibilities for developing grating based sensing techniques for precision metrology of very small phase differences.

  2. Technical support for creating an artificial intelligence system for feature extraction and experimental design

    NASA Technical Reports Server (NTRS)

    Glick, B. J.

    1985-01-01

    Techniques for classifying objects into groups or classes go under many different names including, most commonly, cluster analysis. Mathematically, the general problem is to find a best mapping of objects into an index set consisting of class identifiers. When an a priori grouping of objects exists, the process of deriving the classification rules from samples of classified objects is known as discrimination. When such rules are applied to objects of unknown class, the process is denoted classification. The specific problem addressed involves the group classification of a set of objects that are each associated with a series of measurements (ratio, interval, ordinal, or nominal levels of measurement). Each measurement produces one variable in a multidimensional variable space. Cluster analysis techniques are reviewed and methods for including geographic location, distance measures, and spatial pattern (distribution) as parameters in clustering are examined. For the case of patterning, measures of spatial autocorrelation are discussed in terms of the kind of data (nominal, ordinal, or interval scaled) to which they may be applied.
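
One spatial autocorrelation measure of the kind discussed, suitable for interval-scaled data, is Moran's I. The sketch below computes it on a regular grid with rook (edge-sharing) adjacency; the grids are illustrative.

```python
import numpy as np

def morans_i(grid):
    """Moran's I for a 2-D grid with rook adjacency and unit weights."""
    grid = np.asarray(grid, dtype=float)
    z = grid - grid.mean()
    num = 0.0   # sum over directed neighbour pairs of z_i * z_j
    W = 0       # total number of directed neighbour pairs
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    num += z[r, c] * z[rr, cc]
                    W += 1
    n = grid.size
    return (n / W) * num / np.sum(z ** 2)

# A smooth gradient is positively autocorrelated; a checkerboard is negative.
gradient = np.add.outer(np.arange(6), np.arange(6)).astype(float)
checker = np.indices((6, 6)).sum(axis=0) % 2
print(morans_i(gradient), morans_i(checker))
```

Values near +1 indicate clustering of similar values, near -1 a dispersed (checkerboard) pattern, and near 0 spatial randomness, which is how such a measure feeds into the clustering parameters described above.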

  3. Synthetic H sub 2 O-CO sub 2 fluid inclusions in spontaneously nucleated forsterite, enstatite, and diopside hosts: The method and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, E.L.; Jenkins, D.M.

    1991-04-01

    This paper describes an experimental technique for the production of primary synthetic H{sub 2}O-CO{sub 2} and H{sub 2}O-CO{sub 2}-NaCl fluid inclusions in forsterite, orthopyroxene, and clinopyroxene hosts spontaneously nucleated during the incongruent dissolution of tremolite. The host-producing reactions involve the complexation and transport of Ca, Mg, and SiO{sub 2} to the growing product phases in which the inclusions are hosted. This technique, therefore, provides the opportunity to study the effects of a complex host-producing reaction on the composition of the fluids trapped as primary inclusions in the growing host phase. In addition to providing a model for the entrapment of primary fluid inclusions, the reactions provide an excellent model of the onset of granulite facies metamorphism where, in nature, fluid inclusion compositions are commonly in disequilibrium with the mineral assemblages in which they are hosted.

  4. A magnetic field measurement technique using a miniature transducer

    NASA Technical Reports Server (NTRS)

    Fales, C. L., Jr.; Breckenridge, R. A.; Debnam, W. J., Jr.

    1974-01-01

    The development, fabrication, and application of a magnetometer are described. The magnetometer has a miniature transducer and is capable of automatic scanning. The magnetometer described here is capable of detecting static magnetic fields as low as 1.6 A/m and its transducer has an active area 0.64 mm by 0.76 mm. Thin and rugged, the transducer uses wire, 0.05 mm in diameter, which is plated with a magnetic film, enabling measurement of transverse magnetic fields as close as 0.08 mm from a surface. The magnetometer, which is simple to operate and has a fast response, uses an inexpensive clip-on milliammeter (commonly found in most laboratories) for driving and processing the electrical signals and readout. A specially designed transducer holding mechanism replaces the XY recorder ink pen; this mechanism provides the basis for an automatic scanning technique. The instrument has been applied to the measurements of magnetic fields arising from remanent magnetization in experimental plated-wire memory planes and regions of magnetic activity in geological rock specimens.

  5. Affinity-aware checkpoint restart

    DOE PAGES

    Saini, Ajay; Rezaei, Arash; Mueller, Frank; ...

    2014-12-08

    Current checkpointing techniques employed to overcome faults for HPC applications result in inferior application performance after restart from a checkpoint for a number of applications. This is due to a lack of page and core affinity awareness of the checkpoint/restart (C/R) mechanism, i.e., application tasks originally pinned to cores may be restarted on different cores, and in case of non-uniform memory architectures (NUMA), quite common today, memory pages associated with tasks on a NUMA node may be associated with a different NUMA node after restart. Here, this work contributes a novel design technique for C/R mechanisms to preserve task-to-core maps and NUMA node specific page affinities across restarts. Experimental results with BLCR, a C/R mechanism enhanced with affinity awareness, demonstrate significant performance benefits of 37%-73% for the NAS Parallel Benchmark codes and 6-12% for NAMD, with negligible overheads, compared to up to nearly four times longer execution times without affinity-aware restarts on 16 cores.
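
The task-to-core bookkeeping can be sketched in user space (Linux-specific, and only illustrative: BLCR does the real work in the kernel, and the NUMA page side would additionally need libnuma/mbind): record each task's CPU set at checkpoint time and re-pin after restart.

```python
import json
import os

def save_affinity(path="affinity.json"):
    """Record the CPU set this process may run on (checkpoint side)."""
    if not hasattr(os, "sched_getaffinity"):      # non-Linux fallback
        return None
    cpus = sorted(os.sched_getaffinity(0))
    with open(path, "w") as f:
        json.dump(cpus, f)
    return cpus

def restore_affinity(path="affinity.json"):
    """Re-pin the process to the saved CPU set (restart side)."""
    if not hasattr(os, "sched_setaffinity"):
        return None
    with open(path) as f:
        cpus = json.load(f)
    os.sched_setaffinity(0, cpus)
    return sorted(os.sched_getaffinity(0))

saved = save_affinity()
restored = restore_affinity()
print(saved, restored)  # identical CPU sets on Linux
```

Without such a step, the restarted task lands on whatever core the scheduler picks, which is exactly the affinity loss the abstract measures.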

  6. A Robust Automatic Ionospheric O/X Mode Separation Technique for Vertical Incidence Sounders

    NASA Astrophysics Data System (ADS)

    Harris, T. J.; Pederick, L. H.

    2017-12-01

    The sounding of the ionosphere by a vertical incidence sounder (VIS) is the oldest and most common technique for determining the state of the ionosphere. The automatic extraction of relevant ionospheric parameters from the ionogram image, referred to as scaling, is important for the effective utilization of data from large ionospheric sounder networks. Due to the Earth's magnetic field, the ionosphere is birefringent at radio frequencies, so a VIS will typically see two distinct returns for each frequency. For the automatic scaling of ionograms, it is highly desirable to be able to separate the two modes. Defence Science and Technology Group has developed a new VIS solution which is based on direct digital receiver technology and includes an algorithm to separate the O and X modes. This algorithm can provide high-quality separation even in difficult ionospheric conditions. In this paper we describe the algorithm and demonstrate its consistency and reliability in successfully separating 99.4% of the ionograms during a 27 day experimental campaign under sometimes demanding ionospheric conditions.

  7. Dielectrophoretic positioning of single nanoparticles on atomic force microscope tips for tip-enhanced Raman spectroscopy.

    PubMed

    Leiterer, Christian; Deckert-Gaudig, Tanja; Singh, Prabha; Wirth, Janina; Deckert, Volker; Fritzsche, Wolfgang

    2015-05-01

    Tip-enhanced Raman spectroscopy, a combination of Raman spectroscopy and scanning probe microscopy, is a powerful technique to detect the vibrational fingerprint of molecules at the nanometer scale. A metal nanoparticle at the apex of an atomic force microscope tip leads to a large enhancement of the electromagnetic field when illuminated with an appropriate wavelength, resulting in an increased Raman signal. A controlled positioning of individual nanoparticles at the tip would improve the reproducibility of the probes and is quite demanding due to usually serial and labor-intensive approaches. In contrast to commonly used submicron manipulation techniques, dielectrophoresis allows a parallel and scalable production, and provides a novel approach toward reproducible and at the same time affordable tip-enhanced Raman spectroscopy tips. We demonstrate the successful positioning of an individual plasmonic nanoparticle on a commercial atomic force microscope tip by dielectrophoresis followed by experimental proof of the Raman signal enhancing capabilities of such tips. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Improvement of mechanical performance for vibratory microgyroscope based on sense mode closed-loop control

    NASA Astrophysics Data System (ADS)

    Xiao, Dingbang; Su, Jianbin; Chen, Zhihua; Hou, Zhanqiang; Wang, Xinghua; Wu, Xuezhong

    2013-04-01

    In order to improve its structural sensitivity, a vibratory microgyroscope is commonly sealed in high vacuum to increase the drive mode quality factor. The sense mode quality factor of the microgyroscope will also increase simultaneously after vacuum sealing, which will lead to a long decay time of free response and even self-oscillation of the sense mode. As a result, the mechanical performance of the microgyroscope will be seriously degraded. In order to solve this problem, a closed-loop control technique is presented to adjust and optimize the sense mode quality factor. A velocity feedback loop was designed to increase the electric damping of the sense mode vibration. A circuit was fabricated based on this technique, and experimental results indicate that the sense mode quality factor of the microgyroscope was adjusted from 8052 to 428. The decay time of the sense mode free response was shortened from 3 to 0.5 s, and the vibration-rejecting ability of the microgyroscope was improved obviously without sensitivity degradation.
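
The damping algebra behind this closed loop can be sketched directly: a velocity feedback force F = -g·ẋ simply adds to the mechanical damping c, lowering the sense-mode quality factor Q = √(km)/c and hence the free-response decay time τ = 2Q/ω₀. The mass and resonance values below are hypothetical; only the quoted Q change is taken from the abstract.

```python
import numpy as np

m = 1e-9            # effective mass [kg] (hypothetical)
f0 = 10e3           # sense-mode resonance [Hz] (hypothetical)
w0 = 2 * np.pi * f0
k = m * w0**2

def quality(c):
    """Quality factor of a mass-spring-damper, Q = sqrt(k m) / c."""
    return np.sqrt(k * m) / c

def decay_time(Q):
    """1/e amplitude decay time of the free response."""
    return 2 * Q / w0

c_open = np.sqrt(k * m) / 8052.0                 # damping giving Q = 8052
g = np.sqrt(k * m) * (1 / 428.0 - 1 / 8052.0)    # feedback gain required

Q_closed = quality(c_open + g)
print(Q_closed, decay_time(8052) / decay_time(Q_closed))
```

The gain g is chosen so the electric damping brings Q from 8052 down to 428, shortening the decay time by the same factor.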

  9. Decryption-decompression of AES protected ZIP files on GPUs

    NASA Astrophysics Data System (ADS)

    Duong, Tan Nhat; Pham, Phong Hong; Nguyen, Duc Huu; Nguyen, Thuy Thanh; Le, Hung Duc

    2011-10-01

    AES is a strong encryption system, so decryption-decompression of AES-encrypted ZIP files requires very large computing power and techniques for reducing the password space; this makes implementations on common computing systems impractical. In [1], we reduced the original very large password search space to a much smaller one that surely contains the correct password. Based on this reduced set of passwords, in this paper we parallelize decryption, decompression and plaintext recognition for encrypted ZIP files using CUDA computing technology on NVIDIA GeForce GTX295 graphics cards to find the correct password. The experimental results show that the speed of decrypting, decompressing, recognizing plain text and finding the original password increases by about 45 to 180 times (depending on the number of GPUs) compared to sequential execution on an Intel Core 2 Quad Q8400 2.66 GHz. These results demonstrate the potential applicability of GPUs in this cryptanalysis field.

  10. Affinity-aware checkpoint restart

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saini, Ajay; Rezaei, Arash; Mueller, Frank

    Current checkpointing techniques employed to overcome faults for HPC applications result in inferior application performance after restart from a checkpoint for a number of applications. This is due to a lack of page and core affinity awareness of the checkpoint/restart (C/R) mechanism, i.e., application tasks originally pinned to cores may be restarted on different cores, and in case of non-uniform memory architectures (NUMA), quite common today, memory pages associated with tasks on a NUMA node may be associated with a different NUMA node after restart. Here, this work contributes a novel design technique for C/R mechanisms to preserve task-to-core maps and NUMA node specific page affinities across restarts. Experimental results with BLCR, a C/R mechanism enhanced with affinity awareness, demonstrate significant performance benefits of 37%-73% for the NAS Parallel Benchmark codes and 6-12% for NAMD, with negligible overheads, compared to up to nearly four times longer execution times without affinity-aware restarts on 16 cores.

  11. Effective atomic numbers of blue topaz at different gamma-rays energies obtained from Compton scattering technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuschareon, S., E-mail: tuscharoen@hotmail.com; Limkitjaroenporn, P.; Kaewkhao, J.

    2014-03-24

    Topaz occurs in a wide range of colors, including yellow, orange, brown, pink-to-violet and blue; all of these color differences are due to color centers. To improve the color of natural colorless topaz, the stone is most commonly irradiated with x- or gamma-rays, so its attenuation parameters are important for enhancement by irradiation. In this work, the mass attenuation coefficients of blue topaz were measured at different gamma-ray energies using the Compton scattering technique. The results show that the experimental values of the mass attenuation coefficient are in good agreement with theoretical values. The mass attenuation coefficients increase as the gamma-ray energy decreases, which may be attributed to the higher photon interaction probability of blue topaz at lower energy. This is the first report of the mass attenuation coefficient of blue topaz at different gamma-ray energies.
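
How a mass attenuation coefficient follows from a transmission measurement can be sketched with the Beer-Lambert law, I = I₀·exp(-(μ/ρ)·ρ·t). The intensity values and coefficient below are illustrative, not the paper's measurements; the density is an approximate handbook value for topaz.

```python
import numpy as np

def mass_attenuation(I0, I, rho, t):
    """mu/rho in cm^2/g from incident/transmitted intensities."""
    return np.log(I0 / I) / (rho * t)

rho = 3.53      # topaz density, g/cm^3 (approximate)
t = 1.0         # sample thickness, cm
mu_rho_true = 0.077                       # hypothetical value, cm^2/g
I0 = 10000.0                              # incident counts
I = I0 * np.exp(-mu_rho_true * rho * t)   # simulated transmitted counts

print(mass_attenuation(I0, I, rho, t))    # recovers the assumed coefficient
```

Repeating this at each gamma-ray energy yields the energy dependence the abstract reports, with larger μ/ρ at lower energies.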

  12. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

    New x-ray phase contrast imaging techniques without using synchrotron radiation confront a common problem from the negative effects of finite source size and limited spatial resolution. These negative effects swamp the fine phase contrast fringes and make them almost undetectable. In order to alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to the simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate for phase contrast image restoration among the above-mentioned methods; it can effectively restore the lost information of phase contrast fringes while reducing the amplified noise during Fourier regularization.
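
A 1-D sketch of the Fourier-regularization step common to these methods (Wiener-style; the wavelet denoising stage of ForWaRD is omitted). The signal and blur kernel are toy stand-ins for a phase-contrast fringe profile and the source-size blur.

```python
import numpy as np

def wiener_deconvolve(y, psf, lam=1e-3):
    """Frequency-domain regularized restoration, assuming circular blur."""
    H = np.fft.fft(psf, n=y.size)
    Y = np.fft.fft(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(X))

n = 256
x = np.zeros(n)
x[100:120] = 1.0                               # a sharp "fringe" feature
t = np.arange(n) - n / 2
psf = np.exp(-t**2 / (2 * 3.0**2))
psf = np.roll(psf / psf.sum(), -n // 2)        # centred, unit-area Gaussian blur
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))   # blurred signal

x_hat = wiener_deconvolve(y, psf)
mse_blur = np.mean((y - x) ** 2)
mse_rest = np.mean((x_hat - x) ** 2)
print(mse_blur, mse_rest)  # restoration error is much smaller than blur error
```

The regularization constant `lam` plays the role of the noise-to-signal ratio: too small and noise is amplified at frequencies where |H| is tiny, too large and fringes stay blurred, which is the trade-off ForWaRD's wavelet stage is designed to relax.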

  13. Imaging of human vertebral surface using ultrasound RF data received at each element of probe for thoracic anesthesia

    NASA Astrophysics Data System (ADS)

    Takahashi, Kazuki; Taki, Hirofumi; Onishi, Eiko; Yamauchi, Masanori; Kanai, Hiroshi

    2017-07-01

    Epidural anesthesia is a common technique for perioperative analgesia and chronic pain treatment. Since ultrasonography is insufficient for depicting the human vertebral surface, most examiners perform epidural puncture using body surface landmarks on the back, such as the spinous processes and scapulae, without any imaging, including ultrasonography. The puncture route to the epidural space at the thoracic vertebrae is much narrower than that at the lumbar vertebrae, and therefore epidural anesthesia at the thoracic vertebrae is difficult, especially for a beginner. Herein, a novel imaging method is proposed based on a bi-static imaging technique that makes use of the transmit beam width and direction. In an in vivo experimental study on human thoracic vertebrae, the proposed method succeeded in depicting the vertebral surface clearly as compared with conventional B-mode imaging and the conventional envelope method. This indicates the potential of the proposed method in visualizing the vertebral surface for the proper and safe execution of epidural anesthesia.

  14. Splash Dynamics of Falling Surfactant-Laden Droplets

    NASA Astrophysics Data System (ADS)

    Sulaiman, Nur; Buitrago, Lewis; Pereyra, Eduardo

    2017-11-01

    Splashing dynamics is a common issue in oil and gas separation technology. In this study, droplet impact at various surfactant concentrations onto solid and liquid surfaces is studied experimentally using high-speed imaging analysis. Although this area has been widely studied in the past, there is still not a good understanding of the role of surfactants in droplet impact and the characterization of the resulting splash dynamics. The experiments are conducted using tap water laden with anionic surfactant. The effects of system parameters on a single droplet impingement, such as surfactant concentration (no surfactant, and below, at and above the critical micelle concentration), parent drop diameter (2-5 mm), impact velocity and type of impact surface (thin and deep pool), are investigated. Image analysis is shown to be an effective technique for identifying the coalescence-to-splashing transition. In addition, daughter droplet size distributions are analyzed qualitatively in the events of splashing. As expected, it is observed that the formation of secondary droplets is affected by the surfactant concentration. A summary of findings will be discussed.

  15. Hierarchical tailoring of strut architecture to control permeability of additive manufactured titanium implants.

    PubMed

    Zhang, Z; Jones, D; Yue, S; Lee, P D; Jones, J R; Sutcliffe, C J; Jones, E

    2013-10-01

    Porous titanium implants are a common choice for bone augmentation. Implants for spinal fusion and repair of non-union fractures must encourage blood flow after implantation so that there is sufficient cell migration, nutrient and growth factor transport to stimulate bone ingrowth. Additive manufacturing techniques allow a large number of pore network designs. This study investigates how the design factors offered by selective laser melting technique can be used to alter the implant architecture on multiple length scales to control and even tailor the flow. Permeability is a convenient parameter that characterises flow, correlating to structure openness (interconnectivity and pore window size), tortuosity and hence flow shear rates. Using experimentally validated computational simulations, we demonstrate how additive manufacturing can be used to tailor implant properties by controlling surface roughness at a microstructual level (microns), and by altering the strut ordering and density at a mesoscopic level (millimetre). Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  16. Methods for the accurate estimation of confidence intervals on protein folding ϕ-values

    PubMed Central

    Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.

    2006-01-01

    ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714
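    The dependence the authors highlight can be made concrete with delta-method error propagation for a ratio; the covariance term below is exactly what the naive independent-errors formula drops. This is a generic sketch, not the authors' derivation:

```python
import math

def phi_with_error(ddG_ts, ddG_eq, var_ts, var_eq, cov):
    """phi = ddG(transition state) / ddG(folding) and its standard error by
    first-order (delta-method) propagation, retaining the covariance between
    the two free-energy-change estimates."""
    phi = ddG_ts / ddG_eq
    var_phi = phi**2 * (var_ts / ddG_ts**2 + var_eq / ddG_eq**2
                        - 2.0 * cov / (ddG_ts * ddG_eq))
    return phi, math.sqrt(var_phi)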

  17. Experimental infection of octopus vulgaris (Cuvier, 1797) with Photobacterium damsela subsp. piscicida. Immunohistochemical tracking of antigen and tissue responses.

    PubMed

    Bakopoulos, Vasileios; White, Daniella; Valsamidis, Michail-Aggelos; Vasilaki, Feli

    2017-03-01

    Adult common octopus individuals were intramuscularly infected with Photobacterium damsela subsp. piscicida in order to investigate whether this species is sensitive to this common and important fish pathogen. The fate of the bacterial antigens and the tissue responses of Octopus vulgaris were studied employing immunohistochemical techniques. A strong reaction at the site of injection was evident from day 2 post-infection and continued until day 14. Great numbers of hemocytes attracted to the site of infection were involved in phagocytosis of bacteria. Very early in the infection, a transition of cells to fibroblasts and an effort to isolate the infection were observed. During the course of the study, very large necrotic cells were seen at the site of infection, whereas during the later stages hemocytes with phagocytosed bacteria were observed in well-defined pockets inside the muscle tissue. None of the internal organs tested for the presence of the bacterium were positive, with the exception of the digestive gland, where antigen staining not associated with hemocyte infiltration was observed. The high doses of bacterial cells used in this experimental infection and the lack of disease signs in Octopus vulgaris suggest that, under normal conditions, the octopus is resistant to Photobacterium damsela subsp. piscicida. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Experimental cocrystal screening and solution based scale-up cocrystallization methods.

    PubMed

    Malamatari, Maria; Ross, Steven A; Douroumis, Dennis; Velaga, Sitaram P

    2017-08-01

    Cocrystals are crystalline single-phase materials composed of two or more different molecular and/or ionic compounds, generally in a stoichiometric ratio, which are neither solvates nor simple salts. If one of the components is an active pharmaceutical ingredient (API), the term pharmaceutical cocrystal is often used. There is growing interest among drug development scientists in exploring cocrystals as a means to address physicochemical, biopharmaceutical and mechanical properties and to expand the solid form diversity of the API. Conventionally, coformers are selected based on crystal engineering principles, and equimolar mixtures of API and coformers are subjected to the solution-based crystallization methods commonly employed in polymorph and salt screening. However, new knowledge of cocrystal phase behaviour in the solid state and in solution has spurred the development and implementation of more rational experimental cocrystal screening and scale-up methods. This review aims to provide an overview of commonly employed solid form screening techniques in drug development, with an emphasis on cocrystal screening methodologies. The latest developments in the understanding and use of cocrystal phase diagrams in both screening and solution-based scale-up methods are also presented. The final section reviews state-of-the-art research on solution-based scale-up cocrystallization processes for different cocrystals, as well as more recent continuous crystallization methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Why we should use simpler models if the data allow this: relevance for ANOVA designs in experimental biology.

    PubMed

    Lazic, Stanley E

    2008-07-21

    Analysis of variance (ANOVA) is a common statistical technique in physiological research, and often one or more of the independent/predictor variables, such as dose, time, or age, can be treated as a continuous rather than a categorical variable during analysis - even if subjects were randomly assigned to treatment groups. While this is not commonly done, such an approach has a number of advantages, including greater statistical power due to increased precision, a simpler and more informative interpretation of the results, greater parsimony, and the possibility of transforming the predictor variable. An example is given from an experiment where rats were randomly assigned to receive either 0, 60, 180, or 240 mg/L of fluoxetine in their drinking water, with performance on the forced swim test as the outcome measure. Dose was treated as either a categorical or a continuous variable during analysis, with the latter analysis leading to a more powerful test (p = 0.021 vs. p = 0.159). This will be true in general, and the reasons for this are discussed. There are many advantages to treating variables as continuous numeric variables if the data allow this, and this should be employed more often in experimental biology. Failure to use the optimal analysis runs the risk of missing significant effects or relationships.
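    The power gain comes from spending one numerator degree of freedom on a linear trend instead of k-1 on unordered group means. A self-contained sketch with invented numbers (not the paper's fluoxetine data): when the group means fall exactly on a line, the regression captures the same between-group sum of squares with a single degree of freedom, so its F statistic is much larger than the ANOVA F.

```python
# Hypothetical data: four dose groups sharing a deterministic zero-mean error
# pattern, so the group means are exactly linear in dose.
doses = [0, 60, 180, 240]
pert = [-0.4, -0.2, 0.0, 0.2, 0.4]
groups = [[0.01 * d + p for p in pert] for d in doses]
x = [d for d in doses for _ in pert]
y = [v for g in groups for v in g]

n, k = len(y), len(doses)
gm = sum(y) / n

# Dose as a categorical factor: one-way ANOVA F with k - 1 = 3 numerator df.
ssb = sum(len(g) * (sum(g) / len(g) - gm) ** 2 for g in groups)
ssw = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
f_categorical = (ssb / (k - 1)) / (ssw / (n - k))

# Dose as a continuous predictor: simple linear regression F with 1 df.
xm = sum(x) / n
sxx = sum((xi - xm) ** 2 for xi in x)
b = sum((xi - xm) * (yi - gm) for xi, yi in zip(x, y)) / sxx
sse = ssb + ssw - b ** 2 * sxx
f_linear = (b ** 2 * sxx) / (sse / (n - 2))
```

    Here f_categorical = 60 while f_linear = 202.5: identical between-group signal, but the continuous analysis concentrates it into one degree of freedom.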

  20. Experimental technique for simultaneous measurement of absorption-, emission cross-sections, and background loss coefficient in doped optical fibers

    NASA Astrophysics Data System (ADS)

    Karimi, M.; Seraji, F. E.

    2010-01-01

    We report a new, simple technique for the simultaneous measurement of the absorption and emission cross-sections, background loss coefficient, and dopant density of doped optical fibers with low dopant concentration. Using the proposed technique, the experimental characterization of a sample Ge-Er-doped optical fiber is presented, and the results are analyzed and compared with other reports. This technique is suitable for the production line of doped optical fibers.

  1. Experimental evaluation of the thermal properties of two tissue equivalent phantom materials.

    PubMed

    Craciunescu, O I; Howle, L E; Clegg, S T

    1999-01-01

    Tissue equivalent radio frequency (RF) phantoms provide a means for measuring the power deposition of various hyperthermia therapy applicators. Temperature measurements made in phantoms are used to verify the accuracy of various numerical approaches for computing the power and/or temperature distributions. For the numerical simulations to be accurate, the electrical and thermal properties of the materials that form the phantom should be accurately characterized. This paper reports on the experimentally measured thermal properties of two commonly used phantom materials, i.e. a rigid material with the electrical properties of human fat, and a low-concentration polymer gel with the electrical properties of human muscle. Particularities of the two samples required the design of alternative measuring techniques for the specific heat and thermal conductivity. For the specific heat, a calorimeter method is used. For the thermal diffusivity, a method derived from the standard guarded comparative-longitudinal heat flow technique was used for both materials. For the 'muscle'-like material, the thermal conductivity, density and specific heat at constant pressure were measured as k = 0.31 +/- 0.001 W/(m·K), ρ = 1026 +/- 7 kg/m³, and c_p = 4584 +/- 107 J/(kg·K). For the 'fat'-like material, the literature provides the density and specific heat, so only the thermal conductivity was measured, as k = 0.55 W/(m·K).

  2. The impact of oxidation on spore and pollen chemistry: an experimental study

    NASA Astrophysics Data System (ADS)

    Jardine, Phillip; Fraser, Wesley; Lomax, Barry; Gosling, William

    2016-04-01

    Sporomorphs (pollen and spores) form a major component of the land plant fossil record. Sporomorphs have an outer wall composed of sporopollenin, a highly durable biopolymer, the chemistry of which contains both a signature of ambient ultraviolet-B flux and taxonomic information. Despite the high preservation potential of sporopollenin in the geological record, it is currently unknown how sensitive its chemical signature is to standard palynological processing techniques. Oxidation in particular is known to cause physical degradation to sporomorphs, and it is expected that this should have a concordant impact on sporopollenin chemistry. Here, we test this by experimentally oxidizing Lycopodium (clubmoss) spores using two common oxidation techniques: acetolysis and nitric acid. We also carry out acetolysis on eight angiosperm (flowering plant) taxa to test the generality of our results. Using Fourier Transform infrared (FTIR) spectroscopy, we find that acetolysis removes labile, non-fossilizable components of sporomorphs, but has a limited impact upon the chemistry of sporopollenin under normal processing durations. Nitric acid is more aggressive and does break down sporopollenin and reorganize its chemical structure, but when limited to short treatments (i.e. ≤10 min) at room temperature sporomorphs still contain most of the original chemical signal. These findings suggest that when used carefully oxidation does not adversely affect sporopollenin chemistry, and that palaeoclimatic and taxonomic signatures contained within the sporomorph wall are recoverable from standard palynological preparations.

  3. Eddy Current Rail Inspection Using AC Bridge Techniques.

    PubMed

    Liu, Ze; Koffman, Andrew D; Waltrip, Bryan C; Wang, Yicheng

    2013-01-01

    AC bridge techniques commonly used for precision impedance measurements have been adapted to develop an eddy current sensor for rail defect detection. By using two detection coils instead of just one as in a conventional sensor, we can balance out the large baseline signals corresponding to a normal rail. We have significantly enhanced the detection sensitivity of the eddy current method by detecting and demodulating the differential signal of the two coils induced by rail defects, using a digital lock-in amplifier algorithm. We have also explored compensating for the lift-off effect of the eddy current sensor due to vibrations by using the summing signal of the detection coils to measure the lift-off distance. The dominant component of the summing signal is a constant resulting from direct coupling from the excitation coil, which can be experimentally determined. The remainder of the summing signal, which decreases as the lift-off distance increases, is induced by the secondary eddy current. This dependence on the lift-off distance is used to calibrate the differential signal, allowing for a more accurate characterization of the defects. Simulated experiments on a sample rail have been performed using a computer controlled X-Y moving table with the X-axis mimicking the train's motion and the Y-axis mimicking the train's vibrational bumping. Experimental results demonstrate the effectiveness of the new detection method.
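    The digital lock-in step described above amounts to mixing the differential coil signal with quadrature references at the excitation frequency and low-pass filtering (here, simple averaging). A generic sketch on a synthetic signal; all parameter values are invented, not those of the rail sensor:

```python
import math

def lock_in(signal, fs, f_ref):
    """Digital lock-in demodulation: multiply by quadrature references at
    f_ref and average, recovering amplitude and phase of that component."""
    n = len(signal)
    i = sum(s * math.sin(2 * math.pi * f_ref * k / fs)
            for k, s in enumerate(signal)) / n
    q = sum(s * math.cos(2 * math.pi * f_ref * k / fs)
            for k, s in enumerate(signal)) / n
    return 2.0 * math.hypot(i, q), math.atan2(q, i)

# Synthetic differential-coil signal: amplitude 2.0, phase 0.3 rad, 10 kHz
# excitation sampled at 1 MHz over an integer number of periods.
fs, f0 = 1.0e6, 1.0e4
sig = [2.0 * math.sin(2 * math.pi * f0 * k / fs + 0.3) for k in range(10000)]
```

    Averaging over whole periods rejects components at other frequencies, which is why the balanced-bridge baseline can be suppressed so effectively.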

  4. A Course in Heterogeneous Catalysis: Principles, Practice, and Modern Experimental Techniques.

    ERIC Educational Resources Information Center

    Wolf, Eduardo E.

    1981-01-01

    Outlines a multidisciplinary course which comprises fundamental, practical, and experimental aspects of heterogeneous catalysis. The course structure is a combination of lectures and demonstrations dealing with the use of spectroscopic techniques for surface analysis. (SK)

  5. Experimental Study of Residual Stresses in Rail by Moire Interferometry

    DOT National Transportation Integrated Search

    1993-09-01

    The residual stresses in rails produced by rolling cycles are studied experimentally by moire interferometry. The dissection technique is adopted for this investigation. The basic principle of the dissection technique is that the residual stress is r...

  6. Crystal structure determination and analysis of 11S coconut allergen: Cocosin.

    PubMed

    Vajravijayan, S; Nandhagopal, N; Gunasekaran, K

    2017-12-01

    Allergy is an abnormal immune response against an innocuous target. Food allergy is an adverse reaction caused by common foods most well-known being those involving peanuts. Apart from mono sensitized food allergy, cross-reactivity with other food allergens is also commonly observed. To understand the phenomenon of cross-reactivity related to immune response, three dimensional structures of the allergens and their antigenic epitopes has to be analysed in detail. The X-ray crystal structure of Cocosin, a common 11S food allergen from coconut, has been determined at 2.2Å resolution using molecular replacement technique. The monomer of 52kDa is composed of two β-jelly roll domains, one with acidic and the other with basic character. The structure shows hexameric association with two trimers facing each other. Though the overall structure of Cocosin is similar to other 11S allergens, the occurrence of experimentally determined epitopes of the peanut allergen Ara h 3 at flexible as well as variable regions could be the reason for the clinically reported result of cross-reactivity that the peanut allergic patients are not sensitized with coconut allergen. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Spectral Analysis and Experimental Modeling of Ice Accretion Roughness

    NASA Technical Reports Server (NTRS)

    Orr, D. J.; Breuer, K. S.; Torres, B. E.; Hansman, R. J., Jr.

    1996-01-01

    A self-consistent scheme for relating wind tunnel ice accretion roughness to the resulting enhancement of heat transfer is described. First, a spectral technique of quantitative analysis of early ice roughness images is reviewed. The image processing scheme uses a spectral estimation technique (SET) which extracts physically descriptive parameters by comparing scan lines from the experimentally-obtained accretion images to a prescribed test function. Analysis using this technique for both streamwise and spanwise directions of data from the NASA Lewis Icing Research Tunnel (IRT) are presented. An experimental technique is then presented for constructing physical roughness models suitable for wind tunnel testing that match the SET parameters extracted from the IRT images. The icing castings and modeled roughness are tested for enhancement of boundary layer heat transfer using infrared techniques in a "dry" wind tunnel.
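    The spectral estimation idea, in its simplest form, is picking the dominant spatial frequency of a scan line. The brute-force DFT toy below illustrates that step only; it is not the SET algorithm itself, whose prescribed test function and parameters are not given here:

```python
import math

def dominant_wavelength(profile, dx):
    """Return the spatial wavelength of the strongest DFT component of a
    roughness scan line sampled at spacing dx (DC bin excluded)."""
    n = len(profile)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(h * math.cos(2 * math.pi * k * j / n) for j, h in enumerate(profile))
        im = sum(h * math.sin(2 * math.pi * k * j / n) for j, h in enumerate(profile))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return n * dx / best_k

# Synthetic scan line: 2 mm roughness wavelength sampled every 0.1 mm.
scan = [0.05 * math.sin(2 * math.pi * j * 0.1 / 2.0) for j in range(100)]
```

    Applied per scan line in the streamwise and spanwise directions, this kind of estimate yields the physically descriptive roughness parameters used to build matched wind-tunnel models.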

  8. A fluctuation-induced plasma transport diagnostic based upon fast-Fourier transform spectral analysis

    NASA Technical Reports Server (NTRS)

    Powers, E. J.; Kim, Y. C.; Hong, J. Y.; Roth, J. R.; Krawczonek, W. M.

    1978-01-01

    A diagnostic, based on fast Fourier-transform spectral analysis techniques, that provides experimental insight into the relationship between the experimentally observable spectral characteristics of the fluctuations and the fluctuation-induced plasma transport is described. The model upon which the diagnostic technique is based and its experimental implementation are discussed. Some characteristic results obtained during the course of an experimental study of fluctuation-induced transport in the electric-field-dominated NASA Lewis bumpy torus plasma are presented.

  9. DH and ESPI laser interferometry applied to the restoration shrinkage assessment

    NASA Astrophysics Data System (ADS)

    Campos, L. M. P.; Parra, D. F.; Vasconcelos, M. R.; Vaz, M.; Monteiro, J.

    2014-01-01

    In dental restorations, postoperative marginal leakage is commonly associated with polymerization shrinkage. Consequently, the longevity and quality of a restorative treatment depend on the shrinkage of the composite filling during polymerization. In this work, the development of new techniques for evaluating these effects under light-induced polymerization of dental nanocomposite fillings is reported. The composite resins, activated by visible light, initiate the polymerization process by absorbing light at wavelengths around 470 nm. The techniques employed in the contraction assessment were digital holography (DH) and electronic speckle pattern interferometry (ESPI), both based on laser interferometry. A satisfactory resolution was achieved in the non-contact displacement-field measurements on small objects such as the experimental dental samples. Following a specific clinical protocol, natural teeth (human mandibular premolars) were used. A class I cavity was drilled and restored with nanocomposite material according to Black's principles. The polymerization was monitored by DH and ESPI in real time during the cure reaction of the restoration. The total displacement of the material relative to the tooth wall was 3.7 μm (natural tooth). The technique showed the entire tooth surface (wall) deforming during polymerization shrinkage.
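    For out-of-plane-sensitive DH/ESPI with near-normal illumination and observation, the standard phase-to-displacement relation is d = Δφ·λ/(4π), i.e. λ/2 of displacement per 2π fringe. A minimal sketch (the 633 nm wavelength is an assumption for illustration; the abstract does not state the laser used):

```python
import math

def out_of_plane_displacement(delta_phi, wavelength):
    """Out-of-plane displacement from the unwrapped interferometric phase
    change, assuming near-normal illumination and observation geometry."""
    return delta_phi * wavelength / (4.0 * math.pi)

# One full 2*pi fringe at an assumed 633 nm corresponds to lambda/2.
d = out_of_plane_displacement(2.0 * math.pi, 633e-9)
```

    At 633 nm, the reported 3.7 μm total displacement would correspond to roughly a dozen 2π fringes accumulated over the cure.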

  10. DrugECs: An Ensemble System with Feature Subspaces for Accurate Drug-Target Interaction Prediction

    PubMed Central

    Jiang, Jinjian; Wang, Nian; Zhang, Jun

    2017-01-01

    Background Drug-target interaction is key in drug discovery, especially in the design of new lead compounds. However, finding a new lead compound for a specific target is complicated and error-prone. Computational techniques are therefore commonly adopted in drug design, as they can save time and costs to a significant extent. Results To address the issue, a new prediction system is proposed in this work to identify drug-target interactions. First, drug-target pairs are encoded with a fragment technique and the software "PaDEL-Descriptor." The fragment technique encodes the target proteins: it divides each protein sequence into several fragments in order and encodes each fragment with several physicochemical properties of amino acids. The software "PaDEL-Descriptor" creates encoding vectors for the drug molecules. Second, the dataset of drug-target pairs is resampled into several overlapping subsets, which are then input into kNN (k-Nearest Neighbor) classifiers to build an ensemble system. Conclusion Experimental results on the drug-target dataset showed that our method performs better and runs faster than state-of-the-art predictors. PMID:28744468
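    The ensemble idea, resampling overlapping subsets and majority-voting kNN predictions across them, can be sketched in a few lines. The 2-D toy features, labels and subset fraction below are invented for illustration; the paper's descriptors are high-dimensional:

```python
import random

def knn_predict(train, query, k=3):
    """Plain kNN: majority vote among the k nearest training points
    (squared Euclidean distance)."""
    near = sorted(train, key=lambda p: sum((a - b) ** 2
                                           for a, b in zip(p[0], query)))[:k]
    votes = [label for _, label in near]
    return max(set(votes), key=votes.count)

def ensemble_predict(train, query, n_models=5, subset_frac=0.7, seed=0):
    """Fit each kNN member on a random overlapping subset of the training
    set, then take the majority vote of the members' predictions."""
    rng = random.Random(seed)
    m = max(1, int(subset_frac * len(train)))
    preds = [knn_predict(rng.sample(train, m), query) for _ in range(n_models)]
    return max(set(preds), key=preds.count)

# Toy "interacting" (1) vs "non-interacting" (0) pairs in a 2-D feature space.
data = [((x, y), 1) for x in (0.0, 0.2, 0.4) for y in (0.0, 0.2, 0.4)]
data += [((x, y), 0) for x in (2.0, 2.2, 2.4) for y in (2.0, 2.2, 2.4)]
```

    Overlapping subsets keep each member competent while injecting enough diversity that the vote is more robust than any single classifier.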

  11. Sentiment analysis: a comparison of deep learning neural network algorithm with SVM and naïve Bayes for Indonesian text

    NASA Astrophysics Data System (ADS)

    Calvin Frans Mariel, Wahyu; Mariyah, Siti; Pramana, Setia

    2018-03-01

    Deep learning is a new era of machine learning techniques that essentially imitate the structure and function of the human brain. It is a development of deeper artificial neural networks (ANNs) that use more than one hidden layer. Deep learning neural networks have a great ability to recognize patterns in various data types such as pictures, audio, and text. In this paper, the authors try to measure that ability by applying the algorithm to text classification. The classification task here considers the sentiment expressed in a text, which is also called sentiment analysis. Using several combinations of text preprocessing and feature extraction techniques, we compare the modelling results of the deep learning neural network with those of two other commonly used algorithms, naïve Bayes and the Support Vector Machine (SVM). The comparison uses Indonesian text data with balanced and unbalanced sentiment composition. Based on the experimental simulation, the deep learning neural network clearly outperforms naïve Bayes and the SVM and offers a better F1 score, while the feature extraction technique that most improves the modelling results is the bigram.
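    As a baseline reference, the naïve-Bayes-with-bigrams combination can be sketched as a multinomial model with Laplace smoothing. The toy Indonesian phrases and "pos"/"neg" labels below are invented for illustration, not the paper's corpus:

```python
import math
from collections import Counter

def bigrams(text):
    toks = text.lower().split()
    return list(zip(toks, toks[1:]))

def train_nb(docs):
    """docs: list of (text, label). Returns per-class log-priors and
    Laplace-smoothed bigram log-likelihood tables."""
    counts, n_docs, vocab = {}, Counter(), set()
    for text, label in docs:
        bg = bigrams(text)
        counts.setdefault(label, Counter()).update(bg)
        n_docs[label] += 1
        vocab.update(bg)
    model = {}
    for label, c in counts.items():
        denom = sum(c.values()) + len(vocab)
        model[label] = (math.log(n_docs[label] / len(docs)),
                        {g: math.log((c[g] + 1) / denom) for g in vocab},
                        math.log(1.0 / denom))
    return model

def classify(model, text):
    def score(entry):
        prior, table, unseen = entry
        return prior + sum(table.get(g, unseen) for g in bigrams(text))
    return max(model, key=lambda lbl: score(model[lbl]))

# Invented toy corpus ("very good" / "very bad" style phrases).
docs = [("pelayanan sangat bagus", "pos"), ("produk sangat bagus sekali", "pos"),
        ("pelayanan sangat buruk", "neg"), ("produk sangat buruk sekali", "neg")]
model = train_nb(docs)
```

    Bigrams capture short-range word order (e.g. negations) that unigram features miss, which is consistent with the paper's finding that bigram features help.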

  12. Evaluation of respiratory system mechanics in mice using the forced oscillation technique.

    PubMed

    McGovern, Toby K; Robichaud, Annette; Fereydoonzad, Liah; Schuessler, Thomas F; Martin, James G

    2013-05-15

    The forced oscillation technique (FOT) is a powerful, integrative and translational tool permitting the experimental assessment of lung function in mice in a comprehensive, detailed, precise and reproducible manner. It provides measurements of respiratory system mechanics through the analysis of pressure and volume signals acquired in reaction to predefined, small amplitude, oscillatory airflow waveforms, which are typically applied at the subject's airway opening. The present protocol details the steps required to adequately execute forced oscillation measurements in mice using a computer-controlled piston ventilator (flexiVent; SCIREQ Inc, Montreal, Qc, Canada). The description is divided into four parts: preparatory steps, mechanical ventilation, lung function measurements, and data analysis. It also includes details of how to assess airway responsiveness to inhaled methacholine in anesthetized mice, a common application of this technique which also extends to other outcomes and various lung pathologies. Measurements obtained in naïve mice as well as from an oxidative-stress driven model of airway damage are presented to illustrate how this tool can contribute to a better characterization and understanding of studied physiological changes or disease models as well as to applications in new research areas.
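    At each frequency of the oscillatory perturbation, respiratory input impedance is the ratio of the pressure and flow Fourier coefficients, Zrs(f) = P(f)/V̇(f). A minimal single-frequency sketch on a synthetic, purely resistive "lung" (all values invented); a real FOT analysis would evaluate Zrs at all perturbation frequencies and fit a model such as the constant-phase model:

```python
import cmath
import math

def impedance_at(pressure, flow, fs, freq):
    """Input impedance at one oscillation frequency: ratio of the pressure
    and flow Fourier coefficients at that frequency."""
    n = len(pressure)
    w = [cmath.exp(-2j * math.pi * freq * k / fs) for k in range(n)]
    p_coef = sum(p * wk for p, wk in zip(pressure, w))
    v_coef = sum(v * wk for v, wk in zip(flow, w))
    return p_coef / v_coef

# Synthetic single-compartment system: P = R * flow, with R = 0.5 (arbitrary
# units), an 8 Hz oscillation sampled at 400 Hz for one second.
fs, f0, R = 400.0, 8.0, 0.5
flow = [math.sin(2 * math.pi * f0 * k / fs) for k in range(400)]
pres = [R * v for v in flow]
```

    For this purely resistive system the recovered impedance is real and equal to R; elastance and inertance would appear as an imaginary part varying with frequency.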

  13. Methodology for assessing the probability of corrosion in concrete structures on the basis of half-cell potential and concrete resistivity measurements.

    PubMed

    Sadowski, Lukasz

    2013-01-01

    In recent years, the corrosion of steel reinforcement has become a major problem in the construction industry. Therefore, much attention has been given to developing methods of predicting the service life of reinforced concrete structures. The progress of corrosion cannot be visually assessed until a crack or a delamination appears. The corrosion process can, however, be tracked using several electrochemical techniques. Most commonly, the half-cell potential measurement technique is used for this purpose. However, it is generally accepted that it should be supplemented with other techniques. Hence, a methodology for assessing the probability of corrosion in concrete slabs by means of a combination of two methods, the half-cell potential method and the concrete resistivity method, is proposed. An assessment of the probability of corrosion in reinforced concrete structures carried out using the proposed methodology is presented. 200 mm thick, 750 mm × 750 mm reinforced concrete slab specimens were investigated. The potential E_corr and concrete resistivity ρ were measured at each point of the applied grid. The experimental results indicate that the proposed methodology can be successfully used to assess the probability of corrosion in concrete structures.
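    A per-grid-point combination of the two measurements can be sketched with the commonly quoted interpretation bands: the ASTM C876 half-cell potential thresholds (vs. Cu/CuSO4) and rule-of-thumb resistivity bands. These thresholds are generic literature values, not necessarily the mapping used in this paper:

```python
def corrosion_assessment(e_corr_mv, resistivity_kohm_cm):
    """Qualitative corrosion assessment at one grid point from the half-cell
    potential (mV vs. Cu/CuSO4) and Wenner resistivity (kOhm.cm), using the
    commonly quoted ASTM C876 bands and rule-of-thumb resistivity bands."""
    if e_corr_mv > -200:
        probability = "low (<10%)"
    elif e_corr_mv >= -350:
        probability = "uncertain"
    else:
        probability = "high (>90%)"
    if resistivity_kohm_cm > 20:
        rate_risk = "low"
    elif resistivity_kohm_cm >= 10:
        rate_risk = "moderate"
    else:
        rate_risk = "high"
    return probability, rate_risk
```

    The potential band estimates whether corrosion is likely to be active, while the resistivity band indicates how fast active corrosion could proceed; the combination is what makes the joint mapping more informative than either test alone.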

  14. A New Approach to Predict user Mobility Using Semantic Analysis and Machine Learning.

    PubMed

    Fernandes, Roshan; D'Souza G L, Rio

    2017-10-19

    Mobility prediction identifies the future location of a user in a given network, and it provides solutions to many day-to-day problems: it supports seamless handovers in wireless networks, enables better location-based services, and allows paths to be recalculated in Mobile Ad hoc Networks (MANETs). In the present study, a framework is presented which predicts user mobility both in the presence and in the absence of mobility history. The naïve Bayesian classification algorithm and a Markov model are used to predict the user's future location when mobility history is available. In the absence of mobility patterns, an attempt is made to predict the future location using Short Message Service (SMS) content and instantaneous geographical coordinates. The proposed technique is compared, on common performance metrics, with the widely used Markov chain model. From the experimental results it is evident that the techniques used in this work give better results when both spatial and temporal information are considered. The proposed method also predicts a user's future location reasonably well in the absence of mobility history. The proposed work is applied to predicting the mobility of medical rescue vehicles and to social security systems.
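    The Markov-chain baseline used for comparison can be sketched as transition counting over the location history; the location labels below are invented for illustration:

```python
from collections import Counter, defaultdict

def train_markov(trajectory):
    """First-order Markov model: count location-to-location transitions."""
    model = defaultdict(Counter)
    for a, b in zip(trajectory, trajectory[1:]):
        model[a][b] += 1
    return model

def predict_next(model, current):
    """Most frequently observed successor of the current location, or None
    if the location was never seen in the history."""
    if current not in model:
        return None
    return model[current].most_common(1)[0][0]

history = ["home", "office", "gym", "home", "office", "home", "office", "gym"]
```

    The None branch is exactly the cold-start gap the paper's SMS-plus-coordinates approach is meant to cover: a pure Markov model has nothing to say about a location with no history.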

  15. Fizeau simultaneous phase-shifting interferometry based on extended source

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Zhu, Qiudong; Hou, Yinlong; Cao, Zheng

    2016-09-01

    The coaxial Fizeau simultaneous phase-shifting interferometer plays an important role in many fields owing to its long optical path, miniaturization, and elimination of reference-surface high-frequency error. Based on matching the coherence of an extended source to the interferometer, orthogonally polarized reference and measurement waves can be obtained by Fizeau interferometry with a Michelson interferometer placed in front. By matching the spatial coherence length between the preposed interferometer and the primary interferometer, high-contrast interference fringes can be obtained and spurious interference fringes can be eliminated. Thus, the problem of separating the measurement and reference surfaces in the common-path Fizeau interferometer is solved. Numerical simulations and a proof-of-principle experiment were conducted to verify the feasibility of the extended-source interferometer. A simulation platform was established using DDE (dynamic data exchange) to connect Zemax and Matlab: the extended-source interferometer is modeled in Zemax, while Matlab code automatically adjusts the field parameters of the optical system and calculates the visibility of the interference fringes. Alongside the simulation, an experimental platform for the extended-source interferometer was built. After experimentally studying how the granularity of the scattering screen influences the interference fringes, a suitable granularity was determined. Using the simulation and experimental platforms, the impact of the interferometer's imaging-system and collimation-system aberrations on phase measurement accuracy was analyzed. Comparison of the visibility curves from experiment and simulation shows that the experimental results agree with the theoretical ones.
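    The fringe-visibility figure computed by such a simulation is the Michelson contrast of the intensity pattern. A minimal sketch (the two-beam intensities below are invented):

```python
import math

def fringe_visibility(intensity):
    """Michelson fringe contrast of an intensity scan across the fringes:
    V = (Imax - Imin) / (Imax + Imin)."""
    i_max, i_min = max(intensity), min(intensity)
    return (i_max - i_min) / (i_max + i_min)

# Two-beam interference with a 4:1 intensity ratio gives
# V = 2*sqrt(I1*I2)/(I1+I2) = 0.8.
i1, i2 = 4.0, 1.0
scan = [i1 + i2 + 2 * math.sqrt(i1 * i2) * math.cos(2 * math.pi * j / 50)
        for j in range(200)]
```

    Partial coherence of the extended source multiplies this contrast by the modulus of the degree of coherence, which is why matching the coherence lengths of the two interferometers is what keeps the wanted fringes at high visibility.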

  16. Anomalous transport in the crowded world of biological cells

    NASA Astrophysics Data System (ADS)

    Höfling, Felix; Franosch, Thomas

    2013-04-01

    A ubiquitous observation in cell biology is that the diffusive motion of macromolecules and organelles is anomalous, and a description simply based on the conventional diffusion equation with diffusion constants measured in dilute solution fails. This is commonly attributed to macromolecular crowding in the interior of cells and in cellular membranes, summarizing their densely packed and heterogeneous structures. The most familiar phenomenon is a sublinear, power-law increase of the mean-square displacement (MSD) as a function of the lag time, but there are other manifestations like strongly reduced and time-dependent diffusion coefficients, persistent correlations in time, non-Gaussian distributions of spatial displacements, heterogeneous diffusion and a fraction of immobile particles. After a general introduction to the statistical description of slow, anomalous transport, we summarize some widely used theoretical models: Gaussian models like fractional Brownian motion and Langevin equations for visco-elastic media, the continuous-time random walk model, and the Lorentz model describing obstructed transport in a heterogeneous environment. Particular emphasis is put on the spatio-temporal properties of the transport in terms of two-point correlation functions, dynamic scaling behaviour, and how the models are distinguished by their propagators even if the MSDs are identical. Then, we review the theory underlying commonly applied experimental techniques in the presence of anomalous transport like single-particle tracking, fluorescence correlation spectroscopy (FCS) and fluorescence recovery after photobleaching (FRAP). We report on the large body of recent experimental evidence for anomalous transport in crowded biological media: in cyto- and nucleoplasm as well as in cellular membranes, complemented by in vitro experiments where a variety of model systems mimic physiological crowding conditions. 
Finally, computer simulations are discussed which play an important role in testing the theoretical models and corroborating the experimental findings. The review is completed by a synthesis of the theoretical and experimental progress identifying open questions for future investigation.
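    The hallmark sublinear MSD can be quantified by the exponent of a log-log fit, MSD(t) ~ t^α, with α < 1 indicating subdiffusion. A generic sketch on synthetic data (the α = 0.7 value is invented for illustration):

```python
import math

def msd_exponent(times, msd):
    """Anomalous-diffusion exponent alpha from an ordinary least-squares fit
    of log(MSD) against log(t); alpha = 1 is normal diffusion."""
    lx = [math.log(t) for t in times]
    ly = [math.log(m) for m in msd]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

# Synthetic subdiffusive MSD with exponent 0.7.
ts = [0.1 * (i + 1) for i in range(50)]
alpha = msd_exponent(ts, [0.5 * t ** 0.7 for t in ts])
```

    As the review stresses, an identical α can arise from physically distinct models (fractional Brownian motion, CTRW, Lorentz-type obstruction), so the exponent alone does not identify the mechanism.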

  17. Modeling and simulation of axisymmetric stagnation flames

    NASA Astrophysics Data System (ADS)

    Sone, Kazuo

    Laminar flame modeling is an important element in turbulent combustion research. The accuracy of a turbulent combustion model is highly dependent upon our understanding of laminar flames and their behavior in many situations. How much we understand combustion can only be measured by how well the model describes and predicts combustion phenomena. One of the most commonly used methane combustion models is GRI-Mech 3.0. However, how well the model describes the reacting flow phenomena is still uncertain even after many attempts to validate the model or quantify uncertainties. In the present study, the behavior of laminar flames under different aerodynamic and thermodynamic conditions is studied numerically in a stagnation-flow configuration. In order to make such a numerical study possible, the spectral element method is reformulated to accommodate the large density variations in methane reacting flows. In addition, a new axisymmetric basis function set for the spectral element method that satisfies the correct behavior near the axis is developed, and efficient integration techniques are developed to accurately model axisymmetric reacting flow within a reasonable amount of computational time. The numerical method is implemented using an object-oriented programming technique, and the resulting computer program is verified with several different verification methods. The present study then identifies discrepancies in the commonly used GRI-Mech 3.0 chemical kinetics model through a direct simulation of laboratory flames that allows direct comparison to experimental data. It is shown that the methane combustion model based on GRI-Mech 3.0 works well for methane-air mixtures near stoichiometry. However, GRI-Mech 3.0 leads to an overprediction of laminar flame speed for lean mixtures and an underprediction for rich mixtures. 
This result differs slightly from the conclusion drawn in previous work, in which experimental data were compared with one-dimensional numerical solutions. Detailed analysis reveals that the flame speed is sensitive to even slight flame-front curvature as well as its finite extension in the radial direction. Neither of these effects can be incorporated in one-dimensional flow modeling.

  18. Time-resolved confocal fluorescence microscopy: novel technical features and applications for FLIM, FRET and FCS using a sophisticated data acquisition concept in TCSPC

    NASA Astrophysics Data System (ADS)

    Koberling, Felix; Krämer, Benedikt; Kapusta, Peter; Patting, Matthias; Wahl, Michael; Erdmann, Rainer

    2007-05-01

In recent years, time-resolved fluorescence measurement and analysis techniques have become a standard in single-molecule microscopy. However, in terms of equipment and experimental implementation they are typically still an add-on and offer only limited possibilities to study mutual dependencies with common intensity and spectral information. In contrast, we use a specially designed instrument with an unrestricted photon data acquisition approach, which stores spatial, temporal, spectral and intensity information in a generalized format preserving the full experimental information. This format allows us not only to easily study dependencies between various fluorescence parameters but also to use, for example, the photon arrival time for sorting and weighting the detected photons to improve the significance in common FCS and FRET analysis schemes. The power of this approach is demonstrated for different techniques. In FCS experiments, the accuracy of concentration determination can be easily improved by a simple time-gated photon analysis that suppresses the fast-decaying background signal. A more detailed analysis of the arrival times even allows separation of FCS curves for species that differ in their fluorescence lifetime but, for example, cannot be distinguished spectrally. In multichromophoric systems such as a photonic wire undergoing unidirectional multistep FRET, the lifetime information significantly complements the intensity-based analysis and helps to assign the respective FRET partners. Moreover, together with pulsed excitation, the time-correlated analysis directly enables alternating multi-colour laser excitation. This pulsed interleaved excitation (PIE) can be used to identify and rule out inactive FRET molecules, which cause interfering artefacts in standard FRET efficiency analysis. We used a piezo-scanner-based confocal microscope with compact picosecond diode lasers as excitation sources. 
    The timing performance can be significantly increased by using new SPAD detectors which enable, in conjunction with new TCSPC electronics, an overall IRF width of less than 120 ps while maintaining single-molecule sensitivity.
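    The time-gated photon analysis described above can be sketched with synthetic TCSPC micro-times. The lifetimes, gate position and photon counts below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical TCSPC micro-times (delay after the excitation pulse, in ns) for
# a long-lived fluorophore and a fast-decaying background component.
n = 100_000
signal = rng.exponential(3.5, n)       # fluorescence, ~3.5 ns lifetime (assumed)
background = rng.exponential(0.3, n)   # scattered/background light, ~0.3 ns (assumed)

# Time gate: keep only photons arriving later than 1 ns after the pulse,
# which discards most of the short-lived background.
gate_ns = 1.0
frac_signal_kept = np.mean(signal > gate_ns)
frac_background_kept = np.mean(background > gate_ns)
print(f"signal retained:     {frac_signal_kept:.2f}")     # ~exp(-1/3.5) ≈ 0.75
print(f"background retained: {frac_background_kept:.2f}")  # ~exp(-1/0.3) ≈ 0.04
```

    Because the survival fraction past the gate falls exponentially with 1/lifetime, even a modest gate removes most of the fast background while keeping the bulk of the signal photons, which is why gating sharpens FCS concentration estimates.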

  19. Energy transformation, transfer, and release dynamics in high speed turbulent flows

    DTIC Science & Technology

    2017-03-01

    The experimental techniques developed allowed non-intrusive measurement of convecting velocity fields in supersonic flows and were used for validation of LES of...by the absence of (near-)normal shocks that normal injection generates. New experimental techniques were developed that allowed the non-intrusive...and comprised several parts in which significant accomplishments were made: 1. An experimental effort focusing on investigations in: a

  20. A refined electrofishing technique for collecting Silver Carp: Implications for management

    USGS Publications Warehouse

    Bouska, Wesley W.; Glover, David C.; Bouska, Kristen; Garvey, James E.

    2017-01-01

    Detecting nuisance species at low abundance or in newly established areas is critical to developing pest management strategies. Due to their sensitivity to disturbance and erratic jumping behavior, Silver Carp Hypophthalmichthys molitrix can be difficult to collect with traditional sampling methods. We compared catch per unit effort (CPUE) of all species from a Long Term Resource Monitoring (LTRM) electrofishing protocol to an experimental electrofishing technique designed to minimize Silver Carp evasion through tactical boat maneuvering and selective application of power. Differences in CPUE between electrofishing methods were detected for 2 of 41 species collected across 2 years of sampling at 20 sites along the Illinois River. The mean catch rate of Silver Carp using the experimental technique was 2.2 times the mean catch rate of the LTRM electrofishing technique; the increased capture efficiency at low relative abundance emphasizes the utility of this method for early detection. The experimental electrofishing also collected slightly larger Silver Carp (mean: 510.7 mm TL versus 495.2 mm TL), and nearly four times as many Silver Carp independently jumped into the boat during experimental transects. Novel sampling approaches, such as the experimental electrofishing technique used in this study, should be considered to increase probability of detection for aquatic nuisance species.
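    Catch per unit effort is itself a simple ratio; the sketch below shows how a CPUE comparison between two protocols would be computed. The fish counts and effort values are hypothetical, not data from this study:

```python
# Hypothetical electrofishing data: (catch, effort in minutes) per transect
# for a standard (LTRM-style) and an experimental protocol.
ltrm = [(4, 15), (7, 15), (2, 15), (5, 15)]
experimental = [(11, 15), (9, 15), (14, 15), (10, 15)]

def cpue(transects, per=60):
    """Catch per unit effort, expressed per `per` minutes of electrofishing."""
    catch = sum(c for c, _ in transects)
    effort = sum(m for _, m in transects)
    return per * catch / effort

ratio = cpue(experimental) / cpue(ltrm)
print(f"LTRM CPUE: {cpue(ltrm):.1f}/h, experimental: {cpue(experimental):.1f}/h, "
      f"ratio: {ratio:.2f}")
```

    Pooling catch and effort before dividing (rather than averaging per-transect ratios) keeps transects with little effort from dominating the estimate.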

  1. Reference genes for quantitative PCR in the adipose tissue of mice with metabolic disease.

    PubMed

    Almeida-Oliveira, Fernanda; Leandro, João G B; Ausina, Priscila; Sola-Penna, Mauro; Majerowicz, David

    2017-04-01

    Obesity and diabetes are metabolic diseases of increasing prevalence. The dynamics of gene expression associated with these diseases is fundamental to identifying genes involved in related biological processes. qPCR is a sensitive technique for mRNA quantification and the most commonly used method in gene-expression studies. However, the reliability of its results is directly influenced by data normalization. As reference genes are the major normalization method used, this work aims to identify reference genes for qPCR in adipose tissues of mice with type-I diabetes or obesity. We selected 12 genes that are commonly used as reference genes. The expression of these genes in the adipose tissues of mice was analyzed in the context of three different experimental protocols: 1) untreated animals; 2) high-fat-diet animals; and 3) streptozotocin-treated animals. Gene-expression stability was analyzed using four different algorithms. Our data indicate that TATA-binding protein is stably expressed across adipose tissues in control animals. This gene was also a useful reference when the brown adipose tissues of control and obese mice were analyzed. The mitochondrial ATP synthase F1 complex gene exhibits stable expression in subcutaneous and perigonadal adipose tissue from control and obese mice. Moreover, this gene is the best reference for qPCR normalization in adipose tissue from streptozotocin-treated animals. These results show that no single gene is stable enough to be suited for use under all experimental conditions. In conclusion, the selection of appropriate genes is a prerequisite to ensure qPCR reliability and must be performed separately for different experimental protocols. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
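    A minimal sketch of how such stability rankings work, using a simplified geNorm-style M value on hypothetical Ct data (the algorithms named in the abstract implement refined versions of this idea):

```python
import numpy as np

# Hypothetical qPCR Ct values: rows are candidate reference genes, columns
# are samples. Gene B is deliberately made unstable.
ct = np.array([
    [20.1, 20.3, 20.0, 20.2, 20.1],   # gene A: stable
    [22.5, 23.8, 21.9, 24.1, 22.7],   # gene B: variable
    [18.9, 19.0, 18.8, 19.1, 18.9],   # gene C: stable
])

def genorm_m(ct):
    """Simplified geNorm-style M value: for each gene, the mean standard
    deviation of its pairwise log2 expression ratios with every other gene.
    Since expression ~ 2**(-Ct), the log2 ratio of genes i and j is Ct_j - Ct_i.
    Lower M means more stable."""
    n = ct.shape[0]
    m = np.empty(n)
    for i in range(n):
        sds = [np.std(ct[j] - ct[i], ddof=1) for j in range(n) if j != i]
        m[i] = np.mean(sds)
    return m

m = genorm_m(ct)
print("M values:", np.round(m, 2))   # the lowest M marks the best reference
```

    Ranking genes by M and iteratively dropping the worst one is essentially how geNorm arrives at its recommended reference pair.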

  2. An Examination of Sampling Characteristics of Some Analytic Factor Transformation Techniques.

    ERIC Educational Resources Information Center

    Skakun, Ernest N.; Hakstian, A. Ralph

    Two population raw data matrices were constructed by computer simulation techniques. Each consisted of 10,000 subjects and 12 variables, and each was constructed according to an underlying factorial model consisting of four major common factors, eight minor common factors, and 12 unique factors. The computer simulation techniques were employed to…

  3. Identification of a Novel Reference Gene for Apple Transcriptional Profiling under Postharvest Conditions

    PubMed Central

    Storch, Tatiane Timm; Pegoraro, Camila; Finatto, Taciane; Quecini, Vera; Rombaldi, Cesar Valmor; Girardi, César Luis

    2015-01-01

    Reverse transcription quantitative PCR (RT-qPCR) is one of the most important techniques for gene expression profiling due to its high sensitivity and reproducibility. However, the reliability of the results is highly dependent on data normalization, performed by comparing the expression profiles of the genes of interest against those of constitutively expressed reference genes. Although the technique is widely used in fruit postharvest experiments, the transcription stability of reference genes has not been thoroughly investigated under these experimental conditions. Thus, we have determined the transcriptional profile, under these conditions, of three genes commonly used as references: ACTIN (MdACT), PROTEIN DISULPHIDE ISOMERASE (MdPDI) and UBIQUITIN-CONJUGATING ENZYME E2 (MdUBC), along with two novel candidates, HISTONE 1 (MdH1) and NUCLEOSOME ASSEMBLY 1 PROTEIN (MdNAP1). The expression profile of the genes was investigated across five experiments, three of them encompassing the postharvest period and the other two covering developmental and spatial phases. The transcriptional stability was comparatively investigated using four distinct software packages: BestKeeper, NormFinder, geNorm and DataAssist. Gene ranking results for transcriptional stability were similar across the investigated software packages, with the exception of BestKeeper. The classic reference gene MdUBC ranked among the most stably transcribed in all investigated experimental conditions. Transcript accumulation profiles for the novel reference candidate gene MdH1 were stable throughout the tested conditions, especially in experiments encompassing the postharvest period. Thus, our results present a novel reference gene for postharvest experiments in apple and reinforce the importance of checking the transcription profile of reference genes under the experimental conditions of interest. PMID:25774904

  4. Identification of a novel reference gene for apple transcriptional profiling under postharvest conditions.

    PubMed

    Storch, Tatiane Timm; Pegoraro, Camila; Finatto, Taciane; Quecini, Vera; Rombaldi, Cesar Valmor; Girardi, César Luis

    2015-01-01

    Reverse transcription quantitative PCR (RT-qPCR) is one of the most important techniques for gene expression profiling due to its high sensitivity and reproducibility. However, the reliability of the results is highly dependent on data normalization, performed by comparing the expression profiles of the genes of interest against those of constitutively expressed reference genes. Although the technique is widely used in fruit postharvest experiments, the transcription stability of reference genes has not been thoroughly investigated under these experimental conditions. Thus, we have determined the transcriptional profile, under these conditions, of three genes commonly used as references: ACTIN (MdACT), PROTEIN DISULPHIDE ISOMERASE (MdPDI) and UBIQUITIN-CONJUGATING ENZYME E2 (MdUBC), along with two novel candidates, HISTONE 1 (MdH1) and NUCLEOSOME ASSEMBLY 1 PROTEIN (MdNAP1). The expression profile of the genes was investigated across five experiments, three of them encompassing the postharvest period and the other two covering developmental and spatial phases. The transcriptional stability was comparatively investigated using four distinct software packages: BestKeeper, NormFinder, geNorm and DataAssist. Gene ranking results for transcriptional stability were similar across the investigated software packages, with the exception of BestKeeper. The classic reference gene MdUBC ranked among the most stably transcribed in all investigated experimental conditions. Transcript accumulation profiles for the novel reference candidate gene MdH1 were stable throughout the tested conditions, especially in experiments encompassing the postharvest period. Thus, our results present a novel reference gene for postharvest experiments in apple and reinforce the importance of checking the transcription profile of reference genes under the experimental conditions of interest.

  5. Prediction of medial and lateral contact force of the knee joint during normal and turning gait after total knee replacement.

    PubMed

    Purevsuren, Tserenchimed; Dorj, Ariunzaya; Kim, Kyungsoo; Kim, Yoon Hyuk

    2016-04-01

    The computational modeling approach has commonly been used to predict knee joint contact forces, muscle forces, and ligament loads during activities of daily living. Knowledge of these forces has several potential applications, for example, in the design of equipment to protect the knee joint from injury and in planning adequate rehabilitation protocols, although clinical applications of computational models are still evolving and one of the limiting factors is model validation. The objective of this study was to extend a previous modeling technique and to improve the validity of the model prediction using the publicly available data set of the fifth "Grand Challenge Competition to Predict In Vivo Knee Loads." A two-stage modeling approach, which combines conventional inverse dynamic analysis (the first stage) with a multi-body subject-specific lower limb model (the second stage), was used to calculate medial and lateral compartment contact forces. The validation was performed by direct comparison of model predictions and experimental measurements of medial and lateral compartment contact forces during normal and turning gait. The model predictions of both medial and lateral contact forces showed strong correlations with experimental measurements in normal gait (r = 0.75 and 0.71) and in turning gait trials (r = 0.86 and 0.72), even though the current technique overestimated medial compartment contact forces in the swing phase. The correlation coefficient, Sprague and Geers metrics, and root mean squared error indicated that the lateral contact forces were predicted better than the medial contact forces in comparison with the experimental measurements during both normal and turning gait trials. © IMechE 2016.
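    Two of the validation metrics named above, the Pearson correlation coefficient and the root mean squared error, are straightforward to compute. The force values below are hypothetical, in units of body weight (BW), not data from the study:

```python
import numpy as np

# Hypothetical medial contact forces over one gait cycle: instrumented-implant
# measurements vs. model predictions, both in body weights (BW).
measured  = np.array([0.3, 1.1, 2.4, 2.1, 1.0, 0.4, 0.2, 0.3])
predicted = np.array([0.4, 1.0, 2.2, 2.3, 1.3, 0.6, 0.4, 0.5])

r = np.corrcoef(measured, predicted)[0, 1]          # Pearson correlation
rmse = np.sqrt(np.mean((measured - predicted) ** 2))  # root mean squared error
print(f"r = {r:.2f}, RMSE = {rmse:.2f} BW")
```

    Note that r only measures shape agreement; a model can track the waveform well (high r) while still carrying a systematic offset, which is why RMSE and magnitude-sensitive metrics such as Sprague and Geers are reported alongside it.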

  6. Dynamic thermal effects of epidermal melanin and plasmonic nanoparticles during photoacoustic breast imaging

    NASA Astrophysics Data System (ADS)

    Ghassemi, Pejhman; Wang, Quanzeng; Pfefer, T. Joshua

    2016-03-01

    Photoacoustic Tomography (PAT) employs high-power near-infrared (near-IR) laser pulses to generate structural and functional information on tissue chromophores up to several centimeters below the surface. Such insights may facilitate detection of breast cancer - the most common cancer in women. PAT mammography has been the subject of extensive research, including techniques based on exogenous agents for PAT contrast enhancement and molecular specificity. However, photothermal safety risks of PAT due to strong chromophores such as epidermal melanin and plasmonic nanoparticles have not been rigorously studied. We have used computational and experimental approaches to elucidate highly dynamic optical-thermal processes during PAT. A Monte Carlo model was used to simulate light propagation at 800 and 1064 nm in a multi-layer breast tissue geometry with different epidermal pigmentation levels and a tumor-simulating inclusion incorporating nanoparticles. Energy deposition results were then used in a bioheat transfer model to simulate temperature transients. Experimental measurements involved multi-layer hydrogel phantoms with inclusions incorporating gold nanoparticles. Phantom optical properties were measured using the inverse adding-doubling technique. Thermal imaging was performed as phantoms were irradiated with 5 ns near-IR pulses. Scenarios using 10 Hz laser irradiation of breast tissue containing various nanoparticle concentrations were implemented experimentally and computationally. Laser exposure levels were based on ANSI/IEC limits. Surface temperature measurements were compared to corresponding simulation data. In general, the effect of highly pigmented skin on temperature rise was significant, whereas unexpectedly small levels of temperature rise during nanoparticle irradiation were attributed to rapid photodegradation. Results provide key initial insights into light-tissue interactions impacting the safety and effectiveness of PAT.
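    Under the thermal-confinement assumption commonly made for nanosecond pulses (no heat diffusion during the pulse), a first-order per-pulse temperature rise follows from energy deposition alone. All numbers below are illustrative assumptions, not values from the study:

```python
# First-order estimate of the instantaneous temperature rise from one short
# laser pulse under thermal confinement: dT = mu_a * H / (rho * c).
mu_a = 250.0   # absorption coefficient of pigmented epidermis, 1/m (assumed)
H = 200.0      # radiant exposure per pulse, J/m^2 (= 20 mJ/cm^2, assumed)
rho = 1000.0   # tissue density, kg/m^3
c = 3600.0     # tissue specific heat, J/(kg K)

dT = mu_a * H / (rho * c)   # kelvin per pulse
print(f"per-pulse temperature rise: {dT * 1000:.1f} mK")
```

    At a 10 Hz repetition rate, whether such per-pulse increments accumulate depends on how much heat diffuses away between pulses, which is what the bioheat transfer model in the study resolves.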

  7. Histopathological evaluation of the effect of locally administered strontium on healing time in mandibular fractures: An experimental study.

    PubMed

    Durmuş, Kasım; Turgut, Nergiz Hacer; Doğan, Mehtap; Tuncer, Ersin; Özer, Hatice; Altuntaş, Emine Elif; Akyol, Melih

    2017-10-01

    Mandibular fractures are the most common facial fractures. They can be treated by conservative techniques or by surgery. The authors hypothesized that the application of a single local dose of strontium chloride would accelerate the healing of subcondylar mandibular fractures, shorten the recovery time and prevent complications. The aim of the present pilot study was to evaluate the effects of a single local dose of strontium chloride on the healing of subcondylar mandibular fractures in rats. This randomized experimental study was carried out on 24 male Wistar albino rats. The rats were randomly divided into 3 groups: experimental group 1, receiving 3% strontium chloride; experimental group 2, receiving 5% strontium chloride; and the control group. A full thickness surgical osteotomy was created in the subcondylar area. A single dose of strontium solution (0.3 cc/site) was administered locally by injection on the bone surfaces of the fracture line created. Nothing was administered to the control group. The mandibles were dissected on postoperative day 21. The fractured hemimandibles were submitted to histopathological examination. The median bone fracture healing score was 9 (range: 7-9) in experimental group 1; 8 (range: 7-10) in experimental group 2; and 7.50 (range: 7-8) in the control group. When the groups were compared in terms of bone healing scores, there was a statistically significant difference between experimental group 1 and the control group (p < 0.05). This study is the first to show that local strontium may have positive effects on the healing of subcondylar mandibular fractures. In the authors' opinion, 3% strontium was beneficial for accelerating facial skeleton consolidation and bone regeneration in rat subcondylar mandibular fractures. This treatment procedure may be combined with closed fracture treatment or a conservative approach.

  8. Direction dependence of displacement time for two-fluid electroosmotic flow.

    PubMed

    Lim, Chun Yee; Lam, Yee Cheong

    2012-03-01

    Electroosmotic flow in which one fluid displaces another is commonly encountered in various microfluidic applications and experiments, for example, in the current monitoring technique used to determine the zeta potential of a microchannel. There is an experimentally observed anomaly in such flows: the displacement time is flow-direction dependent, i.e., it depends on whether a high concentration fluid displaces a low concentration fluid, or vice versa. Thus, this investigation focuses on the displacement flow of two fluids with various concentration differences. The displacement time was determined experimentally with the current monitoring method. It is concluded that the time required for a high concentration solution to displace a low concentration solution is smaller than the time required for a low concentration solution to displace a high concentration solution. The percentage displacement time difference increases with increasing concentration difference and is independent of the length or width of the channel and the voltage applied. Hitherto, no theoretical analysis or numerical simulation had been conducted to explain this phenomenon. A numerical model based on the finite element method was developed to explain the experimental observations. Simulations showed that the velocity profile and ion distribution deviate significantly from those of a single fluid electroosmotic flow. The distortion of the ion distribution near the electrical double layer is responsible for the displacement time difference between the two flow directions. The trends obtained from simulations agree with the experimental findings.
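    The current-monitoring readout can be sketched numerically: the measured current ramps while one fluid displaces the other (bulk conductivity changes) and plateaus once displacement is complete, so the displacement time is read off as the instant the trace settles onto its plateau. The trace parameters below are invented for illustration:

```python
import numpy as np

# Synthetic current-monitoring trace (microamps vs. seconds).
t = np.linspace(0.0, 120.0, 1201)
t_disp_true = 75.0                                  # hypothetical displacement time
current = np.where(t < t_disp_true,
                   10.0 + 5.0 * t / t_disp_true,    # ramp: 10 -> 15 uA
                   15.0)                            # plateau after displacement
current += np.random.default_rng(1).normal(0.0, 0.02, t.size)  # sensor noise

# Estimate the displacement time: last instant the current is still outside a
# small tolerance band around the final plateau value.
plateau = np.median(current[t > 110.0])
tol = 0.05 * abs(plateau - current[0])              # 5% of the total change
outside = np.abs(current - plateau) >= tol
idx = np.where(outside)[0].max() + 1
print(f"estimated displacement time: {t[idx]:.1f} s")
```

    The tolerance band makes the estimate land slightly before the true completion time; in practice the band width is a trade-off against the noise level of the current measurement.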

  9. Direction dependence of displacement time for two-fluid electroosmotic flow

    PubMed Central

    Lim, Chun Yee; Lam, Yee Cheong

    2012-01-01

    Electroosmotic flow in which one fluid displaces another is commonly encountered in various microfluidic applications and experiments, for example, in the current monitoring technique used to determine the zeta potential of a microchannel. There is an experimentally observed anomaly in such flows: the displacement time is flow-direction dependent, i.e., it depends on whether a high concentration fluid displaces a low concentration fluid, or vice versa. Thus, this investigation focuses on the displacement flow of two fluids with various concentration differences. The displacement time was determined experimentally with the current monitoring method. It is concluded that the time required for a high concentration solution to displace a low concentration solution is smaller than the time required for a low concentration solution to displace a high concentration solution. The percentage displacement time difference increases with increasing concentration difference and is independent of the length or width of the channel and the voltage applied. Hitherto, no theoretical analysis or numerical simulation had been conducted to explain this phenomenon. A numerical model based on the finite element method was developed to explain the experimental observations. Simulations showed that the velocity profile and ion distribution deviate significantly from those of a single fluid electroosmotic flow. The distortion of the ion distribution near the electrical double layer is responsible for the displacement time difference between the two flow directions. The trends obtained from simulations agree with the experimental findings. PMID:22662083

  10. Atelocollagen Enhances the Healing of Rotator Cuff Tendon in Rabbit Model.

    PubMed

    Suh, Dong-Sam; Lee, Jun-Keun; Yoo, Ji-Chul; Woo, Sang-Hun; Kim, Ga-Ram; Kim, Ju-Won; Choi, Nam-Yong; Kim, Yongdeok; Song, Hyun-Seok

    2017-07-01

    Failure of rotator cuff healing is a common complication despite the rapid development of surgical repair techniques for the torn rotator cuff. To verify the effect of atelocollagen on tendon-to-bone healing in the rabbit supraspinatus tendon compared with conventional cuff repair. Controlled laboratory study. A tear of the supraspinatus tendon was created and repaired in 46 New Zealand White rabbits. They were then randomly allocated into 2 groups (23 rabbits per group; 15 for histological and 8 for biomechanical test). In the experimental group, patch-type atelocollagen was implanted between bone and tendon during repair; in the control group, the torn tendon was repaired without atelocollagen. Each opposite shoulder served as a sham (tendon was exposed only). Histological evaluation was performed at 4, 8, and 12 weeks. Biomechanical tensile strength was tested 12 weeks after surgery. Histological evaluation scores of the experimental group (4.0 ± 1.0) were significantly superior to those of the control group (7.7 ± 2.7) at 12 weeks (P = .005). The load to failure was significantly higher in the experimental group (51.4 ± 3.9 N) than in the control group (36.4 ± 5.9 N) (P = .001). Histological and biomechanical studies demonstrated better results in the experimental group using atelocollagen in a rabbit model of the supraspinatus tendon tear. Atelocollagen patch could be used in the cuff repair site to enhance healing.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Domenico, Giovanni, E-mail: didomenico@fe.infn.it; Cardarelli, Paolo; Taibi, Angelo

    Purpose: The quality of a radiography system is affected by several factors, a major one being the focal spot size of the x-ray tube. In fact, the measurement of such size is recognized to be of primary importance during acceptance tests and image quality evaluations of clinical radiography systems. The most common device providing an image of the focal spot emission distribution is a pin-hole camera, which requires a high tube loading in order to produce a measurable signal. This work introduces an alternative technique to obtain an image of the focal spot, through the processing of a single radiograph of a simple test object, acquired with a suitable magnification. Methods: The radiograph of a magnified sharp edge is a well-established method to evaluate the extension of the focal spot profile along the direction perpendicular to the edge. From a single radiograph of a circular x-ray absorber, it is possible to extract simultaneously the radial profiles of several sharp edges with different orientations. The authors propose a technique that allows an image of the focal spot to be obtained through the processing of these radial profiles by means of a pseudo-CT reconstruction technique. In order to validate this technique, the reconstruction has been applied to the simulated radiographs of an ideal disk-shaped absorber, generated by various simulated focal spot distributions. Furthermore, the method has been applied to the focal spot of a commercially available mammography unit. Results: In the case of simulated radiographs, the results of the reconstructions have been compared to the original distributions, showing excellent agreement in both the overall distribution and the full-width-at-half-maximum measurements. In the case of the experimental test, the method yielded images of the focal spot that have been compared with the results obtained through standard techniques, namely, pin-hole camera and slit camera. 
    Conclusions: The method was proven to be effective for simulated images, and the results of the experimental test suggest that it could be considered as an alternative technique for focal spot distribution evaluation. The method makes it possible to measure the actual focal spot size and emission distribution under the same exposure conditions as clinical routine, avoiding the high tube loading required by the pin-hole imaging technique.
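    A one-dimensional sketch of the underlying edge-penumbra idea (not the authors' full pseudo-CT reconstruction): the derivative of a magnified edge-spread function (ESF) is the line-spread function (LSF), whose width, scaled back by the magnification, estimates the focal spot extent along one direction. The Gaussian spot shape, its size, the magnification and the noise level below are all assumed for illustration:

```python
import numpy as np

M = 3.0                                 # geometric magnification (assumed)
fwhm_spot = 0.3                         # "true" focal spot FWHM, mm (assumed)
sigma = fwhm_spot / 2.355 * (M - 1)     # penumbra width in the detector plane

# Build a noisy edge-spread function: the integral of the (magnified) spot
# profile, as recorded behind a sharp edge.
x = np.linspace(-3.0, 3.0, 2001)        # detector coordinate, mm
dx = x[1] - x[0]
esf = np.cumsum(np.exp(-0.5 * (x / sigma) ** 2))
esf /= esf[-1]                          # normalize the edge profile to 0..1
esf += np.random.default_rng(2).normal(0.0, 1e-4, x.size)  # detector noise

# LSF = d(ESF)/dx; its FWHM, divided by (M - 1), estimates the spot size.
lsf = np.gradient(esf, dx)
above = x[lsf > 0.5 * lsf.max()]
fwhm_est = (above[-1] - above[0]) / (M - 1)
print(f"estimated focal spot FWHM: {fwhm_est:.2f} mm")
```

    Repeating this along the many edge orientations available on a circular absorber is what supplies the angular projections that the paper's pseudo-CT step assembles into a 2-D focal spot image.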

  12. X-ray focal spot reconstruction by circular penumbra analysis-Application to digital radiography systems.

    PubMed

    Di Domenico, Giovanni; Cardarelli, Paolo; Contillo, Adriano; Taibi, Angelo; Gambaccini, Mauro

    2016-01-01

    The quality of a radiography system is affected by several factors, a major one being the focal spot size of the x-ray tube. In fact, the measurement of such size is recognized to be of primary importance during acceptance tests and image quality evaluations of clinical radiography systems. The most common device providing an image of the focal spot emission distribution is a pin-hole camera, which requires a high tube loading in order to produce a measurable signal. This work introduces an alternative technique to obtain an image of the focal spot, through the processing of a single radiograph of a simple test object, acquired with a suitable magnification. The radiograph of a magnified sharp edge is a well-established method to evaluate the extension of the focal spot profile along the direction perpendicular to the edge. From a single radiograph of a circular x-ray absorber, it is possible to extract simultaneously the radial profiles of several sharp edges with different orientations. The authors propose a technique that allows an image of the focal spot to be obtained through the processing of these radial profiles by means of a pseudo-CT reconstruction technique. In order to validate this technique, the reconstruction has been applied to the simulated radiographs of an ideal disk-shaped absorber, generated by various simulated focal spot distributions. Furthermore, the method has been applied to the focal spot of a commercially available mammography unit. In the case of simulated radiographs, the results of the reconstructions have been compared to the original distributions, showing excellent agreement in both the overall distribution and the full-width-at-half-maximum measurements. In the case of the experimental test, the method yielded images of the focal spot that have been compared with the results obtained through standard techniques, namely, pin-hole camera and slit camera. 
    The method was proven to be effective for simulated images, and the results of the experimental test suggest that it could be considered as an alternative technique for focal spot distribution evaluation. The method makes it possible to measure the actual focal spot size and emission distribution under the same exposure conditions as clinical routine, avoiding the high tube loading required by the pin-hole imaging technique.

  13. A Kenyan perspective on the use of animals in science education and scientific research in Africa and prospects for improvement

    PubMed Central

    Kimwele, Charles; Matheka, Duncan; Ferdowsian, Hope

    2011-01-01

    Introduction Animal experimentation is common in Africa, a region that accords little priority on animal protection in comparison to economic and social development. The current study aimed at investigating the prevalence of animal experimentation in Kenya, and to review shortfalls in policy, legislation, implementation and enforcement that result in inadequate animal care in Kenya and other African nations. Methods Data was collected using questionnaires, administered at 39 highly ranked academic and research institutions aiming to identify those that used animals, their sources of animals, and application of the three Rs. Perceived challenges to the use of non-animal alternatives and common methods of euthanasia were also queried. Data was analyzed using Epidata, SPSS 16.0 and Microsoft Excel. Results Thirty-eight (97.4%) of thirty-nine institutions reported using animals for education and/or research. Thirty (76.9%) institutions reported using analgesics or anesthetics on a regular basis. Thirteen (33.3%) institutions regularly used statistical methods to minimize the use of animals. Overall, sixteen (41.0%) institutions explored the use of alternatives to animals such as cell cultures and computer simulation techniques, with one (2.6%) academic institution having completely replaced animals with computer modeling, manikins and visual illustrations. The commonest form of euthanasia employed was chloroform administration, reportedly in fourteen (29.8%) of 47 total methods (some institutions used more than one method). Twenty-eight (71.8%) institutions had no designated ethics committee to review or monitor protocols using animals. Conclusion Animals are commonly used in academic and research institutions in Kenya. The relative lack of ethical guidance and oversight regarding the use of animals in research and education presents significant concerns. PMID:22355442

  14. A Kenyan perspective on the use of animals in science education and scientific research in Africa and prospects for improvement.

    PubMed

    Kimwele, Charles; Matheka, Duncan; Ferdowsian, Hope

    2011-01-01

    Animal experimentation is common in Africa, a region that accords little priority on animal protection in comparison to economic and social development. The current study aimed at investigating the prevalence of animal experimentation in Kenya, and to review shortfalls in policy, legislation, implementation and enforcement that result in inadequate animal care in Kenya and other African nations. Data was collected using questionnaires, administered at 39 highly ranked academic and research institutions aiming to identify those that used animals, their sources of animals, and application of the three Rs. Perceived challenges to the use of non-animal alternatives and common methods of euthanasia were also queried. Data was analyzed using Epidata, SPSS 16.0 and Microsoft Excel. Thirty-eight (97.4%) of thirty-nine institutions reported using animals for education and/or research. Thirty (76.9%) institutions reported using analgesics or anesthetics on a regular basis. Thirteen (33.3%) institutions regularly used statistical methods to minimize the use of animals. Overall, sixteen (41.0%) institutions explored the use of alternatives to animals such as cell cultures and computer simulation techniques, with one (2.6%) academic institution having completely replaced animals with computer modeling, manikins and visual illustrations. The commonest form of euthanasia employed was chloroform administration, reportedly in fourteen (29.8%) of 47 total methods (some institutions used more than one method). Twenty-eight (71.8%) institutions had no designated ethics committee to review or monitor protocols using animals. Animals are commonly used in academic and research institutions in Kenya. The relative lack of ethical guidance and oversight regarding the use of animals in research and education presents significant concerns.

  15. Internal consistency tests for evaluation of measurements of anthropogenic hydrocarbons in the troposphere

    NASA Astrophysics Data System (ADS)

    Parrish, D. D.; Trainer, M.; Young, V.; Goldan, P. D.; Kuster, W. C.; Jobson, B. T.; Fehsenfeld, F. C.; Lonneman, W. A.; Zika, R. D.; Farmer, C. T.; Riemer, D. D.; Rodgers, M. O.

    1998-09-01

    Measurements of tropospheric nonmethane hydrocarbons (NMHCs) made in continental North America should exhibit a common pattern determined by photochemical removal and dilution acting upon the typical North American urban emissions. We analyze 11 data sets collected in the United States in the context of this hypothesis, in most cases by analyzing the geometric mean and standard deviations of ratios of selected NMHCs. In the analysis we attribute deviations from the common pattern to plausible systematic and random experimental errors. In some cases the errors have been independently verified and the specific causes identified. Thus this common pattern provides a check for internal consistency in NMHC data sets. Specific tests are presented which should provide useful diagnostics for all data sets of anthropogenic NMHC measurements collected in the United States. Similar tests, based upon the perhaps different emission patterns of other regions, presumably could be developed. The specific tests include (1) a lower limit for ethane concentrations, (2) specific NMHCs that should be detected if any are, (3) the relatively constant mean ratios of the longer-lived NMHCs with similar atmospheric lifetimes, (4) the constant relative patterns of families of NMHCs, and (5) limits on the ambient variability of the NMHC ratios. Many experimental problems are identified in the literature and the Southern Oxidant Study data sets. The most important conclusion of this paper is that a rigorous field intercomparison of simultaneous measurements of ambient NMHCs by different techniques and researchers is of crucial importance to the field of atmospheric chemistry. The tests presented here are suggestive of errors but are not definitive; only a field intercomparison can resolve the uncertainties.
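    Consistency test (3), the relatively constant mean ratios of long-lived NMHCs, reduces to computing geometric means and geometric standard deviations of measured concentration ratios. A minimal sketch of that calculation in Python, using hypothetical ratio values rather than data from the study:

```python
import math

def geometric_stats(ratios):
    """Geometric mean and geometric standard deviation of a set of ratios."""
    logs = [math.log(r) for r in ratios]
    mean_log = sum(logs) / len(logs)
    var_log = sum((x - mean_log) ** 2 for x in logs) / (len(logs) - 1)
    return math.exp(mean_log), math.exp(math.sqrt(var_log))

# Hypothetical i-butane/n-butane ratios from one site (illustrative only)
ratios = [0.45, 0.52, 0.48, 0.61, 0.40, 0.55]
gm, gsd = geometric_stats(ratios)
print(f"geometric mean = {gm:.2f}, geometric std. dev. = {gsd:.2f}")
```

A data set whose geometric standard deviation of such a ratio greatly exceeds that of comparable studies would fail the ambient-variability limit of test (5).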

  16. Experimental Techniques for Thermodynamic Measurements of Ceramics

    NASA Technical Reports Server (NTRS)

    Jacobson, Nathan S.; Putnam, Robert L.; Navrotsky, Alexandra

    1999-01-01

    Experimental techniques for thermodynamic measurements on ceramic materials are reviewed. For total molar quantities, calorimetry is used. Total enthalpies are determined with combustion calorimetry or solution calorimetry. Heat capacities and entropies are determined with drop calorimetry, differential thermal methods, and adiabatic calorimetry. Three major techniques for determining partial molar quantities are discussed: gas equilibration techniques, Knudsen cell methods, and electrochemical techniques. Throughout this report, issues unique to ceramics are emphasized. Ceramic materials encompass a wide range of stabilities, and this must be considered. In general, data at high temperatures are required, and the need for inert container materials presents a particular challenge.
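    Of the partial-molar techniques listed, the electrochemical (EMF) method is the most direct: the measured cell voltage gives the partial molar Gibbs energy through the standard relation ΔG = -nFE. A minimal sketch with hypothetical cell values (not measurements from this review):

```python
F = 96485.33  # Faraday constant, C/mol

def gibbs_from_emf(n_electrons, emf_volts):
    """Partial molar Gibbs energy (J/mol) implied by a solid-state cell EMF,
    via the standard relation ΔG = -n·F·E."""
    return -n_electrons * F * emf_volts

# Hypothetical 2-electron cell reading 0.95 V
dG = gibbs_from_emf(2, 0.95)
print(f"ΔG = {dG / 1000.0:.1f} kJ/mol")
```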

  17. Overview of Supersonic Aerodynamics Measurement Techniques in the NASA Langley Unitary Plan Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Erickson, Gary E.

    2007-01-01

    An overview is given of selected measurement techniques used in the NASA Langley Research Center (NASA LaRC) Unitary Plan Wind Tunnel (UPWT) to determine the aerodynamic characteristics of aerospace vehicles operating at supersonic speeds. A broad definition of a measurement technique is adopted in this paper and is any qualitative or quantitative experimental approach that provides information leading to the improved understanding of the supersonic aerodynamic characteristics. On-surface and off-surface measurement techniques used to obtain discrete (point) and global (field) measurements and planar and global flow visualizations are described, and examples of all methods are included. The discussion is limited to recent experiences in the UPWT and is, therefore, not an exhaustive review of existing experimental techniques. The diversity and high quality of the measurement techniques and the resultant data illustrate the capabilities of a ground-based experimental facility and the key role that it plays in the advancement of our understanding, prediction, and control of supersonic aerodynamics.

  18. Flocculation kinetics and aggregate structure of kaolinite mixtures in laminar tube flow.

    PubMed

    Vaezi, Farid G.; Sanders, R Sean; Masliyah, Jacob H

    2011-03-01

    Flocculation is commonly used in various solid-liquid separation processes in chemical and mineral industries to separate desired products or to treat waste streams. This paper presents an experimental technique to study flocculation processes in laminar tube flow. This approach allows for more realistic estimation of the shear rate to which an aggregate is exposed, as compared to more complicated shear fields (e.g. stirred tanks). A direct sampling method is used to minimize the effect of sampling on the aggregate structure. A combination of aggregate settling velocity and image analysis was used to quantify the structure of the aggregate. Aggregate size, density, and fractal dimension were found to be the most important aggregate structural parameters. The two methods used to determine aggregate fractal dimension were in good agreement. The effects of advective flow through an aggregate's porous structure and transition-regime drag coefficient on the evaluation of aggregate density were considered. The technique was applied to investigate the flocculation kinetics and the evolution of the aggregate structure of kaolin particles with an anionic flocculant under conditions similar to those of oil sands fine tailings. Aggregates were formed using a well controlled two-stage aggregation process. Detailed statistical analysis was performed to investigate the establishment of dynamic equilibrium condition in terms of aggregate size and density evolution. An equilibrium steady state condition was obtained within 90 s of the start of flocculation; after which no further change in aggregate structure was observed. Although longer flocculation times inside the shear field could conceivably cause aggregate structure conformation, statistical analysis indicated that this did not occur for the studied conditions. The results show that the technique and experimental conditions employed here produce aggregates having a well-defined, reproducible structure. Copyright © 2011. Published by Elsevier Inc.
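    The density part of the aggregate characterization can be illustrated with the simplest limiting case. A minimal sketch, assuming Stokes (creeping-flow) settling of an impermeable sphere, whereas the study additionally corrects for transition-regime drag and advective flow through the porous aggregate:

```python
def excess_density(settling_velocity, diameter, mu=1.0e-3, g=9.81):
    """Excess aggregate density Δρ (kg/m³) from the measured settling
    velocity via Stokes' law, Δρ = 18·μ·v / (g·d²). Assumes creeping flow
    and an impermeable sphere; defaults are water viscosity (Pa·s) and g."""
    return 18.0 * mu * settling_velocity / (g * diameter ** 2)

# Hypothetical aggregate: 200 µm across, settling at 1 mm/s in water
drho = excess_density(1.0e-3, 200.0e-6)
print(f"excess density ≈ {drho:.1f} kg/m³")
```

Pairing this density with the image-based size for many aggregates gives the log-log density-size trend from which a fractal dimension is commonly extracted.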

  19. Mechanisms of behavior modification in clinical behavioral medicine in China.

    PubMed

    Yang, Zhiyin; Su, Zhonghua; Ji, Feng; Zhu, Min; Bai, Bo

    2014-08-01

    Behavior modification, as the core of clinical behavioral medicine, is often used in clinical settings. We seek to summarize behavior modification techniques that are commonly used in clinical practice of behavioral medicine in China and discuss possible biobehavioral mechanisms. We reviewed common behavior modification techniques in clinical settings in China, and we reviewed studies that explored possible biobehavioral mechanisms. Commonly used clinical approaches of behavior modification in China include behavior therapy, cognitive therapy, cognitive-behavioral therapy, health education, behavior management, behavioral relaxation training, stress management intervention, desensitization therapy, biofeedback therapy, and music therapy. These techniques have been applied in the clinical treatment of a variety of diseases, such as chronic diseases, psychosomatic diseases, and psychological disorders. The biobehavioral mechanisms of these techniques involve the autonomic nervous system, neuroendocrine system, neurobiochemistry, and neuroplasticity. Behavior modification techniques are commonly used in the treatment of a variety of somatic and psychological disorders in China. Multiple biobehavioral mechanisms are involved in successful behavior modification.

  20. Thermoreflectance spectroscopy—Analysis of thermal processes in semiconductor lasers

    NASA Astrophysics Data System (ADS)

    Pierścińska, D.

    2018-01-01

    This review focuses on the theoretical foundations, experimental implementation and an overview of experimental results of thermoreflectance spectroscopy as a powerful technique for temperature monitoring and analysis of thermal processes in semiconductor lasers. This is an optical, non-contact, high spatial resolution technique providing high temperature resolution and mapping capabilities. Thermoreflectance is a thermometric technique based on measuring the relative change in reflectivity of the laser facet surface, which provides thermal images useful in hot-spot detection and reliability studies. In this paper, the principles and experimental implementation of the technique as a thermography tool are discussed. Some exemplary applications of thermoreflectance to various types of lasers are presented, proving that the technique provides new insight into heat management problems in semiconductor lasers and, in particular, that it allows the study of thermal degradation processes occurring at laser facets. Additionally, thermal processes and the basic mechanisms of degradation of semiconductor lasers are discussed.
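    The core conversion behind the technique is linear: the measured relative reflectivity change is divided by a calibration coefficient to obtain the facet temperature rise. A minimal sketch with hypothetical coefficient and signal values:

```python
def delta_T(dR_over_R, kappa):
    """Temperature rise implied by a relative reflectivity change, using the
    linear thermoreflectance relation ΔR/R = κ·ΔT, where κ = (1/R)(dR/dT)
    is a calibration coefficient for the given material and wavelength."""
    return dR_over_R / kappa

# Hypothetical values: κ ≈ 2e-4 K⁻¹ and a measured ΔR/R of 1e-3
print(f"ΔT ≈ {delta_T(1.0e-3, 2.0e-4):.1f} K")
```

Because κ is small, lock-in detection of ΔR/R is normally required; the mapping capability comes from repeating this conversion pixel by pixel.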

  1. New experimental techniques for solar cells

    NASA Technical Reports Server (NTRS)

    Lenk, R.

    1993-01-01

    Solar cell capacitance has special importance for an array controlled by shunting. Experimental measurements of solar cell capacitance in the past have shown disagreements of orders of magnitude. Correct measurement technique depends on maintaining the excitation voltage less than the thermal voltage. Two different experimental methods are shown to match theory well, and two effective capacitances are defined for quantifying the effect of the solar cell capacitance on the shunting system.
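    The stated measurement criterion, keeping the excitation below the thermal voltage, is easy to check numerically. A minimal sketch using standard physical constants and a hypothetical excitation amplitude:

```python
def thermal_voltage(T=300.0):
    """Thermal voltage kT/q in volts at absolute temperature T (kelvin)."""
    k = 1.380649e-23     # Boltzmann constant, J/K
    q = 1.602176634e-19  # elementary charge, C
    return k * T / q

Vt = thermal_voltage(300.0)
excitation = 0.010  # hypothetical 10 mV small-signal amplitude
print(f"V_t = {Vt * 1000:.2f} mV; excitation below V_t: {excitation < Vt}")
```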

  2. Crack identification method in beam-like structures using changes in experimentally measured frequencies and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Khatir, Samir; Dekemele, Kevin; Loccufier, Mia; Khatir, Tawfiq; Abdel Wahab, Magd

    2018-02-01

    In this paper, a technique is presented for the detection and localization of an open crack in beam-like structures using experimentally measured natural frequencies and the Particle Swarm Optimization (PSO) method. The technique considers the variation in local flexibility near the crack. The natural frequencies of a cracked beam are determined experimentally and numerically using the Finite Element Method (FEM). The optimization algorithm is programmed in MATLAB. The algorithm is used to estimate the location and severity of a crack by minimizing the differences between measured and calculated frequencies. The method is verified using experimentally measured data on a cantilever steel beam. The Fourier transform is adopted to improve the frequency resolution. The results demonstrate the good accuracy of the proposed technique.
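    The optimization step can be sketched generically. Below is a minimal particle swarm optimizer paired with a toy frequency model; the surrogate model, its parameters and the "measured" frequencies are all hypothetical stand-ins for the paper's FEM model and experimental data:

```python
import math
import random

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer over box bounds [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy surrogate for the cracked-beam frequency model (NOT the paper's FE
# model): each mode's frequency drops when the crack sits near a point of
# high modal curvature; "true" crack at location 0.4, severity 0.3.
def freq_model(x, a):
    return [f0 * (1.0 - 0.5 * a * math.exp(-(x - xm) ** 2 / 0.02))
            for f0, xm in [(50.0, 0.2), (180.0, 0.5), (420.0, 0.8)]]

measured = freq_model(0.4, 0.3)
cost = lambda p: sum((fm - fc) ** 2 for fm, fc in zip(measured, freq_model(*p)))
best, err = pso(cost, [(0.0, 1.0), (0.0, 0.5)])
print(f"estimated crack: location={best[0]:.2f}, severity={best[1]:.2f}")
```

The paper's method replaces the surrogate with FEM-computed natural frequencies, so each cost evaluation is a finite element solve.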

  3. Dynamics of the brain: Mathematical models and non-invasive experimental studies

    NASA Astrophysics Data System (ADS)

    Toronov, V.; Myllylä, T.; Kiviniemi, V.; Tuchin, V. V.

    2013-10-01

    Dynamics is an essential aspect of brain function. In this article we review theoretical models of neural and haemodynamic processes in the human brain and non-invasive experimental techniques developed to study brain function and to measure dynamic characteristics such as neurodynamics, neurovascular coupling, haemodynamic changes due to brain activity and autoregulation, and the cerebral metabolic rate of oxygen. We focus on emerging theoretical biophysical models and experimental functional neuroimaging results, obtained mostly by functional magnetic resonance imaging (fMRI) and near-infrared spectroscopy (NIRS). We also include our current results on the effects of blood pressure variations on cerebral haemodynamics, and simultaneous measurements of fast processes in the brain by near-infrared spectroscopy and a novel functional MRI technique called magnetic resonance encephalography. Based on the rapid progress in theoretical and experimental techniques, growing computational capacities, and the combined use of rapidly improving and emerging neuroimaging techniques, we anticipate great advances in the overall knowledge of the human brain during the next decade.

  4. A summary of image segmentation techniques

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    1993-01-01

    Machine vision systems are often considered to be composed of two subsystems: low-level vision and high-level vision. Low-level vision consists primarily of image processing operations performed on the input image to produce another image with more favorable characteristics. These operations may yield images with reduced noise or cause certain features of the image to be emphasized (such as edges). High-level vision includes object recognition and, at the highest level, scene interpretation. The bridge between these two subsystems is the segmentation system. Through segmentation, the enhanced input image is mapped into a description involving regions with common features which can be used by the higher-level vision tasks. There is no unified theory of image segmentation. Instead, image segmentation techniques are basically ad hoc and differ mostly in the way they emphasize one or more of the desired properties of an ideal segmenter and in the way they balance and compromise one desired property against another. These techniques can be categorized in a number of different ways: local vs. global, parallel vs. sequential, contextual vs. noncontextual, interactive vs. automatic. In this paper, we categorize the schemes into three main groups: pixel-based, edge-based, and region-based. Pixel-based segmentation schemes classify pixels based solely on their gray levels. Edge-based schemes first detect local discontinuities (edges) and then use that information to separate the image into regions. Finally, region-based schemes start with a seed pixel (or group of pixels) and then grow or split the seed until the original image is composed of only homogeneous regions. Because a number of survey papers are available, we will not discuss all segmentation schemes; rather than a survey, we take the approach of a detailed overview. We focus only on the more common approaches in order to give the reader a flavor for the variety of techniques available, yet present enough details to facilitate implementation and experimentation.
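    The pixel-based and region-based categories can be illustrated in a few lines. A minimal sketch of gray-level thresholding and seeded region growing on a toy image (not an example from the paper):

```python
def threshold_segment(image, t):
    """Pixel-based scheme: classify each pixel by its gray level alone."""
    return [[1 if v > t else 0 for v in row] for row in image]

def region_grow(image, seed, tol):
    """Region-based scheme: grow from a seed pixel while gray levels stay
    within `tol` of the seed value (4-connectivity)."""
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if abs(image[r][c] - image[sr][sc]) <= tol:
            region.add((r, c))
            stack += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return region

img = [[10, 12, 90, 95],
       [11, 13, 92, 94],
       [12, 11, 91, 93]]
labels = threshold_segment(img, 50)
grown = region_grow(img, (0, 0), 5)
print(labels[0], len(grown))
```

An edge-based scheme would instead locate the large gray-level jump between the second and third columns and use it as a region boundary.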

  5. Mechanical characterisation of agarose-based chromatography resins for biopharmaceutical manufacture.

    PubMed

    Nweke, Mauryn C; McCartney, R Graham; Bracewell, Daniel G

    2017-12-29

    Mechanical characterisation of agarose-based resins is an important factor in ensuring robust chromatographic performance in the manufacture of biopharmaceuticals. Pressure-flow profiles are most commonly used to characterise these properties. There are a number of drawbacks with this method, including the potential need for several re-packs to achieve the desired packing quality, the impact of wall effects on the experimental setup, and the quantities of chromatography media and buffers required. To address these issues, we have developed a dynamic mechanical analysis (DMA) technique that characterises the mechanical properties of resins based on the viscoelasticity of a 1 ml sample of slurry. This technique was conducted on seven resins with varying degrees of mechanical robustness and the results were compared to pressure-flow test results on the same resins. Results show a strong correlation between the two techniques. The most mechanically robust resin (Capto Q) had a critical velocity 3.3 times higher than the weakest (Sepharose CL-4B), whilst the DMA technique showed Capto Q to have a slurry deformation rate 8.3 times lower than Sepharose CL-4B. To ascertain whether polymer structure is indicative of mechanical strength, scanning electron microscopy images were also used to study the structural properties of each resin. Results indicate that DMA can be used as a small-volume, complementary technique for the mechanical characterisation of chromatography media. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  6. Neutron-capture rates for explosive nucleosynthesis: the case of 68Ni(n, γ) 69Ni

    DOE PAGES

    Spyrou, Artemis; Larsen, Ann-Cecilie; Liddick, Sean N.; ...

    2017-02-22

    Neutron-capture reactions play an important role in heavy element nucleosynthesis, since they are the driving force for the two processes that create the vast majority of the heavy elements. When a neutron capture occurs on a short-lived nucleus, it is extremely challenging to study the reaction directly, and therefore the use of indirect techniques is essential. The present work reports on such an indirect measurement that provides strong constraints on the 68Ni(n,γ)69Ni reaction rate. The commonly used reaction libraries JINA-REACLIB and BRUSLIB are in relatively good agreement with the experimental rate. The impact of the new rate on weak r-process calculations is discussed.

  7. Application of Contact Mode AFM to Manufacturing Processes

    NASA Astrophysics Data System (ADS)

    Giordano, Michael A.; Schmid, Steven R.

    A review of the application of contact mode atomic force microscopy (AFM) to manufacturing processes is presented. A brief introduction to common experimental techniques, including hardness, scratch, and wear testing, is given, with a discussion of the challenges in extending manufacturing-scale investigations to the AFM. Differences between the macro- and nanoscale tests are discussed, including indentation size effects and their importance in the simulation of processes such as grinding. The basics of lubrication theory are presented and friction force microscopy is introduced as a method of investigating metal forming lubrication on the nano- and microscales that directly simulates tooling/workpiece asperity interactions. These concepts are followed by a discussion of their application to macroscale industrial manufacturing processes, and direct correlations are made.

  8. MMM: A toolbox for integrative structure modeling.

    PubMed

    Jeschke, Gunnar

    2018-01-01

    Structural characterization of proteins and their complexes may require the integration of restraints from various experimental techniques. MMM (Multiscale Modeling of Macromolecules) is a Matlab-based open-source modeling toolbox for this purpose, with a particular emphasis on distance distribution restraints obtained from electron paramagnetic resonance experiments on spin-labelled proteins and nucleic acids, and on their combination with atomistic structures of domains or whole protomers, small-angle scattering data, secondary structure information, homology information, and elastic network models. MMM not only integrates various types of restraints, but also various existing modeling tools, by providing a common graphical user interface to them. The types of restraints that can support such modeling and the available model types are illustrated by recent application examples. © 2017 The Protein Society.

  9. Quantitative Acoustic Model for Adhesion Evaluation of Pmma/silicon Film Structures

    NASA Astrophysics Data System (ADS)

    Ju, H. S.; Tittmann, B. R.

    2010-02-01

    A poly(methyl methacrylate) (PMMA) film on a silicon substrate is a key structure for photolithography in semiconductor manufacturing processes. This paper presents the potential of scanning acoustic microscopy (SAM) for nondestructive evaluation of the PMMA/Si film structure, whose adhesion failure is commonly encountered during fabrication and post-fabrication processes. A physical model employing a partial discontinuity in displacement is developed for rigorously quantitative evaluation of interfacial weakness. The model is implemented in the matrix method for surface acoustic wave (SAW) propagation in anisotropic media. Our results show that variations in the SAW velocity and reflectance are predicted to be sensitive to the adhesion condition. Experimental results by the v(z) technique and SAW velocity reconstruction verify the prediction.

  10. Lithium-Air Cell Development

    NASA Technical Reports Server (NTRS)

    Reid, Concha M.; Dobley, Arthur; Seymour, Frasier W.

    2014-01-01

    Lithium-air (Li-air) primary batteries have a theoretical specific energy of 11,400 Wh/kg, the highest of any common metal-air system. NASA is developing Li-air technology for a Mobile Oxygen Concentrator for Spacecraft Emergencies, an application that requires an extremely lightweight primary battery that can discharge continuously over 24 hours. Several vendors were funded through the NASA SBIR program to develop Li-air technology to fulfill the requirements of this application. New catalysts and carbon cathode structures were developed to enhance the oxygen reduction reaction and to increase surface area to improve cell performance. Techniques to stabilize the lithium metal anode surface were explored. Experimental results for prototype laboratory cells are given, and projections are made for the performance of hypothetical cells constructed from the materials that were developed.

  11. Applications of Two-Dimensional Electrophoresis Technology to the Study of Atherosclerosis

    PubMed Central

    Lepedda, Antonio J.

    2008-01-01

    Atherosclerosis is a multifactorial disease in which hypertension, diabetes, hyperlipidemia and other risk factors are thought to play a role. However, the molecular processes underlying plaque formation and progression are not yet completely known. In recent years some researchers have applied proteomics technologies to improve understanding of the biochemical pathways of atherogenesis and to search for new cardiovascular biomarkers to be utilized either as early diagnostic traits or as targets for new drug therapies. Owing to its intrinsic complexity, the problem has been approached by different strategies, all of which have some limitations. In this review, we summarize the most common critical experimental variables in two-dimensional electrophoresis-based techniques and recent data obtained by applying proteomic approaches to the study of atherosclerosis. PMID:27683313

  12. A temporal phase unwrapping algorithm for photoelastic stress analysis

    NASA Astrophysics Data System (ADS)

    Baldi, Antonio; Bertolino, Filippo; Ginesu, Francesco

    2007-05-01

    Photoelastic stress analysis is a full-field optical technique for experimental stress analysis whose automation has received considerable research attention over the last 15 years. The latest developments have been made possible largely by the availability of powerful computers with large memory capacity and of colour, high-resolution cameras. A further stimulus is provided by the photoelastic resins now used for rapid prototyping. However, one critical aspect that still deserves attention is phase unwrapping. The algorithms most commonly used for this purpose were developed in other scientific areas (classical interferometry, profilometry, moiré, etc.) for solving different problems. In this article a new algorithm is proposed for temporal phase unwrapping, which offers several advantages over those used today.
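    The temporal idea, unwrapping each pixel's phase along the load/time axis instead of across the image, can be sketched as follows; this is the generic textbook scheme, not the specific algorithm proposed in the article:

```python
import math

def temporal_unwrap(wrapped):
    """Unwrap a time sequence of phases (radians) at one pixel: at each step
    add the multiple of 2π that keeps the increment in (-π, π]."""
    out = [wrapped[0]]
    for w in wrapped[1:]:
        d = w - out[-1]
        d -= 2.0 * math.pi * round(d / (2.0 * math.pi))
        out.append(out[-1] + d)
    return out

# Hypothetical pixel history: true phase grows steadily past π
phase_true = [0.4 * k for k in range(12)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in phase_true]
unwrapped = temporal_unwrap(wrapped)
print(max(abs(u - t) for u, t in zip(unwrapped, phase_true)))
```

The scheme is exact whenever the phase change between successive frames stays below π, which is why temporal methods avoid the error propagation of spatial unwrapping.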

  13. Masonry structures built with fictile tubules: Experimental and numerical analyses

    NASA Astrophysics Data System (ADS)

    Tiberti, Simone; Scuro, Carmelo; Codispoti, Rosamaria; Olivito, Renato S.; Milani, Gabriele

    2017-11-01

    Masonry construction with fictile tubules (hollow clay tubes) was a distinctive building technique of the Mediterranean area. The technique dates back to Roman and early Christian times, when it was used to build vaulted constructions and domes of various geometrical forms by virtue of the tubules' modular structure. In the present work, experimental tests were carried out to identify the mechanical properties of hollow clay fictile tubules and a possible reinforcing technique for existing buildings employing such elements. The experimental results were then validated by devising and analyzing numerical models with the FE software Abaqus, also aimed at investigating the structural behavior of an arch via linear and nonlinear static analyses.

  14. Sport science integration: An evolutionary synthesis.

    PubMed

    Balagué, N; Torrents, C; Hristovski, R; Kelso, J A S

    2017-02-01

    The aim of this paper is to point out one way of integrating the supposedly incommensurate disciplines investigated in sport science. General, common principles can be found among apparently unrelated disciplines when the focus is placed on the dynamics of sports-related phenomena. Dynamical systems approaches, which have recently changed research in the biological and social sciences among others, offer key concepts for creating a common pluricontextual language in sport science. This common language, far from being homogenising, offers key syntheses between diverse fields, respecting and enabling theoretical and experimental pluralism. It forms a softly integrated sport science characterised by a basic dynamic explanatory backbone as well as context-dependent theoretical flexibility. After defining dynamic integration in living systems, which cannot be captured by static structural approaches, we show the commonalities between the diversity of processes existing on different levels and time scales in biological and social entities. We justify our interpretation by drawing on recent scientific contributions that use the same general principles and concepts, and diverse methods and techniques of data analysis, to study different types of phenomena in diverse disciplines. We show how the introduction of the dynamic framework in sport science has started to blur the boundaries between physiology, biomechanics, psychology, phenomenology and sociology. The advantages and difficulties of sport science integration and its consequences for research are also discussed.

  15. [Applicational evaluation of split tooth extractions of upper molars using piezosurgery].

    PubMed

    Li, D; Guo, C B; Liu, Y; Wang, E B

    2016-02-18

    To evaluate the efficacy of Piezosurgery in split tooth extractions. A single-center, randomized, split-mouth study was performed on a consecutive series of unrelated healthy patients attending the Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology. Forty patients were selected for extraction of maxillary molars without reservation value, split or nonvital teeth. They were randomly divided into a control group (20 patients) and a test group (20 patients). Surgical treatments for both groups were performed under local anesthesia. Molar teeth in the control group were extracted with common instruments such as dental elevators, chisels and forceps, while molar teeth in the experimental group were extracted with Piezosurgery, aided by common instruments when needed. We then compared the duration of surgery, frequency of chisel use, expansion of the postoperative bony socket, surgical discomfort and postoperative pain between the two groups. The average operation time was (629.5±171.0) s in the control group and (456.0±337.2) s in the test group. The buccal alveolar bone was reduced by (1.07±0.64) mm in the control group and (1.49±0.61) mm in the test group. There was a significant difference between the two groups (P<0.05). The duration of surgery for the experimental group was significantly longer than that of the control group, but the change in buccal alveolar bone was smaller than in the control group. For the visual analogue scale (VAS) value of surgical discomfort, the expansion of the postoperative bony socket and the rate of operative fear, there were no significant differences between the two groups (P>0.05). Piezosurgery better preserves alveolar bone and reduces trauma and patients' fear. Its application reflects the characteristics of minimally invasive extraction and merits wider adoption. Minimally invasive tooth extraction yields good clinical results and high satisfaction; Piezosurgery proved its worth as an instrument suited to limiting the destruction of bone tissue.

  17. Approach for gait analysis in persons with limb loss including residuum and prosthesis socket dynamics.

    PubMed

    LaPrè, A K; Price, M A; Wedge, R D; Umberger, B R; Sup, Frank C

    2018-04-01

    Musculoskeletal modeling and marker-based motion capture techniques are commonly used to quantify the motions of body segments, and the forces acting on them during human gait. However, when these techniques are applied to analyze the gait of people with lower limb loss, the clinically relevant interaction between the residual limb and prosthesis socket is typically overlooked. It is known that there is considerable motion and loading at the residuum-socket interface, yet traditional gait analysis techniques do not account for these factors due to the inability to place tracking markers on the residual limb inside of the socket. In the present work, we used a global optimization technique and anatomical constraints to estimate the motion and loading at the residuum-socket interface as part of standard gait analysis procedures. We systematically evaluated a range of parameters related to the residuum-socket interface, such as the number of degrees of freedom, and determined the configuration that yields the best compromise between faithfully tracking experimental marker positions while yielding anatomically realistic residuum-socket kinematics and loads that agree with data from the literature. Application of the present model to gait analysis for people with lower limb loss will deepen our understanding of the biomechanics of walking with a prosthesis, which should facilitate the development of enhanced rehabilitation protocols and improved assistive devices. Copyright © 2017 John Wiley & Sons, Ltd.
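    The global optimization idea, choosing model kinematics that best reproduce the experimental marker positions in a least-squares sense, can be reduced to a one-degree-of-freedom sketch. The rigid planar segment, marker placements and pose below are hypothetical stand-ins for the multi-DOF residuum-socket model:

```python
import math

def model_markers(theta, local_pts):
    """Markers on a rigid planar segment rotated by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in local_pts]

def fit_angle(measured, local_pts, steps=3600):
    """Global search for the segment angle that best tracks the measured
    markers in a least-squares sense."""
    def cost(theta):
        return sum((mx - px) ** 2 + (my - py) ** 2
                   for (mx, my), (px, py)
                   in zip(measured, model_markers(theta, local_pts)))
    return min((cost(2.0 * math.pi * k / steps), 2.0 * math.pi * k / steps)
               for k in range(steps))[1]

# Hypothetical body-fixed marker positions (metres) and a "true" pose
local = [(0.10, 0.00), (0.30, 0.00), (0.20, 0.05)]
measured = model_markers(0.35, local)
theta_hat = fit_angle(measured, local)
print(f"estimated segment angle = {theta_hat:.3f} rad")
```

In the full problem the search is over all joint coordinates at once, with anatomical constraints bounding the residuum-socket degrees of freedom, and gradient-based solvers replace the grid search.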

  18. Kalman Filter Tracking on Parallel Architectures

    NASA Astrophysics Data System (ADS)

    Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2016-11-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment.
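As a minimal illustration of the Kalman filter approach named above (not the authors' vectorized tracking code), the following sketch tracks a one-dimensional "track" with a constant-velocity state model; the noise variances and the straight-line test data are assumptions made for the example.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Minimal 1D constant-velocity Kalman filter.

    State x = [position, velocity]; only position is measured.
    q and r are illustrative process/measurement noise variances."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # measurement model
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])        # process noise
    R = np.array([[r]])                        # measurement noise
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    for z in measurements[1:]:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new hit
        y = np.array([[z]]) - H @ x            # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x.ravel()                           # final [position, velocity]

# Straight "track": position grows at 0.7 per step, with noisy hits
rng = np.random.default_rng(1)
true_v = 0.7
zs = true_v * np.arange(50) + rng.normal(0.0, 0.5, 50)
pos, vel = kalman_track(zs)
```

In track reconstruction the same predict/update cycle runs per detector layer with a multidimensional track state; parallelization, as in the work above, amounts to running many such filters concurrently across candidate tracks.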

  19. Identification of modal strains using sub-microstrain FBG data and a novel wavelength-shift detection algorithm

    NASA Astrophysics Data System (ADS)

    Anastasopoulos, Dimitrios; Moretti, Patrizia; Geernaert, Thomas; De Pauw, Ben; Nawrot, Urszula; De Roeck, Guido; Berghmans, Francis; Reynders, Edwin

    2017-03-01

    The presence of damage in a civil structure alters its stiffness and consequently its modal characteristics. Identifying these changes can provide engineers with useful information about the condition of a structure, and this constitutes the basic principle of vibration-based structural health monitoring. While eigenfrequencies and mode shapes are the most commonly monitored modal characteristics, their sensitivity to structural damage may be low relative to their sensitivity to environmental influences. Modal strains or curvatures could offer an attractive alternative, but current measurement techniques encounter difficulties in capturing, with sufficient accuracy, the very small strain (sub-microstrain) levels occurring during ambient or operational excitation. This paper investigates the ability to obtain sub-microstrain accuracy with standard fiber-optic Bragg gratings using a novel optical signal processing algorithm that identifies the wavelength shift with high accuracy and precision. The novel technique is validated in an extensive experimental modal analysis test on a steel I-beam instrumented with FBG sensors at its top and bottom flanges. The raw wavelength FBG data are processed into strain values using both a novel correlation-based processing technique and a conventional peak-tracking technique. Subsequently, the strain time series are used to identify the beam's modal characteristics. Finally, the accuracy of both algorithms in identifying modal characteristics is extensively investigated.

  20. Temperature Dependence of Viscosities of Common Carrier Gases

    ERIC Educational Resources Information Center

    Sommers, Trent S.; Nahir, Tal M.

    2005-01-01

    Theoretical and experimental evidence for the dependence of viscosities of the real gases on temperature is described, suggesting that this dependence is greater than that predicted by the kinetic theory of gases. The experimental results were obtained using common modern instrumentation and could be reproduced by students in analytical or…

  1. A Novel Microcharacterization Technique in the Measurement of Strain and Orientation Gradient in Advanced Materials

    NASA Technical Reports Server (NTRS)

    Garmestai, H.; Harris, K.; Lourenco, L.

    1997-01-01

    Representation of morphology and evolution of the microstructure during processing, and their relation to properties, requires proper experimental techniques. Residual strains, lattice distortion, and texture (micro-texture) at the interface and the matrix of a layered structure or a functionally gradient material, and their variation, are among the parameters important in materials characterization but hard to measure with present experimental techniques. Current techniques available to measure changes in internal material parameters (residual stress, micro-texture, microplasticity) produce results which are either qualitative or unreliable. This problem becomes even more complicated in the case of a temperature variation. These parameters affect many of the mechanical properties of advanced materials, including the stress-strain relation, ductility, creep, and fatigue. A review of some novel experimental techniques using recent advances in electron microscopy is presented here to measure internal stress, (micro)texture, interfacial strength, and (sub)grain formation and realignment. Two of these techniques are combined in the chamber of an Environmental Scanning Electron Microscope to measure strain and orientation gradients in advanced materials. These techniques, which include Backscattered Kikuchi Diffractometry (BKD) and Microscopic Strain Field Analysis, are used to characterize metallic and intermetallic matrix composites and superplastic materials. These techniques are compared with the more conventional x-ray diffraction and indentation techniques.

  2. Method and apparatus for determination of mechanical properties of functionally-graded materials

    DOEpatents

    Giannakopoulos, Antonios E.; Suresh, Subra

    1999-01-01

    Techniques for the determination of mechanical properties of homogenous or functionally-graded materials from indentation testing are presented. The technique is applicable to indentation on the nano-scale through the macro-scale including the geological scale. The technique involves creating a predictive load/depth relationship for a sample, providing an experimental load/depth relationship, comparing the experimental data to the predictive data, and determining a physical characteristic from the comparison.
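The patented procedure above (predictive load/depth relationship, experimental curve, comparison) can be illustrated with the standard Kick's-law form P = C·h² for sharp indentation of a homogeneous solid. The synthetic data, noise level, and fitting details below are assumptions for the example, not the patented method.

```python
import numpy as np

def fit_kick_constant(depth, load):
    """Least-squares fit of the curvature C in Kick's law P = C * h**2.
    The model is linear in C, so the solution is closed-form."""
    h2 = depth ** 2
    return float(np.dot(h2, load) / np.dot(h2, h2))

def relative_deviation(depth, load, C):
    """Mean deviation of the data from the predictive curve, normalized
    by the peak predicted load -- the 'comparison' step of the procedure."""
    pred = C * depth ** 2
    return float(np.mean(np.abs(load - pred)) / pred.max())

# Synthetic indentation data with 1% multiplicative noise; the units and
# the curvature value (70, arbitrary units) are purely illustrative
rng = np.random.default_rng(2)
h = np.linspace(0.1, 1.0, 40)                  # indentation depth
P = 70.0 * h ** 2 * (1.0 + rng.normal(0.0, 0.01, h.size))
C = fit_kick_constant(h, P)
```

A large deviation between the fitted predictive curve and the experimental data would signal that the homogeneous-solid assumption fails, which is how such comparisons can expose graded properties.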

  3. Eyebrow and Eyelash Hair Transplantation: A Systematic Review.

    PubMed

    Klingbeil, Kyle D; Fertig, Raymond

    2018-06-01

    The objective of this systematic review was to investigate the etiologies of hair loss of the eyebrow and eyelash that required hair transplantation, the optimal surgical technique, patient outcomes, and common complications. A total of 67 articles including 354 patients from 18 countries were included in this study. Most patients were women with an average age of 29 years. The most common etiology requiring hair transplantation was burns, occurring in 57.6 percent of cases. Both eyebrow and eyelash transplantation use follicular unit transplantation techniques most commonly; however, other techniques involving composite grafts and skin flaps continue to be utilized effectively with minimal complication rates. In summary, many techniques have been developed for use in eyebrow/eyelash transplantation and the selection of technique depends upon the dermatologic surgeon's preferences and the unique presentations of their patients.

  4. A processing centre for the CNES CE-GPS experimentation

    NASA Technical Reports Server (NTRS)

    Suard, Norbert; Durand, Jean-Claude

    1994-01-01

    CNES is involved in a GPS (Global Positioning System) geostationary overlay experimentation. The purpose of this experimentation is to test various new techniques in order to select the optimal station synchronization method, as well as the geostationary spacecraft orbitography method. These new techniques are needed to develop the Ranging GPS Integrity Channel services. The CNES experimentation includes three transmitting/receiving ground stations (manufactured by IN-SNEC), one INMARSAT 2 C/L-band transponder, and a processing center named STE (Station de Traitements de l'Experimentation). Not all the techniques to be tested are implemented, but the experimental system has to include several functions: some of the future system's simulation functions, such as a servo-loop function, and in particular a data collection function providing rapid monitoring of system operation, analysis of existing ground station processes, and several weeks of data coverage for other scientific studies. This paper discusses the system architecture and some criteria used in its design, the monitoring function, the approach used to develop a low-cost, short-lifetime processing center in collaboration with a CNES sub-contractor (ATTDATAID), and some results.

  5. NEW 3D TECHNIQUES FOR RANKING AND PRIORITIZATION OF CHEMICAL INVENTORIES

    EPA Science Inventory

    New three-dimensional quantitative structure activity (3-D QSAR) techniques for prioritizing chemical inventories for endocrine activity will be presented. The Common Reactivity Pattern (COREPA) approach permits identification of common steric and/or electronic patterns associate...

  6. Barriers impacting the utilization of supervision techniques in genetic counseling.

    PubMed

    Masunga, Abigail; Wusik, Katie; He, Hua; Yager, Geoffrey; Atzinger, Carrie

    2014-12-01

    Clinical supervision is an essential element in training genetic counselors. Although live supervision has been identified as the most common supervision technique utilized in genetic counseling, there is limited information on factors influencing its use as well as the use of other techniques. The purpose of this study was to identify barriers supervisors face when implementing supervision techniques. All participants (N = 141) reported utilizing co-counseling. This was most used with novice students (96.1%) and intermediate students (93.7%). Other commonly used techniques included live supervision where the supervisor is silent during session (98.6%) which was used most frequently with advanced students (94.0%), and student self-report (64.7%) used most often with advanced students (61.2%). Though no barrier to these commonly used techniques was identified by a majority of participants, the most frequently reported barriers included time and concern about patient's welfare. The remaining supervision techniques (live remote observation, video, and audio recording) were each used by less than 10% of participants. Barriers that significantly influenced use of these techniques included lack of facilities/equipment and concern about patient reactions to technique. Understanding barriers to implementation of supervisory techniques may allow students to be efficiently trained in the future by reducing supervisor burnout and increasing the diversity of techniques used.

  7. Design, Prototyping and Control of a Flexible Cystoscope for Biomedical Applications

    NASA Astrophysics Data System (ADS)

    Sozer, Canberk; Ghorbani, Morteza; Alcan, Gokhan; Uvet, Huseyin; Unel, Mustafa; Kosar, Ali

    2017-07-01

    Kidney stones and prostate hyperplasia are very common urogenital diseases all over the world. To treat these diseases, ESWL (Extracorporeal Shock Wave Lithotripsy), PCNL (Percutaneous Nephrolithotomy), cystoscopy, or open surgery techniques can be used. Devices called cystoscopes are used for in-vivo intervention: a flexible or rigid cystoscope is inserted into the human body and operates on the area of interest. In this study, a flexible cystoscope prototype has been developed. The prototype is able to bend up to ±40° in the X and Y axes, has a hydrodynamic cavitation probe for rounding the sharp edges of kidney stones or resecting prostate tissue with the hydrodynamic cavitation method, and contains a waterproof medical camera to give visual feedback to the operator. The operator steers the flexible end-effector via joystick toward the target region. This paper presents the design, manufacturing, control and experimental setup of the tendon-driven flexible cystoscope prototype. The prototype is 10 mm in outer diameter, 70 mm long in the flexible part alone, and 120 mm in total length including the flexible part and rigid tube. The experimental results show that the prototype bending mechanism, control system, manufactured prototype parts and experimental setup function properly. A small piece of real kidney stone was broken in the targeted area.

  8. Guidelines on experimental methods to assess mitochondrial dysfunction in cellular models of neurodegenerative diseases.

    PubMed

    Connolly, Niamh M C; Theurey, Pierre; Adam-Vizi, Vera; Bazan, Nicolas G; Bernardi, Paolo; Bolaños, Juan P; Culmsee, Carsten; Dawson, Valina L; Deshmukh, Mohanish; Duchen, Michael R; Düssmann, Heiko; Fiskum, Gary; Galindo, Maria F; Hardingham, Giles E; Hardwick, J Marie; Jekabsons, Mika B; Jonas, Elizabeth A; Jordán, Joaquin; Lipton, Stuart A; Manfredi, Giovanni; Mattson, Mark P; McLaughlin, BethAnn; Methner, Axel; Murphy, Anne N; Murphy, Michael P; Nicholls, David G; Polster, Brian M; Pozzan, Tullio; Rizzuto, Rosario; Satrústegui, Jorgina; Slack, Ruth S; Swanson, Raymond A; Swerdlow, Russell H; Will, Yvonne; Ying, Zheng; Joselin, Alvin; Gioran, Anna; Moreira Pinho, Catarina; Watters, Orla; Salvucci, Manuela; Llorente-Folch, Irene; Park, David S; Bano, Daniele; Ankarcrona, Maria; Pizzo, Paola; Prehn, Jochen H M

    2018-03-01

    Neurodegenerative diseases are a spectrum of chronic, debilitating disorders characterised by the progressive degeneration and death of neurons. Mitochondrial dysfunction has been implicated in most neurodegenerative diseases, but in many instances it is unclear whether such dysfunction is a cause or an effect of the underlying pathology, and whether it represents a viable therapeutic target. It is therefore imperative to utilise and optimise cellular models and experimental techniques appropriate to determine the contribution of mitochondrial dysfunction to neurodegenerative disease phenotypes. In this consensus article, we collate details on and discuss pitfalls of existing experimental approaches to assess mitochondrial function in in vitro cellular models of neurodegenerative diseases, including specific protocols for the measurement of oxygen consumption rate in primary neuron cultures, and single-neuron, time-lapse fluorescence imaging of the mitochondrial membrane potential and mitochondrial NAD(P)H. As part of the Cellular Bioenergetics of Neurodegenerative Diseases (CeBioND) consortium ( www.cebiond.org ), we are performing cross-disease analyses to identify common and distinct molecular mechanisms involved in mitochondrial bioenergetic dysfunction in cellular models of Alzheimer's, Parkinson's, and Huntington's diseases. Here we provide detailed guidelines and protocols as standardised across the five collaborating laboratories of the CeBioND consortium, with additional contributions from other experts in the field.

  9. The brain acid-base homeostasis and serotonin: A perspective on the use of carbon dioxide as human and rodent experimental model of panic.

    PubMed

    Leibold, N K; van den Hove, D L A; Esquivel, G; De Cort, K; Goossens, L; Strackx, E; Buchanan, G F; Steinbusch, H W M; Lesch, K P; Schruers, K R J

    2015-06-01

    Panic attacks (PAs), the core feature of panic disorder, represent a common phenomenon in the general adult population and are associated with a considerable decrease in quality of life and high health care costs. To date, the underlying pathophysiology of PAs is not well understood. A unique feature of PAs is that they represent a rare example of a psychopathological phenomenon that can be reliably modeled in the laboratory in panic disorder patients and healthy volunteers. The most effective techniques to experimentally trigger PAs are those that acutely disturb the acid-base homeostasis in the brain: inhalation of carbon dioxide (CO2), hyperventilation, and lactate infusion. This review particularly focuses on the use of CO2 inhalation in humans and rodents as an experimental model of panic. Besides highlighting the different methodological approaches, the cardio-respiratory and the endocrine responses to CO2 inhalation are summarized. In addition, the relationships between CO2 level, changes in brain pH, the serotonergic system, and adaptive physiological and behavioral responses to CO2 exposure are presented. We aim to present an integrated psychological and neurobiological perspective. Remaining gaps in the literature and future perspectives are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Electronic stopping powers for heavy ions in SiC and SiO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, K.; Xue, H.; Zhang, Y., E-mail: Zhangy1@ornl.gov

    2014-01-28

    Accurate information on electronic stopping power is fundamental for broad advances in materials science, the electronics industry, space exploration, and sustainable energy technologies. In the case of slow heavy ions in light targets, current codes and models provide significantly inconsistent predictions, among which the Stopping and Range of Ions in Matter (SRIM) code is the most commonly used. Experimental evidence, however, has demonstrated considerable errors in the predicted ion and damage profiles based on SRIM stopping powers. In this work, electronic stopping powers for Cl, Br, I, and Au ions are experimentally determined in two important functional materials, SiC and SiO2, based on a single-ion technique, and new electronic stopping power values are derived over the energy regime from 0 to 15 MeV, where large deviations from the SRIM predictions are observed. As an experimental validation, Rutherford backscattering spectrometry (RBS) and secondary ion mass spectrometry (SIMS) are utilized to measure the depth profiles of implanted Au ions in SiC for energies from 700 keV to 15 MeV. The measured ion distributions by both RBS and SIMS are considerably deeper than the SRIM predictions, but agree well with predictions based on our derived stopping powers.
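The link between stopping power and the measured depth profiles can be illustrated with the continuous-slowing-down approximation, in which the projected range is R = ∫₀^E0 dE/S(E): overestimating S(E), as SRIM appears to here, shortens the predicted range and makes predicted profiles too shallow. The velocity-proportional stopping form and all constants below are hypothetical, not the paper's SiC data.

```python
import numpy as np

def projected_range(E0, stopping, n=2000):
    """Continuous-slowing-down range R = integral of dE / S(E), evaluated
    with the trapezoidal rule from ~0 up to the incident energy E0."""
    E = np.linspace(E0 * 1e-3, E0, n)          # avoid the E -> 0 singularity
    y = 1.0 / stopping(E)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(E)) / 2.0)

# Hypothetical velocity-proportional electronic stopping, S = k * sqrt(E)
# (a common low-energy form; the constant is illustrative, not SiC data)
k = 0.5                                        # keV/nm per sqrt(keV)
def S(E):
    return k * np.sqrt(E)

R = projected_range(700.0, S)                  # range of a 700 keV ion, nm

# For S = k*sqrt(E) the integral has the closed form R = 2*sqrt(E0)/k,
# which checks the numerical result (up to the truncated lower limit)
R_exact = 2.0 * np.sqrt(700.0) / k

# A 20% overestimate of S(E) predicts a correspondingly shallower profile
R_overest = projected_range(700.0, lambda E: 1.2 * k * np.sqrt(E))
```

This is only the electronic-stopping piece of a range calculation; nuclear stopping and straggling, which codes like SRIM also model, are omitted from the sketch.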

  11. Numerical Characterization of Piezoceramics Using Resonance Curves

    PubMed Central

    Pérez, Nicolás; Buiochi, Flávio; Brizzotti Andrade, Marco Aurélio; Adamowski, Julio Cezar

    2016-01-01

    Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods. PMID:28787875
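The fitting loop described above (forward model, comparison with the experimental impedance curve, parameter update) can be sketched with a much simpler forward model than FEM: the Butterworth-Van Dyke equivalent circuit of a resonator, with an exhaustive grid search standing in for the iterative algorithm. All component values below are illustrative assumptions.

```python
import numpy as np

def bvd_impedance(f, C0, R1, L1, C1):
    """Butterworth-Van Dyke equivalent circuit of a piezoelectric
    resonator: clamped capacitance C0 in parallel with a series
    R1-L1-C1 motional branch."""
    w = 2.0 * np.pi * f
    z_series = R1 + 1j * w * L1 + 1.0 / (1j * w * C1)
    z_c0 = 1.0 / (1j * w * C0)
    return z_series * z_c0 / (z_series + z_c0)

def fit_motional(f, z_exp, C0, R1, L1_grid, C1_grid):
    """Search minimizing the misfit between the modeled and the
    'experimental' impedance magnitude curves (grid search here,
    an iterative optimizer in practice)."""
    best, best_err = None, np.inf
    for L1 in L1_grid:
        for C1 in C1_grid:
            err = np.sum((np.abs(bvd_impedance(f, C0, R1, L1, C1))
                          - np.abs(z_exp)) ** 2)
            if err < best_err:
                best, best_err = (L1, C1), err
    return best

# Synthetic "experimental" curve generated from known constants
f = np.linspace(1.8e6, 2.2e6, 400)             # Hz, around resonance
true = dict(C0=1e-9, R1=20.0, L1=5e-3, C1=1.3e-12)
z_exp = bvd_impedance(f, **true)
L1s = np.linspace(4e-3, 6e-3, 41)
C1s = np.linspace(1.0e-12, 1.6e-12, 31)
L1_fit, C1_fit = fit_motional(f, z_exp, true["C0"], true["R1"], L1s, C1s)
```

The FEM-based characterization in the review follows the same structure, but the forward model computes the disk's impedance from the full set of complex elastic, piezoelectric, and dielectric constants rather than from a lumped circuit.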

  12. Numerical Characterization of Piezoceramics Using Resonance Curves.

    PubMed

    Pérez, Nicolás; Buiochi, Flávio; Brizzotti Andrade, Marco Aurélio; Adamowski, Julio Cezar

    2016-01-27

    Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods.

  13. Electronic Stopping Powers For Heavy Ions In SiC And SiO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ke; Zhang, Y.; Zhu, Zihua

    2014-01-24

    Accurate information on electronic stopping power is fundamental for broad advances in materials science, the electronics industry, space exploration, and sustainable energy technologies. In the case of slow heavy ions in light targets, current codes and models provide significantly inconsistent predictions, among which the Stopping and Range of Ions in Matter (SRIM) code is the most commonly used. Experimental evidence, however, has demonstrated considerable errors in the predicted ion and damage profiles based on SRIM stopping powers. In this work, electronic stopping powers for Cl, Br, I, and Au ions are experimentally determined in two important functional materials, SiC and SiO2, based on a single-ion technique, and new electronic stopping power values are derived over the energy regime from 0 to 15 MeV, where large deviations from the SRIM predictions are observed. As an experimental validation, Rutherford backscattering spectrometry (RBS) and secondary ion mass spectrometry (SIMS) are utilized to measure the depth profiles of implanted Au ions in SiC for energies from 700 keV to 15 MeV. The measured ion distributions by both RBS and SIMS are considerably deeper than the SRIM predictions, but agree well with predictions based on our derived stopping powers.

  14. Statistical Modelling of Temperature and Moisture Uptake of Biochars Exposed to Selected Relative Humidity of Air.

    PubMed

    Bastistella, Luciane; Rousset, Patrick; Aviz, Antonio; Caldeira-Pires, Armando; Humbert, Gilles; Nogueira, Manoel

    2018-02-09

    New experimental techniques, as well as modern variants on known methods, have recently been employed to investigate the fundamental reactions underlying the oxidation of biochar. The purpose of this paper was to experimentally and statistically study how the relative humidity of air, mass, and particle size of four biochars influenced the adsorption of water and the increase in temperature. A random factorial design was employed using the intuitive statistical software Xlstat. A simple linear regression model and an analysis of variance with a pairwise comparison were performed. The experimental study was carried out on the wood of Quercus pubescens, Cyclobalanopsis glauca, Trigonostemon huangmosun, and Bambusa vulgaris, and involved five relative humidity conditions (22, 43, 75, 84, and 90%), two sample masses (0.1 and 1 g), and two particle sizes (powder and piece). Two response variables, water adsorption and temperature increase, were analyzed and discussed. The temperature did not increase linearly with the adsorption of water. Temperature was modeled by nine explanatory variables, while water adsorption was modeled by eight. Five variables, including factors and their interactions, were found to be common to the two models. Sample mass and relative humidity influenced the two qualitative variables, while particle size and biochar type only influenced the temperature.
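A linear regression of the kind described can be sketched with ordinary least squares; the coefficients, noise level, and encoding of the design factors below are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares: prepend an intercept column and solve
    with numpy's lstsq. Returns [intercept, slope_1, ..., slope_p]."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Synthetic data mimicking the factorial design: relative humidity (%),
# sample mass (g), and particle size coded 0 = powder, 1 = piece
rng = np.random.default_rng(3)
rh = rng.choice([22, 43, 75, 84, 90], 200).astype(float)
mass = rng.choice([0.1, 1.0], 200)
size = rng.choice([0.0, 1.0], 200)
# Hypothetical ground truth: uptake rises with humidity, mass, and size
uptake = 0.05 * rh + 2.0 * mass + 0.3 * size + rng.normal(0.0, 0.2, 200)

X = np.column_stack([rh, mass, size])
b0, b_rh, b_mass, b_size = fit_ols(X, uptake)
```

Interaction terms, as in the paper's nine- and eight-variable models, would be added as extra product columns (e.g. `rh * mass`) in the design matrix before the same lstsq call.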

  15. Experimental and theoretical elucidation of structural and antioxidant properties of vanillylmandelic acid and its carboxylate anion

    NASA Astrophysics Data System (ADS)

    Dimić, Dušan; Milenković, Dejan; Ilić, Jelica; Šmit, Biljana; Amić, Ana; Marković, Zoran; Dimitrić Marković, Jasmina

    2018-06-01

    Vanillylmandelic acid (VMA), an important metabolite of catecholamines that is routinely screened as a tumor marker, was investigated by various spectroscopic and analytical techniques (IR, Raman, UV-Vis, NMR, and an antioxidant decolorization assay). Structures optimized by the employment of five common functionals (M05-2X, M06-2X, B3LYP, CAM-B3LYP, B3LYP-D3) were compared with the crystallographic data. The M05-2X functional reproduced the experimental bond lengths and angles most reliably (correlation coefficient >0.999). The importance of intramolecular hydrogen bonds for structural stability was discussed and quantified by NBO analysis. The most prominent bands in the vibrational spectrum were analyzed and compared to the experimental data. The positions of the carbon and hydrogen atoms in the NMR spectra were well reproduced. The differences in the UV-Vis spectrum were investigated by adding explicit solvent molecules and by performing NBO and QTAIM analyses. The discrepancy of about 50 nm between the two spectra could be explained by the solvent effect on the carboxyl group. The most probable antioxidant activity mechanism was discussed for VMA and its carboxylate anion. A molecular docking study with C-reactive protein additionally showed that the variety of functional groups present in VMA and its anion allows strong hydrogen-bonding and hydrophobic interactions.

  16. Skin Microbiome Surveys Are Strongly Influenced by Experimental Design.

    PubMed

    Meisel, Jacquelyn S; Hannigan, Geoffrey D; Tyldsley, Amanda S; SanMiguel, Adam J; Hodkinson, Brendan P; Zheng, Qi; Grice, Elizabeth A

    2016-05-01

    Culture-independent studies to characterize skin microbiota are increasingly common, due in part to affordable and accessible sequencing and analysis platforms. Compared to culture-based techniques, DNA sequencing of the bacterial 16S ribosomal RNA (rRNA) gene or whole metagenome shotgun (WMS) sequencing provides more precise microbial community characterizations. Most widely used protocols were developed to characterize microbiota of other habitats (i.e., gastrointestinal) and have not been systematically compared for their utility in skin microbiome surveys. Here we establish a resource for the cutaneous research community to guide experimental design in characterizing skin microbiota. We compare two widely sequenced regions of the 16S rRNA gene to WMS sequencing for recapitulating skin microbiome community composition, diversity, and genetic functional enrichment. We show that WMS sequencing most accurately recapitulates microbial communities, but sequencing of hypervariable regions 1-3 of the 16S rRNA gene provides highly similar results. Sequencing of hypervariable region 4 poorly captures skin commensal microbiota, especially Propionibacterium. WMS sequencing, which is resource and cost intensive, provides evidence of a community's functional potential; however, metagenome predictions based on 16S rRNA sequence tags closely approximate WMS genetic functional profiles. This study highlights the importance of experimental design for downstream results in skin microbiome surveys. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Skin microbiome surveys are strongly influenced by experimental design

    PubMed Central

    Meisel, Jacquelyn S.; Hannigan, Geoffrey D.; Tyldsley, Amanda S.; SanMiguel, Adam J.; Hodkinson, Brendan P.; Zheng, Qi; Grice, Elizabeth A.

    2016-01-01

    Culture-independent studies to characterize skin microbiota are increasingly common, due in part to affordable and accessible sequencing and analysis platforms. Compared to culture-based techniques, DNA sequencing of the bacterial 16S ribosomal RNA (rRNA) gene or whole metagenome shotgun (WMS) sequencing provides more precise microbial community characterizations. Most widely used protocols were developed to characterize microbiota of other habitats (i.e. gastrointestinal), and have not been systematically compared for their utility in skin microbiome surveys. Here we establish a resource for the cutaneous research community to guide experimental design in characterizing skin microbiota. We compare two widely sequenced regions of the 16S rRNA gene to WMS sequencing for recapitulating skin microbiome community composition, diversity, and genetic functional enrichment. We show that WMS sequencing most accurately recapitulates microbial communities, but sequencing of hypervariable regions 1-3 of the 16S rRNA gene provides highly similar results. Sequencing of hypervariable region 4 poorly captures skin commensal microbiota, especially Propionibacterium. WMS sequencing, which is resource- and cost-intensive, provides evidence of a community’s functional potential; however, metagenome predictions based on 16S rRNA sequence tags closely approximate WMS genetic functional profiles. This work highlights the importance of experimental design for downstream results in skin microbiome surveys. PMID:26829039

  18. The nanoaquarium: A nanofluidic platform for in situ transmission electron microscopy in liquid media

    NASA Astrophysics Data System (ADS)

    Grogan, Joseph M.

    There are many scientifically interesting and technologically relevant nanoscale phenomena that take place in liquid media. Examples include aggregation and assembly of nanoparticles; colloidal crystal formation; liquid phase growth of structures such as nanowires; electrochemical deposition and etching for fabrication processes and battery applications; interfacial phenomena; boiling and cavitation; and biological interactions. Understanding of these fields would benefit greatly from real-time, in situ transmission electron microscope (TEM) imaging with nanoscale resolution. Most liquids cannot be imaged by traditional TEM due to evaporation in the high vacuum environment and the requirement that samples be very thin. Liquid-cell in situ TEM has emerged as an exciting new experimental technique that hermetically seals a thin slice of liquid between two electron transparent membranes to enable TEM imaging of liquid-based processes. This work presents details of the fabrication of a custom-made liquid-cell in situ TEM device, dubbed the nanoaquarium. The nanoaquarium's highlights include an exceptionally thin sample cross section (10s to 100s of nm); wafer scale processing that enables high-yield mass production; robust hermetic sealing that provides leak-free operation without use of glue, epoxy, or any polymers; compatibility with lab-on-chip technology; and on-chip integrated electrodes for sensing and actuation. The fabrication process is described, with an emphasis on direct wafer bonding. Experimental results involving direct observation of colloid aggregation using an aqueous solution of gold nanoparticles are presented. Quantitative analysis of the growth process agrees with prior results and theory, indicating that the experimental technique does not radically alter the observed phenomenon. 
For the first time, in situ observations of nanoparticles at a contact line and in an evaporating thin film of liquid are reported, with applications for techniques such as dip-coating and drop-casting, commonly used for depositing nanoparticles on a surface via convective-capillary assembly. Theoretical analysis suggests that the observed particle motion and aggregation are caused by gradients in surface tension and disjoining pressure in the thin liquid film.

  19. Controlling for confounding variables in MS-omics protocol: why modularity matters.

    PubMed

    Smith, Rob; Ventura, Dan; Prince, John T

    2014-09-01

As the field of bioinformatics research continues to grow, more and more novel techniques are proposed to meet new challenges and to improve upon solutions to long-standing problems. These include data processing techniques and wet lab protocol techniques. Although the literature is consistently thorough in experimental detail and variable-controlling rigor for wet lab protocols, bioinformatics techniques tend to be less thoroughly described and less tightly controlled. Because the validation or rejection of a hypothesis rests on the experiment's ability to isolate and measure a variable of interest, we stress the importance of reducing confounding variables in bioinformatics techniques during mass spectrometry experimentation. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  20. Experimental observation and simulation of unusual microwave response for the superconducting microstrip resonator at small dc magnetic field

    NASA Astrophysics Data System (ADS)

    Ong, C. K.; Rao, X. S.; Jin, B. B.

    1999-11-01

An unusual microwave response of the surface impedance Zs of high-Tc thin films to a small applied dc magnetic field (Bdc) at 77 K, namely a decrease of Zs, is observed with the microstrip resonator technique. The resonant frequency is 1.107 GHz. The direction of Bdc is parallel or perpendicular to the a-b plane, and Bdc ranges from 0 to 200 G. It is found that the surface resistance (Rs) for Bdc parallel to the a-b plane first decreases with Bdc and then increases above a crossover field. The Rs behaviour for Bdc perpendicular to the a-b plane is the same but with a different crossover field. The two behaviours can be collapsed onto one curve by scaling with the crossover fields. The changes of surface reactance Xs correlate linearly with the changes of Rs over these ranges of Bdc. The ratio rH of the changes of Rs and Xs (rH = ΔRs/ΔXs) is 0.5 for Bdc less than the crossover field and 0.1 for Bdc greater than the crossover field. The measurements also show that the crossover field is independent of rf input power. A phenomenological model is proposed to explain this unusual behaviour. By adjusting fitting parameters, the computed results agree qualitatively with the experimental results.

  1. Advanced optic fabrication using ultrafast laser radiation

    NASA Astrophysics Data System (ADS)

    Taylor, Lauren L.; Qiao, Jun; Qiao, Jie

    2016-03-01

    Advanced fabrication and finishing techniques are desired for freeform optics and integrated photonics. Methods including grinding, polishing and magnetorheological finishing used for final figuring and polishing of such optics are time consuming, expensive, and may be unsuitable for complex surface features while common photonics fabrication techniques often limit devices to planar geometries. Laser processing has been investigated as an alternative method for optic forming, surface polishing, structure writing, and welding, as direct tuning of laser parameters and flexible beam delivery are advantageous for complex freeform or photonics elements and material-specific processing. Continuous wave and pulsed laser radiation down to the nanosecond regime have been implemented to achieve nanoscale surface finishes through localized material melting, but the temporal extent of the laser-material interaction often results in the formation of a sub-surface heat affected zone. The temporal brevity of ultrafast laser radiation can allow for the direct vaporization of rough surface asperities with minimal melting, offering the potential for smooth, final surface quality with negligible heat affected material. High intensities achieved in focused ultrafast laser radiation can easily induce phase changes in the bulk of materials for processing applications. We have experimentally tested the effectiveness of ultrafast laser radiation as an alternative laser source for surface processing of monocrystalline silicon. Simulation of material heating associated with ultrafast laser-material interaction has been performed and used to investigate optimized processing parameters including repetition rate. The parameter optimization process and results of experimental processing will be presented.

  2. A technique for sequential segmental neuromuscular stimulation with closed loop feedback control.

    PubMed

    Zonnevijlle, Erik D H; Abadia, Gustavo Perez; Somia, Naveen N; Kon, Moshe; Barker, John H; Koenig, Steven; Ewert, D L; Stremel, Richard W

    2002-01-01

    In dynamic myoplasty, dysfunctional muscle is assisted or replaced with skeletal muscle from a donor site. Electrical stimulation is commonly used to train and animate the skeletal muscle to perform its new task. Due to simultaneous tetanic contractions of the entire myoplasty, muscles are deprived of perfusion and fatigue rapidly, causing long-term problems such as excessive scarring and muscle ischemia. Sequential stimulation contracts part of the muscle while other parts rest, thus significantly improving blood perfusion. However, the muscle still fatigues. In this article, we report a test of the feasibility of using closed-loop control to economize the contractions of the sequentially stimulated myoplasty. A simple stimulation algorithm was developed and tested on a sequentially stimulated neo-sphincter designed from a canine gracilis muscle. Pressure generated in the lumen of the myoplasty neo-sphincter was used as feedback to regulate the stimulation signal via three control parameters, thereby optimizing the performance of the myoplasty. Additionally, we investigated and compared the efficiency of amplitude and frequency modulation techniques. Closed-loop feedback enabled us to maintain target pressures within 10% deviation using amplitude modulation and optimized control parameters (correction frequency = 4 Hz, correction threshold = 4%, and transition time = 0.3 s). The large-scale stimulation/feedback setup was unfit for chronic experimentation, but can be used as a blueprint for a small-scale version to unveil the theoretical benefits of closed-loop control in chronic experimentation.
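The control loop described above can be sketched in a few lines. Only the reported control parameters (4 Hz correction frequency, 4% correction threshold) come from the abstract; the first-order "neo-sphincter" pressure model and the gain values are hypothetical stand-ins for illustration.

```python
# Sketch of threshold-gated amplitude modulation with pressure feedback.
# Plant model and gains are hypothetical; corr_hz and threshold are the
# values reported in the abstract (4 Hz, 4%).

def run_controller(target=100.0, seconds=10.0, corr_hz=4, threshold=0.04):
    pressure, amplitude = 0.0, 0.0
    trace = []
    for _ in range(int(seconds * corr_hz)):   # one correction per 0.25 s at 4 Hz
        # hypothetical first-order plant: lumen pressure follows amplitude
        pressure += 0.5 * (amplitude - pressure)
        error = (target - pressure) / target
        if abs(error) > threshold:            # correct only outside the 4% band
            amplitude += 0.5 * (target - pressure)
        trace.append(pressure)
    return trace

trace = run_controller()
print(f"final lumen pressure: {trace[-1]:.1f} (target 100.0)")
```

Gating corrections on the threshold is what lets the muscle "coast" near the target pressure instead of being continuously re-stimulated, which is the economizing effect the study was after.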

  3. Treatment of textile wastewater by a hybrid electrocoagulation/nanofiltration process.

    PubMed

    Aouni, Anissa; Fersi, Cheïma; Ben Sik Ali, Mourad; Dhahbi, Mahmoud

    2009-09-15

Untreated effluents from textile industries are usually highly coloured and contain considerable amounts of contaminants and pollutants. Stringent environmental regulations for the control of textile effluents are enforced in several countries. Previous studies have shown that many techniques have been used for the treatment of textile wastewater, such as adsorption, biological treatment, oxidation, and coagulation and/or flocculation; among these, coagulation is one of the most commonly used. Electrocoagulation is a process that creates metallic hydroxide flocs within the wastewater by the electrodissolution of soluble anodes, usually made of iron or aluminium. This method was practiced for most of the 20th century with limited success. In recent years, however, it has regained importance with the progress of electrochemical processes and the tightening of environmental restrictions on effluent wastewater. This paper examines the use of an electrocoagulation treatment process followed by nanofiltration of a textile effluent sample. The electrocoagulation process was studied under several conditions, such as various current densities and the effect of experimental time. Efficiencies of COD and turbidity reduction and colour removal were determined for each experiment. The electrochemical treatment was intended primarily to remove colour and COD from the wastewater, while nanofiltration was used to further improve the removal of colour, COD, conductivity, alkalinity and total dissolved solids (TDS). The experimental results obtained throughout the present study indicate that electrocoagulation treatment followed by nanofiltration was very effective and capable of elevating the quality of the treated textile wastewater effluent.
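The stage-wise removal efficiencies reported in studies like this one are simple inlet/outlet ratios. A minimal sketch, with illustrative concentration values that are not from the paper:

```python
# Removal-efficiency bookkeeping for a two-stage treatment train.
# All concentration values below are hypothetical, for illustration only.

def removal_pct(c_in, c_out):
    """Percent removal of a pollutant between inlet and outlet."""
    return 100.0 * (c_in - c_out) / c_in

cod_raw = 1200.0        # mg O2/L, hypothetical raw effluent COD
cod_after_ec = 420.0    # after electrocoagulation (hypothetical)
cod_after_nf = 60.0     # after nanofiltration polishing (hypothetical)

stage1 = removal_pct(cod_raw, cod_after_ec)
overall = removal_pct(cod_raw, cod_after_nf)
print(f"EC alone: {stage1:.0f}% COD removal; EC + NF: {overall:.0f}%")
```

The same function applies unchanged to turbidity, colour, conductivity, or TDS measurements.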

  4. Novel Application of Postmortem CT Angiography for Evaluation of the Intracranial Vascular Anatomy in Cadaver Heads.

    PubMed

    van Eijk, Ruben P A; van der Zwan, Albert; Bleys, Ronald L A W; Regli, Luca; Esposito, Giuseppe

    2015-12-01

    Postmortem CT angiography is a common procedure used to visualize the entire human vasculature. For visualization of a specific organ's vascular anatomy, casting is the preferred method. Because of the permanent and damaging nature of casting, the organ cannot be further used as an experimental model after angiography. Therefore, there is a need for a minimally traumatic method to visualize organ-specific vascular anatomy. The purpose of this study was to develop and evaluate a contrast enhancement technique that is capable of visualizing the intracranial vascular anatomy while preserving the anatomic integrity in cadaver heads. Seven human heads were used in this study. Heads were prepared by cannulating the vertebral and internal carotid arteries. Contrast agent was injected as a mixture of tap water, polyethylene glycol 600, and an iodinated contrast agent. Postmortem imaging was executed on a 64-MDCT scanner. Primary image review and 3D reconstruction were performed on a CT workstation. Clear visualization of the major cerebral arteries and smaller intracranial branches was achieved. Adequate visualization was obtained for both the anterior and posterior intracranial circulation. The minimally traumatic angiography method preserved the vascular integrity of the cadaver heads. A novel application of postmortem CT angiography is presented here. The technique can be used for radiologic evaluation of the intracranial circulation in cadaver heads. After CT angiography, the specimen can be used for further experimental or laboratory testing and teaching purposes.

  5. The effect of meditation on physical and mental health in junior college students: a quasi-experimental study.

    PubMed

    Yang, Ke-Ping; Su, Whei-Ming; Huang, Chen-Kuan

    2009-12-01

Physical stress and mental stress are increasingly common phenomena in our rapidly changing and stressful modern society. Research has found meditation to produce positive and demonstrable stress reduction effects on brain and immune functions. This study is grounded in traditional Chinese philosophical mores that teach a process summarized by the keynote activities of "calm, still, quiet, consider, and get" and the potential of this process to reduce stress in adolescents. The purpose of this study was to examine the effects of meditation on the physical and mental health of junior college students. This research employed a quasi-experimental design. Participants included 242 freshmen from a junior college in Taiwan selected using a convenience sampling technique. Participants were then randomly separated into experimental (n = 119) and control (n = 123) groups. The project duration was 18 weeks, during which the experimental group received 2 hours of meditation treatment per week, for a total of 36 hours. Both groups completed pretest and posttest Life Adaptation Scale forms, which included questionnaires addressing information on physical and mental distress and positive and negative coping strategies. Data were analyzed using analysis of covariance. Findings showed that the effect of the experimental treatment was significant when students' pretest physical and mental distress scores were controlled. Physical and mental symptoms in the experimental group were lower than those in the control group. Meditation can help students to adapt to life stressors. This study also provides support for traditional Chinese wisdom, which promotes meditation as one way to improve health.
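The analysis of covariance used above amounts to regressing the posttest score on a group indicator plus the pretest covariate; the group coefficient is then the covariate-adjusted treatment effect. A minimal sketch on synthetic scores (not the study's data), where the simulated treatment effect is set to -5 points:

```python
# ANCOVA via ordinary least squares on synthetic pretest/posttest scores.
# The data-generating parameters below are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 40
group = np.repeat([0, 1], n)                    # 0 = control, 1 = meditation
pretest = rng.normal(50.0, 10.0, 2 * n)         # baseline distress scores
# synthetic ground truth: treatment lowers posttest distress by 5 points
posttest = 0.8 * pretest - 5.0 * group + rng.normal(0.0, 3.0, 2 * n)

# design matrix: intercept, group indicator, pretest covariate
X = np.column_stack([np.ones(2 * n), group, pretest])
coef, *_ = np.linalg.lstsq(X, posttest, rcond=None)
print(f"covariate-adjusted treatment effect: {coef[1]:.2f}")
```

Controlling for the pretest in this way removes baseline differences between the randomized groups, which is exactly why the study reports effects "when pretest scores were controlled."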

  6. Comparison of three nondestructive and contactless techniques for investigations of recombination parameters on an example of silicon samples

    NASA Astrophysics Data System (ADS)

    Chrobak, Ł.; Maliński, M.

    2018-06-01

This paper presents a comparison of three nondestructive and contactless techniques used for the determination of recombination parameters of silicon samples: the photoacoustic method, the modulated free-carrier absorption method, and the photothermal radiometry method. The experimental set-ups used for measurements of the recombination parameters with these methods, as well as the theoretical models used for interpretation of the experimental data, are presented and described. The experimental results and their respective fits obtained with these nondestructive techniques are shown and discussed. The values of the recombination parameters obtained with these methods are also presented and compared, and the main advantages and disadvantages of the methods are discussed.

  7. A Dye-Tracer Technique for Experimentally Obtaining Impingement Characteristics of Arbitrary Bodies and a Method for Determining Droplet Size Distribution

    NASA Technical Reports Server (NTRS)

    VonGlahn, Uwe H.; Gelder, Thomas F.; Smyers, William H., Jr.

    1955-01-01

    A dye-tracer technique has been developed whereby the quantity of dyed water collected on a blotter-wrapped body exposed to an air stream containing a dyed-water spray cloud can be colorimetrically determined in order to obtain local collection efficiencies, total collection efficiency, and rearward extent of impingement on the body. In addition, a method has been developed whereby the impingement characteristics obtained experimentally for a body can be related to theoretical impingement data for the same body in order to determine the droplet size distribution of the impinging cloud. Several cylinders, a ribbon, and an aspirating device to measure cloud liquid-water content were used in the studies presented herein for the purpose of evaluating the dye-tracer technique. Although the experimental techniques used in the dye-tracer technique require careful control, the methods presented herein should be applicable for any wind tunnel provided the humidity of the air stream can be maintained near saturation.
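The total collection efficiency in techniques like this one is obtained by integrating the measured local collection efficiencies over the body surface and normalizing by the body's projected frontal height. A minimal sketch, with a hypothetical local-efficiency profile (the values and geometry are illustrative, not from the report):

```python
# Total collection efficiency from local values via trapezoidal integration.
# The beta profile and dimensions below are hypothetical.

def total_collection_efficiency(s, beta, frontal_height):
    """Integrate local collection efficiency beta over surface distance s,
    normalized by the body's projected frontal height (same length units)."""
    area = 0.0
    for i in range(1, len(s)):
        area += 0.5 * (beta[i] + beta[i - 1]) * (s[i] - s[i - 1])
    return area / frontal_height

# hypothetical beta profile on one side of a symmetric cylinder
s = [0.0, 0.5, 1.0, 1.5, 2.0]        # surface distance from stagnation, cm
beta = [0.8, 0.6, 0.35, 0.1, 0.0]    # local collection efficiency
E = 2 * total_collection_efficiency(s, beta, frontal_height=4.0)  # both sides
print(f"total collection efficiency: {E:.2f}")
```

The point where beta falls to zero marks the rearward extent of impingement, the third quantity the dye-tracer method recovers.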

  8. Fabrication and testing of an enhanced ignition system to reduce cold-start emissions in an ethanol (E85) light-duty truck engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardiner, D; Mallory, R; Todesco, M

This report describes an experimental investigation of the potential for an enhanced ignition system to lower the cold-start emissions of a light-duty vehicle engine using fuel ethanol (commonly referred to as E85). Plasma jet ignition and conventional inductive ignition were compared for a General Motors 4-cylinder, alcohol-compatible engine. Emission and combustion stability measurements were made over a range of air/fuel ratios and spark timing settings using a steady-state, cold-idle experimental technique in which the engine coolant was maintained at 25 C to simulate cold-running conditions. These tests were aimed at identifying the degree to which calibration strategies such as mixture enleanment and retarded spark timing could lower engine-out hydrocarbon emissions and raise exhaust temperatures, as well as determining how such calibration changes would affect the combustion stability of the engine (as quantified by the coefficient of variation, or COV, of indicated mean effective pressure calculated from successive cylinder pressure measurements). 44 refs., 39 figs.
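The combustion-stability metric named above, COV of IMEP, is simply the standard deviation of indicated mean effective pressure over successive cycles divided by its mean. A minimal sketch with made-up cycle data:

```python
# COV of IMEP: cycle-to-cycle combustion stability metric.
# The IMEP values below are hypothetical, for illustration only.
import statistics

def cov_of_imep(imep_cycles):
    """Coefficient of variation (percent) of IMEP over successive cycles."""
    mean = statistics.fmean(imep_cycles)
    return 100.0 * statistics.stdev(imep_cycles) / mean

imep = [3.1, 2.9, 3.3, 3.0, 2.8, 3.2, 3.1, 2.9]  # bar, hypothetical cycles
cov = cov_of_imep(imep)
print(f"COV of IMEP: {cov:.1f}%")
```

A rising COV with leaner mixtures or more retarded spark is the signature of deteriorating combustion stability that the study tracked.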

  9. Fabrication of Calix[4]arene Derivative Monolayers to Control Orientation of Antibody Immobilization

    PubMed Central

    Chen, Hongxia; Liu, Feng; Qi, Fangjie; Koh, Kwangnak; Wang, Keming

    2014-01-01

Three calix[4]arene (Cal-4) derivatives that separately contain ethyl ester (1), carboxylic acid (2), and crown ether (3) groups at the lower rim, with a common reactive thiol at the upper rim, were synthesized and assembled into self-assembled monolayers (SAMs) on Au films. After spectroscopic characterization of the monolayers, the surface coverage and orientation of antibody immobilized on the Cal-4 derivative SAMs were studied by the surface plasmon resonance (SPR) technique. Experimental results revealed that the antibody could be immobilized on the Cal-4 derivatives spontaneously. The orientation of adsorbed antibody on the Cal-4 derivative SAMs is related to the SAM's dipole moment. The possible orientations of the antibody immobilized on the Cal-4 derivative 1 SAM are lying-on or side-on, while those on Cal-4 derivatives 2 and 3 are head-on and end-on, respectively. These results demonstrate that the surface dipole moment of the Cal-4 derivative appears to be an important factor in antibody orientation. Cal-4 derivatives are useful in developing site-directed protein chips. PMID:24690993

  10. Intensity-hue-saturation-based image fusion using iterative linear regression

    NASA Astrophysics Data System (ADS)

    Cetin, Mufit; Tepecik, Abdulkadir

    2016-10-01

The image fusion process produces a high-resolution image by combining the superior features of a low-resolution multispectral image and a high-resolution panchromatic image. Despite its common usage, owing to its fast computation and high sharpening ability, the intensity-hue-saturation (IHS) fusion method may cause colour distortions, especially when large gray-value differences exist among the images to be combined. This paper proposes a spatially adaptive IHS (SA-IHS) technique to avoid these distortions by automatically adjusting the spatial information injected into the multispectral image during the fusion process. The SA-IHS method suppresses the effects of pixels that cause spectral distortions by assigning weaker weights to them, avoiding a large number of redundancies in the fused image. The experimental database consists of IKONOS images, and the experimental results, both visual and statistical, demonstrate the improvement of the proposed algorithm over several other IHS-like methods such as IHS, generalized IHS, fast IHS, and generalized adaptive IHS.
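The baseline the SA-IHS method improves on can be sketched in a few lines: compute the intensity of each multispectral pixel, and inject the difference between the panchromatic value and that intensity equally into each band. This uses the simple additive I = (R+G+B)/3 variant on tiny illustrative data; the SA-IHS weighting itself is not reproduced here.

```python
# Plain additive IHS fusion sketch (the baseline, not the paper's SA-IHS).
# Images are flattened lists of pixels; values are illustrative.

def ihs_fuse(rgb, pan):
    """Fuse RGB pixels with a pan band (assumed co-registered, same size)."""
    fused = []
    for (r, g, b), p in zip(rgb, pan):
        i = (r + g + b) / 3.0
        delta = p - i          # spatial detail injected into every band
        fused.append((r + delta, g + delta, b + delta))
    return fused

rgb = [(90, 120, 150), (60, 60, 60), (200, 180, 160), (30, 90, 30)]
pan = [130, 80, 170, 60]
out = ihs_fuse(rgb, pan)
print(out[0])
```

After fusion each pixel's intensity equals the pan value; the colour distortion the paper targets arises precisely when `delta` is large, which is what SA-IHS down-weights.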

  11. Demonstration of Space Optical Transmitter Development for Multiple High Frequency Bands

    NASA Technical Reports Server (NTRS)

    Nguyen, Hung; Simons, Rainee; Wintucky, Edwin; Freeman, Jon

    2013-01-01

    As the demand for multiple radio frequency carrier bands continues to grow in space communication systems, the design of a cost-effective compact optical transmitter that is capable of transmitting selective multiple RF bands is of great interest, particularly for NASA Space Communications Network Programs. This paper presents experimental results that demonstrate the feasibility of a concept based on an optical wavelength division multiplexing (WDM) technique that enables multiple microwave bands with different modulation formats and bandwidths to be combined and transmitted all in one unit, resulting in many benefits to space communication systems including reduced size, weight and complexity with corresponding savings in cost. Experimental results will be presented including the individual received RF signal power spectra for the L, C, X, Ku, Ka, and Q frequency bands, and measurements of the phase noise associated with each RF frequency. Also to be presented is a swept RF frequency power spectrum showing simultaneous multiple RF frequency bands transmission. The RF frequency bands in this experiment are among those most commonly used in NASA space environment communications.

  12. Analysis of different techniques to improve sound transmission loss in cylindrical shells

    NASA Astrophysics Data System (ADS)

    Oliazadeh, Pouria; Farshidianfar, Anooshiravan

    2017-02-01

In this study, sound transmission through double- and triple-walled shells is investigated. The structure-acoustic equations based on Donnell's shell theory are presented, and transmission losses calculated by this approach are compared with those obtained according to Love's theory. An experimental set-up is also constructed to compare the natural frequencies obtained from Donnell's and Love's theories with experimental results in the high-frequency region. Both comparisons show that Donnell's theory predicts the sound transmission characteristics and vibrational behavior better than Love's theory in the high-frequency region. The transmission losses of the double- and triple-walled constructions are then presented for various radii and thicknesses, and the effects of air gap size, an important design parameter, are studied. Sound transmission characteristics of a circular cylindrical shell are also computed, taking into account the effects of material damping: modest absorption is shown to greatly reduce sound transmission at the ring frequency and the coincidence frequency. The effects of five common gases used for filling the gap are also investigated.

  13. A protocol for rat in vitro fertilization during conventional laboratory working hours.

    PubMed

    Aoto, Toshihiro; Takahashi, Ri-ichi; Ueda, Masatsugu

    2011-12-01

    In vitro fertilization (IVF) is a valuable technique for the propagation of experimental animals. IVF has typically been used in mice to rapidly expand breeding colonies and create large numbers of embryos. However, applications of IVF in rat breeding experiments have stalled due to the inconvenient laboratory work schedules imposed by current IVF protocols for this species. Here, we developed a new rat IVF protocol that consists of experimental steps performed during common laboratory working hours. Our protocol can be completed within 12 h by shortening the period of sperm capacitation from 5 to 1 h and the fertilization time from 10 to 8 h in human tubal fluid (HTF) medium. This new protocol generated an excellent birth rate and was applicable not only to closed colony rat strains, such as Wistar, Long-Evans, and Sprague-Dawley (SD), but also to the inbred Lewis strain. Moreover, Wistar and Long-Evans embryos prepared by this protocol were successfully frozen by vitrification and later successfully thawed and resuscitated. This protocol is practical and can be easily adopted by laboratory workers.

  14. Anatomic Peculiarities of Pig and Human Liver.

    PubMed

    Nykonenko, Andriy; Vávra, Petr; Zonča, Pavel

    2017-02-01

    Many investigations on surgical methods and medical treatment are currently done on pigs. This is possible because the pig is sufficiently close genetically to humans. In recent years, progress in liver surgery has opened new possibilities in surgical treatment of liver diseases. Because the methods are relatively novel, various improvements are still needed, and it is thus helpful to conduct experimental surgeries on pig livers. We reviewed the literature to compare the anatomic and functional features of pig and human livers, information that will be of great importance for improving surgical techniques. During the literature review, we used various sources, such as PubMed, Scopus, and veterinary journals. Our results were summarized in diagrams to facilitate understanding of the vascular structure and biliary systems. We conclude that, although the shapes of the human and pig livers are quite different, the pig liver is divided into the same number of segments as the human liver, which also shows a common structure of the vascular system. Thus, with the anatomic and structural features of the pig liver taken into account, this animal model can be used in experimental hepatic surgery.

  15. Absolute Paleointensity Estimates using Combined Shaw and Pseudo-Thellier Experimental Protocols

    NASA Astrophysics Data System (ADS)

    Foucher, M. S.; Smirnov, A. V.

    2016-12-01

Data on the long-term evolution of Earth's magnetic field intensity have great potential to advance our understanding of many aspects of the Earth's evolution. However, paleointensity determination is one of the most challenging aspects of paleomagnetic research, so the quantity and quality of existing paleointensity data remain limited, especially for older epochs. While the Thellier double-heating method remains the most commonly used paleointensity technique, its applicability is limited for many rocks that undergo magneto-mineralogical alteration during the successive heating steps required by the method. To reduce the probability of alteration, several alternative methods that involve few or no heating steps have been proposed. However, continued efforts are needed to better understand the physical foundations and relative efficiency of reduced- or non-heating methods in recovering the true paleofield strength, and to better constrain their calibration factors. We will present the results of our investigation of synthetic and natural magnetite-bearing samples using a combination of the LTD-DHT Shaw and pseudo-Thellier experimental protocols for absolute paleointensity estimation.

  16. Animal models of post-ischemic forced use rehabilitation: methods, considerations, and limitations

    PubMed Central

    2013-01-01

    Many survivors of stroke experience arm impairments, which can severely impact their quality of life. Forcing use of the impaired arm appears to improve functional recovery in post-stroke hemiplegic patients, however the mechanisms underlying improved recovery remain unclear. Animal models of post-stroke rehabilitation could prove critical to investigating such mechanisms, however modeling forced use in animals has proven challenging. Potential problems associated with reported experimental models include variability between stroke methods, rehabilitation paradigms, and reported outcome measures. Herein, we provide an overview of commonly used stroke models, including advantages and disadvantages of each with respect to studying rehabilitation. We then review various forced use rehabilitation paradigms, and highlight potential difficulties and translational problems. Lastly, we discuss the variety of functional outcome measures described by experimental researchers. To conclude, we outline ongoing challenges faced by researchers, and the importance of translational communication. Many stroke patients rely critically on rehabilitation of post-stroke impairments, and continued effort toward progression of rehabilitative techniques is warranted to ensure best possible treatment of the devastating effects of stroke. PMID:23343500

  17. Numerical simulation of the laser welding process for the prediction of temperature distribution on welded aluminium aircraft components

    NASA Astrophysics Data System (ADS)

    Tsirkas, S. A.

    2018-03-01

The present investigation is focused on the modelling of the temperature field in aluminium aircraft components welded by a CO2 laser. A three-dimensional finite element model has been developed to simulate the laser welding process and predict the temperature distribution in T-joint laser-welded plates with fillet material. The simulation of the laser beam welding process was performed using a nonlinear heat transfer analysis based on a keyhole formation model. The model employs the element "birth and death" technique to simulate the weld fillet. Various phenomena associated with welding, such as temperature-dependent material properties and heat losses through convection and radiation, were accounted for in the model. The materials considered were 6056-T78 and 6013-T4 aluminium alloys, commonly used for aircraft components. The temperature distribution during the laser welding process was calculated numerically and validated by experimental measurements at different locations on the welded structure. The numerical results are in good agreement with the experimental measurements.

  18. Dynamic causal modelling: a critical review of the biophysical and statistical foundations.

    PubMed

    Daunizeau, J; David, O; Stephan, K E

    2011-09-15

    The goal of dynamic causal modelling (DCM) of neuroimaging data is to study experimentally induced changes in functional integration among brain regions. This requires (i) biophysically plausible and physiologically interpretable models of neuronal network dynamics that can predict distributed brain responses to experimental stimuli and (ii) efficient statistical methods for parameter estimation and model comparison. These two key components of DCM have been the focus of more than thirty methodological articles since the seminal work of Friston and colleagues published in 2003. In this paper, we provide a critical review of the current state-of-the-art of DCM. We inspect the properties of DCM in relation to the most common neuroimaging modalities (fMRI and EEG/MEG) and the specificity of inference on neural systems that can be made from these data. We then discuss both the plausibility of the underlying biophysical models and the robustness of the statistical inversion techniques. Finally, we discuss potential extensions of the current DCM framework, such as stochastic DCMs, plastic DCMs and field DCMs. Copyright © 2009 Elsevier Inc. All rights reserved.

  19. Novel Fourier-domain constraint for fast phase retrieval in coherent diffraction imaging.

    PubMed

    Latychevskaia, Tatiana; Longchamp, Jean-Nicolas; Fink, Hans-Werner

    2011-09-26

Coherent diffraction imaging (CDI), which can visualize objects at atomic resolution, has emerged as a promising tool for imaging single molecules. Drawbacks of CDI are associated with the difficulty of numerical phase retrieval from experimental diffraction patterns, a fact which has stimulated the search for better numerical methods and alternative experimental techniques. Common phase retrieval methods are based on iterative procedures that propagate the complex-valued wave between the object and detector planes, applying constraints in both planes. While the detector-plane constraint employed in most phase retrieval methods requires the amplitude of the complex wave to equal the square root of the measured intensity, we propose a novel Fourier-domain constraint based on an analogy to holography. Our method achieves a low-resolution reconstruction already in the first step, followed by a high-resolution reconstruction after further steps. In comparison to conventional schemes, this Fourier-domain constraint results in fast and reliable convergence of the iterative reconstruction process. © 2011 Optical Society of America
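The "conventional scheme" the abstract contrasts with is the iterative error-reduction loop: impose the measured Fourier modulus in the detector plane, and a support (plus non-negativity) constraint in the object plane. A minimal sketch of that baseline on a synthetic object follows; it does not reproduce the paper's holography-inspired constraint, and the object, support, and iteration count are illustrative.

```python
# Error-reduction phase retrieval baseline on a synthetic 32x32 object.
# This is the conventional scheme, not the paper's novel constraint.
import numpy as np

rng = np.random.default_rng(0)

n = 32
support = np.zeros((n, n), dtype=bool)
support[12:20, 12:20] = True              # known object support
obj = np.zeros((n, n))
obj[support] = rng.random(support.sum())  # synthetic test object

measured_modulus = np.abs(np.fft.fft2(obj))   # "diffraction pattern" amplitudes

guess = rng.random((n, n))
for _ in range(200):
    F = np.fft.fft2(guess)
    # detector-plane constraint: keep the phase, impose the measured modulus
    F = measured_modulus * np.exp(1j * np.angle(F))
    guess = np.fft.ifft2(F).real
    # object-plane constraints: zero outside the support, no negative values
    guess[~support] = 0.0
    guess[guess < 0] = 0.0

err = np.linalg.norm(guess - obj) / np.linalg.norm(obj)
print(f"relative reconstruction error: {err:.3f}")
```

The paper's contribution replaces the modulus-projection step with a Fourier-domain constraint built on a holographic analogy, which reportedly converges faster and more reliably than this alternating-projection baseline.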

  20. Experimental investigation of differential confinement effects in a rotating helicon plasma

    NASA Astrophysics Data System (ADS)

    Gueroult, Renaud; Evans, Eugene; Zweben, Stewart J.; Fisch, Nathaniel J.; Levinton, Fred

    2014-10-01

Although plasmas have long been considered for isotope separation, challenges presented by nuclear waste remediation and spent nuclear fuel reprocessing have recently sparked renewed interest in high-throughput plasma-based mass separation techniques. Different filter concepts relying on rotating plasmas have been proposed to address these needs. However, one challenge common to these concepts is the need to control the plasma rotation profile, which is generally assumed to be provided by means of dedicated electrodes. An experimental effort aimed at evaluating the practicality of these plasma filter concepts has recently been started at PPPL. For this purpose, a linear helicon plasma source is used in combination with concentric ring electrodes. Preliminary biasing experiments indicate floating potential profiles locally suitable for mass discrimination for different gas mixtures (Ar/Ne, Ar/N2, Ar/Kr). Radially resolved spectroscopic measurements and neutral gas composition analysis at two different axial positions are being planned to assess the mass separation effect. Work supported by US DOE under Contract No. DE-AC02-09CH11466.

  1. Support System Effects on the NASA Common Research Model

    NASA Technical Reports Server (NTRS)

    Rivers, S. Melissa B.; Hunter, Craig A.

    2012-01-01

    An experimental investigation of the NASA Common Research Model was conducted in the NASA Langley National Transonic Facility and NASA Ames 11-Foot Transonic Wind Tunnel Facility for use in the Drag Prediction Workshop. As data from the experimental investigations were collected, a large difference in moment values was seen between the experimental and the computational data from the 4th Drag Prediction Workshop. This difference led to the present work. In this study, a computational assessment has been undertaken to investigate model support system interference effects on the Common Research Model. The configurations computed during this investigation were the wing/body/tail=0deg without the support system and the wing/body/tail=0deg with the support system. The results from this investigation confirm that the addition of the support system to the computational cases does shift the pitching moment in the direction of the experimental results.

  2. A comparison of cord gingival displacement with the gingitage technique.

    PubMed

    Tupac, R G; Neacy, K

    1981-11-01

    Fifteen young adult dogs were divided into three groups representing 0-, 7-, and 21-day healing periods. Randomly selected cuspid teeth were used to compare cord gingival displacement and gingitage techniques for subgingival tooth preparation and impression making. Clinical and histologic measurements were used as a basis for comparison. Results indicate that (1) the experimental teeth were clinically healthy at the beginning of the experiment, (2) clinical health of the gingival tissues was controlled throughout the course of the experiment, and (3) within this experimental setting, there was no significant difference between the cord gingival displacement technique and the gingitage technique.

  3. Sensitivity analysis of hybrid thermoelastic techniques

    Treesearch

    W.A. Samad; J.M. Considine

    2017-01-01

    Stress functions have been used as a complementary tool to support experimental techniques, such as thermoelastic stress analysis (TSA) and digital image correlation (DIC), in an effort to evaluate the complete and separate full-field stresses of loaded structures. The need for such coupling between experimental data and stress functions is due to the fact that...

  4. The Use of Techniques of Sensory Evaluation as a Framework for Teaching Experimental Methods.

    ERIC Educational Resources Information Center

    Bennett, R.; Hamilton, M.

    1981-01-01

    Describes sensory assessment techniques and conditions for their satisfactory performance, including how they can provide open-ended exercises and their advantages as relatively inexpensive and simple methods of teaching experimentation. Experiments described focus on the diffusion of salt into potatoes after cooking in boiling salted water. (Author/JN)

  5. Monitoring Thermal Performance of Hollow Bricks with Different Cavity Fillers in Different Climate Conditions

    NASA Astrophysics Data System (ADS)

    Pavlík, Zbyšek; Jerman, Miloš; Fořt, Jan; Černý, Robert

    2015-03-01

    Hollow brick blocks have found widespread use in the building industry during the last decades. The increasing requirements on the thermal insulation properties of building envelopes imposed by national standards in Europe have led brick producers to reduce the production of common solid bricks. Brick blocks with more or less complex systems of internal cavities replaced the traditional bricks and became dominant on the building ceramics market. However, contrary to the solid bricks where the thermal conductivity can easily be measured by standard methods, the complex geometry of hollow brick blocks makes the application of common techniques impossible. In this paper, a steady-state technique utilizing a system of two climatic chambers separated by a connecting tunnel for sample positioning is used for the determination of the thermal conductivity, thermal resistance, and thermal transmittance (U value) of hollow bricks with the cavities filled by air, two different types of mineral wool, polystyrene balls, and foam polyurethane. The particular brick block is provided with the necessary temperature- and heat-flux sensors and thermally insulated in the tunnel. In the climatic chambers, different temperatures are set. After steady-state conditions are established in the measuring system, the effective thermal properties of the brick block are calculated using the measured data. Experimental results show that the best results are achieved with hydrophilic mineral wool as a cavity filler; the worst performance is exhibited by the brick block with air-filled cavities.
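    Once steady state is reached, the evaluation described above reduces to a few ratios of measured quantities. A minimal sketch, assuming one-dimensional heat flow through the specimen; the surface resistances 0.13 and 0.04 m²·K/W are the standard internal/external values from EN ISO 6946, and the function name and sample values are illustrative:

```python
def effective_thermal_props(q, t_hot, t_cold, thickness,
                            r_si=0.13, r_se=0.04):
    """Effective thermal properties of a wall specimen from steady-state
    measurements (a sketch assuming 1-D heat flow).

    q         : measured heat flux density through the specimen [W/m^2]
    t_hot     : hot-side surface temperature [degC]
    t_cold    : cold-side surface temperature [degC]
    thickness : specimen thickness [m]
    """
    r = (t_hot - t_cold) / q     # thermal resistance R [m^2*K/W]
    lam = thickness / r          # effective thermal conductivity [W/(m*K)]
    u = 1.0 / (r_si + r + r_se)  # thermal transmittance U [W/(m^2*K)]
    return r, lam, u
```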

  6. Stable and simple quantitative phase-contrast imaging by Fresnel biprism

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Samira; Dashtdar, Masoomeh; Sánchez-Ortiga, Emilio; Martínez-Corral, Manuel; Javidi, Bahram

    2018-03-01

    Digital holographic (DH) microscopy has grown into a powerful nondestructive technique for the real-time study of living cells including dynamic membrane changes and cell fluctuations at nanometer and sub-nanometer scales. The conventional DH microscopy configurations require a separately generated coherent reference wave that results in a low phase stability and a necessity to precisely adjust the intensity ratio between two overlapping beams. In this work, we present a compact, simple, and very stable common-path DH microscope, employing a self-referencing configuration. The microscope is implemented by a diode laser as the source and a Fresnel biprism for splitting and recombining the beams simultaneously. In the overlapping area, linear interference fringes with high contrast are produced. The frequency of the interference pattern can be easily adjusted by displacing the biprism along the optical axis without a decrease in fringe contrast. To evaluate the validity of the method, the spatial noise and temporal stability of the setup are compared with the common off-axis DH microscope based on a Mach-Zehnder interferometer. It is shown that the proposed technique has low mechanical noise as well as superb temporal stability with sub-nanometer precision without any external vibration isolation. The higher temporal stability improves the capabilities of the microscope for studying micro-object fluctuations, particularly in the case of biological specimens. Experimental results are presented using red blood cells and silica microspheres to demonstrate the system performance.

  7. Enhancing quantum annealing performance for the molecular similarity problem

    NASA Astrophysics Data System (ADS)

    Hernandez, Maritza; Aramon, Maliheh

    2017-05-01

    Quantum annealing is a promising technique which leverages quantum mechanics to solve hard optimization problems. Considerable progress has been made in the development of a physical quantum annealer, motivating the study of methods to enhance the efficiency of such a solver. In this work, we present a quantum annealing approach to measure similarity among molecular structures. Implementing real-world problems on a quantum annealer is challenging due to hardware limitations such as sparse connectivity, intrinsic control error, and limited precision. In order to overcome the limited connectivity, a problem must be reformulated using minor-embedding techniques. Using a real data set, we investigate the performance of a quantum annealer in solving the molecular similarity problem. We provide experimental evidence that common practices for embedding can be replaced by new alternatives which mitigate some of the hardware limitations and enhance its performance. Common practices for embedding include minimizing either the number of qubits or the chain length and determining the strength of ferromagnetic couplers empirically. We show that current criteria for selecting an embedding do not improve the hardware's performance for the molecular similarity problem. Furthermore, we use a theoretical approach to determine the strength of ferromagnetic couplers. Such an approach removes the computational burden of the current empirical approaches and also results in hardware solutions that can benefit from simple local classical improvement. Although our results are limited to the problems considered here, they can be generalized to guide future benchmarking studies.
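    The chain-strength trade-off discussed above can be illustrated on a toy Ising model small enough to solve by brute force. One logical spin is embedded as a two-qubit chain held together by a ferromagnetic coupler; when competing problem terms pull the chain qubits in opposite directions, the chain breaks unless the coupler is strong enough. The fields and couplings below are invented for illustration and are not from the paper's data set:

```python
import itertools

def ground_state(h, J):
    """Brute-force ground state of a small Ising model
    E(s) = sum_i h[i]*s_i + sum_(i,j) J[(i,j)]*s_i*s_j."""
    n = len(h)
    best, best_e = None, float("inf")
    for s in itertools.product([-1, 1], repeat=n):
        e = sum(h[i] * s[i] for i in range(n))
        e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
        if e < best_e:
            best, best_e = s, e
    return best

# Logical spin L is embedded as the chain (qubit 0, qubit 1); qubit 2 is
# logical spin M.  The field on qubit 0 and the problem coupling on
# qubit 1 pull the chain qubits in opposite directions.
h = [1.0, 0.0, 1.0]
for chain_strength in (0.4, 2.0):
    J = {(0, 1): -chain_strength,  # ferromagnetic chain coupler
         (1, 2): 1.0}              # problem coupling
    s = ground_state(h, J)
    print(chain_strength, "intact" if s[0] == s[1] else "broken")
    # → 0.4 broken, then 2.0 intact
```

    Determining such a sufficient coupler strength analytically, rather than empirically, is the kind of theoretical chain-strength choice the abstract advocates.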

  8. Precision Blasting Techniques For Avalanche Control

    NASA Astrophysics Data System (ADS)

    Powell, Kevin M.

    Experimental firings sponsored by the Center For Snow Science at Alta, Utah have demonstrated the potential of a unique prototype shaped charge device designed to stimulate snow pack and ice. These studies, conducted against stable snow pack, demonstrated a fourfold increase in crater volume yield and introduced a novel application of Shock Tube technology to facilitate position control, detonation and dud recovery of manually deployed charges. The extraordinary penetration capability of the shaped charge mechanism has been exploited in many non-military applications to meet a wide range of rapid piercing and/or cutting requirements. The broader exploitation of the potential of the shaped charge mechanism has nevertheless remained confined to defence-based applications. In the studies reported in this paper, the inimitable ability of the shaped charge mechanism to project shock energy, or a liner material, into a highly focussed energetic stream has been applied uniquely to the stimulation of snow pack. Recent research and development work, conducted within the UK, has resulted in the integration of shaped charge technology into a common Avalauncher and hand charge device. The potential of the common charge configuration and spooled Shock Tube fire and control system to improve the safety and cost effectiveness of explosives used in avalanche control operations was successfully demonstrated at Alta in March 2001. Future programmes of study will include focussed shock/blast mechanisms for suspended wire traverse techniques, application of the shaped charge mechanism to helibombing, and the design and development of non-fragmenting shaped charge ammunition for military artillery gun systems.

  9. Digression and Value Concatenation to Enable Privacy-Preserving Regression.

    PubMed

    Li, Xiao-Bai; Sarkar, Sumit

    2014-09-01

    Regression techniques can be used not only for legitimate data analysis, but also to infer private information about individuals. In this paper, we demonstrate that regression trees, a popular data-analysis and data-mining technique, can be used to effectively reveal individuals' sensitive data. This problem, which we call a "regression attack," has not been addressed in the data privacy literature, and existing privacy-preserving techniques are not appropriate in coping with this problem. We propose a new approach to counter regression attacks. To protect against privacy disclosure, our approach introduces a novel measure, called digression, which assesses the sensitive value disclosure risk in the process of building a regression tree model. Specifically, we develop an algorithm that uses the measure for pruning the tree to limit disclosure of sensitive data. We also propose a dynamic value-concatenation method for anonymizing data, which better preserves data utility than a user-defined generalization scheme commonly used in existing approaches. Our approach can be used for anonymizing both numeric and categorical data. An experimental study is conducted using real-world financial, economic and healthcare data. The results of the experiments demonstrate that the proposed approach is very effective in protecting data privacy while preserving data quality for research and analysis.

  10. Methodology for Assessing the Probability of Corrosion in Concrete Structures on the Basis of Half-Cell Potential and Concrete Resistivity Measurements

    PubMed Central

    2013-01-01

    In recent years, the corrosion of steel reinforcement has become a major problem in the construction industry. Therefore, much attention has been given to developing methods of predicting the service life of reinforced concrete structures. The progress of corrosion cannot be visually assessed until a crack or a delamination appears. The corrosion process can be tracked using several electrochemical techniques. Most commonly, the half-cell potential measurement technique is used for this purpose. However, it is generally accepted that it should be supplemented with other techniques. Hence, a methodology for assessing the probability of corrosion in concrete slabs by means of a combination of two methods, that is, the half-cell potential method and the concrete resistivity method, is proposed. An assessment of the probability of corrosion in reinforced concrete structures carried out using the proposed methodology is presented. Reinforced concrete slab specimens measuring 750 mm × 750 mm and 200 mm thick were investigated. The potential E_corr and the concrete resistivity ρ were measured at each point of the applied grid. The experimental results indicate that the proposed methodology can be successfully used to assess the probability of corrosion in concrete structures. PMID:23766706
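    The combination of the two methods can be sketched as a per-grid-point classification. The potential thresholds below are the familiar ASTM C876 bands for a Cu/CuSO4 reference electrode and the resistivity bands are commonly cited guideline values; the decision rule itself is a hypothetical sketch, not the authors' exact methodology:

```python
def corrosion_assessment(e_corr_mv, resistivity_kohm_cm):
    """Qualitative corrosion-risk class for one grid point, combining
    half-cell potential (mV vs Cu/CuSO4, ASTM C876 bands) with concrete
    resistivity (kOhm*cm, guideline bands).  Illustrative decision rule."""
    if e_corr_mv > -200:            # > 90% probability of no corrosion
        potential_risk = "low"
    elif e_corr_mv >= -350:         # uncertain region
        potential_risk = "uncertain"
    else:                           # > 90% probability of corrosion
        potential_risk = "high"
    if resistivity_kohm_cm > 20:
        resistivity_risk = "low"
    elif resistivity_kohm_cm >= 10:
        resistivity_risk = "moderate"
    else:
        resistivity_risk = "high"
    # flag a point only when both methods agree
    if potential_risk == "high" and resistivity_risk == "high":
        return "corrosion likely"
    if potential_risk == "low" and resistivity_risk == "low":
        return "corrosion unlikely"
    return "inconclusive - inspect further"
```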

  11. Advanced Bode Plot Techniques for Ultrasonic Transducers

    NASA Astrophysics Data System (ADS)

    DeAngelis, D. A.; Schulze, G. W.

    The Bode plot, displayed as either impedance or admittance versus frequency, is the most basic test used by ultrasonic transducer designers. With simplicity and ease-of-use, Bode plots are ideal for baseline comparisons such as spacing of parasitic modes or impedance, but quite often the subtleties that manifest as poor process control are hard to interpret or are nonexistent. In-process testing of transducers is time consuming for quantifying statistical aberrations, and assessments made indirectly via the workpiece are difficult. This research investigates the use of advanced Bode plot techniques to compare ultrasonic transducers with known "good" and known "bad" process performance, with the goal of a priori process assessment. These advanced techniques expand from the basic constant voltage versus frequency sweep to include constant current and constant velocity interrogated locally on transducer or tool; they also include up and down directional frequency sweeps to quantify hysteresis effects like jumping and dropping phenomena. The investigation focuses solely on the common PZT8 piezoelectric material used with welding transducers for semiconductor wire bonding. Several metrics are investigated such as impedance, displacement/current gain, velocity/current gain, displacement/voltage gain and velocity/voltage gain. The experimental and theoretical research methods include Bode plots, admittance loops, laser vibrometry and coupled-field finite element analysis.
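    A baseline impedance-versus-frequency sweep of the kind described above can be simulated with the Butterworth-Van Dyke equivalent circuit commonly used to model piezoelectric transducers: a motional RLC branch in parallel with the static capacitance. The component values below are illustrative, not measured PZT8 parameters:

```python
import numpy as np

def bvd_impedance(f, rm=10.0, lm=50e-3, cm=10e-12, c0=2e-9):
    """Impedance of the Butterworth-Van Dyke equivalent circuit
    (illustrative component values, not a characterized transducer)."""
    w = 2 * np.pi * f
    zm = rm + 1j * w * lm + 1 / (1j * w * cm)  # motional branch
    z0 = 1 / (1j * w * c0)                     # static capacitance branch
    return zm * z0 / (zm + z0)                 # parallel combination

f = np.linspace(100e3, 300e3, 20001)
z = bvd_impedance(f)
fs = f[np.argmin(np.abs(z))]  # series resonance: impedance minimum
print(f"series resonance near {fs / 1e3:.1f} kHz")
# → series resonance near 225.1 kHz  (1 / (2*pi*sqrt(Lm*Cm)))
```

    Sweeping |Z| and its phase over frequency reproduces the basic Bode plot; the advanced techniques in the paper vary the drive quantity (voltage, current, velocity) and sweep direction instead.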

  12. Variational Bayesian Parameter Estimation Techniques for the General Linear Model

    PubMed Central

    Starke, Ludger; Ostwald, Dirk

    2017-01-01

    Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572
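    For the GLM with known non-spherical error covariance, the ML estimate of the coefficients coincides with generalized least squares. A minimal sketch of that cornerstone estimator; the synthetic design matrix and covariance below are illustrative:

```python
import numpy as np

def glm_ml_beta(X, y, V):
    """ML (generalized least squares) estimate of beta for the GLM
    y = X @ beta + eps, eps ~ N(0, V), with known error covariance V:
    beta_hat = (X' V^-1 X)^-1 X' V^-1 y."""
    Vi = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
```

    VB, VML, and ReML generalize this step by additionally estimating the covariance components, which is why the free-energy objectives in the study reduce to iterated weighted least squares updates of this form.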

  13. Progress in EEG-Based Brain Robot Interaction Systems

    PubMed Central

    Li, Mengfan; Niu, Linwei; Xian, Bin; Zeng, Ming; Chen, Genshe

    2017-01-01

    The most popular noninvasive Brain Robot Interaction (BRI) technology uses the electroencephalogram- (EEG-) based Brain Computer Interface (BCI), to serve as an additional communication channel, for robot control via brainwaves. This technology is promising for elderly or disabled patient assistance with daily life. The key issue of a BRI system is to identify human mental activities, by decoding brainwaves, acquired with an EEG device. Compared with other BCI applications, such as word speller, the development of these applications may be more challenging since control of robot systems via brainwaves must consider surrounding environment feedback in real-time, robot mechanical kinematics, and dynamics, as well as robot control architecture and behavior. This article reviews the major techniques needed for developing BRI systems. In this review article, we first briefly introduce the background and development of mind-controlled robot technologies. Second, we discuss the EEG-based brain signal models with respect to generating principles, evoking mechanisms, and experimental paradigms. Subsequently, we review in detail commonly used methods for decoding brain signals, namely, preprocessing, feature extraction, and feature classification, and summarize several typical application examples. Next, we describe a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots with respect to synchronous and asynchronous BCI-based techniques. Finally, we address some existing problems and challenges with future BRI techniques. PMID:28484488

  14. Infrared thermography for condition monitoring - A review

    NASA Astrophysics Data System (ADS)

    Bagavathiappan, S.; Lahiri, B. B.; Saravanan, T.; Philip, John; Jayakumar, T.

    2013-09-01

    Temperature is one of the most common indicators of the structural health of equipment and components. Faulty machineries, corroded electrical connections, damaged material components, etc., can cause abnormal temperature distribution. By now, infrared thermography (IRT) has become a mature and widely accepted condition monitoring tool where the temperature is measured in real time in a non-contact manner. IRT enables early detection of equipment flaws and faulty industrial processes under operating conditions, thereby reducing system down time, catastrophic breakdown and maintenance cost. The last three decades have witnessed a steady growth in the use of IRT as a condition monitoring technique in civil structures, electrical installations, machineries and equipment, material deformation under various loading conditions, corrosion damages and welding processes. IRT has also found its application in nuclear, aerospace, food, paper, wood and plastic industries. With the advent of newer generations of infrared cameras, IRT is becoming a more accurate, reliable and cost effective technique. This review focuses on the advances of IRT as a non-contact and non-invasive condition monitoring tool for machineries, equipment and processes. Various condition monitoring applications are discussed in detail, along with some basics of IRT, experimental procedures and data analysis techniques. Sufficient background information is also provided for beginners and non-experts for easy understanding of the subject.

  15. The in vitro use of the hair follicle closure technique to study the follicular and percutaneous permeation of topically applied drugs.

    PubMed

    Stahl, Jessica; Niedorf, Frank; Wohlert, Mareike; Kietzmann, Manfred

    2012-03-01

    Recent studies on follicular permeation emphasise the importance of hair follicles as diffusion pathways, but only a limited amount of data are available about the follicular permeation of topically applied drugs. This study examines the use of a hair follicle closure technique in vitro, to determine the participation of hair follicles in transdermal drug penetration. Various substances, with different lipophilicities, were tested: caffeine, diclofenac, flufenamic acid, ibuprofen, paracetamol, salicylic acid and testosterone. Diffusion experiments were conducted with porcine skin, the most common replacement material for human skin, in Franz-type diffusion cells over 28 hours. Different experimental settings allowed the differentiation between interfollicular and follicular permeation after topical application of the test compounds. A comparison of the apparent permeability coefficients of the drugs demonstrates that the percutaneous permeations of caffeine and flufenamic acid were significantly higher along the hair follicles. In the cases of paracetamol and testosterone, the follicular pathway appears to be of importance, while no difference was found between interfollicular and follicular permeation for diclofenac, ibuprofen and salicylic acid. Thus, the hair follicle closure technique represents an adequate in vitro method for gaining information about follicular or percutaneous permeation, and can replace in vivo testing in animals or humans. 2012 FRAME.

  16. All-optical technique for measuring thermal properties of materials at static high pressure

    NASA Astrophysics Data System (ADS)

    Pangilinan, G. I.; Ladouceur, H. D.; Russell, T. P.

    2000-10-01

    The development and implementation of an all-optical technique for measuring thermal transport properties of materials at high pressure in a gem anvil cell are reported. Thermal transport properties are determined by propagating a thermal wave in a material subjected to high pressures, and measuring the temperature as a function of time using an optical sensor embedded downstream in the material. Optical beams are used to deposit energy and to measure the sensor temperature and replace the resistive heat source and the thermocouples of previous methods. This overcomes the problems introduced by pressure-induced resistance changes and the spatial limitations inherent in previous high-pressure experimentation. Consistent with the heat conduction equation, the material's specific heat, thermal conductivity, and thermal diffusivity (κ) determine the sensor's temperature rise and its temporal profile. The all-optical technique described focuses on room-temperature thermal properties but can easily be applied to a wide temperature range (77-600 K). Measurements of thermal transport properties at pressures up to 2.0 GPa are reported, although extension to much higher pressures is feasible. The thermal properties of NaCl, a material commonly used in high-pressure experiments, are measured and shown to be consistent with those obtained using traditional methods.
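    The relation between the sensor's temporal profile and the diffusivity can be illustrated with the classic semi-infinite-solid solution for a surface temperature step; this is a textbook heat-conduction sketch, not the authors' full analysis, and the function names and sample values are illustrative:

```python
from math import erfc, sqrt

def sensor_temperature(x, t, kappa, t0=0.0, dT=1.0):
    """Temperature at depth x [m] and time t [s] in a semi-infinite solid
    after a surface temperature step dT: T = t0 + dT*erfc(x/(2*sqrt(kappa*t)))."""
    return t0 + dT * erfc(x / (2.0 * sqrt(kappa * t)))

def diffusivity_from_half_rise(x, t_half):
    """Invert the time t_half at which the sensor reaches half the step:
    erfc(z) = 0.5 at z ~= 0.4769, so kappa = x^2 / (4 * z^2 * t_half)."""
    z_half = 0.4769
    return x * x / (4.0 * z_half ** 2 * t_half)
```

    In the same spirit, the downstream sensor's measured rise time constrains κ, while the magnitude of the rise reflects the specific heat and conductivity.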

  17. Prediction of down-gradient impacts of DNAPL source depletion using tracer techniques: Laboratory and modeling validation

    NASA Astrophysics Data System (ADS)

    Jawitz, J. W.; Basu, N.; Chen, X.

    2007-05-01

    Interwell application of coupled nonreactive and reactive tracers through aquifer contaminant source zones enables quantitative characterization of aquifer heterogeneity and contaminant architecture. Parameters obtained from tracer tests are presented here in a Lagrangian framework that can be used to predict the dissolution of nonaqueous phase liquid (NAPL) contaminants. Nonreactive tracers are commonly used to provide information about travel time distributions in hydrologic systems. Reactive tracers have more recently been introduced as a tool to quantify the amount of NAPL contaminant present within the tracer swept volume. Our group has extended reactive tracer techniques to also characterize NAPL spatial distribution heterogeneity. By conceptualizing the flow field through an aquifer as a collection of streamtubes, the aquifer hydrodynamic heterogeneities may be characterized by a nonreactive tracer travel time distribution, and NAPL spatial distribution heterogeneity may be similarly described using reactive travel time distributions. The combined statistics of these distributions are used to derive a simple analytical solution for contaminant dissolution. This analytical solution, and the tracer techniques used for its parameterization, were validated both numerically and experimentally. Illustrative applications are presented from numerical simulations using the multiphase flow and transport simulator UTCHEM, and laboratory experiments of surfactant-enhanced NAPL remediation in two-dimensional flow chambers.
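    The streamtube conceptualization can be illustrated with a deliberately simplified equilibrium-dissolution model, in which each streamtube delivers water at solubility until its NAPL mass is exhausted; the flux-averaged effluent concentration then declines as streamtubes deplete. The distributions and function below are illustrative, not the paper's analytical solution:

```python
import numpy as np

def flux_averaged_concentration(t, q, m, cs):
    """Flux-averaged effluent concentration at time t for a collection of
    streamtubes with discharges q and NAPL masses m (equilibrium sketch):
    streamtube i produces water at solubility cs until its mass is
    exhausted at t_i = m[i] / (q[i] * cs)."""
    t_exhaust = m / (q * cs)
    active = t < t_exhaust
    return cs * np.sum(q[active]) / np.sum(q)

# heterogeneous discharges and NAPL masses (lognormal, as in the
# Lagrangian framework; parameters invented for illustration)
rng = np.random.default_rng(1)
q = rng.lognormal(0.0, 1.0, 1000)
m = rng.lognormal(0.0, 0.5, 1000)
```

    The combined statistics of the nonreactive (hydrodynamic) and reactive (NAPL-content) travel-time distributions play the role of `q` and `m` here: heterogeneity in either spreads the exhaustion times and flattens the dissolution tail.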

  18. Automatic lesion boundary detection in dermoscopy images using gradient vector flow snakes

    PubMed Central

    Erkol, Bulent; Moss, Randy H.; Stanley, R. Joe; Stoecker, William V.; Hvatum, Erik

    2011-01-01

    Background Malignant melanoma has a good prognosis if treated early. Dermoscopy images of pigmented lesions are most commonly taken at × 10 magnification under lighting at a low angle of incidence while the skin is immersed in oil under a glass plate. Accurate skin lesion segmentation from the background skin is important because some of the features anticipated to be used for diagnosis deal with shape of the lesion and others deal with the color of the lesion compared with the color of the surrounding skin. Methods In this research, gradient vector flow (GVF) snakes are investigated to find the border of skin lesions in dermoscopy images. An automatic initialization method is introduced to make the skin lesion border determination process fully automated. Results Skin lesion segmentation results are presented for 70 benign and 30 melanoma skin lesion images for the GVF-based method and a color histogram analysis technique. The average errors obtained by the GVF-based method are lower for both the benign and melanoma image sets than for the color histogram analysis technique based on comparison with manually segmented lesions determined by a dermatologist. Conclusions The experimental results for the GVF-based method demonstrate promise as an automated technique for skin lesion segmentation in dermoscopy images. PMID:15691255
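    The GVF field itself is computed by iterating the diffusion equations of Xu and Prince on the edge map's gradient; a compact sketch (the parameter values are typical choices, not necessarily those used in the study):

```python
import numpy as np

def laplacian(a):
    """Five-point discrete Laplacian with wrap-around boundaries."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

def gradient_vector_flow(edge_map, mu=0.2, n_iter=100, dt=0.5):
    """Gradient vector flow (Xu & Prince): the field (u, v) smooths the
    edge-map gradient where it is weak and follows it where it is strong,
    extending the capture range of the snake."""
    fy, fx = np.gradient(edge_map)
    u, v = fx.copy(), fy.copy()
    mag2 = fx ** 2 + fy ** 2        # squared gradient magnitude
    for _ in range(n_iter):
        u += dt * (mu * laplacian(u) - mag2 * (u - fx))
        v += dt * (mu * laplacian(v) - mag2 * (v - fy))
    return u, v
```

    The resulting (u, v) field replaces the image-gradient force in the snake evolution, which is what lets an automatically placed initial contour converge to the lesion border from far away.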

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S.S.; Zhu, S.; Cai, Y.

    Motion-dependent magnetic forces are the key elements in the study of magnetically levitated vehicle (maglev) system dynamics. In the past, most maglev-system designs were based on a quasisteady-motion theory of magnetic forces. This report presents an experimental and analytical study that will enhance our understanding of the role of unsteady-motion-dependent magnetic forces and demonstrate an experimental technique that can be used to measure those unsteady magnetic forces directly. The experimental technique provides a useful tool to measure motion-dependent magnetic forces for the prediction and control of maglev systems.

  20. Innovative experimental particle physics through technological advances: Past, present and future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Harry W.K.; /Fermilab

    This mini-course gives an introduction to the techniques used in experimental particle physics with an emphasis on the impact of technological advances. The basic detector types and particle accelerator facilities will be briefly covered with examples of their use and with comparisons. The mini-course ends with what can be expected in the near future from current technology advances. The mini-course is intended for graduate students and post-docs and as an introduction to experimental techniques for theorists.

  1. Determination of calibration constants for the hole-drilling residual stress measurement technique applied to orthotropic composites. II - Experimental evaluations

    NASA Technical Reports Server (NTRS)

    Prasad, C. B.; Prabhakaran, R.; Tompkins, S.

    1987-01-01

    The first step in the extension of the semidestructive hole-drilling technique for residual stress measurement to orthotropic composite materials is the determination of the three calibration constants. Attention is presently given to an experimental determination of these calibration constants for a highly orthotropic, unidirectionally-reinforced graphite fiber-reinforced polyimide composite. A comparison of the measured values with theoretically obtained ones shows agreement to be good, in view of the many possible sources of experimental variation.

  2. Analytical Electrochemistry: Methodology and Applications of Dynamic Techniques.

    ERIC Educational Resources Information Center

    Heineman, William R.; Kissinger, Peter T.

    1980-01-01

    Reports developments involving the experimental aspects of finite-current analytical electrochemistry including electrode materials (97 cited references), hydrodynamic techniques (56), spectroelectrochemistry (62), stripping voltammetry (70), voltammetric techniques (27), polarographic techniques (59), and miscellany (12). (CS)

  3. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badal, A; Zbijewski, W; Bolch, W

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation rates on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation.
Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. 
To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and “sparse sampling” will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments.
Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
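
    The "sparse sampling plus interpolation" strategy mentioned above can be illustrated with a toy sketch. Nothing here comes from the Task Group report: the exponential scoring model, pixel count, and history budgets are all invented for illustration. The sketch compares a dense per-pixel Monte Carlo estimate against scoring only every 8th pixel with the same total history budget and interpolating:

```python
import numpy as np

rng = np.random.default_rng(0)

# Smooth toy "scatter" profile across 256 detector pixels.
pixels = np.arange(256)
true_scatter = 1.0 + 0.5 * np.sin(2 * np.pi * pixels / 256)

def mc_estimate(mean, n_hist, rng):
    # Each history deposits an exponentially distributed score whose
    # expectation equals the true scatter value at that pixel.
    return rng.exponential(mean, size=n_hist).mean()

# Dense scoring: 200 histories at every pixel.
dense = np.array([mc_estimate(m, 200, rng) for m in true_scatter])

# Sparse scoring: same total budget (256 * 200 histories) concentrated on
# every 8th pixel (1600 histories each), then linear interpolation.
sparse_idx = pixels[::8]
sparse = np.array([mc_estimate(true_scatter[i], 1600, rng) for i in sparse_idx])
interp = np.interp(pixels, sparse_idx, sparse)

rmse_dense = np.sqrt(np.mean((dense - true_scatter) ** 2))
rmse_sparse = np.sqrt(np.mean((interp - true_scatter) ** 2))
```

Because the scatter profile is smooth, the interpolation bias is far below the per-pixel Monte Carlo noise, so concentrating the history budget on a few scoring locations wins.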

  4. Optomechanical study and optimization of cantilever plate dynamics

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    1995-06-01

    Optimum dynamic characteristics of an aluminum cantilever plate containing holes of different sizes and located at arbitrary positions on the plate are studied computationally and experimentally. The objective function of this optimization is the minimization/maximization of the natural frequencies of the plate in terms of such design variables as the sizes and locations of the holes. The optimization process is performed using the finite element method and mathematical programming techniques in order to obtain the natural frequencies and the optimum conditions of the plate, respectively. The modal behavior of the resultant optimal plate layout is studied experimentally through the use of holographic interferometry techniques. Comparisons of the computational and experimental results show that good agreement between theory and test is obtained. The comparisons also show that the combined, or hybrid, use of experimental and computational techniques allows the two approaches to complement each other and proves to be a very efficient tool for performing optimization studies of mechanical components.
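
    The finite element step inside such an optimization loop is a generalized eigenproblem, K x = w^2 M x. A minimal numpy sketch on a two-mass spring chain (a deliberately tiny stand-in for the plate model; the stiffness and mass values are invented) shows the mechanism by which removing mass, as drilling a hole does, raises the natural frequencies:

```python
import numpy as np

def natural_frequencies(K, m):
    """Solve K x = w^2 M x for a diagonal (lumped) mass matrix M = diag(m).

    Symmetrizing with M^(-1/2) turns the generalized eigenproblem into an
    ordinary symmetric one, so np.linalg.eigvalsh applies directly.
    """
    s = 1.0 / np.sqrt(m)
    A = K * np.outer(s, s)                 # M^(-1/2) K M^(-1/2)
    return np.sqrt(np.linalg.eigvalsh(A))  # natural frequencies, ascending

# Two-DOF chain: wall -- spring -- m1 -- spring -- m2, unit stiffnesses.
K = np.array([[2.0, -1.0],
              [-1.0, 1.0]])

w_full = natural_frequencies(K, np.array([1.0, 1.0]))
w_holed = natural_frequencies(K, np.array([1.0, 0.5]))  # "hole": half of m2 removed
```

By the min-max characterization, shrinking the mass matrix raises every natural frequency, which is exactly the trade-off the plate optimization exploits.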

  5. Eukaryotic major facilitator superfamily transporter modeling based on the prokaryotic GlpT crystal structure.

    PubMed

    Lemieux, M Joanne

    2007-01-01

    The major facilitator superfamily (MFS) of transporters represents the largest family of secondary active transporters and has a diverse range of substrates. With structural information for four MFS transporters, we can see a strong structural commonality suggesting, as predicted, a common architecture for MFS transporters. The rate of crystal structure determination for MFS transporters is slow, making modeling of both prokaryotic and eukaryotic transporters more enticing. In this review, models of the eukaryotic transporters Glut1, G6PT, OCT1, OCT2 and Pho84, based on the crystal structures of the prokaryotic transporters GlpT and LacY, are discussed. The techniques used to generate the different models are compared. In addition, the validity of these models and the strategy of using prokaryotic crystal structures to model eukaryotic proteins are discussed. For comparison, E. coli GlpT was modeled based on the E. coli LacY structure and compared to the crystal structure of GlpT, demonstrating that experimental evidence is essential for accurate modeling of membrane proteins.

  6. Flow Structures and Interactions of a Fail-Safe Actuator

    NASA Astrophysics Data System (ADS)

    Khan, Wasif; Elimelech, Yoseph; Amitay, Michael

    2010-11-01

    Vortex generators are passive devices that are commonly used in many aerodynamic applications. In their basic concept, they enhance mixing and reduce or mitigate flow separation; however, they cause drag penalties at off-design conditions. Micro vanes implement the same basic idea as vortex generators, but their physical dimensions are much smaller. To achieve the same effect on the baseline flow field, micro vanes are combined with an active flow control device, so their net effect is comparable to that of vortex generators when the active device is energized. As a result of their small size, micro vanes have significantly less drag penalty at off-design conditions. This concept of "dual action" is the reason why such actuation is commonly called hybrid or fail-safe actuation. The present study explores experimentally the flow interaction of a synthetic jet with a micro vane in a zero-pressure-gradient flow over a flat plate. Using the stereo particle image velocimetry technique, a parametric study was conducted in which the effects of the micro vane's shape, height, and angle with respect to the flow were examined at several blowing ratios and synthetic-jet configurations.

  7. The MGDO software library for data analysis in Ge neutrinoless double-beta decay experiments

    NASA Astrophysics Data System (ADS)

    Agostini, M.; Detwiler, J. A.; Finnerty, P.; Kröninger, K.; Lenz, D.; Liu, J.; Marino, M. G.; Martin, R.; Nguyen, K. D.; Pandola, L.; Schubert, A. G.; Volynets, O.; Zavarise, P.

    2012-07-01

    The Gerda and Majorana experiments will search for neutrinoless double-beta decay of 76Ge using isotopically enriched high-purity germanium detectors. Although the experiments differ in conceptual design, they share many aspects in common, and in particular will employ similar data analysis techniques. The collaborations are jointly developing a C++ software library, MGDO, which contains a set of data objects and interfaces to encapsulate, store and manage physical quantities of interest, such as waveforms and high-purity germanium detector geometries. These data objects define a common format for persistent data, whether it is generated by Monte Carlo simulations or an experimental apparatus, to reduce code duplication and to ease the exchange of information between detector systems. MGDO also includes general-purpose analysis tools that can be used for the processing of measured or simulated digital signals. The MGDO design is based on the Object-Oriented programming paradigm and is very flexible, allowing for easy extension and customization of the components. The tools provided by the MGDO libraries are used by both Gerda and Majorana.

  8. Advanced Design and Implementation of a Control Architecture for Long Range Autonomous Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Martin-Alvarez, A.; Hayati, S.; Volpe, R.; Petras, R.

    1999-01-01

    An advanced design and implementation of a Control Architecture for Long Range Autonomous Planetary Rovers is presented using a hierarchical top-down task decomposition, and the common structure of each design is presented based on feedback control theory. Graphical programming is presented as a common, intuitive language for the design when a large design team is composed of managers, architecture designers, engineers, programmers, and maintenance personnel. The whole design of the control architecture consists of the classic control concepts of cyclic data processing and event-driven reaction to achieve all the reasoning and behaviors needed. For this purpose, a commercial graphical tool that includes the mentioned control capabilities is presented. Message queues are used for intercommunication among control functions, allowing Artificial Intelligence (AI) reasoning techniques based on queue manipulation. Experimental results show a highly autonomous control system running in real time on top of the JPL micro-rover Rocky 7, controlling several robotic devices simultaneously. This paper validates the synergy between Artificial Intelligence and classic control concepts in building an advanced Control Architecture for Long Range Autonomous Planetary Rovers.

  9. A sensitivity study of the effects of evaporation/condensation accommodation coefficients on transient heat pipe modeling

    NASA Astrophysics Data System (ADS)

    Hall, Michael L.; Doster, J. Michael

    1990-03-01

    The dynamic behavior of liquid metal heat pipe models is strongly influenced by the choice of evaporation and condensation modeling techniques. Classic kinetic theory descriptions of the evaporation and condensation processes are often inadequate for real situations; empirical accommodation coefficients are commonly utilized to reflect nonideal mass transfer rates. The complex geometries and flow fields found in proposed heat pipe systems cause considerable deviation from the classical models. The THROHPUT code, which has been described in previous works, was developed to model transient liquid metal heat pipe behavior from frozen startup conditions to steady-state full power operation. It is used here to evaluate the sensitivity of transient liquid metal heat pipe models to the choice of evaporation and condensation accommodation coefficients. Comparisons are made with experimental liquid metal heat pipe data. It is found that heat pipe behavior can be predicted with the proper choice of the accommodation coefficients. However, the common assumption of spatially constant accommodation coefficients is found to be a limiting factor in the model.
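
    The accommodation-coefficient modeling the abstract refers to is conventionally written as a Hertz-Knudsen-type interfacial flux. A minimal sketch of that relation follows; this is not the THROHPUT implementation, and the numbers in the usage comment are illustrative only:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def hertz_knudsen_flux(sigma, molar_mass, T, p_sat, p_v):
    """Net evaporation (+) / condensation (-) mass flux, kg/(m^2 s).

    sigma      : empirical accommodation coefficient, 0 < sigma <= 1
                 (sigma = 1 recovers the ideal kinetic theory rate)
    molar_mass : kg/mol;  T : interface temperature, K
    p_sat      : saturation pressure at T, Pa;  p_v : vapor pressure, Pa
    """
    return sigma * math.sqrt(molar_mass / (2.0 * math.pi * R * T)) * (p_sat - p_v)

# At equilibrium (p_v == p_sat) the net flux vanishes; away from
# equilibrium, halving sigma halves the predicted mass transfer rate,
# which is why the coefficient choice dominates transient behavior.
```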

  10. Shedding light on the puzzle of drug-membrane interactions: Experimental techniques and molecular dynamics simulations.

    PubMed

    Lopes, Daniela; Jakobtorweihen, Sven; Nunes, Cláudia; Sarmento, Bruno; Reis, Salette

    2017-01-01

    Lipid membranes work as barriers, which leads to inevitable drug-membrane interactions in vivo. These interactions affect the pharmacokinetic properties of drugs, such as their diffusion, transport, distribution, and accumulation inside the membrane. Furthermore, these interactions also affect their pharmacodynamic properties with respect to both therapeutic and toxic effects. Experimental membrane models have been used to perform in vitro assessment of the effects of drugs on the biophysical properties of membranes by employing different experimental techniques. In in silico studies, molecular dynamics simulations have been used to provide new insights at an atomistic level, which enables the study of properties that are difficult or even impossible to measure experimentally. Each model and technique has its advantages and disadvantages. Hence, combining different models and techniques is necessary for a more reliable study. In this review, the theoretical backgrounds of these (in vitro and in silico) approaches are presented, followed by a discussion of the pharmacokinetic and pharmacodynamic properties of drugs that are related to their interactions with membranes. All approaches are discussed in parallel to provide a better connection between experimental and simulation studies. Finally, an overview of the molecular dynamics simulation studies used for drug-membrane interactions is provided. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Experimental Validation of Advanced Dispersed Fringe Sensing (ADFS) Algorithm Using Advanced Wavefront Sensing and Correction Testbed (AWCT)

    NASA Technical Reports Server (NTRS)

    Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver

    2012-01-01

    Large-aperture telescopes commonly feature segmented mirrors, and a coarse phasing step is needed to bring these individual segments into the fine-phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique, and a variant of it is currently being used for JWST. An Advanced Dispersed Fringe Sensing (ADFS) algorithm was recently developed to improve the performance and robustness of previous DFS algorithms, with better accuracy and a unique solution. The first part of the paper introduces the basic ideas and essential features of the ADFS algorithm and presents some algorithm sensitivity study results. The second part of the paper describes the full details of the algorithm validation process on the Advanced Wavefront Sensing and Correction Testbed (AWCT): first, the optimization of the DFS hardware of AWCT to ensure data accuracy and reliability is illustrated. Then, a few carefully designed algorithm validation experiments are implemented, and the corresponding data analysis results are shown. Finally, the fiducial calibration using the Range-Gate-Metrology technique is carried out, and a <10 nm (<1%) algorithm accuracy is demonstrated.

  12. Computer-aided light sheet flow visualization using photogrammetry

    NASA Technical Reports Server (NTRS)

    Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.

    1994-01-01

    A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and a visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) results, was chosen to interactively display the reconstructed light sheet images with the numerical surface geometry for the model or aircraft under study. The photogrammetric reconstruction technique and the image processing and computer graphics techniques and equipment are described. Results of the computer-aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images with CFD solutions in the same graphics environment is also demonstrated.
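
    The core of the photogrammetric reconstruction described above is projecting a 2D image point along its camera ray until it meets the known light-sheet plane. A minimal sketch follows; the pinhole model and every coordinate value are invented for illustration, not taken from the NASA system:

```python
import numpy as np

def backproject_to_sheet(cam_pos, pixel_dir, sheet_point, sheet_normal):
    """Intersect a camera ray with the light-sheet plane.

    The ray is cam_pos + t * pixel_dir; it lies on the plane when
    dot(sheet_normal, cam_pos + t * pixel_dir - sheet_point) == 0.
    """
    t = np.dot(sheet_normal, sheet_point - cam_pos) / np.dot(sheet_normal, pixel_dir)
    return cam_pos + t * pixel_dir

# Camera at the origin looking down +z; a pixel offset (u, v) with focal
# length f (in pixels) maps to the ray direction (u/f, v/f, 1).
f = 1000.0
ray = np.array([120.0 / f, -40.0 / f, 1.0])
p3d = backproject_to_sheet(np.zeros(3), ray,
                           sheet_point=np.array([0.0, 0.0, 2.5]),   # sheet plane z = 2.5
                           sheet_normal=np.array([0.0, 0.0, 1.0]))
```

Doing this for every bright pixel of a video frame, with the calibrated camera and sheet poses, yields the 3D point cloud that is then rendered alongside the CFD surface geometry.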

  13. Computer-Aided Light Sheet Flow Visualization

    NASA Technical Reports Server (NTRS)

    Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.

    1993-01-01

    A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. The photogrammetric reconstruction technique, as well as the image processing and computer graphics techniques and equipment, are described. Results of the computer-aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.

  14. Measurement Issues In Pulsed Laser Propulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinko, John E.; Scharring, Stefan; Eckel, Hans-Albert

    Various measurement techniques have been used throughout the more than 40-year history of laser propulsion. Often, these approaches suffered from inconsistencies in the definitions of the key parameters that define the physics of laser ablation impulse generation. Such parameters include, but are not limited to, the pulse energy, spot area, imparted impulse, and ablated mass. The limits and characteristics of common measurement techniques in each of these areas will be explored as they relate to laser propulsion. The idea of establishing a standardization system for laser propulsion data is introduced in this paper, so that reported results may be considered and studied by the general community with a more certain understanding of their particular merits and limitations. In particular, it is the intention to propose a minimum set of requirements a literature study should meet. Some international standards for measurements are already published, but modifications or revisions of such standards may be necessary for application to laser ablation propulsion. Issues relating to the development of standards will be discussed, as well as some examples of specific experimental circumstances in which standardization would have prevented misinterpretation or misuse of past data.

  15. Critical assessment of inverse gas chromatography as means of assessing surface free energy and acid-base interaction of pharmaceutical powders.

    PubMed

    Telko, Martin J; Hickey, Anthony J

    2007-10-01

    Inverse gas chromatography (IGC) has been employed as a research tool for decades. Despite this record of use and proven utility in a variety of applications, the technique is not routinely used in pharmaceutical research, although it has flourished in other fields. IGC is experimentally relatively straightforward, but analysis requires that certain theoretical assumptions are satisfied. The assumptions made to acquire some of the recently reported data are somewhat modified compared to initial reports. Most publications in the pharmaceutical literature have made use of a simplified equation for the determination of acid/base surface properties, resulting in parameter values that are inconsistent with prior methods. In comparing the surface properties of different batches of alpha-lactose monohydrate, new data have been generated and compared with the literature to allow critical analysis of the theoretical assumptions and their importance to the interpretation of the data. The commonly used (simplified) approach was compared with the more rigorous approach originally outlined in the surface chemistry literature. (c) 2007 Wiley-Liss, Inc.

  16. Ocean acoustic reverberation tomography.

    PubMed

    Dunn, Robert A

    2015-12-01

    Seismic wide-angle imaging using ship-towed acoustic sources and networks of ocean bottom seismographs is a common technique for exploring earth structure beneath the oceans. In these studies, the recorded data are dominated by acoustic waves propagating as reverberations in the water column. For surveys with a small receiver spacing (e.g., <10 km), the acoustic wave field densely samples properties of the water column over the width of the receiver array. A method, referred to as ocean acoustic reverberation tomography, is developed that uses the travel times of direct and reflected waves to image ocean acoustic structure. Reverberation tomography offers an alternative approach for determining the structure of the oceans and advancing the understanding of ocean heat content and mixing processes. The technique has the potential for revealing small-scale ocean thermal structure over the entire vertical height of the water column and along long survey profiles or across three-dimensional volumes of the ocean. For realistic experimental geometries and data noise levels, the method can produce images of ocean sound speed on a smaller scale than traditional acoustic tomography.
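
    The travel times this method inverts can be written down in closed form for a homogeneous toy ocean: the direct path is a straight line, and the surface-reflected path follows from an image source mirrored across the sea surface. The sketch below uses those simplifying assumptions (constant sound speed, flat sea surface; all geometry invented), whereas the real tomography inverts for the depth-dependent sound-speed structure:

```python
import math

def direct_and_reflected_times(r, z_src, z_rcv, c=1500.0):
    """Travel times (s) of the direct and surface-reflected arrivals.

    r     : horizontal source-receiver range, m
    z_src : source depth, m (positive down);  z_rcv : receiver depth, m
    c     : sound speed, m/s (held constant in this toy model)
    """
    t_direct = math.hypot(r, z_src - z_rcv) / c
    # Image-source trick: mirroring the source above the surface turns the
    # surface-bounce path into a straight line of length hypot(r, zs + zr).
    t_reflected = math.hypot(r, z_src + z_rcv) / c
    return t_direct, t_reflected

# Towed source at 10 m depth, ocean-bottom seismograph at 4000 m, 5 km range.
td, tr = direct_and_reflected_times(5000.0, 10.0, 4000.0)
```

The small direct-minus-reflected differential is sensitive to sound speed near the surface, which is one ingredient the tomography exploits.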

  17. Bevacizumab loaded solid lipid nanoparticles prepared by the coacervation technique: preliminary in vitro studies

    NASA Astrophysics Data System (ADS)

    Battaglia, Luigi; Gallarate, Marina; Peira, Elena; Chirio, Daniela; Solazzi, Ilaria; Giordano, Susanna Marzia Adele; Gigliotti, Casimiro Luca; Riganti, Chiara; Dianzani, Chiara

    2015-06-01

    Glioblastoma, the most common primary brain tumor in adults, has an inauspicious prognosis, given that overcoming the blood-brain barrier is the major obstacle to the pharmacological treatment of brain tumors. As neoangiogenesis plays a key role in glioblastoma growth, the US Food and Drug Administration approved bevacizumab (BVZ), an antivascular endothelial growth factor antibody, for the treatment of recurrent glioblastoma in patients whose initial therapy has failed. In this experimental work, BVZ was entrapped in solid lipid nanoparticles (SLNs) prepared by the fatty-acid coacervation technique, thanks to the formation of a hydrophobic ion pair. BVZ activity, which was evaluated by means of four different in vitro tests on HUVEC cells, increased by 100- to 200-fold when delivered in SLNs. Moreover, SLNs can enhance the permeation of fluorescently labelled BVZ through an hCMEC/D3 cell monolayer—an in vitro model of the blood-brain barrier. These results are promising, even if further in vivo studies are required to evaluate the effective potential of BVZ-loaded SLNs in glioblastoma treatment.

  18. An effective parameter optimization technique for vibration flow field characterization of PP melts via LS-SVM combined with SALS in an electromagnetism dynamic extruder

    NASA Astrophysics Data System (ADS)

    Xian, Guangming

    2018-03-01

    A method for predicting the optimal vibration field parameters by least squares support vector machine (LS-SVM) is presented in this paper. One convenient and commonly used technique for characterizing the vibration flow field of polymer melts is small-angle light scattering (SALS) in a visualized slit die of the electromagnetism dynamic extruder. The optimal values of the vibration frequency and vibration amplitude, and the maximum light-intensity projection area, can be obtained by using LS-SVM for prediction. To illustrate this method and show its validity, polypropylene (PP) is used as the flowing material, and fifteen samples are tested at a screw rotation speed of 36 rpm. This paper first describes the SALS apparatus used to perform the experiments, then gives the theoretical basis of this new method, and details the experimental results for parameter prediction of the vibration flow field. It is demonstrated that the combination of SALS and LS-SVM can provide detailed information on the optimal parameters of the vibration flow field of PP melts.
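
    LS-SVM regression, the predictor used above, replaces the standard SVM's quadratic program with a single linear system. A minimal numpy sketch follows; the RBF kernel choice, hyperparameter values, and the smooth stand-in training function are all assumptions for illustration (the abstract does not publish the SALS-derived data):

```python
import numpy as np

def rbf_kernel(A, B, width):
    # Gaussian (RBF) kernel matrix between row-sample matrices A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def lssvm_fit(X, y, gamma=100.0, width=1.0):
    # LS-SVM dual: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, width) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]                       # alpha, bias b

def lssvm_predict(Xnew, Xtrain, alpha, b, width=1.0):
    return rbf_kernel(Xnew, Xtrain, width) @ alpha + b

# Stand-in response surface: 20 samples of a smooth function of one parameter.
X = np.linspace(0.0, 2.0 * np.pi, 20)[:, None]
y = np.sin(X[:, 0])
alpha, b = lssvm_fit(X, y)
y_hat = lssvm_predict(X, X, alpha, b)
```

The single linear solve (rather than an iterative QP) is what makes LS-SVM attractive for small experimental data sets like the fifteen samples reported here.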

  19. Efficient Analysis of Mass Spectrometry Data Using the Isotope Wavelet

    NASA Astrophysics Data System (ADS)

    Hussong, Rene; Tholey, Andreas; Hildebrandt, Andreas

    2007-09-01

    Mass spectrometry (MS) has become today's de-facto standard for high-throughput analysis in proteomics research. Its applications range from toxicity analysis to MS-based diagnostics. Often, the time spent on the MS experiment itself is significantly less than the time necessary to interpret the measured signals, since the amount of data can easily exceed several gigabytes. In addition, automated analysis is hampered by baseline artifacts, chemical as well as electrical noise, and an irregular spacing of data points. Thus, filtering techniques originating from signal and image analysis are commonly employed to address these problems. Unfortunately, smoothing, base-line reduction, and in particular a resampling of data points can affect important characteristics of the experimental signal. To overcome these problems, we propose a new family of wavelet functions based on the isotope wavelet, which is hand-tailored for the analysis of mass spectrometry data. The resulting technique is theoretically well-founded and compares very well with standard peak picking tools, since it is highly robust against noise spoiling the data, but at the same time sufficiently sensitive to detect even low-abundant peptides.
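
    The wavelet-filtering idea can be sketched with a Ricker ("Mexican hat") wavelet standing in for the paper's isotope wavelet: because the wavelet has (near-)zero mean, a constant baseline drops out of the filtered signal without an explicit baseline-correction step. The synthetic spectrum, peak positions, and noise level below are invented for illustration:

```python
import numpy as np

def ricker(n_points, a):
    # Zero-mean "Mexican hat" wavelet of scale a.
    t = np.arange(n_points) - (n_points - 1) / 2.0
    return (1.0 - (t / a) ** 2) * np.exp(-t ** 2 / (2.0 * a ** 2))

# Synthetic spectrum: two Gaussian peaks + noise + a constant baseline.
x = np.arange(500)
clean = np.exp(-(x - 150) ** 2 / 50.0) + 0.6 * np.exp(-(x - 320) ** 2 / 50.0)
rng = np.random.default_rng(1)
spectrum = clean + rng.normal(0.0, 0.05, x.size) + 0.2

# Single-scale wavelet response (a full CWT would scan several scales).
resp = np.convolve(spectrum, ricker(101, 5.0), mode="same")

# Peak picking: local maxima of the response above half its global maximum.
thresh = 0.5 * resp.max()
peaks = [i for i in range(1, len(resp) - 1)
         if resp[i] > thresh and resp[i] >= resp[i - 1] and resp[i] > resp[i + 1]]
```

Matching the wavelet scale to the expected peak width is what gives the approach its noise robustness; the isotope wavelet refines this by also matching the characteristic isotope-pattern shape of peptide signals.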

  20. Wind Tunnel Experiments to Study Chaparral Crown Fires.

    PubMed

    Cobian-Iñiguez, Jeanette; Aminfar, AmirHessam; Chong, Joey; Burke, Gloria; Zuniga, Albertina; Weise, David R; Princevac, Marko

    2017-11-14

    The present protocol presents a laboratory technique designed to study chaparral crown fire ignition and spread. Experiments were conducted in a low-velocity fire wind tunnel where two distinct layers of fuel were constructed to represent surface and crown fuels in chaparral. Chamise, a common chaparral shrub, comprised the live crown layer. The dead fuel surface layer was constructed with excelsior (shredded wood). We developed a methodology to measure mass loss, temperature, and flame height for both fuel layers. Thermocouples placed in each layer estimated temperature. A video camera captured the visible flame. Post-processing of digital imagery yielded flame characteristics including height and flame tilt. A custom crown mass loss instrument developed in-house measured the evolution of the mass of the crown layer during the burn. Mass loss and temperature trends obtained using the technique matched theory and other empirical studies. In this study, we present detailed experimental procedures and information about the instrumentation used. Representative results for the fuel mass loss rate and the temperature field within the fuel bed are also included and discussed.
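
    A mass loss *rate* has to be differentiated out of a noisy load-cell record, and raw finite differences amplify the noise badly. A common recipe is to smooth first and differentiate second; the exponential burn curve, sampling rate, and noise level below are invented stand-ins, not the paper's data:

```python
import numpy as np

# Synthetic crown-mass record: 50 g of fuel burning with a 30 s time
# constant, sampled at 10 Hz with additive load-cell noise.
t = np.linspace(0.0, 60.0, 601)
rng = np.random.default_rng(2)
mass = 50.0 * np.exp(-t / 30.0) + rng.normal(0.0, 0.05, t.size)

# Moving-average smoothing before differencing.
k = 21
smooth = np.convolve(mass, np.ones(k) / k, mode="same")
rate = -np.gradient(smooth, t)   # mass loss rate, g/s (positive while burning)

# The edges of a mode="same" moving average are biased, so trust only the
# interior of the record.
interior = rate[k:-k]
```

With the window width matched to the load-cell noise bandwidth, the recovered rate tracks the true burn curve closely except near the record edges.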
