Sample records for standard experimental techniques

  1. Radar cross sections of standard and complex shape targets

    NASA Technical Reports Server (NTRS)

    Sohel, M. S.

    1974-01-01

    The theoretical, analytical, and experimental results are described for radar cross sections (RCS) of targets of different shapes. Various techniques for predicting RCS are given, and the RCS of finite standard targets are presented. Techniques used to predict the RCS of complex targets are described, and the RCS of complex shapes are provided.

  2. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five interpolation methods were investigated: linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis (PARAFAC) to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set, and the standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the methods produced relative peak areas that were not statistically different from each other, although all improved on the PARAFAC results obtained from the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
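
    To make the comparison concrete, here is a minimal sketch of applying several of the named interpolants to an under-sampled first-dimension peak. It is not the authors' code: the peak shape, sampling interval, and fit starting values are all assumed for illustration.

    ```python
    import numpy as np
    from scipy.interpolate import interp1d, PchipInterpolator, CubicSpline
    from scipy.optimize import curve_fit

    def gauss(t, a, t0, s):
        return a * np.exp(-((t - t0) ** 2) / (2 * s ** 2))

    t = np.arange(0.0, 10.5, 1.5)        # coarse first-dimension sampling (assumed)
    y = gauss(t, 1.0, 5.0, 1.0)          # under-sampled peak
    tf = np.linspace(0.0, 9.0, 181)      # fine grid for reconstruction

    recon = {
        "linear": interp1d(t, y)(tf),
        "pchip": PchipInterpolator(t, y)(tf),
        "spline": CubicSpline(t, y)(tf),
    }
    popt, _ = curve_fit(gauss, t, y, p0=[1.0, 4.0, 2.0])  # Gaussian fitting
    recon["gauss"] = gauss(tf, *popt)

    truth = gauss(tf, 1.0, 5.0, 1.0)
    for name, yr in recon.items():
        print(f"{name:6s} RMS error: {np.sqrt(np.mean((yr - truth) ** 2)):.4f}")
    ```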

  3. Energetic studies and phase diagram of thioxanthene.

    PubMed

    Freitas, Vera L S; Monte, Manuel J S; Santos, Luís M N B F; Gomes, José R B; Ribeiro da Silva, Maria D M C

    2009-11-19

    The molecular stability of thioxanthene, a key species from which important compounds of industrial relevance are derived, has been studied by a combination of several experimental techniques and computational approaches. The standard (p° = 0.1 MPa) molar enthalpy of formation of crystalline thioxanthene (117.4 ± 4.1 kJ·mol⁻¹) was determined from the experimental standard molar energy of combustion, in oxygen, measured by rotating-bomb combustion calorimetry at T = 298.15 K. The enthalpy of sublimation was determined by a direct method, using the vacuum drop microcalorimetric technique, and also by an indirect method, using a static apparatus, where the vapor pressures at different temperatures were measured. The latter technique was used for both crystalline and undercooled liquid samples, and the phase diagram of thioxanthene near the triple point was obtained (triple point coordinates T = 402.71 K and p = 144.7 Pa). From the two methods, a mean value for the standard (p° = 0.1 MPa) molar enthalpy of sublimation at T = 298.15 K (101.3 ± 0.8 kJ·mol⁻¹) was derived. From the latter value and from the enthalpy of formation of the solid, the standard (p° = 0.1 MPa) enthalpy of formation of gaseous thioxanthene was calculated as 218.7 ± 4.2 kJ·mol⁻¹. Standard ab initio molecular orbital calculations were performed using the G3(MP2)//B3LYP composite procedure and several homodesmotic reactions in order to derive the standard molar enthalpy of formation of thioxanthene. The ab initio results are in excellent agreement with the experimental data.
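
    As a worked illustration of the thermodynamic cycle behind the final value, using only the numbers quoted above with uncertainties combined in quadrature:

    ```latex
    \Delta_{\mathrm{f}}H^{\circ}_{\mathrm{m}}(\mathrm{g}) =
    \Delta_{\mathrm{f}}H^{\circ}_{\mathrm{m}}(\mathrm{cr}) +
    \Delta^{\mathrm{g}}_{\mathrm{cr}}H^{\circ}_{\mathrm{m}}
    = (117.4 + 101.3)\ \mathrm{kJ\cdot mol^{-1}} = 218.7\ \mathrm{kJ\cdot mol^{-1}},
    \qquad u = \sqrt{4.1^{2} + 0.8^{2}} \approx 4.2\ \mathrm{kJ\cdot mol^{-1}}
    ```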

  4. Comparison of Quadrapolar™ radiofrequency lesions produced by standard versus modified technique: an experimental model.

    PubMed

    Safakish, Ramin

    2017-01-01

    Lower back pain (LBP) is a global public health issue associated with substantial financial costs and loss of quality of life. The literature offers varying statistics on the causes of back pain; the estimate that best matches our patient population attributes LBP to sacroiliac (SI) joint pain in 18%-30% of individuals with LBP. Quadrapolar™ radiofrequency ablation, which ablates the nerves of the SI joint using heat, is a commonly used treatment for SI joint pain. However, the standard Quadrapolar radiofrequency procedure is not always effective at ablating all the sensory nerves that cause the pain in the SI joint. One of its major limitations is that it produces small lesions of ~4 mm in diameter; smaller lesions increase the likelihood of failure to ablate all nociceptive input. In this study, we compare the standard Quadrapolar radiofrequency ablation technique to a modified Quadrapolar ablation technique that has produced improved patient outcomes in our clinic. The methodologies of the two techniques are compared, and we also compare results from an experimental model comparing the lesion sizes produced by the two techniques. Taken together, the findings from this study suggest that the modified Quadrapolar technique provides longer-lasting relief for back pain caused by SI joint dysfunction. A randomized controlled clinical trial is the next step required to quantify the difference in symptom relief and quality of life produced by the two techniques.

  5. Proposed minimum reporting standards for chemical analysis Chemical Analysis Working Group (CAWG) Metabolomics Standards Initiative (MSI)

    PubMed Central

    Amberg, Alexander; Barrett, Dave; Beale, Michael H.; Beger, Richard; Daykin, Clare A.; Fan, Teresa W.-M.; Fiehn, Oliver; Goodacre, Royston; Griffin, Julian L.; Hankemeier, Thomas; Hardy, Nigel; Harnly, James; Higashi, Richard; Kopka, Joachim; Lane, Andrew N.; Lindon, John C.; Marriott, Philip; Nicholls, Andrew W.; Reily, Michael D.; Thaden, John J.; Viant, Mark R.

    2013-01-01

    There is a general consensus that supports the need for standardized reporting of metadata or information describing large-scale metabolomics and other functional genomics data sets. Reporting of standard metadata provides a biological and empirical context for the data, facilitates experimental replication, and enables the re-interrogation and comparison of data by others. Accordingly, the Metabolomics Standards Initiative is building a general consensus concerning the minimum reporting standards for metabolomics experiments; the Chemical Analysis Working Group (CAWG) is one member of this community effort. This article proposes the minimum reporting standards related to the chemical analysis aspects of metabolomics experiments, including: sample preparation, experimental analysis, quality control, metabolite identification, and data pre-processing. These minimum standards currently focus mostly upon mass spectrometry and nuclear magnetic resonance spectroscopy due to the popularity of these techniques in metabolomics. However, additional input concerning other techniques is welcomed and can be provided via the CAWG on-line discussion forum at http://msi-workgroups.sourceforge.net/ or by e-mail to Msi-workgroups-feedback@lists.sourceforge.net. Further, community input related to this document can also be provided via this electronic forum. PMID:24039616

  6. Standardization of proton-induced x-ray emission technique for analysis of thick samples

    NASA Astrophysics Data System (ADS)

    Ali, Shad; Zeb, Johar; Ahad, Abdul; Ahmad, Ishfaq; Haneef, M.; Akbar, Jehan

    2015-09-01

    This paper describes the standardization of the proton-induced x-ray emission (PIXE) technique for determining the elemental composition of thick samples. For the standardization, three different samples of standard reference materials (SRMs) were analyzed using this technique, and the results were compared with the certified data for these SRMs. The samples were selected to cover the maximum range of elements in the periodic table. Each sample was irradiated at three different values of collected beam charge, at three different times. A 2.57 MeV proton beam from the 5UDH-II Pelletron accelerator was used to excite x-rays from the sample. The acquired experimental data were analyzed using the GUPIXWIN software. The results show that the SRM data and the data obtained using the PIXE technique are in good agreement.

  7. Chemical vapor deposition growth

    NASA Technical Reports Server (NTRS)

    Ruth, R. P.; Manasevit, H. M.; Campbell, A. G.; Johnson, R. E.; Kenty, J. L.; Moudy, L. A.; Shaw, G. L.; Simpson, W. I.; Yang, J. J.

    1978-01-01

    The objective was to investigate and develop chemical vapor deposition (CVD) techniques for the growth of large areas of Si sheet on inexpensive substrate materials, with resulting sheet properties suitable for fabricating solar cells that would meet the technical goals of the Low Cost Silicon Solar Array Project. The program involved six main technical tasks: (1) modification and test of an existing vertical-chamber CVD reactor system; (2) identification and/or development of suitable inexpensive substrate materials; (3) experimental investigation of CVD process parameters using various candidate substrate materials; (4) preparation of Si sheet samples for various special studies, including solar cell fabrication; (5) evaluation of the properties of the Si sheet material produced by the CVD process; and (6) fabrication and evaluation of experimental solar cell structures, using impurity diffusion and other standard and near-standard processing techniques supplemented late in the program by the in situ CVD growth of n(+)/p/p(+) sheet structures subsequently processed into experimental cells.

  8. Laboratory validation of four black carbon measurement methods for the determination of non-volatile particulate matter (PM) mass emissions . . .

    EPA Science Inventory

    A laboratory-scale experimental program was designed to standardize each of four black carbon measurement methods, provide appropriate quality assurance/control procedures for these techniques, and compare measurements made by these methods to a NIST traceable standard (filter gr...

  9. New Techniques to Evaluate the Incendiary Behavior of Insulators

    NASA Technical Reports Server (NTRS)

    Buhler, Charles; Calle, Carlos; Clements, Sid; Trigwell, Steve; Ritz, Mindy

    2008-01-01

    New techniques for evaluating the incendiary behavior of insulators are presented. The onset of incendive brush discharges in air is evaluated using standard spark-probe techniques for the case simulating the approach of an electrically grounded sphere to a charged insulator in the presence of a flammable atmosphere. However, this standard technique is unsuitable for the case of brush discharges that may occur during the charging-separation process for two insulator materials. We present experimental techniques to evaluate this hazard in the presence of a flammable atmosphere, ideally suited to measuring the incendiary nature of micro-discharges upon separation, a measurement never before performed. Other measurement techniques unique to this study include: surface potential measurements of insulators before, during, and after contact and separation, as well as methods to verify fieldmeter calibrations using a charged insulator surface as opposed to standard high-voltage plates. Key words: Kapton polyimide film, incendiary discharges, brush discharges, contact and frictional electrification, ignition hazards, insulators, contact angle, surface potential measurements.

  10. Narrow band imaging combined with water immersion technique in the diagnosis of celiac disease.

    PubMed

    Valitutti, Francesco; Oliva, Salvatore; Iorfida, Donatella; Aloi, Marina; Gatti, Silvia; Trovato, Chiara Maria; Montuori, Monica; Tiberti, Antonio; Cucchiara, Salvatore; Di Nardo, Giovanni

    2014-12-01

    The "multiple-biopsy" approach both in duodenum and bulb is the best strategy to confirm the diagnosis of celiac disease; however, this increases the invasiveness of the procedure itself and is time-consuming. To evaluate the diagnostic yield of a single biopsy guided by narrow-band imaging combined with water immersion technique in paediatric patients. Prospective assessment of the diagnostic accuracy of narrow-band imaging/water immersion technique-driven biopsy approach versus standard protocol in suspected celiac disease. The experimental approach correctly diagnosed 35/40 children with celiac disease, with an overall diagnostic sensitivity of 87.5% (95% CI: 77.3-97.7). An altered pattern of narrow-band imaging/water immersion technique endoscopic visualization was significantly associated with villous atrophy at guided biopsy (Spearman Rho 0.637, p<0.001). Concordance of narrow-band imaging/water immersion technique endoscopic assessments was high between two operators (K: 0.884). The experimental protocol was highly timesaving compared to the standard protocol. An altered narrow-band imaging/water immersion technique pattern coupled with high anti-transglutaminase antibodies could allow a single guided biopsy to diagnose celiac disease. When no altered mucosal pattern is visible even by narrow-band imaging/water immersion technique, multiple bulbar and duodenal biopsies should be obtained. Copyright © 2014. Published by Elsevier Ltd.

  11. Cross-platform comparison of nucleic acid hybridization: toward quantitative reference standards.

    PubMed

    Halvorsen, Ken; Agris, Paul F

    2014-11-15

    Measuring interactions between biological molecules is vitally important to both basic and applied research as well as development of pharmaceuticals. Although a wide and growing range of techniques is available to measure various kinetic and thermodynamic properties of interacting biomolecules, it can be difficult to compare data across techniques of different laboratories and personnel or even across different instruments using the same technique. Here we evaluate relevant biological interactions based on complementary DNA and RNA oligonucleotides that could be used as reference standards for many experimental systems. We measured thermodynamics of duplex formation using isothermal titration calorimetry, differential scanning calorimetry, and ultraviolet-visible (UV-vis) monitored denaturation/renaturation. These standards can be used to validate results, compare data from disparate techniques, act as a teaching tool for laboratory classes, or potentially to calibrate instruments. The RNA and DNA standards have many attractive features, including low cost, high purity, easily measurable concentrations, and minimal handling concerns, making them ideal for use as a reference material. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Chemical vapor deposition growth

    NASA Technical Reports Server (NTRS)

    Ruth, R. P.; Manasevit, H. M.; Kenty, J. L.; Moudy, L. A.; Simpson, W. I.; Yang, J. J.

    1976-01-01

    The chemical vapor deposition (CVD) method for the growth of Si sheet on inexpensive substrate materials is investigated. The objective is to develop CVD techniques for producing large areas of Si sheet on inexpensive substrate materials, with sheet properties suitable for fabricating solar cells meeting the technical goals of the Low Cost Silicon Solar Array Project. Specific areas covered include: (1) modification and test of existing CVD reactor system; (2) identification and/or development of suitable inexpensive substrate materials; (3) experimental investigation of CVD process parameters using various candidate substrate materials; (4) preparation of Si sheet samples for various special studies, including solar cell fabrication; (5) evaluation of the properties of the Si sheet material produced by the CVD process; and (6) fabrication and evaluation of experimental solar cell structures, using standard and near-standard processing techniques.

  13. Fertility preservation options in breast cancer patients.

    PubMed

    Kasum, Miro; von Wolff, Michael; Franulić, Daniela; Čehić, Ermin; Klepac-Pulanić, Tajana; Orešković, Slavko; Juras, Josip

    2015-01-01

    The purpose of this review is to analyse current options for fertility preservation in young women with breast cancer (BC). With the number of BC survivors increasing, owing to improvements in cancer treatment, and childbearing being delayed, fertility preservation appears to be an important issue. Current fertility preservation options in BC survivors range from well-established standard techniques to experimental or investigational interventions. Among the standard options, the random-start ovarian stimulation protocol represents a new technique, which significantly decreases the total time of the in vitro fertilisation cycle. However, in patients with oestrogen-sensitive tumours, stimulation protocols using aromatase inhibitors are currently preferred over tamoxifen regimens. Cryopreservation of embryos and oocytes is nowadays deemed the most successful technique for fertility preservation in BC patients. GnRH agonists during chemotherapy represent an experimental method for fertility preservation due to conflicting long-term outcome results regarding safety and efficacy. Cryopreservation of ovarian tissue, in vitro maturation of immature oocytes, and other strategies are considered experimental and should only be offered within the context of a clinical trial. Young BC women at risk of infertility should receive an early, pretreatment referral to reproductive endocrinologists and oncologists to discuss the risks and benefits of fertility preservation options.

  14. Mission Systems Open Architecture Science and Technology (MOAST) program

    NASA Astrophysics Data System (ADS)

    Littlejohn, Kenneth; Rajabian-Schwart, Vahid; Kovach, Nicholas; Satterthwaite, Charles P.

    2017-04-01

    The Mission Systems Open Architecture Science and Technology (MOAST) program is an AFRL effort that is developing and demonstrating Open System Architecture (OSA) component prototypes, along with methods and tools, to strategically evolve current OSA standards and technical approaches, promote affordable capability evolution, reduce integration risk, and address emerging challenges [1]. Within the context of open architectures, the program is conducting advanced research and concept development in the following areas: (1) Evolution of standards; (2) Cyber-Resiliency; (3) Emerging Concepts and Technologies; (4) Risk Reduction Studies and Experimentation; and (5) Advanced Technology Demonstrations. Current research includes the development of methods, tools, and techniques to characterize the performance of OMS data interconnection methods for representative mission system applications; of particular interest are the OMS Critical Abstraction Layer (CAL), the Avionics Service Bus (ASB), and the Bulk Data Transfer interconnects. The program also aims to develop and demonstrate cybersecurity countermeasure techniques to detect and mitigate cyberattacks against open-architecture-based mission systems and ensure continued mission operations. Focus is on cybersecurity techniques that augment traditional cybersecurity controls and those currently defined within the Open Mission Systems (OMS) and UCI standards. AFRL is also developing code generation tools and simulation tools to support evaluation and experimentation of OSA-compliant implementations.

  15. Numeric data distribution: The vital role of data exchange in today's world

    NASA Technical Reports Server (NTRS)

    Chase, Malcolm W.

    1994-01-01

    The major aim of the NIST Standard Reference Data Program (SRD) is to provide critically evaluated numeric data to the scientific and technical community in a convenient and accessible form. A second aim of the program is to provide feedback into the experimental and theoretical programs to help raise the general standards of measurement. By communicating the experience gained in evaluating the world output of data in the physical sciences, NIST/SRD helps to advance the level of experimental techniques and improve the reliability of physical measurements.

  16. Improvements to the YbF electron electric dipole moment experiment

    NASA Astrophysics Data System (ADS)

    Sauer, B. E.; Rabey, I. M.; Devlin, J. A.; Tarbutt, M. R.; Ho, C. J.; Hinds, E. A.

    2017-04-01

    The standard model of particle physics predicts that the permanent electric dipole moment (EDM) of the electron is very nearly zero. Many extensions to the standard model predict an electron EDM just below current experimental limits. We are currently working to improve the sensitivity of the Imperial College YbF experiment. We have implemented combined laser-radiofrequency pumping techniques which both increase the number of molecules that participate in the EDM experiment and increase the probability of detection. Combined, these techniques give nearly two orders of magnitude increase in experimental sensitivity. At this enhanced sensitivity, magnetic effects that were previously negligible become important. We have developed a new way to construct the electrodes for electric field plates which minimizes the effect of magnetic Johnson noise. The new YbF experiment is expected to be comparable in sensitivity to the most sensitive measurements of the electron EDM to date. We will also discuss laser cooling techniques which promise an even larger increase in sensitivity.

  17. Effectiveness of Mind Mapping in English Teaching among VIII Standard Students

    ERIC Educational Resources Information Center

    Hallen, D.; Sangeetha, N.

    2015-01-01

    The aim of the study is to find out the effectiveness of the mind mapping technique over the conventional method in teaching English at the high school level (VIII Standard), in terms of control and experimental groups. The sample of the study comprised 60 VIII Standard students in Tiruchendur Taluk. Mind Maps and an Achievement Test (Pretest & Posttest) were…

  18. One-calibrant kinetic calibration for on-site water sampling with solid-phase microextraction.

    PubMed

    Ouyang, Gangfeng; Cui, Shufen; Qin, Zhipei; Pawliszyn, Janusz

    2009-07-15

    The existing solid-phase microextraction (SPME) kinetic calibration technique, using the desorption of preloaded standards to calibrate the extraction of the analytes, requires that the physicochemical properties of the standard be similar to those of the analyte, which has limited the application of the technique. In this study, a new method, termed the one-calibrant kinetic calibration technique, which can use the desorption of a single standard to calibrate all extracted analytes, is proposed. The theoretical considerations were validated by passive water sampling in the laboratory and rapid water sampling in the field. To mimic environmental variability, such as in temperature, turbulence, and analyte concentration, the flow-through system for the generation of standard aqueous polycyclic aromatic hydrocarbon (PAH) solutions was modified. The experimental results of the passive samplings in the flow-through system illustrated that the effect of the environmental variables was successfully compensated for with the kinetic calibration technique, and all extracted analytes could be calibrated through the desorption of a single calibrant. On-site water sampling with rotated SPME fibers also illustrated the feasibility of the new technique for rapid on-site sampling of hydrophobic organic pollutants in water. This technique will accelerate the application of the kinetic calibration method and will also be useful for other microextraction techniques.
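
    The isotropy assumption behind kinetic calibration is commonly written as follows (a generic textbook form with illustrative symbols: n is the amount of analyte extracted at time t, n_e its equilibrium value, Q the residual preloaded standard, and q_0 its initial amount):

    ```latex
    \frac{n(t)}{n_{e}} = 1 - e^{-at}, \qquad
    \frac{Q(t)}{q_{0}} = e^{-at}
    \quad\Longrightarrow\quad
    \frac{n(t)}{n_{e}} + \frac{Q(t)}{q_{0}} = 1
    ```

    Measuring the desorbed fraction of a standard thus calibrates the simultaneous extraction; the one-calibrant variant proposed here extends this to all extracted analytes with a single standard.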

  19. Comparison of competing segmentation standards for X-ray computed tomographic imaging using Lattice Boltzmann techniques

    NASA Astrophysics Data System (ADS)

    Larsen, J. D.; Schaap, M. G.

    2013-12-01

    Recent advances in computing technology and experimental techniques have made it possible to observe and characterize fluid dynamics at the micro-scale. Many computational methods exist that can adequately simulate fluid flow in porous media. Lattice Boltzmann methods provide the distinct advantage of tracking particles at the microscopic level and returning macroscopic observations. While experimental methods can accurately measure macroscopic fluid dynamics, computational efforts can be used to predict and gain insight into fluid dynamics by utilizing thin sections or computed micro-tomography (CMT) images of core sections. Although substantial efforts have been made to advance non-invasive imaging methods such as CMT, fluid dynamics simulations, and microscale analysis, a true three-dimensional image segmentation technique was not developed until recently. Many competing segmentation techniques are utilized in industry and research settings with varying results. In this study, the lattice Boltzmann method is used to simulate Stokes flow in a macroporous soil column. Two-dimensional CMT images were used to reconstruct a three-dimensional representation of the original sample. Six competing segmentation standards were used to binarize the CMT volumes, providing the distinction between solid phase and pore space. The permeability of the reconstructed samples was calculated, with Darcy's law, from lattice Boltzmann simulations of fluid flow in the samples. We compare the simulated permeability from the differing segmentation algorithms to experimental findings.
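
    The permeability recovery step uses Darcy's law; in the one-dimensional form typically applied to lattice Boltzmann results (symbols illustrative, not the authors' notation):

    ```latex
    k = \frac{\mu \, \langle u \rangle \, L}{\Delta p}
    ```

    where ⟨u⟩ is the superficial velocity averaged over the simulated flow field, μ the dynamic viscosity, L the sample length, and Δp the imposed pressure drop.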

  20. Review and Implementation of the Emerging CCSDS Recommended Standard for Multispectral and Hyperspectral Lossless Image Coding

    NASA Technical Reports Server (NTRS)

    Sanchez, Jose Enrique; Auge, Estanislau; Santalo, Josep; Blanes, Ian; Serra-Sagrista, Joan; Kiely, Aaron

    2011-01-01

    A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor, and is adapted to better overcome issues of onboard scenarios. In this paper, we present a review of the state of the art in this field, and provide an experimental comparison of the coding performance of the emerging standard in relation to other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.

  1. Precise measurement of the half-life of the Fermi β decay of ²⁶Alᵐ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Rebecca J.; Thompson, Maxwell N.; Rassool, Roger P.

    2011-08-15

    State-of-the-art signal digitization and analysis techniques have been used to measure the half-life of the Fermi β decay of ²⁶Alᵐ. The half-life was determined to be 6347.8 ± 2.5 ms. This new datum contributes to the experimental testing of the conserved-vector-current hypothesis and the required unitarity of the Cabibbo-Kobayashi-Maskawa matrix: two essential components of the standard model. Detailed discussion of the experimental techniques and data analysis and a thorough investigation of the statistical and systematic uncertainties are presented.
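
    The half-life follows from fitting the digitized decay data to the exponential decay law:

    ```latex
    N(t) = N_{0}\, e^{-\lambda t}, \qquad T_{1/2} = \frac{\ln 2}{\lambda}
    ```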

  2. Cross-platform comparison of nucleic acid hybridization: toward quantitative reference standards

    PubMed Central

    Halvorsen, Ken; Agris, Paul F.

    2014-01-01

    Measuring interactions between biological molecules is vitally important to both basic and applied research, as well as development of pharmaceuticals. While a wide and growing range of techniques is available to measure various kinetic and thermodynamic properties of interacting biomolecules, it can be difficult to compare data across techniques of different laboratories and personnel, or even across different instruments using the same technique. Here we evaluate relevant biological interactions based on complementary DNA and RNA oligonucleotides that could be used as reference standards for many experimental systems. We measured thermodynamics of duplex formation using Isothermal Titration Calorimetry, Differential Scanning Calorimetry, and UV-Vis monitored denaturation/renaturation. These standards can be used to validate results, compare data from disparate techniques, act as a teaching tool for laboratory classes, or potentially to calibrate instruments. The RNA and DNA standards have many attractive features including low cost, high purity, easily measurable concentrations, and minimal handling concerns, making them ideal for use as a reference material. PMID:25124363

  3. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by the classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by the Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders, demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG); therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error separation of the HPSAG and the autocollimator, detailed investigations of error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method offers the unique opportunity to characterize other error sources, such as errors due to temperature drift in long-term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for improving them, and for specifying precautions to be taken during measurements.

  4. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
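
    The propagation step referred to above is conventionally the first-order Taylor expansion of the defining function, including covariance terms for correlated inputs (the generic form, not the paper's exact derivation): for y = f(x_1, ..., x_n),

    ```latex
    u_{y}^{2} = \sum_{i=1}^{n} \left(\frac{\partial f}{\partial x_{i}}\right)^{2} u^{2}(x_{i})
    + 2 \sum_{i=1}^{n-1} \sum_{j=i+1}^{n}
    \frac{\partial f}{\partial x_{i}} \frac{\partial f}{\partial x_{j}}\, u(x_{i}, x_{j})
    ```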

  5. Data Mining of Macromolecular Structures.

    PubMed

    van Beusekom, Bart; Perrakis, Anastassis; Joosten, Robbie P

    2016-01-01

    The use of macromolecular structures is widespread for a variety of applications, from teaching protein structure principles all the way to ligand optimization in drug development. Applying data mining techniques to these experimentally determined structures requires a highly uniform, standardized structural data source. The Protein Data Bank (PDB) has evolved over the years toward becoming the standard resource for macromolecular structures. However, the process of selecting the data most suitable for specific applications is still very much based on personal preferences and understanding of the experimental techniques used to obtain these models. In this chapter, we will first explain the challenges with data standardization, annotation, and uniformity in the PDB entries determined by X-ray crystallography. We then discuss the specific effect that crystallographic data quality and model optimization methods have on structural models and how validation tools can be used to make informed choices. We also discuss specific advantages of using the PDB_REDO databank as a resource for structural data. Finally, we will provide guidelines on how to select the most suitable protein structure models for detailed analysis and how to select a set of structure models suitable for data mining.

  6. Growth rate measurement in free jet experiments

    NASA Astrophysics Data System (ADS)

    Charpentier, Jean-Baptiste; Renoult, Marie-Charlotte; Crumeyrolle, Olivier; Mutabazi, Innocent

    2017-07-01

    An experimental method was developed to measure the growth rate of the capillary instability in free liquid jets. The method uses a standard shadowgraph imaging technique to visualize a jet, produced by extruding a liquid through a circular orifice, together with a statistical analysis of the entire jet. The analysis relies on the computation of the standard deviation of a set of jet profiles obtained under the same experimental conditions. The principle and robustness of the method are illustrated with a set of emulated jet profiles. The method is also applied to free-falling jet experiments conducted for various Weber numbers and two low-viscosity solutions: a Newtonian one and a viscoelastic one. Growth rate measurements are found to be in good agreement with linear stability theory in the Rayleigh regime, as expected from previous studies. In addition, the standard deviation curve is used to obtain an indirect measurement of the initial perturbation amplitude and to identify beads-on-a-string structures on the jet. This last result demonstrates the capability of the present technique to explore the dynamics of viscoelastic liquid jets in future work.
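
    A minimal sketch of the statistical step: the standard deviation across repeated profiles, with the growth rate taken from its logarithmic slope. The profile model, perturbation amplitude, and fitting window are assumed for illustration and are not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    z = np.linspace(0.0, 50.0, 500)   # axial position along the jet (arbitrary units)
    omega, r0 = 0.12, 1.0             # assumed spatial growth rate and mean radius

    # Emulate repeated shadowgraph profiles: random-phase perturbations growing
    # exponentially downstream around a constant mean radius
    profiles = np.array([
        r0 + 1e-3 * np.exp(omega * z)
            * np.cos(2 * np.pi * z / 4.5 + rng.uniform(0.0, 2.0 * np.pi))
        for _ in range(200)
    ])

    sigma = profiles.std(axis=0)      # standard deviation over the profile set
    slope, _ = np.polyfit(z[50:400], np.log(sigma[50:400]), 1)
    print(f"recovered growth rate: {slope:.3f} (true value {omega})")
    ```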

  7. Experimental and computational thermochemical study of α-alanine (DL) and β-alanine.

    PubMed

    da Silva, Manuel A V Ribeiro; da Silva, Maria das Dores M C Ribeiro; Santos, Ana Filipa L O M; Roux, Maria Victoria; Foces-Foces, Concepción; Notario, Rafael; Guzmán-Mejía, Ramón; Juaristi, Eusebio

    2010-12-16

    This paper reports an experimental and theoretical study of the gas phase standard (p° = 0.1 MPa) molar enthalpies of formation, at T = 298.15 K, of α-alanine (DL) and β-alanine. The standard (p° = 0.1 MPa) molar enthalpies of formation of crystalline α-alanine (DL) and β-alanine were calculated from the standard molar energies of combustion, in oxygen, to yield CO2(g), N2(g), and H2O(l), measured by static-bomb combustion calorimetry at T = 298.15 K. The vapor pressures of both amino acids were measured as a function of temperature by the Knudsen effusion mass-loss technique. The standard molar enthalpies of sublimation at T = 298.15 K were derived from the Clausius-Clapeyron equation. The experimental values were used to calculate the standard (p° = 0.1 MPa) molar enthalpies of formation of α-alanine (DL) and β-alanine in the gaseous phase, ΔfH°m(g), as −426.3 ± 2.9 and −421.2 ± 1.9 kJ·mol⁻¹, respectively. Standard ab initio molecular orbital calculations at the G3 level were performed. Enthalpies of formation, using atomization reactions, were calculated and compared with experimental data. Detailed inspections of the molecular and electronic structures of the compounds studied were carried out.
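
    In the integrated form used to extract a sublimation enthalpy from vapor-pressure data, the Clausius-Clapeyron equation reads

    ```latex
    \ln p = -\frac{\Delta_{\mathrm{cr}}^{\mathrm{g}} H_{\mathrm{m}}^{\circ}}{R}
    \cdot \frac{1}{T} + C
    ```

    so the slope of ln p against 1/T yields the molar enthalpy of sublimation.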

  8. Microscopic insight into the structure of gallium isotopes

    NASA Astrophysics Data System (ADS)

    Verma, Preeti; Sharma, Chetan; Singh, Suram; Bharti, Arun; Khosa, S. K.

    2012-07-01

    The Projected Shell Model technique has been applied to odd-A 71-81Ga nuclei with the deformed single-particle states generated by the standard Nilsson potential. Various nuclear structure quantities have been calculated with this technique and compared with the available experimental data. The known experimental data for the yrast bands in these nuclei are persuasively described, and the band diagrams obtained show that the yrast bands in these odd-A Ga isotopes do not arise from a single intrinsic state alone but also involve multi-particle states. The back-bending in the moment of inertia and the electric quadrupole transitions are also calculated.

  9. Water-tight knee arthrotomy closure: comparison of a novel single bidirectional barbed self-retaining running suture versus conventional interrupted sutures.

    PubMed

    Nett, Michael; Avelar, Rui; Sheehan, Michael; Cushner, Fred

    2011-03-01

    Standard medial parapatellar arthrotomies of 10 cadaveric knees were closed with either conventional interrupted absorbable sutures (control group, mean of 19.4 sutures) or a single running knotless bidirectional barbed absorbable suture (experimental group). Water-tightness of the arthrotomy closure was compared by simulating a tense hemarthrosis and measuring arthrotomy leakage over 3 minutes. Mean total leakage was 356 mL and 89 mL in the control and experimental groups, respectively (p = 0.027). Using 8 of the 10 knees (4 closed with control sutures, 4 closed with an experimental suture), a tense hemarthrosis was again created, and iatrogenic suture rupture was performed: a proximal suture was cut at 1 minute; a distal suture was cut at 2 minutes. The impact of suture rupture was compared by measuring total arthrotomy leakage over 3 minutes. Mean total leakage was 601 mL and 174 mL in the control and experimental groups, respectively (p = 0.3). In summary, using a cadaveric model, arthrotomies closed with a single bidirectional barbed running suture were statistically significantly more water-tight than those closed using a standard interrupted technique. The sample size was insufficient to determine whether the two closure techniques differed in leakage volume after suture rupture.

  10. An Example of Process Evaluation.

    ERIC Educational Resources Information Center

    Karl, Marion C.

    The inappropriateness of standard experimental research design, which can stifle innovations, is discussed in connection with the problems of designing practical techniques for evaluating a Title III curriculum development project. The project, involving 12 school districts and 2,500 students, teaches concept understanding, critical thinking, and…

  11. A critical comparison of several low Reynolds number k-epsilon turbulence models for flow over a backward facing step

    NASA Technical Reports Server (NTRS)

    Steffen, C. J., Jr.

    1993-01-01

    Turbulent backward-facing step flow was examined using four low turbulent Reynolds number k-epsilon models and one standard high Reynolds number technique. A tunnel configuration of 1:9 (step height: exit tunnel height) was used. The models tested include: the original Jones and Launder; Chien; Launder and Sharma; and the recent Shih and Lumley formulation. The experimental reference of Driver and Seegmiller was used to make detailed comparisons between reattachment length, velocity, pressure, turbulent kinetic energy, Reynolds shear stress, and skin friction predictions. The results indicated that the use of a wall function for the standard k-epsilon technique did not reduce the calculation accuracy for this separated flow when compared to the low turbulent Reynolds number techniques.
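
    All five models share the same eddy-viscosity closure, which in the standard high-Reynolds-number form reads

    ```latex
    \nu_{t} = C_{\mu} \frac{k^{2}}{\varepsilon}, \qquad C_{\mu} = 0.09
    ```

    with the low-turbulent-Reynolds-number variants differing mainly in the damping functions and near-wall terms applied to this expression.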

  12. Laser Propulsion Standardization Issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scharring, Stefan; Eckel, Hans-Albert; Roeser, Hans-Peter

    It is a relevant issue in the research on laser propulsion that experimental results are treated seriously and that meaningful scientific comparison is possible between groups using different equipment and measurement techniques. However, critical aspects of experimental measurements are sparsely addressed in the literature. In addition, few studies so far have the benefit of independent confirmation by other laser propulsion groups. In this paper, we recommend several approaches towards standardization of published laser propulsion experiments. Such standards are particularly important for the measurement of laser ablation pulse energy, laser spot area, imparted impulse or thrust, and mass removal during ablation. Related examples are presented from experiences of an actual scientific cooperation between NU and DLR. On the basis of a given standardization, researchers may better understand and contribute their findings more clearly in the future, and compare those findings confidently with those already published in the laser propulsion literature. Relevant ISO standards are analyzed, and revised formats are recommended for application to laser propulsion studies.

  13. Accuracy and Precision of Three-Dimensional Low Dose CT Compared to Standard RSA in Acetabular Cups: An Experimental Study.

    PubMed

    Brodén, Cyrus; Olivecrona, Henrik; Maguire, Gerald Q; Noz, Marilyn E; Zeleznik, Michael P; Sköldenberg, Olof

    2016-01-01

    Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting.

  14. Knudsen Cell Studies of Ti-Al Thermodynamics

    NASA Technical Reports Server (NTRS)

    Jacobson, Nathan S.; Copland, Evan H.; Mehrotra, Gopal M.; Auping, Judith; Gray, Hugh R. (Technical Monitor)

    2002-01-01

    In this paper we describe the Knudsen cell technique for the measurement of thermodynamic activities in alloys. Numerous experimental details must be adhered to in order to obtain useful data: these include introduction of an in-situ standard, precise temperature measurement, elimination of thermal gradients, and precise cell positioning. Our first design is discussed, and sample data on Ti-Al alloys are presented. The second modification and associated improvements are also discussed.
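
    For orientation, a Knudsen cell yields the vapor pressure from the measured effusion rate, and the activity from the ratio to the pure-component pressure; a textbook sketch, not the authors' exact working equations:

    ```latex
    p = \frac{\Delta m}{A\,t}\sqrt{\frac{2\pi R T}{M}}, \qquad
    a_{i} = \frac{p_{i}(\mathrm{alloy})}{p_{i}^{*}(\mathrm{pure}\ i)}
    ```

    where Δm is the mass effusing through an orifice of area A in time t and M is the molar mass of the vapor species; an in-situ standard allows the reference pressure to be established under identical conditions.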

  15. Recent Work in Hybrid Radiation Transport Methods with Applications to Commercial Nuclear Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulesza, Joel A.

    This talk will begin with an overview of hybrid radiation transport methods followed by a discussion of the author’s work to advance current capabilities. The talk will then describe applications for these methods in commercial nuclear power reactor analyses and techniques for experimental validation. When discussing these analytical and experimental activities, the importance of technical standards such as those created and maintained by ASTM International will be demonstrated.

  16. A Practical Method for Identifying Significant Change Scores

    ERIC Educational Resources Information Center

    Cascio, Wayne F.; Kurtines, William M.

    1977-01-01

    A test of significance for identifying individuals who are most influenced by an experimental treatment, as measured by pre-post test change scores, is presented. The technique requires true difference scores, the reliability of obtained differences, and their standard error of measurement. (Author/JKS)
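
    One common formulation of such a test is a reliable-change-style index; this is shown for illustration and may differ in detail from the authors' statistic:

    ```latex
    RCI = \frac{x_{\mathrm{post}} - x_{\mathrm{pre}}}{SE_{\mathrm{diff}}}, \qquad
    SE_{\mathrm{diff}} = SD_{\mathrm{pre}} \sqrt{2} \sqrt{1 - r_{xx}}
    ```

    where r_xx is the test reliability; |RCI| > 1.96 flags a change unlikely to arise from measurement error alone at the 5% level.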

  17. Transferable Calibration Standard Developed for Quantitative Raman Scattering Diagnostics in High-Pressure Flames

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang-Viet; Kojima, Jun

    2005-01-01

    Researchers from NASA Glenn Research Center's Combustion Branch and the Ohio Aerospace Institute (OAI) have developed a transferable calibration standard for an optical technique called spontaneous Raman scattering (SRS) in high-pressure flames. SRS is perhaps the only technique that provides spatially and temporally resolved, simultaneous multiscalar measurements in turbulent flames. Such measurements are critical for the validation of numerical models of combustion. This study has been a combined experimental and theoretical effort to develop a spectral calibration database for multiscalar diagnostics using SRS in high-pressure flames. In the past, however, such measurements have used a one-of-a-kind experimental setup and a setup-dependent calibration procedure to empirically account for spectral interferences, or crosstalk, among the major species of interest. Such calibration procedures, being non-transferable, are prohibitively expensive to duplicate. A goal of this effort is to provide an SRS calibration database using transferable standards that can be implemented widely by other researchers for both atmospheric-pressure and high-pressure (less than 30 atm) SRS studies. A secondary goal of this effort is to provide quantitative multiscalar diagnostics in high-pressure environments to validate computational combustion codes.

  18. A Cost-effective and Reliable Method to Predict Mechanical Stress in Single-use and Standard Pumps

    PubMed Central

    Dittler, Ina; Dornfeld, Wolfgang; Schöb, Reto; Cocke, Jared; Rojahn, Jürgen; Kraume, Matthias; Eibl, Dieter

    2015-01-01

    Pumps are mainly used when transferring sterile culture broths in biopharmaceutical and biotechnological production processes. However, during the pumping process, shear forces occur which can lead to qualitative and/or quantitative product loss. To calculate the mechanical stress with limited experimental expense, an oil-water emulsion system was used, whose suitability for drop size detection in bioreactors has been demonstrated [1]. As drop breakup in the oil-water emulsion system is a function of mechanical stress, drop sizes need to be measured over the experimental time of the shear stress investigations. In previous studies, inline endoscopy has been shown to be an accurate and reliable measurement technique for drop size detection in liquid/liquid dispersions. The aim of this protocol is to show the suitability of the inline endoscopy technique for drop size measurements in pumping processes. To express the drop size, the Sauter mean diameter d32 was used as the representative diameter of drops in the oil-water emulsion. The results showed low variation in the Sauter mean diameters, quantified by standard deviations below 15%, indicating the reliability of the measurement technique. PMID:26274765
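
    The Sauter mean diameter used here is the diameter of a sphere with the same volume-to-surface ratio as the whole drop population:

    ```latex
    d_{32} = \frac{\sum_{i} n_{i} d_{i}^{3}}{\sum_{i} n_{i} d_{i}^{2}}
    ```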

  19. The Inhibition of the Rayleigh-Taylor Instability by Rotation.

    PubMed

    Baldwin, Kyle A; Scase, Matthew M; Hill, Richard J A

    2015-07-01

    It is well-established that the Coriolis force that acts on fluid in a rotating system can act to stabilise otherwise unstable flows. Chandrasekhar considered theoretically the effect of the Coriolis force on the Rayleigh-Taylor instability, which occurs at the interface between a dense fluid lying on top of a lighter fluid under gravity, concluding that rotation alone could not stabilise this system indefinitely. Recent numerical work suggests that rotation may, nevertheless, slow the growth of the instability. Experimental verification of these results using standard techniques is problematic, owing to the practical difficulty in establishing the initial conditions. Here, we present a new experimental technique for studying the Rayleigh-Taylor instability under rotation that side-steps the problems encountered with standard techniques by using a strong magnetic field to destabilize an otherwise stable system. We find that rotation about an axis normal to the interface acts to retard the growth rate of the instability and stabilise long wavelength modes; the scale of the observed structures decreases with increasing rotation rate, asymptoting to a minimum wavelength controlled by viscosity. We present a critical rotation rate, dependent on Atwood number and the aspect ratio of the system, for stabilising the most unstable mode.

  20. The Inhibition of the Rayleigh-Taylor Instability by Rotation

    PubMed Central

    Baldwin, Kyle A.; Scase, Matthew M.; Hill, Richard J. A.

    2015-01-01

    It is well-established that the Coriolis force that acts on fluid in a rotating system can act to stabilise otherwise unstable flows. Chandrasekhar considered theoretically the effect of the Coriolis force on the Rayleigh-Taylor instability, which occurs at the interface between a dense fluid lying on top of a lighter fluid under gravity, concluding that rotation alone could not stabilise this system indefinitely. Recent numerical work suggests that rotation may, nevertheless, slow the growth of the instability. Experimental verification of these results using standard techniques is problematic, owing to the practical difficulty in establishing the initial conditions. Here, we present a new experimental technique for studying the Rayleigh-Taylor instability under rotation that side-steps the problems encountered with standard techniques by using a strong magnetic field to destabilize an otherwise stable system. We find that rotation about an axis normal to the interface acts to retard the growth rate of the instability and stabilise long wavelength modes; the scale of the observed structures decreases with increasing rotation rate, asymptoting to a minimum wavelength controlled by viscosity. We present a critical rotation rate, dependent on Atwood number and the aspect ratio of the system, for stabilising the most unstable mode. PMID:26130005

  1. Centralized light-source optical access network based on polarization multiplexing.

    PubMed

    Grassi, Fulvio; Mora, José; Ortega, Beatriz; Capmany, José

    2010-03-01

    This paper presents and demonstrates a centralized light-source optical access network based on an optical polarization multiplexing technique. By using two optical sources in the Central Node emitting orthogonally polarized light for downstream and upstream operations, the Remote Node is kept source-free. EVM values below telecommunication standard requirements were measured experimentally when bidirectional digital signals were transmitted over 10 km of SMF employing the subcarrier multiplexing technique in the electrical domain.
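
    The error vector magnitude used as the figure of merit is conventionally defined as follows (generic definition, not specific to this paper):

    ```latex
    \mathrm{EVM}_{\mathrm{rms}} =
    \sqrt{\frac{\frac{1}{N}\sum_{k=1}^{N} |S_{k} - S_{0,k}|^{2}}
               {\frac{1}{N}\sum_{k=1}^{N} |S_{0,k}|^{2}}}
    ```

    where S_k are the measured constellation symbols and S_{0,k} the ideal reference symbols.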

  2. Higher-Order Spectral Analysis of F-18 Flight Flutter Data

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Dunn, Shane

    2005-01-01

    Royal Australian Air Force (RAAF) F/A-18 flight flutter test data is presented and analyzed using various techniques. The data includes high-quality measurements of forced responses and limit cycle oscillation (LCO) phenomena. Standard correlation and power spectral density (PSD) techniques are applied to the data and presented. Novel applications of experimentally-identified impulse responses and higher-order spectral techniques are also applied to the data and presented. The goal of this research is to develop methods that can identify the onset of nonlinear aeroelastic phenomena, such as LCO, during flutter testing.
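
    As a sketch of the standard PSD step (the sample rate, modal frequency, and damping are invented stand-ins for the flight data):

    ```python
    import numpy as np
    from scipy import signal

    fs = 200.0                              # assumed sample rate, Hz
    t = np.arange(0.0, 30.0, 1.0 / fs)
    # Emulated structural response: lightly damped 5.6 Hz mode plus noise
    x = (np.exp(-0.05 * t) * np.sin(2 * np.pi * 5.6 * t)
         + 0.1 * np.random.default_rng(1).standard_normal(t.size))

    f, pxx = signal.welch(x, fs=fs, nperseg=1024)   # Welch-averaged PSD
    print(f"dominant response frequency: {f[np.argmax(pxx)]:.2f} Hz")
    ```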

  3. Non-thermal plasma destruction of allyl alcohol in waste gas: kinetics and modelling

    NASA Astrophysics Data System (ADS)

    DeVisscher, A.; Dewulf, J.; Van Durme, J.; Leys, C.; Morent, R.; Van Langenhove, H.

    2008-02-01

    Non-thermal plasma treatment is a promising technique for the destruction of volatile organic compounds in waste gas. A relatively unexplored technique is the atmospheric negative dc multi-pin-to-plate glow discharge. This paper reports experimental results of allyl alcohol degradation and ozone production in this type of plasma. A new model was developed to describe these processes quantitatively. The model contains a detailed chemical degradation scheme, and describes the physics of the plasma by assuming that the fraction of electrons that takes part in chemical reactions is an exponential function of the reduced field. The model captured the experimental kinetic data to less than 2 ppm standard deviation.

  4. New method for stock-tank oil compositional analysis.

    PubMed

    McAndrews, Kristine; Nighswander, John; Kotzakoulakis, Konstantin; Ross, Paul; Schroeder, Helmut

    2009-01-01

    A new method for accurately determining stock-tank oil composition to normal pentatriacontane using gas chromatography is developed and validated. The new method addresses the potential errors associated with the traditional equipment and technique employed for extended hydrocarbon gas chromatography outside a controlled laboratory environment, such as on an offshore oil platform. In particular, the experimental measurement of stock-tank oil molecular weight with the freezing point depression technique and the use of an internal standard to find the unrecovered sample fraction are replaced with correlations for estimating these properties. The use of correlations reduces the number of necessary experimental steps in completing the required sample preparation and analysis, resulting in reduced uncertainty in the analysis.

  5. Computational Methods to Predict Protein Interaction Partners

    NASA Astrophysics Data System (ADS)

    Valencia, Alfonso; Pazos, Florencio

    In the new paradigm for studying biological phenomena represented by Systems Biology, cellular components are not considered in isolation but as forming complex networks of relationships. Protein interaction networks are among the first objects studied from this new point of view. Deciphering the interactome (the whole network of interactions for a given proteome) has been shown to be a very complex task. Computational techniques for detecting protein interactions have become standard tools for dealing with this problem, helping and complementing their experimental counterparts. Most of these techniques use genomic or sequence features intuitively related with protein interactions and are based on "first principles" in the sense that they do not involve training with examples. There are also other computational techniques that use other sources of information (i.e. structural information or even experimental data) or are based on training with examples.

  6. Glass transition temperatures of liquid prepolymers obtained by thermal penetrometry

    NASA Technical Reports Server (NTRS)

    Potts, J. E., Jr.; Ashcraft, A. C.

    1973-01-01

    Thermal penetrometry is an experimental technique for detecting the temperature at which a frozen prepolymer becomes soft enough to be pierced by a weighted penetrometer needle; the temperature at which this occurs is called the penetration temperature. The apparatus used to obtain penetration temperatures can be set up largely from standard parts.

  7. TECHNICAL MANUAL: A SURVEY OF EQUIPMENT AND METHODS FOR PARTICULATE SAMPLING IN INDUSTRIAL PROCESS STREAMS

    EPA Science Inventory

    The manual lists and describes the instruments and techniques that are available for measuring the concentration or size distribution of particles suspended in process streams. The standard, official, well established methods are described as well as some experimental methods and...

  8. Hydrogen as an atomic beam standard

    NASA Technical Reports Server (NTRS)

    Peters, H. E.

    1972-01-01

    After a preliminary discussion of feasibility, new experimental work with a hydrogen beam is described. A space focused magnetic resonance technique with separated oscillatory fields is used with a monochromatic beam of cold hydrogen atoms which are selected from a higher temperature source. The first resonance curves and other experimental results are presented. These results are interpreted from the point of view of accuracy potential and frequency stability, and are compared with hydrogen maser and cesium beam capabilities.

  9. Embedded wavelet packet transform technique for texture compression

    NASA Astrophysics Data System (ADS)

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay

    1995-09-01

    A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, the FBI wavelet/scalar quantization standard, and the EZW scheme across extensive experimental results, we observe a significant improvement in rate-distortion performance and visual quality.
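
    The energy-compaction step described above can be illustrated with PyWavelets; the texture image and the 5% threshold below are arbitrary stand-ins, and the embedded coding of the coefficients is omitted:

      import numpy as np
      import pywt  # PyWavelets

      # Toy "texture": a noisy oriented sinusoidal pattern in place of a
      # real texture image from the paper's test set.
      rng = np.random.default_rng(1)
      x = np.linspace(0, 8 * np.pi, 128)
      img = np.sin(x)[None, :] * np.cos(x)[:, None] + 0.1 * rng.standard_normal((128, 128))

      # Two-level wavelet packet decomposition, then measure energy
      # compaction: the share of total energy in the largest 5% of coefficients.
      wp = pywt.WaveletPacket2D(data=img, wavelet="db2", mode="symmetric", maxlevel=2)
      coeffs = np.concatenate([node.data.ravel() for node in wp.get_level(2)])
      energy = np.sort(coeffs ** 2)[::-1]
      top5 = int(0.05 * energy.size)
      print(f"top 5% of coefficients carry {energy[:top5].sum() / energy.sum():.1%} of the energy")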

  10. Crystallographic Determination of Molecular Parameters for K2SiF6: A Physical Chemistry Laboratory Experiment.

    ERIC Educational Resources Information Center

    Loehlin, James H.; Norton, Alexandra P.

    1988-01-01

    Describes a crystallography experiment using both diffraction-angle and diffraction-intensity information to determine the lattice constant and a lattice independent molecular parameter, while still employing standard X-ray powder diffraction techniques. Details the method, experimental details, and analysis for this activity. (CW)

  11. Pupils' Cognitive Activity Stimulation by Means of Physical Training

    ERIC Educational Resources Information Center

    Nekhoroshkov, Anatolij V.

    2016-01-01

    The article presents research results on the influence of physical activity on the intellectual performance of high school students. The methods of experiment and standardized observation were used. The efficiency of the cognitive activity was assessed by the "Proof test" technique of B. Burdon. Within the experimental class, the program…

  12. CMB EB and TB cross-spectrum estimation via pseudospectrum techniques

    NASA Astrophysics Data System (ADS)

    Grain, J.; Tristram, M.; Stompor, R.

    2012-10-01

    We discuss methods for estimating EB and TB spectra of the cosmic microwave background anisotropy maps covering limited sky area. Such odd-parity correlations are expected to vanish whenever parity is not broken. As this is indeed the case in the standard cosmologies, any evidence to the contrary would have a profound impact on our theories of the early Universe. Such correlations could also become a sensitive diagnostic of some particularly insidious instrumental systematics. In this work we introduce three different unbiased estimators based on the so-called standard and pure pseudo-spectrum techniques and later assess their performance by means of extensive Monte Carlo simulations performed for different experimental configurations. We find that a hybrid approach combining a pure estimate of B-mode multipoles with a standard one for E-mode (or T) multipoles leads to the smallest error bars for both EB (or TB, respectively) spectra as well as for the three other polarization-related angular power spectra (i.e., EE, BB, and TE). However, if both E and B multipoles are estimated using the pure technique, the loss of precision for the EB spectrum is not larger than ˜30%. Moreover, for the experimental configurations considered here, the statistical uncertainties of the pseudo-spectrum estimates, due to sampling variance and instrumental noise, are at most a factor of ˜1.4 for TT, EE, and TE spectra, and a factor of ˜2 for BB, TB, and EB spectra, higher than the most optimistic Fisher estimate of the variance.

  13. Experimental stress–strain analysis of tapered silica optical fibers with nanofiber waist

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holleis, S.; Hoinkes, T.; Wuttke, C.

    2014-04-21

    We experimentally determine tensile force–elongation diagrams of tapered optical fibers with a nanofiber waist. The tapered optical fibers are produced from standard silica optical fibers using a heat and pull process. Both the force–elongation data and scanning electron microscope images of the rupture points indicate a brittle material. Despite the small waist radii of only a few hundred nanometers, our experimental data can be fully explained by a nonlinear stress–strain model that relies on material properties of macroscopic silica optical fibers. This is an important asset when it comes to designing miniaturized optical elements, as one can rely on the well-founded material characteristics of standard optical fibers. Based on this understanding, we demonstrate a simple and non-destructive technique that allows us to determine the waist radius of the tapered optical fiber. We find excellent agreement with independent scanning electron microscope measurements of the waist radius.

  14. Interband coding extension of the new lossless JPEG standard

    NASA Astrophysics Data System (ADS)

    Memon, Nasir D.; Wu, Xiaolin; Sippy, V.; Miller, G.

    1997-01-01

    Due to the perceived inadequacy of current standards for lossless image compression, the JPEG committee of the International Standards Organization (ISO) has been developing a new standard. A baseline algorithm, called JPEG-LS, has already been completed and is awaiting approval by national bodies. The JPEG-LS baseline algorithm, despite being simple, is surprisingly efficient, and provides compression performance that is within a few percent of the best and more sophisticated techniques reported in the literature. Extensive experimentation performed by the authors indicates that an overall improvement of more than 10 percent in compression performance will be difficult to obtain even at the cost of great complexity, at least not with traditional approaches to lossless image compression. However, if we allow inter-band decorrelation and modeling in the baseline algorithm, nearly 30 percent improvement in compression gains for specific images in the test set becomes possible with a modest computational cost. In this paper we propose and investigate a few techniques for exploiting inter-band correlations in multi-band images. These techniques have been designed within the framework of the baseline algorithm, and require minimal changes to the basic architecture of the baseline, retaining its essential simplicity.
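
    The inter-band idea can be sketched with a least-squares predictor between two synthetic bands; the affine relation and noise level are hypothetical, and the entropy figure is only a rough proxy for achievable lossless bit rates:

      import numpy as np

      # Hypothetical two-band image: band2 is a noisy affine function of
      # band1, mimicking the inter-band correlation exploited above.
      rng = np.random.default_rng(2)
      band1 = rng.integers(0, 256, size=(256, 256)).astype(float)
      band2 = np.clip(0.8 * band1 + 20 + rng.normal(0, 3, band1.shape), 0, 255)

      def entropy_bits(a):
          """Empirical zeroth-order entropy (bits/pixel) of a rounded array."""
          vals, counts = np.unique(np.rint(a), return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      # Fit a gain/offset predictor for band2 from band1; code the residual
      # instead of the raw band.
      g = np.polyfit(band1.ravel(), band2.ravel(), 1)
      residual = band2 - np.polyval(g, band1)
      print(f"band2 alone: {entropy_bits(band2):.2f} bits/pixel")
      print(f"residual   : {entropy_bits(residual):.2f} bits/pixel")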

  15. Reference Correlation for the Density and Viscosity of Eutectic Liquid Alloys Al+Si, Pb+Bi, and Pb+Sn

    NASA Astrophysics Data System (ADS)

    Assael, M. J.; Mihailidou, E. K.; Brillo, J.; Stankus, S. V.; Wu, J. T.; Wakeham, W. A.

    2012-09-01

    In this paper, the available experimental data for the density and viscosity of eutectic liquid alloys Al+Si, Pb+Bi, and Pb+Sn have been critically examined with the intention of establishing a reference standard representation of both density and viscosity. All experimental data have been categorized as primary or secondary according to the quality of measurement, the technique employed, and the presentation of the data, as specified by a series of carefully defined criteria. The proposed standard reference correlations for the density of liquid Al+Si, Pb+Bi, and Pb+Sn are, respectively, characterized by deviations of 2.0%, 2.9%, and 0.5% at the 95% confidence level. The standard reference correlations for the viscosity of liquid Al+Si, Pb+Bi, and Pb+Sn are, respectively, characterized by deviations of 7.7%, 14.2%, and 12.4% at the 95% confidence level.

  16. Les Houches 2017: Physics at TeV Colliders Standard Model Working Group Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, J.R.; et al.

    This Report summarizes the proceedings of the 2017 Les Houches workshop on Physics at TeV Colliders. Session 1 dealt with (I) new developments relevant for high precision Standard Model calculations, (II) theoretical uncertainties and dataset dependence of parton distribution functions, (III) new developments in jet substructure techniques, (IV) issues in the theoretical description of the production of Standard Model Higgs bosons and how to relate experimental measurements, (V) phenomenological studies essential for comparing LHC data from Run II with theoretical predictions and projections for future measurements, and (VI) new developments in Monte Carlo event generators.

  17. Development of a standardized sequential extraction protocol for simultaneous extraction of multiple actinide elements

    DOE PAGES

    Faye, Sherry A.; Richards, Jason M.; Gallardo, Athena M.; ...

    2017-02-07

    Sequential extraction is a useful technique for assessing the potential to leach actinides from soils; however, current literature lacks uniformity in experimental details, making direct comparison of results impossible. This work continued development toward a standardized five-step sequential extraction protocol by analyzing extraction behaviors of 232Th, 238U, 239,240Pu and 241Am from lake and ocean sediment reference materials. Results produced a standardized procedure after creating more defined reaction conditions to improve method repeatability. A NaOH fusion procedure is recommended following sequential leaching for the complete dissolution of insoluble species.

  18. Fifty years of solid-phase extraction in water analysis--historical development and overview.

    PubMed

    Liska, I

    2000-07-14

    The use of an appropriate sample handling technique is a must in the analysis of organic micropollutants in water. The efforts to use a solid phase for the recovery of analytes from a water matrix prior to their detection have a long history. Since the first experimental trials using activated carbon filters, performed 50 years ago, solid-phase extraction (SPE) has become an established sample preparation technique. The initial experimental applications of SPE led to widespread use of this technique in current water analysis and to the adoption of SPE into standardized analytical methods. During the decades of its evolution, chromatographers became aware of the advantages of SPE and, despite many innovations that appeared in the last decade, new SPE developments are still expected in the future. A brief overview of 50 years of the history of the use of SPE in organic trace analysis of water is given in the present paper.

  19. Hypnotherapy for labor and birth.

    PubMed

    Beebe, Kathleen R

    2014-01-01

    Hypnotherapy is an integrative mind-body technique with therapeutic potential in various health care applications, including labor and birth. Evaluating the efficacy of this modality in controlled studies can be difficult, because of methodologic challenges, such as obtaining adequate sample sizes and standardizing experimental conditions. Women using hypnosis techniques for childbirth in hospital settings may face barriers related to caregiver resistance or institutional policies. The potential anxiolytic and analgesic effects of clinical hypnosis for childbirth merit further study. Nurses caring for women during labor and birth can increase their knowledge and skills with strategies for supporting hypnotherapeutic techniques. © 2014 AWHONN.

  20. Single crystals and nonlinear process for outstanding vibration-powered electrical generators.

    PubMed

    Badel, Adrien; Benayad, Abdelmjid; Lefeuvre, Elie; Lebrun, Laurent; Richard, Claude; Guyomar, Daniel

    2006-04-01

    This paper compares the performances of vibration-powered electrical generators using a piezoelectric ceramic and a piezoelectric single crystal associated with several power conditioning circuits. A new approach to piezoelectric power conversion based on nonlinear voltage processing is presented, leading to three novel high-performance power conditioning interfaces. Theoretical predictions and experimental results show that the nonlinear processing technique may increase the power harvested by a factor of 8 compared to standard techniques. Moreover, it is shown that, for a given energy harvesting technique, generators using single crystals deliver 20 times more power than generators using piezoelectric ceramics.

  1. A methodology for small scale rural land use mapping in semi-arid developing countries using orbital imagery. Part 5: Experimental and operational techniques of mapping land use

    NASA Technical Reports Server (NTRS)

    Vangenderen, J. L. (Principal Investigator); Lock, B. F.

    1976-01-01

    The author has identified the following significant results. The scope of the preprocessing techniques was restricted to standard material from the EROS Data Center, accompanied by some enlarging procedures and the use of the diazo process. Investigation has shown that the most appropriate sampling strategy for this study is the stratified random technique. A viable sampling procedure, together with a method for determining the minimum number of sample points needed to test the results of any interpretation, is presented.
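
    For context on the sample-point question, the binomial approximation commonly used in land-use accuracy assessment (not necessarily the report's own derivation) gives a minimum number of random check points:

      import math

      def min_sample_points(p_expected=0.85, half_width=0.05, z=1.96):
          """Minimum number of random sample points so that an interpretation
          accuracy near p_expected is estimated to within +/- half_width at
          roughly 95% confidence (simple binomial approximation).
          """
          return math.ceil(z ** 2 * p_expected * (1 - p_expected) / half_width ** 2)

      print(min_sample_points())  # -> 196 points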

  2. Raman microimaging of murine lungs: insight into the vitamin A content.

    PubMed

    Marzec, K M; Kochan, K; Fedorowicz, A; Jasztal, A; Chruszcz-Lipska, K; Dobrowolski, J Cz; Chlopicki, S; Baranska, M

    2015-04-07

    The composition of the lung tissue of mice was investigated using Raman confocal microscopy at 532 nm excitation wavelength and was supported with various staining techniques as well as DFT calculations. This combination of experimental and theoretical techniques allows for the study of the distribution of lung lipofibroblasts (LIFs), rich in vitamin A, as well as the chemical structure of vitamin A. The comparison of the Raman spectra derived from LIFs with the experimental and theoretical spectra of standard retinoids showed the ability of LIFs to store all-trans retinol, which is partially oxidized to all-trans retinal and retinoic acid. Moreover, we were able to visualize the distribution of other lung tissue components including the surfactant and selected enzymes (lipoxygenase/glucose oxidase).

  3. Experimental progress in positronium laser physics

    NASA Astrophysics Data System (ADS)

    Cassidy, David B.

    2018-03-01

    The field of experimental positronium physics has advanced significantly in the last few decades, with new areas of research driven by the development of techniques for trapping and manipulating positrons using Surko-type buffer gas traps. Large numbers of positrons (typically ≥106) accumulated in such a device may be ejected all at once, so as to generate an intense pulse. Standard bunching techniques can produce pulses with ns (mm) temporal (spatial) beam profiles. These pulses can be converted into a dilute Ps gas in vacuum with densities on the order of 107 cm-3 which can be probed by standard ns pulsed laser systems. This allows for the efficient production of excited Ps states, including long-lived Rydberg states, which in turn facilitates numerous experimental programs, such as precision optical and microwave spectroscopy of Ps, the application of Stark deceleration methods to guide, decelerate and focus Rydberg Ps beams, and studies of the interactions of such beams with other atomic and molecular species. These methods are also applicable to antihydrogen production and spectroscopic studies of energy levels and resonances in positronium ions and molecules. A summary of recent progress in this area will be given, with the objective of providing an overview of the field as it currently exists, and a brief discussion of some future directions.

  4. Feasibility and Initial Dosimetric Findings for a Randomized Trial Using Dose-Painted Multiparametric Magnetic Resonance Imaging–Defined Targets in Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bossart, Elizabeth L., E-mail: EBossart@med.miami.edu; Stoyanova, Radka; Sandler, Kiri

    2016-06-01

    Purpose: To compare dosimetric characteristics with multiparametric magnetic resonance imaging–identified imaging tumor volume (gross tumor volume, GTV), prostate clinical target volume and planning target volume, and organs at risk (OARs) for 2 treatment techniques representing 2 arms of an institutional phase 3 randomized trial of hypofractionated external beam image guided highly targeted radiation therapy. Methods and Materials: Group 1 (n=20) patients were treated before the trial inception with the standard dose prescription. Each patient had an additional treatment plan generated per the experimental arm. A total of 40 treatment plans were compared (20 plans for each technique). Group 2 (n=15) consists of patients currently accrued to the hypofractionated external beam image guided highly targeted radiation therapy trial. Plans were created as per the treatment arm, with additional plans for 5 of the group 2 experimental arm with a 3-mm expansion in the imaging GTV. Results: For all plans in both patient groups, planning target volume coverage ranged from 95% to 100%; GTV coverage of 89.3 Gy for the experimental treatment plans ranged from 95.2% to 99.8%. For both groups 1 and 2, the percent volumes of rectum/anus and bladder receiving 40 Gy, 65 Gy, and 80 Gy were smaller in the experimental plans than in the standard plans. The percent volume at 1 Gy per fraction and 1.625 Gy per fraction were compared between the standard and the experimental arms, and these were found to be equivalent. Conclusions: The dose per fraction to the OARs can be made equal even when giving a large simultaneous integrated boost to the GTV. The data suggest that a GTV margin may be added without significant dose effects on the OARs.

  6. Modems for emerging digital cellular-mobile radio system

    NASA Technical Reports Server (NTRS)

    Feher, Kamilo

    1991-01-01

    Digital modem techniques for emerging digital cellular telecommunications-mobile radio system applications are described and analyzed. In particular, theoretical performance, experimental results, principles of operation, and various architectures of pi/4-QPSK (pi/4-shifted coherent or differential QPSK) modems for second-generation US digital cellular radio system applications are presented. The spectral/power efficiency and performance of the pi/4-QPSK modems (American and Japanese digital cellular emerging standards) are studied and briefly compared to GMSK (Gaussian minimum-shift keying) modems (proposed for European DECT and GSM cellular standards). Improved filtering strategies and digital pilot-aided (digital channel sounding) techniques are also considered for pi/4-QPSK and other digital modems. These techniques could significantly improve the performance of digital cellular and other digital land mobile and satellite mobile radio systems. More spectrally efficient modem trends for future cellular/mobile (land mobile) and satellite communication systems applications are also highlighted.
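
    A toy illustration of the differential pi/4-QPSK mapping: each 2-bit symbol selects a phase increment from {+-pi/4, +-3pi/4}, so successive symbols alternate between two QPSK constellations offset by 45 degrees. The dibit-to-increment table here is one common Gray-coded choice and may differ from the cited standards:

      import numpy as np

      PHASE_INC = {(0, 0): np.pi / 4, (0, 1): 3 * np.pi / 4,
                   (1, 1): -3 * np.pi / 4, (1, 0): -np.pi / 4}

      def pi4qpsk_symbols(bits):
          """Map a bit sequence to unit-amplitude pi/4-QPSK symbols."""
          phase, out = 0.0, []
          for b0, b1 in zip(bits[0::2], bits[1::2]):
              phase += PHASE_INC[(b0, b1)]      # differential encoding
              out.append(np.exp(1j * phase))
          return np.asarray(out)

      syms = pi4qpsk_symbols([0, 0, 1, 0, 1, 1, 0, 1])
      print(np.angle(syms, deg=True).round(1))  # [45., 0., -135., 0.]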

  7. Measurement Issues In Pulsed Laser Propulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinko, John E.; Scharring, Stefan; Eckel, Hans-Albert

    Various measurement techniques have been used throughout the over 40-year history of laser propulsion. Often, these approaches suffered from inconsistencies in definitions of the key parameters that define the physics of laser ablation impulse generation. Such parameters include, but are not limited to, the pulse energy, spot area, imparted impulse, and ablated mass. The limits and characteristics of common measurement techniques in each of these areas will be explored as they relate to laser propulsion. The idea of establishing some standardization system for laser propulsion data is introduced in this paper, so that reported results may be considered and studied by the general community with more certain understanding of particular merits and limitations. In particular, it is the intention to propose a minimum set of requirements a literature study should meet. Some international standards for measurements are already published, but modifications or revisions of such standards may be necessary for application to laser ablation propulsion. Issues relating to development of standards will be discussed, as well as some examples of specific experimental circumstances in which standardization would have prevented misinterpretation or misuse of past data.

  8. The Effect of Student-Driven Projects on the Development of Statistical Reasoning

    ERIC Educational Resources Information Center

    Sovak, Melissa M.

    2010-01-01

    Research has shown that even if students pass a standard introductory statistics course, they often still lack the ability to reason statistically. Many instructional techniques for enhancing the development of statistical reasoning have been discussed, although there is often little to no experimental evidence that they produce effective results…

  9. Real-time determination of laser beam quality by modal decomposition.

    PubMed

    Schmidt, Oliver A; Schulze, Christian; Flamm, Daniel; Brüning, Robert; Kaiser, Thomas; Schröter, Siegmund; Duparré, Michael

    2011-03-28

    We present a real-time method to determine the beam propagation ratio M2 of laser beams. The all-optical measurement of modal amplitudes yields M2 parameters that conform to the ISO standard method. The experimental technique is simple and fast, which makes it possible to investigate laser beams under conditions inaccessible to other methods.
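
    The link between modal amplitudes and M2 can be sketched for the simplest case, an incoherent mixture of one-dimensional Hermite-Gaussian modes, where M^2 = sum over n of p_n(2n+1) for normalized modal powers p_n; the paper's full two-dimensional treatment is more general:

      import numpy as np

      def m_squared(powers):
          """Beam propagation ratio for an incoherent mix of 1-D
          Hermite-Gaussian modes HG_n: M^2 = sum_n p_n * (2 n + 1),
          with p_n the normalized modal powers.
          """
          p = np.asarray(powers, dtype=float)
          p = p / p.sum()
          n = np.arange(p.size)
          return float(np.sum(p * (2 * n + 1)))

      print(m_squared([0.9, 0.08, 0.02]))  # mostly fundamental -> M^2 = 1.24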

  10. Crystalline cellulose elastic modulus predicted by atomistic models of uniform deformation and nanoscale indentation

    Treesearch

    Xiawa Wu; Robert J. Moon; Ashlie Martini

    2013-01-01

    The elastic modulus of cellulose Iß in the axial and transverse directions was obtained from atomistic simulations using both the standard uniform deformation approach and a complementary approach based on nanoscale indentation. This allowed comparisons between the methods and closer connectivity to experimental measurement techniques. A reactive...

  11. Blending and nudging in fluid dynamics: some simple observations

    NASA Astrophysics Data System (ADS)

    Germano, M.

    2017-10-01

    Blending and nudging methods have recently been applied in fluid dynamics, particularly for the assimilation of experimental data into computations. In the paper we formally derive the differential equation associated with blending and compare it to the standard nudging equation. Some simple considerations related to these techniques and their mutual relations are presented.
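
    A minimal sketch of nudging with a toy scalar model (all values hypothetical): the model dx/dt = f(x) is relaxed toward observations through an extra term k(x_obs - x):

      import numpy as np

      def f(x):
          return -0.5 * x                      # toy model dynamics

      k, dt = 2.0, 0.01                        # nudging gain and time step
      t = np.arange(0, 10, dt)
      x_true = 3.0 * np.exp(-0.5 * t)          # "truth" supplying observations
      x = 0.0                                  # model started from the wrong state
      for i in range(1, t.size):
          x += dt * (f(x) + k * (x_true[i] - x))   # forward-Euler nudged step
      print(f"final model-truth mismatch: {abs(x - x_true[-1]):.2e}")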

  12. Global Journal of Computer Science and Technology. Volume 1.2

    ERIC Educational Resources Information Center

    Dixit, R. K.

    2009-01-01

    Articles in this issue of "Global Journal of Computer Science and Technology" include: (1) Input Data Processing Techniques in Intrusion Detection Systems--Short Review (Suhair H. Amer and John A. Hamilton, Jr.); (2) Semantic Annotation of Stock Photography for CBIR Using MPEG-7 standards (R. Balasubramani and V. Kannan); (3) An Experimental Study…

  13. Three-particle annihilation in a 2D heterostructure revealed through data-hypercubic photoresponse microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Gabor, Nathaniel M.

    2017-05-01

    Van der Waals (vdW) heterostructures - which consist of precisely assembled atomically thin electronic materials - exhibit unusual quantum behavior. These quantum materials-by-design are of fundamental interest in basic scientific research and hold tremendous potential in advanced technological applications. Problematically, the fundamental optoelectronic response in these heterostructures is difficult to access using the standard techniques within the traditions of materials science and condensed matter physics. In the standard approach, characterization is based on the measurement of a small amount of one-dimensional data, which is used to gain a precise picture of the material properties of the sample. However, these techniques are fundamentally lacking in describing the complex interdependency of experimental degrees of freedom in vdW heterostructures. In this talk, I will present our recent experiments that utilize a highly data-intensive approach to gain deep understanding of the infrared photoresponse in vdW heterostructure photodetectors. These measurements, which combine state-of-the-art data analytics and measurement design with fundamentally new device structures and experimental parameters, give a clear picture of electron-hole pair interactions at ultrafast time scales.

  14. Influence of apical enlargement on the repair of apical periodontitis in rats.

    PubMed

    Jara, C M; Hartmann, R C; Böttcher, D E; Souza, T S; Gomes, M S; Figueiredo, J A P

    2018-05-08

    To evaluate the influence of different apical enlargement protocols on the radiographic and histological healing of apical periodontitis in rats. Apical periodontitis was induced bilaterally in the mandibular right and left first molars of 24 Wistar rats by pulp exposure to the oral cavity for 3 weeks. A standard serial root canal preparation technique was performed in the molar of one side, whilst the opposite side was the control group. Rats were randomly divided into three experimental groups (n = 8), according to the diameter of apical enlargement during root canal preparation: K-files size 20 (EG1), size 25 (EG2) and size 30 (EG3). Each animal was its own positive control, because the opposite arch remained untreated. Root canals were filled with a standard technique. After 3 weeks, the animals were euthanized. The main outcome of apical periodontitis healing was evaluated radiographically (mm²) and histologically (ordinal scores of inflammation) using a HE staining technique. The measurement of effect was obtained between the three experimental groups by carrying out generalized estimating equations, with Poisson regression with robust variance, pairing each experimental group with its respective control group within animals, adjusted for the mean within animal differences, with α = 5%. The mean and standard deviations of radiographic apical periodontitis size (mm²) and intensity of histological inflammatory scores were, respectively: EG1 (0.44 ± 0.27; 2.25 ± 0.46), EG2 (0.33 ± 0.10; 2.50 ± 0.53) and EG3 (0.22 ± 0.08; 2.63 ± 0.74). After 3 weeks, a significantly more favourable radiographic repair was observed when larger apical enlargement was performed (EG3), compared to EG1 and EG2 (P = 0.001). All experimental groups were associated with a significant difference on the radiographic and histological healing of apical periodontitis compared with its respective control group. Under the experimental conditions of this study, a larger apical enlargement protocol favoured a more rapid radiographic repair of apical periodontitis in rats after a 3-week follow-up. © 2018 International Endodontic Journal. Published by John Wiley & Sons Ltd.
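
    For readers unfamiliar with the reported analysis style, a hedged sketch of a Poisson GEE with robust variance in statsmodels follows; the column names, scores, and grouping below are hypothetical, not the study's dataset:

      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Repeated measures clustered within animal; 'ctrl' is the reference level.
      df = pd.DataFrame({
          "animal": [1, 1, 2, 2, 3, 3, 4, 4],
          "group":  ["EG1", "ctrl", "EG2", "ctrl", "EG3", "ctrl", "EG1", "ctrl"],
          "score":  [2, 3, 3, 3, 2, 3, 2, 3],   # ordinal inflammation scores
      })
      model = smf.gee("score ~ C(group, Treatment('ctrl'))", groups="animal",
                      data=df, family=sm.families.Poisson(),
                      cov_struct=sm.cov_struct.Exchangeable())
      print(model.fit().summary())  # robust (sandwich) standard errors by default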

  15. A Review of NIST Primary Activity Standards for (18)F: 1982 to 2013.

    PubMed

    Bergeron, Denis E; Cessna, Jeffrey T; Coursey, Bert M; Fitzgerald, Ryan; Zimmerman, Brian E

    2014-01-01

    The new NIST activity standardization for (18)F, described in 2014 in Applied Radiation and Isotopes (v. 85, p. 77), differs from results obtained between 1998 and 2008 by 4 %. The new results are considered to be very reliable; they are based on a battery of robust primary measurement techniques and bring the NIST standard into accord with other national metrology institutes. This paper reviews all ten (18)F activity standardizations performed at NIST from 1982 to 2013, with a focus on experimental variables that might account for discrepancies. We have identified many possible sources of measurement bias and eliminated most of them, but we have not adequately accounted for the 1998-2008 results.

  16. Standards for data acquisition and software-based analysis of in vivo electroencephalography recordings from animals. A TASK1-WG5 report of the AES/ILAE Translational Task Force of the ILAE.

    PubMed

    Moyer, Jason T; Gnatkovsky, Vadym; Ono, Tomonori; Otáhal, Jakub; Wagenaar, Joost; Stacey, William C; Noebels, Jeffrey; Ikeda, Akio; Staley, Kevin; de Curtis, Marco; Litt, Brian; Galanopoulou, Aristea S

    2017-11-01

    Electroencephalography (EEG)-the direct recording of the electrical activity of populations of neurons-is a tremendously important tool for diagnosing, treating, and researching epilepsy. Although standard procedures for recording and analyzing human EEG exist and are broadly accepted, there are no such standards for research in animal models of seizures and epilepsy-recording montages, acquisition systems, and processing algorithms may differ substantially among investigators and laboratories. The lack of standard procedures for acquiring and analyzing EEG from animal models of epilepsy hinders the interpretation of experimental results and reduces the ability of the scientific community to efficiently translate new experimental findings into clinical practice. Accordingly, the intention of this report is twofold: (1) to review current techniques for the collection and software-based analysis of neural field recordings in animal models of epilepsy, and (2) to offer pertinent standards and reporting guidelines for this research. Specifically, we review current techniques for signal acquisition, signal conditioning, signal processing, data storage, and data sharing, and include applicable recommendations to standardize collection and reporting. We close with a discussion of challenges and future opportunities, and include a supplemental report of currently available acquisition systems and analysis tools. This work represents a collaboration on behalf of the American Epilepsy Society/International League Against Epilepsy (AES/ILAE) Translational Task Force (TASK1-Workgroup 5), and is part of a larger effort to harmonize video-EEG interpretation and analysis methods across studies using in vivo and in vitro seizure and epilepsy models. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.

  17. Approaches to answering critical CER questions.

    PubMed

    Kinnier, Christine V; Chung, Jeanette W; Bilimoria, Karl Y

    2015-01-01

    While randomized controlled trials (RCTs) are the gold standard for research, many research questions cannot be ethically and practically answered using an RCT. Comparative effectiveness research (CER) techniques are often better suited than RCTs to address the effects of an intervention under routine care conditions, an outcome otherwise known as effectiveness. CER research techniques covered in this section include: effectiveness-oriented experimental studies such as pragmatic trials and cluster randomized trials, treatment response heterogeneity, observational and database studies including adjustment techniques such as sensitivity analysis and propensity score analysis, systematic reviews and meta-analysis, decision analysis, and cost effectiveness analysis. Each section describes the technique and covers the strengths and weaknesses of the approach.

  18. Control designs for low-loss active magnetic bearings: Theory and implementation

    NASA Astrophysics Data System (ADS)

    Wilson, Brian Christopher David

    Active Magnetic Bearings (AMB) have been proposed for use in Electromechanical Flywheel Batteries. In these devices, kinetic energy is stored in a magnetically levitated flywheel which spins in a vacuum. The AMB eliminates all mechanical losses; however, electrical loss, which is proportional to the square of the magnetic flux, is still significant. For efficient operation, the flux bias, which is typically introduced into the electromagnets to improve the AMB stiffness, must be reduced, preferably to zero. This zero-bias (ZB) mode of operation cripples the classical control techniques which are customarily used, and nonlinear control is required. As a compromise between AMB stiffness and efficiency, a new flux bias scheme is proposed, called the generalized complementary flux condition (gcfc). A flux-bias-dependent trade-off exists between AMB stiffness, power consumption, and power loss. This work theoretically develops and experimentally verifies new low-loss AMB control designs which employ the gcfc condition. Particular attention is paid to the removal of the singularity present in the standard nonlinear control techniques when operating in ZB. Experimental verification is conducted on a 6-DOF AMB reaction wheel. Practical aspects of the gcfc implementation, such as flux measurement and flux-bias implementation with voltage-mode amplifiers using IR compensation, are investigated. Comparisons are made between the gcfc bias technique and the standard constant-flux-sum (cfs) bias method. Under typical operating circumstances, theoretical analysis and experimental data show that the new gcfc bias scheme is more efficient in producing the control flux required for rotor stabilization than the ordinary cfs bias strategy.

  19. Failure of Standard Training Sets in the Analysis of Fast-Scan Cyclic Voltammetry Data.

    PubMed

    Johnson, Justin A; Rodeberg, Nathan T; Wightman, R Mark

    2016-03-16

    The use of principal component regression, a multivariate calibration method, in the analysis of in vivo fast-scan cyclic voltammetry data allows for separation of overlapping signal contributions, permitting evaluation of the temporal dynamics of multiple neurotransmitters simultaneously. To accomplish this, the technique relies on information about current-concentration relationships across the scan-potential window gained from analysis of training sets. The ability of the constructed models to resolve analytes depends critically on the quality of these data. Recently, the use of standard training sets obtained under conditions other than those of the experimental data collection (e.g., with different electrodes, animals, or equipment) has been reported. This study evaluates the analyte resolution capabilities of models constructed using this approach from both a theoretical and experimental viewpoint. A detailed discussion of the theory of principal component regression is provided to inform this discussion. The findings demonstrate that the use of standard training sets leads to misassignment of the current-concentration relationships across the scan-potential window. This directly results in poor analyte resolution and, consequently, inaccurate quantitation, which may lead to erroneous conclusions being drawn from experimental data. Thus, it is strongly advocated that training sets be obtained under the experimental conditions to allow for accurate data analysis.
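
    A minimal numpy sketch of the PCR pipeline the paper analyzes, on synthetic voltammograms (all shapes, concentrations, and noise levels are made up):

      import numpy as np

      # Training scans at known concentrations; columns are currents across
      # the scan-potential window.
      rng = np.random.default_rng(3)
      potentials = np.linspace(-0.4, 1.3, 200)
      template = np.exp(-((potentials - 0.6) / 0.1) ** 2)   # stand-in analyte peak
      conc_train = np.array([0.25, 0.5, 1.0, 2.0, 4.0])     # uM
      X = np.outer(conc_train, template) + 0.02 * rng.standard_normal((5, 200))

      # PCA via SVD, keep k components, regress concentration on the scores.
      Xc = X - X.mean(axis=0)
      U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
      k = 2
      scores = Xc @ Vt[:k].T
      beta, *_ = np.linalg.lstsq(np.column_stack([scores, np.ones(5)]),
                                 conc_train, rcond=None)

      # Predict an "unknown" scan. Accuracy hinges on the training set
      # matching the measurement conditions, the failure mode documented above.
      unknown = 1.5 * template + 0.02 * rng.standard_normal(200)
      z = (unknown - X.mean(axis=0)) @ Vt[:k].T
      print(f"predicted concentration: {np.append(z, 1.0) @ beta:.2f} uM")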

  20. EXPERIMENTAL MODELLING OF AORTIC ANEURYSMS

    PubMed Central

    Doyle, Barry J; Corbett, Timothy J; Cloonan, Aidan J; O’Donnell, Michael R; Walsh, Michael T; Vorp, David A; McGloughlin, Timothy M

    2009-01-01

    A range of silicone rubbers were created based on existing commercially available materials. These silicones were designed to be visually different from one another and have distinct material properties, in particular, ultimate tensile strengths and tear strengths. In total, eleven silicone rubbers were manufactured, with the materials designed to have a range of increasing tensile strengths from approximately 2-4 MPa, and increasing tear strengths from approximately 0.45-0.7 N/mm. The variations in silicones were detected using a standard colour analysis technique. Calibration curves were then created relating colour intensity to individual material properties. All eleven materials were characterised and a first-order Ogden strain energy function applied. Material coefficients were determined and examined for effectiveness. Six idealised abdominal aortic aneurysm models were also created using the two base materials of the study, with a further model created using a new mixing technique to create a rubber model with randomly assigned material properties. These models were then examined using videoextensometry and compared to numerical results. Colour analysis revealed a statistically significant linear relationship (p<0.0009) with both tensile strength and tear strength, allowing material strength to be determined using a non-destructive experimental technique. The effectiveness of this technique was assessed by comparing predicted material properties to experimentally measured methods, with good agreement in the results. Videoextensometry and numerical modelling revealed minor percentage differences, with all results achieving significance (p<0.0009). This study has successfully designed and developed a range of silicone rubbers that have unique colour intensities and material strengths. Strengths can be readily determined using a non-destructive analysis technique with proven effectiveness. These silicones may further aid towards an improved understanding of the biomechanical behaviour of aneurysms using experimental techniques. PMID:19595622

  1. Weak value amplification considered harmful

    NASA Astrophysics Data System (ADS)

    Ferrie, Christopher; Combes, Joshua

    2014-03-01

    We show using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of parameter estimation and signal detection. We show that using all data and considering the joint distribution of all measurement outcomes yields the optimal estimator. Moreover, we show that estimation using the maximum likelihood technique with weak values as small as possible produces better performance for quantum metrology. In doing so, we identify the optimal experimental arrangement to be the one which reveals the maximal eigenvalue of the square of system observables. We also show that these conclusions do not change in the presence of technical noise.

  2. Microspoiler Actuation for Guided Projectiles

    DTIC Science & Technology

    2016-01-06

    and be hardened to gun-launch. Several alternative designs will be explored using various actuation techniques, and downselection to an optimal design...aerodynamic optimization of the microspoiler mechanism, mechanical design/gun hardening, and parameter estimation from experimental data. These...performed using the aerodynamic parameters in Table 2. Projectile trajectories were simulated without gravity at zero gun elevation. The standard 30mm

  3. The Effects of Equal Status Cross-Sex Contact on Students' Sex Stereotyped Attitudes and Behavior.

    ERIC Educational Resources Information Center

    Lockheed, Marlaine E.; Harris, Abigail M.

    Standard least squares regression techniques are used to estimate the effects of non-sex-role stereotypes, equal-status cross-sex interaction and female leadership on changes in children's sex stereotyped attitudes. Included are a pretest, experimental treatment, and post-test. Teachers of approximately 400 fourth and fifth grade children received…

  4. Baseline measurement of the noise generated by a short-to-medium range jet transport flying standard ILS approaches and level flyovers

    NASA Technical Reports Server (NTRS)

    Hastings, E. C., Jr.; Shanks, R. E.; Mueller, A. W.

    1975-01-01

    The results of baseline noise flight tests are presented. Data are given for a point 1.85 kilometers (1.0 nautical mile) from the runway threshold, and experimental results of level flyover noise at altitudes of 122 meters (400 feet) and 610 meters (2,000 feet) are also shown for several different power levels. The experimental data are compared with data from other sources and reasonable agreement is noted. A description of the test technique, instrumentation, and data analysis methods is included.

  5. Evaluation of IKTS Transparent Polycrystalline Magnesium Aluminate Spinel (MgAl2O4) for Armor and Infrared Dome/Window Applications

    DTIC Science & Technology

    2013-03-01

    Two MgAl2O4 spinel samples with nominal 0.6- and 1.6-μm mean grain sizes were tested using advanced...unable to make specific quantitative predictions at this time. Due to the nature of the experimental process, this technique is suitable only for...

  6. Experimental liver fibrosis research: update on animal models, legal issues and translational aspects

    PubMed Central

    2013-01-01

    Liver fibrosis is defined as excessive extracellular matrix deposition and is based on complex interactions between matrix-producing hepatic stellate cells and an abundance of liver-resident and infiltrating cells. Investigation of these processes requires in vitro and in vivo experimental work in animals. However, the use of animals in translational research will be increasingly challenged, at least in countries of the European Union, because of the adoption of new animal welfare rules in 2013. These rules will create an urgent need for optimized standard operating procedures regarding animal experimentation and improved international communication in the liver fibrosis community. This review gives an update on current animal models, techniques and underlying pathomechanisms with the aim of fostering a critical discussion of the limitations and potential of up-to-date animal experimentation. We discuss potential complications in experimental liver fibrosis and provide examples of how the findings of studies in which these models are used can be translated to human disease and therapy. In this review, we want to motivate the international community to design more standardized animal models which might help to address the legally requested replacement, refinement and reduction of animals in fibrosis research. PMID:24274743

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perl, M.L.

    This paper is based upon lectures in which I have described and explored the ways in which experimenters can try to find answers, or at least clues toward answers, to some of the fundamental questions of elementary particle physics. All of these experimental techniques and directions have been discussed fully in other papers, for example: searches for heavy charged leptons, tests of quantum chromodynamics, searches for Higgs particles, searches for particles predicted by supersymmetric theories, searches for particles predicted by technicolor theories, searches for proton decay, searches for neutrino oscillations, monopole searches, studies of low transfer momentum hadron physics at very high energies, and elementary particle studies using cosmic rays. Each of these subjects requires several lectures by itself to do justice to the large amount of experimental work and theoretical thought which has been devoted to these subjects. My approach in these tutorial lectures is to describe general ways to experiment beyond the standard model. I will use some of the topics listed to illustrate these general ways. Also, in these lectures I present some dreams and challenges about new techniques in experimental particle physics and accelerator technology; I call these Experimental Needs. 92 references.

  8. Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick; Klein, Vladislav

    2011-01-01

    Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components for this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both time and frequency domains. Steps in identification including model structure determination, parameter estimation, and model validation, are addressed in this paper with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results and model adequacy is inferred by corroborating results. An extension is offered to this conventional approach where more general model parameter estimates and their standard errors are compared.
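
    The final point, parameter estimates reported with standard errors, can be sketched with ordinary least squares on synthetic pitching-moment data (the model structure and numbers are illustrative only):

      import numpy as np

      rng = np.random.default_rng(4)
      alpha = np.linspace(-10, 10, 40) * np.pi / 180      # angle of attack, rad
      q = rng.uniform(-0.2, 0.2, alpha.size)              # pitch rate, rad/s
      cm = -0.8 * alpha - 12.0 * q + 0.02 * rng.standard_normal(alpha.size)

      X = np.column_stack([alpha, q, np.ones_like(alpha)])
      theta, *_ = np.linalg.lstsq(X, cm, rcond=None)
      dof = X.shape[0] - X.shape[1]
      s2 = np.sum((cm - X @ theta) ** 2) / dof            # residual variance
      cov = s2 * np.linalg.inv(X.T @ X)                   # parameter covariance
      for name, est, se in zip(["C_m_alpha", "C_m_q", "bias"],
                               theta, np.sqrt(np.diag(cov))):
          print(f"{name:10s} = {est:8.3f} +/- {se:.3f}")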

  9. Precision phase estimation based on weak-value amplification

    NASA Astrophysics Data System (ADS)

    Qiu, Xiaodong; Xie, Linguo; Liu, Xiong; Luo, Lan; Li, Zhaoxue; Zhang, Zhiyou; Du, Jinglei

    2017-02-01

    In this letter, we propose a precision method for phase estimation based on the weak-value amplification (WVA) technique using a monochromatic light source. The anomalous WVA significantly suppresses the technical noise with respect to the intensity difference signal induced by the phase delay when the post-selection procedure comes into play. The phase measurement precision of this method is proportional to the weak value of a polarization operator in the experimental range. Our results compare well with wide-spectrum light phase weak measurements and outperform the standard homodyne phase-detection technique.

  10. MEMS scanning micromirror for optical coherence tomography.

    PubMed

    Strathman, Matthew; Liu, Yunbo; Keeler, Ethan G; Song, Mingli; Baran, Utku; Xi, Jiefeng; Sun, Ming-Ting; Wang, Ruikang; Li, Xingde; Lin, Lih Y

    2015-01-01

    This paper describes an endoscopic-inspired imaging system employing a micro-electromechanical system (MEMS) micromirror scanner to achieve beam scanning for optical coherence tomography (OCT) imaging. Miniaturization of a scanning mirror using MEMS technology can allow a fully functional imaging probe to be contained in a package sufficiently small for utilization in a working channel of a standard gastroesophageal endoscope. This work employs advanced image processing techniques to enhance the images acquired using the MEMS scanner to correct non-idealities in mirror performance. The experimental results demonstrate the effectiveness of the proposed technique.

  12. What We Have Learned About the Existing Trace Element Partitioning data During the Population Phase of traceDs

    NASA Astrophysics Data System (ADS)

    Nielsen, R. L.; Ghiorso, M. S.; Trischman, T.

    2015-12-01

    The database traceDs is designed to provide a transparent and accessible resource of experimental partitioning data. It now includes ~90% of all the experimental trace element partitioning data (~4000 experiments) produced over the past 45 years, and is accessible through a web based interface (using the portal lepr.ofm-research.org). We set a minimum standard for inclusion, with the threshold criteria being the inclusion of: (1) experimental conditions (temperature, pressure, device, container, time, etc.); (2) major element composition of the phases; and (3) trace element analyses of the phases. Data sources that did not report these minimum components were not included. The rationale for not including such data is that the degree of equilibration is unknown and, more important, no rigorous approach to modeling the behavior of trace elements is possible without knowledge of the composition of the phases and the temperature and pressure of formation/equilibration. The data are stored using a schema derived from that of the Library of Experimental Phase Relations (LEPR), modified to account for additional metadata, and restructured to permit multiple analytical entries for various element/technique/standard combinations. In the process of populating the database, we have learned a number of things about the existing published experimental partitioning data. Most important are: (1) ~20% of the papers do not satisfy one or more of the threshold criteria; (2) the standard format for presenting data is the average, a convention developed when space constraints limited publication, despite the fact that all the information can now be published as electronic supplements; and (3) the uncertainties published with the compositional data are often not adequately explained (e.g., 1 or 2 sigma, standard deviation of the average, etc.). We propose a new set of publication standards for experimental data that include the minimum criteria described above, the publication of all analyses with errors based on peak count rates and background, plus information on the structural state of the mineral (e.g., orthopyroxene vs. pigeonite).

  14. Quantum simulation of a quantum stochastic walk

    NASA Astrophysics Data System (ADS)

    Govia, Luke C. G.; Taketani, Bruno G.; Schuhmacher, Peter K.; Wilhelm, Frank K.

    2017-03-01

    The study of quantum walks has been shown to have a wide range of applications in areas such as artificial intelligence, the study of biological processes, and quantum transport. The quantum stochastic walk (QSW), which allows for incoherent movement of the walker, and therefore, directionality, is a generalization on the fully coherent quantum walk. While a QSW can always be described in Lindblad formalism, this does not mean that it can be microscopically derived in the standard weak-coupling limit under the Born-Markov approximation. This restricts the class of QSWs that can be experimentally realized in a simple manner. To circumvent this restriction, we introduce a technique to simulate open system evolution on a fully coherent quantum computer, using a quantum trajectories style approach. We apply this technique to a broad class of QSWs, and show that they can be simulated with minimal experimental resources. Our work opens the path towards the experimental realization of QSWs on large graphs with existing quantum technologies.

  15. A fractional Fourier transform analysis of a bubble excited by an ultrasonic chirp.

    PubMed

    Barlow, Euan; Mulholland, Anthony J

    2011-11-01

    The fractional Fourier transform is proposed here as a model-based signal-processing technique for determining the size of a bubble in a fluid. The bubble is insonified with an ultrasonic chirp and the radiated pressure field is recorded. This experimental bubble response is then compared with a series of theoretical model responses to identify the most accurate match between experiment and theory, which allows the correct bubble size to be identified. The fractional Fourier transform is used to produce a more detailed description of each response, and two-dimensional cross correlation is then employed to identify the similarities between the experimental response and each theoretical response. In this paper the experimental bubble response is simulated by adding various levels of noise to the theoretical model output. The method is compared to the standard technique of using time-domain cross correlation. The proposed method is shown to be far more robust at correctly sizing the bubble and can cope with much lower signal to noise ratios.
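
    The matching step can be sketched as template matching by normalized two-dimensional cross correlation; the bank of theoretical responses and the fractional-Fourier-domain transform that produces the 2-D arrays are assumed to come from elsewhere:

      import numpy as np
      from scipy.signal import correlate2d

      def best_match(experimental, theoretical_bank):
          """Return the bubble radius whose theoretical response correlates
          best with the experimental one. Inputs are 2-D arrays, e.g.
          fractional-Fourier-domain representations computed beforehand.
          """
          def norm(a):
              a = a - a.mean()
              return a / np.linalg.norm(a)
          e = norm(experimental)
          scores = {radius: correlate2d(e, norm(th), mode="same").max()
                    for radius, th in theoretical_bank.items()}
          return max(scores, key=scores.get)

      # Hypothetical usage, with radii in metres:
      # bank = {50e-6: resp50, 75e-6: resp75, 100e-6: resp100}
      # print(best_match(measured_response, bank))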

  16. Effectiveness of project ACORDE materials: applied evaluative research in a preclinical technique course.

    PubMed

    Shugars, D A; Trent, P J; Heymann, H O

    1979-08-01

    Two instructional strategies, the traditional lecture method and a standardized self-instructional (ACORDE) format, were compared for efficiency and perceived usefulness in a preclinical restorative dentistry technique course through the use of a posttest-only control group research design. Control and experimental groups were compared on (a) technique grades, (b) didactic grades, (c) amount of time spent, (d) student and faculty perceptions, and (e) observation of social dynamics. The results of this study demonstrated the effectiveness of Project ACORDE materials in teaching dental students, provided an example of applied research designed to test contemplated instructional innovations prior to use and used a method which highlighted qualitative, as well as quantitative, techniques for data gathering in applied research.

  17. Double 90 Degrees Counterrotated End-to-End-Anastomosis: An Experimental Study of an Intestinal Anastomosis Technique.

    PubMed

    Holzner, Philipp; Kulemann, Birte; Seifert, Gabriel; Glatz, Torben; Chikhladze, Sophia; Höppner, Jens; Hopt, Ulrich; Timme, Sylvia; Bronsert, Peter; Sick, Olivia; Zhou, Cheng; Marjanovic, Goran

    2015-06-01

    The aim of this article is to compare a new anastomotic technique with standardized intestinal anastomotic procedures. A total of 32 male Wistar rats were randomized to three groups. In the Experimental Group (n = 10), the new double 90 degrees inversely rotated anastomosis was used; in the End Group (n = 10), a single-layer end-to-end anastomosis; and in the Side Group (n = 12), a single-layer side-to-side anastomosis. All anastomoses were done using interrupted sutures. On postoperative day 4, rats were relaparotomized. Bursting pressure, hydroxyproline concentration, a semiquantitative adhesion score, and two histological anastomotic healing scores (mucosal healing according to Chiu and overall anastomotic healing according to Verhofstad) were collected. Most data are presented as median (range); p < 0.05 was considered significant. Anastomotic insufficiency occurred in only one rat, in the Side Group. Median bursting pressure in the Experimental Group was 105 mm Hg (range = 72-161 mm Hg); it was significantly higher in the End Group (164 mm Hg; range = 99-210 mm Hg; p = 0.021) and lower in the Side Group by trend (81 mm Hg; range = 59-122 mm Hg; p = 0.093). Hydroxyproline concentration did not differ significantly between the groups. The adhesion score was 2.5 (range = 1-3) in the Experimental Group and 2 (range = 1-2) in the End Group, but there were significantly more adhesions in the Side Group (range = 3-4; p = 0.020 versus Experimental Group, p < 0.001 versus End Group). The Chiu score showed the worst mucosal healing in the Experimental Group. The overall Verhofstad score was significantly worse in the Experimental Group (mean = 2.032; standard deviation [SD] = 0.842) than in the Side Group (mean = 1.729; SD = 0.682; p = 0.031) and the End Group (mean = 1.571; SD = 0.612; p = 0.002). The new anastomotic technique is feasible and did not show any relevant complication. Even though it was superior to the side-to-side anastomosis by trend with respect to functional stability, mucosal healing surprisingly showed the worst results. Classical end-to-end anastomosis still seems to be the best choice regarding structural and functional anastomotic stability. Georg Thieme Verlag KG Stuttgart · New York.

  18. Improving estimates of streamflow characteristics by using Landsat-1 imagery

    USGS Publications Warehouse

    Hollyday, Este F.

    1976-01-01

    Imagery from the first Earth Resources Technology Satellite (renamed Landsat-1) was used to discriminate physical features of drainage basins in an effort to improve equations used to estimate streamflow characteristics at gaged and ungaged sites. Records of 20 gaged basins in the Delmarva Peninsula of Maryland, Delaware, and Virginia were analyzed for 40 statistical streamflow characteristics. Equations relating these characteristics to basin characteristics were obtained by a technique of multiple linear regression. A control group of equations contains basin characteristics derived from maps. An experimental group of equations contains basin characteristics derived from maps and imagery. Characteristics from imagery were forest, riparian (streambank) vegetation, water, and combined agricultural and urban land use. These basin characteristics were isolated photographically by techniques of film-density discrimination. The area of each characteristic in each basin was measured photometrically. Comparison of equations in the control group with corresponding equations in the experimental group reveals that for 12 out of 40 equations the standard error of estimate was reduced by more than 10 percent. As an example, the standard error of estimate of the equation for the 5-year recurrence-interval flood peak was reduced from 46 to 32 percent. Similarly, the standard error of the equation for the mean monthly flow for September was reduced from 32 to 24 percent, the standard error for the 7-day, 2-year recurrence low flow was reduced from 136 to 102 percent, and the standard error for the 3-day, 2-year flood volume was reduced from 30 to 12 percent. It is concluded that data from Landsat imagery can substantially improve the accuracy of estimates of some streamflow characteristics at sites in the Delmarva Peninsula.
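
    A hedged sketch of the underlying regression comparison: fit a streamflow characteristic to basin characteristics by ordinary least squares and compare the standard error of estimate with and without an imagery-derived predictor. The data below are synthetic stand-ins, not the Delmarva records.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 20                                  # gaged basins (synthetic)
        area = rng.uniform(10, 200, n)          # map-derived characteristic
        slope = rng.uniform(1, 10, n)           # map-derived characteristic
        forest = rng.uniform(0, 0.8, n)         # imagery-derived characteristic
        peak = 3.0 * area + 5.0 * slope - 40.0 * forest + rng.normal(0, 10, n)

        def std_error_of_estimate(X, y):
            X1 = np.column_stack([np.ones(len(y)), X])       # add intercept
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            resid = y - X1 @ beta
            return np.sqrt(resid @ resid / (len(y) - X1.shape[1]))

        print("control (maps only):     %.1f"
              % std_error_of_estimate(np.column_stack([area, slope]), peak))
        print("experimental (+imagery): %.1f"
              % std_error_of_estimate(np.column_stack([area, slope, forest]), peak))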

  19. Kalman filter approach for uncertainty quantification in time-resolved laser-induced incandescence.

    PubMed

    Hadwin, Paul J; Sipkens, Timothy A; Thomson, Kevin A; Liu, Fengshan; Daun, Kyle J

    2018-03-01

    Time-resolved laser-induced incandescence (TiRe-LII) data can be used to infer spatially and temporally resolved volume fractions and primary particle size distributions of soot-laden aerosols, but these estimates are corrupted by measurement noise as well as uncertainties in the spectroscopic and heat transfer submodels used to interpret the data. Estimates of the temperature, concentration, and size distribution of soot primary particles within a sample aerosol are typically made by nonlinear regression of modeled spectral incandescence decay, or effective temperature decay, to experimental data. In this work, we employ nonstationary Bayesian estimation techniques to infer aerosol properties from simulated and experimental LII signals, specifically the extended Kalman filter and Schmidt-Kalman filter. These techniques exploit the time-varying nature of both the measurements and the models, and they reveal how uncertainty in the estimates computed from TiRe-LII data evolves over time. Both techniques perform better when compared with standard deterministic estimates; however, we demonstrate that the Schmidt-Kalman filter produces more realistic uncertainty estimates.
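
    As a concrete illustration of the filtering machinery (not the authors' spectroscopic and heat-transfer submodels), the sketch below runs an extended Kalman filter that jointly tracks a decaying particle temperature and an unknown cooling-rate parameter from noisy temperature readings, using the toy model dT/dt = -a(T - Tg); all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        dt, steps, Tg = 2e-9, 300, 1800.0       # step (s), gas temperature (K)
        a_true, T0 = 2.0e7, 3400.0              # cooling rate (1/s), peak temperature (K)

        T, ys = T0, []                          # simulate noisy "measurements"
        for _ in range(steps):
            T += dt * (-a_true * (T - Tg))
            ys.append(T + rng.normal(0.0, 25.0))

        x = np.array([3000.0, 1.0e7])           # state [T, a]; poor initial guess
        P = np.diag([200.0**2, 5.0e6**2])
        Q = np.diag([1.0, 1.0e8])               # process noise (tuning choice)
        R = 25.0**2                             # measurement noise variance
        for y in ys:
            F = np.array([[1.0 - dt * x[1], -dt * (x[0] - Tg)],   # Jacobian of f
                          [0.0, 1.0]])
            x = np.array([x[0] - dt * x[1] * (x[0] - Tg), x[1]])  # predict
            P = F @ P @ F.T + Q
            S = P[0, 0] + R                     # innovation variance, H = [1, 0]
            K = P[:, 0] / S                     # Kalman gain
            x = x + K * (y - x[0])              # update
            P = P - np.outer(K, P[0, :])

        print("estimated cooling rate: %.2e 1/s (true %.2e)" % (x[1], a_true))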

  20. Experimental measurements with Monte Carlo corrections and theoretical calculations of neutron inelastic scattering cross section of 115In

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Xiao, Jun; Luo, Xiaobing

    2016-10-01

    The neutron inelastic scattering cross section of 115In has been measured by the activation technique at neutron energies of 2.95, 3.94, and 5.24 MeV, with the neutron capture cross section of 197Au as an internal standard. The effects of multiple scattering and flux attenuation were corrected using the Monte Carlo code GEANT4. Based on the experimental values, the 115In neutron inelastic scattering cross sections were theoretically calculated between 1 and 15 MeV with the TALYS code; the theoretical results of this study are in reasonable agreement with the available experimental results.
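
    The internal-standard arithmetic can be sketched in a few lines: the unknown cross section follows from the ratio of induced activities per target atom, scaled by the reference cross section and a correction factor of the kind GEANT4 provides. Every number below is a placeholder, not a value from the paper.

        sigma_ref = 0.025            # 197Au reference cross section, barns (placeholder)
        A_in, A_au = 5.2e3, 1.1e3    # measured saturation activities, Bq (placeholders)
        N_in, N_au = 2.1e21, 3.0e20  # numbers of 115In and 197Au target atoms (placeholders)
        k_mc = 1.04                  # multiple-scattering/flux-attenuation correction (placeholder)

        sigma_in = sigma_ref * (A_in / N_in) / (A_au / N_au) * k_mc
        print("inferred 115In(n,n') cross section: %.3f b" % sigma_in)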

  1. Leaving No Stone Unturned in the Pursuit of New Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, Timothy

    The major goal of this project was to investigate a variety of topics in theoretical particle physics, with an emphasis on beyond the Standard Model phenomena. A particular emphasis is placed on making a connection to ongoing experimental efforts designed to extend our knowledge of the fundamental physics frontiers. The principal investigator aimed to play a leading role in theoretical research that complements this impressive experimental endeavor. Progress requires a strong synergy between the theoretical and experimental communities to design and interpret the data that is produced. Thus, this project's main goal was to improve our understanding of models, signatures, and techniques as we continue the hunt for new physics.

  2. Overview: Homogeneous nucleation from the vapor phase-The experimental science.

    PubMed

    Wyslouzil, Barbara E; Wölk, Judith

    2016-12-07

    Homogeneous nucleation from the vapor phase has been a well-defined area of research for ∼120 yr. In this paper, we present an overview of the key experimental and theoretical developments that have made it possible to address some of the fundamental questions first delineated and investigated in C. T. R. Wilson's pioneering paper of 1897 [C. T. R. Wilson, Philos. Trans. R. Soc., A 189, 265-307 (1897)]. We review the principles behind the standard experimental techniques currently used to measure isothermal nucleation rates, and discuss the molecular-level information that can be extracted from these measurements. We then highlight recent approaches that interrogate the vapor and intermediate clusters leading to particle formation more directly.

  3. Density-cluster NMA: A new protein decomposition technique for coarse-grained normal mode analysis.

    PubMed

    Demerdash, Omar N A; Mitchell, Julie C

    2012-07-01

    Normal mode analysis has emerged as a useful technique for investigating protein motions on long time scales. This is largely due to the advent of coarse-graining techniques, particularly Hooke's Law-based potentials and the rotational-translational blocking (RTB) method for reducing the size of the force-constant matrix, the Hessian. Here we present a new method for domain decomposition for use in RTB that is based on hierarchical clustering of atomic density gradients, which we call Density-Cluster RTB (DCRTB). The method reduces the number of degrees of freedom by 85-90% compared with the standard blocking approaches. We compared the normal modes from DCRTB against standard RTB using 1-4 residues in sequence in a single block, with good agreement between the two methods. We also show that Density-Cluster RTB and standard RTB perform well in capturing the experimentally determined direction of conformational change. Significantly, we report superior correlation of DCRTB with B-factors compared with 1-4 residue per block RTB. Finally, we show significant reduction in computational cost for Density-Cluster RTB that is nearly 100-fold for many examples. Copyright © 2012 Wiley Periodicals, Inc.
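
    The block-decomposition step can be sketched with off-the-shelf hierarchical clustering; here plain atomic coordinates stand in for the density gradients that DCRTB actually clusters, and the atom cloud is synthetic. Counting 6 rigid-body degrees of freedom per block against 3 per atom reproduces the kind of reduction reported above.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        rng = np.random.default_rng(4)
        n_atoms = 1200
        coords = 10.0 * rng.standard_normal((n_atoms, 3))   # fake atom positions

        Z = linkage(coords, method="ward")                  # hierarchical clustering
        labels = fcluster(Z, t=40, criterion="maxclust")    # cut tree into 40 blocks

        n_blocks = labels.max()
        full_dof, rtb_dof = 3 * n_atoms, 6 * n_blocks
        print("blocks: %d, DOF %d -> %d (%.0f%% reduction)"
              % (n_blocks, full_dof, rtb_dof, 100.0 * (1 - rtb_dof / full_dof)))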

  4. Micro- and nano-scale optical devices for high density photonic integrated circuits at near-infrared wavelengths

    NASA Astrophysics Data System (ADS)

    Chatterjee, Rohit

    In this research work, we explore fundamental silicon-based active and passive photonic devices that can be integrated together to form functional photonic integrated circuits. The devices, which include power splitters, switches, and lenses, are studied starting from their physics, through their design and fabrication techniques, and finally from an experimental standpoint. The experimental results reveal high-performance devices that are compatible with standard CMOS fabrication processes and can be easily integrated with other devices for near-infrared telecom applications. In Chapter 2, a novel method for optical switching using a nanomechanical proximity perturbation technique is described and demonstrated. The method, which is experimentally demonstrated, employs relatively low powers and a small chip footprint, and is compatible with standard CMOS fabrication processes. Further, in Chapter 3, this method is applied to develop a hitless bypass switch aimed at solving an important issue in current wavelength division multiplexing systems, namely hitless switching of reconfigurable optical add-drop multiplexers. Experimental results are presented to demonstrate the application of the nanomechanical proximity perturbation technique to practical situations. In Chapter 4, a fundamental photonic component, namely the power splitter, is described. Power splitters are important components for any photonic integrated circuit because they split the power from a single light source to multiple devices on the same chip so that different operations can be performed simultaneously. The power splitters demonstrated in this chapter are based on multimode interference principles, resulting in highly compact, low-loss, and highly uniform splitting of the light from a single channel to two and four channels. These devices can further be scaled to achieve higher-order splitting, such as 1x16 and 1x32 power splits. Finally, in Chapter 5, we overcome challenges in device fabrication and measurement techniques to demonstrate for the first time a "superlens" for the technologically important near-infrared wavelength range, with the opportunity to scale down further to visible wavelengths. The observed resolution is 0.47λ, clearly smaller than the diffraction limit of 0.61λ, and is supported by detailed theoretical analyses and comprehensive numerical simulations. Importantly, we clearly show for the first time that this subdiffraction-limit imaging is due to the resonant excitation of surface slab modes, permitting amplification of evanescent waves. The demonstrated "superlens" has the largest figure of merit reported to date, both theoretically and experimentally. The techniques and devices described in this thesis can be further applied to develop new devices with different functionalities. In Chapter 6 we describe two examples using these ideas. First, we experimentally demonstrate the use of the nanomechanical proximity perturbation technique to develop a phase retarder for on-chip all-state polarization control. Next, we use the negative-refraction photonic crystals described in Chapter 5 to achieve a special kind of bandgap, called the zero-n̄ bandgap, which has unique properties.

  5. Some computational techniques for estimating human operator describing functions

    NASA Technical Reports Server (NTRS)

    Levison, W. H.

    1986-01-01

    Computational procedures for improving the reliability of human operator describing functions are described. Special attention is given to the estimation of standard errors associated with mean operator gain and phase shift as computed from an ensemble of experimental trials. This analysis pertains to experiments using sum-of-sines forcing functions. Both open-loop and closed-loop measurement environments are considered.
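
    A minimal sketch of the ensemble statistics described above, with synthetic signals: estimate the operator's gain and phase at one sum-of-sines frequency per trial by Fourier projection, then report the mean and standard error across trials. The frequency, gain, and noise level are arbitrary.

        import numpy as np

        rng = np.random.default_rng(5)
        fs, T, f0 = 100.0, 20.0, 0.5             # sample rate, record length, probe freq
        t = np.arange(0.0, T, 1.0 / fs)
        true_gain, true_phase = 2.0, -0.6        # radians

        gains, phases = [], []
        for _ in range(8):                       # eight experimental trials
            u = np.sin(2 * np.pi * f0 * t)
            y = true_gain * np.sin(2 * np.pi * f0 * t + true_phase) \
                + 0.3 * rng.standard_normal(t.size)   # remnant modeled as noise
            e = np.exp(-2j * np.pi * f0 * t)          # Fourier projection at f0
            Hf = (y @ e) / (u @ e)                    # complex operator response
            gains.append(abs(Hf))
            phases.append(np.angle(Hf))

        for name, v in (("gain", gains), ("phase", phases)):
            v = np.asarray(v)
            print("%s: mean %.3f, std error %.3f"
                  % (name, v.mean(), v.std(ddof=1) / np.sqrt(v.size)))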

  6. About a method for compressing x-ray computed microtomography data

    NASA Astrophysics Data System (ADS)

    Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš

    2018-04-01

    The management of scientific data is of high importance, especially for experimental techniques that produce large data volumes. One such technique is x-ray computed tomography (CT), and its community has introduced advanced data formats which allow for better management of experimental data. Rather than the organization of the data and the associated metadata, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy), using images acquired from various types of samples. This study covers parallel-beam geometry, but it could be easily extended to a cone-beam one. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and does so by applying it to experimental data. Beyond the methodology framework, this study presents and examines the use of JPEG-XR in combination with HDF5 and TIFF formats, providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears to be a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.
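
    The evaluation loop can be sketched as follows, with bit-depth re-quantization standing in for the JPEG-XR codec (whose Python bindings vary between installations) so that the snippet needs only numpy; the fidelity metric here is PSNR on a synthetic 16-bit projection.

        import numpy as np

        rng = np.random.default_rng(6)
        proj = (3.0e4 * rng.random((512, 512))).astype(np.uint16)  # fake projection

        def psnr(ref, test, peak=65535.0):
            mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
            return 10.0 * np.log10(peak**2 / mse)

        for keep_bits in (14, 12, 10, 8):
            shift = 16 - keep_bits
            lossy = ((proj >> shift) << shift).astype(np.uint16)   # drop low bits
            print("keep %2d bits: PSNR %5.1f dB, raw-size ratio %.2f"
                  % (keep_bits, psnr(proj, lossy), keep_bits / 16))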

  7. The effects of using diagramming as a representational technique on high school students' achievement in solving math word problems

    NASA Astrophysics Data System (ADS)

    Banerjee, Banmali

    Methods and procedures for successfully solving math word problems have been, and continue to be, a mystery to many U.S. high school students. Previous studies suggest that the contextual and mathematical understanding of a word problem, along with the development of schemas and their related external representations, positively contribute to students' accomplishments when solving word problems. Some studies have examined the effects of diagramming on students' abilities to solve word problems that only involved basic arithmetic operations. Other studies have investigated how instructional models that used technology influenced students' problem-solving achievements. Still other studies have used schema-based instruction involving students with learning disabilities. No study has evaluated regular high school students' achievements in solving standard math word problems using a diagramming technique without technological aid. This study evaluated students' achievement in solving math word problems using a diagramming technique. Using a quasi-experimental pretest-posttest research design, quantitative data were collected from 172 grade 11 Hispanic English language learners (ELLs) and African American learners whose first language is English (EFLLs) in 18 classes at an inner-city high school in Northern New Jersey. There were 88 control and 84 experimental students. The pretest and posttest of each participating student and samples of the experimental students' class assignments provided the qualitative data for the study. The data from this study showed that the diagramming method of solving math word problems significantly improved student achievement in the experimental group (p<.01) compared to the control group. The study demonstrated that urban high school ELLs benefited from instruction that placed emphasis on the mathematical vocabulary and symbols used in word problems, and that both ELLs and EFLLs improved their problem-solving success through careful attention to the creation and labeling of diagrams to represent the mathematics involved in standard word problems. Although Learnertype (ELL, EFLL), Classtype (Bilingual and Mixed), and Gender (Female, Male) were not significant indicators of student achievement, there was significant interaction between Treatment and Classtype at the level of the Bilingual students (p<.01) and between Treatment and Learnertype at the level of the ELLs (p<.01).

  8. Hollow Core Bragg Waveguide Design and Fabrication for Enhanced Raman Spectroscopy

    NASA Astrophysics Data System (ADS)

    Ramanan, Janahan

    Raman spectroscopy is a widely used technique to unambiguously ascertain the chemical composition of a sample. The caveat with this technique is its extremely weak optical cross-section, making it difficult to measure Raman signal with standard optical setups. In this thesis, a novel hollow core Bragg Reflection Waveguide was designed to simultaneously increase the generation and collection of Raman scattered photons. A robust fabrication process of this waveguide was developed employing flip-chip bonding methods to securely seal the hollow core channel. The waveguide air-core propagation loss was experimentally measured to be 0.17 dB/cm, and the Raman sensitivity limit was measured to be 3 mmol/L for glycerol solution. The waveguide was also shown to enhance Raman modes of standard household aerosols that could not be seen with other devices.

  9. Accuracy of Noninvasive Estimation Techniques for the State of the Cochlear Amplifier

    NASA Astrophysics Data System (ADS)

    Dalhoff, Ernst; Gummer, Anthony W.

    2011-11-01

    Estimation of the function of the cochlea in humans is possible only by deduction from indirect measurements, which may be subjective or objective. Therefore, for basic research as well as diagnostic purposes, it is important to develop methods to deduce and analyse the error sources of cochlear-state estimation techniques. Here, we present a model of technical and physiologic error sources contributing to the estimation accuracy of hearing threshold and the state of the cochlear amplifier, and deduce from measurements in humans that the estimated standard deviation can be considerably below 6 dB. Experimental evidence is drawn from two partly independent objective estimation techniques for the auditory signal chain based on measurements of otoacoustic emissions.

  10. Crystal Growth of ZnSe and Related Ternary Compound Semiconductors by Vapor Transport in Low Gravity

    NASA Technical Reports Server (NTRS)

    Su, Ching-Hua; Ramachandran, N.

    2013-01-01

    Crystals of ZnSe and related ternary compounds, such as ZnSeS and ZnSeTe, will be grown by physical vapor transport in the Materials Science Research Rack (MSRR) on the International Space Station (ISS). The objective of the project is to determine the relative contributions of gravity-driven fluid flows to the compositional distribution, incorporation of impurities and defects, and deviation from stoichiometry observed in crystals grown by vapor transport, as a result of buoyancy-driven convection and growth-interface fluctuations caused by irregular fluid flows on Earth. The investigation consists of extensive ground-based experimental and theoretical research efforts and concurrent flight experimentation. The objectives of the ground-based studies are to (1) obtain the experimental data and conduct the analyses required to define the optimum growth parameters for the flight experiments, (2) perfect various characterization techniques to establish the standard procedure for material characterization, (3) quantitatively establish the characteristics of the crystals grown on Earth as a basis for subsequent comparative evaluations of the crystals grown in a low-gravity environment, and (4) develop the theoretical and analytical methods required for such evaluations. ZnSe and related ternary compounds have been grown by the vapor transport technique with real-time, in-situ, non-invasive monitoring techniques. The grown crystals have been characterized extensively by various techniques to correlate the grown crystal properties with the growth conditions.

  11. Techniques for video compression

    NASA Technical Reports Server (NTRS)

    Wu, Chwan-Hwa

    1995-01-01

    In this report, we present our study on the multiprocessor implementation of an MPEG2 encoding algorithm. First, we compare two approaches to implementing video standards, VLSI technology and multiprocessor processing, in terms of design complexity, applications, and cost. Then we evaluate the functional modules of the MPEG2 encoding process in terms of their computation time. Two crucial modules are identified based on this evaluation. We then present our experimental study on the multiprocessor implementation of these two crucial modules. Data partitioning is used for job assignment. Experimental results show that a high speedup ratio and good scalability can be achieved by using this kind of job assignment strategy.
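
    The data-partitioning strategy can be sketched with Python's multiprocessing module: split a frame into horizontal strips, compute a motion-estimation-style sum-of-absolute-differences cost per strip in parallel, and compare wall time against the serial run. This is a generic illustration, not the report's MPEG2 encoder.

        import time
        import numpy as np
        from multiprocessing import Pool

        H, W = 1024, 1024
        cur = np.random.default_rng(7).integers(0, 256, (H, W), dtype=np.int16)
        ref = np.roll(cur, 3, axis=1)              # "previous frame"

        def strip_sad(rows):
            r0, r1 = rows
            costs = []
            for dx in range(-8, 9):                # candidate displacements
                diff = cur[r0:r1] - np.roll(ref[r0:r1], dx, axis=1)
                costs.append(int(np.abs(diff).sum()))
            return min(costs)

        if __name__ == "__main__":
            strips = [(i, i + H // 8) for i in range(0, H, H // 8)]
            t0 = time.perf_counter()
            serial = [strip_sad(s) for s in strips]
            t1 = time.perf_counter()
            with Pool(4) as pool:                  # data-partitioned job assignment
                parallel = pool.map(strip_sad, strips)
            t2 = time.perf_counter()
            assert serial == parallel
            print("serial %.2f s, 4 workers %.2f s, speedup %.1fx"
                  % (t1 - t0, t2 - t1, (t1 - t0) / (t2 - t1)))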

  12. Fiber-optic coupling based on nonimaging expanded-beam optics.

    PubMed

    Moslehi, B; Ng, J; Kasimoff, I; Jannson, T

    1989-12-01

    We have fabricated and experimentally tested low-cost and mass-producible multimode fiber-optic couplers and connectors based on nonimaging beam-expanding optics and Liouville's theorem. Analysis indicates that a pair coupling loss of -0.25 dB can be achieved. Experimentally, we measured insertion losses as low as -0.38 dB. The beam expanders can be mass produced owing to the use of plastic injection-molding fabrication techniques and packaged in standard connector housings. This design is compatible with the fiber geometry and can yield highly stable coupling owing to its high tolerance for misalignments.

  13. Effects of methoxy and formyl substituents on the energetics and reactivity of α-naphthalenes: a calorimetric and computational study.

    PubMed

    Silva, Ana L R; Freitas, Vera L S; Ribeiro da Silva, Maria D M C

    2014-07-01

    A combined experimental and computational study was developed to evaluate and understand the energetics and reactivity of formyl and methoxy α-naphthalene derivatives. Static bomb combustion calorimetry and Calvet microcalorimetry were the experimental techniques used to determine the standard (p° = 0.1 MPa) molar enthalpies of formation in the liquid phase, ΔfHm°(l), and of vaporization, Δl(g)Hm°, at T = 298.15 K, respectively, of the two liquid naphthalene derivatives. Those experimental values were used to derive the experimental standard molar enthalpies of formation in the gaseous phase, ΔfHm°(g), of 1-methoxynaphthalene, (-3.0 ± 3.1) kJ mol(-1), and of 1-formylnaphthalene, (36.3 ± 4.1) kJ mol(-1). High-level quantum chemical calculations at the composite G3(MP2)//B3LYP level were performed to estimate the values of ΔfHm°(g) of the two compounds studied, resulting in values in very good agreement with the experimental ones. Natural bond orbital (NBO) calculations were also performed to learn more about the structure and reactivity of this class of compounds. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Virtual lab demonstrations improve students' mastery of basic biology laboratory techniques.

    PubMed

    Maldarelli, Grace A; Hartmann, Erica M; Cummings, Patrick J; Horner, Robert D; Obom, Kristina M; Shingles, Richard; Pearlman, Rebecca S

    2009-01-01

    Biology laboratory classes are designed to teach concepts and techniques through experiential learning. Students who have never performed a technique must be guided through the process, which is often difficult to standardize across multiple lab sections. Visual demonstration of laboratory procedures is a key element in teaching pedagogy. The main goals of the study were to create videos explaining and demonstrating a variety of lab techniques that would serve as teaching tools for undergraduate and graduate lab courses and to assess the impact of these videos on student learning. Demonstrations of individual laboratory procedures were videotaped and then edited with iMovie. Narration for the videos was edited with Audacity. Undergraduate students were surveyed anonymously prior to and following screening to assess the impact of the videos on student lab performance by completion of two Participant Perception Indicator surveys. A total of 203 and 171 students completed the pre- and posttesting surveys, respectively. Statistical analyses were performed to compare student perceptions of knowledge of, confidence in, and experience with the lab techniques before and after viewing the videos. Eleven demonstrations were recorded. Chi-square analysis revealed a significant increase in the number of students reporting increased knowledge of, confidence in, and experience with the lab techniques after viewing the videos. Incorporation of instructional videos as prelaboratory exercises has the potential to standardize techniques and to promote successful experimental outcomes.

  15. Efficient GPS Position Determination Algorithms

    DTIC Science & Technology

    2007-06-01

    provides two types of services. The Standard Positioning Service (SPS) is designated for civilian users. The Precise Positioning Service (PPS) is ... meters RMS. Military receivers utilized de-encryption techniques to remove SA and provide position accuracy of 10 meters root-mean-square (RMS) [1] ... difficulties. This type of scenario can be expected in test range applications ([20] and [21]). In this dissertation, the experimental test environment ...

  16. Electronic Theory of 2-6 and Related Semiconducting Materials and Structures

    DTIC Science & Technology

    1985-10-01

    ... standard crystalline band-structure techniques to ordered alloy configurations. This approach is especially interesting in view of recent experimental ... The resulting expression for Z exhibits interesting behavior, especially near the percolation threshold ... metal-insulator composites, especially near the percolation threshold. It is well known that normal-metal-insulator compos...

  17. Designing Successful Proteomics Experiments.

    PubMed

    Ruderman, Daniel

    2017-01-01

    Because proteomics experiments are so complex, they can readily fail, and often do so without clear cause. Using standard experimental design techniques and incorporating quality control can greatly increase the chances of success. This chapter introduces the relevant concepts and provides examples specific to proteomic workflows. Applying these notions to design successful proteomics experiments is straightforward. It can help identify failure causes and greatly increase the likelihood of inter-laboratory reproducibility.

  18. Role of the standard deviation in the estimation of benchmark doses with continuous data.

    PubMed

    Gaylor, David W; Slikker, William

    2004-12-01

    For continuous data, risk is defined here as the proportion of animals with values above a large percentile (e.g., the 99th percentile) or below a small percentile (e.g., the 1st percentile) of the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
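
    A short numeric sketch of that bias, under the usual normality assumption: the mean shift needed to push 10% of animals past the control 99th percentile scales with whichever SD is used, so substituting the overall SD for s(a) inflates the benchmark dose by the ratio of the two. The SD values are illustrative.

        from scipy.stats import norm

        s_a, s_m = 1.0, 0.6                     # among-animal and measurement-error SDs
        s_total = (s_a**2 + s_m**2) ** 0.5      # SD of raw measurements per animal

        z99, z90 = norm.ppf(0.99), norm.ppf(0.90)

        def shift_for_10pct_risk(s):            # mean shift giving 10% "abnormal" animals
            return (z99 - z90) * s

        ratio = shift_for_10pct_risk(s_total) / shift_for_10pct_risk(s_a)
        print("benchmark dose inflated by factor %.2f" % ratio)  # equals s_total/s_a

    With s(m) one-third of s(a) the factor drops to about 1.05, matching the article's remark that the bias is then relatively small.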

  19. Primary standardization of 57Co.

    PubMed

    Koskinas, Marina F; Moreira, Denise S; Yamazaki, Ione M; de Toledo, Fábio; Brancaccio, Franco; Dias, Mauro S

    2010-01-01

    This work describes the method developed by the Nuclear Metrology Laboratory (LMN) at IPEN, São Paulo, Brazil, for the standardization of a (57)Co radioactive solution. Cobalt-57 is a radionuclide used for calibrating gamma-ray and X-ray spectrometers, as well as a gamma reference source for dose calibrators used in nuclear medicine services. Two 4πβ-γ coincidence systems were used to perform the standardization: the first used a 4π(PC) counter coupled to a pair of 76 mm x 76 mm NaI(Tl) scintillators for detecting gamma rays; the other used an HPGe spectrometer for gamma detection. The measurements were performed by selecting a gamma-ray window comprising the (122 keV + 136 keV) total-absorption energy peaks in the NaI(Tl) and selecting the total-absorption peak of 122 keV in the germanium detector. The electronic system used the TAC method developed at the LMN for registering the observed events. The methodology recently developed by the LMN for simulating all detection processes in a 4πβ-γ coincidence system by means of the Monte Carlo technique was applied, and the behavior of the extrapolation curve was compared to experimental data. The final activity obtained by the Monte Carlo calculation agrees with the experimental results within the experimental uncertainty. Copyright 2009 Elsevier Ltd. All rights reserved.

  20. Results of the CCRI(II)-S12.H-3 supplementary comparison: Comparison of methods for the calculation of the activity and standard uncertainty of a tritiated-water source measured using the LSC-TDCR method.

    PubMed

    Cassette, Philippe; Altzitzoglou, Timotheos; Antohe, Andrei; Rossi, Mario; Arinc, Arzu; Capogni, Marco; Galea, Raphael; Gudelis, Arunas; Kossert, Karsten; Lee, K B; Liang, Juncheng; Nedjadi, Youcef; Oropesa Verdecia, Pilar; Shilnikova, Tanya; van Wyngaardt, Winifred; Ziemek, Tomasz; Zimmerman, Brian

    2018-04-01

    A comparison of calculations of the activity of a (3)H2O (tritiated-water) liquid scintillation source using the same experimental data set, collected at the LNE-LNHB with a triple-to-double coincidence ratio (TDCR) counter, was completed. A total of 17 laboratories calculated the activity and standard uncertainty of the LS source using the files with experimental data provided by the LNE-LNHB. The results, as well as relevant information on the computation techniques, are presented and analysed in this paper. All results are compatible, even though there is significant dispersion among the reported uncertainties. An output of this comparison is the estimation of the dispersion of TDCR measurement results when measurement conditions are well defined. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Diagnostic methodology is critical for accurately determining the prevalence of ichthyophonus infections in wild fish populations

    USGS Publications Warehouse

    Kocan, R.; Dolan, H.; Hershberger, P.

    2011-01-01

    Several different techniques have been employed to detect and identify Ichthyophonus spp. in infected fish hosts; these include macroscopic observation, microscopic examination of tissue squashes, histological evaluation, in vitro culture, and molecular techniques. Examination of the peer-reviewed literature revealed that when more than one diagnostic method is used, the methods often yield significantly different results; for example, when in vitro culture was used to identify infected trout in an experimentally exposed population, 98.7% of infected trout were detected, but when standard histology was used to confirm known infected tissues from wild salmon, it detected ~50% of low-intensity infections and ~85% of high-intensity infections. Other studies on different species reported similar differences. When we examined a possible mechanism to explain the disparity between different diagnostic techniques, we observed non-random distribution of the parasite in 3-dimensionally visualized tissue sections from infected hosts, thus providing a possible explanation for the different sensitivities of commonly used diagnostic techniques. Based on experimental evidence and a review of the peer-reviewed literature, we have concluded that in vitro culture is currently the most accurate diagnostic technique for determining the infection prevalence of Ichthyophonus, particularly when the exposure history of the population is not known.

  2. A new experimental method to determine the sorption isotherm of a liquid in a porous medium.

    PubMed

    Ouoba, Samuel; Cherblanc, Fabien; Cousin, Bruno; Bénet, Jean-Claude

    2010-08-01

    Sorption from the vapor phase is an important factor controlling the transport of volatile organic compounds (VOCs) in the vadose zone. Therefore, an accurate description of sorption behavior is essential to predict the ultimate fate of contaminants. Several measurement techniques are available in the case of water; however, when dealing with VOCs, the determination of sorption characteristics generally relies on gas chromatography. To avoid some drawbacks associated with this technology, we propose a new method to determine the sorption isotherm of any liquid compound adsorbed in a soil. This method is based on standard, inexpensive transducers (gas pressure, temperature), leading to a simple and transportable experimental device. A numerical estimation underlines its good accuracy, and the technique is validated on two examples. Finally, this method is applied to determine the sorption isotherms of three liquid compounds (water, heptane, and trichloroethylene) in a clayey soil.

  3. An overview of clinical and experimental treatment modalities for port wine stains

    PubMed Central

    Chen, Jennifer K.; Ghasri, Pedram; Aguilar, Guillermo; van Drooge, Anne Margreet; Wolkerstorfer, Albert; Kelly, Kristen M.; Heger, Michal

    2014-01-01

    Port wine stains (PWS) are the most common vascular malformation of the skin, occurring in 0.3% to 0.5% of the population. Noninvasive laser irradiation with flashlamp-pumped pulsed dye lasers (selective photothermolysis) currently comprises the gold standard treatment of PWS; however, the majority of PWS fail to clear completely after selective photothermolysis. In this review, the clinically used PWS treatment modalities (pulsed dye lasers, alexandrite lasers, neodymium:yttrium-aluminum-garnet lasers, and intense pulsed light) and techniques (combination approaches, multiple passes, and epidermal cooling) are discussed. Retrospective analysis of clinical studies published between 1990 and 2011 was performed to determine therapeutic efficacies for each clinically used modality/technique. In addition, factors that have resulted in the high degree of therapeutic recalcitrance are identified, and emerging experimental treatment strategies are addressed, including the use of photodynamic therapy, immunomodulators, angiogenesis inhibitors, hypobaric pressure, and site-specific pharmaco-laser therapy. PMID:22305042

  4. Elderly quality of life impacted by traditional chinese medicine techniques

    PubMed Central

    Figueira, Helena A; Figueira, Olivia A; Figueira, Alan A; Figueira, Joana A; Giani, Tania S; Dantas, Estélio HM

    2010-01-01

    Background: The shift in age structure is having a profound impact, suggesting that the aged should be consulted as reporters on the quality of their own lives. Objectives: The aim of this research was to establish the possible impact of traditional Chinese medicine (TCM) techniques on the quality of life (QOL) of the elderly. Sample: Two non-selected volunteer groups of Rio de Janeiro municipality inhabitants: a control group (36 individuals) not using TCM, and an experimental group (28 individuals) using TCM at the ABACO/Sohaku-in Institute, Brazil. Methods: A questionnaire on elderly QOL devised by the World Health Organization, the WHOQOL-Old, was adopted, and descriptive statistical techniques were used: mean and standard deviation. The Shapiro–Wilk test checked the normality of the distribution. Based on the normality of the distributions, the Student t test was applied to facets 2, 4, 5, and 6 and the total score, and the Mann–Whitney U rank test to facets 1 and 3, both tests comparing the experimental and control groups. The significance level utilized was 95% (P < 0.05). Results: The experimental group reported the highest QOL for every facet and the total score. Conclusions: The results suggest that TCM raises the level of QOL. PMID:21103400
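
    The statistical workflow described above can be sketched with scipy: check each sample with the Shapiro-Wilk test, then apply Student's t test when both look normal and the Mann-Whitney U test otherwise. The scores below are random placeholders, not WHOQOL-Old data.

        import numpy as np
        from scipy.stats import mannwhitneyu, shapiro, ttest_ind

        rng = np.random.default_rng(8)
        control = rng.normal(60, 10, 36)       # 36 controls (placeholder scores)
        tcm = rng.normal(66, 10, 28)           # 28 TCM users (placeholder scores)

        alpha = 0.05
        normal = (shapiro(control).pvalue > alpha) and (shapiro(tcm).pvalue > alpha)
        if normal:
            stat, p = ttest_ind(control, tcm)  # parametric comparison
            test = "Student t"
        else:
            stat, p = mannwhitneyu(control, tcm)  # non-parametric fallback
            test = "Mann-Whitney U"
        print("%s: statistic %.2f, p = %.4f" % (test, stat, p))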

  5. Structural health monitoring in composite materials using frequency response methods

    NASA Astrophysics Data System (ADS)

    Kessler, Seth S.; Spearing, S. Mark; Atalla, Mauro J.; Cesnik, Carlos E. S.; Soutis, Constantinos

    2001-08-01

    Cost-effective and reliable damage detection is critical for the utilization of composite materials in structural applications. Non-destructive evaluation techniques (e.g. ultrasound, radiography, infra-red imaging) are available for use during standard repair and maintenance cycles; however, by comparison with the techniques used for metals, these are relatively expensive and time-consuming. This paper presents part of an experimental and analytical survey of candidate methods for the detection of damage in composite materials. The experimental results are presented for the application of modal analysis techniques applied to rectangular laminated graphite/epoxy specimens containing representative damage modes, including delamination, transverse ply cracks, and through-holes. Changes in natural frequencies and modes were then found using a scanning laser vibrometer, and 2-D finite element models were created for comparison with the experimental results. The models accurately predicted the response of the specimens at low frequencies, but the local excitation and coalescence of higher-frequency modes make mode-dependent damage detection difficult and most likely impractical for structural applications. The frequency response method was found to be reliable for detecting even small amounts of damage in a simple composite structure; however, the potentially important information about damage type, size, location, and orientation was lost using this method, since several combinations of these variables can yield identical response signatures.

  6. Metrology of vibration measurements by laser techniques

    NASA Astrophysics Data System (ADS)

    von Martens, Hans-Jürgen

    2008-06-01

    Metrology, as the art of careful measurement, has been understood as a uniform methodology for measurements in the natural sciences, covering methods for the consistent assessment of experimental data and a corpus of rules regulating application in technology and in trade and industry. The knowledge, methods, and tools available for precision measurements can be exploited for measurements at any level of uncertainty in any field of science and technology. A metrological approach to the preparation, execution, and evaluation (including expression of uncertainty) of measurements of translational and rotational motion quantities using laser interferometer methods and techniques will be presented. The realization and dissemination of the SI units of motion quantities (vibration and shock) have been based on laser interferometer methods specified in international documentary standards. New and upgraded ISO standards are reviewed with respect to their suitability for ensuring traceable vibration measurements and calibrations in an extended frequency range of 0.4 Hz to higher than 100 kHz. Using adequate vibration exciters to generate sufficient displacement or velocity amplitudes, the upper frequency limits of the laser interferometer methods specified in ISO 16063-11 for frequencies <= 10 kHz can be expanded to 100 kHz and beyond. A comparison of different methods used simultaneously for vibration measurements at 100 kHz will be demonstrated. A statistical analysis of numerous experimental results demonstrates the highest accuracy currently achievable in vibration measurements by specific laser methods, techniques, and procedures (i.e. measurement uncertainty of 0.05% at frequencies <= 10 kHz, and <= 1% up to 100 kHz).

  7. Estimating structure quality trends in the Protein Data Bank by equivalent resolution.

    PubMed

    Bagaria, Anurag; Jaravine, Victor; Güntert, Peter

    2013-10-01

    The quality of protein structures obtained by different experimental and ab-initio calculation methods varies considerably. The methods have been evolving over time through improvements in both experimental designs and computational techniques, and since the primary aim of these developments is the procurement of reliable and high-quality data, better techniques have resulted, on average, in an evolution toward higher-quality structures in the Protein Data Bank (PDB). Each method leaves a specific quantitative and qualitative "trace" in the PDB entry. Certain information relevant to one method (e.g. dynamics for NMR) may be lacking for another method. Furthermore, some standard measures of quality for one method cannot be calculated for other experimental methods, e.g. crystal resolution or NMR bundle RMSD. Consequently, structures are classified in the PDB by the method used. Here we introduce a method to estimate a measure of equivalent X-ray resolution (e-resolution), expressed in units of Å, to assess the quality of any type of monomeric, single-chain protein structure, irrespective of the experimental structure determination method. We show and compare the trends in the quality of structures in the Protein Data Bank over the last two decades for five different experimental techniques, excluding theoretical structure predictions. We observed that as new methods are introduced, they undergo a rapid method-development evolution: within several years the e-resolution score becomes similar for structures obtained from the five methods, and they improve from initially poor performance to acceptable quality, comparable with previously established methods, whose performance is essentially stable. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Combination of water-jet dissection and needle-knife as a hybrid knife simplifies endoscopic submucosal dissection.

    PubMed

    Lingenfelder, Tobias; Fischer, Klaus; Sold, Moritz G; Post, Stefan; Enderle, Markus D; Kaehler, Georg F B A

    2009-07-01

    The safety and efficacy of endoscopic submucosal dissection (ESD) are highly dependent on an effective injection beneath the submucosal lamina and on a controlled cutting technique. After our study group demonstrated the efficacy of the HydroJet in needleless submucosal injections under various physical conditions to create a submucosal fluid cushion (selective tissue elevation by pressure = STEP technique), the next step was to develop a new instrument combining the capabilities of an IT-Knife with a high-pressure water jet in a single instrument. In this experimental study, we compared this new instrument with a standard ESD technique. Twelve gastric ESDs were performed in six pigs under endotracheal anesthesia. Square areas measuring 4 cm x 4 cm were marked out on the anterior and posterior wall in the corpus-antrum transition region. The HybridKnife was used either as a standard needle-knife with insulated tip (i.e., the submucosal injection was performed with an injection needle and only the radiofrequency (RF) part of the HybridKnife was used for cutting; conventional technique), or in all the individual stages of the ESD, making use of the HybridKnife's combined functions (HybridKnife technique). The size of the resected specimens, the operating time, the frequency with which instruments were changed, the number of bleeding episodes, and the number of injuries to the gastric wall, together with the subjective overall assessment of the intervention by the operating physician, were recorded. The resected specimens were the same size, with average sizes of 16.96 cm(2) and 15.85 cm(2), respectively (p = 0.8125). Bleeding episodes were less frequent in the HybridKnife group (2.83 vs. 3.5; p = 0.5625). The standard knife caused more injuries to the lamina muscularis propria (0.17 vs. 1.33; p = 0.0313). The operating times tended to be shorter with the HybridKnife technique (47.18 vs. 58.32 minutes; p = 0.0313). The combination of a needle-knife with high-pressure water-jet dissection improved the results of endoscopic submucosal dissection in this experimental setting. Because the frequency of complications is still high, further improvements to the instrument are necessary.

  9. A method to enhance the measurement accuracy of Raman shift based on high precision calibration technique

    NASA Astrophysics Data System (ADS)

    Ding, Xiang; Li, Fei; Zhang, Jiyan; Liu, Wenli

    2016-10-01

    Raman spectrometers are usually calibrated periodically to ensure their measurement accuracy of Raman shift. A combination of a piece of monocrystalline silicon chip and a low-pressure discharge lamp is proposed as a candidate for the reference standard of Raman shift. A high-precision calibration technique is developed to accurately determine the standard value of the silicon's Raman shift around 520 cm-1. The technique is described and illustrated by measuring a piece of silicon chip against three atomic spectral lines of a neon lamp. A commercial Raman spectrometer is employed and its error characteristics of Raman shift are investigated. Error sources are evaluated based on theoretical analysis and experiments, including the sample factor, the instrumental factor, the laser factor, and random factors. Experimental results show that the expanded uncertainty of the silicon's Raman shift around 520 cm-1 can achieve 0.3 cm-1 (k = 2), which is more accurate than most currently used reference materials. The results are validated by comparison measurements between three Raman spectrometers. It is shown that the technique can remarkably enhance the accuracy of Raman shift, making it possible to use the silicon and the lamp to calibrate Raman spectrometers.
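
    A sketch of the calibration arithmetic, assuming a linear pixel-to-wavelength dispersion, a hypothetical 785 nm laser, and made-up pixel positions (the neon wavelengths are standard reference lines): fit the axis on the neon lines, convert the silicon peak position to wavelength, and take the shift relative to the laser line.

        import numpy as np

        laser_nm = 785.0                                   # excitation line (assumed)
        neon_nm = np.array([837.761, 841.843, 849.536])    # Ne I reference lines, nm
        neon_px = np.array([555.2, 636.9, 790.7])          # observed pixels (made up)

        disp = np.poly1d(np.polyfit(neon_px, neon_nm, 1))  # linear dispersion model

        si_px = 168.6                                      # silicon peak pixel (made up)
        si_nm = disp(si_px)
        shift_cm1 = 1.0e7 / laser_nm - 1.0e7 / si_nm       # Raman shift, cm^-1
        print("silicon Raman shift: %.1f cm^-1" % shift_cm1)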

  10. System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.

    2011-01-01

    Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time-history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency-function parameters characterizing an unsteady effect. For estimation of unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.

  11. Measuring The Neutron Lifetime to One Second Using in Beam Techniques

    NASA Astrophysics Data System (ADS)

    Mulholland, Jonathan; NIST In Beam Lifetime Collaboration

    2013-10-01

    The decay of the free neutron is the simplest nuclear beta decay and is the prototype for charged-current semi-leptonic weak interactions. A precise value for the neutron lifetime is required for consistency tests of the Standard Model and is an essential parameter in the theory of Big Bang nucleosynthesis. A new measurement of the neutron lifetime using the in-beam method is planned at the National Institute of Standards and Technology Center for Neutron Research. The systematic effects associated with the in-beam method are markedly different from those found in storage experiments utilizing ultracold neutrons. Experimental improvements, specifically recent advances in the determination of absolute neutron fluence, should permit an overall uncertainty of 1 second on the neutron lifetime. The technical improvements in the in-beam technique and the path toward improving the precision of the new measurement will be discussed.

  12. High channel count and high precision channel spacing multi-wavelength laser array for future PICs.

    PubMed

    Shi, Yuechun; Li, Simin; Chen, Xiangfei; Li, Lianyan; Li, Jingsi; Zhang, Tingting; Zheng, Jilin; Zhang, Yunshan; Tang, Song; Hou, Lianping; Marsh, John H; Qiu, Bocang

    2014-12-09

    Multi-wavelength semiconductor laser arrays (MLAs) have wide applications in wavelength division multiplexing (WDM) networks. In spite of their tremendous potential, adoption of the MLA has been hampered by a number of issues, particularly wavelength precision and fabrication cost. In this paper, we report high-channel-count MLAs in which the wavelength of each channel can be determined precisely through low-cost standard μm-level photolithography/holographic lithography and the reconstruction-equivalent-chirp (REC) technique. 60-wavelength MLAs with good wavelength-spacing uniformity have been demonstrated experimentally, in which nearly 83% of the lasers are within a wavelength deviation of ±0.20 nm, corresponding to a tolerance of ±0.032 nm in the period pitch. As a result of employing the equivalent phase-shift technique, the single-longitudinal-mode (SLM) yield is nearly 100%, while the theoretical yield of standard DFB lasers is only around 33.3%.

  13. The Use of a Laser Doppler Velocimeter in a Standard Flammability Tube

    NASA Technical Reports Server (NTRS)

    Strehlow, R. A.; Flynn, E. M.

    1985-01-01

    The use of the laser Doppler velocimeter (LDV) to measure the flow associated with the passage of a flame through a standard flammability limit tube (SFLT) was studied. Four major results are presented: (1) it is shown that standard ray-tracing calculations can predict the displacement of the LDV volume and the fringe rotation within the experimental error of measurement; (2) the flow velocity vector field associated with the passage of an upward-propagating flame in an SFLT is determined; (3) it is determined that the use of a light-interruption technique to track particles is not feasible; and (4) it is shown that a 25 mW laser is adequate for LDV measurements in the Shuttle or Spacelab.

  14. Double synchronized switch harvesting (DSSH): a new energy harvesting scheme for efficient energy extraction.

    PubMed

    Lallart, Mickaël; Garbuio, Lauric; Petit, Lionel; Richard, Claude; Guyomar, Daniel

    2008-10-01

    This paper presents a new technique for optimized energy harvesting using piezoelectric microgenerators, called double synchronized switch harvesting (DSSH). This technique consists of a nonlinear treatment of the output voltage of the piezoelectric element. It also integrates an intermediate switching stage that ensures optimal harvested power regardless of the load connected to the microgenerator. Theoretical developments are presented considering either constant vibration magnitude, constant driving force, or independent extraction. Then experimental measurements are carried out to validate the theoretical predictions. This technique exhibits a constant output power for a wide range of loads connected to the microgenerator. In addition, the extracted power obtained using such a technique allows a gain of up to 500% in terms of maximal power output compared with the standard energy harvesting method. It is also shown that such a technique allows fine-tuning of the trade-off between vibration damping and energy harvesting.

  15. Galerkin v. discrete-optimal projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir

    Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
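
    A toy-scale contrast of the two projections on a linear ODE with backward-Euler stepping, far from the paper's million-unknown setting: Galerkin projects through the POD basis V directly, while the discrete-optimal (LSPG-type) step minimizes the fully discrete residual via its normal equations. Problem sizes and the operator are arbitrary.

        import numpy as np

        rng = np.random.default_rng(9)
        n, r, dt, steps = 200, 10, 1e-2, 100
        A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))
        M = np.eye(n) - dt * A                     # backward-Euler operator
        x = rng.standard_normal(n)

        X = [x]                                    # full-order snapshots
        for _ in range(steps):
            x = np.linalg.solve(M, x)
            X.append(x)
        V = np.linalg.svd(np.array(X).T, full_matrices=False)[0][:, :r]  # POD basis

        def run_rom(lhs, rhs_op):
            # advance reduced state: lhs @ xh_new = rhs_op @ (V @ xh_old)
            xh, err = V.T @ X[0], 0.0
            for k in range(steps):
                xh = np.linalg.solve(lhs, rhs_op @ (V @ xh))
                err = max(err, np.linalg.norm(V @ xh - X[k + 1]))
            return err

        MV = M @ V
        print("Galerkin max error: %.3e" % run_rom(V.T @ M @ V, V.T))
        print("LSPG     max error: %.3e" % run_rom(MV.T @ MV, MV.T))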

  16. Experimental validation of a new heterogeneous mechanical test design

    NASA Astrophysics Data System (ADS)

    Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.

    2018-05-01

    Standard material parameter identification strategies generally use an extensive number of classical tests to collect the required experimental data. However, a great effort has been made recently by the scientific and industrial communities to base this experimental database on heterogeneous tests. These tests can provide richer information on the material behavior, allowing the identification of a more complete set of material parameters. This is a result of the recent development of full-field measurement techniques, like digital image correlation (DIC), that can capture the heterogeneous deformation fields on the specimen surface during the test. Recently, new specimen geometries were designed to enhance the richness of the strain field and capture supplementary strain states. The butterfly specimen is an example of these new geometries, designed through a numerical optimization procedure using an indicator capable of evaluating the heterogeneity and the richness of the strain information. However, no experimental validation had yet been performed. The aim of this work is to experimentally validate the heterogeneous butterfly mechanical test in the parameter identification framework. To this end, the DIC technique and a Finite Element Model Updating (FEMU) inverse strategy are used together for the parameter identification of a DC04 steel, as well as for the calculation of the indicator. The experimental tests are carried out in a universal testing machine with the ARAMIS measuring system to provide the strain states on the specimen surface. The identification strategy is applied to the data obtained from the experimental tests, and the results are compared to a reference numerical solution.
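
    The FEMU idea can be sketched on a drastically simplified "model": fit hardening-law parameters by least squares so simulated stresses match measured ones. In a real FEMU loop the function below would be a finite element simulation of the butterfly test compared against DIC strain fields; the Swift-law parameters and noise level here are invented.

        import numpy as np
        from scipy.optimize import least_squares

        def model(params, strain):                 # Swift-type hardening (stand-in)
            K, eps0, n = params
            return K * (eps0 + strain) ** n

        strain = np.linspace(0.002, 0.2, 40)
        true = (520.0, 0.01, 0.22)                 # "unknown" material parameters
        rng = np.random.default_rng(10)
        measured = model(true, strain) + rng.normal(0.0, 2.0, strain.size)

        fit = least_squares(lambda p: model(p, strain) - measured,
                            x0=(400.0, 0.05, 0.3),
                            bounds=([100, 1e-4, 0.05], [1000, 0.1, 0.6]))
        print("identified K, eps0, n:", np.round(fit.x, 3))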

  17. Development of a 3-D X-ray system

    NASA Astrophysics Data System (ADS)

    Evans, James Paul Owain

    The interpretation of standard two-dimensional x-ray images by humans is often very difficult. This is due to the lack of visual cues to depth in an image produced by transmitted radiation. The solution put forward in this research is to introduce binocular parallax, a powerful physiological depth cue, into the resultant shadowgraph x-ray image. This has been achieved by developing a binocular stereoscopic x-ray imaging technique, which can be used both for visual inspection by human observers and for the extraction of three-dimensional co-ordinate information. The technique is implemented in the design and development of two experimental x-ray systems and in the development of measurement algorithms. The first experimental machine is based on standard linear x-ray detector arrays and was designed as an optimum configuration for visual inspection by human observers. However, it was felt that a combination of the 3-D visual inspection capability and a measurement facility would enhance the usefulness of the technique. Therefore, both a theoretical and an empirical analysis of the co-ordinate measurement capability of the machine have been carried out. The measurement is based on close-range photogrammetric techniques. The accuracy of the measurement has been found to be of the order of 4 mm in x, 3 mm in y, and 6 mm in z. A second experimental machine was developed, based on the same technique as the first machine. However, a major departure has been the introduction of a dual-energy linear x-ray detector array, which allows, in general, discrimination between organic and inorganic substances. The second design is a compromise between ease of visual inspection for human observers and optimum three-dimensional co-ordinate measurement capability. The system is part of an ongoing research programme into the possibility of introducing psychological depth cues into the resultant x-ray images. The research presented in this thesis was initiated to enhance the visual interpretation of complex x-ray images, specifically in response to problems encountered in the routine screening of freight by HM Customs and Excise. This phase of the work culminated in the development of the first experimental machine. During this work the security industry was starting to adopt a new type of x-ray detector, namely the dual-energy x-ray sensor. The Department of Transport made funding available to the Police Scientific Development Branch (PSDB), part of the Home Office Science and Technology Group, to investigate the possibility of utilising the dual-energy sensor in a 3-D x-ray screening system. This phase of the work culminated in the development of the second experimental machine.
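    The close-range photogrammetric measurement rests on the standard parallax relation. A minimal sketch, assuming the textbook pinhole form z = fB/d (the thesis's folded linear-array geometry will differ in detail, and all numbers are illustrative):

    ```python
    def depth_from_disparity(focal_mm: float, baseline_mm: float, disparity_mm: float) -> float:
        """Depth along the viewing axis from image-plane disparity: z = f * B / d."""
        if disparity_mm <= 0:
            raise ValueError("disparity must be positive")
        return focal_mm * baseline_mm / disparity_mm

    # Example: 500 mm effective focal distance, 80 mm source separation,
    # 4 mm measured disparity -> 10 m depth (purely illustrative numbers).
    print(depth_from_disparity(500.0, 80.0, 4.0))
    ```

    The quoted 4/3/6 mm accuracies in x, y, and z reflect how disparity-measurement error propagates differently along and across the viewing axis.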

  18. Accelerated testing of space mechanisms

    NASA Technical Reports Server (NTRS)

    Murray, S. Frank; Heshmat, Hooshang

    1995-01-01

    This report contains a review of various existing life prediction techniques used for a wide range of space mechanisms. Life prediction techniques utilized in other non-space fields such as turbine engine design are also reviewed for applicability to many space mechanism issues. The development of new concepts on how various tribological processes are involved in the life of the complex mechanisms used for space applications are examined. A 'roadmap' for the complete implementation of a tribological prediction approach for complex mechanical systems including standard procedures for test planning, analytical models for life prediction and experimental verification of the life prediction and accelerated testing techniques are discussed. A plan is presented to demonstrate a method for predicting the life and/or performance of a selected space mechanism mechanical component.

  19. Graphene-based quantum Hall resistance standards grown by chemical vapor deposition on silicon carbide

    NASA Astrophysics Data System (ADS)

    Ribeiro-Palau, Rebeca; Lafont, Fabien; Kazazis, Dimitris; Michon, Adrien; Couturaud, Olivier; Consejo, Christophe; Jouault, Benoit; Poirier, Wilfrid; Schopfer, Felicien

    2015-03-01

    Replacing GaAs-based quantum Hall resistance standards (GaAs-QHRS) with a more convenient alternative based on graphene (Gr-QHRS) is an ongoing goal in metrology. The new Gr-QHRS are expected to work under less demanding experimental conditions than GaAs ones. This would open the way to a broad dissemination of quantum standards, potentially towards industrial end-users, and it would support the implementation of a new International System of Units based on fixed fundamental constants. Here, we present accurate quantum Hall resistance measurements in large graphene Hall bars, grown by the hybrid scalable technique of propane/hydrogen chemical vapor deposition (CVD) on silicon carbide (SiC). This new Gr-QHRS shows a relative accuracy of 1 × 10^-9 for the Hall resistance under the lowest magnetic field ever achieved in graphene. These experimental conditions surpass those of the most widely used GaAs-QHRS. These results confirm the promise of graphene for resistance metrology applications and emphasize the quality of the graphene produced by CVD on SiC for applications as demanding as resistance metrology.
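    For context, the resistance disseminated by any quantum Hall standard follows from fixed fundamental constants alone; a quick check of the von Klitzing constant and of the ν = 2 plateau used in graphene:

    ```python
    from scipy.constants import h, e

    # Quantized Hall resistance underlying both GaAs- and graphene-based standards:
    # R_K = h / e^2, with plateaus at R_K / nu (nu = 2 for the graphene plateau).
    R_K = h / e**2
    print(f"R_K   = {R_K:.3f} ohm")      # ~25812.807 ohm
    print(f"R_K/2 = {R_K / 2:.3f} ohm")  # ~12906.404 ohm
    ```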

  20. Reference Data for the Density and Viscosity of Liquid Cadmium, Cobalt, Gallium, Indium, Mercury, Silicon, Thallium, and Zinc

    NASA Astrophysics Data System (ADS)

    Assael, Marc J.; Armyra, Ivi J.; Brillo, Juergen; Stankus, Sergei V.; Wu, Jiangtao; Wakeham, William A.

    2012-09-01

    The available experimental data for the density and viscosity of liquid cadmium, cobalt, gallium, indium, mercury, silicon, thallium, and zinc have been critically examined with the intention of establishing both a density and a viscosity standard. All experimental data have been categorized into primary and secondary data according to the quality of measurement, the technique employed and the presentation of the data, as specified by a series of criteria. The proposed standard reference correlations for the density of liquid cadmium, cobalt, gallium, indium, silicon, thallium, and zinc are characterized by percent deviations at the 95% confidence level of 0.6, 2.1, 0.4, 0.5, 2.2, 0.9, and 0.7, respectively. In the case of mercury, since density reference values already exist, no further work was carried out. The standard reference correlations for the viscosity of liquid cadmium, cobalt, gallium, indium, mercury, silicon, thallium, and zinc are characterized by percent deviations at the 95% confidence level of 9.4, 14.0, 13.5, 2.1, 7.3, 15.7, 5.1, and 9.3, respectively.

  1. Qualitative and quantitative imaging in microgravity combustion

    NASA Technical Reports Server (NTRS)

    Weiland, Karen J.

    1995-01-01

    An overview of the imaging techniques implemented by researchers in the microgravity combustion program shows that for almost any system, imaging of the flame may be accomplished in a variety of ways. Standard and intensified video, high speed, and infrared cameras and fluorescence, laser schlieren, rainbow schlieren, soot volume fraction, and soot temperature imaging have all been used in the laboratory and many in reduced gravity to make the necessary experimental measurements.

  2. Study of the angular coefficients and corresponding helicity cross sections of the W boson in hadron collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strologas, John; Errede, Steven; Department of Physics, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801

    We present the standard model prediction for the eight angular coefficients of the W boson, which completely describe its differential cross section in hadron collisions. These coefficients are ratios of the W helicity cross sections to the total unpolarized cross section. We also suggest a technique to experimentally extract the coefficients, which we demonstrate in the Collins-Soper azimuthal-angle analysis.
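    The record does not reproduce the decomposition itself; as commonly written in the Collins-Soper frame (a standard form in the literature, quoted here for orientation), it reads:

    ```latex
    \frac{d\sigma}{dq_T^2\,dy\,d\cos\theta\,d\phi} =
      \frac{3}{16\pi}\,\frac{d\sigma^{\mathrm{unpol}}}{dq_T^2\,dy}\,
      \Big[(1+\cos^2\theta)
      + A_0\,\tfrac{1}{2}(1-3\cos^2\theta)
      + A_1\,\sin 2\theta\,\cos\phi
      + A_2\,\tfrac{1}{2}\sin^2\theta\,\cos 2\phi
      + A_3\,\sin\theta\,\cos\phi
      + A_4\,\cos\theta
      + A_5\,\sin^2\theta\,\sin 2\phi
      + A_6\,\sin 2\theta\,\sin\phi
      + A_7\,\sin\theta\,\sin\phi\Big]
    ```

    Here θ and φ are the charged-lepton decay angles in the Collins-Soper frame, and each A_i is the ratio of a helicity cross section to the unpolarized one.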

  3. Osteosarcoma Overview.

    PubMed

    Lindsey, Brock A; Markel, Justin E; Kleinerman, Eugenie S

    2017-06-01

    Osteosarcoma (OS) is the most common primary malignancy of bone and patients with metastatic disease or recurrences continue to have very poor outcomes. Unfortunately, little prognostic improvement has been generated from the last 20 years of research and a new perspective is warranted. OS is extremely heterogeneous in both its origins and manifestations. Although multiple associations have been made between the development of osteosarcoma and race, gender, age, various genomic alterations, and exposure situations among others, the etiology remains unclear and controversial. Noninvasive diagnostic methods include serum markers like alkaline phosphatase and a growing variety of imaging techniques including X-ray, computed tomography, magnetic resonance imaging, and positron emission as well as combinations thereof. Still, biopsy and microscopic examination are required to confirm the diagnosis and carry additional prognostic implications such as subtype classification and histological response to neoadjuvant chemotherapy. The current standard of care combines surgical and chemotherapeutic techniques, with a multitude of experimental biologics and small molecules currently in development and some in clinical trial phases. In this review, in addition to summarizing the current understanding of OS etiology, diagnostic methods, and the current standard of care, our group describes various experimental therapeutics and provides evidence to encourage a potential paradigm shift toward the introduction of immunomodulation, which may offer a more comprehensive approach to battling cancer pleomorphism.

  4. High-Accuracy Surface Figure Measurement of Silicon Mirrors at 80 K

    NASA Technical Reports Server (NTRS)

    Blake, Peter; Mink, Ronald G.; Chambers, John; Davila, Pamela; Robinson, F. David

    2004-01-01

    This report describes the equipment, experimental methods, and first results at a new facility for interferometric measurement of cryogenically cooled spherical mirrors at the Goddard Space Flight Center Optics Branch. The procedure, using standard phase-shifting interferometry, has a standard combined uncertainty of 3.6 nm rms in its representation of the two-dimensional surface figure error at 80 K, and an uncertainty of plus or minus 1 nm in the rms statistic itself. The first mirror tested was a concave spherical silicon foam-core mirror with a clear aperture of 120 mm. The optic surface was measured at room temperature using standard absolute techniques, and then the change in surface figure error from room temperature to 80 K was measured. The mirror was cooled within a cryostat, and its surface figure error was measured through a fused-silica window. The facility and techniques will be used to measure the surface figure error at 20 K of prototype lightweight silicon carbide and Cesic mirrors developed by Galileo Avionica (Italy) for the European Space Agency (ESA).

  5. Detection of microbial concentration in ice-cream using the impedance technique.

    PubMed

    Grossi, M; Lanzoni, M; Pompei, A; Lazzarini, R; Matteuzzi, D; Riccò, B

    2008-06-15

    The detection of microbial concentration, essential for safe and high-quality food products, is traditionally performed with the plate count technique, which is reliable but slow and not easily automated, as required for direct use in industrial machines. To this purpose, the method based on impedance measurements represents an attractive alternative, since it can produce results in about 10 h, instead of the 24-48 h needed by standard plate counts, and can easily be implemented in automatic form. In this paper such a method has been experimentally studied in the case of ice-cream products. In particular, all main ice-cream compositions of real interest have been considered, and no nutrient medium has been used to dilute the samples. A measurement set-up has been realized using benchtop instruments for impedance measurements on samples whose bacterial concentration was independently measured by means of standard plate counts. The obtained results clearly indicate that impedance measurement represents a feasible and reliable technique to detect total microbial concentration in ice-cream, suitable for implementation as an embedded system in industrial machines.
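    A minimal sketch of the usual impedance-method calibration, assuming the common linear relation between detection time (when the impedance curve departs from baseline) and the logarithm of the initial microbial load; the numbers are invented, not the paper's ice-cream data:

    ```python
    import numpy as np

    t_detect_h = np.array([4.1, 5.8, 7.6, 9.3, 11.2])      # detection times (h)
    log_cfu    = np.array([7.0, 6.0, 5.0, 4.0, 3.0])       # plate-count log10(CFU/mL)

    slope, intercept = np.polyfit(t_detect_h, log_cfu, 1)  # linear calibration

    def estimate_log_cfu(t_hours: float) -> float:
        """Predict log10(CFU/mL) from a measured impedance detection time."""
        return slope * t_hours + intercept

    print(estimate_log_cfu(6.5))   # sample detected after ~6.5 h
    ```

    Higher initial loads reach the detection threshold sooner, which is what makes the 10 h figure possible for contaminated samples.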

  6. Nanoscale welding aerosol sensing based on whispering gallery modes in a cylindrical silica resonator.

    PubMed

    Lee, Aram; Mills, Thomas; Xu, Yong

    2015-03-23

    We report an experimental technique where one uses a standard silica fiber as a cylindrical whispering gallery mode (WGM) resonator to sense airborne nanoscale aerosols produced by electric arc welding. We find that the accumulation of aerosols on the resonator surface induces a measurable red-shift in resonance frequency, and establish an empirical relation that links the magnitude of resonance shift with the amount of aerosol deposition. The WGM quality factors, by contrast, do not decrease significantly, even for samples with a large percentage of surface area covered by aerosols. Our experimental results are discussed and compared with existing literature on WGM-based nanoparticle sensing.

  7. Metamaterial-based half Maxwell fish-eye lens for broadband directive emissions

    NASA Astrophysics Data System (ADS)

    Dhouibi, Abdallah; Nawaz Burokur, Shah; de Lustrac, André; Priou, Alain

    2013-01-01

    The broadband directive emission from a metamaterial surface is numerically and experimentally reported. The metasurface, composed of non-resonant complementary closed ring structures, is designed to obey the refractive index profile of a half Maxwell fish-eye lens. A planar microstrip Vivaldi antenna is used as a transverse-magnetic-polarized wave launcher for the lens. A prototype of the lens associated with its feed structure has been fabricated using standard lithography techniques. To experimentally demonstrate the broadband focusing properties and directive emissions, both the far-field radiation patterns and the near-field distributions have been measured. Measurements agree quantitatively and qualitatively with theoretical simulations.
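    For reference, the classical Maxwell fish-eye index profile that the graded metasurface reproduces is given below; the half lens truncates the profile at the diameter plane, and n_0 = 2 is the common choice so that the index falls from 2 at the center to 1 at the rim (the paper's exact grading is not reproduced here).

    ```latex
    n(r) = \frac{n_0}{1 + \left(r/R\right)^2}, \qquad 0 \le r \le R
    ```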

  8. Advanced analysis techniques for uranium assay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geist, W. H.; Ensslin, Norbert; Carrillo, L. A.

    2001-01-01

    Uranium has a negligible passive neutron emission rate, making its assay practicable only with an active interrogation method. The active interrogation uses external neutron sources to induce fission events in the uranium in order to determine the mass. This technique requires careful calibration with standards that are representative of the items to be assayed. The samples to be measured are not always well represented by the available standards, which often leads to large biases. A technique of active multiplicity counting is being developed to reduce some of these assay difficulties. Active multiplicity counting uses the measured doubles and triples count rates to determine the neutron multiplication and the product of the source-sample coupling (C) and the 235U mass (m). Since the 235U mass always appears in the multiplicity equations as the product Cm, the coupling needs to be determined before the mass can be known. A relationship has been developed that relates the coupling to the neutron multiplication. The relationship is based on both an analytical derivation and empirical observations. To determine a scaling constant present in this relationship, known standards must be used. Evaluation of experimental data revealed an improvement over the traditional calibration-curve analysis method of fitting the doubles count rate to the 235U mass. Active multiplicity assay appears to relax the requirement that the calibration standards and unknown items have the same chemical form and geometry.

  9. Novel radio-frequency gun structures for ultrafast relativistic electron diffraction.

    PubMed

    Musumeci, P; Faillace, L; Fukasawa, A; Moody, J T; O'Shea, B; Rosenzweig, J B; Scoby, C M

    2009-08-01

    Radio-frequency (RF) photoinjector-based relativistic ultrafast electron diffraction (UED) is a promising new technique that has the potential to probe structural changes at the atomic scale with sub-100 fs temporal resolution in a single shot. We analyze the limitations on the temporal and spatial resolution of this technique considering the operating parameters of a standard 1.6 cell RF gun (which is the RF photoinjector used for the first experimental tests of relativistic UED at Stanford Linear Accelerator Center; University of California, Los Angeles; Brookhaven National Laboratory), and study the possibility of employing novel RF structures to circumvent some of these limits.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Gangadhar, E-mail: gdas@rrcat.gov.in; Tiwari, M. K.; Singh, A. K.

    The Compton and elastic scattering radiations are the major contributors to the spectral background of an x-ray fluorescence spectrum, which ultimately limits the element detection sensitivities of the technique to the µg/g (ppm) range. In the present work, we provide detailed mathematical descriptions and show how the polarization properties of synchrotron radiation influence the spectral background in the x-ray fluorescence technique. We demonstrate our theoretical understanding through experimental observations using total x-ray fluorescence measurements on standard reference materials. Interestingly, the azimuthal anisotropy of the scattered radiation is shown to play a vital role in the achievable x-ray fluorescence detection sensitivities.
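    The azimuthal anisotropy invoked above is already present in the Thomson cross section for linearly polarized radiation, commonly quoted as

    ```latex
    \frac{d\sigma_T}{d\Omega} = r_e^{2}\left(1 - \sin^{2}\theta\,\cos^{2}\phi\right)
    ```

    with θ the scattering angle and φ the azimuth measured from the polarization vector: at θ = 90° in the polarization plane (φ = 0) the elastically scattered intensity ideally vanishes, which is why synchrotron XRF detectors are placed there to suppress the scatter background.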

  11. [A method for inducing standardized spiral fractures of the tibia in the animal experiment].

    PubMed

    Seibold, R; Schlegel, U; Cordey, J

    1995-07-01

    A method for the deliberate weakening of cortical bone has been developed on the basis of an already established technique for creating butterfly fractures. It enables one to create the same type of fracture, i.e., a spiral fracture, every time. The fracturing process is recorded as a force-strain curve. The results of the in vitro investigations form a basis for the preparation of experimental tasks aimed at demonstrating internal fixation techniques and their influence on the vascularity of the bone in simulated fractures. Animal protection law requires that this fracture model not fail in animal experiments.

  12. Remeasurement and compilation of excitation function of proton induced reactions on iron for activation techniques

    NASA Astrophysics Data System (ADS)

    Takács, S.; Vasváry, L.; Tárkányi, F.

    1994-05-01

    Excitation functions of the natFe(p,xn)56Co reaction have been remeasured in the energy region up to 18 MeV using the stacked-foil technique and standard high-resolution gamma-ray spectrometry at the Debrecen MGC-20E cyclotron. A compilation of the available data measured between 1959 and 1993 has been made. The corresponding excitation functions have been reviewed, and a critical comparison of all the available data was performed to obtain the most accurate data set. The feasibility of the evaluated data set was checked by reproducing experimental calibration curves for thin layer activation (TLA) by calculation.

  13. Projecting non-diffracting waves with intermediate-plane holography.

    PubMed

    Mondal, Argha; Yevick, Aaron; Blackburn, Lauren C; Kanellakopoulos, Nikitas; Grier, David G

    2018-02-19

    We introduce intermediate-plane holography, which substantially improves the ability of holographic trapping systems to project propagation-invariant modes of light using phase-only diffractive optical elements. Translating the mode-forming hologram to an intermediate plane in the optical train can reduce the need to encode amplitude variations in the field, and therefore complements well-established techniques for encoding complex-valued transfer functions into phase-only holograms. Compared to standard holographic trapping implementations, intermediate-plane holograms greatly improve diffraction efficiency and mode purity of propagation-invariant modes, and so increase their useful non-diffracting range. We demonstrate this technique through experimental realizations of accelerating modes and long-range tractor beams.

  14. A numerical and experimental comparison of human head phantoms for compliance testing of mobile telephone equipment.

    PubMed

    Christ, Andreas; Chavannes, Nicolas; Nikoloski, Neviana; Gerber, Hans-Ulrich; Poković, Katja; Kuster, Niels

    2005-02-01

    A new human head phantom has been proposed by CENELEC/IEEE, based on a large scale anthropometric survey. This phantom is compared to a homogeneous Generic Head Phantom and three high resolution anatomical head models with respect to specific absorption rate (SAR) assessment. The head phantoms are exposed to the radiation of a generic mobile phone (GMP) with different antenna types and a commercial mobile phone. The phones are placed in the standardized testing positions and operate at 900 and 1800 MHz. The average peak SAR is evaluated using both experimental (DASY3 near field scanner) and numerical (FDTD simulations) techniques. The numerical and experimental results compare well and confirm that the applied SAR assessment methods constitute a conservative approach.

  15. A new simple technique for improving the random properties of chaos-based cryptosystems

    NASA Astrophysics Data System (ADS)

    Garcia-Bosque, M.; Pérez-Resa, A.; Sánchez-Azqueta, C.; Celma, S.

    2018-03-01

    A new technique for improving the security of chaos-based stream ciphers has been proposed and tested experimentally. This technique improves the randomness properties of the generated keystream by preventing the system from falling into short-period cycles due to digitization. In order to test this technique, a stream cipher based on a Skew Tent Map algorithm has been implemented on a Virtex 7 FPGA. The randomness of the keystream generated by this system has been compared to the randomness of the keystream generated by the same system with the proposed randomness-enhancement technique. By subjecting both keystreams to the National Institute of Standards and Technology (NIST) tests, we have proved that our method can considerably improve the randomness of the generated keystreams. In order to incorporate our randomness-enhancement technique, only 41 extra slices were needed, proving that, besides being effective, this method is also efficient in terms of area and hardware resources.
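    A minimal sketch of keystream generation from a skew tent map, the chaotic core named above; the authors' fixed-point FPGA arithmetic and their specific randomness-enhancement step are not reproduced here:

    ```python
    import numpy as np

    def skew_tent(x: float, gamma: float) -> float:
        """One iteration of the skew tent map on [0, 1] with break point gamma."""
        return x / gamma if x < gamma else (1.0 - x) / (1.0 - gamma)

    def keystream_bits(seed: float, gamma: float, n_bits: int) -> np.ndarray:
        x, bits = seed, np.empty(n_bits, dtype=np.uint8)
        for i in range(n_bits):
            x = skew_tent(x, gamma)
            bits[i] = 1 if x >= 0.5 else 0   # simple threshold bit extraction
        return bits

    ks = keystream_bits(seed=0.37, gamma=0.499, n_bits=1024)
    print(ks[:32], ks.mean())   # bit balance should be near 0.5
    ```

    In finite precision such orbits can collapse into short cycles, which is precisely the digitization problem the proposed technique addresses.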

  16. Ventricular-subcutaneous shunt for the treatment of experimental hydrocephalus in young rats: technical note.

    PubMed

    Santos, Marcelo Volpon; Garcia, Camila Araujo Bernardino; Jardini, Evelise Oliveira; Romeiro, Thais Helena; da Silva Lopes, Luiza; Machado, Hélio Rubens; de Oliveira, Ricardo Santos

    2016-08-01

    Hydrocephalus is a complex disease that affects cerebrospinal fluid (CSF) dynamics and is very common in children. To date, CSF shunting remains the standard treatment for childhood hydrocephalus; nevertheless, the effects of such an operation on the developing brain are widely unknown. To help overcome this, experimental models of CSF shunts are very useful tools. The objective of this study was to describe a feasible and reliable technique of an adapted ventricular-subcutaneous shunt for the treatment of kaolin-induced hydrocephalus in young rats. We developed a ventricular-subcutaneous shunt (VSCS) technique which was used in 31 young Wistar rats with kaolin-induced hydrocephalus. Hydrocephalus was induced at 7 days of age, and shunt implantation was performed 7 days later. Our technique used a 0.7-mm gauge polypropylene catheter tunneled to a subcutaneous pocket created over the animal's back and inserted into the right lateral ventricle. All animals were sacrificed 14 days after shunt insertion. Twenty-four rats survived and remained well until the end of the study. No major complications were seen. Their weight gain went back to normal. They all underwent ambulatory behavioral testing prior to and after VSCS, which showed improvement in their motor skills. We have also obtained magnetic resonance (MR) scans of 16 pups confirming reduction of ventricular size after shunting and indicating effective treatment. Histopathological analysis of brain samples before and after shunting showed reversion of ependymal and corpus callosum disruption, as well as fewer reactive astrocytes in shunted animals. An experimental CSF shunt technique was devised. Excessive CSF of hydrocephalic rats is diverted into the subcutaneous space where it can be resorbed. This technique has a low complication rate and is effective. It might be applied to various types of experimental studies involving induction and treatment of hydrocephalus.

  17. Using direct numerical simulation to improve experimental measurements of inertial particle radial relative velocities

    NASA Astrophysics Data System (ADS)

    Ireland, Peter J.; Collins, Lance R.

    2012-11-01

    Turbulence-induced collision of inertial particles may contribute to the rapid onset of precipitation in warm cumulus clouds. The particle collision frequency is determined from two parameters: the radial distribution function g(r) and the mean inward radial relative velocity w_r. These quantities have been measured in three dimensions computationally, using direct numerical simulation (DNS), and experimentally, using digital holographic particle image velocimetry (DHPIV). While good quantitative agreement has been attained between computational and experimental measures of g(r) (Salazar et al. 2008), measures of w_r have not reached that stage (de Jong et al. 2010). We apply DNS to mimic the experimental image analysis used in the relative velocity measurement. To account for experimental errors, we add noise to the particle positions and 'measure' the velocity from these positions. Our DNS shows that the experimental errors are inherent to the DHPIV setup, so we explore an alternate approach in which velocities are measured along thin two-dimensional planes using standard PIV. We show that this technique better recovers the correct radial relative velocity PDFs and suggest optimal parameter ranges for the experimental measurements.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Rearden, Bradley T

    The validation of neutron transport methods used in nuclear criticality safety analyses is required by consensus American National Standards Institute/American Nuclear Society (ANSI/ANS) standards. In the last decade, there has been an increased interest in correlations among critical experiments used in validation that have shared physical attributes and which impact the independence of each measurement. The statistical methods included in many of the frequently cited guidance documents on performing validation calculations incorporate the assumption that all individual measurements are independent, so little guidance is available to practitioners on the topic. Typical guidance includes recommendations to select experiments from multiple facilities and experiment series in an attempt to minimize the impact of correlations or common-cause errors in experiments. Recent efforts have been made both to determine the magnitude of such correlations between experiments and to develop and apply methods for adjusting the bias and bias uncertainty to account for the correlations. This paper describes recent work performed at Oak Ridge National Laboratory using the Sampler sequence from the SCALE code system to develop experimental correlations using a Monte Carlo sampling technique. Sampler will be available for the first time with the release of SCALE 6.2, and a brief introduction to the methods used to calculate experiment correlations within this new sequence is presented in this paper. Techniques to utilize these correlations in the establishment of upper subcritical limits are the subject of a companion paper and will not be discussed here. Example experimental uncertainties and correlation coefficients are presented for a variety of low-enriched uranium water-moderated lattice experiments selected for use in a benchmark exercise by the Working Party on Nuclear Criticality Safety Subgroup on Uncertainty Analysis in Criticality Safety Analyses. The results include studies on the effect of fuel rod pitch on the correlations, and some observations are also made regarding difficulties in determining experimental correlations using the Monte Carlo sampling technique.
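    A minimal sketch of the Monte Carlo sampling idea, with a cheap linear surrogate standing in for the transport code and invented sensitivities: perturb the shared uncertain inputs across many samples, evaluate each experiment per sample, and correlate the results. This illustrates the statistics only, not the Sampler sequence itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_samples = 1000
    shared = rng.normal(0.0, 1.0, n_samples)         # common-cause perturbation (e.g., shared fuel)
    indep = rng.normal(0.0, 1.0, (2, n_samples))     # experiment-specific perturbations

    # Two experiments sharing the same fuel batch but with independent geometry terms;
    # sensitivities (in pcm per standard deviation) are invented for illustration.
    keff_1 = 1.0 + 300e-5 * shared + 200e-5 * indep[0]
    keff_2 = 1.0 + 300e-5 * shared + 250e-5 * indep[1]

    corr = np.corrcoef(keff_1, keff_2)[0, 1]
    print(f"experiment-to-experiment correlation: {corr:.2f}")
    ```

    The stronger the shared (common-cause) term relative to the independent ones, the larger the correlation coefficient, which is what invalidates the independence assumption in traditional validation statistics.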

  19. Novel Experimental Techniques to Investigate Wellbore Damage Mechanisms

    NASA Astrophysics Data System (ADS)

    Choens, R. C., II; Ingraham, M. D.; Lee, M.; Dewers, T. A.

    2017-12-01

    A new experimental technique with a unique geometry is presented for investigating deformation of simulated boreholes using standard axisymmetric triaxial deformation equipment. The Sandia WEllbore SImulation (SWESI) geometry uses right cylinders of rock 50 mm in diameter and 75 mm in length. An 11.3 mm hole is drilled perpendicular to the axis of the cylinder in the center of the sample to simulate a borehole. The hole is covered with a solid metal cover and sealed with polyurethane. The metal cover can be machined with a high-pressure port to introduce different fluid chemistries into the borehole at controlled pressures. Samples are deformed in a standard load frame under confinement, allowing for a broad range of possible stresses, load paths, and temperatures. Experiments in this study are loaded to the desired confining pressure, then deformed at a constant axial strain rate of 10^-5 s^-1. Two different suites of experiments are conducted in this study on sedimentary and crystalline rock types. The first series of experiments is conducted on Mancos Shale, a finely laminated, transversely isotropic rock. Samples are cored at three different orientations to the laminations. A second series of experiments is conducted on Sierra White granite with different fluid chemistries inside the borehole. Numerical modelling and experimental observations, including CT microtomography, demonstrate that stresses are concentrated around the simulated wellbore and recreate wellbore deformation mechanisms. Borehole strength and damage development are dependent on anisotropy orientation and fluid chemistry. Observed failure geometries, particularly for Mancos Shale, can be highly asymmetric. These results demonstrate uncertainties in in situ stress measurements using commonly applied borehole breakout techniques in complicated borehole physico-chemical environments. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. SAND2017-8259 A

  20. Hydrogen Fuel Quality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rockward, Tommy

    2012-07-16

    For the past 6 years, open discussions and meetings have been held, and are still ongoing, with OEMs, hydrogen suppliers, other test facilities from the North America Team, and international collaborators regarding experimental results, fuel clean-up cost, modeling, and analytical techniques to help determine levels of constituents for the development of an international standard for hydrogen fuel quality (ISO TC197 WG-12). Significant progress has been made. The process for the fuel standard is entering its final stages as a result of the technical accomplishments. The objectives are to: (1) determine the allowable levels of hydrogen fuel contaminants in support of the development of science-based international standards for hydrogen fuel quality (ISO TC197 WG-12); and (2) validate the ASTM test method for determining low levels of non-hydrogen constituents.

  1. A Group Wage Incentive System for Production Workers at Pearl Harbor Naval Shipyard: Test and Evaluation

    DTIC Science & Technology

    1985-09-01

    assume they will result in a sweatshop atmosphere. Workers may have fears that management will tighten standards when performance improves or...the shop’s performance can have a major impact on overall shipyard performance. In addition, the potential for accurate performance measurement was... impact of this experimental productivity improvement technique on participants’ job attitudes is supported in the literature. White et al. (in

  2. Holographic interferometry of transparent media with reflection from imbedded test objects

    NASA Technical Reports Server (NTRS)

    Prikryl, I.; Vest, C. M.

    1981-01-01

    In applying holographic interferometry, opaque objects blocking a portion of the optical beam used to form the interferogram give rise to incomplete data for standard computer tomography algorithms. An experimental technique for circumventing the problem of data blocked by opaque objects is presented. The missing data are completed by forming an interferogram using light backscattered from the opaque object, which is assumed to be diffuse. The problem of fringe localization is considered.

  3. Standardizing the experimental conditions for using urine in NMR-based metabolomic studies with a particular focus on diagnostic studies: a review.

    PubMed

    Emwas, Abdul-Hamid; Luchinat, Claudio; Turano, Paola; Tenori, Leonardo; Roy, Raja; Salek, Reza M; Ryan, Danielle; Merzaban, Jasmeen S; Kaddurah-Daouk, Rima; Zeri, Ana Carolina; Nagana Gowda, G A; Raftery, Daniel; Wang, Yulan; Brennan, Lorraine; Wishart, David S

    The metabolic composition of human biofluids can provide important diagnostic and prognostic information. Among the biofluids most commonly analyzed in metabolomic studies, urine appears to be particularly useful. It is abundant, readily available, easily stored and can be collected by simple, noninvasive techniques. Moreover, given its chemical complexity, urine is particularly rich in potential disease biomarkers. This makes it an ideal biofluid for detecting or monitoring disease processes. Among the metabolomic tools available for urine analysis, NMR spectroscopy has proven to be particularly well-suited, because the technique is highly reproducible and requires minimal sample handling. As it permits the identification and quantification of a wide range of compounds, independent of their chemical properties, NMR spectroscopy has been frequently used to detect or discover disease fingerprints and biomarkers in urine. Although protocols for NMR data acquisition and processing have been standardized, no consensus on protocols for urine sample selection, collection, storage and preparation in NMR-based metabolomic studies has been developed. This lack of consensus may be leading to spurious biomarkers being reported and may account for a general lack of reproducibility between laboratories. Here, we review a large number of published studies on NMR-based urine metabolic profiling with the aim of identifying key variables that may affect the results of metabolomics studies. From this survey, we identify a number of issues that require either standardization or careful accounting in experimental design and provide some recommendations for urine collection, sample preparation and data acquisition.

  4. Making Mass Spectrometry See the Light: The Promises and Challenges of Cryogenic Infrared Ion Spectroscopy as a Bioanalytical Technique

    PubMed Central

    Cismesia, Adam P.; Bailey, Laura S.; Bell, Matthew R.; Tesler, Larry F.; Polfer, Nicolas C.

    2016-01-01

    The detailed chemical information contained in the vibrational spectrum of a cryogenically cooled analyte would, in principle, make infrared (IR) ion spectroscopy a gold standard technique for molecular identification in mass spectrometry. Despite this immense potential, there are considerable challenges in both instrumentation and methodology to overcome before the technique is analytically useful. Here, we discuss the promise of IR ion spectroscopy for small molecule analysis in the context of metabolite identification. Experimental strategies to address sensitivity constraints, poor overall duty cycle, and speed of the experiment are intimately tied to the development of a mass-selective cryogenic trap. Therefore, the most likely avenues for success, in the authors' opinion, are presented here, alongside alternative approaches and some thoughts on data interpretation. PMID:26975370

  5. Use of Tc-99m-galactosyl-neoglycoalbumin (Tc-NGA) to determine hepatic blood flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stadalnik, R.C.; Vera, D.R.; Woodle, E.S.

    1984-01-01

    Tc-NGA is a new liver radiopharmaceutical which binds to a hepatocyte-specific membrane receptor. Three characteristics of Tc-NGA can be exploited in the measurement of hepatic blood flow (HBF): 1) the ability to alter the affinity of Tc-NGA for its receptor by changing the galactose:albumin ratio; 2) the ability to achieve a high specific activity with Tc-99m labeling; and 3) the ability to administer a high molar dose of Tc-NGA without physiologic side effects. In addition, kinetic modeling of Tc-NGA dynamic data can provide estimates of hepatic receptor concentration. In experimental studies in young pigs, HBF was determined using two techniques: 1) kinetic modeling of dynamic data using moderate-affinity, low-specific-activity Tc-NGA (Group A, n=12); and 2) a clearance (CL) technique using high-affinity, high-specific-activity Tc-NGA (Group B, n=4). In both groups, HBF was determined simultaneously by continuous infusion of indocyanine green (CI-ICG) with hepatic vein sampling. Regression analysis of HBF measurements obtained with the Tc-NGA kinetic modeling technique and the CI-ICG technique (Group A) revealed good correlation between the two techniques (r=0.802, p=0.02). Similarly, HBF determination by the clearance technique (Group B) provided highly accurate measurements when compared to the CI-ICG technique. Hepatic blood flow measurements by the clearance technique (CL-NGA) fell within one standard deviation of the error associated with each CI-ICG HBF measurement (all CI-ICG standard deviations were less than 10%).

  6. Characterizing the energy output generated by a standard electric detonator using shadowgraph imaging

    NASA Astrophysics Data System (ADS)

    Petr, V.; Lozano, E.

    2017-09-01

    This paper provides an overview of a complete method for characterizing the explosive energy output of a standard detonator. Measurements of the output of explosives are commonly based upon the detonation parameters of the chemical energy content of the explosive. These quantities provide a correct understanding of the energy stored in an explosive, but they do not provide a direct measure of the different modes in which the energy is released. This optically based technique combines high-speed and ultra-high-speed imaging to characterize the casing fragmentation and the detonator-driven shock load. The procedure presented here could be used as an alternative to current indirect methods, such as the Trauzl lead block test, because of its simplicity, high data accuracy, and minimal demand for test repetition. This technique was applied to experimentally measure air shock expansion versus time and to calculate the blast wave energy from the detonation of the high explosive charge inside the detonator. Direct measurements of the shock front geometry provide insight into the physics of the initiation buildup. Because of their geometry, standard detonators show an initial ellipsoidal shock expansion that degenerates into a final spherical wave. This non-uniform shape creates variable blast parameters along the primary blast wave. Additionally, optical measurements are validated using piezoelectric pressure transducers. The energy fraction spent in the acceleration of the metal shell is experimentally measured and correlated with the Gurney model, as well as with several empirical formulations for blasts from fragmenting munitions. The fragment area distribution is also studied using digital particle imaging analysis and correlated with the Mott distribution. Understanding the fragment distribution plays a critical role when performing hazard evaluation for these types of devices. In general, this technique allows for characterization of the detonator to within 6-8% error with no knowledge of the amount or type of explosive contained within the shell, making it also suitable for the study of unknown improvised explosive devices.
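    For reference, the Gurney model mentioned above takes, for a cylindrical charge, the commonly quoted form

    ```latex
    \frac{v}{\sqrt{2E}} = \left(\frac{M}{C} + \frac{1}{2}\right)^{-1/2}
    ```

    where v is the terminal fragment velocity, M the casing mass, C the explosive charge mass, and √(2E) the Gurney velocity characteristic of the explosive.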

  7. Does Angling Technique Selectively Target Fishes Based on Their Behavioural Type?

    PubMed Central

    Wilson, Alexander D. M.; Brownscombe, Jacob W.; Sullivan, Brittany; Jain-Schlaepfer, Sofia; Cooke, Steven J.

    2015-01-01

    Recently, there has been growing recognition that fish harvesting practices can have important impacts on the phenotypic distributions and diversity of natural populations through a phenomenon known as fisheries-induced evolution. Here we experimentally show that two common recreational angling techniques (active crank baits versus passive soft plastics) differentially target wild largemouth bass (Micropterus salmoides) and rock bass (Ambloplites rupestris) based on variation in their behavioural tendencies. Fish were first angled in the wild using both techniques and then brought back to the laboratory and tested for individual-level differences in common estimates of personality (refuge emergence, flight-initiation distance, latency-to-recapture with a net, and general activity) in an in-lake experimental arena. We found that different angling techniques appear to selectively target these species based on their boldness (as characterized by refuge emergence, a standard measure of boldness in fishes) but not other assays of personality. We also observed that body size was independently a significant predictor of personality in both species, though this varied between traits and species. Our results suggest a context-dependency for vulnerability to capture relative to behaviour in these fish species. Ascertaining the selective pressures angling practices exert on natural populations is an important area of fisheries research with significant implications for ecology, evolution, and resource management. PMID:26284779

  8. Novel Augmentation Technique for Patellar Tendon Repair Improves Strength and Decreases Gap Formation: A Cadaveric Study.

    PubMed

    Black, James C; Ricci, William M; Gardner, Michael J; McAndrew, Christopher M; Agarwalla, Avinesh; Wojahn, Robert D; Abar, Orchid; Tang, Simon Y

    2016-12-01

    Patellar tendon ruptures commonly are repaired using transosseous patellar drill tunnels with modified-Krackow sutures in the patellar tendon. This simple suture technique has been associated with failure rates and poor clinical outcomes in a modest proportion of patients. Failure of this repair technique can result from gap formation during loading or a single catastrophic event. Several augmentation techniques have been described to improve the integrity of the repair, but standardized biomechanical evaluation of repair strength among different techniques is lacking. The purpose of this study was to describe a novel figure-of-eight suture technique to augment traditional fixation and evaluate its biomechanical performance. We hypothesized that the augmentation technique would (1) reduce gap formation during cyclic loading and (2) increase the maximum load to failure. Ten pairs (two male, eight female) of fresh-frozen cadaveric knees free of overt disorders or patellar tendon damage were used (average donor age, 76 years; range, 65-87 years). For each pair, one specimen underwent the standard transosseous tunnel suture repair with a modified-Krackow suture technique and the second underwent the standard repair with our experimental augmentation method. Nine pairs were suitable for testing. Each specimen underwent cyclic loading while continuously measuring gap formation across the repair. At the completion of cyclic loading, load to failure testing was performed. A difference in gap formation and mean load to failure was seen in favor of the augmentation technique. At 250 cycles, a 68% increase in gap formation was seen for the control group (control: 5.96 ± 0.86 mm [95% CI, 5.30-6.62 mm]; augmentation: 3.55 ± 0.56 mm [95% CI, 3.12-3.98 mm]; p = 0.02). The mean load to failure was 13% greater in the augmentation group (control: 899.57 ± 96.94 N [95% CI, 825.06-974.09 N]; augmentation: 1030.70 ± 122.41 N [95% CI, 936.61-1124.79 N]; p = 0.01). This biomechanical study showed improved performance of a novel augmentation technique compared with the standard repair, in terms of reduced gap formation during cyclic loading and increased maximum load to failure. Decreased gap formation and higher load to failure may improve healing potential and minimize failure risk. This study shows a potential biomechanical advantage of the augmentation technique, providing support for future clinical investigations comparing this technique with other repair methods that are in common use such as transosseous suture repair.

  9. Localized analysis of paint-coat drying using dynamic speckle interferometry

    NASA Astrophysics Data System (ADS)

    Sierra-Sosa, Daniel; Tebaldi, Myrian; Grumel, Eduardo; Rabal, Hector; Elmaghraby, Adel

    2018-07-01

    Paint coating is part of several industrial processes, including the automotive industry, architectural coatings, machinery, and appliances. These paint coatings must comply with high quality standards; for this reason, evaluation techniques for paint coatings are in constant development. One important factor in the paint-coating process is drying, as it influences the quality of the final result. In this work we present an assessment technique based on optical dynamic speckle interferometry; this technique allows for the temporal activity evaluation of the paint-coating drying process, providing localized information on drying. This localized information is relevant in order to address drying homogeneity, optimal drying, and quality control. The technique relies on the definition of a new temporal history of the speckle patterns to obtain the local activity; this information is then clustered to provide a convenient indicator of the different drying process stages. The experimental results presented were validated using gravimetric drying curves.
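    For orientation, here is a minimal sketch of a classical dynamic-speckle activity measure: build the temporal history of each pixel, form the co-occurrence matrix of successive intensity levels, and compute its inertia moment (higher values indicate more activity, i.e., wetter paint). The paper defines a new temporal history, which is not reproduced here; frames below are synthetic.

    ```python
    import numpy as np

    def inertia_moment(stack: np.ndarray) -> float:
        """stack: (T, H, W) uint8 speckle frames; returns the inertia moment of the
        co-occurrence matrix of successive intensities (higher = more activity)."""
        t = stack.shape[0]
        series = stack.reshape(t, -1).astype(np.intp)    # one time series per pixel
        com = np.zeros((256, 256), dtype=np.float64)     # co-occurrence matrix
        np.add.at(com, (series[:-1].ravel(), series[1:].ravel()), 1.0)
        com /= np.clip(com.sum(axis=1, keepdims=True), 1.0, None)  # row-normalize
        i, j = np.indices(com.shape)
        return float((com * (i - j) ** 2).sum())

    frames = np.random.default_rng(3).integers(0, 256, (32, 64, 64), dtype=np.uint8)
    print(inertia_moment(frames))    # illustrative only
    ```

    Computing the measure over small windows rather than the whole frame yields the kind of localized drying map the paper is after.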

  10. Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force.

    PubMed

    Reed Johnson, F; Lancsar, Emily; Marshall, Deborah; Kilambi, Vikram; Mühlbacher, Axel; Regier, Dean A; Bresnahan, Brian W; Kanninen, Barbara; Bridges, John F P

    2013-01-01

    Stated-preference methods are a class of evaluation techniques for studying the preferences of patients and other stakeholders. While these methods span a variety of techniques, conjoint-analysis methods, and particularly discrete-choice experiments (DCEs), have become the most frequently applied approach in health care in recent years. Experimental design is an important stage in the development of such methods, but establishing a consensus on standards is hampered by lack of understanding of available techniques and software. This report builds on the previous ISPOR Conjoint Analysis Task Force Report: Conjoint Analysis Applications in Health - A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. This report aims to assist researchers specifically in evaluating alternative approaches to experimental design, a difficult and important element of successful DCEs. While this report does not endorse any specific approach, it does provide a guide for choosing an approach that is appropriate for a particular study. In particular, it provides an overview of the role of experimental designs for the successful implementation of the DCE approach in health care studies, and it provides researchers with an introduction to constructing experimental designs on the basis of study objectives and the statistical model researchers have selected for the study. The report outlines the theoretical requirements for designs that identify choice-model preference parameters and summarizes and compares a number of available approaches for constructing experimental designs. The task-force leadership group met via bimonthly teleconferences and in person at ISPOR meetings in the United States and Europe. An international group of experimental-design experts was consulted during this process to discuss existing approaches for experimental design and to review the task force's draft reports. In addition, ISPOR members contributed to developing a consensus report by submitting written comments during the review process and oral comments during two forum presentations at the ISPOR 16th and 17th Annual International Meetings held in Baltimore (2011) and Washington, DC (2012). Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  11. Physiotherapists use a small number of behaviour change techniques when promoting physical activity: A systematic review comparing experimental and observational studies.

    PubMed

    Kunstler, Breanne E; Cook, Jill L; Freene, Nicole; Finch, Caroline F; Kemp, Joanne L; O'Halloran, Paul D; Gaida, James E

    2018-06-01

    Physiotherapists promote physical activity as part of their practice. This study reviewed the behaviour change techniques physiotherapists use when promoting physical activity in experimental and observational studies. Systematic review of experimental and observational studies. Twelve databases were searched using terms related to physiotherapy and physical activity. We included experimental studies evaluating the efficacy of physiotherapist-led physical activity interventions delivered in clinic-based private practice and outpatient settings to adults with, or at risk of, non-communicable diseases. Observational studies reporting the techniques physiotherapists use when promoting physical activity were also included. The behaviour change techniques used in all studies were identified using the Behaviour Change Technique Taxonomy. The behaviour change techniques appearing in efficacious and inefficacious experimental interventions were compared using a narrative approach. Twelve studies (nine experimental and three observational) were retained from the initial search yield of 4141. Risk of bias ranged from low to high. Physiotherapists used seven behaviour change techniques in the observational studies, compared to 30 behaviour change techniques in the experimental studies. Social support (unspecified) was the most frequently identified behaviour change technique across both settings. Efficacious experimental interventions used more behaviour change techniques (n=29) and functioned in more ways (n=6) than did inefficacious experimental interventions (behaviour change techniques=10 and functions=1). Physiotherapists use a small number of behaviour change techniques. Fewer behaviour change techniques were identified in observational studies than in experimental studies, suggesting physiotherapists use fewer BCTs clinically than experimentally. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  12. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    As fracture mechanics material testing evolves, the governing test standards continue to be refined to better reflect the latest understanding of the physics of the fracture processes involved. The traditional format of ASTM fracture testing standards, utilizing equations expressed directly in the text of the standard to assess the experimental result, is self-limiting in the complexity that can be reasonably captured. The use of automated analysis techniques to draw upon a rich, detailed solution database for assessing fracture mechanics tests provides a foundation for a new approach to testing standards that enables routine users to obtain highly reliable assessments of tests involving complex, non-linear fracture behavior. Herein, the case for automating the analysis of tests of surface cracks in tension in the elastic-plastic regime is utilized as an example of how such a database can be generated and implemented for use in the ASTM standards framework. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  13. Experimental Validation of Displacement Underestimation in ARFI Ultrasound

    PubMed Central

    Czernuszewicz, Tomasz J.; Streeter, Jason E.; Dayton, Paul A.; Gallippi, Caterina M.

    2014-01-01

    Acoustic radiation force impulse (ARFI) imaging is an elastography technique that uses ultrasonic pulses to both displace and track tissue motion. Previous modeling studies have shown that ARFI displacements are susceptible to underestimation due to lateral and elevational shearing that occurs within the tracking resolution cell. In this study, optical tracking was utilized to experimentally measure the displacement underestimation achieved by acoustic tracking using a clinical ultrasound system. Three optically translucent phantoms of varying stiffness were created, embedded with sub-wavelength diameter microspheres, and ARFI excitation pulses with F/1.5 or F/3 lateral focal configurations were transmitted from a standard linear array to induce phantom motion. Displacements were tracked using confocal optical and acoustic methods. As predicted by earlier FEM studies, significant acoustic displacement underestimation was observed for both excitation focal configurations; the maximum underestimation error was 35% of the optically measured displacement for the F/1.5 excitation pulse in the softest phantom. Using higher F/#, less tightly focused beams in the lateral dimension improved accuracy of displacements by approximately 10 percentage points. This work experimentally demonstrates limitations of ARFI implemented on a clinical scanner using a standard linear array and sets up a framework for future displacement tracking validation studies. PMID:23858054

  14. A straightforward experimental method to evaluate the Lamb-Mössbauer factor of a 57Co/Rh source

    NASA Astrophysics Data System (ADS)

    Spina, G.; Lantieri, M.

    2014-01-01

    In analyzing Mössbauer spectra by means of the integral transmission function, a correct evaluation of the recoilless factor f_s of the source at the position of the sample is needed. A novel method to evaluate f_s for a 57Co source is proposed. The method uses the standard transmission experimental setup and does not require measurements beyond those that are mandatory in order to center the Mössbauer line and to calibrate the Mössbauer transducer. First, the background counts are evaluated by collecting a standard Multi Channel Scaling (MCS) spectrum of a thick metallic iron foil absorber and two Pulse Height Analysis (PHA) spectra with the same live-time, setting the maximum velocity of the transducer at the same value as for the MCS spectrum. Second, f_s is evaluated by fitting the collected MCS spectrum through the integral transmission approach. A test of the suitability of the technique is also presented.

  15. [Research progress on mechanical performance evaluation of artificial intervertebral disc].

    PubMed

    Li, Rui; Wang, Song; Liao, Zhenhua; Liu, Weiqiang

    2018-03-01

    The mechanical properties of an artificial intervertebral disc (AID) are related to the long-term reliability of the prosthesis. Three testing approaches, based on different tools, are involved in the mechanical performance evaluation of AIDs: testing with a mechanical simulator, in vitro specimen testing, and finite element analysis. In this study, the testing standards, testing equipment, and materials for AIDs were first introduced. Then, the present status of AID static mechanical property tests (static axial compression, static axial compression-shear), dynamic mechanical property tests (dynamic axial compression, dynamic axial compression-shear), creep and stress relaxation tests, device push-out tests, core push-out tests, subsidence tests, etc. was reviewed. The experimental techniques used in the in vitro specimen testing method and the testing results for available artificial discs were summarized. The experimental methods and research status of finite element analysis were also summarized. Finally, research trends in AID mechanical performance evaluation were forecast.

  16. Structural characterization and numerical simulations of flow properties of standard and reservoir carbonate rocks using micro-tomography

    NASA Astrophysics Data System (ADS)

    Islam, Amina; Chevalier, Sylvie; Sassi, Mohamed

    2018-04-01

    With advances in imaging techniques and computational power, Digital Rock Physics (DRP) is becoming an increasingly popular tool to characterize reservoir samples and determine their internal structure and flow properties. In this work, we present the details of imaging, segmentation, and numerical simulation of single-phase flow through a standard homogeneous Silurian dolomite core plug sample as well as a heterogeneous sample from a carbonate reservoir. We develop a procedure that integrates experimental results into the segmentation step to calibrate the porosity. We also compare two different numerical tools for the simulation, namely Avizo Fire Xlab Hydro, which solves the Stokes equations via the finite volume method, and Palabos, which solves the same equations using the Lattice Boltzmann Method. Representative Elementary Volume (REV) and isotropy studies are conducted on the two samples, and we show how DRP can be a useful tool to characterize rock properties that are time-consuming and costly to obtain experimentally.

  17. An Experimental Introduction to Acoustics

    NASA Astrophysics Data System (ADS)

    Black, Andy Nicholas; Magruder, Robert H.

    2017-11-01

    Learning and understanding physics requires more than studying physics texts; it requires doing physics. Doing research is a key opportunity for students to connect physical principles with their everyday experience, and a powerful way to introduce students to research and technique is through subjects in which they might find interest. Presented is an experiment that introduces an advanced undergraduate or high school student to research in acoustics, involving a standard dreadnought acoustic guitar, recording-industry equipment, and relevant industrial analysis software. The experimental process is applicable to a wide range of acoustical topics, including both acoustic and electric instruments, and gives the student hands-on experience with relevant audio engineering technology while studying physical principles.

  18. Experimenter's laboratory for visualized interactive science

    NASA Technical Reports Server (NTRS)

    Hansen, Elaine R.; Klemp, Marjorie K.; Lasater, Sally W.; Szczur, Marti R.; Klemp, Joseph B.

    1992-01-01

    The science activities of the 1990's will require the analysis of complex phenomena and large, diverse sets of data. Meeting these needs requires advanced user interaction techniques: modern user interface tools; visualization capabilities; affordable, high-performance graphics workstations; and interoperable data standards and translators. To this end, we propose to adopt and upgrade several existing tools and systems to create an experimenter's laboratory for visualized interactive science. Intuitive human-computer interaction techniques have already been developed and demonstrated at the University of Colorado. The Transportable Applications Executive (TAE+), developed at GSFC, is a powerful user interface tool for general-purpose applications. A 3D visualization package developed by NCAR provides both color-shaded surface displays and volumetric rendering in either index or true color. The Network Common Data Form (NetCDF) data access library developed by Unidata supports creation, access and sharing of scientific data in a form that is self-describing and network transparent. The combination and enhancement of these packages constitutes a powerful experimenter's laboratory capable of meeting key science needs of the 1990's. This proposal encompasses the work required to build and demonstrate this capability.

  19. Experimenter's laboratory for visualized interactive science

    NASA Technical Reports Server (NTRS)

    Hansen, Elaine R.; Klemp, Marjorie K.; Lasater, Sally W.; Szczur, Marti R.; Klemp, Joseph B.

    1993-01-01

    The science activities of the 1990's will require the analysis of complex phenomena and large, diverse sets of data. Meeting these needs requires advanced user interaction techniques: modern user interface tools; visualization capabilities; affordable, high-performance graphics workstations; and interoperable data standards and translators. To this end, we propose to adopt and upgrade several existing tools and systems to create an experimenter's laboratory for visualized interactive science. Intuitive human-computer interaction techniques have already been developed and demonstrated at the University of Colorado. The Transportable Applications Executive (TAE+), developed at GSFC, is a powerful user interface tool for general-purpose applications. A 3D visualization package developed by NCAR provides both color-shaded surface displays and volumetric rendering in either index or true color. The Network Common Data Form (NetCDF) data access library developed by Unidata supports creation, access and sharing of scientific data in a form that is self-describing and network transparent. The combination and enhancement of these packages constitutes a powerful experimenter's laboratory capable of meeting key science needs of the 1990's. This proposal encompasses the work required to build and demonstrate this capability.

  20. Parametric source of two-photon states with a tunable degree of entanglement and mixing: Experimental preparation of Werner states and maximally entangled mixed states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinelli, C.; Di Nepi, G.; De Martini, F.

    2004-08-01

    A parametric source of polarization-entangled photon pairs with striking spatial characteristics is reported. The distribution of the output electromagnetic k modes excited by spontaneous parametric down-conversion and coupled to the output detectors can be very broad. Using these states realized over a full entanglement ring output distribution, the nonlocal properties of the generated entanglement have been tested by standard Bell measurements and by Ou-Mandel interferometry. A 'mode-patchwork' technique based on the quantum superposition principle is adopted to synthesize in a straightforward and reliable way any kind of mixed state, of large conceptual and technological interest in modern quantum information. Tunable Werner states and maximally entangled mixed states have indeed been created by this technique and investigated by quantum tomography. A study of the entropic and nonlocal properties of these states has been undertaken experimentally and theoretically, by a unifying variational approach.

  1. Joint OSNR monitoring and modulation format identification in digital coherent receivers using deep neural networks.

    PubMed

    Khan, Faisal Nadeem; Zhong, Kangping; Zhou, Xian; Al-Arashi, Waled Hussein; Yu, Changyuan; Lu, Chao; Lau, Alan Pak Tao

    2017-07-24

    We experimentally demonstrate the use of deep neural networks (DNNs) in combination with signals' amplitude histograms (AHs) for simultaneous optical signal-to-noise ratio (OSNR) monitoring and modulation format identification (MFI) in digital coherent receivers. The proposed technique automatically extracts the OSNR- and modulation-format-dependent features of AHs, obtained after constant modulus algorithm (CMA) equalization, and exploits them for the joint estimation of these parameters. Experimental results for 112 Gbps polarization-multiplexed (PM) quadrature phase-shift keying (QPSK), 112 Gbps PM 16 quadrature amplitude modulation (16-QAM), and 240 Gbps PM 64-QAM signals demonstrate OSNR monitoring with mean estimation errors of 1.2 dB, 0.4 dB, and 1 dB, respectively. Similarly, the results for MFI show 100% identification accuracy for all three modulation formats. The proposed technique applies deep machine learning algorithms inside a standard digital coherent receiver and does not require any additional hardware. Therefore, it is attractive for cost-effective multi-parameter estimation in next-generation elastic optical networks (EONs).
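
    A minimal sketch of the idea, assuming 80-bin amplitude histograms as features and scikit-learn multilayer perceptrons standing in for the paper's DNNs; the bin count, layer sizes, and all data below are placeholder assumptions:

      import numpy as np
      from sklearn.neural_network import MLPClassifier, MLPRegressor

      # Placeholder training set: each row is an 80-bin amplitude histogram
      # computed from CMA-equalized symbols; labels are format and OSNR (dB).
      rng = np.random.default_rng(0)
      X = rng.random((600, 80))
      fmt = rng.choice(['QPSK', '16QAM', '64QAM'], 600)
      osnr = rng.uniform(10, 30, 600)

      # Two small networks trained on the same features give joint MFI + OSNR.
      mfi = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, fmt)
      mon = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, osnr)
      print(mfi.predict(X[:3]), mon.predict(X[:3]))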

  2. Reference Data for the Density, Viscosity, and Surface Tension of Liquid Al-Zn, Ag-Sn, Bi-Sn, Cu-Sn, and Sn-Zn Eutectic Alloys

    NASA Astrophysics Data System (ADS)

    Dobosz, Alexandra; Gancarz, Tomasz

    2018-03-01

    The data for the physicochemical properties viscosity, density, and surface tension obtained by different experimental techniques have been analyzed for liquid Al-Zn, Ag-Sn, Bi-Sn, Cu-Sn, and Sn-Zn eutectic alloys. All experimental data sets have been categorized and described by the year of publication, the technique used to obtain the data, the purity of the samples and their compositions, the quoted uncertainty, the number of data in the data set, the form of data, and the temperature range. The proposed standard deviations of liquid eutectic Al-Zn, Ag-Sn, Bi-Sn, Cu-Sn, and Sn-Zn alloys are 0.8%, 0.1%, 0.5%, 0.2%, and 0.1% for the density, 8.7%, 4.1%, 3.6%, 5.1%, and 4.0% for viscosity, and 1.0%, 0.5%, 0.3%, N/A, and 0.4% for surface tension, respectively, at a confidence level of 95%.

  3. Improving Identification of Dijet Resonances at Hadron Colliders

    NASA Astrophysics Data System (ADS)

    Izaguirre, Eder; Shuve, Brian; Yavin, Itay

    2015-01-01

    The experimental detection of resonances has played a vital role in the development of subatomic physics. The overwhelming multijet backgrounds at the Large Hadron Collider (LHC) necessitate the invention of new techniques to identify resonances decaying into a pair of partons. In this Letter we introduce an observable that achieves a significant improvement in several key measurements at the LHC: the Higgs boson decay to a pair of b quarks; W±/Z0 vector-boson hadronic decay; and extensions of the standard model (SM) with a new hadronic resonance. Measuring the Higgs decay to b quarks is a central test of the fermion mass generation mechanism in the SM, whereas the W±/Z0 production rates are important observables of the electroweak sector. Our technique is effective in large parts of phase space where the resonance is mildly boosted and is particularly well suited for experimental searches dominated by systematic uncertainties, which is true of many analyses in the high-luminosity running of the LHC.

  4. Improving identification of dijet resonances at hadron colliders.

    PubMed

    Izaguirre, Eder; Shuve, Brian; Yavin, Itay

    2015-01-30

    The experimental detection of resonances has played a vital role in the development of subatomic physics. The overwhelming multijet backgrounds at the Large Hadron Collider (LHC) necessitate the invention of new techniques to identify resonances decaying into a pair of partons. In this Letter we introduce an observable that achieves a significant improvement in several key measurements at the LHC: the Higgs boson decay to a pair of b quarks; W±/Z0 vector-boson hadronic decay; and extensions of the standard model (SM) with a new hadronic resonance. Measuring the Higgs decay to b quarks is a central test of the fermion mass generation mechanism in the SM, whereas the W±/Z0 production rates are important observables of the electroweak sector. Our technique is effective in large parts of phase space where the resonance is mildly boosted and is particularly well suited for experimental searches dominated by systematic uncertainties, which is true of many analyses in the high-luminosity running of the LHC.

  5. Determining the wavelength spectrum of neutrons on the NG6 beam line at NCNR

    NASA Astrophysics Data System (ADS)

    Ivanov, Juliet

    2016-09-01

    Historically, in-beam experiments and bottle experiments have been performed to determine the lifetime of a free neutron. However, these two different experimental techniques have provided conflicting results. It is crucial to precisely and accurately elucidate the neutron lifetime for Big Bang Nucleosynthesis calculations and to investigate physics beyond the Standard Model. Therefore, we aimed to understand and minimize systematic errors present in the neutron beam experiment at the NIST Center for Neutron Research (NCNR). In order to reduce the uncertainty related to wavelength dependent corrections present in previous beam experiments, the wavelength spectrum of the NCNR reactor cold neutron beam must be known. We utilized a beam chopper and lithium detector to characterize the wavelength spectrum on the NG6 beam line at the NCNR. The experimental design and techniques employed will be discussed, and our results will be presented. Future plans to utilize our findings to improve the neutron lifetime measurement at NCNR will also be described.

  6. Distributed phase birefringence measurements based on polarization correlation in phase-sensitive optical time-domain reflectometers.

    PubMed

    Soto, Marcelo A; Lu, Xin; Martins, Hugo F; Gonzalez-Herraez, Miguel; Thévenaz, Luc

    2015-09-21

    In this paper a technique to measure the distributed birefringence profile along optical fibers is proposed and experimentally validated. The method is based on the spectral correlation between two sets of orthogonally-polarized measurements acquired using a phase-sensitive optical time-domain reflectometer (ϕOTDR). The correlation between the two measured spectra gives a resonance (correlation) peak at a frequency detuning that is proportional to the local refractive index difference between the two orthogonal polarization axes of the fiber. In this way the method enables local phase birefringence measurements at any position along optical fibers, so that any longitudinal fluctuation can be precisely evaluated with metric spatial resolution. The method has been experimentally validated by measuring fibers with low and high birefringence, such as standard single-mode fibers as well as conventional polarization-maintaining fibers. The technique has potential applications in the characterization of optical fibers for telecommunications as well as in distributed optical fiber sensing.
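
    A hedged sketch of the correlation step: given two frequency scans of the ϕOTDR trace at one fiber position, one per orthogonal polarization axis, the lag of the cross-correlation peak gives the local frequency detuning, converted here to a birefringence estimate under the simple proportionality Δn/n ≈ Δν/ν0; the function name and the constants are assumptions.

      import numpy as np

      def birefringence_at_point(spec_x, spec_y, df, nu0=193.4e12, n=1.468):
          """spec_x, spec_y: phi-OTDR intensity vs. laser frequency at one
          position for the two polarization axes; df: scan step in Hz.
          Returns a local phase-birefringence estimate (sketch only)."""
          x = spec_x - spec_x.mean()
          y = spec_y - spec_y.mean()
          corr = np.correlate(x, y, mode='full')
          lag = np.argmax(corr) - (len(x) - 1)   # peak offset in scan steps
          dnu = lag * df                         # frequency detuning in Hz
          return n * dnu / nu0                   # delta-n, assuming dn/n = dnu/nu0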

  7. Modeling behavior dynamics using computational psychometrics within virtual worlds.

    PubMed

    Cipresso, Pietro

    2015-01-01

    In case of fire in a building, how will people behave in the crowd? The behavior of each individual affects the behavior of others and, conversely, each one behaves considering the crowd as a whole and the individual others. In this article, I propose a three-step method to explore a brand new way to study behavior dynamics. The first step relies on the creation of specific situations with standard techniques (such as mental imagery, text, video, and audio) and an advanced technique, Virtual Reality (VR), to manipulate experimental settings. The second step concerns the measurement of behavior in one, two, or many individuals, focusing on parameter extraction to provide information about the behavior dynamics. Finally, the third step uses the parameters collected and measured in the previous two steps to simulate possible scenarios and thereby forecast, understand, and explain behavior dynamics at the social level through computational models. An experimental study is also included to demonstrate the three-step method and a possible scenario.

  8. An overview of clinical and experimental treatment modalities for port wine stains.

    PubMed

    Chen, Jennifer K; Ghasri, Pedram; Aguilar, Guillermo; van Drooge, Anne Margreet; Wolkerstorfer, Albert; Kelly, Kristen M; Heger, Michal

    2012-08-01

    Port wine stains (PWS) are the most common vascular malformation of the skin, occurring in 0.3% to 0.5% of the population. Noninvasive laser irradiation with flashlamp-pumped pulsed dye lasers (selective photothermolysis) currently comprises the gold standard treatment of PWS; however, the majority of PWS fail to clear completely after selective photothermolysis. In this review, the clinically used PWS treatment modalities (pulsed dye lasers, alexandrite lasers, neodymium:yttrium-aluminum-garnet lasers, and intense pulsed light) and techniques (combination approaches, multiple passes, and epidermal cooling) are discussed. Retrospective analysis of clinical studies published between 1990 and 2011 was performed to determine therapeutic efficacies for each clinically used modality/technique. In addition, factors that have resulted in the high degree of therapeutic recalcitrance are identified, and emerging experimental treatment strategies are addressed, including the use of photodynamic therapy, immunomodulators, angiogenesis inhibitors, hypobaric pressure, and site-specific pharmaco-laser therapy. Copyright © 2011 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.

  9. Experimental and computational study on the molecular energetics of indoline and indole.

    PubMed

    da Silva, Manuel A V Ribeiro; Cabral, Joana I T A; Gomes, José R B

    2008-11-27

    Static bomb calorimetry, Calvet microcalorimetry and the Knudsen effusion technique were used to determine the standard molar enthalpy of formation in the gas phase, at T = 298.15 K, of the indole and indoline heterocyclic compounds. The values obtained were 164.3 +/- 1.3 kJ x mol(-1) and 120.0 +/- 2.9 kJ x mol(-1), respectively. Several different computational approaches and different working reactions were used to estimate the gas-phase enthalpies of formation for indole and indoline. The computational approaches support the experimental results reported. The calculations were further extended to the determination of other properties such as bond dissociation enthalpies, gas-phase acidities, proton and electron affinities and ionization energies. The agreement between theoretical and experimental data for indole is very good, supporting the data calculated for indoline.

  10. Jet Substructure at the Large Hadron Collider : Experimental Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asquith, Lily; Campanelli, Mario; Delitzsch, Chris

    Jet substructure has emerged to play a central role at the Large Hadron Collider (LHC), where it has provided numerous innovative new ways to search for new physics and to probe the Standard Model, particularly in extreme regions of phase space. In this article we focus on a review of the development and use of state-of-the-art jet substructure techniques by the ATLAS and CMS experiments. ALICE and LHCb have been probing fragmentation functions since the start of the LHC and have also recently started studying other jet substructure techniques. It is likely that in the near future all LHC collaborations will make significant use of jet substructure and grooming techniques. Much of the work in this field in recent years has been galvanized by the Boost Workshop Series, which continues to inspire fruitful collaborations between experimentalists and theorists. We hope that this review will prove a useful introduction and reference to experimental aspects of jet substructure at the LHC. A companion overview of recent progress in theory and machine learning approaches is given in arXiv:1709.04464; the complete review will be submitted to Reviews of Modern Physics.

  11. Phase change events of volatile liquid perfluorocarbon contrast agents produce unique acoustic signatures

    PubMed Central

    Sheeran, Paul S.; Matsunaga, Terry O.; Dayton, Paul A.

    2015-01-01

    Phase-change contrast agents (PCCAs) provide a dynamic platform for approaching problems in medical ultrasound (US). Upon US-mediated activation, the liquid core vaporizes and expands to produce a gas bubble ideal for US imaging and therapy. In this study, we demonstrate through high-speed video microscopy and US interrogation that PCCAs composed of highly volatile perfluorocarbons (PFCs) exhibit unique acoustic behavior that can be detected and differentiated from standard microbubble contrast agents. Experimental results show that, when activated with short pulses, PCCAs will over-expand and undergo unforced radial oscillation while settling to a final bubble diameter. The size-dependent oscillation phenomenon generates a unique acoustic signal that can be passively detected in both the time and frequency domains using confocal piston transducers with an 'activate high' (8 MHz, 2 cycles), 'listen low' (1 MHz) scheme. Results show that the magnitude of the acoustic 'signature' increases as the PFC boiling point decreases. By using a band-limited spectral processing technique, the droplet signals can be isolated from controls and used to build experimental relationships between concentration and vaporization pressure. The techniques shown here may be useful for physical studies as well as for the development of droplet-specific imaging techniques. PMID:24351961
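
    A hedged sketch of the band-limited detection idea: integrate the received spectral power in a low band around the 1 MHz 'listen' transducer after the 8 MHz activation pulse; the band edges and names are assumptions, not the authors' processing chain.

      import numpy as np

      def low_band_energy(rx, fs, band=(0.5e6, 1.5e6)):
          """Band-limited spectral energy of a passively received signal rx
          sampled at fs; used here as a droplet-vaporization signature."""
          spec = np.abs(np.fft.rfft(rx)) ** 2
          freqs = np.fft.rfftfreq(len(rx), 1.0 / fs)
          mask = (freqs >= band[0]) & (freqs <= band[1])
          return spec[mask].sum()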

  12. Laser-induced dissociation processes of protonated glucose: dehydration reactions vs cross-ring dissociation

    NASA Astrophysics Data System (ADS)

    Dyakov, Y. A.; Kazaryan, M. A.; Golubkov, M. G.; Gubanova, D. P.; Bulychev, N. A.; Kazaryan, S. M.

    2018-04-01

    Studying the processes occurring in biological systems under irradiation is critically important for understanding the principles by which biological systems work. One of the main problems stimulating interest in photo-induced excitation and ionization of biomolecules is the necessity of their identification by various mass spectrometry (MS) methods. While simple analysis of small molecules became a standard MS technique a long time ago, recognition of large molecules, especially carbohydrates, is still a difficult problem and requires sophisticated techniques and complicated computer analysis. Owing to the large variety of substances in the samples, as well as the complexity of the processes occurring after excitation/ionization of the molecules, the recognition efficiency of MS techniques for carbohydrates is still not high enough. Additional theoretical and experimental analysis of ionization and dissociation processes in various kinds of polysaccharides, beginning from the simplest ones, is therefore necessary. In this work, we extend previous theoretical and experimental studies of saccharides and concentrate on protonated glucose. Particular attention is paid to the cross-ring dissociation and water-loss reactions because of their importance for identifying various isomers of carbohydrate molecules (for example, distinguishing α- and β-glucose).

  13. Fractality of pulsatile flow in speckle images

    NASA Astrophysics Data System (ADS)

    Nemati, M.; Kenjeres, S.; Urbach, H. P.; Bhattacharya, N.

    2016-05-01

    The scattering of coherent light from a system with underlying flow can be used to yield essential information about the dynamics of the process. In the case of pulsatile flow, there is a rapid change in the properties of the speckle images. This can be studied using the standard laser speckle contrast and also the fractality of the images. In this paper, we report the results of experiments performed to study pulsatile flow with speckle images under different experimental configurations, to verify the robustness of the techniques for applications. In order to study flow under various levels of complexity, the measurements were done for three in-vitro phantoms and two in-vivo situations. The pumping mechanisms ranged from mechanical pumps to the human heart in the in-vivo case. The speckle images were analyzed using the techniques of fractal dimension and speckle contrast analysis, and the results of these techniques for the various experimental scenarios were compared. The fractal dimension is the more sensitive measure for capturing the complexity of the signal, though it is also extremely sensitive to the properties of the scattering medium and, in comparison to speckle contrast, cannot recover the signal for thicker diffusers.
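
    As a hedged illustration of the two image measures compared here, the sketch below computes the speckle contrast K = σ/μ and a box-counting estimate of the fractal dimension of a binarized speckle image; the box sizes and the binarization threshold are assumptions left to the caller.

      import numpy as np

      def speckle_contrast(img):
          return img.std() / img.mean()            # K = sigma / mean

      def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
          """Fit log N(s) vs log(1/s), where N(s) is the number of occupied
          s-by-s boxes of the binarized image; the slope estimates the
          fractal dimension."""
          counts = []
          for s in sizes:
              h = binary.shape[0] // s * s
              w = binary.shape[1] // s * s
              blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(max((blocks.sum(axis=(1, 3)) > 0).sum(), 1))
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                                np.log(counts), 1)
          return slope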

  14. The Shock and Vibration Digest. Volume 12, Number 12,

    DTIC Science & Technology

    1980-12-01

    A technique for measuring accelerations is presented. It is shown that while the technique is theoretically correct, it is subject to experimental limitations due to inaccuracies in current accelerometer technology. Further fragments recoverable from this two-column digest entry concern the relationship of the so-called K-value of the proposed standard VDI 2057 and a better understanding of the fatigue life of wind turbine blades. Citation: R.G. Schwarz, Fortschritt-Berichte der VDI-Z., Series 8, No. 30, 188 pp, 22 figs, 7 tables (1980); summary in VDI-Z.

  15. Evaluated activation cross sections of longer-lived radionuclides produced by deuteron-induced reactions on natural copper

    NASA Astrophysics Data System (ADS)

    Takács, S.; Tárkányi, F.; Király, B.; Hermanne, A.; Sonck, M.

    2006-09-01

    Activation cross sections for deuteron-induced reactions on natural copper were measured by using a standard stacked foil technique up to 50 MeV deuteron bombarding energy. Reaction products with half-life longer than half an hour were studied. Experimental elemental cross sections were determined and compared with earlier measured data for 62,63,65Zn, 64Cu, 57,65Ni, 57,58,60Co and 59Fe isotopes.

  16. Defense switched network technology and experiments program

    NASA Astrophysics Data System (ADS)

    Weinstein, C. J.

    1983-09-01

    This report documents work performed during FY 1983 on the DCA-sponsored Defense Switched Network Technology and Experiments Program. The areas of work reported are: (1) development of routing algorithms for application in the Defense Switched Network (DSN); (2) instrumentation and integration of the Experimental Integrated Switched Network (EISN) test facility; (3) development and test of data communication techniques using DoD-standard data protocols in an integrated voice/data network; and (4) EISN system coordination and experiment planning.

  17. Frequency standards requirements of the NASA deep space network to support outer planet missions

    NASA Technical Reports Server (NTRS)

    Fliegel, H. F.; Chao, C. C.

    1974-01-01

    Navigation of Mariner spacecraft to Jupiter and beyond will require greater accuracy of positional determination than heretofore obtained if the full experimental capabilities of this type of spacecraft are to be utilized. Advanced navigational techniques which will be available by 1977 include Very Long Baseline Interferometry (VLBI), three-way Doppler tracking (sometimes called quasi-VLBI), and two-way Doppler tracking. It is shown that VLBI and quasi-VLBI methods depend on the same basic concept, and that they impose nearly the same requirements on the stability of frequency standards at the tracking stations. It is also shown how realistic modelling of spacecraft navigational errors prevents overspecifying the requirements on frequency stability.

  18. The impact of oxidation on spore and pollen chemistry: an experimental study

    NASA Astrophysics Data System (ADS)

    Jardine, Phillip; Fraser, Wesley; Lomax, Barry; Gosling, William

    2016-04-01

    Sporomorphs (pollen and spores) form a major component of the land plant fossil record. Sporomorphs have an outer wall composed of sporopollenin, a highly durable biopolymer whose chemistry contains both a signature of ambient ultraviolet-B flux and taxonomic information. Despite the high preservation potential of sporopollenin in the geological record, it is currently unknown how sensitive its chemical signature is to standard palynological processing techniques. Oxidation in particular is known to cause physical degradation of sporomorphs, and it is expected that this should have a concordant impact on sporopollenin chemistry. Here, we test this by experimentally oxidizing Lycopodium (clubmoss) spores using two common oxidation techniques: acetolysis and nitric acid. We also carry out acetolysis on eight angiosperm (flowering plant) taxa to test the generality of our results. Using Fourier Transform infrared (FTIR) spectroscopy, we find that acetolysis removes labile, non-fossilizable components of sporomorphs but has a limited impact upon the chemistry of sporopollenin under normal processing durations. Nitric acid is more aggressive and does break down sporopollenin and reorganize its chemical structure, but when limited to short treatments (i.e. ≤10 min) at room temperature, sporomorphs still contain most of the original chemical signal. These findings suggest that, when used carefully, oxidation does not adversely affect sporopollenin chemistry, and that palaeoclimatic and taxonomic signatures contained within the sporomorph wall are recoverable from standard palynological preparations.

  19. High-resolution frequency measurement method with a wide-frequency range based on a quantized phase step law.

    PubMed

    Du, Baoqiang; Dong, Shaofeng; Wang, Yanfeng; Guo, Shuting; Cao, Lingzhi; Zhou, Wei; Zuo, Yandi; Liu, Dan

    2013-11-01

    A wide-range, high-resolution frequency measurement method based on the quantized phase-step law is presented in this paper. Utilizing the variation law of the phase differences, direct different-frequency phase processing, and the phase group synchronization phenomenon, and combining an A/D converter with the adaptive phase-shifting principle, a counter gate is established at the phase coincidences occurring at one-group intervals, which eliminates the ±1 count error of the traditional frequency measurement method. More importantly, direct phase comparison, measurement, and control between any periodic signals are realized without frequency normalization. Experimental results show that sub-picosecond resolution can be easily obtained in frequency measurement, frequency standard comparison, and phase-locked control based on the phase quantization processing technique. The method may be widely used in navigation and positioning, space techniques, communication, radar, astronomy, atomic frequency standards, and other high-tech fields.

  20. Going Beyond QCD in Lattice Gauge Theory

    NASA Astrophysics Data System (ADS)

    Fleming, G. T.

    2011-01-01

    Strongly coupled gauge theories (SCGTs) have been studied theoretically for many decades using numerous techniques. The obvious motivation for these efforts stemmed from a desire to understand the source of the strong nuclear force: Quantum Chromodynamics (QCD). Guided by experimental results, theorists generally consider QCD to be a well-understood SCGT. Unfortunately, it is not clear how to extend the lessons learned from QCD to other SCGTs. Particularly urgent motivation for new studies of other SCGTs comes from the ongoing searches for physics beyond the standard model (BSM) at the Large Hadron Collider (LHC) and the Tevatron. Lattice gauge theory (LGT) is a technique for systematically improvable calculations in many SCGTs. It has become the standard for non-perturbative calculations in QCD, and it is widely believed that it may be useful for the study of other SCGTs in the realm of BSM physics. We discuss the prospects and potential pitfalls of these LGT studies, focusing primarily on the flavor dependence of SU(3) gauge theory.

  1. Risk Factors of Recurrence and Malignant Transformation of Sinonasal Inverted Papilloma

    PubMed Central

    Ścierski, Wojciech; Misiołek, Maciej

    2017-01-01

    Sinonasal inverted papilloma is a relatively rare disease; however, it is prevalent enough for every ENT practitioner to encounter it several times in routine practice. Despite developments in experimental and clinical medicine as well as surgical techniques, our knowledge of this disease is still inadequate. With improved imaging and better diagnostic techniques, proper diagnosis and qualification for a surgical approach leave no room for doubt. Although the endoscopic approach appears to be the gold standard for this condition, some cases may additionally require an external approach. Regardless of the type of surgery, postoperative management is crucial for both healing and long-term follow-up. Unfortunately, explicit and standardized postoperative management guidelines are still lacking. Moreover, an important open issue is the need for a biomarker indicative of inverted papilloma and its malignant transformation. Several markers have been in the researchers' spotlight, including SCCA, Ki-67, Bcl-2, the Wnt proteins, and many more. Nevertheless, the topic requires further investigation. PMID:29250552

  2. Chromatographic Separation of Cd from Plants via Anion-Exchange Resin for an Isotope Determination by Multiple Collector ICP-MS.

    PubMed

    Wei, Rongfei; Guo, Qingjun; Wen, Hanjie; Peters, Marc; Yang, Junxing; Tian, Liyan; Han, Xiaokun

    2017-01-01

    In this study, key factors affecting the chromatographic separation of Cd from plants, such as the resin column and the digestion and purification procedures, were experimentally investigated. A technique for separating Cd from plant samples based on single ion-exchange chromatography has been developed which is suitable for high-precision analysis of Cd isotopes by multiple-collector inductively coupled plasma mass spectrometry (MC-ICP-MS). The robustness of the technique was assessed by replicate analyses of Cd standard solutions and plant samples. The Cd yields of the whole separation process were higher than 95%, and the δ114/110Cd values of three secondary Cd standard solutions (Münster Cd, Spex Cd and Spex-1 Cd) relative to NIST SRM 3108 were measured accurately, enabling comparison with Cd isotope results obtained in other laboratories. Hence, stable Cd isotope analyses represent a powerful tool for fingerprinting specific Cd sources and/or examining biogeochemical reactions in ecological and environmental systems.

  3. Repetition code of 15 qubits

    NASA Astrophysics Data System (ADS)

    Wootton, James R.; Loss, Daniel

    2018-05-01

    The repetition code is an important primitive for the techniques of quantum error correction. Here we implement repetition codes of at most 15 qubits on the 16 qubit ibmqx3 device. Each experiment is run for a single round of syndrome measurements, achieved using the standard quantum technique of using ancilla qubits and controlled operations. The size of the final syndrome is small enough to allow for lookup table decoding using experimentally obtained data. The results show strong evidence that the logical error rate decays exponentially with code distance, as is expected and required for the development of fault-tolerant quantum computers. The results also give insight into the nature of noise in the device.
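
    A hedged sketch of lookup-table decoding as described: tabulate experimentally observed (syndrome, logical outcome) pairs, then decode each syndrome to the outcome seen most often during training; the names and the toy counts below are illustrative.

      from collections import Counter, defaultdict

      def build_lookup(training):
          """training: iterable of (syndrome_string, logical_bit) pairs."""
          table = defaultdict(Counter)
          for syndrome, logical in training:
              table[syndrome][logical] += 1
          return {s: c.most_common(1)[0][0] for s, c in table.items()}

      def decode(table, syndrome, default=0):
          return table.get(syndrome, default)

      # Majority behaviour emerges from the experimental counts themselves.
      train = [('00', 0)] * 90 + [('01', 0)] * 5 + [('01', 1)] * 3 + [('11', 1)] * 2
      print(decode(build_lookup(train), '01'))   # -> 0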

  4. Biodegradable Magnesium Stent Treatment of Saccular Aneurysms in a Rat Model - Introduction of the Surgical Technique.

    PubMed

    Nevzati, Edin; Rey, Jeannine; Coluccia, Daniel; D'Alonzo, Donato; Grüter, Basil; Remonda, Luca; Fandino, Javier; Marbacher, Serge

    2017-10-01

    The steady progress in the armamentarium of techniques available for endovascular treatment of intracranial aneurysms requires affordable and reproducible experimental animal models to test novel embolization materials such as stents and flow diverters. The aim of the present project was to design a safe, fast, and standardized surgical technique for stent-assisted embolization of saccular aneurysms in a rat model. Saccular aneurysms were created from an arterial graft from the descending aorta. The aneurysms were microsurgically transplanted through end-to-side anastomosis to the infrarenal abdominal aorta of a syngeneic male Wistar rat weighing >500 g. Following aneurysm anastomosis, aneurysm embolization was performed using balloon-expandable magnesium stents (2.5 mm x 6 mm). The stent system was introduced retrogradely from the lower abdominal aorta using a modified Seldinger technique. Following a pilot series of 6 animals, a total of 67 rats were operated on according to established standard operating procedures. Mean surgery time, mean anastomosis time, and mean suturing time of the artery puncture site were 167 ± 22 min, 26 ± 6 min and 11 ± 5 min, respectively. The mortality rate was 6% (n=4). The morbidity rate was 7.5% (n=5), and in-stent thrombosis was found in 4 cases (n=2 early, n=2 late). The results demonstrate the feasibility of standardized stent occlusion of saccular sidewall aneurysms in rats, with low rates of morbidity and mortality. This stent embolization procedure provides the opportunity to study novel concepts of stent- or flow-diverter-based devices as well as the molecular aspects of healing.

  5. Ultrasound-assisted emulsification microextraction for determination of 2,4,6-trichloroanisole in wine samples by gas chromatography tandem mass spectrometry.

    PubMed

    Fontana, Ariel R; Patil, Sangram H; Banerjee, Kaushik; Altamirano, Jorgelina C

    2010-04-28

    A fast and effective microextraction technique is proposed for preconcentration of 2,4,6-trichloroanisole (2,4,6-TCA) from wine samples prior to gas chromatography tandem mass spectrometric (GC-MS/MS) analysis. The proposed technique is based on ultrasonication (US) to favor the emulsification phenomenon during the extraction stage. Several variables influencing the relative response of the target analyte were studied and optimized. Under optimal experimental conditions, 2,4,6-TCA was quantitatively extracted, achieving enhancement factors (EF) ≥ 400 and limits of detection (LODs) of 0.6-0.7 ng L(-1), with relative standard deviations (RSDs) ≤ 11.3% when a 10 ng L(-1) 2,4,6-TCA standard-wine sample blend was analyzed. The calibration graphs for white and red wine were linear within the range 5-1000 ng L(-1), and determination coefficients (r(2)) were ≥ 0.9995. Validation of the methodology was carried out by the standard addition method at two concentrations (10 and 50 ng L(-1)), achieving recoveries >80% and indicating satisfactory robustness of the method. The methodology was successfully applied to the determination of 2,4,6-TCA in different wine samples.
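
    For illustration, the extrapolation at the heart of the standard addition method can be sketched as below: spike the sample with known analyte amounts, fit the response against the added concentration, and read the native concentration from the fit; all numbers are invented, not taken from the paper.

      import numpy as np

      added = np.array([0.0, 10.0, 50.0])        # ng/L of 2,4,6-TCA spiked
      signal = np.array([120.0, 230.0, 670.0])   # hypothetical GC-MS/MS areas

      slope, intercept = np.polyfit(added, signal, 1)
      native = intercept / slope                 # ng/L present before spiking
      print(f"native 2,4,6-TCA ~ {native:.1f} ng/L")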

  6. GNSS derived TEC data ingestion into IRI 2012

    NASA Astrophysics Data System (ADS)

    Migoya-Orué, Yenca; Nava, Bruno; Radicella, Sandro; Alazo-Cuartas, Katy

    2015-04-01

    Experimental vertical total electron content (VTEC) data given by Global Ionospheric Maps (GIM) have been ingested into IRI version 2012, with the aim of obtaining grids of effective input parameter values that minimize the difference between the experimental and modeled vertical TEC. Drawing on the experience gained with the model adaptation technique applied to NeQuick (Nava et al., 2005), it has been found possible to compute IRI world grids of effective ionosphere index parameters (IG). The IG grids thus obtained can be interpolated in space and time to calculate with IRI the 3D electron density at any location, and also the TEC along any ground-to-satellite ray path for a given epoch. In this study, the ingestion technique is presented together with an a posteriori validation and an assessment of the capability of the 'ingested' IRI to reproduce the day-to-day foF2 variability of the ionosphere during disturbed and quiet periods. The retrieved foF2 values are compared with data from about 20 worldwide ionosondes for selected periods of high (year 2000) and moderate-to-low (year 2006) solar activity. It was found that the ingestion scheme enhances the performance of the model compared with its standard use based on solar activity drivers (R12 and F10.7), especially for high solar activity. As an example, the mean and standard deviation of the differences between experimental and reconstructed F2-peak values for April 2000 are 0.09 and 1.28 MHz for the ingested IRI, compared to -0.81 and 1.27 MHz (IRI with R12 input) and -0.02 and 1.46 MHz (IRI with F10.7 input).

  7. A novel computer-aided method to fabricate a custom one-piece glass fiber dowel-and-core based on digitized impression and crown preparation data.

    PubMed

    Chen, Zhiyu; Li, Ya; Deng, Xuliang; Wang, Xinzhi

    2014-06-01

    Fiber-reinforced composite dowels have been widely used for their superior biomechanical properties; however, their preformed shape cannot fit irregularly shaped root canals. This study aimed to describe a novel computer-aided method to create a custom-made one-piece dowel-and-core based on the digitization of impressions and clinical standard crown preparations. A standard maxillary die stone model containing three prepared teeth (a maxillary lateral incisor, canine, and premolar), each requiring a dowel restoration, was made. It was then mounted on an average-value articulator with the mandibular stone model to simulate natural occlusion. Impressions of each tooth were obtained using vinylpolysiloxane with a sectional dual-arch tray and digitized with an optical scanner. The virtual dowel-and-core model was created by merging the 3D dowel data from the impression digitization with core data selected from a standard crown preparation database of 107 records collected from clinics and digitized. The position of the chosen digital core was manually adjusted to coordinate with the adjacent teeth and fulfill the crown restorative requirements. Based on the virtual models, one-piece custom dowel-and-cores for the three experimental teeth were milled from a glass fiber block with computer-aided manufacturing techniques. Furthermore, two patients were treated to evaluate the practicality of the new method. The one-piece glass fiber dowel-and-cores made for the experimental teeth fulfilled the clinical requirements for dowel restorations, and the treatment of the two patients validated the technique. This novel computer-aided method to create a custom one-piece glass fiber dowel-and-core proved to be practical and efficient. © 2013 by the American College of Prosthodontists.

  8. Standard plane localization in ultrasound by radial component model and selective search.

    PubMed

    Ni, Dong; Yang, Xin; Chen, Xin; Chin, Chien-Ting; Chen, Siping; Heng, Pheng Ann; Li, Shengli; Qin, Jing; Wang, Tianfu

    2014-11-01

    Acquisition of the standard plane is crucial for medical ultrasound diagnosis. However, this process requires substantial experience and a thorough knowledge of human anatomy, so it is very challenging for novices and even time-consuming for experienced examiners. We propose a hierarchical, supervised learning framework for automatically detecting the standard plane from consecutive 2-D ultrasound images. We tested this technique by developing a system that localizes the fetal abdominal standard plane from ultrasound video by detecting three key anatomical structures: the stomach bubble, umbilical vein and spine. We first proposed a novel radial component-based model to describe the geometric constraints of these key anatomical structures. We then introduced a novel selective search method which exploits the vessel probability algorithm to produce probable locations for the spine and umbilical vein. Next, using component classifiers trained by random forests, we detected the key anatomical structures at their probable locations within the regions constrained by the radial component-based model. Finally, a second-level classifier combined the results from the component detection to identify an ultrasound image as either a "fetal abdominal standard plane" or a "non-fetal abdominal standard plane." Experimental results on 223 fetal abdomen videos showed that the detection accuracy of our method was as high as 85.6% and significantly outperformed both the full abdomen and the separate anatomy detection methods without geometric constraints. The experimental results demonstrate that our system shows great promise for application in clinical practice. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  9. Symmetric Phase Only Filtering for Improved DPIV Data Processing

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    2006-01-01

    The standard approach in Digital Particle Image Velocimetry (DPIV) data processing is to use Fast Fourier Transforms to obtain the cross-correlation of two single-exposure subregions, where the location of the cross-correlation peak is representative of the most probable particle displacement across the subregion. This standard DPIV processing technique is analogous to Matched Spatial Filtering, a technique commonly used in optical correlators to perform the cross-correlation operation. Phase-only filtering is a well-known variation of Matched Spatial Filtering which, when used to process DPIV image data, yields correlation peaks that are narrower and up to an order of magnitude larger than those obtained using traditional DPIV processing. In addition to possessing desirable correlation plane features, phase-only filters also provide superior performance in the presence of DC noise in the correlation subregion. When DPIV image subregions contaminated with surface flare light or high background noise levels are processed using phase-only filters, the correlation peak pertaining only to the particle displacement is readily detected above any signal stemming from the DC objects; tedious image masking or background image subtraction is not required. Both theoretical and experimental analyses of the signal-to-noise ratio performance of the filter functions are presented. In addition, a new Symmetric Phase Only Filtering (SPOF) technique, a variation on the traditional phase-only filtering technique, is described and demonstrated. The SPOF technique exceeds the performance of traditionally accepted phase-only filtering techniques and is easily implemented in standard FFT-based DPIV correlation processing with no significant computational performance penalty. An "automatic" SPOF algorithm is presented which determines when the SPOF can provide better signal-to-noise results than traditional PIV processing. The SPOF-based optical correlation processing approach is presented as a new paradigm for more robust cross-correlation processing of low signal-to-noise ratio DPIV image data.
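
    A minimal sketch of plain phase-only filtering in FFT-based correlation; the symmetric variant described above treats the spectral magnitude differently, so this is the generic idea rather than the authors' SPOF code.

      import numpy as np

      def phase_only_correlation(a, b):
          """Cross-correlate two DPIV subregions after whitening the
          cross-spectrum magnitude, so only phase (displacement) survives.
          Returns the integer (dy, dx) displacement of the correlation peak."""
          cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
          cross /= np.abs(cross) + 1e-12          # keep phase, drop magnitude
          corr = np.real(np.fft.ifft2(cross))
          py, px = np.unravel_index(np.argmax(corr), corr.shape)
          dy = py if py <= a.shape[0] // 2 else py - a.shape[0]
          dx = px if px <= a.shape[1] // 2 else px - a.shape[1]
          return dy, dx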

  10. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    PubMed Central

    Smith, Justin D.

    2013-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874

  11. Characterization and Uncertainty Analysis of a Reference Pressure Measurement System for Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Amer, Tahani; Tripp, John; Tcheng, Ping; Burkett, Cecil; Sealey, Bradley

    2004-01-01

    This paper presents the calibration results and uncertainty analysis of a high-precision reference pressure measurement system currently used in wind tunnels at the NASA Langley Research Center (LaRC). Sensors, calibration standards, and measurement instruments are subject to errors due to aging, drift with time, environmental effects, transportation, the mathematical model, the calibration experimental design, and other factors. Errors occur at every link in the chain of measurements and data reduction, from the sensor to the final computed results. At each link of the chain, bias and precision uncertainties must be separately estimated for facility use; they are combined to produce overall calibration and prediction confidence intervals for the instrument, typically at a 95% confidence level. The uncertainty analysis and calibration experimental designs used herein, based on techniques developed at LaRC, employ replicated experimental designs for efficiency, separate estimation of bias and precision uncertainties, and detection of significant parameter drift with time. Final results are presented, including calibration confidence intervals and prediction intervals given as functions of the applied inputs rather than as a fixed percentage of the full-scale value. System uncertainties are propagated beginning with the initial reference pressure standard through to the calibrated instrument as a working standard in the facility. Among the several parameters that can affect the overall results are operating temperature, atmospheric pressure, humidity, and facility vibration. Effects of factors such as initial zeroing and temperature are investigated, and the effects of the identified parameters on system performance and accuracy are discussed.
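
    The separately estimated bias and precision contributions are conventionally combined as shown below (the large-sample form commonly used in wind tunnel practice; the paper's exact formulation may differ):

      \[
      U_{95} \,=\, \sqrt{B^{2} + \left(t_{95}\, S\right)^{2}}\,,
      \qquad t_{95} \approx 2 ,
      \]

    where B is the combined bias limit of the result and S its precision (random) standard deviation.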

  12. Physics of a novel magnetic resonance and electrical impedance combination for breast cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Kallergi, Maria; Heine, John J.; Wollin, Ernest

    2015-03-01

    A new technique is proposed and experimentally validated for breast cancer detection and diagnosis. The technique combines magnetic resonance with electrical impedance measurements and has the potential to increase the specificity of magnetic resonance mammography (MRM) thereby reducing false positive biopsy rates. The new magnetic resonance electrical impedance mammography (MREIM) adds a time varying electric field during a supplementary sequence to a standard MRM examination with an apparatus that is "invisible" to the patient. The applied electric field produces a current that creates an additional magnetic field with a component aligned with the bore magnetic field that can alter the native signal in areas of higher electrical conductivity. The justification for adding the electric field is that the electrical conductivity of cancerous breast tissue is approximately 3-40 times higher than normal breast tissue and, hence, conductivity of malignant tissue represents a known clinical disease biomarker. In a pilot study with custom-made phantoms and experimental protocols, it was demonstrated that MREIM can produce, as theoretically predicted, a detectable differential signal in areas of higher electrical conductivity (tumor surrogate regions); the evidence indicates that the differential signal is produced by the confluence of two different effects at full image resolution without gadolinium chelate contrast agent injection, without extraneous reconstruction techniques, and without cumbersome multi-positioned patient electrode configurations. This paper describes the theoretical model that predicts and explains the observed experimental results that were also confirmed by simulation studies.

  13. Experimental demonstration of passive coherent combining of fiber lasers by phase contrast filtering.

    PubMed

    Jeux, François; Desfarges-Berthelemot, Agnès; Kermène, Vincent; Barthelemy, Alain

    2012-12-17

    We report experiments on a new laser architecture involving phase contrast filtering to coherently combine an array of fiber lasers. We demonstrate that the new technique yields a more stable phase-locking than standard methods using only amplitude filtering. A spectral analysis of the output beams shows that the new scheme generates more resonant frequencies common to the coupled lasers. This property can enhance the combining efficiency when the number of lasers to be coupled is large.

  14. Using Genotype Abundance to Improve Phylogenetic Inference

    PubMed Central

    Mesin, Luka; Victora, Gabriel D; Minin, Vladimir N; Matsen, Frederick A

    2018-01-01

    Modern biological techniques enable very dense genetic sampling of unfolding evolutionary histories, and thus frequently sample some genotypes multiple times. This motivates strategies to incorporate genotype abundance information in phylogenetic inference. In this article, we synthesize a stochastic process model with standard sequence-based phylogenetic optimality, and show that tree estimation is substantially improved by doing so. Our method is validated with extensive simulations and an experimental single-cell lineage tracing study of germinal center B cell receptor affinity maturation. PMID:29474671

  15. Use of Kinesiology Taping in Rehabilitation after Knee Arthroplasty: a Randomised Clinical Study.

    PubMed

    Woźniak-Czekierda, Weronika; Woźniak, Kamil; Hadamus, Anna; Białoszewski, Dariusz

    2017-10-31

    Proprioception and body balance after knee arthroplasty have a considerable impact on restoration of joint function and a normal gait pattern. Kinesiology Taping (KT) is a method that may be able to influence these factors. The aim of this study was to assess the effects of KT application on sensorimotor efficiency, balance and gait in patients undergoing rehabilitation after knee replacement surgery. The study involved 120 male and female patients (mean age was 69 years) after total knee replacement. The patients were randomly assigned to one of two groups: Experimental Group (n=51) and Control Group (n=60). Both groups underwent standard rehabilitation lasting 20 days. In addition, the Experimental Group received KT applications. Treatment outcomes were assessed based on tests evaluating balance, joint position sense and functional gait performance, conducted both before and after the therapy. Statistically significant improvements were noted across all the parameters assessed in the Experimental Group (p<0.005). Significant improvements were also seen in the Control Group (p<0.005), but, in percentage terms, the improvement was higher in the Experimental Group. The only exception was the right/left foot load distribution, whose symmetry improved proportionally in both groups. 1. Patients after knee replacement surgery have considerable proprioception deficits, impaired body balance and reduced functional performance, which may increase the risk of falls in this group of patients. 2. Both standard physiotherapy and combination therapy with Kinesiology Taping (modified by the present authors) used in patients after knee arthroplasty may considerably improve the level of proprioception, body balance and overall functional performance. 3. The technique of dynamic taping proposed in this paper may optimise standard physiotherapy used in patients after knee arthroplasty and increase its clinical efficacy. Further studies are required.

  16. Use of simulated experiments for material characterization of brittle materials subjected to high strain rate dynamic tension

    PubMed Central

    Saletti, Dominique

    2017-01-01

    Rapid progress in ultra-high-speed imaging has allowed material properties to be studied at high strain rates by applying full-field measurements and inverse identification methods. Nevertheless, the sensitivity of these techniques still requires a better understanding, since various extrinsic factors present during an actual experiment make it difficult to separate different sources of errors that can significantly affect the quality of the identified results. This study presents a methodology using simulated experiments to investigate the accuracy of the so-called spalling technique (used to study tensile properties of concrete subjected to high strain rates) by numerically simulating the entire identification process. The experimental technique uses the virtual fields method and the grid method. The methodology consists of reproducing the recording process of an ultra-high-speed camera by generating sequences of synthetically deformed images of a sample surface, which are then analysed using the standard tools. The investigation of the uncertainty of the identified parameters, such as Young's modulus along with the stress–strain constitutive response, is addressed by introducing the most significant user-dependent parameters (i.e. acquisition speed, camera dynamic range, grid sampling, blurring), proving that the used technique can be an effective tool for error investigation. This article is part of the themed issue ‘Experimental testing and modelling of brittle materials at high strain rates’. PMID:27956505

  17. Excited-state dissociation dynamics of phenol studied by a new time-resolved technique

    NASA Astrophysics Data System (ADS)

    Lin, Yen-Cheng; Lee, Chin; Lee, Shih-Huang; Lee, Yin-Yu; Lee, Yuan T.; Tseng, Chien-Ming; Ni, Chi-Kung

    2018-02-01

    Phenol is an important model molecule for the theoretical and experimental investigation of dissociation on multistate potential energy surfaces. Recent theoretical calculations [X. Xu et al., J. Am. Chem. Soc. 136, 16378 (2014)] suggest that the phenoxyl radical produced in both the X and A states from the O-H bond fission in phenol can contribute substantially to the slow component of photofragment translational energy distribution. However, current experimental techniques struggle to separate the contributions from different dissociation pathways. A new type of time-resolved pump-probe experiment is described that enables the selection of the products generated from a specific time window after molecules are excited by a pump laser pulse and can quantitatively characterize the translational energy distribution and branching ratio of each dissociation pathway. This method modifies conventional photofragment translational spectroscopy by reducing the acceptance angles of the detection region and changing the interaction region of the pump laser beam and the molecular beam along the molecular beam axis. The translational energy distributions and branching ratios of the phenoxyl radicals produced in the X, A, and B states from the photodissociation of phenol at 213 and 193 nm are reported. Unlike other techniques, this method has no interference from the undissociated hot molecules. It can ultimately become a standard pump-probe technique for the study of large-molecule photodissociation on multiple potential energy surfaces.

  18. Simultaneous measurement of the Young's modulus and the Poisson ratio of thin elastic layers.

    PubMed

    Gross, Wolfgang; Kress, Holger

    2017-02-07

    The behavior of cells and tissue is greatly influenced by the mechanical properties of their environment. For studies on the interactions between cells and soft matrices, especially those applying traction force microscopy, the characterization of the mechanical properties of thin substrate layers is essential. Various techniques to measure the elastic modulus are available. Methods to accurately measure the Poisson ratio of such substrates are rare and often involve either a combination of multiple techniques or additional equipment that is not needed for the actual biological studies. Here we describe a novel technique to measure both parameters, the Young's modulus and the Poisson ratio, in a single experiment. The technique requires only a standard inverted epifluorescence microscope. As a model system, we chose cross-linked polyacrylamide and poly-N-isopropylacrylamide hydrogels which are known to obey Hooke's law. We place millimeter-sized steel spheres on the substrates, which indent the surface. The data are evaluated using a previously published model which takes finite thickness effects of the substrate layer into account. We demonstrate experimentally for the first time that the application of the model allows the simultaneous determination of both the Young's modulus and the Poisson ratio. Since the method is easy to adapt and comes without the need of special equipment, we envision the technique to become a standard tool for the characterization of substrates for a wide range of investigations of cell and tissue behavior in various mechanical environments as well as other samples, including biological materials.
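
    The inverse problem behind this approach can be illustrated as follows: the sphere's buoyancy-corrected weight is balanced by a Hertz-type contact force with a thickness-dependent stiffening factor, and (E, ν) are fitted to indentations measured on layers of different thickness. This is a minimal sketch only; the stiffening factor, densities and data below are placeholders, not the published finite-thickness model.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    g = 9.81                                # m/s^2
    rho_sphere, rho_gel = 7800.0, 1050.0    # kg/m^3, steel and hydrogel (assumed)

    def contact_force(E, nu, R, delta, h):
        """Hertz force for a rigid sphere on an elastic layer, times an
        *illustrative* finite-thickness stiffening factor; chi compares the
        contact radius sqrt(R*delta) with the layer thickness h."""
        chi = np.sqrt(R * delta) / h
        factor = 1.0 + (1.0 + 0.5 * nu) * chi + (1.0 + nu) * chi**2  # placeholder
        return (4.0 / 3.0) * E / (1.0 - nu**2) * np.sqrt(R) * delta**1.5 * factor

    def residuals(params, R, deltas, hs):
        E, nu = params
        weight = (4.0 / 3.0) * np.pi * R**3 * (rho_sphere - rho_gel) * g
        return contact_force(E, nu, R, deltas, hs) - weight

    # Hypothetical data: one sphere, indentation depths on two layer thicknesses
    R = 0.5e-3                              # sphere radius, m
    deltas = np.array([30e-6, 40e-6])       # measured indentations, m
    hs = np.array([60e-6, 200e-6])          # layer thicknesses, m
    fit = least_squares(residuals, x0=[5e3, 0.4], args=(R, deltas, hs),
                        bounds=([1e2, 0.0], [1e6, 0.5]))
    E_fit, nu_fit = fit.x
    ```

    The thickness dependence is what breaks the Hertzian degeneracy between E and ν; with a half-space model alone only the combination E/(1-ν²) is identifiable.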

  19. ENFIN--A European network for integrative systems biology.

    PubMed

    Kahlem, Pascal; Clegg, Andrew; Reisinger, Florian; Xenarios, Ioannis; Hermjakob, Henning; Orengo, Christine; Birney, Ewan

    2009-11-01

    Integration of biological data of various types and the development of adapted bioinformatics tools represent critical objectives to enable research at the systems level. The European Network of Excellence ENFIN is engaged in developing an adapted infrastructure to connect databases, and platforms to enable both the generation of new bioinformatics tools and the experimental validation of computational predictions. With the aim of bridging the gap between standard wet laboratories and bioinformatics, the ENFIN Network runs integrative research projects to bring the latest computational techniques to bear directly on systems biology questions in the wet laboratory environment. The Network maintains internally close collaboration between experimental and computational research, enabling a permanent cycling of experimental validation and improvement of computational prediction methods. The computational work includes the development of a database infrastructure (EnCORE), bioinformatics analysis methods and FuncNet, a novel platform for protein function analysis.

  20. Experimental demonstration of selective quantum process tomography on an NMR quantum information processor

    NASA Astrophysics Data System (ADS)

    Gaikwad, Akshay; Rehal, Diksha; Singh, Amandeep; Arvind; Dorai, Kavita

    2018-02-01

    We present the NMR implementation of a scheme for selective and efficient quantum process tomography without ancilla. We generalize this scheme such that it can be implemented efficiently using only a set of measurements involving product operators. The method allows us to estimate any element of the quantum process matrix to a desired precision, provided a set of quantum states can be prepared efficiently. Our modified technique requires fewer experimental resources as compared to the standard implementation of selective and efficient quantum process tomography, as it exploits the special nature of NMR measurements to allow us to compute specific elements of the process matrix by a restrictive set of subsystem measurements. To demonstrate the efficacy of our scheme, we experimentally tomograph the processes corresponding to "no operation," a controlled-NOT (CNOT), and a controlled-Hadamard gate on a two-qubit NMR quantum information processor, with high fidelities.

  1. Flight Experiments of Physical Vapor Transport of ZnSe: Growth of Crystals in Various Convective Conditions

    NASA Technical Reports Server (NTRS)

    Su, Ching-Hua

    2015-01-01

    A low gravity material experiment will be performed in the Material Science Research Rack (MSRR) on International Space Station (ISS). The flight experiment will conduct crystal growths of ZnSe and related ternary compounds, such as ZnSeS and ZnSeTe, by physical vapor transport (PVT). The main objective of the project is to determine the relative contributions of gravity-driven fluid flows to the compositional distribution, incorporation of impurities and defects, and deviation from stoichiometry observed in the grown crystals as results of buoyancy-driven convection and growth interface fluctuations caused by irregular fluid-flows on Earth. The investigation consists of extensive ground-based experimental and theoretical research efforts and concurrent flight experimentation. The objectives of the ground-based studies are (1) obtain the experimental data and conduct the analyses required to define the optimum growth parameters for the flight experiments, (2) perfect various characterization techniques to establish the standard procedure for material characterization, (3) quantitatively establish the characteristics of the crystals grown on Earth as a basis for subsequent comparative evaluations of the crystals grown in a low-gravity environment and (4) develop theoretical and analytical methods required for such evaluations. ZnSe and related ternary compounds have been grown by vapor transport technique with real time in-situ non-invasive monitoring techniques. The grown crystals have been characterized extensively by various techniques to correlate the grown crystal properties with the growth conditions. This talk will focus on the ground-based studies on the PVT crystal growth of ZnSe and related ternary compounds, especially the effects of different growth orientations related to gravity direction on the grown crystals.

  2. Crystal Growth of Ternary Compound Semiconductors in Low Gravity Environment

    NASA Technical Reports Server (NTRS)

    Su, Ching-Hua

    2014-01-01

    A low gravity material experiment will be performed in the Material Science Research Rack (MSRR) on International Space Station (ISS). There are two sections of the flight experiment: (I) crystal growth of ZnSe and related ternary compounds, such as ZnSeS and ZnSeTe, by physical vapor transport (PVT) and (II) melt growth of CdZnTe by directional solidification. The main objective of the project is to determine the relative contributions of gravity-driven fluid flows to the compositional distribution, incorporation of impurities and defects, and deviation from stoichiometry observed in the grown crystals as results of buoyancy-driven convection and growth interface fluctuations caused by irregular fluid-flows on Earth. The investigation consists of extensive ground-based experimental and theoretical research efforts and concurrent flight experimentation. This talk will focus on the ground-based studies on the PVT crystal growth of ZnSe and related ternary compounds. The objectives of the ground-based studies are (1) obtain the experimental data and conduct the analyses required to define the optimum growth parameters for the flight experiments, (2) perfect various characterization techniques to establish the standard procedure for material characterization, (3) quantitatively establish the characteristics of the crystals grown on Earth as a basis for subsequent comparative evaluations of the crystals grown in a low-gravity environment and (4) develop theoretical and analytical methods required for such evaluations. ZnSe and related ternary compounds have been grown by vapor transport technique with real time in-situ non-invasive monitoring techniques. The grown crystals have been characterized extensively by various techniques to correlate the grown crystal properties with the growth conditions.

  3. Spectral separation of gaseous fluorocarbon mixtures and measurement of diffusion constants by 19F gas phase DOSY NMR.

    PubMed

    Marchione, Alexander A; McCord, Elizabeth F

    2009-11-01

    Diffusion-ordered (DOSY) NMR techniques have for the first time been applied to the spectral separation of mixtures of fluorinated gases by diffusion rates. A mixture of linear perfluoroalkanes from methane to hexane was readily separated at 25 degrees C in an ordinary experimental setup with standard DOSY pulse sequences. Partial separation of variously fluorinated ethanes was also achieved. The constants of self-diffusion of a set of pure perfluoroalkanes were obtained at pressures from 0.25 to 1.34 atm and temperatures from 20 to 122 degrees C. Under all conditions the experimental self-diffusion constants D agreed within 20% with the values calculated by the semiempirical Fuller method.
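
    For context, diffusion constants in pulsed-field-gradient experiments of this kind are conventionally extracted by fitting the Stejskal-Tanner attenuation to the signal decay versus gradient strength. A minimal sketch; the pulse timings, gradient ramp and noise level are hypothetical, not the paper's acquisition parameters.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    GAMMA_19F = 2.518e8          # 19F gyromagnetic ratio, rad s^-1 T^-1

    def stejskal_tanner(g, I0, D, delta=2e-3, Delta=50e-3):
        """Echo intensity vs gradient strength g (T/m) for diffusion
        coefficient D (m^2/s); delta/Delta are hypothetical pulse timings."""
        b = (GAMMA_19F * g * delta) ** 2 * (Delta - delta / 3.0)
        return I0 * np.exp(-D * b)

    # Hypothetical gradient ramp and measured peak intensities (1% noise)
    g = np.linspace(0.02, 0.5, 12)
    I = stejskal_tanner(g, 1.0, 8e-6) * (1 + 0.01 * np.random.randn(12))

    (I0_fit, D_fit), _ = curve_fit(stejskal_tanner, g, I, p0=[1.0, 1e-6])
    print(f"D = {D_fit:.2e} m^2/s")
    ```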

  4. Usage of machine learning for the separation of electroweak and strong Zγ production at the LHC experiments

    NASA Astrophysics Data System (ADS)

    Petukhov, A. M.; Soldatov, E. Yu

    2017-12-01

    Separating the electroweak component from the strong component of associated Zγ production at hadron colliders is very challenging because the two processes have identical final states. The only difference is the origin of the two leading jets in these two processes. Rectangular cuts on jet kinematic variables from the ATLAS/CMS 8 TeV Zγ experimental analyses were improved using machine learning techniques. New selection variables were also tested. The expected significance of separation under LHC experimental conditions for the second data-taking period (Run 2) and 120 fb-1 of data exceeds 5σ. Future experimental observation of electroweak Zγ production can also lead to the observation of physics beyond the Standard Model.
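
    A hedged sketch of this kind of multivariate selection: a gradient-boosted classifier trained on dijet kinematics, followed by a simple counting significance. The feature set, sample sizes and score cut are illustrative stand-ins, not the analysis's actual inputs.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    # X: per-event jet kinematics (e.g. m_jj, |Delta-eta_jj|, centrality, ...);
    # y: 1 = electroweak Zgamma+2j, 0 = strong production (labels from MC).
    # Both are random stand-ins for simulated samples here.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (5000, 4)), rng.normal(0.8, 1, (5000, 4))])
    y = np.repeat([0, 1], 5000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
    bdt.fit(X_tr, y_tr)

    # Toy counting-experiment significance s/sqrt(b) after a score cut;
    # luminosity-scaled event weights are left out for brevity.
    score = bdt.predict_proba(X_te)[:, 1]
    cut = 0.8
    s = np.sum((score > cut) & (y_te == 1))
    b = np.sum((score > cut) & (y_te == 0))
    print(f"significance ~ {s / np.sqrt(b):.1f} sigma (toy numbers)")
    ```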

  5. A Comparative Study of Random Patterns for Digital Image Correlation

    NASA Astrophysics Data System (ADS)

    Stoilov, G.; Kavardzhikov, V.; Pashkouleva, D.

    2012-06-01

    Digital Image Correlation (DIC) is a computer based image analysis technique utilizing random patterns, which finds applications in experimental mechanics of solids and structures. In this paper a comparative study of three simulated random patterns is presented. One of them is generated according to a new algorithm, introduced by the authors. A criterion for quantitative evaluation of random patterns after the calculation of their autocorrelation functions is introduced. The patterns' deformations are simulated numerically and realized experimentally. The displacements are measured by using the DIC method. Tensile tests are performed after printing the generated random patterns on surfaces of standard iron sheet specimens. It is found that the newly designed random pattern retains relatively good quality up to 20% deformation.
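
    The autocorrelation-based evaluation of a pattern can be sketched as follows; the peak-width figure of merit below is one plausible criterion, not necessarily the exact one introduced by the authors.

    ```python
    import numpy as np

    def autocorrelation(img):
        """Normalized 2D autocorrelation via the Wiener-Khinchin theorem."""
        f = img - img.mean()
        acf = np.fft.ifft2(np.abs(np.fft.fft2(f)) ** 2).real
        acf = np.fft.fftshift(acf)
        return acf / acf.max()

    def correlation_radius(acf, level=0.5):
        """Radius (pixels) at which the central ACF peak drops to `level`;
        a small radius indicates a fine, well-conditioned pattern for DIC."""
        cy, cx = np.array(acf.shape) // 2
        profile = acf[cy, cx:]
        below = np.nonzero(profile < level)[0]
        return below[0] if below.size else np.inf

    pattern = (np.random.default_rng(1).random((512, 512)) > 0.5).astype(float)
    acf = autocorrelation(pattern)
    print("correlation radius:", correlation_radius(acf), "px")
    ```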

  6. Blood-Banking Techniques for Plateletpheresis in Swine

    PubMed Central

    Sondeen, Jill L; Prince, Malcolm D; Polykratis, Irene A; Hernandez, Orlando; Torres-Mendoza, Jaime; Guzman, Rodolfo De; Aden, James K; Dubick, Michael A

    2014-01-01

    During the past several years, trauma resuscitation in human patients has evolved from decreased use of crystalloids to increased use of blood products. Of high interest is the role of platelets in trauma resuscitation. Because conducting prehospital resuscitation in human trauma patients is very difficult, swine are often the animal model of choice for such studies because their coagulation and hemodynamic systems are similar to those in humans. However, consistent production of sufficient swine platelets for such studies has not previously been achieved. We developed a method for producing swine platelets by using standard human techniques and equipment. We assessed pH, pO2, pCO2, lactate, thromboelastography, and platelet aggregation over 5 d of storage to determine whether the swine platelet product met the American Association of Blood Banks (AABB) standards for transfusion. Swine platelets met AABB standards at 24 h but not at later time points. In addition, we fluorescently labeled nonautologous platelets and then measured their percentage recovery over 5 h (the time used in subsequent experimental studies) when transfused into a recipient pig. We showed that 80% of the platelets stored for 24 h remained in the circulation and increased the recipient pigs’ thromboelastographic responses, indicating that the platelets were viable and active. Therefore, swine platelets stored for 24 h by using standard human products met the AABB criteria and were functional. PMID:24827574

  7. Extending neutron autoradiography technique for boron concentration measurements in hard tissues.

    PubMed

    Provenzano, Lucas; Olivera, María Silvina; Saint Martin, Gisela; Rodríguez, Luis Miguel; Fregenal, Daniel; Thorp, Silvia I; Pozzi, Emiliano C C; Curotto, Paula; Postuma, Ian; Altieri, Saverio; González, Sara J; Bortolussi, Silva; Portu, Agustina

    2018-07-01

    The neutron autoradiography technique using polycarbonate nuclear track detectors (NTD) has been extended to quantify the boron concentration in hard tissues, an application of special interest in Boron Neutron Capture Therapy (BNCT). Chemical and mechanical processing methods to prepare thin tissue sections as required by this technique have been explored. Four different decalcification methods governed by slow and fast kinetics were tested in boron-loaded bones. Due to the significant loss of the boron content, this technique was discarded. On the contrary, mechanical manipulation to obtain bone powder and tissue sections tens of microns thick proved reproducible and suitable, ensuring a proper conservation of the boron content in the samples. A calibration curve that relates the 10B concentration of a bone sample and the track density in a Lexan NTD is presented. Bone powder embedded in boric acid solution with known boron concentrations between 0 and 100 ppm was used as a standard material. The samples, contained in slim Lexan cases, were exposed to a neutron fluence of 10(12) cm(-2) at the thermal column central facility of the RA-3 reactor (Argentina). The revealed tracks in the NTD were counted with image processing software. The effect of track overlapping was studied and corresponding corrections were implemented in the presented calibration curve. Stochastic simulations of the track densities produced by the products of the 10B thermal neutron capture reaction for different boron concentrations in bone were performed and compared with the experimental results. The remarkable agreement between the two curves suggested the suitability of the obtained experimental calibration curve. This neutron autoradiography technique was finally applied to determine the boron concentration in pulverized and compact bone samples coming from a sheep experimental model. The obtained results for both types of samples agreed with boron measurements carried out by ICP-OES within experimental uncertainties. The fact that the histological structure of bone sections remains preserved allows for future boron microdistribution analysis. Copyright © 2018 Elsevier Ltd. All rights reserved.
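
    The overlap correction and calibration steps can be illustrated with a toy model. The exponential overlap law, track area and count values below are assumptions for illustration only, not the corrections actually derived in the paper.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    TRACK_AREA = 1.2e-6  # mm^2, hypothetical etched-track area

    def observed_density(true_density):
        """Countable track density under a simple Poisson overlap model:
        rho_obs = rho * exp(-2 * A * rho). Illustrative assumption, not the
        correction derived in the paper."""
        return true_density * np.exp(-2 * TRACK_AREA * true_density)

    def corrected_density(rho_obs, rho_max=2e5):
        """Invert the overlap model numerically on its monotonic branch."""
        return brentq(lambda r: observed_density(r) - rho_obs, 0.0, rho_max)

    # Hypothetical calibration data: boron standards (ppm) vs counted tracks/mm^2
    ppm = np.array([0, 10, 25, 50, 100])
    rho_obs = np.array([30.0, 5.1e3, 1.26e4, 2.49e4, 4.85e4])
    rho_true = [corrected_density(r) for r in rho_obs]

    # Linear calibration: overlap-corrected track density per ppm of 10B
    slope, intercept = np.polyfit(ppm, rho_true, 1)
    print(f"calibration: {slope:.0f} tracks/mm^2 per ppm")
    ```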

  8. LipidQC: Method Validation Tool for Visual Comparison to SRM 1950 Using NIST Interlaboratory Comparison Exercise Lipid Consensus Mean Estimate Values.

    PubMed

    Ulmer, Candice Z; Ragland, Jared M; Koelmel, Jeremy P; Heckert, Alan; Jones, Christina M; Garrett, Timothy J; Yost, Richard A; Bowden, John A

    2017-12-19

    As advances in analytical separation techniques, mass spectrometry instrumentation, and data processing platforms continue to spur growth in the lipidomics field, more structurally unique lipid species are detected and annotated. The lipidomics community is in need of benchmark reference values to assess the validity of various lipidomics workflows in providing accurate quantitative measurements across the diverse lipidome. LipidQC addresses the harmonization challenge in lipid quantitation by providing a semiautomated process, independent of analytical platform, for visual comparison of experimental results of National Institute of Standards and Technology Standard Reference Material (SRM) 1950, "Metabolites in Frozen Human Plasma", against benchmark consensus mean concentrations derived from the NIST Lipidomics Interlaboratory Comparison Exercise.

  9. A new bioimpedance research device (BIRD) for measuring the electrical impedance of acupuncture meridians.

    PubMed

    Wong, Felix Wu Shun; Lim, Chi Eung Danforn; Smith, Warren

    2010-03-01

    The aim of this article is to introduce an electrical bioimpedance device that uses an old and little-known impedance measuring technique to study the impedance of meridian and nonmeridian tissue segments. Three (3) pilot experimental studies involving both a tissue phantom (a cucumber) and 3 human subjects were performed using this BIRD-I (Bioimpedance Research Device) device. This device consists of a Fluke RCL meter, a multiplexer box, a laptop computer, and a medical-grade isolation transformer. Segment and surface sheath (or local) impedances were estimated using formulae first published in the 1930s, in an approach that differs from that of the standard four-electrode technique used in most meridian studies to date. Our study found that, when using a quasilinear four-electrode arrangement, the reference electrodes should be positioned at least 10 cm from the test electrodes to ensure that the segment (or core) impedance estimation is not affected by the proximity of the reference electrodes. A tissue phantom was used to determine the repeatability of segment (core) impedance measurement by the device. An applied frequency of 100 kHz was found to produce the best repeatability among the various frequencies tested. In another preliminary study, with a segment of the triple energizer meridian on the lower arm selected as reference segment, core resistance-based profiles around the lower arm showed three of the other five meridians to exist as local resistance minima relative to neighboring nonmeridian segments. The profiles of the 2 subjects tested were very similar, suggesting that the results are unlikely to be spurious. In electrical bioimpedance studies, it is recommended that the measuring technique and device be clearly defined and standardized to provide optimal working conditions. In our study using the BIRD-I device, we defined our standard experimental conditions as a test frequency of 100 kHz and a reference-electrode position of at least 10 cm from the test electrodes. Our device has demonstrated potential for use in quantifying the degree of electrical interconnection between any two surface-defined test meridian or nonmeridian segments. Issues arising from use of this device and from the Horton and van Ravenswaay measurement technique were also presented.

  10. COordination of Standards in MetabOlomicS (COSMOS): facilitating integrated metabolomics data access.

    PubMed

    Salek, Reza M; Neumann, Steffen; Schober, Daniel; Hummel, Jan; Billiau, Kenny; Kopka, Joachim; Correa, Elon; Reijmers, Theo; Rosato, Antonio; Tenori, Leonardo; Turano, Paola; Marin, Silvia; Deborde, Catherine; Jacob, Daniel; Rolin, Dominique; Dartigues, Benjamin; Conesa, Pablo; Haug, Kenneth; Rocca-Serra, Philippe; O'Hagan, Steve; Hao, Jie; van Vliet, Michael; Sysi-Aho, Marko; Ludwig, Christian; Bouwman, Jildau; Cascante, Marta; Ebbels, Timothy; Griffin, Julian L; Moing, Annick; Nikolski, Macha; Oresic, Matej; Sansone, Susanna-Assunta; Viant, Mark R; Goodacre, Royston; Günther, Ulrich L; Hankemeier, Thomas; Luchinat, Claudio; Walther, Dirk; Steinbeck, Christoph

    Metabolomics has become a crucial phenotyping technique in a range of research fields including medicine, the life sciences, biotechnology and the environmental sciences. This necessitates the transfer of experimental information between research groups, as well as potentially to publishers and funders. After the initial efforts of the metabolomics standards initiative, minimum reporting standards were proposed which included the concepts for metabolomics databases. Built by the community, standards and infrastructure for metabolomics are still needed to allow storage, exchange, comparison and re-utilization of metabolomics data. The Framework Programme 7 EU Initiative 'coordination of standards in metabolomics' (COSMOS) is developing a robust data infrastructure and exchange standards for metabolomics data and metadata. This is to support workflows for a broad range of metabolomics applications within the European metabolomics community, with participation from the wider metabolomics and biomedical communities. Here we announce our concepts and efforts, asking for re-engagement of the metabolomics community, academics and industry, journal publishers, software and hardware vendors, as well as those interested in standardisation worldwide, to join and work towards updating and implementing metabolomics standards, addressing missing metabolomics ontologies, complex-metadata capture and an XML-based open-source data exchange format.

  11. Laser intensity modulated real time monitoring cell growth sensor for bioprocess applications

    NASA Astrophysics Data System (ADS)

    Kishore, P.; Babu, P. Ravindra; Devi, V. Rama; Maunika, T.; Soujanya, P.; Kishore, P. V. N.; Dinakar, D.

    2016-04-01

    This article proposes an optical method for monitoring the growth of Escherichia coli in Luria Bertani medium and Saccharomyces cerevisiae in YPD. A suitable light source is selected which, on interaction with the analyte under consideration, is absorbed/scattered. The required electronic circuitry is designed to drive the laser source and to detect the intensity of light using a photodetector. All these components are integrated and arranged appropriately to monitor the growth of the microbes in real time. The sensor's results are compared with standard techniques such as colorimetry, nephelometry and hemocytometry. The experimental results are in good agreement with these established techniques, and the sensor is well suited to real-time monitoring of microbial growth.
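
    A typical conversion from detected intensity to an optical-density-style growth readout uses the Beer-Lambert relation; a minimal sketch with hypothetical photodiode voltages (the article's own calibration against colorimetry and nephelometry is not reproduced here).

    ```python
    import numpy as np

    def optical_density(v_sample, v_blank, v_dark=0.0):
        """Turbidity as OD = -log10(I/I0), using photodiode voltages as a
        proxy for transmitted intensity (a linear detector is assumed)."""
        return -np.log10((v_sample - v_dark) / (v_blank - v_dark))

    # Hypothetical readings during a growth run (volts)
    v_blank = 2.50                                  # sterile medium
    v = np.array([2.41, 2.20, 1.86, 1.40, 1.05])    # hourly samples
    print(np.round(optical_density(v, v_blank), 3))
    ```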

  12. HMMBinder: DNA-Binding Protein Prediction Using HMM Profile Based Features.

    PubMed

    Zaman, Rianon; Chowdhury, Shahana Yasmin; Rashid, Mahmood A; Sharma, Alok; Dehzangi, Abdollah; Shatabda, Swakkhar

    2017-01-01

    DNA-binding proteins often play important roles in various processes within the cell. Over the last decade, a wide range of classification algorithms and feature extraction techniques have been used to solve the DNA-binding protein prediction problem. In this paper, we propose a novel DNA-binding protein prediction method called HMMBinder. HMMBinder uses monogram and bigram features extracted from the HMM profiles of the protein sequences. To the best of our knowledge, this is the first application of HMM profile based features for the DNA-binding protein prediction problem. We applied Support Vector Machines (SVM) as the classification technique in HMMBinder. Our method was tested on standard benchmark datasets. We experimentally show that our method outperforms the state-of-the-art methods found in the literature.
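
    One common way to build monogram and bigram features from an L×20 HMM profile matrix is sketched below; HMMBinder's exact feature normalization may differ, and the training data here are random stand-ins for real profiles and labels.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def hmm_features(profile):
        """profile: (L, 20) HMM emission-probability matrix for a sequence.
        Monogram: column means (20 features). Bigram: mean outer product of
        consecutive rows, flattened (400 features). One plausible
        construction; the published normalization may differ."""
        mono = profile.mean(axis=0)
        bi = np.einsum('ij,ik->jk', profile[:-1], profile[1:]) / (len(profile) - 1)
        return np.concatenate([mono, bi.ravel()])

    # Hypothetical training set: profiles (e.g. from HMMER), labels 1 = DNA-binding
    rng = np.random.default_rng(0)
    profiles = [rng.dirichlet(np.ones(20), size=rng.integers(50, 300))
                for _ in range(40)]
    y = rng.integers(0, 2, size=40)

    X = np.array([hmm_features(p) for p in profiles])
    clf = SVC(kernel='rbf', C=1.0).fit(X, y)
    ```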

  13. Evaluating the effectiveness of SW-only video coding for real-time video transmission over low-rate wireless networks

    NASA Astrophysics Data System (ADS)

    Bartolini, Franco; Pasquini, Cristina; Piva, Alessandro

    2001-04-01

    The recent development of video compression algorithms has allowed the diffusion of systems for the transmission of video sequences over data networks. However, transmission over error-prone mobile communication channels is still an open issue. In this paper, a system developed for the real-time transmission of H.263 video coded sequences over TETRA mobile networks is presented. TETRA is an open digital trunked radio standard defined by the European Telecommunications Standards Institute, developed for professional mobile radio users and providing full integration of voice and data services. Experimental tests demonstrate that, in spite of the low frame rate allowed by the SW-only implementation of the decoder and by the low channel rate, a video compression technique such as one complying with the H.263 standard is still preferable to a simpler but less effective frame-based compression system.

  14. Standards and Methodologies for Characterizing Radiobiological Impact of High-Z Nanoparticles

    PubMed Central

    Subiel, Anna; Ashmore, Reece; Schettino, Giuseppe

    2016-01-01

    Research on the application of high-Z nanoparticles (NPs) in cancer treatment and diagnosis has recently been the subject of growing interest, with much promise being shown with regards to a potential transition into clinical practice. In spite of numerous publications related to the development and application of nanoparticles for use with ionizing radiation, the literature is lacking coherent and systematic experimental approaches to fully evaluate the radiobiological effectiveness of NPs, validate mechanistic models and allow direct comparison of the studies undertaken by various research groups. The lack of standards and established methodology is commonly recognised as a major obstacle for the transition of innovative research ideas into clinical practice. This review provides a comprehensive overview of radiobiological techniques and quantification methods used in in vitro studies on high-Z nanoparticles and aims to provide recommendations for future standardization for NP-mediated radiation research. PMID:27446499

  15. Experimental and computational investigation of the thermochemistry of the six isomers of dichloroaniline.

    PubMed

    Ribeiro da Silva, Manuel A V; Amaral, Luísa M P F; Gomes, José R B

    2006-07-27

    The standard (p(o) = 0.1 MPa) molar enthalpies of formation of 2,3-, 2,4-, 2,5-, 2,6-, 3,4- and 3,5-dichloroanilines were derived from the standard molar energies of combustion, in oxygen, to yield CO(2)(g), N(2)(g) and HCl.600H(2)O(l), at T = 298.15 K, measured by rotating bomb combustion calorimetry. The Calvet high-temperature vacuum sublimation technique was used to measure the enthalpies of sublimation of the six isomers. These two thermodynamic parameters yielded the standard molar enthalpies of formation of the six isomers of dichloroaniline, in the gaseous phase, at T = 298.15 K. The gas-phase enthalpies of formation were also estimated by G3MP2B3 calculations, which were further extended to the computation of gas-phase acidities, proton affinities, and ionization enthalpies.
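
    The two measured quantities combine in the usual thermochemical cycle, at T = 298.15 K:

    ```latex
    \Delta_{\mathrm{f}}H^{\circ}_{\mathrm{m}}(\mathrm{g}) =
      \Delta_{\mathrm{f}}H^{\circ}_{\mathrm{m}}(\mathrm{cr}) +
      \Delta^{\mathrm{g}}_{\mathrm{cr}}H^{\circ}_{\mathrm{m}}
    ```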

  16. An Analog Macroscopic Technique for Studying Molecular Hydrodynamic Processes in Dense Gases and Liquids.

    PubMed

    Dahlberg, Jerry; Tkacik, Peter T; Mullany, Brigid; Fleischhauer, Eric; Shahinian, Hossein; Azimi, Farzad; Navare, Jayesh; Owen, Spencer; Bisel, Tucker; Martin, Tony; Sholar, Jodie; Keanini, Russell G

    2017-12-04

    An analog, macroscopic method for studying molecular-scale hydrodynamic processes in dense gases and liquids is described. The technique applies a standard fluid dynamic diagnostic, particle image velocimetry (PIV), to measure: i) velocities of individual particles (grains), extant on short, grain-collision time-scales, ii) velocities of systems of particles, on both short collision-time- and long, continuum-flow-time-scales, iii) collective hydrodynamic modes known to exist in dense molecular fluids, and iv) short- and long-time-scale velocity autocorrelation functions, central to understanding particle-scale dynamics in strongly interacting, dense fluid systems. The basic system is composed of an imaging system, light source, vibrational sensors, vibrational system with a known media, and PIV and analysis software. Required experimental measurements and an outline of the theoretical tools needed when using the analog technique to study molecular-scale hydrodynamic processes are highlighted. The proposed technique provides a relatively straightforward alternative to photonic and neutron beam scattering methods traditionally used in molecular hydrodynamic studies.
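
    The velocity autocorrelation function named in item iv) is computed from particle velocity time series in the standard way; a minimal numpy sketch (the track data here are synthetic placeholders for PIV output).

    ```python
    import numpy as np

    def vacf(v):
        """Velocity autocorrelation C(tau) = <v(t) . v(t+tau)>, normalized by
        C(0) and averaged over particles and time origins.
        v: array (n_particles, n_frames, 2) of PIV velocities."""
        n_frames = v.shape[1]
        c = np.zeros(n_frames)
        for tau in range(n_frames):
            dots = np.einsum('pfd,pfd->', v[:, :n_frames - tau], v[:, tau:])
            c[tau] = dots / (v.shape[0] * (n_frames - tau))
        return c / c[0]

    # Hypothetical tracks: 100 grains, 500 frames, 2D velocities
    rng = np.random.default_rng(2)
    v = rng.normal(size=(100, 500, 2))
    C = vacf(v)   # C[0] == 1; the decay time reflects the collision time-scale
    ```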

  17. Analysis of formalin-fixed, paraffin-embedded (FFPE) tissue via proteomic techniques and misconceptions of antigen retrieval.

    PubMed

    O'Rourke, Matthew B; Padula, Matthew P

    2016-01-01

    Since emerging in the late 19th century, formaldehyde fixation has become a standard method for preservation of tissues from clinical samples. The advantage of formaldehyde fixation is that fixed tissues can be stored at room temperature for decades without concern for degradation. This has led to the generation of huge tissue banks containing thousands of clinically significant samples. Here we review techniques for proteomic analysis of formalin-fixed, paraffin-embedded (FFPE) tissue samples with a specific focus on the methods used to extract and break formaldehyde crosslinks. We also discuss an error of interpretation associated with the technique known as "antigen retrieval." We have discovered that this term has been mistakenly applied to two disparate molecular techniques; therefore, we argue that a terminology change is needed to ensure accurate reporting of experimental results. Finally, we suggest that more investigation is required to fully understand the process of formaldehyde fixation and its subsequent reversal.

  18. Detecting salt deposition on a wind turbine blade using laser induced breakdown spectroscopy technique

    NASA Astrophysics Data System (ADS)

    Sathiesh Kumar, V.; Vasa, Nilesh J.; Sarathi, R.

    2013-07-01

    The study of the pollution performance of a wind turbine blade under lightning is important, as lightning can cause major damage to wind turbine blades. In the present work, the optical emission spectroscopy (OES) technique is used to understand the influence of pollutant deposited on a wind turbine blade in an off-shore environment. A methodical experimental study was carried out by adopting the IEC 60507 standard, and it was observed that the lightning discharge propagates at the interface between the pollutant and the glass fiber reinforced plastic (the material used in manufacturing wind turbine blades). In addition, as a diagnostic condition monitoring technique, laser-induced breakdown spectroscopy (LIBS) is proposed and demonstrated to rank the severity of pollution on wind turbine blades from a remote location. The optical emission spectra observed during the surface discharge process induced by lightning impulse voltage are in agreement with the spectra observed during LIBS.

  19. Application and use of spinal immobilization devices in zero-gravity flight

    NASA Technical Reports Server (NTRS)

    Krupa, Debra T.; Gosbee, John; Billica, Roger; Boyce, Joey B.

    1991-01-01

    A KC-135 parabolic flight was performed for the purpose of evaluating spinal immobilization techniques in microgravity. The flight followed the standard 40-parabola profile with four NASA/KRUG experimenters involved. One performed as coordinator/recorder, one as test subject, and two as the Crew Medical Officers (CMO). The flight was to evaluate the application of spinal immobilization devices and techniques in microgravity as they are performed during initial stabilization or patient transport scenarios. The sequence of tasks for examination of the objectives included: attempted cervical spine immobilization with all participants free floating, the patient restrained to the floor, various hand positioning techniques; c-collar placement; Kendrick Extrication Device (KED) application with various restraints for patient and CMO; patient immobilization and transport using the KED; patient transported on KED and spine board. Observations for each task are included. Major conclusions and issues are also included.

  20. Pancreatic islet blood flow and its measurement

    PubMed Central

    Jansson, Leif; Barbu, Andreea; Bodin, Birgitta; Drott, Carl Johan; Espes, Daniel; Gao, Xiang; Grapensparr, Liza; Källskog, Örjan; Lau, Joey; Liljebäck, Hanna; Palm, Fredrik; Quach, My; Sandberg, Monica; Strömberg, Victoria; Ullsten, Sara; Carlsson, Per-Ola

    2016-01-01

    Pancreatic islets are richly vascularized, and islet blood vessels are uniquely adapted to maintain and support the internal milieu of the islets favoring normal endocrine function. Islet blood flow is normally very high compared with that to the exocrine pancreas and is autonomously regulated through complex interactions between the nervous system, metabolites from insulin secreting β-cells, endothelium-derived mediators, and hormones. The islet blood flow is normally coupled to the needs for insulin release and is usually disturbed during glucose intolerance and overt diabetes. The present review provides a brief background on islet vascular function and especially focuses on available techniques to measure islet blood perfusion. The gold standard for islet blood flow measurements in experimental animals is the microsphere technique, and its advantages and disadvantages will be discussed. In humans there are still no methods to measure islet blood flow selectively, but new developments in radiological techniques hold great hopes for the future. PMID:27124642
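
    The reference-sample microsphere calculation itself is standard: spheres distribute in proportion to flow, so organ flow follows from the sphere counts and the reference withdrawal rate. A minimal sketch with hypothetical counts.

    ```python
    def organ_blood_flow(n_organ, n_ref, q_ref):
        """Reference-sample microsphere method: Q_organ = N_organ * Q_ref / N_ref."""
        return n_organ * q_ref / n_ref

    # Hypothetical counts: 950 spheres in the islets, 4200 in the reference
    # withdrawal sample drawn at a pump rate of 0.4 mL/min
    q_islets = organ_blood_flow(950, 4200, 0.4)   # mL/min
    print(f"islet blood flow ~ {q_islets:.3f} mL/min")
    ```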

  1. Steering optical comb frequencies by rotating the polarization state

    NASA Astrophysics Data System (ADS)

    Zhang, Yanyan; Zhang, Xiaofei; Yan, Lulu; Zhang, Pan; Rao, Bingjie; Han, Wei; Guo, Wenge; Zhang, Shougang; Jiang, Haifeng

    2017-12-01

    Optical frequency combs, with precise control of repetition rate and carrier-envelope-offset frequency, have revolutionized many fields, such as fine optical spectroscopy, optical frequency standards, ultra-fast science research, ultra-stable microwave generation and precise ranging measurement. However, existing high-bandwidth frequency control methods have a small dynamic range, requiring complex hybrid control techniques. To overcome this limitation, we develop a new approach, where a home-made intra-cavity electro-optic modulator tunes the polarization state of the laser signal rather than only the optical length of the cavity, to steer the frequencies of a nonlinear-polarization-rotation mode-locked laser. By taking advantage of the birefringence of the whole cavity, this approach results in not only broadband but also relatively large-dynamic-range frequency control. Experimental results show that the frequency-control dynamic range increases by at least one order of magnitude in comparison with the traditional intra-cavity electro-optic modulator technique. In addition, this technique exhibits fewer side effects than traditional frequency control methods.

  2. Supermicrosurgery: History, Applications, Training and the Future

    PubMed Central

    Badash, Ido; Gould, Daniel J.; Patel, Ketan M.

    2018-01-01

    Supermicrosurgery, a technique of dissection and anastomosis of small vessels ranging from 0.3 to 0.8 mm, has revolutionized the fields of lymphedema treatment and soft tissue reconstruction. The technique offers several distinct benefits to microsurgeons, including the ability to manipulate small vessels that were previously inaccessible, and to minimize donor-site morbidity by dissecting short pedicles in a suprafascial plane. Thus, supermicrosurgery has become increasingly popular in recent years, and its applications have greatly expanded since it was first introduced 20 years ago. While supermicrosurgery was originally developed for procedures involving salvage of the digit tip, the technique is now routinely used in a wide variety of microsurgical cases, including lymphovenous anastomoses, vascularized lymph node transfers and perforator-to-perforator anastomoses. With continued experimentation, standardization of supermicrosurgical training, and high quality studies focusing on the outcomes of these novel procedures, supermicrosurgery can become a routine and valuable component of every microsurgeon’s practice. PMID:29740586

  3. Technical aspects and recommendations for single-cell qPCR.

    PubMed

    Ståhlberg, Anders; Kubista, Mikael

    2018-02-01

    Single cells are basic physiological and biological units that can function individually as well as in groups in tissues and organs. It is central to identify, characterize and profile single cells at molecular level to be able to distinguish different kinds, to understand their functions and determine how they interact with each other. During the last decade several technologies for single-cell profiling have been developed and used in various applications, revealing many novel findings. Quantitative PCR (qPCR) is one of the most developed methods for single-cell profiling that can be used to interrogate several analytes, including DNA, RNA and protein. Single-cell qPCR has the potential to become routine methodology but the technique is still challenging, as it involves several experimental steps and few molecules are handled. Here, we discuss technical aspects and provide recommendation for single-cell qPCR analysis. The workflow includes experimental design, sample preparation, single-cell collection, direct lysis, reverse transcription, preamplification, qPCR and data analysis. Detailed reporting and sharing of experimental details and data will promote further development and make validation studies possible. Efforts aiming to standardize single-cell qPCR open up means to move single-cell analysis from specialized research settings to standard research laboratories. Copyright © 2017 Elsevier Ltd. All rights reserved.
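
    For the data-analysis step, relative quantification is often reported with the Livak 2^-ddCt method; a minimal sketch with hypothetical Cq values (single-cell workflows frequently use efficiency-corrected variants of this formula).

    ```python
    def ddct_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
        """Livak 2^-ddCt relative expression: target gene normalized to a
        reference gene and to a calibrator cell/sample. Assumes ~100% PCR
        efficiency, which single-cell variants often relax."""
        dct = ct_target - ct_ref
        dct_cal = ct_target_cal - ct_ref_cal
        return 2.0 ** -(dct - dct_cal)

    # Hypothetical Cq values for one cell relative to a calibrator cell
    print(ddct_expression(ct_target=24.1, ct_ref=18.3,
                          ct_target_cal=26.0, ct_ref_cal=18.5))
    ```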

  4. An experimental investigation of gas fuel injection with X-ray radiography

    DOE PAGES

    Swantek, Andrew B.; Duke, D. J.; Kastengren, A. L.; ...

    2017-04-21

    In this paper, an outward-opening compressed natural gas, direct injection fuel injector has been studied with single-shot x-ray radiography. Three dimensional simulations have also been performed to complement the x-ray data. Argon was used as a surrogate gas for experimental and safety reasons. This technique allows the acquisition of a quantitative mapping of the ensemble-average and standard deviation of the projected density throughout the injection event. Two dimensional ensemble average and standard deviation data are presented to investigate the quasi-steady-state behavior of the jet. Upstream of the stagnation zone, minimal shot-to-shot variation is observed. Downstream of the stagnation zone, bulk mixing is observed as the jet transitions to a subsonic turbulent jet. From the time averaged data, individual slices at all downstream locations are extracted and an Abel inversion was performed to compute the radial density distribution, which was interpolated to create three dimensional visualizations. The Abel reconstructions reveal that upstream of the stagnation zone, the gas forms an annulus with high argon density and large density gradients. Inside this annulus, a recirculation region with low argon density exists. Downstream, the jet transitions to a fully turbulent jet with Gaussian argon density distributions. These experimental data are intended to serve as a quantitative benchmark for simulations.
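
    The Abel inversion step can be sketched as follows: given the projected (line-of-sight) density F(y) of an axisymmetric field, the radial density is recovered from the inverse Abel transform. The quadrature below is deliberately crude and the profile synthetic; it is not the authors' reconstruction code.

    ```python
    import numpy as np

    def abel_invert(F, y):
        """Inverse Abel transform for an axisymmetric field:
        f(r) = -(1/pi) * integral_r^R F'(y) / sqrt(y^2 - r^2) dy.
        Crude trapezoidal quadrature that starts one sample past the
        integrable singularity at y = r."""
        dFdy = np.gradient(F, y)
        f = np.zeros_like(F)
        for i, r in enumerate(y[:-1]):
            yy, gg = y[i + 1:], dFdy[i + 1:]
            f[i] = -np.trapz(gg / np.sqrt(yy**2 - r**2), yy) / np.pi
        return f

    # Hypothetical projected-density profile of a Gaussian jet slice
    y = np.linspace(0.0, 5.0, 400)
    sigma = 1.0
    f_true = np.exp(-y**2 / (2 * sigma**2))           # radial density
    F = np.sqrt(2 * np.pi) * sigma * f_true           # its Abel transform
    f_rec = abel_invert(F, y)   # matches f_true except near the outer edge
    ```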

  5. Fundamental Physics with Antihydrogen

    NASA Astrophysics Data System (ADS)

    Hangst, J. S.

    Antihydrogen—the antimatter equivalent of the hydrogen atom—is of fundamental interest as a test bed for universal symmetries—such as CPT and the Weak Equivalence Principle for gravitation. Invariance under CPT requires that hydrogen and antihydrogen have the same spectrum. Antimatter is of course intriguing because of the observed baryon asymmetry in the universe—currently unexplained by the Standard Model. At the CERN Antiproton Decelerator (AD) [1], several groups have been working diligently since 1999 to produce, trap, and study the structure and behaviour of the antihydrogen atom. One of the main thrusts of the AD experimental program is to apply precision techniques from atomic physics to the study of antimatter. Such experiments complement the high-energy searches for physics beyond the Standard Model. Antihydrogen is the only atom of antimatter to be produced in the laboratory. This is not so unfortunate, as its matter equivalent, hydrogen, is one of the most well-understood and accurately measured systems in all of physics. It is thus very compelling to undertake experimental examinations of the structure of antihydrogen. As experimental spectroscopy of antihydrogen has yet to begin in earnest, I will give here a brief introduction to some of the ion and atom trap developments necessary for synthesizing and trapping antihydrogen, so that it can be studied.

  6. An experimental investigation of gas fuel injection with X-ray radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swantek, Andrew B.; Duke, D. J.; Kastengren, A. L.

    In this paper, an outward-opening compressed natural gas, direct injection fuel injector has been studied with single-shot x-ray radiography. Three dimensional simulations have also been performed to complement the x-ray data. Argon was used as a surrogate gas for experimental and safety reasons. This technique allows the acquisition of a quantitative mapping of the ensemble-average and standard deviation of the projected density throughout the injection event. Two dimensional ensemble average and standard deviation data are presented to investigate the quasi-steady-state behavior of the jet. Upstream of the stagnation zone, minimal shot-to-shot variation is observed. Downstream of the stagnation zone, bulk mixing is observed as the jet transitions to a subsonic turbulent jet. From the time averaged data, individual slices at all downstream locations are extracted and an Abel inversion was performed to compute the radial density distribution, which was interpolated to create three dimensional visualizations. The Abel reconstructions reveal that upstream of the stagnation zone, the gas forms an annulus with high argon density and large density gradients. Inside this annulus, a recirculation region with low argon density exists. Downstream, the jet transitions to a fully turbulent jet with Gaussian argon density distributions. These experimental data are intended to serve as a quantitative benchmark for simulations.

  7. Active music therapy approach for stroke patients in the post-acute rehabilitation.

    PubMed

    Raglio, Alfredo; Zaliani, Alberto; Baiardi, Paola; Bossi, Daniela; Sguazzin, Cinzia; Capodaglio, Edda; Imbriani, Chiara; Gontero, Giulia; Imbriani, Marcello

    2017-05-01

    Guidelines in stroke rehabilitation recommend the use of a multidisciplinary approach. Different approaches and techniques with music are used in the stroke rehabilitation to improve motor and cognitive functions but also psychological outcomes. In this randomized controlled pilot trial, relational active music therapy approaches were tested in the post-acute phase of disease. Thirty-eight hospitalized patients with ischemic and hemorrhagic stroke were recruited and allocated in two groups. The experimental group underwent the standard of care (physiotherapy and occupational therapy daily sessions) and relational active music therapy treatments. The control group underwent the standard of care only. Motor functions and psychological aspects were assessed before and after treatments. Music therapy process was also evaluated using a specific rating scale. All groups showed a positive trend in quality of life, functional and disability levels, and gross mobility. The experimental group showed a decrease of anxiety and, in particular, of depression (p = 0.016). In addition, the strength of non-dominant hand (grip) significantly increased in the experimental group (p = 0.041). Music therapy assessment showed a significant improvement over time of non-verbal and sonorous-music relationships. Future studies, including a greater number of patients and follow-up evaluations, are needed to confirm promising results of this study.

  8. Experimental study of starting plumes simulating cumulus cloud flows in the atmosphere

    NASA Astrophysics Data System (ADS)

    Subrahmanyam, Duvvuri; Sreenivas, K. R.; Bhat, G. S.; Diwan, S. S.; Narasimha, Roddam

    2009-11-01

    Turbulent jets and plumes subjected to off-source volumetric heating have been studied experimentally and numerically by Narasimha and co-workers and others over the past two decades. The off-source heating attempts to simulate the latent heat release that occurs in cumulus clouds on condensation of water vapour. This heat release plays a crucial role in determining the overall cloud shape among other things. Previous studies investigated steady state jets and plumes that had attained similarity upstream of heat injection. A better understanding and appreciation of the fluid dynamics of cumulus clouds should be possible by study of starting plumes. Experiments have been set up at JNCASR (Bangalore) using experimental techniques developed previously but incorporating various improvements. To date, experiments have been performed on plumes at Re of 1000 and 2250, with three different heating levels in each case. Axial sections of the flow have been studied using standard PLIF techniques. The flow visualization provides us with data on the temporal evolution of the starting plume. It is observed that the broad nature of the effect of off-source heating on the starting plumes is generally consistent with the results obtained previously on steady state flows. More complete results and a critical discussion will be presented at the upcoming meeting.

  9. A new code for Galileo

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1988-01-01

    Over the past six to eight years, an extensive research effort was conducted to investigate advanced coding techniques which promised to yield more coding gain than is available with current NASA standard codes. The delay in Galileo's launch due to the temporary suspension of the shuttle program provided the Galileo project with an opportunity to evaluate the possibility of including some version of the advanced codes as a mission enhancement option. A study was initiated last summer to determine if substantial coding gain was feasible for Galileo and, if so, to recommend a suitable experimental code for use as a switchable alternative to the current NASA-standard code. The Galileo experimental code study resulted in the selection of a code with constraint length 15 and rate 1/4. The code parameters were chosen to optimize performance within cost and risk constraints consistent with retrofitting the new code into the existing Galileo system design and launch schedule. The particular code was recommended after a very limited search among good codes with the chosen parameters. It will theoretically yield about 1.5 dB enhancement under idealizing assumptions relative to the current NASA-standard code at Galileo's desired bit error rates. This ideal predicted gain includes enough cushion to meet the project's target of at least 1 dB enhancement under real, non-ideal conditions.
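
    For illustration, a generic convolutional encoder of the same rate and constraint length can be sketched as below; the generator polynomials are hypothetical placeholders, not those of the Galileo experimental code.

    ```python
    import numpy as np

    def conv_encode(bits, gens, K):
        """Rate-1/len(gens) convolutional encoder with constraint length K.
        `gens` are generator polynomials given as K-bit integers (MSB =
        newest input bit). The polynomials used below are placeholders; the
        actual (15, 1/4) Galileo experimental generators are not reproduced."""
        state = 0
        out = []
        for b in bits:
            state = ((state >> 1) | (int(b) << (K - 1))) & ((1 << K) - 1)
            out.extend(bin(state & g).count('1') & 1 for g in gens)
        return np.array(out, dtype=np.uint8)

    # Four hypothetical K = 15 generators; real codes use polynomials chosen
    # for maximal free distance.
    gens = [0o46321, 0o51271, 0o63667, 0o70535]
    coded = conv_encode(np.random.default_rng(3).integers(0, 2, 64), gens, K=15)
    print(len(coded), "coded bits for 64 input bits (rate 1/4)")
    ```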

  10. Quantification of the overall measurement uncertainty associated with the passive moss biomonitoring technique: Sample collection and processing.

    PubMed

    Aboal, J R; Boquete, M T; Carballeira, A; Casanova, A; Debén, S; Fernández, J A

    2017-05-01

    In this study we examined 6080 data gathered by our research group during more than 20 years of research on the moss biomonitoring technique, in order to quantify the variability generated by different aspects of the protocol and to calculate the overall measurement uncertainty associated with the technique. The median variance of the concentrations of different pollutants measured in moss tissues attributed to the different methodological aspects was high, reaching values of 2851 (ng·g⁻¹)² for Cd (sample treatment), 35.1 (μg·g⁻¹)² for Cu (sample treatment), and 861.7 (ng·g⁻¹)² for Hg (material selection). These variances correspond to standard deviations that constitute 67, 126 and 59% of the regional background levels of these elements in the study region. The overall measurement uncertainty associated with the worst experimental protocol (5 subsamples, refrigerated, washed, 5 × 5 m sampling area and once a year sampling) was between 2 and 6 times higher than that associated with the optimal protocol (30 subsamples, dried, unwashed, 20 × 20 m sampling area and once a week sampling), and between 1.5 and 7 times higher than that associated with the standardized protocol (30 subsamples and once a year sampling). The overall measurement uncertainty associated with the standardized protocol could generate variations of between 14 and 47% in the regional background levels of Cd, Cu, Hg, Pb and Zn in the study area, and much higher levels of variation at polluted sampling sites. We demonstrated that although the overall measurement uncertainty of the technique is still high, it can be reduced by using already well defined aspects of the protocol. Further standardization of the protocol, together with application of the information on the overall measurement uncertainty, would improve the reliability and comparability of the results of different biomonitoring studies, thus extending use of the technique beyond the context of scientific research. Copyright © 2017 Elsevier Ltd. All rights reserved.
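
    The arithmetic connecting the quoted variance components to a combined uncertainty is simple root-sum-of-squares under an independence assumption; a sketch using the Cd figure from the abstract (the background level is back-calculated from the quoted 67% and is itself an assumption).

    ```python
    import numpy as np

    # Variance components for Cd from the abstract-level figures, (ng/g)^2;
    # independence of the methodological factors is assumed here.
    variance_components = {"sample treatment": 2851.0}   # extend per factor
    background_cd = 80.0   # ng/g, implied by "67% of background" (assumption)

    u = np.sqrt(sum(variance_components.values()))       # combined std. dev.
    print(f"u = {u:.1f} ng/g = {100 * u / background_cd:.0f}% of background")
    ```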

  11. How 3D patient-specific instruments improve accuracy of pelvic bone tumour resection in a cadaveric study.

    PubMed

    Sallent, A; Vicente, M; Reverté, M M; Lopez, A; Rodríguez-Baeza, A; Pérez-Domínguez, M; Velez, R

    2017-10-01

    To assess the accuracy of patient-specific instruments (PSIs) versus standard manual technique and the precision of computer-assisted planning and PSI-guided osteotomies in pelvic tumour resection. CT scans were obtained from five female cadaveric pelvises. Five osteotomies were designed using Mimics software: sacroiliac, biplanar supra-acetabular, two parallel iliopubic and ischial. For cases of the left hemipelvis, PSIs were designed to guide standard oscillating saw osteotomies and later manufactured using 3D printing. Osteotomies were performed using the standard manual technique in cases of the right hemipelvis. Post-resection CT scans were quantitatively analysed. Student's t-test and Mann-Whitney U test were used. Compared with the manual technique, PSI-guided osteotomies improved accuracy by a mean 9.6 mm (p < 0.008) in the sacroiliac osteotomies, 6.2 mm (p < 0.008) and 5.8 mm (p < 0.032) in the biplanar supra-acetabular, 3 mm (p < 0.016) in the ischial and 2.2 mm (p < 0.032) and 2.6 mm (p < 0.008) in the parallel iliopubic osteotomies, with a mean linear deviation of 4.9 mm (p < 0.001) for all osteotomies. Of the manual osteotomies, 53% (n = 16) had a linear deviation > 5 mm and 27% (n = 8) were > 10 mm. In the PSI cases, deviations were 10% (n = 3) and 0% (n = 0), respectively. For angular deviation from pre-operative plans, we observed a mean improvement of 7.06° (p < 0.001) in pitch and 2.94° (p < 0.001) in roll, comparing PSI and the standard manual technique. In an experimental study, computer-assisted planning and PSIs improved accuracy in pelvic tumour resections, bringing osteotomy results closer to the parameters set in pre-operative planning, as compared with standard manual techniques. Cite this article: A. Sallent, M. Vicente, M. M. Reverté, A. Lopez, A. Rodríguez-Baeza, M. Pérez-Domínguez, R. Velez. How 3D patient-specific instruments improve accuracy of pelvic bone tumour resection in a cadaveric study. Bone Joint Res 2017;6:577-583. DOI: 10.1302/2046-3758.610.BJR-2017-0094.R1. © 2017 Sallent et al.

  12. Higher resolution satellite remote sensing and the impact on image mapping

    USGS Publications Warehouse

    Watkins, Allen H.; Thormodsgard, June M.

    1987-01-01

    Recent advances in spatial, spectral, and temporal resolution of civil land remote sensing satellite data are presenting new opportunities for image mapping applications. The U.S. Geological Survey's experimental satellite image mapping program is evolving toward larger scale image map products with increased information content as a result of improved image processing techniques and increased resolution. Thematic mapper data are being used to produce experimental image maps at 1:100,000 scale that meet established U.S. and European map accuracy standards. Availability of high quality, cloud-free, 30-meter ground resolution multispectral data from the Landsat thematic mapper sensor, along with 10-meter ground resolution panchromatic and 20-meter ground resolution multispectral data from the recently launched French SPOT satellite, present new cartographic and image processing challenges. The need to fully exploit these higher resolution data increases the complexity of processing the images into large-scale image maps. The removal of radiometric artifacts and noise prior to geometric correction can be accomplished by using a variety of image processing filters and transforms. Sensor modeling and image restoration techniques allow maximum retention of spatial and radiometric information. An optimum combination of spectral information and spatial resolution can be obtained by merging different sensor types. These processing techniques are discussed and examples are presented.

  13. Lung Morphometry with Hyperpolarized 129Xe: Theoretical Background

    PubMed Central

    Sukstanskii, A.L.; Yablonskiy, D.A.

    2011-01-01

    The 3He lung morphometry technique, based on MRI measurements of hyperpolarized 3He gas diffusion in lung airspaces, provides unique information on the lung microstructure at the alveolar level. In vivo 3D tomographic images of standard morphological parameters (airspace chord length, lung parenchyma surface-to-volume ratio, number of alveoli per unit volume) can be generated from a rather short (several seconds) MRI scan. The technique is based on a theory of gas diffusion in lung acinar airways and experimental measurements of diffusion attenuated MRI signal. The present work aims at developing the theoretical background of a similar technique based on hyperpolarized 129Xe gas. As the diffusion coefficient and gyromagnetic ratio of 129Xe gas are substantially different from those of 3He gas, the specific details of the theory and experimental measurements with 129Xe should be amended. We establish phenomenological relationships between acinar airway geometrical parameters and the diffusion attenuated MR signal for human and small animal lungs, both normal lungs and lungs with mild emphysema. Optimal diffusion times are shown to be about 5 ms for human and 1.3 ms for small animals. The expected uncertainties in measuring main morphometrical parameters of the lungs are estimated in the framework of Bayesian probability theory. PMID:21713985

  14. Nanosilica coating for bonding improvements to zirconia.

    PubMed

    Chen, Chen; Chen, Gang; Xie, Haifeng; Dai, Wenyong; Zhang, Feimin

    2013-01-01

    Resin bonding to zirconia cannot be established from standard methods that are currently utilized in conventional silica-based dental ceramics. The solution-gelation (sol-gel) process is a well-developed silica-coating technique used to modify the surface of nonsilica-based ceramics. Here, we use this technique to improve resin bonding to zirconia, which we compared to zirconia surfaces treated with alumina sandblasting and tribochemical silica coating. We used the shear bond strength test to examine the effect of the various coatings on the short-term resin bonding of zirconia. Furthermore, we employed field emission scanning electron microscopy, energy-dispersive X-ray spectroscopy, atomic force microscopy, and Fourier transform infrared spectroscopy to characterize the zirconia surfaces. Water-mist spraying was used to evaluate the durability of the coatings. To evaluate the biological safety of the experimental sol-gel silica coating, we conducted an in vitro Salmonella typhimurium reverse mutation assay (Ames mutagenicity test), cytotoxicity tests, and in vivo oral mucous membrane irritation tests. When compared to the conventional tribochemical silica coating, the experimental sol-gel silica coating provided the same shear bond strength, higher silicon contents, and better durability. Moreover, we observed no apparent mutagenicity, cytotoxicity, or irritation in this study. Therefore, the sol-gel technique represents a promising method for producing silica coatings on zirconia.

  15. Nanosilica coating for bonding improvements to zirconia

    PubMed Central

    Chen, Chen; Chen, Gang; Xie, Haifeng; Dai, Wenyong; Zhang, Feimin

    2013-01-01

    Resin bonding to zirconia cannot be established from standard methods that are currently utilized in conventional silica-based dental ceramics. The solution–gelation (sol–gel) process is a well-developed silica-coating technique used to modify the surface of nonsilica-based ceramics. Here, we use this technique to improve resin bonding to zirconia, which we compared to zirconia surfaces treated with alumina sandblasting and tribochemical silica coating. We used the shear bond strength test to examine the effect of the various coatings on the short-term resin bonding of zirconia. Furthermore, we employed field emission scanning electron microscopy, energy-dispersive X-ray spectroscopy, atomic force microscopy, and Fourier transform infrared spectroscopy to characterize the zirconia surfaces. Water-mist spraying was used to evaluate the durability of the coatings. To evaluate the biological safety of the experimental sol–gel silica coating, we conducted an in vitro Salmonella typhimurium reverse mutation assay (Ames mutagenicity test), cytotoxicity tests, and in vivo oral mucous membrane irritation tests. When compared to the conventional tribochemical silica coating, the experimental sol–gel silica coating provided the same shear bond strength, higher silicon contents, and better durability. Moreover, we observed no apparent mutagenicity, cytotoxicity, or irritation in this study. Therefore, the sol–gel technique represents a promising method for producing silica coatings on zirconia. PMID:24179333

  16. Determination of dynamic fracture toughness using a new experimental technique

    NASA Astrophysics Data System (ADS)

    Cady, Carl M.; Liu, Cheng; Lovato, Manuel L.

    2015-09-01

    In other studies, dynamic fracture toughness has been measured using Charpy impact and modified Hopkinson bar techniques. In this paper, results will be shown for the measurement of fracture toughness using a new test geometry. The crack propagation velocities range from ~0.15 mm/s to 2.5 m/s. Digital image correlation (DIC) will be the technique used to measure both the strain and the crack growth rates. The boundary of the crack is determined using the correlation coefficient generated during image analysis, and with interframe timing the crack growth rate and crack opening can be determined. A comparison of static and dynamic loading experiments will be made for brittle polymeric materials. The analysis technique presented by Sammis et al. [1] is a semi-empirical solution; however, additional Linear Elastic Fracture Mechanics analysis of the strain fields generated as part of the DIC analysis allows for the more commonly used method resembling the crack tip opening displacement (CTOD) experiment. It should be noted that this technique was developed because limited amounts of material were available and crack growth rates were too fast for a standard CTOD method.
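
    The crack-growth-rate extraction described here reduces to finite differences once crack-tip positions and the interframe timing are known; the positions and frame interval below are hypothetical:

        # Crack velocity from DIC frames: tip positions come from the
        # correlation-coefficient map, one per frame (hypothetical values).
        import numpy as np

        frame_dt = 1e-4                                   # s between frames
        tip_x = np.array([0.0, 0.12, 0.31, 0.55, 0.86])   # tip position (mm)

        velocity = np.gradient(tip_x, frame_dt)           # crack speed (mm/s)
        print(velocity)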

  17. Backscatter X-Ray Development for Space Vehicle Thermal Protection Systems

    NASA Astrophysics Data System (ADS)

    Bartha, Bence B.; Hope, Dale; Vona, Paul; Born, Martin; Corak, Tony

    2011-06-01

    The Backscatter X-Ray (BSX) imaging technique is used for various single-sided inspection purposes. Previously developed BSX techniques for spray-on foam insulation (SOFI) have been used for detecting defects in Space Shuttle External Tank foam insulation. The developed BSX hardware and techniques are currently being enhanced to advance Non-Destructive Evaluation (NDE) methods for future space vehicle applications. Various Thermal Protection System (TPS) materials were inspected using the enhanced BSX imaging techniques, investigating the capability of the method to detect voids and other discontinuities at various locations within each material. Calibration standards were developed for the TPS materials in order to characterize and develop enhanced BSX inspection capabilities. The ability of the BSX technique to detect both manufactured and natural defects was also studied and compared to through-transmission x-ray techniques. The x-ray energy, source-to-object distance, x-ray angle, focal spot size, and x-ray detector configuration were parameters playing a significant role in the sensitivity of the BSX technique for imaging various materials and defects. Image processing of the results also produced a significant increase in the sensitivity of the technique. The experimental results showed BSX to be a viable inspection technique for space vehicle TPS systems.

  18. Development of a flexible microfluidic system integrating magnetic micro-actuators for trapping biological species

    NASA Astrophysics Data System (ADS)

    Fulcrand, R.; Jugieu, D.; Escriba, C.; Bancaud, A.; Bourrier, D.; Boukabache, A.; Gué, A. M.

    2009-10-01

    A flexible microfluidic system embedding microelectromagnets has been designed, modeled, and fabricated using a photosensitive resin as the structural material. The fabrication process involves the integration of micro-coils in a multilayer SU-8 microfluidic system by combining standard electroplating and dry film lamination. This technique offers numerous advantages in terms of integration, biocompatibility, and chemical resistance. Various designs of micro-coils, including spiral, square, and serpentine wires, have been simulated and experimentally tested. It has been established that thermal dissipation in micro-coils depends strongly on the number of turns and the current density, but remains compatible with biological applications. Real-time experiments show that these micro-actuators are efficient in trapping magnetic micro-beads without any external field source or permanent magnet, and highlight that the size of the microfluidic channels has been adequately designed for optimal trapping. Moreover, we trap magnetic beads in less than 2 s and release them instantaneously into the micro-channel. The actuation relies solely on electric fields, which are easier to control than standard magneto-fluidic modules.

  19. A systematic review of interventions to increase the use of standardized outcome measures by rehabilitation professionals.

    PubMed

    Colquhoun, Heather L; Lamontagne, Marie-Eve; Duncan, Edward As; Fiander, Michelle; Champagne, Catherine; Grimshaw, Jeremy M

    2017-03-01

    To determine the types and effectiveness of interventions to increase the knowledge about, attitudes towards, and use of standardized outcome measures by rehabilitation professionals. An electronic search was conducted using Medline, EMBASE, PsycINFO, CINAHL, Ergonomics Abstracts, and SPORTDiscus; the search is current to February 2016. All study designs testing interventions were included, as were all provider and patient types. Two reviewers independently conducted a title and abstract review, followed by a full-text review. Two reviewers independently extracted a priori variables and used consensus for disagreements. Quality assessment was conducted using the quality assessment tool for quantitative studies published by the Effective Public Health Practice Project. We identified 11 studies involving at least 1200 providers. Nine of the studies showed improvements in outcome measure use rates, but only three of these studies used an experimental or quasi-experimental design. Eight of the studies used an educational approach in the intervention and three used audit and feedback. Poor intervention description and study quality limited recommendations. Increased attention to testing interventions focused on known barriers, matched to behavior change techniques, and with stronger designs is warranted.

  20. Experimental study on the flow separation and self-excited oscillation phenomenon in a rectangular duct

    NASA Astrophysics Data System (ADS)

    Xiong, Bing; Wang, Zhen-Guo; Fan, Xiao-Qiang; Wang, Yi

    2017-04-01

    To study the characteristics of flow separation and self-excited oscillation of a shock train in a rectangular duct, a simple test case has been conducted and analyzed. The high-speed Schlieren technique and high-frequency pressure measurements have been adopted to collect the data. The experimental results show that there are two separation modes in the duct under a Mach 3 incoming flow condition. The separation mode switch has great effects on the flow field, such as the pressure distribution, the standard deviation distribution and so on. The separation mode switch can be judged from the history of the pressure standard deviation. Regarding the self-excited oscillation of a shock train, the frequency contents in the undisturbed region, the intermittent region, and the separation bubble have been compared. It was found that the low-frequency disturbance induced by the upstream shock foot motions can travel downstream and that its frequency is magnified by the separation bubble. The oscillation of the small shock foot and the oscillation of the large shock foot are associated with each other rather than oscillating independently.
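
    The separation-mode indicator described here, the history of the pressure standard deviation, can be sketched as a sliding-window statistic over the transducer signal; the signal and window length below are assumed, synthetic values:

        # Rolling standard deviation of a pressure signal; a sustained jump in
        # the rolling std marks the separation-mode switch.
        import numpy as np

        rng = np.random.default_rng(1)
        p = rng.normal(1.0, 0.02, 50_000)            # pressure samples
        p[25_000:] += rng.normal(0, 0.08, 25_000)    # mode switch: larger rms

        win = 512                                    # assumed window length
        windows = np.lib.stride_tricks.sliding_window_view(p, win)
        rolling_std = windows.std(axis=1)
        print(rolling_std[:3], rolling_std[-3:])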

  1. Novel Gemini cationic surfactants as anti-corrosion for X-65 steel dissolution in oilfield produced water under sweet conditions: Combined experimental and computational investigations

    NASA Astrophysics Data System (ADS)

    Migahed, M. A.; elgendy, Amr.; EL-Rabiei, M. M.; Nady, H.; Zaki, E. G.

    2018-05-01

    Two new series of Gemini di-quaternary ammonium salts were synthesized, characterized by FTIR and 1H NMR spectroscopic techniques, and evaluated as corrosion inhibitors for X-65 steel dissolution in deep oil well formation water saturated with CO2. The anti-corrosion performance of these compounds was studied by different electrochemical techniques (potentiodynamic polarization and AC impedance), surface morphology analysis (SEM and EDX), and quantum chemical calculations. Results showed that the synthesized compounds are mixed-type inhibitors and that the inhibition capability is influenced by the inhibitor dose and the spacer substitution in their structure, as indicated by Tafel plots. Surface-active parameters were determined from the surface tension profile. The synthesized compounds adsorb according to the Langmuir adsorption model with physicochemical adsorption, as inferred from the standard free energy of adsorption (ΔG°ads) values. Surface morphology (SEM and EDX) data for inhibitor (II) show the development of an adsorbed film on the steel specimen. Finally, the experimental results were supported by quantum chemical calculations using DFT.
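
    For reference, a hedged sketch of the textbook relations behind the adsorption conclusion (standard forms, not fitted to this paper's data): the Langmuir isotherm and the standard adsorption free energy, where 55.5 mol/L is the molar concentration of water:

        % Langmuir fit of surface coverage theta vs. inhibitor concentration,
        % and the free energy used to classify the adsorption mechanism.
        \frac{C_{\mathrm{inh}}}{\theta} = \frac{1}{K_{\mathrm{ads}}} + C_{\mathrm{inh}},
        \qquad
        \Delta G^{\circ}_{\mathrm{ads}} = -RT \ln\left( 55.5\, K_{\mathrm{ads}} \right)

    Values of ΔG°ads between roughly -20 and -40 kJ/mol are conventionally read as mixed physicochemical adsorption, consistent with the paper's conclusion.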

  2. Demonstration of Wavelet Techniques in the Spectral Analysis of Bypass Transition Data

    NASA Technical Reports Server (NTRS)

    Lewalle, Jacques; Ashpis, David E.; Sohn, Ki-Hyeon

    1997-01-01

    A number of wavelet-based techniques for the analysis of experimental data are developed and illustrated. A multiscale analysis based on the Mexican hat wavelet is demonstrated as a tool for acquiring physical and quantitative information not obtainable by standard signal analysis methods. Experimental data for the analysis came from simultaneous hot-wire velocity traces in a bypass transition of the boundary layer on a heated flat plate. A pair of traces (two components of velocity) at one location was excerpted. A number of ensemble and conditional statistics related to dominant time scales for energy and momentum transport were calculated. The analysis revealed a lack of energy-dominant time scales inside turbulent spots but identified transport-dominant scales inside spots that account for the largest part of the Reynolds stress. Momentum transport was much more intermittent than were energetic fluctuations. This work is the first step in a continuing study of the spatial evolution of these scale-related statistics, the goal being to apply the multiscale analysis results to improve the modeling of transitional and turbulent industrial flows.
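
    As a hedged illustration of this kind of multiscale analysis (not the authors' exact pipeline), a Mexican hat wavelet can be convolved with a signal at several scales and the energy per scale compared:

        # Multiscale analysis with a Mexican hat (Ricker) wavelet on a
        # synthetic two-tone signal; energy per scale shows where it lives.
        import numpy as np

        def ricker(n, a):
            """Mexican hat wavelet with width parameter a, sampled at n points."""
            t = np.arange(n) - (n - 1) / 2
            return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

        t = np.linspace(0, 1, 2048)
        signal = np.sin(2 * np.pi * 25 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

        scales = [4, 8, 16, 32]
        coeffs = np.array([np.convolve(signal, ricker(10 * a, a), mode="same")
                           for a in scales])
        energy = (coeffs ** 2).mean(axis=1)           # energy per scale
        print(dict(zip(scales, energy.round(4))))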

  3. Educated Guesses and Other Ways to Address the Pharmacological Uncertainty of Designer Drugs

    PubMed Central

    Berning, Moritz

    2016-01-01

    This study examines how experimentation with designer drugs is mediated by the Internet. We selected a popular drug forum that presents reports on self-experimentation with little or even completely unexplored designer drugs to examine: (1) how participants report their “trying out” of new compounds and (2) how participants reduce the pharmacological uncertainty associated with using these substances. Our methods included passive observation online, engaging more actively with the online community using an avatar, and off-line interviews with key interlocutors to validate our online findings. This article reflects on how forum participants experiment with designer drugs, their trust in suppliers and the testimonials of others, the use of ethno-scientific techniques that involve numerical weighing, “allergy dosing,” and the use of standardized trip reports. We suggest that these techniques contribute to a sense of control in the face of the possible toxicity of unknown or little-known designer drugs. The online reporting of effects allows users to experience not only the thrill of a new kind of high but also connection with others in the self-experimenting drug community. PMID:27721526

  4. Development of a custom-designed echo particle image velocimetry system for multi-component hemodynamic measurements: system characterization and initial experimental results

    NASA Astrophysics Data System (ADS)

    Liu, Lingli; Zheng, Hairong; Williams, Logan; Zhang, Fuxing; Wang, Rui; Hertzberg, Jean; Shandas, Robin

    2008-03-01

    We have recently developed an ultrasound-based velocimetry technique, termed echo particle image velocimetry (Echo PIV), to measure multi-component velocity vectors and local shear rates in arteries and opaque fluid flows by identifying and tracking flow tracers (ultrasound contrast microbubbles) within these flow fields. The original system was implemented on images obtained from a commercial echocardiography scanner. Although promising, this system was limited in spatial resolution and measurable velocity range. In this work, we propose standard rules for characterizing Echo PIV performance and report on a custom-designed Echo PIV system with increased spatial resolution and measurable velocity range. Then we employed this system for initial measurements on tube flows, rotating flows and in vitro carotid artery and abdominal aortic aneurysm (AAA) models to acquire the local velocity and shear rate distributions in these flow fields. The experimental results verified the accuracy of this technique and indicated the promise of the custom Echo PIV system in capturing complex flow fields non-invasively.
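
    The displacement-estimation core that Echo PIV shares with optical PIV can be sketched in a few lines: locate the cross-correlation peak between two interrogation windows from successive frames. The windows here are synthetic; real Echo PIV windows come from B-mode images of microbubble tracers:

        # FFT-based cross-correlation of two interrogation windows; the peak
        # location gives the mean tracer displacement between frames.
        import numpy as np

        rng = np.random.default_rng(2)
        win_a = rng.random((32, 32))
        win_b = np.roll(win_a, (3, 5), axis=(0, 1))   # known shift (rows, cols)

        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real

        peak = np.unravel_index(corr.argmax(), corr.shape)
        # Indices above half the window wrap to negative displacements.
        dy, dx = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
        print(dy, dx)                                 # recovers (3, 5)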

  5. Online and offline experimental techniques for polycyclic aromatic hydrocarbons recovery and measurement.

    PubMed

    Comandini, A; Malewicki, T; Brezinsky, K

    2012-03-01

    The implementation of techniques aimed at improving engine performance and reducing particulate matter (PM) pollutant emissions is strongly influenced by the limited understanding of the polycyclic aromatic hydrocarbon (PAH) formation chemistry that produces PM emissions in combustion devices. New experimental results which examine the formation of multi-ring compounds are required. The present investigation focuses on two techniques for such an experimental examination by recovery of PAH compounds from a typical combustion-oriented experimental apparatus. The online technique discussed constitutes an optimal, but not always feasible, approach. Nevertheless, a detailed description of a new online sampling system is provided which can serve as a reference for future applications to different experimental set-ups. In comparison, an offline technique, which is sometimes more experimentally feasible but not necessarily optimal, has been studied in detail for the recovery of a variety of compounds with different properties, including naphthalene, biphenyl, and iodobenzene. The recovery results from both techniques were excellent, with an error in the total carbon balance of around 10% for the online technique and an uncertainty in the measurement of single species of around 7% for the offline technique. Although both techniques proved to be suitable for measurement of large PAH compounds, the online technique represents the optimal solution in view of the simplicity of the corresponding experimental procedure. On the other hand, the offline technique represents a valuable solution in those cases where the online technique cannot be implemented.

  6. Insecticide resistance profile of Anopheles gambiae from a phase II field station in Cové, southern Benin: implications for the evaluation of novel vector control products.

    PubMed

    Ngufor, Corine; N'Guessan, Raphael; Fagbohoun, Josias; Subramaniam, Krishanthi; Odjo, Abibatou; Fongnikin, Augustin; Akogbeto, Martin; Weetman, David; Rowland, Mark

    2015-11-18

    Novel indoor residual spraying (IRS) and long-lasting insecticidal net (LLIN) products aimed at improving the control of pyrethroid-resistant malaria vectors have to be evaluated in Phase II semi-field experimental studies against highly pyrethroid-resistant mosquitoes. To better understand their performance it is necessary to fully characterize the species composition, resistance status and resistance mechanisms of the vector populations in the experimental hut sites. Bioassays were performed to assess phenotypic insecticide resistance in the malaria vector population at a newly constructed experimental hut site in Cové, a rice growing area in southern Benin, being used for WHOPES Phase II evaluation of newly developed LLIN and IRS products. The efficacy of standard WHOPES-approved pyrethroid LLIN and IRS products was also assessed in the experimental huts. Diagnostic genotyping techniques and microarray studies were performed to investigate the genetic basis of pyrethroid resistance in the Cové Anopheles gambiae population. The vector population at the Cové experimental hut site consisted of a mixture of Anopheles coluzzii and An. gambiae s.s. with the latter occurring at lower frequencies (23%) and only in samples collected in the dry season. There was a high prevalence of resistance to pyrethroids and DDT (>90% bioassay survival) with pyrethroid resistance intensity reaching 200-fold compared to the laboratory susceptible An. gambiae Kisumu strain. Standard WHOPES-approved pyrethroid IRS and LLIN products were ineffective in the experimental huts against this vector population (8-29% mortality). The L1014F allele frequency was 89%. CYP6P3, a cytochrome P450 validated as an efficient metabolizer of pyrethroids, was over-expressed. Characterizing pyrethroid resistance at Phase II field sites is crucial to the accurate interpretation of the performance of novel vector control products. The strong levels of pyrethroid resistance at the Cové experimental hut station make it a suitable site for Phase II experimental hut evaluations of novel vector control products, which aim for improved efficacy against pyrethroid-resistant malaria vectors to WHOPES standards. The resistance genes identified can be used as markers for further studies investigating the resistance management potential of novel mixture LLIN and IRS products tested at the site.

  7. Efficient Analysis of Mass Spectrometry Data Using the Isotope Wavelet

    NASA Astrophysics Data System (ADS)

    Hussong, Rene; Tholey, Andreas; Hildebrandt, Andreas

    2007-09-01

    Mass spectrometry (MS) has become today's de facto standard for high-throughput analysis in proteomics research. Its applications range from toxicity analysis to MS-based diagnostics. Often, the time spent on the MS experiment itself is significantly less than the time necessary to interpret the measured signals, since the amount of data can easily exceed several gigabytes. In addition, automated analysis is hampered by baseline artifacts, chemical as well as electrical noise, and an irregular spacing of data points. Thus, filtering techniques originating from signal and image analysis are commonly employed to address these problems. Unfortunately, smoothing, baseline reduction, and in particular a resampling of data points can affect important characteristics of the experimental signal. To overcome these problems, we propose a new family of wavelet functions based on the isotope wavelet, which is hand-tailored for the analysis of mass spectrometry data. The resulting technique is theoretically well-founded and compares very well with standard peak-picking tools, since it is highly robust against noise spoiling the data, but at the same time sufficiently sensitive to detect even low-abundance peptides.

  8. Novel D-π-A-π-D type organic chromophores for second harmonic generation and multi-photon absorption applications

    NASA Astrophysics Data System (ADS)

    Aditya, Pusala; Kumar, Hari; Kumar, Sunil; Rajashekar, Muralikrishna, M.; Muthukumar, V. Sai; Kumar, B. Siva; Sai, S. Siva Sankara; Rao, G. Nageshwar

    2013-06-01

    We report here the optical and non-linear optical properties of six novel bis-chalcone D-π-A-π-D derivatives of diarylideneacetone (DBA). These derivatives were synthesized by the Claisen-Schmidt condensation reaction and were well characterized using FTIR, 1H NMR, 13C NMR, UV-visible absorption, and mass spectroscopic techniques. The optical bandgap of each DBA derivative was determined both experimentally (UV-visible spectra and Tauc plot) and theoretically by ab initio DFT calculations using the SIESTA software package; the two were found to be in close agreement. Second harmonic generation from these organic chromophores was studied by the standard Kurtz-Perry powder SHG method at 1064 nm, and the compounds were found to have superior SHG conversion efficiency compared to urea (the standard sample). Further, we investigated the multi-photon absorption properties using the conventional open-aperture z-scan technique. These DBA derivatives exhibited strong two-photon absorption on the order of 10⁻¹¹ m/W. Hence, they are potential candidates for various photonic applications such as optical power limiting, photonic switching, and frequency conversion.
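
    A hedged sketch of the Tauc-plot step mentioned above: for a direct allowed transition one plots (αhν)² against hν and extrapolates the linear region to zero. The absorption model and fitting window below are synthetic:

        # Tauc-plot bandgap extraction on a synthetic direct-gap absorption edge.
        import numpy as np

        E = np.linspace(2.0, 3.5, 200)        # photon energy h*nu (eV)
        Eg_true = 2.8
        alpha = np.where(E > Eg_true,
                         np.sqrt(np.maximum(E - Eg_true, 0)) / E, 0.0)

        y = (alpha * E) ** 2                  # Tauc quantity; here equals E - Eg
        mask = (E > 2.9) & (E < 3.3)          # linear region above the edge
        slope, intercept = np.polyfit(E[mask], y[mask], 1)
        print(f"Eg = {-intercept / slope:.2f} eV")   # x-intercept, ~2.8 eV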

  9. A Single-Block TRL Test Fixture for the Cryogenic Characterization of Planar Microwave Components

    NASA Technical Reports Server (NTRS)

    Mejia, M.; Creason, A. S.; Toncich, S. S.; Ebihara, B. T.; Miranda, F. A.

    1996-01-01

    The High-Temperature Superconductivity (HTS) group of the RF Technology Branch, Space Electronics Division, is actively involved in the fabrication and cryogenic characterization of planar microwave components for space applications. This process requires fast, reliable, and accurate measurement techniques not readily available. A new calibration standard/test fixture that enhances the integrity and reliability of the component characterization process has been developed. The fixture consists of 50 Ω thru, reflect, delay, and device-under-test gold lines etched onto a 254 micron (0.010 in) thick alumina substrate. The Thru-Reflect-Line (TRL) fixture was tested at room temperature using a 30 Ω, 7.62 mm (300 mil) long gold line as a known standard. Good agreement between the experimental data and the data modelled using Sonnet's em software was obtained for both the return (S11) and insertion (S21) losses. A gold two-pole bandpass filter with a 7.3 GHz center frequency was used as our Device Under Test (DUT), and the results were compared with those obtained using a Short-Open-Load-Thru (SOLT) calibration technique.

  10. Trajectory modulated prone breast irradiation: a LINAC-based technique combining intensity modulated delivery and motion of the couch.

    PubMed

    Fahimian, Benjamin; Yu, Victoria; Horst, Kathleen; Xing, Lei; Hristov, Dimitre

    2013-12-01

    External beam radiation therapy (EBRT) provides a non-invasive treatment alternative for accelerated partial breast irradiation (APBI); however, limitations in the achievable dose conformity of current EBRT techniques have been correlated with reported toxicity. To enhance the conformity of EBRT APBI, a technique for conventional LINACs is developed which, through combined motion of the couch, intensity modulated delivery, and a prone breast setup, enables wide-angular coronal arc irradiation of the ipsilateral breast without irradiating through the thorax and contralateral breast. A couch trajectory optimization technique was developed to determine the trajectories that concurrently avoid collision with the LINAC and maintain the target within the MLC apertures. Inverse treatment planning was performed along the derived trajectory. The technique was experimentally implemented by programming the Varian TrueBeam™ STx in Developer Mode. The dosimetric accuracy of the delivery was evaluated by ion chamber and film measurements in phantom. The resulting optimized trajectory was shown to be necessarily non-isocentric and to contain both translations and rotations of the couch. Film measurements resulted in 93% of the points in the measured two-dimensional dose maps passing the 3%/3 mm gamma criterion. Preliminary treatment plan comparison to 5-field 3D-conformal, IMRT, and VMAT plans demonstrated enhancement in conformity and reduction of the normal tissue V50% and V100% parameters that have been correlated with EBRT toxicity. The feasibility of wide-angular intensity modulated partial breast irradiation using motion of the couch has been demonstrated experimentally on a standard LINAC for the first time. For patients eligible for a prone setup, the technique may enable improvement of dose conformity and associated dose-volume parameters correlated with toxicity. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  11. Optimization of low-level light therapy's illumination parameters for spinal cord injury in a rat model

    NASA Astrophysics Data System (ADS)

    Shuaib, Ali; Bourisly, Ali

    2018-02-01

    Spinal cord injury (SCI) can result in complete or partial loss of sensation and motor function due to interruption along the severed axonal tract(s). SCI can result in tetraplegia or paraplegia, which can have prohibitive lifetime medical costs and result in shorter life expectancy. A promising therapeutic technique that is currently in the experimental phase and has the potential to be used to treat SCI is low-level light therapy (LLLT). Preclinical studies have shown that LLLT has reparative and regenerative capabilities on transected spinal cords, and that LLLT can enhance axonal sprouting in animal models. However, despite the promising effects of LLLT as a therapy for SCI, it remains difficult to compare published results due to the use of a wide range of illumination parameters (i.e., different wavelengths, fluences, beam types, and beam diameters), and due to the lack of standardized experimental protocols. Before any clinical application of LLLT for SCI treatment, it is crucial to standardize illumination parameters and the efficacy of light delivery. Therefore, in this study we evaluate the light fluence distribution in a 3D voxelated SCI rat model for different illumination parameters (wavelengths: 660, 810, and 980 nm; beam types: Gaussian and flat; and beam diameters: 0.1, 0.2, and 0.3 cm) for LLLT using Monte Carlo simulation. This study provides an efficient approach to guide researchers in optimizing the illumination parameters for LLLT of spinal cord injury in an experimental model and will aid in the quantitative and qualitative standardization of LLLT-SCI treatment.
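
    A toy version of the Monte Carlo light-transport idea (one-dimensional, isotropic scattering, assumed tissue-like coefficients; the study used a full 3D voxelated rat model):

        # 1D photon Monte Carlo: exponential free paths, partial absorption at
        # each interaction, binned absorbed energy vs. depth. mu_a and mu_s
        # are assumed values, not fitted spinal cord properties.
        import numpy as np

        rng = np.random.default_rng(3)
        mu_a, mu_s = 0.3, 10.0                  # 1/cm, assumed
        mu_t = mu_a + mu_s
        n_photons, n_bins, z_max = 10_000, 50, 2.0
        absorbed = np.zeros(n_bins)

        for _ in range(n_photons):
            z, direction, weight = 0.0, 1.0, 1.0
            while weight > 1e-4:
                z += direction * rng.exponential(1.0 / mu_t)   # free path
                if z < 0:
                    break                                      # escaped surface
                dep = weight * mu_a / mu_t                     # absorbed share
                absorbed[min(int(z / z_max * n_bins), n_bins - 1)] += dep
                weight -= dep
                direction = rng.choice([-1.0, 1.0])            # isotropic (1D)

        print(absorbed[:5] / n_photons)   # near-surface absorbed energy/photon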

  12. Dynamic Tensile Experimental Techniques for Geomaterials: A Comprehensive Review

    NASA Astrophysics Data System (ADS)

    Heard, W.; Song, B.; Williams, B.; Martin, B.; Sparks, P.; Nie, X.

    2018-01-01

    This review article is dedicated to the Dynamic Behavior of Materials Technical Division for celebrating the 75th anniversary of the Society for Experimental Mechanics (SEM). Understanding dynamic behavior of geomaterials is critical for analyzing and solving engineering problems of various applications related to underground explosions, seismic, airblast, and penetration events. Determining the dynamic tensile response of geomaterials has been a great challenge in experiments due to the nature of relatively low tensile strength and high brittleness. Various experimental approaches have been made in the past century, especially in the most recent half century, to understand the dynamic behavior of geomaterials in tension. In this review paper, we summarized the dynamic tensile experimental techniques for geomaterials that have been developed. The major dynamic tensile experimental techniques include dynamic direct tension, dynamic split tension, and spall tension. All three of the experimental techniques are based on Hopkinson or split Hopkinson (also known as Kolsky) bar techniques and principles. Uniqueness and limitations for each experimental technique are also discussed.

  13. Dynamic Tensile Experimental Techniques for Geomaterials: A Comprehensive Review

    DOE PAGES

    Heard, W.; Song, B.; Williams, B.; ...

    2018-01-03

    Here, this review article is dedicated to the Dynamic Behavior of Materials Technical Division for celebrating the 75th anniversary of the Society for Experimental Mechanics (SEM). Understanding dynamic behavior of geomaterials is critical for analyzing and solving engineering problems of various applications related to underground explosions, seismic, airblast, and penetration events. Determining the dynamic tensile response of geomaterials has been a great challenge in experiments due to the nature of relatively low tensile strength and high brittleness. Various experimental approaches have been made in the past century, especially in the most recent half century, to understand the dynamic behavior of geomaterials in tension. In this review paper, we summarized the dynamic tensile experimental techniques for geomaterials that have been developed. The major dynamic tensile experimental techniques include dynamic direct tension, dynamic split tension, and spall tension. All three of the experimental techniques are based on Hopkinson or split Hopkinson (also known as Kolsky) bar techniques and principles. Finally, uniqueness and limitations for each experimental technique are also discussed.

  14. Dynamic Tensile Experimental Techniques for Geomaterials: A Comprehensive Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heard, W.; Song, B.; Williams, B.

    Here, this review article is dedicated to the Dynamic Behavior of Materials Technical Division for celebrating the 75th anniversary of the Society for Experimental Mechanics (SEM). Understanding dynamic behavior of geomaterials is critical for analyzing and solving engineering problems of various applications related to underground explosions, seismic, airblast, and penetration events. Determining the dynamic tensile response of geomaterials has been a great challenge in experiments due to the nature of relatively low tensile strength and high brittleness. Various experimental approaches have been made in the past century, especially in the most recent half century, to understand the dynamic behavior of geomaterials in tension. In this review paper, we summarized the dynamic tensile experimental techniques for geomaterials that have been developed. The major dynamic tensile experimental techniques include dynamic direct tension, dynamic split tension, and spall tension. All three of the experimental techniques are based on Hopkinson or split Hopkinson (also known as Kolsky) bar techniques and principles. Finally, uniqueness and limitations for each experimental technique are also discussed.

  15. Results of interlaboratory comparison of fission-track age standards: Fission-track workshop-1984

    USGS Publications Warehouse

    Miller, D.S.; Duddy, I.R.; Green, P.F.; Hurford, A.J.; Naeser, C.W.

    1985-01-01

    Five samples were made available as standards for the 1984 Fission Track Workshop, held in the summer of 1984 (Rensselaer Polytechnic Institute, Troy, New York). Two zircons, two apatites, and a sphene were distributed prior to the meeting to 40 different laboratories. To date, 24 different analysts have reported results. The isotopic ages of the standards ranged from 16.8 to 98.7 Myr. Only the statement that the age of each sample was less than 200 Myr was provided with the set of standards distributed. Consequently, each laboratory was required to use its accepted treatment (irradiation level, etching conditions, counting conditions, etc.) for these samples. The results show that some workers have serious problems in achieving accurate age determinations. This emphasizes the need to calibrate experimental techniques and counting procedures against age standards before unknown ages are determined. Any fission-track age determination published or submitted for publication can only be considered reliable if it is supported by evidence of consistent determinations on age standards. Only this can provide the scientific community with the background to build up confidence concerning the validity of the fission-track method. © 1985.

  16. Numerical simulation and experimental investigation of internal and external flows

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Yang, Guowei; Huang, Guojun; Zhou, Liandi

    2006-06-01

    In this paper, TASCflow3D is used to solve the inner and outer 3D viscous incompressible turbulent flow (Re = 5.6×10⁶) around an axisymmetric body with a duct. The governing equations are the RANS equations with the standard k-ε turbulence model. The discretization is a finite volume method based on a finite element approach; in this method, the description of the geometry is very flexible while important conservation properties are retained. Multi-block and algebraic multi-grid techniques are used for convergence acceleration. Agreement between the experimental results and the calculations is good. This indicates that this novel approach can be used to simulate complex flows such as the interaction between rotor and stator, or propulsion systems containing tip clearance and cavitation.

  17. Implementation of a finite element analysis procedure for structural analysis of shape memory behaviour of fibre reinforced shape memory polymer composites

    NASA Astrophysics Data System (ADS)

    Azzawi, Wessam Al; Epaarachchi, J. A.; Islam, Mainul; Leng, Jinsong

    2017-12-01

    Shape memory polymers (SMPs) offer a unique ability to undergo substantial shape deformation and subsequently recover the original shape when exposed to a particular external stimulus. Comparatively low mechanical properties are the major drawback limiting the extended use of SMPs in engineering applications; however, the inclusion of reinforcing fibres into SMPs improves the mechanical properties significantly while retaining the intrinsic shape memory effect. Implementing shape memory polymer composites (SMPCs) in an engineering application is a demanding task that requires thorough materials and design optimization, yet currently available analytical tools have critical limitations for accurate analysis and simulation of SMPC structures, which slows the transformation of breakthrough research outcomes into real-life applications. Many finite element (FE) models have been presented, but the majority require complicated user subroutines to integrate with standard FE software packages; furthermore, those subroutines are problem-specific and difficult to use for a wider range of SMPC materials and related structures. This paper presents an FE simulation technique for modelling the thermomechanical behaviour of SMPCs using the commercial FE software ABAQUS. The proposed technique incorporates the material's time-dependent viscoelastic behaviour. Its ability to predict shape fixity and shape recovery was evaluated against experimental data acquired by bending a SMPC cantilever beam. The excellent correlation between the experimental and FE simulation results confirms the robustness of the proposed technique.
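
    A hedged sketch of the time-dependent viscoelastic ingredient such a simulation rests on: a Prony-series relaxation modulus of the form ABAQUS accepts as viscoelastic material data. All coefficients below are assumed values, not the paper's material:

        # Prony series E(t) = E_inf + sum_i E_i * exp(-t / tau_i); the decay
        # toward E_inf is what produces shape fixity over time.
        import numpy as np

        E_inf = 50.0                               # long-term modulus (MPa)
        E_i = np.array([800.0, 300.0, 100.0])      # Prony moduli (MPa)
        tau_i = np.array([1.0, 10.0, 100.0])       # relaxation times (s)

        def relaxation_modulus(t):
            return E_inf + (E_i * np.exp(-np.outer(t, 1.0 / tau_i))).sum(axis=1)

        t = np.array([0.0, 1.0, 10.0, 100.0, 1000.0])
        print(relaxation_modulus(t))               # decays toward E_inf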

  18. Investigating electrical resonance in eddy-current array probes

    NASA Astrophysics Data System (ADS)

    Hughes, R.; Fan, Y.; Dixon, S.

    2016-02-01

    The sensitivity-enhancing effects of eddy-current testing at frequencies close to electrical resonance are explored. Varied techniques exploiting the phenomenon, dubbed near electrical resonance signal enhancement (NERSE), were experimentally investigated to evaluate its potential exploitation for applications in aerospace materials, in particular its potential for boosting the sensitivity of standard ECT measurements. Methods for setting and controlling the typically unstable resonant frequencies of such systems are discussed. This research is funded by the EPSRC, via the Research Centre for Non-Destructive Evaluation (RCNDE), and Rolls-Royce plc.

  19. Evaluated activation cross sections of longer-lived radionuclides produced by deuteron induced reactions on natural nickel

    NASA Astrophysics Data System (ADS)

    Takács, S.; Tárkányi, F.; Király, B.; Hermanne, A.; Sonck, M.

    2007-07-01

    Activation cross sections for deuteron-induced nuclear reactions on natural nickel targets were studied using the standard stacked-foil technique and gamma spectrometry, up to a deuteron bombarding energy of 50 MeV. Reaction products with half-lives of at least half an hour were studied. Experimental elemental activation cross sections were determined for reactions on nickel resulting in the 61,64Cu, 56,57Ni, 55,56,57,58,60,61Co, 52,54,56Mn, and 51Cr radionuclides, and were compared with earlier measured data.

  20. The receptive field is dead. Long live the receptive field?

    PubMed Central

    Fairhall, Adrienne

    2014-01-01

    Advances in experimental techniques, including behavioral paradigms using rich stimuli under closed loop conditions and the interfacing of neural systems with external inputs and outputs, reveal complex dynamics in the neural code and require a revisiting of standard concepts of representation. High-throughput recording and imaging methods along with the ability to observe and control neuronal subpopulations allow increasingly detailed access to the neural circuitry that subserves these representations and the computations they support. How do we harness theory to build biologically grounded models of complex neural function? PMID:24618227

  1. Generation of development environments for the Arden Syntax.

    PubMed Central

    Bång, M.; Eriksson, H.

    1997-01-01

    Providing appropriate development environments for specialized languages requires a significant development and maintenance effort; specialized environments are therefore expensive when compared to their general-language counterparts. The Arden Syntax for Medical Logic Modules (MLMs) is a standardized language for representing medical knowledge. We have used PROTEGE-II, a knowledge-engineering environment, to generate a number of experimental development environments for the Arden Syntax. MEDAILLE is the resulting MLM editor, which provides a user-friendly environment that allows users to create and modify MLM definitions. Although MEDAILLE is a generated editor, it provides functionality similar to that of MLM editors developed using traditional programming techniques, while requiring less programming effort. We discuss how developers can use PROTEGE-II to generate development environments for other standardized languages and for general programming languages. PMID:9357639

  2. An information theory approach to the density of the earth

    NASA Technical Reports Server (NTRS)

    Graber, M. A.

    1977-01-01

    Information theory can be used to develop a technique that takes experimentally determined numbers and produces a uniquely specified best density model satisfying those numbers. A model was generated using five numerical parameters: the mass of the earth, its moment of inertia, and three zero-node torsional normal modes (L = 2, 8, 26). In order to determine the stability of the solution, six additional density models were generated, in each of which the period of one of the three normal modes was increased or decreased by one standard deviation. The superposition of the seven models indicates that current knowledge of the torsional modes is sufficient to specify the density in the upper mantle, but that the lower mantle and core will require smaller standard deviations before they can be accurately specified.

  3. Towards an In-Beam Measurement of the Neutron Lifetime to 1 Second

    NASA Astrophysics Data System (ADS)

    Mulholland, Jonathan

    2014-03-01

    A precise value for the neutron lifetime is required for consistency tests of the Standard Model and is an essential parameter in the theory of Big Bang Nucleosynthesis. A new measurement of the neutron lifetime using the in-beam method is planned at the National Institute of Standards and Technology Center for Neutron Research. The systematic effects associated with the in-beam method are markedly different than those found in storage experiments utilizing ultracold neutrons. Experimental improvements, specifically recent advances in the determination of absolute neutron fluence, should permit an overall uncertainty of 1 second on the neutron lifetime. The dependence of the primordial mass fraction on the neutron lifetime, technical improvements of the in-beam technique, and the path toward improving the precision of the new measurement will be discussed.

  4. Linewidth-tolerant 10-Gbit/s 16-QAM transmission using a pilot-carrier based phase-noise cancelling technique.

    PubMed

    Nakamura, Moriya; Kamio, Yukiyoshi; Miyazaki, Tetsuya

    2008-07-07

    We experimentally demonstrated linewidth-tolerant 10-Gbit/s (2.5-Gsymbol/s) 16-quadrature amplitude modulation (QAM) using a distributed-feedback laser diode (DFB-LD) with a linewidth of 30 MHz. Error-free operation (a bit-error rate (BER) below 10⁻⁹) was achieved in transmission over 120 km of standard single-mode fiber (SSMF) without any dispersion compensation. The phase-noise cancelling capability provided by a pilot carrier, together with standard electronic pre-equalization to suppress inter-symbol interference (ISI), gave clear 16-QAM constellations and floor-less BER characteristics. We evaluated the BER characteristics by real-time measurement of six symbol error rates (SERs; three thresholds for each of the I and Q components) with simultaneous constellation observation.
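
    The pilot-carrier idea can be sketched in a few lines: the pilot tone experiences the same laser phase noise as the data, so rotating the received symbols by the conjugate of the recovered pilot phase cancels the noise. The phase-noise model and sizes below are illustrative only:

        # Common laser phase noise (random walk) corrupts 16-QAM symbols and a
        # pilot alike; dividing out the pilot phase restores the constellation.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 4096
        levels = np.array([-3, -1, 1, 3])
        data = rng.choice(levels, n) + 1j * rng.choice(levels, n)  # 16-QAM

        phase = np.cumsum(rng.normal(0, 0.05, n))   # Wiener phase noise
        rx_data = data * np.exp(1j * phase)
        rx_pilot = np.exp(1j * phase)               # pilot sees the same noise

        recovered = rx_data * np.conj(rx_pilot)     # rotate by -phase
        print(np.max(np.abs(recovered - data)))     # ~0: constellation restored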

  5. Automatic recognition of light source from color negative films using sorting classification techniques

    NASA Astrophysics Data System (ADS)

    Sanger, Demas S.; Haneishi, Hideaki; Miyake, Yoichi

    1995-08-01

    This paper proposed a simple and automatic method for recognizing the light source from various color negative film brands by means of digital image processing. First, we stretched the image obtained from a negative based on the standardized scaling factors, then extracted the dominant color component among the red, green, and blue components of the stretched image. The dominant color component became the discriminator for the recognition. The experimental results verified that any one of the three techniques could recognize the light source from negatives of any single film brand and of all brands with greater than 93.2% and 96.6% correct recognition, respectively. This method is significant for the automation of color quality control in color reproduction from color negative film in mass processing and printing machines.
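
    A hedged sketch of the discriminator described above: globally stretch the decoded negative, then report which of R, G, B dominates. The stretch and the mapping from dominant component to light source class are illustrative, not the paper's calibration:

        # Dominant-color-component discriminator on a synthetic image biased
        # toward red, as a tungsten-lit negative might be.
        import numpy as np

        rng = np.random.default_rng(6)
        img = rng.random((64, 64, 3))
        img[..., 0] *= 1.3                            # red bias

        stretched = (img - img.min()) / (img.max() - img.min())
        channel_means = stretched.reshape(-1, 3).mean(axis=0)
        print("dominant component:", "RGB"[int(np.argmax(channel_means))])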

  6. Loop-mediated isothermal amplification (LAMP) assay for the diagnosis of fasciolosis in sheep and its application under field conditions.

    PubMed

    Martínez-Valladares, María; Rojo-Vázquez, Francisco Antonio

    2016-02-05

    Loop-mediated isothermal amplification (LAMP) is a very specific, efficient, and rapid gene amplification procedure in which the reaction runs at a constant temperature. In the current study we developed a LAMP assay to improve the diagnosis of Fasciola spp. in the faeces of sheep. After optimisation of the LAMP assay, we obtained similar results with this technique and the standard PCR using the outer primers of the LAMP reaction: in both cases the limit of detection was 10 pg, and the diagnosis of fasciolosis was confirmed during the first week post-infection in experimentally infected sheep by both techniques. In eight naturally infected sheep, infection with F. hepatica was confirmed in all animals before treatment with triclabendazole, and on day 30 post-treatment in two sheep using the LAMP assay; however, when we carried out the standard PCR with the outer primers, the results before treatment were the same, but on day 30 post-treatment the infection was confirmed in only one of the two sheep. On the other hand, the standard PCR took around 3 h to obtain a result, compared with 1 h and 10 min for the LAMP assay. The LAMP assay described here could be a good alternative to conventional diagnostic methods for detecting F. hepatica in faeces, since it overcomes the drawbacks of the standard PCR.

  7. Label free sensing of creatinine using a 6 GHz CMOS near-field dielectric immunosensor.

    PubMed

    Guha, S; Warsinke, A; Tientcheu, Ch M; Schmalz, K; Meliani, C; Wenger, Ch

    2015-05-07

    In this work we present a CMOS high-frequency direct immunosensor operating at 6 GHz (C-band) for label-free determination of creatinine. The sensor is fabricated in a standard 0.13 μm SiGe:C BiCMOS process. We also demonstrate the ability to immobilize creatinine molecules on the Si3N4 passivation layer of the standard BiCMOS/CMOS process, thereby avoiding any need for cumbersome post-processing of the fabricated sensor chip. The sensor is based on capacitive detection of the amount of non-creatinine-bound antibodies binding to an immobilized creatinine layer on the passivated sensor; the chip-bound antibody amount in turn corresponds indirectly to the creatinine concentration used in the incubation phase. The determination of creatinine in the concentration range of 0.88-880 μM is successfully demonstrated. The immunosensor achieves a sensitivity of 35 MHz per tenfold increase in creatinine concentration (during incubation) at the centre frequency of 6 GHz. The results were compared with a standard optical measurement technique, and the dynamic range and sensitivity are of the order of the established optical indication technique. The C-band immunosensor chip, comprising an area of 0.3 mm², reduces the sensing area considerably, therefore requiring a sample volume as low as 2 μl. The small analyte sample volume and label-free approach also reduce the experimental costs, in addition to the low fabrication costs offered by the batch fabrication of the CMOS/BiCMOS process.

  8. Unbiased, scalable sampling of protein loop conformations from probabilistic priors.

    PubMed

    Zhang, Yajia; Hauser, Kris

    2013-01-01

    Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints, and prior information in a unified framework. The method uses a sparse PGM that exploits the locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion.
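
    The Metropolis-Hastings core that any such MCMC sampler builds on can be sketched in a few lines. This is not SLIKMC itself (which adds sub-loop closure sampling and a sparse PGM prior), just the generic accept/reject step over a toy one-dimensional "conformation" score:

        # Metropolis-Hastings with a symmetric proposal over a toy structural
        # preference (a Gaussian log-density standing in for the PGM score).
        import numpy as np

        rng = np.random.default_rng(5)

        def log_density(x):
            return -0.5 * ((x - 1.0) / 0.3) ** 2

        x, samples = 0.0, []
        for _ in range(20_000):
            proposal = x + rng.normal(0, 0.2)          # symmetric move
            if np.log(rng.random()) < log_density(proposal) - log_density(x):
                x = proposal                           # accept
            samples.append(x)

        burned = samples[5_000:]                       # drop burn-in
        print(np.mean(burned), np.std(burned))         # ~1.0, ~0.3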

  9. Unbiased, scalable sampling of protein loop conformations from probabilistic priors

    PubMed Central

    2013-01-01

    Background Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Results Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints, and prior information in a unified framework. The method uses a sparse PGM that exploits the locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Conclusion Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion. PMID:24565175

  10. The measurements of water flow rates in the straight microchannel based on the scanning micro-PIV technique

    NASA Astrophysics Data System (ADS)

    Wang, H. L.; Han, W.; Xu, M.

    2011-12-01

    Measurement of the water flow rate in microchannels has become a key problem in applications of microfluidics to medical, biological, and chemical analyses. In this study, the scanning microscale particle image velocimetry (scanning micro-PIV) technique is used to measure water flow rates in a straight microchannel 200 μm wide and 60 μm deep, at standard flow rates ranging from 2.481 μL/min to 8.269 μL/min. The core of this measurement technique is to obtain the three-dimensional velocity distribution on the cross-section of the microchannel by measuring the velocities of different fluid layers along the out-of-plane direction, so that the water flow rate can be evaluated from the discrete surface integral of the velocities over the cross-section. In addition, the three-dimensional velocity fields in the measured microchannel are simulated numerically using the FLUENT software in order to verify the accuracy of the measured velocities. The results show that the experimental flow rates agree well with the standard flow rates input by the syringe pump, and that the numerical and experimental results are fundamentally consistent. This study indicates that evaluating the micro-flow rate from the three-dimensional velocity measured by the scanning micro-PIV technique is a promising approach for micro-flow-rate research.
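
    The flow-rate evaluation described here is a discrete surface integral of the axial velocity over the channel cross-section; in the sketch below an assumed parabolic-like profile stands in for the scanning micro-PIV data:

        # Integrate u(y, z) over a 200 um x 60 um cross-section to get the
        # volumetric flow rate; u_max is an assumed peak velocity.
        import numpy as np
        from scipy.integrate import trapezoid

        w, h = 200e-6, 60e-6                     # channel width, depth (m)
        y = np.linspace(0, w, 41)
        z = np.linspace(0, h, 13)
        Y, Z = np.meshgrid(y, z, indexing="ij")

        u_max = 0.02                             # peak axial velocity (m/s)
        u = u_max * 16 * (Y / w) * (1 - Y / w) * (Z / h) * (1 - Z / h)

        q = trapezoid(trapezoid(u, z, axis=1), y)          # m^3/s
        print(f"{q * 1e9 * 60:.3f} uL/min")                # ~6.4 uL/min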

  11. [Application of hand-use ProTaper instruments in endodontic treatment of molar canals].

    PubMed

    Ma, Sui-qi; Xie, Qian; Zhou, Yin-feng

    2010-07-01

    To evaluate the application of hand-use ProTaper instruments in the endodontic treatment of molar canals. A total of 203 permanent molars were randomly divided into an experimental group (99 molars) and a control group (104 molars), prepared with hand-use ProTaper instruments and standard stainless steel K-files, respectively. The molars in both groups were obturated by the cold lateral condensation technique. Root canal preparation and obturation were evaluated radiographically, and the working time of preparation and post-operative emergencies were analyzed. The preparation time in the experimental group was significantly shorter than that in the control group (P<0.01). The rate of satisfactory outcomes was significantly higher in the experimental group than in the control group (P<0.01), and the rate of post-operative emergencies was significantly lower in the experimental group (P<0.01). The application of hand-use ProTaper instruments may improve the effect of root canal treatment of molars, shorten the working time, and reduce post-operative emergencies.

  12. Critical assessment of precracked specimen configuration and experimental test variables for stress corrosion testing of 7075-T6 aluminum alloy plate

    NASA Technical Reports Server (NTRS)

    Domack, M. S.

    1985-01-01

    A research program was conducted to critically assess the effects of precracked specimen configuration, stress intensity solutions, compliance relationships and other experimental test variables for stress corrosion testing of 7075-T6 aluminum alloy plate. Modified compact and double beam wedge-loaded specimens were tested and analyzed to determine the threshold stress intensity factor and stress corrosion crack growth rate. Stress intensity solutions and experimentally determined compliance relationships were developed and compared with other solutions available in the literature. Crack growth data suggests that more effective crack length measurement techniques are necessary to better characterize stress corrosion crack growth. Final load determined by specimen reloading and by compliance did not correlate well, and was considered a major source of interlaboratory variability. Test duration must be determined systematically, accounting for crack length measurement resolution, time for crack arrest, and experimental interferences. This work was conducted as part of a round robin program sponsored by ASTM committees G1.06 and E24.04 to develop a standard test method for stress corrosion testing using precracked specimens.

  13. Planning the Unplanned Experiment: Towards Assessing the Efficacy of Standards for Safety-Critical Software

    NASA Technical Reports Server (NTRS)

    Graydon, Patrick J.; Holloway, C. M.

    2015-01-01

    Safe use of software in safety-critical applications requires well-founded means of determining whether software is fit for such use. While software in industries such as aviation has a good safety record, little is known about whether standards for software in safety-critical applications 'work' (or even what that means). It is often (implicitly) argued that software is fit for safety-critical use because it conforms to an appropriate standard. Without knowing whether a standard works, such reliance is an experiment; without carefully collecting assessment data, that experiment is unplanned. To help plan the experiment, we organized a workshop to develop practical ideas for assessing software safety standards. In this paper, we relate and elaborate on the workshop discussion, which revealed subtle but important study design considerations and practical barriers to collecting appropriate historical data and recruiting appropriate experimental subjects. We discuss assessing standards as written and as applied, several candidate definitions for what it means for a standard to 'work,' and key assessment strategies and study techniques and the pros and cons of each. Finally, we conclude with thoughts about the kinds of research that will be required and how academia, industry, and regulators might collaborate to overcome the noted barriers.

  14. Imaging experimental intraabdominal abscesses with 99mTc-PEG liposomes and 99mTc-HYNIC IgG.

    PubMed Central

    Dams, E T; Reijnen, M M; Oyen, W J; Boerman, O C; Laverman, P; Storm, G; van der Meer, J W; Corstens, F H; van Goor, H

    1999-01-01

    OBJECTIVE: To evaluate the accuracy of technetium-99m-labeled polyethylene glycol-coated liposomes (99mTc-PEG liposomes) and technetium-99m-labeled nonspecific human immunoglobulin G (99mTc-HYNIC IgG) for the scintigraphic detection of experimental intraabdominal abscesses, in comparison with a standard agent, gallium-67 citrate. BACKGROUND: Scintigraphic imaging techniques can be very useful for the rapid and accurate localization of intraabdominal abscesses. Two newly developed radiolabeled agents, 99mTc-PEG liposomes and 99mTc-HYNIC IgG, have been shown to be excellent agents for imaging experimental focal infection but have not yet been studied for the detection of abdominal abscesses. METHODS: Intraabdominal abscesses were induced in 42 rats using the cecal ligation and puncture technique. Seven days later, randomized groups of rats received 99mTc-PEG liposomes, 99mTc-HYNIC IgG, or 67Ga citrate intravenously. The rats were imaged up to 24 hours after the injection. The biodistribution of the radiolabel was determined by counting dissected tissues ex vivo. Macroscopic intraabdominal abnormalities and focal uptake on the images were independently scored on a semiquantitative scale. RESULTS: 99mTc-PEG liposomes provided the earliest scintigraphic visualization of the abscess (as soon as 2 hours after the injection vs. 4 hours for the other two agents). Liposomes, IgG, and gallium all showed similarly high absolute uptake in the abscess. Focal uptake of liposomes and gallium correlated best with the extent of the macroscopic abnormalities. CONCLUSIONS: 99mTc-PEG liposomes and 99mTc-HYNIC IgG performed at least as well as the standard agent, 67Ga citrate, in the detection of experimental intraabdominal abscesses, with obvious advantages such as lower radiation exposure and more favorable physical properties. Of the two technetium agents, the liposomes seemed to be superior, providing the earliest diagnostic image and the best correlation with the inflammatory abnormalities. In addition, the preferential localization of radiolabeled PEG liposomes holds promise for targeted delivery of liposome-encapsulated drugs. PMID:10203089

  15. Evaluation of possible head injuries ensuing a cricket ball impact.

    PubMed

    Mohotti, Damith; Fernando, P L N; Zaghloul, Amir

    2018-05-01

    The aim of this research is to study the behaviour of a human head during the impact of a cricket ball. While many recent incidents have been reported in relation to head injuries caused by the impact of cricket balls, no clear information is available in the published literature about the possible threat levels or the protection offered by current protective equipment. This research investigates the effects of a cricket ball impact on a human head and the level of protection offered by an existing standard cricket helmet. An experimental program was carried out to measure the localised pressure caused by the impact of standard cricket balls. The balls were directed at a speed of 110 km/h at a 3D-printed head model, with and without a standard cricket helmet. Numerical simulations were carried out using the advanced finite element package LS-DYNA to validate the experimental results. The experimental and numerical results showed an approximately 60% reduction in the pressure on the head model when the helmet was used. Both frontal and side impacts resulted in head acceleration values in the range of 225-250 g at a ball speed of 110 km/h. A 36% reduction was observed in the peak acceleration of the brain when wearing a helmet. Furthermore, numerical simulations showed a 67% reduction in the force on the skull and a 95% reduction in the skull internal energy when the helmet was introduced. (1) Upon impact, high localised pressure could cause concussion in a player without a helmet. (2) When a helmet was used, the brain acceleration observed in the numerical results was at non-critical levels according to existing standards. (3) A significant increase in threat levels was observed for a player without a helmet, based on force, pressure, acceleration, and energy criteria, which supports recommending compulsory use of the cricket helmet. (4) The numerical results correlated well with the experimental results, so the numerical technique used in this study can be recommended for future applications. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Strain analysis from nano-beam electron diffraction: Influence of specimen tilt and beam convergence.

    PubMed

    Grieb, Tim; Krause, Florian F; Schowalter, Marco; Zillmann, Dennis; Sellin, Roman; Müller-Caspary, Knut; Mahr, Christoph; Mehrtens, Thorsten; Bimberg, Dieter; Rosenauer, Andreas

    2018-07-01

    Strain analyses from experimental series of nano-beam electron diffraction (NBED) patterns in scanning transmission electron microscopy are performed for different specimen tilts. Simulations of NBED series are presented for which strain analysis gives results in accordance with experiment; this in turn allows the relation between the measured strain and the actual underlying strain to be studied. A two-tilt method, which can be seen as lowest-order electron beam precession, is suggested and experimentally implemented. Strain determination from NBED series with increasing beam convergence is performed in combination with the experimental realization of a probe-forming aperture with a cross inside. It is shown that, using standard evaluation techniques, the influence of beam convergence on spatial resolution is smaller than the influence of sharp rings around the diffraction disc, which occur at interfaces and are caused by the tails of the intensity distribution of the electron probe. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Current State and Future Perspectives in QSAR Models to Predict Blood- Brain Barrier Penetration in Central Nervous System Drug R&D.

    PubMed

    Morales, Juan F; Montoto, Sebastian Scioli; Fagiolino, Pietro; Ruiz, Maria E

    2017-01-01

    The Blood-Brain Barrier (BBB) is a physical and biochemical barrier that restricts the entry of certain drugs into the Central Nervous System (CNS) while allowing the passage of others. The ability to predict the permeability of a given molecule through the BBB is a key aspect of CNS drug discovery and development, since neurotherapeutic agents with molecular targets in the CNS should be able to cross the BBB, whereas peripherally acting agents should not, to minimize the risk of CNS adverse effects. In this review we examine and discuss QSAR approaches and the current availability of experimental data for the construction of BBB permeability predictive models, focusing on the modeling of the biorelevant parameter, the unbound partition coefficient (Kp,uu). Emphasis is placed on two possible strategies to overcome the current limitations of in silico models: treating the prediction of brain penetration as a multifactorial problem, and enlarging experimental datasets through accurate and standardized experimental techniques.

  18. An experimental and theoretical study to relate uncommon rock/fluid properties to oil recovery. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, R.

    Waterflooding is the most commonly used secondary oil recovery technique. One of the requirements for understanding waterflood performance is a good knowledge of the basic properties of the reservoir rocks. This study is aimed at correlating rock-pore characteristics to oil recovery from various reservoir rock types and incorporating these properties into empirical models for predicting oil recovery. To that end, this report deals with the analysis and interpretation of experimental data collected from core floods, correlated against measurements of absolute permeability, porosity, wettability index, mercury porosimetry properties, and irreducible water saturation. The results of the radial-core and linear-core flow investigations and the other associated experimental analyses are presented and incorporated into empirical models to improve predictions of oil recovery by waterflooding for sandstone and limestone reservoirs. For the radial-core case, the standardized regression model selected, based on a subset of the variables, predicted oil recovery by waterflooding with a standard deviation of 7%. For the linear-core case, separate models are developed using common, uncommon, and combined rock properties. It was observed that residual oil saturation and oil recovery are better predicted when both common and uncommon rock/fluid properties are included in the predictive models.

  19. In situ Biofilm Quantification in Bioelectrochemical Systems by using Optical Coherence Tomography.

    PubMed

    Molenaar, Sam D; Sleutels, Tom; Pereira, Joao; Iorio, Matteo; Borsje, Casper; Zamudio, Julian A; Fabregat-Santiago, Francisco; Buisman, Cees J N; Ter Heijne, Annemiek

    2018-04-25

    Detailed studies of microbial growth in bioelectrochemical systems (BESs) are required for their suitable design and operation. Here, we report the use of optical coherence tomography (OCT) as a tool for in situ and noninvasive quantification of biofilm growth on electrodes (bioanodes). An experimental platform is designed and described in which transparent electrodes are used to allow real-time, 3D biofilm imaging. The accuracy and precision of the developed method are assessed by relating the OCT results to well-established standards for biofilm quantification (chemical oxygen demand (COD) and total N content), and the OCT results show high correspondence to these standards. Biofilm thickness observed by OCT ranged between 3 and 90 μm for experimental durations ranging from 1 to 24 days. This translated to growth yields between 38 and 42 mg COD(biomass) per g COD(acetate) at an anode potential of -0.35 V versus Ag/AgCl. Time-lapse observations of an experimental run performed in duplicate show high reproducibility in the microbial growth yield obtained by the developed method. As such, we identify OCT as a powerful tool for conducting in-depth characterizations of microbial growth dynamics in BESs. Additionally, the presented platform allows concomitant application of this method with various optical and electrochemical techniques. © 2018 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  20. Qualitative evaluation of selective tests for detection of Neospora hughesi antibodies in serum and cerebrospinal fluid of experimentally infected horses.

    PubMed

    Packham, Andrea E; Conrad, Patricia A; Wilson, W David; Jeanes, Lisa V; Sverlow, Karen W; Gardner, Ian A; Daft, Barbara M; Marsh, Antoinette E; Blagburn, Byron L; Ferraro, Gregory L; Barr, Bradd C

    2002-12-01

    Neospora hughesi is a newly recognized protozoan pathogen in horses that causes a myeloencephalitis similar to that of Sarcocystis neurona. There are currently no validated serologic tests based on gold standard sera for detecting specific N. hughesi antibodies and, thus, no tests available to detect antemortem exposure or to estimate seroprevalence in the horse. The objectives of the present study were to establish a bank of gold standard equine sera through experimental infections with N. hughesi and to assess several serologic tests for the detection of related protozoan antibodies. Seven horses were inoculated with N. hughesi tachyzoites, and 7 horses received uninfected cell culture material. The horses were monitored, and blood and cerebrospinal fluid were collected repeatedly over a 4-mo period. With these sera, 4 different serologic techniques were evaluated, including a whole-parasite lysate enzyme-linked immunosorbent assay (ELISA), a recombinant protein ELISA, a modified direct agglutination test, and an indirect fluorescent antibody test (IFAT). Qualitative and quantitative evaluation of the results showed that the N. hughesi IFAT consistently discriminated between experimentally infected and noninfected horses, using a cutoff of 1:640. Sera from 3 naturally infected horses had titers >1:640. Cerebrospinal fluid from all but 1 infected horse had very low N. hughesi IFAT titers (<1:160), starting at postinoculation day 30.

  1. Chemically defined medium and Caenorhabditis elegans

    NASA Technical Reports Server (NTRS)

    Szewczyk, Nathaniel J.; Kozak, Elena; Conley, Catharine A.

    2003-01-01

    BACKGROUND: C. elegans has been established as a powerful genetic system. Use of a chemically defined medium (C. elegans Maintenance Medium (CeMM)) now allows standardization and systematic manipulation of the nutrients that animals receive. Liquid cultivation allows automated culturing and experimentation and should be of use in large-scale growth and screening of animals. RESULTS: We find that CeMM is versatile and culturing is simple. CeMM can be used in a solid or liquid state, it can be stored unused for at least a year, unattended actively growing cultures may be maintained longer than with standard techniques, and standard C. elegans protocols work well with animals grown in defined medium. We also find that there are caveats to using defined medium. Animals in defined medium grow more slowly than on standard medium, appear to display adaptation to the defined medium, and display altered growth rates as they change the composition of the defined medium. CONCLUSIONS: As was suggested with the introduction of C. elegans as a potential genetic system, use of defined medium with C. elegans should prove a powerful tool.

  2. InterPred: A pipeline to identify and model protein-protein interactions.

    PubMed

    Mirabello, Claudio; Wallner, Björn

    2017-06-01

    Protein-protein interactions (PPIs) are crucial for protein function. Many techniques exist to identify PPIs experimentally, but determining the interactions in molecular detail is still difficult and very time-consuming. The fact that the number of PPIs is vastly larger than the number of individual proteins makes it practically impossible to characterize all interactions experimentally. Computational approaches that can bridge this gap, predicting PPIs and modeling the interactions in molecular detail, are greatly needed. Here we present InterPred, a fully automated pipeline that predicts and models PPIs from sequence using structural modeling combined with massive structural comparisons and molecular docking. A key component of the method is a novel random forest classifier that integrates several structural features to distinguish correct from incorrect protein-protein interaction models. We show that InterPred represents a major improvement in protein-protein interaction detection, with performance comparable to or better than experimental high-throughput techniques. We also show that our full-atom protein-protein complex modeling pipeline performs better than state-of-the-art protein docking methods on a standard benchmark set. In addition, InterPred was one of the top predictors in the latest CAPRI37 experiment. InterPred source code can be downloaded from http://wallnerlab.org/InterPred Proteins 2017; 85:1159-1170. © 2017 Wiley Periodicals, Inc.
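
    The classifier component described above, a random forest that integrates several structural features to separate correct from incorrect interaction models, follows a standard supervised-learning recipe. A minimal sketch with scikit-learn; the feature names and the synthetic labels are illustrative stand-ins, not InterPred's actual feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical training set: each row holds structural features scored
# for one candidate interaction model (e.g. interface area, template
# similarity, contact count, docking energy); labels mark models judged
# correct (1) or incorrect (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # [interface_area, tm_score, contacts, energy]
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```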

  3. Experimental and natural constraints on the generation of calc-alkaline volcanic rocks in the Western Aleutian arc

    NASA Astrophysics Data System (ADS)

    Cottrell, E.; Kelley, K. A.; Grant, E.; Coombs, M. L.; Pistone, M.

    2016-12-01

    A new experimental technique with a unique geometry is presented for investigating the deformation of simulated boreholes using standard axisymmetric triaxial deformation equipment. The Sandia WEllbore SImulation (SWESI) geometry uses right cylinders of rock 50 mm in diameter and 75 mm in length. An 11.3 mm hole is drilled perpendicular to the axis of the cylinder in the center of the sample to simulate a borehole. The hole is covered with a solid metal cover and sealed with polyurethane. The metal cover can be machined with a high-pressure port to introduce fluids of different chemistries into the borehole at controlled pressures. Samples are deformed in a standard load frame under confinement, allowing for a broad range of possible stresses, load paths, and temperatures. Experiments in this study are loaded to the desired confining pressure and then deformed at a constant axial strain rate of 10^-5 s^-1. Two different suites of experiments are conducted, on sedimentary and crystalline rock types. The first series is conducted on Mancos Shale, a finely laminated, transversely isotropic rock; samples are cored at three different orientations to the laminations. A second series is conducted on Sierra White granite with different fluid chemistries inside the borehole. Numerical modelling and experimental observations, including CT microtomography, demonstrate that stresses are concentrated around the simulated wellbore and recreate wellbore deformation mechanisms. Borehole strength and damage development are dependent on anisotropy orientation and fluid chemistry. Observed failure geometries, particularly for Mancos Shale, can be highly asymmetric. These results highlight uncertainties in in situ stress measurements made with commonly applied borehole breakout techniques in complicated borehole physico-chemical environments. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. SAND2017-8259 A

  4. Indices of polarimetric purity for biological tissues inspection

    NASA Astrophysics Data System (ADS)

    Van Eeckhout, Albert; Lizana, Angel; Garcia-Caurel, Enric; Gil, José J.; Sansa, Adrià; Rodríguez, Carla; Estévez, Irene; González, Emilio; Escalera, Juan C.; Moreno, Ignacio; Campos, Juan

    2018-02-01

    We highlight the interest of using the Indices of Polarimetric Purity (IPPs) for biological tissue inspection. These are three polarimetric metrics focused on the study of the depolarizing behaviour of a sample. The IPPs have recently been proposed in the literature and provide different and more synthesized information than the commonly used depolarization metrics, such as the depolarization index (PΔ) or the depolarization power (Δ). Compared with standard polarimetric images of biological samples, IPPs enhance the contrast between different tissues of the sample and reveal differences between similar tissues that are not observed using the other standard techniques. Moreover, they provide further physical information related to the depolarization mechanisms inherent to different tissues. In addition, the algorithm does not require advanced calculations (as in the case of polar decompositions), making the IPPs fast and easy to implement. We also propose a pseudo-colored image method that encodes the sample information as a function of the weights of the different indices. These images allow us to customize the visualization of samples and to highlight certain of their constitutive structures. The interest and potential of the IPP approach are experimentally illustrated throughout the manuscript by comparing polarimetric images of different ex vivo samples obtained with standard polarimetric methods against those obtained from the IPP analysis. Enhanced contrast and the retrieval of new information are experimentally demonstrated with the different IPP-based images.
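
    For readers unfamiliar with the metrics, the IPPs can be computed from the eigenvalues of the 4x4 coherency matrix associated with a measured Mueller matrix. The sketch below assumes the coherency matrix H has already been obtained (the Mueller-to-coherency conversion is omitted) and follows the usual definitions from the polarimetric purity literature:

```python
import numpy as np

def polarimetric_purity_indices(H):
    """Indices of polarimetric purity from a 4x4 coherency matrix H.

    Assumes H is Hermitian and positive semidefinite (as obtained from a
    physically realizable Mueller matrix). With eigenvalues
    l1 >= l2 >= l3 >= l4 normalized by the trace:
    P1 = l1 - l2, P2 = l1 + l2 - 2*l3, P3 = l1 + l2 + l3 - 3*l4.
    """
    lam = np.linalg.eigvalsh(H)[::-1]     # eigenvalues, descending
    lam = lam / lam.sum()                 # normalize by the trace
    P1 = lam[0] - lam[1]
    P2 = lam[0] + lam[1] - 2 * lam[2]
    P3 = lam[0] + lam[1] + lam[2] - 3 * lam[3]
    return P1, P2, P3

# A fully depolarizing sample (H proportional to identity) gives (0, 0, 0);
# a pure, non-depolarizing one gives (1, 1, 1).
print(polarimetric_purity_indices(np.eye(4)))
```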

  5. [Influence of SiO2 films on color reproduction of Ni-Cr alloy porcelain crowns].

    PubMed

    Wu, Dong; Feng, Yunzhi

    2011-08-01

    To study whether SiO2 films influence the color of Ni-Cr metal ceramic restorations. In the film-plating experimental group, a sol-gel method was employed to apply SiO2 films to the surface of the Ni-Cr copings, while no coating was applied in the non-film-plating control group. Veneering porcelains were then applied, and a total of 12 B2-colored maxillary incisor metal ceramic crowns were fabricated, with 6 crowns in each group. A ShadeEye NCC computer-aided colorimeter was employed to measure the shade of the samples, as well as of 6 B2 shade standards (Vitapan Classical shade tabs). The color was expressed in CIE-1976-Lab coordinates. There was a statistically significant color difference between all metal ceramic crowns and the B2 shade standards (delta E > 1.5). The L*, a*, b* values of all crowns were higher than those of the B2 shade standards, and the crowns were typically yellower or redder. Neither a significant color difference nor a difference in shade values was observed between the film-plating experimental group and the non-film-plating control group (delta E < 1.5). SiO2 films applied to Ni-Cr copings by the sol-gel technique do not affect the final color of the metal ceramic restorations.
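
    The delta E criterion used above is the CIE 1976 color difference, the Euclidean distance between two points in CIELAB space. A minimal sketch of the computation, with illustrative coordinates rather than values from the study:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE 1976 colour difference between two CIELAB coordinates."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Illustrative values only: a crown measurement vs. a B2 shade tab.
crown = (71.2, 1.8, 18.5)   # (L*, a*, b*) of a fabricated crown
tab   = (69.8, 0.9, 16.9)   # (L*, a*, b*) of the B2 standard
dE = delta_e_cie76(crown, tab)
print(f"delta E = {dE:.2f} -> {'distinguishable' if dE > 1.5 else 'match'}")
```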

  6. Linear regression analysis and its application to multivariate chromatographic calibration for the quantitative analysis of two-component mixtures.

    PubMed

    Dinç, Erdal; Ozdemir, Abdil

    2005-01-01

    A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a set of five wavelengths. The algorithm of this calibration model, which has a simple mathematical content, is briefly described. The approach is a powerful mathematical tool for optimal chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The calibration reduces the multivariate linear regression functions to univariate data sets. The model was validated by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The results were compared with those obtained by a classical HPLC method, and the proposed multivariate chromatographic calibration was observed to give better results than classical HPLC.
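
    As a rough illustration of reducing the multivariate problem to univariate regressions, the sketch below builds one concentration-versus-peak-area line per wavelength and combines the per-wavelength predictions for an unknown. All concentrations, sensitivities, and peak areas are invented for the example:

```python
import numpy as np

# Hypothetical calibration data: peak areas recorded at five wavelengths
# for standards of known concentration (one analyte shown; the same
# scheme would be applied per analyte of a binary mixture).
conc = np.array([4.0, 8.0, 12.0, 16.0, 20.0])        # ug/mL standards
areas = np.outer(conc, [1.2, 0.9, 0.7, 0.5, 0.3])    # 5 standards x 5 wavelengths
areas += np.random.default_rng(1).normal(scale=0.05, size=areas.shape)

# One univariate regression (area = m*conc + b) per wavelength ...
coeffs = [np.polyfit(conc, areas[:, j], 1) for j in range(areas.shape[1])]

# ... then an unknown is predicted at each wavelength and the five
# estimates are combined, reducing the multivariate calibration to a
# set of univariate regressions.
unknown = np.array([12.1, 9.0, 7.1, 5.0, 3.1])       # measured peak areas
estimates = [(a - b) / m for (m, b), a in zip(coeffs, unknown)]
print(np.mean(estimates))                            # ~10 ug/mL here
```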

  7. Development of methodologies for the estimation of thermal properties associated with aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.

    1994-01-01

    Thermal stress analyses are an important aspect of the development of aerospace vehicles at NASA-LaRC. These analyses require knowledge of the temperature distributions within the vehicle structures, which in turn necessitates accurate thermal property data. The overall goal of this ongoing research effort is to develop methodologies for estimating the thermal property data needed to describe the temperature responses of these complex structures. The research strategy follows a building block approach: first focus on property estimation methodologies for relatively simple conditions, such as isotropic materials at constant temperatures, and then systematically modify the technique for the analysis of more and more complex systems, such as anisotropic multi-component systems. The estimation methodology is a statistically based method that incorporates experimental data and a mathematical model of the system. Several aspects of this overall research effort were investigated during the ASEE summer program. One important aspect was the calibration of the estimation procedure for estimating thermal properties through the thickness of a standard material. Transient experiments were conducted using a Pyrex standard at various temperatures, and the thermal properties (thermal conductivity and volumetric heat capacity) were estimated at each temperature. Confidence regions for the estimated values were also determined, and the results were compared with documented values. Another set of experimental tests was conducted on carbon composite samples at different temperatures; again, the thermal properties were estimated for each temperature, and the results were compared with values obtained using another technique. In both sets of experiments, a 10-15 percent offset between the estimated values and the previously determined values was found. Another effort was related to the development of the experimental techniques. Initial experiments required a resistance heater placed between two samples; the design was modified so that the heater was placed on the surface of only one sample, as would be necessary in the analysis of built-up structures. Experiments using the modified technique were conducted on the composite sample used previously at different temperatures, and the results were within 5 percent of those found using two samples. Finally, an initial heat transfer analysis, including conduction, convection, and radiation components, was completed on a titanium sandwich structural sample. Experiments utilizing this sample are currently being designed and will be used first to estimate the material's effective thermal conductivity and later to determine the properties associated with each individual heat transfer component.
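
    The statistically based estimation described above amounts to fitting a conduction model to transient temperature data by least squares. The sketch below does this for a classical semi-infinite solid under constant surface heat flux, a far simpler model than the built-up structures in the study; the flux, sensor depth, property values, and noise level are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

# Semi-infinite solid under constant surface heat flux q0 (a classical
# conduction solution used here as the mathematical model). k is thermal
# conductivity (W/m/K), C volumetric heat capacity (J/m^3/K).
q0, x = 1000.0, 2e-3        # surface flux (W/m^2), sensor depth (m)

def temperature_rise(t, k, C):
    a = k / C               # thermal diffusivity (m^2/s)
    return (q0 / k) * (np.sqrt(4 * a * t / np.pi) * np.exp(-x**2 / (4 * a * t))
                       - x * erfc(x / (2 * np.sqrt(a * t))))

# Synthetic "measurements": Pyrex-like properties plus noise.
t = np.linspace(1.0, 60.0, 30)
data = temperature_rise(t, 1.1, 1.7e6)
data += np.random.default_rng(2).normal(scale=0.02, size=t.size)

(k_hat, C_hat), cov = curve_fit(temperature_rise, t, data, p0=[1.0, 1.5e6])
print(k_hat, C_hat, np.sqrt(np.diag(cov)))   # estimates and standard errors
```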

  8. Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.

    PubMed

    Spiess, Martin; Jordan, Pascal; Wendt, Mike

    2018-05-07

    In this paper we propose a simple estimator for unbalanced repeated measures design models in which each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure; thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests, where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and of the bootstrap techniques for calculating confidence intervals and conducting hypothesis tests in small and large samples, under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task-switching experiment based on a within-subject design with 32 cells and 33 participants.
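
    The bias-corrected and accelerated (BCa) interval mentioned above is available off the shelf in SciPy, so a minimal sketch needs only a sample of per-participant scores. The data here are synthetic and deliberately skewed, the situation where a normality-based interval is most doubtful:

```python
import numpy as np
from scipy.stats import bootstrap

# Hypothetical per-participant effect scores from a repeated measures
# design; the BCa bootstrap gives a confidence interval for the mean
# without assuming normal errors.
rng = np.random.default_rng(3)
effects = rng.exponential(scale=30.0, size=33) - 20.0   # skewed, n = 33

res = bootstrap((effects,), np.mean, confidence_level=0.95,
                method="BCa", random_state=rng)
print(res.confidence_interval)
```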

  9. TREATMENT OF LANDFILL LEACHATE BY COUPLING COAGULATION-FLOCCULATION OR OZONATION TO GRANULAR ACTIVATED CARBON ADSORPTION.

    PubMed

    Oloibiri, Violet; Ufomba, Innocent; Chys, Michael; Audenaert, Wim; Demeestere, Kristof; Van Hulle, Stijn W H

    2015-01-01

    A major concern for landfilling facilities is the treatment of their leachate. To optimize organic matter removal from this leachate, a combination of two or more techniques is preferred in order to meet stringent effluent standards. In our study, coagulation-flocculation and ozonation are compared as pre-treatment steps for stabilized landfill leachate prior to granular activated carbon (GAC) adsorption. The efficiency of the pre-treatment techniques is evaluated using COD and UVA254 measurements. For coagulation-flocculation, different chemicals are compared and optimal dosages are determined. Iron (III) chloride is then selected for the subsequent adsorption studies because of its high COD and UVA254 removal and good sludge settleability. Our findings show that ozonation as a single treatment is more effective in reducing COD in landfill leachate (66%) than coagulation-flocculation (33%), whereas coagulation performs better than ozonation in reducing UVA254. Subsequent GAC adsorption of ozonated effluent, coagulated effluent, and untreated leachate resulted in 77%, 53%, and 8% total COD removal, respectively (after 6 bed volumes). The effect of the pre-treatment techniques on GAC adsorption is evaluated experimentally and mathematically using the Thomas and Yoon-Nelson models. Mathematical modelling of the experimental GAC adsorption data shows that ozonation increases the adsorption capacity and breakthrough time by a factor of 2.5 compared to coagulation-flocculation.
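
    Of the two breakthrough models named above, the Yoon-Nelson model is the simpler to fit: the effluent-to-influent concentration ratio follows a logistic curve in time, governed by a rate constant k and the 50% breakthrough time tau. A minimal fitting sketch with invented data points, not values from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Yoon-Nelson breakthrough model: C/C0 = 1 / (1 + exp(k * (tau - t))),
# with rate constant k (1/h) and 50% breakthrough time tau (h).
def yoon_nelson(t, k, tau):
    return 1.0 / (1.0 + np.exp(k * (tau - t)))

# Illustrative breakthrough data for a GAC column (time in hours).
t = np.array([1, 2, 4, 6, 8, 12, 16, 24], dtype=float)
c_ratio = np.array([0.02, 0.05, 0.15, 0.35, 0.55, 0.82, 0.93, 0.99])

(k, tau), _ = curve_fit(yoon_nelson, t, c_ratio, p0=[0.5, 8.0])
print(f"k = {k:.3f} 1/h, 50% breakthrough at tau = {tau:.1f} h")
```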

  10. Progress on an implementation of MIFlowCyt in XML

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.; Leif, Stephanie H.

    2015-03-01

    Introduction: The International Society for Advancement of Cytometry (ISAC) Data Standards Task Force (DSTF) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). The CytometryML schemas are based in part upon the Flow Cytometry Standard and the Digital Imaging and Communications in Medicine (DICOM) standards. CytometryML has been, and will continue to be, extended and adapted to include MIFlowCyt, as well as to serve as a common standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, like the rest of CytometryML, in the XML Schema Definition Language (XSD 1.1). Individual major elements of the MIFlowCyt schema were translated into XML and filled with reasonable data. A small section of the code was formatted with HTML formatting elements. Results: The difference in the amount of detail to be recorded for (1) users of standard techniques, including data analysts, and (2) others, such as method and device creators, laboratory and other managers, engineers, and regulatory specialists, required that separate data-types be created to describe the instrument configuration and components. A very substantial part of the element describing the Experimental Overview section of MIFlowCyt, and substantial parts of several other major elements, have been developed. Conclusions: The future use of structured XML tags and web technology should facilitate the searching of experimental information, its presentation, and its inclusion in structured research, clinical, and regulatory documents, as well as demonstrating adherence to the MIFlowCyt standard in publications. The use of CytometryML together with XML technology should also allow textual and numeric data to be published using web technology without any change in composition. Preliminary testing indicates that CytometryML XML pages can be directly formatted with a combination of HTML and CSS.
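
    To make the idea of structured experiment metadata concrete, the sketch below emits a small XML fragment from Python. The element names are illustrative stand-ins only; they are not the actual MIFlowCyt or CytometryML tag vocabulary, which is defined by the schemas discussed above:

```python
import xml.etree.ElementTree as ET

# Minimal sketch of serializing experiment metadata as structured XML.
# All tag names below are hypothetical placeholders.
exp = ET.Element("ExperimentOverview")
ET.SubElement(exp, "Purpose").text = "Immunophenotyping of sample X"
ET.SubElement(exp, "Keywords").text = "flow cytometry; CD4; CD8"
inst = ET.SubElement(exp, "InstrumentConfiguration")
ET.SubElement(inst, "Manufacturer").text = "ExampleCorp"

print(ET.tostring(exp, encoding="unicode"))
```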

  11. Quantitative determination and validation of octreotide acetate using 1 H-NMR spectroscopy with internal standard method.

    PubMed

    Yu, Chen; Zhang, Qian; Xu, Peng-Yao; Bai, Yin; Shen, Wen-Bin; Di, Bin; Su, Meng-Xiang

    2018-01-01

    Quantitative nuclear magnetic resonance (qNMR) is a well-established technique in quantitative analysis. We present a validated 1H-qNMR method for the assay of octreotide acetate, a cyclic octapeptide. Deuterium oxide was used to remove the undesired exchangeable peaks (referred to as proton exchange) in order to isolate the quantitative signals in the crowded spectrum of the peptide and ensure precise quantitative analysis. Gemcitabine hydrochloride was chosen as a suitable internal standard. Experimental conditions, including the relaxation delay time, the number of scans, and the pulse angle, were optimized first. Method validation was then carried out in terms of selectivity, stability, linearity, precision, and robustness. The assay result was compared with that obtained by high performance liquid chromatography, as provided by the Chinese Pharmacopoeia. The statistical F test, Student's t test, and a nonparametric test at the 95% confidence level indicate that there was no significant difference between the two methods. qNMR is a simple and accurate quantitative tool with no need for specific corresponding reference standards. It has potential for the quantitative analysis of other peptide drugs and for the standardization of the corresponding reference standards. Copyright © 2017 John Wiley & Sons, Ltd.
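
    In internal standard qNMR, the analyte content follows from the ratio of integrals of isolated signals, scaled by the proton counts of those signals, the molar masses, and the weighed amounts. A minimal sketch of that arithmetic; every numeric value below is an illustrative placeholder, not a value reported in the study:

```python
def qnmr_purity(I_a, I_s, N_a, N_s, M_a, M_s, m_a, m_s, P_s):
    """Analyte purity (fraction) from 1H-qNMR integrals.

    I: signal integrals, N: protons per quantitative signal,
    M: molar masses (g/mol), m: weighed masses (mg),
    P_s: purity of the internal standard.
    """
    return (I_a / I_s) * (N_s / N_a) * (M_a / M_s) * (m_s / m_a) * P_s

# Placeholder numbers for an octreotide (analyte) vs. gemcitabine
# hydrochloride (internal standard) assay.
print(qnmr_purity(I_a=1.94, I_s=1.00, N_a=2, N_s=1,
                  M_a=1019.2, M_s=299.66, m_a=10.1, m_s=3.0, P_s=0.998))
```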

  12. Using standardized fishery data to inform rehabilitation efforts

    USGS Publications Warehouse

    Spurgeon, Jonathan J.; Stewart, Nathaniel T.; Pegg, Mark A.; Pope, Kevin L.; Porath, Mark T.

    2016-01-01

    Lakes and reservoirs progress through an aging process often accelerated by human activities, resulting in degradation or loss of ecosystem services. Resource managers thus attempt to slow or reverse the negative effects of aging using a myriad of rehabilitation strategies. Sustained monitoring programs to assess the efficacy of rehabilitation strategies are often limited; however, long-term standardized fishery surveys may be a valuable data source from which to begin evaluation. We present 3 case studies using standardized fishery survey data to assess rehabilitation efforts stemming from the Nebraska Aquatic Habitat Plan, a large-scale program with the mission to rehabilitate waterbodies within the state. The case studies highlight that biotic responses to rehabilitation efforts can be assessed, to an extent, using standardized fishery data; however, there were specific areas where minor increases in effort would clarify the effectiveness of rehabilitation techniques. Management of lakes and reservoirs can be streamlined by maximizing the utility of such datasets to work smarter, not harder. To facilitate such efforts, we stress collecting both biotic (e.g., fish lengths and weight) and abiotic (e.g., dissolved oxygen, pH, and turbidity) data during standardized fishery surveys and designing rehabilitation actions with an appropriate experimental design.

  13. Standardized pivot shift test improves measurement accuracy.

    PubMed

    Hoshino, Yuichi; Araujo, Paulo; Ahlden, Mattias; Moore, Charity G; Kuroda, Ryosuke; Zaffagnini, Stefano; Karlsson, Jon; Fu, Freddie H; Musahl, Volker

    2012-04-01

    The variability of pivot shift test techniques greatly interferes with achieving a quantitative and generally comparable measurement. The purpose of this study was to compare the variation of quantitative pivot shift measurements obtained with different surgeons' preferred techniques against a standardized technique. The hypothesis was that standardizing the pivot shift test would improve the consistency of the quantitative evaluation compared with surgeon-specific techniques. A whole lower body cadaveric specimen was prepared to have a low-grade pivot shift on one side and a high-grade pivot shift on the other side. Twelve expert surgeons performed the pivot shift test using (1) their preferred technique and (2) a standardized technique. Electromagnetic tracking was used to measure anterior tibial translation and the acceleration of the reduction during the pivot shift test. The variation of the measurement was compared between the surgeons' preferred technique and the standardized technique. Anterior tibial translation during the pivot shift test was similar between the surgeons' preferred technique (left 24.0 ± 4.3 mm; right 15.5 ± 3.8 mm) and the standardized technique (left 25.1 ± 3.2 mm; right 15.6 ± 4.0 mm; n.s.). However, the variation in acceleration was significantly smaller with the standardized technique (left 3.0 ± 1.3 mm/s²; right 2.5 ± 0.7 mm/s²) compared with the surgeons' preferred technique (left 4.3 ± 3.3 mm/s²; right 3.4 ± 2.3 mm/s²; both P < 0.01). Standardizing the pivot shift test maneuver provides a more consistent quantitative evaluation and may be helpful in designing future multicenter clinical outcome trials. Diagnostic study, Level I.

  14. Laser bonding with ICG-infused chitosan patches: preliminary experiences in suine dura mater and vocal folds

    NASA Astrophysics Data System (ADS)

    Rossi, Francesca; Matteini, Paolo; Ratto, Fulvio; Pini, Roberto; Iacoangeli, Maurizio; Giannoni, Luca; Fortuna, Damiano; Di Cicco, Emiliano; Corbara, Sylwia; Dallari, Stefano

    2014-05-01

    Laser bonding is a promising minimally invasive approach, emerging as a valid alternative to conventional suturing techniques. It shows widely demonstrated advantages in wound treatment: an immediate closure effect, minimal inflammatory response and scar formation, and reduced healing time. This laser-based technique can overcome the difficulties of working through narrow surgical corridors (e.g., modern "key-hole" surgery and endoscopic settings) or on thin tissues that are impossible to treat with staples and/or stitches. We recently proposed the use of chitosan matrices, stained with conventional chromophores, for laser bonding of vascular tissue. In this work we propose the same procedure for laser bonding of vocal folds and for dura mater repair. Laser bonding of vocal folds is proposed to avoid the development of adhesions (synechiae) after conventional or CO2 laser surgery; the neurosurgical application is proposed for the treatment of dural defects, as cerebrospinal fluid leaks remain a major issue. Vocal folds and dura mater were harvested from 9-month-old pigs and used in the experimental sessions within 4 hours after sacrifice. In the vocal fold treatment, an Indocyanine Green-infused chitosan patch was applied onto the anterior commissure, while the dura mater was first incised and then bonded. A diode laser emitting at 810 nm, equipped with a 600 μm diameter optical fiber, was used to weld the patch onto the tissue by delivering single laser spots that induce local patch/tissue adhesion. The result is immediate adhesion of the patch to the tissue. Standard histology was performed in order to study the induced photothermal effect at the bonding sites. This preliminary experimental activity shows the advantages of the proposed technique with respect to standard surgery: simplification of the procedure, decreased foreign-body reaction, reduced inflammatory response, reduced operating times, and better handling at depth.

  15. An improved survivability prognosis of breast cancer by using sampling and feature selection technique to solve imbalanced patient classification data.

    PubMed

    Wang, Kung-Jeng; Makond, Bunjira; Wang, Kung-Min

    2013-11-09

    Breast cancer is one of the most critical cancers and is a major cause of cancer death among women. It is essential to know the survivability of patients in order to ease the decision-making process regarding medical treatment and financial preparation. The breast cancer data sets are imbalanced (i.e., the number of surviving patients greatly outnumbers the number of non-surviving patients), and standard classifiers are not applicable to imbalanced data sets, so methods to improve survivability prognosis of breast cancer need further study. Two well-known five-year prognosis models/classifiers [i.e., logistic regression (LR) and decision tree (DT)] are constructed by combining the synthetic minority over-sampling technique (SMOTE), the cost-sensitive classifier technique (CSC), under-sampling, bagging, and boosting. A feature selection method is used to select relevant variables, and a pruning technique is applied to obtain low information-burden models. These methods are applied to data obtained from the Surveillance, Epidemiology, and End Results database, and the improvements in survivability prognosis are investigated on the basis of the experimental results. The experimental results confirm that the DT and LR models combined with SMOTE, CSC, and under-sampling consistently generate higher predictive performance than the original models. Most of the time, DT and LR models combined with SMOTE and CSC use fewer features when a feature selection method and a pruning technique are applied. LR is found to have better statistical power than DT in predicting five-year survivability, and CSC is superior to SMOTE, under-sampling, bagging, and boosting in improving the prognostic performance of DT and LR.
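
    The two best-performing corrections named above, SMOTE and a cost-sensitive classifier, are both available off the shelf in Python. A minimal sketch contrasting them on synthetic imbalanced data; the features, class ratio, and model here are stand-ins, not the SEER variables used in the study:

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

# Synthetic imbalanced data: the minority (non-survival) class is ~10%.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=2.0, size=2000) > 3.1).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Variant 1: SMOTE synthesizes minority examples, then plain LR is fit.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
smote_lr = LogisticRegression().fit(X_res, y_res)

# Variant 2: cost-sensitive classification via class weights, no resampling.
csc_lr = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)

for name, clf in [("SMOTE+LR", smote_lr), ("CSC+LR", csc_lr)]:
    print(name, balanced_accuracy_score(y_te, clf.predict(X_te)))
```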

  16. An improved survivability prognosis of breast cancer by using sampling and feature selection technique to solve imbalanced patient classification data

    PubMed Central

    2013-01-01

    Background Breast cancer is one of the most critical cancers and is a major cause of cancer death among women. It is essential to know the survivability of patients in order to ease the decision-making process regarding medical treatment and financial preparation. The breast cancer data sets are imbalanced (i.e., the number of surviving patients greatly outnumbers the number of non-surviving patients), whereas standard classifiers are not applicable to imbalanced data sets, so methods to improve survivability prognosis of breast cancer need further study. Methods Two well-known five-year prognosis models/classifiers [i.e., logistic regression (LR) and decision tree (DT)] are constructed by combining the synthetic minority over-sampling technique (SMOTE), the cost-sensitive classifier technique (CSC), under-sampling, bagging, and boosting. A feature selection method is used to select relevant variables, and a pruning technique is applied to obtain low information-burden models. These methods are applied to data obtained from the Surveillance, Epidemiology, and End Results database. The improvements in survivability prognosis of breast cancer are investigated on the basis of the experimental results. Results Experimental results confirm that the DT and LR models combined with SMOTE, CSC, and under-sampling consistently generate higher predictive performance than the original models. Most of the time, DT and LR models combined with SMOTE and CSC use fewer features when a feature selection method and a pruning technique are applied. Conclusions LR is found to have better statistical power than DT in predicting five-year survivability. CSC is superior to SMOTE, under-sampling, bagging, and boosting in improving the prognostic performance of DT and LR. PMID:24207108

  17. Comparative evaluation of microleakage in Class II restorations using open vs. closed centripetal build-up techniques with different lining materials

    PubMed Central

    Sawani, Shefali; Arora, Vipin; Jaiswal, Shikha; Nikhil, Vineeta

    2014-01-01

    Background: Evaluation of microleakage is important for assessing the success of new restorative materials and methods. Aim and Objectives: Comparative evaluation of microleakage in Class II restorations using open vs. closed centripetal build-up techniques with different lining materials. Materials and Methods: Standardized mesio-occlusal (MO) and disto-occlusal (DO) Class II preparations were made on 53 molars, and the samples were randomly divided into six experimental groups and one control group for restoration. Group 1: open-sandwich technique (OST) with flowable composite at the gingival seat. Group 2: OST with resin-modified glass ionomer cement (RMGIC) at the gingival seat. Group 3: closed-sandwich technique (CST) with flowable composite at the pulpal floor and axial wall. Group 4: CST with RMGIC at the pulpal floor and axial wall. Group 5: OST with flowable composite at the pulpal floor, axial wall, and gingival seat. Group 6: OST with RMGIC at the pulpal floor, axial wall, and gingival seat. Group 7: control, with no lining material, centripetal technique only. After restoration and thermocycling, the apices were sealed and the samples were immersed in 0.5% basic fuchsin dye. Sectioning was followed by stereomicroscopic evaluation. Results: Results were analyzed using the post hoc Bonferroni test. Cervical scores in the control group were higher than in the experimental groups (P < 0.05). Less microleakage was observed with CST than with OST in all experimental groups (P < 0.05). However, differences among the occlusal scores of the different groups were insignificant (P > 0.05). Conclusion: Class II composite restorations with centripetal build-up alone, or placed with CST, reduce cervical microleakage compared to OST. PMID:25125847

  18. Development of cataract and corneal opacity in mice due to radon exposure

    NASA Astrophysics Data System (ADS)

    Abdelkawi, S. A.; Abo-Elmagd, M.; Soliman, H. A.

    This work investigates radiation damage to the eyes of albino mice exposed to effective radon doses ranging from 20.92 to 83.68 mSv. These doses were delivered over 2-8 weeks using a radon chamber constructed by the National Institute for Standards (Egypt). The guidance on the quality assurance program for radon measurements was followed; therefore, the estimated doses received by the laboratory animals meet the requirements of national standards. The refractive index (RI) and protein concentration were measured for the soluble proteins of both corneas and lenses. In addition, the sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) technique was used. The results show an increase in the RI of both cornea and lens proteins and a decrease in total protein concentration in the exposed animals. These results were accompanied by changes in the SDS-PAGE profile of both cornea and lens. Therefore, radon exposure poses substantial hazards to the cornea and lens of experimental animals and has a crucial role in the development of cataract and corneal opacity.

  19. Quantitative gel electrophoresis: new records in precision by elaborated staining and detection protocols.

    PubMed

    Deng, Xi; Schröder, Simone; Redweik, Sabine; Wätzig, Hermann

    2011-06-01

    Gel electrophoresis (GE) is a very common analytical technique for proteome research and protein analysis. Despite having been developed decades ago, there is still a considerable need to improve its precision. Using the fluorescence of Colloidal Coomassie Blue-stained proteins in the near-infrared (NIR), the major error source, caused by unpredictable background staining, is strongly reduced. This result was generalized for various types of detectors. Since GE is a multi-step procedure, standardization of every single step is required. After a detailed analysis of all steps, staining and destaining were identified as the major sources of the remaining variation. By employing standardized protocols, pooled percent relative standard deviations of 1.2-3.1% for band intensities were achieved for one-dimensional separations in repetitive experiments. The analysis of variance suggests that the same batch of staining solution should be used for all gels of one experimental series to minimize day-to-day variation and to obtain high precision. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Comparison of a new noncoplanar intensity-modulated radiation therapy technique for craniospinal irradiation with 3 coplanar techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Anders T., E-mail: andehans@rm.dk; Lukacova, Slavka; Lassen-Ramshad, Yasmin

    2015-01-01

    When the standard conformal x-ray technique for craniospinal irradiation is used, it is a challenge to achieve satisfactory dose coverage of the target, including the area of the cribriform plate, while sparing organs at risk. We present a new noncoplanar intensity-modulated radiation therapy (IMRT) technique for delivering irradiation to the cranial part and compare it with 3 other techniques and previously published results. A total of 13 patients who had previously received craniospinal irradiation with the standard conformal x-ray technique were reviewed. New treatment plans were generated for each patient using the noncoplanar IMRT-based technique, a coplanar IMRT-based technique, and a coplanar volumetric-modulated arc therapy (VMAT) technique. Dosimetry data for all patients were compared with the corresponding data from the conventional treatment plans. The new noncoplanar IMRT technique substantially reduced the mean dose to organs at risk compared with the standard radiation technique. The 2 other coplanar techniques also reduced the mean dose to some of the critical organs, but the reduction was not as substantial as that obtained with the noncoplanar technique. Furthermore, compared with the standard technique, the IMRT techniques reduced the total calculated radiation dose delivered to normal tissue, whereas the VMAT technique increased this dose. Additionally, coverage of the target was significantly improved by the noncoplanar IMRT technique; the coplanar IMRT and VMAT techniques did not improve target coverage significantly compared with the standard technique. All the new planning techniques increased the number of monitor units (MU) used (the noncoplanar IMRT technique by 99%, the coplanar IMRT technique by 122%, and the VMAT technique by 26%), raising concern about leakage radiation. The noncoplanar IMRT technique covered the target better and decreased doses to organs at risk compared with the other techniques, while all the new techniques increased the number of MU compared with the standard technique.

  1. Experimental Guidance for Isospin Symmetry Breaking Calculations via Single Neutron Pickup Reactions

    NASA Astrophysics Data System (ADS)

    Leach, K. G.; Garrett, P. E.; Bangay, J. C.; Bianco, L.; Demand, G. A.; Finlay, P.; Green, K. L.; Phillips, A. A.; Rand, E. T.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Wong, J.; Ball, G.; Faestermann, T.; Krücken, R.; Hertenberger, R.; Wirth, H.-F.; Towner, I. S.

    2013-03-01

    Recent activity in superallowed isospin-symmetry-breaking correction calculations has prompted interest in experimental confirmation of these calculation techniques. The shell-model calculations of Towner and Hardy (2008) include the opening of specific core orbitals that were previously frozen. This has resulted in significant shifts in some of the δC values and in improved agreement of the individual corrected Ft values with the adopted world average for the 13 cases currently included in the high-precision evaluation of Vud. While the nucleus-to-nucleus consistency of Ft is in accord with the conserved-vector-current (CVC) hypothesis of the Standard Model, these new calculations must be thoroughly tested, and guidance must be given for their improvement. Presented here are details of a 64Zn(d,t)63Zn single neutron pickup experiment with a polarized deuteron beam, undertaken to provide such guidance.

  2. Methods, analysis, and the treatment of systematic errors for the electron electric dipole moment search in thorium monoxide

    NASA Astrophysics Data System (ADS)

    Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration

    2017-07-01

    We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.

  3. Electromagnetic Real Time Navigation in the Region of the Posterior Pelvic Ring: An Experimental In-Vitro Feasibility Study and Comparison of Image Guided Techniques.

    PubMed

    Pishnamaz, Miguel; Wilkmann, Christoph; Na, Hong-Sik; Pfeffer, Jochen; Hänisch, Christoph; Janssen, Max; Bruners, Philipp; Kobbe, Philipp; Hildebrand, Frank; Schmitz-Rode, Thomas; Pape, Hans-Christoph

    2016-01-01

    Electromagnetic tracking is a relatively new technique that allows real-time navigation in the absence of radiation. The aim of this study was to prove the feasibility of this technique for the treatment of posterior pelvic ring fractures and to compare the results with established image-guided procedures. Tests were performed on pelvic specimens (Sawbones®) with standardized sacral fractures (type Denis I or II). A gel matrix simulated the operative approach, and a cover was used to prevent visual control. The electromagnetic setup used a custom-made carbon reference plate and a prototype stainless steel K-wire with an integrated sensor coil. Four different test series were performed: group OCT, optical navigation using preoperative CT scans; group O3D, optical navigation using intraoperative 3-D fluoroscopy; group Fluoro, conventional 2-D fluoroscopy; and group EMT, electromagnetic navigation combined with a preoperative Dyna-CT. The accuracy of screw placement was analyzed by a standardized postoperative CT scan for each specimen, and the operation time and intraoperative radiation exposure for the surgeon were documented. All data were analyzed using SPSS (Version 20, Chicago, IL, USA), with statistical significance defined as p < 0.05. A total of 160 iliosacral screws were placed (40 per group). EMT resulted in a significantly higher incidence of optimal screw placement (36/40) compared to the Fluoro group (30/40; p < 0.05) and the OCT group (31/40; p < 0.05); results for EMT and O3D were comparable (O3D: 37/40; n.s.). The operation time was also comparable between the EMT and O3D groups (7.62 min vs. 7.98 min; n.s.), while it was significantly shorter than in the Fluoro group (10.69 min; p < 0.001) and the OCT group (13.3 min; p < 0.001). Electromagnetic-guided iliosacral screw placement is a feasible procedure. In our experimental setup, this method was associated with improved accuracy of screw placement and shorter operation time compared with the conventional fluoroscopy-guided technique and with optical navigation using preoperative CT scans. Further studies are necessary to rule out drawbacks of this technique with regard to ferromagnetic objects.

  4. Ultrasound biomicroscopy (UBM) and scanning acoustic microscopy (SAM) for the assessment of hernia mesh integration: a comparison to standard histology in an experimental model.

    PubMed

    Petter-Puchner, A; Gruber-Blum, S; Walder, N; Fortelny, R H; Redl, H; Raum, K

    2014-08-01

    Mesh integration is a key parameter for reliable and safe hernia repair. So far, its assessment has been based on histology obtained from rare second-look operations or experimental research. Therefore, non-invasive high-resolution imaging techniques would be of great value. Ultrasound biomicroscopy (UBM) and scanning acoustic microscopy (SAM) have shown potential in the imaging of hard and soft tissues. This experimental study compared the detection of mesh integration, foreign body reaction and scar formation by UBM/SAM with standard histology. Ten titanized polypropylene meshes were implanted in rats in a model of onlay repair. Seventeen days postoperatively, the animals were killed and samples were paraffin embedded for histology (H&E, Cresyl violet) or processed for postmortem UBM/SAM. The observation period was uneventful and the meshes appeared well integrated. Relocation of neighboring cross-sectional levels could easily be achieved with the 40-MHz UBM, and granulation tissue could be distinguished from adjacent muscle tissue layers. The spatial resolution of approximately 8 μm of the 200-MHz SAM images was comparable to standard histology (2.5-5× magnification) and allowed a clear identification of mesh fibers and different tissue types, e.g., scar, fat, granulation, and muscle tissues, as well as vessels, abscedations, and foreign body giant cell clusters. This pilot study demonstrates the potential of high-frequency ultrasound to assess hernia mesh integration non-invasively. Although the methods lack cell-specific information, tissue integration could be reliably assessed. The possibility of conducting UBM in vivo advocates this method as a guidance tool for the indication of second-look operations and subsequent elaborate histological analyses.

  5. DROP: Detecting Return-Oriented Programming Malicious Code

    NASA Astrophysics Data System (ADS)

    Chen, Ping; Xiao, Hai; Shen, Xiaobin; Yin, Xinchun; Mao, Bing; Xie, Li

    Return-Oriented Programming (ROP) is a technique that lets an attacker construct malicious code from existing x86/SPARC executables without any function calls at all. As a result, the ROP payload injects no instructions of its own, which distinguishes it from existing attacks. Moreover, it hides the malicious code within benign code. Thus, it circumvents approaches that prevent control flow diversion outside legitimate regions (such as W ⊕ X) and most malicious code scanning techniques (such as anti-virus scanners). However, ROP has intrinsic features that differ from normal program design: (1) it uses short instruction sequences ending in "ret", called gadgets, and (2) it executes the gadgets contiguously in a specific memory space, such as the standard GNU libc. Based on these features of ROP malicious code, in this paper we present DROP, a tool focused on dynamically detecting ROP malicious code. Preliminary experimental results show that DROP can efficiently detect ROP malicious code, with no false positives or negatives.
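
    For illustration, below is a minimal sketch of the two run-time heuristics named above: short "ret"-terminated sequences (gadgets) and contiguous execution inside a library region. The thresholds, trace format, and libc address range are illustrative assumptions, not values from the paper.

    ```python
    from collections import namedtuple

    # One retired instruction from a dynamic trace (format is an assumption).
    Insn = namedtuple("Insn", "mnemonic address")

    LIBC_RANGE = (0x7f0000000000, 0x7fffffffffff)  # assumed libc mapping
    MAX_GADGET_LEN = 5    # "short" instruction sequence ending in ret
    MIN_CONTIGUOUS = 3    # consecutive gadgets that trigger an alert

    def is_gadget(seq):
        """A candidate gadget: a short instruction run ending in 'ret'."""
        return 0 < len(seq) <= MAX_GADGET_LEN and seq[-1].mnemonic == "ret"

    def looks_like_rop(trace):
        """trace: list of instruction runs (lists of Insn) between control
        transfers. Flags contiguous ret-terminated gadgets inside libc."""
        streak = 0
        for seq in trace:
            if is_gadget(seq) and LIBC_RANGE[0] <= seq[0].address <= LIBC_RANGE[1]:
                streak += 1
                if streak >= MIN_CONTIGUOUS:
                    return True
            else:
                streak = 0
        return False
    ```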

  6. Phase-stepped fringe projection by rotation about the camera's perspective center.

    PubMed

    Huddart, Y R; Valera, J D; Weston, N J; Featherstone, T C; Moore, A J

    2011-09-12

    A technique to produce phase steps in a fringe projection system for shape measurement is presented. Phase steps are produced by introducing relative rotation between the object and the fringe projection probe (comprising a projector and camera) about the camera's perspective center. Relative motion of the object in the camera image can be compensated, because it is independent of the distance of the object from the camera, whilst the phase of the projected fringes is stepped due to the motion of the projector with respect to the object. The technique was validated with a static fringe projection system by moving an object on a coordinate measuring machine (CMM). The alternative approach, of rotating a lightweight and robust CMM-mounted fringe projection probe, is discussed. An experimental accuracy of approximately 1.5% of the projected fringe pitch was achieved, limited by the standard phase-stepping algorithms used rather than by the accuracy of the phase steps produced by the new technique.
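
    The paper attributes its accuracy limit to the standard phase-stepping algorithms; for reference, here is a sketch of the common four-step algorithm (not necessarily the exact variant used in the paper).

    ```python
    import numpy as np

    def four_step_phase(i0, i1, i2, i3):
        """Four-step phase-shifting algorithm for fringe steps of pi/2:
        I_k = A + B*cos(phi + k*pi/2). Returns wrapped phase in (-pi, pi]."""
        return np.arctan2(i3 - i1, i0 - i2)

    # Synthetic check with a known linear phase ramp phi(x) = x.
    x = np.linspace(0, 4 * np.pi, 512)
    frames = [1.0 + 0.5 * np.cos(x + k * np.pi / 2) for k in range(4)]
    phi = four_step_phase(*frames)  # recovers x, wrapped into (-pi, pi]
    ```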

  7. Living cell dry mass measurement using quantitative phase imaging with quadriwave lateral shearing interferometry: an accuracy and sensitivity discussion.

    PubMed

    Aknoun, Sherazade; Savatier, Julien; Bon, Pierre; Galland, Frédéric; Abdeladim, Lamiae; Wattellier, Benoit; Monneret, Serge

    2015-01-01

    Single-cell dry mass measurement is used in biology to follow the cell cycle, to address the effects of drugs, or to investigate cell metabolism. The quantitative phase imaging technique with quadriwave lateral shearing interferometry (QWLSI) allows cell dry mass to be measured. The technique is very simple to set up, as it is integrated in a camera-like instrument: it simply plugs onto a standard microscope and uses a white-light illumination source. Its working principle is first explained, from image acquisition to the automated segmentation algorithm and dry mass quantification. Metrology of the whole process, including its sensitivity, repeatability, reliability, and sources of error, over different kinds of samples and under different experimental conditions, is developed. We show that magnification and spatial light coherence have no influence on dry mass measurement; the effect of defocus is more critical but can be calibrated. As a consequence, QWLSI is a well-suited technique for fast, simple, and reliable cell dry mass studies, especially for live cells.
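
    The quantification step reduces to integrating the measured optical path difference (OPD) over the segmented cell. A minimal sketch, assuming a typical specific refraction increment of about 0.18 um^3/pg from the literature (the value used in the paper is not given here):

    ```python
    import numpy as np

    ALPHA_UM3_PER_PG = 0.18  # specific refraction increment (typical 0.18-0.21)

    def cell_dry_mass(opd_um, pixel_area_um2, cell_mask):
        """Dry mass in picograms from a QWLSI optical path difference map:
        m = (1/alpha) * integral of OPD over the segmented cell area.
        opd_um: 2-D OPD image in micrometres; cell_mask: boolean segmentation."""
        return opd_um[cell_mask].sum() * pixel_area_um2 / ALPHA_UM3_PER_PG
    ```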

  8. Computational Biology Methods for Characterization of Pluripotent Cells.

    PubMed

    Araúzo-Bravo, Marcos J

    2016-01-01

    Pluripotent cells are a powerful tool for regenerative medicine and drug discovery. Several techniques have been developed to induce pluripotency, or to extract pluripotent cells from different tissues and biological fluids. However, the characterization of pluripotency requires tedious, expensive, time-consuming, and not always reliable wet-lab experiments; thus, an easy, standard quality-control protocol for pluripotency assessment remains to be established. High-throughput techniques can help here; in particular, gene expression microarrays have become a complementary technique for cellular characterization. Research has shown that transcriptomic comparison with a reference Embryonic Stem Cell (ESC) line is a good approach to assessing pluripotency. Under the premise that the best protocol is computer software source code, here I propose and explain, line by line, a software protocol coded in R-Bioconductor for pluripotency assessment based on the comparison of transcriptomics data of pluripotent cells with a reference ESC. I provide advice on experimental design, warnings about possible pitfalls, and guidance on interpreting results.
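
    The chapter's protocol is written in R-Bioconductor; as a language-neutral sketch of its core comparison step, the following computes the correlation of a candidate sample's expression profile against an ESC reference (gene ordering, normalization, and any score threshold are assumptions, not the chapter's exact criterion).

    ```python
    import numpy as np

    def pluripotency_score(sample_expr, esc_ref_expr):
        """Pearson correlation between a sample's expression profile and an
        Embryonic Stem Cell (ESC) reference. Both inputs: 1-D arrays of
        normalized (e.g., log-scale) expression over the same ordered genes."""
        return float(np.corrcoef(sample_expr, esc_ref_expr)[0, 1])

    # Usage: a score close to the ESC-vs-ESC correlation supports pluripotency;
    # the decision threshold would come from known pluripotent and
    # non-pluripotent control samples (an assumption for illustration).
    ```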

  9. Comparison of Preloaded Bougie versus Standard Bougie Technique for Endotracheal Intubation in a Cadaveric Model.

    PubMed

    Baker, Jay B; Maskell, Kevin F; Matlock, Aaron G; Walsh, Ryan M; Skinner, Carl G

    2015-07-01

    We compared intubating with a preloaded bougie (PB) against standard bougie technique in terms of success rates, time to successful intubation and provider preference on a cadaveric airway model. In this prospective, crossover study, healthcare providers intubated a cadaver using the PB technique and the standard bougie technique. Participants were randomly assigned to start with either technique. Following standardized training and practice, procedural success and time for each technique was recorded for each participant. Subsequently, participants were asked to rate their perceived ease of intubation on a visual analogue scale of 1 to 10 (1=difficult and 10=easy) and to select which technique they preferred. 47 participants with variable experience intubating were enrolled at an emergency medicine intern airway course. The success rate of all groups for both techniques was equal (95.7%). The range of times to completion for the standard bougie technique was 16.0-70.2 seconds, with a mean time of 29.7 seconds. The range of times to completion for the PB technique was 15.7-110.9 seconds, with a mean time of 29.4 seconds. There was a non-significant difference of 0.3 seconds (95% confidence interval -2.8 to 3.4 seconds) between the two techniques. Participants rated the relative ease of intubation as 7.3/10 for the standard technique and 7.6/10 for the preloaded technique (p=0.53, 95% confidence interval of the difference -0.97 to 0.50). Thirty of 47 participants subjectively preferred the PB technique (p=0.039). There was no significant difference in success or time to intubation between standard bougie and PB techniques. The majority of participants in this study preferred the PB technique. Until a clear and clinically significant difference is found between these techniques, emergency airway operators should feel confident in using the technique with which they are most comfortable.

  10. Air Embolism During TEVAR: Carbon Dioxide Flushing Decreases the Amount of Gas Released from Thoracic Stent-Grafts During Deployment.

    PubMed

    Rohlffs, Fiona; Tsilimparis, Nikolaos; Saleptsis, Vasilis; Diener, Holger; Debus, E Sebastian; Kölbel, Tilo

    2017-02-01

    To investigate the amount of gas released from Zenith thoracic stent-grafts using standard saline flushing vs the carbon dioxide flushing technique. In an experimental bench setting, 20 thoracic stent-grafts were separated into 2 groups of 10 endografts. One group of grafts was flushed with 60 mL saline and the other group was flushed with carbon dioxide for 5 minutes followed by 60 mL saline. All grafts were deployed into a water-filled container with a curved plastic pipe; the deployment was recorded and released gas was measured using a calibrated setup. Gas was released from all grafts in both study groups during endograft deployment. The average amount of released gas per graft was significantly lower in the study group with carbon dioxide flushing (0.79 vs 0.51 mL, p=0.005). Thoracic endografts release significant amounts of air during deployment if flushed according to the instructions for use. Application of carbon dioxide for the flushing of thoracic stent-grafts prior to standard saline flush significantly reduces the amount of gas released during deployment. The additional use of carbon dioxide should be considered as a standard flush technique for aortic stent-grafts, especially in those implanted in proximal aortic segments, to reduce the risk of air embolism and stroke.

  11. Adaptive Biasing Combined with Hamiltonian Replica Exchange to Improve Umbrella Sampling Free Energy Simulations.

    PubMed

    Zeller, Fabian; Zacharias, Martin

    2014-02-11

    The accurate calculation of potentials of mean force (PMFs) for ligand-receptor binding is one of the most important applications of molecular simulation techniques. Typically, the separation distance between ligand and receptor is chosen as a reaction coordinate, along which a PMF can be calculated with the aid of umbrella sampling (US) techniques. In addition, restraints can be applied on the relative position and orientation of the partner molecules to reduce the accessible phase space. An approach combining such phase space reduction with flattening of the free energy landscape and configurational exchanges has been developed, which significantly improves the convergence of PMF calculations in comparison with standard umbrella sampling. The free energy surface along the reaction coordinate is smoothened by iteratively adapting biasing potentials corresponding to previously calculated PMFs. Configurations are allowed to exchange between the umbrella simulation windows via the Hamiltonian replica exchange method. The application to a DNA molecule in complex with a minor groove binding ligand indicates significantly improved convergence and complete reversibility of the sampling along the pathway. The calculated binding free energy is in excellent agreement with experimental results. In contrast, the application of standard US resulted in large differences between PMFs calculated for association and dissociation pathways. The approach could be a useful alternative to standard US for computational studies on biomolecular recognition processes.
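
    For reference, a minimal sketch of the Hamiltonian replica exchange move between two neighbouring umbrella windows, assuming simple harmonic window biases (the adaptive PMF-flattening bias described above would be added to each window's potential; all parameter values are illustrative):

    ```python
    import numpy as np

    def bias(x, center, k):
        """Harmonic umbrella bias w(x) = 0.5*k*(x - center)^2 along the
        ligand-receptor separation coordinate x."""
        return 0.5 * k * (x - center) ** 2

    def try_exchange(x_i, x_j, c_i, c_j, k, beta, rng):
        """Metropolis acceptance for swapping configurations between windows
        i and j: only the bias energies differ between the two Hamiltonians."""
        delta = beta * (bias(x_j, c_i, k) + bias(x_i, c_j, k)
                        - bias(x_i, c_i, k) - bias(x_j, c_j, k))
        if delta <= 0.0:
            return True
        return rng.random() < np.exp(-delta)

    # Usage: try_exchange(3.1, 3.4, 3.0, 3.5, k=10.0, beta=1.68,
    #                     rng=np.random.default_rng(0))
    ```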

  12. Comparison of numerical and experimental results of the flow in the U9 Kaplan turbine model

    NASA Astrophysics Data System (ADS)

    Petit, O.; Mulu, B.; Nilsson, H.; Cervantes, M.

    2010-08-01

    The present work compares simulations made using the OpenFOAM CFD code with experimental measurements of the flow in the U9 Kaplan turbine model. Comparisons of the velocity profiles in the spiral casing and in the draft tube are presented. The U9 Kaplan turbine prototype located in Porjus and its model, located in Älvkarleby, Sweden, have curved inlet pipes that lead the flow to the spiral casing. At present, this curved pipe and its effect on the flow in the turbine are not taken into account when numerical simulations are performed at the design stage. To study the impact of the inlet pipe curvature on the flow in the turbine, and to get a better overview of the flow through the whole system, measurements were made on the 1:3.1 model of the U9 turbine. Previously published measurements were taken at the inlet of the spiral casing and just before the guide vanes, using the laser Doppler anemometry (LDA) technique. In the draft tube, a number of velocity profiles were also measured using LDA. The present work extends the experimental investigation with a horizontal section at the inlet of the draft tube. The experimental results are used to specify the inlet boundary condition for the numerical simulations in the draft tube, and to validate the computational results in both the spiral casing and the draft tube. The numerical simulations were performed using the standard k-ε model and a block-structured hexahedral wall-function mesh.

  13. Experimental investigation on emission reduction in neem oil biodiesel using selective catalytic reduction and catalytic converter techniques.

    PubMed

    Viswanathan, Karthickeyan

    2018-05-01

    In the present study, a non-edible seed oil, raw neem oil, was converted into biodiesel using a transesterification process. Two biodiesel blends were prepared, B25 (25% neem oil methyl ester with 75% diesel) and B50 (50% neem oil methyl ester with 50% diesel). A urea-based selective catalytic reduction (SCR) system with a catalytic converter (CC) was fitted in the exhaust tail pipe of the engine to reduce exhaust emissions. Initially, the engine was operated with diesel as the working fluid, followed by the biodiesel blends B25 and B50, to obtain baseline readings without SCR and CC. The same procedure was then repeated with the SCR and CC system to measure the emission reduction for diesel, B25, and B50. The experimental results revealed that the B25 blend showed higher brake thermal efficiency (BTE) and exhaust gas temperature (EGT) with lower brake-specific fuel consumption (BSFC) than the B50 blend at all loads. Compared with the biodiesel blends, diesel showed an increased BTE of 31.9% with a reduced BSFC of 0.29 kg/kWh at full load. A notable emission reduction was observed for all test fuels with the SCR and CC setup. At full load, B25 showed lower carbon monoxide (CO) of 0.09% volume, hydrocarbon (HC) of 24 ppm, smoke of 14 HSU, and oxides of nitrogen (NOx) of 735 ppm than diesel and B50 in the SCR and CC setup. On the whole, the engine with the SCR and CC setup showed better performance and emission characteristics than standard engine operation.

  14. Cutting thread at flexible endoscopy.

    PubMed

    Gong, F; Swain, P; Kadirkamanathan, S; Hepworth, C; Laufer, J; Shelton, J; Mills, T

    1996-12-01

    New thread-cutting techniques were developed for use at flexible endoscopy. A guillotine was designed to follow and cut thread at the endoscope tip. A new method was developed for guiding suture cutters. The efficacy of Nd:YAG laser cutting of threads was studied. Experimental and clinical experience with thread-cutting methods is presented. A 2.4 mm diameter flexible thread-cutting guillotine was constructed, featuring two lateral holes with sharp edges through which the sutures to be cut are passed. Standard suture cutters were guided by backloading thread through the cutters extracorporeally. A snare cutter was constructed to retrieve objects sewn to tissue. The efficacy and speed of the Nd:YAG laser in cutting twelve different threads were studied. The guillotine cut thread faster (p < 0.05) than standard suture cutters. Backloading thread shortened the time taken to cut thread (p < 0.001) compared with free-hand cutting. The Nd:YAG laser was ineffective in cutting uncolored threads and slower than mechanical cutters. Results of thread cutting in clinical studies using a sewing machine (n = 77 cutting episodes in 21 patients), in-vivo experiments (n = 156), and postsurgical cases (n = 15 over 15 years) are presented. New thread-cutting methods are described and their efficacy demonstrated in experimental and clinical studies.

  15. Possible safety hazards associated with the operation of the 0.3-m transonic cryogenic tunnel at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Webster, T. J.

    1982-01-01

    The 0.3-m Transonic Cryogenic Tunnel (TCT) at the NASA Langley Research Center was built in 1973 as a facility intended to be used for no more than 60 hours in order to verify the validity of the cryogenic wind tunnel concept at transonic speeds. The role of the 0.3-m TCT has gradually changed; now, after over 3000 hours of operation, it is classified as a major NASA research facility and, under the administration of the Experimental Techniques Branch, it is used extensively for the testing of airfoils at high Reynolds numbers and for the development of various technologies related to the efficient operation and use of cryogenic wind tunnels. The purpose of this report is to document the results of a recent safety analysis of the 0.3-m TCT facility. This analysis was made as part of an ongoing program with the Experimental Techniques Branch designed to ensure that the existing equipment and current operating procedures of the 0.3-m TCT facility are acceptable in terms of today's standards of safety for cryogenic systems.

  16. A Real-Time Interference Monitoring Technique for GNSS Based on a Twin Support Vector Machine Method.

    PubMed

    Li, Wutao; Huang, Zhigang; Lang, Rongling; Qin, Honglei; Zhou, Kai; Cao, Yongbin

    2016-03-04

    Interference can severely degrade the performance of Global Navigation Satellite System (GNSS) receivers. As the first step of any GNSS anti-interference measure, interference monitoring is essential. Since interference monitoring can be considered a classification problem, a real-time interference monitoring technique based on the Twin Support Vector Machine (TWSVM) is proposed in this paper. A TWSVM model is established and solved with the Least Squares Twin Support Vector Machine (LSTWSVM) algorithm. Interference monitoring indicators are analyzed to extract features from the interfered GNSS signals. The experimental results show that the chosen observations can be used as interference monitoring indicators. The interference monitoring performance of the proposed method is verified using the GPS L1 C/A code signal and compared with that of the standard SVM. The experimental results indicate that TWSVM-based interference monitoring is much faster than the conventional SVM. Furthermore, the training time of TWSVM is on the millisecond (ms) level and the monitoring time is on the microsecond (μs) level, which makes the proposed approach usable in practical interference monitoring applications.
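
    For reference, a sketch of least-squares twin SVM training in its usual linear form: two small linear systems instead of two quadratic programs, which is the source of the speed advantage reported above. The regularization values and ridge term are illustrative, and the GNSS feature extraction is not shown:

    ```python
    import numpy as np

    def lstwsvm_train(A, B, c1=1.0, c2=1.0, eps=1e-6):
        """Least Squares Twin SVM: one hyperplane per class, each obtained by
        solving a linear system. A, B: samples of the two classes, shape
        (n_samples, n_features). Returns z = [w; b] for each hyperplane."""
        G = np.hstack([A, np.ones((A.shape[0], 1))])   # [A e]
        H = np.hstack([B, np.ones((B.shape[0], 1))])   # [B e]
        ridge = eps * np.eye(G.shape[1])               # small term for stability
        z1 = -np.linalg.solve(H.T @ H + (1.0 / c1) * (G.T @ G) + ridge,
                              H.T @ np.ones(B.shape[0]))
        z2 = np.linalg.solve(G.T @ G + (1.0 / c2) * (H.T @ H) + ridge,
                             G.T @ np.ones(A.shape[0]))
        return z1, z2

    def lstwsvm_predict(x, z1, z2):
        """Assign x to the class whose hyperplane lies closer."""
        xa = np.append(x, 1.0)
        d1 = abs(xa @ z1) / np.linalg.norm(z1[:-1])
        d2 = abs(xa @ z2) / np.linalg.norm(z2[:-1])
        return +1 if d1 <= d2 else -1
    ```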

  17. Fragmentation modeling of a resin bonded sand

    NASA Astrophysics Data System (ADS)

    Hilth, William; Ryckelynck, David

    2017-06-01

    Cemented sands exhibit a complex mechanical behavior that can lead to sophisticated models with numerous parameters lacking real physical meaning. However, using a rather simple generalized critical-state bonded soil model has proven to be a relevant compromise between easy calibration and good results. The constitutive model considers a non-associated elasto-plastic formulation within the critical-state framework. The calibration procedure, using standard laboratory tests, is complemented by the study of a uniaxial compression test observed by tomography. Using finite element simulations, this test is simulated considering a non-homogeneous 3D medium. Tomography of the compression sample gives access to 3D displacement fields via image correlation techniques. Unfortunately, these fields have missing experimental data because of the low resolution of the correlations at low displacement magnitudes. We propose a recovery method that reconstructs full 3D displacement fields and 2D boundary displacement fields. These fields are mandatory for the calibration of the constitutive parameters using 3D finite element simulations. The proposed recovery technique is based on a singular value decomposition of the available experimental data. This calibration protocol enables an accurate prediction of the fragmentation of the specimen.
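
    One common way to realize such an SVD-based recovery is iterative low-rank imputation; the sketch below follows that reading (the rank, iteration count, and data layout are assumptions, not the authors' exact algorithm):

    ```python
    import numpy as np

    def svd_recover(fields, mask, rank=3, n_iter=100):
        """Recover missing displacement data by iterative truncated-SVD
        imputation. fields: (n_snapshots, n_points) displacement matrix;
        entries where mask is False are missing (may be NaN) and are
        replaced by their low-rank reconstruction."""
        X = np.where(mask, fields, 0.0)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            X = np.where(mask, fields, low_rank)  # keep measured values fixed
        return X
    ```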

  18. Experimental cross-correlation nitrogen Q-branch CARS thermometry in a spark ignition engine

    NASA Astrophysics Data System (ADS)

    Lockett, R. D.; Ball, D.; Robertson, G. N.

    2013-07-01

    A purely experimental technique was employed to derive temperatures from nitrogen Q-branch Coherent Anti-Stokes Raman Scattering (CARS) spectra obtained in a high-pressure, high-temperature environment (a spark ignition Otto engine). This was done to obviate any errors arising from deficiencies in the spectral scaling laws commonly used to represent nitrogen Q-branch CARS spectra at high pressure. The spectra obtained in the engine were compared with spectra obtained in a calibrated high-pressure, high-temperature cell, using direct cross-correlation in place of the minimisation of sums of squares of residuals. The technique is demonstrated through the measurement of air temperature as a function of crankshaft angle inside the cylinder of a motored single-cylinder Ricardo E6 research engine, followed by the measurement of fuel-air mixture temperatures obtained during the compression stroke in a knocking Ricardo E6 engine. A standard CARS programme (Sandia's CARSFIT) was employed to calibrate the altered non-resonant background contribution to the CARS spectra caused by the change in the mole fraction of nitrogen in the unburned fuel-air mixture. The compression temperature profiles were extrapolated in order to predict the auto-ignition temperatures.
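
    The matching step amounts to picking, from a library of calibration-cell spectra, the temperature whose spectrum correlates best with the measured one. A minimal sketch (zero-lag normalized correlation is an assumption; the paper's exact correlation procedure may differ):

    ```python
    import numpy as np

    def best_match_temperature(measured, library):
        """Pick the temperature whose calibration-cell spectrum correlates
        best with the measured CARS spectrum. library: dict mapping
        temperature (K) to a spectrum array; all spectra are assumed to be
        sampled on the same wavenumber grid as `measured`."""
        def ncc(a, b):  # normalized cross-correlation at zero lag
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            return float(np.mean(a * b))
        return max(library, key=lambda T: ncc(measured, library[T]))
    ```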

  19. A Real-Time Interference Monitoring Technique for GNSS Based on a Twin Support Vector Machine Method

    PubMed Central

    Li, Wutao; Huang, Zhigang; Lang, Rongling; Qin, Honglei; Zhou, Kai; Cao, Yongbin

    2016-01-01

    Interference can severely degrade the performance of Global Navigation Satellite System (GNSS) receivers. As the first step of any GNSS anti-interference measure, interference monitoring is essential. Since interference monitoring can be considered a classification problem, a real-time interference monitoring technique based on the Twin Support Vector Machine (TWSVM) is proposed in this paper. A TWSVM model is established and solved with the Least Squares Twin Support Vector Machine (LSTWSVM) algorithm. Interference monitoring indicators are analyzed to extract features from the interfered GNSS signals. The experimental results show that the chosen observations can be used as interference monitoring indicators. The interference monitoring performance of the proposed method is verified using the GPS L1 C/A code signal and compared with that of the standard SVM. The experimental results indicate that TWSVM-based interference monitoring is much faster than the conventional SVM. Furthermore, the training time of TWSVM is on the millisecond (ms) level and the monitoring time is on the microsecond (μs) level, which makes the proposed approach usable in practical interference monitoring applications. PMID:26959020

  20. Teaching self-protection to children using television techniques.

    PubMed Central

    Poche, C; Yoder, P; Miltenberger, R

    1988-01-01

    This study compared the effectiveness of a videotape training program with other methods of teaching children self-protection to prevent child abduction. Subjects were kindergarten and first-grade students. Four experimental conditions were presented: videotape with behavior rehearsal, videotape only, a standard safety program, and no training. Acquisition of self-protective behaviors was measured at posttraining and follow-up by having confederate adults entice the children near their schools and homes. Results revealed that the videotape program with behavior rehearsal was highly effective in teaching children safe responses to potential abductors. The standard safety program was effective with fewer than half of the children. Three fourths of the children who received no training immediately agreed to go with the confederate suspects. The videotape program can be easily used with groups of young children in a classroom setting. PMID:3198545

  1. Principal Components Analysis on the spectral Bidirectional Reflectance Distribution Function of ceramic colour standards.

    PubMed

    Ferrero, A; Campos, J; Rabal, A M; Pons, A; Hernanz, M L; Corróns, A

    2011-09-26

    The Bidirectional Reflectance Distribution Function (BRDF) is essential for characterizing an object's reflectance properties. This function depends both on the illumination-observation geometry and on the wavelength, so comprehensive interpretation of the data becomes rather complex. In this work we assess the use of the multivariate analysis technique of Principal Components Analysis (PCA) applied to the experimental BRDF data of a ceramic colour standard. It will be shown that the result may be linked to the various reflection processes occurring on the surface, assuming that the incoming spectral distribution is affected by each of these processes in a specific manner. Moreover, this procedure facilitates the task of interpolating a series of BRDF measurements obtained for a particular sample. © 2011 Optical Society of America
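
    As a sketch of the analysis, PCA over a matrix whose rows are illumination-observation geometries and whose columns are wavelengths (this data layout is an assumption) separates the data into a few principal spectral components plus per-geometry scores, which is also what makes interpolation over geometry straightforward:

    ```python
    import numpy as np

    def brdf_pca(brdf, n_components=3):
        """PCA of spectral BRDF data via SVD. brdf: 2-D array, rows =
        illumination/observation geometries, columns = wavelengths.
        Returns the mean spectrum, the principal spectral components, and
        the per-geometry scores (weights) used for interpretation and
        interpolation."""
        mean = brdf.mean(axis=0)
        U, s, Vt = np.linalg.svd(brdf - mean, full_matrices=False)
        components = Vt[:n_components]                    # spectral basis
        scores = U[:, :n_components] * s[:n_components]   # geometry weights
        return mean, components, scores
    ```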

  2. Preliminary Analysis of Photoreading

    NASA Technical Reports Server (NTRS)

    McNamara, Danielle S.

    2000-01-01

    The purpose of this project was to provide a preliminary analysis of a reading strategy called PhotoReading. PhotoReading is a technique developed by Paul Scheele that claims to increase reading rate to 25,000 words per minute (Scheele, 1993). PhotoReading itself involves entering a "relaxed state" and looking at, but not reading, each page of a text for a brief moment (about 1 to 2 seconds). While this technique has received attention in the popular press, there had been no objective examinations of the technique's validity. To examine the effectiveness of PhotoReading, the principal investigator (i.e., trainee) participated in a PhotoReading workshop to learn the technique. Parallel versions of two standardized and three experimenter-created reading comprehension tests were administered to the trainee and an expert user of the PhotoReading technique to compare the use of normal reading strategies and the PhotoReading technique by both readers. The results for all measures yielded no benefits of using the PhotoReading technique. The extremely rapid reading rates claimed by PhotoReaders were not observed; indeed, the reading rates were generally comparable to those for normal reading. Moreover, the PhotoReading expert generally showed an increase in reading time when using the PhotoReading technique in comparison to when using normal reading strategies to process text. This increase in reading time when using PhotoReading was accompanied by a decrease in text comprehension.

  3. Development of a Fluid Structures Interaction Test Technique for Fabrics

    NASA Technical Reports Server (NTRS)

    Zilliac, Gregory G.; Heineck, James T.; Schairer, Edward T.; Mosher, Robert N.; Garbeff, Theodore Joseph

    2012-01-01

    Application of fluid-structure interaction (FSI) computational techniques to configurations of interest to the entry, descent and landing (EDL) community is limited by two factors: limited characterization of the material properties of the fabrics of interest, and insufficient experimental data to validate the FSI codes. Recently, ILC Dover Inc. performed standard tests to characterize the static stress-strain response of four candidate fabrics for use in EDL applications. The objective of the tests described here is to address the need for an FSI dataset for CFD validation purposes. To reach this objective, the structural response of fabrics was measured in a very simple aerodynamic environment with well-controlled boundary conditions. Two test series were undertaken: the first covered a range of tunnel conditions, and the second focused on conditions that resulted in fabric panel buckling.

  4. Solid immersion terahertz imaging with sub-wavelength resolution

    NASA Astrophysics Data System (ADS)

    Chernomyrdin, Nikita V.; Schadko, Aleksander O.; Lebedev, Sergey P.; Tolstoguzov, Viktor L.; Kurlov, Vladimir N.; Reshetov, Igor V.; Spektor, Igor E.; Skorobogatiy, Maksim; Yurchenko, Stanislav O.; Zaytsev, Kirill I.

    2017-05-01

    We have developed a method of solid immersion THz imaging: a non-contact technique employing a THz beam focused into the evanescent-field volume, allowing a strong reduction in the dimensions of the THz caustic. We have combined numerical simulations and experimental studies to demonstrate a sub-wavelength 0.35λ0 resolution of the solid immersion THz imaging system, compared with the 0.85λ0 resolution of a standard imaging system employing only an aspherical singlet. We discuss the prospects of using the developed technique in various branches of THz science and technology, namely, for THz measurements of solid-state materials featuring sub-wavelength variations of physical properties, for highly accurate mapping of healthy and pathological tissues in THz medical diagnosis, for detection of sub-wavelength defects in THz non-destructive sensing, and for enhancement of THz nonlinear effects.

  5. Novel Oversampling Technique for Improving Signal-to-Quantization Noise Ratio on Accelerometer-Based Smart Jerk Sensors in CNC Applications.

    PubMed

    Rangel-Magdaleno, Jose J; Romero-Troncoso, Rene J; Osornio-Rios, Roque A; Cabal-Yepez, Eduardo

    2009-01-01

    Jerk, defined as the first derivative of acceleration, has become a major monitoring concern in computerized numerically controlled (CNC) machines. Several works highlight the necessity of measuring jerk reliably in order to improve production processes. At present, jerk is computed by finite differences of the acceleration signal at the Nyquist rate, which leads to a low signal-to-quantization-noise ratio (SQNR) in the estimate. The novelty of this work is the development of a smart sensor for jerk monitoring from a standard accelerometer with improved SQNR. The proposal is based on oversampling techniques, which give a better estimation of jerk than that produced by a Nyquist-rate differentiator. Simulations and experimental results are presented to show the overall methodology performance.
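
    The core idea can be sketched in a few lines: average blocks of oversampled acceleration before taking the finite difference (for white quantization noise, averaging N samples improves SQNR by roughly 3 dB per doubling of N). The sample rates and oversampling ratio below are assumptions, not the paper's design values:

    ```python
    import numpy as np

    def jerk_from_oversampled_accel(accel, fs_out, osr):
        """Estimate jerk from acceleration sampled at fs_out*osr Hz.
        Block-averaging osr samples suppresses quantization noise before the
        finite-difference derivative, unlike a Nyquist-rate differentiator.
        Returns the jerk signal at fs_out Hz."""
        n = len(accel) // osr * osr
        averaged = accel[:n].reshape(-1, osr).mean(axis=1)  # decimate by osr
        return np.diff(averaged) * fs_out  # d(accel)/dt at the output rate
    ```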

  6. Acoustic Parametric Array for Identifying Standoff Targets

    NASA Astrophysics Data System (ADS)

    Hinders, M. K.; Rudd, K. E.

    2010-02-01

    An integrated simulation method for investigating nonlinear sound beams and 3D acoustic scattering from any combination of complicated objects is presented. A standard finite-difference simulation method is used to model pulsed nonlinear sound propagation from a source to a scattering target via the KZK equation. Then, a parallel 3D acoustic simulation method based on the finite integration technique is used to model the acoustic wave interaction with the target. Any combination of objects and material layers can be placed into the 3D simulation space to study the resulting interaction. Several example simulations are presented to demonstrate the simulation method and 3D visualization techniques. The combined simulation method is validated by comparing experimental and simulation data and a demonstration of how this combined simulation method assisted in the development of a nonlinear acoustic concealed weapons detector is also presented.

  7. Soft magnetic tweezers: a proof of principle.

    PubMed

    Mosconi, Francesco; Allemand, Jean François; Croquette, Vincent

    2011-03-01

    We present here the principle of soft magnetic tweezers, which improve on traditional magnetic tweezers by allowing the simultaneous application and measurement of an arbitrary torque on a deoxyribonucleic acid (DNA) molecule. They take advantage of a nonlinear coupling regime that appears when a fast rotating magnetic field is applied to a superparamagnetic bead immersed in a viscous fluid. In this work, we present the development of the technique and compare it with other techniques capable of measuring the torque applied to a DNA molecule. In this proof of principle, we use standard electromagnets to perform our experiments. Despite technical difficulties related to the present implementation of these electromagnets, the agreement of the measurements with previous experiments is remarkable. Finally, we propose a simple way to modify the experimental design of the electromagnets that should bring the performance of the device to a competitive level.

  8. Non-thermal plasma technologies: new tools for bio-decontamination.

    PubMed

    Moreau, M; Orange, N; Feuilloley, M G J

    2008-01-01

    Bacterial control and decontamination are crucial to industrial safety assessments. However, most recently developed materials are not compatible with standard heat sterilization treatments. Advanced oxidation processes, and particularly non-thermal plasmas, are emerging and promising technologies for sanitation because they are both efficient and cheap. The applications of non-thermal plasma to bacterial control remain poorly known for several reasons: the technique was not developed for biological applications, and most of the literature is in the fields of physics and chemistry. Moreover, the diversity of the devices and the complexity of the plasmas make any general evaluation of the potential of the technique difficult. Finally, no experimental equipment for non-thermal plasma sterilization is commercially available and reference articles for microbiologists are rare. The present review aims to give an overview of the principles of action and applications of plasma technologies in biodecontamination.

  9. Taking the lead from our colleagues in medical education: the use of images of the in-vivo setting in teaching concepts of pharmaceutical science.

    PubMed

    Curley, Louise E; Kennedy, Julia; Hinton, Jordan; Mirjalili, Ali; Svirskis, Darren

    2017-01-01

    Despite pharmaceutical sciences being a core component of pharmacy curricula, few published studies have focussed on innovative methodologies to teach the content. This commentary identifies imaging techniques which can visualise oral dosage forms in-vivo and observe formulation disintegration in order to achieve a better understanding of in-vivo performance. Images formed through these techniques can provide students with a deeper appreciation of the fate of oral formulations in the body compared to standard disintegration and dissolution testing, which is conducted in-vitro. Such images which represent the in-vivo setting can be used in teaching to give context to both theory and experimental work, thereby increasing student understanding and enabling teaching of pharmaceutical sciences supporting students to correlate in-vitro and in-vivo processes.

  10. Weak Value Amplification is Suboptimal for Estimation and Detection

    NASA Astrophysics Data System (ADS)

    Ferrie, Christopher; Combes, Joshua

    2014-01-01

    We show by using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of single-parameter estimation and signal detection. Specifically, we prove that postselection, a necessary ingredient for weak value amplification, decreases estimation accuracy and, moreover, that arranging for anomalously large weak values is a suboptimal strategy. In doing so, we explicitly provide the optimal estimator, which in turn allows us to identify the optimal experimental arrangement to be the one in which all outcomes have equal weak values (all as small as possible) and the initial state of the meter is the eigenstate of the square of the system observable with maximal eigenvalue. Finally, we give precise quantitative conditions for when weak measurement (measurement without postselection or anomalously large weak values) can mitigate the effect of uncharacterized technical noise in estimation.

  11. Two-Photon Excitation, Fluorescence Microscopy, and Quantitative Measurement of Two-Photon Absorption Cross Sections

    NASA Astrophysics Data System (ADS)

    DeArmond, Fredrick Michael

    As optical microscopy techniques continue to improve, most notably with the development of super-resolution optical microscopy, which garnered the Nobel Prize in Chemistry in 2014, renewed emphasis has been placed on the development and use of fluorescence microscopy techniques. Of particular note is a renewed interest in multiphoton excitation due to a number of inherent properties of the technique, including simplified optical filtering, increased sample penetration, and inherently confocal operation. With this renewed interest in multiphoton fluorescence microscopy comes an increased demand for robust nonlinear fluorescent markers and characterization of the associated tool set. These factors led to an experimental setup enabling a systematized approach to identifying and characterizing the properties of fluorescent probes, in the hope that the tool set will provide researchers with additional information to guide their efforts in developing novel fluorophores suitable for use in advanced optical microscopy techniques, as well as identifying trends for their synthesis. The hardware was set up around a previously developed software control system. Three experimental tool sets were set up, characterized, and applied over the course of this work: a scanning multiphoton fluorescence microscope with single-molecule sensitivity, an interferometric autocorrelator for precise determination of the bandwidth and pulse width of the ultrafast titanium-sapphire excitation source, and a simplified fluorescence microscope for the measurement of two-photon absorption cross sections. Resulting values for two-photon absorption cross sections and two-photon absorption action cross sections are presented for two standardized fluorophores, four commercially available fluorophores, and ten novel fluorophores, together with their absorption and emission spectra.

  12. The Effects of micro Aluminum fillers In Epoxy resin on the thermal conductivity

    NASA Astrophysics Data System (ADS)

    Jasim, Kareem A.; Fadhil, Rihab N.

    2018-05-01

    A hand lay-up molding method was used to prepare epoxy/aluminum composites, with epoxy resin (EP) as the matrix reinforced by aluminum particles. The preparation technique included preparing a carousel mold with different filler weight fractions (0, 0.05, 0.15, 0.25, 0.35, and 0.45). Standard specimens (30 mm in diameter) were prepared for the thermal conductivity tests. The experimental thermal conductivity (k) results for the EP/aluminum composites show that k increases with increasing aluminum content, reaching a maximum value of 1.4595 W/(m·K).

  13. Earthquake Building Damage Mapping Based on Feature Analyzing Method from Synthetic Aperture Radar Data

    NASA Astrophysics Data System (ADS)

    An, L.; Zhang, J.; Gong, L.

    2018-04-01

    Playing an important role in gathering information on infrastructure damage, Synthetic Aperture Radar (SAR) remote sensing is a useful tool for monitoring earthquake disasters. With the wide application of this technique, a standard method, comparing post-seismic to pre-seismic data, has become common. However, multi-temporal SAR processing is not always achievable, so developing a method for building damage detection that uses only post-seismic data is of great importance. In this paper, the authors initiate an experimental investigation to establish an object-based feature-analysis classification method for building damage recognition.

  14. TTI (Texas Transportation Institute) track/dynamometer study. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reineman, M.; Thompson, G.

    1983-01-01

    Seven passenger cars and one light truck were operated over the EPA urban and highway driving cycles to compare fuel economy measurements obtained on a test track with the fuel economy results obtained on a chassis dynamometer. The test program was designed to duplicate, as closely as possible, the track force loading (as determined by standard EPA road coastdown procedures) on the dynamometer. Experimental parameters which were investigated included loading differences between front- and rear-wheel drive vehicles, volumetric versus carbon balance fuel measurement techniques, coupled versus uncoupled roll dynamometer tests, and curved track versus straight track coastdowns.

  15. Advanced Satellite Research Project: SCAR Research Database. Bibliographic analysis

    NASA Technical Reports Server (NTRS)

    Pelton, Joseph N.

    1991-01-01

    The literature search was provided to locate and analyze the most recent literature relevant to the research. This was done by cross-relating books, articles, monographs, and journals on the following topics: (1) experimental systems - the Advanced Communications Technology Satellite (ACTS); and (2) Integrated Services Digital Network (ISDN) and advanced communication techniques (ISDN and satellites, ISDN standards, broadband ISDN, frame relay and switching, computer networks and satellites, satellite orbits and technology, satellite transmission quality, and network configuration). A bibliographic essay on the literature citations and articles reviewed during the literature search task is provided.

  16. [Histochemical stains for minerals by hematoxylin-lake method].

    PubMed

    Miyagawa, Makoto

    2013-04-01

    The present study was undertaken to establish an experimental animal model using histological staining methods for minerals. After intraperitoneal injections of minerals, precipitates deposited on the surface of the liver. Liver tissues were fixed in paraformaldehyde, embedded in paraffin, and cut into thin sections, which were used as mineral-containing standard sections. Several reagents for histological staining and spectrophotometry of minerals were applied in both test-tube experiments and stainings of tissue sections to test for minerals. Hematoxylin-lake was found capable of staining minerals in tissue. A simple technique for the light microscopic detection of minerals is described.

  17. Vacuum infusion manufacturing and experimental characterization of Kevlar/epoxy composites

    NASA Astrophysics Data System (ADS)

    Ricciardi, M. R.; Giordano, M.; Langella, A.; Nele, L.; Antonucci, V.

    2014-05-01

    Epoxy/Kevlar composites have been manufactured by the conventional Vacuum Infusion process and by the Pulse Infusion technique. Pulse Infusion allows control of the pressure of the vacuum bag on the dry fiber reinforcement by using a properly designed pressure distributor that induces a pulsed transverse action and promotes through-thickness resin flow. The resulting composite panels were mechanically characterized by tensile and short-beam shear tests, according to the ASTM D3039 and ASTM D2344/D2344M standards respectively, in order to investigate the effect of Pulse Infusion on the tensile strength and interlaminar shear strength (ILSS).

  18. Recommendations for research design of telehealth studies.

    PubMed

    Chumbler, Neale R; Kobb, Rita; Brennan, David M; Rabinowitz, Terry

    2008-11-01

    Properly designed randomized controlled trials (RCTs) are the gold standard to use when examining the effectiveness of telehealth interventions on clinical outcomes. Some published telehealth studies have employed well-designed RCTs. However, such methods are not always feasible and practical in particular settings. This white paper addresses not only the need for properly designed RCTs, but also offers alternative research designs, such as quasi-experimental designs, and statistical techniques that can be employed to rigorously assess the effectiveness of telehealth studies. This paper further offers design and measurement recommendations aimed at and relevant to administrative decision-makers, policymakers, and practicing clinicians.

  19. Vacuum infusion manufacturing and experimental characterization of Kevlar/epoxy composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricciardi, M. R.; Giordano, M.; Antonucci, V.

    2014-05-15

    Epoxy/Kevlar composites have been manufactured by the conventional Vacuum Infusion process and by the Pulse Infusion technique. Pulse Infusion allows control of the pressure of the vacuum bag on the dry fiber reinforcement by using a properly designed pressure distributor that induces a pulsed transverse action and promotes through-thickness resin flow. The resulting composite panels were mechanically characterized by tensile and short-beam shear tests, according to the ASTM D3039 and ASTM D2344/D2344M standards respectively, in order to investigate the effect of Pulse Infusion on the tensile strength and interlaminar shear strength (ILSS).

  20. Recent applications of boxed molecular dynamics: a simple multiscale technique for atomistic simulations.

    PubMed

    Booth, Jonathan; Vazquez, Saulo; Martinez-Nunez, Emilio; Marks, Alison; Rodgers, Jeff; Glowacki, David R; Shalashilin, Dmitrii V

    2014-08-06

    In this paper, we briefly review the boxed molecular dynamics (BXD) method, which allows analysis of thermodynamics and kinetics in complicated molecular systems. BXD is a multiscale technique in which thermodynamics and long-time dynamics are recovered from a set of short-time simulations. We review previous applications of BXD to peptide cyclization, solution-phase organic reaction dynamics and desorption of ions from self-assembled monolayers (SAMs). We also report preliminary results of simulations of diamond etching mechanisms and protein unfolding in atomic force microscopy experiments. The latter demonstrate a correlation between the protein's structural motifs and its potential of mean force. Simulation of these processes by standard molecular dynamics (MD) is typically not possible, because the experimental time scales are very long. However, BXD yields well-converged and physically meaningful results. Compared with other methods of accelerated MD, our BXD approach is very simple; it is easy to implement, and it provides an integrated approach for simultaneously obtaining both thermodynamics and kinetics. It also provides a strategy for obtaining statistically meaningful dynamical results in regions of configuration space that standard MD approaches would visit only very rarely.
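
    In outline, BXD confines the trajectory between boundaries ("boxes") along a reaction coordinate and inverts the velocity whenever a boundary is hit; the accumulated hit statistics then yield box-to-box rate constants and free-energy differences. A heavily simplified 1-D sketch (a real implementation inverts the velocity component along the coordinate gradient and couples to full MD forces):

    ```python
    def bxd_step(x, v, dt, box_lo, box_hi, hits):
        """One boxed-MD step for a 1-D coordinate x with velocity v.
        If the proposed move leaves the current box, reflect at the boundary
        and record the hit; hit counts at the two boundaries later give the
        box-to-box rates from which free energies are reconstructed."""
        x_new = x + v * dt
        if x_new < box_lo:
            hits["lo"] += 1
            return x, -v        # velocity inversion at the lower boundary
        if x_new > box_hi:
            hits["hi"] += 1
            return x, -v        # velocity inversion at the upper boundary
        return x_new, v

    # Usage: hits = {"lo": 0, "hi": 0}; x, v = 0.5, 0.1
    # for _ in range(1000): x, v = bxd_step(x, v, 0.01, 0.0, 1.0, hits)
    ```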

  1. Optimized bit extraction using distortion modeling in the scalable extension of H.264/AVC.

    PubMed

    Maani, Ehsan; Katsaggelos, Aggelos K

    2009-09-01

    The newly adopted scalable extension of H.264/AVC video coding standard (SVC) demonstrates significant improvements in coding efficiency in addition to an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. Due to the complicated hierarchical prediction structure of the SVC and the concept of key pictures, content-aware rate adaptation of SVC bit streams to intermediate bit rates is a nontrivial task. The concept of quality layers has been introduced in the design of the SVC to allow for fast content-aware prioritized rate adaptation. However, existing quality layer assignment methods are suboptimal and do not consider all network abstraction layer (NAL) units from different layers for the optimization. In this paper, we first propose a technique to accurately and efficiently estimate the quality degradation resulting from discarding an arbitrary number of NAL units from multiple layers of a bitstream by properly taking drift into account. Then, we utilize this distortion estimation technique to assign quality layers to NAL units for a more efficient extraction. Experimental results show that a significant gain can be achieved by the proposed scheme.
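
    In spirit, the extraction problem is a prioritization: rank NAL units by estimated distortion reduction per bit (the paper's contribution is making that estimate drift-aware across layers) and then group the ranking into quality layers. A simplified sketch; the field names and the even split into layers are assumptions, not the paper's algorithm:

    ```python
    def assign_quality_layers(nal_units, n_layers=8):
        """Greedy quality-layer assignment. nal_units: list of dicts with
        'dist_reduction' (estimated distortion decrease if kept) and 'bits'
        keys (hypothetical fields). Units with the best distortion-per-bit
        ratio land in the lowest-numbered (highest-priority) layers."""
        ranked = sorted(nal_units,
                        key=lambda u: u["dist_reduction"] / u["bits"],
                        reverse=True)
        per_layer = max(1, len(ranked) // n_layers)
        for i, unit in enumerate(ranked):
            unit["quality_layer"] = min(i // per_layer, n_layers - 1)
        return ranked
    ```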

  2. Robotic Anterior and Midline Skull Base Surgery: Preclinical Investigations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Malley, Bert W.; Weinstein, Gregory S.

    Purpose: To develop a minimally invasive surgical technique to access the midline and anterior skull base using the optical and technical advantages of robotic surgical instrumentation. Methods and Materials: Ten experimental procedures focusing on approaches to the nasopharynx, clivus, sphenoid, pituitary sella, and suprasellar regions were performed on one cadaver and one live mongrel dog. Both the cadaver and canine procedures were performed in an approved training facility using the da Vinci Surgical Robot. For the canine experiments, a transoral robotic surgery (TORS) approach was used, and for the cadaver a newly developed combined cervical-transoral robotic surgery (C-TORS) approach was investigated and compared with standard TORS. The ability to access and dissect tissues within the various areas of the midline and anterior skull base was evaluated, and techniques to enhance visualization and instrumentation were developed. Results: Standard TORS approaches did not provide adequate access to the midline and anterior skull base; however, the newly developed C-TORS approach was successful in providing surgical access to these regions of the skull base. Conclusion: Robotic surgery is an exciting minimally invasive approach to the skull base that warrants continued preclinical investigation and development.

  3. Reliability evaluation of high-performance, low-power FinFET standard cells based on mixed RBB/FBB technique

    NASA Astrophysics Data System (ADS)

    Wang, Tian; Cui, Xiaoxin; Ni, Yewen; Liao, Kai; Liao, Nan; Yu, Dunshan; Cui, Xiaole

    2017-04-01

    With shrinking transistor feature sizes, the fin-type field-effect transistor (FinFET) has become the most promising option in low-power circuit design due to its superior capability to suppress leakage. To support a VLSI digital system flow based on logic synthesis, we have designed an optimized high-performance, low-power FinFET standard cell library that employs the mixed FBB/RBB technique in the existing stacked structure of each cell. This paper presents the reliability evaluation of the optimized cells under process and operating-environment variations based on Monte Carlo analysis. The variations are modelled with Gaussian distributions of the device parameters, and 10000 sweeps are conducted in the simulation to obtain the statistical properties of the worst-case delay and input-dependent leakage for each cell. For comparison, a set of non-optimal cells that adopt the same topology without the mixed biasing technique is also generated. Experimental results show that the optimized cells achieve standard deviation reductions of up to 39.1% and 30.7% in worst-case delay and input-dependent leakage, respectively, while the normalized deviation shrinks by up to 98.37% and 24.13%, respectively, demonstrating that the optimized cells are less sensitive to variability and more reliable. Project supported by the National Natural Science Foundation of China (No. 61306040), the State Key Development Program for Basic Research of China (No. 2015CB057201), the Beijing Natural Science Foundation (No. 4152020), and the Natural Science Foundation of Guangdong Province, China (No. 2015A030313147).

  4. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information on the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
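
    A minimal sketch of such a hybrid global-plus-local scheme, using SciPy's differential evolution as a stand-in for the GA and SciPy's truncated-Newton ("TNC") minimizer for the local refinement; the forward model and misfit are placeholders for the user's groundwater solver, not the paper's code:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    def misfit(params, heads_obs, forward_model):
        """Sum of squared differences between observed and simulated heads."""
        return float(np.sum((forward_model(params) - heads_obs) ** 2))

    def hybrid_estimate(heads_obs, forward_model, bounds):
        """Global evolutionary search seeds a truncated-Newton refinement,
        mirroring the paper's GA + truncated-Newton strategy."""
        coarse = differential_evolution(misfit, bounds,
                                        args=(heads_obs, forward_model),
                                        maxiter=50, seed=0)
        fine = minimize(misfit, coarse.x, args=(heads_obs, forward_model),
                        method="TNC", bounds=bounds)
        return fine.x
    ```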

  5. Techniques in Experimental Mechanics Applicable to Forest Products Research

    Treesearch

    Leslie H. Groom; Audrey G. Zink

    1994-01-01

    The title of this publication, Techniques in Experimental Mechanics Applicable to Forest Products Research, is the theme of this plenary session from the 1994 Annual Meeting of the Forest Products Society (FPS). Although this session focused on experimental techniques that can be of assistance to researchers in the field of forest products, it is hoped that the...

  6. Negative refraction angular characterization in one-dimensional photonic crystals.

    PubMed

    Lugo, Jesus Eduardo; Doti, Rafael; Faubert, Jocelyn

    2011-04-06

    Photonic crystals are artificial structures that have periodic dielectric components with different refractive indices. Under certain conditions, they abnormally refract the light, a phenomenon called negative refraction. Here we experimentally characterize negative refraction in a one-dimensional photonic crystal structure near the low-frequency edge of the fourth photonic bandgap. We compare the experimental results with current theory and with a group-velocity-based theory developed here. We also analytically derived the negative refraction correctness condition that gives the angular region where negative refraction occurs. By using standard photonic techniques we experimentally determined the relationship between incidence and negative refraction angles and found the negative refraction range by applying the correctness condition. In order to compare both theories with the experimental results, an output refraction correction was utilized. The correction uses Snell's law and an effective refractive index based on two effective dielectric constants. We found good agreement between experiment and both theories in the negative refraction zone. Since both theories and the experimental observations agreed well in the negative refraction region, we can use both negative refraction theories plus the output correction to predict negative refraction angles. This can be very useful from a practical point of view for space-filtering applications such as a photonic demultiplexer or for sensing applications.
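
    The output-refraction correction mentioned here can be illustrated with Snell's law at the exit face; the sketch below maps an internal (negative) refraction angle to the externally observed angle through an assumed effective index. The index and angle values are invented for the example, not the paper's measured parameters.

    ```python
    import numpy as np

    def external_angle(theta_internal_deg, n_eff):
        """Apply Snell's law at the output face: n_eff*sin(t_in) = sin(t_out)."""
        s = n_eff * np.sin(np.radians(theta_internal_deg))
        if abs(s) > 1.0:
            raise ValueError("total internal reflection; no transmitted beam")
        return np.degrees(np.arcsin(s))

    print(external_angle(-12.0, n_eff=1.8))   # illustrative: about -22 degrees
    ```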

  7. Negative Refraction Angular Characterization in One-Dimensional Photonic Crystals

    PubMed Central

    Lugo, Jesus Eduardo; Doti, Rafael; Faubert, Jocelyn

    2011-01-01

    Background Photonic crystals are artificial structures that have periodic dielectric components with different refractive indices. Under certain conditions, they abnormally refract the light, a phenomenon called negative refraction. Here we experimentally characterize negative refraction in a one-dimensional photonic crystal structure near the low-frequency edge of the fourth photonic bandgap. We compare the experimental results with current theory and with a group-velocity-based theory developed here. We also analytically derived the negative refraction correctness condition that gives the angular region where negative refraction occurs. Methodology/Principal Findings By using standard photonic techniques we experimentally determined the relationship between incidence and negative refraction angles and found the negative refraction range by applying the correctness condition. In order to compare both theories with the experimental results, an output refraction correction was utilized. The correction uses Snell's law and an effective refractive index based on two effective dielectric constants. We found good agreement between experiment and both theories in the negative refraction zone. Conclusions/Significance Since both theories and the experimental observations agreed well in the negative refraction region, we can use both negative refraction theories plus the output correction to predict negative refraction angles. This can be very useful from a practical point of view for space-filtering applications such as a photonic demultiplexer or for sensing applications. PMID:21494332

  8. Experimental equipment for an advanced ISOL facility [Isotope Separation On-Line Facility]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baktash, C.; Lee, I.Y.; Rehm, K.E.

    This report summarizes the proceedings and recommendations of the Workshop on the Experimental Equipment for an Advanced ISOL Facility, which was held at Lawrence Berkeley National Laboratory on July 22--25, 1998. The purpose of this workshop was to discuss the performance requirements, manpower and cost estimates, as well as a schedule for the experimental equipment needed to fully exploit the new physics which can be studied at an advanced ISOL facility. An overview of the new physics opportunities that would be provided by such a facility has been presented in the White Paper that was issued following the Columbus Meeting. The reactions and experimental techniques discussed in the Columbus White Paper served as a guideline for the formulation of the detector needs at the Berkeley Workshop. As outlined, a new ISOL facility with intense, high-quality beams of radioactive nuclei would provide exciting new research opportunities in the areas of: the nature of nucleonic matter; the origin of the elements; and tests of the Standard Model. After an introductory section, the following equipment is discussed: gamma-ray detectors; recoil separators; magnetic spectrographs; particle detectors; targets; and apparatus using non-accelerated beams.

  9. Ambient Vibration Tests of an Arch Dam with Different Reservoir Water Levels: Experimental Results and Comparison with Finite Element Modelling

    PubMed Central

    Ranieri, Gaetano

    2014-01-01

    This paper deals with the ambient vibration tests performed in an arch dam in two different working conditions in order to assess the effect produced by two different reservoir water levels on the structural vibration properties. The study consists of an experimental part and a numerical part. The experimental tests were carried out in two different periods of the year, at the beginning of autumn (October 2012) and at the end of winter (March 2013), respectively. The measurements were performed using a fast technique based on asynchronous records of microtremor time-series. In-contact single-station measurements were done by means of one single high resolution triaxial tromometer and two low-frequency seismometers, placed in different points of the structure. The Standard Spectral Ratio method has been used to evaluate the natural frequencies of vibration of the structure. A 3D finite element model of the arch dam-reservoir-foundation system has been developed to compare analytically determined vibration properties, such as natural frequencies and mode shapes, and their changes with water level against the experimental results. PMID:25003146
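
    As a hedged illustration of the Standard Spectral Ratio evaluation mentioned here: the smoothed amplitude spectrum at a point on the structure is divided by that of a reference record, and the ratio peaks near the natural frequencies. The synthetic microtremor signals and the 2.4 Hz mode below are invented for the example.

    ```python
    import numpy as np
    from scipy.signal import welch

    fs = 100.0                                   # Hz, assumed sampling rate
    t = np.arange(0, 600, 1 / fs)                # 10 min of microtremor record
    rng = np.random.default_rng(0)
    reference = rng.normal(size=t.size)          # reference-level noise
    structure = reference + 0.5 * np.sin(2 * np.pi * 2.4 * t)  # 2.4 Hz mode

    f, p_ref = welch(reference, fs=fs, nperseg=4096)
    _, p_str = welch(structure, fs=fs, nperseg=4096)
    ssr = np.sqrt(p_str / p_ref)                 # amplitude-spectrum ratio

    print(f"peak SSR at {f[np.argmax(ssr)]:.2f} Hz")  # ~2.4 Hz natural frequency
    ```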

  10. Imaging samples in silica aerogel using an experimental point spread function.

    PubMed

    White, Amanda J; Ebel, Denton S

    2015-02-01

    Light microscopy is a powerful tool that allows for many types of samples to be examined in a rapid, easy, and nondestructive manner. Subsequent image analysis, however, is compromised by distortion of signal by instrument optics. Deconvolution of images prior to analysis allows for the recovery of lost information by procedures that utilize either a theoretically or experimentally calculated point spread function (PSF). Using a laser scanning confocal microscope (LSCM), we have imaged whole impact tracks of comet particles captured in silica aerogel, a low density, porous SiO2 solid, by the NASA Stardust mission. In order to understand the dynamical interactions between the particles and the aerogel, precise grain location and track volume measurement are required. We report a method for measuring an experimental PSF suitable for three-dimensional deconvolution of imaged particles in aerogel. Using fluorescent beads manufactured into Stardust flight-grade aerogel, we have applied a deconvolution technique standard in the biological sciences to confocal images of whole Stardust tracks. The incorporation of an experimentally measured PSF allows for better quantitative measurements of the size and location of single grains in aerogel and more accurate measurements of track morphology.
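
    One possible realization of this workflow, assuming the widely used Richardson-Lucy algorithm (via scikit-image) as the deconvolution step that the record calls "standard in the biological sciences"; the synthetic stack and Gaussian PSF below stand in for a measured confocal stack and a bead-derived experimental PSF.

    ```python
    import numpy as np
    from skimage.restoration import richardson_lucy

    rng = np.random.default_rng(0)

    # Hypothetical 3-D image stack: one bright "grain" in a noisy volume.
    stack = rng.poisson(2.0, size=(32, 64, 64)).astype(float)
    stack[16, 32, 32] += 200.0

    # Stand-in PSF: small normalized 3-D Gaussian (a measured PSF would be
    # an image of sub-resolution fluorescent beads embedded in aerogel).
    z, y, x = np.mgrid[-4:5, -4:5, -4:5]
    psf = np.exp(-(x**2 + y**2) / 4.0 - z**2 / 8.0)
    psf /= psf.sum()

    # Keyword is `num_iter` in recent scikit-image releases
    # (older versions call it `iterations`).
    restored = richardson_lucy(stack / stack.max(), psf, num_iter=20)
    ```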

  11. A comparative approach for the characterization of a pneumatic piston gauge up to 8 MPa using finite element calculations

    NASA Astrophysics Data System (ADS)

    Dogra, Sugandha; Singh, Jasveer; Lodh, Abhishek; Dilawar Sharma, Nita; Bandyopadhyay, A. K.

    2011-02-01

    This paper reports the behavior of a well-characterized pneumatic piston gauge in the pressure range up to 8 MPa through simulation using the finite element method (FEM). Experimentally, the effective area of this piston gauge has been estimated by cross-floating to obtain A0 and λ. The FEM technique addresses this problem through simulation and optimization with standard commercial software (ANSYS), where the material properties of the piston and cylinder, dimensional measurements, etc. are used as the input parameters. The simulation provides the effective area Ap as a function of pressure in the free deformation mode. From these data, one can fit Ap versus pressure and thereby estimate A0 and λ. Further, we have carried out a similar theoretical calculation of Ap using the conventional method involving the Dadson as well as the Johnson-Newhall equations. A comparison of these results with the experimental results has been carried out.
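
    The A0 and λ of this record follow from fitting the standard linear model A_p = A0(1 + λp). A minimal fit sketch, with placeholder effective-area data rather than the paper's FEM output:

    ```python
    import numpy as np

    p = np.array([1, 2, 4, 6, 8]) * 1e6                 # pressure, Pa
    a_p = np.array([4.90000, 4.90002, 4.90007,
                    4.90011, 4.90016]) * 1e-4            # effective area, m^2

    slope, intercept = np.polyfit(p, a_p, 1)
    a0 = intercept                     # zero-pressure effective area
    lam = slope / intercept            # pressure-distortion coefficient (1/Pa)
    print(f"A0 = {a0:.6e} m^2, lambda = {lam:.3e} /Pa")
    ```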

  12. Effects of cobalt on the microstructure of Udimet 700. M.S. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Engel, M. A.

    1982-01-01

    Cobalt, a critical and "strategic" alloying element in many superalloys, was systematically replaced by nickel in experimental Udimet 700 alloys containing 0.1, 4.3, 8.6, 12.8 and the standard 17.0 wt.% cobalt. Electrolytic and chemical extraction techniques, X-ray diffraction, scanning electron and optical microscopy were used for the microstructural studies. The total weight fraction of gamma' was not significantly affected by the cobalt content, although a difference in the size and quantities of the primary and secondary gamma' phases was apparent. The lattice parameters of the gamma' were found to increase with increasing cobalt content while the lattice mismatch between the gamma matrix and gamma' phases decreased. Other significant effects of cobalt on the weight fraction, distribution and formation of the carbide and boride phases as well as the relative stability of the experimental alloys during long-time aging are also discussed.

  13. Experimental quantum key distribution with source flaws

    NASA Astrophysics Data System (ADS)

    Xu, Feihu; Wei, Kejin; Sajeed, Shihan; Kaiser, Sarah; Sun, Shihai; Tang, Zhiyuan; Qian, Li; Makarov, Vadim; Lo, Hoi-Kwong

    2015-09-01

    Decoy-state quantum key distribution (QKD) is a standard technique in current quantum cryptographic implementations. Unfortunately, existing experiments have two important drawbacks: the state preparation is assumed to be perfect and error-free, and the employed security proofs do not fully consider the finite-key effects for general attacks. These two drawbacks mean that the security of existing experiments cannot be guaranteed in practice. Here, we perform an experiment that shows secure QKD with imperfect state preparations over long distances and achieves rigorous finite-key security bounds for decoy-state QKD against coherent attacks in the universally composable framework. We quantify the source flaws experimentally and demonstrate a QKD implementation that is tolerant to channel loss despite the source flaws. Our implementation considers more real-world problems than most previous experiments, and our theory can be applied to general discrete-variable QKD systems. These features constitute a step towards secure QKD with imperfect devices.

  14. Hybrid Feedforward-Feedback Noise Control Using Virtual Sensors

    NASA Technical Reports Server (NTRS)

    Bean, Jacob; Fuller, Chris; Schiller, Noah

    2016-01-01

    Several approaches to active noise control using virtual sensors are evaluated for eventual use in an active headrest. Specifically, adaptive feedforward, feedback, and hybrid control structures are compared. Each controller incorporates the traditional filtered-x least mean squares algorithm. The feedback controller is arranged in an internal model configuration to draw comparisons with standard feedforward control theory results. Simulation and experimental results are presented that illustrate each controller's ability to minimize the pressure at both physical and virtual microphone locations. The remote microphone technique is used to obtain pressure estimates at the virtual locations. It is shown that a hybrid controller offers performance benefits over the traditional feedforward and feedback controllers. Stability issues associated with feedback and hybrid controllers are also addressed. Experimental results show that 15-20 dB reduction in broadband disturbances can be achieved by minimizing the measured pressure, whereas 10-15 dB reduction is obtained when minimizing the estimated pressure at a virtual location.
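
    A minimal single-channel sketch of the filtered-x LMS update each controller incorporates, with toy signals and an assumed perfect secondary-path model; a real active-headrest system runs this sample-by-sample against measured acoustic paths.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    x = rng.normal(size=n)                          # reference signal
    d = np.convolve(x, [0.0, 0.6, 0.3])[:n]         # disturbance at error mic
    s = np.array([0.8, 0.2])                        # true secondary path
    s_hat = s.copy()                                # assume a perfect path model

    L, mu = 16, 0.01
    w = np.zeros(L)                                 # adaptive control filter
    xbuf = np.zeros(L)                              # reference history
    fxbuf = np.zeros(L)                             # filtered-reference history
    ybuf = np.zeros(len(s))                         # control-output history
    errs = []

    for i in range(n):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
        y = w @ xbuf                                # control output
        ybuf = np.roll(ybuf, 1); ybuf[0] = y
        e = d[i] + s @ ybuf                         # residual at the error mic
        fx = s_hat @ xbuf[:len(s_hat)]              # reference through path model
        fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
        w -= mu * e * fxbuf                         # FxLMS weight update
        errs.append(e)

    print(f"residual power: {np.mean(np.square(errs[:500])):.3f} -> "
          f"{np.mean(np.square(errs[-500:])):.3f}")
    ```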

  15. Measurement of two-photon-absorption spectra through nonlinear fluorescence produced by a line-shaped excitation beam.

    PubMed

    Hasani, E; Parravicini, J; Tartara, L; Tomaselli, A; Tomassini, D

    2018-05-01

    We propose an innovative experimental approach to estimate the two-photon absorption (TPA) spectrum of a fluorescent material. Our method develops the standard indirect fluorescence-based method for the TPA measurement by employing a line-shaped excitation beam, generating a line-shaped fluorescence emission. Such a configuration, which requires a relatively high optical power, yields a greatly increased fluorescence signal, thus avoiding the photon-counting detectors usually used in these measurements and allowing the use of detectors such as charge-coupled device (CCD) cameras. The method is finally tested on a fluorescein isothiocyanate sample, whose TPA spectrum, measured with the proposed technique, is compared with the TPA spectra reported in the literature, confirming the validity of our experimental approach. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.

  16. Low cost ellipsometer using a standard commercial polarimeter

    NASA Astrophysics Data System (ADS)

    Velosa, F.; Abreu, M.

    2017-08-01

    Ellipsometry is an optical technique for characterizing materials or phenomena that occur at an interface or in a thin film between two different media. In this paper, we present an experimental low-cost version of a photometric ellipsometer, assembled from material commonly found in any optics laboratory. The polarization parameters were measured using a Thorlabs PAX5710 polarimeter, and the uncertainty was computed using the Guide to the Expression of Uncertainty in Measurement (GUM) procedures. With the assembled ellipsometer we were able to measure the thickness of a 10 nm SiO2 thin film deposited upon Si, and the complex refractive indices of gold and tantalum samples. The measured SiO2 thickness showed an experimental deviation of 4.5% with an uncertainty of 2.00 nm. The measured complex refractive indices of gold and tantalum agree with the values found in several references. The uncertainty was found to be limited mostly by the polarimeter's uncertainty.

  17. Microwave Determination of Water Mole Fraction in Humid Gas Mixtures

    NASA Astrophysics Data System (ADS)

    Cuccaro, R.; Gavioso, R. M.; Benedetto, G.; Madonna Ripa, D.; Fernicola, V.; Guianvarc'h, C.

    2012-09-01

    A small volume (65 cm^3) gold-plated quasi-spherical microwave resonator has been used to measure the water vapor mole fraction x_w of H2O/N2 and H2O/air mixtures. This experimental technique exploits the high precision achievable in the determination of the cavity microwave resonance frequencies and is particularly sensitive to the presence of small concentrations of water vapor as a result of the high polarizability of this substance. The mixtures were prepared using the INRIM standard humidity generator for frost-point temperatures T_fp in the range between 241 K and 270 K and a commercial two-pressure humidity generator operated at a dew-point temperature between 272 K and 291 K. The experimental measurements compare favorably with the calculated molar fractions of the mixture supplied by the humidity generators, showing a normalized error lower than 0.8.

  18. Experimental demonstration of two-dimensional hybrid waveguide-integrated plasmonic crystals on silicon-on-insulator platform

    NASA Astrophysics Data System (ADS)

    Ren, Guanghui; Yudistira, Didit; Nguyen, Thach G.; Khodasevych, Iryna; Schoenhardt, Steffen; Berean, Kyle J.; Hamm, Joachim M.; Hess, Ortwin; Mitchell, Arnan

    2017-07-01

    Nanoscale plasmonic structures can offer unique functionality due to extreme sub-wavelength optical confinement, but the realization of complex plasmonic circuits is hampered by high propagation losses. Hybrid approaches can potentially overcome this limitation, but only a few practical approaches, based on single- or few-element arrays of nanoantennas on dielectric nanowires, have been experimentally demonstrated. In this paper, we demonstrate a two-dimensional hybrid photonic plasmonic crystal interfaced with a standard silicon photonic platform. Off resonance, we observe low loss propagation through our structure, while on resonance we observe strong propagation suppression and intense concentration of light into a dense lattice of nanoscale hot-spots on the surface, providing clear evidence of a hybrid photonic plasmonic crystal bandgap. This fully integrated approach is compatible with established silicon-on-insulator (SOI) fabrication techniques and constitutes a significant step toward harnessing plasmonic functionality within SOI photonic circuits.

  19. Ferroptosis and Cell Death Analysis by Flow Cytometry.

    PubMed

    Chen, Daishi; Eyupoglu, Ilker Y; Savaskan, Nicolai

    2017-01-01

    Cell death and its recently discovered regulated form, ferroptosis, are characterized by distinct morphological, electrophysiological, and pharmacological features. In particular, ferroptosis can be induced by experimental compounds and clinical drugs (i.e., erastin, sulfasalazine, sorafenib, and artesunate) in various cell types and cancer cells. Pharmacologically, this cell death process can be inhibited by iron chelators and lipid peroxidation inhibitors. This specific form of cell death is relevant in different pathological conditions such as cancer, neurotoxicity, neurodegeneration, and ischemia. Distinguishing cell viability and cell death is essential for experimental and clinical applications and a key component in flow cytometry experiments. Dead cells can compromise the integrity of the data by nonspecific binding of antibodies and dyes. Therefore, it is essential that dead cells be robustly and reproducibly identified and characterized in flow cytometry applications. Here we describe a procedure to detect and quantify cell death and its specific form ferroptosis based on standard flow cytometry techniques.

  20. Experimental study on all-fiber-based unidimensional continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Wang, Xuyang; Liu, Wenyuan; Wang, Pu; Li, Yongmin

    2017-06-01

    We experimentally demonstrated an all-fiber-based unidimensional continuous-variable quantum key distribution (CV QKD) protocol and analyzed its security under collective attack in realistic conditions. A pulsed balanced homodyne detector with phase-insensitive efficiency and electronic noise, which could not be accessed by eavesdroppers, was considered. Furthermore, a modulation method and an improved relative phase-locking technique with one amplitude modulator and one phase modulator were designed. The relative phase could be locked precisely with a standard deviation of 0.5° and a mean of almost zero. Secret key bit rates of 5.4 kbps and 700 bps were achieved for transmission fiber lengths of 30 and 50 km, respectively. The protocol, which simplified the CV QKD system and reduced the cost, displayed a performance comparable to that of a symmetrical counterpart under realistic conditions. It is expected that the developed protocol can facilitate the practical application of the CV QKD.

  1. Determination of B-complex vitamins in pharmaceutical formulations by surface-enhanced Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Junior, Benedito Roberto Alvarenga; Soares, Frederico Luis Felipe; Ardila, Jorge Armando; Durango, Luis Guillermo Cuadrado; Forim, Moacir Rossi; Carneiro, Renato Lajarim

    2018-01-01

    The aim of this work was to quantify B-complex vitamins in pharmaceutical samples by the surface-enhanced Raman spectroscopy technique using a gold colloid substrate. Synthesis of gold nanoparticles was performed according to an adapted Turkevich method. Initial assays suggested the orientation of the molecules on the gold nanoparticle surface. A central composite design was performed to obtain the highest SERS signal for nicotinamide and riboflavin. The evaluated parameters in the experimental design were the volume of AuNPs, the concentration of vitamins, and the sodium chloride concentration. The best condition for nicotinamide was NaCl 2.3 × 10^-3 mol L^-1 and 700 μL of AuNPs colloid, and this same condition proved adequate to quantify thiamine. The experimental design for riboflavin showed the best condition at NaCl 1.15 × 10^-2 mol L^-1 and 2.8 mL of AuNPs colloid. It was possible to quantify thiamine and nicotinamide in the presence of other vitamins and excipients in two solid multivitamin formulations using the standard addition procedure. The standard addition curve presented an R^2 higher than 0.96 for both nicotinamide and thiamine, at concentrations on the order of 10^-7 and 10^-8 mol L^-1, respectively. The nicotinamide content in a cosmetic gel sample was also quantified by direct analysis, with an R^2 of 0.98. A Student's t-test showed no significant difference relative to the HPLC method. Despite the experimental design performed for riboflavin, its quantification in the commercial samples was not possible.
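
    A minimal sketch of the standard-addition quantification used in this record: the SERS band intensity is regressed against the spiked concentration and the unknown is read off the x-intercept. The intensities and spike levels below are invented for illustration.

    ```python
    import numpy as np

    added = np.array([0.0, 1.0, 2.0, 3.0]) * 1e-7     # mol/L spiked
    signal = np.array([0.41, 0.82, 1.19, 1.63])       # SERS band intensity (a.u.)

    slope, intercept = np.polyfit(added, signal, 1)
    c_sample = intercept / slope                      # |x-intercept| = unknown conc.
    r2 = np.corrcoef(added, signal)[0, 1] ** 2
    print(f"c = {c_sample:.2e} mol/L, R^2 = {r2:.3f}")
    ```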

  2. Command-line cellular electrophysiology for conventional and real-time closed-loop experiments.

    PubMed

    Linaro, Daniele; Couto, João; Giugliano, Michele

    2014-06-15

    Current software tools for electrophysiological experiments are limited in flexibility and rarely offer adequate support for advanced techniques such as dynamic clamp and hybrid experiments, which are therefore limited to laboratories with a significant expertise in neuroinformatics. We have developed lcg, a software suite based on a command-line interface (CLI) that allows performing both standard and advanced electrophysiological experiments. Stimulation protocols for classical voltage and current clamp experiments are defined by a concise and flexible meta description that allows representing complex waveforms as a piece-wise parametric decomposition of elementary sub-waveforms, abstracting the stimulation hardware. To perform complex experiments lcg provides a set of elementary building blocks that can be interconnected to yield a large variety of experimental paradigms. We present various cellular electrophysiological experiments in which lcg has been employed, ranging from the automated application of current clamp protocols for characterizing basic electrophysiological properties of neurons, to dynamic clamp, response clamp, and hybrid experiments. We finally show how the scripting capabilities behind a CLI are suited for integrating experimental trials into complex workflows, where the actual experiment, online data analysis, and computational modeling seamlessly integrate. We compare lcg with two open source toolboxes, RTXI and RELACS. We believe that lcg will greatly contribute to the standardization and reproducibility of both simple and complex experiments. Additionally, in the long run the increased efficiency due to a CLI will prove a great benefit for the experimental community. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. 2-D left ventricular flow estimation by combining speckle tracking with Navier-Stokes-based regularization: an in silico, in vitro and in vivo study.

    PubMed

    Gao, Hang; Bijnens, Nathalie; Coisne, Damien; Lugiez, Mathieu; Rutten, Marcel; D'hooge, Jan

    2015-01-01

    Despite the availability of multiple ultrasound approaches to left ventricular (LV) flow characterization in two dimensions, this technique remains in its infancy and further developments seem warranted. This article describes a new methodology for tracking the 2-D LV flow field based on ultrasound data. To this end, a standard speckle tracking algorithm was modified by using a dynamic kernel embedding Navier-Stokes-based regularization in an iterative manner. The performance of the proposed approach was first quantified in synthetic ultrasound data based on a computational fluid dynamics model of LV flow. Next, an experimental flow phantom setup mimicking the normal human heart was used for experimental validation, employing simultaneous optical particle image velocimetry as a standard reference technique. Finally, the applicability of the approach was tested in a clinical setting. On the basis of the simulated data, pointwise evaluation of the estimated velocity vectors correlated well (mean r = 0.84) with the computational fluid dynamics measurement. During the filling period of the left ventricle, the properties of the main vortex obtained from the proposed method were also measured, and their correlations with the reference measurement were calculated (radius, r = 0.96; circulation, r = 0.85; weighted center, r = 0.81). In vitro results at 60 bpm during one cardiac cycle confirmed that the algorithm properly measures typical characteristics of the vortex (radius, r = 0.60; circulation, r = 0.81; weighted center, r = 0.92). Preliminary qualitative results on clinical data revealed physiologic flow fields. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  4. [Current macro-diagnostic trends of forensic medicine in the Czech Republic].

    PubMed

    Frišhons, Jan; Kučerová, Štěpánka; Jurda, Mikoláš; Sokol, Miloš; Vojtíšek, Tomáš; Hejna, Petr

    2017-01-01

    Over the last few years, advanced diagnostic methods have penetrated the realm of forensic medicine, in addition to standard autopsy techniques supported by traditional X-ray examination and macro-diagnostic laboratory tests. Despite the progress of imaging methods, the conventional autopsy has remained the basic and essential diagnostic tool in forensic medicine. Postmortem computed tomography and magnetic resonance imaging are by far the most progressive modern radiodiagnostic methods, setting the current trend of virtual autopsies all over the world. Up to now, only two institutes of forensic medicine in the Czech Republic have postmortem computed tomography available for routine diagnostic purposes. Postmortem magnetic resonance is currently unattainable for routine diagnostic use and has been employed only for experimental purposes. Photogrammetry is a digital method focused primarily on body surface imaging. Recently, the most fruitful results have come from the interdisciplinary cooperation between forensic medicine and forensic anthropology, with the implementation of body scanning techniques and 3D printing. Non-invasive and mini-invasive investigative methods such as postmortem sonography and postmortem endoscopy have been unsystematically tested for diagnostic performance, with good outcomes despite the limitations of these methods in postmortem application. Other futuristic methods, such as the use of a drone to inspect the crime scene, are still experimental tools. The authors of the article present a basic overview of both routinely and experimentally used investigative methods and current macro-diagnostic trends of forensic medicine in the Czech Republic.

  5. Fabrication et caracterisation d'hybrides optiques tout-fibre [Fabrication and characterization of all-fibre optical hybrids]

    NASA Astrophysics Data System (ADS)

    Madore, Wendy Julie

    In this thesis, we present the fabrication and characterization of optical hybrids made of all-fibre 3 × 3 and 4 × 4 couplers. The three-fibre components are made with a triangular cross section, while the four-fibre components are made with a square cross section. All of these couplers must exhibit equipartition of the output amplitudes and specific relative phases of the output signals to be referred to as optical hybrids. These two types of couplers are first modelled to determine the appropriate set of experimental parameters to make hybrids out of them. The prototypes are made in standard telecommunication fibres and then characterized to quantify their performance in transmission and in phase. The first objective of this work is to model the behaviour and physical properties of 3 × 3 and 4 × 4 couplers to make sure they can meet the requirements of optical hybrids with an appropriate set of fabrication parameters. The next step is to make prototypes of these 3 × 3 and 4 × 4 couplers and test how well they fulfill the requirements of optical hybrids. The experimental set-up selected is based on the fusion-tapering technique for making optical fibre components. The heat source is a micro-torch fuelled with a gas mix of propane and oxygen. This type of set-up gives the freedom required to adjust the experimental parameters to suit both 3 × 3 and 4 × 4 couplers. The versatility of the set-up is also an advantage towards a repeatable and stable process for fusing and tapering the different structures. The fabricated triangular couplers have a total transmission of 85% (-0.7 dB); the crossing is typically located around 1550 nm with a transmission of around 33% (-4 dB) per branch. In addition, the relative phases between the output signals are 120 ± 9°. The fabricated square couplers have a total transmission of 89% (-0.5 dB) with a crossing around 1550 nm and a transmission around 25% (-6 dB) per branch. The relative phases between the output signals are 90 ± 3°. As standard telecommunication fibres are used to make the couplers, the prototypes are compatible with all standard fibred set-ups and benches. The properties of optical hybrids are very interesting for coherent detection, where an unambiguous phase measurement is desired. For instance, some standard telecommunication systems use phase-shift keying (PSK), which means information is encoded in the phase of the electromagnetic wave. An all-optical decoding of the signal is possible using optical hybrids. Another application is in biomedical imaging, with techniques such as optical coherence tomography (OCT) or, more generally, profilometry systems. In state-of-the-art techniques, a conventional interferometer combined with Fourier analysis gives only the absolute value of the phase; therefore, the achievable imaging depth in the sample is decreased by a factor of 2. Using optical hybrids allows that unambiguous phase measurement, giving the sign and value of the phase at the same time.

  6. Comparative assessment of bone pose estimation using Point Cluster Technique and OpenSim.

    PubMed

    Lathrop, Rebecca L; Chaudhari, Ajit M W; Siston, Robert A

    2011-11-01

    Estimating the position of the bones from optical motion capture data is a challenge associated with human movement analysis. Bone pose estimation techniques such as the Point Cluster Technique (PCT) and simulations of movement through software packages such as OpenSim are used to minimize soft tissue artifact and estimate skeletal position; however, using different methods for analysis may produce differing kinematic results which could lead to differences in clinical interpretation such as a misclassification of normal or pathological gait. This study evaluated the differences present in knee joint kinematics as a result of calculating joint angles using various techniques. We calculated knee joint kinematics from experimental gait data using the standard PCT, the least squares approach in OpenSim applied to experimental marker data, and the least squares approach in OpenSim applied to the results of the PCT algorithm. Maximum and resultant RMS differences in knee angles were calculated between all techniques. We observed differences in flexion/extension, varus/valgus, and internal/external rotation angles between all approaches. The largest differences were between the PCT results and all results calculated using OpenSim. The RMS differences averaged nearly 5° for flexion/extension angles with maximum differences exceeding 15°. Average RMS differences were relatively small (< 1.08°) between results calculated within OpenSim, suggesting that the choice of marker weighting is not critical to the results of the least squares inverse kinematics calculations. The largest difference between techniques appeared to be a constant offset between the PCT and all OpenSim results, which may be due to differences in the definition of anatomical reference frames, scaling of musculoskeletal models, and/or placement of virtual markers within OpenSim. Different methods for data analysis can produce largely different kinematic results, which could lead to the misclassification of normal or pathological gait. Improved techniques to allow non-uniform scaling of generic models to more accurately reflect subject-specific bone geometries and anatomical reference frames may reduce differences between bone pose estimation techniques and allow for comparison across gait analysis platforms.

  7. Detection of Salmonella sp. in chicken cuts using immunomagnetic separation

    PubMed Central

    de Cássia dos Santos da Conceição, Rita; Moreira, Ângela Nunes; Ramos, Roberta Juliano; Goularte, Fabiana Lemos; Carvalhal, José Beiro; Aleixo, José Antonio Guimarães

    2008-01-01

    Immunomagnetic separation (IMS) is a technique that has been used to increase sensitivity and specificity and to decrease the time required for detection of Salmonella in foods through different methodologies. In this work we report on the development of a method for detection of Salmonella in chicken cuts using in-house antibody-sensitized microspheres associated with conventional plating on selective agar (IMS-plating). First, protein A-coated microspheres were sensitized with polyclonal antibodies against lipopolysaccharide and flagella from salmonellae and used to standardize a procedure for capturing Salmonella Enteritidis from pure cultures and detection on selective agar. Subsequently, samples of chicken meat experimentally contaminated with S. Enteritidis were analyzed immediately after contamination and after 24 h of refrigeration using three enrichment protocols. The detection limit of the IMS-plating procedure after standardization with pure culture was about 2x10 CFU/mL. The protocol using non-selective enrichment for 6-8 h, selective enrichment for 16-18 h and a post-enrichment for 4 h gave the best results for S. Enteritidis detection by IMS-plating in experimentally contaminated meat. IMS-plating using this protocol was compared to the standard culture method for salmonellae detection in naturally contaminated chicken cuts and yielded 100% sensitivity and 94% specificity. The method developed, using in-house prepared magnetic microspheres for IMS and plating on selective agar, reduced by at least one day the time required for detection of Salmonella in chicken products relative to the conventional culture method. PMID:24031199

  8. A contact-free respiration monitor for smart bed and ambulatory monitoring applications.

    PubMed

    Hart, Adam; Tallevi, Kevin; Wickland, David; Kearney, Robert E; Cafazzo, Joseph A

    2010-01-01

    The development of a contact-free respiration monitor has a broad range of clinical applications in the home and hospital setting. Current approaches suffer from a variety of problems including unreliability, low sensitivity, and high cost. This work describes a novel approach to contact-free respiration monitoring that addresses these shortcomings by employing a highly sensitive capacitance sensor to detect variations in capacitive coupling caused by breathing. A prototype system consisting of a synthetic-metallic pad, sensor electronics, and an iPhone interface was built and its performance compared experimentally to the gold standard technique (Respiratory Inductance Plethysmography) on both a healthy volunteer and a SimMan robotic mannequin. The prototype sensor effectively captured respiratory movements over breathing rates of 5-55 bpm, achieving an average spectral correlation to the gold standard of 0.88 (CI: 0.86-0.90) and 0.95 (CI: 0.95-0.96) using the SimMan and the healthy volunteer, respectively.

  9. On dealing with multiple correlation peaks in PIV

    NASA Astrophysics Data System (ADS)

    Masullo, A.; Theunissen, R.

    2018-05-01

    A novel algorithm to analyse PIV images in the presence of strong in-plane displacement gradients and reduce sub-grid filtering is proposed in this paper. Interrogation windows subjected to strong in-plane displacement gradients often produce correlation maps presenting multiple peaks. Standard multi-grid procedures discard such ambiguous correlation windows using a signal-to-noise (SNR) filter. The proposed algorithm improves on the standard multi-grid algorithm by allowing the detection of splintered peaks in a correlation map through an automatic threshold, producing multiple displacement vectors for each correlation area. Vector locations are chosen by translating images according to the peak displacements and by selecting the areas with the strongest match. The method is assessed on synthetic images of a boundary layer of varying intensity and a sinusoidal displacement field of changing wavelength. An experimental case of a flow exhibiting strong velocity gradients is also provided to show the improvements brought by this technique.
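
    A simplified sketch of the multi-peak detection step: local maxima of the correlation map above an automatic threshold are all retained as candidate displacements, instead of keeping only the single highest peak. The threshold rule and the toy map below are assumptions; the paper's vector placement logic is more involved.

    ```python
    import numpy as np
    from scipy.ndimage import maximum_filter

    rng = np.random.default_rng(0)
    corr = rng.random((32, 32)) * 0.1                 # noise floor
    corr[8, 10] = 1.0; corr[20, 22] = 0.85            # two splintered peaks

    local_max = (maximum_filter(corr, size=5) == corr)
    threshold = corr.mean() + 4 * corr.std()          # assumed automatic rule
    peaks = np.argwhere(local_max & (corr > threshold))
    print(peaks)      # row/col of each candidate displacement peak
    ```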

  10. Evaluation of Information Leakage from Cryptographic Hardware via Common-Mode Current

    NASA Astrophysics Data System (ADS)

    Hayashi, Yu-Ichi; Homma, Naofumi; Mizuki, Takaaki; Sugawara, Takeshi; Kayano, Yoshiki; Aoki, Takafumi; Minegishi, Shigeki; Satoh, Akashi; Sone, Hideaki; Inoue, Hiroshi

    This paper presents a possibility of electromagnetic (EM) analysis against cryptographic modules outside their security boundaries. The mechanism behind the information leakage is explained from the viewpoint of electromagnetic compatibility: electric fluctuations released from cryptographic modules can conduct to peripheral circuits through ground bounce, resulting in radiation. We demonstrate the consequence of the mechanism through experiments where the ISO/IEC standard block cipher AES (Advanced Encryption Standard) is implemented on an FPGA board and EM radiations from power and communication cables are measured. Correlation Electromagnetic Analysis (CEMA) is conducted in order to evaluate the information leakage. The experimental results show that secret keys are revealed even though there are various disturbing factors such as voltage regulators and AC/DC converters between the target module and the measurement points. We also discuss information-suppression techniques as electrical-level countermeasures against such CEMAs.
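
    A minimal sketch of the correlation step at the heart of CEMA: a Hamming-weight leakage model for each key-byte guess is correlated against the measured traces at every sample point, and the guess with the largest correlation wins. The S-box and traces below are placeholders; a real attack uses the AES S-box and recorded EM waveforms.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_traces, n_samples = 2000, 500
    plaintexts = rng.integers(0, 256, n_traces)
    traces = rng.normal(size=(n_traces, n_samples))   # placeholder EM traces
    sbox = np.arange(256)                             # placeholder for AES S-box

    def hw(v):
        # Hamming weight of each byte in v.
        return np.unpackbits(np.atleast_1d(v).astype(np.uint8)).reshape(-1, 8).sum(1)

    best = None
    for guess in range(256):
        model = hw(sbox[plaintexts ^ guess]).astype(float)
        # Pearson correlation of the model against every sample point at once.
        m = model - model.mean()
        t = traces - traces.mean(axis=0)
        rho = (m @ t) / (np.sqrt((m**2).sum()) * np.sqrt((t**2).sum(axis=0)))
        peak = np.abs(rho).max()
        if best is None or peak > best[1]:
            best = (guess, peak)
    print(f"best key-byte guess: {best[0]:#04x} (|rho| = {best[1]:.3f})")
    ```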

  11. Robotic surgical systems in maxillofacial surgery: a review

    PubMed Central

    Liu, Hang-Hang; Li, Long-Jiang; Shi, Bin; Xu, Chun-Wei; Luo, En

    2017-01-01

    Throughout the twenty-first century, robotic surgery has been used in multiple oral surgical procedures for the treatment of head and neck tumors and non-malignant diseases. With the assistance of robotic surgical systems, maxillofacial surgery is performed with less blood loss, fewer complications, shorter hospitalization and better cosmetic results than standard open surgery. However, the application of robotic surgery techniques to the treatment of head and neck diseases remains in an experimental stage, and the long-lasting effects on surgical morbidity, oncologic control and quality of life are yet to be established. More well-designed studies are needed before this approach can be recommended as a standard treatment paradigm. Nonetheless, robotic surgical systems will inevitably be extended to maxillofacial surgery. This article reviews the current clinical applications of robotic surgery in the head and neck region and highlights the benefits and limitations of current robotic surgical systems. PMID:28660906

  12. Standardisation of the ion beam facility at Chandigarh cyclotron for simultaneous PIXE and PESA analysis

    NASA Astrophysics Data System (ADS)

    Verma, Shivcharan; Mohanty, Biraja P.; Singh, Karn P.; Kumar, Ashok

    2018-02-01

    The proton beam facility at the variable energy cyclotron (VEC), Panjab University, Chandigarh, India is being used for Particle Induced X-ray Emission (PIXE) analysis of different environmental, biological and industrial samples. The PIXE method, however, does not provide any information on low-Z elements like carbon, nitrogen, oxygen and fluorine. As a result of the increased need for rapid and multi-elemental analysis of biological and environmental samples, the PIXE facility was upgraded and standardized to facilitate simultaneous measurements using PIXE and Proton Elastic Scattering Analysis (PESA). Both PIXE and PESA techniques were calibrated and standardized individually. Finally, the set-up was tested by carrying out simultaneous PIXE and PESA measurements using a 2 mm diameter proton beam of 2.7 MeV on a few multilayered thin samples. The results obtained show excellent agreement between PIXE and PESA measurements and confirm adequate sensitivity and precision of the experimental set-up.

  13. Effect of Correlated Precision Errors on Uncertainty of a Subsonic Venturi Calibration

    NASA Technical Reports Server (NTRS)

    Hudson, S. T.; Bordelon, W. J., Jr.; Coleman, H. W.

    1996-01-01

    An uncertainty analysis performed in conjunction with the calibration of a subsonic venturi for use in a turbine test facility produced some unanticipated results that may have a significant impact in a variety of test situations. Precision uncertainty estimates using the preferred propagation techniques in the applicable American National Standards Institute/American Society of Mechanical Engineers standards were an order of magnitude larger than precision uncertainty estimates calculated directly from a sample of results (discharge coefficient) obtained at the same experimental set point. The differences were attributable to the effect of correlated precision errors, which previously have been considered negligible. An analysis explaining this phenomenon is presented. The article is not meant to document the venturi calibration, but rather to give a real example of results where correlated precision terms are important. The significance of the correlated precision terms could apply to many test situations.
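
    For reference (in the usual GUM/ASME notation, not copied from the article), the propagation expression whose cross terms drive the effect described here: for a result r = r(x_1, ..., x_J),

    ```latex
    u_r^2 \;=\; \sum_{i=1}^{J} \left(\frac{\partial r}{\partial x_i}\right)^{2} u_{x_i}^{2}
    \;+\; 2 \sum_{i=1}^{J-1} \sum_{k=i+1}^{J}
    \frac{\partial r}{\partial x_i}\,\frac{\partial r}{\partial x_k}\, u_{x_i x_k}
    ```

    The covariance terms u_{x_i x_k} are the correlated precision contributions that are often assumed negligible; the article shows they can dominate the estimate.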

  14. Standardization of Ga-68 by coincidence measurements, liquid scintillation counting and 4πγ counting.

    PubMed

    Roteta, Miguel; Peyres, Virginia; Rodríguez Barquero, Leonor; García-Toraño, Eduardo; Arenillas, Pablo; Balpardo, Christian; Rodrígues, Darío; Llovera, Roberto

    2012-09-01

    The radionuclide (68)Ga is one of the few positron emitters that can be prepared in-house without the use of a cyclotron. It disintegrates to the ground state of (68)Zn partially by positron emission (89.1%) with a maximum energy of 1899.1 keV, and partially by electron capture (10.9%). This nuclide has been standardized in the frame of a cooperation project between the Radionuclide Metrology laboratories from CIEMAT (Spain) and CNEA (Argentina). Measurements involved several techniques: 4πβ-γ coincidences, integral gamma counting and Liquid Scintillation Counting using the triple to double coincidence ratio and the CIEMAT/NIST methods. Given the short half-life of the radionuclide assayed, a direct comparison between results from both laboratories was excluded and a comparison of experimental efficiencies of similar NaI detectors was used instead. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Sci-Sat AM: Radiation Dosimetry and Practical Therapy Solutions - 11: Commissioning of a system for the measurement of electron stopping powers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McEwen, Malcolm; Roy, Timothy; Tessier, Frederic

    Purpose: To develop the techniques required to experimentally determine electron stopping powers for application in primary standards and dosimetry protocols. Method and Materials: A large-volume HPGe detector system (>80% efficiency) was commissioned for the measurement of high energy (5-35 MeV) electron beams. As a proof of principle the system was used with a Y-90/Sr-90 radioactive source. Thin plates of absorbing material (< 0.1 g cm^-2) were then placed between the source and detector and the emerging electron spectrum was acquired. The full experimental geometry was modelled using the EGSnrc package to validate the detector design, optimize the experimental setup, and compare measured and calculated spectra. Results: The biggest challenge using a beta source was to identify a robust spectral parameter to determine for each measurement. An end-point-fitting routine was used to determine the maximum energy, Emax, of the beta spectrum for each absorber thickness t. The parameter dEmax/dt is related to the electron stopping power, and the same routine was applied to both measured and simulated spectra. Although the standard uncertainty in dEmax/dt was of the order of 5%, by taking the ratio of measured and Monte Carlo values for dEmax/dt the uncertainty of the fitting routine was eliminated and the uncertainty was reduced to less than 2%. The agreement between measurement and simulation was within this uncertainty estimate. Conclusion: The investigation confirmed the experimental approach and demonstrated that EGSnrc could accurately determine the correction factors that will be required for the final measurement setup in a linac beam.

  16. Mechanical Characterization of Bone: State of the Art in Experimental Approaches-What Types of Experiments Do People Do and How Does One Interpret the Results?

    PubMed

    Bailey, Stacyann; Vashishth, Deepak

    2018-06-18

    The mechanical integrity of bone is determined by the direct measurement of bone mechanical properties. This article presents an overview of the current, most common, and new and upcoming experimental approaches for the mechanical characterization of bone. The key outcome variables of mechanical testing, as well as interpretations of the results in the context of bone structure and biology are also discussed. Quasi-static tests are the most commonly used for determining the resistance to structural failure by a single load at the organ (whole bone) level. The resistance to crack initiation or growth by fracture toughness testing and fatigue loading offers additional and more direct characterization of tissue material properties. Non-traditional indentation techniques and in situ testing are being increasingly used to probe the material properties of bone ultrastructure. Destructive ex vivo testing or clinical surrogate measures are considered to be the gold standard for estimating fracture risk. The type of mechanical test used for a particular investigation depends on the length scale of interest, where the outcome variables are influenced by the interrelationship between bone structure and composition. Advancement in the sensitivity of mechanical characterization techniques to detect changes in bone at the levels subjected to modifications by aging, disease, and/or pharmaceutical treatment is required. As such, a number of techniques are now available to aid our understanding of the factors that contribute to fracture risk.

  17. Visualisation of the flow at the tip of a high speed axial flow turbine rotor: An assessment of flow visualisation techniques and the requirement of the experimental turbine

    NASA Astrophysics Data System (ADS)

    Bindon, J.; Alder, D.; Ianovici, I.

    1987-11-01

    The field of flow visualization has been reviewed and its application to the study of the flow near the tip of an unshrouded axial turbine rotor discussed in detail. The logical conceptualization of experiments that could lead to a final understanding of the flow structure is developed, and how this leads to a test turbine design philosophy is suggested. The rotor periodicity shed by the stator requires particle or pulse tracing rather than the more universal continuous streamline trace which arises from a continuous tracer injection at a point in a flow. While the whole field of flow visualization at a rotor tip is demanding because of its very nature, pulse tracking will place a greater demand on the development of new skills and techniques. Since streamline tracking is somewhat more standard, these demands will not be as great. A fundamental choice does, however, need to be made between the two methods. The suggested experimental turbine should thus, with the facility of infinitely variable Mach number throughout, model the following: (1) Stationary annular cascade with tip clearance inside a stationary outer endwall; (2) Stationary annular cascade with tip clearance inside a moving endwall; (3) The transfer of flow visualization techniques developed into the rotating frame; (4) Fully rotating rotor with no inlet periodicity; (5) Fully rotating rotor with inlet periodicity.

  18. Using Mouse Mammary Tumor Cells to Teach Core Biology Concepts: A Simple Lab Module.

    PubMed

    McIlrath, Victoria; Trye, Alice; Aguanno, Ann

    2015-06-18

    Undergraduate biology students are required to learn, understand and apply a variety of cellular and molecular biology concepts and techniques in preparation for biomedical, graduate and professional programs or careers in science. To address this, a simple laboratory module was devised to teach the concepts of cell division, cellular communication and cancer through the application of animal cell culture techniques. Here the mouse mammary tumor (MMT) cell line is used as a model for breast cancer. Students learn to grow and characterize these animal cells in culture and test the effects of traditional and non-traditional chemotherapy agents on cell proliferation. Specifically, students determine the optimal cell concentration for plating and growing cells, learn how to prepare and dilute drug solutions, identify the best dosage and treatment time course of the antiproliferative agents, and ascertain the rate of cell death in response to various treatments. The module employs both a standard cell counting technique using a hemocytometer and a novel cell counting method using microscopy software. The experimental procedure lends itself to open-ended inquiry as students can modify critical steps of the protocol, including testing homeopathic agents and over-the-counter drugs. In short, this lab module requires students to use the scientific process to apply their knowledge of the cell cycle, cellular signaling pathways, cancer and modes of treatment, all while developing an array of laboratory skills including cell culture and analysis of experimental data not routinely taught in the undergraduate classroom.
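
    As a small worked example of the standard hemocytometer arithmetic behind the counting step (counts and dilution invented): concentration = mean count per large square × dilution factor × 10^4, since each large square corresponds to 0.1 μL.

    ```python
    counts = [42, 38, 45, 40]          # cells counted in four large squares
    dilution = 2                       # e.g., 1:1 dilution with trypan blue
    cells_per_ml = (sum(counts) / len(counts)) * dilution * 1e4
    print(f"{cells_per_ml:.2e} cells/mL")   # ~8.25e5 cells/mL
    ```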

  19. Testing single point incremental forming moulds for rotomoulding operations

    NASA Astrophysics Data System (ADS)

    Afonso, Daniel; de Sousa, Ricardo Alves; Torcato, Ricardo

    2017-10-01

    Low pressure polymer processes such as thermoforming or rotational moulding use much simpler moulds than high pressure processes like injection. However, despite the low forces involved in the process, mould manufacturing for these applications is still a very material-, energy- and time-consuming operation. Particularly in rotational moulding there is no standard for mould manufacture, and very different techniques are applicable. The goal of this research is to develop and validate a method for manufacturing plastically formed sheet metal moulds by single point incremental forming (SPIF) for rotomoulding and rotocasting operations. A Stewart platform based SPIF machine allows the forming of thick metal sheets, granting the required structural stiffness for the mould surface while keeping a short manufacturing lead time and low thermal inertia. The experimental work involves the proposal of a hollow part, the design and fabrication of a sheet metal mould using dieless incremental forming techniques, and testing its operation in the production of prototype parts.

  20. Introducing bio- and micro-technology into undergraduate thermal-fluids courses: investigating pipe pressure loss via atomic force microscopy.

    PubMed

    Müller, Marcus; Traum, Matthew J

    2012-01-01

    To introduce bio- and micro-technologies into general undergraduate thermal-fluids classes, a hands-on interdisciplinary in-class demonstration is described that juxtaposes classical pressure loss pipe flow experiments against a modern micro-characterization technique, AFM profilometry. Both approaches measure surface roughness and can segue into classroom discussions related to material selection and design of bio-medical devices to handle biological fluids such as blood. Appealing to the range of engineering students populating a general thermal-fluids course, a variety of pipe/hose/tube materials representing a spectrum of disciplines can be tested using both techniques. This in-class demonstration relies on technical content already available in standard thermal-fluids textbooks, provides experimental juxtaposition between classical and micro-technology-enabled approaches to the same experiment, and can be taught by personnel with no specialized micro- or bio-technology expertise.
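
    One way the two measurements connect quantitatively (a sketch under assumed values, using the Haaland approximation to the Colebrook equation rather than anything specific to this demonstration): an AFM-derived roughness height feeds the relative-roughness term of the Darcy friction factor, which in turn sets the pipe pressure loss per unit length.

    ```python
    import math

    def haaland_friction_factor(roughness_m, diameter_m, reynolds):
        """Darcy friction factor for turbulent pipe flow (Haaland, 1983)."""
        rel = roughness_m / diameter_m
        return (-1.8 * math.log10((rel / 3.7) ** 1.11 + 6.9 / reynolds)) ** -2

    # Illustrative: 0.4 um roughness from AFM, 6 mm tube, Re = 2e4.
    f = haaland_friction_factor(0.4e-6, 6e-3, 2e4)
    dp_per_len = f / 6e-3 * 0.5 * 1000.0 * 1.2**2   # Pa/m, water at 1.2 m/s
    print(f"f = {f:.4f}, dp/dx = {dp_per_len:.1f} Pa/m")
    ```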

  1. RP-1 and JP-8 Thermal Stability Experiments

    NASA Technical Reports Server (NTRS)

    Brown, Sarah P.; Emens, Jessica M.; Frederick, Robert A., Jr.

    2005-01-01

    This work experimentally investigates the effect of fuel composition changes on jet and rocket fuel thermal stability. A High Reynolds Number Thermal Stability test device evaluated JP-8 and RP-1 fuels. The experiment consisted of an electrically heated, stainless steel capillary tube with a controlled fuel outlet temperature. An optical pyrometer monitored the increasing external temperature profiles of the capillary tube as deposits built up inside during each test. Multiple runs of each fuel composition provided results on measurement repeatability. Testing at two different facilities provided data on measurement reproducibility. The technique is able to distinguish between thermally stable and unstable compositions of JP-8 and intermediate blends made by combining the two compositions. The technique is also able to distinguish among standard RP-1 rocket fuels and those having reduced sulfur levels. Carbon burn-off analysis of residue in the capillary tubes for the RP-1 fuels correlates with the external temperature results.

  2. Tensorial dynamic time warping with articulation index representation for efficient audio-template learning.

    PubMed

    Le, Long N; Jones, Douglas L

    2018-03-01

    Audio classification techniques often depend on the availability of a large labeled training dataset for successful performance. However, in many application domains of audio classification (e.g., wildlife monitoring), obtaining labeled data is still a costly and laborious process. Motivated by this observation, a technique is proposed to efficiently learn a clean template from a few labeled, but likely corrupted (by noise and interferences), data samples. This learning can be done efficiently via tensorial dynamic time warping on the articulation index-based time-frequency representations of audio data. The learned template can then be used in audio classification following the standard template-based approach. Experimental results show that the proposed approach outperforms both (1) the recurrent neural network approach and (2) the state-of-the-art in the template-based approach on a wildlife detection application with few training samples.
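
    A minimal scalar sketch of the dynamic-programming core of DTW; the tensorial, articulation-index variant in this record generalizes the per-frame cost to tensor-valued time-frequency features rather than the 1-D toy sequences used here.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Plain dynamic time warping with unit step costs."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    template = np.sin(np.linspace(0, 3, 40))
    sample = np.sin(np.linspace(0, 3, 55)) + 0.05   # time-stretched, offset copy
    print(f"DTW distance: {dtw_distance(template, sample):.3f}")
    ```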

  3. Measuring the free neutron lifetime to <= 0.3s via the beam method

    NASA Astrophysics Data System (ADS)

    Fomin, Nadia

    2017-09-01

    Neutron beta decay is an archetype for all semi-leptonic charged-current weak processes. While of interest as a fundamental particle property, a precise value for the neutron lifetime is also required for consistency tests of the Standard Model as well as to calculate the primordial 4He abundance in Big Bang Nucleosynthesis models. An effort has begun to develop an in-beam measurement of the neutron lifetime with a projected <= 0.3s uncertainty. This effort is part of a phased campaign of neutron lifetime measurements based at the NIST Center for Neutron Research, using the Sussex-ILL-NIST technique. Recent advances in neutron fluence measurement techniques as well as new large area silicon detector technology address the two largest sources of uncertainty of in-beam measurements, paving the way for a new measurement. The experimental design and projected uncertainties for the 0.3s measurement will be discussed.
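
    Schematically (the standard beam-method counting relation, not a detail specific to this record), the in-beam technique counts decay protons from a fiducial volume of the beam: with a mean number of neutrons N̄_n in that volume, determined from the fluence measurement,

    ```latex
    \dot{N}_p \;=\; \frac{\bar{N}_n}{\tau_n}
    \qquad\Longrightarrow\qquad
    \tau_n \;=\; \frac{\bar{N}_n}{\dot{N}_p}
    ```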

  4. Measuring the free neutron lifetime to <= 0.3s via the beam method

    NASA Astrophysics Data System (ADS)

    Mulholland, Jonathan; Fomin, Nadia; BL3 Collaboration

    2015-10-01

    Neutron beta decay is an archetype for all semi-leptonic charged-current weak processes. A precise value for the neutron lifetime is required for consistency tests of the Standard Model and is needed to predict the primordial 4He abundance from the theory of Big Bang Nucleosynthesis. An effort has begun for an in-beam measurement of the neutron lifetime with a projected <= 0.3s uncertainty. This effort is part of a phased campaign of neutron lifetime measurements based at the NIST Center for Neutron Research, using the Sussex-ILL-NIST technique. Recent advances in neutron fluence measurement techniques as well as new large area silicon detector technology address the two largest sources of uncertainty of in-beam measurements, paving the way for a new measurement. The experimental design and projected uncertainties for the 0.3s measurement will be discussed.

  5. Cell adhesion and proliferation on poly(tetrafluoroethylene) with plasma-metal and plasma-metal-carbon interfaces

    NASA Astrophysics Data System (ADS)

    Reznickova, Alena; Kvitek, Ondrej; Kolarova, Katerina; Smejkalova, Zuzana; Svorcik, Vaclav

    2017-06-01

    The aim of this article is to investigate the effect of the interface between plasma-activated, gold-coated and carbon-coated poly(tetrafluoroethylene) (PTFE) on the in vitro adhesion and spreading of mouse fibroblasts (L929). Surface properties of pristine and modified PTFE were studied by several experimental techniques. The thickness of the deposited gold film is an increasing function of the sputtering time; conversely, the thickness of the carbon layer decreases with increasing distance between the carbon source and the substrate. Because all of the surface modification techniques used take place in inert Ar plasma, oxidized degradation products are formed on the PTFE surface, which affect the wettability of the polymer surface. Cytocompatibility tests indicate that on samples with an Au/C interface, the cells accumulate on the part of the sample with evaporated carbon. The number of L929 cells proliferating on the studied samples is comparable to the tissue culture polystyrene standard.

  6. X-ray investigations related to the shock history of the Shergotty achondrite

    NASA Technical Reports Server (NTRS)

    Horz, F.; Hanss, R.; Serna, C.

    1986-01-01

    The shock stress suffered by naturally shocked materials from the Shergotty achondrite was studied using X-ray diffraction techniques and experimentally shocked augite and enstatite as standards. The Shergotty pyroxenes revealed the formation of continuous diffraction rings, line broadening, preferred orientation of small scale diffraction domains, and other evidence of substantial lattice disorder. As disclosed by the application of Debye-Scherrer techniques, they are hybrids between single crystals and fine-grained random powders. The pyroxene lattice is very resistant to shock damage on smaller scales. While measurable lattice disaggregation and progressive fragmentation occur below 25 GPa, little additional damage is suffered from the application of pressures between 30 and 60 GPa, making pressure calibration of naturally shocked pyroxenes via X-ray methods difficult. Powder diffractometer scans on pure maskelynite fractions of Shergotty revealed small amounts of still coherently diffracting plagioclase, which may contribute to the high refractive indices of the diaplectic feldspar glasses of Shergotty.

  7. An evaluation of EREP (Skylab) and ERTS imagery for integrated natural resources survey

    NASA Technical Reports Server (NTRS)

    Vangenderen, J. L. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. An experimental procedure has been devised and is being tested for natural resource surveys to cope with the problems of interpreting and processing the large quantities of data provided by Skylab and ERTS. Some basic aspects of orbital imagery such as scale, the role of repetitive coverage, and types of sensors are being examined in relation to integrated surveys of natural resources and regional development planning. Extrapolation away from known ground conditions, a fundamental technique for mapping resources, becomes very effective when used on orbital imagery supported by field mapping. Meaningful boundary delimitations can be made on orbital images using various image enhancement techniques. To meet the needs of many developing countries, this investigation into the use of satellite imagery for integrated resource surveys involves the analysis of the images by means of standard visual photointerpretation methods.

  8. Decorrelation correction for nanoparticle tracking analysis of dilute polydisperse suspensions in bulk flow

    NASA Astrophysics Data System (ADS)

    Hartman, John; Kirby, Brian

    2017-03-01

    Nanoparticle tracking analysis, a multiprobe single particle tracking technique, is a widely used method to quickly determine the concentration and size distribution of colloidal particle suspensions. Many popular tools remove non-Brownian components of particle motion by subtracting the ensemble-average displacement at each time step, which is termed dedrifting. Though critical for accurate size measurements, dedrifting is shown here to introduce significant biasing error and can fundamentally limit the dynamic range of particle size that can be measured for dilute heterogeneous suspensions such as biological extracellular vesicles. We report a more accurate estimate of particle mean-square displacement, which we call decorrelation analysis, that accounts for correlations between individual and ensemble particle motion, which are spuriously introduced by dedrifting. Particle tracking simulation and experimental results show that this approach more accurately determines particle diameters for low-concentration polydisperse suspensions when compared with standard dedrifting techniques.
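
    For intuition, the following is a minimal Python sketch of the standard dedrifting step that the record above critiques: subtracting the ensemble-average displacement at each time step before computing the mean-square displacement (MSD). The authors' decorrelation correction is not reproduced here, since its exact form is not given in the abstract.

    ```python
    import numpy as np

    def msd_after_dedrift(tracks, max_lag):
        """Mean-square displacement after conventional 'dedrifting'.

        tracks: array of shape (n_particles, n_frames, 2) of x,y positions.
        Subtracting the ensemble-average displacement at each time step
        removes bulk flow, but it also correlates each trajectory with the
        ensemble, which is the bias discussed in the record above.
        """
        disp = np.diff(tracks, axis=1)            # per-frame displacements
        drift = disp.mean(axis=0, keepdims=True)  # ensemble-average displacement
        dedrifted = np.cumsum(disp - drift, axis=1)
        msd = []
        for lag in range(1, max_lag + 1):
            d = dedrifted[:, lag:, :] - dedrifted[:, :-lag, :]
            msd.append((d ** 2).sum(axis=2).mean())
        return np.array(msd)

    # Toy usage: pure Brownian tracks plus a common drift velocity.
    rng = np.random.default_rng(0)
    steps = rng.normal(0, 0.1, size=(50, 200, 2)) + np.array([0.5, 0.0])
    tracks = np.cumsum(steps, axis=1)
    print(msd_after_dedrift(tracks, max_lag=20))
    ```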

  9. A study of interior noise levels, noise sources and transmission paths in light aircraft

    NASA Technical Reports Server (NTRS)

    Hayden, R. E.; Murray, B. S.; Theobald, M. A.

    1983-01-01

    The interior noise levels and spectral characteristics of 18 single- and twin-engine propeller-driven light aircraft were studied, along with source-path diagnosis of a single-engine aircraft considered representative of a large part of the fleet. The purpose of the flight surveys was to measure internal noise levels and identify principal noise sources and paths under a carefully controlled and standardized set of flight procedures. The diagnostic tests consisted of flights and ground tests in which various parts of the aircraft, such as engine mounts, the engine compartment, exhaust pipe, individual panels, and the wing strut, were instrumented to determine source levels and transmission path strengths using the transfer function technique. Predominant source and path combinations are identified. Experimental techniques are described. Data, transfer function calculations to derive source-path contributions to the cabin acoustic environment, and implications of the findings for noise control design are analyzed.

  10. Integration of Video-Based Demonstrations to Prepare Students for the Organic Chemistry Laboratory

    NASA Astrophysics Data System (ADS)

    Nadelson, Louis S.; Scaggs, Jonathan; Sheffield, Colin; McDougal, Owen M.

    2015-08-01

    Consistent, high-quality introductions to organic chemistry laboratory techniques effectively and efficiently support student learning in the organic chemistry laboratory. In this work, we developed and deployed a series of instructional videos to communicate core laboratory techniques and concepts. Using a quasi-experimental design, we tested the videos in five traditional laboratory experiments by integrating them with the standard pre-laboratory student preparation presentations and instructor demonstrations. We assessed the influence of the videos on student laboratory knowledge and performance, using sections of students who did not view the videos as the control. Our analysis of pre-quizzes revealed the control group had equivalent scores to the treatment group, while the post-quiz results show consistently greater learning gains for the treatment group. Additionally, the students who watched the videos as part of their pre-laboratory instruction completed their experiments in less time.

  11. Iodine Absorption Cells Purity Testing.

    PubMed

    Hrabina, Jan; Zucco, Massimo; Philippe, Charles; Pham, Tuan Minh; Holá, Miroslava; Acef, Ouali; Lazar, Josef; Číp, Ondřej

    2017-01-06

    This article deals with the evaluation of the chemical purity of iodine-filled absorption cells and the optical frequency references used for the frequency locking of laser standards. We summarize the recent trends and progress in absorption cell technology and we focus on methods for iodine cell purity testing. We compare two independent experimental systems based on the laser-induced fluorescence method, showing an improvement of measurement uncertainty by introducing a compensation system reducing unwanted influences. We show the advantages of this technique, which is relatively simple and does not require extensive hardware equipment. As an alternative to the traditionally used methods we propose an approach of hyperfine transitions' spectral linewidth measurement. The key characteristic of this method is demonstrated on a set of testing iodine cells. The relationship between laser-induced fluorescence and transition linewidth methods will be presented as well as a summary of the advantages and disadvantages of the proposed technique (in comparison with traditional measurement approaches).

  12. Iodine Absorption Cells Purity Testing

    PubMed Central

    Hrabina, Jan; Zucco, Massimo; Philippe, Charles; Pham, Tuan Minh; Holá, Miroslava; Acef, Ouali; Lazar, Josef; Číp, Ondřej

    2017-01-01

    This article deals with the evaluation of the chemical purity of iodine-filled absorption cells and the optical frequency references used for the frequency locking of laser standards. We summarize the recent trends and progress in absorption cell technology and we focus on methods for iodine cell purity testing. We compare two independent experimental systems based on the laser-induced fluorescence method, showing an improvement of measurement uncertainty by introducing a compensation system reducing unwanted influences. We show the advantages of this technique, which is relatively simple and does not require extensive hardware equipment. As an alternative to the traditionally used methods we propose an approach of hyperfine transitions’ spectral linewidth measurement. The key characteristic of this method is demonstrated on a set of testing iodine cells. The relationship between laser-induced fluorescence and transition linewidth methods will be presented as well as a summary of the advantages and disadvantages of the proposed technique (in comparison with traditional measurement approaches). PMID:28067834

  13. Assessment of Severe Apnoea through Voice Analysis, Automatic Speech, and Speaker Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Fernández Pozo, Rubén; Blanco Murillo, Jose Luis; Hernández Gómez, Luis; López Gonzalo, Eduardo; Alcázar Ramírez, José; Toledano, Doroteo T.

    2009-12-01

    This study is part of an ongoing collaborative effort between the medical and the signal processing communities to promote research on applying standard Automatic Speech Recognition (ASR) techniques for the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases is important so that patients can receive early treatment. Effective ASR-based detection could dramatically cut medical testing time. Working with a carefully designed speech database of healthy and apnoea subjects, we describe an acoustic search for distinctive apnoea voice characteristics. We also study abnormal nasalization in OSA patients by modelling vowels in nasal and nonnasal phonetic contexts using Gaussian Mixture Model (GMM) pattern recognition on speech spectra. Finally, we present experimental findings regarding the discriminative power of GMMs applied to severe apnoea detection. We have achieved an 81% correct classification rate, which is very promising and underpins the interest in this line of inquiry.
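
    The GMM-based detection described above can be sketched with scikit-learn. The feature values below are random placeholders standing in for real spectral features (e.g., from vowels in nasal and non-nasal contexts); the class separation is artificial, and the sketch only shows the likelihood-ratio decision rule, not the study's actual pipeline.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Placeholder frame-level features of shape (n_frames, n_features);
    # real data would be spectral features extracted from speech.
    X_apnoea = rng.normal(0.5, 1.0, size=(500, 12))
    X_control = rng.normal(-0.5, 1.0, size=(500, 12))

    # Fit one GMM per class on its training frames.
    gmm_apnoea = GaussianMixture(n_components=8, covariance_type='diag').fit(X_apnoea)
    gmm_control = GaussianMixture(n_components=8, covariance_type='diag').fit(X_control)

    def classify(frames):
        # Average log-likelihood ratio over the utterance's frames.
        llr = gmm_apnoea.score(frames) - gmm_control.score(frames)
        return 'apnoea' if llr > 0 else 'control'

    print(classify(rng.normal(0.5, 1.0, size=(100, 12))))
    ```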

  14. Fundamentals of bipolar high-frequency surgery.

    PubMed

    Reidenbach, H D

    1993-04-01

    In endoscopic surgery, a very precise surgical dissection technique and efficient hemostasis are of decisive importance. The bipolar technique may be regarded as a method which satisfies both requirements, especially regarding a high safety standard in application. In this context, the biophysical and technical fundamentals of this method, which have been known in principle for a long time, are described with regard to the special demands of a newly developed field of modern surgery. After classification of this method into a general and a quasi-bipolar mode, various technological solutions of specific bipolar probes, in a strict and in a generalized sense, are characterized in terms of indication. Experimental results obtained with different bipolar instruments and probes are given. The application of modern microprocessor-controlled high-frequency surgery equipment and, wherever necessary, the integration of additional ancillary technology into specialized bipolar instruments may result in highly useful and efficient tools for a key technology in endoscopic surgery.

  15. Fixation of revision implants is improved by new surgical technique to crack the sclerotic endosteal rim.

    PubMed

    Kold, S; Soballe, K; Mouzin, O; Chen, Xiangmei; Toft, M; Bechtold, J

    2002-01-01

    We used an experimental model producing a tissue response with a sclerotic endosteal neo-cortical rim associated with implant loosening in humans: a 6 mm PMMA cylinder pistoned 500 μm concentrically in a 7.5 mm hole, with polyethylene particles. At a second operation at eight weeks, the standard revision procedure removed the fibrous membrane in one knee, and the crack revision procedure was used to crack the sclerotic endosteal rim in the contralateral knee. Once stability was achieved following the revision procedures, loaded Ti plasma-sprayed implants were inserted into the revision cavities of 8 dogs for an additional 4 weeks. Revision implant fixation (ultimate shear strength and energy absorption) was significantly enhanced by cracking the sclerotic endosteal rim. In conclusion, we demonstrated a simple technique of cracking the sclerotic endosteal rim as an additional method for improving revision fixation. (Hip International 2002; 2: 77-9).

  16. Chemically Defined Medium and Caenorhabditis elegans: A Powerful Approach

    NASA Technical Reports Server (NTRS)

    Szewczyk, N. J.; Kozak, E.; Conley, C. A.

    2003-01-01

    C. elegans has been established as a powerful genetic system. Growth in a chemically defined medium (C. elegans Maintenance Medium (CeMM)) now allows standardization and systematic manipulation of the nutrients that animals receive. Liquid cultivation allows automated culturing and experimentation and should be of use in large-scale growth and screening of animals. Here we present our initial results from developing culture systems with CeMM. We find that CeMM is versatile and culturing is simple. CeMM can be used in a solid or liquid state, it can be stored unused for at least a year, unattended actively growing cultures may be maintained longer than with standard techniques, and standard C. elegans protocols work well with animals grown in defined medium. We also find that there are caveats to using defined medium. Animals in defined medium grow more slowly than on standard medium, appear to display adaptation to the defined medium, and display altered growth rates as they alter the composition of the defined medium. As was suggested with the introduction of C. elegans as a potential genetic system, use of defined medium with C. elegans should prove a powerful tool.

  17. Imaging of neural oscillations with embedded inferential and group prevalence statistics.

    PubMed

    Donhauser, Peter W; Florin, Esther; Baillet, Sylvain

    2018-02-01

    Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise are challenging factors for source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources whose activity is consistent with the tested hypothesis are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting-state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages. The flexibility of iES allows it to be adjusted to many experimental questions in systems neuroscience.

  18. Imaging of neural oscillations with embedded inferential and group prevalence statistics

    PubMed Central

    2018-01-01

    Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise are challenging factors for source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources whose activity is consistent with the tested hypothesis are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting-state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages. The flexibility of iES allows it to be adjusted to many experimental questions in systems neuroscience. PMID:29408902

  19. Presentation and Impact of Experimental Techniques in Chemistry

    ERIC Educational Resources Information Center

    Sojka, Zbigniew; Che, Michel

    2008-01-01

    Laboratory and practical courses, where students become familiar with experimental techniques and learn to interpret data and relate them to appropriate theory, play a vital role in chemical education. In the large panoply of currently available techniques, it is difficult to find a rational and easy way to classify the techniques in relation to…

  20. The standard centrifuge method accurately measures vulnerability curves of long-vesselled olive stems.

    PubMed

    Hacke, Uwe G; Venturas, Martin D; MacKinnon, Evan D; Jacobsen, Anna L; Sperry, John S; Pratt, R Brandon

    2015-01-01

    The standard centrifuge method has been frequently used to measure vulnerability to xylem cavitation. This method has recently been questioned. It was hypothesized that open vessels lead to exponential vulnerability curves, which were thought to be indicative of measurement artifact. We tested this hypothesis in stems of olive (Olea europaea) because its long vessels were recently claimed to produce a centrifuge artifact. We evaluated three predictions that followed from the open vessel artifact hypothesis: shorter stems, with more open vessels, would be more vulnerable than longer stems; standard centrifuge-based curves would be more vulnerable than dehydration-based curves; and open vessels would cause an exponential shape of centrifuge-based curves. Experimental evidence did not support these predictions. Centrifuge curves did not vary when the proportion of open vessels was altered. Centrifuge and dehydration curves were similar. At highly negative xylem pressure, centrifuge-based curves slightly overestimated vulnerability compared to the dehydration curve. This divergence was eliminated by centrifuging each stem only once. The standard centrifuge method produced accurate curves of samples containing open vessels, supporting the validity of this technique and confirming its utility in understanding plant hydraulics. Seven recommendations for avoiding artefacts and standardizing vulnerability curve methodology are provided. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.
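
    Since the record above turns on the shape of vulnerability curves, a brief illustration may help: such curves are commonly summarized by fitting percent loss of conductivity (PLC) against xylem pressure with a sigmoidal model and reporting P50, the pressure at 50% loss. The data and model below are invented for the example and are not from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid_plc(P, P50, s):
        # Sigmoidal vulnerability curve: PLC rises toward 100%
        # as xylem pressure P (MPa, negative) becomes more negative.
        return 100.0 / (1.0 + np.exp(s * (P - P50)))

    # Made-up (pressure, PLC) observations for illustration only.
    P = np.array([-0.5, -1.0, -2.0, -3.0, -4.0, -5.0, -6.0])
    PLC = np.array([2.0, 5.0, 15.0, 45.0, 75.0, 92.0, 98.0])

    (P50, s), _ = curve_fit(sigmoid_plc, P, PLC, p0=(-3.0, 2.0))
    print(f"P50 = {P50:.2f} MPa, slope = {s:.2f}")
    ```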

  1. Estimation of beam material random field properties via sensitivity-based model updating using experimental frequency response functions

    NASA Astrophysics Data System (ADS)

    Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.

    2018-03-01

    Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
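
    A minimal sketch of the Karhunen-Loève parameterization referred to above: discretize a covariance kernel on the beam axis, keep the leading eigenpairs, and synthesize random-field realizations as a truncated series in uncorrelated standard-normal coordinates. The squared-exponential kernel, correlation length, and truncation order below are illustrative assumptions, not the paper's values.

    ```python
    import numpy as np

    def kl_expansion(x, sigma2, corr_len, n_terms):
        """Discrete Karhunen-Loeve expansion of a 1-D Gaussian random field.

        Builds a squared-exponential covariance on the grid x, takes its
        leading eigenpairs, and returns one field realization as a truncated
        series in uncorrelated standard-normal coordinates xi_k, the same
        kind of parameterization estimated via model updating above.
        """
        C = sigma2 * np.exp(-(x[:, None] - x[None, :]) ** 2 / corr_len ** 2)
        vals, vecs = np.linalg.eigh(C)
        idx = np.argsort(vals)[::-1][:n_terms]  # leading modes
        lam, phi = vals[idx], vecs[:, idx]
        xi = np.random.randn(n_terms)           # KL coordinates
        return phi @ (np.sqrt(lam) * xi)

    # Toy usage: bending rigidity as a mean value plus a correlated fluctuation.
    x = np.linspace(0.0, 1.0, 200)              # normalized beam axis
    rigidity = 1.0 + kl_expansion(x, sigma2=0.01, corr_len=0.2, n_terms=6)
    ```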

  2. Skull Base Cerebrospinal Fluid Leakage Control with a Fibrin-Based Composite Tissue Adhesive

    PubMed Central

    Rock, Jack P.; Sierra, David H.; Castro-Moure, Frederico; Jiang, Feng

    1996-01-01

    Cerebrospinal fluid (CSF) leaks can be responsible for significant patient morbidity and mortality. While the majority of leaks induced after head trauma will seal without intervention, spontaneous or surgically-induced leaks often require operative repair. Many modifications on standard surgical technique are available for repair of CSF fistulae, but none assures adequate closure. We have studied the efficacy of a novel fibrin-based composite tissue adhesive (CTA) for closure of experimentally-induced CSF leaks in rats. Fistulae were created in two groups of animals. Two weeks after creation of the leaks, the animals were sacrificed and analyzed for persistence of leak. A 58% leakage rate was noted in the control group (n = 12), and no leaks were noted in the experimental group closed after application of CTA to the surgical defect followed by skin closure (n = 11). Comparing the control group to the experimental group, results were statistically significant (p = 0.015). These data suggest that CTA may be effective as an adjunct for the closure of CSF fistulae. PMID:17170969

  3. Characterization of Ultra-fine Grained and Nanocrystalline Materials Using Transmission Kikuchi Diffraction

    PubMed Central

    Proust, Gwénaëlle; Trimby, Patrick; Piazolo, Sandra; Retraint, Delphine

    2017-01-01

    One of the challenges in microstructure analysis nowadays resides in the reliable and accurate characterization of ultra-fine grained (UFG) and nanocrystalline materials. The traditional techniques associated with scanning electron microscopy (SEM), such as electron backscatter diffraction (EBSD), do not possess the required spatial resolution due to the large interaction volume between the electrons from the beam and the atoms of the material. Transmission electron microscopy (TEM) has the required spatial resolution. However, due to a lack of automation in the analysis system, the rate of data acquisition is slow, which limits the area of the specimen that can be characterized. This paper presents a new characterization technique, Transmission Kikuchi Diffraction (TKD), which enables the analysis of the microstructure of UFG and nanocrystalline materials using an SEM equipped with a standard EBSD system. The spatial resolution of this technique can reach 2 nm. This technique can be applied to a large range of materials that would be difficult to analyze using traditional EBSD. After presenting the experimental setup and describing the different steps necessary to perform a TKD analysis, examples of its use on metal alloys and minerals are shown to illustrate the resolution of the technique and its flexibility in terms of the materials that can be characterized. PMID:28447998

  4. Characterization of Ultra-fine Grained and Nanocrystalline Materials Using Transmission Kikuchi Diffraction.

    PubMed

    Proust, Gwénaëlle; Trimby, Patrick; Piazolo, Sandra; Retraint, Delphine

    2017-04-01

    One of the challenges in microstructure analysis nowadays resides in the reliable and accurate characterization of ultra-fine grained (UFG) and nanocrystalline materials. The traditional techniques associated with scanning electron microscopy (SEM), such as electron backscatter diffraction (EBSD), do not possess the required spatial resolution due to the large interaction volume between the electrons from the beam and the atoms of the material. Transmission electron microscopy (TEM) has the required spatial resolution. However, due to a lack of automation in the analysis system, the rate of data acquisition is slow, which limits the area of the specimen that can be characterized. This paper presents a new characterization technique, Transmission Kikuchi Diffraction (TKD), which enables the analysis of the microstructure of UFG and nanocrystalline materials using an SEM equipped with a standard EBSD system. The spatial resolution of this technique can reach 2 nm. This technique can be applied to a large range of materials that would be difficult to analyze using traditional EBSD. After presenting the experimental setup and describing the different steps necessary to perform a TKD analysis, examples of its use on metal alloys and minerals are shown to illustrate the resolution of the technique and its flexibility in terms of the materials that can be characterized.

  5. 48 CFR 9904.401-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    9904.401-50 Section 9904.401-50, Federal Acquisition Regulations System; Cost Accounting Standards Board, Office of Federal Procurement Policy, Office of Management and Budget; Procurement Practices and Cost Accounting Standards; Cost Accounting Standards. 9904.401-50 Techniques for application. (a) The standard...

  6. Minimum Information about a Genotyping Experiment (MIGEN)

    PubMed Central

    Huang, Jie; Mirel, Daniel; Pugh, Elizabeth; Xing, Chao; Robinson, Peter N.; Pertsemlidis, Alexander; Ding, LiangHao; Kozlitina, Julia; Maher, Joseph; Rios, Jonathan; Story, Michael; Marthandan, Nishanth; Scheuermann, Richard H.

    2011-01-01

    Genotyping experiments are widely used in clinical and basic research laboratories to identify associations between genetic variations and normal/abnormal phenotypes. Genotyping assay techniques vary from single genomic regions that are interrogated using PCR reactions to high-throughput assays examining genome-wide sequence and structural variation. The resulting genotype data may include millions of markers for thousands of individuals, requiring various statistical, modeling, or other data analysis methodologies to interpret the results. To date, there are no standards for reporting genotyping experiments. Here we present the Minimum Information about a Genotyping Experiment (MIGen) standard, defining the minimum information required for reporting genotyping experiments. The MIGen standard covers experimental design, subject description, genotyping procedure, quality control, and data analysis. MIGen is a registered project under MIBBI (Minimum Information for Biological and Biomedical Investigations) and is being developed by an interdisciplinary group of experts in basic biomedical science, clinical science, biostatistics, and bioinformatics. To accommodate the wide variety of techniques and methodologies applied in current and future genotyping experiments, MIGen leverages foundational concepts from the Ontology for Biomedical Investigations (OBI) for the description of the various types of planned processes and implements a hierarchical document structure. The adoption of MIGen by the research community will facilitate consistent genotyping data interpretation and independent data validation. MIGen can also serve as a framework for the development of data models for capturing and storing genotyping results and experiment metadata in a structured way, to facilitate the exchange of metadata. PMID:22180825
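
    As a loose illustration of the reporting scope listed above, a MIGen-style record might bundle metadata along those five axes. The field names and values below are invented for the example and are not the normative MIGen vocabulary.

    ```python
    # Hypothetical, minimal structured record covering the five MIGen sections;
    # keys and values are illustrative only.
    migen_record = {
        "experimental_design": {"study_type": "case-control", "n_subjects": 2000},
        "subject_description": {"species": "Homo sapiens", "consent": "obtained"},
        "genotyping_procedure": {"platform": "SNP array", "n_markers": 550000},
        "quality_control": {"call_rate_min": 0.98, "hwe_p_min": 1e-6},
        "data_analysis": {"association_test": "logistic regression"},
    }
    ```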

  7. Vibration-based angular speed estimation for multi-stage wind turbine gearboxes

    NASA Astrophysics Data System (ADS)

    Peeters, Cédric; Leclère, Quentin; Antoni, Jérôme; Guillaume, Patrick; Helsen, Jan

    2017-05-01

    Most processing tools based on frequency analysis of vibration signals are only applicable to stationary speed regimes. Speed variation causes the spectral content to smear, which encumbers most conventional fault detection techniques. To solve the problem of non-stationary speed conditions, the instantaneous angular speed (IAS) is estimated. Wind turbine gearboxes, however, are typically multi-stage gearboxes, consisting of multiple shafts rotating at different speeds. Fitting a sensor (e.g. a tachometer) to every single stage is not always feasible. As such, there is a need to estimate the IAS of every single shaft based on the vibration signals measured by the accelerometers. This paper investigates the performance of the multi-order probabilistic approach (MOPA) for IAS estimation on experimental case studies of wind turbines. This method takes into account the meshing orders of the gears present in the system and has the advantage that a priori it is not necessary to associate harmonics with a certain periodic mechanical event, which increases the robustness of the method. It is found that the MOPA has the potential to easily outperform standard band-pass filtering techniques for speed estimation. More knowledge of the gearbox kinematics is beneficial for the MOPA performance, but even with very little knowledge about the meshing orders, the MOPA still performs sufficiently well to compete with the standard speed estimation techniques. This observation is proven on two different data sets, both originating from vibration measurements on the gearbox housing of a wind turbine.
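
    As a point of comparison for the method above, the following is a crude single-order baseline for instantaneous-speed estimation: track the dominant spectrogram ridge of one assumed gear-meshing harmonic and divide by its order. MOPA instead fuses evidence from all meshing orders probabilistically; this sketch only illustrates the simpler class of techniques it is compared against, with made-up signal parameters.

    ```python
    import numpy as np
    from scipy.signal import stft

    def ias_from_meshing_order(vib, fs, order, f_lo, f_hi):
        """Crude shaft-speed estimate from one gear-meshing harmonic.

        Tracks the dominant spectral ridge within [f_lo, f_hi] over time and
        divides by the assumed meshing order to get shaft speed in Hz.
        """
        f, t, Z = stft(vib, fs=fs, nperseg=4096)
        band = (f >= f_lo) & (f <= f_hi)
        ridge = f[band][np.argmax(np.abs(Z[band, :]), axis=0)]
        return t, ridge / order

    # Toy usage: a swept 'meshing' tone at order 32 of a shaft going 10 -> 12 Hz.
    fs = 8192
    t = np.arange(0, 10, 1 / fs)
    shaft_hz = 10 + 0.2 * t
    phase = 2 * np.pi * 32 * np.cumsum(shaft_hz) / fs
    vib = np.sin(phase) + 0.1 * np.random.randn(t.size)
    times, speed = ias_from_meshing_order(vib, fs, order=32, f_lo=300, f_hi=420)
    ```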

  8. Analysis, optimization, and implementation of a hybrid DS/FFH spread-spectrum technique for smart grid communications

    DOE PAGES

    Olama, Mohammed M.; Ma, Xiao; Killough, Stephen M.; ...

    2015-03-12

    In recent years, there has been great interest in using hybrid spread-spectrum (HSS) techniques for commercial applications, particularly in the Smart Grid, in addition to their inherent uses in military communications. This is because HSS can accommodate high data rates with high link integrity, even in the presence of significant multipath effects and interfering signals. A highly useful form of this transmission technique for many types of command, control, and sensing applications is the specific code-related combination of standard direct sequence modulation with fast frequency hopping, denoted hybrid DS/FFH, wherein multiple frequency hops occur within a single data-bit time. In this paper, error-probability analyses are performed for a hybrid DS/FFH system over standard Gaussian and fading-type channels, progressively including the effects from wide- and partial-band jamming, multi-user interference, and varying degrees of Rayleigh and Rician fading. In addition, an optimization approach is formulated that minimizes the bit-error performance of a hybrid DS/FFH communication system and solves for the resulting system design parameters. The optimization objective function is non-convex and can be solved by applying the Karush-Kuhn-Tucker conditions. We also present our efforts toward exploring the design, implementation, and evaluation of a hybrid DS/FFH radio transceiver using a single FPGA. Numerical and experimental results are presented under widely varying design parameters to demonstrate the adaptability of the waveform for varied harsh smart grid RF signal environments.

  9. Investigation of the Surface Stress in SiC and Diamond Nanocrystals by In-situ High Pressure Powder Diffraction Technique

    NASA Technical Reports Server (NTRS)

    Palosz, B.; Stelmakh, S.; Grzanka, E.; Gierlotka, S.; Zhao, Y.; Palosz, W.

    2003-01-01

    The real atomic structure of nanocrystals determines key properties of the materials. For such materials a serious experimental problem lies in obtaining sufficiently accurate measurements of the structural parameters of the crystals, since very small crystals constitute a two-phase rather than a uniform crystallographic phase system. As a result, elastic properties of nanograins may be expected to reflect the dual nature of their structure, with a corresponding set of different elastic property parameters. We studied those properties by an in-situ high-pressure powder diffraction technique. For nanocrystalline, even one-phase, materials such measurements are particularly difficult to make, since determination of the lattice parameters of very small crystals presents a challenge due to inherent limitations of the standard elaboration of powder diffractograms. In this investigation we used our methodology of structural analysis, the 'apparent lattice parameter' (alp) concept. The methodology allowed us to avoid the traps of standard powder diffraction evaluation techniques when applied to nanocrystals. The experiments were performed on nanocrystalline SiC and GaN powders using synchrotron sources. We applied both hydrostatic and isostatic pressures in the range of up to 40 GPa. Elastic properties of the samples were examined based on measurements of the change of the lattice parameters with pressure. The results show a dual nature of the mechanical properties (compressibilities) of the materials, indicating a complex, core-shell structure of the grains.

  10. Progress in SPECT/CT imaging of prostate cancer.

    PubMed

    Seo, Youngho; Franc, Benjamin L; Hawkins, Randall A; Wong, Kenneth H; Hasegawa, Bruce H

    2006-08-01

    Prostate cancer is the most common type of cancer (other than skin cancer) among men in the United States. Although prostate cancer is one of the few cancers that grow so slowly that it may never threaten the lives of some patients, it can be lethal once metastasized. Indium-111 capromab pendetide (ProstaScint, Cytogen Corporation, Princeton, NJ) imaging is indicated for staging and recurrence detection of the disease, and is particularly useful to determine whether or not the disease has spread to distant metastatic sites. However, the interpretation of 111In-capromab pendetide is challenging without correlated structural information mostly because the radiopharmaceutical demonstrates nonspecific uptake in the normal vasculature, bowel, bone marrow, and the prostate gland. We developed an improved method of imaging and localizing 111In-Capromab pendetide using a SPECT/CT imaging system. The specific goals included: i) development and application of a novel iterative SPECT reconstruction algorithm that utilizes a priori information from coregistered CT; and ii) assessment of clinical impact of adding SPECT/CT for prostate cancer imaging with capromab pendetide utilizing the standard and novel reconstruction techniques. Patient imaging studies with capromab pendetide were performed from 1999 to 2004 using two different SPECT/CT scanners, a prototype SPECT/CT system and a commercial SPECT/CT system (Discovery VH, GE Healthcare, Waukesha, WI). SPECT projection data from both systems were reconstructed using an experimental iterative algorithm that compensates for both photon attenuation and collimator blurring. In addition, the data obtained from the commercial system were reconstructed with attenuation correction using an OSEM reconstruction supplied by the camera manufacturer for routine clinical interpretation. For 12 sets of patient data, SPECT images reconstructed using the experimental algorithm were interpreted separately and compared with interpretation of images obtained using the standard reconstruction technique. The experimental reconstruction algorithm improved spatial resolution, reduced streak artifacts, and yielded a better correlation with anatomic details of CT in comparison to conventional reconstruction methods (e.g., filtered back-projection or OSEM with attenuation correction only). Images produced with the experimental algorithm produced a subjective improvement in the confidence of interpretation for 11 of 12 studies. There were also changes in interpretations for 4 of 12 studies although the changes were not sufficient to alter prognosis or the patient treatment plan.
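
    To make the family of algorithms above concrete, here is a generic maximum-likelihood EM (MLEM) update for emission tomography, of which OSEM is the ordered-subsets accelerated variant. This is a textbook sketch with a dense system matrix, not the experimental reconstruction from the study, which additionally models photon attenuation and collimator blurring inside the system model.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50):
        """Maximum-likelihood EM reconstruction for emission tomography.

        A: system matrix mapping image voxels to projection bins.
        y: measured projection counts.
        Each iteration multiplies the image by the backprojected ratio of
        measured to predicted projections, normalized by the sensitivity.
        """
        x = np.ones(A.shape[1])               # flat initial image
        sens = A.T @ np.ones(A.shape[0])      # sensitivity (backprojected ones)
        for _ in range(n_iter):
            ratio = y / np.clip(A @ x, 1e-12, None)
            x *= (A.T @ ratio) / np.clip(sens, 1e-12, None)
        return x

    # Toy usage with a random nonnegative system matrix.
    rng = np.random.default_rng(1)
    A = rng.random((64, 32))
    x_true = rng.random(32)
    x_rec = mlem(A, A @ x_true)
    ```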

  11. Technical Note: Phantom study to evaluate the dose and image quality effects of a computed tomography organ-based tube current modulation technique.

    PubMed

    Gandhi, Diksha; Crotty, Dominic J; Stevens, Grant M; Schmidt, Taly Gilat

    2015-11-01

    This technical note quantifies the dose and image quality performance of a clinically available organ-dose-based tube current modulation (ODM) technique, using experimental and simulation phantom studies. The investigated ODM implementation reduces the tube current for the anterior source positions, without increasing current for posterior positions, although such an approach was also evaluated for comparison. Axial CT scans at 120 kV were performed on head and chest phantoms on an ODM-equipped scanner (Optima CT660, GE Healthcare, Chalfont St. Giles, England). Dosimeters quantified dose to breast, lung, heart, spine, eye lens, and brain regions for ODM and 3D-modulation (SmartmA) settings. Monte Carlo simulations, validated with experimental data, were performed on 28 voxelized head phantoms and 10 chest phantoms to quantify organ dose and noise standard deviation. The dose and noise effects of increasing the posterior tube current were also investigated. ODM reduced the dose for all experimental dosimeters with respect to SmartmA, with average dose reductions across dosimeters of 31% (breast), 21% (lung), 24% (heart), 6% (spine), 19% (eye lens), and 11% (brain), with similar results for the simulation validation study. In the phantom library study, the average dose reduction across all phantoms was 34% (breast), 20% (lung), 8% (spine), 20% (eye lens), and 8% (brain). ODM increased the noise standard deviation in reconstructed images by 6%-20%, with generally greater noise increases in anterior regions. Increasing the posterior tube current provided similar dose reduction as ODM for breast and eye lens, increased dose to the spine, with noise effects ranging from 2% noise reduction to 16% noise increase. At noise equal to SmartmA, ODM increased the estimated effective dose by 4% and 8% for chest and head scans, respectively. Increasing the posterior tube current further increased the effective dose by 15% (chest) and 18% (head) relative to SmartmA. ODM reduced dose in all experimental and simulation studies over a range of phantoms, while increasing noise. The results suggest a net dose/noise benefit for breast and eye lens for all studied phantoms, negligible lung dose effects for two phantoms, increased lung dose and/or noise for eight phantoms, and increased dose and/or noise for brain and spine for all studied phantoms compared to the reference protocol.

  12. 48 CFR 9904.413-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    9904.413-50 Section 9904.413-50, Federal Acquisition Regulations System; Cost Accounting Standards Board...; Cost Accounting Standards. 9904.413-50 Techniques for application. (a) Assignment of actuarial gains and losses. (1) In accordance with the provisions of Cost Accounting Standard 9904.412...

  13. Reconfigurable ultra-wideband waveform generation with simple photonic devices

    NASA Astrophysics Data System (ADS)

    Dastmalchi, Mansour; Abtahi, Mohammad; Lemus, David; Rusch, Leslie A.; LaRochelle, Sophie

    2012-08-01

    We propose and experimentally demonstrate a low-cost, low-power-consumption technique for ultra-wideband pulse shaping. Our approach is based on thermal apodization of two identical linearly chirped fiber Bragg gratings (LCFBG) placed in both arms of a balanced photodetector. Resistive heating elements with low electrical power consumption are used to tune the LCFBG spectral responses. Using a standard gain-switched distributed feedback laser as a pulsed optical source and a simple energy detector receiver, we measured a bit error rate of 1.5×10⁻⁴ at a data rate of 1 Gb/s after RF transmission over a 1-m link.

  14. The primacy of basics in advanced life support.

    PubMed

    Chamberlain, Douglas; Frenneaux, Michael; Fletcher, David

    2009-06-01

    The standards required for optimal effect of chest compressions and the degree to which most practice falls short of ideal have not been widely appreciated. This review highlights some of the important data now available and offers a haemodynamic explanation that broadens current concepts. New techniques have permitted a detailed examination of how compressions are performed in practice. The implications of recent experimental work adds a new imperative to the need for improvement. In addition to highlighting the need for improved training and audit, the greater understanding of mechanisms in resuscitation suggest that guidelines for management of adult cardiac arrest of presumed cardiac origin need further revision and simplification.

  15. Nano Peltier cooling device from geometric effects using a single graphene nanoribbon

    NASA Astrophysics Data System (ADS)

    Li, Wan-Ju; Yao, Dao-Xin; Carlson, Erica

    2012-02-01

    Based on the phenomenon of curvature-induced doping in graphene, we propose a class of Peltier cooling devices produced by geometrical effects, without gating. We show how a graphene nanoribbon laid on an array of curved nano-cylinders can be used to create a targeted cooling device. Using theoretical calculations and experimental inputs, we predict that the cooling power of such a device can approach 1 kW/cm^2, on par with the best known techniques using standard lithography methods. The structure proposed here helps pave the way toward designing graphene electronics that use geometry rather than gating to control devices.

  16. Recent Developments in Fluorescence Correlation Spectroscopy for Diffusion Measurements in Planar Lipid Membranes

    PubMed Central

    Macháň, Radek; Hof, Martin

    2010-01-01

    Fluorescence correlation spectroscopy (FCS) is a single-molecule technique used mainly for determination of the mobility and local concentration of molecules. This review describes the specific problems of FCS in planar systems and reviews state-of-the-art experimental approaches, such as 2-focus, Z-scan, or scanning FCS, which overcome most of the artefacts and limitations of standard FCS. We focus on diffusion measurements of lipids and proteins in planar lipid membranes and review the contributions of FCS to elucidating membrane dynamics and the factors influencing it, such as membrane composition, ionic strength, the presence of membrane proteins, or frictional coupling with the solid support. PMID:20386647

  17. Integral method for the calculation of Hawking radiation in dispersive media. I. Symmetric asymptotics.

    PubMed

    Robertson, Scott; Leonhardt, Ulf

    2014-11-01

    Hawking radiation has become experimentally testable thanks to the many analog systems which mimic the effects of the event horizon on wave propagation. These systems are typically dominated by dispersion and give rise to a numerically soluble and stable ordinary differential equation only if the rest-frame dispersion relation Ω²(k) is a polynomial of relatively low degree. Here we present a new method for the calculation of wave scattering in a one-dimensional medium of arbitrary dispersion. It views the wave equation as an integral equation in Fourier space, which can be solved using standard and efficient numerical techniques.

  18. Fermilab muon g-2 experiment

    NASA Astrophysics Data System (ADS)

    Gorringe, Tim

    2018-05-01

    The Fermilab muon g-2 experiment will measure the muon anomalous magnetic moment aμ to 140 ppb - a four-fold improvement over the earlier Brookhaven experiment. The measurement of aμ is well known as a unique test of the standard model with broad sensitivity to new interactions, particles and phenomena. The goal of 140 ppb is commensurate with ongoing improvements in the SM prediction of the anomalous moment and addresses the longstanding 3.5σ discrepancy between the BNL result and the SM prediction. In this article I discuss the physics motivation and experimental technique for measuring aμ, and the current status and the future work for the project.

  19. Molecular Dynamics Simulations of Nucleic Acids. From Tetranucleotides to the Ribosome.

    PubMed

    Šponer, Jiří; Banáš, Pavel; Jurečka, Petr; Zgarbová, Marie; Kührová, Petra; Havrila, Marek; Krepl, Miroslav; Stadlbauer, Petr; Otyepka, Michal

    2014-05-15

    We present a brief overview of explicit solvent molecular dynamics (MD) simulations of nucleic acids. We explain physical chemistry limitations of the simulations, namely, the molecular mechanics (MM) force field (FF) approximation and limited time scale. Further, we discuss relations and differences between simulations and experiments, compare standard and enhanced sampling simulations, discuss the role of starting structures, comment on different versions of nucleic acid FFs, and relate MM computations with contemporary quantum chemistry. Despite its limitations, we show that MD is a powerful technique for studying the structural dynamics of nucleic acids with a fast growing potential that substantially complements experimental results and aids their interpretation.

  20. Quartz crystal microbalance for the cardiac markers/antibodies binding kinetic measurements in the plasma samples

    NASA Astrophysics Data System (ADS)

    Agafonova, L. E.; Shumyantseva, V. V.; Archakov, A. I.

    2014-06-01

    The quartz crystal microbalance (QCM) was exploited for cardiac marker detection and kinetic studies of the immunochemical reaction of cardiac troponin I (cTnI) and human heart fatty acid binding protein (H-FABP) with the corresponding monoclonal antibodies in undiluted plasma (serum) and standard solutions. The QCM technique allowed dynamic monitoring of the kinetic differences in specific interactions and nonspecific sorption, without multiple labeling procedures and separation steps. The affinity binding process was characterized by the association (ka) and dissociation (kd) kinetic constants and the equilibrium association constant (K), all of which were obtained from experimental data.
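
    The kinetic constants quoted above are conventionally obtained by fitting a 1:1 Langmuir binding model to the sensorgram: for an analyte at constant concentration C, the association phase follows R(t) = R_eq(1 - exp(-(ka·C + kd)t)), with K = ka/kd. The sketch below fits the association phase of synthetic data; the concentration, noise level, and parameter values are invented for the example.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def association(t, R_eq, k_obs):
        # 1:1 Langmuir association phase; k_obs = ka*C + kd.
        return R_eq * (1.0 - np.exp(-k_obs * t))

    C = 50e-9                              # analyte concentration, M (assumed)
    t = np.linspace(0, 600, 200)           # time, s
    data = association(t, 20.0, 0.01) + 0.3 * np.random.randn(t.size)

    (R_eq, k_obs), _ = curve_fit(association, t, data, p0=(10.0, 0.005))
    # With kd from a separate dissociation-phase fit,
    # ka = (k_obs - kd) / C and K = ka / kd.
    print(f"R_eq = {R_eq:.2f}, k_obs = {k_obs:.4f} 1/s")
    ```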

  1. A novel carbon coating technique for foil bolometers

    NASA Astrophysics Data System (ADS)

    Sheikh, U. A.; Duval, B. P.; Labit, B.; Nespoli, F.

    2016-11-01

    Naked foil bolometers can reflect a significant fraction of incident energy and therefore cannot be used for absolute measurements. This paper outlines a novel coating approach to address this problem by blackening the surface of gold foil bolometers using physical vapour deposition. An experimental bolometer assembly was built containing four standard gold foil bolometers, two of which were coated with 100+ nm of carbon. All bolometers were collimated and observed the same relatively high temperature, ohmically heated plasma. Preliminary results showed 13%-15% more incident power was measured by the coated bolometers; this difference is expected to be much higher in future TCV detached divertor experiments.

  2. Scattering resonances in bimolecular collisions between NO radicals and H2 challenge the theoretical gold standard

    NASA Astrophysics Data System (ADS)

    Vogels, Sjoerd N.; Karman, Tijs; Kłos, Jacek; Besemer, Matthieu; Onvlee, Jolijn; van der Avoird, Ad; Groenenboom, Gerrit C.; van de Meerakker, Sebastiaan Y. T.

    2018-02-01

    Over the last 25 years, the formalism known as coupled-cluster (CC) theory has emerged as the method of choice for the ab initio calculation of intermolecular interaction potentials. The implementation known as CCSD(T) is often referred to as the gold standard in quantum chemistry. It gives excellent agreement with experimental observations for a variety of energy-transfer processes in molecular collisions, and it is used to calibrate density functional theory. Here, we present measurements of low-energy collisions between NO radicals and H2 molecules with a resolution that challenges the most sophisticated quantum chemistry calculations at the CCSD(T) level. Using hitherto-unexplored anti-seeding techniques to reduce the collision energy in a crossed-beam inelastic-scattering experiment, a resonance structure near 14 cm-1 is clearly resolved in the state-to-state integral cross-section, and a unique resonance fingerprint is observed in the corresponding differential cross-section. This resonance structure discriminates between two NO-H2 potentials calculated at the CCSD(T) level and pushes the required accuracy beyond the gold standard.

  3. The Standard Model: how far can it go and how can we tell?

    PubMed

    Butterworth, J M

    2016-08-28

    The Standard Model of particle physics encapsulates our current best understanding of physics at the smallest distances and highest energies. It incorporates quantum electrodynamics (the quantized version of Maxwell's electromagnetism) and the weak and strong interactions, and has survived unmodified for decades, save for the inclusion of non-zero neutrino masses after the observation of neutrino oscillations in the late 1990s. It describes a vast array of data over a wide range of energy scales. I review a selection of these successes, including the remarkably successful prediction of a new scalar boson, a qualitatively new kind of object observed in 2012 at the Large Hadron Collider. New calculational techniques and experimental advances challenge the Standard Model across an ever-wider range of phenomena, now extending significantly above the electroweak symmetry breaking scale. I will outline some of the consequences of these new challenges, and briefly discuss what is still to be found. This article is part of the themed issue 'Unifying physics and technology in light of Maxwell's equations'. © 2016 The Author(s).

  4. Direct frequency comb optical frequency standard based on two-photon transitions of thermal atoms

    PubMed Central

    Zhang, S. Y.; Wu, J. T.; Zhang, Y. L.; Leng, J. X.; Yang, W. P.; Zhang, Z. G.; Zhao, J. Y.

    2015-01-01

    Optical clocks have been the focus of science and technology research due to their capability to provide the highest frequency accuracy and stability to date. Their superior frequency performance promises significant advances in fundamental research as well as practical applications, including satellite-based navigation and ranging. In traditional optical clocks, ultrastable optical cavities, laser cooling, and particle (atoms or a single ion) trapping techniques are employed to guarantee high stability and accuracy. On the other hand, these techniques make an optical clock fill an entire optical table with equipment and prevent it from working continuously for long periods; as a result, they keep optical clocks from being used as convenient, compact time-keeping clocks. In this article, we propose, and experimentally demonstrate, a novel scheme for an optical frequency standard based on comb-directly-excited atomic two-photon transitions. By taking advantage of the natural properties of the comb and of two-photon transitions, this frequency standard achieves a simplified structure, high robustness, and decent frequency stability, which promise widespread applications in various scenarios. PMID:26459877

  5. Surgery or Rehabilitation: A Randomized Clinical Trial Comparing the Treatment of Vocal Fold Polyps via Phonosurgery and Traditional Voice Therapy with "Voice Therapy Expulsion" Training.

    PubMed

    Barillari, Maria Rosaria; Volpe, Umberto; Mirra, Giuseppina; Giugliano, Francesco; Barillari, Umberto

    2017-05-01

    Phonomicrosurgery is generally considered to be the treatment of choice for removing vocal fold polyps. However, specific techniques of voice therapy may represent, in selected cases and under certain conditions, a noninvasive therapeutic option for the treatment of such laryngeal lesions. The aim of the present study is to longitudinally assess, in terms of clinical outcomes and quality of life, two groups of patients with cordal polyps, treated either with standard surgery plus standard voice therapy or with a specific training of voice therapy alone, which we have called "Voice Therapy Expulsion." This study is a randomized controlled trial. A total of 150 patients with vocal fold polyps were randomly assigned to either standard surgery or "voice therapy expulsion" protocol. The trial was carried out at the Division of Phoniatrics and Audiology of the Second University of Naples and at the Division of Communication Disorders of Local Health Unit (3 Naples South) from January 2010 to December 2013. A thorough phoniatric evaluation, including laryngostroboscopy, acoustic voice analysis, global grade of dysphonia, instability, roughness, breathiness, asthenia, and strain scale, Voice Handicap Index, and Voice-Related Quality of Life, was performed by using standardized tools, at baseline, at the end of the treatment, and up to 1 year after treatment. We found no significant differences between the two experimental groups in terms of clinical outcomes and personal satisfaction. However, "Voice Therapy Expulsion" was associated with higher scores for quality of life at endpoint evaluation. Besides phonosurgery, this specific "Voice Therapy Expulsion" technique should be considered as a valid, noninvasive, and well-tolerated therapeutic option for the treatment of selected patients with vocal fold polyps. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  6. Update on fertility preservation in young women undergoing breast cancer and ovarian cancer therapy.

    PubMed

    Lambertini, Matteo; Ginsburg, Elizabeth S; Partridge, Ann H

    2015-02-01

    The purpose of the article is to review the available options for fertility preservation in patients with breast and ovarian cancer, and the special issues faced by BRCA mutation carriers. Future fertility is an important consideration for many young patients with cancer. There are both experimental and standard available strategies for patients with breast and ovarian cancer to preserve fertility, and each has potential advantages and disadvantages. Embryo cryopreservation is widely available with a highly successful track record. Improvements in laboratory techniques have led to oocyte cryopreservation recently being recategorized as nonexperimental. Conservative gynecologic surgery is a standard consideration for patients with stage I ovarian cancer who desire future fertility. Ovarian tissue cryopreservation as well as ovarian suppression with luteinizing hormone-releasing hormone analogs during chemotherapy are considered experimental methods at this time, although recent data suggest both safety and efficacy for the use of luteinizing hormone-releasing hormone analogs in women receiving (neo)adjuvant chemotherapy for breast cancer. Special issues should be considered for women with BRCA mutations because of the need to undergo preventive surgery at young age. Multidisciplinary teams and well functioning relationships between the oncology and reproductive units are crucial to manage the fertility issues of young women with cancer.

  7. A Hertzian contact mechanics based formulation to improve ultrasound elastography assessment of uterine cervical tissue stiffness.

    PubMed

    Briggs, Brandi N; Stender, Michael E; Muljadi, Patrick M; Donnelly, Meghan A; Winn, Virginia D; Ferguson, Virginia L

    2015-06-25

    Clinical practice requires improved techniques to assess human cervical tissue properties, especially at the internal os, or orifice, of the uterine cervix. Ultrasound elastography (UE) holds promise for non-invasively monitoring cervical stiffness throughout pregnancy. However, this technique provides qualitative strain images that cannot be linked to a material property (e.g., Young's modulus) without knowledge of the contact pressure under a rounded transvaginal transducer probe and correction for the resulting non-uniform strain dissipation. One technique to standardize elastogram images incorporates a material of known properties and uses one-dimensional, uniaxial Hooke's law to calculate Young's modulus within the compressed material half-space. However, this method does not account for strain dissipation and the strains that evolve in three-dimensional space. We demonstrate that an analytical approach based on 3D Hertzian contact mechanics provides a reasonable first approximation to correct for UE strain dissipation underneath a round transvaginal transducer probe and thus improves UE-derived estimates of tissue modulus. We validate the proposed analytical solution and evaluate sources of error using a finite element model. As compared to 1D uniaxial Hooke's law, the Hertzian contact-based solution yields significantly improved Young's modulus predictions in three homogeneous gelatin tissue phantoms possessing different moduli. We also demonstrate the feasibility of using this technique to image human cervical tissue, where UE-derived moduli estimations for the uterine cervix anterior lip agreed well with published, experimentally obtained values. Overall, UE with an attached reference standard and a Hertzian contact-based correction holds promise for improving quantitative estimates of cervical tissue modulus. Copyright © 2015 Elsevier Ltd. All rights reserved.
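
    To make the correction above concrete, here is a minimal Python sketch (all numbers hypothetical, with the axial-stress decay of an idealized Hertzian spherical contact standing in for the paper's full analytical solution) that contrasts the 1D Hooke's-law modulus estimate with a Hertzian-corrected one:

      import numpy as np

      # Minimal sketch: correct ultrasound-elastography strain ratios for the
      # depth decay of axial stress under a round (Hertzian) indenter. Along
      # the contact axis, sigma_z(z) = p0 / (1 + (z/a)^2) (Johnson, 1985).

      def hertz_axial_stress_factor(z, a):
          """Normalized axial stress at depth z under a contact of radius a."""
          return 1.0 / (1.0 + (z / a) ** 2)

      def modulus_hooke(E_ref, strain_ref, strain_tissue):
          """1D uniaxial Hooke estimate: same stress assumed at both depths."""
          return E_ref * strain_ref / strain_tissue

      def modulus_hertz(E_ref, strain_ref, z_ref, strain_tissue, z_tissue, a):
          """Hertzian estimate: rescale each strain by the local axial stress."""
          s_ref = hertz_axial_stress_factor(z_ref, a)
          s_tis = hertz_axial_stress_factor(z_tissue, a)
          return E_ref * (strain_ref / s_ref) / (strain_tissue / s_tis)

      # Hypothetical numbers: 5 mm contact radius, reference layer at 2 mm,
      # tissue at 15 mm depth, strains read from the elastogram.
      E_ref = 30.0        # kPa, known reference-standard modulus
      a = 5e-3            # m, contact radius
      print(modulus_hooke(E_ref, 0.020, 0.012))
      print(modulus_hertz(E_ref, 0.020, 2e-3, 0.012, 15e-3, a))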

  8. Dimensional accuracy of pickup implant impression: an in vitro comparison of novel modular versus standard custom trays.

    PubMed

    Simeone, Piero; Valentini, Pier Paolo; Pizzoferrato, Roberto; Scudieri, Folco

    2011-01-01

    The purpose of this in vitro study was to compare the dimensional accuracy of the pickup impression technique using a modular individual tray (MIT) and a standard individual tray (ST) for multiple internal-connection implants. The roles of both material and geometric misfits were considered. First, because the MIT relies on the stiffness and elasticity of the acrylic resin material, a preliminary investigation of the resin volume contraction during curing and polymerization was performed. Then, two sets of specimens were tested to compare the accuracy of the MIT (test group) with that of the ST (control group). The linear and angular displacements of the transfer copings were measured and compared during three different stages of the impression procedure. Experimental measurements were performed with a computerized coordinate measuring machine. The curing dynamics of the acrylic resin depended strongly on the physical properties of the acrylic material and the powder/liquid ratio. Specifically, an increase in the powder/liquid ratio accelerated resin polymerization (curing time decreased by 70%) and reduced the final volume contraction by 45%. However, the total shrinkage never exceeded the elastic limits of the material; hence, it did not affect the copings' stability. In the test group, linear errors were reduced by 55% and angular errors by 65%. Linear and angular displacements of the transfer copings were significantly reduced with the MIT technique, which led to higher dimensional accuracy than in the ST group. The MIT approach, in combination with a thin and uniform amount of acrylic resin in the pickup impression technique, showed no significant permanent distortions for multiple misaligned internal-connection implants compared to the ST technique.

  9. Development and experimental study of oil-free capacitor module for plasma focus device

    NASA Astrophysics Data System (ADS)

    Sharma, Ravindra Kumar; Sharma, Archana

    2017-03-01

    This work concerns a compact capacitor module for a plasma focus device. Oil-free capacitors of non-standard geometry were designed and developed to deliver high current on sub-microsecond timescales. Metalized-dielectric-film pulse capacitors become progressively less viable at currents above 10 kA because of reliability and energy-scaling difficulties arising from effects such as vaporization, high resistivity, and end connections. Bipolar electrolytic capacitors are also not preferred, owing to their limited life and comparatively low peak current delivery. Biaxially oriented polypropylene (BOPP) film with extended aluminum foil is a combination capable of delivering moderately high power, but electrically weak points, relative permittivity, and edge-gap margins have made its adoption difficult. A concept has been developed in the laboratory for implementing this combination in a less complex and less costly manner. This paper describes the development and testing techniques for rather unusual hollow cylindrical, oil-free capacitors (4 μF, 10 kV, 20 nH). A shot life of 1000 discharges has been demonstrated experimentally on the test bed at the rated energy density. The technological methods and engineering techniques are now available and are being used for manufacturing and testing BOPP-film-based oil-free capacitors.

  10. Ground deposition of liquid droplets released from a point source in the atmospheric surface layer

    NASA Astrophysics Data System (ADS)

    Panneton, Bernard

    1989-01-01

    A series of field experiments is presented in which the ground deposition of liquid droplets, 120 and 150 microns in diameter, released from a point source 7 m above ground level, was measured. A detailed description of the experimental technique is provided, and the results are presented and compared with the predictions of several models. A new rotating droplet generator is described: droplets are produced by the forced breakup of capillary liquid jets, and droplet coalescence is inhibited by the rotational motion of the spray head. The two-dimensional deposition patterns are presented as plots of contours of constant density, normalized arcwise distributions, and crosswind-integrated distributions. The arcwise distributions follow a Gaussian distribution whose standard deviation is evaluated using a modified Pasquill technique. Models of the crosswind-integrated deposit from Godson, Csanady, Walker, Bache and Sayer, and Wilson et al. are evaluated. The results indicate that the Wilson et al. random-walk model is adequate for predicting the ground deposition of the 150 micron droplets. In the one case where the ratio of the droplet settling velocity to the mean wind speed was largest, Walker's model also proved adequate. Otherwise, none of the models were acceptable in light of the experimental data.
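
    The arcwise analysis reduces to a Gaussian fit; a minimal sketch with synthetic, hypothetical deposit data (not the paper's measurements) recovers the crosswind spread:

      import numpy as np
      from scipy.optimize import curve_fit

      # Fit a Gaussian to an arcwise deposit distribution to recover the
      # crosswind spread (standard deviation). Data below are synthetic.

      def gaussian(theta, amp, mu, sigma):
          return amp * np.exp(-0.5 * ((theta - mu) / sigma) ** 2)

      theta = np.linspace(-40, 40, 17)                  # arc angle, degrees
      deposit = gaussian(theta, 1.0, 2.0, 12.0)         # synthetic "truth"
      deposit += 0.05 * np.random.default_rng(0).normal(size=theta.size)

      popt, _ = curve_fit(gaussian, theta, deposit, p0=(1.0, 0.0, 10.0))
      amp, mu, sigma = popt
      print(f"fitted crosswind spread sigma = {sigma:.1f} deg")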

  11. Thirst-dependent risk preferences in monkeys identify a primitive form of wealth.

    PubMed

    Yamada, Hiroshi; Tymula, Agnieszka; Louie, Kenway; Glimcher, Paul W

    2013-09-24

    Experimental economic techniques have been widely used to evaluate human risk attitudes, but how these measured attitudes relate to overall individual wealth levels is unclear. Previous noneconomic work has addressed this uncertainty in animals by asking the following: (i) Do our close evolutionary relatives share both our risk attitudes and our degree of economic rationality? And (ii) how does the amount of food or water one holds (a nonpecuniary form of "wealth") alter risk attitudes in these choosers? Unfortunately, existing noneconomic studies have provided conflicting insights from an economic point of view. We therefore used standard techniques from human experimental economics to measure monkey risk attitudes for water rewards as a function of blood osmolality (an objective measure of how much water the subjects possess). Early in training, monkeys behaved randomly, consistently violating first-order stochastic dominance and monotonicity. After training, they behaved like human choosers--technically consistent in their choices and weakly risk averse (i.e., risk averse or risk neutral on average)--suggesting that well-trained monkeys can serve as a model for human choice behavior. As with attitudes about money in humans, these risk attitudes were strongly wealth dependent; as the animals became "poorer," risk aversion increased, a finding incompatible with some models of wealth and risk in human decision making.

  12. Computational design of cyclic peptides for the customized oriented immobilization of globular proteins.

    PubMed

    Soler, Miguel A; Rodriguez, Alex; Russo, Anna; Adedeji, Abimbola Feyisara; Dongmo Foumthuim, Cedrix J; Cantarutti, Cristina; Ambrosetti, Elena; Casalis, Loredana; Corazza, Alessandra; Scoles, Giacinto; Marasco, Daniela; Laio, Alessandro; Fortuna, Sara

    2017-01-25

    The oriented immobilization of proteins, key for the development of novel responsive biomaterials, relies on the availability of effective probes. These are generally provided by standard approaches based on in vivo maturation and in vitro selection of antibodies and/or aptamers. These techniques can suffer technical problems when a non-immunogenic epitope needs to be targeted. Here we propose a strategy to circumvent this issue by in silico design. In our method molecular binders, in the form of cyclic peptides, are computationally evolved by stochastically exploring their sequence and structure space to identify high-affinity peptides for a chosen epitope of a target globular protein: here a solvent-exposed site of β2-microglobulin (β2m). Designed sequences were screened by explicit solvent molecular dynamics simulations (MD) followed by experimental validation. Five candidates gave dose-response surface plasmon resonance signals with dissociation constants in the micromolar range. One of them was further analyzed by means of isothermal titration calorimetry, nuclear magnetic resonance, and 250 ns of MD. Atomic-force microscopy imaging showed that this peptide is able to immobilize β2m on a gold surface. In short, we have shown by a variety of experimental techniques that it is possible to capture a protein through an epitope of choice by computational design.

  13. Development and experimental study of oil-free capacitor module for plasma focus device.

    PubMed

    Sharma, Ravindra Kumar; Sharma, Archana

    2017-03-01

    This work concerns a compact capacitor module for a plasma focus device. Oil-free capacitors of non-standard geometry were designed and developed to deliver high current on sub-microsecond timescales. Metalized-dielectric-film pulse capacitors become progressively less viable at currents above 10 kA because of reliability and energy-scaling difficulties arising from effects such as vaporization, high resistivity, and end connections. Bipolar electrolytic capacitors are also not preferred, owing to their limited life and comparatively low peak current delivery. Biaxially oriented polypropylene (BOPP) film with extended aluminum foil is a combination capable of delivering moderately high power, but electrically weak points, relative permittivity, and edge-gap margins have made its adoption difficult. A concept has been developed in the laboratory for implementing this combination in a less complex and less costly manner. This paper describes the development and testing techniques for rather unusual hollow cylindrical, oil-free capacitors (4 μF, 10 kV, 20 nH). A shot life of 1000 discharges has been demonstrated experimentally on the test bed at the rated energy density. The technological methods and engineering techniques are now available and are being used for manufacturing and testing BOPP-film-based oil-free capacitors.

  14. Random field assessment of nanoscopic inhomogeneity of bone

    PubMed Central

    Dong, X. Neil; Luo, Qing; Sparkman, Daniel M.; Millwater, Harry R.; Wang, Xiaodu

    2010-01-01

    Bone quality is significantly correlated with the inhomogeneous distribution of material and ultrastructural properties (e.g., modulus and mineralization) of the tissue. Current techniques for quantifying inhomogeneity rely on descriptive statistics such as the mean, standard deviation, and coefficient of variation, but these parameters do not describe the spatial variation of bone properties. The objective of this study was to develop a novel statistical method to characterize and quantitatively describe the spatial variation of bone properties at the ultrastructural level. To do so, a random field defined by an exponential covariance function was used to represent the spatial uncertainty of the elastic modulus by delineating the correlation of the modulus at different locations in bone lamellae. The correlation length, a characteristic parameter of the covariance function, was employed to estimate the fluctuation of the elastic modulus in the random field. Using this approach, two distribution maps of the elastic modulus within bone lamellae were generated by simulation and compared with maps obtained experimentally by a combination of atomic force microscopy and nanoindentation. The simulation-generated maps of elastic modulus were in close agreement with the experimental ones, validating the random field approach for describing the inhomogeneity of the elastic modulus in bone lamellae. Generation of such random fields will facilitate multi-scale modeling of bone in more pragmatic detail. PMID:20817128
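
    A minimal sketch of the sampling step, assuming a stationary Gaussian field with exponential covariance C(r) = s2·exp(-r/L) on a small grid (grid size and parameter values hypothetical, not taken from the paper):

      import numpy as np

      # Sample a Gaussian random field of elastic modulus with exponential
      # covariance C(r) = s2 * exp(-r / L), where L is the correlation length,
      # via Cholesky factorization of the covariance matrix.

      rng = np.random.default_rng(1)
      n = 24                       # 24 x 24 grid of indentation sites
      L = 3.0                      # correlation length, grid units
      s2 = 4.0                     # variance of modulus fluctuations (GPa^2)
      mean_E = 20.0                # mean lamellar modulus (GPa)

      x, y = np.meshgrid(np.arange(n), np.arange(n))
      pts = np.column_stack([x.ravel(), y.ravel()]).astype(float)
      r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
      cov = s2 * np.exp(-r / L) + 1e-10 * np.eye(n * n)  # jitter for stability

      field = mean_E + np.linalg.cholesky(cov) @ rng.standard_normal(n * n)
      E_map = field.reshape(n, n)   # one realization of the modulus map
      print(E_map.mean(), E_map.std())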

  15. Application of Advanced Materials Protecting from Influence of Free Space Environment

    NASA Astrophysics Data System (ADS)

    Dotsenko, Oleg; Shovkoplyas, Yuriy

    2016-07-01

    The high cost and low availability of components certified for use in the space environment force satellite designers to use industrial and even commercial items. The risks associated with insufficient knowledge of how these components behave in a radiation environment are mitigated mainly through careful radiation design of the satellite, in which the application of special protective materials with improved space-radiation shielding characteristics is one of the most widely used practices. Another advantage of protective materials appears when a satellite designer needs to use equipment in more severe space-environment conditions than those for which the equipment was developed. In such cases, the only alternative to protective materials is an expensive requalification of the equipment's hardness, a path that is usually unacceptable for satellite developers working under tight financial and schedule constraints. To apply protective materials effectively, the developer must be able to answer the question: "Where inside the satellite should these materials be placed, and what shape should they have, to meet the space-radiation hardness requirements with minimal mass and volume penalties?" The minimum set of radiation-hardness requirements includes ionizing dose, non-ionizing dose, single events, and internal charging. The standard computational models and experimental techniques now used for the space-radiation hardness assurance of satellites are unsuitable for solving the problem in this formulation. The sector-analysis methodology widely used in satellite radiation design is applicable only to aluminium shielding and cannot account for the advantages of protective materials. Programs that simulate the transport of space radiation through matter using Monte-Carlo techniques, such as GEANT4, FLUKA, HZETRN, and others, are fully applicable in terms of their capabilities, but the time required for calculations with these tools makes their use in engineering practice extremely problematic. The computational and experimental technique developed by the authors allows estimation of the ionizing dose, non-ionizing dose, single events, and internal charging due to solar and trapped electron and proton radiation at requested points inside a satellite when special protective materials are applied. The results obtained with the developed technique are in satisfactory agreement with those achieved using the standard computational models.

  16. Generalized Procedure for Improved Accuracy of Thermal Contact Resistance Measurements for Materials With Arbitrary Temperature-Dependent Thermal Conductivity

    DOE PAGES

    Sayer, Robert A.

    2014-06-26

    Thermal contact resistance (TCR) is most commonly measured using one-dimensional steady-state calorimetric techniques, in which a temperature gradient is applied across two contacting beams and the temperature drop at the interface is inferred from the temperature profiles of the rods, measured at discrete points. During data analysis, the thermal conductivity of the beams is typically taken to be an average value over the temperature range imposed during the experiment. A generalized theory is presented that accounts for temperature-dependent changes in thermal conductivity. The procedure enables accurate measurement of TCR for contacting materials whose thermal conductivity is any arbitrary function of temperature. For example, it is shown that the standard technique yields TCR values about 15% below the actual value for two specific examples of copper and silicon contacts, whereas the generalized technique predicts TCR values within 1% of the actual value. The method is exact when the thermal conductivity is known exactly and no other errors are introduced into the system.
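
    One conventional way to realize such a generalized reduction is through the Kirchhoff transform, under which the potential theta(T) = ∫ k(T) dT varies linearly with position at steady state. The sketch below (hypothetical k(T) law, thermocouple positions, and temperatures; not necessarily the paper's exact procedure) extrapolates theta to the interface in each rod and divides the interface temperature drop by the heat flux:

      import numpy as np

      # With k = k(T), fit theta(T_i) vs x_i linearly in each rod, extrapolate
      # to the interface, invert for interface temperatures, divide by flux.

      def k_of_T(T):                # mildly decreasing conductivity, W/m-K
          return 400.0 - 0.06 * T

      def theta(T):                 # Kirchhoff potential (antiderivative of k)
          return 400.0 * T - 0.03 * T**2

      def invert_theta(th):         # solve theta(T) = th (physical root)
          return (400.0 - np.sqrt(400.0**2 - 4 * 0.03 * th)) / (2 * 0.03)

      # Thermocouple positions (m) and temperatures (K); interface at x = 0.05 m.
      x_hot = np.array([0.01, 0.02, 0.03, 0.04])
      T_hot = np.array([380.0, 365.0, 350.0, 335.0])
      x_cold = np.array([0.06, 0.07, 0.08, 0.09])
      T_cold = np.array([300.0, 285.0, 270.0, 255.0])

      p_hot = np.polyfit(x_hot, theta(T_hot), 1)    # theta ~ linear in x
      p_cold = np.polyfit(x_cold, theta(T_cold), 1)
      q = -p_hot[0]                                 # flux = -d(theta)/dx, W/m^2

      T_hot_if = invert_theta(np.polyval(p_hot, 0.05))
      T_cold_if = invert_theta(np.polyval(p_cold, 0.05))
      print("TCR =", (T_hot_if - T_cold_if) / q, "m^2-K/W")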

  17. Infrared and visible image fusion using discrete cosine transform and swarm intelligence for surveillance applications

    NASA Astrophysics Data System (ADS)

    Paramanandham, Nirmala; Rajendiran, Kishore

    2018-01-01

    A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or different sensing modalities can deliver information that cannot be obtained by viewing the sensor outputs individually and consecutively. In this paper, a swarm-intelligence-based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications, integrating the infrared image with the visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factor. These optimized weighting factors are used to fuse the DCT coefficients of the visible and infrared images, and the inverse DCT yields the initial fused image. An enhanced fused image is then obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy, and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many state-of-the-art techniques reported in the literature.
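
    A bare-bones sketch of the fusion idea, with random arrays standing in for the infrared and visible images and the fused image's standard deviation as a simplified fitness; the full method also uses block processing, further metrics, and adaptive histogram equalization:

      import numpy as np
      from scipy.fft import dctn, idctn

      # Fuse DCT coefficients of an infrared and a visible image with a single
      # weighting factor chosen by a tiny particle swarm.

      rng = np.random.default_rng(2)
      ir = rng.random((64, 64))         # stand-in infrared image
      vis = rng.random((64, 64))        # stand-in visible image
      D_ir = dctn(ir, norm="ortho")
      D_vis = dctn(vis, norm="ortho")

      def fuse(w):
          return idctn(w * D_ir + (1.0 - w) * D_vis, norm="ortho")

      def fitness(w):
          return fuse(w).std()          # maximize contrast of the fused image

      # Bare-bones PSO over the scalar weight w in [0, 1].
      w = rng.random(12)
      v = np.zeros(12)
      pbest = w.copy()
      pval = np.array([fitness(x) for x in w])
      g = pbest[pval.argmax()]
      for _ in range(40):
          r1, r2 = rng.random(12), rng.random(12)
          v = 0.7 * v + 1.5 * r1 * (pbest - w) + 1.5 * r2 * (g - w)
          w = np.clip(w + v, 0.0, 1.0)
          val = np.array([fitness(x) for x in w])
          improved = val > pval
          pbest[improved], pval[improved] = w[improved], val[improved]
          g = pbest[pval.argmax()]
      print("optimized weighting factor:", g)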

  18. In vivo visualization and ex vivo quantification of experimental myocardial infarction by indocyanine green fluorescence imaging

    PubMed Central

    Sonin, Dmitry; Papayan, Garry; Pochkaeva, Evgeniia; Chefu, Svetlana; Minasian, Sarkis; Kurapeev, Dmitry; Vaage, Jarle; Petrishchev, Nickolay; Galagudza, Michael

    2016-01-01

    The fluorophore indocyanine green accumulates in areas of ischemia-reperfusion injury due to an increase in vascular permeability and extravasation of the dye. The aim of the study was to validate an indocyanine green-based technique of in vivo visualization of myocardial infarction. A further aim was to quantify infarct size ex vivo and compare this technique with the standard triphenyltetrazolium chloride staining. Wistar rats were subjected to regional myocardial ischemia (30 minutes) followed by reperfusion (n = 7). Indocyanine green (0.25 mg/mL in 1 mL of normal saline) was infused intravenously for 10 minutes starting from the 25th minute of ischemia. Video registration in the near-infrared fluorescence was performed. Epicardial fluorescence of indocyanine green corresponded to the injured area after 30 minutes of reperfusion. Infarct size was similar when determined ex vivo using traditional triphenyltetrazolium chloride assay and indocyanine green fluorescent labeling. Intravital visualization of irreversible injury can be done directly by fluorescence on the surface of the heart. This technique may also be an alternative for ex vivo measurements of infarct size. PMID:28101408

  19. In vivo visualization and ex vivo quantification of experimental myocardial infarction by indocyanine green fluorescence imaging.

    PubMed

    Sonin, Dmitry; Papayan, Garry; Pochkaeva, Evgeniia; Chefu, Svetlana; Minasian, Sarkis; Kurapeev, Dmitry; Vaage, Jarle; Petrishchev, Nickolay; Galagudza, Michael

    2017-01-01

    The fluorophore indocyanine green accumulates in areas of ischemia-reperfusion injury due to an increase in vascular permeability and extravasation of the dye. The aim of the study was to validate an indocyanine green-based technique of in vivo visualization of myocardial infarction. A further aim was to quantify infarct size ex vivo and compare this technique with the standard triphenyltetrazolium chloride staining. Wistar rats were subjected to regional myocardial ischemia (30 minutes) followed by reperfusion (n = 7). Indocyanine green (0.25 mg/mL in 1 mL of normal saline) was infused intravenously for 10 minutes starting from the 25th minute of ischemia. Video registration in the near-infrared fluorescence was performed. Epicardial fluorescence of indocyanine green corresponded to the injured area after 30 minutes of reperfusion. Infarct size was similar when determined ex vivo using traditional triphenyltetrazolium chloride assay and indocyanine green fluorescent labeling. Intravital visualization of irreversible injury can be done directly by fluorescence on the surface of the heart. This technique may also be an alternative for ex vivo measurements of infarct size.

  20. Pulse-shape discrimination techniques for the COBRA double beta-decay experiment at LNGS

    NASA Astrophysics Data System (ADS)

    Zatschler, S.; COBRA Collaboration

    2017-09-01

    In modern elementary particle physics several questions arise from the fact that neutrino oscillation experiments have found neutrinos to be massive. Among them is the so far unknown nature of neutrinos: either they act as so-called Majorana particles, where one cannot distinguish between particle and antiparticle, or they are Dirac particles like all the other fermions in the Standard Model. The study of neutrinoless double beta-decay (0νββ-decay), where the lepton number conservation is violated by two units, could answer the question regarding the underlying nature of neutrinos and might also shed light on the mechanism responsible for the mass generation. So far there is no experimental evidence for the existence of 0νββ-decay, hence, existing experiments have to be improved and novel techniques should be explored. One of the next-generation experiments dedicated to the search for this ultra-rare decay is the COBRA experiment. This article gives an overview of techniques to identify and reject background based on pulse-shape discrimination.

  1. Digital implementation of a laser frequency stabilisation technique in the telecommunications band

    NASA Astrophysics Data System (ADS)

    Jivan, Pritesh; van Brakel, Adriaan; Manuel, Rodolfo Martínez; Grobler, Michael

    2016-02-01

    Laser frequency stabilisation in the telecommunications band was realised using the Pound-Drever-Hall (PDH) error signal. The transmission spectrum of the Fabry-Perot cavity was used, as opposed to the traditionally used reflected spectrum. A comparison was made between an analogue and a digitally implemented system. This study forms part of an initial step towards developing a portable optical time and frequency standard. The frequency discriminator used in the experimental setup was a fibre-based Fabry-Perot etalon. The phase-sensitive system made use of the optical heterodyne technique to detect changes in the phase of the system. A lock-in amplifier was used to filter and mix the input signals to generate the error signal, which may then be used to generate a control signal via a PID controller. An error signal was realised at a wavelength of 1556 nm, which corresponds to an optical frequency of about 192.6 THz. The analogue implementation of the PDH technique yielded an error signal with a bandwidth of 6.134 GHz, while the digital implementation yielded a bandwidth of 5.774 GHz.
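
    The lock-in step can be illustrated with a toy model (not a full PDH simulation; the Lorentzian transmission, dither amplitude, and gains are all hypothetical): dither the detuning, demodulate against the reference, low-pass to obtain a derivative-like error signal, and apply one PI correction:

      import numpy as np

      # Dither across a Lorentzian cavity transmission, demodulate at the
      # dither frequency, low-pass, and take one PI step toward line center.

      fwhm = 1.0                                  # cavity linewidth (arb. units)
      def transmission(delta):                    # Lorentzian transmission peak
          return 1.0 / (1.0 + (2.0 * delta / fwhm) ** 2)

      t = np.linspace(0.0, 1.0, 20000)
      f_mod = 200.0                               # dither frequency (Hz)
      dither = 0.05 * np.sin(2 * np.pi * f_mod * t)

      def error_signal(detuning):
          sig = transmission(detuning + dither)
          demod = sig * np.sin(2 * np.pi * f_mod * t)   # mix with reference
          return 2.0 * demod.mean()                     # low-pass (average)

      # One PI correction (D term omitted); the sign convention is chosen so a
      # negative error at positive detuning pulls back toward line center.
      kp, ki, integ, detuning = 2.0, 0.5, 0.0, 0.3
      err = error_signal(detuning)
      integ += err
      detuning += kp * err + ki * integ
      print("error:", err, "-> corrected detuning:", detuning)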

  2. Characterization of polymerized liposomes using a combination of dc and cyclical electrical field-flow fractionation.

    PubMed

    Sant, Himanshu J; Chakravarty, Siddharth; Merugu, Srinivas; Ferguson, Colin G; Gale, Bruce K

    2012-10-02

    Characterization of polymerized liposomes (PolyPIPosomes) was carried out using a combination of normal dc electrical field-flow fractionation and cyclical electrical field-flow fractionation (CyElFFF) as an analytical technique. The constant nature of the carrier fluid and channel configuration for this technique eliminates many variables associated with multidimensional analysis. CyElFFF uses an oscillating field to induce separation and is performed in the same channel as standard dc electrical field-flow fractionation separation. Theory and experimental methods to characterize nanoparticles in terms of their sizes and electrophoretic mobilities are discussed in this paper. Polystyrene nanoparticles are used for system calibration and characterization of the separation performance, whereas polymerized liposomes are used to demonstrate the applicability of the system to biomedical samples. This paper is also the first to report separation and a higher effective field when CyElFFF is operated at very low applied voltages. The technique is shown to have the ability to quantify both particle size and electrophoretic mobility distributions for colloidal polystyrene nanoparticles and PolyPIPosomes.

  3. Impact of corpus domain for sentiment classification: An evaluation study using supervised machine learning techniques

    NASA Astrophysics Data System (ADS)

    Karsi, Redouane; Zaim, Mounia; El Alami, Jamila

    2017-07-01

    Thanks to the development of the internet, a large community now has the possibility to communicate and express its opinions and preferences through multiple media such as blogs, forums, social networks, and e-commerce sites. It is increasingly clear that opinions published on the web are a very valuable source for decision-making, and a rapidly growing field of research called "sentiment analysis" has emerged to address the problem of automatically determining the polarity (positive, negative, neutral, ...) of textual opinions. People expressing themselves in a particular domain often use domain-specific expressions, so building a classifier that performs well across different domains is a challenging problem. The purpose of this paper is to evaluate the impact of corpus domain on sentiment classification when using machine learning techniques. In our study, three popular machine learning techniques, Support Vector Machines (SVM), Naive Bayes, and K-nearest neighbors (KNN), were applied to datasets collected from different domains. Experimental results show that Support Vector Machines outperform the other classifiers in all domains, achieving at least 74.75% accuracy with a standard deviation of 4.08.
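
    A minimal sketch of such a comparison with scikit-learn, using a tiny invented corpus in place of the study's domain datasets (a real evaluation would use per-domain corpora and cross-validation to obtain mean accuracies and standard deviations):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      # Train SVM, Naive Bayes, and KNN sentiment classifiers on a toy
      # labeled corpus and compare test accuracies.
      train_texts = ["great product, works perfectly", "terrible, broke in a day",
                     "I love this phone", "awful battery life",
                     "excellent camera", "waste of money"]
      train_labels = [1, 0, 1, 0, 1, 0]        # 1 = positive, 0 = negative
      test_texts = ["love the battery", "terrible camera"]
      test_labels = [1, 0]

      for name, clf in [("SVM", LinearSVC()),
                        ("NaiveBayes", MultinomialNB()),
                        ("KNN", KNeighborsClassifier(n_neighbors=3))]:
          model = make_pipeline(TfidfVectorizer(), clf)
          model.fit(train_texts, train_labels)
          print(name, model.score(test_texts, test_labels))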

  4. Technology and Technique Standards for Camera-Acquired Digital Dermatologic Images: A Systematic Review.

    PubMed

    Quigley, Elizabeth A; Tokay, Barbara A; Jewell, Sarah T; Marchetti, Michael A; Halpern, Allan C

    2015-08-01

    Photographs are invaluable dermatologic diagnostic, management, research, teaching, and documentation tools. Digital Imaging and Communications in Medicine (DICOM) standards exist for many types of digital medical images, but there are no DICOM standards for camera-acquired dermatologic images to date. To identify and describe existing or proposed technology and technique standards for camera-acquired dermatologic images in the scientific literature. Systematic searches of the PubMed, EMBASE, and Cochrane databases were performed in January 2013 using photography and digital imaging, standardization, and medical specialty and medical illustration search terms and augmented by a gray literature search of 14 websites using Google. Two reviewers independently screened titles of 7371 unique publications, followed by 3 sequential full-text reviews, leading to the selection of 49 publications with the most recent (1985-2013) or detailed description of technology or technique standards related to the acquisition or use of images of skin disease (or related conditions). No universally accepted existing technology or technique standards for camera-based digital images in dermatology were identified. Recommendations are summarized for technology imaging standards, including spatial resolution, color resolution, reproduction (magnification) ratios, postacquisition image processing, color calibration, compression, output, archiving and storage, and security during storage and transmission. Recommendations are also summarized for technique imaging standards, including environmental conditions (lighting, background, and camera position), patient pose and standard view sets, and patient consent, privacy, and confidentiality. Proposed standards for specific-use cases in total body photography, teledermatology, and dermoscopy are described. The literature is replete with descriptions of obtaining photographs of skin disease, but universal imaging standards have not been developed, validated, and adopted to date. Dermatologic imaging is evolving without defined standards for camera-acquired images, leading to variable image quality and limited exchangeability. The development and adoption of universal technology and technique standards may first emerge in scenarios when image use is most associated with a defined clinical benefit.

  5. Experimental scrambling and noise reduction applied to the optical encryption of QR codes.

    PubMed

    Barrera, John Fredy; Vélez, Alejandro; Torroba, Roberto

    2014-08-25

    In this contribution, we implement two techniques to reinforce optical encryption, applied here to QR codes but applicable to general encoding situations. To our knowledge, we present the first experimental positional optical scrambling merged with an optical encryption procedure. Including an experimental scrambling technique in an optical encryption protocol, in particular one dealing with a QR code "container", adds protection to the encoding proposal. Additionally, a nonlinear normalization technique is applied to reduce the noise in the recovered images while increasing security against attacks. The opto-digital techniques employ an interferometric arrangement and a joint transform correlator encrypting architecture. The experimental results demonstrate the capability of the methods to accomplish the task.

  6. Prediction of physical protein protein interactions

    NASA Astrophysics Data System (ADS)

    Szilágyi, András; Grimm, Vera; Arakaki, Adrián K.; Skolnick, Jeffrey

    2005-06-01

    Many essential cellular processes such as signal transduction, transport, cellular motion and most regulatory mechanisms are mediated by protein-protein interactions. In recent years, new experimental techniques have been developed to discover the protein-protein interaction networks of several organisms. However, the accuracy and coverage of these techniques have proven to be limited, and computational approaches remain essential both to assist in the design and validation of experimental studies and for the prediction of interaction partners and detailed structures of protein complexes. Here, we provide a critical overview of existing structure-independent and structure-based computational methods. Although these techniques have significantly advanced in the past few years, we find that most of them are still in their infancy. We also provide an overview of experimental techniques for the detection of protein-protein interactions. Although the developments are promising, false positive and false negative results are common, and reliable detection is possible only by taking a consensus of different experimental approaches. The shortcomings of experimental techniques affect both the further development and the fair evaluation of computational prediction methods. For an adequate comparative evaluation of prediction and high-throughput experimental methods, an appropriately large benchmark set of biophysically characterized protein complexes would be needed, but is sorely lacking.

  7. Statistical density modification using local pattern matching

    DOEpatents

    Terwilliger, Thomas C.

    2007-01-23

    A computer implemented method modifies an experimental electron density map. A set of selected known experimental and model electron density maps is provided, and standard templates of electron density are created from these maps by clustering and averaging values of electron density in a spherical region about each point in the grid that defines each map. Histograms are also created from the selected maps that relate the value of electron density at the center of each spherical region to a correlation coefficient of the density surrounding each corresponding grid point in each of the standard templates. The standard templates and the histograms are applied to grid points on the experimental electron density map to form new estimates of electron density at each grid point.

  8. Application of the windowed-Fourier-transform-based fringe analysis technique for investigating temperature and concentration fields in fluids.

    PubMed

    Mohanan, Sharika; Srivastava, Atul

    2014-04-10

    The present work is concerned with the development and application of a novel fringe analysis technique based on the principles of the windowed Fourier transform (WFT) for the determination of temperature and concentration fields from interferometric images, for a range of heat and mass transfer applications. Depending on the noise level of the experimental data, the technique has been coupled with two different phase unwrapping methods for phase extraction: the Itoh algorithm and the quality-guided phase unwrapping technique. To generate the experimental data, a range of experiments was carried out, including cooling of a vertical flat plate in free convection, combustion of mono-propellant flames, and growth of organic as well as inorganic crystals from their aqueous solutions. The flat plate and combustion experiments are modeled as heat transfer applications, in which the interest is in the whole-field temperature distribution. The aqueous-solution-based crystal growth experiments simulate mass transfer phenomena, in which the interest is in the two-dimensional solute concentration field around the growing crystal. A Mach-Zehnder interferometer was employed to record the path-integrated quantity of interest (temperature and/or concentration) in the form of interferometric images. The potential of the WFT method has also been demonstrated on numerically simulated phase data for varying noise levels, and the accuracy of phase extraction has been quantified in terms of root mean square errors; three noise levels, i.e., 0%, 10%, and 20%, have been considered. Results of the present study show that the WFT technique allows an accurate extraction of phase values that can subsequently be converted into two-dimensional temperature and/or concentration fields. Moreover, since the WFT is a local processing technique, speckle patterns and the inherent noise in the interferometric data do not affect the resultant phase values. Brief comparisons of the accuracy of the WFT with other standard techniques, such as conventional Fourier-filtering methods, are also presented.
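
    The core of the WFT ridge idea can be sketched in 1D (synthetic fringe signal, hypothetical window and carrier; real interferograms are processed in 2D): slide a Gaussian window along the signal, take the phase at the peak positive-frequency bin of each local spectrum, and unwrap:

      import numpy as np

      # 1D windowed-Fourier ridge: local spectra via a sliding Gaussian
      # window; the phase at each ridge bin, unwrapped, recovers the fringe
      # phase (carrier plus modulation) up to a constant offset.

      n = 512
      x = np.arange(n)
      phi = 0.25 * x + 4.0 * np.sin(2 * np.pi * x / n)   # carrier + slow phase
      signal = 1.0 + np.cos(phi)
      signal += 0.1 * np.random.default_rng(3).normal(size=n)   # 10% noise

      win_half = 24
      window = np.exp(-0.5 * (np.arange(-win_half, win_half + 1) / 10.0) ** 2)

      wrapped = np.zeros(n)
      for c in range(win_half, n - win_half):
          seg = signal[c - win_half:c + win_half + 1] * window
          spec = np.fft.rfft(seg)
          k = 1 + np.argmax(np.abs(spec[1:]))    # ridge: skip the DC bin
          wrapped[c] = np.angle(spec[k])

      phase = np.unwrap(wrapped[win_half:n - win_half])
      print("recovered phase range:", phase.min(), phase.max())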

  9. Thermal neutron detector based on COTS CMOS imagers and a conversion layer containing Gadolinium

    NASA Astrophysics Data System (ADS)

    Pérez, Martín; Blostein, Juan Jerónimo; Bessia, Fabricio Alcalde; Tartaglione, Aureliano; Sidelnik, Iván; Haro, Miguel Sofo; Suárez, Sergio; Gimenez, Melisa Lucía; Berisso, Mariano Gómez; Lipovetzky, Jose

    2018-06-01

    In this work we introduce a novel, low-cost, position-sensitive thermal neutron detection technique based on a commercial off-the-shelf CMOS image sensor covered with a gadolinium-containing conversion layer. The feasibility of the technique has been demonstrated experimentally: a thermal neutron detection efficiency of 11.3% was obtained with a conversion layer of 11.6 μm. It was also verified experimentally that the detection efficiency is independent of the intensity of the incident thermal neutron flux, which was confirmed for conversion layers of different thicknesses. Based on the experimental results, a spatial resolution better than 25 μm is expected, which makes the proposed technique especially useful for neutron beam characterization, neutron beam dosimetry, high-resolution neutron imaging, and several neutron scattering techniques.

  10. Optical fiber sensors measurement system and special fibers improvement

    NASA Astrophysics Data System (ADS)

    Jelinek, Michal; Hrabina, Jan; Hola, Miroslava; Hucl, Vaclav; Cizek, Martin; Rerucha, Simon; Lazar, Josef; Mikel, Bretislav

    2017-06-01

    We present a method for improving measurement accuracy in optical frequency spectrum measurements based on tunable optical filters. The optical filter was used in the design and realization of a measurement system for the inspection of fiber Bragg gratings. The system incorporates a reference block for the compensation of environmental influences, an interferometric verification subsystem, and PC-based control software implemented in LabView. Preliminary experimental verification of the measurement principle and of the system functionality was carried out on a test rig with a specially prepared concrete console at UJV Řež. The presented system is the laboratory version of a special system for measuring deformation of the nuclear power plant containment shape, which was installed at the Temelin power plant last year. Building on this research, we have begun preparing other optical fiber sensors for nuclear power plant measurements. These sensors will be based on microstructured and polarization-maintaining optical fibers. We have started developing new methods and techniques for splicing and shaping optical fibers. We are able to make optical tapers ranging from ultra-short, so-called adiabatic tapers around 400 μm long up to long tapers up to 6 mm in length. We have developed new techniques for splicing standard single-mode (SM) and multimode (MM) optical fibers and for splicing optical fibers of different diameters in the wavelength range from 532 to 1550 nm. Alongside these, we have prepared techniques for splicing and shaping special optical fibers, such as polarization-maintaining (PM) fibers and hollow-core photonic crystal fibers (PCF), and their cross-splicing methods, with a focus on minimizing back-reflection and attenuation. Splicing of special optical fibers, especially PCFs, with standard telecommunication and other SM fibers can be performed with the techniques we have developed; the splicing process must be adjusted for each new fiber and fiber combination, and splicing of the same fiber types from different manufacturers can be accommodated by several tested changes to the process. We are able to splice a PCF to a standard telecommunication fiber with an attenuation of at most 2 dB; the method is also presented. These new fiber-splicing techniques and methods are being developed with a view to further research and development in optical fiber sensors, laser frequency stabilization, and fiber-based laser interferometry. In particular, for laser frequency stabilization we have developed, and present here, new techniques for sealing microstructured fibers with gas inside.

  11. Optimized radiation-hardened erbium doped fiber amplifiers for long space missions

    NASA Astrophysics Data System (ADS)

    Ladaci, A.; Girard, S.; Mescia, L.; Robin, T.; Laurent, A.; Cadier, B.; Boutillier, M.; Ouerdane, Y.; Boukenter, A.

    2017-04-01

    In this work, we developed and exploited simulation tools to optimize the performance of rare-earth-doped fiber amplifiers (REDFAs) for space missions. To describe these systems, a state-of-the-art model based on the rate equations and the particle swarm optimization technique was developed, in which we also consider the main radiation effect on a REDFA: the radiation-induced attenuation (RIA). After validating this tool set by comparison of theoretical and experimental results, we investigate how the deleterious radiation effects on amplifier performance can be mitigated through appropriate choices of REDFA architecture. The tool set was validated by comparing the calculated erbium-doped fiber amplifier (EDFA) gain degradation under X-rays at ~300 krad(SiO2) with the corresponding experimental results. Two versions of the same fiber were used in this work: a standard optical fiber and a radiation-hardened fiber, obtained by loading the former with hydrogen gas. Based on these fibers, standard and radiation-hardened EDFAs were manufactured and tested in different operating configurations, and the data obtained were compared with simulations of the same EDFA structure and fiber properties. This comparison reveals good agreement between simulated gain and experimental data (<10% maximum error at the highest doses). Compared with our previous results on Er/Yb amplifiers, these results reveal the importance of the photo-bleaching mechanism competing with the RIA, which cannot be neglected when modeling the radiation-induced gain degradation of EDFAs. This makes it necessary to measure, under representative conditions, the RIA at the pump and signal wavelengths, which is used as an input parameter for the simulation. The validated numerical codes were then used to evaluate the potential of some EDFA architecture changes for amplifier performance during a space mission. Optimization of both the fiber length and the EDFA pumping scheme allows us to strongly reduce the radiation vulnerability of the amplifier in terms of gain. The presented approach is a complementary and effective tool for hardening-by-device techniques and opens new perspectives for the application of REDFAs and fiber lasers in harsh environments.
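
    The length-optimization idea can be caricatured with a toy propagation model (not the paper's rate-equation/PSO machinery; all coefficients are hypothetical): local gain follows the remaining pump power, RIA adds a fixed loss, and scanning the fiber length shows the optimum shortening under irradiation:

      import numpy as np

      # Propagate signal power along an Er-doped fiber whose local gain depends
      # on the remaining pump power; RIA adds a distributed loss. Scan length
      # for the best end-of-life gain.

      g0, a_s = 3.0, 0.6     # pumped gain / unpumped absorption at signal (1/m)
      a_p = 0.9              # pump absorption coefficient (1/m)
      p_half = 5.0           # pump power giving half inversion (mW)

      def output_gain(length, ria, p_in=0.1, pp_in=50.0, steps=4000):
          dz = length / steps
          p, pp = p_in, pp_in
          for _ in range(steps):
              inv = pp / (pp + p_half)                   # toy inversion level
              p += (g0 * inv - a_s * (1 - inv) - ria) * p * dz
              pp -= a_p * pp * dz                        # pump depletion
          return 10.0 * np.log10(p / p_in)

      lengths = np.linspace(0.5, 8.0, 76)
      for ria in (0.0, 0.3):                             # pristine vs irradiated
          gains = [output_gain(L, ria) for L in lengths]
          best = lengths[int(np.argmax(gains))]
          print(f"RIA={ria}: best length {best:.1f} m, gain {max(gains):.1f} dB")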

  12. Development of a Novel Space Flight Plan to Monitor Female Mice Fertility Using Reduced Crew Time

    NASA Technical Reports Server (NTRS)

    Christenson, Lane; Hong, Xiaoman; Alwood, Joshua S.; Ronca, April E.; Tash, Joseph S.; Talyansky, Yuli

    2017-01-01

    Ovarian estrogen impacts the normal homeostatic and metabolic processes of all tissues and organ systems within the body, particularly, but not limited to, the canonical spaceflight-impacted systems: bone, muscle, immune, wound repair, and cardiovascular. Effects of space flight on ovarian estrogen production are therefore critical to our understanding of all spaceflight experiments using female mice, the current paradigm on the International Space Station (ISS). Recently, we demonstrated that vaginal wall histology can be used to determine the stage of the estrous cycle in female mice at the time of sacrifice in space. Moreover, this robust technique was completed after two post-flight freeze-thaw procedures of the carcasses (RR1 experiment). Thus, the technique represents a viable means of determining the estrous cycle status of each female at the time of sacrifice and can be completed in a manner that does not impact primary experimental objectives. We propose that vaginal wall histology become a standard procedure for all mice sacrificed in space and that the estrous status of each animal be shared with all investigators. While evidence of estrous cyclicity was present in long-term (33 day) RR1 mice, the fertility of female mice exposed to weightlessness remains unknown. In preparation for an upcoming funded NASA flight investigating the effects of long-duration spaceflight on female fertility, we have refined our experimental design to minimize crew flight time and to accommodate the duration of Dragon capsule berth. These refinements maintain all of our primary and secondary experimental objectives. Briefly, in order to evaluate fertility, we will superovulate mice using standard procedures (PMSG + hCG), followed by collection of the reproductive tract after follicular stimulation alone (PMSG) or following ovulation (hCG). Ovarian folliculogenesis and ovulation rate will be determined in fixed tissues following return in order to assess fertility. Ovarian and uterine tissues will also be evaluated by hormonal and gene expression profiling using quantitative approaches (radioimmunoassays, western blots, digital droplet PCR). Comparisons will be made to contemporary vivarium-housed and Rodent Research Hardware Transporter- and Habitat-housed animals maintained on Earth. Supported by NNX15AB48G to JST.

  13. Experimental, theoretical, and device application development of nanoscale focused electron-beam-induced deposition

    NASA Astrophysics Data System (ADS)

    Randolph, Steven Jeffrey

    Electron-beam-induced deposition (EBID) is a highly versatile nanofabrication technique that allows for growth of a variety of materials with nanoscale precision and resolution. While several applications and studies of EBID have been reported and published, there is still a significant lack of understanding of the complex mechanisms involved in the process. Consequently, EBID process control is, in general, limited and certain common experimental results regarding nanofiber growth have yet to be fully explained. Such anomalous results have been addressed in this work both experimentally and by computer simulation. Specifically, a correlation between SiOx nanofiber deposition observations and the phenomenon of electron beam heating (EBH) was shown by comparison of thermal computer models and experimental results. Depending on the beam energy, beam current, and nanostructure geometry, the heat generated can be substantial and may influence the deposition rate. Temperature dependent EBID growth experiments qualitatively verified the results of the EBH model. Additionally, EBID was used to produce surface image layers for maskless, direct-write lithography (MDL). A single layer process used directly written SiOx features as a masking layer for amorphous silicon thin films. A bilayer process implemented a secondary masking layer consisting of standard photoresist into which a pattern, directly written in EBID tungsten, was transferred. The single layer process was found to be extremely sensitive to the etch selectivity of the plasma etch. In the bilayer process, EBID tungsten was written onto photoresist and the pattern transferred by means of oxygen plasma dry development following a brief refractory descum. Conditions were developed to reduce the spatial spread of electrons in the photoresist layer and obtain ~35 nm lines. Finally, an EBID-based technique for field emitter repair was applied to the Digital Electrostatically focused e-beam Array Lithography (DEAL) parallel electron beam lithography configuration to repair damaged or missing carbon nanofiber cathodes. The I-V response and lithography results from EBID tungsten-based devices were comparable to CNF-based DEAL devices indicating a successful repair technique.

  14. Experimental and analytical determination of stability parameters for a balloon tethered in a wind

    NASA Technical Reports Server (NTRS)

    Redd, L. T.; Bennett, R. M.; Bland, S. R.

    1973-01-01

    Experimental and analytical techniques for determining stability parameters for a balloon tethered in a steady wind are described. These techniques are applied to a particular 7.64-meter-long balloon, and the results are presented. The stability parameters of interest appear as coefficients in linearized stability equations and are derived from the various forces and moments acting on the balloon. In several cases the results from the experimental and analytical techniques are compared and suggestions are given as to which techniques are the most practical means of determining values for the stability parameters.

  15. In-situ identification of anti-personnel mines using acoustic resonant spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perry, R L; Roberts, R S

    1999-02-01

    A new technique for identifying buried Anti-Personnel Mines is described, and a set of preliminary experiments designed to assess the feasibility of this technique is presented. Analysis of the experimental results indicates that the technique has potential, but additional work is required to bring the technique to fruition. In addition to the experimental results presented here, a technique used to characterize the sensor employed in the experiments is detailed.

  16. Experimental Validation Techniques for the Heleeos Off-Axis Laser Propagation Model

    DTIC Science & Technology

    2010-03-01

    Experimental Validation Techniques for the HELEEOS Off-Axis Laser Propagation Model. Master's thesis, John Haiducek, 1st Lt, USAF (BS, Physics), Air Force Institute of Technology, March 2010, report no. AFIT/GAP/ENP/10-M07. Approved for public release; distribution unlimited. Abstract (truncated): The High Energy Laser End-to-End

  17. Modeling and Hazard Analysis Using STPA

    NASA Astrophysics Data System (ADS)

    Ishimatsu, Takuto; Leveson, Nancy; Thomas, John; Katahira, Masa; Miyamoto, Yuko; Nakao, Haruka

    2010-09-01

    A joint research project between MIT and JAXA/JAMSS is investigating the application of a new hazard analysis to the system and software in the HTV. Traditional hazard analysis focuses on component failures, but software does not fail in this way. Software most often contributes to accidents by commanding the spacecraft into an unsafe state (e.g., turning off the descent engines prematurely) or by not issuing required commands. That makes standard hazard analysis techniques of limited usefulness for software-intensive systems, a description that fits most spacecraft built today. STPA is a new hazard analysis technique based on systems theory rather than reliability theory; it treats safety as a control problem rather than a failure problem. The goal of STPA, which is to create a set of scenarios that can lead to a hazard, is the same as that of FTA, but STPA includes a broader set of potential scenarios, including those in which no failures occur but problems arise from unsafe and unintended interactions among the system components. STPA also provides more guidance to the analyst than traditional fault tree analysis; functional control diagrams are used to guide the analysis. In addition, JAXA uses a model-based system engineering development environment (created originally by Leveson and called SpecTRM) which also assists in the hazard analysis. One of the advantages of STPA is that it can be applied early in the system engineering and development process, in a safety-driven design process where hazard analysis drives the design decisions rather than waiting until reviews identify problems that are then costly or difficult to fix. It can also be applied in an after-the-fact analysis and hazard assessment, which is what we did in this case study. This paper describes the experimental application of STPA to the JAXA HTV in order to determine the feasibility and usefulness of the new hazard analysis technique. Because the HTV was originally developed using fault tree analysis and following the NASA standards for safety-critical systems, the results of our experimental application of STPA can be compared with these more traditional safety engineering approaches in terms of the problems identified and the resources required.

  18. Enrichment Meter Dataset from High-Resolution Gamma Spectroscopy Measurements of U3O8 Enrichment Standards and UF6 Cylinder Wall Equivalents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Andrew D.; Croft, Stephen; Shephard, Adam M.

    2015-12-01

    The Enrichment Meter Principle (EMP) is the basis for a commonly used standard test method for the non-destructive assay of 235U enrichment in bulk compounds [1]. The technique involves determining the net count rate in the direct 186 keV peak using medium- or high-energy-resolution gamma-ray spectrometry in a fixed geometry. With suitable corrections for wall attenuation, compound type, rate loss (live time), and peaked background (if significant), the atom fraction of 235U may be obtained from the counting rate through a linear relationship through the origin. The widespread use of this method for field verification of enrichment [2,3], together with the fact that the response function rests on fundamental physics considerations (i.e., it is not represented by a convenient but arbitrary form), makes it an interesting example of uncertainty quantification, one in which a valid measurement model can be expected to apply. When applied using NaI(Tl) and region-of-interest analysis, the technique is susceptible to both interference error and bias [2-4]. When implemented using high-resolution gamma-ray spectroscopy, the spectrum interpretation is considerably simplified and more robust [5]. However, a practical challenge to studying the uncertainty budget of the EMP method (for example, to test linearity, extract wall corrections, and so forth using modern methods) is the availability of quality experimental data that can be referenced and shared. To fill this gap, the research team undertook a measurement campaign [6] to produce high-resolution gamma spectroscopy enrichment meter data comparable to UF6 cylinder measurements. The purpose of this report is to provide both an introduction to and quality assurance (QA) of the raw data produced; it is intended for the analyst or researcher who uses the raw data. Unfortunately, the raw data (i.e., the spectra files) are too voluminous to include in this report but can be requested from Steven Croft of the Safeguards & Security Technology Group (scroft@ornl.gov, 865-241-2834). The complete processed data are tabulated in Appendix A. The analysis techniques used to produce the QA data presented in this report [e.g., three region-of-interest (ROI) peak extraction and batch analysis processes] are not the most sophisticated available; analysts are encouraged to reanalyze the raw data using more sensitive techniques and to improve upon the results presented here. That said, the analysis techniques used here are more than adequate to present and inspect the quality of the data.
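
    The EMP response itself reduces to a one-line calculation; the sketch below (hypothetical count rates, an approximate steel attenuation coefficient, and a simple exponential wall correction) shows how a calibration constant from a standard of known enrichment is applied to an item measured through a container wall:

      import math

      # Enrichment meter relation: for an "infinitely thick" uranium sample
      # viewed through a container wall, the net 186 keV rate is proportional
      # to the 235U enrichment: E = K * R_net * C_wall, with K from a standard
      # of known enrichment and C_wall = exp(mu_w * rho_w * t_w).

      MU_STEEL_186 = 0.155   # cm^2/g, approximate mass attenuation of iron/steel
      RHO_STEEL = 7.9        # g/cm^3

      def wall_correction(t_cm):
          return math.exp(MU_STEEL_186 * RHO_STEEL * t_cm)

      # Calibrate on a 4.46 wt% standard measured bare (no wall):
      R_std = 892.0                      # net 186 keV counts/s from the standard
      K = 4.46 / R_std

      # Assay an item behind a 0.2 cm steel wall:
      R_item = 310.0                     # net 186 keV counts/s from the item
      E = K * R_item * wall_correction(0.2)
      print(f"enrichment ~ {E:.2f} wt% 235U")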

  19. Development of a High-Resolution Laser Absorption Spectroscopy Method with Application to the Determination of Absolute Concentration of Gaseous Elemental Mercury in Air.

    PubMed

    Srivastava, Abneesh; Hodges, Joseph T

    2018-06-05

    Isotope dilution-cold-vapor-inductively coupled plasma mass spectrometry (ID-CV-ICPMS) has become the primary standard for measurement of gaseous elemental mercury (GEM) mass concentration. However, quantitative mass spectrometry is challenging for several reasons including (1) the need for isotopic spiking with a standard reference material, (2) the requirement for bias-free passive sampling protocols, (3) the need for stable mass spectrometry interface design, and (4) the time and cost involved for gas sampling, sample processing, and instrument calibration. Here, we introduce a high-resolution laser absorption spectroscopy method that eliminates the need for sample-specific calibration standards or detailed analysis of sample treatment losses. The technique uses a tunable, single-frequency laser absorption spectrometer to measure isotopically resolved spectra of the elemental mercury (Hg) 6 ¹S₀ ← 6 ³P₁ intercombination transition near λ = 253.7 nm. Measured spectra are accurately modeled from first principles using the Beer-Lambert law and Voigt line profiles combined with literature values for line positions, line shape parameters, and the spontaneous emission Einstein coefficient to obtain GEM mass concentration values. We present application of this method to the measurement of the equilibrium concentration of mercury vapor near room temperature. Three closed systems are considered: two-phase mixtures of liquid Hg and its vapor, and binary two-phase mixtures of Hg-air and Hg-N2 near atmospheric pressure. Within the experimental relative standard uncertainty (0.9-1.5%), congruent values of the equilibrium Hg vapor concentration are obtained for the Hg-only, Hg-air, and Hg-N2 systems, in agreement with thermodynamic predictions. We also discuss detection limits and the potential of the present technique to serve as an absolute primary standard for measurements of gas-phase mercury concentration and isotopic composition.
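
    The following Python sketch illustrates the underlying measurement model (Beer-Lambert absorbance with a normalized Voigt line profile, with number density recovered from the integrated absorbance); the line intensity, path length, widths, and density are invented placeholders rather than the actual parameters of the 253.7 nm Hg line.

        import numpy as np
        from scipy.integrate import trapezoid
        from scipy.special import voigt_profile  # normalized Voigt shape

        # Sketch of the Beer-Lambert/Voigt measurement model: absorbance
        # alpha(nu) = S * phi_V(nu - nu0) * N * L, so the number density N
        # follows from the integrated absorbance. S, L, the widths, and N
        # are invented placeholders, not parameters of the 253.7 nm Hg line.

        S = 1.0e-13                 # line intensity per atom (cm)
        L = 10.0                    # optical path length (cm)
        sigma, gamma = 0.02, 0.005  # Gaussian / Lorentzian widths (cm^-1)
        N_true = 2.5e11             # number density (cm^-3)

        nu = np.linspace(-0.5, 0.5, 4001)  # detuning grid (cm^-1)
        alpha = S * voigt_profile(nu, sigma, gamma) * N_true * L

        # Retrieve N from the area under the measured absorbance; the small
        # deficit comes from truncating the Lorentzian wings at +/-0.5 cm^-1.
        N_retrieved = trapezoid(alpha, nu) / (S * L)
        print(f"retrieved N = {N_retrieved:.3e} cm^-3 (input {N_true:.1e})")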

  20. Effect of low-shrinkage monomers on the physicochemical properties of experimental composite resin

    PubMed Central

    He, Jingwei; Garoushi, Sufyan; Vallittu, Pekka K.; Lassila, Lippo

    2018-01-01

    This study was conducted to determine whether novel experimental low-shrinkage dimethacrylate co-monomers could provide low-polymerization-shrinkage composites without sacrificing the degree of conversion or the mechanical properties of the composites. Experimental composites were prepared by mixing 28.6 wt% of a bisphenol-A-glycidyl dimethacrylate (bis-GMA) based resin matrix, containing various weight fractions of the co-monomers tricyclodecane dimethanol diacrylate (SR833s) and isobornyl acrylate (IBOA), with 71.4 wt% of particulate fillers. A composite based on bis-GMA/TEGDMA (triethylene glycol dimethacrylate) was used as a control. Fracture toughness and flexural strength were determined for each experimental material following international standards. Degree of monomer conversion (DC%) was determined by FTIR spectrometry. The volumetric shrinkage in percent was calculated as a buoyancy change in distilled water by means of Archimedes' principle. Polymerization shrinkage-strain and -stress of the specimens were measured using the strain-gage technique and a tensilometer, respectively, as functions of time. Statistical analysis revealed that the control group had the highest double-bond conversion (p < .05) among the experimental resins tested. All of the experimental composite resins had comparable flexural strength, modulus, and fracture toughness (p > .05). Volumetric shrinkage and shrinkage stress decreased with increasing IBOA concentration. Replacing TEGDMA with SR833s and IBOA can decrease the volumetric shrinkage, shrinkage strain, and shrinkage stress of composite resins without affecting the mechanical properties. However, the degree of conversion was also decreased. PMID:29536025
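
    A minimal Python sketch of the buoyancy (Archimedes) determination of volumetric shrinkage follows; the weighings are invented placeholder numbers, not data from the study, and the density formula is the standard one.

        # Sketch of volumetric shrinkage from buoyancy (Archimedes)
        # weighings; all masses are invented placeholder numbers.

        RHO_WATER = 0.9982  # g/cm^3 near 20 C

        def density_by_buoyancy(m_air_g, m_water_g, rho_water=RHO_WATER):
            """Density from weighings in air and in distilled water."""
            return m_air_g / (m_air_g - m_water_g) * rho_water

        rho_uncured = density_by_buoyancy(1.2000, 0.4800)  # paste
        rho_cured = density_by_buoyancy(1.2000, 0.5050)    # after curing

        # Mass is conserved, so the volume change follows the density change.
        shrinkage_pct = (rho_cured - rho_uncured) / rho_cured * 100.0
        print(f"volumetric shrinkage ~ {shrinkage_pct:.2f} %")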

  2. eSBMTools 1.0: enhanced native structure-based modeling tools.

    PubMed

    Lutz, Benjamin; Sinner, Claude; Heuermann, Geertje; Verma, Abhinav; Schug, Alexander

    2013-11-01

    Molecular dynamics simulations provide detailed insights into the structure and function of biomolecular systems, complementing experimental measurements by giving access to experimentally inaccessible regimes. Among the different molecular dynamics techniques, native structure-based models (SBMs) are based on energy landscape theory and the principle of minimal frustration. Typically used in protein and RNA folding simulations, they coarse-grain the biomolecular system and/or simplify the Hamiltonian, resulting in modest computational requirements while achieving high agreement with experimental data. eSBMTools streamlines running and evaluating SBMs in a comprehensive package and offers high flexibility in adding experimental- or bioinformatics-derived restraints. We present a software package that allows setting up, modifying, and evaluating SBMs for both RNA and proteins. The implemented workflows include predicting protein complexes based on bioinformatics-derived inter-protein contact information, a standardized setup of protein folding simulations based on the common PDB format, calculating reaction coordinates, and evaluating the simulation by free-energy calculations with the weighted histogram analysis method or by phi-values. The modules interface with the molecular dynamics simulation program GROMACS. The package is open source and written in architecture-independent Python2. http://sourceforge.net/projects/esbmtools/. alexander.schug@kit.edu. Supplementary data are available at Bioinformatics online.
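
    As an illustration of the kind of interaction term such models use, the following Python sketch evaluates the Lennard-Jones-like 10-12 native-contact potential common in coarse-grained structure-based (Go-type) models; the well depth and native distance are placeholders, and this is not code from eSBMTools.

        import numpy as np

        # Sketch of the 10-12 native-contact term common in coarse-grained
        # structure-based (Go-type) models; epsilon and the native distance
        # are placeholders, and this is not eSBMTools code.

        def contact_energy(r, r0, epsilon=1.0):
            """10-12 contact term with a minimum of depth epsilon at r = r0."""
            x = r0 / r
            return epsilon * (5.0 * x**12 - 6.0 * x**10)

        r0 = 0.8  # native pair distance (nm)
        for r in (0.7, 0.8, 1.0, 1.5):
            print(f"r = {r:.1f} nm -> E = {contact_energy(r, r0):+.3f} epsilon")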

  3. Genital region cleansing wipes: Effects on urine culture contamination.

    PubMed

    Selek, Mehmet Burak; Bektöre, Bayhan; Sezer, Ogün; Kula Atik, Tuğba; Baylan, Orhan; Özyurt, Mustafa

    2017-01-30

    Urine culture is the gold-standard test for revealing the microbial agent causing urinary tract infection (UTI). Culture results are affected by sampling technique; improper sampling leads to contamination of the urine and thus of the culture with urogenital flora. We aimed to evaluate the effect of urogenital cleansing, performed with chlorhexidine-containing genital region cleansing wipes (GRCW), on contamination rates. A total of 2,665 patients with UTI-related complaints and with urine culture requests from various outpatient clinics were enrolled in the study. Of the patients, 1,609 in the experimental group used GRCW before sampling, while 1,046 in the control group did not use any wipes. The contamination rate was 7.7% in the experimental group and 15.8% in the control group. Contamination rates were significantly higher in the control group than in the experimental group for both women and men, and were also significantly lower in the experimental group for both children and adults. Our study, conducted in a large population, showed that the use of chlorhexidine-containing cleansing wipes significantly reduced urine culture contamination rates in both genders and in both child and adult age groups. Collecting urine after urogenital cleansing with GRCW will reduce the contamination problem.
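
    A minimal Python sketch of one way to compare the two contamination rates (a two-proportion z-test) follows; the integer counts are reconstructed approximately from the percentages reported in the abstract, and the test choice is illustrative, not necessarily the analysis used in the study.

        from math import sqrt
        from scipy.stats import norm

        # Sketch of a two-proportion z-test for the reported contamination
        # rates; counts are reconstructed approximately from the abstract's
        # percentages (7.7% of 1,609 and 15.8% of 1,046).

        x1, n1 = 124, 1609  # contaminated cultures, GRCW group
        x2, n2 = 165, 1046  # contaminated cultures, control group

        p1, p2 = x1 / n1, x2 / n2
        p_pool = (x1 + x2) / (n1 + n2)
        se = sqrt(p_pool * (1.0 - p_pool) * (1.0 / n1 + 1.0 / n2))
        z = (p1 - p2) / se
        p_value = 2.0 * norm.sf(abs(z))  # two-sided

        print(f"p1 = {p1:.3f}, p2 = {p2:.3f}, z = {z:.2f}, p = {p_value:.1e}")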

  4. Bayesian Treed Calibration: An Application to Carbon Capture With AX Sorbent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konomi, Bledar A.; Karagiannis, Georgios; Lai, Kevin

    2017-01-02

    In cases where field or experimental measurements are not available, computer models can reproduce the outcomes of real physical or engineering systems. They are usually calibrated in light of experimental data to create a better representation of the real system. Statistical methods for calibration and prediction, based on Gaussian processes, have been especially important when the computer models are expensive and experimental data limited. In this paper, we develop Bayesian treed calibration (BTC) as an extension of standard Gaussian process calibration methods to deal with non-stationary computer models and/or their discrepancy from the field (or experimental) data. Our proposed method partitions both the calibration and observable input space, based on a binary tree partitioning, into sub-regions where existing model calibration methods can be applied to connect a computer model with the real system. The estimation of the parameters in the proposed model is carried out using Markov chain Monte Carlo (MCMC) computational techniques, with different strategies applied to improve mixing. We illustrate our method in two artificial examples and a real application that concerns the capture of carbon dioxide with AX amine-based sorbents. The source code and the examples analyzed in this paper are available as part of the supplementary materials.
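
    The following toy Python sketch conveys the core treed-GP idea, fitting independent Gaussian processes on the two sides of a fixed split so that a non-stationary response is handled by locally stationary models; the real BTC method infers the partition and calibration parameters by MCMC, and the data, split location, and kernel settings here are invented.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        # Toy sketch of the treed-GP idea: fit independent GPs on the two
        # sides of a fixed split so locally stationary models capture a
        # non-stationary response. BTC instead infers the partition and
        # calibration parameters by MCMC; data and settings here are invented.

        rng = np.random.default_rng(1)
        x = np.sort(rng.uniform(0.0, 1.0, 80))[:, None]
        y = (np.where(x[:, 0] < 0.5, np.sin(30.0 * x[:, 0]), 0.2 * x[:, 0])
             + 0.01 * rng.normal(size=80))

        split, models = 0.5, {}
        for name, mask in (("left", x[:, 0] < 0.5), ("right", x[:, 0] >= 0.5)):
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-4)
            models[name] = gp.fit(x[mask], y[mask])

        for xq in (0.25, 0.75):
            gp = models["left"] if xq < split else models["right"]
            mu, sd = gp.predict(np.array([[xq]]), return_std=True)
            print(f"x = {xq:.2f}: prediction {mu[0]:+.3f} +/- {sd[0]:.3f}")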

  5. μ-PIV measurements of the ensemble flow fields surrounding a migrating semi-infinite bubble.

    PubMed

    Yamaguchi, Eiichiro; Smith, Bradford J; Gaver, Donald P

    2009-08-01

    Microscale particle image velocimetry (μ-PIV) measurements of the ensemble flow fields surrounding a steadily migrating semi-infinite bubble were obtained through the novel adaptation of a computer-controlled linear motor flow control system. The system was programmed to generate a square-wave velocity input in order to produce accurate, constant bubble propagation repeatedly and effectively through a fused glass capillary tube. We present a novel technique for re-positioning the coordinate axis to the bubble-tip frame of reference in each instantaneous field through analysis of the sudden change in the standard deviation of centerline velocity profiles across the bubble interface. Ensemble averages were then computed in this bubble-tip frame of reference. Combined fluid systems of water/air, glycerol/air, and glycerol/Si-oil were used to investigate flows comparable to the computational simulations described in Smith and Gaver (2008) and to past experimental observations of interfacial shape. Fluorescent particle images were also analyzed to measure the residual film thickness trailing behind the bubble. The flow fields and film thickness agree very well with the computational simulations as well as with existing experimental and analytical results. Particle accumulation and migration associated with the flow patterns near the bubble tip after long experimental durations are discussed as potential sources of error in the experimental method.
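
    A minimal Python sketch of the interface-location idea described above follows: the bubble tip is taken as the index where the windowed standard deviation of the centerline velocity changes most abruptly, and coordinates are then re-expressed relative to that index; the synthetic profile and window size are invented stand-ins for real PIV data.

        import numpy as np

        # Sketch of the interface-location step: take the bubble tip as the
        # index where the windowed standard deviation of the centerline
        # velocity changes most abruptly, then re-origin the coordinates
        # there. The synthetic profile and window size are invented.

        def interface_index(u, window=9):
            """Index of the largest jump in windowed standard deviation."""
            half = window // 2
            std = np.array([u[max(0, i - half):i + half + 1].std()
                            for i in range(len(u))])
            return int(np.argmax(np.abs(np.diff(std))))

        rng = np.random.default_rng(2)
        u = np.concatenate([1.0 + 0.2 * rng.normal(size=120),   # liquid phase
                            0.01 * rng.normal(size=80)])        # gas phase

        tip = interface_index(u)
        x_tip_frame = np.arange(len(u)) - tip  # coordinates relative to tip
        print(f"interface detected near index {tip}")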

  6. Critical overview of all available animal models for abdominal wall hernia research.

    PubMed

    Vogels, R R M; Kaufmann, R; van den Hil, L C L; van Steensel, S; Schreinemacher, M H F; Lange, J F; Bouvy, N D

    2017-10-01

    Since the introduction of the first prosthetic mesh for abdominal hernia repair, there has been a search for the "ideal mesh." The use of preclinical or animal models for assessment of necessary characteristics of new and existing meshes is an indispensable part of hernia research. Unfortunately, in our experience there is a lack of consensus among different research groups on which model to use. We therefore hypothesized that there is a lack of comparability within published animal research on hernia surgery due to the wide range of experimental setups among different research groups. A systematic search of the literature was performed to provide a complete overview of all animal models published between 2000 and 2014. Relevant parameters on model characteristics and outcome measurement were scored on a standardized scoring sheet. Because of the wide range of animals used, from large animal models such as pigs down to rodents, we limited the study to the 168 articles concerning rat models. Within these rat models, we found a wide range of baseline animal characteristics, operation techniques, and outcome measurements, making reliable comparison of results among these studies impossible. This lack of comparability limits the impact of experimental hernia research. We therefore propose the establishment of guidelines for experimental hernia research by the EHS.

  7. Experimental design and reporting standards for metabolomics studies of mammalian cell lines.

    PubMed

    Hayton, Sarah; Maker, Garth L; Mullaney, Ian; Trengove, Robert D

    2017-12-01

    Metabolomics is an analytical technique that investigates the small biochemical molecules present within a biological sample isolated from a plant, animal, or cultured cells. It can be an extremely powerful tool for elucidating the specific metabolic changes within a biological system in response to an environmental challenge such as disease, infection, drugs, or toxins. A historically difficult step in the metabolomics pipeline is the interpretation of data in a meaningful biological context, particularly for high-variability biological samples and for untargeted metabolomics studies that are hypothesis-generating by design. One way to achieve stronger biological context for metabolomic data is the use of cultured cell models, particularly for mammalian biological systems. The benefits of in vitro metabolomics include much greater control of external variables and no ethical concerns. The current concerns are with inconsistencies in experimental procedures and in the level of reporting standards between different studies. This review discusses some of these discrepancies between recent studies, such as metabolite extraction and data normalisation. The aim of this review is to highlight the importance of a standardised experimental approach to any cultured cell metabolomics study, and it suggests an example procedure fully inclusive of the information that should be disclosed regarding the cell type(s) used and their culture conditions. Metabolomics of cultured cells has the potential to uncover previously unknown information about cell biology, functions, and response mechanisms, so the accurate biological interpretation of the data produced and its comparability with other studies should be considered vitally important.
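
    As an illustration of the kind of normalisation choices the review discusses, the following Python sketch applies total-signal normalisation across samples followed by per-metabolite autoscaling; the data matrix is invented, and this is one defensible choice among many rather than a prescribed standard.

        import numpy as np

        # Sketch of two common normalisation steps (total-signal
        # normalisation across samples, then per-metabolite autoscaling);
        # the matrix is invented, and this is one defensible choice among
        # many, not a prescribed standard.

        rng = np.random.default_rng(3)
        X = rng.lognormal(mean=2.0, sigma=0.5, size=(6, 10))  # samples x metabolites

        # 1) Remove gross sample-size effects: unit total signal per sample.
        X_tic = X / X.sum(axis=1, keepdims=True)

        # 2) Autoscale each metabolite to zero mean and unit variance so
        #    abundant and scarce metabolites contribute comparably.
        X_auto = (X_tic - X_tic.mean(axis=0)) / X_tic.std(axis=0, ddof=1)

        print(X_auto.mean(axis=0).round(6))         # ~0 for every metabolite
        print(X_auto.std(axis=0, ddof=1).round(6))  # ~1 for every metabolite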

  8. Determination of B-complex vitamins in pharmaceutical formulations by surface-enhanced Raman spectroscopy.

    PubMed

    Junior, Benedito Roberto Alvarenga; Soares, Frederico Luis Felipe; Ardila, Jorge Armando; Durango, Luis Guillermo Cuadrado; Forim, Moacir Rossi; Carneiro, Renato Lajarim

    2018-01-05

    The aim of this work was to quantify B-complex vitamins in pharmaceutical samples by the surface-enhanced Raman spectroscopy (SERS) technique using a gold colloid substrate. Synthesis of gold nanoparticles (AuNPs) was performed according to an adapted Turkevich method. Initial assays suggested the orientation of the molecules on the gold nanoparticle surface. A central composite design was performed to obtain the highest SERS signal for nicotinamide and riboflavin; the evaluated parameters were the volume of AuNPs, the concentration of vitamins, and the sodium chloride concentration. The best condition for nicotinamide was NaCl 2.3×10⁻³ mol L⁻¹ with 700 μL of AuNP colloid, and this same condition proved adequate for quantifying thiamine. The experimental design for riboflavin gave the best condition at NaCl 1.15×10⁻² mol L⁻¹ with 2.8 mL of AuNP colloid. It was possible to quantify thiamine and nicotinamide in the presence of other vitamins and excipients in two solid multivitamin formulations using the standard addition procedure. The standard addition curves presented R² higher than 0.96 for both nicotinamide and thiamine, at concentrations on the order of 10⁻⁷ and 10⁻⁸ mol L⁻¹, respectively. The nicotinamide content in a cosmetic gel sample was also quantified by direct analysis, with R² = 0.98. Student's t-test showed no significant difference with respect to the HPLC method. Despite the experimental design performed for riboflavin, its quantification in the commercial samples was not possible. Copyright © 2017 Elsevier B.V. All rights reserved.
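
    A minimal Python sketch of standard-addition quantification follows: the signal is regressed on spiked concentration and the unknown is read from the x-axis intercept; the spike levels and intensities are invented, not the paper's data.

        import numpy as np

        # Sketch of standard-addition quantification: regress signal on
        # spiked concentration and read the unknown from the x-intercept
        # (intercept/slope). Spike levels and intensities are invented.

        c_added = np.array([0.0, 2.0, 4.0, 6.0, 8.0]) * 1e-8  # mol/L spiked
        signal = np.array([0.42, 0.61, 0.83, 1.01, 1.22])     # SERS intensity

        slope, intercept = np.polyfit(c_added, signal, 1)
        c_unknown = intercept / slope  # magnitude of the x-intercept
        r2 = np.corrcoef(c_added, signal)[0, 1] ** 2

        print(f"R^2 = {r2:.3f}, estimated concentration ~ {c_unknown:.2e} mol/L")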

  9. A novel class sensitive hashing technique for large-scale content-based remote sensing image retrieval

    NASA Astrophysics Data System (ADS)

    Reato, Thomas; Demir, Begüm; Bruzzone, Lorenzo

    2017-10-01

    This paper presents a novel class-sensitive hashing technique in the framework of large-scale content-based remote sensing (RS) image retrieval. The proposed technique aims at representing each image with multi-hash codes, each of which corresponds to a primitive (i.e., land-cover class) present in the image. To this end, the proposed method consists of a three-step algorithm. The first step is devoted to characterizing each image by primitive-class descriptors. These descriptors are obtained through a supervised approach, which initially extracts the image regions and their descriptors, which are then associated with the primitives present in the images. This step requires a set of annotated training regions to define the primitive classes. A correspondence between the regions of an image and the primitive classes is built based on the probability of each primitive class being present at each region. All the regions belonging to a specific primitive class with a probability higher than a given threshold are highly representative of that class, so the average value of the descriptors of these regions is used to characterize that primitive. In the second step, the descriptors of primitive classes are transformed into multi-hash codes to represent each image. This is achieved by adapting the kernel-based supervised locality-sensitive hashing method to multi-code hashing problems. The first two steps of the proposed technique, unlike standard hashing methods, allow one to represent each image by a set of primitive-class-sensitive descriptors and their hash codes. Then, in the last step, the images in the archive that are most similar to a query image are retrieved based on a multi-hash-code-matching scheme. Experimental results obtained on an archive of aerial images confirm the effectiveness of the proposed technique in terms of retrieval accuracy when compared to standard hashing methods.
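
    The following toy Python sketch conveys the multi-code matching idea: each image carries one binary code per primitive and retrieval scores the best Hamming matches between code sets; the codes here come from generic random-hyperplane hashing rather than the paper's kernel-based supervised hashing, and all data are synthetic.

        import numpy as np

        # Toy sketch of multi-code matching: each image carries one binary
        # code per primitive, and similarity counts the best Hamming matches
        # between code sets. Random-hyperplane hashing stands in for the
        # paper's kernel-based supervised hashing; all data are synthetic.

        rng = np.random.default_rng(4)
        DIM, BITS = 16, 12
        planes = rng.normal(size=(BITS, DIM))  # shared random hyperplanes

        def hash_code(descriptor):
            """Binary code: sign pattern of projections onto the hyperplanes."""
            return (planes @ descriptor > 0.0).astype(np.uint8)

        def similarity(codes_a, codes_b):
            """Sum over codes in A of the best (lowest-Hamming) match in B."""
            return sum(max(BITS - int(np.sum(ca != cb)) for cb in codes_b)
                       for ca in codes_a)

        query = [hash_code(d) for d in rng.normal(size=(2, DIM))]
        archive = [[hash_code(d) for d in rng.normal(size=(3, DIM))]
                   for _ in range(5)]
        ranked = sorted(range(5), key=lambda i: -similarity(query, archive[i]))
        print("archive images ranked by similarity to the query:", ranked)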

  10. Prospecting for new physics in the Higgs and flavor sectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bishara, Fady

    We explore two directions in beyond-the-standard-model physics: dark matter model building and probing new sources of CP violation. In dark matter model building, we consider two scenarios where the stability of dark matter derives from the flavor symmetries of the standard model. The first model contains a flavor-singlet dark matter candidate whose couplings to the visible sector are proportional to the flavor-breaking parameters. This leads to a metastable dark matter with TeV-scale mediators. In the second model, we consider a fully gauged SU(3)³ flavor model with a flavor-triplet dark matter. Consequently, the dark matter multiplet is charged while the standard model fields are neutral under a remnant Z3, which ensures dark matter stability. We show that a Dirac fermion dark matter with radiative splitting in the multiplet must have a mass in the range [0.5, 5] TeV in order to satisfy all experimental constraints. We then turn our attention to Higgs portal dark matter and investigate the possibility of obtaining bounds on the up, down, and strange quark Yukawa couplings. If Higgs portal dark matter is discovered, we find that direct detection rates are insensitive to vanishing light-quark Yukawa couplings. We then review flavor models and give the expected enhancement or suppression of the Yukawa couplings in those models. Finally, in the last two chapters, we develop techniques for probing CP violation in the Higgs coupling to photons and in rare radiative decays of B mesons. While theoretically clean, these methods are not practical with current and planned detectors; however, they can be useful with a dedicated detector (e.g., a gaseous TPC). In the case of the radiative B meson decay B0 → (K* → Kππ)γ, the techniques we develop also allow the extraction of the photon polarization fraction, which is sensitive to new physics contributions since, in the standard model, the right- (left-) handed polarization fraction is of O(Λ_QCD/m_b) for B̄0 (B0) meson decays.

  11. Practical approach to subject-specific estimation of knee joint contact force.

    PubMed

    Knarr, Brian A; Higginson, Jill S

    2015-08-20

    Compressive forces experienced at the knee can significantly contribute to cartilage degeneration. Musculoskeletal models enable predictions of the internal forces experienced at the knee, but validation is often not possible, as experimental data detailing loading at the knee joint is limited. Recently available data reporting compressive knee force through direct measurement using instrumented total knee replacements offer a unique opportunity to evaluate the accuracy of models. Previous studies have highlighted the importance of subject-specificity in increasing the accuracy of model predictions; however, these techniques may be unrealistic outside of a research setting. Therefore, the goal of our work was to identify a practical approach for accurate prediction of tibiofemoral knee contact force (KCF). Four methods for prediction of knee contact force were compared: (1) standard static optimization, (2) uniform muscle coordination weighting, (3) subject-specific muscle coordination weighting and (4) subject-specific strength adjustments. Walking trials for three subjects with instrumented knee replacements were used to evaluate the accuracy of model predictions. Predictions utilizing subject-specific muscle coordination weighting yielded the best agreement with experimental data; however, this method required in vivo data for weighting factor calibration. Including subject-specific strength adjustments improved models' predictions compared to standard static optimization, with errors in peak KCF less than 0.5 body weight for all subjects. Overall, combining clinical assessments of muscle strength with standard tools available in the OpenSim software package, such as inverse kinematics and static optimization, appears to be a practical method for predicting joint contact force that can be implemented for many applications. Copyright © 2015 Elsevier Ltd. All rights reserved.
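
    A minimal Python sketch of the standard static-optimization step follows: a known joint moment is distributed across muscles by minimizing summed squared activations under a moment-balance constraint; the moment arms, maximum forces, and required moment are invented placeholders, whereas a real pipeline would take them from a scaled OpenSim model at each frame.

        import numpy as np
        from scipy.optimize import minimize

        # Sketch of standard static optimization: distribute a known joint
        # moment across muscles by minimizing summed squared activations
        # under a moment-balance constraint. Moment arms, maximum forces,
        # and the required moment are invented placeholders.

        r = np.array([0.04, 0.03, 0.02])            # moment arms (m)
        F_max = np.array([3000.0, 2500.0, 1500.0])  # max isometric forces (N)
        M_joint = 60.0                              # required moment (N m)

        def cost(a):
            return np.sum(a ** 2)

        moment_balance = {"type": "eq",
                          "fun": lambda a: r @ (a * F_max) - M_joint}
        res = minimize(cost, x0=np.full(3, 0.3), method="SLSQP",
                       bounds=[(0.0, 1.0)] * 3, constraints=[moment_balance])

        a = res.x
        print("activations:", a.round(3))
        print(f"moment check: {r @ (a * F_max):.1f} N m (target {M_joint})")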

  13. Isoelectric points and points of zero charge of metal (hydr)oxides: 50 years after Parks' review.

    PubMed

    Kosmulski, Marek

    2016-12-01

    The pH-dependent surface charging of metal (hydr)oxides is reviewed on the occasion of the 50th anniversary of the publication by G.A. Parks: "Isoelectric points of solid oxides, solid hydroxides, and aqueous hydroxo complex systems" in Chemical Reviews. The point of zero charge (PZC) and isoelectric point (IEP) became standard parameters for characterizing metal oxides in aqueous dispersions, and they govern adsorption (surface excess) of ions, stability against coagulation, rheological properties of dispersions, etc. They are commonly used in many branches of science including mineral processing, soil science, materials science, geochemistry, environmental engineering, and corrosion science. Parks established the standard procedures and experimental conditions required to obtain reliable and reproducible values of PZC and IEP. The field is very active, with the number of related papers exceeding 300 a year, and the standards established by Parks remain valid. The relevant experimental techniques have improved over the years (in particular, measurements of electrophoretic mobility have become easier and more reliable), and the numerical values of PZC and IEP compiled by Parks have been confirmed by contemporary publications, with a few exceptions. The present paper is an up-to-date compilation of the values of PZC and IEP of metal oxides. Unlike former reviews by the same author, which were more comprehensive, only a limited number of selected results are presented and discussed here. In addition to results obtained by means of the classical methods (titration and electrokinetic methods), new methods and correlations found over the past 50 years are presented. Copyright © 2016 Elsevier B.V. All rights reserved.
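
    As a small illustration of how an IEP is read off electrokinetic data, the following Python sketch linearly interpolates the pH at which the zeta potential crosses zero; the titration points are invented placeholders.

        import numpy as np

        # Sketch of reading an isoelectric point off electrokinetic data:
        # the IEP is the pH where the zeta potential crosses zero, found by
        # linear interpolation between the bracketing points. The titration
        # points are invented placeholders.

        pH = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
        zeta = np.array([38.0, 25.0, 11.0, -4.0, -18.0, -30.0])  # mV

        i = int(np.where(np.diff(np.sign(zeta)) < 0)[0][0])  # +/- crossing
        iep = pH[i] - zeta[i] * (pH[i + 1] - pH[i]) / (zeta[i + 1] - zeta[i])
        print(f"IEP ~ pH {iep:.2f}")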

  14. Analysis of statistical and standard algorithms for detecting muscle onset with surface electromyography.

    PubMed

    Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A

    2017-01-01

    The timing of muscle activity is a commonly applied analytic method for understanding how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms for determining muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope, and sample entropy were the established methods, while general time-series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold-standard onset and for their 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best-performing Bayesian algorithms used p0 = 0 and a posterior probability for onset determination of 60-90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine whether this class of algorithms performs equally well when the time series has multiple bursts of muscle activity.
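
    The following simplified Python sketch is in the spirit of the single-changepoint Bayesian approach described above: each candidate onset splits the series into two Gaussian segments, the segment likelihoods define a pseudo-posterior over onset location, and an onset is reported where the cumulative posterior passes a chosen threshold (cf. the 60-90% criterion); the plug-in variance treatment and the synthetic trace are simplifications, not the paper's exact algorithm.

        import numpy as np

        # Simplified single-changepoint sketch: each candidate onset splits
        # the series into two Gaussian segments; segment likelihoods define
        # a pseudo-posterior over onset location. Plug-in variances and the
        # synthetic trace are simplifications of the published method.

        def seg_loglik(x):
            """Gaussian log-likelihood with MLE mean and variance."""
            v = max(float(x.var()), 1e-12)
            return -0.5 * len(x) * (np.log(2.0 * np.pi * v) + 1.0)

        def onset_posterior(x, margin=5):
            taus = np.arange(margin, len(x) - margin)
            ll = np.array([seg_loglik(x[:t]) + seg_loglik(x[t:]) for t in taus])
            w = np.exp(ll - ll.max())
            return taus, w / w.sum()

        rng = np.random.default_rng(5)
        x = np.concatenate([0.02 * rng.normal(size=300),   # baseline noise
                            0.15 * rng.normal(size=200)])  # EMG burst

        taus, post = onset_posterior(x)
        map_onset = int(taus[np.argmax(post)])
        p60_onset = int(taus[np.searchsorted(np.cumsum(post), 0.60)])
        print(f"true onset 300, MAP {map_onset}, 60% cumulative {p60_onset}")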

  15. Physics Beyond the Standard Model: Exotic Leptons and Black Holes at Future Colliders

    NASA Astrophysics Data System (ADS)

    Harris, Christopher M.

    2005-02-01

    The Standard Model of particle physics has been remarkably successful in describing present experimental results. However, it is assumed to be only a low-energy effective theory which will break down at higher energy scales, theoretically motivated to be around 1 TeV. There are a variety of proposed models of new physics beyond the Standard Model, most notably supersymmetric and extra dimension models. New charged and neutral heavy leptons are a feature of a number of theories of new physics, including the 'intermediate scale' class of supersymmetric models. Using a time-of-flight technique to detect the charged leptons at the Large Hadron Collider, the discovery range (in the particular scenario studied in the first part of this thesis) is found to extend up to masses of 950 GeV. Extra dimension models, particularly those with large extra dimensions, allow the possible experimental production of black holes. The remainder of the thesis describes some theoretical results and computational tools necessary to model the production and decay of these miniature black holes at future particle colliders. The grey-body factors which describe the Hawking radiation emitted by higher-dimensional black holes are calculated numerically for the first time and then incorporated in a Monte Carlo black hole event generator; this can be used to model black hole production and decay at next-generation colliders. It is hoped that this generator will allow more detailed examination of black hole signatures and help to devise a method for extracting the number of extra dimensions present in nature.

  16. UIAGM Ropehandling Techniques.

    ERIC Educational Resources Information Center

    Cloutier, K. Ross

    The Union Internationale des Associations des Guides de Montagne's (UIAGM) rope handling techniques are intended to form the standard for guiding ropework worldwide. These techniques have become the legal standard for instructional institutions and commercial guiding organizations in UIAGM member countries: Austria, Canada, France, Germany, Great…

  17. Efficacy and safety of sunitinib alternate day regimen in patients with metastatic renal cell carcinoma in Japan: Comparison with standard 4/2 schedule.

    PubMed

    Ohba, Kojiro; Miyata, Yasuyoshi; Yasuda, Takuji; Asai, Akihiro; Mitsunari, Kensuke; Matsuo, Tomohiro; Mochizuki, Yasushi; Matsunaga, Noriko; Sakai, Hideki

    2018-06-01

    Sunitinib is a standard agent for metastatic renal cell carcinoma (mRCC). The standard schedule, 4 weeks on followed by 2 weeks off (4/2 schedule), often does not maintain an adequate dosage because of severe adverse events (AEs). We compared the efficacy and safety of an alternative every-other-day (q.a.d.) dosing schedule with that of the 4/2 schedule in mRCC patients. Of the 55 Japanese patients, 32 were administered the 4/2 schedule (standard group) and 23 the q.a.d. schedule (50 or 37.5 mg every other day; experimental group). The AEs, anticancer effects, and trough plasma concentrations of sunitinib were compared between them. The most common AE in the standard group was thrombocytopenia (43.2%), but it was observed in only two patients in the experimental group (8.7%). Although leukopenia and hand-foot syndrome were each detected in six patients (18.8%) in the standard group, no patients had these AEs in the experimental group. The incidence of dose interruption in the experimental group (21.7%) was significantly lower than that in the standard group (59.4%, P = 0.005). Time to progression (TTP) and overall survival (OS) in the experimental group were better than those in the standard group (P < 0.001 and P = 0.002, respectively). Mean plasma levels in the experimental group (64.83 ng/mL) were significantly lower than those in the standard group (135.82 ng/mL, P < 0.001). Sunitinib administered q.a.d. was safe and effective for mRCC patients. We speculate that persistent optimal drug plasma concentrations contributed to these effects. © 2018 The Authors. Asia-Pacific Journal of Clinical Oncology Published by John Wiley & Sons Australia, Ltd.

  18. Physical and optical properties of DCJTB dye for OLED display applications: Experimental and theoretical investigation

    NASA Astrophysics Data System (ADS)

    Kurban, Mustafa; Gündüz, Bayram

    2017-06-01

    In this study, 4-(dicyanomethylene)-2-tert-butyl-6-(1,1,7,7-tetramethyljulolidin-4-yl-vinyl)-4H-pyran (DCJTB) was investigated through combined experimental and theoretical studies. The electronic, optical, and spectroscopic properties of the DCJTB molecule were first examined experimentally, by both solution and thin-film techniques, and then by theoretical calculations. Theory placed the single intense electronic transition at 505.26 nm, in good agreement with the measured values of 505.00 nm (solution technique) and 503 nm (film technique). Experimental and simple models were also used to calculate the optical refractive index (n) of the DCJTB molecule. The structural and electronic properties were calculated using density functional theory (DFT) with the B3LYP/6-311G(d,p) basis set. UV and FT-IR spectral characteristics and electronic properties such as the frontier orbitals and band gap energy (Eg) of DCJTB were also obtained with the time-dependent (TD) DFT approach. The theoretical Eg value was found to be 2.269 eV, consistent with the experimental result obtained from the solution technique in THF solvent (2.155 eV) and with the literature (2.16 eV). The results reveal that the solution technique is simple, cost-efficient, and safe for optoelectronic applications compared with the film technique.
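
    A small worked conversion relating the reported wavelengths and energies follows (E in eV equals hc/λ with hc ≈ 1239.84 eV·nm); this is a generic check, not a calculation from the paper.

        # Photon-energy conversion relating the reported wavelengths and
        # band gaps: E [eV] = hc / lambda with hc ~ 1239.84 eV nm. A generic
        # check, not a calculation from the paper.

        HC_EV_NM = 1239.84

        for lam_nm in (505.26, 505.00, 503.0):
            print(f"lambda = {lam_nm:6.2f} nm -> E = {HC_EV_NM / lam_nm:.3f} eV")

        # The ~2.45 eV transition energies lie above the reported band gaps
        # (2.16-2.27 eV), consistent with the absorption maximum sitting at
        # a shorter wavelength than the band edge.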

  19. Parametric studies and characterization measurements of x-ray lithography mask membranes

    NASA Astrophysics Data System (ADS)

    Wells, Gregory M.; Chen, Hector T. H.; Engelstad, Roxann L.; Palmer, Shane R.

    1991-08-01

    The techniques used in the experimental characterization of thin membranes being considered as mask blanks for x-ray lithography are described. Among the parameters of interest for this evaluation are the film's stress, fracture strength, uniformity of thickness, absorption in the x-ray and visible spectral regions, and the modulus and grain structure of the material. The experimental techniques used for measuring these properties are described, and the accuracy and applicability of the assumptions used to derive the formulas that relate the experimental measurements to the parameters of interest are considered. Experimental results for silicon carbide and diamond films are provided. Another characteristic needed for an x-ray mask carrier is radiation stability: the number of x-ray exposures expected to be performed over the lifetime of an x-ray mask on a production line is on the order of 10⁷. The dimensional stability requirements placed on the membranes during this period are discussed, and interferometric techniques that provide sufficient sensitivity for these stability measurements are described. A comparison is made between the different techniques that have been developed in terms of the information each technique provides, the accuracy of the various techniques, and the implementation issues involved with each technique.

  20. From experimental zoology to big data: Observation and integration in the study of animal development.

    PubMed

    Bolker, Jessica; Brauckmann, Sabine

    2015-06-01

    The founding of the Journal of Experimental Zoology in 1904 was inspired by a widespread turn toward experimental biology in the 19th century. The founding editors sought to promote experimental, laboratory-based approaches, particularly in developmental biology. This agenda raised key practical and epistemological questions about how and where to study development: Does the environment matter? How do we know that a cell or embryo isolated to facilitate observation reveals normal developmental processes? How can we integrate descriptive and experimental data? R.G. Harrison, the journal's first editor, grappled with these questions in justifying his use of cell culture to study neural patterning. Others confronted them in different contexts: for example, F.B. Sumner insisted on the primacy of fieldwork in his studies on adaptation, but also performed breeding experiments using wild-collected animals. The work of Harrison, Sumner, and other early contributors exemplified both the power of new techniques, and the meticulous explanation of practice and epistemology that was marshaled to promote experimental approaches. A century later, experimentation is widely viewed as the standard way to study development; yet at the same time, cutting-edge "big data" projects are essentially descriptive, closer to natural history than to the approaches championed by Harrison et al. Thus, the original questions about how and where we can best learn about development are still with us. Examining their history can inform current efforts to incorporate data from experiment and description, lab and field, and a broad range of organisms and disciplines, into an integrated understanding of animal development. © 2015 Wiley Periodicals, Inc.
