Sample records for measure selection technique

  1. A review on creatinine measurement techniques.

    PubMed

    Mohabbati-Kalejahi, Elham; Azimirad, Vahid; Bahrami, Manouchehr; Ganbari, Ahmad

    2012-08-15

This paper reviews recent global trends in creatinine measurement. Creatinine biosensors involve complex relationships between biology and micro-mechatronics to which the blood is subjected. Comparison between new and old methods shows that new techniques (e.g. Molecular Imprinted Polymer (MIP) based sensors) outperform older methods (e.g. ELISA) in terms of stability and linear range. All methods and their details for serum, plasma, urine and blood samples are surveyed. They are categorized into five main approaches: optical, electrochemical, impedimetric, Ion Selective Field-Effect Transistor (ISFET) based techniques and chromatography. Response time, detection limit, linear range and selectivity of reported sensors are discussed. The potentiometric technique has the lowest response time (4-10 s), while the lowest detection limit (0.28 nmol L(-1)) belongs to the chromatographic technique. Comparison across measurement techniques indicates that the best selectivity belongs to the MIP-based and chromatographic techniques. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Polarization and Color Filtering Applied to Enhance Photogrammetric Measurements of Reflective Surfaces

    NASA Technical Reports Server (NTRS)

    Wells, Jeffrey M.; Jones, Thomas W.; Danehy, Paul M.

    2005-01-01

Techniques for enhancing photogrammetric measurement of reflective surfaces by reducing noise were developed using principles of light polarization. Signal selectivity with polarized light was also compared to signal selectivity using chromatic filters. Combining principles of linear cross-polarization and color selectivity enhanced signal-to-noise ratios by as much as 800-fold; more typical improvements from combining polarization and color selectivity were about 100-fold. We review polarization-based techniques and present experimental results comparing the performance of traditional retroreflective targeting materials, corner-cube targets returning depolarized light, and color selectivity.

  3. Inefficiency of Signal Amplification by Post-selection

    NASA Astrophysics Data System (ADS)

    Tanaka, Saki; Yamamoto, Naoki

Based on the two-state vector formalism, Aharonov, Albert and Vaidman found a measurement scheme in which the measured value of a spin-1/2 particle can turn out to be 100 [1]. The measurement result is called the weak value, and this value depends on the pre- and post-selected states. The weak value becomes infinitely large when the post-selected state is orthogonal to the pre-selected state. Using this feature, weak measurement has been applied as an amplification technique. However, the success of the post-selection depends on chance, and this technique does not always work. We take into account the loss due to post-selection and evaluate this amplification using quantum estimation theory. As a result, we obtain an inequality which shows that post-selection does not improve estimation accuracy when the number of states is limited.
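The weak value mentioned above has a simple closed form, A_w = ⟨φ|A|ψ⟩ / ⟨φ|ψ⟩, which grows without bound as the pre- and post-selected states approach orthogonality. A minimal numerical sketch (the operator and states below are illustrative, not taken from the paper):

```python
import numpy as np

def weak_value(pre, post, A):
    """Weak value A_w = <post|A|pre> / <post|pre>.

    It can lie far outside the eigenvalue spectrum of A when the
    pre- and post-selected states are nearly orthogonal.
    """
    pre = np.asarray(pre, dtype=complex)
    post = np.asarray(post, dtype=complex)
    return (post.conj() @ (A @ pre)) / (post.conj() @ pre)

# Illustrative: Pauli-x (eigenvalues +/-1) with nearly orthogonal states
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
t = 0.05
pre = np.array([np.cos(t), np.sin(t)])   # pre-selected state
post = np.array([np.sin(t), np.cos(t)])  # post-selected state
A_w = weak_value(pre, post, sigma_x)     # equals 1/sin(2t) for this choice
```

For these states the weak value is 1/sin(2t), roughly 10 for t = 0.05, even though the eigenvalues of σx are ±1.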

  4. Statistical approach for selection of biologically informative genes.

    PubMed

    Das, Samarendra; Rai, Anil; Mishra, D C; Rai, Shesh N

    2018-05-20

Selection of informative genes from high-dimensional gene expression data has emerged as an important research area in genomics. Many gene selection techniques proposed so far are based on either a relevancy or a redundancy measure. Further, the performance of these techniques has been adjudged through post-selection classification accuracy computed by a classifier using the selected genes. This performance metric may be statistically sound but may not be biologically relevant. A statistical approach, i.e. Boot-MRMR, was proposed based on a composite measure of maximum relevance and minimum redundancy, which is both statistically sound and biologically relevant for informative gene selection. For comparative evaluation of the proposed approach, we developed two biologically sufficient criteria, i.e. Gene Set Enrichment with QTL (GSEQ) and a biological similarity score based on Gene Ontology (GO). Further, a systematic and rigorous evaluation of the proposed technique against 12 existing gene selection techniques was carried out using five gene expression datasets. This evaluation was based on a broad spectrum of statistically sound (e.g. subject classification) and biologically relevant (based on QTL and GO) criteria under a multiple-criteria decision-making framework. The performance analysis showed that the proposed technique selects informative genes which are more biologically relevant. The proposed technique is also quite competitive with the existing techniques with respect to subject classification and computational time. Our results also showed that under the multiple-criteria decision-making setup, the proposed technique is best for informative gene selection over the available alternatives. Based on the proposed approach, an R package, i.e. BootMRMR, has been developed and is available at https://cran.r-project.org/web/packages/BootMRMR. This study will provide a practical guide to selecting statistical techniques for identifying informative genes from high-dimensional expression data for breeding and systems biology studies. Published by Elsevier B.V.
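The maximum-relevance/minimum-redundancy idea underlying Boot-MRMR can be sketched with a simple greedy correlation-based selector. This is a hedged illustration only: the published method adds bootstrap resampling and statistical testing on top of this core, and ships as the BootMRMR R package cited above.

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy maximum-relevance minimum-redundancy gene selection (sketch).

    Relevance: |correlation| of each gene (column of X) with the label y.
    Redundancy: mean |correlation| with the already-selected genes.
    Each step adds the gene maximizing (relevance - redundancy).
    """
    n_genes = X.shape[1]
    relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_genes)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_genes):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

On a toy matrix where gene 1 duplicates gene 0, the redundancy term steers the second pick away from the duplicate even though its relevance is maximal.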

  5. Techniques for determining total body water using deuterium oxide

    NASA Technical Reports Server (NTRS)

    Bishop, Phillip A.

    1990-01-01

The measurement of total body water (TBW) is fundamental to the study of body fluid changes consequent to microgravity exposure or treatment with microgravity countermeasures. Often, the use of radioactive isotopes is prohibited for safety or other reasons. The method selected and implemented by some Johnson Space Center (JSC) laboratories had to permit serial measurements over a 14-day period and be accurate enough to serve as a criterion method for validating new techniques. These requirements resulted in the selection of deuterium oxide dilution as the method of choice for TBW measurement. The development of this technique at JSC is reviewed. The recommended dosage, body fluid sampling techniques, and deuterium assay options are described.

  6. Accuracy of selected techniques for estimating ice-affected streamflow

    USGS Publications Warehouse

    Walker, John F.

    1991-01-01

This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories - subjective and analytical - depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques are used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
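The Kruskal-Wallis test used above to compare hydrographers amounts to ranking the pooled daily errors and comparing each hydrographer's mean rank to the grand mean rank. A minimal sketch, with tie correction omitted and purely illustrative data:

```python
import numpy as np

def kruskal_wallis_H(*groups):
    """Kruskal-Wallis H statistic (no tie correction; a sketch).

    Ranks the pooled observations, then measures how far each group's
    mean rank departs from the grand mean rank (N + 1) / 2.
    """
    data = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    n = len(data)
    ranks = np.empty(n)
    ranks[np.argsort(data)] = np.arange(1, n + 1)  # rank 1 = smallest value
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += len(g) * (r.mean() - (n + 1) / 2.0) ** 2
        start += len(g)
    return 12.0 / (n * (n + 1)) * h
```

In practice H is compared against a chi-square distribution with (number of groups - 1) degrees of freedom; interleaved groups give a small H, well-separated groups a large one.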

  7. Data selection techniques in the interpretation of MAGSAT data over Australia

    NASA Technical Reports Server (NTRS)

    Johnson, B. D.; Dampney, C. N. G.

    1983-01-01

The MAGSAT data require critical selection in order to produce a self-consistent data set suitable for map construction and subsequent interpretation. Interactive data selection techniques are described which involve the use of a special-purpose, profile-oriented data base and a colour graphics display. The careful application of these data selection techniques permits validation of every data value and ensures that the best possible self-consistent data set is used to construct the maps of the magnetic field measured at satellite altitudes over Australia.

  8. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
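For the special case of two classes with equal priors and equal covariances, the one-dimensional probability of misclassification after projecting onto a direction b has a closed form, and Fisher's discriminant direction minimizes it. A hedged sketch of that special case only (LFSPMC itself handles m classes and estimated covariances):

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def projected_error(mu1, mu2, sigma, b):
    """Misclassification probability after projecting two equal-prior,
    equal-covariance normal classes onto direction b: Phi(-delta/2),
    where delta is the separation of the projected means in std units."""
    b = np.asarray(b, dtype=float)
    m1, m2 = b @ mu1, b @ mu2
    s = sqrt(b @ sigma @ b)
    return norm_cdf(-abs(m1 - m2) / (2.0 * s))

def fisher_direction(mu1, mu2, sigma):
    """Error-minimizing linear combination for the equal-covariance case."""
    return np.linalg.solve(sigma, np.asarray(mu1) - np.asarray(mu2))
```

For example, with mu1 = (0, 0), mu2 = (2, 0) and identity covariance, the optimal direction is the axis joining the means and the error is Phi(-1), about 0.159; any other direction does no better.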

  9. Laser two focus techniques

    NASA Astrophysics Data System (ADS)

    Schodl, R.

The development of laser two-focus velocimetry is reviewed. The fundamentals of this nonintrusive fluid flow velocity measurement technique are described, with emphasis on recent advances. Results of measurements in a very small flow channel and in a small turbocharger compressor rotor are presented. The influence of the beam diameter to beam separation ratio on measuring accuracy and measuring time is treated. A multicolor, two-dimensional system with selectable beam separation is presented. The laser Doppler and laser two-focus techniques are compared.

  10. Techniques used by United Kingdom consultant plastic surgeons to select implant size for primary breast augmentation.

    PubMed

    Holmes, W J M; Timmons, M J; Kauser, S

    2015-10-01

Techniques used to estimate implant size for primary breast augmentation have evolved since the 1970s. Currently no consensus exists on the optimal method of selecting implant size for primary breast augmentation. In 2013 we asked United Kingdom consultant plastic surgeons who were full members of BAPRAS or BAAPS what their technique was for implant size selection for primary aesthetic breast augmentation. We also asked what range of implant sizes they commonly used. The answers to question one were grouped into four categories: experience, measurements, pre-operative external sizers and intra-operative sizers. The response rate was 46% (164/358). Overall, 95% (153/159) of respondents performed some form of pre-operative assessment; the others relied on "experience" only. The most common technique for pre-operative assessment was external sizers (74%). Measurements were used by 57% of respondents, and 3% used intra-operative sizers only. A combination of measurements and sizers was used by 34% of respondents. The most common measurements were breast base (68%), breast tissue compliance (19%), breast height (15%), and chest diameter (9%). The median implant size commonly used in primary breast augmentation was 300 cc. Pre-operative external sizers are the most common technique used by UK consultant plastic surgeons to select implant size for primary breast augmentation. We discuss these findings in relation to the evolution of pre-operative planning techniques for breast augmentation. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  11. MASS MEASUREMENTS BY AN ACCURATE AND SENSITIVE SELECTED ION RECORDING TECHNIQUE

    EPA Science Inventory

    Trace-level components of mixtures were successfully identified or confirmed by mass spectrometric accurate mass measurements, made at high resolution with selected ion recording, using GC and LC sample introduction. Measurements were made at 20 000 or 10 000 resolution, respecti...

  12. Weak Value Amplification of a Post-Selected Single Photon

    NASA Astrophysics Data System (ADS)

    Hallaji, Matin

Weak value amplification (WVA) is a measurement technique in which the effect of a pre- and post-selected system on a weakly interacting probe is magnified. In this thesis, I present the first experimental observation of WVA of a single photon. We observed that a signal photon --- sent through a polarization interferometer and post-selected by photodetection in the almost-dark port --- can act like eight photons. The effect of this single photon is measured as a nonlinear phase shift on a separate laser beam. The interaction between the two is mediated by a sample of laser-cooled 85Rb atoms. Electromagnetically induced transparency (EIT) is used to enhance the nonlinearity and overcome resonant absorption. I believe this work to be the first demonstration of WVA where a deterministic interaction is used to entangle two distinct optical systems. In WVA, the amplification is contingent on discarding a large portion of the original data set. While amplification increases measurement sensitivity, discarding data worsens it. Whether these competing effects conspire to improve or diminish measurement accuracy has recently been a subject of controversy. I address this question by calculating the maximum amount of information achievable with the WVA technique. By comparing this information to that achievable by the standard technique, where no post-selection is employed, I show that the WVA technique can be advantageous under a certain class of noise models. Finally, I propose a way to optimally apply the WVA technique.

  13. Nitric oxide selective electrodes.

    PubMed

    Davies, Ian R; Zhang, Xueji

    2008-01-01

    Since nitric oxide (NO) was identified as the endothelial-derived relaxing factor in the late 1980s, many approaches have attempted to provide an adequate means for measuring physiological levels of NO. Although several techniques have been successful in achieving this aim, the electrochemical method has proved the only technique that can reliably measure physiological levels of NO in vitro, in vivo, and in real time. We describe here the development of electrochemical sensors for NO, including the fabrication of sensors, the detection principle, calibration, detection limits, selectivity, and response time. Furthermore, we look at the many experimental applications where NO selective electrodes have been successfully used.

  14. Applications Of Measurement Techniques To Develop Small-Diameter, Undersea Fiber Optic Cables

    NASA Astrophysics Data System (ADS)

    Kamikawa, Neil T.; Nakagawa, Arthur T.

    1984-12-01

    Attenuation, strain, and optical time domain reflectometer (OTDR) measurement techniques were applied successfully in the development of a minimum-diameter, electro-optic sea floor cable. Temperature and pressure models for excess attenuation in polymer coated, graded-index fibers were investigated analytically and experimentally using these techniques in the laboratory. The results were used to select a suitable fiber for the cable. Measurements also were performed on these cables during predeployment and sea-trial testing to verify laboratory results. Application of the measurement techniques and results are summarized in this paper.

  15. The Chronic and Acute Effects of Exercise Upon Selected Blood Measures.

    ERIC Educational Resources Information Center

    Roitman, J. L.; Brewer, J. P.

    This study investigated the effects of chronic and acute exercise upon selected blood measures and indices. Nine male cross-country runners were studied. Red blood count, hemoglobin, and hematocrit were measured using standard laboratory techniques; mean corpuscular volume (MCV), mean corpuscular hemoglobin, and mean corpuscular hemoglobin…

  16. Measurement techniques and instruments suitable for life-prediction testing of photovoltaic arrays

    NASA Technical Reports Server (NTRS)

    Noel, G. T.; Sliemers, F. A.; Deringer, G. C.; Wood, V. E.; Wilkes, K. E.; Gaines, G. B.; Carmichael, D. C.

    1978-01-01

Array failure modes, relevant materials property changes, and primary degradation mechanisms are discussed as a prerequisite to identifying suitable measurement techniques and instruments. Candidate techniques and instruments are identified on the basis of extensive reviews of published and unpublished information. These methods are organized into six measurement categories - chemical, electrical, optical, thermal, mechanical, and other physical methods. Using specified evaluation criteria, the most promising techniques and instruments for use in life-prediction tests of arrays were selected.

  17. Thermal sensing of cryogenic wind tunnel model surfaces - Evaluation of silicon diodes

    NASA Technical Reports Server (NTRS)

    Daryabeigi, K.; Ash, R. L.; Dillon-Townes, L. A.

    1986-01-01

    Different sensors and installation techniques for surface temperature measurement of cryogenic wind tunnel models were investigated. Silicon diodes were selected for further consideration because of their good inherent accuracy. Their average absolute temperature deviation in comparison tests with standard platinum resistance thermometers was found to be 0.2 K in the range from 125 to 273 K. Subsurface temperature measurement was selected as the installation technique in order to minimize aerodynamic interference. Temperature distortion caused by an embedded silicon diode was studied numerically.

  19. Investigation of the feasibility of optical diagnostic measurements at the exit of the SSME

    NASA Technical Reports Server (NTRS)

    Shirley, John A.; Boedeker, Laurence R.

    1993-01-01

Under Contract NAS8-36861 sponsored by NASA Marshall Space Flight Center, the United Technologies Research Center is conducting an investigation of the feasibility of remote optical diagnostics to measure temperature, species concentration and velocity at the exit of the Space Shuttle Main Engine (SSME). This is a two-phase study consisting of a conceptual design phase followed by a laboratory experimental investigation. The first task of the conceptual design studies is to screen and evaluate the techniques which can be used for the measurements. The second task is to select the most promising technique or techniques if, as expected, more than one type of measurement must be used to measure all the flow variables of interest. The third task is to examine in detail analytically the capabilities and limitations of the selected technique(s). The results of this study are described in the section of this report entitled Conceptual Design Investigations. The conceptual design studies identified spontaneous Raman scattering for measurements of gas temperature and major species concentration, and photodissociative flow-tagging for velocity. These techniques and others that were considered are described in the section on the conceptual design. The objective of the second phase was to investigate experimentally the techniques identified in the first phase. The first task of the experimental feasibility study is to design and assemble laboratory-scale experimental apparatus to evaluate the best approaches for SSME exit optical diagnostics for temperature, species concentrations and velocity, as selected in the Phase I conceptual design study. The second task is to evaluate performance, investigate limitations, and establish actual diagnostic capabilities, accuracies and precision for the selected optical systems. The third task is to evaluate design requirements and system trade-offs of conceptual instruments. 
Spontaneous Raman scattering excited by a KrF excimer laser pulse was investigated for SSME exit plane temperature and major species concentration measurements. The relative concentrations of molecular hydrogen and water vapor would be determined by measuring the integrated Q-branch scattering signals through narrow bandpass filters in front of photomultipliers. The temperature would be determined by comparing the signal from a single hydrogen rotational Raman line to the total hydrogen Q-branch signal. The rotational Raman line would be isolated by a monochromator and detected with a PMT.

  20. COAL SULFUR MEASUREMENTS

    EPA Science Inventory

    The report describes a new technique for sulfur forms analysis based on low-temperature oxygen plasma ashing. The technique involves analyzing the low-temperature plasma ash by modified ASTM techniques after selectively removing the organic material. The procedure has been tested...

  1. Optimum electrode configuration selection for electrical resistance change based damage detection in composites using an effective independence measure

    NASA Astrophysics Data System (ADS)

    Escalona, Luis; Díaz-Montiel, Paulina; Venkataraman, Satchi

    2016-04-01

Laminated carbon fiber reinforced polymer (CFRP) composite materials are increasingly used in aerospace structures due to their superior mechanical properties and reduced weight. Assessing the health and integrity of these structures requires non-destructive evaluation (NDE) techniques to detect and measure interlaminar delamination and intralaminar matrix cracking damage. The electrical resistance change (ERC) based NDE technique uses the inherent changes in the conductive properties of the composite to characterize internal damage. Several works that have explored the ERC technique have been limited to thin cross-ply laminates with simple linear or circular electrode arrangements. This paper investigates a method for optimum selection of electrode configurations for delamination detection in thick cross-ply laminates using ERC. Inverse identification of damage requires numerical optimization of the measured response against a model-predicted response. Here, the electrical voltage field in the CFRP composite laminate is calculated using finite element analysis (FEA) models for different specified delamination sizes and locations, and for different locations of the ground and current electrodes. Reducing the number of sensor locations and measurements is needed to reduce hardware requirements and the computational effort of inverse identification. This paper explores the use of the effective independence (EI) measure originally proposed for sensor location optimization in experimental vibration modal analysis. The EI measure is used for selecting the minimum set of resistance measurements among all possible combinations of choosing a pair of electrodes from the n electrodes. To enable the application of EI to ERC, this research proposes a singular value decomposition (SVD) to obtain a spectral representation of the resistance measurements in the laminate. 
The effectiveness of EI measure in eliminating redundant electrode pairs is demonstrated by performing inverse identification of damage using the full set of resistance measurements and the reduced set of measurements. The investigation shows that the EI measure is effective for optimally selecting the electrode pairs needed for resistance measurements in ERC based damage detection.
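The effective-independence reduction described above can be sketched as iteratively dropping the candidate measurement whose row contributes least to the independence of the retained basis (its leverage). The matrix below is illustrative; in the paper the basis would come from an SVD of FEA-simulated resistance responses:

```python
import numpy as np

def effective_independence(phi, n_keep):
    """Iteratively discard candidates with the lowest effective-independence
    (EI) contribution.

    phi: (m candidate measurements) x (r basis vectors) matrix, e.g. the
    left singular vectors of a matrix of simulated responses.
    Returns the indices of the retained measurements.
    """
    keep = list(range(phi.shape[0]))
    while len(keep) > n_keep:
        A = phi[keep, :]
        # EI of each retained row: leverage = diag(A (A^T A)^{-1} A^T)
        G = np.linalg.inv(A.T @ A)
        ei = np.einsum('ij,jk,ik->i', A, G, A)
        keep.pop(int(np.argmin(ei)))
    return keep
```

On a small example where one candidate row is a near-zero copy of another direction, that near-redundant row is the first to be discarded.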

  2. Derivative information recovery by a selective integration technique

    NASA Technical Reports Server (NTRS)

    Johnson, M. A.

    1974-01-01

    A nonlinear stationary homogeneous digital filter DIRSIT (derivative information recovery by a selective integration technique) is investigated. The spectrum of a quasi-linear discrete describing function (DDF) to DIRSIT is obtained by a digital measuring scheme. A finite impulse response (FIR) approximation to the quasi-linearization is then obtained. Finally, DIRSIT is compared with its quasi-linear approximation and with a standard digital differentiating technique. Results indicate the effects of DIRSIT on a wide variety of practical signals.

  3. Flight control synthesis for flexible aircraft using Eigenspace assignment

    NASA Technical Reports Server (NTRS)

    Davidson, J. B.; Schmidt, D. K.

    1986-01-01

The use of eigenspace assignment techniques to synthesize flight control systems for flexible aircraft is explored. Eigenspace assignment techniques are used to achieve a specified desired eigenspace, chosen to yield desirable system impulse residue magnitudes for selected system responses. Two such techniques are investigated. The first directly determines constant measurement feedback gains that will yield a closed-loop system eigenspace close to a desired eigenspace. The second technique selects quadratic weighting matrices in a linear quadratic control synthesis that will asymptotically yield the closed-loop achievable eigenspace. Finally, the possibility of using either of these techniques with state estimation is explored. Application of the methods to synthesize integrated flight-control and structural-mode-control laws for a large flexible aircraft is demonstrated and the results discussed. Eigenspace selection criteria based on design goals are discussed, and for the study case it appears that a desirable eigenspace can be obtained. In addition, the importance of state-space selection is noted, along with problems with reduced-order measurement feedback. Since the full-state control laws may be implemented with dynamic compensation (state estimation), the use of reduced-order measurement feedback is less desirable. This is especially true since no change in the transient response from the pilot's input results if state estimation is used appropriately. The potential for high actuator bandwidth requirements is also noted if the linear quadratic synthesis approach is utilized. Even with the actuator pole location selected, a problem with unmodeled modes arises due to the high bandwidth. 
Some suggestions for future research include investigating how to choose an eigenspace that will achieve certain desired dynamics and stability robustness, determining how the choice of measurements affects synthesis results, and exploring how the phase relationships between desired eigenvector elements affect the synthesis results.

  4. Impact of the Z potential technique on reducing the sperm DNA fragmentation index, fertilization rate and embryo development.

    PubMed

    Duarte, Carlos; Núñez, Víctor; Wong, Yat; Vivar, Carlos; Benites, Elder; Rodriguez, Urso; Vergara, Carlos; Ponce, Jorge

    2017-12-01

In assisted reproduction procedures, we need to develop and enhance new protocols to optimize sperm selection. The aim of this study is to evaluate the ability of the Z potential technique to select sperm with intact DNA in non-normospermic patients and to evaluate the impact of this selection on embryonic development. We analyzed a total of 174 human seminal samples with at least one altered parameter. We measured the DNA fragmentation index at baseline, after density gradients, and after density gradients + Z potential. To evaluate the impact of this technique on embryo development, 54 cases were selected. The embryo development parameters evaluated were fertilization rate, cleavage rate, top-quality embryos at day three and blastocyst rate. We found significant differences among the study groups when we compared the sperm fragmentation index after adding the Z potential technique to density gradient selection vs. density gradients alone. Furthermore, there was no significant difference in the embryo development parameters between the low sperm fragmentation index group vs. the moderate and high sperm fragmentation index groups when selecting sperm with this new technique. The Z potential technique is a very useful tool for sperm selection; it significantly reduces the DNA fragmentation index and improves the parameters of embryo development. This technique could be considered routine for its simplicity and low cost.

  5. Evaluation of methods for rapid determination of freezing point of aviation fuels

    NASA Technical Reports Server (NTRS)

    Mathiprakasam, B.

    1982-01-01

    Methods for identification of the more promising concepts for the development of a portable instrument to rapidly determine the freezing point of aviation fuels are described. The evaluation process consisted of: (1) collection of information on techniques previously used for the determination of the freezing point, (2) screening and selection of these techniques for further evaluation of their suitability in a portable unit for rapid measurement, and (3) an extensive experimental evaluation of the selected techniques and a final selection of the most promising technique. Test apparatuses employing differential thermal analysis and the change in optical transparency during phase change were evaluated and tested. A technique similar to differential thermal analysis using no reference fuel was investigated. In this method, the freezing point was obtained by digitizing the data and locating the point of inflection. Results obtained using this technique compare well with those obtained elsewhere using different techniques. A conceptual design of a portable instrument incorporating this technique is presented.
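Locating the freezing point as the inflection of a digitized cooling curve, as described above, can be sketched as finding where the magnitude of the numerical first derivative peaks (the second derivative's zero crossing). The data below are synthetic, not the report's measurements:

```python
import numpy as np

def inflection_time(t, signal):
    """Return the abscissa where a smooth sigmoidal signal inflects,
    i.e. where the magnitude of its first derivative peaks."""
    d1 = np.gradient(signal, t)  # central differences on the digitized curve
    return t[np.argmax(np.abs(d1))]

# Synthetic "cooling curve": a smooth step centered at t = 5
t = np.linspace(0.0, 10.0, 1001)
signal = np.tanh(t - 5.0)
```

Calling inflection_time(t, signal) on this synthetic curve returns approximately 5.0; real data would typically be smoothed before differentiation.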

  6. Biomechanical and energetic determinants of technique selection in classical cross-country skiing.

    PubMed

    Pellegrini, Barbara; Zoppirolli, Chiara; Bortolan, Lorenzo; Holmberg, Hans-Christer; Zamparo, Paola; Schena, Federico

    2013-12-01

Classical cross-country skiing can be performed using three main techniques: diagonal stride (DS), double poling (DP), and double poling with kick (DK). Similar to other forms of human and animal gait, it is currently unclear whether technique selection occurs to minimize metabolic cost or to keep some mechanical factors below a given threshold. The aim of this study was to find the determinants of technique selection. Ten male athletes roller-skied on a treadmill at different slopes (from 0° to 7° at 10 km/h) and speeds (from 6 to 18 km/h at 2°). The technique preferred by the skiers was recorded for every proposed condition. Biomechanical parameters and metabolic cost were then measured for each condition and technique. Skiers preferred DP for skiing on the flat, and they transitioned to DK and then to DS with increasing slope steepness; with increasing speed, all skiers preferred DP. The data suggest that selections mainly occur to remain below a threshold of poling force. Second, critically low values of leg thrust time may limit the use of leg-based techniques at high speeds. A small role was identified for the metabolic cost of locomotion, which determined the selection of DP for flat skiing. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Indirect Gas Species Monitoring Using Tunable Diode Lasers

    DOEpatents

    Von Drasek, William A.; Saucedo, Victor M.

    2005-02-22

    A method for indirect gas species monitoring based on measurements of selected gas species is disclosed. In situ absorption measurements of combustion species are used for process control and optimization. The gas species accessible by near- or mid-IR techniques are limited to those that absorb in this spectral region. An absorption transition is selected that is strong enough for the required sensitivity and sufficiently isolated from neighboring absorption transitions. By coupling the gas measurement with a software sensor, gas species not accessible from the near- or mid-IR absorption measurement can be predicted.
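    The software-sensor coupling might be sketched as a simple regression from a directly measured species to one with no usable absorption line. The linear model form and the calibration numbers below are illustrative assumptions only; the patent does not specify the model:

```python
def fit_soft_sensor(x, y):
    """Fit a least-squares line y ~ a*x + b, where x is a species
    measured directly by IR absorption and y a species with no usable
    line, to be predicted indirectly (a 'software sensor')."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    return lambda x_new: a * x_new + b  # predictor for new measurements

# Hypothetical calibration pairs: measured species vs. lab-analyzed target
predict = fit_soft_sensor([1.0, 2.0, 3.0, 4.0], [2.1, 4.1, 6.1, 8.1])
```

In practice such soft sensors use multivariate or nonlinear models trained on many combustion states; the one-variable line only shows where the prediction step sits.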

  8. Comparison of Three Optical Methods for Measuring Model Deformation

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Fleming, G. A.; Hoppe, J. C.

    2000-01-01

    The objective of this paper is to compare the current state-of-the-art of the following three optical techniques under study by NASA for measuring model deformation in wind tunnels: (1) video photogrammetry, (2) projection moire interferometry, and (3) the commercially available Optotrak system. An objective comparison of these three techniques should enable the selection of the best technique for a particular test undertaken at various NASA facilities. As might be expected, no one technique is best for all applications. The techniques are also not necessarily mutually exclusive and in some cases can be complementary to one another.

  9. Site-Selection in Single-Molecule Junction for Highly Reproducible Molecular Electronics.

    PubMed

    Kaneko, Satoshi; Murai, Daigo; Marqués-González, Santiago; Nakamura, Hisao; Komoto, Yuki; Fujii, Shintaro; Nishino, Tomoaki; Ikeda, Katsuyoshi; Tsukagoshi, Kazuhito; Kiguchi, Manabu

    2016-02-03

    Adsorption sites of molecules critically determine the electric/photonic properties and the stability of heterogeneous molecule-metal interfaces. Selectivity of the adsorption site is therefore essential for progress in fields including organic electronics, catalysis, and biology. However, owing to current technical limitations, site-selectivity, i.e., precise determination of the molecular adsorption site, remains a major challenge because of the difficulty of singling out one meaningful site among the candidates. We have achieved single-site selection at a single-molecule junction by means of a newly developed hybrid technique: simultaneous characterization by surface-enhanced Raman scattering (SERS) and current-voltage (I-V) measurements. The I-V response of 1,4-benzenedithiol junctions reveals the existence of three metastable states arising from different adsorption sites. Notably, correlated SERS measurements show selectivity toward one of the adsorption sites: bridge sites. This site-selectivity represents an essential step toward the reliable integration of individual molecules on metallic surfaces. Furthermore, the hybrid spectro-electric technique reveals the dependence of the SERS intensity on the strength of the molecule-metal interaction, showing the interdependence between the optical and electronic properties of single-molecule junctions.

  10. The efficacy of the 'mind map' study technique.

    PubMed

    Farrand, Paul; Hussain, Fearzana; Hennessy, Enid

    2002-05-01

    To examine the effectiveness of using the 'mind map' study technique to improve factual recall from written information. To obtain baseline data, subjects completed a short test based on a 600-word passage of text prior to being randomly allocated to form two groups: 'self-selected study technique' and 'mind map'. After a 30-minute interval the self-selected study technique group were exposed to the same passage of text previously seen and told to apply existing study techniques. Subjects in the mind map group were trained in the mind map technique and told to apply it to the passage of text. Recall was measured after an interfering task and a week later. Measures of motivation were taken. Barts and the London School of Medicine and Dentistry, University of London. 50 second- and third-year medical students. Recall of factual material improved for both the mind map and self-selected study technique groups at immediate test compared with baseline. However, this improvement was only robust after a week for those in the mind map group. At 1 week, factual knowledge in the mind map group was greater by 10% (adjusting for baseline; 95% CI -1% to 22%). However, motivation for the technique used was lower in the mind map group; if motivation could have been made equal in the groups, the improvement with mind mapping would have been 15% (95% CI 3% to 27%). Mind maps provide an effective study technique when applied to written material. However, before mind maps are generally adopted as a study technique, consideration has to be given to ways of improving motivation amongst users.

  11. Overview of Supersonic Aerodynamics Measurement Techniques in the NASA Langley Unitary Plan Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Erickson, Gary E.

    2007-01-01

    An overview is given of selected measurement techniques used in the NASA Langley Research Center (NASA LaRC) Unitary Plan Wind Tunnel (UPWT) to determine the aerodynamic characteristics of aerospace vehicles operating at supersonic speeds. A broad definition of a measurement technique is adopted in this paper: any qualitative or quantitative experimental approach that provides information leading to improved understanding of the supersonic aerodynamic characteristics. On-surface and off-surface measurement techniques used to obtain discrete (point) and global (field) measurements and planar and global flow visualizations are described, and examples of all methods are included. The discussion is limited to recent experiences in the UPWT and is, therefore, not an exhaustive review of existing experimental techniques. The diversity and high quality of the measurement techniques and the resultant data illustrate the capabilities of a ground-based experimental facility and the key role that it plays in the advancement of our understanding, prediction, and control of supersonic aerodynamics.

  12. Fibre Optic Sensors for Selected Wastewater Characteristics

    PubMed Central

    Chong, Su Sin; Abdul Aziz, A. R.; Harun, Sulaiman W.

    2013-01-01

    Demand for online, real-time measurement techniques to meet environmental regulation and treatment compliance is increasing. The conventional techniques, which involve scheduled sampling and chemical analysis, can be expensive and time-consuming, so cheaper and faster alternatives for monitoring wastewater characteristics are required. This paper reviews existing conventional techniques alongside optical and fibre optic sensors for determining selected wastewater characteristics: colour, Chemical Oxygen Demand (COD), and Biological Oxygen Demand (BOD). The review confirms that, with appropriate configuration, calibration, and fibre features, these parameters can be determined with accuracy comparable to conventional methods. With more research in this area, the potential for using FOS for online, real-time measurement of more wastewater parameters in various types of industrial effluent is promising. PMID:23881131

  13. Comparison of holographic setups used in heat and mass transfer measurement

    NASA Astrophysics Data System (ADS)

    Doleček, R.; Psota, P.; Lédl, V.; Vít, T.; Kopecký, V.

    2014-03-01

    The authors have measured heat and mass transfer for several years and have frequently used a few techniques for measuring refractive index distributions based on holographic interferometry. Some of the well-known techniques have been modified and some new ones developed. Each technique can be applied with success to a different type of measurement, and each has a set of properties that makes it unique. We have selected a few basic techniques and describe their properties in this paper, with the aim of helping the reader select the proper one for their measurement. The list of techniques and their properties is not comprehensive but should serve as a basic orientation in the field.

  14. Selected Literature According to Subject Field. Measurement and Control Techniques in Nuclear Reactors. Bibliographic Compilation; AUSGEWAHLTES SCHRIFTTUM NACH SACHGEBIETEN. MESS- UND REGELTECHNIK AN KERNREAKTOREN. BIBLIOGRAPHISCHE ZUSAMMENSTELLUNG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gessmann, D., comp

    1963-11-01

    One hundred and eighty-one references on measurement and control techniques in nuclear reactors are presented. The period covered is Jan. 1 to Dec. 31, 1962. The references are arranged by subject and report number and author indexes are included. (M.C.G.)

  15. Factors Affecting the Adoption of R&D Project Selection Techniques at the Air Force Wright Aeronautical Laboratories

    DTIC Science & Technology

    1988-09-01

    tested. To measure the adequacy of the sample, the Kaiser-Meyer-Olkin measure of sampling adequacy was used. This technique is described in Factor... Due to the relatively large number of variables, there was concern about the adequacy of the sample size. A Kaiser-Meyer-Olkin

  16. A near-optimal low complexity sensor fusion technique for accurate indoor localization based on ultrasound time of arrival measurements from low-quality sensors

    NASA Astrophysics Data System (ADS)

    Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.

    2009-05-01

    A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range estimating sensors which are used in a market-available localization system. In-situ field measurement results indicated that the respective off-the-shelf system was unable to estimate position in most cases, while the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor-performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists of evaluating multiple sensor measurements and selecting the one considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, albeit being theoretically suboptimal, it largely overcomes most flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness and availability.
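    The selection step of such a fusion scheme can be sketched as follows. The sensor names and the range-dependent error models are hypothetical, standing in for the per-sensor characterization the paper establishes:

```python
def select_measurement(readings, error_model):
    """Fusion by selection: keep the single range reading whose modeled
    error is lowest.  `readings` maps sensor id -> measured range (m);
    `error_model` maps sensor id -> a function giving the expected
    error (std. dev.) at that range.  Returns (sensor_id, range)."""
    best = min(readings, key=lambda s: error_model[s](readings[s]))
    return best, readings[best]

# Hypothetical error models: error grows with range at sensor-specific rates
model = {"s1": lambda r: 0.02 * r + 0.10,
         "s2": lambda r: 0.05 * r + 0.02,
         "s3": lambda r: 0.04 * r + 0.30}
chosen = select_measurement({"s1": 4.0, "s2": 1.2, "s3": 2.5}, model)
```

Here s2's modeled error (0.08 m at 1.2 m) is the smallest, so its reading is kept; a genie-aided selector, as in the paper, would instead pick the sensor with the smallest true error.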

  17. Sensors for ceramic components in advanced propulsion systems

    NASA Technical Reports Server (NTRS)

    Koller, A. C.; Bennethum, W. H.; Burkholder, S. D.; Brackett, R. R.; Harris, J. P.

    1995-01-01

    This report includes: (1) a survey of the current methods for the measurement of surface temperature of ceramic materials suitable for use as hot section flowpath components in aircraft gas turbine engines; (2) analysis and selection of three sensing techniques with potential to extend surface temperature measurement capability beyond current limits; and (3) design, manufacture, and evaluation of the three selected techniques which include the following: platinum rhodium thin film thermocouple on alumina and mullite substrates; doped silicon carbide thin film thermocouple on silicon carbide, silicon nitride, and aluminum nitride substrates; and long and short wavelength radiation pyrometry on the substrates listed above plus yttria stabilized zirconia. Measurement of surface emittance of these materials at elevated temperature was included as part of this effort.

  18. Development and Validation of Measures for Selecting Soldiers for the Officer Candidate School

    DTIC Science & Technology

    2011-08-01

    SJT, there has been a debate about what SJTs actually measure and why they work (cf. Moss & Hunt, 1926; Thorndike, 1936), a debate that continues... meta-analytic review and integration. Psychological Bulletin, 129, 914-945. Thorndike, R. L. (1936). Factor analysis of social and abstract intelligence. The Journal of Educational Psychology, XXVII, 231-233. Thorndike, R. L. (1949). Personnel selection: Test and measurement techniques. New York

  19. A comparison of mandibular denture base deformation with different impression techniques for implant overdentures.

    PubMed

    Elsyad, Moustafa Abdou; El-Waseef, Fatma Ahmad; Al-Mahdy, Yasmeen Fathy; Fouad, Mohammed Mohammed

    2013-08-01

    This study aimed to evaluate mandibular denture base deformation with three impression techniques used for implant-retained overdentures. Ten edentulous patients (five men and five women) received two implants in the canine region of the mandible and three duplicate mandibular overdentures constructed with mucostatic, selective pressure, and definitive pressure impression techniques. Ball abutments and respective gold matrices were used to connect the overdentures to the implants. Six linear strain gauges were bonded to the lingual polished surface of each duplicate overdenture at the midline and implant areas to measure strain during maximal clenching and gum chewing. The strains recorded at the midline were compressive, while strains at the implant areas were tensile. Clenching recorded significantly higher strain than gum chewing for all techniques. The mucostatic technique recorded the highest strain and the definitive pressure technique the lowest. There was no significant difference between the strain recorded with the mucostatic technique and that registered with the selective pressure technique. The highest strain was recorded at the level of the ball abutment's top with the mucostatic technique during clenching. The definitive pressure impression technique for implant-retained mandibular overdentures is associated with minimal denture deformation during function when compared with the mucostatic and selective pressure techniques. Reinforcement of the denture base over the implants may be recommended to increase resistance to fracture when the mucostatic or selective pressure impression technique is used. © 2012 John Wiley & Sons A/S.

  20. Spectroscopic vector analysis for fast pattern quality monitoring

    NASA Astrophysics Data System (ADS)

    Sohn, Younghoon; Ryu, Sungyoon; Lee, Chihoon; Yang, Yusin

    2018-03-01

    In the semiconductor industry, fast and effective measurement of pattern variation has been a key challenge for assuring mass-production quality. Pattern measurement techniques such as conventional CD-SEM or optical CD have been used extensively, but they are increasingly limited in terms of measurement throughput and time spent in modeling. In this paper we propose a time-effective pattern monitoring method based on a direct spectrum-based approach. In this technique, a wavelength band sensitive to a specific pattern change is selected from the spectroscopic ellipsometry signal scattered by the pattern to be measured, and the amplitude and phase variation in that band are analyzed as a measurement index of the pattern change. This pattern-change measurement technique was applied to several process steps and its applicability verified. Owing to its fast and simple analysis, the method can be adopted for monitoring massive process variation while maximizing measurement throughput.
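    The band-selection idea can be sketched as a sliding-window search for the most change-sensitive part of the spectrum. The spectra and band width below are synthetic, and the paper's actual amplitude/phase index is not reproduced:

```python
def most_sensitive_band(ref_spectrum, varied_spectrum, width):
    """Slide a band of `width` samples over two spectra (reference
    pattern vs. deliberately varied pattern) and return the start
    index of the band with the largest summed absolute change, plus
    that sum, as a crude sensitivity score."""
    n = len(ref_spectrum)
    best_i, best_s = 0, -1.0
    for i in range(n - width + 1):
        s = sum(abs(varied_spectrum[j] - ref_spectrum[j])
                for j in range(i, i + width))
        if s > best_s:
            best_i, best_s = i, s
    return best_i, best_s
```

Monitoring would then track the signal in the selected band across wafers, flagging drift without any model-based spectrum fitting.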

  1. Measurement Techniques and Instruments Suitable for Life-prediction Testing of Photovoltaic Arrays

    NASA Technical Reports Server (NTRS)

    Noel, G. T.; Wood, V. E.; Mcginniss, V. D.; Hassell, J. A.; Richard, N. A.; Gaines, G. B.; Carmichael, D. C.

    1979-01-01

    The validation of a 20-year service life for low-cost photovoltaic arrays is a critical requirement in the Low-Cost Solar Array (LSA) Project. The validation is accomplished through accelerated life-prediction tests. A two-phase study was conducted to address the needs before such tests are carried out. The results and recommended techniques from the Phase 1 investigation are summarized in the appendix. Phase 2 of the study is covered in this report and consisted of experimental evaluations of three techniques selected from those recommended as a result of the Phase 1 findings. The three techniques evaluated were specular and nonspecular optical reflectometry, chemiluminescence measurements, and electric-current noise measurements.

  2. Wing Twist Measurements at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Burner, Alpheus W.; Wahls, Richard A.; Goad, William K.

    1996-01-01

    A technique for measuring wing twist currently in use at the National Transonic Facility is described. The technique is based upon a single-camera photogrammetric determination of two-dimensional coordinates with a fixed (and known) third coordinate. The wing twist is found from a conformal transformation between wind-on and wind-off 2-D coordinates in the plane of rotation. The advantages and limitations of the technique, as well as the rationale for selecting this particular technique, are discussed. Examples are presented to illustrate run-to-run and test-to-test repeatability of the technique in air mode. Examples of wing twist in cryogenic nitrogen mode are also presented.
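    The rotation part of such a wind-off/wind-on fit can be sketched as a 2-D least-squares rotation about the target centroids (a Procrustes-style fit). This is one plausible realization under stated assumptions, not necessarily the NTF implementation, and the point data are illustrative:

```python
import math

def twist_angle_deg(wind_off, wind_on):
    """Least-squares rotation angle (degrees) taking the wind-off 2-D
    target coordinates to the wind-on coordinates, fitted about the
    centroids of the two point sets in the plane of rotation."""
    n = len(wind_off)
    ox = sum(p[0] for p in wind_off) / n
    oy = sum(p[1] for p in wind_off) / n
    nx = sum(p[0] for p in wind_on) / n
    ny = sum(p[1] for p in wind_on) / n
    num = den = 0.0
    for (ax, ay), (bx, by) in zip(wind_off, wind_on):
        ax, ay = ax - ox, ay - oy        # centered wind-off point
        bx, by = bx - nx, by - ny        # centered wind-on point
        num += ax * by - ay * bx        # sum of cross products
        den += ax * bx + ay * by        # sum of dot products
    return math.degrees(math.atan2(num, den))
```

The closed form atan2(sum of cross products, sum of dot products) is the least-squares rotation for matched 2-D point sets, so no iterative solver is needed.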

  3. A Critical Review on Clinical Application of Separation Techniques for Selective Recognition of Uracil and 5-Fluorouracil.

    PubMed

    Pandey, Khushaboo; Dubey, Rama Shankar; Prasad, Bhim Bali

    2016-03-01

    The most important objectives frequently encountered in bioanalytical chemistry involve applying tools to relevant medical/biological problems and refining these applications. Developing a reliable sample preparation step for the medical and biological fields is another primary objective in analytical chemistry, in order to extract and isolate the analytes of interest from complex biological matrices. The main inborn errors of metabolism (IEM) diagnosable through uracil analysis, and the therapeutic monitoring of toxic 5-fluorouracil (an important anti-cancer drug) in dihydropyrimidine dehydrogenase-deficient patients, require ultra-sensitive, reproducible, selective, and accurate analytical techniques. Keeping in view the diagnostic value of uracil and 5-fluorouracil measurements, this article therefore reviews several analytical techniques for the selective recognition and quantification of uracil and 5-fluorouracil in biological and pharmaceutical samples. The study revealed that implementation of a molecularly imprinted polymer as a solid-phase material for sample preparation and preconcentration of uracil and 5-fluorouracil has proven effective, as it obviates problems related to tedious separation techniques, owing to protein binding and drastic interferences, from the complex matrices of real samples such as blood plasma and serum.

  4. Active Learning through Online Instruction

    ERIC Educational Resources Information Center

    Gulbahar, Yasemin; Kalelioglu, Filiz

    2010-01-01

    This article explores the use of proper instructional techniques in online discussions that lead to meaningful learning. The research study looks at the effective use of two instructional techniques within online environments, based on qualitative measures. "Brainstorming" and "Six Thinking Hats" were selected and implemented…

  5. Simultaneous Temperature and Velocity Measurements in a Large-Scale, Supersonic, Heated Jet

    NASA Technical Reports Server (NTRS)

    Danehy, P. M.; Magnotti, G.; Bivolaru, D.; Tedder, S.; Cutler, A. D.

    2008-01-01

    Two laser-based measurement techniques have been used to characterize an axisymmetric, combustion-heated supersonic jet issuing into static room air. The dual-pump coherent anti-Stokes Raman spectroscopy (CARS) measurement technique measured temperature and concentration while the interferometric Rayleigh scattering (IRS) method simultaneously measured two components of velocity. This paper reports a preliminary analysis of CARS-IRS temperature and velocity measurements from selected measurement locations. The temperature measurements show that the temperature along the jet axis remains constant while dropping off radially. The velocity measurements show that the nozzle exit velocity fluctuations are about 3% of the maximum velocity in the flow.

  6. A comparison of selected vertical wind measurement techniques on basis of the EUCAARI IMPACT observations

    NASA Astrophysics Data System (ADS)

    Arabas, S.; Baehr, C.; Boquet, M.; Dufournet, Y.; Pawlowska, H.; Siebert, H.; Unal, C.

    2009-04-01

    The poster presents a comparison of selected methods used during the EUCAARI IMPACT campaign, which took place in May 2008 in The Netherlands, for determining the vertical wind in the boundary layer. The campaign comprised a month of intensified ground-based and airborne measurements in the vicinity of the CESAR observatory in Cabauw. Ground-based vertical wind remote sensing was carried out using the Leosphere WindCube WLS70 IR Doppler lidar, a Vaisala LAP3000 radar wind profiler, and the TU Delft TARA S-band radar. In-situ airborne measurements were performed using an ultrasonic anemometer (on the ACTOS helicopter underhung platform) and a 5-hole pressure probe (on the SAFIRE ATR-42 airplane radome). Several in-situ anemometers were deployed on the 200-meter tower of the CESAR observatory. A summary of the characteristics and principles of the considered techniques is presented. A comparison of the results obtained from the different platforms depicts the capabilities of each technique and highlights the time, space, and velocity resolutions.

  7. Isomer discrimination of PAHs formed in sooting flames by jet-cooled laser-induced fluorescence: application to the measurement of pyrene and fluoranthene

    NASA Astrophysics Data System (ADS)

    Mouton, Thomas; Mercier, Xavier; Desgroux, Pascale

    2016-05-01

    Jet-cooled laser-induced fluorescence is a spectroscopic method developed specifically for the study of PAHs formed in flames. This technique has already been used to measure different aromatic species in sooting low-pressure methane flames, such as benzene, naphthalene, and pyrene. Using LIF to excite PAHs drastically cooled inside a supersonic jet offers the possibility of obtaining selective and quantitative profiles of PAHs sampled from sooting flames. In this paper, we demonstrate the ability of this experimental method to separate the contributions of two mass isomers generated in sooting flames: pyrene and fluoranthene. The selectivity of the method is demonstrated by studying the spectral properties of these species. The method is then applied to the measurement of both species in two sooting flames with different equivalence ratios stabilized at 200 torr (26.65 kPa). The sensitivity of the technique has been found to reach a few ppb in the case of fluoranthene measurements.

  8. A Comparison of Galaxy Spiral Arm Pitch Angle Measurements Using Manual and Automated Techniques

    NASA Astrophysics Data System (ADS)

    Hewitt, Ian; Treuthardt, Patrick

    2018-01-01

    Disk galaxy evolution is dominated by secular processes in the nearby universe. Revealing the morphological characteristics and underlying dynamics of these galaxies is key to understanding their evolution. The arm structure of disk galaxies can generally be described with logarithmic spirals, thereby giving measurements of pitch angle. These measurements are valuable for probing the dynamics and less apparent characteristics of these galaxies (i.e. supermassive black hole mass). Pitch angle measurements are powerful because they can be derived from a single, uncalibrated, broadband image with sufficient contrast, as opposed to more intensive observations. Accurate determination of these measurements can be challenging, however, since pitch angle can vary with radius.There are currently several semi-automated and manual techniques used to determine pitch angle. These are, or will be, used in at least two Zooniverse citizen science projects. The goal of this work is to determine if different, specific techniques return similar pitch angles for the same set of galaxies. We compare the results from a machine vision technique using SPARCFIRE, a non-Euclidean based hand selection of pitch angle, and two methods using 2D Fourier decomposition (i.e. selecting stable regions from the results of direct application to broadband images and application to traced versions of the observed spiral pattern). Each technique is applied to our sample of galaxies and the resulting pitch angles are compared to generated logarithmic spirals to evaluate the match quality.
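    Extracting a pitch angle from a logarithmic spiral can be sketched with a simple regression, since ln(r) is linear in θ along a pure log spiral. This is a generic illustration under that assumption, not any of the specific manual or automated pipelines compared in the abstract:

```python
import math

def pitch_angle_deg(thetas, radii):
    """Fit a logarithmic spiral r = r0 * exp(k * theta) by linear
    regression of ln(r) on theta and return the pitch angle atan(k)
    in degrees (constant along an ideal log spiral)."""
    n = len(thetas)
    logr = [math.log(r) for r in radii]
    mt, ml = sum(thetas) / n, sum(logr) / n
    # least-squares slope of ln(r) vs. theta
    k = (sum((t - mt) * (l - ml) for t, l in zip(thetas, logr))
         / sum((t - mt) ** 2 for t in thetas))
    return math.degrees(math.atan(k))
```

Real measurements are harder because pitch angle can vary with radius, which is why stable-region selection and Fourier-based methods are compared in the study.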

  9. Analysis of nanoliter samples of electrolytes using a flow-through microfluorometer.

    PubMed

    Zhelyaskov, V R; Liu, S; Broderick, M P

    2000-04-01

    Several techniques have been developed to study the transport properties of nanoliter samples from renal tubule segments, such as continuous-flow colorimetry and continuous fluorometry. We have extended the capability of the NANOFLO, a flow-through microfluorometer designed for measurement of carbon dioxide, urea, ammonia, glucose, lactate, etc., to analyze sodium, calcium, and chloride ions using three commercially available fluorescent indicators for intracellular and extracellular measurements. The selection of the fluorescent indicator for each electrolyte depended on the optimal match between the dissociation constant and the analyte concentration range of interest. Using Fluo-3 dye we achieved a detection limit for Ca2+ of 0.1 pmol and selectivity over Mg2+ of between 7:1 and 10:1. Using sodium green dye we achieved a detection limit for Na+ of 12 pmol and a selectivity over K+ of 40:1. The detection limit for Cl- using lucigenin dye was 10 pmol. This technique can be readily adapted for the measurement of other physiologically important analytes in ultralow-volume samples.

  10. The 2.4 μm Galaxy Luminosity Function As Measured Using WISE. I. Measurement Techniques

    NASA Astrophysics Data System (ADS)

    Lake, S. E.; Wright, E. L.; Tsai, C.-W.; Lam, A.

    2017-04-01

    The astronomy community has at its disposal a large back catalog of public spectroscopic galaxy redshift surveys that can be used for the measurement of luminosity functions (LFs). Utilizing the back catalog with new photometric surveys at maximum efficiency requires modeling the color selection bias imposed on the selection of target galaxies by flux limits at multiple wavelengths. The likelihood derived herein can address, in principle, all possible color selection biases through the use of a generalization of the LF, Φ(L), over the space of all spectra: the spectro-luminosity functional, Ψ[L_ν]. It is, therefore, the first estimator capable of simultaneously analyzing multiple redshift surveys in a consistent way. We also propose a new way of parametrizing the evolution of the classic Schechter function parameters, L⋆ and ϕ⋆, that improves both the physical realism and statistical performance of the model. The techniques derived in this paper are used in a companion paper by Lake et al. to measure the LF of galaxies at the rest-frame wavelength of 2.4 μm using the Wide-field Infrared Survey Explorer (WISE).
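    For context, the classic Schechter form whose parameters L⋆ and ϕ⋆ are discussed above is (the standard literature expression, not quoted from this paper):

```latex
\Phi(L)\,\mathrm{d}L \;=\; \phi_\star
  \left(\frac{L}{L_\star}\right)^{\alpha}
  e^{-L/L_\star}\,\frac{\mathrm{d}L}{L_\star}
```

Here L⋆ sets the characteristic luminosity of the exponential cutoff, ϕ⋆ the normalization, and α the faint-end slope; the paper's contribution is a new parametrization of how L⋆ and ϕ⋆ evolve.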

  11. Robust Feature Selection Technique using Rank Aggregation.

    PubMed

    Sarkar, Chandrima; Cooley, Sarah; Srivastava, Jaideep

    2014-01-01

    Although feature selection is a well-developed research area, there is an ongoing need to develop methods that make classifiers more efficient. One important challenge is the lack of a universal feature selection technique that produces similar outcomes with all types of classifiers. This is because all feature selection techniques have individual statistical biases, while classifiers exploit different statistical properties of data for evaluation. In numerous situations this can put researchers in a dilemma as to which feature selection method and which classifier to choose from a vast range of options. In this paper, we propose a technique that aggregates the consensus properties of various feature selection methods to develop a more optimal solution. The ensemble nature of our technique makes it more robust across classifiers; that is, it is stable toward achieving similar, and ideally higher, classification accuracy across a wide variety of classifiers. We quantify this concept of robustness with a measure known as the Robustness Index (RI). We perform an extensive empirical evaluation of our technique on eight data sets of different dimensionality, including Arrhythmia, Lung Cancer, Madelon, mfeat-fourier, internet-ads, Leukemia-3c, and Embryonal Tumor, and a real-world data set, Acute Myeloid Leukemia (AML). We demonstrate not only that our algorithm is more robust, but also that, compared to other techniques, it improves classification accuracy by approximately 3-4% (in data sets with fewer than 500 features) and by more than 5% (in data sets with more than 500 features), across a wide range of classifiers.
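    A rank-aggregation step of the kind described can be sketched with a Borda count over the rankings produced by several feature selection methods. This generic illustration is not the authors' exact aggregation rule, and the feature names are hypothetical:

```python
def aggregate_rankings(rankings):
    """Combine feature rankings from several selection methods by
    Borda count: each ranking awards (n - position) points to a
    feature, and features are returned best-first by total score.
    All rankings must order the same set of features."""
    n = len(rankings[0])
    scores = {f: 0 for f in rankings[0]}
    for ranking in rankings:
        for pos, f in enumerate(ranking):
            scores[f] += n - pos
    return sorted(scores, key=lambda f: -scores[f])

# Three hypothetical methods ranking the same four features
consensus = aggregate_rankings([["a", "b", "c", "d"],
                                ["b", "a", "c", "d"],
                                ["a", "c", "b", "d"]])
```

Because the consensus ordering averages out each method's individual bias, a classifier trained on the top-ranked consensus features tends to behave more consistently across classifier families, which is the robustness the abstract quantifies with the RI.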

  12. Nuclear technologies for explosives detection

    NASA Astrophysics Data System (ADS)

    Bell, Curtis J.

    1992-12-01

    This paper presents an exploration of several techniques for detecting Improvised Explosive Devices (IEDs) using interactions of specific nuclei with gamma rays or fast neutrons. The techniques considered use these interactions to identify the device by measuring the densities and/or relative concentrations of the elemental constituents of explosives. These techniques are compared with selected other nuclear and non-nuclear methods. The combination of nuclear and non-nuclear techniques is also briefly discussed.

  13. COMPARISON OF MEASUREMENT TECHNIQUES FOR QUANTIFYING SELECTED ORGANIC EMISSIONS FROM KEROSENE SPACE HEATERS

    EPA Science Inventory

    The report gives results of (1) a comparison of the hood and chamber techniques for quantifying pollutant emission rates from unvented combustion appliances, and (2) an assessment of the semivolatile and nonvolatile organic-compound emissions from unvented kerosene space heaters. In ...

  14. Bibliography on methods of atmospheric visibility measurements relevant to air traffic control and related subjects

    DOT National Transportation Integrated Search

    1973-11-30

    The bibliographical survey provides reference information and background material to assist in the selection of principles and measuring techniques which may be used in the development of future systems to measure Runway Visual Range (RVR), Slant Vis...

  15. Two-photon excitation of nitric oxide fluorescence as a temperature indicator in unsteady gas-dynamic processes

    NASA Technical Reports Server (NTRS)

    Mckenzie, R. L.; Gross, K. P.

    1980-01-01

    A laser-induced fluorescence technique suitable for measuring fluctuating temperatures in cold turbulent flows containing very low concentrations of nitric oxide is described. Temperatures below 300 K may be resolved with signal-to-noise ratios greater than 50:1 using high-peak-power, tunable dye lasers. The method relies on two-photon excitation of selected ro-vibronic transitions. The analysis includes the effects of fluorescence quenching and shows the technique to be effective at all densities below ambient. Signal-to-noise ratio estimates are based on a preliminary measurement of the two-photon absorptivity for a selected rotational transition in the NO gamma (0,0) band.

  16. An improved dual-frequency technique for the remote sensing of ocean currents and wave spectra

    NASA Technical Reports Server (NTRS)

    Schuler, D. L.; Eng, W. P.

    1984-01-01

    A two-frequency microwave radar technique for the remote sensing of directional ocean wave spectra and surface currents is investigated. This technique is conceptually attractive because its operating physical principle involves a spatial electromagnetic scattering resonance with a single, but selectable, long gravity wave. Multiplexing signals having different spacings of the two transmitted frequencies allows measurements of the entire long-wave ocean spectrum to be carried out. A new scatterometer is developed and experimentally tested which is capable of making measurements having much larger signal/background values than previously possible. This instrument couples the resonance technique with coherent, frequency-agile radar capabilities. The scatterometer is presently configured to support a program of surface current measurements.

  17. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate them through the model so that one can make a predictive estimate with quantified uncertainties. Two aspects of uncertainty quantification that must be addressed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of nuclear reactor models.
We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities, and correlations obtained using DRAM, DREAM, and the direct evaluation of Bayes' formula. We also perform a similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models.
To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
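
    As an editorial illustration of the accept/reject step on which adaptive Metropolis samplers such as DRAM and DREAM are built, the sketch below runs a plain random-walk Metropolis chain on a toy one-dimensional Gaussian posterior (mean 2.0, standard deviation 0.5). The target density, step size, and seed are assumptions for illustration only, not the dissertation's HIV or heat models.

```python
import math
import random

def log_posterior(theta):
    # Toy 1-D Gaussian "posterior" (mean 2.0, std 0.5) standing in for a
    # calibrated model likelihood -- an assumption for illustration.
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

def metropolis(n_samples, theta0=0.0, step=0.4, seed=1):
    rng = random.Random(seed)
    theta, logp = theta0, log_posterior(theta0)
    chain = []
    for _ in range(n_samples):
        prop = theta + rng.gauss(0.0, step)        # random-walk proposal
        logp_prop = log_posterior(prop)
        # Accept with probability min(1, pi(prop) / pi(theta))
        if math.log(rng.random()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        chain.append(theta)
    return chain

chain = metropolis(20000)
burned = chain[5000:]                              # discard burn-in
posterior_mean = sum(burned) / len(burned)
```

    With the burn-in discarded, the chain mean approaches the posterior mean of 2.0; DRAM layers delayed rejection and covariance adaptation on top of this same kernel.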

  18. Advanced techniques for determining long term compatibility of materials with propellants

    NASA Technical Reports Server (NTRS)

    Green, R. L.

    1972-01-01

    The search for advanced measurement techniques for determining the long-term compatibility of materials with propellants was conducted in several parts. A comprehensive survey of the existing measurement and testing technology for determining material-propellant interactions was performed. Selections were made from those existing techniques determined to meet, or be adaptable to meet, the requirements; areas of refinement or change were recommended for the improvement of others. Investigations were also performed to determine the feasibility and advantages of developing and using new techniques to achieve significant improvements over existing ones. The most interesting demonstration was that of the new technique, the volatile metal chelate analysis. Rivaling neutron activation analysis in terms of sensitivity and specificity, the volatile metal chelate technique was fully demonstrated.

  19. Comparative evaluation of workload estimation techniques in piloting tasks

    NASA Technical Reports Server (NTRS)

    Wierwille, W. W.

    1983-01-01

    Techniques to measure operator workload in a wide range of situations and tasks were examined. The sensitivity and intrusion of a wide variety of workload assessment techniques in simulated piloting tasks were investigated. Four different piloting tasks, covering the psychomotor, perceptual, mediational, and communication aspects of piloting behavior, were selected, and techniques to determine relative sensitivity and intrusion were applied. Sensitivity is the relative ability of a workload estimation technique to discriminate statistically significant differences in operator loading. High sensitivity requires discriminable changes in score means as a function of load level and low variation of the scores about the means. Intrusion is an undesirable change in the task for which workload is measured, resulting from the introduction of the workload estimation technique or apparatus.
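
    Sensitivity as defined above, discriminable changes in score means across load levels relative to the scatter about those means, is essentially a one-way F ratio. A minimal sketch with invented workload scores (not data from the study):

```python
# Workload scores per load level -- invented numbers for illustration.
scores = {
    "low":    [2.1, 2.4, 2.0, 2.3],
    "medium": [3.6, 3.9, 3.5, 3.8],
    "high":   [5.2, 5.0, 5.5, 5.1],
}

n_total = sum(len(v) for v in scores.values())
grand = sum(sum(v) for v in scores.values()) / n_total

# Between-level spread of means vs. within-level scatter about the means
ss_between = sum(len(v) * (sum(v) / len(v) - grand) ** 2
                 for v in scores.values())
ss_within = sum(sum((x - sum(v) / len(v)) ** 2 for x in v)
                for v in scores.values())
df_b, df_w = len(scores) - 1, n_total - len(scores)
F = (ss_between / df_b) / (ss_within / df_w)     # large F = high sensitivity
```

    A technique yielding a large F discriminates load levels well; an intrusive technique would instead shift the task scores themselves.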

  20. Interference-free coherence dynamics of gas-phase molecules using spectral focusing.

    PubMed

    Wrzesinski, Paul J; Roy, Sukesh; Gord, James R

    2012-10-08

    Spectral focusing using broadband femtosecond pulses to achieve highly selective measurements has been employed for numerous applications in spectroscopy and microspectroscopy. In this work we highlight the use of spectral focusing for selective excitation and detection of gas-phase species. Furthermore, we demonstrate that spectral focusing, coupled with time-resolved measurements based upon probe delay, allows the observation of interference-free coherence dynamics of multiple molecules and gas-phase temperature making this technique ideal for gas-phase measurements of reacting flows and combustion processes.

  1. Anomalous amplification of a homodyne signal via almost-balanced weak values.

    PubMed

    Liu, Wei-Tao; Martínez-Rincón, Julián; Viza, Gerardo I; Howell, John C

    2017-03-01

    We propose precision measurements of ultra-small angular velocities of a mirror within a modified Sagnac interferometer, where the counter-propagating beams are spatially separated, using the recently proposed technique of almost-balanced weak values amplification (ABWV) [Phys. Rev. Lett. 116, 100803 (2016)]. The separation between the two beams provides additional amplification with respect to using collinear beams in a Sagnac interferometer. Within the same setup, the weak-value amplification (WVA) technique is also performed for comparison. Much higher amplification factors can be obtained using the almost-balanced weak values technique; the best achieved in our experiments is as high as 1.2×10^7. In addition, the amplification factor increases monotonically with decreasing post-selection phase in the ABWV case, which is not true of WVA at small post-selection phases. Both techniques measure the angular velocity. The sensitivity of the ABWV technique is ∼38 nrad/s per averaged pulse for a repetition rate of 1 Hz, compared with ∼33 nrad/s per averaged pulse for the WVA technique.

  2. Intracranial Pressure Monitoring: Invasive versus Non-Invasive Methods—A Review

    PubMed Central

    Raboel, P. H.; Bartek, J.; Andresen, M.; Bellander, B. M.; Romner, B.

    2012-01-01

    Monitoring of intracranial pressure (ICP) has been used for decades in the fields of neurosurgery and neurology. There are multiple techniques: invasive as well as noninvasive. This paper aims to provide an overview of the advantages and disadvantages of the most common and well-known methods as well as assess whether noninvasive techniques (transcranial Doppler, tympanic membrane displacement, optic nerve sheath diameter, CT scan/MRI and fundoscopy) can be used as reliable alternatives to the invasive techniques (ventriculostomy and microtransducers). Ventriculostomy is considered the gold standard in terms of accurate measurement of pressure, although microtransducers generally are just as accurate. Both invasive techniques are associated with a minor risk of complications such as hemorrhage and infection. Furthermore, zero drift is a problem with selected microtransducers. The non-invasive techniques are without the invasive methods' risk of complication, but fail to measure ICP accurately enough to be used as routine alternatives to invasive measurement. We conclude that invasive measurement is currently the only option for accurate measurement of ICP. PMID:22720148

  3. Air Quality Instrumentation. Volume 1.

    ERIC Educational Resources Information Center

    Scales, John W., Ed.

    To insure a wide dissemination of information describing advances in measurement and control techniques, the Instrument Society of America (ISA) has published this monograph of selected papers from recent ISA symposia dealing with air pollution. Papers range from a discussion of some relatively new applications of proven techniques to discussions…

  4. Synthesis of (azelaic-co-dodecanedioic) polyanhydride by microwave technique

    NASA Astrophysics Data System (ADS)

    Gutiérrez, M.; Sierra, C.; Acevedo Morantes, M.; Herrera, A. P.

    2016-02-01

    A polyanhydride was synthesized by microwave radiation from azelaic acid and dodecanedioic acid at ratios of 75:25, 50:50, and 25:75 %w/w, with acetic anhydride as the crosslinking agent. Polymerization was carried out for 3 and 5 minutes. The copolymer with the highest molecular weight was selected using the intrinsic viscosity technique, with the Huggins–Kraemer and Solomon–Ciuta methods. Based on these measurements, the 50:50 copolymer polymerized for 3 minutes in the microwave was selected. This sample displayed the highest intrinsic viscosity (41.82 cm³/g), demonstrating the relevance of the microwave technique for the synthesis of biopolymers.
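
    The single-point Solomon–Ciuta estimate of intrinsic viscosity referenced above can be sketched in a few lines; the relative viscosity and concentration below are hypothetical values, not the paper's data.

```python
import math

def solomon_ciuta(eta_rel, c):
    """Single-point intrinsic viscosity estimate (cm^3/g) from the
    relative viscosity eta_rel measured at concentration c (g/cm^3):
    [eta] = sqrt(2*(eta_sp - ln(eta_rel))) / c."""
    eta_sp = eta_rel - 1.0                    # specific viscosity
    return math.sqrt(2.0 * (eta_sp - math.log(eta_rel))) / c

# Hypothetical single measurement: eta_rel = 1.25 at c = 0.005 g/cm^3
eta_intr = solomon_ciuta(1.25, 0.005)
```

    The Huggins–Kraemer approach instead extrapolates eta_sp/c and ln(eta_rel)/c to zero concentration from a dilution series; the single-point formula trades that extrapolation for one measurement.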

  5. Unfavourable results with distraction in craniofacial skeleton

    PubMed Central

    Agarwal, Rajiv

    2013-01-01

    Distraction osteogenesis has revolutionised the management of craniofacial abnormalities. The technique however requires precise planning, patient selection, execution and follow-up to achieve consistent and positive results and to avoid unfavourable results. The unfavourable results with craniofacial distraction stem from many factors ranging from improper patient selection, planning and use of inappropriate distraction device and vector. The present study analyses the current standards and techniques of distraction and details in depth the various errors and complications that may occur due to this technique. The commonly observed complications of distraction have been detailed along with measures and suggestions to avoid them in clinical practice. PMID:24501455

  6. Mass balance for on-line alphakLa estimation in activated sludge oxidation ditch.

    PubMed

    Chatellier, P; Audic, J M

    2001-01-01

    The capacity of an aeration system to transfer oxygen to a given activated sludge oxidation ditch is characterised by the alphakLa parameter. This parameter is difficult to measure under normal plant working conditions: the measurement usually involves off-gas techniques or a static mass balance. Therefore an on-line technique has been developed and tested to evaluate alphakLa. The technique deduces alphakLa from analysis of low-cost sensor measurements: two flow meters and one oxygen probe. It involves a dynamic mass balance applied to aeration cycles selected according to given criteria. The technique was applied to a wastewater treatment plant for four years; significant variations of the alphakLa values were detected as the number of blowers changed. It has also been applied to another plant for two months.
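
    The dynamic mass balance underlying such an estimate can be sketched on simulated reaeration data: with oxygen uptake neglected, dC/dt = kLa·(Cs − C), so ln(Cs − C) decays linearly with slope −kLa. All values below are illustrative, not plant data.

```python
import math

# Simulated reaeration curve C(t) = Cs - (Cs - C0)*exp(-kLa*t)
kla_true, Cs, C0 = 0.12, 9.0, 2.0            # 1/min, mg/L, mg/L
times = [t * 0.5 for t in range(20)]          # sampling times, minutes
conc = [Cs - (Cs - C0) * math.exp(-kla_true * t) for t in times]

# Estimate kLa as minus the least-squares slope of ln(Cs - C) vs. t
y = [math.log(Cs - c) for c in conc]
n = len(times)
tbar, ybar = sum(times) / n, sum(y) / n
slope = sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y)) \
        / sum((t - tbar) ** 2 for t in times)
kla_est = -slope
```

    On real probe data the oxygen uptake rate enters the balance as an extra term, and only aeration cycles meeting the selection criteria would be fitted.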

  7. New consistency tests for high-accuracy measurements of X-ray mass attenuation coefficients by the X-ray extended-range technique.

    PubMed

    Chantler, C T; Islam, M T; Rae, N A; Tran, C Q; Glover, J L; Barnea, Z

    2012-03-01

    An extension of the X-ray extended-range technique is described for measuring X-ray mass attenuation coefficients by introducing absolute measurement of a number of foils - the multiple independent foil technique. Illustrating the technique with the results of measurements for gold in the 38-50 keV energy range, it is shown that its use enables selection of the most uniform and well defined of available foils, leading to more accurate measurements; it allows one to test the consistency of independently measured absolute values of the mass attenuation coefficient with those obtained by the thickness transfer method; and it tests the linearity of the response of the counter and counting chain throughout the range of X-ray intensities encountered in a given experiment. In light of the results for gold, the strategy to be ideally employed in measuring absolute X-ray mass attenuation coefficients, X-ray absorption fine structure and related quantities is discussed.
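
    The underlying per-foil measurement is a Beer–Lambert transmission, and the multiple independent foil technique checks that foils of different areal density yield consistent mass attenuation coefficients. A minimal sketch with hypothetical transmissions:

```python
import math

def mass_attenuation(I, I0, areal_density):
    """Mass attenuation coefficient (cm^2/g) from Beer-Lambert,
    I = I0 * exp(-(mu/rho) * rho*t), with areal density rho*t in g/cm^2."""
    return -math.log(I / I0) / areal_density

# Hypothetical foils measured at one energy: (areal density g/cm^2, I/I0).
# Agreement of (mu/rho) across foils is the consistency check the
# multiple independent foil technique relies on.
foils = [(0.05, 0.7788), (0.10, 0.6065), (0.20, 0.3679)]
values = [mass_attenuation(T, 1.0, ad) for ad, T in foils]
spread = max(values) - min(values)           # small spread = consistent foils
```

    A foil whose value deviates from the others signals non-uniform thickness or impurity, which is exactly the selection criterion described above.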

  8. Development of Career Motivational Prediction and Selection Procedures.

    ERIC Educational Resources Information Center

    Culclasure, David F.

    The report constitutes a comprehensive review of the literature related to career motivation and selection procedures. It surveyed the reported techniques for measuring career motivation and interest which were used by 24 industrial firms, 14 personnel and management consulting organizations, 8 marketing research firms, and various governmental…

  9. A proposed configurable approach for recommendation systems via data mining techniques

    NASA Astrophysics Data System (ADS)

    Khedr, Ayman E.; Idrees, Amira M.; Hegazy, Abd El-Fatah; El-Shewy, Samir

    2018-02-01

    This study presents a configurable approach for recommendation systems that determines the suitable recommendation method for each field based on the characteristics of its data. The method first determines a suitable technique for selecting a representative sample of the provided data, then selects a suitable feature-weighting measure to provide a correct weight for each feature based on its effect on the recommendations, and finally selects a suitable algorithm to provide the required recommendations. The proposed configurable approach can be applied to different domains. The experiments revealed that the approach is able to provide recommendations with an error rate of only 0.89%.

  10. Drop-off Detection with the Long Cane: Effects of Different Cane Techniques on Performance

    PubMed Central

    Kim, Dae Shik; Emerson, Robert Wall; Curtis, Amy

    2010-01-01

    This study compared the drop-off detection performance with the two-point touch and constant contact cane techniques using a repeated-measures design with a convenience sample of 15 cane users with visual impairments. The constant contact technique was superior to the two-point touch technique in the drop-off detection rate and the 50% detection threshold. The findings may help an orientation and mobility instructor select an appropriate technique for a particular client or training situation. PMID:21209791

  11. Exploring possibilities of band gap measurement with off-axis EELS in TEM.

    PubMed

    Korneychuk, Svetlana; Partoens, Bart; Guzzinati, Giulio; Ramaneti, Rajesh; Derluyn, Joff; Haenen, Ken; Verbeeck, Jo

    2018-06-01

    A technique to measure the band gap of dielectric materials with high refractive index by means of electron energy-loss spectroscopy (EELS) is presented. The technique relies on the use of a circular (Bessel) aperture and suppresses Cherenkov losses and surface-guided light modes by enforcing a momentum-transfer selection. It also strongly suppresses the elastic zero-loss peak, making the acquisition, interpretation and signal-to-noise ratio of low-loss spectra considerably better, especially for excitations in the first few eV of the EELS spectrum. Simulations of the low-loss inelastic electron scattering probabilities demonstrate the beneficial influence of the Bessel aperture in this setup even at high accelerating voltages. The importance of selecting the optimal experimental convergence and collection angles is highlighted. The effect of the created off-axis acquisition conditions on the selection of transitions from valence to conduction bands is discussed in detail using a simplified isotropic two-band model. This opens the opportunity for deliberately selecting certain transitions by carefully tuning the microscope parameters. The suggested approach is experimentally demonstrated and provides good signal-to-noise ratio and interpretable band-gap signals on reference samples of diamond, GaN and AlN while offering spatial resolution in the nm range. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Local diffusion and diffusion-T2 distribution measurements in porous media

    NASA Astrophysics Data System (ADS)

    Vashaee, S.; Newling, B.; MacMillan, B.; Marica, F.; Li, M.; Balcom, B. J.

    2017-05-01

    Slice-selective pulsed field gradient (PFG) and PFG-T2 measurements are developed to measure spatially-resolved molecular diffusion and diffusion-T2 distributions. A spatially selective adiabatic inversion pulse was employed for slice selection. The slice-selective pulse is able to select a coarse slice, on the order of 1 cm, at an arbitrary position in the sample. The new method can be employed to characterize oil-water mixtures in porous media. The technique has an inherent sensitivity advantage over phase-encoding imaging methods because signal is localized from a thick slice, and it will be advantageous for magnetic resonance of porous media at low field, where sensitivity is problematic. Experimental CPMG data, following the PFG diffusion measurement, were compromised by a transient ΔB0(t) field offset. The off-resonance effects of ΔB0(t) were examined by simulation. The ΔB0 offset artifact in D-T2 distribution measurements may be avoided by employing real data instead of magnitude data.
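
    The diffusion coefficient in a PFG experiment is commonly recovered from the Stejskal–Tanner relation S/S0 = exp(−bD), with b = (γgδ)²(Δ − δ/3). A sketch with illustrative water-like parameters (not the paper's data):

```python
import math

gamma = 2.675e8             # 1H gyromagnetic ratio, rad s^-1 T^-1
delta, Delta = 2e-3, 20e-3  # gradient pulse length and separation, s
D_true = 2.3e-9             # water-like diffusion coefficient, m^2/s

def b_value(g):
    """Stejskal-Tanner b factor for gradient strength g (T/m)."""
    return (gamma * g * delta) ** 2 * (Delta - delta / 3.0)

grads = [0.0, 0.1, 0.2, 0.3, 0.4]                  # gradient strengths, T/m
signal = [math.exp(-b_value(g) * D_true) for g in grads]

# Recover D as minus the least-squares slope of ln(S) versus b
bs = [b_value(g) for g in grads]
lnS = [math.log(s) for s in signal]
n = len(bs)
bbar, sbar = sum(bs) / n, sum(lnS) / n
D_est = -sum((b - bbar) * (s - sbar) for b, s in zip(bs, lnS)) \
        / sum((b - bbar) ** 2 for b in bs)
```

    In the D-T2 experiment this attenuation dimension is combined with the CPMG echo-train decay before the joint inversion.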

  13. ITEM SELECTION TECHNIQUES AND EVALUATION OF INSTRUCTIONAL OBJECTIVES.

    ERIC Educational Resources Information Center

    COX, RICHARD C.

    THE VALIDITY OF AN EDUCATIONAL ACHIEVEMENT TEST DEPENDS UPON THE CORRESPONDENCE BETWEEN SPECIFIED EDUCATIONAL OBJECTIVES AND THE EXTENT TO WHICH THESE OBJECTIVES ARE MEASURED BY THE EVALUATION INSTRUMENT. THIS STUDY IS DESIGNED TO EVALUATE THE EFFECT OF STATISTICAL ITEM SELECTION ON THE STRUCTURE OF THE FINAL EVALUATION INSTRUMENT AS COMPARED WITH…

  14. Optimal Wavelengths Selection Using Hierarchical Evolutionary Algorithm for Prediction of Firmness and Soluble Solids Content in Apples

    USDA-ARS?s Scientific Manuscript database

    Hyperspectral scattering is a promising technique for rapid and noninvasive measurement of multiple quality attributes of apple fruit. A hierarchical evolutionary algorithm (HEA) approach, in combination with subspace decomposition and partial least squares (PLS) regression, was proposed to select o...

  15. Temperature determination of shock layer using spectroscopic techniques

    NASA Technical Reports Server (NTRS)

    Akundi, Murty A.

    1989-01-01

    Shock layer temperature profiles are obtained through analysis of radiation from shock layers produced by a blunt body inserted in an arc jet flow. Spectral measurements of N2(+) have been made at 0.5 inch, 1.0 inch, and 1.4 inches from the blunt body. A technique is developed to measure the vibrational and rotational temperatures of N2(+). Temperature profiles from the radiation layers show a high temperature near the shock front and decreasing temperature near the boundary layer. Precise temperature measurements could not be made using this technique due to the limited resolution; use of a high-resolution grating will allow a more accurate temperature determination. A laser-induced fluorescence technique is better suited, since it offers selective excitation and better spatial resolution.

  16. Validation and evaluation of measuring methods for the 3D documentation of external injuries in the field of forensic medicine.

    PubMed

    Buck, Ursula; Buße, Kirsten; Campana, Lorenzo; Schyma, Christian

    2018-03-01

    Three-dimensional (3D) measurement techniques are gaining importance in many areas. The latest developments have brought more cost-effective, user-friendly, and faster technologies onto the market. Which 3D techniques are suitable in the field of forensic medicine, and what are their advantages and disadvantages? This wide-ranging study evaluated and validated various 3D measurement techniques against forensic requirements. High-tech methods as well as low-budget systems were tested and compared in terms of accuracy, ease of use, expenditure of time, mobility, cost, necessary know-how, and their limitations. Within this study, various commercial measuring systems of the different techniques were tested. Based on the first results, one measuring system was selected for each technique, which appeared to be the most suitable for the forensic application or is already established in forensic medicine. The body of a deceased person, the face and an injury of a living person, and a shoe sole were recorded by 11 people with different professions and previous knowledge using the selected systems. The results were assessed and the personal experiences were evaluated using a questionnaire. In addition, precision investigations were carried out using test objects. The study shows that the hand-held scanner and photogrammetry are very suitable for the 3D documentation of forensic medical findings. Their moderate acquisition costs and easy operation could lead to more frequent application in forensic medicine in the future. For special applications, the stripe-light scanner still has its justification due to its high precision, flexible application area, and high reliability. The results show that, thanks to technological advances, 3D measurement technology will have more and more impact on routine forensic medical examination.

  17. Surface acoustical intensity measurements on a diesel engine

    NASA Technical Reports Server (NTRS)

    Mcgary, M. C.; Crocker, M. J.

    1980-01-01

    The use of surface intensity measurements as an alternative to the conventional selective-wrapping technique of noise source identification and ranking on diesel engines was investigated. A six-cylinder, in-line, turbocharged, 350-horsepower diesel engine was used. Sound power was measured under anechoic conditions for eight separate parts of the engine at steady-state operating conditions using the conventional technique. Sound power measurements were repeated on five separate parts of the engine using the surface intensity technique at the same steady-state operating conditions. The results were compared by plotting sound power level against frequency and noise source rankings for the two methods.

  18. Reconsideration of dynamic force spectroscopy analysis of streptavidin-biotin interactions.

    PubMed

    Taninaka, Atsushi; Takeuchi, Osamu; Shigekawa, Hidemi

    2010-05-13

    To understand and design molecular functions on the basis of molecular recognition processes, the microscopic probing of the energy landscapes of individual interactions in a molecular complex and their dependence on the surrounding conditions is of great importance. Dynamic force spectroscopy (DFS) is a technique that enables us to study the interaction between molecules at the single-molecule level. However, the obtained results differ among previous studies, which is considered to be caused by the differences in the measurement conditions. We have developed an atomic force microscopy technique that enables the precise analysis of molecular interactions on the basis of DFS. After verifying the performance of this technique, we carried out measurements to determine the landscapes of streptavidin-biotin interactions. The obtained results showed good agreement with theoretical predictions. Lifetimes were also well analyzed. Using a combination of cross-linkers and the atomic force microscope that we developed, site-selective measurement was carried out, and the steps involved in bonding due to microscopic interactions are discussed using the results obtained by site-selective analysis.

  19. Strain gage selection in loads equations using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Traditionally, structural loads are measured using strain gages. A loads calibration test must be done before loads can be accurately measured. In one measurement method, a series of point loads is applied to the structure, and loads equations are derived via the least squares curve fitting algorithm using the strain gage responses to the applied point loads. However, many research structures are highly instrumented with strain gages, and the number and selection of gages used in a loads equation can be problematic. This paper presents an improved technique using a genetic algorithm to choose the strain gages used in the loads equations. Also presented are a comparison of the genetic algorithm performance with the current T-value technique and a variant known as the Best Step-down technique. Examples are shown using aerospace vehicle wings of high and low aspect ratio. In addition, a significant limitation in the current methods is revealed. The genetic algorithm arrived at a comparable or superior set of gages with significantly less human effort, and could be applied in instances when the current methods could not.
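
    A minimal sketch of the approach described above: a genetic algorithm searches for the gage subset whose least-squares loads equation has the smallest residual. The calibration data are invented (the true load depends on gages 0, 3, and 6 of 8 candidates), and the final greedy single-swap polish is an assumption of this sketch, not part of the paper's method.

```python
import random

rng = random.Random(0)

# Synthetic calibration test: 30 applied point loads, 8 candidate gages.
n_gages, n_loads = 8, 30
gage = [[rng.gauss(0.0, 1.0) for _ in range(n_gages)] for _ in range(n_loads)]
load = [2.0 * g[0] + 1.5 * g[3] - 1.0 * g[6] + rng.gauss(0.0, 0.01)
        for g in gage]

def sse(subset):
    """Residual sum of squares of the least-squares loads equation built
    from the selected gages (normal equations + Gaussian elimination)."""
    X = [[g[j] for j in subset] for g in gage]
    k = len(subset)
    A = [[sum(r[p] * r[q] for r in X) for q in range(k)] for p in range(k)]
    b = [sum(r[p] * y for r, y in zip(X, load)) for p in range(k)]
    for p in range(k):                              # elimination w/ pivoting
        piv = max(range(p, k), key=lambda i: abs(A[i][p]))
        A[p], A[piv], b[p], b[piv] = A[piv], A[p], b[piv], b[p]
        for i in range(p + 1, k):
            f = A[i][p] / A[p][p]
            for q in range(p, k):
                A[i][q] -= f * A[p][q]
            b[i] -= f * b[p]
    beta = [0.0] * k
    for p in reversed(range(k)):
        beta[p] = (b[p] - sum(A[p][q] * beta[q]
                              for q in range(p + 1, k))) / A[p][p]
    return sum((sum(r[j] * beta[j] for j in range(k)) - y) ** 2
               for r, y in zip(X, load))

def ga_select(k=3, pop_size=20, generations=25):
    """Evolve gage subsets toward the lowest-residual loads equation."""
    def rand_subset():
        return tuple(sorted(rng.sample(range(n_gages), k)))
    pop = [rand_subset() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sse)
        nxt = pop[:4]                               # elitism
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)        # parents: better half
            genes = list(set(p1) | set(p2))         # crossover: merge genes
            child = set(rng.sample(genes, k))
            if rng.random() < 0.3:                  # mutation: random swap
                child.discard(rng.choice(sorted(child)))
                child.add(rng.randrange(n_gages))
            nxt.append(tuple(sorted(child)) if len(child) == k
                       else rand_subset())
        pop = nxt
    best = min(pop, key=sse)
    improved = True                                 # greedy single-swap polish
    while improved:
        improved = False
        for i in range(k):
            for j in range(n_gages):
                if j in best:
                    continue
                cand = tuple(sorted(best[:i] + (j,) + best[i + 1:]))
                if sse(cand) < sse(best):
                    best, improved = cand, True
    return best

best = ga_select()
```

    On this synthetic problem the gage subset recovered is the one the load actually depends on; the T-value and Best Step-down techniques mentioned above would instead prune gages one statistic at a time.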

  20. Measuring Cytokine Concentrations Using Magnetic Spectroscopy of Nanoparticle Brownian Relaxation

    NASA Astrophysics Data System (ADS)

    Khurshid, Hafsa; Shi, Yipeng; Weaver, John

    Magnetic particle spectroscopy is a newly developed non-invasive technique for obtaining information about nanoparticles' microenvironment. In this technique the nanoparticles' magnetization, induced by an alternating magnetic field at various applied frequencies, is processed to analyze the rotational freedom of the nanoparticles. By analyzing average rotational freedom, it is possible to measure the nanoparticles' relaxation time, and hence estimate the temperature and viscosity of the medium. In molecular concentration sensing, the rotational freedom indicates the number of nanoparticles that are bound by a selected analyte. We have developed microscopic nanoparticle probes to measure the concentration of selected molecules. The nanoparticles are targeted to bind the selected molecule, and the resulting reduction in rotational freedom can be quantified remotely. Previously reported sensitivity was a factor of about 200. However, with our newer perpendicular-field setup (US Patent Application Serial No. 61/721,378), it is possible to sense cytokine concentrations as low as 5 picomolar in vitro. The excellent sensitivity of this apparatus is due to isolation of the drive field from the signal, so the output can be amplified to a higher level.
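
    The physics behind the bound/unbound distinction is the Brownian rotational relaxation time, τ_B = 3ηV_h/(k_B·T): analyte binding increases the hydrodynamic volume V_h and hence τ_B. A sketch with illustrative numbers (water-like viscosity and hypothetical particle diameters, not values from the abstract):

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
eta, T = 1.0e-3, 300.0      # water-like viscosity (Pa*s), temperature (K)

def tau_brownian(d_hydro):
    """Brownian rotational relaxation time tau_B = 3*eta*V_h/(k_B*T)
    for a sphere of hydrodynamic diameter d_hydro (m)."""
    V_h = math.pi * d_hydro ** 3 / 6.0
    return 3.0 * eta * V_h / (k_B * T)

tau_free = tau_brownian(100e-9)    # unbound nanoparticle
tau_bound = tau_brownian(120e-9)   # analyte-bound: larger, slower to rotate
```

    Because τ_B scales with the cube of the diameter, even a modest size increase on binding shifts the relaxation measurably, which is what the spectroscopic signal quantifies.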

  1. The development of laser speckle velocimetry for the study of vortical flows

    NASA Technical Reports Server (NTRS)

    Krothapalli, A.

    1991-01-01

    A new experimental technique, commonly known as PIDV (particle image displacement velocimetry), was developed to measure an instantaneous two-dimensional velocity field in a selected plane of the flow field. This technique was successfully applied to the study of several problems: (1) unsteady flows with large-scale vortical structures; (2) the instantaneous two-dimensional flow in the transition region of a rectangular air jet; and (3) the instantaneous flow over a circular bump in a transonic flow. In several other experiments PIDV is routinely used as a non-intrusive measurement technique to obtain instantaneous two-dimensional velocity fields.

  2. Evaluation of ultrasonics and optimized radiography for 2219-T87 aluminum weldments

    NASA Technical Reports Server (NTRS)

    Clotfelter, W. N.; Hoop, J. M.; Duren, P. C.

    1975-01-01

    Ultrasonic studies are described which are specifically directed toward the quantitative measurement of randomly located defects previously found in aluminum welds with radiography or with dye penetrants. Experimental radiographic studies were also made to optimize techniques for welds of the thickness range to be used in fabricating the External Tank of the Space Shuttle. Conventional and innovative ultrasonic techniques were applied to the flaw size measurement problem. Advantages and disadvantages of each method are discussed. Flaw size data obtained ultrasonically were compared to radiographic data and to real flaw sizes determined by destructive measurements. Considerable success was achieved with pulse echo techniques and with 'pitch and catch' techniques. The radiographic work described demonstrates that careful selection of film exposure parameters for a particular application must be made to obtain optimized flaw detectability. Thus, film exposure techniques can be improved even though radiography is an old weld inspection method.

  3. Selection of a Prototype Engine Monitor for Coast Guard Main Diesel Propulsion

    DOT National Transportation Integrated Search

    1979-04-01

    A diesel engine monitor system has been synthesized from several parameter measurement subsystems which employ measurement techniques suitable for use on the main propulsion engines in U.S. Coast Cutters. The primary functions of the system are to mo...

  4. Review of chemical separation techniques applicable to alpha spectrometric measurements

    NASA Astrophysics Data System (ADS)

    de Regge, P.; Boden, R.

    1984-06-01

Prior to alpha-spectrometric measurements several chemical manipulations are usually required to obtain alpha-radiating sources with the desired radiochemical and chemical purity. These include sampling, dissolution or leaching of the elements of interest, conditioning of the solution, chemical separation and preparation of the alpha-emitting source. The choice of a particular method is dependent on different criteria but always involves aspects of the selectivity or the quantitative nature of the separations. The availability of suitable tracers or spikes and modern high resolution instruments resulted in the widespread application of isotopic dilution techniques to the problems associated with quantitative chemical separations. This enhanced the development of highly selective methods and reagents which led to important simplifications in the separation schemes. The chemical separation methods commonly used in connection with alpha-spectrometric measurements involve precipitation with selected scavenger elements, solvent extraction, ion exchange and electrodeposition techniques or any combination of them. Depending on the purpose of the final measurement and the type of sample available the chemical separation methods have to be adapted to the particular needs of environment monitoring, nuclear chemistry and metrology, safeguards and safety, waste management and requirements in the nuclear fuel cycle. Against the background of separation methods available in the literature the present paper highlights the current developments and trends in the chemical techniques applicable to alpha spectrometry.

  5. Feasibility of automated dropsize distributions from holographic data using digital image processing techniques. [particle diameter measurement technique

    NASA Technical Reports Server (NTRS)

    Feinstein, S. P.; Girard, M. A.

    1979-01-01

    An automated technique for measuring particle diameters and their spatial coordinates from holographic reconstructions is being developed. Preliminary tests on actual cold-flow holograms of impinging jets indicate that a suitable discriminant algorithm consists of a Fourier-Gaussian noise filter and a contour thresholding technique. This process identifies circular as well as noncircular objects. The desired objects (in this case, circular or possibly ellipsoidal) are then selected automatically from the above set and stored with their parametric representations. From this data, dropsize distributions as a function of spatial coordinates can be generated and combustion effects due to hardware and/or physical variables studied.

  6. Comparison of Selective Culturing and Biochemical Techniques for Measuring Biological Activity in Geothermal Process Fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pryfogle, Peter Albert

    2000-09-01

For the past three years, scientists at the Idaho National Engineering and Environmental Laboratory have been conducting studies aimed at determining the presence and influence of bacteria found in geothermal plant cooling water systems. In particular, the efforts have been directed at understanding the conditions that lead to the growth and accumulation of biomass within these systems, which reduces their operational and thermal efficiency. Initially, the methods selected were based upon the current practices used by the industry and included the collection of water quality parameters, the measurement of soluble carbon, and the use of selective media for the determination of the number density of various types of organisms. This data has been collected on a seasonal basis at six different facilities located at The Geysers in Northern California. While this data is valuable in establishing biological growth trends in the facilities and providing an initial determination of upset or off-normal conditions, more detailed information about the biological activity is needed to determine what is triggering or sustaining the growth in these facilities in order to develop improved monitoring and treatment techniques. In recent years, new biochemical approaches, based upon the analyses of phospholipid fatty acids and DNA recovered from environmental samples, have been developed and commercialized. These techniques, in addition to allowing the determination of the quantity of biomass, also provide information on the community composition and the nutritional status of the organisms. During the past year, samples collected from the condenser effluents of four of the plants at The Geysers were analyzed using these methods and compared with the results obtained from selective culturing techniques. The purpose of this effort was to evaluate the cost-benefit of implementing these techniques for tracking microbial activity in the plant study, in place of the selective culturing analyses that are currently the industry standard.

  7. Reducing Sweeping Frequencies in Microwave NDT Employing Machine Learning Feature Selection

    PubMed Central

    Moomen, Abdelniser; Ali, Abdulbaset; Ramahi, Omar M.

    2016-01-01

Nondestructive Testing (NDT) assessment of materials’ health condition is useful for classifying healthy from unhealthy structures or detecting flaws in metallic or dielectric structures. Performing structural health testing for coated/uncoated metallic or dielectric materials with the same testing equipment requires a testing method that can work on metallics and dielectrics, such as microwave testing. Reducing the complexity and expense associated with current diagnostic practices of microwave NDT of structural health requires an effective and intelligent approach based on feature selection and classification techniques of machine learning. Current microwave NDT methods are in general based on measuring variation in the S-matrix over the entire operating frequency range of the sensors. For instance, assessing the health of metallic structures using a microwave sensor depends on the reflection and/or transmission coefficient measurements as a function of the sweeping frequencies of the operating band. The aim of this work is to reduce the number of sweeping frequencies using machine learning feature selection techniques. By treating sweeping frequencies as features, the top important features can be identified, and then only the most influential features (frequencies) are considered when building the microwave NDT equipment. The proposed method of reducing sweeping frequencies was validated experimentally using a waveguide sensor and a metallic plate with different cracks. Among the investigated feature selection techniques are information gain, gain ratio, relief, and chi-squared. The effectiveness of the selected features was validated through performance evaluations of various classification models; namely, Nearest Neighbor, Neural Networks, Random Forest, and Support Vector Machine. Results showed good crack classification accuracy rates after employing feature selection algorithms. PMID:27104533
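The frequency-selection idea above can be sketched with a plain information-gain ranking, treating each sweeping frequency as one feature and keeping only the top-ranked frequencies. This is an illustrative toy (the function names and the discretized feature values are assumptions, not the paper's implementation):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Reduction in label entropy after splitting on one discrete feature."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

def top_k_frequencies(samples, labels, k):
    """Rank features (frequencies) by information gain; return indices of the top k.

    samples: list of per-measurement feature vectors, one entry per frequency.
    """
    n_features = len(samples[0])
    gains = [(information_gain([s[j] for s in samples], labels), j)
             for j in range(n_features)]
    gains.sort(reverse=True)
    return [j for _, j in gains[:k]]
```

In the paper's setting, a classifier such as SVM or Random Forest would then be trained on only the retained frequencies.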

  8. Developing Characterization Procedures for Qualifying both Novel Selective Laser Sintering Polymer Powders and Recycled Powders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bajric, Sendin

Selective laser sintering (SLS) is an additive technique which is showing great promise over conventional manufacturing techniques. SLS requires certain key material properties for a polymer powder to be successfully processed into an end-use part, and therefore only a limited selection of materials is available. Furthermore, there has been evidence of a powder’s quality deteriorating following each SLS processing cycle. The current investigation serves to build a path forward in identifying new SLS powder materials by developing characterization procedures for identifying key material properties as well as for detecting changes in a powder’s quality. Thermogravimetric analyses, differential scanning calorimetry, and bulk density measurements were investigated.

  9. A quantum inspired model of radar range and range-rate measurements with applications to weak value measurements

    NASA Astrophysics Data System (ADS)

    Escalante, George

    2017-05-01

Weak Value Measurements (WVMs) with pre- and post-selected quantum mechanical ensembles were proposed by Aharonov, Albert, and Vaidman in 1988 and have found numerous applications in both theoretical and applied physics. In the field of precision metrology, WVM techniques have been demonstrated and proven valuable as a means to shift, amplify, and detect signals and to make precise measurements of small effects in both quantum and classical systems, including: particle spin, the Spin-Hall effect of light, optical beam deflections, frequency shifts, field gradients, and many others. In principle, WVM amplification techniques are also possible in radar and could be a valuable tool for precision measurements. However, relatively limited research has been done in this area. This article presents a quantum-inspired model of radar range and range-rate measurements of arbitrary strength, including standard and pre- and post-selected measurements. The model is used to extend WVM amplification theory to radar, with the receive filter performing the post-selection role. It is shown that the description of range and range-rate measurements based on the quantum-mechanical measurement model and formalism produces the same results as the conventional approach used in radar based on signal processing and filtering of the reflected signal at the radar receiver. Numerical simulation results using simple point scatterer configurations are presented, applying the quantum-inspired model of radar range and range-rate measurements that occur in the weak measurement regime. Potential applications and benefits of the quantum-inspired approach to radar measurements are presented, including improved range and Doppler measurement resolution.
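For context, the quantity underlying such pre- and post-selected measurements is the standard Aharonov-Albert-Vaidman weak value (notation is the conventional one, not taken from this article):

```latex
% Weak value of observable \hat{A}, with pre-selected state |\psi\rangle
% and post-selected state |\phi\rangle:
A_w = \frac{\langle \phi \,|\, \hat{A} \,|\, \psi \rangle}{\langle \phi \,|\, \psi \rangle}
```

As the pre- and post-selected states approach orthogonality (the overlap in the denominator tends to zero), the magnitude of the weak value can greatly exceed the eigenvalue range of the observable, which is the amplification effect the abstract refers to.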

  10. EDITORIAL: Measurement techniques for multiphase flows Measurement techniques for multiphase flows

    NASA Astrophysics Data System (ADS)

    Okamoto, Koji; Murai, Yuichi

    2009-11-01

Research on multiphase flows is very important for industrial applications, including power stations, vehicles, engines, food processing and so on. Multiphase flows are inherently nonlinear, and the interaction between the phases plays a very interesting role in the flows. This nonlinear interaction causes multiphase flows to be very complicated. Therefore techniques for measuring multiphase flows are very useful in helping to understand the nonlinear phenomena. The state-of-the-art measurement techniques were presented and discussed at the sixth International Symposium on Measurement Techniques for Multiphase Flows (ISMTMF2008) held in Okinawa, Japan, on 15-17 December 2008. This special feature of Measurement Science and Technology includes selected papers from ISMTMF2008. Okinawa has a long history as the Ryukyus Kingdom. China, Japan and many western Pacific countries have had cultural and economic exchanges through Okinawa for over 1000 years. Much technical and scientific information was exchanged at the symposium in Okinawa. The proceedings of ISMTMF2008, apart from these special featured papers, were published in Journal of Physics: Conference Series vol. 147 (2009). We would like to express special thanks to all the contributors to the symposium and this special feature. This special feature will be a milestone in measurement techniques for multiphase flows.

  11. The application of compressive sampling in rapid ultrasonic computerized tomography (UCT) technique of steel tube slab (STS).

    PubMed

    Jiang, Baofeng; Jia, Pengjiao; Zhao, Wen; Wang, Wentao

    2018-01-01

This paper explores a new method for rapid structural damage inspection of steel tube slab (STS) structures along randomly measured paths based on a combination of compressive sampling (CS) and ultrasonic computerized tomography (UCT). In the measurement stage, using fewer randomly selected paths rather than the whole measurement net is proposed to detect the underlying damage of a concrete-filled steel tube. In the imaging stage, the ℓ1-minimization algorithm is employed to recover the information of the microstructures based on the measurement data related to the internal situation of the STS structure. A numerical concrete tube model, with various levels of damage, was studied to demonstrate the performance of the rapid UCT technique. Real-world concrete-filled steel tubes in the Shenyang Metro stations were inspected using the proposed UCT technique in a CS framework. Both the numerical and experimental results show the rapid UCT technique has the capability of damage detection in an STS structure with a high level of accuracy and with fewer required measurements, which is more convenient and efficient than the traditional UCT technique.
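The ℓ1-minimization recovery step at the heart of compressive sampling can be illustrated with a basic iterative soft-thresholding (ISTA) solver. This is a generic sketch under the usual sparse-recovery assumptions; the paper does not specify its solver, and all names here are illustrative:

```python
import numpy as np

def ista(A, y, lam=0.01, step=None, iters=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    A: measurement matrix (few random paths -> rows), y: measurements,
    x: sparse image/damage vector to recover.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                   # gradient of the smooth data-fit term
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x
```

With far fewer measurement rows than unknowns, the solver still recovers a sparse vector, which is the mechanism that lets the rapid UCT variant use fewer paths than the full measurement net.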

  12. Measurement of cross relaxation between two selected nuclei by synchronous nutation of magnetization in nuclear magnetic resonance

    NASA Astrophysics Data System (ADS)

    Burghardt, Irene; Konrat, Robert; Boulat, Benoit; Vincent, Sébastien J. F.; Bodenhausen, Geoffrey

    1993-01-01

    A novel technique is described that allows one to measure cross-relaxation rates (Overhauser effects) between two selected nuclei in high-resolution NMR. The two chosen sites are irradiated simultaneously with the sidebands of an amplitude-modulated radio-frequency field, so that their magnetization vectors are forced to undergo a simultaneous motion, which is referred to as ``synchronous nutation.'' From the time-dependence observed for different initial conditions, one may derive cross-relaxation rates, and hence determine internuclear distances. The scalar interactions between the selected spins and other spins belonging to the same coupling network are effectively decoupled. Furthermore, cross relaxation to other spins in the environment does not affect the transient response of the selected spins, which are therefore in effect isolated from their environment in terms of dipolar interactions. The method is particularly suitable to study cases where normal Overhauser effects are perturbed by spin-diffusion effects due to the presence of further spins. The technique is applied to the protein bovine pancreatic trypsin inhibitor.

  13. High-precision double-frequency interferometric measurement of the cornea shape

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl V.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.; Smirnov, Eugene M.; Ilchenko, Leonid M.; Goncharov, Vadym O.

    1996-11-01

To measure the shape of the cornea and its deviations from the required values before and after a PRK operation, as well as the shape of other spherical objects such as an artificial pupil, a double-frequency dual-beam interferometric technique was used. The technique is based on determining the optical path difference between two neighboring laser beams reflected from the cornea or other surface under investigation. Knowing the distance between the beams, the local slope of the investigated surface is obtained; the shape itself is reconstructed by along-line integration. To match the wavefront of the laser beam to the spherical shape of the cornea or artificial pupil in the course of scanning, an additional lens is used. The signal-to-noise ratio is improved by excluding losses in the acousto-optic deflectors. Polarization selection is employed to choose the signal needed for measurement. 2D image presentation is accompanied by convenient PC accessories, permitting precise cross-section measurements along selected directions. A sensitivity of the order of 10^-2 micrometers is achieved.
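The along-line integration step (recovering a surface profile from slopes sampled at a fixed beam spacing) can be sketched as a cumulative trapezoidal sum. This is a generic illustration, not the authors' code; the function name and sampling are assumptions:

```python
def reconstruct_profile(slopes, dx, h0=0.0):
    """Integrate sampled surface slopes along a scan line.

    slopes: local slope dh/dx at points spaced dx apart;
    h0: assumed height at the first sample (integration constant).
    Uses the trapezoidal rule, exact when slope varies linearly between samples.
    """
    heights = [h0]
    for i in range(len(slopes) - 1):
        heights.append(heights[-1] + 0.5 * (slopes[i] + slopes[i + 1]) * dx)
    return heights
```

Note that the integration constant h0 is unobservable from slope data alone, which is why interferometric shape reconstruction recovers the profile only up to an overall offset.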

  14. A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2009-01-01

A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
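The paper minimizes Kalman-filter estimation error over candidate sensor suites; as a simplified stand-in for that search, a greedy selection that minimizes the trace of a linear-Gaussian posterior covariance can be sketched. This is a swapped-in, simpler criterion for illustration, not the authors' algorithm, and all names are assumptions:

```python
import numpy as np

def greedy_sensor_selection(H, R, P0, k):
    """Greedily pick k sensor rows of H to minimize trace of the posterior
    covariance (P0^-1 + sum_i h_i h_i^T / r_i)^-1 of a linear-Gaussian estimate.

    H: (n_sensors, n_states) measurement rows; R: per-sensor noise variances;
    P0: prior covariance of the health parameters.
    """
    chosen = []
    info = np.linalg.inv(P0)  # information matrix accumulated so far
    for _ in range(k):
        best, best_trace = None, np.inf
        for i in range(H.shape[0]):
            if i in chosen:
                continue
            cand = info + np.outer(H[i], H[i]) / R[i]
            tr = np.trace(np.linalg.inv(cand))  # resulting estimation error
            if tr < best_trace:
                best, best_trace = i, tr
        chosen.append(best)
        info = info + np.outer(H[best], H[best]) / R[best]
    return chosen
```

The greedy loop naturally skips redundant or noisy sensors: a second sensor measuring an already well-observed direction reduces the error trace less than one observing an uncovered health parameter.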

  15. Less or more hemodynamic monitoring in critically ill patients.

    PubMed

    Jozwiak, Mathieu; Monnet, Xavier; Teboul, Jean-Louis

    2018-06-07

Hemodynamic investigations are required in patients with shock to identify the type of shock, to select the most appropriate treatments and to assess the patient's response to the selected therapy. We discuss how to select the most appropriate hemodynamic monitoring techniques in patients with shock as well as the future of hemodynamic monitoring. Over the last decades, hemodynamic monitoring techniques have evolved from intermittent toward continuous and real-time measurements and from invasive toward less-invasive approaches. In patients with shock, current guidelines recommend echocardiography as the preferred modality for the initial hemodynamic evaluation. In patients with shock nonresponsive to initial therapy and/or in the most complex patients, it is recommended to monitor the cardiac output and to use advanced hemodynamic monitoring techniques. These also provide other variables that are useful for managing the most complex cases. Uncalibrated and noninvasive cardiac output monitors are not reliable enough in the intensive care setting. The use of echocardiography should be initially encouraged in patients with shock to identify the type of shock and to select the most appropriate therapy. The use of more invasive hemodynamic monitoring techniques should be discussed on an individualized basis.

  16. Comparative study of resist stabilization techniques for metal etch processing

    NASA Astrophysics Data System (ADS)

    Becker, Gerry; Ross, Matthew F.; Wong, Selmer S.; Minter, Jason P.; Marlowe, Trey; Livesay, William R.

    1999-06-01

This study investigates resist stabilization techniques as they are applied to a metal etch application. The techniques that are compared are conventional deep-UV/thermal stabilization, or UV bake, and electron beam stabilization. The electron beam tool used in this study, an ElectronCure system from AlliedSignal Inc.'s Electron Vision Group, utilizes a flood electron source and a non-thermal process. These stabilization techniques are compared with respect to a metal etch process. In this study, two types of resist are considered for stabilization and etch: a g/i-line resist, Shipley SPR-3012, and an advanced i-line, Shipley SPR 955-Cm. For each of these resists the effects of stabilization on resist features are evaluated by post-stabilization SEM analysis. Etch selectivity in all cases is evaluated by using a timed metal etch, and measuring resist remaining relative to total metal thickness etched. Etch selectivity is presented as a function of stabilization condition. Analyses of the effects of the type of stabilization on this method of selectivity measurement are also presented. SEM analysis was also performed on the features after a complete etch process, and is detailed as a function of stabilization condition. Post-etch cleaning is also an important factor impacted by pre-etch resist stabilization. Results of post-etch cleaning are presented for both stabilization methods. SEM inspection is also detailed for the metal features after resist removal processing.

  17. Study on fast measurement of sugar content of yogurt using Vis/NIR spectroscopy techniques

    NASA Astrophysics Data System (ADS)

    He, Yong; Feng, Shuijuan; Wu, Di; Li, Xiaoli

    2006-09-01

In order to measure the sugar content of yogurt rapidly, a fast measurement method based on Vis/NIR spectroscopy was established. 25 samples selected separately from five different brands of yogurt were measured by Vis/NIR spectroscopy. The sugar content of the yogurt at the positions scanned by the spectrometer was measured with a sugar content meter. The mathematical model between sugar content and Vis/NIR spectral measurements was established and developed based on partial least squares (PLS). The correlation coefficient of sugar content based on the PLS model is more than 0.894, the standard error of calibration (SEC) is 0.356, and the standard error of prediction (SEP) is 0.389. In predicting the sugar content of 35 samples of yogurt from 5 different brands, the correlation coefficient between the predicted and measured values of those samples is more than 0.934. The results show good to excellent prediction performance, and the Vis/NIR spectroscopy technique had significantly greater accuracy for determining the sugar content. It was concluded that the Vis/NIR measurement technique appears reliable for fast measurement of the sugar content of yogurt, and a new method for the measurement of sugar content of yogurt was established.
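The PLS calibration step (regressing a single response, here sugar content, on many collinear spectral channels) can be sketched with a minimal NIPALS-style PLS1 implementation. This is a generic sketch of the standard algorithm, not the authors' software, and all names are illustrative:

```python
import numpy as np

def pls1_fit(X, y, n_components=3):
    """Minimal NIPALS PLS1: returns regression coefficients and intercept.

    X: (n_samples, n_wavelengths) spectra; y: (n_samples,) reference values.
    """
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                 # weight: covariance direction of X with y
        w /= np.linalg.norm(w)
        t = Xc @ w                    # score
        tt = t @ t
        p = Xc.T @ t / tt             # X loading
        qk = yc @ t / tt              # y loading
        Xc = Xc - np.outer(t, p)      # deflate X
        yc = yc - qk * t              # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)   # coefficients in original X space
    return B, y_mean - x_mean @ B

def pls1_predict(X, B, b0):
    return np.asarray(X, float) @ B + b0
```

Unlike ordinary least squares, the latent components make the fit stable even when there are far more wavelengths than samples, which is the usual situation in NIR calibration.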

  18. Air Quality Instrumentation. Volume 2.

    ERIC Educational Resources Information Center

    Scales, John W., Ed.

    To insure a wide dissemination of information describing advances in measurement and control techniques, the Instrument Society of America (ISA) has published this monograph of selected papers, the second in a series, from recent ISA symposia dealing with air pollution. Papers range from a discussion of individual pollutant measurements to…

  19. Neutron and positron techniques for fluid transfer system analysis and remote temperature and stress measurement

    NASA Astrophysics Data System (ADS)

    Stewart, P. A. E.

    1987-05-01

    Present and projected applications of penetrating radiation techniques to gas turbine research and development are considered. Approaches discussed include the visualization and measurement of metal component movement using high energy X-rays, the measurement of metal temperatures using epithermal neutrons, the measurement of metal stresses using thermal neutron diffraction, and the visualization and measurement of oil and fuel systems using either cold neutron radiography or emitting isotope tomography. By selecting the radiation appropriate to the problem, the desired data can be probed for and obtained through imaging or signal acquisition, and the necessary information can then be extracted with digital image processing or knowledge based image manipulation and pattern recognition.

  20. Damage identification in beams using speckle shearography and an optimal spatial sampling

    NASA Astrophysics Data System (ADS)

    Mininni, M.; Gabriele, S.; Lopes, H.; Araújo dos Santos, J. V.

    2016-10-01

    Over the years, the derivatives of modal displacement and rotation fields have been used to localize damage in beams. Usually, the derivatives are computed by applying finite differences. The finite differences propagate and amplify the errors that exist in real measurements, and thus, it is necessary to minimize this problem in order to get reliable damage localizations. A way to decrease the propagation and amplification of the errors is to select an optimal spatial sampling. This paper presents a technique where an optimal spatial sampling of modal rotation fields is computed and used to obtain the modal curvatures. Experimental measurements of modal rotation fields of a beam with single and multiple damages are obtained with shearography, which is an optical technique allowing the measurement of full-fields. These measurements are used to test the validity of the optimal sampling technique for the improvement of damage localization in real structures. An investigation on the ability of a model updating technique to quantify the damage is also reported. The model updating technique is defined by the variations of measured natural frequencies and measured modal rotations and aims at calibrating the values of the second moment of area in the damaged areas, which were previously localized.
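The curvature computation described above (finite differences of measured modal rotations, with a spacing set by the chosen spatial sampling) can be sketched as follows; the function name is illustrative, not from the paper:

```python
def curvature_from_rotations(theta, dx):
    """Modal curvature via central differences of measured rotations.

    theta: rotations sampled at points spaced dx apart along the beam.
    Returns curvature at the interior points (endpoints are lost).
    """
    return [(theta[i + 1] - theta[i - 1]) / (2 * dx)
            for i in range(1, len(theta) - 1)]
```

The 1/(2*dx) factor is why sampling matters: random measurement errors in the rotations are amplified in inverse proportion to the spacing, so an optimal dx trades noise amplification against spatial resolution of the damage peak.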

  1. Transient Infrared Measurement of Laser Absorption Properties of Porous Materials

    NASA Astrophysics Data System (ADS)

    Marynowicz, Andrzej

    2016-06-01

The infrared thermography measurements of porous building materials have become more frequent in recent years. Many accompanying techniques for the thermal field generation have been developed, including one based on laser radiation. This work presents a simple optimization technique for estimating the laser beam absorption of selected porous building materials, namely clinker brick and cement mortar. The transient temperature measurements were performed with an infrared camera during laser-induced heating of the samples' surfaces. As a result, the absorbed fraction of the incident laser beam, together with its shape parameter, is reported.

  2. Survival-related Selection Bias in Studies of Racial Health Disparities: The Importance of the Target Population and Study Design.

    PubMed

    Howe, Chanelle J; Robinson, Whitney R

    2018-07-01

    The impact of survival-related selection bias has not always been discussed in relevant studies of racial health disparities. Moreover, the analytic approaches most frequently employed in the epidemiologic literature to minimize selection bias are difficult to implement appropriately in racial disparities research. This difficulty stems from the fact that frequently employed analytic techniques require that common causes of survival and the outcome are accurately measured. Unfortunately, such common causes are often unmeasured or poorly measured in racial health disparities studies. In the absence of accurate measures of the aforementioned common causes, redefining the target population or changing the study design represents a useful approach for reducing the extent of survival-related selection bias. To help researchers recognize and minimize survival-related selection bias in racial health disparities studies, we illustrate the aforementioned selection bias and how redefining the target population or changing the study design can be useful.

  3. Measurement of electromagnetic properties of powder and solid metal materials for additive manufacturing

    NASA Astrophysics Data System (ADS)

    Todorov, Evgueni Iordanov

    2017-04-01

    The lack of validated nondestructive evaluation (NDE) techniques for examination during and after additive manufacturing (AM) component fabrication is one of the obstacles in the way of broadening use of AM for critical applications. Knowledge of electromagnetic properties of powder (e.g. feedstock) and solid AM metal components is necessary to evaluate and deploy electromagnetic NDE modalities for examination of AM components. The objective of this research study was to develop and implement techniques for measurement of powder and solid metal electromagnetic properties. Three materials were selected - Inconel 625, duplex stainless steel 2205, and carbon steel 4140. The powder properties were measured with alternate current (AC) model based eddy current technique and direct current (DC) resistivity measurements. The solid metal properties were measured with DC resistivity measurements, DC magnetic techniques, and AC model based eddy current technique. Initial magnetic permeability and electrical conductivity were acquired for both powder and solid metal. Additional magnetic properties such as maximum permeability, coercivity, retentivity, and others were acquired for 2205 and 4140. Two groups of specimens were tested along the build length and width respectively to investigate for possible anisotropy. There was no significant difference or anisotropy when comparing measurements acquired along build length to those along the width. A trend in AC measurements might be associated with build geometry. Powder electrical conductivity was very low and difficult to estimate reliably with techniques used in the study. The agreement between various techniques was very good where adequate comparison was possible.

  4. Automatic welding detection by an intelligent tool pipe inspection

    NASA Astrophysics Data System (ADS)

    Arizmendi, C. J.; Garcia, W. L.; Quintero, M. A.

    2015-07-01

This work provides a model based on machine learning techniques for weld recognition, based on signals obtained with an in-line inspection tool called a “smart pig” in oil and gas pipelines. The model uses a signal noise reduction phase by means of pre-processing algorithms and attribute-selection techniques. The noise reduction techniques were selected after a literature review and testing with survey data. Subsequently, the model was trained using recognition and classification algorithms, specifically artificial neural networks and support vector machines. Finally, the trained model was validated with different data sets and the performance was measured with cross validation and ROC analysis. The results show that it is possible to identify welds automatically with an efficiency between 90 and 98 percent.
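The ROC analysis used for validation reduces to computing the area under the ROC curve from classifier scores; a minimal version (via the rank-sum equivalence, not the authors' code) can be sketched as:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank-sum formulation.

    labels: 0/1 ground truth (1 = weld present); scores: classifier outputs.
    AUC equals the probability that a random positive outscores a random
    negative, with ties counted as half.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level weld detection and 1.0 to perfect ranking, which is the scale against which the reported 90-98 percent efficiency would be judged.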

  5. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
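The Markov chain Monte Carlo model composition (MC3) idea above can be sketched as a Metropolis walk over variable subsets. As a simplification, this sketch scores models with BIC (a standard approximation to the marginal likelihood) rather than the paper's ℓ1-regularization-path model space; all names and the toy scoring are assumptions, not the authors' method:

```python
import numpy as np

def bic(X, y, subset):
    """BIC of an ordinary least squares fit on the given variable subset."""
    n = len(y)
    if subset:
        Xs = X[:, sorted(subset)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        resid = y - Xs @ beta
    else:
        resid = y
    rss = float(resid @ resid)
    return n * np.log(rss / n) + len(subset) * np.log(n)

def mc3(X, y, iters=2000, seed=0):
    """Metropolis walk over subsets; returns posterior inclusion frequencies."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    model = set()
    score = bic(X, y, model)
    counts = np.zeros(p)
    for _ in range(iters):
        j = int(rng.integers(p))
        prop = model ^ {j}                         # toggle one variable
        s = bic(X, y, prop)
        if np.log(rng.random()) < (score - s) / 2:  # BIC ~ -2 log posterior
            model, score = prop, s
        for k in model:
            counts[k] += 1
    return counts / iters
```

Averaging predictions over the visited models, weighted by how often each is visited, is what makes the procedure "model-averaged" rather than a single selected model.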

  6. Deformation Measurement In The Hayward Fault Zone Using Partially Correlated Persistent Scatterers

    NASA Astrophysics Data System (ADS)

    Lien, J.; Zebker, H. A.

    2013-12-01

Interferometric synthetic aperture radar (InSAR) is an effective tool for measuring temporal changes in the Earth's surface. By combining SAR phase data collected at varying times and orbit geometries, with InSAR we can produce high-accuracy, wide-coverage images of crustal deformation fields. Changes in the radar imaging geometry, scatterer positions, or scattering behavior between radar passes cause the measured radar return to differ, leading to a decorrelation phase term that obscures the deformation signal and prevents the use of large baseline data. Here we present a new physically-based method of modeling decorrelation from the subset of pixels with the highest intrinsic signal-to-noise ratio, the so-called persistent scatterers (PS). This more complete formulation, which includes both phase and amplitude scintillations, better describes the scattering behavior of partially correlated PS pixels and leads to a more reliable selection algorithm. The new method identifies PS pixels using maximum likelihood signal-to-clutter ratio (SCR) estimation based on the joint interferometric stack phase-amplitude distribution. Our PS selection method is unique in that it considers both phase and amplitude; accounts for correlation between all possible pairs of interferometric observations; and models the effect of spatial and temporal baselines on the stack. We use the resulting maximum likelihood SCR estimate as a criterion for PS selection. We implement the partially correlated persistent scatterer technique to analyze a stack of C-band European Remote Sensing (ERS-1/2) interferometric radar data imaging the Hayward Fault Zone from 1995 to 2000. We show that our technique achieves a better trade-off between PS pixel selection accuracy and network density compared to other PS identification methods, particularly in areas of natural terrain. We then present deformation measurements obtained by the selected PS network.
Our results demonstrate that the partially correlated persistent scatterer technique can attain accurate deformation measurements even in areas that suffer decorrelation due to natural terrain. The accuracy of phase unwrapping and subsequent deformation estimation on the spatially sparse PS network depends on both pixel selection accuracy and the density of the network. We find that many additional pixels can be added to the PS list if we are able to correctly identify and add those in which the scattering mechanism exhibits partial, rather than complete, correlation across all radar scenes.
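
    The maximum-likelihood SCR estimator described above is beyond a short sketch, but the underlying idea of screening pixels for temporal stability can be illustrated with the classical amplitude dispersion index (a common proxy in the PS literature, not the authors' method; the pixel stacks and threshold below are invented for the example):

    ```python
    # Hypothetical sketch: persistent-scatterer (PS) candidate screening via the
    # amplitude dispersion index D_A = sigma_A / mu_A. This is a standard proxy
    # for phase stability, NOT the maximum-likelihood SCR estimator the abstract
    # describes; data and threshold are illustrative only.

    def amplitude_dispersion(stack):
        """stack: amplitude values for one pixel across N SAR scenes."""
        n = len(stack)
        mu = sum(stack) / n
        var = sum((a - mu) ** 2 for a in stack) / n
        return (var ** 0.5) / mu

    def select_ps_candidates(pixel_stacks, threshold=0.25):
        """Keep pixels whose amplitude varies little across the stack."""
        return [pid for pid, stack in pixel_stacks.items()
                if amplitude_dispersion(stack) < threshold]

    stacks = {
        "stable_roof": [0.98, 1.02, 1.00, 0.99, 1.01],    # bright, steady scatterer
        "vegetation":  [0.20, 1.40, 0.55, 0.90, 0.15],    # decorrelating terrain
    }
    print(select_ps_candidates(stacks))  # -> ['stable_roof']
    ```

    Low dispersion indicates a dominant, temporally stable scatterer; the paper's joint phase-amplitude formulation generalizes this by also modeling partially correlated pixels.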

  7. New developments in measurements technology relevant to the studies of deep geological repositories in bedded salt

    NASA Astrophysics Data System (ADS)

    Mao, N. H.; Ramirez, A. L.

    1980-10-01

    Developments in measurement technology are presented which are relevant to the studies of deep geological repositories for nuclear waste disposal during all phases of development, i.e., site selection, site characterization, construction, operation, and decommissioning. Emphasis was placed on geophysics and geotechnics with special attention to those techniques applicable to bedded salt. The techniques are grouped into sections as follows: tectonic environment, state of stress, subsurface structures, fractures, stress changes, deformation, thermal properties, fluid transport properties, and other approaches. Several areas that merit further research and development are identified: in situ thermal measurement techniques, fracture detection and characterization, in situ stress measurements, and creep behavior. Available instrumentation should generally be improved to offer better resolution and accuracy, together with enhanced survivability and reliability over extended periods in a hostile environment.

  8. A systematic review on the quality of measurement techniques for the assessment of burn wound depth or healing potential.

    PubMed

    Jaspers, Mariëlle E H; van Haasterecht, Ludo; van Zuijlen, Paul P M; Mokkink, Lidwine B

    2018-06-22

    Reliable and valid assessment of burn wound depth or healing potential is essential for treatment decision-making, providing a prognosis, and comparing studies that evaluate different treatment modalities. The aim of this review was to critically appraise, compare and summarize the quality of relevant measurement properties of techniques that aim to assess burn wound depth or healing potential. A systematic literature search was performed using PubMed, EMBASE and the Cochrane Library. Two reviewers independently evaluated the methodological quality of included articles using an adapted version of the Consensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. A synthesis of evidence was performed to rate the measurement properties of each technique and to draw an overall conclusion on the quality of the techniques. Thirty-six articles were included, evaluating techniques classified as (1) laser Doppler techniques; (2) thermography or thermal imaging; (3) other measurement techniques. Strong evidence was found for adequate construct validity of laser Doppler imaging (LDI). Moderate evidence was found for adequate construct validity of thermography, videomicroscopy, and spatial frequency domain imaging (SFDI). Only two studies reported on the measurement property of reliability. Furthermore, considerable variation was observed among comparator instruments. Considering the available evidence, LDI currently appears to be the most favorable technique for assessing burn wound healing potential. Additional research is needed into thermography, videomicroscopy, and SFDI to evaluate their full potential. Future studies should focus on reliability and measurement error, and provide a precise description of the construct each technique aims to measure. Copyright © 2018 Elsevier Ltd and ISBI. All rights reserved.

  9. INTERCOMPARISON OF DPASV AND ISE FOR THE MEASUREMENT OF CU COMPLEXATION CHARACTERISTICS OF NOM IN FRESHWATER. (R825395)

    EPA Science Inventory

    Complexation by dissolved humic substances has an important influence on
    trace metal behavior in natural systems. Unfortunately, few analytical
    techniques are available with adequate sensitivity and selectivity to measure
    free metal ions reliably at the low concent...

  10. REVIEW OF SELECTED STATE-OF-THE-ART APPLICATIONS OF DIAGNOSTIC MEASUREMENTS FOR RADON MITIGATION PLANNING

    EPA Science Inventory

    Since late-1984, EPA's AEERL has supported a program to develop and demonstrate radon mitigation techniques for single-family detached dwellings. As part of the program, projects have been started that are directed at developing and demonstrating the use of diagnostic measurements in all ...

  11. Image processing for IMRT QA dosimetry.

    PubMed

    Zaini, Mehran R; Forest, Gary J; Loshek, David D

    2005-01-01

    We have automated the determination of the placement location of the dosimetry ion chamber within intensity-modulated radiotherapy (IMRT) fields, as part of streamlining the entire IMRT quality assurance process. This paper describes the mathematical image-processing techniques used to arrive at the appropriate measurement locations within the planar dose maps of the IMRT fields. A specific spot within the found region is identified based on its flatness, radiation magnitude, location, area, and the avoidance of the interleaf spaces. The techniques used include applying a Laplacian, dilation, erosion, region identification, and measurement point selection based on three parameters: the size of the erosion operator, the gradient, and the importance of the area of a region versus its magnitude. These three parameters are adjustable by the user. However, the first one requires tweaking only on extremely rare occasions, the gradient requires rare adjustments, and the last parameter needs occasional fine-tuning. This algorithm has been tested in over 50 cases. In about 5% of cases, the algorithm does not find a measurement point because of extremely steep and narrow regions within the fluence maps. In such cases, our code allows manual selection of a point, although this too is difficult, since the fluence map does not lend itself to an appropriate measurement point selection.
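
    The dilation and erosion steps named above are standard morphological operations; a minimal dependency-free sketch on an invented binary field (not an actual IMRT fluence map) shows how erosion shrinks a region toward its flat interior:

    ```python
    # Illustrative sketch (not the authors' code): binary dilation and erosion
    # with a 3x3 structuring element, the morphological steps used to shrink
    # noisy regions before picking a measurement point.

    def neighbors(r, c, rows, cols):
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    yield rr, cc

    def erode(img):
        """A pixel survives only if its whole 3x3 neighborhood is set."""
        rows, cols = len(img), len(img[0])
        return [[1 if all(img[rr][cc] for rr, cc in neighbors(r, c, rows, cols)) else 0
                 for c in range(cols)] for r in range(rows)]

    def dilate(img):
        """A pixel is set if any pixel in its 3x3 neighborhood is set."""
        rows, cols = len(img), len(img[0])
        return [[1 if any(img[rr][cc] for rr, cc in neighbors(r, c, rows, cols)) else 0
                 for c in range(cols)] for r in range(rows)]

    flat_region = [[0, 0, 0, 0, 0],
                   [0, 1, 1, 1, 0],
                   [0, 1, 1, 1, 0],
                   [0, 1, 1, 1, 0],
                   [0, 0, 0, 0, 0]]
    eroded = erode(flat_region)  # only the central pixel survives erosion
    ```

    Repeated erosion with a larger operator (the paper's first tunable parameter) leaves only the flattest, most interior candidates.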

  12. Film Balance Studies of Membrane Lipids and Related Molecules

    ERIC Educational Resources Information Center

    Cadenhead, D. A.

    1972-01-01

    Discusses apparatus, techniques, and measurements used to determine cell membrane composition. The use of a film balance to study monolayer membranes of selected lipids is described and results reported. (TS)

  13. A Fluorescence Recovery After Photobleaching (FRAP) Technique for the Measurement of Solute Transport Across Surfactant-Laden Interfaces

    NASA Technical Reports Server (NTRS)

    Browne, Edward P.; Hatton, T. Alan

    1996-01-01

    The technique of Fluorescence Recovery After Photobleaching (FRAP) has been applied to the measurement of interfacial transport in two-phase systems. FRAP exploits the loss of fluorescence exhibited by certain fluorophores when over-stimulated (photobleached): a two-phase system, originally at equilibrium, is perturbed by strong light from an argon-ion laser without disturbing the interface, and its recovery is monitored by a microscope-mounted CCD camera as it relaxes to a new equilibrium. During this relaxation, the concentration profiles of the probe solute are measured on both sides of the interface as a function of time, yielding information about the transport characteristics of the system. To minimize the size of the meniscus between the two phases, a photolithography technique is used to selectively treat the glass walls of the cell in which the phases are contained. This allows concentration measurements to be made very close to the interface and increases the sensitivity of the FRAP technique.

  14. Psychometric Evaluation of Lexical Diversity Indices: Assessing Length Effects.

    PubMed

    Fergadiotis, Gerasimos; Wright, Heather Harris; Green, Samuel B

    2015-06-01

    Several novel techniques have been developed recently to assess the breadth of a speaker's vocabulary exhibited in a language sample. The specific aim of this study was to increase our understanding of the validity of the scores generated by different lexical diversity (LD) estimation techniques. Four techniques were explored: D, Maas, measure of textual lexical diversity, and moving-average type-token ratio. Four LD indices were estimated for language samples on 4 discourse tasks (procedures, eventcasts, story retell, and recounts) from 442 adults who are neurologically intact. The resulting data were analyzed using structural equation modeling. The scores for measure of textual lexical diversity and moving-average type-token ratio were stronger indicators of the LD of the language samples. The results for the other 2 techniques were consistent with the presence of method factors representing construct-irrelevant sources. These findings offer a deeper understanding of the relative validity of the 4 estimation techniques and should assist clinicians and researchers in the selection of LD measures of language samples that minimize construct-irrelevant sources.
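
    The moving-average type-token ratio compared above can be reconstructed in a few lines (an illustrative sketch, not the study's code): slide a fixed window over the token sequence, take the type-token ratio in each window, and average. The window length here is tiny for demonstration; published work uses far longer windows (e.g. 50 tokens).

    ```python
    # Minimal sketch of the moving-average type-token ratio (MATTR), one of the
    # four lexical diversity indices the study compares. Window size is
    # illustrative only.

    def mattr(tokens, window=5):
        if len(tokens) < window:
            return len(set(tokens)) / len(tokens)          # fall back to plain TTR
        ratios = [len(set(tokens[i:i + window])) / window
                  for i in range(len(tokens) - window + 1)]
        return sum(ratios) / len(ratios)

    sample = "the dog saw the cat and the cat saw the dog".split()
    print(round(mattr(sample, window=5), 3))               # -> 0.8
    ```

    Because every window has the same length, MATTR avoids the text-length dependence that plagues the raw type-token ratio.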

  15. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
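
    The Kalman-filter machinery that the tuner selection feeds can be illustrated in heavily simplified scalar form (a toy sketch with invented numbers, not the paper's under-determined vector formulation):

    ```python
    # Toy sketch: a scalar Kalman filter tracking one constant health parameter
    # from noisy measurements. The paper's problem is the vector-valued,
    # under-determined case; this shows only the basic predict/update cycle.

    def kalman_1d(measurements, q=1e-4, r=0.5, x0=0.0, p0=1.0):
        x, p = x0, p0                  # state estimate and its variance
        estimates = []
        for z in measurements:
            p = p + q                  # predict: constant-state model plus process noise
            k = p / (p + r)            # Kalman gain balances model vs. measurement trust
            x = x + k * (z - x)        # update with the measurement residual
            p = (1.0 - k) * p
            estimates.append(x)
        return estimates

    noisy = [1.2, 0.9, 1.1, 1.05, 0.95, 1.0, 1.1, 0.9]   # true value ~1.0
    est = kalman_1d(noisy)             # the estimate settles near 1.0
    ```

    The ratio of process noise q to measurement noise r sets the filter's dynamic response, the quantity the paper's design steps adjust.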

  16. Combining active learning and semi-supervised learning techniques to extract protein interaction sentences.

    PubMed

    Song, Min; Yu, Hwanjo; Han, Wook-Shin

    2011-11-24

    Protein-protein interaction (PPI) extraction has been a focal point of much biomedical research and of database curation tools. Both active learning (AL) and semi-supervised SVMs (SSL) have recently been applied to extract PPIs automatically. In this paper, we explore combining AL with SSL to improve the performance of the PPI task. We propose a novel PPI extraction technique called PPISpotter, which combines deterministic annealing-based SSL with an AL technique to extract protein-protein interactions. In addition, we extract a comprehensive set of features from MEDLINE records by natural language processing (NLP) techniques, which further improve the SVM classifiers. In our feature selection technique, syntactic, semantic, and lexical properties of the text are incorporated into feature selection, which boosts system performance significantly. By conducting experiments with three different PPI corpora, we show that PPISpotter is superior to other techniques incorporated into semi-supervised SVMs, such as random sampling, clustering, and transductive SVMs, in terms of precision, recall, and F-measure. Our system is a novel, state-of-the-art technique for efficiently extracting protein-protein interaction pairs.
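
    The active-learning half of the approach can be sketched as pool-based uncertainty sampling, with a trivial 1D threshold model standing in for the paper's SVM (the data, oracle, and model here are all invented for illustration; PPISpotter itself operates on sentence feature vectors):

    ```python
    # Illustrative sketch of pool-based uncertainty sampling: repeatedly label
    # the unlabeled point the current model is least certain about.

    def fit_threshold(labeled):
        """Midpoint between class means: the boundary of a toy classifier."""
        pos = [x for x, y in labeled if y == 1]
        neg = [x for x, y in labeled if y == 0]
        return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

    def query_most_uncertain(pool, threshold):
        """Pick the unlabeled point closest to the boundary (least certain)."""
        return min(pool, key=lambda x: abs(x - threshold))

    labeled = [(-2.0, 0), (2.0, 1)]                 # tiny seed set
    pool = [-1.5, -0.2, 0.3, 1.8]                   # unlabeled pool
    oracle = lambda x: 1 if x > 0 else 0            # stands in for a human annotator

    for _ in range(2):                              # two rounds of active learning
        t = fit_threshold(labeled)
        x = query_most_uncertain(pool, t)
        pool.remove(x)
        labeled.append((x, oracle(x)))
    # the two boundary-adjacent points (-0.2 and 0.3) get labeled first
    ```

    Querying near the boundary concentrates annotation effort on the sentences that most change the classifier, which is what makes the AL+SSL combination sample-efficient.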

  17. An Examination of Exposure Control and Content Balancing Restrictions on Item Selection in CATs Using the Partial Credit Model.

    ERIC Educational Resources Information Center

    Davis, Laurie Laughlin; Pastor, Dena A.; Dodd, Barbara G.; Chiang, Claire; Fitzpatrick, Steven J.

    2003-01-01

    Examined the effectiveness of the Sympson-Hetter technique and rotated content balancing relative to no exposure control and no content rotation conditions in a computerized adaptive testing system based on the partial credit model. Simulation results show the Sympson-Hetter technique can be used with minimal impact on measurement precision,…
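
    The Sympson-Hetter step examined above can be sketched as a probabilistic gate between item selection and item administration (item ids and control parameters below are invented for illustration):

    ```python
    # Hedged sketch of Sympson-Hetter exposure control: an item chosen by the
    # CAT's selection rule is actually administered only with probability k_i;
    # otherwise the next-best item is tried.
    import random

    def administer(ranked_items, k, rng):
        """ranked_items: ids ordered by information; k: id -> P(administer | selected)."""
        for item in ranked_items:
            if rng.random() <= k[item]:
                return item
        return ranked_items[-1]                  # fallback if every check fails

    rng = random.Random(42)
    k = {"A": 0.2, "B": 0.9, "C": 1.0}           # "A" is over-exposed, so its k is low
    counts = {"A": 0, "B": 0, "C": 0}
    for _ in range(1000):
        counts[administer(["A", "B", "C"], k, rng)] += 1
    # "A" reaches examinees in only ~20% of selections despite always ranking first
    ```

    Because the passed-over item is replaced by the next most informative one, the cost in measurement precision is small, which is the simulation finding the abstract reports.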

  18. Simulation techniques for estimating error in the classification of normal patterns

    NASA Technical Reports Server (NTRS)

    Whitsitt, S. J.; Landgrebe, D. A.

    1974-01-01

    Methods of efficiently generating and classifying samples with specified multivariate normal distributions are discussed. Conservative confidence tables of sample sizes are given for selective sampling. Simulation results are compared with classified training data. Techniques for comparing the error and separability measures of two normal patterns are investigated and used to display the relationship between the error and the Chernoff bound.
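
    The simulation idea can be illustrated in one dimension (a sketch with invented parameters, not the authors' code): draw samples from two normal classes, classify with the midpoint threshold that is optimal for equal variances and priors, and estimate the error rate by counting mistakes.

    ```python
    # Monte Carlo error estimation for two univariate normal patterns. For
    # means 0 and 2 with unit variance, the true Bayes error is Phi(-1) ~ 0.159,
    # so the empirical estimate should land nearby.
    import random

    def estimate_error(mu0, mu1, sigma, n, seed=1):
        rng = random.Random(seed)
        boundary = (mu0 + mu1) / 2.0             # optimal threshold in this symmetric case
        errors = 0
        for _ in range(n):
            errors += rng.gauss(mu0, sigma) > boundary    # class-0 sample called class 1
            errors += rng.gauss(mu1, sigma) <= boundary   # class-1 sample called class 0
        return errors / (2 * n)

    err = estimate_error(0.0, 2.0, 1.0, 20000)   # lands near the Bayes error of ~0.159
    ```

    The gap between such empirical estimates and analytic bounds (e.g. the Chernoff bound) is exactly the relationship the report displays.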

  19. 3D model assisted fully automated scanning laser Doppler vibrometer measurements

    NASA Astrophysics Data System (ADS)

    Sels, Seppe; Ribbens, Bart; Bogaerts, Boris; Peeters, Jeroen; Vanlanduit, Steve

    2017-12-01

    In this paper, a new fully automated scanning laser Doppler vibrometer (LDV) measurement technique is presented. In contrast to existing scanning LDV techniques which use a 2D camera for the manual selection of sample points, we use a 3D Time-of-Flight camera in combination with a CAD file of the test object to automatically obtain measurements at pre-defined locations. The proposed procedure allows users to test prototypes in a shorter time because physical measurement locations are determined without user interaction. Another benefit of this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. The proposed method is illustrated with vibration measurements of an unmanned aerial vehicle.

  20. The application of compressive sampling in rapid ultrasonic computerized tomography (UCT) technique of steel tube slab (STS)

    PubMed Central

    Jiang, Baofeng; Jia, Pengjiao; Zhao, Wen; Wang, Wentao

    2018-01-01

    This paper explores a new method for rapid structural damage inspection of steel tube slab (STS) structures along randomly measured paths based on a combination of compressive sampling (CS) and ultrasonic computerized tomography (UCT). In the measurement stage, using fewer randomly selected paths rather than the whole measurement net is proposed to detect the underlying damage of a concrete-filled steel tube. In the imaging stage, the ℓ1-minimization algorithm is employed to recover the information of the microstructures based on the measurement data related to the internal situation of the STS structure. A numerical concrete tube model, with various levels of damage, was studied to demonstrate the performance of the rapid UCT technique. Real-world concrete-filled steel tubes in the Shenyang Metro stations were inspected using the proposed UCT technique in a CS framework. Both the numerical and experimental results show the rapid UCT technique has the capability of damage detection in an STS structure with a high level of accuracy and with fewer required measurements, which is more convenient and efficient than the traditional UCT technique. PMID:29293593
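
    The recovery stage can be sketched on a toy problem. The paper solves an ℓ1-minimization problem; for a dependency-free illustration we substitute greedy orthogonal matching pursuit (OMP), a different sparse-recovery algorithm that succeeds in the same easy noiseless cases. All matrix sizes and values below are invented.

    ```python
    # Toy compressive-sampling recovery: reconstruct a sparse "damage" vector
    # from fewer random measurements than unknowns, via OMP (a stand-in for
    # the paper's l1 solver).
    import random

    def matvec(A, x):
        return [sum(a * b for a, b in zip(row, x)) for row in A]

    def solve(M, b):
        """Gauss-Jordan elimination for a tiny dense linear system."""
        n = len(M)
        M = [row[:] + [bi] for row, bi in zip(M, b)]
        for i in range(n):
            p = max(range(i, n), key=lambda r: abs(M[r][i]))
            M[i], M[p] = M[p], M[i]
            for r in range(n):
                if r != i:
                    f = M[r][i] / M[i][i]
                    M[r] = [a - f * c for a, c in zip(M[r], M[i])]
        return [M[i][n] / M[i][i] for i in range(n)]

    def omp(A, y, k):
        """Recover a k-sparse x from y = A x by greedy column selection."""
        m, n = len(A), len(A[0])
        cols = [[A[i][j] for i in range(m)] for j in range(n)]
        support, coef, residual = [], [], y[:]
        for _ in range(k):
            j = max((j for j in range(n) if j not in support),
                    key=lambda j: abs(sum(c * r for c, r in zip(cols[j], residual))))
            support.append(j)
            gram = [[sum(cols[a][i] * cols[b][i] for i in range(m)) for b in support]
                    for a in support]
            rhs = [sum(cols[a][i] * y[i] for i in range(m)) for a in support]
            coef = solve(gram, rhs)              # least squares on the chosen columns
            fit = [sum(cf * cols[s][i] for cf, s in zip(coef, support)) for i in range(m)]
            residual = [yi - fi for yi, fi in zip(y, fit)]
        x = [0.0] * n
        for s, cf in zip(support, coef):
            x[s] = cf
        return x

    rng = random.Random(0)
    m, n = 20, 32                                # 20 "measurement paths", 32 unknowns
    A = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]
    x_true = [0.0] * n
    x_true[3], x_true[11] = 3.0, -2.0            # two "damaged" cells
    x_hat = omp(A, matvec(A, x_true), 2)
    ```

    Because damage is spatially sparse, far fewer ray paths than tomographic cells suffice, which is the source of the speed-up the paper reports.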

  1. Stream channel reference sites: An illustrated guide to field technique

    Treesearch

    Cheryl C Harrelson; C. L. Rawlins; John P. Potyondy

    1994-01-01

    This document is a guide to establishing permanent reference sites for gathering data about the physical characteristics of streams and rivers. The minimum procedure consists of the following: (1) select a site, (2) map the site and location, (3) measure the channel cross-section, (4) survey a longitudinal profile of the channel, (5) measure stream flow, (6) measure...

  2. Conducting field studies for testing pesticide leaching models

    USGS Publications Warehouse

    Smith, Charles N.; Parrish, Rudolph S.; Brown, David S.

    1990-01-01

    A variety of predictive models are being applied to evaluate the transport and transformation of pesticides in the environment. These include well known models such as the Pesticide Root Zone Model (PRZM), the Risk of Unsaturated-Saturated Transport and Transformation Interactions for Chemical Concentrations Model (RUSTIC) and the Groundwater Loading Effects of Agricultural Management Systems Model (GLEAMS). The potentially large impacts of using these models as tools for developing pesticide management strategies and regulatory decisions necessitate development of sound model validation protocols. This paper offers guidance on many of the theoretical and practical problems encountered in the design and implementation of field-scale model validation studies. Recommendations are provided for site selection and characterization, test compound selection, data needs, measurement techniques, statistical design considerations and sampling techniques. A strategy is provided for quantitatively testing models using field measurements.

  3. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images.

    PubMed

    Zweerink, Alwin; Allaart, Cornelis P; Kuijer, Joost P A; Wu, LiNa; Beek, Aernout M; van de Ven, Peter M; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick; van Rossum, Albert C; Nijveldt, Robin

    2017-12-01

    Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. • Myocardial strain analysis could potentially improve patient selection for CRT. • Currently a well validated clinical approach to derive segmental strains is lacking. • The novel SLICE technique derives segmental strains from standard CMR cine images. • SLICE-derived strain markers of CRT response showed close agreement with CMR-TAG. • Future studies will focus on the prognostic value of SLICE in CRT candidates.
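
    The core of the SLICE measure reduces to a simple relation: strain in each cine frame is the relative change of segment length between anatomical landmarks versus a reference frame. A minimal sketch (the lengths below are illustrative numbers, not patient data):

    ```python
    # Minimal sketch of the SLICE strain computation: strain_t = (L_t - L_ref) / L_ref
    # for the segment length L between anatomical landmarks in each cine frame.

    def slice_strain(segment_lengths, ref_index=0):
        l0 = segment_lengths[ref_index]
        return [(l - l0) / l0 for l in segment_lengths]

    lengths_mm = [52.0, 49.4, 46.8, 48.1, 52.0]   # one segment across the cardiac cycle
    strain = slice_strain(lengths_mm)
    # peak shortening here is (46.8 - 52.0) / 52.0 = -0.10, i.e. -10% strain
    ```

    Negative values indicate shortening; tracking this per segment over the cycle is what gets compared against CMR-TAG circumferential strain.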

  4. Undercooling measurement in a low-gravity containerless environment

    NASA Technical Reports Server (NTRS)

    Robinson, M. B.

    1981-01-01

    A technique is described for measuring the amount of undercooling for samples processed in a low-gravity containerless environment. The time of undercooling is determined by measuring the time of cooling before nucleation and recalescence by two infrared detectors. Once the cooling curve for each drop is calculated, the amount of undercooling can then be found. The technique is demonstrated by measuring the amount of undercooling for drops of pure niobium and select compositions of the niobium-germanium alloy system while free falling in a 32 m evacuated drop tube. The total hemispherical emissivities and specific heats for these materials were measured using a high-temperature containerless calorimeter. An overview of the effect of undercooling on drops of niobium and niobium-germanium is given.

  5. Data re-arranging techniques leading to proper variable selections in high energy physics

    NASA Astrophysics Data System (ADS)

    Kůs, Václav; Bouř, Petr

    2017-12-01

    We introduce a new data based approach to homogeneity testing and variable selection carried out in high energy physics experiments, where one of the basic tasks is to test the homogeneity of weighted samples, mainly the Monte Carlo simulations (weighted) and real data measurements (unweighted). This technique is called 'data re-arranging' and it enables variable selection performed by means of the classical statistical homogeneity tests such as Kolmogorov-Smirnov, Anderson-Darling, or Pearson's chi-square divergence test. P-values of our variants of homogeneity tests are investigated and the empirical verification through 46 dimensional high energy particle physics data sets is accomplished under newly proposed (equiprobable) quantile binning. Particularly, the procedure of homogeneity testing is applied to re-arranged Monte Carlo samples and real DATA sets measured at the particle accelerator Tevatron in Fermilab at the DØ experiment originating from top-antitop quark pair production in two decay channels (electron, muon) with 2, 3, or 4+ jets detected. Finally, the variable selections in the electron and muon channels induced by the re-arranging procedure for homogeneity testing are provided for Tevatron top-antitop quark data sets.
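
    The first of the homogeneity tests named above, the two-sample Kolmogorov-Smirnov statistic, is just the largest gap between two empirical CDFs. A plain unweighted sketch on invented data (the paper's contribution, re-arranging weighted samples, happens before this step):

    ```python
    # Two-sample Kolmogorov-Smirnov statistic: the maximum vertical distance
    # between the empirical CDFs of two samples. D = 0 for identical samples,
    # D = 1 for completely separated ones.

    def ks_statistic(xs, ys):
        points = sorted(set(xs) | set(ys))
        d = 0.0
        for t in points:
            fx = sum(1 for v in xs if v <= t) / len(xs)   # empirical CDF of sample 1
            fy = sum(1 for v in ys if v <= t) / len(ys)   # empirical CDF of sample 2
            d = max(d, abs(fx - fy))
        return d

    same = ks_statistic([1, 2, 3, 4], [1, 2, 3, 4])        # identical samples -> 0.0
    shift = ks_statistic([1, 2, 3, 4], [11, 12, 13, 14])   # disjoint samples -> 1.0
    ```

    Variables on which the Monte Carlo and data samples yield a small D (a large p-value) are the ones the re-arranging procedure retains as homogeneous.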

  6. The (Un)Certainty of Selectivity in Liquid Chromatography Tandem Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Berendsen, Bjorn J. A.; Stolker, Linda A. M.; Nielen, Michel W. F.

    2013-01-01

    We developed a procedure to determine the "identification power" of an LC-MS/MS method operated in the MRM acquisition mode, which is related to its selectivity. The probability of any compound showing the same precursor ion, product ions, and retention time as the compound of interest is used as a measure of selectivity. This is calculated based upon empirical models constructed from three very large compound databases. Based upon the final probability estimation, additional measures to assure unambiguous identification can be taken, like the selection of different or additional product ions. The reported procedure in combination with criteria for relative ion abundances results in a powerful technique to determine the (un)certainty of the selectivity of any LC-MS/MS analysis and thus the risk of false positive results. Furthermore, the procedure is very useful as a tool to validate method selectivity.

  7. Precision phase estimation based on weak-value amplification

    NASA Astrophysics Data System (ADS)

    Qiu, Xiaodong; Xie, Linguo; Liu, Xiong; Luo, Lan; Li, Zhaoxue; Zhang, Zhiyou; Du, Jinglei

    2017-02-01

    In this letter, we propose a precision method for phase estimation based on the weak-value amplification (WVA) technique using a monochromatic light source. The anomalous WVA significantly suppresses the technical noise with respect to the intensity difference signal induced by the phase delay when the post-selection procedure comes into play. The phase measurement precision of this method is proportional to the weak value of a polarization operator in the experimental range. Our results compete well with wide-spectrum-light weak-measurement phase estimation and outperform the standard homodyne phase-detection technique.

  8. The development of laser speckle velocimetry for the study of vortical flows

    NASA Technical Reports Server (NTRS)

    Krothapalli, A.

    1991-01-01

    A research program was undertaken to develop a new experimental technique, commonly known as particle image displacement velocimetry (PIDV), to measure an instantaneous two-dimensional velocity field in a selected plane of a flow field. This technique was successfully developed and applied to the study of several aerodynamic problems. A detailed description of the technique and a broad review of all the research activity carried out in this field are reported. A list of technical publications is also provided. The application of PIDV to unsteady flows with large-scale structures is demonstrated in a study of the temporal evolution of the flow past an impulsively started circular cylinder. The instantaneous two-dimensional flow in the transition region of a rectangular air jet was measured using PIDV and the details are presented. This experiment clearly demonstrates the PIDV capability in the measurement of turbulent flows. Preliminary experiments were also conducted to measure the instantaneous flow over a circular bump in a transonic flow. Several other experiments now routinely use PIDV as a non-intrusive measurement technique to obtain instantaneous two-dimensional velocity fields.

  9. Quasi-simultaneous Measurements of Ionic Currents by Vibrating Probe and pH Distribution by Ion-selective Microelectrode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isaacs, H.S.; Lamaka, S.V.; Taryba, M.

    2011-01-01

    This work reports a new methodology to measure quasi-simultaneously the local electric fields and the distribution of specific ions in a solution via selective microelectrodes. The field produced by the net electric current was detected using the scanning vibrating electrode technique (SVET) with quasi-simultaneous measurements of pH with an ion-selective microelectrode (pH-SME). The measurements were performed in a validation cell providing a 48 μm diameter Pt wire cross section as a source of electric current. The time lag between acquiring each current density and pH data-point was 1.5 s due to the response time of the pH-SME. The quasi-simultaneous SVET-pH measurements that correlate electrochemical oxidation-reduction processes with acid-base chemical equilibria are reported for the first time. No cross-talk between the vibrating microelectrode and the ion-selective microelectrode could be detected under the given experimental conditions.

  10. Measuring CAMD technique performance. 2. How "druglike" are drugs? Implications of Random test set selection exemplified using druglikeness classification models.

    PubMed

    Good, Andrew C; Hermsmeier, Mark A

    2007-01-01

    Research into the advancement of computer-aided molecular design (CAMD) has a tendency to focus on the discipline of algorithm development. Such efforts often come at the expense of the data set selection and analysis used in algorithm validation. Here we highlight the potential problems this can cause in the context of druglikeness classification. More rigorous efforts are applied to the selection of decoy (nondruglike) molecules from the ACD. Comparisons are made between model performance using the standard technique of random test set creation and test sets derived from explicit ontological separation by drug class. The dangers of viewing druglike space as sufficiently coherent to permit simple classification are highlighted. In addition, the issues inherent in applying unfiltered data and random test set selection to (Q)SAR models built on large and supposedly heterogeneous databases are discussed.
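
    The validation point at issue can be made concrete: a random split scatters each drug class across train and test, while an ontological split holds out whole classes, forcing the model to generalize to unseen chemistry. Class labels below are invented for the example:

    ```python
    # Sketch of the two test-set construction strategies the paper compares:
    # random splitting vs. ontological separation by (invented) drug class.
    import random

    def random_split(items, test_frac, seed=0):
        items = items[:]
        random.Random(seed).shuffle(items)
        cut = int(len(items) * test_frac)
        return items[cut:], items[:cut]           # train, test

    def ontological_split(items, held_out_classes):
        train = [(name, cls) for name, cls in items if cls not in held_out_classes]
        test = [(name, cls) for name, cls in items if cls in held_out_classes]
        return train, test

    drugs = [("d1", "antibiotic"), ("d2", "antibiotic"),
             ("d3", "antiviral"), ("d4", "antiviral"),
             ("d5", "analgesic"), ("d6", "analgesic")]
    train, test = ontological_split(drugs, {"antiviral"})
    # every antiviral is now in the test set, so the model never sees that class
    ```

    Performance measured on the random split flatters the model, because near-duplicates of test compounds sit in the training set; the ontological split exposes that optimism.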

  11. Pancreatic islet blood flow and its measurement

    PubMed Central

    Jansson, Leif; Barbu, Andreea; Bodin, Birgitta; Drott, Carl Johan; Espes, Daniel; Gao, Xiang; Grapensparr, Liza; Källskog, Örjan; Lau, Joey; Liljebäck, Hanna; Palm, Fredrik; Quach, My; Sandberg, Monica; Strömberg, Victoria; Ullsten, Sara; Carlsson, Per-Ola

    2016-01-01

    Pancreatic islets are richly vascularized, and islet blood vessels are uniquely adapted to maintain and support the internal milieu of the islets favoring normal endocrine function. Islet blood flow is normally very high compared with that to the exocrine pancreas and is autonomously regulated through complex interactions between the nervous system, metabolites from insulin secreting β-cells, endothelium-derived mediators, and hormones. The islet blood flow is normally coupled to the needs for insulin release and is usually disturbed during glucose intolerance and overt diabetes. The present review provides a brief background on islet vascular function and especially focuses on available techniques to measure islet blood perfusion. The gold standard for islet blood flow measurements in experimental animals is the microsphere technique, and its advantages and disadvantages will be discussed. In humans there are still no methods to measure islet blood flow selectively, but new developments in radiological techniques hold great promise for the future. PMID:27124642

  12. Digital Photography as a Tool to Measure School Cafeteria Consumption

    ERIC Educational Resources Information Center

    Swanson, Mark

    2008-01-01

    Background: Assessing actual consumption of school cafeteria meals presents challenges, given recall problems of children, the cost of direct observation, and the time constraints in the school cafeteria setting. This study assesses the use of digital photography as a technique to measure what elementary-aged students select and actually consume…

  13. Recent Advances in Measurement and Dietary Mitigation of Enteric Methane Emissions in Ruminants

    PubMed Central

    Patra, Amlan K.

    2016-01-01

    Methane (CH4) emission, which is mainly produced during normal fermentation of feeds by the rumen microorganisms, represents a major contributor to greenhouse gas (GHG) emissions. Several enteric CH4 mitigation technologies have been explored recently. A number of new techniques have also been developed and existing techniques improved in order to evaluate CH4 mitigation technologies and prepare precise inventories of GHG emissions. The aim of this review is to discuss different CH4 measuring and mitigation technologies that have been recently developed. The respiration chamber technique is still considered the gold standard due to its greater precision and reproducibility in CH4 measurements. With the adoption of recent recommendations for improving the technique, the SF6 method can be used with a level of precision similar to the chamber technique. Short-term CH4 measurement techniques generally introduce considerable within- and between-animal variation. Among the short-term measuring techniques, Greenfeed and methane hood systems are likely more suitable for evaluation of CH4 mitigation studies, if measurements can be obtained at different times of the day relative to the diurnal cycle of CH4 production. Carbon dioxide to CH4 ratio, sniffer, and other short-term breath analysis techniques are more suitable for on-farm screening of large numbers of animals to identify low CH4-producing animals for genetic selection purposes. Different indirect measuring techniques have also been investigated in recent years. Several new dietary CH4 mitigation technologies have been explored, but only a few of them are practical and cost-effective. Future research should be directed toward both medium- and long-term mitigation strategies, which could be utilized on farms to accomplish substantial reductions of CH4 emissions and to profitably reduce the carbon footprint of livestock production systems. 
This review presents recent developments in, and a critical analysis of, techniques for measuring and dietary strategies for mitigating enteric CH4 emissions. PMID:27243027

  14. Recent Advances in Measurement and Dietary Mitigation of Enteric Methane Emissions in Ruminants.

    PubMed

    Patra, Amlan K

    2016-01-01

    Methane (CH4) emission, produced mainly during normal fermentation of feeds by rumen microorganisms, is a major contributor to greenhouse gas (GHG) emissions. Several enteric CH4 mitigation technologies have been explored recently. A number of new techniques have also been developed, and existing techniques improved, to evaluate CH4 mitigation technologies and to prepare precise inventories of GHG emissions. The aim of this review is to discuss recently developed CH4 measurement and mitigation technologies. The respiration chamber technique is still considered the gold standard owing to its precision and reproducibility in CH4 measurements. With the adoption of recent recommendations for improving the technique, the SF6 method can be used with a level of precision similar to that of the chamber technique. Short-term CH4 measurement techniques generally show considerable within- and between-animal variation. Among the short-term techniques, the GreenFeed and methane hood systems are likely more suitable for evaluating CH4 mitigation studies, provided measurements can be obtained at different times of the day relative to the diurnal cycle of CH4 production. The carbon dioxide to CH4 ratio, sniffer, and other short-term breath analysis techniques are more suitable for on-farm screening of large numbers of animals to identify low CH4-producing animals for genetic selection. Several indirect measurement techniques have also been investigated in recent years. A number of new dietary CH4 mitigation technologies have been explored, but only a few are practical and cost-effective. Future research should be directed toward medium- and long-term mitigation strategies that can be deployed on farms to accomplish substantial reductions in CH4 emissions and to profitably reduce the carbon footprint of livestock production systems.
This review presents recent developments in, and a critical analysis of, techniques for the measurement and dietary mitigation of enteric CH4 emissions.

  15. Fabrication of thermal-resistant gratings for high-temperature measurements using geometric phase analysis.

    PubMed

    Zhang, Q; Liu, Z; Xie, H; Ma, K; Wu, L

    2016-12-01

    Grating fabrication techniques are crucial to the success of grating-based deformation measurement methods because the quality of the grating directly affects the measurement results. Deformation measurements at high temperatures entail heating, which may oxidize the grating, and the contrast of the grating lines may change during the heating process. The thermal resistance of the grating is therefore a major concern before taking measurements. This study proposes a method that combines a laser-engraving technique with particle spraying and sintering to fabricate thermal-resistant gratings. The grating fabrication technique is introduced and discussed in detail. A numerical simulation with geometric phase analysis (GPA) is performed for a homogeneous deformation case, and a scheme for selecting the grating pitch is then suggested. The validity of the proposed technique is verified by fabricating a thermal-resistant grating on a ZrO2 specimen and measuring its thermal strain at high temperatures (up to 1300 °C). Images of the grating before and after deformation are used to obtain the thermal-strain field by GPA and to compare the results with well-established reference data. The experimental results indicate that the proposed technique is feasible and offers good prospects for further applications.
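The one-dimensional core of geometric phase analysis can be sketched as follows: extract the local phase of the grating line profile via the analytic signal, fit its slope to obtain the carrier (spatial) frequency, and read the strain off the change in effective pitch. The pitch, sampling step, and applied strain below are hypothetical illustration values, not taken from the paper.

```python
import numpy as np

def carrier_frequency(signal, dx):
    """Local spatial frequency (rad per unit length) of a grating line
    profile, from the phase of the analytic signal -- the 1-D core of
    geometric phase analysis (GPA)."""
    n = len(signal)
    spec = np.fft.fft(signal - signal.mean())
    h = np.zeros(n)                      # one-sided spectrum weights
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(spec * h)
    phase = np.unwrap(np.angle(analytic))
    x = np.arange(n) * dx
    lo, hi = n // 8, n - n // 8          # discard edge artefacts
    return np.polyfit(x[lo:hi], phase[lo:hi], 1)[0]

# hypothetical 10 um pitch grating sampled at 0.5 um, 0.2% applied strain
pitch, dx, strain_true = 10.0, 0.5, 0.002
x = np.arange(4096) * dx
ref = np.cos(2 * np.pi * x / pitch)
deformed = np.cos(2 * np.pi * x / (pitch * (1 + strain_true)))
k_ref = carrier_frequency(ref, dx)
k_def = carrier_frequency(deformed, dx)
strain = k_ref / k_def - 1.0             # pitch_def / pitch_ref - 1
print(f"recovered strain: {strain:.4f}")
```

In the paper's 2-D setting the same idea is applied to images before and after heating; this sketch only shows why a phase change maps directly to strain.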

  16. Automated validation of a computer operating system

    NASA Technical Reports Server (NTRS)

    Dervage, M. M.; Milberg, B. A.

    1970-01-01

    Programs apply selected input/output loads to complex computer operating system and measure performance of that system under such loads. Technique lends itself to checkout of computer software designed to monitor automated complex industrial systems.

  17. Visible near-diffraction-limited lucky imaging with full-sky laser-assisted adaptive optics

    NASA Astrophysics Data System (ADS)

    Basden, A. G.

    2014-08-01

    Both lucky imaging techniques and adaptive optics require natural guide stars, limiting sky-coverage, even when laser guide stars are used. Lucky imaging techniques become less successful on larger telescopes unless adaptive optics is used, as the fraction of images obtained with well-behaved turbulence across the whole telescope pupil becomes vanishingly small. Here, we introduce a technique combining lucky imaging techniques with tomographic laser guide star adaptive optics systems on large telescopes. This technique does not require any natural guide star for the adaptive optics, and hence offers full sky-coverage adaptive optics correction. In addition, we introduce a new method for lucky image selection based on residual wavefront phase measurements from the adaptive optics wavefront sensors. We perform Monte Carlo modelling of this technique, and demonstrate I-band Strehl ratios of up to 35 per cent in 0.7 arcsec mean seeing conditions with 0.5 m deformable mirror pitch and full adaptive optics sky-coverage. We show that this technique is suitable for use with lucky imaging reference stars as faint as magnitude 18, and fainter if more advanced image selection and centring techniques are used.
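The proposed selection criterion, ranking short exposures by the residual wavefront phase reported by the AO wavefront sensors rather than by image sharpness, can be illustrated with a toy sketch. The phase maps are random stand-ins and the 10 per cent selection fraction is a hypothetical choice:

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical data: 200 short exposures, each with a residual-phase
# map from the AO wavefront sensors (values in radians)
n_frames, n_sub = 200, 32
residual_phase = rng.normal(0.0, 1.0, (n_frames, n_sub, n_sub)) \
    * rng.uniform(0.5, 2.0, (n_frames, 1, 1))

# score each frame by RMS residual phase; keep the best 10 per cent
rms = np.sqrt((residual_phase ** 2).mean(axis=(1, 2)))
keep = int(0.10 * n_frames)
selected = np.argsort(rms)[:keep]

print(f"kept {len(selected)} frames, best RMS {rms[selected[0]]:.2f} rad")
```

In a real pipeline the selected exposures would then be recentred and co-added; the point here is only the sensor-based ranking, which needs no bright natural reference star.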

  18. Optimal Electrode Selection for Electrical Resistance Tomography in Carbon Fiber Reinforced Polymer Composites

    PubMed Central

    Escalona Galvis, Luis Waldo; Diaz-Montiel, Paulina; Venkataraman, Satchi

    2017-01-01

    Electrical Resistance Tomography (ERT) offers a non-destructive evaluation (NDE) technique that takes advantage of the inherent electrical properties in carbon fiber reinforced polymer (CFRP) composites for internal damage characterization. This paper investigates a method of optimum selection of sensing configurations for delamination detection in thick cross-ply laminates using ERT. Reduction in the number of sensing locations and measurements is necessary to minimize hardware and computational effort. The present work explores the use of an effective independence (EI) measure originally proposed for sensor location optimization in experimental vibration modal analysis. The EI measure is used for selecting the minimum set of resistance measurements among all possible combinations resulting from selecting sensing electrode pairs. Singular Value Decomposition (SVD) is applied to obtain a spectral representation of the resistance measurements in the laminate for subsequent EI based reduction to take place. The electrical potential field in a CFRP laminate is calculated using finite element analysis (FEA) applied on models for two different laminate layouts considering a set of specified delamination sizes and locations with two different sensing arrangements. The effectiveness of the EI measure in eliminating redundant electrode pairs is demonstrated by performing inverse identification of damage using the full set and the reduced set of resistance measurements. This investigation shows that the EI measure is effective for optimally selecting the electrode pairs needed for resistance measurements in ERT based damage detection. PMID:28772485
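A minimal sketch of the effective independence (EI) reduction described above, assuming the candidate resistance measurements have already been compressed to a few spectral vectors by SVD. The matrix sizes and random test data are illustrative only:

```python
import numpy as np

def effective_independence(phi, n_keep):
    """Iteratively drop the candidate measurement (row of phi) that
    contributes least to the linear independence of the target vectors.

    phi : (n_candidates, n_modes) matrix, e.g. leading left singular
          vectors of the candidate measurement matrix after SVD.
    """
    idx = list(range(phi.shape[0]))
    while len(idx) > n_keep:
        a = phi[idx]
        # EI_i is the leverage of row i: diag(A (A^T A)^-1 A^T) = sum of
        # squared rows of Q from the thin QR factorization A = QR
        q, _ = np.linalg.qr(a)
        ei = (q ** 2).sum(axis=1)
        idx.pop(int(np.argmin(ei)))
    return idx

rng = np.random.default_rng(0)
candidates = rng.normal(size=(40, 5))   # 40 electrode pairs, 5 modes
kept = effective_independence(candidates, 8)
print("retained measurement indices:", kept)
```

The greedy one-at-a-time removal is the standard EI procedure from vibration sensor placement; applying it to electrode-pair resistance measurements, as the paper does, only changes what the rows represent.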

  19. Optimal Electrode Selection for Electrical Resistance Tomography in Carbon Fiber Reinforced Polymer Composites.

    PubMed

    Escalona Galvis, Luis Waldo; Diaz-Montiel, Paulina; Venkataraman, Satchi

    2017-02-04

    Electrical Resistance Tomography (ERT) offers a non-destructive evaluation (NDE) technique that takes advantage of the inherent electrical properties in carbon fiber reinforced polymer (CFRP) composites for internal damage characterization. This paper investigates a method of optimum selection of sensing configurations for delamination detection in thick cross-ply laminates using ERT. Reduction in the number of sensing locations and measurements is necessary to minimize hardware and computational effort. The present work explores the use of an effective independence (EI) measure originally proposed for sensor location optimization in experimental vibration modal analysis. The EI measure is used for selecting the minimum set of resistance measurements among all possible combinations resulting from selecting sensing electrode pairs. Singular Value Decomposition (SVD) is applied to obtain a spectral representation of the resistance measurements in the laminate for subsequent EI based reduction to take place. The electrical potential field in a CFRP laminate is calculated using finite element analysis (FEA) applied on models for two different laminate layouts considering a set of specified delamination sizes and locations with two different sensing arrangements. The effectiveness of the EI measure in eliminating redundant electrode pairs is demonstrated by performing inverse identification of damage using the full set and the reduced set of resistance measurements. This investigation shows that the EI measure is effective for optimally selecting the electrode pairs needed for resistance measurements in ERT based damage detection.

  20. Guidance on Nanomaterial Hazards and Risks

    DTIC Science & Technology

    2015-05-21

    and at room temperature and 37 °C - solid separation by centrifugation, filtration, or chemical techniques (more experimental techniques combining...members in this potency sequence using bolus in vivo testing, verify the bolus results with selective inhalation testing. The potency of members of...measures in in vitro and limited in vivo experimental systems would facilitate the characterization of dose-response relationships across a set of ENMs

  1. Relative Utility of Selected Software Requirement Metrics

    DTIC Science & Technology

    1991-12-01

    testing. They can also help in deciding if and how to use complexity reduction techniques. In summary, requirement metrics can be useful because they...answer items in a test instrument. In order to differentiate between misinterpretation and comprehension, the measurement technique must be able to...effectively test a requirement, it is verifiable. Ramamoorthy and others have proposed requirements complexity metrics that can be used to infer the

  2. Application of HFCT and UHF Sensors in On-Line Partial Discharge Measurements for Insulation Diagnosis of High Voltage Equipment

    PubMed Central

    Álvarez, Fernando; Garnacho, Fernando; Ortego, Javier; Sánchez-Urán, Miguel Ángel

    2015-01-01

    Partial discharge (PD) measurements provide valuable information for assessing the condition of high voltage (HV) insulation systems, contributing to their quality assurance. Various PD measuring techniques specially designed for on-line measurements have been developed in recent years. Non-conventional PD methods operating in high frequency bands are usually used when these tests are carried out. In PD measurements, the signal acquisition, the subsequent signal processing, and the capability to obtain an accurate diagnosis are conditioned by the selection of a suitable detection technique and by the implementation of effective signal processing tools. This paper proposes an optimized electromagnetic detection method based on the combined use of wideband PD sensors for measurements performed in the HF and UHF frequency ranges, together with the implementation of powerful processing tools. The effectiveness of the proposed measuring techniques is demonstrated through an example in which several PD sources are measured simultaneously in an HV installation consisting of a cable system connected by a plug-in terminal to a gas insulated substation (GIS) compartment. PMID:25815452

  3. Evaluation of analysis techniques for low frequency interior noise and vibration of commercial aircraft

    NASA Technical Reports Server (NTRS)

    Landmann, A. E.; Tillema, H. F.; Marshall, S. E.

    1989-01-01

    The application of selected analysis techniques to low frequency cabin noise associated with advanced propeller engine installations is evaluated. Three design analysis techniques were chosen for evaluation: finite element analysis, statistical energy analysis (SEA), and a power flow method using elements of SEA (the Propeller Aircraft Interior Noise computer program). An overview of the three procedures is provided. Data from tests of a 727 airplane (modified to accept a propeller engine) were used to compare with predictions. Comparisons of predicted and measured levels at the end of the first year's effort showed reasonable agreement, leading to the conclusion that each technique had value for propeller engine noise predictions on large commercial transports. However, variations in agreement were large enough to remain cautious and to lead to recommendations for further work with each technique. Assessment of the second year's results leads to the conclusion that the selected techniques can accurately predict trends and can be useful to a designer, but that absolute level predictions remain unreliable due to the complexity of the aircraft structure and low modal densities.

  4. Method of detecting system function by measuring frequency response

    DOEpatents

    Morrison, John L.; Morrison, William H.; Christophersen, Jon P.; Motloch, Chester G.

    2013-01-08

    Methods of rapidly measuring an impedance spectrum of an energy storage device in-situ over a limited number of logarithmically distributed frequencies are described. An energy storage device is excited with a known input signal, and a response is measured to ascertain the impedance spectrum. An excitation signal is a limited time duration sum-of-sines consisting of a select number of frequencies. In one embodiment, magnitude and phase of each frequency of interest within the sum-of-sines is identified when the selected frequencies and sample rate are logarithmic integer steps greater than two. This technique requires a measurement with a duration of one period of the lowest frequency. In another embodiment, where selected frequencies are distributed in octave steps, the impedance spectrum can be determined using a captured time record that is reduced to a half-period of the lowest frequency.
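A simplified illustration of the sum-of-sines idea: excite with octave-spaced frequencies, capture one period of the lowest frequency, and recover magnitude and phase at each line by synchronous demodulation. The device response here is a hypothetical stand-in (fixed attenuation and phase shift per line), not a model of an actual energy storage cell:

```python
import numpy as np

# hypothetical setup: octave-spaced frequencies, one period of the
# lowest frequency captured, synchronous demodulation per line
f0, n_octaves, fs = 0.1, 4, 1000.0          # Hz, line count, samples/s
freqs = f0 * 2.0 ** np.arange(n_octaves)    # 0.1, 0.2, 0.4, 0.8 Hz
t = np.arange(int(fs / f0)) / fs            # one period of f0

excitation = sum(np.sin(2 * np.pi * f * t) for f in freqs)

# stand-in "device": attenuates and phase-shifts each frequency line
response = sum(0.5 / (1 + i) * np.sin(2 * np.pi * f * t - 0.1 * (1 + i))
               for i, f in enumerate(freqs))

for i, f in enumerate(freqs):
    s = np.sin(2 * np.pi * f * t)
    c = np.cos(2 * np.pi * f * t)
    a = 2 * np.mean(response * s)   # in-phase component
    b = 2 * np.mean(response * c)   # quadrature component
    mag, phase = np.hypot(a, b), np.arctan2(b, a)
    print(f"{f:.1f} Hz: magnitude ~ {mag:.3f}, phase ~ {phase:.3f} rad")
```

Because every line completes an integer number of cycles over one period of the lowest frequency, the per-frequency correlations are orthogonal and each magnitude/phase pair is recovered cleanly, which is the property the patent's measurement duration exploits.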

  5. Adaptive selection of diurnal minimum variation: a statistical strategy to obtain representative atmospheric CO2 data and its application to European elevated mountain stations

    NASA Astrophysics Data System (ADS)

    Yuan, Ye; Ries, Ludwig; Petermeier, Hannes; Steinbacher, Martin; Gómez-Peláez, Angel J.; Leuenberger, Markus C.; Schumacher, Marcus; Trickl, Thomas; Couret, Cedric; Meinhardt, Frank; Menzel, Annette

    2018-03-01

    Critical data selection is essential for determining representative baseline levels of atmospheric trace gases even at remote measurement sites. Different data selection techniques have been used around the world, which could potentially lead to reduced compatibility when comparing data from different stations. This paper presents a novel statistical data selection method named adaptive diurnal minimum variation selection (ADVS) based on CO2 diurnal patterns typically occurring at elevated mountain stations. Its capability and applicability were studied on records of atmospheric CO2 observations at six Global Atmosphere Watch stations in Europe, namely, Zugspitze-Schneefernerhaus (Germany), Sonnblick (Austria), Jungfraujoch (Switzerland), Izaña (Spain), Schauinsland (Germany), and Hohenpeissenberg (Germany). Three other frequently applied statistical data selection methods were included for comparison. Among the studied methods, our ADVS method resulted in a lower fraction of data selected as a baseline with lower maxima during winter and higher minima during summer in the selected data. The measured time series were analyzed for long-term trends and seasonality by a seasonal-trend decomposition technique. In contrast to unselected data, mean annual growth rates of all selected datasets were not significantly different among the sites, except for the data recorded at Schauinsland. However, clear differences were found in the annual amplitudes as well as the seasonal time structure. Based on a pairwise analysis of correlations between stations on the seasonal-trend decomposed components by statistical data selection, we conclude that the baseline identified by the ADVS method is a better representation of lower free tropospheric (LFT) conditions than baselines identified by the other methods.
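The flavour of diurnal-minimum selection can be conveyed with a much-simplified sketch. This is not the full ADVS algorithm, which adapts its selection window; the fixed daytime window, stability threshold, and data below are all hypothetical:

```python
import datetime as dt
import numpy as np

def select_stable_window(times, co2, start_hour=11, end_hour=17,
                         max_std=0.5):
    """Keep one day's data only if the values inside a fixed daytime
    window (around the diurnal minimum at mountain sites) are stable,
    i.e. their standard deviation is below max_std ppm."""
    times = np.asarray(times)
    co2 = np.asarray(co2)
    hours = np.array([t.hour for t in times])
    days = np.array([t.date() for t in times])
    in_win = (hours >= start_hour) & (hours < end_hour)
    keep = np.zeros(len(co2), dtype=bool)
    for day in np.unique(days):
        mask = (days == day) & in_win
        if mask.any() and co2[mask].std() <= max_std:
            keep |= mask
    return keep

# hypothetical hourly record: two days, the second perturbed in the
# afternoon by local sources and therefore rejected
t0 = dt.datetime(2018, 3, 1)
times = [t0 + dt.timedelta(hours=h) for h in range(48)]
co2 = np.full(48, 405.0)
co2[24 + 12: 24 + 17] += np.array([3.0, 5.0, 2.0, 4.0, 1.0])
keep = select_stable_window(times, co2)
print(f"selected {keep.sum()} of {len(co2)} hourly values")
```

The stability criterion is what makes the selected subset a plausible baseline: well-mixed free tropospheric air varies slowly, while local contamination shows up as short-term variance.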

  6. Longitudinal MR cortical thinning of individuals and its correlation with PET metabolic reduction: a measurement consistency and correctness studies

    NASA Astrophysics Data System (ADS)

    Lin, Zhongmin S.; Avinash, Gopal; McMillan, Kathryn; Yan, Litao; Minoshima, Satoshi

    2014-03-01

    Cortical thinning and metabolic reduction are potential imaging biomarkers for Alzheimer's disease (AD) diagnosis and monitoring. Many techniques have been developed for cortical measurement and are widely used in clinical statistical studies. However, the measurement consistency for individuals, an essential requirement for a clinically useful technique, requires further investigation. Here we leverage our previously developed BSIM technique to measure cortical thickness and thinning and use it with longitudinal MRI from ADNI to investigate measurement consistency and spatial resolution. Ten normal, 10 MCI, and 10 AD subjects in their 70s were selected for the study. Consistent cortical thinning patterns were observed in all baseline and follow-up images. Rapid cortical thinning was shown in some MCI and AD cases. To evaluate the correctness of the cortical measurement, we compared longitudinal cortical thinning with clinical diagnosis and with longitudinal PET metabolic reduction measured using the 3D-SSP technique for the same person. Longitudinal MR cortical thinning and the corresponding PET metabolic reduction showed a high level of pattern similarity, revealing correlations worthy of further study. Severe cortical thinning that might link to disease conversion from MCI to AD was observed in two cases. In summary, our results suggest that consistent cortical measurements using our technique may provide a means for clinical diagnosis and monitoring at the individual patient level, and that MR cortical thinning measurement can complement PET metabolic reduction measurement.

  7. A measurement of the mass of the top quark using the ideogram technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Houben, Pieter Willem Huib

    2009-06-03

    This thesis describes a measurement of the mass of the top quark on data collected with the D0 detector at the Tevatron collider in the period from 2002 until 2006. The first chapter describes the Standard Model and the prominent role of the top quark mass. The second chapter gives a description of the D0 detector which is used for this measurement. After the p$\bar{p}$ collisions have been recorded, reconstruction of physics objects is required, which is described in Chapter 3. Chapter 4 describes how the interesting collisions in which top quarks are produced are separated from the 'uninteresting' ones with a set of selection criteria. The method to extract the top quark mass from the sample of selected collisions (also called events), which is based on the ideogram technique, is explained in Chapter 5, followed in Chapter 6 by the description of the calibration of the method using simulation of our most precise knowledge of nature. Chapter 7 shows the result of the measurement together with some cross checks and an estimation of the uncertainty on this measurement. This thesis concludes with a constraint on the Higgs boson mass.

  8. The Evolving Field of Wound Measurement Techniques: A Literature Review.

    PubMed

    Khoo, Rachel; Jansen, Shirley

    2016-06-01

    Wound healing is a complex and multifactorial process that requires a multidisciplinary approach. Methods of wound measurement have been developed and continually refined with the purpose of ensuring precision in wound measurement and documentation as the primary indicator of healing. This review aims to ascertain the efficacies of current wound area measurement techniques, and to highlight any perceived gaps in the literature so as to develop suggestions for future studies and practice. Medline, PubMed, CliniKey, and CINAHL were searched using the terms "wound/ulcer measurement techniques," "wound assessment," "digital planimetry," and "structured light." Articles between 2000 and 2014 were selected, and secondary searches were carried out by examining the references of relevant articles. Only papers written in English were included. A universal, standardized method of wound assessment has not been established or proposed. At present, techniques range from the simple to the more complex, most of which have characteristics that allow for applicability in both rural and urban settings. Techniques covered are: ruler measurements, acetate tracings/contact planimetry, digital planimetry, and structured light devices. Conclusion: In reviewing the literature, the precision and reliability of digital planimetry over the more conventional methods of ruler measurements and acetate tracings are consistently demonstrated. The advent and utility of the laser or structured light approach, however, is promising, has only been analyzed by a few, and opens up the scope for further evaluation of this technique.
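Digital planimetry ultimately reduces to computing the area enclosed by a traced wound boundary. A minimal sketch using the shoelace formula, with a hypothetical pixel-to-centimetre scale:

```python
def polygon_area_cm2(points, scale_cm_per_px=0.05):
    """Area enclosed by a traced wound boundary via the shoelace
    formula -- the computation underlying digital planimetry.

    points: list of (x, y) pixel coordinates in tracing order.
    """
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]   # wrap around to close the loop
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0 * scale_cm_per_px ** 2

# hypothetical tracing: a 40 px x 20 px rectangle at 0.05 cm per pixel
trace = [(0, 0), (40, 0), (40, 20), (0, 20)]
area = polygon_area_cm2(trace)
print(f"{area:.2f} cm^2")   # 800 px^2 at 0.0025 cm^2/px^2 -> 2.00 cm^2
```

Real planimetry software uses many more boundary points and a calibrated scale, but the area computation is the same; the structured-light devices mentioned in the review extend this to surface area over a 3-D reconstruction.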

  9. A 100-Year Review: Identification and genetic selection of economically important traits in dairy cattle.

    PubMed

    Miglior, Filippo; Fleming, Allison; Malchiodi, Francesca; Brito, Luiz F; Martin, Pauline; Baes, Christine F

    2017-12-01

    Over the past 100 yr, the range of traits considered for genetic selection in dairy cattle populations has progressed to meet the demands of both industry and society. At the turn of the 20th century, dairy farmers were interested in increasing milk production; however, a systematic strategy for selection was not available. Organized milk performance recording took shape, followed quickly by conformation scoring. Methodological advances in both genetic theory and statistics around the middle of the century, together with technological innovations in computing, paved the way for powerful multitrait analyses. As more sophisticated analytical techniques for traits were developed and incorporated into selection programs, production began to increase rapidly, and the wheels of genetic progress began to turn. By the end of the century, the focus of selection had moved away from being purely production oriented toward a more balanced breeding goal. This shift occurred partly due to increasing health and fertility issues and partly due to societal pressure and welfare concerns. Traits encompassing longevity, fertility, calving, health, and workability have now been integrated into selection indices. Current research focuses on fitness, health, welfare, milk quality, and environmental sustainability, underlying the concentrated emphasis on a more comprehensive breeding goal. In the future, on-farm sensors, data loggers, precision measurement techniques, and other technological aids will provide even more data for use in selection, and the difficulty will lie not in measuring phenotypes but rather in choosing which traits to select for. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
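A total merit index of the kind described is, at its simplest, a weighted sum of estimated breeding values (EBVs). The traits, economic weights, and EBVs below are hypothetical illustration values, not any published national index:

```python
# hypothetical three-trait index: economic weights applied to
# standardized estimated breeding values (EBVs)
weights = {"milk_yield": 0.40, "fertility": 0.35, "longevity": 0.25}

def selection_index(ebv):
    """Total merit as a weighted sum of standardized EBVs."""
    return sum(weights[trait] * ebv[trait] for trait in weights)

bulls = {
    "A": {"milk_yield": 1.8, "fertility": -0.2, "longevity": 0.5},
    "B": {"milk_yield": 0.6, "fertility": 1.1, "longevity": 1.0},
}
ranked = sorted(bulls, key=lambda b: selection_index(bulls[b]),
                reverse=True)
print("ranking:", ranked)   # the balanced bull B outranks the
                            # high-production bull A
```

Shifting weight from production toward fertility and longevity, as the review describes, changes only the `weights` vector, which is exactly how breeding goals were rebalanced in practice.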

  10. Design, fabrication and testing of hierarchical micro-optical structures and systems

    NASA Astrophysics Data System (ADS)

    Cannistra, Aaron Thomas

    Micro-optical systems are becoming essential components in imaging, sensing, communications, computing, and other applications. Optically based designs are replacing electronic, chemical and mechanical systems for a variety of reasons, including low power consumption, reduced maintenance, and faster operation. However, as the number and variety of applications increases, micro-optical system designs are becoming smaller, more integrated, and more complicated. Micro and nano-optical systems found in nature, such as the imaging systems found in many insects and crustaceans, can have highly integrated optical structures that vary in size by orders of magnitude. These systems incorporate components such as compound lenses, anti-reflective lens surface structuring, spectral filters, and polarization selective elements. For animals, these hybrid optical systems capable of many optical functions in a compact package have been repeatedly selected during the evolutionary process. Understanding the advantages of these designs gives motivation for synthetic optical systems with comparable functionality. However, alternative fabrication methods that deviate from conventional processes are needed to create such systems. Further complicating the issue, the resulting device geometry may not be readily compatible with existing measurement techniques. This dissertation explores several nontraditional fabrication techniques for optical components with hierarchical geometries and measurement techniques to evaluate performance of such components. A micro-transfer molding process is found to produce high-fidelity micro-optical structures and is used to fabricate a spectral filter on a curved surface. By using a custom measurement setup we demonstrate that the spectral filter retains functionality despite the nontraditional geometry. A compound lens is fabricated using similar fabrication techniques and the imaging performance is analyzed. 
A spray coating technique for photoresist application to curved surfaces combined with interference lithography is also investigated. Using this technique, we generate polarizers on curved surfaces and measure their performance. This work furthers an understanding of how combining multiple optical components affects the performance of each component, the final integrated devices, and leads towards realization of biomimetically inspired imaging systems.

  11. Determining Data Quality for the NOvA Experiment

    NASA Astrophysics Data System (ADS)

    Murphy, Ryan; NOvA Collaboration Collaboration

    2016-03-01

    NOvA is a long-baseline neutrino oscillation experiment with two liquid scintillator filled tracking calorimeter detectors separated by 809 km. The detectors are located 14.6 milliradians off-axis of Fermilab's NuMI beam. The NOvA experiment is designed to measure the rate of electron-neutrino appearance out of the almost-pure muon-neutrino NuMI beam, with the data measured at the Near Detector being used to accurately determine the expected rate at the Far Detector. It is therefore very important to have automated and accurate monitoring of the data recorded by the detectors, so that any hardware, DAQ, or beam issues arising in the 0.3 million (20k) channels of the far (near) detector which could affect this extrapolation technique are identified and the affected data removed from the physics analysis data set. This poster covers the techniques used to select good data and their efficiency, describing the selections placed on different data and hardware levels.

  12. A comparison of cord gingival displacement with the gingitage technique.

    PubMed

    Tupac, R G; Neacy, K

    1981-11-01

    Fifteen young adult dogs were divided into three groups representing 0, 7- and 21-day healing periods. Randomly selected cuspid teeth were used to compare cord gingival displacement and gingitage techniques for subgingival tooth preparation and impression making. Clinical and histologic measurements were used as a basis for comparison. Results indicate that (1) the experimental teeth were clinically healthy at the beginning of the experiment, (2) clinical health of the gingival tissues was controlled throughout the course of the experiment, and (3) within this experimental setting, there was no significant difference between the cord gingival displacement technique and the gingitage technique.

  13. Potential techniques for non-destructive evaluation of cable materials

    NASA Astrophysics Data System (ADS)

    Gillen, Kenneth T.; Clough, Roger L.; Mattson, Bengt; Stenberg, Bengt; Oestman, Erik

    This paper describes the connection between mechanical degradation of common cable materials, in radiation and elevated temperature environments, and density increases caused by the oxidation which leads to this degradation. Two techniques based on density changes are suggested as potential non-destructive evaluation (NDE) procedures which may be applicable to monitoring the mechanical condition of cable materials in power plant environments. The first technique is direct measurement of density changes, via a density gradient column, using small shavings removed from the surface of cable jackets at selected locations. The second technique is computed X-ray tomography, utilizing a portable scanning device.
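Reading a density from a gradient column amounts to interpolating the sample shaving's equilibrium height between calibration floats of known density. A sketch with hypothetical calibration values (denser material settles lower in the column):

```python
import numpy as np

# hypothetical density-gradient-column calibration: glass floats of
# known density settle at measured heights in the column
float_heights_mm = np.array([250.0, 200.0, 150.0, 100.0])   # top to bottom
float_density = np.array([1.10, 1.15, 1.20, 1.25])          # g/cm^3

def density_at(height_mm):
    """Density of a sample shaving from its equilibrium height."""
    # np.interp needs ascending x, so flip the column axis
    return float(np.interp(height_mm, float_heights_mm[::-1],
                           float_density[::-1]))

aged = density_at(120.0)    # shaving from a degraded jacket (settles low)
fresh = density_at(210.0)   # shaving from unexposed material
print(f"density increase: {aged - fresh:.3f} g/cm^3")
```

The paper's point is that this density increase tracks oxidation and hence mechanical degradation, so a small surface shaving can stand in for a destructive tensile test.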

  14. A Systematic Comparison of Data Selection Criteria for SMT Domain Adaptation

    PubMed Central

    Chao, Lidia S.; Lu, Yi; Xing, Junwen

    2014-01-01

    Data selection has shown significant improvements in the effective use of training data by extracting sentences from large general-domain corpora to adapt statistical machine translation (SMT) systems to in-domain data. This paper performs an in-depth analysis of three different sentence selection techniques. The first is cosine tf-idf, which comes from the realm of information retrieval (IR). The second is a perplexity-based approach from the field of language modeling. These two data selection techniques applied to SMT have already been presented in the literature; edit distance for this task, however, is proposed in this paper for the first time. After investigating the individual models, a combination of all three techniques is proposed at both the corpus level and the model level. Comparative experiments are conducted on the Hong Kong law Chinese-English corpus and the results indicate the following: (i) the constraint degree of similarity measuring is not monotonically related to domain-specific translation quality; (ii) the individual selection models fail to perform effectively and robustly; but (iii) bilingual resources and combination methods are helpful to balance out-of-vocabulary (OOV) and irrelevant data; and (iv) our method achieves the goal of consistently boosting overall translation performance, ensuring optimal quality for a real-life SMT system. PMID:24683356
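The cosine tf-idf criterion can be sketched in a few lines: score each general-domain sentence by its cosine similarity to an in-domain sample in tf-idf space and keep the top scorers. The corpora are toys, and the smoothing in the idf term is one common choice, not necessarily the paper's:

```python
import math
from collections import Counter

def tfidf(texts):
    """tf-idf vectors (dicts) for a list of tokenized sentences."""
    df = Counter()
    for toks in texts:
        df.update(set(toks))
    n = len(texts)
    vecs = []
    for toks in texts:
        tf = Counter(toks)
        vecs.append({w: (c / len(toks)) * math.log((1 + n) / (1 + df[w]))
                     for w, c in tf.items()})
    return vecs

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# toy corpora: rank general-domain sentences by similarity to an
# in-domain (legal) sample and keep the top scorer
in_domain = "the contract shall be governed by the laws".split()
general = [
    "the weather today is sunny and warm".split(),
    "the parties agree the contract is binding under the laws".split(),
    "football results from the weekend league".split(),
]
vecs = tfidf([in_domain] + general)
scores = [cosine(vecs[0], v) for v in vecs[1:]]
best = max(range(len(general)), key=lambda i: scores[i])
print("most in-domain sentence index:", best)
```

The perplexity-based and edit-distance criteria studied in the paper slot into the same pipeline; only the scoring function changes.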

  15. NLOS Correction/Exclusion for GNSS Measurement Using RAIM and City Building Models.

    PubMed

    Hsu, Li-Ta; Gu, Yanlei; Kamijo, Shunsuke

    2015-07-17

    Currently, global navigation satellite system (GNSS) receivers can provide accurate and reliable positioning service in open-field areas. However, their performance in downtown areas is still affected by multipath and non-line-of-sight (NLOS) receptions. This paper proposes a new positioning method using 3D building models and the receiver autonomous integrity monitoring (RAIM) satellite selection method to achieve satisfactory positioning performance in urban areas. The 3D building model uses a ray-tracing technique to simulate the line-of-sight (LOS) and NLOS signal travel distance, known as the pseudorange, between the satellite and the receiver. The proposed RAIM fault detection and exclusion (FDE) compares the raw pseudorange measurement with the simulated pseudorange, and the measurement of a satellite is excluded if the two are inconsistent. Because the ray-tracing technique assumes a single reflection, an inconsistent case indicates a double or multiply reflected NLOS signal. According to the experimental results, the RAIM satellite selection technique reduces positioning solutions with large errors (solutions estimated on the wrong side of the road) by about 8.4% and 36.2% for the 3D building model method in the middle and deep urban canyon environments, respectively.
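The consistency test at the heart of the proposed FDE can be sketched as a simple threshold on the disagreement between raw and ray-traced pseudoranges. The satellite IDs, range values, and 15 m threshold below are hypothetical:

```python
# exclude a satellite when the raw pseudorange disagrees with the
# ray-traced (3D building model) prediction by more than a threshold,
# suggesting a multiply reflected NLOS signal that single-reflection
# ray tracing cannot reproduce
THRESHOLD_M = 15.0

def select_satellites(measured, simulated, threshold=THRESHOLD_M):
    """measured, simulated: dicts of satellite id -> pseudorange (m)."""
    kept, excluded = {}, []
    for sv, rho in measured.items():
        if abs(rho - simulated[sv]) <= threshold:
            kept[sv] = rho
        else:
            excluded.append(sv)
    return kept, excluded

measured = {"G05": 21_300_120.0, "G12": 22_450_310.0, "G17": 20_990_880.0}
simulated = {"G05": 21_300_118.0, "G12": 22_450_250.0, "G17": 20_990_875.0}
kept, excluded = select_satellites(measured, simulated)
print("excluded:", excluded)   # G12 disagrees by 60 m
```

A real implementation must first align the receiver clock bias and search over candidate positions before this residual test is meaningful; the sketch shows only the exclusion logic.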

  16. Technical Note: The Initial Stages of Statistical Data Analysis

    PubMed Central

    Tandy, Richard D.

    1998-01-01

    Objective: To provide an overview of several important data-related considerations in the design stage of a research project and to review the levels of measurement and their relationship to the statistical technique chosen for the data analysis. Background: When planning a study, the researcher must clearly define the research problem and narrow it down to specific, testable questions. The next steps are to identify the variables in the study, decide how to group and treat subjects, and determine how to measure the dependent variables and their underlying level of measurement. Then the appropriate statistical technique can be selected for data analysis. Description: The four levels of measurement, in increasing complexity, are nominal, ordinal, interval, and ratio. Nominal data are categorical or “count” data, and the numbers are treated as labels. Ordinal data can be ranked in a meaningful order by magnitude. Interval data possess the characteristics of ordinal data and also have equal distances between levels. Ratio data additionally have a natural zero point. Nominal and ordinal data are analyzed with nonparametric statistical techniques, and interval and ratio data with parametric statistical techniques. Advantages: Understanding the four levels of measurement and when it is appropriate to use each is important in determining which statistical technique to use when analyzing data. PMID:16558489

  17. The genetic consequences of selection in natural populations.

    PubMed

    Thurman, Timothy J; Barrett, Rowan D H

    2016-04-01

    The selection coefficient, s, quantifies the strength of selection acting on a genetic variant. Despite this parameter's central importance to population genetic models, until recently we have known relatively little about the value of s in natural populations. With the development of molecular genetic techniques in the late 20th century and the sequencing technologies that followed, biologists are now able to identify genetic variants and directly relate them to organismal fitness. We reviewed the literature for published estimates of natural selection acting at the genetic level and found over 3000 estimates of selection coefficients from 79 studies. Selection coefficients were roughly exponentially distributed, suggesting that the impact of selection at the genetic level is generally weak but can occasionally be quite strong. We used both nonparametric statistics and formal random-effects meta-analysis to determine how selection varies across biological and methodological categories. Selection was stronger when measured over shorter timescales, with the mean magnitude of s greatest for studies that measured selection within a single generation. Our analyses found conflicting trends when considering how selection varies with the genetic scale (e.g., SNPs or haplotypes) at which it is measured, suggesting a need for further research. Besides these quantitative conclusions, we highlight key issues in the calculation, interpretation, and reporting of selection coefficients and provide recommendations for future research. © 2016 John Wiley & Sons Ltd.

  18. Fabrication of glass gas cells for the HALOE and MAPS satellite experiments

    NASA Technical Reports Server (NTRS)

    Sullivan, E. M.; Walthall, H. G.

    1984-01-01

    The Halogen Occultation Experiment (HALOE) and the Measurement of Air Pollution from Satellites (MAPS) experiment are satellite-borne experiments which measure trace constituents in the Earth's atmosphere. The instruments which obtain the data for these experiments are based on the gas filter correlation radiometer measurement technique. In this technique, small samples of the gases of interest are encapsulated in glass cylinders, called gas cells, which act as very selective optical filters. This report describes the techniques employed in the fabrication of the gas cells for the HALOE and MAPS instruments. Details of the method used to fuse the sapphire windows (required for IR transmission) to the glass cell bodies are presented along with detailed descriptions of the jigs and fixtures used during the assembly process. The techniques and equipment used for window inspection and for pairing the HALOE windows are discussed. Cell body materials and the steps involved in preparing the cell bodies for the glass-to-sapphire fusion process are given.

  19. Participation of Employees and Students of the Faculty of Geodesy and Cartography in Polar Research

    NASA Astrophysics Data System (ADS)

    Pasik, Mariusz; Adamek, Artur; Rajner, Marcin; Kurczyński, Zdzisław; Pachuta, Andrzej; Woźniak, Marek; Bylina, Paweł; Próchniewicz, Dominik

    2016-06-01

    This year the Faculty of Geodesy and Cartography, Warsaw University of Technology celebrates its 95th jubilee, which provides an opportunity to present the Faculty's rich traditions in polar research. For almost 60 years, employees and students of the Faculty have taken part in research expeditions beyond the polar circle. The article presents various studies typical of geodesy and cartography, as well as a miscellany of possible measurement applications and geodetic techniques used to support interdisciplinary research. The wide range of geodetic techniques used in polar studies includes classic angular and linear surveys, photogrammetric techniques, gravimetric measurements, GNSS satellite techniques, and satellite imaging. These measurements were applied in glaciological, geological, geodynamic, and botanical research, as well as in cartographic studies. Often they were used in activities aiming to ensure the continuous functioning of Polish research stations in both hemispheres. This study is a short overview of the thematic scope and selected results of research conducted by our employees and students.

  20. A methodological evaluation of volumetric measurement techniques including three-dimensional imaging in breast surgery.

    PubMed

    Hoeffelin, H; Jacquemin, D; Defaweux, V; Nizet, J L

    2014-01-01

    Breast surgery currently remains very subjective and each intervention depends on the ability and experience of the operator. To date, no objective measurement of this anatomical region can codify surgery. In this light, we wanted to compare and validate a new technique for 3D scanning (LifeViz 3D) and its clinical application. We tested the use of the 3D LifeViz system (Quantificare) to perform volumetric calculations in various settings (in situ in cadaveric dissection, of control prostheses, and in clinical patients) and we compared this system to other techniques (CT scanning and Archimedes' principle) under the same conditions. We were able to identify the benefits (feasibility, safety, portability, and low patient stress) and limitations (underestimation of the in situ volume, subjectivity of contouring, and patient selection) of the LifeViz 3D system, concluding that the results are comparable with other measurement techniques. The prospects of this technology seem promising in numerous applications in clinical practice to limit the subjectivity of breast surgery.

  1. A Methodological Evaluation of Volumetric Measurement Techniques including Three-Dimensional Imaging in Breast Surgery

    PubMed Central

    Hoeffelin, H.; Jacquemin, D.; Defaweux, V.; Nizet, J L.

    2014-01-01

    Breast surgery currently remains very subjective and each intervention depends on the ability and experience of the operator. To date, no objective measurement of this anatomical region can codify surgery. In this light, we wanted to compare and validate a new technique for 3D scanning (LifeViz 3D) and its clinical application. We tested the use of the 3D LifeViz system (Quantificare) to perform volumetric calculations in various settings (in situ in cadaveric dissection, of control prostheses, and in clinical patients) and we compared this system to other techniques (CT scanning and Archimedes' principle) under the same conditions. We were able to identify the benefits (feasibility, safety, portability, and low patient stress) and limitations (underestimation of the in situ volume, subjectivity of contouring, and patient selection) of the LifeViz 3D system, concluding that the results are comparable with other measurement techniques. The prospects of this technology seem promising in numerous applications in clinical practice to limit the subjectivity of breast surgery. PMID:24511536

  2. W-band PELDOR with 1 kW microwave power: molecular geometry, flexibility and exchange coupling.

    PubMed

    Reginsson, Gunnar W; Hunter, Robert I; Cruickshank, Paul A S; Bolton, David R; Sigurdsson, Snorri Th; Smith, Graham M; Schiemann, Olav

    2012-03-01

    A technique that is increasingly being used to determine the structure and conformational flexibility of biomacromolecules is Pulsed Electron-Electron Double Resonance (PELDOR or DEER), an Electron Paramagnetic Resonance (EPR) based technique. At X-band frequencies (9.5 GHz), PELDOR is capable of precisely measuring distances in the range of 1.5-8 nm between paramagnetic centres, but the orientation selectivity is weak. In contrast, working at higher frequencies increases the orientation selection, but usually at the expense of decreased microwave power and PELDOR modulation depth. Here it is shown that a home-built high-power pulsed W-band EPR spectrometer (HiPER) with a large instantaneous bandwidth enables PELDOR data to be acquired with a high degree of orientation selectivity and large modulation depths. We demonstrate a measurement methodology that gives a set of PELDOR time traces that yield highly constrained data sets. Simulating the resulting time traces provides a deeper insight into the conformational flexibility and exchange coupling of three bisnitroxide model systems. These measurements provide strong evidence that W-band PELDOR may prove to be an accurate and quantitative tool in assessing the relative orientations of nitroxide spin labels and in correlating those orientations to the underlying biological structure and dynamics. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Applications of Doppler-free saturation spectroscopy for edge physics studies (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, E. H., E-mail: martineh@ornl.gov; Caughman, J. B. O.; Isler, R. C.

    Doppler-free saturation spectroscopy provides a very powerful method to obtain detailed information about the electronic structure of the atom through measurement of the spectral line profile. This is achieved through a significant decrease in the Doppler broadening and essentially an elimination of the instrument broadening inherent to passive spectroscopic techniques. In this paper we present the technique and associated physics of Doppler-free saturation spectroscopy in addition to how one selects the appropriate transition. Simulations of Hδ spectra are presented to illustrate the increased sensitivity to both electric field and electron density measurements.

  4. Applications of Doppler-free saturation spectroscopy for edge physics studies (invited).

    PubMed

    Martin, E H; Zafar, A; Caughman, J B O; Isler, R C; Bell, G L

    2016-11-01

    Doppler-free saturation spectroscopy provides a very powerful method to obtain detailed information about the electronic structure of the atom through measurement of the spectral line profile. This is achieved through a significant decrease in the Doppler broadening and essentially an elimination of the instrument broadening inherent to passive spectroscopic techniques. In this paper we present the technique and associated physics of Doppler-free saturation spectroscopy in addition to how one selects the appropriate transition. Simulations of Hδ spectra are presented to illustrate the increased sensitivity to both electric field and electron density measurements.

  5. Simultaneous imaging/reflectivity measurements to assess diagnostic mirror cleaning.

    PubMed

    Skinner, C H; Gentile, C A; Doerner, R

    2012-10-01

    Practical methods to clean ITER's diagnostic mirrors and restore reflectivity will be critical to ITER's plasma operations. We describe a technique to assess the efficacy of mirror cleaning techniques and detect any damage to the mirror surface. The method combines microscopic imaging and reflectivity measurements in the red, green, and blue spectral regions and at selected wavelengths. The method has been applied to laser cleaning of single crystal molybdenum mirrors coated with either carbon or beryllium films 150-420 nm thick. It is suitable for hazardous materials such as beryllium as the mirrors remain sealed in a vacuum chamber.

  6. Performance degradation mechanisms and modes in terrestrial photovoltaic arrays and technology for their diagnosis

    NASA Technical Reports Server (NTRS)

    Noel, G. T.; Sliemers, F. A.; Derringer, G. C.; Wood, V. E.; Wilkes, K. E.; Gaines, G. B.; Carmichael, D. C.

    1978-01-01

    Accelerated life-prediction test methodologies have been developed for the validation of a 20-year service life for low-cost photovoltaic arrays. Array failure modes, relevant materials property changes, and primary degradation mechanisms are discussed as a prerequisite to identifying suitable measurement techniques and instruments. Measurements must provide sufficient confidence to permit selection among alternative designs and materials and to stimulate widespread deployment of such arrays. Furthermore, the diversity of candidate materials and designs, and the variety of potential environmental stress combinations, degradation mechanisms and failure modes require that combinations of measurement techniques be identified which are suitable for the characterization of various encapsulation system-cell structure-environment combinations.

  7. Performance of Statistical Temporal Downscaling Techniques of Wind Speed Data Over Aegean Sea

    NASA Astrophysics Data System (ADS)

    Gokhan Guler, Hasan; Baykal, Cuneyt; Ozyurt, Gulizar; Kisacik, Dogan

    2016-04-01

    Wind speed data is a key input for many meteorological and engineering applications. Many institutions provide wind speed data with temporal resolutions ranging from one hour to twenty-four hours. Higher temporal resolution is generally required for some applications, such as reliable wave hindcasting studies. One solution for generating wind data at high sampling frequencies is to use statistical downscaling techniques to interpolate values at finer sampling intervals from the available data. In this study, the major aim is to assess the temporal downscaling performance of nine statistical interpolation techniques by quantifying the inherent uncertainty due to the selection of different techniques. For this purpose, hourly 10-m wind speed data taken from 227 data points over the Aegean Sea between 1979 and 2010, with a spatial resolution of approximately 0.3 degrees, are analyzed from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis database. Additionally, hourly 10-m wind speed data from two in-situ measurement stations between June 2014 and June 2015 are considered to understand the effect of dataset properties on the uncertainty generated by the interpolation technique. The nine statistical interpolation techniques selected are: w0 (left constant) interpolation, w6 (right constant) interpolation, averaging step function interpolation, linear interpolation, 1D Fast Fourier Transform interpolation, 2nd and 3rd degree Lagrange polynomial interpolation, cubic spline interpolation, and piecewise cubic Hermite interpolating polynomials. The original data is downsampled to 6 hours (i.e., wind speeds at the 0th, 6th, 12th, and 18th hours of each day are selected), the 6-hourly data is then temporally downscaled to hourly data (i.e., the wind speeds at each hour between the intervals are computed) using the nine interpolation techniques, and finally the original data is compared with the temporally downscaled data. 
A penalty point system based on the coefficient of variation of the root mean square error, the normalized mean absolute error, and prediction skill is used to rank the nine interpolation techniques according to their performance. Thus, the error originating from the temporal downscaling technique is quantified, which is an important output for determining wind and wave modelling uncertainties, and the performance of these techniques is demonstrated over the Aegean Sea, indicating spatial trends and discussing relevance to data type (i.e., reanalysis data or in-situ measurements). Furthermore, the bias introduced by the best temporal downscaling technique is discussed. Preliminary results show that, overall, piecewise cubic Hermite interpolating polynomials have the highest performance for temporally downscaling wind speed data, for both reanalysis data and in-situ measurements over the Aegean Sea. However, cubic spline interpolation performs much better along the Aegean coastline, where the data points are close to land. Acknowledgement: This research was partly supported by TUBITAK Grant number 213M534 under a Turkish-Russian joint research grant with RFBR and the CoCoNET (Towards Coast to Coast Network of Marine Protected Areas Coupled by Wind Energy Potential) project funded by the European Union FP7/2007-2013 program.
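    The downsample-interpolate-compare procedure can be illustrated with two of the nine techniques, w0 (left constant) and linear interpolation, in plain Python (function names and the sinusoidal test signal are illustrative assumptions, not the study's data):

```python
import math

def downscale_linear(coarse, step):
    """Linear interpolation from coarse (e.g. 6-hourly) back to fine (hourly)."""
    fine = []
    for i in range(len(coarse) - 1):
        a, b = coarse[i], coarse[i + 1]
        for j in range(step):
            fine.append(a + (b - a) * j / step)
    fine.append(coarse[-1])
    return fine

def downscale_w0(coarse, step):
    """w0 ('left constant'): hold the previous coarse value until the next one."""
    fine = [coarse[i] for i in range(len(coarse) - 1) for _ in range(step)]
    fine.append(coarse[-1])
    return fine

def rmse(a, b):
    """Root mean square error between two equal-length series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))
```

    For a smooth diurnal-like signal, linear interpolation reconstructs the hourly series with a much lower RMSE than holding the previous 6-hourly value, which is the kind of contrast the penalty point system quantifies.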

  8. Review of invasive urodynamics and progress towards non-invasive measurements in the assessment of bladder outlet obstruction

    PubMed Central

    Griffiths, C. J.; Pickard, R. S.

    2009-01-01

    Objective: This article defines the need for objective measurements to help diagnose the cause of lower urinary tract symptoms (LUTS). It describes the conventional techniques available, mainly invasive, and then summarizes the emerging range of non-invasive measurement techniques. Methods: This is a narrative review derived from the clinical and scientific knowledge of the authors together with consideration of selected literature. Results: Consideration of measured bladder pressure and urinary flow rate during voiding in an invasive pressure flow study is considered the gold standard for categorization of bladder outlet obstruction (BOO). The diagnosis is currently made by plotting the detrusor pressure at maximum flow (pdetQmax) and maximum flow rate (Qmax) on the nomogram approved by the International Continence Society. This plot will categorize the void as obstructed, equivocal or unobstructed. The invasive and relatively complex nature of this investigation has led to a number of inventive techniques to categorize BOO, either by measuring bladder pressure non-invasively or by providing a proxy measure such as bladder weight. Conclusion: Non-invasive methods of diagnosing BOO show great promise and a few have reached the stage of being commercially available. Further studies are however needed to validate these measurement techniques and assess their worth in the assessment of men with LUTS. PMID:19468436

  9. Groundwater levels for selected wells in Upper Kittitas County, Washington

    USGS Publications Warehouse

    Fasser, E.T.; Julich, R.J.

    2011-01-01

    Groundwater levels for selected wells in Upper Kittitas County, Washington, are presented on an interactive, web-based map to document the spatial distribution of groundwater levels in the study area measured during spring 2011. Groundwater-level data and well information were collected by the U.S. Geological Survey using standard techniques and are stored in the U.S. Geological Survey National Water Information System, Groundwater Site-Inventory database.

  10. Evaluation of selected recurrence measures in discriminating pre-ictal and inter-ictal periods from epileptic EEG data

    NASA Astrophysics Data System (ADS)

    Ngamga, Eulalie Joelle; Bialonski, Stephan; Marwan, Norbert; Kurths, Jürgen; Geier, Christian; Lehnertz, Klaus

    2016-04-01

    We investigate the suitability of selected measures of complexity based on recurrence quantification analysis and recurrence networks for an identification of pre-seizure states in multi-day, multi-channel, invasive electroencephalographic recordings from five epilepsy patients. We employ several statistical techniques to avoid spurious findings due to various influencing factors and due to multiple comparisons and observe precursory structures in three patients. Our findings indicate a high congruence among measures in identifying seizure precursors and emphasize the current notion of seizure generation in large-scale epileptic networks. A final judgment of the suitability for field studies, however, requires evaluation on a larger database.
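    Recurrence quantification analysis starts from a binary recurrence matrix. A minimal sketch for a scalar series (real analyses embed the EEG in a reconstructed state space and use measures well beyond the recurrence rate):

```python
def recurrence_matrix(series, eps):
    """Binary recurrence matrix: R[i][j] = 1 when states i and j lie
    within eps of each other (scalar series kept for simplicity)."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

def recurrence_rate(R):
    """Fraction of recurrent points: one of the simplest RQA measures."""
    n = len(R)
    return sum(sum(row) for row in R) / (n * n)
```

    Measures such as determinism or laminarity are then derived from diagonal and vertical line structures in this matrix.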

  11. Lateral access to the holes of photonic crystal fibers: selective filling and sensing applications

    NASA Astrophysics Data System (ADS)

    Cordeiro, Cristiano M. B.; Dos Santos, Eliane M.; Brito Cruz, C. H.; de Matos, Christiano J.; Ferreiira, Daniel S.

    2006-09-01

    A new, simple technique is demonstrated to laterally access the cladding holes of solid-core photonic crystal fibers (PCFs) or the central hole of hollow-core PCFs by blowing a hole through the fiber wall (using a fusion splicer and the application of pressure). For both fiber types, material was subsequently and successfully inserted into the holes. The proposed method compares favorably with other reported selective filling techniques in terms of simplicity and reproducibility. Also, since the holes are laterally filled, simultaneous optical access to the PCFs is possible, which can prove useful for practical sensing applications. As a proof-of-concept experiment, Rhodamine fluorescence measurements are shown.

  12. Streamflow data

    USGS Publications Warehouse

    Holmes, Robert R.; Singh, Vijay P.

    2016-01-01

    The importance of streamflow data to the world’s economy, environmental health, and public safety continues to grow as the population increases. The collection of streamflow data is often an involved and complicated process. The quality of streamflow data hinges on such things as site selection, instrumentation selection, streamgage maintenance and quality assurance, proper discharge measurement techniques, and the development and continued verification of the streamflow rating. This chapter serves only as an overview of the streamflow data collection process as proper treatment of considerations, techniques, and quality assurance cannot be addressed adequately in the space limitations of this chapter. Readers with the need for the detailed information on the streamflow data collection process are referred to the many references noted in this chapter. 

  13. Increase of lower esophageal sphincter pressure after osteopathic intervention on the diaphragm in patients with gastroesophageal reflux.

    PubMed

    da Silva, R C V; de Sá, C C; Pascual-Vaca, Á O; de Souza Fontes, L H; Herbella Fernandes, F A M; Dib, R A; Blanco, C R; Queiroz, R A; Navarro-Rodriguez, T

    2013-07-01

    The treatment of gastroesophageal reflux disease may be clinical or surgical. Clinical treatment consists basically of the use of drugs; however, there are new techniques to complement this treatment, and osteopathic intervention in the diaphragmatic muscle is one of these. The objective of this study is to compare pressure values of the lower esophageal sphincter (LES), measured by esophageal manometry, before and immediately after osteopathic intervention in the diaphragm muscle. Thirty-eight patients with gastroesophageal reflux disease - 16 submitted to a sham technique and 22 to the osteopathic technique - were randomly selected. The average respiratory pressure (ARP) and the maximum expiratory pressure (MEP) of the LES were measured by manometry before and after the osteopathic technique at the point of highest pressure. Statistical analysis was performed using Student's t-test and the Mann-Whitney test, and the magnitude of the effect of the proposed technique was measured using Cohen's index. A statistically significant difference relative to the sham group was found in three of four comparisons, including the following measure of LES pressure: ARP, with P = 0.027. The MEP showed no statistically significant difference (P = 0.146). The Cohen's d values for the same measures were: ARP, d = 0.80; MEP, d = 0.52. The osteopathic manipulative technique produces a positive increment in LES pressure soon after its performance. © 2012 Copyright the Authors. Journal compilation © 2012, Wiley Periodicals, Inc. and the International Society for Diseases of the Esophagus.
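    Cohen's d for two independent groups, as used here to gauge the magnitude of the technique's effect, is computed with a pooled standard deviation. A minimal sketch (the pooled-SD form is the conventional one, not necessarily the exact variant used in the study):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: difference in means over the pooled standard deviation,
    using sample variances (n - 1 denominators)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

    By the usual rule of thumb, d ≈ 0.5 is a medium effect and d ≈ 0.8 a large one, which matches the ARP result reported above.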

  14. Various Measures of the Effectiveness of Yellow Goggles

    DTIC Science & Technology

    1980-10-08

    A technique which is widely used to improve vision under these conditions is the use of yellow goggles. Skiers commonly don yellow goggles...different laboratory studies are presented. Two of the studies were of depth perception, since skiers believe that yellow goggles help them...selected for measurement because of practical considerations and theoretical implications. EXPERIMENTS ON DEPTH PERCEPTION Background Since skiers

  15. Direct and indirect measurements relevant to the assessment of fatigue of the respiratory muscles - review

    NASA Astrophysics Data System (ADS)

    Kuraszkiewicz, Bożena

    2011-01-01

    The purpose of this review is to present selected tests with the potential to detect the development of respiratory muscle fatigue in normal subjects and patients. The reviewed techniques represent part of the variety of measures and indices currently employed to assess this complex process.

  16. Techniques and methods for estimating abundance of larval and metamorphosed sea lampreys in Great Lakes tributaries, 1995 to 2001

    USGS Publications Warehouse

    Slade, Jeffrey W.; Adams, Jean V.; Christie, Gavin C.; Cuddy, Douglas W.; Fodale, Michael F.; Heinrich, John W.; Quinlan, Henry R.; Weise, Jerry G.; Weisser, John W.; Young, Robert J.

    2003-01-01

    Before 1995, Great Lakes streams were selected for lampricide treatment based primarily on qualitative measures of the relative abundance of larval sea lampreys, Petromyzon marinus. New integrated pest management approaches required standardized quantitative measures of sea lamprey. This paper evaluates historical larval assessment techniques and data and describes how new standardized methods for estimating abundance of larval and metamorphosed sea lampreys were developed and implemented. These new methods have been used to estimate larval and metamorphosed sea lamprey abundance in about 100 Great Lakes streams annually and to rank them for lampricide treatment since 1995. Implementation of these methods has provided a quantitative means of selecting streams for treatment based on treatment cost and estimated production of metamorphosed sea lampreys, provided managers with a tool to estimate potential recruitment of sea lampreys to the Great Lakes and the ability to measure the potential consequences of not treating streams, resulting in a more justifiable allocation of resources. The empirical data produced can also be used to simulate the impacts of various control scenarios.

  17. SU-E-J-12: A New Stereological Method for Tumor Volume Evaluation for Esophageal Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Y; Tianjin Medical University Cancer Institute and Hospital; East Carolina University

    2014-06-01

    Purpose: Stereological methods, used to obtain three-dimensional quantitative information from two-dimensional images, are a widely used tool in the study of cells and pathology. However, the feasibility of the method for quantitative evaluation of volumes in 3D image data sets for radiotherapy clinical applications has not been explored. On the other hand, a quick, easy-to-use and reliable method for tumor volume measurement is highly desired in image-guided radiotherapy (IGRT) for the assessment of response to treatment. To meet this need, a stereological method for evaluating tumor volumes for esophageal cancer is presented in this abstract. Methods: The stereological method was optimized by selecting appropriate grid point distances and sample types. Seven patients with esophageal cancer were selected retrospectively for this study, each having pre- and post-treatment computed tomography (CT) scans. Stereological measurements were performed to evaluate the gross tumor volume (GTV) changes after radiotherapy, and the results were compared with those of planimetric measurements. Two independent observers evaluated the reproducibility of volume measurement using the new stereological technique. Results: The intraobserver variation in the GTV estimation was 3.42±1.68 cm3 (Wilcoxon matched-pairs test: Z=−1.726, P=0.084>0.05); the interobserver variation in the GTV estimation was 22.40±7.23 cm3 (Z=−3.296, P=0.083>0.05), which showed the consistency of GTV calculation with the new method for the same and different users. The agreement between the results of the two techniques was also evaluated. The difference between the measured GTVs was 20.10±5.35 cm3 (Z=−3.101, P=0.089>0.05). Variation between the measurement results of the two techniques was low and clinically acceptable. Conclusion: The good agreement between the stereological and planimetric techniques demonstrates the reliability of the stereological tumor volume estimations. 
The optimized stereological technique described in this abstract may provide a quick, unbiased and reproducible tool for tumor volume estimation for treatment response assessment. Supported by NSFC (#81041107, #81171342 and #31000784)
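    The grid-point counting at the heart of the stereological estimate follows the Cavalieri principle: count the points falling inside the contour on each slice, then scale by the area associated with each point and the slice spacing. A minimal sketch with binary masks standing in for contoured CT slices (names and data layout are illustrative, not the abstract's implementation):

```python
def point_count_volume(masks, dx, dy, dz):
    """Cavalieri/point-counting volume estimate.

    `masks` is a list of slices; each slice is a list of rows of 0/1
    values marking grid points inside the contour. `dx * dy` is the
    area associated with each grid point, `dz` the slice spacing.
    """
    area_per_point = dx * dy
    points_inside = sum(sum(row.count(1) for row in mask) for mask in masks)
    return points_inside * area_per_point * dz
```

    Coarser grids (larger `dx`, `dy`) make the count faster at the cost of higher variance, which is the trade-off behind optimizing the grid point distance.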

  18. A non-collinear mixing technique to measure the acoustic nonlinearity parameter of adhesive bond

    NASA Astrophysics Data System (ADS)

    Ju, Taeho; Achenbach, Jan. D.; Jacobs, Laurence J.; Qu, Jianmin

    2018-04-01

    In this work, we employed a wave mixing technique with an incident longitudinal wave and a shear wave to measure the Acoustic Nonlinearity Parameter (ANLP) of adhesive bonds. An adhesive transfer tape (F-9473PC) was used as the adhesive material: two aluminum plates were bonded together by the tape. To achieve a high signal-to-noise ratio, the optimal interaction angle and frequency ratio between the two incident waves were carefully selected so that resonance occurred primarily in the adhesive layer, which suppressed the resonance in the aluminum plates. One of the most significant features of this method is that the measurements require only one-sided access to the sample being measured. To demonstrate the effectiveness of the proposed technique, the adhesively bonded aluminum sample was placed in a temperature-controlled chamber for thermal aging. The ANLP of the thermally aged sample was compared with that of a freshly made adhesive sample. The results show that the ANLP increases with aging time and temperature.

  19. Intercomparison of HONO Measurements Made Using Wet-Chemical (NITROMAC) and Spectroscopic (IBBCEAS & LP/FAGE) Techniques

    NASA Astrophysics Data System (ADS)

    Dusanter, S.; Lew, M.; Bottorff, B.; Bechara, J.; Mielke, L. H.; Berke, A.; Raff, J. D.; Stevens, P. S.; Afif, C.

    2013-12-01

    A good understanding of the oxidative capacity of the atmosphere is important to tackle fundamental issues related to climate change and air quality. The hydroxyl radical (OH) is the dominant oxidant in the daytime troposphere and an accurate description of its sources in atmospheric models is of utmost importance. Recent field studies indicate higher-than-expected concentrations of HONO during the daytime, suggesting that the photolysis of HONO may be an important underestimated source of OH. Understanding the tropospheric HONO budget requires confidence in analytical instrumentation capable of selectively measuring HONO. In this presentation, we discuss an intercomparison study of HONO measurements performed during summer 2013 at the edge of a hardwood forest in Southern Indiana. This exercise involved a wet chemical technique (NITROMAC), an Incoherent Broad-Band Cavity Enhanced Absorption Spectroscopy instrument (IBBCEAS), and a Laser-Photofragmentation/Fluorescence Assay by Gas Expansion instrument (LP/FAGE). The agreement observed between the three techniques will be discussed for both ambient measurements and cross calibration experiments.

  20. Flow direction measurement criteria and techniques planned for the 40- by 80-/80- x 120-foot wind tunnel integrated systems tests

    NASA Technical Reports Server (NTRS)

    Zell, P. T.; Hoffmann, J.; Sandlin, D. R.

    1985-01-01

    A study was performed in order to develop the criteria for the selection of flow direction indicators for use in the Integrated Systems Tests (ISTs) of the 40- by 80-/80- by 120-Foot Wind Tunnel System. The problems, requirements, and limitations of flow direction measurement in the wind tunnel were investigated. The locations and types of flow direction measurements planned in the facility were discussed. A review of current methods of flow direction measurement was made and the most suitable technique for each location was chosen. A flow direction vane that employs a Hall Effect transducer was then developed and evaluated for application during the ISTs.

  1. AHIMSA - Ad hoc histogram information measure sensing algorithm for feature selection in the context of histogram inspired clustering techniques

    NASA Technical Reports Server (NTRS)

    Dasarathy, B. V.

    1976-01-01

    An algorithm is proposed for dimensionality reduction in the context of clustering techniques based on histogram analysis. The approach is based on an evaluation of the hills and valleys in the unidimensional histograms along the different features and provides an economical means of assessing the significance of the features in a nonparametric unsupervised data environment. The method has relevance to remote sensing applications.

  2. High-Grading Lunar Samples for Return to Earth

    NASA Technical Reports Server (NTRS)

    Allen, Carlton; Sellar, Glenn; Nunez, Jorge; Winterhalter, Daniel; Farmer, Jack

    2009-01-01

    Astronauts on long-duration lunar missions will need the capability to "high-grade" their samples, selecting the highest-value samples for transport to Earth and leaving the others on the Moon. We are supporting studies to define the "necessary and sufficient" measurements and techniques for high-grading samples at a lunar outpost. A glovebox dedicated to testing instruments and techniques for high-grading samples is in operation at the JSC Lunar Experiment Laboratory.

  3. High dynamic range fringe acquisition: A novel 3-D scanning technique for high-reflective surfaces

    NASA Astrophysics Data System (ADS)

    Jiang, Hongzhi; Zhao, Huijie; Li, Xudong

    2012-10-01

    This paper presents a novel 3-D scanning technique for high-reflective surfaces based on the phase-shifting fringe projection method. A high dynamic range fringe acquisition (HDRFA) technique is developed to process the fringe images reflected from shiny surfaces; it generates a synthetic fringe image by fusing raw fringe patterns acquired with different camera exposure times and projector illumination intensities. A fringe image fusion algorithm avoids saturation and under-illumination by choosing, for each pixel, the raw fringe with the highest fringe modulation intensity. A method for automatic selection of the HDRFA parameters is developed, which largely increases measurement automation; optimizing these parameters gives the synthetic fringes a higher signal-to-noise ratio (SNR) under ambient light. Experimental results show that the proposed technique can successfully measure objects with high-reflective surfaces and is insensitive to ambient light.
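
    The core fusion rule, per pixel, keep the fringe set from the exposure with the highest modulation while discarding saturated pixels, can be sketched for four-step phase shifting. This is a simplified reading of HDRFA: the paper also varies projector intensity and auto-selects exposure parameters, both omitted here, and the saturation threshold is an assumption.

```python
import numpy as np

def modulation(frames):
    """Fringe modulation for 4-step phase-shifted frames (shape 4 x H x W)."""
    I0, I1, I2, I3 = frames
    return 0.5 * np.sqrt((I3 - I1) ** 2 + (I0 - I2) ** 2)

def fuse_exposures(exposures, saturation=255):
    """Per pixel, keep the fringe set from the exposure with the highest
    modulation, excluding exposures that saturate at that pixel."""
    exposures = np.asarray(exposures, dtype=float)       # E x 4 x H x W
    mod = np.stack([modulation(e) for e in exposures])   # E x H x W
    saturated = (exposures >= saturation).any(axis=1)    # E x H x W
    mod[saturated] = -np.inf                             # never pick these
    best = mod.argmax(axis=0)                            # winning exposure map
    fused = np.take_along_axis(exposures, best[None, None], axis=0)[0]
    return fused, best

low = np.full((4, 1, 1), 150.0)
low[0], low[2] = 100.0, 200.0          # modulation 50 at the single pixel
high = np.full((4, 1, 1), 255.0)       # saturated everywhere
fused, best = fuse_exposures([low, high])
print(int(best[0, 0]))  # -> 0 (the unsaturated exposure wins)
```

    Picking the highest-modulation pixel keeps well-exposed fringes in dark regions and falls back to shorter exposures where the surface glares.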

  4. Biosensor-based microRNA detection: techniques, design, performance, and challenges.

    PubMed

    Johnson, Blake N; Mutharasan, Raj

    2014-04-07

    The current state of biosensor-based techniques for amplification-free microRNA (miRNA) detection is critically reviewed. Comparison with non-sensor and amplification-based molecular techniques (MTs), such as polymerase-based methods, is made in terms of transduction mechanism, associated protocol, and sensitivity. Challenges associated with miRNA hybridization thermodynamics which affect assay selectivity and amplification bias are briefly discussed. Electrochemical, electromechanical, and optical classes of miRNA biosensors are reviewed in terms of transduction mechanism, limit of detection (LOD), time-to-results (TTR), multiplexing potential, and measurement robustness. Current trends suggest that biosensor-based techniques (BTs) for miRNA assay will complement MTs due to the advantages of amplification-free detection, LOD being femtomolar (fM)-attomolar (aM), short TTR, multiplexing capability, and minimal sample preparation requirement. Areas of future importance in miRNA BT development are presented which include focus on achieving high measurement confidence and multiplexing capabilities.

  5. ANALYSIS OF SAMPLING TECHNIQUES FOR IMBALANCED DATA: AN N=648 ADNI STUDY

    PubMed Central

    Dubey, Rashmi; Zhou, Jiayu; Wang, Yalin; Thompson, Paul M.; Ye, Jieping

    2013-01-01

    Many neuroimaging applications deal with imbalanced imaging data. For example, in Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, the mild cognitive impairment (MCI) cases eligible for the study are nearly two times the Alzheimer’s disease (AD) patients for structural magnetic resonance imaging (MRI) modality and six times the control cases for proteomics modality. Constructing an accurate classifier from imbalanced data is a challenging task. Traditional classifiers that aim to maximize the overall prediction accuracy tend to classify all data into the majority class. In this paper, we study an ensemble system of feature selection and data sampling for the class imbalance problem. We systematically analyze various sampling techniques by examining the efficacy of different rates and types of undersampling, oversampling, and a combination of over and under sampling approaches. We thoroughly examine six widely used feature selection algorithms to identify significant biomarkers and thereby reduce the complexity of the data. The efficacy of the ensemble techniques is evaluated using two different classifiers including Random Forest and Support Vector Machines based on classification accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity measures. Our extensive experimental results show that for various problem settings in ADNI, (1). a balanced training set obtained with K-Medoids technique based undersampling gives the best overall performance among different data sampling techniques and no sampling approach; and (2). sparse logistic regression with stability selection achieves competitive performance among various feature selection algorithms. Comprehensive experiments with various settings show that our proposed ensemble model of multiple undersampled datasets yields stable and promising results. PMID:24176869

  6. Analysis of sampling techniques for imbalanced data: An n = 648 ADNI study.

    PubMed

    Dubey, Rashmi; Zhou, Jiayu; Wang, Yalin; Thompson, Paul M; Ye, Jieping

    2014-02-15

    Many neuroimaging applications deal with imbalanced imaging data. For example, in Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, the mild cognitive impairment (MCI) cases eligible for the study are nearly two times the Alzheimer's disease (AD) patients for structural magnetic resonance imaging (MRI) modality and six times the control cases for proteomics modality. Constructing an accurate classifier from imbalanced data is a challenging task. Traditional classifiers that aim to maximize the overall prediction accuracy tend to classify all data into the majority class. In this paper, we study an ensemble system of feature selection and data sampling for the class imbalance problem. We systematically analyze various sampling techniques by examining the efficacy of different rates and types of undersampling, oversampling, and a combination of over and undersampling approaches. We thoroughly examine six widely used feature selection algorithms to identify significant biomarkers and thereby reduce the complexity of the data. The efficacy of the ensemble techniques is evaluated using two different classifiers including Random Forest and Support Vector Machines based on classification accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity measures. Our extensive experimental results show that for various problem settings in ADNI, (1) a balanced training set obtained with K-Medoids technique based undersampling gives the best overall performance among different data sampling techniques and no sampling approach; and (2) sparse logistic regression with stability selection achieves competitive performance among various feature selection algorithms. Comprehensive experiments with various settings show that our proposed ensemble model of multiple undersampled datasets yields stable and promising results. © 2013 Elsevier Inc. All rights reserved.
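
    The winning strategy above, undersampling the majority class with K-Medoids so that the retained cases are representative rather than randomly chosen, can be sketched as follows. This is a compact PAM-style K-Medoids, not the authors' implementation; the Euclidean metric, iteration count, seed, and toy data are assumptions.

```python
import numpy as np

def k_medoids(X, k, n_iter=20, seed=0):
    """Small PAM-style k-medoids; returns the indices of k medoid points."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = D[:, medoids].argmin(axis=1)     # nearest-medoid assignment
        new = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members):
                # new medoid: member minimizing summed within-cluster distance
                new[c] = members[D[np.ix_(members, members)].sum(axis=1).argmin()]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids

def kmedoids_undersample(X, y, majority):
    """Balance the classes by replacing the majority class with its medoids."""
    maj = np.where(y == majority)[0]
    n_keep = int((y != majority).sum())           # match the minority size
    keep = np.concatenate([np.where(y != majority)[0],
                           maj[k_medoids(X[maj], n_keep)]])
    return X[keep], y[keep]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (5, 2)),        # majority blob 1
               rng.normal(5, 0.1, (5, 2)),        # majority blob 2
               rng.normal(2.5, 0.1, (2, 2))])     # minority class
y = np.array([0] * 10 + [1] * 2)
Xb, yb = kmedoids_undersample(X, y, majority=0)
print(int((yb == 0).sum()), int((yb == 1).sum()))  # -> 2 2
```

    Unlike random undersampling, the medoids preserve the spread of the majority class, which is one plausible reason this variant performed best in the study.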

  7. Dynamic measurements of thermophysical properties of metals and alloys at high temperatures by subsecond pulse heating techniques

    NASA Technical Reports Server (NTRS)

    Cezairliyan, Ared

    1993-01-01

    Rapid (subsecond) heating techniques developed at the National Institute of Standards and Technology for the measurement of selected thermophysical and related properties of metals and alloys at high temperatures (above 1000 C) are described. The techniques are based on rapid resistive self-heating of the specimen from room temperature to the desired high temperature in a short time while measuring the relevant experimental quantities, such as electrical current through the specimen, voltage across the specimen, specimen temperature, and length, with appropriate time resolution. The first technique, referred to as the millisecond-resolution technique, is for measurements on solid metals and alloys in the temperature range from 1000 C to the melting temperature of the specimen. It utilizes a heavy battery bank for the energy source, and the total heating time of the specimen is typically in the range of 100-1000 ms. Data are recorded digitally every 0.5 ms with a full-scale resolution of about one part in 8000. The properties that can be measured with this system are: specific heat, enthalpy, thermal expansion, electrical resistivity, normal spectral emissivity, hemispherical total emissivity, temperature and energy of solid-solid phase transformations, and melting temperature (solidus). The second technique, referred to as the microsecond-resolution technique, is for measurements on liquid metals and alloys in the temperature range 1200 to 6000 C. It utilizes a capacitor bank for the energy source, and the total heating time of the specimen is typically in the range 50-500 µs. Data are recorded digitally every 0.5 µs with a full-scale resolution of about one part in 4000. The properties that can be measured with this system are: melting temperature (solidus and liquidus), heat of fusion, specific heat, enthalpy, and electrical resistivity. The third technique is for measurements of the surface tension of liquid metals and alloys at their melting temperature. It utilizes a modified millisecond-resolution heating system designed for use in a microgravity environment.

  8. Experimental demonstration of selective quantum process tomography on an NMR quantum information processor

    NASA Astrophysics Data System (ADS)

    Gaikwad, Akshay; Rehal, Diksha; Singh, Amandeep; Arvind; Dorai, Kavita

    2018-02-01

    We present the NMR implementation of a scheme for selective and efficient quantum process tomography without ancilla. We generalize this scheme such that it can be implemented efficiently using only a set of measurements involving product operators. The method allows us to estimate any element of the quantum process matrix to a desired precision, provided a set of quantum states can be prepared efficiently. Our modified technique requires fewer experimental resources as compared to the standard implementation of selective and efficient quantum process tomography, as it exploits the special nature of NMR measurements to allow us to compute specific elements of the process matrix by a restrictive set of subsystem measurements. To demonstrate the efficacy of our scheme, we experimentally tomograph the processes corresponding to "no operation," a controlled-NOT (CNOT), and a controlled-Hadamard gate on a two-qubit NMR quantum information processor, with high fidelities.

  9. Evaluation of Two Computational Techniques of Calculating Multipath Using Global Positioning System Carrier Phase Measurements

    NASA Technical Reports Server (NTRS)

    Gomez, Susan F.; Hood, Laura; Panneton, Robert J.; Saunders, Penny E.; Adkins, Antha; Hwu, Shian U.; Lu, Ba P.

    1996-01-01

    Two computational techniques are used to calculate differential phase errors on Global Positioning System (GPS) carrier wave phase measurements due to certain multipath-producing objects. The first is a rigorous computational electromagnetics technique called the Geometric Theory of Diffraction (GTD); the second is a simple ray tracing method implemented in the DECAT code, which only solves for reflected signals. The GTD technique has been used successfully to predict microwave propagation characteristics by taking into account the dominant multipath components due to reflections and diffractions from scattering structures. The results from the two techniques are compared to GPS differential carrier phase measurements taken on the ground using a GPS receiver in the presence of typical International Space Station (ISS) interference structures. The calculations produced using the GTD code matched the measured results better than those of the ray tracing technique. The agreement was good, demonstrating that phase errors due to multipath can be modeled and characterized using the GTD technique, and to a lesser fidelity using the DECAT technique. However, some discrepancies were observed; most occurred at lower elevations and were due to phase center deviations of the antenna, the background multipath environment, or the receiver itself. Selected measured and predicted differential carrier phase error results are presented and compared. Results indicate that reflections and diffractions caused by the multipath producers located near the GPS antennas can produce phase shifts of greater than 10 mm, and as high as 95 mm. It should be noted that the field test configuration was meant to simulate typical ISS structures, but the two environments are not identical. The GTD and DECAT techniques have been used to calculate phase errors due to multipath on the ISS configuration to quantify the expected attitude determination errors.

  10. Rayleigh Scattering Diagnostic for Dynamic Measurement of Velocity and Temperature

    NASA Technical Reports Server (NTRS)

    Seasholtz, Richard G.; Panda, J.

    2001-01-01

    A new technique for measuring dynamic gas velocity and temperature is described. The technique is based on molecular Rayleigh scattering of laser light, so no seeding of the flow is necessary. The Rayleigh scattered light is filtered with a fixed cavity, planar mirror Fabry-Perot interferometer. A minimum number of photodetectors were used in order to allow the high data acquisition rate needed for dynamic measurements. One photomultiplier tube (PMT) was used to measure the total Rayleigh scattering, which is proportional to the gas density. Two additional PMTs were used to detect light that passes through two apertures in a mask located in the interferometer fringe plane. An uncertainty analysis was used to select the optimum aperture parameters and to predict the measurement uncertainty due to photon shot-noise. Results of an experiment to measure the velocity of a subsonic free jet are presented.

  11. Measuring and modeling near surface reflected and emitted radiation fluxes at the FIFE site

    NASA Technical Reports Server (NTRS)

    Blad, Blaine L.; Norman, John M.; Walter-Shea, Elizabeth; Starks, Patrick; Vining, Roel; Hays, Cynthia

    1988-01-01

    Research was conducted during the four Intensive Field Campaigns (IFC) of the FIFE project in 1987. The research was done on a tall grass prairie with specific measurement sites on and near the Konza Prairie in Kansas. Measurements were made to help meet the following objectives: determination of the variability in reflected and emitted radiation fluxes in selected spectral wavebands as a function of topography and vegetative community; development of techniques to account for slope and sun angle effects on the radiation fluxes; estimation of shortwave albedo and net radiation fluxes using the reflected and emitted spectral measurements described; estimation of leaf and canopy spectral properties from calculated normalized differences coupled with off-nadir measurements using inversion techniques; estimation of plant water status at several locations with indices utilizing plant temperature and other environmental parameters; and determination of relationships between estimated plant water status and measured soil water content. Results are discussed.

  12. Four-dimensional modeling of recent vertical movements in the area of the southern California uplift

    USGS Publications Warehouse

    Vanicek, Petr; Elliot, Michael R.; Castle, Robert O.

    1979-01-01

    This paper describes an analytical technique that utilizes scattered geodetic relevelings and tide-gauge records to portray Recent vertical crustal movements that may have been characterized by spasmodic changes in velocity. The technique is based on the fitting of a time-varying algebraic surface of prescribed degree to the geodetic data treated as tilt elements and to tide-gauge readings treated as point movements. Desired variations in time can be selected as any combination of powers of vertical movement velocity and episodic events. The state of the modeled vertical displacement can be shown for any number of dates for visual display. Statistical confidence limits of the modeled displacements, derived from the density of measurements in both space and time, line length, and accuracy of input data, are also provided. The capabilities of the technique are demonstrated on selected data from the region of the southern California uplift. 

  13. Increasing the speed of tumour diagnosis during surgery with selective scanning Raman microscopy

    NASA Astrophysics Data System (ADS)

    Kong, Kenny; Rowlands, Christopher J.; Varma, Sandeep; Perkins, William; Leach, Iain H.; Koloydenko, Alexey A.; Pitiot, Alain; Williams, Hywel C.; Notingher, Ioan

    2014-09-01

    One of the main challenges in cancer surgery is ensuring that all tumour cells are removed during surgery, while sparing as much healthy tissue as possible. Histopathology, the gold-standard technique for cancer diagnosis, is often impractical for intra-operative use because of the time-consuming tissue preparation procedures (sectioning and staining). Raman micro-spectroscopy is a powerful technique that can discriminate between tumours and healthy tissues with high accuracy, based entirely on intrinsic chemical differences. However, raster-scanning Raman micro-spectroscopy is a slow imaging technique that typically requires data acquisition times as long as several days for typical tissue samples obtained during surgery (1 × 1 cm2) - in particular when high signal-to-noise ratio spectra are required to ensure accurate diagnosis. In this paper we present two techniques based on selective sampling Raman micro-spectroscopy that can overcome these limitations. In selective sampling, information regarding the spatial features of the tissue, either measured by an alternative optical technique or estimated in real-time from the Raman spectra, can be used to drastically reduce the number of Raman spectra required for diagnosis. These sampling strategies allowed diagnosis of basal cell carcinoma in skin tissue samples excised during Mohs micrographic surgery faster than frozen section histopathology, and two orders of magnitude faster than previous techniques based on raster-scanning Raman microscopy. Further development of these techniques may help during cancer surgery by providing a fast and objective way for surgeons to ensure the complete removal of tumour cells while sparing as much healthy tissue as possible.

  14. Development of custom measurement system for biomechanical evaluation of independent wheelchair transfers.

    PubMed

    Koontz, Alicia M; Lin, Yen-Sheng; Kankipati, Padmaja; Boninger, Michael L; Cooper, Rory A

    2011-01-01

    This study describes a new custom measurement system designed to investigate the biomechanics of sitting-pivot wheelchair transfers and assesses the reliability of selected biomechanical variables. Variables assessed include horizontal and vertical reaction forces underneath both hands and three-dimensional trunk, shoulder, and elbow range of motion. We examined the reliability of these measures across 5 consecutive transfer trials for 5 subjects with spinal cord injury and 12 nondisabled subjects while they performed a self-selected sitting-pivot transfer from a wheelchair to a level bench. A majority of the biomechanical variables demonstrated moderate to excellent reliability (r > 0.6). The transfer measurement system recorded reliable and valid biomechanical data for future studies of sitting-pivot wheelchair transfers. We recommend a minimum of five transfer trials to obtain a reliable measure of transfer technique in future studies.
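
    The r > 0.6 criterion can be illustrated with a toy computation: reliability of one repeated biomechanical variable estimated as the mean pairwise Pearson correlation across trials. The mean-pairwise-r estimator, the force profile, and the noise level below are illustrative assumptions, not the study's actual data or statistics.

```python
import numpy as np

def intertrial_reliability(trials):
    """Mean pairwise Pearson r between repeated trials of one variable.

    trials: n_trials x n_samples array (e.g. a resampled vertical hand
    reaction-force profile, one row per transfer trial).
    """
    r = np.corrcoef(trials)                        # trial-by-trial correlations
    return float(r[np.triu_indices_from(r, k=1)].mean())

# Five synthetic trials: one underlying force profile plus measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
profile = 400.0 * np.sin(np.pi * t)                # hypothetical force curve (N)
trials = profile + rng.normal(0.0, 20.0, (5, 100))
rel = intertrial_reliability(trials)
print(rel > 0.6)  # -> True: "moderate to excellent" by the r > 0.6 criterion
```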

  15. Human performance measurement: Validation procedures applicable to advanced manned telescience systems

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.

    1990-01-01

    As telescience systems become more and more complex, autonomous, and opaque to their operators it becomes increasingly difficult to determine whether the total system is performing as it should. Some of the complex and interrelated human performance measurement issues are addressed as they relate to total system validation. The assumption is made that human interaction with the automated system will be required well into the Space Station Freedom era. Candidate human performance measurement-validation techniques are discussed for selected ground-to-space-to-ground and space-to-space situations. Most of these measures may be used in conjunction with an information throughput model presented elsewhere (Haines, 1990). Teleoperations, teleanalysis, teleplanning, teledesign, and teledocumentation are considered, as are selected illustrative examples of space related telescience activities.

  16. Evaluation of polarization mode dispersion in a telecommunication wavelength selective switch using quantum interferometry.

    PubMed

    Fraine, A; Minaeva, O; Simon, D S; Egorov, R; Sergienko, A V

    2012-01-30

    A polarization mode dispersion (PMD) measurement of a commercial telecommunication wavelength selective switch (WSS) using a quantum interferometric technique with polarization-entangled states is presented. Polarization-entangled photons with a broad spectral width covering the telecom band are produced using a chirped periodically poled nonlinear crystal. The first demonstration of a quantum metrology application using an industrial commercial device shows a promising future for practical high-resolution quantum interference.

  17. Measuring concentrations of selected air pollutants inside California vehicles : final report

    DOT National Transportation Integrated Search

    1998-12-01

    This study provided the data needed to characterize in-transit exposures to air pollutants for California drivers. It also demonstrated a number of in-situ monitoring techniques in moving vehicles and provided findings that shed new light on particle...

  18. Social insects and selfish genes.

    PubMed

    Bourke, A F

    2001-10-01

    Sometimes science advances because of a new idea. Sometimes, it's because of a new technique. When both occur together, exciting times result. In the study of social insects, DNA-based methods for measuring relatedness now allow increasingly detailed tests of Hamilton's theory of kin selection.

  19. Chemical determination of free radical-induced damage to DNA.

    PubMed

    Dizdaroglu, M

    1991-01-01

    Free radical-induced damage to DNA in vivo can result in deleterious biological consequences such as the initiation and promotion of cancer. Chemical characterization and quantitation of such DNA damage is essential for an understanding of its biological consequences and cellular repair. Methodologies incorporating the technique of gas chromatography/mass spectrometry (GC/MS) have been developed in recent years for measurement of free radical-induced DNA damage. The use of GC/MS with selected-ion monitoring (SIM) facilitates unequivocal identification and quantitation of a large number of products of all four DNA bases produced in DNA by reactions with hydroxyl radical, hydrated electron, and H atom. Hydroxyl radical-induced DNA-protein cross-links in mammalian chromatin, and products of the sugar moiety in DNA are also unequivocally identified and quantitated. The sensitivity and selectivity of the GC/MS-SIM technique enables the measurement of DNA base products even in isolated mammalian chromatin without the necessity of first isolating DNA, and despite the presence of histones. Recent results reviewed in this article demonstrate the usefulness of the GC/MS technique for chemical determination of free radical-induced DNA damage in DNA as well as in mammalian chromatin under a vast variety of conditions of free radical production.

  20. Laser correlation velocimetry performance in diesel applications: spatial selectivity and velocity sensitivity

    NASA Astrophysics Data System (ADS)

    Hespel, Camille; Blaisot, Jean-Bernard; Gazon, Matthieu; Godard, Gilles

    2012-07-01

    The characterization of diesel jets in the near field of the nozzle exit still presents challenges for experimenters. Detailed velocity measurements are needed to characterize diesel injector performance and also to establish boundary conditions for CFD codes. The present article examines the efficiency of laser correlation velocimetry (LCV) applied to diesel spray characterization. A new optical configuration based on a long-distance microscope was tested, and special care was taken to examine the spatial selectivity of the technique. Results show that the depth of the measurement volume (along the laser beam) of LCV extends beyond the depth of field of the imaging setup. The LCV results were also found to be particularly sensitive to high-speed elements of a spray. Results from high-pressure diesel jets in a back-pressure environment indicate that this technique is particularly suited to the very near field of the nozzle exit, where the flow is the narrowest and where the velocity distribution is not too large. It is also shown that the performance of the LCV technique is controlled by the filtering and windowing parameters used in the processing of the raw signals.

  1. A participatory approach for selecting cost-effective measures in the WFD context: the Mar Menor (SE Spain).

    PubMed

    Perni, Angel; Martínez-Paz, José M

    2013-08-01

    Achieving a good ecological status in water bodies by 2015 is one of the objectives established in the European Water Framework Directive. Cost-effectiveness analysis (CEA) has been applied for selecting measures to achieve this goal, but this appraisal technique requires technical and economic information that is not always available. In addition, there are often local insights that can only be identified by engaging multiple stakeholders in a participatory process. This paper proposes to combine CEA with the active involvement of stakeholders for selecting cost-effective measures. This approach has been applied to the case study of one of the main coastal lagoons in the European Mediterranean Sea, the Mar Menor, which presents eutrophication problems. Firstly, face-to-face interviews were conducted to estimate the relative effectiveness and relative impacts of a set of measures by means of the pairwise comparison technique. Secondly, relative effectiveness was used to estimate cost-effectiveness ratios. The most cost-effective measures were the restoration of watercourses that drain into the lagoon and the treatment of polluted groundwater. Although in general the stakeholders approved the former, most of them stated that the latter involved some uncertainties, which must be addressed before implementing it. Stakeholders pointed out that the programme of measures (PoM) would have a positive impact not only on water quality, but also on fishing, agriculture and tourism in the area. This approach can be useful to evaluate other programmes, plans or projects related to other European environmental strategies. Copyright © 2013 Elsevier B.V. All rights reserved.
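
    The appraisal step can be sketched numerically: derive relative-effectiveness weights from a stakeholder pairwise-comparison matrix (here via the row geometric-mean shortcut common in AHP-style analyses) and rank measures by cost per unit of effectiveness. The measure names echo the abstract, but the costs and the comparison matrix are invented for illustration.

```python
import numpy as np

def pairwise_weights(A):
    """Priority weights from a reciprocal pairwise-comparison matrix,
    using the row geometric-mean shortcut."""
    g = np.prod(A, axis=1) ** (1.0 / A.shape[1])
    return g / g.sum()

# Hypothetical measures, annual costs (thousand euro), and a stakeholder
# comparison matrix of relative effectiveness (all numbers invented).
measures = ["restore watercourses", "treat groundwater", "buffer strips"]
costs = np.array([300.0, 500.0, 120.0])
A = np.array([[1.0,   2.0,   4.0],
              [1 / 2, 1.0,   3.0],
              [1 / 4, 1 / 3, 1.0]])
eff = pairwise_weights(A)                  # relative effectiveness weights
ce = costs / eff                           # cost per unit of effectiveness
ranking = [measures[i] for i in np.argsort(ce)]
print(ranking[0])  # -> restore watercourses
```

    Pairwise comparison lets stakeholders express effectiveness judgements without absolute measurements, which is exactly the gap the participatory approach fills.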

  2. mRMR-ABC: A Hybrid Gene Selection Algorithm for Cancer Classification Using Microarray Gene Expression Profiling

    PubMed Central

    Alshamlan, Hala; Badr, Ghada; Alohali, Yousef

    2015-01-01

    An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying the ABC algorithm to the analysis of a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profiles. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters: mRMR combined with a genetic algorithm (mRMR-GA) and mRMR combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results show that the proposed mRMR-ABC algorithm achieves accurate classification performance using a small number of predictive genes on all tested datasets, compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems. PMID:25961028

  3. mRMR-ABC: A Hybrid Gene Selection Algorithm for Cancer Classification Using Microarray Gene Expression Profiling.

    PubMed

    Alshamlan, Hala; Badr, Ghada; Alohali, Yousef

    2015-01-01

    An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying the ABC algorithm to the analysis of a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profiles. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters: mRMR combined with a genetic algorithm (mRMR-GA) and mRMR combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results show that the proposed mRMR-ABC algorithm achieves accurate classification performance using a small number of predictive genes on all tested datasets, compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems.
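
    The mRMR filter at the core of the hybrid can be sketched for discrete features: greedily add the feature maximizing its relevance MI(f; y) minus its mean redundancy with the already-selected set. The ABC search and SVM wrapper from the paper are omitted; the histogram-based mutual information and the toy "genes" below are assumptions for illustration.

```python
import numpy as np

def mutual_info(a, b):
    """Mutual information (nats) between two discrete arrays."""
    ai = np.unique(a, return_inverse=True)[1]
    bi = np.unique(b, return_inverse=True)[1]
    joint = np.zeros((ai.max() + 1, bi.max() + 1))
    np.add.at(joint, (ai, bi), 1.0)                 # joint histogram
    p = joint / joint.sum()
    pa, pb = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (pa @ pb)[nz])).sum())

def mrmr(X, y, k):
    """Greedy mRMR: relevance to y minus mean redundancy with selected."""
    relevance = [mutual_info(X[:, j], y) for j in range(X.shape[1])]
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        scores = {
            j: relevance[j]
            - np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
            for j in range(X.shape[1]) if j not in selected
        }
        selected.append(max(scores, key=scores.get))
    return selected

# Toy data: gene 1 duplicates gene 0; gene 2 carries complementary information.
y = np.tile([0, 1, 2, 3], 25)
X = np.column_stack([y % 2, y % 2, y // 2])
print(mrmr(X, y, 2))  # -> [0, 2] (skips the redundant duplicate gene 1)
```

    Penalizing redundancy is what keeps the selected gene panel small: a second copy of an informative gene adds relevance but no new information.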

  4. Rare-Earth Oxide (Yb2O3) Selective Emitter Fabrication and Evaluation

    NASA Technical Reports Server (NTRS)

    Jennette, Bryan; Gregory, Don A.; Herren, Kenneth; Tucker, Dennis; Smith, W. Scott (Technical Monitor)

    2001-01-01

    This investigation involved the fabrication and evaluation of rare-earth oxide selective emitters. The first goal of this study was to successfully fabricate the selective emitter samples using paper and ceramic materials processing techniques. The resulting microstructure was also analyzed using a Scanning Electron Microscope. All selective emitter samples fabricated for this study were made with ytterbium oxide (Yb2O3). The second goal of this study involved the measurement of the spectral emission and the radiated power of all the selective emitter samples. The final goal of this study involved the direct comparison of the radiated power emitted by the selective emitter samples to that of a standard blackbody at the same temperature and within the same wavelength range.

  5. Some fuzzy techniques for staff selection process: A survey

    NASA Astrophysics Data System (ADS)

    Md Saad, R.; Ahmad, M. Z.; Abu, M. S.; Jusoh, M. S.

    2013-04-01

    With a high level of business competition, it is vital to have flexible staff who are able to adapt themselves to work circumstances. However, the staff selection process is not an easy task to solve, even when it is tackled in a simplified version containing only a single criterion and a homogeneous skill. When multiple criteria and various skills are involved, the problem becomes much more complicated. In addition, there is some information that cannot be measured precisely. This is patently obvious when dealing with opinions, thoughts, feelings, beliefs, etc. One possible tool to handle this issue is fuzzy set theory. Therefore, the objective of this paper is to review the existing fuzzy techniques for solving the staff selection process. It classifies several existing research methods and identifies areas where there is a gap and a need for further research. Finally, this paper concludes by suggesting new ideas for future research based on the gaps identified.

  6. Measurement of Two-Phase Flow and Heat Transfer Parameters using Infrared Thermometry

    NASA Technical Reports Server (NTRS)

    Kim, Tae-Hoon; Kommer, Eric; Dessiatoun, Serguei; Kim, Jungho

    2012-01-01

    A novel technique to measure heat transfer and liquid film thickness distributions over relatively large areas for two-phase flow and heat transfer phenomena using infrared (IR) thermometry is described. IR thermometry is an established technology that can be used to measure temperatures when optical access to the surface is available in the wavelengths of interest. In this work, a midwave IR camera (3.6-5.1 microns) is used to determine the temperature distribution within a multilayer consisting of a silicon substrate coated with a thin insulator. Since silicon is largely transparent to IR radiation, the temperature of the inner and outer walls of the multilayer can be measured by coating selected areas with a thin, IR opaque film. If the fluid used is also partially transparent to IR, the flow can be visualized and the liquid film thickness can be measured. The theoretical basis for the technique is given along with a description of the test apparatus and data reduction procedure. The technique is demonstrated by determining the heat transfer coefficient distributions produced by droplet evaporation and flow boiling heat transfer.
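    As a toy illustration of the final data-reduction step, a heat transfer coefficient can be formed at each pixel from the IR-derived wall temperature via Newton's law of cooling. The heat flux, temperatures, and uniform-flux assumption below are invented for illustration; the paper's actual reduction must account for conduction through the multilayer.

```python
def heat_transfer_coefficient(q_wall, t_wall, t_fluid):
    """Newton's law of cooling per pixel: h = q'' / (T_wall - T_fluid)."""
    return [q_wall / (tw - t_fluid) for tw in t_wall]

# Hypothetical IR-measured wall temperatures (K) along one row of pixels,
# with an assumed uniform wall heat flux of 5 kW/m^2 and a 300 K bulk fluid.
h = heat_transfer_coefficient(5000.0, [350.0, 340.0, 325.0], 300.0)
print(h)  # -> [100.0, 125.0, 200.0], in W/(m^2 K)
```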

  7. Concentration and Velocity Measurements of Both Phases in Liquid-Solid Slurries

    NASA Astrophysics Data System (ADS)

    Altobelli, Stephen; Hill, Kimberly; Caprihan, Arvind

    2007-03-01

    Natural and industrial slurry flows abound. They are difficult to calculate and to measure. We demonstrate a simple technique for studying steady slurries. We previously used time-of-flight techniques to study pressure-driven slurry flow in pipes. Only the continuous phase velocity and concentration fields were measured. The discrete phase concentration was inferred. In slurries composed of spherical, oil-filled pills and poly-methyl-siloxane oils, we were able to use inversion nulling to measure the concentration and velocity fields of both phases. Pills are available in 1-5 mm diameters and silicone oils are available in a wide range of viscosities, so a range of flows can be studied. We demonstrated the technique in horizontal, rotating cylinder flows. We combined two tried and true methods to do these experiments. The first used the difference in T1 to select between phases. The second used gradient waveforms with controlled first moments to produce velocity-dependent phase shifts. One novel processing method was developed that allows us to use static continuous phase measurements to reference both the continuous and discrete phase velocity images.

  8. Novel Calibration Technique for a Coulometric Evolved Vapor Analyzer for Measuring Water Content of Materials

    NASA Astrophysics Data System (ADS)

    Bell, S. A.; Miao, P.; Carroll, P. A.

    2018-04-01

    Evolved vapor coulometry is a measurement technique that selectively detects water and is used to measure water content of materials. The basis of the measurement is the quantitative electrolysis of evaporated water entrained in a carrier gas stream. Although this measurement has a fundamental principle—based on Faraday's law which directly relates electrolysis current to amount of substance electrolyzed—in practice it requires calibration. Commonly, reference materials of known water content are used, but the variety of these is limited, and they are not always available for suitable values, materials, with SI traceability, or with well-characterized uncertainty. In this paper, we report development of an alternative calibration approach using as a reference the water content of humid gas of defined dew point traceable to the SI via national humidity standards. The increased information available through this new type of calibration reveals a variation of the instrument performance across its range not visible using the conventional approach. The significance of this is discussed along with details of the calibration technique, example results, and an uncertainty evaluation.
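    The Faraday relation underlying the instrument is simple to state in code: integrating the electrolysis current gives the charge, and two electrons are transferred per water molecule. The current trace below is a made-up example, not instrument data, and the rectangle-rule integration is an illustrative simplification.

```python
WATER_MOLAR_MASS_G = 18.015   # g/mol
FARADAY = 96485.0             # C per mol of electrons
ELECTRONS_PER_H2O = 2         # electrolysis of water transfers 2 e- per molecule

def water_mass_mg(currents_a, dt_s):
    """Mass of water electrolyzed (mg) from evenly sampled electrolysis current (A)."""
    charge = sum(currents_a) * dt_s                  # rectangle-rule integral, coulombs
    moles = charge / (ELECTRONS_PER_H2O * FARADAY)   # mol of H2O
    return moles * WATER_MOLAR_MASS_G * 1000.0       # grams -> milligrams

# 965 one-second samples at a steady 2 mA: Q = 1.93 C, i.e. about 0.18 mg of water.
print(round(water_mass_mg([0.002] * 965, 1.0), 4))
```

    The paper's point is that this ideal relation still needs calibration in practice; the humid-gas reference provides the traceable "known mass" against which such a computed value is checked.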

  9. High resolution x-ray fluorescence spectroscopy - a new technique for site- and spin-selectivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xin

    1996-12-01

    X-ray spectroscopy has long been used to elucidate electronic and structural information of molecules. One of the weaknesses of x-ray absorption is its sensitivity to all of the atoms of a particular element in a sample. Throughout this thesis, a new technique for enhancing the site- and spin-selectivity of x-ray absorption has been developed. By high resolution fluorescence detection, the chemical sensitivity of K emission spectra can be used to identify oxidation and spin states; it can also be used to facilitate site-selective X-ray Absorption Near Edge Structure (XANES) and site-selective Extended X-ray Absorption Fine Structure (EXAFS). The spin polarization in K fluorescence could be used to generate spin-selective XANES or spin-polarized EXAFS, which provides a new measure of the spin density, or the nature of magnetic neighboring atoms. Finally, dramatic line-sharpening effects by the combination of absorption and emission processes allow observation of structure that is normally unobservable. All these unique characteristics can enormously simplify a complex x-ray spectrum. Applications of this novel technique have generated information from various transition-metal model compounds to metalloproteins. The absorption and emission spectra by high resolution fluorescence detection are interdependent. The ligand field multiplet model has been used for the analysis of K{alpha} and K{beta} emission spectra. A first demonstration on different chemical states of Fe compounds has shown the applicability of site selectivity and spin polarization. Different interatomic distances of the same element in different chemical forms have been detected using site-selective EXAFS.

  10. Dynamic footprint measurement collection technique and intrarater reliability: ink mat, paper pedography, and electronic pedography.

    PubMed

    Fascione, Jeanna M; Crews, Ryan T; Wrobel, James S

    2012-01-01

    Identifying the variability of footprint measurement collection techniques and the reliability of footprint measurements would assist with appropriate clinical foot posture appraisal. We sought to identify relationships between these measures in a healthy population. On 30 healthy participants, midgait dynamic footprint measurements were collected using an ink mat, paper pedography, and electronic pedography. The footprints were then digitized, and the following footprint indices were calculated with photo digital planimetry software: footprint index, arch index, truncated arch index, Chippaux-Smirak Index, and Staheli Index. Differences between techniques were identified with repeated-measures analysis of variance with post hoc test of Scheffe. In addition, to assess practical similarities between the different methods, intraclass correlation coefficients (ICCs) were calculated. To assess intrarater reliability, footprint indices were calculated twice on 10 randomly selected ink mat footprint measurements, and the ICC was calculated. Dynamic footprint measurements collected with an ink mat significantly differed from those collected with paper pedography (ICC, 0.85-0.96) and electronic pedography (ICC, 0.29-0.79), regardless of the practical similarities noted with ICC values (P = .00). Intrarater reliability for dynamic ink mat footprint measurements was high for the footprint index, arch index, truncated arch index, Chippaux-Smirak Index, and Staheli Index (ICC, 0.74-0.99). Footprint measurements collected with various techniques demonstrate differences. Interchangeable use of exact values without adjustment is not advised. Intrarater reliability of a single method (ink mat) was found to be high.
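    For readers unfamiliar with the reliability statistic quoted above, a one-way intraclass correlation can be computed directly from between- and within-subject mean squares. This is a generic ICC(1,1) sketch on invented repeated measurements, not the specific ICC model or data used in the study.

```python
def icc_oneway(measurements):
    """One-way random-effects ICC(1,1): (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    n = len(measurements)        # subjects (e.g., footprints)
    k = len(measurements[0])     # repeated ratings per subject
    subject_means = [sum(row) / k for row in measurements]
    grand = sum(subject_means) / n
    # Between-subject and within-subject mean squares from the one-way ANOVA.
    msb = k * sum((m - grand) ** 2 for m in subject_means) / (n - 1)
    msw = sum((x - row_mean) ** 2
              for row, row_mean in zip(measurements, subject_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two nearly identical ratings of four hypothetical arch-index values:
# agreement is close, so the ICC approaches 1 (high intrarater reliability).
icc = icc_oneway([[10.0, 10.2], [12.0, 11.8], [15.0, 15.1], [9.0, 9.3]])
print(round(icc, 3))
```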

  11. A comparative review of pharmacoeconomic guidelines.

    PubMed

    Jacobs, P; Bachynsky, J; Baladi, J F

    1995-09-01

    We have reviewed 4 international sets of guidelines for the economic evaluation of pharmaceutical products: those of the Australian Pharmaceutical Benefits Advisory Committee, the Canadian Coordinating Office for Health Technology Assessment, the Ontario Ministry of Health, and the England and Wales Department of Health. Comparison of these guidelines reveals that there are a number of differences between them, including disparities in outcome selection, costs and perspectives. These observations were attributed to differences in study purpose, conceptual approach, measurement techniques and value judgements. Uniformity can be achieved only in conceptual approach and measurement technique. Guidelines should be flexible to accommodate differences in the study purposes and value judgements of the analysts.

  12. A proposed configuration for a stepped specimen to be used in the systematic evaluation of factors influencing warpage in metallic alloys being used for cryogenic wind tunnel models

    NASA Technical Reports Server (NTRS)

    Wigley, D. A.

    1982-01-01

    A proposed configuration for a stepped specimen to be used in the system evaluation of mechanisms that can introduce warpage or dimensional changes in metallic alloys used for cryogenic wind tunnel models is described. Considerations for selecting a standard specimen are presented along with results obtained from an investigation carried out for VASCOMAX 200 maraging steel. Details of the machining and measurement techniques utilized in the investigation are presented. Initial results from the sample of VASCOMAX 200 show that the configuration and measuring techniques are capable of giving quantitative results.

  13. NLOS Correction/Exclusion for GNSS Measurement Using RAIM and City Building Models

    PubMed Central

    Hsu, Li-Ta; Gu, Yanlei; Kamijo, Shunsuke

    2015-01-01

    Currently, global navigation satellite system (GNSS) receivers can provide accurate and reliable positioning service in open-field areas. However, their performance in the downtown areas of cities is still affected by multipath and non-line-of-sight (NLOS) receptions. This paper proposes a new positioning method using 3D building models and the receiver autonomous integrity monitoring (RAIM) satellite selection method to achieve satisfactory positioning performance in urban areas. The 3D building model uses a ray-tracing technique to simulate the line-of-sight (LOS) and NLOS signal travel distance, known as the pseudorange, between the satellite and receiver. The proposed RAIM fault detection and exclusion (FDE) is able to compare the similarity between the raw pseudorange measurement and the simulated pseudorange. The measurement of the satellite will be excluded if the simulated and raw pseudoranges are inconsistent. Because the ray-tracing technique assumes a single reflection, an inconsistent case indicates a double or multiply reflected NLOS signal. According to the experimental results, the RAIM satellite selection technique can reduce the number of positioning solutions with large errors (solutions estimated on the wrong side of the road) by about 8.4% and 36.2% for the 3D building model method in the middle and deep urban canyon environments, respectively. PMID:26193278
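    The consistency test at the heart of the FDE step can be sketched as follows: the raw-minus-simulated pseudorange difference contains a receiver clock bias common to all satellites, so after removing a robust estimate of that bias (here, the median), any satellite whose residual remains large is treated as a multiply reflected NLOS signal and excluded. The threshold, satellite IDs, and pseudorange values below are illustrative assumptions, not the paper's parameters.

```python
def exclude_inconsistent(raw_pr, sim_pr, threshold_m=30.0):
    """Flag satellites whose raw pseudorange disagrees with the ray-traced simulation.

    raw_pr, sim_pr: dicts mapping satellite id -> pseudorange in metres.
    """
    diffs = {sat: raw_pr[sat] - sim_pr[sat] for sat in raw_pr}
    vals = sorted(diffs.values())
    clock_bias = vals[len(vals) // 2]  # median absorbs the common receiver clock bias
    return {sat for sat, d in diffs.items() if abs(d - clock_bias) > threshold_m}

# Satellite "E" disagrees with the single-reflection simulation by ~80 m,
# so it is excluded as a presumed multiply reflected NLOS measurement.
raw = {"A": 20000100.0, "B": 21000101.0, "C": 22000099.0,
       "D": 23000100.0, "E": 24000180.0}
sim = {"A": 20000000.0, "B": 21000000.0, "C": 22000000.0,
       "D": 23000000.0, "E": 24000000.0}
print(exclude_inconsistent(raw, sim))
```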

  14. VizieR Online Data Catalog: Face-on disk galaxies photometry. I. (de Jong+, 1994)

    NASA Astrophysics Data System (ADS)

    de Jong, R. S.; van der Kruit, P. C.

    1995-07-01

    We present accurate surface photometry in the B, V, R, I, H and K passbands of 86 spiral galaxies. The galaxies in this statistically complete sample of undisturbed spirals were selected from the UGC to have minimum diameters of 2' and minor over major axis ratios larger than 0.625. This sample has been selected in such a way that it can be used to represent a volume limited sample. The observation and reduction techniques are described in detail, especially the not often used driftscan technique for CCDs and the relatively new techniques using near-infrared (near-IR) arrays. For each galaxy we present radial profiles of surface brightness. Using these profiles we calculated the integrated magnitudes of the galaxies in the different passbands. We performed internal and external consistency checks for the magnitudes as well as the luminosity profiles. The internal consistency is well within the estimated errors. Comparisons with other authors indicate that measurements from photographic plates can show large deviations in the zero-point magnitude. Our surface brightness profiles agree within the errors with other CCD measurements. The comparison of integrated magnitudes shows a large scatter, but a consistent zero-point. These measurements will be used in a series of forthcoming papers to discuss central surface brightnesses, scalelengths, colors and color gradients of disks of spiral galaxies. (9 data files).

  15. Development of a Theory-Based Intervention to Increase Clinical Measurement of Reactive Balance in Adults at Risk of Falls.

    PubMed

    Sibley, Kathryn M; Brooks, Dina; Gardner, Paula; Janaudis-Ferreira, Tania; McGlynn, Mandy; OʼHoski, Sachi; McEwen, Sara; Salbach, Nancy M; Shaffer, Jennifer; Shing, Paula; Straus, Sharon E; Jaglal, Susan B

    2016-04-01

    Effective balance reactions are essential for avoiding falls, but are not regularly measured by physical therapists. Physical therapists report wanting to improve reactive balance assessment, and theory-based approaches are recommended as the foundation for the development of interventions. This article describes how a behavior change theory for health care providers, the theoretical domains framework (TDF), was used to develop an intervention to increase reactive balance measurement among physical therapists who work in rehabilitation settings and treat adults who are at risk of falls. We employed published recommendations for TDF-guided intervention development. We identified what health care provider behavior is in need of change, relevant barriers and facilitators, strategies to address them, and how we would measure behavior change. In this case, identifying strategies required selecting both a reactive balance measure and behavior change techniques. Previous research had determined that physical therapists need to increase reactive balance measurement, and identified barriers and facilitators that corresponded to 8 TDF domains. A published review informed the selection of the Balance Evaluation Systems Test (Reactive Postural Responses Section) as addressing the barriers and facilitators, and existing research informed the selection of 9 established behavior change techniques corresponding to each identified TDF domain. These TDF-based strategies were incorporated into a 12-month intervention with interactive group sessions, local champions, and health record modifications. Intervention effect can be evaluated using health record abstraction, questionnaires, and qualitative semistructured interviews.
Although future research will evaluate the intervention in a controlled study, the process of theory-based intervention development can be applied to other rehabilitation research contexts, maximizing the impact of this work. A Video Abstract is available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A123).

  16. Does Angling Technique Selectively Target Fishes Based on Their Behavioural Type?

    PubMed Central

    Wilson, Alexander D. M.; Brownscombe, Jacob W.; Sullivan, Brittany; Jain-Schlaepfer, Sofia; Cooke, Steven J.

    2015-01-01

    Recently, there has been growing recognition that fish harvesting practices can have important impacts on the phenotypic distributions and diversity of natural populations through a phenomenon known as fisheries-induced evolution. Here we experimentally show that two common recreational angling techniques (active crank baits versus passive soft plastics) differentially target wild largemouth bass (Micropterus salmoides) and rock bass (Ambloplites rupestris) based on variation in their behavioural tendencies. Fish were first angled in the wild using both techniques and then brought back to the laboratory and tested for individual-level differences in common estimates of personality (refuge emergence, flight-initiation-distance, latency-to-recapture with a net, and general activity) in an in-lake experimental arena. We found that different angling techniques appear to selectively target these species based on their boldness (as characterized by refuge emergence, a standard measure of boldness in fishes) but not other assays of personality. We also observed that body size was independently a significant predictor of personality in both species, though this varied between traits and species. Our results suggest a context-dependency for vulnerability to capture relative to behaviour in these fish species. Ascertaining the selective pressures angling practices exert on natural populations is an important area of fisheries research with significant implications for ecology, evolution, and resource management. PMID:26284779

  17. Proceedings of the XXII A.I.VE.LA. National Meeting

    NASA Astrophysics Data System (ADS)

    Primo Tomasini, Enrico

    2015-11-01

    A.I.VE.LA. - the Italian Association of Laser Velocimetry and non-invasive diagnostics - is a non-profit cultural association whose objective is to promote and support research in the field of non-contact or minimally invasive measurement techniques, particularly electromagnetic-based techniques and optical techniques. Through its Annual Meeting, AIVELA aims to create an active and stimulating forum where current research results and technical advances can be exchanged and the development of new systems for laboratory use, field testing and industrial application can be promoted. The techniques covered include Laser Doppler Anemometry - LDA, Phase Doppler Anemometry - PDA, Particle Image Velocimetry - PIV, Flow visualization techniques, Spectroscopic measurement techniques (LIF, Raman, etc.), Laser Doppler Vibrometry - LDV, Electronic Speckle Pattern Interferometry - ESPI, Holographic techniques, Shearography, Digital Image Correlation - DIC, Moiré techniques, Structured light techniques, Infrared imaging, Photoelasticity, Image based measurement techniques, Ultrasonic sensing, Acoustic and Aeroacoustic measurements, etc. The first Annual Meeting was held back in October 1992 and since then there has been a large consensus among the research and scientific communities that the papers presented at the event are of a high scientific interest. The XXII AIVELA Annual Meeting was held at the Faculty of Engineering of University of Rome Tor Vergata on 15-16 December 2014 and was organised in collaboration with the International Master Courses in "Protection Against CBRNe Events". This volume contains a selection of the papers presented at the event. The detailed Programme of the Meeting can be found at: http://www.aivela.org/XXII_Convegno/index.html Trusting that our Association and its initiatives will meet your interest, I wish to thank you in advance for your kind attention and hope to meet you soon at one of our events.

  18. Electrolytic recovery of mercury enriched in isotopic abundance

    DOEpatents

    Grossman, Mark W.

    1991-01-01

    The present invention is directed to a method of electrolytically extracting liquid mercury from HgO or Hg2Cl2. Additionally, there are disclosed two related techniques associated with the present invention, namely (1) a technique for selectively removing product from different regions of a long photochemical reactor (photoreactor) and (2) a method of accurately measuring the total quantity of mercury formed as either HgO or Hg2Cl2.

  19. WAVELENGTH-RESOLVED REMPI MASS SPECTROMETRY FOR THE MONITORING OF TOXIC INCINERATION TRACE GASES

    EPA Science Inventory

    Structure-selective measurement techniques are needed for the assessment of the toxic loading of incinerator gases. This review article shows that wavelength-resolved, resonance-enhanced, multiphoton-ionization (REMPI) mass spectrometry can be used to this end. In this case, how...

  20. Eyeball Measurement of Dexterity: Tests as Alternatives to Interviews.

    ERIC Educational Resources Information Center

    Guion, Robert M.; Imada, Andrew S.

    1981-01-01

    Reports a study conducted for litigation in a sex discrimination case dealing with misuse of an employment interview. Results show that dexterity could not be determined in an interview and a more appropriate selection technique such as a test was required. (Author/JAC)

  1. RADON REDUCTION TECHNIQUES FOR DETACHED HOUSES, TECHNICAL GUIDANCE (SECOND EDITION)

    EPA Science Inventory

    This document is intended for use by State officials, radon mitigation contractors, building contractors, concerned homeowners, and other persons as an aid in the selection, design, and operation of radon reduction measures for houses. It is the second edition of EPA's techn...

  2. Measurement of Net Fluxes of Ammonium and Nitrate at the Surface of Barley Roots Using Ion-Selective Microelectrodes 1

    PubMed Central

    Henriksen, Gordon H.; Raman, D. Raj; Walker, Larry P.; Spanswick, Roger M.

    1992-01-01

    Net fluxes of NH4+ and NO3− into roots of 7-day-old barley (Hordeum vulgare L. cv Prato) seedlings varied both with position along the root axis and with time. These variations were not consistent between replicate plants; different roots showed unique temporal and spatial patterns of uptake. Axial scans of NH4+ and NO3− net fluxes were conducted along the apical 7 centimeters of seminal roots of intact barley seedlings in solution culture using ion-selective microelectrodes in the unstirred layer immediately external to the root surface. Theoretically derived relationships between uptake and concentration gradients, combined with experimental observations of the conditions existing in our experimental system, permitted evaluation of the contribution of bulk water flow to ion movement in the unstirred layer, as well as a measure of the spatial resolution of the microelectrode flux estimation technique. Finally, a method was adopted to assess the accuracy of this technique. PMID:16668947
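    The flux estimate behind the microelectrode technique amounts to Fick's first law applied to the concentration gradient measured in the unstirred layer. The diffusion coefficient and concentration readings below are illustrative numbers, not the study's calibrated values, and the bulk-flow correction discussed in the paper is omitted.

```python
def net_flux(c_near, c_far, x_near, x_far, diffusivity):
    """Fick's first law, J = -D * dC/dx; negative J means net flux toward the root."""
    return -diffusivity * (c_far - c_near) / (x_far - x_near)

# Hypothetical NO3- readings 10 um and 30 um from the root surface (mol/m^3),
# with D ~ 1.9e-9 m^2/s assumed for nitrate in water.
j = net_flux(0.10, 0.12, 10e-6, 30e-6, 1.9e-9)
print(j)  # negative -> net uptake by the root, in mol m^-2 s^-1
```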

  3. Investigation into Contact Resistance And Damage of Metal Contacts Used in RF-MEMS Switches

    DTIC Science & Technology

    2009-09-01

    mechanically cycled by a piezoelectric transducer (PZT). The resistance through the simulated switch was measured using a four-wire measurement technique... research, including a brief overview of contact theory. Then chapter 3 gives an overview of... as described in [3, 118]. The measurement of surface texture and... These figures were published in Materials Selection in Mechanical Design, Michael F

  4. Automatic measurement of images on astrometric plates

    NASA Astrophysics Data System (ADS)

    Ortiz Gil, A.; Lopez Garcia, A.; Martinez Gonzalez, J. M.; Yershov, V.

    1994-04-01

    We present some results on the process of automatic detection and measurement of objects in overlapped fields of astrometric plates. The main steps of our algorithm are the following: determination of the Scale and Tilt between the charge-coupled device (CCD) and microscope coordinate systems and estimation of the signal-to-noise ratio in each field; image identification and improvement of its position and size; final image centering; and image selection and storage. Several parameters allow the use of variable criteria for image identification, characterization and selection. Problems related with faint images and crowded fields will be approached by special techniques (morphological filters, histogram properties and fitting models).

  5. Satellite Data Transmission (SDT) requirement

    NASA Technical Reports Server (NTRS)

    Chie, C. M.; White, M.; Lindsey, W. C.

    1984-01-01

    An 85 Mb/s modem/codec to operate in a 34 MHz C-band domestic satellite transponder at a system carrier-to-noise power ratio of 19.5 dB is discussed. Characteristics of a satellite channel and the approach adopted for the satellite data transmission modem/codec selection are discussed. Measured data and simulation results of the existing 50 Mbps link are compared and used to verify the simulation techniques. Various modulation schemes that were screened for the SDT are discussed and the simulated performance of two prime candidates, the 8 PSK and the SMSK/2, is given. The selection process that led to the candidate codec techniques is documented and the technology of the modem/codec candidates is assessed. Costs of the modems and codecs are estimated.

  6. Production and characterization of large-area sputtered selective solar absorber coatings

    NASA Astrophysics Data System (ADS)

    Graf, Wolfgang; Koehl, Michael; Wittwer, Volker

    1992-11-01

    Most of the commercially available selective solar absorber coatings are produced by electroplating. Often the reproducibility or the durability of their optical properties is not very satisfying. Good reproducibility can be achieved by sputtering, the technique used for the production of low-ε coatings for windows. The suitability of this kind of deposition technique for flat-plate solar absorber coatings based on the principle of ceramic/metal composites was investigated for different material combinations, and prototype collectors were manufactured. The optical characterization of the coatings is based on spectral measurements of the near-normal/hemispherical and the angle-dependent reflectance in the wavelength range 0.38-17 micrometers. The durability assessment was carried out by temperature tests in ovens and climatic chambers.

  7. Utilization of Magnetorheological Finishing as a Diagnostic Tool for Investigating the Three-Dimensional Structure of Fractures in Fused Silica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menapace, J A; Davis, P J; Steele, W A

    2005-11-11

    We have developed an experimental technique that combines magnetorheological finishing (MRF) and microscopy to examine fractures and/or artifacts in optical materials. The technique can be readily used to provide access to, and interrogation of, a selected segment of a fracture or object that extends beneath the surface. Depth slicing, or cross-sectioning at selected intervals, further allows the observation and measurement of the three-dimensional nature of the sites and the generation of volumetric representations that can be used to quantify shape and depth, and to understand how they were created, how they interact with surrounding material, and how they may be eliminated or mitigated.

  8. Coal thickness gauge using RRAS techniques, part 1. [radiofrequency resonance absorption

    NASA Technical Reports Server (NTRS)

    Rollwitz, W. L.; King, J. D.

    1978-01-01

    A noncontacting sensor having a measurement range of 0 to 6 in or more, and with an accuracy of 0.5 in or better is needed to control the machinery used in modern coal mining so that the thickness of the coal layer remaining over the rock is maintained within selected bounds. The feasibility of using the radiofrequency resonance absorption (RRAS) techniques of electron magnetic resonance (EMR) and nuclear magnetic resonance (NMR) as the basis of a coal thickness gauge is discussed. The EMR technique was found, by analysis and experiments, to be well suited for this application.

  9. Comparison of commercial analytical techniques for measuring chlorine dioxide in urban desalinated drinking water.

    PubMed

    Ammar, T A; Abid, K Y; El-Bindary, A A; El-Sonbati, A Z

    2015-12-01

    Most drinking water industries are closely examining options to maintain a certain level of disinfectant residual throughout the entire distribution system. Chlorine dioxide is a promising disinfectant that is usually used as a secondary disinfectant, but the selection of the proper monitoring analytical technique to ensure disinfection and regulatory compliance has been debated within the industry. This research objectively compared the performance of commercially available analytical techniques used for chlorine dioxide measurement (namely, chronoamperometry, DPD (N,N-diethyl-p-phenylenediamine), Lissamine Green B (LGB WET) and amperometric titration) to determine the superior technique. The commonly available commercial analytical techniques were evaluated over a wide range of chlorine dioxide concentrations against pre-defined criteria. Among the four techniques, chronoamperometry showed the highest accuracy and precision, and the various influencing factors examined, such as sample temperature, high ionic strength, and other interferences, did not diminish its performance, which was adequate in all matrices. This study is a step towards proper disinfection monitoring, and it assists engineers with chlorine dioxide disinfection system planning and management.
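    The accuracy/precision comparison can be illustrated generically: for each technique, the mean bias against a reference concentration measures accuracy, and the standard deviation of replicates measures precision. The replicate readings and reference value below are fabricated for illustration and are not the study's data.

```python
def bias_and_sd(readings, reference):
    """Accuracy as mean bias vs. the reference; precision as sample standard deviation."""
    n = len(readings)
    mean = sum(readings) / n
    sd = (sum((r - mean) ** 2 for r in readings) / (n - 1)) ** 0.5
    return mean - reference, sd

# Five hypothetical replicate ClO2 readings (mg/L) against a 1.00 mg/L standard.
bias, sd = bias_and_sd([1.02, 0.99, 1.01, 1.00, 0.98], 1.00)
print(round(bias, 3), round(sd, 3))
```

    Ranking techniques then reduces to preferring the smallest |bias| and smallest spread across the tested concentration range.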

  10. Discriminating Induced-Microearthquakes Using New Seismic Features

    NASA Astrophysics Data System (ADS)

    Mousavi, S. M.; Horton, S.

    2016-12-01

    We studied characteristics of induced microearthquakes on the basis of waveforms recorded on a limited number of surface receivers using machine-learning techniques. Forty features in the time, frequency, and time-frequency domains were measured on each waveform, and several techniques such as correlation-based feature selection, Artificial Neural Networks (ANNs), Logistic Regression (LR) and X-means were used as research tools to explore the relationship between these seismic features and source parameters. The results show that spectral features have the highest correlation to source depth. Two new measurements developed as seismic features for this study, spectral centroids and 2D cross-correlations in the time-frequency domain, performed better than the common seismic measurements. These features can be used by machine-learning techniques for efficient automatic classification of low-energy signals recorded at one or more seismic stations. We applied the technique to 440 microearthquakes (−1.7…). Reference: Mousavi, S.M., S.P. Horton, C.A. Langston, and B. Samei (2016), Seismic features and automatic discrimination of deep and shallow induced-microearthquakes using neural network and logistic regression, Geophys. J. Int., doi: 10.1093/gji/ggw258.
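
    One of the two new features named above, the spectral centroid, is straightforward to illustrate. The sketch below is a dependency-free Python toy (the naive DFT and the synthetic one-tone "waveform" are ours, not the paper's processing pipeline):

```python
import math

def spectral_centroid(signal, fs):
    """Spectral centroid: the amplitude-weighted mean frequency of a signal.
    A naive DFT keeps the sketch dependency-free; real feature extraction
    would use an FFT library."""
    n = len(signal)
    mags, freqs = [], []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
        freqs.append(k * fs / n)
    return sum(f * m for f, m in zip(freqs, mags)) / sum(mags)

# A pure 10 Hz tone sampled at 100 Hz: the centroid should sit near 10 Hz.
fs = 100
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
print(round(spectral_centroid(sig, fs), 1))  # → 10.0
```

    For a real microearthquake waveform, the centroid summarizes where the signal's energy sits in frequency, which is what makes it usable as a depth-discriminating feature.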

  11. Embedded Fiber Optic Sensors for Measuring Transient Detonation/Shock Behavior;Time-of-Arrival Detection and Waveform Determination.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavez, Marcus Alexander; Willis, Michael David; Covert, Timothy Todd

    2014-09-01

    The miniaturization of explosive components has driven the need for a corresponding miniaturization of the current diagnostic techniques available to measure the explosive phenomena. Laser interferometry and the use of spectrally coated optical windows have proven to be an essential interrogation technique to acquire particle velocity time history data in one-dimensional gas gun and relatively large-scale explosive experiments. A new diagnostic technique described herein allows for experimental measurement of apparent particle velocity time histories in microscale explosive configurations and can be applied to shocks/non-shocks in inert materials. The diagnostic, Embedded Fiber Optic Sensors (EFOS), has been tested in challenging microscopic experimental configurations that give confidence in the technique's ability to measure the apparent particle velocity time histories of an explosive with pressure outputs in the tenths of kilobars to several kilobars. Embedded Fiber Optic Sensors also allow for several measurements to be acquired in a single experiment because they are microscopic, thus reducing the number of experiments necessary. The future of EFOS technology will focus on further miniaturization, material selection appropriate for the operating pressure regime, and extensive hydrocode and optical analysis to transform apparent particle velocity time histories into true particle velocity time histories as well as the more meaningful pressure time histories.

  12. A feature selection approach towards progressive vector transmission over the Internet

    NASA Astrophysics Data System (ADS)

    Miao, Ru; Song, Jia; Feng, Min

    2017-09-01

    WebGIS is widely used for visualizing and sharing geospatial information over the Internet. In order to improve the efficiency of client applications, a web-based progressive vector transmission approach is proposed: important features should be selected and transferred first, so methods for measuring the importance of features must be considered in the progressive transmission. However, studies on progressive transmission for large-volume vector data have mostly focused on map generalization in the field of cartography and have rarely addressed the quantitative selection of geographic features. This paper applies information theory to measure the feature importance of vector maps. A measurement model for the amount of information carried by vector features is defined to deal with feature selection; the model involves a geometry factor, a spatial distribution factor and a thematic attribute factor. Moreover, a real-time transport protocol (RTP)-based progressive transmission method is presented to improve the transmission of vector data. To demonstrate the essential methodology and key techniques, a prototype for web-based progressive vector transmission is presented, and an experiment of progressive selection and transmission for vector features is conducted. The experimental results indicate that our approach clearly improves the performance and end-user experience of delivering and manipulating large vector data over the Internet.
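
    A three-factor importance score of this general shape can be sketched in a few lines of Python. The factor definitions and equal weights below are our illustration, not the paper's actual measurement model:

```python
import math
from collections import Counter

def information_scores(features):
    """Toy per-feature importance for progressive transmission, combining
    geometry complexity, spatial isolation, and thematic self-information
    with equal weights. The paper's model defines these factors differently;
    this only illustrates the idea of ranking features by information."""
    n = len(features)
    attr_freq = Counter(f["attr"] for f in features)
    max_v = max(len(f["coords"]) for f in features)

    def centroid(f):
        xs, ys = zip(*f["coords"])
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    cents = [centroid(f) for f in features]
    max_d = max(math.dist(a, b) for a in cents for b in cents) or 1.0
    scores = []
    for f, c in zip(features, cents):
        geometry = len(f["coords"]) / max_v                    # geometry factor
        spatial = sum(math.dist(c, o) for o in cents) / ((n - 1) * max_d)
        thematic = -math.log2(attr_freq[f["attr"]] / n)        # rare = informative
        scores.append((geometry + spatial + thematic) / 3)
    return scores

features = [
    {"coords": [(0, 0), (1, 0), (1, 1), (0, 1)], "attr": "road"},
    {"coords": [(5, 5), (6, 5)], "attr": "road"},
    {"coords": [(20, 20), (21, 22)], "attr": "river"},  # rare and isolated
]
scores = information_scores(features)
order = sorted(range(len(features)), key=lambda i: -scores[i])
print(order[0])  # → 2  (the rare, isolated river is transmitted first)
```

    In a progressive-transmission server, features would be streamed in descending score order so the client sees the most informative geometry first.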

  13. Investigation of human biomarkers in exhaled breath by laser photoacoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Dumitras, D. C.; Giubileo, G.; Puiu, A.

    2005-06-01

    The paper underlines the importance of breath tests in medicine and the potential of laser techniques to measure human biomarkers in vivo and in real time. The presence of trace amounts of gases or the metabolites of a precursor in exhaled air could be linked to kidney or liver malfunction, asthma, diabetes, cancer, ulcers or neurological disorders. The measurement of some human biomarkers (ethylene, ammonia) by laser photoacoustic spectroscopy ensures very high sensitivity and selectivity. The technical characteristics of this instrument were measured to determine the detection limits (sub-ppb for ethylene). The results of ethylene release following lipid peroxidation initiated by X-ray irradiation or ingestion of radioactive compounds are presented. The possibility of extending this technique to the measurement of breath ammonia levels in patients with end-stage renal disease while they are undergoing hemodialysis is discussed.

  14. A Novel Fast Helical 4D-CT Acquisition Technique to Generate Low-Noise Sorting Artifact–Free Images at User-Selected Breathing Phases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, David, E-mail: dhthomas@mednet.ucla.edu; Lamb, James; White, Benjamin

    2014-05-01

    Purpose: To develop a novel 4-dimensional computed tomography (4D-CT) technique that exploits standard fast helical acquisition, a simultaneous breathing surrogate measurement, deformable image registration, and a breathing motion model to remove sorting artifacts. Methods and Materials: Ten patients were imaged under free-breathing conditions 25 successive times in alternating directions with a 64-slice CT scanner using a low-dose fast helical protocol. An abdominal bellows was used as a breathing surrogate. Deformable registration was used to register the first image (defined as the reference image) to the subsequent 24 segmented images. Voxel-specific motion model parameters were determined using a breathing motion model. The tissue locations predicted by the motion model in the 25 images were compared against the deformably registered tissue locations, allowing a model prediction error to be evaluated. A low-noise image was created by averaging the 25 images deformed to the first image geometry, reducing statistical image noise by a factor of 5. The motion model was used to deform the low-noise reference image to any user-selected breathing phase. A voxel-specific correction was applied to correct the Hounsfield units for lung parenchyma density as a function of lung air filling. Results: Images produced using the model at user-selected breathing phases did not suffer from sorting artifacts common to conventional 4D-CT protocols. The mean prediction error across all patients between the breathing motion model predictions and the measured lung tissue positions was determined to be 1.19 ± 0.37 mm. Conclusions: The proposed technique can be used as a clinical 4D-CT technique. It is robust in the presence of irregular breathing and allows the entire imaging dose to contribute to the resulting image quality, providing sorting artifact–free images at a patient dose similar to or less than current 4D-CT techniques.

  15. A novel fast helical 4D-CT acquisition technique to generate low-noise sorting artifact-free images at user-selected breathing phases.

    PubMed

    Thomas, David; Lamb, James; White, Benjamin; Jani, Shyam; Gaudio, Sergio; Lee, Percy; Ruan, Dan; McNitt-Gray, Michael; Low, Daniel

    2014-05-01

    To develop a novel 4-dimensional computed tomography (4D-CT) technique that exploits standard fast helical acquisition, a simultaneous breathing surrogate measurement, deformable image registration, and a breathing motion model to remove sorting artifacts. Ten patients were imaged under free-breathing conditions 25 successive times in alternating directions with a 64-slice CT scanner using a low-dose fast helical protocol. An abdominal bellows was used as a breathing surrogate. Deformable registration was used to register the first image (defined as the reference image) to the subsequent 24 segmented images. Voxel-specific motion model parameters were determined using a breathing motion model. The tissue locations predicted by the motion model in the 25 images were compared against the deformably registered tissue locations, allowing a model prediction error to be evaluated. A low-noise image was created by averaging the 25 images deformed to the first image geometry, reducing statistical image noise by a factor of 5. The motion model was used to deform the low-noise reference image to any user-selected breathing phase. A voxel-specific correction was applied to correct the Hounsfield units for lung parenchyma density as a function of lung air filling. Images produced using the model at user-selected breathing phases did not suffer from sorting artifacts common to conventional 4D-CT protocols. The mean prediction error across all patients between the breathing motion model predictions and the measured lung tissue positions was determined to be 1.19 ± 0.37 mm. The proposed technique can be used as a clinical 4D-CT technique. It is robust in the presence of irregular breathing and allows the entire imaging dose to contribute to the resulting image quality, providing sorting artifact-free images at a patient dose similar to or less than current 4D-CT techniques. Copyright © 2014 Elsevier Inc. All rights reserved.
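
    The quoted factor-of-5 noise reduction is just the σ/√N scaling for averaging N = 25 independent, co-registered images. A quick simulation (the Hounsfield mean and noise level below are invented) confirms the scaling:

```python
import random
import statistics

random.seed(42)
TRUE_HU, SIGMA, N_IMAGES, N_VOX = -700.0, 50.0, 25, 2000

# One noisy low-dose image versus the average of 25 co-registered images.
single = [random.gauss(TRUE_HU, SIGMA) for _ in range(N_VOX)]
averaged = [statistics.mean(random.gauss(TRUE_HU, SIGMA) for _ in range(N_IMAGES))
            for _ in range(N_VOX)]

ratio = statistics.stdev(single) / statistics.stdev(averaged)
print(round(ratio, 1))  # close to sqrt(25) = 5
```

    The same arithmetic is why the technique can use a low-dose protocol per scan: the dose of all 25 acquisitions contributes to the final image quality.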

  16. Selecting Feature Subsets Based on SVM-RFE and the Overlapping Ratio with Applications in Bioinformatics.

    PubMed

    Lin, Xiaohui; Li, Chao; Zhang, Yanhui; Su, Benzhe; Fan, Meng; Wei, Hai

    2017-12-26

    Feature selection is an important topic in bioinformatics. Defining informative features from complex high dimensional biological data is critical in disease study, drug development, etc. Support vector machine-recursive feature elimination (SVM-RFE) is an efficient feature selection technique that has shown its power in many applications. It ranks the features according to the recursive feature deletion sequence based on SVM. In this study, we propose a method, SVM-RFE-OA, which combines the classification accuracy rate and the average overlapping ratio of the samples to determine the number of features to be selected from the feature rank of SVM-RFE. Meanwhile, to measure the feature weights more accurately, we propose a modified SVM-RFE-OA (M-SVM-RFE-OA) algorithm that temporarily screens out the samples lying in a heavy overlapping area in each iteration. The experiments on the eight public biological datasets show that the discriminative ability of the feature subset could be measured more accurately by combining the classification accuracy rate with the average overlapping degree of the samples compared with using the classification accuracy rate alone, and shielding the samples in the overlapping area made the calculation of the feature weights more stable and accurate. The methods proposed in this study can also be used with other RFE techniques to define potential biomarkers from big biological data.
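
    The RFE loop at the heart of SVM-RFE is simple: train a linear classifier, drop the feature with the smallest absolute weight, and repeat. The sketch below substitutes a tiny perceptron for the SVM and uses synthetic data, so it illustrates the recursion rather than the paper's SVM-RFE-OA variants:

```python
import random

def train_linear(X, y, epochs=100, lr=0.1):
    """Tiny perceptron standing in for the linear SVM of SVM-RFE."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):                 # labels in {-1, +1}
            if yi * sum(wj * xj for wj, xj in zip(w, xi)) <= 0:
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
    return w

def rfe_ranking(X, y):
    """Recursively eliminate the feature with the smallest |weight|."""
    active = list(range(len(X[0])))
    eliminated = []
    while len(active) > 1:
        w = train_linear([[row[j] for j in active] for row in X], y)
        worst = min(range(len(active)), key=lambda j: abs(w[j]))
        eliminated.append(active.pop(worst))
    eliminated.append(active[0])
    return eliminated[::-1]                      # most informative first

random.seed(0)
# Feature 0 separates the classes; features 1 and 2 are pure noise.
X = [[cls * 3.0 + random.gauss(0, 0.3), random.gauss(0, 1), random.gauss(0, 1)]
     for cls in (-1, 1) for _ in range(20)]
y = [-1] * 20 + [1] * 20
print(rfe_ranking(X, y)[0])  # feature 0 should rank first
```

    The paper's contribution sits on top of this loop: the overlapping ratio decides how many of the top-ranked features to keep, and M-SVM-RFE-OA screens out overlapping samples before each weight computation.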

  17. Design and Evaluation of Perceptual-based Object Group Selection Techniques

    NASA Astrophysics Data System (ADS)

    Dehmeshki, Hoda

    Selecting groups of objects is a frequent task in graphical user interfaces. It is required prior to many standard operations such as deletion, movement, or modification. Conventional selection techniques are lasso, rectangle selection, and the selection and de-selection of items through the use of modifier keys. These techniques may become time-consuming and error-prone when target objects are densely distributed or when the distances between target objects are large. Perceptual-based selection techniques can considerably improve selection tasks when targets have a perceptual structure, for example when arranged along a line. Current methods to detect such groups use ad hoc grouping algorithms that are not based on results from perception science. Moreover, these techniques do not allow selecting groups with arbitrary arrangements or permit modifying a selection. This dissertation presents two domain-independent perceptual-based systems that address these issues. Based on established group detection models from perception research, the proposed systems detect perceptual groups formed by the Gestalt principles of good continuation and proximity. The new systems provide gesture-based or click-based interaction techniques for selecting groups with curvilinear or arbitrary structures as well as clusters. Moreover, the gesture-based system is adapted for the graph domain to facilitate path selection. This dissertation includes several user studies that show the proposed systems outperform conventional selection techniques when targets form salient perceptual groups and are still competitive when targets are semi-structured.
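
    The proximity principle in particular is easy to operationalize as clustering on an "is-near" graph. The following Python sketch (threshold, data, and the flat BFS grouping are our simplification; the dissertation's perceptual model is considerably richer) selects a whole group from any one of its members:

```python
import math

def proximity_groups(points, threshold):
    """Group objects by the Gestalt proximity principle: objects closer than
    `threshold` join the same group (BFS over the 'is-near' graph)."""
    unvisited = set(range(len(points)))
    groups = []
    while unvisited:
        start = min(unvisited)
        unvisited.remove(start)
        group, frontier = {start}, [start]
        while frontier:
            p = frontier.pop()
            near = {q for q in unvisited
                    if math.dist(points[p], points[q]) < threshold}
            unvisited -= near
            group |= near
            frontier.extend(near)
        groups.append(sorted(group))
    return groups

# Two tight clusters and one isolated object.
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (30, 30)]
print(proximity_groups(pts, 3.0))  # → [[0, 1, 2], [3, 4], [5]]
```

    A click-based group-selection technique can then map "click on object i" to "select the group containing i", instead of requiring a lasso around every member.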

  18. Differentiation of four Aspergillus species and one Zygosaccharomyces with two electronic tongues based on different measurement techniques.

    PubMed

    Söderström, C; Rudnitskaya, A; Legin, A; Krantz-Rülcker, C

    2005-09-29

    Two electronic tongues based on different measurement techniques were applied to the discrimination of four molds and one yeast. The chosen microorganisms were different species of Aspergillus and the yeast species Zygosaccharomyces bailii, which are known as food contaminants. The electronic tongue developed in Linköping University was based on voltammetry; four working electrodes made of noble metals were used in a standard three-electrode configuration in this case. The St. Petersburg electronic tongue consisted of 27 potentiometric chemical sensors with enhanced cross-sensitivity; sensors with chalcogenide glass and plasticized PVC membranes were used. Two sets of samples were measured using both electronic tongues. Firstly, broths were measured in which either one of the molds or the yeast grew until the late logarithmic phase or the border of the stationary phase. Secondly, broths inoculated with either one of the molds or the yeast were measured at five different times during microorganism growth. Data were evaluated using principal component analysis (PCA), partial least squares regression (PLS) and linear discriminant analysis (LDA). It was found that both measurement techniques could differentiate between fungi species. Merging data from both electronic tongues improved differentiation of the samples in selected cases.

  19. Three-dimensional motion measurements of free-swimming microorganisms using digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Lee, Sang Joon; Seo, Kyung Won; Choi, Yong Seok; Sohn, Myong Hwan

    2011-06-01

    A digital holographic microscope is employed to measure the 3D motion of free-swimming microorganisms. The focus function used to quantify image sharpness provides a better depth-directional accuracy with a smaller depth-of-focus compared with the intensity method in determining the depth-directional position of spherical particles of various diameters. The focus function is then applied to measure the 3D positions of free-swimming microorganisms, namely dinoflagellates C. polykrikoides and P. minimum. Both automatic segmentation and proper selection of a focus function for a selected segment are important processes in measuring the positional information of two free-swimming microorganisms of different shapes with various width-to-length ratios. The digital holographic microscopy technique improved in this work is useful for measuring 3D swimming trajectories, velocities and attitudes of hundreds of microorganisms simultaneously. It also exhibits exceptional depth-directional accuracy.
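
    A focus function of the kind compared here scores candidate reconstruction depths by image sharpness; the depth that maximizes the score is taken as the particle's axial position. A minimal variance-based sketch (the 3×3 "images" are invented, and real focus functions operate on holographic reconstructions):

```python
def focus_measure(img):
    """Variance-of-intensity focus function: sharper (in-focus) planes give
    higher variance. A stand-in for the focus functions compared in the paper."""
    vals = [p for row in img for p in row]
    mean = sum(vals) / len(vals)
    return sum((p - mean) ** 2 for p in vals) / len(vals)

sharp   = [[0, 9, 0], [9, 0, 9], [0, 9, 0]]   # high-contrast (in focus)
blurred = [[4, 5, 4], [5, 4, 5], [4, 5, 4]]   # low-contrast (defocused)
print(focus_measure(sharp) > focus_measure(blurred))  # → True
```

    Scanning such a measure along the reconstruction depth and taking the argmax is what gives the depth-directional position of each segmented organism.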

  20. Development of custom measurement system for biomechanical evaluation of independent wheelchair transfers

    PubMed Central

    Koontz, Alicia M.; Lin, Yen-Sheng; Kankipati, Padmaja; Boninger, Michael L.; Cooper, Rory A.

    2017-01-01

    This study describes a new custom measurement system designed to investigate the biomechanics of sitting-pivot wheelchair transfers and assesses the reliability of selected biomechanical variables. Variables assessed include horizontal and vertical reaction forces underneath both hands and three-dimensional trunk, shoulder, and elbow range of motion. We examined the reliability of these measures between 5 consecutive transfer trials for 5 subjects with spinal cord injury and 12 non-disabled subjects while they performed a self-selected sitting pivot transfer from a wheelchair to a level bench. A majority of the biomechanical variables demonstrated moderate to excellent reliability (r > 0.6). The transfer measurement system recorded reliable and valid biomechanical data for future studies of sitting-pivot wheelchair transfers. We recommend a minimum of five transfer trials to obtain a reliable measure of transfer technique for future studies. PMID:22068376

  1. Efficiency measurement of health care organizations: What models are used?

    PubMed Central

    Jaafaripooyan, Ebrahim; Emamgholipour, Sara; Raei, Behzad

    2017-01-01

    Background: Literature abounds with various techniques for efficiency measurement of health care organizations (HCOs), which should be used cautiously and appropriately. The present study aimed at discovering the rules regulating the interplay among the number of inputs, outputs, and decision-making units (DMUs), identifying all methods used for the measurement of Iranian HCOs, and critically appraising all DEA studies on Iranian HCOs in their application of such rules. Methods: The present study employed a systematic search of all studies related to efficiency measurement of Iranian HCOs. A search was conducted in different databases such as PubMed and Scopus between 2001 and 2015 to identify the studies related to efficiency measurement in health care. The retrieved studies passed through a multi-stage (title, abstract, body) filtering process. A data extraction table for each study was completed and included method, number of inputs and outputs, DMUs, and their efficiency score. Results: Various methods were found for efficiency measurement. Overall, 122 studies were retrieved, of which 73 had exclusively employed the DEA technique for measuring the efficiency of HCOs in Iran, and 23 had used hybrid models (including DEA). Only 6 studies had explicitly used the rules of thumb. Conclusion: The number of inputs, outputs, and DMUs should be cautiously selected in DEA-like techniques, as their proportionality can directly affect the discriminatory power of the technique. The given literature seemed to be, to a large extent, unsuccessful in attending to such proportionality. This study collected a list of key rules (of thumb) on the interplay of inputs, outputs, and DMUs, which could be considered by most researchers keen to apply the DEA technique.
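
    The most widely cited of these rules of thumb can be encoded directly; the thresholds below are the common ones from the DEA literature (variants exist), so treat this as an illustrative check rather than a definitive test:

```python
def dea_discrimination_ok(n_dmus, n_inputs, n_outputs):
    """Common DEA rule of thumb: the number of DMUs should be at least
    max(inputs * outputs, 3 * (inputs + outputs)) to preserve the model's
    discriminatory power. Thresholds vary across the literature."""
    return n_dmus >= max(n_inputs * n_outputs, 3 * (n_inputs + n_outputs))

print(dea_discrimination_ok(30, 3, 2))  # → True  (30 >= max(6, 15))
print(dea_discrimination_ok(10, 4, 3))  # → False (10 <  max(12, 21))
```

    A study with too few DMUs relative to its input/output count tends to rate most units as efficient, which is exactly the proportionality failure the review flags.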

  2. Determination of foliar uptake of water droplets on waxy leaves in controlled environmental system

    USDA-ARS?s Scientific Manuscript database

    Pertinent techniques for determination of plant cuticle permeability are needed to select proper doses of active ingredients and spray additives to improve pesticide application efficacy. A controlled environmental system with 100% relative humidity was developed for direct measurements of foliar up...

  3. Method of excess fractions with application to absolute distance metrology: wavelength selection and the effects of common error sources.

    PubMed

    Falaggis, Konstantinos; Towers, David P; Towers, Catherine E

    2012-09-20

    Multiwavelength interferometry (MWI) is a well-established technique in the field of optical metrology. Previously, we reported a theoretical analysis of the method of excess fractions that describes the mutual dependence of unambiguous measurement range, reliability, and the measurement wavelengths. In this paper, wavelength selection strategies are introduced that build on the theoretical description and maximize the reliability of the calculated fringe order for a given measurement range, number of wavelengths, and level of phase noise. Practical implementation issues for an MWI interferometer are analyzed theoretically. It is shown that dispersion compensation is best implemented by use of reference measurements around absolute zero in the interferometer. Furthermore, the effects of wavelength uncertainty allow the ultimate performance of an MWI interferometer to be estimated.
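
    The core of excess fractions is that each wavelength yields only a fractional fringe order, and the integer orders are recovered by demanding consistency across wavelengths. A noise-free, two-wavelength toy version in Python (the wavelengths and distance are illustrative; the paper's analysis covers the noisy, multi-wavelength case):

```python
lam1, lam2 = 1550.0e-9, 1551.0e-9             # metres, illustrative
synthetic = lam1 * lam2 / abs(lam1 - lam2)    # beat wavelength ≈ 2.4 mm
true_d = 0.9e-3                               # unknown absolute distance (m)

# The interferometer yields only fractional fringe orders (phase / 2*pi):
f1 = (2 * true_d / lam1) % 1.0
f2 = (2 * true_d / lam2) % 1.0

# Excess fractions: scan candidate integer orders for lam1 and keep the one
# whose implied distance also reproduces the fraction measured at lam2.
def mismatch(m):
    d = (m + f1) * lam1 / 2
    return abs((2 * d / lam2) % 1.0 - f2)

m_best = min(range(int(synthetic / lam1) + 1), key=mismatch)
d_est = (m_best + f1) * lam1 / 2
print(abs(d_est - true_d) < 1e-9)  # → True
```

    With phase noise, several candidate orders have small mismatch, which is why the paper's wavelength-selection strategies matter: they maximize the separation between the correct order and its nearest competitor.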

  4. Role of orientation reference selection in motion sickness

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.; Black, F. Owen

    1992-01-01

    The overall objective of this proposal is to understand the relationship between human orientation control and motion sickness susceptibility. Three areas related to orientation control will be investigated. These three areas are (1) reflexes associated with the control of eye movements and posture, (2) the perception of body rotation and position with respect to gravity, and (3) the strategies used to resolve sensory conflict situations which arise when different sensory systems provide orientation cues which are not consistent with one another or with previous experience. Of particular interest is the possibility that a subject may be able to ignore an inaccurate sensory modality in favor of one or more other sensory modalities which do provide accurate orientation reference information. We refer to this process as sensory selection. This proposal will attempt to quantify subjects' sensory selection abilities and determine if this ability confers some immunity to the development of motion sickness symptoms. Measurements of reflexes, motion perception, sensory selection abilities, and motion sickness susceptibility will concentrate on pitch and roll motions since these seem most relevant to the space motion sickness problem. Vestibulo-ocular (VOR) and oculomotor reflexes will be measured using a unique two-axis rotation device developed in our laboratory over the last seven years. Posture control reflexes will be measured using a movable posture platform capable of independently altering proprioceptive and visual orientation cues. Motion perception will be quantified using closed loop feedback technique developed by Zacharias and Young (Exp Brain Res, 1981). This technique requires a subject to null out motions induced by the experimenter while being exposed to various confounding sensory orientation cues. A subject's sensory selection abilities will be measured by the magnitude and timing of his reactions to changes in sensory environments. 
Motion sickness susceptibility will be measured by the time required to induce characteristic changes in the pattern of electrogastrogram recordings while exposed to various sensory environments during posture and motion perception tests. The results of this work are relevant to NASA's interest in understanding the etiology of space motion sickness. If any of the reflex, perceptual, or sensory selection abilities of subjects are found to correlate with motion sickness susceptibility, this work may be an important step in suggesting a method of predicting motion sickness susceptibility. If sensory selection can provide a means to avoid sensory conflict, then further work may lead to training programs which could enhance a subject's sensory selection ability and therefore minimize motion sickness susceptibility.

  5. Genetic drift and selection in many-allele range expansions.

    PubMed

    Weinstein, Bryan T; Lavrentovich, Maxim O; Möbius, Wolfram; Murray, Andrew W; Nelson, David R

    2017-12-01

    We experimentally and numerically investigate the evolutionary dynamics of four competing strains of E. coli with differing expansion velocities in radially expanding colonies. We compare experimental measurements of the average fraction, correlation functions between strains, and the relative rates of genetic domain wall annihilations and coalescences to simulations modeling the population as a one-dimensional ring of annihilating and coalescing random walkers with deterministic biases due to selection. The simulations reveal that the evolutionary dynamics can be collapsed onto master curves governed by three essential parameters: (1) an expansion length beyond which selection dominates over genetic drift; (2) a characteristic angular correlation describing the size of genetic domains; and (3) a dimensionless constant quantifying the interplay between a colony's curvature at the frontier and its selection length scale. We measure these parameters with a new technique that precisely measures small selective differences between spatially competing strains and show that our simulations accurately predict the dynamics without additional fitting. Our results suggest that the random walk model can act as a useful predictive tool for describing the evolutionary dynamics of range expansions composed of an arbitrary number of genotypes with different fitnesses.
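
    The annihilating/coalescing domain-wall picture can be caricatured with a one-dimensional voter-model simulation on a ring. The update rule and the optional fitness bias below are our simplification, not the paper's calibrated random-walk model:

```python
import random

random.seed(1)
L, SWEEPS, N_STRAINS = 200, 300, 4
strains = [random.randrange(N_STRAINS) for _ in range(L)]

def sweep(state, advantage=None):
    """Each site copies a random neighbour (neutral genetic drift). With
    `advantage` (per-strain fitness), fitter neighbours invade more often:
    the deterministic selection bias of the domain-wall picture."""
    new = state[:]
    n = len(state)
    for i in range(n):
        nbr = state[(i + random.choice((-1, 1))) % n]
        p = 0.5 if advantage is None else 0.5 + advantage[nbr] - advantage[state[i]]
        if random.random() < p:
            new[i] = nbr
    return new

walls0 = sum(s != strains[(i + 1) % L] for i, s in enumerate(strains))
for _ in range(SWEEPS):
    strains = sweep(strains)
walls = sum(s != strains[(i + 1) % L] for i, s in enumerate(strains))
print(walls < walls0)  # walls annihilate/coalesce: the pattern coarsens
```

    The boundaries between strains are the "domain walls" of the paper; counting them over time shows the coarsening that annihilation and coalescence produce, and a nonzero `advantage` reproduces the biased walls that selection adds.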

  6. Laser Sounder for Global Measurement of CO2 Concentrations in the Troposphere from Space

    NASA Technical Reports Server (NTRS)

    Abshire, James B.; Riris, Haris; Kawa, S. Randy; Sun, Xiaoli; Chen, Jeffrey; Stephen, Mark A.; Collatz, G. James; Mao, Jianping; Allan, Graham

    2007-01-01

    Measurements of tropospheric CO2 abundance with global coverage, a few-hundred-km spatial resolution and monthly temporal resolution are needed to quantify processes that regulate CO2 storage by the land and oceans. The Orbiting Carbon Observatory (OCO) is the first space mission focused on atmospheric CO2, measuring total column CO2 and O2 by detecting the spectral absorption in reflected sunlight. The OCO mission is an essential step, and will yield important new information about atmospheric CO2 distributions. However, there are unavoidable limitations imposed by its measurement approach. These include best accuracy only during daytime at moderate to high sun angles, interference by cloud and aerosol scattering, and limited signal from CO2 variability in the lower tropospheric CO2 column. We have been developing a new laser-based technique for the remote measurement of tropospheric CO2 concentrations from orbit. Our initial goal is to demonstrate a lidar technique and instrument technology that will permit measurements of the CO2 column abundance in the lower troposphere from aircraft. Our final goal is to develop a space instrument and mission approach for active measurements of the CO2 mixing ratio at the 1-2 ppmv level. Our technique is much less sensitive to cloud and atmospheric scattering conditions and would allow continuous measurements of the CO2 mixing ratio in the lower troposphere from orbit over land and ocean surfaces during day and night. Our approach is to use the 1570 nm CO2 band and a 3-channel laser absorption spectrometer (i.e., a lidar used in an altimeter mode), which continuously measures at nadir from a near-polar circular orbit. The approach directs the narrow co-aligned laser beams from the instrument's lasers toward nadir, and measures the energy of the laser echoes reflected from land and water surfaces. 
It uses several tunable fiber laser transmitters, which allow measurement of the extinction from a single selected CO2 absorption line in the 1570 nm band. This band is free from interference from other gases and has temperature-insensitive absorption lines. During the measurement the lasers are tuned on and off a selected CO2 line near 1572 nm and a selected O2 line near 768 nm in the oxygen A band at kHz rates. The lasers use tunable diode seed lasers followed by fiber amplifiers, and have spectral widths much narrower than the gas absorption lines. The receiver uses a 1-m diameter telescope and photon-counting detectors and measures the background light and the energies of the laser echoes from the surface. The extinction and column densities for the CO2 and O2 gases are estimated from the ratio of the on- and off-line surface echoes via the differential optical absorption technique. Our technique rapidly alternates between several on-line wavelengths set to the sides of the selected gas absorption lines. It exploits the atmospheric pressure broadening of the lines to weight the measurement sensitivity to the atmospheric column below 5 km. This maximizes sensitivity to CO2 in the boundary layer, where variations caused by surface sources and sinks are largest. Simultaneous measurements of the O2 column will use an identical approach with an O2 line. The laser frequencies are tunable and have narrow (MHz) line widths. In combination with sensitive photon-counting detectors, these enable much higher spectral resolution and precision than is possible with a passive spectrometer. Laser backscatter profiles are also measured, which permits identifying measurements made to cloud tops and through aerosol layers. The measurement approach, using lasers in a common nadir-zenith path, allows retrieving CO2 column mixing ratios in the lower troposphere irrespective of sun angle. 
Pulsed laser signals, a time-gated receiver and a narrow receiver field-of-view are used to isolate the surface laser echo signals and to exclude photons scattered from clouds and aerosols. Nonetheless, the optical absorption change due to a change of a few ppm of CO2 is small, <1%, which makes achieving the needed measurement sensitivities and stabilities quite challenging. Measurement SNRs and stabilities of >600:1 are needed to estimate the CO2 mixing ratio at the 1-2 ppm level. We have calculated characteristics of the technique and have demonstrated aspects of the laser, detector and receiver approaches in the laboratory. We have also measured O2 in an absorption cell, and made CO2 measurements over a 400 m long (one-way) horizontal path using a sensor breadboard. We will describe these and more details of our approach in the paper.
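
    Two of the quantitative pieces of this approach are easy to sketch: the on/off-line echo ratio yields the differential optical depth, and the SNR target sets a pulse-averaging requirement. The numbers below (echo energies, effective cross-section, per-pulse SNR) are invented for illustration:

```python
import math

# Differential absorption: the off/on-line echo-energy ratio gives the
# one-way CO2 differential optical depth (factor 1/2 for the double pass).
E_on, E_off = 0.742, 1.000                 # received echo energies (a.u.)
tau = 0.5 * math.log(E_off / E_on)

# Column density from a hypothetical effective differential cross-section.
sigma_eff = 3.0e-23                        # cm^2 per molecule, illustrative
column = tau / sigma_eff                   # molecules per cm^2
print(f"tau = {tau:.3f}, column = {column:.2e} cm^-2")

# Pulse averaging needed to reach the quoted SNR target of 600:1,
# assuming an (invented) per-pulse SNR of 20 and sqrt(N) averaging gain.
n_pulses = math.ceil((600.0 / 20.0) ** 2)
print(n_pulses)  # → 900
```

    The sqrt(N) averaging arithmetic is why kHz-rate pulsed transmitters are attractive here: 900 pulses accumulate in under a second, so the required SNR is reachable within the along-track averaging distance.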

  7. Phase Reconstruction from FROG Using Genetic Algorithms [Frequency-Resolved Optical Gating]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omenetto, F.G.; Nicholson, J.W.; Funk, D.J.

    1999-04-12

    The authors describe a new technique for obtaining the phase and electric field from FROG measurements using genetic algorithms. Frequency-Resolved Optical Gating (FROG) has gained prominence as a technique for characterizing ultrashort pulses. FROG consists of a spectrally resolved autocorrelation of the pulse to be measured. Typically a combination of iterative algorithms is used, applying constraints from experimental data, and alternating between the time and frequency domain, in order to retrieve an optical pulse. The authors have developed a new approach to retrieving the intensity and phase from FROG data using a genetic algorithm (GA). A GA is a general parallel search technique that operates on a population of potential solutions simultaneously. Operators in a genetic algorithm, such as crossover, selection, and mutation, are based on ideas taken from evolution.
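
    The GA loop the abstract describes (selection, crossover, mutation over a population of candidates) looks like this in miniature. The fitness function below fits a toy target vector rather than scoring a real FROG trace, and all parameters are illustrative:

```python
import random

random.seed(7)

def fitness(x):
    """Toy stand-in for the FROG-trace error of a candidate pulse: here we
    just fit a fixed target vector; the real GA scores candidate pulse
    fields against the measured spectrogram."""
    target = [0.1, 0.5, 0.9, 0.5, 0.1]
    return -sum((a - b) ** 2 for a, b in zip(x, target))

def evolve(pop_size=40, genes=5, generations=120, mutation=0.1):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                      # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)                # crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:                  # mutation
                child[random.randrange(genes)] += random.gauss(0, 0.1)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(-fitness(best) < 0.05)  # converged close to the target "pulse"
```

    Because a GA only ever evaluates the forward model, it sidesteps the time/frequency alternation of the classical FROG retrieval algorithms, at the cost of many more fitness evaluations.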

  8. Evaluation of turbulence measurement techniques from a single Doppler lidar

    NASA Astrophysics Data System (ADS)

    Bonin, Timothy A.; Choukulkar, Aditya; Brewer, W. Alan; Sandberg, Scott P.; Weickmann, Ann M.; Pichugina, Yelena L.; Banta, Robert M.; Oncley, Steven P.; Wolfe, Daniel E.

    2017-08-01

    Measurements of turbulence are essential to understand and quantify the transport and dispersal of heat, moisture, momentum, and trace gases within the planetary boundary layer (PBL). Through the years, various techniques to measure turbulence using Doppler lidar observations have been proposed. However, the accuracy of these measurements has rarely been validated against trusted in situ instrumentation. Herein, data from the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) are used to verify Doppler lidar turbulence profiles through comparison with sonic anemometer measurements. For 17 days at the end of the experiment, a single scanning Doppler lidar continuously cycled through different turbulence measurement strategies: velocity-azimuth display (VAD), six-beam scans, and range-height indicators (RHIs) with a vertical stare. Measurements of turbulence kinetic energy (TKE), turbulence intensity, and shear velocity from these techniques are compared with sonic anemometer measurements at six heights on a 300 m tower. The six-beam technique is found to generally measure turbulence kinetic energy and turbulence intensity the most accurately at all heights (r² ≈ 0.78), showing little bias in its observations (slope ≈ 0.95). Turbulence measurements from the velocity-azimuth display method tended to be biased low near the surface, as large eddies were not captured by the scan. None of the methods evaluated was able to consistently measure the shear velocity accurately (r² = 0.15-0.17). Each of the scanning strategies assessed had its own strengths and limitations that need to be considered when selecting the method used in future experiments.

  9. Extraction and evaluation of gas-flow-dependent features from dynamic measurements of gas sensors array

    NASA Astrophysics Data System (ADS)

    Kalinowski, Paweł; Woźniak, Łukasz; Jasiński, Grzegorz; Jasiński, Piotr

    2016-11-01

    Gas analyzers based on gas sensors are devices which enable recognition of various kinds of volatile compounds. They have been continuously developed and investigated for over three decades; however, there are still limitations which slow down the implementation of these devices in many applications. The main drawbacks are the lack of selectivity, sensitivity and long-term stability caused by the drift of the utilized sensors. This implies the necessity of investigations not only in the field of gas sensor construction, but also in the development of measurement procedures and of methods of analysis of sensor responses which compensate for the limitations of the sensors. One field of investigation covers dynamic measurements of sensor or sensor-array responses using flow-modulation techniques. Different gas delivery patterns enable the extraction of unique features which improve the stability and selectivity of gas detecting systems. In this article three flow-modulation techniques are presented, together with a proposed method for evaluating their usefulness and robustness in systems for detecting environmental pollutants. The results of dynamic measurements of a commercially available TGS sensor array in the presence of nitrogen dioxide and ammonia are shown.

  10. Few-Nucleon Charge Radii and a Precision Isotope Shift Measurement in Helium

    NASA Astrophysics Data System (ADS)

    Hassan Rezaeian, Nima; Shiner, David

    2015-05-01

    Precision atomic theory and experiment provide a valuable method to determine few-nucleon charge radii, complementing the more direct scattering approaches, and providing sensitive tests of few-body nuclear theory. Some puzzles with respect to this method exist, particularly in the muonic and electronic measurements of the proton radius, as well as in measurements of nuclear size in helium. We perform precision measurements of the isotope shift of the 2³S-2³P transitions in 3He and 4He. A tunable laser frequency discriminator and an electro-optic modulation technique give precise frequency and intensity control. We select (ts < 50 ms) and stabilize the intensity of the required sideband and suppress the unused sidebands (≤ 10⁻⁵). The technique uses a MEMS fiber switch (ts = 10 ms) and several temperature-stabilized narrow-band (3 GHz) fiber gratings. A fiber-based optical circulator and amplifier provide the desired isolation and net gain for the selected frequency. A beam with both species of helium is achieved using a custom fiber laser for simultaneous optical pumping. A servo-controlled retro-reflected laser beam eliminates Doppler effects. Careful detection design and software control allow for unbiased data collection. Current results will be discussed. This work is supported by NSF PHY-1068868 and PHY-1404498.

  11. An introduction to the processes, problems, and management of urban lakes

    USGS Publications Warehouse

    Britton, L.J.; Averett, R.C.; Ferreira, R.F.

    1975-01-01

    As lake studies become more common, sampling techniques for data collection need increased accuracy and consistency, in order to make meaningful comparisons between different lakes. Therefore, the report discusses the main factors involved in conducting lake studies. These factors include the types and frequency of measurements useful in lake reconnaissance studies and a review of literature on sampling equipment and techniques. A glossary of selected terms begins the report, which is intended for guideline use by urban planners and managers.

  12. Rank-k Maximal Statistics for Divergence and Probability of Misclassification

    NASA Technical Reports Server (NTRS)

    Decell, H. P., Jr.

    1972-01-01

    A technique is developed for selecting from n-channel multispectral data some k combinations of the n-channels upon which to base a given classification technique so that some measure of the loss of the ability to distinguish between classes, using the compressed k-dimensional data, is minimized. Information loss in compressing the n-channel data to k channels is taken to be the difference in the average interclass divergences (or probability of misclassification) in n-space and in k-space.
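
    For small n, the channel-subset selection described above can be brute-forced; the sketch below scores each k-subset of channels by the average pairwise Jeffreys (symmetric) divergence between diagonal-covariance Gaussian classes. This is an illustrative stand-in with hypothetical inputs: the paper develops rank-k maximal statistics precisely to avoid such an exhaustive search.

    ```python
    from itertools import combinations

    def gauss_divergence(m1, v1, m2, v2):
        # Symmetric (Jeffreys) divergence between two diagonal-covariance
        # Gaussian classes, restricted to the selected channels.
        d = 0.0
        for a, b, c, e in zip(m1, m2, v1, v2):
            d += 0.5 * (c / e + e / c - 2) + 0.5 * (a - b) ** 2 * (1 / c + 1 / e)
        return d

    def select_channels(classes, k):
        # classes: list of (mean_vector, variance_vector) per class over the
        # n channels. Return the k-channel subset maximizing the average
        # interclass divergence (brute force over all k-subsets).
        n = len(classes[0][0])

        def avg_div(idx):
            total, pairs = 0.0, 0
            for (m1, v1), (m2, v2) in combinations(classes, 2):
                sub = lambda x: [x[i] for i in idx]
                total += gauss_divergence(sub(m1), sub(v1), sub(m2), sub(v2))
                pairs += 1
            return total / pairs

        return max(combinations(range(n), k), key=avg_div)
    ```

    Minimizing the information loss of the compression then corresponds to picking the subset whose average divergence is closest to the full n-channel value.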

  13. Photogrammetry: An available surface characterization tool for solar concentrators. Part 1: Measurements of surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shortis, M.R.; Johnston, G.H.G.

    1996-08-01

    Close range photogrammetry is a sensing technique that allows the three-dimensional coordinates of selected points on a surface of almost any dimension and orientation to be assessed. Surface characterizations of paraboloidal reflecting surfaces at the ANU using photogrammetry have indicated that three-dimensional coordinate precisions approaching 1:20,000 are readily achievable with this technique. This allows surface quality assessments to be made of large solar collecting devices with a precision that is difficult to achieve with other methods.

  14. Thin layer activation techniques at the U-120 cyclotron of Bucharest

    NASA Astrophysics Data System (ADS)

    Constantinescu, B.; Ivanov, E. A.; Pascovici, G.; Popa-Simil, L.; Racolta, P. M.

    1994-05-01

    The Thin Layer Activation (TLA) technique is a nuclear method used especially for investigating different types of wear (or corrosion). Experimental results on selection criteria for nuclear reactions in various tribological studies, using the IPNE U-120 classical variable-energy cyclotron, are presented. Measuring methods for the main types of wear phenomena and home-made instrumentation dedicated to TLA industrial applications are also reported. Some typical TLA tribological applications, including a nuclear scanning method to obtain the wear profile of piston rings, are presented as well.

  15. Comparing the incomparable? A systematic review of competing techniques for converting descriptive measures of health status into QALY-weights.

    PubMed

    Mortimer, Duncan; Segal, Leonie

    2008-01-01

    Algorithms for converting descriptive measures of health status into quality-adjusted life year (QALY) weights are now widely available, and their application in economic evaluation is increasingly commonplace. The objective of this study is to describe and compare existing conversion algorithms and to highlight issues bearing on the derivation and interpretation of the QALY-weights so obtained. Systematic review of algorithms for converting descriptive measures of health status into QALY-weights. The review identified a substantial body of literature comprising 46 derivation studies and 16 studies that provided evidence or commentary on the validity of conversion algorithms. Conversion algorithms were derived using 1 of 4 techniques: 1) transfer to utility regression, 2) response mapping, 3) effect size translation, and 4) "revaluing" outcome measures using preference-based scaling techniques. Although these techniques differ in their methodological/theoretical tradition, data requirements, and ease of derivation and application, the available evidence suggests that the sensitivity and validity of derived QALY-weights may be more dependent on the coverage and sensitivity of measures and the disease area/patient group under evaluation than on the technique used in derivation. Despite the recent proliferation of conversion algorithms, a number of questions bearing on the derivation and interpretation of derived QALY-weights remain unresolved. These unresolved issues suggest directions for future research in this area. In the meantime, analysts seeking guidance in selecting derived QALY-weights should consider the validity and feasibility of each conversion algorithm in the disease area and patient group under evaluation rather than restricting their choice to weights from a particular derivation technique.

  16. Measuring snow cover using satellite imagery during 1973 and 1974 melt season: North Santiam, Boise, and Upper Snake Basins, phase 1. [LANDSAT satellites, imaging techniques

    NASA Technical Reports Server (NTRS)

    Wiegman, E. J.; Evans, W. E.; Hadfield, R.

    1975-01-01

    Measurements are examined of snow coverage during the snow-melt seasons of 1973 and 1974 from LANDSAT imagery for three Columbia River subbasins. Satellite-derived snow cover inventories for the three test basins were obtained as an alternative to inventories performed with the current operational practice of using small aircraft flights over selected snow fields. The accuracy and precision versus cost of several different interactive image-analysis procedures were investigated using a display device, the Electronic Satellite Image Analysis Console. Single-band radiance thresholding was the principal technique employed in the snow detection, although this technique was supplemented by an editing procedure involving reference to hand-generated elevation contours. For each date and view measured, a binary thematic map or "mask" depicting the snow cover was generated by a combination of objective and subjective procedures. Photographs of data analysis equipment (displays) are shown.

  17. Magnetic moment investigation by frequency mixing techniques.

    PubMed

    Teliban, I; Thede, C; Chemnitz, S; Bechtold, C; Quadakkers, W J; Schütze, M; Quandt, E

    2009-11-01

    Gas turbines and other large industrial equipment are subjected to high-temperature oxidation and corrosion. Research and development of efficient protective coatings is the main task in the field. Knowledge about the depletion state of the coating during the operation time is also important. To date, practical nondestructive methods for the measurement of the depletion state do not exist. By integrating magnetic phases into the coating, the condition of the coating can be determined by measuring its magnetic properties. In this paper, a new technique using frequency mixing is proposed to investigate the thickness of the coatings based on their magnetic properties. A sensor system is designed and tested on specific magnetic coatings. New approaches are proposed to overcome the dependency of the measurement on the distance between coil and sample that all noncontact techniques face. The novelty is a low-cost sensor with high sensitivity and selectivity which can provide very high signal-to-noise ratios. Prospects and limitations are discussed for future use of the sensor in industrial applications.

  18. A study to define an in-flight dynamics measurement and data applications program for space shuttle payloads

    NASA Technical Reports Server (NTRS)

    Rader, W. P.; Barrett, S.; Payne, K. R.

    1975-01-01

    Data measurement and interpretation techniques were defined for application to the first few space shuttle flights, so that the dynamic environment could be sufficiently well established to be used to reduce the cost of future payloads through more efficient design and environmental test techniques. It was concluded that: (1) initial payloads must be given comprehensive instrumentation coverage to obtain detailed definition of acoustics, vibration, and interface loads, (2) analytical models of selected initial payloads must be developed and verified by modal surveys and flight measurements, (3) acoustic tests should be performed on initial payloads to establish realistic test criteria for components and experiments in order to minimize unrealistic failures and retest requirements, (4) permanent data banks should be set up to establish statistical confidence in the data to be used, (5) a more unified design/test specification philosophy is needed, (6) additional work is needed to establish a practical testing technique for simulation of vehicle transients.

  19. Analysis and optimal design of moisture sensor for rice grain moisture measurement

    NASA Astrophysics Data System (ADS)

    Jain, Sweety; Mishra, Pankaj Kumar; Thakare, Vandana Vikas

    2018-04-01

    The analysis and design of a microstrip sensor for accurate determination of moisture content (MC) in rice grains is presented, with reference measurements based on the oven-drying technique, which is easy, fast and less time-consuming than other techniques. The sensor is designed with low insertion loss; the reflection coefficient and maximum gain are -35 dB and 5.88 dB at 2.68 GHz. All the parameters, such as axial ratio, maximum gain and Smith chart, are discussed, which is helpful for analyzing the moisture measurement. The variation of the percentage of moisture with the magnitude and phase of the transmission coefficient is investigated at selected frequencies. The microstrip moisture sensor consists of one layer, an FR4 substrate of thickness 1.638 mm, and is simulated with Computer Simulation Technology Microwave Studio (CST MWS). It is concluded that the proposed sensor is suitable for development as a complete sensor, can estimate the moisture content of rice grains accurately and sensitively, and is compact, versatile and suitable for determining the moisture content of other crops and agricultural products.

  20. Automated vehicle guidance using discrete reference markers. [road surface steering techniques

    NASA Technical Reports Server (NTRS)

    Johnston, A. R.; Assefi, T.; Lai, J. Y.

    1979-01-01

    Techniques for providing steering control for an automated vehicle using discrete reference markers fixed to the road surface are investigated analytically. Either optical or magnetic approaches can be used for the sensor, which generates a measurement of the lateral offset of the vehicle path at each marker to form the basic data for steering control. Possible mechanizations of sensor and controller are outlined. Techniques for handling certain anomalous conditions, such as a missing marker, or loss of acquisition, and special maneuvers, such as u-turns and switching, are briefly discussed. A general analysis of the vehicle dynamics and the discrete control system is presented using the state variable formulation. Noise in both the sensor measurement and in the steering servo are accounted for. An optimal controller is simulated on a general purpose computer, and the resulting plots of vehicle path are presented. Parameters representing a small multipassenger tram were selected, and the simulation runs show response to an erroneous sensor measurement and acquisition following large initial path errors.

  1. Experimental measurement of structural power flow on an aircraft fuselage

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1989-01-01

    An experimental technique is used to measure the structural power flow through an aircraft fuselage with the excitation near the wing attachment location. Because of the large number of measurements required to analyze the whole of an aircraft fuselage, a balance must be achieved between the number of measurement transducers, the mounting of these transducers, and the accuracy of the measurements. Using four transducers mounted on a bakelite platform, the structural intensity vectors at locations distributed throughout the fuselage are measured. To minimize the errors associated with using a four-transducer technique, the measurement positions are selected away from bulkheads and stiffeners. Because four separate transducers are used, with each transducer having its own drive and conditioning amplifiers, phase errors are introduced in the measurements that can be much greater than the phase differences associated with the measurements. To minimize these phase errors, two sets of measurements are taken for each position with the orientation of the transducers rotated by 180 deg, and an average is taken between the two sets of measurements. Results are presented and discussed.

  2. Accurate characterization of carcinogenic DNA adducts using MALDI tandem time-of-flight mass spectrometry

    NASA Astrophysics Data System (ADS)

    Barnes, Charles A.; Chiu, Norman H. L.

    2009-01-01

    Many chemical carcinogens and their in vivo activated metabolites react readily with genomic DNA, and form covalently bound carcinogen-DNA adducts. Clinically, carcinogen-DNA adducts have been linked to various cancers. Among the current methods for DNA adduct analysis, mass spectrometric methods allow the direct measurement of unlabeled DNA adducts. The goal of this study is to explore the use of matrix-assisted laser desorption/ionization tandem time-of-flight mass spectrometry (MALDI-TOF/TOF MS) to determine the identity of carcinogen-DNA adducts. Two of the known carcinogenic DNA adducts, namely N-(2'-deoxyguanosin-8-yl)-2-amino-1-methyl-6-phenyl-imidazo [4,5-b] pyridine (dG-C8-PhIP) and N-(2'-deoxyguanosin-8-yl)-4-aminobiphenyl (dG-C8-ABP), were selected as our models. In MALDI-TOF MS measurements, the small matrix ion and its cluster ions did not interfere with the measurements of either selected dG adduct. To achieve a higher accuracy for the characterization of the selected dG adducts, 1 keV collision energy in MALDI-TOF/TOF MS/MS was used to measure the adducts. In comparison to other MS/MS techniques with lower collision energies, more extensive precursor ion dissociations were observed. The detection of the corresponding fragment ions allowed the identities of guanine, PhIP or ABP, and the position of adduction to be confirmed. Some of the fragment ions of dG-C8-PhIP have not been reported by other MS/MS techniques.

  3. Experimental study of the stability and flow characteristics of floating liquid columns confined between rotating disks

    NASA Technical Reports Server (NTRS)

    Fowle, A. A.; Soto, L.; Strong, P. F.; Wang, C. A.

    1980-01-01

    A low Bond number simulation technique was used to establish the stability limits of cylindrical and conical floating liquid columns under conditions of isorotation, equal counter rotation, rotation of one end only, and parallel axis offset. The conditions for resonance in cylindrical liquid columns perturbed by axial, sinusoidal vibration of one end face are also reported. All tests were carried out under isothermal conditions with water and silicone fluids of various viscosities. A technique for the quantitative measurement of stream velocity within a floating, isothermal, liquid column confined between rotatable disks was developed. In the measurement, small light-scattering particles were used as streamline markers in the common arrangement, but the capability of the measurement was extended by use of a stereo-pair photography system to provide quantitative data. Results of velocity measurements made under a few selected conditions, which established the precision and accuracy of the technique, are given. The general qualitative features of the isothermal flow patterns under various conditions of end face rotation, drawn from both still photography and motion pictures, are presented.

  4. An Adaptive Kalman Filter Using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
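
    A minimal sketch of the residual tuning idea: run a scalar Kalman filter and re-estimate the measurement noise R from a moving window of innovations, using the relation E[ν²] = P + R. This is a simplified stand-in, not Jazwinski's algorithm or the flight implementation, and all parameter values are illustrative:

    ```python
    def adaptive_kalman(zs, q=1e-4, r0=1.0, window=20):
        # Scalar random-walk Kalman filter that re-estimates the measurement
        # noise R from a moving window of innovations (residuals).
        x, p, r = zs[0], 1.0, r0
        residuals, estimates = [], []
        for z in zs[1:]:
            p += q                     # predict (random-walk state model)
            nu = z - x                 # innovation / measurement residual
            residuals.append(nu)
            if len(residuals) >= window:
                recent = residuals[-window:]
                s = sum(v * v for v in recent) / window  # sample E[nu^2]
                r = max(s - p, 1e-12)  # R estimate from E[nu^2] = P + R
            k = p / (p + r)            # Kalman gain
            x += k * nu                # update state estimate
            p *= 1 - k                 # update state covariance
            estimates.append(x)
        return estimates, r
    ```

    If the tuning is correct the innovations are white with covariance P + R; a persistent mismatch shows up in the windowed statistic and feeds back into R, running in parallel with the filter itself.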

  5. Detecting and Quantifying Topography in Neural Maps

    PubMed Central

    Yarrow, Stuart; Razak, Khaleel A.; Seitz, Aaron R.; Seriès, Peggy

    2014-01-01

    Topographic maps are an often-encountered feature in the brains of many species, yet there are no standard, objective procedures for quantifying topography. Topographic maps are typically identified and described subjectively, but in cases where the scale of the map is close to the resolution limit of the measurement technique, identifying the presence of a topographic map can be a challenging subjective task. In such cases, an objective topography detection test would be advantageous. To address these issues, we assessed seven measures (Pearson distance correlation, Spearman distance correlation, Zrehen's measure, topographic product, topological correlation, path length and wiring length) by quantifying topography in three classes of cortical map model: linear, orientation-like, and clusters. We found that all but one of these measures were effective at detecting statistically significant topography even in weakly-ordered maps, based on simulated noisy measurements of neuronal selectivity and sparse sampling of the maps. We demonstrate the practical applicability of these measures by using them to examine the arrangement of spatial cue selectivity in pallid bat A1. This analysis shows that significantly topographic arrangements of interaural intensity difference and azimuth selectivity exist at the scale of individual binaural clusters. PMID:24505279
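
    The simplest of the seven measures, the Pearson distance correlation, can be sketched directly: correlate the physical distance between map units with the distance between their stimulus preferences over all unit pairs. This is a minimal illustration with hypothetical inputs, not the authors' implementation; a significance test would compare the statistic against a null distribution from shuffled preferences.

    ```python
    import math
    from itertools import combinations

    def pearson(u, v):
        # Plain Pearson correlation coefficient between two equal-length lists.
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        su = math.sqrt(sum((a - mu) ** 2 for a in u))
        sv = math.sqrt(sum((b - mv) ** 2 for b in v))
        return cov / (su * sv)

    def distance_correlation_topography(positions, preferences):
        # Correlation between cortical distance and stimulus-preference
        # distance over all unit pairs: values near +1 indicate topography.
        d_pos, d_pref = [], []
        for i, j in combinations(range(len(positions)), 2):
            d_pos.append(math.dist(positions[i], positions[j]))
            d_pref.append(abs(preferences[i] - preferences[j]))
        return pearson(d_pos, d_pref)
    ```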

  6. TOF-SIMS imaging technique with information entropy

    NASA Astrophysics Data System (ADS)

    Aoyagi, Satoka; Kawashima, Y.; Kudo, Masahiro

    2005-05-01

    Time-of-flight secondary ion mass spectrometry (TOF-SIMS) is capable of chemical imaging of proteins on insulated samples in principle. However, selection of the specific peaks related to a particular protein, which are necessary for chemical imaging, out of numerous candidates had been difficult without an appropriate spectrum analysis technique. Therefore multivariate analysis techniques, such as principal component analysis (PCA), and analysis with mutual information defined by information theory, have been applied to interpret SIMS spectra of protein samples. In this study mutual information was applied to select specific peaks related to proteins in order to obtain chemical images. Proteins on insulated materials were measured with TOF-SIMS and the SIMS spectra were then analyzed by means of the analysis method based on comparison using mutual information. Chemical mapping of each protein was obtained using the specific peaks related to each protein, selected on the basis of their mutual information values. The resulting TOF-SIMS images of proteins on the materials provide useful information on protein adsorption properties, the optimality of immobilization processes and reactions between proteins. Thus chemical images of proteins by TOF-SIMS help in understanding interactions between material surfaces and proteins and in developing sophisticated biomaterials.
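
    Peak selection by mutual information can be sketched as scoring each candidate peak by the discrete mutual information between its (thresholded) intensity across samples and the sample identity. The data layout, threshold, and binarization below are illustrative assumptions, not the authors' procedure:

    ```python
    import math
    from collections import Counter

    def mutual_information(xs, ys):
        # Discrete mutual information I(X;Y) in bits, here between a
        # binarized peak-presence variable and a sample label.
        n = len(xs)
        px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
        mi = 0.0
        for (x, y), c in pxy.items():
            pj = c / n
            mi += pj * math.log2(pj * n * n / (px[x] * py[y]))
        return mi

    def select_peaks(spectra, labels, threshold, top=3):
        # spectra: {peak_mass: [intensity per sample]}. Rank peaks by the
        # mutual information between thresholded intensity and sample label.
        scored = {m: mutual_information([i > threshold for i in vals], labels)
                  for m, vals in spectra.items()}
        return sorted(scored, key=scored.get, reverse=True)[:top]
    ```

    A peak whose presence perfectly partitions the samples by protein carries the full label entropy (1 bit for two classes), while an uninformative peak scores near zero.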

  7. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.

  8. A data base and analysis program for shuttle main engine dynamic pressure measurements

    NASA Technical Reports Server (NTRS)

    Coffin, T.

    1986-01-01

    A dynamic pressure data base management system is described for measurements obtained from space shuttle main engine (SSME) hot firing tests. The data were provided in terms of engine power level and rms pressure time histories, and power spectra of the dynamic pressure measurements at selected times during each test. Test measurements and engine locations are defined along with a discussion of data acquisition and reduction procedures. A description of the data base management analysis system is provided and subroutines developed for obtaining selected measurement means, variances, ranges and other statistics of interest are discussed. A summary of pressure spectra obtained at SSME rated power level is provided for reference. Application of the singular value decomposition technique to spectrum interpolation is discussed and isoplots of interpolated spectra are presented to indicate measurement trends with engine power level. Program listings of the data base management and spectrum interpolation software are given. Appendices are included to document all data base measurements.

  9. Formative Research in Attention and Appeal: A Series of Proposals.

    ERIC Educational Resources Information Center

    Mielke, Keith W.; Bryant, Jennings, Jr.

    Methods are suggested to measure the program appeal and audience attention of Children's Television Workshop productions. Among these are distractor techniques, one which permits subjects to discriminate between two simultaneously broadcast programs by selecting the audio track they most prefer and one used to rank order several programs.…

  10. Experimental verification of PSM polarimetry: monitoring polarization at 193nm high-NA with phase shift masks

    NASA Astrophysics Data System (ADS)

    McIntyre, Gregory; Neureuther, Andrew; Slonaker, Steve; Vellanki, Venu; Reynolds, Patrick

    2006-03-01

    The initial experimental verification of a polarization monitoring technique is presented. A series of phase shifting mask patterns produce polarization dependent signals in photoresist and are capable of monitoring the Stokes parameters of any arbitrary illumination scheme. Experiments on two test reticles have been conducted. The first reticle consisted of a series of radial phase gratings (RPG) and employed special apertures to select particular illumination angles. Measurement sensitivities of about 0.3 percent of the clear field per percent change in polarization state were observed. The second test reticle employed the more sensitive proximity effect polarization analyzers (PEPA), a more robust experimental setup, and a backside pinhole layer for illumination angle selection and to enable characterization of the full illuminator. Despite an initial complication with the backside pinhole alignment, the results correlate with theory. Theory suggests that, once the pinhole alignment is corrected in the near future, the second reticle should achieve a measurement sensitivity of about 1 percent of the clear field per percent change in polarization state. This corresponds to a measurement of the Stokes parameters after test mask calibration, to within about 0.02 to 0.03. Various potential improvements to the design, fabrication of the mask, and experimental setup are discussed. Additionally, to decrease measurement time, a design modification and double exposure technique is proposed to enable electrical detection of the measurement signal.

  11. The NASA B-757 HIRF Test Series: Flight Test Results

    NASA Technical Reports Server (NTRS)

    Moeller, Karl J.; Dudley, Kenneth L.

    1997-01-01

    In 1995, the NASA Langley Research Center conducted a series of aircraft tests aimed at characterizing the electromagnetic environment (EME) in and around a Boeing 757 airliner. Measurements were made of the electromagnetic energy coupled into the aircraft and the signals induced on select structures as the aircraft was flown past known RF transmitters. These measurements were conducted to provide data for the validation of computational techniques for the assessment of electromagnetic effects in commercial transport aircraft. This paper reports on the results of flight tests using RF radiators in the HF, VHF, and UHF ranges and on efforts to use computational and analytical techniques to predict RF field levels inside the airliner at these frequencies.

  12. Moire technique utilization for detection and measurement of scoliosis

    NASA Astrophysics Data System (ADS)

    Zawieska, Dorota; Podlasiak, Piotr

    1993-02-01

    The moire projection method enables non-contact measurement of the shape or deformation of different surfaces and constructions by fringe pattern analysis. Acquisition of the fringe map of the whole surface of the object under test is one of the main advantages compared with point-by-point methods. The computer analyzes the shape of the whole surface, and the user can then select different points or cross-sections of the object map. In this paper a few typical examples of the application of the moire technique to different medical problems are presented. We also present the equipment; the moire pattern analysis is done in real time using the phase-stepping method with a CCD camera.

  13. Advanced definition study for the determination of atmospheric ozone using the satellite eclipse technique

    NASA Technical Reports Server (NTRS)

    Emmons, R.; Preski, R. J.; Kierstead, F. H., Jr.; Doll, F. C.; Wight, D. T.; Romick, D. C.

    1973-01-01

    A study was made to evaluate the potential for remote ground-based measurement of upper atmospheric ozone by determining the absorption ratio of selected narrow bands of sunlight as reflected by satellites while passing into eclipse, using the NASA Mobile Satellite Photometric Observatory (MOSPO). Equipment modifications to provide optimum performance were analyzed and recommendations were made for improvements to the system to accomplish this. These included new sensor tubes, pulse counting detection circuitry, filters, beam splitters and associated optical revision, along with an automatic tracking capability plus corresponding operational techniques which should extend the overall measurement capability to include use of satellites down to 5th magnitude.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Laura; Harvey, Stephen P.; Teeter, Glenn

    We demonstrate the potential of X-ray photoelectron spectroscopy (XPS) to characterize new carrier-selective contacts (CSC) for solar cell application. We show that XPS not only provides information about the surface chemical properties of the CSC material, but that operando XPS, i.e. under light bias condition, can also directly measure the photovoltage that develops at the CSC/absorber interface, revealing device relevant information without the need of assembling a full solar cell. We present the application of the technique to molybdenum oxide hole-selective contact films on a crystalline silicon absorber.

  15. Local density measurement of additive manufactured copper parts by instrumented indentation

    NASA Astrophysics Data System (ADS)

    Santo, Loredana; Quadrini, Fabrizio; Bellisario, Denise; Tedde, Giovanni Matteo; Zarcone, Mariano; Di Domenico, Gildo; D'Angelo, Pierpaolo; Corona, Diego

    2018-05-01

    Instrumented flat indentation has been used to evaluate the local density of additive manufactured (AM) copper samples with different relative densities. Indentations were made using tungsten carbide (WC) flat pins with a 1 mm diameter. Pure copper powders were used in a selective laser melting (SLM) machine to produce the test samples. By changing process parameters, the relative density of the samples was varied from 63% to 71%. Indentation tests were performed on the xy surface of the AM samples. To correlate indentation test results with sample density, the indentation pressure at a fixed displacement was selected. Results show that instrumented indentation is a valid technique for measuring the density distribution along the geometry of an SLM part: a linear trend between indentation pressure and sample density was found for the selected density range.
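
    A sketch of how the reported linear pressure-density trend could serve as a calibration curve, fitted and then inverted to estimate local density; the pressure-density pairs below are assumed for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical calibration data: indentation pressure at fixed displacement
# versus known relative density, over the 63-71% range reported above.
density = np.array([63.0, 65.0, 67.0, 69.0, 71.0])        # % relative density
pressure = np.array([210.0, 245.0, 282.0, 315.0, 350.0])  # MPa, assumed values

# Fit the linear trend p = a * rho + b.
a, b = np.polyfit(density, pressure, 1)

def density_from_pressure(p):
    """Invert the calibration to map a measured pressure to local density."""
    return (p - b) / a

print(density_from_pressure(300.0))  # estimated % relative density
```

Once the calibration is in hand, each indentation on the part maps directly to a local density estimate, which is how a density distribution along the part geometry can be built up.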

  16. Model selection as a science driver for dark energy surveys

    NASA Astrophysics Data System (ADS)

    Mukherjee, Pia; Parkinson, David; Corasaniti, Pier Stefano; Liddle, Andrew R.; Kunz, Martin

    2006-07-01

    A key science goal of upcoming dark energy surveys is to seek time-evolution of the dark energy. This problem is one of model selection, where the aim is to differentiate between cosmological models with different numbers of parameters. However, the power of these surveys is traditionally assessed by estimating their ability to constrain parameters, which is a different statistical problem. In this paper, we use Bayesian model selection techniques, specifically forecasting of the Bayes factors, to compare the abilities of different proposed surveys in discovering dark energy evolution. We consider six experiments - supernova luminosity measurements by the Supernova Legacy Survey, SNAP, JEDI and ALPACA, and baryon acoustic oscillation measurements by WFMOS and JEDI - and use Bayes factor plots to compare their statistical constraining power. The concept of Bayes factor forecasting has much broader applicability than dark energy surveys.
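
    The Bayes-factor comparison at the heart of this forecasting approach can be sketched with a toy calculation; the mock data, noise level, and prior range below are illustrative assumptions, not inputs from the surveys discussed:

```python
import numpy as np

# Toy Bayes-factor sketch: compare M1, a cosmological constant with w fixed
# at -1 and no free parameter, against M2, a constant but unknown w with a
# uniform prior on [-2, 0], given mock "measurements" of w.
rng = np.random.default_rng(0)
sigma = 0.05
data = -1.0 + sigma * rng.standard_normal(20)

def log_like(w):
    return -0.5 * np.sum((data - w) ** 2) / sigma**2

# Evidence of M1: the likelihood at the fixed parameter value.
logZ1 = log_like(-1.0)

# Evidence of M2: the likelihood averaged over the prior (grid quadrature).
w_grid = np.linspace(-2.0, 0.0, 2001)
dw = w_grid[1] - w_grid[0]
like = np.exp(np.array([log_like(w) for w in w_grid]) - logZ1)  # rescaled
logZ2 = logZ1 + np.log(np.sum(like) * (1.0 / 2.0) * dw)

ln_B12 = logZ1 - logZ2  # ln Bayes factor; > 0 favours the simpler model
print(f"ln B12 = {ln_B12:.2f}")
```

With data generated at w = -1, the Bayes factor favours the simpler model: the evidence integral penalizes M2 for the prior volume it wastes, which is the Occam's-razor behaviour that parameter-constraint forecasts cannot capture.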

  17. Thermodynamic Activity Measurements with Knudsen Cell Mass Spectrometry

    NASA Technical Reports Server (NTRS)

    Copland, Evan H.; Jacobson, Nathan S.

    2001-01-01

    Coupling the Knudsen effusion method with mass spectrometry has proven to be one of the most useful experimental techniques for studying the equilibrium between condensed phases and complex vapors. The Knudsen effusion method involves placing a condensed sample in a Knudsen cell, a small "enclosure" that is uniformly heated and held until equilibrium is attained between the condensed and vapor phases. The vapor is continuously sampled by effusion through a small orifice in the cell. A molecular beam is formed from the effusing vapor and directed into a mass spectrometer for identification and pressure measurement of the species in the vapor phase. Knudsen cell mass spectrometry (KCMS) has been used for nearly fifty years now and continues to be a leading technique for obtaining thermodynamic data. Indeed, much of the well-established vapor species data in the JANAF tables has been obtained from this technique. This is due to the extreme versatility of the technique. All classes of materials can be studied and all constituents of the vapor phase can be measured over a wide range of pressures (approximately 10⁻⁴ to 10⁻¹¹ bar) and temperatures (500-2800 K). The ability to selectively measure different vapor species makes KCMS a very powerful tool for the measurement of component activities in metallic and ceramic solutions. Today several groups are applying KCMS to measure thermodynamic functions in multicomponent metallic and ceramic systems. Thermodynamic functions, especially component activities, are extremely important in the development of CALPHAD (Calculation of Phase Diagrams) type thermodynamic descriptions. These descriptions, in turn, are useful for modeling materials processing and predicting reactions such as oxide formation and fiber/matrix interactions. The leading experimental methods for measuring activities are the galvanic cell or electro-motive force (EMF) technique and the KCMS technique.
Each has specific advantages, depending on material and conditions. The EMF technique is suitable for lower-temperature measurements, provided a suitable cell can be constructed. KCMS is useful for higher-temperature measurements in systems with volatile components. In this paper, we briefly review the KCMS technique and identify the major experimental issues that must be addressed for precise measurements. These issues include temperature measurement, cell material and design, and absolute pressure calibration. The resolution of these issues is discussed together with some recent examples of measured thermodynamic data.
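
    The working relation behind KCMS activity measurements can be illustrated in a few lines; the intensities, temperature, and calibration values below are hypothetical:

```python
# Sketch of how KCMS yields component activities. The standard working
# relation is p_i = k * I_i * T / sigma_i, with I_i the measured ion
# intensity, sigma_i the ionization cross-section, and k an instrument
# constant fixed by calibration against a standard of known vapor pressure.
# All numbers below are hypothetical.
def partial_pressure(I, T, sigma, k):
    return k * I * T / sigma

# Calibrate k against a pure reference of known vapor pressure p_ref (bar):
p_ref, I_ref, T_ref, sigma_ref = 1.2e-8, 4.0e5, 1800.0, 1.0
k = p_ref * sigma_ref / (I_ref * T_ref)

# Activity of the same component in an alloy at the same temperature; k and
# sigma cancel in the pressure ratio, which is what makes the method robust:
I_alloy, I_pure = 1.6e5, 4.0e5
activity = (partial_pressure(I_alloy, T_ref, 1.0, k)
            / partial_pressure(I_pure, T_ref, 1.0, k))
print(activity)  # dimensionless component activity
```

Because the activity is a ratio of partial pressures of the same species at the same temperature, the instrument constant and cross-section drop out, leaving only the ratio of ion intensities.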

  18. Stimulated Raman Spectroscopy with Entangled Light: Enhanced Resolution and Pathway Selection

    PubMed Central

    2015-01-01

    We propose a novel femtosecond stimulated Raman spectroscopy (FSRS) technique that combines entangled photons with interference detection to select matter pathways and enhance the resolution. Following photoexcitation by an actinic pump, the measurement uses a pair of broad-band entangled photons; one (signal) interacts with the molecule and together with a third narrow-band pulse induces the Raman process. The other (idler) photon provides a reference for the coincidence measurement. This interferometric photon coincidence counting detection allows one to separately measure the Raman gain and loss signals, which is not possible with conventional probe transmission detection. Entangled photons further provide a unique temporal and spectral detection window that can better resolve fast excited-state dynamics compared to classical and correlated disentangled states of light. PMID:25177427

  19. Measurement of the top-quark mass with dilepton events selected using neuroevolution at CDF.

    PubMed

    Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzurri, P; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Beringer, J; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Copic, K; Cordelli, M; Cortiana, G; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Derwent, P F; di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Elagin, A; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; 
Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhr, T; Kulkarni, N P; Kurata, M; Kusakabe, Y; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, E; Lee, S W; Leone, S; Lewis, J D; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; 
Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Merkel, P; Mesropian, C; Miao, T; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moggi, N; Moon, C S; Moore, R; Morello, M J; Morlok, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shears, T; Shekhar, R; Shepard, P F; Sherman, D; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; 
Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Tu, Y; Turini, N; Ukegawa, F; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Whiteson, S; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Xie, S; Yagil, A; Yamamoto, K; Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S

    2009-04-17

    We report a measurement of the top-quark mass M_t in the dilepton decay channel tt̄ → bl′⁺ν_{l′} b̄l⁻ν̄_l. Events are selected with a neural network which has been directly optimized for statistical precision in the top-quark mass using neuroevolution, a technique modeled on biological evolution. The top-quark mass is extracted from per-event probability densities that are formed by the convolution of leading-order matrix elements and detector resolution functions. The joint probability is the product of the probability densities from 344 candidate events in 2.0 fb⁻¹ of pp̄ collisions collected with the CDF II detector, yielding a measurement of M_t = 171.2 ± 2.7(stat) ± 2.9(syst) GeV/c².

  20. Metrics in method engineering

    NASA Astrophysics Data System (ADS)

    Brinkkemper, S.; Rossi, M.

    1994-12-01

    As customizable computer-aided software engineering (CASE) tools, or CASE shells, have been introduced in academia and industry, there has been growing interest in the systematic construction of methods and their support environments, i.e. method engineering. To aid method developers and method selectors in their tasks, we propose two sets of metrics, which measure the complexity of diagrammatic specification techniques on the one hand, and of complete systems development methods on the other. The proposed metrics provide a relatively fast and simple way to analyze the properties of a technique (or method) and, when accompanied by other selection criteria, can be used to estimate the cost of learning a technique and its complexity relative to others. To demonstrate the applicability of the proposed metrics, we have applied them to 34 techniques and 15 methods.
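
    A minimal sketch in the spirit of such complexity metrics: count the concept types that make up a technique's metamodel and aggregate them. The technique definitions below are illustrative assumptions, not figures from the paper:

```python
# Each diagrammatic technique is summarized by counts of its metamodel
# concepts: object types, relationship types, and property types.
# (The numbers here are invented for illustration.)
techniques = {
    "ER diagram": {"object_types": 2, "relationship_types": 1, "property_types": 5},
    "Data flow diagram": {"object_types": 4, "relationship_types": 1, "property_types": 3},
}

def metamodel_size(t):
    """One simple aggregate metric: total number of metamodel concepts."""
    return t["object_types"] + t["relationship_types"] + t["property_types"]

for name, t in techniques.items():
    print(name, metamodel_size(t))
```

Such counts are cheap to compute from a technique's metamodel, which is what makes them usable as a fast first-pass filter before more expensive selection criteria are applied.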

  1. Internal corrosion monitoring of subsea oil and gas production equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joosten, M.W.; Fischer, K.P.; Strommen, R.

    1995-04-01

    Nonintrusive techniques will dominate subsea corrosion monitoring over intrusive methods because they do not interfere with pipeline operations. The long-term reliability of the nonintrusive techniques in general is considered to be much better than that of intrusive-type probes. The nonintrusive techniques based on radioactive tracers (TLA, NA) and FSM and UT are expected to be the main types of subsea corrosion monitoring equipment in the coming years. Available techniques that could be developed specifically for subsea applications are: electrochemical noise, corrosion potentials (using new types of reference electrodes), multiprobe systems for electrochemical measurements, and video camera inspection (mini-video camera with light source). The following innovative techniques have potential but need further development: ion-selective electrodes, radioactive tracers, and Raman spectroscopy.

  2. Preconditioned conjugate gradient technique for the analysis of symmetric anisotropic structures

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1987-01-01

    An efficient preconditioned conjugate gradient (PCG) technique and a computational procedure are presented for the analysis of symmetric anisotropic structures. The technique is based on selecting the preconditioning matrix as the orthotropic part of the global stiffness matrix of the structure, with all the nonorthotropic terms set equal to zero. This particular choice of the preconditioning matrix results in reducing the size of the analysis model of the anisotropic structure to that of the corresponding orthotropic structure. The similarities between the proposed PCG technique and a reduction technique previously presented by the authors are identified and exploited to generate from the PCG technique direct measures for the sensitivity of the different response quantities to the nonorthotropic (anisotropic) material coefficients of the structure. The effectiveness of the PCG technique is demonstrated by means of a numerical example of an anisotropic cylindrical panel.
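
    The PCG iteration with a preconditioner taken as a dominant part of the system matrix can be sketched as follows; here the "orthotropic part" is modelled, for brevity, as the diagonal of a synthetic symmetric positive-definite matrix rather than a real finite-element stiffness operator:

```python
import numpy as np

# Synthetic stand-in for an anisotropic stiffness matrix K: a dominant
# "orthotropic" part D plus a small symmetric anisotropic coupling.
rng = np.random.default_rng(1)
n = 40
D = np.diag(2.0 + rng.random(n))            # dominant part (preconditioner)
C = 0.05 * rng.standard_normal((n, n))
K = D + 0.5 * (C + C.T)                     # SPD by diagonal dominance
f = rng.standard_normal(n)

M_inv = np.diag(1.0 / np.diag(D))           # cheap solves with the dominant part

def pcg(K, f, M_inv, tol=1e-10, maxit=200):
    """Standard preconditioned conjugate gradient iteration."""
    x = np.zeros_like(f)
    r = f - K @ x                           # residual
    z = M_inv @ r                           # preconditioned residual
    p = z.copy()                            # search direction
    for _ in range(maxit):
        Kp = K @ p
        alpha = (r @ z) / (p @ Kp)
        x += alpha * p
        r_new = r - alpha * Kp
        if np.linalg.norm(r_new) < tol:
            return x
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

x = pcg(K, f, M_inv)
print(np.linalg.norm(K @ x - f))            # residual norm, near machine precision
```

The closer the preconditioner is to K (i.e. the weaker the anisotropic coupling), the fewer iterations are needed, which mirrors the paper's choice of the orthotropic part of the global stiffness matrix.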

  3. Updated techniques for estimating monthly streamflow-duration characteristics at ungaged and partial-record sites in central Nevada

    USGS Publications Warehouse

    Hess, Glen W.

    2002-01-01

    Techniques for estimating monthly streamflow-duration characteristics at ungaged and partial-record sites in central Nevada have been updated. These techniques were developed using streamflow records at six continuous-record sites, basin physical and climatic characteristics, and concurrent streamflow measurements at four partial-record sites. Two methods, the basin-characteristic method and the concurrent-measurement method, were developed to provide estimating techniques for selected streamflow characteristics at ungaged and partial-record sites in central Nevada. In the first method, logarithmic-regression analyses were used to relate monthly mean streamflows (from all months and by month) from continuous-record gaging sites of various percent exceedence levels or monthly mean streamflows (by month) to selected basin physical and climatic variables at ungaged sites. Analyses indicate that the total drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the equations developed from all months of monthly mean streamflow, the coefficient of determination averaged 0.84 and the standard error of estimate of the relations for the ungaged sites averaged 72 percent. For the equations derived from monthly means by month, the coefficient of determination averaged 0.72 and the standard error of estimate of the relations averaged 78 percent. If standard errors are compared, the relations developed in this study appear generally to be less accurate than those developed in a previous study. However, the new relations are based on additional data and the slight increase in error may be due to the wider range of streamflow for a longer period of record, 1995-2000. In the second method, streamflow measurements at partial-record sites were correlated with concurrent streamflows at nearby gaged sites by the use of linear-regression techniques. 
Statistical measures of results using the second method typically indicated greater accuracy than for the first method. However, to make estimates for individual months, the concurrent-measurement method requires several years additional streamflow data at more partial-record sites. Thus, exceedence values for individual months are not yet available due to the low number of concurrent-streamflow-measurement data available. Reliability, limitations, and applications of both estimating methods are described herein.
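
    The basin-characteristic method's regression step can be sketched as follows; the drainage areas, high-altitude percentages, and flows are synthetic, and the power-law form is an assumed stand-in for the study's fitted relations:

```python
import numpy as np

# Sketch of a logarithmic regression relating monthly mean streamflow Q to
# the two most significant variables identified above: total drainage area A
# and percent of drainage area above 10,000 ft (H). Synthetic data only.
A = np.array([12.0, 35.0, 58.0, 90.0, 140.0, 210.0])  # mi^2
H = np.array([2.0, 5.0, 8.0, 12.0, 18.0, 25.0])       # % area > 10,000 ft
Q = 0.05 * A**0.9 * H**0.4                            # synthetic flows, ft^3/s

# Taking logs turns the power law into a linear model:
#   log Q = b0 + b1*log A + b2*log H
X = np.column_stack([np.ones_like(A), np.log10(A), np.log10(H)])
coef, *_ = np.linalg.lstsq(X, np.log10(Q), rcond=None)

def estimate_Q(area, pct_high):
    """Apply the fitted relation at an ungaged site."""
    b0, b1, b2 = coef
    return 10 ** (b0 + b1 * np.log10(area) + b2 * np.log10(pct_high))

print(estimate_Q(100.0, 10.0))
```

The concurrent-measurement method described above is the same idea in simpler form: a linear regression of partial-record-site measurements against concurrent flows at a nearby gaged site.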

  4. The marginal fit of E.max Press and E.max CAD lithium disilicate restorations: A critical review.

    PubMed

    Mounajjed, Radek; M Layton, Danielle; Azar, Basel

    2016-12-01

    This critical review aimed to assess the vertical marginal gap present when E.max lithium disilicate-based restorations (Press and CAD) are fabricated in vitro. Published articles reporting vertical marginal gap measurements of in vitro restorations fabricated from E.max lithium disilicate were sought with an electronic search of MEDLINE (PubMed) and a hand search of selected dental journals. The outcomes were reviewed qualitatively. The majority of studies that compared the marginal fit of E.max Press and E.max CAD restorations found that restorations fabricated with the press technique had significantly smaller marginal gaps than those fabricated with the CAD technique. This research indicates that E.max lithium disilicate restorations fabricated with the press technique have measurably smaller marginal gaps than those fabricated with CAD techniques in vitro. The marginal gaps achieved by the restorations across all groups were within a clinically acceptable range.

  5. Non-invasive methods for the determination of body and carcass composition in livestock: dual-energy X-ray absorptiometry, computed tomography, magnetic resonance imaging and ultrasound: invited review.

    PubMed

    Scholz, A M; Bünger, L; Kongsro, J; Baulain, U; Mitchell, A D

    2015-07-01

    The ability to accurately measure body or carcass composition is important for performance testing, grading and finally selection or payment of meat-producing animals. Advances especially in non-invasive techniques are mainly based on the development of electronic and computer-driven methods in order to provide objective phenotypic data. The preference for a specific technique depends on the target animal species or carcass, combined with technical and practical aspects such as accuracy, reliability, cost, portability, speed, ease of use, safety and for in vivo measurements the need for fixation or sedation. The techniques rely on specific device-driven signals, which interact with tissues in the body or carcass at the atomic or molecular level, resulting in secondary or attenuated signals detected by the instruments and analyzed quantitatively. The electromagnetic signal produced by the instrument may originate from mechanical energy such as sound waves (ultrasound - US), 'photon' radiation (X-ray-computed tomography - CT, dual-energy X-ray absorptiometry - DXA) or radio frequency waves (magnetic resonance imaging - MRI). The signals detected by the corresponding instruments are processed to measure, for example, tissue depths, areas, volumes or distributions of fat, muscle (water, protein) and partly bone or bone mineral. Among the above techniques, CT is the most accurate one followed by MRI and DXA, whereas US can be used for all sizes of farm animal species even under field conditions. CT, MRI and US can provide volume data, whereas only DXA delivers immediate whole-body composition results without (2D) image manipulation. A combination of simple US and more expensive CT, MRI or DXA might be applied for farm animal selection programs in a stepwise approach.

  6. A novel CT acquisition and analysis technique for breathing motion modeling

    NASA Astrophysics Data System (ADS)

    Low, Daniel A.; White, Benjamin M.; Lee, Percy P.; Thomas, David H.; Gaudio, Sergio; Jani, Shyam S.; Wu, Xiao; Lamb, James M.

    2013-06-01

    To report on a novel technique for providing artifact-free quantitative four-dimensional computed tomography (4DCT) image datasets for breathing motion modeling. Commercial clinical 4DCT methods have difficulty managing irregular breathing. The resulting images contain motion-induced artifacts that can distort structures and inaccurately characterize breathing motion. We have developed a novel scanning and analysis method for motion-correlated CT that utilizes standard repeated fast helical acquisitions, a simultaneous breathing surrogate measurement, deformable image registration, and a published breathing motion model. The motion model differs from the CT-measured motion by an average of 0.65 mm, indicating the precision of the motion model. The integral of the divergence of one of the motion model parameters is predicted to be a constant 1.11 and is found in this case to be 1.09, indicating the accuracy of the motion model. The proposed technique shows promise for providing motion-artifact free images at user-selected breathing phases, accurate Hounsfield units, and noise characteristics similar to non-4D CT techniques, at a patient dose similar to or less than current 4DCT techniques.
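
    The motion-model fitting step can be sketched for a single voxel; the linear surrogate form (displacement as a weighted sum of tidal volume and airflow) follows the published model cited above, while the coefficients and data here are entirely synthetic:

```python
import numpy as np

# Fit per-voxel motion-model coefficients from repeat helical scans: each
# scan pair gives a registered displacement x and a simultaneous surrogate
# reading (tidal volume v, airflow f). Model: x = alpha*v + beta*f.
rng = np.random.default_rng(2)
v = rng.uniform(0.0, 1.0, 25)             # normalized tidal volume samples
f = np.gradient(v)                        # crude airflow surrogate
x = 6.0 * v + 1.5 * f + 0.05 * rng.standard_normal(25)  # mm, one voxel

# Least-squares fit of the two coefficients for this voxel.
A = np.column_stack([v, f])
(alpha, beta), *_ = np.linalg.lstsq(A, x, rcond=None)
print(alpha, beta)                        # recovered motion-model coefficients
```

With the coefficients fitted voxel-by-voxel, the model can synthesize an artifact-free image at any user-selected breathing phase by evaluating the displacement field at the corresponding surrogate values.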

  7. Evaluation of focused multipolar stimulation for cochlear implants in acutely deafened cats

    NASA Astrophysics Data System (ADS)

    George, Shefin S.; Wise, Andrew K.; Shivdasani, Mohit N.; Shepherd, Robert K.; Fallon, James B.

    2014-12-01

    Objective. The conductive nature of the fluids and tissues of the cochlea can lead to broad activation of spiral ganglion neurons using contemporary cochlear implant stimulation configurations such as monopolar (MP) stimulation. The relatively poor spatial selectivity is thought to limit implant performance, particularly in noisy environments. Several current focusing techniques have been proposed to reduce the spread of activation with the aim towards achieving improved clinical performance. Approach. The present research evaluated the efficacy of focused multipolar (FMP) stimulation, a relatively new focusing technique in the cochlea, and compared its efficacy to both MP stimulation and tripolar (TP) stimulation. The spread of neural activity across the inferior colliculus (IC), measured by recording the spatial tuning curve, was used as a measure of spatial selectivity. Adult cats (n = 6) were acutely deafened and implanted with an intracochlear electrode array before multi-unit responses were recorded across the cochleotopic gradient of the contralateral IC. Recordings were made in response to acoustic and electrical stimulation using the MP, TP and FMP configurations. Main results. FMP and TP stimulation resulted in greater spatial selectivity than MP stimulation. However, thresholds were significantly higher (p < 0.001) for FMP and TP stimulation compared to MP stimulation. There were no differences found in spatial selectivity and threshold between FMP and TP stimulation. Significance. The greater spatial selectivity of FMP and TP stimulation would be expected to result in improved clinical performance. However, further research will be required to demonstrate the efficacy of these modes of stimulation after longer durations of deafness.

  8. Dielectric properties characterization of saline solutions by near-field microwave microscopy

    NASA Astrophysics Data System (ADS)

    Gu, Sijia; Lin, Tianjun; Lasri, Tuami

    2017-01-01

    Saline solutions are of great interest when the characterization of biological fluids is targeted. In this work a near-field microwave microscope is proposed for the characterization of liquids. An interferometric technique is suggested to enhance measurement sensitivity and accuracy. The validation of the setup and the measurement technique is conducted through the characterization of a large range of saline concentrations (0-160 mg ml⁻¹). Based on the measured resonance frequency shift and quality factor, the complex permittivity is successfully extracted, as exhibited by the good agreement found when comparing the results to data obtained from the Cole-Cole model. We demonstrate that the near-field microwave microscope (NFMM) brings a great advantage by offering the possibility to select a resonance frequency and a quality factor for a given concentration level. This method provides a very effective way to largely enhance the measurement sensitivity in high-loss materials.
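
    The Cole-Cole reference model used for comparison can be evaluated directly; the relaxation parameters and conductivity below are typical water-like values assumed for illustration, not the paper's fitted values:

```python
import numpy as np

# Cole-Cole model with a conductivity term, a common reference for saline
# solutions:  eps*(w) = eps_inf + (eps_s - eps_inf)/(1 + (j*w*tau)^(1-a))
#             - j*sigma/(w*eps0)
eps0 = 8.854e-12              # vacuum permittivity, F/m
eps_inf, eps_s = 5.0, 78.0    # high-frequency and static permittivity (assumed)
tau, alpha = 8.3e-12, 0.02    # relaxation time and broadening (assumed)
sigma = 1.5                   # S/m; ionic conductivity rises with salt content

def cole_cole(f):
    """Complex relative permittivity at frequency f (Hz)."""
    w = 2 * np.pi * f
    eps = eps_inf + (eps_s - eps_inf) / (1 + (1j * w * tau) ** (1 - alpha))
    return eps - 1j * sigma / (w * eps0)

e = cole_cole(2.45e9)         # evaluate at 2.45 GHz
print(e.real, -e.imag)        # dielectric constant and total loss factor
```

The conductivity term dominates the loss at lower microwave frequencies, which is why salt concentration shifts both the resonance frequency and the quality factor measured by the microscope.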

  9. An in-plane magnetic chiral dichroism approach for measurement of intrinsic magnetic signals using transmitted electrons

    PubMed Central

    Song, Dongsheng; Tavabi, Amir H.; Li, Zi-An; Kovács, András; Rusz, Ján; Huang, Wenting; Richter, Gunther; Dunin-Borkowski, Rafal E.; Zhu, Jing

    2017-01-01

    Electron energy-loss magnetic chiral dichroism is a powerful technique that allows the local magnetic properties of materials to be measured quantitatively with close-to-atomic spatial resolution and element specificity in the transmission electron microscope. Until now, the technique has been restricted to measurements of the magnetic circular dichroism signal in the electron beam direction. However, the intrinsic magnetization directions of thin samples are often oriented in the specimen plane, especially when they are examined in magnetic-field-free conditions in the transmission electron microscope. Here, we introduce an approach that allows in-plane magnetic signals to be measured using electron magnetic chiral dichroism by selecting a specific diffraction geometry. We compare experimental results recorded from a cobalt nanoplate with simulations to demonstrate that an electron magnetic chiral dichroism signal originating from in-plane magnetization can be detected successfully. PMID:28504267

  10. Selected Performance Measurements of the F-15 Active Axisymmetric Thrust-vectoring Nozzle

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Sims, Robert L.

    1998-01-01

    Flight tests recently completed at the NASA Dryden Flight Research Center evaluated performance of a hydromechanically vectored axisymmetric nozzle onboard the F-15 ACTIVE. A flight-test technique whereby strain gages installed onto engine mounts provided for the direct measurement of thrust and vector forces has proven to be extremely valuable. Flow turning and thrust efficiency, as well as nozzle static pressure distributions were measured and analyzed. This report presents results from testing at an altitude of 30,000 ft and a speed of Mach 0.9. Flow turning and thrust efficiency were found to be significantly different than predicted, and moreover, varied substantially with power setting and pitch vector angle. Results of an in-flight comparison of the direct thrust measurement technique and an engine simulation fell within the expected uncertainty bands. Overall nozzle performance at this flight condition demonstrated the F100-PW-229 thrust-vectoring nozzles to be highly capable and efficient.

  11. Critical Review of Industrial Techniques for Thermal-Conductivity Measurements of Thermal Insulation Materials

    NASA Astrophysics Data System (ADS)

    Hammerschmidt, Ulf; Hameury, Jacques; Strnad, Radek; Turzó-Andras, Emese; Wu, Jiyu

    2015-07-01

    This paper presents a critical review of current industrial techniques and instruments to measure the thermal conductivity of thermal insulation materials, especially those insulations that can operate at temperatures above and up to . These materials generally are of a porous nature. The measuring instruments dealt with here are selected based on their maximum working temperature that should be higher than at least . These instruments are special types of the guarded hot-plate apparatus, the guarded heat-flow meter, the transient hot-wire and hot-plane instruments as well as the laser/xenon flash devices. All technical characteristics listed are quoted from the generally accessible information of the relevant manufacturers. The paper includes rankings of the instruments according to their standard retail price, the maximum sample size, and maximum working temperature, as well as the minimum in their measurement range.

  12. Selected Performance Measurements of the F-15 ACTIVE Axisymmetric Thrust-Vectoring Nozzle

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Sims, Robert L.

    1999-01-01

    Flight tests recently completed at the NASA Dryden Flight Research Center evaluated performance of a hydromechanically vectored axisymmetric nozzle onboard the F-15 ACTIVE. A flight-test technique whereby strain gages installed onto engine mounts provided for the direct measurement of thrust and vector forces has proven to be extremely valuable. Flow turning and thrust efficiency, as well as nozzle static pressure distributions were measured and analyzed. This report presents results from testing at an altitude of 30,000 ft and a speed of Mach 0.9. Flow turning and thrust efficiency were found to be significantly different than predicted, and moreover, varied substantially with power setting and pitch vector angle. Results of an in-flight comparison of the direct thrust measurement technique and an engine simulation fell within the expected uncertainty bands. Overall nozzle performance at this flight condition demonstrated the F100-PW-229 thrust-vectoring nozzles to be highly capable and efficient.

  13. Balloon-borne photoionization mass spectrometer for measurement of stratospheric gases

    NASA Technical Reports Server (NTRS)

    Aikin, A. C.; Maier, E. J. R.

    1978-01-01

    A balloon-borne photoionization mass spectrometer used to measure stratospheric trace gases is described. Ions are created with photons from high-intensity krypton discharge lamps, and a quadrupole mass analyzer is employed for ion identification. Differential pumping is achieved with liquid helium cryopumping. To ensure measurement of unperturbed stratospheric air, the entire system is contained in a sealed gondola and the atmospheric sample is taken some distance away during descent. The photoionization technique allows the detection of a low-ionization-potential constituent, such as nitric oxide, at less than a part in one billion in the presence of the major atmospheric gases and their isotopes. Operation of the mass spectrometer system was demonstrated during a daytime flight from Palestine, Texas, on 26 April 1977. The sensitivity achieved and the unique selectivity afforded by this technique offer a capability for trace constituent measurement not possible with the more conventional electron impact ionization approach.

  14. Measurements of Aitken Visual Binary Stars: 2017 Report

    NASA Astrophysics Data System (ADS)

    Sérot, J.

    2018-07-01

    This paper is a continuation of that published in [1]. It presents measurements of 136 visual binary stars discovered by R. G. Aitken and listed in the WDS catalog. These measurements were obtained between January and December 2017 with an 11" reflector telescope and two types of cameras: an ASI 290MM CMOS-based camera and a Raptor Kite EM-CCD. Binaries with a secondary component up to magnitude 15 and separations between 0.4 and 5 arcsec have been measured. The selection also includes pairs exhibiting a large difference in magnitude between components (up to Δm = 6). Measurements were mostly obtained using the auto-correlation technique described in [1] but also, and this is an innovative aspect of the paper, using the so-called bispectrum reduction technique supported by the latest version of the SpeckelToolBox software. As in [1], a significant fraction of the observed pairs had not been observed in the previous decades and show significant motion compared to their last measurement.

  15. The Sonographic Appearance of Spinal Fluid at Clinically Selected Interspaces in Sitting Versus Lateral Positions.

    PubMed

    Vitberg, Yaffa M; Tseng, Peggy; Kessler, David O

    2018-05-01

    Our objective was to describe the sonographic appearance of fluid at clinically selected interspinous spaces and see if additional interspaces could be identified as suitable and safe targets for needle insertion. We also measured the reproducibility of fluid measurements and assessed for positional differences. A prospective convenience sample of infants younger than 3 months was enrolled in the pediatric emergency department. Excluded were clinically unstable infants or those with spinal dysraphism. Infants were first held in standard lateral lumbar puncture position. Pediatric emergency medicine (PEM) physicians marked infants' backs at the level they would insert a needle using the landmark palpation technique. A PEM sonologist imaged and measured the spinal fluid in 2 orthogonal planes at this marked level in lateral then sitting positions. Fluid measurements were repeated by a second blinded PEM sonologist. Forty-six infants were enrolled. Ultrasound verified the presence of fluid at the marked level as determined by the landmark palpation technique in 98% of cases. Ultrasound identified additional suitable spaces 1 space higher (82%) and 2 spaces higher (41%). Intraclass correlation coefficient of all measurements was excellent (>0.85), with differences noted for sitting versus lateral position in mean area of fluid 0.34 mm versus 0.31 mm (difference, 0.03; 95% confidence interval [CI], 0.005-0.068), dorsal fluid pocket 0.23 mm versus 0.15 mm (difference, 0.08; 95% CI, 0.031-0.123), and nerve root-to-canal ratio 0.44 versus 0.51 (difference, 0.07; 95% CI, 0.004-0.117). Ultrasound can verify the presence of fluid at interspaces determined by the landmark palpation technique and identify additional suitable spaces at higher levels. There were statistically greater fluid measurements in sitting versus lateral positions. These novel fluid measurements were shown to be reliable.
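
    The inter-rater reliability reported above is an intraclass correlation coefficient. As a minimal illustration, the two-way random-effects, single-rater form ICC(2,1) can be computed from paired observer measurements; the fluid values below are hypothetical stand-ins, not the study's data.

```python
import statistics

def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1).
    `ratings` is a list of [rater1, rater2, ...] rows, one row per subject."""
    n = len(ratings)            # subjects
    k = len(ratings[0])         # raters
    grand = statistics.mean(v for row in ratings for v in row)
    row_means = [statistics.mean(row) for row in ratings]
    col_means = [statistics.mean(col) for col in zip(*ratings)]
    # Mean squares from the two-way ANOVA decomposition
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)   # raters
    sse = sum((ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical paired fluid measurements (mm) from two blinded observers
pairs = [[0.31, 0.33], [0.25, 0.26], [0.40, 0.38], [0.22, 0.24], [0.35, 0.36]]
print(round(icc_2_1(pairs), 3))
```

    With closely agreeing raters, as here, the coefficient approaches 1, matching the "excellent (>0.85)" range the abstract reports.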

  16. A new bimetallic plasmonic photocatalyst consisting of gold(core)-copper(shell) nanoparticle and titanium(IV) oxide support

    NASA Astrophysics Data System (ADS)

    Sato, Yuichi; Naya, Shin-ichi; Tada, Hiroaki

    2015-10-01

    Ultrathin Cu layers (˜2 atomic layers) have been selectively formed on the Au surfaces of Au nanoparticle-loaded rutile TiO2 (Au@Cu/TiO2) by a deposition precipitation-photodeposition technique. Cyclic voltammetry and photochronopotentiometry measurements indicate that the reaction proceeds via the underpotential deposition. The ultrathin Cu shell drastically increases the activity of Au/TiO2 for the selective oxidation of amines to the corresponding aldehydes under visible-light irradiation (λ > 430 nm). Photochronoamperometry measurements strongly suggest that the striking Cu shell effect stems from the enhancement of the charge separation in the localized surface plasmon resonance-excited Au/TiO2.

  17. Study of Vis/NIR spectroscopy measurement on acidity of yogurt

    NASA Astrophysics Data System (ADS)

    He, Yong; Feng, Shuijuan; Wu, Di; Li, Xiaoli

    2006-09-01

    A fast method for measuring the pH of yogurt using Vis/NIR spectroscopy was established in order to determine the acidity of yogurt rapidly. 27 samples selected separately from five different brands of yogurt were measured by Vis/NIR spectroscopy. The pH of the yogurt at the positions scanned by the spectrometer was measured with a pH meter. A mathematical model relating pH to the Vis/NIR spectral measurements was established and developed based on partial least squares (PLS) using Unscrambler V9.2. Then 25 unknown samples from the 5 different brands were predicted using the mathematical model. The results show that the correlation coefficient of pH based on the PLS model is more than 0.890, with a standard error of calibration (SEC) of 0.037 and a standard error of prediction (SEP) of 0.043. In predicting the pH of the 25 samples of yogurt from 5 different brands, the correlation coefficient between predicted and measured values is more than 0.918. These results show good to excellent prediction performance. The Vis/NIR spectroscopy technique had significantly greater accuracy for determining pH. It was concluded that the Vis/NIR spectroscopy measurement technique can be used to measure the pH of yogurt quickly and accurately, establishing a new method for the measurement of the pH of yogurt.
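
    The calibration statistics quoted above (correlation coefficient, SEC, SEP) are simple functions of measured versus predicted values. A minimal sketch, using hypothetical measured and PLS-predicted pH pairs rather than the study's data:

```python
import math

def correlation(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def standard_error(measured, predicted):
    """Root-mean-square residual, the usual SEC/SEP-style statistic."""
    n = len(measured)
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n)

# Hypothetical measured vs PLS-predicted pH values for a few yogurt samples
measured  = [4.10, 4.25, 4.02, 4.40, 4.18, 4.31]
predicted = [4.13, 4.21, 4.05, 4.36, 4.22, 4.28]
print(round(correlation(measured, predicted), 3),
      round(standard_error(measured, predicted), 3))
```

    SEC and SEP differ only in which set the residuals come from: the calibration samples or the independently predicted ones.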

  18. Optical techniques for the determination of nitrate in environmental waters: Guidelines for instrument selection, operation, deployment, maintenance, quality assurance, and data reporting

    USGS Publications Warehouse

    Pellerin, Brian A.; Bergamaschi, Brian A.; Downing, Bryan D.; Saraceno, John Franco; Garrett, Jessica D.; Olsen, Lisa D.

    2013-01-01

    The recent commercial availability of in situ optical sensors, together with new techniques for data collection and analysis, provides the opportunity to monitor a wide range of water-quality constituents on time scales in which environmental conditions actually change. Of particular interest is the application of ultraviolet (UV) photometers for in situ determination of nitrate concentrations in rivers and streams. The variety of UV nitrate sensors currently available differ in several important ways related to instrument design that affect the accuracy of their nitrate concentration measurements in different types of natural waters. This report provides information about selection and use of UV nitrate sensors by the U.S. Geological Survey to facilitate the collection of high-quality data across studies, sites, and instrument types. For those in need of technical background and information about sensor selection, this report addresses the operating principles, key features and sensor design, sensor characterization techniques and typical interferences, and approaches for sensor deployment. For those needing information about maintaining sensor performance in the field, key sections in this report address maintenance and calibration protocols, quality-assurance techniques, and data formats and reporting. Although the focus of this report is UV nitrate sensors, many of the principles can be applied to other in situ optical sensors for water-quality studies.
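
    UV nitrate photometers of the kind described above determine concentration from absorbance via the Beer-Lambert law, A = εcl. A minimal sketch of that inversion; the molar absorptivity and 1 cm path length below are illustrative assumptions, not values from the report:

```python
import math

def absorbance(i0, i):
    """A = -log10(I / I0) from source and transmitted intensities."""
    return -math.log10(i / i0)

def concentration(a, epsilon, path_cm):
    """Beer-Lambert law A = epsilon * c * l, solved for c (mol/L)."""
    return a / (epsilon * path_cm)

# Illustrative values only: nitrate absorbs strongly in the UV near 220 nm;
# the molar absorptivity and path length here are assumptions for the sketch.
a = absorbance(1000.0, 250.0)
c = concentration(a, epsilon=9000.0, path_cm=1.0)
print(round(a, 3), f"{c:.2e}")
```

    Real sensors additionally correct for interfering absorbers (e.g. dissolved organic matter), which is why the report devotes attention to interference characterization.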

  19. Development of a laboratory prototype water quality monitoring system suitable for use in zero gravity

    NASA Technical Reports Server (NTRS)

    Misselhorn, J. E.; Witz, S.; Hartung, W. H.

    1973-01-01

    The development of a laboratory prototype water quality monitoring system for use in the evaluation of candidate water recovery systems and for the study of techniques for measuring potability parameters is reported. Sensing techniques for monitoring the most desirable parameters are reviewed in terms of their sensitivities and complexities, and recommendations for sensing techniques are presented. The rationale for selecting the parameters to be monitored (pH, specific conductivity, Cr(+6), I2, total carbon, and bacteria) in a next-generation water monitor is presented, along with an estimate of flight system specifications. A master water monitor development schedule is included.

  20. Spectroscopic study of PbS nano-structured layer prepared by PLD utilized as a Hall-effect magnetic sensor

    NASA Astrophysics Data System (ADS)

    Atwa, D. M.; Aboulfotoh, N.; El-magd, A. Abo; Badr, Y.

    2013-10-01

    Lead sulfide (PbS) nano-structured films have been grown on quartz substrates using the PLD technique. The deposited films were characterized by several structural techniques, including scanning electron microscopy (SEM), transmission electron microscopy (TEM), and selected-area electron diffraction (SAED) patterns. The results prove the formation of the cubic phase of PbS nanocrystals. Elemental analysis of the deposited films, compared to the bulk target, was obtained via laser-induced fluorescence of the produced plasma particles and the energy-dispersive X-ray (EDX) technique. The Hall coefficient measurements indicate an efficient performance of the deposited films as a magnetic sensor.

  1. Application of neutron capture autoradiography to Boron Delivery seeking techniques for selective accumulation of boron compounds to tumor with intra-arterial administration of boron entrapped water-in-oil-in-water emulsion

    NASA Astrophysics Data System (ADS)

    Mikado, S.; Yanagie, H.; Yasuda, N.; Higashi, S.; Ikushima, I.; Mizumachi, R.; Murata, Y.; Morishita, Y.; Nishimura, R.; Shinohara, A.; Ogura, K.; Sugiyama, H.; Iikura, H.; Ando, H.; Ishimoto, M.; Takamoto, S.; Eriguchi, M.; Takahashi, H.; Kimura, M.

    2009-06-01

    It is necessary to accumulate the 10B atoms selectively to the tumor cells for effective Boron Neutron Capture Therapy (BNCT). In order to achieve an accurate measurement of 10B accumulations in the biological samples, we employed a technique of neutron capture autoradiography (NCAR) of sliced samples of tumor tissues using CR-39 plastic track detectors. The CR-39 track detectors attached with the biological samples were exposed to thermal neutrons in the thermal column of the JRR3 of Japan Atomic Energy Agency (JAEA). We obtained quantitative NCAR images of the samples for VX-2 tumor in rabbit liver after injection of 10BSH entrapped water-in-oil-in-water (WOW) emulsion by intra-arterial injection via proper hepatic artery. The 10B accumulations and distributions in VX-2 tumor and normal liver of rabbit were investigated by means of alpha-track density measurements. In this study, we showed the selective accumulation of 10B atoms in the VX-2 tumor by intra-arterial injection of 10B entrapped WOW emulsion until 3 days after injection by using digitized NCAR images (i.e. alpha-track mapping).

  2. Estimation of end point foot clearance points from inertial sensor data.

    PubMed

    Santhiranayagam, Braveena K; Lai, Daniel T H; Begg, Rezaul K; Palaniswami, Marimuthu

    2011-01-01

    Foot clearance parameters provide useful insight into tripping risks during walking. This paper proposes a technique for estimating key foot clearance parameters using inertial sensor (accelerometer and gyroscope) data. Fifteen features were extracted from raw inertial sensor measurements, and a regression model was used to estimate two key foot clearance parameters: the first maximum vertical clearance (mx1) after toe-off and the minimum toe clearance (MTC) of the swing foot. Comparisons are made against measurements obtained using an optoelectronic motion capture system (Optotrak) at 4 different walking speeds. General Regression Neural Networks (GRNN) were used to estimate the desired parameters from the sensor features. Eight subjects' foot clearance data were examined, and a leave-one-subject-out (LOSO) method was used to select the best model. The best average root mean square errors (RMSE) across all subjects obtained using all sensor features at the maximum speed were 5.32 mm for mx1 and 4.04 mm for MTC. Further application of a hill-climbing feature selection technique resulted in a 0.54-21.93% improvement in RMSE while requiring fewer input features. The results demonstrated that using raw inertial sensor data with regression models and feature selection can accurately estimate key foot clearance parameters.
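
    A GRNN is essentially Nadaraya-Watson kernel regression: each prediction is a Gaussian-weighted average of the training targets. A minimal one-feature sketch with synthetic data (the paper's actual model uses 15 inertial-sensor features and per-subject cross-validation):

```python
import math

def grnn_predict(x_train, y_train, x, sigma=0.5):
    """General Regression Neural Network prediction: a Gaussian-kernel
    weighted average of training targets (the Nadaraya-Watson estimator).
    `sigma` is the smoothing bandwidth."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in x_train]
    return sum(w * yi for w, yi in zip(weights, y_train)) / sum(weights)

# Synthetic stand-in data: one sensor feature vs minimum toe clearance (mm)
x_train = [0.0, 1.0, 2.0, 3.0, 4.0]
y_train = [10.0, 12.0, 15.0, 13.0, 11.0]
print(round(grnn_predict(x_train, y_train, 2.0, sigma=0.3), 2))
```

    A small sigma makes predictions track the nearest training samples; LOSO validation, as in the paper, would tune sigma on held-out subjects.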

  3. THz Spectroscopy of the Atmosphere

    NASA Technical Reports Server (NTRS)

    Pickett, Herbert M.

    2000-01-01

    THz spectroscopy of the atmosphere has been driven by the need to make remote sensing measurements of OH. While the THz region can be used for sensitive detection of many atmospheric molecules, it is the best region for measuring the diurnal behavior of stratospheric OH by remote sensing. The infrared region near 3 microns suffers from chemiluminescence and from spectral contamination due to water. The ultraviolet region near 300 nm requires solar illumination. The three techniques for OH emission measurements in the THz region include Fourier transform interferometry, Fabry-Perot interferometry, and heterodyne radiometry. The first two use cryogenic direct detectors, while the last technique uses a local oscillator and a mixer to down-convert the THz signal to GHz frequencies. All techniques have been used to measure stratospheric OH from balloon platforms. OH results from the Fabry-Perot-based FILOS instrument will be given. Heterodyne measurement of OH at 2.5 THz has been selected to be a component of the Microwave Limb Sounder on the Earth Observing System CHEM-1 polar satellite. The design of this instrument will be described. A balloon-based prototype heterodyne 2.5 THz radiometer had its first flight on 24 May 1998. Results from this flight will be presented.

  4. Monochromatic neutron beam production at Brazilian nuclear research reactors

    NASA Astrophysics Data System (ADS)

    Stasiulevicius, Roberto; Rodrigues, Claudio; Parente, Carlos B. R.; Voi, Dante L.; Rogers, John D.

    2000-12-01

    Monochromatic beams of neutrons are obtained from a nuclear reactor's polychromatic beam by diffraction, using a single-crystal energy selector. In Brazil, two nuclear research reactors, the swimming-pool model IEA-R1 and the Argonaut-type IEN-R1, have been used to carry out measurements with this technique. Neutron spectra have been measured using crystal spectrometers installed on the main beam lines of each reactor. The performance of conventional artificial and natural selected crystals has been verified with the multipurpose neutron diffractometers installed at IEA-R1 and the single-crystal spectrometer in operation at IEN-R1. A practical figure-of-merit formula was introduced to evaluate the performance and relative reflectivity of the selected planes of a single crystal. A total of 16 natural crystals were selected for use in the neutron monochromator, comprising a total of 24 families of planes. Twelve of these natural crystal types and their respective best families of planes were measured directly with the multipurpose neutron diffractometers. The neutron spectrometer installed at IEN-R1 was used to confirm test results for the better specimens. The spacing-distance range of conventional artificial crystals is usually limited to 3.4 angstrom. The interplanar distance range has now been increased to approximately 10 angstrom by the use of naturally occurring crystals. The neutron diffraction technique with conventional and natural crystals for energy selection and filtering can be utilized to obtain monochromatic sub-thermal and thermal neutrons with energies in the range of 0.001 to 10 eV. The thermal neutron is considered a good tool or probe for general applications in various fields, such as condensed matter, chemistry, biology, and industrial applications.
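
    The energy selection described above follows Bragg's law, nλ = 2d sin θ, with the selected neutron energy then given by E = h²/(2mλ²). A minimal sketch; the pyrolytic graphite d-spacing and the Bragg angle below are illustrative choices, not values from the paper:

```python
import math

H = 6.62607015e-34        # Planck constant, J*s
M_N = 1.67492749804e-27   # neutron mass, kg
EV = 1.602176634e-19      # J per eV

def bragg_wavelength(d_angstrom, theta_deg, order=1):
    """Bragg's law n*lambda = 2*d*sin(theta), returning lambda in angstrom."""
    return 2.0 * d_angstrom * math.sin(math.radians(theta_deg)) / order

def neutron_energy_ev(lambda_angstrom):
    """E = h^2 / (2 m lambda^2) for a neutron of de Broglie wavelength lambda."""
    lam = lambda_angstrom * 1e-10
    return H ** 2 / (2.0 * M_N * lam ** 2) / EV

# Illustrative: pyrolytic graphite (002), d = 3.355 angstrom, Bragg angle 15.5 deg
lam = bragg_wavelength(3.355, 15.5)
print(round(lam, 3), round(neutron_energy_ev(lam), 4))
```

    Rotating the crystal scans the selected wavelength, which is how a crystal spectrometer sweeps out a neutron spectrum.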

  5. Preliminary study of the use of radiotracers for leak detection in industrial applications

    NASA Astrophysics Data System (ADS)

    Wetchagarun, S.; Petchrak, A.; Tippayakul, C.

    2015-05-01

    One of the most widespread uses of radiotracers in industrial applications is leak detection. This technique can be applied, for example, to detect leaks in heat exchangers or along buried industrial pipelines. The ability to perform online investigation is one of the most important advantages of the radiotracer technique over other non-radioactive leak detection methods. In this paper, a preliminary laboratory-scale study of leak detection using a radiotracer is presented. Br-82 was selected for this work due to its chemical properties, its suitable half-life, and its on-site availability. NH4Br in the form of an aqueous solution was injected into the experimental system as the radiotracer. Three NaI detectors were placed along the pipelines to measure the system flow rate and to detect leakage from the piping system. The results obtained from the radiotracer technique were compared to those measured by other methods. It was found that the flow rate obtained from the radiotracer technique agreed well with the one obtained from the flow meter. The leak rate results, however, showed a discrepancy between the two different measuring methods, indicating that further study of leak detection is required before applying this technique in an industrial system.

  6. Ground Vibration Test Planning and Pre-Test Analysis for the X-33 Vehicle

    NASA Technical Reports Server (NTRS)

    Bedrossian, Herand; Tinker, Michael L.; Hidalgo, Homero

    2000-01-01

    This paper describes the results of the modal test planning and the pre-test analysis for the X-33 vehicle. The pre-test analysis included the selection of the target modes, selection of the sensor and shaker locations, and the development of an accurate Test Analysis Model (TAM). For target mode selection, four techniques were considered: one based on the Modal Cost technique, one based on the Balanced Singular Value technique, one known as the Root Sum Squared (RSS) method, and a Modal Kinetic Energy (MKE) approach. For selecting sensor locations, four techniques were also considered: one based on the Weighted Average Kinetic Energy (WAKE), one based on Guyan Reduction (GR), one emphasizing engineering judgment, and one based on an optimum sensor selection technique using a Genetic Algorithm (GA) search combined with a criterion based on Hankel Singular Values (HSVs). For selecting shaker locations, four techniques were likewise considered: one based on the Weighted Average Driving Point Residue (WADPR), one based on engineering judgment and accessibility considerations, a frequency response method, and an optimum shaker location selection based on a GA search combined with a criterion based on HSVs. To evaluate the effectiveness of the proposed sensor and shaker locations for exciting the target modes, extensive numerical simulations were performed. The Multivariate Mode Indicator Function (MMIF) was used to evaluate the effectiveness of each sensor and shaker set with respect to modal parameter identification. Several TAM reduction techniques were considered, including Guyan, IRS, Modal, and Hybrid. Based on pre-test cross-orthogonality checks using the various reduction techniques, a Hybrid TAM reduction technique was selected and used for all three vehicle fuel level configurations.
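
    Of the selection criteria listed above, the Modal Kinetic Energy approach is the simplest to sketch: each candidate degree of freedom is scored by its mass times the squared mode-shape amplitude, summed over the target modes, and the highest-scoring locations are kept. A toy sketch with a hypothetical lumped-mass model (not X-33 data):

```python
def modal_kinetic_energy(masses, mode_shapes):
    """MKE score per DOF: sum over target modes j of m_i * phi_ij**2.
    `mode_shapes[j][i]` is the j-th target mode's amplitude at DOF i."""
    n = len(masses)
    return [sum(shape[i] ** 2 * masses[i] for shape in mode_shapes)
            for i in range(n)]

def pick_sensors(masses, mode_shapes, count):
    """Return the indices of the `count` highest-MKE degrees of freedom."""
    scores = modal_kinetic_energy(masses, mode_shapes)
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:count]

# Hypothetical 4-DOF lumped-mass model with two target mode shapes
masses = [2.0, 1.0, 1.0, 3.0]
modes = [[0.1, 0.9, 0.4, 0.2],   # target mode 1
         [0.8, 0.1, 0.5, 0.3]]   # target mode 2
print(pick_sensors(masses, modes, 2))
```

    The GA-plus-HSV approaches in the paper optimize a richer observability criterion, but this energy ranking is a common first screen for sensor placement.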

  7. Optical tracking of nanoscale particles in microscale environments

    NASA Astrophysics Data System (ADS)

    Mathai, P. P.; Liddle, J. A.; Stavis, S. M.

    2016-03-01

    The trajectories of nanoscale particles through microscale environments record useful information about both the particles and the environments. Optical microscopes provide efficient access to this information through measurements of light in the far field from nanoparticles. Such measurements necessarily involve trade-offs in tracking capabilities. This article presents a measurement framework, based on information theory, that facilitates a more systematic understanding of such trade-offs to rationally design tracking systems for diverse applications. This framework includes the degrees of freedom of optical microscopes, which determine the limitations of tracking measurements in theory. In the laboratory, tracking systems are assemblies of sources and sensors, optics and stages, and nanoparticle emitters. The combined characteristics of such systems determine the limitations of tracking measurements in practice. This article reviews this tracking hardware with a focus on the essential functions of nanoparticles as optical emitters and microenvironmental probes. Within these theoretical and practical limitations, experimentalists have implemented a variety of tracking systems with different capabilities. This article reviews a selection of apparatuses and techniques for tracking multiple and single particles by tuning illumination and detection, and by using feedback and confinement to improve the measurements. Prior information is also useful in many tracking systems and measurements, which apply across a broad spectrum of science and technology. In the context of the framework and review of apparatuses and techniques, this article reviews a selection of applications, with particle diffusion serving as a prelude to tracking measurements in biological, fluid, and material systems, fabrication and assembly processes, and engineered devices. In so doing, this review identifies trends and gaps in particle tracking that might influence future research.

  8. Synthesis of Novel CuO Nanosheets and Their Non-Enzymatic Glucose Sensing Applications

    PubMed Central

    Ibupoto, Zafar Hussain; Khun, Kimleang; Beni, Valerio; Liu, Xianjie; Willander, Magnus

    2013-01-01

    In this study, we have developed a sensitive and selective glucose sensor using novel CuO nanosheets which were grown on a gold-coated glass substrate by a low temperature growth method. X-ray diffraction (XRD) and scanning electron microscopy (SEM) techniques were used for the structural characterization of the CuO nanostructures. The CuO nanosheets are highly dense and uniform, and exhibit a well-crystallized array structure. The X-ray photoelectron spectroscopy (XPS) technique was applied to study the chemical composition of the CuO nanosheets, and the obtained information demonstrated pure-phase CuO nanosheets. The novel CuO nanosheets were employed for the development of a sensitive and selective non-enzymatic glucose sensor. The measured sensitivity and correlation coefficient are on the order of 5.20 × 10² μA/mM·cm² and 0.998, respectively. The proposed sensor offers several advantages, such as low cost, simplicity, high stability, reproducibility, and selectivity for the quick detection of glucose. PMID:23787727

  9. Synthesis of novel CuO nanosheets and their non-enzymatic glucose sensing applications.

    PubMed

    Ibupoto, Zafar Hussain; Khun, Kimleang; Beni, Valerio; Liu, Xianjie; Willander, Magnus

    2013-06-20

    In this study, we have developed a sensitive and selective glucose sensor using novel CuO nanosheets which were grown on a gold-coated glass substrate by a low temperature growth method. X-ray diffraction (XRD) and scanning electron microscopy (SEM) techniques were used for the structural characterization of the CuO nanostructures. The CuO nanosheets are highly dense and uniform, and exhibit a well-crystallized array structure. The X-ray photoelectron spectroscopy (XPS) technique was applied to study the chemical composition of the CuO nanosheets, and the obtained information demonstrated pure-phase CuO nanosheets. The novel CuO nanosheets were employed for the development of a sensitive and selective non-enzymatic glucose sensor. The measured sensitivity and correlation coefficient are on the order of 5.20 × 10² µA/mM·cm² and 0.998, respectively. The proposed sensor offers several advantages, such as low cost, simplicity, high stability, reproducibility, and selectivity for the quick detection of glucose.
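
    The sensitivity figure quoted in both records above is, in amperometric sensing, the slope of a linear calibration of current density against concentration, and the correlation coefficient scores its linearity. A minimal least-squares sketch with hypothetical calibration data (not the paper's measurements):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept; for an amperometric
    sensor the slope of current density vs concentration is the sensitivity."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibration points: glucose (mM) vs current density (uA/cm^2)
conc    = [0.1, 0.5, 1.0, 2.0, 4.0]
current = [55.0, 262.0, 518.0, 1045.0, 2078.0]
sensitivity, intercept = linear_fit(conc, current)
print(round(sensitivity, 1))
```

    With these illustrative points the fitted slope lands near 5.2 × 10² µA/mM·cm², the same order as the reported sensitivity.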

  10. Status and outlook of CHIP-TRAP: The Central Michigan University high precision Penning trap

    NASA Astrophysics Data System (ADS)

    Redshaw, M.; Bryce, R. A.; Hawks, P.; Gamage, N. D.; Hunt, C.; Kandegedara, R. M. E. B.; Ratnayake, I. S.; Sharp, L.

    2016-06-01

    At Central Michigan University we are developing a high-precision Penning trap mass spectrometer (CHIP-TRAP) that will focus on measurements with long-lived radioactive isotopes. CHIP-TRAP will consist of a pair of hyperbolic precision-measurement Penning traps, and a cylindrical capture/filter trap in a 12 T magnetic field. Ions will be produced by external ion sources, including a laser ablation source, and transported to the capture trap at low energies enabling ions of a given m / q ratio to be selected via their time-of-flight. In the capture trap, contaminant ions will be removed with a mass-selective rf dipole excitation and the ion of interest will be transported to the measurement traps. A phase-sensitive image charge detection technique will be used for simultaneous cyclotron frequency measurements on single ions in the two precision traps, resulting in a reduction in statistical uncertainty due to magnetic field fluctuations.
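
    The underlying observable in such a trap is the cyclotron frequency ν_c = qB/(2πm); mass ratios follow from frequency ratios taken in the same field. A minimal sketch with illustrative parameters (the singly charged A = 100 ion is an arbitrary example, not a stated CHIP-TRAP target):

```python
import math

E_CHARGE = 1.602176634e-19    # elementary charge, C
AMU      = 1.66053906660e-27  # atomic mass unit, kg

def cyclotron_freq_hz(mass_u, charge_state, b_tesla):
    """Free-space cyclotron frequency nu_c = q * B / (2 * pi * m)."""
    return charge_state * E_CHARGE * b_tesla / (2.0 * math.pi * mass_u * AMU)

def mass_ratio(freq_ion, freq_ref):
    """For equal charge states in the same field, m_ion / m_ref = nu_ref / nu_ion."""
    return freq_ref / freq_ion

# Illustrative: a singly charged A = 100 ion in a 12 T field
nu = cyclotron_freq_hz(100.0, 1, 12.0)
print(f"{nu:.3e}")
```

    Measuring two ions simultaneously, as CHIP-TRAP's twin precision traps are designed to do, cancels common-mode magnetic field drift out of the frequency ratio.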

  11. Surface roughness effects on bidirectional reflectance

    NASA Technical Reports Server (NTRS)

    Smith, T. F.; Hering, R. G.

    1972-01-01

    An experimental study of surface roughness effects on bidirectional reflectance of metallic surfaces is presented. A facility capable of irradiating a sample from normal to grazing incidence and recording plane of incidence bidirectional reflectance measurements was developed. Samples consisting of glass, aluminum alloy, and stainless steel materials were selected for examination. Samples were roughened using standard grinding techniques and coated with a radiatively opaque layer of pure aluminum. Mechanical surface roughness parameters, rms heights and rms slopes, evaluated from digitized surface profile measurements are less than 1.0 micrometers and 0.28, respectively. Rough surface specular, bidirectional, and directional reflectance measurements for selected values of polar angle of incidence and wavelength of incident energy within the spectral range of 1 to 14 micrometers are reported. The Beckmann bidirectional reflectance model is compared with reflectance measurements to establish its usefulness in describing the magnitude and spatial distribution of energy reflected from rough surfaces.
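
    The Beckmann model's specular term compared against these measurements attenuates the smooth-surface reflectance by a roughness factor exp[-(4πσ cos θ / λ)²]. A minimal sketch; the smooth-surface reflectance and roughness below are illustrative values chosen within the paper's stated ranges, not its data:

```python
import math

def beckmann_specular(rho_smooth, sigma_um, wavelength_um, theta_deg):
    """Specular reflectance of a rough surface per the Beckmann model:
    rho_s = rho_smooth * exp(-(4*pi*sigma*cos(theta)/lambda)**2)."""
    g = 4.0 * math.pi * sigma_um * math.cos(math.radians(theta_deg)) / wavelength_um
    return rho_smooth * math.exp(-g * g)

# Illustrative: rms roughness 0.5 um (within the paper's < 1.0 um range),
# an aluminum-like smooth reflectance of 0.95, 10 um wavelength, 30 deg incidence
print(round(beckmann_specular(0.95, 0.5, 10.0, 30.0), 3))
```

    The attenuation depends on σ/λ, which is why roughness effects that are severe at 1 µm nearly vanish toward the 14 µm end of the paper's spectral range.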

  12. Optimising the measurement of bruises in children across conventional and cross polarized images using segmentation analysis techniques in Image J, Photoshop and circle diameter measurements.

    PubMed

    Harris, C; Alcock, A; Trefan, L; Nuttall, D; Evans, S T; Maguire, S; Kemp, A M

    2018-02-01

Bruising is a common abusive injury in children, and it is standard practice to image and measure bruises, yet there is no current standard for measuring bruise size consistently. We aim to identify the optimal method of measuring photographic images of bruises, including computerised measurement techniques. Twenty-four children aged <11 years (mean age 6.9, range 2.5-10 years) with a bruise were recruited from the community. Demographics and bruise details were recorded. Each bruise was measured in vivo using a paper measuring tape. Standardised conventional and cross-polarized digital images were obtained. The diameters of the bruise images were measured by three computer-aided measurement techniques: ImageJ (segmentation with Simple Interactive Object Extraction; maximum Feret diameter), the 'Circular Selection Tool' (Circle diameter), and the Photoshop 'ruler' software (Photoshop diameter). Inter- and intra-observer effects were determined by two individuals repeating 11 electronic measurements, and the relevant Intraclass Correlation Coefficients (ICCs) were used to establish reliability. Spearman's rank correlation was used to compare in vivo with computerised measurements; a comparison of measurement techniques across imaging modalities was conducted using Kolmogorov-Smirnov tests. Significance was set at p < 0.05 for all tests. Images were available for 38 bruises in vivo, with 48 bruises visible on cross-polarized imaging and 46 on conventional imaging (some bruises interpreted as being single in vivo appeared to be multiple in digital images). Correlation coefficients were >0.5 for all techniques, with maximum Feret diameter and maximum Photoshop diameter on conventional images having the strongest correlation with in vivo measurements. There were significant differences between in vivo and computer-aided measurements, but none between different computer-aided measurement techniques. Overall, computer-aided measurements appeared larger than in vivo.
Inter- and intra-observer agreement was high for all maximum diameter measurements (ICCs > 0.7). Whilst there are minimal differences between measurements of images obtained, the most consistent results were obtained when conventional images, segmented by ImageJ software, were measured with a Feret diameter. This is therefore proposed as a standard for future research and forensic practice, with the proviso that all computer-aided measurements appear larger than in vivo. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
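The maximum Feret diameter used above (the longest caliper distance across a segmented bruise) can be illustrated with a minimal numpy sketch; this is a generic brute-force illustration on an invented mask, not the ImageJ implementation:

```python
import numpy as np

def max_feret_diameter(mask: np.ndarray) -> float:
    """Maximum Feret diameter of a binary mask: the greatest pairwise
    Euclidean distance between foreground pixel centres."""
    pts = np.argwhere(mask)                  # (N, 2) row/col coordinates
    if len(pts) < 2:
        return 0.0
    # brute-force all pairwise distances; fine for small ROIs
    d = pts[:, None, :] - pts[None, :, :]
    return float(np.sqrt((d ** 2).sum(-1)).max())

# Example: an 11-pixel horizontal line has Feret diameter 10.0
mask = np.zeros((5, 15), dtype=bool)
mask[2, 2:13] = True
print(max_feret_diameter(mask))  # → 10.0
```

For real bruise images, the mask would come from a segmentation step (such as Simple Interactive Object Extraction) rather than being constructed by hand.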

  13. Method for automatically evaluating a transition from a batch manufacturing technique to a lean manufacturing technique

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2003-09-30

    A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
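The "compile per-step data, then sum into process metrics" evaluation described in the patent can be sketched roughly as follows; the step fields and metric names are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class ProcessStep:
    name: str
    cycle_time_min: float   # time spent processing one unit at this step
    queue_time_min: float   # time the unit waits before this step
    value_added: bool       # does the step transform the product?

def process_metrics(steps):
    """Sum compiled per-step data into overall process metrics."""
    lead_time = sum(s.cycle_time_min + s.queue_time_min for s in steps)
    va_time = sum(s.cycle_time_min for s in steps if s.value_added)
    return {"lead_time_min": lead_time,
            "value_added_ratio": va_time / lead_time if lead_time else 0.0}

# Hypothetical batch process characterization
batch = [ProcessStep("machining", 5, 120, True),
         ProcessStep("inspection", 2, 60, False),
         ProcessStep("assembly", 8, 90, True)]
print(process_metrics(batch))
```

A batch-versus-lean comparison would run the same summation over two candidate step lists and compare the resulting metrics before deciding whether to transition.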

  14. Workload: Measurement and Management

    NASA Technical Reports Server (NTRS)

    Gore, Brian Francis; Casner, Stephen

    2010-01-01

Poster: The workload research project surveys the available literature on: (1) workload measurement techniques; and (2) the effects of workload on operator performance. The first set of findings provides practitioners with a collection of simple-to-use workload measurement techniques, along with characterizations of the kinds of tasks each technique has been shown to reliably address. This allows design practitioners to select and use the most appropriate techniques for the task(s) at hand. The second set of findings provides practitioners with the guidance they need to design for appropriate kinds and amounts of workload across all tasks for which the operator is responsible. This guidance helps practitioners design systems and procedures that ensure appropriate levels of engagement across all tasks, and avoid designs and procedures that produce operator boredom, complacency, loss of awareness, undue stress, or skill atrophy; these outcomes can result from workload that distracts operators from the tasks they perform and monitor, or from workload levels that are too low, too high, or too consistent or predictable. Only articles that were peer reviewed, long standing and generally accepted in the field, and applicable to a relevant range of conditions in a select domain of interest, including "extreme" environments analogous to those in space, were included. In addition, all articles were reviewed and evaluated on uni-dimensional and multi-dimensional considerations. Casner & Gore also examined the notion of thresholds and the conditions that may benefit most from the various methodological approaches. Other considerations included whether the tools would be suitable for guiding requirement-related and design-related questions. An initial review of over 225 articles was conducted and entered into an EndNote database.
The reference list included a range of conditions in the domain of interest (subjective/objective measures), the seminal works in workload, as well as summary works

  15. Ultrasonic thermometry using pulse techniques.

    NASA Technical Reports Server (NTRS)

    Lynnworth, L. C.; Carnevale, E. H.

    1972-01-01

    Ultrasonic pulse techniques have been developed which, when applied to inert gases, provide temperature measurements up to 8000 K. The response time can be less than 1 msec. This is a significant feature in studying shock-heated or combusting gases. Using a momentary contact coupling technique, temperature has been measured inside steel from 300 to 1500 K. Thin-wire sensors have been used above 2000 K in nuclear and industrial applications where conditions preclude the use of thermocouples, resistance devices, or optical pyrometers. At 2500 K, temperature sensitivity of 0.1% is obtained in Re wire sensors 5 cm long by timing five round trips with an electronic instrument that resolves the time interval between selected echoes to 0.1 microsec. Sensors have been operated at rotational speeds over 2000 rpm and in noisy environments. Temperature profiling of up to ten regions using only a single guided path or beam has also been accomplished.
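The underlying principle can be illustrated with the ideal-gas relation c = sqrt(γRT/M): a measured pulse transit time over a known path gives the sound speed, and hence the temperature. A minimal sketch, assuming ideal-gas behaviour and an illustrative 10 cm path in argon:

```python
import math

R = 8.314  # universal gas constant, J / (mol K)

def gas_temperature(path_m: float, transit_s: float,
                    gamma: float, molar_mass: float) -> float:
    """Temperature from ultrasonic pulse time-of-flight, assuming an
    ideal gas where the sound speed is c = sqrt(gamma * R * T / M)."""
    c = path_m / transit_s
    return c * c * molar_mass / (gamma * R)

# Argon: gamma = 5/3, M = 39.948 g/mol
gamma_ar, m_ar = 5.0 / 3.0, 39.948e-3
c_1000K = math.sqrt(gamma_ar * R * 1000.0 / m_ar)  # sound speed at 1000 K
t = 0.10 / c_1000K                                 # transit time over 10 cm
print(round(gas_temperature(0.10, t, gamma_ar, m_ar)))  # → 1000
```

In the solid-sensor case described above, the same idea applies with the temperature dependence of the extensional wave speed in the wire replacing the ideal-gas relation.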

  16. Comparison of soft tissue balancing, femoral component rotation, and joint line change between the gap balancing and measured resection techniques in primary total knee arthroplasty: A meta-analysis.

    PubMed

    Moon, Young-Wan; Kim, Hyun-Jung; Ahn, Hyeong-Sik; Park, Chan-Deok; Lee, Dae-Hee

    2016-09-01

    This meta-analysis was designed to compare the accuracy of soft tissue balancing and femoral component rotation as well as change in joint line positions, between the measured resection and gap balancing techniques in primary total knee arthroplasty. Studies were included in the meta-analysis if they compared soft tissue balancing and/or radiologic outcomes in patients who underwent total knee arthroplasty with the gap balancing and measured resection techniques. Comparisons included differences in flexion/extension, medial/lateral flexion, and medial/lateral extension gaps (LEGs), femoral component rotation, and change in joint line positions. Finally, 8 studies identified via electronic (MEDLINE, EMBASE, and the Cochrane Library) and manual searches were included. All 8 studies showed a low risk of selection bias and provided detailed demographic data. There was some inherent heterogeneity due to uncontrolled bias, because all included studies were observational comparison studies. The pooled mean difference in gap differences between the gap balancing and measured resection techniques did not differ significantly (-0.09 mm, 95% confidence interval [CI]: -0.40 to +0.21 mm; P = 0.55), except that the medial/LEG difference was 0.58 mm greater for measured resection than gap balancing (95% CI: -1.01 to -0.15 mm; P = 0.008). Conversely, the pooled mean difference in femoral component external rotation (0.77°, 95% CI: 0.18° to 1.35°; P = 0.01) and joint line change (1.17 mm, 95% CI: 0.82 to 1.52 mm; P < 0.001) were significantly greater for the gap balancing than the measured resection technique. The gap balancing and measured resection techniques showed similar soft tissue balancing, except for medial/LEG difference. However, the femoral component was more externally rotated and the joint line was more elevated with gap balancing than measured resection. 
These differences were minimal (around 1 mm or 1°) and therefore may have little effect on the biomechanics of the knee joint. This suggests that the gap balancing and measured resection techniques are not mutually exclusive.
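Pooled mean differences like those reported above are conventionally obtained by inverse-variance weighting; a minimal fixed-effect sketch with invented study values, not the data from this meta-analysis:

```python
import math

def pool_fixed_effect(diffs, ses):
    """Fixed-effect inverse-variance pooling of study mean differences.
    Returns (pooled difference, 95% CI lower, 95% CI upper)."""
    w = [1.0 / se ** 2 for se in ses]                       # weights
    pooled = sum(wi * d for wi, d in zip(w, diffs)) / sum(w)
    se_pool = math.sqrt(1.0 / sum(w))
    return pooled, pooled - 1.96 * se_pool, pooled + 1.96 * se_pool

# Two hypothetical studies of joint line change (mm) with standard errors
pooled, lo, hi = pool_fixed_effect([1.0, 1.4], [0.30, 0.40])
print(f"{pooled:.2f} mm, 95% CI ({lo:.2f}, {hi:.2f})")
```

A random-effects model would additionally inflate the weights by a between-study variance term; the fixed-effect version above shows only the core weighting step.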

  17. The assessment of pi-pi selective stationary phases for two-dimensional HPLC analysis of foods: application to the analysis of coffee.

    PubMed

    Mnatsakanyan, Mariam; Stevenson, Paul G; Shock, David; Conlan, Xavier A; Goodie, Tiffany A; Spencer, Kylie N; Barnett, Neil W; Francis, Paul S; Shalliker, R Andrew

    2010-09-15

Differences between alkyl, dipole-dipole, hydrogen bonding, and pi-pi selective surfaces represented by non-resonance and resonance pi-stationary phases have been assessed for the separation of 'Ristretto' café espresso by employing 2DHPLC techniques with C18 phase selectivity detection. Geometric approach to factor analysis (GAFA) was used to measure the detected peaks (N), spreading angle (beta), correlation, practical peak capacity (n(p)) and percentage usage of the separation space, as an assessment of selectivity differences between regional quadrants of the two-dimensional separation plane. Although all tested systems were correlated to some degree to the C18 dimension, regional measurement of separation divergence revealed that the performance of specific systems was better for certain sample components. The results illustrate that, because of the complexity of the 'real' sample, obtaining a truly orthogonal two-dimensional system for complex samples of natural origin may be practically impossible. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  18. Molecular-Based Optical Measurement Techniques for Transition and Turbulence in High-Speed Flow

    NASA Technical Reports Server (NTRS)

    Bathel, Brett F.; Danehy, Paul M.; Cutler, Andrew D.

    2013-01-01

    High-speed laminar-to-turbulent transition and turbulence affect the control of flight vehicles, the heat transfer rate to a flight vehicle's surface, the material selected to protect such vehicles from high heating loads, the ultimate weight of a flight vehicle due to the presence of thermal protection systems, the efficiency of fuel-air mixing processes in high-speed combustion applications, etc. Gaining a fundamental understanding of the physical mechanisms involved in the transition process will lead to the development of predictive capabilities that can identify transition location and its impact on parameters like surface heating. Currently, there is no general theory that can completely describe the transition-to-turbulence process. However, transition research has led to the identification of the predominant pathways by which this process occurs. For a truly physics-based model of transition to be developed, the individual stages in the paths leading to the onset of fully turbulent flow must be well understood. This requires that each pathway be computationally modeled and experimentally characterized and validated. This may also lead to the discovery of new physical pathways. This document is intended to describe molecular based measurement techniques that have been developed, addressing the needs of the high-speed transition-to-turbulence and high-speed turbulence research fields. In particular, we focus on techniques that have either been used to study high speed transition and turbulence or techniques that show promise for studying these flows. This review is not exhaustive. In addition to the probe-based techniques described in the previous paragraph, several other classes of measurement techniques that are, or could be, used to study high speed transition and turbulence are excluded from this manuscript. 
For example, surface measurement techniques such as pressure and temperature paint, phosphor thermography, skin friction measurements and photogrammetry (for model attitude and deformation measurement) are excluded to limit the scope of this report. Other physical probes, such as heat flux gauges and total temperature probes, are also excluded. We further exclude measurement techniques that require particle seeding, although particle-based methods may still be useful in many high speed flow applications. This manuscript details some of the more widely used molecular-based measurement techniques for studying transition and turbulence: laser-induced fluorescence (LIF), Rayleigh and Raman scattering, and coherent anti-Stokes Raman scattering (CARS). These techniques are emphasized, in part, because of the prior experience of the authors. Additional molecular-based techniques are described, albeit in less detail. Where possible, an effort is made to compare the relative advantages and disadvantages of the various measurement techniques, although these comparisons can be subjective views of the authors. Finally, the manuscript concludes by evaluating the different measurement techniques in view of the precision requirements described in this chapter. Additional requirements and considerations are discussed to assist with choosing an optical measurement technique for a given application.

  19. MO-B-BRB-03: 3D Dosimetry in the Clinic: Validating Special Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juang, T.

Full three-dimensional (3D) dosimetry using volumetric chemical dosimeters probed by 3D imaging systems has long been a promising technique for the radiation therapy clinic, since it provides a unique methodology for dose measurements in the volume irradiated using complex conformal delivery techniques such as IMRT and VMAT. To date true 3D dosimetry is still not widely practiced in the community; it has been confined to centres of specialized expertise especially for quality assurance or commissioning roles where other dosimetry techniques are difficult to implement. The potential for improved clinical applicability has been advanced considerably in the last decade by the development of improved 3D dosimeters (e.g., radiochromic plastics, radiochromic gel dosimeters and normoxic polymer gel systems) and by improved readout protocols using optical computed tomography or magnetic resonance imaging. In this session, established users of some current 3D chemical dosimeters will briefly review the current status of 3D dosimetry, describe several dosimeters and their appropriate imaging for dose readout, present workflow procedures required for good dosimetry, and analyze some limitations for applications in select settings. We will review the application of 3D dosimetry to various clinical situations describing how 3D approaches can complement other dose delivery validation approaches already available in the clinic. The applications presented will be selected to inform attendees of the unique features provided by full 3D techniques. Learning Objectives: L. John Schreiner: Background and Motivation Understand recent developments enabling clinically practical 3D dosimetry, Appreciate 3D dosimetry workflow and dosimetry procedures, and Observe select examples from the clinic.
Sofie Ceberg: Application to dynamic radiotherapy Observe full dosimetry under dynamic radiotherapy during respiratory motion, and Understand how the measurement of high resolution dose data in an irradiated volume can help understand interplay effects during TomoTherapy or VMAT. Titania Juang: Special techniques in the clinic and research Understand the potential for 3D dosimetry in validating dose accumulation in deformable systems, and Observe the benefits of high resolution measurements for precision therapy in SRS and in MicroSBRT for small animal irradiators. Geoffrey S. Ibbott: 3D Dosimetry in end-to-end dosimetry QA Understand the potential for 3D dosimetry for end-to-end radiation therapy process validation in the in-house and external credentialing setting. Canadian Institutes of Health Research; L. Schreiner, Modus QA, London, ON, Canada; T. Juang, NIH R01CA100835.

  20. MO-B-BRB-02: 3D Dosimetry in the Clinic: IMRT Technique Validation in Sweden

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceberg, S.

Full three-dimensional (3D) dosimetry using volumetric chemical dosimeters probed by 3D imaging systems has long been a promising technique for the radiation therapy clinic, since it provides a unique methodology for dose measurements in the volume irradiated using complex conformal delivery techniques such as IMRT and VMAT. To date true 3D dosimetry is still not widely practiced in the community; it has been confined to centres of specialized expertise especially for quality assurance or commissioning roles where other dosimetry techniques are difficult to implement. The potential for improved clinical applicability has been advanced considerably in the last decade by the development of improved 3D dosimeters (e.g., radiochromic plastics, radiochromic gel dosimeters and normoxic polymer gel systems) and by improved readout protocols using optical computed tomography or magnetic resonance imaging. In this session, established users of some current 3D chemical dosimeters will briefly review the current status of 3D dosimetry, describe several dosimeters and their appropriate imaging for dose readout, present workflow procedures required for good dosimetry, and analyze some limitations for applications in select settings. We will review the application of 3D dosimetry to various clinical situations describing how 3D approaches can complement other dose delivery validation approaches already available in the clinic. The applications presented will be selected to inform attendees of the unique features provided by full 3D techniques. Learning Objectives: L. John Schreiner: Background and Motivation Understand recent developments enabling clinically practical 3D dosimetry, Appreciate 3D dosimetry workflow and dosimetry procedures, and Observe select examples from the clinic.
Sofie Ceberg: Application to dynamic radiotherapy Observe full dosimetry under dynamic radiotherapy during respiratory motion, and Understand how the measurement of high resolution dose data in an irradiated volume can help understand interplay effects during TomoTherapy or VMAT. Titania Juang: Special techniques in the clinic and research Understand the potential for 3D dosimetry in validating dose accumulation in deformable systems, and Observe the benefits of high resolution measurements for precision therapy in SRS and in MicroSBRT for small animal irradiators. Geoffrey S. Ibbott: 3D Dosimetry in end-to-end dosimetry QA Understand the potential for 3D dosimetry for end-to-end radiation therapy process validation in the in-house and external credentialing setting. Canadian Institutes of Health Research; L. Schreiner, Modus QA, London, ON, Canada; T. Juang, NIH R01CA100835.

  1. Measuring thermal conductivity of thin films and coatings with the ultra-fast transient hot-strip technique

    NASA Astrophysics Data System (ADS)

    Belkerk, B. E.; Soussou, M. A.; Carette, M.; Djouadi, M. A.; Scudeller, Y.

    2012-07-01

This paper reports the ultra-fast transient hot-strip (THS) technique for determining the thermal conductivity of thin films and coatings of materials on substrates. The film thicknesses can vary between 10 nm and more than 10 µm. Precise measurement of thermal conductivity was performed with an experimental device generating ultra-short electrical pulses, and subsequent temperature increases were electrically measured on nanosecond and microsecond time scales. The electrical pulses were applied within metallized micro-strips patterned on the sample films and the temperature increases were analysed within time periods selected in the window [100 ns-10 µs]. The thermal conductivity of the films was extracted from the time-dependent thermal impedance of the samples derived from a three-dimensional heat diffusion model. The technique is described and its performance demonstrated on different materials covering a large thermal conductivity range. Experiments were carried out on bulk Si and thin films of amorphous SiO2 and crystallized aluminum nitride (AlN). The present approach can assess film thermal resistances as low as 10⁻⁸ K m² W⁻¹ with a precision of about 10%. This has never been attained before with the THS technique.

  2. Accurate Distances to Important Spiral Galaxies: M63, M74, NGC 1291, NGC 4559, NGC 4625, and NGC 5398

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McQuinn, Kristen B. W.; Skillman, Evan D.; Dolphin, Andrew E.

Accurate distances are fundamental for interpreting various measured properties of galaxies. Surprisingly, many of the best-studied spiral galaxies in the Local Volume have distance uncertainties that are much larger than can be achieved with modern observation techniques. Using Hubble Space Telescope optical imaging, we use the tip of the red giant branch method to measure the distances to six galaxies that are included in the Spitzer Infrared Nearby Galaxies Survey program and its offspring surveys. The sample includes M63, M74, NGC 1291, NGC 4559, NGC 4625, and NGC 5398. We compare our results with distances reported to these galaxies based on a variety of methods. Depending on the technique, there can be a wide range in published distances, particularly from the Tully–Fisher relation. In addition, differences between the planetary nebula luminosity function and surface brightness fluctuation techniques can vary between galaxies, suggesting inaccuracies that cannot be explained by systematics in the calibrations. Our distances improve upon previous results, as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian maximum likelihood technique that reduces measurement uncertainties.

  3. Neutron spectrometry for UF6 enrichment verification in storage cylinders

    DOE PAGES

    Mengesha, Wondwosen; Kiff, Scott D.

    2015-01-29

Verification of declared UF6 enrichment and mass in storage cylinders is of great interest in nuclear material nonproliferation. Nondestructive assay (NDA) techniques are commonly used for safeguards inspections to ensure accountancy of declared nuclear materials. Common NDA techniques used include gamma-ray spectrometry and both passive and active neutron measurements. In the present study, neutron spectrometry was investigated for verification of UF6 enrichment in 30B storage cylinders based on an unattended and passive measurement approach. MCNP5 and Geant4 simulated neutron spectra, for selected UF6 enrichments and filling profiles, were used in the investigation. The simulated neutron spectra were analyzed using principal component analysis (PCA). PCA is a well-established technique with a wide area of application, including feature analysis, outlier detection, and gamma-ray spectral analysis. Results obtained demonstrate that neutron spectrometry supported by spectral feature analysis has potential for assaying UF6 enrichment in storage cylinders. Thus, the results from the present study also showed that difficulties associated with the UF6 filling profile and observed in other unattended passive neutron measurements can possibly be overcome using the approach presented.
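The PCA step described here can be sketched with a mean-centred SVD; the synthetic "spectra" below are invented stand-ins, not MCNP5/Geant4 output:

```python
import numpy as np

def pca(spectra, n_components=2):
    """Principal component analysis via SVD of mean-centred data.
    spectra: (n_samples, n_channels). Returns component scores and
    the fraction of variance explained per component."""
    X = spectra - spectra.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var = S ** 2 / (S ** 2).sum()
    return X @ Vt[:n_components].T, var[:n_components]

# Synthetic spectra: two "enrichment" groups differing in one channel
rng = np.random.default_rng(0)
base = rng.normal(0, 0.05, (20, 64))
base[10:, 5] += 1.0            # second group has extra counts in channel 5
scores, var = pca(base)
print(var)                     # the first component dominates
```

In a real feature analysis the component scores would then feed a classifier or outlier test to separate enrichment levels and filling profiles.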

  4. Neuropsychological Test Selection for Cognitive Impairment Classification: A Machine Learning Approach

    PubMed Central

    Williams, Jennifer A.; Schmitter-Edgecombe, Maureen; Cook, Diane J.

    2016-01-01

Introduction Reducing the amount of testing required to accurately detect cognitive impairment is clinically relevant. The aim of this research was to determine the fewest number of clinical measures required to accurately classify participants as healthy older adult, mild cognitive impairment (MCI) or dementia using a suite of classification techniques. Methods Two variable selection machine learning models (i.e., naive Bayes, decision tree), a logistic regression, and two participant datasets (i.e., clinical diagnosis, clinical dementia rating; CDR) were explored. Participants classified using clinical diagnosis criteria included 52 individuals with dementia, 97 with MCI, and 161 cognitively healthy older adults. Participants classified using CDR included 154 individuals with CDR = 0, 93 individuals with CDR = 0.5, and 25 individuals with CDR = 1.0+. Twenty-seven demographic, psychological, and neuropsychological variables were available for variable selection. Results No significant difference was observed between naive Bayes, decision tree, and logistic regression models for classification of both clinical diagnosis and CDR datasets. Participant classification (70.0 – 99.1%), geometric mean (60.9 – 98.1%), sensitivity (44.2 – 100%), and specificity (52.7 – 100%) were generally satisfactory. Unsurprisingly, the MCI/CDR = 0.5 participant group was the most challenging to classify. Through variable selection only 2 – 9 variables were required for classification and varied between datasets in a clinically meaningful way. Conclusions The current study results reveal that machine learning techniques can accurately classify cognitive impairment and reduce the number of measures required for diagnosis. PMID:26332171
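Variable selection of this kind can be sketched as greedy forward selection wrapped around a small Gaussian naive Bayes classifier; this is a generic illustration on synthetic data, not the study's models or dataset:

```python
import numpy as np

def gnb_fit_predict(Xtr, ytr, Xte):
    """Gaussian naive Bayes: per-class feature means/variances plus priors."""
    classes = np.unique(ytr)
    stats = [(Xtr[ytr == c].mean(0), Xtr[ytr == c].var(0) + 1e-9,
              np.log((ytr == c).mean())) for c in classes]
    preds = []
    for x in Xte:
        ll = [p - 0.5 * np.sum(np.log(2 * np.pi * v) + (x - m) ** 2 / v)
              for m, v, p in stats]
        preds.append(classes[int(np.argmax(ll))])
    return np.array(preds)

def forward_select(X, y, k):
    """Greedy forward selection: repeatedly add the feature that most
    improves training accuracy of the naive Bayes classifier."""
    chosen = []
    for _ in range(k):
        best, best_acc = None, -1.0
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = chosen + [j]
            acc = (gnb_fit_predict(X[:, cols], y, X[:, cols]) == y).mean()
            if acc > best_acc:
                best, best_acc = j, acc
        chosen.append(best)
    return chosen

# Synthetic data: only feature 0 separates the two groups
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)
X = rng.normal(0, 1, (100, 5))
X[:, 0] += 3 * y
print(forward_select(X, y, 2))  # feature 0 is chosen first
```

A real application would score candidate subsets with cross-validation rather than training accuracy to avoid overfitting the selection.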

  5. SU-E-T-497: Semi-Automated in Vivo Radiochromic Film Dosimetry Using a Novel Image Processing Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reyhan, M; Yue, N

Purpose: To validate an automated image processing algorithm designed to detect the center of radiochromic film used for in vivo film dosimetry against the current gold standard of manual selection. Methods: An image processing algorithm was developed to automatically select the region of interest (ROI) in *.tiff images that contain multiple pieces of radiochromic film (0.5×1.3 cm²). After a user has linked a calibration file to the processing algorithm and selected a *.tiff file for processing, an ROI is automatically detected for all films by a combination of thresholding and erosion, which removes edges and any additional markings for orientation. Calibration is applied to the mean pixel values from the ROIs, and a *.tiff image is output displaying the original image with an overlay of the ROIs and the measured doses. Validation of the algorithm was determined by comparing in vivo dose determined using the current gold standard (manually drawn ROIs) versus automated ROIs for n=420 scanned films. Bland-Altman analysis, paired t-test, and linear regression were performed to demonstrate agreement between the processes. Results: The measured doses ranged from 0.2-886.6 cGy. Bland-Altman analysis of the two techniques (automatic minus manual) revealed a bias of -0.28 cGy and a 95% confidence interval of (-6.1 cGy, 5.5 cGy). These values demonstrate excellent agreement between the two techniques. Paired t-test results showed no statistical differences between the two techniques, p=0.98. Linear regression with a forced zero intercept demonstrated that Automatic=0.997*Manual, with a Pearson correlation coefficient of 0.999. The minimal differences between the two techniques may be explained by the fact that the hand-drawn ROIs were not identical to the automatically selected ones. The average processing time was 6.7 seconds in Matlab on an Intel Core 2 Duo processor.
Conclusion: An automated image processing algorithm has been developed and validated, which will help minimize user interaction and processing time of radiochromic film used for in vivo dosimetry.
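The Bland-Altman analysis used above reduces to a bias (mean difference) and 95% limits of agreement; a minimal sketch on invented paired doses, not the study's 420-film dataset:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    half = 1.96 * diff.std(ddof=1)   # half-width of the limits of agreement
    return bias, bias - half, bias + half

# Invented paired doses (cGy): automatic vs manual ROI readings
auto = [100.2, 250.1, 399.5, 550.3, 700.0]
manual = [100.0, 250.5, 400.0, 550.0, 700.6]
bias, lo, hi = bland_altman(auto, manual)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

Agreement is judged by whether the limits of agreement are narrow enough to be clinically acceptable, not by the bias alone.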

  6. Experience with novel technologies for direct measurement of atmospheric NO2

    NASA Astrophysics Data System (ADS)

    Hueglin, Christoph; Hundt, Morten; Mueller, Michael; Schwarzenbach, Beat; Tuzson, Bela; Emmenegger, Lukas

    2017-04-01

Nitrogen dioxide (NO2) is an air pollutant that has a large impact on human health and ecosystems, and it plays a key role in the formation of ozone and secondary particulate matter. Consequently, legal limit values for NO2 are set in the EU and elsewhere, and atmospheric observation networks typically include NO2 in their measurement programmes. Atmospheric NO2 is principally measured by chemiluminescence detection, an indirect measurement technique that requires conversion of NO2 into nitrogen monoxide (NO) and finally calculation of NO2 from the difference between total nitrogen oxides (NOx) and NO. Consequently, NO2 measurements with the chemiluminescence method have a relatively high measurement uncertainty and can be biased depending on the selectivity of the applied NO2 conversion method. In the past years, technologies for direct and selective measurement of NO2 have become available, e.g. cavity attenuated phase shift spectroscopy (CAPS), cavity enhanced laser absorption spectroscopy and quantum cascade laser absorption spectrometry (QCLAS). These technologies offer clear advantages over the indirect chemiluminescence method. We tested the above-mentioned direct measurement techniques for NO2 over extended time periods at atmospheric measurement stations and report on our experience, including comparisons with co-located chemiluminescence instruments equipped with molybdenum as well as photolytic NO2 converters. A still open issue related to the direct measurement of NO2 is instrument calibration. Accurate and traceable reference standards and NO2 calibration gases are needed. We present results from the application of different calibration strategies based on the use of static NO2 calibration gases as well as dynamic NO2 calibration gases produced by permeation and by gas-phase titration (GPT).

  7. Radiated BPF sound measurement of centrifugal compressor

    NASA Astrophysics Data System (ADS)

    Ohuchida, S.; Tanaka, K.

    2013-12-01

A technique to measure radiated BPF sound from an automotive turbocharger compressor impeller is proposed in this paper. Where there is high-level background noise in the measurement environment, it is difficult to discriminate the target component from the background. Since the BPF sound measurements in this study were made in a room with such conditions, no discrete BPF peak was initially found on the sound spectrum. Taking directionality into consideration, a microphone covered with a parabolic cone was selected, and using this technique the discrete peak of BPF was clearly observed. Since the level of measured sound was amplified due to the area-integration effect, correction was needed to obtain the real level. To do so, sound measurements with and without a parabolic cone were conducted for a fixed source, and their level differences were used as correction factors. The sound propagation mechanism is considered using the measured BPF as well as the result of a simple model experiment. The present method is generally applicable to sound measurements conducted with a high level of background noise.

  8. Mid-infrared laser-absorption diagnostic for vapor-phase measurements in an evaporating n-decane aerosol

    NASA Astrophysics Data System (ADS)

    Porter, J. M.; Jeffries, J. B.; Hanson, R. K.

    2009-09-01

A novel three-wavelength mid-infrared laser-based absorption/extinction diagnostic has been developed for simultaneous measurement of temperature and vapor-phase mole fraction in an evaporating hydrocarbon fuel aerosol (vapor and liquid droplets). The measurement technique was demonstrated for an n-decane aerosol with D50 ≈ 3 μm in steady and shock-heated flows with a measurement bandwidth of 125 kHz. Laser wavelengths were selected from FTIR measurements of the C-H stretching band of vapor and liquid n-decane near 3.4 μm (3000 cm⁻¹), and from modeled light scattering from droplets. Measurements were made for vapor mole fractions below 2.3 percent with errors less than 10 percent, and simultaneous temperature measurements over the range 300 K < T < 900 K were made with errors less than 3 percent. The measurement technique is designed to provide accurate values of temperature and vapor mole fraction in evaporating polydispersed aerosols with small mean diameters (D50 < 10 μm), where near-infrared laser-based scattering corrections are prone to error.

  9. Sex Bias in Research and Measurement: A Type III Error.

    ERIC Educational Resources Information Center

    Project on Sex Stereotyping in Education, Red Bank, NJ.

    The module described in this document is part of a series of instructional modules on sex-role stereotyping in education. This document (including all but the cassette tape) is the module that examines how sex bias influences selection of research topics, sampling techniques, interpretation of data, and conclusions. Suggestions for designing…

  10. Collection Fusion Using Bayesian Estimation of a Linear Regression Model in Image Databases on the Web.

    ERIC Educational Resources Information Center

    Kim, Deok-Hwan; Chung, Chin-Wan

    2003-01-01

    Discusses the collection fusion problem of image databases, concerned with retrieving relevant images by content based retrieval from image databases distributed on the Web. Focuses on a metaserver which selects image databases supporting similarity measures and proposes a new algorithm which exploits a probabilistic technique using Bayesian…

  11. Students' Meaningful Learning Orientation and Their Meaningful Understandings of Meiosis and Genetics.

    ERIC Educational Resources Information Center

    Cavallo, Ann Liberatore

    This 1-week study explored the extent to which high school students (n=140) acquired meaningful understanding of selected biological topics (meiosis and the Punnett square method) and the relationship between these topics. This study: (1) examined "mental modeling" as a technique for measuring students' meaningful understanding of the…

  12. Effect of a Storyboarding Technique on Selected Measures of Fitness among University Employees

    ERIC Educational Resources Information Center

    Anshel, Mark H.; Sutarso, Toto

    2010-01-01

    The purpose of this study was to determine the effectiveness of storyboarding (i.e., participants' written narrative) on improving fitness among university employees over 10 weeks. Groups consisted of storytelling during the program orientation, storytelling plus two coaching sessions, or the normal program only (control). Using difference…

  13. Comparative Analyses of MIRT Models and Software (BMIRT and flexMIRT)

    ERIC Educational Resources Information Center

    Yavuz, Guler; Hambleton, Ronald K.

    2017-01-01

    Application of MIRT modeling procedures is dependent on the quality of parameter estimates provided by the estimation software and techniques used. This study investigated model parameter recovery of two popular MIRT packages, BMIRT and flexMIRT, under some common measurement conditions. These packages were specifically selected to investigate the…

  14. The Effects of Two Types of Assertion Training on Self-Assertion, Anxiety and Self Actualization.

    ERIC Educational Resources Information Center

    Langelier, Regis

    The standard assertion training package includes a selection of techniques from behavior therapy such as modeling, behavior rehearsal, and role-playing along with lectures and discussion, bibliotherapy, and audiovisual feedback. The effects of a standard assertion training package with and without videotape feedback on self-report measures of…

  15. Development and Validation of Academic Dishonesty Scale (ADS): Presenting a Multidimensional Scale

    ERIC Educational Resources Information Center

    Bashir, Hilal; Bala, Ranjan

    2018-01-01

    The purpose of the study was to develop a scale measuring the academic dishonesty of undergraduate students. The sample of the study comprised nine hundred undergraduate students selected via a random sampling technique. After receiving experts' opinions on the face and content validity of the scale, the exploratory factor analysis (EFA) and…

  16. Identification and evaluation of reliable reference genes for quantitative real-time PCR analysis in tea plant (Camellia sinensis (L.) O. Kuntze)

    USDA-ARS?s Scientific Manuscript database

    Quantitative real-time polymerase chain reaction (qRT-PCR) is a commonly used technique for measuring gene expression levels due to its simplicity, specificity, and sensitivity. Reliable reference selection for the accurate quantification of gene expression under various experimental conditions is a...

  17. Efficient Feature Selection and Classification of Protein Sequence Data in Bioinformatics

    PubMed Central

    Faye, Ibrahima; Samir, Brahim Belhaouari; Md Said, Abas

    2014-01-01

    Bioinformatics has been an emerging area of research for the last three decades. The ultimate aims of bioinformatics are to store and manage biological data and to develop and analyze computational tools that enhance its understanding. The size of data accumulated under various sequencing projects is increasing exponentially, which presents difficulties for the experimental methods. To reduce the gap between newly sequenced proteins and proteins with known functions, many computational techniques involving classification and clustering algorithms have been proposed in the past. The classification of protein sequences into existing superfamilies is helpful in predicting the structure and function of the large number of newly discovered proteins. The existing classification results are unsatisfactory due to the huge number of features obtained through various feature encoding methods. In this work, a statistical metric-based feature selection technique is proposed in order to reduce the size of the extracted feature vector. The proposed method of protein classification shows significant improvement in terms of performance measures: accuracy, sensitivity, specificity, recall, F-measure, and so forth. PMID:25045727
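As a rough illustration of statistical metric-based feature selection, the sketch below ranks features by Fisher score and keeps the top-scoring ones. The toy data, the choice of Fisher score, and the cutoff are assumptions for illustration; the paper's exact metric may differ.

```python
# Hedged sketch: rank features by Fisher score (between-class separation
# over within-class spread) and keep the top k before classification.

def fisher_score(values, labels):
    """Fisher score of one feature for a binary labeling (1 vs 0)."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs, m: sum((x - m) ** 2 for x in xs) / len(xs)
    mp, mn = mean(pos), mean(neg)
    denom = var(pos, mp) + var(neg, mn) or 1e-12   # guard zero variance
    return (mp - mn) ** 2 / denom

# Toy dataset: rows are samples, columns are encoded sequence features.
X = [[1.0, 5.0, 0.2],
     [1.1, 4.8, 0.9],
     [3.0, 5.1, 0.1],
     [3.2, 4.9, 0.8]]
y = [0, 0, 1, 1]

scores = [fisher_score([row[j] for row in X], y) for j in range(3)]
top_k = sorted(range(3), key=lambda j: -scores[j])[:2]  # keep 2 best
```

Feature 0 separates the classes cleanly and scores highest; features 1 and 2 overlap heavily and are dropped, shrinking the feature vector before the classifier sees it.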

  18. Application and principles of photon-doppler velocimetry for explosives testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briggs, Matthew Ellsworth; Hill, Larry; Hull, Larry

    2010-01-01

    The velocimetry technique PDV is easier to field than its predecessors VISAR and Fabry-Perot, works on a broader variety of experiments, and is more accurate and simpler to analyze. Experiments and analysis have now demonstrated the accuracy, precision, and interpretation of what PDV does and does not measure, and the successful application of PDV to basic and applied detonation problems. We present a selection of results intended to help workers assess the capabilities of PDV. First we present general considerations about the technique: various PDV configurations, including single-signal, multisignal (e.g., triature), and frequency-shifted PDV; what types of motion are sensed and missed by PDV; analysis schemes for velocity and position extraction; accuracy and precision of the results; and experimental considerations for probe selection and positioning. We then present the status of various applications: detonation speeds and wall motion in cylinder tests, breakout velocity distributions from bare HE, ejecta, measurements from fibers embedded in HE, projectile velocity, and resolving 2- and 3-D velocity vectors. This paper is an overview of work done by many groups around the world.

  19. Martial arts: time needed for training.

    PubMed

    Burke, David T; Protopapas, Marina; Bonato, Paolo; Burke, John T; Landrum, Robert F

    2011-03-01

    To measure the time needed to teach a series of martial arts techniques to proficiency. Fifteen volunteer subjects without any prior martial arts or self-defense experience were recruited. A panel of martial arts experts selected 21 different techniques including defensive stances, arm blocks, elbow strikes, palm strikes, thumbs to eyes, instep kicks and a carotid neck restraint. The critical elements of each technique were identified by the panel and incorporated into a teaching protocol, and then into a scoring system. Two black belt martial arts instructors directed a total of forty-five 45-minute training sessions. Videotaped proficiency testing was performed weekly. The videotapes were reviewed by the investigators to determine the proficiency levels of each subject for each technique. The techniques were rated by the average number of training sessions needed for an individual to develop proficiency in that technique. The mean number of sessions necessary to train individuals to proficiency ranged from 27 to 38.3. Using this system, the most difficult techniques seemed to be elbow strikes to the rear, striking with thumbs to the eyes and arm blocking. In this study 29 hours of training was necessary to train novice students to be proficient in 21 offensive and defensive martial arts techniques. To our knowledge, this is the first study that attempts to measure the learning curves involved when teaching martial arts techniques.

  20. Martial Arts: Time Needed for Training

    PubMed Central

    Burke, David T.; Protopapas, Marina; Bonato, Paolo; Burke, John T.; Landrum, Robert F.

    2011-01-01

    Purpose To measure the time needed to teach a series of martial arts techniques to proficiency. Methods Fifteen volunteer subjects without any prior martial arts or self-defense experience were recruited. A panel of martial arts experts selected 21 different techniques including defensive stances, arm blocks, elbow strikes, palm strikes, thumbs to eyes, instep kicks and a carotid neck restraint. The critical elements of each technique were identified by the panel and incorporated into a teaching protocol, and then into a scoring system. Two black belt martial arts instructors directed a total of forty-five 45-minute training sessions. Videotaped proficiency testing was performed weekly. The videotapes were reviewed by the investigators to determine the proficiency levels of each subject for each technique. Results The techniques were rated by the average number of training sessions needed for an individual to develop proficiency in that technique. The mean number of sessions necessary to train individuals to proficiency ranged from 27 to 38.3. Using this system, the most difficult techniques seemed to be elbow strikes to the rear, striking with thumbs to the eyes and arm blocking. Conclusions In this study 29 hours of training was necessary to train novice students to be proficient in 21 offensive and defensive martial arts techniques. To our knowledge, this is the first study that attempts to measure the learning curves involved when teaching martial arts techniques. PMID:22375215

  1. Multi-level gene/MiRNA feature selection using deep belief nets and active learning.

    PubMed

    Ibrahim, Rania; Yousri, Noha A; Ismail, Mohamed A; El-Makky, Nagwa M

    2014-01-01

    Selecting the most discriminative genes/miRNAs has been raised as an important task in bioinformatics to enhance disease classifiers and to mitigate the curse of dimensionality. Traditional feature selection methods choose genes/miRNAs based on their individual features regardless of how they perform together. Considering group features instead of individual ones provides a better basis for selecting the most informative genes/miRNAs. Recently, deep learning has proven its ability to represent data at multiple levels of abstraction, allowing for better discrimination between different classes. However, the idea of using deep learning for feature selection is not yet widely used in the bioinformatics field. In this paper, a novel multi-level feature selection approach named MLFS is proposed for selecting genes/miRNAs based on expression profiles. The approach is based on both deep and active learning. Moreover, an extension of the technique to miRNAs is presented by considering the biological relation between miRNAs and genes. Experimental results show that the approach outperformed classical feature selection methods in hepatocellular carcinoma (HCC) by 9%, lung cancer by 6%, and breast cancer by around 10% in F1-measure. Results also show the enhancement in F1-measure of our approach over recent related work in [1] and [2].

  2. Solomon Technique Versus Selective Coagulation for Twin-Twin Transfusion Syndrome.

    PubMed

    Slaghekke, Femke; Oepkes, Dick

    2016-06-01

    Monochorionic twin pregnancies can be complicated by twin-to-twin transfusion syndrome (TTTS). The best treatment option for TTTS is fetoscopic laser coagulation of the vascular anastomoses between donor and recipient. After laser therapy, up to 33% residual anastomoses have been seen. These residual anastomoses can cause twin anemia polycythemia sequence (TAPS) and recurrent TTTS. In order to reduce the number of residual anastomoses and their complications, a new technique, the Solomon technique, in which the whole vascular equator is coagulated, was introduced. The Solomon technique showed a reduction of recurrent TTTS compared to the selective technique. The incidence of recurrent TTTS after the Solomon technique ranged from 0% to 3.9%, compared to 5.3-8.5% after the selective technique. The incidence of TAPS after the Solomon technique ranged from 0% to 2.9%, compared to 4.2-15.6% after the selective technique. The Solomon technique may improve dual survival rates, ranging from 64% to 85% compared to 46-76% for the selective technique. There was no difference reported in procedure-related complications such as intrauterine infection and preterm premature rupture of membranes. The Solomon technique significantly reduced the incidence of TAPS and recurrent TTTS and may improve survival and neonatal outcome, without identifiable adverse outcomes or complications; therefore, the Solomon technique is recommended for the treatment of TTTS.

  3. [Analysis of visible extinction spectrum of particle system and selection of optimal wavelength].

    PubMed

    Sun, Xiao-gang; Tang, Hong; Yuan, Gui-bin

    2008-09-01

    In the total light scattering particle sizing technique, the extinction spectrum of a particle system contains information about the particle size and refractive index. The visible extinction spectra of common monomodal and bimodal R-R particle size distributions were computed, and the variation in the visible extinction spectrum with particle size and refractive index was analyzed. The wavelengths at which the second-order differential extinction spectrum was discontinuous were selected as measurement wavelengths. Furthermore, the minimum and maximum wavelengths in the visible region were also selected as measurement wavelengths. The genetic algorithm was used as the inversion method under the dependent model. Computer simulation and experiments illustrate that it is feasible to analyze the extinction spectrum and use this optimal-wavelength selection method in total light scattering particle sizing. The rough contour of the particle size distribution can be determined after analysis of the visible extinction spectrum, so the search range of the particle size parameter is reduced in the optimization algorithm, and a more accurate inversion result can then be obtained using the selection method. The inversion results for monomodal and bimodal distributions remain satisfactory even when 1% stochastic noise is added to the transmission extinction measurement values.
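The wavelength-selection rule above — take the visible-window endpoints plus the points where the second-order differential extinction spectrum changes most sharply — can be sketched as follows. The extinction spectrum here is a synthetic monotone curve, not real Mie-theory data, and "keep the 3 sharpest points" is an assumed cutoff.

```python
# Hedged sketch: pick measurement wavelengths from the second difference
# of a (synthetic) visible extinction spectrum, plus the two endpoints.

wavelengths = [0.40 + 0.01 * i for i in range(31)]        # 0.40-0.70 um grid
spectrum = [1.0 / w ** 1.2 for w in wavelengths]          # toy extinction curve

# Discrete second difference as a stand-in for the second-order
# differential extinction spectrum.
second_diff = [spectrum[i - 1] - 2 * spectrum[i] + spectrum[i + 1]
               for i in range(1, len(spectrum) - 1)]

# Rank interior wavelengths by |second difference| and keep the 3 sharpest.
interior = sorted(range(1, len(spectrum) - 1),
                  key=lambda i: -abs(second_diff[i - 1]))[:3]

# Always include the minimum and maximum visible wavelengths.
selected = sorted({0, len(wavelengths) - 1, *interior})
selected_wavelengths = [wavelengths[i] for i in selected]
```

With the synthetic convex spectrum the curvature is largest at short wavelengths, so the selected indices cluster there; a real Mie spectrum would scatter them across the window.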

  4. RET selection on state-of-the-art NAND flash

    NASA Astrophysics Data System (ADS)

    Lafferty, Neal V.; He, Yuan; Pei, Jinhua; Shao, Feng; Liu, QingWei; Shi, Xuelong

    2015-03-01

    We present results generated using a new gauge-based Resolution Enhancement Technique (RET) selection flow during the technology set-up phase of a 3x-node NAND Flash product. As a test case, we consider a challenging critical level for this flash product. The RET solutions include inverse lithography technology (ILT) optimized masks with sub-resolution assist features (SRAF) and companion illumination sources developed using a new pixel-based Source Mask Optimization (SMO) tool that uses measurement gauges as a primary input. The flow includes verification objectives which allow tolerancing of particular measurement gauges based on lithographic criteria. The relative importance of particular gauges may also be set, to aid in down-selection from several candidate sources. The end result is a sensitive, objective score of RET performance. Using these custom-defined importance metrics, decisions on the final RET style can be made in an objective way.

  5. Red blood cell-deformability measurement: review of techniques.

    PubMed

    Musielak, M

    2009-01-01

    Cell-deformability characterization involves measurement of highly complex relationships between cell biology and the physical forces to which the cell is subjected. This review takes account of modern technical solutions simulating the action of the forces applied to the red blood cell in macro- and microcirculation. Diffraction ektacytometers and rheoscopes measure the mean deformability value for the total red blood cell population investigated and the deformation distribution index of individual cells, respectively. Deformation assays of a whole single cell are possible by means of optical tweezers. The single-cell measuring setups for micropipette aspiration and atomic force microscopy allow selective investigation of deformation parameters (e.g., cytoplasm viscosity, viscoelastic membrane properties). The distinction between instrument sensitivity to various RBC rheological features, as well as the influence of temperature on measurement, are discussed. The reports cited confront the fascinating possibilities of these techniques with their medical applications, since RBC deformability has a key position in the etiology of a wide range of conditions.

  6. Design of Laser Based Monitoring Systems for Compliance Management of Odorous and Hazardous Air Pollutants in Selected Chemical Industrial Estates at Hyderabad, India

    NASA Astrophysics Data System (ADS)

    Sudhakar, P.; Kalavathi, P.; Ramakrishna Rao, D.; Satyanarayna, M.

    2014-12-01

    Industrialization can no longer be sustained without internalization of the concerns of the receiving environment and land use. Increased awareness and public pressure, coupled with regulatory instruments and bodies, exert constant pressure on industries to control their emissions to a level acceptable to the receiving environment. However, when a group of industries comes up together as an industrial estate, the cumulative impacts of all the industries together often challenge the expected/desired quality of the receiving environment, requiring stringent pollution control and monitoring measures. Laser remote sensing techniques provide powerful tools for environmental monitoring. These methods provide range-resolved measurements of concentrations of various gaseous pollutants and suspended particulate matter (SPM), not only in the path of the beam but over the entire area. A three-dimensional mapping of the pollutants and their dispersal can be estimated using laser remote sensing methods on a continuous basis. Laser radar (lidar) systems are the measurement technology used in laser remote sensing. Differential absorption lidar (DIAL) and Raman lidar technologies have proved to be very useful for remote sensing of air pollutants. DIAL and Raman lidar systems can be applied for range-resolved measurements of molecules such as SO2, NO2, O3, Hg, CO, C2H4, H2O, CH4, and other hydrocarbons in real time on a continuous basis. This paper describes the design details of the DIAL and Raman lidar techniques for measurement of various hazardous air pollutants which are being released into the atmosphere by the chemical industries operating in the Bachupally Industrial Estate area at Hyderabad, India. The relative merits of the two techniques have been studied, and the minimum concentrations of pollutants that can be measured using these systems are presented. A dispersion model of the air pollutants in the selected chemical industrial estates at Hyderabad has been developed.

  7. Adult thoracolumbar and lumbar scoliosis treated with long vertebral fusion to the sacropelvis: a comparison between new hybrid selective spinal fusion versus anterior-posterior spinal instrumentation.

    PubMed

    Yagi, Mitsuru; Patel, Ravi; Lawhorne, Thomas W; Cunningham, Matthew E; Boachie-Adjei, Oheneba

    2014-04-01

    Combined anteroposterior spinal fusion with instrumentation has been used for many years to treat adult thoracolumbar/lumbar scoliosis. This surgery remains a technical challenge to spine surgeons, and current literature reports high complication rates. The purpose of this study is to validate a new hybrid technique (a combination of single-rod anterior instrumentation and a shorter posterior instrumentation to the sacrum) to treat adult thoracolumbar/lumbar scoliosis. This study is a retrospective consecutive case series of surgically treated patients with adult lumbar or thoracolumbar scoliosis. This is a retrospective study of 33 matched pairs of patients with adult scoliosis who underwent two different surgical procedures: a new hybrid technique versus a third-generation anteroposterior spinal fusion. Preoperative and postoperative outcome measures include self-report measures, physiological measures, and functional measures. In a retrospective case-control study, 33 patients treated with the hybrid technique were matched with 33 patients treated with traditional anteroposterior fusion based on preoperative radiographic parameters. Mean follow-up in the hybrid group was 5.3 years (range, 2-11 years), compared with 4.6 years (range, 2-10 years) in the control group. Operating room (OR) time, estimated blood loss, and levels fused were collected as surrogates for surgical morbidity. Radiographic parameters were collected preoperatively, postoperatively, and at final follow-up. The Scoliosis Research Society Patient Questionnaire (SRS-22r) and Oswestry Disability Index (ODI) scores were collected for clinical outcomes. Operating room time, EBL, and levels fused were significantly less in the hybrid group compared with the control group (p<.0001). 
The postoperative thoracic Cobb angle was similar between the hybrid and control techniques (p=.24); however, the hybrid technique showed significant improvement in the thoracolumbar/lumbar curves (p=.004) and the lumbosacral fractional curve (p<.0001). The major complication rate was lower in the hybrid group than in the control group (18% vs. 39%, p=.01). Clinical outcomes at final follow-up were not significantly different based on overall SRS-22r scores and ODI scores. The new hybrid technique demonstrates good long-term results, with less morbidity and fewer complications than traditional anteroposterior surgery in select patients with thoracolumbar/lumbar scoliosis. This study received no funding. No potential conflict of interest-associated bias existed. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Selective sampling and measurement of Cr (VI) in water with polyquaternary ammonium salt as a binding agent in diffusive gradients in thin-films technique.

    PubMed

    Chen, Hong; Zhang, Yang-Yang; Zhong, Ke-Li; Guo, Lian-Wen; Gu, Jia-Li; Bo, Le; Zhang, Meng-Han; Li, Jian-Rong

    2014-04-30

    A diffusive gradients in thin films (DGT) device with polyquaternary ammonium salt (PQAS) as a novel binding agent (PQAS DGT), combined with graphite furnace atomic absorption spectrometry (GFAAS), was developed for the selective sampling and measurement of Cr (VI) in water. The performance of PQAS DGT was independent of pH over the range 3-12 and of ionic strength from 1 × 10(-3) to 1 mol L(-1). DGT validation experiments showed that Cr (VI) was measured accurately as well as selectively by PQAS DGT, whereas Cr (III) was not determined quantitatively. The measurement of Cr (VI) with PQAS DGT was in agreement with that of the diphenylcarbazide spectrophotometric (DPC) method in industrial wastewater. The PQAS DGT device has been successfully deployed in local freshwater. The concentrations of Cr (VI) determined by PQAS DGT coupled with GFAAS in the Nuer River, Ling River, and North Lake were 0.73 ± 0.09 μg L(-1), 0.50 ± 0.07 μg L(-1), and 0.61 ± 0.07 μg L(-1), respectively. The results indicate that the PQAS DGT device can be used for the selective sampling and measurement of Cr (VI) in water, and its detection limit is lower than that of the DPC method. Copyright © 2014 Elsevier B.V. All rights reserved.
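For context, bulk concentration is recovered from a DGT sampler with the standard DGT equation C = M·Δg/(D·A·t), where M is the mass accumulated on the binding gel, Δg the diffusive layer thickness, D the diffusion coefficient, A the exposure window area, and t the deployment time. The numbers below are illustrative assumptions, not values from this study.

```python
# Hedged sketch of the standard DGT concentration equation.

def dgt_concentration(M_ng, dg_cm, D_cm2_s, A_cm2, t_s):
    """Bulk concentration (ng/mL, i.e. ug/L) from accumulated mass M."""
    return M_ng * dg_cm / (D_cm2_s * A_cm2 * t_s)

M = 50.0          # ng Cr(VI) eluted from the binding gel (e.g., via GFAAS)
dg = 0.09         # cm, diffusive layer thickness (assumed)
D = 5.0e-6        # cm^2/s, Cr(VI) diffusion coefficient in the gel (assumed)
A = 3.14          # cm^2, exposure window area (assumed)
t = 24 * 3600     # s, one-day deployment

C = dgt_concentration(M, dg, D, A, t)   # ug/L, time-averaged concentration
```

Because M accumulates over the whole deployment, the result is a time-averaged concentration, which is what makes DGT attractive for field waters with fluctuating Cr (VI) levels.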

  9. Laser speckle technique for burner liner strain measurements

    NASA Technical Reports Server (NTRS)

    Stetson, K. A.

    1982-01-01

    Thermal and mechanical strains were measured on samples of a common material used in jet engine burner liners, which were heated from room temperature to 870 C and cooled back to 220 C in a laboratory furnace. The physical geometry of the sample surface was recorded at selected temperatures by a set of 12 single-exposure specklegrams. Sequential pairs of specklegrams were compared in a heterodyne interferometer, which gives high-precision measurement of differential displacements. Good speckle correlation between the first and last specklegrams is noted, which allows a check on accumulated errors.

  10. Investigations of medium wavelength magnetic anomalies in the eastern Pacific using MAGSAT data

    NASA Technical Reports Server (NTRS)

    Harrison, C. G. A. (Principal Investigator)

    1981-01-01

    The suitability of using magnetic field measurements obtained by MAGSAT is discussed with regard to resolving the medium wavelength anomaly problem. A procedure for removing the external field component from the measured field is outlined. Various methods of determining crustal magnetizations are examined in light of satellite orbital parameters, resulting in the selection of the equivalent source technique for evaluating scalar measurements. A matrix inversion of the vector components is suggested as a method for arriving at a scalar potential representation of the field.

  11. Imaging trace gases in volcanic plumes with Fabry Perot Interferometers

    NASA Astrophysics Data System (ADS)

    Kuhn, Jonas; Platt, Ulrich; Bobrowski, Nicole; Lübcke, Peter; Wagner, Thomas

    2017-04-01

    Within the last decades, progress in remote sensing of atmospheric trace gases revealed many important insights into physical and chemical processes in volcanic plumes. In particular, their evolution could be studied in more detail than by traditional in-situ techniques. A major limitation of standard techniques for volcanic trace gas remote sensing (e.g. Differential Optical Absorption Spectroscopy, DOAS) is the constraint of the measurement to a single viewing direction since they use dispersive spectroscopy with a high spectral resolution. Imaging DOAS-type approaches can overcome this limitation, but become very time consuming (of the order of minutes to record a single image) and often cannot match the timescales of the processes of interest for volcanic gas measurements (occurring at the order of seconds). Spatially resolved imaging observations with high time resolution for volcanic sulfur dioxide (SO2) emissions became possible with the introduction of the SO2-Camera. Reducing the spectral resolution to two spectral channels (using interference filters) that are matched to the SO2 absorption spectrum, the SO2-Camera is able to record full frame SO2 slant column density distributions at a temporal resolution on the order of < 1s. This for instance allows for studying variations in SO2 fluxes on very short time scales and applying them in magma dynamics models. However, the currently employed SO2-Camera technique is limited to SO2 detection and, due to its coarse spectral resolution, has a limited spectral selectivity. This limits its application to very specific, infrequently found measurement conditions. Here we present a new approach, based on matching the transmission profile of Fabry Perot Interferometers (FPIs) to periodic spectral absorption features of trace gases. 
The FPI's transmission spectrum is chosen to achieve a high correlation with the spectral absorption of the trace gas, allowing high selectivity and sensitivity while still using only a few spectral channels. This would not only improve SO2 imaging, but also allow the technique to be applied to further gases of interest in volcanology (and other areas of atmospheric research). Imaging halogen species would be particularly interesting for volcanic trace gas studies. Bromine monoxide (BrO) and chlorine dioxide (OClO) both exhibit absorption features that allow their detection with the FPI correlation technique. From BrO and OClO data, ClO levels in the plume could be calculated. We present an outline of applications of the FPI technique to imaging a series of trace gases in volcanic plumes. Sample calculations on the sensitivity and selectivity of the technique, first proof-of-concept studies, and proposals for technical implementations are presented.
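The FPI matching idea can be sketched numerically: compute the etalon's Airy transmission for a candidate free spectral range (FSR) and score its Pearson correlation against the gas absorption band. The "band" below is a synthetic sinusoid and the FSR/finesse values are assumptions for illustration, not a real BrO or OClO cross-section.

```python
import math

# Hedged sketch: correlate an FPI's periodic (Airy) transmission with a
# periodic absorption band to choose a well-matched free spectral range.

def airy_transmission(nu, fsr, finesse):
    """Airy transmission of an ideal etalon at wavenumber nu."""
    F = (2 * finesse / math.pi) ** 2
    return 1.0 / (1.0 + F * math.sin(math.pi * nu / fsr) ** 2)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

nu = [i * 0.1 for i in range(400)]                         # wavenumber grid (a.u.)
band = [0.5 + 0.5 * math.cos(2 * math.pi * v / 4.0) for v in nu]  # period 4

matched = [airy_transmission(v, 4.0, 3.0) for v in nu]     # FSR = band period
detuned = [airy_transmission(v, 2.7, 3.0) for v in nu]     # mismatched FSR

r_on = pearson(matched, band)    # transmission peaks align with the band
r_off = pearson(detuned, band)   # peaks drift out of phase
```

The matched FSR yields a much higher correlation than the detuned one, which is the selectivity criterion the technique exploits: only a gas whose absorption is periodic at the etalon's FSR modulates the two FPI channels differentially.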

  12. Diagnostic Lumbar Puncture

    PubMed Central

    Doherty, Carolynne M; Forbes, Raeburn B

    2014-01-01

    Diagnostic Lumbar Puncture is one of the most commonly performed invasive tests in clinical medicine. Evaluation of an acute headache and investigation of inflammatory or infectious disease of the nervous system are the most common indications. Serious complications are rare, and correct technique will minimise diagnostic error and maximise patient comfort. We review the technique of diagnostic Lumbar Puncture including anatomy, needle selection, needle insertion, measurement of opening pressure, Cerebrospinal Fluid (CSF) specimen handling and after care. We also make some quality improvement suggestions for those designing services incorporating diagnostic Lumbar Puncture. PMID:25075138

  13. PROGRESS ON THE STUDY OF THE URANIUM-ALUMINUM-IRON CONSTITUTION DIAGRAM FOR THE PERIOD SEPTEMBER 1-DECEMBER 31, 1963

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, R.B.

    The U--Al--Fe constitution diagram up to about 1000 ppm each of aluminum and iron is studied. The techniques used for this study include optical, electron, and x-ray metallography; microprobe analysis; and electrical conductivity and hardness measurements. A combination of techniques is giving evidence of the amount of solid solubility of aluminum and iron in alpha, beta, and gamma uranium at selected higher temperatures. The U-Al and U-Fe phase diagrams are also being determined. (N.W.R.)

  14. Computation of transonic flow past projectiles at angle of attack

    NASA Technical Reports Server (NTRS)

    Reklis, R. P.; Sturek, W. B.; Bailey, F. R.

    1978-01-01

    Aerodynamic properties of artillery shells, such as normal force and pitching moment, reach peak values in a narrow transonic Mach number range. In order to compute these quantities, numerical techniques have been developed to obtain solutions to the three-dimensional transonic small disturbance equation about slender bodies at angle of attack. The computation is based on a plane relaxation technique involving Fourier transforms to partially decouple the three-dimensional difference equations. Particular care is taken to assure accurate solutions near corners found in shell designs. Computed surface pressures are compared to experimental measurements for circular arc and cone-cylinder bodies, which have been selected as test cases. Computed pitching moments are compared to range measurements for a typical projectile shape.

  15. Oxygen detection using the laser diode absorption technique

    NASA Technical Reports Server (NTRS)

    Disimile, P. J.; Fox, C. W.

    1991-01-01

    Accurate measurement of the concentration and flow rate of gaseous oxygen is becoming of greater importance. The detection technique presented is based on the principle of light absorption by the oxygen A-band. Oxygen molecules attenuate radiation in the 759-770 nm wavelength range. With the ability to measure changes in relative light transmission of less than 0.01 percent, a sensitive optical gas detection system was configured. The system is small in size, light in weight, has low energy requirements, and has a rapid response time. In this research program, the application of temperature-tuned laser diodes, with their ability to be wavelength-shifted to a selected absorption spectral peak, has allowed concentrations as low as 1300 ppm to be detected.

  16. Skeletal mass in rheumatoid arthritis: a comparison with forearm bone mineral content. [Photon transmission scanning of bone tissues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zanzi, I.; Roginsky, M.S.; Ellis, K.J.

    1976-01-01

    The evaluation of diffuse osteoporosis in patients with rheumatoid arthritis (RA) remains controversial. An important associated problem is the compounded effect of osteopenia secondary to chronic corticosteroid treatment. Photon-absorptiometric techniques have been utilized for measurements of selected sites of the skeleton, such as the distal femur and the distal radius. The development of the technique of in-vivo total body neutron activation analysis (TBNAA), along with whole body counting, has made possible the direct measurement of skeletal mass (total body calcium, TBCa). The TBCa and radial bone mineral content (BMC) were evaluated in 19 Caucasian women with RA, with and without a history of corticosteroid treatment. (auth)

  17. In Situ Noble-Gas Based Chronology on Mars

    NASA Technical Reports Server (NTRS)

    Swindle, T. D.

    2000-01-01

    Determining radiometric ages in situ on another planet's surface has never been done, and there are good reasons to think that it will be extremely difficult. It is certainly hard to imagine that such ages could be measured as precisely as they could be measured on returned samples in state-of-the-art terrestrial laboratories. However, it may be possible, by using simple noble-gas-based chronology techniques, to determine ages on Mars to a precision that is scientifically useful. This abstract will: (1) describe the techniques we envision; (2) give some examples of how such information might be scientifically useful; and (3) describe the system we are developing, including the requirements in terms of mass, power, volume, and sample selection and preparation.
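    As a concrete example of a noble-gas chronometer of the kind envisioned, the conventional potassium-argon scheme relates the measured 40Ar*/40K ratio to age. A sketch using the standard decay constants; the choice of K-Ar as the example is an assumption, not taken from the abstract:

```python
import math

LAMBDA_TOTAL = 5.543e-10  # total 40K decay constant, 1/yr
LAMBDA_EC = 0.581e-10     # electron-capture branch to 40Ar, 1/yr

def k_ar_age(ar40_k40):
    """Conventional K-Ar age (years) from the measured 40Ar*/40K ratio:
    t = (1/lambda) * ln(1 + (lambda/lambda_ec) * 40Ar*/40K)."""
    return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_k40) / LAMBDA_TOTAL

def ratio_for_age(t_yr):
    """Inverse: the 40Ar*/40K ratio expected for a sample of age t (years)."""
    return (LAMBDA_EC / LAMBDA_TOTAL) * (math.exp(LAMBDA_TOTAL * t_yr) - 1.0)

# A hypothetical 4.0 Gyr Martian surface sample:
r = ratio_for_age(4.0e9)
t = k_ar_age(r)  # round-trips to 4.0e9 yr
```

    The scientifically useful precision argued for above translates into how well the in-situ instrument must measure this ratio, not into laboratory-grade accuracy.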

  18. Experimental measurement of structural power flow on an aircraft fuselage

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1991-01-01

    An experimental technique is used to measure structural intensity through an aircraft fuselage with an excitation load applied near one of the wing attachment locations. The fuselage is a relatively large structure, requiring a large number of measurement locations to analyze the whole of the structure. For the measurement of structural intensity, multiple point measurements are necessary at every location of interest. A tradeoff is therefore required between the number of measurement transducers, the mounting of these transducers, and the accuracy of the measurements. Using four transducers mounted on a bakelite platform, structural intensity vectors are measured at locations distributed throughout the fuselage. To minimize the errors associated with using the four transducer technique, the measurement locations are selected to be away from bulkheads and stiffeners. Furthermore, to eliminate phase errors between the four transducer measurements, two sets of data are collected for each position, with the orientation of the platform with the four transducers rotated by 180 degrees and an average taken between the two sets of data. The results of these measurements together with a discussion of the suitability of the approach for measuring structural intensity on a real structure are presented.

  19. Measuring food intake in studies of obesity.

    PubMed

    Lissner, Lauren

    2002-12-01

    The problem of how to measure habitual food intake in studies of obesity remains an enigma in nutritional research. The existence of obesity-specific underreporting was rather controversial until the advent of the doubly labelled water technique gave credence to previously anecdotal evidence that such a bias does in fact exist. This paper reviews a number of issues relevant to interpreting dietary data in studies involving obesity. Topics covered include: participation biases, normative biases, the importance of matching the method to the study, selective underreporting, and a brief discussion of the potential implications of generalised and selective underreporting in analytical epidemiology. It is concluded that selective underreporting of certain food types by obese individuals would produce consequences in analytical epidemiological studies that are both unpredictable and complex. Since it is becoming increasingly acknowledged that selective reporting error does occur, it is important to emphasise that correction for energy intake is not sufficient to eliminate the biases from this type of error. This is true both for obesity-related selective reporting errors and for more universal types of selective underreporting, e.g. of foods of low social desirability. Additional research is urgently required to examine the consequences of this type of error.

  20. Quality assessment of the TLS data in conservation of monuments

    NASA Astrophysics Data System (ADS)

    Markiewicz, Jakub S.; Zawieska, Dorota

    2015-06-01

    Laser scanning has recently confirmed its high potential for acquiring 3D data on architectural and engineering objects. The objective of this paper is to analyse the quality of TLS data acquired for different surfaces of monumental objects, with consideration of scanning distances and angles. Tests were performed on the quality of the survey data and on the shapes of architectural objects characterised by diversified curvature, structure and surface uniformity. The obtained results proved that terrestrial laser scanning does not achieve the expected accuracy for some historical surfaces and should be substituted by alternative, photogrammetric techniques. The typology of historical constructions is therefore important not only for selecting the optimum surveying technique, but also for applying it appropriately. The test objects were architectural details of the Main Hall of the Warsaw University of Technology. Scans were acquired using the 5006h scanner. Diversified scan geometries were tested, and the relations between distance and the obtained accuracy were specified. In many conservation works precise surface reconstruction is important in order to identify damage; therefore, the repeatability of the TLS results for selected surfaces was also tested. Different surfaces were analysed, composed of different materials with glittery elements and inhomogeneous structure. The obtained results and analyses revealed significant imperfections of the TLS technique when applied to measuring the surfaces of historical objects. The reported accuracy of TLS measurements of historical surfaces may be used by art conservators, museum professionals, archaeologists and other specialists performing broad analyses of historical heritage objects.

  1. Laser based in-situ and standoff detection of chemical warfare agents and explosives

    NASA Astrophysics Data System (ADS)

    Patel, C. Kumar N.

    2009-09-01

    Laser based detection of gaseous, liquid and solid residues and trace amounts has been developed ever since lasers were invented. However, the lack of availability of reasonably high power tunable lasers in the spectral regions where the relevant targets can be interrogated, as well as of appropriate techniques for high sensitivity, high selectivity detection, has hampered the practical exploitation of techniques for the detection of targets important for homeland security and defense applications. Furthermore, emphasis has been on selectivity without particular attention being paid to the impact of interfering species on the quality of detection. Having high sensitivity is a necessary but not a sufficient condition. High sensitivity assures a high probability of detection of the target species. However, it is only recently that the sensor community has come to recognize that any measure of probability of detection must be associated with a probability of false alarm if it is to have any value as a measure of performance. This is especially true when one attempts to compare performance characteristics of different sensors based on different physical principles. In this paper, I will provide a methodology for characterizing the performance of sensors utilizing optical absorption measurement techniques; the underlying principles are equally applicable to all other sensors. While most of the current progress in high sensitivity, high selectivity detection of CWAs, TICs and explosives involves identifying and quantifying the target species in-situ, there is an urgent need for standoff detection of explosives from safe distances. I will describe our results on CO2 and quantum cascade laser (QCL) based photoacoustic sensors for the detection of CWAs, TICs and explosives, as well as very new results on stand-off detection of explosives at distances up to 150 meters. The latter results are critically important for assuring the safety of military personnel in battlefield environments, especially from improvised explosive devices (IEDs), and of civilian personnel from terrorist attacks in metropolitan areas.
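    The pairing of detection probability with false-alarm probability can be sketched with a simple threshold test under Gaussian measurement noise; this toy model is an illustration of the methodology's premise, not the paper's own calculation:

```python
import math

def q(x):
    """Upper-tail probability of a standard normal (the Q-function)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pd_pfa(threshold, signal, noise_sigma):
    """Probability of detection and of false alarm for a threshold test
    on a Gaussian-noise absorption measurement.  signal: mean reading
    when the target species is present (zero when absent)."""
    pfa = q(threshold / noise_sigma)             # target absent
    pd = q((threshold - signal) / noise_sigma)   # target present
    return pd, pfa

# Sweeping the threshold traces the sensor's ROC curve for SNR = 5:
roc = [pd_pfa(0.5 * k, signal=5.0, noise_sigma=1.0) for k in range(11)]
pd_mid, pfa_mid = pd_pfa(2.5, signal=5.0, noise_sigma=1.0)
```

    Raising the threshold lowers both probabilities together, which is why quoting a detection probability without its companion false-alarm rate says nothing about performance.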

  2. Electrochemical hydrogen sulfide biosensors.

    PubMed

    Xu, Tailin; Scafa, Nikki; Xu, Li-Ping; Zhou, Shufeng; Abdullah Al-Ghanem, Khalid; Mahboob, Shahid; Fugetsu, Bunshi; Zhang, Xueji

    2016-02-21

    The measurement of sulfide, especially hydrogen sulfide, has held the attention of the analytical community due to its unique physiological and pathophysiological roles in biological systems. Electrochemical detection offers a rapid, highly sensitive, affordable, simple, and real-time technique to measure hydrogen sulfide concentration, which has been a well-documented and reliable method. This review details up-to-date research on the electrochemical detection of hydrogen sulfide (ion selective electrodes, polarographic hydrogen sulfide sensors, etc.) in biological samples for potential therapeutic use.

  3. Comparison of gamma densitometry and electrical capacitance measurements applied to hold-up prediction of oil–water flow patterns in horizontal and slightly inclined pipes

    NASA Astrophysics Data System (ADS)

    Perera, Kshanthi; Kumara, W. A. S.; Hansen, Fredrik; Mylvaganam, Saba; Time, Rune W.

    2018-06-01

    Measurement techniques are vital for the control and operation of multiphase oil–water flow in pipes. The development of such techniques depends on laboratory experiments involving flow visualization, liquid fraction (‘hold-up’), phase slip and pressure drop measurements. These provide valuable information by revealing the physics and the spatial and temporal structures of complex multiphase flow phenomena. This paper presents hold-up measurements of oil–water flow in pipelines using gamma densitometry and electrical capacitance tomography (ECT) sensors. The experiments were carried out with pipe inclinations from  ‑5° to  +6° for selected mixture velocities (0.2–1.5 m s‑1) and selected watercuts (0.05–0.95). Mineral oil (Exxsol D60) and water were used as test fluids. Nine flow patterns were identified, including a new pattern called stratified wavy and mixed interface flow. As a third, direct method, visual observations and high-speed videos were used for flow regime and interface identification. ECT and gamma densitometry hold-up measurements show similar trends with changes in pipeline inclination. Changing the pipe inclination affected the flow mostly at lower mixture velocities, where it caused flow-pattern transitions and the largest changes in hold-up. ECT hold-up measurements overpredict the gamma densitometry measurements at higher input water cuts and underpredict at intermediate water cuts. Gamma hold-up results showed good agreement with literature results, having a maximum deviation of 6%, while the deviation was as high as 22% for ECT in comparison to gamma densitometry. Uncertainty analysis of the measurement techniques was carried out with single-phase oil flow. This shows that the measurement error associated with gamma densitometry is approximately 3.2%, which includes a 1.3% statistical error and a 2.9% error identified as electromagnetically induced noise in the electronics.
Thus, gamma densitometry can predict hold-up with a higher accuracy in comparison to ECT when applied to oil–water systems at minimized electromagnetic noise.
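    The hold-up inversion for a single-beam gamma densitometer is a standard log interpolation between the full-oil and full-water calibration count rates (exponential attenuation). A sketch with hypothetical count rates:

```python
import math

def water_holdup(counts, counts_oil, counts_water):
    """Water hold-up from a single-beam gamma densitometer reading.
    With exponential attenuation, the count rate through a mixture
    satisfies N = N_oil^(1-a) * N_water^a, so:
        a = ln(N / N_oil) / ln(N_water / N_oil)
    counts_oil / counts_water: calibration rates through pure phases."""
    return math.log(counts / counts_oil) / math.log(counts_water / counts_oil)

# Hypothetical calibration: 10000 counts/s through pure oil,
# 6000 through pure water; a mixed reading of 7746 counts/s
# implies roughly 50% water hold-up:
alpha = water_holdup(7746.0, 10000.0, 6000.0)
```

    The 1.3% statistical error quoted above comes from the Poisson counting statistics of N, which is why longer counting times (or stronger sources) trade directly against time resolution.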

  4. FAIR exempting separate T (1) measurement (FAIREST): a novel technique for online quantitative perfusion imaging and multi-contrast fMRI.

    PubMed

    Lai, S; Wang, J; Jahng, G H

    2001-01-01

    A new pulse sequence, dubbed FAIR exempting separate T(1) measurement (FAIREST) in which a slice-selective saturation recovery acquisition is added in addition to the standard FAIR (flow-sensitive alternating inversion recovery) scheme, was developed for quantitative perfusion imaging and multi-contrast fMRI. The technique allows for clean separation between and thus simultaneous assessment of BOLD and perfusion effects, whereas quantitative cerebral blood flow (CBF) and tissue T(1) values are monitored online. Online CBF maps were obtained using the FAIREST technique and the measured CBF values were consistent with the off-line CBF maps obtained from using the FAIR technique in combination with a separate sequence for T(1) measurement. Finger tapping activation studies were carried out to demonstrate the applicability of the FAIREST technique in a typical fMRI setting for multi-contrast fMRI. The relative CBF and BOLD changes induced by finger-tapping were 75.1 +/- 18.3 and 1.8 +/- 0.4%, respectively, and the relative oxygen consumption rate change was 2.5 +/- 7.7%. The results from correlation of the T(1) maps with the activation images on a pixel-by-pixel basis show that the mean T(1) value of the CBF activation pixels is close to the T(1) of gray matter while the mean T(1) value of the BOLD activation pixels is close to the T(1) range of blood and cerebrospinal fluid. Copyright 2001 John Wiley & Sons, Ltd.

  5. Apical extrusion of debris and irrigant using hand and rotary systems: A comparative study

    PubMed Central

    Ghivari, Sheetal B; Kubasad, Girish C; Chandak, Manoj G; Akarte, NR

    2011-01-01

    Aim: To evaluate and compare quantitatively the amount of debris and irrigant extruded by two hand and rotary nickel–titanium (Ni–Ti) instrumentation techniques. Materials and Methods: Eighty freshly extracted mandibular premolars with similar canal length and curvature were selected and mounted in a debris collection apparatus. After each instrument change, 1 ml of distilled water was used as an irrigant, and the amount of irrigant extruded was measured using the Meyers and Montgomery method. After drying, the debris was weighed using an electronic microbalance. Statistical analysis used: The data were analyzed statistically to determine the mean differences between the groups. The mean weights of the dry debris and irrigant within and between the groups were compared using one-way ANOVA and the multiple comparison (Dunnett D) test. Results: The step-back technique extruded a greater quantity of debris and irrigant than the other hand and rotary Ni–Ti systems. Conclusions: Since all instrumentation techniques extrude debris and irrigant, it is prudent for the clinician to select the technique that extrudes the least, to prevent flare-up phenomena. PMID:21814364

  6. Measurement component technology. Volume 1: Cryogenic pressure measurement technology, high pressure flange seals, hydrogen embrittlement of pressure transducer material, close coupled versus remote transducer installation and temperature compensation of pressure transducers

    NASA Technical Reports Server (NTRS)

    Hayakawa, K. K.; Udell, D. R.; Iwata, M. M.; Lytle, C. F.; Chrisco, R. M.; Greenough, C. S.; Walling, J. A.

    1972-01-01

    The results are presented of an investigation into the availability and performance capability of measurement components in the area of cryogenic temperature, pressure, flow and liquid detection components and high temperature strain gages. In addition, technical subjects allied to the components were researched and discussed. These selected areas of investigation were: (1) high pressure flange seals, (2) hydrogen embrittlement of pressure transducer diaphragms, (3) The effects of close-coupled versus remote transducer installation on pressure measurement, (4) temperature transducer configuration effects on measurements, and (5) techniques in temperature compensation of strain gage pressure transducers. The purpose of the program was to investigate the latest design and application techniques in measurement component technology and to document this information along with recommendations for upgrading measurement component designs for future S-2 derivative applications. Recommendations are provided for upgrading existing state-of-the-art in component design, where required, to satisfy performance requirements of S-2 derivative vehicles.

  7. Longitudinal evaluation of patients with oral potentially malignant disorders using optical imaging and spectroscopy

    NASA Astrophysics Data System (ADS)

    Schwarz, Richard A.; Pierce, Mark C.; Mondrik, Sharon; Gao, Wen; Quinn, Mary K.; Bhattar, Vijayashree; Williams, Michelle D.; Vigneswaran, Nadarajah; Gillenwater, Ann M.; Richards-Kortum, Rebecca

    2012-02-01

    Dysplastic and cancerous alterations in oral tissue can be detected noninvasively in vivo using optical techniques including autofluorescence imaging, high-resolution imaging, and spectroscopy. Interim results are presented from a longitudinal study in which optical imaging and spectroscopy were used to evaluate the progression of lesions over time in patients at high risk for development of oral cancer. Over 100 patients with oral potentially malignant disorders have been enrolled in the study to date. Areas of concern in the oral cavity are measured using widefield autofluorescence imaging and depth-sensitive optical spectroscopy during successive clinical visits. Autofluorescence intensity patterns and autofluorescence spectra are tracked over time and correlated with clinical observations. Patients whose lesions progress and who undergo surgery are also measured in the operating room immediately prior to surgery using autofluorescence imaging and spectroscopy, with the addition of intraoperative high-resolution imaging to characterize nuclear size, nuclear crowding, and tissue architecture at selected sites. Optical measurements are compared to histopathology results from biopsies and surgical specimens collected from the measured sites. Autofluorescence imaging and spectroscopy measurements are continued during post-surgery followup visits. We examined correlations between clinical impression and optical classification over time with an average followup period of 4 months. The data collected to date suggest that multimodal optical techniques may aid in noninvasive monitoring of the progression of oral premalignant lesions, biopsy site selection, and accurate delineation of lesion extent during surgery.

  8. Design of Digital Calliper for Control of Selected Parameters of Railway Wheels

    NASA Astrophysics Data System (ADS)

    Ticha, Šarka; Zelnak, Michal; Vavrina, Jan

    2014-12-01

    This contribution deals with a new design of digital calliper for transferring the width dimension scale from the ring interface to the tyre of a railway wheel. Based on the problem definition, several design variants were developed with the aim of improving the current measurement technique. For the variant selected for production, calibration procedures were developed to ensure the required accuracy. The final solution, significantly influenced by the economic recession, was realised as a single-unit production variant. The manufacturer and exclusive supplier of this digital calliper is UNIMETRA Company, Ltd.

  9. Estimating the effect of gang membership on nonviolent and violent delinquency: a counterfactual analysis.

    PubMed

    Barnes, J C; Beaver, Kevin M; Miller, J Mitchell

    2010-01-01

    This study reconsiders the well-known link between gang membership and criminal involvement. Recently developed analytical techniques enabled the approximation of an experimental design to determine whether gang members, after being matched with similarly situated nongang members, exhibited greater involvement in nonviolent and violent delinquency. Findings indicated that while gang membership is a function of self-selection, selection effects alone do not account for the greater involvement in delinquency exhibited by gang members. After propensity score matching was employed, gang members maintained a greater involvement in both nonviolent and violent delinquency when measured cross-sectionally, but only violent delinquency when measured longitudinally. Additional analyses using inverse probability of treatment weights reaffirmed these conclusions. © 2010 Wiley-Liss, Inc.
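    The counterfactual design can be sketched as 1:1 nearest-neighbour matching on the propensity score followed by a matched-pairs effect estimate. This is a simplified stand-in for the study's procedure (real analyses estimate scores with logistic regression and use calipers), and the scores and outcomes below are hypothetical:

```python
def match_and_estimate(treated, controls):
    """1:1 nearest-neighbour propensity-score matching without
    replacement, then the average treatment effect on the treated (ATT).
    treated/controls: lists of (propensity_score, outcome) tuples."""
    pool = list(controls)
    diffs = []
    for score, outcome in treated:
        # nearest still-available control by propensity score
        j = min(range(len(pool)), key=lambda i: abs(pool[i][0] - score))
        _, outcome_ctrl = pool.pop(j)
        diffs.append(outcome - outcome_ctrl)
    return sum(diffs) / len(diffs)

# Hypothetical gang members vs. candidate non-member matches
# (score, delinquency outcome):
att = match_and_estimate(
    treated=[(0.8, 12.0), (0.6, 9.0), (0.7, 11.0)],
    controls=[(0.79, 10.0), (0.61, 8.0), (0.72, 9.0), (0.2, 3.0)],
)
```

    A positive ATT after matching is what distinguishes a genuine gang effect from pure self-selection: the comparison is restricted to non-members with similar propensities to join.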

  10. Manufacture and calibration of optical supersmooth roughness artifacts for intercomparisons

    NASA Astrophysics Data System (ADS)

    Ringel, Gabriele A.; Kratz, Frank; Schmitt, Dirk-Roger; Mangelsdorf, Juergen; Creuzet, Francois; Garratt, John D.

    1995-09-01

    Intercomparison roughness measurements have been carried out on supersmooth artifacts fabricated from BK7, fused silica, and Zerodur. The surface parameters were determined using the optical heterodyne profiler Z5500 (Zygo), a special prototype of the mechanical profiler Nanostep (Rank Taylor Hobson), and an atomic force microscope (Park Scientific Instruments) with an improved acquisition technique. The intercomparison was performed after the range of collected spatial wavelengths for each instrument was adjusted using digital filtering techniques. It is demonstrated for different roughness ranges that the applied superpolishing techniques yield supersmooth artifacts which can be used for further intercomparisons. More than 100 samples were investigated, and criteria were developed to select artifacts from the sample stock.
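    The bandwidth-matching step can be sketched as a discrete Fourier band-pass followed by an RMS roughness (Rq) computation over the common spatial-wavelength band. The profile and band limits below are illustrative, not from the paper:

```python
import cmath
import math

def rq_in_band(profile, dx, wl_min, wl_max):
    """RMS roughness (Rq) of a 1-D profile restricted to spatial
    wavelengths in [wl_min, wl_max], via a discrete Fourier filter --
    the kind of digital filtering used to match the bandwidths of
    different profilers before comparing their roughness values.
    dx: sample spacing.  (Nyquist bin omitted for simplicity.)"""
    n = len(profile)
    mean = sum(profile) / n
    z = [v - mean for v in profile]
    power = 0.0
    for k in range(1, n // 2):                    # positive frequencies
        c = sum(z[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        if wl_min <= n * dx / k <= wl_max:        # wavelength of bin k
            power += 2.0 * abs(c) ** 2 / n ** 2   # bin + conjugate (Parseval)
    return math.sqrt(power)

# Unit-amplitude sinusoid with a 10 um wavelength, sampled every 1 um:
z = [math.sin(2.0 * math.pi * j / 10.0) for j in range(100)]
rq_inside = rq_in_band(z, dx=1.0, wl_min=5.0, wl_max=20.0)   # ~0.707
rq_outside = rq_in_band(z, dx=1.0, wl_min=1.0, wl_max=4.0)   # ~0
```

    Without this step, instruments with different spatial bandwidths report different Rq values for the same surface, which is exactly what the intercomparison has to avoid.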

  11. Non-Contact Technique for Determining the Mechanical Stress in thin Films on Wafers by Profiler

    NASA Astrophysics Data System (ADS)

    Djuzhev, N. A.; Dedkova, A. A.; E Gusev, E.; Makhiboroda, M. A.; Glagolev, P. Y.

    2017-04-01

    This paper presents an algorithm, implemented as a Matlab software package, for analysing surface relief in order to calculate mechanical stresses in a selected direction on a wafer. The method allows measurement of the sample in a local area, provides a visual representation of the data, and yields the stress distribution over the wafer surface. Automating the analysis reduces the likelihood of researcher error and saves time during the processing of results. When several measurements are carried out, a map of the wafer can be drawn to predict the yield of crystals. The technique was applied to measure the mechanical stresses of a thermal silicon oxide film on a silicon substrate; analysis of the results demonstrated the objectivity and reliability of the calculations. The method can also be used to select the optimal parameters of the material deposition conditions. Using the TCAD device-technology simulation software, the process time, temperature and oxidation environment were defined so as to obtain the set value of the dielectric film thickness, and the thermal stresses in the silicon-silicon oxide system were calculated. There is good agreement between the numerical simulation and the analytical calculation. It is shown that the origin of the mechanical stress is not limited to the difference in the thermal expansion coefficients of the materials.
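    Film stress from measured wafer curvature is classically obtained with the Stoney equation. A sketch with typical silicon values; the moduli, thicknesses and curvature below are illustrative assumptions, not the paper's data:

```python
def stoney_stress(E_sub, nu_sub, t_sub, t_film, R_before, R_after):
    """Residual film stress (Pa) from the change in wafer curvature,
    via the Stoney equation:
        sigma = E_s * t_s^2 / (6 * (1 - nu_s) * t_f) * (1/R_after - 1/R_before)
    E_sub, nu_sub: substrate Young's modulus (Pa) and Poisson ratio;
    t_sub, t_film: substrate and film thickness (m);
    R_before, R_after: radii of curvature (m) before/after deposition."""
    curvature_change = 1.0 / R_after - 1.0 / R_before
    return E_sub * t_sub ** 2 / (6.0 * (1.0 - nu_sub) * t_film) * curvature_change

# 1 um thermal SiO2 on a 500 um Si wafer whose radius of curvature goes
# from effectively flat to -60 m after oxidation (all values hypothetical):
sigma = stoney_stress(E_sub=130e9, nu_sub=0.28, t_sub=500e-6,
                      t_film=1e-6, R_before=1e12, R_after=-60.0)
# sigma < 0: compressive, as expected for thermal oxide on silicon
```

    The profiler measurements described above supply the local curvature in the selected direction; the film thickness comes from the process (or the TCAD oxidation model).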

  12. Pixel-based flood mapping from SAR imagery: a comparison of approaches

    NASA Astrophysics Data System (ADS)

    Landuyt, Lisa; Van Wesemael, Alexandra; Van Coillie, Frieke M. B.; Verhoest, Niko E. C.

    2017-04-01

    Due to their all-weather, day and night capabilities, SAR sensors have been shown to be particularly suitable for flood mapping applications. Thus, they can provide spatially-distributed flood extent data which are valuable for calibrating, validating and updating flood inundation models. These models are an invaluable tool for water managers, to take appropriate measures in times of high water levels. Image analysis approaches to delineate flood extent on SAR imagery are numerous. They can be classified into two categories, i.e. pixel-based and object-based approaches. Pixel-based approaches, e.g. thresholding, are abundant and in general computationally inexpensive. However, large discrepancies between these techniques exist and often subjective user intervention is needed. Object-based approaches require more processing but allow for the integration of additional object characteristics, like contextual information and object geometry, and thus have significant potential to provide an improved classification result. As means of benchmark, a selection of pixel-based techniques is applied on a ERS-2 SAR image of the 2006 flood event of River Dee, United Kingdom. This selection comprises Otsu thresholding, Kittler & Illingworth thresholding, the Fine To Coarse segmentation algorithm and active contour modelling. The different classification results are evaluated and compared by means of several accuracy measures, including binary performance measures.
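    Otsu thresholding, the first of the benchmarked pixel-based techniques, picks the grey level that maximises the between-class variance of the image histogram. A self-contained sketch; the synthetic backscatter values are illustrative (open water appears dark in SAR imagery):

```python
def otsu_threshold(values, nbins=256):
    """Otsu's method: the threshold maximising between-class variance
    of the histogram of `values`.  Returns the threshold in data units."""
    lo, hi = min(values), max(values)
    hist = [0] * nbins
    for v in values:
        b = min(int((v - lo) / (hi - lo) * nbins), nbins - 1)
        hist[b] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(nbins):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return lo + (best_t + 0.5) / nbins * (hi - lo)

# Bimodal backscatter: dark water pixels near 0.1, land near 0.8:
pixels = [0.1 + 0.02 * (i % 5) for i in range(50)] + \
         [0.8 + 0.02 * (i % 5) for i in range(50)]
th = otsu_threshold(pixels)  # lands between the two modes
```

    Otsu's assumption of a clearly bimodal histogram is precisely where the pixel-based approaches diverge on real SAR scenes, motivating the comparison above.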

  13. Topographic measurement of buried thin-film interfaces using a grazing resonant soft x-ray scattering technique

    NASA Astrophysics Data System (ADS)

    Gann, Eliot; Watson, Anne; Tumbleston, John R.; Cochran, Justin; Yan, Hongping; Wang, Cheng; Seok, Jaewook; Chabinyc, Michael; Ade, Harald

    2014-12-01

    The internal structures of thin films, particularly interfaces between different materials, are critical to system properties and performance across many disciplines, but characterization of buried interface topography is often unfeasible. In this work, we demonstrate that grazing resonant soft x-ray scattering (GRSoXS), a technique measuring diffusely scattered soft x rays from grazing incidence, can reveal the statistical topography of buried thin-film interfaces. By controlling and predicting the x-ray electric field intensity throughout the depth of the film and simultaneously the scattering contrast between materials, we are able to unambiguously identify the microstructure at different interfaces of a model polymer bilayer system. We additionally demonstrate the use of GRSoXS to selectively measure the topography of the surface and buried polymer-polymer interface in an organic thin-film transistor, revealing different microstructure and markedly differing evolution upon annealing. In such systems, where only indirect control of interface topography is possible, accurate measurement of the structure of interfaces for feedback is critically important. While we demonstrate the method here using organic materials, we also show that the technique is readily extendable to any thin-film system with elemental or chemical contrasts exploitable at absorption edges.

  14. High individual variation in pheromone production by tree-killing bark beetles (Coleoptera: Curculionidae: Scolytinae)

    NASA Astrophysics Data System (ADS)

    Pureswaran, Deepa S.; Sullivan, Brian T.; Ayres, Matthew P.

    2008-01-01

    Aggregation via pheromone signalling is essential for tree-killing bark beetles to overcome tree defenses and reproduce within hosts. Pheromone production is a trait that is linked to fitness, so high individual variation is paradoxical. One explanation is that the technique of measuring static pheromone pools overestimates true variation among individuals. An alternative hypothesis is that aggregation behaviour dilutes the contribution of individuals to the trait under selection and reduces the efficacy of natural selection on pheromone production by individuals. We compared pheromone measurements from traditional hindgut extractions of female southern pine beetles with those obtained by aerating individuals till they died. Aerations showed greater total pheromone production than hindgut extractions, but coefficients of variation (CV) remained high (60-182%) regardless of collection technique. This leaves the puzzle of high variation unresolved. A novel but simple explanation emerges from considering bark beetle aggregation behaviour. The phenotype visible to natural selection is the collective pheromone plume from hundreds of colonisers. The influence of a single beetle on this plume is enhanced by high variation among individuals but constrained by large group sizes. We estimated the average contribution of an individual to the pheromone plume across a range of aggregation sizes and showed that large aggregation sizes typical in mass attacks limit the potential of natural selection because each individual has so little effect on the overall plume. Genetic variation in pheromone production could accumulate via mutation and recombination, despite strong effects of the pheromone plume on the fitness of individuals within the aggregation. Thus, aggregation behaviour, by limiting the efficacy of natural selection, can allow the persistence of extreme phenotypes in nature.
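    The two statistics at the heart of this argument, the coefficient of variation and its dilution in an aggregation, can be sketched under a simplifying independent-emitters assumption (an illustration of the reasoning, not the paper's exact calculation):

```python
import math

def cv_percent(samples):
    """Coefficient of variation (%), the statistic reported above."""
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    return 100.0 * sd / mean

def plume_cv(cv_individual, group_size):
    """CV (%) of the collective plume from `group_size` independent,
    identically distributed emitters: the aggregate CV shrinks as
    1/sqrt(n), so the phenotype that selection acts on is far less
    variable than the individuals producing it."""
    return cv_individual / math.sqrt(group_size)

# An individual CV of 120% (within the 60-182% range reported above)
# is diluted to 12% in a mass attack of 100 beetles:
diluted = plume_cv(120.0, 100)
```

    Under this model a tenfold change in one beetle's output shifts a 100-beetle plume by at most a few percent, which is the sense in which aggregation shields extreme phenotypes from selection.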

  15. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2015-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
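    The exhaustive-search idea can be sketched for a toy two-parameter linear least-squares estimator: score each candidate suite by the trace of its estimation-error covariance and keep the minimum. The sensor names, sensitivities and noise variances below are hypothetical, not from the paper:

```python
from itertools import combinations

def estimation_error_metric(H_rows, noise_vars):
    """Sum of squared estimation errors (trace of the least-squares
    error covariance) for estimating two health parameters:
    trace((H^T R^-1 H)^-1), with diagonal noise covariance R."""
    a = b = c = 0.0  # 2x2 information matrix [[a, b], [b, c]]
    for (h1, h2), var in zip(H_rows, noise_vars):
        a += h1 * h1 / var
        b += h1 * h2 / var
        c += h2 * h2 / var
    det = a * c - b * b
    if det <= 0:
        return float("inf")  # parameters unobservable with this suite
    return (a + c) / det     # trace of the 2x2 inverse

def best_suite(sensors, k):
    """Exhaustive search: the k-sensor suite minimising the metric.
    sensors: dict name -> ((h1, h2), noise_variance)."""
    def metric(names):
        return estimation_error_metric([sensors[n][0] for n in names],
                                       [sensors[n][1] for n in names])
    return min(combinations(sorted(sensors), k), key=metric)

# Hypothetical sensitivities of four gas-path sensors to two health params:
sensors = {
    "N1":  ((1.0, 0.1), 0.01),
    "EGT": ((0.2, 1.0), 0.04),
    "Wf":  ((0.9, 0.2), 0.02),
    "Ps3": ((0.1, 0.1), 0.09),
}
suite = best_suite(sensors, 2)
```

    The Kalman-filter and maximum a posteriori metrics in the paper refine this by adding prior information, but the selection loop over candidate suites has the same structure.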

  17. Thermodynamic Analysis of the Selectivity Enhancement Obtained by Using Smart Hydrogels That Are Zwitterionic When Detecting Glucose With Boronic Acid Moieties

    PubMed Central

    Horkay, F.; Cho, S. H.; Tathireddy, P.; Rieth, L.; Solzbacher, F.; Magda, J.

    2011-01-01

    Because the boronic acid moiety reversibly binds to sugar molecules and has low cytotoxicity, boronic acid-containing hydrogels are being used in a variety of implantable glucose sensors under development, including sensors based on optical, fluorescence, and swelling pressure measurements. However, some method of glucose selectivity enhancement is often necessary, because isolated boronic acid molecules have a binding constant with glucose that is some forty times smaller than their binding constant with fructose, the second most abundant sugar in the human body. In many cases, glucose selectivity enhancement is obtained by incorporating pendant tertiary amines into the hydrogel network, thereby giving rise to a hydrogel that is zwitterionic at physiological pH. However, the mechanism by which incorporation of tertiary amines confers selectivity enhancement is poorly understood. In order to clarify this mechanism, we use the osmotic deswelling technique to compare the thermodynamic interactions of glucose and fructose with a zwitterionic smart hydrogel containing boronic acid moieties. We also investigate the change in the structure of the hydrogel that occurs when it binds to glucose or to fructose using the technique of small angle neutron scattering. PMID:22190765

  18. Deflectometry using a Hartmann screen to measure tilt, decentering and focus errors in a spherical surface

    NASA Astrophysics Data System (ADS)

    Muñoz-Potosi, A. F.; Granados-Agustín, F.; Campos-García, M.; Valdivieso-González, L. G.; Percino-Zacarias, M. E.

    2017-11-01

    Among the various techniques that can be used to assess the quality of optical surfaces, deflectometry evaluates the reflection experienced by rays impinging on a surface whose topography is under study. We propose the use of a screen spatial filter to select rays from a light source. The screen must be placed at a distance shorter than the radius of curvature of the surface under study. The location of the screen depends on the exit pupil of the system and the caustic area. The reflected rays are measured using an observation plane/screen/CCD located beyond the point of convergence of the rays. To implement an experimental design of the proposed technique and determine the topography of the surface under study, it is necessary to measure tilt, decentering and focus errors caused by mechanical misalignment, which could influence the results of this technique but are not related to the quality of the surface. The aim of this study is to analyze an ideal spherical surface with known radius of curvature to identify the variations introduced by such misalignment errors.

  19. Tracking brain motion during the cardiac cycle using spiral cine-DENSE MRI

    PubMed Central

    Zhong, Xiaodong; Meyer, Craig H.; Schlesinger, David J.; Sheehan, Jason P.; Epstein, Frederick H.; Larner, James M.; Benedict, Stanley H.; Read, Paul W.; Sheng, Ke; Cai, Jing

    2009-01-01

    Cardiac-synchronized brain motion is well documented, but accurate measurement of such motion on a pixel-by-pixel basis has been hampered by the lack of a suitable imaging technique. In this article, the authors present the implementation of an autotracking spiral cine displacement-encoded stimulated echo (DENSE) magnetic resonance imaging (MRI) technique for the measurement of pulsatile brain motion during the cardiac cycle. Displacement-encoded dynamic MR images of three healthy volunteers were acquired throughout the cardiac cycle using the spiral cine-DENSE pulse sequence gated to the R wave of an electrocardiogram. Pixelwise Lagrangian displacement maps were computed, and 2D displacement as a function of time was determined for selected regions of interest. Different intracranial structures exhibited characteristic motion amplitudes, directions, and patterns throughout the cardiac cycle. Time-resolved displacement curves revealed the pathway of pulsatile motion from the brain stem to the peripheral brain lobes. These preliminary results demonstrate that the spiral cine-DENSE MRI technique can be used to measure cardiac-synchronized pulsatile brain motion on a pixel-by-pixel basis with high temporal/spatial resolution and sensitivity. PMID:19746774

  20. Measuring high-density built environment for public health research: Uncertainty with respect to data, indicator design and spatial scale.

    PubMed

    Sun, Guibo; Webster, Chris; Ni, Michael Y; Zhang, Xiaohu

    2018-05-07

    Uncertainty with respect to built environment (BE) data collection, measure conceptualization and spatial scales is evident in urban health research, but most findings come from relatively low-density contexts. We selected Hong Kong, an iconic high-density city, as the study area, as limited research has been conducted on uncertainty in such settings. We used geocoded home addresses (n=5732) from a large population-based cohort in Hong Kong to extract BE measures for the participants' place of residence based on an internationally recognized BE framework. Variability of the measures was mapped, and Spearman's rank correlation was calculated to assess how well the relationships among indicators are preserved across variables and spatial scales. We found extreme variations and uncertainties for the 180 measures collected using comprehensive data and advanced geographic information systems modelling techniques. We highlight the implications of methodological selection and the spatial scales of the measures. The results suggest that more robust information for urban health research in high-density cities would emerge if greater consideration were given to BE data, design methods and the spatial scales of BE measures.
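
    The scale-sensitivity check described above can be sketched with a rank correlation between the same measure computed at two buffer scales. The variable names, the two scales and the synthetic data below are illustrative, not the study's actual 180 measures; Spearman's rho is computed as the Pearson correlation of ranks.

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation (no tie correction): the Pearson
    correlation of the rank-transformed samples."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical: one density measure computed for the same homes at
# two buffer scales (say 200 m and 800 m around each address).
rng = np.random.default_rng(1)
density_200m = rng.gamma(2.0, 3.0, size=500)
density_800m = 0.7 * density_200m + rng.normal(0.0, 2.0, size=500)

rho = spearman(density_200m, density_800m)  # < 1: rankings shift with scale
```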

  1. An overview of the characterization of occupational exposure to nanoaerosols in workplaces

    NASA Astrophysics Data System (ADS)

    Castellano, Paola; Ferrante, Riccardo; Curini, Roberta; Canepari, Silvia

    2009-05-01

    Currently, there is a lack of standardized sampling and metric methods that can be applied to measure the level of exposure to nanosized aerosols. Therefore, any attempt to characterize exposure to nanoparticles (NP) in a workplace must involve a multifaceted approach combining different sampling and analytical techniques to measure all relevant characteristics of NP exposure. Furthermore, as NP aerosols are always complex mixtures of multiple origins, sampling and analytical methods need to be improved to selectively evaluate the apportionment from specific sources to the final nanomaterials. An open question worldwide is how to relate specific toxic effects of NP to one or more of several different parameters (such as particle size, mass, composition, surface area, number concentration, aggregation or agglomeration state, water solubility and surface chemistry). As the evaluation of occupational exposure to NP in workplaces requires dimensional and chemical characterization, the main problem is the choice of the sampling and dimensional separation techniques. Therefore, a convenient approach to allow a satisfactory risk assessment could be the concurrent use of different sampling and measuring techniques for particles with known toxicity in selected workplaces. Despite the lack of specific NP exposure limit values, exposure metrics appropriate to nanoaerosols are discussed in the Technical Report ISO/TR 27628:2007 with the aim of enabling occupational hygienists to characterize and monitor nanoaerosols in workplaces. Moreover, NIOSH has developed the document Approaches to Safe Nanotechnology (intended to be an information exchange with NIOSH) in order to address current and future research needs for understanding the potential risks that nanotechnology may pose to workers.

  2. Mapping patterns of soil properties and soil moisture using electromagnetic induction to investigate the impact of land use changes on soil processes

    NASA Astrophysics Data System (ADS)

    Robinet, Jérémy; von Hebel, Christian; van der Kruk, Jan; Govers, Gerard; Vanderborght, Jan

    2016-04-01

    As highlighted by many authors, classical and geophysical techniques for measuring soil moisture, such as destructive soil sampling, neutron probes or Time Domain Reflectometry (TDR), have some major drawbacks: among other things, they provide point-scale information and are often intrusive and time-consuming. ElectroMagnetic Induction (EMI) instruments are often cited as promising alternative hydrogeophysical methods, providing soil moisture measurements more efficiently at scales ranging from hillslope to catchment. The overall objective of our research project is to investigate whether a combination of geophysical techniques at various scales can be used to study the impact of land use change on temporal and spatial variations of soil moisture and soil properties. In our work, apparent electrical conductivity (ECa) patterns are obtained with an EM multiconfiguration system. Depth profiles of ECa were subsequently inferred through a calibration-inversion procedure based on TDR data. The obtained spatial patterns of these profiles were linked to soil profile and soil water content distributions. Two catchments with contrasting land use (agriculture vs. natural forest) were selected in a subtropical region in the south of Brazil. On selected slopes within the catchments, combined EMI and TDR measurements were carried out simultaneously, under different atmospheric and soil moisture conditions. Ground-truth data for soil properties were obtained through soil sampling and auger profiles. The comparison of these data provided information about the potential of the EMI technique to deliver qualitative and quantitative information about the variability of soil moisture and soil properties.

  3. Source counting in MEG neuroimaging

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Dell, John; Magee, Ralphy; Roberts, Timothy P. L.

    2009-02-01

    Magnetoencephalography (MEG) is a multi-channel, functional imaging technique. It measures the magnetic field produced by the primary electric currents inside the brain via a sensor array composed of a large number of superconducting quantum interference devices. The measurements are then used to estimate the locations, strengths, and orientations of these electric currents. This magnetic source imaging technique encompasses a great variety of signal processing and modeling techniques, including the inverse problem, MUltiple SIgnal Classification (MUSIC), Beamforming (BF), and Independent Component Analysis (ICA) methods. A key problem with the inverse problem, MUSIC and ICA methods is that the number of sources must be specified a priori. Although the BF method scans the source space on a point-by-point basis, the selection of peaks as sources is ultimately made by subjective thresholding; in practice, expert data analysts often select results based on physiological plausibility. This paper presents an eigenstructure approach to source number detection in MEG neuroimaging. By sorting the eigenvalues of the estimated covariance matrix of the acquired MEG data, the measured data space is partitioned into signal and noise subspaces. The partition is implemented using information-theoretic criteria. The order of the signal subspace gives an estimate of the number of sources. The approach does not rely on any model or hypothesis and is therefore an entirely data-led operation. It possesses a clear physical interpretation and an efficient computational procedure. The theoretical derivation of this method and results obtained using real MEG data are included to demonstrate their agreement and the promise of the proposed approach.
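
    The eigenvalue-partitioning step can be illustrated with the Wax-Kailath MDL criterion, one widely used information-theoretic criterion for this task (the abstract does not name the specific criterion used, so treat this as a representative sketch with simulated data):

```python
import numpy as np

def mdl_source_count(eigvals, n_snapshots):
    """Estimate the number of sources from the eigenvalues of the
    sample covariance matrix using the Wax-Kailath MDL criterion:
    the noise eigenvalues should be nearly equal, so the ratio of
    their geometric to arithmetic mean is close to one."""
    lam = np.sort(eigvals)[::-1]
    p, n = len(lam), n_snapshots
    mdl = np.empty(p)
    for k in range(p):
        tail = lam[k:]                       # presumed noise eigenvalues
        geo = np.exp(np.mean(np.log(tail)))
        mdl[k] = (-n * (p - k) * np.log(geo / np.mean(tail))
                  + 0.5 * k * (2 * p - k) * np.log(n))
    return int(np.argmin(mdl))

# Simulated data: 2 sources mixed into 8 channels plus sensor noise.
rng = np.random.default_rng(2)
n, p = 2000, 8
A = rng.normal(size=(p, 2))                  # mixing matrix
s = rng.normal(size=(2, n))                  # source signals
x = A @ s + 0.1 * rng.normal(size=(p, n))    # channel data
cov = x @ x.T / n
k_hat = mdl_source_count(np.linalg.eigvalsh(cov), n)
```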

  4. Silicon Micromachined Microlens Array for THz Antennas

    NASA Technical Reports Server (NTRS)

    Lee, Choonsup; Chattopadhyay, Goutam; Mehdi, IImran; Gill, John J.; Jung-Kubiak, Cecile D.; Llombart, Nuria

    2013-01-01

    A 5 × 5 silicon microlens array was developed using a silicon micromachining technique for a silicon-based THz antenna array. The silicon micromachining technique makes it possible to microfabricate many microlens arrays at one time, with good uniformity, on a single silicon wafer. This technique addresses one of the key issues in building a THz camera: integrating antennas in a detector array. The conventional approach of building single-pixel receivers and stacking them to form a multi-pixel receiver is not suited to THz because a single-pixel receiver already has difficulty fitting into mass, volume, and power budgets, especially in space applications. The proposed technique gives control over both the diameter and the curvature of a silicon microlens. First, the diameter of the microlens depends on how thick a photoresist layer can be coated and patterned. So far, a 6-mm-diameter photoresist microlens 400 µm in height has been successfully microfabricated, and based on the researchers' experience, a photoresist microlens array with diameters larger than 1 cm should be feasible. The curvature of the microlens can be controlled through the following process variables: 1. Amount of photoresist: this determines the curvature of the photoresist microlens. Since the photoresist lens is transferred onto the silicon substrate, it directly controls the curvature of the silicon microlens. 2. Etching selectivity between photoresist and silicon: the photoresist microlens is formed by thermal reflow. To transfer the exact photoresist curvature onto silicon, an etching selectivity of 1:1 between silicon and photoresist is needed; by varying the etching selectivity, the curvature of the silicon microlens can be controlled. The figure shows the microfabricated 5 × 5 silicon microlens array. The diameter of the microlens located in the center is about 2.5 mm. The measured 3-D profile of the microlens surface has a smooth curvature. The measured height of the silicon microlens is about 280 µm; the original height of the photoresist was 210 µm, and the change is due to the etching selectivity of 1.33 between photoresist and silicon. The measured peak-to-peak surface roughness of the silicon microlens is less than 0.5 µm, which is adequate at THz frequencies; for example, the surface roughness should be less than 7 µm in the 600 GHz range. An SEM (scanning electron microscope) image of the microlens confirms the smooth surface, and the beam pattern at 550 GHz shows good directivity.
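
    The height-transfer arithmetic reported above is worth making explicit: the silicon lens height is simply the photoresist lens height scaled by the silicon-to-photoresist etch-rate ratio. A one-line model using only the figures from the abstract:

```python
def transferred_height(resist_height_um, selectivity):
    """Height of the silicon lens after the reflowed photoresist
    profile is transferred by etching: h_Si = h_resist * selectivity,
    where selectivity is the silicon-to-photoresist etch-rate ratio."""
    return resist_height_um * selectivity

# Figures from the abstract: 210-micron resist lens, selectivity 1.33.
h_si = transferred_height(210.0, 1.33)   # ~279 microns, close to the
                                         # measured ~280 microns
```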

  5. Unbiased feature selection in learning random forests for high-dimensional data.

    PubMed

    Nguyen, Thanh-Tung; Huang, Joshua Zhexue; Nguyen, Thuy Thi

    2015-01-01

    Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting. This makes RFs have poor accuracy when working with high-dimensional data. Besides that, RFs have bias in the feature selection process where multivalued features are favored. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features in learning RFs for high-dimensional data. We first remove the uninformative features using p-value assessment, and the subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into two subsets. A feature weighting sampling technique is used to sample features from these two subsets for building trees. This approach enables one to generate more accurate trees, while allowing one to reduce dimensionality and the amount of data needed for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets including image datasets. The experimental results have shown that RFs with the proposed approach outperformed the existing random forests in increasing the accuracy and the AUC measures.
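
    The p-value screening stage can be sketched as follows. This is not the xRF implementation itself: it assumes a per-feature Welch t-test between the two classes with a normal tail approximation, and the synthetic data are illustrative.

```python
import math
import numpy as np

def welch_t_pvalues(X, y):
    """Two-sided p-value per feature from Welch's t statistic
    between classes 0 and 1, using a normal approximation."""
    a, b = X[y == 0], X[y == 1]
    t = (a.mean(0) - b.mean(0)) / np.sqrt(
        a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
    return np.array([math.erfc(abs(ti) / math.sqrt(2)) for ti in t])

def screen_features(X, y, alpha=0.05):
    """Indices of features that pass the p-value threshold."""
    return np.flatnonzero(welch_t_pvalues(X, y) < alpha)

# Synthetic data: 20 features, only the first two are informative.
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 20))
y = np.repeat([0, 1], 50)
X[y == 1, :2] += 2.0                 # class-1 mean shift on features 0, 1
selected = screen_features(X, y)
```

    The surviving indices would then feed the statistical feature-weighting and sampling stages described in the abstract.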

  6. A robust indicator based on singular value decomposition for flaw feature detection from noisy ultrasonic signals

    NASA Astrophysics Data System (ADS)

    Cui, Ximing; Wang, Zhe; Kang, Yihua; Pu, Haiming; Deng, Zhiyang

    2018-05-01

    Singular value decomposition (SVD) has been proven to be an effective de-noising tool for flaw echo signal feature detection in ultrasonic non-destructive evaluation (NDE). However, the uncertainty in the arbitrary manner of the selection of an effective singular value weakens the robustness of this technique. Improper selection of effective singular values will lead to bad performance of SVD de-noising. What is more, the computational complexity of SVD is too large for it to be applied in real-time applications. In this paper, to eliminate the uncertainty in SVD de-noising, a novel flaw indicator, named the maximum singular value indicator (MSI), based on short-time SVD (STSVD), is proposed for flaw feature detection from a measured signal in ultrasonic NDE. In this technique, the measured signal is first truncated into overlapping short-time data segments to put feature information of a transient flaw echo signal in local field, and then the MSI can be obtained from the SVD of each short-time data segment. Research shows that this indicator can clearly indicate the location of ultrasonic flaw signals, and the computational complexity of this STSVD-based indicator is significantly reduced with the algorithm proposed in this paper. Both simulation and experiments show that this technique is very efficient for real-time application in flaw detection from noisy data.
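
    The indicator can be sketched as follows. The paper's exact construction is not reproduced here; this sketch assumes each short-time segment is delay-embedded into a Hankel matrix before the SVD, and the test signal is synthetic.

```python
import numpy as np

def msi(signal, win=64, hop=8, embed=16):
    """Maximum singular value indicator: for each overlapping
    short-time segment, build a Hankel (delay-embedding) matrix and
    record its largest singular value. A coherent echo concentrates
    energy in the first singular value; broadband noise does not."""
    centers, out = [], []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        H = np.lib.stride_tricks.sliding_window_view(seg, embed)
        out.append(np.linalg.svd(H, compute_uv=False)[0])
        centers.append(start + win // 2)
    return np.array(centers), np.array(out)

# Synthetic A-scan: white noise with a windowed tone burst (the
# "flaw echo") inserted at samples 1000-1127.
rng = np.random.default_rng(4)
t = np.arange(2048)
x = 0.3 * rng.normal(size=t.size)
x[1000:1128] += np.sin(2 * np.pi * 0.05 * t[:128]) * np.hanning(128)

centers, ind = msi(x)
peak = int(centers[np.argmax(ind)])   # indicator peaks at the echo
```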

  7. Thermal stress effects in intermetallic matrix composites

    NASA Technical Reports Server (NTRS)

    Wright, P. K.; Sensmeier, M. D.; Kupperman, D. S.; Wadley, H. N. G.

    1993-01-01

    Intermetallic matrix composites (IMCs) develop residual stresses from the large thermal expansion mismatch (delta-alpha) between the fibers and matrix. This work was undertaken to: establish improved techniques to measure these thermal stresses in IMCs; determine residual stresses in a variety of IMC systems by experiments and modeling; and determine the effect of residual stresses on selected mechanical properties of an IMC. X-ray diffraction (XRD), neutron diffraction (ND), synchrotron XRD (SXRD), and ultrasonic (US) techniques for measuring thermal stresses in IMCs were examined, and ND was selected as the most promising technique. ND was demonstrated on a variety of IMC systems encompassing Ti- and Ni-base matrices; SiC, W, and Al2O3 fibers; and different fiber fractions (Vf). Experimental results on these systems agreed with predictions of a concentric cylinder model. In SiC/Ti-base systems, little yielding was found and stresses were controlled primarily by delta-alpha and Vf. In Ni-base matrix systems, the yield strength of the matrix and Vf controlled stress levels. The longitudinal residual stresses in SCS-6/Ti-24Al-11Nb composite were modified by thermomechanical processing. Increasing residual stress decreased ultimate tensile strength, in agreement with model predictions. Fiber pushout strength showed an unexpected inverse correlation with residual stress. In-plane shear yield strength showed no dependence on residual stress. Higher levels of residual tension led to higher fatigue crack growth rates, as suggested by matrix mean stress effects.

  8. Quality assessment of DInSAR deformation measurements in volcanic areas by comparing GPS and SBAS results

    NASA Astrophysics Data System (ADS)

    Bonforte, A.; Casu, F.; de Martino, P.; Guglielmino, F.; Lanari, R.; Manzo, M.; Obrizzo, F.; Puglisi, G.; Sansosti, E.; Tammaro, U.

    2009-04-01

    Differential Synthetic Aperture Radar Interferometry (DInSAR) is a methodology able to measure ground deformation rates and time series over relatively large areas. Several different approaches have been developed over the past few years: they all have in common the capability to measure deformations over a relatively wide area (say 100 km by 100 km) with a high density of measuring points. For these reasons, DInSAR represents a very useful tool for investigating geophysical phenomena, with particular reference to volcanic areas. As for any measuring technique, knowledge of the attainable accuracy is of fundamental importance. In the case of DInSAR technology, there are several error sources, such as orbital inaccuracies, phase unwrapping errors, atmospheric artifacts, and effects related to the reference point selection, making it very difficult to define a theoretical error model. A practical way to assess the accuracy is to compare DInSAR results with independent measurements, such as GPS or levelling. Here we present an in-depth comparison between the deformation measurements obtained by exploiting the DInSAR technique referred to as the Small BAseline Subset (SBAS) algorithm and by continuous GPS stations. The selected volcanic test sites are Etna, Vesuvio and Campi Flegrei, in Italy. From continuous GPS data, solutions are computed for the same days on which SAR data are acquired, for direct comparison. Moreover, three-dimensional GPS displacement vectors are projected along the radar line of sight of both ascending and descending acquisition orbits. GPS data are then compared with the coherent DInSAR pixels closest to the GPS station. Relevant statistics of the differences between the two measurements are computed and correlated to scene parameters that may affect DInSAR accuracy (altitude, terrain slope, etc.).
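
    The projection of a three-dimensional GPS displacement onto the radar line of sight reduces to a dot product with a unit look vector built from the incidence angle and look azimuth. The sketch below uses one common (east, north, up) convention; sign and angle conventions differ between processors, so treat it as illustrative rather than the authors' exact formulation.

```python
import numpy as np

def los_displacement(d_enu, incidence_deg, look_azimuth_deg):
    """Project an (east, north, up) displacement onto the radar line
    of sight. `look_azimuth_deg` is the azimuth (clockwise from
    north) of the ground-to-satellite horizontal direction; the
    result is positive for motion toward the satellite."""
    th = np.radians(incidence_deg)
    az = np.radians(look_azimuth_deg)
    u = np.array([np.sin(th) * np.sin(az),   # east component
                  np.sin(th) * np.cos(az),   # north component
                  np.cos(th)])               # up component
    return float(np.dot(d_enu, u))

# 10 mm of pure uplift seen at a 23-degree incidence angle:
d_los = los_displacement([0.0, 0.0, 10.0], 23.0, 100.0)
```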

  9. Optical tracking of nanoscale particles in microscale environments

    PubMed Central

    Mathai, P. P.; Liddle, J. A.; Stavis, S. M.

    2016-01-01

    The trajectories of nanoscale particles through microscale environments record useful information about both the particles and the environments. Optical microscopes provide efficient access to this information through measurements of light in the far field from nanoparticles. Such measurements necessarily involve trade-offs in tracking capabilities. This article presents a measurement framework, based on information theory, that facilitates a more systematic understanding of such trade-offs to rationally design tracking systems for diverse applications. This framework includes the degrees of freedom of optical microscopes, which determine the limitations of tracking measurements in theory. In the laboratory, tracking systems are assemblies of sources and sensors, optics and stages, and nanoparticle emitters. The combined characteristics of such systems determine the limitations of tracking measurements in practice. This article reviews this tracking hardware with a focus on the essential functions of nanoparticles as optical emitters and microenvironmental probes. Within these theoretical and practical limitations, experimentalists have implemented a variety of tracking systems with different capabilities. This article reviews a selection of apparatuses and techniques for tracking multiple and single particles by tuning illumination and detection, and by using feedback and confinement to improve the measurements. Prior information is also useful in many tracking systems and measurements, which apply across a broad spectrum of science and technology. In the context of the framework and review of apparatuses and techniques, this article reviews a selection of applications, with particle diffusion serving as a prelude to tracking measurements in biological, fluid, and material systems, fabrication and assembly processes, and engineered devices. In so doing, this review identifies trends and gaps in particle tracking that might influence future research. 
PMID:27213022

  10. CLOSED-LOOP STRIPPING ANALYSIS (CLSA) OF SYNTHETIC MUSK COMPOUNDS FROM FISH TISSUES WITH MEASUREMENT BY GAS CHROMATOGRAPHY-MASS SPECTROMETRY WITH SELECTED-ION MONITORING

    EPA Science Inventory

    Synthetic musk compounds have been found in surface water, fish tissues, and human breast milk. Current techniques for separating these compounds from fish tissues require tedious sample clean-up procedures. A simple method for the determination of these compounds in fish tissues ...

  11. A Multiple-range Self-balancing Thermocouple Potentiometer

    NASA Technical Reports Server (NTRS)

    Warshawsky, I; Estrin, M

    1951-01-01

    A multiple-range potentiometer circuit is described that provides automatic measurement of temperatures or temperature differences with any one of several thermocouple-material pairs. Techniques of automatic reference junction compensation, span adjustment, and zero suppression are described that permit rapid selection of range and wire material, without the necessity for restandardization, by setting of two external tap switches.

  12. Application of Difference-in-Difference Techniques to the Evaluation of Drought-Tainted Water Conservation Programs.

    ERIC Educational Resources Information Center

    Bamezai, Anil

    1995-01-01

    Some of the threats to internal validity that arise when evaluating the impact of water conservation programs during a drought are illustrated. These include differential response to the drought, self-selection bias, and measurement error. How to deal with these problems when high-quality disaggregate data are available is discussed. (SLD)

  13. Authentication of Electromagnetic Interference Removal in Johnson Noise Thermometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Britton Jr, Charles L.; Roberts, Michael

    This report summarizes the testing performed offsite at the TVA Kingston Fossil Plant (KFP). This location was selected as a valid offsite test facility because the environment is very similar to the expected industrial nuclear power plant environment. The report discusses the EMI discovered in the environment, the validity of the removal technique, and results from the measurements.

  14. Selection of reference genes for RT-qPCR analysis in the monarch butterfly, Danaus plexippus (L.), a migrating bio-indicator

    USDA-ARS?s Scientific Manuscript database

    Quantitative real-time PCR (qRT-PCR) is a reliable and reproducible technique for measuring and evaluating changes in gene expression. To facilitate gene expression studies and obtain more accurate qRT-PCR data, normalization relative to stable housekeeping genes is required. In this study, expres...

  15. Reviewing effectiveness of ankle assessment techniques for use in robot-assisted therapy.

    PubMed

    Zhang, Mingming; Davies, T Claire; Zhang, Yanxin; Xie, Shane

    2014-01-01

    This article provides a comprehensive review of studies that investigated ankle assessment techniques to better understand those that can be used in the real-time monitoring of rehabilitation progress for implementation in conjunction with robot-assisted therapy. Seventy-six publications published between January 1980 and August 2013 were selected based on eight databases. They were divided into two main categories (16 qualitative and 60 quantitative studies): 13 goniometer studies, 18 dynamometer studies, and 29 studies about innovative techniques. A total of 465 subjects participated in the 29 quantitative studies of innovative measurement techniques that may potentially be integrated in a real-time monitoring device, of which 19 studies included less than 10 participants. Results show that qualitative ankle assessment methods are not suitable for real-time monitoring in robot-assisted therapy, though they are reliable for certain patients, while the quantitative methods show great potential. The majority of quantitative techniques are reliable in measuring ankle kinematics and kinetics but are usually available only for use in the sagittal plane. Limited studies determine kinematics and kinetics in all three planes (sagittal, transverse, and frontal) where motions of the ankle joint and the subtalar joint actually occur.

  16. Feature selection for the classification of traced neurons.

    PubMed

    López-Cabrera, José D; Lorenzo-Ginori, Juan V

    2018-06-01

    The wide availability of computational tools to calculate the properties of traced neurons has produced many descriptors that allow the automated classification of neurons from these reconstructions. This makes it necessary to eliminate irrelevant features and to select the most appropriate among them, in order to improve the quality of the resulting classification. The dataset used contains a total of 318 traced neurons, classified by human experts into 192 GABAergic interneurons and 126 pyramidal cells. The features were extracted by means of the L-measure software, which is one of the most widely used computational tools in neuroinformatics for quantifying traced neurons. We review current feature selection techniques of the filter, wrapper, embedded and ensemble types. The stability of the feature selection methods was measured. For the ensemble methods, several aggregation methods based on different metrics were applied to combine the subsets obtained during the feature selection process. The subsets obtained by applying feature selection methods were evaluated using supervised classifiers, among which Random Forest, C4.5, SVM, Naïve Bayes, k-NN, Decision Table and the Logistic classifier were used as classification algorithms. Filter, embedded, wrapper and ensemble feature selection methods were compared, and the returned subsets were tested in classification tasks with different classification algorithms. The L-measure features EucDistanceSD, PathDistanceSD, Branch_pathlengthAve, Branch_pathlengthSD and EucDistanceAve were present in more than 60% of the selected subsets, which provides evidence of their importance in the classification of these neurons. Copyright © 2018 Elsevier B.V. All rights reserved.
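
    The abstract reports measuring the stability of the selection methods without naming the measure; one common choice, shown here purely as an illustration, is the average pairwise Jaccard index between the feature subsets selected on different runs or folds:

```python
import itertools

def jaccard_stability(subsets):
    """Average pairwise Jaccard index between selected feature
    subsets; 1.0 means every run selected exactly the same features."""
    pairs = list(itertools.combinations(subsets, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# Three hypothetical runs of a selector over L-measure features:
runs = [{"EucDistanceSD", "PathDistanceSD", "Branch_pathlengthAve"},
        {"EucDistanceSD", "PathDistanceSD", "Branch_pathlengthSD"},
        {"EucDistanceSD", "PathDistanceSD", "EucDistanceAve"}]
stability = jaccard_stability(runs)   # 2 shared of 4 total per pair
```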

  17. Color vision but not visual attention is altered in migraine.

    PubMed

    Shepherd, Alex J

    2006-04-01

    To examine visual search performance in migraine and headache-free control groups and to determine whether reports of selective color vision deficits in migraine occur preattentively. Visual search is a classic technique to measure certain components of visual attention. The technique can be manipulated to measure both preattentive (automatic) and attentive processes. Here, visual search for colored targets was employed to extend earlier reports that the detection or discrimination of colors selective for the short-wavelength sensitive cone photoreceptors in the retina (S or "blue" cones) is impaired in migraine. Visual search performance for small and large color differences was measured in 34 migraine and 34 control participants. Small and large color differences were included to assess attentive and preattentive processing, respectively. In separate conditions, colored stimuli were chosen that would be detected selectively by either the S-, or by the long- (L or "red") and middle (M or "green")-wavelength sensitive cone photoreceptors. The results showed no preattentive differences between the migraine and control groups. For active, or attentive, search, differences between the migraine and control groups occurred for colors detected by the S-cones only, there were no differences for colors detected by the L- and M-cones. The migraine group responded significantly more slowly than the control group for the S-cone colors. The pattern of results indicates that there are no overall differences in search performance between migraine and control groups. The differences found for the S-cone colors are attributed to impaired discrimination of these colors in migraine and not to differences in attention.

  18. Selectivity/Specificity Improvement Strategies in Surface-Enhanced Raman Spectroscopy Analysis

    PubMed Central

    Wang, Feng; Cao, Shiyu; Yan, Ruxia; Wang, Zewei; Wang, Dan; Yang, Haifeng

    2017-01-01

    Surface-enhanced Raman spectroscopy (SERS) is a powerful technique for the discrimination, identification, and potential quantification of certain compounds/organisms. However, its real application is challenging due to the multiple interference from the complicated detection matrix. Therefore, selective/specific detection is crucial for the real application of SERS technique. We summarize in this review five selective/specific detection techniques (chemical reaction, antibody, aptamer, molecularly imprinted polymers and microfluidics), which can be applied for the rapid and reliable selective/specific detection when coupled with SERS technique. PMID:29160798

  19. Coal thickness gauge using RRAS techniques, parts 2 and 3

    NASA Technical Reports Server (NTRS)

    King, J. D.; Rollwitz, W. L.

    1980-01-01

    Electron magnetic resonance was investigated as a sensing technique for use in measuring the thickness of the layer of coal overlying the rock substrate. The goal is development of a thickness gauge which will be usable for control of mining machinery to maintain the coal thickness within selected bounds. A sensor must be noncontacting, have a measurement range of 6 inches or more, and an accuracy of 1/2 inch or better. The sensor should be insensitive to variations in spacing between the sensor and the surface, the response speed should be adequate to permit use on continuous mining equipment, and the device should be rugged and otherwise suited for operation under conditions of high vibration, moisture, and dust. Finally, the sensor measurement must not be adversely affected by the natural effects occurring in coal such as impurities, voids, cracks, layering, high moisture level, and other conditions that are likely to be encountered.

  20. Infrared Thermal Testing Of Mechanical Assemblies At The Military Depot And Field Level: A Progress Report

    NASA Astrophysics Data System (ADS)

    Kaplan, Herbert

    1988-01-01

    Based on encouraging results on the Army's programs for infrared mass screening of printed circuit boards at the depot level, the US Army CECOM (Communication-Electronics Command) undertook a one-year investigation of the applicability of similar techniques to screening and diagnostics of mechanical assemblies. These included tanks, helicopters, transport vehicles and their major subassemblies (transmissions, engines, axles, etc.) at field and depot levels. Honeyhill Technical Company was tasked to classify candidate assemblies and perform preliminary measurements using Army-owned general-purpose thermal imaging equipment. The investigations yielded positive results, and it was decided to pursue a comprehensive measurements program using field-mobile equipment specifically procured for the program. This paper summarizes the results of the investigations, outlines the measurements techniques utilized, describes the classification and selection of candidate assemblies, and reports on progress toward the goals of the program.

  1. Quantitative broadband absorption and scattering spectroscopy in turbid media by combined frequency-domain and steady state methodologies

    DOEpatents

    Tromberg, Bruce J [Irvine, CA; Berger, Andrew J [Rochester, NY; Cerussi, Albert E [Lake Forest, CA; Bevilacqua, Frederic [Costa Mesa, CA; Jakubowski, Dorota [Irvine, CA

    2008-09-23

    A technique for measuring broadband near-infrared absorption spectra of turbid media that uses a combination of frequency-domain and steady-state reflectance methods. Most of the wavelength coverage is provided by a white-light steady-state measurement, whereas the frequency-domain data are acquired at a few selected wavelengths. Coefficients of absorption and reduced scattering derived from the frequency-domain data are used to calibrate the intensity of the steady-state measurements and to determine the reduced scattering coefficient at all wavelengths in the spectral window of interest. The absorption coefficient spectrum is determined by comparing the steady-state reflectance values with the predictions of diffusion theory, wavelength by wavelength. Absorption spectra of a turbid phantom and of human breast tissue in vivo, derived with the combined frequency-domain and steady-state technique, agree well with expected reference values.

  2. A self-tuning automatic voltage regulator designed for an industrial environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flynn, D.; Hogg, B.W.; Swidenbank, E.

    Examination of the performance of fixed parameter controllers has resulted in the development of self-tuning strategies for excitation control of turbogenerator systems. In conjunction with the advanced control algorithms, sophisticated measurement techniques have previously been adopted on micromachine systems to provide generator terminal quantities. In power stations, however, a minimalist hardware arrangement would be selected leading to relatively simple measurement techniques. The performance of a range of self-tuning schemes is investigated on an industrial test-bed, employing a typical industrial hardware measurement system. Individual controllers are implemented on a standard digital automatic voltage regulator, as installed in power stations. This employs a VME platform, and the self-tuning algorithms are introduced by linking to a transputer network. The AVR includes all normal features, such as field forcing, VAR limiting and overflux protection. Self-tuning controller performance is compared with that of a fixed gain digital AVR.

  3. Average Dielectric Property Analysis of Complex Breast Tissue with Microwave Transmission Measurements

    PubMed Central

    Garrett, John D.; Fear, Elise C.

    2015-01-01

    Prior information about the average dielectric properties of breast tissue can be implemented in microwave breast imaging techniques to improve the results. Rapidly providing this information relies on acquiring a limited number of measurements and processing these measurements with efficient algorithms. Previously, systems were developed to measure the transmission of microwave signals through breast tissue, and simplifications were applied to estimate the average properties. These methods provided reasonable estimates, but they were sensitive to multipath. In this paper, a new technique to analyze the average properties of breast tissues while addressing multipath is presented. Three steps are used to process transmission measurements. First, the effects of multipath were removed. In cases where multipath is present, multiple peaks were observed in the time domain. A Tukey window was used to time-gate a single peak and, therefore, select a single path through the breast. Second, the antenna response was deconvolved from the transmission coefficient to isolate the response from the tissue in the breast interior. The antenna response was determined through simulations. Finally, the complex permittivity was estimated using an iterative approach. This technique was validated using simulated and physical homogeneous breast models and tested with results taken from a recent patient study. PMID:25585106
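    The time-gating step described above can be sketched with a synthetic trace: a Tukey (tapered cosine) window is placed over the direct-path peak so that a later multipath peak is zeroed out. Peak positions, window width and the taper parameter below are invented for illustration, not values from the paper:

```python
import math

def tukey(n, alpha=0.5):
    """Tukey (tapered cosine) window of length n: flat in the middle,
    cosine tapers over a fraction alpha of the window at the edges."""
    edge = alpha * (n - 1) / 2
    w = []
    for i in range(n):
        if i < edge:
            w.append(0.5 * (1 + math.cos(math.pi * (i / edge - 1))))
        elif i > (n - 1) - edge:
            w.append(0.5 * (1 + math.cos(math.pi * (i - (n - 1) + edge) / edge)))
        else:
            w.append(1.0)
    return w

# Synthetic time-domain response: direct-path peak at sample 20,
# multipath peak at sample 60.
signal = [0.0] * 100
signal[20] = 1.0
signal[60] = 0.4
# Gate a 31-sample Tukey window over the direct path only.
start, width = 5, 31
gate = [0.0] * 100
for i, w in enumerate(tukey(width)):
    gate[start + i] = w
gated = [s * g for s, g in zip(signal, gate)]
```

    The gated trace keeps the direct-path peak at full amplitude while the multipath peak falls outside the window and is removed.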

  4. Space charge distributions in insulating polymers: A new non-contacting way of measurement.

    PubMed

    Marty-Dessus, D; Ziani, A C; Petre, A; Berquez, L

    2015-04-01

    A new technique for the determination of space charge profiles in insulating polymers is proposed. Based on the evolution of an existing thermal wave technique called the Focused Laser Intensity Modulation Method ((F)LIMM), it allows non-contact measurements to be made on thin films exhibiting an internal charge. An electrostatic model taking into account the proposed sample-cell geometry was first developed. It was shown, in particular, that it is theoretically possible to calculate the internal charge from experimental measurements while also evaluating the air layer appearing between the sample and the electrode when non-contact measurements are performed. These predictions were confirmed by an experimental implementation on two thin polymer samples (25 μm polyvinylidenefluoride and 50 μm polytetrafluoroethylene (PTFE)) used as tests. In these cases, the minimum air-layer thickness was determined with an accuracy of 3% and 20%, respectively, depending on the signal-to-noise ratio during the experimental procedure. To illustrate the capabilities of this technique, 2D and 3D cartographies of a negative space charge implanted by electron beam within the PTFE test sample were produced: as in conventional (F)LIMM, a multidimensional representation of a selectively implanted charge remains possible at a depth of a few microns, but using a non-contacting way of measurement.

  5. Standardised imaging technique for guided M-mode and Doppler echocardiography in the horse.

    PubMed

    Long, K J; Bonagura, J D; Darke, P G

    1992-05-01

    Eighteen echocardiographic images useful for diagnostic imaging, M-mode echocardiography, and Doppler echocardiography of the equine heart were standardised by relating the position of the axial beam to various intracardiac landmarks. The transducer orientation required for each image was recorded in 14 adult horses by describing the degree of sector rotation and the orientation of the axial beam relative to the thorax. Repeatable images could be obtained within narrow limits of angulation and rotation for 14 of the 18 standardised images evaluated. Twenty-seven National Hunt horses were subsequently examined using this standardised technique. Selected cardiac dimensions were measured from two-dimensional and guided M-mode studies. Satisfactory results were achieved in 26 of the 27 horses. There was no linear correlation between any of the measured cardiac values and bodyweight. There was no significant difference between measurements taken from the left and the right hemithorax. Six horses were imaged on three consecutive days to assess the repeatability of the measurements. No significant difference was found between measurements obtained on different days. This study demonstrates a method for standardised echocardiographic evaluation of the equine heart that is repeatable, valuable for teaching techniques of equine echocardiography, applicable for diagnostic imaging and quantification of cardiac size, and useful for the evaluation of blood-flow patterns by Doppler ultrasound.

  6. Estimated monthly percentile discharges at ungaged sites in the Upper Yellowstone River Basin in Montana

    USGS Publications Warehouse

    Parrett, Charles; Hull, J.A.

    1986-01-01

    Once-monthly streamflow measurements were used to estimate selected percentile discharges on flow-duration curves of monthly mean discharge for 40 ungaged stream sites in the upper Yellowstone River basin in Montana. The estimation technique was a modification of the concurrent-discharge method previously described and used by H.C. Riggs to estimate annual mean discharge. The modified technique is based on the relationship of various mean seasonal discharges to the required discharges on the flow-duration curves. The mean seasonal discharges are estimated from the monthly streamflow measurements, and the percentile discharges are calculated from regression equations. The regression equations, developed from streamflow records at nine gaging stations, indicated a significant log-linear relationship between mean seasonal discharge and various percentile discharges. The technique was tested at two discontinued streamflow-gaging stations; the differences between estimated monthly discharges and those determined from the discharge record ranged from -31 to +27 percent at one site and from -14 to +85 percent at the other. The estimates at one site were unbiased, and the estimates at the other site were consistently larger than the recorded values. Based on the test results, the probable average error of the technique was ±30 percent for the 21 sites measured during the first year of the program and ±50 percent for the 19 sites measured during the second year. (USGS)
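    The core of the modified concurrent-discharge technique, a log-linear regression between mean seasonal discharge and a percentile discharge, can be sketched as follows. The gaged-station pairs below are hypothetical values, not the report's data:

```python
import math

def fit_log_linear(mean_seasonal_q, percentile_q):
    """Fit log10(Qp) = a + b*log10(Qs) by ordinary least squares,
    as for a gaged-station calibration set."""
    xs = [math.log10(q) for q in mean_seasonal_q]
    ys = [math.log10(q) for q in percentile_q]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical pairs of (mean seasonal discharge, percentile discharge):
qs = [10.0, 100.0, 1000.0]
qp = [8.0, 80.0, 800.0]
a, b = fit_log_linear(qs, qp)
# Estimate the percentile discharge at an ungaged site with Qs = 50:
estimate = 10 ** (a + b * math.log10(50.0))
```

    With the regression fitted at gaged stations, a single season of monthly measurements at an ungaged site supplies the Qs needed for the estimate.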

  7. Measurement of radiation dose with BeO dosimeters using optically stimulated luminescence technique in radiotherapy applications.

    PubMed

    Şahin, Serdar; Güneş Tanır, A; Meriç, Niyazi; Aydınkarahaliloğlu, Ercan

    2015-09-01

    The radiation dose delivered to the target by different radiotherapy applications has been measured with beryllium oxide (BeO) dosimeters placed inside a Rando phantom. Three-Dimensional Conformal Radiotherapy (3DCRT), Intensity-Modulated Radiotherapy (IMRT) and Intensity-Modulated Arc Therapy (IMAT) were used as the radiotherapy applications. Individual treatment plans were made for the three radiotherapy applications of the Rando phantom. Section 4 of the phantom was selected as the target and 200 cGy doses were delivered. After the dosimeters placed on section 4 (target) and on sections 2 and 6 (non-target) were irradiated, the result was read with the OSL technique on the Risø TL/OSL system. This procedure was repeated three times for each radiotherapy application. The doses delivered to the target and the non-target sections as a result of the 3DCRT, IMRT and IMAT plans were analyzed. The doses received by the target were measured as 204.71 cGy, 204.76 cGy and 205.65 cGy, respectively. The dose values obtained from the treatment planning system (TPS) were compared to the dose values obtained using the OSL technique. It was concluded that the radiation dose can be measured with the OSL technique by using BeO dosimeters in medical practice. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Performance of the Bowen ratio systems on a 22 deg slope

    NASA Technical Reports Server (NTRS)

    Nie, D.; Flitcroft, I.; Kanemasu, E. T.

    1990-01-01

    The Bowen ratio energy balance technique was used to assess the energy fluxes on inclined surfaces during the First ISLSCP Field Experiment (FIFE). Since air flow over a sloping surface may differ from that over flat terrain, it is important to examine whether Bowen ratio measurements taken on sloping surfaces are valid. In this study, the suitability of using the Bowen ratio technique on sloping surfaces was tested by examining the assumptions that the technique requires for valid measurements. This was accomplished by studying the variation of Bowen ratio measurements along a selected slope at the FIFE site. In September 1988, four Bowen ratio systems were set up in a line along the 22 degree north-facing slope with northerly air flow (the wind went up the slope). In July 1989, six Bowen ratio systems were similarly installed with southerly air flow (the wind went down the slope). Results indicated that, at distances between 10 and 40 meters from the top of the slope, no temperature or vapor pressure gradient parallel to the slope was detected. Uniform Bowen ratio values were obtained on the slope, and thus the sensible or latent heat flux should be similar along the slope. This indicates that the assumptions for valid flux measurements are reasonably met on the slope. The Bowen ratio technique should give the best estimates of the energy fluxes on slopes similar to that in this study.
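    For context, the Bowen ratio technique partitions the available energy (net radiation minus soil heat flux) into sensible and latent heat using the ratio of the vertical temperature gradient to the vapour pressure gradient. A minimal sketch with illustrative gradient values (not FIFE data); gamma is the psychrometric constant in kPa/K:

```python
def bowen_ratio_fluxes(rn, g, dT, de, gamma=0.066):
    """Partition available energy (Rn - G, W/m^2) into sensible (H)
    and latent (LE) heat flux from the temperature difference dT (K)
    and vapour pressure difference de (kPa) between two heights."""
    beta = gamma * dT / de          # Bowen ratio
    le = (rn - g) / (1.0 + beta)    # latent heat flux, W/m^2
    h = beta * le                   # sensible heat flux, W/m^2
    return beta, h, le

# Illustrative midday values: Rn = 500, G = 50 W/m^2,
# dT = 1.0 K and de = 0.2 kPa between the two sensor heights.
beta, h, le = bowen_ratio_fluxes(rn=500.0, g=50.0, dT=1.0, de=0.2)
```

    By construction H + LE recovers the available energy, which is why identical gradients along the slope imply similar fluxes along it.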

  9. Measurement of fracture toughness by nanoindentation methods: Recent advances and future challenges

    DOE PAGES

    Sebastiani, Marco; Johanns, K. E.; Herbert, Erik G.; ...

    2015-04-30

    In this study, we describe recent advances and developments for the measurement of fracture toughness at small scales by the use of nanoindentation-based methods including techniques based on micro-cantilever beam bending and micro-pillar splitting. A critical comparison of the techniques is made by testing a selected group of bulk and thin film materials. For pillar splitting, cohesive zone finite element simulations are used to validate a simple relationship between the critical load at failure, the pillar radius, and the fracture toughness for a range of material properties and coating/substrate combinations. The minimum pillar diameter required for nucleation and growth of a crack during indentation is also estimated. An analysis of pillar splitting for a film on a dissimilar substrate material shows that the critical load for splitting is relatively insensitive to the substrate compliance for a large range of material properties. Experimental results from a selected group of materials show good agreement between single cantilever and pillar splitting methods, while a discrepancy of ~25% is found between the pillar splitting technique and double-cantilever testing. It is concluded that both the micro-cantilever and pillar splitting techniques are valuable methods for micro-scale assessment of fracture toughness of brittle ceramics, provided the underlying assumptions can be validated. Although the pillar splitting method has some advantages because of the simplicity of sample preparation and testing, it is not applicable to most metals because their higher toughness prevents splitting, and in this case, micro-cantilever bend testing is preferred.

  10. Initial development of an NIR strain measurement technique in brittle geo-materials

    NASA Astrophysics Data System (ADS)

    Butcher, Emily; Gibson, Andrew; Benson, Philip

    2016-04-01

    Visible-Near Infrared Spectroscopy (VIS-NIR) is a technique developed for the non-contact measurement of compositional characteristics of surfaces. The technique is rapid, sensitive to changes in surface topology and has found applications ranging from planetary geology, soil science and pharmacy to materials testing. The technique has also been used in a limited fashion to measure strain changes in rocks and minerals (Ord and Hobbs 1986). However, there have been few quantitative studies linking such changes in material strains (and other rock physics parameters) to the resulting VIS-NIR signature. This research seeks to determine whether improvements in VIS-NIR equipment mean that it is now a viable method to measure strains in rock remotely (without contact). We report new experiments carried out using 40 mm Brazilian tensile discs of Carrara Marble and Darley Dale Sandstone using an Instron 600LX in the University of Portsmouth Rock Mechanics Laboratory. The tensile test was selected for this experiment because the sample shape and sensor arrangements allow access to a flat surface area throughout the test, allowing surface measurements to be taken continuously whilst the discs are strained to failure. An ASD Labspec 5000 with a 25 mm foreoptic was used to collect reflectance spectra in the range 350-2500 nm during each tensile test. Results from the Carrara Marble experiments show that reflectance at 2050 nm negatively correlates (by polynomial regression) with axial strain between 0.05-0.5%, with r2 of 0.99. Results from the Darley Dale Sandstone data show that reflectance at 1970 nm positively correlates with axial deformation between 0.05-0.5%, with r2 of 0.98. Initial analyses suggest that VIS-NIR possesses an output that scales in a quantifiable manner with rock strain, and shows promise as a technique for strain measurement.
    The method has particular application in allowing laboratory measurements to "ground truth" data taken from drones and other remote sensing platforms that could employ this method. However, further work is underway to understand the exact nature of the correlations - for instance, whether reflectance is related to deformation of the mineral lattice, macro-surface or micro-surface.
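    The reported correlations can be reproduced in miniature with an ordinary least-squares fit and its coefficient of determination. The reflectance/strain pairs below are invented, chosen only to mimic the negative trend reported for the marble, and a straight line stands in for the polynomial regression used in the study:

```python
def linear_fit_r2(x, y):
    """Least-squares line y = a + b*x and coefficient of determination r^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical reflectance readings (at ~2050 nm) vs axial strain (%):
strain = [0.05, 0.15, 0.25, 0.35, 0.5]
reflect = [0.62, 0.60, 0.58, 0.56, 0.53]
a, b, r2 = linear_fit_r2(strain, reflect)
```

    A negative slope with r² close to 1 is the signature the marble experiments report; the sandstone shows the same relationship with the opposite sign.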

  11. Endodontic filling removal procedure: an ex vivo comparative study between two rotary techniques.

    PubMed

    Vale, Mônica Sampaio do; Moreno, Melinna dos Santos; Silva, Priscila Macêdo França da; Botelho, Thereza Cristina Farias

    2013-01-01

    In this study, we compared the ex vivo removal capacity of two endodontic rotary techniques and determined whether there was a significant quantitative difference in residual material when comparing root thirds. Forty extracted molars were used. The palatal roots were selected, and the canals were prepared using a step-back technique and filled using a lateral condensation technique with gutta-percha points and Endofill sealer. After two weeks of storage in a 0.9% saline solution at 37 ºC in an oven, the specimens were divided into 2 groups of 20, with group 1 samples subjected to Gates-Glidden drills and group 2 samples subjected to the ProTaper retreatment System. Hedstroem files and eucalyptol solvent were used in both groups to complete the removal procedure. Then, the root thirds were radiographed and the images were submitted to the NIH ImageJ program to measure the residual filling material in mm. Each root third was related to the total area of the root canals. The data were analyzed using Student's t test. There was a statistically significant difference between the two techniques as more filling material was removed by technique 2 (ProTaper) than technique 1 (Gates-Glidden drills, p < 0.05). The apical third had a greater amount of residual filling material than the cervical and middle thirds, and the difference was statistically significant (p < 0.05). None of the selected techniques removed all filling material, and the material was most difficult to remove from the apical third. The ProTaper files removed more material than the Gates-Glidden drills.

  12. Application of Cavity Enhanced Absorption Spectroscopy to the Detection of Nitric Oxide, Carbonyl Sulphide, and Ethane—Breath Biomarkers of Serious Diseases

    PubMed Central

    Wojtas, Jacek

    2015-01-01

    The paper presents one of the laser absorption spectroscopy techniques as an effective tool for sensitive analysis of trace gas species in human breath. Characterization of nitric oxide, carbonyl sulphide and ethane, and the selection of their absorption lines are described. Experiments with some biomarkers showed that detection of pathogenic changes at the molecular level is possible using this technique. Thanks to cavity enhanced spectroscopy application, detection limits at the ppb-level and short measurements time (<3 s) were achieved. Absorption lines of reference samples of the selected volatile biomarkers were probed using a distributed feedback quantum cascade laser and a tunable laser system consisting of an optical parametric oscillator and difference frequency generator. Setup using the first source provided a detection limit of 30 ppb for nitric oxide and 250 ppb for carbonyl sulphide. During experiments employing a second laser, detection limits of 0.9 ppb and 0.3 ppb were obtained for carbonyl sulphide and ethane, respectively. The conducted experiments show that this type of diagnosis would significantly increase chances for effective therapy of some diseases. Additionally, it offers non-invasive and real time measurements, high sensitivity and selectivity as well as minimizing discomfort for patients. For that reason, such sensors can be used in screening for early detection of serious diseases. PMID:26091398

  13. Application of Cavity Enhanced Absorption Spectroscopy to the Detection of Nitric Oxide, Carbonyl Sulphide, and Ethane--Breath Biomarkers of Serious Diseases.

    PubMed

    Wojtas, Jacek

    2015-06-17

    The paper presents one of the laser absorption spectroscopy techniques as an effective tool for sensitive analysis of trace gas species in human breath. Characterization of nitric oxide, carbonyl sulphide and ethane, and the selection of their absorption lines are described. Experiments with some biomarkers showed that detection of pathogenic changes at the molecular level is possible using this technique. Thanks to cavity enhanced spectroscopy application, detection limits at the ppb-level and short measurements time (<3 s) were achieved. Absorption lines of reference samples of the selected volatile biomarkers were probed using a distributed feedback quantum cascade laser and a tunable laser system consisting of an optical parametric oscillator and difference frequency generator. Setup using the first source provided a detection limit of 30 ppb for nitric oxide and 250 ppb for carbonyl sulphide. During experiments employing a second laser, detection limits of 0.9 ppb and 0.3 ppb were obtained for carbonyl sulphide and ethane, respectively. The conducted experiments show that this type of diagnosis would significantly increase chances for effective therapy of some diseases. Additionally, it offers non-invasive and real time measurements, high sensitivity and selectivity as well as minimizing discomfort for patients. For that reason, such sensors can be used in screening for early detection of serious diseases.

  14. Ion Trapping with Fast-Response Ion-Selective Microelectrodes Enhances Detection of Extracellular Ion Channel Gradients

    PubMed Central

    Messerli, Mark A.; Collis, Leon P.; Smith, Peter J.S.

    2009-01-01

    Previously, functional mapping of channels has been achieved by measuring the passage of net charge and of specific ions with electrophysiological and intracellular fluorescence imaging techniques. However, functional mapping of ion channels using extracellular ion-selective microelectrodes (ISMs) has distinct advantages over the former methods. We have developed this method through measurement of extracellular K+ gradients caused by efflux through Ca2+-activated K+ channels expressed in Chinese hamster ovary cells. We report that electrodes constructed with short columns of a mechanically stable K+-selective liquid membrane respond quickly and measure changes in local [K+] consistent with a diffusion model. When used in close proximity to the plasma membrane (<4 μm), the ISMs pose a barrier to simple diffusion, creating an ion trap. The ion trap amplifies the local change in [K+] without dramatically changing the rise or fall time of the [K+] profile. Measurement of extracellular K+ gradients from activated rSlo channels shows that rapid events, 10–55 ms, can be characterized. This method provides a noninvasive means for functional mapping of channel location and density as well as for characterizing the properties of ion channels in the plasma membrane. PMID:19217875
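    The diffusion model referred to above can be sketched under a common simplification (an assumption here, not the paper's stated model): a steady point source on a plane membrane with hemispherical diffusion into the bath, C(r) = J / (2·π·D·r). The channel flux and electrode-tip distances below are illustrative values only:

```python
import math

AVOGADRO = 6.022e23

def delta_c(ions_per_s, D, r):
    """Steady-state concentration elevation (mol/m^3) at distance r (m)
    from a point source releasing ions_per_s ions into a half-space
    with diffusion coefficient D (m^2/s)."""
    j_mol = ions_per_s / AVOGADRO
    return j_mol / (2.0 * math.pi * D * r)

D_K = 1.96e-9              # K+ diffusion coefficient in water, m^2/s
flux = 1e7                 # ~10 million ions/s through one open channel
near = delta_c(flux, D_K, 2e-6)    # electrode tip 2 um from the pore
far = delta_c(flux, D_K, 10e-6)    # electrode tip 10 um away
```

    The 1/r fall-off is what makes tip placement within a few microns of the membrane (and the resulting ion trap) so important for detectability.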

  15. Localization of Non-Linearly Modeled Autonomous Mobile Robots Using Out-of-Sequence Measurements

    PubMed Central

    Besada-Portas, Eva; Lopez-Orozco, Jose A.; Lanillos, Pablo; de la Cruz, Jesus M.

    2012-01-01

    This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of the measurements, delayed and OOS, provided by multiple sensors. In addition, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost. PMID:22736962

  16. Localization of non-linearly modeled autonomous mobile robots using out-of-sequence measurements.

    PubMed

    Besada-Portas, Eva; Lopez-Orozco, Jose A; Lanillos, Pablo; de la Cruz, Jesus M

    2012-01-01

    This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of the measurements, delayed and OOS, provided by multiple sensors. In addition, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost.
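    The naive baseline against which OOS algorithms are judged, rolling the filter back and re-running all updates in time order whenever a delayed measurement arrives, can be sketched in one dimension. The random-walk state model and parameters below are invented for illustration; the algorithms surveyed in these papers achieve the same result far more cheaply:

```python
class ReprocessingFilter:
    """Minimal 1D Kalman-style filter that handles an out-of-sequence
    measurement by re-running all accepted updates in timestamp order
    (the simplest, most expensive OOS strategy)."""

    def __init__(self, x0, p0, q, r):
        self.x0, self.p0, self.q, self.r = x0, p0, q, r
        self.log = []  # accepted (timestamp, measurement) pairs

    def _run(self):
        x, p = self.x0, self.p0
        for _, z in sorted(self.log):
            p += self.q                  # predict (random-walk model)
            k = p / (p + self.r)         # Kalman gain
            x += k * (z - x)             # measurement update
            p *= (1 - k)
        return x, p

    def add(self, t, z):
        self.log.append((t, z))
        return self._run()

f = ReprocessingFilter(x0=0.0, p0=1.0, q=0.01, r=0.1)
f.add(1, 1.0)
f.add(3, 1.2)
x_after_oos, _ = f.add(2, 1.1)   # delayed measurement for t=2 arrives last

# Same data delivered strictly in time order, for comparison:
g = ReprocessingFilter(x0=0.0, p0=1.0, q=0.01, r=0.1)
for t, z in [(1, 1.0), (2, 1.1), (3, 1.2)]:
    x_in_order, _ = g.add(t, z)
```

    Full reprocessing recovers exactly the in-order estimate, which is the correctness target; the surveyed algorithms trade some of that exactness or generality for bounded memory and computation.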

  17. Electro-optic and holographic measurement techniques for the atmospheric sciences. [considering spacecraft simulation applications

    NASA Technical Reports Server (NTRS)

    Moore, W. W., Jr.; Lemons, J. F.; Kurtz, R. L.; Liu, H.-K.

    1977-01-01

    A comprehensive examination is made of recent advanced research directions in the applications of electro-optical and holographic instrumentation and methods to problems in the atmospheric sciences. In addition, an overview is given of the in-house research program for environmental and atmospheric measurements, with emphasis on particulate systems. Particular attention is given to the instrument methods and applications work in the areas of laser scattering spectrometers and pulsed holography sizing systems. Selected engineering test data on space simulation chamber programs are discussed.

  18. Preliminary Work for Identifying and Tracking Combustion Reaction Pathways by Coherent Microwave Mapping of Photoelectrons

    DTIC Science & Technology

    2016-06-24

    wall Radar technique has been built and preliminary results of pyrolysis of iso-butane have been obtained. Qualitative measurements of ethylene in...The (2+1) REMPI ionization of ethylene (C2H4, 11B3u(π,3p) Rydberg manifold) was selectively induced at 310–325 nm. The ethylene was detectable at...quantitative measurements of ethylene as one of the pyrolysis products by using coherent microwave Rayleigh scattering (Radar) from Resonant Enhanced Multi

  19. Coliform species recovered from untreated surface water and drinking water by the membrane filter, standard, and modified most-probable-number techniques.

    PubMed Central

    Evans, T M; LeChevallier, M W; Waarvick, C E; Seidler, R J

    1981-01-01

    The species of total coliform bacteria isolated from drinking water and untreated surface water by the membrane filter (MF), the standard most-probable-number (S-MPN), and modified most-probable-number (M-MPN) techniques were compared. Each coliform detection technique selected for a different profile of coliform species from both types of water samples. The MF technique indicated that Citrobacter freundii was the most common coliform species in water samples. However, the fermentation tube techniques displayed selectivity towards the isolation of Escherichia coli and Klebsiella. The M-MPN technique selected for more C. freundii and Enterobacter spp. from untreated surface water samples and for more Enterobacter and Klebsiella spp. from drinking water samples than did the S-MPN technique. The lack of agreement between the number of coliforms detected in a water sample by the S-MPN, M-MPN, and MF techniques was a result of the selection for different coliform species by the various techniques. PMID:7013706

  20. Modulation and synchronization technique for MF-TDMA system

    NASA Technical Reports Server (NTRS)

    Faris, Faris; Inukai, Thomas; Sayegh, Soheil

    1994-01-01

    This report addresses modulation and synchronization techniques for a multi-frequency time division multiple access (MF-TDMA) system with onboard baseband processing. The types of synchronization techniques analyzed are asynchronous (conventional) TDMA, preambleless asynchronous TDMA, bit synchronous timing with a preamble, and preambleless bit synchronous timing. Among these alternatives, preambleless bit synchronous timing simplifies onboard multicarrier demultiplexer/demodulator designs (about 2:1 reduction in mass and power), requires smaller onboard buffers (10:1 to approximately 3:1 reduction in size), and provides better frame efficiency as well as lower onboard processing delay. Analysis and computer simulation illustrate that this technique can support a bit rate of up to 10 Mbit/s (or higher) with proper selection of design parameters. High bit rate transmission may require Doppler compensation and multiple phase error measurements. The recommended modulation technique for bit synchronous timing is coherent QPSK with differential encoding for the uplink and coherent QPSK for the downlink.
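The frame-efficiency gain from dropping per-burst preambles can be seen with a back-of-the-envelope calculation; the numbers below are illustrative, not the report's design values:

```python
def frame_efficiency(payload_bits, preamble_bits, guard_bits, bursts):
    """Fraction of a TDMA frame carrying traffic rather than per-burst
    preamble and guard-time overhead."""
    overhead = bursts * (preamble_bits + guard_bits)
    return payload_bits / (payload_bits + overhead)
```

Setting `preamble_bits` to zero shows why the preambleless schemes recover capacity: every bit of removed preamble goes back to traffic.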

  1. Research on Hartmann test for progressive addition lenses

    NASA Astrophysics Data System (ADS)

    Qin, Lin-ling; Yu, Jing-chi

    2009-05-01

    Several measurement techniques and associated instruments for progressive addition lenses have been developed in recent years, including single-point measurement, moiré deflectometry, and Ronchi test techniques. A Hartmann test for progressive addition lenses is proposed in this article. The measurement principle of the Hartmann test for ophthalmic lenses and the power compensation of off-axis rays are introduced, and the experimental setup used to test lenses is described. In the experiment, a spatial filter is used to select a clean Gaussian beam, and a collimating lens with focal distance f = 300 mm produces the collimated beam. A Hartmann plate with a square array of holes separated by 2 mm is used. The selection of the laser and CCD camera is critical to the accuracy of the experiment and the image-processing algorithm. Spot patterns are recorded by the CCD during the experimental tests, and the power distribution map of a lens can be obtained by image processing. The results indicate that the Hartmann test for progressive addition lenses is convenient and feasible, and its structure is simple.
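The power map follows from the Hartmann spot displacements: a collimated ray through a hole at height x is deflected by a thin lens of power P so that its spot, recorded at distance L behind the lens, moves by dx = -P·x·L. A minimal one-dimensional sketch under that thin-lens assumption (variable names are illustrative):

```python
import numpy as np

def hartmann_power(x_holes, x_spots, L):
    """Estimate lens power (diopters) from Hartmann spot displacements.
    A straight-line fit of displacement dx against hole height x has
    slope -P*L, with L the plate-to-detector distance in metres."""
    dx = np.asarray(x_spots) - np.asarray(x_holes)
    slope, _ = np.polyfit(x_holes, dx, 1)
    return -slope / L
```

In a progressive lens the fit would be done locally over small neighbourhoods of holes, giving the spatially varying power distribution.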

  2. Development of Millimeter-Wave Velocimetry and Acoustic Time-of-Flight Tomography for Measurements in Densely Loaded Gas-Solid Riser Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fort, James A.; Pfund, David M.; Sheen, David M.

    2007-04-01

    The MFDRC was formed in 1998 to advance the state of the art in simulating multiphase turbulent flows by developing advanced computational models for gas-solid flows that are experimentally validated over a wide range of industrially relevant conditions. The goal was to transfer the resulting validated models to interested US commercial CFD software vendors, who would then propagate the models, as part of new code versions, to their customers in the US chemical industry. Since the lack of detailed data sets at industrially relevant conditions is the major roadblock to developing and validating multiphase turbulence models, a significant component of the work involved flow measurements on an industrial-scale riser contributed by Westinghouse, which was subsequently installed at SNL. Model comparisons were performed against these datasets by LANL, and a parallel Office of Industrial Technology (OIT) project within the consortium made similar comparisons between riser measurements and models at NETL. Measured flow quantities of interest included volume fraction, velocity, and velocity-fluctuation profiles for both gas and solid phases at various locations in the riser. Some additional techniques were required for these measurements beyond what was then available. PNNL's role on the project was to work with the SNL experimental team to develop and test two new measurement techniques, acoustic tomography and millimeter-wave velocimetry. Acoustic tomography is a promising technique for gas-solid flow measurements in risers, and PNNL has substantial related experience in this area. PNNL is also active in developing millimeter-wave imaging techniques, which offer an additional approach to the desired measurements. PNNL supported the advanced-diagnostics development part of this project by evaluating these techniques, adapting the selected technology to bulk gas-solid flows, and implementing it for testing in the SNL riser testbed.

  3. Multitemporal spectroscopy for crop stress detection using band selection methods

    NASA Astrophysics Data System (ADS)

    Mewes, Thorsten; Franke, Jonas; Menz, Gunter

    2008-08-01

    A fast and precise sensor-based identification of pathogen infestations in wheat stands is essential for the implementation of site-specific fungicide applications. Several studies have shown the possibilities and limitations of detecting plant stress using spectral sensor data. Hyperspectral data provide the opportunity to collect spectral reflectance in contiguous bands over a broad range of the electromagnetic spectrum, so individual phenomena such as light absorption by leaf pigments can be examined in detail. Precise knowledge of stress-dependent shifts in certain spectral wavelengths provides great advantages in detecting fungal infections. This study focuses on band selection techniques for hyperspectral data to identify relevant and redundant information in spectra for the detection of plant stress caused by pathogens. In a laboratory experiment, five 1 m² boxes of wheat were measured repeatedly over time with an ASD FieldSpec® 3 FR spectroradiometer. Two stands were inoculated with Blumeria graminis - the pathogen causing powdery mildew - and one stand was used to simulate the effect of water deficiency. Two stands were kept healthy as controls. Daily measurements of spectral reflectance were taken over a 14-day period, with three ASD Pro Lamps illuminating the plots with constant light. By applying band selection techniques, the three different wheat vitality states could be accurately differentiated at certain stages. Hyperspectral data can thus provide precise information about pathogen infestations, and reducing the spectral dimension of sensor data by means of band selection procedures is an appropriate way to speed up data supply for precision agriculture.
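Band selection of this general kind can be sketched as a greedy search that keeps the bands maximizing a class-separability score. The Fisher-ratio criterion and forward search below are one common choice, not necessarily the exact method of the study:

```python
import numpy as np

def forward_band_selection(X, y, n_bands):
    """Greedy forward selection: at each step add the spectral band that
    most improves a Fisher-style between/within class-scatter ratio.
    X: (samples, bands) reflectance matrix; y: class labels."""
    def score(bands):
        Xs = X[:, bands]
        overall = Xs.mean(axis=0)
        between, within = 0.0, 0.0
        for c in np.unique(y):
            Xc = Xs[y == c]
            between += len(Xc) * np.sum((Xc.mean(axis=0) - overall) ** 2)
            within += np.sum((Xc - Xc.mean(axis=0)) ** 2)
        return between / (within + 1e-12)

    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_bands):
        best = max(remaining, key=lambda b: score(selected + [b]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

On synthetic spectra where only two bands carry class information, the informative bands are picked first and the noise bands are left behind.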

  4. Colour-Difference Measurement Method for Evaluation of Quality of Electrolessly Deposited Copper on Polymer after Laser-Induced Selective Activation

    PubMed Central

    Gedvilas, Mindaugas; Ratautas, Karolis; Kacar, Elif; Stankevičienė, Ina; Jagminienė, Aldona; Norkus, Eugenijus; Li Pira, Nello; Račiukaitis, Gediminas

    2016-01-01

    In this work, a novel colour-difference measurement method for the quality evaluation of copper deposited on a polymer is proposed. Laser-induced selective activation (LISA) was performed on the surface of polycarbonate/acrylonitrile butadiene styrene (PC/ABS) polymer using nanosecond laser irradiation, and the laser-activated PC/ABS polymer was then copper plated using the electroless copper plating (ECP) procedure. The sheet resistance, measured with a four-point probe technique, was found to decrease as a power law of the colour difference of the sample images after the LISA and ECP procedures. The percolation theory of electrical conductivity in insulator-conductor mixtures has been adopted to explain the experimental results. The proposed method was used to determine an optimal set of laser processing parameters for the best plating conditions. PMID:26960432
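The reported power-law relation between sheet resistance and colour difference can be fitted by linear least squares in log-log space. A sketch with synthetic numbers; the exponent and prefactor are illustrative, not the paper's fitted values:

```python
import numpy as np

def fit_power_law(delta_e, r_sheet):
    """Fit R_sheet = a * delta_e**b by least squares on
    log(R_sheet) = log(a) + b*log(delta_e)."""
    b, log_a = np.polyfit(np.log(delta_e), np.log(r_sheet), 1)
    return np.exp(log_a), b
```

A strongly negative exponent b then quantifies how quickly the plating quality (lower sheet resistance) improves with increasing colour difference.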

  5. X-ray energy selected imaging with Medipix II

    NASA Astrophysics Data System (ADS)

    Ludwig, J.; Zwerger, A.; Benz, K.-W.; Fiederle, M.; Braml, H.; Fauler, A.; Konrath, J.-P.

    2004-09-01

    Two different X-ray tube accelerating voltages (60 and 70 kV) are used for the diagnosis of front teeth and molars. Different energy ranges are necessary as a function of tooth thickness to obtain similar imaging contrast. This drives up the cost of the X-ray tube and allows for just two optimized settings. Energy-range selection on the detection side for the penetrating X-rays would overcome these setbacks. The single-photon-counting chip MEDIPIX2 (http://www.cern.ch/medipix) exhibits exactly this feature. First simulations and measurements have been carried out using a dental X-ray source. As a demonstrator, a real tooth with different cavities and filling materials has been used. Simulations generally showed larger improvements than measurements in SNR and contrast: simulations predicted a beneficial factor of 4 in SNR and 25% in contrast, whereas measurements showed a factor of 2.5 and up to 10%, respectively.
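The SNR and contrast figures quoted above are typically computed from regions of interest in the image. A minimal sketch of such metrics (the exact definitions used in the paper are not stated; these are common choices):

```python
import numpy as np

def snr(roi):
    """Mean signal over noise standard deviation in a region of interest."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def contrast(roi_a, roi_b):
    """Michelson-style contrast between two regions, e.g. healthy
    enamel versus a cavity or filling."""
    a, b = np.mean(roi_a), np.mean(roi_b)
    return abs(a - b) / (a + b)
```

Energy-window selection improves these metrics by rejecting photons in energy ranges that carry little tooth-thickness information.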

  6. Impulse oscillometry in the evaluation of diseases of the airways in children

    PubMed Central

    Komarow, Hirsh D.; Myles, Ian A.; Uzzaman, Ashraf; Metcalfe, Dean D.

    2012-01-01

    Objective To provide an overview of impulse oscillometry and its application to the evaluation of children with diseases of the airways. Data Sources Medline and PubMed search, limited to English language and human disease, with keywords forced oscillation, impulse oscillometry, and asthma. Study Selections The opinions of the authors were used to select studies for inclusion in this review. Results Impulse oscillometry is a noninvasive and rapid technique requiring only passive cooperation by the patient. Pressure oscillations are applied at the mouth to measure pulmonary resistance and reactance. It is employed by health care professionals to help diagnose pediatric pulmonary diseases such as asthma and cystic fibrosis; assess therapeutic responses; and measure airway resistance during provocation testing. Conclusions Impulse oscillometry provides a rapid, noninvasive measure of airway impedance. It may be easily employed in the diagnosis and management of diseases of the airways in children. PMID:21354020
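The resistance and reactance that oscillometry reports are the real and imaginary parts of the respiratory impedance Z(f) = P(f)/Q(f) at each stimulus frequency, computed from simultaneous pressure and flow recordings. A minimal sketch, assuming clean periodic signals and a frequency that falls on an exact DFT bin (real devices average over many impulses and apply windowing):

```python
import numpy as np

def impedance_at(freq, pressure, flow, fs):
    """Respiratory impedance Z(f) = P(f)/Q(f) via the DFT.
    Returns (resistance, reactance) at the bin nearest `freq` Hz."""
    n = len(pressure)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - freq)))
    z = np.fft.rfft(pressure)[k] / np.fft.rfft(flow)[k]
    return z.real, z.imag
```

With a purely resistive load the reactance is zero; an in-quadrature pressure component shows up entirely in the reactance.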

  7. JNDs of interaural time delay (ITD) of selected frequency bands in speech and music signals

    NASA Astrophysics Data System (ADS)

    Aliphas, Avner; Colburn, H. Steven; Ghitza, Oded

    2002-05-01

    JNDs of interaural time delay (ITD) of selected frequency bands in the presence of other frequency bands have been reported for noiseband stimuli [Zurek (1985); Trahiotis and Bernstein (1990)]. Similar measurements will be reported for speech and music signals. When stimuli are synthesized with bandpass/band-stop operations, performance with complex stimuli is similar to that with noisebands (JNDs in tens or hundreds of microseconds); however, the resulting waveforms, when viewed through a model of the auditory periphery, show distortions (irregularities in phase and level) at the boundaries of the target band of frequencies. An alternate synthesis method based upon group-delay filtering operations does not show these distortions and is being used for the current measurements. Preliminary measurements indicate that when music stimuli are created using the new techniques, JNDs of ITDs increase significantly compared to previous studies, with values on the order of milliseconds.

  8. Universality and predictability in molecular quantitative genetics.

    PubMed

    Nourmohammad, Armita; Held, Torsten; Lässig, Michael

    2013-12-01

    Molecular traits, such as gene expression levels or protein binding affinities, are increasingly accessible to quantitative measurement by modern high-throughput techniques. Such traits measure molecular functions and, from an evolutionary point of view, are important as targets of natural selection. We review recent developments in evolutionary theory and experiments that are expected to become building blocks of a quantitative genetics of molecular traits. We focus on universal evolutionary characteristics: these are largely independent of a trait's genetic basis, which is often at least partially unknown. We show that universal measurements can be used to infer selection on a quantitative trait, which determines its evolutionary mode of conservation or adaptation. Furthermore, universality is closely linked to predictability of trait evolution across lineages. We argue that universal trait statistics extends over a range of cellular scales and opens new avenues of quantitative evolutionary systems biology. Copyright © 2013. Published by Elsevier Ltd.

  9. Interpretation of tropospheric ozone variability in data with different vertical and temporal resolution

    NASA Astrophysics Data System (ADS)

    Petropavlovskikh, I. V.; Disterhoft, P.; Johnson, B. J.; Rieder, H. E.; Manney, G. L.; Daffer, W.

    2012-12-01

    This work attributes tropospheric ozone variability derived from ground-based Dobson and Brewer Umkehr measurements and from ozonesonde data to local sources and transport. It assesses the capabilities and limitations of both types of measurements, which are often used to analyze long- and short-term variability in tropospheric ozone time series, and addresses the natural and instrument-related contributions to the variability found in both Umkehr and sonde data. Validation of Umkehr methods is often done by intercomparison against independent ozone measuring techniques such as ozone sounding. We will use ozonesonde data, both as original vertical profiles and smoothed with averaging kernels (AK), to assess interannual ozone variability over Boulder, CO, and will discuss possible reasons for differences between the ozone measuring techniques and their effects on the derived ozone trends. In addition to standard evaluation techniques, we utilize an STL-decomposition method to address temporal variability and trends in the Boulder Umkehr data. Further, we apply a statistical modeling approach to the ozone data set to attribute ozone variability to individual driving forces associated with natural and anthropogenic causes. To this aim we follow earlier work applying a backward selection method (a stepwise elimination procedure over a set of 44 explanatory variables) to determine the explanatory variables that contribute most significantly to the observed variability. We will also present results associated with the completeness (sampling rate) of the existing data sets, and will use MERRA (Modern-Era Retrospective analysis for Research and Applications) re-analysis results selected for the Boulder location as a transfer function in understanding the effects that temporal sampling and vertical resolution have on trend and ozone variability analysis. Analyzing intra-annual variability in ozone measurements over Boulder, CO, in relation to the upper tropospheric subtropical and polar jets, we will address stratospheric and tropospheric intrusions in the midlatitude tropospheric ozone field.
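A backward selection of the kind described, stepwise elimination from a large pool of explanatory variables, can be sketched with ordinary least squares: repeatedly drop the regressor with the weakest t-statistic until every remaining one is clearly significant. This is an illustration of the general procedure, not the authors' exact implementation:

```python
import numpy as np

def backward_eliminate(X, y, t_min=2.0):
    """Backward selection for OLS: repeatedly drop the explanatory
    variable with the smallest |t|-statistic until all exceed t_min.
    Returns the indices of the retained columns of X."""
    cols = list(range(X.shape[1]))
    while cols:
        Xs = np.column_stack([np.ones(len(y)), X[:, cols]])
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        resid = y - Xs @ beta
        sigma2 = resid @ resid / (len(y) - Xs.shape[1])
        cov = sigma2 * np.linalg.inv(Xs.T @ Xs)
        t = np.abs(beta[1:]) / np.sqrt(np.diag(cov)[1:])
        worst = int(np.argmin(t))
        if t[worst] >= t_min:
            break
        cols.pop(worst)
    return cols
```

Starting from 44 candidate drivers, such a procedure retains only the variables whose contribution to the observed ozone variability is statistically robust.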

  10. Clinical evaluation of new automatic coronary-specific best cardiac phase selection algorithm for single-beat coronary CT angiography.

    PubMed

    Wang, Hui; Xu, Lei; Fan, Zhanming; Liang, Junfu; Yan, Zixu; Sun, Zhonghua

    2017-01-01

    The aim of this study was to evaluate the workflow efficiency of a new automatic coronary-specific reconstruction technique (Smart Phase, GE Healthcare; SP) for selection of the best cardiac phase with the least coronary motion, compared with expert manual selection (MS) of the best phase in patients with high heart rates. A total of 46 patients with heart rates above 75 bpm who underwent single-beat coronary computed tomography angiography (CCTA) were enrolled in this study. CCTA of all subjects was performed on a 256-detector row CT scanner (Revolution CT, GE Healthcare, Waukesha, Wisconsin, US). With the SP technique, the acquired phase range was automatically searched in 2% phase intervals during the reconstruction process to determine the optimal phase for coronary assessment, while for routine expert MS, reconstructions were performed at 5% intervals and a best phase was manually determined. The reconstruction and review times were recorded to measure the workflow efficiency of each method. Two reviewers subjectively assessed image quality for each coronary artery in the MS and SP reconstruction volumes using a 4-point grading scale. The average heart rate of the enrolled patients was 91.1 ± 19.0 bpm. A total of 204 vessels were assessed. The subjective image quality using SP was comparable to that of MS, 1.45 ± 0.85 vs 1.43 ± 0.81, respectively (p = 0.88). The average time was 246 seconds for manual best-phase selection and 98 seconds for SP selection, an average saving of 148 seconds (60%) with the SP algorithm. The coronary-specific automatic best-phase selection technique (Smart Phase) improves clinical workflow in patients with high heart rates and provides image quality comparable with manual best-phase selection. Reconstruction of single-beat CCTA exams with SP can benefit users with less experience in CCTA image interpretation.
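Automatic best-phase selection of this general kind scores each candidate phase reconstruction by a motion surrogate and keeps the minimum. A toy sketch; the neighbour-difference metric below is illustrative, the vendor's actual criterion is not published:

```python
import numpy as np

def best_cardiac_phase(volumes, phases):
    """Pick the phase whose reconstructed volume differs least from its
    temporal neighbours -- a simple least-motion surrogate. `volumes` is a
    list of arrays reconstructed at the percentages listed in `phases`."""
    scores = []
    for i in range(len(volumes)):
        neigh = [j for j in (i - 1, i + 1) if 0 <= j < len(volumes)]
        scores.append(np.mean([np.abs(volumes[i] - volumes[j]).mean()
                               for j in neigh]))
    return phases[int(np.argmin(scores))]
```

Searching at 2% rather than 5% intervals simply evaluates this score on a finer grid of candidate phases.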

  11. An Adaptive Kalman Filter using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real-world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate them. Most of these methods, such as maximum likelihood, subspace, and observer/Kalman filter identification, require extensive offline processing and are not suitable for real-time processing. One technique that is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence of filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
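The residual-tuning idea can be sketched for a scalar filter: the innovation covariance should equal HPH' + R, so a persistent mismatch between the empirical residual covariance and that prediction yields a correction to R. A minimal sketch with a random-walk model and a sliding-window estimate; this is illustrative, not the WIRE implementation:

```python
import numpy as np

def adaptive_kalman_1d(zs, q, r0, window=100):
    """Scalar random-walk Kalman filter (F = H = 1) that re-estimates the
    measurement noise variance R from its own innovations: since
    E[nu^2] = P_prior + R, set R = mean(nu^2 over window) - P_prior."""
    x, p, r = 0.0, 1.0, r0
    residuals, xs = [], []
    for z in zs:
        p = p + q                       # time update
        nu = z - x                      # innovation
        residuals.append(nu)
        if len(residuals) >= window:
            c_hat = np.mean(np.square(residuals[-window:]))
            r = max(c_hat - p, 1e-6)    # residual-based correction to R
        k = p / (p + r)                 # measurement update
        x = x + k * nu
        p = (1.0 - k) * p
        xs.append(x)
    return np.array(xs), r
```

Starting from a badly underestimated R, the correction runs in parallel with the filter and pulls R toward the true measurement noise variance.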

  12. Novel thermal efficiency-based model for determination of thermal conductivity of membrane distillation membranes

    DOE PAGES

    Vanneste, Johan; Bush, John A.; Hickenbottom, Kerri L.; ...

    2017-11-21

    Development and selection of membranes for membrane distillation (MD) could be accelerated if all performance-determining characteristics of the membrane could be obtained during MD operation, without the need to resort to specialized or cumbersome porosity or thermal conductivity measurement techniques. By redefining the thermal efficiency, the Schofield method could be adapted to describe the flux without prior knowledge of membrane porosity, thickness, or thermal conductivity. A total of 17 commercially available membranes were analyzed in terms of flux and thermal efficiency to assess their suitability for application in MD. The thermal-efficiency-based model described the flux with an average %RMSE of 4.5%, which was in the same range as the standard deviation of the measured flux. The redefinition of the thermal efficiency also enabled MD to be used as a novel thermal conductivity measurement device for thin porous hydrophobic films that cannot be measured with the conventional laser flash diffusivity technique.
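The %RMSE quoted above is the root-mean-square model-measurement error normalized by the mean measured flux. A one-liner with illustrative data:

```python
import numpy as np

def pct_rmse(measured, modeled):
    """Percent root-mean-square error of modelled vs measured flux,
    normalized by the mean measured value."""
    m, p = np.asarray(measured, float), np.asarray(modeled, float)
    return 100.0 * np.sqrt(np.mean((p - m) ** 2)) / m.mean()
```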

  13. Understanding the determinants of problem-solving behavior in a complex environment

    NASA Technical Reports Server (NTRS)

    Casner, Stephen A.

    1994-01-01

    It is often argued that problem-solving behavior in a complex environment is determined as much by the features of the environment as by the goals of the problem solver. This article explores a technique to determine the extent to which measured features of a complex environment influence problem-solving behavior observed within that environment. In this study, the technique is used to determine how the complex flight deck and air traffic control environment influences the strategies used by airline pilots when controlling the flight path of a modern jetliner. Data collected aboard 16 commercial flights are used to measure selected features of the task environment. A record of the pilots' problem-solving behavior is analyzed to determine to what extent behavior is adapted to the environmental features that were measured. The results suggest that the measured features of the environment account for as much as half of the variability in the pilots' problem-solving behavior and provide estimates of the probable effects of each environmental feature.
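The "half of the variability" figure corresponds to a coefficient of determination R² of about 0.5 when behaviour is regressed on the measured environmental features. A sketch with synthetic data (illustrative, not the study's regression):

```python
import numpy as np

def variance_explained(X, y):
    """Fraction of variance in behaviour y explained by measured
    environment features X: the OLS coefficient of determination R^2."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)
```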

  15. Dual-band frequency selective surface with large band separation and stable performance

    NASA Astrophysics Data System (ADS)

    Zhou, Hang; Qu, Shao-Bo; Peng, Wei-Dong; Lin, Bao-Qin; Wang, Jia-Fu; Ma, Hua; Zhang, Jie-Qiu; Bai, Peng; Wang, Xu-Hua; Xu, Zhuo

    2012-05-01

    A new technique for designing a dual-band frequency selective surface with large band separation is presented. The technique is based on a delicately designed topology of L- and Ku-band microwave filters. The two band-pass responses are generated by a capacitively loaded square-loop frequency selective surface and an aperture-coupled frequency selective surface, respectively, and a Faraday cage located between the two structures eliminates undesired couplings. Based on this technique, a dual-band frequency selective surface with large band separation is designed, possessing high selectivity and stable performance under various incident angles and polarizations.

  16. [Macroprolactinemia identification in patients with hyperprolactinemia].

    PubMed

    Sandoval, Carolina; González, Baldomero; Cheng, Sonia; Esquenazi, Yoshua; Mercado, Moisés

    2007-08-01

    Macroprolactinemia is defined as hyperprolactinemia with predominance of the big-big prolactin isoform. Its frequency has not been clearly established because of technical difficulties in identifying it. The method used to detect it is gel-filtration chromatography, an expensive and complicated procedure that cannot be used routinely. The objectives were to validate the polyethylene glycol (PEG) precipitation technique, to identify the presence of macroprolactinemia, and to correlate it with the clinical characteristics of a group of patients with elevated serum PRL levels from different causes. Fourteen non-preselected patients with high serum PRL levels were studied. Prolactin levels were determined with commercial immunometric chemiluminescence assays. All assays were run in duplicate, and serum from healthy subjects (without hyperprolactinemia) was used as a control. The technique consists of mixing 250 µL of serum with an equal volume of polyethylene glycol, centrifuging at 3000 rpm for 30 minutes at 4 °C, and measuring the prolactin level in the supernatant. In patients 7 to 14, macroprolactinemia was ruled out and true hyperprolactinemia was confirmed, sometimes slight and nontumoral, sometimes moderate, and sometimes due to prolactin-producing pituitary macroadenomas. The polyethylene glycol precipitation technique is reliable for detecting macroprolactinemia.
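The readout of the PEG method is the post-precipitation recovery of prolactin in the supernatant: PEG precipitates the big-big isoform, so low recovery indicates macroprolactinemia. A recovery below roughly 40% is a widely used criterion in the literature; the cutoff and function names here are illustrative, not taken from this paper:

```python
def prl_recovery(prl_total, prl_supernatant):
    """Percent prolactin recovered in the supernatant after PEG
    precipitation (monomeric fraction)."""
    return 100.0 * prl_supernatant / prl_total

def classify(prl_total, prl_supernatant, cutoff=40.0):
    """Recovery below the cutoff suggests macroprolactinemia; the 40%
    threshold is a commonly used convention, not this paper's value."""
    if prl_recovery(prl_total, prl_supernatant) < cutoff:
        return "macroprolactinemia"
    return "true hyperprolactinemia"
```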

  17. An Information-Based Machine Learning Approach to Elasticity Imaging

    PubMed Central

    Hoerig, Cameron; Ghaboussi, Jamshid; Insana, Michael F.

    2016-01-01

    An information-based technique is described for applications in mechanical-property imaging of soft biological media under quasi-static loads. We adapted the Autoprogressive method that was originally developed for civil engineering applications for this purpose. The Autoprogressive method is a computational technique that combines knowledge of object shape and a sparse distribution of force and displacement measurements with finite-element analyses and artificial neural networks to estimate a complete set of stress and strain vectors. Elasticity imaging parameters are then computed from estimated stresses and strains. We introduce the technique using ultrasonic pulse-echo measurements in simple gelatin imaging phantoms having linear-elastic properties so that conventional finite-element modeling can be used to validate results. The Autoprogressive algorithm does not require any assumptions about the material properties and can, in principle, be used to image media with arbitrary properties. We show that by selecting a few well-chosen force-displacement measurements that are appropriately applied during training and establish convergence, we can estimate all nontrivial stress and strain vectors throughout an object and accurately estimate an elastic modulus at high spatial resolution. This new method of modeling the mechanical properties of tissue-like materials introduces a unique method of solving the inverse problem and is the first technique for imaging stress without assuming the underlying constitutive model. PMID:27858175

  18. Assessing the service quality of Iran military hospitals: Joint Commission International standards and Analytic Hierarchy Process (AHP) technique

    PubMed Central

    Bahadori, Mohammadkarim; Ravangard, Ramin; Yaghoubi, Maryam; Alimohammadzadeh, Khalil

    2014-01-01

    Background: Military hospitals are responsible for preserving, restoring, and improving the health not only of the armed forces but also of other people. In line with the military organizations' strategy of being a leader and pioneer in all areas, providing quality health services is one of the main goals of military health care organizations. This study aimed to evaluate the service quality of selected military hospitals in Iran based on the Joint Commission International (JCI) standards, and to compare and rank these hospitals using the analytic hierarchy process (AHP) technique, in 2013. Materials and Methods: This was a cross-sectional, descriptive study of five military hospitals, selected using the purposive sampling method, in 2013. The required data were collected using checklists of accreditation standards and the nominal group technique. The AHP technique was used for prioritization, and Expert Choice 11.0 was used to analyze the collected data. Results: Among the JCI standards, access to care and continuity of care (weight = 0.122), quality improvement and patient safety (weight = 0.121), and leadership and management (weight = 0.117) had the greatest importance, respectively. In the overall ranking, BGT (weight = 0.369), IHM (0.238), SAU (0.202), IHK (0.125), and SAB (0.066) ranked first to fifth, respectively. Conclusion: AHP is an appropriate technique for measuring the overall performance of hospitals and their quality of services. It is a holistic approach that takes all hospital processes into consideration. The results of the present study can be used to improve hospital performance by identifying areas in need of quality improvement and selecting strategies to improve service quality. PMID:25250364
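The AHP weighting step reduces to computing the principal eigenvector of each pairwise comparison matrix and checking its consistency ratio, which is what Expert Choice does internally. A minimal sketch of Saaty's method; the example matrix in the test is illustrative, not from the study:

```python
import numpy as np

def ahp_priorities(A, iters=100):
    """Priority weights (principal eigenvector, via power iteration) and
    consistency ratio for a pairwise comparison matrix A (Saaty's AHP).
    Random-index table covers n <= 5 here."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    w = np.ones(n) / n
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    lam = (A @ w / w).mean()            # principal eigenvalue estimate
    ci = (lam - n) / (n - 1)            # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]
    cr = ci / ri if ri else 0.0
    return w, cr
```

A consistency ratio below 0.1 is the conventional threshold for accepting the judgments in the comparison matrix.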

  19. Current and evolving echocardiographic techniques for the quantitative evaluation of cardiac mechanics: ASE/EAE consensus statement on methodology and indications endorsed by the Japanese Society of Echocardiography.

    PubMed

    Mor-Avi, Victor; Lang, Roberto M; Badano, Luigi P; Belohlavek, Marek; Cardim, Nuno Miguel; Derumeaux, Genevieve; Galderisi, Maurizio; Marwick, Thomas; Nagueh, Sherif F; Sengupta, Partho P; Sicari, Rosa; Smiseth, Otto A; Smulevitz, Beverly; Takeuchi, Masaaki; Thomas, James D; Vannan, Mani; Voigt, Jens-Uwe; Zamorano, Jose Luis

    2011-03-01

    Echocardiographic imaging is ideally suited for the evaluation of cardiac mechanics because of its intrinsically dynamic nature. Because, for decades, echocardiography has been the only imaging modality that allows dynamic imaging of the heart, it is only natural that new, increasingly automated techniques for sophisticated analysis of cardiac mechanics have been driven by researchers and manufacturers of ultrasound imaging equipment. Several such techniques have emerged over the past decades to address the issue of reader's experience and inter-measurement variability in interpretation. Some were widely embraced by echocardiographers around the world and became part of the clinical routine, whereas others remained limited to research and exploration of new clinical applications. Two such techniques have dominated the research arena of echocardiography: (1) Doppler-based tissue velocity measurements, frequently referred to as tissue Doppler or myocardial Doppler, and (2) speckle tracking on the basis of displacement measurements. Both types of measurements lend themselves to the derivation of multiple parameters of myocardial function. The goal of this document is to focus on the currently available techniques that allow quantitative assessment of myocardial function via image-based analysis of local myocardial dynamics, including Doppler tissue imaging and speckle-tracking echocardiography, as well as integrated backscatter analysis. This document describes the current and potential clinical applications of these techniques and their strengths and weaknesses, briefly surveys a selection of the relevant published literature while highlighting normal and abnormal findings in the context of different cardiovascular pathologies, and summarizes the unresolved issues, future research priorities, and recommended indications for clinical use.

  1. Integration and Improvement of Geophysical Root Biomass Measurements for Determining Carbon Credits

    NASA Astrophysics Data System (ADS)

    Boitet, J. I.

    2013-12-01

    Carbon trading schemes fundamentally rely on accurate subsurface carbon quantification in order for governing bodies to grant carbon credits inclusive of root biomass (What is Carbon Credit, 2013). Root biomass makes up a large fraction of subsurface carbon and is difficult, labor intensive, and costly to measure. This paper synthesizes the latest geophysical root measurement techniques into site-dependent recommendations for technique combinations and modifications that maximize large-scale root biomass measurement accuracy and efficiency. "Accuracy" is maximized when measured root biomass is closest to actual root biomass. "Efficiency" is maximized when the time, labor, and cost of measurement are minimized. Several combinations have emerged which satisfy both criteria under different site conditions. Use of ground penetrating radar (GPR) and/or electrical resistivity tomography (ERT) allows large tracts of land to be surveyed under appropriate conditions. Among other characteristics, GPR performs best at detecting coarse roots in dry soil. ERT performs best at detecting roots in moist soils, but is especially limited by electrode configuration (Mancuso, 2012). Integration of these two technologies into a baseline protocol based on site-specific characteristics, especially soil moisture and plant species heterogeneity, should theoretically increase the efficiency and accuracy of root biomass measurements drastically. Modifications of current measurement protocols using these existing techniques should also theoretically lead to drastic improvements in both accuracy and efficiency. These modifications, such as efficient 3D imaging by adding an identical electrode array perpendicular to the first array used in the Pulled Array Continuous Electrical Profiling (PACEP) technique for ERT, should allow for more widespread application of these techniques for understanding root biomass. 
Where whole-site measurement is not feasible due to financial, equipment, or physical limitations, measurements from randomly selected plots must be assumed representative of the entire system and scaled up. This scaling introduces error roughly inversely proportional to the number and size of plots measured. References: Mancuso, S. (2012). Measuring roots: An updated approach. Springer. What is carbon credit. (2013). Retrieved 7/20, 2013, from http://carbontradexchange.com/knowledge/what-is-carbon-credit
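
The plot-to-site scaling described above can be sketched as follows; the numbers and function are hypothetical and serve only to show how the sampling error of the scaled-up estimate shrinks as more plots are measured:

```python
import math

def scale_up_biomass(plot_biomass_kg, plot_area_m2, site_area_m2):
    """Scale root biomass measured on n randomly selected plots to a whole site.

    Returns (site_estimate_kg, standard_error_kg). The standard error of the
    scaled-up estimate falls roughly as 1/sqrt(n), mirroring the sampling
    error discussed above. All values are illustrative.
    """
    n = len(plot_biomass_kg)
    density = [b / plot_area_m2 for b in plot_biomass_kg]        # kg/m^2 per plot
    mean_d = sum(density) / n
    var_d = sum((d - mean_d) ** 2 for d in density) / (n - 1)    # sample variance
    se_mean = math.sqrt(var_d / n)
    return mean_d * site_area_m2, se_mean * site_area_m2

# four hypothetical 25 m^2 plots scaled to a 1-hectare site
est, se = scale_up_biomass([4.1, 3.8, 4.6, 4.0], plot_area_m2=25, site_area_m2=10_000)
```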

  2. Iris as a reflector for differential absorption low-coherence interferometry to measure glucose level in the anterior chamber

    PubMed Central

    Zhou, Yong; Zeng, Nan; Ji, Yanhong; Li, Yao; Dai, Xiangsong; Li, Peng; Duan, Lian; Ma, Hui; He, Yonghong

    2011-01-01

    We present a method of glucose concentration detection in the anterior chamber with a differential absorption optical low-coherence interferometry (LCI) technique. Back-reflected light from the iris, passing through the anterior chamber twice, was selectively obtained with the LCI technique. Two light sources, one centered within (1625 nm) and the other centered outside (1310 nm) of a glucose absorption band, were used for differential absorption measurement. In the eye model and pig eye experiments, we obtained glucose level resolutions of 26.8 mg/dL and 69.6 mg/dL, respectively. This method has potential application for noninvasive detection of glucose concentration in aqueous humor, which is related to the glucose concentration in blood. PMID:21280906
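
The retrieval reduces to Beer-Lambert arithmetic on the two wavelength channels. A sketch with hypothetical calibration constants (the absorptivity and path length below are illustrative assumptions, not values from the paper):

```python
import math

# hypothetical calibration constants, for illustration only
EPS_DIFF = 0.002   # differential absorptivity of glucose, dL/(mg*cm) (assumed)
PATH_CM = 0.6      # double-pass optical path through the anterior chamber (assumed)

def glucose_from_signals(i_ref, i_abs):
    """Glucose concentration (mg/dL) from back-reflected LCI intensities at the
    reference (1310 nm) and absorbing (1625 nm) wavelengths."""
    delta_a = math.log(i_ref / i_abs)      # differential absorbance
    return delta_a / (EPS_DIFF * PATH_CM)  # Beer-Lambert inversion

# forward-simulate a 100 mg/dL sample, then recover it from the two signals
i_abs = math.exp(-EPS_DIFF * PATH_CM * 100.0)
recovered = glucose_from_signals(1.0, i_abs)   # ~100 mg/dL
```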

  3. Targeting the untargeted in molecular phenomics with structurally-selective ion mobility-mass spectrometry.

    PubMed

    May, Jody Christopher; Gant-Branum, Randi Lee; McLean, John Allen

    2016-06-01

    Systems-wide molecular phenomics is rapidly expanding through technological advances in instrumentation and bioinformatics. Strategies such as structural mass spectrometry, which combines size and shape measurements with molecular weight, serve to characterize the sum of molecular expression in biological contexts, where broad-scale measurements are made that are interpreted through big data statistical techniques to reveal underlying patterns corresponding to phenotype. The data density, data dimensionality, data projection, and data interrogation are all critical aspects of these approaches to turn data into salient information. Untargeted molecular phenomics is already having a dramatic impact in discovery science, from drug discovery to synthetic biology. It is evident that these emerging techniques will integrate closely in broad efforts aimed at precision medicine. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. A method of treating the non-grey error in total emittance measurements

    NASA Technical Reports Server (NTRS)

    Heaney, J. B.; Henninger, J. H.

    1971-01-01

    In techniques for the rapid determination of total emittance, the sample is generally exposed to surroundings that are at a different temperature than the sample's surface. When the infrared spectral reflectance of the surface is spectrally selective, these techniques introduce an error into the total emittance values. Surfaces of aluminum overcoated with oxides of various thicknesses fall into this class. Because they are often used as temperature control coatings on satellites, their emittances must be accurately known. The magnitude of the error was calculated for Alzak and silicon oxide-coated aluminum and was shown to be dependent on the thickness of the oxide coating. The results demonstrate that, because the magnitude of the error is thickness-dependent, it is generally impossible or impractical to eliminate it by calibrating the measuring device.
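
The error can be reasoned about by noting that total emittance is the Planck-weighted average of spectral emissivity, so for a spectrally selective surface it depends on the temperature used in the weighting. A numerical sketch with a hypothetical step-shaped emissivity curve standing in for an oxide-coated aluminum surface:

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann

def planck(lam, temp):
    """Blackbody spectral radiance at wavelength lam (m) and temperature temp (K)."""
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * temp))

def total_emittance(eps_spectral, lam, temp):
    """Planck-weighted total emittance of a spectral emissivity curve
    (simple Riemann-sum ratio on a uniform wavelength grid)."""
    weight = planck(lam, temp)
    return float(np.sum(eps_spectral * weight) / np.sum(weight))

lam = np.linspace(2e-6, 50e-6, 4000)       # 2-50 um grid
# hypothetical selective surface: reflective (low emissivity) below 8 um,
# emissive above -- a stand-in for an oxide-on-aluminum coating
eps = np.where(lam < 8e-6, 0.1, 0.8)

e_cold = total_emittance(eps, lam, 300.0)  # weighting at the sample temperature
e_hot = total_emittance(eps, lam, 600.0)   # weighting at hotter surroundings
# e_cold > e_hot: a grey-body assumption calibrated at one temperature
# misreads the surface at the other, which is the non-grey error in question
```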

  5. Liver diffusion-weighted MR imaging: reproducibility comparison of ADC measurements obtained with multiple breath-hold, free-breathing, respiratory-triggered, and navigator-triggered techniques.

    PubMed

    Chen, Xin; Qin, Lei; Pan, Dan; Huang, Yanqi; Yan, Lifen; Wang, Guangyi; Liu, Yubao; Liang, Changhong; Liu, Zaiyi

    2014-04-01

    To prospectively compare the reproducibility of normal liver apparent diffusion coefficient (ADC) measurements by using different respiratory motion compensation techniques with multiple breath-hold (MBH), free-breathing (FB), respiratory-triggered (RT), and navigator-triggered (NT) diffusion-weighted (DW) imaging, and to compare the ADCs at different liver anatomic locations. The study protocol was approved by the institutional review board, and written informed consent was obtained from each participant. Thirty-nine volunteers underwent liver DW imaging twice. Imaging was performed with a 1.5-T MR imager with MBH, FB, RT, and NT techniques (b = 0, 100, and 500 s/mm^2). Three representative sections (superior, central, and inferior) were selected on the left and right liver lobes, respectively. On each selected section, three regions of interest were drawn, and ADCs were measured. Analysis of variance was used to assess ADCs among the four techniques and various anatomic locations. Reproducibility of ADCs was assessed with the Bland-Altman method. ADCs obtained with MBH (range: right lobe, 1.641-1.662 x 10^-3 mm^2/s; left lobe, 2.034-2.054 x 10^-3 mm^2/s) were higher than those obtained with FB (right, 1.349-1.391 x 10^-3 mm^2/s; left, 1.630-1.700 x 10^-3 mm^2/s), RT (right, 1.439-1.455 x 10^-3 mm^2/s; left, 1.720-1.755 x 10^-3 mm^2/s), or NT (right, 1.387-1.400 x 10^-3 mm^2/s; left, 1.661-1.736 x 10^-3 mm^2/s) techniques (P < .001); however, no significant difference was observed among ADCs obtained with the FB, RT, and NT techniques (P = .130 to P > .99). ADCs showed a trend to decrease moving from left to right. Reproducibility in the left liver lobe was inferior to that in the right, and the central middle segment in the right lobe had the most reproducible ADC. Statistical differences in ADCs were observed in the left-right direction in the right lobe (P < .001), but not in the superior-inferior direction (P = .144 to P = .450). In the left liver lobe, however, statistical differences existed in both directions (P = .001 to P = .016 in the left-right direction; P < .001 in the superior-inferior direction). Both anatomic location and DW imaging technique influence liver ADC measurements and their reproducibility. FB DW imaging is recommended for liver DW imaging because of its good reproducibility and shorter acquisition time compared with the MBH, RT, and NT techniques. RSNA, 2014
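
The Bland-Altman analysis used above reduces to a bias and 95% limits of agreement computed from paired differences. A minimal sketch with made-up repeat ADC values (x 10^-3 mm^2/s):

```python
import numpy as np

def bland_altman(x, y):
    """Bland-Altman agreement statistics for two repeated measurements.

    Returns (bias, lower_loa, upper_loa): the mean difference and the 95%
    limits of agreement (bias +/- 1.96 SD of the differences).
    """
    d = np.asarray(x, float) - np.asarray(y, float)
    bias = d.mean()
    sd = d.std(ddof=1)   # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical repeat ADC measurements from one technique (illustration only)
scan1 = [1.44, 1.39, 1.47, 1.41, 1.45]
scan2 = [1.42, 1.40, 1.44, 1.43, 1.46]
bias, lo, hi = bland_altman(scan1, scan2)
```

Narrower limits of agreement between repeat scans indicate better reproducibility, which is how the techniques and anatomic locations above are ranked.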

  6. Selective Two-Photon Absorptive Resonance Femtosecond-Laser Electronic-Excitation Tagging (STARFLEET) Velocimetry in Flow and Combustion Diagnostics

    NASA Technical Reports Server (NTRS)

    Jiang, Naibo; Halls, Benjamin R.; Stauffer, Hans U.; Roy, Sukesh; Danehy, Paul M.; Gord, James R.

    2016-01-01

    Selective Two-Photon Absorptive Resonance Femtosecond-Laser Electronic-Excitation Tagging (STARFLEET), a non-seeded ultrafast-laser-based velocimetry technique, is demonstrated in reactive and non-reactive flows. STARFLEET is pumped via a two-photon resonance in N2 using 202.25-nm 100-fs light. STARFLEET greatly reduces the per-pulse energy required (30 µJ/pulse) to generate the signature FLEET emission compared to the conventional FLEET technique (1.1 mJ/pulse). This reduction in laser energy results in less energy deposited in the flow, which allows for reduced flow perturbations (reactive and non-reactive), increased thermometric accuracy, and less severe damage to materials. Velocity measurements conducted in a free jet of N2 and in a premixed flame show good agreement with theoretical velocities and further demonstrate the significantly less-intrusive nature of STARFLEET.

  7. Development of poloxamer gel formulations via hot-melt extrusion technology.

    PubMed

    Mendonsa, Nicole S; Murthy, S Narasimha; Hashemnejad, Seyed Meysam; Kundu, Santanu; Zhang, Feng; Repka, Michael A

    2018-02-15

    Poloxamer gels are conventionally prepared by the "hot" or the "cold" process, but these techniques have disadvantages such as high energy consumption, expensive equipment, and scale-up issues. Therefore, the objective of this work was to develop poloxamer gels by hot-melt extrusion technology. Ketoprofen was selected as the model drug. The formulations developed were 30% and 40% poloxamer gels; of these, the 30% poloxamer gels were selected as ideal. DSC and XRD studies showed the amorphous nature of the drug after extrusion. Permeation studies showed that drug permeation decreased with increasing poloxamer concentration. Other studies conducted on the formulations included in-vitro release studies, texture analysis, rheological studies, and pH measurements. In conclusion, hot-melt extrusion technology could be successfully employed to develop poloxamer gels while overcoming the drawbacks associated with the conventional techniques. Published by Elsevier B.V.

  8. On the role of selective attention in visual perception

    PubMed Central

    Luck, Steven J.; Ford, Michelle A.

    1998-01-01

    What is the role of selective attention in visual perception? Before answering this question, it is necessary to differentiate between attentional mechanisms that influence the identification of a stimulus from those that operate after perception is complete. Cognitive neuroscience techniques are particularly well suited to making this distinction because they allow different attentional mechanisms to be isolated in terms of timing and/or neuroanatomy. The present article describes the use of these techniques in differentiating between perceptual and postperceptual attentional mechanisms and then proposes a specific role of attention in visual perception. Specifically, attention is proposed to resolve ambiguities in neural coding that arise when multiple objects are processed simultaneously. Evidence for this hypothesis is provided by two experiments showing that attention—as measured electrophysiologically—is allocated to visual search targets only under conditions that would be expected to lead to ambiguous neural coding. PMID:9448247

  9. Data Quality Monitoring in Clinical Trials: Has It Been Worth It? An Evaluation and Prediction of the Future by All Stakeholders

    PubMed Central

    Kalali, Amir; West, Mark; Walling, David; Hilt, Dana; Engelhardt, Nina; Alphs, Larry; Loebel, Antony; Vanover, Kim; Atkinson, Sarah; Opler, Mark; Sachs, Gary; Nations, Kari; Brady, Chris

    2016-01-01

    This paper summarizes the results of the CNS Summit Data Quality Monitoring Workgroup analysis of current data quality monitoring techniques used in central nervous system (CNS) clinical trials. Based on audience polls conducted at the CNS Summit 2014, the panel determined that current techniques used to monitor data and quality in clinical trials are broad, uncontrolled, and lack independent verification. The majority of those polled endorse the value of monitoring data. Case examples of current data quality methodology are presented and discussed. Perspectives of pharmaceutical companies and trial sites regarding data quality monitoring are presented. Potential future developments in CNS data quality monitoring are described. Increased utilization of biomarkers as objective outcomes and for patient selection is considered to be the most impactful development in data quality monitoring over the next 10 years. Additional future outcome measures and patient selection approaches are discussed. PMID:27413584

  10. Synchrotron radiation x-ray topography and defect selective etching analysis of threading dislocations in GaN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sintonen, Sakari, E-mail: sakari.sintonen@aalto.fi; Suihkonen, Sami; Jussila, Henri

    2014-08-28

    The crystal quality of bulk GaN crystals is continuously improving due to advances in GaN growth techniques. Defect characterization of GaN substrates by conventional methods is impeded by the very low dislocation density, and a large-scale defect analysis method is needed. White beam synchrotron radiation x-ray topography (SR-XRT) is a rapid and non-destructive technique for dislocation analysis on a large scale. In this study, the defect structure of an ammonothermal c-plane GaN substrate was recorded using SR-XRT and the image contrast caused by the dislocation-induced microstrain was simulated. The simulations and experimental observations agree excellently, and the SR-XRT image contrasts of mixed and screw dislocations were determined. Apart from a few exceptions, defect selective etching measurements were shown to correspond one to one with the SR-XRT results.

  11. Comparison of soft tissue balancing, femoral component rotation, and joint line change between the gap balancing and measured resection techniques in primary total knee arthroplasty

    PubMed Central

    Moon, Young-Wan; Kim, Hyun-Jung; Ahn, Hyeong-Sik; Park, Chan-Deok; Lee, Dae-Hee

    2016-01-01

    Abstract Background: This meta-analysis was designed to compare the accuracy of soft tissue balancing and femoral component rotation as well as change in joint line positions, between the measured resection and gap balancing techniques in primary total knee arthroplasty. Methods: Studies were included in the meta-analysis if they compared soft tissue balancing and/or radiologic outcomes in patients who underwent total knee arthroplasty with the gap balancing and measured resection techniques. Comparisons included differences in flexion/extension, medial/lateral flexion, and medial/lateral extension gaps (LEGs), femoral component rotation, and change in joint line positions. Finally, 8 studies identified via electronic (MEDLINE, EMBASE, and the Cochrane Library) and manual searches were included. All 8 studies showed a low risk of selection bias and provided detailed demographic data. There was some inherent heterogeneity due to uncontrolled bias, because all included studies were observational comparison studies. Results: The pooled mean difference in gap differences between the gap balancing and measured resection techniques did not differ significantly (−0.09 mm, 95% confidence interval [CI]: −0.40 to +0.21 mm; P = 0.55), except that the medial/LEG difference was 0.58 mm greater for measured resection than gap balancing (95% CI: −1.01 to −0.15 mm; P = 0.008). Conversely, the pooled mean difference in femoral component external rotation (0.77°, 95% CI: 0.18° to 1.35°; P = 0.01) and joint line change (1.17 mm, 95% CI: 0.82 to 1.52 mm; P < 0.001) were significantly greater for the gap balancing than the measured resection technique. Conclusion: The gap balancing and measured resection techniques showed similar soft tissue balancing, except for medial/LEG difference. However, the femoral component was more externally rotated and the joint line was more elevated with gap balancing than measured resection. 
These differences were minimal (around 1 mm or 1°) and therefore may have little effect on the biomechanics of the knee joint. This suggests that the gap balancing and measured resection techniques are not mutually exclusive. PMID:27684862
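
The pooled mean differences and 95% CIs above are characteristic of inverse-variance weighting. A fixed-effect sketch with hypothetical per-study values (the meta-analysis itself may have used a random-effects model):

```python
import math

def pooled_mean_difference(mds, ses):
    """Fixed-effect inverse-variance pooling of per-study mean differences.

    mds : mean difference reported by each study
    ses : corresponding standard errors
    Returns (pooled_md, ci_low, ci_high) with a 95% confidence interval.
    """
    w = [1.0 / se**2 for se in ses]                       # inverse-variance weights
    pooled = sum(wi * md for wi, md in zip(w, mds)) / sum(w)
    se_pooled = math.sqrt(1.0 / sum(w))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# hypothetical gap-difference data (mm) from three studies, illustration only
md, lo, hi = pooled_mean_difference([-0.2, 0.1, -0.1], [0.3, 0.25, 0.4])
```

A pooled CI that spans zero, as in the flexion/extension gap comparison above, indicates no significant difference between the techniques.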

  12. On the use of photothermal techniques for the characterization of solar-selective coatings

    NASA Astrophysics Data System (ADS)

    Ramírez-Rincón, J. A.; Ares-Muzio, O.; Macias, J. D.; Estrella-Gutiérrez, M. A.; Lizama-Tzec, F. I.; Oskam, G.; Alvarado-Gil, J. J.

    2018-03-01

    The efficiency of the conversion of solar energy into thermal energy is determined by the optical and thermal properties of the selective coating, in particular, the solar absorptance and thermal emittance at the desired temperature of the specific application. Photothermal techniques are the most appropriate methods to explore these properties; however, a quantitative determination using photothermal radiometry, which is based on the measurement of emitted radiation caused by the heating generated by a modulated light source, has proven elusive. In this work, we present experimental results for selective coatings based on electrodeposited black nickel-nickel on both stainless steel and copper substrates, as well as for commercial TiNOX coatings on aluminum, illustrating that the radiation emitted by the surface depends on the optical absorption, thermal emissivity, and the light-into-heat energy conversion efficiency (quantum efficiency). We show that a combination of photothermal radiometry and photoacoustic spectroscopy can successfully account for these parameters, and provide values for the emissivity in agreement with values obtained by Fourier-transform infrared spectroscopy.

  13. Femtosecond laser nanosurgery of sub-cellular structures in HeLa cells by employing Third Harmonic Generation imaging modality as diagnostic tool.

    PubMed

    Tserevelakis, George J; Psycharakis, Stylianos; Resan, Bojan; Brunner, Felix; Gavgiotaki, Evagelia; Weingarten, Kurt; Filippidis, George

    2012-02-01

    Femtosecond laser assisted nanosurgery of microscopic biological specimens is a relatively new technique which allows the selective disruption of sub-cellular structures without causing any undesirable damage to the surrounding regions. The targeted structures have to be stained in order to be clearly visualized for the nanosurgery procedure. However, the validation of the final nanosurgery result is difficult, since the targeted structure could be simply photobleached rather than selectively destroyed. This constitutes a major drawback of the technique. In our study we employed a multimodal system which integrates non-linear imaging modalities with nanosurgery capabilities, for the selective disruption of sub-cellular structures in HeLa cancer cells. Third Harmonic Generation (THG) imaging modality was used as a tool for the identification of structures that were subjected to nanosurgery experiments. No staining of the biological samples was required, since THG is an intrinsic property of matter. Furthermore, cell viability after nanosurgery processing was verified via Two Photon Excitation Fluorescence (TPEF) measurements. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Fractional exhaled nitric oxide-measuring devices: technology update

    PubMed Central

    Maniscalco, Mauro; Vitale, Carolina; Vatrella, Alessandro; Molino, Antonio; Bianco, Andrea; Mazzarella, Gennaro

    2016-01-01

    The measurement of exhaled nitric oxide (NO) has been employed in the diagnosis of specific types of airway inflammation, in guiding treatment by predicting and assessing response to anti-inflammatory therapy, and in monitoring compliance and detecting relapse. Various techniques are currently used to analyze exhaled NO concentrations under a range of conditions for both health and disease. These include chemiluminescence and electrochemical sensor devices. The cost effectiveness and ability to achieve adequate flexibility in sensitivity and selectivity of NO measurement for these methods are evaluated alongside the potential for use of laser-based technology. This review explores the technologies involved in the measurement of exhaled NO. PMID:27382340

  15. Test Procedures for Characterizing, Evaluating, and Managing Separator Materials used in Secondary Alkaline Batteries

    NASA Technical Reports Server (NTRS)

    Guasp, Edwin; Manzo, Michelle A.

    1997-01-01

    Secondary alkaline batteries, such as nickel-cadmium and silver-zinc, are commonly used for aerospace applications. The uniform evaluation and comparison of separator properties for these systems is dependent upon the measurement techniques. This manual presents a series of standard test procedures that can be used to evaluate, compare, and select separator materials for use in alkaline batteries. Detailed test procedures evaluating the following characteristics are included in this manual: physical measurements of thickness and area weight, dimensional stability measurements, electrolyte retention, resistivity, permeability as measured via bubble pressure, surface evaluation via SEM, chemical stability, and tensile strength.

  16. Imaging Multi-Order Fabry-Perot Spectrometer (IMOFPS) for spaceborne measurements of CO

    NASA Astrophysics Data System (ADS)

    Johnson, Brian R.; Kampe, Thomas U.; Cook, William B.; Miecznik, Grzegorz; Novelli, Paul C.; Snell, Hilary E.; Turner-Valle, Jennifer A.

    2003-11-01

    An instrument concept for an Imaging Multi-Order Fabry-Perot Spectrometer (IMOFPS) has been developed for measuring tropospheric carbon monoxide (CO) from space. The concept is based upon a correlation technique similar in nature to multi-order Fabry-Perot (FP) interferometer or gas filter radiometer techniques, which simultaneously measure atmospheric emission from several infrared vibration-rotation lines of CO. Correlation techniques provide a multiplex advantage for increased throughput, high spectral resolution, and the selectivity necessary for profiling tropospheric CO. Use of unconventional multilayer interference filter designs leads to improvement in CO spectral line correlation compared with the traditional FP multi-order technique, approaching the theoretical performance of gas filter correlation radiometry. In this implementation, however, the gas cell is replaced with a simple, robust solid interference filter. In addition to measuring CO, the correlation filter technique can be applied to measurements of other important gases such as carbon dioxide, nitrous oxide, and methane. Imaging the scene onto a 2-D detector array enables a limited range of spectral sampling owing to the field-angle dependence of the filter transmission function. An innovative anamorphic optical system provides a relatively large instrument field-of-view for imaging along the orthogonal direction across the detector array. An important advantage of the IMOFPS concept is that it is a small, low-mass, high-spectral-resolution spectrometer with no moving parts. A small correlation spectrometer like IMOFPS would be well suited for global observations of CO2, CO, and CH4 from low Earth orbit, or for regional observations from geostationary orbit. A prototype instrument is in development for flight demonstration on an airborne platform with potential applications to atmospheric chemistry, wild fire and biomass burning, and chemical dispersion monitoring.

  17. Quantification of model uncertainty in aerosol optical thickness retrieval from Ozone Monitoring Instrument (OMI) measurements

    NASA Astrophysics Data System (ADS)

    Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.

    2013-09-01

    We study uncertainty quantification in remote sensing of aerosols in the atmosphere with top-of-atmosphere reflectance measurements from the nadir-viewing Ozone Monitoring Instrument (OMI). Focus is on the uncertainty in aerosol model selection of pre-calculated aerosol models and on the statistical modelling of the model inadequacies. The aim is to apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness (AOT) retrieval by propagating model selection and model error related uncertainties more realistically. We utilise Bayesian model selection and model averaging methods for the model selection problem and use Gaussian processes to model the smooth systematic discrepancies from the modelled to observed reflectance. The systematic model error is learned from an ensemble of operational retrievals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud free, over land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques. The method is demonstrated with four examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara desert dust. The presented statistical methodology is general; it is not restricted to this particular satellite retrieval application.
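
The Bayesian model averaging step can be sketched as follows: each candidate aerosol model is weighted by its posterior probability given the observed reflectance, and the AOT retrieval is averaged under those weights. All numbers below are made up for illustration; the operational OMAERO machinery is far more involved:

```python
import math

def bma_weights(log_likelihoods, priors=None):
    """Posterior model probabilities for Bayesian model averaging.

    Given each candidate model's log-likelihood for the observed data,
    return P(model | data) via Bayes' rule with optional prior weights.
    """
    n = len(log_likelihoods)
    priors = priors or [1.0 / n] * n
    m = max(log_likelihoods)                     # stabilize the exponentials
    unnorm = [p * math.exp(ll - m) for p, ll in zip(priors, log_likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# three candidate aerosol models scored against one OMI pixel (made-up values)
w = bma_weights([-10.2, -9.1, -12.5])
aot_models = [0.41, 0.46, 0.35]                  # AOT retrieved by each model
aot_bma = sum(wi * ai for wi, ai in zip(w, aot_models))   # model-averaged AOT
```

The spread of the per-model retrievals under these weights is what propagates the model-selection uncertainty into the final AOT error bar.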

  18. MO-B-BRB-00: Three Dimensional Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Full three-dimensional (3D) dosimetry using volumetric chemical dosimeters probed by 3D imaging systems has long been a promising technique for the radiation therapy clinic, since it provides a unique methodology for dose measurements in the volume irradiated using complex conformal delivery techniques such as IMRT and VMAT. To date, true 3D dosimetry is still not widely practiced in the community; it has been confined to centres of specialized expertise, especially for quality assurance or commissioning roles where other dosimetry techniques are difficult to implement. The potential for improved clinical applicability has been advanced considerably in the last decade by the development of improved 3D dosimeters (e.g., radiochromic plastics, radiochromic gel dosimeters, and normoxic polymer gel systems) and by improved readout protocols using optical computed tomography or magnetic resonance imaging. In this session, established users of some current 3D chemical dosimeters will briefly review the current status of 3D dosimetry, describe several dosimeters and their appropriate imaging for dose readout, present workflow procedures required for good dosimetry, and analyze some limitations for applications in select settings. We will review the application of 3D dosimetry to various clinical situations, describing how 3D approaches can complement other dose delivery validation approaches already available in the clinic. The applications presented will be selected to inform attendees of the unique features provided by full 3D techniques. Learning Objectives: L. John Schreiner (Background and Motivation): understand recent developments enabling clinically practical 3D dosimetry; appreciate 3D dosimetry workflow and dosimetry procedures; observe select examples from the clinic. Sofie Ceberg (Application to dynamic radiotherapy): observe full dosimetry under dynamic radiotherapy during respiratory motion; understand how the measurement of high-resolution dose data in an irradiated volume can help understand interplay effects during TomoTherapy or VMAT. Titania Juang (Special techniques in the clinic and research): understand the potential for 3D dosimetry in validating dose accumulation in deformable systems; observe the benefits of high-resolution measurements for precision therapy in SRS and in MicroSBRT for small-animal irradiators. Geoffrey S. Ibbott (3D dosimetry in end-to-end dosimetry QA): understand the potential for 3D dosimetry for end-to-end radiation therapy process validation in the in-house and external credentialing setting. Funding: Canadian Institutes of Health Research; L. Schreiner, Modus QA, London, ON, Canada; T. Juang, NIH R01CA100835.

  19. MO-B-BRB-01: 3D Dosimetry in the Clinic: Background and Motivation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreiner, L.


  20. MO-B-BRB-04: 3D Dosimetry in End-To-End Dosimetry QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibbott, G.

    Full three-dimensional (3D) dosimetry using volumetric chemical dosimeters probed by 3D imaging systems has long been a promising technique for the radiation therapy clinic, since it provides a unique methodology for dose measurements in the volume irradiated using complex conformal delivery techniques such as IMRT and VMAT. To date true 3D dosimetry is still not widely practiced in the community; it has been confined to centres of specialized expertise, especially for quality assurance or commissioning roles where other dosimetry techniques are difficult to implement. The potential for improved clinical applicability has been advanced considerably in the last decade by the development of improved 3D dosimeters (e.g., radiochromic plastics, radiochromic gel dosimeters and normoxic polymer gel systems) and by improved readout protocols using optical computed tomography or magnetic resonance imaging. In this session, established users of some current 3D chemical dosimeters will briefly review the current status of 3D dosimetry, describe several dosimeters and their appropriate imaging for dose readout, present workflow procedures required for good dosimetry, and analyze some limitations for applications in select settings. We will review the application of 3D dosimetry to various clinical situations, describing how 3D approaches can complement other dose delivery validation approaches already available in the clinic. The applications presented will be selected to inform attendees of the unique features provided by full 3D techniques. Learning Objectives: L. John Schreiner: Background and Motivation Understand recent developments enabling clinically practical 3D dosimetry, Appreciate 3D dosimetry workflow and dosimetry procedures, and Observe select examples from the clinic. 
Sofie Ceberg: Application to dynamic radiotherapy Observe full dosimetry under dynamic radiotherapy during respiratory motion, and Understand how the measurement of high resolution dose data in an irradiated volume can help understand interplay effects during TomoTherapy or VMAT. Titania Juang: Special techniques in the clinic and research Understand the potential for 3D dosimetry in validating dose accumulation in deformable systems, and Observe the benefits of high resolution measurements for precision therapy in SRS and in MicroSBRT for small animal irradiators. Geoffrey S. Ibbott: 3D Dosimetry in end-to-end dosimetry QA Understand the potential for 3D dosimetry for end-to-end radiation therapy process validation in the in-house and external credentialing setting. Canadian Institutes of Health Research; L. Schreiner, Modus QA, London, ON, Canada; T. Juang, NIH R01CA100835.

  1. Analytical methods for quantifying greenhouse gas flux in animal production systems.

    PubMed

    Powers, W; Capelari, M

    2016-08-01

    Given increased interest by all stakeholders to better understand the contribution of animal agriculture to climate change, it is important that appropriate methodologies be used when measuring greenhouse gas (GHG) emissions from animal agriculture. Similarly, a fundamental understanding of the differences between methods is necessary to appropriately compare data collected using different approaches and design meaningful experiments. Sources of carbon dioxide, methane, and nitrous oxide emissions in animal production systems include the animals, feed storage areas, manure deposition and storage areas, and feed and forage production fields. These 3 gases make up the primary GHG emissions from animal feeding operations. Each of the different GHGs may be more or less prominent for each emitting source. Similarly, the species dictates the importance of methane emissions from the animals themselves. Measures of GHG flux from animals are often made using respiration chambers, head boxes, tracer gas techniques, or in vitro gas production techniques. In some cases, a combination of techniques is used (i.e., head boxes in combination with tracer gas). The prominent methods for measuring GHG emissions from housing include the use of tracer gas techniques or direct or indirect ventilation measures coupled with concentration measures of gases of interest. Methods for collecting and measuring GHG emissions from manure storage and/or production lots include the use of downwind measures, often using photoacoustic or open path Fourier transform infrared spectroscopy, combined with modeling techniques or the use of static chambers or flux hood methods. Similar methods can be deployed for determining GHG emissions from fields. Each method identified has its own benefits and challenges for the stated application. 
Considerations for use include the intended goal, equipment investment and maintenance, frequency and duration of sampling needed to achieve the desired representativeness of emissions over time, accuracy and precision of the method, and environmental influences on the method. In the absence of a perfect method for all situations, full knowledge of the advantages and disadvantages of each method is extremely important during the development of the experimental design and interpretation of results. The selection of a suitable technique depends on the animal production system, resource availability, and the objective of the measurements.

  2. Evaluation of the hydrometer for testing immunoglobulin G1 concentrations in Holstein colostrum.

    PubMed

    Pritchett, L C; Gay, C C; Hancock, D D; Besser, T E

    1994-06-01

    Hydrometer measurements of globulin concentration and IgG1 concentrations measured by the radial immunodiffusion technique were compared for 915 samples of first milking colostrum from Holstein cows. Least squares analysis of the relationship between hydrometer measurement and IgG1 concentration was improved by log transformation of IgG1 concentration and resulted in a significant linear relationship between hydrometer measurement and log10 IgG1 concentration; r2 = .469. At 50 mg of globulin/ml of colostrum, the recommended hydrometer cutoff point for colostrum selection, the sensitivity of the hydrometer as a test of IgG1 concentration in Holstein colostrum was 26%, and the negative predictive value was 67%. The negative predictive value and sensitivity of the hydrometer as a test of IgG1 in Holstein colostrum were improved, and the cost of misclassification of colostrum was minimized, when the cutoff point for colostrum selection was increased above the recommended 50 mg/ml.
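
The sensitivity and negative predictive value quoted above are standard 2x2-table quantities. The sketch below shows how they are computed for a given cutoff, under one common convention (test-positive = reading at or above the cutoff, condition = adequate IgG1); the function name and numbers are illustrative, not the study's data.

```python
# Sketch: sensitivity and negative predictive value of a cutoff-based
# screening test. All values here are synthetic.

def screening_stats(readings, true_igg1, cutoff_reading, igg1_threshold):
    """Classify samples by reading vs. a cutoff and compare with the true
    IgG1 status (adequate if >= igg1_threshold mg/ml)."""
    tp = fp = tn = fn = 0
    for reading, igg1 in zip(readings, true_igg1):
        predicted_adequate = reading >= cutoff_reading
        actually_adequate = igg1 >= igg1_threshold
        if predicted_adequate and actually_adequate:
            tp += 1
        elif predicted_adequate:
            fp += 1
        elif actually_adequate:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    npv = tn / (tn + fn) if (tn + fn) else 0.0
    return sensitivity, npv
```

Raising the cutoff shifts samples from the test-positive to the test-negative group, which is how the study improved sensitivity and negative predictive value at the cost of more false rejections.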

  3. Detection of vitamin b1 (thiamine) using modified carbon paste electrodes with polypyrrole

    NASA Astrophysics Data System (ADS)

    Muppariqoh, N. M.; Wahyuni, W. T.; Putra, B. R.

    2017-03-01

    Vitamin B1 (thiamine) is oxidized in alkaline medium and can be detected by the cyclic voltammetry technique using a carbon paste electrode (CPE) as the working electrode. A polypyrrole-modified CPE was used in this study to increase the sensitivity and selectivity of thiamine measurement. The molecularly imprinted polymer (MIP) of the modified CPE was prepared through electrodeposition of pyrrole. Measurement of thiamine performed in 0.05 M KCl (pH 10, tris buffer) using the CPE and the modified CPE gave an optimum anodic current for thiamine at 0.3 V, a potential range of -1.6 to 1 V, and a scan rate of 100 mV/s. Measurement of thiamine using the polypyrrole-modified CPE (CPE-MIPpy) showed better results than the bare CPE, with a detection limit of 6.9×10(-5) M and a quantitation limit of 2.1×10(-4) M. CPE-MIPpy is selective to vitamin B1. In conclusion, CPE-MIPpy as the working electrode showed better performance for thiamine measurement than the bare CPE.

  4. Site-selective nitrogen isotopic ratio measurement of nitrous oxide using a TE-cooled CW-RT-QCL based spectrometer.

    PubMed

    Li, Jingsong; Zhang, Lizhu; Yu, Benli

    2014-12-10

    The feasibility of laser spectroscopic isotopic composition measurements of atmospheric N2O was demonstrated, although making them useful will require further improvements. The system relies on a thermoelectrically (TE) cooled continuous-wave (CW) room temperature (RT) quantum cascade laser source emitting at a wavelength of around 4.6 μm, where strong fundamental absorption bands occur for the species considered and its isotopomers. The analysis technique is based on wavelength modulation spectroscopy with second-harmonic detection, combined with a long-path absorption cell. Preliminary laboratory tests were performed to estimate the achievable detection limits and the signal reproducibility levels in view of possible measurements of (15)N/(14)N and (18)O/(16)O isotope ratios. The experimental results showed that the site-selective (15)N/(14)N ratio can be measured with a precision of 3‰ with a 90 s averaging time using a natural-abundance N2O sample of 12.7 ppm. Copyright © 2014 Elsevier B.V. All rights reserved.
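
The 3‰ precision above is quoted in the standard per-mil delta notation for isotope ratios. A one-line sketch of that conversion (argument names are illustrative):

```python
# Sketch: per-mil (‰) delta notation for isotope ratios, as used when
# quoting 15N/14N measurement precision against a reference standard.

def delta_permil(r_sample, r_standard):
    """delta = (R_sample / R_standard - 1) * 1000, in per mil (‰)."""
    return (r_sample / r_standard - 1.0) * 1000.0
```

A precision of 3‰ thus corresponds to resolving the isotope ratio to about 0.3% relative to the standard.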

  5. Data documentation for the bare soil experiment at the University of Arkansas, June - August 1980

    NASA Technical Reports Server (NTRS)

    Sadeghi, A. M.

    1984-01-01

    The primary objective of this study is to evaluate the relationships between soil moisture and reflectivity of a bare soil, using microwave techniques. A drainage experiment was conducted on a Captina silt loam in cooperation with personnel in the Electrical Engineering Department. Measurements included soil moisture pressures at various depths, neutron probe measurements, gravimetric moisture samples, and reflectivity of the soil surface at selected frequencies, including 1.5 and 6.0 GHz, at the incident angle of 45 deg. All measurements were made in conjunction with the reflectivity data.

  6. Measuring reactive oxygen and nitrogen species with fluorescent probes: challenges and limitations

    PubMed Central

    Kalyanaraman, Balaraman; Darley-Usmar, Victor; Davies, Kelvin J.A.; Dennery, Phyllis A.; Forman, Henry Jay; Grisham, Matthew B.; Mann, Giovanni E.; Moore, Kevin; Roberts, L. Jackson; Ischiropoulos, Harry

    2013-01-01

    The purpose of this position paper is to present a critical analysis of the challenges and limitations of the most widely used fluorescent probes for detecting and measuring reactive oxygen and nitrogen species. Where feasible, we have made recommendations for the use of alternate probes and appropriate analytical techniques that measure the specific products formed from the reactions between fluorescent probes and reactive oxygen and nitrogen species. We have proposed guidelines that will help present and future researchers with regard to the optimal use of selected fluorescent probes and interpretation of results. PMID:22027063

  7. Carbon dioxide sensor. [partial pressure measurement using monochromators

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Analytical techniques for measuring CO2 were evaluated and rated for use with the advanced extravehicular mobility unit. An infrared absorption concept using a dual-wavelength monochromator was selected for investigation. A breadboard carbon dioxide sensor (CDS) was assembled and tested. The CDS performance showed the capability of measuring CO2 over the range of 0 to 4.0 kPa (0 to 30 mmHg) P(CO2). The volume and weight of a flight-configured CDS should be acceptable. It is recommended that development continue to complete the design of a flight prototype.

  8. Optical filter selection for high confidence discrimination of strongly overlapping infrared chemical spectra.

    PubMed

    Major, Kevin J; Poutous, Menelaos K; Ewing, Kenneth J; Dunnill, Kevin F; Sanghera, Jasbinder S; Aggarwal, Ishwar D

    2015-09-01

    Optical filter-based chemical sensing techniques provide a new avenue to develop low-cost infrared sensors. These methods utilize multiple infrared optical filters to selectively measure different response functions for various chemicals, dependent on each chemical's infrared absorption. Rather than identifying distinct spectral features, which can then be used to determine the identity of a target chemical, optical filter-based approaches rely on measuring differences in the ensemble response between a given filter set and specific chemicals of interest. Therefore, the results of such methods are highly dependent on the original optical filter choice, which will dictate the selectivity, sensitivity, and stability of any filter-based sensing method. Recently, a method has been developed that utilizes unique detection vector operations defined by optical multifilter responses to discriminate between volatile chemical vapors. This method, comparative-discrimination spectral detection (CDSD), is a technique which employs broadband optical filters to selectively discriminate between chemicals with highly overlapping infrared absorption spectra. CDSD has been shown to correctly distinguish between similar chemicals in the carbon-hydrogen stretch region of the infrared absorption spectrum from 2800-3100 cm(-1). A key challenge to this approach is determining which optical filter sets should be utilized to achieve the greatest discrimination between target chemicals. Previous studies used empirical approaches to select the optical filter set; however, this is insufficient to determine the optimum selectivity between strongly overlapping chemical spectra. Here we present a numerical approach to systematically study the effects of filter positioning and bandwidth on a number of three-chemical systems. We describe how both the filter properties, as well as the chemicals in each set, affect the CDSD results and subsequent discrimination. 
These results demonstrate the importance of choosing the proper filter set and chemicals for comparative discrimination, in order to identify the target chemical of interest in the presence of closely matched chemical interferents. These findings are an integral step in the development of experimental prototype sensors, which will utilize CDSD.
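
The multifilter-response idea behind such methods can be sketched numerically: each chemical's absorption spectrum is reduced to a vector of broadband filter responses, and an unknown spectrum is matched to the chemical whose response vector points in the most similar direction. The filter shapes, spectra, and cosine-similarity matching below are illustrative assumptions, not the published CDSD algorithm.

```python
# Illustrative sketch of filter-response-vector discrimination.
import numpy as np

def filter_responses(spectrum, filters):
    """Project a spectrum (absorbance on a fixed wavenumber grid) onto each
    broadband filter transmission curve defined on the same grid."""
    return np.array([float(spectrum @ f) for f in filters])

def classify(unknown, library, filters):
    """Return the library key whose normalized response vector has the
    highest cosine similarity to the unknown's response vector."""
    u = filter_responses(unknown, filters)
    u = u / np.linalg.norm(u)
    best, best_sim = None, -2.0
    for name, spec in library.items():
        v = filter_responses(spec, filters)
        v = v / np.linalg.norm(v)
        sim = float(u @ v)
        if sim > best_sim:
            best, best_sim = name, sim
    return best
```

The choice of filter centers and bandwidths determines how well the response vectors of overlapping spectra separate, which is exactly the optimization the abstract addresses.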

  9. Range estimation techniques in single-station thunderstorm warning sensors based upon gated, wideband, magnetic direction finder technology

    NASA Technical Reports Server (NTRS)

    Pifer, Alburt E.; Hiscox, William L.; Cummins, Kenneth L.; Neumann, William T.

    1991-01-01

    Gated, wideband, magnetic direction finders (DFs) were originally designed to measure the bearing of cloud-to-ground lightning relative to the sensor. A recent addition to this device uses proprietary waveform discrimination logic to select return stroke signatures and certain range dependent features in the waveform to provide an estimate of the range of flashes within 50 km. The enhanced ranging techniques designed and developed for use in a single-station thunderstorm warning sensor are discussed. Included are the results of on-going evaluations being conducted under a variety of meteorological and geographic conditions.

  10. Critical layer thickness in In0.2Ga0.8As/GaAs single strained quantum well structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritz, I.J.; Gourley, P.L.; Dawson, L.R.

    1987-09-28

    We report accurate determination of the critical layer thickness (CLT) for single strained-layer epitaxy in the InGaAs/GaAs system. Our samples were molecular beam epitaxially grown, selectively doped, single quantum well structures comprising a strained In0.2Ga0.8As layer imbedded in GaAs. We determined the CLT by two sensitive techniques: Hall-effect measurements at 77 K and photoluminescence microscopy. Both techniques indicate a CLT of about 20 nm. This value is close to that determined previously (~15 nm) for comparable strained-layer superlattices, but considerably less than the value of ~45 nm suggested by recent x-ray rocking-curve measurements. We show by a simple calculation that photoluminescence microscopy is more than two orders of magnitude more sensitive to dislocations than x-ray diffraction. Our results re-emphasize the necessity of using high-sensitivity techniques for accurate determination of critical layer thicknesses.

  11. Accuracy of Gypsum Casts after Different Impression Techniques and Double Pouring

    PubMed Central

    Silva, Stephania Caroline Rodolfo; Messias, Aion Mangino; Abi-Rached, Filipe de Oliveira; de Souza, Raphael Freitas; Reis, José Maurício dos Santos Nunes

    2016-01-01

    This study evaluated the accuracy of gypsum casts after different impression techniques and double pouring. Ten patients were selected, and for each one 5 partial putty/wash impressions were obtained with vinyl polysiloxane (VPS) material from teeth #13 to #16 with partial metal stock trays. The following techniques were performed: (1) one-step; two-step relief with: (2) PVC film; (3) slow-speed tungsten carbide bur and scalpel blade; (4) small movements of the tray; and (5) without relief (negative control). The impressions were disinfected with 0.5% sodium hypochlorite for 10 minutes and stored for 110 and 230 minutes for the first and second pouring, respectively, with type IV gypsum. Three intra-oral lateral photographs of each patient were taken using a tripod and a customized radiographic positioner. The images were imported into ImageJ software and the total area of the buccal surface from teeth #13 to #16 was measured. A 4.0% coefficient of variance was the criterion for using these measurements as Baseline values. The casts were photographed and analyzed using the same standardization as for the clinical images. The areas (mm2) obtained from the difference between the measurements of each gypsum cast and the Baseline value of the respective patient were calculated and analyzed by repeated-measures two-way ANOVA and Mauchly's Sphericity test (α = 0.05). No significant effect was observed for Impression technique (P = 0.23), Second pouring (P = 0.99) or their interaction (P = 0.25). The impression techniques and double pouring did not influence the accuracy of the gypsum casts. PMID:27736967

  12. Accuracy of Gypsum Casts after Different Impression Techniques and Double Pouring.

    PubMed

    Silva, Stephania Caroline Rodolfo; Messias, Aion Mangino; Abi-Rached, Filipe de Oliveira; de Souza, Raphael Freitas; Reis, José Maurício Dos Santos Nunes

    2016-01-01

    This study evaluated the accuracy of gypsum casts after different impression techniques and double pouring. Ten patients were selected, and for each one 5 partial putty/wash impressions were obtained with vinyl polysiloxane (VPS) material from teeth #13 to #16 with partial metal stock trays. The following techniques were performed: (1) one-step; two-step relief with: (2) PVC film; (3) slow-speed tungsten carbide bur and scalpel blade; (4) small movements of the tray; and (5) without relief (negative control). The impressions were disinfected with 0.5% sodium hypochlorite for 10 minutes and stored for 110 and 230 minutes for the first and second pouring, respectively, with type IV gypsum. Three intra-oral lateral photographs of each patient were taken using a tripod and a customized radiographic positioner. The images were imported into ImageJ software and the total area of the buccal surface from teeth #13 to #16 was measured. A 4.0% coefficient of variance was the criterion for using these measurements as Baseline values. The casts were photographed and analyzed using the same standardization as for the clinical images. The areas (mm2) obtained from the difference between the measurements of each gypsum cast and the Baseline value of the respective patient were calculated and analyzed by repeated-measures two-way ANOVA and Mauchly's Sphericity test (α = 0.05). No significant effect was observed for Impression technique (P = 0.23), Second pouring (P = 0.99) or their interaction (P = 0.25). The impression techniques and double pouring did not influence the accuracy of the gypsum casts.

  13. A comparative analysis of swarm intelligence techniques for feature selection in cancer classification.

    PubMed

    Gunavathi, Chellamuthu; Premalatha, Kandasamy

    2014-01-01

    Feature selection in cancer classification is a central area of research in the field of bioinformatics and used to select the informative genes from thousands of genes of the microarray. The genes are ranked based on T-statistics, signal-to-noise ratio (SNR), and F-test values. The swarm intelligence (SI) technique finds the informative genes from the top-m ranked genes. These selected genes are used for classification. In this paper the shuffled frog leaping with Lévy flight (SFLLF) is proposed for feature selection. In SFLLF, the Lévy flight is included to avoid premature convergence of shuffled frog leaping (SFL) algorithm. The SI techniques such as particle swarm optimization (PSO), cuckoo search (CS), SFL, and SFLLF are used for feature selection which identifies informative genes for classification. The k-nearest neighbour (k-NN) technique is used to classify the samples. The proposed work is applied on 10 different benchmark datasets and examined with SI techniques. The experimental results show that the results obtained from k-NN classifier through SFLLF feature selection method outperform PSO, CS, and SFL.
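
The rank-then-classify pipeline described above can be sketched compactly: rank genes by a filter statistic such as the signal-to-noise ratio (SNR), keep the top-m, and classify with k-NN. The data and parameter names below are illustrative, and the swarm search step (PSO/CS/SFL/SFLLF) over the top-m genes is omitted for brevity.

```python
# Sketch: SNR-based feature ranking followed by k-nearest-neighbour
# classification, as in filter-based gene selection pipelines.
import numpy as np

def snr_rank(X, y):
    """Rank features (columns of X) by SNR between two classes:
    |mu0 - mu1| / (sigma0 + sigma1). Returns indices, best first."""
    X0, X1 = X[y == 0], X[y == 1]
    num = np.abs(X0.mean(axis=0) - X1.mean(axis=0))
    den = X0.std(axis=0) + X1.std(axis=0) + 1e-12  # avoid divide-by-zero
    return np.argsort(-(num / den))

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-NN majority vote with Euclidean distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return int(np.bincount(nearest).argmax())
```

In the full method, a swarm optimizer would then search subsets of the top-ranked features for the combination giving the best k-NN accuracy.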

  14. Combinatorial Methodology for Screening Selectivity in Polymeric Pervaporation Membranes.

    PubMed

    Godbole, Rutvik V; Ma, Lan; Doerfert, Michael D; Williams, Porsche; Hedden, Ronald C

    2015-11-09

    Combinatorial methodology is described for rapid screening of selectivity in polymeric pervaporation membrane materials for alcohol-water separations. The screening technique is demonstrated for ethanol-water separation using a model polyacrylate system. The materials studied are cross-linked random copolymers of a hydrophobic comonomer (n-butyl acrylate, B) and a hydrophilic comonomer (2-hydroxyethyl acrylate, H). A matrix of materials is prepared that has orthogonal variations in two key variables, H:B ratio and cross-linker concentration. For mixtures of ethanol and water, equilibrium selectivities and distribution coefficients are obtained by combining swelling measurements with high-throughput HPLC analysis. Based on the screening results, two copolymers are selected for further study as pervaporation membranes to quantify permeability selectivity and the flux of ethanol. The screening methodology described has good potential to accelerate the search for new membrane materials, as it is adaptable to a broad range of polymer chemistries.

  15. A target recognition method for maritime surveillance radars based on hybrid ensemble selection

    NASA Astrophysics Data System (ADS)

    Fan, Xueman; Hu, Shengliang; He, Jingbo

    2017-11-01

    In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different tradeoffs between the classification error and diversity. During the dynamic selection phase, the meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full polarimetric high resolution range profile data-set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.

  16. Material radioassay and selection for the XENON1T dark matter experiment

    NASA Astrophysics Data System (ADS)

    Aprile, E.; Aalbers, J.; Agostini, F.; Alfonsi, M.; Amaro, F. D.; Anthony, M.; Arneodo, F.; Barrow, P.; Baudis, L.; Bauermeister, B.; Benabderrahmane, M. L.; Berger, T.; Breur, P. A.; Brown, A.; Brown, E.; Bruenner, S.; Bruno, G.; Budnik, R.; Bütikofer, L.; Calvén, J.; Cardoso, J. M. R.; Cervantes, M.; Cichon, D.; Coderre, D.; Colijn, A. P.; Conrad, J.; Cussonneau, J. P.; Decowski, M. P.; de Perio, P.; Di Gangi, P.; Di Giovanni, A.; Diglio, S.; Eurin, G.; Fei, J.; Ferella, A. D.; Fieguth, A.; Franco, D.; Fulgione, W.; Gallo Rosso, A.; Galloway, M.; Gao, F.; Garbini, M.; Geis, C.; Goetzke, L. W.; Grandi, L.; Greene, Z.; Grignon, C.; Hasterok, C.; Hogenbirk, E.; Itay, R.; Kaminsky, B.; Kessler, G.; Kish, A.; Landsman, H.; Lang, R. F.; Lellouch, D.; Levinson, L.; Le Calloch, M.; Lin, Q.; Lindemann, S.; Lindner, M.; Lopes, J. A. M.; Manfredini, A.; Maris, I.; Marrodán Undagoitia, T.; Masbou, J.; Massoli, F. V.; Masson, D.; Mayani, D.; Messina, M.; Micheneau, K.; Miguez, B.; Molinario, A.; Murra, M.; Naganoma, J.; Ni, K.; Oberlack, U.; Pakarha, P.; Pelssers, B.; Persiani, R.; Piastra, F.; Pienaar, J.; Piro, M.-C.; Pizzella, V.; Plante, G.; Priel, N.; Rauch, L.; Reichard, S.; Reuter, C.; Rizzo, A.; Rosendahl, S.; Rupp, N.; Saldanha, R.; dos Santos, J. M. F.; Sartorelli, G.; Scheibelhut, M.; Schindler, S.; Schreiner, J.; Schumann, M.; Scotto Lavina, L.; Selvi, M.; Shagin, P.; Shockley, E.; Silva, M.; Simgen, H.; Sivers, M. v.; Stein, A.; Thers, D.; Tiseni, A.; Trinchero, G.; Tunnell, C.; Upole, N.; Wang, H.; Wei, Y.; Weinheimer, C.; Wulf, J.; Ye, J.; Zhang, Y.; Laubenstein, M.; Nisi, S.

    2017-12-01

    The XENON1T dark matter experiment aims to detect weakly interacting massive particles (WIMPs) through low-energy interactions with xenon atoms. To detect such a rare event necessitates the use of radiopure materials to minimize the number of background events within the expected WIMP signal region. In this paper we report the results of an extensive material radioassay campaign for the XENON1T experiment. Using gamma-ray spectroscopy and mass spectrometry techniques, systematic measurements of trace radioactive impurities in over one hundred samples within a wide range of materials were performed. The measured activities allowed for stringent selection and placement of materials during the detector construction phase and provided the input for XENON1T detection sensitivity estimates through Monte Carlo simulations.

  17. An improved switching converter model. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters in the continuous mode and discontinuous mode were performed by averaging and discrete sampling techniques. A model was developed by combining these two techniques. This model, the discrete average model, accurately predicts the envelope of the output voltage and is easy to implement in circuit and state variable forms. The proposed model is shown to be dependent on the type of duty cycle control. The proper selection of the power stage model, between average and discrete average, is largely a function of the error processor in the feedback loop. The accuracy of the measurement data taken by a conventional technique is affected by the conditions under which the data are collected.

  18. The effect of patient selection and surgical technique on the results of Conserve® Plus hip resurfacing--3.5- to 14-year follow-up.

    PubMed

    Amstutz, Harlan C; Takamura, Karren M; Le Duff, Michel J

    2011-04-01

    The results of metal-on-metal Conserve® Plus hip resurfacings with up to 14 years of follow-up, with and without the risk factors of small component size and/or large femoral defects, were compared as performed with either first- or second-generation surgical techniques. There was a 99.7% survivorship at ten years for ideal hips (large components and small defects) and a 95.3% survivorship for hips with risk factors. The optimized technique has measurably improved durability in patients with risk factors at the 8-year mark. The lessons learned can help offset the observed learning curve of resurfacing. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Surface analysis of space telescope material specimens

    NASA Technical Reports Server (NTRS)

    Fromhold, A. T.; Daneshvar, K.

    1985-01-01

    Qualitative and quantitative data on Space Telescope materials which were exposed to low Earth orbital atomic oxygen in a controlled experiment during the 41-G (STS-17) mission were obtained utilizing the experimental techniques of Rutherford backscattering (RBS), particle induced X-ray emission (PIXE), and ellipsometry (ELL). The techniques employed were chosen with a view towards appropriateness for the sample in question, after consultation with NASA scientific personnel who provided the material specimens. A group of eight samples and their controls selected by NASA scientists were measured before and after flight. Information reported herein includes specimen surface characterization by ellipsometry techniques, a determination of the thickness of the evaporated metal specimens by RBS, and a determination of trace impurity species present on and within the surface by PIXE.

  20. Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)

    2002-01-01

    A new fast-computational technique based on a fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions are dependent on the threshold value and reflect the distribution of pixel values in two classes; thus, this technique minimizes the classification error. This new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method surpasses the performance of the Huang-Wang and Otsu methods when the image consists of textured background and poor printing quality. The three methods perform well but yield different binarization approaches if the background and foreground of the image have well-separated gray-level ranges.
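
The general shape of fuzzy-entropy threshold selection can be sketched as follows: each candidate threshold t splits the histogram into background and foreground, pixel memberships depend on the distance to the respective class mean, and the threshold minimizing total fuzzy entropy is chosen. This is an illustrative reimplementation in the spirit of the Huang-Wang approach, not the method of the abstract.

```python
# Sketch: fuzzy-entropy threshold selection over a gray-level histogram.
import math

def fuzzy_threshold(hist):
    """hist: pixel counts per gray level 0..L-1. Returns the threshold t
    (pixels < t are background) minimizing total fuzzy entropy."""
    L = len(hist)
    C = L - 1  # normalizer keeping memberships within [0.5, 1]
    total = sum(g * h for g, h in enumerate(hist))
    counts = sum(hist)
    best_t, best_e = None, float("inf")
    for t in range(1, L):
        n0 = sum(hist[:t])
        n1 = counts - n0
        if n0 == 0 or n1 == 0:
            continue  # degenerate split
        m0 = sum(g * h for g, h in enumerate(hist[:t])) / n0
        m1 = (total - n0 * m0) / n1
        e = 0.0
        for g, h in enumerate(hist):
            # membership decays with distance from the class mean
            mu = 1.0 / (1.0 + abs(g - (m0 if g < t else m1)) / C)
            if 0.0 < mu < 1.0:
                # Shannon entropy of the fuzzy membership, weighted by count
                e -= h * (mu * math.log(mu) + (1 - mu) * math.log(1 - mu))
        if e < best_e:
            best_t, best_e = t, e
    return best_t
```

For a clearly bimodal histogram, any threshold between the two modes yields the same (minimal) fuzziness, so the selected t falls in the gap between them.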

  1. Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)

    2002-01-01

    A new fast-computational technique based on a fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions are dependent on the threshold value and reflect the distribution of pixel values in two classes; thus, this technique minimizes the classification error. This new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method surpasses the performance of the Huang-Wang and Otsu methods when the image consists of textured background and poor printing quality. The three methods perform well but yield different binarization approaches if the background and foreground of the image have well-separated gray-level ranges.

  2. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated by using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The results of the LOOCV pricing error show that interpolation using a fourth-order polynomial provides the best fit to option prices, having the lowest error value.
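
The LOOCV pricing-error criterion is simple to state: leave each option quote out in turn, refit the interpolant on the rest, and re-price the held-out quote. The sketch below uses polynomial fits as stand-ins for the paper's second/fourth-order interpolants; the data and function name are illustrative.

```python
# Sketch: leave-one-out cross-validation (LOOCV) error for choosing among
# polynomial interpolation schemes fitted to option prices.
import numpy as np

def loocv_error(strikes, prices, degree):
    """Mean squared leave-one-out error of a degree-`degree` polynomial fit
    of price against strike."""
    errs = []
    for i in range(len(strikes)):
        mask = np.arange(len(strikes)) != i       # hold out quote i
        coeffs = np.polyfit(strikes[mask], prices[mask], degree)
        errs.append((np.polyval(coeffs, strikes[i]) - prices[i]) ** 2)
    return float(np.mean(errs))
```

The scheme with the lowest LOOCV error is selected; this penalizes both underfitting (too low a degree) and overfitting (a curve that swings wildly between held-out strikes).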

  3. The Comparative Effectiveness of Different Item Analysis Techniques in Increasing Change Score Reliability.

    ERIC Educational Resources Information Center

    Crocker, Linda M.; Mehrens, William A.

    Four new methods of item analysis were used to select subsets of items which would yield measures of attitude change. The sample consisted of 263 students at Michigan State University who were tested on the Inventory of Beliefs as freshmen and retested on the same instrument as juniors. Item change scores and total change scores were computed for…

  4. Use of near infrared spectroscopy to measure the chemical and mechanical properties of solid wood

    Treesearch

    Stephen S. Kelley; Timothy G. Rials; Rebecca Snell; Leslie H. Groom; Amie Sluiter

    2004-01-01

    Near infrared (NIR) spectroscopy (500 nm-2400 nm), coupled with multivariate analytic (MVA) statistical techniques, has been used to predict the chemical and mechanical properties of solid loblolly pine wood. The samples were selected from different radial locations and heights of three loblolly pine trees grown in Arkansas. The chemical composition and mechanical...

  6. Variable Selection Strategies for Small-area Estimation Using FIA Plots and Remotely Sensed Data

    Treesearch

    Andrew Lister; Rachel Riemann; James Westfall; Mike Hoppus

    2005-01-01

    The USDA Forest Service's Forest Inventory and Analysis (FIA) unit maintains a network of tens of thousands of georeferenced forest inventory plots distributed across the United States. Data collected on these plots include direct measurements of tree diameter and height and other variables. We present a technique by which FIA plot data and coregistered...

  7. Individual and Familial Correlates of Career Salience among Upwardly Mobile College Women. Final Report.

    ERIC Educational Resources Information Center

    Guttmacher, Mary Johnson

    A case study was conducted using a sample of 271 women selected from a state college by a stratified random cluster technique that approximates proportional representation of women in all four classes and all college majors. The data source was an extensive questionnaire designed to measure the attitudes and behavior of interest. The major…

  8. Fine-Structure Artifact of the Velocity Distribution of Cs Beam Tubes as Measured by the Pulsed Microwave Power Technique

    DTIC Science & Technology

    1990-10-15

    (Abstract not recoverable from the scanned report documentation page; surviving reference fragments: Metrologia, 9, 1973, pp. 107-112; H. Hellwig, S. Jarvis, D. J. Glaze, D. Halford, and H. E. Bell, "Time domain velocity selection modulation as a…)

  9. A Study in Critical Listening Using Eight to Ten Year Olds in an Analysis of Commercial Propaganda Emanating from Television.

    ERIC Educational Resources Information Center

    Cook, Jimmie Ellis

    Selected eight to ten year old Maryland children were used in this study measuring the effect of lessons in becoming aware of propaganda employed by commercial advertisers in television programs. Sixteen 45-minute lessons directed to the propaganda techniques of Band Wagon, Card Stacking, Glittering Generalities, Name Calling, Plain Folks,…

  10. A State-of-the-Art Review of Techniques and Procedures for the Measurement of Complex Human Performance. Consulting Report.

    ERIC Educational Resources Information Center

    Fink, C. Dennis; And Others

    Recent efforts to assess complex human performances in various work settings are reviewed. The review is based upon recent psychological, educational, and industrial literature, and technical reports sponsored by the military services. A few selected military and industrial locations were also visited in order to learn about current research and…

  11. Land Mobile Satellite Service (LMSS) channel simulator: An end-to-end hardware simulation and study of the LMSS communications links

    NASA Technical Reports Server (NTRS)

    Salmasi, A. B. (Editor); Springett, J. C.; Sumida, J. T.; Richter, P. H.

    1984-01-01

    The design and implementation of the Land Mobile Satellite Service (LMSS) channel simulator, a facility for end-to-end hardware simulation of the LMSS communications links (primarily with the mobile terminal), are described. A number of studies are reported which show the application of the channel simulator as a facility for validating and assessing the LMSS design requirements and capabilities, by performing quantitative measurements and qualitative audio evaluations for various link design parameters and channel impairments under simulated LMSS operating conditions. As a first application, the LMSS channel simulator was used to evaluate a system based on the voice processing and modulation (e.g., NBFM with 30 kHz of channel spacing and a 2 kHz rms frequency deviation for average talkers) selected for the Bell System's Advanced Mobile Phone Service (AMPS). The details of the hardware design, the qualitative audio evaluation techniques, the signal-to-channel-impairment measurement techniques, the justification of the criteria for selecting different voice processing and modulation parameters, and the results of a number of parametric studies are further described.

  12. Endogenous glucocorticoid analysis by liquid chromatography-tandem mass spectrometry in routine clinical laboratories.

    PubMed

    Hawley, James M; Keevil, Brian G

    2016-09-01

    Liquid chromatography-tandem mass spectrometry (LC-MS/MS) is a powerful analytical technique that offers exceptional selectivity and sensitivity. Used optimally, LC-MS/MS provides accurate and precise results for a wide range of analytes at concentrations that are difficult to quantitate with other methodologies. Its implementation into routine clinical biochemistry laboratories has revolutionised our ability to analyse small molecules such as glucocorticoids. Whereas immunoassays can suffer from matrix effects and cross-reactivity due to interactions with structural analogues, the selectivity offered by LC-MS/MS has largely overcome these limitations. As many clinical guidelines are now beginning to acknowledge the importance of the methodology used to provide results, the advantages associated with LC-MS/MS are gaining wider recognition. With their integral role in both the diagnosis and management of hypo- and hyperadrenal disorders, coupled with their widespread pharmacological use, the accurate measurement of glucocorticoids is fundamental to effective patient care. Here, we provide an up-to-date review of the LC-MS/MS techniques used to successfully measure endogenous glucocorticoids, with particular reference to serum, urine and salivary cortisol. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Predictive analysis effectiveness in determining the epidemic disease infected area

    NASA Astrophysics Data System (ADS)

    Ibrahim, Najihah; Akhir, Nur Shazwani Md.; Hassan, Fadratul Hafinaz

    2017-10-01

    Epidemic disease outbreaks have led communities to raise great concern over methods for controlling, preventing and handling infectious disease, in order to diminish the rate of disease dissemination and the size of the infected area. The backpropagation method was used for countermeasure and prediction analysis of epidemic disease. Predictive analysis based on backpropagation can be carried out via a machine-learning process that applies artificial intelligence to pattern recognition, statistics and feature selection. This computational learning process is integrated with data mining by measuring the output score, which serves as the classifier for the given set of input features, through a classification technique. The classification technique selects as features the disease-dissemination factors that are likely to be strongly interconnected in causing infectious disease outbreaks. This preliminary study introduces predictive analysis of epidemic disease for determining the infected area, using the backpropagation method together with observations from other researchers' findings. The study classifies the epidemic disease-dissemination factors as features for weight adjustment in predicting epidemic disease outbreaks. Through this preliminary study, predictive analysis is shown to be an effective method for determining the epidemic disease infected area, minimizing the error value through feature classification.

  14. Data-driven discovery of partial differential equations.

    PubMed

    Rudy, Samuel H; Brunton, Steven L; Proctor, Joshua L; Kutz, J Nathan

    2017-04-01

    We propose a sparse regression method capable of discovering the governing partial differential equation(s) of a given system by time series measurements in the spatial domain. The regression framework relies on sparsity-promoting techniques to select the nonlinear and partial derivative terms of the governing equations that most accurately represent the data, bypassing a combinatorially large search through all possible candidate models. The method balances model complexity and regression accuracy by selecting a parsimonious model via Pareto analysis. Time series measurements can be made in an Eulerian framework, where the sensors are fixed spatially, or in a Lagrangian framework, where the sensors move with the dynamics. The method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation. Moreover, the method is capable of disambiguating between potentially nonunique dynamical terms by using multiple time series taken with different initial data. Thus, for a traveling wave, the method can distinguish between a linear wave equation and the Korteweg-de Vries equation, for instance. The method provides a promising new technique for discovering governing equations and physical laws in parameterized spatiotemporal systems, where first-principles derivations are intractable.
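
    The sparsity-promoting selection step can be illustrated with sequential thresholded least squares, the style of sparse regression used in this line of work. The candidate library and coefficients below are synthetic assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def stlsq(Theta, dudt, lam=0.1, iters=10):
    """Sequentially thresholded least squares: sparse xi with Theta @ xi ~ dudt."""
    xi = np.linalg.lstsq(Theta, dudt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < lam                    # prune negligible terms
        xi[small] = 0.0
        big = ~small
        if big.any():                               # refit the surviving terms
            xi[big] = np.linalg.lstsq(Theta[:, big], dudt, rcond=None)[0]
    return xi

# Candidate library columns stand in for [u, u_x, u_xx]; the "true" dynamics
# use only the third term (a diffusion-like equation u_t = 0.5 u_xx).
rng = np.random.default_rng(1)
Theta = rng.standard_normal((200, 3))
true_xi = np.array([0.0, 0.0, 0.5])
dudt = Theta @ true_xi + 1e-6 * rng.standard_normal(200)
xi = stlsq(Theta, dudt, lam=0.1)
```

Thresholding plus refitting recovers the single active term, sidestepping the combinatorial search over candidate models that the abstract mentions.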

  15. [Validation of an in-house method for the determination of zinc in serum: Meeting the requirements of ISO 17025].

    PubMed

    Llorente Ballesteros, M T; Navarro Serrano, I; López Colón, J L

    2015-01-01

    The aim of this report is to propose a scheme for validation of an analytical technique according to ISO 17025. Following ISO 17025, the fundamental parameters tested were: selectivity, calibration model, precision, accuracy, uncertainty of measurement, and analytical interference. A protocol has been developed and applied successfully to quantify zinc in serum by atomic absorption spectrometry. It is demonstrated that our method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics. Copyright © 2015 SECA. Published by Elsevier España. All rights reserved.

  16. A comparison of participation outcome measures and the International Classification of Functioning, Disability and Health Core Sets for traumatic brain injury.

    PubMed

    Chung, Pearl; Yun, Sarah Jin; Khan, Fary

    2014-02-01

    To compare the contents of participation outcome measures in traumatic brain injury with the International Classification of Functioning, Disability and Health (ICF) Core Sets for traumatic brain injury. A systematic search with an independent review process selected relevant articles to identify outcome measures in participation in traumatic brain injury. Instruments used in two or more studies were linked to the ICF categories, which identified categories in participation for comparison with the ICF Core Sets for traumatic brain injury. Selected articles (n = 101) identified participation instruments used in two or more studies (n = 9): Community Integration Questionnaire, Craig Handicap Assessment and Reporting Technique, Mayo-Portland Adaptability Inventory-4 Participation Index, Sydney Psychosocial Reintegration Scale Version-2, Participation Assessment with Recombined Tool-Objective, Community Integration Measure, Participation Objective Participation Subjective, Community Integration Questionnaire-2, and Quality of Community Integration Questionnaire. Each instrument was linked to 4-35 unique second-level ICF categories, of which 39-100% related to participation. Instruments addressed 86-100% and 50-100% of the participation categories in the Comprehensive and Brief ICF Core Sets for traumatic brain injury, respectively. Participation measures in traumatic brain injury were compared with the ICF Core Sets for traumatic brain injury. The ICF Core Sets for traumatic brain injury could contribute to the development and selection of participation measures.

  17. Contactless Abdominal Fat Reduction With Selective RF™ Evaluated by Magnetic Resonance Imaging (MRI): Case Study.

    PubMed

    Downie, Jeanine; Kaspar, Miroslav

    2016-04-01

    Noninvasive body shaping methods appear to be a growing segment of the aesthetics market. As a result, the pressure to develop reliable methods for collecting and presenting their results has also increased. The most commonly used techniques currently include ultrasound measurement of fat thickness in the treated area, caliper measurements, bioimpedance-based scale measurements, and circumferential tape measurements. Almost all of these, however, have limitations in reproducibility and/or accuracy. This study presents Magnetic Resonance Imaging (MRI) as a new method for presenting results in the body shaping industry. Six subjects were treated with a contactless selective radiofrequency device (BTL Vanquish ME, BTL Industries Inc., Boston, MA). MRI fat thickness was measured at baseline and at 4 weeks following the treatment. In addition to MRI images and measurements, digital photographs and anthropometric evaluations such as weight, abdominal circumference, and caliper fat thickness measurements were recorded. Abdominal fat thickness measurements from the MRI were performed on the same slices, identified by the same tissue artefacts. The MRI fat thickness difference between the baseline measurement and the follow-up visit showed an average reduction of 5.36 mm, calculated from the data of 5 subjects; one subject dropped out of the study due to non-study-related issues. The results were statistically significant based on Student's t-test. MRI abdominal fat thickness measurement appears to be the best available method for evaluating fat thickness reduction after non-invasive body shaping treatments. In this study, the method showed an average fat thickness reduction of 5.36 mm while the subjects' weight did not change significantly. A large spot size measuring 1317 cm(2) (204 square inches) covers the abdomen flank to flank. The average 5.36 mm reduction in fat-layer thickness under the applicator translates into a significant cumulative circumferential reduction. The reduction was not related to dieting.

  18. Feature selection through validation and un-censoring of endovascular repair survival data for predicting the risk of re-intervention.

    PubMed

    Attallah, Omneya; Karthikesalingam, Alan; Holt, Peter J E; Thompson, Matthew M; Sayers, Rob; Bown, Matthew J; Choke, Eddie C; Ma, Xianghong

    2017-08-03

    The feature selection (FS) process is essential in the medical area, as it reduces the effort and time physicians need to measure unnecessary features. Choosing useful variables is a difficult task in the presence of censoring, the distinctive characteristic of survival analysis. Most survival FS methods depend on Cox's proportional hazards model; machine learning techniques (MLT) are preferred but not commonly used because of censoring, and the techniques proposed to adapt MLT for FS with survival data cannot be used at high levels of censoring. The researchers' previous publications proposed a technique to deal with highly censored data, using existing FS techniques to reduce the dataset dimension. In this paper, however, a new FS technique is proposed and combined with feature transformation and the proposed uncensoring approaches to select a reduced set of features and produce a stable predictive model. An FS technique based on an artificial neural network (ANN) MLT is proposed to deal with highly censored Endovascular Aortic Repair (EVAR) survival data. EVAR datasets were collected from 2004 to 2010 from two vascular centers in order to produce a final stable model; they contain almost 91% censored patients. The proposed approach uses a wrapper FS method with an ANN to select a reduced subset of features that predict the risk of EVAR re-intervention after 5 years for patients from two different centers in the United Kingdom, allowing it to be potentially applied to cross-center prediction. The proposed model is compared with two popular FS techniques, the Akaike and Bayesian information criteria (AIC, BIC), used with Cox's model. The final model outperforms the other methods in distinguishing the high- and low-risk groups: it has a concordance index and estimated AUC better than Cox's model based on the AIC, BIC, Lasso, and SCAD approaches, with p-values lower than 0.05, meaning that patients in different risk groups can be separated significantly and those who would need re-intervention can be correctly predicted. The proposed approach will save the time and effort physicians spend collecting unnecessary variables. The final reduced model was able to predict the long-term risk of aortic complications after EVAR, and this predictive model can help clinicians decide patients' future observation plans.
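
    The wrapper FS loop described above can be sketched generically. This toy version substitutes a nearest-centroid scorer for the paper's ANN and runs on a synthetic dataset; all names, the holdout split, and the parameters are illustrative assumptions.

```python
import numpy as np

def accuracy(Xtr, ytr, Xte, yte):
    """Nearest-centroid stand-in for the ANN scorer inside a wrapper FS loop."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == yte).mean()

def forward_select(X, y, k):
    """Greedy wrapper selection: add the feature that most improves held-out accuracy."""
    n = len(y)
    tr, te = np.arange(n) % 2 == 0, np.arange(n) % 2 == 1   # simple holdout split
    chosen = []
    for _ in range(k):
        remaining = [j for j in range(X.shape[1]) if j not in chosen]
        scores = {j: accuracy(X[tr][:, chosen + [j]], y[tr],
                              X[te][:, chosen + [j]], y[te])
                  for j in remaining}
        chosen.append(max(scores, key=scores.get))
    return chosen

# Two informative features (columns 0 and 1) among eight noisy ones.
rng = np.random.default_rng(2)
y = rng.integers(0, 2, 300)
X = rng.standard_normal((300, 8))
X[:, 0] += 3.0 * y
X[:, 1] -= 3.0 * y
chosen = forward_select(X, y, k=2)
```

Because the candidate subsets are scored by the downstream classifier itself, wrapper methods select features with predictive value for that model, which is the property the paper exploits with its ANN.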

  19. Covariate selection with group lasso and doubly robust estimation of causal effects

    PubMed Central

    Koch, Brandon; Vock, David M.; Wolfson, Julian

    2017-01-01

    The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this paper, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. PMID:28636276

  20. Covariate selection with group lasso and doubly robust estimation of causal effects.

    PubMed

    Koch, Brandon; Vock, David M; Wolfson, Julian

    2018-03-01

    The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this article, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. © 2017, The International Biometric Society.
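
    The doubly robust estimator underlying this work can be illustrated with a plain augmented inverse-probability-weighted (AIPW) estimate. The sketch below uses simple logistic and linear working models on synthetic data; it does not implement GLiDeR's group-lasso selection, and all parameters are illustrative.

```python
import numpy as np

def aipw_ace(X, a, y):
    """Doubly robust (AIPW) estimate of the ACE of a binary treatment a on outcome y.

    Uses a logistic propensity model and per-arm linear outcome regressions;
    consistent if either working model is correctly specified."""
    Xb = np.column_stack([np.ones(len(a)), X])
    # Propensity model P(A=1|X): a few hundred gradient-ascent steps on the
    # logistic log-likelihood (enough for this illustration).
    w = np.zeros(Xb.shape[1])
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += 0.05 * Xb.T @ (a - p) / len(a)
    e = 1.0 / (1.0 + np.exp(-Xb @ w))
    # Outcome regressions fit separately within each treatment arm.
    m1 = Xb @ np.linalg.lstsq(Xb[a == 1], y[a == 1], rcond=None)[0]
    m0 = Xb @ np.linalg.lstsq(Xb[a == 0], y[a == 0], rcond=None)[0]
    return (np.mean(a * (y - m1) / e + m1)
            - np.mean((1 - a) * (y - m0) / (1 - e) + m0))

# Synthetic data: one confounder, true treatment effect 2.0.
rng = np.random.default_rng(3)
X = rng.standard_normal((2000, 1))
p = 1.0 / (1.0 + np.exp(-X[:, 0]))
a = (rng.random(2000) < p).astype(float)
y = 2.0 * a + X[:, 0] + 0.1 * rng.standard_normal(2000)
ace = aipw_ace(X, a, y)
```

Variable selection methods such as GLiDeR choose which covariate columns enter the two working models; the AIPW combination step itself is unchanged.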
